February 09, 2019
On Saturday, 9 February 2019 at 03:28:24 UTC, Adam D. Ruppe wrote:
> On Saturday, 9 February 2019 at 03:21:51 UTC, Murilo wrote:
>> Now, to change the subject a little: all floating-point values in D are printed differently than they are in C, and C's output comes out a little more precise than D's. Is this really supposed to happen?
>
> Like I said in my first message, the D default rounds off more than the C default. This usually results in more readable stuff - the extra noise at the end is not that helpful in most cases.
>
> But you can change this with the format specifiers (use `writefln` instead of `writeln` and give a precision argument) or, of course, you can use the same C printf function from D.

Thanks, but here is the situation: I use printf("%.20f", 0.1); in both C and D. C prints 0.10000000000000000555 whereas D prints 0.10000000000000001000. So I understand your point that D rounds off more, but doesn't that cause a loss of precision? Isn't that bad if you are working with math or physics, for example?
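
Here is a minimal sketch of what I am running on the D side (the C side is the same printf call in a plain .c file):

```
import std.stdio : writefln;
import core.stdc.stdio : printf;

void main()
{
    // Both of these print 0.10000000000000001000 for me,
    // while the same printf call in a C program prints
    // 0.10000000000000000555.
    writefln("%.20f", 0.1);
    printf("%.20f\n", 0.1);
}
```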
February 09, 2019
On Saturday, 9 February 2019 at 03:28:24 UTC, Adam D. Ruppe wrote:
> On Saturday, 9 February 2019 at 03:21:51 UTC, Murilo wrote:
>> Now, to change the subject a little: all floating-point values in D are printed differently than they are in C, and C's output comes out a little more precise than D's. Is this really supposed to happen?
>
> Like I said in my first message, the D default rounds off more than the C default. This usually results in more readable stuff - the extra noise at the end is not that helpful in most cases.
>
> But you can change this with the format specifiers (use `writefln` instead of `writeln` and give a precision argument) or, of course, you can use the same C printf function from D.

Thanks, but which precision specifier should I use? I already tried "%.20f" but it still produces a result different from C's. Is there a way to make the output identical to C?
February 09, 2019
On Saturday, 9 February 2019 at 03:33:13 UTC, Murilo wrote:
> Thanks, but here is the situation: I use printf("%.20f", 0.1); in both C and D. C prints 0.10000000000000000555 whereas D prints 0.10000000000000001000. So I understand your point that D rounds off more, but doesn't that cause a loss of precision? Isn't that bad if you are working with math or physics, for example?

0.1 in binary floating point is never exact: as a 64-bit double it is actually 0.1000000000000000055511151231257827021181583404541015625 behind the scenes.

So why is it important that it displays as:

0.10000000000000000555

versus

0.10000000000000001000

?

Both printed strings come from exactly the same bits in memory; C just prints more of the (correctly rounded) decimal expansion, while D stops after about 17 significant digits and pads the rest with zeros. Since there is no exact way of representing 0.1 in binary floating point, the computer has no way of knowing you really mean "0.1 decimal". If that kind of accuracy is important to you, you'll probably have to look into software-only number representations with arbitrary decimal precision (I've not explored them in D, but other languages have things like "BigDecimal" data types).
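
If you want to see that for yourself, you can inspect the bits that actually get stored. A small sketch (the values in the comments are what I'd expect for a 64-bit IEEE double):

```
import std.stdio : writefln;

void main()
{
    double d = 0.1;
    // The raw 64 bits: 0.1 is stored as the nearest representable double.
    writefln("%016x", *cast(ulong*) &d);  // 3fb999999999999a
    // The same value written as an exact hexadecimal float.
    writefln("%a", d);                    // 0x1.999999999999ap-4
}
```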
February 09, 2019
On Saturday, 9 February 2019 at 04:30:22 UTC, DanielG wrote:
> On Saturday, 9 February 2019 at 03:33:13 UTC, Murilo wrote:
>> Thanks, but here is the situation: I use printf("%.20f", 0.1); in both C and D. C prints 0.10000000000000000555 whereas D prints 0.10000000000000001000. So I understand your point that D rounds off more, but doesn't that cause a loss of precision? Isn't that bad if you are working with math or physics, for example?
>
> 0.1 in binary floating point is never exact: as a 64-bit double it is actually 0.1000000000000000055511151231257827021181583404541015625 behind the scenes.
>
> So why is it important that it displays as:
>
> 0.10000000000000000555
>
> versus
>
> 0.10000000000000001000
>
> ?
>
> Both printed strings come from exactly the same bits in memory; C just prints more of the (correctly rounded) decimal expansion, while D stops after about 17 significant digits and pads the rest with zeros. Since there is no exact way of representing 0.1 in binary floating point, the computer has no way of knowing you really mean "0.1 decimal". If that kind of accuracy is important to you, you'll probably have to look into software-only number representations with arbitrary decimal precision (I've not explored them in D, but other languages have things like "BigDecimal" data types).

Thank you very much for clearing this up. But how do I make D behave just like C? Is there a way to do that?
February 09, 2019
On Saturday, 9 February 2019 at 04:32:44 UTC, Murilo wrote:
> Thank you very much for clearing this up. But how do I make D behave just like C? Is there a way to do that?

Off the top of my head, you'd have to link against libc so you could use printf() directly.

But may I ask why that is so important to you?
February 09, 2019
On Saturday, 9 February 2019 at 04:36:26 UTC, DanielG wrote:
> On Saturday, 9 February 2019 at 04:32:44 UTC, Murilo wrote:
>> Thank you very much for clearing this up. But how do I make D behave just like C? Is there a way to do that?
>
> Off the top of my head, you'd have to link against libc so you could use printf() directly.
>
> But may I ask why that is so important to you?

That is important because there are websites with programming problems where you submit your code and it is checked automatically. With D the checker always says "Wrong Answer", because the program outputs a different string than what is expected.
February 08, 2019
On Sat, Feb 09, 2019 at 03:52:38AM +0000, Murilo via Digitalmars-d-learn wrote:
> On Saturday, 9 February 2019 at 03:28:24 UTC, Adam D. Ruppe wrote:
[...]
> > But you can change this with the format specifiers (use `writefln` instead of `writeln` and give a precision argument) or, of course, you can use the same C printf function from D.
> 
> Thanks, but which precision specifier should I use? I already tried "%.20f" but it still produces a result different from C's. Is there a way to make the output identical to C?

I say again, you're asking for far more digits than are actually there. The extra digits are only an illusion of accuracy; they are essentially garbage values that have no real meaning. It really does not make any sense to print more digits than are actually there.
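
You can even ask the language how many decimal digits a double really carries. A quick sketch (double.dig is the number of decimal digits of precision a double provides):

```
import std.stdio : writefln;

void main()
{
    // Decimal digits of precision a double reliably holds.
    writefln("a double holds about %s reliable decimal digits", double.dig); // 15
    // Asking for 20 digits after the point therefore prints noise
    // beyond roughly the 17th significant digit.
    writefln("%.20f", 0.1);
}
```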

On the other hand, if your purpose is to export the double in a textual representation that guarantees reproduction of exactly the same bits when imported later, you could use the %a format which writes out the mantissa in exact hexadecimal form. It can then be parsed again later to reproduce the same bits as was in the original double.
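
Something along these lines, for example (a rough sketch; I'm using the C strtod to read the hex form back in, since it understands the same syntax):

```
import std.stdio : writefln;
import std.format : format;
import std.string : toStringz;
import core.stdc.stdlib : strtod;

void main()
{
    double x = 0.1;

    // Lossless textual form of the bits: hexadecimal mantissa and exponent.
    string s = format("%a", x);
    writefln("%s", s);               // e.g. 0x1.999999999999ap-4

    // Parsing it back reproduces exactly the same double.
    double y = strtod(s.toStringz, null);
    assert(y == x);
}
```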

Or if your goal is simply to replicate C behaviour, there's also the option of calling the C fprintf directly. Just import core.stdc.stdio or declare the prototype yourself in an extern(C) declaration.


T

-- 
If it breaks, you get to keep both pieces. -- Software disclaimer notice
February 08, 2019
On Sat, Feb 09, 2019 at 04:36:26AM +0000, DanielG via Digitalmars-d-learn wrote:
> On Saturday, 9 February 2019 at 04:32:44 UTC, Murilo wrote:
> > Thank you very much for clearing this up. But how do I make D behave just like C? Is there a way to do that?
> 
> Off the top of my head, you'd have to link against libc so you could
> use printf() directly.

There's no need to do that, D programs already link to libc by default. All you need is to declare the right extern(C) prototype (or just import core.stdc.*) and you can call the function just like in C.  With the caveat, of course, that D strings are not the same as C's char*, so you have to use .ptr for string literals and toStringz for dynamic strings.
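
A minimal sketch of what that looks like:

```
// No special linker flags needed; this works in an ordinary D program.
import core.stdc.stdio : printf;
import std.string : toStringz;

void main()
{
    // String literal: .ptr gives the underlying zero-terminated char*.
    printf("%.20f\n".ptr, 0.1);

    // Format string held in a D string: toStringz ensures the trailing '\0'.
    string fmt = "%.20f\n";
    printf(fmt.toStringz, 0.1);
}
```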


T

-- 
People tell me I'm stubborn, but I refuse to accept it!
February 10, 2019
On Saturday, 9 February 2019 at 05:46:22 UTC, H. S. Teoh wrote:
> On Sat, Feb 09, 2019 at 03:52:38AM +0000, Murilo via Digitalmars-d-learn wrote:
>> On Saturday, 9 February 2019 at 03:28:24 UTC, Adam D. Ruppe wrote:
> [...]
>> > But you can change this with the format specifiers (use `writefln` instead of `writeln` and give a precision argument) or, of course, you can use the same C printf function from D.
>> 
>> Thanks, but which precision specifier should I use? I already tried "%.20f" but it still produces a result different from C's. Is there a way to make the output identical to C?
>
> I say again, you're asking for far more digits than are actually there. The extra digits are only an illusion of accuracy; they are essentially garbage values that have no real meaning. It really does not make any sense to print more digits than are actually there.
>
> On the other hand, if your purpose is to export the double in a textual representation that guarantees reproduction of exactly the same bits when imported later, you could use the %a format which writes out the mantissa in exact hexadecimal form. It can then be parsed again later to reproduce the same bits as was in the original double.
>
> Or if your goal is simply to replicate C behaviour, there's also the option of calling the C fprintf directly. Just import core.stdc.stdio or declare the prototype yourself in an extern(C) declaration.
>
>
> T

Thanks, but even using core.stdc.stdio.fprintf() it still shows a different result on the screen. It seems this is a feature of D I will have to get used to: I can't always get the same number as in C. In the end it doesn't make a difference, though, because as you explained the final digits are just garbage anyway.
February 10, 2019
On Sunday, 10 February 2019 at 20:25:02 UTC, Murilo wrote:
> It seems this is a feature of D I will have to get used to: I can't always get the same number as in C

What compilers and settings do you use? What you're actually comparing here are different implementations of the C runtime library.

```
#include<stdio.h>

int main() {
	double a = 99999912343000007654329925.7865;
	printf("%f\n", a);
	return 0;
}
```

I compiled this with different C compilers on Windows 10 and found:

DMC:   99999912342999999472000000.000000
GCC:   99999912342999999000000000.000000
CLANG: 99999912342999999470108672.000000

As for D:
```
import core.stdc.stdio: printf;

int main() {
	double a = 99999912343000007654329925.7865;
	printf("%f\n", a);
	return 0;
}
```

DMD: 99999912342999999472000000.000000
LDC: 99999912342999999470108672.000000

DMC and DMD both use the Digital Mars runtime by default. I think CLANG and LDC use the static libcmt by default, while GCC uses the dynamic msvcrt.dll (not sure about the exact one, but evidently it's different). So it really doesn't have anything to do with D vs. C, but rather with which C runtime you use.