February 10, 2019
On Sunday, 10 February 2019 at 21:27:43 UTC, Dennis wrote:
> On Sunday, 10 February 2019 at 20:25:02 UTC, Murilo wrote:
>> It seems this is a feature of D I will have to get used to and accept the fact I can't always get the same number as in C
>
> What compilers and settings do you use? What you're actually comparing here are different implementations of the C runtime library.
>
> ```
> #include<stdio.h>
>
> int main() {
> 	double a = 99999912343000007654329925.7865;
> 	printf("%f\n", a);
> 	return 0;
> }
> ```
>
> I compiled this with different C compilers on Windows 10 and found:
>
> DMC:   99999912342999999472000000.000000
> GCC:   99999912342999999000000000.000000
> CLANG: 99999912342999999470108672.000000
>
> As for D:
> ```
> import core.stdc.stdio: printf;
>
> int main() {
> 	double a = 99999912343000007654329925.7865;
> 	printf("%f\n", a);
> 	return 0;
> }
> ```
>
> DMD: 99999912342999999472000000.000000
> LDC: 99999912342999999470108672.000000
>
> DMC and DMD both use the Digital Mars runtime by default. I think CLANG and LDC use the static libcmt by default, while GCC uses the dynamic msvcrt.dll (not sure about the exact one, but it's evidently different). So it really doesn't have anything to do with D vs. C, but rather with which C runtime you use.

Ahhh, alright, I wasn't aware that there are different runtimes. Thanks for clearing this up.
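
For anyone who wants to double-check that the stored value is identical and only the textual rendering differs, printing the hex-float form sidesteps the C runtime's decimal formatting entirely. A minimal sketch in D, assuming Phobos' format strings accept the %a hex-float specifier:

```
import std.stdio : writefln;

void main()
{
    double a = 99999912343000007654329925.7865;

    // Hex-float form shows the exact bits stored in the double,
    // so it comes out the same no matter which C runtime is linked in.
    writefln("%a", a);

    // Decimal form, rendered by Phobos' own formatting rather than C's printf.
    writefln("%f", a);
}
```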
February 12, 2019
On Saturday, 9 February 2019 at 03:03:41 UTC, H. S. Teoh wrote:

> If you want to hold more than 15 digits, you'll either have to use `real`, which depending on your CPU will be 80-bit (x86) or 128-bit (a few newer, less common CPUs), or an arbitrary-precision library that simulates larger precisions in software, like MPFR (which builds on libgmp). Note, however, that even an 80-bit real realistically only holds up to about 18 digits, which isn't very much more than a double, and still far too small for your number above. You need at least a 128-bit quadruple-precision type (which can represent up to about 34 digits) in order to represent your number accurately.
>
>
> T

Thank you both for your lessons, Adam D. Ruppe and H. S. Teoh.
Has anyone expressed a wish or an intention to implement the hypothetical built-in 128-bit types into the language via the reserved keywords "cent" and "ucent"?
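
As a quick check on the digit limits H. S. Teoh mentions, each floating-point type in D exposes a .dig property giving the number of decimal digits it can reliably hold. A minimal sketch (the figure for real depends on whether it is 80-bit on your target):

```
import std.stdio : writefln, writeln;

void main()
{
    // Decimal digits of precision for each type:
    writeln("double: ", double.dig, " digits"); // typically 15
    writeln("real:   ", real.dig, " digits");   // 18 with an 80-bit real on x86; may differ elsewhere

    // The number from this thread has 26+ significant digits,
    // so even a real cannot store it exactly:
    real r = 99999912343000007654329925.7865L;
    writefln("%.4f", r);
}
```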
February 12, 2019
On Tuesday, 12 February 2019 at 09:20:27 UTC, Aurélien Plazzotta wrote:
> Thank you both for your lessons, Adam D. Ruppe and H. S. Teoh.
> Has anyone expressed a wish or an intention to implement the hypothetical built-in 128-bit types into the language via the reserved keywords "cent" and "ucent"?

cent and ucent would be integers with values ranging from -170141183460469231731687303715884105728 to 170141183460469231731687303715884105727 and from 0 to 340282366920938463463374607431768211455, respectively. Implementing them would not mean that any kind of 128-bit float would also become available. For higher-precision floating-point numbers there are at least two packages on dub:

https://code.dlang.org/packages/stdxdecimal
https://code.dlang.org/packages/decimal

These are arbitrary-precision, and probably quite a bit slower than any built-in floats.
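
For reference, those bounds are just -2^127, 2^127 - 1 and 2^128 - 1, and until cent/ucent exist you can already work with exact integers of that size via std.bigint. A minimal sketch:

```
import std.bigint : BigInt;
import std.stdio : writeln;

void main()
{
    // Range a signed 128-bit cent would cover:
    writeln(-(BigInt(1) << 127));    // -170141183460469231731687303715884105728
    writeln((BigInt(1) << 127) - 1); //  170141183460469231731687303715884105727

    // Upper bound of an unsigned 128-bit ucent:
    writeln((BigInt(1) << 128) - 1); //  340282366920938463463374607431768211455
}
```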
--
  Simen
February 12, 2019
On Saturday, 9 February 2019 at 02:54:18 UTC, Adam D. Ruppe wrote:
>
> (The `real` thing in D was a massive mistake. It is slow and adds nothing but confusion.)

We've had occasional problems with `real` being 80-bit on the x87 FPU, giving more precision than asked for and effectively hiding 32-bit float precision problems until the code is run on SSE.

Not a big deal, but I would argue that giving more precision than asked for is a form of Postel's law, and a bad idea.
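
A small illustration of how the widening hides a float precision problem; this is only a sketch, since whether the "narrow" line also gets widened depends on exactly the compiler behaviour being discussed:

```
import std.stdio : writefln;

void main()
{
    float x = 16_777_216.0f; // 2^24, the first integer a 32-bit float cannot advance past by 1

    // Rounded to 32 bits after each step (what SSE single-precision does):
    float t = x + 1.0f;      // 2^24 + 1 is not representable as a float, rounds back to 2^24
    float narrow = t - 1.0f; // so this yields 16777215

    // Carried at 80-bit precision before the final rounding (what x87 code may do):
    float wide = cast(float)((cast(real) x + 1.0L) - 1.0L); // the +1 survives: 16777216

    // If both lines print the same value, the intermediates were silently widened,
    // which is the "more precision than asked" effect described above.
    writefln("narrow: %.1f", narrow);
    writefln("wide:   %.1f", wide);
}
```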