Why are float literals of type double instead of real?
September 19, 2005
It has an annoying consequence for function overloading: because a floating
point literal is a double, passing it to a real parameter always involves an
implicit conversion.
This is a problem whenever another implicit conversion is also possible,
as in the code below.
An implicit conversion from (say)  1.2345 to real ought to be preferred over an implicit conversion to creal.

One simple solution might be to perform lexical analysis exactly the way it is now, but assign the type 'real' during syntax analysis. I.e., all floating point literals are reals, but you must use an L suffix if the literal exceeds the minimum guaranteed size of a real (and hence might not be portable). Not ideal, because it would then be difficult in the few cases where you actually WANTED a double, not a real.
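
To sketch what that would look like (hypothetical behaviour under this
proposal, not what the compiler does today):

    real   r = 1.2345;                  // literal typed as real
    real   p = 1.23456789012345678901L; // L suffix: needs more precision than a double guarantees
    double d = cast(double) 1.2345;     // the awkward case: an explicit cast to get a double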

To quote the function overloading docs:
------------------
In C++, there are many complex levels of function overloading, with some defined as "better" matches than others. If the code designer takes advantage of the more subtle behaviors of overload function selection, the code can become difficult to maintain. Not only will it take a C++ expert to understand why one function is selected over another, but different C++ compilers can implement this tricky feature differently, producing subtly disastrous results.

In D, function overloading is simple. It matches exactly, it matches with implicit conversions, or it does not match. If there is more than one match, it is an error.
--------------

void func(creal z)
{
   // complex version
}

void func(real x)
{
   // real version
}

void test()
{
   func(3.2L); // OK: real literal, exact match for func(real)
   func(4.6); // error, ambiguous: the double literal implicitly converts to both real and creal
}

I would like to create complex forms of the standard math functions, but with the current situation, it means sin(2.2) would no longer compile.
Unless a sin(double) is also created, which will only ever be used for literals. And that's just silly. Especially since on some platforms, 'real' and 'double' will be the same.
September 19, 2005
"Don Clugston" <dac@nospam.com.au> wrote in message news:dgm1sc$1ah8$1@digitaldaemon.com...
> I would like to create complex forms of the standard math functions, but with the current situation, it means sin(2.2) would no longer compile. Unless a sin(double) is also created, which will only ever be used for literals. And that's just silly. Especially since on some platforms, 'real' and 'double' will be the same.

You can do:

    real sin(double x) { return sin(cast(real)x); }

which isn't too onerous.
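
For instance (stand-in bodies, just to show how the overloads resolve):

    creal sin(creal z) { return z; }                   // stand-in for the complex version
    real  sin(real x)  { return x; }                   // stand-in for the real version
    real  sin(double x) { return sin(cast(real) x); }  // wrapper for double literals

    void test()
    {
        sin(2.2L); // exact match: sin(real)
        sin(2.2);  // exact match: sin(double); no ambiguity with sin(creal)
    }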


September 21, 2005
Walter Bright wrote:
> "Don Clugston" <dac@nospam.com.au> wrote in message
> news:dgm1sc$1ah8$1@digitaldaemon.com...
> 
>>I would like to create complex forms of the standard math functions, but
>>with the current situation, it means sin(2.2) would no longer compile.
>>Unless a sin(double) is also created, which will only ever be used for
>>literals. And that's just silly. Especially since on some platforms,
>>'real' and 'double' will be the same.
> 
> 
> You can do:
> 
>     real sin(double x) { return sin(cast(real)x); }
> 
> which isn't too onerous.

It gets worse.
float f;
sin(f);
is also ambiguous. It could be float->real or float->creal.
If you also provide a sin(ireal x), you then need a similar wrapper
sin(idouble x) so that you can write sin(2i).

So you have to write
     real  sin(double x)  { return sin(cast(real)x); }
     ireal sin(idouble x) { return sin(cast(ireal)x); }
     real  sin(float x)   { return sin(cast(real)x); }
     ireal sin(ifloat x)  { return sin(cast(ireal)x); }
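
With all of those in place, the calls resolve (sketch of the call sites):

     float f;
     sin(f);    // exact match: sin(float)
     sin(2.2);  // exact match: sin(double)
     sin(2i);   // exact match: sin(idouble)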

It really feels like a workaround, and doesn't add any value.


Having thought about it a bit more, I think that the actual issue is implicit conversions from real (or ireal) to creal.
Given the fact that imaginary numbers are first-class types in D, I wonder whether those implicit conversions should exist at all. What would happen if they were disallowed?

(a) Arithmetic operations should be no problem.
E.g. given creal z, real x, ireal y:
z += x
is already different from
z += cast(creal)(x);
because more optimisation is possible (the former is just
z.re += x; the latter is z.re += x, z.im += 0.0).
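
As a compilable sketch of the same point (the z.re / z.im forms are
only what the compiler can reduce these to, not code you would write):

    creal z = 1.0 + 2.0i;
    real  x = 3.0;

    z += x;              // can compile down to just z.re += x
    z += cast(creal) x;  // conceptually z.re += x; z.im += 0.0;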

(b) Function overloading would change slightly.
Given func(creal c):

func(z) would work as before
func(x) would now fail. You would need to write
func(x + 0.0i)
or func(x - 0.0i)
or else define
void func(real a) { return func(a + 0.0i); }
and similarly
void func(ireal b) { return func(0.0 + b); }

For library functions like sin(), these overloads would exist
already, because they provide optimisation opportunities.
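
Putting (b) together, under the proposed rules (no implicit conversion
from real or ireal to creal) the overload set might look like this
(a sketch, not working code under today's rules):

void func(creal z) { /* complex version */ }
void func(real x)  { return func(x + 0.0i); }  // explicit forwarder instead of an implicit conversion
void func(ireal y) { return func(0.0 + y); }

void test()
{
   creal z;
   func(z);    // works as before
   func(4.6);  // double -> real only, so func(real); ambiguous under today's rules
   func(2.2i); // idouble -> ireal, so func(ireal); also ambiguous today
}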

I think there's a good case against these implicit conversions:
* Since zero can have a sign, there are two possible ways to
implicitly convert from real to creal (see the snippet below).
* The implicit conversions don't seem to be documented, so removing
them ought not to break any existing code.
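
To spell out the signed-zero point (a sketch; it relies on x - 0.0i
producing a negative zero imaginary part, as the two spellings in (b)
suggest):

    real x = 1.0;
    creal a = x + 0.0i;  // imaginary part is +0.0
    creal b = x - 0.0i;  // imaginary part is -0.0
    // a.im and b.im compare equal, but 1.0/a.im is +infinity while
    // 1.0/b.im is -infinity, so the two conversions are observably different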

Or is there something I've missed?