Thread overview: imaginary and complex types

  Vegeta           May 22, 2005
  G.Vidal          May 22, 2005
  Vegeta           May 22, 2005
  Burton Radons    May 22, 2005
  Ben Hinkle       May 22, 2005
  Burton Radons    May 22, 2005
  Ben Hinkle       May 23, 2005
  Dave             May 23, 2005
  Ben Hinkle       May 22, 2005
  Lionello Lunesu  May 23, 2005
May 22, 2005
I'm new to the D language and I like what I've seen so far.
But there is one thing that is wrong, and that is the names of the types
ireal and creal.
Those names look like they mean "imaginary real" and "complex real".
I suppose these types exist so that D is better suited to numerical
programming, but anyone who knows a little mathematics will laugh at a
language whose types have such absurd names. An "imaginary real" makes
about as much sense as an "int real" would.

Those types should be called imaginary and complex.

Probably this discussion has come up before, but just in case.


VS
May 22, 2005
This is absolutely true.

I guess "real" is there to remind us of the size of the variable, maybe?

Who really needs "complex integers" anyway?
I agree those types should be renamed.




May 22, 2005
G.Vidal wrote:

> This is absolutely true.
> 
> I guess "real" is there to remind us of the size of the variable, maybe?
> 

The real type has that name because it is the largest-precision
floating-point type. I understand the criterion is that this type is not
tied to a particular size (e.g., single or double precision); the name
"real" was chosen because it describes the kind of numbers one would use
with this data type. If real does not refer to a specific size, why call
the others creal and ireal? Their names should follow the same criterion
used to name real, that is, the kind of numbers one would use with the
data types: imaginary and complex.

Believe me, this alone could be reason enough for me not to use D, especially for numerical applications. Who is going to believe that a language that believes in the existence of "imaginary reals" or "complex reals" is appropriate for numerical computing?
May 22, 2005
"Vegeta" <lord.vegeta@ica.luz.ve> wrote in message news:d6qlno$2ct8$2@digitaldaemon.com...
> I'm new to the D language and I like what I've seen so far.
> But there is one thing that is wrong, and that is the names of the types
> ireal and creal.
> Those names look like they mean "imaginary real" and "complex real".
> I suppose these types exist so that D is better suited to numerical
> programming, but anyone who knows a little mathematics will laugh at a
> language whose types have such absurd names. An "imaginary real" makes
> about as much sense as an "int real" would.
>
> Those types should be called imaginary and complex.
>
> Probably this discussion has come up before, but just in case.

It has come up: http://www.digitalmars.com/d/archives/28044.html,
http://www.digitalmars.com/d/archives/9770.html
and probably more. Search around in the D archives linked off the main
navigation bar.

I'm not aware of anyone who really likes the current names. It seems Walter chose them because they are short. I haven't read enough of the archives to see what happened to 'imaginary' and 'complex', since those look like the names Walter originally chose.


May 22, 2005
Vegeta wrote:

> G.Vidal wrote:
> 
> 
>>This is absolutely true.
>>
>>I guess "real" is there to remind us of the size of the variable, maybe?
>>
> 
> 
> The real type has that name because it is the largest-precision
> floating-point type. I understand the criterion is that this type is not
> tied to a particular size (e.g., single or double precision); the name
> "real" was chosen because it describes the kind of numbers one would use
> with this data type. If real does not refer to a specific size, why call
> the others creal and ireal? Their names should follow the same criterion
> used to name real, that is, the kind of numbers one would use with the
> data types: imaginary and complex.

Have you noticed that there are six imaginary and complex types, parallel to the floating-point types?  cfloat and ifloat are to float, cdouble and idouble are to double, creal and ireal are to real.  There's no deeper significance, it's purely about the precision used to represent the types.

"real" itself has no meaning beside that specifically defined for it (the greatest precision floating point type representable by the hardware, or double precision if no hardware support exists).  It used to be "extended".  This was decided to be too long and somewhat incorrect and was changed.  That's all there is to it.  It has nothing to do with either the mathematical "real" or the English word.

If you don't want to refer to complex and imaginary types like that, you can alias them:

   alias creal complex;
   alias ireal imaginary;
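
For what it's worth, a quick sketch of how that reads in use (untested, and assuming the usual imaginary-literal suffix and the .re/.im properties):

   import std.stdio;

   alias creal complex;    // largest-precision complex
   alias ireal imaginary;  // largest-precision imaginary

   void main()
   {
       imaginary y = 3.0i;        // the i suffix gives an imaginary literal
       complex   z = 1.5 + 2.0i;  // real + imaginary arithmetic yields a complex
       z = z + y;
       writefln("re = %s, im = %s", z.re, z.im);
   }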
May 22, 2005
"Burton Radons" <burton-radons@smocky.com> wrote in message news:d6qrj3$2i1a$1@digitaldaemon.com...
> Vegeta wrote:
>
>> G.Vidal wrote:
>>
>>
>>>This is absolutely true.
>>>
>>>I guess "real" is there to remind us of the size of the variable, maybe?
>>>
>>
>>
>> The real type has that name because it is the largest-precision
>> floating-point type. I understand the criterion is that this type is not
>> tied to a particular size (e.g., single or double precision); the name
>> "real" was chosen because it describes the kind of numbers one would use
>> with this data type. If real does not refer to a specific size, why call
>> the others creal and ireal? Their names should follow the same criterion
>> used to name real, that is, the kind of numbers one would use with the
>> data types: imaginary and complex.
>
> Have you noticed that there are six imaginary and complex types, parallel to the floating-point types?  cfloat and ifloat are to float, cdouble and idouble are to double, creal and ireal are to real.  There's no deeper significance, it's purely about the precision used to represent the types.
>
> "real" itself has no meaning beside that specifically defined for it (the greatest precision floating point type representable by the hardware, or double precision if no hardware support exists).  It used to be "extended".  This was decided to be too long and somewhat incorrect and was changed.  That's all there is to it.  It has nothing to do with either the mathematical "real" or the English word.

It's too bad "extended" was tossed. I used that name for years as the 80-bit float in Apple's SANE (standard Apple numeric environment) and it never bothered me to type all those characters. Even the name "real" makes me cringe since I would expect "real" to be able to represent irrational numbers like sqrt(2) - eg a real number. Maybe we can call it "rat" for a "rational approximation to a real number". Then we'd have "rat" "irat" and "crat". It would give new meaning to the phrase "what is this crat?"

> If you don't want to refer to complex and imaginary types like that, you can alias them:
>
>    alias creal complex;
>    alias ireal imaginary;


May 22, 2005
Ben Hinkle wrote:
> It's too bad "extended" was tossed. I used that name for years as the 80-bit float in Apple's SANE (standard Apple numeric environment) and it never bothered me to type all those characters. Even the name "real" makes me cringe since I would expect "real" to be able to represent irrational numbers like sqrt(2) - eg a real number. Maybe we can call it "rat" for a "rational approximation to a real number". Then we'd have "rat" "irat" and "crat". It would give new meaning to the phrase "what is this crat?"

I agree, I liked "extended".  I'd even go so far as to prefer "extended_complex"; brevity should never be more important than clarity.  I objected to the change at the time.  But it's not like the current state was an irrational move made out of ignorance.  Perhaps it would have been better if it had been changed to something like "pont" or "curp"; no way to confuse it with something else then.
May 23, 2005
"Burton Radons" <burton-radons@smocky.com> wrote in message news:d6qti3$2jda$1@digitaldaemon.com...
> Ben Hinkle wrote:
>> It's too bad "extended" was tossed. I used that name for years as the 80-bit float in Apple's SANE (standard Apple numeric environment) and it never bothered me to type all those characters. Even the name "real" makes me cringe since I would expect "real" to be able to represent irrational numbers like sqrt(2) - eg a real number. Maybe we can call it "rat" for a "rational approximation to a real number". Then we'd have "rat" "irat" and "crat". It would give new meaning to the phrase "what is this crat?"
>
> I agree, I liked "extended".  I'd even go so far as to prefer "extended_complex"; brevity should never be more important than clarity. I objected to the change at the time.  But it's not like the current state was an irrational move made out of ignorance.  Perhaps it would have been better if it had been changed to something like "pont" or "curp"; no way to confuse it with something else then.

I could picture either of
 float, double, real, ifloat, idouble, imaginary, cfloat, cdouble, complex
or
 float, double, extended, ifloat, idouble, iextended, cfloat, cdouble, cextended

The first is consistent with the mathematical naming of "real", "imaginary" and "complex", while the second is consistent with prefixing the precision. Using the word "real", which is a mathematical word, to indicate precision doesn't seem like a workable solution.
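
Either set could be tried out today with a handful of aliases, something like this (rough, untested sketch):

   // option 1: keep "real", give the others their mathematical names
   alias ireal imaginary;
   alias creal complex;

   // option 2: bring back "extended" and prefix it like float/double
   alias real  extended;
   alias ireal iextended;
   alias creal cextended;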


May 23, 2005
In article <d6ssnk$1ddn$1@digitaldaemon.com>, Ben Hinkle says...
>
>
>"Burton Radons" <burton-radons@smocky.com> wrote in message news:d6qti3$2jda$1@digitaldaemon.com...
>> Ben Hinkle wrote:
>>> It's too bad "extended" was tossed. I used that name for years as the 80-bit float in Apple's SANE (standard Apple numeric environment) and it never bothered me to type all those characters. Even the name "real" makes me cringe since I would expect "real" to be able to represent irrational numbers like sqrt(2) - eg a real number. Maybe we can call it "rat" for a "rational approximation to a real number". Then we'd have "rat" "irat" and "crat". It would give new meaning to the phrase "what is this crat?"
>>
>> I agree, I liked "extended".  I'd even go so far as to prefer "extended_complex"; brevity should never be more important than clarity. I objected to the change at the time.  But it's not like the current state was an irrational move made out of ignorance.  Perhaps it would have been better if it had been changed to something like "pont" or "curp"; no way to confuse it with something else then.
>
>I could picture either of
> float, double, real, ifloat, idouble, imaginary, cfloat, cdouble, complex
>or
> float, double, extended, ifloat, idouble, iextended, cfloat, cdouble, cextended
>
>The first is consistent with the mathematical naming of "real", "imaginary" and "complex", while the second is consistent with prefixing the precision. Using the word "real", which is a mathematical word, to indicate precision doesn't seem like a workable solution.
>

I strongly agree. Several people have now suggested that scientific computing types would dismiss D out of hand if things stay as-is (a few actually used the term "laugh" independently of one another).

Another option:

float, double, intrinsic, ifloat, idouble, imaginary, cfloat, cdouble, complex

Intrinsic is perfect because it describes exactly what the type is, and on many CPUs double and intrinsic will actually be the same. There may even be some PDAs and the like where intrinsic is smaller than a double. Likewise, imaginary and complex are great because the lack of a prefix implies intrinsic.
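
If you're curious what "intrinsic" would actually buy you on a given machine, a quick check along these lines (untested sketch) shows the precision of each type:

   import std.stdio;

   void main()
   {
       // .dig is the number of decimal digits of precision;
       // if real.dig == double.dig, the widest type is effectively a double here
       writefln("float:  %d digits", float.dig);
       writefln("double: %d digits", double.dig);
       writefln("real:   %d digits", real.dig);
   }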

- Dave


May 23, 2005
> Those types should be called imaginary and complex.

I agree.
The name "real" is indeed a little confusing.

I'm toying with my own type names lately, using just 'int' and 'float', and deriving all the others from these two names.

Most of the time, you don't care how many bits something has, and you'll just want the platform's standard size. These should be called "int" and "float".

If you do care about size, you should specify it, e.g. when declaring structs meant for network/files: int16, uint32, int128, float32, etc.

For floating point types there's the special case where you'd want the highest available precision. I have no idea what a good name for this type would be ("long float" comes to mind, since that's what "long" was meant to do in the first place).

The type names "short" and "long" are inherently confusing and should be deprecated, or aliased to int16, int32 respectively for portability reasons.

These are the guidelines I wish to follow for the rewrite of our company's game engine. Just some thoughts.
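
For illustration, a small module along these lines would cover most of that (rough, untested sketch; the sized names are the ones proposed above, not existing D types, and I've left int128 out since D has no 128-bit integer to alias it to):

   alias short   int16;
   alias ushort  uint16;
   alias int     int32;
   alias uint    uint32;
   alias long    int64;
   alias ulong   uint64;
   alias float   float32;
   alias double  float64;
   alias real    floatmax;  // placeholder name for "highest available precision"

"floatmax" is only a stand-in; as said above, a good name for that last one is still an open question.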

L.

PS. "char", "bit" and "byte" can stay, because of their special meanings (characters, boolean, unit)