November 21, 2005
Don Clugston wrote:
> 
> I personally don't like the fact that integer literals default to 'int', unless you suffix them with L. Even if the number is too big to fit into an int! And floating-point constants default to 'double', not real.

Really?  I tested this a few days ago and it seemed like literals larger than int.max were treated as longs.  I'll mock up another test on my way to work.


Sean
November 21, 2005
In article <dlt2b9$6df$3@digitaldaemon.com>, Sean Kelly says...
>
>Don Clugston wrote:
>> 
>> I personally don't like the fact that integer literals default to 'int', unless you suffix them with L. Even if the number is too big to fit into an int! And floating-point constants default to 'double', not real.
>
>Really?  I tested this a few days ago and it seemed like literals larger than int.max were treated as longs.  I'll mock up another test on my way to work.

You are right, large integers are automatically treated as longs, but floating-point literals that are too large for a double are not automatically treated as real.

# import std.stdio;
#
# void main() {
#   writef("%s\n%s\n%s\n%s\n",
#          typeid(typeof(1231231231)),
#          typeid(typeof(12312312312312)),
#          typeid(typeof(1e100)),
#          //typeid(typeof(1e350)), // Error: number is not representable
#          typeid(typeof(1e350L))   // L suffix
#          );
# }

Prints:
int
long
double
real
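
For completeness, a small sketch along the same lines (just reusing the typeid/typeof trick above) showing how explicit suffixes pin a literal's type regardless of its magnitude; the comments note the expected output:

# import std.stdio;
#
# void main() {
#   writef("%s\n%s\n%s\n%s\n%s\n%s\n",
#          typeid(typeof(1)),      // int
#          typeid(typeof(1L)),     // long
#          typeid(typeof(1u)),     // uint
#          typeid(typeof(1.0f)),   // float
#          typeid(typeof(1.0)),    // double
#          typeid(typeof(1.0L))    // real
#          );
# }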

/Oskar


November 21, 2005
Oskar Linde wrote:
> 
> You are right, large integers are automatically treated as longs, but too large
> floating point literals are not automatically treated as real.

This seems reasonable though, since with floating-point numbers it's really a matter of precision more so than representability.
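
To make that concrete, here is a minimal sketch contrasting the two ranges; the exact value of real.max is platform-dependent (the figures in the comment assume the 80-bit x87 format):

# import std.stdio;
#
# void main() {
#   // double tops out near 1.8e308; an 80-bit real reaches about 1.2e4932.
#   writefln("double.max = %s", double.max);
#   writefln("real.max   = %s", real.max);
#
#   real big = 1e350L;                       // only representable at real range
#   writefln("1e350L as real     = %s", big);
#   writefln("narrowed to double = %s", cast(double) big); // overflows to inf
# }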


Sean
November 23, 2005
Derek Parnell wrote:
> On Mon, 21 Nov 2005 11:14:47 +0000, Bruno Medeiros wrote:
> 
>>Derek Parnell wrote:
>>>
>>>Do we have a toReal(), toFloat(), toInt(), toDouble(), toLong(), toULong(),
>>>.... ?
>>>
>>
>>No, we don't. But the case is different: between primitive numbers the casts are usually (if not always?) implicit, but most importantly, they are quite trivial. And by trivial I mean assembly-level trivial. String encoding conversions, on the other hand (as you surely are aware), are far from trivial (in terms of code, run time, and heap memory usage), and I don't think a cast should perform such non-trivial operations.
> 
> 
> Why? If documented, the user can be prepared.
> 
> And where is the tipping point? The point at which an operation becomes
> non-trivial? You mention 'assembly-level' by which I think you mean that a
> sub-routine is not called but the machine code is generated in-line for the
> operation. Would that be the trivial/non-trivial divide?
> 
A good question indeed. I was thinking of something equivalent to what Don Clugston said: if the code's run time depends on the object's size, that is, is not bounded by a constant, then it's beyond the acceptable point. Another disqualifier is allocating memory on the heap.
A string encoding conversion does both things.
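
As a rough illustration of that dividing line, a sketch contrasting the two kinds of conversion (toUTF32 from std.utf just stands in for any string encoding conversion):

# import std.stdio;
# import std.utf;
#
# void main() {
#   // Trivial: a numeric cast is a single conversion instruction,
#   // constant time, no allocation.
#   int i = 42;
#   real r = cast(real) i;
#   writefln("cast(real) %s = %s", i, r);
#
#   // Non-trivial: an encoding conversion walks every code unit and
#   // allocates a new array on the heap.
#   auto utf8  = "héllo";
#   auto utf32 = toUTF32(utf8);
#   writefln("%s code units -> %s code points", utf8.length, utf32.length);
# }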

> Is conversion from byte to real done in-line or via sub-routine call? I
> don't actually know, just asking.
> 
I didn't know the answer for sure before Don replied, but I already suspected that it was merely an assembly one-liner (i.e., a single instruction).

Note: I think the most complex cast we have right now is a class object downcast, which, although not bounded by a universal constant, is still bounded by a compile-time constant.
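
For illustration, a small sketch of that downcast behaviour (the class names here are made up; the point is only that the check runs against a hierarchy fixed at compile time and yields null on a mismatch):

# import std.stdio;
#
# class Base {}
# class Derived : Base {}
#
# void main() {
#   Base b = new Derived;
#   Base c = new Base;
#
#   // The downcast checks the object's dynamic type against a class
#   // hierarchy that is fixed at compile time; it yields null on mismatch.
#   writefln("b as Derived: %s", (cast(Derived) b) !is null); // true
#   writefln("c as Derived: %s", (cast(Derived) c) !is null); // false
# }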


-- 
Bruno Medeiros - CS/E student
"Certain aspects of D are a pathway to many abilities some consider to be... unnatural."