November 22, 2005
Re: Var Types
In article <dlti74$k8i$1@digitaldaemon.com>, MK says...
>
>In article <dlt5jv$aj6$1@digitaldaemon.com>, Tomás Rossi says...
>>>>Maybe if D bit-length specifications were relative (don't know the downsides of
>>>>this approach but I'm all ears).
>>>>For example:
>>>>____________________________________________________________________________,
>>>> TYPE    | SIZE           | LEN IN 32-BIT MACHINES | LEN IN 64-BIT MACHINES |  
>>>>         | (relative to 1 | (in bits)              | (in bits)              |
>>>>         | CPU word)      |                        |                        |
>>>>         | (register size)|                        |                        | 
>>>>---------+----------------+------------------------+------------------------+
>>>>(u)short | 1/2            | 16                     | 32                     |
>>>>(u)int   | 1              | 32                     | 64                     |
>>>>(u)long  | 2              | 64 (as VC++s __int64)  | 128                    |
>>>
>>>This is exactly one of the things D was designed to avoid.
>>
>>And why is that? (don't really know, is it in D presentation or docs?)
>>
>
>From experience. It's best that Mr. Bright stay away from implementation-specific
>types. 

Could you be more specific about the "from experience" part? That didn't really
convince me; you haven't answered my question yet. What's the experience
you're referring to?!

>I believe it's better to know absolutely what a given type is. You are
>programming in D, not x86.

That's exactly the suggestion that started the discussion, and I agree with it
in essence. There should be standard aliases (if there aren't already) for intXX
types, just so you can be sure of the exact width of the integer you need in a
platform-independent manner.
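For what it's worth, D's std.stdint module already takes this approach, mirroring C99's stdint.h. A minimal sketch of such exact-width aliases (the names follow std.stdint; check the module for the full list):

```d
// Exact-width integer aliases in the style of std.stdint.
// In D the built-in types are already fixed by the spec
// (byte = 8, short = 16, int = 32, long = 64 bits on every platform),
// so these aliases are portable by construction.
alias byte  int8_t;
alias short int16_t;
alias int   int32_t;
alias long  int64_t;

int32_t checksum;   // guaranteed 32 bits, on any platform
```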

But with the actual D approach, say I have a very efficient D app (in which
performance depends on the most efficient integer manipulation for the current
CPU), written originally with the int data type (because it was conceived for a
32-bit system). When I port it to a 64-bit system, I'll have to make changes
(say, replacing int with long) to take advantage of the more powerful CPU.

Tom
November 22, 2005
Re: Var Types
"Tomás Rossi" <Tomás_member@pathlink.com> wrote:
> In article <dlti74$k8i$1@digitaldaemon.com>, MK says...
[snip]
> But with the actual D approach, say I have a very efficient D app (in which
> performance depends on the most efficient integer manipulation for the
> current CPU), written originally with the int data type (because it was
> conceived for a 32-bit system). When I port it to a 64-bit system, I'll
> have to make changes (say, replacing int with long) to take advantage of
> the more powerful CPU.

Can't you provide your own alias in such cases, and change it when you port? 
Or are you asking for a "fastint" (with a minimal width of 32 bits) to be 
defined within std.stdint?
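Such an alias is a one-line affair. As a minimal sketch (the name fastint here is the hypothetical one from the question above, not an existing std.stdint alias):

```d
// "Fastest integer at least 32 bits wide" for this project.
// One central declaration; edit the right-hand side when porting.
alias int fastint;      // on a 64-bit target: alias long fastint;

fastint counter;        // used everywhere instead of a raw int
```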
November 22, 2005
Re: Var Types
On Tue, 22 Nov 2005 02:24:19 +0000 (UTC), Tomás Rossi wrote:

[snip]

> But with the actual D approach, say I have a very efficient D app (in which
> performance depends on the most efficient integer manipulation for the current
> CPU), written originally with the int data type (because it was conceived for a
> 32-bit system). When I port it to a 64-bit system, I'll have to make changes
> (say, replacing int with long) to take advantage of the more powerful CPU.

Yes, you are right. In D, 'int' always means 32 bits regardless of the
architecture running the application. So if you port it to a different
architecture *and* you want to take advantage of the longer integer, then
you will have to change 'int' to 'long'. Otherwise use aliases of your own
making ...

version(X86) {
  alias int  stdint;
  alias long longint;
}

version(X86_64) {
  alias long  stdint;
  alias cent longint;
}

longint foo(stdint A) 
{
   return cast(longint)A * cast(longint)A + cast(longint)1;
}   

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
22/11/2005 1:35:33 PM
November 22, 2005
Re: Var Types
In article <dcr6iol0nzuz.ovrmy3qsc18d.dlg@40tude.net>, Derek Parnell says...
>
>On Tue, 22 Nov 2005 02:24:19 +0000 (UTC), Tomás Rossi wrote:
>
>[snip]
>
>> But with the actual D approach, say I have a very efficient D app (in which
>> performance depends on the most efficient integer manipulation for the current
>> CPU), written originally with the int data type (because it was conceived for a
>> 32-bit system). When I port it to a 64-bit system, I'll have to make changes
>> (say, replacing int with long) to take advantage of the more powerful CPU.
>
>Yes, you are right. In D the 'int' always means 32-bits regardless of the
>architecture running the application. So if you port it to a different
>architecture *and* you want to take advantage of the longer integer then
>you will have to change 'int' to 'long'. Otherwise use aliases of your own
>making ...
>
> version(X86) {
>   alias int  stdint;
>   alias long longint;
> }
>
> version(X86_64) {
>   alias long  stdint;
>   alias cent longint;
> }
>
> longint foo(stdint A) 
> {
>    return cast(longint)A * cast(longint)A + cast(longint)1;
> }   

So, what are the downsides of platform-dependent integer types?
Currently, applying your above workaround (which is almost a MUST from now on),
the downsides are very clear: developers will have to do this in most projects,
because 64-bit systems are a reality these days and 32-bit ones are rapidly
falling behind. Plus there's the ugliness of having to use stdint everywhere
you would use int, and the obvious type obfuscation that comes with alias.

Tom
November 22, 2005
Re: Var Types
Derek Parnell wrote:
> On Mon, 21 Nov 2005 12:06:27 -0500, Jarrett Billingsley wrote:
> 
> 
>>"pragma" <pragma_member@pathlink.com> wrote in message 
>>news:dlstrd$2i4$1@digitaldaemon.com...
>>
>>>What is wrong with the documented conventions laid out for the byte sizes
>>>of the current values?
>>
>>Because although they're documented and strictly defined, they don't make 
>>much sense.  For example, long makes sense on a 32-bit machine, but on 
>>64-bit machines (to which everything is moving relatively soon), 64 bits is 
>>the default size.  So "long" would be the "normal" size.  Then there's 
>>short, which I suppose makes sense on both platforms, and int, but neither 
>>gives any indication of the size.  The only type that does is "byte."
>>
>>I'd personally like int8, int16, int32, etc.  This also makes it easy to add 
>>new, larger types.  What comes after int64?  int128, of course.  But what 
>>comes after "long?"  Why, "cent."  What?!  Huh?
>>
>>But of course, none of this will ever happen / even be considered, so it's 
>>kind of an exercise in futility.
> 
> 
> Yes it is. However, my comments are that identifiers that are a mixture of
> alphas and digits reduce legibility. Also, why use the number of bits? Is
> it likely we would use a number that is not a power of 2? Or could we have
> an int24? Or an int30? Using a number of bytes seems more useful because
> I'm sure that all such integers would be on byte boundaries.
> 

What would you suggest?

Not saying that you're a proponent of it, but...

What happens to our short, int, long language types when 256-bit 
processors come along?  We'd find it hard to address a 16-bit integer on 
a system limited to only three type names.
November 22, 2005
Re: Var Types
> std.stdint contains aliases for those

That's the wrong way around, if you ask me. I would also like decorated 
types, eventually aliased to the C int/short/long (with 'int' being the 
platform's default).

L.
November 22, 2005
Re: Var Types
In article <dlucev$16t5$1@digitaldaemon.com>, James Dunne says...
>
>Derek Parnell wrote:
>> On Mon, 21 Nov 2005 12:06:27 -0500, Jarrett Billingsley wrote:
>> 
>> 
>>>"pragma" <pragma_member@pathlink.com> wrote in message 
>>>news:dlstrd$2i4$1@digitaldaemon.com...
>>>
>>>>What is wrong with the documented conventions laid out for the byte sizes
>>>>of the current values?
>>>
>>>Because although they're documented and strictly defined, they don't make 
>>>much sense.  For example, long makes sense on a 32-bit machine, but on 
>>>64-bit machines (to which everything is moving relatively soon), 64 bits is 
>>>the default size.  So "long" would be the "normal" size.  Then there's 
>>>short, which I suppose makes sense on both platforms, and int, but neither 
>>>gives any indication of the size.  The only type that does is "byte."
>>>
>>>I'd personally like int8, int16, int32, etc.  This also makes it easy to add 
>>>new, larger types.  What comes after int64?  int128, of course.  But what 
>>>comes after "long?"  Why, "cent."  What?!  Huh?
>>>
>>>But of course, none of this will ever happen / even be considered, so it's 
>>>kind of an exercise in futility.
>> 
>> 
>> Yes it is. However, my comments are that identifiers that are a mixture of
>> alphas and digits reduce legibility. Also, why use the number of bits? Is
>> it likely we would use a number that is not a power of 2? Or could we have
>> an int24? Or an int30? Using a number of bytes seems more useful because
>> I'm sure that all such integers would be on byte boundaries.
>> 
>
>What would you suggest?
>
>Not saying that you're a proponent of it, but...
>
>What happens to our short, int, long language types when 256-bit 
>processors come along?  We'd find it hard to address a 16-bit integer in 
>that system limited to only three type names.

Exactly: what would happen? Would "we" have to engineer another language; would
it be D v2 :P? Certainly platform-dependent integral types are THE choice.
Aliases of the intXXX form would always be necessary.

Tom
November 22, 2005
Re: Var Types
>>What happens to our short, int, long language types when 256-bit 
>>processors come along?  We'd find it hard to address a 16-bit integer in 
>>that system limited to only three type names.
> 
> 
> Exactly, what would happen? Would "we" have to engineer another language, would
> it be D v2 :P? Certainly platform-dependent integral types are THE choice.
> Aliases of the type intXXX would be necessary always.

I think you guys are exaggerating the problem. Even 64-bit CPUs were 
developed (afaik) mainly because of the need to cleanly address more 
than 4GB of RAM, not because there's some overwhelming need for 64-bit 
calculations. Considering how much RAM/disk/whatever 2^64 is, I don't 
think anyone will need a CPU that is 128-bit, let alone 256-bit, any time 
soon (and even if one were developed for marketing purposes, I see no 
reason to use 32-byte variables for loop counters from 0 to 99).

Now, if you have a working app on a 32-bit platform and you move it to a 
64-bit platform, is it any help if int becomes 64 bit? No, because if it 
was big enough before, it's big enough now (with the notable exception 
of memory locations and sizes, which are taken care of with size_t and 
ptrdiff_t). Does it hurt? It sure can, as sizes of objects all over the 
place will change, breaking any interface to outside-the-app.
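To make that last point concrete, here is a hypothetical record type; precisely because D fixes int at 32 bits, its layout does not shift between platforms:

```d
// A record written to disk or sent over a socket.
// If 'int' grew to 64 bits on a 64-bit build, Record.sizeof would
// change and files written by a 32-bit build would no longer parse.
struct Record
{
    int  id;           // always 32 bits in D
    int  timestamp;    // always 32 bits
    long payload;      // always 64 bits
}
// Record.sizeof is identical on 32-bit and 64-bit targets, so the
// two builds can share files and network messages unchanged.
```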



xs0
November 22, 2005
Re: Var Types
In article <dlv33k$1o7u$1@digitaldaemon.com>, xs0 says...
>
>>>What happens to our short, int, long language types when 256-bit 
>>>processors come along?  We'd find it hard to address a 16-bit integer in 
>>>that system limited to only three type names.
>> 
>> 
>> Exactly, what would happen? Would "we" have to engineer another language, would
>> it be D v2 :P? Certainly platform-dependent integral types are THE choice.
>> Aliases of the type intXXX would be necessary always.
>
>I think you guys are exaggerating the problem. Even 64-bit CPUs were 
>developed (afaik) mainly because of the need to cleanly address more 
>than 4GB of RAM, not because there's some overwhelming need for 64-bit 
>calculations. Considering how much RAM/disk/whatever 2^64 is, I don't 
>think anyone will need a CPU that is 128-bit, let alone 256-bit any time 
>soon (and even if developed because of marketing purposes, I see no 
>reason to use 32-byte variables to have loop counters from 0 to 99).

Some people said the same thing about 32-bit machines before they were
developed, and now we have 64-bit CPUs. Plus, I'm sure 128/256-bit CPUs
already exist nowadays, maybe not for home PCs, but who says D only has to run
on home computers? For example, the PlayStation 2 platform is built on a
128-bit CPU! 

>Now, if you have a working app on a 32-bit platform and you move it to a 
>64-bit platform, is it any help if int becomes 64 bit? No, because if it 
>was big enough before, it's big enough now (with the notable exception 
>of memory locations and sizes, which are taken care of with size_t and 
>ptrdiff_t). Does it hurt? It sure can, as sizes of objects all over the 
>place will change, breaking any interface to outside-the-app.

I can't agree with this. You port an app to 64-bit and rebuild it as a 64-bit
edition, not necessarily expecting it to interface with a 32-bit version.
Besides, why are you so sure that moving to 64 bits won't be much of a gain?
"If it was big enough before, it's big enough now"???? Be careful: your ported
app will still work, but it'll take no benefit of the upgraded processor! If your
app focus on std int performance to work better, this is much of a problem.



Tom
November 22, 2005
Re: Var Types
Tomás Rossi wrote:

>>I think you guys are exaggerating the problem. Even 64-bit CPUs were 
>>developed (afaik) mainly because of the need to cleanly address more 
>>than 4GB of RAM, not because there's some overwhelming need for 64-bit 
>>calculations. Considering how much RAM/disk/whatever 2^64 is, I don't 
>>think anyone will need a CPU that is 128-bit, let alone 256-bit any time 
>>soon (and even if developed because of marketing purposes, I see no 
>>reason to use 32-byte variables to have loop counters from 0 to 99).
> 
> 
> Some people said the same thing about 32-bit machines before they were
> developed, and now we have 64-bit CPUs. 

Well, sure, people always make mistakes, but can you think of any 
application anyone will develop in the next 30 years that will need more 
than 17,179,869,184 GB of RAM (or 512x that of disk)? Older limits, like 
the 1MB of the 8086 or the 4GB of the 80386, were somewhat easier to reach, 
I think :) I mean, even if both needs and technology double each year (and 
I think it's safe to say that they increase more slowly), it will take over 
30 years to reach that...


> Plus, I'm sure 128/256-bit CPUs already exist nowadays, maybe not for home
> PCs, but who says D only has to run on home computers? For example, the
> PlayStation 2 platform is built on a 128-bit CPU! 

Well, from what I can gather from
http://arstechnica.com/reviews/hardware/ee.ars/
the PS2 is actually 64-bit; what is 128-bit are the SIMD instructions (which 
actually work on at most 32-bit values) and some of the internal buses.

My point was that there's not much need for operating on values over 
64 bits, so I don't see the transition to 128 bits happening soon 
(again, I'm referring to single data items; bus widths, vectorized 
instructions' widths, etc. are a different story, but one that is not 
relevant to our discussion).


>>Now, if you have a working app on a 32-bit platform and you move it to a 
>>64-bit platform, is it any help if int becomes 64 bit? No, because if it 
>>was big enough before, it's big enough now (with the notable exception 
>>of memory locations and sizes, which are taken care of with size_t and 
>>ptrdiff_t). Does it hurt? It sure can, as sizes of objects all over the 
>>place will change, breaking any interface to outside-the-app.
> 
> 
> I can't agree with this. You port an app to 64-bit and rebuild it as a 64-bit
> edition, not necessarily expecting to work interfacing against a 32-bit version.
> Besides, why  are you so sure that moving to 64-bits won't be much of a gain?

The biggest gain I see in 64 bits is, like I said, the ability to handle 
more memory, which naturally improves performance for some types of 
applications, like databases. I don't see much performance gain in 
general, because there aren't many quantities that require a 64-bit 
representation in the first place. Even if 64-bit ops are 50x faster on 
a 64-bit CPU than on a 32-bit CPU, they are very rare (at least in my 
experience), so the gain is small. Also note that you don't gain any 
speed by simply making your variables bigger, if that's all you do.


> "If it was big enough before, it's big enough now"???? Be careful: your ported
> app will still work, but it'll take no benefit of the upgraded processor! 

Why not? It will be able to use more RAM, and operations involving longs 
will be faster. Are there any other benefits a 64-bit architecture provides?


> If your
> app focus on std int performance to work better, this is much of a problem.

I don't understand that sentence, sorry :)


xs0