April 23, 2004
You might like to read these articles:

http://arstechnica.com/cpu/03q1/x86-64/x86-64-1.html

http://www.anandtech.com/guides/viewfaq.html?i=112

- Kris


"imr1984" <imr1984_member@pathlink.com> wrote in message news:c6b398$f01$1@digitaldaemon.com...
> im curious - when a D compiler is made for 64bit processors (in the near future
> lets hope :) what will the size of an int be? I assume it will be 8, and long
> will be 16. So then what will a 2 byte integer be? It cant be a short because
> that will be a 4 byte integer.
>
> I assume that floating point names will stay the same, as they are defined by
> the IEEE.
>
>


April 23, 2004
Unknown W. Brackets wrote:

> Unless I'm entirely mistaken, in assembly - and thus compiled code - you lose something when you switch.  For example, if your ENTIRE program is 16 bit you don't lose much... but, every time you use 32 bit in that program you have to basically say, "I'm about to use 32 bit... get ready!" beforehand.  It reverses for 32 bit using 16 bit...
> 
> I would assume 64 bit works the same way...

I would say this is usual but not inherent. The x86 is an evil CISC (well, less evil than the VAX, but anyway), which means instruction sizes may vary. But x86 is an archaically old architecture; no one develops new CISC architectures these days. It has been recognized that architectures with a uniform instruction size are more efficient, especially because the decoding phase no longer dominates execution. New CPUs will be either RISC, where I would guess smaller types are not bound to be slower as long as there are special instructions to load and store them, or VLIW, of which I know too little to say anything sane.


> the problem comes in that if pointers are 64 bit, which they are, does that put the program initially in 64 bit or 32 bit?  I'd assume 64 bit, in which case you'd get the penalties.

I find it unlikely that penalties would come up on AMD64. Gotta read more about it though.

> However, I may just be remembering a little long - I can't remember how bad the performance penalty is, just that it adds to the bytes needed for each instruction.

It might not be fundamental, though. Before the Pentium Pro, the performance of accessing 16-bit values was quite decent. The Pentium Pro was the one to introduce a 64-bit (or was it wider?) memory bus, along with "optimized" load routines that were tuned for everything from 32 bits onwards. That 8-bit access is still fast is only due to its tiny size and vast space savings, while 16 bit fell into a "hole" no one really cared about. If the performance of accessing 32-bit values should ever diminish, it would be a sign that the world has changed and we no longer care.

-eye
April 23, 2004
"imr1984" <imr1984_member@pathlink.com> wrote in message news:c6b398$f01$1@digitaldaemon.com...
> im curious - when a D compiler is made for 64bit processors (in the near future
> lets hope :) what will the size of an int be? I assume it will be 8, and long
> will be 16. So then what will a 2 byte integer be? It cant be a short because
> that will be a 4 byte integer.
>
> I assume that floating point names will stay the same, as they are defined by
> the IEEE.

All sizes will stay the same when moving to 64 bits, with the following exceptions:

1) pointers will be 64 bits
2) object references will be 64 bits
3) dynamic array references will be 128 bits
4) pointer differences will be 64 bits
5) pointer offsets will be 64 bits
6) sizeof will be 64 bits
7) Whether real.size will stay 10 or be forced to 8 is still up in the air.

To this end, and to ensure portability of D source code to 64 bits, follow these rules:

1) Use the .sizeof property whenever depending on the size of a type.
2) Use ptrdiff_t (an alias in object.d) for signed pointer differences.
3) Use size_t (an alias in object.d) for type sizes, unsigned pointer
offsets and array indices.

Note that 1, 2, and 3 correspond to C's portable uses of sizeof, ptrdiff_t, and size_t.

In particular, int's and long's will remain the same size as for 32 bit computing.
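
A minimal sketch of those rules in use (the function name is just illustrative, and the snippet is written against later D syntax than today's compiler, so take it as an illustration rather than gospel):

import std.stdio;

size_t countPositives(int[] data)
{
    // Rule 3: size_t for array indices. It tracks the pointer width, so
    // the same loop is correct on both 32-bit and 64-bit targets.
    size_t hits = 0;
    for (size_t i = 0; i < data.length; i++)
        if (data[i] > 0)
            hits++;

    // Rule 2: ptrdiff_t for signed pointer differences.
    ptrdiff_t span = &data[data.length - 1] - &data[0];

    // Rule 1: .sizeof instead of a hard-coded byte count.
    size_t bytes = data.length * int.sizeof;

    writefln("span=%s bytes=%s hits=%s", span, bytes, hits);
    return hits;
}

void main()
{
    countPositives([3, -1, 4, -1, 5]);
}

None of the declarations above need to change when pointers move to 64 bits; only the values of size_t.sizeof and ptrdiff_t.sizeof do.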


April 23, 2004
and it's unsigned

"Ilya Minkov" <minkov@cs.tum.edu> wrote in message news:c6bki6$1auk$1@digitaldaemon.com...
> Matthew wrote:
>
> > It does seem to me that we should have an additional integral type, say pint, that is an integer of the natural size of the architecture, for maximal efficiency.
>
> There is one, size_t -- but its name is ugly as hell!
>
> -eye


April 24, 2004
Hi, Matthew.

In article <c6b9vv$qdr$1@digitaldaemon.com>, Matthew says...
>
>I'm not a hardware-Johny, but it's my understanding that using 32-bit integers on 64-bit architectures will be less efficient than using 64-bit integers.
>
>"J Anderson" <REMOVEanderson@badmama.com.au> wrote in message news:c6b8uq$okp$1@digitaldaemon.com...
>> Matthew wrote:
>>
>> >
>> >Well, the point is that using an inappropriately sized integer for a given architecture will have a performance cost. Therefore, anyone using an integer for
>> >"normal" counting and such will be at a disadvantage when porting between different sized architectures. To avoid this *every* programmer who is aware of
>> >the issue will end up creating their own versioned alias. Therefore, I think it
>> >should be part of the language, or at least part of Phobos. Does this not seem sensible?
>> >
>> >
>> It does.  However on 64 bit machines won't 32 bit integers still be faster because they can be sent two at a time (under certain conditions)?  The same can be said for 16-bit at the moment.
>>
>> -- 
>> -Anderson: http://badmama.com.au/~anderson/
>
>

As with most things, there's the way the world is, and then the way it should be.

IMO, in a perfect world, integer sizes would be a minimum size, not an exact size.  Any condition that would be affected by the upper bits would cause an exception in debug mode, and not in optimized mode.  There would be similar behavior for array indexing and other similar checks.  This would allow the compiler to size up integers to fit its register size.  Then, there's no need for a native integer type.
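
A toy sketch of the idea as a library type (purely hypothetical, not a D feature; the name and the "at least 32 bits" rule are made up, and the code uses present-day D operator overloading):

struct IntAtLeast32
{
    long value;   // stored at the natural register width

    this(long v) { value = v; }

    IntAtLeast32 opBinary(string op : "+")(IntAtLeast32 rhs)
    {
        long r = value + rhs.value;
        // assert() is compiled out of release builds, which matches the
        // "exception in debug mode, not in optimized mode" behavior.
        assert(r >= int.min && r <= int.max,
               "result depends on bits above the guaranteed 32");
        return IntAtLeast32(r);
    }
}

void main()
{
    auto ok = IntAtLeast32(1_000_000) + IntAtLeast32(2_000_000);
    // IntAtLeast32(2_000_000_000) + IntAtLeast32(2_000_000_000) would trip
    // the assert in a debug build: the true result needs more than 32 bits.
}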

There are other problems that reality is throwing at 64-bit computing.  As Walter pointed out, all pointers will double in size.  Most programs I know of that use enough memory to justify the need for 64-bit pointers fill up that memory mostly with pointers.  In other words, if you switch to 64-bits, your applications may need close to 2x the memory just to run.  The cache also gets less efficient, so your program may also run slower.  So you pay more, and get less.

IMO, in a perfect world, our compilers would be able to use integers as object references.  This allows us to use up to 4 billion objects of any given class before making its reference type 64 bits.  Also, applications would not use more memory just because they're running on a 64-bit machine.  This may sound far-fetched, but I've got over 500K lines of C code running this way.  So far as I can tell, there are no downsides.  However, compatibility with original C keeps the world from taking this path... Just look at D, C++, and C# for good examples of this.
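
For what it's worth, a minimal sketch of that scheme in D (not Bill's actual code; the pool, handle, and field names are invented): objects live in a per-class array, and a "reference" is just a 32-bit index into that array, so links stay 4 bytes no matter how wide pointers get.

struct Node
{
    int  value;
    uint next;   // link to another Node: a 4-byte handle, not a pointer
}

struct NodePool
{
    Node[] nodes;

    uint allocate(int value)
    {
        if (nodes.length == 0)
            nodes ~= Node.init;               // reserve slot 0 as the "null" handle
        nodes ~= Node(value, 0);
        return cast(uint)(nodes.length - 1);  // the 32-bit "reference"
    }

    ref Node get(uint handle) { return nodes[handle]; }
}

void main()
{
    NodePool pool;
    uint a = pool.allocate(10);
    uint b = pool.allocate(20);
    pool.get(a).next = b;   // the link costs 4 bytes regardless of pointer size
}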

Then, there's that annoying fact that we can't get away from the x86 architecture.  Intel made a real try with Itanium, but Opteron is the architecture that has won.  Now that Intel is on-board, the whole world will soon be buying primarily x86 64-bit machines.  However, due to the historical limitations in our software tools, few applications will use the 64-bit mode for many years to come.

IMO, in a perfect world, we'd distribute all our programs in a platform-independent way.  In the open-source community, we do this.  For example, I just download the vim source tarball and do the standard make install stuff.  The same exact method of installing vim works on many different CPU platforms.  If the world had nothing but open-source programs, we would have left x86 where it belongs: back in the '70s.  As it is, the monster just keeps getting fatter.

It's all a matter of history...

Bill


April 24, 2004
The *Word* we have been waiting for!

Nice to see you here!


-eye


April 24, 2004
Well, if all sizes will stay the same, I'd like to know why D actually calls its integers by non-exact names (int, long, short etc). Why aren't they called int32, int64, int16 etc.?

In article <c6boe1$1ilt$2@digitaldaemon.com>, Walter says...
>
>All sizes will stay the same when moving to 64 bits, with the following
>exceptions: [...]
>
>In particular, int's and long's will remain the same size as for 32 bit
>computing.


April 24, 2004
imr1984 wrote:

> Well, if all sizes will stay the same, I'd like to know why D actually calls its
> integers by non-exact names (int, long, short etc). Why aren't they called int32,
> int64, int16 etc.?

According to the specification, all bit widths are to be understood as minimums. The types *might* be upscaled in the (probably far) future. For the sake of portability of algorithms, one should keep in mind that the types might be larger someday, and such bit-width names would become very unfortunate then.

BTW, this explains why there are no bit rotation intrinsics in D.
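
For illustration, a rotation helper can be written so the width comes from the type rather than a literal 32 (rotl is just my name for it, not an intrinsic):

uint rotl(uint x, uint n)
{
    const uint bits = uint.sizeof * 8;   // derive the width, don't hard-code 32
    n %= bits;
    if (n == 0)
        return x;
    return (x << n) | (x >> (bits - n));
}

unittest
{
    assert(rotl(0x80000001, 1) == 3);
}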

-eye
April 24, 2004
Ilya Minkov <minkov@cs.tum.edu> wrote:

> According to the specification, all bit widths are to be understood as minimums. The types *might* be upscaled in the (probably far) future. For the sake of portability of algorithms, one should keep in mind that the types might be larger someday, and such bit-width names would become very unfortunate then.

Oh no, I certainly hope that's not true. Where is this mentioned in the spec?


-- 
dave
April 24, 2004
> The *Word* we have been waiting for!
>
> Nice to see you here!


Yeah, where've you been, Bill? It's been ages.