April 23, 2004
Matthew wrote:

>I'm not a hardware-Johnny, but it's my understanding that using 32-bit integers on 64-bit architectures will be less efficient than using 64-bit integers.
>
With 64-bit machines you have a range of performance issues.  One of the biggest is memory.  If you use a 64-bit integer, array sizes double, so the slowest part of your architecture (other than the hard drive, which is memory anyway) has effectively been halved.  Now, considering that things like locals could be sent to the CPU as 64 bits, at the very least there shouldn't be any slowdown.

I'm not arguing that 64-bit machines are a bad thing (64-bit calculations are now *almost* as fast as 32).

As for portability: try loading an int from a file. If int has changed to 64 bits, then your program will most likely crash.

PS - I just read that apparently C++ is keeping int as 32 bits.  What is changing is the pointer size, which isn't such a big issue if you avoid casting.

http://www.microsoft.com/whdc/winhec/partners/64bitAMD.mspx

-- 
-Anderson: http://badmama.com.au/~anderson/
April 23, 2004
But what about people who become concerned with speed? They're left in the position of having to backtrack through all their code and trying to judge which "int" is size-oriented and which is speed-oriented. Aren't we effectively back in the (pre-C99) C world?

"J Anderson" <REMOVEanderson@badmama.com.au> wrote in message news:c6bam7$qti$1@digitaldaemon.com...
> Matthew wrote:
>
> >The only downside to this is that it's less visible/obvious, and many people could write much code before becoming aware of the issue, and be left with similar porting nasties that we currently have in C/C++, and which D is intended to avoid/obviate.
> >
> >Therefore, my preference is that we add a new type, "native", which is an integer of the ambient architecture size. If "native" is listed up there with the other integer types, it will be something that people will learn very early in their use of D, and will therefore not be forgotten or overlooked as is likely with the library approach.
>
> I think that people who are unaware of the issue are more concerned about their code running rather than running fast.  People who are concerned with speed would learn this kind of thing pretty soon.
>
> -- 
> -Anderson: http://badmama.com.au/~anderson/


April 23, 2004
"J Anderson" <REMOVEanderson@badmama.com.au> wrote in message news:c6bbih$ssb$1@digitaldaemon.com...
> Matthew wrote:
>
> >I'm not a hardware-Johnny, but it's my understanding that using 32-bit integers on 64-bit architectures will be less efficient than using 64-bit integers.
> >
> With 64-bit machines you have a range of performance issues.  One of the biggest is memory.  If you use a 64-bit integer, array sizes double, so the slowest part of your architecture (other than the hard drive, which is memory anyway) has effectively been halved.  Now, considering that things like locals could be sent to the CPU as 64 bits, at the very least there shouldn't be any slowdown.

We're not talking about arrays, but about indexer and other local variables.

> I'm not arguing that 64-bit machines are a bad thing (64-bit calculations are now *almost* as fast as 32).
>
> As for portability: try loading an int from a file. If int has changed to 64 bits, then your program will most likely crash.

Another advantage of native is that serialisation APIs would be written to specifically *not* accept "native" variables, which is actually a massive improvement on the situation we experience in C and C++. (I spend a fair amount of time on this hideously vexing issue in "Imperfect C++", due out Sept. <G>)



April 23, 2004
Matthew wrote:

> I'm not a hardware-Johnny, but it's my understanding that using 32-bit integers on 64-bit architectures will be less efficient than using 64-bit integers.

Most certainly not. Doing one 32-bit operation will never be more expensive than doing one 64-bit operation. It would, though, most certainly be more efficient to do one 64-bit op instead of two 32-bit ops.

In general, I would think the question of performance between 32 and 64 bit is far too complex to just say: on this machine, 64 bit is more efficient, so it should be the default.

Especially, you have to consider that for many applications the bottleneck is not the processor but the cache and the speed of the RAM. If you have to shuffle twice as much data as necessary, it will definitely slow the system down.

April 23, 2004
"Norbert Nemec" <Norbert.Nemec@gmx.de> wrote in message news:c6bcjs$uje$1@digitaldaemon.com...
> Matthew wrote:
>
> > I'm not a hardware-Johnny, but it's my understanding that using 32-bit integers on 64-bit architectures will be less efficient than using 64-bit integers.
>
> Most certainly not. Doing one 32-bit operation will never be more expensive than doing one 64-bit operation. It would, though, most certainly be more efficient to do one 64-bit op instead of two 32-bit ops.

As I said, I'm no expert on this, but it's my understanding that it can be more expensive. "Most certainly not." sounds far too absolute for my tastes. 16-bit costs more than 32 on 32-bit machines, so why not 32 on 64? Maybe we need some multi-architecture experts to weigh in.

> In general, I would think the question of performance between 32 and 64 bit is far too complex to just say: on this machine, 64 bit is more efficient, so it should be the default.

What should be the default?

> Especially, you have to consider that for many applications the bottleneck is not the processor but the cache and the speed of the RAM. If you have to shuffle twice as much data as necessary, it will definitely slow the system down.

No-one's talking about shuffling twice as much data. The issue is whether a single indexer variable is more efficient when 64-bits on a 64-bit machine than when 32-bits. It "most certainly" won't be the case that a 32-bit get on a 64-bit bus will be cheaper than a 64-bit get, surely?


April 23, 2004
Matthew wrote:
> But what about people who become concerned with speed? They're left in the position of having to backtrack through all their code and trying to judge which "int" is size-oriented and which is speed-oriented. Aren't we effectively back in the (pre-C99) C world?

If you want to squeeze out performance, you will have to go through all kinds of pain. D should encourage people to write code that runs reasonably fast on any processor. People who want to go beyond that and optimize their code for their personal machine get all the tools to do so, but should not expect that it will be especially simple and comfortable.

April 23, 2004
I realize I'm always in a minority of one, but...

It has always been my opinion that int should be whatever the largest int size is. And then have int16 etc. for each of the specific sizes. So if it matters what size the value is, you use the specific one. If not, you use int, and as things progress, you don't need to keep modifying the code for the new largest size (only a recompile would be required).

A while back I was programming in Compaq C on OpenVMS and the largest int size was a 64-bit "long long int", and I wanted some of my code to work on that and in DOS too, so I had to typedef BIGGEST_INT to mean different things on the different platforms. (And is ANSI going to use "long long int" to mean a 64-bit int?)

I realize that you all are going to invoke "backward compatibility" and "easy porting of C code" as reasons for continuing the "C way". But I heartily disagree, D _is not_ and _should not_ be C. If D is to be better than C, this is one area I feel needs improvement.

Upon further reflection I would suggest defining only the specific-sized types, and allow the user (of the language) to typedef or alias the generic names as desired.

In article <c6b398$f01$1@digitaldaemon.com>, imr1984 says...
>
>I'm curious - when a D compiler is made for 64-bit processors (in the near future let's hope :) what will the size of an int be? I assume it will be 8, and long will be 16. So then what will a 2 byte integer be? It can't be a short because that will be a 4 byte integer.
>
>I assume that floating point names will stay the same, as they are defined by the IEEE.
>
>


April 23, 2004
"Norbert Nemec" <Norbert.Nemec@gmx.de> wrote in message news:c6bcv2$uje$2@digitaldaemon.com...
> Matthew wrote:
> > But what about people who become concerned with speed? They're left in the position of having to backtrack through all their code and trying to judge which "int" is size-oriented and which is speed-oriented. Aren't we effectively back in the (pre-C99) C world?
>
> If you want to squeeze out performance, you will have to go through all kinds of pain. D should encourage people to write code that runs reasonably fast on any processor. People who want to go beyond that and optimize their code for their personal machine get all the tools to do so, but should not expect that it will be especially simple and comfortable.

Why?


April 23, 2004
Matthew wrote:
> "Norbert Nemec" <Norbert.Nemec@gmx.de> wrote in message news:c6bcv2$uje$2@digitaldaemon.com...
>> If you want to squeeze out performance, you will have to go through all kinds of pain. D should encourage people to write code that runs reasonably fast on any processor. People who want to go beyond that and optimize their code for their personal machine get all the tools to do so, but should not expect that it will be especially simple and comfortable.
> 
> Why?

Because usually, the time you spend optimizing code for one special machine could just as well be spent waiting and buying a new machine half a year later. The compiler should have the means to optimize for a certain architecture, but the programmer should not think about the exact architecture too much.

Of course, there are exceptions to that, but then, people optimizing for a certain architecture will have to go through all kinds of pain. Distinguishing between 32-bit and 64-bit integers and deciding which one to use when is just one fraction of the problem.

April 23, 2004
Matthew wrote:

>But what about people who become concerned with speed?
>
What if they suddenly have a porting issue?  Since 64-bit machines are bound to be faster, most 32-bit apps should run faster than on their original target machine, so they should meet the required efficiency.  Now, if someone wants to make a dynamic program that adapts to the speed of the processor, then there are a lot of other issues they will have to consider in regard to variable size.  They might as well use version statements.

An alias for the most efficiently sized variable could be useful, but I certainly wouldn't encourage its use unless people know what they are doing.

>They're left in the position of having to backtrack through all their code and trying to judge which "int" is size-oriented and which is speed-oriented. Aren't we effectively back in the (pre-C99) C world?
>

Your p idea will just make this even harder. Do you want the compiler to work out when to use a 32-bit and when to use a 64-bit?  How is that possible? The program has no idea how much a particular variable will be reused and needs to be kept in cache etc.  I suspect there would be very few cases where the compiler could improve performance in a way the programmer doesn't know about.  And then the programmer would have to make use of the extra 32 bits.

I've long come to the conclusion to use the best variable for the job at hand.  If you need 64 bits, then use 64 bits.  I think the biggest advantage of 64-bit machines is that double will become not much slower than float.  Then you'll be able to have really accurate calculations (i.e. like in 3D graphics).

Double could definitely make use of this p idea, but not so much int.  What about pDouble?

If you can use a smaller variable, then use it; it will save cache space. (I read somewhere that 64-bit processors are expected to use 30% more cache space.)

-- 
-Anderson: http://badmama.com.au/~anderson/