April 23, 2004
"Norbert Nemec" <Norbert.Nemec@gmx.de> wrote in message news:c6behs$11ou$1@digitaldaemon.com...
> Matthew wrote:
> > "Norbert Nemec" <Norbert.Nemec@gmx.de> wrote in message news:c6bcv2$uje$2@digitaldaemon.com...
> >> If you want to squeeze out performance, you will have to go through all kinds of pain. D should encourage people to write code that runs reasonably fast on any processor. People who want to go beyond that and optimize their code for their personal machine get all the tools to do so, but should not expect that it will be especially simple and comfortable.
> >
> > Why?
>
> Because usually, the time you spend optimizing code for one special machine could just as well be spent waiting for and buying a new machine half a year later. The compiler should have the means to optimize for a certain architecture, but the programmer should not think about the exact architecture too much.
>
> Of course, there are exceptions to that, but then, people optimizing for a certain architecture will have to go through all kinds of pains. Distinguishing between 32-bit and 64-bit integers and deciding which one to use when is just one fraction of the problem.

I still fail to see why we should not address it. How do you solve a whole problem, composed of multiple parts, other than by addressing the parts?


April 23, 2004
"J Anderson" <REMOVEanderson@badmama.com.au> wrote in message news:c6beim$11h1$1@digitaldaemon.com...
> Matthew wrote:
>
> >But what about people who become concerned with speed.
> >
> >
> What if they suddenly have a porting issue?  Since 64-bit machines are bound to be faster, most 32-bit apps should run faster than on their original target machine, so they should meet the required efficiency.  Now if someone wants to make a dynamic program that adapts to the speed of the processor, then there are a lot of other issues they will have to consider with regard to the variable size.  They might as well use version statements.

I don't understand what you're saying here.

> An alias for the most efficiently sized variable could be useful, but I certainly wouldn't encourage its use unless people know what they are doing.
>
> >They're left in the position of having to backtrack through all their code
> >and trying to judge which "int" is size-oriented and which is speed-oriented.
> >Aren't we effectively back in (pre-C99) C world?
> >
>
> Your p idea will just make this even harder. Do you want the compiler to work out when to use a 32-bit and when to use a 64-bit?  How is that possible? The compiler has no idea how much a particular variable will be reused and needs to be kept in cache, etc.

This is all wrong. The compiler would know exactly how large to make the "native" type, because D is a compile-to-host language. On a 64-bit architecture it would be 64-bits. On a 32-bit architecture it would be 32-bits.

> I suspect there would be very few cases where the compiler could improve performance that the programmer doesn't already know about.

The programmer does know about them, that's the point. And the compiler handles the details, that's the point.

>  And then the programmer would have to make use of
> the extra 32 bits.

What are you talking about?

> I've long come to the conclusion that you should use the best variable for the job at hand.  If you need 64 bits, then use 64 bits.  I think the biggest advantage of 64-bit is that double will become not much slower than float.  Then you'll be able to have really accurate calculations (e.g. in 3D graphics).

This is nonsense. "If you need 64 bits, then use 64 bits" totally misses the point. Of course, if you have a quantity that requires a specific size, then you use that size. I'm not talking about that. I'm talking about the times when you use an integer as an indexer, or another kind of size-agnostic variable. In such cases, you want the code to perform optimally on whatever platform it happens to be compiled for. Since the compiler knows what architecture it is compiling for, why not let it make the decision in such cases, informed as it would be by one's using "native" (a variably sized int reflecting the optimal integral size for a given architecture) rather than int (32 bits) or long (64 bits)?
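To make that concrete, here is a rough sketch of how such a "native" alias might be approximated with version statements today (the X86_64 version identifier and the name "native" are assumptions for illustration; "native" is the proposed type, not an existing one):

    // Sketch only: approximating the proposed "native" integral type.
    version (X86_64)
    {
        alias long native;   // 64-bit integer on a 64-bit target
    }
    else
    {
        alias int native;    // 32-bit integer on a 32-bit target
    }

    void zero(int[] a)
    {
        // size-agnostic loop index: the width follows the architecture
        for (native i = 0; i < a.length; i++)
            a[i] = 0;
    }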



April 23, 2004
J Anderson wrote:

>
> Double could definitely make use of this p idea, but not so much int.  What about pDouble?

Sorry, I meant: what about pFloat and pDouble?  Of course, p would mean that it gets at least that size.


-- 
-Anderson: http://badmama.com.au/~anderson/
April 23, 2004
Matthew wrote:

>As I said, I'm no expert on this, but it's my understanding that it can be more
>expensive. "Most certainly not." sounds far too absolute for my tastes. 16-bit
>costs more than 32 on 32-bit machines, so why not 32 on 64? Maybe we need some
>multi-architecture experts to weigh in.
>  
>

I think this has to do with alignment. It's cheaper to process one 32-bit variable than two 16-bit variables individually.

-- 
-Anderson: http://badmama.com.au/~anderson/
April 23, 2004
>
>I'm talking about the times when
>you use an integer as an indexer, or another kind of size-agnostic variable. In
>such cases, you want the code to perform optimally whatever platform it happens
>to be compile for. Since the compiler knows what architecture it is being
>compiled for, why not let it make the decision in such cases, informed as it
>would be by one's using "native" (a variable sized int reflecting the optimal
>integral size for a given architecture) rather than int (32-bits) or long
>(64-bits)?
>
>  
>
Perhaps it could be called indexer?  That way it would be used correctly.

-- 
-Anderson: http://badmama.com.au/~anderson/
April 23, 2004
>> How is that possible? The compiler has no idea how much a particular
>> variable will be reused and needs to be kept in cache, etc.
>>    
>>
>
>This is all wrong. The compiler would know exactly how large to make the "native"
>type, because D is a compile-to-host language. On a 64-bit architecture it would
>be 64-bits. On a 32-bit architecture it would be 32-bits.
>  
>

I wouldn't say all wrong.  The compiler cannot predict how long a particular variable will be kept in cache.

>-Anderson: http://badmama.com.au/~anderson/
>
April 23, 2004
Matthew wrote:
> I still fail to see why we should not address it. How do you solve a whole problem, composed of multiple parts, other than by addressing the parts?

Back to citing your comment:

>> But what about people who become concerned with speed. They're left in the position of having to backtrack through all their code and trying to judge which "int" is size-oriented and which is speed-oriented. Aren't we effectively back in (pre-C99) C world?

This is what I was reacting to when I said: well, yes, bad luck!

There is no simple rule telling you where int64 might be faster than int32. On a 32-bit processor, int32 is obviously faster in almost any case. On 64-bit machines we obviously do not know in general, but I can say for sure that int32 is faster at least in memory-intensive code.

So, on 32-bit machines, you can just stick with int32 and be pretty sure you get good performance.

On a 64bit machine, you have a choice:
a) pick int32 in general
b) pick int64 in general
c) sort through the code by hand like back in the good ol' days

On average, b) is unlikely to give better performance than a), so if you don't want to spend much time examining the code in question, a) is a good way to go. Picking c) will probably improve performance, but that is just what I said: if you want optimum performance on your personal machine, beyond what the compiler can do with portable code, be prepared to go through pains.
April 23, 2004
Matthew schrieb:

> It does seem to me that we should have an additional integral type, say pint,
> that is an integer of the natural size of the architecture, for maximal
> efficiency.

There is one, size_t -- but its name is ugly as hell!
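A minimal sketch of using it, just for illustration ("pint" is only the name suggested above, not an existing type):

    // size_t is already sized per target (uint on a 32-bit target, and
    // presumably ulong on a 64-bit one), so a friendlier name can simply
    // be aliased onto it.
    alias size_t pint;

    pint count = 0;   // 32 bits on a 32-bit target, 64 bits on a 64-bit target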

-eye
April 23, 2004
Matthew schrieb:

> As I said, I'm no expert on this, but it's my understanding that it can be more
> expensive. "Most certainly not." sounds far too absolute for my tastes. 16-bit
> costs more than 32 on 32-bit machines, so why not 32 on 64? Maybe we need some
> multi-architecture experts to weigh in.

Though you are most certainly right, I would think that, as long as memory sizes are not so huge yet, 64-bit CPUs will be approximately as fast for 32-bit values as for 64-bit values. If you remember, the 386, 486 and Pentium were quite fast with 16-bit data; the slowdown was introduced with the Pentium Pro. We might have another 3 CPU generations until a similar change happens.

And in general: I wonder why the user should bother at all. If some data type is "packed", then he will get the minimal memory usage for the desired value range, and if a type is "aligned", well, it should be laid out so that the highest possible performance is reached. Then the user need not specify the actual width directly.
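As a rough sketch (using D's align attribute; the struct names are made up, and whether align is the right mechanism for "packed" types is just an assumption):

    struct Packed
    {
        align(1):        // no padding: minimal memory for the value range
        ushort id;
        uint   count;
    }

    struct Aligned       // default alignment: padded for fastest access
    {
        ushort id;
        uint   count;
    }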

-eye
April 23, 2004
J Anderson wrote:

> 
> I think this is to do with alignment. It's cheaper to process one 32-bit variable then 2 16-bit variables indiviually.
> 

Unless I'm entirely mistaken, in assembly - and thus compiled code - you lose something when you switch sizes.  For example, if your ENTIRE program is 16-bit you don't lose much... but every time you use 32-bit in that program you basically have to say, "I'm about to use 32-bit... get ready!" beforehand.  It reverses for 32-bit code using 16-bit...

But I may be remembering wrong.  It has been about a year since I last worked in assembly...

I would assume 64-bit works the same way... the problem is that if pointers are 64-bit, which they are, does that put the program initially in 64-bit or 32-bit mode?  I'd assume 64-bit, in which case you'd get the penalties.

However, I may just be remembering a little wrong - I can't remember how bad the performance penalty is, just that it adds to the bytes needed for each instruction.

-[Unknown]