November 24, 2005
Re: Var Types
A thought occurs! (to quote Zapp Brannigan)

How about a type called "register" which is guaranteed to be the platform's
register size? Note the similarity with C's "register" keyword (which is a
storage-class specifier):

for (register t = 0; t < 99; ++t) { }

What guarantees can be made with respect to a register's size?
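For what it's worth, something close to this can be approximated today with an
alias; ptrdiff_t is pointer-sized, which on most platforms happens to match the
general-register width (that's an assumption, not a language guarantee):

// Sketch only: ptrdiff_t tracks pointer width, which usually --
// but not provably -- equals the platform's register size.
alias ptrdiff_t register_t;

void main()
{
    for (register_t t = 0; t < 99; ++t) { }
}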

L.
November 24, 2005
D blew it, was Re: Var Types
As a long-time C programmer, I still think D made a mistake in "fixing" int
size to 32 bits for a compiled language that is supposed to be compatible with
C. We are losing the abstraction that C was designed with.

On the other hand, you absolutely need uint8, int32, etc. types for
communicating with other programs, machines, etc., but these types should only
be used at the point of interface.
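For instance (a hypothetical sketch; the struct and field names are invented,
and D spells the fixed-width types ubyte/ushort/uint rather than uint8/int32):

// An interface point is exactly where fixed widths belong:
struct WireHeader
{
    ubyte  kind;    // exactly 8 bits on every platform
    ushort length;  // exactly 16 bits
    uint   crc;     // exactly 32 bits
}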
====================================================
I will quote Paul Hsieh:
"
Misconception: Using smaller data types is faster than larger ones 

The original reason int was put into the C language was so that the fastest data
type on each platform remained abstracted away from the programmer himself. On
modern 32 and 64 bit platforms, small data types like chars and shorts actually
incur extra overhead when converting to and from the default machine word sized
data type. 

On the other hand, one must be wary of cache usage. Using packed data (and in
this vein, small structure fields) for large data objects may pay larger
dividends in global cache coherence, than local algorithmic optimization issues.
"
November 24, 2005
Re: D blew it , was Re: Var Types
In article <dm4eja$9hi$1@digitaldaemon.com>, Mark T says...
>
>As a long-time C programmer, I still think D made a mistake in "fixing" int
>size to 32 bits for a compiled language that is supposed to be compatible with
>C. We are losing the abstraction that C was designed with.
>
>[snip]

As a long-time C programmer, I don't agree with that. I prefer having fixed
types that I can alias myself if I need to. I have found that porting C code
and having my basic types change size is a real headache. I prefer knowing and
specifying these instances myself. If you want a generic platform-dependent
integer, alias one. It's that simple.
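For example (a sketch; "nativeint" is a made-up name), one line does it in D:

alias int nativeint;  // pick the width that suits the target;
                      // change this one alias when porting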
November 24, 2005
Re: D blew it , was Re: Var Types
But surely as a long-time C programmer you've also realised that 
portability is usually a far bigger issue than speed. You know what I 
mean: usually there are a couple of sections of code that need to be 
done up real tight. These often end up with assembly language in them 
etc., and are always unportable. That's OK; you just fix those 
non-portable bits when you move platform. What is a pain, though, is 
having your *main* integer type in non-portable form, so that you risk 
randomly breaking stuff all over your code when you move it onto, say, 
a 16-bit microcontroller.

I can understand C being the way round that it is: CPU cycles were 
somewhat scarcer 30 years ago, and portability was little thought of. 
Nowadays, though, portability is a big issue. You can still use "alias" to 
define a "fastint" type if you need to, but it's the exception rather 
than the rule. Would I like "fastint" (or something similar) to be built in 
to D, so that I didn't have to make an alias? Sure. If I had to choose 
between the D way and the C way though, I'd choose the D way every time. 
I rarely have to worry about the speed impact of using the wrong-sized 
integer type, but portability and networking/serialisation issues come up 
on virtually everything I've worked on recently.

Times have changed.

Just my tuppence

Munch

Mark T wrote:
> As a long-time C programmer, I still think D made a mistake in "fixing" int
> size to 32 bits for a compiled language that is supposed to be compatible with
> C. We are losing the abstraction that C was designed with.
> 
> On the other hand, you absolutely need uint8, int32, etc. types for
> communicating with other programs, machines, etc., but these types should only
> be used at the point of interface.
> [snip]
November 24, 2005
Re: D blew it , was Re: Var Types
In article <dm4eja$9hi$1@digitaldaemon.com>, Mark T says...
>
>As a long-time C programmer, I still think D made a mistake in "fixing" int
>size to 32 bits for a compiled language that is supposed to be compatible with
>C. We are losing the abstraction that C was designed with.
>
>[snip]

mmmmhhh mmhhm mmmmmhhhhhhhh (muzzled) wish I could speak about this :D

Tom
November 24, 2005
Re: Var Types
Don Clugston wrote:
> Georg Wrede wrote:
> 
>> Anybody remember the IBM boss? Or the Intel boss? Or Bill? They all
>>  said "xxx ought to be plenty enough forever".
> 
> With respect, Bill was an idiot, and it was obvious at the time.

<joke mode="I just couldn't resist. Don't answer these!">

With respect to me or to Bill?   ;-)

"was obvious at the time": Bill being an idiot, or xxx being plenty?

</joke>

>> It is also conceivable (does somebody here know for a fact?) that most
>> of those modern 64-bit PCs actually use a 32-bit data bus.
> 
> Actually, many of the 32-bit Pentiums use a 64-bit or 128-bit data
> bus!

Ah, thanks!

>> In the eighties, it was common knowledge that we wouldn't have hard
>> disks any more by the time a hard disk went beyond gigabyte size. It
>> was supposed to be physically impossible.
> 
> Those arguments were not based on physics. They were based on 
> assumptions about manufacturing technology.

True. They were advertised as being based on physics, though. And, more 
to the point, on the inability to envision the advances in theoretical 
physics that are needed in today's magnetic storage technology.

> The technological changes required for a 128-bit address bus are so 
> huge that the change in size_t would be irrelevant.

I wouldn't skip size_t on that assumption. :-)

Besides, there'll be smaller computers in the future too, like in 
gadgets. -- Aahh, and size_t is needed for the virtual address space, 
not the physical.

> To have 2^128 addressable bytes of RAM, you would need to store more
> than 1 bit per atom. This means quantum computers would already be
> well developed.

True. The address bus will probably get wider in the future at about the 
same rate the total memory of computers has grown historically. (What's 
that? Without googling around, I'd guess about 1 bit per year, 
effectively doubling the RAM size each year.)

The data bus might grow faster, though. Imagine being able to fetch a 
kilobyte at a time! (Hmmm, this issue gets totally f***ed up with level 
this or level that cache being on-chip, so forget the whole thing!)

> Here's how I see it:
> 
> High probability: Multi-CPU systems with a huge number of cores. Cures
> for most cancers.

Agreed.

> Medium probability: Quantum computers. Cures for all cancers. 

Disagreed. IMHO they'll go the way bubble memories went. And expert 
systems with AI. And Prolog.

> Colonisation of other planets.

:-)  Yeah, I guess it's inevitable, knowing our genes! After all, we are 
incurable Imperialists.

> Low probability: 128 bit physical address buses.
> 
> :-)

I agree!

> Sure, maybe we'll reach the end of that list. But the ones in the
> middle will have more impact (even on programmers!) than the last
> one.
> 
> This is not something D needs to worry about. But thousand-core CPUs?
> Definitely.

Seriously, I think there's one category we forgot. A lot of buzz has 
lately been heard about using graphics processors for math and 
data processing. They do a decent job, especially when the operations are 
simple and apply to multiple data. (Hey, game graphics isn't much else!)

I'd venture a guess that very soon we'll see expansion cards where an 
ATI or Nvidia chip exists just for coprocessing, i.e. without a monitor 
plug. Then it won't be long before they're right on the motherboard.

Damn, I forgot the link; I recently visited a site dedicated to this.
And one can already use the existing graphics processor, via their library
drivers.

That's parallel processing for the masses. And I honestly think a D 
library for such would give some nice brag value to D.
November 24, 2005
Re: D blew it , was Re: Var Types
Munchgreeble wrote:
> Nowadays, though, portability is a big issue. You can still use "alias"
> to define a "fastint" type if you need to, but it's the exception
> rather than the rule. Would I like "fastint" (or something similar) to
> be built in to D, so that I didn't have to make an alias? Sure. If I
> had to choose between the D way and the C way though, I'd choose the D
> way every time. I rarely have to worry about the speed impact of using
> the wrong-sized integer type, but portability and
> networking/serialisation issues come up on virtually everything I've
> worked on recently.

Interestingly, the different types for different CPUs could (and IMHO
should) be aliased in a library. No need to either define this in the
language or have Walter spend time coding it into DMD.

Or in the <your_company_here> import files.
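A minimal sketch of what such a library module could look like (the module
name, type names, and version identifiers are illustrative; D1 syntax):

// sysint.d -- per-platform integer aliases, kept in one place
version (X86)
{
    alias int  fastint;   // 32-bit general registers
    alias uint fastuint;
}
else version (X86_64)
{
    alias long  fastint;  // 64-bit general registers
    alias ulong fastuint;
}
else
{
    static assert(0);     // add an alias for this platform
}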
November 24, 2005
Re: D blew it , was Re: Var Types
> ====================================================
> I will quote Paul Hsieh:
> "
> Misconception: Using smaller data types is faster than larger ones 
> 
> The original reason int was put into the C language was so that the fastest data
> type on each platform remained abstracted away from the programmer himself. On
> modern 32 and 64 bit platforms, small data types like chars and shorts actually
> incur extra overhead when converting to and from the default machine word sized
> data type.

Yes, this was true when we went from 16 to 32 bits, but I don't think 
it holds for the 32 to 64 bit transition. AMD realised that almost all 
useful integers fit into small sizes. AFAIK, 32-bit operations are still 
as fast as any on AMD64, and they have the important benefit of requiring 
less storage and less bus bandwidth.

Consequently, most C++ compilers for AMD64 still keep int = 32 bits.
(Most of them even keep long = 32 bits!)

I've noticed that in the 16-bit days, my code was full of longs.
65535 is just too small; it was really annoying. But now that I have 
32-bit ints, there's not much need for anything bigger.

In the mid-'80s I used a Pascal compiler that had 24-bit ints. They were 
always big enough, too. But even back then, 16 bits was not enough.

32 bits is a really useful size. I think D's got it right.
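And D spells that guarantee out; a compile-time check like this passes on any
conforming D compiler:

static assert(int.sizeof  == 4);   // 32 bits, by definition
static assert(long.sizeof == 8);   // 64 bits, by definition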
November 24, 2005
Re: D blew it , was Re: Var Types
In article <4385DCF1.3060100@nospam.org>, Georg Wrede says...
>
>Munchgreeble wrote:
>> [snip]
>
>Interestingly, the different types for different CPUs could (and IMHO
>should) be aliased in a library. No need to either define this in the
>language or have Walter spend time coding it into DMD.
>
>Or in the <your_company_here> import files.


EXACTLY! And the real bottom line is that any professional who seriously needs
this functionality will already have their own type aliases worked out. It's
actually a benefit to start from a language that is not ambiguous about its
types. I don't want my language changing on a particular platform; I will
handle that myself, thank you.
November 24, 2005
Re: D blew it , was Re: Var Types
In article <dm4n1l$ikm$1@digitaldaemon.com>, Don Clugston says...
>
>> [snip]
>
>Yes, this was true when we went from 16 to 32 bits, but I don't think 
>it holds for the 32 to 64 bit transition. AMD realised that almost all 
>useful integers fit into small sizes. AFAIK, 32-bit operations are still 
>as fast as any on AMD64, and they have the important benefit of requiring 
>less storage and less bus bandwidth.
>
>Consequently, most C++ compilers for AMD64 still keep int = 32 bits.
>(Most of them even keep long = 32 bits!)
>
>I've noticed that in the 16-bit days, my code was full of longs.
>65535 is just too small; it was really annoying. But now that I have 
>32-bit ints, there's not much need for anything bigger.
>
>In the mid-'80s I used a Pascal compiler that had 24-bit ints. They were 
>always big enough, too. But even back then, 16 bits was not enough.
>
>32 bits is a really useful size. I think D's got it right.

Will this be true on 128-bit CPUs?

AMD64 (x86-64) is a compromise design, because the pure 64-bit CPUs such as the
Alpha and Itanium could not run 32-bit x86 code efficiently. The game boxes,
other dedicated embedded systems, etc., don't have to make this compromise and
have used other CPUs (I agree that for PCs, x86-64 is the proper migration
path). I think many of the posters in this thread bring only a PC perspective
to their arguments. C has survived a long time on many CPUs since its PDP-11
beginnings; I wonder what K&R would have to say on this topic.