November 22, 2005 Re: Var Types
Posted in reply to xs0

xs0 wrote:
> Tomás Rossi wrote:
>
> xs0
I think you're mixing up address bus and data bus widths. When people talk about a 64-bit machine, they're talking about the size of the address bus. That's what affects how much RAM you can address. The data bus is completely independent of this. For example, IIRC the now somewhat dated N64 had a 64-bit address bus and a 256-bit data bus.
It's the data bus that affects the size of the ints you need, not the address bus. The address bus affects how wide your pointer types are.
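A minimal D sketch of that distinction (my only assumption here is a current D1-era compiler; size_t is the pointer-sized alias mentioned elsewhere in this thread):

    import std.stdio;

    void main()
    {
        // size_t follows the pointer/address width; D's int is fixed at 32 bits.
        // A 32-bit build prints "4 4"; a 64-bit build would print "8 4".
        writefln("%s %s", size_t.sizeof, int.sizeof);
    }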
Caveat: I'm no hardware expert ;-)
I hope this helps =)
Munch
November 22, 2005 Re: Var Types
Posted in reply to Tomás Rossi

Tomás Rossi wrote:
> In article <dlv33k$1o7u$1@digitaldaemon.com>, xs0 says...
>
>>>>What happens to our short, int, long language types when 256-bit processors come along? We'd find it hard to address a 16-bit integer in that system limited to only three type names.
>>>
>>>Exactly, what would happen? Would "we" have to engineer another language, would it be D v2 :P? Certainly platform-dependent integral types are THE choice. Aliases of the type intXXX would be necessary always.
>>
>>I think you guys are exaggerating the problem. Even 64-bit CPUs were developed (afaik) mainly because of the need to cleanly address more than 4GB of RAM, not because there's some overwhelming need for 64-bit calculations. Considering how much RAM/disk/whatever 2^64 is, I don't think anyone will need a CPU that is 128-bit, let alone 256-bit any time soon (and even if developed because of marketing purposes, I see no reason to use 32-byte variables to have loop counters from 0 to 99).
>
> The same thing was said by some people about 32-bit machines before those were developed, and now we have 64-bit CPUs. Plus, I'm sure 128/256-bit CPUs already exist nowadays, maybe not for home PCs, but who says D only has to run on home computers? For example, the PlayStation 2 platform is built upon a 128-bit CPU!

Oh, come on. The reason for 64-bit registers is purely practical: computer programs need more memory (4GB limit -> 170_000_000_000GB). Another reason is that most systems have Y2038 problems:

http://en.wikipedia.org/wiki/Year_2038_problem

Current technology road maps say that memory capacity will follow Moore's law for another 20-30 years. The current D specification even supports 128-bit registers. It means that D programs will work for at least another 60 years without any performance issues! The 64-bit time fields won't wrap around until the world explodes!

I don't know about the PS2, but I believe it needs 128-bit registers to achieve bigger bandwidth. I don't think this concerns you unless you're writing a fast memcpy(). Usually that code is done in assembly => you don't need high-level 128-bit ints.

>>Now, if you have a working app on a 32-bit platform and you move it to a 64-bit platform, is it any help if int becomes 64 bit? No, because if it was big enough before, it's big enough now (with the notable exception of memory locations and sizes, which are taken care of with size_t and ptrdiff_t). Does it hurt? It sure can, as sizes of objects all over the place will change, breaking any interface to outside-the-app.
>
> I can't agree with this. You port an app to 64-bit and rebuild it as a 64-bit edition, not necessarily expecting it to work interfacing against a 32-bit version. Besides, why are you so sure that moving to 64 bits won't be much of a gain? "If it was big enough before, it's big enough now"???? Be careful: your ported app will still work, but it'll take no benefit of the upgraded processor! If your app focus on std int performance to work better, this is much of a problem.

He means that now you can port your applications without any modifications (maybe not at the highest performance, but it works at least - actually smart compilers will optimize your code to 64 bits anyway). If you explicitly want to make high-performance 64-bit code, use compile-time logic structures and aliases. Besides, with machine-dependent ints you would need to check your code in either case.
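A sketch of that "compile-time logic and aliases" idea (the name fastint is invented for illustration; the version identifiers are the same ones Derek uses later in this thread):

    version (X86_64)
        alias long fastint;   // the native 64-bit word on 64-bit targets
    else
        alias int fastint;    // 32-bit fallback everywhere else

    // Code written against fastint picks up the wider type automatically
    // when rebuilt for a 64-bit target.
    fastint sum(fastint[] a)
    {
        fastint s = 0;
        foreach (x; a)
            s += x;
        return s;
    }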
November 22, 2005 Re: Var Types
Posted in reply to xs0

In article <dlvfpq$2410$1@digitaldaemon.com>, xs0 says...
>
> Tomás Rossi wrote:
>
>>>I think you guys are exaggerating the problem. Even 64-bit CPUs were developed (afaik) mainly because of the need to cleanly address more than 4GB of RAM, not because there's some overwhelming need for 64-bit calculations. Considering how much RAM/disk/whatever 2^64 is, I don't think anyone will need a CPU that is 128-bit, let alone 256-bit any time soon (and even if developed because of marketing purposes, I see no reason to use 32-byte variables to have loop counters from 0 to 99).
>>
>> The same thing was said by some people about 32-bit machines before those were developed, and now we have 64-bit CPUs.
>
> Well, sure, people always make mistakes, but can you think of any application anyone will develop in the next 30 years that will need more than 17,179,869,184 GB of RAM (or 512x that of disk)? Older limits, like the 1MB of the 8086 or the 4GB of the 80386, were somewhat easier to reach, I think :) I mean, even if both needs and technology double each year (and I think it's safe to say that they increase more slowly), it will take over 30 years to reach that...
>
>> Plus, I'm sure 128/256-bit CPUs already exist nowadays, maybe not for home PCs, but who says D only has to run on home computers? For example, the PlayStation 2 platform is built upon a 128-bit CPU!
>
> Well, from what I can gather from http://arstechnica.com/reviews/hardware/ee.ars/ the PS2 is actually 64-bit; what is 128-bit are the SIMD instructions (which actually work on at most 32-bit vals) and some of the internal buses..

Ok, you're right, didn't know.

> My point was that there's not much need for operating with values over 64 bits, so I don't see the transition to 128 bits happening soon (again, I'm referring to single data items; bus widths, vectorized instructions' widths etc. are a different story, but one that is not relevant to our discussion)

Maybe not too soon, but someday it'll happen, that's the thing. When writing a program, you code with the future in mind, but not so distant a future (you know it'll work on 64 bits with minor changes, ideally, but don't care about the 128-bit possibility). But when designing a language that could be implemented on so many CPU architectures, you should take that fact into account. Again, maybe not soon, but someday it'll happen, and what would be the solution?

>>>Now, if you have a working app on a 32-bit platform and you move it to a 64-bit platform, is it any help if int becomes 64 bit? No, because if it was big enough before, it's big enough now (with the notable exception of memory locations and sizes, which are taken care of with size_t and ptrdiff_t). Does it hurt? It sure can, as sizes of objects all over the place will change, breaking any interface to outside-the-app.
>>
>> I can't agree with this. You port an app to 64-bit and rebuild it as a 64-bit edition, not necessarily expecting it to work interfacing against a 32-bit version. Besides, why are you so sure that moving to 64 bits won't be much of a gain?
>
> The biggest gain I see in 64 bits is, like I said, the ability to handle more memory, which naturally improves performance for some types of applications, like databases. I don't see much performance gain in general, because there aren't many quantities that require a 64-bit representation in the first place. Even if 64-bit ops are 50x faster on a 64-bit CPU than on a 32-bit CPU, they are very rare (at least in my experience), so the gain is small. Also note that you don't gain any speed by simply making your variables bigger, if that's all you do..

Not sure, but I think there are some integral operations that could be done in half the time (doing some magic at least). It's been a long time since I coded in assembly, so I can't recall an example right now.

>> "If it was big enough before, it's big enough now"???? Be careful: your ported app will still work, but it'll take no benefit of the upgraded processor!
>
> Why not? It will be able to use more RAM, and operations involving longs will be faster. Are there any other benefits a 64-bit architecture provides?

I haven't time now, have to go; later I'll continue with these ones. :)

>> If your app focus on std int performance to work better, this is much of a problem.
>
> I don't understand that sentence, sorry :)

Again

Tom
November 22, 2005 Re: Var Types
Posted in reply to Jari-Matti Mäkelä

In article <dlvi6c$278u$1@digitaldaemon.com>, Jari-Matti Mäkelä says...
>
> Tomás Rossi wrote:
>> In article <dlv33k$1o7u$1@digitaldaemon.com>, xs0 says...
>>
>>>>>What happens to our short, int, long language types when 256-bit processors come along? We'd find it hard to address a 16-bit integer in that system limited to only three type names.
>>>>
>>>>Exactly, what would happen? Would "we" have to engineer another language, would it be D v2 :P? Certainly platform-dependent integral types are THE choice. Aliases of the type intXXX would be necessary always.
>>>
>>>I think you guys are exaggerating the problem. Even 64-bit CPUs were developed (afaik) mainly because of the need to cleanly address more than 4GB of RAM, not because there's some overwhelming need for 64-bit calculations. Considering how much RAM/disk/whatever 2^64 is, I don't think anyone will need a CPU that is 128-bit, let alone 256-bit any time soon (and even if developed because of marketing purposes, I see no reason to use 32-byte variables to have loop counters from 0 to 99).
>>
>> The same thing was said by some people about 32-bit machines before those were developed, and now we have 64-bit CPUs. Plus, I'm sure 128/256-bit CPUs already exist nowadays, maybe not for home PCs, but who says D only has to run on home computers? For example, the PlayStation 2 platform is built upon a 128-bit CPU!
>
> Oh, come on. The reason for 64-bit registers is purely practical: computer programs need more memory (4GB limit -> 170_000_000_000GB). Another reason is that most systems have Y2038 problems:
>
> http://en.wikipedia.org/wiki/Year_2038_problem
>
> Current technology road maps say that memory capacity will follow Moore's law for another 20-30 years. The current D specification even supports 128-bit registers. It means that D programs will work for at least another 60 years without any performance issues! The 64-bit time fields won't wrap around until the world explodes!
>
> I don't know about the PS2, but I believe it needs 128-bit registers to achieve bigger bandwidth. I don't think this concerns you unless you're writing a fast memcpy(). Usually that code is done in assembly => you don't need high-level 128-bit ints.
>
>>>Now, if you have a working app on a 32-bit platform and you move it to a 64-bit platform, is it any help if int becomes 64 bit? No, because if it was big enough before, it's big enough now (with the notable exception of memory locations and sizes, which are taken care of with size_t and ptrdiff_t). Does it hurt? It sure can, as sizes of objects all over the place will change, breaking any interface to outside-the-app.
>>
>> I can't agree with this. You port an app to 64-bit and rebuild it as a 64-bit edition, not necessarily expecting it to work interfacing against a 32-bit version. Besides, why are you so sure that moving to 64 bits won't be much of a gain? "If it was big enough before, it's big enough now"???? Be careful: your ported app will still work, but it'll take no benefit of the upgraded processor! If your app focus on std int performance to work better, this is much of a problem.
>
> He means that now you can port your applications without any modifications (maybe not at the highest performance, but it works at least - actually smart compilers will optimize your code to 64 bits anyway). If you explicitly want to make high-performance 64-bit code, use compile-time logic structures and aliases. Besides, with machine-dependent ints you would need to check your code in either case.

I know what he meant, and I know you can do all the alias stuff (you should read the other posts). However, even though I could agree with you (and xs0), I like the platform-dependent approach because IMO it makes the language more platform independent and more independent from the current computation model. I understand it's impossible to make a language that suits everybody's taste perfectly.

Regards
Tom
November 22, 2005 Re: Var Types
Posted in reply to Tomás Rossi

Tomás Rossi wrote:
>> He means that now you can port your applications without any modifications (maybe not at the highest performance, but it works at least - actually smart compilers will optimize your code to 64 bits anyway). If you explicitly want to make high-performance 64-bit code, use compile-time logic structures and aliases. Besides, with machine-dependent ints you would need to check your code in either case.
>
> I know what he meant, and I know you can do all the alias stuff (you should read the other posts). However, even though I could agree with you (and xs0), I like the platform-dependent approach because IMO it makes the language more platform independent and more independent from the current computation model.

For what it's worth, the C99 stdint header is available here:

http://svn.dsource.org/projects/ares/trunk/src/ares/std/c/stdint.d

It might serve as a good starting point for someone looking to experiment with platform-dependent types and such.

Sean
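For anyone who hasn't seen the C99 header, a rough sketch of the kind of aliases it declares (these lines are assumptions based on the C99 names, not copied from the Ares file):

    alias byte  int8_t;    // exactly 8 bits
    alias short int16_t;   // exactly 16 bits
    alias int   int32_t;   // exactly 32 bits
    alias long  int64_t;   // exactly 64 bits

    // C99 also names "fastest type at least this wide", which is where
    // the platform dependence comes back in:
    alias int int_fast16_t;
    alias int int_fast32_t;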
November 23, 2005 Re: Var Types
Posted in reply to Lionello Lunesu

Lionello Lunesu wrote:
>> std.stdint contains aliases for those
>
> That's the wrong way around if you'd ask me. I would also like decorated types, eventually aliased to the C int/short/long (with 'int' being the platform's default).
>
> L.

I'm not saying it's the right or wrong way. It's just that every once in a while people come and ask for exactly the same things: int8, fast_int, etc., and while D doesn't have them as proper types, at least they exist as aliases in the standard library, so they're guaranteed to exist. And even with sizes, as others have mentioned, D's type system is all about fixed sizes, so maybe you don't get the names you want, but you get the functionality.

--
Carlos Santander Bernal
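For example (assuming std.stdint follows the C99 naming, which I believe it does):

    import std.stdint;

    int32_t crc;     // always 32 bits, whatever the platform
    int64_t offset;  // always 64 bits, whatever the platform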
November 23, 2005 Re: Var Types
Posted in reply to Carlos Santander

> And even with sizes, as others have mentioned, D's type system is all about fixed sizes, so maybe you don't get the names you want, but you get the functionality.
Good point.
But there's one reason this doesn't quite add up. Most of the time I don't care what 'int' I use. I use 'int' for any number: in for-loops, in simple structs, in interfaces. Most of the time the size of the int doesn't matter; I just want a number. Only when serializing to disk or network do I want fixed-size types.
It's nice that D fixes the size of an int, but it's not really helping me. On another platform the registers might not be 32 bits wide, and then all the numbers I don't care about will generate excessive code, since D forces them to 32 bits.
It's "the wrong way around" because D makes me create and use an alias for the cases I didn't care about. Now I have to care about those cases for portability.
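A small sketch of that split (the struct and names are invented for illustration): fixed widths only where the byte layout leaves the program, a plain number everywhere else.

    // The on-disk/wire layout must not drift between platforms:
    struct FileHeader
    {
        uint  magic;    // exactly 32 bits in the on-disk format
        ulong length;   // exactly 64 bits in the on-disk format
    }

    void work()
    {
        // Here any reasonable "number" would do; the width is incidental.
        for (int i = 0; i < 100; i++)
        {
            // ...
        }
    }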
L.
November 23, 2005 Re: Var Types
Posted in reply to Tomás Rossi

Tomás Rossi wrote:
> In article <dlv33k$1o7u$1@digitaldaemon.com>, xs0 says...
>
>>>>What happens to our short, int, long language types when 256-bit processors come along? We'd find it hard to address a 16-bit integer in that system limited to only three type names.
>>>
>>>
>>>Exactly, what would happen? Would "we" have to engineer another language, would
>>>it be D v2 :P? Certainly platform-dependent integral types are THE choice.
>>>Aliases of the type intXXX would be necessary always.
>>
>>I think you guys are exaggerating the problem. Even 64-bit CPUs were developed (afaik) mainly because of the need to cleanly address more than 4GB of RAM, not because there's some overwhelming need for 64-bit calculations. Considering how much RAM/disk/whatever 2^64 is, I don't think anyone will need a CPU that is 128-bit, let alone 256-bit any time soon (and even if developed because of marketing purposes, I see no reason to use 32-byte variables to have loop counters from 0 to 99).
>
>
> The same thing was said by some people about 32-bit machines before those were
> developed, and now we have 64-bit CPUs. Plus, I'm sure 128/256-bit CPUs already
> exist nowadays, maybe not for home PCs, but who says D only has to run
> on home computers? For example, the PlayStation 2 platform is built upon a
> 128-bit CPU!
No, it's a fundamentally different situation. We're running up against the laws of physics. 2^64 is a fantastically large number.
(a) Address buses.
If you could store one bit of RAM per silicon atom, a memory chip big enough to require 65 bit addressing would be one cubic centimetre in size. Consider that existing memory chips are only 2D, and you need wiring to connect to each bit. Even if the cooling issues are overcome, it's really hard to imagine that happening. A memory chip big enough to require 129 bit addressing would be larger than the planet.
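A quick back-of-envelope check of that figure (the numbers are rough assumptions on my part: about 5e22 silicon atoms per cubic centimetre, one bit per atom, byte-addressable memory):

    import std.stdio;

    void main()
    {
        double bits  = 18446744073709551616.0 * 8; // 2^64 bytes, in bits
        double atoms = 5.0e22;                     // silicon atoms per cm^3, roughly
        // Prints about 0.003 - that's the raw bit storage alone, before any
        // wiring, addressing logic or cooling is added on top.
        writefln("%g cm^3", bits / atoms);
    }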
The point is that we're approaching Star Trek territory. The Enterprise computer probably only has a 128-bit address bus.
Many people think that doubling the number of bits is exponential growth. It's not: adding just one more bit is already exponential growth. Doubling the bit count is exp(exp(x)), which is a frighteningly fast-growing function. Faster than a factorial!
(b) Data buses
I began programming with 8-bit data registers. That was limiting; almost all useful numbers are greater than 256.
16 bits was better, but still many quantities are > 65536. But almost everything useful fits into a 32-bit register. 32 bits really is a natural size for an integer.
The % of applications where each of these is inadequate is decreasing exponentially. Very few applications need 128-bit integers.
But almost everything can use increased parallelism, hence 128-bit SIMD instructions. But they're still only using 32-bit integers.
I think that even the 60-year timeframe for 128-bit address buses is a bit optimistic. (But I think we will see 1024-core processors long before then. Maybe even by 2015. And we'll see 64K-core systems.)
November 23, 2005 Re: Var Types
Posted in reply to Tomás Rossi

Tomás Rossi wrote:
> In article <dcr6iol0nzuz.ovrmy3qsc18d.dlg@40tude.net>, Derek Parnell says...
>
>>On Tue, 22 Nov 2005 02:24:19 +0000 (UTC), Tomás Rossi wrote:
>>
>>[snip]
>>
>>
>>>But with the actual D approach, say I have a very efficient D app (in which
>>>performance depends on the most efficient integer manipulation for the current
>>>CPU), written originally with the int data type (because it was conceived for a
>>>32-bit system). When I port it to a 64-bit system, I'll have to make changes (say
>>>replacing int with long) to take advantage of the more powerful CPU.
>>
>>Yes, you are right. In D, 'int' always means 32 bits regardless of the
>>architecture running the application. So if you port it to a different
>>architecture *and* you want to take advantage of the longer integer then
>>you will have to change 'int' to 'long'. Otherwise use aliases of your own
>>making ...
>>
>>version(X86) {
>> alias int stdint;
>> alias long longint;
>>}
>>
>>version(X86_64) {
>> alias long stdint;
>> alias cent longint;
>>}
>>
>>longint foo(stdint A) {
>> return cast(longint)A * cast(longint)A + cast(longint)1;
>>}
>
>
> So, what are the downsides of the platform-dependent integer types?
> Currently, applying your above workaround (which is almost a MUST from now on),
> the downsides are very clear: developers will have to do this in most projects,
> because 64-bit systems are a reality these days and 32-bit ones are rapidly
> falling behind. Plus the ugliness of having to use stdint everywhere you would
> use int, and the obvious consequences of type obfuscation due to the alias.
>
> Tom
The downside is that most programming is tailored to a task, not to a machine. Engineering of all types is based on knowing the capabilities of the raw materials we are working with. As developers we *have* to know the bounds of the data types we are working with.
There are very few places in most software where 'the fastest integer possible' is needed. And in those cases you would be better off with an alias anyways.
So far everything you have asked for is possible within the language, but you are asking for a change which requires hackery for *all the other cases*.
-DavidM
November 23, 2005 Re: Var Types
Posted in reply to Don Clugston

Don Clugston wrote:
> Tomás Rossi wrote:
>> In article <dlv33k$1o7u$1@digitaldaemon.com>, xs0 says...
>>
>>>>> What happens to our short, int, long language types when 256-bit processors come along? We'd find it hard to address a 16-bit integer in that system limited to only three type names.
>>>>
>>>> Exactly, what would happen? Would "we" have to engineer another language, would it be D v2 :P? Certainly platform-dependent integral types are THE choice. Aliases of the type intXXX would be necessary always.
>>>
>>> I think you guys are exaggerating the problem. Even 64-bit CPUs were developed (afaik) mainly because of the need to cleanly address more than 4GB of RAM, not because there's some overwhelming need for 64-bit calculations. Considering how much RAM/disk/whatever 2^64 is, I don't think anyone will need a CPU that is 128-bit, let alone 256-bit any time soon (and even if developed because of marketing purposes, I see no reason to use 32-byte variables to have loop counters from 0 to 99).

(Hmm, so for(byte b=0; b<99; b++) is what one writes today?) ;-)

One thing that comes to mind is cryptography. Doing serious encrypting on the fly would benefit from having, say, 1024-bit processors. Oh yes, and the NSA and other spooks really need double the width that everyone else has. This is a law of nature. :-)

I remember reading about a graphics card that had a 256-bit CPU. This was so long ago that I think it's on the market already.

>> The same thing was said by some people about 32-bit machines before those were developed, and now we have 64-bit CPUs. Plus, I'm sure 128/256-bit CPUs already exist nowadays, maybe not for home PCs, but who says D only has to run on home computers? For example, the PlayStation 2 platform is built upon a 128-bit CPU!

Anybody remember the IBM boss? Or the Intel boss? Or Bill? They all said "xxx ought to be plenty enough forever".

> No, it's a fundamentally different situation. We're running up against the laws of physics. 2^64 is a fantastically large number.
>
> (a) Address buses. If you could store one bit of RAM per silicon atom, a memory chip big enough to require 65-bit addressing would be one cubic centimetre in size. Consider that existing memory chips are only 2D, and you need wiring to connect to each bit. Even if the cooling issues are overcome, it's really hard to imagine that happening. A memory chip big enough to require 129-bit addressing would be larger than the planet.
>
> The point is that we're approaching Star Trek territory. The Enterprise computer probably only has a 128-bit address bus.
>
> Many people think that doubling the number of bits is exponential growth. It's not: adding just one more bit is already exponential growth. Doubling the bit count is exp(exp(x)), which is a frighteningly fast-growing function. Faster than a factorial!
>
> (b) Data buses
>
> I began programming with 8-bit data registers. That was limiting; almost all useful numbers are greater than 256. 16 bits was better, but still many quantities are > 65536. But almost everything useful fits into a 32-bit register. 32 bits really is a natural size for an integer. The % of applications where each of these is inadequate is decreasing exponentially. Very few applications need 128-bit integers.
>
> But almost everything can use increased parallelism, hence 128-bit SIMD instructions. But they're still only using 32-bit integers.
>
> I think that even the 60-year timeframe for 128-bit address buses is a bit optimistic. (But I think we will see 1024-core processors long before then. Maybe even by 2015. And we'll see 64K-core systems.)

Depends. I started programming on an HP handheld. It had a 4-bit CPU. (Yes, four bits.) Its address bus was wider, though. Programming it was done in assembly, although they never said so in the manual, probably so as not to frighten folks away.

My next assembly I wrote on the 6502, which was an 8-bit CPU. The address bus was 16 bits. Then I went on to the PC, which was touted as a 16-bit machine. True, the 8086 was 16 bits, but because that needed an expensive motherboard and memory hardware, a cheaper version was built for the masses, the 8088, so we had 16-bit PCs with an 8-bit data bus. Slow, yes, but cheaper. But still 16-bit. (The software never knew.)

Currently (I believe) none of the 64-bit CPUs actually have address buses that are 64 bits wide. Nobody is the wiser, but when you go to the PC vendor and ask how much memory one can put in this or that 64-bit PC, the usual answer is something like "16GB". It is also conceivable (does somebody here know the fact?) that most of those modern 64-bit PCs actually use a 32-bit data bus.

So, historically, the data bus, the address bus, and the accumulator (where integer math is done, and the width of which is often taken to be the "width of the CPU") have usually not all had the same width, although folks seem to believe so.

---

What we do need, however, is a virtual address space that is large enough to accommodate the most demanding applications and data. This makes writing software (and especially operating systems and compilers) a lot easier, because we then don't have to construct kludges for the situations where we bang into the end of the memory range. (This is a separate issue from running out of memory.)

---

The one thing that guarantees us a 256-bit CPU on everybody's tabletop eventually has nothing to do with computers per se. It has to do with the structure of Western society, and also with our inborn genes. (What???)

First, the society thing: the world (within the foreseeable future) is based on capitalism. (I'm no commie, so it's fine with me.) This in itself makes vendors compete. And that's good; otherwise we'd still all be driving T-Fords. But this leads to bigger, faster, fancier, cooler, ... ad absurdum. Laws of physics, bah. In the eighties, it was common knowledge that we wouldn't have hard disks by the time a hard disk went beyond gigabyte size; it was supposed to be physically impossible, and would have to be some kind of solid-state tech instead. And now I read about Nokia phones having internal hard disks with multi-gig capacity.

Second, it's in our genes. There's a revealing commercial on my TV: day-care kids on a break. "My mom makes better food than yours." "My mom makes better food than both of your mothers." A teenager walks by and says "My mother makes better food than any of yours." And then this 2-year-old from the other kindergarten says over the fence: "My mother makes the food all your mothers serve." (Think Nestle, Kraft, whatever.) Suits and non-nerds live to brag. "A trillion-bit CPU? Naaa, get out of here!" That day is nearer than Doomsday. I don't even bother to bet on it; it would be like stealing the money.

---

Ever heard "no matter how big your garage, it fills up with crap, and soon your car stays outside"?

Ever heard "it makes no difference how big your hard disk is, it takes the same amount of time before it gets full"?

Ever heard "it makes no difference how fast the PC is, the next Windows puts it on its knees anyhow"?

---

Better computer games, anybody? Who wouldn't like to have a 6-month accurate weather forecast in their pocket the day the boss asks when they'd like to take this year's vacation? We need a good computer model of the human body, so we don't have to kill mice, pigs and apes whenever a new drug is developed. Earthquakes, anybody? Speech recognition that actually works? How about a computer that the lawmakers could ask every time they've invented a new law? They could get an answer on how the new law would _really_ impact citizens, tax revenue, and other laws!