Thread overview: OT (partially): about promotion of integers (Dec 11–17, 2012; posts by eles, H. S. Teoh, Walter Bright, bearophile, deadalnix, Isaac Gouy, SomeDude, jerro, foobar, Araq, d coder, David Piepgrass, Max Samukha, ixid, Timon Gehr, xenon325, evilrat, and Michael)
December 11, 2012
Hello,

 The previous thread, about the int resulting from operations on bytes, raised a question for me that is somewhat linked to a difference between Pascal/Delphi/FPC (please, no flame here) and C/D.

 Basically, as far as I get it, both FPC and C use Integer (call it int, if you like) as a fundamental type. That means, among other things, that this is the preferred type to cast (implicitly) to.

 Now, there is a difference between the int-FPC and the int-C: int-FPC is the *widest* integer type (and it is signed), and all other integral types are subranges of this int-FPC. That is, the unsigned type is simply a subrange of positive numbers, the char type is simply the subrange between -128 and +127, and so on.

 This looks to me like a great advantage, since implicit conversions are always straightforward and simple: everything is first converted to the fundamental (widest) type, the calculation is made (yes, there might be some optimizations made, but this should be handled by the compiler, not by the programmer), and then the final result is obtained.

 Note that this approach, of making unsigned integrals a subrange of the int-FPC, halves the maximum representable unsigned number, since 1 bit is always reserved for the sign (albeit, for unsigned, it is always 0).

 OTOH, the fact that the int-FPC is the widest available makes it very natural as a fundamental type and justifies (I think, without doubt) casting all other types, and the result of an arithmetic operation, to this type. If that result fits in a subrange, then it might get cast back to that subrange (that is, to another integral type).

 In C/D, the problem is that int-C is the fundamental (and preferred for conversion) type, but it is not the widest. So, you have a plethora of implicit promotions.
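
 To make that concrete, here is a minimal sketch (purely illustrative) of what the current rules do with two bytes in D:

void main()
{
    byte a = 100;
    byte b = 27;

    // a + b is promoted to int by the usual C-style rules, so the sum
    // cannot be assigned back to a byte without an explicit cast:
    // byte c = a + b;          // error: cannot implicitly convert int to byte
    byte c = cast(byte)(a + b); // compiles, but the narrowing is now on the user
}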

 Now, the off-topic question: the loss in unsigned range aside (which I find a small price to pay for the gained clarity), is there any other reason (except C compatibility) why D would not implement that model (this is not a suggestion to do it now, I know D is almost ready for prime time, but it is a question), that is, the int-FPC-like model for integral types?

Thank you,

 Eles
December 11, 2012
> Now, the off-topic question: the loss in unsigned range aside (which I find a small price to pay for the gained clarity), is there any other reason (except C compatibility) why D would not implement that model (this is not a suggestion to do it now, I know D is almost ready for prime time, but it is a question), that is, the int-FPC-like model for integral types?
>

Rephrasing all that: it would be as if the fundamental type in D were the widest-integral type, and the unsigned variant of that widest-integral type were dropped.

Then, all operands in an integral operation would first be promoted to this widest-integral, the computation would be made, and then the final result could be demoted back (the compiler is free to optimize it as it wants, but behind the scenes).
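
Spelled out, the intent would be roughly this (a rough sketch of the proposed semantics, with long standing in for the widest-integral and std.conv.to doing the checked demotion; a real compiler would do all of it implicitly):

import std.conv : to;

void main()
{
    byte a = 100;
    byte b = 27;

    // 1. promote both operands to the widest-integral
    long wide = cast(long) a + cast(long) b;

    // 2. demote the result back to the destination subrange, with a check
    byte c = wide.to!byte; // would throw ConvOverflowException if the value did not fit

    // The compiler remains free to use 8- or 32-bit registers internally,
    // as long as the observable result is "as if" computed in the widest type.
}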

December 11, 2012
On 12/11/12 10:20 AM, eles wrote:
> Hello,
>
> The previous thread, about the int resulting from operations on bytes,
> raised a question for me that is somewhat linked to a difference between
> Pascal/Delphi/FPC (please, no flame here) and C/D.
[snip]

There's a lot to be discussed on the issue. A few quick thoughts:

* 32-bit integers are a sweet spot for CPU architectures. There's rarely a provision for 16- or 8-bit operations; the action is at 32- or 64-bit.

* Then, although most 64-bit operations are as fast as 32-bit ones, transporting operands takes twice as much internal bus real estate and sometimes twice as much core real estate (i.e. there are units that do either two 32-bit ops or one 64-bit op).

* The whole reserving a bit and halving the range means extra costs of operating with a basic type.


Andrei
December 11, 2012
> There's a lot to be discussed on the issue. A few quick thoughts:
>
> * 32-bit integers are a sweet spot for CPU architectures. There's rarely a provision for 16- or 8-bit operations; the action is at 32- or 64-bit.

Speed can still be optimized by the compiler, behind the scenes. The approach does not ask the compiler to promote everything to the widest-integral, but to do the job "as if". Currently, the choice of int-C as the fastest-integral instead of the widest-integral moves the burden from the compiler to the user.

> * Then, although most 64-bit operations are as fast as 32-bit ones, transporting operands takes twice as much internal bus real estate and sometimes twice as much core real estate (i.e. there are units that do either two 32-bit ops or one 64-bit op).
>
> * The whole reserving a bit and halving the range means extra costs of operating with a basic type.

Yes, there is a cost. But, as always, there is a balance between advantages and drawbacks. What is preferable: the simplicity of promotion, or a supplementary bit?

Besides, at the end of the day, a half-approach would be to have a widest-signed-integral and a widest-unsigned-integral type and only play with those two.

Eles

December 11, 2012
On 12/11/12 11:29 AM, eles wrote:
>> There's a lot to be discussed on the issue. A few quick thoughts:
>>
>> * 32-bit integers are a sweet spot for CPU architectures. There's
>> rarely a provision for 16- or 8-bit operations; the action is at 32-
>> or 64-bit.
>
> Speed can still be optimized by the compiler, behind the scenes. The
> approach does not ask the compiler to promote everything to the
> widest-integral, but to do the job "as if". Currently, the choice of
> int-C as the fastest-integral instead of the widest-integral moves the
> burden from the compiler to the user.

Agreed. But then that's one of them "sufficiently smart compiler" arguments. http://c2.com/cgi/wiki?SufficientlySmartCompiler

>> * Then, although most 64-bit operations are as fast as 32-bit ones,
>> transporting operands takes twice as much internal bus real estate and
>> sometimes twice as much core real estate (i.e. there are units that do
>> either two 32-bit ops or one 64-bit op).
>>
>> * The whole reserving a bit and halving the range means extra costs of
>> operating with a basic type.
>
> Yes, there is a cost. But, as always, there is a balance between
> advantages and drawbacks. What is preferable: the simplicity of promotion,
> or a supplementary bit?

A direct and natural mapping between language constructs and machine execution is very highly appreciated in the market D is in. I don't see that changing in the foreseeable future.

> Besides, at the end of the day, a half-approach would be to have a
> widest-signed-integral and a widest-unsigned-integral type and only play
> with those two.

D has terrific abstraction capabilities. Leave primitive types alone and define a UDT that implements your desired behavior. You can always implement safe on top of fast but not the other way around.
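
As a rough illustration of such a UDT (a minimal sketch with made-up names, using a plain assert for the range policy):

struct Ranged(long min, long max)
{
    long value;

    this(long v)
    {
        assert(v >= min && v <= max, "value out of range");
        value = v;
    }

    // compute in the widest type, then re-check the subrange on the way out
    Ranged opBinary(string op : "+")(Ranged rhs)
    {
        return Ranged(value + rhs.value);
    }
}

unittest
{
    alias Byte = Ranged!(-128, 127);
    auto c = Byte(100) + Byte(27);     // fine: 127 is still in range
    // auto d = Byte(100) + Byte(50);  // would trip the assert: 150 is out of range
}

Whether the check is an assert, an exception, or a saturating clamp is then a library decision rather than a language one.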


Andrei
December 11, 2012
On Tue, Dec 11, 2012 at 11:35:39AM -0500, Andrei Alexandrescu wrote:
> On 12/11/12 11:29 AM, eles wrote:
> >>There's a lot to be discussed on the issue. A few quick thoughts:
> >>
> >>* 32-bit integers are a sweet spot for CPU architectures. There's rarely a provision for 16- or 8-bit operations; the action is at 32- or 64-bit.
> >
> >Speed can still be optimized by the compiler, behind the scenes. The approach does not ask the compiler to promote everything to the widest-integral, but to do the job "as if". Currently, the choice of int-C as the fastest-integral instead of the widest-integral moves the burden from the compiler to the user.
> 
> Agreed. But then that's one of them "sufficiently smart compiler" arguments. http://c2.com/cgi/wiki?SufficientlySmartCompiler
[...]

A sufficiently smart compiler can solve the halting problem. ;-)


T

-- 
Obviously, some things aren't very obvious.
December 11, 2012
On 12/11/2012 8:22 AM, Andrei Alexandrescu wrote:
> * 32-bit integers are a sweet spot for CPU architectures. There's rarely a
> provision for 16- or 8-bit operations; the action is at 32- or 64-bit.

Requiring integer operations to all be 64 bits would be a heavy burden on 32 bit CPUs.


December 11, 2012
On 12/11/2012 8:35 AM, Andrei Alexandrescu wrote:
>> Besides, at the end of the day, a half-approach would be to have a
>> widest-signed-integral and a widest-unsigned-integral type and only play
>> with those two.

Why stop at 64 bits? Why not have only one integral type, whose precision is whatever is necessary to hold the value? This is quite doable, and has been done.

But at a terrible performance cost.

And, yes, in D you can create your own "BigInt" datatype which exhibits this behavior.
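
For reference, a minimal example of that route with the Phobos BigInt, which trades speed for never wrapping around:

import std.bigint;
import std.stdio;

void main()
{
    long l = long.max;
    writeln(l + 1);            // wraps around to long.min

    BigInt b = BigInt(long.max);
    writeln(b + 1);            // 9223372036854775808: grows instead of wrapping
}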
December 11, 2012
> Why stop at 64 bits? Why not have only one integral type, whose precision is whatever is necessary to hold the value? This is quite doable, and has been done.

You really miss the point here. Nobody will ask you to promote those numbers to 64 bits or whatever *unless necessary*. It would only modify the implicit promotion rule, from "at least to int" to "widest-integral".

As a compiler, you may choose to promote the numbers only to 16 bits, or to 32 bits, if you like, but only as long as the final result is not compromised.

The compiler would be free to promote as it likes, as long as it guarantees that the final result is "as if" the promotion had been to the widest-integral.

The point is that this way the promotion rules, quite complex now, would become straightforward. Yes, the burden would be on the compiler rather than on the user. But this could improve in time: C++ classes are nothing other than a burden that falls on the compiler in order to make the programmer's life easier. Those classes, too, started out as big behemoths, so slow that they scared everyone.

Anyway, I will not defend this to the end of the world. Actually, if you look in my original post, you will see that this is a simple question, not a suggestion.

Until now the question has received plenty of pushback, but no answer.

A bit shameful.
December 11, 2012
On Tuesday, 11 December 2012 at 16:35:39 UTC, Andrei Alexandrescu wrote:
> On 12/11/12 11:29 AM, eles wrote:
>>> There's a lot to be discussed on the issue. A few quick thoughts:
>>>
>>> * 32-bit integers are a sweet spot for CPU architectures. There's
>>> rarely a provision for 16- or 8-bit operations; the action is at 32-
>>> or 64-bit.
>>
>> Speed can still be optimized by the compiler, behind the scenes. The
>> approach does not ask the compiler to promote everything to the
>> widest-integral, but to do the job "as if". Currently, the choice of
>> int-C as the fastest-integral instead of the widest-integral moves the
>> burden from the compiler to the user.
>
> Agreed. But then that's one of them "sufficiently smart compiler" arguments. http://c2.com/cgi/wiki?SufficientlySmartCompiler
>
>>> * Then, although most 64-bit operations are as fast as 32-bit ones,
>>> transporting operands takes twice as much internal bus real estate and
>>> sometimes twice as much core real estate (i.e. there are units that do
>>> either two 32-bit ops or one 64-bit op).
>>>
>>> * The whole reserving a bit and halving the range means extra costs of
>>> operating with a basic type.
>>
>> Yes, there is a cost. But, as always, there is a balance between
>> advantages and drawbacks. What is preferable: the simplicity of promotion,
>> or a supplementary bit?
>
> A direct and natural mapping between language constructs and machine execution is very highly appreciated in the market D is in. I don't see that changing in the foreseeable future.
>
>> Besides, at the end of the day, a half-approach would be to have a
>> widest-signed-integral and a widest-unsigned-integral type and only play
>> with those two.
>
> D has terrific abstraction capabilities. Leave primitive types alone and define a UDT that implements your desired behavior. You can always implement safe on top of fast but not the other way around.
>
>
> Andrei

All of the above relies on the assumption that the safety problem is due to the memory layout. Many other programming languages solve this from a different point of view: the problem lies in the implicit casts, not in the memory layout. In other words, the culprit is code such as:
uint a = -1;
which compiles under C's implicit coercion rules but _really shouldn't_.
The semantically correct way would be something like:
uint a = 0xFFFF_FFFF;
but C/C++ programmers tend to think the "-1" trick is less verbose and "better".
Another way is to explicitly state the programmer's intention:
uint a = reinterpret!uint(-1); // no run-time penalty should occur

D decided to follow C's coercion rules, which I think is a design mistake, but one that cannot be easily changed.

Perhaps, as Andrei suggested, a solution would be to use a higher-level "Integer" type defined in a library that enforces better semantics.
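
One way to make the intent explicit without a run-time check is a tiny helper in the spirit of the reinterpret!uint(-1) above (a hypothetical function, not something in Phobos); std.conv.to gives the range-checked alternative:

import std.conv : to;

// hypothetical helper: a thin, searchable wrapper around the cast
T reinterpret(T, S)(S value)
{
    return cast(T) value;
}

void main()
{
    uint a = reinterpret!uint(-1); // explicit intent: a == uint.max
    // uint b = (-1).to!uint;      // checked instead: throws ConvOverflowException
}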