June 12, 2009
Derek Parnell wrote:
> On Fri, 12 Jun 2009 02:08:14 +0200, Don wrote:
> 
>> Walter Bright wrote:
>>> davidl wrote:
>>>> It seems that comparing two different operands with different size makes no sense. The compiler should issue an error against that.
>>> Consider:
>>>
>>>    byte b;
>>>    if (b == 1)
>>>
>>> here you're comparing two different sizes, a byte and an int. Disallowing such (in its various incarnations) is a heavy burden, as the user will have to insert lots of ugly casts.
>>>
>>> There really isn't any escaping from the underlying representation of 2's complement arithmetic with its overflows, wrap-arounds, sign extensions, etc.
>> The problem is a lot more specific than that.
>> The unexpected behaviour comes from the method used to promote two types to a common type, when both are smaller than int, but of different signedness. Intuitively, you expect the common type of {byte, ubyte} to be ubyte, by analogy to {int, uint}->uint, and {long, ulong}->ulong. But instead, the common type is int!
> 
> I think that the common type for byte and ubyte is short. Byte and ubyte
> have overlapping ranges of values (-128 to 127) and (0 to 255), so a common
> type would have to be able to hold at least both of these ranges, and short
> (a 16-bit signed integer) does that.

But then you still have the problem that the high half of the short was extended from the low half in two different ways, once by sign-extend, once by zero-extend. Mixing sign-extend and zero-extend in the same expression is asking for trouble.
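For illustration, here's a minimal sketch of that mismatch (the variable names are made up):

void main()
{
    byte  sb = -1;    // bit pattern 0xFF
    ubyte ub = 0xFF;  // same bit pattern 0xFF

    short fromSigned   = sb;  // sign-extended  -> 0xFFFF, i.e. -1
    short fromUnsigned = ub;  // zero-extended  -> 0x00FF, i.e. 255

    // The same 8-bit pattern ends up as two different 16-bit values.
    assert(fromSigned != fromUnsigned);
}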
June 12, 2009
Frits van Bommel wrote:
> Don wrote:
>> For bonus points:
> [end of message]
> 
> I guess nobody'll be getting those bonus points then... :P
<g>

For bonus points:
Code like the following is also almost certainly a bug:
byte b = -1;
if (b == 255)  ... // FALSE!

When a variable of type byte or short is compared with a positive literal greater than byte.max or short.max respectively, or when a ubyte or ushort is compared with a negative literal, it's pretty much the same situation.
Flagging an error for this situation would typically reveal the root cause: b should have been 'ubyte', not 'byte'.
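A rough sketch of the comparisons such a check would flag (illustrative only; under the current promotion rules every one of these is always false):

void main()
{
    byte   b;
    short  s;
    ubyte  ub;
    ushort us;

    assert(!(b  == 255));     // literal > byte.max
    assert(!(s  == 40_000));  // literal > short.max
    assert(!(ub == -1));      // negative literal vs ubyte
    assert(!(us == -1));      // negative literal vs ushort
}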

June 12, 2009
Walter Bright wrote:
> davidl wrote:
>> It seems that comparing two different operands with different size makes no sense. The compiler should issue an error against that.
> 
> Consider:
> 
>    byte b;
>    if (b == 1)
> 
> here you're comparing two different sizes, a byte and an int. Disallowing such (in its various incarnations) is a heavy burden, as the user will have to insert lots of ugly casts.

Until we get polysemous values, that is ;-)  Assuming that's still on the radar...
June 12, 2009
Don wrote:
> But then you still have the problem that the high half of the short was extended from the low half in two different ways, once by sign-extend, once by zero-extend. Mixing sign-extend and zero-extend in the same expression is asking for trouble.

I disagree.  In fact, I don't think sign extension or conversion to a common type should even be necessary.

Given value 's' of type 'sT' and unsigned value 'u' of type 'uT', where
'sT' and 'uT' have the same width, comparisons should be translated as
follows:
  's == u' --> 's >= 0 && cast(uT)(s) == u'
  's != u' --> 's < 0 || cast(uT)(s) != u'
  's < u' --> 's < 0 || cast(uT)(s) < u'
  's <= u' --> 's < 0 || cast(uT)(s) <= u'
  's > u' --> 's >= 0 && cast(uT)(s) > u'
  's >= u' --> 's >= 0 && cast(uT)(s) >= u'

This system would always work, even when no type exists that can hold all possible values of both 'sT' and 'uT'.  And it would always be *correct*, i.e. negative values would always be smaller than and different from positive values, even when the positive value is outside the range of any signed type.
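Here's a sketch of how one of those rules could be written as a library template today (the name 'uLess' is hypothetical; a compiler would of course generate the expansion inline rather than call a function):

import std.traits : isIntegral, isSigned, isUnsigned;

bool uLess(S, U)(S s, U u)
    if (isIntegral!S && isSigned!S &&
        isIntegral!U && isUnsigned!U &&
        S.sizeof == U.sizeof)
{
    // 's < u' --> 's < 0 || cast(uT)(s) < u'
    return s < 0 || cast(U) s < u;
}

unittest
{
    assert( uLess(cast(byte)-1, cast(ubyte)255)); // -1 really is less than 255
    assert( uLess(int.min, uint.max));            // works with no wider type available
    assert(!uLess(cast(byte)1, cast(ubyte)1));    // equal values are not less
}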

-- 
Rainer Deyke - rainerd@eldwood.com
June 12, 2009
Rainer Deyke wrote:
> Don wrote:
>> But then you still have the problem that the high half of the short was
>> extended from the low half in two different ways, once by sign-extend,
>> once by zero-extend. Mixing sign-extend and zero-extend in the same
>> expression is asking for trouble.
> 
> I disagree.  In fact, I don't think sign extension or conversion to a
> common type should even be necessary.

Doing _no_ extension doesn't cause problems, of course.

> 
> Given value 's' of type 'sT' and unsigned value 'u' of type 'uT', where
> 'sT' and 'uT' have the same width, comparisons should be translated as
> follows:
>   's == u' --> 's >= 0 && cast(uT)(s) == u'
>   's != u' --> 's < 0 || cast(uT)(s) != u'
>   's < u' --> 's < 0 || cast(uT)(s) < u'
>   's <= u' --> 's < 0 || cast(uT)(s) <= u'
>   's > u' --> 's >= 0 && cast(uT)(s) > u'
>   's >= u' --> 's >= 0 && cast(uT)(s) >= u'
> 
> This system would always work, even when no type exists that can hold
> all possible values of both 'sT' and 'uT'.  And it would always be
> *correct*, i.e. negative values would always be smaller than and
> different from positive values, even when the positive value is outside
> the range of any signed type.

That's true. What you are doing is removing the int/byte inconsistency, by making uint == int comparisons behave the same way that ubyte == byte comparisons do now.
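For concreteness, here's what that change looks like for int == uint (in D, int and uint are 32 bits; both D and C currently convert the signed operand to unsigned):

void main()
{
    int  i = -1;
    uint u = uint.max;

    assert(i == u);   // true today: -1 converts to 0xFFFFFFFF
    // Under the proposed lowering this becomes
    //   i >= 0 && cast(uint)(i) == u
    // which is false.
}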
Notice that your proposal
(1) preserves the existing behaviour of byte==ubyte (which the original poster was complaining about);
(2) silently changes the behaviour of existing D and C code (that involves int==uint); and
(3) assumes that the code as written is what the programmer intended. I suspect that this type of code is frequently an indicator of a bug. Consider:

const ubyte u = 0xFF;
byte b;
if (b == u) ...

After your transformation, this will be:

if (false) ...

But actually the code has a simple bug: b should have been ubyte. I think this is a pretty common bug (I've done it several times myself).

(2) is fatal, I think.
June 12, 2009
Don wrote:
> That's true. What you are doing is removing the int/byte inconsistency,
> by making  uint == int comparisons behave the same way that ubyte ==
> byte comparisons do now.
> Notice that your proposal
> (1) preserves the existing behaviour of byte==ubyte (which the original
> poster was complaining about);

Yes.

> (2) silently changes the behaviour of existing D and C code (that
> involves int==uint); and

True.  I don't consider C compatibility a major issue, but others do. (If C compatibility was a major issue for me, I'd never even consider moving from C++ to D.)

> (3) assumes that the code as written is what the programmer intended. I suspect that this type of code is frequently an indicator of a bug.

Yes, but the opposite behavior is just as likely to be a bug.  Between two behaviors that mask possible bugs, I'd rather have the mathematically correct behavior.  The alternative is to flat-out ban comparison of mixed-sign types.


-- 
Rainer Deyke - rainerd@eldwood.com