June 27, 2006
Alexander Panek wrote:
> If you take a look at how comparison works, you'll know why this one fails.
> 
> Let's take a uint a = 16; as in your example:
> 00000000 00000000 00000000 00010000
> 
> And now a signed integer with the value -1:
> 10000000 00000000 00000000 00000001
> 

Your point still stands, but -1 is represented as:
11111111 11111111 11111111 11111111

http://en.wikipedia.org/wiki/Two%27s_complement

> You might guess which number comes out bigger when the comparison is done on the raw binary (and after all, that's what the processor does) :)
> 
> Regards,
> Alex
> 
> Paolo Invernizzi wrote:
> 
>> Hi all,
>>
>> What am I missing?
>>
>>     uint a = 16;
>>     int b = -1;
>>     assert( b < a ); // this fails! I was expecting that -1 < 16
>>
>> Thanks
>>
>> ---
>> Paolo
June 28, 2006
You're both right, and I know about two's complement...

But, as others in the thread pointed out, that's just a way of "representing" a number and, worse, in this case, both operands are just "constants":

  uint a = 16;
  int b = -1;
  assert(b < a);

No ambiguity at all: this can be folded at compile time.
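
For instance, here is a minimal sketch of what "folded" means (the enum manifest constants and the static assert are my additions for illustration and assume a compiler that supports them; they are not part of the original snippet):

  // Both operands are compile-time constants, so the compiler can already
  // evaluate (fold) the comparison before the program ever runs.
  enum int  b = -1;
  enum uint a = 16;

  // The folded result is "false": b is converted to uint and becomes
  // 4294967295, which is not less than 16. A compiler that folds this
  // could just as well emit a warning at the same point.
  static assert(!(b < a));

  void main() {}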

And in the real world, -1 is less than 16. Since one of the main principles of the D programming language is to minimize the risk of bugs, I feel that at least a warning should be raised for that code...

But really, it's just a minor glitch...

Cheers

---
Paolo


Kirk McDonald wrote:
> Alexander Panek wrote:
>> If you take a look at how comparison works, you'll know why this one fails.
>>
>> Let's take a uint a = 16; as in your example:
>> 00000000 00000000 00000000 00010000
>>
>> And now a signed integer with the value -1:
>> 10000000 00000000 00000000 00000001
>>
> 
> Your point still stands, but -1 is represented as:
> 11111111 11111111 11111111 11111111
> 
> http://en.wikipedia.org/wiki/Two%27s_complement
> 
>> You might guess which number comes out bigger when the comparison is done on the raw binary (and after all, that's what the processor does) :)
>>
>> Regards,
>> Alex
>>
>> Paolo Invernizzi wrote:
>>
>>> Hi all,
>>>
>>> What am I missing?
>>>
>>>     uint a = 16;
>>>     int b = -1;
>>>     assert( b < a ); // this fails! I was expecting that -1 < 16
>>>
>>> Thanks
>>>
>>> ---
>>> Paolo
June 28, 2006
On Tue, 27 Jun 2006 12:33:32 -0700, Kirk McDonald <kirklin.mcdonald@gmail.com> wrote:

>Alexander Panek wrote:
>> If you take a look at how comparison works, you'll know why this one fails.
>> 
>> Let's take a uint a = 16; as in your example:
>> 00000000 00000000 00000000 00010000
>> 
>> And now a signed integer with the value -1:
>> 10000000 00000000 00000000 00000001
>> 
>
>Your point still stands, but -1 is represented as:
>11111111 11111111 11111111 11111111
>
>http://en.wikipedia.org/wiki/Two%27s_complement
>
>> You might guess which number comes out bigger when the comparison is done on the raw binary (and after all, that's what the processor does) :)
>> 
>> Regards,
>> Alex
>> 
>> Paolo Invernizzi wrote:
>> 
>>> Hi all,
>>>
>>> What am I missing?
>>>
>>>     uint a = 16;
>>>     int b = -1;
>>>     assert( b < a ); // this fails! I was expecting that -1 < 16
>>>
>>> Thanks
>>>
>>> ---
>>> Paolo

Maybe it's not a bug, but it is very confusing, no matter how integer operations work internally. The compiler should give at least a warning about incompatible types, or try to cast the uint to int implicitly, or require an explicit cast.
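
In the meantime, an explicit cast gives the expected result. A small sketch (the widening to long and the sign check are only illustrative workarounds, not something the compiler does for you):

  import std.stdio;

  void main()
  {
      uint a = 16;
      int  b = -1;

      // What happens today: b is converted to uint, so it takes part in the
      // comparison as 4294967295 and the test is false.
      writefln("b < a                    : %s", b < a);

      // Workaround 1: widen both operands to long, which can hold every int
      // and every uint value, so the comparison happens in a common signed type.
      writefln("cast(long)b < cast(long)a: %s", cast(long)b < cast(long)a);

      // Workaround 2: handle the negative case explicitly before comparing.
      bool less = (b < 0) || (cast(uint)b < a);
      writefln("sign-checked comparison  : %s", less);
  }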
June 28, 2006
Max Samuha wrote:
> On Tue, 27 Jun 2006 12:33:32 -0700, Kirk McDonald
> <kirklin.mcdonald@gmail.com> wrote:
> 
> 
>>Alexander Panek wrote:
>>
>>>If you take a look at how comparison works, you'll know why this one fails.
>>>
>>>Let's take a uint a = 16; as in your example:
>>>00000000 00000000 00000000 00010000
>>>
>>>And now a signed integer with the value -1:
>>>10000000 00000000 00000000 00000001
>>>
>>
>>Your point still stands, but -1 is represented as:
>>11111111 11111111 11111111 11111111
>>
>>http://en.wikipedia.org/wiki/Two%27s_complement
>>
>>
>>>You might guess which number comes out bigger when the comparison is done on the raw binary (and after all, that's what the processor does) :)
>>>
>>>Regards,
>>>Alex
>>>
>>>Paolo Invernizzi wrote:
>>>
>>>
>>>>Hi all,
>>>>
>>>>What am I missing?
>>>>
>>>>    uint a = 16;
>>>>    int b = -1;
>>>>    assert( b < a ); // this fails! I was expecting that -1 < 16
>>>>
>>>>Thanks
>>>>
>>>>---
>>>>Paolo
> 
> 
> Maybe it's not a bug, but it is very confusing, no matter how integer
> operations work internally. The compiler should give at least a warning
> about incompatible types, or try to cast the uint to int implicitly, or
> require an explicit cast.

It's worth noting that this behavior (a comparing as less than b, so that the assert fails) follows the implicit conversion rules exactly:

http://www.digitalmars.com/d/type.html

[snipped non-applicable checks...]
5. Else the integer promotions are done on each operand, followed by:

   1. If both are the same type, no more conversions are done.
   2. If both are signed or both are unsigned, the smaller type is converted to the larger.
   3. If the signed type is larger than the unsigned type, the unsigned type is converted to the signed type.
   4. The signed type is converted to the unsigned type.

So the int is implicitly converted to the uint, and (apparently) it simply compares 2**32-1 to 16.
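
A minimal sketch (the asserts are mine, just to make the rule concrete) of rule 4 applied to the original example:

  void main()
  {
      uint a = 16;
      int  b = -1;

      // Rule 4: the signed operand is converted to the unsigned type, so b
      // takes part in the comparison as 2**32 - 1.
      assert(cast(uint)b == 4294967295u);

      // The original comparison is therefore 4294967295u < 16u, which is
      // false; that is exactly why the assert in the first post fails.
      assert((b < a) == (cast(uint)b < a));
      assert(!(b < a));
  }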

So I wouldn't call this a bug, just a potential oddity. Maybe it should detect and throw an overflow if a negative signed integer is converted to an unsigned type? Or not: I'd consider this an edge case. It's probably considered bad practice to promiscuously mix signed and unsigned types.

-Kirk McDonald
June 28, 2006
Kirk McDonald wrote:
> Max Samuha wrote:
>> On Tue, 27 Jun 2006 12:33:32 -0700, Kirk McDonald
>> <kirklin.mcdonald@gmail.com> wrote:
>>
>>
>>> Alexander Panek wrote:
>>>
>>>> If you take a look at how comparison works, you'll know why this one fails.
>>>>
>>>> Let's take a uint a = 16; as in your example:
>>>> 00000000 00000000 00000000 00010000
>>>>
>>>> And now a signed integer with the value -1:
>>>> 10000000 00000000 00000000 00000001
>>>>
>>>
>>> Your point still stands, but -1 is represented as:
>>> 11111111 11111111 11111111 11111111
>>>
>>> http://en.wikipedia.org/wiki/Two%27s_complement
>>>
>>>
>>>> You might guess which number comes out bigger when the comparison is done on the raw binary (and after all, that's what the processor does) :)
>>>>
>>>> Regards,
>>>> Alex
>>>>
>>>> Paolo Invernizzi wrote:
>>>>
>>>>
>>>>> Hi all,
>>>>>
>>>>> What am I missing?
>>>>>
>>>>>    uint a = 16;
>>>>>    int b = -1;
>>>>>    assert( b < a ); // this fails! I was expecting that -1 < 16
>>>>>
>>>>> Thanks
>>>>>
>>>>> ---
>>>>> Paolo
>>
>>
>> Maybe it's not a bug, but it is very confusing, no matter how integer
>> operations work internally. The compiler should give at least a warning
>> about incompatible types, or try to cast the uint to int implicitly, or
>> require an explicit cast.
> 
> It's worth noting that this behavior (a comparing as less than b, so that the assert fails) follows the implicit conversion rules exactly:
> 
> http://www.digitalmars.com/d/type.html
> 
> [snipped non-applicable checks...]
> 5. Else the integer promotions are done on each operand, followed by:
> 
>    1. If both are the same type, no more conversions are done.
>    2. If both are signed or both are unsigned, the smaller type is converted to the larger.
>    3. If the signed type is larger than the unsigned type, the unsigned type is converted to the signed type.
>    4. The signed type is converted to the unsigned type.
> 
> So the int is implicitly converted to the uint, and (apparently) it simply compares 2**32-1 to 16.
> 
> So I wouldn't call this a bug, just a potential oddity. Maybe it should detect and throw an overflow if a negative signed integer is converted to an unsigned type? 

That would introduce a massive performance hit. The existing conversion from signed to unsigned happens entirely at compile time: it is just a re-typing of the operand, with no run-time check emitted.

> Or not: I'd consider this an edge case. It's probably considered bad practice to promiscuously mix signed and unsigned types.

It's a hard one. It would be really painful if equality comparisons between signed & unsigned types were an error; they're almost always OK.
Comparison of signed/unsigned variables with an unsigned/signed constant is always an error; it would be nice if the compiler detected it.
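
For the record, a small sketch of the two cases (the variable names and the constant are mine, purely for illustration):

  void main()
  {
      uint u = 5;
      int  i = 5;

      // Equality between in-range signed and unsigned values: almost always
      // behaves as expected.
      assert(u == i);

      // Ordering an unsigned variable against a negative constant: the
      // constant is converted to uint, so it behaves as uint.max rather
      // than -1, and the comparison never means what it reads as.
      const int limit = -1;
      assert(u < limit);   // passes: the test is really 5u < 4294967295u
  }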
June 28, 2006
Don Clugston wrote:

> It's a hard one. It would be really painful if equality comparisons between signed & unsigned types were an error; they're almost always OK.
> Comparison of signed/unsigned variables with an unsigned/signed constant is always an error; it would be nice if the compiler detected it.

That was a comparison between unsigned and signed CONSTANTS; that's the mess. They are foldable in that example...

---
Paolo
June 29, 2006
It would seem that this is not unique to D: C++ also exhibits this behaviour under GCC 4.0. I was just burnt by it in a std::vector access.

