On 8/9/2011 2:46 AM, Don wrote:
> From a discussion on D.learn.
>
> If x and y are different integral types, then in an expression like
> x >> y
> the integral promotion rules are applied to x and y.
> This behaviour is obviously inherited from C, but why did C use such a
> counter-intuitive and bug-prone rule?
> Why isn't typeof(x >> y) simply typeof(x) ?
> What would break if it did?
>
> You might think that the rule is that typeof(x >> y) is typeof(x + y),
> but it isn't: the arithmetic conversions are NOT applied:
> typeof(int >> long) is int, not long, BUT
> typeof(short >> int) is int.
> And we have this death trap (bug 2809):
>
> void main()
> {
>     short s = -1;
>     ushort u = s;
>     assert( u == s );
>     assert( (s >>> 1) == (u >>> 1) ); // FAILS
> }
That last example is why we can't just silently change the behavior from C.
The question though is whether that is ever _desired_ behavior in a C program.
If it's always a bug when it happens, then I'd argue that we can and should
change the behavior. If there's a legitimate reason why code would want the C
behavior, then changing it in D would cause problems for porting code, but if
the difference only matters when there's a bug in the C code, then breaking
compatibility is only an issue for broken code, and changing it would help
prevent issues in D.
- Jonathan M Davis