May 10, 2016
On 10 May 2016 at 06:25, Marco Leise via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Mon, 9 May 2016 02:10:19 -0700, Walter Bright
> <newshound2@digitalmars.com> wrote:
>
>> Don Clugston pointed out in his DConf 2016 talk that:
>>
>>      float f = 1.30;
>>      assert(f == 1.30);
>>
>> will always be false since 1.30 is not representable as a float. However,
>>
>>      float f = 1.30;
>>      assert(f == cast(float)1.30);
>>
>> will be true.
>>
>> So, should the compiler emit a warning for the former case?
>
> I'd say yes, but exclude the case where it can be statically verified that the comparison can yield true, because the constant can be losslessly converted to the type of 'f'.
>
> For example, don't warn for these:
> f == 1.0, f == -0.5, f == 3.625, f == 2UL^^60
>
> But do warn for:
> f == 1.30, f == 2UL^^60+1
>
> As an extension of the existing "comparison is always false/true" check, it could read "Comparison is always false: literal 1.30 is not representable as 'float'".
>
> There is a whole bunch in this warning category:
>   byte b;
>   if (b == 1000) {}
> "Comparison is always false: literal 1000 is not representable
> as 'byte'"
>
> --
> Marco

This.
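A rough compile-time sketch of the rule Marco describes (the helper name here is hypothetical, purely for illustration): warn only when the literal changes value under conversion to the compared type.

    // Hypothetical helper, illustration only: true when the literal survives
    // conversion to float without changing value.
    enum fitsInFloat(double lit) = cast(float) lit == lit;

    static assert( fitsInFloat!(1.0));    // exact -- no warning needed
    static assert( fitsInFloat!(3.625));  // exact -- no warning needed
    static assert(!fitsInFloat!(1.30));   // inexact -- "comparison is always false"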
May 10, 2016
On Monday, 9 May 2016 at 19:39:52 UTC, tsbockman wrote:
> Educating programmers who've never studied how to write correct FP code is too complex a task to accomplish via compiler warnings. The warnings should be limited to cases that are either obviously wrong, or where the warning is likely to be a net positive even for FP experts.

Any warning message for this type of problem should mention the
"What Every Computer Scientist Should Know About Floating-Point
Arithmetic" paper (and perhaps give a standard public URL such as
https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
at which the paper can be easily accessed).
May 10, 2016
On 5/9/2016 8:00 PM, Xinok wrote:
> Maybe it's a bad idea to enable these warnings by default, but what's wrong
> with providing a compiler flag to perform these checks anyway? For example,
> GCC has a compiler flag that warns about signed/unsigned comparisons, but
> it's not even enabled with the -Wall flag, only by specifying -Wextra or
> -Wsign-compare.

Warnings balkanize the language into endless dialects.
May 10, 2016
On 5/9/2016 1:25 PM, Marco Leise wrote:
> There is a whole bunch in this warning category:
>   byte b;
>   if (b == 1000) {}
> "Comparison is always false: literal 1000 is not representable
> as 'byte'"

You're right, we may be opening a can of worms with this.

May 10, 2016
On 5/10/2016 12:31 AM, Manu via Digitalmars-d wrote:
> Think of it like this: a float doesn't represent a precise point (it's
> an approximation by definition), so see the float as representing the
> interval from the value it stores to that value + 1 mantissa bit.
> If you see floats that way, then the natural way to compare them is
> to demote to the lowest common precision, and it wouldn't be
> considered erroneous, or even warning-worthy; just documented
> behaviour.

Floating point behavior is so commonplace, I am wary of inventing new, unusual semantics for it.

May 11, 2016
On 2016-05-10 23:44, Walter Bright wrote:
> On 5/9/2016 1:25 PM, Marco Leise wrote:
>> There is a whole bunch in this warning category:
>>   byte b;
>>   if (b == 1000) {}
>> "Comparison is always false: literal 1000 is not representable
>> as 'byte'"
>
> You're right, we may be opening a can of worms with this.

Scala gives a warning/error (don't remember which) for "isInstanceOf" where it can prove at compile time it will never be true. That has helped me a couple of times.

-- 
/Jacob Carlborg
May 11, 2016
On 11 May 2016 at 07:47, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 5/10/2016 12:31 AM, Manu via Digitalmars-d wrote:
>>
>> Think of it like this: a float doesn't represent a precise point (it's an approximation by definition), so see the float as representing the interval from the value it stores to that value + 1 mantissa bit. If you see floats that way, then the natural way to compare them is to demote to the lowest common precision, and it wouldn't be considered erroneous, or even warning-worthy; just documented behaviour.
>
>
> Floating point behavior is so commonplace, I am wary of inventing new, unusual semantics for it.

Is it unusual to demote to the lower common precision? I think it's
the only reasonable solution.
It's never reasonable to promote a float, since it has already
suffered precision loss. It can't meaningfully be compared against
anything higher precision than itself.
What is the problem with the behaviour I suggest?
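Concretely, a minimal sketch of what I mean (the helper name is made up, not an existing function):

    // Compare a float and a double at the lowest common precision by
    // demoting the double operand instead of promoting the float one.
    bool eqAtCommonPrecision(float a, double b)
    {
        return a == cast(float) b;
    }

    void main()
    {
        float f = 1.30;
        assert(eqAtCommonPrecision(f, 1.30)); // true: both rounded to float
        assert(f != 1.30);                    // today's promoting comparison disagrees
    }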

The reason I'm wary about emitting a warning is that people will
encounter the warning *all the time*, and for a user who doesn't have a
comprehensive understanding of floating point (and probably many that
do), the natural/intuitive thing to do would be to place an explicit
cast of the lower precision value to the higher precision type, which
is __exactly the wrong thing to do__.
I don't think the warning improves the situation; it likely just causes
people to emit the same incorrect code explicitly.
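To illustrate the failure mode (a hedged sketch of the code I expect such a warning to provoke):

    void main()
    {
        float f = 1.30;
        // The "intuitive" fix: cast the float up. Still always false, because
        // f already lost precision before the promotion.
        assert((cast(double) f == 1.30) == false);
        // What was actually wanted: compare at float precision.
        assert(f == cast(float) 1.30);
    }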

Honestly, who would naturally respond to such a warning by demoting the higher precision type? I don't know that guy, other than those of us who have just watched Don's talk.
May 11, 2016
On Monday, 9 May 2016 at 20:16:59 UTC, Walter Bright wrote:
> I oppose this change. You'd be better off not having unsigned types at all than this mess, which was Java's choice.

The language forces usage of unsigned types. Though in my experience it's relatively easy to fight back, including when interfacing with C code that uses unsigned types exclusively.

> But then there are more problems created.

I've seen no problem from using signed types so far. The last prejudice left is the usage of ubyte[] for buffers. How often does one look into individual bytes in some abstract buffer?
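For what it's worth, a small self-contained illustration of the kind of wrap-around that goes away once a length is converted to a signed type at the boundary:

    void main()
    {
        int[] a;
        // a.length is size_t (unsigned), so with an empty array the
        // subtraction wraps to size_t.max instead of producing -1.
        assert(a.length - 1 == size_t.max);

        // Converting to a signed type at the boundary sidesteps the wrap.
        long len = cast(long) a.length;
        assert(len - 1 == -1);
    }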
May 12, 2016
On 5/11/2016 2:24 AM, Manu via Digitalmars-d wrote:
>> Floating point behavior is so commonplace, I am wary of inventing new,
>> unusual semantics for it.
>
> Is it unusual to demote to the lower common precision?

Yes.


> I think it's the only reasonable solution.

It may be, but it is unusual and therefore surprising behavior.


> What is the problem with the behaviour I suggest?

Code will do one thing in C, and the same code will do something unexpectedly different in D.
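To spell out the divergence (a sketch; the second comment describes the proposal, not current behaviour):

    void main()
    {
        float f = 1.30;
        // In both C and present-day D, f is promoted to double here, so the
        // comparison is false.
        assert(!(f == 1.30));
        // Under a demote-to-common-precision rule, the same source line would
        // instead compare at float precision and evaluate to true.
    }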


> The reason I'm wary about emitting a warning is that people will
> encounter the warning *all the time*, and for a user who doesn't have a
> comprehensive understanding of floating point (and probably many that
> do), the natural/intuitive thing to do would be to place an explicit
> cast of the lower precision value to the higher precision type, which
> is __exactly the wrong thing to do__.
> I don't think the warning improves the situation; it likely just causes
> people to emit the same incorrect code explicitly.

The warning is intended for people who understand, as then they will figure out what they actually wanted and implement that. People who randomly and without comprehension insert casts hoping to make the compiler shut up cannot be helped.

May 12, 2016
On Tuesday, 10 May 2016 at 21:44:45 UTC, Walter Bright wrote:
>>   if (b == 1000) {}
>> "Comparison is always false: literal 1000 is not representable
>> as 'byte'"

What's wrong with having this warning?

> You're right, we may be opening a can of worms with this.