May 09, 2016
On Mon, 09 May 2016 15:56:21 +0000,
Nordlöw <per.nordlow@gmail.com> wrote:

> On Monday, 9 May 2016 at 12:28:04 UTC, Walter Bright wrote:
> > On 5/9/2016 4:38 AM, Nordlöw wrote:
> >> Would that include comparison of variables only as well?
> > No.
> 
> Why?

Because the float would be converted to double without loss of
precision, just as uint == ulong is a valid comparison which
can yield 'true' in 2^32 cases.
float == 1.30, on the other hand, is false in every case.
You'd have to warn on _every_ comparison with a widening
conversion to be consistent!
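
A minimal sketch of the distinction, assuming the usual float-to-double
promotion:

    float  f = 0.5;       // 0.5 is exactly representable as float and double
    double d = 0.5;
    assert(f == d);       // true: f widens to double without losing precision

    float  g = 1.30;      // 1.30 is rounded when stored in g
    assert(g != 1.30);    // the double literal 1.30 never equals the rounded float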

-- 
Marco

May 09, 2016
On 5/9/16 4:22 PM, Walter Bright wrote:
> On 5/9/2016 6:46 AM, Steven Schveighoffer wrote:
>> I know this is a bit band-aid-ish, but if one is comparing literals to
>> a float,
>> why not treat the literal as the type being compared against? In other
>> words,
>> imply the 1.3f. This isn't integer-land where promotions do not change
>> the outcome.
>
> Because it's yet another special case, and we know where those lead. For
> example, what if the 1.30 was the result of CTFE?
>

This is true; it's a contrived example.
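
That said, a minimal sketch of the CTFE case you mention (threshold is a
hypothetical helper, just for illustration):

    double threshold() { return 1.30; }   // ordinary function, CTFE-able

    void test()
    {
        enum t = threshold();   // CTFE result, same double value as the literal
        float f = 1.30;
        assert(f != t);         // still unequal: same promotion problem as f == 1.30
    }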

-Steve
May 09, 2016
On Monday, 9 May 2016 at 20:16:59 UTC, Walter Bright wrote:
> On 5/9/2016 11:51 AM, tsbockman wrote:
>> (4) is already planned; it's just taking *a lot* longer than anticipated to
>> actually implement it:
>>     https://issues.dlang.org/show_bug.cgi?id=259
>>     https://github.com/dlang/dmd/pull/1913
>>     https://github.com/dlang/dmd/pull/5229
>
> I oppose this change. You'd be better off not having unsigned types at all (which was Java's choice) than this mess. But then more problems are created.

What mess? The actual fix for issue 259 is simple, elegant, and shouldn't require much code in the wild to be changed.

The difficulties and delays have all been associated with the necessary improvements to VRP and constant folding, which are worthwhile in their own right, since they help the compiler generate faster code.
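
For context, a minimal sketch of the kind of comparison issue 259 is about,
as I understand it:

    int  i = -1;
    uint u = 0;
    assert(i < u);   // fails today: i is implicitly converted to uint,
                     // so -1 becomes uint.max before the comparison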
May 09, 2016
On Monday, 9 May 2016 at 20:20:00 UTC, Walter Bright wrote:
> On 5/9/2016 12:39 PM, tsbockman wrote:
>> Educating programmers who've never studied how to write correct FP code is
>> too complex a task to implement via compiler warnings. The warnings should be
>> limited to cases that are either obviously wrong, or where the warning is
>> likely to be a net positive even for FP experts.
>
> I've seen a lot of proposals which try to hide the reality of how FP works. The cure is worse than the disease. The same goes for hiding signed/unsigned, and the autodecode mistake of pretending that code units aren't there.

I completely agree that complexity which cannot be properly hidden should not be hidden. The underlying mechanics of floating point are complexity that we shouldn't paper over. However, the peculiarities of language conventions w.r.t. floating point expressions don't quite fit that category.
May 09, 2016
On Monday, 9 May 2016 at 20:16:59 UTC, Walter Bright wrote:
>> (4) is already planned; it's just taking *a lot* longer than anticipated to
>> actually implement it:
>>     https://issues.dlang.org/show_bug.cgi?id=259
>>     https://github.com/dlang/dmd/pull/1913
>>     https://github.com/dlang/dmd/pull/5229
>
> I oppose this change. You'd be better off not having unsigned types at all (which was Java's choice) than this mess. But then more problems are created.

One other thing: according to the bug report discussion, the proposed solution was pre-approved both by Andrei Alexandrescu and by *YOU*.

Proposing a solution, letting various people work on implementing it for three years, and then suddenly announcing that you "oppose this change" and calling the solution a "mess" with no explanation is a fantastic way to destroy all motivation for outside contributors.
May 09, 2016
On Monday, 9 May 2016 at 09:10:19 UTC, Walter Bright wrote:
> Don Clugston pointed out in his DConf 2016 talk that:
>
>     float f = 1.30;
>     assert(f == 1.30);
>
> will always be false since 1.30 is not representable as a float. However,
>
>     float f = 1.30;
>     assert(f == cast(float)1.30);
>
> will be true.
>
> So, should the compiler emit a warning for the former case?

I think it really depends on what the warning actually says; people have different expectations of what that warning would be.

When you say 1.30 is not representable as a float, when is the "not representable" enforced? It looks like the programmer just represented it in the assignment of the literal, but that's not where the warning would be, right? I assume so, because people need nonrational literals all the time and this is the only way they can write them, which means it's a hole in the type system, right? There should be a decimal type to cover all these cases, like some databases have.

Would the warning say that you can't compare 1.30 to a float because 1.30 is not representable as a float? Or would it say that f was rounded upon assignment and is no longer 1.30?
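
A quick way to see the "rounded upon assignment" view (the printed value is
approximate, from memory):

    import std.stdio;

    void main()
    {
        float f = 1.30;
        writefln("%.10f", f);   // prints something like 1.2999999523, not 1.3000000000
    }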

Short of a decimal type, I think it would be nice to have a "float equality" operator that covered this whole class of cases, where floats that started their lives as nonrational literals and floats that have been rounded with loss of precision can be treated as equal if they're within something like .0000001% of each other (well, a percentage that can actually be represented as a float...). Basically, equality that covers the known mutational properties of FP arithmetic.
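
Roughly what I have in mind, as a hand-rolled sketch (roughlyEqual and
relTol are just made-up names):

    bool roughlyEqual(double a, double b, double relTol = 1e-6)
    {
        import std.math : abs, fmax;
        return abs(a - b) <= relTol * fmax(abs(a), abs(b));
    }

    float f = 1.30;
    assert(!(f == 1.30));           // exact comparison is never true
    assert(roughlyEqual(f, 1.30));  // tolerant comparison does what I'd expect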

There's no way to do this right now without ranges, right? I know that ~ is for concatenation, and I saw that ~= is an operator. What does that do? The Unicode ≈ would be nice for this.

I assume neither IEEE 754 nor ISO 10967 covers this? I was just reading the latter (zip here: http://standards.iso.org/ittf/PubliclyAvailableStandards/c051317_ISO_IEC_10967-1_2012.zip)
May 10, 2016
On Monday, 9 May 2016 at 20:14:36 UTC, Walter Bright wrote:
> On 5/9/2016 11:37 AM, Xinok wrote:
>> All of these scenarios are capable of producing "incorrect" results, are a
>> source of discrete bugs (often corner cases that we failed to consider and
>> test), and can be hard to detect. It's about time we stopped being stubborn and
>> flagged these things as warnings. Even if they require a special compiler flag
>> and are disabled by default, that's better than nothing.
>
> I've used a B+D language that does as you suggest (Wirth Pascal). It was highly unpleasant to use, as the code became littered with casts. Casts introduce their own set of bugs.

Maybe it's a bad idea to enable these warnings by default, but what's wrong with providing a compiler flag to perform these checks anyway? For example, GCC has a compiler flag to yield warnings for signed/unsigned comparisons, but it's not even enabled by -Wall, only by specifying -Wextra or -Wsign-compare.
May 10, 2016
On Monday, 9 May 2016 at 12:29:14 UTC, Temtaime wrote:
> Just get rid of the problem : remove == and != from floats.

Diagnostics Suggestion:

Issue a warning that directs the user to approxEqual():

https://dlang.org/phobos/std_math.html#.approxEqual
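
A usage sketch, assuming the default tolerances of approxEqual:

    import std.math : approxEqual;

    float f = 1.30;
    assert(f != 1.30);              // exact comparison: never true
    assert(approxEqual(f, 1.30));   // within the default tolerance: true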
May 10, 2016
On 9 May 2016 at 19:10, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> Don Clugston pointed out in his DConf 2016 talk that:
>
>     float f = 1.30;
>     assert(f == 1.30);
>
> will always be false since 1.30 is not representable as a float. However,
>
>     float f = 1.30;
>     assert(f == cast(float)1.30);
>
> will be true.
>
> So, should the compiler emit a warning for the former case?

Perhaps float comparison should *always* be done at the lower precision? There's no meaningful way to perform a float/double comparison where the float is promoted, whereas demoting the double for the comparison will almost certainly yield the expected result.
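
A minimal sketch of the difference I mean (with today's promotion rules):

    float  f = 1.30;
    double d = 1.30;

    assert(f != d);              // today: f is promoted to double, so they differ
    assert(f == cast(float) d);  // demoting the double gives the expected result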
May 10, 2016
On 10 May 2016 at 17:28, Manu <turkeyman@gmail.com> wrote:
> On 9 May 2016 at 19:10, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>> Don Clugston pointed out in his DConf 2016 talk that:
>>
>>     float f = 1.30;
>>     assert(f == 1.30);
>>
>> will always be false since 1.30 is not representable as a float. However,
>>
>>     float f = 1.30;
>>     assert(f == cast(float)1.30);
>>
>> will be true.
>>
>> So, should the compiler emit a warning for the former case?
>
> Perhaps float comparison should *always* be done at the lower precision? There's no meaningful way to perform a float/double comparison where the float is promoted, whereas demoting the double for the comparison will almost certainly yield the expected result.

Think of it like this: a float doesn't represent a precise point (it's an approximation by definition), so see the float as representing the interval from the absolute value it stores to that value plus one mantissa bit. If you see floats that way, then the natural way to compare them is to demote to the lowest common precision, and it wouldn't be considered erroneous, or even warning-worthy; just documented behaviour.
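
Something like this, as a hand-rolled illustration (eqLowestPrecision is just
a made-up name):

    // Compare at the narrower of the two precisions, per the interval view above.
    bool eqLowestPrecision(float a, double b)
    {
        return a == cast(float) b;   // demote the double before comparing
    }

    float f = 1.30;
    assert(eqLowestPrecision(f, 1.30));   // true under this rule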