November 13, 2018
On Tue, 13 Nov 2018 09:46:17 -0500, Steven Schveighoffer wrote:
> Maybe the biggest gripe here is that enums don't prefer their base types over what their base types convert to. In the developer's mind, the conversion is:
> 
> A => int => (via VRP) short
> 
> which seems more complex than just
> 
> A => int

It affects explicit casts too:

    void foo(short a) { writefln("short %s", a); }
    void foo(int a) { writefln("int %s", a); }
    foo(cast(int)0);  // prints: short 0

In order to force the compiler to choose a particular overload, you either need to assign to a variable or use a struct with alias this.
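
For example, a quick sketch of both workarounds (`ForceInt` is just an illustrative name for the wrapper struct):

    import std.stdio;

    void foo(short a) { writefln("short %s", a); }
    void foo(int a) { writefln("int %s", a); }

    // A wrapper that converts only to int, via alias this.
    struct ForceInt
    {
        int value;
        alias value this;
    }

    void main()
    {
        int x = 0;
        foo(x);           // runtime int variable: picks foo(int)
        foo(ForceInt(0)); // alias this yields an int, so foo(int) again
    }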

C++, Java, and C# all default to int, even for bare literals that fit into bytes or shorts, and let you use casts to select overloads.

C++ has some weird stuff where an enum that doesn't fit into an int is an equal match for all integer types:

    void foo(unsigned long long);
    void foo(short);
    enum A : unsigned long long { a = 2 };
    foo(a);  // ambiguous!

But if you just have an unsigned long long that's not in an enum, it only matches the unsigned long long overload.
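
For contrast, a rough D equivalent of that overload set - if I'm reading D's rules right, the enum constant's known value makes the short overload win instead of erroring:

    import std.stdio;

    void foo(ulong a) { writeln("ulong ", a); }
    void foo(short a) { writeln("short ", a); }

    enum A : ulong { a = 2 }

    void main()
    {
        foo(A.a);  // prints: short 2 (the value 2 fits in a short,
                   // so partial ordering picks foo(short))
        ulong x = A.a;
        foo(x);    // prints: ulong 2 (runtime value, no VRP)
    }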

In C#, if you define multiple implicit casts from a type that match multiple overloads, the compiler prefers the smallest matching type, and it prefers signed over unsigned types. However, for this situation to come up at all, you need to define implicit conversions for multiple numeric types, so it's not directly comparable.

Anyway, VRP overload selection hit me yesterday (of the accepts-invalid sort): I was calling a function `init_color(short, short, short, short)` with a bunch of arguments that I had explicitly cast to int. I tried wrapping it in a function and discovered that the compiler had implicitly converted the ints to shorts. Not the end of the world, but I thought a cast would set the type of the expression (instead of just, in this case, truncating the floating point numbers).
November 13, 2018
On Tuesday, 13 November 2018 at 17:50:20 UTC, Neia Neutuladh wrote:
> On Tue, 13 Nov 2018 09:46:17 -0500, Steven Schveighoffer wrote:
>> Maybe the biggest gripe here is that enums don't prefer their base types over what their base types convert to. In the developer's mind, the conversion is:
>> 
>> A => int => (via VRP) short
>> 
>> which seems more complex than just
>> 
>> A => int
>
> It affects explicit casts too:
>
>     void foo(short a) { writefln("short %s", a); }
>     void foo(int a) { writefln("int %s", a); }
>     foo(cast(int)0);  // prints: short 0
>
Ok, now that has got to be a bug. If you explicitly cast the number to an int, then you expect the int overload of the function to be called.

-Alex

November 13, 2018
On Tue, 13 Nov 2018 17:53:27 +0000, 12345swordy wrote:
> Ok, now that has got to be a bug. If you explicit cast the number to an integer then you expect the overload function with int to be called.
> 
> -Alex

...my mistake, I can't reproduce that anymore. Pretend I didn't say anything.
November 13, 2018
On Monday, 12 November 2018 at 10:05:09 UTC, Jonathan M Davis wrote:
>
> *sigh* Well, I guess that's the core issue right there. A lot of us would strongly disagree with the idea that bool is an integral type and consider code that treats it as such as inviting bugs. We _want_ bool to be considered as being completely distinct from integer types. The fact that you can ever pass 0 or 1 to a function that accepts bool without a cast is a problem in and of itself. But it doesn't really surprise me that Walter doesn't agree on that point, since he's never agreed on that point, though I was hoping that this DIP was convincing enough, and its failure is certainly disappointing.
>
> - Jonathan M Davis

Well, I think the DIP was too narrow in its thinking - restricting itself to bool.

There is a bigger picture, which is more important.

Fact 1 - Implicit conversions are nothing more than a weakening of type safety.

Fact 2 - A weakening of type safety can (and often does) contribute to bugs.

If anyone wants to dispute facts 1 and 2, please go ahead.

Ideally, a 'modern' programming language would have addressed these two facts already.

(e.g. Rust).

Unfortunately, D is very much tied to its C/C++ heritage, so 'modernizing' can be painful.

D can still modernize though, without breaking backward compatibility, by providing 'an option' for the programmer to explicitly declare their desire for greater type safety - and not just with bools.

Fact 3 - Everyone will benefit from greater type safety
         (disputable - at least for those that prefer convenience over correctness).

There is just no reason that I can see why any modern programming language should allow my bool to be implicitly converted to a char, int, short, byte, long, double, float... and god knows what else... and certainly not without some warning.
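
To illustrate - a minimal example (the functions are hypothetical, but every call compiles today without so much as a warning):

    void takeChar(char c) {}
    void takeInt(int n) {}
    void takeDouble(double d) {}

    void main()
    {
        bool b = true;
        takeChar(b);   // bool's range [0, 1] fits in char, so VRP allows it
        takeInt(b);    // bool is treated as just another integral type
        takeDouble(b); // and it happily converts on to floating point
    }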

Additionally, it really troubles me to see a programming language wanting to strut itself on the world's stage that can (and worse, just will) do things like that - no warning, no option to prevent it.

November 13, 2018
On Monday, 12 November 2018 at 22:07:39 UTC, Walter Bright wrote:
> On 11/12/2018 12:34 PM, Neia Neutuladh wrote:
>> Tell me more about this "consistency".
>
> int f(short s) { return 1; }
> int f(int i) { return 2; }
>
> enum : int { a = 0 }
> enum A : int { a = 0 }
>
> pragma (msg, f(a));   // calls f(int)
> pragma (msg, f(A.a)); // calls f(short)
>
> I.e. it's consistent.
>
> Here's how it works:
>
> f(a): `a` is a manifest constant of type `int`, and `int` is an exact match for f(int), and f(short) requires an implicit conversion. The exact match of f(int) is better.
>
> f(A.a): `a` is an enum of type `A`. `A` gets implicitly converted to `int`. The `int` then gets exact match to f(int), and an implicit match to f(short). The sequence of conversions is folded into one according to:
>
>     <implicit conversion> <exact>               => <implicit conversion>
>     <implicit conversion> <implicit conversion> => <implicit conversion>
>
> Both f(int) and f(short) match, because implicit conversions rank the same. To disambiguate, f(short) is pitted against f(int) using partial ordering rules,
> which are:
>
>     Can a short be used to call f(int)? Yes.
>     Can an int be used to call f(short)? No.
>
> So f(short) is selected, because the "Most Specialized" function is selected when there is an ambiguous match.
>
> Note: the "most specialized" partial ordering rules are independent of the arguments being passed.
>
> ---
>
> One could have <implicit conversion><exact> be treated as "better than" <implicit conversion><implicit conversion>, and it sounds like a good idea, but even C++, not known for simplicity, tried that and had to abandon it as nobody could figure it out once the code examples got beyond trivial examples.

This just seems like a bug to me. Any sane human being would expect all these calls to print the same thing, but it entirely depends on how you use the value.

    import std.stdio;

    void foo(byte v) { writeln("byte ", v); }
    void foo(int v)  { writeln("int ", v); }

    enum : int { a = 127 }
    enum A : int { a = 127 }

    void main()
    {
        A v = A.a;
        foo(A.a); // byte 127 <- these two are probably the best showcase of
        foo(v);   // int 127  <- what's wrong: the same value passed with the
                  //             same type, but a different result

        foo(a);   // int 127
        foo(127); // int 127
    }

https://run.dlang.io/is/aARCDo

November 13, 2018
On Monday, 12 November 2018 at 22:07:39 UTC, Walter Bright wrote:
> int f(short s) { return 1; }
> int f(int i) { return 2; }
>
> enum : int { a = 0 }
> enum A : int { a = 0 }
>
> pragma (msg, f(a));   // calls f(int)
> pragma (msg, f(A.a)); // calls f(short)
>
> *snip*
>
> So f(short) is selected, because the "Most Specialized" function is selected when there is an ambiguous match.
>
> Note: the "most specialized" partial ordering rules are independent of the arguments being passed.

Walter, this still doesn't change the fact that any _reasonable_ programmer would expect foo(A.a) to resolve, in effect, as foo(int(0)) - keeping the type information - rather than ignoring the type completely and matching on the literal value as if it were foo(0).

Honestly, while I (and most others in the community) wanted DIP1015, I'm not going to sweat it (even given the illogical reason for rejecting it), but marking issue 10560 as "correct behavior" is asinine and ignorant.
November 14, 2018
On Monday, 12 November 2018 at 22:07:39 UTC, Walter Bright wrote:
> One could have <implicit conversion><exact> be treated as "better than" <implicit conversion><implicit conversion>, and it sounds like a good idea, but even C++, not known for simplicity, tried that and had to abandon it as nobody could figure it out once the code examples got beyond trivial examples.

I wonder what these examples are? And what did C++ do instead? Because something tells me it didn't do what D is doing - an enum in C++ doesn't call different function overloads based on the constant's value.

It seems like most people don't even understand the trivial examples under D's current implementation.


November 14, 2018
On Wed, 14 Nov 2018 00:43:54 +0000, Rubn wrote:
> I wonder what these examples are? What did C++ do instead, cause something tells me it didn't do what D is doing. An enum in C++ doesn't call different function overloads based on the constant value.

Enums based on long long and unsigned long long give an ambiguous overload error. The unsigned int enum uses the unsigned int overload. Everything else uses the int overload.

Test code:

```
#include <iostream>
#include <climits>
using namespace std;
void foo(bool c) { cout << "bool " << c << endl; }
void foo(unsigned char c) { cout << "unsigned char " << c << endl; }
void foo(char c) { cout << "char " << c << endl; }
void foo(int c) { cout << "int " << c << endl; }
void foo(unsigned int c) { cout << "unsigned int " << c << endl; }
void foo(long long c) { cout << "long long " << c << endl; }
void foo(unsigned long long c) { cout << "unsigned long long " << c << endl; }
enum Bool : bool { b = 1 };
enum Char : char { c = CHAR_MAX };
enum UChar : unsigned char { d = UCHAR_MAX };
enum Short : short { e = SHRT_MAX };
enum UShort : unsigned short { f = USHRT_MAX };
enum Int : int { g = INT_MAX };
enum UInt : unsigned int { h = UINT_MAX };
enum LongLong : long long { i = LLONG_MAX };
enum ULongLong : unsigned long long { j = ULLONG_MAX };
int main(int argc, char** argv)
{
    foo(b);
    foo(c);
    foo(d);
    foo(e);
    foo(f);
    foo(g);
    foo(h);
    //foo(i);
    //foo(j);
}
```

Output:
    int 1
    int 127
    int 255
    int 32767
    int 65535
    int 2147483647
    unsigned int 4294967295
November 13, 2018
On 11/13/2018 3:29 PM, Rubn wrote:
> enum : int { a = 127 }

To reiterate, this does not create an anonymous enum type. 'a' is typed as 'int'. Technically:

`a` is a manifest constant of type `int` with a value of `127`.

> enum A : int { a = 127 }

`a` is a manifest constant of type `A` with a value of `127`.

Remember that `A` is not an `int`. It is implicitly convertible to an integer type that its value will fit in (Value Range Propagation). Other languages do not have VRP, so expectations carried over from how those languages behave do not apply to D. VRP is a nice feature; it is why:

    enum s = 100;     // typed as int
    enum t = 300;     // also typed as int
    ubyte u = s + 50; // works, no cast required,
                      // although the type is implicitly converted
    ubyte v = t + 50; // fails
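
The same rule, sketched with a typed enum:

    enum A : int { a = 127 }

    void main()
    {
        byte b = A.a;  // works: the value 127 is known at compile time
                       // and fits in a byte, so VRP allows the narrowing
        A v = A.a;
        // byte c = v; // fails: v is a runtime value of type A, so VRP
                       // cannot narrow it below its base type int
    }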

In your examples, it is crucial to understand the difference between a manifest constant of type `int` and one of type `A`.
November 14, 2018
On Wednesday, 14 November 2018 at 02:45:38 UTC, Walter Bright wrote:
> In your examples, it is crucial to understand the difference between a manifest constant of type `int` and one of type `A`.

Still doesn't change the fact that a typed enum should convert to its base type first (rather than being matched as if it were a bare literal).