February 18, 2022
On Friday, 18 February 2022 at 04:33:39 UTC, forkit wrote:
> On the other hand, implicit conversion of uint to int is inherently unsafe, since the compiler cannot determine whether the coercion 'avoids undefined behaviour'.

The behavior of converting a uint to an int is well-defined in D: the uint's bit pattern is re-interpreted as a signed int using 32-bit two's complement notation. This conversion is valid for every possible pattern of 32 bits, and therefore for every possible uint. There is absolutely no possibility of undefined behavior.
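For concreteness, a minimal sketch of that reinterpretation (the asserts follow directly from two's complement):

```d
void main()
{
    uint u = uint.max;          // bit pattern 0xFFFF_FFFF
    int i = u;                  // implicit conversion: same bits, new interpretation
    assert(i == -1);            // all-ones is -1 in two's complement
    assert(cast(uint) i == u);  // the conversion round-trips losslessly
}
```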

"Undefined behavior" is a technical term with a precise meaning. [1] It does not simply mean "undesirable behavior" or "error-prone behavior" or even "behavior that violates the rules of conventional mathematics."

[1] https://en.wikipedia.org/wiki/Undefined_behavior
February 18, 2022
On 2/17/2022 9:25 PM, Timon Gehr wrote:
> Except perhaps for somewhat long arrays in a 32-bit program.

Can't have everything.

If you've got an array length longer than int.max, you're going to have trouble distinguishing a subtraction from a wraparound addition in any case. Dealing with that means one is simply going to have to pay attention to how two's complement integer arithmetic works on a computer.
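A minimal sketch of the ambiguity, assuming a 32-bit build (-m32) where `size_t` and `ptrdiff_t` are 32 bits:

```d
void main()
{
    size_t len = 0x8000_0002;  // a hypothetical length just above int.max
    ptrdiff_t slen = len;      // under -m32 this is already negative
    // In 32-bit two's complement, adding 0xFFFF_FFFF produces the same
    // bit pattern as subtracting 1, so a wraparound addition and a
    // subtraction cannot be told apart after the fact.
}
```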

Just wait till you get into floating point!
February 18, 2022
You can select the behavior you want with:

https://dlang.org/phobos/std_experimental_checkedint.html
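For example, a minimal sketch using the `Throw` hook (other hooks such as `Abort`, `Warn`, and `Saturate` select different policies):

```d
import std.experimental.checkedint;

void main()
{
    auto x = checked!Throw(int.max);
    try
        x += 1;                // the overflow is detected at runtime
    catch (Exception e) { }    // and surfaces here as an exception
}
```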
February 18, 2022
On Friday, 18 February 2022 at 04:32:56 UTC, Walter Bright wrote:
>     ptrdiff_t len = array.length;

The problem is remembering to do that, particularly in cases where the unsigned value is an inferred function result, or for an index involving $.

We need an error, not an implicit conversion. I expect you'll reply that this will force users to cast, which can introduce bugs if the source type changes. The solution to that is to encourage using e.g. std.conv.signed:
https://dlang.org/library/std/conv/signed.html
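For instance, a small sketch of how `signed` tracks the source type where a hard-coded `cast(int)` would not:

```d
import std.conv : signed;

void main()
{
    int[] a = [1, 2, 3];
    auto len = a.length.signed; // long on 64-bit targets, int on 32-bit
    assert(len == 3);
    assert(len - 4 < 0);        // arithmetic near zero now behaves as expected
}
```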

February 18, 2022
On Friday, 18 February 2022 at 05:47:13 UTC, Paul Backus wrote:
>
> The behavior of converting a uint to an int is well-defined in D: the uint's bit pattern is re-interpreted as a signed int using 32-bit two's complement notation. This conversion is valid for every possible pattern of 32 bits, and therefore for every possible uint. There is absolutely no possibility of undefined behavior.
>
> "Undefined behavior" is a technical term with a precise meaning. [1] It does not simply mean "undesirable behavior" or "error-prone behavior" or even "behavior that violates the rules of conventional mathematics."
>
> [1] https://en.wikipedia.org/wiki/Undefined_behavior

The 'convertibility' of a type may well be defined by the language, but the conversion itself may not be defined by the programmer.

I don't think it is unreasonable to extend the concept of 'undefined behaviour' to include behaviour not defined by the programmer.

But in any case...semantics aside...

In a language that does implicit conversion on primitive types, I would prefer that the programmer have the tools to undefine those implicit conversions.

That is all there is to my argument.

February 18, 2022
On 18.02.22 09:05, Walter Bright wrote:
> On 2/17/2022 9:25 PM, Timon Gehr wrote:
>> Except perhaps for somewhat long arrays in a 32-bit program.
> 
> Can't have everything.
> ...

Well, I guess you *could* just use `long` instead of `ptrdiff_t`. (It seemed to me the entire point of this exercise was to do things in a way that's less error prone.)

> If you've got an array length longer than int.max,

Seems I likely won't have that (compiled with -m32):

```d
void main(){
    import core.stdc.stdlib;
    import std.stdio;
    writeln(malloc(size_t(int.max) + 1)); // null (int.max works)
    auto a = new ubyte[](int.max);        // out of memory error
}
```

> you're going to have trouble distinguishing a subtraction from a wraparound addition in any case.

Why? A wraparound addition is an addition where the result's sign differs from that of both operands. Seems simple enough.
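A sketch of that criterion as code (relying on D's wrapping two's complement semantics for signed addition):

```d
bool wrapped(int x, int y)
{
    immutable sum = x + y;  // wraps modulo 2^32 in D
    // The sign bit of (x ^ sum) is set iff x and sum differ in sign;
    // likewise for y. Wraparound means sum differs in sign from both.
    return ((x ^ sum) & (y ^ sum)) < 0;
}

void main()
{
    assert(wrapped(int.max, 1));   // positive + positive gave a negative
    assert(!wrapped(int.max, -1)); // an actual subtraction, no wraparound
}
```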

Of course, I can just sign-extend both operands so the total width precludes a wraparound addition. (E.g., just use `long`.)
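E.g., a small sketch of the widening approach:

```d
long safeSum(int x, int y)
{
    // A 64-bit addition of two sign-extended 32-bit values can never wrap.
    return long(x) + long(y);
}

void main()
{
    assert(safeSum(int.max, int.max) == 2L * int.max); // no wraparound
}
```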

> Dealing with that means one is simply going to have to pay attention to how integer 2-s complement arithmetic works on a computer.
> ...

Most of that is not too helpful, as it's not exposed by the language. (At least in D, signed arithmetic actually has two's complement semantics, but the hardware has some features to make dealing with two's complement convenient that are not really exposed by the programming language.)
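(druntime's `core.checkedint` intrinsics are about the closest the language gets to surfacing, e.g., the overflow flag; a sketch:)

```d
import core.checkedint : adds;

void main()
{
    bool overflow;
    immutable r = adds(int.max, 1, overflow); // the wrapped result,
    assert(overflow);                         // plus the flag the hardware set
}
```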

In any case, I can get it right. The scenario I had in mind is competent programmers having to spend time debugging a weird issue and ultimately fixing some library dependency that silently acquires funky behavior once the data gets a bit bigger than what's in the unit tests, because the library authors blindly followed a `ptrdiff_t` recommendation they once saw in the forums. It's unlikely to happen to me personally, as I currently see little reason to write 32-bit programs, even less 32-bit programs dealing with large arrays, but it seemed to me that "works" merited some minor qualification, as you kind of went out of your way to explicitly use the sometimes overly narrow `int` on 32-bit machines. ;)

Especially given that QA might mostly happen on 64-bit builds, that's probably quite risky in some cases.
February 18, 2022
On Friday, 18 February 2022 at 09:27:08 UTC, Nick Treleaven wrote:
> We need an error, not an implicit conversion. I expect you to say that will force users to cast, which can introduce bugs if the source type changes.

In fact, a cast that changes both the integer size and the signedness at the same time could be made an error as well.
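A sketch of the idea (a hypothetical rule, so the rejected line is only indicative): the two-step form changes one property per cast:

```d
void main()
{
    ulong x = ulong.max;
    // int y = cast(int) x;        // would become an error under this rule:
    //                             // size and signedness change in one step
    int y = cast(int) cast(long) x; // signedness first, then size
    assert(y == -1);
}
```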

> The solution to that is to encourage using e.g. std.conv.signed:
> https://dlang.org/library/std/conv/signed.html

Of course, these would be easier to use if they were in object.d.
February 19, 2022

On Friday, 18 February 2022 at 10:13:07 UTC, forkit wrote:
> In a language that does implicit conversion on primitive types, I would prefer that the programmer have the tools to undefine those implicit conversions.

The easy solution is to have LDC and GDC implement a command line switch that restricts unsigned-to-signed conversions without a cast?

February 19, 2022

On Saturday, 19 February 2022 at 09:49:20 UTC, Ola Fosheim Grøstad wrote:
> On Friday, 18 February 2022 at 10:13:07 UTC, forkit wrote:
>> In a language that does implicit conversion on primitive types, I would prefer that the programmer have the tools to undefine those implicit conversions.
>
> The easy solution is to have LDC and GDC implement a command line switch that restricts unsigned-to-signed conversions without a cast?

Or the opposite…

February 20, 2022
On Saturday, 5 February 2022 at 02:43:27 UTC, Walter Bright wrote:
> On 2/4/2022 3:51 PM, Adam Ruppe wrote:
>> 
>> To reiterate:
>> 
>> C's rule: int promote, DO allow narrowing implicit conversion.
>> 
>> D's rule: int promote, do NOT allow narrowing implicit conversion unless VRP passes.
>> 
>> My proposed rule: int promote, do NOT allow narrowing implicit conversion unless VRP passes OR the requested conversion is the same as the largest input type (with literals excluded unless their value is obviously out of range).
>
> We considered that and chose not to go that route, on the grounds that we were trying to minimize invisible truncation.
>

I do like Adam's proposal as well. If you're adding two shorts together and assigning them back to a short, there isn't really any surprising truncation happening; it's more like any other integer overflow:

```d
int a = 0x6000_0000;
int b = a + a; // overflows int
short c = 0x6000;
short d = c + c; // overflows short under Adam's proposal; a compile error today
```

I can't see why that overflow would be any more surprising with `short` than with an `int`.

One thing that also speaks for the proposal is 16-bit programming. Yes, I know that D is not designed for sub-32-bit targets, so 16-bit support should be a secondary concern, but remember that D can already do that to some extent: https://forum.dlang.org/post/kctkzmrdhocsfummllhq@forum.dlang.org .

> P.S. as a pragmatic programmer, I find very little use for shorts other than saving some space in a data structure. Using shorts as temporaries is a code smell.