February 18
On Monday, 17 February 2025 at 22:24:37 UTC, Walter Bright wrote:
> On 2/17/2025 1:06 AM, Atila Neves wrote:
>>> (Did I mention that explicit casts also hide errors introduced by refactoring?)
>> 
>> `cast(typeof(foo)) bar`?
>
> That can work, but when best practices mean adding more code, the result is usually failure.
>
> Also, what if `foo` changes to something not anticipated by that cast?

Compilation or test failure, probably.
February 19
On Wednesday, February 5, 2025 4:43:37 AM MST Quirin Schroll via dip.ideas wrote:
> Those are annoying, yes. Especially unary operators. If you asked me right now what `~x` returns on a small integer type, I honestly don’t know.

IIRC, _all_ operations on integer types smaller than int get converted to int, and then if the compiler can determine for certain that the result would fit in a smaller type, then it can be implicitly converted to the smaller type, but in most cases, it can't know that. ~x would probably implicitly convert, but like you, I'd have to test it.
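As a rough sketch of what the promotion rule does with a plain arithmetic operation (whether a given result converts back implicitly is up to VRP, and the masking line is just illustrative):

    void main()
    {
        ubyte a = 10, b = 20;
        // The operands are promoted to int, so the result of a + b is an int.
        static assert(is(typeof(a + b) == int));
        // ubyte c = a + b;        // rejected: the compiler can't prove the result fits
        ubyte d = (a + b) & 0xFF;  // accepted: VRP can see the masked result fits a ubyte
    }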

> D has C’s rules because of one design decision early on: If it looks like C, it acts like C or it’s an error.

Yes, but the issue with cases like this is more that they could be errors when they're not rather than us looking to change the behavior to something else.

- Jonathan M Davis
February 19
On Monday, February 17, 2025 3:24:37 PM MST Walter Bright via dip.ideas wrote:
> On 2/17/2025 1:06 AM, Atila Neves wrote:
> >> (Did I mention that explicit casts also hide errors introduced by refactoring?)
> >
> > `cast(typeof(foo)) bar`?
>
> That can work, but when best practices mean adding more code, the result is usually failure.
>
> Also, what if `foo` changes to something not anticipated by that cast?

That's part of why, if I were creating a new language, I'd want a level of conversion in between implicit and explicit, though I don't have a good name for the idea, since "explicit implicit casts" isn't exactly a good one. But essentially, it would be nice to have a defined set of conversions like we get with implicit casts, except that they don't actually happen implicitly. Rather, you use some sort of explicit cast to tell the compiler that you want the conversion to occur, but it only allows that subset of "implicit" conversions rather than being the blunt instrument that casts typically are, which will happily do things like reinterpret the memory.

But of course, we don't have anything like that in D, and it probably wouldn't make sense to retrofit it in at this point, though we could certainly define more restrictive casts via templated functions (e.g. like C++ does with stuff like dynamic_cast and const_cast) in order to allow a particular piece of code to be more selective about the casting that it allows, so that it can have a cast without risking it turning into a reinterpret cast or whatnot.
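A rough sketch of that kind of restricted cast might look like the following (the name implicitCast is made up here; nothing like it exists in Phobos). It only accepts conversions that the language would already allow implicitly, so it documents intent without being able to degrade into a reinterpretation:

    // Only performs conversions that would already be legal implicitly.
    To implicitCast(To, From)(From value)
        if (is(From : To)) // From must implicitly convert to To
    {
        return value;
    }

    void main()
    {
        int i = 42;
        long l = implicitCast!long(i);      // fine: int -> long is an implicit conversion
        // auto p = implicitCast!(void*)(i); // rejected by the constraint: int does not
                                             // implicitly convert to void*
    }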

As for converting between signed and unsigned... I'm definitely mixed on this one. I follow essentially the rules that you mentioned for using signed vs unsigned, but I _have_ been bitten by this (quite recently in fact), and it was hard to catch. On the other hand, I don't know how many casts would be required in general if we treated conversions between signed and unsigned as narrowing conversions and thus required a cast. Since I mostly just use unsigned via size_t (there are exceptions, but they're rare), I suspect that I wouldn't need many casts in my code, but I don't know. And the code that I got bitten with recently was templated, which could make handling it trickier (though in this case, I could have just cast to long, and that's what I needed to do anyway).
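The generic shape of that kind of trap looks something like this (just an illustrative snippet, not the code that actually bit me):

    void main()
    {
        int i = -1;
        uint u = 0;
        // The int operand is implicitly converted to uint, so -1 becomes
        // uint.max and the comparison silently goes the "wrong" way.
        assert(!(i < u));
        // Comparing via long (wide enough for both ranges) gives the answer
        // that was actually intended.
        assert(cast(long) i < u);
    }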

My guess is that we'd be better off with requiring the casts, but I don't know. It _is_ arguably trading off one set of bugs for another, but it would also force you to think about what you want with any particular conversion rather than silently doing something that you don't necessarily want. Casts do become more problematic with refactoring, but the lack of casts is similarly problematic, since those also can change behavior silently. It's just for a different set of types. Realistically, I would expect that some code would have fewer bugs with the cast requirement, and other code would have more, but I would _guess_ (based on how I code at least) that the net result would be fewer.

- Jonathan M Davis
February 20
On Thursday, 20 February 2025 at 03:14:08 UTC, Jonathan M Davis wrote:
> IIRC, all operations on integer types smaller than int get converted to int,

Yup.

> ~x would probably implicitly convert, but like you, I'd have to test it.

~ on small types will generate a lot of higher set bits, so no, it will NOT convert back to same type. Same problem with -

This is so bad, I'm honestly surprised that it works with +
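A quick illustration of what happens (just a sketch):

    void main()
    {
        ubyte b = 0b0000_1111;
        auto r = ~b;          // b is promoted to int before the complement
        static assert(is(typeof(r) == int));
        assert(r == -16);     // i.e. 0xFFFF_FFF0: the upper 24 bits are set too
        assert(cast(ubyte) r == 0b1111_0000); // truncating recovers the 8-bit complement
    }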

February 20
On Thursday, February 20, 2025 1:35:10 AM MST Dom DiSc via dip.ideas wrote:
> On Thursday, 20 February 2025 at 03:14:08 UTC, Jonathan M Davis wrote:
> > ~x would probably implicitly convert, but like you, I'd have to test it.
>
> ~ on small types will generate a lot of higher set bits, so no, it will NOT convert back to same type. Same problem with -

Actually, now that I think about it more, in this case, doing the operation on int produces a result that is _very_ different from what you'd get by doing it directly on (u)byte or (u)short. With most arithmetic operations, the result is the same, except that operating on int avoids the overflow issues that you'd get with large byte or short values if you operated directly on the smaller type (though of course, casting back can then truncate the result). With ~, though, the result really is quite different. I don't even recall the last time that I used ~, and I clearly didn't think it through enough, since I'm used to the result being the same so long as it fits.

> This is so bad, I'm honestly surprised that it works with +

It doesn't work. This fails to compile

    byte b1 = 42;
    byte b2 = b1 + 120;

complaining that it can't convert from int to byte. The same happens with

    byte b1 = 0;
    byte b2 = ~b1;

However, this does compile

    byte b1 = 42;
    byte b2 = ~b1;

So, I guess that it's doing enough data flow analysis to see that b1 is 42 and that ~b1 would fit into a byte, so it allows the conversion. Curiously though,

    byte b1 = 42;
    byte b2 = b1 + 1;

does not compile even though the result would fit, whereas

    byte b1 = 42;
    byte b2 = b1 + 0;

does compile. So, it would appear that VRP is being a tad weird with its decisions, but it does seem to be rejecting the result when it wouldn't fit (and of course, if it doesn't know the values, it's going to have to assume that the result doesn't fit).

- Jonathan M Davis
1 day ago
On Monday, 3 February 2025 at 18:40:20 UTC, Atila Neves wrote:
> https://forum.dlang.org/post/pbhjffbxdqpdwtmcbikh@forum.dlang.org
>
> On Sunday, 12 May 2024 at 13:32:36 UTC, Paul Backus wrote:
>> D inherited these implicit conversions from C and C++, where they are widely regarded as a source of bugs.
>>
>> [...]
>
> My bias is to not like any implicit conversions of any kind, but I'm not sure I can convince Walter of that.

Hello,

Wouldn't it be nice to deprecate unary minus operator for unsigned types? It typically produces an unsigned -> signed -> unsigned conversion that does not make much sense, in my humble opinion.

1 day ago
On 11/03/2025 11:23 PM, Olivier Pisano wrote:
> On Monday, 3 February 2025 at 18:40:20 UTC, Atila Neves wrote:
>> https://forum.dlang.org/post/pbhjffbxdqpdwtmcbikh@forum.dlang.org
>>
>> On Sunday, 12 May 2024 at 13:32:36 UTC, Paul Backus wrote:
>>> D inherited these implicit conversions from C and C++, where they are widely regarded as a source of bugs.
>>>
>>> [...]
>>
>> My bias is to not like any implicit conversions of any kind, but I'm not sure I can convince Walter of that.
> 
> Hello,
> 
> Wouldn't it be nice to deprecate unary minus operator for unsigned types?  It typically produces an unsigned -> signed -> unsigned conversion that does not make much sense, in my humble opinion.

Constants such as ``-1`` are used quite often with unsigned types.

Especially in C style API's for errors.
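E.g. the usual C-style pattern still works in D because the int literal implicitly converts to uint (this snippet is just an illustration, not any particular API):

    // Hypothetical C-style API: 0xFFFFFFFF ("-1") marks failure.
    uint findIndex(const uint[] haystack, uint needle)
    {
        foreach (i, value; haystack)
            if (value == needle)
                return cast(uint) i;
        return -1; // the int literal converts implicitly; same value as uint.max
    }

    void main()
    {
        assert(findIndex([1u, 2, 3], 7) == uint.max);
    }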

21 hours ago
On Tuesday, 11 March 2025 at 10:27:38 UTC, Richard (Rikki) Andrew Cattermole wrote:
> Constants such as ``-1`` are used quite often with unsigned types.
>
> Especially in C style API's for errors.

I wish everybody in C would use ~0 (or better: ~0u) instead of cast(uint)-1. It represents the same bit pattern, but without relying on an overflow that just happens to produce a useful result :-(

But in D we have (more verbose, but explicitly stating the intention) uint.max, yeay!
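And just as a sanity check, all three spellings do agree on the bit pattern:

    static assert(~0u == uint.max);
    static assert(cast(uint) -1 == uint.max);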
21 hours ago
On Tuesday, 11 March 2025 at 10:27:38 UTC, Richard (Rikki) Andrew Cattermole wrote:
> Constants such as ``-1`` are used quite often with unsigned types.
>
> Especially in C style API's for errors.

-1 is a literal of type int, which is perfectly fine.

I was referring to this:

    void main ()
    {
        import std.stdio;

        uint i = 5;
        writeln(-i); // prints '4294967291'
    }

18 hours ago
On Tuesday, 11 March 2025 at 10:27:38 UTC, Richard (Rikki) Andrew Cattermole wrote:
> On 11/03/2025 11:23 PM, Olivier Pisano wrote:
>> Wouldn't it be nice to deprecate unary minus operator for unsigned types?  It typically produces an unsigned -> signed -> unsigned conversion that does not make much sense, in my humble opinion.

I'd like that. The compiler error can suggest using `0 - u` instead if intended. dmd seems to treat that the same as `-u` even without the `-O` switch.
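A quick check of that equivalence (assuming the usual modulo-2^32 wraparound):

    void main()
    {
        uint u = 5;
        static assert(is(typeof(0 - u) == uint)); // the int literal is converted to uint
        assert(0 - u == -u);                      // both wrap to the same value
        assert(0 - u == 4_294_967_291);
    }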

> Constants such as ``-1`` are used quite often with unsigned types.
>
> Especially in C style API's for errors.

That would just be a signed to unsigned implicit conversion and should be unaffected by deprecating unary `-` on an unsigned expression.
