On Monday, 25 March 2024 at 22:27:10 UTC, Nick Treleaven wrote:
>On Sunday, 24 March 2024 at 08:23:03 UTC, Liam McGillivray wrote:
>>On Thursday, 30 November 2023 at 15:25:52 UTC, Jonathan M Davis wrote:
>>>Because size_t is uint on 32-bit systems, using int with foreach works just fine aside from the issue of signed vs unsigned (which D doesn't consider to be a narrowing conversion, for better or worse). So, someone could use int with foreach on a 32-bit system and have no problems, but when they move to a 64-bit system, it could become a big problem, because there, size_t is ulong. So, code that worked fine on a 32-bit system could then break on a 64-bit system (assuming that it then starts operating on arrays that are larger than a 32-bit system could handle).
>>An interesting, not bad point, but I don't think it's enough to justify removing this language feature. It's just too unlikely of a scenario to be worth removing a feature which improves things far more often than not.
>It's good to make any integer truncation visible rather than implicit - that's the main reason. And given that it will be safe to use a smaller integer type than size_t when the array length is statically known (after https://github.com/dlang/dmd/pull/16334), some future people might expect a specified index type to be verified as able to hold every index in the array.
>>Firstly, how often would it be that a program wouldn't explicitly require more array values than uint can fit, but is still capable of filling the array beyond that in places when the maximum array size is enough?
>The 64-bit version of the program may be expected to handle more data than the 32-bit version. That could even be the reason why it was ported to 64-bit.
...
>>Maybe disallow it from functions marked @safe,
>@safe is for memory-safety, it shouldn't be conflated with other types of safety.
I’d rather say @safe means no undefined behavior, or more precisely, that the compiler gives errors for operations that might be UB. If all code you write is @safe, you don’t have UB in your program (given a perfect compiler).
Disallowing implicit integer truncation is not a UB issue (it’s not UB to implicitly truncate, i.e. it’s not like signed integer overflow in C), but it’s disallowed for other good reasons.
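To make the truncation hazard concrete, here is a small sketch of my own (not from the thread), assuming a 64-bit target where size_t is ulong:

```d
void main()
{
    auto arr = new int[](10);

    // Without an annotation, the index is inferred as size_t:
    foreach (i, x; arr)
        static assert(is(typeof(i) == size_t));

    // An explicit int index compiles, but only covers indices up to int.max.
    // If an array ever held more than int.max elements, the index would wrap:
    ulong bigIndex = cast(ulong) int.max + 1;
    assert(cast(int) bigIndex == int.min); // silently becomes negative
}
```

The point being that the truncation is well-defined (two's-complement wrap), so nothing in the @safe/UB story catches it; it just quietly produces a wrong index.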
After reading this thread, I get the impression that size_t should be its own type, with the guarantee that size_t is equivalent to one of the other built-in unsigned integer types. The advantage would be that casts between other integer types and size_t would have to be explicit even if they can’t fail on the given platform. I’d require explicit casts for all of them, just to be simple and consistent. Conceptually, a size is not an n-bit number for a fixed n, unlike values of type uint or ulong.
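A minimal sketch of what such a distinct type could look like; the name `Size` and its interface are purely my own illustration, not anything proposed in the thread:

```d
// Hypothetical: a size type that wraps the platform word but never
// converts implicitly to or from the fixed-width integer types.
struct Size
{
    private size_t value;

    // All conversions go through explicitly named entry/exit points:
    static Size from(T)(T v) { return Size(cast(size_t) v); }
    T to(T)() const { return cast(T) value; }

    // Arithmetic between sizes stays within the type:
    Size opBinary(string op)(Size rhs) const
    {
        return Size(mixin("value " ~ op ~ " rhs.value"));
    }
}

void main()
{
    auto n = Size.from(42);   // explicit construction
    uint u = n.to!uint;       // explicit conversion out
    assert(u == 42);
    // uint v = n;            // would be a compile error, by design
}
```

Even where `to!uint` cannot actually truncate (e.g. on 32-bit), the conversion stays visible at the call site, which is the consistency argument above.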
It’s a similar idea to having char, wchar, and dchar separate from ubyte, ushort, and uint even if they’re the same under the hood. Heck, unlike size_t, they relate to the exact same integer types under the hood on every platform. So in some sense, the argument for having size_t be different is even stronger than the one for having character types be different.
My bet is that Walter strongly disagrees with this, since he stands firmly on “Booleans are integers” as well. That’s not unreasonable; it’s just a question of how much sense it makes depending on where you come from.