March 27

On Monday, 25 March 2024 at 22:27:10 UTC, Nick Treleaven wrote:


On Sunday, 24 March 2024 at 08:23:03 UTC, Liam McGillivray wrote:


On Thursday, 30 November 2023 at 15:25:52 UTC, Jonathan M Davis wrote:


Because size_t is uint on 32-bit systems, using int with foreach works just fine aside from the issue of signed vs unsigned (which D doesn't consider to be a narrowing conversion, for better or worse). So, someone could use int with foreach on a 32-bit system and have no problems, but when they move to a 64-bit system, it could become a big problem, because there, size_t is ulong. So, code that worked fine on a 32-bit system could then break on a 64-bit system (assuming that it then starts operating on arrays that are larger than a 32-bit system could handle).
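A minimal sketch of the failure mode described above (the length value is hypothetical; on 64-bit, size_t is ulong, so a length can exceed uint.max):

```d
void main()
{
    // On a 64-bit system, size_t is ulong, so an array length can exceed uint.max.
    size_t bigLen = 0x1_0000_0005; // hypothetical length of a very large array
    uint i = cast(uint) bigLen;    // silently truncates the high bits away
    assert(i == 5);                // a loop using this index would stop far too early
}
```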

An interesting, not bad point, but I don't think it's enough to justify removing this language feature. It's just too unlikely of a scenario to be worth removing a feature which improves things far more often than not.

It's good to make any integer truncation visible rather than implicit - that's the main reason. And given that it would be safe to use a smaller integer type than size_t when the array length is statically known, some people might in future expect a specified index type to be verified as able to hold every index in the array.
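For the statically-known-length case, a sketch of what such verification could look like today, written by hand:

```d
void main()
{
    int[3] arr = [1, 2, 3];               // length is known at compile time
    static assert(arr.length <= int.max); // truncation is provably impossible
    foreach (i; 0 .. cast(int) arr.length)
        assert(arr[i] == i + 1);          // i is an int, safely
}
```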


Firstly, how often would it be that a program doesn't explicitly require more array elements than uint can hold, but is still capable of filling an array beyond that limit when the maximum array size allows it?

The 64-bit version of the program may be expected to handle more data than the 32-bit version. That could even be the reason why it was ported to 64-bit.



Maybe disallow it from functions marked @safe,

@safe is for memory-safety, it shouldn't be conflated with other types of safety.

I’d rather say @safe means no undefined behavior, or more precisely, the compiler gives errors for operations that might be UB. If all code you write is @safe, you don’t have UB in your program (given a perfect compiler).

Disallowing implicit integer truncation is not a UB issue (it’s not UB to implicitly truncate, i.e. it’s not like signed integer overflow in C), but it’s disallowed for other good reasons.

After reading this thread, I get the impression that size_t should be its own type, with the guarantee that size_t is equivalent to one of the other built-in unsigned integer types. The advantage would be that casts between other integer types and size_t would have to be explicit even if they can’t fail on the given platform. I’d require explicit casts for all of them, just to be simple and consistent. Conceptually, a size is not a n-bit number for a fixed n, unlike values of type uint or ulong.
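A hypothetical sketch of what a distinct size type could look like, modelled as a wrapper struct (the names `Size` and `to` here are illustrative, not a real proposal):

```d
// Sketch: a distinct size type that never converts implicitly,
// even when the conversion cannot fail on the current platform.
struct Size
{
    private size_t value;

    this(ulong v) { value = cast(size_t) v; }

    // Conversions out must always be spelled explicitly.
    T to(T)() const if (__traits(isIntegral, T)) { return cast(T) value; }
}

void main()
{
    auto len = Size(42);
    // uint n = len;      // would not compile: no implicit conversion
    uint n = len.to!uint; // explicit, even where it cannot fail
    assert(n == 42);
}
```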

It’s a similar idea to having char, wchar, and dchar separate from ubyte, ushort, and uint even if they’re the same under the hood. Heck, unlike size_t, they relate to the exact same integer types under the hood on every platform. So in some sense, the argument for having size_t be different is even stronger than having character types be different.

My bet is that Walter strongly disagrees with this, since he stands firmly on the position that Booleans are integers as well. That's not unreasonable; how much sense it makes just depends on where you come from.

March 28

On Monday, 11 December 2023 at 22:22:27 UTC, Quirin Schroll wrote:


On Wednesday, 29 November 2023 at 15:48:25 UTC, Steven Schveighoffer wrote:


On Wednesday, 29 November 2023 at 14:56:50 UTC, Steven Schveighoffer wrote:


I don’t know how many times I get caught with size_t indexes when I want them to be int or uint. It’s especially painful in the class I’m teaching, where I don’t yet want to explain why int doesn’t work there and have to introduce casting or use to!int. All for the possibility that I have an array larger than 2 billion elements.

I am forgetting why we removed this in the first place.

Can we have the compiler insert an assert at the loop start that the bounds are in range when you use a smaller int type? Clearly the common case is that the array is small enough for int indexes.
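What the proposed lowering might look like if written out by hand (a sketch, not the actual compiler output):

```d
void main()
{
    auto arr = [1, 2, 3];
    // The compiler would insert a check like this once, before the loop,
    // so the int index is known to cover every element.
    assert(arr.length <= int.max);
    foreach (i; 0 .. cast(int) arr.length)
        assert(arr[i] == i + 1);
}
```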

For those who are unaware, this used to work:

auto arr = [1, 2, 3];
foreach(int idx, v; arr) { ... }

But was removed at some point. I think it should be brought back (we are bringing stuff back now, right? Like hex strings?)

Couldn’t you write a function withIntIndex or withIndexType!int such that you can check the array is indeed short enough?

Yes, but... it is still in the compiler, just deprecated (as I realized later in this thread). We can just undeprecate it (with some extra checks added).

Using a range/opApply wrapper also is going to bloat the code a bunch for not much benefit.
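For reference, one possible shape of such an opApply wrapper, using the `withIndexType` name suggested above (illustrative only; it checks the length once via std.conv.to, which throws if the array is too long):

```d
import std.conv : to;

// Sketch: a wrapper that yields indexes converted to a chosen integer type I.
auto withIndexType(I, T)(T[] arr)
{
    static struct Result
    {
        T[] arr;
        int opApply(scope int delegate(I, ref T) dg)
        {
            auto len = arr.length.to!I; // throws if the length doesn't fit in I
            foreach (i; 0 .. len)
            {
                if (auto r = dg(i, arr[i]))
                    return r;
            }
            return 0;
        }
    }
    return Result(arr);
}

void main()
{
    int sum;
    foreach (int i, ref v; [10, 20, 30].withIndexType!int)
        sum += i;
    assert(sum == 3);
}
```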

This really is a case of a problem being solved that didn't exist.


March 28

On Sunday, 24 March 2024 at 16:33:06 UTC, Walter Bright wrote:


Just use:

foreach (i; 0 .. array.length)

and let the compiler take care of it for you.

The use case I have is you need to pass i to a function that takes an int. This is very common in C libraries (e.g. raylib).

Now, in this case, the solution is quite easy:

foreach(i; 0 .. cast(int)array.length) // assumed
foreach(i; 0 .. array.length.to!int) // checked

But the case is not as easy with a foreach over an array with an index:

foreach(int i, v; array)

In this case, without that mechanism, you have to cast i every time it's used, or have a goofy reassignment to another variable in each loop iteration.
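The two workarounds described above, side by side (`takesInt` here stands in for a C function such as a raylib call; assumes the array fits in an int):

```d
void takesInt(int x) { }

void main()
{
    auto arr = [1, 2, 3];

    foreach (i, v; arr)
        takesInt(cast(int) i); // cast at every use site

    foreach (i, v; arr)
    {
        int idx = cast(int) i; // or a reassignment in each iteration
        takesInt(idx);
    }
}
```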

This is one of those quality of life issues that would be nice to get back.


May 03


Why is this basic syntax of foreach not working?
I've been very annoyed for a few years now that it's no longer possible to have the index inside a foreach statement like in older times. What backwards development of the D language is this?

It looks great, it feels great to have the index in the code, so why suddenly remove it and break countless projects and code examples?
