February 13
On Monday, 12 February 2024 at 18:22:46 UTC, H. S. Teoh wrote:
> On Mon, Feb 12, 2024 at 05:26:25PM +0000, Nick Treleaven via Digitalmars-d-learn wrote:
>> On Friday, 9 February 2024 at 15:19:32 UTC, bachmeier wrote:
>> > It's been discussed many, many times. The behavior is not going to change - there won't even be a compiler warning. (You'll have to check with the leadership for their reasons.)
>> 
>> Was (part of) the reason because it would disrupt existing code? If that was the blocker then editions are the solution.
>
> Honestly, I think this issue is blown completely out of proportion. The length of stuff in any language needs to be some type. D decided on an unsigned type. You just learn that and adapt your code accordingly, end of story.  Issues like these can always be argued both ways, and the amount of energy spent in these debates far outweighs the trivial workarounds in code, of which there are many (use std.conv.to for bounds checks, just outright cast it if you know what you're doing (or are just foolhardy), use CheckedInt, etc.). And the cost of any change to the type now also far, far outweighs any meager benefits it may have brought.  It's just not worth it, IMNSHO.

I don't want the type of .length to change, that indeed would be too disruptive.
What I want is proper diagnostics, like any well-regarded C compiler gives, when I mix or implicitly convert unsigned and signed types.

Due to D's generic abilities, it's easier to make wrong assumptions about whether some integer is signed or unsigned. But even without that, C compiler writers accepted that this is a task for the compiler to diagnose rather than for humans, because it is too bug-prone to leave to humans.
February 13
On Tue, Feb 13, 2024 at 06:36:22PM +0000, Nick Treleaven via Digitalmars-d-learn wrote:
> On Monday, 12 February 2024 at 18:22:46 UTC, H. S. Teoh wrote:
[...]
> > Honestly, I think this issue is blown completely out of proportion. The length of stuff in any language needs to be some type. D decided on an unsigned type. You just learn that and adapt your code accordingly, end of story.  Issues like these can always be argued both ways, and the amount of energy spent in these debates far outweighs the trivial workarounds in code, of which there are many (use std.conv.to for bounds checks, just outright cast it if you know what you're doing (or are just foolhardy), use CheckedInt, etc.). And the cost of any change to the type now also far, far outweighs any meager benefits it may have brought.  It's just not worth it, IMNSHO.
> 
> I don't want the type of .length to change, that indeed would be too disruptive.  What I want is proper diagnostics, like any well-regarded C compiler gives, when I mix or implicitly convert unsigned and signed types.

I agree, mixing signed/unsigned types in the same expression ought to require a cast, and error out otherwise. Allowing them to be freely mixed, or worse, implicitly convert to each other, is just too error-prone.


> Due to D's generic abilities, it's easier to make wrong assumptions about whether some integer is signed or unsigned. But even without that, C compiler writers accepted that this is a task for the compiler to diagnose rather than for humans, because it is too bug-prone to leave to humans.

Indeed.


T

-- 
You only live once.
February 13
On Monday, 12 February 2024 at 19:56:09 UTC, H. S. Teoh wrote:
> But regardless, IMNSHO any programmer worth his wages ought to learn what an unsigned type is and how it works. A person should not be writing code if he can't even be bothered to learn how the machine that he's programming actually works.

I'd like to note that even C++, from C++20 onwards, has `std::ssize`, which returns the size as a signed type.

I do use lengths in arithmetic sometimes, and that currently leads to silent bugs.  On the other hand, since going from 16 bits to 32 and then 64, I have had exactly zero bugs in my user-side programs caused by some length being 2^31 or greater -- but at the same time not 2^32 or greater.  So, in D, I usually `to!int` or `to!long` them anyway.  Or cast in performance-critical places.

Another perspective.  Imagine a different, perfect world, where programmers had 64-bit integers and a 64-bit address space everywhere, from the start.  A clean slate: engineers and programmers designing their first hardware and languages, but with such sizes already feasible.  Kinda weird, but bear with me a bit.  Now, imagine someone proposing to make sizes unsigned.  Wouldn't that be a strange thing to do?  The benefit of having a universal arithmetic type for everything, from the ground up -- instead of two competing types producing bugs at glue points -- seems to far outweigh any potential gains.  Unsigned integers could have their small place, too, for bit masks and micro-optimizations and whatnot, but why sizes?  The few applications that really benefit from sizes in [2^63, 2^64) would be the odd ones out, deserving some workarounds.

Right now, though, we just have to deal with the legacy in software, hardware, and mindset, and with the fact that quite a few environments are still not 64-bit.

Ivan Kazmenko.

February 14
On Tuesday, 13 February 2024 at 23:57:12 UTC, Ivan Kazmenko wrote:
>
> I'd like to note that even C++, from C++20 onwards, has `std::ssize`, which returns the size as a signed type.
>
> I do use lengths in arithmetic sometimes, and that currently leads to silent bugs.  On the other hand, since going from 16 bits to 32 and then 64, I have had exactly zero bugs in my user-side programs caused by some length being 2^31 or greater -- but at the same time not 2^32 or greater.  So, in D, I usually `to!int` or `to!long` them anyway.  Or cast in performance-critical places.
>
> Another perspective.  Imagine a different, perfect world, where programmers had 64-bit integers and a 64-bit address space everywhere, from the start.  A clean slate: engineers and programmers designing their first hardware and languages, but with such sizes already feasible.  Kinda weird, but bear with me a bit.  Now, imagine someone proposing to make sizes unsigned.  Wouldn't that be a strange thing to do?  The benefit of having a universal arithmetic type for everything, from the ground up -- instead of two competing types producing bugs at glue points -- seems to far outweigh any potential gains.  Unsigned integers could have their small place, too, for bit masks and micro-optimizations and whatnot, but why sizes?  The few applications that really benefit from sizes in [2^63, 2^64) would be the odd ones out, deserving some workarounds.
>
> Right now, though, we just have to deal with the legacy in software, hardware, and mindset, and with the fact that quite a few environments are still not 64-bit.
>
> Ivan Kazmenko.

Personally, I don't have a problem with .length being unsigned. How do you have a negative length? My problem is that the language doesn't correctly compare signed and unsigned.

Earlier in the thread, people asked not to mention other languages, but I just learned that Carbon correctly compares signed and unsigned ints.

cheers

February 16

On Thursday, 8 February 2024 at 05:56:57 UTC, Kevin Bailey wrote:


How many times does the following loop print? I ran into this twice doing the AoC exercises. It would be nice if it Just Worked.

import std.stdio;

int main()
{
    char[] something = ['a', 'b', 'c'];

    for (auto i = -1; i < something.length; ++i)
        writeln("less than");

    return 0;
}

Try this:

import std.stdio;

/// Returns the length of `a` as a signed int, asserting that it fits.
int ilength(T)(in T[] a)
{
    assert(a.length <= int.max);
    return cast(int) a.length;
}

int main()
{
    char[] something = ['a', 'b', 'c'];

    for (auto i = -1; i < something.ilength; ++i)
        writeln("less than");

    return 0;
}
February 16
On Tuesday, 13 February 2024 at 23:57:12 UTC, Ivan Kazmenko wrote:
> I do use lengths in arithmetic sometimes, and that leads to silent bugs currently.  On the other hand, since going from 16 bits to 32 and then 64, in my user-side programs, I had a flat zero bugs because some length was 2^{31} or greater -- but at the same time not 2^{32} or greater.  So, in D, I usually `to!int` or `to!long` them anyway.  Or cast in performance-critical places.

I had a similar bug in C++: `find` returns the `npos` sentinel value when nothing is found; the result was assigned to a `uint` and then never matched `npos` in the comparison -- though it would have if both types had been signed.
February 17

On Wednesday, 14 February 2024 at 00:56:21 UTC, Kevin Bailey wrote:

> Personally, I don't have a problem with .length being unsigned. How do you have a negative length? My problem is that the language doesn't correctly compare signed and unsigned.

The length itself is technically the index of a non-existent element right after the array, and -1 is technically the index of a non-existent element right before the array. Hence, mechanically reversing the direction in which array elements are processed during refactoring can be dangerous if one is not careful.
