February 07 Re: Deprecate implicit conversion between signed and unsigned integers
Posted in reply to monkyyy

FWIW, if you want the C# array idiom:
int count(T)(in T[] a)
{
    // C#-style length: a signed int, with a debug-time check that
    // the length actually fits in 32 bits.
    debug assert(a.length == cast(int)a.length);
    return cast(int)a.length;
}

long lcount(T)(in T[] a)
{
    // Signed 64-bit length; the check only fails for lengths above long.max.
    debug assert(cast(long)a.length >= 0);
    return cast(long)a.length;
}
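Usage, as a minimal sketch (assuming the two helpers above are in scope):

void main()
{
    int[] xs = [1, 2, 3];
    int n = xs.count;   // 3 as a signed int, like C#'s Array.Length
    long m = xs.lcount; // 3 as a signed long, like C#'s LongLength
}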
February 07 Re: Deprecate implicit conversion between signed and unsigned integers
Posted in reply to Walter Bright

On Thursday, 6 February 2025 at 09:10:41 UTC, Walter Bright wrote:

> [I'm not sure why a new thread was created?]
>
> This comes up now and then. It's an attractive idea, and seems obvious. But I've always been against it for multiple reasons.
>
> 1. Pascal solved this issue by not allowing any implicit conversions. The result was casts everywhere, which made the code ugly. I hate ugly code.

I hate ugly code too, but I'd rather have explicit casts.

> 3. Is `1` a signed int or an unsigned int?

In Haskell, it could be either and the type would be inferred. Or the programmer chooses: `1 :: Int`.

> 4. What happens with `p[i]`? If p is the beginning of a memory object, we want i to be unsigned. If p points to the middle, we want i to be signed. What should be the type of `p - q`? signed or unsigned?

Good questions.
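For what it's worth, D already answers the `p - q` half of that question: a pointer difference has the signed type `ptrdiff_t`. A small illustration of my own (not from the thread):

void main()
{
    int[4] buf;
    int* p = &buf[1];
    int* q = &buf[3];
    auto d = q - p; // 2; typeof(d) is ptrdiff_t, not size_t
    auto e = p - q; // -2; an unsigned type could not represent this
    static assert(is(typeof(d) == ptrdiff_t));
}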
February 07 Re: Deprecate implicit conversion between signed and unsigned integers
Posted in reply to Walter Bright

On Thursday, 6 February 2025 at 20:44:46 UTC, Walter Bright wrote:
> Having a function that searches an array for a value and returns the index of the array if found, and -1 if not found, is not a good practice.
>
> An index being returned should be size_t, and the not-found value should be size_t.max.
>
[...]
Or, keeping size_t, make the first index of an array 1 rather than 0, and return 0 if not found.

Like malloc.

Making the first array index 1 also eliminates a fruitful source of off-by-one errors.
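A minimal sketch of that convention (the name `find1` is made up here for illustration):

// Returns the 1-based index of the first match, or 0 if not found,
// mirroring malloc's use of a "nothing" value no valid result can equal.
size_t find1(T)(const(T)[] a, T value)
{
    foreach (i, x; a)
        if (x == value)
            return i + 1; // 1-based, so 0 is free to mean "not found"
    return 0;
}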
February 13 Re: Deprecate implicit conversion between signed and unsigned integers
Posted in reply to Walter Bright

On Thursday, 6 February 2025 at 20:52:53 UTC, Walter Bright wrote:

> On 2/6/2025 7:18 AM, Quirin Schroll wrote:
>
> We already do VRP checks for cases:

I didn't know that, but I hardly ever use floating-point types. However, that's not exactly VRP, but a useful check that compile-time-known values are representable in the target type. VRP means that while you normally need a cast to assign an integer to a narrower type, the cast can be omitted when the compiler can prove the value fits. What you're pointing out is that "micro-lossy narrowing conversions" are a compile error if they're definitely occurring.
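For readers unfamiliar with VRP, a small illustration of my own of the rule described above:

void main()
{
    int i = 300;
    byte  b = i & 0x7F; // compiles: VRP proves the result is in 0 .. 127
    ubyte u = i & 0xFF; // compiles: the result provably fits in 0 .. 255
    // byte c = i;      // error: cannot implicitly convert int to byte
}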
February 14 Re: Deprecate implicit conversion between signed and unsigned integers
Posted in reply to Walter Bright

On Thursday, 6 February 2025 at 09:10:41 UTC, Walter Bright wrote:

> [I'm not sure why a new thread was created?]
>
> This comes up now and then. It's an attractive idea, and seems obvious. But I've always been against it for multiple reasons.
>
> 1. Pascal solved this issue by not allowing any implicit conversions. The result was casts everywhere, which made the code ugly. I hate ugly code.

Let me guess: Pascal has no value-range propagation?
Java 23 does not have unsigned types, though. There are only operations that essentially reinterpret the bits of signed integer types as unsigned integers and operate on them. Signed and unsigned multiplication, division, and modulo are completely different operations.
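To make that concrete, here is an illustration of my own: the same bit pattern divides differently depending on signedness:

void main()
{
    int  a = -2;
    uint b = cast(uint)a;         // same bits: 0xFFFF_FFFE
    assert(a / 2 == -1);          // signed division
    assert(b / 2 == 0x7FFF_FFFF); // unsigned division: 2_147_483_647
}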
> 3. Is `1` a signed int or an unsigned int?

Ideally, it has its own type that implicitly converts to anything that can be initialized by the constant. Of course, a language can also simply pick a fixed type for its integer literals; D chooses the latter. None of those are a bad choice; tradeoffs everywhere.

> 4. What happens with `p[i]`? If p is the beginning of a memory object, we want i to be unsigned. If p points to the middle, we want i to be signed. What should be the type of `p - q`? signed or unsigned?

Two questions, two answers.

What happens with `p[i]` is a vague question: whether `i` should be signed or unsigned depends on what `p` points to, as the question itself lays out.

The type of `p - q` should be signed. If `p < q`, only a signed type can represent the difference. While it would be annoying for sure, it does make sense to use a function for pointer subtraction when one assumes the difference to be positive:
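A minimal sketch of such a function (the name `unsignedDifference` and the contract are my own assumptions):

// Subtracts q from p, assuming p >= q, and returns the distance as an
// unsigned size_t instead of the usual signed ptrdiff_t.
size_t unsignedDifference(T)(const(T)* p, const(T)* q)
in (p >= q, "unsignedDifference assumes p >= q")
{
    return cast(size_t)(p - q);
}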
As I see it, 2's complement for both signed and unsigned arithmetic is a straightforward choice D made to keep arithmetic simple, fast, and close to what the hardware does.
In my experience, when signed and unsigned are mixed, it points to a design issue.
Making something valid in C do something it can't do in C is a bad idea and invites bugs, that is true. Making questionable C things errors prima facie isn't. AFAICT, D for the most part sticks to: if it looks like C, it behaves like C or doesn't compile. Banning signed-to-unsigned conversions (unless VRP proves it's okay) simply falls into the latter box.
Of course VRP is great. For the most part, it means that if an implicit conversion compiles, it's because nothing weird happens, no data can be lost, etc. Signed-to-unsigned conversion breaks this expectation that VRP in fact co-created.
It's generally good. Almost no-one complains about it.

> Andrei and I went around and around on this, pointing out the contradictions. There was no solution. There is no "correct" answer for integer 2's complement arithmetic.

I don't really know what that means. Integer types in C and most languages derived from it (D included) inherited this oddity that addition and subtraction are 2's complement, but multiplication, division, and modulo are not (their signed and unsigned versions are genuinely different operations).

> Here's what I do: […] Stick with those and most of the problems will be avoided.

Sounds reasonable.
February 14 Re: Deprecate implicit conversion between signed and unsigned integers
Posted in reply to Kagamin

On Thursday, 6 February 2025 at 16:39:26 UTC, Kagamin wrote:

> On Monday, 3 February 2025 at 18:40:20 UTC, Atila Neves wrote:
>
> https://forum.dlang.org/post/pbhjffbxdqpdwtmcbikh@forum.dlang.org
>
> I agree with Bjarne, the problem is entirely caused by abuse of unsigned integers as positive numbers. And deprecation of implicit conversion is impossible due to this abuse: signed and unsigned integers will be mixed everywhere because signed integers are proper numbers and unsigned integers are everywhere due to abuse.

What would be a "proper number"? At best, signed and unsigned types represent various slices of the infinite integers.

> Counterexample is C# that uses signed integers in almost all interfaces and it just works.

C# uses signed integers because not all CLR languages support unsigned types. There's a `CLSCompliant` attribute for marking APIs that stay within the subset every CLR language supports, and unsigned types fall outside it.
February 15 Re: Deprecate implicit conversion between signed and unsigned integers
Posted in reply to Quirin Schroll

On Friday, 14 February 2025 at 00:09:14 UTC, Quirin Schroll wrote:

> What would be a "proper number"? At best, signed and unsigned types represent various slices of the infinite integers.

The problem is that they are incompatible slices that you have to mix due to abuse of unsigned integers everywhere. At best an unsigned integer gives you an extra bit, but in practice that doesn't cut it: when you want a bigger integer, you use a much wider integer, not a one-bit-bigger integer.

> C# uses signed integers because not all CLR languages support unsigned types.

It demonstrates that the problem is due to abuse of unsigned integers.
February 17 Re: Deprecate implicit conversion between signed and unsigned integers
Posted in reply to monkyyy | size_t is just an alias declaration. The compiler does not actually know it exists. |
February 17 Re: Deprecate implicit conversion between signed and unsigned integers
Posted in reply to Richard (Rikki) Andrew Cattermole

On 2/6/2025 8:26 PM, Richard (Rikki) Andrew Cattermole wrote:

> That could resolve this quite nicely.

For popcount, not for anything else. There are a lot of functions with `int` or `uint` parameters where the sign is meaningless to their operation.
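popcount illustrates the point well: counting set bits ignores sign entirely. A small example (using `core.bitop.popcnt`, which druntime does provide):

import core.bitop : popcnt;

void main()
{
    int x = -1;                        // bit pattern: all 32 bits set
    assert(popcnt(cast(uint)x) == 32); // the sign of x is irrelevant here
}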
February 17 Re: Deprecate implicit conversion between signed and unsigned integers
Posted in reply to Atila Neves

On 2/7/2025 4:50 AM, Atila Neves wrote:
> I hate ugly code too, but I'd rather have explicit casts.
Pascal required explicit casts. It sounded like a good idea. After a while, I hated it. It was so nice switching to C and leaving that behind.
(Did I mention that explicit casts also hide errors introduced by refactoring?)
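A hypothetical illustration of that parenthetical: a cast that was benign when written can silently truncate after a refactor widens a type:

void main()
{
    // Before refactoring, n was an int and the cast below was a no-op.
    long n = 4_294_967_297; // 2^32 + 1
    int  m = cast(int)n;    // still compiles after the refactor; m is now 1
    assert(m == 1);         // the truncation bug is hidden by the cast
}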