January 31, 2022

On Monday, 31 January 2022 at 18:12:32 UTC, Ola Fosheim Grøstad wrote:
> On Monday, 31 January 2022 at 17:52:17 UTC, Ola Fosheim Grøstad wrote:
>> int x;
>> for(int i=1; i<99998; i++){
>>    x = next_monotonically_increasing_int_with_no_sideffect();
>> }
>> assert(x <= maximum_integer_value - 99998);
>
> Typo, should have been a "…" in the loop, assuming no side effects.

Another typo: the loop termination should remain "i<99999", to avoid further confusion… It is equivalent to:

int x;
for(int i=1; i<99999; i++){
    x = next_monotonically_increasing_int_with_no_sideffect();
    …
}
assert(x <= maximum_integer_value - 99998);

I hope I got it right now… Hm.

Of course, a more drastic example would be code that tests the negated conditional (always false); if you can then deduce the last value of x by computation, you can remove the loop entirely and keep only the assert statement.
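A minimal sketch of that transformation, reusing the hypothetical names from the example above and assuming each call returns exactly one more than the previous one:

    // Hypothetical declarations from the example above.
    enum maximum_integer_value = int.max;
    int next_monotonically_increasing_int_with_no_sideffect();

    void original()
    {
        int x;
        for (int i = 1; i < 99999; i++)
            x = next_monotonically_increasing_int_with_no_sideffect();
        assert(x <= maximum_integer_value - 99998);
    }

    void optimised()
    {
        // The first call pins down the sequence; the remaining 99997
        // iterations add one each, so the loop folds to arithmetic.
        int x = next_monotonically_increasing_int_with_no_sideffect() + 99997;
        assert(x <= maximum_integer_value - 99998);
    }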

I don't see how that would break the Go spec.

February 02, 2022

On Monday, 31 January 2022 at 08:38:28 UTC, Ola Fosheim Grøstad wrote:
> On Monday, 31 January 2022 at 07:33:00 UTC, Elronnd wrote:
>> I have no doubt that it comes up. What I am saying is that I do not believe it has an appreciable effect on any real software.
>
> Not if you work around it, but ask yourself: is it a good idea to design your language in such a way that the compiler is unable to remove this:
>
> if (x < x + 1) { … }
>
> Probably not.

It is a good idea. You can manually optimise that `if` out if performance is important. Manual optimisation is a must for performant code anyway, so it is not really a big deal. In the opposite case we would have undefined behaviour in @safe code.
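As a sketch of that manual optimisation: with D's defined wrap-around semantics, x < x + 1 is false only when x + 1 wraps, i.e. when x is int.max, so the condition can be rewritten in a form any compiler folds trivially (getX is a hypothetical stand-in):

    int getX();

    void f()
    {
        int x = getX();
        // Equivalent to x < x + 1 under wrap-around semantics,
        // but without the overflowing addition in the condition.
        if (x != int.max)
        {
            // ...
        }
    }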

We have array bounds checks for the exact same reason. They do penalise performance a bit, but they prevent undefined behaviour and can be manually optimised out when performance is more important than memory protection.
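A minimal sketch of such a manual opt-out, assuming @system code, since indexing through .ptr bypasses the runtime check:

    @system int sum(const int[] arr)
    {
        int total = 0;
        foreach (i; 0 .. arr.length)
            total += arr.ptr[i]; // .ptr indexing skips the bounds check
        return total;
    }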

February 02, 2022

On Friday, 28 January 2022 at 02:15:51 UTC, Paul Backus wrote:
> It's been argued in the past, on these forums, that these conversions are "just something you have to learn" if you want to do system-level programming. But if C++ programmers are still getting this stuff wrong, after all these years, perhaps the programmers aren't the problem. Is it possible that these implicit conversions are just too inherently error-prone for programmers to reliably use correctly?

As many downsides as warnings have in general, perhaps this is where we should reach for them. These conversions are probably too common to deprecate outright. Old code would keep compiling, but the language would still clearly endorse explicit conversions for new code.

We probably should not even warn on integer promotion. Code that explicitly casts in every place where promotion happens would be incredibly ugly. But we could warn on unsigned/signed conversions. Implicit conversions to larger integers with the same signedness are not an antipattern imo; those can remain.
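A sketch of how that split might look in practice (the warning behaviour here is the proposal above, not anything the compiler does today):

    short s = 1;
    int p = s + 1; // integer promotion: no warning proposed
    long l = p;    // widening, same signedness: no warning proposed

    int i = -1;
    uint u = i;    // signed to unsigned: would warn under this proposal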

February 02, 2022
On 2/2/2022 2:14 PM, Dukc wrote:
> We probably should not even warn on integer promotion. Code that explicitly casts in every place where promotion happens would be incredibly ugly.

It also *causes* bugs. When code gets refactored and the types change, those forced casts may no longer do what is desired, and can do things like unexpectedly truncating integer values.
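A sketch of that hazard (getValue is a hypothetical function whose return type was widened during a refactor):

    int getValue() { return 100_000; } // used to return a ushort-sized value

    void main()
    {
        // The old cast still compiles after the refactor, and now
        // silently truncates: v ends up as 34464, not 100000.
        ushort v = cast(ushort) getValue();
    }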

One of the (largely hidden, because it works so well) advances D has over C is Value Range Propagation, where automatic conversion of integers to smaller integers is only done if no bits can be lost.
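A minimal sketch of what VRP permits:

    void main()
    {
        int i = 1000;
        ubyte a = i & 0xFF; // ok: VRP proves the result is in 0..255
        // ubyte b = i;     // error: the full int range does not fit
    }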
February 02, 2022
On Wednesday, 2 February 2022 at 23:27:05 UTC, Walter Bright wrote:
> One of the (largely hidden because it works so well) advances D has over C is Value Range Propagation, where automatic conversions of integers to smaller integers is only done if no bits are lost.

D's behavior is worse than C's in actual use. This is a source of constant annoyance when doing anything with the byte and short types.

The value range propagation only works inside single expressions and is too conservative to help much in real code.
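A sketch of the limitation being described (exact behaviour may vary by compiler version; the point is that the known range is lost once a value passes through a plain int variable):

    void main()
    {
        ubyte a = 10, b = 20;

        ubyte c = (a & 0x0F) + (b & 0x0F); // ok: one expression, range 0..30

        int low = a & 0x0F;            // range information stops here
        // ubyte d = low + (b & 0x0F); // error: low is treated as any int
    }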
February 02, 2022
On 2/2/2022 3:37 PM, Adam Ruppe wrote:
> D's behavior is worse than C's in actual use.

How?


> The value range propagation only works inside single expressions and is too conservative to help much in real code.

I find it works well. For example,

    int i;
    byte b = i & 0xFF;

passes without complaint with VRP. As does:

    ubyte a, b, c;
    a = b | c;
February 03, 2022

On Wednesday, 2 February 2022 at 21:42:43 UTC, Dukc wrote:
> In the opposite case we would have undefined behaviour in @safe code.

People in the D community have the wrong understanding of what "undefined behaviour" means in a standard specification… this is getting tiresome, but to state the obvious: it does not mean that the compiler cannot provide guarantees. The fact that C++ chose performance over other options does not make this a necessity. It is a choice, not a consequence.

February 03, 2022

On Thursday, 3 February 2022 at 01:05:15 UTC, Walter Bright wrote:
> On 2/2/2022 3:37 PM, Adam Ruppe wrote:
>> The value range propagation only works inside single expressions and is too conservative to help much in real code.
>
> I find it works well. For example,
>
>     int i;
>     byte b = i & 0xFF;
>
> passes without complaint with VRP.

No, it doesn't pass: `Error: cannot implicitly convert expression i & 255 of type int to byte`.

> As does:
>
>     ubyte a, b, c;
>     a = b | c;

But `a = b + c` is rejected by the compiler. Maybe I'm expecting modular wrap-around arithmetic here? Or maybe I know the possible ranges of the `b` and `c` variables and am sure that no overflows are possible? Either way, the compiler requires an explicit cast. Why is it getting in the way?
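For reference, the two ways to satisfy the compiler here (the masked form is accepted because VRP can prove the result fits):

    ubyte a, b, c;
    a = cast(ubyte)(b + c); // explicit truncation back to ubyte
    a = (b + c) & 0xFF;     // also accepted: masked range 0..255 fits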

Also if the type is changed to uint in the same example, then the compiler is suddenly okay with that and doesn't demand casting to ulong. This is inconsistent. You will probably say that it's because of integer promotion and 32-bit size is a special snowflake. But if the intention is to catch bugs at the compilation stage, then adding two ubytes together and adding two uints together isn't very different (both of these operations can potentially overflow). What's the reason to be anal about ubytes?

Other modern programming languages can catch arithmetic overflows at runtime, and allow opting out of these checks in performance-critical parts of the code.
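D has something comparable as an opt-in library feature; a minimal sketch using core.checkedint from druntime:

    import core.checkedint : addu;

    uint checkedSum(uint a, uint b)
    {
        bool overflow;
        uint sum = addu(a, b, overflow); // sets the flag instead of silently wrapping
        assert(!overflow, "arithmetic overflow");
        return sum;
    }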

February 02, 2022
On 2/2/2022 6:25 PM, Siarhei Siamashka wrote:
> On Thursday, 3 February 2022 at 01:05:15 UTC, Walter Bright wrote:
>> On 2/2/2022 3:37 PM, Adam Ruppe wrote:
>>> The value range propagation only works inside single expressions and is too conservative to help much in real code.
>>
>> I find it works well. For example,
>>
>>     int i;
>>     byte b = i & 0xFF;
>>
>> passes without complaint with VRP.
> 
> No, it doesn't pass: `Error: cannot implicitly convert expression i & 255 of type int to byte`.

My mistake. b should have been declared as ubyte.

> 
>> As does:
>>
>>     ubyte a, b, c;
>>     a = b | c;
> 
> But `a = b + c` is rejected by the compiler.

That's because `b + c` may create a value that does not fit in a ubyte.
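Concretely, with values chosen for illustration:

    ubyte b = 200, c = 100;
    int sum = b + c; // 300: outside ubyte's 0..255 range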

> Maybe I'm expecting modular wrap-around arithmetic here? Or maybe I know the possible ranges of the `b` and `c` variables and am sure that no overflows are possible? Either way, the compiler requires an explicit cast. Why is it getting in the way?

Because C bugs where there are hidden truncations to bytes are a problem.


> Also if the type is changed to `uint` in the same example, then the compiler is suddenly okay with that and doesn't demand casting to `ulong`. This is inconsistent.

It follows the C integral promotion rules. This is for consistent arithmetic behavior with C.
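A short illustration of those rules as D applies them:

    ubyte a, b;
    static assert(is(typeof(a + b) == int));  // sub-int operands promote to int

    uint x, y;
    static assert(is(typeof(x + y) == uint)); // no promotion past 32 bits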

> You will probably say that it's because of integer promotion and 32-bit size is a special snowflake. But if the intention is to catch bugs at the compilation stage, then adding two ubytes together and adding two uints together isn't very different (both of these operations can potentially overflow). What's the reason to be anal about ubytes?

We do the best we can. There really is no solution that doesn't have its own issues.

> Other modern programming languages can catch arithmetic overflows at runtime, and allow opting out of these checks in performance-critical parts of the code.

They just have other problems. VRP makes many implicit conversions to bytes safely possible.
February 03, 2022

On 2/3/22 12:50 AM, Walter Bright wrote:
> On 2/2/2022 6:25 PM, Siarhei Siamashka wrote:
>> On Thursday, 3 February 2022 at 01:05:15 UTC, Walter Bright wrote:
>>> On 2/2/2022 3:37 PM, Adam Ruppe wrote:
>>>> The value range propagation only works inside single expressions and is too conservative to help much in real code.
>>>
>>> I find it works well. For example,
>>>
>>>     int i;
>>>     byte b = i & 0xFF;
>>>
>>> passes without complaint with VRP.
>>
>> No, it doesn't pass: `Error: cannot implicitly convert expression i & 255 of type int to byte`.
>
> My mistake. b should have been declared as ubyte.

Which is interesting, because this is allowed:

    int i;
    ubyte _tmp = i & 0xFF;
    byte b = _tmp;
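Presumably that works because D permits implicit conversion between same-sized integral types, so the second assignment never consults VRP at all; a minimal check of that assumption:

    ubyte u = 255;
    byte b = u; // accepted: same-size integral conversion; b is -1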

-Steve