January 31, 2022
On Monday, 31 January 2022 at 08:38:28 UTC, Ola Fosheim Grøstad wrote:
> Not if you work around it

Maybe I was insufficiently clear. I'm not talking about the case where you work around it, but the case where you leave the 'dead' code in.

> but ask yourself: is it a good idea to design your language
> in such a way that the compiler is unable to remove this

If you use modular arithmetic, then yes, you should not permit the compiler to remove that condition.
January 31, 2022

On Monday, 31 January 2022 at 09:48:42 UTC, Elronnd wrote:

> Maybe I was insufficiently clear. I'm not talking about the case where you work around it, but the case where you leave the 'dead' code in.

It can certainly be a problem in an inner loop. It is like with caches: at some point the loop hits the threshold where it is pushed out of the loop buffer in the CPU pipeline, and then it matters quite a bit.

Anyway, the argument you made is not suitable if you create a language that you want to be competitive in system-level programming. When programmers get hit, they get hit, and then it is an issue.

You can remove many individual optimizations with only a small effect on the average program, but each optimization you remove makes you less competitive.

Currently most C/C++ code bases are not written in a high-level fashion in performance-critical functions, but we are moving toward more high-level programming in performance code now that compilers are getting "smarter" and hardware is getting more diverse. The more diverse hardware you have, the more valuable high-quality optimization becomes. Or rather, the cost of tuning code is increasing…

> > but ask yourself: is it a good idea to design your language
> > in such a way that the compiler is unable to remove this
>
> If you use modular arithmetic, then yes, you should not permit the compiler to remove that condition.

In D you always use modular arithmetic and you also don't have constraints on integers. Thus you get extra bloat.

It matters when it matters, and then people ask themselves: why not use language X where this is not an issue?

January 31, 2022

On Monday, 31 January 2022 at 08:38:28 UTC, Ola Fosheim Grøstad wrote:

> On Monday, 31 January 2022 at 07:33:00 UTC, Elronnd wrote:
>
> > I have no doubt it comes up at all. What I am asking is that I do not believe it has an appreciable effect on any real software.
>
> Not if you work around it, but ask yourself: is it a good idea to design your language in such a way that the compiler is unable to remove this:
>
> if (x < x + 1) { … }
>
> Probably not.

One such language would be Go[0]; it doesn't seem to impact Docker, Kubernetes, gVisor, USB Armory, the Android GPU debugger, containerd, or TinyGo, to name some of the proper systems programming done in Go despite it not being designed as such.

[0] - "A compiler may not optimize code under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true." (Go Language Specification)

January 31, 2022
On Monday, 31 January 2022 at 10:06:40 UTC, Ola Fosheim Grøstad wrote:
> It matters when it matters

Please show me a case where it matters. I already asked for this: show me a case where a large-scale C or C++ application performs appreciably better because signed overflow is UB. It is easy to test this: simply pass -fwrapv to gcc or clang and they will stop treating overflow as UB.

I will add: does it also not matter when the compiler makes an assumption that does not accord with your intent, causing a bug?  I consider this more important than performance.
January 31, 2022

On Monday, 31 January 2022 at 10:23:59 UTC, Paulo Pinto wrote:

> One such language would be Go[0]; it doesn't seem to impact Docker, Kubernetes, gVisor, USB Armory, the Android GPU debugger, containerd, or TinyGo, to name some of the proper systems programming done in Go despite it not being designed as such.

You can still remove it, you just need to assert the condition before you get any side-effects (I/O), but you can delay that test so it occurs outside the loop.

There is a difference between a language spec and its consequences for what compilers can do.

January 31, 2022

On Monday, 31 January 2022 at 11:00:26 UTC, Elronnd wrote:

> Please show me a case where it matters. I already asked for this: show me a case where a large-scale C or C++ application performs appreciably better because signed overflow is UB. It is easy to test this: simply pass -fwrapv to gcc or clang and they will stop treating overflow as UB.

I've already answered this. You can say this about most individual optimizations. That does not mean that they don't have an impact when you didn't get them.

> I will add: does it also not matter when the compiler makes an assumption that does not accord with your intent, causing a bug? I consider this more important than performance.

What does this mean in this context?

January 31, 2022

On Monday, 31 January 2022 at 16:09:33 UTC, Ola Fosheim Grøstad wrote:

> On Monday, 31 January 2022 at 11:00:26 UTC, Elronnd wrote:
>
> > Please show me a case where it matters. I already asked for this: show me a case where a large-scale C or C++ application performs appreciably better because signed overflow is UB. It is easy to test this: simply pass -fwrapv to gcc or clang and they will stop treating overflow as UB.
>
> I've already answered this. You can say this about most individual optimizations. That does not mean that they don't have an impact when you didn't get them.

That sentence came out wrong.

What I meant is that missing an optimization is impactful where it matters. In this case I pointed out that this is most impactful in inner loops, but that current C/C++ codebases tend not to use high-level programming in performance-sensitive functions.

Meaning: the whole argument you are presenting is pointless. You will obviously not find conditionals that are always true/false in hand-tuned inner loops.

The crux is this: people don't want to hand tune inner loops if they can avoid it!

January 31, 2022

On Monday, 31 January 2022 at 16:05:23 UTC, Ola Fosheim Grøstad wrote:

> On Monday, 31 January 2022 at 10:23:59 UTC, Paulo Pinto wrote:
>
> > One such language would be Go[0]; it doesn't seem to impact Docker, Kubernetes, gVisor, USB Armory, the Android GPU debugger, containerd, or TinyGo, to name some of the proper systems programming done in Go despite it not being designed as such.
>
> You can still remove it, you just need to assert the condition before you get any side-effects (I/O), but you can delay that test so it occurs outside the loop.
>
> There is a difference between a language spec and its consequences for what compilers can do.

Compilers can do whatever they feel like, but one that doesn't follow "A compiler may not optimize code under the assumption that overflow does not occur" is no longer compliant with the language specification, no matter what.

Similarly, a Scheme compiler that doesn't do tail call elimination isn't a proper Scheme, as the standard spells out specifically how tail recursion is required to behave.

Naturally, the cowboy land of what ISO says, and the holes it leaves for UB and implementation-defined behaviour in C and C++ compilers, is another matter.

January 31, 2022

On Monday, 31 January 2022 at 17:22:42 UTC, Paulo Pinto wrote:

> Compilers can do whatever they feel like, but one that doesn't follow "A compiler may not optimize code under the assumption that overflow does not occur" is no longer compliant with the language specification, no matter what.

If you get the same output/response from the same input then you haven't deviated from the specification.

Thus if you have overflow checks on integer arithmetic, then this:

for(int i=1; i<99999; i++){
   int x = next_monotonically_increasing_int_with_no_sideffect();
   if (x < x+i){…}
}

has the same effect as this:

int x;
for(int i=1; i<99998; i++){
   x = next_monotonically_increasing_int_with_no_sideffect();
}
assert(x <= maximum_integer_value - 99998);

> Similarly, a Scheme compiler that doesn't do tail call elimination isn't a proper Scheme, as the standard spells out specifically how tail recursion is required to behave.

A well-written language specification should only specify the requirements for observable behaviour (including memory requirements and interfacing requirements). If it is observable in Scheme, then it makes sense; otherwise it makes no sense.

January 31, 2022

On Monday, 31 January 2022 at 17:52:17 UTC, Ola Fosheim Grøstad wrote:

> int x;
> for(int i=1; i<99998; i++){
>    x = next_monotonically_increasing_int_with_no_sideffect();
> }
> assert(x <= maximum_integer_value - 99998);

Typo: there should have been a "…" in the loop body, assuming no side effects.