On Monday, 31 January 2022 at 09:48:42 UTC, Elronnd wrote:
> Maybe I was insufficiently clear. I'm not talking about the case where you work around it, but the case where you leave the 'dead' code in.
It can certainly be a problem in an inner loop. This is like with caches: at some point the loop crosses the threshold where it gets pushed out of the loop buffer in the CPU pipeline, and then it matters quite a bit.
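As a rough sketch of the kind of case I mean (the function and its saturating behaviour are made up for illustration): a hot summation loop carrying a wraparound check that the programmer knows will never fire, but which the compiler has to keep because overflow is defined, so the extra compare-and-branch sits in the inner loop and counts against whatever loop-buffer budget the microarchitecture has.

```d
// Hypothetical hot loop (illustrative only). The overflow check never fires
// for the caller's data, but with defined wraparound the compiler cannot
// prove that, so the comparison and branch stay inside the inner loop.
uint sumSaturating(const(uint)[] data)
{
    uint acc = 0;
    foreach (x; data)
    {
        if (acc + x < acc)   // unsigned wraparound test; well-defined, not removable
            return uint.max; // saturate instead of wrapping
        acc += x;
    }
    return acc;
}
```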
Anyway, that argument does not hold if you are creating a language that you want to be competitive for systems-level programming. When programmers get hit, they get hit, and then it is an issue.
You can remove many individual optimizations with only a small effect on the average program, but each optimization you remove makes you less competitive.
Currently most C/C++ code bases are not written in a high-level fashion in performance-critical functions, but we are moving towards more high-level programming in performance code now that compilers are getting "smarter" and hardware is getting more diverse. The more diverse the hardware, the more valuable high-quality optimization becomes. Or rather, the cost of hand-tuning code is increasing…
> > but ask yourself: is it a good idea to design your language in such a way that the compiler is unable to remove this
If you use modular arithmetic, then yes: the condition is semantically meaningful, so you must not permit the compiler to remove it.
In D you always use modular arithmetic, and you also have no constraints on integers. Thus you get extra bloat.
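A minimal sketch of what I mean, assuming the two's-complement wraparound D gives you for both signed and unsigned integers: the comparisons below are meaningful for exactly one input value each, so the compiler has to emit them, whereas a C/C++ compiler can legally fold the signed variant to false because signed overflow is undefined there.

```d
// With D's modular arithmetic both checks are well-defined and must be kept:
bool wouldWrap(uint x)
{
    return x + 1 < x;  // true exactly when x == uint.max
}

bool wouldWrapSigned(int x)
{
    return x + 1 < x;  // true exactly when x == int.max (wraps to int.min)
}
```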
It matters when it matters, and then people ask themselves: why not use language X where this is not an issue?