On 9 September 2015 at 16:00, qznc via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
On Wednesday, 9 September 2015 at 09:56:10 UTC, Ola Fosheim Grøstad wrote:
I think the better approach is to write up the same algorithms in a high level fashion (using generic templates on both sides) from the ground up using the same constructs and measure the ability to optimize.

That is a good idea if you want to measure compiler optimizations. Ideally, g++ and gdc should then always yield the same performance?

However, it answers the wrong question, imho.

Suppose you are considering D, with C/C++ as the stable alternative. D lures you with its high-level features. However, you know that sooner or later you will have to really optimize some hot spots. Will D impose a penalty there that C/C++ would not have?

Walter argues that there is no technical reason why D should be slower than C/C++. My experience with the benchmarks suggests there are such penalties. For example, there is no __builtin_ia32_cmplepd or __builtin_ia32_movmskpd like GCC has.

import gcc.builtins;  // OK, cheating. :-)
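For reference, here is what those two builtins do on the C side, as a minimal sketch using GCC's vector extension (GCC-specific, x86 with SSE2 assumed; the helper name `le_mask` is made up for illustration). `__builtin_ia32_cmplepd` does a per-lane `<=` compare of two double lanes, producing an all-ones mask where true, and `__builtin_ia32_movmskpd` packs the two sign bits into the low bits of an int:

```c
/* GCC vector type: two doubles in one 16-byte SSE register. */
typedef double v2df __attribute__((vector_size(16)));

/* Hypothetical helper: returns a 2-bit mask of which lanes satisfy a <= b.
   Bit 0 corresponds to lane 0, bit 1 to lane 1. */
int le_mask(v2df a, v2df b)
{
    /* cmplepd: lane-wise a <= b, all-ones where true, zero where false. */
    v2df m = __builtin_ia32_cmplepd(a, b);
    /* movmskpd: collect the sign bit of each lane into an int. */
    return __builtin_ia32_movmskpd(m);
}
```

For example, `le_mask((v2df){1.0, 3.0}, (v2df){2.0, 2.0})` yields 1: lane 0 passes (1.0 <= 2.0), lane 1 fails. With GDC, `import gcc.builtins;` gives D code access to the same intrinsics, which is what the "cheating" remark above refers to.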