November 18, 2019
On Sunday, 17 November 2019 at 21:42:37 UTC, Jon Degenhardt wrote:
> At a high level, I feel I've seen this pattern a number of times. When people starting with D run benchmarks as part of their initial experiments, they naturally start with the simplest and most straightforward programming approaches. Nothing wrong with this. It's a strength of D that quality code can be written quickly.

I think it signifies a deeper problem with this kind of benchmark. Most people would expect these benchmarks to measure idiomatic, "every day" code. In this case, most people would write their code with associative arrays. Sure, you can optimize it later, but by that logic you could just as well drop into an asm {} block and write hand-optimized code.
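To make the contrast concrete, here is a minimal sketch of the "every day" associative-array style, using a hypothetical word-count task (the exact benchmark isn't specified here, so the task is only illustrative):

```d
import std.stdio;
import std.algorithm : splitter;

void main()
{
    // Idiomatic "every day" D: count word frequencies with the
    // built-in associative array, no tuning whatsoever.
    size_t[string] counts;
    foreach (line; stdin.byLine)
        foreach (word; line.splitter(' '))
            ++counts[word.idup];   // idup: byLine reuses its buffer

    foreach (word, n; counts)
        writeln(word, "\t", n);
}
```

This is the kind of straightforward code a newcomer would benchmark first, before any of the hand-optimization discussed below.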

The same goes for Java: you can write a lot of the code in a very C-like way for a large speedup, but the result will be completely foreign to most Java programmers and not very representative of the language.
November 18, 2019
On Monday, 18 November 2019 at 21:35:08 UTC, JN wrote:

> I think it signifies a deeper problem with this kind of benchmark. Most people would expect these benchmarks to measure idiomatic, "every day" code. In this case, most people would write their code with associative arrays. Sure, you can optimize it later, but by that logic you could just as well drop into an asm {} block and write hand-optimized code.

If you're in a position where you care about "fast as possible" code, how fast your "every day" code runs isn't really helpful.

Now, I do understand that you might want to measure the performance of a piece of code written when you aren't optimizing for execution speed. Someone in that position is going to care about speed of execution and speed of development, among other things. The problem is that you can't learn anything useful in that case from a benchmark that reports execution time and nothing else.
November 18, 2019
On Monday, 18 November 2019 at 21:50:04 UTC, bachmeier wrote:
> On Monday, 18 November 2019 at 21:35:08 UTC, JN wrote:
>
>> I think it signifies a deeper problem with this kind of benchmark. Most people would expect these benchmarks to measure idiomatic, "every day" code. In this case, most people would write their code with associative arrays. Sure, you can optimize it later, but by that logic you could just as well drop into an asm {} block and write hand-optimized code.
>
> If you're in a position where you care about "fast as possible" code, how fast your "every day" code runs isn't really helpful.
>
> Now, I do understand that you might want to measure the performance of a piece of code written when you aren't optimizing for execution speed. Someone in that position is going to care about speed of execution and speed of development, among other things. The problem is that you can't learn anything useful in that case from a benchmark that reports execution time and nothing else.

Yes, there are often multiple goals behind a benchmark like this, goals that may not be explicitly identified.

There is also the question of what "idiomatic" means. This can be quite subjective, especially in multi-paradigm languages, and what "idiomatic" means to an individual may change as familiarity with the language grows. For D performance studies, one example is that it takes time to learn the lazy, range-based programming facilities. This is certainly one idiomatic D coding style, and it often brings much better memory management and real performance improvements. Code can of course move further from the common paradigms, all the way to inline assembly blocks. That makes it difficult to say when versions of a program in different languages are similarly idiomatic.
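As a small illustration of the lazy, range-based style mentioned above (a generic sketch, not tied to any particular benchmark in this thread):

```d
import std.stdio;
import std.algorithm : filter, map, sum;
import std.range : iota;

void main()
{
    // Lazy ranges: no intermediate arrays are allocated; each element
    // flows through filter/map on demand as sum pulls values through.
    auto total = iota(1, 1_000_000)
        .filter!(n => n % 2 == 0)
        .map!(n => cast(long) n * n)
        .sum;
    writeln(total);
}
```

This is just as much "idiomatic D" as loop-and-array code, which is exactly why comparing "idiomatic" versions across languages is slippery.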