February 22, 2016
On Monday, 22 February 2016 at 07:10:23 UTC, Kapps wrote:
> If you do want to test the differences between the range approach and the loop approach, something like:
> auto sumtest4(Range)(Range range) @safe pure {
> 	return range.reduce!((a, b) => a + b);
> }
> is a fairer comparison. I get results within 15% of sumtest2 with this using dmd. I think with ldc this would be identical, but the version in homebrew is too old to compile this.

Using LDC with the mir version of ndslice so it compiles, and the following code:
// sw is a std.datetime StopWatch; times, N, res4 and f come from the benchmark setup in the original post
sw.reset();
sw.start();
foreach (unused; 0..times) {
	for (int i = 0; i < N; ++i) {
		res4[i] = sumtest4(f[i]);
	}
}
t3 = sw.peek().msecs;

and

import std.algorithm.iteration : reduce; // reduce is also reachable via std.algorithm

auto sumtest4(Range)(Range range) {
	return range.reduce!((a, b) => a + b);
}

I get:
145 ms
19 ms
19 ms
19 ms

So, with LDC, there is no performance hit doing this. The only performance hit is when .sum uses a different algorithm for a more accurate result. Also, the LDC version appears to be roughly 5x faster than the DMD version.
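To spell out what is being compared, here is a minimal sketch (placeholder names, not the original benchmark code) of the three formulations involved:

import std.algorithm.iteration : reduce, sum;

// 1) explicit loop: naive left-to-right accumulation
float sumLoop(const(float)[] row) {
    float acc = 0.0f;
    foreach (x; row)
        acc += x;
    return acc;
}

// 2) range formulation (sumtest4): the same naive accumulation expressed with reduce;
//    without a seed, reduce starts from the first element
auto sumReduce(Range)(Range row) {
    return row.reduce!((a, b) => a + b);
}

// 3) std.algorithm's sum: picks pairwise summation for random-access floating-point
//    ranges, trading some speed for accuracy
auto sumStd(Range)(Range row) {
    return row.sum();
}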
February 22, 2016
First of all, I am pleasantly surprised by the rapid influx of helpful responses. The community here seems quite wonderful. In the interests of not cluttering the thread too much, since the advice given here has many commonalities, I will only try to respond once to each type of suggestion.

On Sunday, 21 February 2016 at 16:29:26 UTC, ZombineDev wrote:
> The problem is not with ranges, but with the particular algorithm used for summing. If you look at the docs (http://dlang.org/phobos-prerelease/std_algorithm_iteration.html#.sum) you'll see that if the range has random access, `sum` will use the pair-wise algorithm. About the second and third tests, the problem is with DMD, which should not be used when measuring performance (but only for development, because it has fast compile times).
> ...
> According to `dub --verbose`, my command-line was roughly this:
> ldc2 -ofapp -release -O5 -singleobj -w source/app.d
> ../../../../.dub/packages/mir-0.10.1-alpha/source/mir/ndslice/internal.d
> ../../../../.dub/packages/mir-0.10.1-alpha/source/mir/ndslice/iteration.d
> ../../../../.dub/packages/mir-0.10.1-alpha/source/mir/ndslice/package.d
> ../../../../.dub/packages/mir-0.10.1-alpha/source/mir/ndslice/selection.d
> ../../../../.dub/packages/mir-0.10.1-alpha/source/mir/ndslice/slice.d

It appears that I cannot use the GDC compiler for this particular problem because it uses a comparatively older version of the DMD frontend (I understand Mir requires >=2.068), but I did manage to get LDC working on my system after a bit of work. Since I've been using dub to manage my project, I used the default "release" build type. I also tried compiling manually with LDC, using the -O5 switch you mentioned. These are the results (I increased the iteration count to lessen the noise; the array is now 10000x20 and each function is run a thousand times):

            DMD    LDC (dub)    LDC (-release -enable-inlining -O5 -w -singleobj)
sumtest1:12067 ms  6899 ms      1940 ms
sumtest2: 3076 ms  1349 ms       452 ms
sumtest3: 2526 ms   847 ms       434 ms
sumtest4: 5614 ms  1481 ms       452 ms
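
For reference, the two LDC columns were produced roughly as follows (a sketch; the mir ndslice source files are the ones listed in the quote above):

dub build --build=release --compiler=ldc2
ldc2 -ofapp -release -enable-inlining -O5 -w -singleobj source/app.d <mir ndslice sources>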

The sumtest1, 2 and 3 functions are as given in the first post; sumtest4 uses the range.reduce!((a, b) => a + b) approach to enforce naive summation. Much to my satisfaction, the range.reduce version is now exactly as quick as the traditional loop, and while function inlining isn't quite perfect, the 4% performance penalty incurred by the 10_000 function calls (or whatever inlined form the function finally takes) is quite acceptable.

I do have to wonder, however, about the default settings of dub in this case. Having gone through its documentation, I might still not have guessed to try the compiler options you provided, thereby losing out on a 2-3x performance improvement. What build options did you use in your dub.json that it managed to translate to the correct compiler switches?
February 22, 2016
On Sunday, 21 February 2016 at 16:20:30 UTC, bachmeier wrote:
> First, a minor point, the D community is usually pretty careful not to frown on a particular coding style (unlike some communities) so if you are comfortable writing loops and it gives you the fastest code, you should do so.
>
> On the performance issue, you can see this related post about performance with reduce:
> http://forum.dlang.org/post/mailman.4829.1434623275.7663.digitalmars-d@puremagic.com
>
> This was Walter's response:
> http://forum.dlang.org/post/mlvb40$1tdf$1@digitalmars.com
>
> And this shows that LDC flat out does a better job of optimization in this case:
> http://forum.dlang.org/post/mailman.4899.1434779705.7663.digitalmars-d@puremagic.com

While I certainly do not doubt the open-mindedness of the D community, it was in part Walter Bright's remark in a keynote speech that "loops are bugs" that motivated me to look at D for a fresh approach to writing numerical code. For decades, explicit loops have been the only way to attain good performance for certain kinds of code in virtually all languages (discounting a few quirky high-level languages like MATLAB), and the notion that this need not be the case is quite attractive to many people, myself included.

While the point Walter makes, that there is no mathematical reason ranges should be slower than loops and that loops are generally easier to get wrong, is certainly true, D is the first general-purpose language I've ever seen that makes this sentiment come close to reality.
February 22, 2016
On Sunday, 21 February 2016 at 16:20:30 UTC, bachmeier wrote:
> On Sunday, 21 February 2016 at 14:32:15 UTC, dextorious wrote:
>> I had heard while reading up on the language that in D explicit loops are generally frowned upon and not necessary for the usual performance reasons.
>
> First, a minor point, the D community is usually pretty careful not to frown on a particular coding style (unlike some communities) so if you are comfortable writing loops and it gives you the fastest code, you should do so.
>
> On the performance issue, you can see this related post about performance with reduce:
> http://forum.dlang.org/post/mailman.4829.1434623275.7663.digitalmars-d@puremagic.com
>
> This was Walter's response:
> http://forum.dlang.org/post/mlvb40$1tdf$1@digitalmars.com
>
> And this shows that LDC flat out does a better job of optimization in this case:
> http://forum.dlang.org/post/mailman.4899.1434779705.7663.digitalmars-d@puremagic.com

I can't agree with that. Between `for` and `foreach`, you should choose the one that is more readable and understandable for the particular situation. It's the compiler's task to optimize such small things.
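For example, the two loops below express the same computation, and an optimizing compiler should generate essentially the same code for either; only readability differs (a toy snippet, not the benchmark from this thread):

import std.stdio : writeln;

void main() {
    double[] data = [1.0, 2.0, 3.0];

    // index-based loop: handy when the index itself is needed
    double total1 = 0.0;
    for (size_t i = 0; i < data.length; ++i)
        total1 += data[i];

    // element-based loop: reads more directly when it isn't
    double total2 = 0.0;
    foreach (x; data)
        total2 += x;

    writeln(total1, " ", total2); // both print 6
}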
February 23, 2016
On Monday, 22 February 2016 at 15:43:23 UTC, dextorious wrote:
> I do have to wonder, however, about the default settings of dub in this case. Having gone through its documentation, I might still not have guessed to try the compiler options you provided, thereby losing out on a 2-3x performance improvement. What build options did you use in your dub.json that it managed to translate to the correct compiler switches?

Your experience is exactly what the D community needs to get right. You've come in as an interested user with patience and initially D has offered slightly disappointing performance for both technical reasons and because of the different compilers. You've gotten to the right place in the end, but we need the path from point A to point B to be a lot smoother and more obvious so that more people get a good initial impression of D.

Every D user thread seems to go like this: someone starts with DMD, struggles a little, and hopefully gets LDC working once a list of slightly obscure compiler switches is offered. A standard algorithm performs disappointingly for somewhat valid technical reasons and clunkier alternatives are then deployed. We really need the standard algorithms to be fast and perhaps have separate ones for perfect technical accuracy.

What are your thoughts on D now? What would have helped you get to the right place much faster?
February 23, 2016
On Tuesday, 23 February 2016 at 11:10:40 UTC, ixid wrote:
> We really need the standard algorithms to be fast and perhaps have separate ones for perfect technical accuracy.
>

While I agree with most of what you're saying, I don't think we should prioritize performance over accuracy or correctness. Especially for numerics people, precision is very important, and it can make just as bad a first impression if we don't get this right. We can, however, make the note in the documentation (which already talks about performance) a bit more prominent: http://dlang.org/phobos/std_algorithm_iteration.html#sum
February 23, 2016
On Tuesday, 23 February 2016 at 11:10:40 UTC, ixid wrote:
> On Monday, 22 February 2016 at 15:43:23 UTC, dextorious wrote:
>> I do have to wonder, however, about the default settings of dub in this case. Having gone through its documentation, I might still not have guessed to try the compiler options you provided, thereby losing out on a 2-3x performance improvement. What build options did you use in your dub.json that it managed to translate to the correct compiler switches?
>
> Your experience is exactly what the D community needs to get right. You've come in as an interested user with patience and initially D has offered slightly disappointing performance for both technical reasons and because of the different compilers.

His concern is with the default settings of Dub. I've tried Dub and given up several times, and I've been using D since 2013. The community needs to provide real documentation. It's embarrassing that it's pushed as the official package manager and will soon be included with DMD.

February 23, 2016
On Tuesday, 23 February 2016 at 14:07:22 UTC, Marc Schütz wrote:
> On Tuesday, 23 February 2016 at 11:10:40 UTC, ixid wrote:
>> We really need the standard algorithms to be fast and perhaps have separate ones for perfect technical accuracy.
>>
>
> While I agree with most of what you're saying, I don't think we should prioritize performance over accuracy or correctness. Especially for numerics people, precision is very important, and it can make just as bad a first impression if we don't get this right. We can, however, make the note in the documentation (which already talks about performance) a bit more prominent: http://dlang.org/phobos/std_algorithm_iteration.html#sum

Wouldn't it be better to have technically perfect implementations for those numerics people? `sum` is a basic function that almost everyone may want to use; this is a factor of four slowdown for the sake of one user group who could be perfectly well served by a sub-library that contains high-accuracy versions. It might make sense if the speed difference were only a few percent.
February 23, 2016
On Tuesday, 23 February 2016 at 11:10:40 UTC, ixid wrote:
> On Monday, 22 February 2016 at 15:43:23 UTC, dextorious wrote:
>> I do have to wonder, however, about the default settings of dub in this case. Having gone through its documentation, I might still not have guessed to try the compiler options you provided, thereby losing out on a 2-3x performance improvement. What build options did you use in your dub.json that it managed to translate to the correct compiler switches?
>
> Your experience is exactly what the D community needs to get right. You've come in as an interested user with patience and initially D has offered slightly disappointing performance for both technical reasons and because of the different compilers. You've gotten to the right place in the end, but we need the path from point A to point B to be a lot smoother and more obvious so that more people get a good initial impression of D.
>
> Every D user thread seems to go like this: someone starts with DMD, struggles a little, and hopefully gets LDC working once a list of slightly obscure compiler switches is offered. A standard algorithm performs disappointingly for somewhat valid technical reasons and clunkier alternatives are then deployed. We really need the standard algorithms to be fast and perhaps have separate ones for perfect technical accuracy.
>
> What are your thoughts on D now? What would have helped you get to the right place much faster?

Personally, I think a few aspects of the documentation for the various compilers, dub, and possibly the dlang.org website itself could be improved, if accessibility is considered important. For instance, just to take my journey with trying out D as an example, I can immediately list a few points where I misunderstood or failed to find relevant information:

1. While the dlang.org website does a good job presenting the three compilers side by side with a short pro/con list for each and does mention that DMD produces slower code, I did not at first expect the difference to be half an order of magnitude or more. In retrospect, after reading the forums and learning about how each compiler works, this is quite obvious, but the initial impression was misleading.

2. The LDC compiler gave me a few issues during setup, particularly on Windows. The binaries supplied are dynamically linked against the MSVS2015 runtime (and will fail on any other system) and seem to require a full Visual Studio installation. I assume there are good reasons for this (though I hope in the future a more widely usable version could be made available), but the fact itself could be made clearer on the download page (it can be found after some searching on the D wiki and the forums).

3. The documentation for the dub package manager is useful, but somewhat difficult to read due to how it is structured, and it does not seem complete. For instance, I am still not sure how to make it pass the -O5 switch to the LDC2 compiler, and the impression I got from the documentation is that explicit manual switches can only be supplied for the DMD compiler. It says that when using other compilers the relevant switches are automatically translated to appropriate options for GDC/LDC, but no further details are supplied, and no matter what options I set for the DMD compiler, using --compiler=ldc2 only yields -O and not -O5. For the moment, I'm compiling my code and managing dependencies manually as I would in C++, which is just fine for me personally, but it does leave a slightly disappointing impression about what is apparently considered a semi-official package manager for the D language.
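For completeness, what I was hoping to find spelled out in the documentation is a fragment along these lines (the dependency entry is only illustrative, and whether the compiler-specific dflags-ldc suffix really forwards these switches is exactly what I could not confirm from the docs):

{
	"name": "app",
	"dependencies": { "mir": "0.10.1-alpha" },
	"dflags-dmd": ["-inline"],
	"dflags-ldc": ["-O5", "-singleobj"]
}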

Of course, this is just my anecdotal experience and should not be taken as major criticism. It may be that I missed something or did not do enough research. Certainly, some amount of adjustment is to be expected when learning a new language, but there does seem to be some room for improvement.
February 23, 2016
On Tuesday, 23 February 2016 at 14:07:22 UTC, Marc Schütz wrote:
> On Tuesday, 23 February 2016 at 11:10:40 UTC, ixid wrote:
>> We really need the standard algorithms to be fast and perhaps have separate ones for perfect technical accuracy.
>>
>
> While I agree with most of what you're saying, I don't think we should prioritize performance over accuracy or correctness. Especially for numerics people, precision is very important, and it can make just as bad a first impression if we don't get this right. We can, however, make the note in the documentation (which already talks about performance) a bit more prominent: http://dlang.org/phobos/std_algorithm_iteration.html#sum

Being new to the language, I certainly make no claims about what the Phobos library should do, but coming from a heavy numerics background in many languages, I can say that this is the first time I've seen a common summation function do anything beyond naive summation. Some languages feature more accurate options separately, but never as the default, so it did not occur to me to specifically check the documentation for something like sum() (which is my fault, of course, no issues there). Having the more accurate pairwise summation algorithm in the standard library is certainly worthwhile for some applications, but I was a bit surprised to see it as the default.
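
Just to illustrate why a pairwise default exists at all, here is a toy example (made-up numbers, single precision) where the naive running sum loses the small contributions entirely while sum() keeps most of them:

import std.algorithm.iteration : reduce, sum;
import std.stdio : writefln;

void main() {
    // One large value followed by a million tiny ones; the true total is 1e8 + 1e5.
    auto data = new float[1_000_001];
    data[0] = 1e8f;
    data[1 .. $] = 0.1f;

    auto naive = data.reduce!((a, b) => a + b); // left-to-right: the 0.1s vanish against 1e8
    auto pairwise = data.sum();                 // pairwise summation keeps them

    writefln("naive:    %s", naive);
    writefln("pairwise: %s", pairwise);
    // Exact figures are platform-dependent, but the pairwise result should land
    // much closer to 100100000 than the naive one.
}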