September 24, 2018 Re: Simple parallel foreach and summation/reduction
Posted in reply to Chris Katko
Hi,

Apologies for coming late to this thread. I started with:

```d
import std.random : uniform;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    ulong sum;
    foreach (i; iota(1_000_000_000))
    {
        if (uniform(0F, 12F) > 6F)
            sum++;
    }
    writeln("The sum is ", sum);
}
```

and then transformed it to:

```d
import std.algorithm : map, reduce;
import std.random : uniform;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    ulong sum = iota(1_000_000_000)
        .map!((_) => uniform(0F, 12F) > 6F ? 1 : 0)
        .reduce!"a + b";
    writeln("The sum is ", sum);
}
```

and then made use of std.parallelism:

```d
import std.algorithm : map;
import std.parallelism : taskPool;
import std.random : uniform;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    ulong sum = taskPool().reduce!"a + b"(
        iota(1_000_000_000).map!((_) => uniform(0F, 12F) > 6F ? 1 : 0));
    writeln("The sum is ", sum);
}
```

I am not entirely sure how to capture the memory used, but roughly (this is a one-off measurement, not a statistically significant experiment):

- the first takes 30 s
- the second takes 30 s
- the third takes 4 s

on an ancient twin-Xeon workstation: 8 cores, but all old and slow.

The issue here is that std.parallelism.reduce, std.parallelism.map, and std.parallelism.amap are all "top-level" work-scattering functions; each assumes total control of the resources. So the above is a parallel reduce over a sequential map, which works fine. Trying to mix a parallel reduce with a parallel map or amap ends up with two separate attempts to use the resources to create tasks. std.parallelism isn't really a fork/join framework in the Java sense; if you want tree-structured parallelism, you have to do things with futures.

--
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
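To make the futures remark concrete, here is a minimal sketch of fork/join-style parallelism with std.parallelism futures (`task`, `taskPool.put`, `yieldForce`). The helper `countAbove` and the fixed two-way split are my own illustration, not from the original post; a real tree-structured version would recurse and split the range further.

```d
import std.parallelism : task, taskPool;
import std.random : uniform;
import std.range : iota;
import std.stdio : writeln;

// Hypothetical helper: sequentially count how many draws exceed 6
// over a sub-range of the work.
ulong countAbove(long n)
{
    ulong sum;
    foreach (i; iota(n))
        if (uniform(0F, 12F) > 6F)
            sum++;
    return sum;
}

void main()
{
    // Fork: create a future for each half of the range and hand
    // both to the task pool.
    auto left  = task!countAbove(500_000_000L);
    auto right = task!countAbove(500_000_000L);
    taskPool.put(left);
    taskPool.put(right);

    // Join: yieldForce blocks (or works on the task itself) until
    // each result is available, then combine the partial sums.
    ulong sum = left.yieldForce + right.yieldForce;
    writeln("The sum is ", sum);
}
```

Because each future does its own sequential work and only the combination point synchronises, there is no contention between two top-level work-scattering calls, which is the problem described above.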
Copyright © 1999-2021 by the D Language Foundation