November 15, 2012
On 11/15/2012 12:31 PM, Joseph Rushton Wakeling wrote:
> On 11/15/2012 01:55 AM, Joseph Rushton Wakeling wrote:
>> An oddity here: although the correct results seem to come out of the
>> calculation, at the end, the program containing the parallel foreach hangs -- it
>> doesn't stop running, even though all the calculations are complete.
>>
>> Any thoughts as to why?  I guess a thread that has not closed correctly, but I
>> can't see why any one of them should not do so.
>
> On closer examination, this appears to be only with gdc-compiled code -- if I
> compile with ldc or dmd the program exits normally.

OK, this is a known bug with GDC:
http://www.gdcproject.org/bugzilla/show_bug.cgi?id=16

November 15, 2012
On 11/14/2012 10:17 PM, Jonathan M Davis wrote:
> I would point out though that given how expensive disk writes are, unless
> you're doing a lot of work within the parallel foreach loop, there's a good
> chance that it would be more efficient to use std.concurrency and pass the
> writes to another thread to do the writing. And the loop itself should still
> be able to be a parallel foreach, so you wouldn't have to change much
> otherwise. But with the synchronized block, you'll probably end up with each
> thread spending a lot of its time waiting on the lock, which will end up
> making the whole thing effectively single-threaded.

Do you mean that all the synchronized {} blocks have to complete before the threads can terminate?

In the end the solution I came to was something like this:

    import std.parallelism : parallel;
    import std.range : iota;

    enum N = 16;   // number of cases
    shared real[N+1] results;

    foreach(i; parallel(iota(0, N+1)))
    {
        // ... do a lot of calculation ...
        results[i] = // result of calculation
    }

    // and now at the end we write out all the data

... which seems to work, although I'm not 100% confident about its safety.
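For reference, the final write-out step might look something like the following sketch (the tab-separated output via std.stdio is just an assumed format, not part of the original program):

    import std.stdio : writefln;

    // the parallel foreach does not return until every iteration has
    // finished, so this single-threaded loop sees all the results
    foreach (i, r; results)
        writefln("%s\t%s", i, cast(real) r);   // cast strips shared for formatting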
November 15, 2012
I'm not a robot and didn't mean to spam; the page got stuck in an odd refresh loop and I wasn't sure what was going on.

On Wednesday, 14 November 2012 at 17:45:35 UTC, Vijay Nayar wrote:
> On Wednesday, 14 November 2012 at 16:43:37 UTC, Joseph Rushton Wakeling wrote:
>> On 11/14/2012 05:16 PM, H. S. Teoh wrote:
>> I take it there's no more "native-to-D" way of implementing a file lock? :-(
>
> Could you put the file access in a synchronized block?
>
> http://dlang.org/statement.html#SynchronizedStatement
>
>  - Vijay
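A minimal sketch of that suggestion, assuming a placeholder results.dat output file and a stand-in calculation:

    import std.parallelism : parallel;
    import std.range : iota;
    import std.stdio : File;

    void main()
    {
        auto output = File("results.dat", "w");   // placeholder file name

        foreach (i; parallel(iota(0, 17)))
        {
            real result = i * 0.5;   // stand-in for the real calculation

            // only one thread at a time can enter this block,
            // so the writes cannot interleave
            synchronized
            {
                output.writefln("%s\t%s", i, result);
            }
        }
    }

As Jonathan points out in the next post, though, the threads then spend most of their time queueing on that one lock.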


November 15, 2012
On Thursday, November 15, 2012 15:33:31 Joseph Rushton Wakeling wrote:
> On 11/14/2012 10:17 PM, Jonathan M Davis wrote:
> > I would point out though that given how expensive disk writes are, unless
> > you're doing a lot of work within the parallel foreach loop, there's a
> > good
> > chance that it would be more efficient to use std.concurrency and pass the
> > writes to another thread to do the writing. And the loop itself should
> > still be able to be a parallel foreach, so you wouldn't have to change
> > much otherwise. But with the synchronized block, you'll probably end up
> > with each thread spending a lot of its time waiting on the lock, which
> > will end up making the whole thing effectively single-threaded.
> 
> Do you mean that all the synchronized {} blocks have to complete before the threads can terminate?

No, I mean that if you have a bunch of threads all trying to get the mutex for the synchronized block, then the only one doing anything is the one in the synchronized block. Once it's done, it then just loops back around, quickly going through whatever calculations it has to do before hitting the synchronized block again. In the meantime, one of the other threads gets the synchronized block and is writing to disk. But all the other threads are still waiting.

So, for each of the threads, almost all of the time is spent blocked, making it so that most of the time only one thread is doing anything, which completely defeats the purpose of having multiple threads.

From the sounds of it, this doesn't really affect you, because you're doing expensive calculations, but anything with very fast but parallelizable calculations could be totally screwed by the synchronized block.
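A rough sketch of the std.concurrency alternative described above, with a hypothetical Result/Done message pair and a dummy calculation standing in for the real work:

    import std.concurrency : receive, send, spawn;
    import std.parallelism : parallel;
    import std.range : iota;
    import std.stdio : writefln;

    // hypothetical message types: one result, and a signal to stop
    struct Result { int index; real value; }
    struct Done {}

    // the writer thread owns all the I/O; workers never wait on a lock,
    // they just post messages to its mailbox
    void writer()
    {
        bool running = true;
        while (running)
        {
            receive(
                (Result r) { writefln("%s\t%s", r.index, r.value); },
                (Done _)   { running = false; }
            );
        }
    }

    void main()
    {
        auto writerTid = spawn(&writer);

        foreach (i; parallel(iota(0, 17)))
        {
            real value = i * i;                 // stand-in for the expensive calculation
            writerTid.send(Result(i, value));   // no lock for the workers to fight over
        }

        writerTid.send(Done());   // all iterations done; tell the writer to stop
    }

The parallel loop itself is unchanged; only the writes move out of it.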

- Jonathan M Davis