April 29, 2010
On 04/29/2010 06:03 AM, Petr Kalny wrote:
> On Fri, 23 Apr 2010 15:23:22 +0200, Walter Bright
> <newshound1@digitalmars.com> wrote:
>
>> bearophile wrote:
>>> Walter Bright:
>>>> OCaml has a global interpreter lock which explains its behavior.
>>>> Russell didn't know why the Haskell behavior was so bad. He allowed
>>>> that it was possible he was misusing it.
>>> You have just the illusion to have learned something about this.
>>> Trying to read too much from this single example is very wrong.
>>> A single benchmark, written by a person not expert in the language,
>>> means nearly nothing. You need at least a suite of good benchmarks,
>>> written by people that know the respective languages. And even then,
>>> you have just an idea of the situation.
>>
>>
>> Fair enough, but in order to dismiss the results I'd need to know
>> *why* the Haskell version failed so badly, and why such a
>> straightforward attempt at parallelism is the wrong solution for Haskell.
>>
>> You shouldn't have to be an expert in a language that is supposedly
>> good at parallelism in order to get good results from it.
>>
>> (Russel may or may not be an expert, but he is certainly not a novice
>> at FP or parallelism.)
>>
>> Basically, I'd welcome an explanatory riposte to Russel's results.
>
> IIRC Haskell's problems with concurrency have roots in its 100% lazy
> evaluation.
>
> Anyone wanting more details may find this page useful:
>
> http://www.haskell.org/haskellwiki/Research_papers/Parallelism_and_concurrency
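The laziness pitfall mentioned above can be sketched in a few lines. This is illustrative only, not code from the thread; it assumes GHC and the `parallel` package's Control.Parallel module. The trap is that `par` only sparks evaluation to weak head normal form:

```haskell
import Control.Parallel (par, pseq)

-- The naive attempt: `par` sparks evaluation of `left`, but only to weak
-- head normal form -- for a list, just the first cons cell. Nearly all of
-- the work still runs sequentially inside the final `sum` calls.
naiveSum :: [Double] -> Double
naiveSum xs = left `par` (right `pseq` (sum left + sum right))
  where
    (left, right) = splitAt (length xs `div` 2) xs

-- Forcing the actual results instead: the sparked expression *is* the
-- sum, a strict Double, so the spark performs real work in parallel.
forcedSum :: [Double] -> Double
forcedSum xs = sl `par` (sr `pseq` (sl + sr))
  where
    (left, right) = splitAt (length xs `div` 2) xs
    sl = sum left
    sr = sum right
```

Both versions produce the same answer; only the second actually distributes the summing across cores, which is consistent with a straightforward first attempt performing badly.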

Which specific papers are you referring to?

BTW, I wonder how current the page is. It features no paper from 2009 or 2010, one from 2008, none from 2007, and six from 2006. Of those, three links are broken.


Andrei
April 29, 2010
On Thu, 29 Apr 2010 16:02:03 +0200, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:

> On 04/29/2010 06:03 AM, Petr Kalny wrote:
>> On Fri, 23 Apr 2010 15:23:22 +0200, Walter Bright
>> <newshound1@digitalmars.com> wrote:
>>
>>> bearophile wrote:
>>>> Walter Bright:
>>>>> OCaml has a global interpreter lock which explains its behavior.
>>>>> Russell didn't know why the Haskell behavior was so bad. He allowed
>>>>> that it was possible he was misusing it.
>>>> You have just the illusion to have learned something about this.
>>>> Trying to read too much from this single example is very wrong.
>>>> A single benchmark, written by a person not expert in the language,
>>>> means nearly nothing. You need at least a suite of good benchmarks,
>>>> written by people that know the respective languages. And even then,
>>>> you have just an idea of the situation.
>>>
>>>
>>> Fair enough, but in order to dismiss the results I'd need to know
>>> *why* the Haskell version failed so badly, and why such a
>>> straightforward attempt at parallelism is the wrong solution for Haskell.
>>>
>>> You shouldn't have to be an expert in a language that is supposedly
>>> good at parallelism in order to get good results from it.
>>>
>>> (Russel may or may not be an expert, but he is certainly not a novice
>>> at FP or parallelism.)
>>>
>>> Basically, I'd welcome an explanatory riposte to Russel's results.
>>
>> IIRC Haskell's problems with concurrency have roots in its 100% lazy
>> evaluation.
>>
>> Anyone wanting more details may find this page useful:
>>
>> http://www.haskell.org/haskellwiki/Research_papers/Parallelism_and_concurrency
>
> Which specific papers are you referring to?
>
> BTW, I wonder how current the page is. It features no paper from 2009 or 2010, one from 2008, none from 2007, and six from 2006. Of those, three links are broken.
>
>
> Andrei

Right, I couldn't find the paper there; that page is also where I had read about concurrency in Haskell.

(But I hoped there might be some other useful information :o).

After more searching I located that paper at:

http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/index.htm

Runtime Support for Multicore Haskell
http://research.microsoft.com/en-us/um/people/simonpj/papers/parallel/multicore-ghc.pdf
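For concreteness, here is a minimal sketch (not from the thread) of the evaluation-strategies style that line of work describes, applied to the pi benchmark the discussion started from. It assumes Control.Parallel.Strategies from the `parallel` package; the Leibniz series and the chunk size are arbitrary choices for illustration:

```haskell
import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- Pi via the Leibniz series, with the term list evaluated in parallel
-- chunks. `rdeepseq` forces each chunk completely, sidestepping the
-- laziness trap; the chunk size (1000) is an arbitrary tuning choice.
piApprox :: Int -> Double
piApprox n = 4 * sum (terms `using` parListChunk 1000 rdeepseq)
  where
    terms = [ (-1) ^^ k / fromIntegral (2 * k + 1) | k <- [0 .. n - 1] ]
```

The point of the strategy combinators is exactly to separate what is computed from how much of it is forced, and in which threads, which is the part a naive port is likely to get wrong.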

HTH

Petr
April 29, 2010
Pelle wrote:
> On 04/24/2010 01:13 AM, retard wrote:
>> Maybe Walter is trying to break the world record for implementing things
>> without understanding them first?
> 
> Didn't Walter implement templates without grokking them?

Yes.

> I think I read that somewhere around here.
> 
> That's quite a respectable feat, if you ask me.

I also passed the quantum mechanics final in physics without understanding QM. I still understood how to apply the rules, though. On the other hand, I "got" Newtonian mechanics.
May 08, 2010
== Quote from Walter Bright (newshound1@digitalmars.com)'s article
> Walter Bright wrote:
> > Robert Jacques wrote:
> >> On Fri, 23 Apr 2010 11:10:48 -0300, Walter Bright <newshound1@digitalmars.com> wrote:
> >>
> >>> Michael Rynn wrote:
> >>>> OK where's the naive version of the D Pi program that scales up with 1,2,4 cores? How far off are we? Is the concurrency module working with it yet?
> >>>
> >>> Nobody's written a library function to parallelize a map/reduce yet.
> >>
> >> Dave Simcha has.
> >> Code:
> >> http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d
> >>
> >> Docs: http://cis.jhu.edu/~dsimcha/parallelFuture.html
> >
> > Cool!
> Unfortunately, it currently fails to compile with D2.

Can you tell me what errors you're getting?  I realize that map and reduce are slightly brittle due to a combination of severe abuse of templates and subtle differences in the way different compiler releases handle IFTI, but for me all the unittests still compile and run successfully on 2.045.  Also, I eat my own dogfood regularly and haven't noticed any problems with this lib, though the vast majority of my uses are the parallel foreach loop, not map and reduce.