July 16, 2007
Reply to Sean,

> Well, there are a lot of ways to make it easier than explicit
> manipulation of mutexes and such--some of the involved research dates
> back to the early 60s--but even with these alternate methods,
> concurrency isn't easy.
> 


Murphy's Law #NaN: Concurrent programming is hard.

Might it be the case that there is something fundamental about concurrent programming that makes it difficult for most, if not all, people to work with? Might it be that the normal[*] human brain just can't think that way?

[*] think Rain Man <g>


July 16, 2007
Bruno Medeiros wrote:
> Sean Kelly wrote:
>>
>> Well, there are a lot of ways to make it easier than explicit manipulation of mutexes and such--some of the involved research dates back to the early 60s--but even with these alternate methods, concurrency isn't easy.
> 
> Hum, like conditional variables?

I was thinking of Agents.  Hoare's CSP is fairly old as well--I think the original paper was published in the mid-late 70s.  Condition variables are just a building-block, along with mutexes, semaphores, etc.
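The "building block" point can be made concrete with a small sketch. The following example (in Python rather than D, purely for brevity; the `Slot` class is an illustrative name, not an existing API) builds a one-slot handoff out of exactly those primitives - a mutex plus a condition variable:

```python
import threading

# A one-slot handoff built from a lock and a condition variable.
# This is the kind of higher-level structure you assemble yourself;
# the condvar alone solves nothing.
class Slot:
    def __init__(self):
        self.cond = threading.Condition(threading.Lock())
        self.value = None
        self.full = False

    def put(self, value):
        with self.cond:
            while self.full:           # wait until the slot is empty
                self.cond.wait()
            self.value, self.full = value, True
            self.cond.notify_all()

    def take(self):
        with self.cond:
            while not self.full:       # wait until a value arrives
                self.cond.wait()
            self.full = False
            self.cond.notify_all()
            return self.value

slot = Slot()
results = []
consumer = threading.Thread(target=lambda: results.append(slot.take()))
consumer.start()
slot.put(42)
consumer.join()
print(results)  # [42]
```

Note the `while` loops around `wait()`: guarding against spurious wakeups is one of the details the programmer must get right when working at this level.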


Sean
July 16, 2007
BCS wrote:
> Reply to Sean,
> 
>> Well, there are a lot of ways to make it easier than explicit
>> manipulation of mutexes and such--some of the involved research dates
>> back to the early 60s--but even with these alternate methods,
>> concurrency isn't easy.
> 
> Murphy's Law #NaN: Concurrent programming is hard.
> 
> Might it be the case that there is something fundamental about concurrent programming that makes it difficult for most, if not all, people to work with? Might it be that the normal[*] human brain just can't think that way?

I think the issue has more to do with the legacy of old decisions made for the sake of efficiency and the difficulty with which the results of these decisions scale as parallelism increases.  As near as I can tell, message-passing never became terribly popular in the 80s largely because mutually exclusive access to shared data required less memory overhead, and because it could be more easily done in library code for existing, popular programming languages (i.e., C).

But perhaps you're right in that people tend to be self-centered in how they approach problems.  A recipe for baking a cake, for example, assumes a single baker in that it consists of a series of sequential steps from beginning to completion.  Most programs are written the same way.  But a more accomplished cook quickly learns that steps can be performed out of order, and kitchen staffs delegate different portions of the cooking process to different individuals to increase throughput.

For comparison, both mutual exclusion and message-passing delegate tasks to multiple distinct workers.  But the way each operates is subtly different.  Mutual exclusion can be thought of as having a single shared program state, and mutexes and such are a means of protecting this state from corruption.  By comparison, message-passing has no shared program state.  Each distinct worker could exist within the same process, a different process, or on another machine entirely.  So rather than the kitchen somehow delegating work to various chefs and micro-managing their interaction (the mutually exclusive approach), the chefs each go about their assigned tasks and interact whenever they need an ingredient (the message-passing approach).
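The "single shared program state" model can be sketched in a few lines (an illustrative Python example, not D; the names are made up for the sketch). Several workers mutate one counter, and a mutex is the only thing protecting that shared state from corruption:

```python
import threading

# Shared program state: one counter, touched by every worker.
counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:          # exclusive access to the shared state
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Remove the `with lock:` and the increments race; the corruption is silent, which is much of why this style is hard to get right.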

I think the important shift in mindset regards how to deal with common resources.  Typically, the mutually exclusive approach implies that workers queue up and take turns utilizing the resource.  Only one person can use an oven at any given time, for example.  The message-passing equivalent would be to designate a specific worker for baking cakes. When a cake is prepared, it is left on a table, and the baker takes cakes off the table as ovens are available and cooks them, placing the completed product on another table when the cakes are done.
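The baker analogy maps naturally onto queues as the "tables". A minimal sketch (Python for illustration; a D version would use whatever message channel the design provides): prepared cakes go on one table, a single dedicated baker takes them as ovens free up, and finished cakes land on another table. There is no shared mutable state, only messages.

```python
import queue
import threading

prepared = queue.Queue()   # table of prepared cakes
finished = queue.Queue()   # table of baked cakes

def baker():
    # The designated worker: the only one who touches the oven.
    while True:
        cake = prepared.get()
        if cake is None:           # sentinel: no more cakes today
            break
        finished.put(f"baked {cake}")

t = threading.Thread(target=baker)
t.start()
for cake in ("sponge", "carrot", "cheese"):
    prepared.put(cake)
prepared.put(None)
t.join()

baked = [finished.get() for _ in range(3)]
print(baked)  # ['baked sponge', 'baked carrot', 'baked cheese']
```

Because a single baker drains a FIFO queue, ordering is preserved without any locking visible to the cooks who prepare the cakes.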

So in conclusion, I think that the message-passing approach is the way teams of people work together cooperatively, while mutual exclusion is more like a person working on a task who suddenly finds himself surrounded by other people.  In the former case, concurrency is planned from the outset, while in the latter case, concurrency is more of a contingency mechanism.  I don't think either one is inherently incompatible with how people think, but message-passing does require a bit more consideration or planning than mutual exclusion.


Sean
July 17, 2007
Sean Kelly wrote:

> But a more accomplished cook quickly learns that steps can be performed out of order, and kitchen staffs delegate different portions of the cooking process to different individuals to increase throughput.

This seems to be the main aspect. Current designers and coders are used to playing their music as one-man bands.

Concurrency requires everyone to upgrade to a conductor of equally skilled one-man bands. This includes the ability to plan for the right equipment, to account for an unstable number of available skilled personnel, and to plan for the absence of the conductor itself.

Sadness lies in the fact that quasi-single-CPU concurrency seems much harder to master than massive concurrency.

-manfred

July 17, 2007
BCS wrote:
> Reply to Sean,
> 
>> Well, there are a lot of ways to make it easier than explicit
>> manipulation of mutexes and such--some of the involved research dates
>> back to the early 60s--but even with these alternate methods,
>> concurrency isn't easy.
>>
> 
> 
> Murphy's Law #NaN: Concurrent programming is hard.
> 
> Might it be the case that there is something fundamental about concurrent programming that makes it difficult for most, if not all, people to work with? Might it be that the normal[*] human brain just can't think that way?
>
> [*] think Rain Man <g>

There are moments where I wish I could think *like* Rain Man, especially when it comes to concurrency.

At a minimum, science fiction is right on target with your comment.  In Ghost in the Shell: Stand Alone Complex, there is the occasional reference to an "Autistic Mode" that some cyber-brains have.  So throughout the story, you have some of these cyborgs flipping that switch whenever they need some Rain Man style insight into a given situation - like searching the internet as one would drink from a firehose, or performing wide-area surveillance via 100+ cameras at once.  If nothing else, it illustrates that there's something extraordinary about such abilities that may be permanently out of reach for normal people, despite the fact that some people are just born that way.

Given that cybernetic brain augmentation is a long way off, I think we're stuck trying to develop a better way to express the concurrent world in the common tongue of us "flat-landers".

$0.02:
But if you ask me what's needed, I think it comes down to the fact that concurrency is between the code and data, not just in the code.  So either the developer needs to balance those two, or the compiler needs to know more about your data in order to parallelize things.  Algol family languages (C, D, Java, etc.) are all in the first category, hence the  nature of this thread.  Erlang is an example of the latter, and benefits mostly from being a functional language (and from being purpose-built for parallelization).

I really think that we have the tools we need.  If we were to teach the compiler how to perform some calculus on data structures when they're handled in iteration, it's reasonable to assume that it could take steps to parallelize things for us - this would get us about halfway to the kind of stuff functional languages can pull off.  The D 2.0 additions for invariance and const-ness will probably help here.
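A rough sketch of what such automatic parallelization of iteration might buy, assuming the loop body is pure and each element is independent (Python rather than D, for illustration only): the map can be farmed out to a pool of workers with no visible locking, and the results come back in order.

```python
from concurrent.futures import ThreadPoolExecutor

def expensive(x):
    # Stand-in for a pure, independent loop body - exactly the case
    # a compiler could prove safe to parallelize given invariant data.
    return x * x

data = list(range(8))
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(expensive, data))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The whole point of the const/invariance argument is that a compiler armed with such guarantees could insert this transformation itself, instead of the programmer opting in by hand.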

-- 
- EricAnderton at yahoo
July 17, 2007
Pragma wrote:
> BCS wrote:
>> Reply to Sean,
>>
>>> Well, there are a lot of ways to make it easier than explicit
>>> manipulation of mutexes and such--some of the involved research dates
>>> back to the early 60s--but even with these alternate methods,
>>> concurrency isn't easy.
>>>
>>
>>
>> Murphy's Law #NaN: Concurrent programming is hard.
>>
>> Might it be the case that there is something fundamental about concurrent programming that makes it difficult for most, if not all, people to work with? Might it be that the normal[*] human brain just can't think that way?
>>
>> [*] think Rain Man <g>
> 
> There are moments where I wish I could think *like* Rain Man, especially when it comes to concurrency.
> 
> At a minimum, science fiction is right on target with your comment.  In Ghost in the Shell: Stand Alone Complex, there is the occasional reference to an "Autistic Mode" that some cyber-brains have.  So throughout the story, you have some of these cyborgs flipping that switch whenever they need some Rain Man style insight into a given situation - like searching the internet as one would drink from a firehose, or performing wide-area surveillance via 100+ cameras at once.  If nothing else, it illustrates that there's something extraordinary about such abilities that may be permanently out of reach for normal people, despite the fact that some people are just born that way.

Interesting.  In Vernor Vinge's "A Fire Upon the Deep" (if I remember correctly), there are people who take drugs for basically the same purpose.  They're ship operators and such--jobs that require inhuman focus to perform optimally.

> But if you ask me what's needed, I think it comes down to the fact that concurrency is between the code and data, not just in the code.  So either the developer needs to balance those two, or the compiler needs to know more about your data in order to parallelize things.  Algol-family languages (C, D, Java, etc.) are all in the first category, hence the nature of this thread.  Erlang is an example of the latter, and benefits mostly from being a functional language (and from being purpose-built for parallelization).
> 
> I really think that we have the tools we need.  If we were to teach the compiler how to perform some calculus on data structures when they're handled in iteration, it's reasonable to assume that it could take steps to parallelize things for us - this would get us about halfway to the kind of stuff functional languages can pull off.  The D 2.0 additions for invariance and const-ness will probably help here.

Hm...  I guess the purpose would be some sort of optimal COW mechanism for shared data, or is there another use as well?  It's an intriguing idea, though I wonder if such a scheme would make the performance of code difficult to analyze.


Sean
July 17, 2007
Sean Kelly wrote:
> Pragma wrote:
>>
>> I really think that we have the tools we need.  If we were to teach the compiler how to perform some calculus on data structures when they're handled in iteration, it's reasonable to assume that it could take steps to parallelize things for us - this would get us about halfway to the kind of stuff functional languages can pull off.  The D 2.0 additions for invariance and const-ness will probably help here.
> 
> Hm...  I guess the purpose would be some sort of optimal COW mechanism for shared data, or is there another use as well?  It's an intriguing idea, though I wonder if such a scheme would make the performance of code difficult to analyze.
> 
> 
> Sean

Your guess is as good as mine.  I was just making the observation that the major hurdle is that we're adopting techniques that are deliberately explicit, to overcome the fact that the D compiler is unaware of the problem; the degree of specificity that is required can be very unwieldy.  In contrast, the clear winners in this area are languages that are /implicitly/ parallelizable by design, so clearly we need to move in that direction instead. :)

Really, what I'm thinking of is a way to say "give me your best shot, or tell me why you can't parallelize this".  The parallel() suggestion for foreach (I forget by whom) is a good example of this.  Adding a "shared" modifier for classes and typedefs might be another.
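An explicit opt-in parallel foreach along those lines might look like the sketch below (Python for illustration; `parallel_foreach`, the chunking strategy, and the worker count are all assumptions for the sketch, not an existing D or Python API). The programmer asks for parallelism, and the helper splits the iteration into chunks, runs each chunk on its own worker, and reassembles the results in order:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_foreach(items, body, chunks=4):
    # Hypothetical helper: apply `body` to every element, chunked
    # across a worker pool, preserving the original element order.
    items = list(items)
    step = max(1, len(items) // chunks)
    slices = [items[i:i + step] for i in range(0, len(items), step)]

    def run(chunk):
        return [body(x) for x in chunk]

    out = []
    with ThreadPoolExecutor(max_workers=chunks) as pool:
        for part in pool.map(run, slices):   # map keeps chunk order
            out.extend(part)
    return out

print(parallel_foreach(range(8), lambda x: x + 1))
# [1, 2, 3, 4, 5, 6, 7, 8]
```

The "tell me why you can't" half is the harder part: it needs the compiler to reject bodies that touch shared mutable state, which is exactly where a "shared" modifier would come in.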

Like you suggest, a modified CoW would be a good start.  At a minimum, if the GC were more thread aware, we could do smarter things inside and outside the compiler.

-- 
- EricAnderton at yahoo
July 17, 2007
Reply to Pragma,

> There are moments where I wish I could think *like* Rain Man,
> especially when it comes to concurrency.
> 
[...]
> If nothing else, it illustrates that there's something
> extraordinary about such abilities that may be permanently
> out-of-reach for normal people, despite the fact that some people are
> just born that way.
> 

I have wondered if this is something like incomputability with regard to a Turing machine. Might the normal brain be like a Turing machine, and the autistic brain something like a brain not limited in the same way? Given that some people can, for instance, identify large primes in near-constant time, I'd say this is a distinct possibility.

At the risk of sounding politically incorrect: does anyone know of an autistic person who might be interested in learning programming?


July 18, 2007
BCS wrote:
> Reply to Pragma,
> 
>> There are moments where I wish I could think *like* Rain Man,
>> especially when it comes to concurrency.
>>
> [...]
>> If nothing else, it illustrates that there's something
>> extraordinary about such abilities that may be permanently
>> out-of-reach for normal people, despite the fact that some people are
>> just born that way.
>>
> 
> I have wondered if this is something like incomputability with regard to a Turing machine. Might the normal brain be like a Turing machine, and the autistic brain something like a brain not limited in the same way? Given that some people can, for instance, identify large primes in near-constant time, I'd say this is a distinct possibility.
> 
> At the risk of sounding politically incorrect: does anyone know of an autistic person who might be interested in learning programming?
> 
> 

Autism is not synonymous with savantism, which is what you were thinking of.

-- 
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
July 18, 2007
Brad Anderson wrote:
> Bruno Medeiros wrote:
>> I read in a recent article (I think it came from Slashdot, but not sure)
>> that a new programming paradigm is needed to make concurrency easier,
>> just in the same way as OO (and class encapsulation) improved on the
>> previous data abstraction paradigm to make code cleaner and easier to
>> write. Just in the same way as structured programming (ie, using
>> functions/scopes/modules) improved on the previous paradigm of
>> sequential/global/goto-using code, so to speak.
> 
> http://www.pragmaticprogrammer.com/articles/erlang.html
> 
> search for "Concurrency Oriented Programming"
> 
> BA


Hum, again Erlang, interesting. I had heard a bit about it before, in an article (again, I don't remember where) comparing Apache with a web server built in Erlang. On a multicore machine the Erlang server did much better because of its massively parallel capabilities, etc.
This makes Erlang very interesting, but one must then ask questions like: What restrictions does Erlang's approach have? Does it have disadvantages in other areas or aspects of programming? Is it good as a general-purpose programming language, or is it best only for concurrent applications? Can any of its ideas be applied to imperative languages like D, Java, C#, etc.?
I personally am not looking deeply into this (I've never had the need to study concurrency in-depth so far); I'm just pointing out that a lot of things have to be considered, and I have a feeling that there must be some downside to Erlang, or otherwise everyone else would be trying to bring Erlang aspects into their languages. Or maybe Erlang is just gaining momentum. Time will tell.


-- 
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D