Thread overview
The future of concurrent programming
May 29, 2007
Henrik
Today's rant on Slashdot is about parallel programming and why support for multiple cores in programs is only rarely seen. There are a lot of different opinions on why we haven't seen a veritable rush to adopt parallelized programming strategies, some of which include:

* Multiple cores haven't been available/affordable all that long, programmers just need some time to catch up.
* Parallel programming is hard to do (as we lack the proper programming tools for it). We need new concepts, new tools, or simply a new generation of programming languages created to handle parallelization from start.
* Parallel programming is hard to do (as we tend to think in straight lines, lacking the proper cognitive faculties to parallelize problem solving). We must accept that this is an inherently difficult thing for us, and that there never will be an easy solution.
* We have both the programming tools needed and the cognitive capacity to deal with them, only the stupidity of the current crop of programmers or their inability to adapt stand in the way. Wait a generation and the situation will have sorted itself out.

I know concurrent programming has been a frequent topic in the D community forums, so I would be interested to hear the community's opinions on this. What will the future of parallel programming look like? Are new concepts and tools that support parallel programming needed, or just a new way of thinking? Will the "old school" programming languages fade away, as some seem to suggest, to be replaced by HOFLs (Highly Optimized Functional Languages)? Where will/should D be in all this? Is it a doomed language if it doesn't incorporate an efficient way of dealing with this (natively)?


Link to TFA: http://developers.slashdot.org/developers/07/05/29/0058246.shtml


/// Henrik

May 29, 2007

Henrik wrote:
> Today's rant on Slashdot is about parallel programming and why support for multiple cores in programs is only rarely seen. [snip]

I think it's a combination of a lot of things.  Firstly, our languages suck.  I know about Erlang, but Erlang is an alien language to all the C, C++, C#, Java, etc. programmers out there; not only does it have a weird syntax, but it's "functional", too.  Tim Sweeney once remarked that Haskell would be a godsend for game programming if only they got rid of the weird syntax[1].

Secondly, people think very linearly.  Some people break the mould and seem to do well thinking in parallel, but it's damned hard.  The fact that, up until now, most of us were working with single-core CPUs, which meant there was little point in using parallelism for performance reasons, isn't helping either.

Let's not forget that our tools suck, too.  I've tried to debug misbehaving multithreaded code before; I now avoid writing MT code where at all possible.

I think the comment about programmers being stupid is just wrong. People only learn what they're taught (either by someone else or by themselves).  If they're never taught how to write good parallel code, you can't suddenly expect them to turn around and start doing so.  Hell, after four and a half years of university, I've never needed to write a single line of MT code for a subject.  Does that mean I'm stupid?

Before programmers can really start to get into parallel code, I think several things have to happen.  First, we need some new concepts for talking about parallel code.  Hell, maybe they already exist; but until they're widely used by programmers, they may as well not.  Second, we need a good, efficient C-style language to implement them and demonstrate how to use them and why they're useful.  It being C-style is absolutely critical; look how many people *haven't* switched over to "superior" languages like Erlang and Haskell.  My money is on "it is different and scary" being the primary reason.  We also need better tools for things like debugging.

So yeah; I think concurrent/parallel programming *is* too hard. Programmers aren't omniscient; just because you throw more cores at us doesn't mean we automatically know how to use them :P

</$0.02>

	-- Daniel

[1] And change it so that it wasn't lazily evaluated, at least by default.

-- 
int getRandomNumber()
{
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}

http://xkcd.com/

v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP  http://hackerkey.com/
May 29, 2007
Henrik wrote:
> Today's rant on Slashdot is about parallel programming and why support for multiple cores in programs is only rarely seen. [snip]

I think the languages we have are enough for concurrent programming. I'm planning to work on a project that will use concurrency from the ground up (it will be a multimedia library), and I'm also interested in game/graphics development. I've read a few articles that talk about using multiple threads and multi-core CPUs in such environments. The ideas published were based on the current generation of programming languages, and they provide approaches for coding a multi-threaded game engine whose performance rises nearly linearly with additional CPU cores.

So I think the problem with MT applications is that people have not yet adapted to thinking in parallel. They don't divide the problem correctly into parts that can be executed concurrently, and therefore they use a lot of locking mechanisms, which lead to other, harder-to-solve problems like deadlocks. What may be lacking are not features in current programming languages, but tools that would help with designing such applications.
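The deadlock problem mentioned above is easy to reproduce in a few lines. A minimal sketch (Python for brevity; the lock names are illustrative): two threads that need the same pair of locks can deadlock if each grabs them in a different order, and the standard cure is to agree on a single global acquisition order.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
result = []

def worker(name):
    # Both workers acquire the locks in the SAME order (a, then b).
    # If one of them took b first, each could end up holding one lock
    # while waiting forever for the other: a classic deadlock.
    with lock_a:
        with lock_b:
            result.append(name)

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(result))  # ['t1', 't2']
```

The fix is purely a matter of discipline, which is exactly why it is so easy to get wrong in a large codebase.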

freeagle
May 29, 2007
Henrik wrote:
> 
> I know concurrent programming has been a frequent topic in the D community forums, so I would be interested to hear the community’s opinions on this. What will the future of parallel programming look like? Are new concepts and tools that support parallel programming needed, or just a new way of thinking? Will the “old school” programming languages fade away, as some seem to suggest, to be replaced by HOFL:s (Highly Optimized Functional Languages)? Where will/should D be in all this? Is it a doomed language if it does incorporate an efficient way of dealing with this (natively)?

It won't be via explicit threading, mutexes, etc.  I suspect that will largely be left to library programmers and people who have very specific requirements.  I'm not sure that we've seen the new means of concurrent programming yet, but there are a lot of options which have the right idea (some of which are 40 years old).  For now, I'd be happy with a version of CSP that works in-process as easily as it does across a network (this has come up in the Tango forums in the past, but we've all been too busy with the core library to spend much time on such things).  Transactions are another idea, though the common implementation of software transactional memory (cloning objects and such) isn't really ideal.  I think this will initially be most useful for fairly low-level work--kind of an LL/SC on steroids.
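The in-process CSP idea Sean describes can be approximated with blocking channels between threads. A minimal sketch, using Python's `queue.Queue` as a stand-in for a CSP channel (the producer/consumer names are illustrative):

```python
import queue
import threading

def producer(ch):
    # Send a stream of values, then a sentinel to signal completion.
    for i in range(5):
        ch.put(i)
    ch.put(None)

def consumer(ch, out):
    # Receive until the sentinel; the blocking get() is the only
    # synchronization the consumer needs.
    while True:
        item = ch.get()
        if item is None:
            break
        out.append(item * item)

channel = queue.Queue(maxsize=1)  # size-1 buffer: a near-synchronous handoff
results = []
p = threading.Thread(target=producer, args=(channel,))
c = threading.Thread(target=consumer, args=(channel, results))
p.start(); c.start()
p.join(); c.join()
print(results)  # [0, 1, 4, 9, 16]
```

Because the threads communicate only through the channel, there is no shared mutable state to lock, which is the property that makes the same model work across a network.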


Sean
May 29, 2007
Henrik wrote:
> Today's rant on Slashdot is about parallel programming and why support for multiple cores in programs is only rarely seen. [snip]

The way I've often thought of it is that we're lacking the higher-level constructs needed to take advantage of what modern processors have to offer.  My apologies for not offering an exact solution, but rather my feelings on the matter.  Who knows, maybe someone already has a syntax for what I'm attempting to describe?

I liken the problem to the way that OOP redefined how we build large scale systems.  The change was so profound that it would be difficult and cumbersome to use a purely free-function design past a certain degree of complexity.  Likewise, with parallelism, we're still kind of at the free-function level with semaphores, mutexes and threads.  Concepts like "transactional memory" are on the right path, but there's more to it than that.
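The transactional idea is usually built on optimistic retry: read a version, compute, and commit only if nothing changed in the meantime. A toy sketch of that retry loop (the `VersionedCell` class is invented for illustration; real STM or LL/SC support lives in the runtime or hardware):

```python
import threading

class VersionedCell:
    """A value plus a version counter; a commit succeeds only if the
    version observed at read time is still current (optimistic concurrency)."""
    def __init__(self, value):
        self._lock = threading.Lock()  # protects only the commit point
        self.value = value
        self.version = 0

    def read(self):
        with self._lock:
            return self.value, self.version

    def compare_and_set(self, expected_version, new_value):
        with self._lock:
            if self.version != expected_version:
                return False  # someone else committed first; caller retries
            self.value = new_value
            self.version += 1
            return True

def transactional_increment(cell, n):
    for _ in range(n):
        while True:  # retry loop: re-read and recompute on conflict
            value, version = cell.read()
            if cell.compare_and_set(version, value + 1):
                break

cell = VersionedCell(0)
threads = [threading.Thread(target=transactional_increment, args=(cell, 1000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cell.value)  # 4000
```

No increment is ever lost, and no caller ever holds a lock across its computation; conflicts just cost a retry.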

What is needed is something "higher level" that is easily grokked by the programmer, yet just as optimizable by the compiler.  Something like a "MT package definition" that allows us to bind code and data to a certain heap, processor, thread priority or whatever, so that parallelism happens in a controlled yet abstract way.  Kind of like what the GC has done for eliminating calls to delete()/free(), such a scheme should free our hands and minds in a similar way.

The overall idea I have is to whisper to the compiler about the kinds of things we'd like to see, instead of working with so much minutia all the time.  Let the compiler worry about how to cross heap boundaries and insert semaphores/mutexes/queues/whatever when contexts mix; it's make-work and error-prone stuff, which is what the compiler is for.  Now while you could do this stuff with compiler options, I think we need to be far more expressive than "-optimize-the-hell-out-of-it-for-MT"; it needs to be in the language itself.

That way you can say things like "these modules are on a transactional heap, for at most 2 processors" and "these modules must have their own heap, and can use n processors", all within the same program.  At the same time, you could also say "parallelize this foreach statement", "single-thread this array operation", or "move this instance into package Foo's heap (whatever that is)".  The idea is to say what we really want done, and trust the compiler (and runtime library complete with multi-heap support and process/thread scheduling) to do it for us.
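The "parallelize this foreach" construct can be prototyped today with a task pool; a sketch of what such a statement might desugar to, using `concurrent.futures` as a stand-in for the compiler-inserted machinery:

```python
from concurrent.futures import ThreadPoolExecutor

def expensive(x):
    # Stand-in for the per-element work the compiler could farm out.
    return x * x

data = list(range(10))

# The sequential "foreach" ...
sequential = [expensive(x) for x in data]

# ... and the same loop parallelized over a worker pool.  executor.map
# preserves input order, so the result is identical to the sequential one,
# which is exactly the guarantee a "parallelize this foreach" would need.
with ThreadPoolExecutor(max_workers=4) as executor:
    parallel = list(executor.map(expensive, data))

print(parallel == sequential)  # True
```

The point of doing it in the language rather than a library is that the compiler could prove the loop body is side-effect-free before applying the transformation.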

Sure, you'd lose a lot of fine-grained control with such an approach, but as new processors are produced with exponentially more cores than the generation before, we're going to yearn for something more sledgehammer-like.

-- 
- EricAnderton at yahoo
May 29, 2007
Why do people think there is a need for another language/paradigm to solve concurrency problems? OSes have dealt with parallelism for decades without special-purpose languages: just plain C and C++. Just check the Task Manager in Windows and you'll notice there are about 100+ threads running.
If Microsoft can manage it with current languages, why can't we?

freeagle
May 29, 2007

freeagle wrote:
> Why do people think there is a need for another language/paradigm to solve concurrency problems? OSes have dealt with parallelism for decades without special-purpose languages: just plain C and C++. Just check the Task Manager in Windows and you'll notice there are about 100+ threads running. If Microsoft can manage it with current languages, why can't we?
> 
> freeagle

We can; it's just hard as hell and thoroughly unenjoyable.  Like I said before: I can and have written multithreaded code, but it's so utterly painful that I avoid it wherever possible.

It's like trying to wash a car with a toothbrush and one of those giant novelty foam hands.  Yeah, you could do it, but wouldn't it be really nice if someone would go and invent the sponge and wash-cloth?

	-- Daniel

-- 
int getRandomNumber()
{
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}

http://xkcd.com/

v2sw5+8Yhw5ln4+5pr6OFPma8u6+7Lw4Tm6+7l6+7D i28a2Xs3MSr2e4/6+7t4TNSMb6HTOp5en5g6RAHCP  http://hackerkey.com/
May 29, 2007
freeagle wrote:
> Why do people think there is a need for another language/paradigm to solve concurrency problems? OSes have dealt with parallelism for decades without special-purpose languages: just plain C and C++. Just check the Task Manager in Windows and you'll notice there are about 100+ threads running.

Why limit yourself to hundreds of threads when you can have thousands?

http://www.sics.se/~joe/apachevsyaws.html
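The Apache-vs-Yaws comparison linked above is really about lightweight tasks: cooperative tasks cost so much less than OS threads that tens of thousands are practical. A small sketch using Python's asyncio as a stand-in for Erlang-style processes (the `handle` coroutine is illustrative):

```python
import asyncio

async def handle(counter):
    # Simulate a tiny unit of per-connection work.
    await asyncio.sleep(0)
    counter[0] += 1

async def main(n):
    counter = [0]
    # Spawning n OS threads at this scale would exhaust resources long
    # before n cooperative tasks do; here all tasks share one thread.
    await asyncio.gather(*(handle(counter) for _ in range(n)))
    return counter[0]

handled = asyncio.run(main(10_000))
print(handled)  # 10000
```

Ten thousand OS threads would need gigabytes of stack space; ten thousand coroutines fit comfortably in one process.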

-Jeff
May 29, 2007
freeagle Wrote:

> Why do people think there is a need for another language/paradigm to solve concurrency problems? OSes have dealt with parallelism for decades without special-purpose languages: just plain C and C++. Just check the Task Manager in Windows and you'll notice there are about 100+ threads running. If Microsoft can manage it with current languages, why can't we?

Maybe because we may want to parallelize arbitrary bits of code,
much like URBI does by introducing statement combinators other than ';'.
They introduce syntax like

> whenever (ball.visible) {
>   head.rotX += ball.alpha   &   head.rotY += ball.theta;
> }

Notice how the two statements are combined with '&' and not ';', which means that they are to run *simultaneously*. URBI is event-driven: the 'whenever' keyword indicates the block is to be executed every time the condition is true. There are other combinators to force a task to start before another, express mutual exclusion, etc.

This is so easy to understand and use that kids can already play with it. URBI was designed to control robots, which implies an event-driven paradigm, but to me it shows that powerful parallelization primitives can be expressed using very simple syntax.
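URBI's '&' combinator can be mimicked by starting the two statements concurrently and joining on both. A rough analogue (the `both` helper is invented; real URBI semantics are richer):

```python
import threading

def both(stmt_a, stmt_b):
    """Run two statements simultaneously and wait for both to finish,
    roughly what URBI's 'a & b' combinator expresses."""
    ta = threading.Thread(target=stmt_a)
    tb = threading.Thread(target=stmt_b)
    ta.start(); tb.start()
    ta.join(); tb.join()

state = {"rotX": 0.0, "rotY": 0.0}

# Analogue of:  head.rotX += ball.alpha  &  head.rotY += ball.theta;
# The two updates touch different fields, so running them in parallel
# is safe without any locking.
both(lambda: state.__setitem__("rotX", state["rotX"] + 1.5),
     lambda: state.__setitem__("rotY", state["rotY"] + 0.5))
print(state)  # {'rotX': 1.5, 'rotY': 0.5}
```

What the language-level combinator buys you over this helper is that the compiler can check the two statements really are independent before running them simultaneously.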

I believe much parallel-computing research is still ongoing, but it now leans more toward implementation than theory, since the theory was largely worked out a few decades ago. Parallel-computing primitives are well known to researchers but are simply not taught, because parallel hardware used to be available only to big corporations, universities, and large computing clusters. Now every new CPU has multiple cores, and we have to introduce PP concepts into "everyday programming". I'm sure there's a clever and simple way to achieve it, in the style of garbage collection vs. manual memory handling: like parallelization, garbage collectors are *very* complex tools to design and to understand fully, but the approach D has taken makes them really easy to use.

-- 
  Thomas de Grivel
  Epita 2009
