May 29, 2007
Sean Kelly wrote:
> Jeff Nowakowski wrote:
> 
>> freeagle wrote:
>>
>>> Why do people think there is a need for another language/paradigm to solve concurrent problems? OSes have dealt with parallelism for decades, without special-purpose languages: just plain C and C++. Just check Task Manager in Windows and you'll notice there are about 100+ threads running.
>>
>>
>> Why limit yourself to hundreds of threads when you can have thousands?
> 
> 
> Because context switching is expensive.  Running thousands of threads on a system with only a few CPUs may use more time simply switching between threads than it does executing the thread code.
> 
> 
> Sean

Why burn cycles on the context switch? If the CPU had a "back door" to swap the register values out to a second bank of registers, then a context switch could run in the time it takes to drain and refill the internal pipelines. This would require a separate control system to manage the scheduling, but that would have some interesting uses in and of itself (drop user/kernel modes in favor of user/kernel CPUs).
May 29, 2007
Sean Kelly wrote:
> Because context switching is expensive.  Running thousands of threads on a system with only a few CPUs may use more time simply switching between threads than it does executing the thread code.

Did you look at the throughput in the graph I linked to?  Erlang has its own concept of threads that don't map directly to the OS.  Look again:

http://www.sics.se/~joe/apachevsyaws.html

-Jeff
May 29, 2007
Those are just comparing a thread-per-request model with a single-threaded model, showing the single-threaded model doing quite a bit better at high loads. The interesting graphs are on page 126, where the SEDA-based server is compared against Apache and Flash, and, to a lesser extent, page 133, which shows pings against a single-threaded model vs. a SEDA model.

Whatever the case, though, I was just thinking of this as a concurrent-programming architecture that may feel more familiar to coders used to single-threaded development.

I didn't know games were so highly event-driven... I guess I always think of game programming by reflecting on my own short experience with it: an interconnected mess. I still wouldn't suggest SEDA for game programming, since its main focus is adaptive overload control, which shouldn't be needed in a well-controlled system. Networked gaming or AI might benefit from it, though.
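To make the "stages" idea above concrete, here is a minimal sketch in C++ (not D, just for illustration; all names are hypothetical). It shows a fixed worker pool draining an event queue; real SEDA additionally has a controller that grows and shrinks the pool based on load, which is omitted here.

```cpp
#include <atomic>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical minimal "stage": an event queue drained by a small fixed
// worker pool.  Real SEDA also resizes the pool under load.
class Stage {
public:
    Stage(int workers, std::function<void(int)> handler)
        : handler_(std::move(handler)) {
        for (int i = 0; i < workers; ++i)
            pool_.emplace_back([this] { run(); });
    }
    void enqueue(int event) {               // "fire and forget"
        { std::lock_guard<std::mutex> lk(m_); q_.push(event); }
        cv_.notify_one();
    }
    ~Stage() {                              // drain the queue, then stop
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (auto& t : pool_) t.join();
    }
private:
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return done_ || !q_.empty(); });
            if (q_.empty()) return;         // shutting down and drained
            int ev = q_.front(); q_.pop();
            lk.unlock();
            handler_(ev);                   // process outside the lock
        }
    }
    std::function<void(int)> handler_;
    std::queue<int> q_;
    std::mutex m_;
    std::condition_variable cv_;
    std::vector<std::thread> pool_;
    bool done_ = false;
};

// Chain two stages and push n events through them; returns how many
// events made it to the end of the pipeline.
int run_pipeline(int n) {
    std::atomic<int> finished{0};
    {
        Stage sink(2, [&](int) { ++finished; });            // destroyed last
        Stage source(2, [&](int ev) { sink.enqueue(ev); }); // pushes onward
        for (int i = 0; i < n; ++i) source.enqueue(i);
    }   // source drains into sink, then sink drains
    return finished.load();
}
```

Note the "single-threaded mindset" the post describes: each handler only sees one event at a time and hands results to the next stage's queue.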

Pragma Wrote:

> Robert Fraser wrote:
> > Henrik Wrote:
> > 
> >> Todays rant on Slashdot is about parallel programming and why the
> > ... [snip]
> > 
> > At work, I'm using SEDA:
> > 
> > http://www.eecs.harvard.edu/~mdw/papers/mdw-phdthesis.pdf
> 
> Wow.
> 
> Anyone doubting if this makes a difference should compare the graphs on pages 18 and 25.
> 
> > 
> > Although it's designed primarily for internet services (and indeed, I'm crafting an internet service...), I'm using it for a lot more than just the server portions of the program, and I plan to use it in future (non-internet-based) solutions.
> > 
> > The general idea behind something like this is "fire and forget". There is a set of "stages", each with a queue of events, and a thread pool varying in size depending on load (all managed behind the scenes). The creator of a stage needs only to receive an event and process it, possibly (usually), pushing other events onto other stages, where they will be executed when there's time. Stages are highly modular, and tend to serve only one, or a small group of, functions, but each stage is managed behind the scenes with monitoring tools that increase and decrease thread count respective to thread load.
> > 
> > The advantage of such a system is it allows the designer to think in a very "single-threaded" mindset. You deal with a single event, and when you're done processing it, you let someone else (or a few other people) deal with the results. It also encourages encapsulation and modularity.
> > 
> > The disadvantage? It's not suited for all types of software. It's ideal for server solutions, and I could see its use in various GUI apps, but it might be hard to force an event-driven model onto something like a game.
> 
> Actually, just about everything in most modern 3d games is event driven, except for the renderer. FWIW, the renderer simply redraws the screen on a zen timer, based on what's sitting in the render queue, so it can easily be run in parallel with the event pump.  The event pump, in turn, modifies the render queue.
> 
> The only difference between a game and a typical GUI app is that even modest event latency can be a showstopper.  The renderer *must* run on time (or else you drop frames), I/O events must be handled quickly (or input feels sluggish), game/entity events must be fast, and render queue contention must be kept very low.
> 
> > 
> > Combined with something like the futures paradigm, though, I can see this being very helpful for allowing multi-threaded code to be written like single-threaded code.
> > 
> > Now only to port it to D...
> 
> 
> -- 
> - EricAnderton at yahoo

May 29, 2007
Robert Fraser wrote:
> 
> .... [snip]
> 
> At work, I'm using SEDA:
> 
> http://www.eecs.harvard.edu/~mdw/papers/mdw-phdthesis.pdf
> 
[...]
> 
> The general idea behind something like this is "fire and forget". There is a set of "stages", each with a queue of events, and a thread pool varying in size depending on load (all managed behind the scenes). The creator of a stage needs only to receive an event and process it, possibly (usually), pushing other events onto other stages, where they will be executed when there's time. Stages are highly modular, and tend to serve only one, or a small group of, functions, but each stage is managed behind the scenes with monitoring tools that increase and decrease thread count respective to thread load.
> 
[...]
> 
> Now only to port it to D...

This sounds somewhat like an idea I had a while ago: build a thread-safe queue of delegates that take nothing and return arrays of more delegates of the same type. Then have a bunch of threads spin on this loop:

while(true) queue.Enqueue(queue.Dequeue()());

Each function is single-threaded, but if multithreaded work is needed, it returns several delegates, one for each thread. Race conditions and sequencing would still be an issue, but some administrative rules might mitigate some of that. One advantage is that the scheme is largely agnostic about thread count.
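A sketch of this delegate-queue idea in C++ (for illustration; names are made up). Each task returns its follow-up tasks, any worker may pick those up, and a live-task counter tells the workers when to stop:

```cpp
#include <atomic>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A task returns the follow-up tasks to run next ("return several
// delegates, one for each thread").
struct Task { std::function<std::vector<Task>()> fn; };

class TaskQueue {
public:
    void push(Task t) {
        std::lock_guard<std::mutex> lk(m_);
        q_.push(std::move(t));
    }
    bool pop(Task& out) {
        std::lock_guard<std::mutex> lk(m_);
        if (q_.empty()) return false;
        out = std::move(q_.front()); q_.pop();
        return true;
    }
private:
    std::mutex m_;
    std::queue<Task> q_;
};

// Seed the queue with one root task that fans out into n leaves, run it
// on `workers` threads, and return how many leaves executed.
int run_tasks(int n, int workers) {
    TaskQueue q;
    std::atomic<int> counter{0};
    std::atomic<int> live{1};   // outstanding tasks; root counts as one
    q.push(Task{[&] {
        std::vector<Task> leaves;
        for (int i = 0; i < n; ++i)
            leaves.push_back(Task{[&] {
                ++counter;
                return std::vector<Task>{};   // no follow-ups
            }});
        return leaves;
    }});
    std::vector<std::thread> pool;
    for (int i = 0; i < workers; ++i)
        pool.emplace_back([&] {
            Task t;
            while (live.load() > 0) {         // thread-count agnostic loop
                if (!q.pop(t)) { std::this_thread::yield(); continue; }
                auto children = t.fn();
                live += (int)children.size(); // count children before
                for (auto& c : children) q.push(std::move(c));
                --live;                       // ...retiring the parent
            }
        });
    for (auto& t : pool) t.join();
    return counter.load();
}
```

The increment-children-before-decrement-parent order matters: it keeps `live` from hitting zero while work is still being fanned out.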
May 29, 2007
Hi, what do you think about approaches like Intel Threading Building Blocks?
http://www.intel.com/cd/software/products/asmo-na/eng/threading/294797.htm

"It uses common C++ templates and coding style to eliminate tedious threading implementation work."

Has anyone had experience with it?
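For context, this is roughly the kind of loop TBB's `parallel_for` automates, hand-rolled here with plain threads (a hypothetical sketch; TBB also does recursive range splitting and work stealing, which this ignores):

```cpp
#include <algorithm>
#include <numeric>
#include <thread>
#include <vector>

// What a parallel-for template hides: split [0, n) into one chunk per
// hardware thread, run the loop body on each chunk, combine the results.
long parallel_sum(int n) {
    unsigned nt = std::max(1u, std::thread::hardware_concurrency());
    std::vector<long> partial(nt, 0);   // one slot per worker, no locking
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nt; ++t)
        pool.emplace_back([&partial, t, n, nt] {
            long begin = (long)t * n / nt;
            long end   = (long)(t + 1) * n / nt;
            for (long i = begin; i < end; ++i)
                partial[t] += i;        // the "loop body"
        });
    for (auto& th : pool) th.join();
    return std::accumulate(partial.begin(), partial.end(), 0L);
}
```

TBB's templates package this chunking, scheduling, and joining up so the caller only writes the loop body, which is the "tedious threading implementation work" the quote refers to.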
May 29, 2007
Henrik wrote:
> Todays rant on Slashdot is about parallel programming and why the support for multiple cores in programs is only rarely seen. There are a lot of different opinions on why we haven’t seen a veritable rush to adopt parallelized programming strategies, some which include:
> 
> * Multiple cores haven't been available/affordable all that long, programmers just need some time to catch up.
> * Parallel programming is hard to do (as we lack the proper programming tools for it). We need new concepts, new tools, or simply a new generation of programming languages created to handle parallelization from start.
> * Parallel programming is hard to do (as we tend to think in straight lines, lacking the proper cognitive faculties to parallelize problem solving). We must accept that this is an inherently difficult thing for us, and that there never will be an easy solution.
> * We have both the programming tools needed and the cognitive capacity to deal with them, only the stupidity of the current crop of programmers or their inability to adapt stand in the way. Wait a generation and the situation will have sorted itself out.
> 
> I know concurrent programming has been a frequent topic in the D community forums, so I would be interested to hear the community’s opinions on this. What will the future of parallel programming look like? Are new concepts and tools that support parallel programming needed, or just a new way of thinking? Will the “old school” programming languages fade away, as some seem to suggest, to be replaced by HOFL:s (Highly Optimized Functional Languages)? Where will/should D be in all this? Is it a doomed language if it doesn't incorporate an efficient way of dealing with this (natively)?
> 
> 
> Link to TFA: http://developers.slashdot.org/developers/07/05/29/0058246.shtml
> 
> 
> /// Henrik
> 

My issue with functional languages is that, in the end, what the CPU does will be imperative. Why then write stuff in a functional language? If it is a concession to our ability to understand things, why pick functional as the abstraction? I find it harder to think in a functional manner than in an imperative one. If we need an abstraction between us and the CPU, I would rather make it an "operative" or "relational" abstraction (something like UML, only higher level). This would provide the abstractions needed for things other than just concurrency, like memory management (no deletes, as in GC'ed languages, but all of the "manual" memory management is handled by the compiler) or data access (pure getter functions would disappear and "wormhole" functions would appear for common complex access chains). I guess what I'm saying is that I see functional languages as sitting between the level the computer needs to work at and the level the programmer actually thinks at. They're too high level to let the programmer "bit-twiddle", but not high enough to let the programmer abstract to the level that would really let the compiler take over the boring but critical stuff.

But I rant :)
May 29, 2007
Regan Heath wrote:
> That just leaves the deadlock you get when you say:
> 
> synchronize(a) { synchronize(b) { .. } }
> 
> and in another thread:
> 
> synchronize(b) { synchronize(a) { .. } }
> 

What D needs is a:

synchronize(a, b) // locks both a and b, but not until it can get both

Now, what about cases where the locks are taken in different functions.... :b
May 29, 2007
Jeff Nowakowski wrote:
> Sean Kelly wrote:
>> Because context switching is expensive.  Running thousands of threads on a system with only a few CPUs may use more time simply switching between threads than it does executing the thread code.
> 
> Did you look at the throughput in the graph I linked to?  Erlang has its own concept of threads that don't map directly to the OS.  Look again:
> 
> http://www.sics.se/~joe/apachevsyaws.html

Sorry, I misunderstood.  For some reason I thought you were saying Apache could scale to thousands of threads.  In any case, D has something roughly akin to Erlang's threads with Mikola Lysenko's StackThreads and Tango's Fibers.


Sean
May 29, 2007
Robert Fraser wrote:
> 
> I didn't know games were so highly event-driven... I guess I always think of game programming by reflecting on my own short experience with it: an interconnected mess. I still wouldn't suggest SEDA for game programming, since its main focus is adaptive overload control, which shouldn't be needed in a well-controlled system. Networked gaming or AI might benefit from it, though.


There have been one or two interesting articles on Valve's updated Half-Life engine.  They've done a fairly decent job of making it more parallel and have some good demos to show the difference.  I don't have any links offhand, though I recall one article being on http://www.arstechnica.com


Sean
May 30, 2007
== Quote from Sean Kelly (sean@f4.ca)'s article

> Transactions are another idea, though the common
> implementation of software transactional memory
> (cloning objects and such) isn't really ideal.

Would genuine compiler guarantees regarding const (or invariant, or final, or whatever it's called today) reduce the need for cloning?
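For reference, a toy illustration of the clone-and-commit style mentioned above (hypothetical C++ sketch): a transaction copies the shared object, mutates the copy, and installs it with a compare-and-swap, retrying on conflict. A compiler-enforced invariant/const would let readers share the published snapshot without defensive copying.

```cpp
#include <atomic>
#include <thread>

struct Account { int balance; };

// Each "transaction" clones the current version, mutates the clone, and
// tries to publish it atomically.  Old versions are simply leaked here;
// a real STM needs safe memory reclamation on top of this.
int stm_demo() {
    std::atomic<Account*> box{new Account{100}};
    auto deposit = [&](int amount) {
        for (;;) {
            Account* cur = box.load();
            Account* copy = new Account(*cur);   // the clone STM pays for
            copy->balance += amount;
            if (box.compare_exchange_weak(cur, copy)) return;
            delete copy;   // someone else committed first: retry
        }
    };
    std::thread t1([&] { for (int i = 0; i < 100; ++i) deposit(1); });
    std::thread t2([&] { for (int i = 0; i < 100; ++i) deposit(1); });
    t1.join(); t2.join();
    return box.load()->balance;   // no lost updates despite the races
}
```

The retry loop is where the cloning cost bites under contention, which is presumably the "isn't really ideal" part of the quote.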