June 19, 2005
In article <d94alb$2gld$1@digitaldaemon.com>, Manfred Nowak says...
>
>Sean Kelly <sean@f4.ca> wrote:
>[...]
>> Yikes.  So you're saying you'd have lockless sharing of data between the cores and only force a cache sync when communicating between processors?  Makes sense, I suppose, but it sounds risky.
>
>Why lockless?

If multiple cores share a single cache, then there's no need to force cache coherency when sharing data between them.  Of course, that assumes there's some way to tell you're running on two cores sharing a cache, which may not be possible.  As for why: cache syncs take time.  Less time than full locking, but time nevertheless.  I don't know how useful this would be for PCs, but for NUMA machines that have clustered cores where inter-cluster operations involve message-passing, this may be a reasonable strategy.  Though I'm speculating here, as I've never actually coded for such a machine.
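To make "lockless" concrete, here's a rough sketch (in D, assuming the core.atomic and core.thread modules are available; the names are just illustrative): the shared counter is updated with atomic operations instead of a mutex, so the only synchronization cost is whatever the hardware pays to keep the caches coherent for those operations.

// Sketch only: sharing a counter between two threads without a lock.
import core.atomic;
import core.thread;
import std.stdio;

shared long hits;                     // data shared between the two cores/threads

void worker()
{
    foreach (i; 0 .. 1_000_000)
        atomicOp!"+="(hits, 1);       // lockless update; coherency handled by the hardware
}

void main()
{
    auto a = new Thread(&worker);
    auto b = new Thread(&worker);
    a.start(); b.start();
    a.join();  b.join();
    writeln(atomicLoad(hits));        // 2_000_000 -- no mutex needed for this access pattern
}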

>I see that you caught the basic principle behind my example. And as you may see above, it is difficult for the human brain to think in concurrency: you serialized the events but did not handle the case where, depending on an unlucky implementation, both cores might independently raise both exceptions, one core the fire exception and the other the alive-knob exception.
>
>In this case you have a control leak.

Why can't the exception handlers serialize error-handling though?  There ultimately has to be some coordination to resolve potentially conflicting directives.  Why should this happen when the exception is thrown as opposed to when it's caught?
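As a rough sketch of what I mean (in D, with invented names -- this isn't from any real spec), the handlers could all report through one coordinator that holds a lock while it decides what the train should actually do:

// Sketch only: both exception handlers funnel their directives through one
// coordinator, so conflicting reactions are resolved where they are caught.
enum Directive { stop, evacuate, moveOutOfTunnel }

class ErrorCoordinator
{
    void report(Directive d)
    {
        synchronized (this)       // one handler at a time gets to decide
        {
            resolve(d);
        }
    }

    private void resolve(Directive d)
    {
        // pick the safe action given the current train state
    }
}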

>There is one more thing to mention: it is not seldom that specifications are incomplete or even contradictory, and that detection of these specification faults occurs late in the software production process.
>
>Depending on the awareness of the implementors, such a fault might carry over into the final product.
>
>Have a look at your two cases: you are handling the case where the alive-knob exception comes first, but you missed that the fire-knob exception might be thrown when the train has already stopped, but in a tunnel.

And what if the train had already stopped because of an engine failure, or because someone pulled the emergency brake?  The 'fire' routine would need to know whether it should try and move a stopped train out of a tunnel, etc.  How can this be solved by prioritizing exceptions?  Or am I missing something?


Sean


June 19, 2005
It's been said in this thread before, but multi-threading control is a function of the OS and not the language.  Is C a dead language because it doesn't have dual-core functionality?  Of course not.  Then again, we're still not clear on what dual-core functionality is being proposed for addition to the language.  Regardless, it shouldn't be a concern.  Simple multi-threading constructs and locking mechanisms should be enough to guarantee that D will work in dual-core systems.
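For instance (just a sketch), the synchronized statement D already has gives you the locking I mean, and it behaves the same on one core or two because it maps onto an ordinary OS mutex:

// Sketch: D's existing locking mechanism.
class Account
{
    private int balance;

    void deposit(int amount)
    {
        synchronized (this)       // only one thread at a time mutates balance
        {
            balance += amount;
        }
    }
}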

In article <d94hu8$2l7i$1@digitaldaemon.com>, Sean Kelly says...
>
>In article <d94alb$2gld$1@digitaldaemon.com>, Manfred Nowak says...
>>
>>Sean Kelly <sean@f4.ca> wrote:
>>[...]
>>> Yikes.  So you're saying you'd have lockless sharing of data between the cores and only force a cache sync when communicating between processors?  Makes sense, I suppose, but it sounds risky.
>>
>>Why lockless?
>
>If multiple cores share a single cache, then there's no need to force cache coherency when sharing data between them.  Of course, that assumes there's some way to tell you're running on two cores sharing a cache, which may not be possible.  As for why: cache syncs take time.  Less time than full locking, but time nevertheless.  I don't know how useful this would be for PCs, but for NUMA machines that have clustered cores where inter-cluster operations involve message-passing, this may be a reasonable strategy.  Though I'm speculating here, as I've never actually coded for such a machine.
>
>>I see that you caught the basic principle behind my example. And as you may see above, it is difficult for the human brain to think in concurrency: you serialized the events but did not handle the case where, depending on an unlucky implementation, both cores might independently raise both exceptions, one core the fire exception and the other the alive-knob exception.
>>
>>In this case you have a control leak.
>
>Why can't the exception handlers serialize error-handling though?  There ultimately has to be some coordination to resolve potentially conflicting directives.  Why should this happen when the exception is thrown as opposed to when it's caught?
>
>>There is one more thing to mention: it is not seldom that specifications are incomplete or even contradictory, and that detection of these specification faults occurs late in the software production process.
>>
>>Depending on the awareness of the implementors, such a fault might carry over into the final product.
>>
>>Have a look at your two cases: you are handling the case where the alive-knob exception comes first, but you missed that the fire-knob exception might be thrown when the train has already stopped, but in a tunnel.
>
>And what if the train had already stopped because of an engine failure, or because someone pulled the emergency brake?  The 'fire' routine would need to know whether it should try and move a stopped train out of a tunnel, etc.  How can this be solved by prioritizing exceptions?  Or am I missing something?
>
>
>Sean
>
>

Regards,
James Dunne
June 19, 2005
James Dunne <james.jdunne@gmail.com> wrote:

> Is C a dead language because it doesn't have dual-core functionality? Of course not.

True. But have you read why Buhr abandoned his concurrency project in C?

> Simple
> multi-threading constructs and locking mechanisms should be
> enough to guarantee that D will work in dual-core systems.

Can you prove that?

[...]
>>>In this case you have a control leak.
>>Why can't the exception handlers serialize error-handling though?

Why should they? This kind of argument has shown up repeatedly: why should a concurrently working machine be viewed as a serially working machine? In fact the AMD cores are designed to have a programmable lower bound on the priority of the interrupts they will handle, so they will handle interrupts concurrently.

[...]
>>And what if the train had already stopped because of an engine failure, or because someone pulled the emergency brake?

You are right that you can extend the security rules and will have more complex scenarios to solve. That is why I limited the example to only three variables.

>>The
>>'fire' routine would need to know whether it should try and move
>>a stopped train out of a tunnel, etc.  How can this be solved by
>>prioritizing exceptions?  Or am I missing something?

This truly cannot be done by prioritizing, and that is why I said that you have a control leak: depending on the implementation it might be necessary to preempt both tasks assigned to the cores and start one adapted to the more complex scenario.

-manfred
June 19, 2005
In article <d94ls5$2o57$1@digitaldaemon.com>, Manfred Nowak says...
>
>>>>In this case you have a control leak.
>>>Why can't the exception handlers serialize error-handling though?
>
>Why should they? This kind of argument has shown up repeatedly: why should a concurrently working machine be viewed as a serially working machine? In fact the AMD cores are designed to have a programmable lower bound on the priority of the interrupts they will handle, so they will handle interrupts concurrently.

They should, because the way errors are handled depends on system state, and the resources for handling those errors are shared.  If two errors are thrown concurrently that both want to do something with the speed of the train, for example, something will need to prioritize those operations.  What would the speed control do if it simultaneously received commands to stop and to accelerate?
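Something like the following sketch is what I have in mind (D, with invented names): the speed control itself becomes the serialization point, so concurrently thrown errors can't hand it contradictory commands.

// Sketch only: the speed control arbitrates conflicting directives itself.
class SpeedControl
{
    private double target;
    private bool emergencyStop;

    void requestStop()
    {
        synchronized (this) { emergencyStop = true; target = 0; }
    }

    void requestSpeed(double v)
    {
        synchronized (this)
        {
            if (!emergencyStop)   // a stop request always wins over acceleration
                target = v;
        }
    }
}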

>>>The
>>>'fire' routine would need to know whether it should try and move
>>>a stopped train out of a tunnel, etc.  How can this be solved by
>>>prioritizing exceptions?  Or am I missing something?
>
>This truly cannot be done by prioritizing, and that is why I said that you have a control leak: depending on the implementation it might be necessary to preempt both tasks assigned to the cores and start one adapted to the more complex scenario.

This can all be done in code though.  Do multi-core CPUs actually offer instructions to do this in a way that requires language support beyond what D already has?  (I suppose I should go read the references you've been posting)


Sean


June 19, 2005
On Thu, 16 Jun 2005 16:09:44 +0000 (UTC), Manfred Nowak wrote:

> The shipping of the "AMD Athlon 64 X2" is announced to start at the end of this month.
> 
> A review is available: http://www.amdreview.com/reviews.php?rev=athlonx24200
> 
> As the review suggests WinXP and Sandra are prepared to use more than one CPU.
> 
> Will D be outdated before the release of 1.0 because D has no support for multi core units?

Yes. In the exact same manner that all existing 3+GL languages are: C, C++, C#, Eiffel, Smalltalk, Forth, COBOL, VB, Fortran, ... But maybe you are talking about library support rather than language support? Are you talking about the need for D to have new keywords or new object code generation when the target is a dual/triple/quadruple/quintuple/... core machine?

Maybe this thread can be renamed "Duel Core Support"  ;-)

-- 
Derek Parnell
Melbourne, Australia
20/06/2005 7:35:55 AM
June 20, 2005
>> Simple
>> multi-threading constructs and locking mechanisms should be
>> enough to guarantee that D will work in dual-core systems.
>
>Can you prove that?

A dual core isn't that much different from dual CPUs. Give an example of a problem that could arise on a dual core but not on dual CPUs.


>[...]
>>>>In this case you have a control leak.
>>>Why can't the exception handlers serialize error-handling though?
>
>Why should they? This kind of argument has shown up repeatedly: why should a concurrently working machine be viewed as a serially working machine? In fact the AMD cores are designed to have a programmable lower bound on the priority of the interrupts they will handle, so they will handle interrupts concurrently.
>
>[...]
>>>And what if the train had already stopped because of an engine failure, or because someone pulled the emergency brake?
>
>You are right that you can extend the security rules and will have more complex scenarios to solve. That is why I limited the example to only three variables.
>
>>>The
>>>'fire' routine would need to know whether it should try and move
>>>a stopped train out of a tunnel, etc.  How can this be solved by
>>>prioritizing exceptions?  Or am I missing something?
>
>This truly cannot be done by prioritizing, and that is why I said that you have a control leak: depending on the implementation it might be necessary to preempt both tasks assigned to the cores and start one adapted to the more complex scenario.

Anyway, this isn't a new problem, as real concurrency isn't an invention of this year; we have had it for a long time. There are a lot of dual-CPU machines with real concurrency. You haven't described any problem that wouldn't arise on such a machine.




June 20, 2005
Manfred Nowak wrote:
> James Dunne <james.jdunne@gmail.com> wrote:
<SNIP>
> 
>>Simple
>>multi-threading constructs and locking mechanisms should be
>>enough to guarantee that D will work in dual-core systems.
> 
> 
> Can you prove that?
> 
I haven't had time to read the references that you posted, but the above raises the question - can you prove that existing multi-threaded controls will not work correctly on SMP machines?

I've read this thread, and I am sorry to say that I am too thick to see why programming dual core CPUs is any different from programming multiple CPU machines - or for that matter from programming a multi-threaded application.

Manfred, you look to be most concerned with concurrency issues - but from a programmer's point of view I cannot see the difference between programming with multiple threads and programming with multiple CPUs/cores.  Assuming a general purpose OS (and I think we have to), your train example has (to my mind) exactly the same problems regardless of what kind of machine it is run on.  The only true difference is that on a multiple core machine the instructions can actually run at the same physical time; on a single core machine the threads need to share the CPU, but that means nothing, because the CPU could change threads every few operations - i.e. you need to provide the same locks and measures anyhow.
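As a small sketch of what I mean (plain D, nothing exotic): without the lock below, two threads could both pass the test before either decrements, and that can happen on a single core through a badly timed thread switch just as easily as on two cores running truly in parallel.  The fix is the same in both cases.

// Sketch: the classic check-then-act hazard, closed off with a lock.
class Ticket
{
    private int remaining = 1;

    bool tryTake()
    {
        synchronized (this)   // without this, both threads could see remaining > 0
        {
            if (remaining > 0) { remaining--; return true; }
            return false;
        }
    }
}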

Brad
June 20, 2005
In article <d96n2l$11lq$1@digitaldaemon.com>, Brad Beveridge says...
>
>Manfred Nowak wrote:
>> James Dunne <james.jdunne@gmail.com> wrote:
><SNIP>
>> 
>>>Simple
>>>multi-threading constructs and locking mechanisms should be
>>>enough to guarantee that D will work in dual-core systems.
>> 
>> 
>> Can you prove that?
>> 
>I haven't had time to read the references that you posted, but the above raises the question - can you prove that existing multi-threaded controls will not work correctly on SMP machines?

They will.

>I've read this thread, and I am sorry to say that I am too thick to see why programming dual core CPUs is any different from programming multiple CPU machines - or for that matter from programming a multi-threaded application.

AFAIK, dual core machines are indistinguishable from 'true' SMP machines to all but perhaps an OS programmer.  The most obvious example of this is that Windows reports each core of a multi-core machine as a separate CPU.

>Manfred, you look to be most concerned with concurrency issues - but from a programmer's point of view I cannot see the difference between programming with multiple threads and programming with multiple CPUs/cores.

The only difference I can think of is that cache coherency is not an issue with single CPU machines, though you typically have to pretend that it is anyway (since not many applications are written to target a specific hardware configuration).  Theoretically, I could see some of what Manfred mentioned being a potential point of optimization for realtime systems, but those would probably be built with a custom compiler and target a specific run environment anyway.
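As a sketch of "pretending anyway" (assuming D's core.atomic module; the variable names are made up): you publish data to another thread through an atomic flag rather than a plain store, and the same code stays correct whether there is one cache or several.

// Sketch only: publishing data between threads via an atomic flag.
import core.atomic;

shared bool ready;
__gshared int payload;

void producer()
{
    payload = 42;
    atomicStore(ready, true);     // payload is made visible before ready flips
}

void consumer()
{
    while (!atomicLoad(ready)) {} // spin until the flag is seen
    assert(payload == 42);        // safe: the atomic flag ordered the stores
}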

>Assuming a general purpose OS (and I think we have to), your train example has (to my mind) exactly the same problems regardless of what kind of machine it is run on.  The only true difference is that on a multiple core machine the instructions can actually run at the same physical time; on a single core machine the threads need to share the CPU, but that means nothing, because the CPU could change threads every few operations - i.e. you need to provide the same locks and measures anyhow.

Exactly.  D is no different than any other procedural language in how it deals with concurrency.  Though as a point of geek interest I suppose it's worth mentioning that BS' original purpose for C++ was as a concurrent language--it just didn't really stay that way once he'd finished his research.

In any case, if there's anything that D lacks, I'd love to hear some concrete examples.  It's much easier to address issues when you know specifically what they are, and the discussion has remained pretty abstract up to this point.


Sean


June 20, 2005
Derek Parnell <derek@psych.ward> wrote:

[...]
>> Will D be outdated before the release of 1.0 because D has no support for multi core units?
> 
> Yes. In the exact same manner that all existing 3+GL languages are: C, C++, C#, Eiffel, Smalltalk, Forth, COBOL, VB, Fortran, ...

I disagree. All these languages are way beyond version 1.0, whereas D isn't.


> But maybe you are talking about library support rather than language support?

If the paper of Buhr, which I have mentioned somewhere above, is right, then it is possible to include all concurrency support in a library, but only if the language follows the rules dictated by the library. And I agree with Buhr that such dictation is the same as having changed the language.


> Are you talking about the need for D to have
> new keywords or new object code generation when the target is a
> dual/triple/quadruple/quintuple/... core machine?

According to my statement above, a clear: maybe. And the reason for this is that I do not believe that the only two keywords in D that have something to do with concurrency can be shown to be equivalent to Buhr's "mutex" and "monitor". But I may be wrong.


> Maybe this thread can be renamed "Duel Core Support"  ;-)

Thx for this broad hint. In fact I feel thrown into a position which I did not want to be engaged in. All I wanted to know is whether there is a proof that D can handle concurrency in general and, as the title shows, dual cores as a special case. Maybe I should have posted this to the "learn" group.

However, I posted here and found myself confronted with opinions that dual cores are not different from single cores, or with unfounded claims that D can handle any kind of concurrency.

Somehow I feel very uncomfortable.

-manfred

June 20, 2005
Manfred Nowak wrote:

> Thx for this broad hint. In fact I feel thrown into a position which I did not want to be engaged in. All I wanted to know is whether there is a proof that D can handle concurrency in general and, as the title shows, dual cores as a special case. Maybe I should have posted this to the "learn" group.
> 
> However, I posted here and found myself confronted with opinions that dual cores are not different from single cores, or with unfounded claims that D can handle any kind of concurrency.
> 
> Somehow I feel very uncomfortable.

If I have contributed to your discomfort, I am sorry - that was certainly not my intention.  I truly am interested in this topic, but as I've said before I just don't understand the problem.  I also have not read the references previously posted as they are not in a format I can easily open (need to get a ps viewer, etc).
I think the primary things I don't understand are (all from a logical/programmer's point of view):

1) Is there any difference between multiple core CPUs and machines with multiple CPUs?
  * I don't believe that there is any significant difference, in which case we perhaps should agree that we are talking about SMP in general.

2) From a programmer's point of view, what _is_ the difference between a program that runs in multiple threads and a program that runs in multiple threads on multiple cores?
  * I understand that physically there are different things happening, but I currently believe that logically there is no difference.

3) Can you please summarise the primitives that are required to program properly on SMP machines?
  * Although I do little multi-threaded programming, I understand that threads need to have atomic operations as a basic synchronizing mechanism (see the sketch after this list); other than that I am not familiar enough to comment.

4) Could you please show a specific case where D is not able to handle an SMP situation, and how it could/should be fixed with additions to the language?
  * I liked the train example, could you perhaps make it pseudo-code & point out the weaknesses?
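(Regarding item 3, here is a minimal sketch of the kind of atomic primitive I mean, assuming D exposes a compare-and-swap such as core.atomic's cas - everything else, locks included, can be built on top of it:)

// Sketch only: a spinlock built from a single compare-and-swap primitive.
import core.atomic;

shared int locked;                   // 0 = free, 1 = held

void acquire()
{
    while (!cas(&locked, 0, 1)) {}   // spin until we atomically flip 0 -> 1
}

void release()
{
    atomicStore(locked, 0);          // hand the lock back
}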

Thanks
Brad