June 20, 2005
In article <d948ms$2feb$1@digitaldaemon.com>, Manfred Nowak says...
>
>Sean Kelly <sean@f4.ca> wrote:
>
>[...]
>> This is an issue with C/C++.  Specifically, it relates to the "as if" rule and the fact that the theoretical virtual machine that optimizers target has no concept of concurrency.  So there's no real way to ensure volatile instructions aren't reordered unless you use a synchronization library.  D addresses this particular issue somewhat in its reinterpretation of "volatile," and I'm sure Walter is keeping an eye on the C++ standardization talks about this issue as well.
>
>Are you able to prove that the argument holds for C++ only?  That would contradict a paper accepted by ACM and available here:
>
>http://plg.uwaterloo.ca/~usystem/pub/uSystem/LibraryApproach.ps.gz

Okay, I dug up a copy of Ghostscript for the PC and read the first few pages of this paper.  I definitely agree with it, but I don't know that it applies to D.  For reference, here are the suggested solutions:

1. provide some explicit language facilities to control optimization (e.g. pragma, volatile, etc.)

2. provide some concurrency constructs that allow the translator to determine when to disable certain optimizations

3. a combination of approaches one and two

It's worth noting that D already provides both of these proposed solutions in the language.  The 'synchronized' keyword could be used to prevent the compiler from optimizing code around these areas (if it isn't already).  And 'volatile' provides programmers who need to implement concurrent code outside of synchronization blocks with a means of preventing compiler optimization of critical code blocks.  More work may still be useful in this area.  For example, 'volatile' in D just prevents optimization across a code block's boundaries, but it might be worthwhile to provide something akin to acquire and release semantics, to allow *some* optimization to occur.
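To make that concrete, here is a minimal sketch of both facilities in D (the names and the flag protocol are made up for illustration, not taken from any real code):

int sharedFlag;

void producer()
{
    // volatile statement: the compiler may not move memory reads or
    // writes across the statement's boundaries
    volatile {
        sharedFlag = 1;
    }
}

void consumer()
{
    // synchronized statement: entry is serialized against any other
    // thread entering the same statement
    synchronized {
        if (sharedFlag)
        {
            // ... act on the flag ...
        }
    }
}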


Sean


June 20, 2005
Sean Kelly wrote:
<Snip>
> It's worth noting that D already provides both of these proposed solutions in
> the language.  The 'synchronized' keyword could be used to prevent the compiler
> from optimizing code around these areas (if it isn't already).  And 'volatile'
> provides programmers who need to implement concurrent code outside of
> synchronization blocks with a means of preventing compiler optimization of
> critical code blocks.  More work may still be useful in this area.  For
> example, 'volatile' in D just prevents optimization across a code block's
> boundaries, but it might be worthwhile to provide something akin to acquire
> and release semantics, to allow *some* optimization to occur.
> 
Does volatile prevent code movement within the block?  For example
// some optimised code (A)
volatile {
    // some order-critical code
}
// some optimised code (B)

It is obvious from the description of volatile that the three sections of code above will have memory barriers, i.e. when the volatile section begins all memory writes from A will have occurred, and when B begins executing all memory writes from the volatile block will have finished.

But does code within the volatile block get optimised?  It would be nice if code within a volatile statement were strictly ordered, with no opportunity for the compiler to move memory read/write operations.
Does anybody know if this is true in practice?

Brad
June 21, 2005
>> Are you talking about the need for D to have
>> new keywords or new object code generation when the target is a
>> dual/triple/quadruple/quintuple/... core machine?
>
>According to my statement above, a clear: maybe. And the reason for this is that I do not believe that the only two keywords in D that have something to do with concurrency can be shown to be equivalents of Buhr's "mutex" and "monitor". But I may be wrong.

You can build mutexes and monitors with synchronized without problems.
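For example, a classical monitor falls out of the synchronized statement almost directly.  A sketch (the class and its members are hypothetical):

class BoundedCounter
{
    private int count;

    void increment()
    {
        // only one thread at a time may hold this object's monitor,
        // so increment() and value() cannot interleave
        synchronized (this)
        {
            ++count;
        }
    }

    int value()
    {
        synchronized (this)
        {
            return count;
        }
    }
}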


June 21, 2005
In article <d97jeu$1mcv$1@digitaldaemon.com>, Brad Beveridge says...
>
>Sean Kelly wrote:
><Snip>
>> It's worth noting that D already provides both of these proposed solutions in the language.  The 'synchronized' keyword could be used to prevent the compiler from optimizing code around these areas (if it isn't already).  And 'volatile' provides programmers who need to implement concurrent code outside of synchronization blocks with a means of preventing compiler optimization of critical code blocks.  More work may still be useful in this area.  For example, 'volatile' in D just prevents optimization across a code block's boundaries, but it might be worthwhile to provide something akin to acquire and release semantics, to allow *some* optimization to occur.
>> 
>Does volatile prevent code movement within the block?  For example
>// some optimised code (A)
>volatile {
>    // some order-critical code
>}
>// some optimised code (B)
>
>It is obvious from the description of volatile that the three sections of code above will have memory barriers, i.e. when the volatile section begins all memory writes from A will have occurred, and when B begins executing all memory writes from the volatile block will have finished.
>
>But does code within the volatile block get optimised?  It would be nice if code within a volatile statement were strictly ordered, with no opportunity for the compiler to move memory read/write operations. Does anybody know if this is true in practice?

The spec just says that "Memory writes occurring before the Statement are performed before any reads within or after the Statement. Memory reads occurring after the Statement occur after any writes before or within Statement are completed."  So the compiler is currently free to optimize within the code block, just not across the boundaries.  And now that I look at it, it sounds like volatile statements already implement acquire/release semantics.  I think the current behavior is actually okay though, as the code within the volatile block could theoretically be thousands of lines long, and I wouldn't want the optimizer to ignore that code completely, just not optimize it beyond the boundaries I've established.
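As an illustration of what those boundary guarantees buy you (a sketch only; the names are hypothetical, and this only constrains the compiler, not whatever reordering the CPU itself may do):

int data;
int ready;

// writer thread
void publish()
{
    data = 42;        // intended to complete before the flag is published
    volatile {
        ready = 1;    // release-like: the flag write may not float upward
    }
}

// reader thread
void consume()
{
    int r;
    volatile {
        r = ready;    // acquire-like: later reads may not be hoisted above this
    }
    if (r)
    {
        int d = data; // sees the write to data, as far as the compiler is concerned
    }
}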

Also, the requirements for 'synchronized' say nothing about optimizer behavior, and I think they should: 'synchronized' should probably be identical to 'volatile' except that the block is also atomic.  I grant that it would be easy enough for a Mutex writer to add volatile blocks to his code, but as a synchronized block is implicitly volatile, it's worth changing simply to improve clarity if nothing else.


Sean


June 22, 2005
Matthias Becker <Matthias_member@pathlink.com> wrote:

> You can build mutexes and monitors with synchronized without problems.

So why did Buhr implement them?

-manfred
June 22, 2005
Manfred Nowak wrote:
> Matthias Becker <Matthias_member@pathlink.com> wrote:
> 
> 
>>You can build mutexes and monitors with synchronized without
>>problems. 
> 
> 
> So why did Buhr implement them?
> 
> -manfred
I read the library-approaches paper by Buhr that you reference, and I don't see that he implemented anything.
He made two basic points:
1) Variables cached in registers will not be visible between tasks.
2) Code optimisation can reorder instructions aggressively, which can lead to code that should be inside critical sections being moved outside critical sections.

C addresses point 1 with the volatile keyword: any variable that is "volatile" will be written to memory rather than kept solely in registers.

D's meaning of volatile addresses both concerns: code cannot move around a volatile statement, and reads and writes are performed to memory.  D also adds "synchronized", but in reality you could build your own locks on top of volatile without the language feature "synchronized".
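As a sketch of that last point, a two-thread Peterson lock can be written with nothing but volatile statements.  This assumes the compiler barriers of volatile are all the ordering you need, which may not hold on hardware that reorders memory operations itself:

int[2] interested;
int turn;

void lock(int me)
{
    int other = 1 - me;
    volatile { interested[me] = 1; }  // announce intent to enter
    volatile { turn = other; }        // give the other thread the tie-break
    for (;;)
    {
        int busy, t;
        volatile { busy = interested[other]; t = turn; }
        if (!(busy && t == other))
            break;                    // other thread is out, or it is our turn
    }
}

void unlock(int me)
{
    volatile { interested[me] = 0; }  // release the lock
}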

So D as a language meets the criteria for concurrent programming that Buhr laid out.

Brad