November 15, 2012 Re: Something needs to happen with shared, and soon.
Posted in reply to Andrei Alexandrescu | On Wednesday, 14 November 2012 at 17:54:16 UTC, Andrei Alexandrescu wrote:
> That is correct. My point is that compiler implementers would follow some specification. That specification would contain information that atomicLoad and atomicStore must have special properties that put them apart from any other functions.

What are these special properties? Sorry, it seems like we are talking past each other…

>> [1] I am not sure where the point of diminishing returns is here,
>> although it might make sense to provide the same options as C++11. If I
>> remember correctly, D1/Tango supported a lot more levels of
>> synchronization.
>
> We could start with sequential consistency and then explore riskier/looser policies.

I'm not quite sure what you are saying here. The functions in core.atomic already exist, and currently offer four levels (raw, acq, rel, seq). Are you suggesting to remove the other options?

David
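[Editor's note: the four core.atomic levels David mentions (raw, acq, rel, seq) map closely onto the memory orders C++11 standardized. A minimal C++11 sketch of that correspondence, assuming the usual mapping of "raw" to relaxed and "seq" to sequentially consistent; the function name is illustrative:

```cpp
#include <atomic>

// Walks through the three main ordering levels on a single atomic.
// Single-threaded here, so only the API shape is being shown.
int ordering_demo() {
    std::atomic<int> x{0};

    x.store(1, std::memory_order_relaxed);        // "raw": atomicity only, no ordering
    int a = x.load(std::memory_order_relaxed);

    x.store(a + 1, std::memory_order_release);    // "rel": pairs with acquire loads
    int b = x.load(std::memory_order_acquire);    // "acq": pairs with release stores

    x.store(b + 1);                               // "seq": seq_cst is the C++ default
    return x.load();                              // also seq_cst
}
```

The open question in the thread — whether D should expose all of these or only sequential consistency — is exactly the trade-off C++11 resolved by defaulting to seq_cst while keeping the weaker orders available.]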
November 15, 2012 Re: Something needs to happen with shared, and soon. | ||||
---|---|---|---|---|
| ||||
Posted in reply to Sean Kelly | On Thursday, 15 November 2012 at 16:43:14 UTC, Sean Kelly wrote:
> On Nov 15, 2012, at 5:16 AM, deadalnix <deadalnix@gmail.com> wrote:
>>
>> What is the point of ensuring that the compiler does not reorder load/stores if the CPU is allowed to do so ?
>
> Because we can write ASM to tell the CPU not to. We don't have any such ability for the compiler right now.
I think the question was: Why would you want to disable compiler code motion for loads/stores which are not atomic, as the CPU might ruin your assumptions anyway?
David

November 15, 2012 Re: Something needs to happen with shared, and soon.
Posted in reply to David Nadlinger | On 11/15/12 1:29 PM, David Nadlinger wrote:
> On Wednesday, 14 November 2012 at 17:54:16 UTC, Andrei Alexandrescu wrote:
>> That is correct. My point is that compiler implementers would follow
>> some specification. That specification would contain information that
>> atomicLoad and atomicStore must have special properties that put them
>> apart from any other functions.
>
> What are these special properties? Sorry, it seems like we are talking
> past each other…
For example you can't hoist a memory operation before a shared load or after a shared store.
Andrei
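[Editor's note: in C++11 terms — used here only as an analogy for what a shared-qualified load/store might guarantee — Andrei's constraint is the classic release/acquire publication pattern: the data write may not sink below the release store, and the data read may not be hoisted above the acquire load. A hedged sketch:

```cpp
#include <atomic>
#include <thread>

int data = 0;                    // plain, non-atomic memory
std::atomic<bool> ready{false};  // publication flag

// Returns the value the consumer observed; guaranteed to be 42.
int run_once() {
    data = 0;
    ready.store(false);
    int seen = -1;
    std::thread producer([] {
        data = 42;                                     // may not sink below the release store
        ready.store(true, std::memory_order_release);
    });
    std::thread consumer([&] {
        while (!ready.load(std::memory_order_acquire)) {}  // reads may not hoist above this
        seen = data;                                   // therefore sees 42
    });
    producer.join();
    consumer.join();
    return seen;
}
```
]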

November 15, 2012 Re: Something needs to happen with shared, and soon.
Posted in reply to David Nadlinger | On 11/15/12 2:18 PM, David Nadlinger wrote:
> On Thursday, 15 November 2012 at 16:43:14 UTC, Sean Kelly wrote:
>> On Nov 15, 2012, at 5:16 AM, deadalnix <deadalnix@gmail.com> wrote:
>>>
>>> What is the point of ensuring that the compiler does not reorder
>>> load/stores if the CPU is allowed to do so ?
>>
>> Because we can write ASM to tell the CPU not to. We don't have any
>> such ability for the compiler right now.
>
> I think the question was: Why would you want to disable compiler code
> motion for loads/stores which are not atomic, as the CPU might ruin your
> assumptions anyway?
The compiler does whatever it takes to ensure sequential consistency for shared use, including possibly inserting fences in certain places.
Andrei

November 15, 2012 Re: Something needs to happen with shared, and soon.
Posted in reply to Andrei Alexandrescu | On Thursday, 15 November 2012 at 22:57:54 UTC, Andrei Alexandrescu wrote:
> On 11/15/12 1:29 PM, David Nadlinger wrote:
>> On Wednesday, 14 November 2012 at 17:54:16 UTC, Andrei Alexandrescu wrote:
>>> That is correct. My point is that compiler implementers would follow
>>> some specification. That specification would contain information that
>>> atomicLoad and atomicStore must have special properties that put them
>>> apart from any other functions.
>>
>> What are these special properties? Sorry, it seems like we are talking
>> past each other…
>
> For example you can't hoist a memory operation before a shared load or after a shared store.
Well, to be picky, that depends on what kind of memory operation you mean – moving non-volatile loads/stores across volatile ones is typically considered acceptable.
But still, you can't move memory operations across any other arbitrary function call either (unless you can prove it is safe by inspecting the callee's body, obviously), so I don't see where atomicLoad/atomicStore would be special here.
David

November 15, 2012 Re: Something needs to happen with shared, and soon.
Posted in reply to Andrei Alexandrescu | On Thursday, 15 November 2012 at 22:58:53 UTC, Andrei Alexandrescu wrote:
> On 11/15/12 2:18 PM, David Nadlinger wrote:
>> On Thursday, 15 November 2012 at 16:43:14 UTC, Sean Kelly wrote:
>>> On Nov 15, 2012, at 5:16 AM, deadalnix <deadalnix@gmail.com> wrote:
>>>>
>>>> What is the point of ensuring that the compiler does not reorder
>>>> load/stores if the CPU is allowed to do so ?
>>>
>>> Because we can write ASM to tell the CPU not to. We don't have any
>>> such ability for the compiler right now.
>>
>> I think the question was: Why would you want to disable compiler code
>> motion for loads/stores which are not atomic, as the CPU might ruin your
>> assumptions anyway?
>
> The compiler does whatever it takes to ensure sequential consistency for shared use, including possibly inserting fences in certain places.
>
> Andrei
How does this have anything to do with deadalnix's question that I rephrased? It is not at all clear that shared should do this (it currently doesn't), and the question was explicitly about Walter's statement that shared should disable compiler reordering while at the same time *not* inserting barriers/atomic ops. Thus the »which are not atomic« qualifier in my message.
David

November 15, 2012 Re: Something needs to happen with shared, and soon.
Posted in reply to David Nadlinger | On Nov 15, 2012, at 2:18 PM, David Nadlinger <see@klickverbot.at> wrote:
> On Thursday, 15 November 2012 at 16:43:14 UTC, Sean Kelly wrote:
>> On Nov 15, 2012, at 5:16 AM, deadalnix <deadalnix@gmail.com> wrote:
>>> What is the point of ensuring that the compiler does not reorder load/stores if the CPU is allowed to do so ?
>>
>> Because we can write ASM to tell the CPU not to. We don't have any such ability for the compiler right now.
>
> I think the question was: Why would you want to disable compiler code motion for loads/stores which are not atomic, as the CPU might ruin your assumptions anyway?
A barrier isn't always necessary to achieve the desired ordering on a given system. But I'd still call out to ASM to make sure the intended operation happened. I don't know that I'd ever feel comfortable with "volatile x=y" even if what I'd do instead is just a MOV.
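[Editor's note: C++11 later provided a portable compiler-only constraint for this, `std::atomic_signal_fence`, which restricts compiler reordering without emitting a CPU fence instruction; the GCC-style `asm volatile("" ::: "memory")` is the inline-ASM equivalent of the kind Sean describes. A sketch — the function and variable names are illustrative, and strictly speaking `atomic_signal_fence` is specified for signal-handler ordering, though it is widely used as a pure compiler barrier:

```cpp
#include <atomic>

int payload = 0;
int shared_flag = 0;

void publish(int v) {
    payload = v;
    // Compiler-only barrier: the compiler may not reorder the two stores
    // around it, but no CPU fence instruction is emitted.
    std::atomic_signal_fence(std::memory_order_seq_cst);
    shared_flag = 1;
}
```
]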

November 15, 2012 Re: Something needs to happen with shared, and soon.
Posted in reply to David Nadlinger | On Nov 15, 2012, at 3:05 PM, David Nadlinger <see@klickverbot.at> wrote:
> On Thursday, 15 November 2012 at 22:57:54 UTC, Andrei Alexandrescu wrote:
>> On 11/15/12 1:29 PM, David Nadlinger wrote:
>>> On Wednesday, 14 November 2012 at 17:54:16 UTC, Andrei Alexandrescu wrote:
>>>> That is correct. My point is that compiler implementers would follow some specification. That specification would contain information that atomicLoad and atomicStore must have special properties that put them apart from any other functions.
>>>
>>> What are these special properties? Sorry, it seems like we are talking past each other…
>>
>> For example you can't hoist a memory operation before a shared load or after a shared store.
>
> Well, to be picky, that depends on what kind of memory operation you mean – moving non-volatile loads/stores across volatile ones is typically considered acceptable.
Usually not, really. Like if you implement a mutex, you don't want non-volatile operations to be hoisted above the mutex acquire or sunk below the mutex release. However, it's safe to move additional operations into the block where the mutex is held.
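[Editor's note: Sean's mutex example corresponds to acquire semantics on lock and release semantics on unlock, which makes the critical section a "roach motel": operations may move in, but not out. A minimal C++11 spinlock sketch, illustrative rather than production code:

```cpp
#include <atomic>
#include <thread>

std::atomic_flag lock_ = ATOMIC_FLAG_INIT;
int counter = 0;  // protected by lock_

void locked_increment() {
    // Acquire on lock: later operations may not be hoisted above this.
    while (lock_.test_and_set(std::memory_order_acquire)) {}
    ++counter;  // may not leak out of the critical section
    // Release on unlock: earlier operations may not be sunk below this.
    lock_.clear(std::memory_order_release);
}

// Two threads each perform `iters` locked increments; no updates are lost.
int run(int iters) {
    counter = 0;
    std::thread a([&] { for (int i = 0; i < iters; ++i) locked_increment(); });
    std::thread b([&] { for (int i = 0; i < iters; ++i) locked_increment(); });
    a.join();
    b.join();
    return counter;
}
```

Note that moving an unrelated operation *into* the locked region is still legal, as Sean says: it only makes the operation more ordered, never less.]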

November 15, 2012 Re: Something needs to happen with shared, and soon.
Posted in reply to Sean Kelly | On Thursday, 15 November 2012 at 23:22:32 UTC, Sean Kelly wrote:
> On Nov 15, 2012, at 3:05 PM, David Nadlinger <see@klickverbot.at> wrote:
>> Well, to be picky, that depends on what kind of memory operation you mean – moving non-volatile loads/stores across volatile ones is typically considered acceptable.
>
> Usually not, really. Like if you implement a mutex, you don't want non-volatile operations to be hoisted above the mutex acquire or sunk below the mutex release. However, it's safe to move additional operations into the block where the mutex is held.
Oh well, I was just being stupid when typing up my response: What I meant to say is that you _can_ reorder a set of memory operations involving atomic/volatile ones unless you violate the guarantees of the chosen memory order option.
So, for Andrei's statement to be true, shared needs to be defined as making all memory operations sequentially consistent. Walter doesn't seem to think this is the way to go, at least if that is what he is referring to as »memory barriers«.
David
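[Editor's note: the distinction David draws matters because release/acquire alone does not give sequential consistency. In the classic store-buffering litmus test, seq_cst — the C++11 default, and what the reading of shared discussed above would imply — forbids the outcome where both threads read zero, while relaxed ordering permits it. A sketch in C++11, used here as an analogy:

```cpp
#include <atomic>
#include <thread>

std::atomic<int> x{0}, y{0};

// One round of the store-buffering litmus test with seq_cst (the default):
// thread A does x=1 then reads y; thread B does y=1 then reads x.
// Under sequential consistency, r1 == 0 && r2 == 0 is impossible.
bool both_zero_once() {
    x.store(0);
    y.store(0);
    int r1 = -1, r2 = -1;
    std::thread a([&] { x.store(1); r1 = y.load(); });
    std::thread b([&] { y.store(1); r2 = x.load(); });
    a.join();
    b.join();
    return r1 == 0 && r2 == 0;
}

bool forbidden_outcome_seen(int iters) {
    for (int i = 0; i < iters; ++i)
        if (both_zero_once()) return true;
    return false;
}
```

With `memory_order_relaxed` (or plain non-atomic accesses) on real hardware, the both-zero outcome can and does occur, which is exactly why "disable compiler reordering but insert no barriers" is a weaker guarantee than sequential consistency.]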

November 16, 2012 Re: Something needs to happen with shared, and soon.
Posted in reply to David Nadlinger | On Nov 15, 2012, at 3:30 PM, David Nadlinger <see@klickverbot.at> wrote:
> On Thursday, 15 November 2012 at 23:22:32 UTC, Sean Kelly wrote:
>> On Nov 15, 2012, at 3:05 PM, David Nadlinger <see@klickverbot.at> wrote:
>>> Well, to be picky, that depends on what kind of memory operation you mean – moving non-volatile loads/stores across volatile ones is typically considered acceptable.
>>
>> Usually not, really. Like if you implement a mutex, you don't want non-volatile operations to be hoisted above the mutex acquire or sunk below the mutex release. However, it's safe to move additional operations into the block where the mutex is held.
>
> Oh well, I was just being stupid when typing up my response: What I meant to say is that you _can_ reorder a set of memory operations involving atomic/volatile ones unless you violate the guarantees of the chosen memory order option.
>
> So, for Andrei's statement to be true, shared needs to be defined as making all memory operations sequentially consistent. Walter doesn't seem to think this is the way to go, at least if that is what he is referring to as »memory barriers«.
I think because of the as-if rule, the compiler can continue to optimize all it wants between volatile operations. Just not across them.
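[Editor's note: the as-if rule Sean refers to is visible with C/C++ `volatile`, used here as an analogy rather than D semantics: every volatile access must be performed, in source order, but ordinary computation between them remains fully optimizable. The function name is illustrative:

```cpp
// Both volatile reads must actually happen, in order; the pure
// arithmetic between them may be folded, rescheduled, or eliminated
// by the optimizer without violating the as-if rule.
int sum_twice(volatile int* p) {
    int a = *p;         // access 1: must be performed
    int t = a * 2 + 1;  // pure computation: freely optimizable
    int b = *p;         // access 2: must be performed, after access 1
    return t + b;
}
```
]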
Copyright © 1999-2021 by the D Language Foundation