February 14
Am Mon, 13 Feb 2017 17:44:10 +0000
schrieb Moritz Maxeiner <moritz@ucworks.org>:

> > Thread unsafe methods shouldn't be marked shared, it doesn't make sense. If you don't want to provide thread-safe interface, don't mark methods as shared, so they will not be callable on a shared instance and thus the user will be unable to use the shared object instance and hence will know the object is thread unsafe and needs manual synchronization.
> 
> To be clear: While I might, in general, agree that using shared methods only for thread safe methods seems to be a sensible restriction, neither language nor compiler require it to be so; and absence of evidence of a useful application is not evidence of absence.

The compiler of course can't require shared methods to be thread-safe, as it simply can't prove thread safety in all cases. This is similar to @trusted: you are supposed to make sure that the function behaves as expected. The compiler will catch some easy-to-detect mistakes (such as calling a non-shared method from a shared method, analogous to calling a @system method from a @safe method), but you could always use casts, pointers, etc. to fool the compiler.

You could use the same argument to mark any method as @trusted. Yes, it's possible, but it's a very bad idea.

Though I do agree that there might be edge cases: In a single core, single threaded environment, should an interrupt function be marked as shared? Probably not, as no synchronization is required when calling the function.

But if the interrupt accesses a variable and a normal function accesses the variable as well, the access needs to be 'volatile' (not cached into a register by the compiler; not closely related to this discussion) and atomic, as the interrupt might occur in between multiple partial writes. So the variable should be shared, although there's no multithreading (in the usual sense).

> you'd still need those memory barriers. Also note that the synchronization in the above is not needed in terms of semantics.

However, if you move your synchronized blocks to cover the complete sub-code blocks, barriers are not necessary. Traditional mutex locking is basically a superset and is usually implemented using barriers, AFAIK. I guess your point is that we need to define whether shared methods guarantee some sort of sequential consistency?

struct Foo
{
    private string _tmp;
    shared void doA() { synchronized { _tmp = "a"; } }
    shared void doB() { synchronized { _tmp = "b"; } }
    shared string getA() { synchronized { return _tmp; } }
    shared string getB() { synchronized { return _tmp; } }
}

thread1:
foo.doB();

thread2:
foo.doA();
auto result = foo.getA(); // could return "b"

I'm not sure how a compiler could prevent such 'logic' bugs. However, I think it should be considered a best practice to always make a shared function a self-contained entity, so that calling any other function in any order does not negatively affect the results. Though that might not always be possible.

> My opinion on the matter of `shared` emitting memory barriers is that either the spec and documentation[1] should be updated to reflect that sequential consistency is a non-goal of `shared` (and if that is decided this should be accompanied by an example of how to add memory barriers yourself), or it should be implemented. Though leaving it in the current "not implemented, no comment / plan on whether/when it will be implemented" state seems to have little practical consequence - since no one seems to actually work on this level in D - and I can thus understand why dealing with that is just not a priority.

I remember some discussions about this some years ago and IIRC the final decision was that the compiler will not magically insert any barriers for shared variables. Instead we have well-defined intrinsics in core.atomic dealing with this. Of course most of this stuff isn't implemented (no shared support in core.sync).

-- Johannes

February 14
On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
> The compiler of course can't require shared methods to be thread-safe, as it simply can't prove thread safety in all cases. This is similar to @trusted: you are supposed to make sure that the function behaves as expected. The compiler will catch some easy-to-detect mistakes (such as calling a non-shared method from a shared method, analogous to calling a @system method from a @safe method), but you could always use casts, pointers, etc. to fool the compiler.
>
> You could use the same argument to mark any method as @trusted. Yes, it's possible, but it's a very bad idea.
>
> Though I do agree that there might be edge cases: In a single core, single threaded environment, should an interrupt function be marked as shared? Probably not, as no synchronization is required when calling the function.
>
> But if the interrupt accesses a variable and a normal function accesses the variable as well, the access needs to be 'volatile' (not cached into a register by the compiler; not closely related to this discussion) and atomic, as the interrupt might occur in between multiple partial writes. So the variable should be shared, although there's no multithreading (in the usual sense).

Of course, I just wanted to point out that Kagamin's post scriptum is a simplification I cannot agree with. As a best practice? Sure. As a "never do it"? No.

On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
> Am Mon, 13 Feb 2017 17:44:10 +0000
> schrieb Moritz Maxeiner <moritz@ucworks.org>:
>> you'd still need those memory barriers. Also note that the synchronization in the above is not needed in terms of semantics.
>
> However, if you move your synchronized blocks to cover the complete sub-code blocks, barriers are not necessary. Traditional mutex locking is basically a superset and is usually implemented using barriers, AFAIK. I guess your point is that we need to define whether shared methods guarantee some sort of sequential consistency?

My point in those paragraphs was that synchronization and memory barriers solve two different problems that can occur in non-sequential programming, and that Kagamin's statement

> Memory barriers are a bad idea because they don't defend from a race condition

makes no sense (to me). But yes, I do think that the definition should have more background/context than the current "D FAQ states `shared` guarantees sequential consistency (not implemented)". Considering how many years that has been the state, I have personally concluded (for myself and how I deal with D) that sequential consistency is a non-goal of `shared`, but what is a person new to D supposed to think?

On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
>
> struct Foo
> {
>     private string _tmp;
>     shared void doA() { synchronized { _tmp = "a"; } }
>     shared void doB() { synchronized { _tmp = "b"; } }
>     shared string getA() { synchronized { return _tmp; } }
>     shared string getB() { synchronized { return _tmp; } }
> }
>
> thread1:
> foo.doB();
>
> thread2:
> foo.doA();
> auto result = foo.getA(); // could return "b"
>
> I'm not sure how a compiler could prevent such 'logic' bugs.

It's not supposed to. Also, your example does not implement the same semantics as what I posted, and yes, in your example there's no need for memory barriers. In the example I posted, synchronization is not necessary, but memory barriers are (and since synchronization is likely to have a significantly higher runtime cost than memory barriers, why would you want it, even if it were possible?).

On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
> However, I think it should be considered a best practice to always make a shared function a self-contained entity, so that calling any other function in any order does not negatively affect the results. Though that might not always be possible.

Yes, that matches what I tried to express.

On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
> Am Mon, 13 Feb 2017 17:44:10 +0000
> schrieb Moritz Maxeiner <moritz@ucworks.org>:
>> My opinion on the matter of `shared` emitting memory barriers is that either the spec and documentation[1] should be updated to reflect that sequential consistency is a non-goal of `shared` (and if that is decided this should be accompanied by an example of how to add memory barriers yourself), or it should be implemented. Though leaving it in the current "not implemented, no comment / plan on whether/when it will be implemented" state seems to have little practical consequence - since no one seems to actually work on this level in D - and I can thus understand why dealing with that is just not a priority.
>
> I remember some discussions about this some years ago and IIRC the final decision was that the compiler will not magically insert any barriers for shared variables. Instead we have well-defined intrinsics in core.atomic dealing with this. Of course most of this stuff isn't implemented (no shared support in core.sync).
>
> -- Johannes

Good to know, thanks, I seem to have missed that final decision. If that was indeed the case, then that should be reflected in the documentation of `shared` (including the FAQ).
February 14
On Tuesday, 14 February 2017 at 13:01:44 UTC, Moritz Maxeiner wrote:
> Of course, I just wanted to point out that Kagamin's post scriptum is a simplification I cannot agree with. As a best practice? Sure. As a "never do it"? No.

Sorry for the double post, error in the above, please use this instead:

Of course, I just wanted to point out that Kagamin's
> Thread unsafe methods shouldn't be marked shared, it doesn't make sense
is a simplification I cannot agree with. As a best practice? Sure. As a "never do it"? No.
February 14
On Monday, 13 February 2017 at 17:44:10 UTC, Moritz Maxeiner wrote:
> To be clear: While I might, in general, agree that using shared methods only for thread safe methods seems to be a sensible restriction, neither language nor compiler require it to be so; and absence of evidence of a useful application is not evidence of absence.

Right, a private shared method can be a good use case for a thread-unsafe shared method.

> ---
> __gshared int f = 0, x = 0;
> Object monitor;
>
> // thread 1
> synchronized (monitor) while (f == 0);
> // Memory barrier required here
> synchronized (monitor) writeln(x);
>
> // thread 2
> synchronized (monitor) x = 42;
> // Memory barrier required here
> synchronized (monitor) f = 1;
> ---

Not sure about this example, it demonstrates a deadlock.

> My opinion on the matter of `shared` emitting memory barriers is that either the spec and documentation[1] should be updated to reflect that sequential consistency is a non-goal of `shared` (and if that is decided this should be accompanied by an example of how to add memory barriers yourself), or it should be implemented.

I'm looking at this in terms of practical consequences and useful language features. What are people supposed to think and do when they see "guarantees sequential consistency"? I mean people at large.

> I agree, message passing is considerably less tricky and you're unlikely to shoot yourself in the foot. Nonetheless, there are valid use cases where the overhead of MP may not be acceptable.

Performance was a reason not to provide barriers. People who are concerned with performance are even unhappy with virtual methods; they won't be happy with barriers on every memory access.
February 14
On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
> I remember some discussions about this some years ago and IIRC the final decision was that the compiler will not magically insert any barriers for shared variables.

It was so some years ago; not sure if it's still so. I suspect automatic barriers come from the TDPL book and have roughly the same rationale as autodecoding: they fix something, and you have to guess whether that something is what you need.
February 14
Am Tue, 14 Feb 2017 13:01:44 +0000
schrieb Moritz Maxeiner <moritz@ucworks.org>:

> 
> It's not supposed to. Also, your example does not implement the same semantics as what I posted and yes, in your example, there's no need for memory barriers. In the example I posted, synchronization is not necessary, memory barriers are (and since synchronization is likely to have a significantly higher runtime cost than memory barriers, why would you want to, even if it were possible).
> 

I'll probably have to read up on memory barriers again; I never really understood when they are necessary ;-)


> >
> > I remember some discussions about this some years ago and IIRC the final decision was that the compiler will not magically insert any barriers for shared variables. Instead we have well-defined intrinsics in core.atomic dealing with this. Of course most of this stuff isn't implemented (no shared support in core.sync).
> >
> > -- Johannes
> 
> Good to know, thanks, I seem to have missed that final decision. If that was indeed the case, then that should be reflected in the documentation of `shared` (including the FAQ).

https://github.com/dlang/dlang.org/pull/1570

I think it's probably somewhere in this thread:

http://forum.dlang.org/post/k7pn19$bre$1@digitalmars.com

>1. Slapping shared on a type is never going to make algorithms on that type work in a concurrent context, regardless of what is done with memory barriers. Memory barriers ensure sequential consistency, they do nothing for race conditions that are sequentially consistent. Remember, single core CPUs are all sequentially consistent, and still have major concurrency problems. This also means that having templates accept shared(T) as arguments and have them magically generate correct concurrent code is a pipe dream.
>
>2. The idea of shared adding memory barriers for access is not going to ever work. Adding barriers has to be done by someone who knows what they're doing for that particular use case, and the compiler inserting them is not going to substitute.
>
>However, and this is a big however, having shared as compiler-enforced self-documentation is immensely useful. It flags where and when data is being shared.

http://forum.dlang.org/post/mailman.1904.1352922666.5162.digitalmars-d@puremagic.com

> Most of the reason for this was that I didn't like the old implications of shared, which was that shared methods would at some time in the future end up with memory barriers all over the place. That's been dropped, [...]


-- Johannes

February 14
Am Tue, 14 Feb 2017 14:38:32 +0000
schrieb Kagamin <spam@here.lot>:

> On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
> > I remember some discussions about this some years ago and IIRC the final decision was that the compiler will not magically insert any barriers for shared variables.
> 
> It was so some years ago; not sure if it's still so. I suspect automatic barriers come from the TDPL book and have roughly the same rationale as autodecoding: they fix something, and you have to guess whether that something is what you need.

At least this thread is from 2012, so more recent than TDPL: http://forum.dlang.org/post/k7pn19$bre$1@digitalmars.com

I'm not sure though if there were any further discussions/decisions after that discussion.

-- Johannes

February 14
On Tuesday, 14 February 2017 at 14:27:05 UTC, Kagamin wrote:
> On Monday, 13 February 2017 at 17:44:10 UTC, Moritz Maxeiner wrote:
>> To be clear: While I might, in general, agree that using shared methods only for thread safe methods seems to be a sensible restriction, neither language nor compiler require it to be so; and absence of evidence of a useful application is not evidence of absence.
>
> Right, a private shared method can be a good use case for a thread-unsafe shared method.
>
>> ---
>> __gshared int f = 0, x = 0;
>> Object monitor;
>>
>> // thread 1
>> synchronized (monitor) while (f == 0);
>> // Memory barrier required here
>> synchronized (monitor) writeln(x);
>>
>> // thread 2
>> synchronized (monitor) x = 42;
>> // Memory barrier required here
>> synchronized (monitor) f = 1;
>> ---
>
> Not sure about this example, it demonstrates a deadlock.

That's beside the point, but I guess I should've clarified the "not needed" as "harmful". The point was that memory barriers and synchronization are two separate solutions for two separate problems, and your post scriptum about memory barriers disregards that synchronization does not apply to the problem memory barriers solve.

On Tuesday, 14 February 2017 at 14:27:05 UTC, Kagamin wrote:
>
>> My opinion on the matter of `shared` emitting memory barriers is that either the spec and documentation[1] should be updated to reflect that sequential consistency is a non-goal of `shared` (and if that is decided this should be accompanied by an example of how to add memory barriers yourself), or it should be implemented.
>
> I'm looking at this in terms of practical consequences and useful language features.

So am I.

On Tuesday, 14 February 2017 at 14:27:05 UTC, Kagamin wrote:
> What are people supposed to think and do when they see "guarantees sequential consistency"? I mean people at large.

That's a documentation issue, however, and is imho not relevant to the decision whether one should, or should not emit memory barriers. It's only relevant to how the decision is then presented to the people at large.

On Tuesday, 14 February 2017 at 14:27:05 UTC, Kagamin wrote:
>
>> I agree, message passing is considerably less tricky and you're unlikely to shoot yourself in the foot. Nonetheless, there are valid use cases where the overhead of MP may not be acceptable.
>
> Performance was a reason not to provide barriers. People who are concerned with performance are even unhappy with virtual methods; they won't be happy with barriers on every memory access.

You seem to be trying to argue against someone stating that memory barriers should be emitted automatically, though I don't know why you think that's me. You initially stated that
> Memory barriers are a bad idea because they don't defend from a race condition, but they look like they do
which I rebutted, since memory barriers have nothing to do with race conditions. Whether memory barriers should be automatically emitted by the compiler is a separate issue, on which my position, btw, is that they shouldn't. The current documentation of `shared`, however, implies that such an emission (and the related sequential consistency) is a goal of `shared` (and just not yet implemented?) and does not reflect the apparently final decision that it's not.

February 14
On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
> But if the interrupt accesses a variable and a normal function accesses the variable as well, the access needs to be 'volatile' (not cached into a register by the compiler; not closely related to this discussion) and atomic, as the interrupt might occur in between multiple partial writes. So the variable should be shared, although there's no multithreading (in the usual sense).

Single publishing (loggers, singletons) doesn't need atomic load: https://forum.dlang.org/post/ozssiniwkaghcjvrlhjq@forum.dlang.org
February 15
On Tuesday, 14 February 2017 at 15:57:47 UTC, Moritz Maxeiner wrote:
> You seem to be trying to argue against someone stating memory barriers should be emitted automatically, though I don't know why you think that's me; You initially stated that
>> Memory barriers are a bad idea because they don't defend from a race condition, but they look like they do

You were concerned that implementation doesn't match FAQ regarding automatic barriers. I thought we were talking about that.