June 24, 2019
On 24.06.19 04:44, Walter Bright wrote:
> On 6/22/2019 4:24 PM, Manu wrote:
>> If you engage in @system code to distribute across threads, it's a no brainer to expect that code to handle cache coherency measures. It would be broken if it didn't.
> 
> It is not a no-brainer. Nowhere is it specified, and all the examples and tutorials I've seen say DO NOT follow an atomic write with a non-atomic read of the same variable in another thread. Not only that, it's usually a CENTRAL THEME of these expositions.
> ...

Are you saying code like the following would have UB?

__gshared int x=0, y=0;
void thread1(){
    x.write(1,release);
    y.write(1,release);
}

void thread2(){
    if(y.read(acquire)==1){
        assert(x==1); // plain read
    }
}
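For reference, the example above can be written as compilable D with `core.atomic`, assuming the pseudo-calls `x.write(1, release)` and `y.read(acquire)` are meant as a release store and an acquire load (a sketch, not from the original post):

```d
import core.atomic : atomicStore, atomicLoad, MemoryOrder;

__gshared int x = 0, y = 0;

void thread1()
{
    atomicStore!(MemoryOrder.rel)(x, 1);
    atomicStore!(MemoryOrder.rel)(y, 1);
}

void thread2()
{
    if (atomicLoad!(MemoryOrder.acq)(y) == 1)
    {
        // Under release/acquire semantics, the store to x in thread1
        // happens-before this read, so the assert cannot fire.
        assert(x == 1); // plain, non-atomic read
    }
}
```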
June 24, 2019
On Monday, June 24, 2019 5:38:29 AM MDT aliak via Digitalmars-d wrote:
> On Monday, 24 June 2019 at 02:55:39 UTC, Walter Bright wrote:
> > On 6/23/2019 7:42 PM, Timon Gehr wrote:
> >> I really don't understand this "before" without memory barriers. Without memory barriers, there is no "before". How to formalize this? What is it exactly that "scope" guarantees?
> >
> > The reference to shared i does not extend past the end of the function call. That doesn't mean the cached value of i is propagated to all other threads.
> >
> >     int x;
> >     void fun(scope ref shared(int) x) {
> >
> >         x = 3; // (*)
> >
> >     } // end of reference to x is guaranteed
> >     assert(x == 3); // all caches not updated, assert is not guaranteed
> >
> > (*) now replace this with a call to another thread to set x to 3.
>
> You can assign to a shared primitive without a compilation error?
>
> Does the compiler do the necessary synchronization under the hood?
>
> I would've assumed you'd have to use atomicStore or the like?

A number of us think that simply reading or assigning shared variables should be illegal, because the compiler doesn't do the necessary synchronization for you. Currently, the compiler prevents it for ++ and -- on that basis, but that's it.
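A small sketch of the behaviour being described (as of the 2019-era compilers under discussion): plain assignment to a shared variable compiles, while read-modify-write operations are rejected and must go through `core.atomic`:

```d
import core.atomic : atomicOp, atomicStore;

shared int s;

void demo()
{
    s = 1;               // compiles today: plain, non-atomic store
    // ++s;              // rejected: read-modify-write on shared
    atomicStore(s, 2);   // explicit atomic store
    atomicOp!"+="(s, 1); // explicit atomic increment
}
```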

However, one issue with that from a usability perspective is that even if you're doing all of the right stuff like protecting access to the shared variable via a mutex and then temporarily casting it to thread-local to operate on it while the mutex is locked, if you want to actually assign anything to the shared variable before releasing the mutex, you'd have to use a pointer to it that had been cast to thread-local, whereas right now, you're actually allowed to just assign to it. e.g.

shared MyClass obj;

synchronized(mutex)
{
    obj = cast(shared)someObj;
}

works right now, but if writing to shared variables were illegal like it arguably should be, then you'd have to do something ugly like

shared MyClass obj;

synchronized(mutex)
{
    auto ptr = cast(MyClass*)&obj;
    *ptr = someObj;
}

Either way, it's quite clear that as things stand, reading or writing a shared variable without using threading primitives is non-atomic and won't be synchronized properly. So, arguably, the type system shouldn't allow it (and that was started by disallowing ++ and --, but it was never finished). However, when this was discussed at this last DConf, Walter and Andrei didn't want to make any changes along those lines until we'd nailed down the exact semantics of how shared is supposed to work. Right now, shared is pretty much just defined as not being thread-local, without the finer details really being ironed out.

So, for now, shared does prevent you from accidentally converting between thread-local and shared, but on the whole, it doesn't actually do anything to prevent you from doing stuff to a shared variable that isn't going to be atomic or thread-safe, and it doesn't introduce stuff like fences to synchronize anything across threads. It's entirely up to the programmer to use it correctly with almost no help from the compiler. All it's really doing is segregating the shared data from the thread-local data and preventing you from crossing that barrier without casting, making such code @system. And that's definitely something useful, but the specification needs to be fully nailed down, and there are clearly some improvements that need to be made - though of course, a lot of the arguing is over the changes that should be made.
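Putting the pieces together, the mutex-plus-cast pattern described above looks something like this minimal sketch (the names `counter`, `mutex`, and `increment` are illustrative, not from the original post):

```d
import core.sync.mutex : Mutex;

shared int counter;
__gshared Mutex mutex;

shared static this()
{
    mutex = new Mutex;
}

void increment()
{
    synchronized (mutex)
    {
        // Cast away shared while the lock is held; correctness relies
        // on every other thread taking the same mutex before touching
        // counter. The compiler verifies none of this.
        auto p = cast(int*)&counter;
        ++*p;
    }
}
```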

 Jonathan M Davis



June 24, 2019
On Sunday, 23 June 2019 at 22:15:18 UTC, Timon Gehr wrote:
> Also, I think Manu argued correctly in this thread.

His wish is known: he has an idea about his own language and wants a compiler for it. We already had this very argument about const and shared before; now it's about scope and safe.
June 24, 2019
On Monday, 24 June 2019 at 13:20:23 UTC, Jonathan M Davis wrote:
> However, one issue with that from a usability perspective is that even if you're doing all of the right stuff like protecting access to the shared variable via a mutex and then temporarily casting it to thread-local to operate on it while the mutex is locked, if you want to actually assign anything to the shared variable before releasing the mutex, you'd have to use a pointer to it that had been cast to thread-local, whereas right now, you're actually allowed to just assign to it. e.g.
>
> shared MyClass obj;
>
> synchronized(mutex)
> {
>     obj = cast(shared)someObj;
> }
>
> works right now, but if writing to shared variables were illegal like it arguably should be, then you'd have to do something ugly like
>
> shared MyClass obj;
>
> synchronized(mutex)
> {
>     auto ptr = cast(MyClass*)&obj;
>     *ptr = someObj;
> }

You can have the same usability/aesthetics:

synchronized {
  cast(MyClass)obj = someObj;
}

Doesn't that work?

Ideally though, I think you'd want:

class MyClass {
  void opAssign(MyClass c) shared { ... }
}
June 24, 2019
On Monday, June 24, 2019 4:06:03 PM MDT aliak via Digitalmars-d wrote:
> On Monday, 24 June 2019 at 13:20:23 UTC, Jonathan M Davis wrote:
> > However, one issue with that from a usability perspective is that even if you're doing all of the right stuff like protecting access to the shared variable via a mutex and then temporarily casting it to thread-local to operate on it while the mutex is locked, if you want to actually assign anything to the shared variable before releasing the mutex, you'd have to use a pointer to it that had been cast to thread-local, whereas right now, you're actually allowed to just assign to it. e.g.
> >
> > shared MyClass obj;
> >
> > synchronized(mutex)
> > {
> >     obj = cast(shared)someObj;
> > }
> >
> > works right now, but if writing to shared variables were illegal like it arguably should be, then you'd have to do something ugly like
> >
> > shared MyClass obj;
> >
> > synchronized(mutex)
> > {
> >     auto ptr = cast(MyClass*)&obj;
> >     *ptr = someObj;
> > }
>
> You can have the same usability/aesthetics:
>
> synchronized {
>    cast(MyClass)obj = someObj;
> }
>
> Doesn't that work?

I've never seen a cast used in that manner. It's been my understanding that casts always result in rvalues, but it's quite possible that your example works, and I've just made a wrong assumption about how casts work with regards to lvalues. Right or wrong though, I'm not the only one who has thought that, since this issue has been brought up before. But even if it doesn't work, it would be a possible improvement to make dealing with shared more palatable.

> Ideally though, I think you'd want:
>
> class MyClass {
>    void opAssign(MyClass c) shared { ... }
> }

Sure, but if you're doing that, the class itself is handling its own synchronization, which doesn't work in all cases, and even if it did, it just pushes the problem into the class. The implementation of opAssign is going to have to deal with assigning to types that don't have a shared opAssign, where either atomic writes or mutexes with casts are going to be needed. In general, encapsulating such code is great where possible, but at some point, the built-in types are going to need to be manipulated, and if the type is a user-defined type that's designed to be thread-local but works as shared with mutexes and casts, then it's not going to have a shared opAssign any more than an int or string will. So, such an opAssign just encapsulates the problem; it doesn't make it go away.
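A sketch of what such a shared opAssign still has to do internally, which is the point being made: the synchronization is encapsulated, not eliminated. (This uses a struct, since D does not allow overloading identity assignment for classes; the names are illustrative.)

```d
import core.sync.mutex : Mutex;

struct Wrapped
{
    private int value;
    private __gshared Mutex mtx;

    shared static this() { mtx = new Mutex; }

    void opAssign(Wrapped rhs) shared
    {
        synchronized (mtx)
        {
            // Inside the lock, cast away shared to reach the plain
            // member, which has no shared opAssign of its own. The
            // mutex-and-cast problem has moved here, not gone away.
            (cast(Wrapped*)&this).value = rhs.value;
        }
    }
}
```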

- Jonathan M Davis



June 26, 2019
On Mon, Jun 24, 2019 at 12:50 PM Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>
> On 6/22/2019 4:44 PM, Manu wrote:
> > Not being required to duplicate every threadsafe function is a huge advantage. We went on about this for weeks 6 months ago.
>
> You've suggested that all operations on shared data be done with atomic function calls rather than operators. This implies the function bodies should be different.

I don't follow that logic. There may be atomics, but there are also cast-away-shared patterns that may be used. It's all very situational. Whatever a shared function does is its own business; I'm interested in the API from the caller's perspective here.

Whatever a shared method does, it simply must be threadsafe. Such a function is still threadsafe whether there are many threads with a reference to the shared object, or just one.

A method that performs a threadsafe operation on some object can safely perform it thread-locally, and we should be able to allow that, assuming the callee does NOT retain a shared reference to the object beyond the life of the call. We should be able to rely on `scope` to guarantee references are not retained beyond the life of the function.

This would be extremely useful.
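A sketch of the pattern Manu describes (illustrative names, and the implicit-call semantics are his proposal, not current D): a threadsafe shared method remains correct when invoked on data only one thread can see, so calling it on a thread-local instance ought to be allowed.

```d
import core.atomic : atomicOp;

struct Counter
{
    private int count;

    // Threadsafe whether one thread or many hold a shared reference.
    void increment() shared
    {
        atomicOp!"+="(count, 1);
    }
}

void useLocally()
{
    Counter c; // thread-local instance; no other thread can see it
    // Under the proposal, this call would be allowed implicitly,
    // provided `scope` guarantees the callee retains no shared
    // reference past the call. Today it requires an explicit cast:
    (cast(shared(Counter)*)&c).increment();
    assert(c.count == 1);
}
```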