October 01, 2019
On 10/1/2019 7:14 AM, IGotD- wrote:
> On Tuesday, 1 October 2019 at 10:40:52 UTC, Mike Parker wrote:
>> "This change will require using core.atomic or equivalent functions to read and write to shared memory objects. It will prevent unintended, inadvertent non-use of atomic access."
> 
> What if the type is complex (like a struct) outside what the ISA can handle? Usually this only applies for integers 8 - 128 bits depending on architecture, anything beyond that is out of reach unless you want to dig into Intel TSX which the shared attribute isn't suitable for.
> 
> Another thing that is left out are the effects of operator overloading.
> 
> shared int s;
> 
> // What would this do? Atomically increment s or just atomically load s, add one
> // and then atomically store s?
> s = s + 1;
> // Would this atomically increase s and return the value after increment?
> s++;


Actually, shared operations would no longer be generated from the syntax. They'd all be done with library functions, which could be implemented as intrinsics by the compiler.

If it's outside what the ISA could handle, then those library functions would not be intrinsics, and it's up to the implementation of those functions to make it work.
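
Concretely, that would mean something like the following sketch (using `atomicLoad`, `atomicStore`, and `atomicOp` from the existing core.atomic; the exact surface the DIP would mandate is not yet specified):

```d
import core.atomic;

shared int s;

void example()
{
    // Instead of `s = s + 1;` the intent must be spelled out. This is
    // an atomic load followed by a separate atomic store -- NOT one
    // atomic increment:
    int v = atomicLoad(s);
    atomicStore(s, v + 1);

    // A single atomic read-modify-write, the likely intent of `s++`:
    atomicOp!"+="(s, 1);
}
```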
October 01, 2019
On 10/1/2019 9:08 AM, 12345swordy wrote:
> There is no detail on how this is going to be implemented.

It'll disallow direct translation of operators to code; instead, the user will have to call core.atomic library functions to do it, and the compiler may treat those as intrinsics.

October 01, 2019
On Tuesday, 1 October 2019 at 20:35:51 UTC, Walter Bright wrote:
> On 10/1/2019 9:08 AM, 12345swordy wrote:
>> There is no detail on how this is going to be implemented.
>
> It'll disallow direct translation of operators to code; instead, the user will have to call core.atomic library functions to do it, and the compiler may treat those as intrinsics.

Does this mean that the user can roll out their own custom implementation?

-Alex
October 02, 2019
On 02/10/2019 9:31 AM, Walter Bright wrote:
> On 10/1/2019 4:21 AM, rikki cattermole wrote:
>> This DIP will need to detail how it does (and doesn't) interact with methods with the shared attribute on it, along with examples of the shared qualifier on a struct/class.
> 
> The DIP does not change how `shared` is part of the type system, and does not change how `shared` affects the types of structs or classes or instances of them.

Okay, if we get a statement to that effect in the DIP, maybe with an example, then nobody can interpret it wrong :)

>> As it stands, if I was to implement it, I could interpret the DIP to only affect globals.
> 
> `shared` isn't a storage class, it's a type constructor, so I don't see how you could infer that.

Okay, in that case the spec needs updating, because right now it is listed as part of `StorageClass` along with `const`.
October 02, 2019
On 01.10.19 22:34, Walter Bright wrote:
> Actually, shared operations would no longer be generated from the syntax. They'd all be done with library functions

To clarify, we're going to get this(?):

    shared int x;
    void main()
    {
        x = 42; /* error */
    }

You should change the first sentence of the DIP then. It says: "Reads and writes to data typed as shared are made atomic where the target CPU supports atomic operations".

From that sentence I would very much expect the code to be accepted and to generate an atomic write.
October 01, 2019
On Tue, Oct 1, 2019 at 3:45 AM Mike Parker via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>
> [...]

At best, this DIP is premature; we need to make shared correctly
restrict read/write access before trying to do fancy stuff.
That flag is now available, but the implementation has bugs, and it
needs to be enabled by default.

Assuming that the supporting material I mention above is available and
works properly, then I think it might be reasonable to open some
discussion of this kind, and on that note... I think this is a really
bad idea, and almost the opposite of what you want in virtually every
case.
I've written a whole lot of lock-free concurrent code, which involves
a lot of atomics, and the semantics in this DIP are not anything I
have ever seen or wanted (as I've argued here in the past). I think
the semantics described are misleading, and would only serve to
teach amateur users to write lock-free algorithms incorrectly while
increasing their confidence that they're doing it right (they're
probably not).
1. Most atomic operations are NOT seq-cst. Seq-cst operations are *super rare*.
  a. I just searched our entire engine codebase: of 971 calls to
atomic functions, there are *four* instances of seq-cst, and I think
at least some of them are actually wrong (over-synchronised), possibly
all of them; I would need to look more closely.
2. Many atomic operations appear in a series, in which case, it's
typical that exactly one operation in the series may be the
synchronisation point.
3. In reality, probably at least half of operations are 'relaxed'.
4. xchg and cas operations are fundamental to any atomic code, and
there are no syntax or operators for those operations; so, you're
undoubtedly using core.atomic aggressively anyway, and I think it's
just weird to litter naked arithmetic in between calls to core.atomic
functions. That's misleading: it makes the naked arithmetic look like
NOT-atomic operations by contrast with the surrounding atomic library
calls.
5. In almost all cases when working with atomics, you tend to do
quirky stuff not captured by regular usage of data types, and not
conveniently facilitated by this DIP.
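
On point 4, a typical cas loop looks something like this (a sketch using core.atomic as it exists today; `MemoryOrder.raw` is druntime's spelling of relaxed):

```d
import core.atomic;

shared uint flags;

// Atomically set a bit: load with relaxed ordering, then retry the
// compare-and-swap until no other thread has raced us.
void setFlag(uint bit)
{
    uint old;
    do
    {
        old = atomicLoad!(MemoryOrder.raw)(flags);
    } while (!cas(&flags, old, old | bit));
}
```

There is no operator syntax for any of this; it is core.atomic calls all the way down.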

For instance; this is a common implementation for releasing a ref count:
  if (atomicLoad(rc, Relaxed) == 1 || atomicSub(rc, 1, Relaxed))
  {
    // maybe acquire if necessary
    // do release logic...
  }

How do I write that code with this DIP?

We need to make shared correct before we do any stuff like this, and then when it's correct, we can talk... but I will probably continue to resist this, because the only result I see is that it will train amateurs to write bad atomic algorithms with an improper sense of confidence gained by being sanctioned by the language.

This is a bad idea.
October 01, 2019
On Tue, Oct 1, 2019 at 10:57 PM Manu <turkeyman@gmail.com> wrote:
>
> [...]
>
> For instance; this is a common implementation for releasing a ref count:
>   if (atomicLoad(rc, Relaxed) == 1 || atomicSub(rc, 1, Relaxed))
>   {
>     // maybe acquire if necessary
>     // do release logic...
>   }

Sorry, that should have read:

  if (atomicLoad(rc, Relaxed) == 1 || atomicSub(rc, 1, Relaxed) == 1)
  {
    // maybe acquire if necessary
    // do release logic...
  }
October 01, 2019
On Tue, Oct 1, 2019 at 11:01 PM Manu <turkeyman@gmail.com> wrote:
>
> [...]

Wait up... it's possible that I've misunderstood this DIP...

There are 2 paragraphs that seem to contradict each other.

First there is A:
"""
By prohibiting direct access to shared data, the user will be required
to use core.atomic and to consider the correctness of their code.

# Using core.atomic instead

This change will require using core.atomic or equivalent functions to
read and write to shared memory objects. It will prevent unintended,
inadvertent non-use of atomic access.
"""

I absolutely agree with this text.

Then there's B:
"""
Atomic reads perform an acquire operation, writes perform a release
operation, and read-modify-write performs an acquire, then a
modification, and then a release. The result is sequentially
consistent ordering, and each thread observes the same order of
operations (total order).

The code generated would be equivalent to that from the core.atomic
library for the supported operations.
"""

And then goes on with examples like this where `x` is clearly accessed
without a call to core.atomic:
```
__gshared int g=0;
shared int x=0;

void thread1(){
    g=1; // non-atomic store
    x=1; // atomic store, release, happens after everything above
}

void thread2(){
    if(x==1){ // atomic load, acquire, happens before everything below
        assert(g==1); // non-atomic load
    }
}
```

I'm confused, which is it, A or B? I presume B, and my prior commentary applies.
October 01, 2019
On Tue, Oct 1, 2019 at 11:30 PM Manu <turkeyman@gmail.com> wrote:
>
> [...]

And then I notice everyone else has pointed out the confusion too.
Sorry for the noise!
Please clarify the confusion, then I can revoke my criticisms, or not...
October 02, 2019
Access to shared memory should be disallowed only for safe code, not for system code. For safe code it's justified, because safety is mandatory there; for system code there's a concern about the ease of use of shared data, which is hurt by littering atomic calls everywhere and is not addressed in this DIP. Stripping the shared qualifier off for assignment isn't necessarily correct if, say, the data type switches between atomic and non-atomic reference counting based on its type.
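
That last point might look like the following hypothetical sketch (the `RefCounted` type and its methods are invented for illustration): a type that overloads on the `shared` qualifier, which implicit qualifier-stripping would silently defeat:

```d
import core.atomic;

struct RefCounted
{
    private int rc;

    // Unshared instances use plain arithmetic...
    void retain() { ++rc; }

    // ...while shared instances must use atomics. If the compiler
    // stripped `shared` off for the call, it would silently select
    // the non-atomic overload.
    void retain() shared { atomicOp!"+="(rc, 1); }
}
```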