June 23, 2019
On Saturday, 22 June 2019 at 23:41:49 UTC, Manu wrote:
> And in this case, the compiler may treat the scope ref as if it remained thread local and freely ignore the possibility of synchronisation issues, because that's the reality made possible by @safe. Any @trusted code must maintain those assumptions.

Although it isn't quite that simple, since many factors play into this, including the specifics of the compiler's optimization level and the concrete hardware (x86 does a lot to maintain cache coherency in hardware).

So without language support you risk the compiler repeating all the synchronization work you do in your @trusted code, or you risk it breaking if you use a compiler with whole-program optimization. What works with separate compilation does not necessarily work with full flow analysis.

But this really depends on the details of the language semantics and how far the compiler can go with optimization.  Any competitor to C++ ought to do better than C++ when it comes to optimization.  Concurrency is an area with ample opportunity, as the concurrency features in C++ are bolted on as an afterthought. (Very primitive.)

For instance, is the compiler allowed to elide atomic access if the object is marked as non-shared or immutable?  It should be allowed to (assuming it is known that the memory range is non-volatile).
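
A minimal sketch of the kind of elision I mean (hypothetical `localOnly` function; current compilers keep the atomic machinery, the point is only that an optimizer could legally drop it here):

    import core.atomic : atomicLoad, atomicStore;

    // `counter` is thread-local and its address never leaves this function,
    // so an optimizer that can prove that could in principle lower these
    // atomic calls to plain loads and stores.
    int localOnly()
    {
        int counter;                           // never escapes this thread
        auto p = cast(shared(int)*)&counter;   // viewed as shared only inside this frame
        atomicStore(*p, 1);
        return atomicLoad(*p);                 // could legally become a plain read
    }

    void main()
    {
        assert(localOnly() == 1);
    }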

So, if @trusted means you can do anything, then those optimization opportunities evaporate.

In my view the type system should be very strict.  I don't think @trusted (or even @system) should be allowed to break the core guarantees of the type system without clearly marking the section, and how it breaks them, using designated language features. So @trusted marks that you use language features that are not allowed in @safe because they could be used in a way that breaks memory safety. What you should ask for, then, is something similar for thread safety.

What you might want to ask for is a rendezvous-like language feature that temporarily turns non-shared data into shared within its scope.  Then you can implement parallel for with rendezvous as a primitive that the optimizer has to account for. That would be much safer than the programmer bypassing the type system without any proof that what they do is correct.
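
A minimal library-level sketch of the idea (this is just a cast plus std.parallelism, not the language feature I am describing; a real rendezvous would be visible to the optimizer):

    import core.atomic : atomicOp;
    import std.parallelism : parallel;
    import std.range : iota;

    void main()
    {
        int sum = 0;                           // thread-local before the region
        {
            auto s = cast(shared(int)*)&sum;   // rendezvous begins: lend it as shared
            foreach (i; parallel(iota(1, 101)))
                atomicOp!"+="(*s, i);          // every access inside the region is atomic
        }                                      // parallel foreach has joined: rendezvous ends
        assert(sum == 5050);                   // plain thread-local access is fine again
    }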


Proving correctness for concurrency is very hard; you usually have to account for all possible combinations, so to do it in a reasonably convincing manner you'll have to build a model that covers all of them.  It isn't something that can be done on the back of an envelope.

There are languages and tools for this (some use term-rewriting systems to do such proofs), but I think it is safe to assume that few D programmers know how to do it, or would be willing to even if they did.
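
To make "all combinations" concrete, here is a toy enumeration (nothing more than a sketch) of every interleaving of two threads that each perform a non-atomic increment, i.e. a load step followed by a store step, on a shared counter:

    import std.algorithm : sort;
    import std.stdio : writeln;

    // pending[t]: 2 = thread t still has to load, 1 = still has to store, 0 = done.
    // loaded[t]: the value thread t saw when it loaded.
    void explore(int counter, int[2] pending, int[2] loaded, ref bool[int] finals)
    {
        bool progressed = false;
        foreach (t; 0 .. 2)
        {
            if (pending[t] == 2)          // next step of thread t: load
            {
                auto p = pending; auto l = loaded;
                l[t] = counter; p[t] = 1;
                explore(counter, p, l, finals);
                progressed = true;
            }
            else if (pending[t] == 1)     // next step of thread t: store
            {
                auto p = pending;
                p[t] = 0;
                explore(loaded[t] + 1, p, loaded, finals);
                progressed = true;
            }
        }
        if (!progressed)
            finals[counter] = true;       // both threads done: record the final value
    }

    void main()
    {
        bool[int] finals;
        explore(0, [2, 2], [0, 0], finals);
        writeln(sort(finals.keys));       // prints [1, 2]; the 1 is the lost update
    }

Even two increments require walking every interleaving to find the lost-update outcome; real code has vastly more states than this.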


In C++ the type system is somewhat non-strict for historical and cultural reasons, but as a result the compiler can make few assumptions based on types, e.g. with const (although they have made union stricter than in C). That means that other languages can, in theory, provide better optimization and code generation than C++.

I think languages that want to compete with C++ should focus really hard on the weak spots of the C++ type system and find new opportunities in that area.


So what you want is a stronger type system (e.g. language features like rendezvous), not to allow the programmer to bypass and weaken the type system.

Ola.
June 24, 2019
On 23.06.19 08:27, Kagamin wrote:
> On Saturday, 22 June 2019 at 23:41:49 UTC, Manu wrote:
>> Wrong. I wish people that have never tried to use shared would just let those of us that have, and want to make the language feature useful have some authority on the matter.
> 
> Eh? No. You don't understand const, shared, scope, @safe and @trusted. 

This kind of statement is useless without a demonstration or explanation. Also, I think Manu argued correctly in this thread.
June 23, 2019
On 6/22/2019 4:22 PM, Manu wrote:
> Show how to escape the pointer when it is scope...?

Use @system code. The programmer only has to ensure the escaped reference ceases before the function returns to fulfill the 'scope' semantics.

The scope semantics do not include memory barriers, though.
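
A sketch of what that looks like (assuming dip1000-style scope checking, which is only enforced in @safe code; the final assert is reliable because of the join and the atomics, not because of 'scope'):

    import core.atomic : atomicLoad, atomicStore;
    import core.thread : Thread;

    void fun(scope ref shared(int) x) @system
    {
        auto p = &x;                                  // escape the scope ref (@system only)
        auto t = new Thread({ atomicStore(*p, 3); }); // hand it to another thread
        t.start();
        t.join();                                     // the escaped reference ceases here
    }

    void main()
    {
        shared int x;
        fun(x);
        assert(atomicLoad(x) == 3);  // guaranteed by the join plus the atomic store/load
    }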
June 24, 2019
On 24.06.19 04:31, Walter Bright wrote:
> On 6/22/2019 4:22 PM, Manu wrote:
>> Show how to escape the pointer when it is scope...?
> 
> Use @system code. The programmer only has to ensure the escaped reference ceases before the function returns to fulfill the 'scope' semantics.
> 
> The scope semantics do not include memory barriers, though.

You keep saying "before" and then "but no memory barriers".

I really don't understand this "before" without memory barriers. Without memory barriers, there is no "before". How to formalize this? What is it exactly that "scope" guarantees?
June 23, 2019
On 6/22/2019 4:24 PM, Manu wrote:
> If you engage in @system code to distribute across threads, it's a no brainer to expect that code to handle cache coherency measures. It would be broken if it didn't.

It is not a no-brainer. Nowhere is it specified, and all the examples and tutorials I've seen say DO NOT follow an atomic write with a non-atomic read of the same variable in another thread. Not only that, it's usually a CENTRAL THEME of these expositions.

The only synchronization required will be for atomics, and that assumes an atomic write followed by an atomic read, not an atomic write followed by a non-atomic read of the same memory.

The code may appear to work on the x86 because Intel CPUs do some additional not-required synchronization on reads. But it'll be leaving a nightmare for some poor sap who tries to port the code to the ARM.
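
A minimal sketch of the required pairing, using core.atomic with the default sequentially consistent ordering (replace either atomic access with a plain one and you get exactly the anti-pattern those tutorials warn about):

    import core.atomic : atomicLoad, atomicStore;
    import core.thread : Thread;

    shared int data;
    shared int flag;

    void producer()
    {
        atomicStore(data, 42);
        atomicStore(flag, 1);               // atomic write publishes data
    }

    void consumer()
    {
        while (atomicLoad(flag) == 0) {}    // atomic read paired with the atomic write
        assert(atomicLoad(data) == 42);     // guaranteed once the flag is seen
    }

    void main()
    {
        auto t = new Thread(&consumer);
        t.start();
        producer();
        t.join();
    }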



June 23, 2019
On 6/22/2019 4:44 PM, Manu wrote:
> Not being required to duplicate every threadsafe function is a huge advantage. We went on about this for weeks 6 months ago.

You've suggested that all operations on shared data be done with atomic function calls rather than operators. This implies the function bodies should be different.
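
A sketch of what that difference looks like (hypothetical `bump` helpers):

    import core.atomic : atomicLoad, atomicOp;

    void bump(ref int counter)        { counter += 1; }               // thread-local: plain operator
    void bump(ref shared int counter) { atomicOp!"+="(counter, 1); }  // shared: atomic call

    void main()
    {
        int a;
        shared int b;
        bump(a);
        bump(b);
        assert(a == 1 && atomicLoad(b) == 1);
    }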
June 23, 2019
On 6/23/2019 7:42 PM, Timon Gehr wrote:
> I really don't understand this "before" without memory barriers. Without memory barriers, there is no "before". How to formalize this? What is it exactly that "scope" guarantees?

The reference to shared i does not extend past the end of the function call. That doesn't mean the cached value of i is propagated to all other threads.

    int x;
    void fun(scope ref shared(int) x) {
        x = 3; // (*)
    } // end of reference to x is guaranteed
    assert(x == 3); // all caches not updated, assert is not guaranteed

(*) now replace this with a call to another thread to set x to 3.
June 24, 2019
On Monday, 24 June 2019 at 02:55:39 UTC, Walter Bright wrote:
> On 6/23/2019 7:42 PM, Timon Gehr wrote:
>> I really don't understand this "before" without memory barriers. Without memory barriers, there is no "before". How to formalize this? What is it exactly that "scope" guarantees?
>
> The reference to shared i does not extend past the end of the function call. That doesn't mean the cached value of i is propagated to all other threads.
>
>     int x;
>     void fun(scope ref shared(int) x) {
>         x = 3; // (*)
>     } // end of reference to x is guaranteed
>     assert(x == 3); // all caches not updated, assert is not guaranteed
>
> (*) now replace this with a call to another thread to set x to 3.

You can assign to a shared primitive without a compilation error?

Does the compiler do the necessary synchronization under the hood?

I would've assumed you'd have to use atomicStore or the like?
June 24, 2019
On Monday, 24 June 2019 at 11:38:29 UTC, aliak wrote:
> On Monday, 24 June 2019 at 02:55:39 UTC, Walter Bright wrote:
>>
>>     int x;
>>     void fun(scope ref shared(int) x) {
>>         x = 3; // (*)
>>     } // end of reference to x is guaranteed
>>     assert(x == 3); // all caches not updated, assert is not guaranteed
>>
>> (*) now replace this with a call to another thread to set x to 3.
>
> You can assign to a shared primitive without a compilation error?
>
> Does the compiler do the necessary synchronization under the hood?
>
> I would've assumed you'd have to use atomicStore or the like?

(*) now replace this with a call to another thread to set x to 3.
June 24, 2019
On Monday, 24 June 2019 at 11:38:29 UTC, aliak wrote:
> On Monday, 24 June 2019 at 02:55:39 UTC, Walter Bright wrote:
>> On 6/23/2019 7:42 PM, Timon Gehr wrote:
>>> [...]
>>
>> The reference to shared i does not extend past the end of the function call. That doesn't mean the cached value of i is propagated to all other threads.
>>
>>     int x;
>>     void fun(scope ref shared(int) x) {
>>         x = 3; // (*)
>>     } // end of reference to x is guaranteed
>>     assert(x == 3); // all caches not updated, assert is not guaranteed
>>
>> (*) now replace this with a call to another thread to set x to 3.
>
> You can assign to a shared primitive without a compilation error?
>
> Does the compiler do the necessary synchronization under the hood?
>
> I would've assumed you'd have to use atomicStore or the like?

ok so I guess you can indeed do that: https://d.godbolt.org/z/Ge7bTc

Is that intended behavior? I would've expected an `xchg` or `lock xchg` or something?
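
For comparison, a small sketch of the two spellings (assuming the behaviour shown on that godbolt link, where plain assignment to shared compiles and emits an ordinary store):

    import core.atomic : atomicLoad, atomicStore;

    shared int x;

    void plainAssign()
    {
        x = 3;               // accepted; compiles to a plain mov, no lock prefix or fence
    }

    void atomicAssign()
    {
        atomicStore(x, 3);   // explicit, sequentially consistent store via core.atomic
    }

    void main()
    {
        plainAssign();
        atomicAssign();
        assert(atomicLoad(x) == 3);
    }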