November 12, 2012
On 2012-11-12, 15:11, Jacob Carlborg wrote:

> On 2012-11-12 12:55, Regan Heath wrote:
>> On Mon, 12 Nov 2012 02:30:17 -0000, Walter Bright
>> <newshound2@digitalmars.com> wrote:
>>> To make a shared type work in an algorithm, you have to:
>>>
>>> 1. ensure single threaded access by acquiring a mutex
>>> 2. cast away shared
>>> 3. operate on the data
>>> 4. cast back to shared
>>> 5. release the mutex
>>
>> So what we actually want, in order to make the above "nice" is a
>> "scoped" struct wrapping the mutex and shared object which does all the
>> "dirty" work for you.  I'm thinking..
>>
>> // (0)
>> with(ScopedLock(obj,lock))  // (1)
>> {
>>    obj.foo = 2;              // (2)
>> }                           // (3)
>> // (4)
>>
>> (0) obj is a "shared" reference, lock is a global mutex
>> (1) mutex is acquired here, shared is cast away
>> (2) 'obj' is not "shared" here so data access is allowed
>> (3) ScopedLock is "destroyed" and the mutex released
>> (4) obj is shared again
>>
>> I think most of the above can be done without any compiler support but
>> it would be "nice" if the compiler did something clever with 'obj' such
>> that it knew it wasn't 'shared' inside the 'with' above.  If not, if
>> a full library solution is desired we could always have another
>> temporary "unshared" variable referencing obj.
>
> I'm just throwing it in here again, AST macros could probably solve this.

Until someone writes a proper DIP on them, macros can write entire software
packages, download Hitler, turn D into lisp, and bake bread. Can we please
stop with the 'macros could do that' until there's any sort of consensus as
to what macros *could* do?

-- 
Simen
November 12, 2012
Am 12.11.2012 07:50, schrieb Walter Bright:
> On 11/11/2012 10:05 PM, Benjamin Thaut wrote:
>> The only problem being that you can not really have user defined
>> shared (value)
>> types:
>>
>> http://d.puremagic.com/issues/show_bug.cgi?id=8295
>
> If you include an object designed to work only in a single thread
> (non-shared), make it shared, and then destruct it when other threads
> may be pointing to it ...
>
> What should happen?
>

I'm not talking about objects, I'm talking about value types.
And you can't make it work at all. If you do

  shared ~this()
  {
    buf = null;
  }

it won't work either. You don't have _any_ option to destroy a shared struct.

Kind Regards
Benjamin Thaut
November 12, 2012
On Monday, 12 November 2012 at 11:19:57 UTC, Walter Bright wrote:
> On 11/12/2012 2:57 AM, Johannes Pfau wrote:
>> But there are also shared member functions and they're kind of annoying
>> right now:
>>
>> * You can't call shared methods from non-shared methods or vice versa.
>>   This leads to code duplication, you basically have to implement
>>   everything twice:
>
> You can't get away from the fact that data that can be accessed from multiple threads has to be dealt with in a *fundamentally* different way than single threaded code. You cannot share code between the two. There is simply no conceivable way that "share" can be added and then code will become thread safe.

I know shared can't automatically make the code thread safe. I
just wanted to point out that this casting / code duplication is
annoying, but I don't know how this could be solved either.


>
> Yes, mutexes will need to exist in a global space.

I'm not sure if I understand this. Don't you think shared(Mutex)
should work?
AFAICS that's only a library problem: Add shared to the lock /
unlock methods in druntime and it should work?

Or global as in not in the struct instance?

>>
>>
>> And then there are some open questions with advanced use cases:
>> * How do I make sure that a non-shared delegate is only accepted if I
>>   have an A, but a shared delegate should be supported
>>   for shared(A) and A? (calling a shared delegate from a non-shared
>>   function should work, right?)
>>
>> struct A
>> {
>>     void a(T)(T v)
>>     {
>>         writeln("non-shared");
>>     }
>>     shared void a(T)(T v)  if (isShared!v) //isShared doesn't exist
>>     {
>>         writeln("shared");
>>     }
>> }
>
> First, you have to decide what you mean by a shared delegate. Do you mean the variable containing the two pointers that make up a delegate are shared, or the delegate is supposed to deal with shared data?

I'm talking about a delegate pointing to a method declared with
the "shared" keyword and the "this pointer" pointing to a shared
object:
struct A
{
    shared void a(){}
}
shared A instance;
auto del = &instance.a; //I'm talking about this type

To explain that use case: I think of a shared delegate as a
delegate that can be safely called from different threads. So I
can store it in a struct instance and later on call it from any
thread:

struct Signal
{
     //The variable is shared _AND_ the method is shared
     shared(shared void delegate()) _handler;

     shared void call() //Can be called from any thread
     {
         //Would have to synchronize access to the variable in a real world case,
         //but the call itself wouldn't have to be synchronized
         shared void delegate() localHandler;
         synchronized(mutex)
         {
             localHandler = _handler;
         }
         localHandler ();
     }
}

>
>
>>
>> And having fun with this little example:
>> http://dpaste.dzfl.pl/7f6a4ad2
>>
>> * What's the difference between: "void delegate() shared"
>>   and "shared(void delegate())"?
>>
>> Error: cannot implicitly convert expression (&a.abc) of type void
>> delegate() shared
>
> The delegate deals with shared data.

OK so that's what I need but the compiler doesn't let me declare
that type.

alias void delegate() shared del;
Error: const/immutable/shared/inout attributes are only valid for
non-static member functions

>> to shared(void delegate())
>
> The variable holding the delegate is shared.

OK, but when it's used as a function parameter, which is
pass-by-value for delegates and because of tail-shared there's
effectively no difference, right? In that case it's not possible
to pass a shared variable to the function as this will always
create a copy?

void abcd(shared(void delegate()) del)
which is the same as
void abcd(shared void delegate() del)

How would you pass del as a shared variable?

>
>
>> * So let's call it void delegate() shared instead:
>> void incrementA(void delegate() shared del)
>> /home/c684/c922.d(7): Error: const/immutable/shared/inout attributes
>>   are only valid for non-static member functions
November 12, 2012
On 12 November 2012 04:30, Walter Bright <newshound2@digitalmars.com> wrote:

> On 11/11/2012 10:46 AM, Alex Rønne Petersen wrote:
>
>> It's starting to get outright embarrassing to talk to newcomers about D's
>> concurrency support because the most fundamental part of it -- the shared
>> type
>> qualifier -- does not have well-defined semantics at all.
>>
>
> I think a couple things are clear:
>
> 1. Slapping shared on a type is never going to make algorithms on that type work in a concurrent context, regardless of what is done with memory barriers. Memory barriers ensure sequential consistency, they do nothing for race conditions that are sequentially consistent. Remember, single core CPUs are all sequentially consistent, and still have major concurrency problems. This also means that having templates accept shared(T) as arguments and have them magically generate correct concurrent code is a pipe dream.
>
> 2. The idea of shared adding memory barriers for access is not going to ever work. Adding barriers has to be done by someone who knows what they're doing for that particular use case, and the compiler inserting them is not going to substitute.
>
>
> However, and this is a big however, having shared as compiler-enforced self-documentation is immensely useful. It flags where and when data is being shared. So, your algorithm won't compile when you pass it a shared type? That is because it is NEVER GOING TO WORK with a shared type. At least you get a compile time indication of this, rather than random runtime corruption.
>
> To make a shared type work in an algorithm, you have to:
>
> 1. ensure single threaded access by acquiring a mutex
> 2. cast away shared
> 3. operate on the data
> 4. cast back to shared
> 5. release the mutex
>
> Also, all op= need to be disabled for shared types.
>

I agree completely with the OP: shared is really very unhelpful right now.
It just inconveniences you, and forces you to perform explicit casts (which
may cast away other attributes like const).
I've thought before that what it might be useful and practical for shared
to do is offer convenient methods to implement precisely what you describe
above.

Imagine a system where tagging a variable 'shared' would cause it to gain
some properties: a mutex, plus implicit var.lock()/release() methods to
call on either side of access to the shared variable. And unlike the
current situation where assignment is illegal, assignment would work as
usual, but the shared tag would imply a runtime check that the item is
locked when performing the assignment (perhaps that runtime check would be
removed in -release for performance).

This would make implementing the logic you describe above convenient, and you wouldn't need to be declaring explicit mutexes around the place. It would also address the safety by asserting that it is locked whenever accessed.


November 12, 2012
On 2012-11-12 17:57, Simen Kjaeraas wrote:

> Until someone writes a proper DIP on them, macros can write entire software
> packages, download Hitler, turn D into lisp, and bake bread. Can we please
> stop with the 'macros could do that' until there's any sort of consensus as
> to what macros *could* do?

Sure, I can try and stop doing that :)

-- 
/Jacob Carlborg
November 12, 2012
Am 12.11.2012 16:27, schrieb Sönke Ludwig:
> I generated some quick documentation with examples here:
> 
> http://vibed.org/temp/d-isolated-test/stdx/typecons/lock.html
> http://vibed.org/temp/d-isolated-test/stdx/typecons/makeIsolated.html
> http://vibed.org/temp/d-isolated-test/stdx/typecons/makeIsolatedArray.html
> 
> It does offer some nice improvements. No single cast and everything is statically checked.
> 

All examples compile now. Put everything on github for reference:

https://github.com/s-ludwig/d-isolated-test
November 12, 2012
Here is a wild idea:

//////////

void main () {

  mutex x;
  // mutex is not a type but rather a keyword
  // x is a symbol in order to allow
  // different x in different scopes

  shared(x) int i;
  // ... or maybe use UDA ?
  // mutex x must be locked
  // in order to change i

  synchronized (x) {
    // lock x in a compiler-aware way
    i++;
    // compiler guarantees that i will not
    // be changed outside synchronized(x)
  }

}

//////////

so I tried something similar with the current implementation:

//////////

import std.stdio;

void main () {

  shared(int) i1;
  auto m1 = new MyMutex();

  i1.attachMutex(m1);
  // m1 must be locked in order to modify i1
	
  // i1++;
  // should throw a compiler error

  // sharedAccess(i1)++;
  // runtime exception, m1 is not locked

  synchronized (m1) {
    sharedAccess(i1)++;
    // ok, m1 is locked
  }

}

// some generic code

import core.sync.mutex;

class MyMutex : Mutex {
  @property bool locked = false;
  @trusted void lock () {
    super.lock();
    locked = true;
  }
  @trusted void unlock () {
    locked = false;
    super.unlock();
  }
  bool tryLock () {
    bool result = super.tryLock();
    if (result)
      locked = true;
    return result;
  }
}

template unshared (T : shared(T)) {
  alias T unshared;
}

template unshared (T : shared(T)*) {
  alias T* unshared;
}

auto ref sharedAccess (T) (ref T value) {
  assert(value.attachMutex().locked);
  unshared!(T)* refVal = (cast(unshared!(T*)) &value);
  return *refVal;
}

MyMutex attachMutex (T) (T value, MyMutex mutex = null) {
  static __gshared MyMutex[T] mutexes;
  // this memory leak can be solved
  // but it's left like this to make the code simple
  synchronized if (value !in mutexes && mutex !is null)
    mutexes[value] = mutex;
  assert(mutexes[value] !is null);
  return mutexes[value];
}

//////////

and another example with methods:

//////////

import std.stdio;

class a {
  int i;
  void increment () { i++; }
}

void main () {

  auto a1 = new shared(a);
  auto m1 = new MyMutex();

  a1.attachMutex(m1);
  // m1 must be locked in order to modify a1
	
  // a1.increment();
  // compiler error

  // sharedAccess(a1).increment();
  // runtime exception, m1 is not locked

  synchronized (m1) {
    sharedAccess(a1).increment();
    // ok, m1 is locked
  }

}

// some generic code

import core.sync.mutex;

class MyMutex : Mutex {
  @property bool locked = false;
  @trusted void lock () {
    super.lock();
    locked = true;
  }
  @trusted void unlock () {
    locked = false;
    super.unlock();
  }
  bool tryLock () {
    bool result = super.tryLock();
    if (result)
      locked = true;
    return result;
  }
}

template unshared (T : shared(T)) {
  alias T unshared;
}

template unshared (T : shared(T)*) {
  alias T* unshared;
}

auto ref sharedAccess (T) (ref T value) {
  assert(value.attachMutex().locked);
  unshared!(T)* refVal = (cast(unshared!(T*)) &value);
  return *refVal;
}

MyMutex attachMutex (T) (T value, MyMutex mutex = null) {
  static __gshared MyMutex[T] mutexes;
  // this memory leak can be solved
  // but it's left like this to make the code simple
  synchronized if (value !in mutexes && mutex !is null)
    mutexes[value] = mutex;
  assert(mutexes[value] !is null);
  return mutexes[value];
}

//////////

In any case, if shared itself does not provide locking and does not fix problems but only points them out (not to be misunderstood, I completely agree with that), then I think that assigning a mutex to the variable is a must.

Although the latter examples already work with the current implementation, I like the first one (or something similar to the first one) more; it looks cleaner and leaves room for additional optimizations.


On 12.11.2012 17:14, deadalnix wrote:
> Le 12/11/2012 16:00, luka8088 a écrit :
>> If I understood correctly there is no reason why this should not
>> compile ?
>>
>> import core.sync.mutex;
>>
>> class MyClass {
>> void method () {}
>> }
>>
>> void main () {
>> auto myObject = new shared(MyClass);
>> synchronized (myObject) {
>> myObject.method();
>> }
>> }
>>
>
> D has no ownership, so the compiler can't know
> whether it is safe to do so or not.

November 13, 2012
On 12.11.2012 3:30, Walter Bright wrote:
> On 11/11/2012 10:46 AM, Alex Rønne Petersen wrote:
>> It's starting to get outright embarrassing to talk to newcomers about D's
>> concurrency support because the most fundamental part of it -- the
>> shared type
>> qualifier -- does not have well-defined semantics at all.
>
> I think a couple things are clear:
>
> 1. Slapping shared on a type is never going to make algorithms on that
> type work in a concurrent context, regardless of what is done with
> memory barriers. Memory barriers ensure sequential consistency, they do
> nothing for race conditions that are sequentially consistent. Remember,
> single core CPUs are all sequentially consistent, and still have major
> concurrency problems. This also means that having templates accept
> shared(T) as arguments and have them magically generate correct
> concurrent code is a pipe dream.
>
> 2. The idea of shared adding memory barriers for access is not going to
> ever work. Adding barriers has to be done by someone who knows what
> they're doing for that particular use case, and the compiler inserting
> them is not going to substitute.
>
>
> However, and this is a big however, having shared as compiler-enforced
> self-documentation is immensely useful. It flags where and when data is
> being shared. So, your algorithm won't compile when you pass it a shared
> type? That is because it is NEVER GOING TO WORK with a shared type. At
> least you get a compile time indication of this, rather than random
> runtime corruption.
>
> To make a shared type work in an algorithm, you have to:
>
> 1. ensure single threaded access by acquiring a mutex
> 2. cast away shared
> 3. operate on the data
> 4. cast back to shared
> 5. release the mutex
>
> Also, all op= need to be disabled for shared types.


This clarifies a lot, but a lot of people still get confused by:
http://dlang.org/faq.html#shared_memory_barriers
Is this a FAQ error?

And given what http://dlang.org/faq.html#shared_guarantees says, I have come to think that the fact that the following code compiles is either a lack of implementation, a compiler bug, or a FAQ error:

//////////

import core.thread;

void main () {
  shared int i;
  (new Thread({ i++; })).start();
}


November 13, 2012
On Tuesday, 13 November 2012 at 09:11:15 UTC, luka8088 wrote:
> On 12.11.2012 3:30, Walter Bright wrote:
>> On 11/11/2012 10:46 AM, Alex Rønne Petersen wrote:
>>> It's starting to get outright embarrassing to talk to newcomers about D's
>>> concurrency support because the most fundamental part of it -- the
>>> shared type
>>> qualifier -- does not have well-defined semantics at all.
>>
>> I think a couple things are clear:
>>
>> 1. Slapping shared on a type is never going to make algorithms on that
>> type work in a concurrent context, regardless of what is done with
>> memory barriers. Memory barriers ensure sequential consistency, they do
>> nothing for race conditions that are sequentially consistent. Remember,
>> single core CPUs are all sequentially consistent, and still have major
>> concurrency problems. This also means that having templates accept
>> shared(T) as arguments and have them magically generate correct
>> concurrent code is a pipe dream.
>>
>> 2. The idea of shared adding memory barriers for access is not going to
>> ever work. Adding barriers has to be done by someone who knows what
>> they're doing for that particular use case, and the compiler inserting
>> them is not going to substitute.
>>
>>
>> However, and this is a big however, having shared as compiler-enforced
>> self-documentation is immensely useful. It flags where and when data is
>> being shared. So, your algorithm won't compile when you pass it a shared
>> type? That is because it is NEVER GOING TO WORK with a shared type. At
>> least you get a compile time indication of this, rather than random
>> runtime corruption.
>>
>> To make a shared type work in an algorithm, you have to:
>>
>> 1. ensure single threaded access by acquiring a mutex
>> 2. cast away shared
>> 3. operate on the data
>> 4. cast back to shared
>> 5. release the mutex
>>
>> Also, all op= need to be disabled for shared types.
>
>
> This clarifies a lot, but a lot of people still get confused by:
> http://dlang.org/faq.html#shared_memory_barriers
> Is this a FAQ error?
>
> And given what http://dlang.org/faq.html#shared_guarantees says, I have come to think that the fact that the following code compiles is either a lack of implementation, a compiler bug, or a FAQ error:
>
> //////////
>
> import core.thread;
>
> void main () {
>   shared int i;
>   (new Thread({ i++; })).start();
> }

Um, sorry, the following code:

//////////

import core.thread;

void main () {
  int i;
  (new Thread({ i++; })).start();
}

November 13, 2012
Am 13.11.2012 10:14, schrieb luka8088:
> On Tuesday, 13 November 2012 at 09:11:15 UTC, luka8088 wrote:
>> On 12.11.2012 3:30, Walter Bright wrote:
>>> On 11/11/2012 10:46 AM, Alex Rønne Petersen wrote:
>>>> It's starting to get outright embarrassing to talk to newcomers about D's
>>>> concurrency support because the most fundamental part of it -- the
>>>> shared type
>>>> qualifier -- does not have well-defined semantics at all.
>>>
>>> I think a couple things are clear:
>>>
>>> 1. Slapping shared on a type is never going to make algorithms on that type work in a concurrent context, regardless of what is done with memory barriers. Memory barriers ensure sequential consistency, they do nothing for race conditions that are sequentially consistent. Remember, single core CPUs are all sequentially consistent, and still have major concurrency problems. This also means that having templates accept shared(T) as arguments and have them magically generate correct concurrent code is a pipe dream.
>>>
>>> 2. The idea of shared adding memory barriers for access is not going to ever work. Adding barriers has to be done by someone who knows what they're doing for that particular use case, and the compiler inserting them is not going to substitute.
>>>
>>>
>>> However, and this is a big however, having shared as compiler-enforced self-documentation is immensely useful. It flags where and when data is being shared. So, your algorithm won't compile when you pass it a shared type? That is because it is NEVER GOING TO WORK with a shared type. At least you get a compile time indication of this, rather than random runtime corruption.
>>>
>>> To make a shared type work in an algorithm, you have to:
>>>
>>> 1. ensure single threaded access by acquiring a mutex
>>> 2. cast away shared
>>> 3. operate on the data
>>> 4. cast back to shared
>>> 5. release the mutex
>>>
>>> Also, all op= need to be disabled for shared types.
>>
>>
>> This clarifies a lot, but a lot of people still get confused by:
>> http://dlang.org/faq.html#shared_memory_barriers
>> Is this a FAQ error?
>>
>> And given what http://dlang.org/faq.html#shared_guarantees says, I have come to think that the fact that the following code compiles is either a lack of implementation, a compiler bug, or a FAQ error:
>>
>> //////////
>>
>> import core.thread;
>>
>> void main () {
>>   shared int i;
>>   (new Thread({ i++; })).start();
>> }
> 
> Um, sorry, the following code:
> 
> //////////
> 
> import core.thread;
> 
> void main () {
>   int i;
>   (new Thread({ i++; })).start();
> }
> 

Only std.concurrency (using spawn() and send()) enforces that unshared data cannot be passed between
threads. The core.thread module is just a low-level wrapper around the OS threading functionality.