June 01, 2012
On Thu, 31 May 2012 19:35:50 +0100, Steven Schveighoffer <schveiguy@yahoo.com> wrote:

> On Thu, 31 May 2012 14:29:27 -0400, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:
>
>> On 5/31/12 7:01 AM, Regan Heath wrote:
>
>>> Sorry, I have no spare time to spare. You're getting free ideas/thoughts
>>> from me, feel free to ignore them.
>>
>> Thanks. Let me know if I understand correctly that your idea boils down to "I don't like synchronized, let's deprecate it and get back to core.sync.mutex and recommend the private thingamaroo." In that case, I disagree. I believe synchronized has good merits that are being ignored.
>
> No, this is definitely *not* what we are saying.  The idea is that synchronized(x) is still present, but what objects you can call this on, and more importantly, *who* can do this is restricted.

Exactly.

> Nobody is advocating abandoning synchronized in favor of manual locks.  In fact, I think we all want to *avoid* manual locks as much as possible.  It's all about controlling access.  If it comes down to "you must use a private, error-prone mutex member in order to prevent deadlocks," then I think we have room for improvement.

Indeed.

R

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
June 01, 2012
On 31/05/2012 20:17, Andrei Alexandrescu wrote:
> On 5/31/12 5:19 AM, deadalnix wrote:
>> The solution of passing a delegate as a parameter or as a template
>> parameter is superior, because it is now clear who is in charge of the
>> synchronization, greatly reducing the chances of deadlock.
>
> It can also be a lot clunkier for certain abstractions. Say I want a
> ProducerConsumerQueue. It's much more convenient to simply make it a
> synchronized class with the classic primitives, instead of primitives
> that accept delegates etc.
>
> Nevertheless I think there's merit in this idea. One thing to point out
> is that the idiom can easily be done today with a regular class holding
> a synchronized class private member.
>
> So we got everything we need.
>
>
> Andrei

I was thinking about that. Here is what I ended up thinking is the best solution:

synchronized classes exist. By default, they can't be used as the argument of synchronized(something).

synchronized(something) will be valid if something provides opSynchronized(scope void delegate()) or something similar. Think opApply here. The synchronized statement is rewritten into a call to that method, with the statement's body passed as the delegate.
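
For concreteness, a rough sketch of what a type providing the hook could look like (the name opSynchronized and the exact signature are only a suggestion, not a worked-out design):

import core.sync.mutex;

class Queue
{
    private Mutex mtx;
    private int[] items;

    this() { mtx = new Mutex; }

    // hypothetical hook: synchronized(q) { body } would be rewritten by
    // the compiler into q.opSynchronized({ body });
    void opSynchronized(scope void delegate() dg)
    {
        mtx.lock();
        scope(exit) mtx.unlock();
        dg();
    }
}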

Here are the benefits of such an approach:
1/ Lock and unlock are not exposed. You can only use them as a pair.
2/ You cannot lock on any object, so you avoid most liquid locks and don't waste memory.
3/ synchronized classes ensure that a class can be shared and that its internals are protected from concurrent access.
4/ It is not possible by default to lock on a synchronized class's instances. That grants better control over the lock, and it is now clear which piece of code is responsible for it.
5/ The design allows the programmer to grant permission to lock on a synchronized class's instances if he/she wants to.
6/ It is now possible to synchronize on a broader range of user-defined constructs.

The main drawback is the same as for opApply: return (and break/continue, though those are less relevant for opSynchronized). Solutions to this problem have been proposed in the past using compiler and stack magic.
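
To make the drawback concrete (sketch; q and done are just placeholders):

synchronized(q)
{
    if (done)
        return; // with a naive rewrite into a delegate call, this would only
                // exit the hidden delegate, not the enclosing function;
                // opApply handles this with its int-returning protocol, and
                // opSynchronized would need something similar
}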

It opens the door for things like:
ReadWriteLock rw;
synchronized(rw.read) {

}

synchronized(rw.write) {

}

And many types of lock: spin locks, interprocess locks, semaphores, etc. All of them can be used with the synchronized syntax, and without exposing locking and unlocking primitives.
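
For the read/write case above, rw.read and rw.write could simply hand out small proxies that each provide the hook. A sketch wrapping druntime's core.sync.rwmutex (the names are, again, only an illustration):

import core.sync.rwmutex;

struct ReadProxy
{
    private ReadWriteMutex.Reader r;

    void opSynchronized(scope void delegate() dg)
    {
        r.lock();
        scope(exit) r.unlock();
        dg();
    }
}

class ReadWriteLock
{
    private ReadWriteMutex mtx;

    this() { mtx = new ReadWriteMutex; }

    // the lock/unlock calls stay hidden behind opSynchronized;
    // write() would be analogous, wrapping mtx.writer
    ReadProxy read() { return ReadProxy(mtx.reader); }
}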

What do people think?
June 01, 2012
On 01.06.2012 16:26, deadalnix wrote:
> Here is what I ended up thinking is the best solution:
>
> synchronized classes exist. By default, they can't be used as the
> argument of synchronized(something).
>
> synchronized(something) will be valid if something provides
> opSynchronized(scope void delegate()) or something similar. Think
> opApply here. The synchronized statement is rewritten into a call to
> that method, with the statement's body passed as the delegate.
>
> Here are the benefits of such an approach:
> 1/ Lock and unlock are not exposed. You can only use them as a pair.
> 2/ You cannot lock on any object, so you avoid most liquid locks and
> don't waste memory.
> 3/ synchronized classes ensure that a class can be shared and that its
> internals are protected from concurrent access.
> 4/ It is not possible by default to lock on a synchronized class's
> instances. That grants better control over the lock, and it is now
> clear which piece of code is responsible for it.
> 5/ The design allows the programmer to grant permission to lock on a
> synchronized class's instances if he/she wants to.
> 6/ It is now possible to synchronize on a broader range of
> user-defined constructs.
>
> The main drawback is the same as for opApply: return (and
> break/continue, though those are less relevant for opSynchronized).
> Solutions to this problem have been proposed in the past using
> compiler and stack magic.
>
> It opens the door for things like:
> ReadWriteLock rw;
> synchronized(rw.read) {
>
> }
>
> synchronized(rw.write) {
>
> }
>
> And many types of lock: spin locks, interprocess locks, semaphores,
> etc. All of them can be used with the synchronized syntax, and without
> exposing locking and unlocking primitives.
>
> What do people think?

+1. Works for me.

It refines what I believe the shadow cabinet (loosely: me, you, Alex, Regan Heath and Steven) propose.

P.S. Removing monitor from non-synced/shared classes would be good too. As a separate matter.

-- 
Dmitry Olshansky
June 01, 2012
On Fri, 01 Jun 2012 08:38:45 -0400, Dmitry Olshansky <dmitry.olsh@gmail.com> wrote:

> On 01.06.2012 16:26, deadalnix wrote:
>> Here is what I ended up thinking is the best solution:
>>
>> synchronized classes exist. By default, they can't be used as the
>> argument of synchronized(something).
>>
>> synchronized(something) will be valid if something provides
>> opSynchronized(scope void delegate()) or something similar. Think
>> opApply here. The synchronized statement is rewritten into a call to
>> that method, with the statement's body passed as the delegate.
>>
>> Here are the benefits of such an approach:
>> 1/ Lock and unlock are not exposed. You can only use them as a pair.
>> 2/ You cannot lock on any object, so you avoid most liquid locks and
>> don't waste memory.
>> 3/ synchronized classes ensure that a class can be shared and that its
>> internals are protected from concurrent access.
>> 4/ It is not possible by default to lock on a synchronized class's
>> instances. That grants better control over the lock, and it is now
>> clear which piece of code is responsible for it.
>> 5/ The design allows the programmer to grant permission to lock on a
>> synchronized class's instances if he/she wants to.
>> 6/ It is now possible to synchronize on a broader range of
>> user-defined constructs.
>>
>> The main drawback is the same as for opApply: return (and
>> break/continue, though those are less relevant for opSynchronized).
>> Solutions to this problem have been proposed in the past using
>> compiler and stack magic.
>>
>> It opens the door for things like:
>> ReadWriteLock rw;
>> synchronized(rw.read) {
>>
>> }
>>
>> synchronized(rw.write) {
>>
>> }
>>
>> And many types of lock: spin locks, interprocess locks, semaphores,
>> etc. All of them can be used with the synchronized syntax, and without
>> exposing locking and unlocking primitives.
>>
>> What do people think?
>
> +1. Works for me.
>
> It refines what I believe the shadow cabinet (loosely: me, you, Alex, Regan Heath and Steven) propose.

Is this really necessary?  When is opSynchronized going to be written any way other than:

_mutex.lock();
scope(exit) _mutex.unlock();
dg();

I'll note that it's easier to forget to lock or unlock if the compiler isn't enforcing it.  You might even naively do this:

_mutex.lock();
dg();
_mutex.unlock(); // not called on exception thrown!

I kind of like the __lock()/__unlock() pair, where the compiler always calls both in the right place/way.  Yes, you could just leave those implementations blank, but that's very unlikely.
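
In other words, with that scheme I'd expect the statement to lower to roughly this (just a sketch, exact hook names aside):

// synchronized(obj) { doStuff(); } becomes, conceptually:
obj.__lock();
scope(exit) obj.__unlock(); // unlock is guaranteed, even on exception
doStuff();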

Plus, we already have issues with inout and delegates for opApply; this would have the same issues.

> P.S. Removing monitor from non-synced/shared classes would be good too. As a separate matter.

I think at this point, we should leave it there until we can really figure out a detailed plan on how to deal with it.  It currently affects all runtime code which does virtual function lookups or interface lookups, and alignment.  We would have to change a lot of compiler and runtime code to remove it.

I personally don't see it as a huge issue; we are already allocating on power-of-two boundaries, which can almost double the required space.  I feel class size is really one of those things you should be oblivious to.  That being said, I'm all for performance improvements, even small ones, if they are free and someone is willing to do all the leg work :)

-Steve
June 01, 2012
On 01-06-2012 14:26, deadalnix wrote:
> On 31/05/2012 20:17, Andrei Alexandrescu wrote:
>> On 5/31/12 5:19 AM, deadalnix wrote:
>>> The solution of passing a delegate as a parameter or as a template
>>> parameter is superior, because it is now clear who is in charge of the
>>> synchronization, greatly reducing the chances of deadlock.
>>
>> It can also be a lot clunkier for certain abstractions. Say I want a
>> ProducerConsumerQueue. It's much more convenient to simply make it a
>> synchronized class with the classic primitives, instead of primitives
>> that accept delegates etc.
>>
>> Nevertheless I think there's merit in this idea. One thing to point out
>> is that the idiom can easily be done today with a regular class holding
>> a synchronized class private member.
>>
>> So we got everything we need.
>>
>>
>> Andrei
>
> I was thinking about that. Here is what I ended up thinking is the
> best solution:
>
> synchronized classes exist. By default, they can't be used as the
> argument of synchronized(something).
>
> synchronized(something) will be valid if something provides
> opSynchronized(scope void delegate()) or something similar. Think
> opApply here. The synchronized statement is rewritten into a call to
> that method, with the statement's body passed as the delegate.
>
> Here are the benefits of such an approach:
> 1/ Lock and unlock are not exposed. You can only use them as a pair.
> 2/ You cannot lock on any object, so you avoid most liquid locks and
> don't waste memory.
> 3/ synchronized classes ensure that a class can be shared and that its
> internals are protected from concurrent access.
> 4/ It is not possible by default to lock on a synchronized class's
> instances. That grants better control over the lock, and it is now
> clear which piece of code is responsible for it.
> 5/ The design allows the programmer to grant permission to lock on a
> synchronized class's instances if he/she wants to.
> 6/ It is now possible to synchronize on a broader range of
> user-defined constructs.
>
> The main drawback is the same as for opApply: return (and
> break/continue, though those are less relevant for opSynchronized).
> Solutions to this problem have been proposed in the past using
> compiler and stack magic.
>
> It opens the door for things like:
> ReadWriteLock rw;
> synchronized(rw.read) {
>
> }
>
> synchronized(rw.write) {
>
> }
>
> And many types of lock: spin locks, interprocess locks, semaphores,
> etc. All of them can be used with the synchronized syntax, and without
> exposing locking and unlocking primitives.
>
> What do people think?

Your idea is great, but it has one (or arguably more than one) fundamental flaw, the same one that opApply has: the delegate's type is fixed. That is, you can't call opSynchronized() in a pure function, for example (the same goes for nothrow, @safe, ...).
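
Concretely, assuming the rewrite into a call to opSynchronized, something like this cannot work (sketch; Queue and f are invented names):

class Queue
{
    // the parameter type is fixed: impure, throwing, @system
    void opSynchronized(scope void delegate() dg) { /* lock; dg(); unlock */ }
}

pure void f(Queue q)
{
    synchronized(q) { } // error: calls the impure opSynchronized from a
                        // pure function, no matter how pure the body we
                        // pass actually is
}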

-- 
Alex Rønne Petersen
alex@lycus.org
http://lycus.org
June 01, 2012
On Thu, 31 May 2012 19:29:27 +0100, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:
> On 5/31/12 7:01 AM, Regan Heath wrote:
>> Sorry, I have no spare time to spare. You're getting free ideas/thoughts
>> from me, feel free to ignore them.
>
> Thanks. Let me know if I understand correctly that your idea boils down to "I don't like synchronized, let's deprecate it and get back to core.sync.mutex and recommend the private thingamaroo." In that case, I disagree. I believe synchronized has good merits that are being ignored.

To present this another way..

Your motivation for the construct: "synchronized(a, b, ...)" was to prevent deadlocks caused by:

[thread1]
synchronized(a)
{
  synchronized(b)
  {
  }
}

[thread2]
synchronized(b)
{
  synchronized(a)
  {
  }
}

right?

Well, this is the same problem expressed in several other less-obvious (to code inspection) ways:

1.
[thread1]
synchronized(a)
{
  b.foo();  // where b.foo is { synchronized(this) { ... } }
}

[thread2]
synchronized(b)
{
  a.foo();  // where a.foo is { synchronized(this) { ... } }
}

2.
[thread1]
synchronized(a)
{
  b.foo();  // where b.foo is synchronized void foo() { ... }
}

[thread2]
synchronized(b)
{
  a.foo();  // where a.foo is synchronized void foo() { ... }
}

#1 can be solved (in most/many cases) by doing 2 things: first, by disallowing that idiom completely in favour of synchronized classes/class methods (which I think TDPL does?), and second, by adding more control as described below in #2.

#2 can be solved (in most/many cases) by allowing greater control over who can participate in synchronized statements.  If either 'a' or 'b' were not allowed to participate in a synchronized statement, then either thread1 or thread2 would be invalid code and a deadlock involving these 2 objects would be impossible(*).
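
In code terms, the restriction I mean would look roughly like this (a sketch of the 'not allowed by default' behaviour, names invented):

synchronized class A
{
    void foo() { /* runs under A's own internal lock */ }
}

void worker(A a)
{
    synchronized(a) // rejected by default under this scheme: A does not
    {               // expose its lock to the outside
    }
    a.foo();        // fine: only A decides when its lock is taken
}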

There will still exist some synchronized classes which want to participate in synchronized statements, but I'm thinking/hoping this is rare.  If the default for D is 'not allowed', then it becomes a conscious choice, and we can supply the developer with a warning in the docs describing how to do it, introduce the synchronized(a, b, ...) construct, etc.

From another angle.. I'm guessing it's either impossible or very hard to detect the 2 cases presented above at compile time?  Essentially the compiler would need to know which code could execute in separate threads, then determine lock ordering for all shared/lockable objects, then detect cases of both (lock a, b) and (lock b, a) in separate threads.  Sounds tricky.

R

(*)using synchronized statements - one could still keep a reference to the other internally and call a synchronized member function from within a synchronized member function

-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/
June 01, 2012
On 01/06/2012 14:55, Regan Heath wrote:
> On Thu, 31 May 2012 19:29:27 +0100, Andrei Alexandrescu
> <SeeWebsiteForEmail@erdani.org> wrote:
>> On 5/31/12 7:01 AM, Regan Heath wrote:
>>> Sorry, I have no spare time to spare. You're getting free ideas/thoughts
>>> from me, feel free to ignore them.
>>
>> Thanks. Let me know if I understand correctly that your idea boils
>> down to "I don't like synchronized, let's deprecate it and get back to
>> core.sync.mutex and recommend the private thingamaroo." In that case,
>> I disagree. I believe synchronized has good merits that are being
>> ignored.
>
> To present this another way..
>
> Your motivation for the construct: "synchronized(a, b, ...)" was to
> prevent deadlocks caused by:
>
> [thread1]
> synchronized(a)
> {
> synchronized(b)
> {
> }
> }
>
> [thread2]
> synchronized(b)
> {
> synchronized(a)
> {
> }
> }
>
> right?
>
> Well, this is the same problem expressed in several other less-obvious
> (to code inspection) ways:
>
> 1.
> [thread1]
> synchronized(a)
> {
> b.foo(); // where b.foo is { synchronized(this) { ... } }
> }
>
> [thread2]
> synchronized(b)
> {
> a.foo(); // where a.foo is { synchronized(this) { ... } }
> }
>
> 2.
> [thread1]
> synchronized(a)
> {
> b.foo(); // where b.foo is synchronized void foo() { ... }
> }
>
> [thread2]
> synchronized(b)
> {
> a.foo(); // where a.foo is synchronized void foo() { ... }
> }
>
> #1 can be solved (in most/many cases) by doing 2 things: first, by
> disallowing that idiom completely in favour of synchronized
> classes/class methods (which I think TDPL does?), and second, by adding
> more control as described below in #2.
>
> #2 can be solved (in most/many cases) by allowing greater control over
> who can participate in synchronized statements. If either 'a' or 'b'
> were not allowed to participate in a synchronized statement, then either
> thread1 or thread2 would be invalid code and a deadlock involving these
> 2 objects would be impossible(*).
>
> There will still exist some synchronized classes which want to
> participate in synchronized statements, but I'm thinking/hoping this is
> rare. If the default for D is 'not allowed', then it becomes a
> conscious choice, and we can supply the developer with a warning in the
> docs describing how to do it, introduce the synchronized(a, b, ...)
> construct, etc.
>
>  From another angle.. I'm guessing it's either impossible or very hard
> to detect the 2 cases presented above at compile time? Essentially the
> compiler would need to know which code could execute in separate
> threads, then determine lock ordering for all shared/lockable objects,
> then detect cases of both (lock a, b) and (lock b, a) in separate
> threads. Sounds tricky.
>
> R
>
> (*)using synchronized statements - one could still keep a reference to
> the other internally and call a synchronized member function from within
> a synchronized member function
>

I think it is unrealistic to prevent all deadlocks, unless you can come up with a radically new approach.

It is still possible to provide interfaces that prevent common traps.
June 01, 2012
On 01/06/2012 14:52, Alex Rønne Petersen wrote:
> On 01-06-2012 14:26, deadalnix wrote:
>> On 31/05/2012 20:17, Andrei Alexandrescu wrote:
>>> On 5/31/12 5:19 AM, deadalnix wrote:
>>>> The solution of passing a delegate as a parameter or as a template
>>>> parameter is superior, because it is now clear who is in charge of the
>>>> synchronization, greatly reducing the chances of deadlock.
>>>
>>> It can also be a lot clunkier for certain abstractions. Say I want a
>>> ProducerConsumerQueue. It's much more convenient to simply make it a
>>> synchronized class with the classic primitives, instead of primitives
>>> that accept delegates etc.
>>>
>>> Nevertheless I think there's merit in this idea. One thing to point out
>>> is that the idiom can easily be done today with a regular class holding
>>> a synchronized class private member.
>>>
>>> So we got everything we need.
>>>
>>>
>>> Andrei
>>
>> I was thinking about that. Here is what I ended up thinking is the
>> best solution:
>>
>> synchronized classes exist. By default, they can't be used as the
>> argument of synchronized(something).
>>
>> synchronized(something) will be valid if something provides
>> opSynchronized(scope void delegate()) or something similar. Think
>> opApply here. The synchronized statement is rewritten into a call to
>> that method, with the statement's body passed as the delegate.
>>
>> Here are the benefits of such an approach:
>> 1/ Lock and unlock are not exposed. You can only use them as a pair.
>> 2/ You cannot lock on any object, so you avoid most liquid locks and
>> don't waste memory.
>> 3/ synchronized classes ensure that a class can be shared and that its
>> internals are protected from concurrent access.
>> 4/ It is not possible by default to lock on a synchronized class's
>> instances. That grants better control over the lock, and it is now
>> clear which piece of code is responsible for it.
>> 5/ The design allows the programmer to grant permission to lock on a
>> synchronized class's instances if he/she wants to.
>> 6/ It is now possible to synchronize on a broader range of
>> user-defined constructs.
>>
>> The main drawback is the same as for opApply: return (and
>> break/continue, though those are less relevant for opSynchronized).
>> Solutions to this problem have been proposed in the past using
>> compiler and stack magic.
>>
>> It opens the door for things like:
>> ReadWriteLock rw;
>> synchronized(rw.read) {
>>
>> }
>>
>> synchronized(rw.write) {
>>
>> }
>>
>> And many types of lock: spin locks, interprocess locks, semaphores,
>> etc. All of them can be used with the synchronized syntax, and without
>> exposing locking and unlocking primitives.
>>
>> What do people think?
>
> Your idea is great, but it has one (or arguably more than one)
> fundamental flaw, the same one that opApply has: the delegate's type is
> fixed. That is, you can't call opSynchronized() in a pure function, for
> example (the same goes for nothrow, @safe, ...).
>

I was also thinking about passing the delegate as a template parameter, but I wanted to keep the design proposal consistent with what already exists. It solves some of these issues.

Maybe opApply has to be modified that way, or, better, both forms could work.

Anyway, some issues with opApply are known and have to be fixed, and the same issues are present here. I did mention that in my proposal. Still, I think it is better to have consistent mechanisms in the language, and it is no more work to fix this for both opApply and opSynchronized than to fix it for opApply alone.
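
Roughly, the templated form I have in mind (again just a sketch):

import core.sync.mutex;

class Queue
{
    private Mutex mtx;

    this() { mtx = new Mutex; }

    // the delegate's type is no longer fixed: each instantiation accepts
    // whatever attributes the body passed by the caller happens to have
    void opSynchronized(DG)(scope DG dg) if (is(typeof(dg())))
    {
        mtx.lock();
        scope(exit) mtx.unlock();
        dg();
    }
}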
June 01, 2012
On Fri, 01 Jun 2012 10:09:01 -0400, deadalnix <deadalnix@gmail.com> wrote:


> I think it is unrealistic to prevent all deadlocks, unless you can come up with a radically new approach.
>
> It is still possible to provide interfaces that prevent common traps.

Right, the idea is not to exterminate all deadlock, it's to *make it possible to* prevent deadlock.  Currently, it's not possible unless you avoid using synchronized(this).
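
For reference, the idiom that does work today is the private mutex member (a sketch; Account is just an example):

import core.sync.mutex;

class Account
{
    private Mutex mtx; // nothing outside this class can lock it
    private int balance;

    this() { mtx = new Mutex; }

    void deposit(int amount)
    {
        mtx.lock();
        scope(exit) mtx.unlock();
        balance += amount;
    }
}

With synchronized(this) or synchronized methods, any code holding a reference can do synchronized(account) { ... } and take part in a lock-ordering cycle the class author never sees.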

-Steve
June 01, 2012
On Fri, 01 Jun 2012 15:34:20 +0100, Steven Schveighoffer <schveiguy@yahoo.com> wrote:

> On Fri, 01 Jun 2012 10:09:01 -0400, deadalnix <deadalnix@gmail.com> wrote:
>
>
>> I think it is unrealistic to prevent all deadlocks, unless you can come up with a radically new approach.
>>
>> It is still possible to provide interfaces that prevent common traps.
>
> Right, the idea is not to exterminate all deadlock, it's to *make it possible to* prevent deadlock.  Currently, it's not possible unless you avoid using synchronized(this).

What he ^ said :)

R


-- 
Using Opera's revolutionary email client: http://www.opera.com/mail/