Thread overview
TLBB: The Last Big Breakage
Mar 16, 2014
Daniel Kozak
Mar 16, 2014
deadalnix
Mar 16, 2014
Russel Winder
Mar 16, 2014
Michel Fortin
Mar 16, 2014
luka8088
Not the last breakage (Was: TLBB: The Last Big Breakage)
Mar 16, 2014
bearophile
Mar 16, 2014
Benjamin Thaut
Mar 16, 2014
bearophile
Mar 19, 2014
Kagamin
Mar 19, 2014
bearophile
Mar 19, 2014
David Nadlinger
Mar 16, 2014
bearophile
Mar 16, 2014
bearophile
Mar 16, 2014
Mathias LANG
Mar 16, 2014
Andrea
Mar 16, 2014
Andrej Mitrovic
Mar 17, 2014
Andrea
Mar 16, 2014
Russel Winder
Mar 16, 2014
Jesse Phillips
Mar 16, 2014
Martin Nowak
Mar 17, 2014
Jesse Phillips
Mar 17, 2014
Marco Leise
Mar 17, 2014
Andrea
Mar 17, 2014
Jesse Phillips
Mar 17, 2014
Nick Treleaven
Mar 17, 2014
Jesse Phillips
Mar 18, 2014
Marco Leise
Mar 18, 2014
Dicebot
Mar 18, 2014
deadalnix
Oct 11, 2015
Andrej Mitrovic
March 16, 2014
D1's approach to multithreading was wanting. D2 executed a big departure from that with the shared qualifier and the default-thread-local approach to data.

We think this is a win, but D2 inherited a lot of D1's thread-related behavior by default, and some of the rules introduced by TDPL (http://goo.gl/9gtH0g) remained in the "I have a dream" stage.

Fixing that has not gained focus until recently, when e.g. https://github.com/D-Programming-Language/dmd/pull/3067 came about. There are other changes bringing more stringent control of shared members, e.g. "synchronized" is all or none, "synchronized" only makes direct member variables unshared, and more.

This will statically break code. The compiler will refuse to compile code that is incorrect, but also plenty of code that is correct; it will demand extra guarantees from user code, whether in the form of casts or stated assumptions.
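
To illustrate, the kind of cast and stated assumption involved might look something like this (just a sketch; the names and the exact rule here are made up, not taken from the pull request):

import core.sync.mutex : Mutex;

shared int counter;          // deliberately shared across threads
__gshared Mutex counterLock; // protects counter

shared static this()
{
    counterLock = new Mutex;
}

void bump()
{
    synchronized (counterLock)
    {
        // Under stricter checking, touching `counter` directly would be
        // rejected; the cast is the programmer's stated guarantee that the
        // lock provides exclusive access for the duration of this block.
        auto p = cast(int*) &counter;
        ++*p;
    }
}

void main()
{
    bump();
}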

I believe this is a bridge we do need to cross. One question is how we go about it: all at once, or gradually?


Andrei

March 16, 2014
ASAP please.
On 16. 3. 2014 5:10, "Andrei Alexandrescu" <SeeWebsiteForEmail@erdani.org> wrote:

> […]
>
> I believe this is a bridge we do need to cross. One question is how we go about it: all at once, or gradually?


March 16, 2014
On Sunday, 16 March 2014 at 04:08:15 UTC, Andrei Alexandrescu wrote:
> […]
>
> Fixing that has not gained focus until recently, when e.g. https://github.com/D-Programming-Language/dmd/pull/3067 came about. There are other changes bringing more stringent control of shared members, e.g. "synchronized" is all or none, "synchronized" only makes direct member variables unshared, and more.
>
> […]

We should probably relax the restriction.

Synchronized classes can have public fields, but those fields can't be accessed on shared instances. That way, no code is broken when shared isn't used. That should dramatically decrease the difficulty of introducing the change, and it doesn't make the construct any less safe.

Or, expressed as a simpler rule: when a synchronized class is shared, its public fields become protected.
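
Roughly, with a made-up class (a sketch of the proposed behaviour, not of what any current compiler enforces):

synchronized class Counter
{
    int count;                      // public field, still allowed

    void increment() { ++count; }   // method body runs under the object's lock
}

void useLocal(Counter c)
{
    c.count = 1;        // fine: not a shared instance, existing code keeps working
}

void useShared(shared Counter c)
{
    c.increment();      // fine: goes through a synchronized method
    // c.count = 1;     // would be rejected: on a shared instance the public
                        // field behaves as if it were protected
}
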
March 16, 2014
On Sat, 2014-03-15 at 21:08 -0700, Andrei Alexandrescu wrote: […]
> I believe this is a bridge we do need to cross. One question is how we go about it: all at once, or gradually?

Given that it is a breaking change, the sooner the better and as much of it at once as possible. Perhaps changing the numbering system to lose the leading insignificant 0 as well.

If D is to get into a "no breaking changes during minor releases and bugfix releases" regime, then all the breaking changes to Dv2 need to be made now if they are to happen in Dv2 at all; otherwise they have to wait for Dv3.

Java is the example of the mess that happens when you commit to no breaking changes. It is hugely irritating for "early adopter" types. Once the "if it ain't broke don't touch it" types get involved, evolution becomes a Java-like activity :-(

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


March 16, 2014
On 2014-03-16 04:08:18 +0000, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> said:

> Fixing that has not gained focus until recently, when e.g. https://github.com/D-Programming-Language/dmd/pull/3067 came about.

Synchronized classes should be trashed.

The whole concept is very prone to mistakes that could cause deadlocks, and it offers no simple path to fix those errors once they're found. The concept encourages people to keep locks longer than needed to access the data. For one thing, that is bad for performance. It also makes callbacks happen while the lock is held, which has a potential for deadlock if the callback locks something else (through synchronized or other means).

Sure, there are safe ways to implement a synchronized class: you have to use it solely as a data holder that does nothing other than store a couple of variables and provide accessors to that data. Then you build the business logic -- calculations, callbacks, observers -- in a separate class that holds your synchronized class but does the work outside of the lock.
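
To make the shape of it concrete, something like this (simplified, with hypothetical names):

// The synchronized part: a dumb data holder, nothing but accessors.
synchronized class TemperatureStore
{
    private double value;

    double get() const { return value; }
    void set(double v) { value = v; }
}

// The business logic lives outside the lock.
class Thermostat
{
    private TemperatureStore store;
    private void delegate(double)[] observers;

    this() { store = new TemperatureStore; }

    void subscribe(void delegate(double) cb) { observers ~= cb; }

    void update(double reading)
    {
        store.set(reading);     // the store's lock is held only for this call
        foreach (cb; observers)
            cb(reading);        // callbacks run outside any lock, so a callback
                                // that takes another lock cannot deadlock here
    }
}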

The problem is that it's a very unnatural way to think of classes. Also, you have a lot of boilerplate code to write (a synchronized class plus accessors) for every piece of synchronized data you want to hold. I bet most people will not bother and won't realize that deadlocks could happen.

Is there any example of supposedly well-written synchronized classes in the wild that I could review looking for that problem?

-- 
Michel Fortin
michel.fortin@michelf.ca
http://michelf.ca

March 16, 2014
On 16.3.2014. 5:08, Andrei Alexandrescu wrote:
> […]
>
> I believe this is a bridge we do need to cross. One question is how we go about it: all at once, or gradually?
> 

+1 on fixing this!

March 16, 2014
Another breaking change could be to deprecate the now nearly useless opApply() (only std.array.array works with it) and replace it with nice D syntax sugar for defining an external forward iterator:

http://journal.stuffwithstuff.com/2013/01/13/iteration-inside-and-out/
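
For comparison, the external iteration D already has is the range protocol; the syntax sugar would presumably be a shorter way of writing something like this (illustrative sketch only):

struct Naturals
{
    int current;

    // The members that foreach and the std.range machinery look for.
    enum empty = false;                    // infinite range
    int front() const { return current; }
    void popFront() { ++current; }
    Naturals save() const { return this; } // makes it a forward range
}

void main()
{
    import std.range : take;
    import std.stdio : writeln;

    foreach (n; Naturals(0).take(5))
        writeln(n);   // 0 1 2 3 4
}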

Another smaller breaking change could be the removal of some usages of the comma operator.

Other breaking changes are related to changes to object.

Bye,
bearophile
March 16, 2014
On Sunday, 16 March 2014 at 04:08:15 UTC, Andrei Alexandrescu wrote:
> I believe this is a bridge we do need to cross. One question is how we go about it: all at once, or gradually?

IMHO, it would be better to do it all at once.
It would also be better to target a specific release for this change (e.g. 2.070).
Then we can issue a massive, red "This breaks the world" warning.
March 16, 2014
On 16.03.2014 13:44, bearophile wrote:
> Another breaking change could be to deprecate the now nearly useless opApply() (only std.array.array works with it) and replace it with nice D syntax sugar for defining an external forward iterator:
>
> http://journal.stuffwithstuff.com/2013/01/13/iteration-inside-and-out/
>
> […]

Wait what?

I find opApply really useful, especially because it passes a lambda in, and you have control over the program flow within opApply. This, for example, makes it possible to mark a container as "currently being iterated over", so you can assert when modifications are made to the container while it is being used for iteration. In fact, all of my custom container classes rely on opApply, so I do not agree at all that it is useless. Also, as far as I know, std.parallelism heavily relies on opApply to actually be able to parallelise foreach loops.
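
Roughly the pattern I mean (a stripped-down, hypothetical container):

struct IntSet
{
    private int[] items;
    private bool iterating;   // set while a foreach is walking the container

    void insert(int value)
    {
        assert(!iterating, "container modified during iteration");
        items ~= value;
    }

    // foreach (ref x; set) lowers to a call of opApply with the loop body
    // passed in as `dg`, so the container controls the whole iteration.
    int opApply(scope int delegate(ref int) dg)
    {
        iterating = true;
        scope (exit) iterating = false;

        foreach (ref item; items)
        {
            if (auto r = dg(item))
                return r;     // propagate break/early return from the loop body
        }
        return 0;
    }
}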

Could you please explain in a bit more detail why you think opApply should be deprecated?

Kind Regards
Benjamin Thaut
March 16, 2014
Benjamin Thaut:

> Could you please explain in a bit more detail why you think opApply should be deprecated?

To replace it with something better. But this discussion is off-topic for this thread, which is about "synchronized" or its alternatives. I didn't mean to derail the main thread.

Bye,
bearophile