May 05, 2014
The initial post in this thread focuses on a change that does not fix anything and implies silent semantic breakage. I am glad Andrei has reconsidered it, but it feels like the key problem is not that the proposal itself was bad, but that it does not even try to solve the real issue.

The real issue is deterministic destruction and/or deallocation of polymorphic objects. Class destructors are indeed bad for that, but I want some better replacement before prohibiting anything.

There is a very practical use case that highlights the existing problem; it was discussed a few months ago in the exception performance thread. To speed up throwing of non-const exceptions, it is desirable to use a pool of exception objects. However, there is currently no place to put the code that releases an object back into the pool. One cannot use the destructor, because it will never be called while the pool keeps a reference. One cannot use a reference-counting struct wrapper, because that breaks polymorphic catching of exceptions.
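A minimal sketch of the dilemma (the pool and the names here are hypothetical, purely illustrative): the destructor below never runs while the pool holds a reference, so the only place left for the "return to pool" code is a manual call at every catch site.

class PooledException : Exception
{
    this(string msg) { super(msg); }

    ~this()
    {
        // Never runs as long as `pool` below keeps a reference,
        // so it cannot serve as the "return to the pool" hook.
    }
}

// Simplistic free list (illustrative only, not thread-safe).
PooledException[] pool;

PooledException acquire(string msg)
{
    if (pool.length)
    {
        auto e = pool[$ - 1];
        pool = pool[0 .. $ - 1];
        e.msg = msg;          // reuse requires mutable (non-const) exceptions
        return e;
    }
    return new PooledException(msg);
}

void main()
{
    try
        throw acquire("boom");
    catch (PooledException e)
        pool ~= e;            // the release has to be done manually by every catcher
}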

Any good solution to be proposed should be capable of solving this problem.
May 05, 2014
On 05-May-2014 10:16, Arlen wrote:
> On Sunday, 4 May 2014 at 22:56:41 UTC, H. S. Teoh via
> Digitalmars-d wrote:
>> On Sat, May 03, 2014 at 10:48:47PM -0500, Caligo via Digitalmars-d wrote:
>> [...]
>>> Last but not least, currently there are two main ways for new features
>>> to make it into D/Phobos: you either have to belong to the inner
>>> circle, or have to represent some corporation that's doing something
>>> with D.
>>
>> I'm sorry, but this is patently false. I am neither in the inner circle,
>> nor do I represent any corporation, yet I've had many changes pulled
>> into Phobos (including brand new code).
>>
>> I can't say I'm perfectly happy with the D development process either,
>> but this kind of accusation is bordering on slander, and isn't helping
>> anything.
>>
>>
>> T
>
> There is a lot of truth in what Caligo has said, but I would word
> that part of it differently.
>
> A couple years ago I submitted std.rational, but it didn't go
> anywhere.  About a year later I discovered that someone else had
> done a similar thing, but it never made it into Phobos either.

The key to getting things done is persistence. Everybody works in their spare time; nobody aside from the author is going to push it through.

The process is not "I submit code and it finds its way into the standard library". It's rather about getting people to try your stuff first and listening to them. Then, with enough momentum and feedback, you go to the review queue. Then a review starts if nobody objects, it goes through the pass-or-postpone cycle, and finally you survive the mess as the pull request goes into Phobos proper.

Last but not least, the burden of getting something in is minor compared to tending the bugs and maintaining the code afterwards.

> Of course, it's not because we didn't belong to some "inner
> circle", but I think it has to do with the fact that D has a very
> poor development process.

What does that make of some other open-source projects, which still traffic in patches over email? :)

> The point being, something as simple
> as a Rational library shouldn't take years for it to become part
> of Phobos, specially when people are taking the time to do the
> work.

Look at it this way: the simpler something is, the harder it is to make the one true version of it. Everybody knows what it is and tries to put in some of their favorite sauce. The hardest things to push into Phobos are one-liners, even if they make a ton of things look better, more correct, and whatnot.

Anyhow, I agree that the Phobos development process (the one I know most about) is slow and imperfect, largely due to the informal nature of participation. Some reviews were lively and great; some passed in gloomy silence, with uncertain results and no good indication of the reason.


>
> --Arlen


-- 
Dmitry Olshansky
May 05, 2014
On 5/4/14, 11:16 PM, Arlen wrote:
> A couple years ago I submitted std.rational, but it didn't go
> anywhere.  About a year later I discovered that someone else had
> done a similar thing, but it never made it into Phobos either.
> Of course, it's not because we didn't belong to some "inner
> circle", but I think it has to do with the fact that D has a very
> poor development process.  The point being, something as simple
> as a Rational library shouldn't take years for it to become part
> of Phobos, specially when people are taking the time to do the
> work.

I looked into this (not sure to what extent it's representative of a pattern), and we probably could and should fix it. It looks like back in 2012 you did the right things (http://goo.gl/kbYQJM), but for whatever reason there was not enough response from the community.

Later on, Joseph Rushton Wakeling tried (http://goo.gl/XyQu3D) to put std.rational through the review process, but things got stuck at https://github.com/D-Programming-Language/phobos/pull/1616 over BigInt's support for the required traits.

I think the "needs to support BigInt" argument is not a blocker - we can release std.rational to only support built-in integers, and then adjust things later to expand support while keeping backward compatibility. I do think it's important that BigInt supports appropriate traits to be recognized as an integral-like type.

If you, Joseph, or both would want to put std.rational again through the review process I think it should get a fair shake. I do agree that a lot of persistence is needed.


Andrei

May 05, 2014
Andrei Alexandrescu:

> I think the "needs to support BigInt" argument is not a blocker - we can release std.rational to only support built-in integers, and then adjust things later to expand support while keeping backward compatibility. I do think it's important that BigInt supports appropriate traits to be recognized as an integral-like type.

BigInt support is necessary for usable rationals, but I agree this can't block their introduction into Phobos if the API is good and adaptable to the subsequent support of bigints.


> If you, Joseph, or both would want to put std.rational again through the review process I think it should get a fair shake. I do agree that a lot of persistence is needed.

Rationals are rather basic (important) things, so a little persistence is well spent here :-)

Bye,
bearophile
May 05, 2014
On Mon, May 05, 2014 at 03:55:12PM +0000, bearophile via Digitalmars-d wrote:
> Andrei Alexandrescu:
> 
> >I think the "needs to support BigInt" argument is not a blocker - we can release std.rational to only support built-in integers, and then adjust things later to expand support while keeping backward compatibility. I do think it's important that BigInt supports appropriate traits to be recognized as an integral-like type.
> 
> Bigints support is necessary for usable rationals, but I agree this can't block their introduction in Phobos if the API is good and adaptable to the successive support of bigints.

Yeah, rationals without bigints will overflow very easily, causing many usability problems in user code.
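For what it's worth, a tiny sketch of how quickly that happens (a deliberately naive rational type, not std.rational):

import std.stdio : writeln;

// Deliberately naive 32-bit rational, just to show the scale of the problem.
struct Rat
{
    int num, den;

    Rat opBinary(string op : "+")(Rat rhs) const
    {
        // No reduction: the products below wrap around silently.
        // A reducing implementation (divide by gcd each step) only
        // delays the overflow by a handful of terms.
        return Rat(num * rhs.den + rhs.num * den, den * rhs.den);
    }
}

void main()
{
    auto sum = Rat(0, 1);
    foreach (int i; 1 .. 15)
        sum = sum + Rat(1, i);
    // The unreduced denominator is 14! = 87_178_291_200 > int.max,
    // so both fields have long since wrapped around.
    writeln(sum.num, "/", sum.den);
}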


> >If you, Joseph, or both would want to put std.rational again through the review process I think it should get a fair shake. I do agree that a lot of persistence is needed.
> 
> Rationals are rather basic (important) things, so a little of
> persistence is well spent here :-)
[...]

I agree, and support pushing std.rational through the queue. So please don't give up; we need to get it in somehow. :)


T

-- 
I see that you JS got Bach.
May 05, 2014
On Mon, 5 May 2014 09:39:30 -0700, "H. S. Teoh via Digitalmars-d"
<digitalmars-d@puremagic.com> wrote:

> On Mon, May 05, 2014 at 03:55:12PM +0000, bearophile via Digitalmars-d wrote:
> > Andrei Alexandrescu:
> > 
> > >I think the "needs to support BigInt" argument is not a blocker - we can release std.rational to only support built-in integers, and then adjust things later to expand support while keeping backward compatibility. I do think it's important that BigInt supports appropriate traits to be recognized as an integral-like type.
> > 
> > Bigints support is necessary for usable rationals, but I agree this can't block their introduction in Phobos if the API is good and adaptable to the successive support of bigints.
> 
> Yeah, rationals without bigints will overflow very easily, causing many usability problems in user code.
> 
> 
> > >If you, Joseph, or both would want to put std.rational again through the review process I think it should get a fair shake. I do agree that a lot of persistence is needed.
> > 
> > Rationals are rather basic (important) things, so a little of
> > persistence is well spent here :-)
> [...]
> 
> I agree, and support pushing std.rational through the queue. So, please don't give up, we need it get it in somehow. :)
> 
> 
> T

That experimental package idea that was discussed months ago comes to my mind again. Add that thing as exp.rational and have people report bugs or shortcomings to the original author. When everyone interested finds it usable, it can move into Phobos proper after the formal review (which includes code style checks, unit tests, etc. that mere users don't take as seriously).
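To be concrete about the migration story (module names are assumed here, nothing official exists): during the experimental phase projects would write `import exp.rational;`, and after graduation a thin forwarding shim could keep that import working while they move to `import std.rational;`.

// exp/rational.d after graduation: forward to the reviewed module so code
// written against the experimental package keeps compiling during migration.
module exp.rational;

public import std.rational;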

As long as there is nothing even semi-official, it is tempting to write such a module from scratch in a quick&dirty fashion and ignore existing work. The experimental package makes it clear that this code is eventually going to be the official way, and home-brewed stuff won't have a future. Something in the standard library is much less likely to be reinvented. On the other hand, once a module is in Phobos proper, it is close to impossible to change the API to accommodate a new use case. That's why I think the most focused library testing and development can happen in the experimental phase of a module. The longer that phase lasts, the more people will have tried the module in their projects before formal review, which would make for much better-informed decisions. The original std.rational proposal could have been in active use for months by now!

-- 
Marco

May 05, 2014
On Monday, 5 May 2014 at 17:22:58 UTC, Marco Leise wrote:
> Am Mon, 5 May 2014 09:39:30 -0700
> schrieb "H. S. Teoh via Digitalmars-d"
> <digitalmars-d@puremagic.com>:
>
>> On Mon, May 05, 2014 at 03:55:12PM +0000, bearophile via Digitalmars-d wrote:
>> > Andrei Alexandrescu:
>> > 
>> > >I think the "needs to support BigInt" argument is not a blocker - we
>> > >can release std.rational to only support built-in integers, and then
>> > >adjust things later to expand support while keeping backward
>> > >compatibility. I do think it's important that BigInt supports
>> > >appropriate traits to be recognized as an integral-like type.
>> > 
>> > Bigints support is necessary for usable rationals, but I agree this
>> > can't block their introduction in Phobos if the API is good and
>> > adaptable to the successive support of bigints.
>> 
>> Yeah, rationals without bigints will overflow very easily, causing many
>> usability problems in user code.
>> 
>> 
>> > >If you, Joseph, or both would want to put std.rational again through
>> > >the review process I think it should get a fair shake. I do agree
>> > >that a lot of persistence is needed.
>> > 
>> > Rationals are rather basic (important) things, so a little of
>> > persistence is well spent here :-)
>> [...]
>> 
>> I agree, and support pushing std.rational through the queue. So, please
>> don't give up, we need it get it in somehow. :)
>> 
>> 
>> T
>
> That experimental package idea that was discussed months ago
> comes to my mind again. Add that thing as exp.rational and
> have people report bugs or shortcomings to the original
> author. When it seems to be usable by everyone interested it
> can move into Phobos proper after the formal review (that
> includes code style checks, unit tests etc. that mere users
> don't take as seriously).

And the same objections still remain.
May 05, 2014
On Monday, 5 May 2014 at 00:44:43 UTC, Caligo via Digitalmars-d wrote:
> On Sun, May 4, 2014 at 12:22 AM, Andrei Alexandrescu via Digitalmars-d <
> digitalmars-d@puremagic.com> wrote:
>> The on/off switch may be a nice idea in the abstract but is hardly the
>> perfect recipe to good language feature development; otherwise everybody
>> would be using it, and there's not overwhelming evidence to that. (I do
>> know it's been done a few times, such as the (in)famous "new scoping rule
>> of the for statement" for C++ which has been introduced as an option by
>> VC++.)
>>
>>
> No, it's nothing abstract, and it's very practical and useful.  Rust has
> such a thing, #![feature(X,Y,Z)].  So does Haskell, with {-# feature #-}.
>  Even Python has __future__, and many others.

Well, Python's __future__ is not exactly that: it's for introducing changes that would impact existing codebases...

It's some sort of extreme care taken not to break anything out there.

/Paolo
May 06, 2014
On 4 May 2014 19:00, via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Saturday, 3 May 2014 at 11:12:56 UTC, Michel Fortin wrote:
>>
>> On 2014-05-01 17:35:36 +0000, "Marc Schütz" <schuetzm@gmx.net> said:
>>
>>> Maybe the language should have some way to distinguish between GC-managed and manually-managed objects, preferably in the type system. Then it could be statically checked whether an object is supposed to be GC-managed, and consequentially shouldn't have a destructor.
>>
>>
>> Or turn the rule on its head: make it so having a destructor makes the heap memory block reference counted. With this adding a destructor always cause deterministic destruction.
>>
>> The compiler knows statically whether a struct has a destructor. For a class you need a runtime trick because the root object which can be either. Use a virtual call or a magic value in the reference count field to handle the reference count management. You also need a way to tag a class to be guarantied it has no derived class with a destructor (to provide a static proof for the compiler it can omit ARC code), perhaps @disable ~this().
>>
>> Then remains the problem of cycles. It could be a hard error if the destructor is @safe (error thrown when the GC collects it). The destructor could be allowed to run (in any thread) if the destructor is @system or @trusted.
>>
>> The interesting thing with this is that the current D semantics are preserved, destructors become deterministic (except in the presence of cycles, which the GC will detect for you), and if you're manipulating pointers to pure memory (memory blocks having no destructor) there's no ARC overhead. And finally, no new pointer attributes; Walter will like this last one.
>
>
> This is certainly also an interesting idea, but I suspect it is bound to fail, simply because it involves ARC. Reference counting always makes things so much more complicated... See for example the cycles problem you mentioned: If you need a GC for that, you cannot guarantee that the objects will be collected, which was the reason to introduce ARC in the first place.

So specify that improper weak-reference attribution may lead to interference with the proper execution of destructors. People generally understand this, and at least they'd have such a tool to make their code behave correctly.
Perhaps even have rules under which things with destructors produce static errors if they are used in a way that may create circular references and no effective weak attribution is detected by the compiler (if such a thing is statically possible?).
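Purely as an illustration of the kind of cycle being talked about (the @weak attribute below is hypothetical; it does not exist in D):

class Node
{
    Node next;          // strong reference: a two-node ring never drops to zero refs
    // @weak Node prev; // hypothetical weak attribution that would break the cycle;
                        // per the suggestion above, its absence in a type with a
                        // destructor could even be made a static error

    ~this() { /* the deterministic cleanup we want guaranteed */ }
}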

> Then there are the problems with shared vs. thread-local RC (including
> casting between the two),

The problem is exactly the same one 'shared' already has. What makes it any different?
shared <-> not-shared requires blunt casting, and the same can apply to shared RC. 'shared' implies RC access must use atomics, otherwise not; I don't imagine any distinction in the data structure?
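A minimal sketch of that point, assuming a hypothetical ARC runtime helper: the data layout is identical, only the access discipline changes with `shared`.

import core.atomic : atomicOp;

// Hypothetical refcount block; same layout for shared and unshared use.
struct RcBlock
{
    size_t count;

    void retain()        { ++count; }                 // thread-local: plain increment
    void retain() shared { atomicOp!"+="(count, 1); } // shared: atomic increment
}

void main()
{
    RcBlock local;
    local.retain();          // no atomics needed

    shared RcBlock block;
    block.retain();          // atomic path
}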

> and arrays/slices of RC objects.

Slices need to know their offset (or base pointer), or carry an explicit RC pointer. Either way, I don't see how slices are a particularly troublesome case.
12-byte slices on 32-bit targets would need an extra field; 16 bytes should still be sufficient on 64-bit, considering that 64-bit pointers are effectively only 40-48 bits wide, which means there are loads of spare bits in the pointer and in the slice length field; that should be plenty to stash an offset.
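Roughly what that layout argument looks like (purely a sketch of the idea; the exact bit split is an assumption, not a proposal):

// 64-bit sketch: a slice stays 16 bytes if the offset back to the
// refcounted allocation header is stashed in bits the current layout
// doesn't use.
struct RcSlice(T)
{
    ulong lengthAndOffset; // e.g. low 40 bits: length, high 24 bits: offset to header
    T* ptr;                // canonical virtual addresses use ~48 bits in practice

    size_t length() const { return cast(size_t)(lengthAndOffset & ((1UL << 40) - 1)); }
    size_t offset() const { return cast(size_t)(lengthAndOffset >> 40); }
}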

> And, of course,
> Walter doesn't like it ;-)

True. But I'm still waiting to see another even theoretically workable solution.

May 06, 2014
On 5 May 2014 14:09, Andrei Alexandrescu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 5/4/14, 5:38 PM, Caligo via Digitalmars-d wrote:
>>
>> On Sun, May 4, 2014 at 12:22 AM, Andrei Alexandrescu via Digitalmars-d <digitalmars-d@puremagic.com <mailto:digitalmars-d@puremagic.com>> wrote: Here is an idea:  include new features in DMD/Phobos as soon as they arrive, and make them part of the official binary release so that the average D user can try them out.  Make sure they are marked as unstable, and put a on/off switch on them (something like what Rust/Haskell have; not a compiler switch).  If the feature receives no implementation bug reports for X consecutive days AND no design bug reports for Y consecutive days, then the feature is marked stable and officially becomes part of DMD/Phobos.  The X and the Y can be decreased as D's number of users increases over the years.  The whole idea is very much like farming: you are planting seeds.  As the plants grow, some of them will not survive, others will be destroyed, and some of them will take years to grow.  In any case, you harvest the fruits when they are ready.
>>
>>   Here are good starting values for X and Y:
>> X = 90 days
>> Y = 180 days
>
>
> This is nice, but on the face of it it's just this: an idea on how other people should do things on their free time. I'd have difficulty convincing people they should work that way. The kind of ideas that I noticed are successful are those that actually carry the work through and serve as good examples to follow.

There are imperfect but useful pull requests that have been hanging around for years, extern(Obj-C) for instance, which may be useful as an experimental feature to many users even if it's not ready for inclusion in the official feature list and support commitments.
I suspect its (experimental) presence would stimulate further contribution towards D on iOS, for instance; it may be an enabler for other potential contributors.

What about AST macros? It seems to me that this is never going to be explored, and there are competing proposals, but I wonder if there's room for experimental implementations that anyone in the community can toy with.

UDAs are super-useful, but they're still lacking the thing that would really set them off: the ability to introduce additional boilerplate code at the site of the attribute.
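Roughly what that means in practice (hypothetical names): today a UDA is inert metadata, so the generated code still has to be pulled in with an explicit mixin; the missing piece is having the attribute itself trigger that step.

struct Serializable {}          // plain UDA type: attaches metadata, generates nothing

mixin template GenSerialize()
{
    // A real generator would walk the members via
    // __traits(allMembers, typeof(this)) and build real output.
    string toJson() const { return "{}"; }
}

struct Point
{
    @Serializable int x, y;
    mixin GenSerialize;         // the extra step attribute-driven codegen would remove
}

unittest
{
    Point p;
    assert(p.toJson() == "{}");
}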

I reckon there's a good chance that creating a proper platform for experimental features would also benefit community building and increase contribution in general. If new contributors can get in, have some fun, and start trying out their ideas while being able to share them with the community for feedback, without fear that they'll just be shot down and denied after all their work... are they not more likely to actually make a contribution in the first place? Once they've made a single contribution of any sort, are they then more likely to continue making other contributions in the future (having now taken the time to acclimatise themselves to the codebase)?

I personally feel the perceived unlikelihood of any experimental contribution being accepted is a massive deterrent to making compiler contributions in the first place for anyone other than the most serious OSS advocates. I have no prior experience with OSS, and it's certainly a factor that's kept me at arm's length.