July 11, 2014
On Tuesday, 8 July 2014 at 23:43:47 UTC, Meta wrote:
>> Is the code public already ?
>
> https://github.com/andralex/std_allocator

Maybe Andrei should remove this outdated version to reduce confusion, if nobody uses it, that is :)

/Per
July 11, 2014
On Thursday, 10 July 2014 at 21:46:50 UTC, Ola Fosheim Grøstad wrote:
> On Thursday, 10 July 2014 at 21:40:15 UTC, Sean Kelly wrote:
>> :-)  To compensate, I use the same "virtual function" literally
>> everywhere.  Same icon photo too.
>
> That's Go…

And Go is awesome. I could change it to my face, but since that's on Gravatar, it would show up all over the place, and I don't really want that.
July 11, 2014
On Thursday, 10 July 2014 at 20:31:53 UTC, Walter Bright wrote:
>
> I reiterate my complaint that people use "virtual functions" for their github handles. There's no reason to. Who knows that 9il is actually Ilya Yaroshenko? Took me 3 virtual function dispatches to find that out!
>
So, final by default in D? ;)

-Wyatt
July 11, 2014
On Fri, Jul 11, 2014 at 01:14:37AM +0000, Meta via Digitalmars-d wrote:
> On Friday, 11 July 2014 at 01:08:59 UTC, Andrei Alexandrescu wrote:
> >On 7/10/14, 2:25 PM, Walter Bright wrote:
> >>On 7/10/2014 1:49 PM, Robert Schadek via Digitalmars-d wrote:
> >>>https://github.com/D-Programming-Language/phobos/pull/1977 indexOfNeither
> >>
> >>I want to defer this to Andrei.
> >
> >Merged. -- Andrei
> 
> For any other aspiring lieutenants out there, this[0] has been sitting around for 5 months now.
> 
> [0]https://github.com/D-Programming-Language/phobos/pull/1965#issuecomment-40362545

Not that I'm a lieutenant or anything, but I did add some comments.


T

-- 
Some days you win; most days you lose.
July 11, 2014
On 7/11/2014 4:44 AM, Nick Treleaven wrote:
> On 10/07/2014 19:03, Walter Bright wrote:
>> On 7/10/2014 9:00 AM, Nick Treleaven wrote:
>>> On 09/07/2014 20:55, Walter Bright wrote:
>>>>    Unique!(int*) u = new int;   // must work
>>>
>>> That works, it's spelled:
>>>
>>> Unique!int u = new int;
>>
>> I'm uncomfortable with that design, as T can't be a class ref or a
>> dynamic array.
>
> It does currently work with class references, but not dynamic arrays:
>
>      Unique!Object u = new Object;
>
> It could be adjusted so that all non-value types are treated likewise:
>
>      Unique!(int[]) v = [1, 3, 2];
>
>>>>    int* p = new int;
>>>>    Unique!(int*) u = p;         // must fail
>>>
>>> The existing design actually allows that, but nulls p:
>>  > [...]
>>> If there are aliases of p before u is constructed, then u is not the
>>> sole owner
>>> of the reference (mentioned in the docs):
>>> http://dlang.org/phobos-prerelease/std_typecons.html#.Unique
>>
>> Exactly. It is not checkable and not good enough.
>
> In that case we'd need to deprecate Unique.this(ref RefT p) then.
>
>> Note that as of 2.066 the compiler tests for uniqueness of an expression
>> by seeing if it can be implicitly cast to immutable. It may be possible
>> to do that with Unique without needing compiler modifications.
>
> Current Unique has a non-ref constructor that only takes rvalues. Isn't that
> good enough to detect unique expressions?

No, see the examples I gave earlier.
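For comparison, the "sole owner" property `Unique` tries to enforce is what Rust's `Box` gives by construction: ownership moves, and the old binding becomes unusable, so no alias like `p` can survive. A minimal sketch of the analogy (not of D's `Unique` itself; the function name is made up for illustration):

```rust
// Sketch: move semantics enforce unique ownership statically.
// A Box<i32> plays the role of Unique!(int*): a uniquely-owned heap value.
fn take_unique(b: Box<i32>) -> i32 {
    *b // consumes the box; the caller's binding is invalidated by the move
}

fn main() {
    let u = Box::new(5); // analogous to: Unique!(int*) u = new int;
    let x = take_unique(u);
    // Using `u` here would be a compile error: use of moved value.
    // The "int* p = new int; Unique!(int*) u = p;" case cannot even be
    // written, because a raw alias to the allocation never exists.
    assert_eq!(x, 5);
}
```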

July 11, 2014
On Thursday, 10 July 2014 at 22:53:18 UTC, Walter Bright wrote:
> On 7/10/2014 1:57 PM, "Marc Schütz" <schuetzm@gmx.net> wrote:
>> That leaves relatively few cases
>
> Right, and do those cases actually matter?
>

Besides what I mentioned, there is also slicing and ranges (not only of arrays). These are more likely to be implemented as templates, though.

> I'm a big believer in attribute inference, because explicit attributes are generally a failure with users.

The average end user probably doesn't need explicit annotations a lot, but they need to be there for library authors. I don't think it's possible to avoid annotations completely and still get the same functionality just by inferring them internally, if that is what you're aiming at...
July 11, 2014
On Friday, 11 July 2014 at 06:49:26 UTC, deadalnix wrote:
> On Thursday, 10 July 2014 at 20:10:38 UTC, Marc Schütz wrote:
>> Instead of lifetime intersections with `&` (I believe Timon proposed that in the original thread), simply specify multiple "owners": `scope!(a, b)`. This works, because as far as I can see there is no need for lifetime unions, only intersections.
>>
>
> There are unions.
>
> class A {
>    scope!s1(A) a;
> }
>
> scope!s2(A) b;
>
> b.a; // <= this has union lifetime of s1 and s2.

How so? `s2` must not extend after `s1`, because otherwise it would be illegal to store a `scope!s1` value in `scope!s2`. From the other side, `s1` must not start after `s2`.

This means that the lifetime of `b.a` is `s1`, just as it has been annotated, no matter what the lifetime of `b` is. In fact, because `s1` can be longer than `s2`, a copy of `b.a` may safely be kept around after `b` is deleted (but of course not longer than `s1`).
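The constraint described above can be sketched in Rust, where it holds for the same reason: a reference of lifetime `'s2` to an aggregate holding an `'s1` member is only well-formed if `'s1` outlives `'s2`, and the member access keeps lifetime `'s1` regardless. The names `A`, `get_a`, `s1`, `s2` mirror the example in the quote:

```rust
// Sketch: b has lifetime 's2, b.a has lifetime 's1, and 's1 must
// outlive 's2 (implied by the type &'s2 A<'s1>), so no union arises.
struct A<'s1> {
    a: &'s1 i32,
}

fn get_a<'s1, 's2>(b: &'s2 A<'s1>) -> &'s1 i32 {
    b.a // tied to 's1, not to 's2: it may outlive *b
}

fn main() {
    let x = 42; // long-lived owner ('s1)
    let r;
    {
        let b = A { a: &x }; // short-lived aggregate ('s2)
        r = get_a(&b);
    } // b is gone, but r is still valid: its lifetime is 's1
    assert_eq!(*r, 42);
}
```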
July 11, 2014
On Thu, Jul 10, 2014 at 08:10:36PM +0000, via Digitalmars-d wrote:
> I've been working on a proposal for ownership and borrowing for some time, and I seem to have come to a very similar result as you have. It is not really ready, because I keep discovering weaknesses, and I can only work on it in my free time, but I'm glad this topic is finally being addressed. I'll write about what I have now:
> 
> First of all, as you've already stated, scope needs to be a type modifier (currently it's a storage class, I think). This has consequences for the syntax of any parameters it takes, because for type modifiers there need to be type constructors. This means the `scope(...)` syntax is out. I suggest using template instantiation syntax instead: `scope!(...)`, which can be freely combined with the type constructor syntax: `scope!lifetime(MyClass)`.
> 
> Explicit lifetimes are indeed necessary, but dedicated identifiers for them are not. Instead, a lifetime can directly refer to the symbol of the "owner". Example:
> 
>     int[100] buffer;
>     scope!buffer(int[]) slice;

Hmm. Seems that you're addressing a somewhat wider scope than what I had in mind. I was thinking mainly of 'scope' as "does not escape the body of this block", but you're talking about a more general case of being able to specify explicit lifetimes.

[...]
> A problem that has been discussed in a few places is safely returning a slice or a reference to an input parameter. This can be solved nicely:
> 
>     scope!haystack(string) findSubstring(
>         scope string haystack,
>         scope string needle
>     );
> 
> Inside `findSubstring`, the compiler can make sure that no references to `haystack` or `needle` can escape (an unqualified `scope` can be used here, no need to specify an "owner"), but it will allow returning a slice from it, because the signature says: "The return value will not live longer than the parameter `haystack`."

This does seem to be quite a compelling argument for explicit scopes. It does make it more complex to implement, though.
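The quoted `findSubstring` signature maps almost one-to-one onto Rust's named lifetimes, which may help gauge the implementation cost: tying the return value to `haystack` only is exactly what the `'h` annotation below expresses. A sketch under that analogy (function name chosen to match the quote):

```rust
// Sketch: the return value borrows from `haystack` only, never from
// `needle`, mirroring "scope!haystack(string) findSubstring(...)".
fn find_substring<'h>(haystack: &'h str, needle: &str) -> &'h str {
    match haystack.find(needle) {
        Some(i) => &haystack[i..i + needle.len()],
        None => "",
    }
}

fn main() {
    let haystack = String::from("the quick brown fox");
    let result;
    {
        let needle = String::from("brown"); // shorter-lived than haystack
        result = find_substring(&haystack, &needle);
    } // needle is dropped; result stays valid because it borrows haystack
    assert_eq!(result, "brown");
}
```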


[...]
> An interesting application is the old `byLine` problem, where the function keeps an internal buffer which is reused for every line that is read, but a slice into it is returned. When a user naively stores these slices in an array, she will find that all of them have the same content, because they point to the same buffer. See how this is avoided with `scope!(const ...)`:

This seems to be something else now. I'll have to think about this a bit more, but my preliminary thought is that this adds yet another level of complexity to 'scope', which is not necessarily a bad thing, but we might want to start out with something simpler first.
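The `byLine` hazard in the quote can be made concrete: when the producer reuses one buffer, a naively stored "line" is just another view of that buffer. The sketch below shows the safe pattern (copy before storing) and notes in a comment what a borrow-based checker, like the proposed `scope!(const ...)`, would reject. The helper name is made up for illustration:

```rust
// Sketch: one reused buffer per line; stored lines must be copies,
// otherwise they would all alias the same (overwritten) memory.
fn collect_lines(input: &str) -> Vec<String> {
    let mut buffer = String::new();
    let mut lines = Vec::new();
    for line in input.lines() {
        buffer.clear();          // the buffer is reused for every line...
        buffer.push_str(line);
        lines.push(buffer.clone()); // ...so copy before storing.
        // Storing a reference into `buffer` instead would not compile:
        // the borrow checker rejects holding it across the next clear().
    }
    lines
}

fn main() {
    assert_eq!(collect_lines("one\ntwo\nthree"), ["one", "two", "three"]);
}
```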


[...]
> An open question is whether there needs to be an explicit designation of GC'd values (for example by `scope!static` or `scope!GC`), to say that a given value lives as long as it's needed (or "forever").

Shouldn't unqualified values already serve this purpose?


[...]
> Now, for the problems:
> 
> Obviously, there is quite a bit of complexity involved. I can imagine that inferring the scope for templates (which is essential, just as for const and the other type modifiers) can be complicated.

I'm thinking of aiming for a design where the compiler can infer all lifetimes automatically, and the user doesn't have to. I'm not sure if this is possible, but based on what Walter said, it would be best if we infer as much as possible, since users are lazy and are unlikely to be thrilled at the idea of having to write additional annotations on their types.

My original proposal was aimed at this, that's why I didn't put in explicit lifetimes. I was hoping to find a way to define things such that the lifetime is unambiguous from the context in which 'scope' is used, so that users don't ever have to write anything more than that. This also makes the compiler's life easier, since we don't have to keep track of who owns what, and can just compute the lifetime from the surrounding context. This may require sacrificing some precision in lifetimes, but if it helps simplify things while still giving adequate functionality, I think it's a good compromise.


[...]
> I also have a few ideas about owned types and move semantics, but this is mostly independent from borrowing (although, of course, it integrates nicely with it). So, that's it, for now. Sorry for the long text. Thoughts?

It seems that you're addressing the full borrowed reference/pointer problem, which is certainly necessary. But I was thinking more in terms of the baseline functionality -- what is the simplest design for 'scope' that still gives useful semantics and covers most of the cases? I know there are some tricky corner cases, but I'm wondering if we can somehow find an easy solution for the easy parts (presumably the more common parts), while still allowing for a way to deal with the hard parts.

At least for now, I'm thinking in the direction of finding something with simple semantics that, at the same time, produces complex (interesting) effects when composed, that we can use to solve the borrowed pointer problem.


T

-- 
Computers are like a jungle: they have monitor lizards, rams, mice, c-moss, binary trees... and bugs.
July 11, 2014
On Fri, Jul 11, 2014 at 06:41:47AM +0000, deadalnix via Digitalmars-d wrote: [...]
> On Thursday, 10 July 2014 at 17:04:24 UTC, H. S. Teoh via Digitalmars-d wrote:
> >   - For function parameters, this lifetime is the scope of the
> >   function body.
> 
> Some kind of inout scope seems less limiting. The caller knows the scope; the callee only knows that it is greater than its own. This is important, as local variables in the outer scope of the function have a more restricted scope and must not be assignable.
> 
> Each parameter has a DIFFERENT lifetime, but it is impossible to tell which one is larger from the callee's perspective. Thus you need a more complex lifetime definition than greater/smaller lifetimes. Yup, when you get into the details, quantum effects start to arise.

Looks like we might need to use explicit lifetimes for this. Unless there's a way to simplify it -- i.e., we don't always need exact lifetimes, as long as the estimated lifetime is never larger than the actual lifetime. From the perspective of the callee, for example, if the lifetimes of both parameters are longer than it can see (i.e., longer than the lifetimes of its parent lexical scopes) then it doesn't matter what the exact lifetimes are, it can be treated as an unknown value with a lower bound, as long as it never tries to assign anything with lifetime >= that lower bound. The caller already knows what these lifetimes are from outside, but the function may not need to know.

At least, I'm hoping this kind of simplifications will still allow us to do what we need, while reducing complexity.
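This "lower bound instead of exact lifetimes" idea is how Rust's outlives bounds work, which suggests the simplification is viable: inside the callee, `'a` and `'b` stay abstract and unordered, and declaring `'b: 'a` (b outlives a) is the only fact the callee needs. A sketch under that analogy (function name is hypothetical):

```rust
// Sketch: the callee never learns the concrete lifetimes; the declared
// bound 'b: 'a is the "lower bound" that makes returning y legal.
fn pick_longer_lived<'a, 'b: 'a>(x: &'a i32, y: &'b i32) -> &'a i32 {
    if *y > *x { y } else { x } // y coerces to &'a i32 because 'b: 'a
}

fn main() {
    let big = 10;
    let small = 3;
    // The caller knows both concrete lifetimes; the callee only used
    // the bound, yet the result is sound for the caller.
    assert_eq!(*pick_longer_lived(&small, &big), 10);
}
```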


> >>   - An unscoped variable is regarded to have infinite lifetime.
> >>
> 
> So it is not unscoped, but I'm simply nitpicking on that one.

Well, yes, the reason I wrote that line was to make the definition uniform across both scoped and unscoped types. :)


> >>      - Since a scoped return type has its lifetime as part of its
> >>        type, the type system ensures that scoped values never
> >>        escape their lifetime. For example, if we are sneaky and
> >>        return a pointer to an inner function, the type system will
> >>        prevent leakage of the
> 
> This gets quite tricky to define when you can have both `this` and a context pointer. Once again, you get into a situation where you have two non-sortable lifetimes to handle. And worse, you'll be creating values out of that mess :)

Is it possible to simplify this by taking the minimum of the two lifetimes (i.e. intersection)? Or will that run into unsolvable cases?
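For the simple cases, Rust answers this with "yes": when both inputs share one lifetime parameter, the compiler infers it as (roughly) the intersection of the two actual lifetimes, so the result is valid only while *both* inputs are. A sketch of that taking-the-minimum behavior:

```rust
// Sketch: 'a is inferred at each call site as the intersection of the
// two argument lifetimes; the result cannot outlive either input.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let s1 = String::from("context pointer");
    let s2 = String::from("this");
    // Valid here because both s1 and s2 are still alive; the result's
    // lifetime is bounded by the shorter-lived of the two.
    assert_eq!(longest(&s1, &s2), "context pointer");
}
```

Whether this stays solvable once the two lifetimes come from `this` and a closure context, as in the quoted case, is exactly the open question.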


> >>- Aggregates:
> >>
> >>   - It's turtles all the way down: members of scoped aggregates
> >>     also have scoped type, with lifetime inherited from the parent
> >>     aggregate. In other words, the lifetime of the aggregate is
> >>     transitive to the lifetime of its members.
> 
> Yes, the rule for access is transitivity. But the rule for writes is "antitransitive". It gets tricky when you consider that a member variable may have to be able to "extend" the lifetime of one of its members.
> 
> I.e. a member of lifetime B in a value of lifetime A sees its lifetime
> becoming max(A, B). Considering lifetimes aren't always sortable (as
> shown in the two examples above), this is tricky.
> 
> This basically means that you have to define what happens for non-sortable lifetimes, and what happens for unions/intersections of lifetimes. As you can see, I've banged my head quite a lot on that one. I'm fairly confident that this is solvable, but it definitely requires a lot of effort to iron out all the details.

Along these lines, I'm wondering if "turtles all the way down" is the wrong way of looking at it. Consider, for example, an n-level deep nesting of aggregates. If obj.nest1 is const, then obj.nest1.nest2.x must also be const, because otherwise we break the const system. So const is transitive downwards. But if obj.nest1 is a scoped reference type with lifetime L1, that doesn't necessarily mean obj.nest1.y only has lifetime L1. It may be a pointer that points to an infinite lifetime object, for example, so it's not a problem that the pointer goes out of scope before the object pointed to. OTOH, if obj.nest1 has scope L1, then obj itself cannot have a longer lifetime than L1, otherwise we may access obj.nest1 after its lifetime is over. So the lifetime of obj.nest1 must propagate *upwards* (or outwards).

This means that scope is transitive outwards, which is the opposite of const/immutable, which are transitive inwards! So it's not turtles all the way down, but "pigeons all the way up". :-P
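The "pigeons all the way up" observation can be sketched in Rust, where both halves hold: a member's lifetime constrains the container (outward propagation), while a member may freely point at something longer-lived than the container (no inward transitivity). The names `Obj`, `Nest1`, `x`, `y` mirror the paragraph above and are otherwise made up:

```rust
// Sketch: outward (not inward) transitivity of lifetimes.
struct Nest1<'l1> {
    y: &'static str, // a member may outlive the aggregate: fine
    x: &'l1 i32,     // a member with lifetime 'l1...
}

struct Obj<'l1> {
    nest1: Nest1<'l1>, // ...forces Obj itself not to outlive 'l1
}

fn main() {
    let value = 5; // the 'l1 owner
    let obj = Obj { nest1: Nest1 { y: "forever", x: &value } };
    // obj cannot escape to a scope that outlives `value`; that
    // constraint propagated *upwards* from obj.nest1.x. Meanwhile
    // obj.nest1.y points at 'static data, unconstrained by obj.
    assert_eq!(*obj.nest1.x, 5);
    assert_eq!(obj.nest1.y, "forever");
}
```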


> >>- Passing parameters: since unscoped values are regarded to have
> >>  infinite lifetime, it's OK to pass unscoped values into scoped
> >>  function parameters: it's a narrowing of lifetime of the original
> >>  value, which is allowed. (What's not allowed is expanding the
> >>  lifetime of a scoped value.)
> >>
> 
> Get rid of the whole concept of unscoped, and you get rid of a whole class of redundant definitions that need to be made.

OK, let's just say unscoped == scope with infinite lifetime. :)
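That identity is precisely how Rust models it: "unscoped" data is just data with the maximal lifetime `'static`, so one set of rules covers scoped and unscoped values alike. A small sketch (the function name is hypothetical):

```rust
// Sketch: 'static is the "infinite lifetime"; it satisfies any
// lifetime requirement, so unscoped values need no special rules.
fn needs_some_lifetime<'a>(s: &'a str) -> usize {
    s.len() // accepts a reference of any lifetime, including 'static
}

fn main() {
    let forever: &'static str = "infinite lifetime"; // "unscoped" value
    let owned = String::from("scoped"); // block-scoped value
    assert_eq!(needs_some_lifetime(forever), 17);
    assert_eq!(needs_some_lifetime(&owned), 6);
}
```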


> >>I'm sure there are plenty of holes in this proposal, so destroy away.  ;-)
> >>
> 
> Needs some more ironing. But I'm happy to see that some people came up with proposals that are close to what I had in mind.
[...]

Maybe what we should do is have everyone post their current (probably incomplete) drafts of what scope should do, so that we have everything on the table and can talk about what should be kept, what should be discarded, etc. It may be that the best design is not what any one of us has right now, but some combination of multiple current proposals.


T

-- 
Be in denial for long enough, and one day you'll deny yourself of things you wish you hadn't.
July 11, 2014
On Thursday, 10 July 2014 at 21:29:30 UTC, Dmitry Olshansky wrote:
>
> Not digging into the whole thread.
>
> 9. Extensible I/O package to replace our monolithic std.stdio sitting awkwardly on top of C's legacy. That would imply integrating it with sockets/pipes and filters/codecs (compression, transcoding and the like) as well.
>
> I was looking into it with Steven, but currently have little spare time (and it seems he does too). I'd gladly guide a capable recruit who wants to join the effort.

A short write-up of where Steven left off on his work would probably help kickstart this.

I remember you had some great ideas for handling buffering, but it changed a few times during that thread and I don't remember what the final idea was.