May 17, 2019
On 5/17/2019 11:45 AM, Jonathan M Davis wrote:
> It is my understanding that DIP 1000 really doesn't track lifetimes at all.
> It just ensures that no references to the data escape. So, you can't do
> something like take a scope variable and put any references to it or what it
> refers to in a container. Honestly, from what I've seen, what you can
> ultimately do with scope is pretty limited. It definitely helps in simple
> cases, but it quickly gets to the point that it's unable to be used in more
> complex cases - at least not without casting and needing to use @trusted.
> So, it's an improvement for some kinds of code, but I suspect that in
> general, it's just going to be more annoying than it's worth. Time will tell
> though.

Dip1000 is key to enabling containers to control access to the pointers to their innards that they expose.
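
Roughly, a minimal sketch of the idea (assuming -preview=dip1000; the `Stack` type and its members are invented for illustration):

```d
@safe:

struct Stack
{
    private int[8] store;
    private size_t len;

    void push(int x) { store[len++] = x; }

    // `return` ties the lifetime of the exposed ref to the container.
    ref int top() return { return store[len - 1]; }
}

void main()
{
    Stack s;
    s.push(42);
    s.top() = 1; // fine: the access to the innards stays within this expression
    // But `static int* p; p = &s.top();` is rejected: the pointer to the
    // innards would escape the container.
}
```
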
May 17, 2019
On Friday, 17 May 2019 at 17:03:51 UTC, Meta wrote:

> If you look at `main` above, `rawData` has the same lifetime as the `dataRange` struct returned from `makeDataRange` and the queue returned from `copyToQueue`. True, there is some traditionally unsafe stuff happening in between; however, I thought that the point of adding all these annotations is to tell the compiler how the lifetimes of these objects propagate up and down the call stack, so that it can check that there will be no memory corruption. I'm not doing anything here that will result in a pointer to an expired stack frame, or otherwise cause memory corruption or use after free, or anything like that (*unless* I allow either `dataRange` or `result` to escape from the main function - which dip1000 correctly disallows).

I don't think it does because `Queue!(T).store` has infinite lifetime beyond that of even `main`, at least as far as the compiler is concerned.  The compiler doesn't have enough information to know that `store` is tied to the lifetime of `Queue!(T)` (a.k.a. `rawData`), and maybe that's a missing language feature.  Maybe we should be allowed to declare aggregate fields as `scope` to convey that, but the compiler currently disallows it.
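
For illustration, this is the kind of field declaration I have in mind, which the compiler rejects today (made-up minimal example):

```d
struct Queue(T)
{
    scope T[] store; // rejected: fields cannot currently be declared `scope`
}
```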

loosely related:  https://issues.dlang.org/show_bug.cgi?id=18788#c7

Mike


May 17, 2019
On Friday, 17 May 2019 at 20:59:43 UTC, Mike Franklin wrote:

> I don't think it does because `Queue!(T).store` has infinite lifetime beyond that of even `main`, at least as far as the compiler is concerned.  The compiler doesn't have enough information to know that `store` is tied to the lifetime of `Queue!(T)` (a.k.a. `rawData`), and maybe that's a missing language feature.  Maybe we should be allowed to declare aggregate fields as `scope` to convey that, but the compiler currently disallows it.

Or we build in some way for slices to know their lifetime relative to the source array from which they were created.  But I'm not sure how that would work.

Mike


May 17, 2019
On Friday, 17 May 2019 at 20:59:43 UTC, Mike Franklin wrote:
> On Friday, 17 May 2019 at 17:03:51 UTC, Meta wrote:
>
>> If you look at `main` above, `rawData` has the same lifetime as the `dataRange` struct returned from `makeDataRange` and the queue returned from `copyToQueue`. True, there is some traditionally unsafe stuff happening in between; however, I thought that the point of adding all these annotations is to tell the compiler how the lifetimes of these objects propagate up and down the call stack, so that it can check that there will be no memory corruption. I'm not doing anything here that will result in a pointer to an expired stack frame, or otherwise cause memory corruption or use after free, or anything like that (*unless* I allow either `dataRange` or `result` to escape from the main function - which dip1000 correctly disallows).
>
> I don't think it does because `Queue!(T).store` has infinite lifetime beyond that of even `main`, at least as far as the compiler is concerned.

I see what you're getting at. The compiler sees a slice type (i.e., `Data[]`), knows that it's GC-backed and thus has infinite lifetime, and concludes "the data you're trying to put in the store has too short of a lifetime". That makes sense, but slices don't necessarily have to be backed by the GC, so that seems like a faulty heuristic to me and possibly a vector for bugs.
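
For example (a made-up snippet, compiled with -preview=dip1000), a slice can just as well point at the stack:

```d
@safe void f()
{
    int[4] buf;               // stack memory, no GC involved
    scope int[] view = buf[]; // same static type `int[]` as a GC slice
    // `view` obviously does not have infinite lifetime, so
    // "it's a T[], therefore infinite lifetime" can't hold in general.
}
```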

> The compiler doesn't have enough information to know that `store` is tied to the lifetime of `Queue!(T)` (a.k.a. `rawData`), and maybe that's a missing language feature.

According to the DIP, "from a lifetime analysis viewpoint, a struct is considered a juxtaposition of its direct members." Who knows if that's still the case: Walter has considerably changed how it works but has not documented those changes (IIRC, I may be wrong on that). That probably means that a Queue!T has an infinite lifetime, assuming that the compiler sees its T[] member as having an infinite lifetime.
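
As an illustration of that "juxtaposition" rule (hypothetical example):

```d
struct Pair
{
    int* p;
    int  x;
}

@safe int* leak(scope Pair pair)
{
    // `scope` on the struct applies to its pointer member, exactly as if
    // `p` had been passed as a bare `scope int*`:
    return pair.p; // Error with -preview=dip1000: `pair` may not be returned
}
```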

> Maybe we should be allowed to declare aggregate fields as `scope` to convey that, but the compiler currently disallows it.

That might be nice but would also probably cause a dramatic increase in complexity. I haven't thought through the possible ramifications of making a change like that.


May 17, 2019
On Friday, 17 May 2019 at 18:45:12 UTC, Jonathan M Davis wrote:
> On Friday, May 17, 2019 11:25:40 AM MDT Meta via Digitalmars-d-announce wrote:
>> I don't want to *restrict* the lifetime of a heap allocation. I want the compiler to recognize that the lifetime of my original data is the same as the processed output, and thus allow my code to compile.
>
> It is my understanding that DIP 1000 really doesn't track lifetimes at all.

Then why does the DIP, in addition to many of the error messages, use the word lifetime? I feel like I know less about DIP1000 and what it actually does than when I started. Can someone _please_ point me at any up-to-date documentation on this?
May 18, 2019
On Friday, 17 May 2019 at 20:04:42 UTC, Walter Bright wrote:
> Dip1000 is key to enabling containers to control access to the pointers to their innards that they expose.

I haven't looked at the subject for a while, but every time I did the takeaway was the same: dip1000 works great for containers until you need to reallocate or free something, at which point it's back to @trusted code with you.
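
A sketch of what I mean (an invented Vec, not code from any real library):

```d
import core.stdc.stdlib : free, malloc;

struct Vec
{
    private int* ptr;
    private size_t len, cap;

    // Reallocation cannot be @safe: free() invalidates any refs that
    // dip1000 has happily scoped to the Vec itself.
    void reserve(size_t n) @trusted
    {
        if (n <= cap) return;
        auto np = cast(int*) malloc(n * int.sizeof); // null check omitted
        np[0 .. len] = ptr[0 .. len];
        free(ptr); // any outstanding ref into the old buffer now dangles
        ptr = np;
        cap = n;
    }
}
```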

I think you said at some point "It's still useful, because it reduces the surface of code that needs to be checked", but even then saying containers can "control access to the data they expose" is a little optimistic.

They only control that access as long as you don't need to call resize(). I'd wager that a large fraction of dangling pointer errors made by non-beginner C++ developers come specifically from this use case.
May 18, 2019
If all access to internals is returned by ref, those lifetimes are restricted to the current expression.
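
E.g. (a sketch, with -preview=dip1000):

```d
@safe:

struct Box
{
    private int value;
    ref int get() return { return value; }
}

void main()
{
    Box b;
    b.get() = 5;  // the ref to the internals lives only for this expression
    b.get() += 1; // likewise
    // It cannot be kept: `static int* p; p = &b.get();` does not compile.
}
```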
May 19, 2019
On Saturday, 18 May 2019 at 19:44:37 UTC, Walter Bright wrote:
> If all access to internals is returned by ref, those lifetimes are restricted to the current expression.

Oh my god, I try my best to be open-minded, but talking about dip1000 design with you is like pulling teeth *at best*.

Yes, containers work perfectly if you allocate them on the stack, use their contents during the current stack frame, and then deallocate them statically. By definition, this represents 0% of the use cases of dynamic containers.

Dynamic containers need methods like "push_back", "reserve", "resize", "concatenate" or "clear", which are all impossible to implement with dip1000 without making their implementations trusted, which in turn opens up the program to use-after-free memory corruption.
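
Here is the failure mode in miniature (again an invented Vec; assumes -preview=dip1000):

```d
import core.stdc.stdlib : free, malloc;

struct Vec
{
    private int* ptr;
    private size_t len;

    ref int opIndex(size_t i) return @trusted { return ptr[i]; }

    void resize(size_t n) @trusted // grow-only, for brevity
    {
        auto np = cast(int*) malloc(n * int.sizeof);
        np[0 .. len] = ptr[0 .. len];
        free(ptr);
        ptr = np;
        len = n;
    }
}

void main() @safe
{
    Vec v;
    v.resize(1);
    scope int* r = &v[0]; // accepted: dip1000 ties `r` to `v`...
    v.resize(1_000);      // ...but this just freed the buffer `r` points into
    *r = 42;              // use-after-free in @safe code, and no error
}
```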

See also:

https://forum.dlang.org/post/qbbipvkjqjeweasxknbn@forum.dlang.org

https://forum.dlang.org/post/rxmwjjphnmkszaxonmje@forum.dlang.org

Have you talked to Atila Neves at all in the past six months? Why the hell are we having this discussion?

This is not a new issue. I have raised it repeatedly in the past (I can even dig up the posts if you're interested; I remember writing a fairly in-depth analysis at some point). Atila's automem and Skoppe's spasm have the same limitation: you can't reallocate memory without writing unsafe code (I'm told spasm gets around that by never deallocating anything).

Honestly, the fact that you're the only person with a coherent vision of dip1000, and yet you keep ignoring problems when they're pointed out to you, is both worrying and infuriating. E.g.:

> So far, the only real shortcoming in the initial design was revealed by the put() semantics, and was fixed with that PR that transmitted scope-ness through the first argument.

Like, yes, I understand that dip1000 is an achievement even if it doesn't allow for resizable containers, and that immutable already allows for functional programming patterns, and that's great. But you need to stop acting like everything's going perfectly when community members (including highly involved library writers) have complained about the same things over and over again (imprecise semantics, lack of documentation, the resize() use case) and you've kept ignoring them.

Seriously, I'm not asking for much. I'm not demanding you take any architecture decision or redesign the language (like some people are prone to demanding here). But it would be nice if you stopped acting like you didn't read a word I wrote, over and over again.