May 26, 2022
On Thursday, 26 May 2022 at 23:25:06 UTC, Walter Bright wrote:
> On 5/26/2022 3:54 PM, deadalnix wrote:
>> To begin with, I don't expect the same level of analysis from a static analyzer as from the compiler. I can ignore the static analyzer if it is wrong; I cannot ignore the compiler, after all, I need it to compile my code. False positives are therefore much more acceptable from the tool than from the compiler.
>
> The static analyzer should be built in to the language when looking for bugs, not stylistic issues.
>

No, no, no, no and no.

You are just breaking my code when you do this. Leave my code alone. Leave the libraries I rely upon alone. It's working fine, great even.

This is purely destructive. Every time this is done, we lose a chunk of the ecosystem.

If it can detect bugs => static analysis. Ship the analyzer with the rest of the toolchain and be done with it.
If it allows for more expressiveness on another axis => ship it with the language, and increase my powers.

May 27, 2022

On Thursday, 26 May 2022 at 23:25:06 UTC, Walter Bright wrote:
> On 5/26/2022 3:54 PM, deadalnix wrote:
>> Second, I expect the constraints checked by the compiler to provide me with useful invariants I can rely upon. For instance, if some data is immutable, and it really is an invariant in the program, then I know this data can be shared safely, as nobody else is going to modify it. The existence of the invariant limits my options on one axis - I cannot mutate this data - while opening my options on another axis - I can share this data safely without synchronization.
>>
>> If immutable instead meant immutable in most places, but you can mutate it with this weird construct, then it would be effectively useless as a language construct, because it would restrict my expressiveness on one axis without granting me greater expressiveness on another.
>
> Immutable means immutable in D, all the way. I've fended off many attempts to turn it into "logical const" and add "mutable" overrides. Most of the complaints about immutable and const in D are that they are relentless and brutal, not that they don't work.
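The invariant being discussed - that immutable data can be shared without synchronization - can be sketched in a few lines of D (my illustration, not from the thread):

```d
import std.concurrency : spawn;

// immutable data may cross thread boundaries in D precisely because
// nobody can mutate it, so no synchronization is needed to read it.
void reader(immutable(int)[] data)
{
    assert(data[0] == 1); // always holds: the invariant cannot be broken
}

void main()
{
    immutable(int)[] data = [1, 2, 3];
    spawn(&reader, data); // OK: immutable is implicitly shareable
    // data[0] = 42;      // Error: cannot modify immutable expression
}
```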

People avoid const; working around it is very easy, so you don't hear much about const other than advice to avoid it. I guess people don't want to opt in to something "relentless and brutal". An argument can be made for its removal, and it would be a boon: so much baggage, like `inout`, could be removed. It would remove complexity.

May 27, 2022
On Thursday, 26 May 2022 at 23:25:06 UTC, Walter Bright wrote:
> On 5/26/2022 3:54 PM, deadalnix wrote:
>> The problem with DIP1000 is that it doesn't provide me with invariants I can rely upon,
>
> It does for stack based allocation.
>
>
>> because it is unable to track more than one level of indirection. It can only detect some violations of the invariant, but not all (in fact, probably not the majority).
>
> It's designed to track all attempts to escape pointers to the stack, and I mean all as in 100%. It will not allow building multilevel data structures on the stack.

I would say that the biggest issue with DIP 1000 is that it spends a significant chunk of D's complexity budget, while offering only relatively modest benefits.

On the one hand, DIP 1000's limitations:

* It only works for stack allocation
* It only handles one level of indirection
* It cannot express `return ref` and `return scope` on the same parameter (so no small-string optimization, no StackFront allocator...)

On the other hand, the investment it demands from potential users:

* Learn the differences between `ref`, `return ref`, `scope`, `return scope`, `return ref scope`, and `ref return scope`.
* Learn how the various rules apply in situations where the pointers/references are hidden or implicit (e.g., `this` is considered a `ref` parameter).
* Learn when `scope` and `return` are inferred and when they are not.
* Probably more that I'm forgetting...

Is it any wonder that a lot of D programmers look at the two lists above and conclude, "this newfangled DIP 1000 stuff just ain't worth it"?
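For concreteness, here is a minimal sketch (mine, not from the thread) of the kind of escape DIP 1000 rejects and the annotation vocabulary it asks users to learn, assuming compilation with `-preview=dip1000`:

```d
// Rejected by the compiler in @safe code:
//
//   @safe int* escape()
//   {
//       int x = 42;
//       return &x; // Error: returning `&x` escapes a reference to local `x`
//   }

// Accepted: `return scope` tells the compiler the pointer may be
// returned to the caller, but must not escape in any other way.
@safe int* identity(return scope int* p)
{
    return p; // OK: escaping via the return value is explicitly allowed
}
```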

May 27, 2022
On 27/05/2022 12:34 PM, Paul Backus wrote:
> Is it any wonder that a lot of D programmers look at the two lists above and conclude, "this newfangled DIP 1000 stuff just ain't worth it"?

That is because it's not worth it.

I've got a whole lot of new code, with dip1000 turned on.

It has caught exactly zero bugs.

On the other hand, it has required me to annotate `scope` methods as `@trusted` just because I returned an RC type that is allocated on the heap.

To be blunt, even though I have it turned on, the entire DIP 1000 design needs to be chucked out and a new design considered.

What I do know is what must be met by any future design:

- In non-virtual functions, no annotations should need to be written.
- Inference is key to success.
- The lifetime of memory must be expressible in relation to a data structure that owns that memory.
May 26, 2022
On 5/26/2022 4:52 PM, deadalnix wrote:
> This is purely destructive. Every time this is done, we lose a chunk of the ecosystem.

Then just use @system, which presumes the coder knows best.

@live is purely opt-in. If you don't want it, don't use it :-)
May 26, 2022
On 5/26/2022 5:34 PM, Paul Backus wrote:
> Is it any wonder that a lot of D programmers look at the two lists above and conclude, "this newfangled DIP 1000 stuff just ain't worth it"?

Fortunately,

1. the attributes are all subtractive, i.e. they all restrict what the programmer can do when they are there. This is deliberate. If you don't want the restrictions, don't use them.

2. the attributes are inferred in templates. This helps a great deal. They'll probably get inferred for general functions at some point; inferring them is just too useful.

3. D code defaults to @system. If you just want to blitz out code, write it that way. If you want the compiler to check for memory safety errors, well, ya gotta use the memory safe features.
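Point 2 above can be seen in a short example (my illustration, not from the thread): because the function is a template, the compiler infers its attributes from the body, and strictly-annotated callers can use it without a single attribute being written on it.

```d
// No attributes written here; since `twice` is a template, the compiler
// infers @safe, pure, nothrow and @nogc from the body at instantiation.
T twice(T)(T x)
{
    return x + x;
}

// This unittest demands all four attributes of anything it calls,
// and it compiles: the inferred attributes of `twice!int` match.
@safe pure nothrow @nogc
unittest
{
    assert(twice(21) == 42);
}
```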


May 26, 2022
On 5/26/2022 6:30 PM, rikki cattermole wrote:
> It has caught exactly zero bugs.

But did your code have any memory corrupting bugs?

If your code didn't, then nothing can catch a bug that isn't there.

If you don't write code that corrupts memory, then great! But the #1 problem C has is memory corruption getting into production code. People are sick of this, and C will die because of this.

I used to write a lot of memory corruption bugs. I gradually learned not to do that. But I also like the self-documentation aspect of `scope`, for example: I know I can safely pass a pointer to a stack array to a function that marks the parameter as `scope`.
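That self-documentation use of `scope` might look like this (a minimal sketch of my own, assuming `-preview=dip1000` for full enforcement):

```d
// `scope` documents - and, under -preview=dip1000, enforces - that the
// slice does not escape the function, so passing a stack array is safe.
@safe int sum(scope const(int)[] values)
{
    int total;
    foreach (v; values)
        total += v;
    return total;
}

@safe void main()
{
    int[4] buf = [1, 2, 3, 4];
    assert(sum(buf[]) == 10); // slicing a stack array is fine: it cannot escape
}
```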
May 27, 2022
On 27/05/2022 2:54 PM, Walter Bright wrote:
> But did your code have any memory corrupting bugs?
> 
> If your code didn't, then nothing can catch a bug that isn't there.

It doesn't worry me that it's not finding anything.

I tend to program pretty defensively these days due to it being good for optimizers.

My issue with it is that it has so many false positives, and they seem to be by design.

It's basically a "boy who cried wolf" type of situation. Most people will turn off these checks because they hurt more than they help, and that is certainly not a good thing.

I want lifetime checks to work, I want users of my libraries to be able to use my code without caring about stuff like memory lifetimes and be strongly likely to never hit an issue even with concurrency. I want the average programmer to be able to use my stuff without having to care about the little details.

DIP1000 so far does not appear to be working towards a design that meets these goals.
May 26, 2022
On 5/26/2022 8:07 PM, rikki cattermole wrote:
> I want lifetime checks to work,

So do I. But nobody has figured out how to make this work without strongly impacting the programmer.


> I want users of my libraries to be able to use my code without caring about stuff like memory lifetimes and be strongly likely to never hit an issue even with concurrency. I want the average programmer to be able to use my stuff without having to care about the little details.

The garbage collector does this famously.


> DIP1000 so far does not appear to be working towards a design that meets these goals.

Rust is famous for forcing programmers to not just recode their programs, but redesign them from the ground up. How they managed to sell that is amazing.
May 27, 2022
On 27/05/2022 5:06 PM, Walter Bright wrote:
> On 5/26/2022 8:07 PM, rikki cattermole wrote:
>> I want lifetime checks to work,
> 
> So do I. But nobody has figured out how to make this work without strongly impacting the programmer.

I did come up with a design, but I'm not convinced it is entirely doable.

I explained this to Timon earlier today (so have examples).

It's basically expressing every memory object (in the C sense of the term) in relation to other memory objects, and tying its lifetime to theirs.

My worked example:

```d
int[] array = [1, 2, 3]; // ownership: []
// arg ownership: [] (not ref/out)
// return ownership: [first argument] -> [array]
int* got = func(array); // ownership: [array]

int* func(int[] source) {
  int[] slice = source[1 .. 2]; // ownership: [source]
  int* ptr = &slice[0]; // ownership: [slice, source] -> [source]
  return ptr; // ownership: [ptr] -> [source]
}
```

A question Timon came up with that I answered:

```d
int[][] func(int[] a, int[] b){
    return [a,b];
}
```

(for return value): ``// ownership: [first argument, second argument]``

This ticks a lot of the boxes I'm using as preferable acceptance criteria:

1. Inferred for anything non-virtual
2. Nothing outlives its owner
3. Fairly straightforward to understand

But yeah none of this is easy to resolve.

On that note, an issue I have found in practice is [0]. I haven't been bothered to report it, but at least it shouldn't require going back to the drawing board ;) #dbugfix

[0] https://issues.dlang.org/show_bug.cgi?id=23142