January 15
On Saturday, 15 January 2022 at 10:17:39 UTC, Timon Gehr wrote:
> Seems like that would make class object construction impure.

Meh paper over it with implementation-specific hacks.  No one complains about pureMalloc.  (Well, arguably people _should_ be complaining about pureFree--pureMalloc is fine though--but.)
January 15
On 15.01.22 11:20, Elronnd wrote:
> On Saturday, 15 January 2022 at 10:15:16 UTC, Timon Gehr wrote:
>> The issue is not that addresses might change, it's that they depend on global state. Therefore, e.g., iteration order for an associative array will depend on global state too. It's just not `pure`.
> 
> It seems to me that any data passed from an impure function to a pure one will depend on global state.

Not even true, but not getting into that. In any case, by passing it, you turn it into local state, that's why the qualifier "global" was there in the first place.

The point is that a pure function's result should _only_ depend on parameters. `pure` functions are functions that can't turn global state into local state.
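A minimal sketch of that guarantee (hypothetical function names; the commented-out function is the one the compiler rejects):

```d
import std.stdio : writeln;

int counter; // mutable global state

// OK: the result depends only on the parameters.
int addPure(int a, int b) pure {
    return a + b;
}

// Rejected: `pure` functions cannot read mutable static data.
// int addImpure(int a, int b) pure {
//     return a + b + counter; // Error: `pure` function cannot access `counter`
// }

void main() {
    writeln(addPure(2, 3)); // always 5, regardless of global state
}
```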

> And d has no problem with pure functions doing things to pointers that are passed to them anyway.

It makes absolutely no sense that a @safe pure function can cast an int* to size_t. It violates the spec of pure functions, because pure functions can create new int* with addresses that depend on global state, so if they can in turn create integers from those, that will produce non-deterministic results.
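A sketch of the loophole being described (`allocatedAddress` is a made-up name; this compiles with today's compilers, despite being strongly pure by signature):

```d
// Strongly pure by its signature: no parameters, @safe, pure, nothrow.
// Yet the returned integer is whatever address the GC happened to pick,
// i.e. it depends on global allocator state.
size_t allocatedAddress() @safe pure nothrow {
    int* p = new int;      // allocation address depends on global state
    return cast(size_t) p; // pointer-to-integer cast, accepted in @safe
}
```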

> So the current state seems fine (or, at least, fully consistent).

You mean fully consistent in its inconsistency? "If it's broken, don't fix it, embrace it"? A qualifier that does not exist is much better than a qualifier that does not mean anything. Why are people so obsessed with slapping qualifiers on functions that do not satisfy the restrictions which that qualifier is meant to guarantee? I will never understand, and I think neither should you.
January 15
On 15.01.22 11:22, Elronnd wrote:
> On Saturday, 15 January 2022 at 10:17:39 UTC, Timon Gehr wrote:
>> Seems like that would make class object construction impure.
> 
> Meh paper over it with implementation-specific hacks.

You can't, as any `pure` function can just call toHash.

> No one complains about pureMalloc.

Not true, but that problem is a bit more nuanced and I think it can be fixed.

https://dlang.org/library/core/memory/pure_malloc.html


```
UNIX 98 requires that errno be set to ENOMEM upon failure. Purity is achieved by saving and restoring the value of errno, thus behaving as if it were never changed.
```

Of course, never mind that it will return `null` upon failure. Some sort of special-casing of out-of-memory conditions seems inevitable (the GC treats out-of-memory as an unrecoverable error; pureMalloc should probably throw an OutOfMemoryError in this case too).
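For reference, here is roughly what a caller has to do today (a sketch; `allocInt` is a made-up helper, and the abort-on-null policy is the caller's choice, not the library's):

```d
import core.memory : pureMalloc, pureFree;

// Allocate and initialize one int via pureMalloc; callable from `pure` code.
int* allocInt(int value) pure nothrow @nogc
{
    auto p = cast(int*) pureMalloc(int.sizeof);
    if (p is null)
        assert(0, "out of memory"); // today the caller must special-case null
    *p = value;
    return p;
}

void main()
{
    int* p = allocInt(42);
    assert(*p == 42);
    pureFree(p);
}
```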

> (Well, arguably people _should_ be complaining about pureFree--pureMalloc is fine though--but.)

At least pureFree is not @safe. Of course, there is the whole issue that its signature looks exactly like something that can often just be optimized away. x)
January 15
On Saturday, 15 January 2022 at 10:40:29 UTC, Timon Gehr wrote:
> It makes absolutely no sense that a @safe pure function can cast an int* to size_t. It violates the spec of pure functions, because pure functions can create new int* with addresses that depend on global state, so if they can in turn create integers from those, that will produce non-deterministic results.

This is why d needs a provenance model. I mentioned this before in the RC/immut thread, but no one seemed to care. It is not sufficient to consider these issues in isolation. I will add: pure does not mean @safe. The following program prints 17 on my computer with a recent dmd, for instance:

```d
pure int f(int *x) {
	return x[1]; // out-of-bounds read: `pure` does not imply `@safe`
}

int main() {
	import std.stdio;
	int x, y = 17;
	writeln(f(&x)); // reads past `x` into adjacent stack memory
	return y;
}
```

>> Meh paper over it with implementation-specific hacks.

> You can't, as any `pure` function can just call toHash.

Not sure what you mean.  I propose to pretend to the compiler that Object's constructor is pure, even though it is not (it must access global state to calculate a hash for the object).

If the issue is that pure functions can call toHash and its output is 'non-deterministic' then ... I really don't have any more to say.  toHash always returns the same result given the same input.
January 15
On Saturday, 15 January 2022 at 09:49:43 UTC, Alexandru Ermicioi wrote:
> That was an example ofc. Equals method could be overloaded with const and immutable version in there too, or have separate iface.

So just like today...

>> Note that opEquals like this already works on dmd master with the specific class and attributes.
>
> From what I'm aware, with couple of compiler hacks, while this offers no hacks.

No, there's no compiler hacks, this is a direct consequence of the Liskov substitution principle allowing you to tighten constraints in specializations.

How do you think it works today?

> Anyway, if there is no possibility to make friends inheritance and method attributes easily, then best is to just remove them from Object and be done with it.

Not only is there a possibility, it *already works*.

The DIP authors just don't know very much about D's classes.

> P.S. Can't we enhance compiler and bake the attribute info into interface itself, and then allow downcasting it to an interface with specific subset of attributes?
> I.e. say we have safe nothrow nogc equals interface. We then in some random code could downcast safely to safe nogc equals interface?

Have you tried this?

```d
interface GcEqual(T) {
        bool opEquals(T rhs);
}

interface NoGcEqual(T) {
        bool opEquals(T rhs) @nogc;
}

class A : GcEqual!A, NoGcEqual!A {
        override bool opEquals(Object rhs) {
                return this.opEquals(cast(A) rhs);
        }
        override bool opEquals(A rhs) @nogc {
                if(rhs is null) return false;
                return true;
        }
}

void main() {
        A a = new A;

        GcEqual!A g = a;
        NoGcEqual!A n = a;
}
```

Works today. Restricted functions can implicitly cast to less restricted functions.
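The same covariance can be seen without interfaces at all, using function pointers and delegates (a sketch; `twice` is a made-up function):

```d
// A maximally restricted function...
int twice(int x) @safe pure nothrow @nogc { return 2 * x; }

void main()
{
    // ...implicitly converts to a less restricted function pointer type.
    int function(int) f = &twice;
    assert(f(4) == 8);

    // Same for delegates: a stricter delegate type converts to a looser one.
    int delegate(int) nothrow @nogc strict = (int x) nothrow @nogc => x + 1;
    int delegate(int) loose = strict;
    assert(loose(41) == 42);
}
```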
January 15
On Saturday, 15 January 2022 at 13:19:10 UTC, Adam D Ruppe wrote:
> No, there's no compiler hacks, this is a direct consequence of the Liskov substitution principle allowing you to tighten constraints in specializations.
>
> How do you think it works today?

I remember seeing some kind of magic equals function in object.d that was used implicitly during equals comparison. Maybe I got something wrong.
>
>> Anyway, if there is no possibility to make friends inheritance and method attributes easily, then best is to just remove them from Object and be done with it.
>
> Not only is there a possibility, it *already works*.
>
> The DIP authors just don't know very much about D's classes.

Yeah, narrowing down the method signature works. I just suggested removing opEquals and the other operator overloads from Object and providing them as interfaces. Then devs could choose whether to make an object equatable, etc.

>> P.S. Can't we enhance compiler and bake the attribute info into interface itself, and then allow downcasting it to an interface with specific subset of attributes?
>> I.e. say we have safe nothrow nogc equals interface. We then in some random code could downcast safely to safe nogc equals interface?
>
> Have you tried this?
>
> ```
> interface GcEqual(T) {
>         bool opEquals(T rhs);
> }
>
> interface NoGcEqual(T) {
>         bool opEquals(T rhs) @nogc;
> }
>
> class A : GcEqual!A, NoGcEqual!A {
>         override bool opEquals(Object rhs) {
>                 return this.opEquals(cast(A) rhs);
>         }
>         override bool opEquals(A rhs) @nogc {
>                 if(rhs is null) return false;
>                 return true;
>         }
> }
>
> void main() {
>         A a = new A;
>
>         GcEqual!A g = a;
>         NoGcEqual!A n = a;
> }
> ```

I do know that you can do such a thing today. The problem is that the number of attribute combinations is huge, and defining an implementation and a dedicated interface for each combination is cumbersome when a code base has varying requirements for, e.g., equality comparison.

>
> Works today. Restricted functions can implicitly cast to less restricted functions.

Yes, it does. The last suggestion was meant to avoid the huge sprawl of interfaces.

Say you have a class that implements an interface whose equals is nothrow, @nogc, and @safe.

Then you have a function/method that works only with a nothrow equals, i.e. its parameter type is the equals interface with just nothrow. Passing an instance of that class to the function will fail, since they are different types.

The idea was to have only one interface, have the class implement the @safe nothrow @nogc version, and then have the compiler check and allow passing the object into the method, since it is the same interface and the method merely has relaxed constraints. The same should work when you cast the interface: given a nothrow @nogc interface, you could cast it to the same interface with just nothrow, and the runtime would guarantee that this is possible.
This kind of behavior might solve some problems with attribute incompatibility and inheritance.

January 15

On Saturday, 15 January 2022 at 10:40:29 UTC, Timon Gehr wrote:

>> And d has no problem with pure functions doing things to pointers that are passed to them anyway.
>
> It makes absolutely no sense that a @safe pure function can cast an int* to size_t. It violates the spec of pure functions, because pure functions can create new int* with addresses that depend on global state, so if they can in turn create integers from those, that will produce non-deterministic results.

The spec explicitly allows such non-deterministic results; it just says that the behavior of calling such a function is implementation-defined:

> **Implementation Defined:** An implementation may assume that a strongly pure function that returns a result without mutable indirections will have the same effect for all invocations with equivalent arguments. It is allowed to memoize the result of the function under the assumption that equivalent parameters always produce equivalent results. **A strongly pure function may still have behavior inconsistent with memoization by e.g. using casts or by changing behavior depending on the address of its parameters.** An implementation is currently not required to enforce validity of memoization in all cases.

(From https://dlang.org/spec/function.html#pure-functions. Emphasis added.)

January 15
On Saturday, 15 January 2022 at 14:08:25 UTC, Alexandru Ermicioi wrote:
> I remember seeing some kind of magic equals function in object.d that was used implicitly during equals comparison.

There is an equals function, but it is nothing really magical and certainly not what I'd call a hack. The compiler regularly turns overloaded operators into calls to helper functions.

This helper function has two jobs: 1) do a null check before the virtual call, so a == b doesn't segfault when a is null, and 2) check for cases where a == b but b != a, which can happen when one object is an extension of the other: a is a base class, b is a derived class, a.equals(b) passes because the base parts are the same, but b.equals(a) fails because of extended info that is no longer equal. The helper function checks both directions.
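A rough sketch of what that helper does (the name `objectsEqual` is hypothetical; the real one is the free `opEquals` function in druntime's object.d):

```d
// Hypothetical reimplementation of the `a == b` lowering for classes.
bool objectsEqual(Object a, Object b)
{
    if (a is b) return true;                  // identity fast path
    if (a is null || b is null) return false; // null check: no segfault
    // Symmetric check: with a base/derived pair, a.opEquals(b) may pass
    // while b.opEquals(a) fails on the derived class's extra fields.
    return a.opEquals(b) && b.opEquals(a);
}

void main()
{
    Object a = new Object, b = new Object;
    assert(objectsEqual(a, a));
    assert(!objectsEqual(a, b));    // default Object.opEquals is identity
    assert(!objectsEqual(a, null)); // no crash on null
}
```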

This helper function was actually poorly implemented in druntime until last week! This poor implementation is where @safe, @nogc, etc., got lost in the comparison.

I seriously think the DIP authors misread the error message. I mean it, take a look at this quote from the dip:

"""
It fails because the non-safe Object.opEquals method is called in a safe function. In fact, just comparing two classes with no user-defined opEquals, e.g., assert (c == c), will issue an error in @safe code: "@safe function D main cannot call @system function object.opEquals".
"""

They blame Object.opEquals yet the error message they copy/pasted does NOT say Object.opEquals. It says *o*bject.opEquals. Those are completely different things!

I pointed this out in the dip PR review thread, but the authors chose to ignore me. (Perhaps because the whole house of cards they built up uses their error as a foundation, and correcting this obvious mistake brings their whole dip crashing down.)

Anyway, I fixed the implementation and that fix was merged in druntime master last week. It was very easy to do and it now respects user-defined @safe annotations. It had nothing to do with Object.

opCmp and toHash don't use a helper function. They're direct calls and work just as if you called them directly... including segfaulting on null inputs, but also respecting user-defined attributes and/or overloads. Again, it has nothing to do with Object.

> Yeah, narrowing down the method signature works. I just suggested removing opEquals and the other operator overloads from Object and providing them as interfaces. Then devs could choose whether to make an object equatable, etc.

The interfaces are actually not necessary, even if we were to remove opEquals and friends from Object, you can still define them in your subclasses and they'll be respected when they are used. Just like with structs and operator overloading today.

The one time you might want to use them is for a virtual-dispatch-based collection. The main example is druntime's associative arrays.

This could potentially be changed to a templated interface. Even if it kept a virtual dispatch internally, it can do that with function pointers... which is, once again, actually exactly what it does with structs today. These would surely take the static type of the key as the argument, which might be an interface from druntime, but could also just as well simply be whatever concrete base class the user defined.

But let's put that aside and look at today's impl. It actually uses just opEquals and toHash, but it does need both... so which interface would it be? Equals!T or Hashable? It'd have to be both, more like an AAKeyable!T that combines the two.

Sure, you could put that in and define it... but since you need to do some function pointer stuff for structs anyway... might as well just do that for classes too and use them internally. The interface would then just be a helper to guide you toward implementing the right methods correctly. There's some value in that, but it is hardly revolutionary.

And if you have that interface... which attributes do you put on it? If you don't put @nogc etc. on it, the implementation that dispatches through the interface can't benefit, and user-added attributes will be ignored anyway. And if you do put @nogc on it, now the user is restricted, which can be a dealbreaker for them (see people's troubles with const, for example), so that's painful.

Static analysis and dynamic dispatch are at some conflict, whether it is from Object, from some other class, or from some newly defined interface.

My preference is to do some kind of type erasure; something more like an extension of dip 1041. That actually fixes real problems and works for all this stuff.

Or we can template the whole thing and get the static analysis at the cost of more generated code bloat.

But mucking with Object is nothing but a distraction.

> I do know that you can do such a thing today. The problem is that the number of attribute combinations is huge, and defining an implementation and a dedicated interface for each combination is cumbersome when a code base has varying requirements for, e.g., equality comparison.

Yeah, that's why just using the function directly without an intermediate interface is the easiest way to get it all right. Which works today....


> Then you have a function/method that works only with nothrow equals, i.e. the parameter type is equals interface with just nothrow.
> Trying to pass the instance of that class to the function will fail, since they are different types.

Yeah, the interface won't.... but a delegate will. And if the user class lists both interfaces, the one method will satisfy them all.

Of course, listing all those interfaces gets verbose, like you said, and delegates have to be taken individually, but you can still take delegates for whichever group of methods you need from an interface.
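A sketch of the delegate route (class and function names are made up):

```d
class A
{
    // One method carrying the full attribute set...
    bool same(A rhs) @safe nothrow @nogc { return rhs is this; }
}

// ...satisfies any consumer that demands only a subset of those attributes.
bool check(A x, bool delegate(A) nothrow eq) { return eq(x); }

void main()
{
    auto a = new A;
    assert(check(a, &a.same));      // stricter delegate fits the looser slot
    assert(!check(new A, &a.same)); // different instance: not `same`
}
```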

> The idea was to have only one interface, have the class implement the @safe nothrow @nogc version, and then have the compiler check and allow passing the object into the method, since it is the same interface and the method merely has relaxed constraints. The same should work when you cast the interface: given a nothrow @nogc interface, you could cast it to the same interface with just nothrow, and the runtime would guarantee that this is possible.

Yeah, since an interface is kinda like a collection of delegates, and it works with a collection of delegates, it might be possible to do it across a whole interface.

A duck type template can probably do it in the library right now... a while ago one of those almost got added to Phobos. I think std.typecons.wrap more-or-less does it.

January 15

On Monday, 10 January 2022 at 13:48:14 UTC, Mike Parker wrote:

> Discussion Thread
>
> This is the discussion thread for the first round of Community Review of DIP 1042, "ProtoObject":
>
> https://github.com/dlang/DIPs/blob/2e6d428f42b879c0220ae6adb675164e3ce3803c/DIPs/DIP1042.md

Currently `struct` can be used as below. So why shouldn't Object/class behave the same?
They are both aggregate types.

```d
struct Foo
{
    int i;
}

void main()
{
    import std.stdio;

    Foo a, b;
    writeln("a=", a);
    bool e = a == b;
    writeln("equ=", e);

    string[Foo] as;
    as[a] = "a";
    writeln("as[a]=", as[a]);

    //int c = a > b; // -> Error: need member function `opCmp()` for struct `Foo` to compare
    //writeln("cmp=", c);
}
```
1. opEquals, opCmp, toHash, toString

   Get rid of these built-in Object functions.

   1a. If an object does not define one, do the same as struct; since Object has no members, use identity:

   ```d
   Object a, b;
   a == b; // compare using the `a is b` construct
   ```

   Note: the compiler could search for a virtual function named opEquals -> if found, use it -> slow; assume attributes are the same as now.

   1b. If an object does not define one, do the same as struct:

   ```d
   class A { int a; }
   A a, b;
   a == b;
   ```

   Note: same as in 1a.

   1c. If one is defined, use it:

   ```d
   class B { int a; bool opEquals(B rhs) { ... } }
   B a, b;
   a == b;
   ```

2. monitor member for synchronized(this)

   2a. Get rid of this built-in member.

   2b. If there is a synchronized(this) {} call, create one global monitor (mutex) per class type, using the module name where it is defined, and use that. Document it as such. Most usage of this construct wants one global instance anyway.

   2c. 'synchronized' could be extended to accept an existing monitor (mutex) object, such as synchronized(this, existing_monitor...) {}
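What 2b amounts to can already be approximated by hand today (a sketch; `Counter` and `classLock` are made-up names):

```d
import core.sync.mutex : Mutex;

class Counter
{
    private int n;
    // One global monitor per class, as 2b proposes, instead of the
    // hidden per-instance monitor that `synchronized(this)` uses.
    private __gshared Mutex classLock;
    shared static this() { classLock = new Mutex; }

    void increment()
    {
        synchronized (classLock) // explicit monitor, in the spirit of 2c
        {
            ++n;
        }
    }

    int value() { return n; }
}

void main()
{
    auto c = new Counter;
    c.increment();
    c.increment();
    assert(c.value == 2);
}
```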

Cheers - Happy coding

January 15
On 1/15/22 13:02, Elronnd wrote:
> 
>> You can't, as any `pure` function can just call toHash.
> 
> Not sure what you mean.  I propose to pretend to the compiler that Object's constructor is pure, even though it is not (it must access global state to calculate a hash for the object).
> 
> If the issue is that pure functions can call toHash and its output is 'non-deterministic' then ... I really don't have any more to say.  toHash always returns the same result given the same input.

I said toHash can't be pure; you suggested making the constructor cheat so that toHash can be pure. I said that doesn't work. I also don't have much more to say, but maybe I can say the same thing again.

The problem is this:

```d
hash_t stronglyPure()@safe pure nothrow;
```

This returns an integer (or perhaps it throws an error). It should always be the same integer, as it is a `pure` function without any parameters. However, it will return a different result on each invocation if I implement it like this:

```d
hash_t stronglyPure()@safe pure nothrow{
    return new Object().toHash();
}
```

I really don't care if the constructor is cheating or toHash. The point is, you can't cheat.