On Tuesday, 13 April 2021 at 00:14:38 UTC, Timon Gehr wrote:
> On 4/12/21 11:59 PM, Q. Schroll wrote:
>> On Monday, 12 April 2021 at 17:21:47 UTC, Timon Gehr wrote:
>>> On 12.04.21 16:44, Q. Schroll wrote:
>>>> On Monday, 12 April 2021 at 11:05:14 UTC, Timon Gehr wrote:
>Unfortunately, it is not written too well: The reader gets flooded with details way before being told what the problem actually is or how the proposal addresses it.
Doesn't the Abstract explain what the problem is and give a general idea of how it is addressed?
It does not. It's generic fluff. It's only marginally more explicit than: "there are problems and to address them we should change the language".
I had a more detailed Abstract in previous drafts, but if you think I watered it down too much, I can add more details.
> > > As far as I can tell, this is trying to introduce attribute polymorphism without actually adding polymorphism, much like `inout` attempted and ultimately failed to do. I am very skeptical. It's taking a simple problem with a simple solution and addressing it using an overengineered non-orthogonal mess in the hopes of not having to add additional syntax.
You're mistaken. You can take a look at the Alternatives for seemingly simple solutions. There ain't any.
I know there are, and I literally state how to do it in the quoted excerpt.
If by "quoted excerpt" you mean "As far as I can tell, this", I read it, but to be honest, I didn't really understand what attribute polymorphism really means. Googling "polymorphism", the closest I come to is that a `@safe` delegate can be used in place of a `@system` delegate. This is already the case; I can't see how anything would "introduce" it.
...
That's subtyping, not polymorphism. Polymorphism is when a term depends on a type:
https://en.wikipedia.org/wiki/Lambda_cube
https://en.wikipedia.org/wiki/Parametric_polymorphism
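To illustrate the distinction in today's D: the `@safe`-for-`@system` substitution discussed above is subtyping, while a template is D's (monomorphized) approximation of parametric polymorphism. A minimal sketch, with the helper `twice` made up for illustration:

```d
// One definition, but a fresh instance is stamped out per instantiated type.
auto twice(DG)(DG dg) { return dg() + dg(); }

void main()
{
    int delegate() @safe s = () => 1;
    // Subtyping: a @safe delegate implicitly converts to a @system one,
    // because @safe makes the stronger promise.
    int delegate() @system t = s; // fine today: subtyping, not polymorphism
    assert(t() == 1);

    // Parametric polymorphism would be a *single* definition typed for
    // every attribute; D instead instantiates `twice` per concrete type.
    assert(twice(s) == 2);
}
```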
That sounds a lot like generics (in the sense of Java or C# generics). At least, it's very similar. I'd be glad if generics were introduced to D. The only language I know of that has templates and generics is C++/CLI. The main practical advantage of generics over templates is that if your generic construct compiles, it'll be type-correct for any arguments a user would supply. Unlike templates, generics never fail on instantiation.
>In this case, a term would depend on an attribute.
Attribute polymorphism, as I understand you, is that you have variables for attributes, so that for every attribute there's a separate "type" of variable, like a `@safe`-ty `a`, such that `a` can be applied as an attribute to stuff that can carry one.
Giving D any kind of polymorphism (half-baked `inout` aside) would be a major language change.
If you don't go the `const` route, you have to deal with assignments to the parameter before it's called. You have to disallow assignments that, looking at the types, are a 1-to-1 assignment. IMO, going via `const` is far more intuitive.
It's a bad, non-orthogonal solution building on a compiler bug.
> In fact, "not having to add additional syntax" was never the motivation for the proposal. Not having to introduce attributes specific to higher-order functions was.
It adds higher-order specific rules to existing attributes without a good reason, which is a lot worse.
I guess removing higher-order functions as a road-bump when it comes to attributes is a good reason. It's adding higher-order specific rules vs. adding another higher-order specific something.
...
It does not have to be higher-order specific at all. Might as well fix `inout` at the same time.
As I understand you, `inout` is a fixed-name qualifier variable, and your sense is that it's broken in D because one can only have one of them. I guess that was a design choice. I cannot estimate how conscious that decision was. Probably no one had a use case in mind that really required more than one qualifier variable, so `inout` sufficed.
Could also be that you mean the fact that one cannot have `Array!(inout int) f(Array!(inout int))`. Unfortunately, `inout` is weird in many ways.
To add insult to injury, the first example that's shown in the DIP as motivation abuses an existing type system hole.
I disagree that it is a hole in the type system.
You are wrong, and I am not sure how to make that point to you. (When I tried last time, you just claimed that some other well-documented intentionally-designed feature, like attribute transitivity, is actually a bug.)
> When having `qual₁(R delegate(Ps) qual₂)` where `qual₁` and `qual₂` are type qualifiers (`const`, `immutable`, etc.), it is practically most useful if `qual₁` only applies to the function pointer and (the outermost layer of) the context pointer, while `qual₂` refers to the property of the context itself.
That allows building a gadget to completely bypass transitivity of qualifiers, including `immutable` and `shared`.
I had a look at issue 1983 again where (I guess) the source of disagreement is how delegates should be viewed theoretically. If I understand you correctly, you say delegates cannot possibly be defined differently than having their contexts be literally part of them. I tried to explore definitions in which the context is associated with but not literally part of the delegate.
...
I get that, but it is impossible because you can use delegate contexts as arbitrary storage:
    int x;
    auto dg = (int* update) {
        if (update) x = *update;
        return x;
    };
If you can have "associated" delegate contexts, you can have "associated" struct fields. But we don't have those.
Associated is a concept in one's head, not a definition by the language. It is what Jonathan Davis described in this article as "the object's state could actually live outside of the object itself and be freely mutated even though the function is `const`".
Qualifiers and attributes have an idea behind them, but when it comes to practical use, there's often a compromise. Set `debug` blocks aside, set slices' `capacity` being marked `pure` aside: the fact that allocating objects (including arrays) is `pure` is a deliberate and explicitly stated decision that could easily be argued against. If `f(n)` is a `pure` operation (where `n` is a mere `int`, just to be clear), how could it happen that if I do it twice, it succeeds once and fails the second time? But allocating a large amount of memory can amount to exactly that. There are deliberate boundaries to any concept. I have no idea of the state of `__metadata`, but it would be another one. `__metadata` would be to `const` what `@trusted` is to `@safe`.
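The allocation point can be made concrete in a few lines; this is a sketch of my own (the name `makeBuffer` is made up, not from the DIP):

```d
// Allocation is deliberately allowed in `pure` code, even though an
// out-of-memory condition can make two identical calls behave differently.
int[] makeBuffer(int n) pure @safe
{
    return new int[](n); // `new` counts as pure in D
}

void main() @safe
{
    auto a = makeBuffer(4);
    auto b = makeBuffer(4);
    assert(a.length == 4 && b.length == 4);
    assert(a !is b); // same arguments, two distinct results: "pure" with limits
}
```

On an out-of-memory condition the second call would throw while the first succeeded, with identical arguments, which is exactly the boundary being described.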
Assuming the concept has merit, it would still be bad design to arbitrarily tie it to delegate contexts.
> My goal was to find a theoretic foundation that is practically useful and doesn't defy expectations. For if a closure mutates a captured variable, one can't assign that closure to a `const` variable; notably, you cannot bind it to a functional's `const` parameter, which, I guess, does defy expectations greatly.
...
You can store it in a `const` variable, but you can't call it, much like you can't call a mutable method on a `const` object.
Completely surprising behavior and a breaking change.
> > Trying to draw a comparison with it, I found out today that slices' `capacity` is `pure`, and also that it's a bug admitted in `object.d` ("This is a lie. [It] is neither `nothrow` nor `pure`, but this lie is necessary for now to prevent breaking code.")
It's completely unsound, e.g., it allows creating race conditions in `@safe` code.
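For what it's worth, the state-dependence of `capacity` is observable even single-threaded. A sketch (the concrete capacity values depend on the GC, so the assertion only claims what must hold):

```d
void main()
{
    int[] a = [1, 2, 3];
    int[] b = a;          // b aliases the same memory block
    auto c1 = b.capacity; // `capacity` is typed pure in object.d
    a ~= 4;               // mutates the allocator's metadata for the block
    auto c2 = b.capacity; // same slice, same "pure" call, different answer
    // If the append extended the block in place, b no longer ends at the
    // block's used length, so its capacity drops to 0; if the append
    // reallocated instead, b's capacity is unchanged.
    assert(c2 == 0 || c2 == c1);
}
```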
Maybe I'm just too uncreative or too dumb to come up with one myself. I once ran into something like that trying out `std.parallelism.parallel` and how much it could gain me.
std.parallelism.parallel cannot be annotated @safe or @trusted.
>It's years ago and I cannot remember a lot. I figured it wasn't applicable in my case. The
I'd really appreciate an example from your side.
...
E.g., this:
    import std.concurrency;

    void main()
    {
        int x;
        // this conversion should not go through
        shared(int delegate(int*)) dg = (int* update) {
            if (update) x = *update;
            return x;
        };
        spawn((typeof(dg) dg) {
            int y = 3;
            dg(&y); // this should not be callable
        }, dg);
        import std.stdio;
        writeln(x);
    }
What if `shared` on the outer of `dg` wouldn't matter, but it being missing on the context annotation does? Then `dg(&y)` wouldn't be the problem, but `spawn` just cannot take delegates with mutable non-`shared` contexts.
I get that it is theoretically sound to extend `shared` of `shared(int delegate(int*))` to make it identical to `shared(int delegate(int*) shared)`. The same way it was theoretically sound in C++11 to define that `constexpr` member functions are also `const` member functions. (They removed that rule in C++14, which was a breaking change.)
To stick to D, it sounds as impractical as only allowing `const` on variables of types that only have `const` member functions.
As I explained above, qualifiers and attributes have practical limits and I do think this is one. It's not like we cannot express a shared context on a delegate type.
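A small sketch of that last point, i.e. that today's syntax can already spell out a `shared` context on the delegate type itself (the struct `S` is a made-up example):

```d
struct S
{
    int get() shared { return 42; } // method callable on shared instances
}

void main()
{
    shared S s;
    auto dg = &s.get; // the context (the S instance) is shared
    static assert(is(typeof(dg) == int delegate() shared));
    assert(dg() == 42);
    // By contrast, `shared(int delegate())` makes the *variable holding*
    // the delegate shared, saying nothing about its context.
}
```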
> > > > By the changes proposed by this DIP, `compose` is `pure`. However, all delegates you pass to it lose their attribute information because you could assign `f` or `g` in `compose`, no problem.
But that's a terrible reason not to be able to annotate them `const`. `const` means "this won't change"; it does not mean "if you compose this, it won't be recognized as `pure`", and there is no clear way to get from one to the other. It's a textbook example of a messy non-orthogonal design that does not make any sense upon closer inspection.
Maybe use `in` (i.e. `const scope`) then? It clearly signifies: this is to read information from, not to assign to it, nor to assign it to a global, not even to return it in any fashion.
...
It's not what you need. Reassigning is fine, it just has to be something with compatible attributes.
> > > But as you don't intend to mutate `f` or `g` in it, you could get the idea of making them `const` like this:
Yes, let's assume that was my intention.
> C delegate(A) compose(A, B, C)(const C delegate(B) f, const B delegate(A) g) pure
> {
>     return a => f(g(a));
> }
Then, by the proposed changes, only `pure` arguments lead to a `pure` call expression.
Which was my point. This is indefensible.
It suffices to write this and one `@safe` unit test: the compile error will tell you there's a problem. I can add to the Error Messages section that, in this case, the error message should hint that the `const` might be used improperly.
...
The error message would have to say it was designed improperly.
> > > However, `compose` is a good example why this is not an issue: It is already a template. Why not go the full route and make the `delegate` part of the template type arguments like this:
> > >
> > >     auto compose(F : C delegate(B), G : B delegate(A), A, B, C)(F f, G g) pure
> > >     {
> > >         return delegate C(A arg) => f(g(arg));
> > >     }
The fact that there is some ugly workaround for my illustrative example that also defeats the point of your DIP does not eliminate the problem with the DIP.
This isn't an ugly workaround,
ugly, check, workaround, check.
Ugly is opinion. On that basis, almost every template is a workaround for lack of this or that kind of polymorphism.
It's only ugly at first glance. Apart from the `delegate C` part, it contains the bare minimum to express itself. One could leave out the `: Y delegate(X)` parts, too.
I like to have expectations spelled out in code. That's a personal preference, I guess.
but merely an attempt to stick to the example. Simply omitting the specialization syntax isn't possible. `return a => f(g(a));` doesn't compile; you need the `(A a)` part, and for that, you need `A`. You can get it alternatively with `Parameters!f`; but `auto compose(F, G)(F f, G g)` with `return a => f(g(a));` doesn't work.
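The `Parameters!f` route mentioned above can be sketched as follows; this is my own minimal version, not something from the DIP:

```d
import std.traits : Parameters;

// The lambda's parameter type has to come from somewhere; with plain type
// parameters F and G, std.traits.Parameters recovers it from G.
auto compose(F, G)(F f, G g)
{
    return (Parameters!G a) => f(g(a));
}

void main() pure @safe
{
    auto inc = (int x) => x + 1;
    auto dbl = (int x) => x * 2;
    auto h = compose(inc, dbl);
    assert(h(3) == 7); // inc(dbl(3))
}
```

Since `compose` stays a template, all attributes (`pure`, `@safe`, etc.) are still inferred per instantiation, which is the point being debated.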
...
None of this matters. You "solved" the problem by removing the need for attribute polymorphism using automated code duplication.
Of course. Unless you have attribute polymorphism, you need templates. If you don't use them, in the current state you can have all the attributes you want on your input delegates, but you have to specify one fixed set of attributes on the return type. So you need attribute polymorphism and type polymorphism, because without them together, you still need templates. And then you can just use templates.
Practically speaking, if your types are concrete so you don't need template type parameters, it's very likely that the attributes are concrete, too, removing the need for attribute polymorphism.
The requirement of being compositional could be unwarranted.
Your reinterpretation of what delegate qualifiers mean would need a DIP in its own right and it would hopefully be rejected.
I'm not sure it's a *re-*interpretation.
It defies attribute transitivity, which is a stated design goal.
Depends on how you interpret transitivity. Being practical is also a design goal. In contrast to e.g. C++, D doesn't "trust the programmer". I'm not advocating a "trust the programmer" solution (at least I'm convinced I'm not). In my estimation, I advocate a practical solution that minimizes language change.
> > As factually the compiler defines the language in places, you're probably right about the DIP part.
Unfortunately, the compiler has bugs. One can't take its behavior as holy gospel that just needs to be interpreted correctly.
I don't. And I was wrong about the DIP part. It's the other way around because it's a breaking change.