October 01, 2020
On Thursday, 1 October 2020 at 12:47:08 UTC, Stefan Koch wrote:
> It's an argument.
> But I would like to argue my actual use-case first.

Certainly, I was not discouraging your efforts. I was simply trying to point out that it might be easier to convince sceptics if you can argue that your concept is a cog in the existing machinery, or that it can be seen as syntactic sugar over desired cogs in a future revision of the language. You don't want sceptics to see your concept as sitting out on a limb.


> It goes toward making a very common usage pattern easier (at least in all the big codebases I looked at, it is common)

Yes, this is a good argument. Strengthening existing community practice. Making learning/reading easier.


> I also don't see D coming to a slim core. And frankly I don't see why it needs to.
> D does not cater to purists, it caters to people who value choice and creativity.
> At least as far as I see it.
> At least that is who I am.

I think you are right. Powerful features like the compiles-trait could make it virtually impossible to streamline language concepts in a non-breaking fashion as libraries might start to break when corner cases are removed.

Also, there are recent additions that support your view. E.g. the addition of shared semantics, which basically is just boxing/unboxing, should have been possible as a library construct. Yet Manu had found it difficult to do as a pure library construct and wanted to see it implemented sooner rather than later, so D got another "limb feature". But there was little resistance to this feature because "shared" was already perceived by the community as being a language feature (although I would argue that it wasn't; it was more of a social construct). If a slim language were a goal, then it would have been done as a library feature and "shared" would just have been syntactic sugar for that library feature.

Another vector you might use to argue for your feature is that reification/mangling makes future language evolution more difficult than your solution would. At least, that is my understanding. Maybe I am wrong? I could be. But you might find some argument you can use there if your concept has a higher abstraction level than the alternatives, which is generally desirable in a language as it is better for modelling. And D is marketed as being good for modelling, so that would be an argument in favour of your idea.

If people argue that your feature could be done using the existing language/reification, then you should be able to argue that your feature is much-needed syntactic sugar over such constructs. Basically, that you simply are creating an abstraction over existing features and practice that improves the usability of the language.

Compiler performance issues should be argued as being optimizations. Abstractions often make optimizations easier to implement as less static analysis is needed (the search space is smaller). So even though you might see your idea as being a compiler addition, you should be able to argue that it isn't really, but that it would be beneficial to implement it as such.

Anyway, I am interested to see where you go with this. :-)





October 01, 2020
On 10/1/20 9:53 AM, H. S. Teoh wrote:
> On Thu, Oct 01, 2020 at 12:27:23PM +0000, Stefan Koch via Digitalmars-d wrote:
> [...]
>> type functions do a Type -> TypeObject conversion when you pass the
>> types in.
>> inside the type function you get a "type-like" interface.
>> But it has lost everything that made it a type.
>> It has sizeof, alignof, stringof, tupleof, you can use __traits on it.
>> But it cannot be used as a type anymore.
>> Not inside a type-function anyway.
>>
>> If you return a type or a type tuple from a type function, it becomes
>> a regular type/type tuple again.
> 
> Isn't this just the same thing as Andrei's reification / dereification,
> except redressed in terms of aliases?

The difference I see is that reification/dereification has to rebuild everything that is already in the compiler, but at runtime. And it has to make sure the functionality is identical. You are using the compiler to build another type introspection system, but just so you can do CTFE based things.

It reminds me of "modern" web applications which take over browser functionality (page navigation, etc.) and reimplement it in JavaScript.

If we can't get a system that allows dereification DURING function execution, then I think type functions are a much better solution. There's no reason to reimplement what the compiler already does. Type functions are cleaner, clearer, and much more efficient.
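To make the contrast concrete, here is the same small job written with today's recursive-template idiom and, below it, the rough shape a type function could take. The type-function part is speculative on my end; the syntax is still Stefan's proposal, so treat it as pseudocode.

// Today: an eponymous recursive template computing the largest .sizeof
// among a list of types.
template maxSize(Ts...)
{
    static if (Ts.length == 0)
        enum size_t maxSize = 0;
    else static if (Ts[0].sizeof > maxSize!(Ts[1 .. $]))
        enum size_t maxSize = Ts[0].sizeof;
    else
        enum size_t maxSize = maxSize!(Ts[1 .. $]);
}

static assert(maxSize!(byte, int, long) == 8);

// Hypothetical type-function version (speculative syntax, does not compile
// today): the types arrive as ordinary CTFE values, so a plain loop does the
// work and nothing needs to be re-instantiated per step.
// size_t maxSize(alias[] types ...)
// {
//     size_t result;
//     foreach (t; types)
//         if (t.sizeof > result)
//             result = t.sizeof;
//     return result;
// }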

-Steve
October 01, 2020
On 10/1/20 12:12 PM, Steven Schveighoffer wrote:
> 
> The difference I see is that reification/dereification has to rebuild everything that is already in the compiler, but at runtime. And it has to make sure the functionality is identical. You are using the compiler to build another type introspection system, but just so you can do CTFE based things.
> 
> It reminds me of "modern" web applications which take over browser functionality (page navigation, etc.) and reimplement it in JavaScript.
> 
> If we can't get a system that allows dereification DURING function execution, then I think type functions are a much better solution. There's no reason to reimplement what the compiler already does. Type functions are cleaner, clearer, and much more efficient.

Hmmm... interesting. It's also been my accumulated perception that much metaprogramming is in fact "doing things in the target language that normally only the compiler writer is empowered to do". For example, the implementation of switch on strings has been entirely shipped to druntime (https://github.com/dlang/druntime/blob/master/src/core/internal/switch_.d), which does everything a compiler normally does with optimizing a switch: looks at the number and content of cases, defines distinct strategies depending on them, etc. In smaller form, __cmp is the same story (it optimizes comparison depending on the types involved), and there are many other cases. My take is that that's a good thing, because I don't need to be in the compiler to do powerful things.
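To give a feel for what I mean, here is a hand-rolled toy in the same spirit (mine, not the actual druntime code behind the link): the case strings are compile-time arguments, so library code can inspect them and pick a strategy, e.g. a linear scan for a handful of cases and a presorted binary search otherwise.

// Toy analogue only: the real __switch returns an index that the compiler
// maps back to case labels; this sketch just reports whether the needle
// matches any of the compile-time case strings.
bool toyStringSwitch(cases...)(scope const(char)[] needle)
{
    static if (cases.length <= 4)
    {
        // Few cases: a plain linear scan is good enough.
        static foreach (s; cases)
            if (needle == s)
                return true;
        return false;
    }
    else
    {
        // Many cases: sort them once during compilation, binary-search at run time.
        static immutable sortedCases = {
            string[] tmp = [cases];
            foreach (i; 1 .. tmp.length)           // insertion sort, runs in CTFE
                for (auto j = i; j > 0 && tmp[j] < tmp[j - 1]; --j)
                {
                    auto t = tmp[j];
                    tmp[j] = tmp[j - 1];
                    tmp[j - 1] = t;
                }
            return tmp;
        }();
        size_t lo = 0, hi = sortedCases.length;
        while (lo < hi)
        {
            const mid = (lo + hi) / 2;
            if (sortedCases[mid] == needle) return true;
            if (sortedCases[mid] < needle)  lo = mid + 1;
            else                            hi = mid;
        }
        return false;
    }
}

unittest
{
    alias verbs = toyStringSwitch!("connect", "delete", "get", "head", "post", "put");
    assert( verbs("put"));
    assert(!verbs("patch"));
}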

Of course std.traits would be chock full of this kind of stuff, as no doubt most artifacts in there are present in the compiler internals, or easily extractable therefrom. In that respect, __traits is an exercise in "what is the minimum set of primitives the compiler needs to expose, in order to allow regular D code to do everything else that the compiler normally does?" It's a bottleneck design (complex entities communicating through a narrow interface). Probably not very well done, because __traits keeps on getting a variety of things that are difficult to decide upon. I have no good litmus test for whether a proposed __trait is deserving, the right thing, the most enabling, etc. On the other hand, std.traits did allow us to do things that would be quite onerous to put in the language definition and/or compiler implementation. On the whole this bottleneck design has been quite enabling.
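A toy example of that bottleneck at work (mine, not lifted from std.traits): the compiler exposes a raw primitive such as __traits(allMembers, ...), and ordinary library-side CTFE code shapes it into the artifact people actually want.

// The primitive only hands back names; library code turns that into
// something task-shaped, here the names of a struct's member functions.
string[] memberFunctionNames(T)()
{
    string[] result;
    static foreach (name; __traits(allMembers, T))
        static if (is(typeof(__traits(getMember, T, name)) == function))
            result ~= name;
    return result;
}

struct S
{
    int x;
    void foo() {}
    int bar(int y) { return y; }
}

static assert(memberFunctionNames!S() == ["foo", "bar"]);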

The difficulty/challenge of crossing the barrier from types to values and back is good to appreciate when doing things like mixed compile-time/run-time uses and integration with type-oriented facilities (such as those in std.traits itself).
October 01, 2020
On Thursday, 1 October 2020 at 16:49:20 UTC, Andrei Alexandrescu wrote:
> On 10/1/20 12:12 PM, Steven Schveighoffer wrote:
>> 
>> The difference I see is that reification/dereification has to rebuild everything that is already in the compiler, but at runtime. And it has to make sure the functionality is identical. You are using the compiler to build another type introspection system, but just so you can do CTFE based things.
>> 
>> It reminds me of "modern" web applications which take over browser functionality (page navigation, etc.) and reimplement it in JavaScript.
>> 
>> If we can't get a system that allows dereification DURING function execution, then I think type functions are a much better solution. There's no reason to reimplement what the compiler already does. Type functions are cleaner, clearer, and much more efficient.
>
> Hmmm... interesting. It's also been my accumulated perception that much metaprogramming is in fact "doing things in the target language that normally only the compiler writer is empowered to do". For example, the implementation of switch on strings has been entirely shipped to druntime (https://github.com/dlang/druntime/blob/master/src/core/internal/switch_.d), which does everything a compiler normally does with optimizing a switch: looks at the number and content of cases, defines distinct strategies depending on them, etc. In smaller form, __cmp is the same story (it optimizes comparison depending on the types involved), and there are many other cases. My take is that that's a good thing, because I don't need to be in the compiler to do powerful things.
>
> Of course std.traits would be chock full of this kind of stuff, as no doubt most artifacts in there are present in the compiler internals, or easily extractable therefrom. In that respect, __traits is an exercise in "what is the minimum set of primitives the compiler needs to expose, in order to allow regular D code to do everything else that the compiler normally does?" It's a bottleneck design (complex entities communicating through a narrow interface). Probably not very well done, because __traits keeps on getting a variety of things that are difficult to decide upon. I have no good litmus test for whether a proposed __trait is deserving, the right thing, the most enabling, etc. On the other hand, std.traits did allow us to do things that would be quite onerous to put in the language definition and/or compiler implementation. On the whole this bottleneck design has been quite enabling.
>
> The difficulty/challenge of crossing the barrier from types to values and back is good to appreciate when doing things like mixed compile-time/run-time uses and integration with type-oriented facilities (such as those in std.traits itself).

Please do try to express yourself more concisely.
And closer to the topic.
I cannot make heads or tails of your writing.

This is not about std.traits.
This is more about making std.traits superfluous.
Integrating type-based programming so well with the language that there is almost no barrier when crossing to type-object land.
The serialized "reified" type is given behavior as close as possible to that of a real type.
Which simplifies the interface, and keeps it intuitive.

Everything you can do with a type, the compiler can already do.
It has to be able to do it.
So at CTFE I merely tap into what the compiler already does.
There's very little code duplication, and even the bit there is can be improved incrementally, making the compiler more modular.
October 01, 2020
On 10/1/20 12:49 PM, Andrei Alexandrescu wrote:
> On 10/1/20 12:12 PM, Steven Schveighoffer wrote:
>>
>> The difference I see is that reification/dereification has to rebuild everything that is already in the compiler, but at runtime. And it has to make sure the functionality is identical. You are using the compiler to build another type introspection system, but just so you can do CTFE based things.
>>
>> It reminds me of "modern" web applications which take over browser functionality (page navigation, etc.) and reimplement it in JavaScript.
>>
>> If we can't get a system that allows dereification DURING function execution, then I think type functions are a much better solution. There's no reason to reimplement what the compiler already does. Type functions are cleaner, clearer, and much more efficient.
> 
> Hmmm... interesting. It's also been my accumulated perception that much metaprogramming is in fact "doing things in the target language that normally only the compiler writer is empowered to do". For example, the implementation of switch on strings has been entirely shipped to druntime (https://github.com/dlang/druntime/blob/master/src/core/internal/switch_.d), 

This is not exactly true -- the compiler still only allows strings in switch statements (not arbitrary types), and it also generates case labels based on how it expects the runtime to return the correct value. The runtime also expects the compiler to pass the strings in a certain order.

What *is* true is that the runtime can make different decisions on what a match is. For example, the template could potentially do case-insensitive comparison, or use a different mechanism to search based on the incoming parameters (which is what it does now). But I don't know whether this is also used in CTFE.

However, I will note that before it was this template call, the compiler still relied on the runtime to do what it currently does. It just passed the strings as a runtime parameter instead of a compile time parameter. (https://github.com/dlang/druntime/blob/c9dbcbb8cab8a480b143c1f8a78ba5193d3d2c40/src/rt/switch_.d#L30)

I have nothing against hoisting algorithmic tasks out of the compiler, and using the runtime to determine such things. In fact, I prefer it -- the runtime is so much more approachable as a developer than the compiler. But when the compiler MUST implement is(T : U), and that expression is well defined, I don't see why we have to reimplement that feature in the runtime as well. At least without a compelling example of "the compiler can't do this as well".
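For anyone following along, the expression in question is just the ordinary is-expression; the compiler answers the implicit-conversion question directly (stock examples):

// is(T : U) asks: does T implicitly convert to U?
static assert( is(int : long));      // widening integer conversion
static assert( is(byte : int));
static assert(!is(long : int));      // would narrow, so not implicit

class Base {}
class Derived : Base {}
static assert( is(Derived : Base));  // derived-to-base conversions count too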

-Steve
October 01, 2020
On 10/1/20 1:26 PM, Steven Schveighoffer wrote:
> But when the compiler MUST implement is(T : U), and that expression is well defined, I don't see why we have to reimplement that feature in the runtime as well. At least without a compelling example of "the compiler can't do this as well".

How do you implement Variant.get(T) without reifying is(T : U)?
October 01, 2020
On Thursday, 1 October 2020 at 17:35:51 UTC, Andrei Alexandrescu wrote:
> On 10/1/20 1:26 PM, Steven Schveighoffer wrote:
>> But when the compiler MUST implement is(T : U), and that expression is well defined, I don't see why we have to reimplement that feature in the runtime as well. At least without a compelling example of "the compiler can't do this as well".
>
> How do you implement Variant.get(T) without reifying is(T : U)?

What does it do?
Is Variant.get returning type T?
Then I would assume you don't.
You use a template + typeid; that's what they are for!

Variant.get is not doing any type computation that you would need is(T : U) for, is it?
Where would I use it?
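For concreteness, a minimal sketch of the template + typeid route I mean (mine, not std.variant's code). Note that it only handles an exact type match; that limitation is where the conversion question comes in.

// Minimal sketch: the box remembers the stored type's TypeInfo, and get!T
// succeeds only when T is exactly the stored type.
struct ExactBox
{
    const(TypeInfo) storedType;
    void*           payload;

    this(U)(U value)
    {
        auto p = new U;
        *p = value;
        storedType = typeid(U);
        payload = p;
    }

    T get(T)()
    {
        assert(typeid(T) == storedType, "stored type is not " ~ T.stringof);
        return *cast(T*) payload;
    }
}

unittest
{
    auto b = ExactBox(42);
    assert(b.get!int == 42);
    // b.get!long would trip the assert: exact match only, no implicit conversions.
}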
October 01, 2020
On 10/1/20 1:45 PM, Stefan Koch wrote:
> On Thursday, 1 October 2020 at 17:35:51 UTC, Andrei Alexandrescu wrote:
>> On 10/1/20 1:26 PM, Steven Schveighoffer wrote:
>>> But when the compiler MUST implement is(T : U), and that expression is well defined, I don't see why we have to reimplement that feature in the runtime as well. At least without a compelling example of "the compiler can't do this as well".
>>
>> How do you implement Variant.get(T) without reifying is(T : U)?
> 
> What does it do?
> Is Variant.get returning type T?

Yes: https://dlang.org/phobos/std_variant.html#.VariantN.get

> Then I would assume you don't.
> You use a template + typeid; that's what they are for!
> 
> Variant.get is not doing any type computation that you would need is(T : U) for, is it?
> Where would I use it?

Everywhere Variant.get is called. The decision is(T : U) needs to be carried, except that one of the types is available only at runtime.
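To make the "carried" part concrete, here is a sketch of my own (not std.variant's actual machinery): the is(U : T) answers get baked into a handler while U is still known statically, and at run time only TypeInfo comparisons remain. Note that the candidate target list has to be hard-coded here, which is itself part of the difficulty.

import std.meta : AliasSeq;

// Hypothetical, hard-coded list of conversion targets to probe. A real design
// needs "every T with is(U : T)", which is exactly what can no longer be
// enumerated once U has been erased.
alias ConversionTargets = AliasSeq!(int, long, double);

struct ConvBox
{
    void* payload;
    // Instantiated while U was still a compile-time entity; it carries the
    // is(U : T) answers into run time as plain TypeInfo comparisons.
    bool function(const TypeInfo wanted, void* from, void* to) tryConvert;

    this(U)(U value)
    {
        auto p = new U;
        *p = value;
        payload = p;
        tryConvert = function(const TypeInfo wanted, void* from, void* to)
        {
            static foreach (T; ConversionTargets)
                static if (is(U : T))               // answered at instantiation time
                    if (wanted == typeid(T))
                    {
                        *cast(T*) to = *cast(U*) from;  // the implicit conversion itself
                        return true;
                    }
            return false;
        };
    }

    T get(T)()
    {
        T result;
        if (!tryConvert(typeid(T), payload, &result))
            throw new Exception("cannot get the stored value as " ~ T.stringof);
        return result;
    }
}

unittest
{
    auto b = ConvBox(42);          // stored as an int
    assert(b.get!long == 42);      // is(int : long) was recorded in the handler
    assert(b.get!double == 42.0);
}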
October 01, 2020
On Thursday, 1 October 2020 at 17:50:39 UTC, Andrei Alexandrescu wrote:
> On 10/1/20 1:45 PM, Stefan Koch wrote:
>> On Thursday, 1 October 2020 at 17:35:51 UTC, Andrei Alexandrescu wrote:
>>> On 10/1/20 1:26 PM, Steven Schveighoffer wrote:
>>>> [...]
>>>
>>> How do you implement Variant.get(T) without reifying is(T : U)?
>> 
>> What does it do?
>> Is Variant.get returning type T?
>
> Yes: https://dlang.org/phobos/std_variant.html#.VariantN.get
>
>> Then I would assume you don't.
>> You use a template + typeid; that's what they are for!
>> 
>> Variant.get is not doing any type computation that you would need is(T : U) for, is it?
>> Where would I use it?
>
> Everywhere Variant.get is called. The decision is(T : U) needs to be carried, except that one of the types is available only at runtime.

Please show me in the code where is(T : U) is used.
I don't find it in std.variant.
October 01, 2020
On 10/1/20 1:35 PM, Andrei Alexandrescu wrote:
> On 10/1/20 1:26 PM, Steven Schveighoffer wrote:
>> But when the compiler MUST implement is(T : U), and that expression is well defined, I don't see why we have to reimplement that feature in the runtime as well. At least without a compelling example of "the compiler can't do this as well".
> 
> How do you implement Variant.get(T) without reifying is(T : U)?

How do you implement Variant.get(T) *with* reifying is(U : T)?

step 1: determine using reified constructs that U is convertible to T
step 2: ?????
step 3: Return a T from a U!

BTW, Variant already does a *limited* form of this. Reification doesn't change what needs to happen.

I don't see why Variant's needs have to dictate how you need to reason about types at compile time in CTFE.

-Steve