September 10, 2020
On Thursday, 10 September 2020 at 18:05:23 UTC, Paul Backus wrote:
> [snip]
>
> One thing built-in .sizeof does that no user-code version can do is "freeze" the size of a type to prevent additional members from being added. For example, if you try to compile this code:
>
> struct S
> {
>     int a;
>     enum size = S.sizeof;
>     mixin("int b;");
> }
>
> ...you'll get an error:
>
> onlineapp.d-mixin-5(5): Error: variable onlineapp.S.b cannot be further field because it will change the determined S size

I wouldn't want to rely on something like that personally. I'd rather guarantee that the only members are those in a specific list.
September 10, 2020
On 9/10/20 1:05 PM, Meta wrote:

> 
> I'm curious, will this also work?
> 
> size_t sizeOf(alias t)
> {
>      size_t result;
>      /* static? */ if (__traits(isScalar, t))
>      {
>          static if (is(t == int))
>              result += 4; // int.sizeof is 4 bytes
>          else static if (...)
>          ...
>      }
>      else static if (is(t == A[n], A, size_t n))
>          result += A.sizeOf * n;
>      else static if (...)
>          ...
>      else
>          /* static? */ foreach (field; t.tupleof)
>              result += field.sizeOf;
> 
>      return result;
> }
> 
> Basically, is the implementation at a level where sizeOf can be turtles all the way down, with minimal or no reliance on __traits?

Doesn't the compiler have to do this anyway so it can define the memory layout? I'm curious what the benefit of doing this in a library would be.

-Steve
September 10, 2020
On Thursday, 10 September 2020 at 17:05:02 UTC, Meta wrote:
>     else static if (is(t == A[n], A, size_t n))
>         result += A.sizeOf * n
>
That `is` expression introduces two symbols, `n` and `A`,
based on whether `t` is a static array,
which is something you cannot do in a type function.
You cannot change the form of the function body.
Type functions are not polymorphic; they cannot change shape.

Besides, is there ever a case in which `A.sizeOf * n`
would not be the same as `t.sizeof`?
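For the built-in property, at least, the two always agree for static arrays; a minimal check:

```d
// For any static array type A[n], the built-in .sizeof
// equals A.sizeof * n.
static assert(int[10].sizeof == int.sizeof * 10);
static assert(double[3].sizeof == double.sizeof * 3);

struct Pair { int x; int y; }
static assert(Pair[4].sizeof == Pair.sizeof * 4);
```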

> Basically, is the implementation at a level where sizeOf can be turtles all the way down, with minimal or no reliance on __traits?

I'd guess you would have a higher reliance on __traits, simply because __traits have a return value and can be pure, as opposed to pattern-matching `is` expressions, which can change things outside of their context.
September 10, 2020
On Thursday, 10 September 2020 at 18:44:46 UTC, Meta wrote:
> Really though, is it even necessary to be able to "freeze" a type like this when evaluating type functions?

Type functions cannot change the types they are working on,
so from that point of view you don't have to enforce a determinate size.
But at the same time, I expect people to only feed fully determined things into type functions.
Everything else is mighty confusing.
September 10, 2020
On Thursday, 10 September 2020 at 18:44:46 UTC, Meta wrote:
>
> It looks like it depends on the order of fields:
>
> struct S
> {
>     int a;
>     mixin("int b;"); //No error
>     enum size = S.sizeof;
> }
>
> Ideally `enum size = S.sizeof` could be delayed until after all mixins are "evaluated" (is that during one of the semantic phases? I don't know much about how DMD actually works), but I imagine that would take some major re-architecting. Really though, is it even necessary to be able to "freeze" a type like this when evaluating type functions?

Sometimes there is no way to avoid "evaluating" one part of the program before another. For example:

struct S
{
    int a;
    static if (S.sizeof < 8) {
        int b;
    }
}

Because of the way static if works, the compiler *must* do semantic analysis on S.sizeof before it does semantic analysis on the declaration of b. There is no way to avoid it by re-ordering or deferring the analysis of certain declarations.
September 10, 2020
On Thursday, 10 September 2020 at 16:52:12 UTC, H. S. Teoh wrote:
> On Thu, Sep 10, 2020 at 04:28:46PM +0000, Bruce Carneal via Digitalmars-d wrote:
>> On Thursday, 10 September 2020 at 15:14:00 UTC, Per Nordlöw wrote:
>> > On Thursday, 10 September 2020 at 09:43:34 UTC, Stefan Koch wrote:
>> > > limited UFCS for type functions works again.
>> > 
>> > Having type functions with UFCS is a significant improvement to the developer experience as well, compared to having to use templates.
>> 
>> Absolutely.  Functions are more readable, compose more readily, are easier to debug (better locality), and tickle fewer compiler problems than templates.  CTFE is a huge win overall, "it just works".
>
> I wouldn't be so sure about that last part. The current CTFE implementation is, shall we say, hackish at best? ...

When I said "it just works" I should have said "it just works as you'd expect any program to work".  The implementation may be lacking but the contract with the programmer is wonderfully straightforward.

> It "works" by operating on AST nodes as if they were values, and is slow, memory inefficient, and when there are problems, it's a nightmare to debug. For small bits of code, it works wonderfully, but ...

Yes.  My understanding is that the current implementation is much less than ideal for both CTFE and templates, but that we may be able to clear away a lot of the template "problem" in one go with type functions.  Crucially, the type function implementation complexity is, reportedly, much, much lower than that seen in other subcomponents.

>
>
>> By contrast, when using D's pattern matching meta programming sub-language, things start out very nicely but <... rapidly degenerate>
>
> To be fair, the problems really only arise with IFTI and a few other isolated places in D's template system.  In other respects, D templates are wonderfully nice to work with.  Definitely a refreshing change from the horror show that is C++ templates.

Yes.  Of course in theory D templates are no more powerful than C++ templates but anyone who has used both understands that simplicity in practical use trumps theoretical equivalence.  As you note, it's not even close.

It seems to me, from forum postings and reports on the (un)maintainability and instability of large template-heavy dlang code bases, that we're approaching the practical limits of our template capability.  At least we're approaching the threshold where heroic efforts may be needed on an ongoing basis.

So, what to do?  We can always add more tooling to try to help the situation: better error reporting, better logging, pattern-resolution dependency-graph visualizers, ...

We can also go the "intrinsics" route: "have something that's too hard to do with templates?  No problem, we'll add an intrinsic!".

We can also go the template library route: "too tough for mere mortals?  No problem, my super-duper layer of template magic will make it all better!".

You'll note that none of the above "solutions" actually reduces complexity; they just try to manage it.  Type functions, on the other hand, look like they would support real-world simplification.  Much of that simplification comes from programmer familiarity and from the ability to "opt in" to pattern/set operations rather than being forced to, awkwardly, opt out.

>
>
>> In a type function world we'll still need templates, we'll just need fewer of them.
>
> Generally, I'm in favor of this. Templates have their place, but in many type manipulation operations, type functions are definitely better than truckloads of recursive template instantiations with their associated memory hoggage and slowdown of compile times.

Yep.



September 10, 2020
On Thu, Sep 10, 2020 at 11:44:30PM +0000, Bruce Carneal via Digitalmars-d wrote:
> On Thursday, 10 September 2020 at 16:52:12 UTC, H. S. Teoh wrote:
> > On Thu, Sep 10, 2020 at 04:28:46PM +0000, Bruce Carneal via Digitalmars-d wrote:
[...]
> > > Absolutely.  Functions are more readable, compose more readily, are easier to debug (better locality), and tickle fewer compiler problems than templates.  CTFE is a huge win overall, "it just works".
> > 
> > I wouldn't be so sure about that last part. The current CTFE implementation is, shall we say, hackish at best? ...
> 
> When I said "it just works" I should have said "it just works as you'd expect any program to work".  The implementation may be lacking but the contract with the programmer is wonderfully straightforward.

If that's what you meant, then I agree.  One of the big wins of CTFE is that it unifies compile-time code and runtime code into a single interface (syntax), rather than require periphrasis in a separate sub-language.  D templates win in this respect in the area of template functions: template parameters are "merely" compile-time parameters, rather than some odd distinct category of things with a different syntax; this has been one of the big factors in the usability of D templates.
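A minimal sketch of that unification: one ordinary function serves both contexts, with no separate compile-time dialect:

```d
// One ordinary function, usable both at compile time (via CTFE)
// and at run time, with identical semantics.
int triple(int x)
{
    return 3 * x;
}

enum atCompileTime = triple(14);   // forced through CTFE
static assert(atCompileTime == 42);

void main()
{
    int atRunTime = triple(14);    // ordinary run-time call
    assert(atRunTime == atCompileTime);
}
```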

However, D templates fail on this point when it comes to non-function templates, e.g. you have to write a recursive template in order to manipulate a list of types, and imperative-style type manipulation is not possible.  This adds friction to usage (e.g., I have to re-think my type sorting algorithm in terms of recursive templates rather than just calling std.algorithm.sort) and induces boilerplate (I can't reuse an existing sorting solution in std.algorithm.sort but have to rewrite essentially the same logic in recursive template style). Ideally, this redundant work should not be necessary; as long as I define an ordering predicate on types, I ought to be able to reuse std.algorithm for sorting or otherwise manipulating types.
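As a small illustration of the recursive style just described (the template name here is invented for illustration): even something as simple as "largest .sizeof in a type list" must be phrased as recursion over head and tail, rather than as a call to an ordinary max/reduce:

```d
// Recursive-template style: no loops, no std.algorithm -- the
// list is consumed one head element at a time.
template maxSize(Ts...)
{
    static if (Ts.length == 0)
        enum size_t maxSize = 0;
    else static if (Ts[0].sizeof > maxSize!(Ts[1 .. $]))
        enum size_t maxSize = Ts[0].sizeof;
    else
        enum size_t maxSize = maxSize!(Ts[1 .. $]);
}

static assert(maxSize!(byte, int, double) == 8);
static assert(maxSize!() == 0);
```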

Seen in this light, type functions are a major step in the right direction.


[...]
> > To be fair, the problems really only arise with IFTI and a few other isolated places in D's template system.  In other respects, D templates are wonderfully nice to work with.  Definitely a refreshing change from the horror show that is C++ templates.
> 
> Yes.  Of course in theory D templates are no more powerful than C++ templates but anyone who has used both understands that simplicity in practical use trumps theoretical equivalence.  As you note, it's not even close.

Theoretical equivalence is really only useful in mathematical proofs; in practice there's a huge difference between languages of equivalent computing power. Lambda calculus can in theory express everything a D program can, but nobody in their right mind would want to write a non-trivial program in lambda calculus. :-D  Hence the term "Turing tarpit".


> It seems to me, from forum postings and reports on the (un)maintainability and instability of large template-heavy dlang code bases, that we're approaching the practical limits of our template capability.  At least we're approaching the threshold where heroic efforts may be needed on an ongoing basis.

It really depends on what you're trying to do.  I'm a pretty heavy template user (bring on those UFCS chains!), but IME it has not been a problem.  The problems really only arise in specific usage patterns, such as excessive use of recursive templates (which is where Stefan's type functions come in), or excessive use of compile-time codegen with CTFE and templates (e.g., std.regex, std.uni tables).

Or unreasonably-long UFCS chains: I have a non-trivial example in one of my projects where almost the entire program logic from processing input to outputting a PNG file exists in one gigantic UFCS chain. :-D  It led to megabyte-long symbols that eventually spurred Rainer to implement the symbol folding that we enjoy today. Eventually, I had to break the chain down into 2-3 pieces just to get it to compile before running out of memory.  :-D

For "normal" template usage, templates really aren't that big of a problem.  Unfortunately, some of the "bad" usage patterns occur in Phobos, so sometimes the unwary can trip up on them, which may be why there's been a string of complaints about templates lately.


> So, what to do?  We can always add more tooling to try and help the situation: better error reporting, better logging, pattern resolution dependency graph visualizers, ...

I say we fix the compiler implementation so that we can use what the language allows us to use. :-)


> We can also go the "intrinsics" route: "have something that's too hard to do with templates?  No problem, we'll add an intrinsic!".

I wouldn't add an intrinsic unless it provides some special functionality not expressible with the normal language. I don't think we have too many of those.

The current problems with templates are really a matter of quality of implementation.  That, and the lack of more suitable ways of doing certain things like type manipulation, so templates get pressed into service where they are perhaps not really the best tool for the job.


> We can also go the template library route: "too tough for mere mortals?  No problem, my super-duper layer of template magic will make it all better!".

std.algorithm anybody? ;-)


> You'll note that not one of the above "solutions" actually reduces complexity, they just try to manage it.  Type functions, on the other hand, look like they would support real world simplification.  Much of that simplification comes from programmer familiarity and from the ability to "opt-in" to pattern/set operations rather than being forced to, awkwardly, opt-out.
[...]

Type functions will definitely be a major step towards unifying the meta language with the regular language: the holy grail of metaprogramming. I doubt we can really get all the way there, but the closer we get, the more powerful D will become, and the easier its metaprogramming features will be to use.  That's worthy of pursuit IMO.


T

-- 
Doubt is a self-fulfilling prophecy.
September 11, 2020
On Thursday, 10 September 2020 at 23:44:30 UTC, Bruce Carneal wrote:
>
> Yes.  Of course in theory D templates are no more powerful than C++ templates but anyone who has used both understands that simplicity in practical use trumps theoretical equivalence.  As you note, it's not even close.
>
> It seems to me from forum postings and reports on the (un)maintainability and instability of large template heavy dlang code bases, that we're approaching the practical limits of our template capability.  At least we're approaching the "heroic efforts may be needed ongoing" threshold.

I think the main difficulty of scaling code bases that rely heavily on templates (either D or C++), as compared to other kinds of generic code like OOP-style polymorphism or traits/typeclasses à la Rust and Haskell, is that templates themselves--not the code they generate when you instantiate them, but the actual *templates*--are essentially dynamically typed. In general, there's no way to catch errors in a template until you "run" it (that is, instantiate it) and see what it does.

What this suggests to me (and this is borne out by my experience) is that writing correct, maintainable template code probably requires the same kind of disciplined approach to testing as writing correct, maintainable code in a dynamic language like Python or Ruby. Don't assume anything works until you can demonstrate it, actively look for ways to make your code fail, etc. If your test suite is shorter than your template code, you're almost certainly not being thorough enough.
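A sketch of the failure mode in question (the function name is invented for illustration): a template body containing an obvious mistake still compiles on its own, because the body is only checked at instantiation:

```d
// The typo below ("lenght") is not diagnosed when this template is
// compiled -- only when someone actually instantiates it.
auto elementCount(T)(T container)
{
    return container.lenght; // typo for "length"
}

// Without a test that instantiates the template, the bug stays
// invisible; instantiation is what finally triggers the error:
static assert(!__traits(compiles, elementCount([1, 2, 3])));
```

A disciplined unittest that calls `elementCount` on a real argument would have surfaced the typo immediately.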
September 11, 2020
On Friday, 11 September 2020 at 01:07:55 UTC, Paul Backus wrote:
> On Thursday, 10 September 2020 at 23:44:30 UTC, Bruce Carneal wrote:
>>
>> Yes.  Of course in theory D templates are no more powerful than C++ templates but anyone who has used both understands that simplicity in practical use trumps theoretical equivalence.  As you note, it's not even close.
>>
>> It seems to me from forum postings and reports on the (un)maintainability and instability of large template heavy dlang code bases, that we're approaching the practical limits of our template capability.  At least we're approaching the "heroic efforts may be needed ongoing" threshold.
>
> I think the main difficulty of scaling code bases that rely heavily on templates (either D or C++), as compared to other kinds of generic code like OOP-style polymorphism or traits/typeclasses à la Rust and Haskell, is that templates themselves--not the code they generate when you instantiate them, but the actual *templates*--are essentially dynamically typed. In general, there's no way to catch errors in a template until you "run" it (that is, instantiate it) and see what it does.

Yes, exactly.
Templates are polymorphic; they can re-shape, and what they do can be determined by the call site.
Therefore they do not even have a meaning before being instantiated.

Type functions, on the other hand, don't suffer from that.
They have a meaning,
whether you call them or not.
September 10, 2020
On Fri, Sep 11, 2020 at 01:07:55AM +0000, Paul Backus via Digitalmars-d wrote: [...]
> I think the main difficulty of scaling code bases that rely heavily on templates (either D or C++), [...], is that templates themselves--not the code they generate when you instantiate them, but the actual *templates*--are essentially dynamically typed. In general, there's no way to catch errors in a template until you "run" it (that is, instantiate it) and see what it does.
[...]

This is why when I write template code, I try to write defensively in a way that makes as few assumptions as possible about the template arguments.  Ideally, every operation you'd do with that type should be tested in the sig constraints.
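A hedged sketch of that defensive style (the function name is invented for illustration): every operation the body performs on the argument is also demanded in the signature constraint, so bad arguments are rejected at the call site instead of erroring deep inside the body:

```d
import std.range.primitives : isInputRange, ElementType;

// The constraint tests everything the body relies on: that R is an
// input range, and that its elements support addition.
auto total(R)(R r)
    if (isInputRange!R &&
        is(typeof(ElementType!R.init + ElementType!R.init)))
{
    ElementType!R acc = 0;
    foreach (e; r)
        acc += e;
    return acc;
}

unittest
{
    assert(total([1, 2, 3]) == 6);
    // A non-range argument fails overload resolution cleanly:
    static assert(!__traits(compiles, total(42)));
}
```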

Even better would be if the compiler enforced this: unless you tested for some operation in the sig constraints, that operation would be deemed illegal.  But in the past Walter & Andrei have shot down Concepts, which is very similar to this idea, so I don't know how likely it is that this will ever make it into D.

Another approach, instead of sig constraints, might be to have typed (or meta-typed) template arguments, a kind of template analogue of static typing, so that arguments are constrained to satisfy certain constraints (i.e., are instances of a meta-type). Though these meta-types are just Concepts redressed, so that's not saying very much.


T

-- 
Talk is cheap. Whining is actually free. -- Lars Wirzenius