February 10, 2007
Bill Baxter wrote:
> Kyle Furlong wrote:
>> Andrei Alexandrescu (See Website For Email) wrote:
> 
>> I understand that you are a successful computer scientist. I accept that you have had success with some books on the subject. I respect that you currently research in the field. None of this allows you the freedoms you have taken in the discussions you have been having.
> 
> Wow, this is sounding sillier and sillier.
> It seems pretty clear to me that the answer is simply that Andrei doesn't really know enough about RoR to give a concrete example of how better metaprogramming would be useful for DeRailed.  He pretty much said as much in the last mail.  But it would be good if he gave some more practical, concrete examples of places where it would help.
> 
> Note that what's going on here is *talk* about features that may or may not get into DMD any time soon.  In fact you could say this whole discussion has been about *preventing* features from getting introduced.  At least in an ad-hoc manner. The meat of this metaprogramming discussion started with Walter saying he was thinking of adding compile time regexps to the language.  Without any discussion about whether that's a good thing or not and what the ramifications are, it's just going to happen, whether it's good for D or not.  So the question becomes what should D look like?  Rather than ad-hoc features, what do we really want D's metaprogramming to look like?
> 
> To me the discussion has all been about figuring out a clear picture of what things *should* look like in the future w.r.t. metaprogramming in order to convince Walter that throwing things in ad-hoc is not the way to go.  Or maybe to find out that what he's thinking of throwing in isn't so ad-hoc after all and actually makes for a nice evolutionary step towards where we want to go.
> 
> As for whether that would help DeRailed, I dunno.  Sounds like kris has a pretty clear idea that reflection would be much more useful to DeRailed.
> 
> As for whether DeRailed will help D, I also don't know.  I kinda wonder though, because if someone wants RoR, why wouldn't they just use RoR? Seems like it's a tough battle to unseat a champ like that.  I would think that D would have a better shot at dominating by providing a great solution to a niche which is currently underserved.  But that's just my opinion.  Also I don't do web development, so that may be another part of it.  But the description given of what Rails does so well, with all kinds of dynamic this and on-the-fly that, really sounds more like what a scripting language is good at than a static compile-time language.  I mean the dominant web languages are Perl, Python, Ruby, Php, and Javascript.  Not a compiled language in the bunch.  There must be a reason for that.  Even Java is interpreted bytecode.
> 
> As for Andrei having Walter's ear.  I think Andrei has Walter's ear mostly because Andrei is interested in the same kinds of things that interest Walter.  I think everyone can tell by now that Walter pretty much works on solving the problems that interest him.   Right now (and pretty much ever since 'static if') the thing that seems to interest him most is metaprogramming.  Hopefully some day he'll get back to being interested in reflection.  But if he's really got the metaprogramming bug, then that may not be until after he's got D's compile time framework to a point where he feels it's "done".  But only Walter knows.
> 
> 
> 
> --bb

Even coming at this debate from the perspective you outlined, Andrei's stance and tone have still been wrong. Instead of moving towards an understanding of what would be best for the people who are actually using D for real code, he disregards experience and advocates the theoretical ideal (from his perspective).

I simply can't understand why anyone, including Mr. Alexandrescu, would disregard the opinion of a man who has been working with D in REAL code for more years than almost anyone here.
February 10, 2007
Bill Baxter wrote:
> As for Andrei having Walter's ear.  I think Andrei has Walter's ear mostly because Andrei is interested in the same kinds of things that interest Walter.  I think everyone can tell by now that Walter pretty much works on solving the problems that interest him.   Right now (and pretty much ever since 'static if') the thing that seems to interest him most is metaprogramming.  Hopefully some day he'll get back to being interested in reflection.  But if he's really got the metaprogramming bug, then that may not be until after he's got D's compile time framework to a point where he feels it's "done".  But only Walter knows.

There is a deeper connection between runtime reflection and compile-time reflection than it might appear.

In the runtime reflection scenario, the compiler must generate, for each user-defined type, an amount of boilerplate code that allows symbolic inspection from the outside, and code execution from the outside with, say, untyped (or dynamically-typed) arguments.

The key point is that the code is *boilerplate* and as such its production can be confined to a code generation task, which would keep the compiler simple. The availability of compile-time introspection effectively enables implementation of run-time introspection in a library.

For example:

class Widget
{
  ... data ...
  ... methods ...
}

mixin Manifest!(Widget);

If compile-time introspection is available, the Manifest template can generate full-blown run-time introspection code for Widget, with stubs for dynamic invocation, the whole nine yards.

This is nicer than leaving the task to the compiler because it relieves the compiler writer from being the bottleneck.
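To give a rough idea of the generated boilerplate (the names below -- ClassInfoEx, DynArg, registerClass, and Widget's draw method -- are all invented for illustration, not an existing API), the code Manifest emits could be as mundane as a static registration of name/stub pairs:

// Illustrative sketch only: all of these names are made up.
static this()
{
    auto info = new ClassInfoEx("Widget");
    // One stub per method: erase the static types, forward the call.
    info.addMethod("draw",
        delegate void(Object o, DynArg[] args)
        {
            (cast(Widget) o).draw();
        });
    registerClass(info);
}

The run-time side then needs nothing more than a lookup by name and a call through the stored stub; all of the type-specific glue is manufactured at compile time.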


Andrei

February 10, 2007
Andrei Alexandrescu (See Website For Email) wrote:
> Bill Baxter wrote:
> 
> There is a deeper connection between runtime reflection and compile-time reflection than it might appear.
> 
> In the runtime reflection scenario, the compiler must generate, for each user-defined type, an amount of boilerplate code that allows symbolic inspection from the outside, and code execution from the outside with, say, untyped (or dynamically-typed) arguments.
> 
> The key point is that the code is *boilerplate* and as such its production can be confined to a code generation task, which would keep the compiler simple. The availability of compile-time introspection effectively enables implementation of run-time introspection in a library.
> 
> For example:
> 
> class Widget
> {
>   ... data ...
>   ... methods ...
> }
> 
> mixin Manifest!(Widget);
> 
> If compile-time introspection is available, the Manifest template can generate full-blown run-time introspection code for Widget, with stubs for dynamic invocation, the whole nine yards.
> 
> This is nicer than leaving the task to the compiler because it relieves the compiler writer from being the bottleneck.
> 
> 
> Andrei
> 

Well there you go then.  Sounds a lot like the serialization problem too.

    mixin Serializer!(Widget);

Or the problem of exposing classes to scripting languages.

    mixin ScriptBinding!(Widget);

Speaking of which I'm surprised Kirk hasn't piped in here more about how this could make life easier for PyD (or not if that's the case).  Any thoughts, Kirk?  You're in one of the best positions to say what's a bottleneck with the current state of compile-time reflection.

--bb
February 10, 2007
Since there seems to be no escaping it, let's return to the realm of theory for a moment.  The ultimate goal of all tools and approaches being discussed is to automate the process of representing one language, A, in another language, B.  From here I feel the problem space can be broken into three general categories, the first being any case where a strict A->B mapping is desired and little to no modification of the output will occur.  This may be because A is a superset of B and therefore the output is likely to be very close to the desired result (as long as the domain remains in or near the boundaries of B), or simply because the output can be used as reference material of sorts with the embellishment handled elsewhere.  A very limited example of where A is a superset of B might be translating the Greek word for 'love' into English.  In Greek, there are at least four separate words to describe different kinds of affection, but all of these words can be adequately described as short phrases in English.

A more technical example where embellishment of the output, B, is often unnecessary is representing a database model in a language intended to access the database.  Typically, it is sufficient to perform A->B into a set of definition modules (header files) and do the heavy lifting separately in language B.  The output of the translation is inspectable, and any use of the output is verifiable as well.  Compilers are the preferred tool for such translations, and the problem is well understood.  Let's call this case A.

The second case is where a loose A->B mapping is desired or where a great deal of modification of B will occur.  To return to the Greek example for a moment, someone translating English into Greek may need to embellish the result to ensure that it communicates the proper intent. And since the original intent is contextual, an intelligent analysis of A is typically required.

Another situation that has been mentioned in this thread is the desire to perform matrix operations in a language that does not support them directly.  In this case we would like to do the bulk of our work in B but represent multiplication, addition, etc, in a manner that is relatively efficient.  The salient point here is that B already supports mathematical expressions, and this extension is simply intended to specialize B for additional type-driven semantics.  Meta-language tools tend to be fairly good at this, and several popular examples of this particular solution exist, expression templates being one such example.  Let's call this case B.
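To make the matrix example concrete, here is a rough D sketch of the expression-template idea; it handles only vector addition and is meant purely to illustrate the technique rather than to serve as working library code.

// Sketch only: '+' builds a lightweight expression value instead of
// allocating a temporary; the additions happen in a single pass when
// the expression is finally written into a destination.
struct Add
{
    double[] lhs;
    double[] rhs;

    // Evaluate one element of the expression on demand.
    double opIndex(size_t i) { return lhs[i] + rhs[i]; }

    // Write the whole expression into dst, element by element.
    void into(double[] dst)
    {
        for (size_t i = 0; i < dst.length; i++)
            dst[i] = opIndex(i);
    }
}

struct Vec
{
    double[] data;

    // a + b yields an Add describing the work, not a new Vec.
    Add opAdd(Vec rhs)
    {
        Add e;
        e.lhs = data;
        e.rhs = rhs.data;
        return e;
    }
}

// Usage: (a + b).into(c.data); -- no temporary vector is created.

A complete version would of course template the expression node so that a + b + c composes, which is precisely where the meta-language machinery earns its keep.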

The third case is where the complexities of A and B are roughly equal and the domains of each do not sufficiently overlap.  In such a situation, embellishment of the result of A->B is necessary to sufficiently express the desired behavior.  Let's call this case AB since the division of work or complexity is roughly balanced.

From experience, it is evident that attempts to map solutions for case A and case B onto this problem have distinct but recognizable issues. Solutions for case A (i.e. compilers) are excellent at a static A->B translation, but if B is modified into B' and then A is changed, the new A->B translation must again be manually converted to B', which tends to be quite complex.  From a business perspective, I have seen cases where language A was thrown away entirely and all work done in language B simply to avoid this process, and even then the vestiges of A can have a long-lasting impact on work in B--often it's simply too expensive to rewrite B' from scratch, but the existing B' is awkwardly expressed because of the inexact mapping that took place.

Solutions for case B, on the other hand, have the opposite problem. They allow for a great deal of flexibility in language B, but the way they perform A->B tends to be impenetrable for any reasonably complex A, and the process is typically not inspectable.  The C macro language is one example here, as are C++ and even D templates.  In fact, since they live in B I believe that the new mixin/import features belong to this category as well.  I do suspect that great improvements can be made here, but I am skeptical that any such tool will ever be ideal for AB.

With this in mind, it seems clear that a third approach is required for AB, but to discover such an approach let's first distill the previous two approaches: solutions for A seem to exist as external agents which perform the translation, while solutions for B seem to exist as in-language compile-time languages.  Solutions for A are insufficient because they do not allow for ongoing manipulation of both A and B, and solutions for B are insufficient because expressing a means of performing A->B within B is often awkward and occurs in a way that cannot be independently monitored.

My feeling is that the proper solution for case AB is a dynamic composition of pre-defined units of B to express the meaning of A.  Each unit is individually inspectable and its meaning is well understood, so any composition of such units should be comprehensible as well.  I have only limited experience here, but my impression is that fully reflected dynamic languages are well-suited for this situation.  Ruby on Rails is one example of such a solution, and I suspect that similar examples could be found for Lisp, etc.

Does this sound reasonable?  And can anyone provide supporting or conflicting examples?  My goal here is simply to establish some general parameters for the problem domain in an attempt to determine whether the new and planned macro features for D will ever be suitable for AB problems, and whether another solution for D might exist that is more fitting or more optimal.


Sean
February 10, 2007
Bill Baxter wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
>> Bill Baxter wrote:
>>
>> There is a deeper connection between runtime reflection and compile-time reflection than it might appear.
>>
>> In the runtime reflection scenario, the compiler must generate, for each user-defined type, an amount of boilerplate code that allows symbolic inspection from the outside, and code execution from the outside with, say, untyped (or dynamically-typed) arguments.
>>
>> The key point is that the code is *boilerplate* and as such its production can be confined to a code generation task, which would keep the compiler simple. The availability of compile-time introspection effectively enables implementation of run-time introspection in a library.
>>
>> For example:
>>
>> class Widget
>> {
>>   ... data ...
>>   ... methods ...
>> }
>>
>> mixin Manifest!(Widget);
>>
>> If compile-time introspection is available, the Manifest template can generate full-blown run-time introspection code for Widget, with stubs for dynamic invocation, the whole nine yards.
>>
>> This is nicer than leaving the task to the compiler because it relieves the compiler writer from being the bottleneck.
>>
>>
>> Andrei
>>
> 
> Well there you go then.  Sounds a lot like the serialization problem too.
> 
>     mixin Serializer!(Widget);

Yes, that is being discussed a little in the announce group. One nice thing (barring potential balkanization) is that you can invent various serialization engines supporting different formats. The point is that it's the programmer, not the compiler writer, deciding that.

One difference between serialization and run-time reflection is that you may decide to serialize only a subset of an object, so you may want to provide specific indications to the Serializer template (a la "don't serialize this guy", or "only serialize these guys", etc.):

mixin Serializer!(Widget, "exclude(x, y, foo, bar)");
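How the template digests that indication is up to the library writer. A rough sketch, with fields!(T), name!(f), and excluded() standing in as invented compile-time primitives (the language does not offer them in this form today):

template Serializer(T, char[] spec)
{
    void serialize(T obj)
    {
        // Hypothetical compile-time iteration over T's fields.
        foreach (f; fields!(T))
        {
            static if (!excluded(spec, name!(f)))
                writefln("%s = %s", name!(f), mixin("obj." ~ name!(f)));
        }
    }
}

The exclude list is just data handed to ordinary compile-time code, so the policy stays with the programmer rather than with the compiler writer.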

> Or the problem of exposing classes to scripting languages.
> 
>     mixin ScriptBinding!(Widget);

And probably the first scripting engine to be supported would be DMDScript...

> Speaking of which I'm surprised Kirk hasn't piped in here more about how this could make life easier for PyD (or not if that's the case).  Any thoughts, Kirk?  You're in one of the best positions to say what's a bottleneck with the current state of compile-time reflection.

...or PyD. :o)


Andrei
February 11, 2007
Bill Baxter wrote:
> Speaking of which I'm surprised Kirk hasn't piped in here more about how this could make life easier for PyD (or not if that's the case).  Any thoughts, Kirk?  You're in one of the best positions to say what's a bottleneck with the current state of compile-time reflection.
> 
> --bb

One area of Pyd which I am unhappy with is its support for inheritance and polymorphic behavior.

http://pyd.dsource.org/inherit.html

Getting the most proper behavior requires a bit of a workaround. For every class that a user wishes to expose to Python, they must write a "wrapper" class, and then expose both the wrapper and the original class to Python. The basic idea is so that you can subclass D classes with Python classes and then get D code to polymorphically call the methods of the Python class:

// D class
class Foo {
    void bar() { writefln("Foo.bar"); }
}

// D function calling method
void polymorphic_call(Foo f) {
    f.bar();
}

# Python subclass
class PyFoo(Foo):
    def bar(self):
        print "PyFoo.bar"

# Calling D function with instance of Python class
>>> o = PyFoo()
>>> polymorphic_call(o)
PyFoo.bar

Read that a few times until you get it. To see how Pyd handles this, read the above link. It's quite ugly.

The D wrapper class for Foo would look something like this:

class FooWrapper : Foo {
    mixin OverloadShim;
    void bar() {
        get_overload(&super.bar, "bar");
    }
}

Never mind what this actually does. The problem at hand is somehow generating a class like this at compile-time, possibly given only the class Foo. While these new mixins now give me a mechanism for generating this class, I don't believe I can get all of the information about the class that I need at compile-time, at least not automatically. I might be able to rig something creative up with tuples, now that I think about it...

However, I have some more pressing issues with Pyd at the moment (strings, embedding, and building, for three examples), which have nothing to do with these new features.

-- 
Kirk McDonald
Pyd: Wrapping Python with D
http://pyd.dsource.org
February 11, 2007
Kirk McDonald wrote:
> Bill Baxter wrote:
>> Speaking of which I'm surprised Kirk hasn't piped in here more about how this could make life easier for PyD (or not if that's the case).  Any thoughts, Kirk?  You're in one of the best positions to say what's a bottleneck with the current state of compile-time reflection.
>>
>> --bb
> 
> One area of Pyd which I am unhappy with is its support for inheritance and polymorphic behavior.
> 
> http://pyd.dsource.org/inherit.html

Great lib, and a good place to figure how introspection can help.

> Getting the most proper behavior requires a bit of a workaround. For every class that a user wishes to expose to Python, they must write a "wrapper" class, and then expose both the wrapper and the original class to Python. The basic idea is so that you can subclass D classes with Python classes and then get D code to polymorphically call the methods of the Python class:
> 
> // D class
> class Foo {
>     void bar() { writefln("Foo.bar"); }
> }
> 
> // D function calling method
> void polymorphic_call(Foo f) {
>     f.bar();
> }
> 
> # Python subclass
> class PyFoo(Foo):
>     def bar(self):
>         print "PyFoo.bar"
> 
> # Calling D function with instance of Python class
>  >>> o = PyFoo()
>  >>> polymorphic_call(o)
> PyFoo.bar
> 
> Read that a few times until you get it. To see how Pyd handles this, read the above link. It's quite ugly.

If I understand things correctly, in the ideal setup you'd need a means
to expose an entire, or parts of, a class to Python. That is, for the class:

class Base {
    void foo() { writefln("Base.foo"); }
    void bar() { writefln("Base.bar"); }
}

instead of (or in addition to) the current state of affairs:

wrapped_class!(Base) b;
b.def!(Base.foo);
b.def!(Base.bar);
finalize_class(b);

it would be probably desirable to simply write:

defclass!(Base);

which automa(t|g)ically takes care of all of the above.

To do so properly, and to also solve the polymorphic problem that you
mention, defclass must define the following class:

class BaseWrap : Base {
    mixin OverloadShim;
    void foo() {
        get_overload(&super.foo, "foo");
    }
    void bar() {
        get_overload(&super.bar, "bar");
    }
}

Then the BaseWrap (and not Base) class would be exposed to Python, along
with each of its methods.

If I misunderstood something in the above, please point out the error
and don't read the rest of this post. :o)

This kind of task should be easily doable with compile-time reflection,
possibly something along the following lines (for the wrapping part):

class PyDWrap(class T) : T
{
  mixin OverloadShim;
  // Escape into the compile-time realm
  mixin
  {
    foreach (m ; methods!(T))
    {
      char[] args = formals!(m).length
        ? ", " ~ actuals!(m) : "";
      writefln("%s %s(%s)
        { return get_overload(super.%s, `%s`%s); }",
        ret_type!(m), name!(m), formals!(m),
        name!(m), name!(m), args);
    }
  }
}

So instantiating, say, PyDWrap with Base would be tantamount to this:

class PyDWrap!(Base) : Base {
    mixin OverloadShim;
    void foo() {
        return get_overload(&super.foo, `foo`);
    }
    void bar() {
        get_overload(&super.bar, `bar`);
    }
}

Instantiating PyDWrap with a more sophisticated class (one that defines
methods with parameters) will work properly as the conditional
initialization of args suggests.
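For instance, a made-up method int setSize(int w, int h) on the wrapped class would come out, following the same pattern, roughly as:

int setSize(int w, int h) {
    return get_overload(&super.setSize, `setSize`, w, h);
}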

> The D wrapper class for Foo would look something like this:
> 
> class FooWrapper : Foo {
>     mixin OverloadShim;
>     void bar() {
>         get_overload(&super.bar, "bar");
>     }
> }
> 
> Never mind what this actually does. The problem at hand is somehow generating a class like this at compile-time, possibly given only the class Foo. While these new mixins now give me a mechanism for generating this class, I don't believe I can get all of the information about the class that I need at compile-time, at least not automatically. I might be able to rig something creative up with tuples, now that I think about it...

At the end of the day, without compile-time introspection, code will end
up repeating itself somewhere. For example, you nicely conserve the
inheritance relationship among D classes in their Python incarnations.
Why is that possible? Because D offers you the appropriate introspection
primitive. If you didn't have that, or at least C++'s SUPERSUBCLASS
trick (which works almost by sheer luck), you would have required the
user to wire the inheritance graph explicitly.

> However, I have some more pressing issues with Pyd at the moment
> (strings, embedding, and building, for three examples), which have
> nothing to do with these new features.

Update early, update often. :o) Please write out any ideas or issues you
are confronting. Looks like PyD is a good case study for D's nascent
introspection abilities.


Andrei


February 11, 2007
Andrei Alexandrescu (See Website For Email) wrote:
> Kirk McDonald wrote:
>> Bill Baxter wrote:
>>> Speaking of which I'm surprised Kirk hasn't piped in here more about how this could make life easier for PyD (or not if that's the case).  Any thoughts, Kirk?  You're in one of the best positions to say what's a bottleneck with the current state of compile-time reflection.
>>>
>>> --bb
>>
>> One area of Pyd which I am unhappy with is its support for inheritance and polymorphic behavior.
>>
>> http://pyd.dsource.org/inherit.html
> 
> Great lib, and a good place to figure how introspection can help.

Thanks!

> 
>> Getting the most proper behavior requires a bit of a workaround. For every class that a user wishes to expose to Python, they must write a "wrapper" class, and then expose both the wrapper and the original class to Python. The basic idea is so that you can subclass D classes with Python classes and then get D code to polymorphically call the methods of the Python class:
>>
>> // D class
>> class Foo {
>>     void bar() { writefln("Foo.bar"); }
>> }
>>
>> // D function calling method
>> void polymorphic_call(Foo f) {
>>     f.bar();
>> }
>>
>> # Python subclass
>> class PyFoo(Foo):
>>     def bar(self):
>>         print "PyFoo.bar"
>>
>> # Calling D function with instance of Python class
>>  >>> o = PyFoo()
>>  >>> polymorphic_call(o)
>> PyFoo.bar
>>
>> Read that a few times until you get it. To see how Pyd handles this, read the above link. It's quite ugly.
> 
> If I understand things correctly, in the ideal setup you'd need a means
> to expose an entire, or parts of, a class to Python. That is, for the class:
> 
> class Base {
>     void foo() { writefln("Base.foo"); }
>     void bar() { writefln("Base.bar"); }
> }
> 
> instead of (or in addition to) the current state of affairs:
> 
> wrapped_class!(Base) b;
> b.def!(Base.foo);
> b.def!(Base.bar);
> finalize_class(b);
> 
> it would be probably desirable to simply write:
> 
> defclass!(Base);
> 
> which automa(t|g)ically takes care of all of the above.
> 

That would be nice. However, my gut (and previous experience) tells me that it is simpler, or at least more reliable, if the user explicitly lists the things to wrap. There remain bits and pieces of D that I can't expose to Python, and it's easier to allow the user to simply not specify those things than to detect and not wrap them.

This is moot if D's reflection becomes perfect. We are not there yet, however, so I must play with what we have.

> To do so properly, and to also solve the polymorphic problem that you
> mention, defclass must define the following class:
> 
> class BaseWrap : Base {
>     mixin OverloadShim;
>     void foo() {
>         get_overload(&super.foo, "foo");
>     }
>     void bar() {
>         get_overload(&super.bar, "bar");
>     }
> }
> 
> Then the BaseWrap (and not Base) class would be exposed to Python, along
> with each of its methods.
> 
> If I misunderstood something in the above, please point out the error
> and don't read the rest of this post. :o)

One minor nit: Both BaseWrap and Base must be wrapped by Pyd, although only BaseWrap will actually be exposed to Python. (Meaning Python code can subclass and create instances of BaseWrap, but not Base.) This is so that D functions can return instances of Base to Python.

> 
> This kind of task should be easily doable with compile-time reflection,
> possibly something along the following lines (for the wrapping part):
> 
> class PyDWrap(class T) : T
> {
>   mixin OverloadShim;
>   // Escape into the compile-time realm
>   mixin
>   {
>     foreach (m ; methods!(T))
>     {
>       char[] args = formals!(m).length
>         ? ", " ~ actuals!(m) : "";
>       writefln("%s %s(%s)
>         { return get_overload(super.%s, `%s`%s); }",
>         ret_type!(m), name!(m), formals!(m),
>         name!(m), name!(m), args);
>     }
>   }
> }
> 
> So instantiating, say, PyDWrap with Base would be tantamount to this:
> 
> class PyDWrap!(Base) : Base {
>     mixin OverloadShim;
>     void foo() {
>         return get_overload(&super.foo, `foo`);
>     }
>     void bar() {
>         get_overload(&super.bar, `bar`);
>     }
> }
> 
> Instantiating PyDWrap with a more sophisticated class (one that defines
> methods with parameters) will work properly as the conditional
> initialization of args suggests.
> 

My current idea, which can be done right now, involves some simple refactoring of the class-wrapping API, which was designed before we had proper tuples. It would look something like this:

wrap_class!(
    Base,
    Def!(Base.foo),
    Def!(Base.bar)
);

'Def' would become a struct or class template. (The capital 'D' distinguishes it from the 'def' function template used to wrap regular functions.) Since all of the methods are now specified at compile-time, I can generate the wrapper class at compile time with little difficulty.

>> The D wrapper class for Foo would look something like this:
>>
>> class FooWrapper : Foo {
>>     mixin OverloadShim;
>>     void bar() {
>>         get_overload(&super.bar, "bar");
>>     }
>> }
>>
>> Never mind what this actually does. The problem at hand is somehow generating a class like this at compile-time, possibly given only the class Foo. While these new mixins now give me a mechanism for generating this class, I don't believe I can get all of the information about the class that I need at compile-time, at least not automatically. I might be able to rig something creative up with tuples, now that I think about it...
> 
> At the end of the day, without compile-time introspection, code will end
> up repeating itself somewhere. For example, you nicely conserve the
> inheritance relationship among D classes in their Python incarnations.
> Why is that possible? Because D offers you the appropriate introspection
> primitive. If you didn't have that, or at least C++'s SUPERSUBCLASS
> trick (which works almost by sheer luck), you would have required the
> user to wire the inheritance graph explicitly.
> 
>> However, I have some more pressing issues with Pyd at the moment
>> (strings, embedding, and building, for three examples), which have
>> nothing to do with these new features.
> 
> Update early, update often. :o) Please write out any ideas or issues you
> are confronting. Looks like PyD is a good case study for D's nascent
> introspection abilities.
> 

It has always been one. Pyd was probably the first D library to have a serious need for tuples, going so far as to fake them before they were part of the language proper. It's probably the largest concrete application of meta-programming written in D.

-- 
Kirk McDonald
http://kirkmcdonald.blogspot.com
Pyd: Connecting D and Python
http://pyd.dsource.org
February 11, 2007
janderson wrote:
> You've gotta really love working on D to have kept up this solid pace for so long.

It's the interest in D by the people here that fuels it.
February 11, 2007
Sean Kelly wrote:
> Please note that I'm not criticizing in-language DSL parsing as a general idea so much as questioning whether this is truly the best example for the usefulness of such a feature.

Compile-time DSLs will really only be useful for relatively small languages. For a complex DSL, a separate compilation tool will probably be much more powerful and much more useful.

I don't know anything about database languages, so I'm no help there.

One example of a highly useful compile time DSL is the regex package that Don Clugston and Eric Anderton put together. With better metaprogramming support, this kind of thing will become much simpler to write.

There's often a need for custom 'little languages' for lots of projects. Most of the time, people just make do without them because they aren't worth the effort to create. I hope to make it so easy to create them that all kinds of unforeseen uses will be made of them.

I'll give an example: I often have a need to create parallel tables of data. C, C++, and D have no mechanism to do that directly (though I've used a macro trick to do it in C). With a DSL, this becomes easy.
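To give the flavor of it (just a sketch of where I want this to go, not necessarily something the current CTFE implementation will accept verbatim): one list of entries drives two declarations that cannot drift apart.

// One definition of the entries...
const char[][] entries = ["draw", "move", "resize"];

// ...and compile-time functions that expand it into parallel tables.
char[] makeNames(char[][] es)
{
    char[] r = "const char[][] names = [ ";
    foreach (e; es)
        r ~= `"` ~ e ~ `", `;
    return r ~ "];";
}

char[] makeIds(char[][] es)
{
    char[] r = "enum Id { ";
    foreach (e; es)
        r ~= e ~ ", ";
    return r ~ "}";
}

mixin(makeNames(entries));
mixin(makeIds(entries));

// Now names[Id.move] == "move", and adding an entry in one place
// updates both tables at once.

Put a nicer front end on those two functions and you've got your little language.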