August 11, 2012
On Sat, Aug 11, 2012 at 2:19 AM, David Piepgrass <qwertie256@gmail.com> wrote:

> I must say though, that while ADTs are useful for simple ASTs, I am not convinced that they scale to big and complex ASTs, let alone extensible ASTs, which I care about more.

You mean AST for D code?

> Nevertheless ADTs are at least useful for
> rapid prototyping, and pattern matching is really nice too. I'm sure
> somebody could at least write a D mixin for ADTs, if not pattern matching.)

I did it, maybe two years ago. It worked for recursive ADTs too (lists,
trees) and automatically generated small matchers, and maybe specific
map/reduce functions.
That would be easier today, with the CTFE++ we now have.

IIRC, it generated an abstract class with internal subtypes and a tag to distinguish the state. I guess I could have used a union for the fields, like Timon did further upthread. Hmm, does a union allow for recursive fields?
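
(Through a pointer it does; a union field just can't hold the enclosing type by value.) For illustration, here is a minimal hand-written sketch of the tagged-union shape such a mixin could generate; the field names match Timon's example downthread:

struct Expr {
    enum Tag { val, plus, minus }
    Tag tag;
    union {
        int i;                  // Tag.val payload
        struct { Expr* a, b; }  // Tag.plus / Tag.minus operands (both Expr*)
    }
}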

I never tried to write generic pattern matchers that would also work on any struct or class, by using .tupleof. I daydreamed about it a few times, though, but never found a palatable syntax. Maybe with the new () => syntax, it would be better.


> I hope someday to have a programming system whose features are not limited to whatever features the language designers saw fit to include -- a language where the users can add their own features, all the while maintaining "native efficiency" like D. That language would potentially allow Rust-like code, D-like code, Ruby-like code and even ugly C-like code.
>
> I guess you don't want to be the one to kickstart that PL. I've been planning to do it myself, but so far the task seems just too big for one person.

Well, we are not far from having an official D lexer.
Then, an official D parser.

From this, adding user-defined extensions is not *that* complicated
(not simple, mind you, but doable):

* Define lowerings (that is, translations from your extended syntax to D
syntax), maybe by snatching the unused macro keyword.
* Code a small rdmd-like wrapper around dmd: given a file, it
extracts the macros, parses the extended code, and transforms the
extensions, repeating as many times as necessary if some macros call
other macros.
* Discard the macros and then pass the transformed file to dmd.

It looks like C macros and preprocessor-based programming, but since the wrapper knows the D grammar, it's nearer to Lisp macros, I think.
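
To get a feel for it: D's string mixins already give a restricted, expression-level form of such lowerings today. A minimal sketch, where "unless" is a made-up construct, not an existing keyword:

import std.stdio;

// Lower a hypothetical "unless" construct to plain D code.
string unless(string cond, string code) {
    return "if (!(" ~ cond ~ ")) { " ~ code ~ " }";
}

void main() {
    int x = 3;
    mixin(unless("x > 5", "x = 5;"));
    writeln(x); // 5
}

The wrapper described above would do the same kind of source-to-source rewriting, but on whole declarations and with the full D grammar available.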
August 11, 2012
On 08/10/12 14:32, bearophile wrote:
> (Repost from D.learn.)
> 
> Through Reddit I've found a page that shows a small example of Rust code:
> 
> http://www.reddit.com/r/programming/comments/xyfqg/playing_with_rust/ https://gist.github.com/3299083
> 
> The code:
> https://gist.github.com/3307450
> 
> -----------------------------
> 
> So I've tried to translate this first part of the Rust code to D (I have not run it, but it looks correct):
> 
> 
> enum expr {
>     val(int),
>     plus(&expr, &expr),
>     minus(&expr, &expr)
> }
> 
> fn eval(e: &expr) -> int {
>     alt *e {
>       val(i) => i,
>       plus(a, b) => eval(a) + eval(b),
>       minus(a, b) => eval(a) - eval(b)
>     }
> }
> 
> fn main() {
>     let x = eval(
>         &minus(&val(5),
>                &plus(&val(3), &val(1))));
> 
>     io::println(#fmt("val: %i", x));
> }
> 

Ugh. Haven't really read that article, but how about this
D version:

   import std.stdio;

   // Helper template that lets us alias an expression such as a member.
   template ALIAS(alias A) { alias A ALIAS; }

   // Expression template: EVAL is the D code to run, A the operand types.
   static struct Expr(string EVAL, A...) {
      A a;
      // a0/a1: operands that are themselves Exprs get evaluated
      // recursively; plain values are just aliased.
      static if (is(typeof(a[0].eval)))
         @property a0() { return a[0].eval; }
      else
         alias ALIAS!(a[0]) a0;
      static if (is(typeof(a[1]))) {
         static if (is(typeof(a[1].eval)))
            @property a1() { return a[1].eval; }
         else
            alias ALIAS!(a[1]) a1;
      }
      @property auto eval() {
         static if (is(typeof(mixin(EVAL))))
            return mixin(EVAL);
         else
            mixin(EVAL);
      }
      //alias eval this; // Uncommenting this line will enable automatic
                         // evaluation -- which may not always be desirable.

      // Every binary operator builds a new Expr node instead of computing.
      auto opBinary(string op, B)(B b) {
         return Expr!("a0" ~ op ~ "a1", Expr, B)(this, b);
      }
   }

   auto Val(V)(V v) { return Expr!("a0", V)(v); }

   void main() {
      auto r = Val(5) - (Val(3) + Val(1));
      writeln("r: ", r, " == ", r.eval);
      auto s = sqr(Val(5) * Val(2) ^^ Val(3));
      writeln("s: ", s, " == ", s.eval);
   }

   auto sqr(T)(T a) { return Expr!("a0*a0", T)(a); }

which is more readable while being much more powerful.

But still trivial enough that the compiler (GDC) evaluates it all at compile time, even without being asked to do so.

artur
August 11, 2012
On Saturday, 11 August 2012 at 14:45:55 UTC, Russel Winder wrote:
> On Sat, 2012-08-11 at 02:19 +0200, David Piepgrass wrote:
> […]
>> I hope someday to have a programming system whose features are not limited to whatever features the language designers saw fit to include -- a language where the users can add their own features, all the while maintaining "native efficiency" like D. That language would potentially allow Rust-like code, D-like code, Ruby-like code and even ugly C-like code.
>> 
>> I guess you don't want to be the one to kickstart that PL. I've been planning to do it myself, but so far the task seems just too big for one person.
>
> <quasi-troll>
> Isn't that language Lisp?
> </quasi-troll>

You missed the native efficiency part :-)

I think XL is the closest thing that currently exists.

http://en.wikipedia.org/wiki/XL_(programming_language)
August 11, 2012
On Saturday, 11 August 2012 at 16:12:14 UTC, Peter Alexander wrote:
> On Saturday, 11 August 2012 at 14:45:55 UTC, Russel Winder wrote:
>> On Sat, 2012-08-11 at 02:19 +0200, David Piepgrass wrote:
>> […]
>>> I hope someday to have a programming system whose features are not limited to whatever features the language designers saw fit to include -- a language where the users can add their own features, all the while maintaining "native efficiency" like D. That language would potentially allow Rust-like code, D-like code, Ruby-like code and even ugly C-like code.
>>> 
>>> I guess you don't want to be the one to kickstart that PL. I've been planning to do it myself, but so far the task seems just too big for one person.
>>
>> <quasi-troll>
>> Isn't that language Lisp?
>> </quasi-troll>
>
> You missed the native efficiency part :-)
>
You mean like the Common Lisp compilers that are able to beat FORTRAN compilers
in floating point computations?

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.54.5725

--
Paulo

August 11, 2012
On Sat, 2012-08-11 at 18:12 +0200, Peter Alexander wrote:
> On Saturday, 11 August 2012 at 14:45:55 UTC, Russel Winder wrote:
[…]
> > <quasi-troll>
> > Isn't that language Lisp?
> > </quasi-troll>
> 
> You missed the native efficiency part :-)

Most modern Lisp implementations employ JITing one way or another, so you do get native code. Just not on the first run through a bit of code.

[…]

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder

August 11, 2012
On Saturday, 11 August 2012 at 18:04:29 UTC, Paulo Pinto wrote:
> On Saturday, 11 August 2012 at 16:12:14 UTC, Peter Alexander wrote:
>> On Saturday, 11 August 2012 at 14:45:55 UTC, Russel Winder wrote:
>>> On Sat, 2012-08-11 at 02:19 +0200, David Piepgrass wrote:
>>> […]
>>>> I hope someday to have a programming system whose features are not limited to whatever features the language designers saw fit to include -- a language where the users can add their own features, all the while maintaining "native efficiency" like D. That language would potentially allow Rust-like code, D-like code, Ruby-like code and even ugly C-like code.
>>>> 
>>>> I guess you don't want to be the one to kickstart that PL. I've been planning to do it myself, but so far the task seems just too big for one person.
>>>
>>> <quasi-troll>
>>> Isn't that language Lisp?
>>> </quasi-troll>
>>
>> You missed the native efficiency part :-)
>>
> You mean like the Common Lisp compilers that are able to beat FORTRAN compilers
> in floating point computations?
>
> http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.54.5725
>
> --
> Paulo

Not sure where you read that in the paper.

From the conclusion:

"We have demonstrated that the speed of compiled Common Lisp code, though today somewhat slower than that of the best compiled Fortran, could probably be as efficient, and in some ways superior."

"Probably" is the operative word there.

> Most modern Lisp implementations employ JITing one way or another, so
> you do get native code. Just not on the first run through a bit of code.

JIT has its limits. A dynamically typed language is still dynamically typed once compiled. Sure, the JIT may be able to deduce the types in some cases, but not all.

I do see your point, but in general it's still not as fast as optimised C.
August 11, 2012
On 8/11/2012 11:04 AM, Paulo Pinto wrote:
> On Saturday, 11 August 2012 at 16:12:14 UTC, Peter Alexander wrote:
>> You missed the native efficiency part :-)
>>
> You mean like the Common Lisp compilers that are able to beat FORTRAN compilers
> in floating point computations?

Floating point code is a rather specialized subset of what a good native compiler can do.

For example, Java doing well with floating point has no bearing on its lack of user-defined value types, and the inefficiency that entails.
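
To make the layout point concrete with a hedged sketch in D (illustration of the general point, not a benchmark):

struct Complex { double re, im; }   // a user-defined value type

void main() {
    // One contiguous allocation holding 1000 values in-place.
    // The Java equivalent is an array of references, with each
    // element a separately heap-allocated object once populated.
    auto a = new Complex[1000];
    a[0] = Complex(1.0, 2.0);
}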
August 11, 2012
On 08/11/2012 01:24 PM, Marco Leise wrote:
> Am Fri, 10 Aug 2012 15:56:53 +0200
> schrieb Timon Gehr<timon.gehr@gmx.ch>:
>
>> int eval(scope Expr* e){
>>       final switch(e.tag) with(Expr.Tag){
>>           case val:   return e.i;
>>           case plus:  return eval(e.a) + eval(e.b);
>>           case minus: return eval(e.a) - eval(e.b);
>>       }
>> }
>
> Can you quickly explain the use of scope here?
> Does that mean "I won't keep a reference to e"?

It means "I won't keep a reference to *e", but I assume that is what
was meant.

> What are the implications?

The caller has some confidence that passing a pointer to stack-allocated data is safe.

> Does scope change the method signature?

Yes. It is, for example, impossible to override a method that has a scope
parameter with a method that does not have one.
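
A quick illustration of the signature point, per the behaviour described here:

class A { void f(scope int* p) {} }
class B : A {
    // void f(int* p) {}             // would not override A.f
    override void f(scope int* p) {} // OK: signatures match
}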

> Does the compiler enforce something?

In this case and currently, it is merely documentation.
I think it should be enforced and cast(scope) should be added
to allow non-@safe code to escape the conservative analysis.

> Will generated code differ?

Only the mangled symbol name will differ. (Unlike when scope is used on
delegate parameters, where it prevents closure allocation at the
call site.)
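
The delegate case, as a minimal sketch (my own example, not from upthread):

void each(scope void delegate(int) dg) {
    foreach (i; 0 .. 3)
        dg(i);
}

void main() {
    int sum;
    // Because each() promises not to escape dg, the compiler can
    // avoid allocating a GC closure for the context holding sum.
    each((int i) { sum += i; });
    assert(sum == 3);
}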

> Does it prevent bugs or is it documentation for the user of the function?

It is just documentation, both for the user and the maintainer.

> Thanks in advance for some insight!
>
August 11, 2012
On Saturday, 11 August 2012 at 22:17:44 UTC, Timon Gehr wrote:
>> Will generated code differ?
>
> Only the mangled symbol name will differ. (Unlike when scope is used on
> delegate parameters, where it prevents closure allocation at the call site.)

The code for the callee stays the same, yes, but the code for the caller might change as the optimizer is free to take advantage of the fact that any reference in the parameters will not be escaped by the function. For example, LDC will stack-allocate dynamic arrays and objects if they are local to the function. [1]
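
For instance, a hypothetical function of the kind that optimization targets (my sketch, not from LDC's test suite):

// tmp never escapes sumSquares, so an optimizer that proves this
// may place the array on the stack instead of the GC heap.
int sumSquares(int n) {
    auto tmp = new int[n];
    foreach (i; 0 .. n)
        tmp[i] = i * i;
    int s = 0;
    foreach (v; tmp)
        s += v;
    return s;
}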

David


[1] The fine print: We currently don't take advantage of "scope" parameters for this yet, though (it seems too dangerous with the related analysis not being implemented in the frontend), and for a completely unrelated reason, the code which performs the mentioned optimization is disabled in current master (but will be re-enabled in the near future, before the September release).
August 11, 2012
On 08/12/2012 12:34 AM, David Nadlinger wrote:
> On Saturday, 11 August 2012 at 22:17:44 UTC, Timon Gehr wrote:
>>> Will generated code differ?
>>
>> Only the mangled symbol name will differ. (Unlike when scope is used on
>> delegate parameters, where it prevents closure allocation at
>> the call site.)
>
> The code for the callee stays the same, yes, but the code for the caller
> might change as the optimizer is free to take advantage of the fact that
> any reference in the parameters will not be escaped by the function. For
> example, LDC will stack-allocate dynamic arrays and objects if they are
> local to the function. [1]
>
> David
>
>
> [1] The fine print: We currently don't take advantage of "scope"
> parameters for this yet, though (it seems too dangerous with the related
> analysis not being implemented in the frontend),  and for a completely
> unrelated reason, the code which performs the mentioned optimization is
> disabled in current master (but will be re-enabled in the near future,
> before the September release).

Is there an upper bound on the amount of allocated memory? Implicit
stack-allocation of arbitrarily-sized dynamic arrays seems dangerous.