April 06, 2016
On 06.04.2016 15:15, Andrei Alexandrescu wrote:
> I just got word about Sparrow (from its creator no less):
>
> presentation_offline_Sparrow.pdf - https://db.tt/m2WwpxIY

The sum of squares of odd fibonacci numbers example is... unconvincing. It is inefficient and incorrect.

> Speak.mp4 - https://db.tt/RDmrlEu7
> ThesisLucTeo.pdf - https://db.tt/1ylGHuc1

- He wrongly claims that dynamic memory allocation is disallowed in D during CTFE, dismissing it immediately.

- Concepts such as ctorFromCt are not discussed sufficiently well.

- No discussion of out-of-bounds vector accesses. It seems that the compiler implementation might allow UB during compile time?

- I haven't found anything about modules, forward references or ordering of compile-time computations (which can have externally observable effects).

>
> An interesting language that shares a lot of D's vision and features.
> The languages differ in a few aspects: Sparrow has a much more flexible
> syntax allowing a variety of custom operators. (I happen to disagree
> that's a good thing as I believe highly flexible syntax easily leads to
> transmission noise code). On the plus side Sparrow has a smoother
> integration of compile-time vs. run-time computation, which makes it a
> bit easier to transition from one to another.

- No array literal enum mess.
- Overloading based on constants.

What else?

> Another feature we could
> consider adding to D is a simple definition of concepts as complex
> Boolean predicates. That's essentially identical to isForwardRange etc.
> etc. but makes for simpler use of these predicates.
> ...

Compile-time parsers with meaningful error messages for DSLs should probably also be supported at some point.
April 06, 2016
On Wednesday, 6 April 2016 at 21:35:51 UTC, mate wrote:
> On Wednesday, 6 April 2016 at 20:48:20 UTC, Lucian Radu Teodorescu wrote:
>> On Wednesday, 6 April 2016 at 18:27:25 UTC, BLM768 wrote:
>>> On Wednesday, 6 April 2016 at 18:25:11 UTC, BLM768 wrote:
>>>>
>>>> Aside from the explicit annotations, I don't see how their solution is more flexible than D's CTFE, but I might be missing something.
>>>
>>> Never mind. Just saw their language embedding example. Neat!
>>
>> Compared to CTFE, in Sparrow you can run at compile-time *any* algorithm you like. No restrictions apply. Not only can you do whatever your run-time code can do, but you can also call external programs at compile time.
>>
>> Imagine that you are calling the D compiler from inside the Sparrow compiler to compile some D code that you encounter.
>
> Wow, could be dangerous to compile source code.

Spawning processes during compilation is as dangerous as executing the program you just compiled (which you're going to do; the entire point of compiling a program is to execute it). I wouldn't be too concerned.

If you're hosting an online compiler, then you're (hopefully) already sandboxing the compiler (to prevent source code that does a lot of CTFE/has large static arrays/etc from eating all your cpu+mem) and the compiled program (for obvious reasons) anyway.

(Same argument for D's string import paths not allowing you into symlinks/subdirectories; there are more thorough sandboxing options for those concerned.)
April 06, 2016
On 04/06/2016 04:42 PM, Lucian Radu Teodorescu wrote:
> In my opinion implementing concepts in terms of interfaces is more
> restrictive than implementing them as boolean predicates.
[snip]
> Implementing concepts as predicates gives you much more freedom. I would
> argue that you can represent many more concepts with predicates. Not
> only can you represent all the "has-a" relations, but you can also place
> conditions on the compile-time values that your types might have, and
> on how that class may be used.

We've long used and appreciated concepts as Boolean predicates in this community, so we wouldn't need much convincing :o). Probably a great thing Sparrow could do is push that idea in the academic community. -- Andrei

April 07, 2016
On Wednesday, 6 April 2016 at 20:42:27 UTC, Lucian Radu Teodorescu wrote:
> On Wednesday, 6 April 2016 at 14:54:18 UTC, Puming wrote:
>> On Wednesday, 6 April 2016 at 13:15:48 UTC, Andrei Alexandrescu wrote:
>>> [...]
>>
>> Interesting. I've been thinking about concepts too. Hopefully they could come into D. Actually, I think concepts and interfaces could be combined into one feature: an interface is just a special case of a concept, saying that its implementer should have the matching function signatures.
>
> In my opinion implementing concepts in terms of interfaces is more restrictive than implementing them as boolean predicates. You cannot create interfaces based on the existence of attributes, on the existence of dependent types, or on the existence of external functions that apply to your types. Moreover, you cannot create concepts based on the compile-time values associated with your type.

Well, I meant the opposite: interfaces can be implemented with concepts.
So in my opinion, an interface is just a special case of a concept.
I'm in the same camp as you.
See https://github.com/venus-lang/venus/blob/master/docs/reference/trait.md, where I call concepts 'traits'.

>
> One basic example, from C++, is that you cannot create a concept that distinguishes between vector<int> and vector<bool>, as they theoretically should have the same interface.
>
> The main uses of concepts are in my view most of the time related to dependent types. I would like to have an InputRange concept that will also tell me the type of the elements I can read from it.
>

Exactly, that is why I find InputRange in D awkward: I have to repeat a long predicate in every function definition of a call chain.
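
For readers following along, here is a minimal sketch of what a concept expressed as a Boolean predicate looks like in D today (isInputRange and ElementType are from Phobos; isIntRange and sumOfSquares are made-up names, purely for illustration):

import std.range.primitives : isInputRange, ElementType;
import std.traits : isIntegral;

// A "concept" as a Boolean predicate: an input range of integers.
enum isIntRange(R) = isInputRange!R && isIntegral!(ElementType!R);

// The predicate is then repeated as a template constraint at each use
// site, which is the verbosity complained about above.
long sumOfSquares(R)(R r) if (isIntRange!R)
{
    long s = 0;
    foreach (x; r) s += x * x;
    return s;
}

unittest
{
    import std.range : iota;
    assert(sumOfSquares(iota(4)) == 0 + 1 + 4 + 9);
}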

> Implementing concepts as predicates gives you much more freedom. I would argue that you can represent many more concepts with predicates. Not only can you represent all the "has-a" relations, but you can also place conditions on the compile-time values that your types might have, and on how that class may be used.
>
Yes, indeed.
> My two cents,
> LucTeo

April 10, 2016
On Wednesday, 6 April 2016 at 21:42:31 UTC, Timon Gehr wrote:
> The sum of squares of odd fibonacci numbers example is... unconvincing. It is inefficient and incorrect.

How would you do it?
The Fibonacci example isn't very complex, but the idea was to go from really simple examples to more complex ones.

> - He wrongly claims that dynamic memory allocation is disallowed in D during CTFE, dismissing it immediately.

Unfortunately I'm not a D expert, so I cannot argue with you. I remember reading somewhere that dynamic memory allocation is disallowed in D at compile time. And it makes sense if you consider that global variables cannot be accessed during CTFE, and that functions need to be "free of side effects". I have a hard time imagining how an allocator would work under these conditions. How does D implement memory allocation at compile time?

> - Concepts such as ctorFromCt are not discussed sufficiently well.

Although I discussed them several times in the thesis, I might have been less convincing than I wanted to be.

The concept of ctorFromCt is a somewhat advanced one, but the idea behind it is pretty simple. If you have arbitrarily complex data structures, you cannot simply move them from compile time to run time. For numbers and strings, this is easily doable by just copying the bits. But what do you do with vectors of vectors? You cannot simply copy the bits, mainly because you don't know which bits need to be copied. The compiler tries to generate a default ctorFromCt for all your data structures by recursing down, similar to what a compiler does for a copy constructor. However, for pointers this cannot be done; the user needs to provide their own ctorFromCt. Please also see section 10.3.2 of the thesis for a discussion of cases in which a user-defined ctorFromCt can be used to speed up compilation.
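
To make the idea more concrete, here is a rough D-flavoured sketch (not actual Sparrow syntax; the function name merely mirrors the concept): a default ctorFromCt is essentially a recursive rebuild of the compile-time value in fresh run-time storage.

import std.algorithm : map;
import std.array : array;

// Rebuild a vector of vectors on the run-time side, element by element.
// Just copying the outer bits would leave references into compile-time
// memory, which no longer exists when the program runs. For pointers
// there is no generic recipe, hence the user-defined ctorFromCt.
int[][] ctorFromCt(int[][] ctValue)
{
    return ctValue.map!(row => row.dup).array;
}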

The reason I say this is an advanced concept is that programmers typically don't think of such things in their daily routine. But once you start working with complex structures at compile time, it comes as a natural concept.

> - No discussion of out-of-bounds vector accesses. It seems that the compiler implementation might allow UB during compile time?

There should be no distinction between out-of-bounds accesses at compile time and out-of-bounds accesses at run time. That said, I must admit that out-of-bounds accesses are not handled properly by the compiler at this point. I was assuming the presence of exceptions, but I never got around to implementing them; exceptions are such a popular feature these days that I don't think I can do something different there.

The purpose of the compiler and the language as they are now was to prove that we can build a language on the three fundamental goals: flexibility, efficiency and naturalness. I've answered the question "is such a language possible?" If you would like me to answer the question "can I use such a language in commercial applications?", I would say "not yet, but I'm confident that with some work, and a little bit of luck, it would be a great language to be used in commercial applications."

> - I haven't found anything about modules, forward references or ordering of compile-time computations (which can have externally observable effects).

Modules are very high on my TODO (and pain) list. Any help is greatly appreciated :)

Forward references are implemented by allowing the compiler to traverse your source code non-linearly.

Ordering of compile-time computations is entirely implementation-defined. A word of caution if you plan to abuse it.

> - No array literal enum mess.

As I said, the language is still a proof of concept. I would like to have these implemented as library features instead of compiler features.

> - Overloading based on constants.

I actually find this a strength of the language. Types are just another kind of compile-time value.

> Compile-time parsers with meaningful error messages for DSLs should probably also be supported at some point.

I would like to see this feature in D as well. At this point I'm not sure how clean the implementation would be if no "side effects" are allowed in CTFE.
April 10, 2016
On Sunday, 10 April 2016 at 10:01:21 UTC, Lucian Radu Teodorescu wrote:
> I have a hard time imagining how an allocator would work under these conditions. How does D implement memory allocation at compile time?

It's actually a problem that the CTFE interpreter allocates too much and too often, quickly running out of memory ^_^
The question is rather how would you implement CTFE without allocating a single thing?
April 10, 2016
On 10.04.2016 12:01, Lucian Radu Teodorescu wrote:
> On Wednesday, 6 April 2016 at 21:42:31 UTC, Timon Gehr wrote:
>> The sum of squares of odd fibonacci numbers example is...
>> unconvincing. It is inefficient and incorrect.
>
> How would you do it?
> The Fibonacci example isn't very complex, but the idea was to go from
> really-simple examples to more complex examples.
>
>> - He wrongly claims that dynamic memory allocation is disallowed in D
>> during CTFE, dismissing it immediately.
>
> Unfortunately I'm not a D expert, so I cannot argue with you.

Sure, but note that D is among the most closely related works to yours.

> I remember
> I read somewhere that dynamic memory allocation is disallowed in D at
> compile-time. And it makes sense if you think that global variables
> cannot be accessed during CTFE, and that functions need to be "free of
> side-effects". I have a hard time imagining how an allocator would work
> on these conditions. How does D implement the memory allocation at
> compile-time?
> ...

There's a heap at compile time, so one can use constructs that allocate heap memory. (Except closures, which are not currently supported at compile time by the reference implementation.)

import std.range, std.algorithm, std.array, std.conv;
class C{
    int x;
    this(int x)in{assert(__ctfe);}body{ // (contract for illustration)
        this.x=x;
    }
    override string toString(){ return text("C(",x,")"); }
}

auto x=[new C(1),new C(3),new C(4)]~iota(6).map!(x=>new C(x)).array; // run at compile time, multiple heap allocations

void main(){
    import std.stdio;
    writeln(x); // use at run time
}


>> - Concepts such as ctorFromCt are not discussed sufficiently well.
>
> Although I discussed them several times in the thesis, I might have been
> less convincing than I wanted to be.
>
> The concept of ctorFromCt is a somewhat advanced concept, but the
> idea behind it is pretty simple. If you have arbitrarily complex data
> structures, you cannot simply move them from compile-time to run-time.

In D, the transformation is done by the compiler (in the cases that have been implemented so far; unfortunately, the reference implementation does not currently support moving some kinds of mutable data to run time). What is an example where I would want to use a custom ctorFromCt?

> ...
>> - No discussion of out-of-bounds vector accesses. It seems that the
>> compiler implementation might allow UB during compile time?
>
> There should be no distinction between out-of-bounds accesses at
> compile time and out-of-bounds accesses at run time. That said, I must
> admit that out-of-bounds accesses are not handled properly by the
> compiler at this point.

Which means that the benchmarks might report slightly optimistic times.

> I was assuming the
> presence of exceptions, but I never got around to implementing them;
> exceptions are such a popular feature these days that I don't think I
> can do something different there.
>
> The purpose of the compiler and the language as they are now was to
> prove that we can build a language on the three fundamental goals:
> flexibility, efficiency and naturalness.

I think this is an example of a trade-off between efficiency and naturalness.

> I've answered the question "is
> such a language possible?" If you would like me to answer the
> question "can I use such a language in commercial applications?",...

(That is not what I was asking.)

> I would say "not yet, but I'm confident that with some work, and a little
> bit of luck, it would be a great language to be used in commercial
> applications."
> ...

Nondeterminism in the build process might be a deal-breaker for some.

>> - I haven't found anything about modules, forward references or
>> ordering of compile-time computations (which can have externally
>> observable effects).
>
> Modules are very high on my TODO (and pain) list. Any help is greatly
> appreciated :)
>
> Forward references are implemented by allowing the compiler to traverse
> your source code non-linearly.
>
> Ordering of compile-time computations is entirely implementation
> defined.

The design of CTFE in D is such that ordering of compile-time computations does not matter. (Except where it is forced to be a certain way by data dependencies.)

> Word of caution if you plan to abuse it.
>

I.e., your stance on accidents based on unclean language semantics is that they are the fault of the user?

>> - No array literal enum mess.
>
> As I said, the language is still a proof of concept. I would like to have
> these implemented as library features instead of compiler features.
> ...

I was referring to a specific weakness of D here, e.g.:

import std.range, std.algorithm, std.array, std.conv;
class C{
    int x;
    this(int x)immutable in{assert(__ctfe);}body{
        this.x=x;
    }
    override string toString(){ return text("C(",x,")"); }
}

immutable x=[new immutable(C)(1),new immutable(C)(3),new immutable(C)(4)]~iota(6).map!(x=>new immutable(C)(x)).array;
auto y=x;

void main(){
    import std.stdio;
    writeln(x is y); // false
}

What would be a similar code snippet in Sparrow, and what does it do?

>> - Overloading based on constants.
>
> I actually find this a strength of the language.

I don't consider it a weakness. (Although, one drawback is that it is easy to accidentally block partial evaluation during subsequent edits, leading to performance regressions.)

> Types are just another kind of compile-time values.
>

Do you support custom computations on types?
April 10, 2016
On 10.04.2016 13:00, Timon Gehr wrote:
> On 10.04.2016 12:01, Lucian Radu Teodorescu wrote:
>> On Wednesday, 6 April 2016 at 21:42:31 UTC, Timon Gehr wrote:
>>> The sum of squares of odd fibonacci numbers example is...
>>> unconvincing. It is inefficient and incorrect.
>>
>> How would you do it?
>> The Fibonacci example isn't very complex, but the idea was to go from
>> really-simple examples to more complex examples.

Forgot to answer this part.

Well, one thing is that Fibonacci numbers grow really quickly, so you'll just produce overflow in the cases worth timing. Furthermore, in order to compute F_(n+1) you always recompute F_1, ..., F_n. The obvious way to do it is to have a range of Fibonacci numbers with O(1) popFront, filter them by parity, and sum up their squares.
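
Roughly, that could be a sketch along these lines (recurrence, take, filter, map and sum are from Phobos; sumSquaresOddFib is a made-up name):

import std.range : recurrence, take;
import std.algorithm : filter, map, sum;

// Sum of squares of the odd values among the first n Fibonacci numbers:
// a lazy range with O(1) popFront per element. Overflow is still an
// issue for large n, as noted above.
ulong sumSquaresOddFib(size_t n)
{
    return recurrence!"a[n-1] + a[n-2]"(1UL, 1UL)
        .take(n)
        .filter!(x => x % 2 == 1)
        .map!(x => x * x)
        .sum;
}

unittest
{
    assert(sumSquaresOddFib(6) == 1 + 1 + 9 + 25); // 1,1,2,3,5,8
}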


However, there is a solution in O(log n) using linear algebra (assuming unit costs for arithmetic operations on integers):

F_n = F_(n-1) + F_(n-2)
F_n² = F_(n-1)² + 2 F_(n-1)F_(n-2) + F_(n-2)²
F_n F_(n-1) = F_(n-1)² + F_(n-1) F_(n-2)

I.e., if you know values of (F_n, F_n², F_n F_(n-1)) for a couple of subsequent values of n, you can compute them at a larger index by applying a linear transformation.

Now just keep enough subsequent values in a vector, together with a counter to which you add the squares of the current odd Fibonacci numbers (the parity of Fibonacci numbers follows the 3-periodic pattern 1,1,0,1,1,0,1,1,0,...). This vector can be updated from index n to index n+3 by a linear transformation. The linear transformation has an associated matrix. Now just exponentiate the matrix by exponentiation-by-squaring to transform a vector of initial values the correct number of times, and read out the result from the counter.
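
As a minimal sketch of the underlying technique only (exponentiation-by-squaring of the transformation matrix, shown here for F_n itself rather than for the full vector with the squared terms and the counter; mul and fib are made-up names):

// 2x2 matrix product over ulong (unit-cost arithmetic assumed, as above).
ulong[2][2] mul(ulong[2][2] a, ulong[2][2] b)
{
    ulong[2][2] c;
    foreach (i; 0 .. 2)
        foreach (j; 0 .. 2)
            foreach (k; 0 .. 2)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

// F_n in O(log n) multiplications, using
// [[1,1],[1,0]]^n = [[F_(n+1),F_n],[F_n,F_(n-1)]].
// The construction described above uses the same idea with a larger matrix.
ulong fib(ulong n)
{
    ulong[2][2] r = [[1, 0], [0, 1]]; // identity
    ulong[2][2] m = [[1, 1], [1, 0]];
    while (n)
    {
        if (n & 1) r = mul(r, m);
        m = mul(m, m);
        n >>= 1;
    }
    return r[0][1];
}

static assert(fib(10) == 55); // evaluated at compile time here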

If you are able to diagonalize the linear transformation, you'll even get a formula in closed form. (I don't have time to do this now, maybe later this week.)
September 07, 2016
On Wednesday, 6 April 2016 at 13:15:48 UTC, Andrei Alexandrescu wrote:
> I just got word about Sparrow (from its creator no less):
>
> presentation_offline_Sparrow.pdf - https://db.tt/m2WwpxIY
> Speak.mp4 - https://db.tt/RDmrlEu7
> ThesisLucTeo.pdf - https://db.tt/1ylGHuc1
>
> An interesting language that shares a lot of D's vision and features. The languages differ in a few aspects: Sparrow has a much more flexible syntax allowing a variety of custom operators. (I happen to disagree that's a good thing as I believe highly flexible syntax easily leads to transmission noise code). On the plus side Sparrow has a smoother integration of compile-time vs. run-time computation, which makes it a bit easier to transition from one to another. Another feature we could consider adding to D is a simple definition of concepts as complex Boolean predicates. That's essentially identical to isForwardRange etc. etc. but makes for simpler use of these predicates.
>
>
> Andrei

I come from the R world and have been playing the game of flitting between R and C++, using C++ (through Rcpp) to speed up slow things in R, and I have been looking for a better solution for some time. This is one of the reasons I am in the D community.

For some time I have been considering a problem to do with creating tables with unbounded types; one of the failed attempts is here: https://forum.dlang.org/thread/gdjaoxypicsxlfvzwbvt@forum.dlang.org?page=1
I then exchanged emails with Lucian, Sparrow's creator, and he very quickly and simply outlined the solution to the problem. Thereafter I read his PhD thesis - one of the most informative and best-written texts in computer science I have read.

At the moment, lots of languages are attempting to close the dynamic-static loop: offering the features inherent in dynamic programming languages while keeping the safety and performance that come with a statically compiled language, and doing so in a language that doesn't cause your brain to bleed. The "one language to rule them all" motif of Julia has hit the rocks; one reason is that they now realize their language is being held back because the compiler cannot infer certain types, for example: http://www.johnmyleswhite.com/notebook/2015/11/28/why-julias-dataframes-are-still-slow/

A language that can create arbitrarily complex programs is the kind of thing that could transform the landscape of computing. I don't think D should be left out, and it should take Sparrow very seriously indeed.

My two pence