October 03, 2014
> Thanks a lot, by the way!
>
> I've just skimmed through the code and the README... You did not use the packed forest representation, did you..?

Sorry for the microscopic documentation (Pegged is more documented than that...), it was a 'for me only' project.

The forest is packed in the sense that common nodes are reused and shared among parse trees: all intermediate parse results from any grammar part are stored and used to produce the parse nodes.

The range view gives access to the parse trees one after another, but the global parse forest is present in the grammar object (or rather, it is generated and completed during the parse process: each new parse result extends the parse forest).
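As a rough illustration of the sharing (a Python sketch, not Pegged's actual API — the `Forest` class and `rule_a` helper here are invented), the packed forest can be pictured as a memo table keyed by (rule, input position), consulted by every parse attempt:

```python
# Hypothetical sketch: a memo table shared by all parse attempts, so each
# (rule, position) pair is parsed at most once and its result is reused by
# every parse tree that needs it.

class Forest:
    def __init__(self):
        self.memo = {}   # (rule_name, start_pos) -> list of results
        self.calls = 0   # counts real (non-cached) rule invocations

    def parse(self, rule_name, rule_fn, pos):
        key = (rule_name, pos)
        if key not in self.memo:
            self.calls += 1
            self.memo[key] = rule_fn(self, pos)  # computed once, shared forever
        return self.memo[key]

# Toy rule: matches one 'a' at pos, returns a list of (node, next_pos) results.
def rule_a(forest, pos):
    text = "aaa"
    if pos < len(text) and text[pos] == "a":
        return [(("a", pos), pos + 1)]
    return []

forest = Forest()
first = forest.parse("A", rule_a, 0)   # real work happens here
again = forest.parse("A", rule_a, 0)   # served from the shared forest
assert first == again
assert forest.calls == 1               # the rule body ran only once
```

Any tree enumerated later that needs rule `A` at position 0 gets the stored result for free, which is what makes the range view cheap after the first tree.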

It has a strange effect on how parsing time is distributed among the parse results: if you time the different parse trees, you'll see that the first one might take maybe 40% of the entire parsing time all by itself, because it has to discover all the intermediate results alone. The following trees are computed very rapidly, since the intermediate parse results are already known. Of course, once the parse trees begin to deviate from the first ones, the process slows down again, since they have to explore as-yet-unused rules and input slices.

I'm not sure the previous paragraph is clear...

Imagine you have 10 different parse trees. They could be distributed like this:

# parse result    % global parse time
 1                40
 2                 2
 3                 3
 4                 3
 5                 5
 6                 6
 7                 8
 8                10
 9                11
10                12
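The skew above can be mimicked with a toy cache model (the overlap numbers below are invented for illustration, not measurements of the real parser): each "tree" needs a set of intermediate results, and only results not yet in the shared memo table cost real work.

```python
# Toy model of the timing skew: the first tree pays for discovering nearly
# everything; the next few trees mostly hit the cache; later trees deviate
# and must compute fresh results again.
memo = set()
work_per_tree = []
trees = [
    set(range(40)),                   # tree 1 discovers 40 fresh results
    set(range(42)),                   # trees 2..4 reuse almost everything
    set(range(45)),
    set(range(48)),
    set(range(55)) | {100, 101},      # later trees start to deviate...
    set(range(70)) | set(range(100, 110)),  # ...and need more fresh work
]
for needed in trees:
    fresh = needed - memo             # uncached results = actual parsing work
    memo |= needed
    work_per_tree.append(len(fresh))

print(work_per_tree)                  # → [40, 2, 3, 3, 9, 23]
assert work_per_tree[0] == max(work_per_tree)  # first tree dominates
```

The shape matches the table: a big first entry, then small ones that grow again as the trees diverge from the already-explored parts of the forest.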
October 03, 2014
On Friday, 3 October 2014 at 16:20:29 UTC, Philippe Sigaud via Digitalmars-d wrote:
>> Thanks a lot, by the way!
>>
>> I've just skimmed through the code and the README... You did not use the
>> packed forest representation, did you..?
>
> Sorry for the microscopic documentation (Pegged is more documented
> than that...), it was a 'for me only' project.

This is perfectly fine with me: I think I should be okay with the theory behind
the code, and your style isn't cryptic.

>
> I'm not sure the previous paragraph is clear...

I got the point.

> Imagine you have 10 different parse trees. They could be distributed like this:
>
> # parse result    % global parse time
>  1                40
>  2                 2
>  3                 3
>  4                 3
>  5                 5
>  6                 6
>  7                 8
>  8                10
>  9                11
> 10                12

Interesting, indeed.

Anyway, thank you very much for the code. The weekend is coming -> I'll play with your implementation and see if there are any improvements possible.
October 03, 2014
> Anyway, thank you very much for the code. The weekend is coming -> I'll play with your implementation and see if there are any improvements possible.

Be sure to keep me informed of any enormous mistake I made. I tried Appender and other concatenation means, without much success.

Btw, I saw on the ML that using byKey.front to get a first key is very slow. Use keys[0] or some such instead.
October 03, 2014
On Friday, 3 October 2014 at 17:24:43 UTC, Philippe Sigaud via Digitalmars-d wrote:

> Be sure to keep me informed of any enormous mistake I made. I tried
> Appender and other concatenation means, without big success.

I am not sure if there are any. Maybe GLL just IS non-practical, after all. Right now only one thing is for sure: the generalized parsing fruit is not a low-hanging one.

Yeap, I will let you know. This is sort of a personal challenge now :)

> Btw, I saw on the ML that using byKey.front to get a first key is very slow. Use
> keys[0] or some such instead.

Hm... Funny. Did you look into why that is?