February 26, 2014
On 2/25/14, 10:58 PM, Brian Schott wrote:
> On Tuesday, 25 February 2014 at 23:17:56 UTC, Meta wrote:
>> std.lexer could be the umbrella for a bunch of different lexers. Then
>> we could have std.lexer.xml, std.lexer.json, etc.
>
> I think that's a bit backwards. I'd rather have
>
> std.d.lexer
> std.d.ast
> std.d.parser
>
> than
>
> std.lexer.d
> std.parser.d
> std.ast.d

I think we wouldn't want to add one more package for each language supported.

Andrei

February 26, 2014
On 2014-02-26 07:58, Brian Schott wrote:

> I think that's a bit backwards. I'd rather have
>
> std.d.lexer
> std.d.ast
> std.d.parser
>
> than
>
> std.lexer.d
> std.parser.d
> std.ast.d

I agree with Brian. Although I would have a common package for all languages:

std.language.d.lexer
std.language.d.ast
std.language.d.parser

-- 
/Jacob Carlborg
February 26, 2014
On 2014-02-26 16:18, Andrei Alexandrescu wrote:

> I think we wouldn't want to add one more package for each language
> supported.

That's exactly what we want, preferably in a common package:

std.language.d.lexer
std.language.d.ast
std.language.d.parser

std.language.xml.lexer
std.language.xml.parser
std.language.xml.dom

What do you suggest, having multiple lexers for different languages in the same module?

-- 
/Jacob Carlborg
February 26, 2014
On 2014-02-26 00:25, Dicebot wrote:

> Don't know if it makes sense to introduce random package categorization.
> I'd love to see more hierarchy in Phobos too but we'd first need to
> agree to package separation principles then.

Then that's what we need to do. I don't want any more top level modules. There are already too many.

-- 
/Jacob Carlborg
March 03, 2014
Bringing this back to the front page.
March 10, 2014
Reminder about benchmarks.

By the way, is the generated lexer usable at CTFE? An imagined use case: easier DSL implementation.
March 10, 2014
On Wednesday, 26 February 2014 at 18:07:37 UTC, Jacob Carlborg wrote:
> On 2014-02-26 00:25, Dicebot wrote:
>
>> Don't know if it makes sense to introduce random package categorization.
>> I'd love to see more hierarchy in Phobos too but we'd first need to
>> agree to package separation principles then.
>
> Then that's what we need to do. I don't want any more top level modules. There are already too many.

As much as I hate to say it, such a hierarchy is worth a DIP. Once it is formalized, I can move it through the review queue as if it were a new module proposal.
March 16, 2014
Initial review has finished. Voting will be delayed because Brian is currently busy and Walter's ongoing scopebuffer proposal needs to be processed first (per agreement with both Brian and Walter).

Anyone late for review can still leave comments; I am sure Brian will take them into consideration when making last-minute changes before voting.
March 16, 2014
On 02/22/2014 09:31 PM, "Marc Schütz" <schuetzm@gmx.net> wrote:
> But that still doesn't explain why a custom hash table implementation is
> necessary. Maybe a lightweight wrapper around built-in AAs is sufficient?

I'm also wondering what benefit this hash table provides.
April 14, 2014
On 21/02/2014 12:12 PM, Dicebot wrote:
> http://wiki.dlang.org/Review/std.lexer
>
> This is follow-up by Brian to his earlier proposal
> (http://wiki.dlang.org/Review/std.d.lexer). This time proposed module
> focuses instead on generic lexer generation as discussed in matching
> voting thread.
>
> Docs: http://hackerpilot.github.io/experimental/std_lexer/phobos/lexer.html
> Code: https://github.com/Hackerpilot/Dscanner/blob/master/stdx/lexer.d
>
> Example topics to evaluate during review:
>   - Feasibility of overall design and concept
>   - Quality of documentation
>   - Consistency with existing Phobos modules
>   - Overall quality of implementation
>
> Initial review will end on March the 8th.


I know the official review period is long past but I'd not had a good look at this module until this past weekend.

Last year I had been working on my own XML lexer/parser, but as of yet I have nothing to show for it, so I took a look at this proposal with an eye towards using it to make my efforts easier.

Andrei's posts about the possible design of a generic lexer had also influenced me, so I was expecting to find similarities between this module and my own work, albeit with the added benefits of being generic (in the good way). I have, however, found it very difficult to understand much of it, which I entirely put down to my own deficiencies with templates and especially the use of mixins.

In the example Dlang lexer, the constructor takes a ubyte[] as input and wraps it in a LexerRange struct, which defines the normal input range primitives as well as various functions for lookahead. It is not documented whether the core of the lexer needs these extra features or whether they are provided only for use by the tokenising functions that the user supplies to the template. If they are used by the core of the lexer, that would seem to preclude any other type of input that cannot be coerced into a ubyte[], unless the user goes to the effort of implementing the same interface.
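For readers unfamiliar with the module, a rough sketch of the kind of interface LexerRange appears to expose follows. The method names beyond the input range primitives (peek, canPeek) are illustrative assumptions based on the description above, not taken from the module's documentation:

```d
// Hedged sketch of a LexerRange-like wrapper over ubyte[]: the normal
// input range primitives plus lookahead helpers. Names and signatures of
// the lookahead functions are assumptions for illustration only.
struct SketchLexerRange
{
    ubyte[] bytes;
    size_t index;

    // input range primitives
    bool empty() const { return index >= bytes.length; }
    ubyte front() const { return bytes[index]; }
    void popFront() { ++index; }

    // lookahead without consuming input
    bool canPeek(size_t n = 1) const { return index + n < bytes.length; }
    ubyte peek(size_t n = 1) const { return bytes[index + n]; }
}
```

If the lexer core relies on lookahead like this, any non-array input would have to provide the same interface, which is the concern raised above.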

I think the description of the functionality required of the tokenSeparatingFunction that the user must supply needs to be much better. If I understand correctly, it is intended to differentiate between keywords and identifiers that begin with a keyword, but the more I think about this the less certain I am.
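Under the interpretation above (which, as noted, is uncertain), such a function might look at the character following a keyword match to decide whether the match is really an identifier. A minimal sketch, assuming that interpretation:

```d
// Hedged sketch: after matching a keyword such as "for", inspect the next
// character. If it could continue an identifier (as in "foreach" or
// "format"), the match is an identifier rather than the keyword "for".
bool isSeparating(ubyte c)
{
    import std.ascii : isAlphaNum;
    return !(isAlphaNum(c) || c == '_');
}

unittest
{
    // "for " -> 'f','o','r' followed by ' ': keyword
    assert(isSeparating(' '));
    // "foreach" -> "for" followed by 'e': identifier
    assert(!isSeparating('e'));
}
```

Whether this matches what the module actually requires is exactly the documentation gap being pointed out.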

When the lexer dispatches to a token handler, the front of the range is left pointing at the beginning of the character sequence that was matched, allowing it to be included in the returned token. However, many of the handlers in the example Dlang lexer begin with a number of blind popFront calls to jump to the end of that match. I am aware that in well-meaning code this is a case of the range being logically !empty, but I also wonder how often it might get overlooked when two matches of different lengths are dispatched to the same handler. (I had a similar situation in my own code; my solution was to store the .save of my input range and count how many characters had been consumed since it was last updated. This way I could either return the whole match, or part of it, in the token, or discard it and include only what came after it.) As there has been some contention about the correct use of the range primitives of late, I will refrain from making any other comment on their use in this module, especially as I am no longer sure that I have been using them correctly myself.
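The .save-plus-counter approach described in the parenthetical above can be sketched roughly as follows. This is a reconstruction of the reviewer's own technique, not code from the proposed module; all names are hypothetical:

```d
// Hedged sketch: remember where the current match started so a handler can
// recover the full matched slice (or part of it) instead of blindly calling
// popFront past characters it assumes are present.
struct MarkedRange(R)
{
    R range;        // current position
    R markStart;    // .save taken at the start of the current match
    size_t consumed;

    // call at the start of each match
    void mark() { markStart = range.save; consumed = 0; }

    bool empty() { return range.empty; }
    auto front() { return range.front; }
    void popFront() { range.popFront(); ++consumed; }

    // everything consumed since mark(), for inclusion in the token
    auto matched()
    {
        import std.range : take;
        return markStart.take(consumed);
    }
}
```

This requires a forward range (for .save), which is a stronger requirement than the input range primitives alone.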

In the short time that I have been looking at the features of this lexer, I have not been able to figure out a way of writing a standards-compliant XML parser without either lexing some parts of the document at least twice or subverting the token handlers to change behaviour according to context. Several non-compliant single-pass XML lexers would be possible, but they would not be able to process documents that use some (admittedly obscure and often overlooked) features. The only scalable technique I can think of that would allow XML to be lexed in a single pass in a fully spec-compliant way is to allow handlers to return multiple tokens. I am not sure how feasible this would be or what mechanism would be best to implement it.
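One conceivable mechanism for the multiple-token idea, offered purely as a sketch (nothing like this exists in the proposed module, and all names are invented): handlers append to a pending queue that the token range drains before invoking the next handler.

```d
// Hedged sketch: a token range that lets a handler emit several tokens
// for one stretch of input by queueing them. Entirely hypothetical.
struct Token { string type; string text; }

struct PendingQueue
{
    Token[] pending;

    // a handler could call this once or several times per dispatch
    void emit(Token[] toks...) { pending ~= toks; }

    bool empty() const { return pending.length == 0; }
    Token front() { return pending[0]; }
    void popFront() { pending = pending[1 .. $]; }
}
```

Whether this composes cleanly with the module's compile-time dispatch machinery is an open question.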

On the whole, I think the overall design of the module shows promise, but it requires polish to make it both more idiomatically Dlang-y and easier for the user to build upon (in both documentation and interface).

On a side note related to the example lexer for Dlang: I believe the predicate function isDocComment will produce false positives for the following comment delimiters, which to my knowledge are not valid DDoc delimiters...

//* //+ /*+ /*/ /+* /+/

As the Dlang lexer is not part of the review proper, I have not inspected it carefully; this function just happens to be the first one declared in that example.
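To illustrate how such false positives could arise, here is a sketch (not the module's actual implementation) of a naive check against a stricter one. DDoc delimiters are `///`, `/**`, and `/++`, i.e. the third character repeats the comment's own delimiter character:

```d
import std.algorithm.searching : canFind;

// Naive sketch: accepting any of '*', '+', '/' as the third character
// wrongly classifies //* //+ /*+ /*/ /+* /+/ as doc comments.
bool naiveIsDocComment(string c)
{
    return c.length >= 3 && "*/+".canFind(c[2]);
}

// Stricter sketch: the third character must match the second
// (`///`, `/**`, `/++`), rejecting all six false positives above.
bool strictIsDocComment(string c)
{
    return c.length >= 3 && c[2] == c[1];
}

unittest
{
    assert(strictIsDocComment("/// ddoc"));
    assert(strictIsDocComment("/** ddoc */"));
    assert(strictIsDocComment("/++ ddoc +/"));
    assert(naiveIsDocComment("//* not ddoc"));   // the bug being described
    assert(!strictIsDocComment("//* not ddoc"));
    assert(!strictIsDocComment("/*/"));
}
```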

Again, my apologies for the tardiness of this review.

A...