May 22, 2012
On Tuesday, 22 May 2012 at 18:59:48 UTC, Jacob Carlborg wrote:
> On 2012-05-22 20:33, Roman D. Boiko wrote:
>
>> Yes, and even before that I'm going to document some fundamental
>> primitives, like immutability and core data structures.
>
> Wouldn't it be better to start with the use cases? You probably already have a fairly good idea about the use cases, but in theory the use cases could change what the data structures might look like.

Yes, that's the intention. I meant that *before Lexer* I must deal with some critical (fundamental) primitives.
May 22, 2012
Le 22/05/2012 20:33, Roman D. Boiko a écrit :
> On Tuesday, 22 May 2012 at 18:10:59 UTC, Jacob Carlborg wrote:
>> On 2012-05-22 19:14, Roman D. Boiko wrote:
>>
>>> This is a draft of use cases for all DCT libraries combined.
>>
>> This seems to be mostly focused on lexing? See below for some ideas.
> Yeah, about 50% is lexing. I pay more attention to it because lexer
> alone is enough for several uses. I would like to have at least some
> functionality used as early as possible, this would provide me great
> feedback.
>

Providing it as a Range of Tokens would be the awesomest design decision you could ever make.
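For reference, a minimal sketch of what such a token range could look like in D (the `Token` fields and the lexing logic here are hypothetical, not DCT's actual API):

```d
import std.range : isInputRange;

/// Hypothetical token type; DCT's actual fields may differ.
struct Token
{
    enum Kind { identifier, keyword, literal, operator, eof }
    Kind kind;
    string text;   // slice of the source buffer, no copying
    size_t offset; // start position in the source
}

/// An input range of tokens lazily lexed from a source buffer.
struct TokenRange
{
    private string source;
    private Token current;

    this(string source) { this.source = source; popFront(); }

    @property bool empty() const { return current.kind == Token.Kind.eof; }
    @property Token front() const { return current; }
    void popFront() { /* lex the next token from `source` into `current` */ }
}

// The whole point: it plugs into foreach and the std.range/std.algorithm machinery.
static assert(isInputRange!TokenRange);
```

Anything downstream (parser, syntax highlighter, symbol indexer) can then consume the tokens without caring how they are produced.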

>>> Scope for DCT is to provide semantic analysis, but not code
>>> generation (that may become another project some time). Information
>>> about projects, etc., is useful for e.g., analysing dependencies.
>>
>> That's a good point.
>>
>>> I'll improve overall structure and add some explanations + examples
>>> tomorrow. Could you elaborate on specific points which are vague?
>>
>> I would probably have specified some high level use cases first, like:
>>
>> * IDE integration
>> * Refactoring tool
>> * Static analysis
>> * Compiler
>> * Doc generating
>> * Build tool
> Thanks! I didn't think about a build tool, for example.
>
> I started this way, but after your comment on my previous post that
> there is nothing new I reconsidered my approach and decided to start
> from the concrete (low-level), then improve it according to feedback, and
> then split into areas (which roughly correspond to your high-level use
> cases).
>
>> In general, use cases that can span several compile phases, i.e.
>> lexing, parsing, semantic analysis and so on. Some of these use cases
>> can be broken into several new use cases at a lower level. Some
>> examples:
>>
>> IDE integration:
>>
>> * Syntax highlighting
>> * Code completion
>> * Showing lex, syntax and semantic errors
>>
>> Refactoring:
>>
>> * Cross-referencing symbols
>>
>> Build tool:
>>
>> * Tracking module dependencies
>>
>> Doc generating:
>>
>> * Associate a declaration and its documentation
>>
>> Some of these "sub" use cases are needed by several tools, then you
>> can either repeat them or pick unique sub use cases for each high
>> level use case.
>>
>> Then you can get into more detail over lower level use cases for the
>> different compile phases. If you have enough to write you could
>> probably have a post about the use cases for each phase.
>
> Thanks for examples.
>
>> It seems some of your use cases are implementation details or design
>> goals, like "Store text efficiently".
> Actually, many of those are architectural (although low-level), because
> they are key to achieving the project goals, and failing in this area
> could cause overall failure. I intend to move any non-architectural
> information into a separate series of posts; feel free to comment on
> anything you don't consider important for the architecture (but probably
> don't start yet, I'm reviewing the text right now).
>
>> It would not be necessary to start with the high level goals, but it
>> would be nice. The next best thing would probably be to start with the
>> use cases for the compiler phase you have already started on, that is
>> lexing, if I have understood everything correctly.
>
> Yes, and even before that I'm going to document some fundamental
> primitives, like immutability and core data structures.

May 22, 2012
On Tuesday, 22 May 2012 at 20:31:40 UTC, deadalnix wrote:
> Le 22/05/2012 20:33, Roman D. Boiko a écrit :
>> On Tuesday, 22 May 2012 at 18:10:59 UTC, Jacob Carlborg wrote:
>>> On 2012-05-22 19:14, Roman D. Boiko wrote:
>>>
>>>> This is a draft of use cases for all DCT libraries combined.
>>>
>>> This seems to be mostly focused on lexing? See below for some ideas.
>> Yeah, about 50% is lexing. I pay more attention to it because lexer
>> alone is enough for several uses. I would like to have at least some
>> functionality used as early as possible, this would provide me great
>> feedback.
>>
>
> Providing it as a Range of Tokens would be the awesomest design decision you could ever make.
Indeed :)
May 22, 2012
On Tuesday, 22 May 2012 at 20:31:40 UTC, deadalnix wrote:
> Le 22/05/2012 20:33, Roman D. Boiko a écrit :
>> On Tuesday, 22 May 2012 at 18:10:59 UTC, Jacob Carlborg wrote:
>>> On 2012-05-22 19:14, Roman D. Boiko wrote:
>>>
>>>> This is a draft of use cases for all DCT libraries combined.
>>>
>>> This seems to be mostly focused on lexing? See below for some ideas.
>> Yeah, about 50% is lexing. I pay more attention to it because lexer
>> alone is enough for several uses. I would like to have at least some
>> functionality used as early as possible, this would provide me great
>> feedback.
>>
>
> Providing it as a Range of Tokens would be the awesomest design decision you could ever make.
Agree
May 23, 2012
On Tuesday, 22 May 2012 at 18:33:38 UTC, Roman D. Boiko wrote:
> I'm reviewing text right now
Posted an updated version, but it is still a draft:

http://d-coding.com/2012/05/23/dct-use-cases-revised.html
May 23, 2012
On 2012-05-23 17:36, Roman D. Boiko wrote:
> On Tuesday, 22 May 2012 at 18:33:38 UTC, Roman D. Boiko wrote:
>> I'm reviewing text right now
> Posted an updated version, but it is still a draft:
>
> http://d-coding.com/2012/05/23/dct-use-cases-revised.html

That's a lot better :)

-- 
/Jacob Carlborg
July 25, 2012
On Wednesday, 23 May 2012 at 15:36:59 UTC, Roman D. Boiko wrote:
> On Tuesday, 22 May 2012 at 18:33:38 UTC, Roman D. Boiko wrote:
>> I'm reviewing text right now
> Posted an updated version, but it is still a draft:
>
> http://d-coding.com/2012/05/23/dct-use-cases-revised.html

I think one of the key challenges will be incremental updates. You could perhaps afford to reparse entire source files on each keystroke, assuming DCT runs on a PC*, but you don't want to repeat the whole semantic analysis of several modules on every keystroke. (*although, in all seriousness, I hope someday to browse/write code in a smartphone/tablet IDE, without killing battery life)

D in particular makes standard IDE features difficult when the code uses a lot of CTFE just to decide its meaning, e.g. a "static if" that computes 1_000_000 digits of pi and decides whether to declare method "foo" or method "bar" based on whether the last digit is odd or even.

Of course, code does not normally waste the compiler's time deliberately, but these sorts of things can easily crop up accidentally. So DCT could profile its own operation and report to the user which analyses and functions are taking the longest to run.

Ideally, somebody would design an algorithm that, given a location where the syntax tree has changed, figures out what parts of the code are impacted by that change and only re-runs semantic analysis on the code whose meaning has potentially changed.

But maybe that is just too hard. A simple approach would be to just re-analyze the whole damn program, but prioritize the analysis so that whatever code the user is looking at is re-analyzed first. This could be enhanced with a simple-minded dependency tree, so that changing module X does not trigger reinterpretation of module Y if Y does not directly or indirectly use X at all.
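That simple-minded dependency pruning could be sketched roughly like this in D (the module names and the reverse-dependency map are made up for illustration):

```d
import std.algorithm : canFind;

/// Maps each module to the modules that import it (reverse dependency graph).
alias ImportedBy = string[][string];

/// Returns every module whose analysis is potentially invalidated by a
/// change to `changed`: the module itself plus all direct and indirect
/// importers. Everything else can keep its cached analysis.
string[] modulesToReanalyze(ImportedBy importedBy, string changed)
{
    string[] result;
    string[] stack = [changed];
    while (stack.length)
    {
        auto m = stack[$ - 1];
        stack = stack[0 .. $ - 1];
        if (result.canFind(m)) continue;
        result ~= m;
        stack ~= importedBy.get(m, null);
    }
    return result;
}

unittest
{
    // Y imports X; Z imports nothing relevant.
    ImportedBy g = ["X": ["Y"], "Y": [], "Z": []];
    assert(modulesToReanalyze(g, "X") == ["X", "Y"]);
    assert(modulesToReanalyze(g, "Z") == ["Z"]); // X's importers untouched
}
```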

By using multiple threads for analysis, long computations wouldn't block analysis of the "easy parts"; but several threads could get stuck waiting on the same thing. For example, it would seem to me that if a module X contains a slow "static if" at module scope, ANY other module that imports X cannot resolve ANY unqualified function calls until that "static if" is done processing, because the contents of the "static if" MIGHT create new overloads that have to be considered. So, when a thread gets stuck, it needs to be able to look for other work to do instead.
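The "look for other work" part can be as simple as a cooperative queue where a blocked task goes to the back instead of stalling its worker. A toy single-threaded sketch (a real implementation would run workers in parallel and detect tasks that can never unblock):

```d
enum TaskState { done, blocked }

/// An analysis task reports whether it finished or is still waiting on
/// something (e.g. a slow "static if" in an imported module).
alias AnalysisTask = TaskState delegate();

/// Run tasks round-robin: a blocked task is re-enqueued so that other
/// modules' analysis can proceed in the meantime.
void runUntilDone(AnalysisTask[] queue)
{
    while (queue.length)
    {
        auto task = queue[0];
        queue = queue[1 .. $];
        if (task() == TaskState.blocked)
            queue ~= task; // retry after other work has made progress
    }
}

unittest
{
    int[] order;
    bool xDone = false;
    AnalysisTask analyzeX = { xDone = true; order ~= 1; return TaskState.done; };
    // Y needs X's results, so it blocks until X has been analyzed.
    AnalysisTask analyzeY = {
        if (!xDone) return TaskState.blocked;
        order ~= 2; return TaskState.done;
    };
    runUntilDone([analyzeY, analyzeX]);
    assert(order == [1, 2]); // X finished first, then Y unblocked
}
```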

In any case, since D is Turing-complete and CTFE may enter infinite loops (or just very long ones), an IDE will occasionally need to terminate threads and restart analysis, so the analysis threads must be killable, but hopefully it could be designed so that analysis doesn't have to restart from scratch.

I guess immutable data structures will therefore be quite important in the design, which you seem to be aware of already.
July 26, 2012
On Wednesday, 23 May 2012 at 15:36:59 UTC, Roman D. Boiko wrote:
> On Tuesday, 22 May 2012 at 18:33:38 UTC, Roman D. Boiko wrote:
>> I'm reviewing text right now
> Posted an updated version, but it is still a draft:
>
> http://d-coding.com/2012/05/23/dct-use-cases-revised.html

BTW, have you seen the video by Bret Victor entitled "Inventing on Principle"? This should be a use case for DCT:

http://vimeo.com/36579366

The most important part for the average (nongraphical) developer is his demo of writing a binary search algorithm. It may be difficult to use an ordinary debugger to debug CTFE, template overload resolution and "static if" statements, but something like Bret's demo, or what the Light Table IDE is supposed to do...

http://www.kickstarter.com/projects/ibdknox/light-table

...would be perfect for compile-time debugging, and not only that, it would also help people write their code in the first place, including (obviously) code intended for run-time.

P.S. oh how nice it would be if we could convince anyone to pay us to develop these compiler tools... just minimum wage would be soooo nice.
July 26, 2012
On 2012-07-26 01:46, David Piepgrass wrote:

> I think one of the key challenges will be incremental updates. You could
> perhaps afford to reparse entire source files on each keystroke,
> assuming DCT runs on a PC*, but you don't want to repeat the whole
> semantic analysis of several modules on every keystroke. (*although, in
> all seriousness, I hope someday to browse/write code in a
> smartphone/tablet IDE, without killing battery life)

It would be nice if not even lexing or parsing needed to be done on the whole file.
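For lexing at least, that is achievable: keep the previous token array and re-lex only the tokens touching the edit. A rough sketch (with a hypothetical `Token` carrying offsets; a real incremental lexer must also re-synchronize token boundaries after the edited region and shift later offsets by the edit's size delta):

```d
/// Hypothetical token carrying its source offsets.
struct Token { size_t start, end; }

/// Given the previous lex's tokens and an edit over [editStart, editEnd),
/// compute the index range [first, last) of tokens that must be re-lexed.
/// Tokens before `first` are reused as-is; tokens from `last` on are
/// reused with shifted offsets.
void affectedTokens(const(Token)[] tokens, size_t editStart, size_t editEnd,
                    out size_t first, out size_t last)
{
    first = 0;
    while (first < tokens.length && tokens[first].end < editStart)
        ++first;
    last = tokens.length;
    while (last > first && tokens[last - 1].start > editEnd)
        --last;
}

unittest
{
    // Three tokens at [0,3), [4,7), [8,11); an edit inside the second one.
    auto tokens = [Token(0, 3), Token(4, 7), Token(8, 11)];
    size_t first, last;
    affectedTokens(tokens, 5, 6, first, last);
    assert(first == 1 && last == 2); // only the middle token is re-lexed
}
```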

-- 
/Jacob Carlborg
July 26, 2012
On 2012-07-26 02:03, David Piepgrass wrote:

> BTW, have you seen the video by Bret Victor entitled "Inventing on
> Principle"? This should be a use case for DCT:
>
> http://vimeo.com/36579366
>
> The most important part for the average (nongraphical) developer is his
> demo of writing a binary search algorithm. It may be difficult to use an
> ordinary debugger to debug CTFE, template overload resolution and
> "static if" statements, but something like Bret's demo, or what the
> Light Table IDE is supposed to do...
>
> http://www.kickstarter.com/projects/ibdknox/light-table
>
> ...would be perfect for compile-time debugging, and not only that, it
> would also help people write their code in the first place, including
> (obviously) code intended for run-time.
>
> P.S. oh how nice it would be if we could convince anyone to pay us to
> develop these compiler tools... just minimum wage would be soooo nice.

That is so cool :)

-- 
/Jacob Carlborg