June 19, 2012
On 17/06/2012 00:41, Walter Bright wrote:
> On 6/14/2012 11:58 PM, Don Clugston wrote:
>> And we're well set up for parallel compilation. There's no shortage of
>> things we
>> can do to improve compilation time.
>
> The language is carefully designed, so that at least in theory all the
> passes could be done in parallel. I've got the file reads in parallel,
> but I'd love to have the lexing, parsing, semantic, optimization, and
> code gen all done in parallel. Wouldn't that be awesome!
>
>> Using di files for speed seems a bit like jettisoning the cargo to
>> keep the ship
>> afloat. It works but you only do it when you've got no other options.
>
> .di files don't make a whole lotta sense for small files, but the bigger
> they get, the more they are useful. D needs to be scalable to enormous
> project sizes.

The key point here is project size. I wouldn't expect individual file sizes to grow significantly.
June 19, 2012
On 18/06/2012 19:53, Walter Bright wrote:
> On 6/18/2012 6:07 AM, Don Clugston wrote:
>> On 17/06/12 00:37, Walter Bright wrote:
>>> On 6/14/2012 1:03 AM, Don Clugston wrote:
>>>>> It is for debug builds.
>>>> Iain's data indicates that it's only a few % of the time taken on
>>>> semantic1().
>>>> Do you have data that shows otherwise?
>>>
>>> Nothing recent, it's mostly from my C++ compiler testing.
>>
>> But you argued in your blog that C++ parsing is inherently slow, and
>> you've
>> fixed those problems in the design of D.
>> And as far as I can tell, you were extremely successful!
>> Parsing in D is very, very fast.
>
> Yeah, but I can't escape that lingering feeling that lexing is slow.
>
> I was fairly disappointed that asynchronously reading the source files
> didn't have a measurable effect most of the time.

This is mostly a matter of belief at this point. We need data.
June 25, 2012
On Mon, 18 Jun 2012 13:53:43 -0400, Walter Bright <newshound2@digitalmars.com> wrote:

> On 6/18/2012 6:07 AM, Don Clugston wrote:
>> On 17/06/12 00:37, Walter Bright wrote:
>>> On 6/14/2012 1:03 AM, Don Clugston wrote:
>>>>> It is for debug builds.
>>>> Iain's data indicates that it's only a few % of the time taken on
>>>> semantic1().
>>>> Do you have data that shows otherwise?
>>>
>>> Nothing recent, it's mostly from my C++ compiler testing.
>>
>> But you argued in your blog that C++ parsing is inherently slow, and you've
>> fixed those problems in the design of D.
>> And as far as I can tell, you were extremely successful!
>> Parsing in D is very, very fast.
>
> Yeah, but I can't escape that lingering feeling that lexing is slow.
>
> I was fairly disappointed that asynchronously reading the source files didn't have a measurable effect most of the time.

I have found that my project, which has a huge number of symbols (and large ones), compiles much slower than I would expect.  Perhaps you have forgotten about this issue:

http://d.puremagic.com/issues/show_bug.cgi?id=4900

Maybe fixing this still doesn't help parsing, not sure.

-Steve
June 25, 2012
On Mon, 18 Jun 2012 19:53:43 +0200, Walter Bright <newshound2@digitalmars.com> wrote:

> On 6/18/2012 6:07 AM, Don Clugston wrote:
>> On 17/06/12 00:37, Walter Bright wrote:
>>> On 6/14/2012 1:03 AM, Don Clugston wrote:
>>>>> It is for debug builds.
>>>> Iain's data indicates that it's only a few % of the time taken on
>>>> semantic1().
>>>> Do you have data that shows otherwise?
>>>
>>> Nothing recent, it's mostly from my C++ compiler testing.
>>
>> But you argued in your blog that C++ parsing is inherently slow, and you've
>> fixed those problems in the design of D.
>> And as far as I can tell, you were extremely successful!
>> Parsing in D is very, very fast.
>
> Yeah, but I can't escape that lingering feeling that lexing is slow.
>
> I was fairly disappointed that asynchronously reading the source files didn't have a measurable effect most of the time.

Lexing is definitely taking a big part of debug compilation time.
I haven't profiled the compiler for some time now but here are some thoughts.

- speeding up the identifier hash table
  there was always a profile spike at StringTable::lookup, though it has shrunk
  since you increased the bucket count

- memory mapping the source file saves a copy for UTF-8 sources
  this is by far the fastest way to read a source file

- parallel reading/parsing doesn't help much if most of the source files are
  read during import semantic

I'm regularly hitting other bottlenecks, so I don't think lexing is #1.
When compiling std.range with unittests, for example, more than 50% of the compile time
is spent checking for existing template instantiations, using O(N^2)/2 comparisons of template arguments.
If we managed to fix http://d.puremagic.com/issues/show_bug.cgi?id=7469 we could efficiently use
the mangled name as the key.