June 09, 2019
On 6/8/19 6:19 PM, Nick Sabalausky (Abscissa) wrote:
> On 6/8/19 10:48 AM, Andrei Alexandrescu wrote:
>> On 6/8/19 5:23 AM, H. S. Teoh wrote:
>>> It should have been. The old std.algorithm was a monster of 10,000 LOC
>>> that caused the compiler to exhaust my RAM and thrash on swap before
>>> dying horribly, when building unittests. It was an embarrassment.
>>
>> The appropriate response would have been (and still is) to fix the compiler. A more compact working set will also accelerate execution due to better locality.
> 
> Sheesh, this is *exactly* the sort of "Perfection is the enemy of the good" foot-shooting that's been keeping D years behind where it could be.

I did allow the breaking up of std.algorithm.
June 09, 2019
On 6/9/19 3:56 AM, Andrei Alexandrescu wrote:
> On 6/8/19 6:19 PM, Nick Sabalausky (Abscissa) wrote:
>> On 6/8/19 10:48 AM, Andrei Alexandrescu wrote:
>>> On 6/8/19 5:23 AM, H. S. Teoh wrote:
>>>> It should have been. The old std.algorithm was a monster of 10,000 LOC
>>>> that caused the compiler to exhaust my RAM and thrash on swap before
>>>> dying horribly, when building unittests. It was an embarrassment.
>>>
>>> The appropriate response would have been (and still is) to fix the compiler. A more compact working set will also accelerate execution due to better locality.
>>
>> Sheesh, this is *exactly* the sort of "Perfection is the enemy of the good" foot-shooting that's been keeping D years behind where it could be.
> 
> I did allow the breaking up of std.algorithm.

Yes, and that's definitely good - after all, it gave us a stopgap fix for the immediate problem while the proper solution is (or was?) still in the works. Besides, much of the time, once a "proper solution" does become available, the old stopgap can be rolled back or deprecated if necessary. (Not sure whether rolling back the change would be appropriate in std.algorithm's case, but ATM I'm not too terribly concerned with that either way.)

To clarify, in my previous post I wasn't really talking *specifically* about the breaking up of std.algorithm (after all, like you said, that DID go through). I was just speaking in general about the overall strategy you were promoting: preferring to forgo stopgap measures in the interim before correct solutions are available. (Unless I misunderstood?)

(Of course, in the cases where a correct solution is just as quick-n-simple as any stopgap, well, then yes, certainly the correct solution should just be done instead. Again, not saying this was or wasn't the case with the breaking up of std.algorithm, I'm just speaking in general terms.)
June 09, 2019
On 6/9/19 2:52 PM, Nick Sabalausky (Abscissa) wrote:
> I was just speaking in general about the overall strategy you were promoting: Preferring to forego stopgap measures in the interims before correct solutions are available. (Unless I misunderstood?)

I can tell at least what I tried to do - use good judgment for each such decision. More often than not I propose workarounds that the community turns its nose up at - see e.g. the lazy import. To this day I think that would have been a great thing to do. But no, we need to "wait" for the full lazy import that will never materialize.
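
As an aside, one loosely related stopgap that already exists in the language is the scoped import: an import written inside a template body is only processed when that template is actually instantiated, so code paths nobody uses cost nothing at compile time. A minimal sketch (the function name is made up for illustration; this is not the compiler-side lazy-import proposal itself):

// Scoped-import sketch: std.format is only semantically processed when
// describe!T is instantiated somewhere; if no caller ever uses it,
// the import is never paid for.
string describe(T)(T value)
{
    import std.format : format;
    return format("%s of type %s", value, T.stringof);
}

void main()
{
    // Uncommenting the next line is what actually instantiates describe!int
    // and pulls in std.format:
    // auto s = describe(42);
}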
June 10, 2019
On Sunday, 9 June 2019 at 19:51:14 UTC, Andrei Alexandrescu wrote:
> On 6/9/19 2:52 PM, Nick Sabalausky (Abscissa) wrote:
>> I was just speaking in general about the overall strategy you were promoting: Preferring to forego stopgap measures in the interims before correct solutions are available. (Unless I misunderstood?)
>
> I can tell at least what I tried to do - use good judgment for each such decision. More often than not I propose workarounds that the community turns its nose at - see e.g. the lazy import. To this day I think that would have been a great thing to do. But no, we need to "wait" for the full lazy import that will never materialize.


Has anyone merged all of Phobos into one large file, removed all the modules (or modified the compiler to handle multiple modules per file and internally break them up), and seen where the bottleneck truly is? Is it specific template use or just templates in general? Is there a specific, often-used template design pattern that kills D's compile times?

And alternatively, break Phobos up into many more files, one per template, and see how that performs.

Maybe there are specific performance bottlenecks in the compiler that could be addressed, such as parallelizing compilation or rewriting the slow parts to be faster.

What I'm seeing is that it seems no one really knows the true culprit. Is it the file layout or the templates? And each issue then branches...

After all, maybe it is a combination, and all the areas could be optimized a bit. Even a 1% improvement is 1%, and if it stacks with another 1%, then one has 2%. A journey starts with the first 1%.

June 10, 2019
On 6/10/19 4:54 AM, Amex wrote:
> On Sunday, 9 June 2019 at 19:51:14 UTC, Andrei Alexandrescu wrote:
>> On 6/9/19 2:52 PM, Nick Sabalausky (Abscissa) wrote:
>>> I was just speaking in general about the overall strategy you were promoting: Preferring to forego stopgap measures in the interims before correct solutions are available. (Unless I misunderstood?)
>>
>> I can tell at least what I tried to do - use good judgment for each such decision. More often than not I propose workarounds that the community turns its nose at - see e.g. the lazy import. To this day I think that would have been a great thing to do. But no, we need to "wait" for the full lazy import that will never materialize.
> 
> 
> Has anyone merged all of phobos in to one large file, removed all the modules(or modify the compile to handle multiple modules per file and internally break them up) and see where the bottle neck truly is? Is it specific template use or just all templates? Is there a specific template design pattern that is often used that kills D?
> 
> And alternatively, break phobos up in to many more files, one per template... and see the performance of it.
> 
> Maybe there is specific performance blocks in the compiler and those could be rewritten such as parallel compilation or rewriting that part of the compiler to be faster or whatever..
> 
> What I'm seeing is that it seems no one really knows the true culprit. Is it the file layout or templates? and each issue then branches...
> 
> After all, maybe it is a combination and all the areas could be optimized better. Even a 1% increase is 1% and if it stacks with another 1% then one has 2%. A journey starts with the first 1%.

Not if that 1% costs 99% of your budget.

June 10, 2019
On 6/8/19 3:12 AM, Andrei Alexandrescu wrote:
> On 1/21/19 2:35 PM, Neia Neutuladh wrote:
>> On Mon, 21 Jan 2019 19:10:11 +0000, Vladimir Panteleev wrote:
>>> On Monday, 21 January 2019 at 19:01:57 UTC, Steven Schveighoffer wrote:
>>>> I still find it difficult to believe that calling exists x4 is a huge
>>>> culprit. But certainly, caching a directory structure is going to be
>>>> more efficient than reading it every time.
>>>
>>> For large directories, opendir+readdir, especially with stat, is much
>>> slower than open/access.
>>
>> We can avoid stat() except with symbolic links.
>>
>> Opendir + readdir for my example would be about 500 system calls, so it
>> breaks even with `import std.stdio;` assuming the cost per call is
>> identical and we're reading eagerly. Testing shows that this is the case.
> 
> Another simple test:
> 
> import std.experimental.all;
> void main(){}
> 
> Use "time -c test.d". On my SSD laptop that takes 0.55 seconds. Without the import, it takes 0.02 seconds. In an ideal world there should be no difference. Those 0.53 seconds are the upper bound of the gains to be made by first-order improvements to import mechanics. (IMHO: low impact yet not negligible.)


Might it be due to something like this?

https://issues.dlang.org/show_bug.cgi?id=19874

-Steve
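
For anyone who wants to reproduce the measurement Andrei quotes above, here's a minimal sketch (the exact dmd invocation is a guess at what was meant; timings will obviously vary by machine):

// test.d -- empty program that eagerly imports (nearly) all of Phobos.
// Time a compile-only build with and without the import line, e.g.:
//   time dmd -c test.d
// The difference between the two runs bounds the cost of the import graph
// plus whatever module-level templates/CTFE those imports drag in.
import std.experimental.all;

void main() {}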
June 13, 2019
On Saturday, 8 June 2019 at 00:00:17 UTC, Mike Franklin wrote:
> On Friday, 7 June 2019 at 16:38:56 UTC, Seb wrote:
>
>> Reading files is really cheap, evaluating templates and running CTFE isn't. That's why importing Phobos modules is slow - not because of the files it imports, but because of all the CTFE these imports trigger.
>
> Yes, that makes much more sense to me.  But, if that's the case, what's all the concern from Walter and Andrei expressed in this thread and in the conversations linked below?
>
> https://forum.dlang.org/post/q7dpmg$29oq$1@digitalmars.com
> https://github.com/dlang/druntime/pull/2634#issuecomment-499494019
> https://github.com/dlang/druntime/pull/2222#issuecomment-398390889
>
> Mike

If they wanted to make DMD faster, compiling DMD itself with LDC would make it truly faster (it more than halves the runtime!!).

As mentioned above, the real reason for the import overhead is the ton of templates and CTFE evaluations these imports trigger, and those either need to be cached, made faster, made lazier, or reduced if any significant performance gains are expected. Tweaking the file tree won't help.
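
To make that concrete, here's the kind of thing that can make merely importing a module expensive (a hypothetical module, not actual Phobos code): any module-scope enum with a computed initializer has to be run through CTFE when the compiler analyzes the module, whether or not the importer ever touches the result.

// ctfe_on_import.d -- illustration only; all names are made up.
module ctfe_on_import;

// This initializer is evaluated by CTFE as part of analyzing the module,
// i.e. as a consequence of `import ctfe_on_import;` alone, even if the
// importer never reads crcTable.
enum uint[256] crcTable = buildCrcTable();

private uint[256] buildCrcTable()
{
    uint[256] table;
    foreach (i; 0 .. 256)
    {
        uint c = cast(uint) i;
        foreach (_; 0 .. 8)
            c = (c & 1) ? 0xEDB8_8320 ^ (c >> 1) : (c >> 1);
        table[i] = c;
    }
    return table;
}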
