November 29, 2022

On Tuesday, 29 November 2022 at 07:15:25 UTC, Walter Bright wrote:

> On 11/27/2022 1:29 AM, FeepingCreature wrote:
>
>> 1. But also, if we're talking about number of instantiations, hasUDA and getUDAs lead the pack. I think the way these work is just bad. I've rewritten all my own hasUDA/getUDAs code to be of the form udaIndex!(U, __traits(getAttributes, T)), since instantiating a unique copy for every combination of field and UDA is borderline quadratic. But that didn't help much, even though -vtemplates hinted that it should; -vtemplates needs to attribute compiler time to templates recursively.
>
> hasUDA and getUDAs are defined:
>
>     enum hasUDA(alias symbol, alias attribute) = getUDAs!(symbol, attribute).length != 0;
>
>     template getUDAs(alias symbol, alias attribute)
>     {
>         import std.meta : Filter;
>
>         alias getUDAs = Filter!(isDesiredUDA!attribute, __traits(getAttributes, symbol));
>     }
>
> These do look pretty inefficient. Who wants to fix Phobos with FeepingCreature's solution?

Well, in his codebase I ended up just redefining hasUDA in terms of udaIndex, and even though hasUDA led the pack in -vtemplates, this didn't result in any noticeable speedup. I think even though hasUDA gets instantiated a lot, it doesn't account for much actual compile time. Unfortunately, there's no good way to know this without porting everything, which I think is the actual problem.
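For illustration, here is a minimal sketch of the udaIndex approach described above. The names udaIndex and hasUdaFast are hypothetical (this is neither the Phobos code nor FeepingCreature's actual implementation); the point is that the attribute tuple is scanned in a single CTFE pass, instead of going through a chain of Filter/isDesiredUDA instantiations for every (symbol, attribute) pair.

```d
// Sketch only: a udaIndex that scans the attribute tuple in one CTFE
// pass rather than filtering via recursive template instantiations.
// The names udaIndex and hasUdaFast are illustrative, not Phobos APIs.
enum long udaIndex(alias attribute, attributes...) = {
    long result = -1;
    static foreach (i, attr; attributes)
    {
        // Match either the bare attribute type (@Serialize) or an
        // instance of it (@Serialize("id")).
        static if (__traits(isSame, attr, attribute) || is(typeof(attr) == attribute))
        {
            if (result == -1)
                result = i;
        }
    }
    return result;
}();

// hasUDA redefined in terms of udaIndex, as described in the post.
enum hasUdaFast(alias symbol, alias attribute) =
    udaIndex!(attribute, __traits(getAttributes, symbol)) != -1;

// Usage:
struct Serialize { string name; }

struct S
{
    @Serialize("id") int id;
    int plain;
}

static assert(hasUdaFast!(S.id, Serialize));
static assert(!hasUdaFast!(S.plain, Serialize));
```

Whether this actually helps depends on the codebase; as noted above, hasUDA dominating the -vtemplates counts did not translate into a measurable compile-time win.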

November 30, 2022
On 30/11/2022 1:43 AM, FeepingCreature wrote:
> Unfortunately there's no good way to know this without porting everything

If we had some way to determine the cost of a template instantiation, we would have a good idea, but that tool is currently missing. This feature would be very high value.

November 29, 2022

On Sunday, 27 November 2022 at 09:29:29 UTC, FeepingCreature wrote:

> ...
>
> • To be fair, his computer isn't the fastest. But it's an 8-core AMD, so DMD's lack of internal parallelization hurts it here. This will only get worse in the future.

Hello, I'm far from being a D compilation specialist, but in case this is of any use or inspiration: I've been using parallel compilation for a few years now, recompiling only the changed files, one by one, distributed over the available cores, then linking.

Here it is, just one bash script: https://github.com/glathoud/d_glat/blob/master/dpaco.sh (So far used with LDC only.)

The result is far from perfect: sometimes the resulting binary does not reflect a code change, but 80-90% of the time it does, and I don't have to maintain a build system at all. Overall this approach saves quite a bit of time and improves motivation, since I only have to wait a few seconds on a project that has grown to about 180 D files. My use of templates is limited but regular.

If there is, or ever will be, a 100% reliable way to do parallel compilation without a build system, that would be wonderful, and not just for me, I guess.

Best regards,
Guillaume
