September 24, 2019
On Tuesday, 24 September 2019 at 07:41:35 UTC, Martin Tschierschke wrote:
> Thank you, I found this too, but it is more an example of the principle, but what is the use case?
>
> It is only useful if the instruction set of the compiling computer differs from the target
> hardware, and by this you get
>
>> using host processor instruction set
>
> ???

If you don't want to ship 10 fine-tuned binaries for 10 different CPUs (see `-mcpu=?`), you can use JIT to compile and tune performance-critical pieces for the executing/target CPU. E.g., letting the auto-vectorizer exploit the full register width for AVX-512 CPUs etc.
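A minimal sketch of that workflow with LDC's JIT support (needs `-enable-dynamic-compile`; the hot loop here is just an illustration, and the exact `CompilerSettings` fields should be checked against `ldc/dynamic_compile.d`):

```d
import ldc.attributes : dynamicCompile;
import ldc.dynamic_compile : CompilerSettings, compileDynamicCode;

// Emitted as bitcode at build time, then codegen'd at program startup
// for the actual host CPU, so the auto-vectorizer can use whatever
// register width (e.g. AVX-512) the executing machine offers.
@dynamicCompile
void scale(float[] data, float factor)
{
    foreach (ref x; data)
        x *= factor;
}

void main()
{
    CompilerSettings settings;
    settings.optLevel = 3;        // tune for the executing CPU
    compileDynamicCode(settings); // JIT all @dynamicCompile functions

    auto buf = new float[1024];
    buf[] = 2.0f;
    scale(buf, 3.0f);
}
```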
September 24, 2019
On Tuesday, 24 September 2019 at 16:48:48 UTC, kinke wrote:
> [snip]
>
> If you don't want to ship 10 fine-tuned binaries for 10 different CPUs (see `-mcpu=?`), you can use JIT to compile and tune performance-critical pieces for the executing/target CPU. E.g., letting the auto-vectorizer exploit the full register width for AVX-512 CPUs etc.

Ivan provided an example here [1] (you recommended he write it up on the wiki).

[1] https://forum.dlang.org/thread/bskpxhrqyfkvaqzoospx@forum.dlang.org
September 24, 2019
On Tuesday, 24 September 2019 at 17:49:13 UTC, jmh530 wrote:
>

Regarding bind call overhead: a bind object holds a pointer to a shared payload, which is allocated via malloc. This payload contains a function pointer (initially null).
During the compileDynamicCode call, the runtime updates this pointer to the generated code.
The bind object's opCall calls this function pointer from the payload.

Call itself
https://github.com/ldc-developers/ldc/blob/v1.18.0-beta1/runtime/jit-rt/d/ldc/dynamic_compile.d#L352
https://github.com/ldc-developers/ldc/blob/v1.18.0-beta1/runtime/jit-rt/d/ldc/dynamic_compile.d#L493

toDelegate
https://github.com/ldc-developers/ldc/blob/v1.18.0-beta1/runtime/jit-rt/d/ldc/dynamic_compile.d#L509
https://github.com/ldc-developers/ldc/blob/v1.18.0-beta1/runtime/jit-rt/d/ldc/dynamic_compile.d#L355
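For reference, a small usage sketch of the mechanism described above (assuming the usual `-enable-dynamic-compile` setup): before `compileDynamicCode` runs, the payload's function pointer is still null, so invoking the bind object would fail; afterwards, `opCall` dispatches through the now-populated pointer.

```d
import ldc.attributes : dynamicCompile;
import ldc.dynamic_compile;

@dynamicCompile
int mul(int a, int b) { return a * b; }

void main()
{
    // Bake `b = 3` into the jitted code as a constant; the first
    // parameter stays a runtime argument via `placeholder`.
    auto tripler = ldc.dynamic_compile.bind(&mul, placeholder, 3);

    compileDynamicCode(); // fills in the payload's function pointer

    assert(tripler(7) == 21);       // opCall -> function pointer in payload
    auto dg = tripler.toDelegate(); // plain delegate over the same payload
    assert(dg(7) == 21);
}
```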
September 24, 2019
On Tuesday, 24 September 2019 at 18:24:36 UTC, Ivan Butygin wrote:
> On Tuesday, 24 September 2019 at 17:49:13 UTC, jmh530 wrote:
>>
>
> Regarding bind call overhead: a bind object holds a pointer to a shared payload, which is allocated via malloc. This payload contains a function pointer (initially null).
> During the compileDynamicCode call, the runtime updates this pointer to the generated code.
> The bind object's opCall calls this function pointer from the payload.
>
> [snip]

That's very helpful. The bind stuff is making a little more sense to me now.

Is there a concern that ldc cannot inline these function pointers versus the normal function calls?
September 24, 2019
On Tuesday, 24 September 2019 at 19:17:25 UTC, jmh530 wrote:
> On Tuesday, 24 September 2019 at 18:24:36 UTC, Ivan Butygin wrote:
>> On Tuesday, 24 September 2019 at 17:49:13 UTC, jmh530 wrote:
>>>
>>
>> Regarding bind call overhead: a bind object holds a pointer to a shared payload, which is allocated via malloc. This payload contains a function pointer (initially null).
>> During the compileDynamicCode call, the runtime updates this pointer to the generated code.
>> The bind object's opCall calls this function pointer from the payload.
>>
>> [snip]
>
> That's very helpful. The bind stuff is making a little more sense to me now.
>
> Is there a concern that ldc cannot inline these function pointers versus the normal function calls?

We probably can't do anything about static->jit call overhead. Just try to JIT big enough functions to make this overhead negligible. But jit->static calls will be optimized: when the compiler sees a call to another function in jitted code and the function body is available, it will try to pull that function into the jitted code as well, even if it isn't marked @dynamicCompile/@dynamicCompileEmit. Static calls to such functions will still use the static version, but the JIT will use its own version, which can be optimized together with the rest of the jitted code and can be inlined into it.
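A sketch of that jit->static pull-in (assuming the usual `-enable-dynamic-compile` setup; whether `helper` is actually inlined depends on its body being available as bitcode):

```d
import ldc.attributes : dynamicCompile;
import ldc.dynamic_compile : compileDynamicCode;

// Plain static function: not marked @dynamicCompile.
int helper(int x) { return x + 1; }

@dynamicCompile
int jitted(int x)
{
    // If helper's body is available, the JIT pulls its own copy into
    // the jitted module, so this call can be optimized and inlined
    // together with the rest of the jitted code.
    return helper(x) * 2;
}

void main()
{
    compileDynamicCode();
    assert(jitted(10) == 22); // static callers of helper are unaffected
}
```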