On 7 June 2013 10:44, deadalnix <deadalnix@gmail.com> wrote:
On Thursday, 6 June 2013 at 23:54:49 UTC, Manu wrote:
The trouble, as has been pointed out before, is shared libraries.


And the assumed existence of a 'sufficiently smart linker', and the fact that the
platforms that suffer from this stuff far worse than x86 almost always
have less mature compilers/optimisers/linkers.
I just wouldn't ever place my faith in the future arrival of some
sufficiently-smart-[tool]. You couldn't make a business investment on that
elusive possibility.

GCC and LLVM have what it takes to implement this kind of stuff
and can do codegen for a large variety of platforms. I think
it's never gonna work with dmd, and I think this is why Walter
and yourself are pushing so hard to break everybody's code.

IIRC, GCC requires explicit LTO support in each backend, which means minority architectures will probably never get it, and those are the ones that need it the most.
I don't know about LLVM, but I'd bet that, again, the minority architectures won't have good support.

And there's still the DLL case.
You can't simply compare the relative cost of a DLL call and a virtual call (although a DLL call is still slightly less work than a virtual call).
The real issue, though, is that code within your program which IS subject to LTO still can't have de-virtualisation applied anyway, since the optimiser can't know whether a DLL might introduce a new derived class.
The mere possibility of a DLL existing means the optimisation can't be performed, even where the tooling would otherwise be capable of it.
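To make that concrete, here's a minimal D sketch (Widget, draw, update and render are made-up names, purely for illustration): a virtual-by-default method can never be safely de-virtualised when a DLL loaded at runtime might subclass the type, whereas a 'final' method can always be called directly, no whole-program analysis required.

    module plugin_api;

    class Widget
    {
        // Virtual by default: a shared library loaded at runtime could
        // subclass Widget and override draw(), so LTO over the main
        // program alone can never prove the call target.
        void draw() { }

        // 'final' closes the method to overrides, so the compiler may
        // emit a direct (and inlinable) call unconditionally.
        final void update() { }
    }

    void render(Widget w)
    {
        w.draw();   // must remain an indirect vtable call
        w.update(); // direct call, regardless of what any DLL loads
    }

    void main()
    {
        render(new Widget);
    }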

If D were a JITed language, I wouldn't make a fuss. But it's not; it's a compiled systems language, and it's the only realistic competitor to C++ I know of in that space.
The JITed languages all have alternatives; they live in a crowded space, and while D might be a compelling option there, in the compiled systems space D pretty much stands alone, and would do well not to inhibit the needs of those users.