| https://issues.dlang.org/show_bug.cgi?id=15110
--- Comment #11 from Manu <turkeyman@gmail.com> ---
(In reply to Walter Bright from comment #10)
> Manu, I thought you had argued strongly to me that if the inlining fails an error should be generated.
True, but I also think I understand the nuance a little better now than I did 8 years ago.
I'll make these points from today's perspective:
1. All the arguments I made at that time rested on a presumption that inline worked *AT LEAST* the way I expected as a native C/C++ ABI developer. It may have been inexperience to assume that, but I absolutely did assume that the binary-linkage facets of inline would be 'predictable', since I considered that a baseline for the concept; what I was arguing about were the details where our definition might differ from those expectations.
I was essentially arguing the high-level semantics; we wanted to be confident that inline 'worked' (actually inlined), but I left the other ABI-related considerations (the useful, predictable properties of binary generation and linkage) out of the conversation.
2. I still feel it's valuable to have an error when inlining fails, but I change my mind on the absoluteness of this from time to time... I think both behaviours are valuable in different cases. Today I would argue for `inline` to behave the way people expect when interacting with typical native-code ecosystems, and for `forceinline` to produce an error, in addition to those linkage details.
What I think the most useful union of concepts would look like is this: `(inline, true)` does what I've been saying, that is, it emits the symbol only when it is called, and marks it with the internal-linkage flag. I think this is what most people want almost all of the time, and it's generally compatible with widely standard binary/linkage environments. I don't think it's right that people should experience weird surprises when trying to integrate into their existing ecosystems. We have no reason not to work in a 'standard' way here.
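As a minimal sketch of that proposal (hypothetical behaviour, not how any D compiler works today; the module and function names are invented purely for illustration):

```d
module mathutil; // invented module name, for illustration only

// Proposed semantics for pragma(inline, true): the body is available for
// inlining everywhere; a copy is emitted into the object file only if a
// call survives inlining, and any emitted copy gets internal linkage, so
// two modules compiling this same function never collide at link time.
pragma(inline, true)
int clamp(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}
```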
`(inline, force)` could emit an error when inlining is impossible (when the address is taken, for instance), but as long as inlining is left to the optimiser, GCC/LLVM will always have ultimate agency over it. The only way I can imagine we can assure a reliable error when inlining fails is if the inlining is performed in the front-end.
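A sketch of that error case under the proposed `force` variant (the spelling `pragma(inline, force)` is hypothetical and not accepted by any D compiler today, so this is illustration only):

```d
// Under the proposal, `force` would be a front-end guarantee:
// inline at every call site, or fail to compile.
pragma(inline, force) // hypothetical spelling from the proposal
int twice(int x) { return x * 2; }

void takeAddress()
{
    // Taking the address makes inlining this use impossible, so the
    // front-end would reject it here rather than quietly emitting a symbol.
    auto p = &twice;
}
```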
I think what underlies all of this is that `inline` in C/C++ is frequently (almost exclusively?) used as a mechanism to control your binary output. Accepting the attribute as a hint that inlining is desirable is nice, but what really matters is that it controls the formation of the binary and how it interacts with the linker environment.
There are two particularly useful tools in C/C++: `inline` (ie, on-demand emission + internal linkage) and `weak` (*nix and Windows express this differently).
We have neither of those in D...
I see no reason why 'inline' shouldn't be used to express the first of those; the rule of least surprise makes it the obvious choice, though it may also be possible to have an alternative way to express that extremely useful concept.
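For the `weak` half, a hedged sketch of what already exists as a vendor extension: LDC exposes a `@weak` attribute in `ldc.attributes` (GDC offers something similar through its GCC attribute support). This is compiler-specific, not standard D:

```d
import ldc.attributes : weak; // LDC-only vendor extension, not standard D

// A weak definition acts as a default implementation: if another object
// file supplies a strong definition of the same symbol, the linker uses
// that one and discards this body.
@weak void logHook(string msg)
{
    // Default: do nothing. An application can override this at link time.
}
```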