March 18, 2014

On 18.03.2014 02:15, Marco Leise wrote:
> On Mon, 17 Mar 2014 20:10:31 +0100,
> Rainer Schuetze <r.sagitario@gmx.de> wrote:
>
>>> In that specific case, why does this not work for you?:
>>>
>>> nothrow extern(Windows) {
>>>     HANDLE GetCurrentProcess();
>>> }
>>>
>>
>> The attributes sometimes need to be selected conditionally, e.g. when
>> building a library for static or dynamic linkage (at least on windows
>> where not everything is exported by default). Right now, you don't have
>> an alternative to code duplication or heavy use of string mixins.
>
> Can we write this? It just came to my mind:
>
> enum attribs = "nothrow extern(C):";
>
> {
>      mixin(attribs);
>          HANDLE GetCurrentProcess();
> }
>

Interesting idea, though it doesn't seem to work:

enum attribs = "nothrow extern(C):";

extern(D) { // some dummy attribute to make it parsable
    mixin(attribs);
        int GetCurrentProcess();
}

int main() nothrow // Error: function 'D main' is nothrow yet may throw
{
	return GetCurrentProcess(); // Error: 'attr.GetCurrentProcess' is not nothrow
}

I guess this is by design: the mixin is expanded only after the parser has already attached attributes to the non-mixin declarations, so the mixed-in "nothrow extern(C):" never reaches its sibling declarations.
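
A workaround that does seem to work is to concatenate the attributes with the declarations and mix in the whole block as a single string, so everything is parsed together. A sketch (untested here; GetCurrentProcess would still need a matching symbol at link time):

enum attribs = "nothrow extern(C):";

// The token string q{...} keeps the declarations readable; because the
// attributes are parsed in the same mixin as the declarations, the
// colon-attributes now apply.
mixin(attribs ~ q{
    int GetCurrentProcess();
});

int main() nothrow
{
    return GetCurrentProcess(); // accepted: the declaration is nothrow
}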
March 18, 2014
On Mon, 17 Mar 2014 18:16:13 +0000,
"Ola Fosheim Grøstad"
<ola.fosheim.grostad+dlang@gmail.com> wrote:

> On Monday, 17 March 2014 at 06:26:09 UTC, Marco Leise wrote:
> > About two years ago we had that discussion and my opinion
> > remains that there are too many "if"s and "assume"s for the
> > compiler.
> > It is not so simple to trace back where an object originated
> > from when you call a method on it.
> 
> It might not be easy, but in my view the language should be designed to support future advanced compilers. If D gains traction on the C++ level then the resources will become available, iff the language has the right constructs or affords extensions that make advanced optimizations tractable. What is possible today is less important...

Let's just say it will never detect all cases, so the "final" keyword will still be around. Can you find any research papers that indicate that such compiler technology can be implemented with satisfactory results? Because it just sounds like a nice idea on paper to me that only works when a lot of questions have been answered with yes.

> > It could be created through
> > the factory mechanism in Object using a runtime string or it
> 
> If it is random then you know that it is random.

These not-entirely-random objects from a class hierarchy could well have frequently used final methods, such as name or position getters. I also mentioned objects passed as parameters into delegates.

> If you want speed you create separate paths for the dominant instance types. Whole program optimization is guided by profiling data.

Another optimization, ok. The compiler still needs to know that the instance type cannot be sub-classed.

> > There are plenty of situations where it is virtually impossible to know the instance type statically.
> 
> But you might know that it is either A and B or C and D in most cases. Then you inline those cases and create specialized execution paths where profitable.

Thinking about it, it might not even be good to duplicate
code. It could easily lead to instruction cache misses.
Also this is way too much involvement from both the coder and
the compiler. At this point I'd ask for "final" if it wasn't
already there, if just to be sure the compiler gets it right.

> > Whole program analysis only works on ... well, whole programs. If you split off a library or two it doesn't work. E.g. you have your math stuff in a library and in your main program you write:
> >
> >   Matrix m1, m2;
> >   m1.crossProduct(m2);
> >
> > Inside crossProduct (which is in the math lib), the compiler could not statically verify if it is the Matrix class or a sub-class.
> 
> In my view you should avoid not having source access, but even then it is sufficient to know the effect of the function. E.g. you can have a high level specification language asserting pre and post conditions if you insist on closed source.

More shoulds and cans and ifs... :-(

> >> With a compiler switch or pragmas that tell the compiler what can be dynamically subclassed the compiler can assume all leaves in the compile time specialization hierarchies to be final.
> >
> > Can you explain, how this would work and where it is used?
> 
> You specify what plugins are allowed to do and access at whatever resolution is necessary to enable the optimizations your program needs?
> 
> Ola.

I don't get the big picture. What does the compiler have to do with plugins? And what do you mean by "allowed to do and access", and how does that interact with the virtuality of a method? I'm confused.

-- 
Marco

March 18, 2014
On Tuesday, 18 March 2014 at 13:01:56 UTC, Marco Leise wrote:
> Let's just say it will never detect all cases, so the "final"
> keyword will still be around. Can you find any research papers
> that indicate that such compiler technology can be implemented
> with satisfactory results? Because it just sounds like a nice
> idea on paper to me that only works when a lot of questions
> have been answered with yes.

I don't think this is such a theoretically interesting question. Isn't this actually a special case of a partial correctness proof, where you try to establish constraints on types? I am sure you can find a lot of papers covering bits and pieces of that.

> These not entirely random objects from a class hierarchy could
> well have frequently used final methods like a name or
> position. I also mentioned objects passed as parameters into
> delegates.

I am not sure I understand what you are getting at.

You start with the assumption that a pointer to base class A is the full set of that hierarchy. Then establish constraints for all the subclasses it cannot be. Best effort. Then you can inline any virtual function call that is not specialized across that constrained result set. Or you can inline all candidates in a switch statement and let the compiler do common subexpression elimination & co.
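
In a made-up D sketch (the class names are placeholders, and this is a transformation the optimizer would perform, not something you write by hand):

class A { int f() { return 1; } }
class B : A { override int f() { return 2; } }

int callF(A a)
{
    // If analysis has constrained the possible dynamic types to {A, B},
    // the virtual call can be expanded into direct tests with the
    // method bodies inlined, plus a conservative fallback:
    if (typeid(a) is typeid(A)) return 1; // inlined A.f
    if (typeid(a) is typeid(B)) return 2; // inlined B.f
    return a.f();                         // fallback virtual call
}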

>> If you want speed you create separate paths for the dominant instance types. Whole program optimization is guided by profiling data.
>
> Another optimization, ok. The compiler still needs to know
> that the instance type cannot be sub-classed.

Not really. It only needs to know that in the current execution path you have an instance of type X (the most frequent one); then you have another execution path for the inverted set.
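
Something like this sketch (names are made up; the exact-type test is what licenses the devirtualization):

class A { int f() { return 0; } }
class X : A { override int f() { return 42; } }

int process(A obj)
{
    // Fast path for the profile-dominant type X: the exact dynamic
    // type is known here, so the call can be devirtualized and inlined.
    if (typeid(obj) is typeid(X))
        return (cast(X) obj).f();
    // Inverted set: everything else takes the ordinary virtual call.
    return obj.f();
}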

> Thinking about it, it might not even be good to duplicate
> code. It could easily lead to instruction cache misses.

You have heuristics for that. After all, you do have the execution pattern. You have the data of a running system on typical input. If you log all input events (which is useful for a simulation) you can rerun the program in as many configurations as you want. Then you skip the optimizations that lead to worse performance.

> Also this is way too much involvement from both the coder and
> the compiler.

Why? Nobody claimed that near-optimal whole program optimization has to be fast.

> At this point I'd ask for "final" if it wasn't already there, if just to be sure the compiler gets it right.

Nobody said that you should not have final, but final won't help you inline virtual functions where possible.

>> you can have a high level specification language asserting pre and post conditions if you insist on closed source.
>
> More shoulds and cans and ifs... :-(

Err… well, you can of course start with a blank slate after calling a closed source library function.

> I don't get the big picture. What does the compiler have to do
> with plugins? And what do you mean by allowed to do and
> access and how does it interact with virtuality of a method?
> I'm confused.

In my view plugins should not be allowed to subclass. I think it is ugly, but if you do allow it, then you need to tell the compiler which classes the plugin can subclass, instantiate etc., as well as what side effects the call into the plugin may and may not have.

Why is that confusing? If you shake the world, you need to tell the compiler what the effect is. Otherwise you have to assume "anything" upon return from said function call.

That said, I am personally not interested in plugins without constraints imposed on them (or at all). Most programs can do fine with just static linkage, so I find the whole dynamic linkage argument less interesting.

Closed source library calls are more interesting, especially if you can say something about the state of that library. That could provide you with detectors for wrong library usage (the library could be the OS itself), e.g. that a file has to be opened before it is closed.
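
As a rough illustration in today's D, runtime contracts can already encode such usage rules; the specification language I have in mind would let a tool check them statically, without running the program (File and its members here are made up):

struct File
{
    bool isOpen;

    void open()
    in { assert(!isOpen, "file opened twice"); }
    body { isOpen = true; }

    void close()
    in { assert(isOpen, "file closed before it was opened"); }
    body { isOpen = false; }
}

A whole-program analyzer could then try to prove, per call site, that close is never reached before open.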
March 18, 2014
 Nobody uses D, so worrying about breaking backwards compatibility for such an obvious improvement is pretty funny :)

 D should just do what Lua does.

 Lua breaks backwards compatibility at every version. Why is it not a problem? If you don't want to upgrade, just keep using the older compiler! It isn't like it ceased to exist--
March 18, 2014
On Tuesday, 18 March 2014 at 18:11:27 UTC, dude wrote:
> Nobody uses D, so worrying about breaking backwards compatibility for such an obvious improvement is pretty funny :)

I kind of agree with you if it happens once and is a sweeping change that fixes the syntactic warts as well as the semantic ones.

>  Lua breaks backwards compatibility at every version. Why is it not a problem? If you don't want to upgrade, just keep using the older compiler! It isn't like it ceased to exist--

It is a problem because commercial developers have to count hours and need a production compiler that is maintained.

If your budget is 4 weeks of development, then you don't want another 1 week to fix compiler induced bugs.

Why?

1. Because you have already signed a contract for a certain amount of money based on estimates of how much work it is. All extra costs cut into profitability.

2. Because you have library dependencies. If a bug is fixed in library version 2, which requires version 3 of the compiler, then you need to upgrade to version 3 of the compiler. That compiler had better not break the entire application and drag you into a mess of unprofitable work.

Is attracting commercial developers important for D? I think so, not because they contribute lots of code, but because they care about the production quality of the narrow libraries they do create and are more likely to maintain them over time. They also have a strong interest in submitting good bug reports and fixing performance bottlenecks.

March 18, 2014
Ola Fosheim Grøstad:

> Is attracting commercial developers important for D?

In this phase of D's life, commercial developers can't justify having the language so frozen that you can't perform reasonable improvements like the one discussed in this thread.

Bye,
bearophile
March 18, 2014
On Tuesday, 18 March 2014 at 18:34:14 UTC, bearophile wrote:
> In this phase of D's life, commercial developers can't justify having the language so frozen that you can't perform reasonable improvements like the one discussed in this thread.

I don't disagree, but D is suffering from not having a production-ready compiler/runtime with a solid optimizing backend in maintenance mode. So it is giving other languages "free traction" rather than securing its own position.

I think there is a bit too much focus on standard libraries, because not having libraries does not prevent commercial adoption. Commercial devs can write their own C bindings if the core language, compiler and runtime are solid. If they are not, then only commercial devs that can commit lots of resources to D will pick it up and keep using it (basically the ones that are willing to turn themselves into D shops).

Perhaps D2 was also announced too early, and then people jumped onto it expecting it to come about "real soon". Hopefully the language designers will do the D3 design on paper behind closed doors for a while before announcing progress, and perhaps even deliberately keep it at gamma/alpha quality in order to prevent devs from jumping ship from D2 to D3 prematurely. :-)

That is how I view it, anyway.
March 19, 2014
On Monday, 17 March 2014 at 01:05:09 UTC, Manu wrote:
> Whole program optimisation can't do anything to improve the situation; it
> is possible that DLL's may be loaded at runtime, so there's nothing the
> optimiser can do, even at link time.

With everything being exported by default, this is true. But should DIP45 be implemented, LTO/WPO will be able to achieve a lot more, at least if the classes in question are not (or not fully) exported.
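
For example (only a sketch of the idea; the names are made up, and DIP45's exact rules aside):

// Exported: other binaries may reference and subclass it,
// so calls to f must stay virtual.
export class PublicNode
{
    int f() { return 1; }
}

// Not exported: no code outside this binary can subclass it, so
// LTO/WPO may treat f as effectively final and inline calls to it.
class InternalNode
{
    int f() { return 2; }
}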