May 17, 2012
On Wed, 16 May 2012 05:46:44 -0700, Steven Schveighoffer <schveiguy@yahoo.com> wrote:

> On Tue, 15 May 2012 05:51:44 -0400, kenji hara <k.hara.pg@gmail.com> wrote:
>
>> Old days import/core/thread.di was generated from src/core/thread.d .
>> Current import/core/thread.di is generated from src/core/thread.*di* .
>
> Huh?  Why the copy?  Just move src/core/thread.di to import/core/thread.di
>
> object.di lives in import/core, I think it should be the same for all the hand-maintained .di files.
>
> FWIW, I thought thread.di was being generated because of this.
>
> Also, I agree that thread and object are the only modules that need to be .di files.  Everything else is already opaque for the most part, and the pieces that aren't are just supporting code that can be visible.
>
> What we need to protect is the runtime implementation, so projects cannot depend on private APIs that may change.
>
> -Steve

I have a waiting commit that will move thread.di to import/core and update the makefiles accordingly; however, I need Zor's threading pulls merged before this can happen.

https://github.com/D-Programming-Language/druntime/pull/214/
https://github.com/D-Programming-Language/druntime/pull/204/

-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
May 21, 2012
On Tuesday, 15 May 2012 at 06:46:58 UTC, Adam Wilson wrote:
> On Mon, 14 May 2012 23:11:50 -0700, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
>
>> On Monday, May 14, 2012 23:03:20 Adam Wilson wrote:
>>> I have updated the make files so that only core.thread and core.sync.* are
>>> run through DI generation. ALL other core.* modules are copied into the
>>> import directory now.
>>
>> I assume that object.di and core/thread.di are being used rather than their
>> corresponding .d files being run through the .di generation? They already have
>> hand-crafted .di files.
>>
>> - Jonathan M Davis
>
> The funny thing is that core.thread.di was being run through the DI generator in the old make file. I had left it that way, but I have now posted a commit that moves it to the copy section.

I did that so that the same di generation logic is applied to
the handwritten import files. Currently I don't really expect any
difference though.
May 21, 2012
On Mon, 21 May 2012 05:12:32 -0700, dawg <dawg@dawgfoto.de> wrote:

> On Tuesday, 15 May 2012 at 06:46:58 UTC, Adam Wilson wrote:
>> On Mon, 14 May 2012 23:11:50 -0700, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
>>
>>> On Monday, May 14, 2012 23:03:20 Adam Wilson wrote:
>>>> I have updated the make files so that only core.thread and core.sync.* are
>>>> run through DI generation. ALL other core.* modules are copied into the
>>>> import directory now.
>>>
>>> I assume that object.di and core/thread.di are being used rather than their
>>> corresponding .d files being run through the .di generation? They already have
>>> hand-crafted .di files.
>>>
>>> - Jonathan M Davis
>>
>> The funny thing is that core.thread.di was being run through the DI generator in the old make file. I had left it that way, but I have now posted a commit that moves it to the copy section.
>
> I did that so that the same di generation logic is applied to
> the handwritten import files. Currently I don't really expect any
> difference though.

Currently there isn't any difference; however, once DI generation is changed to strip out function implementations, it could change significantly. There is even talk of building a limited amount of semantic analysis into DI generation. Because of this, relying on the current DI generation behavior going forward would not be a good idea; these changes are in preparation for future DI changes.
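
For illustration, here is a hypothetical module (not from druntime) and what stripped DI generation might emit, compared with today's -H output, which generally carries the body through:

// foo.d (source)
int add(int a, int b) { return a + b; }

// foo.di as generated today: the body is kept (e.g. for inlining)
int add(int a, int b) { return a + b; }

// foo.di with stripping: only the declaration survives
int add(int a, int b);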

-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
June 11, 2012
questions:

A) as I understand it, the new di generation will systematically strip out the implementation of non-auto-return, non-templated functions, is that correct?

B) since there are some important differences with the current di files (in terms of inlining optimization, etc), will there be a dmd command-line flag to output those stripped down di files (eg: -stripdi), so user still has choice of which to output ?

C) why can't auto-return functions be semantically analyzed and resolved?
(eg:auto fun(int x){return x;} ) => generated di should be: int fun(int x); )

D) can we have an option to strip out template functions as well? This could be more or less customizable (eg with a file that contains a list of template functions to strip, or simply strip all templates). The library writer would instantiate those templates to certain predefined values. Eg:

module fun;
T fun1(T)(T x){
	return 2*x;
}
void dummy_instantiate(){
//instantiate to double, and repeat for all desired types, eg with a mixin.
	alias double T;
	fun1!(T)(T.init);
}
Then the library writer generates a library (static or shared) and the end user uses the templates with one of the allowed types (otherwise link error happens). In many cases (eg matrix/numerical libraries), all that's needed is a finite set of predefined types (eg int,float etc). Bonus points would be if the generated di file automatically generates template constraints to reflect the allowed types, to have compile time errors instead of link time ones.
Having ability to strip templates greatly simplifies distribution of code, as it doesn't have to carry all dependencies recursively if all one wants is a few predefined types.
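
To illustrate the "bonus points" part: under the proposed scheme, the generated fun.di might contain a body-less template declaration whose constraint names the pre-instantiated types, so misuse fails at compile time rather than link time. This is purely a sketch of the requested feature, not something the current compiler is known to support:

module fun;

// only int and double were pre-instantiated into the library,
// so reject every other type at compile time
T fun1(T)(T x) if (is(T == int) || is(T == double));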

E) btw, is there a way to force the instantiations more elegantly rather than using dummy_instantiate? In C++ we can just write something like: template double fun<double>(double);, but the same doesn't work in D.


June 11, 2012
On 06/11/2012 09:37 AM, timotheecour wrote:
> questions:
>
> A) as I understand it, the new di generation will systematically strip out the implementation of
> non-auto-return, non-templated functions, is that correct?
>

This is my understanding as well.

>B) since there are some important differences with the current di files (in terms of inlining
> optimization, etc), will there be a dmd command-line flag to output those stripped down di files
> (eg: -stripdi), so user still has choice of which to output ?

You could use cp instead of dmd -H.

>
> C) why can't auto-return functions be semantically analyzed and resolved?
> (eg:auto fun(int x){return x;} ) => generated di should be: int fun(int
> x); )
>

Conditional compilation.

version(A) int x;
else version(B) double x;
else static assert(0);

auto foo(){return x;}

would need to be stripped to

version(A){
    int x;
    int foo();
}else version(B){
    double x;
    double foo();
}else static assert(0);

which is a nontrivial transformation.

This is just a simple example. Resolving the return type conditionally
in the general case is undecidable; therefore, making it work
satisfactorily involves a potentially unbounded amount of work.


> D) can we have an option to strip out template functions as well? This
> could be more or less customizable, eg with a file that contains a list
> of template functions to strip, or simply strip all templates). The
> library writer would instantiate those templates to certain predefined
> values. Eg:
>
> module fun;
> T fun1(T)(T x){
>      return 2*x;
> }
> void dummy_instantiate(){
> //instantiate to double, and repeat for all desired types, eg with a mixin.
>      alias double T;
>      fun1!(T)(T.init);
> }
> Then the library writer generates a library (static or shared) and the
> end user uses the templates with one of the allowed types (otherwise
> link error happens). In many cases (eg matrix/numerical libraries), all
> that's needed is a finite set of predefined types (eg int,float etc).
> Bonus points would be if the generated di file automatically generates
> template constraints to reflect the allowed types, to have compile time
> errors instead of link time ones.
> Having ability to strip templates greatly simplifies distribution of
> code, as it doesn't have to carry all dependencies recursively if all
> one wants is a few predefined types.

You could use overloads instead and use templates for implementing them. Templates are generally only exposed in a well-designed library interface if they work with an unbounded number of types.
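
A minimal sketch of that suggestion, using a hypothetical module name: the public interface is a pair of plain overloads (whose bodies a stripping DI generator could drop), while a private template carries the shared implementation.

module mat;   // hypothetical library module

// private implementation template; never part of the public interface
private T scaleImpl(T)(T x) { return 2 * x; }

// public non-template overloads; a stripped .di would reduce these to
// int scale(int x); and double scale(double x);
int    scale(int x)    { return scaleImpl(x); }
double scale(double x) { return scaleImpl(x); }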

>
> E) btw, is there a way to force the instantiations more elegantly rather
> than using dummy_instantiate? In C++ we can just write something like:
> template double fun<double>(double);, but the same doesn't work in D.
>
>

For example:

T foo(T)(T arg){return arg; pragma(msg, T);}

mixin template Instantiate(alias t,T...){
    static assert({
        void _(){mixin t!T;}
        return true;
    }());
}

mixin Instantiate!(foo,int);
mixin Instantiate!(foo,double);

The nested function named '_' should be unnecessary; it works around a DMD bug where mixin t!T is 'not yet implemented in CTFE'.
June 12, 2012
On Mon, 11 Jun 2012 04:55:37 -0700, Timon Gehr <timon.gehr@gmx.ch> wrote:

> On 06/11/2012 09:37 AM, timotheecour wrote:
>> questions:
>>
>> A) as I understand it, the new di generation will systematically strip out the implementation of
>> non-auto-return, non-templated functions, is that correct?
>>
>
> This is my understanding as well.
>

Correct. A lot of community consultation went into the improvements.

>> B) since there are some important differences with the current di files (in terms of inlining
>> optimization, etc), will there be a dmd command-line flag to output those stripped down di files
>> (eg: -stripdi), so user still has choice of which to output ?
>
> You could use cp instead of dmd -H.
>

In fact I rewrote the DRuntime makefiles to do precisely this with the hand-crafted .di files.
https://github.com/D-Programming-Language/druntime/pull/218

>>
>> C) why can't auto-return functions be semantically analyzed and resolved?
>> (eg:auto fun(int x){return x;} ) => generated di should be: int fun(int
>> x); )
>>
>
> Conditional compilation.
>
> version(A) int x;
> else version(B) double x;
> else static assert(0);
>
> auto foo(){return x;}
>
> would need to be stripped to
>
> version(A){
>      int x;
>      int foo();
> }else version(B){
>      double x;
>      double foo();
> }else static assert(0);
>
> which is a nontrivial transformation.
>
> This is just a simple example. Resolving the return type conditionally
> in the general case is undecidable, therefore, making it work
> satisfactorily involves a potentially unbounded amount of work.
>
>

The general explanation is that any time you rewrite the AST (such as the transformation performed above), you have to duplicate that work in DI generation to maintain semantic cohesion (what I put in is what I get out). Another reason is that DI generation is *required* to run prior to the semantic analysis stage, because your command line could alter the analysis and the subsequent AST. In fact this note is one of the *very* few multi-line comments Walter put into the DMD source. In essence, DI generation is an AST pretty-printer, and as such it must run after parsing and before analysis. All my patch does is insert checks into the printing process to stop it from printing certain parts of the AST.
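
To make that concrete, here is a rough sketch of the idea; this is not DMD's actual code (the names here are invented), just an illustration of a pretty-printer that skips parts of the freshly parsed tree:

// hypothetical AST node, as seen right after parsing and before semantic analysis
class FuncDeclNode
{
    string signature;    // e.g. "int fun(int x)"
    string bodySource;   // the body text exactly as parsed
    bool isTemplate;     // templates and auto functions must keep their bodies
    bool isAuto;

    void emitDI(ref char[] buf)
    {
        buf ~= signature;
        if (isTemplate || isAuto)
            buf ~= " " ~ bodySource ~ "\n";  // the importer still needs the body
        else
            buf ~= ";\n";                    // strip the implementation
    }
}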

Theoretically one could build into DI generation a semantic analyzer that didn't change its behavior based on the command line, but that would literally require rewriting DI generation from the ground up. And you'd still have to verify that the primary semantic analysis didn't change anything relative to the DI analysis. Then you'd have to write a reconciliation process. Personally, I think we have better places to focus our efforts.

That said, what you really want is the full source embedded into the library (similar to .NET's CIL). That would get you what you are after. Such a thing could actually be done, except for OMF/Optlink: since OMF doesn't support custom sections, there is no special place to store the code where the compiler could easily access it. This would enable the compiler to extract the source during compilation, analyze both the user source and the library source, and perform all possible optimizations accordingly. I looked at adding COFF to DMD and my brain melted; there are enough #IFDEFs in there to cause permanent insanity... *sigh*

>> D) can we have an option to strip out template functions as well? This
>> could be more or less customizable, eg with a file that contains a list
>> of template functions to strip, or simply strip all templates). The
>> library writer would instantiate those templates to certain predefined
>> values. Eg:
>>
>> module fun;
>> T fun1(T)(T x){
>>      return 2*x;
>> }
>> void dummy_instantiate(){
>> //instantiate to double, and repeat for all desired types, eg with a mixin.
>>      alias double T;
>>      fun1!(T)(T.init);
>> }
>> Then the library writer generates a library (static or shared) and the
>> end user uses the templates with one of the allowed types (otherwise
>> link error happens). In many cases (eg matrix/numerical libraries), all
>> that's needed is a finite set of predefined types (eg int,float etc).
>> Bonus points would be if the generated di file automatically generates
>> template constraints to reflect the allowed types, to have compile time
>> errors instead of link time ones.
>> Having ability to strip templates greatly simplifies distribution of
>> code, as it doesn't have to carry all dependencies recursively if all
>> one wants is a few predefined types.
>
> You could use overloads instead and use templates for implementing them. Templates are generally only exposed in a well-designed library interface if they work with an unbounded number of types.
>
>>
>> E) btw, is there a way to force the instantiations more elegantly rather
>> than using dummy_instantiate? In C++ we can just write something like:
>> template double fun<double>(double);, but the same doesn't work in D.
>>
>>
>
> For example:
>
> T foo(T)(T arg){return arg; pragma(msg, T);}
>
> mixin template Instantiate(alias t,T...){
>      static assert({
>          void _(){mixin t!T;}
>          return true;
>      }());
> }
>
> mixin Instantiate!(foo,int);
> mixin Instantiate!(foo,double);
>
> The nested function named '_' should be unnecessary; it works around a DMD bug where mixin t!T is 'not yet implemented in CTFE'.


-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
June 12, 2012
> mixin Instantiate!(foo,int);
Thanks for the syntax tip!

> You could use cp instead of dmd -H.
That won't produce the same output (eg large functions tend to be stripped currently), but I guess the current behavior is relatively useless so it's fine.


> what you are after. Such a thing could actually be done,
> except for OMF/Optlink: since OMF doesn't support custom
> sections, there is no special place to store the code where
> the compiler could easily access it.


If we want to embed the AST inside an "import" library file (which is optional; it could also be done with a directory), wouldn't it be possible to store the AST as a global / static variable? When dmd compiles myfun.d, it generates the AST and inserts a global variable (eg void* _ast_myfun = ...) into the data segment of the library/object file (that could be done with a mixin and would be portable).
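
A minimal sketch of that idea, with a hypothetical symbol name. One way to approximate it today is a string import (which needs the -J switch and stores the raw source rather than a serialized AST), although the proposal is for the compiler to inject the data automatically:

module myfun;

// hypothetical: the module's source carried in the object file's data segment,
// so a compiler could read it back out of the library later
immutable string _ast_myfun = import("myfun.d"); // string import; requires -J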

just started a new thread for related ideas: http://forum.dlang.org/thread/lmepufogzaxlbxwgubvl@forum.dlang.org
thanks!

June 12, 2012
On Tue, 12 Jun 2012 02:16:19 -0700, timotheecour <thelastmammoth@gmail.com> wrote:

>> mixin Instantiate!(foo,int);
> Thanks for the syntax tip!
>
>> You could use cp instead of dmd -H.
> That won't produce the same output (eg large functions tend to be stripped currently), but I guess the current behavior is relatively useless so it's fine.
>
>
>> what you are after. Such a thing could actually be done,
>> except for OMF/Optlink: since OMF doesn't support custom
>> sections, there is no special place to store the code where
>> the compiler could easily access it.
>
>
> If we want to embed the AST inside an "import" library file (which is optional; it could also be done with a directory), wouldn't it be possible to store the AST as a global / static variable? When dmd compiles myfun.d, it generates the AST and inserts a global variable (eg void* _ast_myfun = ...) into the data segment of the library/object file (that could be done with a mixin and would be portable).
>
> just started a new thread for related ideas: http://forum.dlang.org/thread/lmepufogzaxlbxwgubvl@forum.dlang.org
> thanks!
>

Theoretically yes, but it's extremely poor form for the compiler to be adding variables to programmer-created structures, especially when said variables are only ever going to be used by the compiler.

-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/