February 07, 2007
Walter Bright wrote:

>> Why is that evil? I think it's actually a great idea. "versions" are a sort of configuration that determines which code should be compiled and which code shouldn't. Storing this configuration in a separate file makes sense to me.
> 
> The right way to do versions that cut across multiple files is to abstract the versioning into an API, and implement the different versions in different modules.
> 
> This is a lot easier to manage when you're dealing with larger, more complex code, although it is more work up front.


How would you use something like autoconf with this approach ?

Would it need to generate different Makefiles / D file lists for
different options/versions, instead of just -version statements ?
Example of things I am thinking about are "__WXGTK__", "UNICODE",
or "HAVE_OPENGL". With C/C++, they're usually in a config.h file.

--anders
February 07, 2007
On Wed, 07 Feb 2007 13:17:48 +0300, Kyle Furlong <kylefurlong@gmail.com> wrote:

>>  The template DActiveRecord must parse the DSL string at compile time and produce another string with the Account class implementation in D, with all the necessary semantic analysis and constraints (for example, it is impossible to use the name of a field in 'validate_presence_of' if it isn't described as 'field').
>>  Do you think this task can be done with D templates at compile time?

> Almost certainly.

:)
I know, but at what price? ;)

-- 
Regards,
Yauheni Akhotnikau
February 07, 2007
"Walter Bright" <newshound@digitalmars.com> wrote in message news:eq91hs$fvm$1@digitaldaemon.com...
> Fixes many bugs, some serious.
>
> Some new goodies.
>
> http://www.digitalmars.com/d/changelog.html
>
> http://ftp.digitalmars.com/dmd.1.005.zip

I am amazed at the mixin/import features.  The ability we have to arbitrarily generate code at runtime is.. something I never would have imagined would come until D2.0.

I've been thinking about it, though, and I'm not sure if it's 100% the best way to do it.  The major advantage, of course, is that it doesn't introduce tons of new syntax for building up code.  However, I'm not sure I really like the idea of building up strings either.  It seems.. preprocessor-y.

Don't get me wrong, I'm still giddy with the prospects of this new feature. But something about it just seems off.
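For context, the feature in question is the new string mixin: mixin() takes a compile-time string and compiles it as ordinary D code. A minimal sketch (the GenField name is made up for illustration):

```d
import std.stdio;

// Build up a declaration as a compile-time string...
template GenField(char[] name)
{
    const char[] GenField = "int " ~ name ~ ";";
}

// ...and compile it into the module with mixin().
mixin(GenField!("x"));   // declares: int x;

void main()
{
    x = 42;
    writefln(x);
}
```

That's what I mean by "building up strings": the generated code is just concatenated text until the mixin statement compiles it.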


February 07, 2007
"Jarrett Billingsley" <kb3ctd2@yahoo.com> wrote in message news:eqcmo3$2u17$1@digitaldaemon.com...

Just wanted to add that something I find severely lacking is that there's no way to get a pretty (usable, non-mangled) string representation of a type. Instead we have to write large template libraries to accomplish something that the compiler does _all the time_.  This means it's currently not possible to do something like

template MakeVariable(Type, char[] name)
{
    const char[] MakeVariable = Type.nameof ~ " " ~ name ~ ";";
}

mixin(MakeVariable!(int, "x"));

Instead we have to use a template library to demangle the name:

import ddl.meta.nameof;

template MakeVariable(Type, char[] name)
{
    const char[] MakeVariable = prettytypeof!(Type) ~ " " ~ name ~ ";";
}

mixin(MakeVariable!(int, "x"));


:\


February 07, 2007
Jarrett Billingsley wrote:
> "Walter Bright" <newshound@digitalmars.com> wrote in message news:eq91hs$fvm$1@digitaldaemon.com...
>> Fixes many bugs, some serious.
>>
>> Some new goodies.
>>
>> http://www.digitalmars.com/d/changelog.html
>>
>> http://ftp.digitalmars.com/dmd.1.005.zip
> 
> I am amazed at the mixin/import features.  The ability we have to arbitrarily generate code at runtime is.. something I never would have imagined would come until D2.0.

At compile time you mean.

> I've been thinking about it, though, and I'm not sure if it's 100% the best way to do it.  The major advantage, of course, is that it doesn't introduce tons of new syntax for building up code.  However, I'm not sure I really like the idea of building up strings either.  It seems.. preprocessor-y.
> 
> Don't get me wrong, I'm still giddy with the prospects of this new feature. But something about it just seems off. 

The ability to transform true code trees will come with D's macro abilities. But that's a few months ahead at least.


Andrei
February 07, 2007
Hasan Aljudy wrote:

> 
> Why is that evil? I think it's actually a great idea. "versions" are a sort of configuration that determines which code should be compiled and which code shouldn't. Storing this configuration in a separate file makes sense to me.
> 

rename versions.txt to versions.d

and use

import versions;

same effect, less confusion
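A sketch of what that could look like (the names are invented for illustration):

```d
// versions.d -- one central place for the build configuration
const bool HaveOpenGL = true;
const bool Unicode    = false;
```

Client code then does "import versions;" and branches with "static if (versions.HaveOpenGL)", which plays much the same role as a config.h.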
February 07, 2007
BCS Wrote:
<snip>
> Without the gratuitous stuff, that has to be the cleanest quine outside of bash (in bash an empty file prints nothing):
> 
> import std.stdio;
> void main(){writef(import(__FILE__));}

What is your definition of "clean"?

Moreover, there are many languages in which an empty source file is a null program - BASIC, Perl and probably most shell scripting languages (indeed, probably most scripting languages) have this characteristic.  In the course of history there have even been one or two C compilers that did this.

Stewart.
February 07, 2007
Walter Bright wrote:
> 
> 
> The right way to do versions that cut across multiple files is to abstract the versioning into an API, and implement the different versions in different modules.
> 

What about cases where 90% of the code is identical but small bits and pieces are different? If I understand correctly, doing what you suggest would require that those bits be put in functions, with several versions of each function somewhere else. This could be a problem in several ways:

====Tiny bits of code would require tiny functions that hide what is going on.

version(RowMajor)
	x = table[i][j];
else // RowMinor
	x = table[j][i];

====Empty else cases would result in do-nothing functions:

version(StrongChecks)
{
	if(foo) ...
	if(bar) ...
	...
}
//empty else

====You can't break across function calls

switch(i)
{
	case 1:
		version(Baz)
			if(baz) break;
		else
			break;
	case 2:

	...// lots of unversioned code
}

or

switch(i)
{
	version(Baz)
	{
		case 1:
			if(baz) break;
	}
	case 2:

	...// lots of unversioned code
	
	version(!Baz)
	{
		case 1:
	}
}


====Lots of version combinations

version(Foo) i = foo(i);
version(Boo) i = boo(i);
version(Fig) i = fig(i);
version(Baz) i = baz(i);
version(Bar) i = bar(i);	//32 options???


Are these valid concerns? Am I misunderstanding what you said?
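For the row-major example, I take it the alternative would look something like this (module names invented): each build compiles exactly one of the two modules, and the client code never mentions a version at all.

```d
// layout_rowmajor.d
int lookup(int[][] table, size_t i, size_t j)
{
    return table[i][j];
}

// layout_rowminor.d would define the same function the other way:
//     int lookup(int[][] table, size_t i, size_t j)
//     {
//         return table[j][i];
//     }

// client code, identical for both builds:
//     x = lookup(table, i, j);
```

That works, but it is exactly the "tiny function hiding what is going on" problem above.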
February 07, 2007
Anders F Björklund wrote:
> Walter Bright wrote:
> 
>>> Why is that evil? I think it's actually a great idea. "versions" are a sort of configuration that determines which code should be compiled and which code shouldn't. Storing this configuration in a separate file makes sense to me.
>>
>> The right way to do versions that cut across multiple files is to abstract the versioning into an API, and implement the different versions in different modules.
>>
>> This is a lot easier to manage when you're dealing with larger, more complex code, although it is more work up front.
> 
> 
> How would you use something like autoconf with this approach ?
> 
> Would it need to generate different Makefiles / D file lists for
> different options/versions, instead of just -version statements ?
> Example of things I am thinking about are "__WXGTK__", "UNICODE",
> or "HAVE_OPENGL". With C/C++, they're usually in a config.h file.

I've worked with the C approach for many years, and have gotten increasingly dissatisfied with it. Over time, it leads to conflicting, misused, overlapping version macros.

I've also tried the "make an API for the version" method, and have been much more satisfied with it. You can see it at work in the gc implementation (see gclinux.d and win32.d).
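Roughly, the pattern is one import point that selects the implementation module, with every implementation module exposing the same set of functions (a sketch only; the function names are invented and the real gc code is organized differently):

```d
// gcapi.d -- clients import only this module
version (Win32)
    public import win32;   // Win32 implementation of the API
else version (linux)
    public import gclinux; // Linux implementation of the API

// win32.d and gclinux.d each define the same functions, e.g.:
//     void os_init();
//     void* os_mem_map(uint nbytes);
//     void os_mem_unmap(void* p, uint nbytes);
```

All the platform knowledge lives behind that API, instead of being scattered through the code as version blocks.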
February 07, 2007
Pragma wrote:
> BLS wrote:
>> Pragma schrieb:
>>> BLS wrote:
>> I can imagine the following scenario : D Compiler is calling a
>> Translator, a modified Enki f.i. to translate a Domain Specific Language
>> into D ... strange
> 
> I've thought about that too- much like BCS's work.  The only thing 

Enki?
BCS?

Could you avoid mysterious references?

Regards,
renoX