December 19, 2005
"BCS" <BCS_member@pathlink.com> wrote
[snip]
> I think that the primary advantage of using templates is that they can allow for automated setup/verification of complex environments at compile time. How about building a static binary search tree? It would be nice to be able to initialize such a tree from an unsorted array and be sure that it gets put together right. Something like this:
>
>
> auto BTreeRoot = BuildBTree!(int, CompFn, [1,3,7,9,2,5,8]);
>
> would build a balanced tree of ints, sorted with CompFn.


Nice, but surely this is something that can be done at runtime via a static ctor?
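
For illustration, a minimal sketch of that static-ctor alternative (the names and layout here are invented; a flat sorted array stands in for a real tree):

int[7] keys = [1, 3, 7, 9, 2, 5, 8];

static this()
{
	// simple insertion sort at program start; a CompFn-driven balanced
	// tree build would go here instead of a flat sorted array
	for (int i = 1; i < keys.length; i++)
	{
		for (int j = i; j > 0 && keys[j - 1] > keys[j]; j--)
		{
			int tmp = keys[j];
			keys[j] = keys[j - 1];
			keys[j - 1] = tmp;
		}
	}
}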


[snip]
> In a nutshell, I see a distinct advantage in code that is ensured to be completely evaluated at compile time.


As do I.

Another perspective on this is to consider templates as extensions to the compiler proper. While such things can be very useful, I imagine it's not hard to get carried away building a language within a language. Still, being able to instruct the compiler to build a regex state-machine for you is pretty powerful, and likely useful for some ~ I'd use it. A full-blown compiler-compiler is then not so far-fetched ... where does one draw the line?


December 19, 2005
Kris wrote:
> "BCS" <BCS_member@pathlink.com> wrote
> [snip]
> 
>>I think that the primary advantage of using templates is that they can allow for automated setup/verification of complex environments at compile time. How about building a static binary search tree? It would be nice to be able to initialize such a tree from an unsorted array and be sure that it gets put together right. something like this
>>
>>
>>auto BTreeRoot = BuildBTree!(int, CompFn, [1,3,7,9,2,5,8]);
>>
>>would build a balanced tree of ints, sorted with CompFn.
> 
> 
> 
> Nice, but surely this is something that can be done at runtime via a static ctor?
> 
> 
Yes, you can do this in a static ctor (and in many cases that would be good enough); however, it would add to the start-up cost each time the program is run. Why spend code and time setting up the state of a program when that state can be set ahead of time simply by having preinitialized data space?
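
To illustrate the trade-off with a hedged sketch (the names are invented for this example): the template instance below is folded into the executable's preinitialized data at compile time, while the static-ctor version recomputes the same value on every launch.

template Factorial(int n)
{
	static if (n <= 1)
		const int Factorial = 1;
	else
		const int Factorial = n * .Factorial!(n - 1);
}

const int precomputed = Factorial!(10);	// evaluated at compile time

int computedAtStartup;
static this()
{
	// run-time equivalent: paid for at every program start
	computedAtStartup = 1;
	for (int i = 2; i <= 10; i++)
		computedAtStartup *= i;
}
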
December 19, 2005
"BCS" <BCS_member@pathlink.com> wrote ...
> Kris wrote:
>> "BCS" <BCS_member@pathlink.com> wrote
>> [snip]
>>
>>>I think that the primary advantage of using templates is that they can allow for automated setup/verification of complex environments at compile time. How about building a static binary search tree? It would be nice to be able to initialize such a tree from an unsorted array and be sure that it gets put together right. something like this
>>>
>>>
>>>auto BTreeRoot = BuildBTree!(int, CompFn, [1,3,7,9,2,5,8]);
>>>
>>>would build a balanced tree of ints, sorted with CompFn.
>>
>>
>>
>> Nice, but surely this is something that can be done at runtime via a static ctor?
>>
>>
> Yes you can do this in a static ctor, (and in many cases this would be good enough) however this would add to the start up cost each time the program is run. Why spend code and time setting up that state of a program when that state can be set ahead of time by just having preinitialized data space?

OK ~ I follow you. In the past, some folk would precompute "expensive" data into a file and then load it up at start time. You're effectively noting that this might be done with templates, embedding said data right into the executable.


December 19, 2005
Kris wrote:
> "BCS" <BCS_member@pathlink.com> wrote ...
>> Kris wrote:
>>> "BCS" <BCS_member@pathlink.com> wrote
>>> [snip]
>>>
>>>> I think that the primary advantage of using templates is that they can allow for automated setup/verification of complex environments at compile time. How about building a static binary search tree? It would be nice to be able to initialize such a tree from an unsorted array and be sure that it gets put together right. something like this
>>>>
>>>>
>>>> auto BTreeRoot = BuildBTree!(int, CompFn, [1,3,7,9,2,5,8]);
>>>>
>>>> would build a balanced tree of ints, sorted with CompFn.
>>>
>>>
>>> Nice, but surely this is something that can be done at runtime via a static ctor?
>>>
>>>
>> Yes you can do this in a static ctor, (and in many cases this would be good enough) however this would add to the start up cost each time the program is run. Why spend code and time setting up that state of a program when that state can be set ahead of time by just having preinitialized data space?
> 
> OK ~ I follow you. In the past, some folk would precompute "expensive" data into a file and then load it up at start time. You're effectively noting that might be done with templates, embedding said data right into the executable. 

Yup.  I think the usefulness of templates is that the data is integrated with the runtime code, which eliminates the need for the awkward syntax and error detection associated with storing precalculated data in files.  So far, this seems to have been most useful for science and mathematics, as calculations can simply be done at compile time "whenever possible" without the need for programmer involvement.  And this can still be supplemented by the file method if there is a larger class of data that can be precalculated but for some reason cannot be done in code.

I think determining a tipping point is probably situation-dependent.  In C++, for example, I find a lot of what's in Boost to be a fantastic example of what's possible using C++ templates, but the implementation is often (necessarily) so horrendous that I'm disinclined to actually use it.  Compilation time is another issue, as template code can become so complex that an application might literally take days to compile.  At some point the run-time efficiency simply isn't worth the coding or compile-time cost.


Sean
December 19, 2005
Sean Kelly wrote:
> Kris wrote:
> 
>>
>>
>> OK ~ I follow you. In the past, some folk would precompute "expensive" data into a file and then load it up at start time. You're effectively noting that might be done with templates, embedding said data right into the executable. 
> 
> 
> Yup.  I think the usefulness of templates is that the data is integrated with the runtime code, which eliminates the need for the awkward syntax and error detection associated with storing precalculated data in files.  So far, this seems to have been most useful for science and mathematics, as calculations can simply be done at compile time "whenever possible" without the need for programmer involvement.  And this can still be supplemented by the file method if there is a larger class of data that can be precalculated but for some reason can not be done in code.
> 
> I think determining a tipping point is probably situation dependent.  In C++, for example, I find a lot of what's in Boost to be a fantastic example of what's possible using C++ templates, but the implementation often (necessarily) so horrendous that I'm disinclined to actually use it.  Compilation time is another issue, as template code can become so complex that an application might literally take days to compile.  At some point the run-time efficiency simply becomes not worth the coding or compile-time cost.
> 
> 
> Sean

The tipping point also depends on what the program is for. If you are writing a CGI app that needs to start, run, and generate output while a user is waiting (and the program will run millions of times), compile time won't be nearly so much of an issue as it would be for a finite element analysis program that will only be run half a dozen times for each time it is compiled.

One other thought: to keep the compile/run cycle manageable, maybe many of these libraries should have a setting to switch them from compile-time code to run-time code. To use my last example:

template BuildBTree(T, bool function(T, T) cmp, T[] ar)
{
	static if (atCompile)	// "atCompile" stands for the compile-time/run-time setting
		// compile-time case: generate the tree now
		const BTree!(T) BuildBTree = ... // some template magic
	else
		// run-time case: insert code to call a build function at start-up
		BTree!(T) BuildBTree = BuildBTreeAtRunTime(cmp, ar);	// hypothetical run-time builder
}

I know that doesn't work, but you get the idea.
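
A hedged sketch of what such a setting might look like (the version identifier and the table are invented for illustration): a version flag chooses between a table emitted as preinitialized data and the same table filled in by a static ctor at start-up.

version (Precompute)
{
	// compile-time path: the initializer becomes preinitialized data
	const int[10] squares = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81];
}
else
{
	// run-time path: the same table, rebuilt by a static ctor on every start
	int[10] squares;
	static this()
	{
		for (int i = 0; i < 10; i++)
			squares[i] = i * i;
	}
}

Either way, client code just reads squares; whether the cost lands at compile time or at start-up becomes a build switch (e.g. -version=Precompute).
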
December 20, 2005
Ben Hinkle wrote:
> "Don Clugston" <dac@nospam.com.au> wrote in message news:do6m1f$oh8$1@digitaldaemon.com...
> 
>>Ben Hinkle wrote:
>>
>>>>It would in theory be possible to apply the D meaning of const to functions and function parameters. For example, here's the metaprogramming "pow!()":
>>>>
>>>>/** real pow!(real a, int b)
>>>>* Fast integer powers
>>>>*/
>>>>template pow(real a, int b)
>>>>{
>>>> static if (b==0) const real pow=1.0L;
>>>> else static if (b<0) const real pow = 1.0L/.pow!(a, -b);
>>>> else static if (b==1) const real pow = a;
>>>> else static if (b & 1) const real pow = a * .pow!(a*a, b>>1);
>>>> else const real pow = .pow!(a*a, b>>1);
>>>>}
>>>>
>>>>
>>>>Ideally, this could be written as:
>>>>
>>>>const real pow(const real a, const int b)
>>>>{
>>>> if (b==0) return 1.0L;
>>>> else if (b<0) return 1.0L/pow(a, -b);
>>>> else if (b==1) return a;
>>>> else if (b & 1) return a * pow(a*a, b>>1);
>>>> else return pow(a*a, b>>1);
>>>>}
>>>>
>>>>That is, provide an overload for the case in which all parameters are known at compile time. (maybe the "const" before real a, real b are unnecessary, they are implied by the const return?).
>>>
>>>That could be one approach, though personally I'm not sure its worth making the language and overloading rules more complex. For example given
>>>  real pow(real a, int b);
>>>  const real pow(const real a, const int b);
>>>what would the compiler do with
>>>  pow(1.0,1)?
>>>The compiler could convert the const double to const real and call the compile-time pow or it could call the run-time pow.
>>
>>It should definitely treat the constant as a const real.
> 
> 
> It isn't obvious to me which one to choose. I would expect that it would follow D's "error on any multiple matches" rule.
> 
> 
>>>I think it's better to
>>>have either 1) a separate function name for compile-time pow if the algorithm for the compile-time version is different than the run-time version or 2) import a module called something like std.inlinemath that has the compile-time pow and is tailored to let the compiler inline as much as possible.
>>
>>This is what I've done, effectively -- meta.math includes pow!() which behaves exactly in that way.
> 
> I'm not sure I'm communicating my point. You have written meta.math.pow!() and have suggested (or maybe I should say floated rather than suggested) a function pow(const real, const int). My point is that in a perfect world a "compile-time" pow shouldn't be a template and shouldn't use overloading on pow. Why invent lots of machinery when it might be avoidable?

I agree with you. But I think that, in practice, a compiler from that perfect world is extremely difficult to create. Even after decades of work, highly optimising compilers remain very difficult to get right; existing compilers have problems even with simple cases, like matrix operations. This seems to be the rationale behind much of the code in Boost.

It's possible that the reason compilers aren't good at it is simply that it has never been a priority; but I suspect it's an intrinsically difficult problem. Code which is optimised to be fast at run time is typically going to be very difficult for a compiler to understand; so it's much more effective to provide separate algorithms for the two cases.
In practice, I think that compiler writers have too much work to do already.

Pragmatically, I think the best approach is to attempt to bring the run-time and compile-time languages together. In this respect, the use of "static if" instead of partial template specialisation is an enormous improvement over C++. Removing some of the current D template quirks will bring them even closer. I'm convinced that we'll be able to make significant progress after that, too, with only very minor language enhancements.
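
As a small, hedged illustration of that point (the example is invented, not from this thread): with "static if" the whole compile-time recursion fits in one template body, whereas C++ needs separate terminating specialisations.

template Fib(int n)
{
	static if (n < 2)
		const long Fib = n;
	else
		const long Fib = .Fib!(n - 1) + .Fib!(n - 2);
}

// The C++ equivalent needs extra declarations just to stop the recursion:
//   template<int n> struct Fib { static const long value = Fib<n-1>::value + Fib<n-2>::value; };
//   template<>      struct Fib<0> { static const long value = 0; };
//   template<>      struct Fib<1> { static const long value = 1; };
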
December 21, 2005
Ben Hinkle wrote:
>>How about going all the way (down the road of orthogonality ) and make this a full blown syntax shortcut for templated functions:
>>
>>  TYPE opAdd(const TYPE, TYPE a, TYPE b ) {
>>return a + b;
>>  }
>>  ...
>>  // creates a templated function instance for TYPE=int :
>>  opAdd(int, 40, 2);
> 
> 
> I don't understand the advantage over the current style opAdd!(int)(40,2).
> 
I too don't see any (beyond orthogonality), but maybe in the future that could bring some more tangible advantages.

-- 
Bruno Medeiros - CS/E student
"Certain aspects of D are a pathway to many abilities some consider to be... unnatural."
December 21, 2005
Awesome!!!!  Have you done any benchmarks?

-Craig


December 21, 2005
In article <doc040$2jt4$1@digitaldaemon.com>, Craig Black says...
>
>Awesome!!!!  Have you done any benchmarks?
>
>-Craig

None yet, but I'd be lying if I said that I wasn't curious myself.  When I wrote this, I really just wanted a proof-of-concept - performance wasn't a concern. :)

But if I had to guess, I'd say that it comes out ahead for largeish queries or for programs that heavily rely on regular expressions.

- EricAnderton at yahoo
January 04, 2006
"Sean Kelly" <sean@f4.ca> wrote in message news:do769q$1k87$1@digitaldaemon.com...
> I think determining a tipping point is probably situation dependent.  In C++, for example, I find a lot of what's in Boost to be a fantastic example of what's possible using C++ templates, but the implementation often (necessarily) so horrendous that I'm disinclined to actually use it.

The problem with C++ template metaprogramming is that it was discovered as a side effect of templates; the templates were never designed to do that. Hence, it is, as you say, horrendous. In D we have the benefit of seeing where we want to go and putting the road directly there, rather than just following the arbitrary lay of the land and seeing where we wound up.

