October 06, 2008
Vincent Richomme wrote:
> Michel Fortin wrote:
>> On 2008-10-05 20:57:31 -0400, "Chris R. Miller" <lordsauronthegreat@gmail.com> said:
>>
>>> The !() syntax seems to serve only as a heads up that it's a template. Otherwise (as far as I can tell) a simple foo(int)(bar, baaz) would work just as well as foo!(int)(bar, baaz).
> 
> What about the ^ ? I think it's no worse than !
> 
> foo^(int)(bar) but maybe this symbol is already used by D ?
> Personally I would prefer to keep the C++, Java, C# syntax with <> because people are used to it even if in some cases it looks like a bit shift op.

^ is for bitwise XOR!

Unless you use ^^

  foo^^(int)(bar)

Not my cup of tea though.
October 06, 2008
Lars Ivar Igesund wrote:
> Gregor Richards wrote:
> 
>> downs wrote:
>>> Andrei Alexandrescu wrote:
>>>> The problem I see with "!" as a template instantiation is not technical.
>>>> I write a fair amount of templated code and over years the "!" did not
>>>> grow on me at all. I was time and again consoled by Walter that one day
>>>> that will happen, but it never did. I also realized that Walter didn't
>>>> see a problem with it because he writes only little template code.
>>>>
>>> FWIW: I write large volumes of template code and I like the ! just fine.
>>>
>> Don't take this endorsement lightly. When downs says he writes large
>> volumes of template code, he means volumes in the literary sense: You
>> could fill a fairly-large bookshelf with his tomes of template code.
> 
> I'm sure you mean not so large tomes, but with very long lines?
> 

If you strip all the \n's you'll get a very long line. /joke
(unless you've used a #line)
October 06, 2008
KennyTM~ wrote:
> Vincent Richomme wrote:
>> Michel Fortin wrote:
>>> On 2008-10-05 20:57:31 -0400, "Chris R. Miller" <lordsauronthegreat@gmail.com> said:
>>>
>>>> The !() syntax seems to serve only as a heads up that it's a template. Otherwise (as far as I can tell) a simple foo(int)(bar, baaz) would work just as well as foo!(int)(bar, baaz).
>>
>> What about the ^ ? I think it's no worse than !
>>
>> foo^(int)(bar) but maybe this symbol is already used by D ?
>> Personally I would prefer to keep the C++, Java, C# syntax with <> because people are used to it even if in some cases it looks like a bit shift op.
> 
> ^ is for bitwise XOR!
Yes, and what is ! for?
Just look at managed C++ in .NET and tell me if ^ is always a bitwise XOR.

> Unless you use ^^
> 
>   foo^^(int)(bar)
> 
> Not my cup of tea though.
I don't like ^^ either.
October 06, 2008
Vincent Richomme wrote:
> KennyTM~ wrote:
>> Vincent Richomme wrote:
>>> Michel Fortin wrote:
>>>> On 2008-10-05 20:57:31 -0400, "Chris R. Miller" <lordsauronthegreat@gmail.com> said:
>>>>
>>>>> The !() syntax seems to serve only as a heads up that it's a template. Otherwise (as far as I can tell) a simple foo(int)(bar, baaz) would work just as well as foo!(int)(bar, baaz).
>>>
>>> What about the ^ ? I think it's no worse than !
>>>
>>> foo^(int)(bar) but maybe this symbol is already used by D ?
>>> Personally I would prefer to keep the C++, Java, C# syntax with <> because people are used to it even if in some cases it looks like a bit shift op.
>>
>> ^ is for bitwise XOR!
> Yes, and what is ! for?
> Just look at managed C++ in .NET and tell me if ^ is always a bitwise XOR.

OK. The *binary* operator ^ is for bitwise XOR!

And the ^ in foo^(int) acts as a *binary* operator also.

And in C++/CLI the ^ in int^ and ^x acts as a *unary* operator, so no problem in this case.

AFAIK one of the reasons ! was chosen is that a!b doesn't make sense in C, so D is free to use ! as a *binary* operator.
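To put that point in code (the function name `pick` is mine, purely illustrative):

```d
import std.stdio;

// Hypothetical helper, purely illustrative.
T pick(T)(T a, T b) { return a > b ? a : b; }

void main()
{
    // Binary ! is unambiguous: in C, `a!b` is a syntax error,
    // so `pick!(int)` can only mean template instantiation.
    writeln(pick!(int)(1, 5)); // 5

    // Binary ^ is already taken: `a ^ (b)` parses as bitwise XOR,
    // which is exactly the shape `foo^(args)` would have.
    int a = 6, b = 3;
    writeln(a ^ (b)); // 5
}
```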

> 
>> Unless you use ^^
>>
>>   foo^^(int)(bar)
>>
>> Not my cup of tea though.
> I don't like ^^ either.
October 06, 2008
Andrei Alexandrescu wrote:

...
> No go due to #line. :o(
> 
> Andrei

Perhaps this is silly, but if #line is the sole blocker, then why not change #line? That feature seems so small in comparison to template instantiation that it isn't fair for it to get the nicer symbol.

October 06, 2008
On Mon, 06 Oct 2008 06:17:44 +0100, Brad Roberts <braddr@puremagic.com> wrote:

> Andrei Alexandrescu wrote:
>> Bruce Adams wrote:
>>> On Mon, 06 Oct 2008 00:55:33 +0100, Andrei Alexandrescu
>>> <SeeWebsiteForEmail@erdani.org> wrote:
>>>>>>
>>>>>> Andrei
>>>>>  I disagree. I'm not saying its easy but it could be done. We would
>>>>> have to start
>>>>> with something relatively simple and work our way up but it could be
>>>>> done.
>>>>
>>>> I, too, think it can be done in the same way supersonic mass
>>>> transportation can be done, but having done research in the area I can
>>>> tell you you are grossly underestimating the difficulties. It is a
>>>> project of gargantuan size. Today the most advanced systems only managed
>>>> to automatically prove facts that look rather trivial to the casual
>>>> reader. There is absolutely no hope for D to embark on this.
>>>>
>>> I'm not asking for a generalised theorem prover. Something to handle
>>> even the
>>> simple cases is a start.
>>> I agree that D won't have this (any time soon) but mainly because there
>>> are several hundred things higher up the priority list.
>>>
>>>>> Compiler's already do all kinds of clever analyses behind the scenes
>>>>> but each one
>>>>> is often hard coded. I suspect the main difficulty is giving users
>>>>> too much rope
>>>>> by which to hang themselves, or rather hang the compiler trying to
>>>>> prove something
>>>>> it doesn't realise it can't. Marrying declarative / constraint based
>>>>> programming
>>>>> at compile time is creeping in via templates. I wish it was less
>>>>> well hidden.
>>>>
>>>> I discussed the problem this morning with Walter and he also started
>>>> rather cocky: if you assert something early on, you can from then on
>>>> assume the assertion is true (assuming no assignment took place, which
>>>> is not hard if you have CFA in place). He got an arrow in a molar with
>>>> the following example:
>>>>
>>>> double[] vec;
>>>> foreach (e; vec) assert(e >= 0);
>>>> // now we know vec is all nonnegatives
>>>> normalize(vec);
>>>>
>>>> The definition of normalize is:
>>>>
>>>> void normalize(double[] vec)
>>>> {
>>>>      foreach (e; vec) assert(e >= 0);
>>>>      auto sum = reduce!("a + b")(vec, 0);
>>>>      assert(sum > 0);
>>>>      foreach (ref e; vec) e /= sum;
>>>> }
>>>>
>>>> If normalize takes udouble[] and you have one of those, there's no
>>>> need to recheck. Automated elimination of the checking loop above is
>>>> really hard.
>>>>
>>>>
>>>> Andrei
>>>
>>> Is it? I think your example needs to be expanded or we may be talking
>>> at cross purposes.
>>> Firstly I would rearrange things a little, though in principle it makes
>>> no difference.
>>>
>>>  pre
>>>  {
>>>     static assert(foreach (e; vec) assert(e >= 0));
>>>  }
>>>  void normalize(double[] vec)
>>>  {
>>>     auto sum = reduce!("a + b")(vec, 0);
>>>     assert(sum > 0);
>>>     foreach (ref e; vec) e /= sum;
>>>  }
>>>
>>> double[] vec;
>>> static assert(foreach (e; vec) assert(e >= 0));  // line X
>>>
>>> // now we know vec is all nonnegatives
>>> normalize(vec);   // line Y
>>>
>>> Imagine I have a prolog style symbolic unification engine to hand
>>> inside my compiler.
>>>
>>> At line X the static assertion is evaluated.
>>> The logical property e>=0 is asserted on the vec symbol.
>>> The compiler reaches line Y.
>>> Vec has not been modified so still has this property associated with it.
>>> We now unify the symbol representing vec with the contract on normalize.
>>> The unification succeeds and everything is fine.
>>>
>>> Now imagine we have a stupid analyser.
>>>
>>> double[] vec;
>>> static assert(foreach (e; vec) assert(e >= 0));  // line X
>>>
>>> vec[0] -= 1; // line Z
>>>
>>> // now we know vec is all nonnegatives
>>> normalize(vec);   // line Y
>>>
>>> when the compile time analyser reaches line Z it can't work out
>>> whether or not the
>>> contract still applies, so it removes the assertion that vec is e>=0
>>> for all elements, or
>>> rather asserts that it is not provably true.
>>> Now when we reach Y the unification fails.
>>> We don't throw a compile time contraint violation error. We haven't
>>> proved it to have failed.
>>> We throw a compile time constraint unprovable error or warning.
>>>
>>> It's like a lint warning. Your code may not be wrong but you might want
>>> to consider altering it
>>> in a way that makes it provably correct.
>>>
>>> We would really like to be able to assert that certain properties
>>> always hold for a variable
>>> through its life-time. That is a harder problem.
>>>
>>> I see something like this as a basis on which more advanced analysis
>>> can be built gradually.
>>>
>>> Actually re-using static assert above was probably misleading. It
>>> would be better to have something
>>> that says talk to the compile time theorem prover (a prolog
>>> interpreter would do).
>>>
>>> I have been meaning to write something along these lines for years but
>>> so far I haven't got around to it
>>> so I might as well stop keeping it under my hat.
>>>
>>> Regards,
>>>
>>> Bruce.
>>
>> I think it's terrific to try your hand at it. I for one know am not
>> equipped for such a pursuit. I have no idea how to perform unification
>> over loops, particularly when they may have myriads of little
>> differences while still having the same semantics; how to do that
>> cross-procedurally in reasonable time; how to take control flow
>> sensitivity into account; and why exactly people who've worked hard at
>> it haven't gotten very far. But I am equipped to define a type that
>> enforces all that simply and clearly, today.
>>
>>
>> Andrei
>
> I'm not an expert at optimizers, but I do read a whole lot.  This type
> of analysis is often called 'value range propagation'.  Tracking the
> possible values for data as it flows around and eliminating checks based
> on prior events.  Most modern compilers do some amount of it.  The
> question is how much and under what circumstances.  I'm not sure how
> many combine that with static value analysis to create multiple function
> versions.  IE, 'enough' callers to justify emitting two different forms
> of the function, each with different base assumptions about the incoming
> data and adjusting call sites.  I know that some compilers do that for
> known constant values.  I guess it's something halfway to inlining.
>
> I can guess at what dmd's backend does.. or rather doesn't do. :)
>
> Later,
> Brad

Perhaps more can be done with LLVM.

I was actually suggesting something more radical than this: having a Turing-complete
interpreter available at compile time with access to the data structures
used to represent the data flow analysis. This is what I mean by rope enough
to hang yourself with. This isn't done currently for many reasons.
1. it's a sledgehammer to crack a nut (though it is more general purpose)
- as Andrei points out he can do myriad useful things today without this
2. compiler authors are rightly uneasy about exposing internal data structures at compile time
3. compiler authors are rightly uneasy permitting activities that can potentially
result in longer compile times and especially infinitely long compile times (though templates
make this possible)
4. many programmers are uneasy about marrying different programming styles.
One of D's successes is actually in making template metaprogramming more like regular procedural/functional
OO programming and less like pattern matching / logical / declarative programming.
That makes it more comprehensible to the masses, which is a good thing.
But it also hides the possibility of slipping a constraint solver in there.
Though I don't think the CTFE system and templates are powerful enough to write an interpreter in:
you don't have access to state information except through arguments that are constant expressions.
It would be unnecessarily challenging to make this state mutable from different places in the program,
if it's even possible. This is where a multi-level language might win.
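For context on that last point, D's CTFE can already run ordinary functions at compile time, but only on constant-expression inputs and with no access to mutable program state (a minimal sketch):

```d
// An ordinary function, usable at run time or at compile time.
int factorial(int n) { return n <= 1 ? 1 : n * factorial(n - 1); }

// CTFE: initializing an enum forces compile-time evaluation.
// The argument must be a constant expression; CTFE cannot see
// or mutate global state, which is the limitation described above.
enum f5 = factorial(5);
static assert(f5 == 120);

void main() {}
```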

I like the idea of possibly generating two different function implementations internally for the
internal optimizer. However, I was suggesting the analysis is focused on contracts at interface boundaries.

Regards,

Bruce.
October 06, 2008
Andrei Alexandrescu wrote:
> Hello,
> 
> 
> (Background: Walter has kindly allowed ".()" as an alternative to the ugly "!()" for template argument specifications.)
> 
> Just a quick question - I am feeling an increasing desire to add a template called Positive to std.typecons. Then Positive.(real) would restrict its values to only positive real numbers etc.
> 
> The implementation would accept conversion from its base type, e.g. Positive.(real) can be constructed from a real. The constructor enforces dynamically the condition. Also, Positive.(real) accepts implicit conversion to real (no checking necessary).
> 
> There are many places in which Positive can be useful, most notably specifications of interfaces. For example:
> 
> Positive.(real) sqrt(Positive.(real) x);

From the IEEE definition of sqrt(x), x can be >=0, which includes -0.
For negative zero, the return result is -0.
However, not all positive-valued functions will accept -0 as an argument. They also vary in whether or not they will accept an infinity or a NaN.
So there's an annoying bit of complication.


> These specifications are also efficient because checking for positivity is done outside sqrt; if you had a Positive.(real) to start with, there is no cascading checking necessary so there's one test less to do.
> 
> However, there is also the risk that Positive has the same fate the many SafeInt implementation have had in C++: many defined them, nobody used them. What do you think? If you had Positive in std and felt like crunching some numbers, would you use it?

I think it probably is similar to SafeInt in C++. It just doesn't add enough value. Essentially, it allows you to drop some trivial contracts. But I don't think it scales particularly well. As math functions gain more arguments, the relationships in the 'in' contracts tend to get more complicated.

Much more useful would be positive int. Meaning, the top bit of the int is always 0, which is a completely different concept to uint. Then we could have implicit conversion from posint -> int and posint -> uint, and disallow conversions between int <-> uint.
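A rough library-level sketch of that posint idea in present-day D (the name `PosInt`, the runtime check, and the `alias this` conversion are my assumptions; the real thing would hinge on implicit-conversion control the language doesn't currently offer):

```d
import std.exception : enforce;

/// Hypothetical wrapper: an int whose top bit is guaranteed 0,
/// so its value fits losslessly in both int and uint.
struct PosInt
{
    private int value;

    this(int v)
    {
        enforce(v >= 0, "PosInt requires a non-negative value");
        value = v;
    }

    // One implicit conversion path (to int). A real posint would also
    // need an implicit path to uint; `alias this` alone cannot give
    // both, which is why language support would be needed.
    alias value this;

    uint toUint() const { return cast(uint) value; }
}

void main()
{
    auto p = PosInt(42);
    int  i = p;          // implicit, always safe: value is non-negative
    uint u = p.toUint(); // safe: top bit is known to be 0
}
```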

But really I'd like to see polysemous types first...
October 06, 2008
Sat, 04 Oct 2008 23:50:47 -0500,
Andrei Alexandrescu wrote:
> Alexander Panek wrote:
> > Andrei Alexandrescu wrote:
> >> The problem I see with "!" as a template instantiation is not technical. I write a fair amount of templated code and over years the "!" did not grow on me at all. I was time and again consoled by Walter that one day that will happen, but it never did. I also realized that Walter didn't see a problem with it because he writes only little template code.
> >>
> >> I didn't have much beef with other oddities unique to D. For example, I found no problem accommodating binary "~" and I was wondering what makes "!" different. I was just looking at a page full of templates and it looked like crap.
> >>
> >> One morning I woke up with the sudden realization of what the problem was: the shouting.
> >>
> >> In C, "!" is used as a unary operator. That may seem odd at first, but it never follows a word so it's tenuous to associate it with the natural language "!". In D, binary "!" _always_ follows a word, a name, something coming from natural language. So the connotation with exclamation jumps at you.
> >>
> >> That's why I find the choice of "!" poor. I believe it can impede to some extent acquisition of templates by newcomers, and conversely I believe that using .() can make templates more palatable. I tried using ".()" in my code and in only a couple of days it looked and felt way better to me. Based on that experience, I suggest that "!()" is dropped in favor of ".()" for template instantiation for D2.
> >>
> >> Sean's argument that "The exclamation mark signifies an assertion of sorts" is exactly where I'd want templates not to be: they should be blended in, not a hiccup from normal code. Serious effort has been, and still is, made in D to avoid shell-shocking people about use of templates, and I think ".()" would be a good step in that direction.
> > 
> > Sean has a point. Templates are not runtime constructs. So a clear
> > distinction between instantiating a function with a given type and just
> > calling a function that has fixed argument types and a fixed return type
> >  is necessary.
> 
> Why? This sounds objective, so you better back it up. Au contraire, I see absolutely, but absolutely no need for a distinction. If it weren't for syntactic difficulties, to me using straight parentheses for template instantiation would have been the perfect choice. (How many times did you just forget the "!"? I know I often do. Why? Because most of the time it's not even needed.)
> 
> > The exclamation mark gives us this clear distinction and has served well in terms of readability for me, especially because it jumps out - not because it's an exclamation mark, thus having a meaning in natural language, but rather just because of its form. A straight vertical line with a dot underneath it. That just works perfectly well as separator between identifier/type and type argument.
> 
> I believe the clear distinction is not only unnecessary, but undesirable. We should actively fight against it.

You have a problem with shouting.  Not everyone does.

The distinction is important IMHO because the choice between template and runtime is a speed/size tradeoff.  It's better to make it explicit.
October 06, 2008
Andrei Alexandrescu wrote:
> One morning I woke up with the sudden realization of what the problem was: the shouting.

Here's my (nutty) opinion:

Neither the "!" nor the "." really wants to be there. I think the language really *wants* to be using a bare set of parens for templates. Because the language actually wants templates and functions to converge.

Instead of a special-case syntax for templates, and a set of special rules for CTFE, and a whole set of parallel "static" statements (if, else, foreach) and a special compile-time-only type construct (tuples), just let D be D, either at runtime or at compile time.

If a function could return a Type, and if that type could be used in a Type Constructor, then you'd have all the magic template sauce you'd need, and templates could happily converge themselves with regular functions.

Hey! I told you it was going to be nutty!!!

<g>

--benji
October 06, 2008
Jarrett Billingsley wrote:
> On Sun, Oct 5, 2008 at 8:57 PM, Chris R. Miller
> <lordsauronthegreat@gmail.com> wrote:
>> The !() syntax seems to serve only as a heads up that it's a template.
>> Otherwise (as far as I can tell) a simple foo(int)(bar, baaz) would work
>> just as well as foo!(int)(bar, baaz).
>>
> 
> Unambiguous grammar, you fail it.
> 
> foo(bar)(baz); // template instantiation or a chained call?
> 
> This _can_ be _made_ to work, but it would mean that the parse tree
> would be dependent upon semantic analysis, and that just makes things
> slow and awful.  I.e. C++.

I'd be happy to get rid of OpCall, which I've always found confusing and pointless.

--benji