August 07, 2009
On 2009-08-07 12:33:09 -0400, Miles <_______@_______.____> said:

> Lars T. Kyllingstad wrote:
>> Neither of the natural candidates, a^b and a**b, are an option, as they
>> are, respectively, already taken and ambiguous.
> 
> I think that a ** b can be used; it is not ambiguous except to the
> tokenizer of the language. It is the same difference you have with:
> 
>   a ++ b  -> identifier 'a', unary operator '++', identifier 'b' (not
> parseable)
> 
>   a + + b  -> identifier 'a', binary operator '+', unary operator '+',
> identifier 'b' (parseable)

But to be coherent with a++, which does a+1, shouldn't a** mean a to the power of 1?

-- 
Michel Fortin
michel.fortin@michelf.com
http://michelf.com/

August 07, 2009
Michel Fortin wrote:
> On 2009-08-07 06:50:25 -0400, "Lars T. Kyllingstad" <public@kyllingen.NOSPAMnet> said:
> 
>> Daniel Keep has proposed the syntax
>>
>>    a*^b
>>
>> while my suggestion was
>>
>>    a^^b
> 
> I always wondered why there isn't an XOR logical operator.
> 
>     binary     logical
>     (a & b) => (a && b)
>     (a | b) => (a || b)
>     (a ^ b) => (a ^^ b)

 a | b | a ^^ b | a != b
---+---+--------+--------
 F | F |   F    |   F
 F | T |   T    |   T
 T | F |   T    |   T
 T | T |   F    |   F

That's why.
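The equivalence in the table can be checked mechanically; a small C++ sketch (C++ standing in for D here, with bitwise ^ restricted to bool playing the role of the proposed ^^ — the helper names are ours):

```cpp
#include <cassert>
#include <initializer_list>

// For genuine bool operands, logical XOR is exactly inequality:
// ^ restricted to bool has the same truth table as !=.
inline bool logical_xor(bool a, bool b) { return a ^ b; }

// Check all four rows of the truth table above.
inline bool xor_equals_neq() {
    for (bool a : {false, true})
        for (bool b : {false, true})
            if (logical_xor(a, b) != (a != b))
                return false;
    return true;
}
```

So on bool operands a hypothetical ^^ would buy nothing over !=; the interesting cases only appear once implicit conversions enter the picture, as the follow-up post notes.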
August 07, 2009
On 2009-08-07 13:01:55 -0400, Daniel Keep <daniel.keep.lists@gmail.com> said:

> Michel Fortin wrote:
>> I always wondered why there isn't an XOR logical operator.
>> 
>> binary     logical
>> (a & b) => (a && b)
>> (a | b) => (a || b)
>> (a ^ b) => (a ^^ b)
> 
>  a | b | a ^^ b | a != b
> ---+---+--------+--------
>  F | F |   F    |   F
>  F | T |   T    |   T
>  T | F |   T    |   T
>  T | T |   F    |   F
> 
> That's why.

For this table to work, a and b need to be boolean values. With && and ||, you have an implicit conversion to boolean; not with !=. So if a == 1 and b == 2, a hypothetical ^^ would yield false, since both operands are converted to true, while != would yield true.

But I have another explanation now. With && and ||, there's always a chance that the expression on the right won't be evaluated. If that weren't the case, the only difference between && and &, or between || and |, would be the automatic conversion to a boolean value. With ^^, you always have to evaluate both sides, so it's less useful.
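The short-circuiting point can be made concrete; a hedged C++ sketch (the helper names and the global counter are ours, and != on bools stands in for the hypothetical ^^):

```cpp
#include <cassert>

// A predicate that records whether it was ever evaluated.
inline int call_count = 0;              // C++17 inline variable
inline bool noisy_true() { ++call_count; return true; }

// && short-circuits: when the left side is false, the right side
// is never evaluated at all.
inline bool demo_short_circuit() {
    call_count = 0;
    bool r = false && noisy_true();     // noisy_true() is skipped
    return !r && call_count == 0;
}

// Any logical XOR must inspect both operands to know its result,
// so a hypothetical ^^ could never skip either side.
inline bool demo_xor_evaluates_both() {
    call_count = 0;
    bool r = false != noisy_true();     // != as stand-in for ^^
    return r && call_count == 1;
}
```

This is exactly why ^^ would be the odd one out: unlike && and ||, it cannot offer lazy evaluation, only the implicit conversion to bool.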

-- 
Michel Fortin
michel.fortin@michelf.com
http://michelf.com/

August 07, 2009
Fri, 07 Aug 2009 12:50:25 +0200, Lars T. Kyllingstad thusly wrote:

> In the 'proposed syntax change' thread, Don mentioned that an exponentiation operator is sorely missing from D. I couldn't agree more.

> ...

> "Why do we need this?" you say. "Isn't pow(a,b) good enough?" And yes,
> pow(a,b) is just as good as mul(a,b) or div(a,b), but we don't use
> those, do we?

A lot of other built-in operators are missing. I'd like to add opGcd() (greatest common divisor), opFactorial (memoizing O(1) implementation, for advertising the terseness of D on reddit), opStar (has been discussed before), opSwap, opFold, opMap, and opFish (><>) to your list. Operations for paraconsistent logic would be nice, too. After all, these are operations I use quite often ==> everyone must need them badly.

> He also proposed that the overload be called opPower.
> 
> What do you think?

Sounds perfect.

> 
> -Lars

August 07, 2009
"Andrei Alexandrescu" <SeeWebsiteForEmail@erdani.org> wrote in message news:4A7C5313.10105@erdani.org...
> Jimbob wrote:
>> "bearophile" <bearophileHUGS@lycos.com> wrote in message news:h5h3uf$23sg$1@digitalmars.com...
>>> Lars T. Kyllingstad:
>>>> He also proposed that the overload be called opPower.
>>> I want to add two small things to that post of mine: http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=95123
>>>
>>> The name opPow() may be good enough instead of opPower().
>>>
>>> And A^^3 may be faster than A*A*A when A isn't a simple number, so
>>> always replacing the
>>> power with mults may be bad.
>>
>> It won't be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles.
>
> Yeah, but what's the throughput? With multiple ALUs you can get several multiplications fast, even though getting the first one incurs a latency.

In this case you incur the latency of every mul, because each one needs the result of the previous mul before it can start. That's the main reason transcendentals take so long to compute: they have long dependency chains, which make it difficult, if not impossible, for any of the work to be done in parallel.



August 07, 2009
Jimbob wrote:
> "Andrei Alexandrescu" <SeeWebsiteForEmail@erdani.org> wrote in message news:4A7C5313.10105@erdani.org...
>> Jimbob wrote:
>>> "bearophile" <bearophileHUGS@lycos.com> wrote in message news:h5h3uf$23sg$1@digitalmars.com...
>>>> Lars T. Kyllingstad:
>>>>> He also proposed that the overload be called opPower.
>>>> I want to add two small things to that post of mine:
>>>> http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=95123
>>>>
>>>> The name opPow() may be good enough instead of opPower().
>>>>
>>>> And A^^3 may be faster than A*A*A when A isn't a simple number, so always replacing the
>>>> power with mults may be bad.
>>> It won't be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles.
>> Yeah, but what's the throughput? With multiple ALUs you can get several multiplications fast, even though getting the first one incurs a latency.
> 
> In this case you incur the latency of every mul, because each one needs the result of the previous mul before it can start. That's the main reason transcendentals take so long to compute: they have long dependency chains, which make it difficult, if not impossible, for any of the work to be done in parallel.


Oh, you're right. At least if there were four multiplies in there, I could've had a case :o).

Andrei
August 07, 2009
Reply to Jimbob,

> "bearophile" <bearophileHUGS@lycos.com> wrote in message
> news:h5h3uf$23sg$1@digitalmars.com...
> 
>> Lars T. Kyllingstad:
>> 
>>> He also proposed that the overload be called opPower.
>>> 
>> I want to add two small things to that post of mine:
>> http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalma
>> rs.D&article_id=95123
>> 
>> The name opPow() may be good enough instead of opPower().
>> 
>> And A^^3 may be faster than A*A*A when A isn't a simple number, so
>> always
>> replacing the
>> power with mults may be bad.
> It won't be on x86. Multiplication has a latency of around 4 cycles
> whether int or float, so x*x*x will clock around 12 cycles. The main
> instruction needed for pow, F2XM1, costs anywhere from 50 cycles to
> 120, depending on the cpu. And then you need to do a bunch of other
> stuff to make F2XM1 handle different bases.
> 

For constant integer exponents the compiler should be able to choose between the multiplication solution and an intrinsic solution.

also: http://en.wikipedia.org/wiki/Exponentiation#Efficiently_computing_a_power
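The linked technique, exponentiation by squaring, takes O(log n) multiplications instead of n - 1. A minimal C++ sketch of the iterative form (the function name is ours):

```cpp
#include <cstdint>

// Exponentiation by squaring: computes base^exp using one squaring
// per bit of the exponent, plus one extra multiply per set bit.
inline std::uint64_t ipow(std::uint64_t base, unsigned exp) {
    std::uint64_t result = 1;
    while (exp > 0) {
        if (exp & 1)          // low bit set: fold this power of base in
            result *= base;
        base *= base;         // square for the next bit
        exp >>= 1;
    }
    return result;
}
```

For example, ipow(3, 13) needs only 6 multiplies rather than 12; for small constant exponents, though, the straight-line multiply sequence the compiler can emit is at least as good.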


August 07, 2009
Jimbob Wrote:

>bearophile:
> > And A^^3 may be faster than A*A*A when A isn't a simple number, so always
> > replacing the
> > power with mults may be bad.
> 
> It won't be on x86. Multiplication has a latency of around 4 cycles whether int or float, so x*x*x will clock around 12 cycles. The main instruction needed for pow, F2XM1, costs anywhere from 50 cycles to 120, depending on the cpu. And then you need to do a bunch of other stuff to make F2XM1 handle different bases.

I don't understand what you mean.
But "when A isn't a simple number" means, for example, when A is a matrix. In that case the algorithm for A^3 may be faster than doing two matrix multiplications, and even when it isn't faster it may be better numerically, etc. In such cases I'd like to leave the decision about what to do to the matrix power algorithm, and I don't think rewriting the power is good.

This means the rewriting rules I have shown (x^^2 => x*x, x^^3 => x*x*x, and maybe x^^4 => y=x*x; y*y) should be used only when x is a built-in type.
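What the front end would do with a constant integer exponent can be sketched with a C++ template standing in for the proposed rewrite (the template and its name are ours, not part of any proposal):

```cpp
// Constant exponent known at compile time: unroll x^^N into plain
// multiplications, e.g. x^^3 => x*x*x and x^^4 => y = x*x; y*y.
template <unsigned N>
double pow_const(double x) {
    if constexpr (N == 0) {
        return 1.0;
    } else if constexpr (N % 2 != 0) {
        return x * pow_const<N - 1>(x);  // peel one factor off odd N
    } else {
        double y = pow_const<N / 2>(x);  // square the half power
        return y * y;
    }
}
```

For a user-defined type the analogous rewrite should not fire: the overloaded opPow may implement something smarter (or more numerically stable) than repeated multiplication, which is exactly bearophile's point.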

Bye,
bearophile
August 07, 2009
On Fri, Aug 7, 2009 at 10:43 AM, language_fan<foo@bar.com.invalid> wrote:
> Fri, 07 Aug 2009 12:50:25 +0200, Lars T. Kyllingstad thusly wrote:
>
>> In the 'proposed syntax change' thread, Don mentioned that an exponentiation operator is sorely missing from D. I couldn't agree more.
>
>> ...
>
>> "Why do we need this?" you say. "Isn't pow(a,b) good enough?" And yes,
>> pow(a,b) is just as good as mul(a,b) or div(a,b), but we don't use
>> those, do we?
>
> A lot of other built-in operators are missing. I'd like to add opGcd()
> (greatest common divisor), opFactorial (memoizing O(1)
> implementation, for advertising the terseness of D on reddit), opStar
> (has been discussed before), opSwap, opFold, opMap, and opFish (><>) to
> your list. Operations for paraconsistent logic would be nice, too. After
> all, these are operations I use quite often ==> everyone must need them
> badly.

Ha ha.  Funny jokes.  Can we call it opDarwin instead of opFish?  That would be funnier.

--bb
August 07, 2009
"bearophile" <bearophileHUGS@lycos.com> wrote in message news:h5hvhh$if8$1@digitalmars.com...
> Jimbob Wrote:
>
>>bearophile:
>> > And A^^3 may be faster than A*A*A when A isn't a simple number, so
>> > always
>> > replacing the
>> > power with mults may be bad.
>>
>> It won't be on x86. Multiplication has a latency of around 4 cycles
>> whether
>> int or float, so x*x*x will clock around 12 cycles. The main instruction
>> needed for pow, F2XM1, costs anywhere from 50 cycles to 120, depending on
>> the cpu. And then you need to do a bunch of other stuff to make F2XM1
>> handle
>> different bases.
>
> I don't understand what you mean.
> But "when A isn't a simple number" means for example when A is a matrix.

Oops, my brain didn't parse what you meant by "simple number".