December 08, 2015
On Tuesday, 8 December 2015 at 21:18:01 UTC, Timon Gehr wrote:
> On 12/08/2015 09:56 PM, cym13 wrote:
>> On Tuesday, 8 December 2015 at 17:33:49 UTC, Andrei Alexandrescu wrote:
>>> On 12/08/2015 12:12 PM, Timon Gehr wrote:
>>>> O(log(n+m)) = O(log(n)+log(m)).
>>>
>>> Noice. Yes I did miss it. Thx!! -- Andrei
>>
>> Surely I'm missing something obvious but why is it true exactly?
>
> n+m ≤ 2·max(n,m). (1)
> max(n,m) ≤ n+m.   (2)
>
> Hence,
>
> log(n+m) ≤ log(max(n,m)) + log(2)       by (1)
>          = max(log(n),log(m)) + log(2)  by monotonicity
>          ≤ log(n) + log(m) + log(2)     by (2),
>
> log(n)+log(m) ≤ 2·max(log(n),log(m))    by (1)
>               = 2·log(max(n,m))         by monotonicity
>               ≤ 2·log(n+m)              by " and (2).
>
> Similar arguments work for any monotone increasing function that does not grow too fast.

This seems reasonable, but the logarithm is undefined if n or m is zero.
December 08, 2015
On 12/08/2015 10:35 PM, Charles wrote:
> On Tuesday, 8 December 2015 at 21:18:01 UTC, Timon Gehr wrote:
>> On 12/08/2015 09:56 PM, cym13 wrote:
>>> On Tuesday, 8 December 2015 at 17:33:49 UTC, Andrei Alexandrescu wrote:
>>>> On 12/08/2015 12:12 PM, Timon Gehr wrote:
>>>>> O(log(n+m)) = O(log(n)+log(m)).
>>>>
>>>> Noice. Yes I did miss it. Thx!! -- Andrei
>>>
>>> Surely I'm missing something obvious but why is it true exactly?
>>
>> n+m ≤ 2·max(n,m). (1)
>> max(n,m) ≤ n+m.   (2)
>>
>> Hence,
>>
>> log(n+m) ≤ log(max(n,m)) + log(2)       by (1)
>>          = max(log(n),log(m)) + log(2)  by monotonicity
>>          ≤ log(n) + log(m) + log(2)     by (2),
>>
>> log(n)+log(m) ≤ 2·max(log(n),log(m))    by (1)
>>               = 2·log(max(n,m))         by monotonicity
>>               ≤ 2·log(n+m)              by " and (2).
>>
>> Similar arguments work for any monotone increasing function that does
>> not grow too fast.
>
> This seems reasonable, but the logarithm is undefined if n or m is zero.

The context is asymptotic runtime bounds. The special cases for small values of continuous logarithms can just be defined away. They have no relevance for asymptotic runtime analysis (even though defining big-O adequately for multiple variables is more tricky than for a single variable).
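
For what it's worth, restricting to n, m >= 1 (so that both logarithms are nonnegative), the two chains of inequalities above can be spot-checked numerically; a minimal sketch in D:

import std.math : log;
import std.stdio : writeln;

void main()
{
    // Check  log(n+m) <= log(n) + log(m) + log(2)   and
    //        log(n) + log(m) <= 2*log(n+m)          for 1 <= n, m <= 1000.
    foreach (n; 1 .. 1001)
        foreach (m; 1 .. 1001)
        {
            double x = n, y = m;
            // tiny epsilon absorbs floating-point rounding at the equality case n = m = 1
            assert(log(x + y) <= log(x) + log(y) + log(2.0) + 1e-12);
            assert(log(x) + log(y) <= 2 * log(x + y) + 1e-12);
        }
    writeln("both bounds hold on the sampled range");
}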
December 08, 2015
On 12/08/2015 04:55 PM, Timon Gehr wrote:
> The context is asymptotic runtime bounds. The special cases for small
> values of continuous logarithms can just be defined away. They have no
> relevance for asymptotic runtime analysis (even though defining big-O
> adequately for multiple variables is more tricky than for a single
> variable).

From the cycle "Andrei's esprits d'escalier", the inequality can be derived from the somewhat notorious log sum inequality if you make all bi equal: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Information_Theory/lecture3.pdf

Andrei
December 09, 2015
On 12/08/2015 11:31 PM, Andrei Alexandrescu wrote:
> On 12/08/2015 04:55 PM, Timon Gehr wrote:
>> The context is asymptotic runtime bounds. The special cases for small
>> values of continuous logarithms can just be defined away. They have no
>> relevance for asymptotic runtime analysis (even though defining big-O
>> adequately for multiple variables is more tricky than for a single
>> variable).
>
>  From the cycle "Andrei's esprits d'escalier", the inequality can be
> derived from the somewhat notorious log sum inequality if you make all
> bi equal:
> http://ocw.usu.edu/Electrical_and_Computer_Engineering/Information_Theory/lecture3.pdf
>
>
> Andrei

Which inequality?
December 08, 2015
On Tue, Dec 08, 2015 at 09:30:09PM +0000, Charles via Digitalmars-d wrote:
> On Tuesday, 8 December 2015 at 20:56:28 UTC, cym13 wrote:
> >On Tuesday, 8 December 2015 at 17:33:49 UTC, Andrei Alexandrescu wrote:
> >>On 12/08/2015 12:12 PM, Timon Gehr wrote:
> >>>O(log(n+m)) = O(log(n)+log(m)).
> >>
> >>Noice. Yes I did miss it. Thx!! -- Andrei
> >
> >Surely I'm missing something obvious but why is it true exactly?
> 
> I'm confident it isn't. I think the rule he was thinking of was
> O(log(ab)) = O(log(a)+log(b)), which is just a basic property of
> logarithms. It's pretty easy to get to a contradiction between those
> two rules.

There is no contradiction.  Big-O notation deals with asymptotic behaviour as the variables approach infinity, so behaviour at 'small' values of m and n is irrelevant.  The idea is that if m grows fundamentally faster than n, then for sufficiently large values of m and n, the actual value will be dominated by m, and the contribution from n will be negligible. The same argument holds with the roles of m and n swapped. Hence, O(n+m) = O(max(n,m)).

Now, for O(log(n+m)), at sufficiently large values of n and m what
dominates the argument to log will be max(n,m), so O(log(n+m)) =
O(log(max(n,m))). But we've already established that O(f(x)+g(x)) =
O(max(f(x),g(x))), so we note that O(log(max(n,m))) =
O(max(log(n),log(m))) = O(log(n)+log(m)).

Note that the last step only works for slow-growing functions like log,
because log(x+y) < log(x) + log(y) for all sufficiently large x and y.
This is no longer true for functions
that grow too fast, e.g., exp(x+y) < exp(x) + exp(y) is false, so big-O
expressions involving exp cannot be simplified in this way.
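
To put some arbitrary numbers on that (a rough illustration, nothing more):

import std.math : exp, log;
import std.stdio : writefln;

void main()
{
    foreach (x; [10.0, 50.0, 100.0])
    {
        immutable y = x;
        // log grows slowly: log(x+y) stays well below log(x) + log(y) here...
        writefln("log(%s + %s) = %.3f   vs   log + log = %.3f",
                 x, y, log(x + y), log(x) + log(y));
        // ...while exp(x+y) completely dwarfs exp(x) + exp(y).
        writefln("exp(%s + %s) = %.3g   vs   exp + exp = %.3g",
                 x, y, exp(x + y), exp(x) + exp(y));
    }
}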


T

-- 
We've all heard that a million monkeys banging on a million typewriters will eventually reproduce the entire works of Shakespeare.  Now, thanks to the Internet, we know this is not true. -- Robert Wilensky
December 09, 2015
On 12/09/2015 02:02 AM, H. S. Teoh via Digitalmars-d wrote:
> Big-O notation deals with asymptotic
> behaviour as the variables approach infinity, so behaviour at 'small'
> values of m and n is irrelevant.

The problem is, however, that only m /or/ n could be small. Therefore, formalizing this intuition is somewhat more delicate than one would like it to be. E.g. see
http://people.cis.ksu.edu/~rhowell/asymptotic.pdf
December 09, 2015
On Wednesday, 9 December 2015 at 01:40:12 UTC, Timon Gehr wrote:
> On 12/09/2015 02:02 AM, H. S. Teoh via Digitalmars-d wrote:
>> Big-O notation deals with asymptotic
>> behaviour as the variables approach infinity, so behaviour at 'small'
>> values of m and n is irrelevant.
>
> The problem is, however, that only m /or/ n could be small.

Pretty sure as per property 1 (the common, "weak" definition of a multi-variate O class) no variable can be "small". For all n >= n_thresh and all m >= m_thresh the inequality must hold, and that's only the weakest property.
December 09, 2015
On 12/09/2015 03:06 AM, Algo wrote:
> On Wednesday, 9 December 2015 at 01:40:12 UTC, Timon Gehr wrote:
>> On 12/09/2015 02:02 AM, H. S. Teoh via Digitalmars-d wrote:
>>> Big-O notation deals with asymptotic
>>> behaviour as the variables approach infinity, so behaviour at 'small'
>>> values of m and n is irrelevant.
>>
>> The problem is, however, that only m /or/ n could be small.
>
> Pretty sure as per property 1 (common, "weak" definition of
> multi-variate O class) no variable can be "small".

It certainly can be. Property 1 only talks about those values of the function where this is not the case though.

> For all n >= n_thresh and all m >= m_thresh the inequality must hold,
> and that's only the weakest property.

Exactly, it is a rather weak property. Other properties can (and should) also constrain functions in O(f(n,m)) on those values where one of the two variables is small.

You could have a function like:

void bar(int n) @Time(BigTheta("2^n"));

void foo(int n, int m) {
    if (n < 10) return bar(m); // costs Theta(2^m), unbounded in m
    if (m < 10) return bar(n); // costs Theta(2^n), unbounded in n
    // otherwise: constant work
}

If only property 1 is required, one can derive that foo runs in O(1), which is inadequate.
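
To make that concrete, here is a toy cost model for foo (just a sketch; it assumes bar(k) costs exactly 2^k, as per the annotation):

import std.stdio : writefln;

// Hypothetical cost of foo(n, m), mirroring the branches above.
ulong cost(int n, int m)
{
    if (n < 10) return 1UL << m; // bar(m): ~2^m steps
    if (m < 10) return 1UL << n; // bar(n): ~2^n steps
    return 1;                    // constant work
}

void main()
{
    // On the region that property 1 inspects (n >= 10 and m >= 10),
    // the cost is bounded by a constant:
    writefln("cost(10,10) = %s, cost(1000,1000) = %s", cost(10, 10), cost(1000, 1000));
    // ...yet with n fixed at 5 the cost grows without bound in m:
    foreach (m; [10, 20, 40, 60])
        writefln("cost(5,%s) = %s", m, cost(5, m));
}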