January 23, 2017
On 23/01/17 15:18, Andrei Alexandrescu wrote:
> On 1/23/17 5:44 AM, Shachar Shemesh wrote:
>> If, instead of increasing its size by 100%, we increase it by a smaller
>> percentage of its previous size, we still maintain the amortized O(1)
>> cost (with a multiplier that might be a little higher, but see the trade
>> off). On the other hand, we can now reuse memory.
>
> Heh, I have a talk about it. The limit is the golden cut,
> 1.6180339887498948482... The proof is fun. Anything larger prevents you
> from reusing previously used space. -- Andrei
>

What does D use when we keep appending?

Shachar
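[Editor's note: Andrei's claim lends itself to a quick empirical check. The sketch below is an illustration, not code from the thread. It models a bump allocator in which the grown block must coexist with the old one while its contents are copied, and in which freed blocks coalesce into a single hole at the front of the heap. A growth factor of 1.5 eventually reuses that hole; a factor of 2 never does.]

```python
import math

def first_reuse_step(factor, max_steps=100):
    """Grow a block repeatedly by `factor`, placing each new block in
    the front hole when it fits, else past the live block (the old
    block is freed only after the copy).  Returns the first growth
    step at which the hole is reused, or None if it never is."""
    addr, size = 0, 1          # live block occupies [addr, addr + size)
    for step in range(1, max_steps + 1):
        new_size = math.ceil(size * factor)
        if new_size <= addr:   # the hole [0, addr) fits the new block
            return step
        # hole too small: allocate past the live block, then free it
        addr, size = addr + size, new_size
    return None

print(first_reuse_step(1.5))  # → 6
print(first_reuse_step(2.0))  # → None
```

The threshold matches the golden-ratio bound quoted above: in the limit, reuse requires factor × (factor − 1) ≤ 1, which fails for any factor above (1 + √5)/2 ≈ 1.618.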
January 23, 2017
On 23/01/17 15:18, Andrei Alexandrescu wrote:
> On 1/23/17 5:44 AM, Shachar Shemesh wrote:
>> If, instead of increasing its size by 100%, we increase it by a smaller
>> percentage of its previous size, we still maintain the amortized O(1)
>> cost (with a multiplier that might be a little higher, but see the trade
>> off). On the other hand, we can now reuse memory.
>
> Heh, I have a talk about it. The limit is the golden cut,
> 1.6180339887498948482... The proof is fun. Anything larger prevents you
> from reusing previously used space. -- Andrei
>

I was going to ask for the proof, but I first went to refresh my memory a little about the golden ratio, at which point the proof became somewhat trivial.

Shachar
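[Editor's note: for readers who would rather not re-derive it, here is a reconstruction of the argument presumably meant above; it is not from the thread. Let block sizes grow geometrically as $r^k$. When block $k+1$ is allocated, block $k$ is still live (its contents must be copied), so the hole consists only of blocks $0$ through $k-1$:]

```latex
% Hole available when allocating block k+1 (block k still live):
\sum_{i=0}^{k-1} r^i \;=\; \frac{r^k - 1}{r - 1} \;\ge\; r^{k+1}.
% Divide both sides by r^k and let k \to \infty:
\frac{1}{r-1} \;\ge\; r
\quad\Longleftrightarrow\quad
r^2 - r - 1 \;\le\; 0
\quad\Longleftrightarrow\quad
r \;\le\; \frac{1+\sqrt{5}}{2} = 1.6180339887\ldots
```

So in the limit, any factor above the golden ratio leaves the front hole permanently smaller than the next allocation, which is exactly the claim quoted above.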
January 25, 2017
On Monday, 23 January 2017 at 13:18:57 UTC, Andrei Alexandrescu wrote:
> On 1/23/17 5:44 AM, Shachar Shemesh wrote:
>> If, instead of increasing its size by 100%, we increase it by a smaller
>> percentage of its previous size, we still maintain the amortized O(1)
>> cost (with a multiplier that might be a little higher, but see the trade
>> off). On the other hand, we can now reuse memory.
>
> Heh, I have a talk about it. The limit is the golden cut, 1.6180339887498948482... The proof is fun. Anything larger prevents you from reusing previously used space. -- Andrei

Andrei, could you link this talk? Thanks!
January 25, 2017
On Sunday, 22 January 2017 at 21:29:39 UTC, Markus Laker wrote:
> Obviously, we wouldn't want to break compatibility with existing code by demanding a maximum line length at every call site.  Perhaps the default maximum length should change from its current value -- infinity -- to something like 4MiB: longer than lines in most text files, but still affordably small on most modern machines.

An issue I've had with low default buffer limits: they are difficult to discover and usually start to fail only in production, where you hit the actual big data, which only gets bigger with time. You find and bump one limit, deploy, and only hit another one later.
January 25, 2017
On 01/25/2017 12:58 AM, TheGag96 wrote:
> On Monday, 23 January 2017 at 13:18:57 UTC, Andrei Alexandrescu wrote:
>> On 1/23/17 5:44 AM, Shachar Shemesh wrote:
>>> If, instead of increasing its size by 100%, we increase it by a smaller
>>> percentage of its previous size, we still maintain the amortized O(1)
>>> cost (with a multiplier that might be a little higher, but see the trade
>>> off). On the other hand, we can now reuse memory.
>>
>> Heh, I have a talk about it. The limit is the golden cut,
>> 1.6180339887498948482... The proof is fun. Anything larger prevents
>> you from reusing previously used space. -- Andrei
>
> Andrei, could you link this talk? Thanks!

Not public. -- Andrei
January 25, 2017
On Wednesday, 25 January 2017 at 14:18:15 UTC, Andrei Alexandrescu wrote:
> On 01/25/2017 12:58 AM, TheGag96 wrote:
>> On Monday, 23 January 2017 at 13:18:57 UTC, Andrei Alexandrescu wrote:
>>> On 1/23/17 5:44 AM, Shachar Shemesh wrote:
>>>> If, instead of increasing its size by 100%, we increase it by a smaller
>>>> percentage of its previous size, we still maintain the amortized O(1)
>>>> cost (with a multiplier that might be a little higher, but see the trade
>>>> off). On the other hand, we can now reuse memory.
>>>
>>> Heh, I have a talk about it. The limit is the golden cut,
>>> 1.6180339887498948482... The proof is fun. Anything larger prevents
>>> you from reusing previously used space. -- Andrei
>>
>> Andrei, could you link this talk? Thanks!
>
> Not public. -- Andrei

Have you done measurements on the matter? I'm not sold on the idea; to me, at this point, it is just a theoretical observation. There are also arguments suggesting it is less useful in practice. Do you have any numbers on how it affects e.g. memory usage?

Jens
January 25, 2017
On 01/25/2017 02:12 PM, Jens Mueller wrote:
> On Wednesday, 25 January 2017 at 14:18:15 UTC, Andrei Alexandrescu wrote:
>> On 01/25/2017 12:58 AM, TheGag96 wrote:
>>> On Monday, 23 January 2017 at 13:18:57 UTC, Andrei Alexandrescu wrote:
>>>> On 1/23/17 5:44 AM, Shachar Shemesh wrote:
>>>>> If, instead of increasing its size by 100%, we increase it by a
>>>>> smaller
>>>>> percentage of its previous size, we still maintain the amortized O(1)
>>>>> cost (with a multiplier that might be a little higher, but see the
>>>>> trade
>>>>> off). On the other hand, we can now reuse memory.
>>>>
>>>> Heh, I have a talk about it. The limit is the golden cut,
>>>> 1.6180339887498948482... The proof is fun. Anything larger prevents
>>>> you from reusing previously used space. -- Andrei
>>>
>>> Andrei, could you link this talk? Thanks!
>>
>> Not public. -- Andrei
>
> Have you done measurements on the matter?

Affirmative.

> Because I'm not sold on the
> idea.

Wasn't selling anything.

> To me at this point this is just a theoretical observation.

No.

> There
> are also arguments indicating it is less useful.

That is correct.

> Any numbers on how it
> affects e.g. memory usage?

Depends on the application. You'd do well to collect your own.


Andrei
