May 07, 2015
On 5/7/15 11:06 AM, Vladimir Panteleev wrote:
> On Thursday, 7 May 2015 at 17:57:24 UTC, Andrei Alexandrescu wrote:
>> On 5/6/15 11:00 PM, Vladimir Panteleev wrote:
>>> On Thursday, 7 May 2015 at 02:28:45 UTC, Andrei Alexandrescu wrote:
>>>> http://erdani.com/d/phobos-prerelease/std_experimental_allocator_porcelain.html
>>>>
>>>>
>>>>
>>>> Andrei
>>>
>>> Now that https://issues.dlang.org/show_bug.cgi?id=8269 was fixed, how
>>> about that idea of using with(scopeAllocator(...)) { /* use theAllocator
>>> */ } ?
>>>
>>> I.e. encapsulating
>>>
>>> auto oldAllocator = theAllocator;
>>> scope(exit) theAllocator = oldAllocator;
>>> theAllocator = allocatorObject(...);
>>>
>>> into a nice RAII type and then using it with WithStatement.
>>
>> Sadly that won't be possible with the current design; all higher-level
>> functions are not methods and instead rely on UFCS. -- Andrei
>
> Not what I meant. This is your idea:
>
> http://forum.dlang.org/post/l4ccb4$25ul$1@digitalmars.com

Oh I see. That will be operational once we get the built-in allocating expressions (new, array literals, delegates...) to use theAllocator. Cool, thanks, -- Andrei
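The RAII idea quoted above might look like the following sketch. It assumes the `theAllocator`, `IAllocator`, and `allocatorObject` names from the module under review; the `scopeAllocator` helper itself is hypothetical, not part of the proposal:

```d
import std.experimental.allocator;

// Hypothetical RAII helper: swaps theAllocator for the duration of a
// WithStatement and restores the previous one on destruction.
struct ScopedAllocator
{
    private IAllocator saved;

    this(IAllocator a)
    {
        saved = theAllocator;
        theAllocator = a;
    }

    ~this()
    {
        theAllocator = saved; // restore on scope exit
    }

    @disable this(this); // no copies: exactly one restore per scope
}

auto scopeAllocator(IAllocator a)
{
    return ScopedAllocator(a);
}

unittest
{
    import std.experimental.allocator.mallocator : Mallocator;

    with (scopeAllocator(allocatorObject(Mallocator.instance)))
    {
        // Code here sees the malloc-backed theAllocator...
        auto a = theAllocator.makeArray!int(10);
        theAllocator.dispose(a);
    }
    // ...and the previous theAllocator is restored here.
}
```

The fix for issue 8269 matters here because the `with` statement must keep the `ScopedAllocator` temporary alive for the whole body and only then run its destructor.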
May 07, 2015
On 5/7/15 11:13 AM, Ali Çehreli wrote:
> On 05/07/2015 02:18 AM, Brian Schott wrote:
>> On Thursday, 7 May 2015 at 02:28:45 UTC, Andrei Alexandrescu wrote:
>>> http://erdani.com/d/phobos-prerelease/std_experimental_allocator_porcelain.html
>>>
>>>
>>>
>>> Andrei
>>
>> *Reads module name* "...toilets? Oh. Wait.
>
> I thought dishes and tea cups. :)
>
>  > This is allocator stuff."
>
> If it is related to allocators, I am not familiar with that term. Could
> someone please explain why "porcelain"?
>
> Ali

https://git-scm.com/book/tr/v2/Git-Internals-Plumbing-and-Porcelain

Made perfect sense the second I first saw it. -- Andrei

May 08, 2015
On Thursday, 7 May 2015 at 18:25:39 UTC, Andrei Alexandrescu wrote:
> Oh I see. That will be operational once we get the built-in allocating expressions (new, array literals, delegates...) to use theAllocator. Cool, thanks, -- Andrei

I'm not sure how desirable this is. This requires a round trip to TLS plus a virtual function call. That can be expensive, but even worse, it will make the optimizer blind.
May 08, 2015
On Friday, 8 May 2015 at 19:04:20 UTC, deadalnix wrote:
> On Thursday, 7 May 2015 at 18:25:39 UTC, Andrei Alexandrescu wrote:
>> Oh I see. That will be operational once we get the built-in allocating expressions (new, array literals, delegates...) to use theAllocator. Cool, thanks, -- Andrei
>
> I'm not sure how desirable this is. This requires a round trip to TLS plus a virtual function call. That can be expensive, but even worse, it will make the optimizer blind.

It will still be no worse than the current situation (GC invocation). Performance-sensitive algorithms can use an allocator (which won't be wrapped in a class) that in turn allocates memory in bulk from theAllocator. This pattern will allow you to discard all scratch memory at once when you're done with it.
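The pattern described above can be sketched with the `Region` building block from the same package (the names and constructor shape are as in the experimental module at the time, and are an assumption here):

```d
import std.experimental.allocator;
import std.experimental.allocator.building_blocks.region : Region;

void algorithm()
{
    // One bulk allocation up front through theAllocator; individual
    // allocations inside the algorithm are then cheap pointer bumps,
    // with no TLS lookup or virtual call per allocation.
    auto store = theAllocator.allocate(1024 * 1024);
    scope(exit) theAllocator.deallocate(store);

    auto scratch = Region!()(cast(ubyte[]) store);

    auto tmp = scratch.makeArray!double(1000);
    // ... use tmp as scratch space ...

    // No per-allocation frees needed: the whole region is discarded
    // at once when `store` is returned to theAllocator.
}
```

The TLS round trip and the virtual call are paid once per bulk chunk instead of once per allocation.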
May 08, 2015
On Friday, 8 May 2015 at 19:13:21 UTC, Vladimir Panteleev wrote:
> On Friday, 8 May 2015 at 19:04:20 UTC, deadalnix wrote:
>> On Thursday, 7 May 2015 at 18:25:39 UTC, Andrei Alexandrescu wrote:
>>> Oh I see. That will be operational once we get the built-in allocating expressions (new, array literals, delegates...) to use theAllocator. Cool, thanks, -- Andrei
>>
>> I'm not sure how desirable this is. This requires a round trip to TLS plus a virtual function call. That can be expensive, but even worse, it will make the optimizer blind.
>
> It will still be no worse than the current situation (GC invocation). Performance-sensitive algorithms can use an allocator (which won't be wrapped in a class) that in turn allocates memory in bulk from theAllocator. This pattern will allow you to discard all scratch memory at once when you're done with it.

It IS worse. The current GC does not do a round trip to TLS (which IS slow, especially when dynamic linking is involved), and the optimizer can understand the API and optimize based on it (LDC does this to some extent).
May 08, 2015
On 5/8/15 12:04 PM, deadalnix wrote:
> On Thursday, 7 May 2015 at 18:25:39 UTC, Andrei Alexandrescu wrote:
>> Oh I see. That will be operational once we get the built-in allocating
>> expressions (new, array literals, delegates...) to use theAllocator.
>> Cool, thanks, -- Andrei
>
> I'm not sure how desirable this is. This requires a round trip to TLS
> plus a virtual function call. That can be expensive, but even worse, it
> will make the optimizer blind.

Yah the virtual barrier is a necessary evil. For ultimate performance you'd need a local allocator object fronting the built-in one. -- Andrei
May 08, 2015
On 5/8/15 12:13 PM, Vladimir Panteleev wrote:
> On Friday, 8 May 2015 at 19:04:20 UTC, deadalnix wrote:
>> On Thursday, 7 May 2015 at 18:25:39 UTC, Andrei Alexandrescu wrote:
>>> Oh I see. That will be operational once we get the built-in
>>> allocating expressions (new, array literals, delegates...) to use
>>> theAllocator. Cool, thanks, -- Andrei
>>
>> I'm not sure how desirable this is. This requires a round trip to TLS
>> plus a virtual function call. That can be expensive, but even worse,
>> it will make the optimizer blind.
>
> It will still be no worse than the current situation (GC invocation).
> Performance-sensitive algorithms can use an allocator (which won't be
> wrapped in a class) that in turn allocates memory in bulk from
> theAllocator. This pattern will allow you to discard all scratch memory
> at once when you're done with it.

That's right. That reminds me I need to implement a sort of pool allocator that remembers all allocations made through it and releases them all in the destructor.

What's a good name for that? I thought MarkSweepAllocator is it, but that term is more often used in conjunction with garbage collection.


Andrei
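A minimal sketch of such an allocator (hypothetical, not a proposed Phobos design): each allocation is prefixed with a link-and-size header so the destructor can walk the list and hand everything back to the parent allocator:

```d
// Sketch: remembers every allocation made through it and frees them
// all in the destructor. The name is a placeholder, per the question.
struct FreeAllInDestructor(ParentAllocator)
{
    ParentAllocator parent;

    private struct Header { Header* next; size_t size; }
    private Header* head;

    enum uint alignment = ParentAllocator.alignment;

    void[] allocate(size_t n)
    {
        auto raw = parent.allocate(Header.sizeof + n);
        if (raw is null) return null;
        auto h = cast(Header*) raw.ptr;
        h.next = head;       // push onto the intrusive list
        h.size = raw.length; // remember the full block size
        head = h;
        return raw[Header.sizeof .. $]; // caller sees only the payload
    }

    ~this()
    {
        // Release every allocation that went through this allocator.
        while (head !is null)
        {
            auto next = head.next;
            parent.deallocate((cast(void*) head)[0 .. head.size]);
            head = next;
        }
    }
}
```

Compared to a plain region, this keeps the parent's blocks individually deallocatable, at the cost of one header per allocation.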
May 08, 2015
On Friday, 8 May 2015 at 19:34:13 UTC, deadalnix wrote:
> On Friday, 8 May 2015 at 19:13:21 UTC, Vladimir Panteleev wrote:
>> On Friday, 8 May 2015 at 19:04:20 UTC, deadalnix wrote:
>>> On Thursday, 7 May 2015 at 18:25:39 UTC, Andrei Alexandrescu wrote:
>>>> Oh I see. That will be operational once we get the built-in allocating expressions (new, array literals, delegates...) to use theAllocator. Cool, thanks, -- Andrei
>>>
>>> I'm not sure how desirable this is. This requires a round trip to TLS plus a virtual function call. That can be expensive, but even worse, it will make the optimizer blind.
>>
>> It will still be no worse than the current situation (GC invocation). Performance-sensitive algorithms can use an allocator (which won't be wrapped in a class) that in turn allocates memory in bulk from theAllocator. This pattern will allow you to discard all scratch memory at once when you're done with it.
>
> It IS worse. The current GC does not do a round trip to TLS (which IS slow, especially when dynamic linking is involved), and the optimizer can understand the API and optimize based on it (LDC does this to some extent).

I don't know enough about TLS to argue, but it strikes me as odd that it would be slower than the layers of un-inlinable extern(C) calls going through lifetime.d, gc.d, and gcx.d, locking a global mutex there, and allocating memory according to a general-purpose GC (vs. a specialized allocator).
May 08, 2015
On 5/8/15 12:34 PM, deadalnix wrote:
> On Friday, 8 May 2015 at 19:13:21 UTC, Vladimir Panteleev wrote:
>> On Friday, 8 May 2015 at 19:04:20 UTC, deadalnix wrote:
>>> On Thursday, 7 May 2015 at 18:25:39 UTC, Andrei Alexandrescu wrote:
>>>> Oh I see. That will be operational once we get the built-in
>>>> allocating expressions (new, array literals, delegates...) to use
>>>> theAllocator. Cool, thanks, -- Andrei
>>>
>>> I'm not sure how desirable this is. This requires a round trip to
>>> TLS plus a virtual function call. That can be expensive, but even
>>> worse, it will make the optimizer blind.
>>
>> It will still be no worse than the current situation (GC invocation).
>> Performance-sensitive algorithms can use an allocator (which won't be
>> wrapped in a class) that in turn allocates memory in bulk from
>> theAllocator. This pattern will allow you to discard all scratch
>> memory at once when you're done with it.
>
> It IS worse. The current GC does not do a round trip to TLS (which IS
> slow, especially when dynamic linking is involved), and the optimizer
> can understand the API and optimize based on it (LDC does this to some
> extent).

Well you either do TLS or do some interlocking. Frying pan vs. fire. Lake vs. well. Before this devolves into yet another Epic Debate, a few measurements would be in order. -- Andrei
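A rough way to get those measurements, sketched with `std.datetime`'s `benchmark` (the API as of 2015; results will vary heavily with platform, allocator configuration, and inlining):

```d
import std.datetime : benchmark;
import std.experimental.allocator;
import std.stdio : writefln;

void viaGC()
{
    auto p = new ubyte[](32); // built-in GC path
}

void viaTheAllocator()
{
    auto p = theAllocator.allocate(32); // TLS + virtual call path
    theAllocator.deallocate(p);
}

void main()
{
    enum n = 1_000_000;
    auto results = benchmark!(viaGC, viaTheAllocator)(n);
    writefln("GC:           %s", results[0]);
    writefln("theAllocator: %s", results[1]);
}
```

Note the comparison is not quite apples to apples: the GC version never frees, while the theAllocator version pays for an explicit deallocate.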
May 08, 2015
On Friday, 8 May 2015 at 19:53:16 UTC, Andrei Alexandrescu wrote:
> What's a good name for that? I thought MarkSweepAllocator is it, but that term is more often used in conjunction with garbage collection.

Pascal calls these functions "Mark" and "Release" :)

I named mine TrackingAllocator, but that's probably too ambiguous.