Easy & huge GC optimizations
May 22, 2014
I was thinking of how the GC could be optimized further and came across some sweet flags that are barely used throughout Phobos, in favor of a witch-hunt against the GC:

GC.BlkAttr.NO_SCAN
GC.BlkAttr.NO_INTERIOR

When using the NO_SCAN attribute with GC.setAttr(p, NO_SCAN), you're basically doing removeRange on a GC allocation: the GC will never scan the memory inside it, but the allocation stays alive as long as pointers are found pointing to anything inside it. This is very useful for strings! But I can't even find an example of a string with this flag. I'm totally baffled.

When using the NO_INTERIOR attribute, you're telling the GC that nothing will point into the middle of the allocation if it's bigger than 4096 bytes, so it can completely ignore interior pointers to it.

With these two attributes, one could write a simple recursive function in Phobos that adjusts the flags on an object's allocations based on its type info.

Tuple!(int, int, int, string)[] bigTupleArray;
bigTupleArray.optimizeGC(); // sets NO_SCAN on ints, and on the pointee of string
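
Here's a minimal sketch of what such a helper might look like (optimizeGC is hypothetical, nothing like it exists in Phobos today; a plain struct stands in for the tuple, and the sketch assumes the string payloads are GC-allocated, since GC.setAttr is a no-op on non-GC memory):

import core.memory : GC;

struct Rec { int a, b, c; string s; } // stand-in for Tuple!(int, int, int, string)

// Hypothetical helper: mark GC-allocated string payloads NO_SCAN,
// since character data can never contain pointers.
void optimizeGC(T)(T[] arr)
{
    foreach (ref elem; arr)
    {
        foreach (ref field; elem.tupleof)
        {
            static if (is(typeof(field) == string))
            {
                if (field.ptr !is null)
                    GC.setAttr(cast(void*) field.ptr, GC.BlkAttr.NO_SCAN);
            }
            // int fields live inside the array block itself; nothing to do.
        }
    }
}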

Thoughts?
May 22, 2014

On 22.05.2014 18:42, Etienne wrote:
> I was thinking of how the GC could be optimized further and came across
> some sweet flags that are barely used throughout Phobos, in favor of a
> witch-hunt against the GC:
>
> GC.BlkAttr.NO_SCAN
> GC.BlkAttr.NO_INTERIOR
>
> [...]
>
> Thoughts?

rt.lifetime makes heavy use of "NO_SCAN" for all array handling (including strings). It uses the compiler-generated flag inside the TypeInfo. Classes also deal with this flag (see _d_newclass).
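
For illustration, the decision in rt.lifetime boils down to something like this (a simplified sketch, not the actual druntime code):

import core.memory : GC;

// TypeInfo.flags bit 0 set means "this type contains pointers",
// so arrays of pointer-free elements get NO_SCAN automatically.
void* allocArrayPayload(const TypeInfo elemTi, size_t size)
{
    uint attrs = (elemTi.flags & 1) ? 0 : GC.BlkAttr.NO_SCAN;
    return GC.malloc(size, attrs);
}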

I'm not sure where allocating a struct ends up, so maybe there is some potential there.

"NO_INTERIOR" is currently only used for the hash array used by associative arrays. It is a bit dangerous to use as any pointer,slice or register still operating on the array is ignored, so collecting it might corrupt your memory.
May 22, 2014
On Thu, 22 May 2014 14:12:38 -0400, Rainer Schuetze <r.sagitario@gmx.de> wrote:

>
> I'm not sure where allocating a struct ends up, so maybe there is some potential there.

It's done similarly to arrays. The function is here: https://github.com/D-Programming-Language/druntime/blob/master/src/rt/lifetime.d#L991

-Steve
May 22, 2014
On 2014-05-22 2:12 PM, Rainer Schuetze wrote:
>
> "NO_INTERIOR" is currently only used for the hash array used by
> associative arrays. It is a bit dangerous to use as any pointer,slice or
> register still operating on the array is ignored, so collecting it might
> corrupt your memory.

That's quite a relief; I was afraid of having to do it ;)

I'm currently exploring the possibility of sampling the pointers during marking to check if they're gone, and using Bayesian probabilities to decide whether or not to skip the pool.

I explained it all here: https://github.com/D-Programming-Language/druntime/pull/797#issuecomment-43896016

-- paste --
Basically, when marking, you take 1 in X of the references and send them to a specific array that represents the pool they refer to. Then, the next time you're going to collect, you test them individually, and if they're mostly there, you skip marking/freeing for that particular pool during collection. You can force collection on certain pools every 1 in X collections to even out the average lifetime of the references.

You're going to want a lower certainty of failure for big allocations, but basically you're using probabilities to avoid putting a lot of useless load on the processor, especially when you're in a part of an application that's just allocating a lot (sampling will determine that the software is not in a state of data removal).

http://en.wikipedia.org/wiki/Bayes_factor

-- end paste --

The Bayes factor is merely there to choose the appropriate model that fits the program. Bayesian inference would take care of deciding if a pool should end up being marked. In other words, machine learning.
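
A rough sketch of the bookkeeping I have in mind (all names are hypothetical, none of this is existing druntime API):

// Per-pool record of sampled references.
struct PoolSample
{
    void*[] samples;  // 1-in-X references seen pointing into this pool
    size_t skipped;   // collections skipped so far
}

// Decide whether marking can be skipped for this pool, forcing a full
// mark every `forceEvery` collections so stale samples get refreshed.
bool shouldSkipPool(ref PoolSample s, size_t forceEvery,
                    bool delegate(void*) stillReferenced)
{
    if (s.skipped >= forceEvery || s.samples.length == 0)
    {
        s.skipped = 0;
        return false;  // do the full mark this time
    }
    size_t alive;
    foreach (p; s.samples)
        if (stillReferenced(p))
            ++alive;
    // Skip only if the samples are "mostly there", e.g. at least 90% alive.
    if (alive * 10 >= s.samples.length * 9)
    {
        ++s.skipped;
        return true;
    }
    return false;
}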

Would you think it'd be a good optimization opportunity?
May 23, 2014

On 22.05.2014 21:04, Etienne wrote:
> [...]
>
> I'm currently exploring the possibility of sampling the pointers during
> marking to check if they're gone, and using Bayesian probabilities to
> decide whether or not to skip the pool.
>
> [...]
>
> Would you think it'd be a good optimization opportunity?

Hmm, I guess I don't get the idea. You cannot skip a pool based on some statistics; you might have references in there to anything. As a result, you cannot collect anything.

May 23, 2014
On Friday, 23 May 2014 at 06:17:43 UTC, Rainer Schuetze wrote:
> [...]
>
> Hmm, I guess I don't get the idea. You cannot skip a pool based on some statistics; you might have references in there to anything. As a result, you cannot collect anything.

I'm not a fan of machine learning, especially not in cases you _can_ control, like memory allocation/deallocation. Guessing is not a good strategy if you can have control over something. Machine learning is only good for vast and unpredictable data (voice recognition, for example); then it makes sense to apply probability. But if you can actually control what you are doing, why would you want to rely on a stupid and blind machine that decides things for you based on a probability of n%? Things can go wrong and you don't even know why. Mind you, we should rule the machines, not the other way around.
May 23, 2014
On Friday, 23 May 2014 at 13:43:53 UTC, Chris wrote:
> [...]
>
> I'm not a fan of machine learning, especially not in cases you _can_ control, like memory allocation/deallocation. Guessing is not a good strategy if you can have control over something. Machine learning is only good for vast and unpredictable data (voice recognition, for example); then it makes sense to apply probability. But if you can actually control what you are doing, why would you want to rely on a stupid and blind machine that decides things for you based on a probability of n%? Things can go wrong and you don't even know why. Mind you, we should rule the machines, not the other way around.

Bear in mind here that most code goes through a whole bunch of machine-learning-like algorithms in the CPU itself (branch prediction, for example). Like it or not, it has proved extremely successful.
May 23, 2014
On 2014-05-23 2:17 AM, Rainer Schuetze wrote:
> [...]
>
> Hmm, I guess I don't get the idea. You cannot skip a pool based on some
> statistics; you might have references in there to anything. As a result,
> you cannot collect anything.
>

It only skips the inner search of the pool (like marking it NO_SCAN) if a sample of the pointers that pointed into it are still alive.

I mean, why would you want to check the pointers and mark every page in a memory zone when you know they're probably all there anyway? The idea is that you could manage to avoid collection altogether during periods of high allocation.

There's no other way to guess it, and specifying GC.disable() before making allocations is a little too verbose to consider it an optimization of the GC.
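
For comparison, this is roughly what that verbose manual alternative looks like with the existing core.memory API:

import core.memory : GC;

void allocationBurst()
{
    GC.disable();              // suspend automatic collections during the hot phase
    scope (exit) GC.enable();  // restore normal collection afterwards

    foreach (i; 0 .. 1_000_000)
    {
        auto tmp = new int[16];  // burst of small GC allocations
    }
}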
May 23, 2014
On 2014-05-23 11:41 AM, John Colvin wrote:
> Bear in mind here that most code goes though a whole bunch of machine
> learning algorithms in the CPU itself. Like it or not, it has proved
> extremely successful.

I'm happy you're here to say this. Machine learning is the future of algorithms & heuristics, and it has to work its way up from the low levels; otherwise there's no real advantage to doing it only at a high level, e.g. for voice recognition.
May 23, 2014
On 2014-05-23 12:33 PM, Etienne wrote:
> It only skips the inner search of the pool, like marking it NO_SCAN if a
> sample of the pointers that pointed to it are still alive.

Sorry, that's not true.

It's like marking it NO_INTERIOR while keeping it SCAN. By default, all the pages would be marked if the sample pointers into the pool are still alive.

And so the objective is to be able to skip collections. How many collections are executed only to recover 1-2% of the memory?