Thread overview
Best data structure for a particle system?
Nov 15, 2013 - Mikko Ronkainen
Nov 15, 2013 - Damian
Nov 15, 2013 - Mikko Ronkainen
Nov 15, 2013 - Mike Parker
Nov 15, 2013 - Marco Leise
Nov 15, 2013 - bearophile
Nov 15, 2013 - Mikko Ronkainen
Nov 15, 2013 - bearophile
Nov 15, 2013 - Chris Cain
Nov 15, 2013 - bearophile
Nov 15, 2013 - Chris Cain
Nov 15, 2013 - Ivan Kazmenko
Nov 15, 2013 - qznc
November 15, 2013
What would be the best data structure for handling particles in a particle system in D2?

Here are some thoughts:

Particles are simple structs.
Two lists, one for alive particles, one for dead ones.
Memory allocations should be avoided, preallocate everything, no allocations when moving between lists.
Keep the alive list as short as possible for fast iteration -> move dead particles off during iteration.
Removal and addition of single items only, and it should be fast.

Maybe a singly-linked list, std.container.SList? Are there any gotchas? Or is there some better container for this scenario?
November 15, 2013
On Friday, 15 November 2013 at 11:52:44 UTC, Mikko Ronkainen wrote:
> What would be the best data structure for handling particles in a particle system in D2?
>
> Here are some thoughts:
>
> Particles are simple structs.
> Two lists, one for alive particles, one for dead ones.
> Memory allocations should be avoided, preallocate everything, no allocations when moving between lists.
> Keep the alive list as short as possible for fast iteration -> move dead particles off during iteration.
> Removal and addition of single items only, and it should be fast.
>
> Maybe a singly-linked list, std.container.SList? Are there any gotchas? Or is there some better container for this scenario?

Use just one list with a flag in the particle to see whether the particle is alive or dead; that saves swapping between lists, and you can use a simple array for fast access.
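
Something like this minimal sketch illustrates the idea (the Particle fields and the update function here are only assumptions for illustration, not from the thread):

```d
struct Particle {
    float x = 0, y = 0;    // position
    float vx = 0, vy = 0;  // velocity
    float life = 0;        // remaining lifetime in seconds
    bool alive = false;    // flag instead of a separate dead list
}

void update(Particle[] particles, float dt) {
    foreach (ref p; particles) {
        if (!p.alive) continue;  // dead slots stay in place and are skipped
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.life -= dt;
        if (p.life <= 0)
            p.alive = false;     // no list manipulation, just flip the flag
    }
}

void main() {
    auto particles = new Particle[10_000]; // one preallocated pool
    particles[0] = Particle(0, 0, 1, 1, 2.0f, true);
    update(particles, 0.016f);
}
```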
November 15, 2013
> Use just one list with a flag in the particle to see whether the
> particle is alive or dead, saves swapping between lists and you
> can use a simple array for fast access.

Yes, simplicity, that's a good idea :) I was just wondering how much time would be saved by iterating over only the active particles instead of everything every time (say, a system of 10000 particles). Maybe that's relevant, maybe not.

If it's relevant, would a doubly-linked list work better? dcollections' LinkList, or maybe DList? I'm asking mostly because I'd like the container to avoid memory allocations while in use.
November 15, 2013
On 11/15/2013 9:19 PM, Mikko Ronkainen wrote:
>
> If relevant, doubly-linked might work better? dcollections LinkList, or
> maybe DList? I'm asking mostly because I'd like the container to avoid
> memory allocations while in use.

For a particle system, I would avoid lists. A list of particles needs to be iterated every frame. A linked list is going to kill you as the particle count increases. Since the particles will most likely not be contiguous in memory, you'll be thrashing your cache to hell and back.

"Caches love linear access" is a quote to remember. When you need to do frequent iteration of a list, your cache will love you if all of the items are in a block of contiguous memory. So it's almost always better to use an array for this sort of thing. Make your particle object a struct and use a dynamic array of particles that you can grow as needed or, if it makes sense, a fixed size static array. The point is that arrays of structs will be lined up one next to the other in memory so that you can make good use of the cache. Of course, the size of your particle object also has a role to play here.

Google for "linked list cache thrashing" or "cache-friendly data structures" or some such for ideas.
November 15, 2013
On Friday, 15 November 2013 at 11:52:44 UTC, Mikko Ronkainen wrote:
> What would be the best data structure for handling particles in a particle system in D2?
>
> Here are some thoughts:
>
> Particles are simple structs.
> Two lists, one for alive particles, one for dead ones.
> Memory allocations should be avoided, preallocate everything, no allocations when moving between lists.
> Keep the alive list as short as possible for fast iteration -> move dead particles off during iteration.
> Removal and addition of single items only, and it should be fast.
>
> Maybe a singly-linked list, std.container.SList? Are there any gotchas? Or is there some better container for this scenario?

Linked lists are very cache unfriendly, so they should be avoided. Generally try to use arrays first, then try again using arrays, and if you fail then try to use something different.

I suggest using a dynamic array of particle structs that contain a boolean representing whether the particle is alive or dead. This is very simple to implement and use.

Once you have implemented and benchmarked that, if the code is *not fast enough* then you could try adding a pointer field (or a uint/ushort index) to each struct, pointing to the next active cell. But you should start using and updating such pointers only when the _estimated_ percentage of active cells is very low.

An alternative strategy to try is to compact the array of particle structs (copying only the active particles forward as you scan the array during the update). This is especially good if you don't need to keep their order stable and if the structs are small (like a size_t).
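
A minimal sketch of that compaction strategy (the field names and the updateAndCompact helper are only illustrative):

```d
struct Particle { float x = 0, y = 0, vx = 0, vy = 0, life = 0; }

// Update every particle and copy the survivors toward the front of the
// same array; the returned slice covers only the live particles.
Particle[] updateAndCompact(Particle[] particles, float dt) {
    size_t alive = 0;
    foreach (i; 0 .. particles.length) {
        auto p = particles[i];
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.life -= dt;
        if (p.life > 0)
            particles[alive++] = p;  // keep this one
    }
    return particles[0 .. alive];    // no allocation, just a shorter slice
}
```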

Bye,
bearophile
November 15, 2013
Ok, thanks! That linked-list cache thrashing was just the thing I knew that I didn't know :)

Let's say I just use a dynamic array and grow it when adding new particles to the system, and when particles die, just set their dead flag on. This means that, when adding a new particle, I need to scan the array first for possible dead particles that can be reused. Is there some trick that could be used here to reduce excessive searching/iteration?
November 15, 2013
Mikko Ronkainen:

> Let's say I just use dynamic array and grow it when adding new particles to the system, and when particles die, just set their dead flag on. This means that, when adding a new particle, I need to scan the array first for possible dead particles that can be reused. Is there some trick that could be used here to reduce excessive searching/iteration?

Generally avoid resizing the array. Just leave some spare dead ones in the array to reuse.

Bye,
bearophile
November 15, 2013
On Friday, 15 November 2013 at 13:32:38 UTC, Mikko Ronkainen wrote:
> Ok, thanks! That linked list cache thrashing was just the thing I knew that I don't know :)
>
> Let's say I just use dynamic array and grow it when adding new particles to the system, and when particles die, just set their dead flag on. This means that, when adding a new particle, I need to scan the array first for possible dead particles that can be reused. Is there some trick that could be used here to reduce excessive searching/iteration?

Instead of having a "dead flag", you could swap ( http://dlang.org/phobos/std_algorithm.html#swap ) the dying particle with the last particle in the list and then decrement the list's length.

Two assumptions: 1) you don't have pointers to particles anywhere; 2) when a particle "dies" it knows about the list and its position in the list.

By default (using the default GC and everything), D does not reallocate a dynamic array every time you change the length (even increasing it), so this will still be okay with allocations.
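
A small sketch of that swap-and-shrink removal (the Particle layout and the kill helper are assumed for illustration):

```d
import std.algorithm : swap;

struct Particle { float x = 0, y = 0, life = 0; }

// O(1) unordered removal: overwrite the dying particle with the last one
// and drop the last slot. No allocation; the order of particles changes.
void kill(ref Particle[] particles, size_t i) {
    swap(particles[i], particles[$ - 1]);
    particles.length -= 1;  // shrinking in place, no reallocation
}
```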
November 15, 2013
Chris Cain:

> Instead of having a "dead flag", you could swap ( http://dlang.org/phobos/std_algorithm.html#swap ) the dying particle with the last particle in the list and then decrement the list's length.

If the array is long you are accessing a cold part of it to swap with the end.


> By default (using the default GC and everything), D does not reallocate a dynamic array every time you change the length (even increasing it), so this will still be okay with allocations.

The situation is a little more complex: there is a capacity field that I think is kept in a cold place of the array; it's also cached, but only if you append to just one array, etc.
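
For example, a tiny illustrative sketch of how capacity and appending interact after shrinking a slice (using the built-in capacity and assumeSafeAppend from object.d):

```d
import std.stdio : writeln;

void main() {
    int[] a;
    a.reserve(100);         // ask the runtime for room up front
    a ~= [1, 2, 3, 4];
    writeln(a.capacity);    // at least 100 while `a` ends at the block's used end

    a = a[0 .. 2];          // "remove" the tail by reslicing
    writeln(a.capacity);    // now 0: appending here would reallocate...
    a.assumeSafeAppend();   // ...unless we promise nothing else uses the tail
    writeln(a.capacity);    // capacity is back, appends reuse the block
    a ~= 5;                 // no reallocation
}
```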

Bye,
bearophile
November 15, 2013
On Friday, 15 November 2013 at 14:08:20 UTC, bearophile wrote:
> The situation is a little more complex, there is a capacity field that I think is kept in a cold place of the array, it's also cached, but only if you append to just one array, etc.

An alternative might be to hold an array and manage a slice of that array. "Appending" would just be reslicing the array.
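
A rough sketch of that backing-array-plus-slice idea (the ParticlePool type and its methods are invented here for illustration):

```d
struct Particle { float x = 0, y = 0, vx = 0, vy = 0, life = 0; }

struct ParticlePool {
    Particle[] storage;  // allocated once, never resized afterwards
    Particle[] active;   // slice of storage covering the live particles

    this(size_t capacity) {
        storage = new Particle[capacity];
        active = storage[0 .. 0];
    }

    // "Appending" is just widening the slice over the preallocated storage.
    bool spawn(Particle p) {
        if (active.length == storage.length)
            return false;                        // pool is full, drop the particle
        storage[active.length] = p;
        active = storage[0 .. active.length + 1];
        return true;
    }

    // Swap-remove keeps the live particles packed at the front.
    void kill(size_t i) {
        active[i] = active[$ - 1];
        active = active[0 .. $ - 1];
    }
}
```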

> If the array is long you are accessing a cold part of it to swap with the end.

Sure. But on a long array, iterating over the entire array looking for a dead particle to recycle takes time as well. Not to mention that removing the dead particle by swapping it with the end would likely keep the overall array smaller and, thus, more likely to fit completely in the cache. There's a lot to be said for keeping memory usage as compact as possible.

Is there any empirical data to suggest either approach is better? There are factors that could suggest either one might be faster depending on the situation.