November 15, 2013
On Friday, 15 November 2013 at 14:01:36 UTC, Chris Cain wrote:
> By default (using the default GC and everything), D does not reallocate a dynamic array every time you change the length (even increasing it), so this will still be okay with allocations.

Not exactly so.  If you decrease the length, the capacity is reported as 0.  If you then try to increase it again, the array will be reallocated unless you call assumeSafeAppend on it first.  Alternatively, you can allocate a large enough array once and create a wrapper struct that stores the currently used length.
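To illustrate (a minimal sketch; the exact capacity numbers depend on the GC's block sizes, so only the zero/non-zero distinction is asserted here):

```d
void main()
{
    int[] a = new int[8];     // one allocation up front
    a.length = 4;             // shrink: this slice no longer ends at the
                              // block's high-water mark...
    assert(a.capacity == 0);  // ...so appending now would reallocate

    a.assumeSafeAppend();     // promise: no other slice uses the tail
    assert(a.capacity >= 8);  // appends reuse the existing block again
    a ~= 42;                  // no reallocation
    assert(a.length == 5 && a[4] == 42);
}
```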

See the explanation of how array slices work in D:
http://dlang.org/d-array-article.html

Or a thread I started when learning to use D arrays as stacks and queues without wrappers:
http://forum.dlang.org/thread/yrxspdrpusrrijmfyldc@forum.dlang.org?page=1

Ivan Kazmenko.
November 15, 2013
Am Fri, 15 Nov 2013 21:56:15 +0900
schrieb Mike Parker <aldacron@gmail.com>:

> On 11/15/2013 9:19 PM, Mikko Ronkainen wrote:
> >
> > If relevant, doubly-linked might work better? dcollections LinkList, or maybe DList? I'm asking mostly because I'd like the container to avoid memory allocations while in use.
> 
> For a particle system, I would avoid lists. A list of particles needs to be iterated every frame. A linked list is going to kill you as the particle count increases. Since the particles will most likely not be contiguous in memory, you'll be thrashing your cache to hell and back.
> 
> "Caches love linear access" is a quote to remember. When you need to do frequent iteration of a list, your cache will love you if all of the items are in a block of contiguous memory. So it's almost always better to use an array for this sort of thing. Make your particle object a struct and use a dynamic array of particles that you can grow as needed or, if it makes sense, a fixed size static array. The point is that arrays of structs will be lined up one next to the other in memory so that you can make good use of the cache. Of course, the size of your particle object also has a role to play here.
> 
> Google for "linked list cache thrashing" or "cache-friendly data structures" or some such for ideas.

It is true that these days you optimize for cache locality
more than for anything else.
How about this:
- use an array
- keep alive particles at the front and dead particles at the
  back
- when an alive particle dies, you swap it with the last alive
  particle in the array and mark it as dead
This way you always have a compact array from 0..aliveCount
and don't need if-else checks for dead ones (which would
otherwise be another slowdown factor).
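A minimal sketch of this scheme in D (the ParticleSystem/Particle names and fields are illustrative, not from the original posts):

```d
import std.algorithm : swap;

struct Particle
{
    float x, y;    // position
    float vx, vy;  // velocity
    float life;    // remaining lifetime in seconds
}

struct ParticleSystem
{
    Particle[] particles; // allocated once, up front
    size_t aliveCount;    // live particles occupy [0 .. aliveCount)

    bool spawn(Particle p)
    {
        if (aliveCount == particles.length)
            return false;          // pool full, no reallocation
        particles[aliveCount++] = p;
        return true;
    }

    void update(float dt)
    {
        size_t i = 0;
        while (i < aliveCount)
        {
            particles[i].life -= dt;
            if (particles[i].life <= 0)
            {
                // Swap the dead particle with the last live one and
                // shrink the live range. Don't advance i: slot i now
                // holds a different particle that still needs updating.
                swap(particles[i], particles[aliveCount - 1]);
                --aliveCount;
            }
            else
            {
                particles[i].x += particles[i].vx * dt;
                particles[i].y += particles[i].vy * dt;
                ++i;
            }
        }
    }
}
```

The update loop touches only the compact live range, so there is no per-particle dead flag to branch on.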

-- 
Marco

November 15, 2013
On Friday, 15 November 2013 at 14:01:36 UTC, Chris Cain wrote:
> On Friday, 15 November 2013 at 13:32:38 UTC, Mikko Ronkainen wrote:
>> Ok, thanks! That linked list cache thrashing was just the thing I knew that I don't know :)
>>
>> Let's say I just use dynamic array and grow it when adding new particles to the system, and when particles die, just set their dead flag on. This means that, when adding a new particle, I need to scan the array first for possible dead particles that can be reused. Is there some trick that could be used here to reduce excessive searching/iteration?
>
> Instead of having a "dead flag", you could swap ( http://dlang.org/phobos/std_algorithm.html#swap ) the dying particle with the last particle in the list and then decrement the list's length.

This swapping might even speed up the normal particle processing, because with a mix of dead and alive particles the flag checking could confuse the branch predictor.

http://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-an-unsorted-array

Ultimately, whether flag or swap is faster must be measured. Profiling and benchmarking are your friends if you want to optimize your particle system. For a start, use an array and either method.