On 2 February 2012 21:15, dsimcha <dsimcha@yahoo.com> wrote:
On Thursday, 2 February 2012 at 18:06:24 UTC, Manu wrote:
On 2 February 2012 17:40, dsimcha <dsimcha@yahoo.com> wrote:

On Thursday, 2 February 2012 at 04:38:49 UTC, Robert Jacques wrote:

An XML parser would probably want some kind of stack segment growth
schedule, which, IIRC, isn't supported by RegionAllocator.


at least assuming we're targeting PCs and not embedded devices.


I don't know about the implications of your decision, but that comment makes
me feel uneasy.

I don't know how you can possibly make that assumption. Have you looked
around at the devices people actually use these days?
PCs are an endangered and dying species... I couldn't imagine a worse
assumption if it influences how D is applied across different systems.

I'm not saying that embedded isn't important. It's just that for low-level stuff like memory management it requires a completely different mindset. RegionAllocator is meant to be fast and simple at the expense of space efficiency. In embedded you'd probably want completely different tradeoffs; depending on how deeply embedded the target is, space efficiency might be the most important thing. I don't know exactly what tradeoffs you'd want, though, since I don't do embedded development. My guess is that you'd want something completely different, not RegionAllocator plus a few tweaks that would complicate it for PC use. Therefore, I designed RegionAllocator for PCs with no consideration for embedded environments.
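To make the tradeoff concrete: the core idea is just bumping an offset through an eagerly reserved segment, so an allocation is a couple of arithmetic ops and freeing the whole region is O(1). A rough sketch of the concept (not the actual RegionAllocator source, and ignoring the segment chaining and thread handling it really does; all names here are made up):

// Bump-the-pointer region sketch: allocation advances an offset into a
// pre-reserved segment, and freeing the whole region is a single reset.
struct RegionSketch
{
    private ubyte[] segment;  // reserved eagerly up front, PC-style
    private size_t offset;    // high-water mark of allocations

    this(size_t capacity) { segment = new ubyte[capacity]; }

    // Hands out size bytes, or null when the segment is exhausted.
    void[] allocate(size_t size)
    {
        immutable aligned = (size + 15) & ~cast(size_t) 15;  // 16-byte align
        if (offset + aligned > segment.length)
            return null;
        auto block = segment[offset .. offset + size];
        offset += aligned;
        return block;
    }

    // Frees everything allocated from this region in O(1).
    void freeAll() { offset = 0; }
}

The speed comes precisely from reserving that segment eagerly, and that's also where the space inefficiency comes from.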

Okay, this reasoning seems sound to me. I wonder, though: how feasible would it be, with the current design, to plug a different back end in for embedded systems? It should conserve memory as much as possible, but above all it needs to be fast.
In the short term, embedded platforms will surely have some leniency on memory size, but they demand high performance.
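To be concrete about what I mean by a back end, I'm picturing a seam roughly like this (purely hypothetical, I'm not claiming any such interface exists in Phobos today), where the front end only ever asks for bytes and a bulk release, and the desktop-vs-embedded differences live behind it:

// Hypothetical back-end seam; names are invented for illustration.
interface RegionBackend
{
    void[] allocate(size_t bytes);
    void releaseAll();
}

// Embedded-flavoured back end: one small fixed buffer, no growth,
// no per-allocation heap work.
class FixedBufferBackend : RegionBackend
{
    private ubyte[4096] storage;  // tiny, fixed footprint
    private size_t used;

    void[] allocate(size_t bytes)
    {
        if (used + bytes > storage.length)
            return null;          // caller must handle exhaustion
        auto block = storage[used .. used + bytes];
        used += bytes;
        return block;
    }

    void releaseAll() { used = 0; }
}

A desktop back end could keep reserving big segments eagerly for speed, while something like the above keeps the footprint fixed; the question is whether RegionAllocator's front end could sit on top of either without complicating the PC path.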

What sort of overheads are we talking about for these allocators? I wonder what the minimum footprint of a D app would be, i.e. exe size + the minimum memory allocation for the runtime, etc.?