May 20, 2004
Kevin Bealer wrote:
>>>Regarding fragmentation: does the D collector actually copy collect, or does it
>>>need to be conservative?  I would suspect that using C pointers and copy
>>>collection are two great tastes that don't taste great together.
>>
>>Wrong type of fragmentation. :-)
>>
>>This happens when the allocator doesn't have specifically sized arenas for certain sized allocations (usually powers of 2, n > 4). If the allocator just tries to find the best fit, breaking up previously allocated blocks without coalescing adjacent free space, you end up with a very fragmented memory map.
> 
> 
> I was thinking "locality".  Okay, this makes sense.  Yeah, you can use a large
> program allocator that uses a large number of free lists and chops up blocks
> (completely unsuitable for small, memory tight, programs), or you can use
> small-piece allocators and tolerate fragmentation.  The ugliest way is to tweak
> the algorithm to use the same sized object (by making buffers a certain size).

You're thinking of customized slab allocators or separate heaps for a particular object type or for objects of the same size. Once you've allocated from the custom heap, you return objects to it and keep track of the free ones.

Yup, modern mallocs (FreeBSD's libc version and Lea's, commonly found in Linux's glibc and usable under Win32) do this already; that's why they have arenas. The allocators you're thinking of, the ones with really nasty allocation algorithms, are the old Microsoft C compiler's malloc and the old Kingsley malloc in past BSDs.
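To make the slab/free-list idea concrete, here is a minimal sketch of a fixed-size pool: one heap per object size, with freed objects threaded onto an intrusive free list. The names (`pool_init`, `pool_alloc`, `pool_free`) and the 32-byte slot size are invented for illustration, not taken from any real allocator.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

typedef union slot {
    union slot *next;     /* valid only while the slot is free */
    char payload[32];     /* object storage; the pool's fixed size */
} slot_t;

typedef struct {
    slot_t *slots;
    slot_t *free_list;
    size_t  count;
} pool_t;

static int pool_init(pool_t *p, size_t count)
{
    p->slots = malloc(count * sizeof(slot_t));
    if (!p->slots) return -1;
    p->count = count;
    p->free_list = NULL;
    for (size_t i = 0; i < count; i++) {   /* thread every slot onto the free list */
        p->slots[i].next = p->free_list;
        p->free_list = &p->slots[i];
    }
    return 0;
}

static void *pool_alloc(pool_t *p)
{
    if (!p->free_list) return NULL;        /* pool exhausted */
    slot_t *s = p->free_list;
    p->free_list = s->next;
    return s->payload;
}

static void pool_free(pool_t *p, void *obj)
{
    slot_t *s = (slot_t *)obj;             /* payload sits at offset 0 of the union */
    s->next = p->free_list;
    p->free_list = s;
}
```

Because every slot is the same size, allocation and free are O(1) pointer swaps and the pool can never fragment internally; that's the whole appeal of per-size arenas.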

Basically, there's really not much that's exciting about memory allocation. Berger's paper is probably the most exciting research I've seen in 15 years, and it solves an interesting problem for when you really need custom allocation or want a region of related objects allocated, initialized, freed, and destroyed as a unit. The current GP allocators do a really good job, and tweaking won't actually buy you much extra performance.
May 20, 2004
In article <c8hb2a$1o63$1@digitaldaemon.com>, -scooter- says...
>
>Kevin Bealer wrote:
>>>>Regarding fragmentation: does the D collector actually copy collect, or does it need to be conservative?  I would suspect that using C pointers and copy collection are two great tastes that don't taste great together.
>>>
>>>Wrong type of fragmentation. :-)
>>>
>>>This happens when the allocator doesn't have specifically sized arenas for certain sized allocations (usually powers of 2, n > 4). If the allocator just tries to find the best fit, breaking up previously allocated blocks without coalescing adjacent free space, you end up with a very fragmented memory map.
>> 
>> 
>> I was thinking "locality".  Okay, this makes sense.  Yeah, you can use a large program allocator that uses a large number of free lists and chops up blocks (completely unsuitable for small, memory tight, programs), or you can use small-piece allocators and tolerate fragmentation.  The ugliest way is to tweak the algorithm to use the same sized object (by making buffers a certain size).
>
>You're thinking of customized slab allocators or separate heaps for a particular object type or for objects of the same size. Once you've allocated from the custom heap, you return objects to it and keep track of the free ones.
>
>Yup, modern mallocs (FreeBSD's libc version and Lea's, commonly found in Linux's glibc and usable under Win32) do this already; that's why they have arenas. The allocators you're thinking of, the ones with really nasty allocation algorithms, are the old Microsoft C compiler's malloc and the old Kingsley malloc in past BSDs.
>
>Basically, there's really not much that's exciting about memory allocation. Berger's paper is probably the most exciting research I've seen in 15 years and it solves an interesting problem when you really need custom allocation or want a region of related objects allocated, initialized, freed and destroyed as a region. The current GP allocators do a really good job and tweaking won't actually buy you much extra performance.

Which makes perfect sense.  I was writing one because I had a neat idea vis-à-vis persistent objects and relative pointers.  You can create data in a subheap and mmap it in, providing an instant program "state load" effect.  The relative pointers keep all objects connected.  Except, of course, virtual pointers; you also need to rewrite every class to use the subheap, everything on the subheap needs to be connected to a "top" pointer, etc.  It works, it's good, but it's special-purpose.  A smart language implementation could take advantage of such a design, but it gets messy in portable-application space.
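The relative-pointer trick can be sketched in a few lines: store links as offsets from the subheap's base rather than absolute addresses, so the region stays valid when mapped back in at a different base. The names (`rnode_t`, `rel_deref`) are invented for illustration, and a plain `memcpy` stands in for writing the region out and mmap'ing it back.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* A node whose "next" link is an offset from the subheap base,
 * not an absolute pointer.  -1 marks the end of the list. */
typedef struct {
    int32_t value;
    int32_t next_off;
} rnode_t;

/* Resolve a relative pointer against whatever base the region
 * currently happens to be mapped at. */
static rnode_t *rel_deref(char *base, int32_t off)
{
    return off < 0 ? NULL : (rnode_t *)(base + off);
}
```

Since nothing in the region depends on its own address, "loading state" is just mapping the file and handing the new base to `rel_deref`; this is exactly why virtual pointers break the scheme, since a vtable pointer is an absolute address you don't control.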

The other time you might want one is when you want a set of data (e.g. a linked list) to have really good locality.  But the solution to that is a vector or a plex.  (By "plex" I mean a list of small vectors with a list-like interface.  They "should" have good performance in many common cases; they were used in a GNU container library that went away when the STL came out.)  Vectors are probably good enough for this purpose.
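A plex in that sense is essentially an unrolled linked list; here is a minimal sketch with invented names (`plex_push`, `plex_get`) and an arbitrary chunk capacity of 8, not any particular library's API.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define CHUNK_CAP 8

/* A list of small fixed-size chunks: list-like appends, but elements
 * within a chunk sit contiguously, giving vector-like locality. */
typedef struct chunk {
    int           items[CHUNK_CAP];
    size_t        used;
    struct chunk *next;
} chunk_t;

typedef struct {
    chunk_t *head, *tail;
    size_t   len;
} plex_t;

static int plex_push(plex_t *p, int v)
{
    if (!p->tail || p->tail->used == CHUNK_CAP) {
        chunk_t *c = calloc(1, sizeof *c);   /* new chunk only every CHUNK_CAP pushes */
        if (!c) return -1;
        if (p->tail) p->tail->next = c; else p->head = c;
        p->tail = c;
    }
    p->tail->items[p->tail->used++] = v;
    p->len++;
    return 0;
}

static int plex_get(const plex_t *p, size_t i)
{
    const chunk_t *c = p->head;              /* hop chunks, then index locally */
    while (i >= c->used) { i -= c->used; c = c->next; }
    return c->items[i];
}
```

Compared with a node-per-element list, this does one allocation per CHUNK_CAP elements and walks mostly within cache lines, which is the locality argument made above.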

I guess the general philosophy of all that is that we should start treating heap storage the way we treat filesystem storage.  If the system runs out of main memory and all of your data is organized by memory pages, then OS page-out won't cause thrashing until every page of system memory is in use.  Locality is treated as foremost in disk container algorithm design; swap thrashing occurs because it is not treated that way in heap container class design.

To a degree, in most cases, always ski in control, etc.

Kevin

