Thread overview
Fast Memory Allocation Strategy
Mar 16, 2005  Craig Black
Mar 16, 2005  Craig Black
Mar 16, 2005  Derek Parnell
Mar 17, 2005  Craig Black
Mar 17, 2005  Martin M. Pedersen
Mar 17, 2005  Craig Black
Mar 17, 2005  Martin M. Pedersen
Mar 17, 2005  Craig Black
Mar 16, 2005  J C Calvarese
Mar 16, 2005  John Reimer
Mar 16, 2005  Craig Black
dm.D.learn (was Re: Fast Memory Allocation Strategy)
Mar 17, 2005  J C Calvarese
Mar 16, 2005  Ben Hinkle
Mar 17, 2005  Craig Black
Mar 17, 2005  Ben Hinkle
Mar 17, 2005  Craig Black
Thin skin (Re: Fast Memory Allocation Strategy)
Mar 17, 2005  J C Calvarese
Mar 17, 2005  Craig Black
Mar 17, 2005  Ben Hinkle
Mar 17, 2005  Dave
Mar 17, 2005  David Medlock
Mar 17, 2005  Dave
Mar 17, 2005  Craig Black
Mar 17, 2005  Sean Kelly
Mar 17, 2005  Craig Black
Mar 18, 2005  Martin M. Pedersen
Mar 18, 2005  Craig Black
Mar 18, 2005  Martin M. Pedersen
Mar 18, 2005  Craig Black
Mar 18, 2005  David Medlock
Mar 18, 2005  Craig Black
Mar 18, 2005  David Medlock
Mar 18, 2005  Craig Black
Mar 18, 2005  Dave
Mar 18, 2005  Craig Black
Mar 18, 2005  Sean Kelly
Mar 18, 2005  Dave
Mar 18, 2005  Craig Black
Mar 18, 2005  Ben Hinkle
Mar 19, 2005  Dave
Mar 19, 2005  Ben Hinkle
Mar 22, 2005  Sean Kelly
Mar 18, 2005  Dave
Mar 18, 2005  Craig Black
Mar 17, 2005  Sean Kelly
Mar 17, 2005  David Medlock
Mar 17, 2005  pragma
Mar 17, 2005  Craig Black
Mar 18, 2005  Martin M. Pedersen
March 16, 2005
(I posted this in D.learn NG but nobody has responded yet and I'm impatient.)

I've been looking at D's approach to memory allocation and I have an idea that I think will cause memory allocation to be blazingly fast.

I don't know what to call it, and there probably is a name for it.  It's kind of similar to D's mark and release approach but slightly more involved.

Objects are organized into buffers according to their size.  So if an object is 10 words long, it will go into the 10-word buffer.  If an object is 5 words long, it goes into the 5-word buffer, etc.  Thus all objects in each buffer are of equal size.

The buffers must be resizable.  When an object is instantiated from a buffer that is full, the buffer will be enlarged to facilitate more objects.  When the buffer is resized the GC must move all the objects.  When an object is freed from memory, the gap in the buffer is filled with the last item in the buffer.  This requires the GC to move that block of memory.  Thus there is no memory fragmentation within each buffer.
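To make that concrete, here is a rough sketch of the kind of per-size pool I have in mind (illustrative D only, names made up; a real version would need the GC to fix up pointers to whichever object gets moved):

// Sketch only: one pool per object size, kept dense by moving the last
// slot into any freed gap.
struct FixedSizePool
{
    size_t slotWords;   // every object in this pool occupies this many words
    size_t[] data;      // contiguous storage, slotWords words per slot
    size_t count;       // number of live slots

    this(size_t words) { slotWords = words; }

    // Hand out the index of the next slot, growing the buffer when full.
    size_t allocate()
    {
        if ((count + 1) * slotWords > data.length)
            data.length = data.length ? data.length * 2 : 64 * slotWords;
        return count++;
    }

    // Free slot i by copying the last slot over it, so the pool stays
    // contiguous.  The object that lived in the last slot has now moved,
    // which is why the GC would have to cooperate and fix up pointers.
    void free(size_t i)
    {
        auto last = --count;
        if (i != last)
            data[i * slotWords .. (i + 1) * slotWords] =
                data[last * slotWords .. (last + 1) * slotWords];
    }
}

A 10-word object would come out of FixedSizePool(10), a 5-word object out of FixedSizePool(5), and so on.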

The problem with this approach is that it requires interaction with the GC that I don't think D currently provides, so Walter may have to get involved for this to be implemented.

All you smart D folks, let me know what you think.

-Craig



March 16, 2005
"Craig Black" <cblack@ara.com> wrote in message news:d1a3oo$218h$1@digitaldaemon.com...
> (I posted this in D.learn NG but nobody has responded yet and I'm impatient.)
>
> I've been looking at D's approach to memory allocation and I have an idea that I think will cause memory allocation to be blazingly fast.
>
> I don't know what to call it, and there probably is a name for it.  It's kind of similar to D's mark and release approach but slightly more involved.
>
> Objects are organized into buffers according to their size.  So if an
> object is 10 words long, it will go into the 10-word buffer.  If an
> object is 5 words long, it goes into the 5-word buffer, etc.  Thus all
> objects in each buffer are of equal size.
>
> The buffers must be resizable.  When an object is instantiated from a
> buffer that is full, the buffer will be enlarged to facilitate more
> objects.  When the buffer is resized the GC must move all the objects.
> When an object is freed from memory, the gap in the buffer is filled
> with the last item in the buffer.  This requires the GC to move that
> block of memory.  Thus there is no memory fragmentation within each
> buffer.
>
> The problem with this approach is that it requires interaction with the
> GC that I don't think D currently provides, so Walter may have to get
> involved for this to be implemented.
>
> All you smart D folks, let me know what you think.

Sounds like a free list.  Look at the "memory management" section of the D spec.  It explains them.

It is indeed a very fast method of memory allocation, but since it can be implemented very efficiently in user code, I'm not sure it would be integrated into the GC.  Though you never know.
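For anyone who hasn't read that section, the general shape of a free list is something like this (a rough sketch from memory, not the spec's exact example):

// Recycled objects are chained through a 'next' pointer and reused
// before asking the GC for fresh memory.
class Node
{
    private static Node freeList;   // head of the list of recycled Nodes
    private Node next;              // link used while the object is recycled

    static Node make()
    {
        if (freeList is null)
            return new Node;        // nothing recycled yet: fall back to new
        auto n = freeList;          // pop a recycled Node in O(1)
        freeList = n.next;
        n.next = null;
        return n;
    }

    static void recycle(Node n)
    {
        n.next = freeList;          // push back for reuse instead of freeing
        freeList = n;
    }
}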


March 16, 2005
In article <d1a3oo$218h$1@digitaldaemon.com>, Craig Black says...
>
>(I posted this in D.learn NG but nobody has responded yet and I'm impatient.)

This seems like a pretty heavy-duty subject. Why did you post it to dm.D.learn in the first place? I thought that newsgroup would be either for beginning programmers or those who are trying to learn how to use D in a practical sense.

I think this topic belongs in dm.D. I think if you're trying to be "blazingly fast" you're well past trying to "learn" D--you're trying to revamp D or reinvent D.

I'd like to see dm.D.learn reserved for beginning programmers that feel left out of these more complicated topics.

My two cents.

>I've been looking at D's approach to memory allocation and I have an idea that I think will cause memory allocation to be blazingly fast.
>
>I don't know what to call it, and there probably is a name for it.  It's kind of similar to D's mark and release approach but slightly more involved.


jcc7
March 16, 2005
J C Calvarese wrote:
> In article <d1a3oo$218h$1@digitaldaemon.com>, Craig Black says...
> 
>>(I posted this in D.learn NG but nobody has responded yet and I'm impatient.)
> 
> 
> This seems like a pretty heavy-duty subject. Why did you post it to dm.D.learn
> in the first place? I thought that newsgroup would be either for beginning
> programmers or those who are trying to learn how to use D in a practical sense.
> 
> I think this topic belongs in dm.D. I think if you're trying to be "blazingly
> fast" you're well past trying to "learn" D--you're trying to revamp D or
> reinvent D. 
> 
> I'd like to see dm.D.learn reserved for beginning programmers that feel left out
> of these more complicated topics.
> 
> My two cents.
> 
> 
>>I've been looking at D's approach to memory allocation and I have an idea
>>that I think will cause memory allocation to be blazingly fast.
>>
>>I don't know what to call it, and there probably is a name for it.  It's
>>kind of similar to D's mark and release approach but slightly more involved.
> 
> 
> 
> jcc7

Agreed.  I was wondering the same thing! :-)
March 16, 2005
Advanced topic?  Perhaps, but I'm trying to learn about D, so I thought it would be appropriate.  Guess not.

-Craig


March 16, 2005
The problem with free lists is that the memory is organized as a linked-list.  This produces many small units.  Using large buffers is much more efficient.  Fewer allocation units to manage.  This is the main reason std::vector is preferred to std::list in C++ programming.  It's faster and more efficient in its use of memory.

-Craig


March 16, 2005
On Wed, 16 Mar 2005 15:59:50 -0600, Craig Black wrote:

> The problem with free lists is that the memory is organized as a linked-list.  This produces many small units.  Using large buffers is much more efficient.  Fewer allocation units to manage.  This is the main reason std::vector is preferred to std::list in C++ programming.  It's faster and more efficient in its use of memory.
> 
> -Craig

I'm not so sure you have thought this through enough. Consider the situation where you have, say, 100 5K (adjacent) blocks of RAM allocated. When the application needs one of them, which one should it acquire? Obviously, one which is not already used. But how does one know which is used and which is not? You either mark them in some manner and scan through the markers looking for a free one, or you keep a list of the free ones, take the first free one, and update the list. You need to somehow know which are free and which are not, because they may be released in a different order than they were acquired.

In either case, you end up with a list of RAM blocks in which you must maintain their 'free' status in some manner.
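In code, the two bookkeeping options look roughly like this (sketch only, hypothetical names):

// Option 1: mark blocks used/free and scan the markers for a free one.
size_t findFreeByScan(bool[] used)
{
    foreach (i, inUse; used)
        if (!inUse) { used[i] = true; return i; }
    assert(0, "no free block");
}

// Option 2: keep an explicit list of free block indices and pop one.
size_t takeFromFreeList(ref size_t[] freeIndices)
{
    auto i = freeIndices[$ - 1];            // O(1), no scanning
    freeIndices = freeIndices[0 .. $ - 1];
    return i;
}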

-- 
Derek
Melbourne, Australia
17/03/2005 9:14:08 AM
March 16, 2005
"Craig Black" <cblack@ara.com> wrote in message news:d1a3oo$218h$1@digitaldaemon.com...
> (I posted this in D.learn NG but nobody has responded yet and I'm impatient.)
>
> I've been looking at D's approach to memory allocation and I have an idea that I think will cause memory allocation to be blazingly fast.
>
> I don't know what to call it, and there probably is a name for it.  It's kind of similar to D's mark and release approach but slightly more involved.
>
> Objects are organized into buffers according to their size.  So if an
> object is 10 words long, it will go into the 10-word buffer.  If an
> object is 5 words long, it goes into the 5-word buffer, etc.  Thus all
> objects in each buffer are of equal size.

This is roughly what the GC already does. See the "allocPage" function in src/phobos/internal/gc/gcx.d. A "page" to the GC is a chunk of memory that is split into equal-sized "bins". Individual malloc requests (e.g., calls to "new") return the next available bin for a given size.
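In outline it looks something like this (a sketch of the idea only, not the actual gcx.d code):

// A page is carved into equal-sized bins, and each allocation request is
// rounded up to one of a small set of bin sizes.
enum pageSize = 4096;

// Round a request up to its size class (powers of two here; the real GC
// keeps its own table of bin sizes).
size_t binSize(size_t request)
{
    size_t size = 16;
    while (size < request)
        size *= 2;
    return size;
}

// Split one page into bins of that size; "new" would hand out the next
// unused entry for the matching size class.
void*[] carvePage(ubyte* page, size_t size)
{
    void*[] bins;
    for (size_t offset = 0; offset + size <= pageSize; offset += size)
        bins ~= cast(void*)(page + offset);
    return bins;
}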

> The buffers must be resizable.  When an object is instantiated from a
> buffer that is full, the buffer will be enlarged to facilitate more
> objects.  When the buffer is resized the GC must move all the objects.
> When an object is freed from memory, the gap in the buffer is filled
> with the last item in the buffer.  This requires the GC to move that
> block of memory.  Thus there is no memory fragmentation within each
> buffer.

This part the GC doesn't do yet - resize pages and/or move memory around.


March 17, 2005
"Craig Black" <cblack@ara.com> skrev i en meddelelse news:d1aa8o$27df$1@digitaldaemon.com...
> The problem with free lists is that the memory is organized as a linked-list.  This produces many small units.  Using large buffers  is much more efficient.

Not necessarily. You can use the object itself to store the 'next' pointer, and thus eliminate the need for many small 'link' nodes. Of course that can be combined with allocating larger blocks, and handing them out in smaller pieces.
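For example (sketch only, made-up names):

// Intrusive free list: a large block is carved into fixed-size chunks and
// each free chunk's first word doubles as the 'next' link, so there are
// no separate link nodes at all.
struct ChunkPool
{
    size_t chunkSize;   // must be at least the size of a pointer
    void* head;         // first free chunk, or null

    this(size_t size) { chunkSize = size; }

    // Carve a big block into chunks and thread them onto the free list.
    void addBlock(ubyte[] block)
    {
        for (size_t off = 0; off + chunkSize <= block.length; off += chunkSize)
        {
            void* chunk = block.ptr + off;
            *cast(void**) chunk = head;   // store 'next' inside the chunk itself
            head = chunk;
        }
    }

    void* allocate()
    {
        if (head is null)
            return null;                  // caller would add another block here
        void* chunk = head;
        head = *cast(void**) chunk;       // pop by following the embedded link
        return chunk;
    }

    void free(void* chunk)
    {
        *cast(void**) chunk = head;       // push: reuse the chunk's first word
        head = chunk;
    }
}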

Regards,
Martin


March 17, 2005
Craig Black wrote:
> Advanced topic?  Perhaps, but I'm trying to learn about D, so I thought it would be appropriate.  Guess not.
> 
> -Craig 

It's certainly a subjective thing. There's not necessarily a right or wrong answer, but I'll explain my perspective.

I think dm.D.learn is meant to be the beginner's newsgroup, more geared towards people who are new to D and struggling to get a simple program to compile, or maybe trying to understand an intermediate concept. If someone already has a strong programming background in C, C++, or Java, his questions will likely be better suited for dm.D (though I'm sure he could help answer questions on dm.D.learn).

Writing a new garbage collector to go the speed of light strikes me as an advanced concept. It's a question that requires an expert to answer. It's a question that Walter may even need to ponder a few seconds before he could answer. (It's over my head, and I've been following this newsgroup for several years.) It seems much more theoretical than explaining what a common error message means.

Anyways, dm.D.learn is brand-spanking new and it'll take some time for us to figure out which questions belong where. There'll definitely be a gray area.

Now I'm going to post my first post on dm.D.learn, and I'll try to be on topic. ;)

-- 
Justin (a/k/a jcc7)
http://jcc_7.tripod.com/d/