March 17, 2005 Re: Fast Memory Allocation Strategy
Posted in reply to Derek Parnell

> I'm not so sure you have thought this through enough.
Actually, my friend, I have thought it through quite enough. Obviously you did not understand the approach I am presenting. Each time a unit is deallocated, the gap in the buffer is FILLED IN with the last item in the buffer, so the memory remains completely contiguous. This is only possible because each item in the buffer is the same size.
-Craig
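As a sketch of the swap-with-last scheme described above (not code from the original post; the `ContiguousPool` name and layout are illustrative), something like this keeps fixed-size items contiguous:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal fixed-size pool: all items live in one contiguous vector.
// Removing an item copies the LAST item into the vacated slot, so the
// storage never develops holes.
template <typename T>
struct ContiguousPool {
    std::vector<T> items;

    // Add an item; returns its current index.
    std::size_t add(const T& value) {
        items.push_back(value);
        return items.size() - 1;
    }

    // Remove by index: fill the gap with the last element ("swap-remove").
    // Note: the item formerly at the back now lives at `index`, so any
    // outside reference to it must be updated -- the cost Ben raises later
    // in the thread.
    void remove(std::size_t index) {
        items[index] = items.back();
        items.pop_back();
    }
};
```

For example, removing slot 0 from a pool holding {10, 20, 30} leaves {30, 20}: the last item moved into the gap, and the storage stayed contiguous.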
March 17, 2005 Re: Fast Memory Allocation Strategy
Posted in reply to Martin M. Pedersen

> Not necessarily. You can use the object itself to store the 'next'
> pointer, and thus eliminate the need for many small 'link' nodes.
> Of course that can be combined with allocating larger blocks, and handing
> them out in smaller pieces.
You don't understand the power of contiguous memory. We are dealing with quite different algorithmic complexity for these two approaches. Without even considering the excess baggage the operating system must endure to manage, let's say, 10,000 small allocation units, consider simply that each allocation unit must be reserved by an allocator of some sort. Each unit requires its own call to something like malloc(), so that's at best linear complexity: O(n) allocator calls. Now consider allocating a resizable buffer for the same 10,000 units. If we enlarge the buffer by a factor of, say, 2 each time it fills, growing to n items takes only O(log2(n)) allocations, which is much better than linear. And this does not even factor in all the crap the allocator has to do internally to manage so many separate objects.
-Craig
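The O(log2(n)) claim above is easy to check by counting. Here is a small sketch (illustrative, not from the post) that counts the (re)allocations a doubling buffer performs while growing to n items; for 10,000 items it comes to 15 allocator calls, versus 10,000 individual malloc() calls:

```cpp
#include <cstddef>

// Count how many (re)allocations a doubling buffer performs while
// growing until it can hold n items, starting from capacity 1.
std::size_t doubling_allocations(std::size_t n) {
    std::size_t capacity = 1;
    std::size_t allocs = 1;        // the initial allocation
    while (capacity < n) {
        capacity *= 2;             // grow by a factor of 2
        ++allocs;                  // one realloc per doubling
    }
    return allocs;                 // ~1 + ceil(log2(n))
}
```

So `doubling_allocations(10000)` is 15, which is where the logarithmic figure in the argument comes from. (The asymptotic copying cost of the regrowth is still amortized O(n) total, but the number of trips to the allocator is logarithmic.)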
March 17, 2005 Re: Fast Memory Allocation Strategy
Posted in reply to Ben Hinkle

> This part the GC doesn't do yet - resize pages and/or move memory around.
What kind of GC doesn't move memory around? So much for efficient memory allocation. Thanks for the info.
-Craig
March 17, 2005 Re: Fast Memory Allocation Strategy
Posted in reply to Craig Black

Craig Black wrote:
<snip>

A good related paper on this topic: http://citeseer.ist.psu.edu/appel87garbage.html
March 17, 2005 Re: Fast Memory Allocation Strategy
Posted in reply to Craig Black

"Craig Black" <cblack@ara.com> wrote in message news:d1cane$19c4$1@digitaldaemon.com...
>> This part the GC doesn't do yet - resize pages and/or move memory around.
>
> What kind of GC doesn't move memory around? So much for efficient memory allocation. Thanks for the info.

Umm, you are kidding, right? There are GCs of every sort floating about: compacting, rarely-copying, never-copying. Conservative vs precise. Mark/sweep vs generational. Incremental vs stop-the-world. Plus everything in between. Each has different trade-offs.

A specific issue with your proposal is that determining whether the last item can be moved has a non-trivial cost. One would have to track all references and either update those references to the last block or live with blocks that can't be moved. In a language that allows pointers, it gets expensive to move memory around.

Boehm has an interesting paper, which I can't find right now, that argues that compacting collectors aren't any better than non-compacting collectors. The time spent figuring out how/if to move something can be as expensive as dealing with fragmentation.
March 17, 2005 Re: Fast Memory Allocation Strategy
Posted in reply to Ben Hinkle

> umm. you are kidding, right? There are GCs of every sort floating about - compacting, rarely-copying, never-copying. Conservative vs precise. Mark/sweep vs generational. Incremental vs stop-the-world. Plus everything in between. Each has different trade-offs.

Nope, not kidding. I'm a C++ guy with no personal experience with GC. I only know what I've read about the Java and C# GCs, which are compacting. That's why I originally posted this on D.learn and got flamed for it. I'm trying to learn the best approach to GC, so I proposed an idea that I thought would work well. As for the different GC approaches you listed, I don't know any details about them; I just know the basic concepts.

> A specific issue with your proposal is that determining if the last item can be moved has a non-trivial cost. One would have to track all references and either update those references to the last block or live with blocks that can't be moved. In a language that allows pointers it gets expensive to try to move memory around.

If we can, theoretically, design a silver-bullet GC, why use pointers anymore? Yes, we would have to move memory and update references accordingly, but this would not be a problem with the right algorithm. For example, if GC references indexed into a buffer rather than pointing directly at memory, then when the buffer is resized there would be no need to change the references. With the right implementation, I believe it can be done efficiently.

> Boehm has an interesting paper that I can't find right now that argues that compacting collectors aren't any better than non-compacting collectors. The time spent figuring out how/if to move something can be as expensive as dealing with fragmentation.

Why then do I hear all the time that compacting collectors outperform non-compacting ones? Fragmentation is a worse problem than moving memory, IMO. The problem with fragmentation is that it gets worse over time. Moving memory doesn't.

-Craig
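The indexed-reference idea Craig floats here is essentially a handle table. A minimal sketch (illustrative names, and it omits handle recycling, so freed handles must not be reused): clients hold stable slot numbers instead of raw pointers, and only the table entry changes when an item moves.

```cpp
#include <cstddef>
#include <vector>

// Handles are indices into `slots`, which maps handle -> current
// position in `data`. `owner` is the reverse map (position -> handle),
// needed to repoint the handle of whichever item gets moved.
struct HandlePool {
    std::vector<int>         data;   // contiguous storage
    std::vector<std::size_t> slots;  // handle -> index into data
    std::vector<std::size_t> owner;  // index into data -> handle

    // Allocate an item; the returned handle stays valid even if the
    // item is later moved within `data`.
    std::size_t alloc(int value) {
        data.push_back(value);
        slots.push_back(data.size() - 1);
        owner.push_back(slots.size() - 1);
        return slots.size() - 1;
    }

    int& get(std::size_t handle) { return data[slots[handle]]; }

    // Free an item: swap-remove as before, then repoint the handle of
    // the item that was moved into the gap. No client-held handles
    // other than that one need updating.
    void free(std::size_t handle) {
        std::size_t hole = slots[handle];
        data[hole]  = data.back();
        owner[hole] = owner.back();
        slots[owner[hole]] = hole;
        data.pop_back();
        owner.pop_back();
    }
};
```

The trade-off Ben describes still shows up, just in a different place: every access pays one extra indirection through the table, in exchange for moves that touch only one table entry.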
March 17, 2005 Thin skin (Re: Fast Memory Allocation Strategy)
Posted in reply to Craig Black

In article <d1cdkg$1cp6$1@digitaldaemon.com>, Craig Black says...
>> umm. you are kidding, right? There are GCs of every sort floating about - compacting, rarely-copying, never-copying. Conservative vs precise. Mark/sweep vs generational. Incremental vs stop-the-world. Plus everything in between. Each has different trade-offs.
>
> Nope, not kidding. I'm a C++ guy with no personal experience with GC. I only know what I've read about the Java and C# GCs, which are compacting. That's why I originally posted this on D.learn and got flamed for it. I'm trying to learn the best approach to GC, so I proposed an idea that I thought would work well. As for the different GC approaches you listed, I don't know any details about them; I just know the basic concepts.

If you call my suggestion that this topic is an expert topic a flame, then you have a different definition of "flame" than I do. Yes, "learn" is a somewhat ambiguous name. Perhaps "learn" is a euphemism for "advanced compiler topics". Maybe I'm the one who's wrong. But if you thought I was "flaming" you, you must get singed frequently.

Really, I promise I didn't mean to hurt your feelings. And I held back from commenting until you posted a second message (albeit in what I would consider the proper newsgroup).

jcc7
March 17, 2005 Re: Thin skin (Re: Fast Memory Allocation Strategy)
Posted in reply to J C Calvarese

"J C Calvarese" <jcc7@cox.net> wrote in message news:d1chbo$1gtt$1@digitaldaemon.com...
> If you call my suggestion that this topic is an expert topic a flame, then you have a different definition of "flame" than I do. Yes, "learn" is a somewhat ambiguous name. Perhaps "learn" is a euphemism for "advanced compiler topics". Maybe I'm the one who's wrong. But if you thought I was "flaming" you, you must get singed frequently.
>
> Really, I promise I didn't mean to hurt your feelings. And I held back from commenting until you posted a second message (albeit in what I would consider the proper newsgroup).
>
> jcc7

OK, sorry, "flamed" is a strong word. No, my feelings were not hurt. No harm done. I see your point, and I think you are right and I am wrong. I was just trying to explain why I posted on D.learn. My bad.

-Craig
March 17, 2005 Re: Fast Memory Allocation Strategy
Posted in reply to Craig Black

"Craig Black" <cblack@ara.com> wrote in message news:d1cdkg$1cp6$1@digitaldaemon.com...
>> umm. you are kidding, right? There are GCs of every sort floating about - compacting, rarely-copying, never-copying. Conservative vs precise. Mark/sweep vs generational. Incremental vs stop-the-world. Plus everything in between. Each has different trade-offs.
>
> Nope, not kidding. I'm a C++ guy with no personal experience with GC. I only know what I've read about the Java and C# GCs, which are compacting. That's why I originally posted this on D.learn and got flamed for it. I'm trying to learn the best approach to GC, so I proposed an idea that I thought would work well. As for the different GC approaches you listed, I don't know any details about them; I just know the basic concepts.

OK, no problem. I thought you were dissing DMD's GC. It's pretty good, actually. If you look at most GCs, they are huge (source-code-wise) compared to DMD's.

>> A specific issue with your proposal is that determining if the last item can be moved has a non-trivial cost. One would have to track all references and either update those references to the last block or live with blocks that can't be moved. In a language that allows pointers it gets expensive to try to move memory around.
>
> If we can, theoretically, design a silver-bullet GC, why use pointers anymore? Yes, we would have to move memory and update references accordingly, but this would not be a problem if the right algorithm was implemented. For example, if the GC references indexed a buffer rather than pointing directly to memory, then when the buffer is resized, there would be no need to change references. With the right implementation, I believe that it can be done efficiently.

I encourage you to pursue your ideas, but I bet people have been looking for a silver-bullet GC since the first days of malloc. This stuff is all about making trade-offs. One size does not fit all.

>> Boehm has an interesting paper that I can't find right now that argues that compacting collectors aren't any better than non-compacting collectors. The time spent figuring out how/if to move something can be as expensive as dealing with fragmentation.
>
> Why then do I hear all the time that compacting collectors outperform non-compacting ones? Fragmentation is a worse problem than moving memory, IMO. The problem with fragmentation is that it gets worse over time. Moving memory doesn't.

That's what I generally hear, too, but when you ask for details they say "well, everyone says that". Personally, I believe it depends on the language/type-system, application, and OS/machine.
March 17, 2005 Re: Fast Memory Allocation Strategy
Posted in reply to Ben Hinkle

In article <d1cbp9$1afa$1@digitaldaemon.com>, Ben Hinkle says...
> "Craig Black" <cblack@ara.com> wrote in message news:d1cane$19c4$1@digitaldaemon.com...
>>> This part the GC doesn't do yet - resize pages and/or move memory around.
>>
>> What kind of GC doesn't move memory around? So much for efficient memory allocation. Thanks for the info.
>
> umm. you are kidding, right? There are GCs of every sort floating about - compacting, rarely-copying, never-copying. Conservative vs precise. Mark/sweep vs generational. Incremental vs stop-the-world. Plus everything in between. Each has different trade-offs.
>
> A specific issue with your proposal is that determining if the last item can be moved has a non-trivial cost. One would have to track all references and either update those references to the last block or live with blocks that can't be moved. In a language that allows pointers it gets expensive to try to move memory around.
>
> Boehm has an interesting paper that I can't find right now that argues that compacting collectors aren't any better than non-compacting collectors. The time spent figuring out how/if to move something can be as expensive as dealing with fragmentation.

Herb Sutter has claimed that the .NET garbage collector is quite fast. It's a generational GC, and the code behind it is probably quite complicated. There's a decent outline of how it works (and of garbage collection in general) here: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/dotnetgcbasics.asp

Sean
Copyright © 1999-2021 by the D Language Foundation