June 30, 2003 Re: GC workgroup invitation
Posted in reply to Fabian Giesen

Fabian Giesen wrote:
>> I know that many people are not big fans of reference counting
>> collectors, but if we're implementing a generic GC interface, then we
>> should include reference/dereference handlers so that refcounters
>> could work.
>
> Refcounting cannot even work in the presence of data structures with
> cyclical references, which includes very basic things like doubly-linked
> lists and trees where nodes have pointers to their parents. Making
> ref-counting work in that case requires either a complete redesign of the
> data structures or so-called "weak references", i.e. references that don't
> "own" the object they're referencing.
>
> This can silently break existing code in a myriad of ways, and it's
> especially bad because it won't cause any suspicious behavior, it'll just
> cause memory leaks to pile up. As the whole idea was to make memory
> management *easier* and not more involved and error-prone, I really don't
> think it's worth it.
>
> -fg

This problem has been solved.

http://www.research.ibm.com/people/d/dfb/papers.html

Read the links on "Recycler".
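A minimal sketch of the cycle problem described above, written in C++ rather than D only because its standard strong/weak smart pointers make the ownership explicit; the `Node` types and their fields are made up for illustration:

```cpp
#include <iostream>
#include <memory>

// Doubly-linked node where both directions are "owning" (strong) references.
struct Node {
    std::shared_ptr<Node> next;
    std::shared_ptr<Node> prev;   // strong back-pointer -> reference cycle
    ~Node() { std::cout << "Node destroyed\n"; }
};

// Same shape, but the back-pointer is a weak (non-owning) reference,
// which breaks the cycle that defeats pure refcounting.
struct WeakNode {
    std::shared_ptr<WeakNode> next;
    std::weak_ptr<WeakNode> prev;
    ~WeakNode() { std::cout << "WeakNode destroyed\n"; }
};

int main() {
    {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;
        b->prev = a;   // refcounts: a = 2, b = 2
    }   // locals dropped, counts fall to 1 each: neither destructor runs (leak)

    {
        auto a = std::make_shared<WeakNode>();
        auto b = std::make_shared<WeakNode>();
        a->next = b;
        b->prev = a;   // weak_ptr does not bump a's count
    }   // both destructors run here
}
```

A tracing collector reclaims both versions identically; under pure refcounting the first version leaks silently unless the data structure is redesigned around weak references, which is exactly the "silently break existing code" concern quoted above.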
June 30, 2003 Re: GC workgroup invitation
Posted in reply to Helmut Leitner

> I think I read the link you mentioned. But did you also read that it has a 4-6 (worst case 10) factor between "live object" data and maximum heap size?
>
> That is: when a program at some point has 2 MB of objects, it may need a 20 MB (typically 10 MB) heap at the same time to do this in real time?
There are no "typical" values in that table; it's strictly worst-case
analysis, because the absolute upper bounds are what count for hard
real-time requirements. Also note that the statistics given assume
the use of a segregated storage allocator - presumably because worst-case
bounds for segregated storage are far easier to compute than for e.g. best
fit :). Note also that the actual GC overhead is "twice the amount of memory
that is allocated per full garbage collection cycle" (per size class; page 26) -
the rest is due to the memory allocator used.
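To make that concrete with made-up numbers (only the "twice what is allocated per full GC cycle" rule is from the paper): with 2 MB of live objects in a size class and at most 1 MB allocated in that class during one full collection cycle, the GC itself adds at most 2 * 1 MB = 2 MB on top of the 2 MB of live data, and whatever remains of the 10-20 MB worst-case figure is overhead of the segregated storage allocator, not of the collector.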
Finally, all this is for *hard* real-time requirements. Relaxing this to soft real-time (i.e. trading a somewhat worse worst case for far better average-case behavior) should give quite competitive results (even though it won't show in the worst-case statistics, of course).
-fg
July 02, 2003 Re: GC workgroup invitation
Posted in reply to Helmut Leitner

http://www.cs.kent.ac.uk/people/staff/rej/gc.html