|
Kevin Bealer
Posted in reply to Walter Bright
Walter Bright wrote:
> 3rd try at this.
>
> http://www.digitalmars.com/d/changelog.html
>
> http://ftp.digitalmars.com/dmd.1.003.zip
I wanted to see how much difference the GC changes made, so I wrote a program that fills arrays with random ints, 8 megabytes at a time. It keeps 20 such arrays around, in an effort to trick the GC into 'chaining' them together and thereby incorrectly leaking all the memory. (Source code is below.)
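For scale, a quick sizing sketch (this is not part of the test program below; in D an int is 4 bytes):

    import std.stdio;

    void main()
    {
        const int NUM = 2_000_000;
        writefln("one array : %s bytes", NUM * int.sizeof);      // ~8 MB per array
        writefln("20 arrays : %s bytes", 20 * NUM * int.sizeof); // ~160 MB of live data
    }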
With DMD 1.0, this runs slowly and hogs memory dramatically -- after an hour it was up to iteration 96 and had a VmSize of 814K (at which point I killed it).
But with DMD 1.003 I was able to run it three times. The total memory rose to 414K by iteration 153, then stayed exactly there until the application terminated at iteration 500. Each run took about 40 seconds.
(While this is not a real application, some scientific software does use large arrays of fairly random-looking packed binary data -- as it turns out, I work on such a project.)
Great work! Of course, a perfectly precise GC might be better, but I think this probably enables a lot of otherwise potentially dangerous things (video compression?) without losing the benefit of the GC.
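To make the 'chaining' idea above concrete: a conservative collector has to treat every word of a scanned block as a possible pointer, so a random int that happens to land inside another live block pins that block. Here is a minimal sketch of that address-range test (my own illustration, not the DMD GC's actual code):

    import std.stdio;
    import std.c.stdlib;

    // Does a word, read as a 32-bit address (DMD targets 32 bits here),
    // land inside the given block? This is roughly the check a
    // conservative mark phase has to make for every word it scans.
    bool looksLikePointerInto(uint word, void* blockStart, size_t blockLen)
    {
        uint lo = cast(uint) blockStart;
        return word >= lo && word < lo + blockLen;
    }

    void main()
    {
        int[] victim = new int[2_000_000];      // one 8 MB block
        int falseHits = 0;

        for (int i = 0; i < 2_000_000; i++) {
            uint word = rand();                 // a "random int" from some other array
            if (looksLikePointerInto(word, victim.ptr, victim.length * int.sizeof))
                falseHits++;
        }

        writefln("random words that look like pointers into the block: %s", falseHits);
    }

With blocks this big and this many random words, the odds that some word in each array happens to point into each of the others are essentially 100%, which is the chaining effect I was trying to provoke.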
Kevin
import std.stdio;
import std.stream;
import std.string;
import std.gc;
import std.c.stdlib;

// Report this process's VmSize by reading /proc/self/status (Linux only).
char[] vminfo()
{
    Stream file = new BufferedFile("/proc/self/status");

    foreach (ulong n, char[] line; file) {
        if (find(line, "VmSize:") != -1) {
            return strip(split(line.dup, ":")[1]);
        }
    }
    return "no info";
}

int main(char[][] args)
{
    int[][] stuff;          // holds the 20 most recently kept arrays
    int NUM = 2_000_000;    // ints per array, about 8 MB each

    stuff.length = 20;

    disable();              // keep the GC off while the first 20 slots fill up

    for (int i = 0; i < 500; i++) {
        // Build an 8 MB array of random ints.
        int[] arr = new int[NUM];

        for (int j = 0; j < arr.length; j++) {
            arr[j] = rand();
        }

        // Fill slots 0..19 in order, then overwrite a random slot,
        // dropping the old array so the GC is free to collect it.
        int zig = i;

        if (zig >= stuff.length)
            zig = rand() % stuff.length;

        stuff[zig] = arr;

        writefln("\nIter=%s", i);

        if (i == 20) {
            enable();       // re-enable the GC once all 20 slots are populated
        }

        writefln("%s", vminfo());
    }
    return 0;
}