May 13, 2013
On Monday, 13 May 2013 at 07:13:03 UTC, maarten van damme wrote:
> But seeing as the memory only starts growing when I put the png line in

This is pretty strange because looking at the source there, the only time data is even referenced is here:

                output ~= data[pos..pos+bytesPerLine];


Which makes a copy. There are no delegates or anything like that, so there shouldn't be a copied stack reference sticking around. I see no reason why this wouldn't work.

I also tried replacing the new ubyte[] with malloc/free, hoping it would trigger an access violation if the buffer were used after free, and nothing came up.

But the computer I'm using right now is one of those netbooks, so it doesn't have the CPU power to generate the large images. I did a 200x100 image instead, which obviously uses far less memory. Still, if it were a bad reference, I think it would show up here too.



The only other line that might be a problem is this one:
        auto com = cast(ubyte[]) compress(output);


At this point output is a copy of data with other things added, so it is a very large array too, and it is built up through the ~= operator, so it is entirely GC-managed. A false positive pointing into this array could be causing the leak.


And the compress function, from std.zlib, does this:

    auto destlen = srcbuf.length + ((srcbuf.length + 1023) / 1024) + 12;
    auto destbuf = new ubyte[destlen];
    auto err = etc.c.zlib.compress2(destbuf.ptr, &destlen, cast(ubyte *)srcbuf.ptr, srcbuf.length, level);
    if (err)
    {   delete destbuf;
        throw new ZlibException(err);
    }

    destbuf.length = destlen;



Again, destbuf is another large GC-managed array. So my guess is that either my output or std.zlib's destbuf is being mistaken for still-referenced memory by some random number on the stack, and the GC is conservatively keeping it around.



A potential fix would be for my png.d to manage this memory manually: preallocate output[] with malloc and perhaps use its own compress function with its own manually managed buffer too. I figure this would be a new function, savePngFromImage, that saves the file itself instead of returning the array, so it has complete control over the references.
May 13, 2013
Actually, the output array doesn't need to be there at all; I could compress on the fly, adding the filter byte to a much smaller buffer, then writing straight out to the file. Then the only big array would be the one in the image class itself, which is easy to manage.

In fact, I wrote another file, originally called imagedraft.d, that does PNGs as lazy ranges. It doesn't look like I ever posted that one to GitHub. I think it can actually read more PNG files too.


But in any case, switching to that will be harder since I don't even remember how to use it.



ANYWAY, I added a new function to png.d:

void writeImageToPngFile(in char[] filename, TrueColorImage image);


Use it like this:

           // and finally write the data to a png file
           writeImageToPngFile("images/"~toHexString(md5Of(curtrans))~".png", i);


And see if things are any better. This one uses malloc/free for its temporary buffer and uses the Compress class from std.zlib to compress one line at a time instead of calling the big convenience function, so hopefully the compression buffer will be more manageable too.

It also explicitly deletes the compressed data buffer once it is done with it, after writing to the file. (I know delete is deprecated, but switching this to a malloc'd buffer would have meant duplicating the std.zlib class entirely, so meh; we can do that if needed, but for now let's just try this.)
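For reference, the streaming idea looks roughly like this. This is only a sketch of the pattern (malloc'd row buffer, std.zlib's Compress fed one scanline at a time), not the actual code in png.d; the function name is made up and the PNG chunk framing (signature, IHDR/IDAT/IEND, CRCs) is left out entirely:

    import core.stdc.stdlib : malloc, free;
    import std.stdio : File;
    import std.zlib : Compress;

    // Sketch only: compress RGBA scanlines one at a time instead of building
    // one giant filtered copy of the whole image first.
    void streamCompressedScanlines(const(ubyte)[] pixels, int width, int height, File f)
    {
        immutable size_t bytesPerLine = width * 4;

        // the temporary row buffer is malloc'd, so the GC never sees it
        auto row = cast(ubyte*) malloc(bytesPerLine + 1);
        if (row is null) assert(0, "out of memory");
        scope(exit) free(row);

        auto comp = new Compress();

        foreach (y; 0 .. height)
        {
            row[0] = 0; // PNG filter byte: none
            row[1 .. bytesPerLine + 1] = pixels[y * bytesPerLine .. (y + 1) * bytesPerLine];

            // hand one filtered scanline to zlib and write whatever it has ready
            auto chunk = comp.compress(row[0 .. bytesPerLine + 1]);
            if (chunk.length) f.rawWrite(cast(const(ubyte)[]) chunk);
        }

        // flush the rest of the compressed stream
        auto tail = comp.flush();
        if (tail.length) f.rawWrite(cast(const(ubyte)[]) tail);
    }

Here f would just be a File opened in "wb" mode; a real PNG writer additionally has to emit the 8-byte signature and wrap the compressed bytes in IDAT chunks with CRCs, which is omitted above.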


Since my computer can't handle the large arrays anyway, I can't test this for sure, but I expect you'll see some difference here. If you've changed TrueColorImage to use malloc/free too, that would probably help as well (none of this keeps a reference to data, so free it in TrueColorImage's destructor and you should be OK).
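For what it's worth, the destructor idea is just the usual malloc-backed pattern; a rough sketch (the class name here is made up, this is not the actual TrueColorImage from image.d):

    import core.stdc.stdlib : malloc, free;

    // Sketch: the pixel data lives outside the GC heap entirely.
    class ManualImage
    {
        ubyte[] data; // RGBA, width*height*4 bytes, not GC-managed
        int width, height;

        this(int w, int h)
        {
            width = w;
            height = h;
            auto p = cast(ubyte*) malloc(w * h * 4);
            if (p is null) assert(0, "out of memory");
            data = p[0 .. w * h * 4];
        }

        ~this()
        {
            // nothing else keeps a reference to data, so freeing here is fine
            free(data.ptr);
            data = null;
        }
    }

Since a class destructor only runs when the GC eventually collects the object, an explicit dispose() method would release the pixels more deterministically, but even the destructor version keeps the big buffer out of the scanned heap.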
May 13, 2013
On Friday, 10 May 2013 at 23:18:31 UTC, maarten van damme wrote:
> I'm trying to generate images and after displaying them I want to save them
> to a png file. I use parallelism to generate them asynchronously with the
> gui-thread to avoid blocking (don't know if this is really relevant).
>
> If I add a single line in the method where I generate the image that
> creates a class which allocates a pretty big array, slowly but surely the
> memory starts to rise until it ends in a spectacular out of memory
> exception. I always thought the garbage collector would deal with it.
>
> I tried reducing it but failed. Maybe I should let dustmite bite in it a
> bit later.
> How can I manually fix this? Is this a known bug in the garbage collector?
>
> btw, this code also caused segfaults on my  64 bit laptop on linux.
> How can I avoid this?
>
> main class:
> https://dl.dropboxusercontent.com/u/15024434/picturegenerator/generalised.d
>
> zip file with everything ready to go to compile:
> https://dl.dropboxusercontent.com/u/15024434/picturegenerator/picturegenerator.zip

1) Your code does not compile on a 64-bit system because some platform-varying types are erroneously treated as uint; use size_t instead (see the sketch after this list). Luckily dmd has the -m32 switch.

2) I don't get an out-of-memory exception but an enforcement failure at simpledisplay.d:1024, "Maximum number of clients", which looks less like a memory error and more like you running into some X server limit.

3) A segfault occurs at simpledisplay.d:1058: in the destructor you call XDestroyImage(handle) but do not check whether handle is null. However, it happens after the enforcement failure, so it doesn't look like the root of the problem.

4) What really consumes memory is the kernel, or something that forces it to: my task manager reports two blocks of 280 MB and 126 MB inside the kernel part of the virtual address space.
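To illustrate point 1: .length is 32 bits wide on x86 but 64 bits wide on x86_64, so anything holding a length or an offset should be size_t (or auto). A minimal sketch, with the variable names invented here:

    void example()
    {
        int width = 800, height = 600;
        size_t bytesPerLine = width * 4;
        auto data = new ubyte[](width * height * 4);

        // Fails to compile with -m64: cannot implicitly convert ulong (data.length) to uint.
        //uint len = data.length;

        // Portable: size_t is uint on 32-bit targets and ulong on 64-bit targets.
        size_t len = data.length;

        for (size_t pos = 0; pos + bytesPerLine <= len; pos += bytesPerLine)
        {
            auto line = data[pos .. pos + bytesPerLine];
            // ... process one scanline ...
        }
    }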
May 13, 2013
I'll check every uint occurrence, thank you. I'm on Windows, so I won't get the "Maximum number of clients" error you mention. The loop I used inside main was:
"while(true){
curtrans=generateTransformationMatrix();

for(int y=0;y<height;y++)
for(int x=0;x<width;x++)
i.data[(y*width+x)*4..y*width+x)*4+4]=colorify(applyTransformation(transformXY(x,y),curtrans)).dup[0..3]
~ 255;
 // and finally write the data to a png file
png = pngFromImage(i);
//std.file.write("images/"~toHexString(md5Of(curtrans))~".png",
writePng(png));
}"
This doesn't use simpledisplay anymore, so it should work fine on Linux too, right?

Adam reads this thread, so he'll probably see the errors about simpledisplay. If not, I'll copy-paste them into a bug report on GitHub.
--

The new fixes from Adam Ruppe for png.d and image.d actually get rid of all memory leaks in my reduced version (without simpledisplay). Now I can get to fine-tuning my functions for the best images :)

I'll see if the simpledisplay version now works too (leave it running
overnight).
--

So, do we actually know where the memory problems came from? Is anyone else actually running out of memory, or am I the only one? (Tested on both my PC and my laptop, so it should be reproducible.)


May 13, 2013
On Monday, 13 May 2013 at 21:28:34 UTC, maarten van damme wrote:
> Adam checks this thread so he'll probably read the errors about
> simpledisplay. if not I'll copy paste and put in a bug report on github.

I won't be at my Linux box with free time until at least tomorrow afternoon, but I'll check then. I've used simpledisplay on Linux before, but never with a 64-bit app, so I'll have to try it.

> So, do we actually know where the memory problems came from? is anyone actually running out of memory too or am I the only one?

I didn't actually get that far since I'm on my slow computer, but my guess is that one of those copies got pointed into.

The image in your code is something like 8 MB of memory, and it is generated pretty rapidly, as fast as the CPU can churn them out in the other thread.

The old function took that 8 MB and made at least two copies of it, once in my function and once in std.zlib's function. With roughly 16 MB of copies in a 2 GB usable address space, any random 32-bit number has about a 1/128 chance (16/2048) of pointing into them.

Consider also that the GC was probably scanning at least one copy of that data for pointers: std.zlib's copy was constructed as a void[], so even if the allocator told the GC what type it was (I don't know whether it does), a void[] has to be scanned conservatively because it could be an array of pointers.

Then add in random numbers on the stack, and the odds are pretty good that the GC will get a false positive and fail to free the memory block.


Repeat as it loops through the images, and you've got a pretty big memory leak adding up.


I'm not 100% sure that's what happened, but I think it is pretty likely. The new function fixes it because it manages the big copies manually, freeing them at the end of the function without having to worry about false pointers.
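In other words the pattern is: anything big and short-lived gets freed explicitly as soon as the file is written, instead of hoping a conservative scan ever releases it. A rough sketch of that idea (the function name is made up; this is not the actual png.d code):

    import core.memory : GC;
    import std.file : write;
    import std.zlib : compress;

    // Sketch: free the big temporary as soon as it has been written out,
    // so a false pointer on the stack can't keep it alive across iterations.
    void saveCompressed(string filename, ubyte[] payload)
    {
        auto compressed = cast(ubyte[]) compress(payload);
        // explicit free is safe here because nothing else references the buffer
        scope(exit) GC.free(compressed.ptr);

        write(filename, compressed);
    }

This mirrors the explicit delete mentioned earlier; with a malloc'd buffer it would be a plain free() instead of GC.free.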
May 14, 2013
So now pgen.d works (the version without simpledisplay, reusing the TrueColorImage). The simpledisplay version still has problems, but I'll try manually (de)allocating or reusing the data.



