[phobos] Phobos unit tests running out of memory
September 02, 2012
Is anyone else having dmd complain that it ran out of memory when building Phobos' unit tests on Windows? Right now, it always seems to die for me when it hits around 900MB, even though there's plenty of memory left. It's passing on the autotester, which implies that it's a problem with my box.

But I just did a fresh install. I blew away the old dmd2 directory, put the unzipped dmd 2.060 there, built dmd from a fresh clone of dmd from github, and adjusted sc.ini to point to my freshly cloned druntime and Phobos. Then I built druntime and Phobos. Both build just fine, but when I build the unit tests for Phobos, they run out of memory. It makes no sense to me. I have no idea what I could be doing wrong.

Has anyone else been running into similar problems? Even better, does anyone have any idea what the problem and its solution are?

- Jonathan M Davis
_______________________________________________
phobos mailing list
phobos@puremagic.com
http://lists.puremagic.com/mailman/listinfo/phobos

September 04, 2012
On 03.09.2012 04:29, Jonathan M Davis wrote:
> Is anyone else having dmd complain that it ran out of memory when building Phobos' unit tests on Windows? Right now, it always seems to die for me when it hits around 900MB, even though there's plenty of memory left. It's passing on the autotester, which implies that it's a problem with my box.
>
> But I just did a fresh install. I blew away the old dmd2 directory, put the unzipped dmd 2.060 there, built dmd from a fresh clone of dmd from github, and adjusted sc.ini to point to my freshly cloned druntime and Phobos. Then I built druntime and Phobos. Both build just fine, but when I build the unit tests for Phobos, they run out of memory. It makes no sense to me. I have no idea what I could be doing wrong.
>
> Has anyone else been running into similar problems? Even better, does anyone have any idea what the problem and its solution are?
>
>
I think this is a known problem. When I added the std.uuid module, the unittests failed on the autotester because of out-of-memory issues. I then moved std.uuid to a separate unit test run (SRC_STD_4 in win32.mak; I tried all the other SRC_STD variables, but none worked). The whole thing seems very fragile, and I guess most contributors just check that the unittests work on the autotester, so they might fail on other machines.

A proper solution would probably be to adapt the makefile to compile each module separately for unittests, like it's done in posix.mak. But I guess there's a reason why those unittests are combined right now.
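Adapting win32.mak that way might look roughly like the fragment below. This is a hedged sketch only: the target names, the ut_*.exe output names, and the two modules shown are illustrative guesses, not the actual contents of win32.mak.

```make
# Hypothetical fragment: compile and run each module's unittest build
# separately instead of lumping modules into a few huge SRC_STD_* groups.
unittest : unittest_uuid unittest_datetime

unittest_uuid :
	$(DMD) $(UDFLAGS) -unittest -ofut_uuid.exe unittest.d std\uuid.d
	ut_uuid.exe

unittest_datetime :
	$(DMD) $(UDFLAGS) -unittest -ofut_datetime.exe unittest.d std\datetime.d
	ut_datetime.exe
```

This bounds each dmd invocation's memory footprint to a single module's template instantiations, at the cost of a longer total build.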

-- 
Johannes Pfau


September 04, 2012
On 9/4/12 6:28 PM, Johannes Pfau wrote:
> A proper solution would probably be to adapt the makefile to compile
> each module separately for unittests, like it's done in posix.mak. But I
> guess there's a reason why those unittests are combined right now.

I think the only reason is historical. We should separate the unittests unless Walter has solid objections. Walter?

Andrei


September 04, 2012
I think we need separate compilation for non-unittest Phobos compiles, too. Antti-Ville's precise heap scanning, which will hopefully be integrated soon (I've been meaning to ping him on that), makes the non-unittest Phobos build process run out of memory on Windows because of all the template instantiations necessary. Alternatively, we could bring back the compile-time GC. Walter, why was the compile-time GC removed again?

On Tue, Sep 4, 2012 at 12:47 PM, Andrei Alexandrescu <andrei@erdani.com> wrote:

> On 9/4/12 6:28 PM, Johannes Pfau wrote:
>
>> A proper solution would probably be to adapt the makefile to compile each module separately for unittests, like it's done in posix.mak. But I guess there's a reason why those unittests are combined right now.
>>
>
> I think the only reason is historical. We should separate the unittests unless Walter has solid objections. Walter?
>
> Andrei
>
>


September 04, 2012
On Tuesday, September 04, 2012 18:28:31 Johannes Pfau wrote:
> I think this is a known problem. When I added the std.uuid module the unittests failed on the autotester because of out of memory issues. I then moved the std.uuid module to a separate unit test run (SRC_STD_4, in win32.mak, I tried all other SRC_STD variables, but none worked) to make it work. The whole thing seems very fragile and I guess most contributors just check that the unittests work on the autotester, so they might fail on other machines.

I'm well aware of all of that. What baffles me is how it's running out of memory on my box and not the autotester. I wouldn't have thought that it would matter what box you were on. But maybe memory fragmentation or something changes things.

And of course, there's always the mystery of why on earth dmd runs out of memory at around 900MB even when there's lots more memory on the box. It's been like that for a long time (I've seen it with other out-of-memory build problems), but I have no idea why that would happen. I'm not aware of any significance to 900MB (unlike, say, 3.6 GB, when the 32-bit address space hits its limit).

- Jonathan M Davis

September 04, 2012
On Tuesday, September 04, 2012 18:47:47 Andrei Alexandrescu wrote:
> On 9/4/12 6:28 PM, Johannes Pfau wrote:
> > A proper solution would probably be to adapt the makefile to compile each module separately for unittests, like it's done in posix.mak. But I guess there's a reason why those unittests are combined right now.
> 
> I think the only reason is historical. We should separate the unittests unless Walter has solid objections. Walter?

One serious downside to building the unit tests separately is that we don't catch circular dependencies.

Also, when Kenji created a pull request for doing that previously, Walter was against it, because he thought that it was a good test for compiling a large application:

https://github.com/D-Programming-Language/phobos/pull/280#issuecomment-2259177

Then again, couldn't we just set it up so that each module is built separately but linked into a single executable? If I understand correctly, that would fix the memory issue and still make it so that we'd catch circular dependency problems. It would probably make for a slower build though, and it's already painfully slow on Windows (at least in my experience).

- Jonathan M Davis

September 04, 2012
On 04-Sep-12 21:48, Jonathan M Davis wrote:
> On Tuesday, September 04, 2012 18:28:31 Johannes Pfau wrote:
>> I think this is a known problem. When I added the std.uuid module the
>> unittests failed on the autotester because of out of memory issues. I
>> then moved the std.uuid module to a separate unit test run (SRC_STD_4,
>> in win32.mak, I tried all other SRC_STD variables, but none worked) to
>> make it work. The whole thing seems very fragile and I guess most
>> contributors just check that the unittests work on the autotester, so
>> they might fail on other machines.
> I'm well aware of all of that. What baffles me is how it's running out of memory
> on my box and not the autotester. I wouldn't have thought that it would matter
> what box you were on. But maybe memory fragmentation or something changes
> things.
> And of course, there's always the mystery of why on earth dmd runs out of
> memory at around 900MB even when there's lots more memory on the box. It's
> been like that for a long time (I've seen it for other running out of memory
> build problems), but I have no idea why that would happen. I'm not aware of
> any significance to 900MB (unlike, say 3.6 GB, when the 32-bit address space
> hits its limit).
>
Probably a limitation of the DMC runtime's allocator.

But I've seen different numbers over time, and certainly more than 1 GB; for me it usually dies closer to 2 GB, say around 1.4 GB.
The actual limit for old 32-bit programs is 2 GB of user space (or 3 GB by stealing from the kernel), because the other 1-2 GB of the address space belongs to the kernel. That's it for a 32-bit OS.

See the details here (and note that 32-bit binaries have more virtual memory available on 64-bit Windows):
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366778(v=vs.85).aspx

-- 
Olshansky Dmitry


September 04, 2012
On 04.09.2012 19:24, David Simcha wrote:
> I think we need separate compilation for non-unittest Phobos compiles,
> too.  Antti-Ville's precise heap scanning, which will hopefully be
> integrated soon (I've been meaning to ping him on that) makes the
> non-unittest Phobos build process run out of memory on Windows because
> of all the template instantiations necessary.  Alternatively, we could
> bring back the compile time GC.  Walter, why was compile-time GC removed
> again?
>


IIRC it was removed because compile times more than doubled.

Regarding the bailout with less than 1 GB allocated: I suspect that the dmc runtime uses VirtualAlloc with sizes below the allocation granularity (which is 64 KB on 32-bit Windows). This wastes virtual address space, so another allocation fails even though the actually allocated physical memory is well below the 2 GB limit.
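The effect of sub-granularity VirtualAlloc reservations is easy to illustrate with a little arithmetic. The 16 KB request size below is a made-up example, not something measured from the dmc allocator:

```python
# Each VirtualAlloc reservation is rounded up to the 64 KB allocation
# granularity, so small reservations burn far more address space than
# they actually deliver.
GRANULARITY = 64 * 1024      # allocation granularity on 32-bit Windows
USER_SPACE = 2 * 1024**3     # default 2 GB user-mode address space

def usable_before_exhaustion(request_size):
    """Bytes actually delivered before the 2 GB address space runs out,
    assuming every request reserves a full 64 KB region."""
    regions = USER_SPACE // GRANULARITY
    return regions * request_size

# A hypothetical allocator reserving 16 KB at a time exhausts the
# address space after handing out only 512 MB:
print(usable_before_exhaustion(16 * 1024) // 1024**2, "MB")  # -> 512 MB
```

That kind of ratio would explain dmd dying well under 1 GB while plenty of physical memory remains free.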

The MS-cl-compiled version of dmd does not exhibit this problem (and compiles D code considerably faster).

With a 64-bit host (or with the /3GB boot switch on 32-bit XP), the 2 GB limit can be extended up to 4 GB if the application is "large address aware". Unfortunately optlink cannot set the corresponding bit in the executable header, so you'll have to patch the executable with another tool (like this one: https://github.com/rainers/visuald/blob/master/tools/largeadr.d )
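For reference, setting the large-address-aware flag is a small PE-header patch; a minimal Python sketch of what a tool like largeadr.d does might look like this (offsets follow the PE/COFF layout; this is an illustration, not a drop-in replacement for a tested tool):

```python
import struct

IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020

def set_large_address_aware(data: bytearray) -> bytearray:
    """Set the LAA bit in a PE image's COFF Characteristics field."""
    # e_lfanew at offset 0x3C points to the "PE\0\0" signature.
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    assert data[pe_offset:pe_offset + 4] == b"PE\0\0", "not a PE file"
    # Characteristics is the last COFF header field, 18 bytes past the
    # signature: Machine(2) + NumberOfSections(2) + TimeDateStamp(4) +
    # PointerToSymbolTable(4) + NumberOfSymbols(4) + SizeOfOptionalHeader(2).
    chars_offset = pe_offset + 4 + 18
    chars = struct.unpack_from("<H", data, chars_offset)[0]
    struct.pack_into("<H", data, chars_offset,
                     chars | IMAGE_FILE_LARGE_ADDRESS_AWARE)
    return data
```

With the bit set, a 32-bit process gets up to 4 GB of address space on a 64-bit host (or up to 3 GB on 32-bit XP with /3GB).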

This problem also affects all optlink-generated D programs.


September 05, 2012
On 9/4/12 9:38 PM, Rainer Schuetze wrote:
> IIRC it was removed because compile times more than doubled.

That always struck me as a simple parameter-tuning problem. Can't we set the limit at which the GC kicks in high enough that only the few programs that really need a lot of RAM actually incur a collection? What am I missing?

> Regarding the bail out with less than 1 GB allocated I suspect that the
> dmc runtime uses VirtualAlloc with sizes below the allocation
> granularity (which is 64 KB on 32-bit Windows). This wastes virtual
> address space, so another allocation fails even if the actually
> allocated physical memory is well below the 2GB limit.
>
> The MS-cl-compiled version of dmd does not exhibit this problem (and
> compiles D code considerably faster).

Interesting. Can we fix that?


Andrei

September 04, 2012
On Wednesday, September 05, 2012 00:07:41 Andrei Alexandrescu wrote:
> On 9/4/12 9:38 PM, Rainer Schuetze wrote:
> > IIRC it was removed because compile times more than doubled.
> 
> That always struck me as a simple parameter-tuning problem. Can't we set the limit at which the GC kicks in high enough that only the few programs that really need a lot of RAM actually incur a collection? What am I missing?

I think that it pretty much just came down to a question of time. Walter tried it out. It made things worse, so he removed it. He didn't want to spend the time right then to figure out how to solve the speed problem. I fully expect that it's possible to adjust it so that it improves efficiency rather than harming it, but that requires time and effort that no one has yet been willing to put forth.
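The tuning Andrei describes could be as simple as a high-water mark: only collect once the compiler's heap passes a threshold that most compiles never reach. A toy sketch of the idea (the class, its names, and the thresholds are made up for illustration; they don't reflect dmd's actual allocator):

```python
class ThresholdGC:
    """Toy allocator: collection only kicks in above a high-water mark,
    so typical (small) compiles never pay for a collection at all."""
    def __init__(self, threshold=1 << 30):   # e.g. 1 GB high-water mark
        self.threshold = threshold
        self.live = 0
        self.collections = 0

    def allocate(self, size):
        if self.live + size > self.threshold:
            self.collect()
        self.live += size

    def collect(self):
        self.collections += 1
        self.live = 0        # pretend everything was garbage

gc = ThresholdGC(threshold=100)
for _ in range(20):
    gc.allocate(4)           # 80 bytes total: never crosses the threshold
assert gc.collections == 0   # small compiles see zero GC overhead
```

Only compiles whose live heap actually approaches the limit would pay for collections, which is exactly the case where running out of memory is the alternative.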

- Jonathan M Davis
