Concern about dmd memory usage on win32
December 07, 2012
In particular, when compiling "-unittest std\algorithm.d", dmd uses *nearly* 1 GB (it uses about 1,051,176K on my machine).

Problem is that when it reaches 1GB, it crashes. I have a pull request which adds a few unittests to algorithm, and it is consistently crashing on win32 with an out of memory error.

In layman's terms: std\algorithm.d is full. You literally can't add any more unittests to it, without crashing dmd on win32.

I'd have recommended splitting the unittests into sub-modules or whatnot, but come to think of it, I'm actually more concerned that a module could *singlehandedly* make the compiler crash with out of memory...

Also, I'm no expert, but why is my dmd limited to 1 GB of memory on my 64-bit machine...?
December 07, 2012
On Friday, 7 December 2012 at 14:23:39 UTC, monarch_dodra wrote:
> [SNIP]

Come to think of it, what's funny is that the pull *used* to pass, but it doesn't anymore, and I haven't changed anything in the meantime.

There *have* been changes to algorithm, so that may be it, but it may also be a performance regression of dmd. Again, I'm no expert, so all I can do is report :/
December 07, 2012
On 12/7/2012 6:23 PM, monarch_dodra wrote:
> In particular, when compiling "-unittest std\algorithm.d", dmd uses
> *nearly* 1 GB (it uses about 1,051,176K on my machine).
>
> Problem is that when it reaches 1GB, it crashes. I have a pull request
> which adds a few unittests to algorithm, and it is consistently crashing
> on win32 with an out of memory error.

Yup, it dies the same way on auto-tester for my pull.

>
> In layman's terms: std\algorithm.d is full. You literally can't add any
> more unittests to it, without crashing dmd on win32.
>
> I'd have recommended splitting the unittests in sub-modules or whatnot,
> but come to think about it, I'm actually more concern that a module
> could *singlehandedly* make the compiler crash on out of memory...
>
> Also, I'm no expert, but why is my dmd limited to 1 GB memory on my 64
> bit machine...?

It's not large-address-space aware (a matter of setting the proper bit in the PE header), so it's limited to 2 GB. The other part of the problem has to do with the way the DMC runtime allocates virtual memory. Somebody fixed it, but the patch never got any recognition.

-- 
Dmitry Olshansky
December 07, 2012
On Friday, December 07, 2012 15:23:36 monarch_dodra wrote:
> In particular, when compiling "-unittest std\algorithm.d", dmd uses *nearly* 1 GB (it uses about 1,051,176K on my machine).
> 
> Problem is that when it reaches 1GB, it crashes. I have a pull request which adds a few unittests to algorithm, and it is consistently crashing on win32 with an out of memory error.
> 
> In layman's terms: std\algorithm.d is full. You literally can't add any more unittests to it, without crashing dmd on win32.
> 
> I'd have recommended splitting the unittests in sub-modules or whatnot, but come to think about it, I'm actually more concern that a module could *singlehandedly* make the compiler crash on out of memory...
> 
> Also, I'm no expert, but why is my dmd limited to 1 GB memory on my 64 bit machine...?

If you look in win32.mak, you'll see that the source files are split into separate groups (STD_1_HEAVY, STD_2_HEAVY, STD_3, STD_4, etc.). This is specifically to combat this problem. Every time that we reach the point that the compilation starts running out of memory again, we add more groups and/or rearrange them. It's suboptimal, but I don't know what else we can do at this point given dmd's limitations on 32-bit Windows.

- Jonathan M Davis
December 07, 2012
On Fri, 07 Dec 2012 08:23:06 -0800
Jonathan M Davis <jmdavisProg@gmx.com> wrote:

> On Friday, December 07, 2012 15:23:36 monarch_dodra wrote:
> > In particular, when compiling "-unittest std\algorithm.d", dmd uses *nearly* 1 GB (it uses about 1,051,176K on my machine).
> > 
> > Problem is that when it reaches 1GB, it crashes. I have a pull request which adds a few unittests to algorithm, and it is consistently crashing on win32 with an out of memory error.
> > 
> > In layman's terms: std\algorithm.d is full. You literally can't add any more unittests to it, without crashing dmd on win32.
> > 
> > I'd have recommended splitting the unittests in sub-modules or whatnot, but come to think about it, I'm actually more concern that a module could *singlehandedly* make the compiler crash on out of memory...
> > 
> > Also, I'm no expert, but why is my dmd limited to 1 GB memory on my 64 bit machine...?
> 
> If you look in win32.mak, you'll see that the source files are split into separate groups (STD_1_HEAVY, STD_2_HEAVY, STD_3, STD_4, etc.). This is specifically to combat this problem. Every time that we reach the point that the compilation starts running out of memory again, we add more groups and/or rearrange them. It's suboptimal, but I don't know what else we can do at this point given dmd's limitations on 32-bit Windows.
> 

Sooo...what's the status of fixing DMD's forever-standing memory usage issues?

My understanding is that the big issues are:

1. CTFE allocates every time a CTFE variable's value is changed.

2. The GC inside DMD is disabled because it broke things, so it never releases memory.

Is this correct? If so, what's the current status of fixes? It seems to me this would be something that should be creeping higher and higher up the priority list (if it hasn't already been doing so).
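To make issue 1 concrete: the cost model is that every assignment at CTFE materializes a fresh copy of the value, and with the collector disabled (issue 2) none of the old copies are ever freed, so memory use grows with the number of mutations rather than the size of the live data. A toy model of that behavior (Python, purely illustrative; this is not dmd's code):

```python
def ctfe_mutation_cost(iterations, value_size):
    """Toy model: each mutation copies the whole value and the old
    copy is never collected. Returns total elements 'leaked'."""
    leaked = 0
    value = [0] * value_size
    for i in range(iterations):
        value = value[:]       # fresh copy on every assignment
        value[0] = i
        leaked += value_size   # previous copy is never freed
    return leaked
```

So a CTFE loop mutating a 1000-element array ten times costs ten arrays' worth of memory, not one.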

December 07, 2012
On Friday, 7 December 2012 at 16:23:49 UTC, Jonathan M Davis wrote:
> If you look in win32.mak, you'll see that the source files are split into
> separate groups (STD_1_HEAVY, STD_2_HEAVY, STD_3, STD_4, etc.). This is
> specifically to combat this problem. Every time that we reach the point that
> the compilation starts running out of memory again, we add more groups and/or
> rearrange them. It's suboptimal, but I don't know what else we can do at this
> point given dmd's limitations on 32-bit Windows.
>
> - Jonathan M Davis

I had actually been through this before, and someone told me about that. The problem at this point is that this isn't even an option anymore, since std/algorithm.d is in a group *alone*.

On Friday, 7 December 2012 at 17:07:10 UTC, Nick Sabalausky wrote:
> [SNIP]
> Is this correct? If so, what's the current status of fixes? It seems to
> me this would be something that should be creeping higher and higher up
> the priority list (if it hasn't already been doing so).

What he said.
December 07, 2012
On Friday, December 07, 2012 18:18:43 monarch_dodra wrote:
> I had actually been through this before, and someone told me about that. The problem at this point is that this isn't even an option anymore, since std/algorithm.d is in a group *alone*.

Then I'd have two suggestions then:

1. Figure out which tests are too expensive. Either disable them or make them less expensive. If we can't test as much as we need to right now, then the less critical tests will just need to be disabled as ugly as that may be.

2. version stuff. Either specifically version some of it out on Windows (or maybe just 32-bit Windows, now that we have 64-bit Windows support), or put some of the less critical stuff in version blocks that can be explicitly enabled by someone working on Phobos.

The two should probably be combined though. Figure out which tests are problematic and version out the less critical ones on 32-bit Windows. Then you can create a bug report for those tests specifically, and anyone working on the memory problem will have something specific to run.

Much of std.datetime's tests used to have to be disabled with version blocks on Windows until the compiler was improved enough and/or the tests were adjusted enough that they could be run on Windows (adjusting the makefile probably helped as well).

Templates and CTFE are particularly expensive, so fantastic tricks like using foreach with TypeTuple can really cost a lot with dmd's current memory issues, and std.algorithm may just not be able to afford some of the better tests right now as much as that sucks.

- Jonathan M Davis
December 07, 2012
On Friday, December 07, 2012 12:07:06 Nick Sabalausky wrote:
> Sooo...what's the status of fixing DMD's forever-standing memory usage issues?
> 
> My understanding is that the big issues are:
> 
> 1. CTFE allocates every time a CTFE variable's value is changed.
> 
> 2. The GC inside DMD is disabled because it broke things, so it never releases memory.
> 
> Is this correct? If so, what's the current status of fixes? It seems to me this would be something that should be creeping higher and higher up the priority list (if it hasn't already been doing so).

The GC didn't break things per se. It just made compilation much slower, and Walter didn't have time to fix it at the time (as dmd was close to a release), so it was disabled. But someone needs to take the time to work on it and make it efficient enough to use (possibly making it so that it only kicks in once at least a certain amount of memory is in use, keeping the common case fast while making the memory-intensive cases work). And no one has done that. Walter has been busy with other stuff and has made it clear that it's likely going to need to be someone else who steps up and fixes it, so we're stuck until someone does.
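The threshold idea could look something like this in outline (an illustrative sketch of the policy, not dmd's allocator):

```python
class ThresholdGC:
    """Toy collector: never sweeps until total live bytes exceed a
    threshold, so small compiles stay allocation-only (fast) while
    memory-hungry compiles still get their memory reclaimed."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.allocated = 0
        self.collections = 0

    def allocate(self, size):
        # only pay for collection once we're actually memory-hungry
        if self.allocated + size > self.threshold:
            self.collect()
        self.allocated += size

    def collect(self):
        # a real collector would trace roots and free garbage;
        # here we just record the sweep and reset the counter
        self.collections += 1
        self.allocated = 0
```

A compile that stays under the threshold never pays for a single collection, which is exactly the common case Jonathan describes wanting to keep fast.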

As for CTFE, I don't know what the current state is. Don has plans, but I get the impression that he's too busy to get much done with them these days.

We're dealing with a problem that requires some of our key developers (or someone to put in enough time and effort to learn much of what they know) in order to get it done, so it's fallen by the wayside thus far.

- Jonathan M Davis
December 07, 2012
On Friday, December 07, 2012 19:43:50 Jonathan M Davis wrote:
> Then I'd have two suggestions then:

I really need to reread my posts more before actually posting them...

- Jonathan M Davis
December 07, 2012
On 12/7/12, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
> Then I'd have two suggestions then:

Actually there is one other way: use version specifiers and then recompile using different versions, e.g.:

// near line 1
version (StdAlgTest1)
{
    // ...lots of tests...
}

// near line 1000
version (StdAlgTest2)
{
    // ...lots of tests...
}

// near line 2000
version (StdAlgTest3)
{
    // ...lots of tests...
}

Then the makefile would have to compile algorithm.d 3 times, via something like:

$ rdmd --unittest -version=StdAlgTest1 --main std\algorithm.d
$ rdmd --unittest -version=StdAlgTest2 --main std\algorithm.d
$ rdmd --unittest -version=StdAlgTest3 --main std\algorithm.d

In fact, why aren't we taking advantage of rdmd already, instead of using a separate unittest.d file? I've always used rdmd to test my Phobos changes; it works very simply this way. All it takes to test a module is to pass the --unittest and --main flags and the module name.