March 31, 2015
On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:

> Has anyone looked at how MSVC, for example, compiles really big files? I've never seen it go over 200 MB. And it's written in C++, so no GC. And it compiles very quickly.

and it has no CTFE, so...

CTFE is a big black hole that eats memory like crazy.

March 31, 2015
"Jacob Carlborg"  wrote in message news:mfe0dm$2i6l$1@digitalmars.com... 

> Doesn't DMD already have a GC that is disabled?

It did once, but it's been gone for a while now.
March 31, 2015
On Tuesday, 31 March 2015 at 11:29:23 UTC, ketmar wrote:
> On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:
>
>> Has anyone looked at how MSVC, for example, compiles really big files?
>> I've never seen it go over 200 MB. And it's written in C++, so no GC.
>> And it compiles very quickly.
>
> and it has no CTFE, so...
>
> CTFE is a big black hole that eats memory like crazy.

I'm going to propose the same thing as I have in the past:
 - before CTFE, switch to a new pool.
 - run CTFE in the new pool.
 - deep-copy the result from the CTFE pool to the main pool.
 - ditch the CTFE pool.
March 31, 2015
I don't use CTFE in my game engine, for instance, and DMD still uses about 600 MB of memory per file.
March 31, 2015
On 03/31/2015 05:51 AM, deadalnix wrote:
> Yes, compilers tend to perform significantly better with a GC than with other memory management strategies. Ironically, I think that weighed a bit too much in favor of GC in language design for the general case.

Why? Compilers use a lot of long-lived data structures (AST, metadata)
which is particularly bad for a conservative GC.
Any evidence to the contrary?
March 31, 2015
On Monday, 30 March 2015 at 22:47:51 UTC, lobo wrote:
> On Monday, 30 March 2015 at 22:39:51 UTC, lobo wrote:
>> On Sunday, 29 March 2015 at 23:14:31 UTC, Martin Krejcirik wrote:
>>> It seems like every DMD release makes compilation slower. This time I see 10.8s vs 7.8s on my little project. I know this is generally the least of our concerns, and D1's lightning-fast times are long gone, but since Walter often claims D's superior compilation speeds, maybe some profiling is in order?
>>
>> I'm finding memory usage the biggest problem for me. A 3s slowdown is not nice, but an increase of 500 MB RAM usage with DMD 2.067 over 2.066 means I can no longer build one of my projects.
>>
>> bye,
>> lobo
>
> I should add that I am on a 32-bit machine with 4GB RAM. I just ran some tests measuring RAM usage:
>
> DMD 2.067 ~4.2GB (fails here so not sure of the full amount required)
> DMD 2.066 ~3.7GB (maximum)
> DMD 2.065 ~3.1GB (maximum)
>
> It was right on the edge with 2.066 anyway, but this trend of increasing RAM usage seems to continue with each DMD release.
>
> bye,
> lobo

As far as memory is concerned, how hard would it be to simply have DMD use a swap file? This would fix the out-of-memory issues and provide some safety (at least you could get your project to compile). It seems like it would be a relatively simple thing to add?
March 31, 2015
On Tuesday, 31 March 2015 at 19:20:20 UTC, Jake The Baker wrote:
> As far as memory is concerned, how hard would it be to simply have DMD use a swap file?

That'd hit at least the same walls as the operating system trying to use a swap file: running out of address space, and being brutally slow even if it does keep running.
March 31, 2015
On Tuesday, 31 March 2015 at 19:19:23 UTC, Martin Nowak wrote:
> On 03/31/2015 05:51 AM, deadalnix wrote:
>> Yes, compilers tend to perform significantly better with a GC than with
>> other memory management strategies. Ironically, I think that weighed a
>> bit too much in favor of GC in language design for the general case.
>
> Why? Compilers use a lot of long-lived data structures (AST, metadata)
> which is particularly bad for a conservative GC.
> Any evidence to the contrary?

The graph is not acyclic, which makes it even worse for anything else: reference counting, for instance, can never reclaim the cycles.
March 31, 2015
On Tue, 31 Mar 2015 18:24:48 +0000, deadalnix wrote:

> On Tuesday, 31 March 2015 at 11:29:23 UTC, ketmar wrote:
>> On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:
>>
>>> Has anyone looked at how MSVC, for example, compiles really big
>>> files? I've never seen it go over 200 MB. And it's written in C++,
>>> so no GC. And it compiles very quickly.
>>
>> and it has no CTFE, so...
>>
>> CTFE is a big black hole that eats memory like crazy.
> 
> I'm going to propose the same thing as I have in the past:
>   - before CTFE, switch to a new pool.
>   - run CTFE in the new pool.
>   - deep-copy the result from the CTFE pool to the main pool.
>   - ditch the CTFE pool.

this won't really help long CTFE calls (like building a parser from a grammar, for example, as that is one very long call). it will slow down simple CTFE calls, though.

it *may* help, but i'm looking at my "life" sample, for example, and i see that it eats all my RAM while parsing a big .lif file. it has to do that in one call, as there is no way to enumerate the existing files in a directory and process them sequentially -- there is no way to store state between CTFE calls, so i can't even create numbered arrays with the data.

March 31, 2015
On Tuesday, 31 March 2015 at 18:24:49 UTC, deadalnix wrote:
> On Tuesday, 31 March 2015 at 11:29:23 UTC, ketmar wrote:
>> On Tue, 31 Mar 2015 10:14:05 +0000, Temtaime wrote:
>>
>>> Has anyone looked at how MSVC, for example, compiles really big files?
>>> I've never seen it go over 200 MB. And it's written in C++, so no GC.
>>> And it compiles very quickly.
>>
>> and it has no CTFE, so...
>>
>> CTFE is a big black hole that eats memory like crazy.
>
> I'm going to propose the same thing as I have in the past:
>  - before CTFE, switch to a new pool.
>  - run CTFE in the new pool.
>  - deep-copy the result from the CTFE pool to the main pool.
>  - ditch the CTFE pool.

Wait, you mean DMD doesn't already do something like that? Yikes. I had always assumed (without looking) that CTFE used a separate heap that was chucked after each call.