December 07, 2012
On Friday, December 07, 2012 20:04:54 Andrej Mitrovic wrote:
> On 12/7/12, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
> > Then I'd have two suggestions then:
> Actually, there is one other way: use version specifiers and then recompile using different versions, e.g.:
> 
> // around line 1
> version(StdAlgTest1)
> {
>     unittest { /* ... lots of tests ... */ }
> }
> 
> // around line 1000
> version(StdAlgTest2)
> {
>     unittest { /* ... lots of tests ... */ }
> }
> 
> // around line 2000
> version(StdAlgTest3)
> {
>     unittest { /* ... lots of tests ... */ }
> }
> 
> Then the makefile would have to compile algorithm.d 3 times, via something like:
> 
> $ rdmd --unittest -version=StdAlgTest1 --main std\algorithm.d
> $ rdmd --unittest -version=StdAlgTest2 --main std\algorithm.d
> $ rdmd --unittest -version=StdAlgTest3 --main std\algorithm.d
> 
> In fact, why aren't we taking advantage of rdmd already instead of using a separate unittest.d file? I've always used rdmd to test my Phobos changes; it's very simple this way. All it takes to test a module is to pass the --unittest and --main flags and the module name.

The Windows build purposely creates one executable in order to catch stuff like circular dependencies and to give dmd a larger project to compile at once as a test of dmd itself. Clearly, we're running into issues with that due to dmd's lack of capabilities when it comes to memory. But Walter has rejected all proposals to change it, and to some extent, I think that he's right. If anything, this is just highlighting an area where dmd really needs to be improved. All of the messing around that we've done with the makefiles is just hiding the problem.
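
For illustration (hypothetical modules a and b, not actual Phobos code), the kind of circular dependency that gets caught is a module-constructor cycle: two mutually importing modules that both declare static this() make druntime throw an Error at startup of any executable that links them both, so running the single merged unittest binary flags it.

// a.d
module a;
import b;          // a depends on b...
static this() { }  // ...and has a module constructor

// b.d
module b;
import a;          // ...while b depends back on a
static this() { }  // druntime cannot order the two ctors, so the
                   // program aborts at startup with a cycle Error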

The POSIX builds do build the modules separately for unit tests, though they don't use rdmd. I would point out, though, that as it stands, it wouldn't work to use rdmd, because it lives in the tools project, not in dmd, druntime, or Phobos. Rather, it depends on them, so they can't depend on it.

- Jonathan M Davis
December 07, 2012
On Friday, 7 December 2012 at 16:23:49 UTC, Jonathan M Davis wrote:
> If you look in win32.mak, you'll see that the source files are split into
> separate groups (STD_1_HEAVY, STD_2_HEAVY, STD_3, STD_4, etc.). This is
> specifically to combat this problem. Every time that we reach the point that
> the compilation starts running out of memory again, we add more groups and/or
> rearrange them. It's suboptimal, but I don't know what else we can do at this
> point given dmd's limitations on 32-bit Windows.
>
> - Jonathan M Davis

I don't know? Maybe disabling the GC because it slowed down dmd wasn't a good idea after all.

Who cares about a fast compiler if it crashes?

It does crash! Yes, but at least it is fast!
December 07, 2012
On 12/7/12, deadalnix <deadalnix@gmail.com> wrote:
> It does crash ! Yes but at least, it is fast !

Except it's not fast. It's slow and it crashes. It's a Yugo.
December 07, 2012
On Friday, December 07, 2012 21:21:17 deadalnix wrote:
> On Friday, 7 December 2012 at 16:23:49 UTC, Jonathan M Davis
> 
> wrote:
> > If you look in win32.mak, you'll see that the source files are split into
> > separate groups (STD_1_HEAVY, STD_2_HEAVY, STD_3, STD_4, etc.). This is
> > specifically to combat this problem. Every time that we reach the point that
> > the compilation starts running out of memory again, we add more groups and/or
> > rearrange them. It's suboptimal, but I don't know what else we can do at this
> > point given dmd's limitations on 32-bit Windows.
> > 
> > - Jonathan M Davis
> 
> I don't know? Maybe disabling the GC because it slowed down dmd wasn't a good idea after all.
> 
> Who cares about a fast compiler if it crashes?
> 
> It does crash! Yes, but at least it is fast!

Most programs compile just fine as things are, and Walter cares a _lot_ about speed of compilation, so doing something that harms the common case in favor of a less common one that doesn't even work right now didn't seem like a good idea. But really, what it comes down to is that it was an experimental feature that clearly had problems, so it was temporarily disabled until it could be sorted out. All that means is that things were left exactly as they were rather than introducing a new element that could have caused problems. Further investigation and work _does_ need to be done, but without proper testing and further work being done on it, it probably _isn't_ a good idea to enable it. As with many things around here, the trick is that someone needs to spend time working on the problem, and no one has done so yet.

- Jonathan M Davis
December 07, 2012
On 12/7/12 1:43 PM, Jonathan M Davis wrote:
> The GC didn't break things per se. It just made compilation much slower,
> and Walter didn't have time to fix it at the time (as dmd was close to a
> release), so it was disabled. But someone needs to take the time to work on it
> and make it efficient enough to use (possibly doing stuff like making it so that
> it only kicks in once at least a certain amount of memory is used, to keep the
> common case fast but make the memory-intensive cases work). And no one has
> done that.

I suggested this several times: work the GC so it only intervenes if the consumed memory would otherwise be prohibitively large. That way there's never a collection during normal compilation.
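
A minimal sketch of that policy (hypothetical names throughout, not dmd's actual allocator): allocation stays a plain never-free bump below some budget, so typical compiles never pay for a collection.

__gshared size_t allocatedBytes;
enum size_t memoryBudget = 1024UL * 1024 * 1024;  // e.g. ~1 GiB on 32-bit Windows

void* gcMalloc(size_t size)
{
    // Only intervene when consumption would otherwise become prohibitive
    // (swapping or out-of-memory); below the budget the GC does nothing.
    if (allocatedBytes + size > memoryBudget)
        fullCollect();          // hypothetical mark/sweep over the pools;
                                // lowers allocatedBytes by what it frees
    allocatedBytes += size;
    return rawAlloc(size);      // hypothetical existing bump allocator
}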

Andrei
December 07, 2012
On Friday, 7 December 2012 at 22:30:59 UTC, Andrei Alexandrescu wrote:
> On 12/7/12 1:43 PM, Jonathan M Davis wrote:
>> The GC didn't break things per se. It just made compilation much slower,
>> and Walter didn't have time to fix it at the time (as dmd was close to a
>> release), so it was disabled. But someone needs to take the time to work on it
>> and make it efficient enough to use (possibly doing stuff like making it so that
>> it only kicks in once at least a certain amount of memory is used, to keep the
>> common case fast but make the memory-intensive cases work). And no one has
>> done that.
>
> I suggested this several times: work the GC so it only intervenes if the consumed memory would otherwise be prohibitively large. That way there's never a collection during normal compilation.
>
> Andrei

Nobody told you that the GC was THAT SLOW, that it was even slower than swapping? You really know nothing about optimization, do you?
December 07, 2012
On 12/7/12 5:37 PM, deadalnix wrote:
> On Friday, 7 December 2012 at 22:30:59 UTC, Andrei Alexandrescu wrote:
>> On 12/7/12 1:43 PM, Jonathan M Davis wrote:
>>> The GC didn't break things per se. It just made compilation much slower,
>>> and Walter didn't have time to fix it at the time (as dmd was close to a
>>> release), so it was disabled. But someone needs to take the time to work on
>>> it and make it efficient enough to use (possibly doing stuff like making it
>>> so that it only kicks in once at least a certain amount of memory is used,
>>> to keep the common case fast but make the memory-intensive cases work). And
>>> no one has done that.
>>
>> I suggested this several times: work the GC so it only intervenes if
>> the consumed memory would otherwise be prohibitively large. That way
>> there's never a collection during normal compilation.
>>
>> Andrei
>
> Nobody told you that the GC was THAT SLOW, that it was even slower than
> swapping? You really know nothing about optimization, do you?

This is not even remotely appropriate. Where did it come from?

Andrei
December 07, 2012
deadalnix:

> Nobody told you that the GC was THAT SLOW, that it was even slower than swapping? You really know nothing about optimization, do you?

Please be gentle in the forums, even with a person as strong as Andrei. Thank you.

Bye,
bearophile
December 07, 2012
On Friday, 7 December 2012 at 22:39:09 UTC, Andrei Alexandrescu wrote:
>> Nobody told you that the GC was THAT SLOW, that it was even slower than
>> swapping? You really know nothing about optimization, do you?
>
> This is not even remotely appropriate. Where did it come from?
>

Sorry, I was trying to be ironic. People keep saying that the GC was disabled because it was too slow, but it is pretty clear that, however slow it is, it is not slower than swapping.

I am more and more irritated by the fact that we justify everything here because it has to be fast or whatever, and in the end we have unreliable software that isn't even that fast in many cases.

I'm working on a program that now requires more than 2.5 GB of RAM to compile, where separate compilation is not possible due to bug 8997, and that randomly fails to compile due to bug 8596. It is NOT fast, and that insane memory consumption is a major cause of the slowness.

make; make; make; make; make is the new make.
December 07, 2012
On Friday, 7 December 2012 at 22:42:16 UTC, bearophile wrote:
> deadalnix:
>
>> Nobody told you that the GC was THAT SLOW, that it was even slower than swapping? You really know nothing about optimization, do you?
>
> Please be gentle in the forums, even with a person as strong as Andrei. Thank you.
>
> Bye,
> bearophile

I never meant to say that Andrei was wrong; quite the opposite. I made my point poorly and I apologize for that. It seemed obvious to me that the GC couldn't be slower than swapping and that nobody would take it seriously.