Thread overview
Compile-time memory footprint of std.algorithm
Apr 22, 2014
Iain Buclaw
Apr 22, 2014
H. S. Teoh
Apr 22, 2014
Peter Alexander
Apr 22, 2014
Iain Buclaw
Apr 23, 2014
Dmitry Olshansky
Apr 23, 2014
Walter Bright
Apr 23, 2014
Dmitry Olshansky
Apr 23, 2014
Walter Bright
Apr 23, 2014
Kagamin
Apr 23, 2014
Walter Bright
Apr 23, 2014
Dmitry Olshansky
Apr 23, 2014
Walter Bright
Apr 23, 2014
Dmitry Olshansky
Apr 23, 2014
Steve Teale
Apr 23, 2014
Nordlöw
Apr 23, 2014
Peter Alexander
Apr 23, 2014
Nordlöw
Apr 23, 2014
Nordlöw
Apr 23, 2014
Iain Buclaw
Apr 24, 2014
Iain Buclaw
Apr 23, 2014
Jussi Jumppanen
Apr 23, 2014
Brian Schott
Apr 24, 2014
Kagamin
Apr 23, 2014
Messenger
Apr 24, 2014
Jacob Carlborg
Apr 24, 2014
Ary Borenszweig
Apr 24, 2014
Iain Buclaw
Apr 26, 2014
Walter Bright
Apr 24, 2014
monarch_dodra
Apr 23, 2014
Daniel Murphy
Apr 23, 2014
Dmitry Olshansky
Apr 24, 2014
Marco Leise
Apr 26, 2014
Dmitry Olshansky
Apr 23, 2014
Ary Borenszweig
Jun 21, 2014
Iain Buclaw
Jun 21, 2014
H. S. Teoh
April 22, 2014
Testing a 2.065 pre-release snapshot against GDC, I see that std.algorithm now surpasses 2.1GB of memory consumption when compiling unittests. This is bringing my laptop to its knees for a painful 2-3 minutes.

This is time that could be better spent if the unittests were simply broken down/split up.
April 22, 2014
On Tue, Apr 22, 2014 at 06:09:11PM +0000, Iain Buclaw via Digitalmars-d wrote:
> Testing a 2.065 pre-release snapshot against GDC, I see that std.algorithm now surpasses 2.1GB of memory consumption when compiling unittests. This is bringing my laptop to its knees for a painful 2-3 minutes.
> 
> This is time that could be better spent if the unittests were simply broken down/split up.

Didn't we say (many months ago!) that we wanted to split up
std.algorithm into more manageable chunks? I see that that hasn't
happened yet. :-(
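
For reference, the mechanical part of such a split is small: turn std.algorithm into a package whose package.d publicly imports the pieces, so existing imports keep working while each piece compiles (and runs its unittests) on its own. A rough sketch - the submodule names below are only illustrative, not an agreed layout:

// std/algorithm/package.d -- illustrative sketch, hypothetical submodule names
module std.algorithm;

public import std.algorithm.comparison;
public import std.algorithm.iteration;
public import std.algorithm.searching;
public import std.algorithm.sorting;

// std/algorithm/searching.d (and the others) then hold their own code and
// their own unittest blocks, which can be built and run in isolation, e.g.
//   dmd -unittest -main std/algorithm/searching.d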


T

-- 
"Real programmers can write assembly code in any language. :-)" -- Larry Wall
April 22, 2014
On Tuesday, 22 April 2014 at 18:09:12 UTC, Iain Buclaw wrote:
> Testing a 2.065 pre-release snapshot against GDC, I see that std.algorithm now surpasses 2.1GB of memory consumption when compiling unittests. This is bringing my laptop to its knees for a painful 2-3 minutes.

My (ancient) laptop only has 2GB of RAM :-)

Has anyone looked into why it is using so much? Is it all the temporary allocations created by CTFE that are never cleaned up?
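
If so, something as innocent as the following would account for a lot of it - purely illustrative, not taken from Phobos:

import std.conv : to;

// During CTFE every ~ and ~= below allocates a fresh buffer, and (as far
// as I know) the interpreter never frees any of those temporaries while
// the compilation is running.
string makeTable(size_t n)
{
    string s;
    foreach (i; 0 .. n)
        s ~= "case " ~ to!string(i) ~ ": break;\n";
    return s;
}

enum table = makeTable(1_000);  // forces compile-time evaluation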
April 22, 2014
On 22 April 2014 21:43, Peter Alexander via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Tuesday, 22 April 2014 at 18:09:12 UTC, Iain Buclaw wrote:
>>
>> Testing a 2.065 pre-release snapshot against GDC, I see that std.algorithm now surpasses 2.1GB of memory consumption when compiling unittests. This is bringing my laptop to its knees for a painful 2-3 minutes.
>
>
> My (ancient) laptop only has 2GB of RAM :-)
>
> Has anyone looked into why it is using so much? Is it all the temporary allocations created by CTFE that are never cleaned up?

I blame Kenji and all the semanticTiargs and other template-related copying and discarding of memory around the place. :o)
April 23, 2014
23-Apr-2014 01:00, Iain Buclaw via Digitalmars-d wrote:
> On 22 April 2014 21:43, Peter Alexander via Digitalmars-d
> <digitalmars-d@puremagic.com> wrote:
>> On Tuesday, 22 April 2014 at 18:09:12 UTC, Iain Buclaw wrote:
>>>
>>> Testing a 2.065 pre-release snapshot against GDC, I see that std.algorithm
>>> now surpasses 2.1GB of memory consumption when compiling unittests. This is
>>> bringing my laptop to its knees for a painful 2-3 minutes.
>>
>>
>> My (ancient) laptop only has 2GB of RAM :-)
>>
>> Has anyone looked into why it is using so much? Is it all the temporary
>> allocations created by CTFE that are never cleaned up?
>
> I blame Kenji and all the semanticTiargs and other template-related
> copying and discarding of memory around the place. :o)
>

At times I really don't know why we can't just drop in a Boehm GC (the stock one, not homebrew stuff) and be done with it. Speed? There is no point in speed if it leaks that much.
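
Concretely, "dropping it in" would mean little more than routing the compiler's allocation entry point through libgc and never freeing anything. A minimal sketch (in D for brevity, hand-written bindings to the stock Boehm API, link against libgc):

// Hand-written extern(C) bindings to the stock Boehm collector.
extern (C) nothrow
{
    void  GC_init();
    void* GC_malloc(size_t size);
    void* GC_malloc_atomic(size_t size);  // for blocks containing no pointers
}

// Hypothetical stand-in for the compiler's allocation routine: everything
// goes through the collector, and nothing is ever freed explicitly.
void* xmalloc(size_t size)
{
    return GC_malloc(size);
}

void main()
{
    GC_init();
    auto p = cast(int*) xmalloc(64 * int.sizeof);
    p[0] = 42;
    // No explicit free: unreachable blocks get reclaimed by the collector.
}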

-- 
Dmitry Olshansky
April 23, 2014
On 4/22/2014 11:33 PM, Dmitry Olshansky wrote:
> At times I really don't know why we can't just drop in a Boehm GC (the stock
> one, not homebrew stuff) and be done with it. Speed? There is no point in speed
> if it leaks that much.

I made a build of dmd with a collector in it. It destroyed the speed. Took it out.

April 23, 2014
23-Apr-2014 10:39, Walter Bright wrote:
> On 4/22/2014 11:33 PM, Dmitry Olshansky wrote:
>> At times I really don't know why we can't just drop in a Boehm GC (the
>> stock one, not homebrew stuff) and be done with it. Speed? There is no
>> point in speed if it leaks that much.
>
> I made a build of dmd with a collector in it. It destroyed the speed. Took it out.

Getting more practical - any chance to use it selectively in CTFE and related stuff that is KNOWN to generate garbage?

-- 
Dmitry Olshansky
April 23, 2014
On Wednesday, 23 April 2014 at 06:39:04 UTC, Walter Bright wrote:
> I made a build of dmd with a collector in it. It destroyed the speed. Took it out.

Is it because of garbage collections? Then allow people to configure the collection threshold, say, collect garbage only when the heap is bigger than 16GB.
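
Something like this, as a pure policy sketch with made-up names and numbers:

// Hypothetical allocator wrapper: allocation stays cheap until the heap
// crosses a configurable threshold, and only then does a collection run.
struct ThresholdPolicy
{
    ulong heapBytes;                 // bytes handed out so far
    ulong threshold = 16UL << 30;    // e.g. 16GB, user-configurable

    void* allocate(size_t size)
    {
        if (heapBytes + size > threshold)
            collect();               // only once the heap is genuinely large
        heapBytes += size;
        return rawAllocate(size);
    }

    void collect()
    {
        // mark & sweep here; subtract whatever was reclaimed from heapBytes
    }

    void* rawAllocate(size_t size)
    {
        return (new ubyte[size]).ptr;  // stand-in for the real allocator
    }
}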
April 23, 2014
On 4/22/2014 11:56 PM, Dmitry Olshansky wrote:
> Getting more practical - any chance to use it selectively in CTFE and related
> stuff that is KNOWN to generate garbage?

Using it only there will require a rewrite of interpret.c.

April 23, 2014
On 4/23/2014 12:20 AM, Kagamin wrote:
> On Wednesday, 23 April 2014 at 06:39:04 UTC, Walter Bright wrote:
>> I made a build of dmd with a collector in it. It destroyed the speed. Took it
>> out.
>
> Is it because of garbage collections? Then allow people to configure the
> collection threshold, say, collect garbage only when the heap is bigger than 16GB.

It's more than that. I invite you to read the article I wrote on Dr. Dobb's a while back about the changes to the allocator to improve speed.

tl;dr: allocation is a critical speed issue with dmd. Using the bump-pointer method is very fast, and it matters.
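
For anyone who hasn't read it, the technique is roughly this (a from-memory sketch, not dmd's actual source):

// Bump-pointer allocation: carve requests out of big chunks, keep no
// per-allocation headers, and never free individual objects.
struct BumpAllocator
{
    enum chunkSize = 1 << 20;        // 1MB chunks, size picked for illustration
    ubyte[][] chunks;                // keep every chunk reachable
    size_t used;                     // bytes used in the newest chunk

    void* allocate(size_t size)
    {
        size = (size + 15) / 16 * 16;    // round up to keep 16-byte alignment
        if (chunks.length == 0 || chunks[$ - 1].length - used < size)
        {
            chunks ~= new ubyte[size > chunkSize ? size : chunkSize];
            used = 0;
        }
        auto p = chunks[$ - 1].ptr + used;
        used += size;                    // freeing is a no-op; memory is never reused
        return p;
    }
}

unittest
{
    BumpAllocator a;
    auto x = cast(int*) a.allocate(int.sizeof);
    *x = 42;
    assert(*x == 42);
}

The hot path is just an add and a compare, which is hard for any collector to match.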