March 18, 2009
Re: new D2.0 + C++ language
Weed wrote:
> bearophile writes:
>> Weed:
>>> I want to offer a dialect of the D2.0 language, suitable for use where
>>> C/C++ is used now. The main goal is a language like D that also
>>> follows the "zero-overhead principle" of C++:
>>> ...
>>> Code in this language is almost as dangerous as C++ code - a
>>> necessary cost of the increased performance.
>> No, thanks...
>>
>> And regarding performance, a lot of it will eventually come from good
>> use of multiprocessing,
> 
> The proposal will be able to support multiprocessing - it provides
> reference counting in the debug version of binaries. If you know a
> better way for a language *without GC* to guarantee the existence of an
> object without overhead - I am ready to listen!

You cannot alter the reference count of an immutable variable.
March 19, 2009
Re: new D2.0 + C++ language
On Wed, 18 Mar 2009 13:48:55 -0400, Craig Black <cblack@ara.com> wrote:

> bearophile Wrote:
>
>> Weed:
>> > I want to offer a dialect of the D2.0 language, suitable for use
>> > where C/C++ is used now. The main goal is a language like D that
>> > also follows the "zero-overhead principle" of C++:
>> > ...
>> > Code in this language is almost as dangerous as C++ code - a
>> > necessary cost of the increased performance.
>>
>> No, thanks...
>>
>> And regarding performance, a lot of it will eventually come from good
>> use of multiprocessing, which in real-world programs may need pure
>> functions and immutable data. D2 already has those, while C++ is less
>> lucky.
>>
>> Bye,
>> bearophile
>
> Multiprocessing can only improve performance for tasks that can run in  
> parallel.  So far, every attempt to do this with GC (that I know of) has  
> ended up slower, not faster.  Bottom line, if GC is the bottleneck, more  
> CPUs won't help.
>
> For applications where GC performance is unacceptable, we either need a  
> radically new way to do GC faster, rely less on the GC, or drop GC  
> altogether.
>
> However, in D, we can't get rid of the GC altogether, since the compiler  
> relies on it.  But we can use explicit memory management where it makes  
> sense to do so.
>
> -Craig

*Sigh*, you do know people run clustered & multi-threaded Java apps all the
time, right? I'd recommend reading about concurrent GCs:
http://en.wikipedia.org/wiki/Garbage_collection_(computer_science)#Stop-the-world_vs._incremental_vs._concurrent.  
By the way, traditional malloc has rather horrible multi-threaded  
performance as 1) it creates lots of kernel calls and 2) requires a global  
lock on access. Yes, there are several alternatives available now, but the  
same techniques work for enabling multi-threaded GCs. D's shared/local  
model should support thread local heaps, which would improve all of the  
above.
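The thread-local-heap idea is easy to sketch in C++ (hypothetical names; a toy stand-in for what D's shared/local model would enable): the common allocation path touches only thread-local state and takes no global lock, while only the refill path goes to the shared heap.

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <vector>

// Hypothetical sketch of a thread-local allocation cache. The fast path
// (reusing a freed block) touches only this thread's state: no lock.
static std::mutex g_heap_mutex; // guards the shared slow path

struct ThreadCache {
    std::vector<void*> free_list; // blocks owned by this thread only

    void* allocate(std::size_t size) {
        if (!free_list.empty()) {          // fast path: no lock taken
            void* p = free_list.back();
            free_list.pop_back();
            return p;
        }
        // Slow path: fall back to the shared heap under a lock (a real
        // allocator would grab a whole run of blocks here, not one).
        std::lock_guard<std::mutex> lock(g_heap_mutex);
        return ::operator new(size);
    }

    void deallocate(void* p) {
        free_list.push_back(p); // back to the local cache, still no lock
    }
};

thread_local ThreadCache t_cache;
```

Scalable allocators (and thread-local GC heaps) follow this shape: contention is paid only when a thread's local cache runs dry.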
March 19, 2009
Re: new D2.0 + C++ language
Christopher Wright writes:
> Weed wrote:
>> bearophile writes:
>>> Weed:
>>>> I want to offer a dialect of the D2.0 language, suitable for use
>>>> where C/C++ is used now. The main goal is a language like D that
>>>> also follows the "zero-overhead principle" of C++:
>>>> ...
>>>> Code in this language is almost as dangerous as C++ code - a
>>>> necessary cost of the increased performance.
>>> No, thanks...
>>>
>>> And regarding performance, a lot of it will eventually come from good
>>> use of multiprocessing,
>>
>> The proposal will be able to support multiprocessing - it provides
>> reference counting in the debug version of binaries. If you know a
>> better way for a language *without GC* to guarantee the existence of an
>> object without overhead - I am ready to listen!
> 
> You cannot alter the reference count of an immutable variable.

Why?
March 19, 2009
Re: new D2.0 + C++ language
Weed wrote:
> Christopher Wright writes:
>> Weed wrote:
>>> bearophile writes:
>>>> Weed:
>>>>> I want to offer a dialect of the D2.0 language, suitable for use
>>>>> where C/C++ is used now. The main goal is a language like D that
>>>>> also follows the "zero-overhead principle" of C++:
>>>>> ...
>>>>> Code in this language is almost as dangerous as C++ code - a
>>>>> necessary cost of the increased performance.
>>>> No, thanks...
>>>>
>>>> And regarding performance, a lot of it will eventually come from good
>>>> use of multiprocessing,
>>> The proposal will be able to support multiprocessing - it provides
>>> reference counting in the debug version of binaries. If you know a
>>> better way for a language *without GC* to guarantee the existence of an
>>> object without overhead - I am ready to listen!
>> You cannot alter the reference count of an immutable variable.
> 
> Why?

Because it's immutable!

Unless you're storing a dictionary of objects to reference counts 
somewhere, that is. Which would be hideously slow and a pain to use. Not 
that reference counting is fun.
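The cost Christopher describes can be made concrete in C++ terms (a hypothetical sketch; the thread is about D, but the mechanics are identical): an immutable object cannot carry a mutable count, so the count is forced into a side table, and every retain/release pays a hash lookup.

```cpp
#include <cassert>
#include <unordered_map>

// Hypothetical sketch: the object itself is immutable, so its reference
// count cannot live inside it -- it has to live in a side table.
struct Immutable {
    const int value; // no mutable count field allowed
};

// The "dictionary of objects to reference counts" being criticized above:
// every retain/release pays a hash lookup (and, in real code, a lock).
std::unordered_map<const Immutable*, int> ref_counts;

void retain(const Immutable* p) { ++ref_counts[p]; }

void release(const Immutable* p) {
    if (--ref_counts[p] == 0) {
        ref_counts.erase(p); // drop the table entry...
        delete p;            // ...then free the object itself
    }
}
```

Compared with an intrusive count (one pointer dereference plus an increment), every operation here adds hashing, a table probe, and extra cache traffic.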
March 19, 2009
Re: new D2.0 + C++ language
Christopher Wright writes:


>>>>> And regarding performance, a lot of it will eventually come from good
>>>>> use of multiprocessing,
>>>> The proposal will be able to support multiprocessing - it provides
>>>> reference counting in the debug version of binaries. If you know a
>>>> better way for a language *without GC* to guarantee the existence of an
>>>> object without overhead - I am ready to listen!
>>> You cannot alter the reference count of an immutable variable.
>>
>> Why?
> 
> Because it's immutable!
> 
> Unless you're storing a dictionary of objects to reference counts
> somewhere, that is. Which would be hideously slow and a pain to use. Not
> that reference counting is fun.


Precisely. I stated the cost of that: 1 dereference + an inc/dec of the
counter.

OK, reference counting can be disabled by an option. In any case, it is
not included in release code - it is only for -debug!
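What "only for -debug" could look like, sketched in C++ with the preprocessor standing in for D's -debug switch (all names hypothetical): the count field and its updates exist only in debug builds, so release binaries carry no trace of them.

```cpp
#include <cassert>

// Hypothetical sketch: reference counting compiled in only for debug
// builds, analogous to D's -debug blocks. Release builds pay zero cost.
#ifndef NDEBUG
#define DEBUG_REFCOUNT 1
#endif

struct Tracked {
#if DEBUG_REFCOUNT
    int refs = 0; // exists only in debug builds
#endif
};

inline void retain(Tracked& t) {
#if DEBUG_REFCOUNT
    ++t.refs; // 1 dereference + an increment, as described above
#endif
}

inline void release(Tracked& t) {
#if DEBUG_REFCOUNT
    assert(t.refs > 0 && "released an object nobody holds");
    --t.refs;
#endif
}
```

With NDEBUG defined, `retain` and `release` compile to nothing and `Tracked` shrinks back to an empty struct, so the release binary matches the no-counting design.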
March 19, 2009
Re: new D2.0 + C++ language
Robert Jacques writes:

>>
>> Multiprocessing can only improve performance for tasks that can run in
>> parallel.  So far, every attempt to do this with GC (that I know of)
>> has ended up slower, not faster.  Bottom line, if GC is the
>> bottleneck, more CPUs won't help.
>>
>> For applications where GC performance is unacceptable, we either need
>> a radically new way to do GC faster, rely less on the GC, or drop GC
>> altogether.
>>
>> However, in D, we can't get rid of the GC altogether, since the
>> compiler relies on it.  But we can use explicit memory management
>> where it makes sense to do so.
>>
>> -Craig
> 
> *Sigh*, you do know people run clustered & multi-threaded Java apps all
> the time, right? I'd recommend reading about concurrent GCs:
> http://en.wikipedia.org/wiki/Garbage_collection_(computer_science)#Stop-the-world_vs._incremental_vs._concurrent.
> By the way, traditional malloc has rather horrible multi-threaded
> performance as 1) it creates lots of kernel calls

D2.0 with GC also creates lots of kernel calls!

> and 2) requires a
> global lock on access.

What does?

> Yes, there are several alternatives available
> now, but the same techniques work for enabling multi-threaded GCs. D's
> shared/local model should support thread local heaps, which would
> improve all of the above.

That does not rule out pre-creating objects, or reserving memory for
them in advance. (This is what the GC does, but a programmer would do
it better.)
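Pre-creating objects and reserving memory in advance can be sketched as a fixed pool (hypothetical C++ names, not part of any concrete proposal): one allocation up front, then acquire/release with no allocator or kernel calls at all.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of "pre-create the objects / reserve memory in
// advance": a fixed pool built once, handed out and returned with no
// further allocation in the steady state.
struct Particle { float x = 0, y = 0; };

class Pool {
    std::vector<Particle> storage;   // the single up-front allocation
    std::vector<Particle*> free_list;
public:
    explicit Pool(std::size_t n) : storage(n) {
        free_list.reserve(n);
        for (auto& p : storage) free_list.push_back(&p);
    }
    Particle* acquire() {
        if (free_list.empty()) return nullptr; // pool exhausted: no hidden fallback
        Particle* p = free_list.back();
        free_list.pop_back();
        return p;
    }
    void release(Particle* p) { free_list.push_back(p); }
};
```

This is the manual counterpart of what a GC's nursery does automatically; the programmer's advantage is knowing the exact object count and lifetime up front.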
March 19, 2009
Re: new D2.0 + C++ language
BCS writes:
> Reply to Weed,
> 
>> If you know the
>> best way for language *without GC* guaranteeing the existence of an
>> object without overhead - I have to listen!
>>
> 
> Never delete anything?
> 
> One of the arguments for GC is that it might well have /less/ overhead
> than any other practical way of managing dynamic memory.

Mmm.
When I say "overhead" I mean the cost of execution, not the cost of
programming.

> Yes you can be
> very careful in keeping track of pointers (not practical) or use smart
> pointers and such (might end up costing more than GC)

I do not agree: a GC overspends either CPU or memory. Typically, both.


> but neither is
> particularly nice.
March 19, 2009
Re: new D2.0 + C++ language
On Thu, 19 Mar 2009 07:32:18 -0400, Weed <resume755@mail.ru> wrote:

> Robert Jacques writes:
>
>>>
>>> Multiprocessing can only improve performance for tasks that can run in
>>> parallel.  So far, every attempt to do this with GC (that I know of)
>>> has ended up slower, not faster.  Bottom line, if GC is the
>>> bottleneck, more CPUs won't help.
>>>
>>> For applications where GC performance is unacceptable, we either need
>>> a radically new way to do GC faster, rely less on the GC, or drop GC
>>> altogether.
>>>
>>> However, in D, we can't get rid of the GC altogether, since the
>>> compiler relies on it.  But we can use explicit memory management
>>> where it makes sense to do so.
>>>
>>> -Craig
>>
>> *Sigh*, you do know people run clustered & multi-threaded Java apps all
>> the time, right? I'd recommend reading about concurrent GCs:
>> http://en.wikipedia.org/wiki/Garbage_collection_(computer_science)#Stop-the-world_vs._incremental_vs._concurrent.
>> By the way, traditional malloc has rather horrible multi-threaded
>> performance as 1) it creates lots of kernel calls
>
> D2.0 with GC also creates lots of kernel calls!

*sigh* All memory allocation must make some kernel calls. D's GC makes
fewer calls than a traditional malloc. Actually, modern malloc
replacements imitate the way GCs allocate memory, since it's a lot faster.
(Intel's Threading Building Blocks mentions this as part of its marketing
and performance numbers, so modern mallocs are probably not that common.)

>
>> and 2) requires a
>> global lock on access.
>
> Who?

Traditional malloc requires taking a global lock (and, as in point 1,
often a kernel lock; again, fixing this issue is one of Intel's TBB's
marketing/performance points).

>> Yes, there are several alternatives available
>> now, but the same techniques work for enabling multi-threaded GCs. D's
>> shared/local model should support thread local heaps, which would
>> improve all of the above.
>
> That does not rule out pre-creating objects, or reserving memory for
> them in advance. (This is what the GC does, but a programmer would do
> it better.)

I think the point you're trying to make is that a GC is more memory  
intensive. Actually, since fast modern mallocs and GC share the same  
underlying allocation techniques, they have about the same memory usage,  
etc. Of course, a traditional malloc with aggressive manual control can  
often return memory to the kernel in a timely manner, so a program's  
memory allocation better tracks actual usage as opposed to the maximum.  
Doing so is very performance intensive, and GCs can return memory to the
system too (Tango's does, if I remember correctly).
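The shared underlying technique is, roughly, bump-pointer allocation over a region obtained from the kernel once: copying GCs allocate this way, and modern mallocs approximate it with per-thread, per-size-class runs. A minimal hypothetical sketch:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch of bump-pointer allocation: one pointer advance
// per allocation, no free lists, no locks -- the region is obtained
// from the kernel once. Individual frees are not supported; the whole
// region is released (or collected) at once.
class BumpRegion {
    char* base;
    std::size_t used = 0;
    std::size_t capacity;
public:
    explicit BumpRegion(std::size_t cap)
        : base(new char[cap]), capacity(cap) {}
    ~BumpRegion() { delete[] base; }

    void* allocate(std::size_t size) {
        size = (size + 7) & ~std::size_t(7);        // round up to 8-byte alignment
        if (used + size > capacity) return nullptr; // region exhausted
        void* p = base + used;
        used += size;
        return p;
    }
    std::size_t bytes_used() const { return used; }
};
```

The per-allocation cost is a comparison and an addition, which is why both camps converged on it; the trade-off is that reclaiming space requires either a collector or wholesale region reset.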
March 19, 2009
Re: new D2.0 + C++ language
Weed Wrote:
> BCS writes:
> > Yes you can be
> > very careful in keeping track of pointers (not practical) or use smart
> > pointers and such (might end up costing more than GC)
> I do not agree: a GC overspends either CPU or memory. Typically, both.

I wouldn't be so sure about CPU:
http://shootout.alioth.debian.org/debian/benchmark.php?test=all&lang=gdc&lang2=gpp&box=1
March 19, 2009
Re: new D2.0 + C++ language
Hello Weed,

> BCS writes:
> 
>> Reply to Weed,
>> 
>>> If you know the
>>> best way for language *without GC* guaranteeing the existence of an
>>> object without overhead - I have to listen!
>> Never delete anything?
>> 
>> One of the arguments for GC is that it might well have /less/
>> overhead than any other practical way of managing dynamic memory.
>> 
> Mmm.
> When I say "overhead" I mean the cost of execution, not the cost of
> programming.

So do I.

I figure that unless it saves me more time than it costs /all/ the users,
run-time cost trumps.

>> Yes you can be
>> very careful in keeping track of pointers (not practical) or use
>> smart
>> pointers and such (might end up costing more than GC)
> I do not agree: a GC overspends either CPU or memory. Typically, both.

Ditto naryl on CPU.

As for memory, unless the thing overspends into swap, and does so very quickly
(many pages per second), I don't think it matters. This is because most
of the extra will not be part of the resident set, so the OS will start paging
it out to keep some free pages. This is basically free until you have the
CPU or HDD locked hard at 100%. The other half is that the overhead of reference
counting and the like will also cost memory (you have to store the count
somewhere) and might have bad effects regarding cache misses.
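The "you have to store the count somewhere" cost is easy to see in C++ terms (hypothetical types): an intrusive count inflates every object, and alignment padding often makes the growth larger than the count itself.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of the per-object storage cost of an intrusive
// reference count. On typical 64-bit ABIs the 4-byte count is padded so
// the doubles stay 8-byte aligned, growing each object by 8 bytes.
struct Plain {
    double x, y;
};

struct Counted {
    std::uint32_t refs; // the count has to live somewhere...
    double x, y;        // ...and padding makes it cost more than 4 bytes
};
```

Fewer `Counted` objects fit in each cache line than `Plain` ones, which is the cache-miss effect mentioned above.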

> 
>> but neither is
>> particularly nice.