March 20, 2009
Reply to Weed,


> It is not designed that way. There will be a hidden dereference:
> 
> const ref Obj object  ->  struct { Obj* object; int counter; }  ->  Obj object;

Who deletes those structs and when?


March 20, 2009
Reply to Weed,

> Simen Kjaeraas wrote:
> 
>> Weed <resume755@mail.ru> wrote:
>> 
>>>> I think the point you're trying to make is that a GC is more memory
>>>> intensive.
>>>> 
>>> + Sometimes allocating and freeing memory at an arbitrary,
>>> unpredictable time is unacceptable (in game development or realtime
>>> software, for example; this has been discussed a hundred million
>>> times there, I guess).
>>> 
>> Then use the stub GC or disable the GC, then re-enable it when you
>> have the time to run a sweep (yes, you can).
>> 
> Then the memory gets overrun
> 

I can't think of a case where having the GC running would be a problem but allocating memory at all would not be (malloc/new/almost any allocator is NOT cheap).


March 20, 2009
Christopher Wright wrote:

>>>> + Sometimes allocating and freeing memory at an arbitrary, unpredictable time is unacceptable (in game development or realtime software, for example; this has been discussed a hundred million times there, I guess).
>>> So you are optimizing for the uncommon case?
>>
>> GC is an attempt at optimizing for the uncommon case )
> 
> I don't think so. Programmers have more important things to do than write memory management systems. My boss would not be happy if I produced an application that leaked memory at a prodigious rate, and he would not be happy if I spent much time at all on memory management.
> 

You should use a language with a GC in that case.

> With the application I develop at work, we cache some things. These would have to be reference counted or deleted and recomputed every time. Reference counting is a lot of tedious developer effort. Recomputing is rather expensive. Deleting requires tedious developer effort and determining ownership of everything. This costs time and solves no problems for the customers.

I do not agree. I find it quite easy to track the creation and deletion of objects on the stack and on the heap; I do not see a problem there. Although there is an alternative for that: C++, but not D.

And you do not need to do reference counting for all the objects in the program. Normally, there are far fewer objects that need it than objects that do not. (Therefore, I propose to extend the use of the stack for storing and passing objects; see the sketch below.)

Unfortunately, I have not yet thought of another way to do memory management.
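
As a rough illustration of the kind of stack usage I mean (a minimal sketch using D's existing 'scope' storage class; the Texture class is just a made-up example):

import std.stdio;

class Texture
{
    this()  { writeln("allocated"); }
    ~this() { writeln("freed"); }
}

void frame()
{
    // 'scope' places the object on the stack and destroys it
    // deterministically when the function returns, so neither the GC
    // nor reference counting is involved for this object.
    scope Texture t = new Texture();
    // ... use t ...
}

void main()
{
    frame();
}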

> 
> And the best manual memory management that I am likely to write would not be faster than a good garbage collector.
> 
> What sort of applications do you develop?

games, image processing

> Have you used a garbage
> collector in a large application?

I do not write really large applications


March 20, 2009
Reply to Weed,

> Simen Kjaeraas wrote:
> 
>> or disable the GC and enable it
>> when you have the time.
>
> Again, the memory will be overrun)
> 

Are you saying that you have a program with a time-critical section that allocates 100s of MB of memory? If so, you have other problems to fix. If that is not the case, then disable the GC, run your critical/RT section allocating a few kB/MB, and when you exit that section re-enable the GC and clean up. This won't create a memory overrun unless you allocate huge amounts of memory or forget to re-enable the GC.

Short version: the only places I can think of where having the GC run would cause problems are ones that can't be allowed to run for very long or that can hardly allocate any RAM at all. To argue this point you will have to give a specific use case.
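
A minimal sketch of that disable/re-enable pattern, assuming the druntime core.memory API (GC.disable/GC.enable/GC.collect):

import core.memory : GC;

void timeCriticalSection()
{
    // real-time work; it may allocate a few kB/MB, but no collection
    // can interrupt it while the GC is disabled
}

void main()
{
    GC.disable();   // no collections will run from here on
    timeCriticalSection();
    GC.enable();    // allow collections again
    GC.collect();   // clean up at a moment we choose
}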


March 20, 2009
Reply to Weed,

> Christopher Wright wrote:
> 
>>>>> + Sometimes allocating and freeing memory at an arbitrary,
>>>>> unpredictable time is unacceptable (in game development or
>>>>> realtime software, for example; this has been discussed a hundred
>>>>> million times there, I guess).
>>>>> 
>>>> So you are optimizing for the uncommon case?
>>>> 
>>> GC is an attempt at optimizing for the uncommon case )
>>> 
>> I don't think so. Programmers have more important things to do than
>> write memory management systems. My boss would not be happy if I
>> produced an application that leaked memory at a prodigious rate, and
>> he would not be happy if I spent much time at all on memory
>> management.
>> 
> You should use a language with a GC in that case.
> 

That type of case IS the normal case.

>> And the best manual memory management that I am likely to write would
>> not be faster than a good garbage collector.
>> 
>> What sort of applications do you develop?
>> 
> games, image processing
> 
>> Have you used a garbage
>> collector in a large application?
>
> I do not write really large applications

Small applications are NOT the normal case.

Trying to design a language for large apps (one of the things D is targeted at) based on what works in small apps is like saying "I know what works for bicycles, so now I'll design a railroad train".

Yes, there are programs where manual memory management is easy, but they are generally considered to be few and far between in real life. Many of them really have no need for memory management at all, as they die before they would run out of RAM anyway.


March 20, 2009
Reply to Weed,

> BCS wrote:
> 
>> Hello Weed,
>> 
>>> 
>>> Mmm
>>> When I say "overhead" I mean the cost of execution, not the cost of
>>> programming
>> So do I.
>> 
>> I figure unless it saves me more time than it costs /all/ the users,
>> run-time cost trumps.
>> 
> This is a philosophical dispute.
> 
> Good and frequently used code can be written once and then used for 10
> years in 50 applications across 10000 installations. Here, the cost of
> programming may be less than the cost of the end users' time and hardware.
> 

You are agreeing with me.

>> 
>> As for memory, unless the thing overspends into swap and does so very
>> quickly (many pages per second) I don't think that matters. This is
>> because most of the extra will not be part of the resident set so the
>> OS will start paging it out to keep some free pages. This is
>> basically free until you have the CPU or HDD locked hard at 100%. The
>> other half is that the overhead of reference counting and/or the like
>> will cost in memory (you have to store the count somewhere) and might
>> also have bad effects regarding cache misses.
>> 
>
> Once again I repeat: forget about reference counting - it is only for
> debug purposes. I think this addition should be switchable by a
> compiler option.
> 
> It is not included in the resulting code. Ref-counting is needed for
> multithreaded programs, where there is a risk of getting and using a
> reference to an object that another process has already killed.
> This situation needs to be recognized and turned into a run-time error.

As I understand the concept, "reference counting" is a form of GC. It has nothing to do with threading. The point is to keep track of how many references, in any thread, there are to a dynamic resource, and to free it when there are no more. Normally (as in, if you are not doing things wrong) you never release/free/delete a reference-counted resource yourself, so it doesn't even check whether it has been deleted. Also, because the count is attached to the referenced resource, it can't do that check: the count is deleted right along with it.

For that concept (the only meaning of the term "reference counting" I know of) the idea of turning it off for non-debug builds is silly. Are you referring to something else?
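
For reference, a minimal sketch of that usual meaning of "reference counting" (illustrative only, not what Weed is proposing; note that the count lives right next to the resource and is freed along with it):

import core.stdc.stdlib : malloc, free;

struct RefCounted(T)
{
    static struct Payload
    {
        T      value;   // the resource itself
        size_t count;   // number of live references to it
    }

    Payload* p;

    static RefCounted create(T value)
    {
        RefCounted r;
        r.p = cast(Payload*) malloc(Payload.sizeof);
        r.p.value = value;
        r.p.count = 1;
        return r;
    }

    this(this)   // copying a reference bumps the count
    {
        if (p) ++p.count;
    }

    ~this()      // last reference gone: free the resource and the count
    {
        if (p && --p.count == 0)
            free(p);
    }
}

void main()
{
    auto a = RefCounted!int.create(42);
    auto b = a;   // count goes to 2
    // both go out of scope here; the last destructor frees the payload
}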


March 20, 2009
BCS wrote:
>>>
>>> I figure unless it saves me more time than it costs /all/ the users, run-time cost trumps.
>>>
>> This is a philosophical dispute.
>>
>> Good and frequently used code can be written once and then used for 10 years in 50 applications across 10000 installations. Here, the cost of programming may be less than the cost of the end users' time and hardware.
>>
> 
> You are agreeing with me.
> 
>>>
>>> As for memory, unless the thing overspends into swap and does so very quickly (many pages per second) I don't think that matters. This is because most of the extra will not be part of the resident set so the OS will start paging it out to keep some free pages. This is basically free until you have the CPU or HDD locked hard at 100%. The other half is that the overhead of reference counting and/or the like will cost in memory (you have to store the count somewhere) and might also have bad effects regarding cache misses.
>>>
>>
>> Once again I repeat: forget about reference counting - it is only for debug purposes. I think this addition should be switchable by a compiler option.
>>
>> It is not included in the resulting code. Ref-counting is needed for multithreaded programs, where there is a risk of getting and using a reference to an object that another process has already killed. This situation needs to be recognized and turned into a run-time error.
> 
> As I understand the concept, "reference counting" is a form of GC.

In the proposed language it is a way to catch a mistake - deleting an object in another thread that still has references.

In other words, it is a way to prove that the reference refers to an object rather than to emptiness or garbage.


> It has
> nothing to do with threading. The point is to keep track of how many
> references, in any thread, there are to a dynamic resource, and to free
> it when there are no more. Normally (as in, if you are not doing things
> wrong) you never release/free/delete a reference-counted resource
> yourself, so it doesn't even check whether it has been deleted. Also,
> because the count is attached to the referenced resource, it can't do
> that check: the count is deleted right along with it.
> 
> For that concept (the only meaning of the term "reference counting" I know of) the idea of turning it off for non-debug builds is silly. Are you referring to something else?
> 
> 


March 20, 2009
BCS wrote:
> Reply to Weed,
> 
> 
>> It is not designed that way. There will be a hidden dereference:
>>
>> const ref Obj object  ->  struct { Obj* object; int counter; }  ->  Obj object;
> 
> Who deletes those structs and when?
> 
> 

When an object is deleted, the struct is also deleted.
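
Roughly, the hidden struct I have in mind looks like this (a sketch only; the names are hypothetical, and the counter would exist only in debug builds):

struct Obj { int payload; }

// What 'const ref Obj object' would hide behind the scenes in the
// proposal: a pointer to the real object plus a counter of the
// references currently outstanding on it.
struct HiddenRef
{
    Obj* object;
    int  counter;
}

// Deleting the object would, in debug builds, check the counter first,
// turning a dangling reference into a run-time error instead of silent
// garbage; the hidden struct is freed along with the object.
void deleteChecked(ref HiddenRef r)
{
    debug assert(r.counter == 0, "object deleted while references remain");
    // free *r.object and the hidden struct itself here
}

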
March 20, 2009
Reply to Weed,

> BCS wrote:
> 
>> As I understand the concept "reference counting" is a form of GC.
>> 
> In the proposed language it is a way to catch a mistake - deleting an
> object in another thread that still has references.
> 
> In other words, it is a way to prove that the reference refers to an
> object rather than to emptiness or garbage.
> 

OK, then quit calling it reference counting, because everyone will think of something else when you call it that.

Also, what you are proposing would not be specific to threads. The problem of deleting stuff early is just as much a problem, and looks exactly the same, in non-threaded code.


March 20, 2009
Weed wrote:
> Christopher Wright wrote:
>> What sort of applications do you develop?
> 
> games, image processing

Libraries will often have no need for data structures with complex lifetimes. There are exceptions, of course, but that's what I have generally found to be the case. For the bulk of image processing, you can just push the memory management problem onto the end user.

Games have strict performance requirements that a stop-the-world type of garbage collector violates. Specifically, a full collection would cause an undue delay of hundreds of milliseconds on occasion. If this happens once every ten seconds, your game has performance problems. This is not true of pretty much any other type of application.

Games usually have scripting languages that might make use of a garbage collector, though. And there is research into realtime garbage collectors that would be suitable for use in games.