November 21, 2006
== Quote from Walter Bright (newshound@digitalmars.com)'s article
> No. But most RAII usage is for managing memory, and Boris didn't say why he needed RAII for the stack allocated objects.

Mostly for closing OS handles, locking, caching and stuff. Like:
  getFile("a.txt").getText()

Normally, one would have to:
  1. open the file (which may allocate caching buffers, lock the file, etc.)
  2. use the file (efficiently)
  3. close the file (frees handles, buffers, releases locks, etc.)

It's a common design pattern, really.
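
With RAII, those three steps collapse into the one-liner above. A minimal sketch of how, in current D syntax (a hand-rolled wrapper over C stdio; getFile would just construct one of these):

  import std.string;   // toStringz
  import std.c.stdio;  // fopen, fread, fclose

  // RAII sketch: a scope class gets its dtor run deterministically
  // when the instance leaves scope, even if an exception is thrown.
  scope class File
  {
      private FILE* fp;

      this(char[] name)                 // 1. open (handle, locks, ...)
      {
          fp = fopen(toStringz(name), "r");
          if (!fp) throw new Exception("cannot open " ~ name);
      }

      char[] getText()                  // 2. use
      {
          char[] text;
          char[4096] buf;
          size_t n;
          while ((n = fread(buf.ptr, 1, buf.length, fp)) > 0)
              text ~= buf[0 .. n];
          return text;
      }

      ~this()                           // 3. close (handle, locks, ...)
      {
          if (fp) fclose(fp);
      }
  }

  void main()
  {
      scope File f = new File("a.txt");
      char[] text = f.getText();
  }   // f's dtor closes the file here, with no GC involved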
November 21, 2006
Boris Kolar wrote:
> == Quote from Walter Bright (newshound@digitalmars.com)'s article
>> No. But most RAII usage is for managing memory, and Boris didn't say why
>> he needed RAII for the stack allocated objects.
> 
> Mostly for closing OS handles, locking, caching and stuff. Like:
>   getFile("a.txt").getText()
> 
> Normally, one would have to:
>   1. open the file (which may allocate caching buffers, lock the file, etc.)
>   2. use the file (efficiently)
>   3. close the file (frees handles, buffers, releases locks, etc.)
> 
> It's a common design pattern, really.

A lot of file reads and writes can be done atomically with the functions in std.file, without need for RAII.
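
For example:

  import std.file;

  void main()
  {
      // Open, read and close all happen inside the one call.
      char[] text = cast(char[]) std.file.read("a.txt");
      std.file.write("copy.txt", text);
  }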
November 21, 2006
== Quote from Walter Bright (newshound@digitalmars.com)'s article
> A lot of file reads and writes can be done atomically with the functions in std.file, without need for RAII.

I know, but I rarely use standard libraries directly. One of the first things I do when I start programming in a new language is abstract away most of the standard library.

For most programmers, a file is something on your disk. For me, a file is an abstract concept: it may be something on a network, it may be something calculated on demand,... Some "files" need opening/closing, some don't.

I usually even go as far as defining a template File(T) (a file of elements
of type T). Anyway, File is not the only example; there are also locks,
widgets, sockets,.... All of them just as abstract, if not more so :)
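
Roughly like this (a simplified sketch; the real thing has more to it):

  // A File!(T) is an abstract sequence of T elements -- it may be a
  // disk file, a network stream, or something computed on demand.
  interface File(T)
  {
      size_t read(T[] buffer);  // fill buffer, return elements read
      void close();             // no-op where nothing needs releasing
  }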

Sometimes I need RAII, sometimes I don't. Because of my programming style
I very frequently encounter situations where I need very small classes, like
a selection (a (from, to) pair) or a parser event (an (event, selection)
pair) - these classes are just abstract enough that they can't be structs,
and simple enough that they shouldn't trigger GC pauses. The vast majority
of such classes are immutable (some having copy-on-write semantics) and are
often returned from functions.

One very recent specific example: I created a socket class and 3 implementations (socket over TCP, socket over Netbios, socket over buffer). The last one (socket over buffer) doesn't need to open/close connections, but the other two do. In my scenario, a real socket reads encrypted data, writes decrypted data to a buffer, and a "fake" socket reads the buffer as if it were an unencrypted connection.
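
In outline (a sketch from memory, not the actual code):

  interface Socket
  {
      size_t read(ubyte[] buf);
      void write(ubyte[] buf);
      void close();   // only the TCP and Netbios ones really need this
  }

  // The buffer-backed implementation: no connection, nothing to
  // open or close, so no RAII needed here.
  class BufferSocket : Socket
  {
      private ubyte[] data;
      private size_t pos;

      this(ubyte[] data) { this.data = data; }

      size_t read(ubyte[] buf)
      {
          size_t n = data.length - pos;
          if (n > buf.length) n = buf.length;
          buf[0 .. n] = data[pos .. pos + n];
          pos += n;
          return n;
      }

      void write(ubyte[] buf) { data ~= buf; }
      void close() { }
  }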

Anyway, my almost 20 years of programming experience have taught me enough that
I can tell when some missing feature is making my life harder. And I'm not a
feature freak - I wouldn't miss goto or even array slicing (you guessed it,
I abstract arrays as well ;), but I do miss decent RAII and deterministic
object destruction.
November 21, 2006
Boris Kolar wrote:
> == Quote from Walter Bright (newshound@digitalmars.com)'s article
>> No. But most RAII usage is for managing memory, and Boris didn't say why
>> he needed RAII for the stack allocated objects.
> 
> Mostly for closing OS handles, locking, caching and stuff. Like:
>   getFile("a.txt").getText()
> 
> Normally, one would have to:
>   1. open the file (which may allocate caching buffers, lock the file, etc.)
>   2. use the file (efficiently)
>   3. close the file (frees handles, buffers, releases locks, etc.)
> 
> It's a common design pattern, really.

A lot of this can be handled by "scope."  Though I grant that using objects for the rest and relying on the GC for clean-up is possibly not ideal for resources that must be cleaned up in a timely manner.
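
For reference (Lock and FileStream here are stand-ins, not real library classes):

  void withLock()
  {
      scope Lock l = new Lock();  // dtor runs when l leaves scope,
      // ... critical section ...
  }                               // even if an exception is thrown

  void withScopeExit()
  {
      FileStream f = new FileStream("a.txt");
      scope(exit) f.close();      // statement form, same guarantee
      // ... use f ...
  }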


Sean
November 21, 2006
Sean Kelly wrote:

> A lot of this can be handled by "scope."  Though I grant that using objects for the rest and relying on the GC for clean-up is possibly not ideal for resources that must be cleaned up in a timely manner.

Indeed. Long, long ago I suggested disallowing destructors for classes not declared 'scope' (or 'auto', as it was then), on the grounds that if you need stuff done there you really don't want to rely on the GC to do it.

It was a bit of a Devil's Advocate thing, but the response was surprisingly positive, and (as I recall) nobody came up with a counterexample where a dtor was needed but timeliness wasn't.

November 21, 2006
Mike Capp wrote:
> Sean Kelly wrote:
> 
>> A lot of this can be handled by "scope."  Though I grant that using
>> objects for the rest and relying on the GC for clean-up is possibly not
>> ideal for resources that must be cleaned up in a timely manner.
> 
> Indeed. Long, long ago I suggested disallowing destructors for classes not
> declared 'scope' (or 'auto', as it was then), on the grounds that if you need
> stuff done there you really don't want to rely on the GC to do it.
> 
> It was a bit of a Devil's Advocate thing, but the response was surprisingly
> positive, and (as I recall) nobody came up with a counterexample where a dtor was
> needed but timeliness wasn't.

I've actually got a test build of Ares (not sure if it's in SVN) that hooks the GC collection process so the user can be notified when an object is being cleaned up and can optionally prevent the object's dtor from being run.  The intent is to allow the user to detect "leaks" of resources intended to have deterministic scope and to allow the dtors of such objects to perform activities normally not allowed in GCed objects.  I haven't used the feature much yet in testing, but it seems a good compromise between the current D behavior and your suggestion.
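
The hook is roughly this shape (a sketch; the exact names may differ from what ends up in SVN):

  // Called by the GC for each object it is about to finalize.
  // Returning false suppresses the object's dtor.
  alias bool function(Object obj) CollectHandler;

  void setCollectHandler(CollectHandler h);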


Sean
November 21, 2006
Boris Kolar wrote:
> Anyway, my almost 20 years of programming experience have taught me enough that
> I can tell when some missing feature is making my life harder. And I'm not a
> feature freak - I wouldn't miss goto or even array slicing (you guessed it,
> I abstract arrays as well ;), but I do miss decent RAII and deterministic
> object destruction.

I hear you. The best suggestion I can make is to use the RIAA features the compiler has now, and wait for it to be upgraded to true stack allocation. Then, your code will just need a recompile.
November 21, 2006
Walter Bright wrote:
> 
> I hear you. The best suggestion I can make is to use the RIAA features the compiler has now, 

s/RIAA/RAII/

But I wonder what RIAA features DMD could add.

November 22, 2006
Boris Kolar wrote:
> == Quote from Walter Bright (newshound@digitalmars.com)'s article
> 
>>No. But most RAII usage is for managing memory, and Boris didn't say why
>>he needed RAII for the stack allocated objects.
> 
> 
> Mostly for closing OS handles, locking, caching and stuff. Like:
>   getFile("a.txt").getText()
> 
> Normally, one would have to:
>   1. open the file (which may allocate caching buffers, lock the file, etc.)
>   2. use the file (efficiently)
>   3. close the file (frees handles, buffers, releases locks, etc.)
> 
> It's a common design pattern, really.

I know Sean suggested scope (RAII) but how about:

File file = new FileStream(...);
scope(exit) { file.close(); }
...

-DavidM
November 22, 2006
On Sun, 19 Nov 2006 15:28:33 -0800, "John Reimer" <terminal.node@gmail.com> wrote:

>On Sun, 19 Nov 2006 14:59:19 -0800, BCS <BCS@pathilink.com> wrote:
>
>> Mars wrote:
>>> http://www.osnews.com/comment.php?news_id=16526
>>
>>
>> One issue brought up is that of D "requiring" the use of a GC.
>> What would it take to prove that wrong by making a full blown standard
>> lib that doesn't use a GC, and in fact doesn't have a GC?

I don't know. Personally, I am all in favour of having the choice - but remember, it's not just a matter of creating that library. Maintaining two standard libraries would mean a lot of ongoing headaches.

>Note, however, that C++ users, many of whom have grown dependent on manual memory management, are looking for a reason to fault D.  I've actually heard cases where C++ users lambast GC-based languages: use of a GC apparently creates "bad programming practices" -- imagine the laziness of not cleaning up after yourself!

I agree - but I also strongly disagree.

The problem is that memory management isn't just about allocating and freeing memory. It is closely coupled with newing and deleting, with constructors and destructors, and therefore with wider resource management issues.

Two problems can arise...

1.  Garbage collection isn't immediate. Resources can stay locked long
    after they should have been freed, because the garbage collector
    hasn't got around to destroying those objects yet. This can be a
    problem if you are trying to acquire further locks or whatever.

2.  Reference cycles. Take Java. It can garbage collect when there are
    reference cycles, sure, but it cannot know what order to destroy
    those objects in. Calling destructors in the wrong order could
    cause big problems.

    Solution - don't call the destructors (sorry, finalisers) at all.
    Just free the memory, since it doesn't matter what order you do
    that in.

    So that's it - Java doesn't guarantee to call finalisers. I don't
    know for sure that this is why, but it is the only good reason I
    can think of.

    If you think reference cycles are a theoretical rather than real
    problem, well, I'm afraid many practical data structures have
    them - even the humble doubly-linked list (see the sketch below).
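
    In miniature (D syntax):

      class Node
      {
          Node prev, next;
      }

      void main()
      {
          Node a = new Node;
          Node b = new Node;
          a.next = b;
          b.prev = a;  // a and b now reference each other --
      }                // which destructor should run first?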

Either of these problems is sufficient on its own to mean that the garbage collector cannot be relied upon. As the programmer, you have to take responsibility for ensuring that the cleaning up is done. And that, according to black-and-white reasoning, defeats the whole point of garbage collection.

But then these problems, even counted together, only create issues for a minority of objects in most code.

Awkward persons might observe that the rate of problems tends to increase in lower level code, and that this is why the applications-oriented language Java has more problems than the very-high-level languages that also do GC such as Python. And those same awkward persons might then point out that D explicitly targets systems level code, aiming its sights at a somewhat lower level than Java.

But let's put that point to one side for a bit.

Someone intelligent enough to consider shades of grey might still argue that it is a good idea to develop good habits early, and to apply them consistently. It saves on having these problems arise as surprise bugs, and perhaps as a result of third party libraries that you don't have source for and cannot fix.

I have a lot of sympathy with this point of view, and don't think it can be lightly dismissed. It isn't just a matter of taking sides and rejecting the other side no matter what. It is a valid view of the issue.

The trouble is that the non-GC way is also prone to surprise bugs.

So, as far as I can see, neither approach is a clear and absolute winner. I know it can seem as if GC is the 'modern' way and that non-GC is a dinosaur, but good and bad isn't decided by fashions or bandwagons. Both GC and non-GC have problems.

Now to consider that point I put to one side. D is explicitly aimed at systems level code. Well, that's true, but in the context of GC we have a rather skewed sense of high-level vs low-level - low level would tend to mean data structures and resource management rather than bit twiddling and hardware access. D systems level programming is probably roughly equally prone to GC problems as Java applications level programming.

In any case, D is a systems level language in the sense of down-to-and-including systems level. Most real world code has a mix of high-level and low-level. So in a single app, there can be a whole bunch of high-level code where GC is a near perfect approach, and a whole bunch of low-level code in which GC cannot be relied upon and is probably just an unwanted complication.

And when there are two equally valid approaches, each of which has its own advantages and disadvantages, and both of which could be valuable in the same application, which should the serious programmer demand? Particularly the systems-level programmer?

Right - Both!

But does it make sense to demand a separate non-GC standard library? That seems to suggest a world where an application is either all GC or all non-GC.

GC seems pointless if it doesn't happen by default, so the approach of opting out for specific classes when necessary seems, to me, to be as close to ideal as you can get. And even then, there's the proviso that you should stick to the default approach as much as possible and make damned sure that when you opt out, it's clear what you are doing and why. It's not a GC-is-superior thing, just a consistency thing - minimising confusion and complexity.
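
And D already lets you opt out per class, via class allocators. A minimal sketch (assuming plain malloc/free is acceptable on the target):

  import std.c.stdlib;

  // Instances of this class live outside the GC heap: they are
  // allocated with malloc and must be freed with an explicit delete.
  class RealtimeBuffer
  {
      new(size_t size)
      {
          void* p = std.c.stdlib.malloc(size);
          assert(p !is null);
          return p;
      }

      delete(void* p)
      {
          std.c.stdlib.free(p);
      }
  }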

In that case, with GC as the default and with opting out being reserved for special cases, you're probably going to carry on using the GC standard library anyway.

As for embedded platforms, if malloc and free would work, so would garbage collection. If not, you probably can't use any conventional standard library (and certainly not data structure library code), and should be using a specialised embedded development library (probably tailored for the specific platform).

In other words, the only benefit I can see to having a separate non-GC library is marketing. And it seems that a better approach is to educate people about the benefits of dropping the old either-or thinking and choosing both.

AFAIK, there are two competitors in this having-both approach, and they are both C++: Managed C++, and C++ with a GC library. And they both get it wrong IMO - you have to opt in to GC, not opt out. If GC isn't the default, you get new classes of bugs - the 'oh - I thought that was GC, but apparently not' and 'damn - I forgot to specify GC for this' bugs.

So there we are, D is not only already perfect, it is the only language available that has achieved this amazing feat ;-)

-- 
Remove 'wants' and 'nospam' from e-mail.