September 05, 2002
OK, there are two things that I need to point out that will help tremendously:

1. I apologize, I was sloppy with my terminology.  When reading my earlier posts, please replace "instance" with "reference".  At no point was I actually intentionally referring to the instance (chunk of memory, allocated with new, to which there may be multiple references).  I was always referring to the reference variable.  Please take my posts, put 'em in an editor, and do a global search and replace, then see if they make any more sense.  This was an incredibly bad mistake on my part, because without precision in our words, we can't communicate effectively.

2. I am strongly in favor of reference counting instead of simple scoping.  I think that the only reliable and safe deterministic finalization technique is reference counting (assuming the presence of a GC that can come through and gather up the cycles that refcounting leaves behind).  Scoping falls out as the simplest case of reference counting.  However, I have beaten that horse until it is not only dead, but has now been chopped up, processed, cured, canned, fed to dogs around the country, and gone to its final (though ignoble and somewhat disgusting) resting place(s).  It isn't going to happen, at least not in the foreseeable future.  So, I have turned my attentions to trying to make the raii system as safe as I can envision it.

The safest form that I can imagine, because it requires the least continuous attention and effort from the programmer, is one where classes are flagged so that any references to their instances are raii by default.  Then two exceptions are made:

1. References that are function arguments are not raii -- the function isn't creating the instance (I meant it that time :-), so the argument reference is not the owning reference, so it must not exhibit raii destruction behavior.

2. A keyword is introduced to allow a particular owner reference (the one that was used in the statement "MyRef = new MyRaiiClass") to not exhibit raii behaviors.  Thus, if a function is going to create an instance (meant that one, too) and attach a reference to it, but it also expects to hand that instance to somewhere else and then exit (which would finalize the instance, normally), it can declare its reference to be a nonraii reference.  The following is contrived, but I'm trying to keep it short...

raii class Lock {/*...*/}

void func1()
{
    Lock lockFromElsewhere; // reference is raii, because no keyword

    lockFromElsewhere = func2();
} // lockFromElsewhere reference will finalize the instance that is returned
  // from func2 at this point

Lock func2()
{
    nonraii Lock notMine = new Lock; // notMine is a nonraii reference
    return notMine;
} // Since notMine is nonraii, nothing magical happens here.

Looking at that code, I'm not sure if a third exception needs to be made for function return types or not.  I certainly don't want to have to type:

nonraii Lock func2() {/*...*/}

although that might not be a bad thing...  It would signify that the Lock that is being returned does not have the normal raii semantics and will need to be handled some other way.  Of course, any return value will "not have the normal raii semantics", so the nonraii keyword there should not technically be necessary.

Functions like those above are not common, but it is possible that a subfunction could start doing some work for which it needs to acquire a resource, and then reach a point where it needs to let the caller finish the work, but it doesn't want to release the resource because that would open a window of opportunity for other threads/processes/tasks/whatever to affect the resource.  Of the few cases where this comes up, most can probably be handled by redesigning the two functions in question.  But there are still some cases that simply need that design.

Just to forestall any comments, I wish to point out that Lock can usually be fixed through other means.  If a function has to return a locked resource, the programmer must take the responsibility for tracking who owns the lock and who should finally unlock it.  Most Lock classes do not actually contain the locking mechanism -- they usually just contain a reference to a Mutex or Semaphore.  The purpose of the Lock class is to remove responsibility from the programmer for remembering to unlock the Mutex/Semaphore.  If the programmer has taken that responsibility back upon him/herself, then there is no need to use the Lock, and the Mutex/Semaphore (which are non-raii classes anyway) can be directly managed. I mention this to show that I understand that Lock is probably a poor choice for demonstrating some of the more contorted cases that occasionally arise.  A better example would probably be File or Printer or Port, where the raii'd class also actually "is" the resource, from a programming standpoint, rather than a simple convenience wrapper around another class.

In article <al7dih$2oaq$1@digitaldaemon.com>, Sandor Hojtsy says...
>The *instances* of Lock and File can (<sigh> almost) always be raii. But you are associating the raii property with the *references* to Lock and File. In this discussion there was no mentioning of keywords to require or stop raii "for a particular instance". How would you accomplish that?
>
>> "you signal your class that all of its instances are raii unless otherwise
>noted"
>
>Put it this way:
>"you signal your class that all *references* to its instances are raii
>unless otherwise noted"
>And signaling that doesn't make sense.

OK, given the exceptions mentioned above (function parameter references and keyword marked references), does it now make sense to signal that all non-exceptional references to the raii class are raii references?

Mac


September 05, 2002
Sandor Hojtsy wrote:

> > There is a risk if the non-raii reference will be holding on past the
> > function invocation lifetime -- if it is storing it away in a cache or
> > something.  Of course, such a situation will blow up anything that isn't
> > reference counted, so we'll ignore that case for the time being...
>
> Yes that kind of trickery can also ruin the "struct destructors" concept.
>
> And of course you will need non-raii references to Lock and File objects, to enable function calls (including member functions of Lock and File). Would you disable member function calls of Lock and File? The secret parameter, the reference to itself will be non-raii, and can be stored anywhere you like.
>
> But let me start a new line of discussion here. You don't really need to *delete* the instance. Memory is not important. You only need to release other resources. Finalizing or "disposing" would do. Then the old references may remain valid. Like references to an existing but closed File object. All i/o operations would produce exceptions. I think it is more elegant than possibly having "dangling references".

Interesting idea.  But maybe you could build it into the compiler: There could be a "valid" bool variable (allocated along with the object?) that marked whether or not the object had been finalized.  Each call on a member function would include an implicit "in" contract that "valid==true".  So use after the object had been "finalized" would result in a contract failure exception.

Of course, you could also code this individually into each class that needs it. Perhaps a special template for raii classes:

template Raii(X : Object) {
    class Raii_Wrapper : X {
        bool raii_valid = true;
        void finalize()
        {
            super.finalize();
            raii_valid = false;
        }
        invariant() { assert(raii_valid); }
    }
}

Now, each time that somebody declares an raii object:
    void func()
    {
        auto MyClass raii_ref;
        // blah
    }
the compiler turns it into an instance of the template, with the implicit
finally clause:
    void func()
    {
        instance Raii(MyClass) raii_ref;
        try {
            // blah
        }
        finally
        {
            raii_ref.finalize();
        }
    }

Thoughts?

--
The Villagers are Online! villagersonline.com

.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]


September 05, 2002
"Mac Reiter" <Mac_member@pathlink.com> wrote in message news:al5kgm$1ud8$1@digitaldaemon.com...
> In article <al4j79$2s6p$1@digitaldaemon.com>, Sandor Hojtsy says...
> I would prefer to avoid a similar mistake in D (which does do 'virtual' by
> default, by the way, because Walter realized this problem).

LOL. I suffered from subtle bugs caused by that exact issue over and over. When you have a large app, the tedium of trying to verify the correctness of the virtual use is boggling. There was just no way to prove your program was free of such. It's a maddening timewaster.

The correct way is to have virtual by default, and allow no overrides of final functions.


September 05, 2002
"Russ Lewis" <spamhole-2001-07-16@deming-os.org> wrote in message news:3D77A06B.ACD19741@deming-os.org...
> Interesting idea.  But maybe you could build it into the compiler: There
> could be a "valid" bool variable (allocated along with the object?) that
> marked whether or not the object had been finalized.  Each call on a member
> function would include an implicit "in" contract that "valid==true".  So use
> after the object had been "finalized" would result in a contract failure
> exception.

That is handily handled (!) by zeroing the vptr after finalization.


September 05, 2002
Walter wrote:
> "Russ Lewis" <spamhole-2001-07-16@deming-os.org> wrote in message
> news:3D77A06B.ACD19741@deming-os.org...
> 
>>Interesting idea.  But maybe you could build it into the compiler: There
>>could be a "valid" bool variable (allocated along with the object?) that
>>marked whether or not the object had been finalized.  Each call on a member
>>function would include an implicit "in" contract that "valid==true".  So use
>>after the object had been "finalized" would result in a contract failure
>>exception.
> 
> That is handily handled (!) by zeroing the vptr after finalization.

(Shudder) Well, I suppose any error is better than none.  It's a good point, though, that raii finalization should NOT drop the object out of the garbage collection map.  Finalize it, but don't mark it as garbage until you're sure there are no dangling references.  It will make for easier-to-debug programs.

September 06, 2002
"Walter" <walter@digitalmars.com> wrote in news:akgsaf$aog$1@digitaldaemon.com:

> The 'auto' idea was looking more and more like a way to simply put class objects on the stack.

An example of code being converted to use RAII:

http://sourceforge.net/docman/display_doc.php?docid=8673&group_id=9028

This is the Firebird SQL DB Server converted from C to C++

September 09, 2002
"Walter" <walter@digitalmars.com> wrote in message news:al8nql$2ejr$1@digitaldaemon.com...
>
> "Russ Lewis" <spamhole-2001-07-16@deming-os.org> wrote in message news:3D77A06B.ACD19741@deming-os.org...
> > Interesting idea.  But maybe you could build it into the compiler: There
> > could be a "valid" bool variable (allocated along with the object?) that
> > marked whether or not the object had been finalized.  Each call on a
> > member function would include an implicit "in" contract that
> > "valid==true".  So use after the object had been "finalized" would result
> > in a contract failure exception.
>
> That is handily handled (!) by zeroing the vptr after finalization.
>

Good. In that case, releasing raii objects should call finalize, and not
delete.
Theoretically, a wrapper object around a released resource may still have
usable member functions, such as ones reporting usage statistics or the
filename. Calling these functions is still disabled, but at least dangling
references are handled better.

Sandor

