April 12, 2014 Re: The "@safe vs struct destructor" dilemma

Posted in reply to Michel Fortin

On 2014-04-11 19:54:16 +0000, Michel Fortin <michel.fortin@michelf.ca> said:

> Can destructors be @safe at all? When called from the GC the destructor 1) likely runs in a different thread and 2) can potentially access other destructed objects, and those objects might contain pointers to deallocated memory if their destructor manually freed a memory block.

There's another issue I forgot to mention earlier: the destructor could leak a pointer to an external variable. Then you'd have a reference to a deallocated memory block. Note that making the destructor pure only helps for the global-variable case. The struct/class itself could contain a pointer to a global, or to another memory block that will persist beyond the destruction of the object, and the destructor can assign the pointer there. It can thus leak the object being deallocated (or even "this" if it's a class) through that pointer.

--
Michel Fortin
michel.fortin@michelf.ca
http://michelf.ca
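To make those two leak paths concrete, here is a minimal D sketch (the names leaked, slot, and payload are invented for illustration). The first destructor writes to a global, which purity would catch; the second writes through a pointer stored in the struct itself, which even a pure destructor may do:

int* leaked; // a global that outlives any collection cycle

struct S
{
    int* payload;
    ~this()
    {
        leaked = payload; // impure: exactly what a pure dtor would forbid
    }
}

struct T
{
    int** slot;   // points into some longer-lived structure
    int* payload;
    ~this() pure
    {
        *slot = payload; // weakly pure, yet payload still escapes the cycle
    }
}

In both cases the stored pointer dangles as soon as the GC reuses the freed block.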
April 12, 2014 Re: The "@safe vs struct destructor" dilemma

Posted in reply to Kagamin

On 2014-04-12 10:29:50 +0000, "Kagamin" <spam@here.lot> said:

> On Saturday, 12 April 2014 at 03:02:56 UTC, Michel Fortin wrote:
>> 2- after the destructor is run on an object, wipe out the memory block
>> with zeros. This way if another to-be-destructed object has a pointer to
>> it, at worst it'll dereference a null pointer. With this you might get a
>> sporadic crash when it happens, but that's better than memory corruption.
>
> Other objects will have a valid pointer to the zeroed-out block and will
> be able to call its methods. They are likely to crash, but it's not
> guaranteed; they may just silently corrupt memory. Imagine the class has a
> pointer to a 10MB memory block, where the size is an enum encoded in the
> function code (so it won't be zeroed); after the clearing, the function
> may write to any region of that block through the now-null pointer.

Well, that's a general problem of @safe when dereferencing any potentially null pointer. I think Walter's solution was to insert a runtime check if the offset is going to be beyond a certain size. But there have been discussions on non-nullable pointers since then, and I'm not sure what Walter thought about them. The runtime check would help in this case; non-nullable pointers would not.

--
Michel Fortin
michel.fortin@michelf.ca
http://michelf.ca
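Kagamin's scenario fits in a few lines of hypothetical D (Big and Holder are invented names for illustration). When a field sits at a large offset from a null base pointer, the access lands far past the OS guard page at address zero, so nothing guarantees a segfault:

struct Big
{
    ubyte[10 * 1024 * 1024] buf; // the 10MB block, stored inline
}

struct Holder
{
    Big* big;
    void touch() @safe
    {
        // If big was nulled by the GC's wipe, this writes near address
        // 8MB, well past the protected page at 0: silent corruption.
        big.buf[8 * 1024 * 1024] = 1;
    }
}

A check of the form "trap if the base pointer is null whenever the static offset exceeds the guard-page size" would catch exactly this access.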
April 12, 2014 Re: The "@safe vs struct destructor" dilemma

Posted in reply to Marc Schütz

On 2014-04-12 08:50:59 +0000, "Marc Schütz" <schuetzm@gmx.net> said:

> More correctly, every reference to the destroyed object needs to be wiped, not the object itself. But this requires a fully precise GC.

That'd be more costly (assuming you could do it) than just wiping the object you just destroyed. But it'd solve the issue of leaking a reference to the outside world from the destructor.

--
Michel Fortin
michel.fortin@michelf.ca
http://michelf.ca
April 12, 2014 Re: The "@safe vs struct destructor" dilemma

Posted in reply to deadalnix

On 2014-04-12 09:01:12 +0000, "deadalnix" <deadalnix@gmail.com> said:

> On Saturday, 12 April 2014 at 03:02:56 UTC, Michel Fortin wrote:
>> 2- after the destructor is run on an object, wipe out the memory block
>> with zeros. This way if another to-be-destructed object has a pointer to
>> it, at worst it'll dereference a null pointer. With this you might get a
>> sporadic crash when it happens, but that's better than memory corruption.
>> You only need to do this when allocated on the GC heap, and only pointers
>> need to be zeroed, and only if another object being destroyed is still
>> pointing to this object, and perhaps only do it for @safe destructors.
>
> You don't get a crash, you get undefined behavior. That is much worse and certainly not @safe.

You get a null dereference. Because the GC will not free memory for objects in a given collection cycle until they're all destroyed, any reference to them will still be "valid" while the other objects are being destroyed. In other words, if one of them has already been destroyed, any pointer read from it will be null. That null dereference is going to be like any other potential null dereference in @safe code: it is expected to crash.

There's still the problem of leaking a reference somewhere where it survives beyond the current collection cycle. My proposed solution doesn't work for that. :-(

--
Michel Fortin
michel.fortin@michelf.ca
http://michelf.ca
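For reference, the scheme Michel describes amounts to something like this sketch (runDestructor and releaseAll are hypothetical stand-ins, not druntime's actual internals):

// Hypothetical hooks; the real GC internals differ.
void runDestructor(void[] block) { /* invoke the block's destructor */ }
void releaseAll(void[][] blocks) { /* hand the memory back to the allocator */ }

void finalizeCycle(void[][] unreachable)
{
    foreach (block; unreachable)
    {
        runDestructor(block);        // destroy first...
        (cast(ubyte[]) block)[] = 0; // ...then wipe, so destructors that run
                                     // later see null instead of stale data
    }
    // Nothing was freed until the whole cycle was destroyed, so every
    // pointer between these objects stayed either live or null above.
    releaseAll(unreachable);
}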
April 12, 2014 Re: The "@safe vs struct destructor" dilemma

Posted in reply to Michel Fortin

On Saturday, 12 April 2014 at 11:06:33 UTC, Michel Fortin wrote:
> Well, that's a general problem of @safe when dereferencing any potentially null pointer. I think Walter's solution was to insert a runtime check if the offset is going to be beyond a certain size.
Well, if you don't access anything beyond a certain offset, it doesn't make sense to declare something that large in the first place. So it would be a compile-time check, not a run-time one: the offset of any field access is already known statically.
April 12, 2014 Re: The "@safe vs struct destructor" dilemma

Posted in reply to Michel Fortin

On 04/12/2014 01:06 PM, Michel Fortin wrote:
> On 2014-04-12 10:29:50 +0000, "Kagamin" <spam@here.lot> said:
>
>> On Saturday, 12 April 2014 at 03:02:56 UTC, Michel Fortin wrote:
>>> 2- after the destructor is run on an object, wipe out the memory
>>> block with zeros. This way if another to-be-destructed object has a
>>> pointer to it, at worst it'll dereference a null pointer. With this
>>> you might get a sporadic crash when it happens, but that's better
>>> than memory corruption.
>>
>> Other objects will have a valid pointer to the zeroed-out block and will
>> be able to call its methods. They are likely to crash, but it's not
>> guaranteed; they may just silently corrupt memory. Imagine the class has
>> a pointer to a 10MB memory block, where the size is an enum encoded in
>> the function code (so it won't be zeroed); after the clearing, the
>> function may write to any region of that block through the now-null
>> pointer.
>
> Well, that's a general problem of @safe when dereferencing any
> potentially null pointer. I think Walter's solution was to insert a
> runtime check if the offset is going to be beyond a certain size. But
> there have been discussions on non-nullable pointers since then, and I'm
> not sure what Walter thought about them.
>
> The runtime check would help in this case; non-nullable pointers would not.
>
Yes, they would help (e.g. just treat every pointer as potentially null in a destructor).
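A tiny sketch of that idea (Node and its fields are invented for illustration): under a non-nullable-pointer regime, the compiler would demote every pointer to "possibly null" inside a destructor and insist on a test before each dereference.

struct Node
{
    Node* prev;
    Node* next;
    ~this()
    {
        if (next !is null)    // the check such a compiler would require
            next.prev = null; // unlink ourselves from our neighbour
    }
}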
April 12, 2014 Re: The "@safe vs struct destructor" dilemma

Posted in reply to deadalnix

On 4/12/2014 4:57 AM, deadalnix wrote:
> On Friday, 11 April 2014 at 06:29:39 UTC, Nick Sabalausky wrote:
>> Realistically, I would imagine this @trusted part should *always* be a
>> dummy wrapper over a specific @system function. Why? Because @trusted
>> disables ALL of @safe's extra safety checks. Therefore, restricting
>> usage of @trusted to ONLY be dummy wrappers over the specific parts
>> which MUST be @system will minimize the amount of collateral code
>> that must lose all of @safe's special safety checks.
>>
>
> No.
>
> Trusted is about providing a safe interface to some unsafe internals.
> For instance, free cannot be safe. But a function can do malloc and free
> in a safe manner. That function can thus be tagged @trusted .
>
The problem with that is @trusted also disables all the SafeD checks for the *rest* of the code in your function, too.
To illustrate, suppose you have this function:
void doStuff() {
    ...stuff...
    malloc();
    ...stuff...
    free();
    ...stuff...
}
Because of malloc/free, this function obviously can't be @safe (malloc/free are, of course, just examples here; they could be any @system functions).
Problem is, that means for *everything* else in doStuff, *all* of the ...stuff... parts, you CANNOT enable the extra safety checks that @safe provides. The use of one @system func poisons the rest of doStuff's implementation (non-transitively) into being non-checkable via SafeD.
However, if you implement doStuff like this:
// Here I'm explicitly acknowledging that malloc/free are non-@safe
@trusted auto trustedWrapperMalloc(...) {...}
@trusted auto trustedWrapperFree(...) {...}

void doStuff() {
    ...stuff...
    trustedWrapperMalloc();
    ...stuff...
    trustedWrapperFree();
    ...stuff...
}
*Now* doStuff can be marked @safe and enjoy all the special checks that @safe provides.
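For what it's worth, here is a compilable sketch of that wrapper idiom using core.stdc.stdlib's real malloc/free as the @system calls (the wrapper names and the 64-byte size are invented; whether such thin wrappers deserve @trusted at all is exactly what the replies below dispute):

import core.stdc.stdlib : malloc, free;

@trusted void* trustedMalloc(size_t size) { return malloc(size); }
@trusted void trustedFree(void* p) { free(p); }

@safe void doStuff()
{
    auto p = trustedMalloc(64);  // the acknowledged unsafe boundary
    scope (exit) trustedFree(p); // released exactly once, on every path
    // ... everything else in here still gets full @safe checking ...
}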
April 12, 2014 Re: The "@safe vs struct destructor" dilemma

Posted in reply to Nick Sabalausky

On Saturday, 12 April 2014 at 22:02:26 UTC, Nick Sabalausky wrote:
> *Now* doStuff can be marked @safe and enjoy all the special checks that @safe provides.
_and_ it is terribly wrong, because it is not guaranteed to be @safe for all use cases, breaking the type system once it is used anywhere other than those "special" functions.

I agree that @trusted functions should be as small as possible, but they must still be self-contained.
April 13, 2014 Re: The "@safe vs struct destructor" dilemma

Posted in reply to Dicebot

On 4/12/2014 7:08 PM, Dicebot wrote:
> On Saturday, 12 April 2014 at 22:02:26 UTC, Nick Sabalausky wrote:
>> *Now* doStuff can be marked @safe and enjoy all the special checks
>> that @safe provides.
>
> _and_ it is terribly wrong, because it is not guaranteed to be @safe for
> all use cases, breaking the type system once it is used anywhere other
> than those "special" functions.

If, as you say, this is wrong:

----------------------------------
@system auto foo() {...}

// Note, I meant for trustedWrapperFoo to be private
// and placed together with doStuff. Obviously not a public
// func provided by foo's author.
@trusted private auto trustedWrapperFoo(...) {...}

@safe void doStuff() {
    ...stuff...
    // Yes, as the author of doStuff, I'm acknowledging
    // foo's lack of @safe-ty
    trustedWrapperFoo();
    ...stuff...
}
----------------------------------

Then how could this possibly be any better?:

----------------------------------
@system auto foo() {...}

@trusted void doStuff() {
    ...stuff...
    foo();
    ...stuff...
}
----------------------------------

The former contains extra safety checks (ie, for everything in "...stuff...") that the latter does not. The former is therefore better.
April 13, 2014 Re: The "@safe vs struct destructor" dilemma

Posted in reply to Nick Sabalausky

On Sunday, 13 April 2014 at 01:30:59 UTC, Nick Sabalausky wrote:
> // Note, I meant for trustedWrapperFoo to be private
> // and placed together with doStuff. Obviously not a public
> // func provided by foo's author.
> @trusted private auto trustedWrapperFoo(...) {...}

It is still accessible by other functions in the same module, unless you keep each @trusted function in its own module.

> Then how could this possibly be any better?:
>
> ----------------------------------
> @system auto foo() {...}
>
> @trusted void doStuff() {
>     ...stuff...
>     foo();
>     ...stuff...
> }
> ----------------------------------
>
> The former contains extra safety checks (ie, for everything in
> "...stuff...") that the latter does not. The former is therefore better.

Because @system does not give any guarantees: the type system expects that calling such a function can do anything horrible. @trusted, however, is expected to be 100% equivalent to @safe, with the only exception that its safety can't be verified by the compiler. Any @trusted function, from the type system's point of view, can be used in any context where @safe can be used. It is your personal responsibility as a programmer to verify the 100% safety of each @trusted function you write; otherwise anything can go wrong, and its writer will be the only one to blame.
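Dicebot's objection can be made concrete with a hypothetical sketch (the names are invented): a wrapper that is only safe under a contract its signature cannot express is reachable from any @safe code in the module.

import core.stdc.stdlib : free;

// Only safe if p came from malloc and is never touched again --
// a contract the type system cannot see.
@trusted void trustedFreeAnything(void* p) { free(p); }

@safe void misuse()
{
    auto p = new int;        // GC-allocated, never malloc'd
    trustedFreeAnything(p);  // compiles as @safe, yet free()s GC memory
}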