
digitalmars.D - The "@safe vs struct destructor" dilemma

reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
So the idea behind @safe is that most code should be @safe, with occasional 
@system/@trusted pieces isolated deeper in the call chain. That 
inevitably means occasionally invoking @system from @safe via an 
@trusted intermediary.

Realistically, I would imagine this @trusted part should *always* be a 
dummy wrapper over a specific @system function. Why? Because @trusted 
disables ALL of @safe's extra safety checks. Therefore, restricting 
usage of @trusted to ONLY be dummy wrappers over the specific parts 
which MUST be @system will minimize the amount of collateral code that 
must lose all of @safe's special safety checks.
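
A minimal sketch of the kind of thing I mean (all of the names below are 
made up for illustration):

----------------------------------
// someSystemCall is a made-up stand-in for any @system API:
@system void someSystemCall(int* p) { *p = 42; }

// Thin @trusted shim: the only code that escapes @safe checking is
// this single call.
@trusted void trustedSomeSystemCall(int* p) { someSystemCall(p); }

@safe void doWork() {
    auto p = new int;            // GC allocation, fine in @safe
    trustedSomeSystemCall(p);    // the one seam
    *p += 1;                     // ...and everything else here still
                                 // gets the full @safe checks
}
----------------------------------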

This means some mildly-annoying boilerplate at all the @safe -> @system 
seams, but it's doable...*EXCEPT*, afaics, for struct destructors. Maybe 
I'm missing something, but I'm not aware of any reasonable way to stuff 
those behind an @trusted wrapper (or even an ugly way, for that matter).

If there really *isn't* a reasonable way to wrap @system struct 
destructors (ex: RefCounted) inside an @trusted wall, then any such 
structs will poison all functions which touch them into being @trusted, 
thus destroying the @safe safety checks for the *entire* body of such 
functions. Well, that is, aside from any portions of the function which 
don't touch the struct *and* can be factored out into separate @safe 
helper functions - but that solution seems both limited and 
contortion-prone.
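
To make that concrete, a hypothetical sketch of the kind of struct I 
mean, where there's seemingly no place to hide the destructor call 
behind @trusted:

----------------------------------
@system void releaseRaw(void* p) {   // stand-in for some unsafe cleanup
    import core.stdc.stdlib : free;
    free(p);
}

struct Handle {
    void* raw;
    ~this() @system { releaseRaw(raw); }
}

@safe void useHandle() {
    Handle h;
    // ...otherwise perfectly @safe work with h...
}   // error: the compiler inserts a call to h's @system destructor
    // right here, and there's no spot to wrap it in @trusted
----------------------------------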

Any thoughts?
Apr 10 2014
next sibling parent "Kagamin" <spam here.lot> writes:
RefCounted is probably unsafe, so just don't use it in @safe code.
Apr 11 2014
prev sibling next sibling parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-04-11 06:29:32 +0000, Nick Sabalausky 
<SeeWebsiteToContactMe semitwist.com> said:

 So the idea behind @safe is that most code should be @safe, with occasional 
 @system/@trusted pieces isolated deeper in the call chain. That 
 inevitably means occasionally invoking @system from @safe via an 
 @trusted intermediary.
 
 Realistically, I would imagine this @trusted part should *always* be a 
 dummy wrapper over a specific @system function. Why? Because @trusted 
 disables ALL of @safe's extra safety checks. Therefore, restricting 
 usage of @trusted to ONLY be dummy wrappers over the specific parts 
 which MUST be @system will minimize the amount of collateral code that 
 must lose all of @safe's special safety checks.
 
 This means some mildly-annoying boilerplate at all the @safe -> @system 
 seams, but it's doable...*EXCEPT*, afaics, for struct destructors. 
 Maybe I'm missing something, but I'm not aware of any reasonable way to 
 stuff those behind an @trusted wrapper (or even an ugly way, for that 
 matter).
 
 If there really *isn't* a reasonable way to wrap @system struct 
 destructors (ex: RefCounted) inside an @trusted wall, then any such 
 structs will poison all functions which touch them into being @trusted, 
 thus destroying the @safe safety checks for the *entire* body of such 
 functions. Well, that is, aside from any portions of the function which 
 don't touch the struct *and* can be factored out into separate @safe 
 helper functions - but that solution seems both limited and 
 contortion-prone.
 
 Any thoughts?
Can destructors be @safe at all? When called from the GC the destructor 
1) likely runs in a different thread and 2) can potentially access other 
destructed objects; those objects might contain pointers to deallocated 
memory if their destructor manually freed a memory block.

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Apr 11 2014
next sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 4/11/2014 3:54 PM, Michel Fortin wrote:
 Can destructors be @safe at all? When called from the GC the destructor
 1) likely runs in a different thread and 2) can potentially access other
 destructed objects, those objects might contain pointers to deallocated
 memory if their destructor manually freed a memory block.
If destructors can't be @safe, that would seem to create a fairly sizable hole in the utility of @safe.
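
For concreteness, here's roughly the hazard being described, as I 
understand it (the types below are invented, nothing from Phobos):

----------------------------------
import core.stdc.stdlib : malloc, free;

class Buffer {
    void* block;
    this() { block = malloc(1024); }
    ~this() { free(block); }        // manual free in a destructor
}

class User {
    Buffer buf;                     // GC-managed reference
    ~this() {
        // If the GC destroys buf in the same collection cycle and
        // happens to run its destructor first, buf.block has already
        // been freed; touching it here is a use-after-free. And this
        // destructor may not even be running on the thread that
        // created these objects.
    }
}
----------------------------------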
Apr 11 2014
parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-04-11 22:22:18 +0000, Nick Sabalausky 
<SeeWebsiteToContactMe semitwist.com> said:

 On 4/11/2014 3:54 PM, Michel Fortin wrote:
 
 Can destructors be @safe at all? When called from the GC the destructor
 1) likely runs in a different thread and 2) can potentially access other
 destructed objects, those objects might contain pointers to deallocated
 memory if their destructor manually freed a memory block.
If destructors can't be @safe, that would seem to create a fairly sizable hole in the utility of @safe.
Well, they are @safe as long as they're not called by the GC. I think you 
could make them @safe even with the GC by changing things this way:

1- make the GC call the destructor in the same thread the object was 
created in (for non-shared objects), so any access to thread-local stuff 
stays in the right thread, avoiding low-level races.

2- after the destructor is run on an object, wipe out the memory block 
with zeros. This way if another to-be-destructed object has a pointer to 
it, at worst it'll dereference a null pointer. With this you might get a 
sporadic crash when it happens, but that's better than memory corruption. 
You only need to do this when allocated on the GC heap, and only pointers 
need to be zeroed, and only if another object being destroyed is still 
pointing to this object, and perhaps only do it for @safe destructors.

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Apr 11 2014
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 11 Apr 2014 23:02:55 -0400, Michel Fortin  
<michel.fortin michelf.ca> wrote:

 Well, they are @safe as long as they're not called by the GC. I think you 
 could make them @safe even with the GC by changing things this way:

 1- make the GC call the destructor in the same thread the object was  
 created in (for non-shared objects), so any access to thread-local stuff  
 stays in the right thread, avoiding low-level races.
This needs to be done sooner rather than later. It would solve a lot of GC annoyances. I think in the ARC discussion, it also came up as a necessary step. -Steve
Apr 11 2014
prev sibling next sibling parent reply "Marc Schütz" <schuetzm gmx.net> writes:
On Saturday, 12 April 2014 at 03:02:56 UTC, Michel Fortin wrote:
 1- make the GC call the destructor in the same thread the 
 object was created in (for non-shared objects), so any access 
 to thread-local stuff stays in the right thread, avoiding 
 low-level races.
There also needs to be a mechanism to promote a local object to shared (and probably vice versa). This can be easily done with unique objects, although depending on the implementation, it would require moving memory.
 2- after the destructor is run on an object, wipe out the 
 memory block with zeros. This way if another to-be-destructed 
 object has a pointer to it, at worst it'll dereference a null 
 pointer. With this you might get a sporadic crash when it 
 happens, but that's better than memory corruption. You only 
 need to do this when allocated on the GC heap, and only 
 pointers need to be zeroed, and only if another object being 
 destroyed is still pointing to this object, and perhaps only do 
 it for @safe destructors.
More correctly, every reference to the destroyed object needs to be wiped, not the object itself. But this requires a fully precise GC.
Apr 12 2014
prev sibling next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Saturday, 12 April 2014 at 03:02:56 UTC, Michel Fortin wrote:
 2- after the destructor is run on an object, wipe out the 
 memory block with zeros. This way if another to-be-destructed 
 object has a pointer to it, at worst it'll dereference a null 
 pointer. With this you might get a sporadic crash when it 
 happens, but that's better than memory corruption. You only 
 need to do this when allocated on the GC heap, and only 
 pointers need to be zeroed, and only if another object being 
 destroyed is still pointing to this object, and perhaps only do 
 it for @safe destructors.
You don't get a crash, you get undefined behavior. That is much worse and certainly not safe.
Apr 12 2014
parent Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-04-12 09:01:12 +0000, "deadalnix" <deadalnix gmail.com> said:

 On Saturday, 12 April 2014 at 03:02:56 UTC, Michel Fortin wrote:
 2- after the destructor is run on an object, wipe out the memory block 
 with zeros. This way if another to-be-destructed object has a pointer 
 to it, at worst it'll dereference a null pointer. With this you might 
 get a sporadic crash when it happens, but that's better than memory 
 corruption. You only need to do this when allocated on the GC heap, and 
 only pointers need to be zeroed, and only if another object being 
 destroyed is still pointing to this object, and perhaps only do it for 
 @safe destructors.
You don't get a crash, you get undefined behavior. That is much worse and certainly not safe.
You get a null dereference. Because the GC will not free memory for 
objects in a given collection cycle until they're all destroyed, any 
reference to them will still be "valid" while the other object is being 
destroyed. In other words, if one of them was destroyed and it contained 
a pointer, that pointer will now be null. That null dereference is going 
to be like any other potential null dereference in @safe code: it is 
expected to crash.

There's still the problem of leaking a reference somewhere where it 
survives beyond the current collection cycle. My proposed solution 
doesn't work for that. :-(

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Apr 12 2014
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Saturday, 12 April 2014 at 03:02:56 UTC, Michel Fortin wrote:
 2- after the destructor is run on an object, wipe out the 
 memory block with zeros. This way if another to-be-destructed 
 object has a pointer to it, at worst it'll dereference a null 
 pointer. With this you might get a sporadic crash when it 
 happens, but that's better than memory corruption.
Other objects will have a valid pointer to the zeroed-out block and will 
be able to call its methods. They are likely to crash, but that's not 
guaranteed; they may just as well corrupt memory. Imagine the class has a 
pointer to a 10MB memory block, where the size is an enum and is 
therefore encoded in the function's code (so it won't be zeroed); after 
the clearing, the function may write to any region of that 10MB range 
through what is now a null pointer.
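
Something along these lines (a hypothetical sketch):

----------------------------------
class Big {
    enum blockSize = 10 * 1024 * 1024;  // baked into the generated code
    ubyte* block;                        // points at a 10MB allocation

    // final, so the call doesn't even need the (zeroed) vtable pointer
    final void clearSlot(size_t i) {
        // After the instance is zeroed, `block` reads as null, but a
        // surviving reference can still call this method. The write
        // below then lands at absolute address 0 + i; with i ranging
        // up to 10MB, that can hit mapped memory instead of faulting.
        block[i] = 0;
    }
}
----------------------------------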
Apr 12 2014
parent reply Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-04-12 10:29:50 +0000, "Kagamin" <spam here.lot> said:

 On Saturday, 12 April 2014 at 03:02:56 UTC, Michel Fortin wrote:
 2- after the destructor is run on an object, wipe out the memory block 
 with zeros. This way if another to-be-destructed object has a pointer 
 to it, at worst it'll dereference a null pointer. With this you might 
 get a sporadic crash when it happens, but that's better than memory 
 corruption.
Other objects will have a valid pointer to the zeroed-out block and will 
be able to call its methods. They are likely to crash, but that's not 
guaranteed; they may just as well corrupt memory. Imagine the class has a 
pointer to a 10MB memory block, where the size is an enum and is 
therefore encoded in the function's code (so it won't be zeroed); after 
the clearing, the function may write to any region of that 10MB range 
through what is now a null pointer.
Well, that's a general problem of @safe when dereferencing any 
potentially null pointer. I think Walter's solution was to insert a 
runtime check if the offset is going to be beyond a certain size. But 
there have been discussions on non-nullable pointers since then, and I'm 
not sure what Walter thought about them. The runtime check would help in 
this case, but not non-nullable pointers.

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Apr 12 2014
next sibling parent "Kagamin" <spam here.lot> writes:
On Saturday, 12 April 2014 at 11:06:33 UTC, Michel Fortin wrote:
 Well, that's a general problem of @safe when dereferencing any 
 potentially null pointer. I think Walter's solution was to 
 insert a runtime check if the offset is going to be beyond a 
 certain size.
Well, if you don't access anything beyond a certain offset, it doesn't make sense to declare something that large. So, it would be a compile-time check, not run-time.
Apr 12 2014
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 04/12/2014 01:06 PM, Michel Fortin wrote:
 On 2014-04-12 10:29:50 +0000, "Kagamin" <spam here.lot> said:

 On Saturday, 12 April 2014 at 03:02:56 UTC, Michel Fortin wrote:
 2- after the destructor is run on an object, wipe out the memory
 block with zeros. This way if another to-be-destructed object has a
 pointer to it, at worst it'll dereference a null pointer. With this
 you might get a sporadic crash when it happens, but that's better
 than memory corruption.
Other objects will have a valid pointer to the zeroed-out block and will 
be able to call its methods. They are likely to crash, but that's not 
guaranteed; they may just as well corrupt memory. Imagine the class has a 
pointer to a 10MB memory block, where the size is an enum and is 
therefore encoded in the function's code (so it won't be zeroed); after 
the clearing, the function may write to any region of that 10MB range 
through what is now a null pointer.
Well, that's a general problem of @safe when dereferencing any 
potentially null pointer. I think Walter's solution was to insert a 
runtime check if the offset is going to be beyond a certain size. But 
there have been discussions on non-nullable pointers since then, and I'm 
not sure what Walter thought about them. The runtime check would help in 
this case, but not non-nullable pointers.
Yes, they would help (eg. just treat every pointer as potentially null in a destructor.)
Apr 12 2014
prev sibling parent Michel Fortin <michel.fortin michelf.ca> writes:
On 2014-04-11 19:54:16 +0000, Michel Fortin <michel.fortin michelf.ca> said:

 Can destructors be @safe at all? When called from the GC the destructor 
 1) likely runs in a different thread and 2) can potentially access 
 other destructed objects, those objects might contain pointers to 
 deallocated memory if their destructor manually freed a memory block.
There's another issue I forgot to mention earlier: the destructor could 
leak the pointer to an external variable. Then you'll have a reference to 
a deallocated memory block.

Note that making the destructor pure only helps for the global variable 
case. The struct/class itself could contain a pointer to a global, or to 
another memory block that'll persist beyond the destruction of the 
object, and assign the pointer there. It can thus leak the deallocating 
object (or even "this" if it's a class) through that pointer.

-- 
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca
Apr 12 2014
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Friday, 11 April 2014 at 06:29:39 UTC, Nick Sabalausky wrote:
 Realistically, I would imagine this @trusted part should 
 *always* be a dummy wrapper over a specific @system function. 
 Why? Because @trusted disables ALL of @safe's extra safety 
 checks. Therefore, restricting usage of @trusted to ONLY be 
 dummy wrappers over the specific parts which MUST be @system 
 will minimize the amount of collateral code that must lose all 
 of @safe's special safety checks.
No. @trusted is about providing a safe interface to some unsafe 
internals. For instance, free cannot be @safe. But a function can do 
malloc and free in a safe manner. That function can thus be tagged 
@trusted.

When you tag something @trusted, you are saying that the parts aren't 
individually proven to be safe, but the developer ensured that the 
whole, as seen from outside, is safe. The thin wrapper thing doesn't 
really fit that model.
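
For example, something like this contrived sketch (not actual Phobos 
code) uses malloc and free internally but presents a safe interface:

----------------------------------
import core.stdc.stdlib : malloc, free;

// The unsafe calls never leak past the interface: the raw pointer is
// allocated, used, and freed entirely inside the function, so from the
// outside it behaves exactly like @safe code.
@trusted ulong sumOfSquares(uint n) {
    auto scratch = cast(ulong*) malloc(n * ulong.sizeof);
    if (scratch is null) return 0;
    scope(exit) free(scratch);            // the raw pointer never escapes

    ulong total = 0;
    foreach (i; 0 .. n) {
        scratch[i] = cast(ulong) i * i;   // contrived use of the buffer
        total += scratch[i];
    }
    return total;
}
----------------------------------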
 If there really *isn't* a reasonable way to wrap @system struct 
 destructors (ex: RefCounted) inside an @trusted wall, then any 
 such structs will poison all functions which touch them into 
 being @trusted, thus destroying the @safe safety checks for the 
 *entire* body of such functions. Well, that is, aside from any 
 portions of the function which don't touch the struct *and* can 
 be factored out into separate @safe helper functions - but that 
 solution seems both limited and contortion-prone.

 Any thoughts?
RefCounted can't be made @safe in any way given the current type system.
Apr 12 2014
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 4/12/2014 4:57 AM, deadalnix wrote:
 On Friday, 11 April 2014 at 06:29:39 UTC, Nick Sabalausky wrote:
 Realistically, I would imagine this @trusted part should *always* be a
 dummy wrapper over a specific @system function. Why? Because @trusted
 disables ALL of @safe's extra safety checks. Therefore, restricting
 usage of @trusted to ONLY be dummy wrappers over the specific parts
 which MUST be @system will minimize the amount of collateral code
 that must lose all of @safe's special safety checks.
No. @trusted is about providing a safe interface to some unsafe 
internals. For instance, free cannot be @safe. But a function can do 
malloc and free in a safe manner. That function can thus be tagged 
@trusted.
The problem with that is @trusted also disables all the SafeD checks for 
the *rest* of the code in your function, too. To illustrate, suppose you 
have this function:

void doStuff() {
    ...stuff...
    malloc()
    ...stuff...
    free()
    ...stuff...
}

Because of malloc/free, this function obviously can't be @safe 
(malloc/free are, of course, just examples here; they could be any 
@system functions). Problem is, that means for *everything* else in 
doStuff, *all* of the ...stuff... parts, you CANNOT enable the extra 
safety checks that @safe provides. The use of one @system func poisons 
the rest of doStuff's implementation (non-transitively) into being 
non-checkable via SafeD.

However, if you implement doStuff like this:

// Here I'm explicitly acknowledging that malloc/free are non-@safe
@trusted auto trustedWrapperMalloc(...) {...}
@trusted auto trustedWrapperFree(...) {...}

void doStuff() {
    ...stuff...
    trustedWrapperMalloc()
    ...stuff...
    trustedWrapperFree()
    ...stuff...
}

*Now* doStuff can be marked @safe and enjoy all the special checks that 
@safe provides.
Apr 12 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Saturday, 12 April 2014 at 22:02:26 UTC, Nick Sabalausky wrote:
 *Now* doStuff can be marked @safe and enjoy all the special 
 checks that @safe provides.
_and_ is terribly wrong, because it is not guaranteed to be safe for all 
use cases, breaking the type system once used anywhere but those 
"special" functions. I agree that @trusted functions should be as small 
as possible, but they still need to be self-contained.
Apr 12 2014
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 4/12/2014 7:08 PM, Dicebot wrote:
 On Saturday, 12 April 2014 at 22:02:26 UTC, Nick Sabalausky wrote:
 *Now* doStuff can be marked @safe and enjoy all the special checks
 that @safe provides.
_and_ is terribly wrong, because it is not guaranteed to be safe for all 
use cases, breaking the type system once used anywhere but those 
"special" functions.
If, as you say, this is wrong:

----------------------------------
@system auto foo() {...}

// Note, I meant for trustedWrapperWhatever to be private
// and placed together with doStuff. Obviously not a public
// func provided by foo's author.
@trusted private auto trustedWrapperFoo(...) {...}

@safe void doStuff() {
    ...stuff...

    // Yes, as the author of doStuff, I'm acknowledging
    // foo's lack of @safe-ty
    trustedWrapperFoo();

    ...stuff...
}
----------------------------------

Then how could this possibly be any better?:

----------------------------------
@system auto foo() {...}

@trusted void doStuff() {
    ...stuff...
    foo();
    ...stuff...
}
----------------------------------

The former contains extra safety checks (ie, for everything in 
"...stuff...") that the latter does not. The former is therefore better.
Apr 12 2014
parent "Dicebot" <public dicebot.lv> writes:
On Sunday, 13 April 2014 at 01:30:59 UTC, Nick Sabalausky wrote:
 // Note, I meant for trustedWrapperWhatever to be private
 // and placed together with doStuff. Obviously not a public
 // func provided by foo's author.
 @trusted private auto trustedWrapperFoo(...) {...}
Still accessible by other functions in the same module, unless you keep 
each @trusted function in its own module.
 ----------------------------------

 Then how could this possibly be any better?:

 ----------------------------------
 @system auto foo() {...}

 @trusted void doStuff() {
     ...stuff...
     foo();
     ...stuff...
 }
 ----------------------------------

 The former contains extra safety checks (ie, for everything in 
 "...stuff...") that the latter does not. The former is 
 therefore better.
Because @system does not give any guarantees: the type system expects 
that calling such a function can do anything horrible. @trusted, 
however, is expected to be 100% equivalent to @safe, with the only 
exception that its safety can't be verified by the compiler. From the 
type system's point of view, any @trusted function can be used in any 
context where @safe can be used.

It is your personal responsibility as a programmer to verify 100% safety 
of each @trusted function you write; otherwise anything can go wrong, 
and the writer will be the only one to blame.
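
To illustrate with Nick's own wrapper names (the bodies are filled in 
here as a hypothetical sketch):

----------------------------------
import core.stdc.stdlib : malloc, free;

@trusted private void* trustedWrapperMalloc(size_t n) { return malloc(n); }
@trusted private void trustedWrapperFree(void* p) { free(p); }

@safe void doStuff() {           // the intended, carefully reviewed use
    auto p = trustedWrapperMalloc(64);
    // ...stuff...
    trustedWrapperFree(p);
}

@safe void someOtherFunc() {     // elsewhere in the same module
    auto p = trustedWrapperMalloc(64);
    trustedWrapperFree(p);
    trustedWrapperFree(p);       // double free: still compiles as @safe,
                                 // because @trusted turned the checks off
}
----------------------------------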
Apr 13 2014