
digitalmars.D - Simplification of @trusted

RazvanN <razvan.nitu1305@gmail.com> writes:
Currently, @trusted applies only to functions. This is most of 
the time a pain when you want @trusted code blocks inside 
functions. Why not simplify it a bit by using @trusted scope 
blocks? E.g. this:

```d
void foo() @safe
{
     () @trusted { ... }();
}
```

becomes this:

```d
void foo() @safe
{
     @trusted
     {
        ...
     }
}
```
To make things easier, @trusted does not insert a scope (similar 
to `static if`).

Of course, the feature would be additive (you can have both 
@trusted functions and code blocks).

That would also provide an elegant workaround if void 
initialization is rejected in @safe code [1][2]. For example:

```d
void foo() @safe
{
     @trusted
     {
         int[100] a = void;
     }
     ...
}
```

What do you think?

Cheers,
RazvanN

[1] https://issues.dlang.org/show_bug.cgi?id=17566
[2] https://github.com/dlang/dlang.org/pull/2260
Jun 16 2021
jmh530 <john.michael.hall@gmail.com> writes:
On Wednesday, 16 June 2021 at 11:38:54 UTC, RazvanN wrote:
 [snip]

 ```d
 void foo() @safe
 {
      @trusted
     {
         int[100] a = void;
     }
     ...
 }
 ```

 [snip]
The documentation related to these @trusted blocks should emphasize that the block should be large enough to encompass enough information to verify the safety of what would normally require the function to be labelled @system. For instance, in your above example, just void initializing is @system, but if you fill `a` outside the @trusted block later, then it is harder to verify that it is actually safe.
Jun 16 2021
RazvanN <razvan.nitu1305@gmail.com> writes:
On Wednesday, 16 June 2021 at 12:06:56 UTC, jmh530 wrote:
 On Wednesday, 16 June 2021 at 11:38:54 UTC, RazvanN wrote:
 [snip]

 ```d
 void foo() @safe
 {
      @trusted
     {
         int[100] a = void;
     }
     ...
 }
 ```

 [snip]
 The documentation related to these @trusted blocks should emphasize that the block should be large enough to encompass enough information to verify the safety of what would normally require the function to be labelled @system. For instance, in your above example, just void initializing is @system, but if you fill `a` outside the @trusted block later, then it is harder to verify that it is actually safe.
I'm not sure what you are referring to. Whenever `a` is used outside the @trusted block, the compiler will apply the normal safety constraints. When `a` is used, the @trusted block has already been analyzed and any information regarding it will be present.
Jun 16 2021
jmh530 <john.michael.hall@gmail.com> writes:
On Wednesday, 16 June 2021 at 12:15:51 UTC, RazvanN wrote:
 [snip]

 I'm not sure what you are referring to. Whenever `a` is used
 outside the @trusted block, the compiler will apply the normal
 safety constraints. When `a` is used, the @trusted block
 has already been analyzed and any information regarding it
 will be present.
Below makes clear what I was thinking. In both, you void-initialize a pointer. However, `foo` assigns to the pointer within the @trusted block and `bar` assigns it outside the @trusted block. It is easier for another person to verify that the @trusted block is correct in `foo` than in `bar`. More of a best practice than anything else.

```d
void foo() @safe {
    int x;
    @trusted {
        int* p = void;
        p = &x;
    }
    ...
}

void bar() @safe {
    int x;
    @trusted {
        int* p = void;
    }
    ...
    p = &x;
}
```
Jun 16 2021
Steven Schveighoffer <schveiguy@gmail.com> writes:
On 6/16/21 8:06 AM, jmh530 wrote:
 On Wednesday, 16 June 2021 at 11:38:54 UTC, RazvanN wrote:
 [snip]

 ```d
 void foo() @safe
 {
      @trusted
     {
         int[100] a = void;
     }
     ...
 }
 ```

 [snip]
 The documentation related to these @trusted blocks should emphasize that the block should be large enough to encompass enough information to verify the safety of what would normally require the function to be labelled @system. For instance, in your above example, just void initializing is @system, but if you fill `a` outside the @trusted block later, then it is harder to verify that it is actually safe.
You mean like it does now?

```d
void foo() @safe {
    int[100] a = () @trusted { int[100] a = void; return a; }();
}
```

(in LDC, this compiles equivalent to Razvan's code above, not sure about DMD)

For @trusted blocks or inner @trusted functions, it's really difficult to say what parts are trusted and what parts are safe. See my [dconf online 2020 talk](http://dconf.org/2020/online/index.html#steven).

Right now, @safe has 2 meanings: one is that code within it is safe, one is that code marked @safe is mechanically checked by the compiler. Only the mechanical checking is guaranteed; the semantic meaning that the code actually is safe is easily thwarted by inner @trusted code. This is the impetus behind [DIP1035](https://github.com/dlang/DIPs/blob/master/DIPs/DIP1035.md).

But as long as we want code to do anything interesting, there is always going to be some @trusted code, and the risks that come with it. I would support @trusted blocks, as long as we can have @system variables (DIP1035) and variables declared inside a @trusted block were implicitly @system.

-Steve
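To illustrate the combination (a sketch only: the first function is today's lambda idiom and compiles with current compilers; the second is hypothetical syntax, since neither @trusted blocks nor DIP 1035 are in the language, so it is shown in comments):

```d
// Today's D: the @trusted lambda idiom. Everything outside the lambda
// is still mechanically checked.
void today() @safe
{
    int[100] a = () @trusted { int[100] a = void; return a; }();
}

// Hypothetical: a @trusted block with DIP 1035-style @system variables.
// Not valid D today; shown only to illustrate the combination.
// void proposed() @safe
// {
//     @trusted
//     {
//         int[100] a = void; // would be implicitly @system
//     }
//     // Reading or writing `a` here would itself require @trusted code.
// }
```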
Jun 16 2021
jmh530 <john.michael.hall@gmail.com> writes:
On Wednesday, 16 June 2021 at 13:17:41 UTC, Steven Schveighoffer 
wrote:
 [snip]

 You mean like it does now?
 [snip]
See the code example I have above. My point isn't about @trusted per se, it's about best practices for using a @trusted code block. In my opinion, your @trusted lambda example is a bad use of @trusted because you're not filling in the void-initialized variable within the @trusted code area. The person who is trying to manually verify that what is in the @trusted block is actually safe has to search for that outside the block.
Jun 16 2021
Steven Schveighoffer <schveiguy@gmail.com> writes:
On 6/16/21 9:22 AM, jmh530 wrote:
 On Wednesday, 16 June 2021 at 13:17:41 UTC, Steven Schveighoffer wrote:
 [snip]

 You mean like it does now?
 [snip]
 See the code example I have above. My point isn't about @trusted per se, it's about best practices for using a @trusted code block. In my opinion, your @trusted lambda example is a bad use of @trusted because you're not filling in the void-initialized variable within the @trusted code area. The person who is trying to manually verify that what is in the @trusted block is actually safe has to search for that outside the block.
Of course it's bad. But this is how code is written today, because @trusted is too blunt an instrument (I might want just void initialization of that one variable, but still want other safety checks throughout the rest of the function). My point (in a slightly snarky reply, apologies) is that we don't need a new @trusted block feature to have the documentation identify such pitfalls.

-Steve
Jun 16 2021
Nick Treleaven <nick@geany.org> writes:
On Wednesday, 16 June 2021 at 13:17:41 UTC, Steven Schveighoffer 
wrote:
 I would support @trusted blocks, as long as we can have @system 
 variables (DIP1035) and variables declared inside a @trusted 
 block were implicitly @system.
That would be great if both reading and writing those local @system variables wouldn't compile in @safe-only code. So it would require another @trusted block any time those variables were used. That could be the holy grail of supporting compiler checking of all @safe operations even in a function that does unsafe stuff.
Jun 18 2021
kinke <noone@nowhere.com> writes:
On Wednesday, 16 June 2021 at 11:38:54 UTC, RazvanN wrote:
 What do you think?
`unsafe {}` blocks. I absolutely hate the @trusted lambdas 'idiom'.
Jun 16 2021
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Jun 16, 2021 at 01:00:07PM +0000, kinke via Digitalmars-d wrote:
 On Wednesday, 16 June 2021 at 11:38:54 UTC, RazvanN wrote:
 What do you think?
 `unsafe {}` blocks. I absolutely hate the @trusted lambdas 'idiom'.
This isn't the first time it was suggested. Way back when, it was brought up and rejected because Walter thought that @trusted blocks should be discouraged, and therefore should be ugly to write. It was extensively argued, but Walter preferred the "trusted lambda idiom", precisely because it was ugly, required effort to write, and therefore deters casual (ab)uses of @trusted.

T

-- 
What is Matter, what is Mind? Never Mind, it doesn't Matter.
Jun 16 2021
RazvanN <razvan.nitu1305@gmail.com> writes:
On Wednesday, 16 June 2021 at 15:37:22 UTC, H. S. Teoh wrote:

 This isn't the first time it was suggested.  Way back when, it 
 was brought up and rejected because Walter thought that 
 @trusted blocks should be discouraged, and therefore should be 
 ugly to write.  It was extensively argued, but Walter preferred 
 the "trusted lambda idiom", precisely because it was ugly, 
 required effort to write, and therefore deters casual (ab)uses 
 of @trusted.


 T
I think that the time to reassess that decision has come. In reality, @trusted is needed to write **optimized** @safe code. Since there are a lot of scenarios where @trusted is needed, we should simply accept it and make it easy to use. Another counterargument is that it is so much easier to trust the entire function (instead of using @trusted lambdas) that the previous design decision ends up doing more harm than good.
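As a concrete illustration of the "optimized @safe code" point (a minimal sketch; the function is illustrative, not from the thread): skipping a bounds check in a hot loop is @system, so in today's D it needs the lambda idiom.

```d
// Summing an array without per-element bounds checks. Indexing through
// .ptr is @system, so the access is wrapped in a @trusted lambda while
// the rest of the function stays mechanically checked.
int sum(const int[] a) @safe
{
    int s = 0;
    foreach (i; 0 .. a.length)
        s += () @trusted { return a.ptr[i]; }(); // bounds check skipped
    return s;
}
```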
Jun 16 2021
Alexandru Ermicioi <alexandru.ermicioi@gmail.com> writes:
On Wednesday, 16 June 2021 at 15:37:22 UTC, H. S. Teoh wrote:
 On Wed, Jun 16, 2021 at 01:00:07PM +0000, kinke via 
 Digitalmars-d wrote:
 On Wednesday, 16 June 2021 at 11:38:54 UTC, RazvanN wrote:
 What do you think?
 `unsafe {}` blocks. I absolutely hate the @trusted lambdas 'idiom'.
 This isn't the first time it was suggested. Way back when, it was brought up and rejected because Walter thought that @trusted blocks should be discouraged, and therefore should be ugly to write. It was extensively argued, but Walter preferred the "trusted lambda idiom", precisely because it was ugly, required effort to write, and therefore deters casual (ab)uses of @trusted. T
Yet, it forces you to make the entire function @trusted if lambdas are not used, and @safe guarantees are lost for the remainder of the code due to that.

+1 for moving safety qualifiers to code blocks instead of functions.

Alex.
Jun 16 2021
IGotD- <nise@nise.com> writes:
On Wednesday, 16 June 2021 at 17:36:46 UTC, Alexandru Ermicioi 
wrote:
 Yet, it forces you to make the entire function @trusted if lambdas are 
 not used, and @safe guarantees are lost for the remainder of the code 
 due to that.

 +1 for moving safety qualifiers to code blocks instead of 
 functions.

 Alex.
I have a better idea, throw it all out. What is @safe? It's a limitation of operations you can do in D that might cause memory corruption, like pointer casts and such. Wouldn't it be enough that programmers themselves know about this and do not use those potentially harmful operations? That would be enough according to me, but let's say that the programmer doesn't remember what is unsafe/safe. Then a compiler switch that gives a warning would be enough, at least for me.

I couldn't care less about this safe/unsafe and it just gets in the way. It is also clear that despite wanting to automate safe code verification, you are unable to do so and the responsibility falls to the programmer anyway. That you are unable to solve how FFI should act (remember the famous DIP 1028) is also a reminder of that.
Jun 16 2021
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Jun 16, 2021 at 05:59:19PM +0000, IGotD- via Digitalmars-d wrote:
[...]
 I have a better idea, throw it all out. What is @safe? It's a
 limitation of operations you can do in D that might cause memory
 corruption, like pointer casts and such. Wouldn't it be enough that
 programmers themselves know about this and do not use those potentially
 harmful operations? That would be enough according to me but let's say
 that the programmer doesn't remember what is unsafe/safe. Then a
 compiler switch that gives a warning would be enough, at least for me.
This is a gross misunderstanding of @safe. The whole point of @safe is to minimize human error. Trusting the programmer to know better is what led to C's design with all of its security holes. Why bother with array length when we can just trust the programmer to do the right thing? Why not just pass bare pointers around and freely cast them to/from void*, since the programmer ought to know whether it's safe?

The last several decades of security flaws involving buffer overflows, memory corruption, and all of that nice stuff is proof that the programmer CANNOT be trusted. Programs are too complex for a human to write flawlessly. We need to mechanically verify that stuff is safe so that (most) human errors are caught early, before they get deployed to production and cause massive damage.

Of course, due to Turing completeness and the halting problem, you can never mechanically verify 100% of the code. Especially in system code, sometimes you DO need to trust that the programmer knows what he's doing. E.g., if you want to write a GC. So sometimes you need an escape hatch to allow you to go outside the safe box.

The whole idea behind @safe/@trusted/@system is that you want to allow the human to go outside the box sometimes, but you want to *minimize* that in order to reduce the surface area of potential human error. So most code should be @safe, and only occasionally @trusted when you need to do something the compiler cannot mechanically check. IOW, reduce the room for human error as much as possible. Even if we can never eliminate it completely, it's better to minimize it rather than do nothing at all.
 I couldn't care less about this safe/unsafe and it just gets in the
 way.
If you don't care about @safe, then why not just write @system code? @system code is allowed to freely call into @safe without any restrictions. You won't even need to know @safe exists if you don't use it.
 It is also clear that despite you want to automate safe code
 verification, you are unable to do so and the responsibility falls to
 the programmer anyway.  That you are unable to solve how FFI should
 act (remember the famous DIP 1028) is also a reminder of that.
This is not an all-or-nothing binary choice. *Ideally* we want to mechanically verify everything. But since that's impossible (cf. halting problem), we settle for mechanically verifying as much as we can, and leave the rest as @trusted blocks that require human verification. It's a practical compromise. It's proven that mechanical checks DO catch human errors, even if they won't catch *all* of them. It's better to catch *some* of them than none at all (cf. the past, oh, 30+ years of security exploits caused by C/C++'s lack of automated checks?).

T

-- 
MAS = Mana Ada Sistem?
Jun 16 2021
Alexandru Ermicioi <alexandru.ermicioi@gmail.com> writes:
On Wednesday, 16 June 2021 at 17:59:19 UTC, IGotD- wrote:
 I have a better idea, throw it all out. What is @safe? It's a 
 limitation of operations you can do in D that might cause 
 memory corruption, like pointer casts and such. Wouldn't it be 
 enough that programmers themselves know about this and do not use 
 those potentially harmful operations? That would be enough 
 according to me but let's say that the programmer doesn't 
 remember what is unsafe/safe. Then a compiler switch that gives 
 a warning would be enough, at least for me.

 I couldn't care less about this safe/unsafe and it just gets in 
 the way. It is also clear that despite you want to automate 
 safe code verification, you are unable to do so and the 
 responsibility falls to the programmer anyway. That you are 
 unable to solve how FFI should act (remember the famous DIP 
 1028) is also a reminder of that.
That is a no-go. Why should I leave verification of code to a human that is known to fail from time to time? C has no verification, and what is the result of this? Lots and lots of bugs due to human errors.

One more benefit of verification being present is that it saves me time. I don't have to be extra careful while writing code, and certainly won't need to spend more time debugging a bug that could be prevented by automatic code verification.
Jun 17 2021
Ola Fosheim Grøstad <ola.fosheim.grostad@gmail.com> writes:
On Thursday, 17 June 2021 at 09:56:52 UTC, Alexandru Ermicioi 
wrote:
 C has no verification, and what is the result of this? Lots 
 and lots of bugs due to human errors.
True, although there are dialects of C that have more advanced verification than D, both research projects and industrial projects.
 One more benefit of verification being present is that it saves 
 me time. I don't have to be extra careful while writing code, 
 and certainly won't need to spend more time debugging a bug 
 that could be prevented by automatic code verification.
Indeed. But if you think about C functions that require arrays of zero-terminated strings… Ok, you can create a simple @trusted wrapper, but then that wrapper has to check that all the strings are zero-terminated, which adds unacceptable overhead. So even in this trivial example the @trusted code has to assume that the provided data structure is correct, and thus it enables @safe code to make correct @trusted code unsafe.

It gets even more complicated in real system level programming where you might make a function @trusted because you know that when this function is called no other threads are running. That is an assumption about an invariant bound to time.

Proving things about timelines and concurrency is difficult/impossible. So, in practice, the correctness of @trusted is ad hoc, cannot be assumed to be local, and requires audits as the code base changes.

But it could be helpful to list the invariants unsafe code depends on, e.g.:

```
unsafe(assumes_singlethreaded){
   …fast update of shared datastructure…
}

unsafe(pointer_packing, pointer_arithmetics){
  …
}

unsafe(innocent_compiler_workaround){
  …
}
```

Now you have something to scan for. Like, in testing you could inject a check before the code that assumes no threads to be running. If you build with GC then you can scan all used libraries that do tricks with pointers and so on.

For true system level programming something like this (or more advanced) is needed for people to use it. Otherwise just slapping @system on all the code is the easier option. There has to be some significant benefit if you want programmers to add visual noise to their codebase.

You could also add a tag that says when the unsafe code was last audited (or at all):

```
unsafe(pointer_arithmetics, 2021-06-17){
  …
}
```
Jun 17 2021
ag0aep6g <anonymous@example.com> writes:
On 17.06.21 12:28, Ola Fosheim Grøstad wrote:
 Indeed. But if you think about C functions that require arrays of zero 
 terminated strings… Ok, you can create a simple @trusted wrapper, but 
 then that wrapper has to check that all the strings are zero terminated, 
 which adds unacceptable overhead. So even in this trivial example the 
 @trusted code has to assume that the provided data structure is correct, 
 and thus it enables @safe code to make correct @trusted code unsafe.
The function you describe simply can't be @trusted. If you need to call a function with a zero-terminated string, and you cannot afford to check that the string is indeed zero-terminated, then you just can't guarantee safety. A function that is not guaranteed to be safe is @system, not @trusted.
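A minimal sketch of that distinction (function names are illustrative, not from the thread): the checked wrapper can be @trusted because it verifies its own precondition; the unchecked one must stay @system.

```d
import core.stdc.string : strlen; // @system C function

// Can be @trusted: it verifies zero-termination before calling C,
// so no @safe caller can make it corrupt memory.
size_t checkedLen(const(char)[] s) @trusted
{
    if (s.length == 0 || s[$ - 1] != '\0')
        assert(0, "not zero-terminated");
    return strlen(s.ptr);
}

// Cannot validly be @trusted: its safety depends on a promise the
// language does not enforce, so it stays @system.
size_t uncheckedLen(const(char)[] s) @system
{
    return strlen(s.ptr);
}
```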
 It gets even more complicated in real system level programming where you 
 might make a function @trusted because you know that when this function 
 is called no other threads are running. That is an assumption about an 
 invariant bound to time.
That's also not a valid @trusted function. "It's safe as long as [some condition that's not guaranteed by the language]" describes an @system function, not an @trusted one. If you want to be extra clever and exploit conditions that are not guaranteed by the language, then you either have to make sure inside the @trusted function that the conditions are actually met, or you settle for @system. [...]
 For true system level programming something like this (or more advanced) 
 is needed for people to use it. Otherwise just slapping @system on all 
 the code is the easier option. There has to be some significant benefit 
 if you want programmers to add visual noise to their codebase.
True system level programming is going to be @system in D. I don't think that's much of a surprise.
Jun 17 2021
Ola Fosheim Grøstad <ola.fosheim.grostad@gmail.com> writes:
On Thursday, 17 June 2021 at 10:57:01 UTC, ag0aep6g wrote:
 The function you describe simply can't be @trusted. If you need 
 to call a function with a zero-terminated string, and you 
 cannot afford to check that the string is indeed 
 zero-terminated, then you just can't guarantee safety. A 
 function that is not guaranteed to be safe is @system, not 
 @trusted.
That basically means that all interesting system level code is @system, including all the code that calls it. That also means that you prevent system level programmers from benefiting from language safety checks!?

Here is the problem with that viewpoint, there is no way for the function to prove that the memory it receives has not been freed. So there is in fact no way for the function to ensure that it is @trusted. That applies to @safe functions too. There has to be a contract between caller and callee, those are the invariants that the unsafe code (and safe code) depends on.

So I strongly disagree with the viewpoint that @trusted cannot assume invariants to hold for the data it receives. That is mandatory for all correct code of some complexity.

For instance, in order to make the dmd lexer @trusted you would then require the lexer to do the allocation itself. If it accepts a filebuffer allocated outside the lexer then there is no way for the lexer to ensure that the sentinels (zeros at the end) are not overwritten by other code.

That is an unreasonable restriction that makes @trusted and @safe useless. The lexer should be allowed to assume that the invariants of the filebuffer hold when it takes ownership of it. It is difficult to prove without language level unique ownership, but it is unreasonable to make the lexer and everything that calls it @system, just because it accepts a filebuffer object.
 That's also not a valid @trusted function. "It's safe as long 
 as [some condition that's not guaranteed by the language]" 
 describes an @system function, not an @trusted one.
What are the invariants that are guaranteed by the language in a multi-threaded program that calls C code? What can you depend on? Is it at all possible to write a performant 3D game that isn't @system?
 If you want to be extra clever and exploit conditions that are 
 not guaranteed by the language, then you either have to make 
 sure inside the @trusted function that the conditions are 
 actually met, or you settle for @system.
But that is the signature of a very high level language, not of a system level language. In system level programming you cannot have dynamic checks all over the place, except in debug builds.
 True system level programming is going to be @system in D. I 
 don't think that's much of a surprise.
That makes Rust a much better option for people who care about safety. That is a problem.
Jun 17 2021
Paulo Pinto <pjmlp@progtools.org> writes:
On Thursday, 17 June 2021 at 11:16:47 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 17 June 2021 at 10:57:01 UTC, ag0aep6g wrote:
...
 True system level programming is going to be @system in D. I 
 don't think that's much of a surprise.
 That makes Rust a much better option for people who care about safety. That is a problem.
Actually that is the same road taken by Rust: all interop with C libraries is considered unsafe. You can enjoy endless amounts of unsafe in Microsoft's code samples for Windows coding with Rust and Win32 APIs.

https://github.com/microsoft/windows-rs/tree/master/examples
Jun 17 2021
Ola Fosheim Grøstad <ola.fosheim.grostad@gmail.com> writes:
On Thursday, 17 June 2021 at 11:51:03 UTC, Paulo Pinto wrote:
 Actually that is the same road taken by Rust, all interop with 
 C libraries is considered unsafe.
The big difference is that Rust _has_ language level unique ownership and a full blown borrow checker. So in that case the lexer can take over ownership and be certain that the filebuffer is fully isolated. If D wants to compete it has to be more pragmatic.

Anyway, it doesn't really matter what language lawyers say. People _will_ use `@trusted` in their system-level code bases as they see fit in order to get pragmatic safety, meaning not losing out on efficiency and still getting more checks than making everything `@system`. This is inevitable. Programmers care about what is best for _their project_, not what some goofy idealistic people claim on a philosophical level. This includes game-oriented libraries.

So there will never be an eco-system where `@trusted` has the semantics language lawyers claim that it should have. Therefore it is fatally flawed to make that requirement in the first place. It is a tool, not a religion. People are not afraid of going to `@safe` hell. If your only alternative is `@system`, then there is no reason for programmers not to abuse `@safe` and `@trusted`. Appealing to religion won't work.
Jun 17 2021
Paulo Pinto <pjmlp@progtools.org> writes:
On Thursday, 17 June 2021 at 12:14:18 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 17 June 2021 at 11:51:03 UTC, Paulo Pinto wrote:
 Actually that is the same road taken by Rust, all interop with 
 C libraries is considered unsafe.
 The big difference is that Rust _has_ language level unique ownership and a full blown borrow checker. So in that case the lexer can take over ownership and be certain that the filebuffer is fully isolated. If D wants to compete it has to be more pragmatic. Anyway, it doesn't really matter what language lawyers say. People _will_ use `@trusted` in their system-level code bases as they see fit in order to get pragmatic safety, meaning not losing out on efficiency and still getting more checks than making everything `@system`. This is inevitable. Programmers care about what is best for _their project_, not what some goofy idealistic people claim on a philosophical level. This includes game-oriented libraries. So there will never be an eco-system where `@trusted` has the semantics language lawyers claim that it should have. Therefore it is fatally flawed to make that requirement in the first place. It is a tool, not a religion. People are not afraid of going to `@safe` hell. If your only alternative is `@system`, then there is no reason for programmers not to abuse `@safe` and `@trusted`. Appealing to religion won't work.
Which is why some deployment platforms where security is the top selling point for their customers, like https://www.unisys.com/offerings/clearpath-forward/clearpath-forward-products, require admin access before a tainted binary (i.e. unsafe code) can be made executable.

Developers' point of view doesn't matter for security assessments.
Jun 17 2021
Ola Fosheim Grøstad <ola.fosheim.grostad@gmail.com> writes:
On Thursday, 17 June 2021 at 13:00:22 UTC, Paulo Pinto wrote:
 Which is why some deployment platforms where security is the 
 top selling point for their customers, like 
 https://www.unisys.com/offerings/clearpath-forward/clearpath-forward-products,
 require admin access before a tainted binary (i.e. unsafe code) can be
 made executable.

 Developers' point of view doesn't matter for security 
 assessments.
That makes a lot of sense for a commercial venture. You cannot actually modify the code after auditing unsafe code. That would have to trigger a new audit (hopefully automated).

There is some hope that in the future simpler functions can be fully specced formally and that implementations then can be automatically proven correct (with the right asserts). That could be a big change for open source (when/if) that happens. People could compete on performance on a function-by-function basis and users (or even compilers) could pick and choose knowing that they get the same output for the same input for all available implementations.
Jun 17 2021
ag0aep6g <anonymous@example.com> writes:
On 17.06.21 13:16, Ola Fosheim Grøstad wrote:
 That basically means that all interesting system level code is @system, 
 including all the code that calls it. That also means that you prevent 
 system level programmers from benefiting from language safety checks!?
Yes.
 Here is the problem with that viewpoint, there is no way for the 
 function to prove that the memory it receives has not been freed. So 
 there is in fact no way for the function to ensure that it is @trusted. 
 That applies to @safe functions too. There has to be a contract between 
 caller and callee, those are the invariants that the unsafe code (and 
 safe code) depends on.
It's not a viewpoint. It's how @system/@trusted/@safe are defined. Part of that definition is that pointer arguments to @safe and @trusted functions must be valid (not freed). If a freed pointer ends up as the argument to an @safe/@trusted function, you have an error in your @system/@trusted code. @safe code can't produce such a pointer, because it can't call `free`.
 So I strongly disagree with the viewpoint that @trusted cannot assume 
 invariants to hold for the data it receives. That is mandatory for all 
 correct code of some complexity.
It can assume the invariants that are guaranteed by the language. The language guarantees (and demands) that pointer arguments are valid.
 For instance, in order to make the dmd lexer @trusted you would then 
 require the lexer to do the allocation itself. If it accepts a 
 filebuffer allocated outside the lexer then there is no way for the 
 lexer to ensure that the sentinels (zeros at the end) are not 
 overwritten by other code.
I don't know DMD's source very well, so I can't make statements about that piece of code. But it wouldn't surprise me if it can't be validly @trusted. If you provide a concrete example (that is digestible in size), I can give my take on it.
 That is an unreasonable restriction that makes @trusted and @safe 
 useless. The lexer should be allowed to assume that the invariants of 
 the filebuffer hold when it takes ownership of it. It is difficult to 
 prove without language level unique ownership, but it is unreasonable to 
 make the lexer and everything that calls it @system, just because it 
 accepts a filebuffer object.
I don't mind you thinking that @trusted is useless. It is what it is. If you want something different, you'll have to push for change (i.e. nag Walter, write a DIP, make DMD pull requests).

Please don't mistake my insistence on the definition of @trusted as a defense of it. If @trusted falls short, then we need something better. But you can't just assert that @trusted really means something else beyond what's in the spec, something that isn't backed by Walter or DMD. That just adds to the confusion about @trusted which is already high.
 That's also not a valid @trusted function. "It's safe as long as [some 
 condition that's not guaranteed by the language]" describes an @system 
 function, not an @trusted one.
 What are the invariants that are guaranteed by the language in a 
 multi-threaded program that calls C code? What can you depend on?
@trusted functions can assume that they're only called with "safe values" and "safe aliasing" in the parameters. For details, see the spec:

https://dlang.org/spec/function.html#safe-interfaces
https://dlang.org/spec/function.html#safe-values
https://dlang.org/spec/function.html#safe-aliasing
 Is it at all possible to write a performant 3D game that isn't @system?
I don't know. I would expect that you need some @system code in there. But maybe the higher-level abstractions can be @trusted. [...]
 But that is the signature of a very high level language, not of a system 
 level language. In system level programming you cannot have dynamic 
 checks all over the place, except in debug builds.
D is both high and low level. At least, it tries to be. High level: garbage collection enables code to be @safe. Low level: You can avoid garbage collection in @system code. DIP 1000 tries to make some lower-level code @safe, but it's clearly not a cure-all. [...]
 That makes Rust a much better option for people who care about safety. 
 That is a problem.
I don't have an opinion on this.
Jun 17 2021
Ola Fosheim Grøstad <ola.fosheim.grostad@gmail.com> writes:
On Thursday, 17 June 2021 at 12:06:45 UTC, ag0aep6g wrote:
 It's not a viewpoint. It's how @system/@trusted/@safe are 
 defined.
Ok. But then the definition has some big _real world_ holes in it.
 Part of that definition is that pointer arguments to @safe and 
 @trusted functions must be valid (not freed). If a freed 
 pointer ends up as the argument to an @safe/@trusted function, 
 you have an error in your @system/@trusted code. @safe code 
 can't produce such a pointer, because it can't call `free`.
It can't call free, but since the language does not have a full blown borrow checker or isolated ownership pointer types, there is also no way anyone can be 100% certain (as in provably correct code).

My take on this is that interfacing with C/C++ undermines @safe to such an extent that C/C++ interop isn't really as big of a selling point as it is made out to be (meaning you have to choose either @safe or C/C++ interop). I think that is a problem. If you have two big features then you shouldn't have to choose. The conception of @safe has to work well for people who write large applications with lots of C/C++ interop.
 It can assume the invariants that are guaranteed by the 
 language. The language guarantees (and demands) that pointer 
 arguments are valid.
But it does not guarantee anything about the content that is being pointed to. That will trip up most interesting use cases for unsafe code. Just think about an array with memory-offsets. That definition makes @trusted mostly useless, as @safe code can clearly change those memory-offsets. That prevents interesting high performance ADTs from being @safe, even when they are correctly implemented. You actually should think of the whole class as @trusted then.
 I don't know DMD's source very well, so I can't make statements 
 about that piece of code. But it wouldn't surprise me if it 
 can't be validly @trusted. If you provide a concrete example 
 (that is digestible in size), I can give my take on it.
When you ask for the next lexeme the lexer advances a pointer; if it hits a zero character it stops advancing.

For this to be @trusted, by the @safe requirements, the lexer cannot accept a filebuffer it has not allocated itself, as that makes it possible for external code to overwrite the zeros. That is not an acceptable restriction.

The lexer should only require the filebuffer invariant of unique ownership and no borrowed pointers to hold. Then the lexer can add the zero-character at the end of the buffer and it will be @safe. That is the only acceptable take on @trusted in my view. Anything more restrictive than this makes @trusted useless.
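A sketch of the pattern being described (names and layout hypothetical, not dmd's actual lexer). The @trusted advance is only sound while the sentinel invariant holds, which the language cannot check here:

```d
// Minimal sentinel-based lexer sketch. The pointer advance skips bounds
// checks entirely; it relies on the invariant that the buffer ends in a
// zero byte that no other code can overwrite.
struct Lexer
{
    private const(char)* p; // invariant: points into a zero-terminated buffer

    char next() @trusted
    {
        if (*p == 0)
            return 0;    // sentinel reached: never advance past it
        return *(++p);   // sound only while the sentinel invariant holds
    }
}
```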
 @trusted as a defense of it. If @trusted falls short, then we 
 need something better. But you can't just assert that @trusted 
 really means something else beyond what's in the spec, 
 something that isn't backed by Walter or DMD. That just adds to 
 the confusion about @trusted which is already high.
Ok. But then Walter has to provide a clean description of how @trusted can work without making _any_ assumptions about invariants of data structures provided through arguments. It is not realistic. Not at all!
 @trusted functions can assume that they're only called with 
 "safe values" and "safe aliasing" in the parameters.
I don't think this is enough to prevent @safe code from tripping up @trusted code, as it would prevent many interesting ADTs from being implemented efficiently. Meaning, you would have to restrict yourself to safe practices (like bounds checks).
 Note that that part of the spec is largely my attempt at 
 pinning down what Walter means by "safe interface". There are 
 certainly still some things missing. But the gist is there, and 
 it has Walter's blessing.
Got it.
 D is both high and low level. At least, it tries to be. High 
 level: garbage collection enables code to be @safe. Low level: 
 You can avoid garbage collection in @system code. DIP 1000 
 tries to make some lower-level code @safe, but it's clearly not 
 a cure-all.
Ok, but it is not _realistic_ to think that D users will not write code that they think is _good enough_ for their purpose. Since there is no way to verify that they adhere to idealistic principles, it won't happen. So, you can get Phobos to adhere to it, but basically no other libraries will. And applications will most certainly make choices on a case-by-case evaluation.

Now, I am not against Phobos being held to a higher standard, it should! But there is no way other people will follow those high ideals.
Jun 17 2021
Mathias LANG <geod24@gmail.com> writes:
On Thursday, 17 June 2021 at 12:52:40 UTC, Ola Fosheim Grøstad 
wrote:
 It can't call free, but since the language does not have a full 
 blown borrow checker or isolated ownership pointer types, there 
 is also no way anyone can be 100% certain (as in provably 
 correct code).
Wat? That doesn't make any sense. A function that would free its input *has to be @system*.
 My take on this is that interfacing with C/C++ undermines @safe 
 to such an extent that C/C++ interop isn't really as big of a 
 selling point as it is made out to be (meaning you have to 
 choose either @safe or C/C++ interop). I think that is a 
 problem. If you have two big features then you shouldn't have 
 to choose. The conception of @safe has to work well for people 
 who write large applications with lots of C/C++ interop.
C++ interop is what convinced my company to use D in the first place. You're right that those two features have friction, but I take C/C++ interop over `@safe` any day of the week.
 But it does not guarantee anything about the content that is 
 being pointed to. That will trip up most interesting use cases for 
 unsafe code. Just think about an array with memory-offsets.
Anything that deals with an array of memory offsets needs to be encapsulated in its own data structure. `@safe` is about exposing a `@safe` interface, that is, something that can't be misused. If you use an array of memory offsets, then you have to do pointer arithmetic, which is not `@safe`.
 That definition makes @trusted mostly useless as @safe code can 
 clearly change those memory-offsets. That prevents interesting 
 high performance ADTs from being @safe, even when they are 
 correctly implemented. You actually should think of the 
 whole class as @trusted then.
You *can't* mark a function as @trusted if it accepts an array of memory offsets and just uses it. And you can't call that "correctly implemented", either.
Jun 17 2021
Ola Fosheim Grøstad <ola.fosheim.grostad@gmail.com> writes:
On Thursday, 17 June 2021 at 13:19:01 UTC, Mathias LANG wrote:
 Wat? That doesn't make any sense. A function that would free 
 its input *has to be @system*.
Yes, but that was not the point. If you call C code, it may release a pointer deep down in the data structure without setting the pointer to NULL. The invariant ensures that it isn't used. Then you call D, and D claims that the pointer has to be null or you break @trusted, even if you never dereference? Also, it claims that you cannot make assumptions about size, which would be an invariant (e.g. a size field would be tied to an invariant). This is not a language level guarantee, so it cannot be used...
 Anything that deals with an array of memory offsets needs to be 
 encapsulated in its own data structure. `@safe` is about 
 exposing a `@safe` interface, that is, something that can't be 
 misused. If you use an array of memory offsets, then you have 
 to do pointer arithmetic, which is not `@safe`.
But this is not enough. @trusted apparently requires you to assume that the data structure can be filled with garbage and using it should still always be safe? Otherwise you assume that invariants hold.
 You *can't* mark a function as @trusted if it accepts an array 
 of memory offsets and just uses it. And you can't call that 
 "correctly implemented", either.
Why not? It is protected by private. You CAN if you only access it with @trusted member functions. However, if you have one @safe member function then a bug in that one could accidentally modify the offsets. As a consequence you can ONLY have @trusted member functions; not even a single @safe member function that does nothing can be allowed, as it could in theory contain a bug that could change the offsets. Or, where am I wrong now?
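A sketch of the kind of type in question (hypothetical, for illustration): the @trusted accessor relies on an invariant that a buggy @safe member could silently break, which is why the whole type ends up needing review.

```d
// Hypothetical offset-based container. current() skips the bounds check
// and is sound only while `offset < data.length` holds.
struct OffsetArray
{
    private ubyte[] data;
    private size_t offset; // invariant: offset < data.length

    ubyte current() @trusted
    {
        return data.ptr[offset]; // no bounds check; relies on the invariant
    }

    // Mechanically fine as @safe, yet a bug here (no range check) can
    // break the invariant that current() depends on.
    void seek(size_t o) @safe { offset = o; }
}
```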
Jun 17 2021
ag0aep6g <anonymous@example.com> writes:
On 17.06.21 14:52, Ola Fosheim Grøstad wrote:
 When you ask for the next lexeme the lexer advances a pointer, if it 
 hits a zero character it stops advancing.
 
 For this to be @trusted, by the @safe requirements, the lexer cannot 
 accept a filebuffer it has not allocated itself, as that makes it 
 possible for external code to overwrite the zeros.
That's right. An @trusted function cannot ever advance a pointer it received from the outside. It can only assume that the pointer can be dereferenced. [...]
 The lexer should only require the filebuffer invariant of unique 
 ownership and no borrowed pointers to hold. Then the lexer can add the 
 zero-character at the end of the buffer and it will be @safe.
You're losing me. You wrote that the lexer advances a pointer to a "character". I figure that means it has a `char*` parameter. What's a filebuffer? If it's a struct around a `char*`, why is the lexer manipulating the pointer directly instead of calling some method of the filebuffer? An example in code (instead of descriptions) would go a long way. [...]
 I don't think this is enough to prevent @safe code from tripping up 
 @trusted code as it would prevent many interesting ADTs from being 
 implemented efficiently. Meaning, you would have to restrict yourself to 
 safe practices (like bounds checks).
I'm sure many interesting types are not compatible with @safe/@trusted. DIP 1000, Walter's @live experiments, and @system variables (DIP 1035 [1]) might enable some more. The ideas in this thread might go somewhere, too. [...]
 Ok, but it is not _realistic_ to think that D users will not write code 
 that they think is _good enough_ for their purpose. Since there is no 
 way to verify that they adhere to idealistic principles, it won't happen.
I think there's almost a consensus that @trusted isn't quite good enough, exactly because no one can be bothered to use it as intended. That's why the incremental improvements mentioned above are being worked on. I don't think anyone is working on redesigning (or reinterpreting) the whole thing from the ground up. And I would expect strong push-back from Walter if someone tried.
 So, you can get Phobos to adhere to it, but basically no other libraries 
 will. And applications will most certainly make choices on a 
 case-by-case evaluation.
Phobos isn't even close to adhering to it. Yes, that's a problem.

[1] https://github.com/dlang/DIPs/blob/master/DIPs/DIP1035.md
Jun 17 2021
rikki cattermole <rikki@cattermole.co.nz> writes:
On 18/06/2021 1:54 AM, ag0aep6g wrote:
 Ok, but it is not _realistic_ to think that D users will not write 
 code that they think is _good enough_ for their purpose. Since there 
 is no way to verify that they adhere to idealistic principles, it 
 won't happen.
 I think there's almost a consensus that @trusted isn't quite good enough, exactly because no one can be bothered to use it as intended. That's why the incremental improvements mentioned above are being worked on. I don't think anyone is working on redesigning (or reinterpreting) the whole thing from the ground up. And I would expect strong push-back from Walter if someone tried.
What might be a good thing to have is for the compiler to be able to produce an audit report for @trusted code, along with a list of all @trusted symbols in source code (not generated). You can diff the list to see which symbols were added, and ensure somebody audits them at PR time with the help of the CI.
Jun 17 2021
Ola Fosheim Grøstad <ola.fosheim.grostad@gmail.com> writes:
On Thursday, 17 June 2021 at 13:54:06 UTC, ag0aep6g wrote:
 That's right. An @trusted function cannot ever advance a 
 pointer it received from the outside. It can only assume that 
 the pointer can be dereferenced.
This is way too constraining. At the very least it should accept a Phobos ownership-transfer wrapper from which you can obtain a buffer as a one-time transfer. Meaning, if you try to obtain it, it is moved out of the wrapper. That would be perfectly safe.
 You're losing me. You wrote that the lexer advances a pointer 
 to a "character". I figure that means it has a `char*` 
 parameter. What's a filebuffer? If it's a struct around a 
 `char*`, why is the lexer manipulating the pointer directly 
 instead of calling some method of the filebuffer?
Let us assume filebuffer is just a wrapper that transfers ownership. It prevents borrowing, and ownership can only be transferred once. This is fully encapsulated. That is the invariant for the filebuffer. Yet, as it is only an invariant, you would need to reallocate the buffer and then stream the filebuffer over into your own buffer according to the requirements you've been kind enough to point out to me.
 An example in code (instead of descriptions) would go a long 
 way.
I guess, but it is too simple. It is conceptually just `if (*ptr==0) return 0; return *(++ptr);`. For the @trusted code to rely on this it has to know that it has unique access to the buffer. The filebuffer object guarantees this, but it is an invariant; it cannot be checked by the language. As a consequence you trigger completely pointless copying.
 I think there's almost a consensus that @trusted isn't quite 
 good enough, exactly because no one can be bothered to use it 
 as intended.
I am sure that it will be used as intended in Phobos, or at least that this is achievable, but not in the kind of libraries that would be targeting games and such. And well, one selling point for D could be that it is possible to write games in a _safer_ environment than C++...
 Phobos isn't even close to adhering to it. Yes, that's a 
 problem.
Well, then nobody else will either. :-/
Jun 17 2021
Dukc <ajieskola@gmail.com> writes:
On Thursday, 17 June 2021 at 12:52:40 UTC, Ola Fosheim Grøstad 
wrote:
 My take on this is that interfacing with C/C++ undermines @safe 
 to such an extent that C/C++ interop isn't really as big of a 
 selling point as it is made out to be (meaning you have to 
 choose either @safe or C/C++ interop). I think that is a 
 problem. If you have two big features then you shouldn't have 
 to choose. The conception of @safe has to work well for people 
 who write large applications with lots of C/C++ interop.
No language can do this. A C++ API does not provide any safety guarantees, so calling a C++ function means that it needs to be manually verified, or its authors trusted, BY DEFINITION. The only way around this would be to implement an automatic safety checker on the C++ side. If it has to work out of the box on most real-world code, I don't think we will see such a complex checker in decades.

I suspect you're trying to say that because of the above, we would have to conclude that good C++ interop and memory safety guarantees should never be mixed in one language, D or otherwise. If that's the case, the only conclusion I can draw is that the D philosophy is fundamentally wrong from your point of view. D is all about letting the programmer pick the paradigm according to the situation, instead of being designed for just one of them. This philosophy is rooted so deep that if it proves to be just plain wrong, we're best off to just ditch D and switch to other languages.

I sure hope that won't happen.
Jun 17 2021
Ola Fosheim Grøstad <ola.fosheim.grostad@gmail.com> writes:
On Thursday, 17 June 2021 at 15:08:53 UTC, Dukc wrote:
 No language can do this. A C++ API does not provide any safety 
 guarantees, so calling a C++ function means that it needs to be 
 manually verified, or its authors trusted, BY DEFINITION.
Sure, but that is obviously not enough. Because what is being said implies that @trusted code has to assume that anything it receives that isn't pointers can be garbage, and that such garbage should never lead to memory unsafety, even if _you know_ that the @trusted function never receives garbage.
 If that's the case, the only conclusion I can draw is that D 
 philosophy is fundamentally wrong from your point of view. D is 
 all about letting the programmer pick the paradigm according to 
 the situation, instead of being designed for just one of them. 
 This philosophy is rooted so deep that if it proves to be just 
 plain wrong, were best off to just ditch D and switch to other 
 languages.

 I sure hope that won't happen.
My conclusion so far is that it is unrealistic to think that anyone would write code that satisfies the requirements put upon @trusted functions for a program the size of a desktop application. It is even unrealistic to think that the average D programmer will understand what the requirements for @trusted are!
Jun 17 2021
Dukc <ajieskola@gmail.com> writes:
On Thursday, 17 June 2021 at 17:24:27 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 17 June 2021 at 15:08:53 UTC, Dukc wrote:
 No language can do this. A C++ API does not provide any safety 
 guarantees, so calling a C++ function means that it needs to 
 be manually verified, or its authors trusted, BY DEFINITION.
 Sure, but that is obviously not enough. Because what is being said implies that @trusted code has to assume that anything it receives that isn't pointers can be garbage, and that such garbage should never lead to memory unsafety, even if _you know_ that the @trusted function never receives garbage.
Ah, I guess the problem is that someone phrased that slightly wrong. Let me try a better formulation.

Given a module `module_a`, if a client module `module_b` that imports only `module_b`, and contains only `@safe` code, cannot cause memory corruption (except due to compiler/OS/hardware bugs), then and only then is the API of `module_a` sound with regards to memory safety.

This means that a `@trusted` or `@safe` function is allowed to assume certain invariants about some types, as long as those invariants cannot be violated from `@safe` client code alone. This also means that `@safe` code that is in `module_a` may be able to violate memory safety. DIP1035 aims to address that.
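A sketch of that formulation in code (the modules and the type are hypothetical): `module_a`'s API is sound if a client using only `@safe` code cannot break the invariant the `@trusted` member relies on.

```d
module module_a;

struct Span
{
    private int* p;     // invariant: p points to at least `len` ints
    private size_t len;

    this(int[] a) @trusted { p = a.ptr; len = a.length; }

    // May assume the invariant: @safe client code cannot touch the
    // private fields, so it cannot violate it. @safe code *inside*
    // module_a still could, which is what DIP 1035 targets.
    int opIndex(size_t i) @trusted
    {
        assert(i < len);
        return p[i];
    }
}
```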
Jun 17 2021
Dukc <ajieskola@gmail.com> writes:
On Thursday, 17 June 2021 at 17:49:39 UTC, Dukc wrote:
 Given a module `module_a`, if a client module `module_b` that 
 imports only `module_b`
Meant: that imports only `module_a`
Jun 17 2021
Ola Fosheim Grøstad <ola.fosheim.grostad@gmail.com> writes:
On Thursday, 17 June 2021 at 17:49:39 UTC, Dukc wrote:
 This means that a `@trusted` or `@safe` function is allowed to 
 assume certain invariants about some types, as long as those 
 invariants cannot be violated from `@safe` client code alone. 
 This also means that `@safe` code that is in `module_a` may be 
 able to violate memory safety. DIP1035 aims to address that.
So if I control `module_0` and `module_a` depends on it, then I can assume the invariants for types in `module_0` as long as `module_b` cannot break those invariants from @safe code?
Jun 17 2021
Dukc <ajieskola@gmail.com> writes:
On Thursday, 17 June 2021 at 18:02:26 UTC, Ola Fosheim Grøstad 
wrote:
 So if I control `module_0` and `module_a` depends on it, then I 
 can assume the invariants for types in `module_0` as long as 
 `module_b` cannot break those invariants from @safe code?
Well, if you make `module_0` or `module_a` unsound with your changes to `module_0`, then there's no telling what `module_b` will pass to your `@trusted` functions. But yes, you can assume those invariants as long as your API is sound when the invariants hold.
Jun 17 2021
Ola Fosheim Grøstad <ola.fosheim.grostad@gmail.com> writes:
On Thursday, 17 June 2021 at 18:25:50 UTC, Dukc wrote:
 Well, if you make `module_0` or `module_a` unsound with your 
 changes to `module_0`, then there's no telling what `module_b` 
 will pass to your `@trusted` functions. But yes, you can assume 
 those invariants as long as your API is sound when the 
 invariants hold.
Yes, let's assume I annotate my unsafe code with the invariants it depends on and do a new audit if any invariants in my modules change. Those are my modules after all, so it isn't beyond reason for me to do this. (Would be nice if it could be checked by machinery, of course, but a manual audit is sufficient to discuss the core requirements of @trusted and @safe. :-)
Jun 17 2021
Elronnd <elronnd@elronnd.net> writes:
On Thursday, 17 June 2021 at 10:28:42 UTC, Ola Fosheim Grøstad 
wrote:
 it could be helpful to list the invariants unsafe code depends 
 on, e.g.:


 ```
  unsafe(assumes_singlethreaded){
    …fast update of shared datastructure…
 }

  unsafe(pointer_packing, pointer_arithmetics){
This also opens the door to more sophisticated compiler checking. E.g. an unsafe(pointer packing, pointer arithmetic) function can call an unsafe(pointer arithmetic) function, but not an unsafe(assumes singlethreaded) function.
Jun 18 2021
Alexandru Ermicioi <alexandru.ermicioi@gmail.com> writes:
On Thursday, 17 June 2021 at 10:28:42 UTC, Ola Fosheim Grøstad 
wrote:
 Indeed. But if you think about C functions that require arrays 
 of zero terminated strings… Ok, you can create a simple 
 @trusted wrapper, but then that wrapper has to check that all 
 the strings are zero terminated, which adds unacceptable 
 overhead. So even in this trivial example the @trusted code has 
 to assume that the provided data structure is correct, and thus 
 it enables @safe code to make correct @trusted code unsafe.
There are always trade-offs. It is best for @trusted code to check its input arguments, and output too. Luckily, we also have contract programming for this. If you want performance, then it is really up to the owner of the project to decide. In the worst case, @trusted at least marks pieces of code that need extra care during review, and from a testing point of view.
 It gets even more complicated in real system level programming 
 where you might make a function @trusted because you know that 
 when this function is called no other threads are running. That 
 is an assumption about an invariant bound to time.
 
 Proving things about timelines and concurrency is 
 difficult/impossible. So, in practice, the correctness of 
 @trusted is ad hoc, cannot be assumed to be local and requires 
 audits as the code base changes.
Correct me if I'm wrong, but this is also true for @safe functions, since @safe is mainly about memory, not concurrency.
 But it could be helpful to list the invariants unsafe code 
 depends on, e.g.:


 ```
  unsafe(assumes_singlethreaded){
    …fast update of shared datastructure…
 }

  unsafe(pointer_packing, pointer_arithmetics){
  …
 }

  unsafe(innocent_compiler_workaround){
  …
 }
 ```
This may work if the safety system offers safety not only for memory. I think you could rename unsafe to @trusted, implying that the underlying function is aware of the issue and resolves it at runtime somehow.
 Now you have something to scan for. Like, in testing you could 
 inject a check before the code that assumes no threads to be 
 running. If you build with GC then you can scan all used 
 libraries that does tricks with pointers and so on.
The thing about @trusted, per my understanding, is that it talks about the inner code of the function itself, not the behavior of the calling function. If you want safety for input arguments, then call it from within a @safe function.
 For true system level programming something like this (or more 
 advanced) is needed for people to use it. Otherwise just 
 slapping  system on all the code is the easier option. There 
 has to be some significant benefits if you want programmers to 
 add visual noise to their codebase.
Well, it might be for true systems programming, but let's not forget that we have other use cases for D, such as GUI apps, web apps, console apps, and a plethora of other uses which might not necessarily require every inch of performance to be squeezed out, and which focus more on convenience and better safety for a lower cost of developing the product.

In summary, we do need to improve the @trusted functionality, and it is best to account for all use cases of D, or at least to have a consistent and easy to use (not misuse) feature.

P.S. I really try to avoid the use of @trusted lambdas as much as possible, but sometimes I can't, and then I have the choice either to use a lambda or to slap @trusted on the entire function, which I hate. And no, extracting a two-word unsafe operation out of a four-word method into a separate method is not an appealing solution for me.
Jun 19 2021
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Jun 16, 2021 at 05:36:46PM +0000, Alexandru Ermicioi via Digitalmars-d
wrote:
 On Wednesday, 16 June 2021 at 15:37:22 UTC, H. S. Teoh wrote:
[...]
 This isn't the first time it was suggested.  Way back when, it was
 brought up and rejected because Walter thought that  trusted blocks
 should be discouraged, and therefore should be ugly to write.  It
 was extensively argued, but Walter preferred the "trusted lambda
 idiom", precisely because it was ugly, required effort to write, and
 therefore deters casual (ab)uses of  trusted.
[...]
 Yet, it forces to make entire function trusted if lambdas are not
 used, and safe guarantees are lost to remainder of the code due to
 that.
Yeah, that's a flaw that ought to be fixed. Marking an entire function @trusted is generally a bad idea, unless it's a trivial function of 3-4 lines or less, because it turns off ALL safety checks in the function body. If the function is large, that's an onerous burden to review whether or not the code is indeed trustworthy. What we really want is to keep those checks on except for the few bits of code that the compiler cannot automatically verify.

This has also been suggested before, and people are saying it again (and in principle I agree):

1) Change the meaning of @trusted such that in a @trusted function, @safe checks are NOT suppressed by default. Instead, a @trusted function allows @system blocks in its body where @safe checks are temporarily suspended.

2) @system blocks are NOT allowed inside a @safe function.

Here's the reasoning:

- @trusted marks the *entire* function tainted and needing to be reviewed -- this is necessary because the safety of @system blocks therein often depends on the surrounding code context, and has to be reviewed in that larger context. It is not sufficient to review only the blocks containing @system code.

- Leaving @safe checks on outside @system blocks allows the @trusted function to be still mechanically checked to minimize human error. This allows the unchecked code to be confined to as small of a code block as possible. Basically, limit the surface area of potential errors.

- We continue to maintain a difference between a @safe function and a @trusted function, because we do not want to allow @system blocks in a @safe function -- that would make @safe essentially meaningless (anybody can just throw in @system blocks in @safe code to bypass safety checks). Such escapes are only allowed if the entire function is marked @trusted, making it clear that it potentially does something unsafe and therefore needs careful scrutiny.

T

-- 
One Word to write them all, One Access to find them, One Excel to count them all, And thus to Windows bind them. -- Mike Champion
Jun 16 2021
prev sibling parent reply Per Nordlöw <per.nordlow gmail.com> writes:
On Wednesday, 16 June 2021 at 15:37:22 UTC, H. S. Teoh wrote:
 This isn't the first time it was suggested.  Way back when, it 
 was brought up and rejected because Walter thought that 
  trusted blocks should be discouraged, and therefore should be 
 ugly to write.  It was extensively argued, but Walter preferred 
 the "trusted lambda idiom", precisely because it was ugly, 
 required effort to write, and therefore deters casual (ab)uses 
 of  trusted.
unsafe { ... }

Can we please allow `unsafe { ... }`, Walter? It's trivial to grep for `@trusted` to find all possible safety violations in projects.
Jul 11 2021
parent Bruce Carneal <bcarneal gmail.com> writes:
On Sunday, 11 July 2021 at 08:36:33 UTC, Per Nordlöw wrote:
 On Wednesday, 16 June 2021 at 15:37:22 UTC, H. S. Teoh wrote:
 This isn't the first time it was suggested.  Way back when, it 
 was brought up and rejected because Walter thought that 
  trusted blocks should be discouraged, and therefore should be 
 ugly to write.  It was extensively argued, but Walter 
 preferred the "trusted lambda idiom", precisely because it was 
 ugly, required effort to write, and therefore deters casual 
 (ab)uses of  trusted.
unsafe { ... }

Can we please allow `unsafe { ... }`, Walter? It's trivial to grep for `@trusted` to find all possible safety violations in projects.
Localization/minimization of code that must be reviewed for basic safety is very desirable. The quiet pollution caused by a nested non-static @trusted lambda within code marked @safe is not.

IIUC, ongoing tolerance of such lambdas means that all @safe code must be grepped/reviewed or else blindly trusted. IOW, as things stand currently, @safe code bodies must be treated as @trusted until manually proven otherwise.

My preference is to move in the other direction, towards @safe checking-by-default within @trusted blocks with @system syntax to escape from there (a backwards compatible transition proposal for this with simple syntax to be discussed at July beerconf).
Jul 13 2021
prev sibling parent Per Nordlöw <per.nordlow gmail.com> writes:
On Wednesday, 16 June 2021 at 13:00:07 UTC, kinke wrote:

 `unsafe {}` blocks. I absolutely hate the trusted lambdas 
 'idiom'.
I completely agree.
Jun 20 2021
prev sibling next sibling parent reply Sönke Ludwig <sludwig outerproduct.org> writes:
Am 16.06.2021 um 13:38 schrieb RazvanN:
 Currently,  trusted applies only to functions. This is most of the times 
 a pain when you want trusted code blocks inside functions. Why not 
 simplify it a bit by using trusted scope blocks?
Yes, please! There are 800 of these in vibe.d alone. There has also been an issue where the delegate workaround was erroneously flagged as a heap delegate, causing considerable GC memory load. `@trusted` *should* probably not even be available for functions (of course it is not a desirable breaking change to disallow that now, though).
Jun 16 2021
next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 6/16/21 9:09 AM, Sönke Ludwig wrote:
 Am 16.06.2021 um 13:38 schrieb RazvanN:
 Currently,  trusted applies only to functions. This is most of the 
 times a pain when you want trusted code blocks inside functions. Why 
 not simplify it a bit by using trusted scope blocks?
Yes, please! There are 800 of these in vibe.d alone. There has also been an issue where the delegate workaround was erroneously flagged as a heap delegate, causing considerable GC memory load. ` trusted` *should* probably not even be available for functions (of course it is not a desirable breaking change to disallow that now, though).
If I were to design it today:

- a @safe function could not call @trusted functions that gained implicit access to local data (i.e. inner functions, or non-static member functions from the same type).
- a @trusted function would be mechanically checked just like @safe, but could have @system blocks in them (where you could call @system functions, or do @system-like behaviors).

This at least puts the emphasis on where manual verification is required, but still has the compiler checking things I want it to check. Most times, I never want to write a fully marked @trusted function, because it's so easy to trust things you didn't intend to (like destructors).

-Steve
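A small sketch of the first point's hazard (names made up): a nested @trusted function implicitly captures locals, so a later, seemingly harmless edit to the surrounding @safe code can invalidate the manual proof without any compiler complaint:

```d
void outer() @safe
{
    int[4] buf;
    size_t i = 0;

    // Once reviewed and "proven" safe because i < buf.length held here.
    void poke(int v) @trusted { buf.ptr[i] = v; }

    i = 10;   // an apparently @safe edit made months later...
    poke(42); // ...turns poke into an out-of-bounds write
}
```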
Jun 16 2021
parent jmh530 <john.michael.hall gmail.com> writes:
On Wednesday, 16 June 2021 at 13:25:09 UTC, Steven Schveighoffer 
wrote:
 [snip]

 If I were to design it today:

 - a  safe function could not call  trusted functions that 
 gained implicit access to local data (i.e. inner functions, or 
 non-static member functions from the same type).
 - a  trusted function would be mechanically checked just like 
  safe, but could have  system blocks in them (where you could 
 call  system functions, or do  system-like behaviors).

 This at least puts the emphasis on where manual verification is 
 required, but still has the compiler checking things I want it 
 to check. Most times, I never want to write a fully marked 
  trusted function, because it's so easy to trust things you 
 didn't intend to (like destructors).

 -Steve
I see what you're saying. I agree the @trusted function with the @system blocks within them is better than @safe functions with @trusted blocks, all else equal. The downside is that the only way to do it without breaking code is to introduce an alternative to @trusted, since @safe checking for @trusted would break code. However, consider if you can have @safe/@trusted/@system blocks (such that a @safe block does manual checking). In that case, you could have something like below:

```d
@trusted
{
    @safe
    {
        // some safe code that you want manually checked
        //...
        @trusted
        {
            @system
            {
                // some system code
            }
            // some trusted code using the system code
        }
    }
    // some other trusted code
}
```
Jun 16 2021
prev sibling next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 16 June 2021 at 13:09:53 UTC, Sönke Ludwig wrote:
 ` trusted` *should* probably not even be available for 
 functions (of course it is not a desirable breaking change to 
 disallow that now, though).
Use the name ```@unsafe``` and deprecate ```@trusted``` then.
Jun 16 2021
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/16/2021 6:09 AM, Sönke Ludwig wrote:
 There are 800 of these in vibe.d alone.
That is concerning. But it isn't necessarily cause for redesigning @trusted. For example, I removed (in aggregate) a great deal of unsafe allocation code from the backend simply by moving all that code into one resizable array abstraction. Piece by piece, I've been removing the unsafe code from the backend. There really should be very, very little of it.
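A sketch of that kind of abstraction (names made up, growth policy and error handling simplified): the unsafe allocation is confined to a few tiny @trusted members, and clients get bounds-checked slices:

```d
struct IntArray
{
    private int* data;
    private size_t len, cap;

    @disable this(this); // unique ownership, no accidental double free

    void push(int v) @trusted
    {
        import core.stdc.stdlib : realloc;
        if (len == cap)
        {
            cap = cap ? cap * 2 : 4;
            data = cast(int*) realloc(data, cap * int.sizeof);
            assert(data !is null);
        }
        data[len++] = v;
    }

    // Bounds-checked view; note that the slice must not outlive a
    // reallocation -- the escaping problem discussed further down.
    inout(int)[] opSlice() inout @trusted { return data[0 .. len]; }

    ~this() @trusted
    {
        import core.stdc.stdlib : free;
        free(data);
    }
}
```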
 There has also been an 
 issue where the delegate workaround was erroneously flagged as a heap
delegate, 
 causing considerable GC memory load.
I can't think of a case where: `() @trusted { ... }();` would make it a heap delegate. Such cases should be in bugzilla.
 ` trusted` *should* probably not even be available for functions (of course it 
 is not a desirable breaking change to disallow that now, though).
The idea is to encourage programmers to think about organizing code so that there are clear separations between safe and system code. Interleaving the two on a line-by-line basis defeats the purpose.
Jun 16 2021
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 16.06.21 23:22, Walter Bright wrote:
 
 ` trusted` *should* probably not even be available for functions (of 
 course it is not a desirable breaking change to disallow that now, 
 though).
The idea is to encourage programmers to think about organizing code so that there are clear separations between safe and system code. Interleaving the two on a line-by-line basis defeats the purpose.
Yes. This.
Jun 16 2021
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/16/2021 2:25 PM, Timon Gehr wrote:
 Yes. This.
At last, Timon, we agree on something! You've made my day!
Jun 16 2021
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 6/17/21 3:39 AM, Walter Bright wrote:
 On 6/16/2021 2:25 PM, Timon Gehr wrote:
 Yes. This.
At last, Timon, we agree on something! You've made my day!
We agree on many things. Maybe I should point it out more often. :)
Jun 18 2021
parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/18/2021 2:41 AM, Timon Gehr wrote:
 On 6/17/21 3:39 AM, Walter Bright wrote:
 On 6/16/2021 2:25 PM, Timon Gehr wrote:
 Yes. This.
At last, Timon, we agree on something! You've made my day!
We agree on many things. Maybe I should point it out more often. :)
<g>
Jun 18 2021
prev sibling next sibling parent GrimMaple <grimmaple95 gmail.com> writes:
On Wednesday, 16 June 2021 at 21:22:32 UTC, Walter Bright wrote:
 On 6/16/2021 6:09 AM, Sönke Ludwig wrote:
 There are 800 of these in vibe.d alone.
That is concerning. But it isn't necessarily cause for redesigning trusted. For example, I removed (in aggregate) a great deal of unsafe allocation code from the backend simply by moving all that code into one resizable array abstraction. Piece by piece, I've been removing the unsafe code from the backend. There really should be very, very little of it.
 There has also been an issue where the delegate workaround was 
 erroneously flagged as a heap delegate, causing considerable 
 GC memory load.
I can't think of a case where: () trusted { ... }(); would make it a heap delegate. Such cases should be in bugzilla.
 ` trusted` *should* probably not even be available for 
 functions (of course it is not a desirable breaking change to 
 disallow that now, though).
The idea is to encourage programmers to think about organizing code so that there are clear separations between safe and system code. Interleaving the two on a line-by-line basis defeats the purpose.
But what about allowing @safe blocks (increasing the safety level) to encourage safety checks in @system code? I made an example above:

```d
void foo() @trusted
{
    int[100] a = void;
    @safe
    {
        // Code with safety checks
    }
}
```

And having @trusted/@system blocks inside @safe functions would be disallowed by the compiler.
Jun 16 2021
prev sibling next sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Wednesday, 16 June 2021 at 21:22:32 UTC, Walter Bright wrote:
 [snip]
 ` trusted` *should* probably not even be available for 
 functions (of course it is not a desirable breaking change to 
 disallow that now, though).
The idea is to encourage programmers to think about organizing code so that there are clear separations between safe and system code. Interleaving the two on a line-by-line basis defeats the purpose.
This is a better argument against trusted blocks within safe blocks than it is against system blocks within trusted code.
Jun 16 2021
parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Wednesday, 16 June 2021 at 21:47:55 UTC, jmh530 wrote:
 On Wednesday, 16 June 2021 at 21:22:32 UTC, Walter Bright wrote:
 [snip]
 ` trusted` *should* probably not even be available for 
 functions (of course it is not a desirable breaking change to 
 disallow that now, though).
The idea is to encourage programmers to think about organizing code so that there are clear separations between safe and system code. Interleaving the two on a line-by-line basis defeats the purpose.
This is a better argument against trusted blocks within safe blocks than it is against system blocks within trusted code.
I'd love to see those gone as well but it could be hard to get there from here. It's easier to validate simply nested rather than finely interwoven dependencies. I hope we can start reversing course rather than moving further down the "convenience over practical safety" path.
Jun 16 2021
parent reply max haughton <maxhaton gmail.com> writes:
On Wednesday, 16 June 2021 at 22:02:18 UTC, Bruce Carneal wrote:
 On Wednesday, 16 June 2021 at 21:47:55 UTC, jmh530 wrote:
 On Wednesday, 16 June 2021 at 21:22:32 UTC, Walter Bright 
 wrote:
 [snip]
 [...]
The idea is to encourage programmers to think about organizing code so that there are clear separations between safe and system code. Interleaving the two on a line-by-line basis defeats the purpose.
This is a better argument against trusted blocks within safe blocks than it is against system blocks within trusted code.
I'd love to see those gone as well but it could be hard to get there from here. It's easier to validate simply nested rather than finely interwoven dependencies. I hope we can start reversing course rather than moving further down the "convenience over practical safety" path.
Where do you make the distinction between convenience and practicality in this context?
Jun 16 2021
parent Bruce Carneal <bcarneal gmail.com> writes:
On Wednesday, 16 June 2021 at 22:10:38 UTC, max haughton wrote:
 On Wednesday, 16 June 2021 at 22:02:18 UTC, Bruce Carneal wrote:
 On Wednesday, 16 June 2021 at 21:47:55 UTC, jmh530 wrote:
 On Wednesday, 16 June 2021 at 21:22:32 UTC, Walter Bright 
 wrote:
 [snip]
 [...]
The idea is to encourage programmers to think about organizing code so that there are clear separations between safe and system code. Interleaving the two on a line-by-line basis defeats the purpose.
This is a better argument against trusted blocks within safe blocks than it is against system blocks within trusted code.
I'd love to see those gone as well but it could be hard to get there from here. It's easier to validate simply nested rather than finely interwoven dependencies. I hope we can start reversing course rather than moving further down the "convenience over practical safety" path.
Where do you make the distinction between convenience and practicality in this context?
"practical safety" was meant to signify "machine checked safety", as in "if humans are actively involved it is likely not safe". Should have been more specific. Addressing what I believe you're asking, around my confusing formulation, I dont think that we have to choose between ease of use and safety in this case. If we move to safe checking by default within trusted code we get additional safety with more localized (easier) manual checking. I think the opt-in scheme I proposed ( system blocks trigger safe checking) would be backward compatible and also readable/maintainable going forward.
Jun 16 2021
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.com> writes:
On 2021-06-16 17:22, Walter Bright wrote:
 On 6/16/2021 6:09 AM, Sönke Ludwig wrote:
 There are 800 of these in vibe.d alone.
That is concerning. But it isn't necessarily cause for redesigning trusted. For example, I removed (in aggregate) a great deal of unsafe allocation code from the backend simply by moving all that code into one resizable array abstraction. Piece by piece, I've been removing the unsafe code from the backend. There really should be very, very little of it.
 There has also been an issue where the delegate workaround was 
 erroneously flagged as a heap delegate, causing considerable GC memory 
 load.
I can't think of a case where: () trusted { ... }(); would make it a heap delegate. Such cases should be in bugzilla.
 ` trusted` *should* probably not even be available for functions (of 
 course it is not a desirable breaking change to disallow that now, 
 though).
The idea is to encourage programmers to think about organizing code so that there are clear separations between safe and system code. Interleaving the two on a line-by-line basis defeats the purpose.
I think the whole discussion should be redirected toward simplifying `pure` instead.

* There are many legitimate reasons to want impure code to act as pure.
* There is no easy recourse as there is for @trusted. All approaches are crazily convoluted.
Jun 17 2021
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jun 17, 2021 at 11:49:50AM -0400, Andrei Alexandrescu via Digitalmars-d
wrote:
[...]
 I think the whole discussion should be redirected toward simplifying
 `pure` instead.
 
 * There are many legitimate reasons to want impure code act as pure.
 * There is no easy recourse as there is for  trusted. All approaches
 are crazily convoluted.
What are the actual advantages of code being marked pure? I'm all for generalizing pure, but does it bring enough actual benefits to be worth the effort?

I'm not talking about theoretical benefits, but actual benefits that the compiler can actually make use of to emit better code. I used to be a big fan of pure, but in practice I've found that it doesn't make *that* much of a difference in terms of codegen. Maybe I'm missing something, in which case I'd love to be enlightened.

T

-- 
Perhaps the most widespread illusion is that if we were in power we would behave very differently from those who now hold it---when, in truth, in order to get power we would have to become very much like them. -- Unknown
Jun 17 2021
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/17/2021 8:59 AM, H. S. Teoh wrote:
 What are the actual advantages of code being marked pure?  I'm all for
 generalizing pure, but does it bring enough actual benefits to be worth
 the effort?
 
 I'm not talking about theoretical benefits, but actual benefits that the
 compiler can actually make use of to emit better code.  I used to be a
 big fan of pure, but in practice I've found that it doesn't make *that*
 much of a difference in terms of codegen.  Maybe I'm missing something,
 in which case I'd love to be enlightened.
You're right it doesn't make that much difference in code quality. What it *does* provide is:

1. makes it easy to reason about
2. makes unit testing easy

Just think about trying to unit test a function that side-loads various globals.
Jun 17 2021
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jun 17, 2021 at 11:25:44AM -0700, Walter Bright via Digitalmars-d wrote:
 On 6/17/2021 8:59 AM, H. S. Teoh wrote:
 What are the actual advantages of code being marked pure?  I'm all
 for generalizing pure, but does it bring enough actual benefits to
 be worth the effort?
 
 I'm not talking about theoretical benefits, but actual benefits that
 the compiler can actually make use of to emit better code.  I used
 to be a big fan of pure, but in practice I've found that it doesn't
 make *that* much of a difference in terms of codegen.  Maybe I'm
 missing something, in which case I'd love to be enlightened.
You're right it doesn't make that much difference in code quality. What it *does* provide is: 1. makes it easy to reason about 2. makes unit testing easy Just think about trying to unit test a function that side-loads various globals.
I guess, as a side-effect of the way I usually code, which is to avoid globals unless I absolutely cannot get around it, I'm not really seeing the benefits of adding `pure` to my function declarations. :-D Almost all of my code is already pure anyway; it's just more work to tag it with `pure`. So if it doesn't bring significant additional benefits, I'm not seeing it as pulling its own weight.

On a tangential note, when it comes to unittests, you're right that globals make things hard to test. The same also applies to code that modifies the state of the environment, e.g., the filesystem. In Phobos there used to be (still are?) unittests that create temporary files, which makes them hard to parallelize and occasionally prone to random unrelated breakage. In my own code, I sometimes resort to templatizing filesystem-related functions/types so that I can test the code using a mock-up filesystem instead of the real one, e.g.:

```d
// Original code: hard to test without risking unwanted
// interactions with the OS environment
auto manipulateFile(File input) {
    ...
    auto data = input.rawRead(...);
    ...
}

// More testable code
auto manipulateFile(File = std.stdio.File)(File input) {
    ...
    auto data = input.rawRead(...);
    ...
}

unittest {
    struct MockupFile {
        ... // mockup file contents here
        void[] rawRead(...) { ... }
    }

    // Look, ma! Test a filesystem-related function without
    // touching the filesystem!
    assert(manipulateFile(MockupFile(...)) == ...);
}
```

T

-- 
Life is complex. It consists of real and imaginary parts. -- YHL
Jun 17 2021
parent Walter Bright <newshound2 digitalmars.com> writes:
You're right that the way to unittest file I/O is to use mock files.
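A minimal compilable sketch of that pattern (all names hypothetical): the file type is a template parameter, so the unittest substitutes a mock and never touches the filesystem:

```d
import std.stdio : File;

size_t countBytes(FileT = File)(FileT input)
{
    ubyte[64] buf;
    size_t total;
    for (;;)
    {
        auto chunk = input.rawRead(buf[]);
        if (chunk.length == 0)
            break;
        total += chunk.length;
    }
    return total;
}

unittest
{
    static struct MockFile
    {
        const(ubyte)[] contents;
        ubyte[] rawRead(ubyte[] buf)
        {
            const n = buf.length < contents.length ? buf.length : contents.length;
            buf[0 .. n] = contents[0 .. n];
            contents = contents[n .. $];
            return buf[0 .. n];
        }
    }

    const(ubyte)[] data = [1, 2, 3, 4, 5];
    assert(countBytes(MockFile(data)) == 5);
}
```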
Jun 17 2021
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/17/2021 8:49 AM, Andrei Alexandrescu wrote:
 I think the whole discussion should be redirected toward simplifying `pure` 
 instead.
 
 * There are many legitimate reasons to want impure code act as pure.
 * There is no easy recourse as there is for  trusted. All approaches are
crazily 
 convoluted.
There are ways to do it that are in use in Phobos. It involves doing an unsafe cast. Doing things like that is why D supports system code. I'm uncomfortable making it too easy to defeat the semantics of pure.
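For reference, the idiom looks roughly like this -- it mirrors the `SetFunctionAttributes` example in std.traits, with `bumpCounter` as a made-up stand-in for whatever impure call needs laundering:

```d
import std.traits;

// Cast a function pointer or delegate so its type gains `pure`.
auto assumePure(T)(T t)
if (isFunctionPointer!T || isDelegate!T)
{
    enum attrs = functionAttributes!T | FunctionAttribute.pure_;
    return cast(SetFunctionAttributes!(T, functionLinkage!T, attrs)) t;
}

int counter;
void bumpCounter() { ++counter; } // impure: mutates a global

void caller() pure @trusted
{
    // Asserted pure by the programmer; the compiler can no longer tell.
    assumePure(&bumpCounter)();
}
```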
Jun 17 2021
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/17/21 2:18 PM, Walter Bright wrote:
 On 6/17/2021 8:49 AM, Andrei Alexandrescu wrote:
 I think the whole discussion should be redirected toward simplifying 
 `pure` instead.

 * There are many legitimate reasons to want impure code act as pure.
 * There is no easy recourse as there is for  trusted. All approaches 
 are crazily convoluted.
There are ways to do it that are in use in Phobos. It involves doing an unsafe cast.
Problem is there's a lot more work than trusted casts.
 Doing things like that is why D supports  system code.
Non-sequitur. The problem is there is no @system/@trusted/@safe troika for pure. It's either pure or not; there is no way to express (as is needed in key parts) "this function shall be trusted to be pure".
 I'm 
 uncomfortable making it too easy to defeat the semantics of pure.
This is a misunderstanding. That wasn't asked for. You transferred your answer to the trusted blocks discussion to a different question.
Jun 22 2021
next sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Wednesday, 23 June 2021 at 06:25:08 UTC, Andrei Alexandrescu 
wrote:
 [snip]
 Doing things like that is why D supports  system code.
Non-sequitur. The problem is there is no system/ trusted/ safe troika for pure. It's either pure or not, no way to express (as is needed in key parts) "this function shall be trusted to be pure". [snip]
Reminds me of C's restrict. It's kind of like the equivalent of trusting that you are not overlapping data. I don't believe it provides a compile-time error if you do (just run-time undefined behavior).
Jun 23 2021
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 23.06.21 12:28, jmh530 wrote:
 On Wednesday, 23 June 2021 at 06:25:08 UTC, Andrei Alexandrescu wrote:
 [snip]
 Doing things like that is why D supports  system code.
Non-sequitur. The problem is there is no system/ trusted/ safe troika for pure. It's either pure or not, no way to express (as is needed in key parts) "this function shall be trusted to be pure". [snip]
Reminds me of C's restrict. It's kind of like the equivalent of trusting that you are not overlapping data. I don't believe it provides a compile-time error if you do (just run-time undefined behavior).
UB is too blunt of an instrument for trusted pure though, you should just lose some guarantees on ordering and number of executions, possibly aliasing (due to memoization).
Jun 23 2021
prev sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 23 June 2021 at 06:25:08 UTC, Andrei Alexandrescu 
wrote:
 Non-sequitur. The problem is there is no  system/ trusted/ safe 
 troika for pure. It's either pure or not, no way to express (as 
 is needed in key parts) "this function shall be trusted to be 
 pure".
Then you need to list explicitly what `pure` means in terms of allowed optimizations. Otherwise you'll end up with something that is too weak to be useful for compiler authors.
Jun 23 2021
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 23 June 2021 at 12:32:03 UTC, Ola Fosheim Grøstad 
wrote:
 Then you need to list explicitly what `pure` means in terms of 
 allowed optimizations. Otherwise you'll end up with something 
 that is too weak to be useful for compiler authors.
This includes deadlock issues. If you allow access to globals you could get deadlocks that `pure` would have prevented.
Jun 23 2021
prev sibling parent reply Sönke Ludwig <sludwig outerproduct.org> writes:
Am 16.06.2021 um 23:22 schrieb Walter Bright:
 On 6/16/2021 6:09 AM, Sönke Ludwig wrote:
 There are 800 of these in vibe.d alone.
That is concerning. But it isn't necessarily cause for redesigning trusted. For example, I removed (in aggregate) a great deal of unsafe allocation code from the backend simply by moving all that code into one resizable array abstraction. Piece by piece, I've been removing the unsafe code from the backend. There really should be very, very little of it.
Many of them are external functions that are `@system` when they shouldn't have to be:

- `() @trusted { return typeid(string).getHash(&ln); }()));`
- `() @trusted { return allocatorObject(GCAllocator.instance); } ();`
- `() @trusted { GC.addRange(mem.ptr, ElemSlotSize); } ()`
- `() @trusted { return sanitize(ustr); } ()`
- `() @trusted { return logicalProcessorCount(); } ()`
- ...

It could be that nowadays a number of those have been made `@safe` already, I'd have to check one-by-one.

Then there are OS/runtime functions that are not `@safe`, but need to be called from a `@safe` context:

- `() @trusted { return mkstemps(templ.ptr, cast(int)suffix.length); } ();`
- ```
  @trusted {
      scope (failure) assert(false);
      return CreateFileW(...);
  } ();
  ```
- ...

There is also quite some manual memory management that requires `@trusted`. Once we are there with ownership (and ten compiler versions ahead), those can be replaced by some kind of custom reference type.

Then there are some shortcut references as pointers that are necessary because `ref` can't be used for local symbols (lifetime analysis could solve this, too):

- `auto slot = () @trusted { return &m_core.m_handles[h]; } ();`

There are surely some places that can be refactored to push `@trusted` further down, but right now most of them can't in a meaningful way.
Jun 17 2021
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/17/2021 12:15 PM, Sönke Ludwig wrote:
 Am 16.06.2021 um 23:22 schrieb Walter Bright:
 On 6/16/2021 6:09 AM, Sönke Ludwig wrote:
 There are 800 of these in vibe.d alone.
That is concerning. But it isn't necessarily cause for redesigning trusted. For example, I removed (in aggregate) a great deal of unsafe allocation code from the backend simply by moving all that code into one resizable array abstraction. Piece by piece, I've been removing the unsafe code from the backend. There really should be very, very little of it.
 Many of them are external functions that are `@system` when they shouldn't have to be:
 - `() @trusted { return typeid(string).getHash(&ln); }()));`
 - `() @trusted { return allocatorObject(GCAllocator.instance); } ();`
 - `() @trusted { GC.addRange(mem.ptr, ElemSlotSize); } ()`
 - `() @trusted { return sanitize(ustr); } ()`
 - `() @trusted { return logicalProcessorCount(); } ()`
 - ...
 It could be that nowadays a number of those have been made `@safe` already, I'd have to check one-by-one.
 Then there are OS/runtime functions that are not `@safe`, but need to be called from a `@safe` context:
 - `() @trusted { return mkstemps(templ.ptr, cast(int)suffix.length); } ();`
 - ```
   @trusted {
       scope (failure) assert(false);
       return CreateFileW(...);
   } ();
   ```
 - ...
 There is also quite some manual memory management that requires `@trusted`. Once we are there with ownership (and ten compiler versions ahead), those can be replaced by some kind of custom reference type.
 Then there are some shortcut references as pointers that are necessary because `ref` can't be used for local symbols (lifetime analysis could solve this, too):
 - `auto slot = () @trusted { return &m_core.m_handles[h]; } ();`
 There are surely some places that can be refactored to push `@trusted` further down, but right now most of them can't in a meaningful way.
Things like logicalProcessorCount() surely can be @safe.

m_core.m_handles[h] looks like it needs encapsulation in a proper function that takes m_core and h as arguments.

I got rid of a *lot* of memory management code in the back end by creating a container type to do it and present a @safe interface.

Unsafe system calls like CreateFileW() can be encapsulated with a wrapper that presents a @safe interface.

Yes, this is extra work. But it's good work. I bet you'll like the result! I sure have when I've done it.
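A minimal sketch of such a wrapper, assuming Windows and reducing error handling to a flag (names made up):

```d
version (Windows)
{
    import core.sys.windows.windows;
    import std.utf : toUTF16z;

    // RAII wrapper so @safe clients never touch the raw HANDLE.
    struct ScopedFile
    {
        private HANDLE h = INVALID_HANDLE_VALUE;
        @disable this(this);
        ~this() @trusted
        {
            if (h != INVALID_HANDLE_VALUE)
                CloseHandle(h);
        }
        bool ok() const @safe { return h != INVALID_HANDLE_VALUE; }
    }

    ScopedFile openForRead(string path) @trusted
    {
        auto h = CreateFileW(path.toUTF16z, GENERIC_READ, FILE_SHARE_READ,
                null, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, null);
        return ScopedFile(h);
    }
}
```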
Jun 17 2021
parent reply Sönke Ludwig <sludwig outerproduct.org> writes:
Am 18.06.2021 um 04:07 schrieb Walter Bright:
 (...)
 
 m_core.m_handles[h] looks like it needs encapsulation in a proper 
 function that takes m_core and h as arguments.
Accessing `m_core.m_handles[h]` is ` safe`, just taking the address of the result is not. `scope slot = ...` might make it work in this particular case, but of course only with the appropriate compiler version and `-preview` switch.
 I got rid of a *lot* of memory management code in the back end by 
 creating a container type to do it and prevent a safe interface.
The problem here is just escaping references to contained items. At some point in the future, with DIP25/DIP1000 enabled by default, this will hopefully become a non-issue.
 Unsafe system calls like CreateFileW() can be encapsulated with a 
 wrapper that presents a safe interface.
 
 Yes, this is extra work. But it's good work. I bet you'll like the 
 result! I sure have when I've done it.
The code that calls it *is* the ` safe` wrapper ;) (more or less, it does a little bit more than that - but adding another wrapper in-between wouldn't really add anything apart from complexity, because the function is only used in a single place)
Jun 19 2021
parent reply Max Samukha <maxsamukha gmail.com> writes:
On Saturday, 19 June 2021 at 21:19:29 UTC, Sönke Ludwig wrote:

 The code that calls it *is* the ` safe` wrapper ;)  (more or 
 less, it does a little bit more than that - but adding another 
 wrapper in-between wouldn't really add anything apart from 
 complexity, because the function is only used in a single place)
I agree. The whole point of having free variables is to avoid useless explicit interfaces for local blocks of code. If they ban trusted lambdas/blocks, the next logical step is to ban free variables overall.
Jun 20 2021
parent Lorenso <lorensbrawns gmail.com> writes:
Thanks for this solution!
Jun 20 2021
prev sibling next sibling parent Dominikus Dittes Scherkl <dominikus scherkl.de> writes:
On Wednesday, 16 June 2021 at 11:38:54 UTC, RazvanN wrote:
 Currently,  trusted applies only to functions. This is most of 
 the times a pain when you want trusted code blocks inside 
 functions. Why not simplify it a bit by using trusted scope 
 blocks? E.g. this:

 ```d
 void foo()  safe
 {
     ()  trusted { ... }();
 }
 ```

 becomes this:

 ```d
 void foo()  safe
 {
      trusted
     {
        ....
     }
 }
 ```
 To make things easier,  trusted does not insert a scope 
 (similar to `static if`).
YES PLEASE! This has been suggested for years now! (And by the way: the use of @trusted on functions should be deprecated - a function should either be @safe or @system, nothing in between. Only @trusted blocks should be allowed.)
Jun 16 2021
prev sibling next sibling parent Ogi <ogion.art gmail.com> writes:
On Wednesday, 16 June 2021 at 11:38:54 UTC, RazvanN wrote:
 Currently,  trusted applies only to functions. This is most of 
 the times a pain when you want trusted code blocks inside 
 functions. Why not simplify it a bit by using trusted scope 
 blocks?
[Yes](https://forum.dlang.org/thread/vpdkkqjffuvtrxjsubbp forum.dlang.org).
Jun 16 2021
prev sibling next sibling parent reply GrimMaple <grimmaple95 gmail.com> writes:
On Wednesday, 16 June 2021 at 11:38:54 UTC, RazvanN wrote:
 Currently,  trusted applies only to functions. This is most of 
 the times a pain when you want trusted code blocks inside 
 functions. Why not simplify it a bit by using trusted scope 
 blocks? E.g. this:

 ```d
 void foo()  safe
 {
     ()  trusted { ... }();
 }
 ```

 becomes this:

 ```d
 void foo()  safe
 {
      trusted
     {
        ....
     }
 }
 ```
 To make things easier,  trusted does not insert a scope 
 (similar to `static if`).

 Of course, the feature would be additive (you can have both 
 trusted functions and code blocks).

 That would also provide an elegant workaround if void 
 initialization is rejected in  safe code [1][2]. For example:

 ```d
 void foo()  safe
 {
      trusted
     {
         int[100] a = void;
     }
     ...
 }
 ```

 What do you think?

 Cheers,
 RazvanN

 [1] https://issues.dlang.org/show_bug.cgi?id=17566
 [2] https://github.com/dlang/dlang.org/pull/2260
I don't like that this allows implicitly lowering the safety level of any given function. As per the example, the foo() function isn't @safe anymore, but @trusted, which in turn should be reflected in the function signature. If this function is marked as @safe, I expect it to be @safe and not perform any shady stuff inside it. To me this really looks like foo() should be @trusted instead.

What I like more is permitting to temporarily increase the safety level by using, e.g., @safe blocks inside a @trusted function. For example:

```d
void foo() @trusted
{
    int[100] a = void;
    @safe
    {
        // Code with safety checks
    }
}
```

Overall, if something like this is implemented, it should support all safety levels for blocks, including @safe and @system, for consistency purposes.
Jun 16 2021
next sibling parent reply RazvanN <razvan.nitu1305 gmail.com> writes:
On Wednesday, 16 June 2021 at 14:57:19 UTC, GrimMaple wrote:

 What I like more, is permitting to temporarily increase the 
 safety level by using eg  safe blocks inside a  trusted 
 function. For example

 ```d
 void foo()  trusted
 {
     int[100] a = void;
      safe
     {
         // Code with safety checks
     }
 }
 ```
I don't think that this is a good alternative. The norm we are striving for is to have as much code as possible @safe and as little as possible @trusted. From that perspective, it makes much more sense to annotate the entire function as @safe and have minor sections be @trusted.
 Overall, if something like this is implemented, it should 
 support all safety levels for blocks, including  safe and 
  system, for consistency purposes
I don't really see the point in having safe/trusted blocks inside of system functions or system/trusted blocks inside safe functions. As for trusted functions, although I like the idea of deprecating them, there are cases of functions that are a few lines of code long (or even one liners) that do some unsafe operations that can be trusted; in this situation it's much easier to annotate the function as trusted. However, outside of such scenarios (for big functions), annotating functions as trusted is bad practice.
Jun 16 2021
parent reply GrimMaple <grimmaple95 gmail.com> writes:
On Wednesday, 16 June 2021 at 15:58:23 UTC, RazvanN wrote:
 On Wednesday, 16 June 2021 at 14:57:19 UTC, GrimMaple wrote:

 What I like more, is permitting to temporarily increase the 
 safety level by using eg  safe blocks inside a  trusted 
 function. For example

 ```d
 void foo()  trusted
 {
     int[100] a = void;
      safe
     {
         // Code with safety checks
     }
 }
 ```
I don't think that this is a good alternative. The normalcy we are striving for is to have as much code as possible safe and as little as possible trusted. From that perspective it makes much more sense to annotate the entire function as safe and have minor sections as being trusted.
With my approach, you can still cover as much code with @safe as you want without lying to the end user about the function's safety. IMHO, if you perform @trusted operations in a @safe function, the function cannot be called @safe.
 Overall, if something like this is implemented, it should 
 support all safety levels for blocks, including  safe and 
  system, for consistency purposes
I don't really see the point in having safe/trusted blocks inside of system functions or system/trusted blocks inside safe functions. As for trusted functions, although I like the idea of deprecating them, there are cases of functions that are a few lines of code long (or even one liners) that do some unsafe operations that can be trusted; in this situation it's much easier to annotate the function as trusted. However, outside of such scenarios (for big functions), annotating functions as trusted is bad practice.
The point is enabling @safe checks on parts of @system/@trusted code that would otherwise require weird solutions. Or, if you prefer, to be consistent across the language. I think it is more predictable to the user that if you can mark a block @trusted, it can be marked @safe or @system as well.
Jun 16 2021
parent reply RazvanN <razvan.nitu1305 gmail.com> writes:
On Wednesday, 16 June 2021 at 16:17:28 UTC, GrimMaple wrote:
 On Wednesday, 16 June 2021 at 15:58:23 UTC, RazvanN wrote:
 On Wednesday, 16 June 2021 at 14:57:19 UTC, GrimMaple wrote:
 With my approach, you can still cover as many code  safe as you 
 want without lying to the end user about the function safety. 
 IMHO, if you perform  trusted operations in a  safe function, 
 the function cannot be called  safe .
But this is not true. As Paul Backus pointed out, you can still have a @safe function that calls @trusted functions. Your argument seems to imply that @safe should never interact with @trusted code, which, to be honest, would make it unusable.
 Overall, if something like this is implemented, it should 
 support all safety levels for blocks, including  safe and 
  system, for consistency purposes
 The point is enabling  safe checks on parts of  system/ trusted 
 code, that otherwise would require weird solutions. Or, if you 
 prefer, to be consistent over the lang. I think it is more 
 predictable to the user, that if you can mark a block  trusted, 
 it can be marked  safe or  system as well
I understand, but my point is that if you have a @system function, it is kind of useless to have @safe portions of it: you still cannot call it from @safe code. It makes much more sense to have a @safe function with small @trusted blocks.
Jun 16 2021
parent reply GrimMaple <grimmaple95 gmail.com> writes:
On Wednesday, 16 June 2021 at 16:22:24 UTC, RazvanN wrote:
 On Wednesday, 16 June 2021 at 16:17:28 UTC, GrimMaple wrote:
 On Wednesday, 16 June 2021 at 15:58:23 UTC, RazvanN wrote:
 On Wednesday, 16 June 2021 at 14:57:19 UTC, GrimMaple wrote:
 With my approach, you can still cover as many code  safe as 
 you want without lying to the end user about the function 
 safety. IMHO, if you perform  trusted operations in a  safe 
 function, the function cannot be called  safe .
But this is not true. As Paul Backus pointed out, you can still have a safe function that calls trusted functions. Your argument seems to imply that safe should never interact with trusted code, which to be honest will make it unusable.
It's already broken, so let's break it even more? :)
Jun 16 2021
next sibling parent IGotD- <nise nise.com> writes:
On Wednesday, 16 June 2021 at 16:32:01 UTC, GrimMaple wrote:
 It's already broken, so let's break it even more? :)
Yes, why not. @safe, @trusted, @system is one of the more bizarre things in D. It kind of reminds me of protection rings 0-3 in the x86 ISA: few use all of these rings; most just use two of them in order to separate kernel from user code. The rest are just there consuming gates and power.

ImportC seems to make this even more confusing: what are functions imported with ImportC going to be designated? @safe, @trusted, @system? If they are labelled @safe, then it's kind of a lie, and the programmer is responsible for knowing which function is FFI and which is not. If it is @system, then we need trampoline functions in D for every function if you are going to call it from @safe, which kind of defeats the purpose of ImportC. @trusted might be the middle road, but it is like @safe.

@system blocks are, in my opinion, an improvement, and we could do away with @trusted, because what's the point of it.
Jun 16 2021
prev sibling parent Paul Backus <snarwin gmail.com> writes:
On Wednesday, 16 June 2021 at 16:32:01 UTC, GrimMaple wrote:
 It's already broken, so let's break it even more? :)
It's not broken; you just don't understand it yet. I've written an article that goes through the whole thing step-by-step: https://pbackus.github.io/blog/what-does-memory-safety-really-mean-in-d.html
Jun 16 2021
prev sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Wednesday, 16 June 2021 at 14:57:19 UTC, GrimMaple wrote:
 I don't like that this allows implicitly lowering the safety 
 level of any given function. As per example, the foo() function 
 isn't  safe anymore, but  trusted. Which in turn should be 
 reflected in the function signature. If this function is marked 
 as  safe, I expect it to be  safe and not perform any shady 
 stuff inside it. To me this really looks like foo() should be 
  trusted instead.
There is no difference (from the caller's point of view) between the "safety level" of a @safe function and a @trusted function. Both are memory-safe. The only difference is that the @safe function has its safety checked automatically, by the compiler, and the @trusted function has its safety checked manually, by the programmer.

"But surely," you will object, "automatic checking is more reliable than manual checking, and therefore the @safe function is 'safer' than the @trusted one." Unfortunately, the conclusion does not follow from the premise: a @safe function is allowed to call any number of @trusted functions internally, so it is entirely possible that *both* functions rely on manual checking for their memory safety. You cannot tell just from looking at the signatures.
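A tiny illustration (functions made up):

```d
int tally(const int[] xs) @trusted // human-verified
{
    int s;
    foreach (i; 0 .. xs.length)
        s += xs.ptr[i]; // bounds checking elided by hand
    return s;
}

int total(const int[] xs) @safe // compiler-verified...
{
    return tally(xs); // ...yet it leans on @trusted internally
}
```

Both carry the same guarantee for the caller; the signatures alone do not tell you where the manual checking happened.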
Jun 16 2021
parent Max Samukha <maxsamukha gmail.com> writes:
On Wednesday, 16 June 2021 at 16:11:46 UTC, Paul Backus wrote:

  You cannot tell just from looking at the signatures.
Then why is there the difference in the interface?
Jun 16 2021
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 16.06.21 13:38, RazvanN wrote:
 
 
 What do you think?
trusted nested functions are an antipattern and this enshrines them in a language feature.
Jun 16 2021
next sibling parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Wednesday, 16 June 2021 at 18:28:48 UTC, Timon Gehr wrote:
 On 16.06.21 13:38, RazvanN wrote:
 
 
 What do you think?
trusted nested functions are an antipattern and this enshrines them in a language feature.
Yes. Making it trivial to circumvent @safe weakens it broadly and across time. I would rather see the language move towards more readable/maintainable safety dependencies and additional automated checking.

I like the notion that others have mentioned of @safe checking by default within @trusted code (which would require @system blocks to disable checking). Perhaps we could adopt an opt-in strategy where such @safe checking is triggered by the presence of a @system block. If adopted, we might later move to @safe-within-@trusted-by-default via compiler flags or module attributes or ...
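Until something like that lands, the closest approximation in today's D inverts the usual idiom: mark the function @trusted and keep the bulk of its body inside a @safe lambda, so only a small tail escapes checking (a sketch; `fillDoubled` is made up):

```d
void fillDoubled(int[] dst, const(int)[] src) @trusted
{
    // Mechanically checked portion, as if the function were @safe:
    const n = () @safe {
        auto m = dst.length < src.length ? dst.length : src.length;
        foreach (i; 0 .. m)
            dst[i] = 2 * src[i];
        return m;
    }();

    // Unchecked portion, kept as small as possible:
    import core.stdc.stdio : printf;
    printf("copied %zu elements\n", n);
}
```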
Jun 16 2021
parent reply Paul Backus <snarwin gmail.com> writes:
On Wednesday, 16 June 2021 at 21:26:08 UTC, Bruce Carneal wrote:
 I like the notion that others have mentioned of  safe checking 
 by default within  trusted code (which would require  system 
 blocks to disable checking).  Perhaps we could adopt an opt-in 
 strategy where such  safe checking is triggered by the presence 
 of an  system block.
Under this proposal, system lambdas/blocks within trusted code would have the exact same semantics as trusted blocks/lambdas within safe code currently do. It's pure bikeshedding.
Jun 16 2021
next sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Wednesday, 16 June 2021 at 21:32:46 UTC, Paul Backus wrote:
 On Wednesday, 16 June 2021 at 21:26:08 UTC, Bruce Carneal wrote:
 I like the notion that others have mentioned of  safe checking 
 by default within  trusted code (which would require  system 
 blocks to disable checking).  Perhaps we could adopt an opt-in 
 strategy where such  safe checking is triggered by the 
 presence of an  system block.
Under this proposal, system lambdas/blocks within trusted code would have the exact same semantics as trusted blocks/lambdas within safe code currently do. It's pure bikeshedding.
I assumed that what Bruce is saying is that if you have a system block within trusted code, then the remainder of the trusted code gets safe checking. That's not the same thing.
Jun 16 2021
prev sibling next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 16.06.21 23:32, Paul Backus wrote:
 On Wednesday, 16 June 2021 at 21:26:08 UTC, Bruce Carneal wrote:
 I like the notion that others have mentioned of  safe checking by 
 default within  trusted code (which would require  system blocks to 
 disable checking).  Perhaps we could adopt an opt-in strategy where 
 such  safe checking is triggered by the presence of an  system block.
Under this proposal, system lambdas/blocks within trusted code would have the exact same semantics as trusted blocks/lambdas within safe code currently do. It's pure bikeshedding.
I wouldn't even bother if that was the case. The semantics are different if you consider code evolution and properly assigning blame for bugs. @safe code is code you don't have to check for memory safety errors. You can try to move the goal posts, but the title of this discussion is "simplifying trusted", not "redefining safe".
Jun 16 2021
prev sibling next sibling parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Wednesday, 16 June 2021 at 21:32:46 UTC, Paul Backus wrote:
 On Wednesday, 16 June 2021 at 21:26:08 UTC, Bruce Carneal wrote:
 I like the notion that others have mentioned of  safe checking 
 by default within  trusted code (which would require  system 
 blocks to disable checking).  Perhaps we could adopt an opt-in 
 strategy where such  safe checking is triggered by the 
 presence of an  system block.
Under this proposal, system lambdas/blocks within trusted code would have the exact same semantics as trusted blocks/lambdas within safe code currently do. It's pure bikeshedding.
The difference is in ease of maintenance. Things should nest properly wrt human comprehension. In the "bikeshedding" proposal safe code need not be checked manually while the trusted code, which already needed to be checked manually, will now enjoy a narrowing of focus. Perhaps I'm missing something here. If so, please enlighten me as to the advantages of the "non-bikeshedding" approach and/or the errors in my logic.
Jun 16 2021
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 16 June 2021 at 21:55:20 UTC, Bruce Carneal wrote:
 In the "bikeshedding" proposal  safe code need not be checked 
 manually while the  trusted code, which already needed to be 
 checked manually, will now enjoy a narrowing of focus.

 Perhaps I'm missing something here.  If so, please enlighten me 
 as to the advantages of the "non-bikeshedding" approach and/or 
 the errors in my logic.
The whole concept is kinda broken. If you have an unsafe region in a method, then that may affect not only that method, but the class and all other methods. So basically the whole class, or maybe even the whole module, should be flagged as "@trusted". But there is no reason to, as that can be deduced. And the explicit tags do not say when the code was checked, nor anything about whether the code in the class has changed after it was checked.

So what is it good for?

For it to work you only need @unsafe markers that are as tight as possible, but with more detail on the nature of the unsafety. For instance, some unsafe operations such as SIMD/casting optimizations can be guaranteed to have only local effect. If they hinder @safe, then that is just a compiler weakness, so you have to disable it to get past it. On the other hand, more global hacks... are quite different in nature.

As there is no way to express the nature of the "unsafety", or whether it has been checked and what the edit timeline is for the surrounding code, it is not much more useful than a comment. Which renders the feature more annoying than useful.

Tight @unsafe is no worse than the D regime. More detailed information about the reason why the region is marked unsafe, and when it was checked, would be infinitely more useful than what D is doing now.
Jun 16 2021
parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Wednesday, 16 June 2021 at 22:21:22 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 16 June 2021 at 21:55:20 UTC, Bruce Carneal wrote:
 In the "bikeshedding" proposal  safe code need not be checked 
 manually while the  trusted code, which already needed to be 
 checked manually, will now enjoy a narrowing of focus.

 Perhaps I'm missing something here.  If so, please enlighten 
 me as to the advantages of the "non-bikeshedding" approach 
 and/or the errors in my logic.
The whole concept is kinda broken. If you have an unsafe region in a method, then that may affect not only that method, but the class and all other methods. So basically the whole class, or maybe even the whole module, should be flagged as "@trusted".
We all work to isolate dependencies, safety related and otherwise, so that we can reason about the whole with more confidence. Whenever we introduce something that impedes our ability to "treeify" dependencies, something that makes analysis more of a graph analysis than a tree analysis, we've done something that has made maintenance and code-reuse more difficult. At the language level we can't stop programmers from writing code with nasty dependencies but we can make it easier to write code with fewer entanglements.
 But there is no reason to, as that can be deduced. And the 
 explicit tags do not imply when it was checked nor does it say 
 anything about whether the code in the class has changed after 
 it was checked.

 So what is good for?
Perhaps I misunderstand you, but I'd say it's good for a lot. As a contrasting exercise, one can think about what would happen if @safe/@trusted/@system became compiler no-ops starting tomorrow and forevermore. Yes, I think things can be made better, but there's already, to my way of thinking at least, a lot of value.
 For it to work you only need  unsafe markers that are as tight 
 as possible, but with more detail of the nature of unsafety. ...
I think that finer granularity/specificity might be useful, particularly if can improve basic readability/nesting.
Jun 16 2021
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 16 June 2021 at 22:54:51 UTC, Bruce Carneal wrote:
 We all work to isolate dependencies, safety related and 
 otherwise, so that we can reason about the whole with more 
 confidence.  Whenever we introduce something that impedes our 
 ability to "treeify" dependencies, something that makes 
 analysis more of a graph analysis than a tree analysis, we've 
 done something that has made maintenance and code-reuse more 
 difficult.
Well, but the problem is at a more fundamental level. When writing code we make assumptions about invariants on various levels. So when auditing unsafe code, you audit it with respect to local and global invariants.

For instance, if I write a program with the GC turned off as an invariant, then I might audit code that packs information into the lower bits of pointer values as safe. Then some other guy takes over the code base and decides to turn on the GC. Well, effectively the program is broken at a global level, because the actual scope of @trusted that we can be certain of is the whole program! :-D
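A sketch of that failure mode (names made up): pack a flag into the low bit of an aligned pointer. The @trusted audit is valid while the "GC is off" invariant holds; once a precise or moving collector is enabled, the disguised address is one it may no longer honor:

```d
struct TaggedPtr
{
    private size_t bits;

    this(int* p, bool flag) @trusted
    {
        assert((cast(size_t) p & 1) == 0, "need 2-byte alignment");
        bits = cast(size_t) p | flag;
    }

    int* ptr() @trusted { return cast(int*) (bits & ~cast(size_t) 1); }
    bool flag() const @safe { return (bits & 1) != 0; }
}
```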
At the language level we can't stop programmers from writing
 code with nasty dependencies but we can make it easier to write 
 code with fewer entanglements.
As long as people continue to modify, refactor, redesign and change invariants, you need something more explicit. For instance, if my unsafe region had been marked with a tag that said pointer-value-manipulation, and you have a timestamp on it, and you have an association between pointer-value-manipulation and turning on the GC, then the compiler could start listing all the weak spots that have to go through a new audit. But with no knowledge of WHY a region is unsafe, when it was audited, and how/when the surroundings changed, the compiler can't do much, so we are left with a feature that essentially does nothing.
 Perhaps I misunderstand you but I'd say it's good for a lot. As 
 a contrasting exercise one can think about what would happen if 
  safe/ trusted/ system became compiler noops starting tomorrow 
 and forevermore.
No, @safe does something, but you cannot know what the scope of @trusted should be, because even global invariants can change. So you might as well do what other languages do and just put an "unsafe" compiler-bypass-checking-silencer on the offending statement/expression.
 Yes, I think things can be made better but there's already, to 
 my way of thinking at least, a lot of value.
Ok, but it doesn't do anything more than a comment.
 I think that finer granularity/specificity might be useful, 
 particularly if can improve basic readability/nesting.
For it to work, you need to list all the invariants that the audit of the unsafe region makes assumptions about. With such a list you could actually trigger warnings about unsafe regions when the invariants change. Scope is not really meaningful. The scope of the unsafe statement is unknown, because the code base is changing.
Jun 16 2021
parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Wednesday, 16 June 2021 at 23:15:19 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 16 June 2021 at 22:54:51 UTC, Bruce Carneal wrote:
 We all work to isolate dependencies, safety related and 
 otherwise, so that we can reason about the whole with more 
 confidence.  Whenever we introduce something that impedes our 
 ability to "treeify" dependencies, something that makes 
 analysis more of a graph analysis than a tree analysis, we've 
 done something that has made maintenance and code-reuse more 
 difficult.
Well, but the problem is at a more fundamental level...
Fundamental problems sometimes admit incremental improvements towards resolution. I'm more disposed to improvement-through-simplifying-but-possibly-large-rewrites than incremental change but in this case I'd go with incremental.
 For instance, if I write a program with GC turned off as an 
 invariant. Then I might audit code that packs information into 
 lower bits of addresses in pointers values as safe. Then some 
 other guy takes over the code base and decides to turn on the 
 GC. Well, effectively the program is broken at a global level, 
 because the actual scope of  trusted that we can be certain of 
 is the whole program! :-D
Sure, we have responsibility for the whole program. Anything that lets us confidently subdivide that problem is welcome.
At the language level we can't stop programmers from writing
 code with nasty dependencies but we can make it easier to 
 write code with fewer entanglements.
As long as people continue to modify, refactor, redesign and change invariants then you need something more explicit.
I disagree. I believe that more general mechanisms can help that do not preclude the finer granularity mechanisms which you advocate (and which I am interested in).
 For instance if my unsafe region had been marked with a tag 
 that said pointer-value-manipulation, and you have a timestamp 
 on it, and you have ...
I believe providing the compiler with more information can be a very good thing, perhaps worthy of another thread.
 Perhaps I misunderstand you but I'd say it's good for a lot. 
 As a contrasting exercise one can think about what would 
 happen if  safe/ trusted/ system became compiler noops 
 starting tomorrow and forevermore.
No, safe does something, but you cannot know what the scope of trusted should be, because even global invariants can change. So you might as well do what other languages do and just put an "unsafe" compiler-bypass-checking-silencer on the offending statement/expression.
Yes, @trusted works within a broader framework, a *manually managed hence dangerous* framework. A practical path forward is to improve the compiler's ability to check things and localize/minimize the unchecked. As far as how the programmer deals with all the rest, well, I'd say we're back to "best practices" and "convention". We all want the language/compiler to do more on the safety front, at least if improvements come at very little coding inconvenience, but something like a @trusted transition zone will be needed at the evolving, programmer-defined boundary between machine checkable and "you're on your own".
 ...
 Yes, I think things can be made better but there's already, to 
 my way of thinking at least, a lot of value.
Ok, but it doesn't do anything more than a comment.
In line with my comments both above and below, I disagree.
 I think that finer granularity/specificity might be useful, 
 particularly if can improve basic readability/nesting.
For it to work, you need to list all the invariants that the audit of the unsafe region makes assumptions about. With such a list you could actually trigger warnings about unsafe regions when the invariants change.
Yes. The compiler will need enough information to act.
 Scope is not really meaningful. The scope of the unsafe 
 statement is unknown, because the code base is changing.
If I understand your meaning here, I disagree. I think @safe/@trusted is very useful, essential even, in code bases that are changing. It is a demarcation tool that lets us carve out ever larger safe areas.
Jun 16 2021
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 16 June 2021 at 23:52:02 UTC, Bruce Carneal wrote:
 If I understand your meaning here, I disagree.  I think 
  safe/ trusted is very useful, essential even, in code bases 
 that are changing.  It is a demarcation tool that lets us carve 
 out ever larger safe areas.
Ok, I think I understand better what you meant now. So, essentially, one is currently forced to make perfectly safe code @trusted, because if there is any possibility that a @safe method contains a bug (even if 100% unlikely) that affects assumptions made in @trusted code, then that @safe method can no longer be considered safe. So basically one wants @trusted code to be checked by the compiler just like @safe, and then instead explicitly turn off the checking in a more narrow unsafe region within the @trusted method. Unless pending DIPs turn out to remove this issue completely. Time will show, I guess.
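In hypothetical syntax (not current D; just a sketch of the model described above):

```d
// Hypothetical: the body of a @trusted function is checked exactly
// like @safe, except inside an explicit @system block.
int first(int[] a) @trusted
{
    assert(a.length > 0);  // checked like ordinary @safe code
    @system
    {
        return a.ptr[0];   // only this region escapes the checker
    }
}
```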
Jun 17 2021
parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 17 June 2021 at 21:19:32 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 16 June 2021 at 23:52:02 UTC, Bruce Carneal wrote:
 If I understand your meaning here, I disagree.  I think 
  safe/ trusted is very useful, essential even, in code bases 
 that are changing.  It is a demarcation tool that lets us 
 carve out ever larger safe areas.
Ok, I think I understand better what you meant now. ...
 So basically one wants  trusted code to be checked by the 
 compiler just like  safe, and then instead explicitly turn off 
 the checking in a more narrow unsafe region within the  trusted 
 method.
Yep. I think there's a clean opt-in way to do it that should wear well going forward. If you're at beerconf we can discuss it and your and other alternatives.
 ..
Jun 17 2021
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 18 June 2021 at 01:13:47 UTC, Bruce Carneal wrote:
 Yep.  I think there's a clean opt-in way to do it that should 
 wear well going forward.  If you're at beerconf we can discuss 
 it and your and other alternatives.
Maybe it is better to mark the whole class as @trusted, have all methods checked, and mark the unsafe regions with the invariants they rely on. Maybe automated proofs of @safe methods would be possible if invariants for a class were made explicit. (I will be on a weak mobile link in the summer, not suitable for video, but Beerconf sounds like a great idea!)
Jun 18 2021
prev sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 6/16/21 5:32 PM, Paul Backus wrote:
 On Wednesday, 16 June 2021 at 21:26:08 UTC, Bruce Carneal wrote:
 I like the notion that others have mentioned of  safe checking by 
 default within  trusted code (which would require  system blocks to 
 disable checking).  Perhaps we could adopt an opt-in strategy where 
 such  safe checking is triggered by the presence of an  system block.
Under this proposal, @system lambdas/blocks within @trusted code would have the exact same semantics as @trusted blocks/lambdas within @safe code currently do. It's pure bikeshedding.
Yes, and that leaves @safe code to actually not require manual checking, as opposed to today, where any @safe code with @trusted blocks requires manual checking of all the @safe code (I agree that just changing @trusted/@system code this way, and doing nothing with @safe, would be bikeshedding). In reality, @safe code should be a function of its inputs, and what is considered a safe input. With @trusted lambdas, the inputs are "everything". -Steve
Jun 16 2021
parent reply Paul Backus <snarwin gmail.com> writes:
On Thursday, 17 June 2021 at 00:34:12 UTC, Steven Schveighoffer 
wrote:
 On 6/16/21 5:32 PM, Paul Backus wrote:
 On Wednesday, 16 June 2021 at 21:26:08 UTC, Bruce Carneal 
 wrote:
 I like the notion that others have mentioned of  safe 
 checking by default within  trusted code (which would require 
  system blocks to disable checking).  Perhaps we could adopt 
 an opt-in strategy where such  safe checking is triggered by 
 the presence of an  system block.
Under this proposal, system lambdas/blocks within trusted code would have the exact same semantics as trusted blocks/lambdas within safe code currently do. It's pure bikeshedding.
Yes, and that leaves safe code to actually not require manual checking, as opposed to today, where any safe code with trusted blocks today requires manual checking of all the safe code (I agree just changing trusted/system code this way, and doing nothing with safe would be bikeshedding). In reality, safe code should be a function of its inputs, and what is considered a safe input. With trusted lambdas, the inputs are "everything".
It's impossible to guarantee, at the language level, that @safe code can never require manual review. The programmer is allowed to use any and all knowledge at their disposal to verify the memory safety of @trusted (or in your proposal, @system-block) code, including knowledge about @safe code. You might say, "the only thing a @trusted function can possibly know about a @safe function is its signature, so that doesn't matter," but that's not quite true. If the @trusted function and the @safe function are in the same module, the @trusted function can (in principle) rely on the inner workings of the @safe function without invalidating its proof of memory safety, since the programmer knows that any given version of the @trusted function will only ever call the corresponding version of the @safe function. Of course, such a @trusted function should never pass code review. But the fact that it can exist in principle means that you cannot truly rely on @safe to mean "no manual checking required", even if @trusted lambdas and nested functions are forbidden.
Jun 16 2021
next sibling parent reply Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 17 June 2021 at 01:07:05 UTC, Paul Backus wrote:
 On Thursday, 17 June 2021 at 00:34:12 UTC, Steven Schveighoffer 
 wrote:
 [...]
It's impossible to guarantee, at the language level, that safe code can never require manual review. The programmer is allowed to use any and all knowledge at their disposal to verify the memory safety of trusted (or in your proposal, system-block) code, including knowledge about safe code. You might say, "the only thing a trusted function can possibly know about a safe function is its signature, so that doesn't matter," but that's not quite true. If the trusted function and the safe function are in the same module, the trusted function can (in principle) rely on the inner workings of the safe function without invalidating its proof of memory safety, since the programmer knows that any given version of the trusted function will only ever call the corresponding version of the safe function. Of course, such a trusted function should never pass code review. But the fact that it can exist in principle means that you cannot truly rely on safe to mean "no manual checking required", even if trusted lambdas and nested functions are forbidden.
I understand there is a big difference between "never need to check absent compiler error" and "only need to check if someone who wrote the code should find another line of work", but there is also a big difference between where we are now and where we could be, particularly since improvements in this area will yield compounding benefit.
Jun 16 2021
parent Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 17 June 2021 at 01:33:39 UTC, Bruce Carneal wrote:
 On Thursday, 17 June 2021 at 01:07:05 UTC, Paul Backus wrote:
 [...]
I understand there is a big difference between "never need to check absent compiler error" and "only need to check if someone who wrote the code should find another line of work", but there is also a big difference between where we are now and where we could be, particularly since improvements in this area will yield compounding benefit.
More succinctly: I agree that the changes under discussion will not protect against arbitrarily devious coding practices, but I believe they would meaningfully reduce the likelihood of innocent mistakes. I think that we should move towards safety unless it is believed that the change would inhibit future improvements or would put large burdens on current users.
Jun 16 2021
prev sibling next sibling parent Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 17 June 2021 at 01:07:05 UTC, Paul Backus wrote:
 Of course, such a  trusted function should never pass code 
 review.
This isn't true. @trusted code may rely on invariants throughout the code base. As it should. And frequently does (or would, if you tried to convert @system code to @safe). You cannot make those invariants local to @trusted in the general case. For instance, algorithms that rely on sentinels. There is no reason to make all code that places sentinels @trusted. There is a difference between preparing for danger and executing danger.
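A sketch of the sentinel case (names made up): the scan below is the "executing danger" part, but its soundness hinges on a sentinel placed by ordinary code somewhere else entirely, which is exactly the kind of non-local invariant the spec's rules for @trusted cannot express:

```d
// Relies on the global invariant "buf is 0-terminated", which is
// established by ordinary (checked) code elsewhere in the program.
size_t scanToSentinel(const(char)* buf) @trusted
{
    size_t i = 0;
    while (buf[i] != 0)  // no bounds check: sound only if the sentinel exists
        ++i;
    return i;
}
```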
Jun 16 2021
prev sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 6/16/21 9:07 PM, Paul Backus wrote:
 On Thursday, 17 June 2021 at 00:34:12 UTC, Steven Schveighoffer wrote:
 On 6/16/21 5:32 PM, Paul Backus wrote:
 On Wednesday, 16 June 2021 at 21:26:08 UTC, Bruce Carneal wrote:
 I like the notion that others have mentioned of  safe checking by 
 default within  trusted code (which would require  system blocks to 
 disable checking).  Perhaps we could adopt an opt-in strategy where 
 such  safe checking is triggered by the presence of an  system block.
Under this proposal, system lambdas/blocks within trusted code would have the exact same semantics as trusted blocks/lambdas within safe code currently do. It's pure bikeshedding.
Yes, and that leaves safe code to actually not require manual checking, as opposed to today, where any safe code with trusted blocks today requires manual checking of all the safe code (I agree just changing trusted/system code this way, and doing nothing with safe would be bikeshedding). In reality, safe code should be a function of its inputs, and what is considered a safe input. With trusted lambdas, the inputs are "everything".
It's impossible to guarantee, at the language level, that safe code can never require manual review. The programmer is allowed to use any and all knowledge at their disposal to verify the memory safety of trusted (or in your proposal, system-block) code, including knowledge about safe code.
The goal is to guarantee that *as long as* your @trusted functions and blocks have a safe interface, then @safe code does not need to be checked. When I say "not require review" I mean "I have checked all the @trusted code, and it has a sound safe interface, so all @safe code that may call it has no need for review." We will never have a marking that is language-guaranteed to not require review.

To put it another way, as long as you aren't using @trusted escapes that leak implementation details, your @safe code shouldn't need a review. The problem is that @trusted lambdas are not only the norm, they're actually required, due to template inference.

Right now, a @safe function can only be "assumed safe" as long as there are no @trusted blocks in it. Once there is one @trusted block, then you have to review the whole function. The same thing goes for data invariants (though that spreads to the whole module instead). Not having to review code for memory safety is supposed to be the major point of @safe.
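To illustrate, a sketch: the @trusted lambda captures the entire enclosing scope, so an edit to the @safe lines above it can silently invalidate the audit:

```d
int lastElement(int[] a) @safe
{
    assert(a.length > 0);
    size_t i = a.length - 1; // a later "harmless" @safe edit here...
    return () @trusted {
        return a.ptr[i];     // ...breaks the audited assumption i < a.length
    }();
}
```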
 You might say, "the only thing a  trusted function can possibly know 
 about a  safe function is its signature, so that doesn't matter," but 
 that's not quite true. If the  trusted function and the  safe function 
 are in the same module, the  trusted function can (in principle) rely on 
 the inner workings of the safe function without invalidating its proof 
 of memory safety, since the programmer knows that any given version of 
 the  trusted function will only ever call the corresponding version of 
 the  safe function.
A truly "correct" trusted function should defensively be callable and provide appropriate return data for any possible safe interface. A trusted escape accepts as a parameter "everything" from the outer function, and returns "everything". Which means the safe function can add or subtract *at will* any parameters of any type or return values that it wants. This just isn't reviewable separately. We might as well call a spade a spade.
 Of course, such a  trusted function should never pass code review. But 
 the fact that it can exist in principle means that you cannot truly rely 
 on  safe to mean "no manual checking required", even if  trusted lambdas 
 and nested functions are forbidden.
If D had a hard requirement (enforced by undefined-behavior threats) that @trusted functions must always have a safe interface, then you could say that @safe code needs no review. Limiting what a @trusted function can do helps the ability to verify that the interface is indeed safe. This is all kind of moot though, I don't see any change to the @safe regime happening. The best we might get is DIP 1035. -Steve
Jun 17 2021
next sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Thursday, 17 June 2021 at 14:30:58 UTC, Steven Schveighoffer 
wrote:
 [snip]
 The goal is to guarantee that *as long as* your  trusted 
 functions and blocks have a  safe interface, then  safe code 
 does not need to be checked. When I say "not require review" I 
 mean "I have checked all the  trusted code, and it has a sound 
  safe interface, so all  safe code that may call it have no 
 need for review." We will never have a marking that is 
 language-guaranteed to not require review.
I think I've suggested before a @safe-strict, or something like that, that is the same as @safe but only allows calling @safe-strict functions (@safe/@trusted/@system functions can all call a @safe-strict function). The idea would be that those shouldn't need to be reviewed for the issues that @safe attempts to address.

You could then have @safe-strict blocks to complement the @system blocks you discussed previously (I've come around to the idea that @trusted blocks are a bad idea; you really just need these two). The @safe-strict blocks within a @safe/@trusted/@system function would be mechanically checked and, since they do not call any @safe/@trusted/@system functions, would not need to be reviewed.

Of course, one might argue that this results in mixing up @system code with @safe-strict code. However, it also helps separate out the safer parts of unsafe code. For instance, this approach means that you have a backwards-compatible way to have the code in a @trusted function that is not in a @system block be checked manually (see below; the @system block would be optional).

```d
@trusted void foo()
{
    @safe-strict
    {
        // mechanically checked code that can't call any
        // @safe/@trusted/@system functions
    }

    // @system code
}
```
Jun 17 2021
prev sibling next sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Thursday, 17 June 2021 at 14:30:58 UTC, Steven Schveighoffer 
wrote:
 On 6/16/21 9:07 PM, Paul Backus wrote:
 
 It's impossible to guarantee, at the language level, that 
  safe code can never require manual review. The programmer is 
 allowed to use any and all knowledge at their disposal to 
 verify the memory safety of  trusted (or in your proposal, 
  system-block) code, including knowledge about  safe code.
The goal is to guarantee that *as long as* your trusted functions and blocks have a safe interface, then safe code does not need to be checked. When I say "not require review" I mean "I have checked all the trusted code, and it has a sound safe interface, so all safe code that may call it have no need for review." We will never have a marking that is language-guaranteed to not require review. To put it another way, as long as you aren't using trusted escapes that leak implementation details, your safe code shouldn't need a review. The problem is that trusted lambdas are not only the norm, it's actually required, due to template inference. Right now, a safe function can only be "assumed safe" as long as there are no trusted blocks in it. Once there is one trusted block, then you have to review the whole function. The same thing goes for data invariants (though that spreads to the whole module instead). Not having to review code for memory safety is supposed to be the major point of safe.
Consider the following example:

```d
size_t favoriteNumber() @safe { return 42; }

int favoriteElement(ref int[50] array) @trusted
{
    // This is memory safe because we know favoriteNumber returns 42
    return array.ptr[favoriteNumber()];
}
```

`favoriteElement` has a safe interface. There is no argument you can pass to it from `@safe` code that can possibly result in memory corruption. However, if you change `favoriteNumber` to return something different (for example, 69), this may no longer be the case. So changes to `favoriteNumber`--a `@safe` function with no `@trusted` escapes--must still be manually reviewed to ensure that memory safety is maintained.

There is no language change you can make (short of removing `@trusted` entirely) that will prevent this situation from arising.
Jun 17 2021
next sibling parent reply Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Thursday, 17 June 2021 at 16:21:53 UTC, Paul Backus wrote:
 On Thursday, 17 June 2021 at 14:30:58 UTC, Steven Schveighoffer 
 wrote:
 [...]
Consider the following example: ```d size_t favoriteNumber() safe { return 42; } int favoriteElement(ref int[50] array) trusted { // This is memory safe because we know favoriteNumber returns 42 return array.ptr[favoriteNumber()]; } ``` `favoriteElement` has a safe interface. There is no argument you can pass to it from ` safe` code that can possibly result in memory corruption. However, if you change `favoriteNumber` to return something different (for example, 69), this may no longer be the case. So changes to `favoriteNumber`--a ` safe` function with no ` trusted` escapes--must still be manually reviewed to ensure that memory safety is maintained. There is no language change you can make (short of removing ` trusted` entirely) that will prevent this situation from arising.
Apart from reviewers asking the author of favoriteElement to assert that the index is appropriate ... That's exactly the reason behind a review of a @trusted function: you need to check only its source code.
Jun 17 2021
parent reply Paul Backus <snarwin gmail.com> writes:
On Thursday, 17 June 2021 at 16:50:28 UTC, Paolo Invernizzi wrote:
 On Thursday, 17 June 2021 at 16:21:53 UTC, Paul Backus wrote:
 On Thursday, 17 June 2021 at 14:30:58 UTC, Steven 
 Schveighoffer wrote:
 [...]
Consider the following example: ```d size_t favoriteNumber() safe { return 42; } int favoriteElement(ref int[50] array) trusted { // This is memory safe because we know favoriteNumber returns 42 return array.ptr[favoriteNumber()]; } ```
[...]
 There is no language change you can make (short of removing 
 ` trusted` entirely) that will prevent this situation from 
 arising.
Apart from reviewers asking the author of favoriteElement to assert that the index is appropriate ...
Yes, that's exactly my point. This can't be solved by changing the language, but it *can* be solved by a good code review process. So we should avoid wasting our time on language-based solutions (like Steven's proposal for `@system` blocks in `@trusted` functions), and instead focus on how to improve our code review process so that this kind of brittle `@trusted` code doesn't slip through.

For example, a review process that enforced the following rules would have flagged `favoriteElement` as problematic:

1. Every use of `@trusted` must be accompanied by a comment containing a proof of memory safety.
2. A memory-safety proof for `@trusted` code may not rely on any knowledge about other functions beyond what is implied by their signatures.

Specifically, `favoriteElement` violates rule (2). To bring it into compliance, we'd have to either add an `assert` to verify our assumption about `favoriteNumber`, or find a way to encode that assumption into `favoriteNumber`'s signature (for example, with an `out` contract).
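For concreteness, the two fixes might look like this (a sketch):

```d
// Option 1: encode the assumption in the signature with an out contract.
size_t favoriteNumber() @safe
out (r; r < 50)
{
    return 42;
}

// Option 2: re-verify the assumption locally in the @trusted code.
int favoriteElement(ref int[50] array) @trusted
{
    auto i = favoriteNumber();
    assert(i < array.length); // fails fast if favoriteNumber's spec drifts
    return array.ptr[i];
}
```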
Jun 17 2021
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jun 17, 2021 at 05:14:12PM +0000, Paul Backus via Digitalmars-d wrote:
 On Thursday, 17 June 2021 at 16:50:28 UTC, Paolo Invernizzi wrote:
 On Thursday, 17 June 2021 at 16:21:53 UTC, Paul Backus wrote:
[...]
 ```d
 size_t favoriteNumber()  safe { return 42; }
 
 int favoriteElement(ref int[50] array)  trusted
 {
     // This is memory safe because we know favoriteNumber returns 42
     return array.ptr[favoriteNumber()];
 }
 ```
[...]
 1. Every use of ` trusted` must be accompanied by a comment containing
 a proof of memory safety.
 2. A memory-safety proof for ` trusted` code may not rely on any
 knowledge about other functions beyond what is implied by their
 signatures.
 
 Specifically, `favoriteElement` violates rule (2). To bring it into
 compliance, we'd have to either add an `assert` to verify our
 assumption about `favoriteNumber`, or find a way to encode that
 assumption into `favoriteNumber`'s signature (for example, with an
 `out` contract).
Using an out contract is a rather weak guarantee, because the author of favoriteNumber can easily change the contract and silently break favoriteElement's assumptions. Using an assert in favoriteElement is better, because if favoriteNumber ever changes in an incompatible way, the assert would trigger. T -- The best way to destroy a cause is to defend it poorly.
Jun 17 2021
prev sibling parent Bruce Carneal <bcarneal gmail.com> writes:
On Thursday, 17 June 2021 at 17:14:12 UTC, Paul Backus wrote:
 On Thursday, 17 June 2021 at 16:50:28 UTC, Paolo Invernizzi 
 wrote:
 On Thursday, 17 June 2021 at 16:21:53 UTC, Paul Backus wrote:
...
 Yes, that's exactly my point. This can't be solved by changing 
 the language, but it *can* be solved by a good code review 
 process. So we should avoid wasting our time on language-based 
 solutions (like Steven's proposal for ` system` blocks in 
 ` trusted` functions), and instead focus on how to improve our 
 code review process so that this kind of brittle ` trusted` 
 code doesn't slip through.
I don't consider it a waste of time to search for language upgrades that would reduce the need for expert code review. I trust experts significantly less than I trust automated checking (in this context I am one of those less trusted "expert"s). I like where Steven's proposal was headed, if I understood it correctly, and have a variant to put forward that should be opt-in, with an uncluttered syntax and clear semantics for the long term, or so I believe. I suggest that we discuss the topic at beerconf where, with any luck, we can converge quickly on understanding.
 ...
Jun 17 2021
prev sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 6/17/21 12:21 PM, Paul Backus wrote:
 On Thursday, 17 June 2021 at 14:30:58 UTC, Steven Schveighoffer wrote:
 On 6/16/21 9:07 PM, Paul Backus wrote:
 It's impossible to guarantee, at the language level, that  safe code 
 can never require manual review. The programmer is allowed to use any 
 and all knowledge at their disposal to verify the memory safety of 
  trusted (or in your proposal,  system-block) code, including 
 knowledge about  safe code.
The goal is to guarantee that *as long as* your trusted functions and blocks have a safe interface, then safe code does not need to be checked. When I say "not require review" I mean "I have checked all the trusted code, and it has a sound safe interface, so all safe code that may call it have no need for review." We will never have a marking that is language-guaranteed to not require review. To put it another way, as long as you aren't using trusted escapes that leak implementation details, your safe code shouldn't need a review. The problem is that trusted lambdas are not only the norm, it's actually required, due to template inference. Right now, a safe function can only be "assumed safe" as long as there are no trusted blocks in it. Once there is one trusted block, then you have to review the whole function. The same thing goes for data invariants (though that spreads to the whole module instead). Not having to review code for memory safety is supposed to be the major point of safe.
Consider the following example: ```d size_t favoriteNumber() safe { return 42; } int favoriteElement(ref int[50] array) trusted {     // This is memory safe because we know favoriteNumber returns 42     return array.ptr[favoriteNumber()]; } ``` `favoriteElement` has a safe interface. There is no argument you can pass to it from ` safe` code that can possibly result in memory corruption. However, if you change `favoriteNumber` to return something different (for example, 69), this may no longer be the case. So changes to `favoriteNumber`--a ` safe` function with no ` trusted` escapes--must still be manually reviewed to ensure that memory safety is maintained.
But that's a different kind of problem. If favoriteNumber is allowed to return anything above 49, then favoriteElement is invalid. If favoriteNumber is *required* by spec to return 42, then it needs to be reviewed to ensure that it does that. And then the @safe function does not need to be reviewed for *memory* problems, just that it's written to spec.
 
 There is no language change you can make (short of removing ` trusted` 
 entirely) that will prevent this situation from arising.
Sure, code reviews on @safe code still need to happen to ensure they are correct. But code reviews for *memory safety errors*, which are *really hard* to do right, should not be required. It's similar to unit tests. Instead of testing the whole project at once, you test one thing, and if all the little things are correct, the combination of things is valid. -Steve
Jun 17 2021
prev sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 17 June 2021 at 14:30:58 UTC, Steven Schveighoffer 
wrote:
 The goal is to guarantee that *as long as* your  trusted 
 functions and blocks have a  safe interface, then  safe code 
 does not need to be checked. When I say "not require review" I 
 mean "I have checked all the  trusted code, and it has a sound 
  safe interface, so all  safe code that may call it have no 
 need for review." We will never have a marking that is 
 language-guaranteed to not require review.
But doesn't this mean that having even a single @safe method on an ADT class would be a liability? So you are essentially forced to define them all as @trusted? E.g.:

```d
class A {

    this() @trusted {
        ptr = &buffer[0];
        offset = 0;
    }

    int get() @trusted { return ptr[offset]; }
    void set(int i) @trusted { this.offset = i&1; }

    /*BUG: offset was pasted in here by mistake*/
    int size() @safe { offset=2; return 2; }

private:
    int[2] buffer;
    int* ptr;
    int offset;
}
```

Since this @safe size() function could in theory mess up offset by a bug, it should not be allowed? However, if we make size() @trusted then this is perfectly OK by the requirements? As a result, you have to make ALL methods @trusted.
Jun 17 2021
parent reply ag0aep6g <anonymous example.com> writes:
On Thursday, 17 June 2021 at 17:42:08 UTC, Ola Fosheim Grøstad 
wrote:
 ```d
 class A {

     this() @trusted {
         ptr = &buffer[0];
         offset = 0;
     }

     int get() @trusted { return ptr[offset]; }
     void set(int i) @trusted { this.offset = i&1; }

     /*BUG: offset was pasted in here by mistake*/
     int size() @safe { offset=2; return 2; }

 private:
     int[2] buffer;
     int* ptr;
     int offset;
 }
 ```

 Since this  safe size() function could in theory mess up offset 
 by a bug, it should not be allowed?
With the current spec, the bug is in `get`. It cannot be @trusted, because it does not have a safe interface. With DIP 1035 (@system variables) you could mark `offset` as @system. Then `get` would be fine and the compiler would catch the bug in `size`.
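Sketched with DIP 1035 semantics (not in the language yet):

```d
class A
{
    int get() @trusted { return ptr[offset]; } // fine: offset is now guarded

    int size() @safe { offset = 2; return 2; } // error under DIP 1035:
                                               // cannot modify the @system
                                               // variable 'offset' in @safe code
private:
    int[2] buffer;
    int* ptr;
    @system int offset; // only @system/@trusted code may write it
}
```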
 However if we make size()  trusted then this is perfectly ok by 
 the requirements?
If you make `size` @trusted, `get` still does not have a safe interface and cannot be @trusted.
Jun 17 2021
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 17 June 2021 at 18:40:15 UTC, ag0aep6g wrote:
 If you make `size`  trusted, `get` still does not have a safe 
 interface and cannot be  trusted.
What about it isn't safe? It is provably safe? Meaning, I can do a formal verification of it as being safe!? If this isn't safe then it becomes impossible to write safe wrappers for C data structures.
Jun 17 2021
next sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Thursday, 17 June 2021 at 18:46:09 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 17 June 2021 at 18:40:15 UTC, ag0aep6g wrote:
 If you make `size`  trusted, `get` still does not have a safe 
 interface and cannot be  trusted.
What about it isn't safe? It is provably safe? Meaning, I can do a formal verification of it as being safe!?
In order for `get` to have a safe interface, it must not be possible to call it from `@safe` code with an instance that has `offset >= 2`. Because of the bug in `size`, it *is* possible for `@safe` code to call `get` with such an instance. Therefore, `get` does not have a safe interface.
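Concretely, an all-@safe caller can trigger the out-of-bounds read through the class from upthread (a sketch):

```d
void oops() @safe
{
    auto a = new A;   // the class from Ola's example above
    a.size();         // the buggy @safe method sets offset = 2
    auto x = a.get(); // the @trusted get() now reads past buffer's end
}
```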
Jun 17 2021
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 17 June 2021 at 19:06:31 UTC, Paul Backus wrote:
 In order for `get` to have a safe interface, it must not be 
 possible to call it from ` safe` code with an instance that has 
 `offset >= 2`. Because of the bug in `size`, it *is* possible 
 for ` safe` code to call `get` with such an instance. 
 Therefore, `get` does not have a safe interface.
Yes, but if I make size() @trusted and fix the bug, then the interface is provably safe?

```d
class A {

    this() @trusted {
        ptr = &buffer[0];
        offset = 0;
    }

    int get() @trusted { return ptr[offset]; }
    void set(int i) @trusted { this.offset = i&1; }

    int size() @trusted { return 2; }

private:
    int[2] buffer;
    int* ptr;
    int offset;
}
```

Also, if I do this, it is provably safe, because of the invariant that is checked?

```d
class A {

    this() @trusted {
        ptr = &buffer[0];
        offset = 0;
    }

    int get() @trusted { return ptr[offset]; }
    void set(int i) @trusted { this.offset = i&1; }

    int size() @safe { offset=2; return 2; }

    invariant { assert(0 <= offset && offset <= 1); }

private:
    int[2] buffer;
    int* ptr;
    int offset;
}
```
Jun 17 2021
parent reply Paul Backus <snarwin gmail.com> writes:
On Thursday, 17 June 2021 at 20:25:22 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 17 June 2021 at 19:06:31 UTC, Paul Backus wrote:
 In order for `get` to have a safe interface, it must not be 
 possible to call it from ` safe` code with an instance that 
 has `offset >= 2`. Because of the bug in `size`, it *is* 
 possible for ` safe` code to call `get` with such an instance. 
 Therefore, `get` does not have a safe interface.
Yes, but if I make size() trusted and fix the bug then interface is provably safe?
Assuming [issue 20941][1] is fixed, yes. [1]: https://issues.dlang.org/show_bug.cgi?id=20941
   Also, if I do this, it is probably safe, because of the 
 invariant that is checked?
[...]
 ```
     invariant{ assert(0<= offset && offset <=1 ); }
 ```
Yes.
Jun 17 2021
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 17 June 2021 at 20:33:33 UTC, Paul Backus wrote:
 Assuming [issue 20941][1] is fixed, yes.
[…]
 Yes.
Thanks. There seem to be many interpretations of what @safe means, though. Right now, @safe interfacing with C seems like opening Pandora's box. Probably a good idea to write up a set of best-practice rules for safe interfacing with C libraries (with examples).
Jun 17 2021
parent reply Paul Backus <snarwin gmail.com> writes:
On Thursday, 17 June 2021 at 20:42:20 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 17 June 2021 at 20:33:33 UTC, Paul Backus wrote:
 Assuming [issue 20941][1] is fixed, yes.
[…]
 Yes.
Thanks. There seems to be many interpretations of what safe means though. Right now, safe interfacing with C seems like opening Pandora's box. Probably a good idea to write up a set of best practice rules for making safe interfacing with C libraries (with examples).
A lot of people on the D forums have an incomplete or incorrect understanding of what memory safety means, and how D's @safe, @trusted and @system attributes can be used to help prove a program memory-safe. The interpretation that I and ag0aep6g have been describing is the correct one.

Re: interfacing with C, the best guarantee you can reasonably hope to achieve is "my @trusted code is memory safe as long as the C functions I'm calling behave as specified in the relevant documentation or standard." I go into more detail about this in [my blog post on memory safety in D][1].

[1]: https://pbackus.github.io/blog/what-does-memory-safety-really-mean-in-d.html
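For example, a minimal sketch of such a wrapper (cLength is a made-up name): its @trusted claim rests entirely on strlen's documented contract of reading only up to the first 0:

```d
size_t cLength(string s) @trusted
{
    import core.stdc.string : strlen;
    import std.string : toStringz;
    // toStringz guarantees a terminating 0; strlen is specified to read
    // no further than that, so no out-of-bounds access can occur.
    return strlen(s.toStringz);
}
```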
Jun 17 2021
next sibling parent reply ag0aep6g <anonymous example.com> writes:
On Thursday, 17 June 2021 at 21:00:13 UTC, Paul Backus wrote:
 On Thursday, 17 June 2021 at 20:42:20 UTC, Ola Fosheim Grøstad 
 wrote:
 On Thursday, 17 June 2021 at 20:33:33 UTC, Paul Backus wrote:
 Assuming [issue 20941][1] is fixed, yes.
[…]
 Yes.
[...]
 The interpretation that I and ag0aep6g have been describing is 
 the correct one.
Yet I would answer "no" where you answered "yes" above. The question was: "Yes, but if I make size() @trusted and fix the bug, then the interface is provably safe?". The corresponding code:

```d
class A {

    this() @trusted {
        ptr = &buffer[0];
        offset = 0;
    }

    int get() @trusted { return ptr[offset]; }
    void set(int i) @trusted { this.offset = i&1; }

    int size() @trusted { return 2; }

private:
    int[2] buffer;
    int* ptr;
    int offset;
}
```

In my opinion, that code is fundamentally equivalent to this (regarding the safety of `get`):

```d
int get(int* ptr, int offset) @trusted { return ptr[offset]; }
```

That function does not have a safe interface, because it exhibits undefined behavior when called like `get(new int, 1000)`, which @safe code can do. `private`, other methods, the constructor - those things don't matter.
Jun 17 2021
parent reply Paul Backus <snarwin gmail.com> writes:
On Thursday, 17 June 2021 at 21:16:02 UTC, ag0aep6g wrote:
 On Thursday, 17 June 2021 at 21:00:13 UTC, Paul Backus wrote:
 On Thursday, 17 June 2021 at 20:42:20 UTC, Ola Fosheim Grøstad 
 wrote:
 On Thursday, 17 June 2021 at 20:33:33 UTC, Paul Backus wrote:
 Assuming [issue 20941][1] is fixed, yes.
[…]
 Yes.
[...]
 The interpretation that I and ag0aep6g have been describing is 
 the correct one.
Yet I would answer "no" where you answered "yes" above. The question was: "Yes, but if I make size() trusted and fix the bug then interface is provably safe?". The corresponding code:
[...]
 In my opinion, that code is fundamentally equivalent to this 
 (regarding the safety of  `get`):

 ```d
 int get(int* ptr, int offset)  trusted { return ptr[offset]; }
 ```
 That function does not have a safe interface, because it 
 exhibits undefined behavior wenn called like `get(new int, 
 1000)`, which  safe code can do.
In current D, yes, because issue 20941 means that `private` cannot be relied upon for encapsulation--thus the caveat. However, if we assume for the sake of argument that @safe code *can't* bypass `private`, then it is possible to prove that the invariant `offset < 2` is maintained by examining only code in the module where `offset` is defined. And once we've proven that invariant, we can prove that `get` is always memory safe when called from `@safe` code.

I will grant that, even in this hypothetical, `get` still would not satisfy the definition of "safe interface" currently given in the language spec--but at that point, who cares? The current definition is valid for the current language, but if the language changes, the definition ought to be updated too.
Jun 17 2021
parent ag0aep6g <anonymous example.com> writes:
On Thursday, 17 June 2021 at 22:03:47 UTC, Paul Backus wrote:
 However, if we assume for the sake for argument that  safe code 
 *can't* bypass `private`, then it is possible to prove that the 
 invariant `offset < 2` is maintained by examining only code in 
 the module where `offset` is defined. And once we've proven 
 that invariant, we can prove that `get` is always memory safe 
 when called from ` safe` code.
Not when new (@safe) code is added to the same module. E.g., the new guy adds a @safe method that has the bug from Ola's original code (`offset = 2;`).
 I will grant that, even in this hypothetical, `get` still would 
 not satisfy the definition of "safe interface" currently given 
 in the language spec--but at that point, who cares? The current 
 definition is valid for the current language, but if the 
 language changes, the definition ought to be updated too.
If the spec is ever changed to say that a @trusted function taints the module it's in, then sure. But until that happens, I maintain that @trusted code is not allowed to rely on an external integer for safety, including private ones.

I'm not saying that that's the best way to define @trusted. But I believe that it's in our best interest to acknowledge the spec, and the fact that no one really bothers following it. That way everyone is on the same page (we don't need any more confusion when it comes to @trusted), and it gives DIP 1035 and other proposals more oomph, because it's clear that the status quo isn't good enough. If everyone pretends that the spec agrees with their personal take on @trusted, then there is no perceived need to fix anything. People with slightly different interpretations will keep talking past each other. Newbies will keep having a hard time.
Jun 17 2021
prev sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 17 June 2021 at 21:00:13 UTC, Paul Backus wrote:
 A lot of people on the D forums have an incomplete or incorrect 
 understanding of what memory safety means, and how D's  safe, 
  trusted and  system attributes can be used to help prove a 
 program memory-safe. The interpretation that I and ag0aep6g 
 have been describing is the correct one.
There is a difference between proving a program memory safe and having to assume that all @safe code is maximally broken. Which is what is required here. Also, ag0aep6g is not happy with proving that the class is memory safe. It has to be memory safe even when non-pointer values are overwritten with garbage. He wants you to look at each @trusted method in isolation, not the class as a whole.
 Re: interfacing with C, the best guarantee you can reasonably 
 hope to achieve is "my  trusted code is memory safe as long as 
 the C functions I'm calling behave as specified in the relevant 
 documentation or standard."
More tricky than that, I think. If I obtain a DOM node from a C library, then that DOM node points to the parent node, and so on; effectively I could have the entire C heap in my hands. You basically either have to create slow wrappers around everything or manually modify the C bindings so you give less information to the D compiler than the C compiler has (basically censor out back pointers from the struct)?? Or maybe not. Either way, it is not trivial to reason about the consequences that you get from obtaining not just objects from C code, but nodes in a connected graph. I haven't tried to create such safe bindings, maybe I am wrong. Maybe there are some easy patterns one can follow.
Jun 17 2021
prev sibling parent reply ag0aep6g <anonymous example.com> writes:
On 17.06.21 20:46, Ola Fosheim Grøstad wrote:
 What about it isn't safe? It is provably safe? Meaning, I can do a 
 formal verification of it as being safe!?
`offset` is an input to `get` (via `this`). `offset` is an int, so all possible values (int.min through int.max) are considered "safe values". `get` exhibits undefined behavior when `offset` is greater than 1. A function that can exhibit undefined behavior when called with only safe values cannot be @trusted.
 If this isn't safe then it becomes impossible to write  safe wrappers 
 for C data structures.
As I wrote, DIP 1035 addresses this.
Jun 17 2021
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 17 June 2021 at 19:25:51 UTC, ag0aep6g wrote:
 On 17.06.21 20:46, Ola Fosheim Grøstad wrote:
 What about it isn't safe? It is provably safe? Meaning, I can 
 do a formal verification of it as being safe!?
`offset` is an input to `get` (via `this`). `offset` is an int, so all possible values (int.min through int.max) are considered "safe values". `get` exhibits undefined behavior when `offset` is greater than 1. A function that can exhibit undefined behavior when called with only safe values cannot be trusted.
But this.offset can provably only hold the value 0 or 1. What is the point of manually auditing @trusted if one imposes arbitrary requirements like these? So I am basically forced to use a bool to represent offset for it to be considered safe? One should start by defining invariants that will keep the class safe. Then one should audit all methods with respect to the invariants. The invariant is that this.offset cannot hold a value different from 0 and 1. And it is easy to prove. (I assume here that all methods are marked as @trusted.)
Jun 17 2021
parent reply ag0aep6g <anonymous example.com> writes:
On Thursday, 17 June 2021 at 20:37:11 UTC, Ola Fosheim Grøstad 
wrote:
 But this.offset can provably only hold the value 0 or 1.
You mean if it holds a different value, then the program becomes invalid? Sure, that's easy to prove. Do you expect the compiler to do that proof, and then give you an error when you violate the invariant? That's not at all how D works.
 What is the point of manually auditing  trusted if one impose 
 arbitrary requirements like these?
The point of manually auditing @trusted is to ensure that the function actually follows the requirements. If you don't want to bother making a function's interface safe, mark it @system.
 So I am basically forced to use a bool to represent offset for 
 it to be considered safe?
That might work (haven't given it much thought). What you're supposed to do is wait for DIP 1035, or recreate its @system variables in a library. What people actually do is ignore the rules and live the dangerous lives of safety outlaws.
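A minimal library emulation might look like this (a sketch; names made up, and the same-module `private` caveat discussed elsewhere in this thread still applies):

```d
struct SystemVar(T)
{
    private T value;
    T get() const @safe { return value; }  // reading stays @safe
    void set(T v) @system { value = v; }   // writing requires @system/@trusted
}
```

With `SystemVar!int offset;` in Ola's class, the buggy @safe size() would no longer compile, because set() is @system.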
 One should start by defining invariants that will keep the 
 class safe.

 Then one should audit all methods with respect to the 
 invariants.
You can do that with @system. @safe and @trusted won't help enforce your custom invariants.
Jun 17 2021
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 17 June 2021 at 20:57:27 UTC, ag0aep6g wrote:
 You mean if it holds a different value, then the program 
 becomes invalid? Sure, that's easy to prove. Do you expect the 
 compiler to do that proof, and then give you an error when you 
 violate the invariant? That's not at all how D works.
I mean that this should satisfy the requirements (fixed a bug that allowed ptr to be changed after construction):

```d
class A {

    this() @trusted {
        ptr = &buffer[0];
        offset = 0;
    }

    int get() @trusted { return ptr[offset]; }
    void set(int i) @trusted { this.offset = i&1; }

    int size() @trusted { return 2; }

private:
    int[2] buffer;
    const(int*) ptr;
    int offset;
}
```

1. There is no way for @safe code to make this unsafe.
2. I have through audit proven that all methods keep offset within the required range.
 The point of manually auditing  trusted is to ensure that the 
 function actually follows the requirements.
Then the requirements need to be made more explicit, because people seem to disagree as to what is required.
 variables in a library. What people actually do is ignore the 
 rules and live the dangerous lifes of safety outlaws.
No, that is not what I want. I want to define the necessary invariants to uphold safety. Then prove that my class remains within the constraints. Yes, preventing @safe code from writing to some variables might make that task easier. In the meantime I can just make all methods @trusted.
 You can do that with  system.  safe and  trusted won't help 
 enforcing your custom invariants.
But @trusted requires you to manually audit the invariants needed, so why force a weakening of the invariants that are needed to make it safe? That makes no sense if you already are required to conduct a manual audit. The compiler has no say in this either way. People will do it, and they will be correct and sensible for doing so. By requiring an audit _we already presume that the auditor has the required skillset_ to reason about these issues at a formal level!
Jun 17 2021
prev sibling parent Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Wednesday, 16 June 2021 at 18:28:48 UTC, Timon Gehr wrote:
 On 16.06.21 13:38, RazvanN wrote:
 
 
 What do you think?
 @trusted nested functions are an antipattern and this enshrines them in a language feature.
+1 Keep them only at function level.
Jun 17 2021
prev sibling next sibling parent ikod <igor.khasilev gmail.com> writes:
On Wednesday, 16 June 2021 at 11:38:54 UTC, RazvanN wrote:
 becomes this:

 ```d
 void foo() @safe
 {
     @trusted
     {
        ....
     }
 }
 ```
Definitely would love this
Jun 16 2021
prev sibling next sibling parent Dukc <ajieskola gmail.com> writes:
On Wednesday, 16 June 2021 at 11:38:54 UTC, RazvanN wrote:
 ```d
 void foo() @safe
 {
     @trusted
     {
         int[100] a = void;
     }
     ...
 }
 ```

 What do you think?
It raises the question of what this is going to do:

```d
@trusted
{
    // Is d @trusted?
    // Or is someDelegateProvider allowed to be @system?
    // Or both?
    void delegate(int) d = someDelegateProvider();
}
```
Jun 16 2021
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
I'm sure you can guess where this post is going.

Consider the following controversial language features:

1. implicit variable declaration
2. non-void default initialization
3. macros
4. operator overloading for non-arithmetic purposes
5. arithmetic in version declarations
6. transitive const
7. default constructors
8. direct access to shared values
9. emulation of hardware vector operations
10. recoverable assert failures
11.  trusted statements

Ok, I have successfully sold (1) and (2). But
the others are all uphill struggles for D.

What they all have in common is they're programming
language crack cocaine.

By that I mean that these features make programming faster
and easier. It's like that first hit of crack enables
a superpower where prodigious quantities of code can be
quickly churned out. Wow! What could be wrong with that?

On the other hand, there's dour old Walter dressed in his
severe nun's habit rapping the knuckles with a yardstick anyone
reaching for those treats.

The problem is those features come with a downside. Some
become apparent earlier, some later, even a decade later.
The downside is unreadable, unmaintainable code. Of course,
everyone personally believes that *they* can judiciously
and smartly snort the crack and there won't be downside.

Here's where I play the experience card. All those features
transform code into s**t after a while, no matter who uses them.
Including me. Earlier in my career, I would have happily implemented
all of them, because more power is better. Writing code faster is
better.

No it isn't. Writing code faster isn't better. More power isn't
always better. Having a language that pushes towards writing code
that is more readable is better, safer is better, more maintainable
is better.

Even C, (in)famous for never breaking backwards compatibility,
had implicit function declaratations for 30 years until quietly
taking it behind the woodshed and slitting its throat. (The verbiage
for it just disappeared from the C99 spec.)

That's for context. The rest will be about the  trusted proposal.

The question is: Why is  trusted at the function level rather than
at the statement level? It certainly seems more convenient to apply
it with statement granularity, and it will save 4 characters of typing
over the lambda approach. What could be wrong with that?
And indeed, that so far appears to be the general reaction.

The idea of putting it at the function level is to force (I know,
that sounds bad, but indulge me for a moment) the programmer
to think about the decomposition of programs into safe and unsafe
code. Ideally, untrusted code should be encapsulated and segregated
into separate sections of code, with clean, well-defined, well-considered,
and well thought through interfaces.

At statement level, one just schlepps  trusted in front and gives it
no more consideration. It is thoughtlessly applied, the compiler error
goes away, Mission Accomplished! It might as well be renamed
the  shaddup attribute. Zero thought is given to carefully crafting
a safe interface, because a safe interface to it is not required.
Of course that's tempting.

The () trusted{ ... }() is equally bad. Note that there are no parameters
to it, hence NO interface. This was discovered as a way to evade the
intent that  trusted was to be applied to a function interface. It
had never occurred to me to add semantics to disallow it. Even worse,
I myself have been seduced into using this trick.

But I consoled myself that at least it was ugly, and required an extra
4 characters to type. Unfortunately, that hasn't been enough of a
deterrence, as it appears to have been used with abandon.

I recall it was David Nadlinger who originally pointed out that  trusted
at the function level, even if only one statement was unsafe, would
hide safety issues in the rest of the function. Hence the appearance
of the lambda workaround. He is correct. But the real problem was in
failing to encapsulate the unsafe code, and place it behind a sound
and safe interface.
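To illustrate the difference, a sketch (elementAt is a made-up name): the unsafe operation is hoisted out of the inline escape into a named function whose signature spells out exactly what is being trusted.

```d
// The whole-scope escape: no parameters, hence no interface.
//     auto x = () @trusted { return p[i]; }();
//
// The encapsulated form: the signature states what is being trusted.
int elementAt(scope const(int)[] a, size_t i) @trusted
{
    assert(i < a.length); // the assumption is now visible and checkable
    return a.ptr[i];      // the only unchecked operation
}
```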

As for my own pre-@safe code, I've been gradually upgrading it to be fully @safe. It's a satisfying result.

Use of @trusted lambdas is a code smell, and making @trusted work at the statement level is putting the stamp of approval on a stinky practice. Enabling this is a one-way street; it'd be very, very hard to undo it.

P.S. A subtler aspect of this is D's semantic reliance on rich function signatures. This passes critical semantic information to the compiler, such as inputs, outputs, escapes, live ranges, who's zoomin' who, etc. Having @trusted at the statement level, with no defined interface to it, just torpedoes the compiler's ability to reason about the code. What is the interface to a @trusted statement? Who knows. A @trusted statement can do anything to the surrounding code in ways nearly impossible for a compiler to infer. After all, that's why @system code is called @system in the first place. It doesn't play by the rules.
Jun 16 2021
next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 17 June 2021 at 02:02:58 UTC, Walter Bright wrote:
 The idea of putting it at the function level is to force (I 
 know,
 that sounds bad, but indulge me for a moment) the programmer
 to think about the decomposition of programs into safe and 
 unsafe
 code. Ideally, untrusted code should be encapsulated and 
 segregated
 into separate sections of code, with clean, well-defined, 
 well-considered,
 and well thought through interfaces.
Then you have to make the "calls unsafe code" marker transitive. Which is completely unreasonable. It is a mistake to think that unsafety is related to interfaces. That would only apply to the most trivial usage of unsafe code. If @safe is meant to be useful then it should be done in a way that makes people want to use it. That includes people who currently slap @system all over their code base. That includes people who write kernels and device drivers. If you want people to use safety features then you also have to design them in a way that makes people not hate their code base.
 At statement level, one just schlepps @trusted in front and gives it
 no more consideration. It is thoughtlessly applied, the compiler error
 goes away, Mission Accomplished!
Slapping one @trusted over the whole function is even easier than slapping two @trusted on two statements inside the function body. So that argument makes no sense to me. (Also "@trusted" looks far more innocent to newbies than "@unsafe", but that is another issue.)
 It might as well be renamed
 the @shaddup attribute. Zero thought is given to carefully crafting
 a safe interface, because a safe interface to it is not required.
Your lexer in DMD will obviously never be @safe as it is written. It can be made @trusted, but if @safe code overwrites the sentinels then it will obviously break. In actual system-level code you cannot assume that memory safety is a property of interfaces. It is a property of data structures, events, timelines, and their invariants.
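
A hedged sketch of the sentinel point (a hypothetical scanner, not dmd's actual lexer):

```d
// `scan` is memory-safe only because of a data invariant -- the buffer
// ends in '\0' -- which no parameter list can express. If @safe code
// elsewhere overwrites that sentinel, this @trusted label silently
// becomes a lie.
size_t scan(const(char)* p) @trusted
{
    size_t n;
    while (*p) // relies on the sentinel, not on a carried length
    {
        ++p;
        ++n;
    }
    return n;
}
```
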
 As for my own pre-@safe code, I've been gradually upgrading it to be
 fully @safe. It's a satisfying result.
Well, all of dmd can be rewritten to be @safe with no problem, as is the case for most batch programs. But a compiler is not really system-level code.
 P.S. A subtler aspect of this is D's semantic reliance on rich function
 signatures. This passes critical semantic information to the compiler,
 such as inputs, outputs, escapes, live ranges, who's zoomin' who, etc.
 Having @trusted at the statement level, with no defined interface to it,
 just torpedoes the compiler's ability to reason about the code.
Easy solution: make @trusted inferred, and introduce @unsafe at the statement level. Then let IDEs annotate call chains with some kind of syntax highlighting or other marker that lets people know that a call chain includes unsafe code.

One thing that makes people dislike the look of D code is exactly those "rich" function signatures. They have a negative impact on programmers' ability to interpret function signatures, especially for newbies. It is easier to just slap @system on the whole code base if you write system code. Which is also the kind of code that could benefit most from @safe...
Jun 17 2021
prev sibling next sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Thursday, 17 June 2021 at 02:02:58 UTC, Walter Bright wrote:

 The idea of putting it at the function level is to force (I know,
 that sounds bad, but indulge me for a moment) the programmer
 to think about the decomposition of programs into safe and unsafe
 code. Ideally, untrusted code should be encapsulated and segregated
 into separate sections of code, with clean, well-defined,
 well-considered, and well thought through interfaces.
No, the lambda approach will not lead to those. Just like every first D project out there abuses "static if" to hack around the limitations of "version", people will use something like "int a = () @trusted { ... terrible hacks... }();" everywhere.

I think you are fighting a strawman here: we are not arguing that powerful features won't be abused. We are arguing that your safeguards will not prevent the abuse. By allowing immediately called nullary lambdas (the semantic equivalent of blocks), you are not forcing people to think about anything but how to hack around the lack of @trusted blocks.

I could play an experience card too, and say that the shittiest code I have ever seen was written to work around programming language limitations. So, your argument works both ways: yes, powerful features may and do lead to shitty code, and, yes, limitations may and do lead to shitty code.
Jun 17 2021
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/17/2021 2:06 AM, Max Samukha wrote:
 No, the lambda approach will not lead to those. Just like every first D
 project out there abuses "static if" to hack around the limitations of
 "version", people will use something like
 "int a = () @trusted { ... terrible hacks... }();" everywhere.
I actually agree with you. The lambdas should be replaced with static nested functions, so the arguments come in through the front door.
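
A minimal sketch of that shape, with hypothetical names:

```d
void update(int[] a, size_t i) @safe
{
    // `static` forbids capturing the enclosing scope, so every value the
    // unsafe code touches must come in through the parameter list --
    // "through the front door".
    static void poke(scope int[] arr, size_t idx) @trusted
    {
        if (idx < arr.length)
            *(arr.ptr + idx) = 0; // @system pointer arithmetic, bounds checked above
    }
    poke(a, i);
}
```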
Jun 17 2021
parent Max Samukha <maxsamukha gmail.com> writes:
On Thursday, 17 June 2021 at 10:18:57 UTC, Walter Bright wrote:

 I actually agree with you.
That's a miracle!
 The lambdas should be replaced with static nested functions,
 so the arguments come in through the front door.
I had skipped over the other part of your post where you admitted that @trusted lambdas had been an oversight. I apologize for that.
Jun 17 2021
prev sibling next sibling parent reply Dennis <dkorpel gmail.com> writes:
Nice post! Just one thing I don't understand:

On Thursday, 17 June 2021 at 02:02:58 UTC, Walter Bright wrote:
 Consider the following controversial language features:
 ...
 2. non-void default initialization
 ...
 6. transitive const
 ...
 Ok, I have successfully sold (1) and (2). But
 the others are all uphill struggles for D.
Are you trying to "sell" the idea that this list of features is bad? In that case, are you saying D's `const` and default initialization of `T x = T.init;` instead of `T x = void;` are bad? I get the other items on the list, but (2) and (6) confuse me.
Jun 17 2021
parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/17/2021 2:13 AM, Dennis wrote:
 Nice post! Just one thing I don't understand:
 
 On Thursday, 17 June 2021 at 02:02:58 UTC, Walter Bright wrote:
 Consider the following controversial language features:
 ...
 2. non-void default initialization
 ...
 6. transitive const
 ...
 Ok, I have successfully sold (1) and (2). But
 the others are all uphill struggles for D.
Are you trying to "sell" the idea that this list of features is bad? In that case, are you saying D's `const` and default initialization of `T x = T.init;` instead of `T x = void;` are bad? I get the other items on the list, but (2) and (6) confuse me.
Yeah, I could have phrased that better. For 2, I'm referring to the C behavior where a missing initializer means the variable holds random garbage. For 6, I meant that people want to have "logical const", and so want a way out of transitive const.
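
For point (2), a small D illustration of the contrast (standard default-initialization semantics, no assumptions beyond that):

```d
void main() @system
{
    int a;           // always 0 in D; indeterminate garbage in C
    double d;        // double.nan, so use-before-set is loud, not silent
    int[4] b = void; // garbage is strictly opt-in via `= void`
    b[] = 0;         // and the programmer must then establish a value
}
```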
Jun 17 2021
prev sibling next sibling parent Ogi <ogion.art gmail.com> writes:
On Thursday, 17 June 2021 at 02:02:58 UTC, Walter Bright wrote:
 What they all have in common is they're programming
 language crack cocaine.

 By that I mean that these features make programming faster
 and easier. It's like that first hit of crack enables
 a superpower where prodigious quantities of code can be
 quickly churned out. Wow! What could be wrong with that?
No, Walter, the actual crack cocaine here is function-level `@trusted`. You want to call this function from `@safe` code? Just slap `@trusted` on it! Problem solved, no thinking required.
Jun 17 2021
prev sibling next sibling parent reply IGotD- <nise nise.com> writes:
On Thursday, 17 June 2021 at 02:02:58 UTC, Walter Bright wrote:
 [...]
So let's say we keep it as it is, with @safe, @trusted, @system at the function level. How is this going to work with FFI?

With ImportC, if you are going to be able to call C functions from @safe code, then you need to designate them either @safe or @trusted. Now, many C functions actually want a pointer, like strings for example, which might be potentially harmful (probably a cast required). Either you must create @trusted trampolines that work with D primitives, or C functions can only be called from @system code. In this particular case, @system blocks could be useful if you don't want to create trampoline code for many C functions.

How @safe, @trusted, @system work with FFI is not documented here:

https://dlang.org/spec/memory-safe-d.html

and I think it needs to be defined.
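
A hedged sketch of such a trampoline (`cLength` is hypothetical; `strlen` is declared by hand so the example is self-contained):

```d
// A thin @trusted wrapper converts D primitives into what the C side
// expects, so @safe code never touches raw pointers itself.
extern (C) size_t strlen(scope const char* s) @system pure nothrow @nogc;

size_t cLength(string s) @trusted
{
    import std.string : toStringz; // makes a '\0'-terminated copy when needed
    return strlen(s.toStringz);
}

void caller() @safe
{
    assert(cLength("hello") == 5);
}
```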
Jun 17 2021
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/17/2021 2:56 AM, IGotD- wrote:
 How @safe, @trusted, @system work with FFI is not documented here.
 
 https://dlang.org/spec/memory-safe-d.html
 
 and I think it needs to be defined.
For FFI, the programmer of the import file for it gets to decide. You can see that at work in the core.stdc.* files.

For ImportC, it's going to be @system. But that may get relaxed in the future by having the compiler infer the safety by analyzing the function bodies, if they are provided.
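
A simplified sketch in the style of such an import file (see druntime's core.stdc.string for the real declarations; the signatures here are illustrative): the human writing the binding decides the attributes.

```d
extern (C) nothrow @nogc @system
{
    // Trust is assigned by hand, declaration by declaration.
    void* memcpy(return scope void* s1, scope const void* s2, size_t n) pure;
    size_t strlen(scope const char* s) pure;
}
```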
Jun 17 2021
parent reply Elronnd <elronnd elronnd.net> writes:
On Thursday, 17 June 2021 at 10:26:07 UTC, Walter Bright wrote:
 For ImportC, it's going to be @system. But that may get relaxed
 in the future by having the compiler infer the safety by
 analyzing the function bodies, if they are provided.
Consider a convenient syntax for asserting the @trusted-ness of ImportC'd functions?
Jun 18 2021
parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/18/2021 3:35 PM, Elronnd wrote:
 Consider a convenient syntax for asserting the @trusted-ness of ImportC'd
 functions?
We already have that. It's DasBetterC!
Jun 18 2021
prev sibling next sibling parent reply Alexandru Ermicioi <alexandru.ermicioi gmail.com> writes:
On Thursday, 17 June 2021 at 02:02:58 UTC, Walter Bright wrote:
 That's for context. The rest will be about the @trusted proposal.

 The question is: Why is @trusted at the function level rather than
 at the statement level? It certainly seems more convenient to apply
 it with statement granularity, and it will save 4 characters of typing
 over the lambda approach. What could be wrong with that?
 And indeed, that so far appears to be the general reaction.

 The idea of putting it at the function level is to force (I know,
 that sounds bad, but indulge me for a moment) the programmer
 to think about the decomposition of programs into safe and unsafe
 code. Ideally, untrusted code should be encapsulated and segregated
 into separate sections of code, with clean, well-defined,
 well-considered, and well thought through interfaces.
Not always possible. Sometimes you have objects that are 90% safe and only 10% not. Having dedicated functions or interfaces for those 10% is just unneeded clutter. How would I even name those methods/interfaces?
 At statement level, one just schlepps @trusted in front and gives it
 no more consideration. It is thoughtlessly applied, the compiler error
 goes away, Mission Accomplished! It might as well be renamed
 the @shaddup attribute. Zero thought is given to carefully crafting
 a safe interface, because a safe interface to it is not required.
 Of course that's tempting.
Truth be told, I gave in to this temptation, though they were one-liners. But still I fear that this temptation is quite great, as not every software engineer is keen on keeping the highest degree of safety and code quality. That is my concern why the current use of @trusted and @trusted lambdas might not be sufficient to make it convenient for the ordinary engineer to use them properly.

Note: there is a better proposal flying around, which makes @trusted code be verified as @safe while allowing @system blocks inside. That, or a derived version of it, might be the best approach here imho.
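
A sketch of how that proposal might read. This is hypothetical syntax, not valid D today:

```d
// HYPOTHETICAL syntax -- does NOT compile with any current D compiler.
// The body of a @trusted function is checked like @safe code, except
// inside explicit @system blocks.
void copy(int[] dst, const int[] src) @trusted
{
    assert(dst.length >= src.length); // checked as if @safe
    @system
    {
        import core.stdc.string : memcpy;
        memcpy(dst.ptr, src.ptr, src.length * int.sizeof);
    }
}
```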
Jun 17 2021
parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/17/2021 3:33 AM, Alexandru Ermicioi wrote:
 Not always possible. Sometimes you have objects that are 90% safe and
 only 10% not. Having dedicated functions or interfaces for those 10% is
 just unneeded clutter. How would I even name those methods/interfaces?
"Not possible" and "unneeded clutter" are unrelated. Anyhow, consider it a challenge to one's organizational skills. I didn't say it was always the easy path. But it's worth making the effort.
 Truth be told, I gave in to this temptation, though they were one-liners.
 But still I fear that this temptation is quite great, as not every
 software engineer is keen on keeping the highest degree of safety and
 code quality. That is my concern why the current use of @trusted and
 @trusted lambdas might not be sufficient to make it convenient for the
 ordinary engineer to use them properly.
The @trusted lambda `(){}()` is indeed bad, but blessing it with new syntax is much worse.
Jun 17 2021
prev sibling parent reply ag0aep6g <anonymous example.com> writes:
On 17.06.21 04:02, Walter Bright wrote:
 The `() @trusted { ... }()` lambda is equally bad. Note that there are no
 parameters to it, hence NO interface. This was discovered as a way to evade
 the intent that @trusted was to be applied to a function interface. It
 had never occurred to me to add semantics to disallow it. Even worse,
 I myself have been seduced into using this trick.
It still has an interface, of course. The surrounding context acts as one large `ref` parameter. Strictly speaking, the programmer must ensure that the @trusted nested function doesn't create unsafe values in the outer function. Everyone conveniently forgets that when writing @trusted nested functions. Just like everyone conveniently forgets that all(!) @trusted functions must have safe interfaces. But a review process that is serious about @trusted could catch those errors and force programmers to use it as intended.

However, even the standard library has more than enough instances of strictly-speaking-incorrect @trusted. So maybe it's time to give users more features that help with writing correct @trusted code. @system variables are one [1]; @system blocks in @trusted functions with special semantics, as proposed earlier in the discussion, are another.

[1] https://github.com/dlang/DIPs/blob/master/DIPs/DIP1035.md
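
A minimal sketch of that failure mode (hypothetical function, using core.stdc malloc): the lambda is fine at its own boundary, yet plants an unsafe value in the enclosing scope for later, nominally @safe, code to trip over.

```d
void f() @safe
{
    int[] a;
    () @trusted {
        import core.stdc.stdlib : malloc;
        auto p = cast(int*) malloc(4 * int.sizeof);
        a = p[0 .. 100]; // lies about the length: 4 ints allocated, 100 claimed
    }();
    a[50] = 1; // passes the @safe checker, writes out of bounds at run time
}
```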
Jun 17 2021
parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/17/2021 4:19 AM, ag0aep6g wrote:
 It still has an interface, of course. The surrounding context acts as one
 large `ref` parameter. Strictly speaking, the programmer must ensure that
 the @trusted nested function doesn't create unsafe values in the outer
 function.
You're right. I made a mistake in not thinking about that when designing @trusted. @trusted lambdas should be `static` so that their interface is forced to be spelled out.
 However, even the standard library has more than enough instances of
 strictly-speaking-incorrect @trusted.
That's right. They should all be re-engineered.
Jun 17 2021
prev sibling next sibling parent vit <vit vit.vit> writes:
On Wednesday, 16 June 2021 at 11:38:54 UTC, RazvanN wrote:
 ...

 Cheers,
 RazvanN

 [1] https://issues.dlang.org/show_bug.cgi?id=17566
 [2] https://github.com/dlang/dlang.org/pull/2260
Yes, @trusted lambdas are ugly but necessary in templates where @safe/@system is inferred.

```d
struct Foo{
    int test() @safe{
        return 1;
    }
}

struct Bar{
    int test() @system{
        return 2;
    }
}

void system_fn() @system{
}

int template_fn(T)(T x){
    () @trusted {
        system_fn();
    }();

    return x.test();
}

void main() @safe{
    template_fn(Foo.init);  // ok
    template_fn(Bar.init);  // Error: `@safe` function `D main` cannot call `@system` function
}
```
Jun 16 2021
prev sibling next sibling parent jfondren <julian.fondren gmail.com> writes:
On Wednesday, 16 June 2021 at 11:38:54 UTC, RazvanN wrote:
 What do you think?
https://github.com/dlang/phobos/blob/master/std/array.d#L283

```d
auto temp = str.toUTF32;
/* Unsafe cast. Allowed because toUTF32 makes a new array
   and copies all the elements.
 */
return () @trusted { return cast(CopyTypeQualifiers!(ElementType!String, dchar)[]) temp; } ();
```

https://doc.rust-lang.org/src/core/ptr/non_null.rs.html#81-84

```rust
// SAFETY: mem::align_of() returns a non-zero usize which is then casted
// to a *mut T. Therefore, `ptr` is not null and the conditions for
// calling new_unchecked() are respected.
unsafe {
    let ptr = mem::align_of::<T>() as *mut T;
    NonNull::new_unchecked(ptr)
}
```
Jun 25 2021
prev sibling parent Per Nordlöw <per.nordlow gmail.com> writes:
On Wednesday, 16 June 2021 at 11:38:54 UTC, RazvanN wrote:
 What do you think?

 Cheers,
 RazvanN

 [1] https://issues.dlang.org/show_bug.cgi?id=17566
 [2] https://github.com/dlang/dlang.org/pull/2260
Yes, please. Very much. And without introducing a new scope, like for `static if`, please.

Furthermore, could we also add an even shorter syntax for @trusted function calls? For instance, `@trusted foo(...);`? If so, that might be an incentive for Walter Bright to reconsider his prerequisite of enforcing extern(C) declarations to be `@trusted` in his safe-by-default DIP, and instead slap @trusted in front of the calls to extern(C) functions made in @safe contexts.
Jul 10 2021