
digitalmars.D - Manu's `shared` vs the @trusted promise

reply ag0aep6g <anonymous example.com> writes:
It took me a while to understand Manu's idea for `shared`, and I suspect 
that it was/is the same for others. At the same time, Manu seems 
bewildered about the objections. I'm going to try and summarize the 
situation. Maybe it can help advance the discussion.


(1) How Does Manu's `shared` Interact with @trusted?

With Manu's `shared`, there is implicit conversion from non-`shared` to 
`shared`. It would essentially become a language rule. For that rule to 
be sound, any access to `shared` data must be @system. And more 
challengingly, @system/@trusted code must be written carefully with the 
new rule in mind.
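
To make that rule concrete, here's a small sketch of the conversion in 
question (hypothetical names; the cast shown is what current D requires, 
and the commented-out call is what would become legal under the proposal):

----
void publish(shared(int)* p) { /* ... */ }

void caller()
{
    auto x = new int;                // a thread-local view of the data
    publish(cast(shared(int)*) x);   // today: the conversion must be spelled out
    // publish(x);                   // under the proposed rule: implicit, no cast
}
----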

(Manu, you might say that the conversion follows from `shared` methods 
being "guaranteed threadsafe", but I think it's easier to reason this 
way. Anyway, potayto potahto.)

The consequence is: In @trusted code, I have to make sure that I have 
exclusive access to any `shared` data that I use. If code that is not 
under my control can obtain a non-`shared` view of the same data, I have 
failed and my @trusted code is invalid.

An example in code (just rehashing code given by Manu):

----
struct Atomic
{
     private int x;

     void incr() shared @trusted
     {
         /* ... atomically increment x ... */
     }

     /* If this next method is here, the one above is invalid. It's the
     responsibility of the author of the @trusted code to make sure
     that this doesn't happen. */

     void badboy() @safe { ++x; } /* NO! BAD! NOT ALLOWED! */
}
----
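
For concreteness, the elided increment could be filled in along these 
lines (a sketch using core.atomic; the exact call is an assumption on my 
part, not something given above):

----
struct Atomic
{
     private int x;

     void incr() shared @trusted
     {
         import core.atomic : atomicOp;
         // inside a `shared` method, x is typed shared(int),
         // so core.atomic can operate on it directly
         atomicOp!"+="(x, 1);
     }
}
----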


(2) What's Wrong with That?

The @trusted contract says that an @trusted function must be safe when 
called from an @safe function. That calling @safe function might be 
located in the same module, meaning it might have the same level of 
access as the @trusted function.

That means, Atomic.incr is invalid. It's invalid whether Atomic.badboy 
exists or not. It's invalid because we can even possibly write an 
Atomic.badboy. That's my interpretation of the spec, at least.

But according to Manu, Atomic.incr is fine as long as there's no 
Atomic.badboy that messes things up. So it looks like we're expected to 
violate the @trusted contract when dealing with Manu's `shared`. But 
when we routinely have to break the rules, then that's a sign that the 
rules are bad.


(3) Maybe It Can Be Made to Work?

There might be a way to implement Atomic without breaking the @trusted 
promise:

----
struct Atomic
{
     shared/*!*/ int x;

     void incr() shared @trusted { /* ... */ }

     /* Now this gets rejected by the compiler: */
     void badboy() @safe { ++x; } /* compiler error */
}
----

With a `shared int x` there's no way that @safe code might access it, so 
the @trusted promise is kept.

Manu, I don't know if marking fields like this is compatible with your 
plans. But it would address the @safe-ty issue, I think.

However, even if it's possible to reconcile Manu's `shared` with @safe 
and @trusted, that doesn't mean it's automatically golden, of course. It 
would be an enormous breaking change that should be well thought-out, 
scrutinized, planned, and executed.
Oct 21 2018
next sibling parent reply Simen Kjærås <simen.kjaras gmail.com> writes:
On Sunday, 21 October 2018 at 22:03:00 UTC, ag0aep6g wrote:
 (2) What's Wrong with That?

 The @trusted contract says that an @trusted function must be 
 safe when called from an @safe function. That calling @safe 
 function might be located in the same module, meaning it might 
 have the same level of access as the @trusted function.

 That means, Atomic.incr is invalid. It's invalid whether 
 Atomic.badboy exists or not. It's invalid because we can even 
 possibly write an Atomic.badboy. That's my interpretation of 
 the spec, at least.

 But according to Manu, Atomic.incr is fine as long as there's 
 no Atomic.badboy that messes things up. So it looks like we're 
 expected to violate the @trusted contract when dealing with 
 Manu's `shared`. But when we routinely have to break the rules, 
 then that's a sign that the rules are bad.
It's invalid only if Atomic.badboy exists. As the writer of Atomic, you should not put badboy in there. If someone else on your dev team put it there, you have bigger problems - do more code reviews, put stuff in smaller modules, etc. Essentially, since the module is the unit of encapsulation, it also needs to be the unit of programmer responsibility.
 (3) Maybe It Can Be Made to Work?

 There might be a way to implement Atomic without breaking the 
  trusted promise:

 ----
 struct Atomic
 {
     shared/*!*/ int x;

     void incr() shared @trusted { /* ... */ }

     /* Now this gets rejected by the compiler: */
     void badboy() @safe { ++x; } /* compiler error */
 }
 ----

 With a `shared int x` there's no way that @safe code might 
 access it, so the @trusted promise is kept.
It's clearly better to mark x as shared, yes. However, I fail to see how this is significantly different from the above - J. Random Newbie can still open that file, remove the 'shared' qualifier on x, and rewrite badboy to be thread-unsafe. If we're going to assume that bad actors have write-access to all files, there's no end to the trouble that can be had. -- Simen
Oct 22 2018
parent reply ag0aep6g <anonymous example.com> writes:
On 22.10.18 10:39, Simen Kjærås wrote:
 On Sunday, 21 October 2018 at 22:03:00 UTC, ag0aep6g wrote:
[...]
 It's invalid only if Atomic.badboy exists.
I don't agree. I prefer the stronger @trusted. As far as I know, the stronger one is the current one.
 Essentially, since the module is the unit of encapsulation, it also 
 needs to be the unit of programmer responsibility.
 
 
[...]
 ----
 struct Atomic
 {
     shared/*!*/ int x;

     void incr() shared  trusted { /* ... */ }

     /* Now this gets rejected by the compiler: */
     void badboy()  safe { ++x; } /* compiler error  */
 }
 ----

 With a `shared int x` there's no way that  safe code might access it, 
 so the  trusted promise is kept.
It's clearly better to mark x as shared, yes. However, I fail to see how this is significantly different from the above - J. Random Newbie can still open that file, remove the 'shared' qualifier on x, and rewrite badboy to be thread-unsafe. If we're going to assume that bad actors have write-access to all files, there's no end to the trouble that can be had.
It's not about bad actors, it's about mistakes and finding them. Of course, you can still make mistakes with the stronger @trusted, but they will be in @trusted code. In the example, if anything's wrong you know that the mistake must be in `incr`, because it's the only @trusted function. You don't even have to look at the @safe code. The compiler can check that it's ok.
Oct 22 2018
parent reply Manu <turkeyman gmail.com> writes:
On Mon, Oct 22, 2018 at 2:21 AM ag0aep6g via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 22.10.18 10:39, Simen Kjærås wrote:
 On Sunday, 21 October 2018 at 22:03:00 UTC, ag0aep6g wrote:
[...]
 It's invalid only if Atomic.badboy exists.
I don't agree. I prefer the stronger @trusted. As far as I know, the stronger one is the current one.
The current one has the critical weakness that it causes **EVERY USER** to write unsafe code, manually casting things to shared.

You're here spouting some fantasy about a bad-actor hacking cruft into Atomic() in druntime... Like, if you're worried about the author of Atomic(T), how about _every user, including the interns_? author:users is a 1:many relationship.

I can't imagine any line of reason that doesn't find it logical that the proper placement of the burden of correctly handling @trusted code should be the one expert threadsafe library author, and not *every user ever*, because that's what the current model prescribes, and the entire point for wasting my breath.
Oct 22 2018
next sibling parent ag0aep6g <anonymous example.com> writes:
On 22.10.18 11:40, Manu wrote:
 On Mon, Oct 22, 2018 at 2:21 AM ag0aep6g via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
[...]
 I don't agree. I prefer the stronger @trusted. As far as I know, the
 stronger one is the current one.
The current one has the critical weakness that it causes **EVERY USER** to write unsafe code, manually casting things to shared.
You're conflating @trusted with `shared`. As I've tried to show, your version of `shared` probably doesn't need the weaker @trusted. It can work with the stronger one. And I'm not trying to defend the current `shared` in any way.
 You're here spouting some fantasy
With these snarky asides, you're making it difficult to argue for your side.
Oct 22 2018
prev sibling parent Atila Neves <atila.neves gmail.com> writes:
On Monday, 22 October 2018 at 09:40:42 UTC, Manu wrote:
 On Mon, Oct 22, 2018 at 2:21 AM ag0aep6g via Digitalmars-d 
 <digitalmars-d puremagic.com> wrote:
 On 22.10.18 10:39, Simen Kjærås wrote:
 On Sunday, 21 October 2018 at 22:03:00 UTC, ag0aep6g wrote:
[...]
 It's invalid only if Atomic.badboy exists.
I don't agree. I prefer the stronger @trusted. As far as I know, the stronger one is the current one.
The current one has the critical weakness that it causes **EVERY USER** to write unsafe code, manually casting things to shared.
Nope:

-------
auto list = /*new?*/ shared MyFancyLockFreeList();
-------

If you want a `shared` something, create it as shared to begin with, there's no need to cast.
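
A minimal sketch of that pattern, with a stub standing in for MyFancyLockFreeList (hypothetical; not code from the post above):

-------
struct MyFancyLockFreeList { /* stub for illustration */ }

void example()
{
    // the instance is shared from the moment it exists; nothing to cast
    auto list = new shared(MyFancyLockFreeList);
    static assert(is(typeof(list) == shared(MyFancyLockFreeList)*));
}
-------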
Oct 22 2018
prev sibling next sibling parent Kagamin <spam here.lot> writes:
On Sunday, 21 October 2018 at 22:03:00 UTC, ag0aep6g wrote:
 With Manu's `shared`, there is implicit conversion from 
 non-`shared` to `shared`. It would essentially become a 
 language rule. For that rule to be sound, any access to 
 `shared` data must be @system. And more challengingly, 
 @system/@trusted code must be written carefully with the new 
 rule in mind.
Well, we have __gshared for that. When code is written carefully and the amount of multithreaded code is architecturally minimized, it becomes manageable; for an example see https://github.com/dlang/druntime/blob/master/src/core/sync/semaphore.d
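
A minimal sketch of what that looks like (my own illustration, not code from semaphore.d):

----
// __gshared: one instance visible to every thread, but the type stays a
// plain int, so the compiler checks nothing; discipline is on the author.
__gshared int counter;

void bump() @system
{
    import core.atomic : atomicOp;
    // cast to shared so core.atomic will accept it
    atomicOp!"+="(*cast(shared(int)*) &counter, 1);
}
----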
Oct 22 2018
prev sibling next sibling parent reply Manu <turkeyman gmail.com> writes:
On Sun, Oct 21, 2018 at 3:05 PM ag0aep6g via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 It took me a while to understand Manu's idea for `shared`, and I suspect
 that it was/is the same for others. At the same time, Manu seems
 bewildered about the objections. I'm going to try and summarize the
 situation. Maybe it can help advance the discussion.


 (1) How Does Manu's `shared` Interact with @trusted?

 With Manu's `shared`, there is implicit conversion from non-`shared` to
 `shared`. It would essentially become a language rule. For that rule to
 be sound, any access to `shared` data must be @system. And more
 challengingly, @system/@trusted code must be written carefully with the
 new rule in mind.
Well, just to be clear, I wouldn't say "access to shared data is @system", I'd say, "there is no access to shared data". The only way to access data members from a shared instance is to cast away shared; and naturally, that is @system. So if this is what you mean, then I agree.
 (Manu, you might say that the conversion follows from `shared` methods
 being "guaranteed threadsafe", but I think it's easier to reason this
 way. Anyway, potayto potahto.)

 The consequence is: In @trusted code, I have to make sure that I have
 exclusive access to any `shared` data that I use.
This isn't strictly true; other accesses are possible, as long as they're atomic and you encapsulate properly, but I'll go with you here...
 If code that is not
 under my control can obtain a non-`shared` view of the same data, I have
 failed and my @trusted code is invalid.
Right.
 An example in code (just rehashing code given by Manu):

 ----
 struct Atomic
 {
      private int x;

      void incr() shared @trusted
      {
          /* ... atomically increment x ... */
atomicIncrement(cast(int*)&x); // <- I added the unsafe call to atomicInc for clarity
      }

      /* If this next method is here, the one above is invalid. It's the
       responsibility of the author of the @trusted code to make sure
      that this doesn't happen. */

       void badboy() @safe { ++x; } /* NO! BAD! NOT ALLOWED! */
 }
 ----
Right, this is the rule that I depend on. It carries the weight of the model. Fortunately, tools of this sort are very few in number, and you probably would never write any of them yourself.
 (2) What's Wrong with That?

 The @trusted contract says that an @trusted function must be safe when
 called from an @safe function. That calling @safe function might be
 located in the same module, meaning it might have the same level of
 access as the @trusted function.

 That means, Atomic.incr is invalid. It's invalid whether Atomic.badboy
 exists or not. It's invalid because we can even possibly write an
 Atomic.badboy. That's my interpretation of the spec, at least.
It's @trusted, not @safe... so I don't think you can say "It's invalid because we can even possibly write an Atomic.badboy" (I would agree to that statement if it were @safe). That's the thing about @trusted, you have to trust the engineer to confirm contextual correctness.

But semantics aside, how and why did you add code to this module? Do you usually add code to druntime or phobos? Assuming you did add code to this module, are you telling me that you don't understand what Atomic() does, and you also did not understand the rules of `shared`? You can't be trusted to write threadsafe code if you don't understand `shared`'s rules.

My point is, do you genuinely believe this is a high risk? When did you last rewrite Atomic(T) because you didn't like the one in druntime? Have you ever heard of a case of that?

I mean, I understand it's _possible_ to violate incr()'s promise, and that's why it's @trusted, and not @safe. But what's the probability of that happening by accident... and would you *honestly* make an argument that this unlikely scenario is more likely to occur than any of your 900 high-level engineers making any sort of mistake with respect to the *use* of shared's current rules, which require unsafe interaction at every call, by every end-user? Most users don't modify druntime.

I think people are drastically over-estimating how much such @trusted code would exist. Everybody fears multithreading, and nobody wants to write data-races. If you lived in a world where we had a model to describe safe multithreading, you would do everything you can to stay inside that playground, and engaging in unsafe threading code would quickly become a pungent stench. I suspect you only think about writing @trusted code so much because that's what the existing implementation of `shared` has trained us all to do.
 But according to Manu, Atomic.incr is fine as long as there's no
 Atomic.badbody that messes things up. So it looks like we're expected to
 violate the  trusted contract when dealing with Manu's `shared`. But
 when we routinely have to break the rules, then that's a sign that the
 rules are bad.
I lost you here... how are you "expected" to violate the contract... and "routinely break the rules"? I'm telling you to NEVER do that. How can you say it's expected, or that it's routine?

There are like, 2 times I can think of:

1. When the person that writes Atomic(T) writes it. I'll do it if you like, and I won't bugger it up.
2. When some smart cookie writes LockFreeQueue(T), and everyone uses it, because nobody would dare ever write that class themselves.

That's almost all the times that you would ever see code like this. There is also Mutex, and Semaphore, but they're trivial.

These modules that you refer to should be very small and tight. Threadsafe promises are encapsulated by the module. Threadsafety is hard, and you do not write a crap-load of code and hope it's all correct. You write the smallest amount of code possible, then you test it. You DON'T go and start messing with foundational threadsafe code, because you're not a moron.

I agree the situation you fear is technically possible, but I think it's very unlikely, and in balance to the risks associated with shared today, which is completely unsafe at the user-facing level, and also completely unregulated (you can access members freely)...
 (3) Maybe It Can Be Made to Work?

 There might be a way to implement Atomic without breaking the @trusted
 promise:

 ----
 struct Atomic
 {
      shared/*!*/ int x;

      void incr() shared @trusted { /* ... */ }

      /* Now this gets rejected by the compiler: */
      void badboy() @safe { ++x; } /* compiler error */
 }
 ----
Yeah, that's probably fine.
 With a `shared int x` there's no way that @safe code might access it, so
 the @trusted promise is kept.

 Manu, I don't know if marking fields like this is compatible with your
 plans. But it would address the @safe-ty issue, I think.
Yeah, that's fine granted my rules.

Another option might be to wrap the volatile member like `x` here in a `shared` property, and only access it through that (a rough sketch follows below). Perhaps it's possible that the compiler could note when a data member is accessed within a shared function, and then make a noise any time that same symbol is accessed directly in a non-shared function in the same module (recommending to use a `shared` property).

I've said on at least 5 other occasions, I'm sure there are a whole lot of options we can explore to assist and improve the probability that the ground-level author doesn't make a mistake (although I am quite confident you'd be wasting your time, because they won't make a mistake like this anyway). That conversation has nothing to do with the validity of the rules though, which is what the other 400 post thread is about.

If you *do* trust the 5-10 @trusted functions in the library, is my scheme sound? If the answer is yes, then we can talk about how to improve our odds.
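
Roughly, that "shared property" wrapping might look like this (a quick sketch with made-up names; nothing that was spelled out in the post):

----
struct Atomic
{
    private int x_;

    // the one blessed way at the raw field from `shared` code; any direct
    // use of x_ elsewhere in the module stands out in review
    private ref shared(int) x() shared return @trusted
    {
        return x_;   // in a `shared` method, x_ is already typed shared(int)
    }

    void incr() shared @trusted
    {
        import core.atomic : atomicOp;
        atomicOp!"+="(x, 1);
    }
}
----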
 However, even if it's possible to reconcile Manu's `shared` with @safe
 and @trusted, that doesn't mean it's automatically golden, of course. It
 would be an enormous breaking change that should be well thought-out,
 scrutinized, planned, and executed.
Sure. But the OP, and every one of the 400 posts later, are trying to determine if the scheme is sound. We can fuss about the details until the cows come home, but I'm finding it impossible to get everyone on the same page in the first place.
Oct 22 2018
parent ag0aep6g <anonymous example.com> writes:
On 22.10.18 11:33, Manu wrote:
 On Sun, Oct 21, 2018 at 3:05 PM ag0aep6g via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
[...]
 It's @trusted, not @safe... so I don't think you can say "It's invalid
 because we can even possibly write an Atomic.badboy" (I would agree to
 that statement if it were @safe).
 That's the thing about @trusted, you have to trust the engineer to
 confirm contextual correctness.
Your definition of @trusted is weaker than mine. I think the one I gave is the one that's in the spec and the one that Walter and Timon work with.
 But semantics aside, how and why did you add code to this module? Do
 you usually add code to druntime or phobos?
 Assuming you did add code to this module, are you telling me that you
 don't understand what Atomic() does, and you also did not understand
 the rules of `shared`? You can't be trusted to write threadsafe code
 if you don't understand `shared`'s rules.
 My point is, do you genuinely believe this is a high risk? When did
 you last rewrite Atomic(T) because you didn't like the one in
 druntime? Have you ever heard of a case of that?
 
 I mean, I understand it's _possible_ to violate incr()'s promise, and
  that's why it's @trusted, and not @safe. But what's the probability of
 that happening by accident... and would you *honestly* make an
 argument that this unlikely scenario is more likely to occur than any
 of your 900 high-level engineers making any sort of mistake with
 respect to the *use* of shared's current rules, which require unsafe
 interaction at every call, by every end-user?
 Most users don't modify druntime.[...]
 I agree the situation you fear is technically possible, but I think
 it's very unlikely,
You're arguing with probability while the other camp is entrenched in fundamentals. The goal is a very strong @safe. All @safe code must be compiler-verifiably safe. With the weaker @trusted, that doesn't hold when the @safe code is next to @trusted code. So the weaker @trusted is unacceptable.

By the way, can you dial down the confrontational rhetoric, please? I'm pretty much trying to help get your point across here. I don't need snarky questions/remarks about my ability to write thread-safe code. Even if I can't write correct code, it's beside the point.
 and in balance to the risks associated with shared
 today, which is completely unsafe at the user-facing level, and also
 completely unregulated (you can access members freely)...
I think most agree it's bad that `shared` data can currently be accessed freely without casting, without atomics, without synchronization. I know that I agree. [...]
 I've said on at least 5 other occasions, I'm sure there are a whole
 lot of options we can explore to assist and improve the probability
 that the ground-level author doesn't make a mistake (although I am
 quite confident you'd be wasting your time, because they won't make a
 mistake like this anyway).
 That conversation has nothing to do with the validity of the rules
 though, which is what the other 400 post thread is about.
The perception is that one has to break the (stronger) @trusted promise in order to do anything with your version of `shared`. And that's perceived as bad. So you have to show that breaking the @trusted promise is not necessary. I've tried to show that. Or you have to show that a weaker @trusted is preferable to the stronger one. Convincing people of that would be a hard task, I think.
 If you *do* trust the 5-10 @trusted functions in the library, is my
 scheme sound?
If you can implement them without breaking the (strong) @trusted promise, then I guess so. If you can't, then no (arguably, depending on how @trusted is defined). [...]
 We can fuss about the details until the cows come home, but I'm
 finding it impossible to get everyone on the same page in the first
 place.
I think we're making progress right here. It seems to me that there are two slightly different definitions of @trusted floating around. Ideally, everyone would agree which one is correct (or more useful). Things might fall into place then.
Oct 22 2018
prev sibling next sibling parent Stanislav Blinov <stanislav.blinov gmail.com> writes:
On Sunday, 21 October 2018 at 22:03:00 UTC, ag0aep6g wrote:
 It took me a while to understand Manu's idea for `shared`, and 
 I suspect that it was/is the same for others...
Three threads one... Three threads two... Three threads three! Sold! Thank you very much, ladies and gentlemen!
Oct 22 2018
prev sibling parent reply Dukc <ajieskola gmail.com> writes:
On Sunday, 21 October 2018 at 22:03:00 UTC, ag0aep6g wrote:
 The @trusted contract says that an @trusted function must be 
 safe when called from an @safe function. That calling @safe 
 function might be located in the same module, meaning it might 
 have the same level of access as the @trusted function.

 That means, Atomic.incr is invalid. It's invalid whether 
 Atomic.badboy exists or not. It's invalid because we can even 
 possibly write an Atomic.badboy. That's my interpretation of 
 the spec, at least.
Frankly, this does not sound credible. According to this rationale, array access should be @system too, because it relies on the array not giving direct access to its length to the user, which would also in itself be @safe.
Oct 22 2018
next sibling parent reply ag0aep6g <anonymous example.com> writes:
On Monday, 22 October 2018 at 11:24:27 UTC, Dukc wrote:
 Frankly, this does not sound credible. According to this 
 rationale, array access should be @system too, because it 
 relies on the array not giving direct access to its length to 
 the user, which would also in itself be @safe.
Arrays are a language builtin. As far as I'm aware, there isn't actually a struct defined in DRuntime for arrays. But maybe I'm wrong. If there is, and if it uses a plain size_t for the length member, then it is breaking the strong @trusted promise, yes.

But having existing exceptions to the rule doesn't mean that the rule is void. We could identify the existing exceptions as bugs and try to fix them. Or we could say that they're a necessary evil, but we don't want to add more evil.

On the other hand, D could of course embrace a weaker @trusted/@safe. That would be up to Walter and Andrei, I guess. As far as I can tell from the other `shared` thread, Walter currently favors a strong @trusted.
Oct 22 2018
parent reply Dukc <ajieskola gmail.com> writes:
On Monday, 22 October 2018 at 14:23:09 UTC, ag0aep6g wrote:
 Arrays are a language builtin. As far as I'm aware, there isn't 
 actually a struct defined in DRuntime for arrays. But maybe 
 I'm wrong. If there is, and if it uses a plain size_t for the 
 length member, then it is breaking the strong @trusted promise, 
 yes.
This does not only apply to builtin arrays. It also applies to RAII or refcounted memory resources made safe with DIP1000
 But having existing exceptions to the rule doesn't mean that 
 the rule is void.
Perhaps you're right. But IMO the proposed struct would definitely be low-level and fundamental enough to warrant an exception to that rule. Note, I'm not saying I support what Manu proposed, just that the argument about strong @trusted should not stop the proposal if we otherwise like it.
Oct 23 2018
parent reply ag0aep6g <anonymous example.com> writes:
On 23.10.18 12:23, Dukc wrote:
 This does not only apply to builtin arrays. It also applies to RAII or 
 refcounted memory resources made safe with DIP1000
Absolutely. And they would be (or are) in violation of a strong @trusted. [...]
 Perhaps you're right. But IMO the proposed struct would definitely be 
 low-level and fundamental enough to warrant an exception to that rule.
Maybe. But then the proposer has to spend time and nerves justifying the exception against fundamentalists like me or the more prestigious Timon Gehr. If marking the fields as `shared` is enough to maintain a strong @trusted, then that's a really simple way of making everyone happy.
 Note, I'm not saying I support what Manu proposed, just that the 
 argument about strong @trusted should not stop the proposal if we 
 otherwise like it.
I guess it comes down to how much value one assigns to a strong @trusted. Myself, I think that a strong @trusted/@safe is really, really nice. And I'd like to think that it can be maintained without any exceptions.

For non-`shared` safety-critical variables (reference count, etc.), there doesn't seem to be a nice solution to the problem, yet. We could mark them with Manu's `shared` just to force a non-@safe cast. But that would be silly, they're not shared in any way.

Instead, maybe we could let @system apply to variables. It would forbid accesses from @safe code. Then (strongly) @trusted code can rely on them.

Without a language change, an idiom like this might do the trick:

----
struct Array(T)
{
    Unsafe!size_t length;
    Unsafe!(T*) ptr;

    ref T opIndex(size_t i) @trusted
    {
        if (i >= length) throw new Error("");
        return ptr[i];
    }
}

struct Unsafe(T)
{
    void[T.sizeof] data;
    ref T get() @system { return * cast(T*) &data; }
    alias get this;
}

@safe unittest
{
    Unsafe!int x;
    int y;
    static assert(!__traits(compiles, x = 42));
    static assert(!__traits(compiles, y = x));
}

@system unittest
{
    Unsafe!int x;
    x = 42;
    assert(x == 42);
    int y = x;
    assert(y == 42);
}
----
Oct 23 2018
next sibling parent Dukc <ajieskola gmail.com> writes:
On Tuesday, 23 October 2018 at 11:24:27 UTC, ag0aep6g wrote:
 Instead, maybe we could let @system apply to variables. It 
 would forbid accesses from @safe code. Then (strongly) @trusted 
 code can rely on them.

 Without a language change, an idiom like this might do the 
 trick:

 [snip]
Sounds good.
Oct 23 2018
prev sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Tuesday, 23 October 2018 at 11:24:27 UTC, ag0aep6g wrote:
 [snip]

 Instead, maybe we could let @system apply to variables. It 
 would forbid accesses from @safe code. Then (strongly) @trusted 
 code can rely on them.

 Without a language change, an idiom like this might do the 
 trick:
 [snip]
We don't need a disruptive, breaking change. @safe currently only applies to functions. Applying to variables is a first step, but ideally you would want it to apply like a UDA to arbitrary scopes as well. That way you can have an @safe part of a function and a @system part of a function and then have the overall thing be @trusted and can easily see which is which. It's really more of a code organization thing than anything else. You could do the same thing now by just breaking apart the function into multiple sub-functions.
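
A quick sketch of that workaround (my own example, not from the post):

----
// The risky step lives in a @system nested function; everything else stays
// ordinary checked code, and only the thin wrapper carries @trusted.
int firstElement(int[] arr) @trusted
{
    static int unchecked(int* p) @system { return *p; }

    assert(arr.length > 0);   // the "@safe-style" part
    return unchecked(arr.ptr);
}
----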
Oct 23 2018
prev sibling parent reply Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Monday, 22 October 2018 at 11:24:27 UTC, Dukc wrote:
 Frankly, this does not sound credible. According to this 
 rationale, array access should be @system too, because it 
 relies on the array not giving direct access to its length to 
 the user, which would also in itself be @safe.
For reading, it's a size_t so it can be done with atomics; writing, OTOH, is a property that calls a function to reallocate if need be. Reallocation obviously needs to be locked.
Oct 22 2018
parent reply Dukc <ajieskola gmail.com> writes:
On Monday, 22 October 2018 at 14:49:13 UTC, Nicholas Wilson wrote:
 On Monday, 22 October 2018 at 11:24:27 UTC, Dukc wrote:
 [snip]
For reading, it's a size_t so it can be done with atomics; writing, OTOH, is a property that calls a function to reallocate if need be. Reallocation obviously needs to be locked.
True, but I meant that the very concept of an array having a length member violates the strong @trusted rule referred to by the thread author. That's because somebody could add a @safe function to the module defining an array that makes the array unsafe to use (if the array is a struct in DRuntime, that is; I'm not sure about that).
Oct 23 2018
parent reply Neia Neutuladh <neia ikeran.org> writes:
On Tue, 23 Oct 2018 10:31:25 +0000, Dukc wrote:

 On Monday, 22 October 2018 at 14:49:13 UTC, Nicholas Wilson wrote:
 On Monday, 22 October 2018 at 11:24:27 UTC, Dukc wrote:
 [snip]
For reading, it's a size_t so it can be done with atomics; writing, OTOH, is a property that calls a function to reallocate if need be. Reallocation obviously needs to be locked.
True, but I meant that the very concept of an array having a length member violates the strong @trusted rule referred to by the thread author. That's because somebody could add a @safe function to the module defining an array that makes the array unsafe to use (if the array is a struct in DRuntime, that is; I'm not sure about that).
Altering the length of a builtin array calls a runtime function _d_arraysetlengthT to reallocate it. And they don't have a .tupleof property. So builtin arrays are safe.
Oct 23 2018
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 23.10.18 17:37, Neia Neutuladh wrote:
 On Tue, 23 Oct 2018 10:31:25 +0000, Dukc wrote:
 
 On Monday, 22 October 2018 at 14:49:13 UTC, Nicholas Wilson wrote:
 On Monday, 22 October 2018 at 11:24:27 UTC, Dukc wrote:
 [snip]
For reading, it's a size_t so it can be done with atomics; writing, OTOH, is a property that calls a function to reallocate if need be. Reallocation obviously needs to be locked.
True, but I meant that the very concept of an array having a length member violates the strong @trusted rule referred to by the thread author. That's because somebody could add a @safe function to the module defining an array that makes the array unsafe to use (if the array is a struct in DRuntime, that is; I'm not sure about that).
Altering the length of a builtin array calls a runtime function _d_arraysetlengthT to reallocate it. And they don't have a .tupleof property. So builtin arrays are safe.
What he is saying is, you could add some @safe code to the druntime module that defines the dynamic array struct. Then, within this code, DMD would consider independent assignments to the length and ptr members @safe, even though this is not the case. Therefore, @safe is broken in druntime.
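
To make that concrete, a sketch of the hazard (hypothetical layout; built-in arrays are not actually declared as a struct like this):

----
struct DArray(T)
{
    size_t length;
    T* ptr;
}

// Same-module @safe code: the compiler sees an ordinary field write and
// accepts it, yet afterwards `length` lies about what `ptr` really holds.
void corrupt(ref DArray!int a) @safe
{
    a.length += 10;
}
----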
Oct 23 2018
next sibling parent Neia Neutuladh <neia ikeran.org> writes:
On Tue, 23 Oct 2018 22:25:55 +0200, Timon Gehr wrote:
 What he is saying is, you could add some @safe code to the druntime
 module that defines the dynamic array struct. Then, within this code,
 DMD would consider independent assignments to the length and ptr members
 @safe, even though this is not the case. Therefore, @safe is broken in
 druntime.
Yes, the principle is quite reasonable. Anything that can access a non-@safe interface of a thing needs to be carefully vetted to make sure it's valid as @trusted code, and that means looking at a whole module at once. So put your @trusted code in separate modules insofar as possible. In this particular case, though, druntime functions are generally @trusted rather than @safe, and arrays are defined by the compiler and not as structs in druntime.
Oct 23 2018
prev sibling parent Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Tuesday, 23 October 2018 at 20:25:55 UTC, Timon Gehr wrote:
 On 23.10.18 17:37, Neia Neutuladh wrote:
 Altering the length of a builtin array calls a runtime function
 _d_arraysetlengthT to reallocate it. And they don't have a 
 .tupleof
 property. So builtin arrays are safe.
 
What he is saying is, you could add some @safe code to the druntime module that defines the dynamic array struct. Then, within this code, DMD would consider independent assignments to the length and ptr members @safe, even though this is not the case. Therefore, @safe is broken in druntime.
I would have assumed that they would be behind properties which would not be `shared` (so uncallable on shared slices), and their internal logic would be @trusted.
Oct 23 2018