
digitalmars.dip.ideas - Temporally safe by default

reply Richard (Rikki) Andrew Cattermole <richard@cattermole.co.nz> writes:
As part of the type state analysis work, I've been thinking about whether we would want to keep the old ``@safe`` available for new editions to use.
I suspect that the answer is yes.
Not everyone wants to use DIP1000 or temporal safety.

So what I am thinking is also an answer to ``@safe`` by default.

Introduce a new level to SafeD, ``@tsafe``, for temporally safe.

Move to disable DIP1000 in ``@safe``.
Treat it as ``@trusted + @somelints`` instead.

This also answers another question: how do you pass around old ``@safe`` in new editions?

The default for all functions with bodies would be ``@tsafe``; if you see any of these four attributes, it indicates review is required.
Mar 29
next sibling parent reply Dukc <ajieskola@gmail.com> writes:
On Saturday, 30 March 2024 at 02:28:02 UTC, Richard (Rikki) 
Andrew Cattermole wrote:
 As part of the type state analysis work, I've been thinking about whether we would want to keep the old ``@safe`` available for new editions to use.
 I suspect that the answer is yes.
 Not everyone wants to use DIP1000 or temporal safety.

 So what I am thinking is also an answer to ``@safe`` by default.

 Introduce a new level to SafeD, ``@tsafe``, for temporally safe.

 Move to disable DIP1000 in ``@safe``.
 Treat it as ``@trusted + @somelints`` instead.

 This also answers another question: how do you pass around old ``@safe`` in new editions?

 The default for all functions with bodies would be ``@tsafe``; if you see any of these four attributes, it indicates review is required.
Can you please write a code example or two? Doesn't have to be anything with a nailed-down syntax, but it's really hard to be sure what you're suggesting without one.
Apr 03
parent reply "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 04/04/2024 7:50 AM, Dukc wrote:
 On Saturday, 30 March 2024 at 02:28:02 UTC, Richard (Rikki) Andrew 
 Cattermole wrote:
 As part of the type state analysis work, I've been thinking about whether we would want to keep the old ``@safe`` available for new editions to use.
 I suspect that the answer is yes.
 Not everyone wants to use DIP1000 or temporal safety.

 So what I am thinking is also an answer to ``@safe`` by default.

 Introduce a new level to SafeD, ``@tsafe``, for temporally safe.

 Move to disable DIP1000 in ``@safe``.
 Treat it as ``@trusted + @somelints`` instead.

 This also answers another question: how do you pass around old ``@safe`` in new editions?

 The default for all functions with bodies would be ``@tsafe``; if you see any of these four attributes, it indicates review is required.
Can you please write a code example or two? Doesn't have to be anything with a nailed-down syntax, but it's really hard to be sure what you're suggesting without one.
Okay, so you need something a bit more big-picture for temporally safe?

My way of working means I would need to solve isolated and then temporally safe before I can do that. It might be a while before it all comes together for me to be able to do it concretely.
Apr 03
parent reply Dukc <ajieskola@gmail.com> writes:
On Thursday, 4 April 2024 at 06:45:44 UTC, Richard (Rikki) Andrew 
Cattermole wrote:

 Can you please write a code example or two? Doesn't have to be 
 anything with a nailed-down syntax, but it's really hard to be 
 sure what you're suggesting without one.
Okay so you need something a bit bigger picture for temporally safe? My way of working would mean I would need to solve isolated and then temporally safe before I can do that. It might be a while before it all comes together for me to be able to do it concretely.
I mean, given you're posting this as a new thread in the DIP ideas forum, I'm assuming you have a language improvement idea to present and want some informal feedback on it. But I don't get from your posts what exactly you're proposing, only that it's some sort of improvement to `@safe`.
Apr 04
parent reply "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 05/04/2024 1:32 AM, Dukc wrote:
 On Thursday, 4 April 2024 at 06:45:44 UTC, Richard (Rikki) Andrew 
 Cattermole wrote:
 
 Can you please write a code example or two? Doesn't have to be 
 anything with a nailed-down syntax, but it's really hard to be sure 
 what you're suggesting without one.
Okay so you need something a bit bigger picture for temporally safe? My way of working would mean I would need to solve isolated and then temporally safe before I can do that. It might be a while before it all comes together for me to be able to do it concretely.
I mean, given you're posting this as a new thread in the DIP ideas forum, I'm assuming you have a language improvement idea to present and want some informal feedback on it. But I don't get from your posts what exactly you're proposing, only that it's some sort of improvement to `@safe`.
Okay, yes, you want a big-picture overview.

Temporal safety is about making sure one thread doesn't stomp all over memory that another thread also knows about. So this is locking, ensuring only one thread has a reference to it, atomics, etc.

Moving us over to this without the edition system would break everyone's code, so it has to be based upon editions.

So the question of this thread is all about how we annotate our code to indicate it's temporally safe, and how that maps into older editions' view of what @safe is. There are at least three different solutions to this that I have come up with.

Escape analysis wrt. DIP1000 is only a very small part of such a system, but it all plays together to give us what is known as program security. Program security is what the CS field has been working towards since the 80s. It's about guaranteeing that a program will work as expected. We are nowhere near that; it'll need a proof assistant for full-blown program security. However, temporal safety will get us most of the way there.

I know Walter saw the writing on the wall for PLs that don't have it roughly five years ago. Hence DIP1000 and @live. However, I don't know how far along the path he has gone. I'm waiting on him talking with Adam Wilson this month to find that out, so I know how to proceed with type state analysis (which is a building block to make it all work nicely together).
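For concreteness, a minimal sketch of the kind of hazard temporal safety is meant to rule out: two threads both know about the same mutable memory and race on it (the names here are made up purely for illustration).

```d
import core.thread : Thread;

__gshared int[] data;   // visible to every thread, with no guarantees at all

void main()
{
    data = new int[](1);

    auto writer = new Thread({ data[0] = 1; });  // another thread now knows about data
    writer.start();

    data[0] = 2;         // racing write from the main thread
    writer.join();
    // the final value of data[0] depends on scheduling: a data race
}
```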
Apr 04
parent reply Dukc <ajieskola@gmail.com> writes:
On Thursday, 4 April 2024 at 12:42:07 UTC, Richard (Rikki) Andrew 
Cattermole wrote:
 On 05/04/2024 1:32 AM, Dukc wrote:
 On Thursday, 4 April 2024 at 06:45:44 UTC, Richard (Rikki) 
 Andrew Cattermole wrote:
 
 Can you please write a code example or two? Doesn't have to 
 be anything with a nailed-down syntax, but it's really hard 
 to be sure what you're suggesting without one.
Okay so you need something a bit bigger picture for temporally safe? My way of working would mean I would need to solve isolated and then temporally safe before I can do that. It might be a while before it all comes together for me to be able to do it concretely.
I mean, given you're posting this as a new thread in the DIP ideas forum, I'm assuming you have a language improvement idea to present and want some informal feedback on it. But I don't get from your posts what exactly you're proposing, only that it's some sort of improvement to `@safe`.
Okay yes, you want some big picture overview.
Thanks.
 Temporal safety is about making sure one thread doesn't stomp all over memory that another thread also knows about.

 So this is locking, ensuring only one thread has a reference to it, atomics, etc.

 Moving us over to this without the edition system would break everyone's code, so it has to be based upon editions.

 So the question of this thread is all about how we annotate our code to indicate it's temporally safe, and how that maps into older editions' view of what @safe is. There are at least three different solutions to this that I have come up with.
Isn't `shared` just for this? As far as I can tell, you can define a data-structure struct that, when `shared`, allows multiple threads to access it, works from 100% `@safe` client code, and doesn't allow any data races to happen.

Of course, actually implementing the data structure is challenging, just as it is for a DIP1000-using reference-counted data structure.
Apr 05
parent reply "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 05/04/2024 10:11 PM, Dukc wrote:
     Temporal safety is about making sure one thread doesn't stomp all
     over memory that another thread also knows about.
 
     So this is locking, ensuring only one thread has a reference to it, atomics, etc.
 
     Moving us over to this without the edition system would break
     everyone's code. So it has to be based upon this.
 
     So the question of this thread is all about how we annotate our code to indicate it's temporally safe, and how that maps into older editions' view of what @safe is. There are at least three different solutions to this that I have come up with.
 
 Isn't `shared` just for this? As far as I can tell, you can define a data-structure struct that, when `shared`, allows multiple threads to access it, works from 100% `@safe` client code, and doesn't allow any data races to happen.

 Of course, actually implementing the data structure is challenging, just as it is for a DIP1000-using reference-counted data structure.
``shared`` doesn't provide any guarantees. At best it gives us a story that makes us think we have it solved. A very comforting story, in fact. So much so that people want to use it. Which also happens to make it a very bad language feature that I want to see removed. Lies like this do not improve the D experience, and they do not allow optimizations to occur. See all the examples of people having to cast on/off ``shared`` to pass memory between threads.

With temporal safety we'd have some sort of immutable reference that enables us to transfer ownership of an object across functions/threads, with the guarantee that it isn't accessible by somebody else. There should also be some determination of whether a function is temporally safe if it only accesses atomics, has been synchronized, only uses immutable references, or only immutable data.

Note: "immutable reference" does not refer to the ``immutable`` type qualifier, but to a language feature where the reference to memory is limited to a single location.
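The cast-on/cast-off pattern being criticized looks roughly like this with `std.concurrency` (a sketch, not anyone's real code):

```d
import std.concurrency : spawn, send, receiveOnly, thisTid, Tid;

void worker(Tid owner)
{
    // cast shared back off to actually use the array
    auto arr = cast(int[]) receiveOnly!(shared(int[]))();
    arr[0] = 42;                 // the type system can no longer help here
    owner.send(true);
}

void main()
{
    auto data = new int[](4);
    auto tid = spawn(&worker, thisTid);
    tid.send(cast(shared(int[])) data);  // cast shared on just to get past send()
    receiveOnly!bool();
    data[0] = 1;                 // main still knows about the same memory
}
```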
Apr 05
next sibling parent reply Dukc <ajieskola@gmail.com> writes:
On Friday, 5 April 2024 at 09:59:58 UTC, Richard (Rikki) Andrew 
Cattermole wrote:
 See all the examples of people having to cast on/off ``shared`` 
 to pass memory between threads.
That's what it's like if you try to share plain arrays, and that's how it should be. Manipulating a shared data structure in a temporally safe way is complicated, so to access `shared` data it makes sense that you need to either explicitly give up temporal safety (cast) or do it the hard way (`core.atomic`).

But if you had the data-structure struct, you wouldn't have to do either. It would have a `shared` constructor and `shared` member functions to manipulate all the data, with all the ugly atomics and/or casting getting done in the struct implementation. It'd let you copy part of itself to your thread-local storage and inspect it there at your leisure. It'd let you lock part of itself for a time when you wish to do an in-place update (during which the data in question would be typed as thread-local and guarded against escape with DIP1000). And so on.
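Roughly the shape of struct being described here, sketched as a tiny mutex-guarded counter (illustrative only; the type and its methods are made up):

```d
import core.sync.mutex : Mutex;

/// All the locking and casting lives inside the struct,
/// so client code stays @safe and never touches it.
struct SharedCounter
{
    private Mutex mtx;
    private int count;

    this(int start) shared @trusted
    {
        mtx = new shared Mutex();
        count = start;
    }

    void increment() shared @trusted
    {
        mtx.lock_nothrow();
        scope (exit) mtx.unlock_nothrow();
        auto local = cast(int*) &count;   // only valid while the lock is held
        *local += 1;
    }

    int snapshot() shared @trusted
    {
        mtx.lock_nothrow();
        scope (exit) mtx.unlock_nothrow();
        return *cast(int*) &count;        // copy out to thread-local
    }
}

@safe void clientCode(ref shared SharedCounter c)
{
    c.increment();                 // no casts, no atomics in client code
    auto copy = c.snapshot();      // inspect a thread-local copy at leisure
}
```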
Apr 05
parent reply "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 05/04/2024 11:24 PM, Dukc wrote:
 On Friday, 5 April 2024 at 09:59:58 UTC, Richard (Rikki) Andrew 
 Cattermole wrote:
 See all the examples of people having to cast on/off ``shared`` to 
 pass memory between threads.
That's what it's like if you try to share plain arrays. And that's how it should be. Manipulating a shared data structure in a temporally safe way is complicated, so to access `shared` data it makes sense that do that you need to explicitly give up temporal safety (cast) or do it the hard way (`core.atomic`). But if you had the data structure struct, you wouldn't have to do either. It would have a `shared` constructor and `shared` member functions to manipulate all the data, all the ugly atomics and/or casting getting done in the struct implementation. It'd let you to copy part of itself to your thread local storage and inspect there at your leisure. It'd let you lock part of itself for a time when you wish to do an in-place update (during which the data in question would be typed as thread-local and guarded against escape with DIP1000). And so on.
You are assuming the data structure is the problem. It isn't; the problem is who knows about the data structure, its arguments, and its return value. Across the entire program, on every thread, in every global, in every function call frame.

``shared`` does not offer any guarantees about references: how many there are, or what threads they are on. None of it. It's fully up to the programmer to void it if they choose to do so in normal ``@safe`` code.

If you cannot prove aliasing at compile time, and in doing so enable optimizations, it isn't good enough for temporal safety.
Apr 05
parent reply Dukc <ajieskola@gmail.com> writes:
On Friday, 5 April 2024 at 10:34:24 UTC, Richard (Rikki) Andrew 
Cattermole wrote:
 On 05/04/2024 11:24 PM, Dukc wrote:
 On Friday, 5 April 2024 at 09:59:58 UTC, Richard (Rikki) 
 Andrew Cattermole wrote:
 See all the examples of people having to cast on/off 
 ``shared`` to pass memory between threads.
That's what it's like if you try to share plain arrays. And that's how it should be. Manipulating a shared data structure in a temporally safe way is complicated, so to access `shared` data it makes sense that do that you need to explicitly give up temporal safety (cast) or do it the hard way (`core.atomic`). But if you had the data structure struct, you wouldn't have to do either. It would have a `shared` constructor and `shared` member functions to manipulate all the data, all the ugly atomics and/or casting getting done in the struct implementation. It'd let you to copy part of itself to your thread local storage and inspect there at your leisure. It'd let you lock part of itself for a time when you wish to do an in-place update (during which the data in question would be typed as thread-local and guarded against escape with DIP1000). And so on.
You are assuming the data structure is the problem. It isn't, the problem is who knows about the data structure, its arguments and its return value. Across the entire program, on every thread, in every global, in every function call frame.
Why would that be a problem? A `shared` variable does not have to be a global variable. You can instantiate the `shared` data structure with `new`, or even as a local variable, and then pass references to it only to those threads you want to know about it.
 ``shared`` does not offer any guarantees about references: how many there are, or what threads they are on. None of it. It's fully up to the programmer to void it if they choose to do so in normal ``@safe`` code.
My impression is you can't do that, unless the data structure you're using is flawed (well, `dataStruct.tupleof` works to bypass `@safe`ty, but I don't think it's relevant since it probably needs to be fixed anyway). Pseudocode example?
Apr 08
parent reply "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 09/04/2024 7:43 AM, Dukc wrote:
 On Friday, 5 April 2024 at 10:34:24 UTC, Richard (Rikki) Andrew 
 Cattermole wrote:
 ``shared`` does not offer any guarantees about references: how many there are, or what threads they are on. None of it. It's fully up to the programmer to void it if they choose to do so in normal ``@safe`` code.
My impression is you can't do that, unless the data structure you're using is flawed (well, `dataStruct.tupleof` works to bypass `@safe`ty, but I don't think it's relevant since it probably needs to be fixed anyway). Pseudocode example?
```d
class Type {}   // stand-in for whichever data structure you like

void thread1() {
    shared(Type) var = new shared Type;
    sendToThread2(var);

    for(;;) {
        // I know about var!
    }
}

void sendToThread2(shared(Type) var) {
    thread2(var);
}

void thread2(shared(Type) var) {
    for(;;) {
        // I know about var!
    }
}
```

The data structure is entirely irrelevant. Shared provides no guarantees to stop this. No library feature can stop it either, unless you want to check ref counts (which won't work because, ya know, graphs).

You could make it temporally safe with the help of locking, yes. But shared didn't contribute towards that in any way, shape, or form.
Apr 08
next sibling parent reply Sebastiaan Koppe <mail@skoppe.eu> writes:
On Monday, 8 April 2024 at 19:59:55 UTC, Richard (Rikki) Andrew 
Cattermole wrote:
 On 09/04/2024 7:43 AM, Dukc wrote:
 On Friday, 5 April 2024 at 10:34:24 UTC, Richard (Rikki) 
 Andrew Cattermole wrote:
 ``shared`` does not offer any guarantees about references: how many there are, or what threads they are on. None of it. It's fully up to the programmer to void it if they choose to do so in normal ``@safe`` code.
My impression is you can't do that, unless the data structure you're using is flawed (well, `dataStruct.tupleof` works to bypass `@safe`ty, but I don't think it's relevant since it probably needs to be fixed anyway). Pseudocode example?
```d void thread1() { shared(Type) var = new shared Type; sendToThread2(var); for(;;) { // I know about var! } } void sentToThread2(shared(Type) var) { thread2(var); } void thread2(shared(Type) var) { for(;;) { // I know about var! } } ``` The data structure is entirely irrelevant. Shared provided no guarantees to stop this. No library features can stop it either, unless you want to check ref counts (which won't work cos ya know graphs). You could make it temporally safe with the help of locking, yes. But shared didn't contribute towards that in any way shape or form.
I don't think this has much to do with `shared`. You can get into similar situations on a single thread, just not in parallel.

Solving it requires tracking lifetimes, independent of whether that is across threads. D has very little in the way of an answer here. It has the GC for automatic lifetime, smart pointers, and then shoot-in-the-foot manual memory management.

Well, and there is `scope` with dip1000 of course, which works surprisingly well but requires phrasing things in a more structured manner.

Curious what you have been cooking.
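For reference, a small sketch of that more structured dip1000 phrasing: `scope` parameters let a callee borrow a reference without letting it escape (compile with `-preview=dip1000`; the function names are made up).

```d
// dmd -preview=dip1000
int* leaked;

@safe int readOnly(scope int* p)
{
    // leaked = p;          // error with dip1000: scope variable `p` may not escape
    return *p;              // borrowing and reading is fine
}

@safe int onStack()
{
    int local = 42;
    return readOnly(&local);   // ok: the address cannot outlive this frame
}
```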
Apr 08
parent "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 09/04/2024 10:09 AM, Sebastiaan Koppe wrote:
 On Monday, 8 April 2024 at 19:59:55 UTC, Richard (Rikki) Andrew 
 Cattermole wrote:
 On 09/04/2024 7:43 AM, Dukc wrote:
 On Friday, 5 April 2024 at 10:34:24 UTC, Richard (Rikki) Andrew 
 Cattermole wrote:
 ``shared`` does not offer any guarantees about references: how many there are, or what threads they are on. None of it. It's fully up to the programmer to void it if they choose to do so in normal ``@safe`` code.
My impression is you can't do that, unless the data structure you're using is flawed (well, `dataStruct.tupleof` works to bypass `@safe`ty, but I don't think it's relevant since it probably needs to be fixed anyway). Pseudocode example?
```d void thread1() {     shared(Type) var = new shared Type;     sendToThread2(var);     for(;;) {         // I know about var!     } } void sentToThread2(shared(Type) var) {     thread2(var); } void thread2(shared(Type) var) {     for(;;) {         // I know about var!     } } ``` The data structure is entirely irrelevant. Shared provided no guarantees to stop this. No library features can stop it either, unless you want to check ref counts (which won't work cos ya know graphs). You could make it temporally safe with the help of locking, yes. But shared didn't contribute towards that in any way shape or form.
I don't think this has much to do with shared. You can get in similar situations on a single thread, just not in parallel.
This is exactly my point. Shared doesn't offer any guarantees here; it doesn't help us achieve temporal safety.
 Solving it requires tracking lifetimes, independent of whether that is 
 across threads.
Yes. From allocation all the way until it is no longer known about in the program graph.
 D has very little in the way of an answer here. It has the GC for auto 
 lifetime, smart pointers and then shoot-in-the-foot manual memory 
 management.
 
 Well, and there is scope with dip1000 of course, which works 
 surprisingly well but requires phrasing things in a more structured manner.
If you think of DIP1000 as a tool that allows lifetime tracking to occur, it works quite well. If you think of it as lifetime tracking itself, you're going to have a very bad time of it. I really want us to move away from thinking it's lifetime tracking, because that isn't what escape analysis is.

I super duper want reference counting in the language. An RC object shouldn't be beholden to scope placed onto it. That is hell. I'd drop DIP1000 instead of RC if we don't get that.
 Curious to what you have been cooking.
We'll need a multi-pronged approach. The big feature I want is isolated from Midori (or something like it). Coupled with DIP1000, that'll be a little bit like a borrow checker, except a bit more flexible, while still maintaining the guarantees across all of SafeD, including ``@system``, via the use of a type qualifier that works outside of ``@safe``.

https://joeduffyblog.com/2016/11/30/15-years-of-concurrency/

However, I've found that you need the type state analysis DFA to enforce its guarantees. So you might as well go all in, have type state analysis, and get a whole lot of other goodies while you're at it.

Of course, you end up needing something to converge atomics, locking, immutable, and isolated all together, which I haven't done any design work on. Not there yet.

If I can't get type state analysis in, there isn't much point in continuing.
Apr 08
prev sibling next sibling parent reply Dukc <ajieskola@gmail.com> writes:
On Monday, 8 April 2024 at 19:59:55 UTC, Richard (Rikki) Andrew 
Cattermole wrote:
 On 09/04/2024 7:43 AM, Dukc wrote:
 My impression is you can't do that, unless the data structure you're using is flawed (well, `dataStruct.tupleof` works to bypass `@safe`ty, but I don't think it's relevant since it probably needs to be fixed anyway). Pseudocode example?
```d void thread1() { shared(Type) var = new shared Type; sendToThread2(var); for(;;) { // I know about var! } } void sentToThread2(shared(Type) var) { thread2(var); } void thread2(shared(Type) var) { for(;;) { // I know about var! } } ``` The data structure is entirely irrelevant. Shared provided no guarantees to stop this.
I see. I thought you were talking about violating the assumptions of the data structure, but you're worried about sharing it between threads. Well, if you don't want to share something between threads, that's exactly when you *don't* make it `shared`, and mission accomplished!

However, I think you're advocating for something like `@live`. That is, a type that's destroyed in RAII fashion but that can be passed from one function to another without risking dangling references, and you also want it to work between threads. This is where DIP1040, or if it fails then DIP1014, steps in. You can have a RAII data structure by disabling the copy constructor, but ownership can still be passed around with `.move`. With either of these two DIPs you can insert code to be executed each time the struct is moved, if you have to.

As for doing this between two threads, you want a data structure that both handles `shared` data and is RAII. The struct handle itself can be shared or thread-local. Because it's RAII, only one thread can own it. Because it handles shared data, it can be moved between threads. It still needs locking when borrowed, because the ownership could be passed to another thread before the borrowing ends.
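A small sketch of the move-only RAII pattern referred to here, using what already works today (no DIP1040/DIP1014 move hooks; the type is made up):

```d
import core.lifetime : move;

struct Owner
{
    int* payload;

    @disable this(this);      // no copies: at most one owner at any time

    ~this()
    {
        // deterministic cleanup runs when the single owner goes out of scope
    }
}

void consume(Owner o) { /* o is destroyed at the end of this call */ }

void example()
{
    auto a = Owner(new int(42));
    // consume(a);            // error: struct Owner is not copyable
    consume(move(a));         // ownership is handed over explicitly
}
```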
 No library features can stop it either, unless you want to 
 check ref counts (which won't work cos ya know graphs).
Yes, ref counting is required AFAIK if borrowing is allowed as it probably is. Going to write a separate reply to this.
 You could make it temporally safe with the help of locking, 
 yes. But shared didn't contribute towards that in any way shape 
 or form.
You're partially right: it's possible to design a `@safe` shared data structure entirely without `shared`. What `shared` enables is that a bigger part of the implementation of the structure can be `@safe`, if still very low-level and bug-prone.
Apr 09
parent "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 09/04/2024 7:17 PM, Dukc wrote:
 However, I think you're advocating for something like `@live`. That is, a type that's destroyed in RAII fashion but that can be passed from one function to another without risking dangling references, and you also want it to work between threads.
``@live`` is a lint over a single function, not over the entire program. Outside of a known, enforced temporally safe function, yes, it's going to have to be type-qualifier-based to give those guarantees. Hence isolated, which requires the DFA provided by type state analysis to work (wrt. reachability). If you want to see what a less well-made version of this looks like, look at ``@live`` and its holes.

Of course, when you're in temporally ``@safe`` code, you really want to know that the compiler won't let you do the wrong thing, regardless of whether it's based upon isolated, atomics, locking, etc.

Bringing it all together will allow people to pick the right tool for the job, rather than a specific one that other languages like Rust enforce.
Apr 09
prev sibling parent reply Dukc <ajieskola@gmail.com> writes:
On Monday, 8 April 2024 at 19:59:55 UTC, Richard (Rikki) Andrew 
Cattermole wrote:
 No library features can stop it either, unless you want to 
 check ref counts (which won't work cos ya know graphs).
I think you are losing sight of the big picture. The big picture is that we already have a fine safety mechanism for the majority of cases: the garbage collector! It also just works with graphs.

Ownership and/or reference counting are needed for some cases. I suspect they're more important than "systems programming is niche" arguments suggest, because you don't always use them for RAM, but also for things like [file or thread handles](https://theartofmachinery.com/2018/12/05/gc_not_enough.html). However, managing access to resources other than RAM is a lot easier, since you don't have to deal with the possibility of escaped pointers to the resource you're protecting, and it is already well supported by the present language.

What we're left with are the cases where you need to manage RAM but the GC won't do. This is a niche area. D needs to support it since systems programming is in its domain, but it isn't a case worth optimising much. Since forever, D has supported doing these by simply marking the manual memory-handling code `@system`. This is already a reasonable standard considering how rarely you need to do it. Where D is currently going, it does better, with DIP1000, `@system` variables and move constructors enabling, in principle, safely doing most of what Rust can do.

It's true that ref counting and borrowing are more ergonomic in Rust than in D, since the language is built around them, and probably that enables doing some things safely that D doesn't. But so what? Rust does better in a case which is far from your worst problem in D, even if you always dabble at the low level. If D implements a type system addition to match Rust, it complicates the whole language but improves it only in a very narrow case. Also, Rust is first and foremost a systems programming language; D is a hybrid systems and application programming language. What is worthwhile for one language isn't always worthwhile for the other.

As for "ref counting doesn't work because of graphs" - I'm not sure, but probably this can be worked around somehow without forcing the client to use `@system` or the GC. Rust manages it somehow, after all. But even if that isn't the case, it's a small corner case that doesn't affect the big picture.
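For instance, a minimal illustration of the classic case where the tracing GC "just works" but plain reference counting leaks:

```d
class Node { Node next; }

void makeCycle()
{
    auto a = new Node;
    auto b = new Node;
    a.next = b;
    b.next = a;   // a reference cycle: pure refcounting would never free these,
                  // but the tracing GC reclaims both once they become unreachable
}
```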
Apr 09
parent reply Bruce Carneal <bcarneal@gmail.com> writes:
On Tuesday, 9 April 2024 at 08:16:22 UTC, Dukc wrote:
 On Monday, 8 April 2024 at 19:59:55 UTC, Richard (Rikki) Andrew 
 Cattermole wrote:
 No library features can stop it either, unless you want to 
 check ref counts (which won't work cos ya know graphs).
I think you are losing sight of the big picture. The big picture is that we already have a fine safety mechanism for the majority of cases: the garbage collector! It also just works with graphs. ...
One way to frame the "big picture" of language design is as an optimization problem across multiple concerns: {safety, performance, testability, scalability, deployability, ease-of-use}. Traditionally, language designers have weighted some of these concerns much more heavily than others. The D community at large has not, leading to, IMO, a very enjoyable and improving language.

We're not in an either/or situation here. We can certainly improve the GC, and we can explore the space between our (improving) GC and malloc()... I'm not an expert in this area, but I do think there are big wins to be had for D here.

On a side note, I find the Mojo approach to memory management appealing. They've certainly embraced the {performance X ease-of-use X safety} trifecta.
Apr 09
parent reply Dukc <ajieskola@gmail.com> writes:
On Tuesday, 9 April 2024 at 14:20:22 UTC, Bruce Carneal wrote:
 We're not in an either/or situation here.  We can certainly 
 improve GC and we can explore the space between our, improving, 
 GC and malloc()...
Yes we can. My point is, since each language feature has a complexity cost, and since deterministic memory management is so rarely needed, any improvement to it needs to have a much better benefit-to-complexity ratio than something that improves the whole language. Any large overhaul of the type system to improve memory management via RAII or ref counting will have a hard time justifying itself, unless it also provides a lot of other benefits.
Apr 10
parent reply Sebastiaan Koppe <mail@skoppe.eu> writes:
On Wednesday, 10 April 2024 at 12:31:03 UTC, Dukc wrote:
 [...] since deterministic memory management is so rarely needed
By whose count? I've found there are plenty of places where they make a difference.
Apr 10
parent Dukc <ajieskola@gmail.com> writes:
On Wednesday, 10 April 2024 at 19:59:46 UTC, Sebastiaan Koppe 
wrote:
 On Wednesday, 10 April 2024 at 12:31:03 UTC, Dukc wrote:
 [...] since deterministic memory management is so rarely needed
By whose count? I've found there are plenty of places where they make a difference.
One clarification: I meant heap memory management. Stack memory objects obviously get freed deterministically, but I didn't mean those. Another clarification: I meant rarely needed compared to the GC, not rarely as in "unlikely that you personally should ever do it".

Both by my own count and by what others have written in this forum. In my own code, I almost always use the GC. While I can imagine doing manual freeing in a few critical places if I want to optimise, it's far, far less often needed than the GC (and would be even less needed if we had a generational GC that performs better). Well, in my old Spasm code where I don't have the GC, I do resort to deterministic heap management fairly often, but a better answer in the long run would be having some sort of portable GC instead of managing the heap by other means.

Even if ref-counted memory were as simple to use as the GC (which it cannot be), you'd want the GC just to be able to use existing D code out of the box. This also applies to a bare-metal project: a scheduler or low-level device driver code might need or want to use other means of memory management for latency or performance reasons, but in all likelihood it's going to be only a minority of the code.

Someone just recently linked a blog post demonstrating how a GC is extremely relevant even in systems programming. Even in a hard real-time system, it can be done by keeping the critical real-time thread outside the world the GC stops, and delegating everything possible to normal GC-controlled threads.
Apr 11
prev sibling parent reply Sebastiaan Koppe <mail@skoppe.eu> writes:
On Friday, 5 April 2024 at 09:59:58 UTC, Richard (Rikki) Andrew 
Cattermole wrote:
 See all the examples of people having to cast on/off ``shared`` 
 to pass memory between threads. With temporal safety we'd have 
 some sort of immutable reference that enables us to transfer 
 ownership of an object across the function/threads. With 
 guarantee that it isn't accessible by somebody else.
Sounds a lot like what I know as unique.
 [...] a language feature where the reference to memory is 
 limited to a single location.
So, unique?
Apr 05
parent reply "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 06/04/2024 10:32 AM, Sebastiaan Koppe wrote:
 On Friday, 5 April 2024 at 09:59:58 UTC, Richard (Rikki) Andrew 
 Cattermole wrote:
 See all the examples of people having to cast on/off ``shared`` to 
 pass memory between threads. With temporal safety we'd have some sort 
 of immutable reference that enables us to transfer ownership of an 
 object across the function/threads. With guarantee that it isn't 
 accessible by somebody else.
Sounds a lot like what I know as unique.
 [...] a language feature where the reference to memory is limited to a 
 single location.
So, unique?
At the most basic level, yes. I'm simplifying it so as not to lock myself into any specific behavior of references to the subgraph. https://joeduffyblog.com/2016/11/30/15-years-of-concurrency/
Apr 05
parent reply Sebastiaan Koppe <mail@skoppe.eu> writes:
On Saturday, 6 April 2024 at 06:12:21 UTC, Richard (Rikki) Andrew 
Cattermole wrote:
 On 06/04/2024 10:32 AM, Sebastiaan Koppe wrote:
 So, unique?
At the most basic level yes. I'm simplifying it to not lock me into any specific behavior of references to the sub graph. https://joeduffyblog.com/2016/11/30/15-years-of-concurrency/
Unique would be a great addition to have. We still need shared, though, for when you want to have multiple threads refer to the same object.
Apr 06
parent "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 06/04/2024 11:02 PM, Sebastiaan Koppe wrote:
 On Saturday, 6 April 2024 at 06:12:21 UTC, Richard (Rikki) Andrew 
 Cattermole wrote:
 On 06/04/2024 10:32 AM, Sebastiaan Koppe wrote:
 So, unique?
At the most basic level yes. I'm simplifying it to not lock me into any specific behavior of references to the sub graph. https://joeduffyblog.com/2016/11/30/15-years-of-concurrency/
Unique would be a great addition to have. We still need shared, though. For when you want to have multiple threads refer to the same object.
I do want to see cross-thread temporal safety provided as well, but I don't think it will be using shared to do it. Making it all come together requires pieces that don't have designs yet, so this would be the last stage to need designing; or at least that is how I've been working on it.
Apr 06
prev sibling parent reply Dom DiSc <dominikus@scherkl.de> writes:
On Saturday, 30 March 2024 at 02:28:02 UTC, Richard (Rikki) 
Andrew Cattermole wrote:
 Introduce a new level to SafeD, ``@tsafe``, for temporally safe.
I think every step in the direction of "@safe by default" is an improvement. But what we need to avoid is generating another attribute that's parallel to the existing @safe.

If what you suggest is in the same line (@system ⊇ @trusted ⊇ @tsafe ⊇ @safe), i.e. it provides some, but not all, of the guarantees @safe provides, I'm all for it. But if @tsafe is independent from @safe like @live: forget it.
Apr 04
parent reply "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 05/04/2024 7:55 PM, Dom DiSc wrote:
 On Saturday, 30 March 2024 at 02:28:02 UTC, Richard (Rikki) Andrew 
 Cattermole wrote:
 Introduce a new level to SafeD, ``@tsafe``, for temporally safe.
I think every step in the direction of "@safe by default" is an improvement. But what we need to avoid is generating another attribute that's parallel to the existing @safe. If what you suggest is in the same line (@system ⊇ @trusted ⊇ @tsafe ⊇ @safe), i.e. it provides some, but not all, of the guarantees @safe provides, I'm all for it. But if @tsafe is independent from @safe like @live: forget it.
You have @tsafe the wrong way round relative to @safe.

It would be a stronger guarantee: temporal safety plus the more basic pointer safety.

@system ⊇ @trusted ⊇ @safe ⊇ @tsafe

The capability to have @safe without DIP1000, what we have now, would exist in the compiler, and keeping a way to specify it means we can interact with older code that is @safe.

I considered remapping @safe in newer editions to the temporal meaning and having a new attribute or ``@trusted @safe`` to map to the older one, but I am concerned it'll result in confusion when looking at code from different editions.
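A minimal sketch of how that might read in code, assuming the hypothetical ``@tsafe`` attribute on top of today's attributes (nothing here is settled syntax):

```d
// Hypothetical sketch: @tsafe does not exist in D today.
@safe void oldStyle(int* p)      // old-edition @safe: basic pointer safety,
{                                // no DIP1000, no temporal checks
}

void newStyle(int* p)            // in a new edition this would default to @tsafe:
{                                // DIP1000-style escape analysis + temporal safety
}
```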
Apr 05
parent reply Dom DiSc <dominikus@scherkl.de> writes:
On Friday, 5 April 2024 at 07:16:47 UTC, Richard (Rikki) Andrew 
Cattermole wrote:
 You have @tsafe the wrong way round relative to @safe.

 It would be a stronger guarantee: temporal safety plus the more basic pointer safety.

 @system ⊇ @trusted ⊇ @safe ⊇ @tsafe

 The capability to have @safe without DIP1000, what we have now, would exist in the compiler, and keeping a way to specify it means we can interact with older code that is @safe.
So, you want something even stronger than @safe (requiring DIP1000 compliance) to be the default? I mean, I would like it. But how do you see the chances of this happening if we can't even agree on @safe by default right now?!?
Apr 05
parent "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 05/04/2024 8:23 PM, Dom DiSc wrote:
 On Friday, 5 April 2024 at 07:16:47 UTC, Richard (Rikki) Andrew 
 Cattermole wrote:
 You have @tsafe the wrong way round relative to @safe.

 It would be a stronger guarantee: temporal safety plus the more basic pointer safety.

 @system ⊇ @trusted ⊇ @safe ⊇ @tsafe

 The capability to have @safe without DIP1000, what we have now, would exist in the compiler, and keeping a way to specify it means we can interact with older code that is @safe.
So, you want something even stronger than @safe (requiring DIP1000 compliance) to be the default? I mean, I would like it. But how do you see the chances of this happening if we can't even agree on @safe by default right now?!?
Escape analysis is needed to perform temporal safety, so yes, it needs to be in that order.

As far as @safe by default is concerned, if a DIP proposed it with the revisions regarding requiring a body, I do expect it would be accepted, although inference is the approach currently being considered. Thanks to the upcoming edition system, we can be a bit bold and consider changing defaults :)
Apr 05