
digitalmars.D - DMD 2.100, bring on the attribute soup

reply deadalnix <deadalnix gmail.com> writes:
So I upgraded to 2.100

It is now reporting DIP1000 deprecations by default.

Fantastic. Every single one of them is a false positive so far. I 
now face the situation where I will have deprecation warnings 
forever, or add attribute soup to the program.

This is the opposite of progress. Stop it.

I'd be happy to add attributes for something that could actually 
track ownership/lifetimes. DIP1000 is not that. Adding attributes 
for that is not worth it.
May 26 2022
next sibling parent reply Dennis <dkorpel gmail.com> writes:
On Thursday, 26 May 2022 at 14:35:57 UTC, deadalnix wrote:
 So I upgraded to 2.100

 It is now reporting DIP1000 deprecations by default.
This was actually reverted for 2.100 because it was unfinished; perhaps you're using a beta? 2.101 is when the deprecation warnings will truly start. Can you try again with a nightly and give some examples of the resulting false positives?

By the way, if you don't use `@safe`, you shouldn't get `scope` errors either. If you do, please file an issue.
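For reference, a minimal sketch (hypothetical example, not code from this thread) of the kind of `scope` error `-preview=dip1000` reports in `@safe` code:

```d
// dmd -preview=dip1000 example.d
@safe int* escape()
{
    int local = 42;
    return &local; // Error: returning `&local` escapes a reference
                   // to local variable `local`
}

void main() {}
```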
May 26 2022
parent deadalnix <deadalnix gmail.com> writes:
On Thursday, 26 May 2022 at 17:41:18 UTC, Dennis wrote:
 On Thursday, 26 May 2022 at 14:35:57 UTC, deadalnix wrote:
 So I upgraded to 2.100

 It is now reporting DIP1000 deprecations by default.
This was actually reverted for 2.100 because it was unfinished; perhaps you're using a beta? 2.101 is when the deprecation warnings will truly start. Can you try again with a nightly and give some examples of the resulting false positives? By the way, if you don't use `@safe`, you shouldn't get `scope` errors either. If you do, please file an issue.
I was told as much, but the version I'm getting from the d-apt repository definitely does. There may have been a snafu there.
May 26 2022
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/26/2022 7:35 AM, deadalnix wrote:
 So I upgraded to 2.100
 
 It is now reporting DIP1000 deprecations by default.
 
 Fantastic. Every single one of them is a false positive so far. I now face the 
 situation where I will have deprecation warnings forever, or add attribute soup 
 to the program.
 
 This is the opposite of progress. Stop it.
 
 I'd be happy to add attributes for something that could actually track 
 ownership/lifetimes. DIP1000 is not that. Adding attributes for that is not 
 worth it.
Many others report the same experience.

Not so long ago, I was asked to try out one of the popular C++ static analyzers. Since at the time the D backend was still written in C++, I thought "great! let's see if it finds any bugs in the backend!"

It found zero bugs and 1000 false positives.

Does that make it a useless tool? Not at all. The thing is, the backend is 35 year old code. All the bugs had already been squeezed out of it! The false positives were things like printf formats (like printing a pointer with %x instead of %p), pointer aliasing issues (modern code should use unions instead of casts), irrelevant portability issues, etc.

The same issue came up when const was added to D. It didn't fix existing, working, debugged code, either. It's come up as well when C and C++ tightened their language specs.

The additional semantic checking is for:

1. new code - so you don't have to debug it, the compiler tells you right off the bat

2. self-documentation - so you know what a function plans to do with a pointer passed to it

3. when a pointer is marked `scope`, you don't have to double check it. The compiler checks it for you.

After all, you pointed out where D allows implicit conversion of a mutable delegate to an immutable one. The fix is:

    https://github.com/dlang/dmd/pull/14164

but it breaks existing, working, debugged code. What do you suggest we do about that? Remove immutable annotations? Ignore immutable annotations? Or fix user code to be const-correct? (I made it opt-in by putting it behind the dip1000 switch. It will eventually become the default.)

As for lifetimes, yes, I know that dip1000 only addresses one level of lifetimes. The fix for all levels is the `@live` stuff, which has already been merged. But we've decided on fixing all the dip1000 issues before putting a major effort into `@live`. The `@live` implementation depends on dip1000, and you'll be pleased to note that it is only turned on a function-by-function basis iff that function is marked `@live`. It's completely opt-in.

BTW, if you use templates, the compiler will infer the correct dip1000 attributes for you.
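For instance, a sketch of that inference (hypothetical function name):

```d
// With -preview=dip1000, attributes for a template function are inferred:
auto firstElem(T)(T[] arr) { return arr[0]; }

@safe void main()
{
    int[3] buf = [1, 2, 3];
    // `arr` is inferred `scope`, so passing a slice of a stack
    // array is accepted in @safe code with no annotations written.
    assert(firstElem(buf[]) == 1);
}
```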
May 26 2022
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
BTW, dip1000 is a complete solution for pointers to stack-based objects. Just 
not for malloc'd objects.
May 26 2022
prev sibling next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 26 May 2022 at 18:00:00 UTC, Walter Bright wrote:
 As for lifetimes, yes, I know that dip1000 only addresses one 
 level of lifetimes.
The idea is right, but the execution of it makes D more complicated than C++... you cannot afford that. It would be better to use the same idea under the hood and just implement ARC and optimize the hell out of it.

It is kinda like the silly `in`-parameter DIP, solvable by optimization. Totally arcane 1980s idea, new syntax for nothing. Or the DIP that tries to introduce a new object hierarchy. Again, it can be solved with optimization. Completely unnecessary added complexity and confusion.

Reduce the number of new features to a minimum and focus on getting a new high-level IR that allows more optimizations instead. Less friction. Less syntax. More power. *shrugs*
May 26 2022
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 26 May 2022 at 18:14:16 UTC, Ola Fosheim Grøstad 
wrote:
 It would be better to use the same idea under the hood and just 
 implement ARC and optimize the hell out of it.
By this I mean that it can be done as optional optimization constraints applied to ARC. So, same effect, just not required unless you tell the compiler to be strict. Gradual typing is the future!
May 26 2022
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
On Thursday, 26 May 2022 at 18:00:00 UTC, Walter Bright wrote:
 Many others report the same experience.

 Not so long ago, I was asked to try out one of the popular C++ 
 code static analyzers. Since at the time the D backend was 
 still written in C++, I thought "great! let's see if it finds 
 any bugs in the backend!".

 It found zero bugs and 1000 false positives.

 Does that make it a useless tool? Not at all. The thing is, the 
 backend is 35 year old code. All the bugs had already been 
 squeezed out of it! The false positives were things like printf 
 formats (like printing a pointer as a %x instead of %p), 
 pointer aliasing issues (modern code should use unions instead 
 of casts), irrelevant portability issues, etc.

 The same issue came up when const was added to D. It didn't fix 
 existing, working, debugged code, either.

 It's come up as well when C and C++ tightened their language 
 specs.

 The additional semantic checking is for:

 1. new code - so you don't have to debug it, the compiler tells 
 you right off the bat

 2. self-documentation - so you know what a function plans to do 
 with a pointer passed to it

 3. when a pointer is marked `scope`, you don't have to double 
 check it. The compiler checks it for you.

 After all, you pointed out where D allows implicit conversion 
 of a mutable delegate to an immutable one. The fix is:

     https://github.com/dlang/dmd/pull/14164

 but it breaks existing, working, debugged code. What do you 
 suggest we do about that? Remove immutable annotations? Ignore 
 immutable annotations? Or fix user code to be const-correct? (I 
 made it opt-in by putting it behind the dip1000 switch. It will 
 eventually become the default.)

 As for lifetimes, yes, I know that dip1000 only addresses one 
 level of lifetimes. The fix for all levels is the `@live` stuff, 
 which has already been merged. But we've decided on fixing all 
 the dip1000 issues before putting a major effort into `@live`. 
 The `@live` implementation depends on dip1000, and you'll be 
 pleased to note that it is only turned on a 
 function-by-function basis iff that function is marked `@live`. 
 It's completely opt-in.

 BTW, if you use templates, the compiler will infer the correct 
 dip1000 attributes for you.
You wrote something similar to this a while back, and while there is some merit to it, I think there are a few wrong turns in there that make the argument bogus.

To begin with, I don't expect the same level of analysis from a static analyzer as from the compiler. I can ignore the static analyzer if it is wrong; I cannot ignore the compiler, after all, I need it to compile my code. False positives are therefore much more acceptable from the tool than from the compiler.

Second, I expect the constraints checked by the compiler to provide me with useful invariants I can rely upon. For instance, if some data is immutable, and that is really an invariant in the program, then I know this data can be shared safely, as nobody else is going to modify it. The existence of the invariant limits my options on one axis - I cannot mutate this data - while opening my options on another axis - I can share this data safely without synchronization.

If immutable instead meant "immutable in most places, but you can mutate it with this weird construct", then it would be effectively useless as a language construct, because it restricts my expressiveness on one axis without granting me greater expressiveness on another. Breaking the invariant must be breaking the type system, which is only possible in `@system` code and comes with all the appropriate warnings.

The problem with DIP1000 is that it doesn't provide me with invariants I can rely upon, because it is unable to track more than one level of indirection. It can only detect some violations of the invariant, but not all (in fact, probably not the majority). As a result, it doesn't allow me to build upon it to make something greater. It belongs in a static analysis tool.
May 26 2022
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/26/2022 3:54 PM, deadalnix wrote:
 To begin with, I don't expect the same level of analysis from a static analyzer 
 as from the compiler. I can ignore the static analyzer if it is wrong, I 
 cannot ignore the compiler, after all, I need it to compile my code. False 
 positives are therefore much more acceptable from the tool than the compiler.
The static analyzer should be built into the language when it is looking for bugs, not stylistic issues.
 Second, I expect the constraint checked by the compiler to provide me with 
 useful invariant I can rely upon. For instance, if some data is immutable, and 
 that it is really an invariant in the program, then I can know this data can 
 be shared safely as nobody else is going to modify it. The existence of the 
 invariant limits my options on one axis - I cannot mutate this data - while 
 opening my option in another axis - I can share this data safely without 
 synchronization.
 
 If immutable instead meant immutable in most places, but you can mutate it 
 with this weird construct, then it is effectively useless as a language construct, 
 because it restricts my expressiveness on one axis without granting me greater 
 expressiveness on another.
Immutable means immutable in D, all the way. I've fended off many attempts to turn it into "logical const" and add "mutable" overrides. Most of the complaints about immutable and const in D are that they are relentless and brutal, not that they don't work.

Yes, we find holes in it now and then, and we plug them.
 Breaking the invariant must be breaking the type system, which is only 
 possible in `@system` code and comes with all the appropriate warnings.
 
 The problem with DIP1000, is that it doesn't provide me with invariant I can 
 rely upon,
It does for stack based allocation.
 because it is unable to track more than one level of indirection. It can 
 only detect some violations of the invariant, but not all (in fact, probably 
 not the majority).
It's designed to track all attempts to escape pointers to the stack, and I mean all as in 100%. It will not allow building multilevel data structures on the stack.

Yes, some implementation bugs have appeared and a lot has been fixed (with Dennis' help). Some mistakes in the design have been discovered and adjustments made. So far (fingers crossed) no *fundamental* problems with it have been found.
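A sketch of what that tracking looks like in practice (hypothetical function names):

```d
// dmd -preview=dip1000
@safe int* leak(scope int* p)
{
    return p; // Error: scope variable `p` may not be returned
}

@safe void assignUp(scope int* p)
{
    static int* g;
    // g = p; // Error: scope variable `p` assigned to `g`
    //        // with longer lifetime
}
```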
 As a result, it doesn't allow me to build upon it to make 
 something greater.
`@live` builds upon it and relies on it. I agree `@live` has yet to prove itself, but the focus is on ImportC and dip1000 for the moment.
May 26 2022
next sibling parent deadalnix <deadalnix gmail.com> writes:
On Thursday, 26 May 2022 at 23:25:06 UTC, Walter Bright wrote:
 Immutable means immutable in D, all the way. I've fended off 
 many attempts to turn it into "logical const" and add "mutable" 
 overrides. Most of the complaints about immutable and const in 
 D is they are relentless and brutal, not that they don't work.

 Yes, we find holes in it now and then, and we plug them.
Yes, this is an example of something that was done right and provided value as a result. DIP1000 and `@live` just aren't.
May 26 2022
prev sibling next sibling parent reply deadalnix <deadalnix gmail.com> writes:
On Thursday, 26 May 2022 at 23:25:06 UTC, Walter Bright wrote:
 On 5/26/2022 3:54 PM, deadalnix wrote:
 To begin with, I don't expect the same level of analysis from 
 a static analyzer than from the compiler. I can ignore the 
 static analyzer if it is wrong, I cannot ignore the compiler, 
 after all, I need it to compile my code. False positive are 
 therefore much more acceptable from the tool than the compiler.
The static analyzer should be built into the language when it is looking for bugs, not stylistic issues.
No, no, no, no and no. You are just breaking my code when you do this. Leave my code alone. Leave the libraries I rely upon alone. It's working fine, great even.

This is purely destructive. Every time this is done, we lose a chunk of the ecosystem.

If it can detect bugs => static analysis. Ship the analyzer with the rest of the toolchain and be done with it.

If it allows for more expressiveness on another axis => ship it with the language, and increase my powers.
May 26 2022
parent Walter Bright <newshound2 digitalmars.com> writes:
On 5/26/2022 4:52 PM, deadalnix wrote:
 This is purely destructive. Every time this is done, we lose a chunk of the 
 ecosystem.
Then just use `@system`, which presumes the coder knows best.

`@live` is purely opt-in. If you don't want it, don't use it :-)
May 26 2022
prev sibling next sibling parent mee6 <mee6 lookat.me> writes:
On Thursday, 26 May 2022 at 23:25:06 UTC, Walter Bright wrote:
 On 5/26/2022 3:54 PM, deadalnix wrote:
 Second, I expect the constraint checked by the compiler to 
 provide me with useful invariant I can rely upon. For 
 instance, if some data is immutable, and that it is really an 
 invariant in the program, then I can know this data can be 
 shared safely as nobody else is going to modify it.  The 
 existence of the invariant limits my options on one axis - I 
 cannot mutate this data - while opening my option in another 
 axis - i can share this data safely without synchronization.
 
 If immutable instead meant immutable in most places, but you 
 can mutate it with this weird construct, then it is 
 effectively useless as a language construct, because it 
 restricts my expressiveness on one axis without granting me 
 greater expressiveness on another.
Immutable means immutable in D, all the way. I've fended off many attempts to turn it into "logical const" and add "mutable" overrides. Most of the complaints about immutable and const in D are that they are relentless and brutal, not that they don't work.
People avoid const; there's a very easy workaround, and you don't hear much about const other than advice to avoid it. I guess people don't want to optionally force themselves to use something "relentless and brutal".

The argument can be made for its removal, and it would be a boon: so much baggage, like `inout`, could go with it. It would remove complexity.
May 26 2022
prev sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Thursday, 26 May 2022 at 23:25:06 UTC, Walter Bright wrote:
 On 5/26/2022 3:54 PM, deadalnix wrote:
 The problem with DIP1000, is that it doesn't provide me with 
 invariant I can rely upon,
It does for stack based allocation.
 because it is unable to track more than one level indirection. 
 It can only detect some violation of the invariant, but not 
 all (in fact, probably not the majority).
It's designed to track all attempts to escape pointers to the stack, and I mean all as in 100%. It will not allow building multilevel data structures on the stack.
I would say that the biggest issue with DIP 1000 is that it spends a significant chunk of D's complexity budget, while offering only relatively modest benefits.

On the one hand, DIP 1000's limitations:

* It only works for stack allocation
* It only handles one level of indirection
* It cannot express `return ref` and `return scope` on the same parameter (so no small-string optimization, no StackFront allocator...)

On the other hand, the investment it demands from potential users:

* Learn the differences between `ref`, `return ref`, `scope`, `return scope`, `return ref scope`, and `ref return scope`.
* Learn how the various rules apply in situations where the pointers/references are hidden or implicit (e.g., `this` is considered a `ref` parameter).
* Learn when `scope` and `return` are inferred and when they are not.
* Probably more that I'm forgetting...

Is it any wonder that a lot of D programmers look at the two lists above and conclude, "this newfangled DIP 1000 stuff just ain't worth it"?
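To make that concrete, here is roughly what those annotations look like on signatures (a hypothetical sketch):

```d
@safe:

// `return ref`: the returned reference may alias the `ref` argument
ref int identity(return ref int x) { return x; }

// `return scope`: the returned pointer is derived from the
// `scope` argument and must not outlive it
int* passThrough(return scope int* p) { return p; }

struct Wrapper
{
    int* p;
    // on a member function, `return scope` applies to `this`
    int* get() return scope { return p; }
}
```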
May 26 2022
next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 27/05/2022 12:34 PM, Paul Backus wrote:
 Is it any wonder that a lot of D programmers look at the two lists above 
 and conclude, "this newfangled DIP 1000 stuff just ain't worth it"?
That is because it's not worth it.

I've got a whole lot of new code, with dip1000 turned on. It has caught exactly zero bugs. On the other hand, it has required me to annotate `scope` methods as `@trusted` just because I returned an RC type that is allocated on the heap.

To be blunt, even though I have it on, the entire dip1000 design needs to be chucked out and a new one considered. What I do know is that these requirements must be met by any future design:

- In non-virtual functions, no annotations should be required to be written.
- Inference is key to success.
- The lifetime of memory must be able to be in relation to a data structure that owns that memory.
May 26 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/26/2022 6:30 PM, rikki cattermole wrote:
 It has caught exactly zero bugs.
But did your code have any memory corruption bugs?

If your code didn't, then nothing can catch a bug that isn't there. The problem is memory corruption getting into production code. People are sick of this, and C will die because of it.

I used to write a lot of memory corruption bugs. I gradually learned not to do that. But I also like the self-documentation aspect of `scope`; for example, I know I can safely pass a pointer to a stack array to a function marking the parameter as `scope`.
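That self-documentation aspect can be sketched like this (hypothetical names):

```d
@safe void fill(scope int[] buf)
{
    // `scope` promises (and the compiler verifies) that no
    // reference to `buf` survives this call
    foreach (ref x; buf) x = 0;
}

@safe void caller()
{
    int[16] stackArray;
    fill(stackArray[]); // fine: the slice cannot be retained
}
```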
May 26 2022
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 27/05/2022 2:54 PM, Walter Bright wrote:
 But did your code have any memory corrupting bugs?
 
 If your code didn't, then nothing can catch a bug that isn't there.
It doesn't worry me that it's not finding anything. I tend to program pretty defensively these days, since that is good for optimizers.

My issue with it is that it has so many false positives, and they do seem to be by design. It's basically a "boy who cried wolf" situation: most people will turn off these checks because they are hurting and not helping, and that certainly is not a good thing.

I want lifetime checks to work. I want users of my libraries to be able to use my code without caring about stuff like memory lifetimes and be strongly likely to never hit an issue, even with concurrency. I want the average programmer to be able to use my stuff without having to care about the little details.

DIP1000 so far does not appear to be working towards a design that meets these goals.
May 26 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/26/2022 8:07 PM, rikki cattermole wrote:
 I want lifetime checks to work,
So do I. But nobody has figured out how to make this work without strongly impacting the programmer.
 I want users of my libraries to be able to use 
 my code without caring about stuff like memory lifetimes and be strongly 
 likely to never hit an issue even with concurrency. I want the average 
 programmer to be able to use my stuff without having to care about the 
 little details.
The garbage collector does this famously.
 DIP1000 so far does not appear to be working towards a design that meets these 
 goals.
Rust is famous for forcing programmers not just to recode their programs, but to redesign them from the ground up. How they managed to sell that is amazing.
May 26 2022
next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 27/05/2022 5:06 PM, Walter Bright wrote:
 On 5/26/2022 8:07 PM, rikki cattermole wrote:
 I want lifetime checks to work,
So do I. But nobody has figured out how to make this work without strongly impacting the programmer.
I did come up with a design, but I'm not convinced it is entirely doable.

I explained this to Timon earlier today (so I have examples).

It's basically making all memory objects (in the C sense) be in reference to other memory objects, and tying their lifetimes to them.

My worked example:

```d
int[] array = [1, 2, 3]; // ownership: []

// arg ownership: [] (not ref/out)
// return ownership: [first argument] -> [array]
int* got = func(array); // ownership: [array]

int* func(int[] source) {
    int[] slice = source[1 .. 2]; // ownership: [source]
    int* ptr = &slice[0]; // ownership: [slice, source] -> [source]
    return ptr; // ownership: [ptr] -> [source]
}
```

A question Timon came up with that I answered:

```d
int[][] func(int[] a, int[] b){
    return [a,b];
}
```

(for the return value): ``// ownership: [first argument, second argument]``

This ticks a lot of the boxes I'm using as preferable acceptance criteria:

1. Inferred for anything non-virtual
2. Nothing outlives its owner
3. Fairly straightforward to understand

But yeah, none of this is easy to resolve.

On that note, an issue I have hit in practice is [0]. I haven't been bothered to report it, but at least it shouldn't require going back to the drawing board ;) #dbugfix

[0] https://issues.dlang.org/show_bug.cgi?id=23142
May 26 2022
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 27.05.22 07:25, rikki cattermole wrote:
 
 A question Timon came up with that I answered:
 
 ```d
 int[][] func(int[] a, int[] b){
      return [a,b];
 }
 ```
 
 (for return value): ``// ownership: [first argument, second argument]``
(What I don't like about this is that it conflates ownership for different levels of indirection, the memory of the resulting array is actually GC-owned.)
May 27 2022
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 5/26/2022 10:25 PM, rikki cattermole wrote:
 I did come up with a design, but I'm not convinced it is entirely doable.
 
 I explained this to Timon earlier today (so have examples).
I recommend starting a new thread with this.
May 28 2022
prev sibling next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 27/05/2022 5:06 PM, Walter Bright wrote:
 I want users of my libraries to be able to use my code without caring 
 about stuff like memory lifetimes and be strongly likely to never hit 
 an issue even with concurrency. I want the average programmer to be 
 able to use my stuff without having to care about the little details.
The garbage collector does this famously.
I forgot to mention this in my other reply: when I talk about lifetimes I do not mean memory allocation/deallocation. That's a solved problem, even if reference counting is not available on classes (we should do this, but for other reasons).

What I care about is escape analysis: making sure that, say, an owning container does not have a member escape the container's lifetime. That would of course be bad, since the data structure can deallocate it.

The problem I'm interested in here is that the compiler guarantees that destruction happens in the right order. This is something the GC does not do, but one the compiler can and should be doing.

On that note, I've become quite a big fan of GCs for application code after studying memory management. For that reason I *really* want to see write barriers implemented as opt-in, so that people can play around with implementing far more advanced GCs (since we are almost maxed out on what we can do without them).
May 26 2022
parent reply deadalnix <deadalnix gmail.com> writes:
On Friday, 27 May 2022 at 06:14:53 UTC, rikki cattermole wrote:
 I forgot to mention this in my other reply, when I talk about 
 lifetimes I am not meaning about memory 
 allocation/deallocation. That's a solved problem even if 
 reference counting is not available on classes (we should do 
 this but for other reasons).

 What I care about is escape analysis, to make sure say an 
 owning container, does not have a member of it escape its 
 lifetime. Which of course would be bad, due to the fact that 
 the data structure can deallocate it.

 The problem I'm interested in here, is that the compiler 
 guarantees that the order of destruction happens in the right 
 order. This is something the GC does not do, but one that the 
 compiler can and should be doing.
The reason you can't do RC on classes, at least safely, is BECAUSE there is no escape analysis.
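A sketch of the problem (hypothetical refcounted type):

```d
struct RCInt
{
    int* payload;  // freed when the count drops to zero
    size_t* count; // ... incremented/decremented in this(this) and ~this
}

int* steal(ref RCInt rc)
{
    // Without escape analysis nothing stops this: the raw
    // pointer outlives the refcount that guards it.
    return rc.payload;
}
// Once the last RCInt copy is destroyed, the stolen pointer
// dangles - hence RC cannot be safe without escape analysis.
```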
May 27 2022
parent Walter Bright <newshound2 digitalmars.com> writes:
On 5/27/2022 1:58 PM, deadalnix wrote:
 The reason you can't do RC on classes, at least safely, is BECAUSE there is no 
 escape analysis.
The idea is to write the class so its fields are not exposed. All access is controlled by member functions.
May 27 2022
prev sibling next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 27 May 2022 at 05:06:54 UTC, Walter Bright wrote:
 Rust is famous for forcing programmers to not just recode their 
 programs, but redesign them from the ground up. How they 
 managed to sell that is amazing.
They sold it to a small percentage of devs. That is sufficient, since Rust is the only well-known language with a borrow checker. Rust is not a good language for most devs; the ones Rust is good for are already using it.

Go instead with optimization hints and checks on those, with static analysis that tells the programmer what has to be changed to get better performance.

A good type system cannot easily be evolved. It has to be built, bottom up, on a theoretical foundation. This is why the `@live` approach is unlikely to succeed.
May 26 2022
prev sibling next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On Friday, 27 May 2022 at 05:06:54 UTC, Walter Bright wrote:
 On 5/26/2022 8:07 PM, rikki cattermole wrote:
 I want lifetime checks to work,
So do I. But nobody has figured out how to make this work without strongly impacting the programmer.
 I want users of my libraries to be able to use my code without 
 caring about stuff like memory lifetimes and be strongly 
 likely to never hit an issue even with concurrency. I want the 
 average programmer to be able to use my stuff without having 
 to care about the little details.
The garbage collector does this famously.
 DIP1000 so far does not appear to be working towards a design 
 that meets these goals.
Rust is famous for forcing programmers to not just recode their programs, but redesign them from the ground up. How they managed to sell that is amazing.
By being *focused* on being the best language in a very specific domain. Rust is now the best mainstream implementation of affine types, and their official roadmap has plans to improve that experience further.

The 1% use case works as a stepping stone for adoption.
May 27 2022
prev sibling next sibling parent reply mee6 <mee6 lookat.me> writes:
On Friday, 27 May 2022 at 05:06:54 UTC, Walter Bright wrote:
 On 5/26/2022 8:07 PM, rikki cattermole wrote:
 I want lifetime checks to work,
So do I. But nobody has figured out how to make this work without strongly impacting the programmer.
 I want users of my libraries to be able to use my code without 
 caring about stuff like memory lifetimes and be strongly 
 likely to never hit an issue even with concurrency. I want the 
 average programmer to be able to use my stuff without having 
 to care about the little details.
The garbage collector does this famously.
 DIP1000 so far does not appear to be working towards a design 
 that meets these goals.
Rust is famous for forcing programmers to not just recode their programs, but redesign them from the ground up. How they managed to sell that is amazing.
Because Go uses a GC and there's no way around GC-related spikes in Go. Rewriting programs in Rust to avoid the GC is actually a thing.

Rust's features complement each other. Things like ImportC, const, dip1000, and `@live` don't complement the language. Dip1000 and `@live` just don't make sense, and their design is flawed on top of that. You keep saying it's opt-in, but when it adds complexity to the language, that's not opt-in.

It seems they are just trying to copy Rust due to its success, without understanding why Rust is successful.
May 27 2022
parent Tejas <notrealemail gmail.com> writes:
On Friday, 27 May 2022 at 10:45:21 UTC, mee6 wrote:
 
 Cause Go uses a GC and there's no way around GC related spikes 
 in Go. Rewriting programs in Rust to avoid the GC is actually a 
 thing.
Isn't that exactly what Discord did, actually? I remember reading their latencies went from **milli**seconds to **micro**seconds
May 27 2022
prev sibling next sibling parent reply forkit <forkit gmail.com> writes:
On Friday, 27 May 2022 at 05:06:54 UTC, Walter Bright wrote:
 ...
 Rust is famous for forcing programmers to not just recode their 
 programs, but redesign them from the ground up. How they 
 managed to sell that is amazing.
Rust is designed for performance and safety, and people have confidence that the language design and compiler actually accomplish this. You don't have to 'opt in'. Instead, you have to consciously opt out. This principle is really important for the future of programming languages.

So in that sense it's not at all surprising that people would want to recode for performance and safety. What is surprising is that they're willing to do it in Rust (given its syntax is so cognitively demanding). But that's what happens when academics get involved in language design ;-)

This simply cannot be said when going to Rust. This is the lesson the designers of Rust should have taken from D. Imagine how popular Rust would be now if they had done that. These forums would be pretty quiet indeed, if that had happened.
May 27 2022
parent reply max haughton <maxhaton gmail.com> writes:
On Friday, 27 May 2022 at 12:27:00 UTC, forkit wrote:
 On Friday, 27 May 2022 at 05:06:54 UTC, Walter Bright wrote:
 ...
 Rust is famous for forcing programmers to not just recode 
 their programs, but redesign them from the ground up. How they 
 managed to sell that is amazing.
Rust is designed for performance and safety, and people have confidence that the language design and compiler, actually accomplishes this. You don't have to 'opt in'. Instead, you have to consciously opt out. This principle is really important for the future of programming lanaguages. So in that sense it's not at all surprising that people would want to recode for performance and safety. What is surprising, is that they're willing to do it in Rust (given it's syntax is so cognitively demanding). But that's what happens when academics get involved in language design ;-) This simply cannot be said, when going to Rust. This is the lesson the designers of Rust should have taken from D. Imagine how popular Rust would be now, if they had done that. These forums would be pretty quiet indeed, if that had happened.
One thing people here seem to miss is that Rust isn't just a memory safety thing anymore. The memory safety is still a very big draw, yes obviously, but the language as a whole has a lot of goodies that draw people to it. I know a handful of rust programmers, most of them don't really care about the memory safety primarily but rather memory safety *and* all the "functional" stuff it inherited from ML. For example doing things like sum and product types as a library whether it be C++ or D just looks incredibly tired and banal compared to doing it properly and cohesively (pattern matching for example) in the language. This can be rectified in D, we just need to actually do it. Tuples have been on the cards for years.
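For reference, D's library-level sum type is `std.sumtype` in Phobos (since 2.097). A minimal sketch of the library approach being described (the `Shape` and `describe` names are invented for illustration):

```d
import std.sumtype;

// A toy sum type: either a circle radius or a named shape.
alias Shape = SumType!(double, string);

string describe(Shape s)
{
    // Library "pattern matching": one handler per member type,
    // exhaustiveness checked at compile time by match!.
    return s.match!(
        (double radius) => "circle",
        (string name)   => name
    );
}
```

This works, but the matching lives in template machinery; a built-in `match` with destructuring (as in Rust or ML) would be part of the language grammar instead.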
May 27 2022
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 27 May 2022 at 13:06:46 UTC, max haughton wrote:
 For example doing things like sum and product types as a 
 library whether it be C++ or D just looks incredibly tired and 
 banal compared to doing it properly and cohesively (pattern 
 matching for example) in the language. This can be rectified in 
 D, we just need to actually do it. Tuples have been on the 
 cards for years.
Bjarne suggested pattern matching for C++ in 2014, and there is a proposal from 2020 numbered 1371. I doubt it will make it, but who knows?
May 27 2022
prev sibling parent JN <666total wp.pl> writes:
On Friday, 27 May 2022 at 05:06:54 UTC, Walter Bright wrote:
 Rust is famous for forcing programmers to not just recode their 
 programs, but redesign them from the ground up. How they 
 managed to sell that is amazing.
I think it's because they don't focus on interop with other languages, especially other than C. It's "my way or the highway". And Rust provides many benefits over C/C++, making people willing to make the complete switch. In comparison, D is trying to be a language that can partially replace and interact with existing C++ codebases, even adding language features to support such a workflow. Not arguing whether one behavior is better than the other, but it's easier to force programmers to redesign their programs if they don't have an easy transition path. If it's a hard transition path, you might as well do the redesign at that point.
May 27 2022
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 5/26/2022 5:34 PM, Paul Backus wrote:
 Is it any wonder that a lot of D programmers look at the two lists above and 
 conclude, "this newfangled DIP 1000 stuff just ain't worth it"?
Fortunately,

1. the attributes are all subtractive, i.e. they all restrict what the programmer can do when they are there. This is deliberate. If you don't want the restrictions, don't use them.

2. the attributes are inferred in templates. This helps a great deal. They'll probably get inferred for general functions at some point, it's just too useful to infer them.

3. D code defaults to @system. If you just want to blitz out code, write it that way. If you want the compiler to check for memory safety errors, well, ya gotta use the memory safe features.
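A hedged sketch of point 2 (names invented): a template function gets its attributes inferred from its body, so it composes with fully attributed callers without any soup at the declaration site:

```d
// `sum` is a template, so @safe, pure, nothrow and @nogc are inferred
// from its body; nothing needs to be written at the declaration.
T sum(T)(scope const(T)[] values)
{
    T total = 0;
    foreach (v; values)
        total += v;
    return total;
}

@safe @nogc nothrow pure
int use()
{
    int[3] a = [1, 2, 3];
    // The instantiation sum!int infers all four attributes, so this
    // call is legal from a fully attributed context.
    return sum(a[]);
}
```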
May 26 2022
prev sibling parent reply Nick Treleaven <nick geany.org> writes:
On Thursday, 26 May 2022 at 22:54:22 UTC, deadalnix wrote:
 If immutable instead meant immutable in most places, yet you 
 can mutate it with this weird construct, then it is effectively 
 useless as a language construct, because it restricts my 
 expressiveness on one axis without granting me greater 
 expressiveness on another.
`scope` actually does allow that. Any local heap allocation only passed to `scope` parameters can be allocated on the stack instead of the heap.

```d
void f(scope T);

T v = new T; // can be stack allocated
f(v);
```
May 27 2022
next sibling parent reply Nick Treleaven <nick geany.org> writes:
On Friday, 27 May 2022 at 09:43:02 UTC, Nick Treleaven wrote:
 On Thursday, 26 May 2022 at 22:54:22 UTC, deadalnix wrote:
 If immutable instead meant immutable in most places, yet you 
 can mutate it with this weird construct, then it is 
 effectively useless as a language construct, because it 
 restricts my expressiveness on one axis without granting me 
 greater expressiveness on another.
`scope` actually does allow that. Any local heap allocation only passed to `scope` parameters can be allocated on the stack instead of the heap.

```d
void f(scope T);

T v = new T; // can be stack allocated
f(v);
```
To finish my point, `scope` allows using `new` in @nogc functions. And it enables allocation optimizations, so `scope` can't just be for a static analyzer.
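A small sketch of that (class and names invented): without `-preview=dip1000` the function below is plain `@safe`; with the preview it can additionally be marked `@nogc`, because `scope` proves the object never outlives the call and the compiler may place it on the stack:

```d
class Counter
{
    private int n;
    this(int start) @safe { n = start; }
    int next() @safe { return ++n; }
}

@safe int demo()
{
    // `scope` pins the object's lifetime to this function. Under
    // -preview=dip1000 this is what makes `new` acceptable in @nogc code.
    scope c = new Counter(41);
    return c.next();
}
```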
May 27 2022
parent reply deadalnix <deadalnix gmail.com> writes:
On Friday, 27 May 2022 at 14:36:25 UTC, Nick Treleaven wrote:
 On Friday, 27 May 2022 at 09:43:02 UTC, Nick Treleaven wrote:
 On Thursday, 26 May 2022 at 22:54:22 UTC, deadalnix wrote:
 If immutable instead meant immutable in most places, yet you 
 can mutate it with this weird construct, then it is 
 effectively useless as a language construct, because it 
 restricts my expressiveness on one axis without granting me 
 greater expressiveness on another.
`scope` actually does allow that. Any local heap allocation only passed to `scope` parameters can be allocated on the stack instead of the heap.

```d
void f(scope T);

T v = new T; // can be stack allocated
f(v);
```
 To finish my point, `scope` allows using `new` in @nogc functions. And it enables allocation optimizations, so `scope` can't just be for a static analyzer.
This is fundamentally broken. See https://forum.dlang.org/post/omcottkussnewheixydq@forum.dlang.org

This is not salvageable.
May 27 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/27/2022 2:13 PM, deadalnix wrote:
 This is fundamentally broken. See 
 https://forum.dlang.org/post/omcottkussnewheixydq@forum.dlang.org
 
 This is not salvageable.
If you can make it fail with an example, please post to bugzilla.
May 28 2022
parent Walter Bright <newshound2 digitalmars.com> writes:
On 5/28/2022 10:54 PM, Walter Bright wrote:
 If you can make it fail with an example, please post to bugzilla.
I see Timon already did.
May 29 2022
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
On Friday, 27 May 2022 at 09:43:02 UTC, Nick Treleaven wrote:
 On Thursday, 26 May 2022 at 22:54:22 UTC, deadalnix wrote:
 If immutable instead meant immutable in most places, yet you 
 can mutate it with this weird construct, then it is 
 effectively useless as a language construct, because it 
 restricts my expressiveness on one axis without granting me 
 greater expressiveness on another.
`scope` actually does allow that. Any local heap allocation only passed to `scope` parameters can be allocated on the stack instead of the heap.

```d
void f(scope T);

T v = new T; // can be stack allocated
f(v);
```
I don't think this is a valid optimization, because DIP1000 cannot track indirections. Therefore, I could have an object within the subgraph reachable from `v` which escapes and which can itself reach back to `v`.

In the cases where it is possible to optimize, LDC can already do it. DIP1000 is of no help here.
May 27 2022
parent reply Nick Treleaven <nick geany.org> writes:
On Friday, 27 May 2022 at 21:09:49 UTC, deadalnix wrote:
 void f(scope T);

 T v = new T; // can be stack allocated
 f(v);
 I don't think this is a valid optimization, because DIP1000 cannot track indirections. Therefore, I could have an object within the subgraph reachable from `v` which escapes and which can itself reach back to `v`.
Can you give an example? The following currently compiles:

```d
@safe @nogc:

class C
{
    C g() => this;
}

//C f(C c); // error
C f(scope C c); // OK
//C f(scope C c) { return c.g; } // error

void main()
{
    scope c = new C(); // allocated on the stack, scope currently required
    f(c);
}
```

If I switch in the third version of `f`, I get an error with -dip1000:

scopeclass.d(7): Error: scope variable `c` assigned to non-scope parameter `this` calling scopeclass.C.g
May 28 2022
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 28.05.22 12:55, Nick Treleaven wrote:
 On Friday, 27 May 2022 at 21:09:49 UTC, deadalnix wrote:
 void f(scope T);

 T v = new T; // can be stack allocated
 f(v);
 I don't think this is a valid optimization, because DIP1000 cannot track indirections. Therefore, I could have an object within the subgraph reachable from `v` which escapes and which can itself reach back to `v`.
 Can you give an example? The following currently compiles:

 ```d
 @safe @nogc:

 class C
 {
     C g() => this;
 }

 //C f(C c); // error
 C f(scope C c); // OK
 //C f(scope C c) { return c.g; } // error

 void main()
 {
     scope c = new C(); // allocated on the stack, scope currently required
     f(c);
 }
 ```

 If I switch in the third version of `f`, I get an error with -dip1000: scopeclass.d(7): Error: scope variable `c` assigned to non-scope parameter `this` calling scopeclass.C.g
```d
class D { C c; }

class C {
    D d;
    int x = 3;
    this(D d) @safe @nogc {
        d.c = this;
        this.d = d;
    }
}

C foo(D d) @nogc @safe {
    scope c = new C(d); // remove `scope` and program does not crash
    return c.d.c; // escape c
}

void main() {
    import std.stdio;
    writeln(foo(new D)); // segfault
}
```

Not sure if this was already in bugzilla, added it: https://issues.dlang.org/show_bug.cgi?id=23145
May 28 2022
prev sibling next sibling parent reply Per =?UTF-8?B?Tm9yZGzDtnc=?= <per.nordlow gmail.com> writes:
On Thursday, 26 May 2022 at 14:35:57 UTC, deadalnix wrote:
 Fantastic. Every single one of them is a false positive so far. 
 I now face the situation where I will have deprecation warning 
 forever, or add attribute soup to the program.
Can you briefly highlight what those false positives are?
 I'd be happy to add attributes for something that could 
 actually track ownership/lifetime. DIP1000 is not that. Adding 
 attributes for that is not worth it.
Would having attributes be inferred for non-templated functions alleviate some of these problems? Or isn't that possible because a definition might have forward declarations with qualifiers needing to be in sync with the definition?
May 27 2022
parent deadalnix <deadalnix gmail.com> writes:
On Friday, 27 May 2022 at 07:06:08 UTC, Per Nordlöw wrote:
 On Thursday, 26 May 2022 at 14:35:57 UTC, deadalnix wrote:
 Fantastic. Every single one of them is a false positive so 
 far. I now face the situation where I will have deprecation 
 warning forever, or add attribute soup to the program.
Can you briefly highlight what those false positives are?
It's typically complaining about accessors, because references to members escape. And yes, I know, this is exactly the point of an accessor.
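For readers who haven't hit it, a minimal invented accessor showing the pattern: under `-preview=dip1000` the checker flags the member reference escaping through the return value unless the accessor is annotated `return`:

```d
struct Box
{
    private int value_;

    // Under -preview=dip1000, without the `return` annotation the
    // compiler complains that a reference to a member of `this`
    // escapes; `return` declares the result must not outlive the Box.
    ref int value() return @safe { return value_; }
}

@safe int read()
{
    auto b = Box(7);
    return b.value; // the reference does not outlive b
}
```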
 I'd be happy to add attributes for something that could 
 actually track ownership/lifetime. DIP1000 is not that. Adding 
 attributes for that is not worth it.
Would having attributes be inferred for non-templated functions alleviate some of these problems? Or isn't that possible because a definition might have forward declarations with qualifiers needing to be in sync with the definition?
Maybe. In the case of trivial accessors, I assume that it would. Nevertheless, that would not convince me that DIP1000 is the right path forward, because it wouldn't be super useful to me.

Consider that accessors can be used on objects on the heap as much as on objects on the stack. DIP1000 is therefore unable to make these accessors safe in general; it simply makes them so in a very limited set of circumstances. This is simply not good enough to justify breaking anything. It would be useful in a static analyzer.
May 27 2022
prev sibling parent reply Dukc <ajieskola gmail.com> writes:
On Thursday, 26 May 2022 at 14:35:57 UTC, deadalnix wrote:
 This is the opposite of progress. Stop it.

 I'd be happy to add attributes for something that could 
 actually track ownership/lifetime. DIP1000 is not that. Adding 
 attributes for that is not worth it.
Despite quoting Amaury, my reply is aimed at everyone here.

DIP1000 is sure a difficult concept to learn, and I agree it's onerous to migrate code to use it and it does not discover that many flaws in existing code. But there is one imperative point that I haven't seen mentioned here.

There was a fundamental design flaw in `@safe`, and DIP1000 is the solution to it. It's not just about enabling new paradigms, it's about plugging a hole in the existing one. These compile without the DIP, but are detected by DIP1000:

```D
@safe ref int oops1()
{
    int[5] arr;
    int[] slice = arr[];
    return slice[2];
}

@safe ref int oops2()
{
    struct AnInt
    {
        int here;
        int* myAddress()
        {
            auto result = &here;
            return result;
        }
    }

    AnInt myInt;
    return *myInt.myAddress;
}
```

If you're against DIP1000, you have to either:

- Say that we should resign `@safe`'s purpose of eliminating undefined behaviour from the program. I'd hate that.
- Say that we should simply disallow slicing static arrays and taking the address of a struct member from a member function in `@safe` code. That would break a LOT of code, and would force `@safe` code to resign quite a deal of performance. I'd hate that.
- Come up with a better idea for escape checking. For one, I cannot.

One thing I want to mention is that you only need `scope` if you're going to work with data on the stack. Often it's more pragmatic to just allocate stuff on the heap so you don't have to fight with `scope` checks.

DIP1000 is currently only scarcely documented. I'm considering doing a series of blog posts to the D blog about DIP1000; that would probably help.
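To complement the negative examples, a sketch of how the same operations remain expressible once the escape is ruled out: returning the element by value (rather than by `ref`) passes the checker, because nothing outlives `arr`:

```d
@safe int pick()
{
    int[5] arr = [10, 11, 12, 13, 14];
    int[] slice = arr[]; // fine: `slice` never leaves this function
    return slice[2];     // returning the value, not a reference, is OK
}
```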
May 27 2022
next sibling parent Guillaume Piolat <first.last gmail.com> writes:
On Friday, 27 May 2022 at 08:53:29 UTC, Dukc wrote:
 DIP1000 is currently only scarcely documented. I'm considering 
 doing a series of blog posts to the D blog about DIP1000, that 
 would probably help.
Personally I haven't heard about DIP1000, nor used it, and only will when it becomes the default. Reading this makes me anxious about how much work it will be.
May 27 2022
prev sibling next sibling parent reply claptrap <clap trap.com> writes:
On Friday, 27 May 2022 at 08:53:29 UTC, Dukc wrote:
 On Thursday, 26 May 2022 at 14:35:57 UTC, deadalnix wrote:

 DIP1000 is currently only scarcely documented. I'm considering 
 doing a series of blog posts to the D blog about DIP1000, that 
 would probably help.
Why exactly is DIP1000 talked about as inevitable when it never completed the DIP process? Or was never accepted? It just seems like Walter has decided that's what should happen? Or have I missed something?
May 27 2022
next sibling parent max haughton <maxhaton gmail.com> writes:
On Friday, 27 May 2022 at 10:05:15 UTC, claptrap wrote:
 On Friday, 27 May 2022 at 08:53:29 UTC, Dukc wrote:
 On Thursday, 26 May 2022 at 14:35:57 UTC, deadalnix wrote:

 DIP1000 is currently only scarcely documented. I'm considering 
 doing a series of blog posts to the D blog about DIP1000, that 
 would probably help.
Why exactly is DIP1000 talked about as inevitable when it never completed the DIP process? Or was never accepted? It just seems like Walter has decided that's what should happen? Or have I missed something?
The DIP1000 name is somewhat unfortunate for this reason.
May 27 2022
prev sibling parent bauss <jj_1337 live.dk> writes:
On Friday, 27 May 2022 at 10:05:15 UTC, claptrap wrote:
 On Friday, 27 May 2022 at 08:53:29 UTC, Dukc wrote:
 On Thursday, 26 May 2022 at 14:35:57 UTC, deadalnix wrote:

 DIP1000 is currently only scarcely documented. I'm considering 
 doing a series of blog posts to the D blog about DIP1000, that 
 would probably help.
Why exactly is DIP1000 talked about as inevitable when it never completed the DIP process? Or was never accepted? It just seems like Walter has decided that's what should happen? Or have I missed something?
That's the D-ictatorship for you
May 27 2022
prev sibling next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 27 May 2022 at 08:53:29 UTC, Dukc wrote:
 One thing I want to mention is that you only need `scope` if 
 you're going to work with data in the stack. Often it's more 
 pragmatic to just allocate stuff on the heap so you don't have 
 to fight with `scope` checks.
Yes, but allocating on the stack is an optimization. So, for @safe code, just let all memory be managed by the compiler and add hints where you desire memory optimization. Then let the compiler decide and report what the layout is and why.

I want a reason to not use C++20. I want less manual memory management constructs. I want smarter memory management. And I generally don't want to think about it or see it in my algorithms. DIP1000 doesn't provide that, nor does free/malloc or stop-the-world GC.
May 27 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/27/2022 1:53 AM, Dukc wrote:
 DIP1000 is sure a difficult concept to learn,
It did until recently have a serious usability problem in that it was hard to discern what `ref return scope` did. That has been replaced with a simple rule.
May 28 2022
next sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Saturday, 28 May 2022 at 21:22:37 UTC, Walter Bright wrote:
 On 5/27/2022 1:53 AM, Dukc wrote:
 DIP1000 is sure a difficult concept to learn,
It did until recently have a serious usability problem in that it was hard to discern what `ref return scope` did. That has been replaced with a simple rule.
The other big usability issue is the way `scope` works with `ref` parameters. D programmers who haven't already learned DIP 1000's rules generally expect the following two function declarations to be equivalent:

```d
void foo(scope int** p);
void bar(scope ref int* p);
```
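A sketch of the asymmetry as I understand it (behavior claims are for `-preview=dip1000`; the rejected line is left commented out). `scope` is not transitive, so in `foo` it restricts `p` itself but not `*p`, while in `bar` the `ref` is a compiler-introduced indirection and `scope` applies to the pointer value reached through it:

```d
int* leaked;

@safe void foo(scope int** p)
{
    leaked = *p; // accepted: `scope` covers p, not the int* it points to
}

@safe void bar(scope ref int* p)
{
    // leaked = p; // rejected under -preview=dip1000: p's value is scope
}
```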
May 28 2022
parent Walter Bright <newshound2 digitalmars.com> writes:
On 5/28/2022 2:51 PM, Paul Backus wrote:
 The other big usability issue is the way `scope` works with `ref` parameters.
D 
 programmers who haven't already learned DIP 1000's rules generally expect the 
 following two function declarations to be equivalent:
 
      void foo(scope int** p);
      void bar(scope ref int* p);
Fortunately, `scope` is only subtractive, in that it subtracts from what you can do with it. Hence, the compiler detects errors in its usage.
May 28 2022
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
On Saturday, 28 May 2022 at 21:22:37 UTC, Walter Bright wrote:
 On 5/27/2022 1:53 AM, Dukc wrote:
 DIP1000 is sure a difficult concept to learn,
It did until recently have a serious usability problem in that it was hard to discern what `ref return scope` did. That has been replaced with a simple rule.
I find it marvelous that you can type this and not be like "wait a minute, something went sideways really badly here".

You really needed borrow and own semantics. Borrow is adequately represented as `scope`, and owning as a new type qualifier. Instead, we get `ref return scope`, and the whole damn thing can only track things on the stack.
May 28 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/28/2022 4:49 PM, deadalnix wrote:
 Instead, we get "ref return scope" and the whole damn thing can only track 
 things on the stack.
That's what @live adds.
May 28 2022
parent reply Paul Backus <snarwin gmail.com> writes:
On Sunday, 29 May 2022 at 06:01:42 UTC, Walter Bright wrote:
 On 5/28/2022 4:49 PM, deadalnix wrote:
 Instead, we get "ref return scope" and the whole damn thing 
 can only track things on the stack.
 That's what @live adds.
Unfortunately it is not possible to write @safe code that relies on @live for lifetime tracking, so its utility is extremely limited in practice.
May 29 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/29/2022 7:18 AM, Paul Backus wrote:
 Unfortunately it is not possible to write @safe code that relies on @live for 
 lifetime tracking, so its utility is extremely limited in practice.
I don't know what you mean. Unless you're referring to containers. I'm not well acquainted with Rust, but I think you can't write many containers there without dipping into unsafe code.
May 29 2022
next sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Sunday, 29 May 2022 at 16:26:03 UTC, Walter Bright wrote:
 On 5/29/2022 7:18 AM, Paul Backus wrote:
 Unfortunately it is not possible to write @safe code that 
 relies on @live for lifetime tracking, so its utility is 
 extremely limited in practice.
I don't know what you mean. Unless you're referring to containers. I'm not well acquainted with Rust, but I think you can't write many containers there without dipping into unsafe code.
I explained this in an older thread about DIP 1000 and @safe. To quote my own post:
 In order for @safe or @trusted code to rely on @live's 
 ownership invariants (e.g., "a non-scope pointer owns the 
 memory it points to"), it must be impossible for @safe code 
 to violate those invariants. Since @live's invariants are 
 only enforced in @live functions, and @safe code is allowed 
 to call non-@live functions, it follows that @safe code is 
 allowed to violate @live's invariants, and therefore that 
 those invariants cannot be relied upon by @safe or @trusted 
 code.
Link: https://forum.dlang.org/post/hcogwpdyjbkcyofctler@forum.dlang.org

In order to fix this, you have to introduce restrictions on which functions can call or be called from @live functions. For example:

* A @safe @live function cannot call a non-@live function.
* A @safe non-@live function cannot call a @live function.

The end result is that "@safe D" and "@safe @live D" become mutually-exclusive language subsets. Library code written for one will not work with the other. It's a language fork in all but name.
May 29 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
I'm aware of this, and Timon has also brought it up.

It's this way to enable people to mix and match code, because I doubt many would even try it if it was "turtles all the way down".
May 29 2022
next sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Sunday, 29 May 2022 at 20:05:42 UTC, Walter Bright wrote:
 I'm aware of this, and Timon has also brought it up.

 It's this way to enable people to mix and match code, because I 
 doubt many would even try it if it was "turtles all the way 
 down".
The entire problem is that we *cannot* mix and match @live code with @safe code.

Nobody who cares enough about memory safety to be interested in @live is going to want to give up @safe in order to use it.
May 29 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/29/2022 2:05 PM, Paul Backus wrote:
 The entire problem is that we *cannot* mix and match @live code with @safe 
 code.
 
 Nobody who cares enough about memory safety to be interested in @live is going 
 to want to give up @safe in order to use it.
@live does not subtract any safety from @safe.
May 29 2022
parent reply Paul Backus <snarwin gmail.com> writes:
On Sunday, 29 May 2022 at 22:18:01 UTC, Walter Bright wrote:
 On 5/29/2022 2:05 PM, Paul Backus wrote:
 The entire problem is that we *cannot* mix and match @live 
 code with @safe code.
 
 Nobody who cares enough about memory safety to be interested 
 in @live is going to want to give up @safe in order to use it.
 @live does not subtract any safety from @safe.
But it doesn't add any either. All of the mistakes @live prevents you from making are already prevented by @safe on its own. The only place you get any safety benefit from @live is in @system code.

To be more precise about it: we cannot mix "code that benefits from @live" with @safe code, because "code that benefits from @live" is a subset of "@system code."
May 29 2022
next sibling parent reply zjh <fqbqrr 163.com> writes:
On Sunday, 29 May 2022 at 23:14:32 UTC, Paul Backus wrote:
 
`@live` and `@safe` are two different attributes, no different from ordinary attributes. You can add `@live` if you want to use `@safe`. It doesn't matter. It's not that terrible.
May 29 2022
parent zjh <fqbqrr 163.com> writes:
On Monday, 30 May 2022 at 01:30:30 UTC, zjh wrote:

  You can add `@live` if you want to use `@safe`.
An `@attribute` is originally used for `restriction`.
May 29 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/29/2022 4:14 PM, Paul Backus wrote:
  @live does not subtract any safety from @safe.
But it doesn't add any either. All of the mistakes @live prevents you from making are already prevented by @safe on its own.
It prevents use-after-free, double-free, and leaks.
 The only place you get any 
 safety benefit from @live is in @system code.
 
 To be more precise about it: we cannot mix "code that benefits from @live" with 
 @safe code, because "code that benefits from @live" is a subset of "@system code."
There's nothing preventing marking a function as @live @safe.
May 29 2022
parent Paul Backus <snarwin gmail.com> writes:
On Monday, 30 May 2022 at 01:56:09 UTC, Walter Bright wrote:
 On 5/29/2022 4:14 PM, Paul Backus wrote:
  @live does not subtract any safety from @safe.
 But it doesn't add any either. All of the mistakes @live prevents you from making are already prevented by @safe on its own.
 It prevents use-after-free, double-free, and leaks.
@safe already prevents use-after-free and double-free on its own, because @safe code can't call free() in the first place. Leaks are already handled by the GC or by reference counting. None of this requires or benefits from @live.

In fact, @live is actually an active hindrance, because it will nag you to manually dispose of pointers to memory that's already going to be cleaned up automatically by the GC.

 There's nothing preventing marking a function as @live @safe.
There is no benefit to marking a function as @live @safe instead of just plain @safe.
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 29.05.22 22:05, Walter Bright wrote:
 I'm aware of this, and Timon has also brought it up.
 
 It's this way to enable people to mix and match code, because I doubt 
 many would even try it if it was "turtles all the way down".
Sure, but @live fails at doing that. @live is redefining the meaning of certain built-in types within annotated functions. You can't just mix and match, because the caller and callee may disagree about the meaning of the function's interface. Function calls across the boundary engage in unsafe type punning. It's like calling a function assuming a wrong ABI.

The way to enable people to mix and match code is to make ownership and borrowing semantics additional features for user-defined types to optionally use. That's also what you want in the end, and it can actually help make @safe more powerful along the way.
May 30 2022
prev sibling parent reply John Colvin <john.loughran.colvin gmail.com> writes:
On Sunday, 29 May 2022 at 16:26:03 UTC, Walter Bright wrote:
 On 5/29/2022 7:18 AM, Paul Backus wrote:
 Unfortunately it is not possible to write @safe code that 
 relies on @live for lifetime tracking, so its utility is 
 extremely limited in practice.
I don't know what you mean. Unless you're referring to containers. I'm not well acquainted with Rust, but I think you can't write many containers there without dipping into unsafe code.
Have you considered starting with some simple container implementations and seeing how your design allows them to be @live @safe? It would be good if the discussion had some central & complete* examples that people can pick over. I feel like all this discussion without concrete code that does real work might easily lead to collective blind spots. Of course one must make sure the theory makes sense to ensure generality, but concrete examples are excellent checks.

* i.e. would actually be useful. Even an example showing how to implement a @live @safe array type would be enlightening. The best would be "this complete, useful library type sucks in D, here's how it would be much better with @safe @live D".

Forgive me if these already exist somewhere.
May 30 2022
parent reply deadalnix <deadalnix gmail.com> writes:
On Monday, 30 May 2022 at 18:05:56 UTC, John Colvin wrote:
 Have you considered starting with some simple container 
 implementations and seeing how your design allows them to be 
 @live @safe? It would be good if the discussion had some 
 central & complete* examples that people can pick over. I feel 
 like all this discussion without concrete code that does real 
 work might easily lead to collective blind spots. Of course one 
 must make sure the theory makes sense to ensure generality, but 
 concrete examples are excellent checks.

 * i.e. would actually be useful. Even an example showing how to 
 implement a @live @safe array type would be enlightening. The 
 best would be "this complete, useful library type sucks in D, 
 here's how it would be much better with @safe @live D".

 Forgive me if these already exist somewhere.
I'm not sure a container example exists, but there are various code samples in bug reports and in discussions in this forum. This debate has been going on for literally a decade, and I guess there are only so many times people will go through the effort of coming up with a realistic piece of code to make a point.

In any case, it seems to me that we are suffering from the exact opposite problems. As far as I can tell, D is rolling out solution to specific problems after solution to specific problems, which balloons the complexity while never getting at the root of the problem.

Consider that type qualifiers, DIP1000, @live, @nogc, RCObject, and probably a few more, are all variations around the theme of ownership. But because we look at it from a specific angle, and attack that angle, we just accumulate partial, but incomplete solutions.

Yes, DIP1000 is simpler than ownership. Yes, @nogc is also simpler than ownership. Maybe - not sure - @live is simpler than ownership. RCObject is also definitively simpler than ownership. Type qualifiers are also simpler than ownership.

But you know what is not simpler than ownership? Type qualifiers + DIP1000 + @nogc + @live + RCObject. And yet, the power you get from them is also significantly less than full-blown ownership.

There is a case to be made that we do not want ownership in D, because it is too complex, because Rust captured that market, or whatever. But then we've got to embrace the GC, optimize allocations via escape analysis, and so on (a study of the optimizations done for Java or JS would be a good primer on what the options are here). Or we decide that we do want it.

What doesn't make sense is to pretend we don't want it, and implement broken version after broken version of it. This only becomes apparent when you stop looking at individual examples, step back, and look at the emerging patterns.
May 30 2022
parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Mon, May 30, 2022 at 08:26:46PM +0000, deadalnix via Digitalmars-d wrote:
[...]
 In any case, it seems to me that we are suffering from the exact
 opposite problems. As far as I can tell, D is rolling out solution to
 specific problems after solution to specific problems, which balloons
 the complexity while never getting at the root of the problem.
Totally agree with this!
 Consider that type qualifier, DIP1000,  live,  nogc, RCObject, and
 probably a few more, are all variations around the theme of ownership.
 But because we look at it from a specific angle, and attack that
 angle, we just accumulate partial, but incomplete solutions.
Yep. Lately I've been feeling like we're missing the forest for the trees. We're tackling every individual tree with heroic effort, but we've lost sight of the forest and where we want to be heading. (Or maybe we just can't come to an agreement of where we want to be heading, so we decide to tackle the entire forest, one tree at a time.) We're just inventing one ingenious engineering solution after another to deal with the tree immediately in front of us, accumulating technical debt, yet we have no idea what the next tree behind this one will be, let alone the rest of the forest. In the end, we accumulate lots of powerful tools for cutting down individual trees but none that can take us through the forest, because we don't even know which direction we're supposed to be heading. We're just hoping and wishing that after tackling N trees ahead of us we will somehow magically make it to the end -- but what that end is, we don't have a clue.

 yes, DIP1000 is simpler than ownership. Yes, @nogc is also simpler
 than ownership. Maybe - not sure - @live is simpler than ownership.
 RCObject is also definitively simpler than ownership. Type qualifiers
 are also simpler than ownership.
 
 But you know what is not simpler than ownership? Type qualifiers +
 DIP1000 + @nogc + @live + RCObject. And yet, the power you get from
 them is also significantly less than full blown ownership.
Exactly. We tackle individual trees marvelously, each with its own specialized tool. But put all those tools together, and we still can't take on the forest.
 There is a case to be made that we do not want ownership in D, because
 it is too complex, because Rust captured that market, or whatever. But
 then we got to embrace the GC, optimize allocations via escape
 analysis, and so on (a study of optimizations done for Java or JS
 would be a good primer to know what the options are here). Or we
 decide that we do want it.
IOW, decide which edge of the forest we want to end up on in the first place, before we engage our engineering genius to tackle individual trees! :-D
 What doesn't make sense is pretend we don't want it, and implement
 broken version of it after broken version of it. This only becomes
 apparent when you stop looking at individual examples, step back, and
 look at the emerging patterns.
I wouldn't say it's broken; each tool we invent works marvelously well against the tree directly ahead of us. The problem is that we don't know which trees we should be tackling, and which side of the forest we want to end up in at the end.

We heroically deal with the tree of type qualifiers, then discover standing behind it the tree of escaping references 1 level deep. We fell that with great aplomb, then we discover facing us the tree of escaping references 2 levels deep. We deal with that with equal engineering genius, but behind it stands another tree, requiring yet another heroic engineering effort to deal with.

With every tree we accumulate more technical debt, but we haven't even figured out whether all of them combined will get us "there" -- because we haven't even figured out where "there" is. :-/

T

-- 
Why ask rhetorical questions? -- JC
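The 1-level vs 2-level ladder can be sketched as follows (illustrative names; the diagnostics mentioned in the comments assume `@safe` code compiled with `-preview=dip1000`):

```d
struct S { int* p; }

// Depth 1: the pointer itself is returned. `return scope` is enough to
// tell the compiler the result may alias the parameter, so under DIP1000
// the caller knows not to let the result outlive the argument.
int* depth1(return scope int* p) { return p; }

// Depth 2: the pointer travels through a struct field, so the lifetime
// has to be tracked across the member access as well - a separate
// extension of the analysis.
int* depth2(return scope S s) { return s.p; }

void main()
{
    int x = 5;
    assert(*depth1(&x) == 5);
    assert(*depth2(S(&x)) == 5);
}
```

Each extra level of indirection needs its own extension of the escape analysis, which is the "next tree behind this one" pattern in a nutshell.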
May 30 2022
parent reply zjh <fqbqrr 163.com> writes:
On Monday, 30 May 2022 at 21:08:20 UTC, H. S. Teoh wrote:

 IOW, decide which edge of the forest we want to end up on in 
 the first place, before we engage our engineering genius to 
 tackle individual trees! :-D
As we say in Chinese: haven't we opened up the `big picture` yet? Indeed, we cannot get lost in the details. We need macro thinking!

For example, who are our `target users`? What are we after? I think, since we have more than `four compilers`, let's serve `compiler programmers`. This is a direction. We can try to explore `here` first, right? After all, the `D` team is professional in the compiler domain.
May 30 2022
next sibling parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Tue, May 31, 2022 at 12:31:48AM +0000, zjh via Digitalmars-d wrote:
 On Monday, 30 May 2022 at 21:08:20 UTC, H. S. Teoh wrote:
 
 IOW, decide which edge of the forest we want to end up on in the first
 place, before we engage our engineering genius to tackle individual
 trees! :-D
[...]
 As we say in Chinese: haven't we opened up the `big picture` yet?
 Indeed, we cannot get lost in the details.
 We need macro thinking!
 For example, who are our `target users`? What are we after?
 I think, since we have more than `four compilers`, let's serve
 `compiler programmers`.
 This is a direction. We can try to explore `here` first, right? After
 all, the `D` team is professional in the compiler domain.
Details are definitely needed -- eventually. I don't agree that we can just care for the "big picture" and neglect the details. But neither can we have details without the big picture: that also gets us nowhere. The details must be guided by the big picture, so until the big picture becomes clear, worrying about the details won't get us very far.

More pertinently, the "forest" in deadalnix's post is: what approach do we want to take in terms of memory management? We can either embrace the GC and take steps to make things better (add dataflow analysis to eliminate unnecessary allocations, etc.), or we can decide we want Rust-style ownership tracking. Or, if we're insanely ambitious, both (but that would require an overarching *detailed* design with the necessary language theory / type theory foundations, to ensure that we aren't gonna end up with a dud).

But taking things "one step at a time" is clearly not working right now. You need an overall direction first, then "one step at a time" would get you there eventually. But "one step at a time" without any overall plan means we'll end up walking in circles and getting nowhere.

T

-- 
They say that "guns don't kill people, people kill people." Well I think the gun helps. If you just stood there and yelled BANG, I don't think you'd kill too many people. -- Eddie Izzard, Dressed to Kill
May 30 2022
parent zjh <fqbqrr 163.com> writes:
On Tuesday, 31 May 2022 at 00:50:41 UTC, H. S. Teoh wrote:

 You need an overall direction first.
Right.
May 30 2022
prev sibling parent reply zjh <fqbqrr 163.com> writes:
On Tuesday, 31 May 2022 at 00:31:48 UTC, zjh wrote:

 After all, `D` teams are professional in the compiler domain.
We can move towards being the mother of compilers. Let's be the base compiler for compiler lovers: they build compilers on top of `D`. Serving them may be an `excellent direction`. Similar to `llvm`, but we have a working `compiler`.
May 30 2022
parent monkyyy <crazymonkyyy gmail.com> writes:
On Tuesday, 31 May 2022 at 00:51:02 UTC, zjh wrote:
 On Tuesday, 31 May 2022 at 00:31:48 UTC, zjh wrote:

 After all, `D` teams are professional in the compiler domain.
 We can move towards being the mother of compilers. Let's be the base compiler for compiler lovers: they build compilers on top of `D`. Serving them may be an `excellent direction`. Similar to `llvm`, but we have a working `compiler`.
But why? I don't exactly think there's a shortage of languages that shouldn't exist; and in what way does D handle string parsing better than everyone else, or whatever?
May 30 2022