
digitalmars.D - The D standard library is built on GC, is that a negative or positive?

reply thebluepandabear <therealbluepandabear protonmail.com> writes:
Hello,

I was speaking to one of my friends on D language and he spoke 
about how he doesn't like D language due to the fact that its 
standard library is built on top of GC (garbage collection).

He said that if he doesn't want to implement GC he misses out on 
the standard library, which for him is a big disadvantage.

Does this claim have merit? I am not far enough into learning D, 
so I haven't touched GC stuff yet, but I am curious what the D 
community has to say about this issue.
Dec 12 2022
next sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
On 13/12/2022 8:11 PM, thebluepandabear wrote:
 Hello,
 
 I was speaking to one of my friends on D language and he spoke about how 
 he doesn't like D language due to the fact that its standard library is 
 built on top of GC (garbage collection).
 
 He said that if he doesn't want to implement GC he misses out on the 
 standard library, which for him is a big disadvantage.
 
 Does this claim have merit? I am not far enough into learning D, so I 
 haven't touched GC stuff yet, but I am curious what the D community has 
 to say about this issue.
No.

Memory and lifetime management are two very difficult problems to solve, and extremely easy to get wrong in a major way.

Garbage collectors are by far the easiest and most common solution in the literature for solving these sets of problems in a way that is very unlikely to cause you issues.

In D the GC is merely a library that you can call into, and in turn control. If you want to stop it collecting until a better time, you can. It's just one function call away.

As someone who has studied lock-free concurrent data structures, GCs, and memory allocators, and who has what amounts to a standard library in -betterC with his own memory allocators and locks using reference counting: use locks, embrace the GC. They solve real problems with fewer issues.
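A minimal sketch of that "one function call away" control, using the `core.memory.GC` API (the function name and surrounding code are illustrative):

```d
import core.memory : GC;

void latencySensitiveSection()
{
    GC.disable();   // suspend automatic collections (the GC may still collect
                    // if it completely runs out of memory)

    // ... latency-sensitive work that should not be interrupted ...

    GC.enable();    // allow collections again
    GC.collect();   // and optionally trigger one now, at a time of our choosing
}
```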
Dec 12 2022
prev sibling next sibling parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Tuesday, 13 December 2022 at 07:11:34 UTC, thebluepandabear 
wrote:
 He said that if he doesn't want to implement GC he misses out 
 on the standard library, which for him is a big disadvantage.
your friend's assertion is not entirely correct:

module test;
@safe:
@nogc:

import std.container.array;

void main()
{
    auto arr = Array!int(0, 1, 2);
    // fine if you comment out @safe:
    // otherwise: Error: @safe cannot call @system constructor -> std.container.array.Array!
}
Dec 12 2022
prev sibling next sibling parent Dukc <ajieskola gmail.com> writes:
On Tuesday, 13 December 2022 at 07:11:34 UTC, thebluepandabear 
wrote:
 Hello,

 I was speaking to one of my friends on D language and he spoke 
 about how he doesn't like D language due to the fact that its 
 standard library is built on top of GC (garbage collection).

 He said that if he doesn't want to implement GC he misses out 
 on the standard library, which for him is a big disadvantage.

 Does this claim have merit? I am not far enough into learning 
 D, so I haven't touched GC stuff yet, but I am curious what the 
 D community has to say about this issue.
He said he must "implement" the GC? Does this mean he's going to program for some rare target platform where D does not offer a GC out of the box? If so, this is somewhat, but not quite, correct. Parts of the D standard library do work with `-betterC` or an otherwise stripped-down DRuntime to some extent, but that path is quite unstable and often requires workarounds.

On the other hand, if the intention is simply to avoid using the GC, this claim is mostly wrong. A big part of the standard library, probably most of it, works. For example:

```D
auto vectorLength(double[] vec) @safe @nogc
{
    import std.array, std.algorithm, std.math;
    return vec.map!"a*a".sum.sqrt;
}
```

There are still some tasks, like Unicode normalization, that don't work, but almost all of the core functions in `std.algorithm` and `std.range` are GC-free, plus many of the others.
Dec 13 2022
prev sibling next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/13/22 2:11 AM, thebluepandabear wrote:
 Hello,
 
 I was speaking to one of my friends on D language and he spoke about how 
 he doesn't like D language due to the fact that its standard library is 
 built on top of GC (garbage collection).
 
 He said that if he doesn't want to implement GC he misses out on the 
 standard library, which for him is a big disadvantage.
 
 Does this claim have merit? I am not far enough into learning D, so I 
 haven't touched GC stuff yet, but I am curious what the D community has 
 to say about this issue.
 
 
Most pieces do not use the GC except to throw exceptions. This one thing is preventing much of the standard library from being `@nogc`.

I'm not sure what the priority of the DIP1008 implementation is, but if we want to severely lessen the reliance on the GC, this would be a huge step in that direction.

-Steve
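A minimal illustration of that point, assuming nothing beyond the language itself: the `new Exception(...)` needed to throw is a GC allocation, so the compiler rejects the function under `@nogc`.

```d
// Fails to compile: allocating the exception with `new` uses the GC,
// which `@nogc` forbids.
void check(int x) @nogc
{
    if (x < 0)
        throw new Exception("x must be non-negative");
}
```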
Dec 13 2022
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Tue, Dec 13, 2022 at 07:11:34AM +0000, thebluepandabear via Digitalmars-d
wrote:
 Hello,
 
 I was speaking to one of my friends on D language and he spoke about
 how he doesn't like D language due to the fact that its standard
 library is built on top of GC (garbage collection).
 
 He said that if he doesn't want to implement GC he misses out on the
 standard library, which for him is a big disadvantage.
 
 Does this claim have merit? I am not far enough into learning D, so I
 haven't touched GC stuff yet, but I am curious what the D community
 has to say about this issue.
1) No, this claim has no merit. However, I sympathize with the reaction, because that's the reaction I myself had when I first found D online. I came from a strong C/C++ background, got fed up with C++, and was looking for a new language closer to my ideals of what a programming language should be. I stumbled across D, which caught my interest. Then I saw the word "GC" and my knee-jerk reaction was, "what a pity, the rest of the language looks so promising, but GC? No thanks." It took me a while to realize the flaw in my reasoning. Today, I wholeheartedly embrace the GC.

2) Your friend has incomplete/inaccurate information about the standard library being dependent on the GC. A pretty significant chunk of Phobos is actually usable without the GC -- a large part of the range-based stuff (std.range, std.algorithm, etc.), for example. True, some parts are GC-dependent, but you can still get pretty good mileage out of the `@nogc` subset of Phobos.

//

The thing about GC vs. non-GC is that, coming from a C/C++ background, my philosophy was that I must be in control of every detail of my program; I had to know exactly what it does at any given point, especially when it comes to managing memory allocations. The idea being that if I kept my memory tidy (i.e., freed allocated chunks when I was done with them), then there wouldn't be an accumulation of garbage that would cost a lot of time to clean up later. The idea of a big black box called the GC that I don't understand, randomly taking over management of my memory, scared me. What if it triggered a collection at an inconvenient time when performance is critical?

Not an entirely wrong line of reasoning, but manual memory management comes with costs:

a) The biggest cost is the additional mental load it adds to your programming tasks. Once you go beyond your trivial hello-world and add-two-numbers-together type of functions, you have to start thinking about memory management at every turn, every juncture. "My function needs space to sort this list of stuff, hmm, I need to allocate a buffer. How big of a buffer do I need? When should I allocate it? When should I free it? I also need this other scratchpad buffer for caching this other bit of data that I'll need 2 blocks down the function body. Better allocate it too. Oh no, now I have to free it, so both branches of the if-statement have to check the pointer and free it. Oh, and inside this loop too; I can't just short-circuit it by returning from the function, I need an exit block for cleaning up my allocations. Oh, but this function might be called from a performance-critical part of the code! Better not do allocations here, let the caller pass the buffer in. Oh wait, that changes the signature of this function, so I can't put it in the generic table of function pointers to callbacks anymore; I need a control block to store the necessary information. Oh wait, I have to allocate the control block too. Who's gonna free it? When should it be freed?" And on and on it goes. Pretty soon, you find yourself spending an inordinate amount of time and effort fiddling with memory management rather than making progress in the problem domain, i.e., actually solving the problem you set out to solve in the first place.

And worse yet:

b) Your APIs become cluttered with memory management paraphernalia. Instead of only input parameters that are directly related to the problem domain the function is supposed to do work in, you must also include memory-management related stuff.
Like allocators and wrapped pointers -- because nobody can keep track of raw pointers without eventually tripping up, you'd better wrap them in a managed pointer like auto_ptr<> or some ref-counted handle. But should you use auto_ptr<> or ref_counted<> or something else? In a large project, some functions will expect auto_ptr<>, others will expect ref_counted<>, and when you need to put them together, you need to insert additional code for interconverting between your wrapped pointer types. (And you need to take extra care not to screw up the semantics and leak/corrupt memory.) The net result is that memory management paraphernalia percolates throughout your code, polluting every API and demanding extra code for interconverting / gluing disparate memory management conventions together. Extra code that doesn't help you make any progress in your problem domain, but has to be there because of manual memory management.

c) So you went through all of the above trouble because you believed that it would save you from the bogeyman of unwanted GC pauses and keep you in control of the inner workings of your program. But does it really live up to its promises? Not necessarily. If you have a graph of allocated objects, for example, when the last reference to some node in that graph goes out of scope, you have to deallocate the entire graph. The dtor must recursively traverse the entire structure and destruct everything, because after that point you no longer have a reference to the graph, and would leak the memory if you didn't clean up now. And here's the thing: in a sufficiently complex program, (1) you cannot predict the size of this graph -- it's potentially unbounded; and (2) you cannot predict where in the code the last reference will go out of scope (when the refcount goes to 0, if you're using refcounting). The net result is: your program will unpredictably get to a point where it must spend an unbounded amount of time deallocating a large graph of allocated objects. IOW, this is not that much different from the GC having to pause and do a collection at an unpredictable time. So you put in all this effort just to avoid this bogeyman, and lo and behold, you haven't got rid of it at all!

Furthermore, on today's CPU architectures, which have cache hierarchies and memory access prediction units, one very important factor of performance is locality. I.e., if your program accesses memory in a sequential pattern, or within close proximity, it tends to run faster than if it had to successively access multiple random locations in memory. If you manage memory yourself, then when a large graph of objects goes out of scope you're forced to clean it up right there and then -- even if the nodes happen to be widely scattered across memory (because they were allocated at different times in the program and attached to the graph). If you used a GC, however, the GC could change the order in which it scans for garbage in a way that has better cache utility -- because the GC isn't obligated to clean up immediately, but can wait until there's enough garbage that a single sweep would pick up pieces of diverse object graphs that happen to be close to each other in memory, and clean them up in a sequential order so that there are fewer CPU cache misses. Or, to put it succinctly, the GC can sometimes outperform your manual management of memory!

d) Lastly, memory management is hard. Very hard.
So hard that, after decades of industry experience with manual memory management in C/C++, well-known, battle-worn large software projects are still riddled with memory management bugs that lead to crashes and security exploits. Just check the CVE database, for example: an inordinately large proportion of security bugs are related to memory management. Using a GC immediately gets rid of 90% of these issues. (Not 100%, unfortunately, because there are still cases where problems may arise. See: "memory management is hard".) If you don't need to write the code that frees memory, then by definition you cannot introduce bugs while doing so.

This leads us to the advantages of having a GC:

1) It greatly reduces the number of memory-related bugs in your program. It gets rid of an entire class of bugs related to manually managing your allocations.

2) It frees up your mental resources to make progress in your problem domain, instead of endlessly worrying about the nitty-gritty of memory management at every turn. More mental resources available means you can make progress in your problem domain faster, and with lower chances of bugs.

3) Your APIs become cleaner. You no longer need memory management paraphernalia polluting your APIs; your parameters can be restricted to only those that are required for your problem domain and nothing else. Cleaner APIs lead to less boilerplate / glue code for interfacing between APIs that expect different memory management schemes (e.g., converting between auto_ptr<> and ref_counted<> or whatever). Diverse modules become more compatible with each other, and can call each other with less friction. Less friction means shorter development times, fewer bugs, and better maintainability (code without memory management paraphernalia is much easier to read -- and to understand correctly, so that you can make modifications without introducing bugs).

4) In some cases, you may even get better runtime performance than if you manually managed everything.

//

And as a little footnote: D's GC does not run in the background independently of your program's threads; GC collections will NOT trigger unless you're allocating memory and the GC runs out of memory to give you. Meaning that you *do* have some control over GC pauses in your program -- if you want to be sure you have no collections in some piece of code, simply don't do any allocations, and collections won't start. If you're worried that another thread might trigger a collection, you can always bring out the GC.disable() hammer to stop the GC from doing any collections even in the face of continuing allocations. (And then call GC.enable() later when it's safe for collections to run again.) And if you're like me, and you like more control over how things are run in your program, you can even call GC.disable() and then periodically call GC.collect() on your own schedule, at your own convenience. (In one of my D projects, I managed to eke out a 20-25% performance boost just by reducing the frequency of GC collections by running GC.collect on my own schedule.)

//

Also, in those few places in your code where the GC really *does* get in your way, there's @nogc at your disposal. The compiler will statically enforce zero GC usage in such functions, so that you can be sure you won't trigger any collections and you won't make any new GC allocations.

//

So you see, the GC isn't really *that* bad, as if it were a plague that you have to avoid at all costs. It's actually a good helper if you know how to make use of its advantages.
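A minimal sketch of the "run collections on my own schedule" pattern from the footnote above (the loop structure and the collection interval are illustrative, not taken from any particular project):

```d
import core.memory : GC;

void mainLoop()
{
    GC.disable();             // don't let allocations trigger collections mid-frame
    scope(exit) GC.enable();

    foreach (frame; 0 .. 100_000)
    {
        // ... per-frame work that may allocate a little ...

        if (frame % 600 == 0)
            GC.collect();     // collect at a point we choose, e.g. every 600 frames
    }
}
```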
T

-- 
Why waste time reinventing the wheel, when you could be reinventing the engine? -- Damian Conway
Dec 13 2022
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/13/22 8:18 PM, H. S. Teoh wrote:
 On Tue, Dec 13, 2022 at 07:11:34AM +0000, thebluepandabear via Digitalmars-d
wrote:
 Hello,

 I was speaking to one of my friends on D language and he spoke about
 how he doesn't like D language due to the fact that its standard
 library is built on top of GC (garbage collection).

 He said that if he doesn't want to implement GC he misses out on the
 standard library, which for him is a big disadvantage.

 Does this claim have merit? I am not far enough into learning D, so I
 haven't touched GC stuff yet, but I am curious what the D community
 has to say about this issue.
1) No, this claim has no merit. However, I sympathize with the reaction because that's the reaction I myself had when I first found D online. I came from a strong C/C++ background, got fed up with C++ and was looking for a new language closer to my ideals of what a programming language should be. Stumbled across D, which caught my interest. Then I saw the word "GC" and my knee-jerk reaction was, "what a pity, the rest of the language looks so promising, but GC? No thanks." It took me a while to realize the flaw in my reasoning. Today, I wholeheartedly embrace the GC.
So while I, too, don't hate the GC and embrace it, the truth really is that way too much is dependent on the GC. As I alluded to in my previous post:

```d
void main() @nogc
{
   import std.conv;
   auto v = "42".to!int;
}
```

fails to compile. Why? Surely it can't use the GC for converting a string to an integer? Well, it doesn't. But if the string you give it happens to not contain a string representation of an integer, it wants to throw an exception. And the act of allocating and throwing that exception needs the GC.

We really, really need to fix it. It completely cuts the legs out of the answer "if you don't want the GC, use @nogc". If we do fix it, all these questions pretty much just go away. It goes from something like 20% of Phobos being @nogc-compatible to 80%.

-Steve
Dec 13 2022
next sibling parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Tue, Dec 13, 2022 at 08:47:29PM -0500, Steven Schveighoffer via
Digitalmars-d wrote:
 On 12/13/22 8:18 PM, H. S. Teoh wrote:
 On Tue, Dec 13, 2022 at 07:11:34AM +0000, thebluepandabear via Digitalmars-d
wrote:
 Hello,
 
 I was speaking to one of my friends on D language and he spoke about
 how he doesn't like D language due to the fact that its standard
 library is built on top of GC (garbage collection).
 
 He said that if he doesn't want to implement GC he misses out on the
 standard library, which for him is a big disadvantage.
 
 Does this claim have merit? I am not far enough into learning D, so I
 haven't touched GC stuff yet, but I am curious what the D community
 has to say about this issue.
1) No, this claim has no merit. However, I sympathize with the reaction because that's the reaction I myself had when I first found D online. I came from a strong C/C++ background, got fed up with C++ and was looking for a new language closer to my ideals of what a programming language should be. Stumbled across D, which caught my interest. Then I saw the word "GC" and my knee-jerk reaction was, "what a pity, the rest of the language looks so promising, but GC? No thanks." It took me a while to realize the flaw in my reasoning. Today, I wholeheartedly embrace the GC.
So while I, too, don't hate the GC and embrace it, the truth really is that way way too much is dependent on the GC. as I alluded to in my previous post: ```d void main() nogc { import std.conv; auto v = "42".to!int; } ``` fails to compile. Why? Surely, it can't use the GC for converting a string to an integer? Well, it doesn't. But if the string you give it happens to not contain a string representation of an integer, it wants to throw an exception. And the act of allocating and throwing that exception needs the GC. We really really need to fix it. It completely cuts the legs out of the answer "if you don't want the gc, use nogc". If we do fix it, all these questions pretty much just go away. It goes from something like 20% of phobos being nogc-compatible to 80%.
[...]

Hmm. Whatever happened to that proposal for GC-less exceptions? Something about allocating the exception from a static buffer and freeing it in the catch block or something?

T

-- 
Which is worse: ignorance or apathy? Who knows? Who cares? -- Erich Schubert
Dec 13 2022
next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/13/22 9:05 PM, H. S. Teoh wrote:

 Hmm.  Whatever happened to that proposal for GC-less exceptions?
 Something about allocating the exception from a static buffer and
 freeing it in the catch block or something?
https://github.com/dlang/DIPs/blob/master/DIPs/other/DIP1008.md

"Postponed"

-Steve
Dec 13 2022
prev sibling parent reply =?UTF-8?Q?Ali_=c3=87ehreli?= <acehreli yahoo.com> writes:
On 12/13/22 18:05, H. S. Teoh wrote:

 Hmm.  Whatever happened to that proposal for GC-less exceptions?
 Something about allocating the exception from a static buffer and
 freeing it in the catch block or something?
I have an errornogc module here:

  https://code.dlang.org/packages/alid

I hope it still compiles. :)

Ali
Dec 13 2022
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/13/22 9:45 PM, Ali Çehreli wrote:
 On 12/13/22 18:05, H. S. Teoh wrote:
 
  > Hmm.  Whatever happened to that proposal for GC-less exceptions?
  > Something about allocating the exception from a static buffer and
  > freeing it in the catch block or something?
 
 I have an errornogc module here:
 
    https://code.dlang.org/packages/alid
 
 I hope it still compiles. :)
FYI, throwing actually uses the GC unless you override the traceinfo allocator. Yes, even if it's marked `@nogc` (functions marked `@nogc` can still call arbitrary C functions that might allocate using the GC).

-Steve
Dec 13 2022
parent reply Siarhei Siamashka <siarhei.siamashka gmail.com> writes:
On Wednesday, 14 December 2022 at 03:20:13 UTC, Steven 
Schveighoffer wrote:
 On 12/13/22 9:45 PM, Ali Çehreli wrote:
 On 12/13/22 18:05, H. S. Teoh wrote:
 
  > Hmm.  Whatever happened to that proposal for GC-less 
 exceptions?
  > Something about allocating the exception from a static 
 buffer and
  > freeing it in the catch block or something?
 
 I have an errornogc module here:
 
    https://code.dlang.org/packages/alid
 
 I hope it still compiles. :)
FYI, throwing actually uses the GC unless you override the traceinfo allocator. Yes, even if it's marked nogc (functions marked nogc can still call arbitrary C functions that might allocate using the GC).
Hmm, I'm experimenting with the following code for error handling in my small @nogc compatible dub package:

```d
@safe @nogc:

/* @nogc compatible replacement for enforce */
T enforce(string msg, T)(T cond) {
    if (!cond) {
        static immutable e = new Exception(msg);
        throw e;
    }
    return cond;
}

void main() {
    try {
        enforce!"trigger exception by a comparison error"(1 == 2);
    } catch (Exception e) {
        assert(e.msg == "trigger exception by a comparison error");
    }
    enforce!"and now again without try/catch"(1 == 2);
}
```

Does it also GC allocate behind the scenes? Are there any other possible problems with it?
Dec 15 2022
parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Thursday, 15 December 2022 at 13:30:57 UTC, Siarhei Siamashka 
wrote:
 Does it also GC allocate behind the scene?
Is the stack trace correct? If so, yes, it does, but I think here, since it is static immutable, you'll just have an empty stack trace... but I'm not sure. I should try running it, but I'm lazy.
 Are there any other possible problems with it?
Yes, a lot.

1) `immutable` is ignored on exceptions. The implementation casts it away internally as it is thrown, so when it is caught, it is mutable again. If someone tried to actually modify it at this point, it is undefined behavior. (In this case I think the implementation would just retain the edits for another call to the same enforce msg, T arguments.)

2) If another thread kept a reference to the exception (which is done by `core.thread.Thread.join`, for example), they'd possibly find it changing out from under them. Since it is marked `static immutable` this doesn't apply, but if it wasn't `immutable` at construction, it would be a thread-local variable, which is, again, crash city if someone were to `Thread.join`, since it'd be use-after-free by definition (the thread-local block is destroyed along with the thread, and `Thread.join` terminates the thread). BTW, this is also DIP1008's core problem.

However, being a global instance, if two threads were to throw it simultaneously... this ties back to the immutable being cast away, but there's possible corruption to the data there too.

Static exceptions work for simple cases, but there are a number of pitfalls in using them beyond the basics, especially when threads come into play.
Dec 15 2022
next sibling parent Nick Treleaven <nick geany.org> writes:
On Thursday, 15 December 2022 at 13:45:03 UTC, Adam D Ruppe wrote:
 1) `immutable` is ignored on exceptions. The implementation 
 casts it away internally as it is thrown, so when it is caught, 
 it is mutable again. If someone tried to actually modify it at 
 this point, it is undefined behavior. (in this case i think the 
 implementation would just retain the edits for another call to 
 the same enforce msg, T arguments)
I was thinking to add a disabled default constructor for Throwable:

`@disable immutable this();`

But then every subclass of Throwable has to do the same. What if subclasses inherited a disabled constructor? That might solve the problem.
Dec 15 2022
prev sibling parent reply Siarhei Siamashka <siarhei.siamashka gmail.com> writes:
On Thursday, 15 December 2022 at 13:45:03 UTC, Adam D Ruppe wrote:
 1) `immutable` is ignored on exceptions. The implementation 
 casts it away internally as it is thrown, so when it is caught, 
 it is mutable again. If someone tried to actually modify it at 
 this point, it is undefined behavior. (in this case i think the 
 implementation would just retain the edits for another call to 
 the same enforce msg, T arguments)
So I tried to mark this immutable data with VALGRIND_MAKE_MEM_NOACCESS, and valgrind reports just a single write access to it, from here: https://github.com/dlang/dmd/blob/v2.101.1/druntime/src/rt/deh.d#L46

Is it possible to construct the immutable exception data in a way that the `cast(byte*) t !is typeid(t).initializer.ptr` check would fail? What's the purpose of this check?

Another problem is illustrated by the example below:

```D
T enforce(string msg, T)(T cond) {
    if (!cond) {
        static immutable e = new Exception(msg);
        throw e;
    }
    return cond;
}

void main() {
    try {
        enforce!"trigger an exception"(1 == 2);
    } catch (Exception e) {
        assert(0, "if we reach here, then it's probably a compiler bug");
    } catch (immutable Exception e) {
        // the proper place to handle it is here
        assert(e.msg == "trigger an exception");
    }
}
```

If it's an immutable exception, then the compiler should probably only catch it as immutable?
 static exceptions work for simple cases but there's a number of 
 pitfalls using them beyond the basics, especially when threads 
 come into play.
Right now it doesn't look like there are too many problems preventing immutable exceptions from becoming usable.
Dec 15 2022
parent reply Nick Treleaven <nick geany.org> writes:
On Friday, 16 December 2022 at 04:54:21 UTC, Siarhei Siamashka 
wrote:
 Another problem is illustrated by the example below:

 ```D
 T enforce(string msg, T)(T cond) {
     if (!cond) {
         static immutable e = new Exception(msg);
         throw e;
     }
     return cond;
 }

 void main() {
     try {
         enforce!"trigger an exception"(1 == 2);
     } catch (Exception e) {
         assert(0, "if we reach here, then it's probably a 
 compiler bug");
     } catch (immutable Exception e) {
         // the proper place to handle it is here
         assert(e.msg == "trigger an exception");
     }
 }
 ```

 If it's an immutable exception, then the compiler should 
 probably only catch it as immutable?
This pull disallows throwing an immutable object:

https://github.com/dlang/dmd/pull/14706

You can still throw a const object though, which would work for your `enforce`.
Dec 16 2022
next sibling parent reply bauss <jacobbauss gmail.com> writes:
On Friday, 16 December 2022 at 13:25:25 UTC, Nick Treleaven wrote:
 On Friday, 16 December 2022 at 04:54:21 UTC, Siarhei Siamashka 
 wrote:
 Another problem is illustrated by the example below:

 ```D
 T enforce(string msg, T)(T cond) {
     if (!cond) {
         static immutable e = new Exception(msg);
         throw e;
     }
     return cond;
 }

 void main() {
     try {
         enforce!"trigger an exception"(1 == 2);
     } catch (Exception e) {
         assert(0, "if we reach here, then it's probably a 
 compiler bug");
     } catch (immutable Exception e) {
         // the proper place to handle it is here
         assert(e.msg == "trigger an exception");
     }
 }
 ```

 If it's an immutable exception, then the compiler should 
 probably only catch it as immutable?
This pull disallows throwing an immutable object: https://github.com/dlang/dmd/pull/14706 You can still throw a const object though, which would work for your `enforce`.
Personally I think it should always just be implied const, like: `catch (Exception e)` should imply `catch (const e)`; that way both mutable and immutable will work.
Dec 16 2022
next sibling parent reply IGotD- <nise nise.com> writes:
On Friday, 16 December 2022 at 13:56:19 UTC, bauss wrote:
 Personally I think it should always just be implied const like:

 catch (Exception e) should imply catch (const e) that way both 
 mutable and immutable will work.
Shouldn't that be like in C++, where it is recommended to use

```cpp
catch (const MyException& e)
```

so in D it would be

```d
catch (const ref MyException e)
```
Dec 16 2022
parent Tejas <notrealemail gmail.com> writes:
On Friday, 16 December 2022 at 14:05:37 UTC, IGotD- wrote:
 On Friday, 16 December 2022 at 13:56:19 UTC, bauss wrote:
 Personally I think it should always just be implied const like:

 catch (Exception e) should imply catch (const e) that way both 
 mutable and immutable will work.
Shouldn't that be like in C++ where the it is recommended to use ```cpp catch(const MyException& e) ``` so in in D it would be ```cpp catch(const ref MyException e) ```
Classes are reference types in D, so I think the `ref` is implicit
Dec 16 2022
prev sibling parent reply Nick Treleaven <nick geany.org> writes:
On Friday, 16 December 2022 at 13:56:19 UTC, bauss wrote:
 On Friday, 16 December 2022 at 13:25:25 UTC, Nick Treleaven 
 wrote:
 This pull disallows throwing an immutable object:

 https://github.com/dlang/dmd/pull/14706

 You can still throw a const object though, which would work 
 for your `enforce`.
Of course that was changed as immutable can convert to const before throwing, and const shouldn't be violated even if not immutable.
 Personally I think it should always just be implied const like:

 catch (Exception e) should imply catch (const e) that way both 
 mutable and immutable will work.
That solves the throw/catch qualifier mismatch problem, but it can still violate immutable when the runtime sets the stack trace. I'm also not sure whether the runtime may set other fields. If reference-counted exceptions are implemented, that might conflict with const/immutable too (though it could be worked around with a hashtable).
Dec 21 2022
parent Siarhei Siamashka <siarhei.siamashka gmail.com> writes:
On Wednesday, 21 December 2022 at 12:44:18 UTC, Nick Treleaven 
wrote:
 On Friday, 16 December 2022 at 13:56:19 UTC, bauss wrote:
 On Friday, 16 December 2022 at 13:25:25 UTC, Nick Treleaven 
 wrote:
 This pull disallows throwing an immutable object:

 https://github.com/dlang/dmd/pull/14706

 You can still throw a const object though, which would work 
 for your `enforce`.
Of course that was changed as immutable can convert to const before throwing, and const shouldn't be violated even if not immutable.
Using `const` is not undefined behavior, but it's still not safe and not desirable, because we have to hope that the catch blocks never try to modify the received exception object and never let it escape the catch block's scope. This makes `immutable` a much better fit.
 Personally I think it should always just be implied const like:

 catch (Exception e) should imply catch (const e) that way both 
 mutable and immutable will work.
That solves the throw/catch qualifier mismatch problem, but it still can violate immutable when the runtime sets the stack trace.
If a custom Throwable.TraceInfo is already set (see https://forum.dlang.org/post/bvgalazssljjnchqnjso forum.dlang.org as an example), then the runtime does not try to modify it. This holds for a wide range of older versions of the D compiler and druntime, up to and including the most recent v2.101.1 (but future versions may of course change).
 Also not sure if the runtime may set another field.
To the best of my knowledge, it doesn't. Tested by valgrind [in this way](https://forum.dlang.org/post/msjrcymphcdquslfgbrn forum.dlang.org) and also looked at the druntime code.
 If reference counted exceptions are implemented then that might 
 conflict with const/immutable too (though it can be worked 
 around with a hashtable).
The reference counted exceptions are already implemented and can be previewed by using the `-dip1008` command line option for DMD or the `-fpreview=dip1008` command line option for GDC (only GDC 12 or newer). The current implementation [checks the reference counter here](https://github.com/dlang/dmd/blob/v2.101.1/druntime/src/rt/dwarfeh.d#L295-L299) and only increments it if it's non-zero. A zero value of the reference counter means that the exception is GC allocated (this works fine for static immutable exception instances too).

I have no clue if there are any plans to make `-dip1008` available by default any time soon, and its status is "Postponed". I'm not sure if anything other than https://github.com/dlang/dmd/pull/14710 is preventing this from happening. If anyone is aware of any other [DIP1008](https://github.com/dlang/DIPs/blob/master/DIPs/other/DIP1008.md) blockers, then please let me know.
Dec 21 2022
prev sibling parent reply Siarhei Siamashka <siarhei.siamashka gmail.com> writes:
On Friday, 16 December 2022 at 13:25:25 UTC, Nick Treleaven wrote:
 This pull disallows throwing an immutable object:

 https://github.com/dlang/dmd/pull/14706

 You can still throw a const object though, which would work for 
 your `enforce`.
Personally, I would prefer to have this resolved by:

1. Actually placing immutable data in read-only data sections. Why isn't the compiler doing it already?
2. Having any immutable exception just fall through mutable catch blocks without matching any of them.

The modification of immutable exception data can be suppressed by defining a custom no-op `Throwable.TraceInfo`, something like this:

```D
class EmptyTraceInfo : Throwable.TraceInfo {
    int opApply(scope int delegate(ref const(char[]))) const { return 0; }
    int opApply(scope int delegate(ref size_t, ref const(char[]))) const { return 0; }
    override string toString() const { return "sorry, no backtrace here"; }
}

class ImmutableException : Exception {
    static immutable empty_trace_info = new EmptyTraceInfo;

    @nogc @trusted pure nothrow this(string msg, string file = __FILE__,
                                     size_t line = __LINE__, Throwable nextInChain = null)
    {
        super(msg, file, line, nextInChain);
        info = cast(Throwable.TraceInfo)empty_trace_info;
    }

    @nogc @trusted pure nothrow this(string msg, Throwable nextInChain,
                                     string file = __FILE__, size_t line = __LINE__)
    {
        super(msg, file, line, nextInChain);
        info = cast(Throwable.TraceInfo)empty_trace_info;
    }
}

T enforce(string msg, T)(T cond) {
    if (!cond) {
        static immutable e = new ImmutableException(msg);
        throw e;
    }
    return cond;
}
```

The default implementation is the source of GC allocations itself: https://github.com/dlang/dmd/blob/v2.101.1/druntime/src/rt/deh.d#L13-L21

This way immutable exceptions can work perfectly fine without any rogue GC allocations. But losing backtraces isn't nice, and I'm trying to see what can be done. Maybe druntime can use its per-thread non-GC allocated storage for this: https://github.com/dlang/dmd/blob/v2.101.1/druntime/src/rt/dwarfeh.d#L146-L165 ?

Using `const` instead of `immutable` is just hiding the problem and I don't like this.
Dec 16 2022
parent reply IGotD- <nise nise.com> writes:
On Friday, 16 December 2022 at 15:02:55 UTC, Siarhei Siamashka 
wrote:
 Using `const` instead of `immutable` is just hiding the problem 
 and I don't like this.
Shouldn't the actual implementation of exception handling be hidden from the programmer as much as possible, at least on the receiving end? Exposing too much of it might mean that changing the implementation becomes impossible in the future.

When you catch an exception, is it important for the programmer to know how the exception was thrown, or can this be handled automagically under the hood?
Dec 16 2022
parent Siarhei Siamashka <siarhei.siamashka gmail.com> writes:
On Friday, 16 December 2022 at 15:23:45 UTC, IGotD- wrote:
 On Friday, 16 December 2022 at 15:02:55 UTC, Siarhei Siamashka 
 wrote:
 Using `const` instead of `immutable` is just hiding the 
 problem and I don't like this.
Shouldn't the actual implementation of the exception handling be hidden as much as possible towards the programmer, at least on the receiving end. Exposing it too much might lead to that any change in the implementation might not be possible in the future.
Hiding the actual implementation is good, but hiding a memory corruption bug is bad.

The code from https://forum.dlang.org/post/cmtaeuedmdwxjecpcrjh forum.dlang.org can be successfully compiled by D compilers at least from GDC 9 up to the most recent versions. But what actually happens is that the "immutable" data gets corrupted (the field 'info' is overwritten by the exception handling code from druntime), and also the catch block can receive it in a mutable form and modify it there, or pass it around and modify it somewhere else later. All of this despite the `@safe` attribute. Not to mention the GC allocations despite the `@nogc` attribute, too. This has been known at least since https://issues.dlang.org/show_bug.cgi?id=12118 (mentioned in Nick Treleaven's pull request).

Replacing "immutable" with "const" doesn't change anything on a fundamental level; the hidden corruption is still there. And I'm not happy about simply disallowing throwing immutable exceptions instead of fixing them, without a good replacement ("const" is not a good replacement).
 When you catch an exception, is it then important for the 
 programmer to know how the exception was thrown or can this be 
 done auto magically under the hood?
It's important to know that there are no hidden bugs.
Dec 16 2022
prev sibling next sibling parent reply torhu <torhu yahoo.com> writes:
On Wednesday, 14 December 2022 at 01:47:29 UTC, Steven 
Schveighoffer wrote:
 ```d
 void main()  nogc
 {
    import std.conv;
    auto v = "42".to!int;
 }
 ```
I have been wondering why there isn't a basic variation like this available:

```d
auto i = "42".toOr!int(-1);
auto s = i.toOr!string(null);
```
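Something like it is easy to sketch on top of `std.conv.to`; the name `toOr` and its signature here are just this thread's suggestion, not an existing Phobos API (and note it still isn't `@nogc`, since `to` may allocate the exception internally):

```d
import std.conv : to;

// Return the converted value, or `fallback` if the conversion throws.
T toOr(T, S)(S value, T fallback) nothrow
{
    try
        return value.to!T;
    catch (Exception)
        return fallback;
}

unittest
{
    assert("42".toOr!int(-1) == 42);
    assert("not a number".toOr!int(-1) == -1);
    assert(123.toOr!string(null) == "123");
}
```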
Dec 14 2022
parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Wed, Dec 14, 2022 at 07:38:51PM +0000, torhu via Digitalmars-d wrote:
 On Wednesday, 14 December 2022 at 01:47:29 UTC, Steven Schveighoffer wrote:
 
 ```d
 void main()  nogc
 {
    import std.conv;
    auto v = "42".to!int;
 }
 ```
I have been wondering why there isn't a basic variation like this available: ```d auto i = "42".toOr!int(-1); auto s = i.toOr!string(null); ```
This would be a nice addition to Phobos IMO. Though we should think of a better name for it. :-P

T

-- 
Never step over a puddle, always step around it. Chances are that whatever made it is still dripping.
Dec 14 2022
parent Tejas <notrealemail gmail.com> writes:
On Wednesday, 14 December 2022 at 20:30:39 UTC, H. S. Teoh wrote:
 On Wed, Dec 14, 2022 at 07:38:51PM +0000, torhu via 
 Digitalmars-d wrote:
 On Wednesday, 14 December 2022 at 01:47:29 UTC, Steven 
 Schveighoffer wrote:
 
 ```d
 void main()  nogc
 {
    import std.conv;
    auto v = "42".to!int;
 }
 ```
I have been wondering why there isn't a basic variation like this available: ```d auto i = "42".toOr!int(-1); auto s = i.toOr!string(null); ```
This would be a nice addition to Phobos IMO. Though we should think of a better name for it. :-P T
Maybe we could've enhanced `to` itself, if we had named arguments in the language:

```d
auto c = "42".to!int(error = <whatever value you want>);
```

Or maybe a `result` type would be enough, so no need for this?
Dec 14 2022
prev sibling next sibling parent reply bachmeier <no spam.net> writes:
On Wednesday, 14 December 2022 at 01:47:29 UTC, Steven 
Schveighoffer wrote:

 But if the string you give it happens to not contain a string 
 representation of an integer, it wants to throw an exception. 
 And the act of allocating and throwing that exception needs the 
 GC.

 We really really need to fix it. It completely cuts the legs 
 out of the answer "if you don't want the gc, use  nogc". If we 
 do fix it, all these questions pretty much just go away. It 
 goes from something like 20% of phobos being nogc-compatible to 
 80%.
Is avoiding the GC inside exceptions a problem that needs to be solved? Maybe it is, but I don't think it's common to have a loop with millions of exceptions. Perhaps the issue is that there should be a version of `@nogc` that doesn't care about exceptions. With the current implementation of exceptions, the intersection of "avoiding the GC" and "abusing exceptions" is almost certainly small.
Dec 15 2022
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/15/22 6:04 PM, bachmeier wrote:
 On Wednesday, 14 December 2022 at 01:47:29 UTC, Steven Schveighoffer wrote:
 
 But if the string you give it happens to not contain a string 
 representation of an integer, it wants to throw an exception. And the 
 act of allocating and throwing that exception needs the GC.

 We really really need to fix it. It completely cuts the legs out of 
 the answer "if you don't want the gc, use  nogc". If we do fix it, all 
 these questions pretty much just go away. It goes from something like 
 20% of phobos being nogc-compatible to 80%.
Is avoiding the GC inside exceptions a problem that needs to be solved? Maybe it is, but I don't think it's common to have a loop with millions of exceptions. Perhaps the issue is that there should be a version of nogc that doesn't care about exceptions. With the current implementation of exceptions, the intersection of "avoiding GC" and "abusing exceptions" is almost certainly small.
Why does the quantity of exceptions matter? The point of avoiding the GC is to avoid the collection, which can happen with a single allocation. If you want to avoid the GC for specific code paths, you don't want to say "OK, I guess I can't parse integers in here".

Note that it's also possible to assume the GC *likely* won't get triggered, because an exception is very unlikely. But having a mechanism to ask the compiler to help prove it, which can't be used, is pretty frustrating.

-Steve
Dec 15 2022
parent reply bachmeier <no spam.net> writes:
On Thursday, 15 December 2022 at 23:29:55 UTC, Steven 
Schveighoffer wrote:
 On 12/15/22 6:04 PM, bachmeier wrote:
 On Wednesday, 14 December 2022 at 01:47:29 UTC, Steven 
 Schveighoffer wrote:
 
 But if the string you give it happens to not contain a string 
 representation of an integer, it wants to throw an exception. 
 And the act of allocating and throwing that exception needs 
 the GC.

 We really really need to fix it. It completely cuts the legs 
 out of the answer "if you don't want the gc, use  nogc". If 
 we do fix it, all these questions pretty much just go away. 
 It goes from something like 20% of phobos being 
 nogc-compatible to 80%.
Is avoiding the GC inside exceptions a problem that needs to be solved? Maybe it is, but I don't think it's common to have a loop with millions of exceptions. Perhaps the issue is that there should be a version of nogc that doesn't care about exceptions. With the current implementation of exceptions, the intersection of "avoiding GC" and "abusing exceptions" is almost certainly small.
Why does the quantity of exceptions matter? The point of avoiding the GC is to avoid the collection, which can happen with a single allocation. If you want to avoid the GC for specific code paths, you don't want to say "OK, I guess I can't parse integers in here".
It matters if you expect the quantity to be zero. If you've thoroughly tested your code, and you're confident that the exception isn't relevant, it doesn't matter. It is quite rare that `to!int` would throw an exception in my code.
 Note that it's also possible to assume the GC *likely* won't 
 get triggered, because an exception is very unlikely. But 
 having a mechanism to ask the compiler to help prove it, which 
 can't be used, is pretty frustrating.
Those wanting that could continue to use the current `@nogc`. If they're fine with a one-in-a-million chance of a collection, they don't need that kind of proof, and they could use `@nogc` right now with the 80% of Phobos you have cited.
Dec 15 2022
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/15/22 7:27 PM, bachmeier wrote:
 On Thursday, 15 December 2022 at 23:29:55 UTC, Steven Schveighoffer wrote:
 On 12/15/22 6:04 PM, bachmeier wrote:
 On Wednesday, 14 December 2022 at 01:47:29 UTC, Steven Schveighoffer 
 wrote:

 But if the string you give it happens to not contain a string 
 representation of an integer, it wants to throw an exception. And 
 the act of allocating and throwing that exception needs the GC.

 We really really need to fix it. It completely cuts the legs out of 
 the answer "if you don't want the gc, use  nogc". If we do fix it, 
 all these questions pretty much just go away. It goes from something 
 like 20% of phobos being nogc-compatible to 80%.
Is avoiding the GC inside exceptions a problem that needs to be solved? Maybe it is, but I don't think it's common to have a loop with millions of exceptions. Perhaps the issue is that there should be a version of nogc that doesn't care about exceptions. With the current implementation of exceptions, the intersection of "avoiding GC" and "abusing exceptions" is almost certainly small.
Why does the quantity of exceptions matter? The point of avoiding the GC is to avoid the collection, which can happen with a single allocation. If you want to avoid the GC for specific code paths, you don't want to say "OK, I guess I can't parse integers in here".
It matters if you expect the quantity to be zero. If you've thoroughly tested your code, and you're confident that the exception isn't relevant, it doesn't matter. It is quite rare that `to!int` would throw an exception in my code.
Totally agreed. But one `to!int` inside a big function makes it so the whole function can't be `@nogc`. You might want `@nogc` for other reasons. It just strikes me as limiting that converting a string to an int cancels the ability to use `@nogc` at all.
 
 Note that it's also possible to assume the GC *likely* won't get 
 triggered, because an exception is very unlikely. But having a 
 mechanism to ask the compiler to help prove it, which can't be used, 
 is pretty frustrating.
Those wanting that could continue to use the current nogc. If they're fine with a one in a million chance of a collection, they don't need that kind of proof, and they could use nogc right now with the 80% of Phobos you have cited.
Maybe you misunderstood what I said. 20% of Phobos is usable with `@nogc` (not tested, but that's my expectation). Fixing exceptions so they don't use the GC would be a single change that flips that 20% to 80%.

-Steve
Dec 15 2022
prev sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
On 14/12/2022 2:47 PM, Steven Schveighoffer wrote:
 We really really need to fix it. It completely cuts the legs out of the 
 answer "if you don't want the gc, use  nogc". If we do fix it, all these 
 questions pretty much just go away. It goes from something like 20% of 
 phobos being nogc-compatible to 80%.
*whistles*

https://github.com/rikkimax/DIPs/blob/value_type_exceptions/DIPs/DIP1xxx-RC.md
Dec 16 2022
prev sibling next sibling parent reply cc <cc nevernet.com> writes:
On Tuesday, 13 December 2022 at 07:11:34 UTC, thebluepandabear 
wrote:
 Does this claim have merit? I am not far enough into learning 
 D, so I haven't touched GC stuff yet, but I am curious what the 
 D community has to say about this issue.
I disagree with the majority opinion on this subject. I find D's GC to often be heavily oversold, when it is particularly not applicable to many portions of my use case (game development). It certainly solves certain problems, but it introduces new ones. The emphasis behind the GC mentality seems to be that most (all!) people will never encounter those problems for their purposes and so they should literally just not think about it and trust the GC, until you suddenly can't anymore and the whole thing breaks apart. Alternative strategies do exist obviously, but they're often shoved into the backroom, with the salespeople only leading the customers to them after much grumbling and fumbling with their keys.

How do you instantiate a class object in D using the GC?
```d
new Foo;
```
How do you instantiate one using malloc? Something like:
```d
import core.memory;
import core.stdc.stdlib : malloc, free;
import core.lifetime : emplace;

T NEW(T, Args...)(auto ref Args args) /*@nogc (nope!)*/ if (is(T == class)) {
	enum size = __traits(classInstanceSize, T);
	void* mem = malloc(size);
	scope(failure) free(mem);
	//throw OOMError.get("Out of Memory in NEW!"~T.stringof); // wanna GC-allocate here? use a predefined object? or just ignore?
	return mem !is null ? emplace!T(mem[0..size], args) : null;
}
// and don't forget
void FREE(T)(ref T obj) /*@nogc*/ if (is(T == class)) {
	if (obj is null) return;
	auto mem = cast(void*) obj;
	//debug if (!GC.inFinalizer && GC.addrOf(mem)) return;
	scope(exit) free(mem);
	destroy(obj);
	obj = null;
}
```

To people who are experienced with D, that's par for the course. And people who have already done a good deal of thinking about memory management in performance-intensive scenarios will understand the need to know their language's alternatives to begin with. But showing that to people coming to D as the alternative to what you're supposed to think is the *right* way everyone should use is just not attractive.

It's a contradiction in one of D's core philosophies, IMO: "Solve basic problems and prevent easy bugs that most people walk into without thinking", which sounds like an admirable goal aimed at drawing in and protecting new users. Except then they're given a tool that will just create problems if used in the intended way (not thinking about it) once they get into certain domains of development. The explanation of "Well, obviously you need to think about it if you're going to be doing THAT!" just doesn't mesh with the way it's initially sold.

Pipe dream: Why not `new malloc Foo;`? (Or "deterministic" or something. And then, `new rc Foo;`!) What if, to prevent accidental intermingling, it were a storage class? `malloc Foo mfoo = new Foo; // Error!`. Just thinking out loud. Part of this can already be done by wrapping everything in structs and templates. But that's just more noise!

That said, regarding your specific question, there are numerous parts of the D standard library you can safely use without allocating with the GC (and I do, and still love it), and non-allocating alternatives are often added (e.g. `.join` vs `.joiner`). Though the problem exists that many components can't be *explicitly* `@nogc` (a caveat I find just not worth worrying about anymore; it takes up too much time and effort that could be better spent on the code itself than on solving a trillion compiler errors trying to satisfy every possible aspect and edge case of @nogc-dom).
There is a lot of code you can write in D that, without going over the stdlib with a fine-toothed comb, you can be reasonably sure *will probably never* GC-allocate, even if it's not explicitly `@nogc`, if that's an acceptable tradeoff for the code safety requirements in your use case. std.container.array is good, as previously mentioned. You can build on this and make malloc/ref-counted variations of hashmaps/associative arrays too (if you want to spend the effort on it). I believe there are some third-party libraries up on dub for that already. The operator overloading and syntactic sugar are good enough that everything can look "just like" native D runtime/GC constructs if you want, except for some declarations (but all that is not exactly "out of the box", if we're still thinking of the prospective new user context here).

tl;dr: I don't *hate* the GC. It's great for one&dones. I just wish it wasn't so heavily lauded as The Truth & The Way.
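A short usage sketch for the NEW/FREE helpers above (the `Foo` class here is just for illustration):

```d
class Foo
{
    int x;
    this(int x) { this.x = x; }
}

void useFoo()
{
    auto f = NEW!Foo(42);    // malloc + emplace, no GC allocation
    if (f is null) return;   // NEW returns null when malloc fails
    scope(exit) FREE(f);     // deterministic destroy + free

    assert(f.x == 42);
}
```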
Dec 14 2022
next sibling parent reply Dom DiSc <dominikus scherkl.de> writes:
On Wednesday, 14 December 2022 at 08:01:47 UTC, cc wrote:
 I disagree with the majority opinion on this subject.
 How do you instantiate a class object in D using the GC?
 ```d
 new Foo;
 ```
 How do you instantiate one using malloc?  Something like:
 ```d
 import core.memory;
 import core.stdc.stdlib : malloc, free;
 import core.lifetime : emplace;
 T NEW(T, Args...)(auto ref Args args) /* nogc (nope!)*/ if 
 (is(T == class)) {
 	enum size = __traits(classInstanceSize, T);
 	void* mem = malloc(size);
 	scope(failure) free(mem);
 	//throw OOMError.get("Out of Memory in NEW!"~T.stringof); // 
 wanna GC-allocate here? use a predefined object? or just ignore?
 	return mem !is null ? emplace!T(mem[0..size], args) : null;
 }
 // and don't forget
 void FREE(T)(ref T obj) /* nogc*/ if (is(T == class)) {
 	if (obj is null) return;
 	auto mem = cast(void*) obj;
 	//debug if (!GC.inFinalizer && GC.addrOf(mem)) return;
 	scope(exit) free(mem);
 	destroy(obj);
 	obj = null;
 }
 ```
Yes, but be aware that this kind of stuff is what you would also need to do in C++ to make it safer, but nobody does it because it's so awful.

And in D you almost never need this, because it is sufficient to turn off the GC only in your performance-critical loops. So you get the same performance and the same (or better) memory safety with only a tiny part of the hassle.
Dec 14 2022
parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Wednesday, 14 December 2022 at 09:03:58 UTC, Dom DiSc wrote:
 ....
 And in D you almost never need this, because it is sufficient 
 to turn off the GC only in your performance critical loops.
 So you get the same performance and the same (or better) memory 
 safety with only a tiny part of the hassle.
Yes, people just need to take the GC chill pill.

https://dlang.org/blog/2017/06/16/life-in-the-fast-lane/

In the not-too-distant future, manual memory management will be outlawed.
Dec 14 2022
parent IGotD- <nise nise.com> writes:
On Wednesday, 14 December 2022 at 09:27:54 UTC, 
areYouSureAboutThat wrote:
 Yes, people just need to take the GC chill pill.

 https://dlang.org/blog/2017/06/16/life-in-the-fast-lane/

 In the not-too-distant future, manual memory management will be 
 outlawed.
I'm in GC rehab, trying to distance myself from the GC. When you think about it, tracing GC is one of the craziest algorithms in computer science. Still, it is widely used because it covers all the corner cases. However, the complexity needed to get it all to work is huge.
Dec 14 2022
prev sibling next sibling parent reply matheus <matheus gmail.com> writes:
On Wednesday, 14 December 2022 at 08:01:47 UTC, cc wrote:
 ...
 I disagree with the majority opinion on this subject.  I find 
 D's GC to often be heavily oversold, when it is particularly 
 not applicable to many portions of my use case (game 
 development)...
 ...
I used to write games (personal projects), and my main language, believe it or not, is still old C most of the time. I already have my lib and my way of doing it, so no big deal for me!

But when I tried D for this same thing, I used to do the basic thing: enable the GC, load and instantiate everything for the level, disable the GC and run the game, and just enable it again after the level is over, and I don't remember having much trouble.

For example, missiles/projectiles will be added to a pre-allocated space as they are being fired, but what I see sometimes (in other people's code) is that they are allocating these things in real time, which I don't like. I don't know if this is the problem some people have with the GC, allocating in real time, but I'd avoid it, unless game development changed a lot and people like to do allocation in real time (by the way, I'm not saying you're doing this).

Matheus.
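A tiny sketch of that pre-allocation pattern (the `Projectile` type and the pool size are made up for illustration); firing reuses a slot instead of allocating while the game is running:

```d
struct Projectile { float x, y, vx, vy; bool active; }

struct ProjectilePool
{
    Projectile[1024] slots;   // allocated once, when the level is loaded

    // Reuse a free slot instead of allocating at fire time.
    bool fire(float x, float y, float vx, float vy) @nogc nothrow
    {
        foreach (ref p; slots)
        {
            if (!p.active)
            {
                p = Projectile(x, y, vx, vy, true);
                return true;
            }
        }
        return false;         // pool exhausted: drop the shot (or grow between levels)
    }
}
```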
Dec 14 2022
parent reply cc <cc nevernet.com> writes:
On Wednesday, 14 December 2022 at 10:06:40 UTC, matheus wrote:
 But when I tried D for this same thing, I used to do the basic 
 thing like, enable GC load and instantiate everything for the 
 level, disable GC and run the game, and just enable again after 
 the level is over, and I don't remember having much trouble.
Pre-allocated lists are fine for many cases.  We typically use them for particle engines now, when we can be comfortable with a specific hard limit and the initial resource draw isn't a significant burden.  But in some of our engines we decided we wanted to be able to scale without entity limits, especially for persistent always-online worlds.  Naive porting from reference-counted languages to D's GC led to unacceptable resource usage and performance losses that `GC.disable` couldn't work around.  It doesn't matter if the actual delay caused by a GC pause is minimal, if those delays cause significant recurring hiccups and timing errors.  Allocations and deallocations had to be more deterministic; ultimately, given the need to offload or reschedule memory management in a more distributed way, we decided that working around the GC would be more trouble than just avoiding it in the first place.
Dec 14 2022
next sibling parent Sergey <kornburn yandex.ru> writes:
On Wednesday, 14 December 2022 at 11:12:52 UTC, cc wrote:
 Pre-allocated lists are fine for many cases.  We typically use 
 them for particle engines now, when we can be comfortable with 
 a specific hard limit and the initial resource draw isn't a 
 significant burden.
Is your game available somewhere?
Dec 14 2022
prev sibling parent reply ikod <igor.khasilev gmail.com> writes:
On Wednesday, 14 December 2022 at 11:12:52 UTC, cc wrote:

 couldn't work around.  It doesn't matter if the actual delay 
 caused by a GC pause is minimal, if those delays cause 
 significant recurring hiccups and timing errors.  Allocations 
 and deallocations had to be more deterministic, ultimately it 
 came down to the decision that with the need to offload or 
 reschedule memory management to be more distributed, working 
 around the GC would be more trouble than just avoiding it in 
 the first place.
This is probably a common case when your program runs in a tight loop - like a game, or a webserver under high load. The process just has no spare time to handle GC cleanups (for stop-the-world collectors).
Dec 15 2022
parent reply IGotD- <nise nise.com> writes:
On Thursday, 15 December 2022 at 08:42:22 UTC, ikod wrote:
 Probably this is common case when your program works in tight 
 loop - like game, or webserver under high load. Process just 
 have no spare time to handle GC cleanups (for stop-the-world 
 collectors).
One interesting observation, here in the forum and also in articles, blogs, etc. about the D garbage collector, is how you can speed up memory management. All those ideas and tricks are based on avoiding or circumventing the garbage collector in some way. Then the question is: how good is an algorithm if you are supposed to avoid it all the time?

Also, it doesn't matter if the "GC stop-the-world" duration is very short. It takes time for the OS scheduler to stop all threads, which is probably a significant part of the GC pause. Then, depending on the load, there is also a delay until each thread is started again.
Dec 15 2022
next sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
On 16/12/2022 12:21 AM, IGotD- wrote:
 All those ideas and tricks are based on that you in some way should 
 avoid or circumvent the garbage collector. Then the question is, how 
 good is a computer algorithm if you are supposed to avoid it all the time?
Like all algorithms, if you misuse it, it's going to be bad. Calling into the GC unnecessarily when you care about performance is misuse.
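For example, something like this keeps a hot loop from feeding the GC on every iteration (just a sketch, the names are illustrative): an `Appender` buffer that is cleared and reused, instead of building a fresh string each time.

```d
import std.array : appender;
import std.format : formattedWrite;

void hotLoop()
{
    auto buf = appender!(char[])();       // reusable buffer, keeps its capacity
    foreach (i; 0 .. 1_000_000)
    {
        buf.clear();                      // reuse the storage, no new allocation
        buf.formattedWrite("frame %d", i);
        // ... pass buf.data to whatever needs the text ...
    }
}
```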
Dec 15 2022
prev sibling next sibling parent reply matheus <matheus gmail.com> writes:
On Thursday, 15 December 2022 at 11:21:56 UTC, IGotD- wrote:
 ... Then the question is, how good is a computer algorithm if 
 you are supposed to avoid it all the time?
 ...
To be honest I don't think people are generally bothered by this. I mean, when I write my batch programs to process something, most of the time they process the data in a reasonable time, so I'm OK. The other day I had to write a program to compare the sync between 2 different databases and check the differences over millions of lines. The program ran very fast and gave the expected result in a very reasonable time.

Of course in some areas this may hurt (like games) and then you will need to think of ways to avoid it. But for example, imagine a drawing app: let's say you're filling an area and during this process the GC stops for a second. Will this make so much difference? Yes, some people will notice it, but the majority will not even think about it or know it happened.

Where I work (a big health insurance company in my country), our apps, web apps, etc. are used every day, and nobody is complaining too much about delays in our operations.

Matheus.
Dec 15 2022
parent reply IGotD- <nise nise.com> writes:
On Thursday, 15 December 2022 at 11:56:31 UTC, matheus wrote:
 To be honest I don't think people generally are bothered with 
 this. I mean when I write my batch programs to process 

 or whatever, most of the time it will process some data in a 
 reasonable time so I'm OK.
The majority of applications are so small and have such modest performance requirements that it doesn't matter. It starts to matter when you have web services, games, etc. For example, if you reduce memory consumption, perhaps you don't need to buy/hire that extra infrastructure, which costs money.
 Where I work (Which is a big Health Insurance in my country), 

 day, apps, web apps etc. And nobody is complaining too much 
 about the delay in our operations.
to use several different types of GC, and there might be more of them in the future.

I'm amazed at how much tracing GC is used in the computer industry despite its complexity and drawbacks. When will the time come when tracing GC no longer scales with increasing memory consumption?
Dec 15 2022
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Thursday, 15 December 2022 at 12:36:59 UTC, IGotD- wrote:
 I'm amazed how much the tracing GC is used in the computer 
 industry despite its complexity and drawbacks. When will the 
 time come when the tracing GC no longer scales with increasing 
 memory consumption?
We will most likely see coprocessors working on regular memory, and, like now, people will use no-GC libraries for the low-latency heavy lifting. Another hardware solution would be to let cores have local memory; then people will switch to pushing data as values through a pipeline and avoid references. That would scale well, but requires developers to switch to a new, more "functional" mindset.

Anyway, concurrent collectors that don't stop the world are not as bad if you add hardware capabilities that prevent the caches from being flushed and the data bus from being saturated. You could have a separate hardware cache for book-keeping tasks and just slowly scan memory in the background rather than the hit-and-run approach.

Besides, most applications don't really need the full capabilities of modern hardware, so the users don't mind applications being slow, as they don't "understand" that they are actually slow… :-) That's what makes JavaScript and Dart competitive for application development.
Dec 15 2022
parent reply IGotD- <nise nise.com> writes:
On Thursday, 15 December 2022 at 12:55:22 UTC, Ola Fosheim 
Grøstad wrote:
 Anyway, concurrent collectors that don't stop the world are not 
 as bad if you add hardware capabilities that prevents the 
 caches for being flushed and the data-bus from being saturated. 
 You could have a separate hardware-cache for book-keeping tasks 
 and just slowly scan memory in the background rather than the 
 hit-and-run approach.
I see a concurrent GC (or at least something that doesn't stop other threads) as the only choice. The more I look into the D garbage collector, the more surprised I become. In order to get a tracing GC to work you need to:

1. Scan the stack from the stack pointer and upwards.

2. Scan all the registers, making the algorithm non-portable.

3. Scan all the globally allocated memory.

4. Scan all the thread-local storage. This usually lives on the stack for the libraries that are loaded at startup. This information must be read out before or after the thread starts.

5. For the threads that were interrupted, the context must be read out, including the position of the stack pointer.

6. All this requires that the D runtime keep track of all threads, something you otherwise don't need since the kernel or other base runtime does this for you.

7. You need metadata for the tracing graph. D uses typeinfo in order to reduce the scanning of objects, which is nice but at the same time increases the complexity.

8. You need functionality that suspends execution, and that on all CPUs. In order to do that, all the CPUs need to be interrupted. This operation itself takes time, as it needs to save the context, go through the interrupt service routine, probably do something in the scheduler, and also report back to the requesting CPU. The code in D seems to stop the threads one by one, so this operation itself takes time - more if there are more threads running on other CPUs. After the GC is done with its work, it's the same story again, but resuming the threads. A "short GC pause" is really a relative term.

9. There is probably more that I don't know about that would surprise me, not in a positive way I'm afraid.

I must give the D project credit for having the stamina to implement such infrastructure in the standard library. I would have given up and looked elsewhere. I'm also surprised that the operating systems offer such particular interfaces to make this possible. It's not completely straightforward, and suspending all threads requires quite different approaches on each system. What if they didn't?

So if the question is whether a GC built into the standard library is positive or negative, then my answer is that in the case of D it certainly increases the complexity quite a lot. What if there was a simpler path?
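One practical consequence of that bookkeeping: the collector can only scan what it knows about, so memory allocated outside the GC that holds pointers into GC memory has to be registered manually. A small sketch using `core.memory.GC.addRange`/`removeRange` (the `Node` type and count are made up):

```d
import core.memory : GC;
import core.stdc.stdlib : malloc, free;

struct Node { Node* next; string label; }   // `label` may point into GC memory

void example()
{
    enum count = 64;
    auto nodes = cast(Node*) malloc(count * Node.sizeof);

    // Ask the GC to scan this block for pointers to GC-owned data; otherwise
    // anything reachable only through it could be collected out from under us.
    GC.addRange(nodes, count * Node.sizeof);

    // ... use nodes ...

    GC.removeRange(nodes);
    free(nodes);
}
```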
Dec 17 2022
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Saturday, 17 December 2022 at 18:44:21 UTC, IGotD- wrote:
 I see concurrent GC (or at least something that doesn't stop 
 other threads) as the only choice. The more I look into the D 
 garbage collector the more surprised I become.
I wouldn't say it is the only choice. If D were willing to make a bold move towards modelling software as short-lived actors/tasks, then you could use arenas that typically don't collect but are wiped out when the task is done (collection only happens if you run out of memory).

Right now I am personally more interested in Carbon, which will be 100% GC free. I like that they take a stance on memory management overhead. There are also interesting things going on in the research field with separation logic and similar formalizations - still probably 20 years until it will be generally useful, but things are moving, IMO. There is also some interest in memory pools/arenas in the C++ community, I think, and there seems to be interest at Apple in integrating C++ and Swift, so maybe pairing one managed and one unmanaged language is the way forward, short term. So overall, the global picture is nuanced.
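For what it's worth, an arena of that sort is already expressible with Phobos building blocks; a minimal sketch, with made-up `Event`/`processTask` names:

```d
import std.experimental.allocator : make;
import std.experimental.allocator.building_blocks.region : Region;
import std.experimental.allocator.mallocator : Mallocator;

struct Event { int id; double payload; }

void processTask()
{
    // one arena per task: allocation is a cheap pointer bump, no tracing
    auto arena = Region!Mallocator(1024 * 1024);

    auto e = arena.make!Event(42, 3.14);   // no GC involved
    // ... work with e ...

    arena.deallocateAll();                 // the whole arena goes at once
}
```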
 1. Scan the stack from the stack pointer and upwards.
 2. Scan all the registers, making the algorithm non portable.
 3. Scan all the globally allocated memory
 4. Scan all the thread local storage. This usually live on the 
 stack for the libraries that are loaded at startup. This 
 information must be read out before or after the thread starts.
 5. The threads that were interrupted, the context must be read 
 out including the position of the stack pointer.
Application-specific collectors don't have to do all this work, as they will only collect at specific points where the state is known and where the number of live objects is minimal. This is something one should be able to use verification technologies for - basically proving that there are no stray references hanging around when you start collecting. Rust is only the beginning, I think.
 7. You need metadata for the tracing graph. D uses typeinfo in 
 order to reduce the scanning of objects, this is nice but at 
 the same increases the complexity.
You could probably generate a hardcoded "optimal" scanner statically at linktime after LTO, but that only makes it somewhat faster, and not really better.
 making this possible. It's not completely straight forward and 
 suspending all threads require quite different approaches on 
 each system. What if they didn't?
Suspending all threads, for whatever reason, is a terrible idea. Just think about Amdahl's law: https://en.wikipedia.org/wiki/Amdahl%27s_law#/media/File:AmdahlsLaw.svg
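To put a rough number on it (my arithmetic, not from the linked article): Amdahl's law gives a speedup of `1 / ((1 - p) + p / n)` for parallel fraction `p` on `n` cores, so even a small serial stop-the-world share caps the gain from adding cores.

```d
double amdahl(double p, double n) { return 1.0 / ((1.0 - p) + p / n); }

void main()
{
    import std.stdio : writefln;
    // if pauses make just 5% of the work serial, 32 cores give only ~12.5x
    writefln("%.1fx", amdahl(0.95, 32));
}
```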
 So the question if GC is built into the standard library is 
 positive or negative, then my answer is that in the case of the 
 D it certainly increase the complexity quite a lot. What if 
 there was a simpler path?
ARC.
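For reference, library-level reference counting already exists in Phobos; a minimal sketch with `std.typecons.RefCounted` (the `Buffer` type is made up):

```d
import std.typecons : RefCounted;
import std.stdio : writeln;

struct Buffer
{
    int handle;
    ~this() { writeln("released ", handle); }  // deterministic, no tracing pass
}

void main()
{
    auto a = RefCounted!Buffer(42);
    {
        auto b = a;   // count goes to 2: just an increment, no GC bookkeeping
    }                 // count back to 1
}                     // count hits 0: the destructor runs right here
```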
Dec 18 2022
parent Tejas <notrealemail gmail.com> writes:
On Sunday, 18 December 2022 at 22:11:21 UTC, Ola Fosheim Grøstad 
wrote:
 [...]
When all is said and done, I think we should try to provide lifetimes as a feature, simply because [C++ might be offering them](https://www.reddit.com/r/cpp/comments/ttw0dl/ruststyle_lifetimes_proposed_in_clang/), and we will need to accommodate that for the sake of C++ interop anyway, or else risk deteriorating our C++ interop story.

I know having a proposal means very little, but I have a feeling that it's a question of when, not if, C++ receives lifetime annotations.
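For what it's worth, D already ships a restricted form of this with DIP1000's `scope`/`return` attributes; a minimal sketch (compile with `-preview=dip1000`):

```d
@safe:

int* leak(scope int* p)
{
    return p;   // error with -preview=dip1000: scope variable `p` may not be returned
}

int* ok(return scope int* p)
{
    return p;   // fine: the return value is declared to carry p's lifetime
}
```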
Dec 18 2022
prev sibling parent Hipreme <msnmancini hotmail.com> writes:
On Thursday, 15 December 2022 at 11:21:56 UTC, IGotD- wrote:
 On Thursday, 15 December 2022 at 08:42:22 UTC, ikod wrote:
 Probably this is common case when your program works in tight 
 loop - like game, or webserver under high load. Process just 
 have no spare time to handle GC cleanups (for stop-the-world 
 collectors).
One interesting observation here in the forum and also in articles, blogs etc about the D garbage collector about how you can speed up memory management. All those ideas and tricks are based on that you in some way should avoid or circumvent the garbage collector. Then the question is, how good is a computer algorithm if you are supposed to avoid it all the time? Also, it doesn't matter if "the GC stop world" duration are very short. It takes time for the OS scheduler to stop all thread which probably is a significant part of the GC pause. Then depending on the load the is also a duration until the thread is started again.
Read the following algorithm:

```d
import std.algorithm.sorting : sort;
import std.stdio : writeln;

void main()
{
    auto myArray = [1, 9, 5, 20, 30];
    foreach (value; myArray)
    {
        myArray.sort();   // re-sorted on every iteration
        writeln(value);
    }
}
```

This is clear misuse. Sorting an array is a heavy task that you must avoid repeating all the time, and when you do it, you will want to cache the result. The same thing happens with the GC. I use the GC carelessly during initialization, but once my game loop is running, I avoid all kinds of allocation.

In a game, for example, I use a text display which needs to concatenate strings and show the result. There are two ways to do it:

1: Will it need to change? Then I use my `@nogc String`.
2: Does it happen only at initialization? Use `string` as anyone would do.

Even then, this is not that important; I have coded a lot of games in Javascript and the GC never made my games slow. In Javascript it is actually impossible to escape the GC. The only reason I'm doing this is so that my engine does not contribute to feeding the GC from user code.
Dec 15 2022
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
Lot of common sense there.

We should publish "how to do it with malloc" so people can figure it out. 
Perhaps you can add it to the D wiki!
Dec 15 2022
parent reply Tejas <notrealemail gmail.com> writes:
On Friday, 16 December 2022 at 02:59:18 UTC, Walter Bright wrote:
 Lot of common sense there.

 We should publish "how to do it with malloc" so people can 
 figure it out. Perhaps you can add it to the D wiki!
It already is:

https://wiki.dlang.org/Memory_Management#Explicit_Class_Instance_Allocation

At best we can ask beginner D tutorials/books to also refer to it. I think Mike's blog post already does this - yes, he does mention it:

https://dlang.org/blog/2017/09/25/go-your-own-way-part-two-the-heap/

Even Ali's book mentions `emplace`, but not in the context of `@nogc`. Ali, if you're reading, perhaps this could be added?
Dec 16 2022
parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/16/2022 5:23 AM, Tejas wrote:
 It already is...
Good!
Dec 16 2022
prev sibling parent Nick Treleaven <nick geany.org> writes:
On Wednesday, 14 December 2022 at 08:01:47 UTC, cc wrote:
 How do you instantiate one using malloc?
Just came up with this, seems to work:

```d
class C { int i; }

void main()
{
    import std.experimental.allocator;
    import std.experimental.allocator.mallocator;

    TypedAllocator!Mallocator a;
    C c = a.make!C;
    c.i++;
    a.dispose(c);
}
```
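The same thing seems doable without the typed wrapper, going through `Mallocator.instance` and the `make`/`dispose` free functions directly (a variant of the above, not tested here):

```d
class C { int i; }

void main()
{
    import std.experimental.allocator : make, dispose;
    import std.experimental.allocator.mallocator : Mallocator;

    C c = Mallocator.instance.make!C;
    c.i++;
    Mallocator.instance.dispose(c);
}
```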
Dec 16 2022
prev sibling parent Guillaume Piolat <first.last spam.org> writes:
On Tuesday, 13 December 2022 at 07:11:34 UTC, thebluepandabear 
wrote:
 Hello,

 I was speaking to one of my friends on D language and he spoke 
 about how he doesn't like D language due to the fact that its 
 standard library is built on top of GC (garbage collection).

 He said that if he doesn't want to implement GC he misses out 
 on the standard library, which for him is a big disadvantage.

 Does this claim have merit? I am not far enough into learning 
 D, so I haven't touched GC stuff yet, but I am curious what the 
 D community has to say about this issue.
It's more of a small ecosystem divide than a hindrance (you can always write more restricted code).

The GC by itself can have an arbitrarily low cost, so it's not the problem. What is a problem is secret use of druntime when you wanted no druntime things going on, often for compatibility reasons - say, WebASM or consoles.

But if you have reasons to avoid the D runtime, or reasons to make a minimal D runtime, you should expect not to be able to use the stdlib! Otherwise, what would the runtime be for?
Dec 14 2022