
digitalmars.D.learn - Why many programmers don't like GC?

reply Marcone <marcone email.com> writes:
I've always heard programmers complain about the Garbage Collector 
(GC). But I never understood why they complain. What's bad about GC?
Jan 13 2021
next sibling parent Imperatorn <johan_forsberg_86 hotmail.com> writes:
On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
 I've always heard programmers complain about Garbage Collector 
 GC. But I never understood why they complain. What's bad about 
 GC?
I would guess because of performance issues.
Jan 13 2021
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Jan 13, 2021 at 06:58:56PM +0000, Marcone via Digitalmars-d-learn wrote:
 I've always heard programmers complain about Garbage Collector GC. But
 I never understood why they complain. What's bad about GC?
It's not merely a technical issue, but also a historical and sociological one. The perception of many people, esp. those with a C/C++ background, is heavily colored by the GC shipped with early versions of Java, which was stop-the-world, inefficient, and associated with random GUI freezes and jerky animations. This initial bad impression continues to persist today, esp. among the C/C++ crowd, despite GC technology having made great advances since those early Java days.

Aside from skewed impressions, there are still these potential concerns with the GC:

(1) Stop-the-world GC pauses (no longer a problem with modern generational collectors, but still applies to D's GC);

(2) Non-deterministic destruction of objects (D's dtors are not even guaranteed to run if it's a GC'd object) -- you cannot predict when an object will be collected;

(3) The GC generally needs more memory than the equivalent manual memory management system.

(1) and (2) can be mitigated in D in various ways, e.g., prefer structs over classes to reduce GC load, and use GC.disable and GC.collect to control when collections happen. Use RAII structs or scope guards for things that need deterministic destruction. (Or use malloc/free yourself.) (3) is generally not a big problem unless you're targeting low-memory devices, in which case you already have to do many things manually anyway, so you generally won't be relying on the GC in the first place.

There's also the matter of ROI: it's *much* easier, and faster, to write GC code than code with manual memory management. For 90% of software, none of the above concerns matter anyway, and you're just wasting your time/energy for essentially no benefit (and lots of disadvantages, like wasting time/energy debugging hard-to-trace pointer bugs and subtle memory corruptions). GC code also tends to be cleaner: your APIs don't have to be polluted with memory-management paraphernalia that tends to percolate all over your code and make it hard to read and harder to maintain. You get to focus your mental resources on making actual progress in your problem domain instead of grappling with memory management issues at every turn. And most of the time, it's Good Enough(tm); the customer won't even notice a difference.

The price of using a GC is far dwarfed by the benefits it brings. Without a GC you're pouring out blood and sweat just to make a little progress in your problem domain, and the whole time you're plagued with pointer bugs, memory corruptions, and all sorts of lovely issues that come with just one tiny mistake in your code but take hours, days, or even months to fix. And your code will be convoluted, your APIs ugly, fragile, and hard to maintain. The ROI simply makes no sense, except for very narrow niches like hard real-time software (where a patient may die if you get an unexpected GC pause while controlling a radiation treatment device) and game engine cores.

But there is no convincing a hard-core GC hater sometimes. You do so at your own risk. :-D

T

-- 
Life begins when you can spend your spare time programming instead of watching television. -- Cal Keegan
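A minimal sketch of the mitigations mentioned above -- GC.disable/GC.collect plus an RAII struct (the TempFile type is invented for illustration):

    import core.memory : GC;
    import std.stdio : writeln;

    // RAII struct: its dtor runs deterministically at end of scope,
    // independent of when (or whether) the GC ever collects.
    struct TempFile
    {
        string path;
        ~this() { writeln("cleaned up: ", path); }
    }

    void main()
    {
        GC.disable();               // no collections during the critical section
        scope (exit) GC.enable();

        {
            auto f = TempFile("/tmp/scratch");
            // ... latency-sensitive work ...
        }                           // f's dtor runs here, deterministically

        GC.collect();               // choose when the pause happens
    }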
Jan 13 2021
parent reply claptrap <clap trap.com> writes:
On Wednesday, 13 January 2021 at 20:06:51 UTC, H. S. Teoh wrote:
 On Wed, Jan 13, 2021 at 06:58:56PM +0000, Marcone via 
 Digitalmars-d-learn wrote:
 I've always heard programmers complain about Garbage Collector 
 GC. But I never understood why they complain. What's bad about 
 GC?
 It's not merely a technical issue, but also a historical and sociological one. [...]
I think you also have to consider that the GC you get with D is not state of the art, and if the opinions expressed on the newsgroup are accurate, it's not likely to get any better. So while you can find examples of high-performance applications, AAA games, or whatever that use GC, I doubt any of them would be feasible with D's GC. And given the design choices D has made as a language, a high-performance GC is not really possible. So the GC is actually a poor fit for D as a language.

It's like a convertible car with a roof that is only safe up to 50 mph; go over that and it's likely to be torn off. So if you want to drive fast you have to put the roof down.
Jan 14 2021
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jan 14, 2021 at 12:36:12PM +0000, claptrap via Digitalmars-d-learn
wrote:
[...]
 I think you also have to consider that the GC you get with D is not
 state of the art, and if the opinions expressed on the newsgroup are
 accurate, it's not likely to get any better. So while you can find
 examples of high performance applications, AAA games, or whatever that
 use GC, I doubt any of them would be feasible with D's GC. And given 
 the design choices D has made as a language, a high performance GC is
 not really possible.
To be fair, the GC *has* improved over the years. Just not as quickly as people would like, but it *has* improved.
 So the GC is actually a poor fit for D as a language. It's like a
 convertible car with a roof that is only safe up to 50 mph, go over
 that and it's likely to be torn off. So if you want to drive fast you 
 have to put the roof down.
How much D code have you actually written and optimized? That analogy is inaccurate.

IME, performance issues caused by the GC are generally localized, and easy to fix by replacing that small part of the code with a bit of manual memory management (you *can* rewrite a function not to use the GC; this isn't the Java straitjacket, y'know!), or standard GC optimization techniques like reducing GC load in hot loops. There's also GC.disable and GC.collect for those times when you want more control over exactly when collection pauses happen.

I wrote a compute-intensive program once, and after some profiling revealed the GC being a bottleneck, I:

(1) Refactored one function called from an inner loop to reuse a buffer instead of allocating a new one each time, thus eliminating a large amount of garbage from small allocations;

(2) Used GC.disable and scheduled my own GC.collect at a slightly reduced frequency.

The result was about 40-50% reduction in runtime, which is close to about a 2x speedup.

Now, you'll argue that had I written this code without a GC in the first place I wouldn't have needed to do all this. However:

(a) Because I *had* the GC, I could write this code in about 1/5 of the time it would've taken me to write it in C++;

(b) The optimization involved only changing a couple of lines of code in 2-3 functions -- a couple of days' work at most -- as opposed to blindly optimizing *every* single danged line of code, 95% of which wouldn't even have had any noticeable effect because they *are not the bottleneck*;

(c) The parts of the code that aren't in the hot path can still freely take advantage of the GC, require minimal effort to write, and be free of the time-consuming bugs that often creep into code that manually manages memory.

As I said, it's an ROI question. I *could* have spent 5x the amount of time and effort to write the perfect, GC-less, macho-hacker-style code, and get maybe about a 1-2% performance improvement. But why would I? It takes 5x less effort to write GC code, and requires only a couple more days of effort to fix GC-related performance issues, vs. 5x the development effort to write the entire program GC-less, and who knows how much longer after that to debug obscure pointer bugs. Life is too short to be squandered chasing down the 1000th double-free and the 20,000th dangling pointer in my life.

A lot of naysayers keep repeating GC performance issues as if it's a black-and-white, all-or-nothing question. It's not. You *can* write high-performance programs even with D's supposedly lousy GC -- just profile the darned thing, and refactor the hotspots to reduce GC load or avoid the GC, *in those parts of the code that actually matter*. You don't have to do this for the *entire* lousy program. The non-hot parts of the code can still GC away like there's no tomorrow, and your customers would hardly notice a difference. This isn't Java, where you have no choice but to use the GC everywhere.

Another example: one day I had some spare time, and wrote fastcsv (http://github.com/quickfur/fastcsv). It's an order of magnitude faster than std.csv, *and it uses the evil GC*. I just applied the same technique: write it with the GC, then profile to find the bottlenecks. The first round of profiling showed that there tended to be a lot of small allocations, which create lots of garbage, which means slow collection cycles. The solution? Use a linear buffer instead of individual allocations for field/row data, and use slices where possible instead of copying the data.
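A sketch of the buffer-reuse and collection-scheduling techniques described above (all names and numbers are invented for illustration):

    import core.memory : GC;

    double[] buf;   // reused across calls instead of allocating fresh arrays

    // Reusing one buffer eliminates the per-call garbage from small
    // allocations; it only grows when a larger size is requested.
    double[] compute(size_t n)
    {
        if (buf.length < n)
            buf.length = n;
        foreach (i, ref x; buf[0 .. n])
            x = i * 0.5;            // stand-in for the real work
        return buf[0 .. n];
    }

    void main()
    {
        GC.disable();               // take control of collection timing
        scope (exit) GC.enable();
        foreach (iter; 0 .. 1_000_000)
        {
            auto r = compute(1024);
            if (iter % 100_000 == 0)
                GC.collect();       // collect on our own schedule
        }
    }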
By reducing GC load and minimizing copying, I got huge boosts in performance -- without throwing out the GC with the bathwater. (And note: it's *because* I can rely on the GC that I can use slices so freely; if I had to use RC or manage this stuff manually, it'd take 5x longer to write and would involve copying data all over the place, which means it'd probably lose out in overall performance.)

But then again, it's futile to argue with people who have already made up their minds about the GC, so meh. Let the bystanders judge for themselves. I'll shut up now. *shrug*

T

-- 
Study gravitation, it's a field with a lot of potential.
Jan 14 2021
next sibling parent reply Imperatorn <johan_forsberg_86 hotmail.com> writes:
On Friday, 15 January 2021 at 07:35:00 UTC, H. S. Teoh wrote:
 On Thu, Jan 14, 2021 at 12:36:12PM +0000, claptrap via 
 Digitalmars-d-learn wrote: [...]
 [...]
 To be fair, the GC *has* improved over the years. Just not as quickly as people would like, but it *has* improved. [...]
Nice strategy, using GC and optimizing where you need it.
Jan 15 2021
parent reply Mike Parker <aldacron gmail.com> writes:
On Friday, 15 January 2021 at 08:49:21 UTC, Imperatorn wrote:

 Nice strategy, using GC and optimizing where you need it.
That's the whole point of being able to mix and match. Anyone avoiding the GC completely is missing it (unless they really, really, must be GC-less).
Jan 15 2021
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 11:11:14 UTC, Mike Parker wrote:
 That's the whole point of being able to mix and match. Anyone 
 avoiding the GC completely is missing it (unless they really, 
 really, must be GC-less).
Has DMD switched to using the GC as the default?
Jan 15 2021
parent reply welkam <wwwelkam gmail.com> writes:
On Friday, 15 January 2021 at 11:28:55 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 15 January 2021 at 11:11:14 UTC, Mike Parker wrote:
 That's the whole point of being able to mix and match. Anyone 
 avoiding the GC completely is missing it (unless they really, 
 really, must be GC-less).
Has DMD switched to using the GC as the default?
No. And it never will. Currently DMD uses a custom allocator for almost everything. It works as follows: allocate a big chunk (1 MB) of memory using malloc. Keep an internal pointer that points to the beginning of unallocated memory. When someone asks for memory, return that pointer and increment the internal pointer by the 16-byte-aligned size of the allocation. That way the new pointer points to unused memory, and everything behind the pointer has been allocated. This simple allocation strategy is called "bump the pointer", and it improved DMD's performance by ~70%.

You can use the GC with the D compiler by passing the -lowmem flag. I didn't measure it, but I heard it can increase compilation time by 3x.

https://github.com/dlang/dmd/blob/master/src/dmd/root/rmem.d#L153
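A sketch of that strategy (simplified illustration, not DMD's actual code -- see the linked rmem.d for the real thing):

    import core.stdc.stdlib : malloc;

    enum CHUNK_SIZE = 1024 * 1024;  // 1 MB chunks, as described above
    ubyte* heapPtr;                 // start of unallocated memory
    size_t heapLeft;                // bytes remaining in the current chunk

    void* allocate(size_t size)
    {
        size = (size + 15) & ~15;   // round up to 16-byte alignment
        if (size > heapLeft)        // chunk exhausted: malloc a fresh one
        {
            const chunk = size > CHUNK_SIZE ? size : CHUNK_SIZE;
            heapPtr = cast(ubyte*) malloc(chunk);
            heapLeft = chunk;
        }
        void* p = heapPtr;          // everything behind heapPtr is allocated
        heapPtr += size;            // bump past the new allocation
        heapLeft -= size;
        return p;                   // never freed; the OS reclaims it at exit
    }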
Jan 15 2021
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 14:24:40 UTC, welkam wrote:
 You can use GC with D compiler by passing -lowmem flag. I didnt 
 measure but I heard it can increase compilation time by 3x.
Thanks for the info. 3x is a lot, though; maybe it could be improved with precise collection, but I assume that would require a rewrite. Making it use automatic garbage collection (of some form) would be an interesting benchmark.
Jan 15 2021
parent reply welkam <wwwelkam gmail.com> writes:
On Friday, 15 January 2021 at 14:35:55 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 15 January 2021 at 14:24:40 UTC, welkam wrote:
 You can use GC with D compiler by passing -lowmem flag. I 
 didnt measure but I heard it can increase compilation time by 
 3x.
Thanks for the info. 3x is a lot
Take it with a grain of salt. I heard it a long time ago, so I might not remember correctly, and I didn't measure it myself.
 improved with precise collection
Precise GC is slower than default GC.
 Making it use automatic garbage collection (of some form) would 
 be an interesting benchmark.
The -lowmem flag replaces all* allocations with GC allocations, so you can benchmark that.

On Friday, 15 January 2021 at 14:59:18 UTC, Ola Fosheim Grøstad wrote:
 I think? Or maybe I am missing something?
A write barrier is a piece of code that is inserted before a write to an [object]. Imagine you have a class that has a pointer to another class. If you want to change that pointer, you need to tell the GC that you changed it, so the GC can do its magic.

https://en.wikipedia.org/wiki/Write_barrier#In_Garbage_collection
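For illustration, a hypothetical sketch of the kind of call a barrier-inserting compiler would emit (D's current GC does not use write barriers; all names here are invented):

    class Node
    {
        Node next;
    }

    // Hypothetical helper a barrier-using GC would rely on, e.g. to record
    // the mutated object in a remembered set / mark its card dirty.
    void gcWriteBarrier(Object owner)
    {
        // tell the collector that `owner` now holds a different pointer
    }

    void setNext(Node a, Node b)
    {
        gcWriteBarrier(a);  // inserted by the compiler, not by the user
        a.next = b;         // the actual pointer write
    }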
 3. Make slices and dynamic arrays RC.
Reference counting needs mutation. How do you define an immutable RC slice that needs to mutate its reference count? That's an unsolved problem in D.
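To make the conflict concrete, a minimal sketch (RcSlice is a hypothetical type):

    // A refcounted slice must mutate its count on copy and destruction:
    struct RcSlice(T)
    {
        T[] data;
        size_t* count;                    // shared reference count

        this(this) { ++*count; }          // copying must bump the count...
        ~this() { if (count) --*count; }  // ...and destruction must drop it
    }

    // The conflict: in `immutable RcSlice!int s`, every field is transitively
    // immutable, so ++*count is illegal -- yet copies of s still have to
    // bump the count somewhere.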
Jan 15 2021
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 15:48:07 UTC, welkam wrote:
 On Friday, 15 January 2021 at 14:35:55 UTC, Ola Fosheim Grøstad 
 wrote:
 improved with precise collection
Precise GC is slower than default GC.
D does not have a fully precise GC. The "precise" collector still scans things conservatively when it cannot be certain. If you combine fully precise collection with static analysis, then you can reduce the number of paths you follow, but it is a lot of work to implement. So it would take a very motivated individual.
 -lowmem flag replaces all* allocations with GC allocations so 
 you can benchmark that
Interesting idea. There are compilers written in other languages that use a GC. It is a nice baseline test, especially since there are not many large, commonly known D programs to do realistic benchmarks with.
 A write barrier is a peace of code that is inserted before a 
 write to an [object].
Not a write to the object, but a modified pointer. The write barrier is invoked when you switch a pointer from one object to another one. Then you mark the object, so you need 2 free bits in each object to use for marking.

But my uncertainty was related to how to optimize away barriers that have no impact on the final collection. It is easy to make mistakes when doing such optimizations. The goal should be to invoke as few barriers as possible through static analysis.
 Reference counting needs mutation. How do you define immutable 
 RC slice that needs to mutate its reference count? That's an 
 unsolved problem in D.
D needs more fine grained immutable, for sure.
Jan 15 2021
prev sibling parent reply IGotD- <nise nise.com> writes:
On Friday, 15 January 2021 at 14:24:40 UTC, welkam wrote:
 No. And it never will. Currently DMD uses a custom allocator 
 for almost everything. It works as follows: allocate a big 
 chunk (1 MB) of memory using malloc. Keep an internal pointer that 
 points to the beginning of unallocated memory. When someone asks 
 for memory, return that pointer and increment the internal pointer 
 by the 16-byte-aligned size of the allocation. That way the new 
 pointer points to unused memory, and everything behind the 
 pointer has been allocated. This simple allocation strategy is 
 called "bump the pointer", and it improved DMD's performance by ~70%.

 You can use the GC with the D compiler by passing the -lowmem flag. 
 I didn't measure it, but I heard it can increase compilation time by 3x.

 https://github.com/dlang/dmd/blob/master/src/dmd/root/rmem.d#L153
Actually druntime uses mmap (Linux) and VirtualAlloc (Windows) to break out more memory. C-lib malloc is an option, but it is not used on most platforms, and it is also very inefficient in terms of wasted memory because of alignment requirements.

Bump the pointer is a very fast way to allocate memory, but what is more interesting is what happens when you return the memory. What does the allocator do with chunks of free memory? Does it put them in a free list? Does it merge chunks? I have a feeling that bump the pointer is not the complete algorithm that D uses, because if that was the only one, D would waste a lot of memory.

As far as I can see, it is simply very difficult to create a completely lockless allocator. Somewhere down the line there will be a lock, even if you don't add one in druntime (the lock will be in the kernel instead, when breaking out memory). Also, merging chunks can be difficult without locks.
Jan 15 2021
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 15:18:31 UTC, IGotD- wrote:
 Bump the pointer is a very fast way to allocate memory but what 
 is more interesting is what happens when you return the memory. 
 What does the allocator do with chunks of free memory? Does it 
 put it in a free list, does it merge chunks? I have a feeling 
 that bump the pointer is not the complete algorithm that D uses 
 because of that was the only one, D would waste a lot of memory.
I don't know what DMD does exactly, but I guess this is called an "arena" or something like that? Objective-C does something similar with its autorelease pool. Basically, you have a point in the call-tree where you know that all work has been done, and then you just reclaim everything that is not marked as in-long-term-use. So you don't do the mark phase; you put the burden of marking the object as in use on the object/reference and just sweep. (Or assume that everything can be freed, which fits well with a compiler that is working in discrete stages.)

Side note: I incidentally wrote a little allocator cache yesterday that at compile time takes a list of types, takes the sizes of those types, sorts them, and builds an array of freelists for those specific sizes. It caches objects that are freed if they match one of the desired sizes (there is a threshold for the length of each freelist; when that is hit, C free() is called). It should be crazy fast too, since I require the free call to provide the type, so the correct free list is found at compile time, not at run time.
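A sketch of what such a cache might look like (hypothetical code, not the poster's actual implementation; this simplified variant keys each freelist on the type itself instead of sorting by size):

    import core.stdc.stdlib : malloc, free;
    import std.meta : staticIndexOf;

    struct AllocCache(Types...)
    {
        enum maxCached = 32;                       // threshold per freelist

        private void*[maxCached][Types.length] lists;
        private size_t[Types.length] counts;

        T* alloc(T)()
        {
            enum i = staticIndexOf!(T, Types);     // list found at compile time
            if (counts[i] > 0)
                return cast(T*) lists[i][--counts[i]];  // reuse a cached block
            return cast(T*) malloc(T.sizeof);
        }

        void release(T)(T* p)
        {
            enum i = staticIndexOf!(T, Types);
            if (counts[i] < maxCached)
                lists[i][counts[i]++] = cast(void*) p;  // keep for reuse
            else
                free(p);                           // threshold hit: call free()
        }
    }

    // usage:
    //   AllocCache!(int, double) cache;
    //   auto p = cache.alloc!double();
    //   cache.release(p);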
Jan 15 2021
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Jan 15, 2021 at 03:18:31PM +0000, IGotD- via Digitalmars-d-learn wrote:
[...]
 Bump the pointer is a very fast way to allocate memory but what is
 more interesting is what happens when you return the memory. What does
 the allocator do with chunks of free memory? Does it put it in a free
 list, does it merge chunks? I have a feeling that bump the pointer is
 not the complete algorithm that D uses, because if that was the only 
 one, D would waste a lot of memory.
DMD *never* frees anything. *That's* part of why it's so fast; it completely drops the complexity of tracking free lists and all of that jazz. That's also why it's a gigantic memory hog that can be a big embarrassment when run on a low-memory system. :-D

This strategy only works for DMD because a compiler is, by its very nature, a transient process: you read in source files, process them, spit out object files and executables, then you exit. Add to that the assumption that most PCs these days have gobs of memory to spare, and this allocation scheme completely eliminates memory management overhead. It doesn't matter that memory is never freed, because once the process exits, the OS reclaims everything anyway.

But such an allocation strategy would not work on anything that has to be long-running, or that recycles a lot of memory such that you wouldn't be able to fit it all in memory if you didn't free any of it.

T

-- 
Don't throw out the baby with the bathwater. Use your hands...
Jan 15 2021
parent reply IGotD- <nise nise.com> writes:
On Friday, 15 January 2021 at 15:50:50 UTC, H. S. Teoh wrote:
 DMD *never* frees anything.  *That's* part of why it's so fast; 
 it completely drops the complexity of tracking free lists and 
 all of that jazz.

 That's also why it's a gigantic memory hog that can be a big 
 embarrassment when run on a low-memory system. :-D

 This strategy only works for DMD because a compiler is, by its 
 very nature, a transient process: you read in source files, 
 process them, spit out object files and executables, then you 
 exit.  Add to that the assumption that most PCs these days have 
 gobs of memory to spare, and this allocation scheme completely 
 eliminates memory management overhead. It doesn't matter that 
 memory is never freed, because once the process exits, the OS 
 reclaims everything anyway.

 But such an allocation strategy would not work on anything that 
 has to be long-running, or that recycles a lot of memory such 
 that you wouldn't be able to fit it all in memory if you didn't 
 free any of it.


 T
Are we talking about the same things here? You mentioned DMD, but I was talking about programs compiled with DMD (or GDC, LDC), not the nature of the DMD compiler in particular.

Bump the pointer and never returning any memory might be acceptable for short-lived programs, but it is totally unacceptable for long-running programs, like the browser you are using right now.

Just to clarify: in a program that is made in D with the default options, will there be absolutely no memory reclamation?
Jan 15 2021
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Jan 15, 2021 at 04:22:59PM +0000, IGotD- via Digitalmars-d-learn wrote:
[...]
 Are we talking about the same things here? You mentioned DMD but I was
 talking about programs compiled with DMD (or GDC, LDC), not the nature
 of the DMD compiler in particular.
 
 Bump the pointer and never return any memory might be acceptable for
 short lived programs but totally unacceptable for long running
 programs, like a browser you are using right now.
 
 Just to clarify, in a program that is made in D with the default
 options, will there be absolutely no memory reclamation?
We're apparently cross-talking here. A default D program uses the GC, as should be obvious by now. DMD itself, however, uses bump-the-pointer (*not* programs it compiles, though!). The two are completely unrelated. T -- Let X be the set not defined by this sentence...
Jan 15 2021
prev sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Friday, 15 January 2021 at 16:22:59 UTC, IGotD- wrote:
 [snip]

 Are we talking about the same things here? You mentioned DMD 
 but I was talking about programs compiled with DMD (or GDC, 
 LDC), not the nature of the DMD compiler in particular.

 Bump the pointer and never return any memory might be acceptable 
 for short lived programs but totally unacceptable for long 
 running programs, like a browser you are using right now.

 Just to clarify, in a program that is made in D with the 
 default options, will there be absolutely no memory reclamation?
You are talking about different things. DMD, as a program, uses the bump-the-pointer allocation strategy. If you compile a D program with DMD that uses new or appends to a dynamic array (or whatever else allocates), then it is using the GC to do that. You can also use malloc or your own custom strategy. The GC will reclaim memory, but there is no guarantee that malloc or a custom allocation strategy will.
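A minimal sketch of that distinction:

    import core.stdc.stdlib : malloc, free;

    void main()
    {
        // GC heap: reclaimed automatically once unreachable.
        auto a = new int[](1000);
        a ~= 42;                    // appending may also allocate from the GC

        // C heap: the GC never reclaims this; you must free it yourself.
        auto p = cast(int*) malloc(1000 * int.sizeof);
        scope (exit) free(p);
    }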
Jan 15 2021
prev sibling parent welkam <wwwelkam gmail.com> writes:
On Friday, 15 January 2021 at 15:18:31 UTC, IGotD- wrote:
 I have a feeling that bump the pointer is not the complete 
 algorithm that D uses, because if that was the only one, D would 
 waste a lot of memory.
Freeing memory is for losers :D

https://issues.dlang.org/show_bug.cgi?id=21248

DMD allocates and never frees.
Jan 15 2021
prev sibling parent reply Guillaume Piolat <first.last gmail.com> writes:
On Friday, 15 January 2021 at 11:11:14 UTC, Mike Parker wrote:
 That's the whole point of being able to mix and match. Anyone 
 avoiding the GC completely is missing it (unless they really, 
 really, must be GC-less).
+1

Mix and match is a different style versus only having a GC, or only having lifetimes for everything. And it's quite awesome as a style, since half the things don't need a well-identified owner.
Jan 15 2021
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 15:50:59 UTC, Guillaume Piolat 
wrote:
 On Friday, 15 January 2021 at 11:11:14 UTC, Mike Parker wrote:
 That's the whole point of being able to mix and match. Anyone 
 avoiding the GC completely is missing it (unless they really, 
 really, must be GC-less).
+1 mix and match is a different style versus only having a GC, or only having lifetimes for everything. And it's quite awesome as a style, since half of things don't need a well-identified owner.
What do you mean by "mix and match"? If it means shutting down the GC after initialization, then it can easily backfire for more complicated software that accidentally calls code that relies on the GC.

Until someone can describe a strategy that works for a full application, e.g. an animation editor or something like that, it is really difficult to understand what is meant by it.
Jan 15 2021
parent reply Guillaume Piolat <first.last gmail.com> writes:
On Friday, 15 January 2021 at 16:21:18 UTC, Ola Fosheim Grøstad 
wrote:

 What do you mean by "mix and match"? If it means shutting down 
 the GC after initialization then it can easily backfire for 
 more complicated software that accidentally calls code that 
 relies on the GC.
I mean: "using GC, unless where it creates problems". Examples below.
 Until someone can describe a strategy that works for a full 
 application, e.g. an animation-editor or something like that, 
 it is really difficult to understand what is meant by it.
Personal examples:

- The game Vibrant uses the GC for some long-lived objects, memory pools for most game entities, and the audio thread has the GC disabled.

- Dplug plugins, before runtime removal, used the GC in the UI but no GC in whatever was called repeatedly, leading to no GC pause in practice. In case an error was made, it would be a GC pause, but not a leak.

The pain point with the mixed approach is adding GC roots when needed. You need a mental model of traceability. It really is quite easy to do: build your app normally, eventually optimize later by using manual memory management.
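One way to hold that line in practice is the @nogc attribute on the hot path, so an accidental GC allocation becomes a compile error instead of a pause (a sketch; the function names are invented):

    // The latency-sensitive callback: @nogc turns any accidental GC
    // allocation into a compile error rather than a runtime pause.
    @nogc nothrow
    void processAudio(float[] buffer)
    {
        foreach (ref sample; buffer)
            sample *= 0.5f;         // pure computation, no allocation
    }

    // UI / long-lived state can still use the GC freely.
    string makeLabel(int preset)
    {
        import std.format : format;
        return format("Preset %d", preset);  // GC-allocated, fine here
    }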
Jan 15 2021
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 16:26:59 UTC, Guillaume Piolat 
wrote:
 Until someone can describe a strategy that works for a full 
 application, e.g. an animation-editor or something like that, 
 it is really difficult to understand what is meant by it.
 Personal examples: - The game Vibrant uses GC for some long-lived objects. Memory pools for most game entities. Audio thread has disabled GC.
But when do you call collect? Do you not create more and more long-lived objects?
 - Dplug plugins before runtime removal used GC in the UI, but 
 no GC in whatever was called repeatedly, leading to no GC pause 
 in practice. In case an error was made, it would be a GC pause, 
 but not a leak.
How do you structure this? Limit GC to one main thread? But an audio plugin GUI is not used frequently, so... hiccups are less noticeable. For a 3D or animation editor, hiccups would be very annoying.
 The pain point with the mixed approach is adding GC roots when 
 needed. You need a mental model of traceability.
Yes. I tend to regret "clever" solutions when getting back to the code months later, because the mental model is no longer easily available.

I think it is better with something simpler, like saying one GC per thread, or ARC across the board unless you use non-ARC pointers, or that only class objects are GC. Basically something that creates a simple mental model.
 It really is quite easy to do: build you app normally, 
 evetually optimize later by using manual memory management.
I understand what you are saying, but it isn't all that much more work to use explicit ownership if all the libraries have support for it. It is a lot more work to add manual memory management if the available libraries don't help you out.
Jan 15 2021
parent reply Guillaume Piolat <first.last gmail.com> writes:
On Friday, 15 January 2021 at 16:37:46 UTC, Ola Fosheim Grøstad 
wrote:
 But when do you call collect? Do you not create more and more 
 long-lived objects?
Calling collect() isn't very good; it's way better to ensure the GC heap is relatively small, hence easy to traverse. You can use -profile=gc for this (noting that things that can't contain pointers, such as ubyte[], scan way faster than void[]).
 How do you structure this? Limit GC to one main thread? But an 
 audio plugin GUI is not used frequently, so... hiccups are less 
 noticeable. For a 3D or animation editor, hiccups would be very 
 annoying.
Yes, but when a hiccup happens you can often trace it back to garbage generation and target it. It's an optimization task.
 I think it is better with something simpler like saying one GC 
 per thread
But then ownership doesn't cross threads, so it can be tricky to keep objects alive when they cross threads. I think that was a problem in Nim.
 It really is quite easy to do: build your app normally, 
 eventually optimize later by using manual memory management.
 I understand what you are saying, but it isn't all that much more work to use explicit ownership if all the libraries have support for it.
But sometimes that ownership is just not interesting. If you are writing a hello world program, no one cares who the "hello world" string belongs to. So the GC is that global owner.
Jan 15 2021
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 18:43:44 UTC, Guillaume Piolat 
wrote:
 Calling collect() isn't very good, it's way better to ensure 
 the GC heap is relatively small, hence easy to traverse.
 You can use -gc=profile for this (noting that things that can't 
 contain pointer, such as ubyte[], scan way faster than void[])
Ok, so what you basically say is that the number of pointers to trace was small, and perhaps also the render thread was not under GC control?
 I think it is better with something simpler like saying one GC 
 per thread
 But then ownership doesn't cross threads, so it can be tricky to keep objects alive when they cross threads. I think that was a problem in Nim.
What I have proposed before is to pin down objects with a ref count when you temporarily hand them to other threads. Then the other thread handles them through a smart pointer which releases the "borrow ref count" on return.

But yes, you "need" some way for other threads to borrow thread-local memory in order to implement async services etc. Then again, I think people who write such service frameworks will be more advanced programmers than those that use them. So I wouldn't say it is a big downside.
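A sketch of that pinning idea (a hypothetical API, not an existing D facility):

    import core.atomic : atomicOp;

    // Hypothetical wrapper: pins a thread-local object while another
    // thread borrows it, so the owning thread's GC won't reclaim it.
    struct Borrowed(T)
    {
        private T* obj;
        private shared(int)* pins;   // borrow count honored by the owner's GC

        @disable this(this);         // a borrow is moved, never copied

        this(T* obj, shared(int)* pins)
        {
            this.obj = obj;
            this.pins = pins;
            atomicOp!"+="(*pins, 1); // take the borrow pin
        }

        ~this()
        {
            if (pins !is null)
                atomicOp!"-="(*pins, 1); // release the pin on return
        }

        ref T get() { return *obj; }
    }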
 But sometimes that ownership is just not interesting. If you 
 are writing a hello world program, no one cares who "hello 
 world" string belongs to. So the GC is that global owner.
I get your viewpoint, but simple types like strings can be handled equally well with RC... If we take the view, which you also stressed, that it is desirable to keep the traceable pointer count down, then maybe making only class objects GC is the better approach.
Jan 15 2021
parent reply Guillaume Piolat <first.last gmail.com> writes:
On Friday, 15 January 2021 at 18:55:27 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 15 January 2021 at 18:43:44 UTC, Guillaume Piolat 
 wrote:
 Calling collect() isn't very good, it's way better to ensure 
 the GC heap is relatively small, hence easy to traverse.
 You can use -gc=profile for this (noting that things that 
 can't contain pointer, such as ubyte[], scan way faster than 
 void[])
Ok, so what you basically say is that the number of pointers to trace was small, and perhaps also the render thread was not under GC control?
A small GC heap is sufficient. There is this blog post where there was a quantitative measure of the sub-1ms D GC heap size. http://www.infognition.com/blog/2014/the_real_problem_with_gc_in_d.html
 200 KB can be scanned/collected in 1 ms.
Since then the D GC has improved in many ways (multicore, precise, faster...) that surprisingly have not been publicized that much; but probably the suggested realtime heap size is in the same order of magnitude. In the 200 KB number above, things that can't contain pointers don't count.
Jan 15 2021
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 19:37:12 UTC, Guillaume Piolat 
wrote:
 A small GC heap is sufficient.
 There is this blog post where there was a quantitative measure 
 of the sub-1ms D GC heap size.
That's ok for a small game, but not for applications that grow over time, or projects where the requirement spec is written (and continually added to) by customers. But for enthusiast projects, that can work. Many open source projects (and also some commercial ones) work ok for small datasets, but tank when you increase the dataset.

So "mix and match" basically means: use it for prototyping, but do-not-rely-on-it-if-you-can-avoid-it.

Switching to ARC looks more attractive; it scales better and the overhead is more evenly distributed. But it probably won't happen.
Jan 15 2021
next sibling parent reply aberba <karabutaworld gmail.com> writes:
On Friday, 15 January 2021 at 19:49:34 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 15 January 2021 at 19:37:12 UTC, Guillaume Piolat 
 wrote:
 A small GC heap is sufficient.
 There is this blog post where there was a quantitative measure 
 of the sub-1ms D GC heap size.
 That's ok for a small game, but not for applications that grow over time or projects where the requirement spec is written (and continually added to) by customers. [...] Switching to ARC looks more attractive, scales better and the overhead is more evenly distributed. But it probably won't happen.
Isn't it more theoretical/imaginary/hypothetical than something really measured from a real-world use case? Almost all large software use cases I've seen used mix and match. (BTW, ARC is also another form of GC.)

Unreal game engine: https://mikelis.net/garbage-collection-in-ue4-a-high-level-overview/

Unity (of course): https://docs.unity3d.com/Manual/UnderstandingAutomaticMemoryManagement.html
 Legends have it that almost every major software project in ANY 
 system language ends up writing custom allocators and 
 containers.
Jan 15 2021
next sibling parent reply aberba <karabutaworld gmail.com> writes:
On Friday, 15 January 2021 at 21:15:29 UTC, aberba wrote:
 On Friday, 15 January 2021 at 19:49:34 UTC, Ola Fosheim Grøstad 
 wrote:
 [...]
 Isn't it more theoretical/imaginary/hypothetical than something really measured from a real-world use case? [...]
 [...]
TL;DR:
 In summation, the garbage collection system is a robust part of 
 Unreal Engine that affords C++ programmers a lot of safety from 
 memory leaks, as well as convenience. With this high-level 
 discussion, I was aiming to introduce the system at a 
 conceptual level, and I hope I have achieved that.
Jan 15 2021
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 21:18:55 UTC, aberba wrote:
 TL;DR:

 In summation, the garbage collection system is a robust part 
 of Unreal Engine that affords C++ programmers a lot of safety 
 from memory leaks, as well as convenience. With this 
 high-level discussion, I was aiming to introduce the system at 
 a conceptual level, and I hope I have achieved that.
What is your conceptual level? You haven't described what it does, and does not do.

But yes, frameworks that allow "scripting" in some shape or form (compiled or not) have to hide internal structures and intricacies and provide some convenience. However, if you write your own from scratch, you can oftentimes build the marking into an existing pass, so you get it for free. Not uncommon for people who write code that modifies graphs.

There is a big difference between writing a dedicated collector for a dedicated graph and a general ownership mechanism for the whole program.
Jan 15 2021
parent reply James Blachly <james.blachly gmail.com> writes:
On 1/15/21 4:55 PM, Ola Fosheim Grøstad wrote:
 On Friday, 15 January 2021 at 21:18:55 UTC, aberba wrote:
 TL;DR:

 In summation, the garbage collection system is a robust part of 
 Unreal Engine that affords C++ programmers a lot of safety from 
 memory leaks, as well as convenience. With this high-level 
 discussion, I was aiming to introduce the system at a conceptual 
 level, and I hope I have achieved that.
 What is your conceptual level? You haven't described what it does, and does not do. [...]
Those were not aberba's words, but the author of the first link, in which one does find a conceptual, high level description of GC.
Jan 17 2021
parent reply Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Monday, 18 January 2021 at 01:41:35 UTC, James Blachly wrote:
 Those were not aberba's words, but the author of the first 
 link, in which one does find a conceptual, high level 
 description of GC.
I read it; it said nothing of relevance to the D collector. That TL;DR is not informative.
Jan 17 2021
parent reply aberba <karabutaworld gmail.com> writes:
On Monday, 18 January 2021 at 07:11:20 UTC, Ola Fosheim Grostad 
wrote:
 On Monday, 18 January 2021 at 01:41:35 UTC, James Blachly wrote:
 Those were not aberba's words, but the author of the first 
 link, in which one does find a conceptual, high level 
 description of GC.
I read it, it said nothing of relevance to the D collector. That is not TLDR informative.
It talks about how the use of a GC is desired even in a game engine like Unreal. Several AAA titles have been built on Unreal.

Apparently you can't convince people who have made up their minds about GC being a bad thing for D. Nevertheless, the GC in D isn't going anywhere. And if the approach for writing nogc code in D doesn't cut it, then I'm not sure what else will.
Jan 18 2021
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 18 January 2021 at 11:43:20 UTC, aberba wrote:
 Nevertheless, GC in D isn't going anywhere. And if the approach 
 for writing nogc code in D doesn't cut it, then I'm not what 
 else will.
As long as that attitude prevails, D will be going nowhere as well.
Jan 18 2021
parent reply aberba <karabutaworld gmail.com> writes:
On Monday, 18 January 2021 at 11:55:46 UTC, Ola Fosheim Grøstad 
wrote:
 On Monday, 18 January 2021 at 11:43:20 UTC, aberba wrote:
 Nevertheless, GC in D isn't going anywhere. And if the 
 approach for writing nogc code in D doesn't cut it, then I'm 
 not what else will.
As long as that attitude prevails, D will be going nowhere as well.
I meant it like this (English is not my native language, so pardon my phrasing if it doesn't sound right): I'm not aware of an alternative way of writing D code aside from what already comes with it by default. If you read the Origin of D book, you would see that the GC was a desired thing when D was designed, probably due to how useful it is for ... as said, 90% or so of software development. So at this point, fighting the GC isn't (in my opinion) the right strategy.

I should also say that I noticed your point about improving the GC in D and making it as optional as possible for things that still rely on GC... ARC, etc. 👍

The OP was about why programmers don't "like" GC. I've been here long enough to see the GC being one of the most re-occurring issues for discussion (probably due to new users coming in). There have been official posts about how D's style of GC isn't like that of fully managed languages, how to write nogc code in D, how to minimize GC use, among others.

Now if none of these work for you (for some special reason), then the long-term strategy might be an alternative runtime and/or std. Which isn't a good answer, I thought... so I didn't include it. If none of these work, then I (as in my personal opinion) don't know what else is available.
Jan 18 2021
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 18 January 2021 at 12:17:24 UTC, aberba wrote:
 If you read the Origin of D book, you would see that the GC was 
 a desire thing when D was designed probably due to how useful 
 it is for ... as said, 90% or so of software development. So at 
 this point, fighting the GC isn't (in my opinion) the right 
 strategy.
Not fighting the GC, but the whole argument about improving it, or mix and match, does not work for most developers looking for a new language. So either there has to be something else, or the language semantics will have to be adjusted to get a better GC.

That is the crux: the GC cannot be significantly improved without some minor adjustments to the language (with some breaking changes). To get there, the majority have to be in favour of it. If 50% of the D community pulls strongly in the opposite direction, then the GC cannot improve in a meaningful way.

Yes, it is natural that the current D population doesn't mind the current GC. Otherwise they would be gone... but then you have to factor in all the people that go through the revolving door and do not stay. If they stayed, the ecosystem would be better. So the fact that they don't... is affecting everyone in a negative way (also those that are happy with the runtime).
 I should also say that I notice your point about improving GC 
 in D and making it more optional as much as possible for things 
 that still rely on GC...ARC, etc. 👍
ARC is a pretty big change, so it will depend on library authors supporting it. It also requires a new intermediate representation, so I don't think it will happen.

Thread-local GC seems most reasonable. As CPUs get more and more threads, it becomes more and more unacceptable to lock all threads during collection.
 The OP was about why programmers don't "like" GC.
Programmers like GC, just not for system-level programming. C++ has had the Boehm GC since the mid-90s; only a tiny percentage use it.

Forget about game engines: many games have game content written in Lua and other scripting languages, and can live with an incremental GC with very little impact on interaction (you could use JavaScript too). The game engine itself usually does not rely on the GC, just the game content part.
 I've been here long enough to see the GC being one of the most 
 re-occurring issues for discussion (probably due to new users 
 coming in).
Yes, they come in, but do they stay? If they don't stay, then our ecosystem suffers from it. D would be in a better position by tracking why people leave and then fixing those concerns (if they are related to the language/runtime).
 There's been official posts about how D's style of GC isn't 
 like that of fully managed languages, how to write nogc code in 
 D, how to minimize GC, among others.
Yes, but people who are well versed in system-level programming know how to do this already; they just don't want more hassle than they get in other languages. And those are also the same people that would write solid libraries and improve the compiler. So not being able to retain those developers is a biiiiig loss.
 Now if none of these work for you (for some special reason), 
 then the long-term strategy might be an alternative runtime and 
 or std. Which isn't a good answer that thought was worth 
 it...so I didn't include that.
Actually, that is a good answer, if it comes with the appropriate language changes, like tagged unions and banning conflicting pointers in unions.

What works for me is not the issue; what IS the direction? Where are we going? That is the real issue. I am perfectly ok with C++20 for low-level programming; I don't need D for that. It is totally OK if the D community decides to make it more high-level and easier to deal with for newbies who come from Python.
 If none of these work, then I (as in my personal opinion), 
 don't know what else is available.
I am ok with any one of these alternatives:

Alternative 1: Adjust the language semantics so that the GC can be improved, and accept some breaking changes.

Alternative 2: Switch focus from being a system-level language to becoming more of a high-level language.

Alternative 3: Implement ARC.

The other alternatives don't really work. Doing what Rust does is now 2 years late; it would take 5 years to get there. Doing what C++ does does not help; why would I use D instead of C++ then?

Just pick a direction, because right now the direction is not very clear, and without one progress becomes impossible. No direction means no progress...
Jan 18 2021
next sibling parent reply Arafel <er.krali gmail.com> writes:
On 18/1/21 13:41, Ola Fosheim Grøstad wrote:
 Yes, it is natural that the current D population doesn't mind the current 
 GC. Otherwise they would be gone... but then you have to factor in all 
 the people that go through the revolving door and do not stay. If they 
 stayed, the ecosystem would be better. So the fact that they don't... is 
 affecting everyone in a negative way (also those that are happy with the 
 runtime).
I must be in the minority here, because one of the reasons why I started using D was precisely because it HAS a GC with full support. I wouldn't even have considered it if it hadn't. For what I usually do (non-critical server-side unattended processing) latency is most obviously not an issue, and for me, not having to worry about memory management and being able to focus on the task at hand is a requirement.

So I think that several key people (in the community) have different, sometimes even contradicting issues they feel very strongly about, and think these are the most important ones, or the ones that move most people.

This is quite OT (perhaps I should have split the topic), but I think that instead of focusing on what people dislike about D, it would help to ask people as well why they DID choose D. In my case, I'm coming from a mostly Java (with a touch of C/C++) background and was looking for:

* C/C++/Java-like syntax
* OOP support (sorry, I'm too used to that ;-) )
* Proper meta-programming / templates (without Java's generics / type erasure)
* Compiled language
* GC (IOW, no worries about memory management)
* Full Linux support
Jan 18 2021
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 18 January 2021 at 13:14:16 UTC, Arafel wrote:
 I must be in the minority here because one of the reasons why I 
 started using D was precisely because it HAS a GC with full 
 support. I wouldn't even have considered it if it hadn't.
You are probably not in a minority among those that use D for productive purposes, basically doing batch-like programming (or with similar requirements). Nothing wrong with making that a focus, but then that would be a direction. There are other (growing) alternatives in that direction too...
Jan 18 2021
prev sibling parent reply aberba <karabutaworld gmail.com> writes:
On Monday, 18 January 2021 at 13:14:16 UTC, Arafel wrote:
 On 18/1/21 13:41, Ola Fosheim Grøstad wrote:
 Yes, it is natural that the current D population don't mind 
 the current GC. Otherwise they would be gone... but then you 
 have to factor in all the people that go through the revolving 
 door and does not stay. If they stayed the eco system would be 
 better. So the fact that they don't... is effecting everyone 
 in a negative way (also those that har happy with the runtime).
 I must be in the minority here because one of the reasons why I started using D was precisely because it HAS a GC with full support. I wouldn't even have considered it if it hadn't. [...]
1). You're not a minority at all. System programming is also vast, so having a GC (especially D's special kind of GC) is nothing alien in system programming. If you look out there, you'd see most of the very important software (for lack of a better word) written uses some form of GC.

2). I'm not sure anyone really knows how many people use D, stay with D after a first encounter, or leave. So we're all guessing with our biases. And I wouldn't look at just the core language as the reason someone will move to D or not. From my experience freelancing, I've come to see that a large portion of clients' decisions stem from other things like familiarity and ecosystem (packages, frameworks, vendor/cloud support, engineering hiring pool, consultants/support availability, tooling, marketing/popularity/fomo/community, etc.)... including things that usually come from the community and stakeholders. For D we don't really have any measure of community size; only looking at the forum can be misleading.

3). Using the GC doesn't mean you're writing scripts. A significant amount of very large D code I've read (including code from long-time users) uses the GC... sometimes partially. So to think or assume the GC is hurting D is an unmeasured bias. I'm not saying those who are looking for nogc don't really matter (even though I hold the opinion that one can write nogc code in D just fine). Dplug is written in D; what else couldn't be?

Also, maybe the GC and other complaints (genuine or not), of which I'm also a culprit, might actually be contributing to people's first impression of D when they visit the forums. I have a strong suspicion of this.
Jan 18 2021
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 18 January 2021 at 15:18:40 UTC, aberba wrote:
 1). You're not a minority at all. System programming is also 
 vast so having a GC (especially D's special kind of GC) is 
 nothing alien in System programming. If you look out there,
This is not true, and you know it. There is nothing special about D's GC; it is just very basic. GC is not usual in system-level programming. Not at all usual. It happens, but it is not the norm.
Jan 18 2021
prev sibling parent reply ddcovery <antoniocabreraperez gmail.com> writes:
On Monday, 18 January 2021 at 15:18:40 UTC, aberba wrote:
 From my experiencing freelancing, I've come to see that a large 
 portion of clients' decision stems from other things like 
 familiarity and ecosystem (packages, frameworks, vendor/cloud 
 support, engineering hiring pool, consultants/support 
 availability, tooling, marketing/popularity/fomo/community, 
 etc)... including things that usually comes from the community 
 and stakeholders. For D we don't really have any measure of 
 community size. Only looking at the forum can be misleading.
I agree. Ecosystem is one of the most important things in making the choice. In particular, when a team of developers needs to engage a new project, they don't just talk about language: they talk about process model, frameworks, libraries... and tooling for solving common development/testing/deployment tasks (i.e.: debugging).

Go and Rust are really clever about their paradigm decisions, and no one (as far as I perceive) is discussing whether GC must be removed from Go or added to Rust: developers see what the language offers them and they decide.

D took its key decisions in the past: of course it is a "generalist" language trying to convince C or C++ developers, but this is really frustrating when there is no way to perform decent debugging on Linux with VSCode (like https://youtu.be/X2tM21nmzfk?t=352) while the community is dedicated to discussing the sex of angels (multiple inheritance, GC/no GC, exceptions/no exceptions, ...).

A good developer friend told me months ago: "If you expect functionalities from other languages in D, you will just get frustrated: adapt to what the language offers you or jump to other options." D is D: take it or not. This language is not the holy grail.

If D is not C++ and you love to work with C++, just work with C++ (or give Rust and its ownership memory model a try if D's pros are not enough for you). If D is not C and you love to work with C, just work with C (or give Go and its GC a try if D is not enough for you). But think about the thousands of experienced developers that were looking for something mature to work with and found that D was not an option.
 Also maybe the GC and other complaints (genuine or not), which 
 I'm also a culprit, might actually be a contributing to 
 people's first impression of D when they visit the forums. I 
 have a strongly suspicious of this.
Me too: I'm absolutely convinced.
Jan 19 2021
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 19 January 2021 at 10:36:13 UTC, ddcovery wrote:
 GC if D is not enough for you), but think about the thousands 
 of experienced developers that where looking for something 
 mature to work with and found that D was not an option.
And that's the point. The vast majority of _experienced_ developers recognize the current GC for what it is: a very basic primitive Boehm-style GC. Which basically is a trip back to the 1990s (or 1970s, whenever they started programming).
Jan 19 2021
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 19 January 2021 at 10:43:45 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 19 January 2021 at 10:36:13 UTC, ddcovery wrote:
 GC if D is not enough for you), but think about the thousands 
 of experienced developers that where looking for something 
 mature to work with and found that D was not an option.
 And that's the point. The vast majority of _experienced_ developers recognize the current GC for what it is: a very basic primitive Boehm-style GC. Which basically is a trip back to the 1990s (or 1970s, whenever they started programming).
And if it isn't clear: stuff like that (and bugs in the type system) is what makes _experienced_ developers do exactly what you said. They do go to Rust, Go and C++ (or Nim or Zig). Go back through the forums and you see plenty of dedicated D users that did exactly that. Lost opportunities. And that is why you don't get the ecosystem you think is needed.

Truly _experienced_ programmers do test a new language before they commit to it. They will recognize the flaws based on prior experience. They do know what typical flaws in language design look like. They have experience with maybe 5-15 languages, so you cannot just throw stupid slogans in their face; they are used to that too.
Jan 19 2021
parent reply ddcovery <antoniocabreraperez gmail.com> writes:
On Tuesday, 19 January 2021 at 11:25:13 UTC, Ola Fosheim Grøstad 
wrote:
 On Tuesday, 19 January 2021 at 10:43:45 UTC, Ola Fosheim 
 Grøstad wrote:
 On Tuesday, 19 January 2021 at 10:36:13 UTC, ddcovery wrote:
 GC if D is not enough for you), but think about the thousands 
 of experienced developers that where looking for something 
 mature to work with and found that D was not an option.
And that's the point. The vast majority of _experienced_ developers recognize the current GC for what it is: a very basic primitive Boehm-style GC. Which basically is a trip back to the 1990s (or 1970s, whenever they started programming).
And if it isn't clear: stuff like that (and bugs in the type system) is what makes _experienced_ developers do exactly what you said. They do go to Rust, Go and C++ (or Nim or Zig). Go back through the forums and you see plenty of dedicated D users that did exactly that. Lost opportunities. And that is why you don't get the eco system you think is needed. Truly _experienced_ programmers do test a new language before they commit to it. They will recognize the flaws based on prior experience. They do know what typical flaws in language design look like. They have experience with maybe 5-15 languages, so you cannot just throw stupid slogans in their face, they are used to that too.
First of all, it's nice to read you.

That you want the GC to work efficiently seems great to me... but at least we agree that D memory management is (and must be) GC based (so I really don't understand your somewhat over-acted answer... maybe I need to read all the threads to understand your discomfort. In any case, accept my apologies if I have bothered you).

Regarding the experience, do we really have to go into that? In this forum there are many people with a university background and between 10 and 30 years of experience... many (except the youngest) have worked professionally with dozens of programming languages, and this is, in my opinion, the reason D attracts us (people with really different profiles and needs but with a lot of experience). In my case, for example, I have not worked manually with memory for decades (the 90s are a long way off), and after my years with C/ASM, Scala, ObjectiveC, Js/Ts, Kotlin, Dart, ... D seems like a great alternative to me (mainly because of the road of computational inefficiency that a lot of them and their most used frameworks are taking).

In any case, your point about being professional when comparing alternatives should always be kept in mind, nothing to add. Thanks for your tips.
Jan 19 2021
next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 19 January 2021 at 13:41:33 UTC, ddcovery wrote:
 That you want GC to work efficiently seems great to me... but 
 at least we agree that D memory management is (and must be) GC 
 based (so I really don't understand your somewhat over-acted 
 answer... maybe I need to read all the threads to understand 
 your discomfort. In any case, accept my forgiveness if I have 
 been able to bother you).
I think we need to understand the landscape around us. What were possible development directions for D 2-5 years ago are now less viable, because it is now a game of (forever) catching up to the other ones (C++/Rust) rather than being something different. 5 years ago I would have argued more for competing with C++/Rust head-to-head. I think that train has left the station. D is on a different track now. So the possible viable directions have shrunk to improving the GC (by some minor language adjustments), or switching to ARC or some other paradigm that gives roughly the same programming experience as D users are accustomed to.
 Regarding the experience, do we really have to go into that? In 
 this forum there are more or less many people with university 
 level and between 10 and 30 years of experience ...
Yes, but would most of those have chosen D over Rust, Zig, Nim, C++20 today? We cannot know for sure, but there is clearly a significant uptake for competing languages that did not exist 10 years ago. So the "recruiting pool" is shrinking, which means losses (people leaving) cost even more today than 10 years ago... So yes, we should absolutely listen to concerns from experienced developers with a Comp.Sci. background if they choose to share them here in the forums. Tooling is a much bigger issue, but language/runtime adjustments are possible if we are willing to take some inconvenience in transition (some breakage).
 In my case, for example, I have not worked manually with memory 
 for decades (the 90s are a long way off, and my years with
Yes, I don't think most people want to work manually with memory, which is why mix-and-match is not sitting well. People who really want to do it fully manually stick to C (they don't even want C++, right?). Anyway, the more cores CPUs get, the more unacceptable blocking threads becomes. But I also suspect thread-pooling will eventually become the only reasonable option as one has to support CPUs with 2-30 cores... difficult to do that with a threading model. So, there are many... issues. Maybe threading is not the best model... Not sure.
Jan 19 2021
prev sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 19 January 2021 at 13:41:33 UTC, ddcovery wrote:
 (so I really don't understand your somewhat over-acted 
 answer... maybe I need to read all the threads to understand 
 your discomfort. In any case, accept my forgiveness if I have 
 been able to bother you).
Forgot to answer; maybe I misinterpreted your statement, and if so I apologise. I felt you were putting too much emphasis on tooling as the dominant issue. If that becomes an excuse not to make some hard choices then we cannot move, because everybody has to be on board for D to take a decisive direction. So, yes, I am not happy if we establish excuses as valid arguments against change! I also don't think tooling or libraries are the core issues. I think a solid language and a solid runtime are sufficient to get the ball rolling, because then you can retain those highly skilled people that will build the tools (over time). If people do not stay for a long time then you get 50% of a tool built, then it goes into the graveyard. Then another person builds another 50% solution, then it goes into the graveyard... D has a very large graveyard at this point of very interesting projects that only got to the 50-80% mark... then the authors left. Anyway, commercial quality tooling is expensive. Even Google gave up on building their own IDE for Dart and left it to JetBrains.
Jan 19 2021
prev sibling parent reply Imperatorn <johan_forsberg_86 hotmail.com> writes:
On Monday, 18 January 2021 at 12:41:31 UTC, Ola Fosheim Grøstad 
wrote:
 On Monday, 18 January 2021 at 12:17:24 UTC, aberba wrote:
 [...]
Not fighting the GC, but the whole argument about improving it, or mix and match, does not work for most developers looking for a new language. So either there has to be something else, or the language semantics will have to be adjusted to get a better GC. That is the crux: the GC cannot be significantly improved without some minor adjustments to the language (with some breaking changes). [...]
What adjustments to the language would be needed in your opinion?
Jan 18 2021
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 19 January 2021 at 06:14:52 UTC, Imperatorn wrote:
 What adjustments to the language would be needed in your 
 opinion?
I've mentioned them in connection with various approaches I've suggested. It depends on what area you want to improve. In short (off the top of my head, maybe more is needed):

Precise tracing:
- The compiler currently cannot know what a tracing pointer points to: unions, casts (see the sketch below).
- The GC traces many paths that never lead to ownership. A non-tracing pointer type is needed.

ARC:
- There is no knowledge of ownership, so ARC cannot be added. The compiler needs to know.

Single-threaded GC:
- The compiler does not know when a thread is created? So what are shared semantics?
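To make the unions point concrete, here is a minimal sketch (Slot is a made-up type, not from any library):

// Inside a union the compiler cannot tell whether the word currently
// holds a traced pointer or plain bits, so a precise collector cannot
// be exact here and must fall back to conservative scanning.
union Slot
{
    void*  ptr;    // sometimes a real GC pointer...
    size_t cookie; // ...sometimes an integer that merely looks like one
}

void main()
{
    Slot s;
    s.cookie = 0xDEAD_BEEF; // a "false pointer" the GC may still trace
}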
Jan 19 2021
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Jan 18, 2021 at 11:43:20AM +0000, aberba via Digitalmars-d-learn wrote:
[...]
 It talks about how the use of GC is desired even in a game engine like
 Unreal.  Several AAA title's have been built on Unreal.
 
 Apparently you can't convince people who have made up their mind about
 GC being a bad thing for D.
'Tis what I've been saying.
 Nevertheless, GC in D isn't going anywhere. And if the approach for
 writing @nogc code in D doesn't cut it, then I'm not sure what else will.
Nothing ever will. It's exactly like Walter has always said: X (substitute any of the usual complaints for X, like the GC) isn't the real problem. X is just the excuse. You can pour blood and sweat into "fixing" X, but that will not convince the naysayers. They will just move on to Y. You can proceed to fix Y, but they will just move on to Z. It's a waste of time bending over backwards to please non-customers who have already made up their minds and will never be customers. Instead, we should be improving life for the existing customers. Like improve the docs, improve dub, fix regressions, etc.. T -- It is widely believed that reinventing the wheel is a waste of time; but I disagree: without wheel reinventers, we would be still be stuck with wooden horse-cart wheels.
Jan 18 2021
prev sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 21:15:29 UTC, aberba wrote:
 Isn't it more theoretical/imaginary/hypothetical than something 
 really measured from a real-world use case? Almost all large 
 software use cases I've seen used mix and match.
No?! Chrome has a garbage collector because JavaScript acquires resources in a somewhat chaotic manner, but they have fine-tuned it and only call it when the call stack is short. High quality game engines have similarly fine-tuned collection, and not really a big sweeping conservative scan that locks down threads.
 (BTW ARC is also another form of GC)
By GC in this thread we speak of tracing GC. Generally, in informal contexts GC always means tracing GC, even among academics.
 Legends have it that almost every major software project in ANY 
 system language ends up writing custom allocators and 
 containers.
Containers, certainly; allocators, sometimes. But that is not necessarily related to handling ownership. You can write your own allocator and still rely on a standard ownership mechanism.
Jan 15 2021
prev sibling parent reply Guillaume Piolat <first.last gmail.com> writes:
On Friday, 15 January 2021 at 19:49:34 UTC, Ola Fosheim Grøstad 
wrote:
 Many open source projects (and also some commercial ones) work 
 ok for small datasets, but tank when you increase the dataset. 
 So "match and mix" basically means use it for prototyping, but 
 do-not-rely-on-it-if-you-can-avoid-it.
It's certainly true that in team dynamics, without any reward, efficiency can fall victim to a tragedy of the commons. Well, any software invariant is harder to uphold if the shareholders don't care (be it "being fast", or "being correct", or other invariants).
Jan 15 2021
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Saturday, 16 January 2021 at 00:20:16 UTC, Guillaume Piolat 
wrote:
 It's certainly true that in team dynamics, without any reward, 
 efficiency can be victim to a tragedy of commons.

 Well, any software invariant is harder to hold if the 
 shareholders don't care.
 (be it "being fast", or "being correct", or other invariants).
Yes, although for Open Source I think the "mental model" you talked about is more of an issue. How many people working on DMD have a good mental model of it? It is a bit easier for programs like Gimp that can be "plugin" style. I guess Phobos is also "plugin" style, so it is easier to improve Phobos than DMD, because of the "mental model" issue. Maybe Open Source projects should be designed more for simple mental models (with "plugins") than for high throughput. Maybe we can have languages that are better for Open Source by making it easier to write extensions of the software with only local impacts. Maybe it would be better for DMD to move away from "thread local" thinking and instead have a thread pool and stackless actors, then tie local non-incremental garbage collection to actors. Useful for application development and servers, but not so useful for audio plugins. So, you would probably not want it...
Jan 17 2021
prev sibling next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 07:35:00 UTC, H. S. Teoh wrote:
 To be fair, the GC *has* improved over the years.  Just not as 
 quickly as people would like, but it *has* improved.
It cannot improve enough as a global collector without write barriers. No language has been able to do this. Therefore, D cannot do it. Precise collection only helps when you have few pointers to trace.
 improvement. But why would I?  It takes 5x less effort to write 
 GC code, and requires only a couple more days of effort to fix
That's like saying it takes 5x more time to write code in Swift than D. That is not at all reasonable. Tracing GC is primarily useful when you have many small long-lived objects with unclear ownership and cyclic references that are difficult to break with weak pointers. In those cases it is invaluable, but most well-designed programs have more tree-like structures and clear ownership.
 after that to debug obscure pointer bugs.  Life is too short to 
 be squandered chasing down the 1000th double-free and the 
 20,000th dangling pointer in my life.
That has nothing to do with a tracing GC... Cyclic references is the only significant problem a tracing GC addresses compared to other solutions.
 A lot of naysayers keep repeating GC performance issues as if 
 it's a black-and-white, all-or-nothing question.  It's not.  
 You *can* write high-performance programs even with D's 
 supposedly lousy GC -- just profile the darned thing, and
There are two main problems, and they are not throughput:

1. LATENCY: stopping the world will never be acceptable in interactive applications of some size; it is only acceptable in batch programs. In fact, even incremental collectors can cause a sluggish experience!

2. MEMORY CONSUMPTION: doing fewer collection cycles will increase the memory footprint. Ideally the collector would run all the time. In the cloud you pay for memory, so you want to keep memory consumption at a fixed level that you never exceed (see the sketch below for watching this from inside a program).

System level programming is primarily valuable for interactive applications, OS level programming, or embedded. So, no, it is not snobbish to not want a sluggish GC. Most other tasks are better done in high level languages.
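As promised above, a minimal sketch for watching the footprint from inside the program (GC.stats is a real core.memory API; what thresholds you react to would be your own policy):

import core.memory : GC;
import std.stdio : writeln;

void main()
{
    auto s = GC.stats; // used/free byte counts of the GC-managed heap
    writeln("used: ", s.usedSize, ", free: ", s.freeSize);
}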
Jan 15 2021
prev sibling parent reply welkam <wwwelkam gmail.com> writes:
On Friday, 15 January 2021 at 07:35:00 UTC, H. S. Teoh wrote:
 (1) Refactored one function called from an inner loop to reuse 
 a buffer instead of allocating a new one each time, thus 
 eliminating a large amount of garbage from small allocations;
 <...>
 The result was about 40-50% reduction in runtime, which is 
 close to about a 2x speedup.
I think this message needs to be signal boosted. Most of the time GC is not the problem. The problem is sloppy memory usage. If you allocate a lot of temporary objects your performance will suffer even if you use malloc and free. If you write code that tries to use stack allocation as much as possible, doesn't copy data around, and reuses buffers, then it will be faster than manual memory management that doesn't do that. And that's with a "slow" GC.
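A minimal sketch of the buffer-reuse pattern (processAll and the loop body are made up for illustration):

import std.array : appender;

void processAll(string[] lines)
{
    auto buf = appender!(char[])(); // allocated once, reused every pass
    foreach (line; lines)
    {
        buf.clear(); // keeps the capacity, discards the old contents
        buf ~= line; // stand-in for whatever transformation you do
        // ... use buf[] here ...
    }
}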
Jan 15 2021
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Jan 15, 2021 at 09:04:13PM +0000, welkam via Digitalmars-d-learn wrote:
 On Friday, 15 January 2021 at 07:35:00 UTC, H. S. Teoh wrote:
 (1) Refactored one function called from an inner loop to reuse a
 buffer instead of allocating a new one each time, thus eliminating a
 large amount of garbage from small allocations;
 <...>
 The result was about 40-50% reduction in runtime, which is close to
 about a 2x speedup.
I think this message needs to be signal boosted. Most of the time GC is not the problem. The problem is sloppy memory usage. If you allocate a lot of temporary objects your performance will suffer even if you use malloc and free.
As the joke goes, "you can write assembly code in any language". :-D If you code in a sloppy way, it doesn't matter what language you write in, your program will still suck. No amount of compiler magic will be able to help you. The solution is not to blame this or that, it's to learn how to use what the language offers you effectively.
 If you write code that tries to use stack allocation as much as
 possible, doesn't copy data around, reuses buffers then it will be
 faster than manual memory management that doesn't do that. And thats
 with a "slow" GC.
And with D, it's actually easy to do this, because D gives you tools like slices and by-value structs. Having slices backed by the GC is actually a very powerful combination that people seem to overlook: it means you can freely refer to data by slicing the buffer. Strings being slices, as opposed to null-terminated, is a big part of this. In C, you cannot assume anything about how the memory of a buffer is managed (unless you allocated it yourself); as a result, in typical C code strcpy's and strdup's are everywhere. Want a substring? You can't null-terminate the parent string without affecting code that still depends on it; solution? strdup. Want to store a string in some persistent data structure? You can't be sure the pointer will still be valid (or that the contents pointed to won't change); solution? strdup, or strcpy. Want to parse a string into words? Either you modify it in-place (e.g. strtok), invalidating any other references to it, or you have to make new allocations of every segment. GC or no GC, this will not lead to a good place, performance-wise. I could not have written fastcsv if I had to work under the constraints of C's null-terminated strings under manual memory management. Well, I *could*, but it would have taken 10x the amount of effort, and the API would be 5x uglier due to the memory management paraphernalia required to do this correctly in C. And to support lazy range-based iteration would require a whole new set of APIs in C just for that purpose. In D, I can simply take slices of the input -- eliminating a whole bunch of copying. And backed by the GC -- so the code doesn't have to be cluttered with memory management paraphernalia, but can have a simple, easy-to-use API compatible across a large range of use cases. Lazy iteration comes "for free", no need to introduce an entire new API. It's a win-win. All that's really needed is for people to be willing to drop their C/C++/Java coding habits, and write D the way it's meant to be written: with preference for stack-allocated structs and by-value semantics, using class objects only for more persistent data. Use slices for maximum buffer reuse, avoid needless copying. Use compile-time introspection to generate code statically where possible instead of needlessly recomputing stuff at runtime. Don't fear the GC; embrace it and use it to your advantage. If it becomes a bottleneck, refactor that part of the code. No need to rewrite the entire project the painful way; most of the time GC performance issues are localised and have relatively simple fixes. T -- Once the bikeshed is up for painting, the rainbow won't suffice. -- Andrei Alexandrescu
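A tiny illustration of the slicing point (plain Phobos, nothing here is fastcsv):

import std.algorithm.iteration : splitter;
import std.stdio : writeln;

void main()
{
    string input = "the quick brown fox";
    // A lazy range of slices: every word aliases the original buffer.
    // No strdup/strcpy anywhere, and the GC keeps the buffer alive for
    // as long as any slice refers to it.
    foreach (word; input.splitter(' '))
        writeln(word);
}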
Jan 15 2021
next sibling parent reply Max Haughton <maxhaton gmail.com> writes:
On Friday, 15 January 2021 at 21:49:07 UTC, H. S. Teoh wrote:
 On Fri, Jan 15, 2021 at 09:04:13PM +0000, welkam via 
 Digitalmars-d-learn wrote:
 [...]
As the joke goes, "you can write assembly code in any language". :-D If you code in a sloppy way, it doesn't matter what language you write in, your program will still suck. No amount of compiler magic will be able to help you. The solution is not to blame this or that, it's to learn how to use what the language offers you effectively.
 [...]
And with D, it's actually easy to do this, because D gives you tools like slices and by-value structs. Having slices backed by the GC is actually a very powerful combination that people seem to overlook: it means you can freely refer to data by slicing the buffer. Strings being slices, as opposed to null-terminated, is a big part of this. In C, you cannot assume anything about how the memory of a buffer is managed (unless you allocated it yourself); as a result, in typical C code strcpy's, strdup's are everywhere. Want a substring? You can't null-terminate the parent string without affecting code that still depends on it; solution? strdup. Want to store a string in some persistent data structure? You can't be sure the pointer will still be valid (or that the contents pointed to won't change); solution? strdup, or strcpy. Want to parse a string into words? Either you modify it in-place (e.g. strtok), invalidating any other references to it, or you have to make new allocations of every segment. GC or no GC, this will not lead to a good place, performance-wise. I could not have written fastcsv if I had to work under the constraints of C's null-terminated strings under manual memory management. Well, I *could*, but it would have taken 10x the amount of effort, and the API would be 5x uglier due to the memory management paraphrenalia required to do this correctly in C. And to support lazy range-based iteration would require a whole new set of API's in C just for that purpose. In D, I can simply take slices of the input -- eliminating a whole bunch of copying. And backed by the GC -- so the code doesn't have to be cluttered with memory management paraphrenalia, but can have a simple, easy-to-use API compatible across a large range of use cases. Lazy iteration comes "for free", no need to introduce an entire new API. It's a win-win. All that's really needed is for people to be willing to drop their C/C++/Java coding habits, and write D the way it's meant to be written: with preference for stack-allocated structs and by-value semantics, using class objects only for more persistent data. Use slices for maximum buffer reuse, avoid needless copying. Use compile-time introspection to generate code statically where possible instead of needlessly recomputing stuff at runtime. Don't fear the GC; embrace it and use it to your advantage. If it becomes a bottleneck, refactor that part of the code. No need to rewrite the entire project the painful way; most of the time GC performance issues are localised and have relatively simple fixes. T
I agree that the GC is useful, but it is a serious hindrance on the language not having an alternative other than really bad smart pointers (well written, but it's hard to know their overhead) and malloc and free. I don't mind using the GC for my own stuff, but it's too difficult to avoid it at the moment for the times when it gets in the way. I think the way forward is some robust move semantics and analysis like Rust's. I suppose ideally we would have some kind of hidden ARC behind the scenes, but I don't know how that would play with structs. One more cynical argument for having a modern alternative is that the GC is a huge hindrance on the language's "cool"-ness in the next generation of programmers, and awareness is everything (most people won't have heard of D).
Jan 15 2021
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 22:13:01 UTC, Max Haughton wrote:
 I think the way forward is some robust move semantics and 
 analysis like Rust. I suppose ideally we would have some kind 
 of hidden ARC behind the scenes but I don't know how that would 
 play with structs.
If they are heap allocated then you just put the reference count at a negative offset (common strategy). You need pointer types for it, but that is not a big issue if the strategy is to support both the old GC and ARC. You basically just need to get library authors that support ARC to mark their library code in some way.
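A minimal sketch of that negative-offset layout, assuming a hand-rolled scheme (none of these names exist in druntime, and real ARC for shared data would need atomic counts):

import core.stdc.stdlib : malloc, free;

T* rcAlloc(T)()
{
    // One extra word in front of the payload holds the count.
    auto raw = cast(size_t*) malloc(size_t.sizeof + T.sizeof);
    raw[0] = 1;
    return cast(T*)(raw + 1); // the caller only ever sees the payload
}

size_t* rcCount(T)(T* p) { return cast(size_t*) p - 1; } // negative offset

void rcRelease(T)(T* p)
{
    if (--(*rcCount(p)) == 0)
        free(rcCount(p)); // free from the real start of the block
}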
Jan 15 2021
prev sibling parent Imperatorn <johan_forsberg_86 hotmail.com> writes:
On Friday, 15 January 2021 at 21:49:07 UTC, H. S. Teoh wrote:
 On Fri, Jan 15, 2021 at 09:04:13PM +0000, welkam via 
 Digitalmars-d-learn wrote:
 [...]
As the joke goes, "you can write assembly code in any language". :-D If you code in a sloppy way, it doesn't matter what language you write in, your program will still suck. No amount of compiler magic will be able to help you. The solution is not to blame this or that, it's to learn how to use what the language offers you effectively. [...]
+1 for this. We should/could improve the GC instead of fighting it 🌟
Jan 18 2021
prev sibling next sibling parent tsbockman <thomas.bockman gmail.com> writes:
On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
 I've always heard programmers complain about Garbage Collector 
 GC. But I never understood why they complain. What's bad about 
 GC?
SHORT VERSION: While garbage collection is great for many applications, GC also has some significant disadvantages. As a systems programming language, D attracts users who care about the disadvantages of GC, but D's design prevents it from mitigating the downsides of GC to the extent that less powerful languages like Java can.

LONG VERSION: There are many different garbage collector designs, so not all of these criticisms apply to all of them. Nevertheless, nearly every design suffers from at least some of these problems:

1) Freezes/pauses: many garbage collectors have to pause all other threads in the program for some significant time in order to collect. This takes somewhere between a few ms and a few s, depending on how much data needs to be scanned. For interactive applications, these pauses can cripple the program's performance. 60 FPS, or 16ms per frame, is a typical target rate for modern user interfaces, video playback, and games. When any given frame may be subject to a 5ms pause by the GC, the processing for *every* frame must be limited to what can be accomplished in 11ms, effectively wasting 30% of the available CPU performance of *all cores* on most frames, when no collection was necessary. But 5ms is on the low end for GC pauses. Pauses longer than 16ms are common with some GC designs, guaranteeing dropped frames, which are distracting and unpleasant to the user. For real-time applications such as hardware control systems, a pause that is too long could actually break something or injure someone.

2) Much higher memory consumption: all practical heap memory management schemes have some overhead - additional memory that is consumed by the manager itself, rather than the rest of the program's allocations. But garbage collectors typically require three to ten times the size of the data for good performance, as opposed to two times or less for reference counting or manual management.

3) RAII doesn't work properly, because the GC usually doesn't guarantee that destructors will be run for objects that it frees. I don't know why this is such a common limitation, but my guess is that it is due to one or both of:

a) Collection often happens on a different thread from an object's allocation and construction. So, either all destructors must be thread-safe, or they just can't be run.

b) A collection may occur at some awkward point where the program invariants depended on by non thread-safe destructors are violated, like inside of another destructor.

It is certainly possible to retrofit correct resource management on top of a GC scheme, but the logic required to determine when to close file handles (for example) is often the same as the logic required to determine when memory can safely be freed, so why not just combine the two and skip the GC?

GC has significant advantages, of course: simplicity (outside the GC itself), safety, and, for better designs, high throughput. But D users are more critical of GC than most, for good reason:

I) D is a systems programming language. While a good GC is the best choice for many, many applications, the kinds of applications for which GC is inappropriate usually require, or at least greatly benefit from, the full power of a systems programming language. So, the minority of programmers who have good reason to avoid GC tend to leave languages like Java for systems languages (C, C++, D, Rust).

II) Historically, D's GC was embarrassingly bad compared to the state-of-the-art designs used by the JVM and .NET platforms.
D's GC has improved quite a lot over the years, but it is not expected to ever catch up to the really good ones, because it is limited by other design decisions in the D language that prioritize efficient, easy-to-understand interoperability with C-style code over having the best possible GC. In particular, D's GC is a stop-the-world design that must pause *all* threads that may own any GC memory whenever it collects, and thus it fully suffers from problem (1) which I described earlier. Also, it used to have the additional problem that it leaked memory by design (not a bug). This is mostly fixed now, but for some reason the fix is not enabled by default?! https://dlang.org/spec/garbage.html#precise_gc (In before someone answers with, "Nothing. It works for me, so people who say it's bad are just stupid and don't profile.")
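For reference, the precise collector is opt-in: per the page linked above it can be selected at startup with --DRT-gcopt=gc:precise, or baked into the binary like this:

// Documented druntime configuration hook: the runtime reads rt_options
// at startup, before the GC is initialized.
extern(C) __gshared string[] rt_options = [ "gcopt=gc:precise" ];

void main() { /* ... */ }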
Jan 13 2021
prev sibling next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
 I've always heard programmers complain about Garbage Collector 
 GC. But I never understood why they complain. What's bad about 
 GC?
tsbockman gave a good answer. In short: - You need to design the language for GC for it to be a satisfying solution for interactive applications. For D and C++ it is bolted on... which is not great. - You will use roughly twice as much memory with GC (you get more garbage). - You will get more uneven performance with GC (humans are smarter).
Jan 13 2021
prev sibling next sibling parent reply mw <mingwu gmail.com> writes:
On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
 I've always heard programmers complain about Garbage Collector 
 GC. But I never understood why they complain. What's bad about 
 GC?
I want to stress: in D you can *MIX* GC with manual memory management, which gives you the best of both worlds. I summarized my experience in an earlier post (copy & pasted below), and I also added the code to jdiutil: Just-Do-It util

https://wiki.dlang.org/Memory_Management#Explicit_Class_Instance_Allocation
https://github.com/mingwugmail/jdiutil/blob/master/source/jdiutil/memory.d

===========================================
https://forum.dlang.org/post/hzryuifoixwwywwifwbz forum.dlang.org

One of the upsides of D I like is that one can mix GC with manual memory management:

https://dlang.org/library/core/memory/gc.free.html

which gives you the best of both worlds. Currently I have a personal project; initially I was solely relying on GC just like in Java: allocate all the objects via `new`, and let the GC take care of all the bookkeeping. But there is a particular set of objects which takes up the majority of the program's memory consumption, and even after I carefully removed all the references after the objects were no longer used, the program still used lots of memory because GC collection is un-predictable, both in terms of timing and efficiency.

Then I decided to do manual core.memory.GC.free just for those particular objects (it was not very easy in a multi-threaded program to get all the logic right, but eventually I got it done). And the resulting program now only uses ~10% of the memory it used to use.

I think this flexibility to mix GC & manual memory management is very unique in D. Actually I'm not sure if it can be done in other languages at all.
===========================================
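A stripped-down sketch of the pattern described above (BigBlob stands in for the memory-hungry objects; destroy and GC.free are the real druntime calls):

import core.memory : GC;

class BigBlob
{
    ubyte[] payload;
}

void main()
{
    auto blob = new BigBlob;                   // ordinary GC allocation
    blob.payload = new ubyte[4 * 1024 * 1024];
    // ... last use of blob ...
    destroy(blob);               // run the finalizer deterministically
    GC.free(cast(void*) blob);   // return the object's memory right away
    // (the payload array itself is still collected normally)
}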
Jan 13 2021
next sibling parent tsbockman <thomas.bockman gmail.com> writes:
On Wednesday, 13 January 2021 at 21:56:58 UTC, mw wrote:
 I think this flexibility to mix GC & manual memory management 
 is very unique in D. Actually I'm not sure if it can be done in 
 other languages at all.
Yes, this is one of the great things about D. There are miscellaneous problems with the D runtime and the D standard library that make it harder than it needs to be, though. Improvements I would like to see in the future:

1) Finalize std.experimental.allocator (a sketch of its current form follows below).

2) A good, safe, flexible reference counting module in the standard library (this requires further development of dip1000 and the like, I think).

3) Upgrade core.thread to fully support @nogc. I shouldn't lose access to Thread.sleep and the like just because a thread isn't being monitored by the GC.

4) Single-threaded versions of various components related to memory management that are more efficient because they don't need to be thread-safe. For example, people say that reference counting is slow because incrementing and decrementing the count is an atomic operation, but most references will never be shared between threads, so it is just a waste to use atomics.

Still, all of these issues can be worked around today; D lacks high quality standards in this area more than it lacks necessary features.
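As mentioned in point (1), the experimental API is already usable; a minimal sketch with Mallocator (make, dispose and Mallocator are the real std.experimental.allocator names):

import std.experimental.allocator : make, dispose;
import std.experimental.allocator.mallocator : Mallocator;

void main()
{
    auto p = Mallocator.instance.make!int(42);  // typed allocation, no GC
    scope(exit) Mallocator.instance.dispose(p); // deterministic free
    assert(*p == 42);
}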
Jan 13 2021
prev sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 13 January 2021 at 21:56:58 UTC, mw wrote:
 I think this flexibility to mix GC & manual memory management 
 is very unique in D. Actually I'm not sure if it can be done in 
 other languages at all.
It sure can. Most AOT languages that provide a GC also provide C interfaces and manual memory management. C++ has also had the Boehm collector since the 90s. Chrome uses Oilpan, a library-style GC with write barriers and incremental collection.
Jan 13 2021
parent reply mw <mingwu gmail.com> writes:
On Thursday, 14 January 2021 at 00:15:12 UTC, Ola Fosheim Grøstad 
wrote:
 On Wednesday, 13 January 2021 at 21:56:58 UTC, mw wrote:
 I think this flexibility to mix GC & manual memory management 
 is very unique in D. Actually I'm not sure if it can be done 
 in other languages at all.
It sure can. Most AOT languages that provide GC also provide C-interfaces and manual memory management. C++ also had the Boehm-collector since the 90s. Chrome uses Oilpan, a library-style GC with write barriers and incremental collection.
ok, what I really mean is: ... in other "(more popular) languages (than D, and directly supported by the language & std library only)" ...
Jan 13 2021
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 14 January 2021 at 00:37:29 UTC, mw wrote:
 ok, what I really mean is:

 ... in other "(more popular) languages (than D, and directly 
 supported by the language & std library only)" ...
Well, even Python supports both, if you want to, so... I suppose you mean system level programming languages? The reality is that GC for a system level programming language is not popular to begin with. In that domain it is fairly common to not use the standard library and to use custom runtimes, as we can see for C and C++. Anyway, what makes the D GC weak is exactly that there is not much support for it in the D language or the compilers, only in the runtime and the bare minimum of RTTI. LLVM supports more advanced GC features than D provides. So, the D GC doesn't do much more for programmers than Boehm. And Boehm is not popular either... Oilpan, which Chrome uses, has more advanced features than the D GC, and does what most system level programmers want: it limits the GC to designated types and supports incremental collection. The downside is that each Oilpan GC type also has to specify which pointers to trace, but then again, not being able to do that in D is a disadvantage... For systems programming, I think D would be better off appropriating the approach taken by Oilpan and mixing it with reference counting, but making it a language/compiler feature. That is at least a proven approach for one big interactive application. Basically, make "D class" objects GC and everything else RC or manual.
Jan 14 2021
parent reply mw <mingwu gmail.com> writes:
On Thursday, 14 January 2021 at 09:26:06 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 14 January 2021 at 00:37:29 UTC, mw wrote:
 ok, what I really mean is:

 ... in other "(more popular) languages (than D, and directly 
 supported by the language & std library only)" ...
Well, even Python supports both
Python's `del` isn't guaranteed to free the memory, that's what we are discussing here: core.memory.GC.free / core.stdc.stdlib.free https://www.quora.com/Why-doesnt-Python-release-the-memory-when-I-delete-a-large-object In CPython (the default reference distribution), the Garbage collection in Python is not guaranteed to run when you delete the object - all del (or the object going out of scope) does is decrement the reference count on the object. The memory used by the object is not guaranteed to be freed and returned to the processes pool at any time before the process exits. Even if the Garbage collection does run - all it needs is another object referencing the deleted object and the garbage collection won’t free the object at all.
Jan 14 2021
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 14 January 2021 at 18:10:43 UTC, mw wrote:
 Python's `del` isn't guaranteed to free the memory, that's what
Fair point, but I was thinking of the C interop interface. You can create your own wrapper (e.g. numpy) and do manual memory management, but it isn't something people want to do! It is mostly pointless to do that within Python because of the existing overhead. That applies to most high level languages; you can, but it is pointless. You only do it for interop... One can follow the same kind of reasoning for D. It makes no sense for people who want to stay high level and do batch programming. Which is why this disconnect exists in the community... I think.
Jan 14 2021
parent reply welkam <wwwelkam gmail.com> writes:
On Thursday, 14 January 2021 at 18:51:16 UTC, Ola Fosheim Grøstad 
wrote:
 One can follow the same kind of reasoning for D. It makes no 
 sense for people who want to stay high level and do batch 
 programming. Which is why this disconnect exists in the 
 community... I think.
The reasoning for why we do not implement write barriers is that they would hurt low level programming. But I feel like if we drew a Venn diagram of people who rely on the GC and those who do a lot of writes through a pointer, we would get almost no overlap. In other words, if the D compiler had a switch that turned on write barriers and a better GC, I think many people would use it and find the trade-offs acceptable.
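For the curious, a write barrier is conceptually tiny. A hand-written sketch of what the compiler would have to emit around every pointer store (gcWriteBarrier and rememberedSet are made-up names; nothing like this exists in druntime):

__gshared bool[void**] rememberedSet; // slots the GC must re-scan

void gcWriteBarrier(void** slot, void* newValue)
{
    rememberedSet[slot] = true; // tell the GC this slot was mutated
    *slot = newValue;           // then perform the actual store
}

// The compiler would lower `obj.next = p;` into roughly:
//     gcWriteBarrier(cast(void**)&obj.next, cast(void*)p);
// That per-store cost is exactly what low level code objects to.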
Jan 15 2021
next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 14:50:00 UTC, welkam wrote:
 The reasoning of why we do not implement write barriers is that 
 it will hurt low level programming. But I feel like if we drew 
 a ven diagram of people who rely on GC and those who do a lot 
 of writes trough a pointer we would get almost no overlap. In 
 other words if D compiler had a switch that turned on write 
 barriers and better GC I think many people would use it and 
 find the trade offs acceptable.
Yes, I think this is what we need: some way of making the compiler know which pointers have to be traced so that it can avoid redundant pointers. For instance, a type for telling the compiler that a pointer is non-owning. Then we don't have to use a write barrier for that non-owning pointer, I think? Or maybe I am missing something? Then we can also have a switch. But I also think that we could do this:

1. Make all class objects GC allocated and use write barriers for those.
2. Allow non-owning annotations for class object pointers.
3. Make slices and dynamic arrays RC.
4. Let structs be held unique_ptr style (Rust/C++ default).

Then we need a way to improve precise tracing:

1. Make use of LLVM's precise stack/register information.
2. Introduce tagged unions and only allow redundant pointers in untagged unions.
3. Each compile phase emits information for the GC.
4. Before linking, the compiler generates code to narrowly trace the correct pointers.

Then we don't have to deal with run-time type information lookup and don't have to do an expensive lookup to figure out if a pointer points to GC memory or not. The compiler can then just assume that the generated collection code is exact.
Jan 15 2021
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 14:59:18 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 15 January 2021 at 14:50:00 UTC, welkam wrote:
 avoid redundant pointers. For instance, a type for telling the 
 compiler that a pointer is non-owning.
I guess "non-owning" is the wrong term. I mean pointers that are redundant. Not all "non-owning" pointers are redundant.
Jan 15 2021
prev sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Friday, 15 January 2021 at 14:50:00 UTC, welkam wrote:
 On Thursday, 14 January 2021 at 18:51:16 UTC, Ola Fosheim 
 Grøstad wrote:
 One can follow the same kind of reasoning for D. It makes no 
 sense for people who want to stay high level and do batch 
 programming. Which is why this disconnect exists in the 
 community... I think.
The reasoning of why we do not implement write barriers is that it will hurt low level programming. But I feel like if we drew a ven diagram of people who rely on GC and those who do a lot of writes trough a pointer we would get almost no overlap. In other words if D compiler had a switch that turned on write barriers and better GC I think many people would use it and find the trade offs acceptable.
Hypothetically, would it be possible for users to supply their own garbage collector that uses write barriers?
Jan 15 2021
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 15:20:05 UTC, jmh530 wrote:
 Hypothetically, would it be possible for users to supply their 
 own garbage collector that uses write barriers?
Yes. You could translate Google Chrome's Oilpan to D. It uses library smart pointers for dirty-marking. But it requires you to write a virtual function that points out what should be traced (actually does the tracing for the outgoing pointers from that object):
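In D-flavoured pseudocode the idiom looks roughly like this (Visitor and trace only mirror Oilpan's C++ names; none of this is in druntime):

interface Visitor
{
    void trace(Object outgoing); // mark one outgoing reference
}

class Node // would derive from GarbageCollected<Node> in Oilpan terms
{
    Node next;
    Node prev;

    // Each GC type enumerates its own pointers for the collector.
    void trace(Visitor v)
    {
        v.trace(next);
        v.trace(prev);
    }
}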
Jan 15 2021
parent reply jmh530 <john.michael.hall gmail.com> writes:
On Friday, 15 January 2021 at 15:36:37 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 15 January 2021 at 15:20:05 UTC, jmh530 wrote:
 Hypothetically, would it be possible for users to supply their 
 own garbage collector that uses write barriers?
Yes. You could translate Google Chrome's Oilpan to D. It uses library smart pointers for dirty-marking. But it requires you to write a virtual function that points out what should be traced (actually does the tracing for the outgoing pointers from that object):
The library smart pointers would make it difficult to interact with existing D GC code though.
Jan 15 2021
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 15 January 2021 at 16:21:43 UTC, jmh530 wrote:
 On Friday, 15 January 2021 at 15:36:37 UTC, Ola Fosheim Grøstad 
 wrote:
 The library smart pointers would make it difficult to interact 
 with existing D GC code though.
Yes. So it would be better to do it automatically in the compiler for designated GC objects. ARC is also a good alternative. It's probably less work to get a high quality ARC implementation than a high quality GC implementation.
Jan 15 2021
prev sibling next sibling parent reply Guillaume Piolat <first.last gmail.com> writes:
On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
 I've always heard programmers complain about Garbage Collector 
 GC. But I never understood why they complain. What's bad about 
 GC?
Languages where GC usage is unavoidable (JavaScript and Java) have created a lot of situations where there is a GC pause in a realtime program, and the cause is dynamically allocated memory. So a lot of people formed their opinion of GC while using a setup where you couldn't really avoid it. For example, in the JavaScript of 10 years ago, just using a closure or an array literal could make your web game stutter.
Jan 14 2021
next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 14 January 2021 at 10:05:51 UTC, Guillaume Piolat 
wrote:
 On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
 I've always heard programmers complain about Garbage Collector 
 GC. But I never understood why they complain. What's bad about 
 GC?
Languages where the GC usage is unavoidable (Javascript and Java) have created a lot of situations where there is a GC pause in realtime program and the cause is this dynamically allocated memory. So a lot of people make their opinion of GC while using setup where you couldn't really avoid it.
Indeed, but I don't think we should underestimate the perceived value of having a minimal runtime. Like, if D had a better GC solution that involved an even heavier runtime, it would still be a big issue for people interested in low level system programming. Transparency is an issue. System level programming means you want to have a clear picture of what is going on in the system at all levels, all the way down to the hardware. If you cannot understand how the runtime works you also cannot fix issues... so a simple runtime is more valuable than a feature rich complex runtime. That is kinda what defines system level programming: you know exactly what every subsystem is doing so that you can anticipate performance/resource issues. And that is the opposite of high level programming where you make no assumptions about the underlying machinery and only care about the abstract descriptions of language semantics.
Jan 14 2021
prev sibling parent reply sighoya <sighoya gmail.com> writes:
As other people already mentioned, garbage collection incurs some 
amount of non-determinism, and people working in low level areas 
prefer to handle things deterministically because they think 
non-deterministic handling of memory makes your code slow.

For example, rendering in games gets paused by the GC, prolonging 
the whole rendering process. The conclusion drawn is that the 
code runs slow, but it doesn't. In fact, a tracing GC tries to do 
the opposite: in order to make the whole process faster, it 
releases memory in a buffered manner, often yielding faster 
execution in the long run, but not in the short run, where it is 
mandatory not to introduce any kind of pauses.

So, in the end, it isn't a question of performance but rather of 
determinism vs non-determinism.

Non-determinism has the potential to let the code run faster.
For instance, no one really uses the proposed threading model in 
Rust with owned values, where one thread can only work mutably on 
an owned value because other threads wanting to mutate that value 
have to wait. This works by creating a deterministic order of 
thread execution where each thread can only work after the other 
thread releases its work. This model often gets praised in Rust, 
but the potential seems only a theoretical one. As a result, you 
most often see the use of atomic reference counting (ARC) 
cluttering Rust codebases.

The other point is the increased memory footprint, because you 
have a runtime memory manager taking responsibility for 
(de)allocation, which is impossible to have on some limited 
memory systems.

However, why provide just a one-size-fits-all solution when there 
are plenty of GC algorithms for different kinds of problem 
domains? Why not offer more than one, just as is the case in 
Java? The advantage would be the ability to swap the GC algorithm 
after the program is compiled, so you can reuse the same program 
with different GC algorithms.
Jan 14 2021
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 14 January 2021 at 13:01:04 UTC, sighoya wrote:
 Why not offering more than one just as it is the case in Java?
 The advantage hereby is to adapt the GC algorithm after the 
 program was compiled, so you can reuse the same program with 
 different GC algorithms.
Because Java has a well defined virtual machine with lots of restrictions.
Jan 14 2021
parent reply sighoya <sighoya gmail.com> writes:
On Thursday, 14 January 2021 at 13:08:06 UTC, Ola Fosheim Grøstad 
wrote:

 Because Java has a well defined virtual machine with lots of 
 restrictions.
So you're insisting this isn't possible in D?
Jan 14 2021
parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 14 January 2021 at 13:10:14 UTC, sighoya wrote:
 On Thursday, 14 January 2021 at 13:08:06 UTC, Ola Fosheim 
 Grøstad wrote:

 Because Java has a well defined virtual machine with lots of 
 restrictions.
So you're insisting this isn't possible in D?
It isn't possible in a meaningful way. In system level programming languages you have to manually uphold the invariants needed to not break the GC collection algorithm. So if you change to a significantly different collection model, the needed invariants will change. The possible alternatives are: 1. Use "shared" to prevent GC allocated memory from entering other threads and switch to thread local GC. Then use ARC for shared. 2. Redefine language semantics/type system for a different GC model. This will break existing code.
Jan 14 2021
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 14 January 2021 at 13:16:16 UTC, Ola Fosheim Grøstad 
wrote:
 1. Use "shared" to prevent GC allocated memory from entering 
 other threads and switch to thread local GC. Then use ARC for 
 shared.

 2. Redefine language semantics/type system for a different GC 
 model. This will break existing code.
3. Keep the existing GC for existing code and introduce ARC across the board for new code. Add a versioning statement that people can add to their libraries to tell the compiler which models they support.
Jan 14 2021
prev sibling next sibling parent reply Basile B. <b2.temp gmx.com> writes:
On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
 I've always heard programmers complain about Garbage Collector 
 GC. But I never understood why they complain. What's bad about 
 GC?
Semi serious answer:

In the domain of hobbyism and small companies, programmers that work with statically typed languages all believe that they are superheroes in the domain of memory management. When they see "GC" they think that they are considered 2nd grade students ^^

It's basically snobbism.
Jan 14 2021
next sibling parent reply Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 14 January 2021 at 10:28:13 UTC, Basile B. wrote:
 Semi serious answer:

 In the domain of hoby-ism and small companies programmers that 
 work with statically typed languages all believe that they are 
 super hero in the domain of memory managment. When they see 
 "GC" they think that they are considered as 2nd grade student ^^

 It's basically snobbism.
I know your response is *tongue in cheek*, but I actually find it easier to use C++11 style memory management across the board than mixing two models. C style memory management, on the other hand, is pretty horrible, and you'll end up spending much of your time debugging "unexplainable" crashes. I don't experience that much in C++ when staying within their standard regime. When you want more performance than standard C++ memory management, things can go wrong, e.g. manual emplace strategies and forgetting to call destructors etc., but that is the same in D. And frankly, you seldom need that, maybe in 2-3 critical places in your program (e.g. graphics/audio).
Jan 14 2021
parent reply sighoya <sighoya gmail.com> writes:
On Thursday, 14 January 2021 at 11:11:58 UTC, Ola Fosheim Grøstad 
wrote:

 I know your response is *tongue in cheek*, but I actually find 
 it easier to use c++11 style memory management across the board 
 than mixing two models.
But this is already the case for C++ and Rust. Remembering the days back developing in C++, there was a huge amount of memory deallocation side effects because opencv's memory management differs from qt's memory management. Suffice to say it was hell. Personally, I find it better to encapsulate manual memory management and not let it leak outside.
Jan 14 2021
parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 14 January 2021 at 13:05:31 UTC, sighoya wrote:
 But this is already the case for C++ and Rust. Remembering the 
 days back developing in C++ there were a huge amount of memory 
 deallocation side effects because opencv's memory management 
 differs from qt's memory management.
The problem in C++ is that older frameworks have their own ways of doing things for performance reasons or because the C++ standard they started with didn't provide what they needed... And... most C++ frameworks that are big are old... If you avoid big frameworks then it gets better.
 Personally, I find it better to prefer encapsulating manual 
 memory management and not to leak them outside.
Yes. Most programmers don't need system level programming. So if D defines itself to not be a system level programming language then there would be room to improve a lot, but then it should move towards more high level features and prevent the usage of some low level features like untagged non-discriminating unions of pointers. Rust is more high level than D... I think.
Jan 14 2021
prev sibling parent reply ddcovery <antoniocabreraperez gmail.com> writes:
On Thursday, 14 January 2021 at 10:28:13 UTC, Basile B. wrote:
 On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
 I've always heard programmers complain about Garbage Collector 
 GC. But I never understood why they complain. What's bad about 
 GC?
Semi serious answer: In the domain of hoby-ism and small companies programmers that work with statically typed languages all believe that they are super hero in the domain of memory managment. When they see "GC" they think that they are considered as 2nd grade student ^^ It's basically snobbism.
Hi Basile,

My experience: in the 90's I worked with Pascal, C and C++ with rudimentary memory management: basically there was no difference between working with memory or files in terms of life-cycle management: you must alloc/free memory and you must open/close files. The secret to "stability" was a set of conventions to determine who was responsible for the resource handler or memory pointer. I developed some ERP/CRMs, some multimedia products and some industrial environment applications (real time ones).

At the end of the 90's I began to work with VB and the COM model (that uses reference counting) and I discovered that the best way to manage memory (avoiding deadlocks) was treating objects as "external" unmanaged resources. The VB6 "WITH" statement was key. [...] good enough for all the applications and services that I have been developing for the last 20 years, because these languages (and their frameworks + base libraries) have never crossed certain limits: they always separated managed and unmanaged resources: the developer is responsible for unmanaged resources, and memory is managed by the GC. The language itself offers you good tooling for ARM (like [...]).

Finally, the last actors arrived on the scene: mainly JavaScript and derivatives (when working in a browser context), where the developer is abstracted from how memory and resources are really managed (I can remember critical bugs in Chrome, like Image object memory leaks, because of this "abstraction").

GC has introduced a "productive" way of working, removing old memory problems for large scale projects (and finally with other kinds of resources in some scenarios) but, as developers/architects, we have the responsibility to recognize the limits of each technique and when it fits our needs. After all, my opinion is that if I were to develop something like a real time app (industrial/medical/aeronautics/...) or a game where a large number of objects must be mutated ~30 times per second, the GC's "unpredictable" or "large" time cost would be enough to stop using it. There are other reasons too (like "efficient" memory management when we need to manage large amounts of memory or run in limited memory environments).

I understand perfectly the D community people that need to work without GC: **it is not snobbish**: it is a real need. But not only a "need"... sometimes it is basically the way a team wants to work: explicit memory management vs GC.

D took the way of the GC without cutting the relationship with C/C++ developers: I really don't have enough knowledge of the language and libraries to know the level of support that D offers to non GC based developments, but I find it completely logical to try to maintain this relationship (on the basis that GC must continue being the default way of working).

Sorry for my "extended", maybe unnecessary, explanation (and my "poor" English :-p).
Jan 14 2021
parent IGotD- <nise nise.com> writes:
On Thursday, 14 January 2021 at 15:18:28 UTC, ddcovery wrote:
 I understand perfectly the D community people that needs to 
 work without GC:  **it is not snobbish**:  it is a real need.  
 But not only a "need"... sometimes it is basically the way a 
 team wants to work:  explicit memory management vs GC.
D already supports manual memory management, so that escape hatch was always there. My main criticism of D is the inability to freely exchange GC algorithms, as one type of GC might not be the best fit for everyone. The problem is of course that there is no differentiation between raw and fat pointers. With fat pointers, the community would have better opportunities to experiment with different GC designs, which would lead to a larger palette of GC algorithms.
Jan 14 2021
prev sibling next sibling parent reply Виталий Фадеев writes:
On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
 I've always heard programmers complain about Garbage Collector 
 GC. But I never understood why they complain. What's bad about 
 GC?
I like GC. How do you write quickly without GC?
Jan 14 2021
parent Dukc <ajieskola gmail.com> writes:
On Thursday, 14 January 2021 at 14:28:43 UTC, Виталий Фадеев 
wrote:
 On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
 I've always heard programmers complain about Garbage Collector 
 GC. But I never understood why they complain. What's bad about 
 GC?
How write quickly without GC ?
In DMD style: never release memory! This is not an option for long-running programs though, nor for anything that otherwise uses significant amounts of memory. Better to just use the GC if unsure.
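That DMD style is, in miniature, just a bump allocator; a sketch (not DMD's actual code):

import core.stdc.stdlib : malloc;

struct Region
{
    ubyte* pool;
    size_t used, capacity;

    void* alloc(size_t n)
    {
        if (pool is null)
        {
            capacity = 1 << 20; // grab one big block up front
            pool = cast(ubyte*) malloc(capacity);
        }
        assert(used + n <= capacity, "a real one would chain more blocks");
        auto p = pool + used;
        used += n;
        return p; // never freed; the OS reclaims everything at exit
    }
}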
Jan 14 2021
prev sibling parent welkam <wwwelkam gmail.com> writes:
On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
 I've always heard programmers complain about Garbage Collector 
 GC. But I never understood why they complain. What's bad about 
 GC?
GC languages promote the use of OOP, and they say that you don't need to worry about memory management. The result is that people write code that doesn't utilize CPU caches effectively and makes a lot of temporary allocations. For example, people at Microsoft measured [...] for every 1GB parsed. Add to that virtual machines and you find that programs written in those languages run like they are coded in molasses. People with experience of those programs conclude that it is all because of the GC. And it's a simple explanation for simple people.
Jan 15 2021