
digitalmars.D - [Off-Topic] John Carmack's point of view on GC and languages like

reply Paulo Pinto <pjmlp progtools.org> writes:
A bit off topic, but worth keeping as a reference when someone 
complains about the GC:

John Carmack's answer regarding how he sees the GC question, on the 
Lex Fridman Podcast.

"It is only when you get into the tightest of the real time things 
that you start saying, no, the GC is more cost than it has 
benefits for, but that is not 99.9+% of all software in the 
world..."

https://youtu.be/I845O57ZSy4?t=1370

He eventually follows up with a discussion he had about the 
matter on Twitter, and how some developers cannot let go of the 
good old days fighting for each byte.

He might know a thing or two about high performance code.

Maybe an interview to bookmark and post in the regular GC 
discussion threads, as a pinned answer.
Aug 07 2022
next sibling parent reply ryuukk_ <ryuukk.dev gmail.com> writes:
On Sunday, 7 August 2022 at 17:23:52 UTC, Paulo Pinto wrote:
 A bit off topic, but worth keeping as reference when someone 
 complains about the GC,

 John Carmack's answer regarding how he sees GC question on the 
 Lex Fridman Podcast.

 "It is only when you get into the tightest of the real time 
 things that you start saying, no, the GC is more cost than it 
 has benefits for, but that is not 99.9+% of all software in the 
 world..."

 https://youtu.be/I845O57ZSy4?t=1370

 He eventually follows up with a discussion he had about the 
 matter on Twitter, and how some developers cannot let go of the 
 good old days fighting for each byte.

 He might know a thing or two about high performance code.

 Maybe an interview to bookmark and post on the regular GC 
 discussion threads, as pinned answer.
That's kinda bullshit; it depends on the GC implementation.

D's GC is not good for 99.99% "of all software in the world"; it's wrong to say this, and it's misleading.

Java's are, because Java offers multiple implementations that you can configure, and together they cover a wide range of use cases.

D's GC is not a panacea. It's nice to have, but it's not something to brag about, especially when it STILL stops the world during collection and is STILL not scalable.

Go did it right by focusing on low latency and parallelism; we should copy their GC.
Aug 07 2022
next sibling parent reply ryuukk_ <ryuukk.dev gmail.com> writes:
On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:
 On Sunday, 7 August 2022 at 17:23:52 UTC, Paulo Pinto wrote:
 A bit off topic, but worth keeping as reference when someone 
 complains about the GC,

 John Carmack's answer regarding how he sees GC question on the 
 Lex Fridman Podcast.

 "It is only when you get into the tightest of the real time 
 things that you start saying, no, the GC is more cost than it 
 has benefits for, but that is not 99.9+% of all software in 
 the world..."

 https://youtu.be/I845O57ZSy4?t=1370

 He eventually follows up with a discussion he had about the 
 matter on Twitter, and how some developers cannot let go of 
 the good old days fighting for each byte.

 He might know a thing or two about high performance code.

 Maybe an interview to bookmark and post on the regular GC 
 discussion threads, as pinned answer.
That's kinda bullshit; it depends on the GC implementation.

D's GC is not good for 99.99% "of all software in the world"; it's wrong to say this, and it's misleading.

Java's are, because Java offers multiple implementations that you can configure, and together they cover a wide range of use cases.

D's GC is not a panacea. It's nice to have, but it's not something to brag about, especially when it STILL stops the world during collection and is STILL not scalable.

Go did it right by focusing on low latency and parallelism; we should copy their GC.
What we should promote more about D is the fact that "the GC is here when you need it, but you can also go raw when you need to; pragmatism allows D to be used for 99.9% of traditional software, but it is also suitable for the remaining 0.1%".

And not just "We have a GC too, who needs to manage memory manually LOL".
Aug 07 2022
parent reply Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Sunday, 7 August 2022 at 20:48:02 UTC, ryuukk_ wrote:
 On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:
 That's kinda bullshit, it depends on the GC implementation

 D's GC is not good for 99.99% "of all software in the world", 
 it's wrong to say this, and is misleading

 Java's are, because Java offers multiple implementations 
 that you can configure, and together they cover a wide range of use 
 cases

 D's GC is not a panacea; it's nice to have, but it's not 
 something to brag about, especially when it STILL stops the 
 world during collection and is STILL not scalable

 Go did it right by focusing on low latency, and parallelism, 
 we should copy their GC
What we should promote more about D is the fact that "the GC is here when you need it, but you can also go raw when you need to; pragmatism allows D to be used for 99.9% of traditional software, but it is also suitable for the remaining 0.1%".

And not just "We have a GC too, who needs to manage memory manually LOL".
You seem to be unaware that D does have more than one GC available. Specifically, there is a fork-based GC available for Linux that is not stop-the-world, and it is usable for real-time applications. Perhaps we should advertise that more. Its only real downside is that it is Linux-only.
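For anyone who wants to try it, the fork-based collection is selected through druntime's GC options; a minimal sketch (the `fork:1` option goes through the `--DRT-gcopt` mechanism in recent druntime releases, so check your version):

```d
// Opt into the fork-based concurrent mark phase at startup.
// Equivalent to running the program with --DRT-gcopt=fork:1.
extern(C) __gshared string[] rt_options = ["gcopt=fork:1"];

void main()
{
    // Allocate as usual; marking happens in a forked child process,
    // so the parent's threads are not stopped for the whole collection.
    foreach (i; 0 .. 1000)
    {
        auto block = new ubyte[](4096);
        block[0] = cast(ubyte) i;
    }
}
```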
Aug 07 2022
parent Tejas <notrealemail gmail.com> writes:
On Monday, 8 August 2022 at 01:49:57 UTC, Nicholas Wilson wrote:
 On Sunday, 7 August 2022 at 20:48:02 UTC, ryuukk_ wrote:
 On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:
 [...]
What we should promote more about D is the fact that "the GC is here when you need it, but you can also go raw when you need to; pragmatism allows D to be used for 99.9% of traditional software, but it is also suitable for the remaining 0.1%".

And not just "We have a GC too, who needs to manage memory manually LOL".
You seem to be unaware that D does have more than one GC available. Specifically, there is a fork-based GC available for Linux that is not stop-the-world, and it is usable for real-time applications. Perhaps we should advertise that more. Its only real downside is that it is Linux-only.
Oh! Do we have any benchmarks comparing the performance (like throughput, memory consumption, latency, etc.)?
Aug 07 2022
prev sibling next sibling parent reply max haughton <maxhaton gmail.com> writes:
On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:
 On Sunday, 7 August 2022 at 17:23:52 UTC, Paulo Pinto wrote:
 [...]
That's kinda bullshit; it depends on the GC implementation.

D's GC is not good for 99.99% "of all software in the world"; it's wrong to say this, and it's misleading.

Java's are, because Java offers multiple implementations that you can configure, and together they cover a wide range of use cases.

D's GC is not a panacea. It's nice to have, but it's not something to brag about, especially when it STILL stops the world during collection and is STILL not scalable.

Go did it right by focusing on low latency and parallelism; we should copy their GC.
It's actually 69.420% of all software in the world
Aug 07 2022
parent reply ryuukk_ <ryuukk.dev gmail.com> writes:
On Sunday, 7 August 2022 at 21:17:50 UTC, max haughton wrote:
 On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:
 On Sunday, 7 August 2022 at 17:23:52 UTC, Paulo Pinto wrote:
 [...]
That's kinda bullshit; it depends on the GC implementation.

D's GC is not good for 99.99% "of all software in the world"; it's wrong to say this, and it's misleading.

Java's are, because Java offers multiple implementations that you can configure, and together they cover a wide range of use cases.

D's GC is not a panacea. It's nice to have, but it's not something to brag about, especially when it STILL stops the world during collection and is STILL not scalable.

Go did it right by focusing on low latency and parallelism; we should copy their GC.
It's actually 69.420% of all software in the world
Exactly, hence why this quote is bullshit.

But nobody wants to understand the problems anymore:

https://discord.com/blog/why-discord-is-switching-from-go-to-rust

Let's miss every opportunity to catch market share.
Aug 07 2022
next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Sunday, 7 August 2022 at 21:25:57 UTC, ryuukk_ wrote:
 On Sunday, 7 August 2022 at 21:17:50 UTC, max haughton wrote:
 On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:
 [...]
It's actually 69.420% of all software in the world
Exactly, hence why this quote is bullshit.

But nobody wants to understand the problems anymore:

https://discord.com/blog/why-discord-is-switching-from-go-to-rust

Let's miss every opportunity to catch market share.
Discord switched to Rust because they wanted to work with cool new toys; that was the actual reason, while they use Electron for their "desktop" app.

Meanwhile, companies ship production-quality firmware for IoT secure keys written in Go.
Aug 07 2022
next sibling parent ryuukk_ <ryuukk.dev gmail.com> writes:
On Sunday, 7 August 2022 at 22:39:24 UTC, Paulo Pinto wrote:
 On Sunday, 7 August 2022 at 21:25:57 UTC, ryuukk_ wrote:
 On Sunday, 7 August 2022 at 21:17:50 UTC, max haughton wrote:
 On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:
 [...]
It's actually 69.420% of all software in the world
Exactly, hence why this quote is bullshit.

But nobody wants to understand the problems anymore:

https://discord.com/blog/why-discord-is-switching-from-go-to-rust

Let's miss every opportunity to catch market share.
Discord switched to Rust because they wanted to work with cool new toys; that was the actual reason, while they use Electron for their "desktop" app.

Meanwhile, companies ship production-quality firmware for IoT secure keys written in Go.
Or maybe they wanted to reduce the server bill?
Aug 07 2022
prev sibling next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Sunday, 7 August 2022 at 22:39:24 UTC, Paulo Pinto wrote:
 Discord switched to Rust, because they wanted to work in cool 
 new toys, that was the actual reason, while they use Electron 
 for their "desktop" app.
I don't know what their reasoning was, but a tracing GC typically needs roughly twice as much memory to perform well. But yeah, chat is not a low-latency application.
Aug 08 2022
prev sibling parent reply Don Allen <donaldcallen gmail.com> writes:
On Sunday, 7 August 2022 at 22:39:24 UTC, Paulo Pinto wrote:
 On Sunday, 7 August 2022 at 21:25:57 UTC, ryuukk_ wrote:
 On Sunday, 7 August 2022 at 21:17:50 UTC, max haughton wrote:
 On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:
 [...]
It's actually 69.420% of all software in the world
Exactly, hence why this quote is bullshit.

But nobody wants to understand the problems anymore:

https://discord.com/blog/why-discord-is-switching-from-go-to-rust

Let's miss every opportunity to catch market share.
Discord switched to Rust because they wanted to work with cool new toys; that was the actual reason, while they use Electron for their "desktop" app.

Meanwhile, companies ship production-quality firmware for IoT secure keys written in Go.
I think this kind of start-with-the-desired-conclusion-and-work-backwards thinking is alarmingly prevalent in the computing world (and on the Supreme Court). It is certainly a requirement for being a Rust fan-boy.

But I can tell you that I saw this kind of thing 50+ years ago (human nature just doesn't change), when performance measurement was my specialty. I constantly ran into people who "just knew" why certain code, even their code, performed as it did. Measurements (evidence) were unnecessary. I could tell you many war stories where these people were dead wrong (almost always), even about the behavior of their own code.
Aug 08 2022
parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Mon, Aug 08, 2022 at 07:11:46PM +0000, Don Allen via Digitalmars-d wrote:
 On Sunday, 7 August 2022 at 22:39:24 UTC, Paulo Pinto wrote:
[...]
 Discord switched to Rust, because they wanted to work in cool new
 toys, that was the actual reason, while they use Electron for their
 "desktop" app.
 
 Meanwhile companies ship production quality firmware for IoT secure
 keys written in Go.
 I think this kind of start-with-the-desired-conclusion-and-work-backwards
 thinking is alarmingly prevalent in the computing world (and on the
 Supreme Court). It is certainly a requirement for being a Rust fan-boy.

 But I can tell you that I saw this kind of thing 50+ years ago (human
 nature just doesn't change), when performance measurement was my
 specialty. I constantly ran into people who "just knew" why certain
 code, even their code, performed as it did. Measurements (evidence)
 were unnecessary. I could tell you many war stories where these people
 were dead wrong (almost always), even about the behavior of their own
 code.
Once upon a time, I was one of those guilty as charged. I cherished my l33t C skillz, hand-tweaked every line of code in fits of premature optimization, and "just knew" my code would be faster if I wrote `x++` instead of `x = x + 1`, ad nauseam.

Then one day, I ran a profiler. It revealed the performance bottleneck was somewhere *completely* different from where I thought it was. (It was a stray debug printf that I'd forgotten to remove after fixing a bug.) Deleting that one line of code boosted my performance MAGNITUDES more than countless hours of sweating over every line of code to "squeeze all the juice out of the machine".

That was only the beginning; the first dawning of the gradual realization that I was actually WRONG about the performance of my code. Most of the time. Although one can make educated guesses about where the bottleneck is, without hard proof from a profiler you're just groping in the dark. And most of the time you're wrong.

Eventually, I learned (the hard way) that most real-world bottlenecks are (1) not where you expect them to be, and (2) can be largely alleviated with a small code change. Straining through every line of code is 99.9% of the time unnecessary (and an unproductive use of time). Always profile, profile, profile. Only optimize what the profiler reveals, don't bother with the rest.

That's why these days, I don't pay much attention to people complaining about how this or that is "too inefficient" or "too slow". Show me actual profiler measurements, and I might pay more attention. Otherwise, I just consign it to the premature optimization bin.

T

-- 
In theory, software is implemented according to the design that has been carefully worked out beforehand. In practice, design documents are written after the fact to describe the sorry mess that has gone on before.
Aug 08 2022
parent reply Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Monday, 8 August 2022 at 19:49:16 UTC, H. S. Teoh wrote:
<snip>
 That's why these days, I don't pay much attention to people 
 complaining about how this or that is "too inefficient" or "too 
 slow".  Show me actual profiler measurements, and I might pay 
 more attention. Otherwise, I just consign it to the premature 
 optimization bin.
Exactly. That's why I always refer to C++ and Rust-for-performance (Rust for safety is a bit different) as POOP languages: Premature Optimization Oriented Programming languages.
Aug 09 2022
parent reply ryuukk_ <ryuukk.dev gmail.com> writes:
On Tuesday, 9 August 2022 at 14:03:47 UTC, Patrick Schluter wrote:
 On Monday, 8 August 2022 at 19:49:16 UTC, H. S. Teoh wrote:
 <snip>
 That's why these days, I don't pay much attention to people 
 complaining about how this or that is "too inefficient" or 
 "too slow".  Show me actual profiler measurements, and I might 
 pay more attention. Otherwise, I just consign it to the 
 premature optimization bin.
Exactly. That's why I always refer to C++ and Rust-for-performance (Rust for safety is a bit different) as POOP languages: Premature Optimization Oriented Programming languages.
GC is also a premature optimization.

Do you need it when you write a one-step CLI tool? No you don't, and DMD disables it.

The key is to understand your domain and pick the right tool for the job. We should not fall into the trap of using a screwdriver for everything; our strength is our ability to have a GC, but also to stray away from it whenever your domain requires it, and vice versa.

That's in the vision document, and Atila perfectly explained it at DConf. And the Doom example from Manu is the perfect real-world use case for my point.

Some interesting thread: https://twitter.com/TheGingerBill/status/1556961078252343296
Aug 09 2022
parent reply max haughton <maxhaton gmail.com> writes:
On Tuesday, 9 August 2022 at 14:36:13 UTC, ryuukk_ wrote:
 On Tuesday, 9 August 2022 at 14:03:47 UTC, Patrick Schluter 
 wrote:
 On Monday, 8 August 2022 at 19:49:16 UTC, H. S. Teoh wrote:
 <snip>
 [...]
Exactly. That's why I always call the C++ and Rust for performance (Rust for safety is a bit different) as POOP languages: Premature Optimization Oriented Programming languages.
GC is also a premature optimization.

Do you need it when you write a one-step CLI tool? No you don't, and DMD disables it.

The key is to understand your domain and pick the right tool for the job. We should not fall into the trap of using a screwdriver for everything; our strength is our ability to have a GC, but also to stray away from it whenever your domain requires it, and vice versa.

That's in the vision document, and Atila perfectly explained it at DConf. And the Doom example from Manu is the perfect real-world use case for my point.

Some interesting thread: https://twitter.com/TheGingerBill/status/1556961078252343296
dmd not freeing by default is/was a bad idea. The memory usage on large projects is catastrophic.

So just enable the GC? In theory yes, but in practice people hold references to stuff all over the place, so the GC often can't actually free anything.
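A contrived sketch of that failure mode (the names are made up for illustration): a single live slice into a large block keeps the whole block alive through a collection.

```d
import core.memory : GC;

int[] cache;  // a long-lived global root, as in "references all over the place"

void main()
{
    auto big = new int[](1_000_000);
    cache = big[0 .. 10];  // keeping a tiny slice still points into the big block
    big = null;

    // The collector cannot free the million-int block: `cache` pins
    // the entire allocation, not just the 10 elements it references.
    GC.collect();
    assert(cache.length == 10);
}
```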
Aug 09 2022
parent reply monkyyy <crazymonkyyy gmail.com> writes:
On Tuesday, 9 August 2022 at 16:32:09 UTC, max haughton wrote:
 dmd not freeing by default is/was a bad idea. The memory usage
Hmmmm; isn't the D compiler pretty quick, and fairly good about not crashing despite having a small team? Why isn't the natural conclusion that it looks like it worked out; just correct?
Aug 09 2022
next sibling parent reply ryuukk_ <ryuukk.dev gmail.com> writes:
On Tuesday, 9 August 2022 at 23:12:33 UTC, monkyyy wrote:
 On Tuesday, 9 August 2022 at 16:32:09 UTC, max haughton wrote:
 dmd not freeing by default is/was a bad idea. The memory usage
Hmmmm; isn't the D compiler pretty quick, and fairly good about not crashing despite having a small team? Why isn't the natural conclusion that it looks like it worked out; just correct?
Exactly.

```
-lowmem    Enable the garbage collector for the compiler, reducing the
           compiler memory requirements but increasing compile times.
```

Having control over your memory allocation strategy is what's important. Hence forcing one on users is a bad idea when you need that little performance boost that ends up being your killer feature (fast compile speed).
Aug 09 2022
parent Paulo Pinto <pjmlp progtools.org> writes:
On Tuesday, 9 August 2022 at 23:41:04 UTC, ryuukk_ wrote:
 On Tuesday, 9 August 2022 at 23:12:33 UTC, monkyyy wrote:
 On Tuesday, 9 August 2022 at 16:32:09 UTC, max haughton wrote:
 dmd not freeing by default is/was a bad idea. The memory usage
Hmmmm; isn't the D compiler pretty quick, and fairly good about not crashing despite having a small team? Why isn't the natural conclusion that it looks like it worked out; just correct?
Exactly.

```
-lowmem    Enable the garbage collector for the compiler, reducing the
           compiler memory requirements but increasing compile times.
```

Having control over your memory allocation strategy is what's important. Hence forcing one on users is a bad idea when you need that little performance boost that ends up being your killer feature (fast compile speed).
molasses by having their compilers using GC.
Aug 09 2022
prev sibling parent reply max haughton <maxhaton gmail.com> writes:
On Tuesday, 9 August 2022 at 23:12:33 UTC, monkyyy wrote:
 On Tuesday, 9 August 2022 at 16:32:09 UTC, max haughton wrote:
 dmd not freeing by default is/was a bad idea. The memory usage
Hmmmm; isn't the D compiler pretty quick, and fairly good about not crashing despite having a small team? Why isn't the natural conclusion that it looks like it worked out; just correct?
Your "natural" conclusion is based off a biased sample
Aug 09 2022
parent reply monkyyy <crazymonkyyy gmail.com> writes:
On Tuesday, 9 August 2022 at 23:46:57 UTC, max haughton wrote:
 On Tuesday, 9 August 2022 at 23:12:33 UTC, monkyyy wrote:
 On Tuesday, 9 August 2022 at 16:32:09 UTC, max haughton wrote:
 dmd not freeing by default is/was a bad idea. The memory usage
Hmmmm; isn't the D compiler pretty quick, and fairly good about not crashing despite having a small team? Why isn't the natural conclusion that it looks like it worked out; just correct?
Your "natural" conclusion is based off a biased sample
That's a fairly self-serving take. If it was as bad as you're implying, even a good team would have fucked it up; no?
Aug 10 2022
parent reply max haughton <maxhaton gmail.com> writes:
On Wednesday, 10 August 2022 at 19:15:46 UTC, monkyyy wrote:
 On Tuesday, 9 August 2022 at 23:46:57 UTC, max haughton wrote:
 On Tuesday, 9 August 2022 at 23:12:33 UTC, monkyyy wrote:
 On Tuesday, 9 August 2022 at 16:32:09 UTC, max haughton wrote:
 dmd not freeing by default is/was a bad idea. The memory 
 usage
Hmmmm; isn't the D compiler pretty quick, and fairly good about not crashing despite having a small team? Why isn't the natural conclusion that it looks like it worked out; just correct?
Your "natural" conclusion is based off a biased sample
That's a fairly self-serving take. If it was as bad as you're implying, even a good team would have fucked it up; no?
It's easy to get wrong, but I think you can avoid most of the "bloat" (most of this isn't so much bloat as in wasted space as wasted cycles, which I think requires a slightly more nuanced discussion) by having cycle-counting and hard upper bounds on runtimes in CI.
Aug 10 2022
parent reply monkyyy <crazymonkyyy gmail.com> writes:
On Wednesday, 10 August 2022 at 20:23:17 UTC, max haughton wrote:
 On Wednesday, 10 August 2022 at 19:15:46 UTC, monkyyy wrote:
 On Tuesday, 9 August 2022 at 23:46:57 UTC, max haughton wrote:
 On Tuesday, 9 August 2022 at 23:12:33 UTC, monkyyy wrote:
 On Tuesday, 9 August 2022 at 16:32:09 UTC, max haughton 
 wrote:
 dmd not freeing by default is/was a bad idea. The memory 
 usage
Hmmmm; isn't the D compiler pretty quick, and fairly good about not crashing despite having a small team? Why isn't the natural conclusion that it looks like it worked out; just correct?
Your "natural" conclusion is based off a biased sample
That's a fairly self-serving take. If it was as bad as you're implying, even a good team would have fucked it up; no?
It's easy to get wrong but I think you can avoid most of the "bloat" (most of this isn't so much bloat as in wasted space as much as wasted cycles which I think requires slightly more nuanced discussion) by having cycle-counting and hard upper bounds on runtimes in CI.
If I look at some of the old std code, the only explanation for why some of it is so terrible is that it was written before there were good D programmers.

Presumably parts of dmd were written before there were good programmers; and if they managed to make a fast compiler (when most compilers are terrible), I would think there are some unique design decisions that deserve some credence.
Aug 10 2022
parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Wed, Aug 10, 2022 at 09:33:27PM +0000, monkyyy via Digitalmars-d wrote:
[...]
 If I look at some of the old std code, the only explanation for why
 some of it is so terrible is that it was written before there were
 good D programmers
[...] Don't forget also that D has changed a lot since its beginning. The language we have today is very different from the language back then, when some of this code was originally written.

Back then, a lot of D's powerful features did not exist yet, so what was considered good style back then is very different from what's considered good style today. A lot of things possible today were not possible back then, so the old code had to be written under different constraints, and could not have taken advantage of the advances in the language that we have today.

For example, the earliest versions of D did not have templates, static if's, CTFE, or compile-time introspection, which today are a large part of what defines D. Can you imagine what kind of code you had to write back then, compared to what we can write today?

T

-- 
Winners never quit, quitters never win. But those who never quit AND never win are idiots.
Aug 10 2022
parent reply monkyyy <crazymonkyyy gmail.com> writes:
On Wednesday, 10 August 2022 at 21:51:21 UTC, H. S. Teoh wrote:
 For example, the earliest versions of D did not have templates, 
 static if's, CTFE, or compile-time introspection, which today 
 are a large part of what defines D.  Can you imagine what kind 
 of code you had to write back then, compared to what we can 
 write today?
Considering I'm almost entirely here for templates, the answer is simple; I wouldn't be here.
Aug 10 2022
parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Wed, Aug 10, 2022 at 09:55:36PM +0000, monkyyy via Digitalmars-d wrote:
 On Wednesday, 10 August 2022 at 21:51:21 UTC, H. S. Teoh wrote:
 For example, the earliest versions of D did not have templates,
 static if's, CTFE, or compile-time introspection, which today are a
 large part of what defines D.  Can you imagine what kind of code you
 had to write back then, compared to what we can write today?
Considering I'm almost entirely here for templates, the answer is simple; I wouldn't be here.
Haha, me too. I heavily use templates, CTFE, static-if, and compile-time introspection (DbI rocks!). Without these, I might as well crawl back to C++. Or swallow the bitter pill and go back to plain old C (as I have to at work).

I was about to say Java, but that pill would be too bitter to take even in the face of C++'s flaws and C's lack of safety, so no. :-P

T

-- 
VI = Visual Irritation
Aug 10 2022
prev sibling next sibling parent Meta <jared771 gmail.com> writes:
On Sunday, 7 August 2022 at 21:25:57 UTC, ryuukk_ wrote:
 On Sunday, 7 August 2022 at 21:17:50 UTC, max haughton wrote:
 It's actually 69.420% of all software in the world
Exactly, hence why this quote is bullshit.

But nobody wants to understand the problems anymore:

https://discord.com/blog/why-discord-is-switching-from-go-to-rust

Let's miss every opportunity to catch market share.
https://i.kym-cdn.com/photos/images/original/000/732/494/c35.gif 😉
Aug 08 2022
prev sibling parent reply wjoe <invalid example.com> writes:
On Sunday, 7 August 2022 at 21:25:57 UTC, ryuukk_ wrote:
 On Sunday, 7 August 2022 at 21:17:50 UTC, max haughton wrote:
 On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:
 On Sunday, 7 August 2022 at 17:23:52 UTC, Paulo Pinto wrote:
 [...]
That's kinda bullshit; it depends on the GC implementation.

D's GC is not good for 99.99% "of all software in the world"; it's wrong to say this, and it's misleading.

Java's are, because Java offers multiple implementations that you can configure, and together they cover a wide range of use cases.

D's GC is not a panacea. It's nice to have, but it's not something to brag about, especially when it STILL stops the world during collection and is STILL not scalable.

Go did it right by focusing on low latency and parallelism; we should copy their GC.
It's actually 69.420% of all software in the world
Exactly, hence why this quote is bullshit.

But nobody wants to understand the problems anymore:

https://discord.com/blog/why-discord-is-switching-from-go-to-rust

Let's miss every opportunity to catch market share.
I don't see how that is related.

According to the investigation described in the article you linked, Go's GC is set up to run every 2 minutes, no questions asked. That's not true for D's GC. Instead of jumping on the Rust hype train, they could have forked Go's GC and solved the actual performance problem: the forced 2-minute GC run.

As far as D's default GC is concerned: last time I checked, it only runs a collection cycle on an allocation. Further, once the GC has allocated the memory from the OS, it won't release it back until the program terminates. This means that the GC can re-alloc previously allocated, but now collected, memory basically for free, because there's no context switch into the kernel and back, which may have the additional cost of reloading cache lines. But all of this depends on a lot of factors, so this may or may not be a big deal.

Also, when you run your own memory management, you need to keep in mind that your manual call to *alloc/free is just as expensive as if the GC calls it. You also need to keep in mind that your super fast allocator (as in the lib/system call you use to allocate the memory) may not actually allocate the memory on your call; the real allocation may be deferred until such time when the memory is actually accessed, which may cause lag akin to that of a collection cycle, depending on the amount of memory you allocate.

It's possible to pre-allocate memory with a GC, re-use those buffers, and slice them as you see fit, without ever triggering a collection cycle. You can also disable garbage collection for D's GC in hot areas.

IME the GC saves a lot of headaches, much more than it causes, and I'd much rather have more convenience in communicating my intentions to the GC than clutter every API with allocator parameters. Something like:

```
 GC(DND) // Do Not Disturb
{
  foreach (...)
    // hot code goes here and no collection cycles will happen
}
```

or,

```
void load_assets()
{
  // allocate, load stuff, etc..
   GC(collect); // lag doesn't matter here
}
```
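And the pre-allocate-and-slice part is already expressible today without any new syntax; a sketch (the `grab` helper is made up for illustration):

```d
import core.memory : GC;

void main()
{
    // One up-front GC allocation; everything afterwards slices this pool.
    ubyte[] pool = new ubyte[](1024 * 1024);
    size_t used = 0;

    // Hand out slices of the pool instead of allocating; no new GC
    // allocation happens, so no collection cycle can be triggered.
    ubyte[] grab(size_t n)
    {
        auto s = pool[used .. used + n];
        used += n;
        return s;
    }

    GC.disable();             // belt and braces: no collections in the hot path
    scope(exit) GC.enable();

    auto header  = grab(64);
    auto payload = grab(4096);
    assert(header.length == 64 && payload.length == 4096);
}
```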
Aug 08 2022
next sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Monday, 8 August 2022 at 15:05:49 UTC, wjoe wrote:
 ```
  GC(DND)//Do Not Disturb
 {
   foreach (...)
     // hot code goes here and no collection cycles will happen
 }
 ```
 or,
 ```
 void load_assets()
 {
   // allocate, load stuff, etc..
    GC(collect); // lag doesn't matter here
 }
 ```
This is possible using the `GC` API in `core.memory`: ```d { import core.memory: GC; GC.disable(); scope(exit) GC.enable(); foreach (...) // hot code goes here } ``` ```d void load_assets() { import core.memory: GC; // allocate, load stuff, etc.. GC.collect(); } ```
Aug 08 2022
next sibling parent reply wjoe <invalid example.com> writes:
On Monday, 8 August 2022 at 15:25:40 UTC, Paul Backus wrote:
 This is possible using the `GC` API in `core.memory`:

 ```d
 {
     import core.memory: GC;

     GC.disable();
     scope(exit) GC.enable();

     foreach (...)
         // hot code goes here
 }
 ```

 ```d
 void load_assets()
 {
     import core.memory: GC;

     // allocate, load stuff, etc..
     GC.collect();
 }
 ```
Yes, but it's more typing, and it requires an import.
No intention to complain; just saying, convenience and such. :)
Aug 08 2022
parent reply Paul Backus <snarwin gmail.com> writes:
On Monday, 8 August 2022 at 15:51:11 UTC, wjoe wrote:
 Yes, but more typing and it requires an import.
 No intention to complain; just saying convenience and such. :)
These days, new attributes are added to the `core.attribute` module rather than being available globally, so if the ` GC(...)` syntax were added, it would also require an import. :)
Aug 08 2022
parent wjoe <invalid example.com> writes:
On Monday, 8 August 2022 at 17:25:11 UTC, Paul Backus wrote:
 On Monday, 8 August 2022 at 15:51:11 UTC, wjoe wrote:
 Yes, but more typing and it requires an import.
 No intention to complain; just saying convenience and such. :)
These days, new attributes are added to the `core.attribute` module rather than being available globally, so if the ` GC(...)` syntax were added, it would also require an import. :)
`@GC(...)` was not supposed to be an attribute; it's more akin to a pragma, but for the GC. Literally reading: "At GC: Do not disturb", "At GC: collect now" (or, for those who'd prefer a polite rather than a commanding tone: "At GC: Please don't disturb", "At GC: Would you kindly collect the garbage now.").

There are hints in the language and libraries which indicate a desire among programmers for a programming language to read or sound like a natural, spoken language. That's where the (at) came from. :)
Aug 11 2022
prev sibling parent Johan <j j.nl> writes:
On Monday, 8 August 2022 at 15:25:40 UTC, Paul Backus wrote:
 This is possible using the `GC` API in `core.memory`:

 ```d
 {
     import core.memory: GC;

     GC.disable();
     scope(exit) GC.enable();

     foreach (...)
         // hot code goes here
 }
 ```

 ```d
 void load_assets()
 {
     import core.memory: GC;

     // allocate, load stuff, etc..
     GC.collect();
 }
 ```
I'll join this week's coffee corner talk about GC.

At ASML, Julia is now used on (part of) the machine. The machine is a time-critical production system (you all want more chips, right? ;), and GC was apparently one of the main concerns. They solved it using the manual `GC.disable` / `GC.collect` approach.

https://pretalx.com/juliacon-2022/talk/GUQBSE/

I work on hardware at ASML and am not involved with software development for the scanner, so I do not know any details, but I found it quite interesting to see that Julia is used in this way.

-Johan
Aug 10 2022
prev sibling parent reply ryuukk_ <ryuukk.dev gmail.com> writes:
On Monday, 8 August 2022 at 15:05:49 UTC, wjoe wrote:
 On Sunday, 7 August 2022 at 21:25:57 UTC, ryuukk_ wrote:
 On Sunday, 7 August 2022 at 21:17:50 UTC, max haughton wrote:
 On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:
 On Sunday, 7 August 2022 at 17:23:52 UTC, Paulo Pinto wrote:
 [...]
That's kinda bullshit, it depends on the GC implementation

D's GC is not good for 99.99% "of all software in the world", it's wrong to say this, and is misleading

Java's ones are, because they offer multiple implementations that you can configure and then they cover a wide range of use cases

D's GC is not the panacea, it's nice to have, but it's not something to brag about, specially when it STILL stops the world during collection, and is STILL not scalable

Go did it right by focusing on low latency, and parallelism, we should copy their GC
It's actually 69.420% of all software in the world
Exactly, hence why this quote is bullshit.

But nobody wants to understand the problems anymore.

https://discord.com/blog/why-discord-is-switching-from-go-to-rust

Let's miss every opportunity to catch market share.
I don't see how that is related. According to the investigation they described in the article you linked, Go's GC is set up to run every 2 minutes, no questions asked. That's not true for D's GC. Instead of jumping on the Rust hype train they could have forked Go's GC and solved the actual performance problem: the forced 2-minute GC run.

As far as D's default GC is concerned: last time I checked, it only runs a collection cycle on an allocation. Further, once the GC has allocated memory from the OS, it won't release it back until the program terminates. This means that the GC can re-allocate previously allocated, but now collected, memory basically for free, because there's no context switch into the kernel and back, which may carry the additional cost of reloading cache lines. But all of this depends on a lot of factors, so it may or may not be a big deal.

Also, when you run your own memory management, you need to keep in mind that your manual call to *alloc/free is just as expensive as if the GC calls it. You also need to keep in mind that your super fast allocator (as in the lib/system call you use to allocate the memory) may not actually allocate the memory on your call; the real allocation may be deferred until the memory is actually accessed, which may cause lag akin to that of a collection cycle, depending on the amount of memory you allocate.

It's possible to pre-allocate memory with a GC, re-use those buffers, and slice them as you see fit, without ever triggering a collection cycle. You can also disable garbage collection for D's GC in hot areas.

IME the GC saves a lot of headaches, many more than it causes, and I'd much rather have more convenience in communicating my intentions to the GC than clutter every API with allocator parameters. Something like:

```
@GC(DND) // Do Not Disturb
{
  foreach (...)
    // hot code goes here and no collection cycles will happen
}
```
or,
```
void load_assets()
{
  // allocate, load stuff, etc..
  @GC(collect); // lag doesn't matter here
}
```
I'm not on the anti-GC train; I use it myself in some of my projects and find it very useful to have.

The point I am trying to make is that D has the capabilities to provide a solution both to GC users and to people whose performance constraints prohibit the use of a GC.

But for some reason, people in the community only focus on the GC and disregard everything else, preventing me from properly advertising D as a pragmatic solution.

That's it
Aug 08 2022
parent reply wjoe <invalid example.com> writes:
On Monday, 8 August 2022 at 16:59:20 UTC, ryuukk_ wrote:
 On Monday, 8 August 2022 at 15:05:49 UTC, wjoe wrote:
 [...]

 On Sunday, 7 August 2022 at 21:25:57 UTC, ryuukk_ wrote:

 I'm not on the anti-GC train, i use it myself in some of my 
 projects, i find it very useful to have
I don't understand where in my post I implied that you are on the anti-GC train.

You brought up the Discord thing to make the point that GCs have bottlenecks and allocators are the solution? However, their solution to a GC bottleneck was not to fix it by using allocators, but to dump their Go code in favor of a Rust port.

Neither can I see a connection between Go's GC's bottleneck and D's, nor how allocators would fix it. Go's GC would still run a collection every 2 minutes. So you would either need a GC.disable to stop it from collecting where you can't suffer the impact, with a GC.collect to run it when it suits you, or you would need to completely disable the GC. Or you port your entire code base to a non-GC language.
 The point i am trying to make is D has the capabilities to 
 provide a solution to both GC users and people whose 
 performance constraints prohibit the use of a GC

 But for some reason, people in the community only focus on the 
 GC, and disregard anything else, preventing me to properly 
 advertise D as a pragmatic solution
Maybe because the GC has no disadvantage to them. But that's my own guess.

If you absolutely can't suffer a GC, there's -betterC, in case you didn't know.

I guess it's an ignorant point of view, but I don't see how someone whose constraints for writing high-performance, and/or real-time, and/or embedded/micro-architecture code prohibit the use of a GC would find D-Runtime/Phobos meeting their requirements. There's at least one lightweight implementation of D-Runtime. And Phobos? Everything that allocates or is templated will likely be too slow/bloated in that case, thus a specialized solution that takes advantage of context seems to be necessary.
Aug 11 2022
next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 12/08/2022 1:20 AM, wjoe wrote:
 I guess it's an ignorant point of view, but I don't see how someone 
 whose constraints for writing high-performance, and/or real-time, and/or 
 embedded/micro-architecture code, which prohibits the use of a GC, would 
 find D-Runtime/Phobos meeting their requirements.
There have been multiple users in this category of D who have successfully made production systems with the GC linked in.

All memory allocations are expensive and can fail; if you want performant safe code you have no choice but to prevent allocating memory.

It does not matter what language you use: if you allocate, you slow your code down.
Aug 11 2022
next sibling parent reply wjoe <invalid example.com> writes:
On Thursday, 11 August 2022 at 13:26:21 UTC, rikki cattermole 
wrote:
 On 12/08/2022 1:20 AM, wjoe wrote:
 I guess it's an ignorant point of view, but I don't see how 
 someone whose constraints for writing high-performance, and/or 
 real-time, and/or embedded/micro-architecture code, which 
 prohibits the use of a GC, would find D-Runtime/Phobos meeting 
 their requirements.
There have been multiple users in this category of D who have successfully made production systems with the GC linked in.
I was referring to the "someone who has a constraint that prohibits the use of a GC" part, and my reasoning is that someone with such a use case probably wouldn't find solutions in *Phobos* meeting their demands, because the result would be too bloated/slow. And as such they would probably tailor their own optimized solutions.

At no point did I claim that it's impossible to write production systems in D with a GC linked in.
 All memory allocations are expensive and can fail, if you want 
 performant safe code you have no choice but to prevent 
 allocating memory.

 It does not matter what language you use, if you allocate you 
 slow your code down.
Thank you. That's what I'm saying - or trying to say, at least.
Aug 11 2022
parent rikki cattermole <rikki cattermole.co.nz> writes:
On 12/08/2022 1:54 AM, wjoe wrote:
 All memory allocations are expensive and can fail, if you want 
 performant safe code you have no choice but to prevent allocating memory.

 It does not matter what language you use, if you allocate you slow 
 your code down.
Thank you. That's what I'm saying - or trying to say, at least.
Pretty much all programs have to allocate at some point during their lifecycle.

If you understand this, using the GC during the phases where performance doesn't matter is fine, and that is what people who use D in this sort of environment do.
Aug 11 2022
prev sibling parent Don Allen <donaldcallen gmail.com> writes:
On Thursday, 11 August 2022 at 13:26:21 UTC, rikki cattermole 
wrote:
 On 12/08/2022 1:20 AM, wjoe wrote:
 I guess it's an ignorant point of view, but I don't see how 
 someone whose constraints for writing high-performance, and/or 
 real-time, and/or embedded/micro-architecture code, which 
 prohibits the use of a GC, would find D-Runtime/Phobos meeting 
 their requirements.
There have been multiple users in this category of D who have successfully made production systems with the GC linked in. All memory allocations are expensive and can fail, if you want performant safe code you have no choice but to prevent allocating memory. It does not matter what language you use, if you allocate you slow your code down.
Yes. The important question is: does it matter?
Aug 11 2022
prev sibling parent reply IGotD- <nise nise.com> writes:
On Thursday, 11 August 2022 at 13:20:13 UTC, wjoe wrote:
 I guess it's an ignorant point of view, but I don't see how 
 someone whose constraints for writing high-performance, and/or 
 real-time, and/or embedded/micro-architecture code, which 
 prohibits the use of a GC, would find D-Runtime/Phobos meeting 
 their requirements.
 There's at least one lightweight implementation of D-Runtime.
 And Phobos? Everything that allocates or is templated will 
 likely be too slow/bloated in that case, thus a specialized 
 solution that takes advantage of context seems to be necessary.
Yes, real-time code today will avoid GC altogether. Likely it will use custom everything: specialized allocators, caches and container algorithms, all in order to avoid allocating from a heap as well as to avoid memory fragmentation. Many times real-time systems have a real-time part but also a non-real-time part, which often runs a rich OS like Linux. Services in the rich-OS part can usually use GC without any problems.

In terms of computer games, correct me if I'm wrong, but GC will become the norm there. The reason is that computer games are becoming more and more advanced, and there is so much stuff going on that GC becomes a lesser problem compared to everything else in terms of performance. If I had gone to a computer game technical manager 20 years ago and said I wanted to use GC, he would probably have killed me. Today if I said the same, he would say it's probably ok.

I'm not that convinced by the Doom example, as there seems to be a confusion between caches and GC. You are welcome to come up with more examples of early computer games that used GC.
Aug 11 2022
parent Paulo Pinto <pjmlp progtools.org> writes:
On Thursday, 11 August 2022 at 20:55:23 UTC, IGotD- wrote:
 On Thursday, 11 August 2022 at 13:20:13 UTC, wjoe wrote:
 [...]
Yes, real time code today will avoid GC all together. Likely it will use custom everything, like specialized allocators, caches and container algorithms. All in order to avoid memory allocation from a heap as well as avoiding memory fragmentation. Many times real time systems has a real time part but also a non real time part which often runs a rich OS like Linux. Services in the rich OS part can usually use GC without any problems. In terms of computer games, correct me if I'm wrong but GC will become the norm there. The reason is that computer games are becoming more and more advanced and there is so much stuff going on that GC becomes a lesser problem compared to everything else in terms of performance. Probably if I came to a computer game technical manager in 20 years and said I wanted to use GC, he would probably kill me. Today if I said the same, he would say it's probably ok. I'm not that convinced by the Doom example as there seem to be a confusion between caches and GC. You are welcome to come with more examples of early computer games that used GC.
Unless the game doesn't support any kind of scripting language, there will be some level of GC taking place: Blueprints, Lua, Python, GDScript, ...

And then there are the engines that expose it even at the engine layer itself: Unreal, SceneKit, Stride, MonoGame, PlayCanvas, BabylonJS, ...
Aug 11 2022
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/7/2022 1:43 PM, ryuukk_ wrote:
 That's kinda bullshit, it depends on the GC implementation
I expect his opinions on the GC to reflect the implementations he's used, not ours.
 D's GC is not the panacea, it's nice to have, but it's not something to brag 
 about, specially when it STILL stop the world during collection, and is STILL 
 not scalable
Any implementation strategy is based on tradeoffs. D's are:

1. To run in mixed-language instances, it must work with C code with a frictionless interface.

2. To coexist peacefully with code that extensively uses other allocation strategies.

3. To have zero parasitic overhead.

The requirements of Go and Java are quite different, which drives their different strategies.
Aug 07 2022
prev sibling parent reply IGotD- <nise nise.com> writes:
On Sunday, 7 August 2022 at 20:43:32 UTC, ryuukk_ wrote:
 That's kinda bullshit, it depends on the GC implementation

 D's GC is not good for 99.99% "of all software in the world", 
 it's wrong to say this, and is misleading

 Java's ones are, because they offer multiple implementations 
 that you can configure and the, they cover a wide range of use 
 cases

 D's GC is not the panacea, it's nice to have, but it's not 
 something to brag about, specially when it STILL stop the world 
 during collection, and is STILL not scalable

 Go did it right by focusing on low latency, and parallelism, we 
 should copy their GC
D made the serious mistake of having raw pointers in the default language (even in safe mode) rather than opaque references. This means that D cannot as easily offer different GC algorithms the way other languages can.

If D had opaque references, we would have seen more GC types that fit more needs.

D3 needs to happen so that we can correct these serious flaws.
Aug 08 2022
parent reply jmh530 <john.michael.hall gmail.com> writes:
On Monday, 8 August 2022 at 15:07:47 UTC, IGotD- wrote:
 [snip]

 D did the serious mistake by having raw pointers in the default 
 language (even in safe mode) rather than opaque references. 
 This means that D cannot just as easily offer different GC 
 algorithms like other languages.

 If D would have opaque references then we would have seen more 
 different GC types that would fit more needs.

 D3 needs to happen so that we can correct these serious flaws.
It is a bit of a design trade-off though. If you have two separate pointer types, then a function that takes a pointer of one has to have an overload to get the second one working. Some kind of type erasure would be useful to prevent template bloat.
Aug 08 2022
next sibling parent IGotD- <nise nise.com> writes:
On Monday, 8 August 2022 at 15:39:16 UTC, jmh530 wrote:
 It is a bit of a design trade-off though. If you have two 
 separate pointer types, then a function that takes a pointer of 
 one has to have an overload to get the second one working. Some 
 kind of type erasure would be useful to prevent template bloat.
Yes, as always there is a trade-off. In practice you would almost never use them outside of special cases. Problems start to arise when programs and shared libraries are compiled with different GCs.

One thing I have personally noticed: having a pointer to the free function in the managed pointer type makes it very versatile. Even changing the GC at runtime becomes possible. With managed pointers only, the world would open up for us to experiment with things like this.
Aug 08 2022
prev sibling parent reply Tejas <notrealemail gmail.com> writes:
On Monday, 8 August 2022 at 15:39:16 UTC, jmh530 wrote:
 On Monday, 8 August 2022 at 15:07:47 UTC, IGotD- wrote:
 [snip]

 D did the serious mistake by having raw pointers in the 
 default language (even in safe mode) rather than opaque 
 references. This means that D cannot just as easily offer 
 different GC algorithms like other languages.

 If D would have opaque references then we would have seen more 
 different GC types that would fit more needs.

 D3 needs to happen so that we can correct these serious flaws.
It is a bit of a design trade-off though. If you have two separate pointer types, then a function that takes a pointer of one has to have an overload to get the second one working. Some kind of type erasure would be useful to prevent template bloat.
Isn't this already kinda there with `T*` and `ref T`? Let's just go even farther and call `T*` unmanaged and `ref T` managed, imo
Aug 14 2022
next sibling parent IGotD- <nise nise.com> writes:
On Sunday, 14 August 2022 at 07:29:26 UTC, Tejas wrote:
 Isn't this already kinda there with `T*` and `ref T`? Let's 
 just go even farther and call `T*` unmanaged and `ref T` 
 managed, imo
I tend to mix up references and managed pointers in the text, which is wrong. The reason I sometimes refer to managed pointers as references is that Rust named its lifetime pointers "references". It should really be managed pointers or fat pointers.

References in D are similar to C++ and have nothing to do with memory management. Managed pointers in D require their own type.
Aug 14 2022
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/14/2022 12:29 AM, Tejas wrote:
 Isn't this already kinda there with `T*` and `ref T`? Let's just go even farther
 and call `T*` unmanaged and `ref T` managed, imo
`ref` pointers have an additional property that they cannot escape their scope.
Aug 15 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
I expected Carmack's view to be practical and make perfect sense. I'm pleased to be right!

Anything he has to say about writing code is worth listening to.
Aug 07 2022
parent reply Ethan <gooberman gmail.com> writes:
On Monday, 8 August 2022 at 00:57:52 UTC, Walter Bright wrote:
 I expected Carmack's view to be practical and make perfect 
 sense. I'm pleased to be right!

 Anything he has to say about writing code is worth listening 
 too.
Replying to this to emphasise the point.

You know, one of the advantages of Carmack releasing his code is that you can see for yourself what his views on GCs are.

https://github.com/id-Software/DOOM/blob/master/linuxdoom-1.10/z_zone.c

I've spent a lot of time in the Doom source code, and what I've linked here is the Zone allocator. No allocation in the code is done outside of this; it all goes through zones.

Allocating with the PU_STATIC tag is the equivalent of a manual malloc, ie you need to free it yourself. Where it gets interesting, though, is the PU_LEVEL and PU_CACHE tags. These are garbage collected zones that you don't need to free yourself. The level stuff has persistence exactly as long as it takes to reload a level (be it through death/new game/load game/level warp/etc). The Z_FreeTags function is used on such a reload to deallocate anything with the PU_LEVEL tag.

PU_CACHE is a bit more fun: if the allocator runs out of memory in the free pool, it'll just plain grab something that's marked PU_CACHE. As such, you have no guarantee that any PU_CACHE memory is valid after the next call to the allocator. This is used for textures, in fact, and is how the game both didn't crash out on no memory on low-spec 386s back in the day _and_ why that disk loading icon showed up so frequently on such a system.

So tl;dr is that there's tactical usage of non-GC _AND_ GC memory in Doom. And since it's a C application, there's no concern about destructors. Code is structured in such a way that an init path will be called before attempting to access GC memory again, and the system keeps itself together. That was also nearly 30 years ago now. And as such, I always laugh whenever someone tries to tell me GC has no place in video games.

I also shipped a game last year where the GC was a major pain. Unreal Engine's GC is a mark-and-sweep collector, not too dissimilar in theory to Rainer's GC (written for Visual D). And we had to do things to it to make it not freeze the game for 300+ milliseconds. Given that we need to present a new frame every 16.666666... milliseconds, that's unacceptable. The solution worked (amortize the collect) but it's not really ideal.

I have been meaning to sit down and work on a concurrent GC, but LOLNO as if I have the time.

Separately though:
 there is a fork based GC available
Can we stop talking about this please? It's a technological dead end, unusable on a large number of computers and mobile devices. If that's the best we've got, it's really not good enough.
Aug 08 2022
next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 8/8/22 6:32 PM, Ethan wrote:

 I have been meaning to sit down and work on a concurrent GC, but LOLNO 
 as if I have the time.
If you mean concurrent as in it can use multiple threads, that is already happening. If you mean concurrent as in you can allocate and mark/sweep in separate threads independently, that would be a huge improvement.

Even if you just had some way to designate "this one thread can't be paused", and figured out a way to section that off, it would be huge.

-Steve
Aug 08 2022
prev sibling next sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
On 09/08/2022 10:32 AM, Ethan wrote:
 there is a fork based GC available
Can we stop talking about this please? It's a technological dead end, unusable on a large number of computers and mobile devices. If that's the best we've got, it's really not good enough.
I double-checked this (even though we previously discussed it and I believed you), but processsnapshot.h, which is required to do concurrent GCs on Windows, is not available for Xbox.

So yeah, concurrent GCs are out on Xbox. Need write barriers if you want a better GC.
Aug 08 2022
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 8/8/2022 3:32 PM, Ethan wrote:
 Replying to this to emphasise the point.
Thank you for writing this. It's clever and sensible. I expect nothing less from Carmack!
Aug 09 2022
prev sibling parent reply Jack Stouffer <jack jackstouffer.com> writes:
On Monday, 8 August 2022 at 22:32:23 UTC, Ethan wrote:
 So tl;dr is that there's tactical usage of non-GC _AND_ GC 
 memory in Doom.
Calling this garbage collection is watering down the term to near uselessness. This is just manual memory management. I haven't read the code, but from your description it sounds exactly like a bump allocator and a tweaked general-purpose allocator which can reuse allocated memory regions when heap fragmentation becomes an issue.

These days, these techniques should be unremarkable. I have no doubt that this was innovative in '93. But 30 years later, creating a bump allocator for memory which doesn't change much after init, with a known lifetime of use, should be common practice for programmers, especially when most PCs have > 8 GB of RAM.
Aug 10 2022
parent Ethan <gooberman gmail.com> writes:
On Wednesday, 10 August 2022 at 17:09:50 UTC, Jack Stouffer wrote:
 I haven't read the code
Well there's the problem right there. You can compare the code previously linked to an actual bump allocator I wrote for my own branch of the Doom code (that resets every render frame) at https://github.com/GooberMan/rum-and-raisin-doom/blob/master/src/doom/r_main.h#L182-L218
Aug 10 2022