
digitalmars.D.learn - D is nice whats really wrong with gc??

reply Bkoie <bkoie049 gmail.com> writes:
Just look at this. I know it's overdesigned; I'm just trying to get 
a feel for how an API can be designed (I'm still new), but the fact 
that you can build an API like this and it doesn't break is amazing.

But what is it with these people and the GC?
Just don't allocate new memory or invoke the GC.
You can use scopes to temporarily work on immutable slices that 
clean themselves up.
The list goes on.

And you don't need to use pointers at all!
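For example, here's roughly what I mean by doing temporary work on slices without touching the GC (an untested sketch; the function name and buffer size are made up):

```
import core.stdc.stdio : printf;

// temporary work on a slice backed by a fixed stack array:
// nothing here asks the GC for memory, and the buffer dies with the scope
@nogc nothrow void shout(scope const(char)[] word)
{
    char[128] buf;                                   // stack storage, auto cleanup
    auto n = word.length < buf.length ? word.length : buf.length;
    foreach (i, c; word[0 .. n])
        buf[i] = (c >= 'a' && c <= 'z') ? cast(char)(c - 32) : c;
    printf("%.*s!\n", cast(int) n, buf.ptr);         // no GC allocation anywhere
}

void main()
{
    shout("the gc never ran");
}
```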

I honestly see nothing wrong with the GC.

Of course D has some downsides:
the docs are not very good compared to some other languages,
IDE support isn't great but it works sometimes
(I use Helix and Lapce and maybe sometimes IntelliJ;
it works better in Helix though),
and D is missing some minor libraries.

```
import std.stdio: writeln, readln;
struct Game
{
     string title;
     private Board _board;
     private const(Player)[] _players;
     auto load(T)(T any) {
         static if (is(T == Player)) {
             _pushPlayer(any);
         }
         return this;
     }
     auto play() {
         assert(_isPlayersFull, "require players is 2 consider removing");
         "playing the game".writeln;
     }
     auto _end() {}
     auto _currentPlayers() const {return _players.length;}
     enum _playerLimit = 2;
     auto _isPlayersFull() const {return _currentPlayers == _playerLimit;}
     import std.format: format;
     auto _pushPlayer(T : Player)(T any) {
         if (_isPlayersFull) assert(false, "require %s players".format(_playerLimit));
         _players.reserve(_playerLimit);
         _players ~= any;
     }
}
private struct Board {}
enum symbol {none, x, o}
private struct Player {
     const(string) _name;
     symbol _hand;
     @disable this();
     public this(in string n) { _name = n; }
}
alias game = Game;
alias player = Player;
alias board = Board;
void main()
{
     import std.string: strip;
     game()
     .load(player(readln().strip))
     // .matchmake
     .load(player(readln().strip))
     .play;
}
```
Dec 18 2023
next sibling parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Mon, Dec 18, 2023 at 04:44:11PM +0000, Bkoie via Digitalmars-d-learn wrote:
[...]
 But what is it with these people and the GC?
[...]

It's called GC phobia, a knee-jerk reaction malady common among C/C++ programmers (I'm one of them, though I got cured of GC phobia thanks to D :-P). 95% of the time the GC helps far more than it hurts. And the 5% of the time when it hurts, there are plenty of options for avoiding it in D. It's not shoved down your throat like in Java, there's no need to get all worked up about it.

T

--
Computerese Irregular Verb Conjugation: I have preferences. You have biases. He/She has prejudices. -- Gene Wirchenko
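Just to give a flavor of those options (a quick, untested sketch; `@nogc` statically rejects GC allocations, `GC.disable` pauses collections, and the C heap is always there if you want it):

```
import core.memory : GC;
import core.stdc.stdlib : malloc, free;

// @nogc: the compiler refuses anything that might GC-allocate in here
@nogc nothrow int sum(const(int)[] xs)
{
    int total = 0;
    foreach (x; xs) total += x;
    return total;
}

void main()
{
    GC.disable();                     // no collections during this stretch
    scope (exit) GC.enable();

    // plain C heap: never scanned or collected by the GC
    auto p = cast(int*) malloc(4 * int.sizeof);
    scope (exit) free(p);
    auto xs = p[0 .. 4];
    xs[0] = 1; xs[1] = 2; xs[2] = 3; xs[3] = 4;
    assert(sum(xs) == 10);
}
```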
Dec 18 2023
next sibling parent Imperatorn <johan_forsberg_86 hotmail.com> writes:
On Monday, 18 December 2023 at 17:22:22 UTC, H. S. Teoh wrote:
 On Mon, Dec 18, 2023 at 04:44:11PM +0000, Bkoie via 
 Digitalmars-d-learn wrote: [...]
 But what is it with these people and the GC?
[...] It's called GC phobia, a knee-jerk reaction malady common among C/C++ programmers (I'm one of them, though I got cured of GC phobia thanks to D :-P). 95% of the time the GC helps far more than it hurts. And the 5% of the time when it hurts, there are plenty of options for avoiding it in D. It's not shoved down your throat like in Java, there's no need to get all worked up about it. T
Truth
Dec 20 2023
prev sibling parent reply Dmitry Ponyatov <dponyatov gmail.com> writes:
 It's called GC phobia, a knee-jerk reaction malady common among 
 C/C++ programmers
I'd like to use D in hard realtime apps (gaming can be thought of as one of them, but I mostly mean realtime dynamic multimedia and digital signal processing). GC in such applications is commonly supposed to be unacceptable. In contrast, I can find some PhD theses on realtime GC, prioritized message passing, and maybe RDMA-based clustering.

Unfortunately, I have no hope that D is popular enough that somebody versed in the topic will rewrite its runtime and GC to be usable in more or less hard realtime apps.
Dec 22 2023
parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Fri, Dec 22, 2023 at 07:22:15PM +0000, Dmitry Ponyatov via
Digitalmars-d-learn wrote:
 It's called GC phobia, a knee-jerk reaction malady common among
 C/C++ programmers
 I'd like to use D in hard realtime apps (gaming can be thought of as one of them, but I mostly mean realtime dynamic multimedia and digital signal processing).
For digital signal processing, couldn't you just preallocate beforehand? Even if we had a top-of-the-line incremental GC I wouldn't want to allocate wantonly in my realtime code. I'd preallocate whatever I can, and use region allocators for the rest.
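Something along these lines (a sketch, not real DSP code; the block size and gain are arbitrary): allocate everything up front, and keep the per-block path `@nogc` so the collector has no reason to run during processing:

```
struct Processor
{
    float[] input, output;              // allocated once, up front

    this(size_t blockSize)
    {
        input  = new float[blockSize];  // the only GC allocations in the program
        output = new float[blockSize];
    }

    // hot path: provably allocation-free, so no collection can be triggered here
    @nogc nothrow void process()
    {
        foreach (i, x; input)
            output[i] = 0.5f * x;       // stand-in for the actual signal processing
    }
}

void main()
{
    auto p = Processor(1024);
    foreach (_; 0 .. 10_000)            // the realtime-ish loop
        p.process();
}
```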
 GC in such applications is commonly supposed to be unacceptable. In
 contrast, I can find some PhD theses on realtime GC, prioritized
 message passing, and maybe RDMA-based clustering.
I'm always skeptical of general claims like this. Until you actually profile and identify the real hotspots, it's just speculation.
 Unfortunately, I have no hope that D is popular enough that somebody
 versed in the topic will rewrite its runtime and GC to be usable in
 more or less hard realtime apps.
Popularity has nothing to do with it. The primary showstopper here is the lack of write barriers (and Walter's reluctance to change this). If we had write barriers, a lot more GC options would open up.

T

--
What is Matter, what is Mind? Never Mind, it doesn't Matter.
Dec 22 2023
prev sibling next sibling parent reply bomat <Tempest_spam gmx.de> writes:
On Monday, 18 December 2023 at 16:44:11 UTC, Bkoie wrote:
 But what is it with these people and the GC?
 [...]
I'm a C++ programmer in my day job. Personally, I have no problem with a GC, but one of my colleagues is a total C fanboy, so I feel qualified to answer your question. :)

I think the problem most "old school" programmers have with automatic garbage collection, or *any* kind of "managed" code, really, is not the GC itself, but that it demonstrates a wrong mindset.

If you use (or even feel tempted to use) a GC, it means that you don't care about your memory. Neither about its layout nor its size, nor when chunks of it are allocated or deallocated, etc. And if you don't care about these things, you should not call yourself a programmer. You are the reason why modern software sucks and everything gets slower and slower despite the processors getting faster and faster. In fact, you probably should get another job, like flooring inspector or something. :)

And although this is not my opinion (otherwise I wouldn't use D), I have to admit that this isn't completely wrong. I like my abstractions because they make my life easier, but yeah, they detach me from the hardware, which often means things are not quite as fast as they could possibly be. It's a tradeoff.

Of course, people with a "purer" mindset could always use the "BetterC" subset of D... but then again, why should they? C is perfect, right? :)
Dec 22 2023
next sibling parent Bkoie <bkoie049 gmail.com> writes:
On Friday, 22 December 2023 at 12:53:44 UTC, bomat wrote:
 I think the problem most "old school" programmers have with 
 automatic garbage collection, or *any* kind of "managed" code, 
 really, is not the GC itself, but that it demonstrates a wrong 
 mindset.

 If you use (or even feel tempted to use) a GC, it means that 
 you don't care about your memory. Neither about its layout nor 
 its size, nor when chunks of it are allocated or deallocated, 
 etc.
 And if you don't care about these things, you should not call 
 yourself a programmer. You are the reason why modern software 
 sucks and everything gets slower and slower despite the 
 processors getting faster and faster. In fact, you probably 
 should get another job, like flooring inspector or something. :)
And that's the reason why modern programs are getting bigger and slower and leaking memory. No one should be manually managing memory. Rust is a prime example of that, but now it's "the borrow checker is the issue" or "too many unsafe blocks". And as one guy said above, you can avoid the GC in D, so...
Dec 22 2023
prev sibling parent reply bachmeier <no spam.net> writes:
On Friday, 22 December 2023 at 12:53:44 UTC, bomat wrote:

 If you use (or even feel tempted to use) a GC, it means that 
 you don't care about your memory. Neither about its layout nor 
 its size, nor when chunks of it are allocated or deallocated, 
 etc.
 And if you don't care about these things, you should not call 
 yourself a programmer. You are the reason why modern software 
 sucks and everything gets slower and slower despite the 
 processors getting faster and faster. In fact, you probably 
 should get another job, like flooring inspector or something. :)
Given how fast computers are today, the folks that focus on memory and optimizing for performance might want to apply for jobs as flooring inspectors, because they're often solving problems from the 1990s. That's not to say it's never needed, but the number of cases where idiomatic D, Go, or Java will be too slow is shrinking rapidly. And there's a tradeoff. In return for solving a problem that doesn't exist, you get bugs, increased development time, and difficulty changing approaches. I say this as I'm in the midst of porting C code to D. The biggest change by far is deleting line after line of manual memory management. Changing anything in that codebase would be miserable.
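A contrived illustration of the kind of deletion meant here (not from the actual codebase being ported; names made up): what takes an allocation, an error check, and a matching free in the C original is a single GC-backed expression in D:

```
// The C original looks roughly like:
//     char *joined = malloc(strlen(a) + strlen(b) + 2);
//     if (!joined) return NULL;
//     sprintf(joined, "%s %s", a, b);
//     ... use joined ...
//     free(joined);    /* and hope every exit path remembers this */

// In D, the GC owns the result; there is nothing to free and nothing to leak.
string join2(string a, string b)
{
    return a ~ " " ~ b;
}

unittest
{
    assert(join2("hello", "world") == "hello world");
}
```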
Dec 22 2023
parent reply bomat <Tempest_spam gmx.de> writes:
On Friday, 22 December 2023 at 16:51:11 UTC, bachmeier wrote:
 Given how fast computers are today, the folks that focus on 
 memory and optimizing for performance might want to apply for 
 jobs as flooring inspectors, because they're often solving 
 problems from the 1990s.
*Generally* speaking, I disagree. Think of the case of GTA V where several *minutes* of loading time were burned just because they botched the implementation of a JSON parser. Of course, this was unrelated to memory management. But it goes to show that today's hardware being super fast doesn't absolve you from knowing what you're doing... or at least question your implementation once you notice that it's slow. But that is true for any language, obviously.

I think there is a big danger of people programming in C/C++ and thinking that it *must* be performing well just because it's C/C++. The C++ codebase I have to maintain in my day job is a really bad example for that as well.
 I say this as I'm in the midst of porting C code to D. The 
 biggest change by far is deleting line after line of manual 
 memory management. Changing anything in that codebase would be 
 miserable.
I actually hate C with a passion. I have to be fair though: What you describe doesn't sound like a problem of the codebase being C, but the codebase being crap. :)

If you have to delete "line after line" of manual memory management, I assume you're dealing with micro-allocations on the heap - which are performance poison in any language. A decent system would allocate memory in larger blocks and manage access to it via handles. That way you never do micro-allocations and never have ownership problems. Essentially, it's still a "memory manager" that owns all the memory, the only difference being that it's self-written. Porting a codebase like that would actually be very easy because all the mallocs would be very localized.

Of course, this directly leads to the favorite argument of C defenders, which I absolutely hate: "Why, it's not a problem if you're doing it *right*."

By this logic, you have to do all these terrible mistakes while learning your terrible language, and then you'll be a good programmer and can actually be trusted with writing production software - after like, what, 20 years of shooting yourself in the foot and learning everything the hard way? :) And even then, the slightest slipup will give you dramatic vulnerabilities. Such a great concept.
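To make the "larger blocks plus handles" idea above concrete, here is a toy sketch (not production code): the pool owns one big allocation and user code only ever holds integer handles into it:

```
struct Pool(T)
{
    private T[] items;      // one larger allocation owned by the pool
    private size_t used;

    this(size_t capacity) { items = new T[capacity]; }

    // hand out an index instead of a pointer: no per-object malloc/free,
    // no ownership questions
    size_t alloc(T value)
    {
        assert(used < items.length, "pool exhausted");
        items[used] = value;
        return used++;
    }

    ref T opIndex(size_t handle) { return items[handle]; }
}

unittest
{
    auto pool = Pool!int(1024);
    auto h = pool.alloc(42);
    pool[h] += 1;
    assert(pool[h] == 43);
}
```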
Dec 22 2023
parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Fri, Dec 22, 2023 at 09:40:03PM +0000, bomat via Digitalmars-d-learn wrote:
 On Friday, 22 December 2023 at 16:51:11 UTC, bachmeier wrote:
 Given how fast computers are today, the folks that focus on memory
 and optimizing for performance might want to apply for jobs as
 flooring inspectors, because they're often solving problems from the
 1990s.
*Generally* speaking, I disagree. Think of the case of GTA V where several *minutes* of loading time were burned just because they botched the implementation of a JSON parser.
IMNSHO, if I had very large data files to load, I wouldn't use JSON. Precompile the data into a more compact binary form that's already ready to use, and just mmap() it at runtime.
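In D that could look roughly like this (a sketch; the file name and record layout are made up, and a real format would also want a header and version check):

```
import std.mmfile : MmFile;

struct Record { int id; float score; }   // hypothetical fixed-size record

void main()
{
    // map the precompiled blob; the OS pages it in on demand, no parsing step
    scope mm = new MmFile("data.bin");
    auto bytes = cast(const(ubyte)[]) mm[];

    auto usable = bytes.length - bytes.length % Record.sizeof;
    auto records = cast(const(Record)[]) bytes[0 .. usable];
    // records is now directly usable as an array of structs
}
```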
 Of course, this was unrelated to memory management. But it goes to
 show that today's hardware being super fast doesn't absolve you from
 knowing what you're doing... or at least question your implementation
 once you notice that it's slow.
My favorite example in this area is the poor selection of algorithms, a very common mistake being choosing an O(n²) algorithm because it's easier to implement than the equivalent O(n) algorithm, and not very noticeable on small inputs. But on large inputs it slows to an unusable crawl. "But I wrote it in C, why isn't it fast?!" Because O(n²) is O(n²), and that's independent of language. Given large enough input, an O(n) Java program will beat the heck out of an O(n²) C program.
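A concrete instance of the trap (just a sketch): deduplicating a list by rescanning everything accepted so far is the "easy" quadratic version; an associative array does the same job in linear time, in any language:

```
// the "easy" version: O(n^2), fine on 100 items, hopeless on 10 million
string[] dedupSlow(string[] words)
{
    string[] result;
    foreach (w; words)
    {
        bool seen = false;
        foreach (r; result)         // rescans everything accepted so far
            if (r == w) { seen = true; break; }
        if (!seen) result ~= w;
    }
    return result;
}

// the O(n) version: one hash lookup per element
string[] dedupFast(string[] words)
{
    bool[string] seen;
    string[] result;
    foreach (w; words)
        if (w !in seen) { seen[w] = true; result ~= w; }
    return result;
}

unittest
{
    auto input = ["a", "b", "a", "c", "b"];
    assert(dedupSlow(input) == ["a", "b", "c"]);
    assert(dedupFast(input) == ["a", "b", "c"]);
}
```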
 But that is true for any language, obviously.

 I think there is a big danger of people programming in C/C++ and
 thinking that it *must* be performing well just because it's C/C++.
 The C++ codebase I have to maintain in my day job is a really bad
 example for that as well.
"Elegant or ugly code as well as fine or rude sentences have something in common: they don't depend on the language." -- Luca De Vitis :-)
 I say this as I'm in the midst of porting C code to D. The biggest
 change by far is deleting line after line of manual memory
 management.  Changing anything in that codebase would be miserable.
I actually hate C with a passion.
Me too. :-D
 I have to be fair though: What you describe doesn't sound like a
 problem of the codebase being C, but the codebase being crap. :)
Yeah, I've seen my fair share of crap C and C++ codebases. C code that makes you do a double take and stare real hard at the screen to ascertain whether it's actually C and not some jokelang or exolang purposely designed to be unreadable/unmaintainable. (Or maybe it would qualify as an IOCCC entry. :-D) And C++ code that looks like ... I dunno what. When business logic is being executed inside of a dtor, you *know* that your codebase has Problems(tm), real big ones at that.
 If you have to delete "line after line" of manual memory management, I
 assume you're dealing with micro-allocations on the heap - which are
 performance poison in any language.
Depends on what you're dealing with. Some micro-allocations are totally avoidable, but if you're manipulating a complex object graph composed of nodes of diverse types, it's hard to avoid. At least, not without uglifying your APIs significantly and introducing long-term maintainability issues.

One of my favorite GC "lightbulb" moments is when I realized that having a GC allowed me to simplify my internal APIs significantly, resulting in much cleaner code that's easy to debug and easy to maintain. Whereas the equivalent bit of code in the original C++ codebase would have required disproportionate amounts of effort just to navigate the complex allocation requirements.

These days my motto is: use the GC by default, when it becomes a problem, then use a more manual memory management scheme, but *only where the bottleneck is* (as proven by an actual profiler, not where you "know" (i.e., imagine) it is). A lot of C/C++ folk (and I speak from my own experience as one of them) spend far too much time and energy optimizing things that don't need to be optimized, because they are nowhere near the bottleneck, resulting in lots of sunk cost and added maintenance burden with no meaningful benefit.

[...]
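In practice that motto can look like this (a sketch; which function counts as "hot" is whatever the profiler actually reports, for example via dmd's `-profile=gc`, not a guess):

```
import core.stdc.stdlib : malloc, free;

// everywhere else: plain, readable GC code
double[] smoothed(const(double)[] xs)
{
    auto result = new double[xs.length];
    foreach (i, x; xs) result[i] = 0.5 * x;
    return result;
}

// the one routine the profiler actually flagged: manual scratch memory,
// everything around it stays GC-managed
@nogc nothrow double sumOfSquares(const(double)[] xs)
{
    auto tmp = (cast(double*) malloc(xs.length * double.sizeof))[0 .. xs.length];
    scope (exit) free(tmp.ptr);
    double acc = 0;
    foreach (i, x; xs) { tmp[i] = x * x; acc += tmp[i]; }
    return acc;
}
```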
 Of course, this directly leads to the favorite argument of C
 defenders, which I absolutely hate: "Why, it's not a problem if you're
 doing it *right*."
 
 By this logic, you have to do all these terrible mistakes while
 learning your terrible language, and then you'll be a good programmer
 and can actually be trusted with writing production software - after
 like, what, 20 years of shooting yourself in the foot and learning
 everything the hard way?  :) And even then, the slightest slipup will
 give you dramatic vulnerabilities.  Such a great concept.
Year after year I see reports of security vulnerabilities, the most common of which are buffer overflows, use-after-free, and double-free. All of which are caused directly by using a language that forces you to manage memory manually. If C were only 10 years old, I might concede that C coders are just inexperienced, give them enough time to learn from field experience and the situation will improve. But after 50 years, the stream of memory-related security vulnerabilities still hasn't ebbed. I think it's beyond dispute that even the best C coders make mistakes -- because memory management is HARD, and using a language that gives you no help whatsoever in this department is just inviting trouble. I've personally seen the best C coders commit blunders, and in C, all it takes is *one* blunder among millions of lines of code that manage memory, and you have a glaring security hole.

It's high time people stepped back to think hard about why this is happening, and why 50 years of industry experience and hard-earned best practices have not improved things. And also think hard about why eschew the GC when it could single-handedly remove this entire category of bugs from your program in one fell swoop.

(Now, just below memory-related security bugs are data sanitization bugs. Unfortunately the choice of language isn't going to help you very much there...)

T

--
In theory, software is implemented according to the design that has been carefully worked out beforehand. In practice, design documents are written after the fact to describe the sorry mess that has gone on before.
Dec 22 2023
parent bomat <Tempest_spam gmx.de> writes:
On Friday, 22 December 2023 at 22:33:35 UTC, H. S. Teoh wrote:
 IMNSHO, if I had very large data files to load, I wouldn't use 
 JSON. Precompile the data into a more compact binary form 
 that's already ready to use, and just mmap() it at runtime.
I wondered about that decision as well, especially because this was internal game data that did not have to be user readable. That's beside the point though; it was a ~10 MB JSON file that took them several minutes to parse. That's really just insane.

Turns out it helps if you don't count the length of the entire document for every single value. It also helps if you don't iterate over your entire array of already written values every time you want to insert a new one. :)

In case you didn't know the story, here's a link:
https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times-by-70/

I think there are several great lessons in there. Rockstar must have noticed how slow the loading was, but apparently just accepted it as a given... for 7+ years. Who needs optimizations on today's great hardware, right? There couldn't possibly be algorithmic problems in something simple like a JSON parser, right?

Second, look at what people suspected as the root cause of the problem, like the P2P architecture. It's funny how speculations about performance problems are *always* wrong. Only measuring will tell you the truth.
Dec 23 2023
prev sibling parent IGotD- <nise nise.com> writes:
On Monday, 18 December 2023 at 16:44:11 UTC, Bkoie wrote:
 Just look at this. I know it's overdesigned; I'm just trying to get 
 a feel for how an API can be designed (I'm still new), but the fact 
 that you can build an API like this and it doesn't break is amazing.

 But what is it with these people and the GC?
 Just don't allocate new memory or invoke the GC.
 You can use scopes to temporarily work on immutable slices that 
 clean themselves up.
 The list goes on.

 And you don't need to use pointers at all!

 I honestly see nothing wrong with the GC.
I don't think there is anything wrong with having a GC in a language either, and upcoming languages show that as well, as a majority of them have some form of GC. GC is here to stay regardless. So what is the problem with D?

The problem with D is that it is limited in what type of GC it can support. Right now D only supports a stop-the-world GC, which is quickly becoming unacceptable on modern systems. Sure, it was fine when we had dual-core CPUs, but today desktop PCs can have 32 execution units (server CPUs can have an insane number of them, like 128). Stopping 32 execution units (potentially even more if you have more threads) is just unacceptable; it not only takes a lot of time but is a very clumsy approach on modern systems.

What GC should D then support? In my opinion, all of them. Memory management is a moving target and I don't know what it will look like in 10 years. Will cache snooping be viable, for example? Will the cores be clustered so that snoops are only possible within them, etc.? D needs a more future-proof language design when it comes to memory management. Because of this it is important that D can, as seamlessly as possible, support different types of GC.

Exposing raw pointers in the language for GC-allocated types was a big mistake in the D language design which I think should be rectified. Just about all other new languages have opaque pointer/reference types in order to hide the GC mechanism, so that other GC algorithms like reference counting can be used. This is an F- in language design.
Dec 23 2023