
digitalmars.D.announce - Why I Like D

reply Walter Bright <newshound2 digitalmars.com> writes:
"Why I like D" is on the front page of HackerNews at the moment at number 11.

https://news.ycombinator.com/news
Jan 11
next sibling parent ag0aep6g <anonymous example.com> writes:
On 12.01.22 03:37, Walter Bright wrote:
 "Why I like D" is on the front page of HackerNews at the moment at 
 number 11.
 
 https://news.ycombinator.com/news
https://news.ycombinator.com/item?id=29863557
https://aradaelli.com/blog/why-i-like-d/
Jan 11
prev sibling next sibling parent surlymoor <surlymoor cock.li> writes:
On Wednesday, 12 January 2022 at 02:37:47 UTC, Walter Bright 
wrote:
 "Why I like D" is on the front page of HackerNews at the moment 
 at number 11.

 https://news.ycombinator.com/news
Nice article, especially this paragraph:
 In case you are writing a performance critical piece of software, remember
 you can turn off the garbage collector! People on forums like to bash that
 in such a case you cannot use many functions from the standard library. So
 what? If performance is essential for your system, you are likely already
 writing your own utility library with highly optimized algorithms and data
 structures for your use case, so you won’t really miss the standard
 library much.
Good luck to the boys and girls in the HN comments as the dumpster fire is already raging.
Jan 11
prev sibling next sibling parent reply forkit <forkit gmail.com> writes:
On Wednesday, 12 January 2022 at 02:37:47 UTC, Walter Bright 
wrote:
 "Why I like D" is on the front page of HackerNews at the moment 
 at number 11.

 https://news.ycombinator.com/news
Surely this article needs to be balanced with another article, titled 'why I don't like D' ;-) (..but written by someone who really knows D).

IMO... the next generation programming language (that will succeed) will be defined by its tooling, and not just the language. Language complexity increases the demands on tooling.

I remember Scott Meyers' 2014 talk, 'The Last Thing D Needs'. We really need him now.. more than ever ;-)
Jan 11
parent forkit <forkit gmail.com> writes:
On Wednesday, 12 January 2022 at 06:27:47 UTC, forkit wrote:
 surely this article needs to be balanced, with another article, 
 titled 'why I don't like D' ;-) (..but written by someone who 
 really knows D).
oh. btw. I'd love to see Walter (or Andrei, or both) write this article ;-)
Jan 11
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Jan 11, 2022 at 06:37:47PM -0800, Walter Bright via
Digitalmars-d-announce wrote:
 "Why I like D" is on the front page of HackerNews at the moment at number 11.
 
 https://news.ycombinator.com/news
Favorite quote:

 Some people may not consider the GC a feature, I certainly did not at the
 beginning. I came from a hard-core game developer mindset where you need
 to know the exact timing for every operation in your critical path. I
 lived by quotes like: “the programmer knows better how to manage memory”
 and “you cannot have unexpected pauses for GC collection”. However it
 turns out that unless you are writing a computer game, a high frequency
 trading system, a web server, or anything that really cares about
 sub-second latency, chances are that a garbage collector is your best
 friend. It will remove the burden of having to think about memory
 management at all and at the same time guarantee that you won’t have any
 memory leaks in your code.

*Flamesuit on.*


T

-- 
Insanity is doing the same thing over and over again and expecting different results.
Jan 12
parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Wednesday, 12 January 2022 at 15:25:37 UTC, H. S. Teoh wrote:
 	However it turns out that unless you are writing a computer
 	game, a high frequency trading system, a web server
Most computer games and web servers use GC too. idk about hf trading.
Jan 12
next sibling parent Elronnd <elronnd elronnd.net> writes:
On Wednesday, 12 January 2022 at 15:41:03 UTC, Adam D Ruppe wrote:
 idk about hf trading
Per hearsay, some is C++, some is Java, and frequently it is FPGA-assisted. Certainly, GC is not unheard of in that domain.
Jan 12
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 1/12/22 10:41 AM, Adam D Ruppe wrote:
 On Wednesday, 12 January 2022 at 15:25:37 UTC, H. S. Teoh wrote:
     However it turns out that unless you are writing a computer
     game, a high frequency trading system, a web server
 Most computer games and web servers use GC too. idk about hf trading.
Yeah, I had trouble agreeing with that statement too.

Even for computer gaming, GC is not horrific as long as you aren't allocating and freeing loads of things every frame. And a web server works great with GC, I think. vibe-d makes non-stop use of the GC (allocating a bunch of class objects for every request).

Sub-second latency is also quite possible even with a stop-the-world GC. Look at Sociomantic -- they still used the GC, just made sure to minimize the possibility of collections.

I wonder if there is just so much fear of the GC vs people who actually tried to use the GC and it failed to suit their needs. I've never been afraid of the GC in my projects, and it hasn't hurt me at all.

-Steve
Jan 12
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Jan 12, 2022 at 11:14:54AM -0500, Steven Schveighoffer via
Digitalmars-d-announce wrote:
[...]
 Look at Sociomantic -- they still used the GC, just made sure to
 minimize the possibility of collections.
 
 I wonder if there is just so much fear of the GC vs people who
 actually tried to use the GC and it failed to suit their needs. I've
 never been afraid of the GC in my projects, and it hasn't hurt me at
 all.
[...]

Like I said, my suspicion is that it's more of a knee-jerk reaction to the word "GC" than anything actually founded in reality -- like somebody actually wrote a game in D and discovered the GC is a problem, vs somebody who is *thinking* about writing a game in D, then thinks about the GC, then balks because of the expectation that the GC is going to do something bad, like kill the hypothetical framerate or make the not-yet-implemented animation jerky.

Those who actually wrote code and found GC performance problems would have just slapped @nogc on their code or inserted GC.disable at the beginning of the game loop and called it a day, instead of getting all knotted up in the forums about why GC is bad in principle.


T

-- 
I'm still trying to find a pun for "punishment"...
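For concreteness, the two escape hatches mentioned (`@nogc` and `GC.disable`) look roughly like this; `updateFrame` and the frame count are illustrative names, not from any real game:

```d
import core.memory : GC;

// @nogc: the compiler statically rejects any GC allocation in here.
@nogc nothrow int updateFrame(int state)
{
    return state + 1;
}

void main()
{
    GC.disable();                 // no automatic collections in the hot loop
    int state = 0;
    foreach (_; 0 .. 100)
        state = updateFrame(state);
    GC.enable();
    GC.collect();                 // collect at a convenient point instead
    assert(state == 100);
}
```

Note that `GC.disable` only suppresses automatic collection cycles; allocations still succeed, so it is a latency knob rather than a ban on the GC.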
Jan 12
prev sibling next sibling parent reply Arjan <arjan ask.me.to> writes:
On Wednesday, 12 January 2022 at 16:14:54 UTC, Steven 
Schveighoffer wrote:
 On 1/12/22 10:41 AM, Adam D Ruppe wrote:
 On Wednesday, 12 January 2022 at 15:25:37 UTC, H. S. Teoh 
 wrote:
     However it turns out that unless you are writing a 
 computer
     game, a high frequency trading system, a web server
 Most computer games and web servers use GC too. idk about hf trading.
I know for sure that one was written in Java, using Azul C4 and other tech to maximize performance...
 Yeah, I had trouble agreeing with that statement too.
Just wait for Paulo Pinto to join the conversation; he will happily refer to a lot of tech and products using GC which are highly performant and very successful ;-)
 I wonder if there is just so much fear of the GC vs people who 
 actually tried to use the GC and it failed to suit their needs. 
 I've never been afraid of the GC in my projects, and it hasn't 
 hurt me at all.
I think it stems from experience from long ago, when Java was HOT and sold as the solution to all the world's problems, but failed to meet expectations and was dismissed because people decided it was the GC that made it fail.. A lot of engineers just repeat the opinion of some guru they admire without fact-checking. Although I've seen various serious performance issues with Java and Python software, only once was it related to the GC..
Jan 12
parent reply bachmeier <no spam.net> writes:
On Wednesday, 12 January 2022 at 16:52:02 UTC, Arjan wrote:
 I wonder if there is just so much fear of the GC vs people who 
 actually tried to use the GC and it failed to suit their 
 needs. I've never been afraid of the GC in my projects, and it 
 hasn't hurt me at all.
 I think it stems from experience from long ago, when Java was HOT and sold as the solution to all the world's problems, but failed to meet expectations and was dismissed because people decided it was the GC that made it fail.. A lot of engineers just repeat the opinion of some guru they admire without fact-checking. Although I've seen various serious performance issues with Java and Python software, only once was it related to the GC..
I don't think they're necessarily wrong. If you don't want to deal with GC pauses, it may well be easier to use an approach that doesn't have them, in spite of what you have to give up. On the other hand, many of them have no idea what they're talking about. Like claims that a GC gets in your way if the language has one.
Jan 12
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Jan 12, 2022 at 05:42:46PM +0000, bachmeier via Digitalmars-d-announce
wrote:
 On Wednesday, 12 January 2022 at 16:52:02 UTC, Arjan wrote:
[...]
 I think it stems from experience from long ago, when Java was HOT and
 sold as the solution to all the world's problems, but failed to meet
 expectations and was dismissed because people decided it was the GC
 that made it fail..
That was my perception of GC too, colored by the bad experiences of Java from the 90's. Ironically, Java's GC has since improved to be one of the top world-class GC implementations, yet the opinions of those who turned away from Java in the 90's have not caught up with today's reality. [...]
 I don't think they're necessarily wrong. If you don't want to deal
 with GC pauses, it may well be easier to use an approach that doesn't
 have them, in spite of what you have to give up. On the other hand,
 many of them have no idea what they're talking about. Like claims that
 a GC gets in your way if the language has one.
Depends on the language; some may indeed require GC use to write anything meaningful at all, and some may have the GC running in the background.

However, D's GC only ever triggers on allocations, and as of a few releases ago, it doesn't even initialize itself until the first allocation, meaning that it doesn't use up *any* resources if you don't actually use it (except for increasing executable size, if you want to nitpick on that). This must be one of the most non-intrusive GC implementations I've ever seen. Which makes me *really* incredulous when the naysayers complain about it.


T

-- 
There are two ways to write error-free programs; only the third one works.
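One way to observe the "only triggers on allocations" behavior is to watch `GC.stats` before and after touching the GC heap; the array sizes here are arbitrary:

```d
import core.memory : GC;

void main()
{
    const before = GC.stats();    // heap usage before we touch the GC
    int[16] onStack = 42;         // stack data: never involves the GC
    assert(GC.stats().usedSize == before.usedSize);

    auto heap = new int[](1024);  // a GC allocation is what wakes the GC up
    assert(GC.stats().usedSize > before.usedSize);
    assert(heap.length == 1024 && onStack[0] == 42);
}
```

(Exactly how much `usedSize` grows by, and whether it starts at zero, depends on the druntime version; the point is only that stack work leaves it untouched while `new` does not.)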
Jan 12
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/12/2022 8:14 AM, Steven Schveighoffer wrote:
 I wonder if there is just so much fear of the GC vs people who actually tried
to 
 use the GC and it failed to suit their needs. I've never been afraid of the GC 
 in my projects, and it hasn't hurt me at all.
My experience with people who don't want to use a product I've worked on is:

1. they'll give reason X, which is usually something that sounds convenient
2. I fix X, they can use it now!
3. they then give reason Y, after thinking about it for a minute

What's happening is neither X nor Y is the real reason. They just don't want to tell me the real reason, usually because it's an emotional one. The GC issue fits all of that.

For example, back in the olden days (the 1980s), as related to me by a friend:

X: The most important thing I want in a C++ compiler is speed! I cannot emphasize enough how important that is!
Y: No, that isn't the reason. The most important thing to you in a C++ compiler is brand name.
X: (Dumbfounded) Why would you say that?
Y: Because you are using Microsoft C++, which is 4 times slower than Zortech C++.
X: Oh.

Another one:

Friend: You should write a native Java compiler! It'll take over the world! I really want a native Java compiler!
Me: I already wrote one, Symantec's Java compiler. You can get it and use it today!
Friend: Oh. [Changes the subject]

Now, consider BetterC, a 90% subset of D, with no GC in sight. It changed nobody's mind who didn't use D "because of the GC", because that is not the real reason.
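For reference, a complete BetterC program is just this; the file name and compile line are illustrative:

```d
// Compile with: dmd -betterC hello.d
// -betterC strips out druntime: no GC, no TypeInfo, no ModuleInfo --
// only the C runtime underneath, which is why main is extern (C).
import core.stdc.stdio : printf;

extern (C) int main()
{
    printf("no GC in sight\n");
    return 0;
}
```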
Jan 12
next sibling parent forkit <forkit gmail.com> writes:
On Wednesday, 12 January 2022 at 20:41:56 UTC, Walter Bright 
wrote:
 My experience with people who don't want to use a product I've 
 worked on is:

 1. they'll give reason X, which is usually something that 
 sounds convenient
 2. I fix X, they can use it now!
 3. they then give reason Y, after thinking about it for a minute

 What's happening is neither X nor Y is the real reason. They 
 just don't want to tell me the real reason, usually because 
 it's an emotional one.
Yes, emotions come into play, but the 'emotion argument' on its own explains nothing.

The 'real reason' is that people are, by nature, averse to losses. This impacts how people evaluate a choice -- e.g. an aversion to losing an existing skill set...

What you need to do is argue your case in a way that causes more dopamine neurons to activate ;-)

https://en.wikipedia.org/wiki/Loss_aversion
Jan 12
prev sibling parent Vinod K Chandran <kcvinu82 gmail.com> writes:
On Wednesday, 12 January 2022 at 20:41:56 UTC, Walter Bright 
wrote:

You nailed it. Bravo :)
Jan 14
prev sibling parent reply forkit <forkit gmail.com> writes:
On Wednesday, 12 January 2022 at 16:14:54 UTC, Steven 
Schveighoffer wrote:
 I wonder if there is just so much fear of the GC vs people who 
 actually tried to use the GC and it failed to suit their needs. 
 I've never been afraid of the GC in my projects, and it hasn't 
 hurt me at all.

 -Steve
No. Fear is irrelevant.

Fear of GC is just a catch-all phrase that serves no real purpose, and provides no real insight into what programmers are thinking.

It's all about autonomy and self-government (on the decision of whether to use GC or not, or when to use it, and when not to use it). Programmers want the right of self-government over their code. This is not politics. It's human psychology.

It is, to a large extent I believe, what attracts people to D. I don't believe people are attracted to D because it has GC. There are better languages, and better supported languages, with GC. D should not spend time promoting 'GC', but rather promote how programmers can have this autonomy.

Also, the idea that 'GC' means you never have to think about memory management... is just a ridiculous statement.. ..Every programmer *should* be thinking about memory management.
Jan 12
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 12 January 2022 at 20:48:39 UTC, forkit wrote:
 Fear of GC is just a catch-all-phrase that serves no real 
 purpose, and provides no real insight into what programmers are 
 thinking.
"Fear of GC" is just a recurring _excuse_ for not fixing the most outdated aspects of the language/compiler/runtime.

I have no fear of GC, I've used GC languages since forever, but I would never want a GC in the context of system level or real time programming. I also don't want to deal with mixing mostly incompatible memory management schemes in an application dominated by system level programming. In this context a GC should be something local, e.g. you might want to use a GC for a specific graph or scripting language in your application.

Do I want a GC/ARC for most of my high level programming? Hell yes! But not for system level programming, ever.

(Walter has always positioned D as a system level language and it should be judged as such. Maybe D isn't a system level language, but then the vision should be changed accordingly.)
 It's all about autonomy and self-government (on the decision of 
 whether to use GC or not, or when to use it, and when not to 
 use it.
Which essentially is the essence of system level programming. You adapt the language usage to the hardware/use context, not the other way around. You shouldn't be glued to nonsensical defaults that you have to disable. You should have access to building blocks that you can compose to suit the domain you are working with.

A GC can be one such building block, and in fact, the C++ community does provide several GCs as building blocks, but there is no force feeding… Which is why C++ is viewed as a hard core system level language by everyone and D isn't.
 I don't believe people are attracted to D because it has GC. 
 There are better languages, and better supported languages, 
 with GC.
Or more importantly; low latency GCs and a language designed for it!
 Also, the idea that 'GC' means you never have to think about 
 memory management... is just a ridiculous statement..
I don't have to think much about memory management in Python, JavaScript or Go, but I would also never do anything close to system level programming in those languages.

You can create very interesting interactive applications in JavaScript, but then you:

1. Rely on clever system level programming in a very heavy browser runtime.
2. Use an eco system for interactive applications that is designed around the specific performance characteristics of the javascript runtime.
3. Adapt the application design to the limitations of the browser platform.
4. Get to use a much better low latency GC.

Points 1, 2 and 3 are not acceptable for a system level language… so that is a different situation. And D does not provide 4, so again, a different situation.

Cheers!
Jan 12
prev sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Wednesday, 12 January 2022 at 20:48:39 UTC, forkit wrote:
 Fear of GC is just a catch-all-phrase that serves no real 
 purpose, and provides no real insight into what programmers are 
 thinking.

 It's all about autonomy and self-government (on the decision of 
 whether to use GC or not, or when to use it, and when not to 
 use it.

 Programmers want the right of self-government, over their code.
Actually, I think *self*-government has very little to do with it.

As you correctly observe, D is a great language for programmers who want autonomy--far better than something like Java, Go, or Rust, which impose relatively strict top-down visions of how code ought to be written. In D, you can write C-style procedural code, Java-style object-oriented code, or (with a bit of effort) even ML-style functional code. You can use a GC, or you can avoid it. You can take advantage of built-in memory-safety checking, or you can ignore it. If what programmers care about is autonomy, it seems like D should be the ideal choice.

So, why do so many programmers reject D? Because there's something else they care about more than their own autonomy: other programmers' *lack* of autonomy. Or, as it's usually put, "the ecosystem."

If you go to crates.io and download a Rust library, you can be almost 100% sure that library will not use GC, because Rust doesn't have a GC. If you go to pkg.go.dev and download a Go library, you can be almost 100% sure that library *will* use GC, because Go *does* have a GC. On the other hand, if you go to code.dlang.org and download a D library...well, who knows? Maybe it'll use the GC, and maybe it won't. The only way to tell is to look at that specific library's documentation (or its source code).

Suppose you've already decided that you don't want to use a GC, and you also don't want to write every part of your project from scratch--that is, you would like to depend on existing libraries. Where would you rather search for those libraries: code.dlang.org, or crates.io? Who would you want the authors of those libraries to be: self-governing, autonomous programmers, who are free to use GC as much or as little as they like; or programmers who have chosen to give up that autonomy and limit themselves to *never* using GC?

If you're working on a project as a solo developer, autonomy is great. But if you're working as part of a team, you don't want every team member to be fully autonomous--you want some kind of guidance and leadership to make sure everyone is moving in the same direction. In a business setting, that leadership comes from your boss. But in an open-source community, there is no boss. In open source, the only source of leadership and guidance is *the language itself*. If you want to make sure other programmers in your community--your "team"--all agree to not use a GC, the only way you can do that is by choosing a language where GC isn't even an option.
Jan 13
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Thursday, 13 January 2022 at 21:32:15 UTC, Paul Backus wrote:
 As you correctly observe, D is a great language for programmers 
 who want autonomy--far better than something like Java, Go, or 
 Rust, which impose relatively strict top-down visions of how 
 code ought to be written.
I keep seeing people in forum threads claiming that Rust is not a system level language, but a high level language (that poses as system level).

With the exception of exceptions (pun?), C++ pretty much is an add-on language. You can enable stuff you need. The default is rather limited. I personally always enable g++ extensions. And having to deal with exceptions when using the system library is a point of contention; exceptions should have been an add-on for C++ to fulfil the system level vision.

C is very much bare bone, but you have different compilers that "add on" things you might need for particular niches. Which of course is also why the bit widths are platform dependent. By being bare bone, C is to a large extent extended by add-ons in terms of macros and assembly routines for specific platforms.

This modular add-on aspect is essential for system level programming, as the contexts are very different (hardware, OS, usage, correctness requirements etc). In hardcore system level programming the ecosystem actually isn't all that critical. Platform support is important. Cross platform is important. One singular domain specific framework might be important. But you will to a large extent end up writing your own libraries.
Jan 13
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jan 13, 2022 at 09:32:15PM +0000, Paul Backus via
Digitalmars-d-announce wrote:
 On Wednesday, 12 January 2022 at 20:48:39 UTC, forkit wrote:
[...]
 Programmers want the right of self-government, over their code.
Actually, I think *self*-government has very little to do with it.
[...]
 So, why do so many programmers reject D? Because there's something
 else they care about more than their own autonomy: other programmers'
 *lack* of autonomy. Or, as it's usually put, "the ecosystem."
[...]
 Suppose you've already decided that you don't want to use a GC, and
 you also don't want to write every part of your project from
 scratch--that is, you would like to depend on existing libraries.
 Where would you rather search for those libraries: code.dlang.org, or
 crates.io? Who would you want the authors of those libraries to be:
 self-governing, autonomous programmers, who are free to use GC as much
 or as little as they like; or programmers who have chosen to give up
 that autonomy and limit themselves to *never* using GC?
This reminds me of the Lisp Curse: the language is so powerful that everyone can easily write their own [GUI toolkit] (insert favorite example library here). As a result, everyone invents their own solution, all solving more-or-less the same problem, but just differently enough to be incompatible with each other. And since they're all DIY solutions, they each suffer from a different set of shortcomings.

As a result, there's a proliferation of [GUI toolkits], but none of them have a full feature set, most are in various states of (in)completion, and all are incompatible with each other. For the newcomer, there's a bewildering abundance of choices, but none of them really solves his particular use-case (because none of the preceding authors faced his specific problem). As a result, his only choices are to arbitrarily choose one solution and live with its problems, or reinvent his own solution. (Or give up and go back to Java. :-D)

Sounds familiar? :-P


T

-- 
Democracy: The triumph of popularity over principle. -- C.Bond
Jan 13
prev sibling parent reply forkit <forkit gmail.com> writes:
On Thursday, 13 January 2022 at 21:32:15 UTC, Paul Backus wrote:
 Actually, I think *self*-government has very little to do with 
 it.
I'm not so sure.

Presumably, C++ provides a programmer with much greater autonomy over their code than D? C provides even greater autonomy than both C++ and D. And I'd argue that's why C remains so useful, and so popular (for those problems where such a level of autonomy is needed).

By 'autonomy', I mean a language-provided means for choosing what code can do, and how it does it. A language that makes you jump through hoops to get that autonomy will serve a niche purpose (like Java, for example).

An aversion to losing that autonomy, I believe, is a very real reason why larger numbers of C++ programmers do not even consider switching to D. Of course, even if they did consider D, there are other considerations at play as well.

I think this is also why D appeals (in contrast to C++). That is, D provides greater autonomy (which should translate to greater freedom to innovate and be creative with code).

Of course, autonomy is not something that is real. Only the 'perception of autonomy' can be real ;-)
Jan 13
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Jan 14, 2022 at 01:19:01AM +0000, forkit via Digitalmars-d-announce
wrote:
[...]
 C provides even greater autonomy over both C++ and D. And I'd argue,
 that's why C remains so useful, and so popular (for those problems
 where such a level of autonomy is needed).
 
 By, 'autonomy', I mean a language provided means, for choosing what
 code can do, and how it does it.
[...]
 An aversion to losing that autonomy, I believe, is a very real reason
 as to why larger numbers of C++ programmers do not even consider
 switching to D.
How is using D "losing autonomy"? Unlike Java, D does not force you to use anything. You can write all-out GC code, you can write @nogc code (slap it on main() and your entire program will be guaranteed to be GC-free -- statically verified by the compiler). You can write functional-style code, and, thanks to metaprogramming, you can even use more obscure paradigms like declarative programming.

If anything, D makes it *easier* to have "autonomy", because its metaprogramming capabilities let you do so without contorting syntax or writing unmaintainable write-only code. I can theoretically do everything in C++ that I do in D, for example, but C++ requires that I spend 5x the amount of effort to navigate its minefield of language gotchas (and then 50x the effort to debug the resulting mess), and afterwards I have to visit the optometrist due to staring at unreadable syntax for extended periods of time.

In D, I get to choose how low-level I want to go -- if all I need is a one-off shell script substitute, I can just allocate away and the GC will worry about cleaning up after me. If I need to squeeze out more performance, I run the profiler and identify GC hotspots and fix them (or discover that the GC doesn't even affect performance, and redirect my efforts elsewhere, where it actually matters more). If that's not enough, GC.disable and GC.collect let me control how the GC behaves. If that's still not enough, I slap @nogc on my inner loops and pull out malloc/free.

In C++, I'm guaranteed that there is no GC -- even when having a GC might actually help me achieve what I want. In order to reap the benefits of a GC in C++, I have to jump through *tons* of hoops -- install a 3rd party GC, carefully read the docs to avoid doing things that might break it ('cos it's not language-supported), be excluded from using 3rd party libraries that are not compatible with the GC, etc.. Definitely NOT worth the effort for one-off shell script replacements.

It takes 10x the effort to write a shell-script substitute in C++ because at every turn the language works against me -- I can't avoid dealing with memory management issues at every turn. Should I use malloc/free and fix leaks / dangling pointers myself? Should I use std::auto_ptr? Should I use std::shared_ptr? Write my own refcounted pointer for the 15th time? Half my APIs would be cluttered with memory management paraphernalia, and half my mental energy would be spent fiddling with pointers instead of MAKING PROGRESS IN MY PROBLEM DOMAIN.

With D, I can work at the high level and solve my problem long before I even finish writing the same code in C++. And when I need to dig under the hood, D doesn't stop me -- it's perfectly fine with malloc/free and other such alternatives. Even if I can't use parts of Phobos because of GC dependence, D gives me the tools to roll my own easily. (It's not as if I don't already have to do it myself in C++ anyway -- and D is a nicer language for it; I can generally get it done faster in D.)

Rather than take away "autonomy", D empowers me to choose whether I want to do things manually or use the premade high-level niceties the language affords me. (*And* D lets me mix high-level and low-level code in the same language. I can even drop down to asm{} blocks if that's what it takes. Now *that's* empowerment.) With C++, I HAVE to do everything manually. It's actually less choice than D affords me.


T

-- 
People tell me I'm stubborn, but I refuse to accept it!
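What "slap @nogc on main()" looks like in practice -- a small sketch with made-up names and numbers; the entire call tree under main is statically checked, while the C heap stays available:

```d
import core.stdc.stdlib : malloc, free;

// Every function reachable from a @nogc function must itself be @nogc.
@nogc nothrow int sum(const(int)[] xs)
{
    int total = 0;
    foreach (x; xs)
        total += x;
    return total;
}

@nogc nothrow void main()
{
    // Manual allocation still works fine under @nogc.
    auto p = cast(int*) malloc(3 * int.sizeof);
    scope (exit) free(p);
    foreach (i; 0 .. 3)
        p[i] = i + 1;              // 1, 2, 3
    assert(sum(p[0 .. 3]) == 6);

    // auto a = new int[](3);      // would be a compile error under @nogc
}
```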
Jan 13
next sibling parent reply forkit <forkit gmail.com> writes:
On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote:
 How is using D "losing autonomy"?  Unlike Java, D does not 
 force you to use anything. You can write all-out GC code, you 
 can write  nogc code (slap it on main() and your entire program 
 will be guaranteed to be GC-free -- statically verified by the 
 compiler). You can write functional-style code, and, thanks to 
 metaprogramming, you can even use more obscure paradigms like 
 declarative programming.
I'm talking about the 'perception of autonomy' - which will differ between people. Actual autonomy does not, and cannot, exist.

I agree that if a C++ programmer wants the autonomy of choosing between GC or not in their code, then they really don't have that autonomy in C++ (well, of course they do actually - but some hoops need to be jumped through). That, I'd argue, is why some are attracted to D: D creates a perception of there being greater autonomy.

I'm not saying it's the only thing people consider. Obviously their choice is also reflected by the needs of their problem domain, their existing skill set, the skill set of those they work with, the tools they need, the extent to which their identity is attached to a language or community, etc..etc.

And I'm just talking about probability - that is, people are more likely to be attracted to something new, something that could benefit them, if it also enhances their perception of autonomy, or at least, does not seek to diminish their existing autonomy (e.g. a C programmer might well be attracted to betterC, for example).

D should really focus more on marketing one of its biggest strengths - increased autonomy (well, the perception of it).

Getting back to the subject of this thread, that's why 'I' like D.
Jan 13
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Jan 14, 2022 at 03:51:17AM +0000, forkit via Digitalmars-d-announce
wrote:
 On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote:
 
 How is using D "losing autonomy"?  Unlike Java, D does not force you
 to use anything. You can write all-out GC code, you can write  nogc
 code (slap it on main() and your entire program will be guaranteed
 to be GC-free -- statically verified by the compiler). You can write
 functional-style code, and, thanks to metaprogramming, you can even
 use more obscure paradigms like declarative programming.
 
I'm talking about the 'perception of autonomy' - which will differ between people. Actual autonomy does not, and cannot, exist. I agree that if a C++ programmer wants the autonomy of choosing between GC or not in their code, then they really don't have that autonomy in C++ (well, of course they do actually - but some hoops need to be jumped through).
IMO, 'autonomy' isn't the notion you're looking for. The word I prefer to use is *empowerment*. A programming language should be a toolbox filled with useful tools that you can use to solve your problem. It should not be a straitjacket that forces you to conform to what its creators decided is good for you (e.g., Java), nor should it be a minefield full of powerful but extremely dangerous explosives that you have to be very careful not to touch in the wrong way (e.g., C++). It should let YOU decide what's the best way to solve a problem -- and give you the tools to help you on your way.

I mean, you *can* write functional-style code in C if you really, really wanted to -- but you will face a lot of friction and it will be a constant uphill battle. The result will be a huge unmaintainable mess. With D, UFCS gets you 90% of the way there, and the syntax is even pleasant to read. Functional not your style? No problem, you can do OO too. Or just plain ole imperative. Or all-out metaprogramming. Or a combination of all four -- the language lets you intermingle all of them in the *same* piece of code. I've yet to find another language that actively *encourages* you to mix multiple paradigms together into a seamless whole.

Furthermore, the language should empower you to do what it does -- for example, user-defined types ought to be able to do everything built-in types can. Built-in stuff shouldn't have "magical properties" that cannot be duplicated in a user-defined type. The language shouldn't hide magical properties behind a bunch of opaque, canned black-box solutions that you're not allowed to look into. The fact that D's GC is written in D, for example, is a powerful example of not hiding things behind opaque black boxes. You can, in theory, write your own GC and use that instead of the default one.

D doesn't completely meet my definition of empowerment, of course, but it's pretty darned close -- closer than any other language I've used.
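The mixed-paradigm point above can be illustrated with a small sketch (my own example, not from the post): a UFCS pipeline that reads functionally while remaining ordinary D underneath.

```d
// Hypothetical illustration of UFCS-based functional style in D.
import std.algorithm : filter, map, sum;
import std.range : iota;

void main()
{
    // Reads left-to-right like a functional pipeline:
    // take 1..100, keep the evens, square them, add them up.
    auto total = iota(1, 101)
        .filter!(n => n % 2 == 0)
        .map!(n => n * n)
        .sum;
    assert(total == 171_700);
}
```

The same computation could be written as a plain imperative loop in the same function; the language doesn't force the choice either way.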
That's why I'm sticking with it, in spite of various flaws that I'm not going to pretend don't exist.

As for why anyone would choose something over another -- who knows. My own choices and preferences have proven to be very different from the general population, so I'm not even gonna bother to guess how anyone else thinks.

T

-- 
English is useful because it is a mess. Since English is a mess, it maps well onto the problem space, which is also a mess, which we call reality. Similarly, Perl was designed to be a mess, though in the nicest of all possible ways. -- Larry Wall
Jan 14
parent forkit <forkit gmail.com> writes:
On Friday, 14 January 2022 at 14:50:50 UTC, H. S. Teoh wrote:
 IMO, 'autonomy' isn't the notion you're looking for.  The word 
 I prefer to use is *empowerment*.  A programming language 
 should be a toolbox filled with useful tools that you can use 
 to solve your problem.  It should not be a straitjacket that 
 forces you to conform to what its creators decided is good for 
 you (e.g., Java), nor should it be a minefield full of powerful 
 but extremely dangerous explosives that you have to be very 
 careful not to touch in the wrong way (e.g., C++). It should 
 let YOU decide what's the best way to solve a problem -- and 
 give you the tools to help you on your way.
Yes, trying to reduce a concept into a word can be tricky. Even so, 'autonomy' is the right word, I think: 'the capacity of an agent to act in accordance with an objective'.

I've found the D programming language 'empowers' me to be more 'autonomous' (or at least, to more 'easily' be autonomous). I don't feel like D restricts me before I even begin (like other languages often do, or the learning curve associated with their syntax does).

So I'm far less concerned about features, and more interested in how a programming language empowers autonomy.
Jan 14
prev sibling next sibling parent reply Araq <rumpf_a web.de> writes:
On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote:
 It takes 10x the effort to write a shell-script substitute in 
 C++ because at every turn the language works against me -- I 
 can't avoid dealing with memory management issues at every turn 
 -- should I use malloc/free and fix leaks / dangling pointers 
 myself? Should I use std::autoptr? Should I use 
 std::shared_ptr? Write my own refcounted pointer for the 15th 
 time?  Half my APIs would be cluttered with memory management 
 paraphrenalia, and half my mental energy would be spent 
 fiddling with pointers instead of MAKING PROGRESS IN MY PROBLEM 
 DOMAIN.

 With D, I can work at the high level and solve my problem long 
 before I even finish writing the same code in C++.
Well C++ ships with unique_ptr and shared_ptr, you don't have to roll your own. And you can use them and be assured that the performance profile of your program doesn't suddenly collapse when the data/heap grows too big, as these tools assure independence of the heap size. (What does D's GC assure you? That it won't run if you don't use it? That's such a low bar...)

Plus with D you cannot really work at the "high level" at all, it is full of friction. Is this data const? Or immutable? Is this @safe? @system? Should I use @nogc? Are exceptions still a good idea? Should I use interfaces or inheritance? Should I use class or struct? Pointers or inout? There are many languages where it's much easier to focus on the PROBLEM DOMAIN. Esp if the domain is "shell-script substitute".
Jan 13
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Jan 14, 2022 at 06:20:58AM +0000, Araq via Digitalmars-d-announce wrote:
 On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote:
 It takes 10x the effort to write a shell-script substitute in C++
 because at every turn the language works against me -- I can't avoid
 dealing with memory management issues at every turn -- should I use
 malloc/free and fix leaks / dangling pointers myself? Should I use
 std::autoptr? Should I use std::shared_ptr? Write my own refcounted
 pointer for the 15th time?  Half my APIs would be cluttered with
 memory management paraphrenalia, and half my mental energy would be
 spent fiddling with pointers instead of MAKING PROGRESS IN MY
 PROBLEM DOMAIN.
 
 With D, I can work at the high level and solve my problem long
 before I even finish writing the same code in C++.
Well C++ ships with unique_ptr and shared_ptr, you don't have to roll your own. And you can use them and be assured that the performance profile of your program doesn't suddenly collapse when the data/heap grows too big as these tools assure independence of the heap size.
That's not entirely accurate. Using unique_ptr or shared_ptr does not guarantee you won't get a long pause when the last reference to a large object graph goes out of scope, for example, and a whole bunch of dtors get called all at once. In code that's complex enough to warrant shared_ptr, the point at which this happens is likely not predictable (if it was, you wouldn't have needed to use shared_ptr).
 (What does D's GC assure you? That it won't run if you don't use it?
 That's such a low bar...)
When I'm writing a shell-script substitute, I DON'T WANT TO CARE about memory management, that's the point. I want the GC to clean up after me, no questions asked. I don't want to spend any time thinking about memory allocation issues. If I need to manually manage memory, *then* I manually manage memory and don't use the GC. D gives me that choice.

C++ forces me to think about memory allocation WHETHER I WANT TO OR NOT. And unique_ptr/shared_ptr doesn't help in this department, because their use percolates through all of my APIs. I cannot pass a unique_ptr to an API that receives only shared_ptr, and vice versa, without jumping through hoops.

Having a GC lets me completely eliminate memory management concerns from my APIs, resulting in cleaner APIs and less time wasted fiddling with memory management. It's a needless waste of time. WHEN performance demands it, THEN I can delve into the dirty details of how to manually manage memory. When performance doesn't really matter, I don't care, and I don't *want* to care.
 Plus with D you cannot really work at the "high level" at all, it is
 full of friction. Is this data const? Or immutable? Is this  safe?
  system? Should I use  nogc?
When I'm writing a shell-script substitute, I don't care about const/immutable or @safe/@system. Let all data be mutable for all I care, it doesn't matter. @nogc is a waste of time in shell-script substitutes. Just use templates and let the compiler figure out the attributes for you.

When I'm designing something longer term, *then* I worry about const/immutable/etc.. And honestly, I hardly ever bother with const/immutable, because IME they just become a needless encumbrance past the first few levels of abstraction. They preclude useful things like caching, lazy initialization, etc., and are not worth the effort except for leaf-node types. There's nothing wrong with mutable by default, in spite of what academic types tell you.
 Are exceptions still a good idea?
Of course it's a good idea. Esp in a shell-script substitute, where I don't want to waste my time worrying about checking error codes and all of that nonsense. Just let it throw an exception and die when something fails, that's good enough. If exceptions ever become a problem, you're doing something wrong. Only in rare cases do you actually need nothrow -- in hotspots identified by a profiler where try-blocks actually make a material difference. 90% of code doesn't need to worry about this.
 Should I use interfaces or inheritance?  Should I use class or struct?
For shell script substitutes? Don't even bother with OO. Just use structs and templates with attribute inference, job done. Honestly, even for most serious programs I wouldn't bother with OO unless the problem domain actually maps well onto the OO paradigm. Most problem domains are better handled with data-only types and external operations on them. Only in limited domains is OO actually useful. Even many polymorphic data models are better handled in ways other than OO (like ECS for runtime dynamic composition).
 Pointers or inout?
inout is a misfeature. Avoid it like the plague.

As for pointers vs. non-pointers: thanks to type inference and `.` working for both pointers and non-pointers, most of the time you don't even need to care. I've written lots of code where I started with a non-pointer and later decided to change it to a pointer (or vice versa) -- most of the code that works with it doesn't even need to be changed. I just change the type definition, and maybe 1 or 2 places where the difference actually matters, and type inference takes care of the rest. No such nonsense as needing to change '.' to '->' in 50 different places, or respell types in 25 different modules scattered across the program. `auto` and templated types are your friend. Let the compiler figure out what the concrete types are -- that's its job; the human shouldn't need to constantly fiddle with this manually except in a few places.
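A tiny sketch of the point about `.` working for both pointers and non-pointers (illustrative example, not from the post):

```d
// In D, member access via `.` auto-dereferences pointers, so generic code
// is largely unchanged when a value type is swapped for a pointer type.
struct Vec { double x, y; }

double length2(T)(T v) { return v.x * v.x + v.y * v.y; } // works for Vec and Vec*

void main()
{
    auto a = Vec(3, 4);
    Vec* p = &a;
    assert(length2(a) == 25); // value
    assert(length2(p) == 25); // pointer -- same call site, no '->' respelling
}
```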
 There are many languages where it's much easier to focus on the
 PROBLEM DOMAIN. Esp if the domain is "shell-script substitute".
I'm curious. Do you have any actual examples to show? T -- Some days you win; most days you lose.
Jan 14
parent Vinod K Chandran <kcvinu82 gmail.com> writes:
On Friday, 14 January 2022 at 14:29:54 UTC, H. S. Teoh wrote:

Well explained. :)
Jan 14
prev sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 1/14/22 1:20 AM, Araq wrote:

 Plus with D you cannot really work at the "high level" at all, it is 
 full of friction. Is this data const? Or immutable? Is this  safe? 
  system? Should I use  nogc? Are exceptions still a good idea? Should I 
 use interfaces or inheritance? Should I use class or struct? Pointers or 
 inout? There are many languages where it's much easier to focus on the 
 PROBLEM DOMAIN. Esp if the domain is "shell-script substitute".
I realize you have a different horse in the language race, but this statement is a complete strawman (as countless existing "high level" D projects demonstrate).

You might as well say that C is unusable at a high level vs. javascript because you need to decide what type of number you want: is it int, float, long? OMG SO MANY CHOICES.

-Steve
Jan 14
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 14 January 2022 at 18:54:26 UTC, Steven Schveighoffer 
wrote:
 You might as well say that C is unusable at a high level vs. 
 javascript because you need to decide what type of number you 
 want, is it int, float, long? OMG SO MANY CHOICES.
Bad choice of example… C is close to unusable at a high level and C++ is remarkably unproductive if you only want to do high level stuff. But yes, the problem with D const isn't that there are many choices. The problem is that there is only one over-extended choice.
Jan 14
prev sibling next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote:
 On Fri, Jan 14, 2022 at 01:19:01AM +0000, forkit via 
 Digitalmars-d-announce wrote: [...]
 [...]
[...]
 [...]
How is using D "losing autonomy"? Unlike Java, D does not force you to use anything. You can write all-out GC code, you can write nogc code (slap it on main() and your entire program will be guaranteed to be GC-free -- statically verified by the compiler). You can write functional-style code, and, thanks to metaprogramming, you can even use more obscure paradigms like declarative programming. [...]
When languages are compared in grammar and semantics alone, you are fully correct.

Except we have this nasty thing called an eco-system, where libraries, IDE tooling, OS, team mates, books, contractors, .... are also part of the comparison. When I pick a language, I also count on the libraries, IDE tooling, OS, team mates, books,.... to help me getting there.

Meanwhile, the competition has grown multiple features that used to be only on D's side of the comparison -- languages that have a flourishing ecosystem and keep getting the features only D could brag about when Andrei's book came out 10 years ago.
Jan 14
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Jan 14, 2022 at 09:18:23AM +0000, Paulo Pinto via
Digitalmars-d-announce wrote:
 On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote:
[...]
 How is using D "losing autonomy"?  Unlike Java, D does not force you
 to use anything. You can write all-out GC code, you can write  nogc
 code (slap it on main() and your entire program will be guaranteed
 to be GC-free -- statically verified by the compiler). You can write
 functional-style code, and, thanks to metaprogramming, you can even
 use more obscure paradigms like declarative programming.
[..]
 When languages are compared in grammar and semantics alone, you are
 fully correct.
 
 Except we have this nasty thing called eco-system, where libraries,
 IDE tooling, OS, team mates, books, contractors, .... are also part of
 the comparisasion.
[...]

That's outside of the domain of the language itself. I'm not gonna pretend we don't have ecosystem problems, but that's a social issue, not a technical one. Well OK, maybe IDE tooling is a technical issue too... but I write D just fine in Vim.

Unlike Java, using an IDE is not necessary to be productive in D. You don't have to write aneurysm-inducing amounts of factory classes and wrapper types just to express the simplest of abstractions. I see an IDE for D as something nice to have, not an absolute essential.

 have a flourishing ecosystem and keep getting the features only D could
 brag about when Andrei's book came out 10 years ago.
IMNSHO, D should forget all pretenses of being a stable language, and continue to evolve as it did 5-10 years ago. D3 should be a long-term goal, not a taboo that nobody wants to talk about.

But hey, I'm not the one making decisions here, and talk is cheap...

T

-- 
Give me some fresh salted fish, please.
Jan 14
prev sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 14 January 2022 at 02:13:48 UTC, H. S. Teoh wrote:
 compiler). You can write functional-style code, and, thanks to 
 metaprogramming, you can even use more obscure paradigms like 
 declarative programming.
No, you can't. You can do a little bit of weak declarative programming in C++ thanks to SFINAE. The D type system does not provide a capable solver.
 I can theoretically do everything in C++ that I do in D, for 
 example,
Only with the GC, and even then that claim is a stretch. Without the GC you lose features that C++ has.
 In C++, I'm guaranteed that there is no GC -- even when having 
 a GC might actually help me achieve what I want.  In order to
You have access to several GCs in the C++ eco system.
 that are not compatible with the GC, etc.. Definitely NOT worth 
 the effort for one-off shell script replacements. It takes 10x
Never seen a scripting problem that cannot be handled well with Python, so why would I not use Python for scripting? When you sacrifice the system-level programming aspect in order to make scripting more convenient, you lose focus. And people who primarily want to do system-level programming will not respond well to it. Hardly surprising.
 With D, I can work at the high level and solve my problem long 
 before I even finish writing the same code in C++.
This is great, but does not solve the other issues.
 And when I need to dig under the hood, D doesn't stop me -- 
 it's perfectly fine with malloc/free and other such 
 alternatives.
Nobody is fine with malloc/free. Even in C++ that is considered bad form. This is why these fanboy discussions never go anywhere. People make up arguments and pretend that they are reality. Well, they aren't. Rust and C++ are doing better than D in terms of adoption, and it isn't just marketing. It is related to actual design considerations and a willingness to adapt to the usage scenario.

Rust has actually focused on runtime-free builds. They pay attention to demand. Despite Rust being "high level" and "normative", they pay attention to system-level usage scenarios beyond those of browsers. I think this is why it is easier to believe in the future of Rust than in many other alternatives. And I don't have a preference for Rust, at all.
Jan 14
prev sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Jan 12, 2022 at 03:41:03PM +0000, Adam D Ruppe via
Digitalmars-d-announce wrote:
 On Wednesday, 12 January 2022 at 15:25:37 UTC, H. S. Teoh wrote:
 	However it turns out that unless you are writing a computer
 	game, a high frequency trading system, a web server
Most computer games and web servers use GC too.
[...] Depends on what kind of games, I guess. If you're writing a 60fps real-time raytraced 3D FPS running at 2048x1152 resolution, then *perhaps* you might not want a GC killing your framerate every so often. (But even then, there's always GC.disable and nogc... so it's not as if you *can't* do it in D. It's more a psychological barrier triggered by the word "GC" than anything else, IMNSHO.) T -- A mathematician is a device for turning coffee into theorems. -- P. Erdos
Jan 12
next sibling parent Elronnd <elronnd elronnd.net> writes:
On Wednesday, 12 January 2022 at 16:17:02 UTC, H. S. Teoh wrote:
 Depends on what kind of games, I guess. If you're writing a 
 60fps real-time raytraced 3D FPS running at 2048x1152 
 resolution, then *perhaps* you might not want a GC killing your 
 framerate every so often.
Resolution is down to GPU performance, raytracing ditto. Realtime GC is a thing (not for D, of course); consistently sub-millisecond pause times are achievable. Given 16ms frame times (you mention 60fps), that seems reasonable.
 (But even then, there's always GC.disable and  nogc... so it's 
 not as if you *can't* do it in D. It's more a psychological 
 barrier triggered by the word "GC" than anything else, IMNSHO.)
Agree.
Jan 12
prev sibling parent reply Stanislav Blinov <stanislav.blinov gmail.com> writes:
On Wednesday, 12 January 2022 at 16:17:02 UTC, H. S. Teoh wrote:
 On Wed, Jan 12, 2022 at 03:41:03PM +0000, Adam D Ruppe via 
 Digitalmars-d-announce wrote:
 On Wednesday, 12 January 2022 at 15:25:37 UTC, H. S. Teoh 
 wrote:
 	However it turns out that unless you are writing a computer
 	game, a high frequency trading system, a web server
Most computer games and web servers use GC too.
[...] Depends on what kind of games, I guess. If you're writing a 60fps real-time raytraced 3D FPS running at 2048x1152 resolution, then *perhaps* you might not want a GC killing your framerate every so often. (But even then, there's always GC.disable and nogc... so it's not as if you *can't* do it in D. It's more a psychological barrier triggered by the word "GC" than anything else, IMNSHO.) T
Oh there is a psychological barrier for sure. On both sides of the, uh, "argument". I've said this before but I can repeat it again: time it. 4 milliseconds. That's how long a single GC.collect() takes on my machine. That's a quarter of a frame. And that's a dry run. Doesn't matter if you can GC.disable or not, eventually you'll have to collect, so you're paying that cost (more, actually, since that's not going to be a dry run). If you can afford that - you can befriend the GC. If not - GC goes out the window.

In other words, it's only acceptable if you have natural pauses (loading screens, transitions, etc.) with limited resource consumption between them OR if you can afford to e.g. halve your FPS for a while. The alternative is to collect every frame, which means sacrificing a quarter of runtime. No, thanks.

Thing is, "limited resource consumption" means you're preallocating anyway, at which point one has to question why use the GC in the first place. The majority of garbage created per frame can be trivially allocated from an arena and "deallocated" in one `mov` instruction (or a few of them). And things that can't be allocated in an arena, i.e. things with destructors - you *can't* reliably delegate to the GC anyway - which means your persistent state is more likely to be manually managed.

TLDR: it's pointless to lament on irrelevant trivia. Time it! Any counter-arguments from either side are pointless without that.
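For anyone who wants to follow the "time it" advice, a minimal sketch (my own example; timings will differ wildly by machine, heap size, and GC state):

```d
// Rough timing of a single collection; the printed number is machine-dependent.
import core.memory : GC;
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio : writeln;

void main()
{
    // Create some garbage so the collector has something to scan.
    foreach (i; 0 .. 100_000)
    {
        auto p = new ubyte[](64);
    }

    auto sw = StopWatch(AutoStart.yes);
    GC.collect();
    sw.stop();
    writeln("GC.collect() took ", sw.peek.total!"usecs", " us");
}
```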
Jan 13
next sibling parent reply Araq <rumpf_a web.de> writes:
On Thursday, 13 January 2022 at 10:21:12 UTC, Stanislav Blinov 
wrote:
 Oh there is a psychological barrier for sure. On both sides of 
 the, uh, "argument". I've said this before but I can repeat it 
 again: time it. 4 milliseconds. That's how long a single 
 GC.collect() takes on my machine. That's a quarter of a frame. 
 And that's a dry run. Doesn't matter if you can GC.disable or 
 not, eventually you'll have to collect, so you're paying that 
 cost (more, actually, since that's not going to be a dry run). 
 If you can afford that - you can befriend the GC. If not - GC 
 goes out the window.
But the time it takes depends on the number of threads it has to stop and the amount of live memory in your heap. If it took 4ms regardless of these factors it wouldn't be bad, but that's not how D's GC works... And the language design of D isn't all that friendly to better GC implementations. That is the real problem here; that is why it keeps coming up.
Jan 13
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Thursday, 13 January 2022 at 11:57:41 UTC, Araq wrote:
 But the time it takes depends on the number of threads it has 
 to stop and the amount of live memory of your heap. If it took 
 4ms regardless of these factors it wouldn't be bad, but that's 
 not how D's GC works...
Sadly, fast scanning is still bad, unless you are on an architecture where you can scan without touching the caches. If you burst through gigabytes of memory, you have a negative effect on real-time threads that expect lookup tables to be in the caches. That means you need more headroom in real-time threads, so you sacrifice the quality of work done by real-time threads by saturating the memory bus.

It would be better to have a concurrent collector that slowly crawls, or to just take the predictable overhead of ARC, which is distributed fairly evenly in time (unless you do something silly).
Jan 13
prev sibling next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On Thursday, 13 January 2022 at 10:21:12 UTC, Stanislav Blinov 
wrote:
 On Wednesday, 12 January 2022 at 16:17:02 UTC, H. S. Teoh wrote:
 [...]
Oh there is a psychological barrier for sure. On both sides of the, uh, "argument". I've said this before but I can repeat it again: time it. 4 milliseconds. That's how long a single GC.collect() takes on my machine. That's a quarter of a frame. And that's a dry run. Doesn't matter if you can GC.disable or not, eventually you'll have to collect, so you're paying that cost (more, actually, since that's not going to be a dry run). If you can afford that - you can befriend the GC. If not - GC goes out the window. In other words, it's only acceptable if you have natural pauses (loading screens, transitions, etc.) with limited resource consumption between them OR if you can afford to e.g. halve your FPS for a while. The alternative is to collect every frame, which means sacrificing a quarter of runtime. No, thanks. Thing is, "limited resource consumption" means you're preallocating anyway, at which point one has to question why use the GC in the first place. The majority of garbage created per frame can be trivially allocated from an arena and "deallocated" in one `mov` instruction (or a few of them). And things that can't be allocated in an arena, i.e. things with destructors - you *can't* reliably delegate to the GC anyway - which means your persistent state is more likely to be manually managed. TLDR: it's pointless to lament on irrelevant trivia. Time it! Any counter-arguments from either side are pointless without that.
You collect it when it matters less, like when loading a level; some of them take so long that people have even written mini-games to play during loading screens, so players won't notice a couple of ms more. Hardly any different from having an arena throw away the whole set of frame data during loading. Unless we start talking about DirectStorage and similar.
Jan 13
prev sibling next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Thursday, 13 January 2022 at 10:21:12 UTC, Stanislav Blinov 
wrote:
 TLDR: it's pointless to lament on irrelevant trivia. Time it! 
 Any counter-arguments from either side are pointless without 
 that.
"Time it" isn't really useful for someone starting on a project, as it is too late by the time you have something worth measuring. The reason for this is that it gets worse and worse as your application grows. Then you end up either giving up on the project or going through a very expensive and bug-prone rewrite. There is no trivial upgrade path for code relying on the D GC.

And quite frankly, 4 ms is not a realistic worst-case scenario for the D GC. You have to wait for all threads to stop on the worst possible OS/old-budget-hardware/program-state configuration.

It is better to start with a solution that is known to scale well if you are writing highly interactive applications. For D that could be ARC.
Jan 13
parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Thursday, 13 January 2022 at 15:44:33 UTC, Ola Fosheim Grøstad 
wrote:
 On Thursday, 13 January 2022 at 10:21:12 UTC, Stanislav Blinov 
 wrote:
 TLDR: it's pointless to lament on irrelevant trivia. Time it! 
 Any counter-arguments from either side are pointless without 
 that.
"Time it" isn't really useful for someone starting on a project, as it is too late when you have something worth measuring. The reason for this is that it gets worse and worse as your application grows. Then you end up either giving up on the project or going through a very expensive and bug prone rewrite. There is no trivial upgrade path for code relying on the D GC. And quite frankly, 4 ms is not a realistic worse case scenario for the D GC. You have to wait for all threads to stop on the worst possible OS/old-budget-hardware/program state configuration. It is better to start with a solution that is known to scale well if you are writing highly interactive applications. For D that could be ARC.
Just leaving this here, from a little well-known company:

https://developer.arm.com/solutions/internet-of-things/languages-and-libraries/go

ARC, tracing GC, whatever - but make up your mind; otherwise, other languages that know what they want to be get the spotlight with such vendors.
Jan 13
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Thursday, 13 January 2022 at 16:33:59 UTC, Paulo Pinto wrote:
 ARC, tracing GC, whatever, but make your mind otherwise other 
 languages that know what they want to be get the spotlight in 
 such vendors.
Go has a concurrent collector, so I would assume it is reasonably well-behaved in regard to other system components (e.g. it does not sporadically saturate the data bus for a long time). Go's runtime also appears to be fairly limited, so it does not surprise me that people want to use it on microcontrollers.

We had some people in these forums who were interested in using D for embedded, but they seemed to give up, as modifying the runtime was more work than it was worth for them. That is at least my interpretation of what they stated when they left. So well, D has not made a point of capturing embedded programmers in the past, and there are no plans for a strategic change in that regard AFAIK.
Jan 13
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Jan 13, 2022 at 10:21:12AM +0000, Stanislav Blinov via
Digitalmars-d-announce wrote:
[...]
 Oh there is a psychological barrier for sure. On both sides of the,
 uh, "argument". I've said this before but I can repeat it again: time
 it. 4 milliseconds. That's how long a single GC.collect() takes on my
 machine.  That's a quarter of a frame. And that's a dry run. Doesn't
 matter if you can GC.disable or not, eventually you'll have to
 collect, so you're paying that cost (more, actually, since that's not
 going to be a dry run). If you can afford that - you can befriend the
 GC. If not - GC goes out the window.
?? That was exactly my point. If you can't afford it, you use @nogc. That's what it's there for! And no, if you don't GC-allocate, you won't eventually have to collect 'cos there'd be nothing to collect. Nobody says you HAVE to use the GC. You use it when it fits your case; when it doesn't, you GC.disable or write @nogc, and manage your own allocations, e.g., with an arena allocator, etc..

Outside of your game loop you can still use GC allocations freely. You just collect before entering the main loop, then GC.disable or just enter @nogc code. You can even use GC memory to pre-allocate your arena allocator buffers, then run your own allocator on top of that. E.g., allocate a 500MB buffer (or however big you need it to be) before the main loop, then inside the main loop a per-frame arena allocator hands out pointers into this buffer. At the end of the frame, reset the pointer. That's a single-instruction collection. After you exit your main loop, call GC.collect to collect the buffer itself.

This isn't Java where every allocation must come from the GC. D lets you work with raw pointers for a reason.
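The buffer-plus-arena scheme described above can be sketched roughly like this (an assumed example; the buffer size and the `FrameArena` API are made up for illustration):

```d
// Hypothetical per-frame arena carved out of one GC-allocated buffer.
import core.memory : GC;

struct FrameArena
{
    ubyte[] buf;   // backing storage, allocated once up front
    size_t used;   // bump pointer

    void[] alloc(size_t n)
    {
        auto p = buf[used .. used + n]; // no GC allocation here
        used += n;
        return p;
    }

    void reset() { used = 0; } // end-of-frame "collection": one assignment
}

void main()
{
    auto arena = FrameArena(new ubyte[](1024 * 1024)); // pre-allocate before the loop
    GC.disable(); // no collections while the main loop runs

    foreach (frame; 0 .. 3)
    {
        auto scratch = arena.alloc(256); // per-frame garbage lives in the arena
        // ... use scratch for this frame's transient data ...
        arena.reset();                   // discard it all at frame end
    }

    GC.enable();  // back to normal GC behavior outside the loop
    GC.collect(); // collections are affordable again here
}
```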
 In other words, it's only acceptable if you have natural pauses
 (loading screens, transitions, etc.) with limited resource consumption
 between them OR if you can afford to e.g. halve your FPS for a while.
 The alternative is to collect every frame, which means sacrificing a
 quarter of runtime. No, thanks.
Nobody says you HAVE to use the GC in your main loop.
 Thing is, "limited resource consumption" means you're preallocating
 anyway, at which point one has to question why use the GC in the first
 place.
You don't have to use the GC. You can malloc your preallocated buffers. Or GC-allocate them but call GC.disable before entering your main loop.
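The two options mentioned here, plus the compile-time guarantee the thread keeps referring to, can be sketched as follows. Function names are hypothetical; the calls to `malloc`, `GC.disable`, and the `@nogc` attribute are real D/druntime features.

```d
import core.memory : GC;
import core.stdc.stdlib : malloc, free;

// Option 1: malloc the preallocated buffer; the GC never sees it.
void withMalloc()
{
    auto buf = cast(ubyte*) malloc(64 * 1024);
    scope (exit) free(buf);
    // ... use buf inside the loop ...
}

// Option 2: GC-allocate up front, then forbid collections during the loop.
void withDisabledGC()
{
    auto buf = new ubyte[](64 * 1024);
    GC.disable();
    scope (exit) GC.enable();
    // ... main loop: no collection will run in here ...
}

// @nogc is the compile-time guarantee: this function cannot GC-allocate,
// so it can only work with memory handed in from outside.
@nogc void perFrameUpdate(ubyte[] scratch)
{
    scratch[0] = 1;
}
```

`GC.disable` is a runtime switch, while `@nogc` is checked by the compiler; the latter turns "this code never triggers a collection" from a convention into an enforced property.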
 The majority of garbage created per frame can be trivially
 allocated from an arena and "deallocated" in one `mov` instruction (or
 a few of them). And things that can't be allocated in an arena, i.e.
 things with destructors - you *can't* reliably delegate to the GC
 anyway - which means your persistent state is more likely to be
 manually managed.
[...] Of course. So don't use the GC for those things. That's all. The GC is still useful for things outside the main loop, e.g., setup code, loading resources in between levels, etc. The good thing about D is that you *can* make this choice. It's not like Java where you're forced to use the GC whether you like it or not. There's no reason to clamor to *remove* the GC from D, as some appear to be arguing for. T -- The only difference between male factor and malefactor is just a little emptiness inside.
Jan 13
prev sibling next sibling parent reply zjh <fqbqrr 163.com> writes:
On Wednesday, 12 January 2022 at 02:37:47 UTC, Walter Bright 
wrote:
 "Why I like D" is on the front page of HackerNews at the moment 
 at number 11.

 https://news.ycombinator.com/news
[Chinese version](https://fqbqrr.blog.csdn.net/article/details/122469247),and [another one](https://fqbqrr.blog.csdn.net/article/details/109110021).
Jan 12
parent reply zjh <fqbqrr 163.com> writes:
On Thursday, 13 January 2022 at 03:10:14 UTC, zjh wrote:

I have a `GC phobia`.
Jan 13
parent forkit <forkit gmail.com> writes:
On Thursday, 13 January 2022 at 11:30:40 UTC, zjh wrote:
 On Thursday, 13 January 2022 at 03:10:14 UTC, zjh wrote:

 I have a `GC phobia`.
"A phobia is an irrational fear of something that's unlikely to cause harm." "A phobia is a type of anxiety disorder defined by a persistent and excessive fear of an object or situation." "A phobia is an excessive and irrational fear reaction." "Phobias ... are a maladaptive fear response." plz... go get some help ;-)
Jan 13
prev sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On Wednesday, 12 January 2022 at 02:37:47 UTC, Walter Bright 
wrote:
 "Why I like D" is on the front page of HackerNews at the moment 
 at number 11.

 https://news.ycombinator.com/news
I enjoyed reading the article.
Jan 13