
digitalmars.D - Some Notes on 'D for the Win'

reply Walter Bright <newshound2 digitalmars.com> writes:
http://www.reddit.com/r/programming/comments/2ed9ah/some_notes_on_d_for_the_win/

http://tomerfiliba.com/blog/dlang-part2/
Aug 23 2014
next sibling parent reply "nan0a" <nan0a gmail.com> writes:
On Saturday, 23 August 2014 at 18:29:43 UTC, Walter Bright wrote:
 http://www.reddit.com/r/programming/comments/2ed9ah/some_notes_on_d_for_the_win/

 http://tomerfiliba.com/blog/dlang-part2/
I posted in the thread (under the same name as here) and regularly discuss D in /r/programming. I really enjoy trying out new languages and I think D is excellent - I was up and running in it in just a few days. TDPL + Ali's book are great references. One of the things I like best about D is the community, it's very lively. I have a few complaints about the language because nothing is perfect (e.g. attribute puke), but overall I find the language extremely usable. Rust was mentioned a few times in the thread -- I've been using Rust for a while too (which I see constantly mentioned any time D is) and I'm always tripping over my own feet trying to understand the memory/type system, lifetimes, etc. I'm not even going to claim that I'm a good programmer, but I'm not sure how I feel about Rust being widely adopted outside of applications where memory safety is critical (e.g. Mozilla's Servo). The obvious ML/functional inspiration is also going to be a stumbling block for C/C++/Java/etc. programmers IMO. I've done some hobby work with Haskell, so I wasn't exactly a fish out of water, but I could understand why some may be. Keep up the great work on D.
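To make the "attribute puke" complaint concrete for readers who haven't seen it, here is a minimal sketch (the function and names are made up for illustration) of how D's correctness attributes pile up on signatures, especially in library-style generic code:

inout(T)[] firstHalf(T)(inout(T)[] arr) @safe pure nothrow @nogc
{
    // Each attribute is useful on its own; together they crowd the signature.
    return arr[0 .. arr.length / 2];
}

Attribute inference on templates reduces the need to spell these out, but explicit annotations like this are still common in Phobos-style code.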
Aug 23 2014
next sibling parent "eles" <eles215 gzk.dot> writes:
On Saturday, 23 August 2014 at 20:23:37 UTC, nan0a wrote:
 On Saturday, 23 August 2014 at 18:29:43 UTC, Walter Bright 
 wrote:
 language because nothing is perfect (e.g. attribute puke), but
Yeah, this should be addressed. Eventually...
Aug 23 2014
prev sibling parent reply "Chris" <wendlec tcd.ie> writes:
On Saturday, 23 August 2014 at 20:23:37 UTC, nan0a wrote:
 On Saturday, 23 August 2014 at 18:29:43 UTC, Walter Bright 
 wrote:
 http://www.reddit.com/r/programming/comments/2ed9ah/some_notes_on_d_for_the_win/

 http://tomerfiliba.com/blog/dlang-part2/
I posted in the thread (under the same name as here) and regularly discuss D in /r/programming. I really enjoy trying out new languages and I think D is excellent - I was up and running in it in just a few days. TDPL + Ali's book are great references. One of the things I like best about D is the community, it's very lively. I have a few complaints about the language because nothing is perfect (e.g. attribute puke), but overall I find the language extremely usable. Rust was mentioned a few times in the thread -- I've been using Rust for a while too (which I see constantly mentioned any time D is) and I'm always tripping over my own feet trying to understand the memory/type system, lifetimes, etc. I'm not even going to claim that I'm a good programmer, but I'm not sure how I feel about Rust being widely adopted outside of applications where memory safety is critical (e.g. Mozilla's Servo). The obvious ML/functional inspiration is also going to be a stumbling block for C/C++/Java/etc. programmers IMO. I've done some hobby work with Haskell, so I wasn't exactly a fish out of water, but I could understand why some may be. Keep up the great work on D.
I sometimes have the feeling that because the D community discusses every tiny feature or change elaborately (and rightly so), people coming from the outside have the impression that it's a half-baked (and thus unreliable) thing. Other languages often just "cover up" the discussions, or in the case of Go, decisions are presented post factum (with a lot of hype and "Hurra!" around them), which gives the impression of a clean and tight (and thus reliable) project. The changes to D (especially new killer features, improved libraries and the like) are often not communicated to the general public (as in "This is new, and this is how you use it"). So people from the outside have the impression that it's a bit of a mess, and they remain ignorant of features and improvements. Maybe that's part of the "anti-D" attitude often encountered.
Aug 25 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 25 August 2014 at 11:20:51 UTC, Chris wrote:
 I sometimes have the feeling that because the D community 
 discusses every tiny feature or change elaborately (and rightly 
 so), people coming from the outside have the impression that 
 it's a half-baked (and thus unreliable) thing.
How is it not half-baked when DMD is released with regressions? D lacks proper quality assurance and a clear distinction between experimental releases/features and production quality. Features should have at least 6-12 months of testing before being used in production, IMO.
 Other languages often just "cover up" the discussions, or in 
 the case of Go, decisions are presented post factum (with a lot 
 of hype and "Hurra!" around them), which gives the impression 
 of a clean and tight (and thus reliable) project.
Go has reached feature stability and does not push experimental features into the production compiler. The libraries are limited in scope and thus of reasonable quality AFAIK. Note that on Google App Engine the Go runtime is still listed as experimental. Only Java 7 and Python 2.7 are labeled as ready for production. The problem is that D will stay at 95% done forever without proper management. A common rule of thumb is that 90% of the work remains when 90% of the code is written, i.e. the final polish, refactoring and maintenance is the most work. Don't underestimate the importance of the last 5%. Go is lacklustre, but it is more mature than D.
 So people from the outside have the impression that it's a bit 
 of a mess and they are ignorant as regards features and 
 improvements.
The process is a mess if you don't do feature freeze on one version that is mature and keep it in maintenance mode (only receiving safety bug fixes and focusing on stability).
Aug 25 2014
parent reply "Chris" <wendlec tcd.ie> writes:
On Monday, 25 August 2014 at 12:35:51 UTC, Ola Fosheim Grøstad 
wrote:
 On Monday, 25 August 2014 at 11:20:51 UTC, Chris wrote:
 I sometimes have the feeling that because the D community 
 discusses every tiny feature or change elaborately (and 
 rightly so), people coming from the outside have the 
 impression that it's a half-baked (and thus unreliable) thing.
How is it not half-baked when DMD is released with regressions? D lacks proper quality assurance and a clear distinction between experimental releases/features and production quality. Features should have at least 6-12 months of testing before being used in production, IMO.
 Other languages often just "cover up" the discussions, or in 
 the case of Go, decisions are presented post factum (with a 
 lot of hype and "Hurra!" around them), which gives the 
 impression of a clean and tight (and thus reliable) project.
Go has reached feature stability and does not push experimental features into the production compiler. The libraries are limited in scope and thus of reasonable quality AFAIK. Note that on Google App Engine the Go runtime is still listed as experimental. Only Java 7 and Python 2.7 are labeled as ready for production. The problem is that D will stay at 95% done forever without proper management. A common rule of thumb is that 90% of the work remains when 90% of the code is written, i.e. the final polish, refactoring and maintenance is the most work. Don't underestimate the importance of the last 5%. Go is lacklustre, but it is more mature than D.
 So people from the outside have the impression that it's a bit 
 of a mess and they are ignorant as regards features and 
 improvements.
The process is a mess if you don't do feature freeze on one version that is mature and keep it in maintenance mode (only receiving safety bug fixes and focusing on stability).
I'd say half-baked refers to real-world usability. Nimrod would be half-baked, because it is still young and a lot of features will change (breaking changes). The language sounds interesting, but I wouldn't touch it right now. D on the other hand is already perfectly usable for production code, which, for me, is the most important thing. Whether or not the whole infrastructure is good enough (there's always room for improvement) is not so important for this particular aspect (i.e. actual coding). I started to use D before dub was the official package manager. Although I use dub a lot now, I managed without it for two or more years (which in turn says a lot about D itself). Here's a list of Go apps: http://go-lang.cat-v.org/go-code Similar to D apps. Some of them still going, some of them in limbo, some of them dead (it would be nice to know why they were discontinued). I wonder how safe it would be to use Go in production (breaking changes, availability / implementation of useful features etc.)
 Note that on Google App Engine the Go runtime is still listed 
 as experimental. Only Java 7 and Python 2.7 are labeled as 
 ready for production.
They are listed as "experimental" partly for legal reasons, I suppose, and "ready for production" still means "experimental". All software is experimental. No matter how you label it. Java has moved very slowly over the years and I think Go has kind of slowed down a bit. D is still moving quite fast as regards adding features and improvements and is remarkably stable and reliable.
Aug 25 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 25 August 2014 at 13:30:45 UTC, Chris wrote:
 D on the other hand is already perfectly usable for production 
 code, which, for me, is the most important thing.
That would depend on what you mean by "usable for production code". For me "usable" means that 12 months from now I can pick up compiler/library bug fixes just by recompiling the project, without any source code changes. It also means no regressions and a long testing window.
 Whether or not the whole infrastructure is good enough (there's 
 always room for improvement) is not so important for this 
 particular aspect (i.e. actual coding).
Well, I think it matters that you cannot know which parts of Phobos you can rely on. The basic problem here is that the D "managers" think that the language will take off if it gains more features. So new features are eagerly pushed without polish and serious harnessing. I think the opposite is true: I think it will take off when people see that the distributed tar-ball only includes code that is polished and harnessed (leaving the less robust stuff to other repositories or behind an experimental switch).
 discontinued). I wonder how safe it would be to use Go in 
 production (breaking changes, availability / implementation of 
 useful features etc.)
Pretty safe! https://golang.org/doc/devel/release.html
 Note that on Google App Engine the Go runtime is still listed 
 as experimental. Only Java 7 and Python 2.7 are labeled as 
 ready for production.
They are listed as "experimental" partly for legal reasons, I suppose, and "ready for production" still means "experimental". All software is experimental. No matter how you label it.
For contractual reasons, but based on real-world management needs. The Java and Python runtimes have a stable configuration window of 12 months (meaning no changes without 12 months' notice), while Go and PHP can be dropped by Google overnight, IIRC. So Go is not useful for production on App Engine unless you do a double implementation (like reimplementing parts in Go for performance reasons).
Aug 25 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 25 Aug 2014 13:51:22 +0000
via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 discontinued). I wonder how safe it would be to use Go in
 production (breaking changes, availability / implementation of
 useful features etc.)
Pretty safe! While Go and PHP can be dropped by Google overnight, IIRC.
pretty safe.
Aug 25 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 25 August 2014 at 14:04:03 UTC, ketmar via 
Digitalmars-d wrote:
 On Mon, 25 Aug 2014 13:51:22 +0000
 via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 discontinued). I wonder how safe it would be to use Go in 
 production (breaking changes, availability / implementation 
 of useful features etc.)
Pretty safe! While Go and PHP can be dropped by Google overnight, IIRC.
pretty safe.
It is. You can use Go on Compute Engine.
Aug 25 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 25 Aug 2014 14:07:53 +0000
via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 It is. You can use Go on Compute Engine.
i can use D in production. i'm pretty sure that current GDC will not rot after five years: i will still be able to build gcc 4.9.1 and gdc with 2.065 backend, then fix and compile my code. what's wrong with it?
Aug 25 2014
prev sibling next sibling parent reply "eles" <eles215 gzk.dot> writes:
On Saturday, 23 August 2014 at 18:29:43 UTC, Walter Bright wrote:
 http://www.reddit.com/r/programming/comments/2ed9ah/some_notes_on_d_for_the_win/

 http://tomerfiliba.com/blog/dlang-part2/
Comment: "The only thing I hate about D is the GC, I think that to include a GC in a system programming language like D was a very big mistake... I know it's possible to disable the GC but its standard library needs it so you must live with it."
Aug 23 2014
parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
On Saturday, 23 August 2014 at 21:52:01 UTC, eles wrote:
 On Saturday, 23 August 2014 at 18:29:43 UTC, Walter Bright 
 wrote:
 http://www.reddit.com/r/programming/comments/2ed9ah/some_notes_on_d_for_the_win/

 http://tomerfiliba.com/blog/dlang-part2/
Comment: "The only thing I hate about D is the GC, I think that to include a GC in a system programming language like D was a very big mistake... I know it's possible to disable the GC but its standard library needs it so you must live with it."
Examples of real, working desktop OSes that people really used at their work, done in systems programming languages with GC:

Mesa/Cedar
https://archive.org/details/bitsavers_xeroxparcteCedarProgrammingEnvironmentAMidtermRepo_13518000

Oberon and derivatives
http://progtools.org/article.php?name=oberon&section=compilers&type=tutorial

SPIN
http://en.wikipedia.org/wiki/Modula-3
http://en.wikipedia.org/wiki/SPIN_%28operating_system%29

What is really needed for the average Joe systems programmer to get over this "GC in systems programming" stigma is getting the likes of Apple, Google, Microsoft and such to force feed these languages to them - the day when Swift, .NET Native and Go (with better low-level support) are the languages in their SDKs. It doesn't help that the Singularity project died, and outside Microsoft no one knows what Midori (its successor) is about. Maybe Facebook could do an operating system in D. :)

-- Paulo
Aug 23 2014
next sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 8/24/2014 2:39 AM, Paulo Pinto wrote:
 Maybe Facebook could do an operating system in D. :)
The way facebook's been growing/expanding (e.g., Oculus Rift - bought by a social networking site? Go figure.), I wouldn't be surprised to see a facebook OS some day.
Aug 24 2014
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
24-Aug-2014 23:31, Nick Sabalausky wrote:
 On 8/24/2014 2:39 AM, Paulo Pinto wrote:
 Maybe Facebook could do an operating system in D. :)
The way facebook's been growing/expanding (ex, oculus rift. Bought by a social networking site? Go figure.), I wouldn't be surprised to see a facebook OS some day.
Written in D, of course :) -- Dmitry Olshansky
Aug 24 2014
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 8/24/2014 3:34 PM, Dmitry Olshansky wrote:
 24-Aug-2014 23:31, Nick Sabalausky wrote:
 On 8/24/2014 2:39 AM, Paulo Pinto wrote:
 Maybe Facebook could do an operating system in D. :)
The way facebook's been growing/expanding (ex, oculus rift. Bought by a social networking site? Go figure.), I wouldn't be surprised to see a facebook OS some day.
Written in D, of course :)
That would indeed be a heck of a feather in D's cap. I mean, heck, PHP is easily one of the worst languages in major use, and one of the biggest reasons^H^H^H^H^H^H^Hexcuses people have given for using it is "Well, Facebook is written in PHP!" - even years after that statement had become woefully inaccurate (half the codebase in C/C++ including all/most of the critical stuff, plus the "PHP" stuff not being normal PHP but the third incarnation of their own rewrite - yeah, not exactly "is written in PHP"). Point of course being: given what Facebook "endorsement" (sort of) has done for *PHP*, even when it wasn't even entirely true(!), imagine what a boon *true* large-scale FB usage could be for a language like D. Or even if not, at the very least it would get people to knock it off with the appeal-to-authority language selection arguments ;)
Aug 24 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sun, 24 Aug 2014 23:15:53 -0400
Nick Sabalausky via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Or even if not, at the very least it would get people to knock it off
 with the appeal-to-authority language selection arguments ;)
yeah, sometimes "but look at FB! they're using D and hired its core developers!" works better than any technical arguments. i'm not happy with this, but if it works...
Aug 24 2014
prev sibling parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Sun, 24 Aug 2014 06:39:28 +0000
schrieb "Paulo Pinto" <pjmlp progtools.org>:

 Examples of real, working desktop OS, that people really used at 
 their work, done in system programming languages with GC.
 
 Mesa/Cedar
 https://archive.org/details/bitsavers_xeroxparcteCedarProgrammingEnvironmentAMidtermRepo_13518000
 
 Oberon and derivatives
 http://progtools.org/article.php?name=oberon&section=compilers&type=tutorial
 
 SPIN
 http://en.wikipedia.org/wiki/Modula-3
 http://en.wikipedia.org/wiki/SPIN_%28operating_system%29
 
 What is really needed for the average Joe systems programmer to 
 get over this "GC in systems programming" stygma, is getting the 
 likes of Apple, Googlel, Microsoft and such to force feed them 
 into programming languages.
Yes, but when these systems were invented, was the focus on a fast, lag-free multimedia experience or on safety? How do you get the memory for the GC heap, when you are just about to write the kernel that manages the system's physical memory? Do these systems manually manage memory in performance-sensitive parts or do they rely on GC as much as technically feasible? Could they use their languages as-is or did they create a fork for their OS? What was the expected memory space at the time of authoring the kernel? Does the language usually allow raw pointers, unions, interfacing with C etc., or is it more confined like Java?

I see how you can write an OS with GC already in the kernel or whatever. However, there are too many question marks to jump to conclusions about D.

o the larger the heap, the slower the collection cycles
  (how will it scale in the future with e.g. 1024 GiB RAM?)
o the less free RAM, the more often the collector is called
  (game consoles are always out of RAM)
o tracing GCs have memory overhead
  (memory that could have been used as disk cache, for example)
o some code needs to run in soft-real-time, like audio
  processing plugins; stop-the-world GC is bad here
o non-GC threads are still somewhat arcane and system
  specific
o if you accidentally store the only reference to a GC heap
  object in a non-GC thread, it might get collected
  (a hypothetical or existing language may have a better
  offering here)

"For programs that cannot afford garbage collection, Modula-3 provides a set of reference types that are not traced by the garbage collector."

Someone evaluating D may come across the question "What if I get into one of those 10% of use cases where tracing GC is not a good option?" It might be some application developer working for a company that sells video editing software, which has to deal with complex projects and object graphs, playing sounds and videos while running background tasks like generating preview versions of HD material or auto-saving and serializing the project to XML. Or someone writing an IDE auto-completion plugin that has graphs of thousands of small objects from the source files in import paths that are constantly modified while the user types in the code editor. Plus anyone who finds him-/herself in a memory-constrained and/or (soft-)realtime environment.

Sometimes it results in the idea that D's GC will some day be found acceptable for soft-real-time desktop applications. Others start contemplating whether it is worth writing their own D runtime to remove the stop-the-world GC entirely. Personally I mostly want to be sure Phobos is transparent about GC allocations and that not all threads stop for the GC cycle. That should make soft-real-time in D a lot saner :)

-- Marco
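One common mitigation today, sketched below under the assumption that the latency-sensitive loop itself performs no GC allocations (the callbacks are hypothetical): defer collections with core.memory.GC and run one explicitly where a pause is acceptable. This does not remove the stop-the-world pause Marco describes, it only moves it.

import core.memory : GC;

// Hypothetical callbacks, stand-ins for real work.
void processAudioBlock() { /* fill the next output buffer, no GC allocations */ }
bool keepRunning() { return false; }

void realTimeSection()
{
    GC.disable();               // suppress automatic collection cycles
    scope (exit) GC.enable();

    while (keepRunning())
        processAudioBlock();

    GC.collect();               // pay for the pause where a glitch is tolerable
}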
Aug 25 2014
parent Paulo Pinto <pjmlp progtools.org> writes:
On 25.08.2014 12:14, Marco Leise wrote:
 Am Sun, 24 Aug 2014 06:39:28 +0000
 schrieb "Paulo Pinto" <pjmlp progtools.org>:

 Examples of real, working desktop OS, that people really used at
 their work, done in system programming languages with GC.

 Mesa/Cedar
 https://archive.org/details/bitsavers_xeroxparcteCedarProgrammingEnvironmentAMidtermRepo_13518000

 Oberon and derivatives
 http://progtools.org/article.php?name=oberon&section=compilers&type=tutorial

 SPIN
 http://en.wikipedia.org/wiki/Modula-3
 http://en.wikipedia.org/wiki/SPIN_%28operating_system%29

 What is really needed for the average Joe systems programmer to
 get over this "GC in systems programming" stygma, is getting the
 likes of Apple, Googlel, Microsoft and such to force feed them
 into programming languages.
Yes, but when these systems were invented, was the focus on a fast lag free multimedia experience or on safety?
BlueBottle, an Oberon successor, has a video player. Codecs are written in assembly, everything else in Active Oberon. This is no different from using C, as the language does not support vector instructions.
 How do you
 get the memory for the GC heap, when you are just about to
 write the kernel that manages the systems physical memory?
Easy, the GC lives at the kernel.
 Do these systems manually manage memory in performance
 sensitive parts or do they rely on GC as much as technically
 feasible?
They are a bit more GC friendly than D. Only memory allocated via NEW is on the GC heap. Additionally, there is also the possibility to use manual memory management, but only inside UNSAFE modules.
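A rough D analogue of that split (an illustration in D, not Modula-3 itself): `new` puts an object on the traced GC heap, much like Modula-3's NEW, while the C allocator gives you untraced memory you must free yourself, much like what an UNSAFE module would permit.

import core.stdc.stdlib : malloc, free;

struct Node { int value; }

void demo()
{
    auto traced = new Node(1);                        // GC heap, like NEW

    auto untraced = cast(Node*) malloc(Node.sizeof);  // untraced, manual
    scope (exit) free(untraced);
    untraced.value = 2;
    // The GC does not scan this block, so it must not hold
    // the only reference to GC-managed data.
}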
 Could they use their languages as is or did they
 create a fork for their OS? What was the expected memory space
 at the time of authoring the kernel? Does the language usually
 allow raw pointers, unions, interfacing with C etc., or is it
 more confined like Java?
I am lost here. These languages have a symbiotic relationship with the OS, just like C has with UNIX. Thankfully no C in sight. SPIN does support C and has a POSIX interface implemented in Modula-3, though.
 I see how you can write an OS with GC already in the kernel or
 whatever. However there are too many question marks to jump to
 conclusions about D.

 o the larger the heap the slower the collection cycles
    (how will it scale in the future with e.g. 1024 GiB RAM?)
 o the less free RAM, the more often the collector is called
    (game consoles are always out of RAM)
 o tracing GCs have memory overhead
    (memory, that could have been used as disk cache for example)
 o some code needs to run in soft-real-time, like audio
    processing plugins; stop-the-world GC is bad here
 o non-GC threads are still somewhat arcane and system
    specific
 o if you accidentally store the only reference to a GC heap
    object in a non-GC thread it might get collected
    (a hypothetical or existing language may have a better
    offering here)

 "For programs that cannot afford garbage collection, Modula-3
 provides a set of reference types that are not traced by the
 garbage collector."
 Someone evaluating D may come across the question "What if I
 get into one of those 10% of use case where tracing GC is not
 a good option?" It might be some application developer working
 for a company that sells video editing software, that has to
 deal with complex projects and object graphs, playing sounds
 and videos while running some background tasks like generating
 preview versions of HD material or auto-saving and serializing
 the project to XML.
 Or someone writing an IDE auto-completion plugin that has
 graphs of thousands of small objects from the source files in
 import paths that are constantly modified while the user types
 in the code-editor.
 Plus anyone who finds him-/herself in a memory constrained
 and/or (soft-)realtime environment.

 Sometimes it results in the idea that D's GC will some day be
 found acceptable for soft-real-time desktop applications.
 Others start contemplating if it is worth writing their own D
 runtime to remove the stop-the-world GC entirely.
 Personally I mostly want to be sure Phobos is transparent about
 GC allocations and that not all threads stop for the GC cycle.
 That should make soft-real-time in D a lot saner :)
Those questions I can hardly answer due to lack of experience. I can only tell that we have replaced C++ applications with Java ones for telecommunications control and they perform fast enough - the same type of work Erlang is used for. Another thing is that many programmers don't know how to use profilers. In some cases, thanks to memory profilers, I was able to do better than the standard malloc that came with the compiler. But this will only change with a generational change. -- Paulo
Aug 25 2014
prev sibling next sibling parent reply "Mike" <none none.com> writes:
On Saturday, 23 August 2014 at 18:29:43 UTC, Walter Bright wrote:
 http://www.reddit.com/r/programming/comments/2ed9ah/some_notes_on_d_for_the_win/

 http://tomerfiliba.com/blog/dlang-part2/
Just spent some time reading about Nimrod. It looks like it has a lot of potential. One quote from the FAQ: "Nimrod primarily focuses on thread local (and garbage collected) heaps and asynchronous message passing between threads. Each thread has its own GC, so no "stop the world" mechanism is necessary. An unsafe shared memory heap is also provided. Future versions will additionally include a GC "per thread group" and Nimrod's type system will be enhanced to accurately model this shared memory heap." Cool! When I first started reading about D, I was quite excited (still am, but certainly frustrated with a few things). I never felt that level of excitement when I read about Go and Rust. I never even thought about them as competitors with D. But Nimrod has piqued my interest. It looks like much more of a competitor with D than Go and Rust. Mike
Aug 23 2014
parent reply "Kagamin" <spam here.lot> writes:
On Saturday, 23 August 2014 at 23:46:43 UTC, Mike wrote:
 One quote from the FAQ:
 "Nimrod primarily focuses on thread local (and garbage 
 collected) heaps and asynchronous message passing between 
 threads. Each thread has its own GC, so no "stop the world" 
 mechanism is necessary. An unsafe shared memory heap is also 
 provided.
Do you have an idea how Nimrod shares memory? Thread-local GCs mean the memory should be deep copied between threads, which is not always a good idea, e.g. when you receive a big complex data structure from the network in one thread and pass it to the main thread.
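For comparison, D's answer to the same concern is to let immutable data cross threads by reference rather than by deep copy. A minimal sketch using std.concurrency (the array is a stand-in for the "big complex data structure" above; names are illustrative):

import std.concurrency;
import std.exception : assumeUnique;

void producer()
{
    auto buf = new int[](1_000_000);        // built privately in this thread
    foreach (i, ref x; buf)
        x = cast(int) i;

    // No other mutable reference exists, so freezing it is sound; immutable
    // data may be sent to another thread without copying.
    immutable(int)[] data = assumeUnique(buf);
    ownerTid.send(data);
}

void main()
{
    spawn(&producer);
    auto data = receiveOnly!(immutable(int)[])();
    assert(data.length == 1_000_000);
}

The trade-off is that the data must be frozen (made immutable) before it crosses the thread boundary.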
Aug 24 2014
parent "Kagamin" <spam here.lot> writes:
Though it will be a big win for server software, which rarely 
shares memory.
Aug 24 2014
prev sibling parent "ixid" <nuaccount gmail.com> writes:
On Saturday, 23 August 2014 at 18:29:43 UTC, Walter Bright wrote:
 http://www.reddit.com/r/programming/comments/2ed9ah/some_notes_on_d_for_the_win/

 http://tomerfiliba.com/blog/dlang-part2/
I'm surprised not to have seen anyone point this out (perhaps I missed it), given all the talk of niches for D. D should be promoted as the start-up language. That's what it's trying to do by being highly productive while also offering serious performance, and some start-up people are more willing to take a risk to get an edge. It would also give D some serious cool. Perhaps that's how to market it?
Aug 25 2014