
digitalmars.D - Thoughts about D

reply IM <3di gm.com> writes:
Hi,
I'm a full-time C++ software engineer in Silicon Valley. I've 
been learning D and using it in a couple of personal side 
projects for a few months now.

First of all, I must start by saying that I like D and wish to 
use it every day. I'm even considering donating to the D 
Foundation. However, some of D's features and design decisions 
frustrate me a lot, and sometimes urge me to look for an 
alternative. I'm here not to criticize, but to channel my 
frustrations to whom it may concern. I want D to become better 
and more widely used. I'm sure many others share some of the 
following points:
- D is unnecessarily a huge language. I remember in DConf 2014, 
Scott Meyers gave a talk about the last thing D needs, which is a 
guy like him writing a lot of books covering the many subtleties 
of the language. However, it seems that the D community went 
ahead and created exactly this language!
- ‎D is very verbose. It requires a lot of typing. Look at how 
long 'immutable' is. Very often I find myself tagging my 
methods with something like 'final override nothrow @safe @nogc 
...' etc.
- ‎It's quite clear that D was influenced a lot by Java at some 
point, which led to borrowing (copying?) a lot of Java features 
that may not appeal to everyone.
- ‎The amount of trickery required to avoid the GC and do 
manual memory management is not pleasant and is counterproductive. 
I feel it defeats any productivity gains the language was 
supposed to offer.
- ‎The thread-local storage, shared, and __gshared business is 
annoying and doesn't seem to be well documented, especially 
since it is unnatural to think about (at least coming from 
other languages).
- ‎D claims to be a language for productivity, but it slows down 
anyone thinking about efficiency, performance, and careful design 
decisions (choosing structs vs. classes; structs don't support 
hierarchies, use alias this; structs don't allow default 
constructors {inconsistent - very annoying}; avoiding the GC; 
looking up a type to see if it's a struct or a class to decide 
how you may use it... etc., etc.).
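To make that struct-vs-class dance concrete, here is a minimal sketch of the 'alias this' workaround I mean (type names are made up):

```d
import std.stdio;

// Structs have no inheritance, so composition plus 'alias this'
// is used to forward members and get subtyping-like reuse.
struct Base
{
    int id;
}

struct Derived
{
    Base base;
    alias base this; // forwards Base's members to Derived
    string name;
}

void main()
{
    auto d = Derived(Base(7), "widget");
    writeln(d.id); // reached through alias this, prints 7
}
```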

I could add more, but I'm tired of typing. I hope that one day I 
will overcome my frustrations, and that D becomes a better 
language that enables me to do what I want easily without 
standing in my way.
Nov 26
next sibling parent reply Adam Wilson <flyboynw gmail.com> writes:
On 11/26/17 16:14, IM wrote:
 Hi,
 I'm a full-time C++ software engineer in Silicon Valley. I've been
 learning D and using it in a couple of personal side projects for a few
 months now.
[snip]
Well, D has its own idioms and patterns, so we fully expect some of the idioms that are easy in C++ to not be easy in D; indeed, that's kind of the point. If you look at those idioms that are hard in D, it's probably because said idiom allows you to do some fantastically unsafe (and insecure) thing in C++. Yes, I am sure that it gives some performance boost, but D is not trying to be the pinnacle of language performance.

D is trying to be a memory-safe language, quite intentionally at the expense of speed (see this DConf 2017 talk: https://www.youtube.com/watch?v=iDFhvCkCLb4&index=1&list=PL3jwVPmk_PRxo23yoc0Ip_cP3-rCm7eB). Bounds checking by default, GC, etc. are all memory-safety features that come explicitly at the cost of performance. We are not trying to be C++, and we are not trying to replace C++.

It sounds like C++ works better for you. We are OK with that. We always recommend using what works best for you. That is, after all, why WE are here. :)

-- 
Adam Wilson
IRC: LightBender
import quiet.dlang.dev;
Nov 26
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On Monday, 27 November 2017 at 01:03:29 UTC, Adam Wilson wrote:
 On 11/26/17 16:14, IM wrote:
 Hi,
 I'm a full-time C++ software engineer in Silicon Valley. I've 
 been
 learning D and using it in a couple of personal side projects 
 for a few
 months now.
[snip]
 I could add more, but I'm tired of typing. I hope that one day 
 I will
 overcome my frustrations as well as D becomes a better 
 language that
 enables me to do what I want easily without standing in my way.
Well, D has its own idioms and patterns, so we fully expect some of the idioms that are easy in C++ to not be easy in D; indeed, that's kind of the point. If you look at those idioms that are hard in D, it's probably because said idiom allows you to do some fantastically unsafe (and insecure) thing in C++. Yes, I am sure that it gives some performance boost, but D is not trying to be the pinnacle of language performance.
It sure does. Otherwise there are plenty of languages that are safe and somewhat fast. C# or Java fit the bill nicely. Also Go.
 D is trying to be a memory-safe language, quite intentionally
And thus @system is the default, right? I think memory safety came as an afterthought.
 at the expense of speed (see this DConf 2017 talk: 
 https://www.youtube.com/watch?v=iDFhvCkCLb4&index=1&list=PL3jwVPmk_PRxo23yoc0Ip_cP3-rCm7eB).
 Bounds checking by default, GC, etc. are all memory-safety
 features that come explicitly at the cost of performance.

 We are not trying to be C++
True, albeit it doesn't feel like that.
 and we are not trying to replace C++
Patently false.
 It sounds like C++ works better for you. We are OK with that. 
 We always recommend using what works best for you. That is 
 after all why WE are here. :)
That WE might have been just you :) D certainly tries to replace C++ in a number of ways. It's not a drop-in replacement and doesn't cover all of C++'s niche.
Nov 26
prev sibling next sibling parent reply codephantom <me noyb.com> writes:
On Monday, 27 November 2017 at 00:14:40 UTC, IM wrote:
 - D is unnecessarily a huge language. I remember in DConf 2014, 
 Scott Meyers gave a talk about the last thing D needs, which is 
 a guy like him writing a lot of books covering the many 
 subtleties of the language. However, it seems that the D 
 community went ahead and created exactly this language!
I hear this argument a lot, about this language or that. It has become an argument void of any real value, in my view. The reason is, programming needs have changed a lot, the problems being solved have changed a lot, there is great diversity in how people think about solving those problems, and there is a greater need to solve problems that are not solvable with current languages. So languages necessarily evolve according to the pressures put upon them by the environment in which they exist. As for using features from other languages, this is a normal process of evolution - convergent evolution. A language that is getting smaller is NOT a language that is evolving (such a language is useful in a particular domain only).

The question is really not how 'huge' a language should or should not be. The question is how we can grasp that hugeness. The universe is huge and extremely complex, but it can be understood (to an extent) according to some pretty basic principles. The human brain is equally 'huge' and 'complex'... but we are slowly coming to grips with (some of) its basic princplez (deliberate spelling mistake for my friends ;-).
 - ‎D is very verbose. It requires a lot of typing. Look at how 
 long 'immutable' is. Very often I find myself tagging my 
 methods with something like 'final override nothrow @safe @nogc 
 ...' etc.
Sorry... what? You program in C++ and you're saying D is very verbose ;-)

I do think, though, that function headers are becoming far too long... but this may be something that one just has to accept if one wants to have those choices. I.e. you can only really make them shorter by not allowing the programmer to have those choices. And if D is about anything, it's about programmer choices.
 - ‎D claims to be a language for productivity, but it slows 
 down anyone thinking about efficiency, performance, and careful 
 design decisions. (choosing structs vs classes, structs don't 
 support hierarchy, use alias this, structs don't allow default 
 constructors {inconsistent - very annoying}, avoiding the GC, 
 look up that type to see if it's a struct or a class to decide 
 how you may use it ... etc. etc.).
Some very smart people who design programming languages think ALL inheritance should be completely banned. I'd be very happy if that were the case. Personally, I find it very easy to be productive in D without classes. It's annoying language/library bugs that slow down my productivity.

But think of D as a work in progress... like galaxies crashing into each other... it doesn't happen very often... so enjoy it while you can.
Nov 26
parent Bastiaan Veelo <Bastiaan Veelo.net> writes:
On Monday, 27 November 2017 at 01:05:08 UTC, codephantom wrote:
 On Monday, 27 November 2017 at 00:14:40 UTC, IM wrote:
 - D is unnecessarily a huge language. [snip]
I hear this argument a lot, about this language or that. It has become an argument void of any real value, in my view. The reason is, programming needs have changed a lot, the problems being solved have changed a lot, there is great diversity in how people think about solving those problems, and there is a greater need to solve problems that are not solvable with current languages.
+1: I'd say D is sufficiently sized for the things I try to accomplish -- meaning I wouldn't want it to be smaller. I am not aware of any other language that is as much an enabler as D is. D allows magic to happen when you need magic, although it arguably takes time to learn to be a magician. The good thing is that, simultaneously, people can be productive (safely) writing ordinary code without the need to be an expert.
Nov 27
prev sibling next sibling parent Guillaume Piolat <first.last gmail.com> writes:
On Monday, 27 November 2017 at 00:14:40 UTC, IM wrote:
 Hi,
 I'm a full-time C++ software engineer in Silicon Valley. I've 
 been learning D and using it in a couple of personal side 
 projects for a few months now.
[snip]
Hi and welcome here,

I've been using D for 10 years (full-time for the last 3), and my frustrations almost mirror yours. Probably the C++ background? I don't think D's complexity negates productivity in the long run, but you get to avoid a lot of the language in daily operations. For me: pure, shared, synchronized, and TLS-by-default are _not_ pulling their weight (off the top of my head).

Being a language "that can do what C++ can do" comes with large requirements, both in capabilities and standard library. A language also seems more complex when we are more interested in it and eager to learn details (C++ may teach you to give up early on that point). If you think about it, D has only l-values and r-values, which is much simpler than where we were before. Actually, I think a "Scott Meyers of D" would have difficulty coming up with four books of guidelines.

The good news is that you _can_ learn D in excruciating detail; it's not something out of reach during a lifespan. The bad news is that the language accumulated complexity at one point, and absorbing it can only be a long neural process (though initial familiarity helps a lot). I don't think anyone would have described D1 as verbose!

(Now, defending no-default-constructor-for-structs: it's because S.init is supposed to be a valid struct - which implies your destructors should be reentrant. Buy the book for 50+ other tips.)
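To make that last parenthetical concrete, a minimal sketch of why S.init rules out default constructors (type name made up):

```d
struct S
{
    int x = 42;              // default value recorded in S.init at compile time
    this(int n) { x = n; }
    // this() {}             // error: structs cannot have default constructors
}

void main()
{
    S a;                     // no constructor runs; a is a bit-copy of S.init
    assert(a.x == 42);
    assert(S.init.x == 42);  // S.init must always be a valid instance
    auto b = S(7);
    assert(b.x == 7);
}
```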
Nov 26
prev sibling next sibling parent Michael V. Franklin <slavo5150 yahoo.com> writes:
On Monday, 27 November 2017 at 00:14:40 UTC, IM wrote:

 - D is unnecessarily a huge language. I remember in DConf 2014, 
 Scott Meyers gave a talk about the last thing D needs, which is 
 a guy like him writing a lot of books covering the many 
 subtleties of the language. However, it seems that the D 
 community went ahead and created exactly this language!
IMO, I don't think it's too bad. I'd rather have those features than not have them. One of the best features of D is its modeling power. Due to the rich feature set of D, you can model your code exactly how you think about it; you don't have to change the way you think about a problem to accommodate the limitations of your programming language.

Furthermore, compared to C++, the end result is MUCH better. I've written a small memory-mapped IO library in both C++ and D. It heavily leverages templates and compile-time features of both languages. The C++ version turned into a monstrosity that even I, the author, couldn't understand. The D version was quite beautiful and elegant, much less verbose, and even had a few features I couldn't figure out how to do in C++.
 - ‎D is very verbose. It requires a lot of typing. Look at how 
 long 'immutable' is. Very often I find myself tagging my 
 methods with something like 'final override nothrow @safe @nogc 
 ...' etc.
I agree, but I don't think it's as bad as C++ (see my comment above). But, unfortunately, D has chosen the wrong defaults, IMO:

(1) We should be opting out of @safe, not opting into it.
(2) D should be final-by-default (see https://wiki.dlang.org/Language_design_discussions#final-by-default for how that got shot down).
(3) Perhaps D should be nothrow by default, but I'm not sure exactly how that would work.
(4) As a systems programming language first and an applications programming language second, I argue that the GC should be something we opt into, not opt out of. With `scope` and DIP 1000 features, we may eventually get there.
(5) I think variables should be `immutable` by default like they are in Rust, but others disagree.

You get the idea. I think part of this is due to historical accidents. D is, unfortunately, carrying a lot of technical debt.
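For what it's worth, a couple of those defaults can be flipped by hand today; a small sketch (class name made up):

```d
// (2): opting into final-by-default manually with a label-style attribute
class Widget
{
final:                       // every method below is non-virtual
    int size() { return 4; }
    int grow() { return size() + 1; }
}

void main()
{
    // (5): immutability is opt-in via 'immutable'
    immutable limit = 10;
    static assert(!__traits(compiles, limit = 11)); // mutation rejected

    auto w = new Widget;
    assert(w.grow() == 5);
}
```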
 - ‎It's quite clear that D was influenced a lot by Java at some 
 point, which led to borrowing (copying?) a lot of Java features 
 that may not appeal to everyone.
Many seem to think of D as a better C++, but like you, I see more of an influence from Java too. I like the convenience of *some* of those Java-like features. So, I consider the influence of Java somewhat of a strength in D.
 - ‎The amount of trickeries required to avoid the GC and do 
 manual memory management are not pleasant and counter 
 productive. I feel they defeat any productivity gains the 
 language was supposed to offer.
I agree. See the documentation for `scope` and DIP 1000. I think the situation may get better if we can continue momentum on those features: https://github.com/dlang/DIPs/blob/master/DIPs/DIP1000.md

There's also been some recent work this month on RAII with -betterC, which may be of interest to you. But I still see -betterC as a copout, avoiding the difficult work of decoupling the compiler from the runtime.
 - ‎The thread local storage, shared, and __gshared business is 
 annoying and doesn't seem to be well documented, even though it 
 is unnatural to think about (at least coming from other 
 languages).
I think thread-local storage is a great feature of D. It's one of the defaults that D actually got right. I don't care for the double-underscore convention of __gshared that seems to be borrowed from C, but we're lucky to have it. It comes in handy sometimes.

It's hard to know what difficulty users encounter when reading the documentation. If you think it can be improved, please submit a pull request to https://github.com/dlang/dlang.org
 - ‎D claims to be a language for productivity, but it slows 
 down anyone thinking about efficiency, performance, and careful 
 design decisions. (choosing structs vs classes, structs don't 
 support hierarchy, use alias this, structs don't allow default 
 constructors {inconsistent - very annoying}, avoiding the GC, 
 look up that type to see if it's a struct or a class to decide 
 how you may use it ... etc. etc.).
I run into those design dilemmas any time I start learning a new programming language, even with highly productive languages like C#. It takes time for me to work through a few ideas and finally arrive at the right idioms that work. But once they get worked out, it's cooking with gas.
 I could add more, but I'm tired of typing. I hope that one day 
 I will overcome my frustrations as well as D becomes a better 
 language that enables me to do what I want easily without 
 standing in my way.
Thanks for sharing your thoughts. It's always interesting to hear what people think.

Mike
Nov 26
prev sibling next sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
On 27/11/2017 12:14 AM, IM wrote:

snip

 - ‎D is very verbose. It requires a lot of typing. Look at how long 
 'immutable' is. Very often I find myself tagging my methods with 
 something like 'final override nothrow @safe @nogc ...' etc.
There be solutions here!

    struct Foo
    {
        final nothrow @safe @nogc
        {
            void a() {}
            void b() {}
        }
    }
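A trailing colon also works, applying the attributes to every declaration that follows. A sketch using a class so that 'final' actually has an effect (names made up; assuming the intended @safe/@nogc spelling):

```d
class Foo
{
    // one attribute line with a trailing colon covers all later members
    final nothrow @safe @nogc:

    int a() { return 1; }
    int b() { return 2; }
}

void main() @safe
{
    auto f = new Foo;   // 'new' allocates on the GC heap, so main itself isn't @nogc
    assert(f.a() + f.b() == 3);
}
```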
Nov 26
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/26/2017 4:14 PM, IM wrote:
 I'm a full-time C++ software engineer in Silicon Valley. I've been learning D 
 and using it in a couple of personal side projects for a few months now.
Great! Glad you're enjoying it and took the time to post your thoughts.
 - D is unnecessarily a huge language. I remember in DConf 2014, Scott
 Meyers gave a talk about the last thing D needs, which is a guy like him
 writing a lot of books covering the many subtleties of the language.
 However, it seems that the D community went ahead and created exactly
 this language!
You'll find the same language in 2014 as today; it hasn't changed much. All languages (except C) in common use accrete features. D does have some baggage that has been removed, like `typedef`, but on the whole, whenever we try to remove something, someone has always built their store around it. The good news, however, is you can just use the subset of D that works for you.
 - ‎D is very verbose. It requires a lot of typing. Look at how long
 'immutable' is. Very often I find myself tagging my methods with
 something like 'final override nothrow @safe @nogc ...' etc.
The idea is if you just want to write code, you can eschew using them (except 'override'), and just write code. They're all used for optimization or to provide enforceable self-documentation. Other languages would require those to be documented in the comments, which is not enforceable and even more wordy :-) 'override' is as opposed to 'virtual' which C++ requires and D doesn't.
 - ‎It's quite clear that D was influenced a lot by Java at some point,
 which led to borrowing (copying?) a lot of Java features that may not
 appeal to everyone.
That's true. Java looked like it was going to take over the world when D was young. These days I'd get rid of inner classes in favor of lambdas if I could, but you can just ignore inner classes.
 - ‎The amount of trickeries required to avoid the GC and do manual memory 
 management are not pleasant and counter productive. I feel they defeat any 
 productivity gains the language was supposed to offer.
That's true. But it's hard to beat GC for just writing code and getting it to run safely and without pointer bugs.
 - ‎The thread local storage, shared, and __gshared business is annoying and 
 doesn't seem to be well documented, even though it is unnatural to think about 
 (at least coming from other languages).
The idea with TLS is to deal with endemic threading bugs other languages have. The default in C/C++ is for globals to be shared, which is completely impractical to examine a large code base for. __gshared is meant to stand out and be greppable, making code much more auditable.
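A small sketch of what that default means in practice (variable names made up):

```d
import core.thread;

int counter;               // module-level variable: thread-local by default
__gshared int global;      // explicitly one copy shared by all threads

void main()
{
    counter = 1;
    global  = 1;

    auto t = new Thread({
        assert(counter == 0); // the new thread gets a fresh TLS copy
        assert(global == 1);  // but sees the single shared instance
        counter = 99;
    });
    t.start();
    t.join();

    assert(counter == 1);     // main thread's copy was never touched
}
```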
 - ‎D claims to be a language for productivity, but it slows down anyone
 thinking about efficiency, performance, and careful design decisions.
 (choosing structs vs classes, structs don't support hierarchy, use alias
 this, structs don't allow default constructors {inconsistent - very
 annoying}, avoiding the GC, look up that type to see if it's a struct or
 a class to decide how you may use it ... etc. etc.).
D structs are value types, and classes are reference types. Everything flows from that.

C++ structs and classes are the same thing, and can be used as both reference and value types at the same time, whether that works or not. I rarely find C++ classes with documentation saying whether they are intended as a reference or value type, and the documentation won't prevent one from misusing it. I'm not the only one to suggest that making the value/ref design decision is a pretty crucial one to make before designing the code. :-)
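The distinction in a few lines (type names made up):

```d
struct P { int x; }                        // value type
class  C { int x; this(int n) { x = n; } } // reference type

void main()
{
    P p1 = P(1);
    P p2 = p1;         // assignment copies the value
    p2.x = 5;
    assert(p1.x == 1); // p1 is unaffected

    C c1 = new C(1);
    C c2 = c1;         // assignment copies the reference: same object
    c2.x = 5;
    assert(c1.x == 5); // c1 sees the change
}
```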
 I could add more, but I'm tired of typing. I hope that one day I will overcome 
 my frustrations as well as D becomes a better language that enables me to do 
 what I want easily without standing in my way.
Many people have difficulty with D when coming from, say, C++, because it does require a different way of thinking about code. This passes once one gains experience and comfort with D. After all, my early Fortran code looked just like BASIC, my C code looked like Fortran, my C++ code looked like "C with a few classes", and my D code looked a bit too much like C++ :-)

I have recently finished converting the Digital Mars C++ compiler front end from "C with classes" to D. Even though it is a rote line-by-line translation, it simply looks better in D (much less of a snarl). Over time, as I refactor bits of it, it'll steadily look better.

I find it significantly easier to write good looking code in D, and it is less verbose than C++. For some trivial examples:

C++: unsigned long long
D:   ulong

C++: template<typename T> struct S { ... };
D:   struct S(T) { ... }

C++: for (int i = 0; i < 10; ++i)
D:   foreach (i; 0 .. 10)

C++: decltype
D:   auto
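For concreteness, those comparisons gathered into one small self-contained program (a sketch):

```d
struct S(T)                // C++: template<typename T> struct S { ... };
{
    T value;
}

void main()
{
    ulong big = 18_446_744_073_709_551_615UL; // C++: unsigned long long
    auto s = S!int(42);                       // C++: decltype/auto deduction

    ulong sum = 0;
    foreach (i; 0 .. 10)                      // C++: for (int i = 0; i < 10; ++i)
        sum += i;

    assert(sum == 45);
    assert(s.value == 42);
    assert(big == ulong.max);
}
```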
Nov 26
next sibling parent reply IM <3di gm.com> writes:
On Monday, 27 November 2017 at 02:56:34 UTC, Walter Bright wrote:
 On 11/26/2017 4:14 PM, IM wrote:
[snip]
Thank you Walter, and all, for your replies.

I want to re-state that I really like D, *despite* feeling frustrated with it sometimes. I only mentioned in my post some of the things I don't like about D; I didn't mention the many things I _DO_ like about it, but that's not the point now. I intend to stick with D for a while. I bought all the available books about D I could find and am reading through them. I watched many of the DConf videos, and I read the (weekly?) blog posts. I also intend to write a doc introducing D and its benefits to my co-workers and colleagues, suggesting that we could use it in some areas where it makes sense, but my D knowledge isn't quite there yet. Also, I won't be able to write it until I manage to overcome my points of frustration with the language.

My plan to overcome my frustration is:
- Continue learning and experimenting with D.
- Stop thinking about D as C++ with slightly better syntax, and make a paradigm shift.

What I hope to see in D in the future:
- Sane, correct defaults (as someone mentioned above, @safe by default, for instance?).
- More exposure. I sometimes feel like there isn't enough D material to consume on a regular basis (and I, and certainly many others, are eager to learn more and more about the language). I.e., one blog post (weekly?) and a single DConf annually is not enough. In the C++ world, there's always something to read (various blog posts) or something to watch (CppCon, C++Now, Meeting C++, code::dive, Pacific++, ...etc.)

Thank you all for the hard work that I really appreciate!
Nov 27
parent reply IM <3di gm.com> writes:
On Monday, 27 November 2017 at 08:33:42 UTC, IM wrote:
 - More exposure. I sometimes feel like there isn't enough D 
 material to consume on a regular basis (and I and certainly 
 many others are eager to learn more and more about the 
 language). i.e. one blog post (weekly?), and a single DConf 
 annually is not enough. In the C++ world, there's always 
 something to read (various blog posts) or something to watch 
 (CppCon, C++Now, Meeting C++, code::dive, Pacific++, ...etc.)
What are the plans to increase exposure?
Nov 27
parent reply Mike Parker <aldacron gmail.com> writes:
On Tuesday, 28 November 2017 at 04:35:04 UTC, IM wrote:
 On Monday, 27 November 2017 at 08:33:42 UTC, IM wrote:
 - More exposure. I sometimes feel like there isn't enough D 
 material to consume on a regular basis (and I and certainly 
 many others are eager to learn more and more about the 
 language). i.e. one blog post (weekly?), and a single DConf 
 annually is not enough. In the C++ world, there's always 
 something to read (various blog posts) or something to watch 
 (CppCon, C++Now, Meeting C++, code::dive, Pacific++, ...etc.)
What are the plans to increase exposure?
This is something that has gone in fits and starts over the years because of a lack of dedicated manpower, but the pace has been gradually picking up. Recently, I've been working on several tasks in this direction, big and small, with the support of the D Foundation. For example, we now have a D Language Foundation channel on YouTube [1] where I'm currently in the process of collecting DConf videos that are scattered around different sites and accounts (working on 2014 first, since several of the video links on that edition of dconf.org were broken). It's not ready for announcement yet, but I hope to be there by the end of the year. There are a number of other things I'm looking at that have tended to slip through the cracks because they've been overlooked or no one has stepped in to do them.

On the blog, I would love it if I could keep up a steady pace of once a week (I optimistically suggested twice-weekly postings when I first pitched it!), but I have neither the time nor the depth, for the sort of content we need, to maintain that pace myself. I'm always open to proposals for new material -- guest post ideas, project highlight suggestions, anything I can evaluate for suitability.

That's only a part of the story, though. There are D blogs out there other than the official one, but they're quiet for long periods of time. I want to see people writing about their projects, posting daily/weekly/monthly progress reports, live-streaming code sessions, writing articles for other web sites (like gamedev.net), initiating conversations on reddit (particularly on /r/d_language [2], the recent updating of which was another of the little tasks that needed doing), sharing D examples in other programming forums, filling in the holes in our Wiki and Wikipedia... the same stuff C++ users do at scale.

[1] https://www.youtube.com/channel/UC5DNdmeE-_lS6VhCVydkVvQ
[2] https://www.reddit.com/r/d_language/
Nov 28
parent reply Joakim <dlang joakim.fea.st> writes:
On Tuesday, 28 November 2017 at 08:33:20 UTC, Mike Parker wrote:
 On Tuesday, 28 November 2017 at 04:35:04 UTC, IM wrote:
 [...]
This is something that has gone in fits and starts over the years because of a lack of dedicated manpower, but the pace has been gradually picking up. As of recently, I'm working on several tasks in this direction, big and small, with the support of the D Foundation. For example, we now have a D Language Foundation channel on youtube [1] where I'm currently in the process of collecting DConf videos that are scattered around different sites and accounts (working on 2014 first, since several of the video links on that edition of dconf.org were broken). It's not ready for announcement yet, but I hope to be there by the end of the year. There are a number of other things I'm looking at that have tended to slip through the cracks because they've been overlooked or no one has stepped in to do them. [...]
Since Mike started the official D blog last summer, downloads of the reference compiler are up 90%: http://erdani.com/d/downloads.daily.png I don't think that's a coincidence and attribute a significant chunk of that to his efforts and those who wrote posts, which is why I suggested starting an official blog years ago.
Nov 28
parent reply John Gabriele <jgabriele fastmail.fm> writes:
On Tuesday, 28 November 2017 at 08:58:46 UTC, Joakim wrote:
 Since Mike started the official D blog last summer, downloads 
 of the reference compiler are up 90%:

 http://erdani.com/d/downloads.daily.png

 I don't think that's a coincidence and attribute a significant 
 chunk of that to his efforts and those who wrote posts, which 
 is why I suggested starting an official blog years ago.
The big recent spike appears to coincide with DMD being re-licensed as fully open source, as well as the GDC inclusion into GCC. Years ago I was interested in D but considered the licensing to be a show-stopper. I've recently come back to learn it proper and try it for some small projects precisely because of the licensing change.
Nov 28
parent codephantom <me noyb.com> writes:
On Tuesday, 28 November 2017 at 23:07:32 UTC, John Gabriele wrote:
 The big recent spike appears to coincide with DMD being 
 re-licensed as fully open source, as well as the GDC inclusion 
 into GCC.

 Years ago I was interested in D but considered the licensing to 
 be a show-stopper. I've recently come back to learn it proper 
 and try it for some small projects precisely because of the 
 licensing change.
I didn't download it because someone was blogging about it ;-) I only downloaded it because I discovered ldc2 in FreeBSD ports, and it mentioned a new language called D, which I had never heard of. After a bit of googling, I discovered the reference compiler was fully released as open source, under the Boost licence. (Had it been GPL'd, my interest would likely have stopped there; and had it only been the frontend, and not the backend, my interest would have stopped there too.) Knowing that LLVM was on board was a really important factor for me too.
Nov 28
prev sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 11/26/17 9:56 PM, Walter Bright wrote:
 I have recently finished converting the Digital Mars C++ compiler front 
 end from "C with classes" to D.
I did a double-take on this. Are we to see an article about it soon? Very exciting! -Steve
Nov 27
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/27/2017 7:16 AM, Steven Schveighoffer wrote:
 On 11/26/17 9:56 PM, Walter Bright wrote:
 I have recently finished converting the Digital Mars C++ compiler front end 
 from "C with classes" to D.
I did a double-take on this. Are we to see an article about it soon?
I suppose I should write one :-) It was a very satisfying project. I'm looking at converting all my C projects still in use (like 'make') to D. BetterC has removed the last barriers to it.
 
 Very exciting!
 
 -Steve
Nov 27
next sibling parent reply John <j t.com> writes:
On Monday, 27 November 2017 at 23:13:00 UTC, Walter Bright wrote:
 On 11/27/2017 7:16 AM, Steven Schveighoffer wrote:
 On 11/26/17 9:56 PM, Walter Bright wrote:
 I have recently finished converting the Digital Mars C++ 
 compiler front end from "C with classes" to D.
I did a double-take on this. Are we to see an article about it soon?
I suppose I should write one :-) It was a very satisfying project. I'm looking at converting all my C projects still in use (like 'make') to D. BetterC has removed the last barriers to it.
Should add optlink to that list, would love to see it converted to D!
Nov 27
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/27/2017 6:55 PM, John wrote:
 Should add optlink to that list, would love to see it converted to D!
So would I, but there's no chance of that (unless someone else wants to pick up that flag). Years ago, I attempted to convert it to C. It was possible, but an agonizingly slow process. The worst problem was the complete lack of a test suite for optlink, so there was no reasonable way to know if I broke it or not. Next, Win32 is facing obsolescence. 15 years ago, it would have been worth the investment. Today, not likely.
Nov 27
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 28/11/2017 5:03 AM, Walter Bright wrote:
 On 11/27/2017 6:55 PM, John wrote:
 Should add optlink to that list, would love to see it converted to D!
So would I, but there's no chance of that (unless someone else wants to pick up that flag). Years ago, I attempted to convert it to C. It was possible, but an agonizingly slow process. The worst problem was the complete lack of a test suite for optlink, so there was no reasonable way to know if I broke it or not.
We have discussed this on Discord a little bit lately. What we are hoping for is a D dmc+libc updated to use dmd-be, potentially allowing us to use LLVM's linker but with dmc's libc as well, giving us an out-of-the-box experience for 64-bit. It would be nice, but, well, your site would need a lot of changes to go in this direction.
Nov 27
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/27/2017 9:11 PM, rikki cattermole wrote:
 On 28/11/2017 5:03 AM, Walter Bright wrote:
 On 11/27/2017 6:55 PM, John wrote:
 Should add optlink to that list, would love to see it converted to D!
So would I, but there's no chance of that (unless someone else wants to pick up that flag). Years ago, I attempted to convert it to C. It was possible, but an agonizingly slow process. The worst problem was the complete lack of a test suite for optlink, so there was no reasonable way to know if I broke it or not.
We have discussed this on Discord a little bit lately. What we are hoping for is a D dmc+libc updated to use dmd-be. Potentially allowing us to use LLVM's linker but with dmc's libc as well. Giving us out of the box experience for 64bit. It would be nice, but well, your site would need a lot of changes to go in this direction.
Yes, I've thought about making dmc++ 64 bit, but there'd be a fair amount of work (mostly upgrading SNN to 64 bits.)
Nov 27
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On Tuesday, 28 November 2017 at 06:12:19 UTC, Walter Bright wrote:
 On 11/27/2017 9:11 PM, rikki cattermole wrote:
 On 28/11/2017 5:03 AM, Walter Bright wrote:
 On 11/27/2017 6:55 PM, John wrote:
 Should add optlink to that list, would love to see it 
 converted to D!
We have discussed this on Discord a little bit lately. What we are hoping for is a D dmc+libc updated to use dmd-be. Potentially allowing us to use LLVM's linker but with dmc's libc as well. Giving us out of the box experience for 64bit. It would be nice, but well, your site would need a lot of changes to go in this direction.
Yes, I've thought about making dmc++ 64 bit, but there'd be a fair amount of work (mostly upgrading SNN to 64 bits.)
We could also convert that libc to D ;) Seriously, betterC mode would make that way easier and more fun. Is it on GitHub? Actually, Herb Sutter shared once that Microsoft used C++ (as in templates C++) to reimplement a significant chunk of its libc with great success. Less code, less ifdef hell, and less macro abuse were, I think, presented as advantages.
Nov 27
next sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Tuesday, 28 November 2017 at 06:24:38 UTC, Dmitry Olshansky 
wrote:
 On Tuesday, 28 November 2017 at 06:12:19 UTC, Walter Bright 
 wrote:
 On 11/27/2017 9:11 PM, rikki cattermole wrote:
 On 28/11/2017 5:03 AM, Walter Bright wrote:
 On 11/27/2017 6:55 PM, John wrote:
 Should add optlink to that list, would love to see it 
 converted to D!
We have discussed this on Discord a little bit lately. What we are hoping for is a D dmc+libc updated to use dmd-be. Potentially allowing us to use LLVM's linker but with dmc's libc as well. Giving us out of the box experience for 64bit. It would be nice, but well, your site would need a lot of changes to go in this direction.
Yes, I've thought about making dmc++ 64 bit, but there'd be a fair amount of work (mostly upgrading SNN to 64 bits.)
We could also convert that libc to D ;) Seriously betterC mode would make that way easier and more fun, is it on GitHub? Actually Herb Sutter shared once that Microsoft used C++ (as in templates C++) to reimplement a significant chunk of its libc with great success. Less code, less ifdef hell and macro abuse I think were presented as advantages.
Yes, the new MSVCRT.dll, is implemented in C++. https://blogs.msdn.microsoft.com/vcblog/2014/06/10/the-great-c-runtime-crt-refactoring/ After Midori and Longhorn's failure, there has been a migration effort to slowly get rid of C and focus on C++ for lower level stuff and .NET Native for everything else, at least on what concerns kernel, desktop and UWP.
Nov 27
parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/27/2017 11:26 PM, Paulo Pinto wrote:
 Yes, the new MSVCRT.dll, is implemented in C++.
 
 https://blogs.msdn.microsoft.com/vcblog/2014/06/10/the-great-c-runtime-crt-refactoring/

 After Midori and Longhorn's failure, there has been a migration effort to slowly get rid of C and focus on C++ for lower level stuff and .NET Native for everything else, at least on what concerns kernel, desktop and UWP.
In my experience, using BetterC for this task should produce much better results than C++!
Nov 28
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/27/2017 10:24 PM, Dmitry Olshansky wrote:
 On Tuesday, 28 November 2017 at 06:12:19 UTC, Walter Bright wrote:
 On 11/27/2017 9:11 PM, rikki cattermole wrote:
 On 28/11/2017 5:03 AM, Walter Bright wrote:
 On 11/27/2017 6:55 PM, John wrote:
 Should add optlink to that list, would love to see it converted to D!
We have discussed this on Discord a little bit lately. What we are hoping for is a D dmc+libc updated to use dmd-be. Potentially allowing us to use LLVM's linker but with dmc's libc as well. Giving us out of the box experience for 64bit. It would be nice, but well, your site would need a lot of changes to go in this direction.
Yes, I've thought about making dmc++ 64 bit, but there'd be a fair amount of work (mostly upgrading SNN to 64 bits.)
We could also convert that libc to D ;) Seriously betterC mode would make that way easier and more fun, is it on GitHub?
Yes, and I should finish boost licensing it! It's written in old-fashioned C code, and a fair bit of assembler. Every line of it would have to be reviewed for 64 bit portability, and there's no test suite :-( The good news is it has been pretty darned reliable. There's also the STL library, which is pretty complex.
 Actually Herb Sutter shared once that Microsoft used C++ (as in templates C++) 
 to reimplement a significant chunk of its libc with great success. Less code, 
 less ifdef hell and macro abuse I think were presented as advantages.
Yes, I came late to the game of not using ifdef hell. I'm pretty proud of the near complete absence of version() statements in the dmd front end. It didn't start out that way!
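The technique Walter alludes to can be sketched briefly: instead of scattering version() blocks through the logic, confine the platform difference to one small function with a uniform interface (the function name here is hypothetical, not from dmd's actual source):

```d
// Sketch: keep version() out of the main logic by hiding the platform
// difference behind one function; callers never branch on platform.
import std.stdio;

string pathSeparator()
{
    // The only version() block lives here.
    version (Windows) return "\\";
    else              return "/";
}

void main()
{
    // Application logic stays free of version() statements.
    writeln("separator: ", pathSeparator());
}
```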
Nov 28
prev sibling parent reply Guillaume Piolat <first.last gmail.com> writes:
On Tuesday, 28 November 2017 at 06:12:19 UTC, Walter Bright wrote:
 Yes, I've thought about making dmc++ 64 bit, but there'd be a 
 fair amount of work (mostly upgrading SNN to 64 bits.)
Could I help with that? I'm familiar with x86 assembly, including "mixed" assembly that uses the same source for 32-bit and 64-bit. I'd say porting 32-bit x86 assembly to 64-bit is way faster and more fool-proof than removing that assembly.
Nov 28
parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/28/2017 2:57 AM, Guillaume Piolat wrote:
 On Tuesday, 28 November 2017 at 06:12:19 UTC, Walter Bright wrote:
 Yes, I've thought about making dmc++ 64 bit, but there'd be a fair amount of 
 work (mostly upgrading SNN to 64 bits.)
Could I help with that? I'm familiar with x86 assembly, including "mixed" one that use the same source for 32-bit and 64-bit. I'd say porting 32-bit assembly to 64-bit assembly in x86 is way faster/fool-proof than removing that assembly.
Yes, you can. I appreciate the offer! I'll get back to you.
Nov 28
prev sibling parent Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Tuesday, 28 November 2017 at 05:11:25 UTC, rikki cattermole 
wrote:
 On 28/11/2017 5:03 AM, Walter Bright wrote:
 On 11/27/2017 6:55 PM, John wrote:
 Should add optlink to that list, would love to see it 
 converted to D!
So would I, but there's no chance of that (unless someone else wants to pick up that flag). Years ago, I attempted to convert it to C. It was possible, but an agonizingly slow process. The worst problem was the complete lack of a test suite for optlink, so there was no reasonable way to know if I broke it or not.
We have discussed this on Discord a little bit lately. What we are hoping for is a D dmc+libc updated to use dmd-be. Potentially allowing us to use LLVM's linker but with dmc's libc as well. Giving us out of the box experience for 64bit. It would be nice, but well, your site would need a lot of changes to go in this direction.
That would be great! So much easier to get nontechnical users (biologists) to an easy out of the box experience that can scale to their needs.
Nov 27
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2017-11-28 00:13, Walter Bright wrote:

 I suppose I should write one :-) It was a very satisfying project. I'm 
 looking at converting all my C projects still in use (like 'make') to D. 
 BetterC has removed the last barriers to it.
Why would druntime be a barrier for you for those projects? -- /Jacob Carlborg
Nov 28
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/28/2017 9:27 AM, Jacob Carlborg wrote:
 Why would druntime be a barrier for you for those projects?
When the C version is 90K and the translated D version is 1200K, it is a barrier. It's a barrier for others, as well.

Another barrier for me has turned out to be the way assert() works in D. It just is not lightweight, and it visibly slows down dmd to have asserts turned on internally. The amount of machinery involved with it in druntime is way overblown. Hence, asserts in dmd are turned off, and that wound up causing me a lot of problems recently. There are even initiatives to add writefln-like formatting to asserts. With betterC, asserts became lightweight and simple again.

Andrei's been directing some work on using templates more in druntime to reduce this, such as Lucia's work. Martin has done some work with array ops, too.

Exception handling support has been a bloat problem, too. DMC++ is built with all exceptions turned off. I've been writing PRs for dmd to greatly improve things so that it can generate similar code for RAII. (Exceptions require druntime.)

BetterC is a door-opener for an awful lot of areas D has been excluded from, and requiring druntime is a barrier for that.
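For readers unfamiliar with the mode being discussed, a minimal -betterC program looks roughly like this (a sketch; built with something like `dmd -betterC app.d`, it links against only the C runtime, with no druntime or GC):

```d
// Sketch of a -betterC program: C runtime only, no druntime.
import core.stdc.stdio : printf;

extern (C) int main()
{
    int x = 6 * 7;
    // In -betterC mode, a failing assert uses the C runtime's
    // lightweight assert machinery rather than druntime's.
    assert(x == 42);
    printf("x = %d\n", x);
    return 0;
}
```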
Nov 28
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Nov 28, 2017 at 06:18:20PM -0800, Walter Bright via Digitalmars-d wrote:
[...]
 When the C version is 90K and the translated D version is 1200K, it is
 a barrier. It's a barrier for others, as well.
 
 Another barrier for me has turned out to be the way assert() works in
 D. It just is not lightweight, and it visibly slows down dmd to have
 assert's turned on internally. The amount of machinery involved with
 it in druntime is way overblown. Hence, asserts in dmd are turned off,
 and that wound up causing me a lot of problems recently. There are
 even initiatives to add writefln like formatting to asserts. With
 betterC, asserts became lightweight and simple again.
 
 Andrei's been directing some work on using templates more in druntime
 to reduce this, such as Lucia's work. Martin has done some work with
 array ops, too.
 
 Exception handling support has been a bloat problem, too. DMC++ is
 built with all exceptions turned off. I've been writing PRs for dmd to
 greatly improve things so that it can generate similar code for RAII.
 (Exceptions require druntime.)
 
 BetterC is a door-opener for an awful lot of areas D has been excluded
 from, and requiring druntime is a barrier for that.
Doesn't this mean that we should rather focus our efforts on improving druntime instead of throwing out the baby with the bathwater with BetterC? For example, the way assert() works, if indeed it's overblown, then shouldn't we rather fix/improve it? While generally I would still use fullblown D rather than BetterC for my projects, the bloat from druntime/phobos does still bother me at the back of my mind. IIRC, the Phobos docs used to state that the philosophy for Phobos is pay-as-you-go. As in, if you don't use feature X, the code and associated data that implements feature X shouldn't even appear in the executable. It seems that we have fallen away from that for a while now. Perhaps it's time to move D back in that direction. T -- I've been around long enough to have seen an endless parade of magic new techniques du jour, most of which purport to remove the necessity of thought about your programming problem. In the end they wind up contributing one or two pieces to the collective wisdom, and fade away in the rearview mirror. -- Walter Bright
Nov 29
next sibling parent bpr <brogoff gmail.com> writes:
On Wednesday, 29 November 2017 at 16:57:36 UTC, H. S. Teoh wrote:
 On Tue, Nov 28, 2017 at 06:18:20PM -0800, Walter Bright via 
 Digitalmars-d wrote: [...]
 BetterC is a door-opener for an awful lot of areas D has been 
 excluded from, and requiring druntime is a barrier for that.
Doesn't this mean that we should rather focus our efforts on improving druntime instead of throwing out the baby with the bathwater with BetterC?
Isn't it possible to do both? For example, make D's GC a precise one (thus improving the runtime) and making the experience of using D sans GC and runtime a simple one? In answer to your question, if D is excluded from a lot of areas on account of requiring druntime, then it may be that no version of what you expect from druntime (I'll use GC as an obvious example) will remove that barrier.
Nov 29
prev sibling next sibling parent reply Jon Degenhardt <jond noreply.com> writes:
On Wednesday, 29 November 2017 at 16:57:36 UTC, H. S. Teoh wrote:
 While generally I would still use fullblown D rather than 
 BetterC for my projects, the bloat from druntime/phobos does 
 still bother me at the back of my mind.  IIRC, the Phobos docs 
 used to state that the philosophy for Phobos is pay-as-you-go. 
 As in, if you don't use feature X, the code and associated data 
 that implements feature X shouldn't even appear in the 
 executable. It seems that we have fallen away from that for a 
 while now.  Perhaps it's time to move D back in that direction.
If there are specific apps where druntime and/or phobos bloat is thought to be too high, it might be worth trying the new LDC support for building a binary with druntime and phobos compiled with LTO (Link Time Optimization). I saw reduced binary sizes on my apps; it'd be interesting to hear other experiences.
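A sketch of what trying this might look like (LDC-specific; the exact flag and LTO library names are an assumption and may vary by LDC release, so check the release notes for your version):

```shell
# Ordinary optimized build for comparison.
ldc2 -O2 -release app.d -of=app-plain

# LTO build, linking against the LTO-compiled druntime/phobos
# shipped with recent LDC releases.
ldc2 -O2 -release -flto=full \
     -defaultlib=phobos2-ldc-lto,druntime-ldc-lto \
     app.d -of=app-lto

# Compare binary sizes.
ls -l app-plain app-lto
```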
Nov 29
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/29/2017 3:36 PM, Jon Degenhardt wrote:
 On Wednesday, 29 November 2017 at 16:57:36 UTC, H. S. Teoh wrote:
 While generally I would still use fullblown D rather than BetterC for my 
 projects, the bloat from druntime/phobos does still bother me at the back of 
 my mind.  IIRC, the Phobos docs used to state that the philosophy for Phobos 
 is pay-as-you-go. As in, if you don't use feature X, the code and associated 
 data that implements feature X shouldn't even appear in the executable. It 
 seems that we have fallen away from that for a while now.  Perhaps it's time 
 to move D back in that direction.
If there specific apps where druntime and/or phobos bloat is thought to be too high, it might be worth trying the new LDC support for building a binary with druntime and phobos compiled with LTO (Link Time Optimization). I saw reduced binary sizes on my apps, it'd be interesting to hear other experiences.
Ideally, the druntime library should come with the operating system so every user has a copy of it. Practically, I can't see that happening for the foreseeable future. It doesn't even happen for Windows with Microsoft's own compiler.
Nov 29
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 11/29/17 9:16 PM, Walter Bright wrote:
 On 11/29/2017 3:36 PM, Jon Degenhardt wrote:
 On Wednesday, 29 November 2017 at 16:57:36 UTC, H. S. Teoh wrote:
 While generally I would still use fullblown D rather than BetterC for 
 my projects, the bloat from druntime/phobos does still bother me at 
 the back of my mind.  IIRC, the Phobos docs used to state that the 
 philosophy for Phobos is pay-as-you-go. As in, if you don't use 
 feature X, the code and associated data that implements feature X 
 shouldn't even appear in the executable. It seems that we have fallen 
 away from that for a while now.  Perhaps it's time to move D back in 
 that direction.
If there specific apps where druntime and/or phobos bloat is thought to be too high, it might be worth trying the new LDC support for building a binary with druntime and phobos compiled with LTO (Link Time Optimization). I saw reduced binary sizes on my apps, it'd be interesting to hear other experiences.
Ideally, the druntime library should be come with the operating system so every user has a copy of it. Practically, I can't see that happening for the foreseeable future. It doesn't even happen for Windows with Microsoft's own compiler.
But even though it doesn't come with Windows, it can be installed once, and shared between all applications that use it. The issue with druntime isn't that it's not installed, it's the static linking. A second problem is that due to the way D works most of the time (with templates just about everywhere), each new release is likely to be binary-incompatible. So you will essentially need many copies of druntime, probably one per release that was used to compile any D programs on your system. But this is much less of an issue, especially if there are many programs that build using the same release. -Steve
Nov 29
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/29/2017 6:54 PM, Steven Schveighoffer wrote:
 But even though it doesn't come with Windows, it can be installed once, and 
 shared between all applications that use it.
That's the theory. Unfortunately, that relies on the library not changing. Microsoft changes it all the time, hence "dll hell".
 The issue with druntime isn't that it's not installed, it's the static linking.
If the user still has to download the dll, he hasn't gained anything by dynamic linking.
 A second problem is that due to the way D works most of the time (with templates just about everywhere), each new release is likely to be binary-incompatible. So you will essentially need many copies of druntime, probably one per release that was used to compile any D programs on your system.
At this point, relying on druntime not changing is just not realistic. libc is different, having been cast in stone for nearly 30 years now.
 But this is much less of an issue, especially if there are many programs that build using the same release.
It didn't work for Microsoft shipping a different, incompatible C runtime DLL with each compiler.
Nov 29
next sibling parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 11/29/17 10:23 PM, Walter Bright wrote:
 On 11/29/2017 6:54 PM, Steven Schveighoffer wrote:
 But even though it doesn't come with Windows, it can be installed 
 once, and shared between all applications that use it.
That's the theory. Unfortunately, that relies on the library not changing. Microsoft changes it all the time, hence "dll hell".
My understanding is that if you use the Microsoft supplied MSI package as a dependency, there is only one installation of the libraries necessary. Granted, the last time I built MSI packages was about 10 years ago... But I don't remember having issues with DLL hell with that, even across compiler releases. Yes, they changed the library on some revisions, but your MSVCRT dll would also be installed in the right places (and the right version loaded at runtime). But we don't have to look at Microsoft as the gurus of library deployment, there are many good solutions already out there for other OSes.
 
 A second problem is that due to the way D works most of the time (with 
 templates just about everywhere), each new release is likely to be 
 binary-incompatible. So you will essentially need many copies of 
 druntime, probably one per release that was used to compile any D 
 programs on your system.
At this point, relying on druntime not changing is just not realistic. libc is different, having been cast in stone for nearly 30 years now.
Baby steps, let's deploy it as a shared object first :) Until we do that, there is no point to worry about binary compatibility. -Steve
Nov 29
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2017-11-30 04:23, Walter Bright wrote:

 At this point, relying on druntime not changing is just not realistic. 
 libc is different, having been cast in stone for nearly 30 years now.
There are still problems with libc on Linux. One cannot assume a binary compiled on one distribution works on another. So currently I think it's great that all D code is statically linked. -- /Jacob Carlborg
Nov 30
prev sibling next sibling parent reply Michael V. Franklin <slavo5150 yahoo.com> writes:
On Wednesday, 29 November 2017 at 16:57:36 UTC, H. S. Teoh wrote:

 Doesn't this mean that we should rather focus our efforts on 
 improving druntime instead of throwing out the baby with the 
 bathwater with BetterC?
Exactly! We should be making a better D, not a better C. Mike
Nov 29
parent reply codephantom <me noyb.com> writes:
On Thursday, 30 November 2017 at 00:05:10 UTC, Michael V. 
Franklin wrote:
 On Wednesday, 29 November 2017 at 16:57:36 UTC, H. S. Teoh 
 wrote:

 Doesn't this mean that we should rather focus our efforts on 
 improving druntime instead of throwing out the baby with the 
 bathwater with BetterC?
Exactly! We should be making a better D, not a better C. Mike
There is no better C, than C, full stop. -betterC should become .. -slimD But we really do need a focus on both -slimD and -bloatyD For D to be successful, it needs to be a flexible language that enables programmer choice. We don't all have the same problems to solve. C is not successful because of how much it constrains you.
Nov 29
parent A Guy With a Question <aguywithanquestion gmail.com> writes:
On Thursday, 30 November 2017 at 00:23:10 UTC, codephantom wrote:
 On Thursday, 30 November 2017 at 00:05:10 UTC, Michael V. 
 Franklin wrote:
 On Wednesday, 29 November 2017 at 16:57:36 UTC, H. S. Teoh 
 wrote:

 Doesn't this mean that we should rather focus our efforts on 
 improving druntime instead of throwing out the baby with the 
 bathwater with BetterC?
Exactly! We should be making a better D, not a better C. Mike
There is no better C, than C, full stop. -betterC should become .. -slimD But we really do need a focus on both -slimD and -bloatyD For D to be successful, it needs to be a flexible language that enables programmer choice. We don't all have the same problems to solve. C is not successful because of how much it constrains you.
I'm personally a big believer that the thing that will replace C is going to be something that is flexible, but more than anything prevents the security bugs that plague the web right now. Things like heartbleed are preventable with safety guarantees that don't prevent fast code. Rust has some good ideas, but so does D.
Nov 29
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/29/2017 8:57 AM, H. S. Teoh wrote:
 BetterC is a door-opener for an awful lot of areas D has been excluded
 from, and requiring druntime is a barrier for that.
Doesn't this mean that we should rather focus our efforts on improving druntime instead of throwing out the baby with the bathwater with BetterC?
What BetterC does is shine a spotlight on these issues. They've also come up in Ilya Yaroshenko's work.
 For example, the way assert() works, if indeed it's overblown, then
 shouldn't we rather fix/improve it?
I've tried, and met a lot of resistance.
Nov 29
prev sibling next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Tuesday, November 28, 2017 18:18:20 Walter Bright via Digitalmars-d 
wrote:
 On 11/28/2017 9:27 AM, Jacob Carlborg wrote:
 Why would druntime be a barrier for you for those projects?
When the C version is 90K and the translated D version is 1200K, it is a barrier. It's a barrier for others, as well.

Another barrier for me has turned out to be the way assert() works in D. It just is not lightweight, and it visibly slows down dmd to have asserts turned on internally. The amount of machinery involved with it in druntime is way overblown. Hence, asserts in dmd are turned off, and that wound up causing me a lot of problems recently. There are even initiatives to add writefln-like formatting to asserts. With betterC, asserts became lightweight and simple again.
I wouldn't have expected assertions to cost much more than however much it costs to evaluate the expression being asserted unless the assertion fails. Now, even that can slow down a program a fair bit, depending on what's being asserted and how many assertions there are, but it's not something that I would have expected to vary particularly between C and D. It doesn't surprise me that the generated code would be larger than you'd get for the same assertions in C, because how assertions are handled when they fail is quite different, but I would expect the assertions themselves to cost about the same in terms of performance as long as they don't fail. What's going on that's making them so much worse? - Jonathan M Davis
Nov 29
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/29/2017 7:15 PM, Jonathan M Davis wrote:
 I wouldn't have expected assertions to cost much more than however much it
 costs to evaluate the expression being asserted unless the assertion fails.
 Now, even that can slow down a program a fair bit, depending on what's being
 asserted and how many assertions there are, but it's not something that I
 would have expected to vary particular between C and D. It doesn't surprise
 me that the generated code would be larger than you'd get for the same
 assertions in C because how assertions are handled when they fail is quite
 different, but I would expect the assertions themselves to cost about the
 same in terms of performance as long as they don't fail. What's going on
 that's making them so much worse?
The code *size* causes problems because it pushes the executing code out of the cache. Another issue (I should check this again) was doing null checks on member function calls, which is not necessary since if they're null it'll seg fault.
Nov 29
next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Wednesday, November 29, 2017 19:29:56 Walter Bright via Digitalmars-d 
wrote:
 On 11/29/2017 7:15 PM, Jonathan M Davis wrote:
 I wouldn't have expected assertions to cost much more than however much
 it costs to evaluate the expression being asserted unless the assertion
 fails. Now, even that can slow down a program a fair bit, depending on
 what's being asserted and how many assertions there are, but it's not
 something that I would have expected to vary particular between C and
 D. It doesn't surprise me that the generated code would be larger than
 you'd get for the same assertions in C because how assertions are
 handled when they fail is quite different, but I would expect the
 assertions themselves to cost about the same in terms of performance as
 long as they don't fail. What's going on that's making them so much
 worse?
The code *size* causes problems because it pushes the executing code out of the cache.
Well, given that assertions would normally be used in a debug build where you generally don't optimize, and the debug symbols are compiled in, I wouldn't think that that would matter much in most cases.
 Another issue (I should check this again) was doing null
 checks on member function calls, which is not necessary since if they're
 null it'll seg fault.
I didn't think that we _ever_ checked for null on accessing members (though as I understand it, we actually do need to if the type is large enough that segfaults don't actually happen when dereferencing null - at least, we need to check for null in those cases in @safe code, or the code isn't really safe). - Jonathan M Davis
Nov 29
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/29/2017 8:02 PM, Jonathan M Davis wrote:
 Well, given that assertions would normally be used in a debug build where
 you generally don't optimize, and the debug symbols are compiled in, I
 wouldn't think that that would matter much in most cases.
I want them in the release build, which means they should be at minimal cost.
Nov 29
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Wednesday, November 29, 2017 20:08:53 Walter Bright via Digitalmars-d 
wrote:
 On 11/29/2017 8:02 PM, Jonathan M Davis wrote:
 Well, given that assertions would normally be used in a debug build
 where
 you generally don't optimize, and the debug symbols are compiled in, I
 wouldn't think that that would matter much in most cases.
I want them in the release build, which means they should be at minimal cost.
Well, we could always have an alternate implementation of assertions for release builds that acted closer to C's assert. We already transform assert(0) into something else with -release.

In general, for debugging, I'd much prefer to have D's assert as it is now, but I don't see any reason why we couldn't do something differently with a flag like -release but which specifically made assertions more primitive and lightweight for a release build rather than removing them, for those folks that want to leave assertions enabled in a release build.

Personally, I wouldn't want to enable assertions in the release build of most stuff, but some folks definitely have expressed the sentiment that they don't like the -release flag being named what it is, because they don't think that assertions should be disabled for release builds.

- Jonathan M Davis
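A minimal sketch of what such a lightweight release-mode assertion could look like. The `lightAssert` name and the whole design are hypothetical, not an existing druntime facility; it leans on the fact that dmd already lowers assert(0) to a halt under -release:

```d
// Hypothetical helper: an assertion that stays enabled in release builds
// but avoids druntime's AssertError machinery on the failure path.
void lightAssert(bool cond) @safe @nogc nothrow
{
    if (!cond)
        assert(0); // with -release, dmd lowers assert(0) to a HLT instruction
}

void main()
{
    lightAssert(1 + 1 == 2); // passes; a failure would halt instead of throwing
}
```

Compiled without -release, this behaves like a normal assert; with -release, the failure path shrinks to a single trap instruction.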
Nov 29
parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/29/2017 8:49 PM, Jonathan M Davis wrote:
 Well, we could always have an alternate implementation of assertions for
 release builds that acted closer to C's assert.
My plan for release builds is to have an option to make them just execute a HALT instruction. Short, sweet, un-ignorable, and to the point!
Nov 29
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 11/29/17 10:29 PM, Walter Bright wrote:
 Another issue (I should check this again) was doing null 
 checks on member function calls, which is not necessary since if they're 
 null it'll seg fault.
Just happened last release. https://dlang.org/changelog/2.077.0.html#removePreludeAssert -Steve
Nov 29
parent reply Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Thursday, 30 November 2017 at 04:21:20 UTC, Steven 
Schveighoffer wrote:
 On 11/29/17 10:29 PM, Walter Bright wrote:
 Another issue (I should check this again) was doing null 
 checks on member function calls, which is not necessary since 
 if they're null it'll seg fault.
Just happened last release. https://dlang.org/changelog/2.077.0.html#removePreludeAssert -Steve
That was specifically for constructors and destructors (i.e. (cast(Foo)null).__ctor() was the only way to trigger that assert), not member functions (of classes); the check for those, I believe, is still part of the compiler.
Nov 29
parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 11/29/17 11:50 PM, Nicholas Wilson wrote:
 On Thursday, 30 November 2017 at 04:21:20 UTC, Steven Schveighoffer wrote:
 On 11/29/17 10:29 PM, Walter Bright wrote:
 Another issue (I should check this again) was doing null checks on 
 member function calls, which is not necessary since if they're null 
 it'll seg fault.
Just happened last release. https://dlang.org/changelog/2.077.0.html#removePreludeAssert
That was specifically for constructors and destructors (i.e. (cast(Foo)null).__ctor() was the only way to trigger that assert) not member functions (of classes), which I believe is still part of the compiler.
Then either the changelog entry is wrong, or the fix was more encompassing than the author/reviewers thought:

Stevens-MacBook-Pro:testd steves$ cat testprelude.d
struct S
{
    int foo() { return 1; }
}

void main()
{
    S s;
    auto x = s.foo;
}

Stevens-MacBook-Pro:testd steves$ dvm use 2.076.1
Stevens-MacBook-Pro:testd steves$ dmd -vcg-ast testprelude.d
Stevens-MacBook-Pro:testd steves$ cat testprelude.d.cg
import object;
struct S
{
    int foo()
    {
        assert(&this, "null this");
        return 1;
    }
}
void main()
{
    S s = 0;
    int x = s.foo();
    return 0;
}
RTInfo!(S)
{
    enum typeof(null) RTInfo = null;
}

Stevens-MacBook-Pro:testd steves$ dvm use 2.077.0
Stevens-MacBook-Pro:testd steves$ dmd -vcg-ast testprelude.d
Stevens-MacBook-Pro:testd steves$ cat testprelude.d.cg
import object;
struct S
{
    int foo()
    {
        return 1;
    }
}
void main()
{
    S s = 0;
    int x = s.foo();
    return 0;
}
RTInfo!(S)
{
    enum typeof(null) RTInfo = null;
}

-Steve
Nov 30
prev sibling next sibling parent Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 30 November 2017 at 03:29:56 UTC, Walter Bright 
wrote:
 The code *size* causes problems because it pushes the executing 
 code out of the cache.
Not if you do a branch to a cold cacheline on assert failure.
Nov 29
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 30.11.2017 04:29, Walter Bright wrote:
 
 The code *size* causes problems because it pushes the executing code out 
 of the cache. Another issue (I should check this again) was doing null 
 checks on member function calls, which is not necessary since if they're 
 null it'll seg fault.
This is not true for final member functions.
Nov 30
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, November 30, 2017 09:37:30 Timon Gehr via Digitalmars-d wrote:
 On 30.11.2017 04:29, Walter Bright wrote:
 The code *size* causes problems because it pushes the executing code out
 of the cache. Another issue (I should check this again) was doing null
 checks on member function calls, which is not necessary since if they're
 null it'll seg fault.
This is not true for final member functions.
It's close enough. Instead of segfaulting when the member function is called, it'll segfault when it tries to access one of the member variables or non-final member functions inside the member function. So, there isn't any more need to add null checks for final member functions than there is for non-final member functions. - Jonathan M Davis
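As a concrete illustration of the point above (the class and names are invented for the example), marking a method `final` removes the virtual call, but the body still dereferences `this` the moment it reads a field, so the segfault simply moves inside the function:

```d
// A final member function still faults on a null receiver - just later,
// when it touches state, rather than at the (now non-virtual) call site.
class Counter
{
    int count;

    final int get() { return count; }             // reads this.count
    final bool isPositive() { return get() > 0; } // also dereferences this
}

void main()
{
    auto c = new Counter;
    c.count = 3;
    assert(c.get() == 3);

    // Counter c2 = null;
    // c2.get(); // would segfault inside get() when reading this.count,
    //           // not at the call itself
}
```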
Nov 30
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Thursday, 30 November 2017 at 09:01:20 UTC, Jonathan M Davis 
wrote:
 function is called, it'll segfault when it tries to access one 
 of the member variables
Is there an upper limit for how large an object can be?
Nov 30
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, November 30, 2017 09:56:35 Ola Fosheim Grstad via Digitalmars-
d wrote:
 On Thursday, 30 November 2017 at 09:01:20 UTC, Jonathan M Davis

 wrote:
 function is called, it'll segfault when it tries to access one
 of the member variables
Is there an upper limit for how large an object can be?
Not AFAIK, but it _is_ my understanding that if a type is large enough (larger than the page size, IIRC), a segfault won't be triggered when the reference or pointer is null, and in those cases, we really do need to add null checks in @safe code, or the code isn't going to truly be safe. That's completely separate from whether a function is final or not, though, and it would apply to pointers to structs as well as class references. - Jonathan M Davis
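A sketch of that corner case, assuming a conventional 4 KiB guard page at address zero (the struct is invented purely for illustration):

```d
// If a field's offset is larger than the guarded page(s) at address zero,
// dereferencing it through a null pointer may land in mapped memory and
// never segfault - which is why @safe code would need an explicit check.
struct Huge
{
    ubyte[8192] padding; // bigger than a common 4 KiB page
    int tail;
}

// A null Huge* reading .tail would access address 8192+, past page zero.
static assert(Huge.tail.offsetof >= 8192);

void main()
{
    assert(Huge.sizeof > 4096);
    // (cast(Huge*) null).tail  // not guaranteed to fault on such a type
}
```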
Nov 30
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Thursday, 30 November 2017 at 10:14:26 UTC, Jonathan M Davis 
wrote:
 the code isn't going to truly be  safe. That's completely 
 separate from whether a function is final or not though, and it 
 would apply to pointers to structs as well as class references.
Indeed. So maybe the compiler find the get the largest object for a given program and protect the same amount of pages? I guess pointers to C-style arrays would still be an issue, but probably not as frequently used in D.
Nov 30
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Thursday, 30 November 2017 at 10:23:09 UTC, Ola Fosheim 
Grøstad wrote:
 On Thursday, 30 November 2017 at 10:14:26 UTC, Jonathan M Davis 
 wrote:
 the code isn't going to truly be  safe. That's completely 
 separate from whether a function is final or not though, and 
 it would apply to pointers to structs as well as class 
 references.
Indeed. So maybe the compiler find the get the largest object
"can find and get the size of the largest object…" (not sure what happend there :)
Nov 30
prev sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Thursday, 30 November 2017 at 09:01:20 UTC, Jonathan M Davis 
wrote:
 It's close enough. Instead of segfaulting when the member 
 function is called, it'll segfault when it tries to access one 
 of the member variables or non-final member functions inside 
 the member function. So, there isn't any more need to add null 
 checks for final member functions than there is for non-final 
 member functions.
Err... wait. What if you have a conditional:

if (input == 0) { do something bad }
access field

Seems like you would be better off by injecting:

assert this not null

at the beginning of all final methods, and removing the assertion only if all paths lead to a field access before something bad can happen. Adding checks and then removing them only when they provably have no consequence tends to be the safer approach.
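In D, the hazard being pointed at might look like this (class and names invented for the example): the early-return path never touches a field, so a null receiver sails through it undetected:

```d
// Only one path through log() dereferences `this`, so calling it on a
// null receiver is caught or missed depending on the argument.
class Logger
{
    int level;

    final void log(int input)
    {
        if (input == 0)
            return;        // no field access on this path: null slips through
        level = input;     // only here does a null `this` actually fault
    }
}

void main()
{
    auto l = new Logger;
    l.log(0);
    assert(l.level == 0);
    l.log(5);
    assert(l.level == 5);

    // Logger n = null;
    // n.log(0); // would "succeed" silently
    // n.log(1); // would segfault only on the field write
}
```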
Nov 30
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Thursday, 30 November 2017 at 10:39:07 UTC, Ola Fosheim 
Grøstad wrote:
 On Thursday, 30 November 2017 at 09:01:20 UTC, Jonathan M Davis 
 wrote:
 It's close enough. Instead of segfaulting when the member 
 function is called, it'll segfault when it tries to access one 
 of the member variables or non-final member functions inside 
 the member function. So, there isn't any more need to add null 
 checks for final member functions than there is for non-final 
 member functions.
Err... wait. What if you have a conditional: if(input == 0) { do something bad } access field
Or even worse:

if (input != 0)
    access fields
else
    do bad stuff
Nov 30
prev sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, November 30, 2017 10:39:07 Ola Fosheim Grstad via Digitalmars-
d wrote:
 On Thursday, 30 November 2017 at 09:01:20 UTC, Jonathan M Davis

 wrote:
 It's close enough. Instead of segfaulting when the member
 function is called, it'll segfault when it tries to access one
 of the member variables or non-final member functions inside
 the member function. So, there isn't any more need to add null
 checks for final member functions than there is for non-final
 member functions.
Err... wait. What if you have a conditional: if(input == 0) { do something bad } access field Seems like you would be better off by injecting: assert this not null at the beginning of all final methods and remove the assertion if all paths will lead to a field access before something bad can happen. Adding checks and then only remove them if they provably have no consequence tend to be the safer approach.
All we need to do is insert null checks before calling any member function on an object that is large enough that segfaulting won't happen if the reference or pointer is null. Whether the function is virtual or not really doesn't matter, and there's no need to add checks if segfaults are going to happen on null.

That would mean that for large objects that have a path in a non-virtual function that would not access any members, you'd end up with an Error being thrown or a HLT being triggered (or whatever the compiler-inserted check did), whereas it would have squeaked by in a smaller object, but it's really a bug to be calling a member function on a null object anyway.

The key thing here is that we properly guarantee @safe, and normally that doesn't require null checks, thanks to the CPU doing that for you and segfaulting. It's just the overly large objects where that's not going to work, and we can add null checks then - which hopefully the compiler can optimize out in at least some cases, but even if it can't, that's just the price of having that code be @safe; most code wouldn't be affected anyway, since it only applies to particularly large objects.

- Jonathan M Davis
Nov 30
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Thursday, 30 November 2017 at 18:10:01 UTC, Jonathan M Davis 
wrote:
 whereas it would have squeaked by in a smaller object, but it's 
 really a bug to be calling a member function on a null object 
 anyway.
Well, it is a bug, but the member-function may have been written with an invariant in mind, so it would then go undetected on a small object and continue running with broken invariants (state outside the object). So without such a check there would be reduced value in builds with contracts. E.g. there could be a global involved that now has a broken invariant. Maybe contracts aren't really a major feature anyway, but such gotchas should be listed in the spec at least.
 doing that for you and segfaulting. It's just the overly large 
 objects where that's not going to work, and we can add null 
 checks then
I think the objection is that small objects with non-virtual member-functions and a path that does not dereference the this-pointer will pass incorrectly if this is null. Assume you add a non-nullable pointer type. Then you probably would want to assume that the this pointer is never null so that you don't have to test it manually before assigning it to a non-nullable pointer variable. Or you risk getting null into non-nullable pointers… But it really depends on how strong you want the type system to be.
Nov 30
parent Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, November 30, 2017 19:14:50 Ola Fosheim Grstad via Digitalmars-
d wrote:
 On Thursday, 30 November 2017 at 18:10:01 UTC, Jonathan M Davis

 wrote:
 whereas it would have squeaked by in a smaller object, but it's
 really a bug to be calling a member function on a null object
 anyway.
Well, it is a bug, but the member-function may have been written with an invariant in mind, so it would then go undetected on a small object and continue running with broken invariants (state outside the object). So without such a check there would be reduced value in builds with contracts. E.g. there could be a global involved that now has a broken invariant. Maybe contracts aren't really a major feature anyway, but such gotchas should be listed in the spec at least.
If there's an invariant, it's going to segfault as soon as it accesses any member variables, and actually, it wouldn't surprise me if the invariant were virtual, given that I would expect a base class invariant to be run in a derived class. And if the invariant is virtual, then you'll get a segfault when it's called on a null reference even if the function itself isn't virtual. In the case of pointers to structs, the invariant definitely wouldn't be virtual, and the invariant would be executed, but it would segfault as soon as it accessed a member.

Ultimately, I think that the main concern here is ensuring that @safe code is actually safe. As long as it segfaults (or HLTs or whatever, if an explicit check is added) when it tries to use a null pointer, I don't think that it really matters much whether it fails at the call point of the member function or when accessing a member inside, and in my experience, having a member function that doesn't use member variables is so rare as to be pretty much a non-issue. Sure, having a code path that shortcuts things under some set of circumstances and returns rather than accessing any members does increase how often it happens, but arguably in that case, it also didn't matter that the pointer/reference was null, since the object itself wasn't actually needed.

- Jonathan M Davis
Nov 30
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Nov 29, 2017 at 07:29:56PM -0800, Walter Bright via Digitalmars-d wrote:
 On 11/29/2017 7:15 PM, Jonathan M Davis wrote:
 I wouldn't have expected assertions to cost much more than however
 much it costs to evaluate the expression being asserted unless the
 assertion fails.  Now, even that can slow down a program a fair bit,
 depending on what's being asserted and how many assertions there
 are, but it's not something that I would have expected to vary
 particular between C and D. It doesn't surprise me that the
 generated code would be larger than you'd get for the same
 assertions in C because how assertions are handled when they fail is
 quite different, but I would expect the assertions themselves to
 cost about the same in terms of performance as long as they don't
 fail. What's going on that's making them so much worse?
The code *size* causes problems because it pushes the executing code out of the cache. Another issue (I should check this again) was doing null checks on member function calls, which is not necessary since if they're null it'll seg fault.
Can you elaborate? Because in my current understanding, assert(expr) is implemented by evaluating expr, which is unavoidable, and if it fails, calling a function in druntime to handle the failure. So as far as the user's code is concerned, there shouldn't be any performance issue -- presumably, druntime's assert() implementation shouldn't even be in the cache because it's not being used (up to that point). It's just a single function call in the user's code, which is at most just a few bytes. What happens inside the assert() implementation seems to be irrelevant, because at that point your program is going to terminate anyway. So a cache miss for the assert() implementation isn't going to be a big deal (unless your program is asserting at a high frequency, in which case you have bigger problems than performance!).

Unless you're talking about applications where the entire program must fit in cache or flash or SRAM or whatever. In that case, perhaps the solution is to have a different druntime that has a leaner implementation of assert().

Which brings us to the implementation of assert() itself. What about it makes it so big? I suspect most of the bloat comes from throwing AssertError, which pulls in the stack-unwinding code, which, if my memory is still up to date, suffers from performance issues where it tries to construct the stacktrace regardless of whether or not the catch block actually wants the stacktrace. I vaguely remember suggesting that this should be done lazily, so that the actual construction of the stacktrace (including symbol lookups, etc.) isn't done until somebody actually asks for it. You'd still have to save the addresses of the calls somewhere, since otherwise they might get overwritten by the time the stack unwinding is done, but it should be a lot cheaper than doing symbol lookups eagerly. But perhaps this issue has since been fixed?

But of course, this assumes that we even need to throw AssertError in the first place. If this can be made optional, we can skip the stack unwinding code altogether. (But I can see that this will only work for specific applications, since you may not be able to avoid the need for the unwinding code to call dtors and stuff to free up allocated resources, etc., which, if it's necessary, means you can't avoid linking in the stack unwinding code. But it *can*, at least in theory, be something separate from the stacktrace construction code, so you can still save a bit of code there. Make the stacktrace construction code a zero-argument template; then it won't get linked into the executable unless it's actually used.)

T -- Let's not fight disease by killing the patient. -- Sean 'Shaleh' Perry
Nov 30
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Thursday, 30 November 2017 at 16:12:04 UTC, H. S. Teoh wrote:
 Can you elaborate?
As I understand it, DWARF exception handling, even in the non-exceptional case, is a bit more expensive than our old way. Not hugely expensive, but for dmd, where we count milliseconds, it might add up.
 Which brings us to the implementation of assert() itself. What 
 about it makes it so big? I suspect most of the bloat comes 
 from throwing AssertError, which pulls in the stack-unwinding 
 code, which, if my memory is still up to date, suffers from 
 performance issues where it tries to construct the stacktrace 
 regardless of whether or not the catch block actually wants the 
 stacktrace.
That's false; I changed that many years ago myself - unless the DWARF change involved that too, but I don't think so. What happens is that the exception constructor walks the stack and copies the addresses to a local, static buffer. This is very fast - just walking a linked list and copying some void* into a void*[64] or whatever - and little code. The expensive part - formatting it to a string, actually looking up the debug info, parsing out the addresses, etc. - is done lazily, when the trace is printed, which occurs only on demand or right before the program terminates.
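A rough sketch of the scheme described here, with invented names and none of the real druntime plumbing: capture raw frame addresses eagerly into a fixed buffer, and defer the expensive symbolication until the trace is actually printed:

```d
// Illustrative only: eager, cheap address capture; lazy, expensive formatting.
struct LazyTrace
{
    void*[64] addrs; // raw return addresses, copied cheaply at throw time
    size_t len;

    // The costly work (debug-info lookup, formatting) would live here and
    // run only when someone actually asks for the trace.
    string describe() const
    {
        import std.format : format;
        return format("%d frames captured", len);
    }
}

void main()
{
    LazyTrace t;
    t.len = 3;
    assert(t.describe() == "3 frames captured");
}
```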
Nov 30
next sibling parent Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, November 30, 2017 16:48:10 Adam D. Ruppe via Digitalmars-d 
wrote:
 On Thursday, 30 November 2017 at 16:12:04 UTC, H. S. Teoh wrote:
 Which brings us to the implementation of assert() itself. What
 about it makes it so big? I suspect most of the bloat comes
 from throwing AssertError, which pulls in the stack-unwinding
 code, which, if my memory is still up to date, suffers from
 performance issues where it tries to construct the stacktrace
 regardless of whether or not the catch block actually wants the
 stacktrace.
That's false, I changed that many years ago myself, unless the DWARF change involved that too, but I don't think so. What happens is the exception constructor walks the stack and copies the addresses to a local, static buffer. This is very fast - just walking a linked list and copying some void* into a void*[64] or whatever - and little code. The expensive part of formatting it to a string, actually looking up the debug info, parsing out the addresses, etc., is done lazily when it is printed, which occurs only on demand or right before the program terminates.
Yeah, that change was a _huge_ speedup. It had a significant impact on std.datetime's unit tests. - Jonathan M Davis
Nov 30
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Thu, Nov 30, 2017 at 11:12:58AM -0700, Jonathan M Davis via Digitalmars-d
wrote:
 On Thursday, November 30, 2017 16:48:10 Adam D. Ruppe via Digitalmars-d 
 wrote:
 On Thursday, 30 November 2017 at 16:12:04 UTC, H. S. Teoh wrote:
 Which brings us to the implementation of assert() itself. What
 about it makes it so big? I suspect most of the bloat comes from
 throwing AssertError, which pulls in the stack-unwinding code,
 which, if my memory is still up to date, suffers from performance
 issues where it tries to construct the stacktrace regardless of
 whether or not the catch block actually wants the stacktrace.
That's false, I changed that many years ago myself, unless the DWARF change involved that too, but I don't think so. What happens is the exception constructor walks the stack and copies the addresses to a local, static buffer. This is very fast - just walking a linked list and copying some void* into a void*[64] or whatever - and little code. The expensive part of formatting it to a string, actually looking up the debug info, parsing out the addresses, etc., is done lazily when it is printed, which occurs only on demand or right before the program terminates.
Yeah, that change was a _huge_ speedup. It had a significant impact on std.datetime's unit tests.
[...] Ah yes, now I vaguely remember that somebody - I guess it was you, Adam - fixed that stacktrace thing.

Still, I think Walter's complaint wasn't the *performance* of assert() per se, since the cost of evaluating the expression should be pretty small, and the only real bottleneck is the stack unwinding, and I doubt anybody actually cares about the *performance* of that. But the complaint was about the code bloat of linking druntime into the executable. Before Adam's fix, assert() would create the entire stacktrace upon constructing the AssertError, which means it has to pull in a whole bunch of code for decoding stack addresses and cross-referencing them with symbols, etc., but all that code would be useless if the user didn't care about the stacktrace to begin with.

With Adam's fix, there's now the possibility of templatizing the stacktrace code so that the code won't even be compiled into the executable until you actually use it. I just took a quick glance at druntime, and it seems that the stacktrace symbol lookup code is currently referenced by a module static ctor, so it will be included by default whether or not you use it. I haven't looked further, but perhaps it's possible to refactor some of the code around this to make it lazy / lazier, so that the code isn't actually instantiated until your program actually calls it.

In general, I think druntime / Phobos should adopt the policy of zero cost until first use / first reference, where possible. Things like the GC are probably too tightly integrated for this to be possible, but I think there's still plenty of room for improvement with other parts of druntime and Phobos.

T -- Three out of two people have difficulties with fractions. -- Dirk Eddelbuettel
Nov 30
prev sibling next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, November 30, 2017 08:12:04 H. S. Teoh via Digitalmars-d wrote:
 But of course, this assumes that we even need to throw AssertError in
 the first place.  If this can be made optional, we can skip the stack
 unwinding code altogether. (But I can see that this will only work for
 specific applications, since you may not be able to avoid the need for
 the unwinding code to call dtors and stuff to free up allocated
 resources, etc., which, if it's necessary, means you can't avoid linking
 in the stack unwinding code. But it *can*, at least in theory, be
 something separate from the stacktrace construction code, so you can
 still save a bit of code there. Make the stacktrace construction code a
 zero-argument template, then it won't get linked into the executable
 unless it's actually used.)
If we're not talking about unit tests, then we could pretty much just print everything out on the failed assertion and kill the program right then and there rather than throwing an Error - and really, for most Errors, we could do exactly that. It would be really annoying for unit tests though, since being able to use scope(failure) to print out extra information on the failure can be really useful. But otherwise, printing out the stack trace and killing the application right then and there rather than throwing an Error shouldn't be expensive at all, since it's only going to happen once.

But I have a hard time believing that the cost of assertions relates to constructing an AssertError unless the compiler is inlining a bunch of stuff at the assertion site. If that's what's happening, then it would increase the code size around assertions and potentially affect performance.

- Jonathan M Davis
Nov 30
parent reply Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Thursday, 30 November 2017 at 18:18:41 UTC, Jonathan M Davis 
wrote:
 But I have a hard time believing that the cost of assertions 
 relates to constructing an AssertError unless the compiler is 
 inlining a bunch of stuff at the assertion site. If that's 
 what's happening, then it would increase the code size around 
 assertions and potentially affect performance.

 - Jonathan M Davis
Indeed, if DMD is not marking the conditional call to _d_assert (or whatever it is) 'cold' and the call itself `pragma(inline, false)` then it needs to be changed to do so.
Nov 30
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/30/2017 3:51 PM, Nicholas Wilson wrote:
 On Thursday, 30 November 2017 at 18:18:41 UTC, Jonathan M Davis wrote:
 But I have a hard time believing that the cost of assertions relates to 
 constructing an AssertError unless the compiler is inlining a bunch of stuff 
 at the assertion site. If that's what's happening, then it would increase the 
 code size around assertions and potentially affect performance.

 - Jonathan M Davis
Indeed, if DMD is not marking the conditional call to _d_assert (or whatever it is) 'cold' and the call itself `pragma(inline, false)` then it needs to be changed to do so.
Instead of speculation, let's look at what actually happens:

---------------------------------
void test(int i)
{
    assert(i, "message");
}
---------------------------------
dmd -c -m64 -O test
obj2asm -x test.obj
---------------------------------
__a6_746573742e64:
        db      074h,065h,073h,074h,02eh,064h,000h      ;test.d.
__a7_6d657373616765:
        db      06dh,065h,073h,073h,061h,067h,065h,000h ;message.
_D4test4testFiZv:
0000:           push    RBP
0001:           mov     RBP,RSP
0004:           sub     RSP,040h
0008:           mov     010h[RBP],ECX
000b:           cmp     dword ptr 010h[RBP],0
000f:           jne     $+3Ah
--- start of inserted assert failure code ---
0011:           mov     R8D,5                   // line number
0017:           lea     RAX,FLAT:_BSS[00h][RIP]
001e:           mov     -018h[RBP],RAX          // filename.ptr
0022:           mov     qword ptr -020h[RBP],6  // filename.length
002a:           lea     RDX,-020h[RBP]          // &filename[]
002e:           lea     RCX,FLAT:_BSS[00h][RIP]
0035:           mov     -8[RBP],RCX             // msg.ptr
0039:           mov     qword ptr -010h[RBP],7  // msg.length
0041:           lea     RCX,-010h[RBP]          // &msg[]
0045:           call    L0
--- end of inserted assert failure code ---
004a:           mov     RSP,RBP
004d:           pop     RBP
004e:           ret
-------------------------------------------

26 bytes of inserted Bloaty McBloatface code and 15 bytes of data. My proposal:

_D4test4testFiZv:
0000:           push    RBP
0001:           mov     RBP,RSP
0004:           sub     RSP,040h
0008:           mov     010h[RBP],ECX
000b:           cmp     dword ptr 010h[RBP],0
000f:           jne     $+01h
0011:           hlt                             // look ma, 1 byte!
0012:           mov     RSP,RBP
0015:           pop     RBP
0016:           ret

1 byte of inserted code, and the data strings are gone as well.
Nov 30
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Friday, 1 December 2017 at 03:23:23 UTC, Walter Bright wrote:
 26 bytes of inserted Bloaty McBloatface code and 15 bytes of 
 data. My proposal:
I suggest we break up the -release switch into different options. I never, never use -release, since it implies no bounds checking. But if we wanted small asserts, we'd ideally have something like a -slimassert switch to change that behavior without killing array bounds checking too.
 0011:           hlt                                // look ma, 
 1 byte!
BTW, are you against using `int 3` as the opcode instead? (0xCC) hlt kinda bothers me because it actually has a meaning. You're just abusing the fact that it is privileged so it traps on operating systems, but on bare metal, it just pauses until the next interrupt. int 3, on the other hand, is explicitly for debugging - which is what we want asserts to be.
Nov 30
next sibling parent reply user1234 <user1234 12.nl> writes:
On Friday, 1 December 2017 at 03:43:07 UTC, Adam D. Ruppe wrote:
 On Friday, 1 December 2017 at 03:23:23 UTC, Walter Bright wrote:
 26 bytes of inserted Bloaty McBloatface code and 15 bytes of 
 data. My proposal:
[...] int 3, on the other hand, is explicitly for debugging - which is what we want asserts to be.
Aren't hardware breakpoints limited in number?
Nov 30
parent Adam D. Ruppe <destructionator gmail.com> writes:
On Friday, 1 December 2017 at 03:52:01 UTC, user1234 wrote:
 Aren't hardware breakpoints limited in numbers ?
I've never heard that and can't think of any reason why it would be. The instruction as far as the CPU is concerned is to just trigger the behavior when it is executed, it doesn't know or care how many times it appears.
Nov 30
prev sibling next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Friday, December 01, 2017 03:43:07 Adam D. Ruppe via Digitalmars-d wrote:
 On Friday, 1 December 2017 at 03:23:23 UTC, Walter Bright wrote:
 26 bytes of inserted Bloaty McBloatface code and 15 bytes of
 data. My proposal:
I suggest we break up the -release switch into different options. I never, never use -release since it implies no bounds checking. But if we wanted small asserts, we'd ideally like -slimassert perhaps to change that behavior without killing arrays too.
It only implies no bounds checking in system code, though obviously, if you're not marking code with @safe, or simply want bounds checking to be enabled in system code, then using -release isn't something that you're going to want to do.

Regardless, I don't think that it makes sense for -release to imply any kind of enabling of assertions, since a lot of folks write assertions with the idea that they're only going to be enabled in debug builds, and sometimes, what's being asserted isn't cheap. So, we either need a different flag to enable stripped-down assertions in a release build, or some sort of argument to -release that controls what it does (which has been suggested for the DIP related to controlling contracts). And once you're talking about enabling some kind of assertion in a release build, that begs the question of whether contracts and invariants should be included.

Pretty quickly, it sounds like what -release does isn't appropriate at all, and a debug build with optimizations enabled makes more sense - though it would still need a flag of some kind to indicate that you wanted the stripped-down assertions.

- Jonathan M Davis
Nov 30
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Friday, 1 December 2017 at 04:07:24 UTC, Jonathan M Davis 
wrote:
 It only implies no bounds checking in  system code, though 
 obviously, if you're not marking code with  safe or simply want 
 the bounds checking to be enabled in  system code, then using 
 -release isn't something that you're going to want to do.
Indeed, but disabling bounds checking in system code is trivial anyway (just use `.ptr` in the index expression, and that's the level at which it should be done - individual expressions that you've verified somehow already).
 Pretty quickly, it sounds like what -release does isn't 
 appropriate at all
Well, my real point is that -release is too blunt to ever use. Every time it is mentioned, my reply is "-release is extremely harmful, NEVER use it", since it does far too much stuff at once. So what I'm asking for here is just independent switches to control the other stuff too. Like we used to just have `-release`, but now we have `-boundscheck=[on|safeonly|off]`. I propose we also add `-assert=[throws|c|traps|off]` and `-contracts=[on|off]`, and perhaps `-invariants=[on|off]` if they aren't lumped into contracts.

Of those assert options:

throws = what normal D does now (throw AssertError)
c      = what -betterC D does now (call the C runtime assert)
traps  = emit the hardware instruction (whether hlt or int 3)
off    = what -release D does now (omit entirely, except the assert(0) case, which traps)

Then, we can tweak the asserts without also killing D's general memory safety victory that bounds checking brings.
Nov 30
parent Kagamin <spam here.lot> writes:
On Friday, 1 December 2017 at 04:27:35 UTC, Adam D. Ruppe wrote:
 Then, we can tweak the asserts without also killing D's general 
 memory safety victory that bounds checking brings.
The default compilation mode is fine for me, it's just phobos is not written for it.
Dec 01
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/30/2017 7:43 PM, Adam D. Ruppe wrote:
 I never, never use -release since it implies no bounds checking.
This is incorrect. -release leaves bounds checking on.
 BTW, are you against using `int 3` as the opcode instead? (0xCC)
That might be a better idea.
Dec 01
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/1/2017 2:57 AM, Walter Bright wrote:
 On 11/30/2017 7:43 PM, Adam D. Ruppe wrote:
 I never, never use -release since it implies no bounds checking.
This is incorrect. -release leaves bounds checking on.
Correction:

https://dlang.org/dmd-windows.html#switch-release

"compile release version, which means not emitting run-time checks for contracts and asserts. Array bounds checking is not done for system and trusted functions, and assertion failures are undefined behaviour."
Dec 01
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Friday, 1 December 2017 at 11:01:13 UTC, Walter Bright wrote:
 assertion failures  are undefined behaviour."
=8-[
Dec 01
prev sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
On Friday, 1 December 2017 at 11:01:13 UTC, Walter Bright wrote:
 Correction:

 https://dlang.org/dmd-windows.html#switch-release

 "compile release version, which means not emitting run-time 
 checks for contracts and asserts. Array bounds checking is not 
 done for system and trusted functions, and assertion failures 
 are undefined behaviour."
Right, that's what I was talking about in this post:

http://forum.dlang.org/post/luuwsdbfzunjmzbarxyd forum.dlang.org

"Indeed, but disabling bounds checking in system code is trivial anyway"

So leaving them on only in safe code isn't much help: #1, much D code isn't safe (that might change if it were the default, but it isn't), and #2, it is easy to turn bounds checking off in system code, since the `.ptr[index]` trick works very well.
Dec 01
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/30/2017 7:43 PM, Adam D. Ruppe wrote:
 int 3, on the other hand, is explicitly for debugging - which is what we want 
 asserts to be.
https://github.com/dlang/dmd/pull/7391
Dec 03
prev sibling next sibling parent reply Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Friday, 1 December 2017 at 03:23:23 UTC, Walter Bright wrote:
 On 11/30/2017 3:51 PM, Nicholas Wilson wrote:
 On Thursday, 30 November 2017 at 18:18:41 UTC, Jonathan M 
 Davis wrote:
 But I have a hard time believing that the cost of assertions 
 relates to constructing an AssertError unless the compiler is 
 inlining a bunch of stuff at the assertion site. If that's 
 what's happening, then it would increase the code size around 
 assertions and potentially affect performance.

 - Jonathan M Davis
Indeed, if DMD is not marking the conditional call to _d_assert (or whatever it is) 'cold' and the call itself `pragma(inline, false)` then it needs to be changed to do so.
Instead of speculation, let's look at what actually happens:

---------------------------------
void test(int i)
{
    assert(i, "message");
}
---------------------------------
dmd -c -m64 -O test
obj2asm -x test.obj
---------------------------------
__a6_746573742e64:
        db      074h,065h,073h,074h,02eh,064h,000h      ;test.d.
__a7_6d657373616765:
        db      06dh,065h,073h,073h,061h,067h,065h,000h ;message.
_D4test4testFiZv:
0000:           push    RBP
0001:           mov     RBP,RSP
0004:           sub     RSP,040h
0008:           mov     010h[RBP],ECX
000b:           cmp     dword ptr 010h[RBP],0
000f:           jne     $+3Ah
--- start of inserted assert failure code ---
0011:           mov     R8D,5                   // line number
0017:           lea     RAX,FLAT:_BSS[00h][RIP]
001e:           mov     -018h[RBP],RAX          // filename.ptr
0022:           mov     qword ptr -020h[RBP],6  // filename.length
002a:           lea     RDX,-020h[RBP]          // &filename[]
002e:           lea     RCX,FLAT:_BSS[00h][RIP]
0035:           mov     -8[RBP],RCX             // msg.ptr
0039:           mov     qword ptr -010h[RBP],7  // msg.length
0041:           lea     RCX,-010h[RBP]          // &msg[]
0045:           call    L0
--- end of inserted assert failure code ---
004a:           mov     RSP,RBP
004d:           pop     RBP
004e:           ret
-------------------------------------------

26 bytes of inserted Bloaty McBloatface code and 15 bytes of data. My proposal:

_D4test4testFiZv:
0000:           push    RBP
0001:           mov     RBP,RSP
0004:           sub     RSP,040h
0008:           mov     010h[RBP],ECX
000b:           cmp     dword ptr 010h[RBP],0
000f:           jne     $+01h
0011:           hlt                             // look ma, 1 byte!
0012:           mov     RSP,RBP
0015:           pop     RBP
0016:           ret

1 byte of inserted code, and the data strings are gone as well.
I see you are concerned with the total size, which I understand. I think we misunderstood each other. What I meant in terms of icache pollution is that with the 'cold', instead of generating:

    if (!cond)
        _d_assert(__FILE__, __LINE__, message);
    // rest of code

it should actually generate:

    if (!cond)
        goto failed;
    // rest of code

failed:
    _d_assert(__FILE__, __LINE__, message); // call is cold & out of line: no icache pollution

I'm not sure that it does that given the triviality of the example, but it looks like it doesn't.
Nov 30
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/30/2017 8:34 PM, Nicholas Wilson wrote:
 What I meant in terms of icache pollution is with the 'cold' is instead of generating:

     if (!cond)
         _d_assert(__FILE__, __LINE__, message);
     // rest of code

 it should actually generate:

     if (!cond)
         goto failed;
     // rest of code

 failed:
     _d_assert(__FILE__, __LINE__, message); // call is cold & out of line: no icache pollution

 I'm not sure that it does that given the triviality of the example, but it looks like it doesn't.
You're right, it would be better to generate code that way. But it currently does not (I should fix that).

It's not completely correct that icache isn't polluted. Functions that are tightly coupled can be located adjacent for better cache performance, and the various asserts would push them apart. Also, the conditional jumps may need to be the longer variety due to the longer distance, rather than the 2 byte one.
Dec 01
parent reply Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Friday, 1 December 2017 at 11:07:32 UTC, Walter Bright wrote:
 On 11/30/2017 8:34 PM, Nicholas Wilson wrote:
 What I meant in terms of icache pollution is with the 'cold' is instead of generating:

     if (!cond)
         _d_assert(__FILE__, __LINE__, message);
     // rest of code

 it should actually generate:

     if (!cond)
         goto failed;
     // rest of code

 failed:
     _d_assert(__FILE__, __LINE__, message); // call is cold & out of line: no icache pollution

 I'm not sure that it does that given the triviality of the example, but it looks like it doesn't.
You're right, it would be better to generate code that way. But it currently does not (I should fix that).
Great!
 It's not completely correct that icache isn't polluted.
True.
 Functions that are tightly coupled can be located adjacent for 
 better cache performance, and the various asserts would push 
 them apart.
Does DMD optimise for locality? I would hope co-located functions are either larger than cache lines by a reasonable amount or, if they are small enough, inlined so that the asserts can be aggregated.

It is also possible (though I can't comment on how easy it would be to implement), if you are trying to optimise for co-location, to have the asserts be completely out of line, so that you have:

function1
function2
function3
call asserts of function1
call asserts of function2
call asserts of function3

such that the calls to the asserts never appear in the icache at all, apart from overlap of e.g. function1's asserts after the end of function3, or when one of the asserts fails.
 Also, the conditional jumps may need to be the longer variety 
 due to the longer distance, rather than the 2 byte one.
Then it becomes a tradeoff, one that I'm glad the compiler is doing for me.
Dec 01
parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/1/2017 3:31 AM, Nicholas Wilson wrote:
 On Friday, 1 December 2017 at 11:07:32 UTC, Walter Bright wrote:
 Does DMD optimise for locality?
No. However, the much-despised Optlink does! It uses the trace.def output from the profiler to set the layout of functions, so that tightly coupled functions are co-located.

https://digitalmars.com/ctg/trace.html

It's not even just cache locality - rarely used functions can be allocated to pages so they are never even loaded in from disk. (The executable files are demand loaded.) The speed improvement can be dramatic, especially on program startup times, and if the program does a lot of swapping.

I don't know if the Linux linker can accept a script file telling it the function layout.

The downside is that, because it relies on runtime profile information, it is awkward to set up and needs a representative usage test case to drive it. dmd could potentially use a static call graph to take a better-than-nothing stab at it, but it would only work on code supplied to it as a group on the command line.
 I would hope co-located functions are either larger than cache lines by a reasonable amount or, if they are small enough, inlined so that the asserts can be aggregated. It is also possible (though I can't comment on how easy it would be to implement), if you are trying to optimise for co-location, to have the asserts be completely out of line, so that you have:

 function1
 function2
 function3
 call asserts of function1
 call asserts of function2
 call asserts of function3

 such that the calls to the asserts never appear in the icache at all, apart from overlap of e.g. function1's asserts after the end of function3, or when one of the asserts fails.
It's possible, although the jmps to the assert code would now have to be unconditional relocatable jmps, which are larger:

    jne  L1
    jmp  assertcode
L1:
 Then it becomes a tradeoff, one that I'm glad the compiler is doing for me.
Everything about codegen is a tradeoff :-)
Dec 01
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/30/2017 8:34 PM, Nicholas Wilson wrote:
 I'm not sure that it does that given the triviality of the example, but it looks like it doesn't.
https://github.com/dlang/dmd/pull/7386
Dec 02
prev sibling next sibling parent codephantom <me noyb.com> writes:
On Friday, 1 December 2017 at 03:23:23 UTC, Walter Bright wrote:
 26 bytes of inserted Bloaty McBloatface code and 15 bytes of
[WARNING: This post may be considered 'off topic', and may therefore deeply offend people - hopefully those people are hiding me.] Hey..I like it..'Bloaty McBloatface'... good name for a ferry... If only Sydney had more 'D programmers' taking the ferry... http://www.abc.net.au/news/2017-11-13/sydney-ferry-will-actually-be-called-ferry-mcferryface/9146446
Dec 01
prev sibling next sibling parent Iain Buclaw <ibuclaw gdcproject.org> writes:
On 1 December 2017 at 04:23, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 11/30/2017 3:51 PM, Nicholas Wilson wrote:
 On Thursday, 30 November 2017 at 18:18:41 UTC, Jonathan M Davis wrote:
 But I have a hard time believing that the cost of assertions relates to
 constructing an AssertError unless the compiler is inlining a bunch of stuff
 at the assertion site. If that's what's happening, then it would increase
 the code size around assertions and potentially affect performance.

 - Jonathan M Davis
Indeed, if DMD is not marking the conditional call to _d_assert (or whatever it is) 'cold' and the call itself `pragma(inline, false)` then it needs to be changed to do so.
Instead of speculation, let's look at what actually happens:

---------------------------------
void test(int i)
{
    assert(i, "message");
}
---------------------------------
dmd -c -m64 -O test
obj2asm -x test.obj
---------------------------------
__a6_746573742e64:
        db      074h,065h,073h,074h,02eh,064h,000h      ;test.d.
__a7_6d657373616765:
        db      06dh,065h,073h,073h,061h,067h,065h,000h ;message.
_D4test4testFiZv:
0000:           push    RBP
0001:           mov     RBP,RSP
0004:           sub     RSP,040h
0008:           mov     010h[RBP],ECX
000b:           cmp     dword ptr 010h[RBP],0
000f:           jne     $+3Ah
--- start of inserted assert failure code ---
0011:           mov     R8D,5                   // line number
0017:           lea     RAX,FLAT:_BSS[00h][RIP]
001e:           mov     -018h[RBP],RAX          // filename.ptr
0022:           mov     qword ptr -020h[RBP],6  // filename.length
002a:           lea     RDX,-020h[RBP]          // &filename[]
002e:           lea     RCX,FLAT:_BSS[00h][RIP]
0035:           mov     -8[RBP],RCX             // msg.ptr
0039:           mov     qword ptr -010h[RBP],7  // msg.length
0041:           lea     RCX,-010h[RBP]          // &msg[]
0045:           call    L0
--- end of inserted assert failure code ---
004a:           mov     RSP,RBP
004d:           pop     RBP
004e:           ret
-------------------------------------------
Wouldn't it be more optimal if dmd instead emitted the following?

---------------------------------
__a6_746573742e64:
        db      074h,065h,073h,074h,02eh,064h,000h      ;test.d.
__a7_6d657373616765:
        db      06dh,065h,073h,073h,061h,067h,065h,000h ;message.
_D4test4testFiZv:
0000:           push    RBP
0001:           mov     RBP,RSP
0004:           sub     RSP,040h
0008:           mov     010h[RBP],ECX
000b:           cmp     dword ptr 010h[RBP],0
000f:           je      $+3Ah           ; <--- Using `je` instead of `jne`
0011:           mov     RSP,RBP
0014:           pop     RBP
0015:           ret
--- start of inserted assert failure code ---
...
 --- end of inserted assert failure code ---
-------------------------------------------
Dec 02
prev sibling parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On 1 December 2017 at 04:23, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 26 bytes of inserted Bloaty McBloatface code and 15 bytes of data. My
 proposal:

 _D4test4testFiZv:
 0000:           push    RBP
 0001:           mov     RBP,RSP
 0004:           sub     RSP,040h
 0008:           mov     010h[RBP],ECX
 000b:           cmp     dword ptr 010h[RBP],0
 000f:           jne     $+01h
 0011:           hlt                                // look ma, 1 byte!
 0012:           mov     RSP,RBP
 0015:           pop     RBP
 0016:           ret

 1 byte of inserted code, and the data strings are gone as well.
But then you need to bloat your program with debug info in order to understand what, why, and how things went wrong. Not worth it IMO.
Dec 02
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/2/2017 4:38 AM, Iain Buclaw wrote:
 But then you need to bloat your program with debug info in order to
 understand what, why, and how things went wrong.
Most of the time (for me) that isn't necessary, because the debugger still shows where it failed and that's enough. Besides, you can always rerun the test case with a debug build.
 Not worth it IMO.
I assumed that many would feel that way, hence making it an option. It's still better than running with NO asserts because of the bloat, which is the problem I was addressing.
Dec 02
next sibling parent reply Richard Delorme <abulmo club-internet.fr> writes:
On Saturday, 2 December 2017 at 23:44:39 UTC, Walter Bright wrote:
 On 12/2/2017 4:38 AM, Iain Buclaw wrote:
 But then you need to bloat your program with debug info in 
 order to
 understand what, why, and how things went wrong.
Most of the time (for me) that isn't necessary, because the debugger still shows where it failed and that's enough. Besides, you can always rerun the test case with a debug build.
+1

To me, the current D assert is useless, and I prefer to use a C-like equivalent that "crashes" the program without unwinding the stack. Then I can inspect the cause of the crash in a debugger, with access to the current context (variable contents, etc.), be it from a core file or by running the program under the debugger. That way I find the bug(s) much faster.

More generally, treating errors (i.e. bugs) as unrecoverable exceptions is a mistake IMHO. I prefer a call to the C function abort(), which leaves the context intact.
Dec 02
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2017-12-03 08:29, Richard Delorme wrote:

 +1
 To me, the current D assert is useless, and I prefer to use a C-like 
 equivalent, that "crash" the program without unwinding the stack. Then I 
 can inspect the cause of the crash on a debugger, with access to the 
 current context (variable contents, etc.), is it from a (core file) or 
 by running the program on the debugger.
Ideally druntime should identify if it's running inside a debugger and adapt accordingly. -- /Jacob Carlborg
Dec 03
prev sibling parent Iain Buclaw <ibuclaw gdcproject.org> writes:
On 3 December 2017 at 08:29, Richard Delorme via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Saturday, 2 December 2017 at 23:44:39 UTC, Walter Bright wrote:
 On 12/2/2017 4:38 AM, Iain Buclaw wrote:
 But then you need to bloat your program with debug info in order to
 understand what, why, and how things went wrong.
Most of the time (for me) that isn't necessary, because the debugger still shows where it failed and that's enough. Besides, you can always rerun the test case with a debug build.
+1 To me, the current D assert is useless, and I prefer to use a C-like equivalent, that "crash" the program without unwinding the stack. Then I can inspect the cause of the crash on a debugger, with access to the current context (variable contents, etc.), is it from a (core file) or by running the program on the debugger. That way I do find the bug(s) much faster. More generally treating errors (ie bugs) as unrecoverable exceptions is a mistake in IMHO. I prefer a call to the C function abort() that leaves the context intact.
Core dumps are of no use if there's no debug info. ;-)
Dec 03
prev sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On Saturday, 2 December 2017 at 23:44:39 UTC, Walter Bright wrote:
 On 12/2/2017 4:38 AM, Iain Buclaw wrote:
 But then you need to bloat your program with debug info in 
 order to
 understand what, why, and how things went wrong.
Most of the time (for me) that isn't necessary, because the debugger still shows where it failed and that's enough. Besides, you can always rerun the test case with a debug build.
Ehm, if it's a simple reproducible tool. Good luck debugging servers like that.
Dec 03
parent reply Adam Wilson <flyboynw gmail.com> writes:
On 12/3/17 00:09, Dmitry Olshansky wrote:
 On Saturday, 2 December 2017 at 23:44:39 UTC, Walter Bright wrote:
 On 12/2/2017 4:38 AM, Iain Buclaw wrote:
 But then you need to bloat your program with debug info in order to
 understand what, why, and how things went wrong.
Most of the time (for me) that isn't necessary, because the debugger still shows where it failed and that's enough. Besides, you can always rerun the test case with a debug build.
Ehm, if it’s a simple reproducable tool. Good luck debugging servers like that.
I have to agree with this. I make my living on server side software, and we aren't allowed (by legal) to connect to the server to run debuggers. The *only* thing I have is logging. If the program crashes with no option to trap an exception or otherwise log the crash this could cost me weeks-to-months of debugging time. -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Dec 03
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/3/2017 8:59 PM, Adam Wilson wrote:
 I have to agree with this. I make my living on server side software, and we 
 aren't allowed (by legal) to connect to the server to run debuggers. The
*only* 
 thing I have is logging. If the program crashes with no option to trap an 
 exception or otherwise log the crash this could cost me weeks-to-months of 
 debugging time.
As I said, the halt behavior will be an option. Nobody is taking away anything.
Dec 03
parent Adam Wilson <flyboynw gmail.com> writes:
On 12/3/17 21:28, Walter Bright wrote:
 On 12/3/2017 8:59 PM, Adam Wilson wrote:
 I have to agree with this. I make my living on server side software,
 and we aren't allowed (by legal) to connect to the server to run
 debuggers. The *only* thing I have is logging. If the program crashes
 with no option to trap an exception or otherwise log the crash this
 could cost me weeks-to-months of debugging time.
As I said, the halt behavior will be an option. Nobody is taking away anything.
Awesome. :) -- Adam Wilson IRC: LightBender import quiet.dlang.dev;
Dec 03
prev sibling parent Iain Buclaw <ibuclaw gdcproject.org> writes:
On 30 November 2017 at 04:29, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 11/29/2017 7:15 PM, Jonathan M Davis wrote:
 I wouldn't have expected assertions to cost much more than however much it
 costs to evaluate the expression being asserted unless the assertion
 fails.
 Now, even that can slow down a program a fair bit, depending on what's
 being
 asserted and how many assertions there are, but it's not something that I
 would have expected to vary particular between C and D. It doesn't
 surprise
 me that the generated code would be larger than you'd get for the same
 assertions in C because how assertions are handled when they fail is quite
 different, but I would expect the assertions themselves to cost about the
 same in terms of performance as long as they don't fail. What's going on
 that's making them so much worse?
The code *size* causes problems because it pushes the executing code out of the cache. Another issue (I should check this again) was doing null checks on member function calls, which is not necessary since if they're null it'll seg fault.
This is only a problem if you (dmd) are not able to move code blocks into hot and cold paths?
Dec 02
prev sibling parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On 29 November 2017 at 03:18, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 11/28/2017 9:27 AM, Jacob Carlborg wrote:
 Why would druntime be a barrier for you for those projects?
When the C version is 90K and the translated D version is 1200K, it is a barrier. It's a barrier for others, as well.
Did you add an extra 0 here for the D version?
 Another barrier for me has turned out to be the way assert() works in D. It
 just is not lightweight, and it visibly slows down dmd to have assert's
 turned on internally. The amount of machinery involved with it in druntime
 is way overblown. Hence, asserts in dmd are turned off, and that wound up
 causing me a lot of problems recently. There are even initiatives to add
 writefln like formatting to asserts. With betterC, asserts became
 lightweight and simple again.
I find this assertion hard to believe (pun not intended). Unless you are omitting some extra check (invariant calls?), whether you are using D's assert or C's assert, both have the same runtime cost.
 Andrei's been directing some work on using templates more in druntime to
 reduce this, such as Lucia's work. Martin has done some work with array ops,
 too.

 Exception handling support has been a bloat problem, too. DMC++ is built
 with all exceptions turned off. I've been writing PRs for dmd to greatly
 improve things so that it can generate similar code for RAII. (Exceptions
 require druntime.)

 BetterC is a door-opener for an awful lot of areas D has been excluded from,
 and requiring druntime is a barrier for that.
It's not a door opener, and druntime is not a barrier.
Dec 02
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/2/2017 4:13 AM, Iain Buclaw wrote:
 On 29 November 2017 at 03:18, Walter Bright via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 11/28/2017 9:27 AM, Jacob Carlborg wrote:
 Why would druntime be a barrier for you for those projects?
When the C version is 90K and the translated D version is 1200K, it is a barrier. It's a barrier for others, as well.
Did you add an extra 0 here for the D version?
No. I used the sizes for MicroEmacs in C vs MicroEmacs in D. (I translated the former into the latter.) With BetterC, I've been able to create virtually identical binaries for C and D.
 Another barrier for me has turned out to be the way assert() works in D. It
 just is not lightweight, and it visibly slows down dmd to have assert's
 turned on internally. The amount of machinery involved with it in druntime
 is way overblown. Hence, asserts in dmd are turned off, and that wound up
 causing me a lot of problems recently. There are even initiatives to add
 writefln like formatting to asserts. With betterC, asserts became
 lightweight and simple again.
I find this assertion hard to believe (pun not intended). Unless you are omitting some extra check (invariant calls?), whether you are using D's assert or C's assert, both have the same runtime cost.
asserts are significantly larger in D. One reason is the filename string is passed as a D string, which is 2 pushes. A C string is one push. Consider that D asserts throw an exception. Where's the catch going to be, if you're linking the D code into a C program? And the D personality function, needed for D exception unwinding, is in druntime.
 BetterC is a door-opener for an awful lot of areas D has been excluded from,
 and requiring druntime is a barrier for that.
It's not a door opener, and druntime is not a barrier.
If you have a C program, and want to add a D function to it, you really don't want to add druntime as well. BetterC enables people to provide D addon libraries for people who have C main programs. There's a point to making incremental use of D as low cost as possible.
Dec 02
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On 3 December 2017 at 00:56, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 12/2/2017 4:13 AM, Iain Buclaw wrote:
 On 29 November 2017 at 03:18, Walter Bright via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On 11/28/2017 9:27 AM, Jacob Carlborg wrote:
 Why would druntime be a barrier for you for those projects?
When the C version is 90K and the translated D version is 1200K, it is a barrier. It's a barrier for others, as well.
Did you add an extra 0 here for the D version?
No. I used the sizes for MicroEmacs in C vs MicroEmacs in D. (I translated the former into the latter.) With BetterC, I've been able to create virtually identical binaries for C and D.
Ah, you're referring to binary sizes. Well, you have two central components which do well to bolster that: core.thread and the GC. If your project is to port something from C to D, you'd start out with `extern(C) int main() @nogc` and go from there.

That then leaves ModuleInfo, for which the machinery to determine whether it needs to be generated already exists in the compiler; dmd could start using it again instead of defaulting to always generating ModuleInfo symbols that pull in druntime. That would mean a switch that gives fine control over generation (-moduleinfo=[on|asneeded|off]), still better than a betterC feature gate.
 Another barrier for me has turned out to be the way assert() works in D.
 It
 just is not lightweight, and it visibly slows down dmd to have assert's
 turned on internally. The amount of machinery involved with it in
 druntime
 is way overblown. Hence, asserts in dmd are turned off, and that wound up
 causing me a lot of problems recently. There are even initiatives to add
 writefln like formatting to asserts. With betterC, asserts became
 lightweight and simple again.
I find this assertion hard to believe (pun not intended). Unless you are omitting some extra check (invariant calls?), whether you are using D's assert or C's assert, both have the same runtime cost.
asserts are significantly larger in D. One reason is the filename string is passed as a D string, which is 2 pushes. A C string is one push.
D strings require one extra push, true. But what's that, one extra clock cycle? Not even the cost of half a crown.
 Consider that D asserts throw an exception. Where's the catch going to be,
 if you're linking the D code into a C program? And the D personality
 function, needed for D exception unwinding, is in druntime.
If there is no catch, what happens is that the unwinder's raise-exception call returns end of stack, and you abort in the throw function.
 BetterC is a door-opener for an awful lot of areas D has been excluded
 from,
 and requiring druntime is a barrier for that.
It's not a door opener, and druntime is not a barrier.
If you have a C program, and want to add a D function to it, you really don't want to add druntime as well. BetterC enables people to provide D addon libraries for people who have C main programs. There's a point to making incremental use of D as low cost as possible.
People have been making alternative druntime libraries for a while now, either as stubs inside their own project, or by using minilibd. There's nothing stopping you from simply swapping out druntime for another implementation.
Dec 03
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/3/2017 2:32 AM, Iain Buclaw wrote:
 People have been making alternative druntime libraries for a while
 now, either as stubs inside their own project, or by using minilibd.
 There's nothing stopping you from simply swapping out druntime for
 another implementation.
It may indeed work to use a special druntime. My expectation, however, is that it's a lot more work trying to develop and support another runtime library, and a lot more work for the user trying to get that library worked into his build system. This will drastically cut down on the number of users willing to give it a try. (Consider the ENDLESS problems Win64 users have trying to link in the VC C runtime library, something that should be trivial. And these are experienced VC developers.) Meanwhile, we've got -betterC today, and it's simple and it works.
Dec 03
next sibling parent Iain Buclaw <ibuclaw gdcproject.org> writes:
On 3 December 2017 at 13:20, Walter Bright via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On 12/3/2017 2:32 AM, Iain Buclaw wrote:
 People have been making alternative druntime libraries for a while
 now, either as stubs inside their own project, or by using minilibd.
 There's nothing stopping you from simply swapping out druntime for
 another implementation.
It may indeed work to use a special druntime. My expectation, however, is that it's a lot more work trying to develop and support another runtime library, and a lot more work for the user trying to get that library worked into his build system. This will drastically cut down on the number of users willing to give it a try. (Consider the ENDLESS problems Win64 users have trying to link in the VC C runtime library, something that should be trivial. And these are experienced VC developers.) Meanwhile, we've got -betterC today, and it's simple and it works.
I prefer the approach of: try compiling a simple "hello world" with an empty object.d file. Then inspect what the compiler does. Does it error and exit, or does it ICE? What can be done to prevent that from happening?

When you reach the point that there's no way but to declare a symbol or function, add that to object.d and then move on to the next error or ICE. Repeat until you can compile and link your program.

Next, try something a little more complex, such as defining a struct and using it. Then again address all problems that you encounter with that.

What you end up with should be a compiler that is less coupled to the existence of object.d and all its definitions than it was before. Doing things this way, in a bottom-up fashion, has allowed people to use gdc to target STM microcontrollers.
Dec 03
prev sibling next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
On Sunday, 3 December 2017 at 12:20:14 UTC, Walter Bright wrote:
 It may indeed work to use a special druntime. My expectation, 
 however, is that it's a lot more work trying to develop and 
 support another runtime library, and a lot more work for the 
 user trying to get that library worked into his build system.
It's pretty easy, actually, and you can then selectively opt into features by copying function implementations as you need them. That said, I like the idea of betterC just working... as long as it doesn't break the opt-in option.
 Meanwhile, we've got -betterC today, and it's simple and it 
 works.
It is a bit simpler than the old way, but not that much... like other people have copy/pasted my minimal object.d into new projects and gotten it to work pretty easily. Downloading a file and compiling isn't much different than compiling with -betterC. (And actually, my minimal one gives you classes and exceptions if you want them too via -version! And it is bare-metal compatible as well, which -betterC still needs a few little stubs to work on right now.)

So it is one thing to say "this is a bit more convenient", but don't say "this enables something D couldn't do before". The latter is patently false in all contexts, and in some of those contexts, it further spreads FUD about druntime.
Dec 03
prev sibling parent reply Michael V. Franklin <slavo5150 yahoo.com> writes:
On Sunday, 3 December 2017 at 12:20:14 UTC, Walter Bright wrote:
 It may indeed work to use a special druntime. My expectation, 
 however, is that it's a lot more work trying to develop and 
 support another runtime library, and a lot more work for the 
 user trying to get that library worked into his build system. 
 This will drastically cut down on the number of users willing 
 to give it a try.
I don't think it's necessary for you or anyone else to create a special officially supported runtime. What I need is a way to create a very minimal runtime that supports just the features of D that I'm opting in to, without having to write phony stubs and boilerplate that, in the end, are just going to be discarded by the linker. Currently the compiler expects things to exist in the runtime that have no hope of ever being used, just to get a build.

In fact, one can compile against the stock object.d to satisfy the compiler, but then omit linking to druntime, and still get a proper binary. I had to stop pursuing it because I couldn't suggest it professionally and expect to be taken seriously.
 Meanwhile, we've got -betterC today, and it's simple and it 
 works.
IMO -betterC is papering over the problem rather than dealing with it. Mike
Dec 03
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/3/2017 11:22 AM, Michael V. Franklin wrote:
 On Sunday, 3 December 2017 at 12:20:14 UTC, Walter Bright wrote:
 Meanwhile, we've got -betterC today, and it's simple and it works.
IMO -betterC is papering over the problem rather than dealing with it.
If -betterC motivates people to come up with better solutions, I'm all for it.
Dec 03
parent reply Michael V. Franklin <slavo5150 yahoo.com> writes:
On Sunday, 3 December 2017 at 23:34:13 UTC, Walter Bright wrote:

 If -betterC motivates people to come up with better solutions, 
 I'm all for it.
A better solution would be to do what Iain said:
 Try compiling a simple "hello world" with an empty object.d 
 file.  Then inspect what the compiler does.  Does it error and 
 exit, or does it ICE?  What can be done to prevent that from 
 happening?
 When you reach the point that there's no way but to declare a 
 symbol or function, add that to object.d and then move onto the 
 next error or ICE.  Repeat until you can compile and link your 
 program.
 Next, try something a little more complex, such as define a 
 struct and use it.  Then again address all problems that you 
 encounter with that.
It would be great for the compiler devs to run through this exercise themselves and make changes to the compiler/runtime interface to remove the irrelevant requirements the compiler is imposing. Mike
Dec 03
next sibling parent reply Michael V. Franklin <slavo5150 yahoo.com> writes:
On Monday, 4 December 2017 at 00:25:53 UTC, Michael V. Franklin 
wrote:

 A better solution would be to do what Iain said:

 Try compiling a simple "hello world" with an empty object.d 
 file.  Then inspect what the compiler does.  Does it error and 
 exit, or does it ICE?  What can be done to prevent that from 
 happening?
 When you reach the point that there's no way but to declare a 
 symbol or function, add that to object.d and then move onto 
 the next error or ICE.  Repeat until you can compile and link 
 your program.
 Next, try something a little more complex, such as define a 
 struct and use it.  Then again address all problems that you 
 encounter with that.
Here's an illustration:

object.d
----------------------------
module object;

alias immutable(char)[] string;

struct ModuleInfo { }

main.d
-----------------------------
module main;

long sys_write(long arg1, in void* arg2, long arg3)
{
    long result;
    asm
    {
        mov RAX, 1;
        mov RDI, arg1;
        mov RSI, arg2;
        mov RDX, arg3;
        syscall;
    }
    return result;
}

void write(in string text)
{
    sys_write(2, text.ptr, text.length);
}

extern(C) void main()
{
    write("Hello\n");
}

$> dmd -defaultlib= -debuglib= -conf= main.d -of=main
/usr/bin/ld: main.o: relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Nonrepresentable section on output

I didn't compile with -shared.  What's going on here?

$> dmd -defaultlib= -debuglib= -conf= main.d -of=main -fPIC
main.o:(.text.d_dso_init[.data.d_dso_rec]+0x22): undefined reference to `_d_dso_registry'

Again, not sure why the compiler's generated code for that?

Would it help for me to file bugzilla issues for things like this?  The reason I haven't thus far is that this is just a symptom of a larger issue.  It needs a development strategy to be solved holistically.

Mike
Dec 03
next sibling parent reply Michael V. Franklin <slavo5150 yahoo.com> writes:
On Monday, 4 December 2017 at 01:01:47 UTC, Michael V. Franklin 
wrote:

 .$> dmd -defaultlib= -debuglib= -conf= main.d -of=main
 /usr/bin/ld: main.o: relocation R_X86_64_32 against 
 `.rodata.str1.1' can not be used when making a shared object; 
 recompile with -fPIC
 /usr/bin/ld: final link failed: Nonrepresentable section on 
 output

 I didn't compile with -shared.  What's going on here?
 .$> dmd -defaultlib= -debuglib= -conf= main.d -of=main -fPIC

 main.o:(.text.d_dso_init[.data.d_dso_rec]+0x22): undefined 
 reference to `_d_dso_registry'

 Again, not sure why the compiler's generated code for that?
Ok, well perhaps that makes sense compiling with -fPIC, but the "relocation R_X86_64_32 against `.rodata.str1.1'" seems unnecessary. Mike
Dec 03
parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/3/2017 5:12 PM, Michael V. Franklin wrote:
 Ok, well perhaps that makes sense compiling with -fPIC, but the "relocation 
 R_X86_64_32 against `.rodata.str1.1'" seems unnecessary.
Why certain relocations are there is not at all a simple subject. And changing them tends to produce all sorts of frustrating bugs :-(
Dec 03
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/3/2017 5:01 PM, Michael V. Franklin wrote:
 Would it help for me to file bugzilla issues for things like this?
No, because "why does the compiler do xxx" isn't really a bug report. You could ask on the learn n.g., or use obj2asm to examine the generated code. You can also grep the druntime source for what _d_dso_registry does.
 The reason I haven't thus far, is that this is just a symptom of a larger 
 issue. It needs a development strategy to be solved holistically.
Having a new process isn't going to do much, because at some point someone has to put in work. It's doing the work that produces results.
Dec 03
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/3/2017 4:25 PM, Michael V. Franklin wrote:
 It would be great for the compiler devs to run through this exercise themselves 
 and make changes to the compiler/runtime interface to remove the irrelevant 
 requirements the compiler is imposing.
I don't agree that creating a stub druntime is better, for reasons mentioned before. As for changing the way the compiler generates code so it is more pay-only-for-what-you-use, I'm all for it. I already mentioned work being done in this direction; PRs for more are welcome.
Dec 03
prev sibling next sibling parent reply Jon Degenhardt <jond noreply.com> writes:
On Monday, 27 November 2017 at 00:14:40 UTC, IM wrote:
 I'm a full-time C++ software engineer in Silicon Valley. I've 
 been learning D and using it in a couple of personal side 
 projects for a few months now.

 First of all, I must start by saying that I like D, and wish to 
 use it everyday. I'm even considering to donate to the D 
 foundation. However, some of D features and design decisions 
 frustrates me a lot, and sometimes urges me to look for an 
 alternative. I'm here not to criticize, but to channel my 
 frustrations to whom it may concern. I want D to become better 
 and more widely used. I'm sure many others might share with me 
 some of the following points:
Forum discussions are a valuable venue. Since you are in Silicon Valley, you might also consider attending one of the Silicon Valley D meetups (https://www.meetup.com/D-Lang-Silicon-Valley). It's hard to beat face-to-face conversations with other developers to get a variety of perspectives. The ultimate would be DConf, if you can manage to attend.
Nov 26
parent reply IM <3di gm.com> writes:
On Monday, 27 November 2017 at 03:01:24 UTC, Jon Degenhardt wrote:
 Forum discussions are valuable venue. Since you are in Silicon 
 Valley, you might also consider attending one of the Silicon 
 Valley D meetups 
 (https://www.meetup.com/D-Lang-Silicon-Valley). It's hard to 
 beat face-to-face conversations with other developers to get a 
 variety of perspectives. The ultimate would be DConf, if you 
 can manage to attend.
Thanks. I intend to attend some of their meetup events.
Nov 26
parent reply Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Monday, 27 November 2017 at 07:58:27 UTC, IM wrote:
 On Monday, 27 November 2017 at 03:01:24 UTC, Jon Degenhardt 
 wrote:
 Forum discussions are valuable venue. Since you are in Silicon 
 Valley, you might also consider attending one of the Silicon 
 Valley D meetups 
 (https://www.meetup.com/D-Lang-Silicon-Valley). It's hard to 
 beat face-to-face conversations with other developers to get a 
 variety of perspectives. The ultimate would be DConf, if you 
 can manage to attend.
Thanks. I intend to attend some of their meetup events.
I'll be giving a presentation on Thursday evening on using D for (among other things) GPGPU, hope you can make it!
Nov 27
parent reply Zoadian <no no.no> writes:
On Monday, 27 November 2017 at 09:07:10 UTC, Nicholas Wilson 
wrote:
 On Monday, 27 November 2017 at 07:58:27 UTC, IM wrote:
 On Monday, 27 November 2017 at 03:01:24 UTC, Jon Degenhardt 
 wrote:
 Forum discussions are valuable venue. Since you are in 
 Silicon Valley, you might also consider attending one of the 
 Silicon Valley D meetups 
 (https://www.meetup.com/D-Lang-Silicon-Valley). It's hard to 
 beat face-to-face conversations with other developers to get 
 a variety of perspectives. The ultimate would be DConf, if 
 you can manage to attend.
Thanks. I intend to attend some of their meetup events.
I'll be giving a presentation on Thursday evening on using D for (among other things) GPGPU, hope you can make it!
Any chance it gets recorded? I'd be highly interested!
Nov 27
parent Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Monday, 27 November 2017 at 10:11:40 UTC, Zoadian wrote:
 On Monday, 27 November 2017 at 09:07:10 UTC, Nicholas Wilson 
 wrote:
 On Monday, 27 November 2017 at 07:58:27 UTC, IM wrote:
 On Monday, 27 November 2017 at 03:01:24 UTC, Jon Degenhardt 
 wrote:
 [...]
Thanks. I intend to attend some of their meetup events.
I'll be giving a presentation on Thursday evening on using D for (among other things) GPGPU, hope you can make it!
Any chance it gets recorded? I'd be highly interested!
We'll try, but I'm lead to believe the track record is not great, though it has worked before. I'll see if I can record on my end as well.
Nov 27
prev sibling next sibling parent codephantom <me noyb.com> writes:
On Monday, 27 November 2017 at 00:14:40 UTC, IM wrote:
 - D is unnecessarily a huge language. I remember in DConf 2014, 
 Scott Meyers gave a talk about the last thing D needs, which is 
 a guy like him writing a lot of books covering the many 
 subtleties of the language. However, it seems that the D 
 community went ahead and created exactly this language!
btw. I found this interesting article from Rob Pike (golang): https://commandcenter.blogspot.com.au/2012/06/less-is-exponentially-more.html

He discusses the reasons why C++ developers are *not* flocking to Go... which, interestingly, is the very opposite with regard to D (well, 'flocking' might be a slight exaggeration ;-)

Neither Go's 'unnecessarily small' philosophy nor C++'s 'unnecessarily huge' philosophy is a good design philosophy. Go programmers will eventually look for 'more', and C++ programmers will continue to look for 'less'. "When all the forces acting upon an object balance each other, the object will be at equilibrium."
Nov 26
prev sibling next sibling parent reply Neia Neutuladh <neia ikeran.org> writes:
On Monday, 27 November 2017 at 00:14:40 UTC, IM wrote:
 - ‎It's quite clear that D was influenced a lot by Java at some 
 point, which led to borrowing (copying?) a lot of Java features 
 that may not appeal to everyone.
Have you observed a human to exist who has complained about a specific feature in D that is similar to a feature in Java? Relaying their complaints would be much more useful.
 - ‎The amount of trickeries required to avoid the GC and do 
 manual memory management are not pleasant and counter 
 productive. I feel they defeat any productivity gains the 
 language was supposed to offer.
Since so much of C++'s coding style revolves around memory management, the idea that you might be able to avoid manual memory management and still not see a giant reduction in performance is transformative. It means writing code in an entirely different style.

And because Java is the point of comparison for using a garbage collector, it's easy to think that a GC is impossibly inefficient. But that's because Java abuses its GC so heavily (and because Java came out in an era when having 64MB of RAM on a home computer was pretty swish). Let's compare with how D uses the GC.

Here's an example of code that I wrote in the most obvious way possible that is more efficient than the equivalent in C++, specifically because of the garbage collector: https://github.com/dhasenan/subtex

The only nods to performance in that are when I have various Appenders reserve some space in advance. On my 410KB reference document, this completes in 50ms. (That's 90ms faster than a Java "hello world" program.)

I ported it to C# for comparison, and it took over one second. I ran a GC profiler and discovered that it allocated 4GB in string data. The D version, by contrast, allocates a total of 12MB, runs three GC collections, and has a total pause time under 1ms. (If I add a call to `GC.collect()` just before the application exits, the total pause time exceeds 1ms but is still below 2ms.)

You might say that I could use C++-style manual memory management and get even better performance. And you'd be wrong. The culprit for the C# version's poor performance was System.String.Substring, which allocates a copy of its input data. So "Hello world".Substring(0, 5) creates a new char* pointing at a new memory allocation containing "Hello\0". C++'s std::string does the same thing. So if I reimplemented subtex naively in C++, its performance would be closer to the C# version than to the D version.

I could probably get slightly better performance than the D version by writing a special `stringslice` struct. But that's a lot of work, and it's currently just barely fast enough that I realize that I've actually run the command instead of my shell inexplicably swallowing it.

On the whole, it sounds like you don't like D because it's not C++. Which is fine, but D isn't going to become C++.
Nov 26
next sibling parent reply Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Monday, 27 November 2017 at 05:11:06 UTC, Neia Neutuladh wrote:
 You might say that I could use C++ style manual memory 
 management and get even better performance. And you'd be wrong.
No... not if you do it right, but it takes more planning, i.e. design. Which is why scripting and high-level languages don't use it.
 std::string does the same thing. So if I reimplemented subtex 
 naively in C++, its performance would be closer to the C# 
 version than to the D version.
You meant stupidly: you would rather use std::string_view for string references in C++. std::string is a library convenience type that is typically only used for debugging and filenames. If you want performance then it really isn't possible to make do with a fixed library type for strings, so in a realistic program people would write their own.
 I could probably get slightly better performance than the D 
 version by writing a special `stringslice` struct. But that's a 
 lot of work,
No...
 On the whole, it sounds like you don't like D because it's not 
 C++. Which is fine, but D isn't going to become C++.
Sure. But maybe you shouldn't use a tiny 400K input when discussing performance. Try to think about how many instructions a CPU executes in 50ms... If you don't know C++ then it makes no sense for you to compare performance to C++.
Nov 26
parent reply Neia Neutuladh <neia ikeran.org> writes:
On Monday, 27 November 2017 at 06:12:53 UTC, Ola Fosheim Grostad 
wrote:
 On Monday, 27 November 2017 at 05:11:06 UTC, Neia Neutuladh
 std::string does the same thing. So if I reimplemented subtex 
 naively in C++, its performance would be closer to the C# 
 version than to the D version.
You meant stupidly, you would rather use std::string_view for string references in C++.
I last used C++ professionally in 2015, and we were still rolling out C++11. std::string_view is part of C++17. You're calling me stupid for not having already known about it. (Yes, yes, you were sufficiently indirect to have a fig leaf of deniability.)
 std::string is a library convenience type that typically is 
 only used for debugging and filenames. If you want performance 
 then it really isnt possible to make do with a fixed library 
 type for strings so in a realistic program people would write 
 their own.
An efficient text parser doesn't seem like a sufficiently unusual task that it should require you to create your own string type. A large swath of programs will use at least one text parser.
 Sure. But maybe you shouldn't use a tiny 400k input when 
 discussing performance. Try to think about how many 
 instructions a CPU executes in 50ms...
It is often useful to talk about real-world workloads when discussing performance. The reference document I'm talking about is a short novel of 75,000 words. It was a document I already had on hand, and it was within a factor of two of the largest I expected to feed through subtex. And I already had the numbers on hand: <https://blog.ikeran.org/?p=277> If you want me to do more in-depth testing, you'll have to pay me.
Nov 27
parent reply Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Monday, 27 November 2017 at 16:44:41 UTC, Neia Neutuladh wrote:
 I last used C++ professionally in 2015, and we were still 
 rolling out C++11. std::string_view is part of C++17. You're 
 calling me stupid for not having already known about it. (Yes, 
 yes, you were sufficiently indirect to have a fig leaf of 
 deniability.)
I'm not talking about you, obviously. I am talking about using languages stupidly... You could use GSL string_span or the array version span, or write your own in 20 minutes. These are not language constructs, but library constructs, so they don't speak to the efficiency of the language...
 An efficient text parser doesn't seem like a sufficiently 
 unusual task that it should require you to create your own 
 string type. A large swath of programs will use at least one 
 text parser.
C++ requires you to write basically most things from scratch or use external libraries... What ships with it is very rudimentary. There are many parser libraries available. C++ is very much batteries not included... Which is good for low level programming.
 It is often useful to talk about real-world workloads when 
 discussing performance.
Well, in that case Java was sufficiently fast, so all languages came out the same... If we talk about language performance then we need to use a different approach. If we do a direct translation from lang A to B, then we essentially give A an advantage. So that methodology is flawed.

Assuming that your CPU can execute 20 billion instructions per second: that means 1 billion per 50 ms, so your budget is 1 million instructions on 400 bytes? Doesn't that suggest that the program is far from optimal or that most of the time is spent on something else?

Anyway, benchmarking different languages isn't easy, so failing at doing it well is usual... It is basically very difficult to do convincingly.
Nov 27
next sibling parent reply Neia Neutuladh <neia ikeran.org> writes:
On Monday, 27 November 2017 at 17:35:53 UTC, Ola Fosheim Grostad 
wrote:
 On Monday, 27 November 2017 at 16:44:41 UTC, Neia Neutuladh 
 wrote:
 I last used C++ professionally in 2015, and we were still 
 rolling out C++11. std::string_view is part of C++17. You're 
 calling me stupid for not having already known about it. (Yes, 
 yes, you were sufficiently indirect to have a fig leaf of 
 deniability.)
I'm not talking about you, obviously. I am talking about using languages stupidly...
You can ask your local HR representative how much better it is to say "your ideas are stupid" than "you are stupid".
 You could use GSL string_span or the array version span, or 
 write your own in 20 minutes. These are not language 
 constructs, but library constructs, so they dont speak to the 
 efficiency of the language...
Only for people who are happy to eschew the standard library, which pretty much just includes C++ users.
 C++ is very much batteries not included... Which is good for 
 low level programming.
So you're saying that having a body of well tested code that does what you want already but *might* be making performance tradeoffs that don't work for your use case, is *worse* than not having it. Well, it's a heterodox opinion to be sure.
 It is often useful to talk about real-world workloads when 
 discussing performance.
Well, in that case Java was sufficiently fast, so all languages came out the same...
You might try reading my first post. Java: 140ms to print "Hello world" D: 50ms to turn a 400kb subtex document into an epub
 Assuming that your CPU can execute 20 billion instructions per 
 second.  That means 1 billion per 50 ms, so your budget is 1 
 million instructions on 400 bytes? Doesn't that suggest that the 
 program is far from optimal or that most of the time is spent 
 on something else?
Again, you might try reading my first post. In it, I mentioned memory a lot, since allocating memory is relatively slow. I specifically called it out as the reason that the C# version of subtex was so slow. And allocating memory isn't slow simply because it requires executing a large number of instructions.
Nov 27
next sibling parent John <j t.com> writes:
On Tuesday, 28 November 2017 at 02:26:34 UTC, Neia Neutuladh 
wrote:
 On Monday, 27 November 2017 at 17:35:53 UTC, Ola Fosheim 
 Grostad wrote:
 On Monday, 27 November 2017 at 16:44:41 UTC, Neia Neutuladh 
 wrote:
 I last used C++ professionally in 2015, and we were still 
 rolling out C++11. std::string_view is part of C++17. You're 
 calling me stupid for not having already known about it. 
 (Yes, yes, you were sufficiently indirect to have a fig leaf 
 of deniability.)
I'm not talking about you, obviously. I am talking about using languages stupidly...
You can ask your local HR representative how much better it is to say "your ideas are stupid" than "you are stupid".
Could ask Linus about that, think I recall something about baby sloth dropped on its head retardation level, or something.
 C++ is very much batteries not included... Which is good for 
 low level programming.
So you're saying that having a body of well tested code that does what you want already but *might* be making performance tradeoffs that don't work for your use case, is *worse* than not having it. Well, it's a heterodox opinion to be sure.
For C++, that's Boost, and most people avoid it because of all the bloat anyway.
 It is often useful to talk about real-world workloads when 
 discussing performance.
Well, in that case Java was sufficiently fast, so all languages came out the same...
You might try reading my first post. Java: 140ms to print "Hello world" D: 50ms to turn a 400kb subtex document into an epub
Were you including startup times? Then that's not a very fair comparison; lots of applications don't just start and stop frequently. These benchmarks are all but pointless for judging performance anyway. Like when someone updated the D sort functions: they wrote an article about how much faster sorting was in D than in C++. Well no shit, you just spent a bunch of time optimizing it against some tiny, contrived test. Anyway, if you think that's a valid comparison of performance, you have no idea what's going on.
Nov 27
prev sibling next sibling parent reply bauss <jj_1337 live.dk> writes:
On Tuesday, 28 November 2017 at 02:26:34 UTC, Neia Neutuladh 
wrote:
 You might try reading my first post.

 Java: 140ms to print "Hello world"

 D: 50ms to turn a 400kb subtex document into an epub
You're not measuring what you think for the Java program. Did you measure the JVM startup and JIT initialization time and subtract it from the total execution time? Otherwise your benchmark isn't sufficient.
Nov 27
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Tuesday, 28 November 2017 at 04:52:52 UTC, bauss wrote:
 You're not measuring what you think for the Java program. Did 
 you measure the JVM startup and JIT initialization time and 
 subtract it from the total execution time? Otherwise your 
 benchmark isn't sufficient.
For small programs, startup time is fair to consider since the end user still has to deal with that too. But for larger programs, I suspect it would disappear too.
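Adam's distinction (a fixed startup cost versus cost that scales with the work) can be sketched at the shell. Here `python3` is just a stand-in for any runtime with nontrivial startup; a JVM would show the effect more strongly:

```shell
# Fixed cost only: start the runtime, do nothing, exit.
time python3 -c "pass"

# Fixed cost plus real work: as the work grows, the startup share
# of the total shrinks toward nothing.
time python3 -c "print(sum(range(10**6)))"
```

For a program that runs 50 ms total, the fixed cost is most of what the user feels; for one that runs minutes, it disappears into the noise.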
Nov 27
parent John <j t.com> writes:
On Tuesday, 28 November 2017 at 05:18:42 UTC, Adam D. Ruppe wrote:
 On Tuesday, 28 November 2017 at 04:52:52 UTC, bauss wrote:
 You're not measuring what you think for the Java program. Did 
 you measure the JVM startup and JIT initialization time and 
 subtract it from the total execution time? Otherwise your 
 benchmark isn't sufficient.
For small programs, startup time is fair to consider since the end user still has to deal with that too. But for larger programs, I suspect it would disappear too.
Not when the startup time is in milliseconds. It would matter if it were a large program taking minutes to start up, but that's not the case at all. A user will barely even be able to tell a difference of 50ms.
Nov 28
prev sibling parent Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 28 November 2017 at 02:26:34 UTC, Neia Neutuladh 
wrote:
 On Monday, 27 November 2017 at 17:35:53 UTC, Ola Fosheim 
 Grostad wrote:
 On Monday, 27 November 2017 at 16:44:41 UTC, Neia Neutuladh 
 wrote:
 I last used C++ professionally in 2015, and we were still 
 rolling out C++11. std::string_view is part of C++17. You're 
 calling me stupid for not having already known about it. 
 (Yes, yes, you were sufficiently indirect to have a fig leaf 
 of deniability.)
I'm not talking about you, obviously. I am talking about using languages stupidly...
You wrote: "std::string does the same thing. So if I reimplemented subtex naively in C++, its performance would be closer to the C# version than to the D version."

"Naively" would mean that you didn't know better or that an alternative would be complex, but later on you acknowledged that doing it with slices would be better, but that you could not be bothered. So you know better, but would rather choose to do it stupidly...

I have never said that you are stupid. What I said was the equivalent of "std::string does the same thing. So if I reimplemented subtex stupidly in C++, its performance would be closer to the C# version than to the D version." That line of reasoning is silly: I know that you know better, because you clearly stated so in the post I responded to.
 allocating memory isn't slow simply because it requires 
 executing a large number of instructions.
That's debatable...
Nov 28
prev sibling parent reply Elronnd <elronnd em.slashem.me> writes:
On Monday, 27 November 2017 at 17:35:53 UTC, Ola Fosheim Grostad 
wrote:
 C++ is very much batteries not included... Which is good for 
 low level programming.
In that case, why is libstdc++ 12MB, while libphobos2 is half the size, at 5.5MB?
Nov 27
parent Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 28 November 2017 at 06:58:58 UTC, Elronnd wrote:
 In that case, why is libstdc++ 12MB, while libphobos2 is half 
 the size, at 5.5MB?
I haven't checked; if true, then probably because it contains code that goes beyond the minimal requirements (legacy, bloat, portability, tuning, etc.). Phobos contains more application-oriented APIs than C++17 does.
Nov 28
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/26/2017 9:11 PM, Neia Neutuladh wrote:
 The culprit for the C# version's poor performance was System.String.Substring, 
 which allocates a copy of its input data. So "Hello world".Substring(0, 5) 
 creates a new char* pointing at a new memory allocation containing "Hello\0". 
 C++'s std::string does the same thing. So if I reimplemented subtex naively 
 in C++, its performance would be closer to the C# version than to the D 
 version.
 
 I could probably get slightly better performance than the D version by 
 writing a special `stringslice` struct. But that's a lot of work, and it's 
 currently just barely fast enough that I realize that I've actually run the 
 command instead of my shell inexplicably swallowing it.
0-terminated strings in C (and C++) have always been a severe performance issue for programs that deal a lot in strings, for these reasons:

1. To get a substring, a copy must be made, meaning storage must also be allocated and managed for it.

2. To do most operations on one, you need to do a strlen() or equivalent.

You can always write your own string package to deal with this, and I've written many :-( and they all failed for one reason or another, mostly because just about everything in the C/C++ ecosystem is built around 0-terminated strings.
Nov 26
prev sibling parent reply IM <3di gm.com> writes:
On Monday, 27 November 2017 at 05:11:06 UTC, Neia Neutuladh wrote:
 On Monday, 27 November 2017 at 00:14:40 UTC, IM wrote:
 - ‎It's quite clear that D was influenced a lot by Java at 
 some point, which led to borrowing (copying?) a lot of Java 
 features that may not appeal to everyone.
Have you observed a human to exist who has complained about a specific feature in D that is similar to a feature in Java? Relaying their complaints would be much more useful.
Yes, in particular: all classes inherit from `Object`, virtual methods by default, inner classes' pointers to their parent classes ... to name a few.
Nov 27
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 27.11.2017 09:01, IM wrote:
 On Monday, 27 November 2017 at 05:11:06 UTC, Neia Neutuladh wrote:
 On Monday, 27 November 2017 at 00:14:40 UTC, IM wrote:
 - ‎It's quite clear that D was influenced a lot by Java at some 
 point, which led to borrowing (copying?) a lot of Java features that 
 may not appeal to everyone.
Have you observed a human to exist who has complained about a specific feature in D that is similar to a feature in Java? Relaying their complaints would be much more useful.
Yes, in particular: all classes inherit from `Object`,
Actually, extern(C++) classes don't. (Also, Object does not. :o) )
 virtual methods by default,
This was almost changed some time ago, but doing so would break a lot of existing code.
 inner classes pointers to parent classes
I think marking inner classes static or just not nesting them works around this well enough in those cases where it is not desired.
 ... to name a few.
Another one: Object has a lazily initialized 'monitor' field.
Nov 27
prev sibling parent reply Kagamin <spam here.lot> writes:
On Monday, 27 November 2017 at 00:14:40 UTC, IM wrote:
 I could add more, but I'm tired of typing. I hope that one day 
 I will overcome my frustrations as well as D becomes a better 
 language that enables me to do what I want easily without 
 standing in my way.
Among recent native languages, only some game designer's language (I forget whose) follows this design principle; the others focus on safety to some degree.
Dec 01
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 12/1/17 11:47 AM, Kagamin wrote:

 Among recent native languages, only some game designer's language (I 
 forget whose) follows this design principle; the others focus on safety 
 to some degree.
https://en.wikipedia.org/wiki/Jonathan_Blow#JAI_language -Steve
Dec 01
parent Kagamin <spam here.lot> writes:
On Friday, 1 December 2017 at 17:05:08 UTC, Steven Schveighoffer 
wrote:
 On 12/1/17 11:47 AM, Kagamin wrote:

 Among recent native languages, only some game designer's language 
 (I forget whose) follows this design principle; the others focus 
 on safety to some degree.
https://en.wikipedia.org/wiki/Jonathan_Blow#JAI_language -Steve
Yes, that, in text: https://github.com/BSVino/JaiPrimer/blob/master/JaiPrimer.md
Dec 01