
digitalmars.D - Are there any default dmd optimizations

reply Ali Çehreli <acehreli yahoo.com> writes:
Does the compiler inline any function even without -inline?

What level of optimization is applied even without -O? One that comes to 
mind is the elision of certain struct object copies. Is such an 
optimization applied without -O? If so, are there similar other 
optimizations?

Thank you,
Ali
Feb 23 2013
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 24-Feb-2013 04:14, Ali Çehreli wrote:
 Does the compiler inline any function even without -inline?

 What level of optimization is applied even without -O? One that comes to
 mind is the elision of certain struct object copies. Is such an
 optimization applied without -O? If so, are there similar other
 optimizations?
AFAIK NRVO/RVO work without the -O switch and are performed in the front-end (for better or worse).
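A minimal sketch of the copy elision in question (assuming dmd's usual behavior; the postblit makes any copy observable, and with NRVO it is not run for the returned value even without -O):

```d
import std.stdio;

struct S
{
    int[1000] data;
    this(this) { writeln("copied"); } // postblit: prints if a copy is made
}

S make()
{
    S s;            // named return value
    s.data[0] = 42;
    return s;       // NRVO: s is constructed directly in the caller's slot
}

void main()
{
    auto s = make(); // no "copied" output expected, -O or not
    assert(s.data[0] == 42);
}
```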
 Thank you,
 Ali
-- Dmitry Olshansky
Feb 23 2013
parent Ali Çehreli <acehreli yahoo.com> writes:
On 02/23/2013 10:53 PM, Dmitry Olshansky wrote:
 On 24-Feb-2013 04:14, Ali Çehreli wrote:
 Does the compiler inline any function even without -inline?

 What level of optimization is applied even without -O? One that comes to
 mind is the elision of certain struct object copies. Is such an
 optimization applied without -O? If so, are there similar other
 optimizations?
AFAIK NRVO/RVO are working w/o -O switch and are performed in the front-end (for better or worse).
Makes sense that they are performed in the front-end, because NRVO and RVO are defined at the language level in C++ as well.

I will strongly :) assume that no function is inlined unless the -inline switch is used.

Ali
Feb 23 2013
prev sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 24 February 2013 at 00:14:14 UTC, Ali Çehreli wrote:
 Does the compiler inline any function even without -inline?

 What level of optimization is applied even without -O? One that 
 comes to mind is the elision of certain struct object copies. 
 Is such an optimization applied without -O? If so, are there 
 similar other optimizations?

 Thank you,
 Ali
I would bet on register promotion, dead read/write elimination, and the like. I don't expect inlining to be part of it.
Feb 23 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, February 24, 2013 08:58:38 deadalnix wrote:
 I would bet for register promotions, dead read/write eliminations
 and alike. I don't expect inline to be part of it.
It would actually be problematic if it were, because it would screw with debugging. - Jonathan M Davis
Feb 24 2013
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 24 February 2013 at 08:16:37 UTC, Jonathan M Davis 
wrote:
 On Sunday, February 24, 2013 08:58:38 deadalnix wrote:
 I would bet for register promotions, dead read/write 
 eliminations
 and alike. I don't expect inline to be part of it.
It would actually be problematic if it were, because it would screw with debugging. - Jonathan M Davis
Does disabling optimization imply a debug build?
Feb 24 2013
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, February 24, 2013 09:33:12 deadalnix wrote:
 On Sunday, 24 February 2013 at 08:16:37 UTC, Jonathan M Davis
 
 wrote:
 On Sunday, February 24, 2013 08:58:38 deadalnix wrote:
 I would bet for register promotions, dead read/write
 eliminations
 and alike. I don't expect inline to be part of it.
It would actually be problematic if it were, because it would screw with debugging. - Jonathan M Davis
Does disabling optimization imply debug build ?
Well, there are three aspects to what is usually meant by a debug build:

1. Debug symbols are compiled in.

2. Optimizations are not enabled.

3. Assertions _are_ enabled.

For dmd, the first one is controlled by -g and -gc, the second one is controlled by -O (and probably to some extent by -release), and the third one is controlled by -release (and to some extent -noboundscheck). But what it actually means for a build to be "debug" or "release" really isn't all that well defined. It pretty much just boils down to one of them being compiled for debugging and development purposes, whereas the other is what's used in released code. In general though, when people compile release builds, they use all of the various flags for enabling optimizations and disabling debug symbols and assertions, whereas when they compile debug builds, they disable all optimizations and enable debugging symbols and assertions.

But as for inlining, enabling it (or any other optimization) screws with debugging, so it shouldn't be used with any builds which are intended for debugging. And with the way most compilers' flags work, optimizations aren't enabled unless you ask for them, making debug builds the default.

- Jonathan M Davis
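To make the flag groupings concrete, here is a sketch of typical dmd command lines for the two kinds of build (the file name is hypothetical):

```shell
# Debug-style build: debug symbols in, no optimization,
# assertions and debug blocks enabled.
dmd -g -debug app.d

# Release-style build: optimize, inline, drop assertions and
# contracts, skip array bounds checks.
dmd -O -release -inline -noboundscheck app.d
```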
Feb 24 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-24 09:43, Jonathan M Davis wrote:

 Well, there are three aspects to what usually is meant by a debug build:

 1. Debug symbols are compiled in.

 2. Optimizations are not enabled.

 3. Assertions _are_ enabled.

 For dmd, the first one is controlled by -g and -gc, the second one is
 controlled by -O (and probably to some extent by -release), and the third one
 is controlled by -release (and to some extent -noboundscheck). But what it
 actually means for a build to be "debug" or "release" really isn't all that
 well defined. It pretty much just boils down to one of them being compiled for
 debugging and development purposes, whereas the other is what's used in
 released code. In general though, when people compile release builds, they use
 all of the various flags for enabling optimizations and disabling debug symbols
 and assertions, whereas when they compile debug builds, they disable all
 optimizations and enable debugging symbols and assertions.

 But as for inlining, enabling it (or any other optimizations) screw with
 debugging, so they shouldn't be used with any builds which are intended for
 debugging, and with the way most compilers' flags work, optimizations aren't
 enabled unless you ask for them, making debug builds the default.
Then there's the -debug flag as well. -- /Jacob Carlborg
Feb 24 2013
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, February 24, 2013 13:07:39 Jacob Carlborg wrote:
 On 2013-02-24 09:43, Jonathan M Davis wrote:
 Well, there are three aspects to what usually is meant by a debug build:
 
 1. Debug symbols are compiled in.
 
 2. Optimizations are not enabled.
 
 3. Assertions _are_ enabled.
 
 For dmd, the first one is controlled by -g and -gc, the second one is
 controlled by -O (and probably to some extent by -release), and the third
 one is controlled by -release (and to some extent -noboundscheck). But
 what it actually means for a build to be "debug" or "release" really
 isn't all that well defined. It pretty much just boils down to one of
 them being compiled for debugging and development purposes, whereas the
 other is what's used in released code. In general though, when people
 compile release builds, they use all of the various flags for enabling
 optimizations and disabling debug symbols and assertions, whereas when
 they compile debug builds, they disable all optimizations and enable
 debugging symbols and assertions.
 
 But as for inlining, enabling it (or any other optimizations) screw with
 debugging, so they shouldn't be used with any builds which are intended
 for
 debugging, and with the way most compilers' flags work, optimizations
 aren't enabled unless you ask for them, making debug builds the default.
Then there's the -debug flag as well.
Yeah, which just adds to the confusion, because all it does is enable debug blocks and isn't at all what people normally mean by "debug mode" (though it would make sense to use -debug in "debug mode"). The -debug flag is why I generally end up talking about release mode and non-release mode in D rather than release vs debug, as release mode has far more to do with -release than -debug. Heck, you can technically enable -debug with -release and full optimizations turned on. So, useful as -debug may be, its name confuses things.

- Jonathan M Davis
Feb 24 2013
prev sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/24/13, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 Yeah, which just adds the confusion, because all it does is enable debug
 bocks.
The feature almost doesn't pay its weight. I mean, technically you can use -version=Debug and then use version(Debug) blocks. All `debug` does is save a little bit of typing.
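For comparison, the two approaches look like this (a minimal sketch; `Debug` is an arbitrary version identifier chosen here, not anything built in):

```d
import std.stdio;

void main()
{
    // Compiled in only when building with -debug
    debug writeln("debug block active");

    // Compiled in only when building with -version=Debug
    version (Debug) writeln("version(Debug) block active");
}
```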
Feb 24 2013
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 02/24/2013 09:57 PM, Andrej Mitrovic wrote:
 On 2/24/13, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 Yeah, which just adds the confusion, because all it does is enable debug
 bocks.
The feature almost doesn't pay its weight. I mean technically you can use -version=Debug and then use version(Debug) blocks. All `debug` does is saves a little bit of typing.
debug blocks also disable purity checking.
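A small sketch of what that exemption permits (assuming dmd's behavior as described here: writeln is impure, yet is accepted inside a pure function when wrapped in a debug block, and runs when compiled with -debug):

```d
import std.stdio;

int square(int x) pure
{
    // An impure call would normally be an error in a pure function,
    // but code inside a debug block is exempt from purity checking.
    debug writeln("computing square of ", x);
    return x * x;
}

void main()
{
    assert(square(3) == 9);
}
```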
Feb 24 2013
parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/24/13, Timon Gehr <timon.gehr gmx.ch> wrote:
 debug blocks also disable purity checking.
Ah good point.
Feb 24 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/24/2013 12:57 PM, Andrej Mitrovic wrote:
 On 2/24/13, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 Yeah, which just adds the confusion, because all it does is enable debug
 bocks.
The feature almost doesn't pay its weight. I mean technically you can use -version=Debug and then use version(Debug) blocks. All `debug` does is saves a little bit of typing.
I should explain the reasoning for this. I've talked to many C/C++ programming managers. They lament that every C/C++ coding group feels compelled to reinvent their own debug macro scheme. This makes it pointlessly difficult to share code between groups. It's not unlike how pre-C++98 code bases all had their own "string" class.

By baking one scheme into the language, people will rarely feel a need to reinvent the wheel, and will go on to more productive uses of their time.
Feb 24 2013
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, February 24, 2013 14:28:42 Walter Bright wrote:
 On 2/24/2013 12:57 PM, Andrej Mitrovic wrote:
 On 2/24/13, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 Yeah, which just adds the confusion, because all it does is enable debug
 bocks.
The feature almost doesn't pay its weight. I mean technically you can use -version=Debug and then use version(Debug) blocks. All `debug` does is saves a little bit of typing.
I should explain the reasoning for this. I've talked to many C/C++ programming managers. They lament that every C/C++ coding group feels compelled to reinvent their own debug macro scheme. This makes it pointlessly difficult to share code between groups. It's not that unlike how pre-C++98 code bases all had their own "string" class. By baking one scheme into the language, people will rarely feel a need to reinvent the wheel, and will go on to more productive uses of their time.
I don't disagree with any of this. It's just that the name of the flag (-debug) is unfortunate due to the confusion it helps engender with regards to the difference between "release" and "debug" builds. I don't know what else the flag could have reasonably been called though. Alternatively, version(Debug) could have been used instead (complete with whatever special capabilities debug statements currently grant - e.g. skipping purity checks), but that also might make the feature just enough more obscure that it wouldn't be used as much.

So, I don't know that we have a good solution to the problem or that there's anything obvious that we could have done differently were we to have known better when the feature was first introduced, but there _are_ some downsides to the current scheme.

- Jonathan M Davis
Feb 24 2013
prev sibling parent reply "foobar" <foo bar.com> writes:
On Sunday, 24 February 2013 at 22:28:46 UTC, Walter Bright wrote:
 On 2/24/2013 12:57 PM, Andrej Mitrovic wrote:
 On 2/24/13, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 Yeah, which just adds the confusion, because all it does is 
 enable debug
 bocks.
The feature almost doesn't pay its weight. I mean technically you can use -version=Debug and then use version(Debug) blocks. All `debug` does is saves a little bit of typing.
I should explain the reasoning for this. I've talked to many C/C++ programming managers. They lament that every C/C++ coding group feels compelled to reinvent their own debug macro scheme. This makes it pointlessly difficult to share code between groups. It's not that unlike how pre-C++98 code bases all had their own "string" class. By baking one scheme into the language, people will rarely feel a need to reinvent the wheel, and will go on to more productive uses of their time.
This is a fallacy caused by the "culture" of C++ programmers - there is exactly *zero* benefit in baking this into the language.

Yes, I agree with the sentiment that there should be a standard way to save programmers the hassle and all that. The correct solution to that is a culture of cultivating standard conventions and "convention over configuration". E.g. Java has many such conventions followed to a level of religious zeal, such as using camelCase everywhere and PascalCase for types, etc, etc. None of which is _enforced by the language_.

On the other hand, many major C++ libraries re-invent "string" even though std::string already exists, and there are even libraries that advocate _avoiding_ the use of the STL entirely, all for the perceived benefit of efficiency, which is a prime example of premature optimization. Even if there is an efficiency gain in a specific implementation, ideally it should have been used to improve the standard std::string, but trying to change anything in the C++ standard is futile - you can't just send a pull request, you need to pass boatloads of red tape and wait a decade or two for the next version, thus causing this major NIH attitude.

All of this is to say that instead of trying to "fix" the C++ culture in D, we should try to create a *better* D culture. When you're buying an airplane ticket, what do you say to the travel agent? The human reflex of "I don't want to go to ___" doesn't get you a ticket anywhere. This feature is analogous - it's designed to disallow C++ misbehavior, instead of actually thinking about what we do want and how best to achieve that. In fact there are many such "not C++" features in D, which is why I find other languages such as Rust a *much* better design; Rust evolves much faster because it is designed in terms of what we want to achieve and how best to implement that.
Feb 25 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2013 2:00 PM, foobar wrote:
 On Sunday, 24 February 2013 at 22:28:46 UTC, Walter Bright wrote:
 By baking one scheme into the language, people will rarely feel a need to
 reinvent the wheel, and will go on to more productive uses of their time.
This is a fallacy caused by the "culture" of c++ programmers - there is exactly *zero* benefit in baking this into the language.
On the contrary, I think it has turned out rather well. Another success story of baking certain things into the language is Ddoc. Unittest is a third. They've been big wins for D.

None of those strictly has to be in the language - they can be done by convention and 3rd party tools. Nevertheless, convenience, standardization and mechanical enforcement of a convention seem to work better than applying religious zeal to enforce a convention.
 All of this is to say, that instead of trying to "fix" the c++ culture in D, we
 should try to create a *better* D culture.
We do have a significantly better D culture than the C++ one. For example, C++ relies heavily and unapologetically on convention for writing correct, robust code. D eschews that, and instead is very biased towards mechanical verification.
  In fact there are many such "not c++"
 features in D and which is why I find other languages such as rust a *much*
 better design and it evolves much faster because it is designed in terms of -
 what we want to achieve, how best to implement that.
How does rust handle this particular issue?
Feb 25 2013
parent reply "foobar" <foo bar.com> writes:
On Monday, 25 February 2013 at 22:26:33 UTC, Walter Bright wrote:
 On 2/25/2013 2:00 PM, foobar wrote:
 On Sunday, 24 February 2013 at 22:28:46 UTC, Walter Bright 
 wrote:
 By baking one scheme into the language, people will rarely 
 feel a need to
 reinvent the wheel, and will go on to more productive uses of 
 their time.
This is a fallacy caused by the "culture" of c++ programmers - there is exactly *zero* benefit in baking this into the language.
On the contrary, I think it has turned out rather well. Another success story of baking certain things into the language is Ddoc. Unittest is a third. They've been big wins for D. None of those strictly has to be in the language - they can be done by convention and 3rd party tools. Nevertheless, convenience, standardization and mechanical enforcement of a convention seem to work better than applying religious zeal to enforce a convention.
DDoc isn't part of the language but rather part of the compiler; nevertheless it has its downsides. Being part of the compiler means that the compiler needs to be changed to address those, and it isn't even written in D! The end result is all sorts of additional auxiliary D utilities to post-process its output in order to address some of those issues. Hence, a prime example of the failure that I'm talking about.

unittest is worse: it is indeed part of the language, so now the _language grammar_ needs to be changed to fix the problems with it, such as not having test names. A far better solution, practiced in all other major languages, is to use annotations, and in fact there probably already are similar D frameworks, thus exhibiting the same problem of multiple conflicting implementations you wished to avoid.

Additional such problems: the AA issue, which has been going on for years now, and the endless discussions regarding tuples. It seems that D strives to bloat the language with needless features that really should have been standardized in the library, and on the other hand tries to put in the library things that really ought to be built into the language to benefit from proper integration and syntax.

The latest case was the huge properties debate and its offshoots regarding ref semantics, which I didn't even bother to participate in. Bartosz developed an ownership system for D to address all the safety issues raised by ref *years ago*, and it was rejected due to complexity. Now Andrei tries to achieve similar safety guarantees by giving ref the semantics of borrowed pointers. It all seems to me like trying to build an airplane without wings because they are too complex. Rust, on the other hand, has already integrated an ownership system and is already far ahead of D's design. D talked about macros *years ago*, and Rust has already implemented them.
 All of this is to say, that instead of trying to "fix" the c++ 
 culture in D, we
 should try to create a *better* D culture.
We do have a significantly better D culture than the C++ one. For example, C++ relies heavily and unapologetically on convention for writing correct, robust code. D eschews that, and instead is very biased towards mechanical verification.
I call bullshit. This is a half-hearted intention at best. @safe has holes in it, integers have no overflow checks, ref also has holes, and not only does D have null pointer bugs, they also cause segfaults.
 In fact there are many such "not c++"
 features in D and which is why I find other languages such as 
 rust a *much*
 better design and it evolves much faster because it is 
 designed in terms of -
 what we want to achieve, how best to implement that.
How does rust handle this particular issue?
Feb 25 2013
next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 26 February 2013 at 07:56:05 UTC, foobar wrote:
 The latest case was the huge properties debate and its 
 offshoots regarding ref semantics which I didn't even bother 
 participate in. Bartosz developed an ownership system for D to 
 address all the safety issues raised by ref *years ago* and it 
 was rejected due to complexity. Now, Andrei tries to achieve 
 similar safety guaranties by giving ref the semantics of 
 borrowed pointers. It all seems to me like trying to build an 
 airplane without wings cause they are too complex. Rust on the 
 other hand already integrated an ownership system and is 
 already far ahead of D's design. D had talked about macros 
 *years ago* and rust already implemented them.
Do you have a link to that? I know that he did, but I never could find the proposal itself.
Feb 26 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-26 08:56, foobar wrote:

 DDoc isn't part of the language but rather part of the compiler,
 nevertheless it has its downsides. Being part of the compiler means that
 the compiler needs to be changed to address those and it isn't even
 written in D! The end result is all sort of additional auxiliary D
 utilities to post-process this in order to address some of those issues.
 Hence, A prime example of the failure that I'm talking about.
Having to use the JSON output to generate documentation just shows that DDoc is lacking.
 unittest is worse, it is indeed part of the language so now the
 _language grammar_ needs to be changed to fix the problems with it, such
 as not having test names. A far better solution practiced in all other
 major languages is to use annotations and in fact, there probably
 already are similar D frameworks, thus exhibiting the same problem of
 multiple conflicting implementations you wished to avoid.
There are already several testing frameworks for D. The only thing I think the "unittest" keyword is useful for is creating a block in which to put the code. My framework will handle the rest.
 Additional such problems - the AA issue which has been going own for
 years now. The endless discussions regarding tuples.
 It seems that D strives to bloat the language with needless features
 that really should have been standardized in the library and on the
 other hand tries to put in the library things that really ought to be
 built into the language to benefit from proper integration and syntax.

 The latest case was the huge properties debate and its offshoots
 regarding ref semantics which I didn't even bother participate in.
 Bartosz developed an ownership system for D to address all the safety
 issues raised by ref *years ago* and it was rejected due to complexity.
 Now, Andrei tries to achieve similar safety guaranties by giving ref the
 semantics of borrowed pointers. It all seems to me like trying to build
 an airplane without wings cause they are too complex. Rust on the other
 hand already integrated an ownership system and is already far ahead of
 D's design. D had talked about macros *years ago* and rust already
 implemented them.
I agree in general. -- /Jacob Carlborg
Feb 26 2013
prev sibling next sibling parent reply "Dicebot" <m.strashun gmail.com> writes:
I agree with all of it, but one small comment on unit tests: the current 
approach makes it really easy to start adding tests to projects 
that do not have them, and this is huge. So having "unittest" 
blocks themselves is really a successful feature. Tightly coupling 
the handling of these blocks to the compiler is an issue, though.
Feb 26 2013
next sibling parent reply "foobar" <foo bar.com> writes:
On Tuesday, 26 February 2013 at 10:59:41 UTC, Dicebot wrote:
 I agree with all but small comment on unit tests : current 
 approach makes it really easy to start adding tests for 
 projects that do not have them and this is huge. So having 
 "unittest" blocks themselves is really a success feature. 
 Tightly coupling handling of this blocks to compiler is an 
 issue though.
Again, this is a completely superfluous feature. D already has annotations (it took only several years to convince Walter to add them), which are more flexible and much better suited for this:

     @unittest // <- this is a unit-test function
     void mySuperDuperTestFunction(...);

There is no benefit in having all those special-case features in the language, which have all sorts of integration issues, while denying the usefulness of a more general solution that is generally accepted in the programming world, successfully used in many languages, and thus also familiar to programmers coming from those languages.
Feb 26 2013
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/26/13 3:51 PM, foobar wrote:
 On Tuesday, 26 February 2013 at 10:59:41 UTC, Dicebot wrote:
 I agree with all but small comment on unit tests : current approach
 makes it really easy to start adding tests for projects that do not
 have them and this is huge. So having "unittest" blocks themselves is
 really a success feature. Tightly coupling handling of this blocks to
 compiler is an issue though.
Again, this is a completely superfluous feature. D already has annotations (took only several years to convince Walter to add them) which are more flexible and much better suited for this. unittest // <- this is a unit-test function void mySuperDuperTestFunction(...); There is no benefit in having all those special case features in the language which have all sorts of integration issues yet deny the usefulness of a more general solution generally accepted in the programming world, successfully used in many languages and thus also familiar to programmers coming from those languages.
Agreed, but it does happen often that a language feature is later superseded by a generalization thereof. Andrei
Feb 26 2013
next sibling parent "foobar" <foo bar.com> writes:
On Tuesday, 26 February 2013 at 23:37:57 UTC, Andrei Alexandrescu 
wrote:
<snip>
 Agreed, but it does happen often that a language feature is 
 later superseded by a generalization thereof.

 Andrei
I agree that this does happen. The question is how the language evolves with that in mind. Do we choose to preserve *all* previous semantics? Do we have a proper deprecation process? Etc, etc.

I think that, unfortunately, the D design process is not adequate as of yet - there is no proper deprecation process, which causes things like a "sudden death" for previously valid syntax (the alias syntax) instead of gradually deprecating it with proper announcements *everywhere* (website, NG, ..) on the one hand, and keeping redundant special cases in the language and refusing to change semantics (AAs) to allow for an easier path forward on the other hand.

So I think we agree on the spirit of things, but the problems I'm voicing are about the details, or lack thereof. Where are the guidelines describing all these issues? How do we address required semantic changes in the language? How do we address syntax changes? How do we remove previously failed features or ones that are no longer needed? What constitutes a feature that should be completely removed, and what should only be put on permanent deprecation status? How do all those things interact?

If we still (after a decade!) decide these based on C++ common wisdom only, then I think something is really broken here. Previously valid D syntax is easily broken without much regard, yet we are still stuck on comma backwards compatibility with C. We ignore all the other languages that programmers migrate to D from, and ignore their common wisdom and experience. Isn't it time for D to become its own language instead of a leech on C++ idioms?
Feb 27 2013
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-27 00:37, Andrei Alexandrescu wrote:

 Agreed, but it does happen often that a language feature is later
 superseded by a generalization thereof.
In this case it would be two features:

1. Allow to run arbitrary code at top level

2. Allow to pass a delegate to a function after the parameter list

     void unittest (void delegate () dg)

     unittest { assert(true); }

Would be lowered to:

     unittest({ assert(true); });

Then we also can easily support named unit tests:

     void unittest (string name, void delegate () dg)

     unittest("foo") { assert(true); }

Would be lowered to:

     unittest("foo", { assert(true); });

I think it would be nice if D could get better at declarative programming.

-- /Jacob Carlborg
Feb 27 2013
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 27 Feb 2013 11:42:53 +0100
Jacob Carlborg <doob me.com> wrote:

 On 2013-02-27 00:37, Andrei Alexandrescu wrote:
 
 Agreed, but it does happen often that a language feature is later
 superseded by a generalization thereof.
In this case it would be two features:

 1. Allow to run arbitrary code at top level

 2. Allow to pass a delegate to a function after the parameter list

      void unittest (void delegate () dg)

      unittest { assert(true); }

 Would be lowered to:

      unittest({ assert(true); });

 Then we also can easily support named unit tests:

      void unittest (string name, void delegate () dg)

      unittest("foo") { assert(true); }

 Would be lowered to:

      unittest("foo", { assert(true); });

 I think it would be nice if D could get better at declarative programming.
I like that, but "run arbitrary code at top level" may be a bit of a problem because it conflicts with allowing forward references. Ie, for example: void foo() { bar(); } void bar() { i = 3; } int i; vs: void main() { void foo() { bar(); } void bar() { i = 3; } int i; } The first one works, but the second doesn't. And my understanding is that the second one not working is a deliberate thing related to not being in a declaration-only context.
Feb 27 2013
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2013 2:55 PM, Nick Sabalausky wrote:
 I like that, but "run arbitrary code at top level" may be a bit of a
 problem because it conflicts with allowing forward references.

 Ie, for example:

      void foo() { bar(); }
      void bar() { i = 3; }
      int i;

 vs:

      void main() {
          void foo() { bar(); }
          void bar() { i = 3; }
          int i;
      }


 The first one works, but the second doesn't. And my understanding is
 that the second one not working is a deliberate thing related to
 not being in a declaration-only context.
It's a little more than that. People have a natural view of function bodies as executing top down. Functions also tend to be simple enough that this makes sense. People have a natural view outside functions as everything happening in parallel.
Feb 27 2013
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 27 February 2013 at 23:34:42 UTC, Walter Bright 
wrote:
 On 2/27/2013 2:55 PM, Nick Sabalausky wrote:
 I like that, but "run arbitrary code at top level" may be a 
 bit of a
 problem because it conflicts with allowing forward references.

 Ie, for example:

     void foo() { bar(); }
     void bar() { i = 3; }
     int i;

 vs:

     void main() {
         void foo() { bar(); }
         void bar() { i = 3; }
         int i;
     }


 The first one works, but the second doesn't. And my 
 understanding is
 that the second one not working is a deliberate thing related 
 to
 not being in a declaration-only context.
It's a little more than that. People have a natural view of function bodies as executing top down. Functions also tend to be simple enough that this makes sense. People have a natural view outside functions as everything happening in parallel.
Plus, it is really hard to ensure that everything is initialized properly without eagerly setting everything to .init.
Feb 27 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2013 8:01 PM, deadalnix wrote:
 Plus, this is really hard to ensure that everything is initialized properly
 without going eager with setting to init.
Yeah, the forward reference order of evaluation thingie.
Feb 27 2013
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 02/28/2013 05:55 AM, Walter Bright wrote:
 On 2/27/2013 8:01 PM, deadalnix wrote:
 Plus, this is really hard to ensure that everything is initialized
 properly
 without going eager with setting to init.
Yeah, the forward reference order of evaluation thingie.
It's really easy. DMD can be convinced to do a sufficiently conservative analysis even now, but the behaviour is undocumented and seems unintentional.

     void main() {
         void foo()() { bar(); }
         void bar()() { i = 3; }
         int i;
         foo();
     }

It's a fine idea to disallow the use of both a shadowed and the shadowing symbol in the same function anyway, so we wouldn't lose a lot by allowing forward references from local functions. Then the template behaviour could be fixed:

     int i = 0;
     void main(){
         void foo(int x){ i = x; }
         foo(1);
         int i = 0;
         foo(2);
         assert(.i==2 && i==0);
     }

     int i = 0;
     void main(){
         void foo(int x){ i = x; }
         // foo(1);
         int i = 0;
         foo(2);
         assert(.i==0 && i==2);
     }

(Currently, the presence of one call can magically influence the behaviour of other calls.)
Feb 28 2013
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/28/13, Timon Gehr <timon.gehr gmx.ch> wrote:
 It's really easy. DMD can be convinced to do a sufficient conservative
 analysis even now, but the behaviour is undocumented and seems
 unintentional.


      void main() {
          void foo()() { bar(); }
          void bar()() { i = 3; }
          int i;
          foo();
      }
You can even make them non-templates if they're part of a mixin template.

     mixin template T()
     {
         void foo() { bar(); }
         void bar() { i = 3; }
         int i;
     }

     void main()
     {
         mixin T!();
         foo();
     }
Feb 28 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/28/2013 11:03 AM, Timon Gehr wrote:
 It's really easy. DMD can be convinced to do a sufficient conservative analysis
 even now, but the behaviour is undocumented and seems unintentional.


      void main() {
          void foo()() { bar(); }
          void bar()() { i = 3; }
          int i;
          foo();
      }
No, the compiler is not being clever here, nor is this undocumented. Templates are not semantically analyzed until they are instantiated. In fact, they can't be semantically analyzed until they are instantiated.
Feb 28 2013
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03/01/2013 12:01 AM, Walter Bright wrote:
 On 2/28/2013 11:03 AM, Timon Gehr wrote:
 It's really easy. DMD can be convinced to do a sufficient[ly] conservative
 analysis
 even now, but the behaviour is undocumented and seems unintentional.


      void main() {
          void foo()() { bar(); }
          void bar()() { i = 3; }
          int i;
          foo();
      }
No, the compiler is not being clever here,
I wasn't stating that.
 nor is this undocumented.
Where is it documented?
 Templates are not semantically analyzed until they are instantiated.  In
 fact, they can't be semantically analyzed until they are instantiated.
Obviously. In the above code all templates are instantiated. This is about which state of the function local symbol table is used.
Feb 28 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/28/2013 3:07 PM, Timon Gehr wrote:
 On 03/01/2013 12:01 AM, Walter Bright wrote:
 On 2/28/2013 11:03 AM, Timon Gehr wrote:
 It's really easy. DMD can be convinced to do a sufficient[ly] conservative
 analysis
 even now, but the behaviour is undocumented and seems unintentional.


      void main() {
          void foo()() { bar(); }
          void bar()() { i = 3; }
          int i;
          foo();
      }
No, the compiler is not being clever here,
I wasn't stating that.
 nor is this undocumented.
Where is it documented?
Under templates, it says that templates are instantiated in the scope of the corresponding template declaration. At the time of the call foo(), bar has been added to the scope.
 Obviously. In the above code all templates are instantiated. This is about
which
 state of the function local symbol table is used.
C++ has the notion of point of instantiation and point of definition; with D, it's about scope of instantiation and scope of definition instead.
Feb 28 2013
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-27 23:55, Nick Sabalausky wrote:

 I like that, but "run arbitrary code at top level" may be a bit of a
 problem because it conflicts with allowing forward references.
It would basically be an implicit "static this" declaration. I guess having to explicitly wrap it in a "static this" would be acceptable too. -- /Jacob Carlborg
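A minimal sketch of the explicit form Jacob mentions, where top-level "arbitrary code" is wrapped in a module constructor today:

```d
// Top-level code wrapped explicitly in a module constructor.
// An implicit version would simply let the assignment appear
// at module scope without the "static this" wrapper.
int i;

static this()
{
    i = 3; // runs once per thread, before main()
}

void main()
{
    assert(i == 3);
}
```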
Feb 28 2013
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-26 21:51, foobar wrote:

 Again, this is a completely superfluous feature. D already has
 annotations (took only several years to convince Walter to add them)
 which are more flexible and much better suited for this.

 @unittest // <- this is a unit-test function
 void mySuperDuperTestFunction(...);

 There is no benefit in having all those special case features in the
 language which have all sorts of integration issues yet deny the
 usefulness of a more general solution generally accepted in the
 programming world, successfully used in many languages and thus also
 familiar to programmers coming from those languages.
I like that I don't have to create a function for the tests. I can run arbitrary code in the unit test blocks. Otherwise I agree with you. -- /Jacob Carlborg
Feb 27 2013
prev sibling parent 1100110 <0b1100110 gmail.com> writes:
On 02/26/2013 04:59 AM, Dicebot wrote:
 I agree with all but small comment on unit tests : current approach
 makes it really easy to start adding tests for projects that do not have
 them and this is huge. So having "unittest" blocks themselves is really
 a success feature. Tightly coupling handling of this blocks to compiler
 is an issue though.
Couldn't have said it better myself.
Feb 26 2013
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/26/13 2:56 AM, foobar wrote:
 DDoc isn't part of the language but rather part of the compiler,
 nevertheless it has its downsides. Being part of the compiler means that
 the compiler needs to be changed to address those and it isn't even
 written in D! The end result is all sort of additional auxiliary D
 utilities to post-process this in order to address some of those issues.
 Hence, A prime example of the failure that I'm talking about.

 unittest is worse, it is indeed part of the language so now the
 _language grammar_ needs to be changed to fix the problems with it, such
 as not having test names. A far better solution practiced in all other
 major languages is to use annotations and in fact, there probably
 already are similar D frameworks, thus exhibiting the same problem of
 multiple conflicting implementations you wished to avoid.
I think this is unnecessarily negative because as far as I can tell ddoc and unittest are features that people use and appreciate. BTW no need to change the grammar for improving unittests, only an API is sufficient - consider e.g.:

     @("name") unittest { ... }

or simply:

     unittest { testName("name"); ... }

One can find negatives of most any feature.
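A rough sketch of the API-only variant; testName and the module-level state it writes are hypothetical, not an existing Phobos API:

```d
// Hypothetical helper: records the current test's name so a
// custom runner or assertion handler could report it on failure.
string currentTestName;

void testName(string name)
{
    currentTestName = name;
}

unittest
{
    testName("slicing keeps length");
    int[] a = [1, 2, 3];
    assert(a[0 .. 2].length == 2);
}
```

No grammar change is needed: the name is just ordinary code executed at the top of the unittest block.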
 Additional such problems - the AA issue which has been going on for
 years now. The endless discussions regarding tuples.
There are plenty of cases in which discussions have been concluded with successful improvements. We can't tell people what to work on, so some are longer than others.
 It seems that D strives to bloat the language with needless features
 that really should have been standardized in the library and on the
 other hand tries to put in the library things that really ought to be
 built into the language to benefit from proper integration and syntax.
If I wanted to clarify how subjective that is, I couldn't have written it better :o).
 The latest case was the huge properties debate and its offshoots
 regarding ref semantics which I didn't even bother participate in.
 Bartosz developed an ownership system for D to address all the safety
 issues raised by ref *years ago* and it was rejected due to complexity.
That was for threads.
 Now, Andrei tries to achieve similar safety guaranties by giving ref the
 semantics of borrowed pointers. It all seems to me like trying to build
 an airplane without wings cause they are too complex. Rust on the other
 hand already integrated an ownership system and is already far ahead of
 D's design. D had talked about macros *years ago* and rust already
 implemented them.
I think Rust has gone a bit too far with with complexity of the type system. There's only this much karma one can spend for such. In my opinion we're in better shape than Rust in that particular regard.
 We do have a significantly better D culture than the C++ one. For
 example, C++ relies heavily and unapologetically on convention for
 writing correct, robust code. D eschews that, and instead is very
 biased towards mechanical verification.
I call bullshit. This is a half-hearted intention at best. @safe has holes in it, integers have no overflow checks, ref also has holes. Not only does D have null pointer bugs, they also cause segfaults.
Last one is different. Anyhow, it is a certainty we regard e.g. @safe holes as problems that need fixing, which is a significant cultural difference. At any rate, I agree one ought to be suspicious whenever someone claims "Language X's culture is better than language Y's culture." Andrei
Feb 26 2013
prev sibling next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/26/13, foobar <foo bar.com> wrote:
 Rust on the other hand already
 integrated an ownership system and is already far ahead of D's
 design.
Far ahead? It allows things like local variables shadowing earlier declarations:

     let monster_size = monster_factor * 10.0;
     ...
     let monster_size: int = 50;

That's straight from the tutorial. When has anyone thought to themselves, "I need a new variable to store some result to, but damn, I wish I could use an existing name but use it to store a completely different type"? That is an incentive to write unreadable code.

And then there are things like this: "Note that, if applied to an integer value, ! flips all the bits (like ~ in C)." So (!2 == true)?

There are plenty of complaints for both languages (it's only natural), but saying that Rust is somehow far ahead, I don't buy that one bit. It's all too easy to cherry-pick features which are in one language and not in the other.
Feb 26 2013
parent reply "foobar" <foo bar.com> writes:
On Tuesday, 26 February 2013 at 18:41:24 UTC, Andrej Mitrovic 
wrote:
 On 2/26/13, foobar <foo bar.com> wrote:
 Rust on the other hand already
 integrated an ownership system and is already far ahead of D's
 design.
Far ahead? It allows things like local variables shadowing earlier declarations: let monster_size = monster_factor * 10.0; ... let monster_size: int = 50; That's straight from the tutorial. When has anyone thought to themselves "I need a new variable to store some result to, but damn, I wish I could use an existing name but use it to store a completely different type". That is an incentive to write unreadable code. And then there are things like this: "Note that, if applied to an integer value, ! flips all the bits (like ~ in C)." So (!2 == true)? There are plenty of complaints for both languages (it's only natural), but saying that Rust is somehow far ahead, I don't buy that one bit. It's all too easy to cherrypick features which are in one language and not in the other.
I don't get what fault you find in the binary NOT operation. Regardless, my post wasn't really about specific features (Walter actually mentioned those), but rather about the general design philosophy of D, which I find lacking. Yes, it is obviously true that Rust has its own share of faults; the difference is what *principles* they use to address those faults and evolve the language. Rust is written in Rust, thus the developers themselves feel all the shortcomings; they also listen to their users and they strive to find the best way to express the semantics they want in the simplest yet most readable syntax possible. They think positively and build on their vision, whereas D thinks negatively based on C++'s vision. D exists for more than a decade and all it provides is slightly less hackish C++.

At first, I dismissed Rust for having poor syntax ("ret", really? Are we back to assembly?) but lo and behold, in a very short time they considerably improved the syntax. D still argues about the exact same issues from several years ago, as if it's stuck in a time loop. This to me shows a lack of direction. I expected things to improve lately with all those newly minted release process discussions and such, but alas the attitude hasn't shifted at all.
Feb 26 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/26/13 4:23 PM, foobar wrote:
 I don't get what fault you find in the binary NOT operation.
 Regardless, my post wasn't really about specific features, (Walter
 actually mentioned those) but rather about the general design philosophy
 of D, which I find lacking. Yes, it is obviously true that Rust has its
 own share of faults; the difference is what *principles* they use to
 address those faults and evolve the language.
 Rust is written in Rust, thus the developers themselves feel all the
 shortcomings, they also listen to their users and they strive to find
 the best way to express the semantics they want in the possibly simplest
 yet readable syntax possible. They think positive and build on their
 vision, whereas D thinks negatively based on C++'s vision.
 D exists for more than a decade and all it provides is slightly less
 hackish C++.
 At first, I dismissed Rust for having poor syntax ("ret" really? Are we
 back to assembly?) but lo and behold, in a very short time they
 considerably improved the syntax. D still argues about the exact same
 issues from several years ago, as if it's stuck in a time loop. This to
 me shows a lack of direction. I expected thing to improve lately with
 all those newly minted release process discussions and such, but alas
 the attitude hasn't shifted at all.
I understand how you see it, and honestly could see it coming from a mile away. When a post (a) cherry-picks all negatives and (b) has a final tone that mentions no possible solution - it's a foregone conclusion that no amount of explaining, amending, arguing, etc. will improve the poster's outlook.

I could say e.g. "Well I think things have changed, and look we've turned the bug trend curve (http://goo.gl/kf4ZC) which is unprecedented, fixed some incomplete features, break new records on bug fixes with each release, and have a big conference coming." - to which I have no doubt it's possible to concoct a negative answer.

So I understand you have a negative outlook on D. That's entirely fine, as is mentioning it on the forum. The only thing I'd like you to understand and appreciate is that we who work on D are doing our best to find solutions to the various problems in front of us, and in quite a literal sense we don't know how to do any better. The constructive thing I'm getting out of this is that we could use some more radicalization - try things that push stronger against our comfort zone. I have a few in mind but it's too early to discuss them publicly.

Andrei
Feb 26 2013
parent reply "foobar" <foo bar.com> writes:
On Wednesday, 27 February 2013 at 00:01:31 UTC, Andrei 
Alexandrescu wrote:
 I understand how you see it, and honestly could see it from a 
 mile. When a post (a) cherry-picks all negatives and (b) has a 
 final tone that mentions no possible solution - it's a foregone 
 conclusion that no amount of explaining, amending, arguing, 
 etc. will improve the poster's outlook.

 I could say e.g. "Well I think things have changed, and look 
 we've turned the bug trend curve (http://goo.gl/kf4ZC) which is 
 unprecedented, fixed some incomplete features, break new 
 records on bug fixes with each release, and have a big 
 conference coming." - to which I have no doubt it's possible to 
 concoct a negative answer.

 So I understand you have a negative outlook on D. That's 
 entirely fine, as is mentioning it on the forum. The only thing 
 I'd like you to understand and appreciate is that we who work 
 on D are doing our best to find solutions to the various 
 problems in front of us, and in quite a literal sense we don't 
 know how to do any better. The constructive thing I'm getting 
 out of this is that we could use some more radicalization - try 
 things that push stronger against our comfort zone. I have a 
 few in mind but it's too early to discuss them publicly.


 Andrei
Let me emphasize again, I did *not* intend to discuss the specific features; Walter brought that topic up. I intended to point out the lack of good general guidelines for the D design process. And I did actually mention one (partial) solution. I mean no disrespect to the hard work of the contributors and did not wish to discourage them, just to prevent wasted effort due to lack of proper guidelines.

The other criticism I have is exactly the last paragraph above. Rust is designed in the open, so I can read the weekly minutes and get the bigger picture of the design process: what are the different proposed alternatives, what are the considerations and trade-offs, etc. In D, on the other hand, it's all closed. D claims that it is an open source project, but all the major design decisions happen personally between you and Walter, and this is worse than big-company languages that at least publish some articles online.
Feb 27 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/27/13 5:01 AM, foobar wrote:
 Let me emphasize again, I did *not* intend to discuss the specific
 features, Walter brought that topic up. I intended to point out the lack
 of good general guidelines for the D design process. And i did actually
 mention one (partial) solution. I mean no disrespect to the hard work of
 the contributers and did not wish to discourage them, just to prevent
 wasted effort due to lack of proper guidelines.
I agree we should have a more organized process.
 The other criticism I have is exactly the last paragraph above. Rust is
 designed in the open and so I can read the weekly minutes and get the
 bigger picture of the design process, what are the different proposed
 alternatives, what are the considerations and trade-offs, etc.
 In D on
 the other hand, it's all closed. D claims that it is an open source
 project but all the major design decisions happen personally between you
 and Walter and this is worse than big company languages that at least
 publish some articles online.
Four years ago I would've entirely agreed. But right now it's an odd comment to make seeing as we're discussing all major decisions in this group and we're switching full-bore to DIPs. Andrei
Feb 27 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-27 14:29, Andrei Alexandrescu wrote:

 Four years ago I would've entirely agreed. But right now it's an odd
 comment to make seeing as we're discussing all major decisions in this
 group and we're switching full-bore to DIPs.
The "alias this" syntax that foobar mentioned was removed under the radar and only discussed in a pull request. -- /Jacob Carlborg
Feb 27 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 2/27/13 3:16 PM, Jacob Carlborg wrote:
 On 2013-02-27 14:29, Andrei Alexandrescu wrote:

 Four years ago I would've entirely agreed. But right now it's an odd
 comment to make seeing as we're discussing all major decisions in this
 group and we're switching full-bore to DIPs.
 The "alias this" syntax that foobar mentioned was removed under the radar and only discussed in a pull request.
I agree we were sloppy on that. Kenji was feeling strongly about it, and Walter and I didn't have particular objections, so we gave him the green light. In the process we neglected backward compatibility. Andrei
Feb 27 2013
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-27 22:57, Andrei Alexandrescu wrote:

 I agree we were sloppy on that. Kenji was feeling strong about and
 Walter and I didn't have particular objections, so we gave him green
 light. In the process we neglected backward compatibility.
It's at least easy to throw up a new thread here to let the rest of us know, even if the decision already has been made. -- /Jacob Carlborg
Feb 28 2013
prev sibling parent reply "foobar" <foo bar.com> writes:
On Wednesday, 27 February 2013 at 21:57:06 UTC, Andrei 
Alexandrescu wrote:
 On 2/27/13 3:16 PM, Jacob Carlborg wrote:
 On 2013-02-27 14:29, Andrei Alexandrescu wrote:

 Four years ago I would've entirely agreed. But right now it's 
 an odd
 comment to make seeing as we're discussing all major 
 decisions in this
 group and we're switching full-bore to DIPs.
The "alias this" syntax the foobar mentioned was removed under the radar and only discussed in a pull request.
I agree we were sloppy on that. Kenji was feeling strong about and Walter and I didn't have particular objections, so we gave him green light. In the process we neglected backward compatibility. Andrei
At a bare minimum, there should be a well-defined place to notify *users* (NOT DMD contributors) about language changes *ahead of time*. If such a place is not defined, then people do not know where to look for that info, if it is even available at all. This at least will remove the element of surprise.

For D to be really open, as it claims to be, there should be additional guidelines about the decision-making process itself. These should be made accessible to *users* (D programmers). I need not be a core DMD contributor, nor should I need to know C++ or DMD's internals, nor should I need to browse among hundreds of bugs on bugzilla or pull requests on github (both can be labeled very poorly) to discern, out of that sea of information, what are the *few* visible changes to the language and core library APIs.
Feb 28 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/28/2013 10:32 AM, foobar wrote:
 At a bare minimum, there should be a well defined place to notify *users* (NOT
 DMD contributers) about language changes *ahead of time*. If such place is not
 defined, than people do not know where to look for that info if that is even
 available at all. This at least will remove the element of surprise.
The mailing lists serve that purpose, and the discussion was on the mailing list (which is an echo of what happens on github). It also showed up in the bugzilla n.g., as do all things posted on bugzilla. You could argue that there's too much other 'noise' on those things. But at what point can one decide what is 'noise' and what isn't? Everybody has a different set of things they want to watch.
 For D to be really open as it claims to be, there should be additional
 guidelines about the decision making process itself. This should be made
 accessible for *users* (D programmers). I need not be a core DMD contributer,
 nor should I need to know C++ or DMD's internals, Nor should I need to browse
 among hundreds of bugs on bugzilla or pull requests on github (both can be
 labeled very poorly) to discern out of that sea of information what are the
 *few* visible changes to the language and core library APIs.
All of the changes affect somebody, otherwise they would never have gotten onto bugzilla in the first place. There are searches you can make on bugzilla for fulfilled enhancements only, such as: http://d.puremagic.com/issues/buglist.cgi?chfieldto=Now&query_format=advanced&chfield=resolution&chfieldfrom=2013-02-18&chfieldvalue=FIXED&bug_severity=enhancement&bug_status=RESOLVED&version=D2&version=D1%20%26%20D2&resolution=FIXED&product=D
Feb 28 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/25/2013 11:56 PM, foobar wrote:
 DDoc isn't part of the language but rather part of the compiler, nevertheless
it
 has its downsides.  [...]
 unittest is worse,
I think you're missing something gigantic.

Before D had ddoc, the documentation for Phobos was TERRIBLE - it was mostly missing, and the rest would describe something that had no resemblance to what the code did. Adding Ddoc completely revolutionized this. It's like night and day. Sure, you can pick at Ddoc's flaws all day, but without Ddoc, the Phobos documentation would have remained utter s**t.

Yes, one could use Doxygen. One could hope an up-to-date version exists on all the platforms D is on. One could nag people to use it. One could argue with people who wanted to use a different doc generator. And one could look at typical C and C++ projects, which use no documentation generator at all, and pretty much have no documentation or have documentation as bad as the pre-Ddoc Phobos docs.

Having Ddoc always there, always installed, always up to date, with literally zero effort, tips the balance. It gets used. It raised the bar on what is acceptable D code - it looks wrong without Ddoc documentation. By tipping the balance I mean it *revolutionized* D code.

The same goes for unittest. How many C/C++ projects have you run across that have unit tests? Again, yes, you can use 3rd party tools (of which there are a plethora). You can try to use multiple libraries that use different unit test frameworks. You can look at Phobos before unittest and see that it was pretty much completely untested.

Unittest in the language, always there, always installed, zero effort, completely changed the game. I'm very pleased at the depth and breadth of unittests in Phobos. I have no doubt that would not have happened without unittest. Sure, you can pick all day at the flaws of unittest, but you'd be missing the point - without builtin unittest, there'd be nothing to pick at, because people would not have unit tests.
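For readers who haven't used the two features Walter describes, the zero-effort versions look like this (documentation comes out with dmd -D, tests run with dmd -unittest):

```d
/// Computes the area of a rectangle.
/// Params:
///     w = width in pixels
///     h = height in pixels
/// Returns: the area, in square pixels
int area(int w, int h)
{
    return w * h;
}

/// The test lives next to the code and runs on every -unittest build.
unittest
{
    assert(area(2, 3) == 6);
    assert(area(0, 9) == 0);
}
```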
 Additional such problems - the AA issue which has been going on for years now.
 The endless discussions regarding tuples.
 It seems that D strives to bloat the language with needless features that
really
 should have been standardized in the library and on the other hand tries to put
 in the library things that really ought to be built into the language to
benefit
 from proper integration and syntax.
A little history is in order here. AA's were built into the language from the beginning, a result of my experience with how incredibly useful they were in JavaScript. This was many years before D had templates. There was no other way at the time to implement them in a nice manner (try doing it in C, for example). D's improving generics have enabled them to be redone as library features.
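For reference, the built-in AAs Walter mentions require no imports and no visible template machinery at the use site:

```d
// Built-in associative arrays: declared with the V[K] syntax,
// available without importing anything.
void main()
{
    int[string] counts;
    counts["d"] = 1;
    counts["d"] += 1;
    assert("d" in counts);
    assert(counts["d"] == 2);
}
```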
 The latest case was the huge properties debate and its offshoots regarding ref
 semantics which I didn't even bother participate in. Bartosz developed an
 ownership system for D to address all the safety issues raised by ref *years
 ago* and it was rejected due to complexity. Now, Andrei tries to achieve
similar
 safety guaranties by giving ref the semantics of borrowed pointers. It all
seems
 to me like trying to build an airplane without wings cause they are too
complex.
 Rust on the other hand already integrated an ownership system and is already
far
 ahead of D's design. D had talked about macros *years ago* and rust already
 implemented them.
Bartosz' ownership system was intended to support multithreaded programming. It was and still is too complicated. I've been working on another design which should serve the purpose and will need nearly zero effort from the programmer and it won't break anything. There was some discussion last fall on the n.g. about it.
 We do have a significantly better D culture than the C++ one. For example, C++
 relies heavily and unapologetically on convention for writing correct, robust
 code. D eschews that, and instead is very biased towards mechanical
verification.
 I call bullshit. This is a half-hearted intention at best. @safe has holes in it,
Yes, and those are bugs, and we have every intention of fixing all of them.
 integers have no overflow checks,
This has been discussed ad nauseam. To sum up, adding overflow checks everywhere would seriously degrade performance. Yet you can still have overflow-checking integers if you build a library type to do it. See std.halffloat for an example of how to do it. It fits in with your suggestion that things that can be done in the library, should be done in the library.
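A minimal sketch of such a library type; CheckedInt is illustrative only, not std.halffloat's actual design:

```d
// Overflow-checking addition via a wider intermediate type;
// the other operators would follow the same opBinary pattern.
struct CheckedInt
{
    int value;

    CheckedInt opBinary(string op : "+")(CheckedInt rhs) const
    {
        immutable long wide = cast(long)value + rhs.value;
        assert(wide >= int.min && wide <= int.max, "integer overflow");
        return CheckedInt(cast(int)wide);
    }
}

unittest
{
    auto a = CheckedInt(1_000_000) + CheckedInt(1);
    assert(a.value == 1_000_001);
    // CheckedInt(int.max) + CheckedInt(1) would trip the assert.
}
```

The point is that the check is opt-in: code that wants raw speed keeps int, code that wants safety pays for it explicitly.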
 ref also has holes,
Yes, and we are actively working to fix them.
 Not only does D have null pointer bugs, they also cause segfaults.
D now has all the features to create a library type NotNull!T, which would be a pointer type that is guaranteed to be not null.
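One possible shape of that type; a hedged sketch, not a settled design:

```d
// A pointer wrapper whose only constructor rejects null, so code
// receiving a NotNull!T never needs to write a null check.
struct NotNull(T)
{
    private T* ptr;

    this(T* p)
    {
        assert(p !is null, "NotNull constructed from null");
        ptr = p;
    }

    ref T get() { return *ptr; }
    alias get this; // transparent access to the pointee
}

unittest
{
    int x = 42;
    auto p = NotNull!int(&x);
    assert(p == 42); // reads through the wrapper via alias this
}
```

A production version would also have to forbid default construction and assignment from a possibly-null pointer; the sketch only shows the core idea.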
 In fact there are many such "not c++"
 features in D and which is why I find other languages such as rust a *much*
 better design and it evolves much faster because it is designed in terms of -
 what we want to achieve, how best to implement that.
How does rust handle this particular issue?
I presume rust does not have an official answer to the debug conditional issue and leaves it up to the user?
Feb 26 2013
next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Tuesday, February 26, 2013 11:53:11 Walter Bright wrote:
 On 2/25/2013 11:56 PM, foobar wrote:
 DDoc isn't part of the language but rather part of the compiler,
 nevertheless it has its downsides. [...]
 unittest is worse,
I think you're missing something gigantic.
[SNIP] I agree with all of this. Ddoc and the built-in unit testing features may not be perfect, but they put us light years ahead of most everyone else. They actually get _used_. If it's at all difficult to generate documentation or write unit tests, they just don't happen in far too many cases. We are _way_ better off with these. It's quite possible that improvements could be made to their feature set, and it's quite possible that improvements could be made to how they're implemented in order to make improving them easier, but we are _way_ better off having them. And the fact that they exist costs you _nothing_ if you want to use 3rd party stuff like you would in C++. You can write your own unit test framework and not even use -unittest. You can use doxygen if you want to. Nothing is stopping you. But by having all of this built in, it actually gets used, and we have decent documentation and unit testing. They're features in D which have a _huge_ impact, even if they don't first appear like they would. - Jonathan M Davis
Feb 26 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/26/13, Walter Bright <newshound2 digitalmars.com> wrote:
 Sure, you can pick all day at the flaws of unittest, but you'd be missing
 the
 point - without builtin unittest, there'd be nothing to pick at, because
 people
 would not have unit tests.
Also you can implement your own unittest runner function and do some cool customizations, e.g. selecting which unittests will or will not be run, displaying success/failure information, etc. (Runtime.moduleUnitTester is the customization point).
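A sketch of that hook; the filtering policy here (skipping modules whose name starts with "vendored.") is made up for illustration:

```d
import core.runtime;
import std.algorithm : startsWith;
import std.stdio;

shared static this()
{
    // Install a custom runner before the runtime invokes the tests.
    Runtime.moduleUnitTester = function bool()
    {
        foreach (m; ModuleInfo)
        {
            if (m is null) continue;
            auto fp = m.unitTest;   // null if this module has no tests
            if (fp is null) continue;
            if (m.name.startsWith("vendored."))
                continue;           // our made-up skip rule
            writeln("testing ", m.name);
            fp();
        }
        return true; // returning false aborts before main()
    };
}
```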
Feb 26 2013
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Feb 26, 2013 at 11:53:11AM -0800, Walter Bright wrote:
 On 2/25/2013 11:56 PM, foobar wrote:
DDoc isn't part of the language but rather part of the compiler,
nevertheless it has its downsides.  [...]
unittest is worse,
I think you're missing something gigantic. Before D had ddoc, the documentation for Phobos was TERRIBLE - it was mostly missing, and the rest would describe something that had no resemblance to what the code did. Adding Ddoc completely revolutionized this. It's like night and day. Sure, you can pick at Ddoc's flaws all day, but without Ddoc, the Phobos documentation would have remained utter s**t.
I have to say that even though ddoc hasn't quite grown on me yet, I did start using it recently for one of the generic modules I was working on for my personal projects, and, for all its warts and shortcomings, it's incredibly handy, and actually makes you *want* to write documentation for your code. I mean, it's right there, you just have to type it in and you'll get the docs. YMMV, but personally I find ddoc actually very helpful in improving the general quality of D code. My D code has improved because having to write docs for functions actually made me think about some corner cases I would've overlooked otherwise.
 Yes, one could use Doxygen. One could hope an up-to-date version
 exists on all the platforms D is on. One could nag people to use it.
 One could argue with people who wanted to use a different doc
 generator. And one could look at typical C and C++ projects, which use
 no documentation generator at all, and pretty much have no
 documentation or have documentation as bad as the pre-Ddoc Phobos
 docs.
I've tried to use Doxygen before. I hate it. It feels like something patched onto a deficient language, and just doesn't integrate well with the workflow (nobody wants to run make docs when it might potentially lengthen the code-compile-test cycle). Whereas ddoc is done as you compile, so you get it "for free". It assumes users actually have it installed, which most don't, and ... the number of C/C++ projects I've seen the source code of, that use Doxygen, can be counted on one hand. Maybe less. Most just have outdated comments (if at all!), haphazardly formatted, and usually inaccurate because people have patched changes without updating the comments (nobody wants to, because there's no standard format, and nobody wants to edit typo-filled non-punctuated remarks on the off-chance that it might actually be saying something important).
 Having Ddoc always there, always installed, always up to date, with
 literally zero effort, tips the balance. It gets used. It raised the
 bar on what is acceptable D code - it looks wrong without Ddoc
 documentation. By tipping the balance I mean it *revolutionized* D
 code.
+1. Well, +0.5, 'cos I'm still not fully sold on ddoc yet... but I'm starting to.
 The same goes for unittest. How many C/C++ projects have you run
 across that have unit tests? Again, yes, you can use 3rd party tools
 (of which there are a plethora). You can try to use multiple
 libraries that use different unit test frameworks. You can look at
 Phobos before unittest and see that it was pretty much completely
 untested.
 
 Unittest in the language, always there, always installed, zero
 effort, completely changed the game. I'm very pleased at the depth
 and breadth of unittests in Phobos. I have no doubt that would not
 have happened without unittest.
Yeah, unittests have improved the quality of my code by leaps and bounds. Especially in the area of regressions: sure you don't need built-in unittests to get it right the first time, but what about when you made the 500-line diff adding a brand new feature? Most of the time, when I do that, I introduce tons of regressions that I'm not even aware of until I run into them much later. Nothing is a better wakeup call than patching the diff in, compiling with -unittest, running the program, and oops, unittest failure left, right, and center! Better fix all those regressions! Result: you find bugs early, before they show up in production environments.
 Sure, you can pick all day at the flaws of unittest, but you'd be
 missing the point - without builtin unittest, there'd be nothing to
 pick at, because people would not have unit tests.
Yep, that's me. I hated unittesting, 'cos I felt it was a waste of time. I had to put the code on hold, switch to a different language made for unittesting (like python or Tcl/Expect or whatever), write tests in a different directory, which then get out of sync with the latest code, and become too troublesome to update, so you disable them then forget to re-enable them later after things are updated, etc.. It's just lots of needless overhead. But D's built-in unittests, for all their warts and shortcomings, have the benefit of being right there, ready to use, and guaranteed to be runnable by whoever is compiling the code (don't have to worry about people not having Expect/python/whatever installed, so contributors have no excuse to not run them, etc.). Plus, it's in pure D syntax, so my brain doesn't have to keep switching gears, which means I'm more likely to actually write them. And there's no need to painstakingly build lots of scaffolding for unittesting, which is the problem when I begin most projects 'cos the code is too small to justify the effort of setting up a unittesting environment, but then once the code grows, too much code isn't unittested as it should be, and by then, it's kinda too late to remember all the corner cases you need to check for. As is the case with ddocs, writing unittests while you're coding makes you think about corner cases you may have overlooked, all while the code is fresh in your mind, as opposed to 20 minutes later when that potentially dangerous pointer manipulation may have been forgotten and lurks in the code until much later. So yeah. Complain as you may about the flaws of D's unittests, but they sure have helped improve my code significantly.
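For readers who haven't seen it, the whole zero-setup workflow being praised here amounts to this (a trivial sketch; the function and its tests are hypothetical examples — compile with -unittest and the blocks run before main):

```d
import std.exception : assertThrown;

/// Integer division that rejects a zero divisor.
int div(int a, int b)
{
    if (b == 0)
        throw new Exception("division by zero");
    return a / b;
}

unittest
{
    // Runs automatically when the module is compiled with -unittest.
    assert(div(10, 2) == 5);
    assert(div(7, 2) == 3);   // integer division truncates
    assertThrown(div(1, 0));  // the corner case, tested while it's fresh
}

void main() {}
```

No framework, no scaffolding, no separate test directory — the test sits next to the code it checks.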
Additional such problems - the AA issue which has been going own for
years now.  The endless discussions regarding tuples.  It seems that
D strives to bloat the language with needless features that really
should have been standardized in the library and on the other hand
tries to put in the library things that really ought to be built into
the language to benefit from proper integration and syntax.
A little history is in order here. AA's were built in to the language from the beginning, a result of my experience with how incredibly useful they were in javascript. This was many years before D had templates. There was no other way at the time to implement them in a nice manner (try doing it in C, for example). D's improving generics has enabled them to be redone as library features.
[...] Not to mention that many of the current AA issues were introduced later when people tried to extend it in ways not originally conceived. Like supporting ranges -- which required the schizophrenic duplication of internal data structures in object_.d -- a horrible idea, to say the least, but I can totally sympathize with why it would be preferable to holding off and waiting indefinitely for the ideal solution, and thus having zero range support for a looong time. T -- Stop staring at me like that! You'll offend... no, you'll hurt your eyes!
Feb 26 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-02-26 21:24, H. S. Teoh wrote:

 But D's built-in unittests, for all their warts and shortcoming, have
 the benefit of being right there, ready to use, and guaranteed to be
 runnable by whoever is compiling the code (don't have to worry about
 people not having Expect/python/whatever installed, so contributors have
 no excuse to not run them, etc.).
I think that is one of the problems with unit tests in D. I don't know how to run them. It's just the -unittest flag, but that's not enough. * How do I run all the unit tests in all of my files? Some will have a shell script called "test.sh", some will call it "unittest.sh". How do I then run the tests on Windows? I can't run Bash scripts on Windows. Some will have a D file "test.d", what the h*ll should I do with that? Compile it? Run it using rdmd? * How do I run a single test? * How do I run a subset of the tests? The questions go on. Using Ruby on Rails, the first thing I see when I clone a repository is either a "test" or a "spec" folder. These are run using "rake test" or "rake spec". All projects using these frameworks support running all tests, a single test, and a subset of tests. And it's the same commands for all projects on all platforms. -- /Jacob Carlborg
Feb 27 2013
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/27/13, Jacob Carlborg <doob me.com> wrote:
 * How do I run a single test?
 * How do I run a subset of the tests?
If this is really an often-requested feature it could be implemented by default in Druntime. All you'd need is to parse command-line arguments before calling main(), document these switches, and implement this feature in the default unit tester that's found in core.runtime.runModuleUnitTests after the "if( Runtime.sm_moduleUnitTester is null )" check. For example I use this: https://github.com/AndrejMitrovic/dgen/blob/master/src/dgen/test.d If I want to test specific modules I use: rdmd --testModsRun=a.b.c --testModsRun=d.e.f main.d And if I want to ignore running specific tests: rdmd --testModsSkip=a.b.c main.d
Feb 27 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2013 4:19 AM, Jacob Carlborg wrote:
 I think that is one of the problems with unit tests in D. I don't know how to
 run them.
Compile with -unittest and then run.
 * How do I run all the unit test in all of my files?
Compile all files with -unittest and then run the program.
 * How do I run a single test?
 * How do I run a subset of the tests?
Compile only the modules you want to run the unittests on with -unittest.
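As a concrete sketch of those answers on the command line (the file names are hypothetical; object-file suffixes differ by platform):

```shell
dmd -c -unittest a.d   # a's unittest blocks are compiled in
dmd -c b.d             # b's unittest blocks are compiled out
dmd main.d a.o b.o     # link; on Windows the objects are a.obj / b.obj
./main                 # runs a's unittests first, then main()
```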
Feb 27 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-02-27 20:23, Walter Bright wrote:
 On 2/27/2013 4:19 AM, Jacob Carlborg wrote:
 I think that is one of the problems with unit tests in D. I don't know
 how to
 run them.
Compile with -unittest and then run.
 * How do I run all the unit test in all of my files?
Compile all files with -unittest and then run the program.
 * How do I run a single test?
 * How do I run a subset of the tests?
Compile only the modules you want to run the unittests on with -unittest.
I don't. I have a script that handles this. Someone else might have another script doing things differently. This is the problem. -- /Jacob Carlborg
Feb 27 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 2/26/13, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:
 Most just have outdated comments (if at all!)
Some projects even maintain documentation *separately* from the codebase, which leads to a ton of outdated stuff. For example look at the list of documentation fixes I made for wxWidgets: http://trac.wxwidgets.org/query?reporter=drey&order=priority
Feb 26 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Feb 26, 2013 at 09:41:09PM +0100, Andrej Mitrovic wrote:
 On 2/26/13, H. S. Teoh <hsteoh quickfur.ath.cx> wrote:
 Most just have outdated comments (if at all!)
Some projects even maintain documentation *separately* from the codebase, which leads to a ton of outdated stuff. For example look at the list of documentation fixes I made for wxWidgets: http://trac.wxwidgets.org/query?reporter=drey&order=priority
I thought that was the norm? Especially in the realm of open source, the problem of docs mismatching implementation is sadly very prevalent. The thing is, when code comments are so poor, nobody would even imagine using them as user-consumable docs. And so docs are written separately. But coders love to code; docs are in another format in another subdir, who cares about updating it when you just have a 1-line fix? It's a totally different ball game when the ddoc comments are staring you in the face, right in the source code, crying out "update me! update me!". I'm not pretending that it solves the mismatch problem entirely, of course. You *can* still change the code without updating the ddocs. But you're much more likely to update it because it's right there in front of you, not somewhere else in some obscure subdirectory, out of sight and out of mind. I'd even say having the ddocs embedded in the code *shames* you into updating it, in much the same way as built-in unittest blocks shame you into writing them (so much so, that just these past few days, upon revisiting some of my earlier D code, I was horrified at the huge swaths of unittest-free code -- writing unittests has become such a habit to me now -- and now I feel too ashamed to not start writing unittests for that code). T -- Many open minds should be closed for repairs. -- K5 user
Feb 26 2013
prev sibling next sibling parent reply "foobar" <foo bar.com> writes:
On Tuesday, 26 February 2013 at 19:53:11 UTC, Walter Bright wrote:
 On 2/25/2013 11:56 PM, foobar wrote:
 DDoc isn't part of the language but rather part of the 
 compiler, nevertheless it
 has its downsides.  [...]
 unittest is worse,
I think you're missing something gigantic. Before D had ddoc, the documentation for Phobos was TERRIBLE - it was mostly missing, and the rest would describe something that had no resemblance to what the code did. Adding Ddoc completely revolutionized this. It's like night and day. Sure, you can pick at Ddoc's flaws all day, but without Ddoc, the Phobos documentation would have remained utter s**t. Yes, one could use Doxygen. One could hope an up-to-date version exists on all the platforms D is on. One could nag people to use it. One could argue with people who wanted to use a different doc generator. And one could look at typical C and C++ projects, which use no documentation generator at all, and pretty much have no documentation or have documentation as bad as the pre-Ddoc Phobos docs. Having Ddoc always there, always installed, always up to date, with literally zero effort, tips the balance. It gets used. It raised the bar on what is acceptable D code - it looks wrong without Ddoc documentation. By tipping the balance I mean it *revolutionized* D code.
All of the above describes the benefits of having standardized documentation, and I agree with that. That has nothing to do with DDoc's specific design compared to other similar efforts. A quick survey of languages shows that Ruby, Python, and others all have the same benefits, but none has the doc generator built into the compiler/VM with all the problems this entails.
 The same goes for unittest. How many C/C++ projects have you 
 run across that have unit tests? Again, yes, you can use 3rd 
 party tools (of which there are a plethora). You can try to use 
 multiple libraries that use different unit test frameworks. You 
 can look at Phobos before unittest and see that it was pretty 
 much completely untested.

 Unittest in the language, always there, always installed, zero 
 effort, completely changed the game. I'm very pleased at the 
 depth and breadth of unittests in Phobos. I have no doubt that 
 would not have happened without unittest.

 Sure, you can pick all day at the flaws of unittest, but you'd 
 be missing the point - without builtin unittest, there'd be 
 nothing to pick at, because people would not have unit tests.
Same as above. You compare again to C++ and ignore the provably successful models of _many_ other languages. Ruby, for instance, really shines in this regard as its community is very much oriented towards TDD. Java has such a successful model with its JUnit that it inspired a whole bunch of clones for other languages, and you completely ignore this. Instead you discuss the design of a new car based on the experiences of horseback riders.
 Additional such problems - the AA issue which has been going 
 own for years now.
 The endless discussions regarding tuples.
 It seems that D strives to bloat the language with needless 
 features that really
 should have been standardized in the library and on the other 
 hand tries to put
 in the library things that really ought to be built into the 
 language to benefit
 from proper integration and syntax.
A little history is in order here. AA's were built in to the language from the beginning, a result of my experience with how incredibly useful they were in javascript. This was many years before D had templates. There was no other way at the time to implement them in a nice manner (try doing it in C, for example). D's improving generics has enabled them to be redone as library features.
I'm familiar with the history of AAs in D and how they came to be this horrible mess. Yet, templates in D are ancient news by now and the problem hasn't been fixed, and not for lack of effort. The problem is, again, applying common C++ wisdom and trying to maintain inconsistent semantics.
 The latest case was the huge properties debate and its 
 offshoots regarding ref
 semantics which I didn't even bother participate in. Bartosz 
 developed an
 ownership system for D to address all the safety issues raised 
 by ref *years
 ago* and it was rejected due to complexity. Now, Andrei tries 
 to achieve similar
 safety guarantees by giving ref the semantics of borrowed 
 pointers. It all seems
 to me like trying to build an airplane without wings cause 
 they are too complex.
 Rust on the other hand already integrated an ownership system 
 and is already far
 ahead of D's design. D had talked about macros *years ago* and 
 rust already
 implemented them.
Bartosz' ownership system was intended to support multithreaded programming. It was and still is too complicated. I've been working on another design which should serve the purpose and will need nearly zero effort from the programmer and it won't break anything. There was some discussion last fall on the n.g. about it.
 We do have a significantly better D culture than the C++ one. 
 For example, C++
 relies heavily and unapologetically on convention for writing 
 correct, robust
 code. D eschews that, and instead is very biased towards 
 mechanical verification.
I call bullshit. This is a half-hearted intention at best. @safe has holes in it,
Yes, and those are bugs, and we have every intention of fixing all of them.
 integers have no overflow checks,
This has been discussed ad nauseam. To sum up, adding overflow checks everywhere would seriously degrade performance. Yet you can still have overflow-checking integers if you build a library type to do it. See std.halffloat for an example of how to do it. It fits in with your suggestion that things that can be done in the library, should be done in the library.
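As a sketch of what such a library type can look like (the name CheckedInt and its API are illustrative, not an actual Phobos type; it leans on present-day druntime's core.checkedint helpers, which report overflow through a flag):

```d
import core.checkedint : adds, muls, subs;

/// Illustrative overflow-checked integer; throws instead of wrapping.
struct CheckedInt
{
    int value;

    CheckedInt opBinary(string op)(CheckedInt rhs) const
    {
        bool overflow = false;
        int r;
        static if (op == "+") r = adds(value, rhs.value, overflow);
        else static if (op == "-") r = subs(value, rhs.value, overflow);
        else static if (op == "*") r = muls(value, rhs.value, overflow);
        else static assert(0, "unsupported operator: " ~ op);
        if (overflow)
            throw new Exception("integer overflow on '" ~ op ~ "'");
        return CheckedInt(r);
    }
}

unittest
{
    import std.exception : assertThrown;
    assert((CheckedInt(2) * CheckedInt(3)).value == 6);
    assertThrown(CheckedInt(int.max) + CheckedInt(1)); // checked, not wrapped
}
```

The performance cost stays opt-in: code that wants raw int keeps raw int, and code that wants checking pays for it explicitly.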
Yes, but this conflicts with your statement of intention towards verification machinery for safety. Of course I can implement whatever I need myself, but what ensures safety is the fact that the default is safe, and the language forces me to be explicit when I choose to sacrifice safety for other benefits. You see, *defaults matter*. I can use an option type in D and still get null pointer segfaults, whereas in Rust I cannot get a null value without an explicit option type, which I am forced to check. Another case in point - both Germany and Austria have an option to sign an organ donor card when getting a driver's license. Germany requires you to tick the box if you want to join the program; Austria requires you to tick the box if you do *not* want to join. Austria has a much higher percentage of organ donors.
 ref also has holes,
Yes, and we are actively working to fix them.
 Not only does D have null pointer bugs, but they also cause segfaults.
D now has all the features to create a library type NotNull!T, which would be a pointer type that is guaranteed to be not null.
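A minimal sketch of what such a type could look like (the name NotNull and this API are illustrative, not an existing Phobos type):

```d
import std.exception : enforce;
import std.traits : isPointer;

/// Wrapper that cannot hold null; checked once at construction.
struct NotNull(T)
if (is(T == class) || isPointer!T)
{
    private T payload;

    @disable this();   // default construction would mean a null payload

    this(T p)
    {
        enforce(p !is null, "NotNull constructed from null");
        payload = p;
    }

    inout(T) get() inout { return payload; }
    alias get this;    // forwards member access to the wrapped object
}

class Widget
{
    int id;
    this(int id) { this.id = id; }
}

unittest
{
    import std.exception : assertThrown;
    auto w = NotNull!Widget(new Widget(42));
    assert(w.id == 42);                   // used like a plain Widget
    assertThrown(NotNull!Widget(null));   // rejected at the boundary
}
```

The check happens once, at the boundary where the wrapper is built; everything downstream that accepts NotNull!Widget can rely on the invariant without rechecking.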
Irrelevant. See previous comment.
 In fact there are many such "not c++"
 features in D and which is why I find other languages such 
 as rust a *much*
 better design and it evolves much faster because it is 
 designed in terms of -
 what we want to achieve, how best to implement that.
How does rust handle this particular issue?
I presume rust does not have an official answer to the debug conditional issue and leaves it up to the user?
I'm not sure rust has such a feature as it is much more functional in style than D. If I'm not mistaken, "debug" in Rust is used as one of the logging macros.
Feb 26 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/26/2013 2:10 PM, foobar wrote:
 All of the above describes the benefits of having standardized documentation
and
 I agree with that. That has nothing to do with DDoc's specific design compared
 to other similar efforts. A quick survey of languages shows that Ruby, Python, and others all have the same benefits, but none has the doc generator built into the compiler/VM with all the problems this entails.
Building ddoc into the compiler means it has access to the semantic information that compiler provides, and it uses that information. If it is not built in to the compiler, then the options are: 1. require the user to type the information in twice 2. add parsing and semantic analysis capability to the doc generator I find (1) to be an unacceptable user experience, and (2) to be not viable given our limited resources. BTW, Javadoc apparently can only generate HTML. As Andrei has demonstrated, Ddoc can generate html, pdf, and ebooks without changing the Ddoc comments. I'm curious what fundamental advantage you believe Javadoc has over Ddoc.
 Same as above. You compare again to C++ and ignore the provably successful
 models of _many_ other languages. Ruby for instance really shines in this
regard
 as its community is very much oriented towards TDD. Java has such a successful
 model with its JUnit that it inspired a whole bunch of clones for  other
 languages and you completely ignore this. Instead you discuss the design of a
new
 car based on experiences of horseback riders.
They're not that much different - except in one significant aspect. JUnit (for example) has a plethora of websites devoted to tutorials, cookbooks, how-tos, best practices, etc. D unittest is so simple there is no need for that. Anyone can be up and using D unittests in 2 minutes. The how-to is one slide in a presentation about D. Nobody is going to write a book about D unittest: http://www.amazon.com/JUnit-Action-Second-Petar-Tahchiev/dp/1935182021 That's a 450 page, $30 book on JUnit, in its second edition because it's so complicated. I believe the success of unittests in D speak to the value of making it so simple to use. That said, with the new UDA in D, you (or anyone else) can write a "DUnit" and present it to the community. I doubt you'll find it worth the effort, though, as unittests work very well.
The problem is again - applying common c++
 wisdom and trying to maintain inconsistent semantics.
I'm curious how you ascribe builtin AA's, which C++ does not have and will never have, to applying C++ wisdom. As I said, D's AA's were inspired by my experience with Javascript's AA's.
 integers have no overflow checks,
This has been discussed ad nauseam. To sum up, adding overflow checks everywhere would seriously degrade performance. Yet you can still have overflow-checking integers if you build a library type to do it. See std.halffloat for an example of how to do it. It fits in with your suggestion that things that can be done in the library, should be done in the library.
Yes, but this conflicts with your statement of intention towards verification machinery for safety.
Designing *anything* is a compromise among mutually incompatible goals. In no way are we going to say principle A, worship A, use A as a substitute for thought & judgement, and drive right over the cliff with a blind adherence to A. For example, try designing a car where safety is the overriding concern. You can't design it, and if you could you couldn't manufacture it, and if you could manufacture it you couldn't drive it an inch. All language design decisions are tradeoffs. (And to be pedantic, integer overflows are not a memory safety issue. Neither are null pointers. You can have both (like Java does) and yet still have a provably memory safe language.)
 Of course I can implement whatever I need myself but what
 ensures safety is the fact that the default is safe and the language forces me
 to be explicit when I choose to sacrifice safety for other benefits. You see,
 *defaults matter*.
I agree they matter. And I doubt any two people will come up with the same list of what should be the default. Somebody has to make a decision. Again, tradeoffs.
Feb 26 2013
next sibling parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Tuesday, 26 February 2013 at 23:44:32 UTC, Walter Bright wrote:
 As Andrei has demonstrated, Ddoc can generate html, pdf, and 
 ebooks without changing the Ddoc comments.
This is not true in general because ddoc doesn't properly encode its output for different formats (it doesn't even get html right!) http://dlang.org/ddoc.html === Embedded HTML HTML can be embedded into the documentation comments, and it will be passed through to the HTML output unchanged. However, since it is not necessarily true that HTML will be the desired output format of the embedded documentation comment extractor, it is best to avoid using it where practical. === This "feature" is why I haven't bothered documenting my html library: the html examples are incorrectly displayed in the output! BTW, using $(LT) and $(GT) is both hideously ugly and still wrong because other output formats need different characters escaped/encoded. It obviously isn't practical to macroize every character just in case. The easiest way to fix this would be to use the ESCAPES capability ddoc already has over ALL the comment data (except Macros: sections) before doing anything else. This will ensure all data is properly encoded while still keeping all the macro functionality. Of course, it will kill the embedded html misfeature, which even the documentation, as seen above, admits is a bad idea to use anyway!
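For reference, the escape-string mechanism Adam is referring to looks roughly like this in a user-supplied .ddoc macro file (a sketch based on the Ddoc documentation; his proposal is to run these substitutions over all comment text up front rather than only where Ddoc currently applies them):

```ddoc
ESCAPES = /</&lt;/
          />/&gt;/
          /&/&amp;/
```

Each `/c/string/` entry replaces the character c with string in the output, so a format-specific macro file can carry its own escape table.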
Feb 26 2013
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 2/26/2013 4:08 PM, Adam D. Ruppe wrote:
 On Tuesday, 26 February 2013 at 23:44:32 UTC, Walter Bright wrote:
 As Andrei has demonstrated, Ddoc can generate html, pdf, and ebooks without
 changing the Ddoc comments.
This is not true in general because ddoc doesn't properly encode its output for different formats
ddoc relies on using the macros to encode for different formats. Setting up the macros right is something for the user, although the default is for html.
 (it doesn't even get html right!)
The default setup should generate standard html. If the html is wrong, that should be a bug report in bugzilla, not an indictment of the approach.
 ===
 Embedded HTML

 HTML can be embedded into the documentation comments, and it will be passed
 through to the HTML output unchanged. However, since it is not necessarily true
 that HTML will be the desired output format of the embedded documentation
 comment extractor, it is best to avoid using it where practical.
 ===


 This "feature" is why I haven't bothered documenting my html library: the html
 examples are incorrectly displayed in the output!
Yes, if you write incorrect html in the ddoc comments, they'll just get passed through to the output. I don't think that is a fault with ddoc, though.
 Of course, it will kill the embedded html misfeature, which even the
 documentation, as seen above, admits is a bad idea to use anyway!
It's not actually a feature of ddoc at all. Ddoc just transmits its input to its output, expanding macros along the way.
Feb 26 2013
parent reply "Adam D. Ruppe" <destructionator gmail.com> writes:
On Wednesday, 27 February 2013 at 01:03:44 UTC, Walter Bright 
wrote:
 It's not actually a feature of ddoc at all. Ddoc just transmits 
 its input to its output, expanding macros along the way.
The problem with that is it makes it extremely inconvenient to document an html library, and *impossible* to output correct data in formats unknown to the original author. You say outputting to different formats is a selling point, but that just isn't true. Consider this simple comment: /// Given "<b>text</b>", returns "text" The most correct output for html is: Given &quot;&lt;b&gt;text&lt;/b&gt;&quot;, returns &quot;text&quot; Now, suppose we want our ddoc to output... say, json, just to pull an easy to show example. It *should* be: "Given \"<b>text<\/b>\", returns \"text\"" In json, the quotes need to be escaped and it is common to also escape the forward slash, so that's what we want. In html, we encode quotes differently and should also encode < and >. But, there's no way to actually do that with ddoc's macros. If we wanted our ddoc to output json, we'd have to write: Given $(QUOTE)<b>text<$(SLASH)b> well, you get the idea. And now if we want it to work for both html AND json, we're writing: $(QUOTE)$(LT)b$(GT)text$(LT)$(SLASH)b$(GT) And obviously, that's absurd. What was a readable comment (which ddoc is supposed to aspire to) is now a hideous mess of macros. So, you say "nobody would output ddoc to json", but this is just one easy example with different encoding rules than html. LaTeX I'm pretty sure needs you to escape the backslash. I'm not sure, I've barely used latex, so I wouldn't use those macros in my own code. What if somebody else wants to apply his set of latex macros to my code file expecting it to just work? It probably won't. And, again, this is very easy to solve in the vast majority of cases: put that ESCAPES macro to good use by running it over the input data ASAP. Then the data will be properly encoded for whatever formats we need, without needing these bad preemptive macros.
Feb 26 2013
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Feb 27, 2013 at 02:28:44AM +0100, Adam D. Ruppe wrote:
 On Wednesday, 27 February 2013 at 01:03:44 UTC, Walter Bright wrote:
It's not actually a feature of ddoc at all. Ddoc just transmits
its input to its output, expanding macros along the way.
Alas, the real world is not quite that simple. What if the output requires certain characters to be represented differently? There is currently no way to tell ddoc, the characters \{_^}% are metacharacters in the output format, and you need to escape them using macros $(BACKSLASH), $(OPENBRACE), ... etc.. You do NOT want to require the user to manually write these macros in the comment, because then it becomes completely unreadable, as Adam pointed out. But currently, if you want to target *multiple* output formats (say, you want to produce both HTML docs for your website and PDF via LaTeX for a printed documentation), then you have no choice but write macros for the union of all metacharacters in all output formats. At which point, you might as well say that every character should be written as a macro, which is clearly ludicrous. [...]
 And now if we want it to work for both html AND json, we're writing:
 
 $(QUOTE)$(LT)b$(GT)text$(LT)$(SLASH)b$(GR)
 
 
 And obviously, that's absurd. What was a readable comment (which
 ddoc is supposed to aspire to) is now a hideous mess of macros.
 
 So, you say "nobody would output ddoc to json", but this is just one
 easy example with different encoding rules than html. LaTeX I'm
 pretty sure needs you to escape the backslash. I'm not sure, I've
 barely used latex, so I wouldn't use those macros in my own code.
LaTeX is a far more complex beast than one might imagine at first glance. Not only backslashes need to be escaped, under standard conditions you need to also escape: $ { } ~ -- --- & _ ^ \ % (Yes, multi-character sequences are included.) Not to mention that D source code is UTF-8, but standard LaTeX is not, which means characters like ü need to be output as \"u instead. Furthermore, there are very precise spacing rules, like \ is required following a '.' if it's not a sentence break (e.g., in "Mr. Appleseed"), otherwise the output formatting may have the wrong spacing. Now I said standard conditions. There's math mode, which you will end up in if you leave stray $'s lying around (but which, say, math-oriented software might want to actually use for typesetting equations and the like in the documentation), which has a *different* set of metacharacters, and all *spaces* must be escaped. The only thorough solution for a multi-target ddoc comment currently is to write the entire comment in macros. This is an unreasonable expectation, which also detracts greatly from one's desire to actually use ddoc for this purpose.
 What if somebody else wants to apply his set of latex macros to my
 code file expecting it to just work? It probably won't.
 
 And, again, this is very easy to solve in the vast majority of
 cases: put that ESCAPES macro to good use by running it over the
 input data ASAP. Then the data will be properly encoded for whatever
 formats we need, without needing these bad preemptive macros.
The ESCAPES macro will not completely solve the problem with formats like LaTeX, but it will help greatly. It will also nudge me slightly in the direction of embracing ddoc from my current position on the fence. ;-)

T

-- 
Computers are like a jungle: they have monitor lizards, rams, mice, c-moss, binary trees... and bugs.
Feb 26 2013
prev sibling next sibling parent reply "pjmlp" <pjmlp progtools.org> writes:
On Tuesday, 26 February 2013 at 23:44:32 UTC, Walter Bright wrote:
 On 2/26/2013 2:10 PM, foobar wrote:
 All of the above describes the benefits of having standardized 
 documentation and
 I agree with that. That has nothing to do with DDoc's specific 
 design compared
 to other similar efforts. A quick survey of languages shows 
 that Ruby, Python,

 has the doc
 generator built into the compiler/vm with all the problems 
 this entails.
Building ddoc into the compiler means it has access to the semantic information that the compiler provides, and it uses that information. If it is not built in to the compiler, then the options are:

1. require the user to type the information in twice

2. add parsing and semantic analysis capability to the doc generator

I find (1) to be an unacceptable user experience, and (2) to be not viable given our limited resources.

BTW, Javadoc apparently can only generate HTML. As Andrei has demonstrated, Ddoc can generate html, pdf, and ebooks without changing the Ddoc comments. I'm curious what fundamental advantage you believe Javadoc has over Ddoc.
This is not true. Javadoc uses a plugin architecture known as doclets:

http://docs.oracle.com/javase/7/docs/technotes/guides/javadoc/doclet/overview.html

There are quite a few plugins available; this one, for example, generates UML diagrams from JavaDoc comments:

http://code.google.com/p/apiviz/

-- 
Paulo
Feb 27 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 2/27/2013 12:05 AM, pjmlp wrote:
 This is not true.

 Javadoc uses a plugin architecture known as doclet.
I didn't know about doclets. Thanks for the correction.
Feb 27 2013
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-27 00:44, Walter Bright wrote:

 1. require the user to type the information in twice
 2. add parsing and semantic analysis capability to the doc generator
3. Build the compiler as a library. Use the library when creating the separate ddoc tool.

Number 3 would be the correct approach.

-- 
/Jacob Carlborg
Feb 27 2013
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-02-26 20:53, Walter Bright wrote:
 On 2/25/2013 11:56 PM, foobar wrote:
 DDoc isn't part of the language but rather part of the compiler,
 nevertheless it
 has its downsides.  [...]
 unittest is worse,
[SNIP]

I'm going to use the big "if": if the D compiler were built properly as a library, preferably in D, we could build all these features as separate tools (Ddoc, unit testing) with the help of the compiler. These tools would then be included in the D distribution.

Actually, the unit test support doesn't need the compiler as a library. The built-in unit test support doesn't give much; it's easily implemented as a library with a simple tool to drive it. Example:

void unitTest (void delegate () dg);

static this ()
{
    unitTest({
        assert(true);
    });
}

It's then easy to add support for named unit tests:

void unitTest (string name, void delegate () dg);

static this ()
{
    unitTest("foo", {
        assert(true);
    });
}

If we then add some syntactic sugar and allow a delegate to be passed after the parameter list, we could have this:

static this ()
{
    unitTest("foo")
    {
        assert(true);
    }
}

And if we could support having arbitrary code at the top level, we would have this:

unitTest("foo")
{
    assert(true);
}

That is basically the same syntax we have now, but implemented as a library function, and it can easily be extended with other similar functions doing slightly different things.

-- 
/Jacob Carlborg
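For what it's worth, the registry idea sketched in the post above can be made concrete in a few lines of D. This is purely illustrative: NamedTest, unitTest and runTests are made-up names, not druntime API, and a real driver tool would add reporting on top.

```d
// Hypothetical sketch of a library-based unit test registry:
// unitTest() records named test delegates, runTests() executes
// them all and counts how many failed.
struct NamedTest
{
    string name;
    void delegate () run;
}

NamedTest[] tests;

void unitTest(string name, void delegate () dg)
{
    tests ~= NamedTest(name, dg);
}

// Returns the number of failing tests.
size_t runTests()
{
    size_t failures;
    foreach (t; tests)
    {
        try
        {
            t.run();
        }
        catch (Throwable)   // a failed assert throws an AssertError
        {
            ++failures;
        }
    }
    return failures;
}
```

Registration would happen in static this () blocks exactly as in the post; a small driver tool would then call runTests and report the failures per module.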
Feb 27 2013