
digitalmars.D - Study: build times for D programs

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Hello,


I was talking to Walter about how to define a good study of D's compilation 
speed. We figured that we clearly need a good baseline; otherwise the 
numbers have little meaning.

One idea would be to take a real, non-trivial application, written in 
both D and another compiled language. We then can measure build times 
for both applications, and also measure the relative speeds of the 
generated executables.
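For the build-time half of such a study, a minimal timing-harness sketch (the compiler invocations in the comments are assumptions, not the actual study setup):

```shell
# Hypothetical timing harness: run a build command several times and
# report the best wall-clock time in milliseconds, to smooth over
# filesystem-cache and scheduler noise.
time_build() {
    best=
    for run in 1 2 3; do
        start=$(date +%s%N)            # nanoseconds since epoch (GNU date)
        "$@" > /dev/null 2>&1
        end=$(date +%s%N)
        ms=$(( (end - start) / 1000000 ))
        if [ -z "$best" ] || [ "$ms" -lt "$best" ]; then best=$ms; fi
    done
    echo "$best"
}

# Assumed invocations -- adjust to the real dmdscript source layouts:
#   time_build dmd -O -release -ofdmdscript_d *.d
#   time_build g++ -O2 -o dmdscript_cpp *.cpp
```

Taking the best of several runs (after a warm-up) is one defensible choice; reporting all runs would be another.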

Although it sounds daunting to write the same nontrivial program twice, 
it turns out such an application does exist: dmdscript, a Javascript 
engine written by Walter in both C++ and D. It has over 40KLOC so it's 
of a good size to play with.

What we need is a volunteer who dusts off the codebase (e.g. the D 
source is in D1 and should be adjusted to compile with D2), runs careful 
measurements, and shows the results. Is anyone interested?


Thanks,

Andrei
Jul 24 2012
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 24-Jul-12 18:34, Andrei Alexandrescu wrote:
 Hello,
 Although it sounds daunting to write the same nontrivial program twice,
 it turns out such an application does exist: dmdscript, a Javascript
 engine written by Walter in both C++ and D. It has over 40KLOC so it's
 of a good size to play with.

 What we need is a volunteer who dusts off the codebase (e.g. the D
 source is in D1 and should be adjusted to compile with D2), run careful
 measurements, and show the results. Is anyone interested?

Well, I'd rather pass this to someone else, but I have a DMDScript repo that could be built with DMD ~2.056. I'll upload it to GitHub; I've been meaning to do this for ages.

--
Dmitry Olshansky
Jul 24 2012
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/24/12 10:53 AM, Dmitry Olshansky wrote:
 On 24-Jul-12 18:34, Andrei Alexandrescu wrote:
 Hello,
 Although it sounds daunting to write the same nontrivial program twice,
 it turns out such an application does exist: dmdscript, a Javascript
 engine written by Walter in both C++ and D. It has over 40KLOC so it's
 of a good size to play with.

 What we need is a volunteer who dusts off the codebase (e.g. the D
 source is in D1 and should be adjusted to compile with D2), run careful
 measurements, and show the results. Is anyone interested?

Well, I'd rather pass this to someone else but I have DMDScript repo that could be built with DMD ~ 2.056. I'll upload it to github, been meaning to do this for ages.

Excellent, thanks! Andrei
Jul 24 2012
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 24-Jul-12 18:54, Andrei Alexandrescu wrote:
 On 7/24/12 10:53 AM, Dmitry Olshansky wrote:
 On 24-Jul-12 18:34, Andrei Alexandrescu wrote:
 Hello,
 Although it sounds daunting to write the same nontrivial program twice,
 it turns out such an application does exist: dmdscript, a Javascript
 engine written by Walter in both C++ and D. It has over 40KLOC so it's
 of a good size to play with.

 What we need is a volunteer who dusts off the codebase (e.g. the D
 source is in D1 and should be adjusted to compile with D2), run careful
 measurements, and show the results. Is anyone interested?

Well, I'd rather pass this to someone else but I have DMDScript repo that could be built with DMD ~ 2.056. I'll upload it to github, been meaning to do this for ages.

Excellent, thanks! Andrei

Done: https://github.com/blackwhale/DMDScript

An awful lot of stuff got deprecated, e.g. it still uses the class A : public B { ... } syntax.

To those taking this task - ready your shovels ;)

--
Dmitry Olshansky
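A quick illustration of the deprecated syntax mentioned above (class names are placeholders):

```d
class B { }

// D1-era declaration, later deprecated:
//   class A : public B { }
// D2 drops the base-class protection specifier:
class A : B { }
```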
Jul 24 2012
prev sibling next sibling parent reply "Roman D. Boiko" <rb d-coding.com> writes:
On Tuesday, 24 July 2012 at 14:34:58 UTC, Andrei Alexandrescu 
wrote:
 the D source is in D1 and should be adjusted to compile with 
 D2),

That would provide performance numbers (compilation and run-time) only for D1-style code compiled with a D2 compiler. The performance of a typical D2 app would likely be different.
Jul 24 2012
next sibling parent reply "Paulo Pinto" <pjmlp progtools.org> writes:
"Roman D. Boiko"  wrote in message 
news:hpibxcqsmlpmgyngjzwp forum.dlang.org...
On Tuesday, 24 July 2012 at 14:34:58 UTC, Andrei Alexandrescu wrote:
 the D source is in D1 and should be adjusted to compile with D2),

That would provide performance (compilation and run-time) for D1 only (with D2 compiler). Performance of a typical D2 app would likely be different.

Still, it's a good starting point.
Jul 24 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2012 7:58 AM, Paulo Pinto wrote:
 "Roman D. Boiko"  wrote in message news:hpibxcqsmlpmgyngjzwp forum.dlang.org...
 On Tuesday, 24 July 2012 at 14:34:58 UTC, Andrei Alexandrescu wrote:
 the D source is in D1 and should be adjusted to compile with D2),

That would provide performance (compilation and run-time) for D1 only (with D2 compiler). Performance of a typical D2 app would likely be different.

Still, is a good starting point.

The reality is, no matter what benchmark is chosen, it will be attacked as being biased. There is no such thing as a perfect apples-to-apples comparison between languages, and there'll be no shortage of criticism of any shortcomings, valid and invalid. That doesn't mean we shouldn't do it.

Heck, I've even been accused of "sabotaging" the Digital Mars C++ compiler in order to make D look good!
Jul 24 2012
prev sibling next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 24-Jul-12 18:54, Roman D. Boiko wrote:
 On Tuesday, 24 July 2012 at 14:34:58 UTC, Andrei Alexandrescu wrote:
 the D source is in D1 and should be adjusted to compile with D2),

That would provide performance (compilation and run-time) for D1 only (with D2 compiler). Performance of a typical D2 app would likely be different.

In fact, it's rather D2-ified. But yeah, no template-heavy code in sight. It doesn't even use std.range/std.algorithm, IIRC.

--
Dmitry Olshansky
Jul 24 2012
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/24/12 10:54 AM, Roman D. Boiko wrote:
 On Tuesday, 24 July 2012 at 14:34:58 UTC, Andrei Alexandrescu wrote:
 the D source is in D1 and should be adjusted to compile with D2),

That would provide performance (compilation and run-time) for D1 only (with D2 compiler). Performance of a typical D2 app would likely be different.

Ehm. There's any number of arguments that can be made to question the validity of the study:

* the coding style does not use the entire language in either or both implementations
* the application domain favors one language or the other
* the application's use of libraries is too low/too high
* the translation is too literal
* the translation changes the size of the code (which is the case here, as the D version is actually shorter)

Nevertheless, I think there is value in the study. We're looking at a real nontrivial application that wasn't written for a study, but for actual use, and that implements the same design and same functionality in both languages.

Andrei
Jul 24 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2012 8:06 AM, Andrei Alexandrescu wrote:
 Nevertheless, I think there is value in the study. We're looking at a real
 nontrivial application that wasn't written for a study, but for actual use, and
 that implements the same design and same functionality in both languages.

The translation is also just that, a line-by-line translation that started by copying the .c source files to .d. It's probably as good as you're going to get in comparing compile speed.
Jul 24 2012
prev sibling next sibling parent "Roman D. Boiko" <rb d-coding.com> writes:
On Tuesday, 24 July 2012 at 15:06:58 UTC, Andrei Alexandrescu 
wrote:
 Nevertheless, I think there is value in the study. We're 
 looking at a real nontrivial application that wasn't written 
 for a study, but for actual use, and that implements the same 
 design and same functionality in both languages.

OK. And it could serve as a basis for further variations:

* introduce some feature (e.g., ranges), measure the impact
* measure the impact of multiple features alone and in combination

Of course, trivial changes would be unlikely to yield anything useful, but I believe there is a way to produce valuable data in a controlled study.
Jul 24 2012
prev sibling next sibling parent reply Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 7/24/12, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
 snip

I've got a codebase where it takes DMD 15 seconds to output an error message to stdout. The error message is 3000 lines long (and people thought C++ errors were bad!). It's all thanks to this bug: http://d.puremagic.com/issues/show_bug.cgi?id=8082

The codebase isn't public yet, so I can't help you with comparisons. Non-release full builds take 16 seconds for a template-heavy ~12k-line codebase (not counting lines of external dependencies). I use a lot of static foreach loops, btw.

Personally, I think full builds are very fast compared to C++, although the transition from a small codebase which takes less than a second to compile to a bigger one which takes over a dozen seconds is an unpleasant experience. I'd love to see DMD speed up its compile-time features like templates, mixins, static foreach, etc.
Jul 24 2012
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2012 3:18 PM, Jonathan M Davis wrote:
 But we'd have to actually profile
 the compiler on a variety of projects to be sure of that (which is at least
 partially related to what Andrei is suggesting).

I wouldn't be a bit surprised to find that there are some O(n*n) or worse algorithms embedded in the compiler that can be triggered by some types of code. Profiling is the way to root them out.
Jul 24 2012
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-07-25 00:18, Jonathan M Davis wrote:

 I don't have any hard evidence for it, but I've always gotten the impression
 that it was templates, mixins, and CTFE which really slowed down compilation.
 Certainly, they increase the memory consumption of the compiler by quite a
 bit. My guess would be that if we were looking to improve the compiler's
 performance, that's where we'd need to focus. But we'd have to actually profile
 the compiler on a variety of projects to be sure of that (which is at least
 partially related to what Andrei is suggesting).

We did some profiling on Derelict in the process of adding support for D2. This was mostly testing string mixins; the result was: it's a lot faster to use a few string mixins containing a lot of code than many string mixins containing very little code.

--
/Jacob Carlborg
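A minimal sketch of the two styles being compared (the declarations are placeholders; the speed difference is Jacob's measurement, not something this snippet demonstrates by itself):

```d
// Slower to compile: many small string mixins, one per declaration.
mixin("int a;");
mixin("int b;");
mixin("int c;");

// Faster to compile: one string mixin containing all the code at once.
mixin("int x; int y; int z;");
```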
Jul 24 2012
prev sibling next sibling parent reply Guillaume Chatelet <chatelet.guillaume gmail.com> writes:
On 07/24/12 16:34, Andrei Alexandrescu wrote:
 I was talking to Walter on how to define a good study of D's compilation
 speed. We figured that we clearly need a good baseline, otherwise
 numbers have little meaning.

I agree.
 One idea would be to take a real, non-trivial application, written in
 both D and another compiled language. We then can measure build times
 for both applications, and also measure the relative speeds of the
 generated executables.

Well, I kind of did exactly that. I was planning to start a blog ("you know, the blog you should really really start but can't find time to") with such a comparison. I started it a few months ago and couldn't finish the post, so it's still there, lying half finished. But as the subject pops up on the NG, it would be stupid not to talk about it.

I intended to add relevant numbers and go from deterministic, measurable facts to more subjective remarks (was it fun? is it more maintainable?), but I really just did a bit of the first part :(

Anyway, for people interested in my "findings", here is the half-finished post: http://goo.gl/16Yrb

This could serve as a basis of do's and don'ts for a more relevant comparison, as Andrei proposed. For instance, it could be interesting to compare the performance of several C++ and D compilers to get a measure of the performance standard deviation expected within each language. Also, I think the D code could have been more idiomatic and optimized further; it was just a quick test (yet quite time consuming).

Both projects are open source; one is endorsed by the company I'm working for (https://github.com/mikrosimage/sequenceparser), the other is just a personal project for the purpose of the comparison (https://github.com/gchatelet/d_sequence_parser).

By the way, it reminds me of the 'Computer Language Benchmarks Game' (http://shootout.alioth.debian.org/). I know D is not welcome aboard, but couldn't we try to run the game for ourselves so as to have some more data?

--
Guillaume
Jul 24 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/24/2012 11:02 AM, Guillaume Chatelet wrote:
 By the way, it reminds me of the 'Computer Language Benchmarks Game'
 (http://shootout.alioth.debian.org/). I know D is not welcome aboard but
 couldn't we try do run the game for ourself so to have some more data ?

Small programs are completely inadequate for getting any reasonable measure of compiler speed. Even worse, they can be terribly wrong.

(Back in the olden days, when men were men and the sun revolved about the earth, everyone raved about Borland's compilation speed. In tests I ran myself, I found that it was fast, right up until you hit a certain size of source code, maybe about 5000 lines. Then it fell off a cliff, and compile speed was terrible. But hey, it looked great in those tiny benchmarks.)

The people who care about compile speed are compiling gigantic programs, and smallish ones can and do exhibit a very different performance profile. DMDScript is a medium-sized program, not a gigantic one, but it's the best we've got for comparison.
Jul 24 2012
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/24/12 8:20 PM, Walter Bright wrote:
 On 7/24/2012 11:02 AM, Guillaume Chatelet wrote:
 By the way, it reminds me of the 'Computer Language Benchmarks Game'
 (http://shootout.alioth.debian.org/). I know D is not welcome aboard but
 couldn't we try do run the game for ourself so to have some more data ?

Small programs are completely inadequate for getting any reasonable measure of compiler speed. Even worse, they can be terribly wrong.

Nevertheless, there's value in the shootout. Yes, if someone is up for it, that would be great. I also think that if we have the setup ready, we could convince the site maintainer to integrate D into the suite.

Andrei
Jul 24 2012
parent Isaac Gouy <igouy2 yahoo.com> writes:
Andrei Alexandrescu Wrote:

-snip-
 Nevertheless there's value in the shootout. Yes, if someone is up for it 
 that would be great.

The Python measurement scripts are here -- http://shootout.alioth.debian.org/download/bencher.zip

The whole ball of wax is available for download as a nightly snapshot, but if it were me, I'd take the time to select particular programs from the public CVS folders -- http://anonscm.debian.org/viewvc/shootout/shootout/bench/
 I also think if we have the setup ready we could 
 convince the site maintainer to integrate D into the suite.

I don't think so ;-)
Jul 25 2012
prev sibling parent Isaac Gouy <igouy2 yahoo.com> writes:
Walter Bright Wrote:

-snip-
 (Back in the olden days, when men were men and the sun revolved about the
 earth, everyone raved about Borland's compilation speed. In tests I ran myself,
 I found that it was fast, right up until you hit a certain size of source code,
 maybe about 5000 lines. Then, it fell off a cliff, and compile speed was
 terrible. But hey, it looked great in those tiny benchmarks.)

Back in the olden days... "[Wirth] used the compiler's self-compilation speed as a measure of the compiler's quality." http://shootout.alioth.debian.org/dont-jump-to-conclusions.php#app
Jul 25 2012
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Tue, 24 Jul 2012 18:53:25 +0200
Andrej Mitrovic <andrej.mitrovich gmail.com> wrote:

 On 7/24/12, Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:
 snip

I've got a codebase where it takes DMD 15 seconds to output an error message to stdout. The error message is 3000 lines long. (and people thought C++ errors were bad!). It's all thanks to this bug: http://d.puremagic.com/issues/show_bug.cgi?id=8082 The codebase isn't public yet so I can't help you with comparisons. Non-release full builds take 16 seconds for a template-heavy ~12k codebase (without counting lines of external dependencies). I use a lot of static foreach loops btw. Personally I think full builds are very fast compared to C++, although the transition from a small codebase which takes less than a second to compile to a bigger codebase which takes over a dozen seconds to compile is an unpleasant experience. I'd love to see DMD speed up its compile-time features like templates, mixins, static foreach, etc.

Yea. Programs using Goldie ( semitwist.com/goldie ) take a long time to compile (by D standards, not by C++ standards). I tried to benchmark it a while back, and was never really confident in the results I was getting or my understanding of the DMD source, so I never brought it up before. But it *seemed* to be template matching that was the big bottleneck (ie, IIUC, determining which template to instantiate, and I think the function was actually called "match" or something like that). Goldie does make use of a *lot* of that sort of thing.
Jul 24 2012
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Tuesday, July 24, 2012 15:49:38 Nick Sabalausky wrote:
 Yea. Programs using Goldie ( semitwist.com/goldie ) take a long time to
 compile (by D standards, not by C++ standards). I tried to benchmark
 it a while back, and was never really confident in the results I was
 getting or my understanding of the DMD source, so I never brought it up
 before. But it *seemed* to be template matching that was the big
 bottleneck (ie, IIUC, determining which template to instantiate,
 and I think the function was actually called "match" or something like
 that). Goldie does make use of a *lot* of that sort of thing.

I don't have any hard evidence for it, but I've always gotten the impression that it was templates, mixins, and CTFE which really slowed down compilation. Certainly, they increase the memory consumption of the compiler by quite a bit. My guess would be that if we were looking to improve the compiler's performance, that's where we'd need to focus. But we'd have to actually profile the compiler on a variety of projects to be sure of that (which is at least partially related to what Andrei is suggesting). - Jonathan M Davis
Jul 24 2012
prev sibling next sibling parent "Peter Alexander" <peter.alexander.au gmail.com> writes:
On Tuesday, 24 July 2012 at 22:19:07 UTC, Jonathan M Davis wrote:
 On Tuesday, July 24, 2012 15:49:38 Nick Sabalausky wrote:
 Yea. Programs using Goldie ( semitwist.com/goldie ) take a 
 long time to
 compile (by D standards, not by C++ standards). I tried to 
 benchmark
 it a while back, and was never really confident in the results 
 I was
 getting or my understanding of the DMD source, so I never 
 brought it up
 before. But it *seemed* to be template matching that was the 
 big
 bottleneck (ie, IIUC, determining which template to 
 instantiate,
 and I think the function was actually called "match" or 
 something like
 that). Goldie does make use of a *lot* of that sort of thing.

I don't have any hard evidence for it, but I've always gotten the impression that it was templates, mixins, and CTFE which really slowed down compilation. Certainly, they increase the memory consumption of the compiler by quite a bit. My guess would be that if we were looking to improve the compiler's performance, that's where we'd need to focus. But we'd have to actually profile the compiler on a variety of projects to be sure of that (which is at least partially related to what Andrei is suggesting). - Jonathan M Davis

There's also the nasty O(n^2) optimiser issue. http://d.puremagic.com/issues/show_bug.cgi?id=7157
Jul 24 2012
prev sibling next sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 24/07/12 15:34, Andrei Alexandrescu wrote:
 One idea would be to take a real, non-trivial application, written in both D
and
 another compiled language. We then can measure build times for both
 applications, and also measure the relative speeds of the generated
executables.

Suggest that this gets done with all 3 of the main D compilers, not just DMD. I'd like to see the tradeoff between compilation speed and executable speed that one gets between them.

I do have some pretty much equivalent simulation code written in both D and C++. For a rough comparison:

Language   Compiler   Compile time (s)   Runtime (s)
D          GDC        1.5                25.3
D          DMD        0.4                52.1
C++        g++        2.3                21.8
C++        Clang++    1.8                27.6

DMD used is a fairly recent pull from GitHub; GDC is the 4.6.3 package found in Ubuntu 12.04. I don't have a working LDC2 compiler on my system. :-(

The C++ has a template-based policy class design, while the D code uses template mixins to similar effect. The D code can be found here: https://github.com/WebDrake/Dregs

While I'm happy to also share the C++ code, I confess I'm shy to do so given that it probably represents a travesty of the beautiful ideas Andrei developed on policy class design ... :-)

Best wishes,

-- Joe
Jul 24 2012
next sibling parent Ali Çehreli <acehreli yahoo.com> writes:
On 07/24/2012 04:54 PM, Joseph Rushton Wakeling wrote:

 Language   Compiler   Compile time (s)   Runtime (s)
 D          GDC        1.5                25.3
 D          DMD        0.4                52.1
 C++        g++        2.3                21.8
 C++        Clang++    1.8                27.6

Those C++ builds have very few C++ source files, right? In my experience, each source file takes a few seconds, except the most trivial ones, because the standard library headers are compiled over and over again. :/

Ali
Jul 24 2012
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/25/12 4:37 AM, David Nadlinger wrote:
 On Tuesday, 24 July 2012 at 23:55:03 UTC, Joseph Rushton Wakeling wrote:
 For a rough comparison: […]

Even for a rough comparison of compile times, you need to include compiler switches used. For example, the difference between Clang -O0 vs. Clang -O3 is usually huge.

Yes, and both debug and release build times are important. Andrei
Jul 25 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:
 Yes, and both debug and release build times are important.

Optimized build time comparisons are less relevant - are you really willing to trade off faster optimization times for less optimization? I think it's more the time of the edit-compile-debug loop, which would be the unoptimized build times.
Jul 25 2012
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/25/12 1:24 PM, Walter Bright wrote:
 On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:
 Yes, and both debug and release build times are important.

Optimized build time comparisons are less relevant - are you really willing to trade off faster optimization times for less optimization? I think it's more the time of the edit-compile-debug loop, which would be the unoptimized build times.

There are systems that only work in release mode (e.g. performance is part of the acceptability criteria) and for which debugging means watching logs. So the problem is not faster optimization times for less optimization (though that's possible, too), but instead build times for a given level of optimization. Andrei
Jul 25 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2012 10:50 AM, Andrei Alexandrescu wrote:
 On 7/25/12 1:24 PM, Walter Bright wrote:
 On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:
 Yes, and both debug and release build times are important.

Optimized build time comparisons are less relevant - are you really willing to trade off faster optimization times for less optimization? I think it's more the time of the edit-compile-debug loop, which would be the unoptimized build times.

There are systems that only work in release mode (e.g. performance is part of the acceptability criteria) and for which debugging means watching logs. So the problem is not faster optimization times for less optimization (though that's possible, too), but instead build times for a given level of optimization.

The easy way to improve optimized build times is to do less optimization. I'm saying be careful what you ask for - you might get it!
Jul 25 2012
prev sibling next sibling parent reply Rainer Schuetze <r.sagitario gmx.de> writes:
On 25.07.2012 19:24, Walter Bright wrote:
 On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:
 Yes, and both debug and release build times are important.

Optimized build time comparisons are less relevant - are you really willing to trade off faster optimization times for less optimization? I think it's more the time of the edit-compile-debug loop, which would be the unoptimized build times.

The "edit-compile-debug loop" is a use case where the D module system does not shine so well. Compare build times when editing only a single source file: with the help of incremental linking, building a large C++ project takes only seconds. In contrast, a D project usually recompiles everything from scratch with every little change.
Jul 25 2012
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/25/12 4:53 PM, Rainer Schuetze wrote:
 On 25.07.2012 19:24, Walter Bright wrote:
 On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:
 Yes, and both debug and release build times are important.

Optimized build time comparisons are less relevant - are you really willing to trade off faster optimization times for less optimization? I think it's more the time of the edit-compile-debug loop, which would be the unoptimized build times.

The "edit-compile-debug loop" is a use case where the D module system does not shine so well. Compare build times when only editing a single source file: With the help of incremental linking, building a large C++ project only takes seconds. In contrast, the D project usually recompiles everything from scratch with every little change.

The same dependency management techniques can be applied to large D projects, as to large C++ projects. (And of course there are a few new ones.) What am I missing? Andrei
Jul 25 2012
next sibling parent Rainer Schuetze <r.sagitario gmx.de> writes:
On 25.07.2012 23:31, Andrei Alexandrescu wrote:
 On 7/25/12 4:53 PM, Rainer Schuetze wrote:
 On 25.07.2012 19:24, Walter Bright wrote:
 On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:
 Yes, and both debug and release build times are important.

Optimized build time comparisons are less relevant - are you really willing to trade off faster optimization times for less optimization? I think it's more the time of the edit-compile-debug loop, which would be the unoptimized build times.

The "edit-compile-debug loop" is a use case where the D module system does not shine so well. Compare build times when only editing a single source file: With the help of incremental linking, building a large C++ project only takes seconds. In contrast, the D project usually recompiles everything from scratch with every little change.

The same dependency management techniques can be applied to large D projects, as to large C++ projects. (And of course there are a few new ones.) What am I missing?

Incremental compilation does not work so well because:

- with combined declaration and implementation in the source, you also get the full dependencies when you just need a short declaration
- even with .di files, imports are viral: you must be very careful if you try to remove them from .di files, because you might break runtime initialization order
- .di-file generation has other known problems (e.g. missing implementations for CTFE)

I thought about implementing incremental builds for Visual D, but soon gave up when I noticed that compiling a single file in a medium-sized project (Visual D itself) takes almost as long as recompiling the whole thing. I suspect the problem is that dmd fully analyzes all the imported files and only skips code generation for them. It could be much faster if it did the analysis lazily (though this might slightly change evaluation order and skip error messages in unused code blocks).
Jul 25 2012
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-07-26 00:17, Nick Sabalausky wrote:

 Aren't there still issues with what object files DMD chooses to store
 instantiated templates into? Or has that all been fixed?

 The xfbuild developers wrestled a lot with this and AIUI eventually
 gave up. The symptoms are that you'll eventually start getting linker
 errors related to template instantiations, which will be
 fixed when you then do a complete rebuild.

I'm pretty sure nothing has changed. But Walter said that if you use the -lib flag, it will output the templates to all object files. That will complicate things a bit, but it's still possible to make it work.

--
/Jacob Carlborg
Jul 26 2012
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/25/2012 1:53 PM, Rainer Schuetze wrote:
 The "edit-compile-debug loop" is a use case where the D module system does not
 shine so well. Compare build times when only editing a single source file:
 With the help of incremental linking, building a large C++ project only takes
 seconds.
 In contrast, the D project usually recompiles everything from scratch with
every
 little change.

I suspect that's one of two possibilities:

1. Everything is passed on one command line to dmd. This, of course, requires dmd to recompile everything.

2. Modules are not separated into .d and .di files. Hence every module that imports a .d file has to, at least, parse and semantically analyze the whole thing, although it won't optimize or generate code for it.

As for incremental linking, optlink has always been faster at doing a full link than the Microsoft linker is at doing an incremental link.
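Point 2 can be sketched with a hand-written interface file (module and function names here are hypothetical; dmd's -H switch can also generate .di files automatically):

```d
// widget.di -- declaration-only interface file; importing modules need
// to parse and analyze just this, not the implementation.
module widget;
int frob(int x);
```

```d
// widget.d -- the full implementation, compiled separately.
module widget;
int frob(int x) { return x * 2; }
```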
Jul 25 2012
parent Rainer Schuetze <r.sagitario gmx.de> writes:
On 26.07.2012 03:48, Walter Bright wrote:
 On 7/25/2012 1:53 PM, Rainer Schuetze wrote:
 The "edit-compile-debug loop" is a use case where the D module system
 does not
 shine so well. Compare build times when only editing a single source
 file:
 With the help of incremental linking, building a large C++ project
 only takes
 seconds.
 In contrast, the D project usually recompiles everything from scratch
 with every
 little change.

I suspect that's one of two possibilities: 1. everything is passed on one command line to dmd. This, of course, requires dmd to recompile everything. 2. modules are not separated into .d and .di files. Hence every module that imports a .d file has to, at least, parse and semantically analyze the whole thing, although it won't optimize or generate code for it.

I think working with di-files is too painful. A lot of the analysis in imported files could be skipped.
 As for incremental linking, optlink has always been faster at doing a
 full link than the Microsoft linker does for an incremental link.

Agreed, incremental linking is just a work-around for the linker's slowness.
Jul 25 2012
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2012-07-25 23:56, Jonathan M Davis wrote:

 D should actually compile _faster_ if you compile everything at once -
 certainly for smaller projects - since it then only has to lex and parse each
 module once. Incremental builds avoid having to fully compile each module
 every time, but there's still plenty of extra lexing and parsing which goes
 on.

 I don't know how much it shifts with large projects (maybe incremental builds
 actually end up being better then, because you have enough files which aren't
 related to one another that the amount of code which needs to be relexed and
 reparsed is minimal in comparison to the number of files), but you can do
 incremental building with dmd if you want to. It's just more typical to do it
 all at once, because for most projects, that's faster. So, I don't see how
 there's a complaint against D here.

Incremental builds don't have to mean "pass a single file to the compiler". You can start by passing all the files at once to the compiler and then later you just pass all the files that have changed, at once. But I don't know how much of a difference it will make compared to recompiling the whole project. -- /Jacob Carlborg
Jul 26 2012
parent reply =?UTF-8?B?QWxpIMOHZWhyZWxp?= <acehreli yahoo.com> writes:
On 07/26/2012 02:28 AM, Jacob Carlborg wrote:
 On 2012-07-25 23:56, Jonathan M Davis wrote:

 Incremental builds don't have to mean "pass a single file to the
 compiler". You can start by passing all the files at once to the
 compiler and then later you just pass all the files that have changed,
 at once.

GNU make has the special $? automatic variable that may help with the above: "The names of all the prerequisites that are newer than the target, with spaces between them."

http://www.gnu.org/software/make/manual/make.html#index-g_t_0024_003f-944

Ali
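To illustrate, here is a minimal Makefile sketch (file names and layout are hypothetical) that uses $? so that only the D modules changed since the last successful build are handed to the compiler again:

```make
# Hypothetical project: "app" built from three D modules.
SRCS = a.d b.d c.d

app: build.stamp
	dmd -ofapp a.o b.o c.o

# $? expands to only those prerequisites newer than build.stamp,
# so unchanged modules are not recompiled.
build.stamp: $(SRCS)
	dmd -c $?
	touch build.stamp
```

This is just a sketch of the idea being discussed, not a complete build system; dependencies between modules (imports) are not tracked here.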
Jul 26 2012
parent Jacob Carlborg <doob me.com> writes:
On 2012-07-26 19:54, Ali Çehreli wrote:

 GNU make has the special $? prerequisite that may help with the above:
 "The names of all the prerequisites that are newer than the target, with
 spaces between them. "


 http://www.gnu.org/software/make/manual/make.html#index-g_t_0024_003f-944

 Ali

I'm trying to avoid "make" as much as possible. -- /Jacob Carlborg
Jul 26 2012
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Wednesday, July 25, 2012 22:53:08 Rainer Schuetze wrote:
 On 25.07.2012 19:24, Walter Bright wrote:
 On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:
 Yes, and both debug and release build times are important.

Optimized build time comparisons are less relevant - are you really willing to trade off faster optimization times for less optimization? I think it's more the time of the edit-compile-debug loop, which would be the unoptimized build times.

The "edit-compile-debug loop" is a use case where the D module system does not shine so well. Compare build times when only editing a single source file: With the help of incremental linking, building a large C++ project only takes seconds. In contrast, the D project usually recompiles everything from scratch with every little change.

D should actually compile _faster_ if you compile everything at once - certainly for smaller projects - since it then only has to lex and parse each module once. Incremental builds avoid having to fully compile each module every time, but there's still plenty of extra lexing and parsing which goes on.

I don't know how much it shifts with large projects (maybe incremental builds actually end up being better then, because you have enough files which aren't related to one another that the amount of code which needs to be relexed and reparsed is minimal in comparison to the number of files), but you can do incremental building with dmd if you want to. It's just more typical to do it all at once, because for most projects, that's faster. So, I don't see how there's a complaint against D here.

- Jonathan M Davis
Jul 25 2012
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 7/25/12, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 D should actually compile _faster_ if you compile everything at once -
 certainly for smaller projects - since it then only has to lex and parse
 each
 module once. Incremental builds avoid having to fully compile each module
 every time, but there's still plenty of extra lexing and parsing which goes on.

That's assuming that the lexing/parsing is the bottleneck for DMD. For example: a full build of WindowsAPI takes 14.6 seconds on my machine. But when compiling one module at a time and using parallelism it takes 7 seconds instead. And all it takes is a simple parallel loop.
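A sketch of the kind of parallel loop Andrej describes, using std.parallelism to run one dmd process per module; the module list and compiler invocation are assumptions for illustration:

```d
import std.parallelism : parallel;
import std.process : spawnProcess, wait;
import std.stdio : writeln;

void main()
{
    // Hypothetical module list; in practice this would be
    // discovered by globbing the source tree.
    auto modules = ["win32/core.d", "win32/gdi.d", "win32/user.d"];

    // parallel() distributes iterations across worker threads, so
    // several independent dmd processes run at once on a multicore
    // machine; each compiles a single module to an object file.
    foreach (m; parallel(modules, 1))
    {
        auto pid = spawnProcess(["dmd", "-c", m]);
        wait(pid);
    }
    writeln("compiled ", modules.length, " modules");
}
```

The object files would then be linked in a final step; whether this beats a single all-at-once invocation depends on how much lexing/parsing is duplicated versus how many cores are available.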
Jul 25 2012
prev sibling parent "SomeDude" <lovelydear mailmetrash.com> writes:
On Thursday, 26 July 2012 at 18:57:30 UTC, Jacob Carlborg wrote:
 On 2012-07-26 19:54, Ali Çehreli wrote:

 GNU make has the special $? prerequisite that may help with 
 the above:
 "The names of all the prerequisites that are newer than the 
 target, with
 spaces between them. "


 http://www.gnu.org/software/make/manual/make.html#index-g_t_0024_003f-944

 Ali

I'm trying to avoid "make" as much as possible.

+1
Jul 28 2012
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/26/12 4:15 AM, Joseph Rushton Wakeling wrote:
 On 25/07/12 16:13, Andrei Alexandrescu wrote:
 Yes, and both debug and release build times are important.

If you can advise some flag combinations (for D and C++) you'd like to see tested, I'll happily do them.

The classic ones are: (a) no flags at all, (b) -O -release -inline, (c) -O -release -inline -noboundscheck. You can skip the latter as it won't impact build time.

Andrei
Jul 26 2012
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, July 26, 2012 18:00:21 Joseph Rushton Wakeling wrote:
 On 26/07/12 15:42, Andrei Alexandrescu wrote:
 If you can advise some flag combinations (for D and C++) you'd like to
 see tested, I'll happily do them.

 The classic ones are: (a) no flags at all, (b) -O -release -inline, (c) -O -release -inline -noboundscheck.

 Here's a little table of DMD to GDC comparisons for the Dregs codebase:

                          ----------DMD----------  ----------GDC----------
  compiler flags          compile time   runtime   compile time   runtime
  -O -release -inline     0.43s          52s       1.51s          25s
  -O -release             0.35s          47s       1.50s          25s
  -O -noboundscheck       0.35s          56s       1.66s          25s
  -O -inline              0.47s          1m 5s     1.94s          45s
  -O                      0.36s          1m 5s     1.98s          45s
  -release -inline        0.31s          1m 3s     0.63s          1m 3s
  -release                0.29s          1m 3s     0.63s          1m 3s
  -inline                 0.32s          1m 24s    0.70s          1m 26s
  -noboundscheck          0.29s          1m 10s    0.666s         1m 9s
  [none]                  0.29s          1m 24s    0.72s          1m 26s
  -debug                  0.30s          1m 24s    0.70s          1m 26s
  -unittest               0.42s          1m 25s    0.75s          1m 26s
  -debug -unittest        0.42s          1m 25s    0.78s          1m 26s

Clearly -O is where the big runtime speed difference is between dmd and gdc, which _is_ a bit obvious, but I'm surprised that -inline made no difference, since dmd is generally accused of being poor at inlining. That probably just indicates that it's a frontend issue (which I suppose makes sense when I think about it).

I guess that the way to go if you want to maximize both your efficiency and the code's efficiency is to do most of the coding with dmd but generate the final program with gdc (though obviously building and testing with both the whole way is probably necessary to ensure stability; still, much of that could be automated and not affect programmer efficiency).

- Jonathan M Davis
Jul 26 2012
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Thursday, 26 July 2012 at 18:59:14 UTC, Jonathan M Davis wrote:
 Clearly -O is where the big runtime speed difference is at 
 between dmd and gdc,
 which _is_ a bit obvious, but I'm surprised that -inline had no 
 differences,
 since dmd is generally accused of being poor at inlining. That 
 probably just
 indicates that it's a frontend issue (which I suppose makes 
 sense when I think
 about it).

GDC probably performs inlining by default on -O2/-O3, just like LDC does.

Also note that for the -release case (any performance measurements without it are most probably not worth it due to all the extra _d_invariant, etc. calls), -inline seems to increase the runtime of the DMD-produced executable by 10%.

For inlining, you inevitably have to rely on heuristics, and there will always be cases where it slows down execution (worst case: the slight improvement in code size causes cache thrashing in a hot path), but 10% in a fairly standard application seems to be quite a lot.

David
Jul 26 2012
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Thursday, 26 July 2012 at 18:59:14 UTC, Jonathan M Davis wrote:
 […] That probably just
 indicates that it's a frontend issue (which I suppose makes 
 sense when I think
 about it).

Oh, and I don't know what exactly you are referring to here, but any difference between DMD and GDC is likely not a frontend issue, as GDC uses the DMD frontend, with only minor modifications. David
Jul 26 2012
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 26/07/12 20:27, David Nadlinger wrote:
 GDC probably performs inlining by default on -O2/-O3, just like LDC does.

I was surprised that using -inline alone (without any optimization option) doesn't produce any meaningful improvement. It cuts maybe 1s off the DMD-compiled runtime, but it's not clear to me that actually corresponds to a reliable difference. Perhaps GDC just ignores the -inline flag ... ?

I suppose it's possible that this is code that does not respond well to inlining, although I'd have thought the obvious optimization would be to inline many of the object methods that are only called internally and that are called in a tight loop:

    do
    {
        userDivergence(ratings);
        userReputation(ratings);
        reputationObjectOld_[] = reputationObject_[];
        objectReputation(ratings);

        diff = 0;
        foreach(size_t o, Reputation rep; reputationObject_)
        {
            auto aux = rep - reputationObjectOld_[o];
            diff += aux*aux;
        }
        ++iterations;
    }
    while (diff > convergence_);

I might tweak it manually so that userDivergence(), userReputation() and objectReputation() are inline, and see if it makes any difference.
Jul 26 2012
prev sibling next sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 26 July 2012 23:07, Joseph Rushton Wakeling
<joseph.wakeling webdrake.net> wrote:
 On 26/07/12 20:27, David Nadlinger wrote:
 GDC probably performs inlining by default on -O2/-O3, just like LDC does.

I was surprised that using -inline alone (without any optimization option) doesn't produce any meaningful improvement. It cuts maybe 1s off the DMD-compiled runtime, but it's not clear to me that actually corresponds to a reliable difference. Perhaps GDC just ignores the -inline flag ... ?

-inline is mapped to -finline-functions in GDC. Inlining is possibly done, but only in the backend.

Some extra notes to bear in mind about GDC:
1) All methods and function literals are marked as 'inline' by default.
2) Cross module inlining does not occur if you are compiling one-at-a-time.

Regards
--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
Jul 26 2012
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 27/07/12 07:29, Iain Buclaw wrote:
 -inline is mapped to -finline-functions in GDC.  Inlining is possibly
 done, but only in the backend.

 Some extra notes to bear in mind about GDC.
 1) All methods and function literals are marked as 'inline' by default.
 2) Cross module inlining does not occur if you are compiling one-at-a-time.

Good to know. In this case it's all compiled together in one go:

    ###############################################
    DC = gdmd
    DFLAGS = -O -release -inline

    DREGSRC = dregs/core.d dregs/codetermine.d

    all: test

    test: test.d $(DREGSRC)
    	$(DC) $(DFLAGS) -of$@ $^

    .PHONY: clean
    clean:
    	rm -f test *.o
    ###############################################

I'm just surprised that using -inline produces no measurable difference at all in performance for GDC, whether or not any other optimization flags are used. As I said, maybe I'll test some manual inlining and see what difference it might make ...
Jul 27 2012
prev sibling parent Iain Buclaw <ibuclaw ubuntu.com> writes:
On 27 July 2012 09:09, Joseph Rushton Wakeling
<joseph.wakeling webdrake.net> wrote:
 On 27/07/12 07:29, Iain Buclaw wrote:
 -inline is mapped to -finline-functions in GDC.  Inlining is possibly
 done, but only in the backend.

 Some extra notes to bear in mind about GDC.
 1) All methods and function literals are marked as 'inline' by default.
 2) Cross module inlining does not occur if you are compiling
 one-at-a-time.

 Good to know. In this case it's all compiled together in one go:

     ###############################################
     DC = gdmd
     DFLAGS = -O -release -inline

     DREGSRC = dregs/core.d dregs/codetermine.d

     all: test

     test: test.d $(DREGSRC)
     	$(DC) $(DFLAGS) -of$@ $^

     .PHONY: clean
     clean:
     	rm -f test *.o
     ###############################################

 I'm just surprised that using -inline produces no measurable difference at all in performance for GDC, whether or not any other optimization flags are used. As I said, maybe I'll test some manual inlining and see what difference it might make ...

My best assumption would be that it says more about the way the program itself was written than about the compiler.

Regards
--
Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
Jul 27 2012
prev sibling parent =?UTF-8?B?QWxpIMOHZWhyZWxp?= <acehreli yahoo.com> writes:
On 07/25/2012 09:46 AM, ixid wrote:
 beautiful ideas Andrei developed on policy class design

Where would one find these ideas?

There are some papers at Andrei's site:

http://erdani.com/index.php/articles/

Search for "policy" there. Policy-based design is the main topic in Andrei's book, "Modern C++ Design":

http://www.amazon.com/Modern-Design-Generic-Programming-Patterns/dp/0201704315

That book covers a publicly-available library, Loki, again by Andrei:

http://loki-lib.sourceforge.net/

Ali
Jul 25 2012
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Tuesday, 24 July 2012 at 23:55:03 UTC, Joseph Rushton Wakeling 
wrote:
  For a rough comparison: […]

Even for a rough comparison of compile times, you need to include compiler switches used. For example, the difference between Clang -O0 vs. Clang -O3 is usually huge. David
Jul 25 2012
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 25/07/12 09:37, David Nadlinger wrote:
 On Tuesday, 24 July 2012 at 23:55:03 UTC, Joseph Rushton Wakeling wrote:
  For a rough comparison: […]

Even for a rough comparison of compile times, you need to include compiler switches used. For example, the difference between Clang -O0 vs. Clang -O3 is usually huge.

C++ compiler and library flags:

    -ansi -pedantic -Wall -O3 -march=native -mtune=native -I. -DHAVE_INLINE -lm -lgsl -lgslcblas -lgrsl

dmd and gdmd flags:

    -O -release -inline

(which for gdmd corresponds to -O3 -fweb -frelease -finline-functions -I /usr/local/include/d2/).

And yes, as Ali observed, this is a very small codebase (D is 3 files, 374 lines total; C++ is 18 files, 1266 lines -- so the comparison isn't 100% fair; but on the other hand, that's testament to how D can be used for more elegant code....).
Jul 25 2012
prev sibling next sibling parent "ixid" <nuaccount gmail.com> writes:
beautiful ideas Andrei developed on policy class design

Where would one find these ideas?
Jul 25 2012
prev sibling next sibling parent "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Wed, 25 Jul 2012 18:46:58 +0200, ixid <nuaccount gmail.com> wrote:

 beautiful ideas Andrei developed on policy class design

Where would one find these ideas?

http://www.amazon.com/Modern-Design-Generic-Programming-Patterns/dp/0201704315 -- Simen
Jul 25 2012
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Wed, 25 Jul 2012 17:31:10 -0400
Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> wrote:

 On 7/25/12 4:53 PM, Rainer Schuetze wrote:
 The "edit-compile-debug loop" is a use case where the D module
 system does not shine so well. Compare build times when only
 editing a single source file:
 With the help of incremental linking, building a large C++ project
 only takes seconds.
 In contrast, the D project usually recompiles everything from
 scratch with every little change.

The same dependency management techniques can be applied to large D projects, as to large C++ projects. (And of course there are a few new ones.) What am I missing?

Aren't there still issues with what object files DMD chooses to store instantiated templates into? Or has that all been fixed? The xfbuild developers wrestled a lot with this and AIUI eventually gave up. The symptoms are that you'll eventually start getting linker errors related to template instantiations, which will be fixed when you then do a complete rebuild.
Jul 25 2012
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, July 26, 2012 00:34:07 Andrej Mitrovic wrote:
 On 7/25/12, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 D should actually compile _faster_ if you compile everything at once -
 certainly for smaller projects - since it then only has to lex and parse
 each
 module once. Incremental builds avoid having to fully compile each module
 every time, but there's still plenty of extra lexing and parsing which
 goes on.


Not necessarily. The point is that there's extra work that has to be done when compiling separately. So, whether it takes more or less time depends on how much other work you're avoiding by doing an incremental build. Certainly, I'd expect a full incremental build from scratch to take longer than one which was not incremental.
 For example: a full build of  WindowsAPI takes 14.6 seconds on my machine.
 But when compiling one module at a time and using parallelism it takes
 7 seconds instead. And all it takes is a simple parallel loop.

Parallelism? How on earth do you manage that? dmd has no support for running on multiple threads AFAIK. Do you run multiple copies of dmd at once? Certainly, compiling files in parallel changes things. You've got multiple cores working on it at that point, so the equation is completely different. - Jonathan M Davis
Jul 25 2012
parent Jacob Carlborg <doob me.com> writes:
On 2012-07-26 00:42, Jonathan M Davis wrote:

 I'd expect a full incremental build from scratch to take longer than one
 which was not incremental.

Why? Just pass all the files to the compiler at once. Nothing says an incremental build needs to pass a single file to the compiler. -- /Jacob Carlborg
Jul 26 2012
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 7/26/12, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 Parallelism? How on earth do you manage that? dmd has no support for running
 on multiple threads AFAIK.
 You've got multiple
 cores working on it at that point, so the equation is completely different.

That's exactly my point: you can take advantage of parallelism externally if you compile module-by-module simply by invoking multiple DMD processes. And who doesn't own a multicore machine these days?
Jul 25 2012
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 7/26/12, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 Certainly, I'd expect a full incremental build from scratch to take longer
than one which was not incremental.

Well that would probably only be done once. With full builds you do it every time.
Jul 25 2012
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, July 26, 2012 00:44:14 Andrej Mitrovic wrote:
 On 7/26/12, Jonathan M Davis <jmdavisProg gmx.com> wrote:
 Parallelism? How on earth do you manage that? dmd has no support for
 running on multiple threads AFAIK.
 You've got multiple
 cores working on it at that point, so the equation is completely
 different.

That's exactly my point, you can take advantage of parallelism externally if you compile module-by-module simply by invoking multiple DMD processes. And who doesn't own a multicore machine these days?

Well, regardless, my and Andrei's point was that C++ has nothing on us here. We can do incremental just fine. The fact that most people just build the whole program from scratch every time is irrelevant. That just means that the build times are fast enough for most people not to care about doing incremental builds, not that they can't do them. - Jonathan M Davis
Jul 25 2012
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 25/07/12 16:13, Andrei Alexandrescu wrote:
 Yes, and both debug and release build times are important.

If you can advise some flag combinations (for D and C++) you'd like to see tested, I'll happily do them.
Jul 26 2012
prev sibling next sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 26/07/12 15:42, Andrei Alexandrescu wrote:
 If you can advise some flag combinations (for D and C++) you'd like to
 see tested, I'll happily do them.

 The classic ones are: (a) no flags at all, (b) -O -release -inline, (c) -O -release -inline -noboundscheck.

Here's a little table of DMD to GDC comparisons for the Dregs codebase:

                         ----------DMD----------  ----------GDC----------
 compiler flags          compile time   runtime   compile time   runtime
 -O -release -inline     0.43s          52s       1.51s          25s
 -O -release             0.35s          47s       1.50s          25s
 -O -noboundscheck       0.35s          56s       1.66s          25s
 -O -inline              0.47s          1m 5s     1.94s          45s
 -O                      0.36s          1m 5s     1.98s          45s
 -release -inline        0.31s          1m 3s     0.63s          1m 3s
 -release                0.29s          1m 3s     0.63s          1m 3s
 -inline                 0.32s          1m 24s    0.70s          1m 26s
 -noboundscheck          0.29s          1m 10s    0.666s         1m 9s
 [none]                  0.29s          1m 24s    0.72s          1m 26s
 -debug                  0.30s          1m 24s    0.70s          1m 26s
 -unittest               0.42s          1m 25s    0.75s          1m 26s
 -debug -unittest        0.42s          1m 25s    0.78s          1m 26s
Jul 26 2012
prev sibling next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, July 26, 2012 21:29:57 David Nadlinger wrote:
 On Thursday, 26 July 2012 at 18:59:14 UTC, Jonathan M Davis wrote:
 […] That probably just
 indicates that it's a frontend issue (which I suppose makes
 sense when I think
 about it).

Oh, and I don't know what exactly you are referring to here, but any difference between DMD and GDC is likely not a frontend issue, as GDC uses the DMD frontend, with only minor modifications.

That was my point. -inline seems to be pretty much identical between the two compilers, and if the inlining is done in the frontend, then that makes sense. Thinking on it, it makes sense to me that it would be in the frontend, but I don't know where it actually is.

- Jonathan M Davis
Jul 26 2012
prev sibling next sibling parent "David Nadlinger" <see klickverbot.at> writes:
On Thursday, 26 July 2012 at 19:36:51 UTC, Jonathan M Davis wrote:
 That was my point. -inline seems to be pretty much identical 
 between the two
 compilers, and if the inlining is done in the frontend, then 
 that makes sense.
 Thinking on it, it makes sense to me that it would be in the 
 frontend, but I
 don't know where it actually is.

Ah, okay, I see what you meant. But no, as far as I'm aware, GDC doesn't use DMD's inliner, but rather relies on the GCC one. LDC does the same: we entirely disable DMD's inlining code; it turned out to just not be worth it.

David
Jul 26 2012
prev sibling next sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On Thu, 26 Jul 2012 10:54:07 -0700
Ali Çehreli <acehreli yahoo.com> wrote:

 On 07/26/2012 02:28 AM, Jacob Carlborg wrote:
  > On 2012-07-25 23:56, Jonathan M Davis wrote:

  > Incremental builds don't have to mean "pass a single file to the
  > compiler". You can start by passing all the files at once to the
  > compiler and then later you just pass all the files that have
  > changed, at once.

 GNU make has the special $? prerequisite that may help with the
 above: "The names of all the prerequisites that are newer than the
 target, with spaces between them."

    http://www.gnu.org/software/make/manual/make.html#index-g_t_0024_003f-944

So in other words, it'll completely crap out when a path contains spaces? (What is this, 1994?)
Jul 26 2012
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 24 Jul 2012 10:34:57 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Hello,


 I was talking to Walter on how to define a good study of D's compilation  
 speed. We figured that we clearly need a good baseline, otherwise  
 numbers have little meaning.

Might I draw attention again to this bug:

http://d.puremagic.com/issues/show_bug.cgi?id=4900

Granted, this is really freaking old. A re-application of profiling should be done. But in general, what I have observed from DMD compiling is that the number and size (string size) of symbols is what really bogs it down. Most of the time, it's lightning fast.

The reason the dcollections unit test taxes it so much is because I'm instantiating 15 objects for each container type, and each one has humongous symbols, a consequence of so many template arguments.

-Steve
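A small illustration of the symbol growth Steve describes; the struct here is hypothetical, but .mangleof shows how each level of template nesting lengthens the mangled name the compiler must construct and compare:

```d
import std.stdio : writeln;

// Hypothetical container-like template; each instantiation embeds
// the mangled name of its type argument inside its own mangled name.
struct Wrap(T) { T value; }

void main()
{
    // The mangled-name lengths grow with nesting depth, which is
    // roughly the effect that makes heavily templated code (like the
    // dcollections unit tests) expensive for the compiler and linker.
    writeln(Wrap!int.mangleof.length);
    writeln(Wrap!(Wrap!int).mangleof.length);
    writeln(Wrap!(Wrap!(Wrap!int)).mangleof.length);
}
```

This is only a sketch of the mechanism; the exact growth rate depends on the mangling scheme the compiler version uses.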
Aug 24 2012
prev sibling parent "d_follower" <d_follower fakemail.com> writes:
On Tuesday, 24 July 2012 at 14:34:58 UTC, Andrei Alexandrescu 
wrote:
 Hello,


 I was talking to Walter on how to define a good study of D's 
 compilation speed. We figured that we clearly need a good 
 baseline, otherwise numbers have little meaning.

 One idea would be to take a real, non-trivial application, 
 written in both D and another compiled language. We then can 
 measure build times for both applications, and also measure the 
 relative speeds of the generated executables.

 Although it sounds daunting to write the same nontrivial 
 program twice, it turns out such an application does exist: 
 dmdscript, a Javascript engine written by Walter in both C++ 
 and D. It has over 40KLOC so it's of a good size to play with.

 What we need is a volunteer who dusts off the codebase (e.g. 
 the D source is in D1 and should be adjusted to compile with 
 D2), run careful measurements, and show the results. Is anyone 
 interested?


 Thanks,

 Andrei

You can try testing DMD (written in C++) against DDMD (written in D). I don't think you can find a fairer comparison (both projects are in sync - though dated - and the project size is fairly large).
Aug 24 2012