
digitalmars.D - Honey, I shrunk the build times

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
https://github.com/D-Programming-Language/phobos/pull/3379

Punchline: major reduction of both total run time and memory consumed.


Andrei
Jun 06 2015
next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Saturday, 6 June 2015 at 21:42:47 UTC, Andrei Alexandrescu 
wrote:
 https://github.com/D-Programming-Language/phobos/pull/3379

 Punchline: major reduction of both total run time and memory 
 consumed.
Reading makefiles always gives me a headache. For those of us who can't just glance through the changes and quickly decipher them, what did you actually do? - Jonathan M Davis
Jun 06 2015
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/6/15 5:45 PM, Jonathan M Davis wrote:
 On Saturday, 6 June 2015 at 21:42:47 UTC, Andrei Alexandrescu wrote:
 https://github.com/D-Programming-Language/phobos/pull/3379

 Punchline: major reduction of both total run time and memory consumed.
Reading makefiles always gives me a headache. For those of us who can't just glance through the changes and quickly decipher them, what did you actually do?
Thanks for asking. The situation before went like this: to build libphobos2.a, the command would go like this (simplified to just a few files and flags):

dmd -oflibphobos2.a std/datetime.d std/conv.d std/algorithm/comparison.d std/algorithm/iteration.d

So all modules would go together in one command to build the library. With the package-at-a-time approach, we build one directory at a time like this:

dmd -oflibphobos2_std.a std/datetime.d std/conv.d
dmd -oflibphobos2_std_algorithm.a std/algorithm/comparison.d std/algorithm/iteration.d

So now we have two libraries that need to be combined, which is easy:

dmd -oflibphobos2.a libphobos2_std.a libphobos2_std_algorithm.a

and voila, the library is built. This is, strictly speaking, more work:

* Everything in Phobos imports everything else, so effectively we're parsing the entire Phobos twice as much
* There are temporary files being created
* There's an extra final step - files that have just been written need to be read again

However, the key advantage here is that the first two steps can be performed in parallel, and that turns out to be decisive. Time and again I see this: parallel processing almost always ends up doing more work - some of which is wasteful - but in the end it wins. It's counterintuitive sometimes.

This is key to scalability, too. Now, the baseline numbers were without std.experimental.allocator. Recall the baseline time on my laptop was 4.93s. I added allocator, boom, 5.08s - sensible degradation. However, after I merged the per-package builder I got 4.01 seconds.

Andrei
Jun 06 2015
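The two-phase build described above can be sketched as a shell fragment. This is a simplified illustration, not the actual makefile change from the pull request: the file names are the ones from the post, the `-lib` switch tells dmd to emit a static library, and shell job control stands in for the parallelism.

```shell
# Phase 1: build each package into its own archive. The two invocations
# share no outputs, so they can run concurrently.
dmd -lib -oflibphobos2_std.a std/datetime.d std/conv.d &
dmd -lib -oflibphobos2_std_algorithm.a \
    std/algorithm/comparison.d std/algorithm/iteration.d &
wait  # block until both background compiles finish

# Phase 2: a short serial step that merges the per-package archives
# into the final library.
dmd -lib -oflibphobos2.a libphobos2_std.a libphobos2_std_algorithm.a
```

In the actual pull request the concurrency presumably comes from make -j scheduling the independent per-package rules, rather than from shell background jobs; the effect is the same.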
next sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Sat, 06 Jun 2015 21:30:02 -0700, Andrei Alexandrescu wrote:

so in the end, after endless talking how separate compilation sux and
everyone should do one-step combined compilation, separate compilation
wins. it's funny how i'm always right in the end.
Jun 06 2015
next sibling parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 7/06/2015 4:55 p.m., ketmar wrote:
 On Sat, 06 Jun 2015 21:30:02 -0700, Andrei Alexandrescu wrote:

 so in the end, after endless talking how separate compilation sux and
 everyone should do one-step combined compilation, separate compilation
 wins. it's funny how i'm always right in the end.
Nobody is always right. But your way of thinking can be attractive if you like being evil. I'm evil :)
Jun 06 2015
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/6/15 10:00 PM, Rikki Cattermole wrote:
 On 7/06/2015 4:55 p.m., ketmar wrote:
 On Sat, 06 Jun 2015 21:30:02 -0700, Andrei Alexandrescu wrote:

 so in the end, after endless talking how separate compilation sux and
 everyone should do one-step combined compilation, separate compilation
 wins. it's funny how i'm always right in the end.
Nobody is always right. But your way of thinking can be attractive if you like being evil. I'm evil :)
There might be a bit of misunderstanding on what that change does. -- Andrei
Jun 06 2015
next sibling parent Rikki Cattermole <alphaglosined gmail.com> writes:
On 7/06/2015 5:08 p.m., Andrei Alexandrescu wrote:
 On 6/6/15 10:00 PM, Rikki Cattermole wrote:
 On 7/06/2015 4:55 p.m., ketmar wrote:
 On Sat, 06 Jun 2015 21:30:02 -0700, Andrei Alexandrescu wrote:

 so in the end, after endless talking how separate compilation sux and
 everyone should do one-step combined compilation, separate compilation
 wins. it's funny how i'm always right in the end.
Nobody is always right. But your way of thinking can be attractive if you like being evil. I'm evil :)
There might be a bit of misunderstanding on what that change does. -- Andrei
I probably should have removed your original post from that. Really it was meant for ketmar.
Jun 06 2015
prev sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Sat, 06 Jun 2015 22:08:47 -0700, Andrei Alexandrescu wrote:

 On 6/6/15 10:00 PM, Rikki Cattermole wrote:
 On 7/06/2015 4:55 p.m., ketmar wrote:
 On Sat, 06 Jun 2015 21:30:02 -0700, Andrei Alexandrescu wrote:

 so in the end, after endless talking how separate compilation sux and
 everyone should do one-step combined compilation, separate compilation
 wins. it's funny how i'm always right in the end.
Nobody is always right. But your way of thinking can be attractive if you like being evil. I'm evil :)
There might be a bit of misunderstanding on what that change does. -- Andrei
it utilizes "partial separate compilation" to gain speed using parallel builds. the thing a lot of people were talking of before: separate compilation can use multicores with ease, while one-step-all compilation can't without significant changes in compiler internals.
Jun 06 2015
parent reply "weaselcat" <weaselcat gmail.com> writes:
On Sunday, 7 June 2015 at 05:25:21 UTC, ketmar wrote:
 On Sat, 06 Jun 2015 22:08:47 -0700, Andrei Alexandrescu wrote:

 On 6/6/15 10:00 PM, Rikki Cattermole wrote:
 On 7/06/2015 4:55 p.m., ketmar wrote:
 On Sat, 06 Jun 2015 21:30:02 -0700, Andrei Alexandrescu 
 wrote:

 so in the end, after endless talking how separate 
 compilation sux and
 everyone should do one-step combined compilation, separate 
 compilation
 wins. it's funny how i'm always right in the end.
Nobody is always right. But your way of thinking can be attractive if you like being evil. I'm evil :)
There might be a bit of misunderstanding on what that change does. -- Andrei
it utilizes "partial separate compilation" to gain speed using parallel builds. the thing a lot of people were talking of before: separate compilation can use multicores with ease, while one-step-all compilation can't without significant changes in compiler internals.
you'd think that with dmd's module system, achieving compiler-level parallelism wouldn't be so difficult. I guess it stems from dmd having been designed before the free lunch ended.
Jun 07 2015
next sibling parent reply "Temtaime" <temtaime gmail.com> writes:
It's a really bad solution.

Are you building phobos 1000 times a day, so that 5 seconds is really 
long for you?
Separate compilation prevents the compiler from inlining everything.
Jun 07 2015
next sibling parent reply "weaselcat" <weaselcat gmail.com> writes:
On Sunday, 7 June 2015 at 08:24:24 UTC, Temtaime wrote:
 Separate compilation prevents compiler from inlining everything.
only bad compilers
Jun 07 2015
next sibling parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 7 June 2015 at 10:34, weaselcat via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On Sunday, 7 June 2015 at 08:24:24 UTC, Temtaime wrote:

 Separate compilation prevents compiler from inlining everything.
only bad compilers
The way dmd does it, it's almost the same as compiling all object files at once, but only emitting code for one. Then multiply that by 134 modules and you understand why dmd uses a "better together" strategy for compilation.
Jun 07 2015
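Iain's point can be sketched with a hypothetical per-module build (file names are illustrative; this is the classic C-style separate compilation the thread is weighing against all-at-once builds):

```shell
# C-style separate compilation: one object file per module.
# Each dmd invocation re-parses and re-analyzes every module that the
# named file transitively imports, but emits code only for that file.
dmd -c -ofstd_conv.o std/conv.d
dmd -c -ofstd_datetime.o std/datetime.d
# ...repeated for all ~134 modules, redoing that front-end work each
# time - which is why compiling everything in one invocation pays off.
```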
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Sunday, 7 June 2015 at 08:34:50 UTC, weaselcat wrote:
 On Sunday, 7 June 2015 at 08:24:24 UTC, Temtaime wrote:
 Separate compilation prevents compiler from inlining 
 everything.
only bad compilers
All existing compilers AFAIK. There is no point in discussing a theoretical sufficiently advanced compiler when considering actions taken right now. A good compiler should be able to work as a caching daemon and never need separate object files at all, so by that logic we should ban them completely. In practice, creating a library per package is a decent compromise that works well right now, even if it is conceptually imperfect.
Jun 07 2015
parent "weaselcat" <weaselcat gmail.com> writes:
On Sunday, 7 June 2015 at 10:11:26 UTC, Dicebot wrote:
 On Sunday, 7 June 2015 at 08:34:50 UTC, weaselcat wrote:
 On Sunday, 7 June 2015 at 08:24:24 UTC, Temtaime wrote:
 Separate compilation prevents compiler from inlining 
 everything.
only bad compilers
All existing compilers AFAIK. There is no point in discussing theoretical advanced enough compiler when considering actions done right now.
Right off the top of my head, I know ghc and rustc have zero issue with this. Or are we only referring to D compilers?
Jun 07 2015
prev sibling next sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Sun, 07 Jun 2015 08:24:23 +0000, Temtaime wrote:

It's really bad solution.

Are you building phobos 1000 times a day so 5 seconds is really long for
you ?
Separate compilation prevents compiler from inlining everything.
how is that? even if we leave lto aside, the compiler needs the module source anyway. if one uses full .d files instead of .di, nothing can prevent a good compiler from inlining.
Jun 07 2015
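ketmar's claim can be illustrated: the compiler's inliner works from the ASTs of whatever sources it has read, so when full .d files (with function bodies) are on the import path, cross-module inlining remains possible even under separate compilation. A hypothetical sketch, with illustrative file and module names:

```shell
# app.d imports lib; lib.d (full source, not a .di interface) sits on
# the import path (-I.), so the compiler sees lib's function bodies
# while compiling app.d separately, and -inline can expand them.
dmd -c -O -inline -I. app.d
# Had lib been available only as a bodiless .di interface file, there
# would be no function bodies to inline from.
```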
parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 7 June 2015 at 10:51, ketmar via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On Sun, 07 Jun 2015 08:24:23 +0000, Temtaime wrote:

 It's really bad solution.

 Are you building phobos 1000 times a day so 5 seconds is really long for
 you ?
 Separate compilation prevents compiler from inlining everything.
how is that? even if we left lto aside, compiler needs module source anyway. if one will use full .d files instead of .di, nothing can prevent good compiler from inlining.
Semantic analysis is done lazily. No AST, no inline.
Jun 07 2015
parent ketmar <ketmar ketmar.no-ip.org> writes:
On Sun, 07 Jun 2015 11:01:19 +0200, Iain Buclaw via Digitalmars-d wrote:

 On 7 June 2015 at 10:51, ketmar via Digitalmars-d <
 digitalmars-d puremagic.com> wrote:
 On Sun, 07 Jun 2015 08:24:23 +0000, Temtaime wrote:

 It's really bad solution.

 Are you building phobos 1000 times a day so 5 seconds is really long
 for you ?
 Separate compilation prevents compiler from inlining everything.
how is that? even if we left lto aside, compiler needs module source anyway. if one will use full .d files instead of .di, nothing can prevent good compiler from inlining.
Semantic analysis is done lazily. No AST, no inline.
but everything one needs to do semantic analysis is already there. it's just that the calls to `semantic` are absent. with some imaginary "--aggressive-inline" option the compiler could do more semantic calls and inline things properly. sure, that would slow down compilation, but that's why it should be done as an opt-in feature.
Jun 07 2015
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/7/15 1:24 AM, Temtaime wrote:
 It's really bad solution.
No.
 Are you building phobos 1000 times a day so 5 seconds is really long for
 you ?
Yes. Andrei
Jun 07 2015
prev sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Sunday, 7 June 2015 at 08:12:11 UTC, weaselcat wrote:
 you'd think with dmd's module system achieving compiler-level 
 parallelism wouldn't be so difficult.
IIRC, Walter stated that he wanted to add it but decided that it would be too much of a pain to do in C++ and is waiting for us to fully switch to ddmd before tackling that problem. Similarly, Daniel Murphy has ideas on how to improve CTFE (which would vastly help compilation speeds), but it would be so much easier to do in D that he put it off until we switch to ddmd. It wouldn't surprise me if there are other speed improvements that have been put off, simply because they'd be easier to implement in D than C++.

So, I expect that there's a decent chance that we'll be able to better leverage the design of the language to improve its compilation speed once we've officially switched the reference compiler to D (and we'll probably get there within a release or two; the main hold-up is how long it'll take gdc and ldc to catch up with 2.067).

- Jonathan M Davis
Jun 07 2015
parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 7 June 2015 at 10:49, Jonathan M Davis via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On Sunday, 7 June 2015 at 08:12:11 UTC, weaselcat wrote:

 you'd think with dmd's module system achieving compiler-level parallelism
 wouldn't be so difficult.
IIRC, Walter stated that he wanted to add it but decided that it would be too much of a pain to do in C++ and is waiting for us to fully switch to ddmd before tackling that problem. Similarly, Daniel Murphy has ideas on how to improve CTFE (which would vastly help compilation speeds), but it would be so much easier to do in D that he put it off until we switch to ddmd. It would surprise me if there are other speed improvements that have been put off, simply because they'd be easier to implement in D than C++. So, I expect that there's a decent chance that we'll be able to better leverage the design of the language to improve its compilation speed once we've officially switched the reference compiler to D (and we'll probably get there within a release or two; the main hold-up is how long it'll take gdc and ldc to catch up with 2.067).
I wouldn't have thought that not moving to 2.067 would be a hold-up (there is nothing in that release that blocks building DDMD as it is *now*). But I have been promised time and again that there will be more effort (infrastructure?) put in to help get LDC and GDC integrated into the testing process for all new PRs.
Jun 07 2015
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Sunday, 7 June 2015 at 08:59:46 UTC, Iain Buclaw wrote:
 I wouldn't have thought that not moving to 2.067 would be a 
 hold-up (there
 is nothing in that release that blocks building DDMD as it is 
 *now*).
The biggest problem is that releasing a ddmd which is compiled with dmd is unacceptable, because it incurs too large a performance hit (~20% IIRC), so we need either ldc or gdc to be at 2.067 so that we can use that to compile the release build of ddmd.
 But
 I have been promised time and again that there will be more 
 effort
 (infrastructure?) put in to help get LDC and GDC integrated 
 into the testing process for all new PRs.
That would be good, though I don't know what the situation with that is. However, I think that Daniel's top priority at this point is getting the frontend to the point that it's backend-agnostic and thus identical for all three backends, which should greatly help in having gdc and ldc keep up with dmd. That obviously wouldn't obviate the need for testing gdc and ldc, but it would reduce the effort to update them and maintain them. - Jonathan M Davis
Jun 07 2015
parent reply "weaselcat" <weaselcat gmail.com> writes:
On Sunday, 7 June 2015 at 10:03:06 UTC, Jonathan M Davis wrote:
 On Sunday, 7 June 2015 at 08:59:46 UTC, Iain Buclaw wrote:
 I wouldn't have thought that not moving to 2.067 would be a 
 hold-up (there
 is nothing in that release that blocks building DDMD as it is 
 *now*).
The biggest problem is that releasing a ddmd which is compiled with dmd is unacceptable, because it incurs too large a performance hit (~20% IIRC), so we need either ldc or gdc to be at 2.067 so that we can use that to compile the release build of ddmd.
after playing around with ddmd built with ldc, it's still a solid 30-40% slower than current dmd (with optimization flags, obv.). After profiling, it spends most of its time swapping and handling page faults. Enabling the GC seems to crash it, oh well. Maybe 20-30% of the actual time is spent doing non-allocation-related things.
Jun 14 2015
next sibling parent "Temtaime" <temtaime gmail.com> writes:
I think the way forward is to fix all memory operations which cause UB 
and enable the GC.
Jun 14 2015
prev sibling parent reply "David Nadlinger" <code klickverbot.at> writes:
On Sunday, 14 June 2015 at 19:02:59 UTC, weaselcat wrote:
 after playing around with ddmd built with ldc, it's still a 
 solid 30-40% slower than current dmd(with optimization flags, 
 obv.)
How did you build it? This is especially important given that DDMD straight from the repo does not build with LDC right now as it tries to override the druntime memory allocation functions, which works only due to the way DMD's -lib is implemented. On a system with 64 GiB RAM, Daniel and I could not measure any performance difference to the C++ version when building the Phobos unittests. - David
Jun 14 2015
parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 15 June 2015 at 02:55, David Nadlinger via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Sunday, 14 June 2015 at 19:02:59 UTC, weaselcat wrote:
 after playing around with ddmd built with ldc, it's still a solid 30-40%
 slower than current dmd(with optimization flags, obv.)
How did you build it? This is especially important given that DDMD straight from the repo does not build with LDC right now as it tries to override the druntime memory allocation functions, which works only due to the way DMD's -lib is implemented. On a system with 64 GiB RAM, Daniel and I could not measure any performance difference to the C++ version when building the Phobos unittests.
Because 64GiB is such a commodity nowadays. :-)
Jun 27 2015
prev sibling parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sun, Jun 07, 2015 at 05:00:58PM +1200, Rikki Cattermole via Digitalmars-d
wrote:
 On 7/06/2015 4:55 p.m., ketmar wrote:
On Sat, 06 Jun 2015 21:30:02 -0700, Andrei Alexandrescu wrote:

so in the end, after endless talking how separate compilation sux and
everyone should do one-step combined compilation, separate
compilation wins. it's funny how i'm always right in the end.
Nobody is always right.
[...]

"Nobody is always right. I am Nobody." :-P

T

--
Nobody is perfect. I am Nobody. -- pepoluan, GKC forum
Jun 07 2015
prev sibling next sibling parent "weaselcat" <weaselcat gmail.com> writes:
On Sunday, 7 June 2015 at 04:55:52 UTC, ketmar wrote:
 On Sat, 06 Jun 2015 21:30:02 -0700, Andrei Alexandrescu wrote:

 so in the end, after endless talking how separate compilation 
 sux and
 everyone should do one-step combined compilation, separate 
 compilation
 wins. it's funny how i'm always right in the end.
a broken clock is right twice a day ;) also, LDC stomps all over dmd when you enter separate compilation territory.
Jun 06 2015
prev sibling parent "Dicebot" <public dicebot.lv> writes:
"C style per-module separate compilation sux" != "splitting the 
library into smaller meaningful static libraries sux"

It was all discussed and nailed down so many times but old habits 
never die easy.
Jun 06 2015
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Sunday, 7 June 2015 at 04:30:02 UTC, Andrei Alexandrescu wrote:
[...]
 This is key to scalability, too. Now, the baseline numbers were 
 without std.experimental.allocator. Recall the baseline time on 
 my laptop was 4.93s. I added allocator, boom, 5.08s - sensible 
 degradation. However, after I merged the per-package builder I 
 got the same 4.01 seconds.
Ah, okay. So, you essentially did what you were talking about doing for rdmd. I don't think that it's an approach that would have occurred to me, but I'm certainly in favor of a faster build. - Jonathan M Davis
Jun 06 2015
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-06-07 06:30, Andrei Alexandrescu wrote:

 Thanks for asking. The situation before went like this: to build
 libphobos2.a, the command would go like this (simplified to just a few
 files and flags:

 dmd -oflibphobos2.a std/datetime.d std/conv.d std/algorithm/comparison.d
 std/algorithm/iteration.d

 So all modules would go together in one command to build the library.
 With the package-at-a-time approach, we build one directory at a time
 like this:
I'm wondering if the improvements would have been larger if Phobos had more of a tree structure for its modules rather than a fairly flat structure. -- /Jacob Carlborg
Jun 07 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/7/15 2:36 AM, Jacob Carlborg wrote:
 On 2015-06-07 06:30, Andrei Alexandrescu wrote:

 Thanks for asking. The situation before went like this: to build
 libphobos2.a, the command would go like this (simplified to just a few
 files and flags:

 dmd -oflibphobos2.a std/datetime.d std/conv.d std/algorithm/comparison.d
 std/algorithm/iteration.d

 So all modules would go together in one command to build the library.
 With the package-at-a-time approach, we build one directory at a time
 like this:
I'm wondering if the improvements would have been larger if Phobos had more of a tree structure for its modules rather than a fairly flat structure.
Affirmative. Currently the duration of the build is determined by the critical path, which mainly consists of building std/*.d. -- Andrei
Jun 07 2015
prev sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 06/07/2015 12:30 AM, Andrei Alexandrescu wrote:
 parallel processing almost always ends up doing more work -
 some of which is wasteful, but in the end it wins. It's counterintuitive
 sometimes.
It just means you're taking more system resources, which yea, can naturally be faster as long as those resources aren't already in use. Get more people assembling gizmos and you'll reach your quota faster even with a little bit of coordination overhead.
Jun 07 2015
prev sibling parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 7 June 2015 at 02:45, Jonathan M Davis via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Saturday, 6 June 2015 at 21:42:47 UTC, Andrei Alexandrescu wrote:
 https://github.com/D-Programming-Language/phobos/pull/3379

 Punchline: major reduction of both total run time and memory consumed.
Reading makefiles always gives me a headache. For those of us who can't just glance through the changes and quickly decipher them, what did you actually do? - Jonathan M Davis
By the way, what's happening with the eventual packaging of std.datetime? I'd like to have memory consumption of running unittests down down down. Iain.
Jun 27 2015
prev sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
On Saturday, 6 June 2015 at 21:42:47 UTC, Andrei Alexandrescu 
wrote:
 https://github.com/D-Programming-Language/phobos/pull/3379

 Punchline: major reduction of both total run time and memory 
 consumed.


 Andrei
Are the inter-package dependencies handled correctly? It's hard to say looking at the diff, but I don't see where it's done.

With the "compile everything at once" model it's not an issue; everything is getting recompiled anyway. With per-package... if foo/toto.d gets changed and bar/tata.d has an "import foo.toto;" in it, then both the foo and bar packages need to get recompiled.

Or is this change recompiling everything all of the time, but just happens to do it a package at a time?

Atila
Jun 09 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/9/15 4:06 AM, Atila Neves wrote:
 On Saturday, 6 June 2015 at 21:42:47 UTC, Andrei Alexandrescu wrote:
 https://github.com/D-Programming-Language/phobos/pull/3379

 Punchline: major reduction of both total run time and memory consumed.


 Andrei
Are the inter-package dependencies handled correctly? It's hard to say looking at the diff, but I don't see where it's done. With the "compile everything at once" model it's not an issue; everything is getting recompiled anyway. With per-package... if foo/toto.d gets changed and bar/tata.d has an "import foo.toto;" in it, then both foo and bar packages need to get recompiled. Or is this change recompiling everything all of the time but just happens to do it a package at a time?
Last one's right. From the diff: Andrei
Jun 09 2015
parent "Atila Neves" <atila.neves gmail.com> writes:
On Tuesday, 9 June 2015 at 16:20:35 UTC, Andrei Alexandrescu 
wrote:
 On 6/9/15 4:06 AM, Atila Neves wrote:
 On Saturday, 6 June 2015 at 21:42:47 UTC, Andrei Alexandrescu 
 wrote:
 https://github.com/D-Programming-Language/phobos/pull/3379

 Punchline: major reduction of both total run time and memory 
 consumed.


 Andrei
Are the inter-package dependencies handled correctly? It's hard to say looking at the diff, but I don't see where it's done. With the "compile everything at once" model it's not an issue; everything is getting recompiled anyway. With per-package... if foo/toto.d gets changed and bar/tata.d has an "import foo.toto;" in it, then both foo and bar packages need to get recompiled. Or is this change recompiling everything all of the time but just happens to do it a package at a time?
Last one's right. From the diff: the future.
Ah right, sorry, I missed that. reggae already calculates dependencies by asking the compiler and (modulo bugs) only recompiles packages that need to be recompiled. Atila
Jun 09 2015