
digitalmars.D - Any takers for http://d.puremagic.com/issues/show_bug.cgi?id=9673?

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
I figure http://d.puremagic.com/issues/show_bug.cgi?id=9673 is a great, 
relatively confined project of good utility. We've preapproved it; if 
anyone wants to snatch it, please come forward.

Also, any comments on the design are welcome.


Thanks,

Andrei
Mar 09 2013
next sibling parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sunday, 10 March 2013 at 04:29:34 UTC, Andrei Alexandrescu 
wrote:
 I figure http://d.puremagic.com/issues/show_bug.cgi?id=9673 
 is a great, relatively confined project of good utility. We've 
 preapproved it; if anyone wants to snatch it, please come 
 forward.

 Also, any comments on the design are welcome.
I've thought about this before. Here are my thoughts:

1. Querying the dependencies of one module, and compiling it, should be 
done in one go (one dmd execution). The idea is that if we need to get a 
module's dependencies, it will be because we've never compiled the module 
before, or because the module itself or one of its previously-known 
dependencies has changed.

2. Object files (and their .deps files) should be cached independently of 
the entry-point module. This will allow speeding up incremental 
compilation of multiple programs that share some source files.
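Roughly, point 1 could look like this (an untested sketch; it relies on 
dmd's -deps=FILE switch writing the import list during a normal compile, 
so no separate dependency pass is needed, and the .deps naming and error 
handling are placeholders):

import std.process : executeShell;
import std.string : format;

// Compile `mod` and capture its dependency list in the same dmd run.
void compileAndRecordDeps(string mod)
{
    auto depsFile = mod ~ ".deps";
    // One dmd execution: the object file and the dependency list
    // come out of the same run.
    auto r = executeShell(format("dmd -c -deps=%s %s", depsFile, mod));
    if (r.status != 0)
        throw new Exception(r.output);
}

On the next build, a module would be recompiled only if its source or 
anything listed in its .deps file is newer than its cached object file.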
Mar 09 2013
prev sibling next sibling parent reply Rainer Schuetze <r.sagitario gmx.de> writes:
On 10.03.2013 05:29, Andrei Alexandrescu wrote:
 I figure http://d.puremagic.com/issues/show_bug.cgi?id=9673 is a great,
 relatively confined project of good utility. We've preapproved it; if
 anyone wants to snatch it, please come forward.

 Also, any comments on the design are welcome.


 Thanks,

 Andrei
In my experience, single-file compilation of medium-sized projects is 
unacceptably slow, much slower than what you are used to from similarly 
sized C++ projects. I think this is because, without using di-files, a 
lot more code has to be analyzed for each compilation unit.

Another problem with single-file compilation is that dependencies cover 
not only changes to declarations (as in C++) but also to implementations, 
so the import chain can easily explode: a small change to the 
implementation of a function can trigger rebuilding a lot of other files.

The better option would be to pass all source files that need updating in 
one invocation of dmd, so it won't get slower than a full rebuild, but 
this has been plagued by linker errors in the past (undefined and 
duplicate symbols). If it works, it could also identify independent 
groups of files which you currently have to separate into libraries.
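To sketch what I mean by one invocation for the whole out-of-date group 
(untested; the `stale` list would come from whatever dependency tracking 
the tool uses):

import std.array : join;
import std.process : executeShell;

// Recompile every out-of-date module in a single dmd run, so the
// worst case is no slower than a full rebuild.
void rebuildGroup(string[] stale)
{
    if (stale.length == 0)
        return;
    auto r = executeShell("dmd -c " ~ stale.join(" "));
    if (r.status != 0)
        throw new Exception(r.output);
}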
Mar 10 2013
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sunday, 10 March 2013 at 10:27:38 UTC, Rainer Schuetze wrote:
 In my experience, single-file compilation of medium-sized 
 projects is unacceptably slow, much slower than what you are 
 used to from similarly sized C++ projects.
Even when taking advantage of multiple CPU cores?
Mar 10 2013
parent reply Rainer Schuetze <r.sagitario gmx.de> writes:
On 10.03.2013 11:32, Vladimir Panteleev wrote:
 On Sunday, 10 March 2013 at 10:27:38 UTC, Rainer Schuetze wrote:
 In my experience, single-file compilation of medium-sized projects is
 unacceptably slow, much slower than what you are used to from similarly
 sized C++ projects.
Even when taking advantage of multiple CPU cores?
I don't have support for building on multiple cores, but trying it on 
visuald itself (48 files) yields

- combined compilation    6s
- single file compilation 1min4s

You'd need a lot of cores to be better off with single-file compilation.

These are only the plugin files, not anything in the used libraries 
(about 300 more files). Using dmd compiled with dmc instead of cl makes 
these times 17s and 1min39s respectively.

Almost any change causes a lot of files to be rebuilt (I just tried one; 
it took 49s to build).
Mar 10 2013
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sunday, 10 March 2013 at 11:25:13 UTC, Rainer Schuetze wrote:
 On 10.03.2013 11:32, Vladimir Panteleev wrote:
 On Sunday, 10 March 2013 at 10:27:38 UTC, Rainer Schuetze 
 wrote:
 In my experience, single-file compilation of medium-sized projects is
 unacceptably slow, much slower than what you are used to from
 similarly sized C++ projects.
Even when taking advantage of multiple CPU cores?
 I don't have support for building on multiple cores, but trying it on
 visuald itself (48 files) yields

 - combined compilation    6s
 - single file compilation 1min4s

 You'd need a lot of cores to be better off with single-file compilation.

 These are only the plugin files, not anything in the used libraries
 (about 300 more files). Using dmd compiled with dmc instead of cl makes
 these times 17s and 1min39s respectively.

 Almost any change causes a lot of files to be rebuilt (I just tried one;
 it took 49s to build).
Do you think it has much to do with Windows having a larger overhead for 
process creation?

I've run some tests on Linux:

~$ git clone git://github.com/CyberShadow/DFeed.git
~$ cd DFeed
~/DFeed$ git submodule init
~/DFeed$ time rdmd --force --build-only dfeed
real    0m2.290s
user    0m1.960s
sys     0m0.304s
~/DFeed$ dmd -o- -v dfeed.d | grep '^import ' | sed 's/.*(\(.*\))/\1/g' | grep -v '^/' > all.txt
~/DFeed$ time bash -c 'cat all.txt | xargs -n1 dmd -c'
real    0m16.935s
user    0m13.837s
sys     0m2.812s
~/DFeed$ time bash -c 'cat all.txt | xargs -n1 -P8 dmd -c'
real    0m3.703s
user    0m23.005s
sys     0m4.412s

(deprecation messages omitted)

I think 2.2s vs. 3.7s is a pretty good result. This was on a 4-core i7; 
results should be even better with the new 8-cores on the horizon.
Mar 10 2013
parent reply Rainer Schuetze <r.sagitario gmx.de> writes:
On 10.03.2013 12:54, Vladimir Panteleev wrote:
 On Sunday, 10 March 2013 at 11:25:13 UTC, Rainer Schuetze wrote:
 On 10.03.2013 11:32, Vladimir Panteleev wrote:
 On Sunday, 10 March 2013 at 10:27:38 UTC, Rainer Schuetze wrote:
 In my experience, single-file compilation of medium-sized projects is
 unacceptably slow, much slower than what you are used to from similarly
 sized C++ projects.
Even when taking advantage of multiple CPU cores?
I don't have support for building on multiple cores, but trying it on visuald itself (48 files) yields - combined compilation 6s - single file compilation 1min4s You'd need a lot of cores to be better off with single file compilation. These are only the plugin files, not anything in the used libraries (about 300 more files). Using dmd compiled with dmc instead of cl makes these times 17s and 1min39s respectively) Almost any change causes a lot of files to be rebuilt (just tried one, took 49s to build).
Do you think it has much to do with Windows having a larger overhead for process creation?
I doubt that accounts for a significant part of it. I think it's related 
to some files importing the translated Windows-SDK and VS-SDK header 
files (about 8 MB of declarations), and these get imported (indirectly) 
by almost every other file.
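For illustration (module names made up):

// common.d -- imported, directly or indirectly, by almost every file
module common;
import win32.sdk;   // ~8 MB of declarations that have to be analyzed
                    // again for every compilation unit pulling in common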
 I've run some tests on Linux:

 ~$ git clone git://github.com/CyberShadow/DFeed.git
 ~$ cd DFeed
 ~/DFeed$ git submodule init
 ~/DFeed$ time rdmd --force --build-only dfeed
 real    0m2.290s
 user    0m1.960s
 sys     0m0.304s
 ~/DFeed$ dmd -o- -v dfeed.d | grep '^import ' | sed 's/.*(\(.*\))/\1/g'
 | grep -v '^/' > all.txt
 ~/DFeed$ time bash -c 'cat all.txt | xargs -n1 dmd -c'
 real    0m16.935s
 user    0m13.837s
 sys     0m2.812s
 ~/DFeed$ time bash -c 'cat all.txt | xargs -n1 -P8 dmd -c'
 real    0m3.703s
 user    0m23.005s
 sys     0m4.412s

 (deprecation messages omitted)

 I think 2.2s vs. 3.7s is a pretty good result. This was on a 4-core i7 -
 results should be even better with the new 8-cores on the horizon.
Looks pretty OK, but considering the number of modules in dfeed (I count 
about 24), and that they are not very large, that makes compilation time 
about 1 second per module. Single-file compilation will only be faster 
if the number of modules to compile does not exceed twice the number of 
cores available.

I think it does not scale well with increasing numbers of modules.
Mar 10 2013
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sunday, 10 March 2013 at 13:35:34 UTC, Rainer Schuetze wrote:
 Looks pretty OK, but considering the number of modules in dfeed 
 (I count about 24), and that they are not very large, that makes 
 compilation time about 1 second per module. Single-file 
 compilation will only be faster if the number of modules to 
 compile does not exceed twice the number of cores available.
~/DFeed$ cat all.txt | wc -l
62
 I think it does not scale well with increasing numbers of 
 modules.
Why? Wouldn't it scale linearly? Or do you mean due to the increased 
number of graph edges as the number of graph nodes grows?

Anyway, the programmer can take steps to lessen intermodule dependencies 
and thus reduce incremental build times (see the sketch below). That's 
not an option when compiling everything at once, unless you split the 
code manually into libraries.
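For example (a contrived sketch; file boundaries are marked in comments, 
and all names are made up): letting clients depend on a thin interface 
module keeps edits to the implementation from dirtying every client.

// iface.d -- a thin module that rarely changes
module iface;

interface Renderer
{
    void draw();
}

// client.d -- imports only the interface, so a change to the concrete
// renderer's implementation doesn't force this file to be recompiled
module client;

import iface;

void render(Renderer r)
{
    r.draw();
}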
Mar 10 2013
parent Rainer Schuetze <r.sagitario gmx.de> writes:
On 10.03.2013 15:07, Vladimir Panteleev wrote:
 On Sunday, 10 March 2013 at 13:35:34 UTC, Rainer Schuetze wrote:
 Looks pretty OK, but considering the number of modules in dfeed (I
 count about 24), and that they are not very large, that makes
 compilation time about 1 second per module. Single-file compilation
 will only be faster if the number of modules to compile does not
 exceed twice the number of cores available.
~/DFeed$ cat all.txt | wc -l
62
Ah, I didn't notice the ae link. I was already suspecting that 1 second 
per module was a bit long.
 I think it does not scale well with increasing numbers of modules.
Why? Wouldn't it scale linearly? Or do you mean due to the increased number of graph edges as the number of graph nodes grows?
I assume that as a project grows, module dependencies also increase, so 
each single-file compile gets slower as the number of modules grows. A 
full build scales linearly with code size, but single-file compilation 
time increases faster (as a function of code size, module count and 
dependencies).
 Anyway, the programmer can take steps in lessening intermodule
 dependencies to reduce incremental build times. That's not an option
 with compiling everything at once, unless you split the code manually
 into libraries.
True.
Mar 10 2013
prev sibling next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sunday, 10 March 2013 at 11:25:13 UTC, Rainer Schuetze wrote:
 - combined compilation    6s
 - single file compilation 1min4s

 Using dmd compiled with dmc instead of cl makes these times 17s 
 and 1min39s respectively.
Holy smokes! Are you saying that I can speed up compilation of D programs by almost 3 times just by building DMD with Microsoft's C++ compiler instead of the DigitalMars one?
Mar 10 2013
parent reply Rainer Schuetze <r.sagitario gmx.de> writes:
On 10.03.2013 13:11, Vladimir Panteleev wrote:
 On Sunday, 10 March 2013 at 11:25:13 UTC, Rainer Schuetze wrote:
 - combined compilation    6s
 - single file compilation 1min4s

 Using dmd compiled with dmc instead of cl makes these times 17s and
 1min39s respectively.
Holy smokes! Are you saying that I can speed up compilation of D programs by almost 3 times just by building DMD with Microsoft's C++ compiler instead of the DigitalMars one?
My usual estimate is about twice as fast, but it depends on what you 
compile. It doesn't have a huge effect on running the test suite; my 
guess is that the runtime initialization for the MS build is slightly 
slower than for the dmc build, and there is a large number of small 
files to compile there.

Also, it's quite difficult to get accurate and reproducible benchmarking 
numbers these days, with (mobile) processors continuously changing their 
performance.
Mar 10 2013
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sunday, 10 March 2013 at 13:40:04 UTC, Rainer Schuetze wrote:
 My usual estimate is about twice as fast,
That's still a huge difference. One compiler beating another by 200% at 
something that isn't a microbenchmark isn't something you hear about 
often.
 but it depends on what you compile. It doesn't have a huge 
 effect on running the test suite, my guess is that the runtime 
 initialization for the MS build is slightly slower than for the 
 dmc build, and there are a large number of small files to 
 compile there.
But we're looking at the combined compilation numbers!
 Also, it's quite difficult to get accurate and reproducable 
 benchmarking numbers these days, with the (mobile) processors 
 continuously changing their performance.
You don't need an accurate benchmark to notice that something is about 3 
times as fast as something else...
Mar 10 2013
parent Rainer Schuetze <r.sagitario gmx.de> writes:
On 10.03.2013 15:11, Vladimir Panteleev wrote:
 On Sunday, 10 March 2013 at 13:40:04 UTC, Rainer Schuetze wrote:
 My usual estimate is about twice as fast,
That's still a huge difference. One compiler beating another by 200% at something that isn't a microbenchmark isn't something you hear about often.
I thought that a factor of 2 was common knowledge. See also this compiler comparison: http://www.willus.com/ccomp_benchmark2.shtml?p18+s14
 but it depends on what you compile. It doesn't have a huge effect on
 running the test suite, my guess is that the runtime initialization
 for the MS build is slightly slower than for the dmc build, and there
 are a large number of small files to compile there.
But we're looking at the combined compilation numbers!
Compilation itself accounts for only a small part of the test suite; 
most of it is creating and initializing compiler processes and executing 
the tests.

I've redone the test suite comparison (quick test, single core): 2:38 
min for msc, 2:32 for dmc, so no big difference; dmc even wins. I 
disabled turbo boost for the i7, but temperature control still throttled 
the CPU, so accuracy is not very good.
Mar 10 2013
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/10/13 7:25 AM, Rainer Schuetze wrote:
 I don't have support for building on multiple cores, but trying it on
 visuald itself (48 files) yields

 - combined compilation 6s
 - single file compilation 1min4s

 You'd need a lot of cores to be better off with single file compilation.

 These are only the plugin files, not anything in the used libraries
 (about 300 more files). Using dmd compiled with dmc instead of cl makes
 these times 17s and 1min39s respectively.

 Almost any change causes a lot of files to be rebuilt (just tried one,
 took 49s to build).
I understand. However, I don't think that's enough support for 
generalization.

Phobos is 197 KLOC. At work I work on a C++ project that has k times 
more lines of code, where k is a small number. Phobos uses separate 
compilation for its unittests, which are quite thorough, and compiling 
and running them all with make -j8 takes under two minutes on a single 
laptop. (The use of -j is crucial.) Building the C++ project from 
scratch is a small tectonic event involving dozens of machines and 
lasting much more than k times longer.

A large project is also more likely to manage dependencies in ways that 
reduce the impact of individual file changes, something that apparently 
isn't the case for visuald.

What I'm saying here is that incremental builds are a valid choice for 
certain projects, and are possibly a gating factor to building large 
codebases with rdmd.

Andrei
Mar 10 2013
parent reply Rainer Schuetze <r.sagitario gmx.de> writes:
On 10.03.2013 18:13, Andrei Alexandrescu wrote:
 On 3/10/13 7:25 AM, Rainer Schuetze wrote:
 I don't have support for building on multiple cores, but trying it on
 visuald itself (48 files) yields

 - combined compilation 6s
 - single file compilation 1min4s

 You'd need a lot of cores to be better off with single file compilation.

 These are only the plugin files, not anything in the used libraries
 (about 300 more files). Using dmd compiled with dmc instead of cl makes
 these times 17s and 1min39s respectively)

 Almost any change causes a lot of files to be rebuilt (just tried one,
 took 49s to build).
 I understand. However, I don't think that's enough support for
 generalization.

 Phobos is 197 KLOC. At work I work on a C++ project that has k times
 more lines of code, where k is a small number. Phobos uses separate
 compilation for its unittests, which are quite thorough, and compiling
 and running them all with make -j8 takes under two minutes on a single
 laptop. (The use of -j is crucial.) Building the C++ project from
 scratch is a small tectonic event involving dozens of machines and
 lasting much more than k times longer.
I agree that rebuilds are usually much faster with D than with C++. But 
please also consider the usual edit-debug-build cycle. At work, where 
our C++ code base is maybe 100 times larger than Visual D, build times 
are usually below 10 seconds for edits to a few source files.

My point is that import dependencies in D are more viral than C++ 
headers, because you cannot even remove them in di-files (this would 
break initialization order).
 A large project is also more likely to manage dependencies in ways that
 reduce impact of individual file changes, something that apparently
 isn't the case for visuald.

 What I'm saying here is that incremental builds are a valid choice for
 certain projects, and is possibly a gating factor to building large
 codebases with rdmd.
I agree that incremental compilation would be good to have; I'm just 
saying that single-file compilation is not the solution. We should aim 
at making it possible to recompile all dependent files in one compiler 
invocation.
Mar 10 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/10/13 4:41 PM, Rainer Schuetze wrote:
 My
 point is that import dependencies in D are more viral than C++ headers
 because you cannot even remove them in di-files (this would break
 initialization order).
This is new. Could you please elaborate?

Thanks,

Andrei
Mar 10 2013
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03/11/2013 12:49 AM, Andrei Alexandrescu wrote:
 On 3/10/13 4:41 PM, Rainer Schuetze wrote:
 My
 point is that import dependencies in D are more viral than C++ headers
 because you cannot even remove them in di-files (this would break
 initialization order).
This is new. Could you please elaborate? Thanks, Andrei
It is not true. The initialization order is determined at run time.
Mar 10 2013
parent reply Rainer Schuetze <r.sagitario gmx.de> writes:
On 11.03.2013 01:15, Timon Gehr wrote:
 On 03/11/2013 12:49 AM, Andrei Alexandrescu wrote:
 On 3/10/13 4:41 PM, Rainer Schuetze wrote:
 My
 point is that import dependencies in D are more viral than C++ headers
 because you cannot even remove them in di-files (this would break
 initialization order).
This is new. Could you please elaborate? Thanks, Andrei
It is not true. The initialization order is determined at run time.
An import is listed in the ModuleInfo struct as a dependency if it has 
static constructors (or destructors), or if it imports modules that have 
static constructors. If a di-file omits an import, it might not end up 
in the imported-modules list of an importing module, so the correct 
order cannot be determined at runtime.

(Search the dmd source for "needmoduleinfo", especially
https://github.com/D-Programming-Language/dmd/blob/master/src/toobj.c#L150
and
https://github.com/D-Programming-Language/dmd/blob/master/src/import.c#L344
)

Also, you might run into trouble when removing static 
constructors/destructors from the di-file, for the same reason.
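To make the failure mode concrete (a hand-written sketch; file 
boundaries are marked in comments and the module names are made up):

// c.d -- has a module constructor, so importers must run it first
module c;
static this() { /* set up state that module a relies on */ }

// a.d -- the real module imports c
module a;
import c;

// a.di -- if this hand-written header drops "import c;", a module
// compiled against it does not record c as a transitive dependency
// of a, so druntime's ordering pass cannot know that c's constructor
// must run before a's importers are initialized.
module a;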
Mar 10 2013
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 03/11/2013 07:51 AM, Rainer Schuetze wrote:
 On 11.03.2013 01:15, Timon Gehr wrote:
 On 03/11/2013 12:49 AM, Andrei Alexandrescu wrote:
 On 3/10/13 4:41 PM, Rainer Schuetze wrote:
 My
 point is that import dependencies in D are more viral than C++ headers
 because you cannot even remove them in di-files (this would break
 initialization order).
This is new. Could you please elaborate? Thanks, Andrei
It is not true. The initialization order is determined at run time.
 An import is listed in the ModuleInfo struct as a dependency if it has
 static constructors (or destructors), or if it imports modules that
 have static constructors. If a di-file omits an import, it might not
 end up in the imported-modules list of an importing module, so the
 correct order cannot be determined at runtime.

 (Search the dmd source for "needmoduleinfo", especially
 https://github.com/D-Programming-Language/dmd/blob/master/src/toobj.c#L150
 and
 https://github.com/D-Programming-Language/dmd/blob/master/src/import.c#L344
 )

 Also, you might run into trouble when removing static
 constructors/destructors from the di-file, for the same reason.
I see, thanks! IMO this is ridiculous. I'd argue it is an implementation bug and should be fixed.
Mar 11 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/11/13 8:34 AM, Timon Gehr wrote:
 On 03/11/2013 07:51 AM, Rainer Schuetze wrote:
 An import is listed in the ModuleInfo struct as a dependency if it has
 static constructors (or destructors), or if it imports modules that have
 static constructors.
 If a di-file omits an import, it might not end up
 in the imported-modules list of an importing module, so the correct
 order cannot be determined at runtime.
 ( Search the dmd source for "needmoduleinfo", especially
 https://github.com/D-Programming-Language/dmd/blob/master/src/toobj.c#L150

 and
 https://github.com/D-Programming-Language/dmd/blob/master/src/import.c#L344

 )

 Also, you might run into trouble when removing static
 constructors/destructors from the di-file because of this.
I see, thanks! IMO this is ridiculous. I'd argue it is an implementation bug and should be fixed.
Rainer, could you please file this in Bugzilla?

Thanks,

Andrei
Mar 11 2013
parent Rainer Schuetze <r.sagitario gmx.de> writes:
On 11.03.2013 14:21, Andrei Alexandrescu wrote:
 On 3/11/13 8:34 AM, Timon Gehr wrote:
 On 03/11/2013 07:51 AM, Rainer Schuetze wrote:
 An import is listed in the ModuleInfo struct as a dependency if it has
 static constructors (or destructors), or if it imports modules that have
 static constructors.
 If a di-file omits an import, it might not end up
 in the imported-modules list of an importing module, so the correct
 order cannot be determined at runtime.
 ( Search the dmd source for "needmoduleinfo", especially
 https://github.com/D-Programming-Language/dmd/blob/master/src/toobj.c#L150


 and
 https://github.com/D-Programming-Language/dmd/blob/master/src/import.c#L344


 )

 Also, you might run into trouble when removing static
 constructors/destructors from the di-file because of this.
I see, thanks! IMO this is ridiculous. I'd argue it is an implementation bug and should be fixed.
Rainer, could you please file this in Bugzilla? Thanks, Andrei
Done: http://d.puremagic.com/issues/show_bug.cgi?id=9697
Mar 12 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/10/13, Rainer Schuetze <r.sagitario gmx.de> wrote:
 I think this is because without using di-files, a
 lot more code has to be analyzed for each compilation unit.
I used to think otherwise, and I wasn't wrong; however, DMD has gotten 
progressively faster at compiling all at once:

2.050:
E:\dev\projects\WindowsAPI>timeit build.exe fullbuild
Elapsed Time: 0:00:28.370

2.050:
E:\dev\projects\WindowsAPI>timeit build.exe multicore
Elapsed Time: 0:00:14.210

It would seem like parallel builds are the way to go. But if you use a 
newer compiler:

2.062:
E:\dev\projects\WindowsAPI>timeit build.exe fullbuild
Elapsed Time: 0:00:14.971

2.062:
E:\dev\projects\WindowsAPI>timeit build.exe multicore
Elapsed Time: 0:00:15.061

So now full builds have become faster.

The repository: https://github.com/AndrejMitrovic/WindowsAPI

Here's the build script, which you first have to compile with a 
relatively recent compiler (2.050 didn't have std.parallelism or lambda 
syntax): http://dpaste.dzfl.pl/083247a2

Also pasted here:

import std.algorithm;
import std.array;
import std.exception;
import std.file;
import std.parallelism;
import std.path;
import std.process;
import std.string;

alias std.string.join join;

void main(string[] args)
{
    args.popFront();
    enforce(args.length);

    string[] mods = map!(a => a.name)(dirEntries(r".\win32", SpanMode.shallow)).array;
    string flags = "-version=Unicode -version=WindowsXP";

    if (args.front == "multicore")
    {
        // compile each module to an object file, one dmd per core
        foreach (mod; parallel(mods))
        {
            string cmd = format("dmd -c %s %s", mod, flags);
            system(cmd);
        }

        // then bundle the object files into a library
        auto objs = map!(a => a.baseName.setExtension(".obj"))(mods);
        string cmd = format("dmd -lib -ofmulti_win32.lib %s %s", flags, objs.join(" "));
        system(cmd);
    }
    else if (args.front == "fullbuild")
    {
        // pass every module to a single dmd invocation
        string cmd = format("dmd -lib -offull_win32.lib %s %s", flags, mods.join(" "));
        system(cmd);
    }
}
Mar 10 2013
prev sibling next sibling parent reply "jerro" <a a.com> writes:
 The better option would be to pass all source files to update 
 in one invocation of dmd, so it won't get slower than a full 
 rebuild, but this has been plagued with linker errors in the 
 past (undefined and duplicate symbols). If it works, it could 
 identify independent group of files which you now separate into 
 libraries.
Aside from linker errors, there is one more (minor) issue with this approach. If there are multiple source files with the same name and there is no -of flag, DMD will generate an object file for just one of them. This could be worked around in rdmd by symlinking or copying files, but I think it would be better to fix it in DMD and use module.name.o instead of source_file_name.o for object file names.
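A tiny illustration of the clash (hypothetical layout, file boundaries 
marked in comments):

// a/foo.d
module a.foo;
int one() { return 1; }

// b/foo.d -- same base name, different package
module b.foo;
int two() { return 2; }

// "dmd -c a/foo.d b/foo.d" emits a single foo.o, so one module's code
// is silently lost; naming objects after the module (a.foo.o, b.foo.o)
// would avoid the collision.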
Mar 11 2013
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Monday, 11 March 2013 at 13:27:47 UTC, jerro wrote:
 Aside from linker errors, there is one more (minor) issue with 
 this approach. If there are multiple source files with the same 
 name and there is no -of flag, DMD will generate an object file 
 for just one of them. This could be worked around in rdmd by 
 symlinking or copying files, but I think it would be better to 
 fix it in DMD and use module.name.o instead of 
 source_file_name.o for object file names.
What about -oq?
Mar 11 2013
parent reply "jerro" <a a.com> writes:
 What about -oq?
I assume you meant -op? I didn't know about the -op flag; it seems it 
does solve this problem.
Mar 11 2013
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Monday, 11 March 2013 at 19:44:31 UTC, jerro wrote:
 What about -oq?
I assume you meant -op? I didn't know about -op flag, it seems it does solve this problem.
Oops, it looks like -oq is something ldc-specific. It implements your exact suggestion (creates a.b.c.obj instead of c.obj or a/b/c.obj).
Mar 11 2013
next sibling parent Marco Leise <Marco.Leise gmx.de> writes:
Am Tue, 12 Mar 2013 01:09:52 +0100
schrieb "Vladimir Panteleev" <vladimir thecybershadow.net>:

 On Monday, 11 March 2013 at 19:44:31 UTC, jerro wrote:
 What about -oq?
I assume you meant -op? I didn't know about -op flag, it seems it does solve this problem.
Oops, it looks like -oq is something ldc-specific. It implements your exact suggestion (creates a.b.c.obj instead of c.obj or a/b/c.obj).
And it implements other nice things, like:

* selective flags for in-contract, out-contract, assert, etc.
* using a debug build of Phobos when compiling in debug mode, so you 
  get to keep Phobos' asserts while debugging
* demoting GC allocations to stack allocations where objects don't 
  leave the scope (making scoped!... unnecessary)

:) awesome

-- Marco
Mar 11 2013
prev sibling next sibling parent reply "jerro" <a a.com> writes:
 Oops, it looks like -oq is something ldc-specific. It 
 implements your exact suggestion (creates a.b.c.obj instead of 
 c.obj or a/b/c.obj).
Thanks, didn't know about that one either. It does seem nice - too bad it's only supported by LDC.
Mar 11 2013
parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/12/13, jerro <a a.com> wrote:
 Oops, it looks like -oq is something ldc-specific. It
 implements your exact suggestion (creates a.b.c.obj instead of
 c.obj or a/b/c.obj).
Thanks, didn't know about that one either. It does seem nice - too bad it's only supported by LDC.
You can always file an enhancement request. It's up to Walter to approve it though.
Mar 11 2013
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-03-12 01:09, Vladimir Panteleev wrote:

 Oops, it looks like -oq is something ldc-specific. It implements your
 exact suggestion (creates a.b.c.obj instead of c.obj or a/b/c.obj).
http://d.puremagic.com/issues/show_bug.cgi?id=3541

-- 
/Jacob Carlborg
Mar 17 2013
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-03-10 11:27, Rainer Schuetze wrote:

 The better option would be to pass all source files to update in one
 invocation of dmd, so it won't get slower than a full rebuild, but this
 has been plagued with linker errors in the past (undefined and duplicate
 symbols). If it works, it could identify independent group of files
 which you now separate into libraries.
I think this really should be fixed. But if I recall correctly, Walter 
talked about using the -lib flag; this could perhaps be used as a 
workaround. A .a/.lib file would be created, from which the object files 
would need to be extracted.

-- 
/Jacob Carlborg
Mar 17 2013
prev sibling parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Sat, 09 Mar 2013 23:29:33 -0500
schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 I figure http://d.puremagic.com/issues/show_bug.cgi?id=9673 is a great,
 relatively confined project of good utility. We've preapproved it; if
 anyone wants to snatch it, please come forward.

 Also, any comments on the design are welcome.
 
 
 Thanks,
 
 Andrei
That makes me think of a hierarchy immediately:

        main.d+
        /     \
    abc.d+   def.d
    /     \
hij.d*   klm.d

A change in hij.d bubbles up to abc.d and finally to main.d. This is 
much simpler than a full analysis of whether the change in hij.d really 
has effects up to main.d. "dmd -c main.d abc.d hij.d" would then rebuild 
the set (a sketch of the bubbling follows below).

Caveat: A directed graph with no cycles is over-simplistic. Someone with 
a large project could perhaps tell what the average % of rebuilt modules 
would be.

-- Marco
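In code, the bubbling could be as simple as this (untested sketch; the 
reverse-import map would come from dmd's dependency output, and the 
names mirror the hierarchy above):

import std.stdio : writeln;

// Mark `mod` and everything that transitively imports it as dirty.
// The visited check also keeps import cycles from recursing forever.
void markDirty(string mod, string[][string] importers, ref bool[string] dirty)
{
    if (mod in dirty)
        return;
    dirty[mod] = true;
    foreach (parent; importers.get(mod, null))
        markDirty(parent, importers, dirty);
}

void main()
{
    // reverse edges: module -> modules that import it
    string[][string] importers = [
        "hij.d": ["abc.d"],
        "klm.d": ["abc.d"],
        "abc.d": ["main.d"],
        "def.d": ["main.d"],
    ];
    bool[string] dirty;
    markDirty("hij.d", importers, dirty);
    // the dirty set: hij.d, abc.d, main.d (order unspecified),
    // i.e. exactly the set for one dmd invocation
    writeln(dirty.keys);
}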
Mar 10 2013
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 10 March 2013 at 15:04:42 UTC, Marco Leise wrote:
 Caveat: A directed graph with no cycles is over-simplistic.
 Someone with a large project could perhaps tell what the
 average % of rebuilt modules would be.
It is 100% as long as this bug exists: http://d.puremagic.com/issues/show_bug.cgi?id=9571
Mar 10 2013
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sunday, 10 March 2013 at 15:11:06 UTC, deadalnix wrote:
 On Sunday, 10 March 2013 at 15:04:42 UTC, Marco Leise wrote:
 Caveat: A directed graph with no cycles is over-simplistic.
 Someone with a large project could perhaps tell what the
 average % of rebuilt modules would be.
It is 100% as long as this bug exists: http://d.puremagic.com/issues/show_bug.cgi?id=9571
Doesn't this bug make incremental compilation (in projects where it is encountered) impossible, rather than simply impractical?
Mar 10 2013
parent reply "deadalnix" <deadalnix gmail.com> writes:
On Sunday, 10 March 2013 at 15:30:13 UTC, Vladimir Panteleev 
wrote:
 On Sunday, 10 March 2013 at 15:11:06 UTC, deadalnix wrote:
 On Sunday, 10 March 2013 at 15:04:42 UTC, Marco Leise wrote:
 Caveat: A directed graph with no cycles is over-simplistic.
 Someone with a large project could perhaps tell what the
 average % of rebuilt modules would be.
 It is 100% as long as this bug exists:
Doesn't this bug make incremental compilation (in projects where it is encountered) impossible, rather than simply impractical?
It really depends on the coding style. If you write C-like D, everything 
is fine, but if you use the complex features of D, then incremental 
compilation becomes clearly impossible (I spent a fair amount of time on 
the problem).
Mar 10 2013
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sunday, 10 March 2013 at 15:38:42 UTC, deadalnix wrote:
 It really depends on the coding style. If you write C-like D, 
 everything is fine, but if you use the complex features of D, then 
 incremental compilation becomes clearly impossible (I spent a 
 fair amount of time on the problem).
I think this is a serious problem. I hadn't thought of it before, but if 
we are designing our tools to work around implementation issues in the 
compiler, I think we're doing something wrong. Rather than meddling with 
a crippled incremental compilation model for rdmd that'll get obsoleted 
by a fixed compiler, how about attacking the problem directly?

It doesn't help that the problems surrounding incremental compilation (I 
mean the general case of incrementally compiling a few modules at once, 
not deadalnix's bug) don't seem to be well-defined. Do we have a filed 
issue with a reproducible test case?
Mar 10 2013
next sibling parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Sun, 10 Mar 2013 16:44:36 +0100
schrieb "Vladimir Panteleev" <vladimir thecybershadow.net>:

 I think this is a serious problem. I hadn't thought of it before, 
 but if we are designing our tools to work around implementation 
 issues in the compiler, I think we're doing something wrong. 
 Rather than meddling with a crippled incremental compilation 
 model for rdmd that'll get obsoleted by a fixed compiler, how 
 about attacking the problem directly?
+1, Mono-D is currently in the middle of a refactoring to use rdmd to better handle project builds. Incremental builds were possible in earlier versions, but caused the known problems.
 It doesn't help that the problems surrounding incremental 
 compilation (I mean the general case with incrementally compiling 
 a few modules at once, not deadalnix's bug) don't seem to be 
 well-defined. Do we have a filed issue with a reproducible test 
 case?
Maybe this _class_ of bug wasn't considered before. You just need to 
have one module with a template and another one using it. If you change 
the template, the template module will be recompiled (generating no code 
to speak of), while the other file, the one that actually instantiates 
the template, remains untouched. Incremental builds end up with either 
outdated template instances or linker errors until you force a rebuild. 
A minimal reproduction is sketched below.

-- Marco
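(File names made up; file boundaries are marked in comments.)

// templ.d -- compiling this module alone emits almost no code
module templ;

T twice(T)(T x)
{
    return x + x;
}

// user.d -- the instance twice!int is emitted into user.obj, so after
// editing twice() an incremental build that recompiles only templ.d
// leaves user.obj holding the stale instance (or produces linker
// errors, depending on where the instance ended up).
module user;
import templ;

void main()
{
    auto y = twice(21);
}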
Mar 10 2013
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/10/13 12:42 PM, Marco Leise wrote:
 Maybe this _class_ of bug wasn't considered before.
 You just need to have one module with a template and another
 one using it. If you change the template, the template module
 will be recompiled (generating no code to speak of), while the
 other file that actually instantiates the template remains
 untouched.
 Incremental builds end up with either outdated template
 instances or linker errors until you force a rebuild.
But the module using the template will transitively depend on the one 
defining the template, so it will be rebuilt as well.

Andrei
Mar 10 2013
parent Marco Leise <Marco.Leise gmx.de> writes:
Am Sun, 10 Mar 2013 13:04:06 -0400
schrieb Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:

 But the module using the template will transitively depend on the one 
 defining the template, so it will be rebuilt as well.
 
 Andrei
That's true. I was referring to the status quo, not the imagined 
solution.

-- Marco
Mar 10 2013
prev sibling parent "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Sunday, 10 March 2013 at 16:42:52 UTC, Marco Leise wrote:
 Maybe this _class_ of bug wasn't considered before.
 You just need to have one module with a template and another
 one using it. If you change the template, the template module
 will be recompiled (generating no code to speak of), while the
 other file that actually instantiates the template remains
 untouched.
 Incremental builds end up with either outdated template
 instances or linker errors until you force a rebuild.
I don't think this is the problem that we're dealing with here. Don't 
quote me on it, but if I recall correctly, the problem had to do with 
DMD emitting template instantiations into only one arbitrary object 
file out of all those compiled at the moment.
Mar 10 2013
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/10/13 11:44 AM, Vladimir Panteleev wrote:
 On Sunday, 10 March 2013 at 15:38:42 UTC, deadalnix wrote:
 It really depend on the coding style. If you do some C-like D,
 everything is fine, but if you use complex feature of D, then
 incremental compilation become clearly impossible (I spent a fair
 amount of time on the problem).
 I think this is a serious problem. I hadn't thought of it before, but
 if we are designing our tools to work around implementation issues in
 the compiler, I think we're doing something wrong. Rather than meddling
 with a crippled incremental compilation model for rdmd that'll get
 obsoleted by a fixed compiler, how about attacking the problem directly?

 It doesn't help that the problems surrounding incremental compilation
 (I mean the general case of incrementally compiling a few modules at
 once, not deadalnix's bug) don't seem to be well-defined. Do we have a
 filed issue with a reproducible test case?
Agreed. We need a workable item here.

Andrei
Mar 10 2013
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-03-10 16:44, Vladimir Panteleev wrote:

 I think this is a serious problem. I hadn't thought of it before, but if
 we are designing our tools to work around implementation issues in the
 compiler, I think we're doing something wrong. Rather than meddling with
 a crippled incremental compilation model for rdmd that'll get obsoleted
 by a fixed compiler, how about attacking the problem directly?

 It doesn't help that the problems surrounding incremental compilation (I
 mean the general case with incrementally compiling a few modules at
 once, not deadalnix's bug) don't seem to be well-defined. Do we have a
 filed issue with a reproducible test case?
Search for posts by Tomasz Stachowiak (h3r3tic). He tried to implement 
incremental compilation a couple of years ago.

-- 
/Jacob Carlborg
Mar 17 2013
parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 3/17/13, Jacob Carlborg <doob me.com> wrote:
 Search for posts by Tomasz Stachowiak (h3r3tic). He tried to implement
 incremental compilation a couple of years ago.
https://bitbucket.org/h3r3tic/xfbuild/issue/7/make-incremental-building-reliable
Mar 17 2013