
digitalmars.D - DIP 11: trial partial implementation

reply Adam D. Ruppe <destructionator gmail.com> writes:
http://arsdnet.net/dcode/build2.d

To compile: dmd build2

To use: build2 [args like you'd pass to dmd...]


This is very similar to my existing build.d, but makes fewer
assumptions about my own setup, instead opting to follow the DIP 11
design.

We should be able to build this out into an almost fully functional
prototype to test the idea before touching the compiler at all.
Even the pragmas (almost) work as defined.

I haven't thoroughly tested it, but it ought to work on all operating
systems (note: I've only tested on Linux), without dependencies, with
the following features:

* Downloading from simple http servers. No compression, no encryption
  at this time.

* It pre-processes dmd arguments for the -I paths described in the
  wiki and tries to download the modules they name to a local directory.

build2 mymodule -Ihttp://myrepository.com/d

The selective paths should work too but I haven't tested them yet.

* It adds some support for modules downloaded directly from the web.

build2 mymodule http://myrepository.com/d/file.d

* Arguments it doesn't care about are passed unchanged to dmd.

* Pass -ignore to the build process and you can do

pragma(importpath, "webpath");

as described in the wiki, with one change: the wiki says it should
be only for the current file. I can't find a way to do that outside
the compiler... dmd -v doesn't tell you what file the pragma came from.

* Automatically links in downloaded .d files when doing an all-at-once
  build, and tries not to break or slow down incremental builds

* Kinda simulates the idea of an external tool via a D function. See
Tuple!(int, string) dget(string, string) in the source.

* Has a simple cache - if the file is there already, use it.

* If a downloaded module requires another, it keeps going
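
In rough outline, the fetch-and-cache step for one module might look
like this (a hedged sketch, not the actual build2.d code: the
module-name-to-URL mapping, the cache directory name, and the use of
std.net.curl in place of my hand-rolled downloader are all
assumptions):

```d
import std.array : replace;
import std.file : exists, mkdirRecurse, write;
import std.net.curl : get;   // build2.d rolls its own downloader instead
import std.path : buildPath, dirName;

// Hypothetical sketch of the dget idea: given an -I web path and a
// module name, fetch the source into a local cache unless it is
// already there.
string fetchModule(string importPath, string moduleName)
{
    // foo.bar under http://myrepository.com/d maps to .../foo/bar.d
    auto relative = moduleName.replace(".", "/") ~ ".d";
    auto local = buildPath("dget-cache", relative);   // made-up cache dir

    if (!exists(local))   // simple cache: if the file is there, use it
    {
        mkdirRecurse(dirName(local));
        write(local, get(importPath ~ "/" ~ relative));
    }
    return local;
}
```

Repeating this for every module dmd reports as missing is what gives
the "keeps going" behavior for transitive dependencies.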



What it doesn't attempt to do:

* Be fast. It loops dmd like my old build.d. (I can't find a better
  way to do it. Even rdmd always runs dmd at least twice - check
  its source!)

* Protocols other than basic HTTP, and it doesn't even support
  redirects yet. I wanted zero dependencies, so I very quickly spun up
  my own downloader... it's fairly basic.

* Any kind of archives, yet.

* __FILE__ is the same.

* The output is ugly, including repeated errors and debug info.

* It makes no attempt to hash the files since the wiki didn't decide
  on a syntax.

* See FIXME comments in the source for some specifics
Jun 18 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-19 02:10, Adam D. Ruppe wrote:
 http://arsdnet.net/dcode/build2.d

 * Be fast. It loops dmd like my old build.d. (I can't find a better
    way to do it. Even rdmd always runs dmd at least twice - check
    its source!)
That shouldn't be necessary.

First run:

* Run the compiler once with the -deps flag to collect the dependencies
* Run the compiler again to compile everything
* Cache the dependencies

Later runs:

* Run the compiler once with the -deps flag and compile everything
* If the dependencies have changed, run the compiler again
* Re-cache the dependencies if necessary

-- 
/Jacob Carlborg
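
A hedged sketch of that scheme as a driver (the cache file name and
the -deps output parsing are assumptions, and it glosses over details
like filtering the target itself out of the file list):

```d
import std.algorithm : map, sort, uniq;
import std.array : array, join;
import std.file : exists, readText, write;
import std.process : executeShell;
import std.regex : matchAll, regex;

// Run dmd with -deps so it compiles and emits the dependency list too.
string compileWithDeps(string target, string extraFiles)
{
    executeShell("dmd -deps=.deps " ~ target ~ " " ~ extraFiles);
    return exists(".deps") ? readText(".deps") : "";
}

// Pull the .d file names out of dmd's -deps output, which has lines
// roughly like "foo (foo.d) : private : object (.../object.di)".
string depFiles(string depsText)
{
    return matchAll(depsText, regex(`\(([^)]+\.d)\)`))
        .map!(m => m[1]).array.sort.uniq.join(" ");
}

void build(string target)
{
    enum cacheFile = ".depcache";   // made-up cache location
    auto cached = exists(cacheFile) ? readText(cacheFile) : "";

    // First run (empty cache) or later run: one compile pass either
    // way, passing along whatever dependency files we cached last time.
    auto deps = compileWithDeps(target, depFiles(cached));

    if (deps != cached)
    {
        // Dependencies changed: compile again with the fresh list,
        // then re-cache it for the next build.
        compileWithDeps(target, depFiles(deps));
        write(cacheFile, deps);
    }
}
```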
Jun 19 2011
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
Jacob Carlborg wrote:
 * Run the compiler once with the -deps flag to collect the dependencies
-deps doesn't actually write anything unless the compile succeeds...
and it can't succeed unless you already know the dependencies!

The basic idea of caching the command might work though. When I get
sick of slowness, I manually copy/paste the command out of the output
and into a makefile.

Watching for failures so the cache is updated automatically ought to
work though; I probably should do that.
Jun 19 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-06-19 17:13, Adam D. Ruppe wrote:
 Jacob Carlborg wrote:
 * Run the compiler once with the -deps flag to collect the dependencies
-deps doesn't actually write anything unless the compile succeeds... and it can't succeed unless you already know the dependencies!
Oh, I didn't think of that.
 The basic idea of caching the command might work though.
 When I get sick of slowness, I manually copy/paste the command out
 of the output and into a makefile.

 Watching for failure to update the cache automatically ought to work
 though, probably should do that.
-- /Jacob Carlborg
Jun 19 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:itkp2l$1ru0$1 digitalmars.com...
 On 2011-06-19 02:10, Adam D. Ruppe wrote:
 http://arsdnet.net/dcode/build2.d

 * Be fast. It loops dmd like my old build.d. (I can't find a better
    way to do it. Even rdmd always runs dmd at least twice - check
    its source!)
 That shouldn't be necessary.

 First run:

 * Run the compiler once with the -deps flag to collect the dependencies
 * Run the compiler again to compile everything
 * Cache the dependencies

 Later runs:

 * Run the compiler once with the -deps flag and compile everything
Using the -deps flag to *just* get the deps is very fast. Much faster than a full compile.
 * If the dependencies have changed, run the compiler again
 * Re-cache the dependencies if necessary
Jun 19 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-19 20:31, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:itkp2l$1ru0$1 digitalmars.com...
 On 2011-06-19 02:10, Adam D. Ruppe wrote:
 http://arsdnet.net/dcode/build2.d

 * Be fast. It loops dmd like my old build.d. (I can't find a better
     way to do it. Even rdmd always runs dmd at least twice - check
     its source!)
 That shouldn't be necessary.

 First run:

 * Run the compiler once with the -deps flag to collect the dependencies
 * Run the compiler again to compile everything
 * Cache the dependencies

 Later runs:

 * Run the compiler once with the -deps flag and compile everything
Using the -deps flag to *just* get the deps is very fast. Much faster than a full compile.
I understand that that would be faster when the dependencies have
changed, but if they haven't, then you only have to run the compiler
once. Don't know what would be best to do though.

BTW, to just get the dependencies, would that be with the -deps and -c
flags? Is there a better way? I mean, if you just specify the -deps
flag it will do a full compilation. Seems to me that just skipping
linking (the -c flag) still does more than is actually necessary.
Would be good to have a flag that does only what's absolutely
necessary for tracking dependencies.
 * If the dependencies have changed, run the compiler again
 * Re-cache the dependencies if necessary
-- /Jacob Carlborg
Jun 19 2011
next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
Jacob Carlborg wrote:
 BTW, to just get the dependencies, would that be with the -deps and
 -c flags? Is the a better way?
That's probably as good as it gets... thanks to mixins, templates,
versions, static ifs, etc., you can't be sure you got all the right
deps without going through at least the majority of the compile
process.

Of course, you could always just run a search for import statements in
the source if speed is more important than strict accuracy. That's
good enough for a great many cases.
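
That quick import-statement search could be little more than a regex
(a sketch; as noted it can't see through string mixins, and it will
happily match imports inside version blocks that never compile in):

```d
import std.regex : matchAll, regex;

// Approximate dependency scan: collect the module names from plain
// "import foo.bar;" style statements in the source text.
string[] scanImports(string source)
{
    string[] mods;
    foreach (m; matchAll(source, regex(`\bimport\s+([\w.]+)`)))
        mods ~= m[1];
    return mods;
}
```

You'd run it on the root file, then recurse over whichever of the
returned modules resolve to local or downloadable sources.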
Jun 19 2011
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:itljh6$d4l$1 digitalmars.com...
 On 2011-06-19 20:31, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:itkp2l$1ru0$1 digitalmars.com...
 On 2011-06-19 02:10, Adam D. Ruppe wrote:
 http://arsdnet.net/dcode/build2.d

 * Be fast. It loops dmd like my old build.d. (I can't find a better
     way to do it. Even rdmd always runs dmd at least twice - check
     its source!)
 That shouldn't be necessary.

 First run:

 * Run the compiler once with the -deps flag to collect the dependencies
 * Run the compiler again to compile everything
 * Cache the dependencies

 Later runs:

 * Run the compiler once with the -deps flag and compile everything
Using the -deps flag to *just* get the deps is very fast. Much faster than a full compile.
 I understand that that would be faster when the dependencies have
 changed, but if they haven't, then you only have to run the compiler
 once. Don't know what would be best to do though.

 BTW, to just get the dependencies, would that be with the -deps and -c
 flags? Is there a better way? I mean, if you just specify the -deps
 flag it will do a full compilation. Seems to me that just skipping
 linking (the -c flag) still does more than is actually necessary.
 Would be good to have a flag that does only what's absolutely
 necessary for tracking dependencies.
What I meant was that doing a deps-only run is fast enough that doing
it every time shouldn't be a problem.

However, I am starting to wonder if RDMD's functionality should be
built into DMD (ideally in a way that LDC/GDC wouldn't have to
re-implement it themselves). DDMD does take about a minute or so to
compile, and while the deps-only run is the faster part, it's not
insignificant. But then, maybe that's just due to some fixable
inefficiency in DMD? There's very little templates/CTFE involved.
Jun 20 2011
parent reply Jacob Carlborg <doob me.com> writes:
On 2011-06-20 21:32, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:itljh6$d4l$1 digitalmars.com...
 On 2011-06-19 20:31, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>   wrote in message
 news:itkp2l$1ru0$1 digitalmars.com...
 On 2011-06-19 02:10, Adam D. Ruppe wrote:
 http://arsdnet.net/dcode/build2.d

 * Be fast. It loops dmd like my old build.d. (I can't find a better
      way to do it. Even rdmd always runs dmd at least twice - check
      its source!)
 That shouldn't be necessary.

 First run:

 * Run the compiler once with the -deps flag to collect the dependencies
 * Run the compiler again to compile everything
 * Cache the dependencies

 Later runs:

 * Run the compiler once with the -deps flag and compile everything
Using the -deps flag to *just* get the deps is very fast. Much faster than a full compile.
 I understand that that would be faster when the dependencies have
 changed, but if they haven't, then you only have to run the compiler
 once. Don't know what would be best to do though.

 BTW, to just get the dependencies, would that be with the -deps and -c
 flags? Is there a better way? I mean, if you just specify the -deps
 flag it will do a full compilation. Seems to me that just skipping
 linking (the -c flag) still does more than is actually necessary.
 Would be good to have a flag that does only what's absolutely
 necessary for tracking dependencies.
 What I meant was that doing a deps-only run is fast enough that doing
 it every time shouldn't be a problem.

 However, I am starting to wonder if RDMD's functionality should be
 built into DMD (ideally in a way that LDC/GDC wouldn't have to
 re-implement it themselves). DDMD does take about a minute or so to
 compile, and while the deps-only run is the faster part, it's not
 insignificant. But then, maybe that's just due to some fixable
 inefficiency in DMD? There's very little templates/CTFE involved.
I guess one could add the "-o-" (do not write object file) flag as
well. Don't know how much that would help though.

I guess DMD does need to do most of the processing it normally does on
the files due to static if, mixins and other metaprogramming features.

-- 
/Jacob Carlborg
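
Putting the flags from this subthread together, a deps-only pass might
be invoked like this (a sketch; how much front-end work -c and -o-
actually skip is exactly the open question here):

```d
import std.process : executeShell;

// Hypothetical deps-only run: -deps=<file> emits the dependency list,
// -c skips linking, and -o- suppresses the object file output.
void collectDeps(string sourceFile)
{
    executeShell("dmd -deps=deps.txt -c -o- " ~ sourceFile);
}
```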
Jun 20 2011
parent reply "Nick Sabalausky" <a a.a> writes:
"Jacob Carlborg" <doob me.com> wrote in message 
news:itoeia$2bs3$1 digitalmars.com...
 On 2011-06-20 21:32, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 news:itljh6$d4l$1 digitalmars.com...
 On 2011-06-19 20:31, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>   wrote in message
 news:itkp2l$1ru0$1 digitalmars.com...
 On 2011-06-19 02:10, Adam D. Ruppe wrote:
 http://arsdnet.net/dcode/build2.d

 * Be fast. It loops dmd like my old build.d. (I can't find a better
      way to do it. Even rdmd always runs dmd at least twice - check
      its source!)
 That shouldn't be necessary.

 First run:

 * Run the compiler once with the -deps flag to collect the dependencies
 * Run the compiler again to compile everything
 * Cache the dependencies

 Later runs:

 * Run the compiler once with the -deps flag and compile everything
Using the -deps flag to *just* get the deps is very fast. Much faster than a full compile.
 I understand that that would be faster when the dependencies have
 changed, but if they haven't, then you only have to run the compiler
 once. Don't know what would be best to do though.

 BTW, to just get the dependencies, would that be with the -deps and -c
 flags? Is there a better way? I mean, if you just specify the -deps
 flag it will do a full compilation. Seems to me that just skipping
 linking (the -c flag) still does more than is actually necessary.
 Would be good to have a flag that does only what's absolutely
 necessary for tracking dependencies.
 What I meant was that doing a deps-only run is fast enough that doing
 it every time shouldn't be a problem.

 However, I am starting to wonder if RDMD's functionality should be
 built into DMD (ideally in a way that LDC/GDC wouldn't have to
 re-implement it themselves). DDMD does take about a minute or so to
 compile, and while the deps-only run is the faster part, it's not
 insignificant. But then, maybe that's just due to some fixable
 inefficiency in DMD? There's very little templates/CTFE involved.
 I guess one could add the "-o-" (do not write object file) flag as
 well. Don't know how much that would help though.

 I guess DMD does need to do most of the processing it normally does on
 the files due to static if, mixins and other metaprogramming features.
It would have to do most/all of the front-end work, but I wouldn't think it should have to do any of the backend work or any optimizations (not sure where those lie, a little in each?).
Jun 20 2011
parent Jacob Carlborg <doob me.com> writes:
On 2011-06-21 00:46, Nick Sabalausky wrote:
 "Jacob Carlborg"<doob me.com>  wrote in message
 I guess one could add the "-o-" (do not write object file) flag as well.
 Don't know how much that would help though. I guess DMD does need to do
 most of the process it normally does on the files due to static if, mixins
 and other meta programming features.
It would have to do most/all of the front-end work, but I wouldn't think it should have to do any of the backend work or any optimizations (not sure where those lie, a little in each?).
Yeah, sounds right. This is how I understand it works (at least in
theory): all optimizations that aren't specific to a platform are in
the frontend (or possibly a middle-end); all platform-specific
optimizations are in the backend.

-- 
/Jacob Carlborg
Jun 21 2011