
digitalmars.D.announce - Button: A fast, correct, and elegantly simple build system.

reply Jason White <54f9byee3t32 gmail.com> writes:
I am pleased to finally announce the build system I've been 
slowly working on for over a year in my spare time:

     Docs:   http://jasonwhite.github.io/button/
     Source: https://github.com/jasonwhite/button

Features:

- Correct incremental builds.
- Automatic dependency detection (for any build task, even shell 
scripts).
- Build graph visualization using GraphViz.
- Language-independent. It can build anything.
- Can automatically build when an input file is modified (using 
inotify).
- Recursive: It can build the build description as part of the 
build.
- Lua is the primary build description language.

A ton of design work went into this. Over the past few years, I 
went through many different designs and architectures. I finally 
settled on this one about a year ago and then went to work on 
implementing it. I am very happy with how it turned out.

Note that this is still a ways off from being production-ready. 
It needs some polishing. Feedback would be most appreciated (file 
some issues!). I really want to make this one of the best build 
systems out there.

Here is an example build description for DMD:

     https://github.com/jasonwhite/dmd/blob/button/src/BUILD.lua

I'd say that's a lot easier to read than this crusty thing:

     https://github.com/dlang/dmd/blob/master/src/posix.mak

In fact, there is some experimental support for automatic 
conversion of Makefiles to Button's build description format 
using a fork of GNU Make itself: 
https://github.com/jasonwhite/button-make

Finally, a few notes:

- I was hoping to give a talk on this at DConf, but sadly my 
submission was turned down. :'(

- I am aware of Reggae, another build system written in D. 
Although, I admit I haven't looked at it very closely. I am 
curious how it compares.

- You might also be interested in the two other libraries I wrote 
specifically for this project:

   - https://github.com/jasonwhite/darg (A command-line parser)
   - https://github.com/jasonwhite/io (An IO streams library)
May 30 2016
next sibling parent reply poliklosio <poliklosio happypizza.com> writes:
On Monday, 30 May 2016 at 19:16:50 UTC, Jason White wrote:
 I am pleased to finally announce the build system I've been 
 slowly working on for over a year in my spare time:

     Docs:   http://jasonwhite.github.io/button/
     Source: https://github.com/jasonwhite/button
Great news! Love to see innovation in this area.
 - Lua is the primary build description language.
Why not D?
May 30 2016
parent Jason White <54f9byee3t32 gmail.com> writes:
On Monday, 30 May 2016 at 20:58:51 UTC, poliklosio wrote:
 - Lua is the primary build description language.
Why not D?
Generating the JSON build description should be entirely deterministic. With Lua, this can be guaranteed: you can create a sandbox where only certain operations are permitted. For example, reading files is permitted, but writing to them is not. I can also intercept all file reads and mark the files that get read as dependencies.

It certainly could be done in D, or any other language for that matter. All that needs to be done is to write a program that can output the fundamental JSON build description.
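As an illustration of the "any language can emit the JSON build description" point, here is a minimal sketch in Python. The exact schema Button expects is not shown in this thread; the field names (inputs/task/outputs) are borrowed from the rule structure discussed later in the thread and should be treated as an approximation, not Button's real format.

```python
import json

# Hypothetical generator: emits a list of build rules as JSON.
# The schema (inputs/task/outputs) is an assumption for illustration.
def compile_rule(src, obj):
    """One rule that compiles a single C source file to an object file."""
    return {
        "inputs": [src],
        "task": [["gcc", "-c", src, "-o", obj]],
        "outputs": [obj],
    }

def link_rule(objs, exe):
    """One rule that links object files into an executable."""
    return {
        "inputs": list(objs),
        "task": [["gcc"] + list(objs) + ["-o", exe]],
        "outputs": [exe],
    }

rules = [
    compile_rule("foo.c", "foo.o"),
    compile_rule("bar.c", "bar.o"),
    link_rule(["foo.o", "bar.o"], "foobar"),
]

# The build system would consume this deterministic description.
print(json.dumps(rules, indent=2))
```

Any program that deterministically prints such a description could serve as a front end, which is the point being made about Lua being replaceable.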
May 30 2016
prev sibling next sibling parent Joel <joelcnz gmail.com> writes:
On Monday, 30 May 2016 at 19:16:50 UTC, Jason White wrote:
 I am pleased to finally announce the build system I've been 
 slowly working on for over a year in my spare time:

 [...]
[snip] Button:
  - https://github.com/jasonwhite/darg (A command-line parser)
  - https://github.com/jasonwhite/io (An IO streams library)
That's great, sharing your D tools.
May 30 2016
prev sibling next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 31/05/2016 7:16 AM, Jason White wrote:
 I am pleased to finally announce the build system I've been slowly
 working on for over a year in my spare time:

     Docs:   http://jasonwhite.github.io/button/
     Source: https://github.com/jasonwhite/button

 Features:

 - Correct incremental builds.
 - Automatic dependency detection (for any build task, even shell scripts).
 - Build graph visualization using GraphViz.
 - Language-independent. It can build anything.
 - Can automatically build when an input file is modified (using inotify).
 - Recursive: It can build the build description as part of the build.
 - Lua is the primary build description language.

 A ton of design work went into this. Over the past few years, I went
 through many different designs and architectures. I finally settled on
 this one about a year ago and then went to work on implementing it. I am
 very happy with how it turned out.

 Note that this is still a ways off from being production-ready. It needs
 some polishing. Feedback would be most appreciated (file some issues!).
 I really want to make this one of the best build systems out there.

 Here is an example build description for DMD:

     https://github.com/jasonwhite/dmd/blob/button/src/BUILD.lua

 I'd say that's a lot easier to read than this crusty thing:

     https://github.com/dlang/dmd/blob/master/src/posix.mak

 In fact, there is some experimental support for automatic conversion of
 Makefiles to Button's build description format using a fork of GNU Make
 itself: https://github.com/jasonwhite/button-make

 Finally, a few notes:

 - I was hoping to give a talk on this at DConf, but sadly my submission
 was turned down. :'(

 - I am aware of Reggae, another build system written in D. Although, I
 admit I haven't looked at it very closely. I am curious how it compares.

 - You might also be interested in the two other libraries I wrote
 specifically for this project:

   - https://github.com/jasonwhite/darg (A command-line parser)
   - https://github.com/jasonwhite/io (An IO streams library)
Are you on Freenode (no nick to name right now)? I would like to talk to you about a few ideas relating to lua and D.
May 30 2016
parent Jason White <54f9byee3t32 gmail.com> writes:
On Tuesday, 31 May 2016 at 03:40:32 UTC, rikki cattermole wrote:
 Are you on Freenode (no nick to name right now)?
 I would like to talk to you about a few ideas relating to lua 
 and D.
No, I'm not on IRC. I'll see if I can find the time to hop on this weekend.
May 31 2016
prev sibling next sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Monday, 30 May 2016 at 19:16:50 UTC, Jason White wrote:
 I am pleased to finally announce the build system I've been 
 slowly working on for over a year in my spare time:

 snip
 In fact, there is some experimental support for automatic 
 conversion of Makefiles to Button's build description format 
 using a fork of GNU Make itself: 
 https://github.com/jasonwhite/button-make
I'm going to take a look at that!
 - I am aware of Reggae, another build system written in D. 
 Although, I admit I haven't looked at it very closely. I am 
 curious how it compares.
Since I wrote reggae, let me compare ;)
 - Correct incremental builds.
Yep.
 - Automatic dependency detection (for any build task, even 
 shell scripts).
Yes for C/C++/D, no for random tasks in general, but yes if you use the tup backend.
 - Build graph visualization using GraphViz.
Use the ninja backend, get it for free.
 - Language-independent. It can build anything.
So can reggae, but the built-in high-level rules only do C/C++/D right now.
 - Can automatically build when an input file is modified (using 
 inotify).
Nope, I never found that interesting. Possibly because I keep saving after every edit in OCD style and I really don't want things running automatically.
 - Recursive: It can build the build description as part of the 
 build.
I'm not sure what that means. reggae copies CMake here and runs itself when the build description changes, if that's what you mean.
 - Lua is the primary build description language.
In reggae you can pick from D, Python, Ruby, JavaScript and Lua.

Atila
May 31 2016
parent Jason White <54f9byee3t32 gmail.com> writes:
On Tuesday, 31 May 2016 at 10:15:14 UTC, Atila Neves wrote:
 On Monday, 30 May 2016 at 19:16:50 UTC, Jason White wrote:
 I am pleased to finally announce the build system I've been 
 slowly working on for over a year in my spare time:

 snip
 In fact, there is some experimental support for automatic 
 conversion of Makefiles to Button's build description format 
 using a fork of GNU Make itself: 
 https://github.com/jasonwhite/button-make
I'm going to take a look at that!
I think the Makefile converter is probably the coolest thing about this build system. I don't know of any other build system that has done this.

The only problem is that it doesn't do well with Makefiles that invoke make recursively. I tried compiling Git using it, but Git does some funky stuff with recursive make, like grepping the output of the sub-make.
 - Can automatically build when an input file is modified 
 (using inotify).
Nope, I never found that interesting. Possibly because I keep saving after every edit in OCD style and I really don't want things running automatically.
I constantly save like a madman too. If an incremental build is sufficiently fast, it doesn't really matter. You can also specify a delay so it accumulates changes and then after X milliseconds it runs a build.
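The "accumulate changes, then build after X milliseconds" behaviour can be sketched as a simple debounce. This is not Button's implementation (Button uses inotify); the event source here is plain data so the accumulation logic can be shown in isolation.

```python
# Illustrative debounce sketch: group rapid file-change events together
# and count how many builds would be triggered. Events are (timestamp_ms,
# path) pairs; a gap longer than delay_ms flushes the accumulated batch.
def debounced_builds(events, delay_ms):
    builds = 0
    pending = []   # paths changed since the last build
    last = None    # timestamp of the most recent event
    for ts, path in events:
        # A quiet period longer than delay_ms triggers a build of
        # everything accumulated so far.
        if last is not None and ts - last > delay_ms and pending:
            builds += 1
            pending = []
        pending.append(path)
        last = ts
    if pending:      # flush whatever is left at the end
        builds += 1
    return builds
```

So two saves 10 ms apart coalesce into one build, while a change 500 ms later starts a second one.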
 - Recursive: It can build the build description as part of the 
 build.
I'm not sure what that means. reggae copies CMake here and runs itself when the build description changes, if that's what you mean.
It means that Button can run Button as a build task (and it does it correctly). A child Button process reports its dependencies to the parent Button process via a pipe. This is the same mechanism that detects dependencies for ordinary tasks. Thus, there is no danger of doing incorrect incremental builds when recursively running Button like there is with Make.
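The pipe mechanism described above can be reduced to a miniature single-process sketch. The line-based protocol here is an assumption for illustration; Button's actual wire format between parent and child is not shown in the thread.

```python
import os

# What a child task would do: report each file it read on the write end
# of a pipe handed to it by the parent build system.
def report_dependencies(write_fd, paths):
    with os.fdopen(write_fd, "w") as w:
        for p in paths:
            w.write(p + "\n")

# What the parent would do: read the reported paths back and record them
# as dependencies of the task.
def collect_dependencies(read_fd):
    with os.fdopen(read_fd) as r:
        return [line.rstrip("\n") for line in r]

r_fd, w_fd = os.pipe()
report_dependencies(w_fd, ["foo.c", "baz.h"])  # closing w_fd signals EOF
deps = collect_dependencies(r_fd)
```

Because a child Button reports through the same channel as any other task, the parent sees a recursive build's dependencies exactly as it sees a compiler's.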
 - Lua is the primary build description language.
In reggae you can pick from D, Python, Ruby, Javascript and Lua.
That's pretty cool. It is possible for Button to do the same, but I don't really want to support that many languages.

In fact, the Make and Lua build descriptions both work the same exact way: they output a JSON build description for Button to use. So long as someone can write a program to do this, they can write their build description in it.
May 31 2016
prev sibling next sibling parent reply Dicebot <public dicebot.lv> writes:
Can it be built with just a plain dmd/phobos install? One of the
major concerns behind the discussion that resulted in Atila's reggae
effort is that propagating additional third-party dependencies is very
damaging for build systems. Right now Button seems to fail rather hard
on this front (i.e. Lua for the build description, plus an uncertain
number of build dependencies for Button itself).
May 31 2016
parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Tuesday, 31 May 2016 at 14:28:02 UTC, Dicebot wrote:
 Can it be built from just plain dmd/phobos install available? 
 One of major concernc behind discussion that resulted in Atila 
 reggae effort is that propagating additional third-party 
 dependencies is very damaging for build systems. Right now 
 Button seems to fail rather hard on this front (i.e. Lua for 
 build description + uncertain amount of build dependencies for 
 Button itself).
Building it only requires dmd+phobos+dub.

Why is having dependencies so damaging for build systems? Does it really matter with a package manager like Dub? If there is another thread that answers these questions, please point me to it.

The two dependencies Button itself has could easily be moved into the same project. I kept them separate because they can be useful for others. These are the command-line parser and IO stream libraries.

As for the dependency on Lua, it is statically linked into a separate executable (called "button-lua") and building it is dead-simple (just run make). Using the Lua build description generator is actually optional; it's just that writing build descriptions in JSON would be horribly tedious.
May 31 2016
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2016-06-01 06:34, Jason White wrote:

 Building it only requires dmd+phobos+dub.

 Why is having dependencies so damaging for build systems? Does it really
 matter with a package manager like Dub? If there is another thread that
 answers these questions, please point me to it.

 The two dependencies Button itself has could easily be moved into the
 same project. I kept them separate because they can be useful for
 others. These are the command-line parser and IO stream libraries.

 As for the dependency on Lua, it is statically linked into a separate
 executable (called "button-lua") and building it is dead-simple (just
 run make). Using the Lua build description generator is actually
 optional, it's just that writing build descriptions in JSON would be
 horribly tedious.
So, Lua is a build dependency? Seems that SQLite is a build dependency as well.

-- 
/Jacob Carlborg
May 31 2016
parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Wednesday, 1 June 2016 at 06:41:17 UTC, Jacob Carlborg wrote:
 So, Lua is a build dependency? Seems that Sqlite is a build 
 dependency as well.
Actually, SQLite is more of a run-time dependency because etc.c.sqlite3 comes with DMD.

$ ldd button
     linux-vdso.so.1 (0x00007ffcc474c000)
 --> libsqlite3.so.0 => /usr/lib/libsqlite3.so.0 (0x00007f2d13641000)
     libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007f2d13421000)
     libm.so.6 => /usr/lib/libm.so.6 (0x00007f2d13119000)
     librt.so.1 => /usr/lib/librt.so.1 (0x00007f2d12f11000)
     libdl.so.2 => /usr/lib/libdl.so.2 (0x00007f2d12d09000)
     libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007f2d12af1000)
     libc.so.6 => /usr/lib/libc.so.6 (0x00007f2d12749000)
     /lib64/ld-linux-x86-64.so.2 (0x00007f2d13951000)
May 31 2016
parent Jacob Carlborg <doob me.com> writes:
On 2016-06-01 08:48, Jason White wrote:

 Actually, SQLite more of a run-time dependency because etc.c.sqlite3
 comes with DMD.

 $ ldd button
      linux-vdso.so.1 (0x00007ffcc474c000)
 --> libsqlite3.so.0 => /usr/lib/libsqlite3.so.0 (0x00007f2d13641000)
      libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007f2d13421000)
      libm.so.6 => /usr/lib/libm.so.6 (0x00007f2d13119000)
      librt.so.1 => /usr/lib/librt.so.1 (0x00007f2d12f11000)
      libdl.so.2 => /usr/lib/libdl.so.2 (0x00007f2d12d09000)
      libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007f2d12af1000)
      libc.so.6 => /usr/lib/libc.so.6 (0x00007f2d12749000)
      /lib64/ld-linux-x86-64.so.2 (0x00007f2d13951000)
So it's both a build and runtime dependency ;)

-- 
/Jacob Carlborg
Jun 01 2016
prev sibling parent reply Dicebot <public dicebot.lv> writes:
On Wednesday, 1 June 2016 at 04:34:23 UTC, Jason White wrote:
 Why is having dependencies so damaging for build systems? Does 
 it really matter with a package manager like Dub? If there is 
 another thread that answers these questions, please point me to 
 it.
Rephrasing a famous piece of advice: "every added tooling dependency in an open-source project reduces the amount of potential contributors by half" :)

Basically, one can expect that anyone working with D will have dmd/phobos and, hopefully, dub. No matter how cool Button is, if it actually needs to be installed on a contributor's system to build a project, it is very unlikely to be widely used. That issue can be reduced by making Button itself trivially buildable from a plain dmd/phobos/dub install and configuring the project to bootstrap it if not already present - but that only works if you don't also need to install a bunch of additional tools like sqlite or make.

From that perspective, the best build system you could possibly have would look like this:

```
#!/usr/bin/rdmd

import std.build;

// define your build script as D code
```
Jun 03 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/3/2016 1:26 AM, Dicebot wrote:
 From that perspective, the best build system you could possibly have would look
 like this:

 ```
 #!/usr/bin/rdmd

 import std.build;

 // define your build script as D code
 ```
Yeah, I have often thought that writing a self-contained D program to build D would work well. The full power of the language would be available, there'd be nothing new to learn, and all you'd need is an existing D compiler (which we already require to build).
Jun 12 2016
next sibling parent reply cym13 <cpicard openmailbox.org> writes:
On Sunday, 12 June 2016 at 20:03:06 UTC, Walter Bright wrote:
 On 6/3/2016 1:26 AM, Dicebot wrote:
 From that perspective, the best build system you could 
 possibly have would look
 like this:

 ```
 #!/usr/bin/rdmd

 import std.build;

 // define your build script as D code
 ```
Yeah, I have often thought that writing a self-contained D program to build D would work well. The full power of the language would be available, there'd be nothing new to learn, and all you'd need is an existing D compiler (which we already require to build).
What about Atila's work with reggae?
Jun 12 2016
parent reply Kagamin <spam here.lot> writes:
On Sunday, 12 June 2016 at 20:47:31 UTC, cym13 wrote:
 Yeah, I have often thought that writing a self-contained D 
 program to build D would work well. The full power of the 
 language would be available, there'd be nothing new to learn, 
 and all you'd need is an existing D compiler (which we already 
 require to build).
What about Attila's work with reggae?
Reggae still needs a prebuilt reggae to run the build script.
Jun 16 2016
next sibling parent reply John Colvin <john.loughran.colvin gmail.com> writes:
On Thursday, 16 June 2016 at 12:32:02 UTC, Kagamin wrote:
 On Sunday, 12 June 2016 at 20:47:31 UTC, cym13 wrote:
 Yeah, I have often thought that writing a self-contained D 
 program to build D would work well. The full power of the 
 language would be available, there'd be nothing new to learn, 
 and all you'd need is an existing D compiler (which we 
 already require to build).
What about Attila's work with reggae?
Reggae still needs a prebuilt reggae to run the build script.
But seeing as you need a d compiler to build and anyway...
Jun 16 2016
parent John Colvin <john.loughran.colvin gmail.com> writes:
On Thursday, 16 June 2016 at 12:53:35 UTC, John Colvin wrote:
 On Thursday, 16 June 2016 at 12:32:02 UTC, Kagamin wrote:
 On Sunday, 12 June 2016 at 20:47:31 UTC, cym13 wrote:
 Yeah, I have often thought that writing a self-contained D 
 program to build D would work well. The full power of the 
 language would be available, there'd be nothing new to 
 learn, and all you'd need is an existing D compiler (which 
 we already require to build).
What about Attila's work with reggae?
Reggae still needs a prebuilt reggae to run the build script.
But seeing as you need a d compiler to build and anyway...
Ugh, autocorrect. s/and/dmd
Jun 16 2016
prev sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Thursday, 16 June 2016 at 12:32:02 UTC, Kagamin wrote:
 On Sunday, 12 June 2016 at 20:47:31 UTC, cym13 wrote:
 Yeah, I have often thought that writing a self-contained D 
 program to build D would work well. The full power of the 
 language would be available, there'd be nothing new to learn, 
 and all you'd need is an existing D compiler (which we 
 already require to build).
What about Attila's work with reggae?
Reggae still needs a prebuilt reggae to run the build script.
The idea would be to build reggae with the system dmd first (since having a D compiler is now a pre-requisite), then build dmd, druntime and phobos. There are no extra dependencies except on the reggae source files.

Again, that's the idea, at least.

Atila
Jun 16 2016
parent Kagamin <spam here.lot> writes:
On Thursday, 16 June 2016 at 13:40:39 UTC, Atila Neves wrote:
 The idea would be to build reggae with the system dmd first 
 (since having a D compiler is now a pre-requisite)
If a D compiler is required, it means a prebuilt executable is not needed: rdmd should be enough to compile and run the build script.
Jun 16 2016
prev sibling parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Sunday, 12 June 2016 at 20:03:06 UTC, Walter Bright wrote:
 On 6/3/2016 1:26 AM, Dicebot wrote:
 From that perspective, the best build system you could 
 possibly have would look
 like this:

 ```
 #!/usr/bin/rdmd

 import std.build;

 // define your build script as D code
 ```
Yeah, I have often thought that writing a self-contained D program to build D would work well. The full power of the language would be available, there'd be nothing new to learn, and all you'd need is an existing D compiler (which we already require to build).
The core functionality of Button could be split off into a library fairly easily, and there would be no dependency on Lua. Using it might look something like this:

    import button;

    immutable Rule[] rules = [
        {
            inputs: [Resource("foo.c"), Resource("baz.h")],
            task: Task([Command(["gcc", "-c", "foo.c", "-o", "foo.o"])]),
            outputs: [Resource("foo.o")]
        },
        {
            inputs: [Resource("bar.c"), Resource("baz.h")],
            task: Task([Command(["gcc", "-c", "bar.c", "-o", "bar.o"])]),
            outputs: [Resource("bar.o")]
        },
        {
            inputs: [Resource("foo.o"), Resource("bar.o")],
            task: Task([Command(["gcc", "foo.o", "bar.o", "-o", "foobar"])]),
            outputs: [Resource("foobar")]
        }
    ];

    void main()
    {
        build(rules);
    }

Of course, more abstractions would be added to make creating the list of rules less verbose.

However, I question the utility of even doing this in the first place. You miss out on the convenience of using the existing command line interface. And for what? Just so everything can be in D? Writing the same thing in Lua would be much prettier. I don't understand this dependency-phobia.
Jun 12 2016
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/12/2016 4:27 PM, Jason White wrote:
 I don't understand this dependency-phobia.
It's the "first 5 minutes" thing. Every hiccup there costs us maybe half the people who just want to try it out.

Even the makefiles have hiccups. I've had builds fail with the dmd system because I had the wrong version of make installed. And it doesn't fail with "you have the wrong make program installed" messages, it fails with some weird error message pointing into the middle of the makefile.

The makefiles, especially posix.mak, have grown into horrific snarls of who-knows-what-is-happening. I hate makefiles that call other makefiles. Sometimes I feel like chucking them all and replacing them with a batch file that is nothing more than an explicit list of commands:

    dmd -c file1.d
    dmd -c file2.d

etc. :-)
Jun 13 2016
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2016-06-13 22:12, Walter Bright wrote:

 It's the "first 5 minutes" thing. Every hiccup there costs us maybe half
 the people who just want to try it out.

 Even the makefiles have hiccups. I've had builds fail with the dmd
 system because I had the wrong version of make installed. And it doesn't
 fail with "you have the wrong make program installed" messages, it fails
 with some weird error message pointing into the middle of the makefile.

 The makefiles, especially posix.mak, have grown into horrific snarls of
 who-knows-what-is-happening. I hate makefiles that call other makefiles.
 Sometimes I feel like chucking them all and replacing them with a batch
 file that is nothing more than an explicit list of commands:

     dmd -c file1.d
     dmd -c file2.d

 etc. :-)
I couldn't agree more. With the D compiler being so fast it's reasonable to just recompile everything at once instead of trying to track what's changed.

-- 
/Jacob Carlborg
Jun 14 2016
parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Tuesday, 14 June 2016 at 07:45:10 UTC, Jacob Carlborg wrote:
 I couldn't agree more. With the D compiler being so fast it's 
 reasonable to just recompile everything at once instead of 
 trying to track what's changed.
i agree with that. i'm so used to just doing "rdmd main.d" in my projects (ranging from "hello, world" to complex game engines).
Jun 14 2016
parent reply drug <drug2004 bk.ru> writes:
On 14.06.2016 13:04, ketmar wrote:
 On Tuesday, 14 June 2016 at 07:45:10 UTC, Jacob Carlborg wrote:
 I couldn't agree more. With the D compiler being so fast it's
 reasonable to just recompile everything at once instead of trying to
 track what's changed.
i'm agree with that. i'm so used to do just "rdmd main.d" in my projects (ranged from "hello, world" to complex game engines).
I don't agree, if you don't mind. I have two almost identical implementations of the same thing in D and C++. And if I rebuild them totally, yes, dmd is faster than gcc:

    dmd        5 secs
    ldmd2      6 secs
    make      40 secs
    make -j10 11 secs

But if I change only a few lines, dmd's time doesn't change, while gcc takes much less time. The numbers are small for D, but I really do feel the difference. Not big, not bad, but it exists.
Jun 14 2016
parent Jacob Carlborg <doob me.com> writes:
On 2016-06-14 14:04, drug wrote:

 I don't agree if you don't mind. I have two almost identical
 implementation of the same thing in D and C++. And if I rebuild them
 totally - yes, dmd is faster than gcc:
     dmd        5 secs
     ldmd2      6 secs
     make      40 secs
     make -j10 11 secs

 But if I changed several lines only then dmd time doesn't change and gcc
 takes much less time. In fact digits are small for D, but I feel the
 difference really. Not big, not bad, but it exists.
For me, IIRC, it takes longer time to recompile a single C++ file from the DMD source code than it takes to build Phobos from scratch. What's slowing down the compilation of Phobos is the C code.

-- 
/Jacob Carlborg
Jun 14 2016
prev sibling next sibling parent sarn <sarn theartofmachinery.com> writes:
On Monday, 13 June 2016 at 20:12:27 UTC, Walter Bright wrote:
 On 6/12/2016 4:27 PM, Jason White wrote:
 I don't understand this dependency-phobia.
It's the "first 5 minutes" thing. Every hiccup there costs us maybe half the people who just want to try it out. ... The makefiles, especially posix.mak, have grown into horrific snarls of who-knows-what-is-happening.
I had a minor rant about this at DConf. The makefiles are the major reason I haven't contributed to the core D projects. They'd be a hell of a lot simpler if everything that isn't building an executable (and isn't idempotent) got ripped out. No downloading compilers, no cloning/updating repos, etc, etc. Having a pushbutton process for installing/bootstrapping is cool, but that stuff is better off in scripts.
Jun 14 2016
prev sibling parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Monday, 13 June 2016 at 20:12:27 UTC, Walter Bright wrote:
 On 6/12/2016 4:27 PM, Jason White wrote:
 I don't understand this dependency-phobia.
It's the "first 5 minutes" thing. Every hiccup there costs us maybe half the people who just want to try it out.
I suppose you're right. It is just frustrating that people are unwilling to adopt clearly superior tools simply because it would introduce a new dependency. I'm sure D itself has the same exact problem.
Jun 14 2016
next sibling parent Stefan Koch <uplink.coder googlemail.com> writes:
On Wednesday, 15 June 2016 at 05:42:21 UTC, Jason White wrote:
 On Monday, 13 June 2016 at 20:12:27 UTC, Walter Bright wrote:
 On 6/12/2016 4:27 PM, Jason White wrote:
 I don't understand this dependency-phobia.
It's the "first 5 minutes" thing. Every hiccup there costs us maybe half the people who just want to try it out.
I suppose you're right. It is just frustrating that people are unwilling to adopt clearly superior tools simply because it would introduce a new dependency. I'm sure D itself has the same exact problem.
I am confident I can build a Lua-to-D transcompiler that works at CTFE.
Jun 15 2016
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/15/16 1:42 AM, Jason White wrote:
 On Monday, 13 June 2016 at 20:12:27 UTC, Walter Bright wrote:
 On 6/12/2016 4:27 PM, Jason White wrote:
 I don't understand this dependency-phobia.
It's the "first 5 minutes" thing. Every hiccup there costs us maybe half the people who just want to try it out.
I suppose you're right. It is just frustrating that people are unwilling to adopt clearly superior tools simply because it would introduce a new dependency. I'm sure D itself has the same exact problem.
In all likelihood. One issue with build systems is there's no clear heir to make. There are so many, including a couple (!) by our community, each with its pros and cons. Which one should we choose?

-- Andrei
Jun 15 2016
parent Jason White <54f9byee3t32 gmail.com> writes:
On Wednesday, 15 June 2016 at 12:02:56 UTC, Andrei Alexandrescu 
wrote:
 In all likelihood. One issue with build systems is there's no 
 clear heir to make. There are so many, including a couple (!) 
 by our community, each with its pros and cons. Which one should 
 we choose?
You should choose mine, obviously. ;)

In all seriousness, Make will probably live as long as C. There are a *ton* of Makefiles out there that no one wants to translate to a new build system. Part of the reason for that is probably that they are so friggin' incomprehensible, and it's not exactly glamorous work.

This is why I'm working on that tool to allow Button to build existing Makefiles [1]. It may not work 100% of the time, but it should help a lot with migrating away from Make.

[1] https://github.com/jasonwhite/button-make
Jun 15 2016
prev sibling next sibling parent reply Kagamin <spam here.lot> writes:
On Sunday, 12 June 2016 at 23:27:07 UTC, Jason White wrote:
 However, I question the utility of even doing this in the first 
 place. You miss out on the convenience of using the existing 
 command line interface.
Why can't the build script have a command line interface?
Jun 16 2016
parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Thursday, 16 June 2016 at 12:34:26 UTC, Kagamin wrote:
 On Sunday, 12 June 2016 at 23:27:07 UTC, Jason White wrote:
 However, I question the utility of even doing this in the 
 first place. You miss out on the convenience of using the 
 existing command line interface.
Why the build script can't have a command line interface?
It could, but then the build script becomes more complicated, and for little gain. Adding command line options on top of that to configure the build would be painful.

It would be simpler and cleaner to write a D program to generate the JSON build description for Button to consume. Then you can add a command line interface to configure how the build description is generated. This is how the Lua build descriptions work[1].

[1] http://jasonwhite.github.io/button/docs/tutorial#going-meta-building-the-build-description
Jun 16 2016
parent Kagamin <spam here.lot> writes:
On Friday, 17 June 2016 at 04:54:37 UTC, Jason White wrote:
Why can't the build script have a command line interface?
It could, but now the build script is a more complicated and for little gain.
It's only as complicated as the required features make it, and no more. If the command line interface is not needed, it can be omitted, for example:

---
import button;

auto Build = ...

mixin mainBuild!Build; // no CLI
---
 Adding command line options on top of that to configure the 
 build would be painful.
    $ rdmd build.d configure [options]

Well, if one wants to go really complex, a prebuilt binary can be provided to help with that, but it's not always needed, I think.
 It would be simpler and cleaner to write a D program to 
 generate the JSON build description for Button to consume. Then 
 you can add a command line interface to configure how the build 
 description is generated. This is how the Lua build 
 descriptions work[1].
---
import button;

auto Build = ...

mixin mainBuildJSON!Build;
---

That should be able to work like the Lua script does.
Jun 17 2016
prev sibling parent reply Dicebot <public dicebot.lv> writes:
 However, I question the utility of even doing this in the first 
 place. You miss out on the convenience of using the existing 
 command line interface. And for what? Just so everything can be 
 in D? Writing the same thing in Lua would be much prettier. I 
 don't understand this dependency-phobia.
It comes from knowing that for most small to average size D projects you don't need a build _tool_ at all. If a full clean build takes 2 seconds, installing an extra tool to achieve the same thing a one-line shell script does is highly annoying. Your reasoning about makefiles seems to be flavored by C++ realities, but my typical D makefile would look something like this:

build:
	dmd -ofbinary `find ./src`

test:
	dmd -unittest -main `find ./src`

deploy: build test
	scp ./binary server:

That means that I usually care neither about correctness nor about speed, only about a good cross-platform way to define pipelines. And for that, fetching a dedicated tool is simply too discouraging. In my opinion that is why it is so hard for any new tool to take over Make's place - they all put too much attention on complicated projects, but to get a self-sustaining network effect one has to prioritize small and simple projects. And ease of availability is most important there.
Jun 17 2016
parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Friday, 17 June 2016 at 10:24:16 UTC, Dicebot wrote:
 However, I question the utility of even doing this in the 
 first place. You miss out on the convenience of using the 
 existing command line interface. And for what? Just so 
 everything can be in D? Writing the same thing in Lua would be 
 much prettier. I don't understand this dependency-phobia.
It comes from knowing that for most small to average size D projects you don't need a build _tool_ at all. If a full clean build takes 2 seconds, installing an extra tool to achieve the same thing a one-line shell script does is highly annoying. Your reasoning about makefiles seems to be flavored by C++ realities, but my typical D makefile would look something like this:

build:
	dmd -ofbinary `find ./src`

test:
	dmd -unittest -main `find ./src`

deploy: build test
	scp ./binary server:

That means that I usually care neither about correctness nor about speed, only about a good cross-platform way to define pipelines. And for that, fetching a dedicated tool is simply too discouraging. In my opinion that is why it is so hard for any new tool to take over Make's place - they all put too much attention on complicated projects, but to get a self-sustaining network effect one has to prioritize small and simple projects. And ease of availability is most important there.
I agree that a sophisticated build tool isn't really needed for tiny projects, but it's still really nice to have one that can scale as the project grows. All too often, as a project gets bigger, the build system it uses buckles under the growing complexity, no one ever gets around to changing it because they're afraid of breaking something, and the problem just gets worse.

I realize you might be playing devil's advocate a bit, and I appreciate it.

Let me propose another idea where maybe we can remove the extra dependency for new codebase collaborators but still have access to a full-blown build system: add a sub-command to Button that produces a shell script to run the build. For example, `button shell -o build.sh`. Then just run `./build.sh` to build everything. I vaguely recall either Tup or Ninja having something like this.

The main downside is that it'd have to be committed every time the build changes. This could be automated with a bot, but it's still annoying. The upsides are that there is no need for any other external libraries or tools, and the superior build system can still be used by anyone who wants it.
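The proposed sub-command boils down to topologically sorting the task graph and emitting the commands in order as a plain script. A minimal sketch (the graph representation and function names are my own illustration, not Button's):

```python
from graphlib import TopologicalSorter

def to_shell_script(tasks, deps):
    """tasks: {name: command string}; deps: {name: set of prerequisites}.
    Emit a bash script that runs every command in dependency order."""
    order = TopologicalSorter(deps).static_order()
    lines = ["#!/bin/bash", "set -e  # stop on the first failing command"]
    lines += [tasks[name] for name in order if name in tasks]
    return "\n".join(lines) + "\n"

# Tiny example: link depends on compile.
script = to_shell_script(
    {"compile": "dmd -c foo.d", "link": "dmd foo.o -ofoo"},
    {"link": {"compile"}, "compile": set()},
)
```

Such a script loses all incrementality and parallelism, of course; it is only a zero-dependency fallback for collaborators who don't have the build tool installed.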
Jun 18 2016
parent reply Dicebot <public dicebot.lv> writes:
On Saturday, 18 June 2016 at 08:05:18 UTC, Jason White wrote:
 I realize you might be playing devil's advocate a bit and I 
 appreciate it.
Yeah, I personally quite like how Button looks and would totally consider it, probably with some tweaks to my own taste. But not for most public projects, for the reasons mentioned.
 Let me propose another idea where maybe we can remove the extra 
 dependency for new codebase collaborators but still have access 
 to a full-blown build system: Add a sub-command to Button that 
 produces a shell script to run the build. For example, `button 
 shell -o build.sh`. Then just run `./build.sh` to build 
 everything. I vaguely recall either Tup or Ninja having 
 something like this.
This actually sounds nice. The main problem that comes to my mind is that there is no cross-platform shell script. Even if it is a list of plain unconditional commands, there are always differences like the normalized path form. Of course, one can always generate `build.d` as a shell script, but that would only work for D projects, and Button is supposed to be a generic solution.
Jun 19 2016
parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Sunday, 19 June 2016 at 15:47:21 UTC, Dicebot wrote:
 Let me propose another idea where maybe we can remove the 
 extra dependency for new codebase collaborators but still have 
 access to a full-blown build system: Add a sub-command to 
 Button that produces a shell script to run the build. For 
 example, `button shell -o build.sh`. Then just run 
 `./build.sh` to build everything. I vaguely recall either Tup 
 or Ninja having something like this.
This actually sounds nice. The main problem that comes to my mind is that there is no cross-platform shell script. Even if it is a list of plain unconditional commands, there are always differences like the normalized path form. Of course, one can always generate `build.d` as a shell script, but that would only work for D projects, and Button is supposed to be a generic solution.
I'd make it so it could produce either a Bash or a Batch script, and possibly also a PowerShell script, because error handling in Batch is awful. That should cover any platform it might be needed on. Normalizing paths shouldn't be a problem either. This should actually be pretty easy to implement.
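The per-flavor emission could look roughly like this sketch (illustrative only, not Button's code). The Batch quirk mentioned above is visible here: Batch has no `set -e`, so each command needs its own explicit error check.

```python
def emit_script(commands, flavor="bash"):
    """Render a list of command strings as a Bash or Batch script.
    Bash gets 'set -e'; Batch gets a per-line '|| exit /b 1' check,
    since Batch has no abort-on-error mode."""
    if flavor == "bash":
        return "\n".join(["#!/bin/bash", "set -e"] + commands) + "\n"
    elif flavor == "batch":
        body = [f"{cmd} || exit /b 1" for cmd in commands]
        return "\r\n".join(["@echo off"] + body) + "\r\n"
    raise ValueError(f"unknown flavor: {flavor}")
```

Path normalization (forward vs. backward slashes) would be a similar per-flavor rendering step applied to each command's arguments.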
Jun 19 2016
parent reply Dicebot <public dicebot.lv> writes:
On Monday, 20 June 2016 at 02:46:13 UTC, Jason White wrote:
 This actually sounds nice. Main problem that comes to my mind 
 is that there is no cross-platform shell script. Even if it is 
 list of plain unconditional commands there are always 
 differences like normalized path form. Of course, one can 
 always generate `build.d` as a shell script, but that would 
 only work for D projects and Button is supposed to be a 
 generic solution.
I'd make it so it could either produce a Bash or Batch script. Possibly also a PowerShell script because error handling in Batch is awful. That should cover any platform it might be needed on. Normalizing paths shouldn't be a problem either. This should actually be pretty easy to implement.
Will a plain sh script also work for MacOS / BSD flavors? Committing just two scripts is fine, but I wonder how it scales.
Jun 20 2016
next sibling parent Rory McGuire via Digitalmars-d-announce writes:
On Mon, Jun 20, 2016 at 10:21 AM, Dicebot via Digitalmars-d-announce <
digitalmars-d-announce puremagic.com> wrote:

 On Monday, 20 June 2016 at 02:46:13 UTC, Jason White wrote:

 This actually sounds nice. Main problem that comes to my mind is that
 there is no cross-platform shell script. Even if it is list of plain
 unconditional commands there are always differences like normalized path
 form. Of course, one can always generate `build.d` as a shell script, but
 that would only work for D projects and Button is supposed to be a generic
 solution.
I'd make it so it could either produce a Bash or Batch script. Possibly also a PowerShell script because error handling in Batch is awful. That should cover any platform it might be needed on. Normalizing paths shouldn't be a problem either. This should actually be pretty easy to implement.
Will a plain sh script also work for MacOS / BSD flavors? Committing just two scripts is fine, but I wonder how it scales.
A Bash script should work with all of them. I also read that Microsoft is making Bash for Windows[1].

[1] http://www.theverge.com/2016/3/30/11331014/microsoft-windows-linux-ubuntu-bash
Jun 20 2016
prev sibling parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Monday, 20 June 2016 at 08:21:29 UTC, Dicebot wrote:
 On Monday, 20 June 2016 at 02:46:13 UTC, Jason White wrote:
 This actually sounds nice. Main problem that comes to my mind 
 is that there is no cross-platform shell script. Even if it 
 is list of plain unconditional commands there are always 
 differences like normalized path form. Of course, one can 
 always generate `build.d` as a shell script, but that would 
 only work for D projects and Button is supposed to be a 
 generic solution.
I'd make it so it could either produce a Bash or Batch script. Possibly also a PowerShell script because error handling in Batch is awful. That should cover any platform it might be needed on. Normalizing paths shouldn't be a problem either. This should actually be pretty easy to implement.
Will a plain sh script also work for MacOS / BSD flavors? Committing just two scripts is fine, but I wonder how it scales.
FYI, I implemented this feature today (no Batch/PowerShell output yet, though):

    http://jasonwhite.github.io/button/docs/commands/convert

I think Bash should work on most Unix-like platforms.
Jun 26 2016
parent reply Rory McGuire via Digitalmars-d-announce writes:
On Mon, Jun 27, 2016 at 2:23 AM, Jason White via Digitalmars-d-announce <
digitalmars-d-announce puremagic.com> wrote:

 On Monday, 20 June 2016 at 08:21:29 UTC, Dicebot wrote:

 On Monday, 20 June 2016 at 02:46:13 UTC, Jason White wrote:

 This actually sounds nice. Main problem that comes to my mind is that
 there is no cross-platform shell script. Even if it is list of plain
 unconditional commands there are always differences like normalized path
 form. Of course, one can always generate `build.d` as a shell script, but
 that would only work for D projects and Button is supposed to be a generic
 solution.
I'd make it so it could either produce a Bash or Batch script. Possibly also a PowerShell script because error handling in Batch is awful. That should cover any platform it might be needed on. Normalizing paths shouldn't be a problem either. This should actually be pretty easy to implement.
Will a plain sh script also work for MacOS / BSD flavors? Committing just two scripts is fine, but I wonder how it scales.
FYI, I implemented this feature today (no Batch/PowerShell output yet, though):

    http://jasonwhite.github.io/button/docs/commands/convert

I think Bash should work on most Unix-like platforms.
And there is this[0] for Windows, if you wanted to try Bash on Windows:

[0]: https://msdn.microsoft.com/en-us/commandline/wsl/about
Jun 26 2016
parent Jason White <54f9byee3t32 gmail.com> writes:
On Monday, 27 June 2016 at 06:43:26 UTC, Rory McGuire wrote:
 FYI, I implemented this feature today (no Batch/PowerShell 
 output yet
 though):

     http://jasonwhite.github.io/button/docs/commands/convert

 I think Bash should work on most Unix-like platforms.
And there is this[0] for Windows, if you wanted to try Bash on Windows:

[0]: https://msdn.microsoft.com/en-us/commandline/wsl/about
Thanks, but I'll be sticking to bash on Linux. ;) I'll add Batch (and maybe PowerShell) output when Button is supported on Windows. It should be very easy.
Jun 27 2016
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d-announce" writes:
On Mon, May 30, 2016 at 07:16:50PM +0000, Jason White via
Digitalmars-d-announce wrote:
 I am pleased to finally announce the build system I've been slowly
 working on for over a year in my spare time:
 
     Docs:   http://jasonwhite.github.io/button/
     Source: https://github.com/jasonwhite/button
 
 Features:
 
 - Correct incremental builds.
 - Automatic dependency detection (for any build task, even shell scripts).
 - Build graph visualization using GraphViz.
 - Language-independent. It can build anything.
 - Can automatically build when an input file is modified (using inotify).
 - Recursive: It can build the build description as part of the build.
 - Lua is the primary build description language.
Finally got around to looking at this (albeit just briefly). It looks very nice! Perhaps I'll try using it for my next project.

I'm particularly pleased with the bipartite graph idea. It's a very nice way of sanely capturing the idea of build commands that generate multiple outputs. Also big plusses in my book are implicit dependencies and the use of inotify to eliminate the infamous "thinking pause" that older build systems all suffer from (this idea was also advanced by tup, but IMO Button looks a tad more polished than tup in terms of overall design). Of course, being written in D is a bonus in my book. :-D Though realistically speaking it probably doesn't really matter to me as an end user, other than just giving me warm fuzzies.

Unfortunately I don't have the time right now to actually do anything non-trivial with it... but I'll try to give feedback when I do get around to it (and I definitely plan to)!

T

-- 
Life is unfair. Ask too much from it, and it may decide you don't deserve what you have now either.
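The bipartite graph idea can be sketched as two node kinds, resources (files) and tasks (commands), with edges only running between the kinds. A task with several outputs is then just one task node with several outgoing edges, which a plain file-to-file DAG cannot express cleanly. The data structures below are my own illustration, not Button's internals:

```python
# Edges only go resource -> task (inputs) or task -> resource (outputs),
# so one task node naturally owns several outputs.
graph = {
    "resources": {"foo.d", "foo.o", "foo.di"},
    "tasks": {"compile": "dmd -c -H foo.d"},
    "inputs": {"compile": {"foo.d"}},             # resource -> task
    "outputs": {"compile": {"foo.o", "foo.di"}},  # task -> resource
}

def outputs_of(graph, task):
    """All resources produced by a single task node."""
    return graph["outputs"][task]
```

With this shape, "deleting unused outputs" is also simple to define: any resource node with no producing task and no consumer is garbage.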
Jun 10 2016
parent Jason White <54f9byee3t32 gmail.com> writes:
On Saturday, 11 June 2016 at 02:48:59 UTC, H. S. Teoh wrote:
 Finally got around to looking at this (albeit just briefly). It 
 looks very nice!  Perhaps I'll try using it for my next project.
If you do end up using it, I'd be happy to iron out any irritations in Button that you encounter. Button really needs a large project using it to help drive refinements.
 I'm particularly pleased with the bipartite graph idea. It's a 
 very nice way of sanely capturing the idea of build commands 
 that generate multiple outputs.  Also big plusses in my book 
 are implicit dependencies and use of inotify to eliminate the 
 infamous "thinking pause" that older build systems all suffer 
 from (this idea was also advanced by tup, but IMO Button looks 
 a tad more polished than tup in terms of overall design).  Of 
 course, being written in D is a bonus in my book. :-D Though 
 realistically speaking it probably doesn't really matter to me 
 as an end user, other than just giving me warm fuzzies.
Tup has had a big influence on the design of Button (e.g., the bipartite graph, deleting unused outputs, implicit dependencies, using Lua, etc.). Overall, I'd say Button does the same or better in every respect except maybe speed.

About it being written in D: if Rust had been mature enough when I first started working on it, I might have used it instead. All I knew is that I didn't want to go through the pain of writing it in C/C++. :-)
 Unfortunately I don't have the time right now to actually do 
 anything non-trivial with it... but I'll try to give feedback 
 when I do get around to it (and I definitely plan to)!
Thanks! I look forward to it!
Jun 11 2016
prev sibling next sibling parent reply Fool <fool dlang.org> writes:
On Monday, 30 May 2016 at 19:16:50 UTC, Jason White wrote:
 I am pleased to finally announce the build system I've been 
 slowly working on for over a year in my spare time:

     Docs:   http://jasonwhite.github.io/button/
     Source: https://github.com/jasonwhite/button

 Features:

 - Correct incremental builds.
 - Automatic dependency detection (for any build task, even 
 shell scripts).
 - Build graph visualization using GraphViz.
 - Language-independent. It can build anything.
 - Can automatically build when an input file is modified (using 
 inotify).
 - Recursive: It can build the build description as part of the 
 build.
 - Lua is the primary build description language.
Nice work! I'm wondering how Button would compare in the Build System Shootout (https://github.com/ndmitchell/build-shootout).
Jun 12 2016
parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Sunday, 12 June 2016 at 11:06:23 UTC, Fool wrote:
 Nice work! I'm wondering how Button would compare in the Build 
 System Shootout (https://github.com/ndmitchell/build-shootout).
It does pretty well. I even looked over this as I was designing it.

Here are the test cases it succeeds at:

- "basic: Basic dependency"
- "parallel: Parallelism"
- "include: C #include files"
- "wildcard: Build a file specified by an extension wildcard" (via generating the build description)
- "spaces: Build a file containing spaces"
- "monad1: Monadic patterns"
- "monad2: More monadic patterns"
- "monad3: More monadic patterns"
- "unchanged: Handle files which do not change"
- "multiple: Rules with multiple outputs"
- "digest: Don't rebuild when a file is modified to the same value"
- "nofileout: Don't produce an output file"

And the ones it fails at:

- "system1: Dependency on system information" (Because tasks with no dependencies are only run once. This could be changed easily enough, but I don't see the point.)
- "system2: Dependency on system environment variable" (Button doesn't know about environment variables.)
- "pool: Limit the parallelism in a specific stage" (I'm not sure how useful this is, but it could be added.)
- "secondary: Secondary target" (I think this is incorrect behavior and not a feature.)
- "intermediate: Intermediate target" (Same reason as "secondary". If this is really needed, it should be encapsulated inside a single task.)

As for the "Build System Power" section:

- Yes: Pre dependencies
- Yes: Post dependencies
- Yes: Mid dependencies
- Yes: Auto post dependencies
- Not yet: Auto cached commands

I'd say it's more robust than any other single build system there, but I'm biased. :-) I should probably make a pull request to add it to the shootout.
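The "digest" case above (don't rebuild when a file is rewritten with identical contents) comes down to comparing content hashes instead of timestamps. A minimal sketch of the idea, with hypothetical names:

```python
import hashlib

def digest(path):
    """Content hash of a file; the rebuild decision keys off this,
    not the modification time."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def needs_rebuild(path, stored_digests):
    """True only if the file's contents differ from the last build.
    Rewriting a file with identical bytes bumps its mtime but does
    not change the digest, so no rebuild is triggered."""
    current = digest(path)
    changed = stored_digests.get(path) != current
    stored_digests[path] = current
    return changed
```

The trade-off versus pure timestamp checking is reading every file's contents; build systems that do this typically only hash files whose timestamps have changed.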
Jun 12 2016
parent reply Fool <fool dlang.org> writes:
On Sunday, 12 June 2016 at 22:59:15 UTC, Jason White wrote:
 - "system1: Dependency on system information" (Because tasks 
 with no dependencies are only run once. This could be changed 
 easily enough, but I don't see the point.)
Switching the compiler version seems to be a valid use case. You might have other means to detect this, though.
 - "secondary: Secondary target" (I think this is incorrect 
 behavior and not a feature.)
 - "intermediate: Intermediate target" (Same reason as 
 "secondary". If this is really needed, it should be 
 encapsulated inside a single task.)
A possible use case is creating object files first and packing them into a library as a second step. The single object files are then not of much interest anymore. Imagine you want to distribute a build to several development machines such that their local build environments are convinced that the build is up to date. If object files can be treated as secondary or intermediate targets, you can save lots of unnecessary network traffic and storage.
 I should probably make a pull request to add it to the shootout.
It might help advertising. :-)
Jun 14 2016
parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Tuesday, 14 June 2016 at 10:47:58 UTC, Fool wrote:
 Switching the compiler version seems to be a valid use case. 
 You might have other means to detect this, though.
If you want to depend on the compiler version, then you can add a dependency on the compiler executable. It might be a good idea to have Button do this automatically for every command. That is, finding the path to the command's executable and making it a dependency.
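Resolving a command to its executable's path, so the build system can record it as an input, is straightforward. A sketch using the standard library (the automatic behavior itself is only a proposal in this thread, and the function name is mine):

```python
import shutil

def implicit_tool_dependency(command):
    """Given a command line like ['dmd', '-c', 'foo.d'], return the
    full path of the executable so it can be treated as an input:
    if the compiler binary changes, the task is re-run."""
    path = shutil.which(command[0])
    if path is None:
        raise FileNotFoundError(f"{command[0]} not found on PATH")
    return path
```

One wrinkle a real implementation would face: the result depends on the PATH at resolution time, so the recorded dependency should be re-resolved whenever the environment changes.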
 A possible use case is creating object files first and packing 
 them into a library as a second step. Then single object files 
 are of not much interest anymore. Imagine, you want to 
 distribute a build to several development machines such that 
 their local build environments are convinced that the build is 
 up to date. If object files can be treated as secondary or 
 intermediate targets you can save lots of unnecessary network 
 traffic and storage.
You're right, that is a valid use case. In my day job, we have builds that produce 60+ GB of object files. It would be wasteful to distribute all that to development machines.

However, I can think of another scenario where it would just as well be incorrect behavior: linking an executable and then running tests on it. The executable could then be seen by the build system as the "secondary" or "intermediate" output. If it gets deleted, I think we'd want it rebuilt.

I'm not sure how Make or Shake implement this without doing it incorrectly in certain scenarios. There would need to be a way to differentiate between necessary and unnecessary outputs. I'll have to think about this more.
Jun 14 2016
next sibling parent "H. S. Teoh via Digitalmars-d-announce" writes:
On Wed, Jun 15, 2016 at 05:04:28AM +0000, Jason White via
Digitalmars-d-announce wrote:
 On Tuesday, 14 June 2016 at 10:47:58 UTC, Fool wrote:
[...]
 A possible use case is creating object files first and packing them
 into a library as a second step. Then single object files are of not
 much interest anymore. Imagine, you want to distribute a build to
 several development machines such that their local build
 environments are convinced that the build is up to date. If object
 files can be treated as secondary or intermediate targets you can
 save lots of unnecessary network traffic and storage.
You're right, that is a valid use case. In my day job, we have builds that produce 60+ GB of object files. It would be wasteful to distribute all that to development machines.

However, I can think of another scenario where it would just as well be incorrect behavior: linking an executable and then running tests on it. The executable could then be seen by the build system as the "secondary" or "intermediate" output. If it gets deleted, I think we'd want it rebuilt.

I'm not sure how Make or Shake implement this without doing it incorrectly in certain scenarios. There would need to be a way to differentiate between necessary and unnecessary outputs. I'll have to think about this more.
I don't think Make handles this at all. You'd just write rules in the Makefile to delete the intermediate files if you really care to. Most of the time people just ignore it, and add a 'clean' rule with some wildcards to clean up the intermediate files.

(This is actually one of the sources of major annoyance with Makefiles: because of the unknown state of intermediate files, builds are rarely reproducible, and `make clean; make` is a ritual that has come to be accepted as a fact of life. Arguably, though, a *proper* build system ought to be such that incremental builds are always correct and reproducible, and do not depend on environmental factors.)

T

-- 
Not all rumours are as misleading as this one.
Jun 14 2016
prev sibling parent Fool <fool dlang.org> writes:
On Wednesday, 15 June 2016 at 05:04:28 UTC, Jason White wrote:
 If you want to depend on the compiler version, then you can add 
 a dependency on the compiler executable. It might be a good 
 idea to have Button do this automatically for every command. 
 That is, finding the path to the command's executable and 
 making it a dependency.
I think you are fine if adding a dependency works. If it's done automatically someone will ask for a way to disable this feature.
 However, I can think of another scenario where it would just as 
 well be incorrect behavior: Linking an executable and then 
 running tests on it. The executable could then be seen by the 
 build system as the "secondary" or "intermediate" output. If it 
 gets deleted, I think we'd want it rebuilt.

 I'm not sure how Make or Shake implement this without doing it 
 incorrectly in certain scenarios. There would need to be a way 
 to differentiate between necessary and unnecessary outputs. 
 I'll have to think about this more.
Shake has 'order only' dependencies that cover the 'intermediate' case. GNU Make supports the special targets '.INTERMEDIATE' and '.SECONDARY' [1].

[1] http://www.gnu.org/software/make/manual/make.html#Chained-Rules
Jun 15 2016
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/30/2016 12:16 PM, Jason White wrote:
 Here is an example build description for DMD:

     https://github.com/jasonwhite/dmd/blob/button/src/BUILD.lua

 I'd say that's a lot easier to read than this crusty thing:

     https://github.com/dlang/dmd/blob/master/src/posix.mak
Yes, the syntax looks nice.
Jun 12 2016
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/12/16 8:27 PM, Walter Bright wrote:
 On 5/30/2016 12:16 PM, Jason White wrote:
 Here is an example build description for DMD:

     https://github.com/jasonwhite/dmd/blob/button/src/BUILD.lua

 I'd say that's a lot easier to read than this crusty thing:

     https://github.com/dlang/dmd/blob/master/src/posix.mak
Yes, the syntax looks nice.
Cool. Difference in size is also large. Do they do the same things? -- Andrei
Jun 14 2016
parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Tuesday, 14 June 2016 at 14:57:52 UTC, Andrei Alexandrescu 
wrote:
 On 6/12/16 8:27 PM, Walter Bright wrote:
 On 5/30/2016 12:16 PM, Jason White wrote:
 Here is an example build description for DMD:

     
 https://github.com/jasonwhite/dmd/blob/button/src/BUILD.lua

 I'd say that's a lot easier to read than this crusty thing:

     https://github.com/dlang/dmd/blob/master/src/posix.mak
Yes, the syntax looks nice.
Cool. Difference in size is also large. Do they do the same things? -- Andrei
Not quite. It doesn't download a previous version of dmd for bootstrapping, and it doesn't handle configuration (e.g., x86 vs x64). About all it does is the bare minimum work necessary to create the dmd executable. I basically ran `make all -n` and converted the output, because it's easier to read than the Makefile itself.

Building from scratch takes about 7 seconds on my machine (using 8 cores and building in /tmp). Make takes about 5 seconds. Guess I need to do some optimizing. :-)
Jun 14 2016
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 6/15/16 1:29 AM, Jason White wrote:
 On Tuesday, 14 June 2016 at 14:57:52 UTC, Andrei Alexandrescu wrote:
 On 6/12/16 8:27 PM, Walter Bright wrote:
 On 5/30/2016 12:16 PM, Jason White wrote:
 Here is an example build description for DMD:

 https://github.com/jasonwhite/dmd/blob/button/src/BUILD.lua

 I'd say that's a lot easier to read than this crusty thing:

     https://github.com/dlang/dmd/blob/master/src/posix.mak
Yes, the syntax looks nice.
Cool. Difference in size is also large. Do they do the same things? -- Andrei
Not quite. It doesn't download a previous version of dmd for bootstrapping and it doesn't handle configuration (e.g., x86 vs x64). About all it does is the bare minimum work necessary to create the dmd executable. I basically ran `make all -n` and converted the output because it's easier to read than the Makefile itself.
OK. I guess at least some of that stuff should be arguably scripted.
 Building from scratch takes about 7 seconds on my machine (using 8 cores
 and building in /tmp). Make takes about 5 seconds. Guess I need to do
 some optimizing. :-)
I'd say the gating factor is -j. If a build system doesn't implement the equivalent of make -j, that's a showstopper.

Andrei
Jun 15 2016
parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Wednesday, 15 June 2016 at 12:00:52 UTC, Andrei Alexandrescu 
wrote:
 I'd say the gating factor is -j. If a build system doesn't 
 implement the equivalent of make -j, that's a showstopper.
Don't worry, there is a --threads option, and it defaults to the number of logical cores.

I just did some tests, and the reason it is slower than Make is because of the automatic dependency detection on every single command. I disabled the automatic dependency detection and compared it with Make again. Button was then roughly the same speed as Make -- sometimes it was faster, sometimes slower.

Although, I think getting accurate dependencies at the cost of slightly slower builds is very much a worthwhile trade-off.
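Checking many files for changes on multiple threads, as described later in this thread, can be sketched with a thread pool: `stat` is an I/O call that releases the GIL, so threads genuinely overlap here. This is my own illustration, not Button's code:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def changed_files(paths, last_mtimes):
    """Stat every path on a thread pool and return the paths whose
    modification time differs from the recorded one (or which have
    disappeared)."""
    def check(path):
        try:
            mtime = os.stat(path).st_mtime_ns
        except FileNotFoundError:
            return path  # a deleted file counts as changed
        return path if last_mtimes.get(path) != mtime else None

    with ThreadPoolExecutor() as pool:
        return [p for p in pool.map(check, paths) if p is not None]
```

A file-system watcher removes this scan entirely by pushing change events as they happen, which is the "near-zero time" case mentioned further down.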
Jun 15 2016
parent reply Atila Neves <atila.neves gmail.com> writes:
On Thursday, 16 June 2016 at 04:26:24 UTC, Jason White wrote:
 On Wednesday, 15 June 2016 at 12:00:52 UTC, Andrei Alexandrescu 
 wrote:
 I'd say the gating factor is -j. If a build system doesn't 
 implement the equivalent of make -j, that's a showstopper.
Don't worry, there is a --threads option and it defaults to the number of logical cores. I just did some tests and the reason it is slower than Make is because of the automatic dependency detection on every single command. I disabled the automatic dependency detection and compared it with Make again. Button was then roughly the same speed as Make -- sometimes it was faster, sometimes slower. Although, I think getting accurate dependencies at the cost of slightly slower builds is very much a worthwhile trade-off.
It would be a worthwhile trade-off if those were the only two options available, but they're not. There are multiple build systems out there that do correct builds whilst being faster than Make. Being faster is easy, because Make is incredibly slow.

I didn't even find out about Ninja because I read about it in a blog post: I actively searched for a Make alternative because I was tired of waiting for it.

Atila
Jun 16 2016
parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Thursday, 16 June 2016 at 13:39:20 UTC, Atila Neves wrote:
 It would be a worthwhile trade-off, if those were the only two 
 options available, but they're not. There are multiple build 
 systems out there that do correct builds whilst being faster 
 than make. Being faster is easy, because make is incredibly 
 slow.

 I didn't even find out about ninja because I read about it in a 
 blog post, I actively searched for a make alternative because I 
 was tired of waiting for it.
Make is certainly not slow for full builds. That is what I was testing. I'm well aware of Ninja, and it is maybe only 1% faster than Make for full builds[1]. There is only so much optimization that can be done when spawning processes as dictated by a DAG. 99% of the CPU's time is spent on running the tasks themselves.

Where Make gets slow is when checking for changes on a ton of files. I haven't tested it, but I'm sure Button is faster than Make in this case because it checks for changed files using multiple threads. Using the file system watcher can also bring this down to a near-zero time.

Speed is not the only virtue of a build system. A build system can be amazeballs fast, but if you can't rely on it doing incremental builds correctly in production, then you're probably doing full builds every single time. Being easy to use and robust is also pretty important.

[1] http://hamelot.io/programming/make-vs-ninja-performance-comparison/
Jun 16 2016
next sibling parent reply "H. S. Teoh via Digitalmars-d-announce" writes:
On Fri, Jun 17, 2016 at 05:41:30AM +0000, Jason White via
Digitalmars-d-announce wrote:
[...]
 Where Make gets slow is when checking for changes on a ton of files. I
 haven't tested it, but I'm sure Button is faster than Make in this
 case because it checks for changed files using multiple threads. Using
 the file system watcher can also bring this down to a near-zero time.
IMO using the file system watcher is the way to go. It's the only way to beat the O(n) pause at the beginning of a build as the build system scans for what has changed.
 Speed is not the only virtue of a build system. A build system can be
 amazeballs fast, but if you can't rely on it doing incremental builds
 correctly in production, then you're probably doing full builds every
 single time. Being easy to use and robust is also pretty important.
For me, correctness is far more important than speed. Mostly because at my day job, we have a Make-based build system, and because of Make's weaknesses, countless hours, sometimes even days, have been wasted running `make clean; make` just so we can "be sure". Actually, it's worse than that; the "official" way to build it is:

	svn diff > /tmp/diff
	\rm -rf old_checkout
	mkdir new_checkout
	cd new_checkout
	svn co http://svnserver/path/to/project
	patch -p0 </tmp/diff
	make

because we have been bitten before by `make clean` not *really* cleaning *everything*, and so `make clean; make` was actually producing a corrupt image, whereas checking out a fresh new workspace produces the correct image.

Far too much time has been wasted "debugging" bugs that weren't really there, just because Make cannot be trusted to produce the correct results. Or heisenbugs that disappear when you rebuild from scratch. Unfortunately, due to the size of our system, a fresh svn checkout on a busy day means 15-20 mins (due to everybody on the local network trying to do fresh checkouts!), then make takes about 30-45 mins to build everything. When your changeset touches Makefiles, this could mean a 1 hour turnaround for every edit-compile-test cycle, which is ridiculously unproductive.

Such unworkable turnaround times, of course, cause people to be lazy and just run tests on incremental builds (of unknown correctness), which results in people checking in changesets that are actually wrong but just happen to work when they were testing on an incremental build (thanks to Make picking up stray old copies of obsolete libraries or object files or other such detritus). Which means *everybody*'s workspace breaks after running `svn update`. And of course, nobody is sure whether it broke because of their own changes, or because somebody checked in a bad changeset; so it's `make clean; make` time just to "be sure". 
That's n times how many man-hours (for n = number of people on the team) straight down the drain, where had the build system actually been reliable, only the person responsible would have to spend a few extra hours to fix the problem. Make proponents don't seem to realize how a seemingly not-very-important feature as build correctness actually adds up to a huge cost in terms of employee productivity, i.e., wasted hours, AKA wasted employee wages for the time spent watching `make clean; make` run. T -- It is of the new things that men tire --- of fashions and proposals and improvements and change. It is the old things that startle and intoxicate. It is the old things that are young. -- G.K. Chesterton
Jun 16 2016
next sibling parent Jason White <54f9byee3t32 gmail.com> writes:
On Friday, 17 June 2016 at 06:18:28 UTC, H. S. Teoh wrote:
 For me, correctness is far more important than speed. Mostly 
 because at my day job, we have a Make-based build system and 
 because of Make's weaknesses, countless hours, sometimes even 
 days, have been wasted running `make clean; make` just so we 
 can "be sure".  Actually, it's worse than that; the "official" 
 way to build it is:

 	svn diff > /tmp/diff
 	\rm -rf old_checkout
 	mkdir new_checkout
 	cd new_checkout
 	svn co http://svnserver/path/to/project
 	patch -p0 </tmp/diff
 	make

 because we have been bitten before by `make clean` not *really* 
 cleaning *everything*, and so `make clean; make` was actually 
 producing a corrupt image, whereas checking out a fresh new 
 workspace produces the correct image.

 Far too much time has been wasted "debugging" bugs that weren't 
 really there, just because Make cannot be trusted to produce 
 the correct results. Or heisenbugs that disappear when you 
 rebuild from scratch. Unfortunately, due to the size of our 
 system, a fresh svn checkout on a busy day means 15-20 mins 
 (due to everybody on the local network trying to do fresh 
 checkouts!), then make takes about 30-45 mins to build 
 everything.  When your changeset touches Makefiles, this could 
 mean a 1 hour turnaround for every edit-compile-test cycle, 
 which is ridiculously unproductive.

 Such unworkable turnaround times, of course, causes people to 
 be lazy and just run tests on incremental builds (of unknown 
 correctness), which results in people checking in changesets 
 that are actually wrong but just happen to work when they were 
 testing on an incremental build (thanks to Make picking up 
 stray old copies of obsolete libraries or object files or other 
 such detritus). Which means *everybody*'s workspace breaks 
 after running `svn update`. And of course, nobody is sure 
 whether it broke because of their own changes, or because 
 somebody checked in a bad changeset; so it's `make clean; make` 
 time just to "be sure". That's n times how many man-hours (for 
 n = number of people on the team) straight down the drain, 
 where had the build system actually been reliable, only the 
 person responsible would have to spend a few extra hours to fix 
 the problem.

 Make proponents don't seem to realize how a seemingly 
 not-very-important feature as build correctness actually adds 
 up to a huge cost in terms of employee productivity, i.e., 
 wasted hours, AKA wasted employee wages for the time spent 
 watching `make clean; make` run.
I couldn't agree more! Correctness is by far the most important feature of a build system. Second to that is probably being able to make sense of what is happening.

I have the same problems as you in my day job, but magnified. Some builds take 3+ hours, some nearly 24 hours, and none of the developers can run full builds themselves because the build process is so long and complicated. Turn-around time to test changes is abysmal and everyone is probably orders of magnitude more unproductive because of it. All of this because we can't trust Make or Visual Studio to do incremental builds correctly.

I hope to change that with Button.
Jun 16 2016
prev sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Friday, 17 June 2016 at 06:18:28 UTC, H. S. Teoh wrote:
 On Fri, Jun 17, 2016 at 05:41:30AM +0000, Jason White via 
 Digitalmars-d-announce wrote: [...]
 Where Make gets slow is when checking for changes on a ton of 
 files. I haven't tested it, but I'm sure Button is faster than 
 Make in this case because it checks for changed files using 
 multiple threads. Using the file system watcher can also bring 
 this down to a near-zero time.
 IMO using the file system watcher is the way to go. It's the only 
 way to beat the O(n) pause at the beginning of a build as the 
 build system scans for what has changed.
See, I used to think that, then I measured. tup uses fuse for this and that's exactly why it's fast. I was considering a similar approach with the reggae binary backend, and so I went and timed make, tup, ninja and reggae itself on a synthetic project. Basically I wrote a program to write out source files to be compiled, with a runtime parameter indicating how many source files to write.

The most extensive tests I did were on a synthetic project of 30k source files. That's a lot bigger than the vast majority of developers are ever likely to work on. As a comparison, the 2.6.11 version of the Linux kernel had 17k files.

A no-op build on my laptop was about (from memory):

	tup: <1s
	ninja, binary: 1.3s
	make: >20s

It turns out that just stat'ing everything is fast enough for pretty much everybody, so I just kept the simple algorithm. Bear in mind the Makefiles here were the simplest possible - doing anything that usually goes on in Makefileland would have made it far, far slower. I know: I converted a build system at work from make to hand-written ninja and its no-op builds went from nearly 2 minutes to 1s.

If you happen to be unlucky enough to work on a project so large you need to watch the file system, then use the tup backend I guess.

Atila
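The stat-everything approach described above can be sketched in a few lines (an illustrative Python sketch, not reggae's actual code; the manifest format is hypothetical):

```python
import os


def changed_files(manifest):
    """Return the files whose mtime differs from the recorded one.

    `manifest` maps path -> mtime captured after the last build.
    A no-op build simply stat()s every entry; even for tens of
    thousands of files this typically finishes in well under a
    second, which is why plain stat'ing is fast enough here.
    """
    changed = []
    for path, old_mtime in manifest.items():
        try:
            if os.stat(path).st_mtime != old_mtime:
                changed.append(path)
        except FileNotFoundError:
            changed.append(path)  # a deleted input also dirties the build
    return changed
```

A parallel version would simply split `manifest` across threads, as Button reportedly does for its change checks.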
Jun 17 2016
parent reply "H. S. Teoh via Digitalmars-d-announce" writes:
On Fri, Jun 17, 2016 at 09:00:45AM +0000, Atila Neves via
Digitalmars-d-announce wrote:
 On Friday, 17 June 2016 at 06:18:28 UTC, H. S. Teoh wrote:
 On Fri, Jun 17, 2016 at 05:41:30AM +0000, Jason White via
 Digitalmars-d-announce wrote: [...]
 Where Make gets slow is when checking for changes on a ton of
 files.  I haven't tested it, but I'm sure Button is faster than
 Make in this case because it checks for changed files using
 multiple threads.  Using the file system watcher can also bring
 this down to a near-zero time.
 IMO using the file system watcher is the way to go. It's the only 
 way to beat the O(n) pause at the beginning of a build as the 
 build system scans for what has changed.

 See, I used to think that, then I measured. tup uses fuse for this 
 and that's exactly why it's fast. I was considering a similar 
 approach with the reggae binary backend, and so I went and timed 
 make, tup, ninja and reggae itself on a synthetic project. 
 Basically I wrote a program to write out source files to be 
 compiled, with a runtime parameter indicating how many source 
 files to write.

 The most extensive tests I did were on a synthetic project of 30k 
 source files. That's a lot bigger than the vast majority of 
 developers are ever likely to work on. As a comparison, the 
 2.6.11 version of the Linux kernel had 17k files.
Today's software projects are much bigger than you seem to imply. For example, my work project *includes* the entire Linux kernel as part of its build process, and the size of the workspace is dominated by the non-Linux components. So 30k source files isn't exactly something totally far out.
 A no-op build on my laptop was about (from memory):
 
 tup: <1s
 ninja, binary: 1.3s
 make: >20s
 
 It turns out that just stat'ing everything is fast enough for pretty
 much everybody, so I just kept the simple algorithm. Bear in mind the
 Makefiles here were the simplest possible - doing anything that
 usually goes on in Makefileland would have made it far, far slower. I
 know: I converted a build system at work from make to hand-written
 ninja and it no-op builds went from nearly 2 minutes to 1s.
Problem: stat() isn't good enough when network file sharing is involved. It breaks correctness by introducing heisenbugs caused by (sometimes tiny) differences in local hardware clocks. It also may break if two versions of the same file share the same timestamp (often thought impossible, but quite possible with machine-generated files and a filesystem that doesn't have subsecond resolution -- and it's rare enough that when it does happen, people are left scratching their heads for many wasted hours).

To guarantee correctness you need to compute a digest of file contents, not just the timestamp.
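The digest-based check argued for here might look like the following (a minimal Python sketch; the hash algorithm and chunk size are arbitrary choices):

```python
import hashlib


def file_digest(path, chunk_size=1 << 16):
    """Hash a file's contents so change detection ignores timestamps.

    Two files with identical bytes compare equal even if their mtimes
    differ (e.g. clock skew on network shares), and two different
    generated files compare unequal even when they happen to share a
    timestamp on a filesystem without subsecond resolution.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```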
 If you happen to be unlucky enough to work on a project so large you
 need to watch the file system, then use the tup backend I guess.
[...]

Yes, I'm pretty sure that describes a lot of software projects out there today. The scale of software these days is growing exponentially, and there's no sign of it slowing down. Or maybe that's just an artifact of the field I work in? :-P

T

-- 
Never step over a puddle, always step around it. Chances are that whatever made it is still dripping.
Jun 17 2016
parent Dicebot <public dicebot.lv> writes:
On 06/17/2016 06:20 PM, H. S. Teoh via Digitalmars-d-announce wrote:
 If you happen to be unlucky enough to work on a project so large you
 need to watch the file system, then use the tup backend I guess.
 [...] Yes, I'm pretty sure that describes a lot of software 
 projects out there today. The scale of software these days is 
 growing exponentially, and there's no sign of it slowing down. 
 Or maybe that's just an artifact of the field I work in? :-P
Server-side domain is definitely getting smaller because micro-service hype keeps growing (and that is one of the hypes I do actually support btw).
Jun 17 2016
prev sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Friday, 17 June 2016 at 05:41:30 UTC, Jason White wrote:
 On Thursday, 16 June 2016 at 13:39:20 UTC, Atila Neves wrote:
 It would be a worthwhile trade-off, if those were the only two 
 options available, but they're not. There are multiple build 
 systems out there that do correct builds whilst being faster 
 than make. Being faster is easy, because make is incredibly 
 slow.

 I didn't even find out about ninja because I read about it in 
 a blog post, I actively searched for a make alternative 
 because I was tired of waiting for it.
 Make is certainly not slow for full builds. That is what I was 
 testing.
I only care about incremental builds. I actually have difficulty understanding why you tested full builds, they're utterly uninteresting to me.
 A build system  can be amazeballs fast, but if you can't rely 
 on it doing incremental builds correctly in production, then 
 you're probably doing full builds every single time. Being easy 
 to use and robust is also pretty important.
I agree, but CMake/ninja, tup, reggae/ninja, reggae/binary are all correct _and_ fast.

Atila
Jun 17 2016
parent reply Fool <fool dlang.org> writes:
On Friday, 17 June 2016 at 08:23:50 UTC, Atila Neves wrote:
 I agree, but CMake/ninja, tup, regga/ninja, reggae/binary are 
 all correct _and_ fast.
'Correct' referring to which standards?

There is an interesting series of blog posts by Mike Shal:

http://gittup.org/blog/2014/03/6-clobber-builds-part-1---missing-dependencies/
http://gittup.org/blog/2014/05/7-clobber-builds-part-2---fixing-missing-dependencies/
http://gittup.org/blog/2014/06/8-clobber-builds-part-3---other-clobber-causes/
http://gittup.org/blog/2015/03/13-clobber-builds-part-4---fixing-other-clobber-causes/
Jun 17 2016
parent reply "H. S. Teoh via Digitalmars-d-announce" writes:
On Fri, Jun 17, 2016 at 07:30:42PM +0000, Fool via Digitalmars-d-announce wrote:
 On Friday, 17 June 2016 at 08:23:50 UTC, Atila Neves wrote:
 I agree, but CMake/ninja, tup, reggae/ninja, reggae/binary are all
 correct _and_ fast.
 'Correct' referring to which standards?

 There is an interesting series of blog posts by Mike Shal:

 http://gittup.org/blog/2014/03/6-clobber-builds-part-1---missing-dependencies/
 http://gittup.org/blog/2014/05/7-clobber-builds-part-2---fixing-missing-dependencies/
 http://gittup.org/blog/2014/06/8-clobber-builds-part-3---other-clobber-causes/
 http://gittup.org/blog/2015/03/13-clobber-builds-part-4---fixing-other-clobber-causes/
To me, "correct" means:

- After invoking the build tool, the workspace *always* reflects a valid, reproducible build. Regardless of initial conditions, existence or non-existence of intermediate files, stale files, temporary files, or other detritus. Independent of environmental factors. Regardless of whether a previous build invocation was interrupted in the middle -- the build system should be able to continue where it left off, reproduce any partial build products, and produce exactly the same products, bit for bit, as if it had not been interrupted before.

- If anything changes -- and I mean literally ANYTHING -- that might cause the build products to be different in some way, the build tool should detect that and update the affected targets accordingly the next time it's invoked. "Anything" includes (but is not limited to):

  - The contents of source files, even if the timestamp stays identical to the previous version;
  - Change in compiler flags, or any change to the build script itself;
  - A new version of the compiler was installed on the system;
  - A system library was upgraded / a new library was installed that may get picked up at link time;
  - Change in environment variables that might cause some of the build commands to work differently (yes I know this is a bad thing -- it is not recommended to have your build depend on this, but the point is that if it does, the build tool ought to detect it);
  - Editing comments in a source file (what if there's a script that parses comments? Or ddoc?);
  - Reverting a patch (that may leave stray source files introduced by the patch);
  - Interrupting a build in the middle -- the build system should be able to detect any partially-built products and correctly rebuild them instead of picking up a potentially corrupted object in the next operation in the pipeline.

- As much as is practical, all unnecessary work should be elided. For example:

  - If I edit a comment in a source file, and there's an intermediate compile stage where an object file is produced, and the object file after the change is identical to the one produced by the previous compilation, then any further actions -- linking, archiving, etc. -- should not be done, because all products will be identical.
  - More generally, if my build consists of source file A, which gets compiled to intermediate product B, which in turn is used to produce final product C, then if A is modified, the build system should regenerate B. But if the new B is identical to the old B, then C should *not* be regenerated again.
  - Contrariwise, if modifications are made to B, the build system should NOT use the modified B to generate C; instead, it should detect that B is out-of-date w.r.t. A, and regenerate B from A first, and then proceed to generate C if it would be different from before.
  - Touching the timestamp of a source file or intermediate file should *not* cause the build system to rebuild that target, if the result will actually be bit-for-bit identical with the old product.
  - In spite of this work elision, the build system should still ensure that the final build products are 100% reproducible. That is, work is elided if and only if it is actually unnecessary; if a comment change actually causes something to change (e.g., ddocs are different now), then the build system must rebuild all affected subsequent targets.

- Assuming that a revision control system is in place, and a workspace is checked out on revision X with no further modifications, then invoking the build tool should ALWAYS, without any exceptions, produce exactly the same outputs, bit for bit. I.e., if your workspace faithfully represents revision X in the RCS, then invoking the build tool will produce the exact same binary products as anybody else who checks out revision X, regardless of their initial starting conditions.

  - E.g., I may be on revision Y, then I run svn update -rX, and there may be stray intermediate files strewn around my workspace that are not in a fresh checkout of revision X; the build tool should still produce exactly the same products as a clean, fresh checkout of revision X. This holds regardless of whether Y represents an older revision or a newer revision, or a different branch, etc.
  - In other words, the build system should be 100% reproducible at all times, and should not be affected by the existence (or non-existence) of any stale intermediate files.

By the above definition of correctness, Make (and pretty much anything based on it, that I know of) fails on several counts. Systems like SCons come close to full correctness, and I believe tup can also be made correct in this way. Make, however, by its very design cannot possibly meet all of the above requirements simultaneously, and thus fails my definition of correctness.

T

-- 
A bend in the road is not the end of the road unless you fail to make the turn. -- Brian White
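The elision rule above -- regenerate B, but skip C when the new B is byte-identical to the old one -- is often called early cutoff. A hypothetical Python sketch (not any real build tool's algorithm; names are illustrative):

```python
import hashlib


def rebuild_with_cutoff(produce, out_path, old_digest):
    """Regenerate `out_path` by calling `produce()`, then decide
    whether targets downstream of it need rebuilding.

    If the fresh output hashes identically to the previous one
    (say, only a comment changed in the source), dependents are
    skipped even though this target itself was re-run.
    """
    produce()  # e.g. compile A -> B
    with open(out_path, "rb") as f:
        new_digest = hashlib.sha256(f.read()).hexdigest()
    # Downstream work is needed only if the bytes actually changed.
    return new_digest, new_digest != old_digest
```

Note the comparison is on content digests, not timestamps, so touching a file or re-running an identical compilation never cascades into unnecessary relinking.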
Jun 17 2016
parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Friday, 17 June 2016 at 20:36:53 UTC, H. S. Teoh wrote:
 - Assuming that a revision control system is in place, and a
   workspace is checked out on revision X with no further
   modifications, then invoking the build tool should ALWAYS,
   without any exceptions, produce exactly the same outputs, bit
   for bit.  I.e., if your workspace faithfully represents
   revision X in the RCS, then invoking the build tool will
   produce the exact same binary products as anybody else who
   checks out revision X, regardless of their initial starting
   conditions.
Making builds bit-for-bit reproducible is really, really hard to do, particularly on Windows. Microsoft's C/C++ compiler embeds timestamps and other nonsense into the binaries so that every time you build, even when no source changed, you get a different binary. Google wrote a tool to help eliminate this non-determinism as a post-processing step called zap_timestamp[1]. I want to eventually include something like this with Button on Windows. I'll probably have to make a PE reader library first though.

Without reproducible builds, caching outputs doesn't work very well either.

Moral of the story is, if you're writing a compiler, for the sake of build systems everywhere, make the output deterministic! For consecutive invocations, without changing any source code, I want the hashes of the binaries to be identical every single time. DMD doesn't do this and it saddens me greatly.

[1] https://github.com/google/syzygy/tree/master/syzygy/zap_timestamp
Jun 18 2016
parent reply "H. S. Teoh via Digitalmars-d-announce" writes:
On Sat, Jun 18, 2016 at 08:38:21AM +0000, Jason White via
Digitalmars-d-announce wrote:
 On Friday, 17 June 2016 at 20:36:53 UTC, H. S. Teoh wrote:
 - Assuming that a revision control system is in place, and a
   workspace is checked out on revision X with no further
   modifications, then invoking the build tool should ALWAYS,
   without any exceptions, produce exactly the same outputs, bit
   for bit.  I.e., if your workspace faithfully represents
   revision X in the RCS, then invoking the build tool will
   produce the exact same binary products as anybody else who
   checks out revision X, regardless of their initial starting
   conditions.
 Making builds bit-for-bit reproducible is really, really hard to 
 do, particularly on Windows. Microsoft's C/C++ compiler embeds 
 timestamps and other nonsense into the binaries so that every 
 time you build, even when no source changed, you get a different 
 binary. Google wrote a tool to help eliminate this 
 non-determinism as a post-processing step called zap_timestamp[1]. 
 I want to eventually include something like this with Button on 
 Windows. I'll probably have to make a PE reader library first 
 though.
Even on POSIX, certain utilities also insert timestamps, which is very annoying. An SCons-based website that I developed years ago ran into this problem with imagemagick. Fortunately there was a command-line option to suppress timestamps, which made things saner.
 Without reproducible builds, caching outputs doesn't work very well
 either.
Yup.
 Moral of the story is, if you're writing a compiler, for the sake of
 build systems everywhere, make the output deterministic! For
 consecutive invocations, without changing any source code, I want the
 hashes of the binaries to be identical every single time. DMD doesn't
 do this and it saddens me greatly.
[...]

DMD doesn't? What does it do that isn't deterministic?

T

-- 
Elegant or ugly code as well as fine or rude sentences have something in common: they don't depend on the language. -- Luca De Vitis
Jun 18 2016
parent reply Jason White <54f9byee3t32 gmail.com> writes:
On Saturday, 18 June 2016 at 14:23:39 UTC, H. S. Teoh wrote:
 Moral of the story is, if you're writing a compiler, for the 
 sake of build systems everywhere, make the output 
 deterministic! For consecutive invocations, without changing 
 any source code, I want the hashes of the binaries to be 
 identical every single time. DMD doesn't do this and it 
 saddens me greatly.
DMD doesn't? What does it do that isn't deterministic?
I have no idea. As a simple test, I compiled one of my source files to an object file, and ran md5sum on it. I did this again and the md5sum is different. Looking at a diff of the hexdump isn't very fruitful either (for me at least).

For reference, I'm on Linux x86_64 with DMD v2.071.0.
Jun 18 2016
parent reply "H. S. Teoh via Digitalmars-d-announce" writes:
On Sat, Jun 18, 2016 at 08:46:30PM +0000, Jason White via
Digitalmars-d-announce wrote:
 On Saturday, 18 June 2016 at 14:23:39 UTC, H. S. Teoh wrote:
 Moral of the story is, if you're writing a compiler, for the sake
 of build systems everywhere, make the output deterministic! For
 consecutive invocations, without changing any source code, I want
 the hashes of the binaries to be identical every single time. DMD
 doesn't do this and it saddens me greatly.
DMD doesn't? What does it do that isn't deterministic?
 I have no idea. As a simple test, I compiled one of my source 
 files to an object file, and ran md5sum on it. I did this again 
 and the md5sum is different. Looking at a diff of the hexdump 
 isn't very fruitful either (for me at least).

 For reference, I'm on Linux x86_64 with DMD v2.071.0.
I did a quick investigation, which found something interesting. If compiling straight to executable, the executable is identical each time with the same md5sum. However, when compiling to object files, the md5sum is sometimes the same, sometimes different. Repeating this several times reveals that the md5sum changes every second, meaning that the difference is a timestamp in the object file.

Maybe we could file an enhancement request for this?

T

-- 
Indifference will certainly be the downfall of mankind, but who cares? -- Miquel van Smoorenburg
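The build-twice experiment described here can be automated along these lines (an illustrative Python sketch; the build command and output path are placeholders, not DMD-specific):

```python
import hashlib
import subprocess
import time


def is_deterministic(build_cmd, output, runs=2, delay=1.1):
    """Run the same build repeatedly and compare output digests.

    Sleeping past a one-second boundary between runs makes an
    embedded timestamp show up as differing hashes -- exactly the
    per-second variation observed with the object files here.
    """
    digests = set()
    for _ in range(runs):
        subprocess.run(build_cmd, check=True)
        with open(output, "rb") as f:
            digests.add(hashlib.sha256(f.read()).hexdigest())
        time.sleep(delay)
    return len(digests) == 1
```

For instance, `is_deterministic(["dmd", "-c", "foo.d"], "foo.o")` (with a hypothetical `foo.d`) would flag the behaviour reported in this thread.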
Jun 18 2016
parent Jason White <54f9byee3t32 gmail.com> writes:
On Saturday, 18 June 2016 at 23:52:00 UTC, H. S. Teoh wrote:
 I did a quick investigation, which found something interesting.
  If compiling straight to executable, the executable is 
 identical each time with the same md5sum.  However, when 
 compiling to object files, the md5sum is sometimes the same, 
 sometimes different.  Repeating this several time reveals that 
 the md5sum changes every second, meaning that the difference is 
 a timestamp in the object file.

 Maybe we could file an enhancement request for this?
Done: https://issues.dlang.org/show_bug.cgi?id=16185
Jun 19 2016
prev sibling parent reply Edwin van Leeuwen <edder tkwsping.nl> writes:
On Monday, 13 June 2016 at 00:27:47 UTC, Walter Bright wrote:
 On 5/30/2016 12:16 PM, Jason White wrote:
 Here is an example build description for DMD:

     https://github.com/jasonwhite/dmd/blob/button/src/BUILD.lua

 I'd say that's a lot easier to read than this crusty thing:

     https://github.com/dlang/dmd/blob/master/src/posix.mak
Yes, the syntax looks nice.
How about using reggae? https://github.com/atilaneves/phobos/blob/reggae/reggaefile.d
Jun 15 2016
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/15/2016 4:07 AM, Edwin van Leeuwen wrote:
 How about using reggae?

 https://github.com/atilaneves/phobos/blob/reggae/reggaefile.d
I haven't studied either.
Jun 15 2016
parent reply John Colvin <john.loughran.colvin gmail.com> writes:
On Wednesday, 15 June 2016 at 11:47:00 UTC, Walter Bright wrote:
 On 6/15/2016 4:07 AM, Edwin van Leeuwen wrote:
 How about using reggae?

 https://github.com/atilaneves/phobos/blob/reggae/reggaefile.d
I haven't studied either.
If you do study that reggae file, remember that it's a deliberate transliteration of the makefile and therefore is a lot more verbose than it *could* be if done from a clean slate or as a proper translation. IIRC it was done to show that reggae could do literally everything the makefile does, in the same way.
Jun 15 2016
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 06/15/2016 08:05 AM, John Colvin wrote:
 On Wednesday, 15 June 2016 at 11:47:00 UTC, Walter Bright wrote:
 On 6/15/2016 4:07 AM, Edwin van Leeuwen wrote:
 How about using reggae?

 https://github.com/atilaneves/phobos/blob/reggae/reggaefile.d
I haven't studied either.
If you do study that reggae file, remember that it's a deliberate transliteration of the makefile and therefore is a lot more verbose than it *could* be if done from a clean slate or as a proper translation. IIRC it was done to show that reggae could do literally everything the makefile does, in the same way.
Does it do -j? -- Andrei
Jun 15 2016
next sibling parent Edwin van Leeuwen <edder tkwsping.nl> writes:
On Wednesday, 15 June 2016 at 15:39:47 UTC, Andrei Alexandrescu 
wrote:
 On 06/15/2016 08:05 AM, John Colvin wrote:
 On Wednesday, 15 June 2016 at 11:47:00 UTC, Walter Bright 
 wrote:
 On 6/15/2016 4:07 AM, Edwin van Leeuwen wrote:
 How about using reggae?

 https://github.com/atilaneves/phobos/blob/reggae/reggaefile.d
I haven't studied either.
If you do study that reggae file, remember that it's a deliberate transliteration of the makefile and therefore is a lot more verbose than it *could* be if done from a clean slate or as a proper translation. IIRC it was done to show that reggae could do literally everything the makefile does, in the same way.
Does it do -j? -- Andrei
It can work with multiple backends (make/tup/ninja), which all support -j. There is also a binary backend (creates an executable), not sure if that supports -j natively.
Jun 15 2016
prev sibling parent Atila Neves <atila.neves gmail.com> writes:
On Wednesday, 15 June 2016 at 15:39:47 UTC, Andrei Alexandrescu 
wrote:
 On 06/15/2016 08:05 AM, John Colvin wrote:
 On Wednesday, 15 June 2016 at 11:47:00 UTC, Walter Bright 
 wrote:
 On 6/15/2016 4:07 AM, Edwin van Leeuwen wrote:
 How about using reggae?

 https://github.com/atilaneves/phobos/blob/reggae/reggaefile.d
I haven't studied either.
If you do study that reggae file, remember that it's a deliberate transliteration of the makefile and therefore is a lot more verbose than it *could* be if done from a clean slate or as a proper translation. IIRC it was done to show that reggae could do literally everything the makefile does, in the same way.
Does it do -j? -- Andrei
Short answer: yes.

Long answer: it has multiple backends. I assume the one that'd be used for dmd/druntime/phobos would be the binary (compiled D code) one since that one doesn't have dependencies on anything else. It does what ninja does, which is to use the number of cores on the system. There are also the ninja, make and tup backends and those do what they do.

I've been meaning to update my reggae branch for a while but haven't been able to gather enough motivation. The part that just builds the library is easy (I haven't tried compiling the code below):

	alias cObjs = objectFiles!(Sources!("etc/c/zlib"),
	                           Flags("-m64 -fPIC -O3"));
	alias dObjs = objectFiles!(Sources!(["std", "etc"]),
	                           Flags("-conf= -m64 -w -dip25 -O -release"),
	                           ImportPaths("../druntime/import"));
	auto static_phobos = link("$project/generated/linux/release/64/libphobos",
	                          cObjs ~ dObjs,
	                          "-lib");

The problem is all the other targets, and I can't break any of them, and they're all annoying in their own special way. The auto-tester only covers a fraction and I have no idea if all of them are still being used by anyone. Does anyone do MinGW builds with posix.mak for instance? I'm half convinced it's broken.

Atila
Jun 15 2016
prev sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Monday, 30 May 2016 at 19:16:50 UTC, Jason White wrote:
 Note that this is still a ways off from being production-ready. 
 It needs some polishing. Feedback would be most appreciated 
 (file some issues!). I really want to make this one of the best 
 build systems out there.
I found the beginning of the tutorial very clear. I really liked that it can produce a png of the build graph. I also liked the Lua build description for DMD. Much more legible than the make file.

However, once I got to the "Going Meta: Building the Build Description" section of the tutorial, I got a little confused.

I found it a little weird that the json output towards the end of the tutorial doesn't always match up. Like, where did the .h files go from the inputs? (I get that they aren't needed for running gcc, but you should mention that.) Why is it displaying cc instead of gcc? I just feel like you might be able to split things up a little and provide a few more details. Like, this is how to do a base version, then say this is how you can customize what is displayed. Also, it's a little terse on the details of things like what the cc.binary is doing. Always err on the side of explaining things too much rather than too little, IMO.
Jun 17 2016
parent Jason White <54f9byee3t32 gmail.com> writes:
On Friday, 17 June 2016 at 20:59:46 UTC, jmh530 wrote:
 I found the beginning of the tutorial very clear. I really 
 liked that it can produce a png of the build graph. I also 
 liked the Lua build description for DMD. Much more legible than 
 the make file.

 However, once I got to the "Going Meta: Building the Build 
 Description" section of the tutorial, I got a little confused.

 I found it a little weird that the json output towards the end 
 of the tutorial doesn't always match up. Like, where did the .h 
 files go from the inputs? (I get that they aren't needed for 
 running gcc, but you should mention that) Why is it displaying 
 cc instead of gcc? I just feel like you might be able to split 
 things up a little and provide a few more details. Like, this 
 is how to do a base version, then say this is how you can 
 customize what is displayed. Also, it's a little terse on the 
 details of things like what the cc.binary is doing. Always err 
 on the side of explaining things too much rather than too 
 little, IMO.
Thank you for the feedback! I'm glad someone has read the tutorial.

I'm not happy with that section either. I think I'll split it up and go into more depth, possibly moving it to a separate page. I also still need to write docs on the Lua parts (like cc.binary), but that API is subject to change.

Unlike most people, I kind of actually enjoy writing documentation.
Jun 18 2016