
digitalmars.D - unittests are really part of the build, not a special run

reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
This is a tooling issue.

D builds and messaging are rigged to consider unittests as a special 
build followed by a special run of an application.

In brief: I'd like to transition to a model in which unittesting is 
organically part of the build. After all, you wouldn't want to deploy an 
application that's failing its unittests.

Detail: Consider running a build vs. a unittest in one of the supported 
editors/IDEs (emacs, vim, Code::Blocks, Visual D, Xamarin...). During a 
build, an error will come in a standard format, e.g.

std/array.d(39): Error: undefined identifier xyz

This format is recognized by the editor and allows the user to click on 
it and go straight to the offending line etc.

In contrast, a failing unittest has a completely different format. In 
fact, it's a format that's worse than useless because it confuses the 
editor:

core.exception.AssertError std/array.d(39): xyz

emacs recognizes the text as a filename and, upon clicking, asks the 
user to open the nonsense file 
"core.exception.AssertError std/array.d". This error line is followed by 
a stack trace in no recognizable format. It should instead follow the 
format of e.g. gcc's notes providing additional information for an 
error, and again provide file/line information so the user can click 
through the call stack in the source.
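To make the confusion concrete, here is a sketch (Python, purely illustrative; the regex approximates an editor's matching rule, it is not any editor's actual pattern) of a `file(line): message` matcher applied to both lines above:

```python
import re

# Approximation of an editor's "file(line): message" diagnostic pattern.
DIAG = re.compile(r"^(?P<file>.+?)\((?P<line>\d+)\): ?(?P<msg>.*)$")

compiler_error = "std/array.d(39): Error: undefined identifier xyz"
unittest_error = "core.exception.AssertError std/array.d(39): xyz"

# The compiler line parses as intended:
print(DIAG.match(compiler_error).group("file"))   # std/array.d

# The unittest line "matches" too, but the exception class name is
# swallowed into the supposed filename -- exactly what emacs does:
print(DIAG.match(unittest_error).group("file"))   # core.exception.AssertError std/array.d
```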

Where I want us to be is a place where unittests are considered a part 
of the build; it should be trivial to set things up such that 
unittesting is virtually indistinguishable from compilation and linking.

This all is relatively easy to implement but might have a large positive 
impact.
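For illustration, a build-script sketch of unittests gating the deployable artifact (make syntax; the file names, target names, and flags here are invented for the example, not a proposal for specific compiler behavior):

```make
# Sketch: the release binary depends on the unittest run succeeding.
app_test: app.d
	dmd -unittest -main -ofapp_test app.d

# Running the tests is itself a build step; a failed assert aborts make.
tests-passed: app_test
	./app_test && touch tests-passed

app: app.d tests-passed
	dmd -O -release -ofapp app.d
```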

Please chime in before I make this into an issue. Would anyone like to 
take this?


Andrei
Mar 30 2015
next sibling parent "Kapps" <opantm2+spam gmail.com> writes:
Would this change result in just not running main and changing 
the default unittest runner output, while still running static 
constructors (which would then allow you to specify a custom 
unittest runner)? If so, I think it's a nice change. I've always 
found it quite odd that running unittests still runs the main 
method, and it causes annoyances when making custom build systems 
that may or may not need -main depending on whether they're a 
library or an application. Having unittests not care whether main 
is present, and not run main, would be a nice change for tooling.
Mar 30 2015
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:
tl;dr: please, no

We have put quite some effort into fighting the default DMD 
behaviour of -unittest simply adding to the main function rather 
than replacing it. Initially many applications did run tests on 
startup because DMD suggested it was a good idea - some rather 
painful practical experience has shown it was a bad suggestion. 
Accidental tests that start doing I/O on production servers, 
considerably increased restart times for services - that kind of issue.

And if you suggest building both the test and the normal binary 
as part of a single compiler call (building the test version 
silently in the background), that is also a very confusing 
addition, hardly worth its gain.

Just tweak your editors if that is truly important. It is not 
like being able to click some fancy lines in a GUI is a critical 
usability addition to testing.
Mar 30 2015
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/30/15 3:30 PM, Dicebot wrote:
 tl;dr: please, no

 We have put quite some effort into fighting default DMD behaviour of
 -unittest simply adding to main function and not replacing it. Initially
 many applications did run tests on startup because DMD suggested it is a
 good idea - some rather bad practical experience has shown this was a
 rather bad suggestion. Accidental tests that start doing I/O on
 productions servers, considerably increased restart times for services -
 that kind of issues.
Violent agreement here. I was just saying unittests should be part of the build process, not the run process. Running unittests and then the app is a bad idea.
 And if you suggest to build both test and normal build as part of single
 compiler call (building test version silently in the background) this is
 also very confusing addition hardly worth its gain.
Making the format of unittest failures better would take us a long way. Then we can script builds so the unittest and release build are created concurrently.
 Just tweak your editors if that is truly important. It is not like being
 able to click some fancy lines in GUI makes critical usability addition
 to testing.
This is a cultural change more than a pure tooling matter. I think we'd do well to change things on the tooling side instead of expecting editors to do it for us. Andrei
Mar 30 2015
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 30 March 2015 at 22:50:21 UTC, Andrei Alexandrescu 
wrote:
 On 3/30/15 3:30 PM, Dicebot wrote:
 tl;dr: please, no

 We have put quite some effort into fighting default DMD 
 behaviour of
 -unittest simply adding to main function and not replacing it. 
 Initially
 many applications did run tests on startup because DMD 
 suggested it is a
 good idea - some rather bad practical experience has shown 
 this was a
 rather bad suggestion. Accidental tests that start doing I/O on
 productions servers, considerably increased restart times for 
 services -
 that kind of issues.
Violent agreement here. I was just saying unittests should be part of the build process, not the run process. Running unittests and then the app is a bad idea.
Ok, pardon me for misunderstanding :) I got confused by "you don't want to run application that isn't tested" part.
 And if you suggest to build both test and normal build as part 
 of single
 compiler call (building test version silently in the 
 background) this is
 also very confusing addition hardly worth its gain.
Making the format of unittest failures better would take us a long way. Then we can script builds so the unittest and release build are created concurrently.
If it is only the format that matters, you can always change it via a custom test runner. For example, we do have a test runner that generates JUnit-compatible XML output for Jenkins - and that was possible to do with plain `unittest` blocks even with D1 :) The main problem with changing the default formatting is that it is pretty hard to choose one that is 100% right. The current one is at least simple and predictable, being just an exception printout.
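For reference, the hook such runners use is druntime's `Runtime.moduleUnitTester`. A minimal sketch of a runner that reformats failures into compiler-style diagnostics (the output format here is just an illustration):

```d
import core.runtime;
import std.stdio;

shared static this()
{
    Runtime.moduleUnitTester = function bool()
    {
        bool ok = true;
        foreach (m; ModuleInfo)  // walk all modules known to druntime
        {
            if (m is null) continue;
            if (auto test = m.unitTest)
            {
                try
                    test();
                catch (Throwable t)
                {
                    // print in compiler-style "file(line): message" form
                    stderr.writefln("%s(%s): unittest failure: %s",
                                    t.file, t.line, t.msg);
                    ok = false;
                }
            }
        }
        return ok;  // false makes the runtime exit with an error
    };
}
```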
Mar 30 2015
next sibling parent "w0rp" <devw0rp gmail.com> writes:
On Monday, 30 March 2015 at 23:26:38 UTC, Dicebot wrote:
 If it is only format that matters you can always change it via 
 custom test runner. For example, we do have a test runner that 
 generates JUnit-compatible XML output for Jenkins - and that 
 was possible to do with plain `unittest` blocks even with D1 :)

 Main problem with changing default formatting is that it is 
 pretty hard to choose one that is 100% right. Current one is at 
 least simple and predictable being just an exception printout.
I would love to have that JUnit-compatible test runner. I haven't needed it quite yet, but it would be nice to have, and I'm sure others would appreciate it. I would also like it if there was Cobertura-formatted output for DMD's coverage reports. Then it would be possible to see both test results and code coverage reports in Jenkins.
Mar 30 2015
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/30/15 4:26 PM, Dicebot wrote:
 Main problem with changing default formatting is that it is pretty hard
 to choose one that is 100% right.
It's very easy - format asserts occurring directly in unittests the same as compilation errors. I don't see why this would ever be debated. -- Andrei
Mar 30 2015
prev sibling parent reply "Leandro Lucarella" <llucax gmail.com> writes:
On Monday, 30 March 2015 at 23:26:38 UTC, Dicebot wrote:
 And if you suggest to build both test and normal build as 
 part of single
 compiler call (building test version silently in the 
 background) this is
 also very confusing addition hardly worth its gain.
Making the format of unittest failures better would take us a long way. Then we can script builds so the unittest and release build are created concurrently.
If it is only format that matters you can always change it via custom test runner. For example, we do have a test runner that generates JUnit-compatible XML output for Jenkins - and that was possible to do with plain `unittest` blocks even with D1 :) Main problem with changing default formatting is that it is pretty hard to choose one that is 100% right. Current one is at least simple and predictable being just an exception printout.
I think having the default use the same format as compiler errors makes perfect sense. Providing extra formatters in Phobos would be a huge gain, like a JUnit-compatible formatter, as it's a very widespread test reporting format that can be used with many tools. I agree the key is the current configurability, but providing a better default and better out-of-the-box alternatives seems like a very reasonable approach to me.
Apr 06 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/6/15 3:16 PM, Leandro Lucarella wrote:
 On Monday, 30 March 2015 at 23:26:38 UTC, Dicebot wrote:
 And if you suggest to build both test and normal build as part of
 single
 compiler call (building test version silently in the background)
 this is
 also very confusing addition hardly worth its gain.
Making the format of unittest failures better would take us a long way. Then we can script builds so the unittest and release build are created concurrently.
If it is only format that matters you can always change it via custom test runner. For example, we do have a test runner that generates JUnit-compatible XML output for Jenkins - and that was possible to do with plain `unittest` blocks even with D1 :) Main problem with changing default formatting is that it is pretty hard to choose one that is 100% right. Current one is at least simple and predictable being just an exception printout.
I think having the default using the same format as compiler errors makes perfect sense. Providing extra formatters in Phobos, would be a huge gain, like a JUnit-compatible formatter, as it's a very widespread test reporting format that can be used with many tools. I agree the key is the current configurability, but providing better default and better out of the box alternatives seems like a very reasonable approach to me.
YES! I was surprised that any of this was being debated. -- Andrei
Apr 06 2015
prev sibling parent reply "qznc" <qznc web.de> writes:
On Monday, 30 March 2015 at 22:50:21 UTC, Andrei Alexandrescu 
wrote:
 Violent agreement here. I was just saying unittests should be 
 part of the build process, not the run process. Running 
 unittests and then the app is a bad idea.
Sounds like a good idea to me. Then -unittest should be enabled by default? Implementation-wise it sounds like you want another entry point apart from main, e.g. "main_unittest". Then the build process is compile-link-unittest. Afterwards the run process is the usual main call. It makes binaries bigger, though. Maybe unittest-specific code can be placed in a special segment, which can be removed during deployment?
Mar 31 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/31/15 9:21 AM, qznc wrote:
 On Monday, 30 March 2015 at 22:50:21 UTC, Andrei Alexandrescu wrote:
 Violent agreement here. I was just saying unittests should be part of
 the build process, not the run process. Running unittests and then the
 app is a bad idea.
Sounds like a good idea to me. Then -unittest should be enabled by default?
Probably not; we're looking at two different builds. The build to be deployed has no unittest code at all.
 Implementationwise it sounds like you want another entry point apart
 from main, e.g. "main_unittest". Then the build process is
 compile-link-unittest. Afterwards the run process is the usual main call.

 It makes binaries bigger though. Maybe unittest-specific code can be
 placed in a special segment, which can be removed during deployment?
Interesting. Or could be a dynamically-loaded library. But... crawl before we walk. Andrei
Mar 31 2015
parent reply Johannes Totz <johannes jo-t.de> writes:
On 31/03/2015 19:24, Andrei Alexandrescu wrote:
 On 3/31/15 9:21 AM, qznc wrote:
 On Monday, 30 March 2015 at 22:50:21 UTC, Andrei Alexandrescu wrote:
 Violent agreement here. I was just saying unittests should be part of
 the build process, not the run process. Running unittests and then the
 app is a bad idea.
Sounds like a good idea to me. Then -unittest should be enabled by default?
Probably not; we're looking at two different builds. The build to be deployed has no unittest code at all.
I'm starting to see this differently these days (basically since I started to use Jenkins for everything): A build you haven't unit tested has implicitly failed. That means a release build that does not have any unit test bits is not deployable. Instead, compile as usual (both debug and release), and run unit tests against both (e.g. to catch compiler bugs in the optimiser). Then for deployment, drop/strip/remove/don't-package the unit test code.
 
 Implementationwise it sounds like you want another entry point apart
 from main, e.g. "main_unittest". Then the build process is
 compile-link-unittest. Afterwards the run process is the usual main call.

 It makes binaries bigger though. Maybe unittest-specific code can be
 placed in a special segment, which can be removed during deployment?
Interesting. Or could be a dynamically-loaded library. But... crawl before we walk. Andrei
Apr 01 2015
next sibling parent Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, Apr 1, 2015 at 7:31 AM, Johannes Totz via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On 31/03/2015 19:24, Andrei Alexandrescu wrote:
 Probably not; we're looking at two different builds. The build to be
 deployed has no unittest code at all.
I'm starting to see this differently these days (basically since I started to use jenkins for everything): A build you haven't unit tested has implicitly failed. That means the release build that does not have any unit test bits is not deployable. Instead, compile as usual (both debug and release), and run unit tests against both (e.g. to catch compiler bugs in the optimiser). Then for deployment, drop/strip/remove/dont-package the unit test code.
This. I want to run unit tests as part of the build process, and I want my release build to have unit tests run against it. If unit tests haven't passed for a build, it's not release ready. But, I don't want my release build to be bloated with unit test code. Related, unit tests often have dependencies that I _don't_ want as part of my release build. Mocking frameworks are a good example.
Apr 02 2015
prev sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Thu, Apr 02, 2015 at 11:19:32AM -0700, Jeremy Powers via Digitalmars-d wrote:
 On Wed, Apr 1, 2015 at 7:31 AM, Johannes Totz via Digitalmars-d <
 digitalmars-d puremagic.com> wrote:
 
 On 31/03/2015 19:24, Andrei Alexandrescu wrote:
 Probably not; we're looking at two different builds. The build to
 be deployed has no unittest code at all.
I'm starting to see this differently these days (basically since I started to use jenkins for everything): A build you haven't unit tested has implicitly failed. That means the release build that does not have any unit test bits is not deployable. Instead, compile as usual (both debug and release), and run unit tests against both (e.g. to catch compiler bugs in the optimiser). Then for deployment, drop/strip/remove/dont-package the unit test code.
This. I want to run unit tests as part of the build process, and I want my release build to have unit tests run against it. If unit tests haven't passed for a build, it's not release ready. But, I don't want my release build to be bloated with unit test code. Related, unit tests often have dependencies that I _don't_ want as part of my release build. Mocking frameworks are a good example.
So what do you want the compiler to do? Emit two executables, one containing the release, the other containing the unittests? Isn't that just a matter of running dmd with/without -unittest? T -- You have to expect the unexpected. -- RL
Apr 02 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/2/15 11:44 AM, H. S. Teoh via Digitalmars-d wrote:
 On Thu, Apr 02, 2015 at 11:19:32AM -0700, Jeremy Powers via Digitalmars-d
wrote:
 On Wed, Apr 1, 2015 at 7:31 AM, Johannes Totz via Digitalmars-d <
 digitalmars-d puremagic.com> wrote:

 On 31/03/2015 19:24, Andrei Alexandrescu wrote:
 Probably not; we're looking at two different builds. The build to
 be deployed has no unittest code at all.
I'm starting to see this differently these days (basically since I started to use jenkins for everything): A build you haven't unit tested has implicitly failed. That means the release build that does not have any unit test bits is not deployable. Instead, compile as usual (both debug and release), and run unit tests against both (e.g. to catch compiler bugs in the optimiser). Then for deployment, drop/strip/remove/dont-package the unit test code.
This. I want to run unit tests as part of the build process, and I want my release build to have unit tests run against it. If unit tests haven't passed for a build, it's not release ready. But, I don't want my release build to be bloated with unit test code. Related, unit tests often have dependencies that I _don't_ want as part of my release build. Mocking frameworks are a good example.
So what do you want the compiler to do? Emit two executables, one containing the release, the other containing the unittests? Isn't that just a matter of running dmd with/without -unittest?
The way I see it, the notion of having one build with strippable unittests is a nice idea but technically challenging. It's also low impact - today concurrent CPU time is cheap, so running two concurrent unrelated builds can be made as fast as one. The simple, effective step toward improvement is to uniformize the format of assertion errors in unittests and to make it easy with tooling to create unittest and non-unittest builds that are gated by the unittests succeeding. Andrei
Apr 02 2015
parent Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Thu, Apr 2, 2015 at 12:04 PM, Andrei Alexandrescu via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 ...

 The way I see it, the notion of having one build with strippable unittests
 is a nice idea but technically challenging. It's also low impact - today
 concurrent CPU is cheap so running two concurrent unrelated builds can be
 made as fast as one.
This works for me. The important part is that the resultant artifacts of the build have had their exact code tested, doesn't really matter if it's the exact same bits or just logical equivalent. As long as it is the _exact_ same, which means test-only dependencies are not part of the being-tested code.
 The simple effective step toward improvement is to uniformize the format
 of assertion errors in unittests and to make it easy with tooling to create
 unittest and non-unittest builds that are gated by the unittests succeeding.
Nomenclature nitpick: one 'build' with concurrent compile/test steps. Artifacts of build should include a tested library/executable and (uniformly formatted of course) test report.
Apr 02 2015
prev sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Mon, 30 Mar 2015 22:30:17 +0000, Dicebot wrote:

 tl;dr: please, no

 We have put quite some effort into fighting default DMD behaviour of
 -unittest simply adding to main function and not replacing it.
 Initially many applications did run tests on startup because DMD
 suggested it is a good idea - some rather bad practical experience has
 shown this was a rather bad suggestion. Accidental tests that start
 doing I/O on production servers, considerably increased restart times
 for services - that kind of issues.

 And if you suggest to build both test and normal build as part of single
 compiler call (building test version silently in the background) this is
 also very confusing addition hardly worth its gain.

 Just tweak your editors if that is truly important. It is not like being
 able to click some fancy lines in GUI makes critical usability addition
 to testing.
ah, i see.

* building new phobos version now: several seconds.
* building new phobos version with unittests now: several minutes.

with your suggestion:

* building new phobos version: several minutes.
* building new phobos version with unittests: several minutes.

yep, it's great. i've dreamt of such long compile sessions ever since i
learned that compilers can be fast.
Mar 30 2015
parent ketmar <ketmar ketmar.no-ip.org> writes:
On Tue, 31 Mar 2015 03:29:38 +0000, ketmar wrote:

p.s. $#^&^#%! pan is buggy, the previous was meant to be the answer to
the OP post.
Mar 30 2015
prev sibling next sibling parent reply Mathias Lang via Digitalmars-d <digitalmars-d puremagic.com> writes:
I'd rather see DMD automatically pass the expression that triggered the
error (as it is done in C) to replace this useless "Unittest failure" that
forces me to look through the code.

D has the advantage that it catches most errors at CT. You can write a lot
of code and just compile it to ensure it's more or less correct. I often
write code that won't pass the unittests, but I need to check if my
template / CT logic is correct. It may take 20 compilation cycles before I
run the unittests. Running the tests as part of the build would REALLY slow
down the process - especially given that -unittest is propagated to
imported modules, which means imported libraries. You don't want to catch
unittest failures on every compilation cycle, but rather before your code
makes it to the repo - that's what CI systems are for.
Mar 30 2015
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/30/2015 4:15 PM, Mathias Lang via Digitalmars-d wrote:
 I'd rather see DMD automatically pass the expression that triggered the error
 (as it is done in C) to replace this useless "Unittest failure" that forces me
 to look through the code.
You have to look at the code anyway.
Mar 30 2015
parent =?UTF-8?B?Ik5vcmRsw7Z3Ig==?= <per.nordlow gmail.com> writes:
On Monday, 30 March 2015 at 23:51:17 UTC, Walter Bright wrote:
 I'd rather see DMD automatically pass the expression that 
 triggered the error
 (as it is done in C) to replace this useless "Unittest 
 failure" that forces me
 to look through the code.
You have to look at the code anyway.
My experience is that having the failing expression available speeds up the process of figuring out what's wrong with my failing code. That's why I'm using https://github.com/nordlow/justd/blob/master/assert_ex.d
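In the same spirit, a tiny hand-rolled helper (the name and signature below are invented for illustration, not the linked assert_ex API) that puts the failing values into the message:

```d
import core.exception : AssertError;
import std.conv : to;

/// Throws an AssertError whose message shows both operands,
/// so the failure line carries the values, not just "unittest failure".
void assertEq(T)(T lhs, T rhs,
                 string file = __FILE__, size_t line = __LINE__)
{
    if (lhs != rhs)
        throw new AssertError(lhs.to!string ~ " != " ~ rhs.to!string,
                              file, line);
}

unittest
{
    assertEq(1 + 1, 2);  // passes silently
}
```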
Apr 04 2015
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/30/15 4:15 PM, Mathias Lang via Digitalmars-d wrote:
 I'd rather see DMD automatically pass the expression that triggered the
 error (as it is done in C) to replace this useless "Unittest failure"
 that forces me to look through the code.
Often you need the context.
 D has the advantage that it catches most errors at CT. You can write a
 lot of code and just compile it to ensure it's more or less correct. I
 often write code that won't pass the unittests, but I need to check if
 my template / CT logic is correct. It may takes 20 compilations cycle
 before I run the unittests. Running the tests as part of the build would
 REALLY slow down the process -especially given that unittest is
 communicated to imported module, which means imported libraries. You
 don't want to catch unittests failures on every compilation cycle, but
 rather before your code make it to the repo - that's what CI systems are
 for -.
I disagree. Andrei
Mar 30 2015
next sibling parent reply Mathias Lang via Digitalmars-d <digitalmars-d puremagic.com> writes:
2015-03-31 2:46 GMT+02:00 Andrei Alexandrescu via Digitalmars-d <
digitalmars-d puremagic.com>:

 On 3/30/15 4:15 PM, Mathias Lang via Digitalmars-d wrote:

 I'd rather see DMD automatically pass the expression that triggered the
 error (as it is done in C) to replace this useless "Unittest failure"
 that forces me to look through the code.
Often you need the context.
Often, not always. You don't lose any information by displaying the expression.
  D has the advantage that it catches most errors at CT. You can write a
 lot of code and just compile it to ensure it's more or less correct. I
 often write code that won't pass the unittests, but I need to check if
 my template / CT logic is correct. It may takes 20 compilations cycle
 before I run the unittests. Running the tests as part of the build would
 REALLY slow down the process -especially given that unittest is
 communicated to imported module, which means imported libraries. You
 don't want to catch unittests failures on every compilation cycle, but
 rather before your code make it to the repo - that's what CI systems are
 for -.
I disagree. Andrei
As you are entitled to. But I don't see any argument here.
Mar 30 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/30/15 7:14 PM, Mathias Lang via Digitalmars-d wrote:
 As you are entitled to. But I don't see any argument here.
The thing here is you're not forced to build both the unittest version and the deployed version. It's an opt-in thing. The problem currently is we format error messages badly so we make Good Things difficult. -- Andrei
Mar 30 2015
next sibling parent reply "Andy Smith" <andyrsmith googlemail.com> writes:
A band-aid rather than a solution, but sticking this in 
.emacs/init.el will fix up emacs to do the right thing with 
asserts (only tested with DMD).

Cheers,

A.

(add-to-list 'compilation-error-regexp-alist
             '("^object\\.Exception \\(.*\\)(\\([0-9]+\\)).*"
               1 2))
Mar 31 2015
parent reply "Andy Smith" <andyrsmith googlemail.com> writes:
On Tuesday, 31 March 2015 at 08:10:19 UTC, Andy Smith wrote:
 A band-aid rather than a solution, but sticking this in 
 .emacs/init.el will fix up emacs to do the right thing with 
 asserts (only tested with DMD).

 Cheers,

 A.

 (add-to-list 'compilation-error-regexp-alist
 		'("^object\.Exception \\(.*\\)(\\([0-9]+\\)).*"
 		  1 2 ) )
Ah - didn't test for your specific example... You also need:

(add-to-list 'compilation-error-regexp-alist
             '("^core\\.exception\\.AssertError \\(.*\\)(\\([0-9]+\\)).*"
               1 2))

Not sure how many variants of these there are, but if regexps can't handle them I'm sure elisp can.

Cheers,

A.
Mar 31 2015
parent Jacob Carlborg <doob me.com> writes:
On 2015-03-31 10:16, Andy Smith wrote:

 Ah - didn't test for your specific example...

 Need..

 (add-to-list 'compilation-error-regexp-alist
          '("^core\.exception.AssertError \\(.*\\)(\\([0-9]+\\)).*"
            1 2 ) )

 as well.... not sure how many variants of these there are but if regexps
 can't handle it am sure elisp can....
This is what I use in TextMate "^(.*?) (.*?)\((\d+)\):(.*)?". -- /Jacob Carlborg
Mar 31 2015
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-03-31 08:10, Andrei Alexandrescu wrote:

 The thing here is you're not forced to build both the unittest version
 and the deployed version. It's an opt-in thing. The problem currently is
 we format error messages badly so we make Good Things difficult. -- Andrei
It's not difficult. I've modified the D bundle for TextMate to recognize exceptions, so this includes failed unit tests. It's not any more difficult than recognizing a compile error.
Mar 31 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/31/15 4:22 AM, Jacob Carlborg wrote:
 On 2015-03-31 08:10, Andrei Alexandrescu wrote:

 The thing here is you're not forced to build both the unittest version
 and the deployed version. It's an opt-in thing. The problem currently is
 we format error messages badly so we make Good Things difficult. --
 Andrei
It's not difficult. I've modified the D bundle for TextMate to recognize exceptions, so this includes failed unit tests. It's not any more difficult than recognize a compile error.
Problem is doing that for all editors does not scale. -- Andrei
Mar 31 2015
parent reply "Jacob Carlborg" <doob me.com> writes:
On Tuesday, 31 March 2015 at 15:00:28 UTC, Andrei Alexandrescu 
wrote:

 Problem is doing that for all editors does not scale. -- Andrei
It's not like the error messages used by DMD are in a standardized format. So hopefully the editors already recognize this format. BTW, what about exceptions, do you think we should change the format for those as well? -- /Jacob Carlborg
Mar 31 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/31/15 9:27 AM, Jacob Carlborg wrote:
 On Tuesday, 31 March 2015 at 15:00:28 UTC, Andrei Alexandrescu wrote:

 Problem is doing that for all editors does not scale. -- Andrei
It's not like the error messages used by DMD are in a standardized format. So hopefully the editors already recognize this format.
The idea is to make a SMALL change on our side for a LARGE INSTANT benefit for everyone. Sigh.
 BTW,
 what about exceptions, do you think we should change the format for
 those as well?
I don't see a reason. Andrei
Mar 31 2015
parent reply Jacob Carlborg <doob me.com> writes:
On 2015-03-31 20:26, Andrei Alexandrescu wrote:

 The idea is to make a SMALL change on our side for a LARGE INSTANT
 benefit for everyone. Sigh.
But do the editors handle the current format of the compile errors?
 I don't see a reason.
Why wouldn't you want exceptions to be clickable as well? In TextMate I handle compile errors, warnings, deprecation messages and exceptions. The exceptions will include the unit test failures as well. -- /Jacob Carlborg
Mar 31 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/31/15 1:07 PM, Jacob Carlborg wrote:
 On 2015-03-31 20:26, Andrei Alexandrescu wrote:

 The idea is to make a SMALL change on our side for a LARGE INSTANT
 benefit for everyone. Sigh.
 But do the editors handle the current format of the compile errors?
At least all editors I use that claim some level of D support.
 I don't see a reason.
Why wouldn't you want exceptions to be clickable as well? In TextMate I handle compile errors, warnings, deprecation message and exceptions. The exceptions will include the unit test failures as well.
Well nice then. Andrei
Mar 31 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-03-31 02:46, Andrei Alexandrescu wrote:

 Often you need the context.
That should be printed as well [1] [1] http://thejqr.com/2009/02/06/textmate-rspec-and-dot-spec-party.html -- /Jacob Carlborg
Mar 31 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-03-31 01:15, Mathias Lang via Digitalmars-d wrote:
 I'd rather see DMD automatically pass the expression that triggered the
 error (as it is done in C) to replace this useless "Unittest failure"
 that forces me to look through the code.

 D has the advantage that it catches most errors at CT. You can write a
 lot of code and just compile it to ensure it's more or less correct. I
 often write code that won't pass the unittests, but I need to check if
 my template / CT logic is correct. It may take 20 compilation cycles
 before I run the unittests. Running the tests as part of the build would
 REALLY slow down the process, especially given that unittest is
 communicated to imported modules, which means imported libraries. You
 don't want to catch unittest failures on every compilation cycle, but
 rather before your code makes it to the repo; that's what CI systems are
 for.
With a custom unit test runner it's possible to run the unit tests as CTFE.

-- 
/Jacob Carlborg
Mar 31 2015
prev sibling next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
That would be great if we could JIT the unittests at build time...
Mar 30 2015
parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 31/03/2015 1:08 p.m., deadalnix wrote:
 That would be great if we could JIT the unittests at build time...
While we are at it, how about improving CTFE to be fully JIT'd and support calling external code? That way it is only a small leap to add unittests to be CTFE'd.
Mar 30 2015
next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 31 March 2015 at 00:49:41 UTC, Rikki Cattermole wrote:
 On 31/03/2015 1:08 p.m., deadalnix wrote:
 That would be great if we could JIT the unittests at build 
 time...
While we are at it, how about improving CTFE to be fully JIT'd and support calling external code? That way it is only a small leap to add unittests to be CTFE'd.
I wouldn't go as far as to add external calls into CTFE, but yeah, that's pretty much the way I see things going.
Mar 30 2015
parent Rikki Cattermole <alphaglosined gmail.com> writes:
On 31/03/2015 2:07 p.m., deadalnix wrote:
 On Tuesday, 31 March 2015 at 00:49:41 UTC, Rikki Cattermole wrote:
 On 31/03/2015 1:08 p.m., deadalnix wrote:
 That would be great if we could JIT the unittests at build time...
While we are at it, how about improving CTFE to be fully JIT'd and support calling external code? That way it is only a small leap to add unittests to be CTFE'd.
I wouldn't go that far as to add external calls into CTFE, but yeah, that's pretty much the way I see things going.
Yeah, there are a lot of pros and cons to adding it. But the thing is, we are not pushing CTFE as far as it can go. We really aren't. And in some ways I like it. It makes it actually easier to work with in a lot of ways.
Mar 30 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-03-31 02:49, Rikki Cattermole wrote:

 While we are at it, how about improving CTFE to be fully JIT'd and
 support calling external code? That way it is only a small leap to add
 unittests to be CTFE'd.
It's already possible to run unit tests during compile time as CTFE [1].

[1] http://forum.dlang.org/thread/ks1brj$1l6c$1 digitalmars.com

-- 
/Jacob Carlborg
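A minimal sketch of the idea, using `static assert` to force a check through CTFE (the function and values are invented for illustration):

```d
// A pure function whose result can be computed at compile time.
int sumFirstN(int n)
{
    int total = 0;
    foreach (i; 1 .. n + 1)
        total += i;
    return total;
}

// static assert evaluates the call via CTFE: a failure becomes a
// compile error with a file/line, i.e. the test "runs" during the build.
static assert(sumFirstN(4) == 10);

unittest
{
    // The same check still works in the ordinary runtime unittest pass.
    assert(sumFirstN(4) == 10);
}
```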
Mar 31 2015
prev sibling next sibling parent "Idan Arye" <GenericNPC gmail.com> writes:
On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu 
wrote:
 This is a tooling issue.

 ...

 In brief: I'd like to transition to a model in which 
 unittesting is organically part of the build. After all, you 
 wouldn't want to deploy an application that's failing its 
 unittests.

 Detail: Consider running a build vs. a unittest in one of the 
 supported editors/IDEs (emacs, vim, Code:Blocks, Visual D, 
 Xamarin...). During a build, an error will come in a standard 
 format, e.g.

 std/array.d(39): Error: undefined identifier xyz

 This format is recognized by the editor and allows the user to 
 click on it and go straight to the offending line etc.

 In contrast, a failing unittest has a completely different 
 format. In fact, it's a format that's worse than useless 
 because it confuses the editor:

 core.exception.AssertError std/array.d(39): xyz

 ...

 Where I want us to be is a place where unittests are considered 
 a part of the build; it should be trivial to set things up such 
 that unittesting is virtually indistinguishable from 
 compilation and linking.

 ...

 Andrei
There is no point in running unittests before `main` in the same executable, but that doesn't mean the build is the right place for running unittests. The IDE/build-system should be the one that handles running the tests (both unit and integration). The compiler should give the IDE/build-system enough tools to do it properly, not do it for them. Running UTs as part of the build blocks some options, like building a UT executable and running it in another environment where building is not possible.

Ideally, I would like to see the compiler, when ordered to do a unittest build, create an executable that only runs unittests and ignores `main`. As a matter of fact, what it does is run a special "ut-main" entry point function declared in Phobos (or in the runtime, whichever makes more sense) that runs all the unittests, and can possibly catch `AssertError`s and display them in proper format.

Since we no longer run `main`, the command line arguments can go to the ut-main function. This doesn't change anything with the built-in ut-main since it just ignores them, but if that special entry point is overridable via a compiler flag, an IDE/build-system can supply its own, more complex ut-main that makes use of the command line arguments. That special ut-main can communicate with the IDE/build-system to provide a better UX for unittest running; for example, an IDE could do a graphical display of the unittest run, with its custom ut-main responsible for providing info about the run's progress.
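For what it's worth, druntime already exposes a hook that lets a library-level "ut-main" take over: here is a sketch of a custom runner installed through `Runtime.moduleUnitTester` (the error format printed is invented to mimic the compiler's own):

```d
import core.runtime : Runtime;
import std.stdio : writefln;

shared static this()
{
    // Replace the default unittest runner before it fires.
    Runtime.moduleUnitTester = function bool()
    {
        size_t failed;
        foreach (m; ModuleInfo)  // iterate all loaded modules
        {
            if (m is null)
                continue;
            if (auto fp = m.unitTest)  // the module's aggregate test function
            {
                try
                    fp();
                catch (Throwable t)
                {
                    ++failed;
                    // Emit in the compiler's file(line): Error: format
                    writefln("%s(%s): Error: unittest failed: %s",
                             t.file, t.line, t.msg);
                }
            }
        }
        return failed == 0;  // false aborts the program before main()
    };
}
```

Returning true here means main() still runs afterwards; a tool-oriented runner could instead report and exit without ever touching main.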
Mar 31 2015
prev sibling next sibling parent reply "Kagamin" <spam here.lot> writes:
On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu 
wrote:
 Where I want us to be is a place where unittests are considered 
 a part of the build; it should be trivial to set things up such 
 that unittesting is virtually indistinguishable from 
 compilation and linking.
Something of the form: `rdmd -test code.d`? Though how that will work is a different question.
Mar 31 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 3/31/15 2:48 AM, Kagamin wrote:
 On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu wrote:
 Where I want us to be is a place where unittests are considered a part
 of the build; it should be trivial to set things up such that
 unittesting is virtually indistinguishable from compilation and linking.
Something of the form: `rdmd -test code.d`? Though how that will work is a different question.
Yes and agreed. -- Andrei
Mar 31 2015
prev sibling next sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
I actually thought about the whole "it should fail to build if 
any of the unit tests fail" idea 2 or 3 weeks ago, so this sounds 
good.

WRT to the error messages and their recognition by text editors, 
a _massive_ improvement would be compiler-assisted formatting of 
the assertion errors. This:

core.exception.AssertError foo.d(2): Assertion failure

Is not useful when I wrote `assert(foo == 2)`. This, however, is:

tests.encode.testEncodeMoreThan8Bits:
     tests/encode.d:166 - Expected: [158, 234, 3]
     tests/encode.d:166 -      Got: [158, 234]


In Python, my favourite testing framework is py.test. It reflects 
on the test code itself and replaces `assert foo == 2` with its 
own code so that it looks like this in the output:

     def test_foo():
         foo = 5
       assert foo == 2
E       assert 5 == 2

It also recognises things like `assert x in xs`, which is obviously 
handy. Since Walter has mentioned the "specialness" of assert before, 
maybe the compiler could recognise at least the most common kinds and 
format accordingly (assert ==, assert in, assert is null, assert !is 
null)?

The main reasons I wrote a unit testing library to begin with were:

1. Better error messages when tests fail
2. Named unit tests and running them by name
3. Running unit tests in multiple threads

I'm addressing 1. above, 2. has its own thread currently, and AFAIK 3. 
was only done by me in unit-threaded. There are other niceties that I 
probably won't give up, but those were the big 3.

Atila

On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu wrote:
 This is a tooling issue.

 D builds and messaging are rigged to consider unittests as a 
 special build followed by a special run of an application.

 In brief: I'd like to transition to a model in which 
 unittesting is organically part of the build. After all, you 
 wouldn't want to deploy an application that's failing its 
 unittests.

 Detail: Consider running a build vs. a unittest in one of the 
 supported editors/IDEs (emacs, vim, Code:Blocks, Visual D, 
 Xamarin...). During a build, an error will come in a standard 
 format, e.g.

 std/array.d(39): Error: undefined identifier xyz

 This format is recognized by the editor and allows the user to 
 click on it and go straight to the offending line etc.

 In contrast, a failing unittest has a completely different 
 format. In fact, it's a format that's worse than useless 
 because it confuses the editor:

 core.exception.AssertError std/array.d(39): xyz

 emacs will recognize the text as a filename and upon clicking 
 would ask the user to open the nonsense file 
 "core.exception.AssertError std/array.d". This error line is 
 followed by a stacktrace, which is in no recognizable format. 
 It should be in the format of e.g. gcc remarks providing 
 additional information for an error, and again provide 
 file/line information so the user can click and see the call 
 stack in the source.

 Where I want us to be is a place where unittests are considered 
 a part of the build; it should be trivial to set things up such 
 that unittesting is virtually indistinguishable from 
 compilation and linking.

 This all is relatively easy to implement but might have a large 
 positive impact.

 Please chime in before I make this into an issue. Anyone would 
 like to take this?


 Andrei
Mar 31 2015
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-03-31 23:12, Atila Neves wrote:
 I actually thought about the whole "it should fail to build if any of
 the unit tests fail" idea 2 or 3 weeks ago, so this sounds good.

 WRT to the error messages and their recognition by text editors, a
 _massive_ improvement would be compiler-assisted formatting of the
 assertion errors. This:

 core.exception.AssertError foo.d(2): Assertion failure

 Is not useful when I wrote `assert(foo == 2)`. This, however, is:

 tests.encode.testEncodeMoreThan8Bits:
      tests/encode.d:166 - Expected: [158, 234, 3]
      tests/encode.d:166 -      Got: [158, 234]


 In Python, my favourite testing framework is py.test. It reflects on the
 test code itself and replaces `assert foo == 2` with its own code so
 that it looks like this in the output:

      def test_foo():
          foo = 5
       assert foo == 2
 E       assert 5 == 2

 It also recognises things like `assert x in xs`, which is obviously
 handy. Since Walter has mentioned the "specialness" of assert before,
 maybe the compiler could recognise at least the most common kinds and
 format accordingly (assert ==, assert in, assert is null, assert !is
 null)?
I kind of agree, RSpec has similar formatting of failed tests. But I'm 
leaning toward this being handled by a library. RSpec has a lot of 
matchers (assertions) and supports custom matchers as well. For 
associative arrays, for example, RSpec will print a diff of the two 
objects. The following test:

describe 'Foo' do
  it 'bar' do
    { foo: 3, bar: 4, baz: 5 }.should == { foo: 3, bar: 4, baz: 6 }
  end
end

Will print the following failures:

Failures:

  1) Foo bar
     Failure/Error: { foo: 3, bar: 4, baz: 5 }.should == { foo: 3, bar: 4, baz: 6 }

       expected: {:foo=>3, :bar=>4, :baz=>6}
            got: {:foo=>3, :bar=>4, :baz=>5} (using ==)

       Diff:
        -1,4 +1,4
        :bar => 4,
       -:baz => 6,
       +:baz => 5,
        :foo => 3,
     # ./spec/text_mate/helpers/options_helper_spec.rb:6:in `block (2 levels) in <top (required)>'

It also prints the comparison operator used.

-- 
/Jacob Carlborg
Mar 31 2015
prev sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 03/31/2015 05:12 PM, Atila Neves wrote:
 I actually thought about the whole "it should fail to build if any of
 the unit tests fail" idea 2 or 3 weeks ago, so this sounds good.

 WRT to the error messages and their recognition by text editors, a
 _massive_ improvement would be compiler-assisted formatting of the
 assertion errors. This:

 core.exception.AssertError foo.d(2): Assertion failure

 Is not useful when I wrote `assert(foo == 2)`. This, however, is:

 tests.encode.testEncodeMoreThan8Bits:
      tests/encode.d:166 - Expected: [158, 234, 3]
      tests/encode.d:166 -      Got: [158, 234]
Yea, at one point, a whole system of nifty asserts that did just that was created and submitted to Phobos. It was quickly rejected because people said regular assert could, and would, easily be made to do the same thing. That was several years ago and absolutely nothing has happened. We *could've* at least had it in the std library all these years. But the preference was for vaporware. And now we're back to square one with "Whaddya need sometin' like that for anyway?" >_<
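As a sketch of what such a library assert could look like today (the helper name and message layout are invented; the point is the Expected/Got output in the compiler's file(line) format):

```d
import std.format : format;

// Hypothetical helper: on mismatch, throws with Expected/Got lines that
// editors can parse the same way they parse compiler errors.
void assertEqual(T, U)(T actual, U expected,
                       string file = __FILE__, size_t line = __LINE__)
{
    if (actual != expected)
        throw new Exception(format(
            "%s(%s): Error: unittest failed\n" ~
            "%s(%s): Expected: %s\n" ~
            "%s(%s):      Got: %s",
            file, line, file, line, expected, file, line, actual));
}

unittest
{
    assertEqual(1 + 2, 3);  // passes silently
}
```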
Apr 01 2015
prev sibling next sibling parent reply "Ary Borenszweig" <asterite gmail.com> writes:
On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu
wrote:
 This is a tooling issue.
I think D's built-in "unittest" blocks are a mistake. Yes, they are 
simple, and for simple functions and algorithms they work pretty well. 
However, when you have a big complex project you start having other needs:

1. Named unit-tests, so you can better find what failed
2. Better error messages for assertions
3. Better output to rerun failed tests
4. Setup and teardown hooks
5. Different outputs depending on use case

All of this can be done with a library solution. D should have a very 
good library solution in phobos and it should be encouraged to use that. 
DMD could even know about this library and have special commands to 
trigger the tests.

The problem is that you can start with "unittest" blocks, but then you 
realize you need more, so what do you do? You combine both? You can't! 
I'd say, deprecate "unittest" and write a good test library. You can 
still provide it for backwards compatibility.

By the way, this is the way we do it in Crystal. The source code for the 
spec library is here, if you need some inspiration: 
https://github.com/manastech/crystal/tree/master/src/spec . It's just 
687 lines long.
Apr 01 2015
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-04-01 20:04, Ary Borenszweig wrote:

 By the way, this is the way we do it in Crystal. The source code
 for the spec library is here, if you need some inspiration:
 https://github.com/manastech/crystal/tree/master/src/spec . It's
 just 687 lines long.
Ahhh, looks like my old buddy RSpec :). Does it do all the fancy things with classes, instance and inheritance, that is, each describe block is a class an each it block is an instance method? -- /Jacob Carlborg
Apr 01 2015
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 4/1/15 3:57 PM, Jacob Carlborg wrote:
 On 2015-04-01 20:04, Ary Borenszweig wrote:

 By the way, this is the way we do it in Crystal. The source code
 for the spec library is here, if you need some inspiration:
 https://github.com/manastech/crystal/tree/master/src/spec . It's
 just 687 lines long.
Ahhh, looks like my old buddy RSpec :). Does it do all the fancy things with classes, instance and inheritance, that is, each describe block is a class an each it block is an instance method?
No, it's actually much simpler but less powerful. This is because the 
language is not as dynamic as Ruby. But we'd like to keep things as 
simple as possible.

But right now you get these things:

1. You can generate many tests in a simple way:

~~~
[1, 2, 3].each do |num|
  it "works for #{num}" do
    ...
  end
end
~~~

2. You get a summary of all the failures and the lines of the specs that 
failed. Also, you get errors similar to RSpec for matchers. And you get 
printed a command line for each failing spec so you can rerun it 
separately. These are the most useful RSpec features for me.

3. You can get dots for each spec or the name of the specs (-format 
option).

4. You can run a spec given its line number or a regular expression for 
its name.

Eventually it will have more features, as the language evolves, but for 
now this has proven to be very useful :-)

Another good thing about it being just a library is that others send 
pull requests and patches, and this is easier to understand than some 
internal logic built into the compiler (compiler code is always harder).
Apr 01 2015
parent reply Jacob Carlborg <doob me.com> writes:
On 2015-04-01 21:28, Ary Borenszweig wrote:

 No, it's actually much simpler but less powerful. This is because the
 language is not as dynamic as Ruby. But we'd like to keep things as
 simple as possible.
Can't you implement that using macros?
 But right now you get these things:

 1. You can generate many tests in a simple way:

 ~~~
 [1, 2, 3].each do |num|
    it "works for #{num}" do
      ...
    end
 end
 ~~~

 2. You get a summary of all the failures and the lines of the specs that
 failed. Also, you get errors similar to RSpec for matchers. And you get
 printed a command line for each failing spec so you can rerun it
 separately. These are the most useful RSpec features for me.

 3. You can get dots for each spec or the name of the specs (-format
 option).

 4. You can run a spec given its line number or a regular expression for
 its name.

 Eventually it will have more features, as the language evolves, but for
 now this has proven to be very useful :-)

 Another good thing about it being just a library is that others send
 pull requests and patches, and this is easier to understand than some
 internal logic built into the compiler (compiler code is always harder).
This sounds all great. But lowering groups and examples to classes and methods takes it to the next level. -- /Jacob Carlborg
Apr 01 2015
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 4/2/15 3:32 AM, Jacob Carlborg wrote:
 On 2015-04-01 21:28, Ary Borenszweig wrote:

 No, it's actually much simpler but less powerful. This is because the
 language is not as dynamic as Ruby. But we'd like to keep things as
 simple as possible.
Can't you implement that using macros?
We can. But then it becomes harder to understand what's going on. In 
RSpec I don't quite understand what's going on really, and I like a bit 
of magic but not too much of it.

In fact, with macros it's not that simple, because you need to remember 
the context where you are defining stuff, so that might require adding 
those capabilities to macros, which would complicate the language.
 But right now you get these things:
This sounds all great. But lowering groups and examples to classes and methods takes it to the next level.
Somebody also started writing a minitest clone: https://github.com/ysbaddaden/minitest.cr . Implementing a DSL on top of that using regular code or macros should be possible. But right now the features we have are enough.
Apr 02 2015
parent Jacob Carlborg <doob me.com> writes:
On 2015-04-02 21:11, Ary Borenszweig wrote:

 We can. But then it becomes harder to understand what's going on. In
 RSpec I don't quite understand what's going on really, and I like a bit
 of magic but not too much of it.
It's quite straightforward to implement, in Ruby at least. Something 
like this:

module DSL
  def describe(name, &block)
    context = Class.new(self)
    context.send(:extend, DSL)
    context.instance_eval(&block)
  end

  def it(name, &block)
    send(:define_method, name, &block)
  end
end

class Foo
  extend DSL

  describe 'foo' do
    it 'bar' do
      p 'asd'
    end
  end
end

You need to register the tests somehow as well, to be able to run them, 
but this is the basic idea. I cheated here and used a class to start 
with to simplify the example.
 In fact with macros it's not that simple because you need to remember
 the context where you are defining stuff, so that might need adding that
 capabilities to macros, which will complicate the language.
Yeah, I don't really know how macros work in Crystal. -- /Jacob Carlborg
Apr 03 2015
prev sibling next sibling parent reply "Kapps" <opantm2+spam gmail.com> writes:
On Wednesday, 1 April 2015 at 18:04:31 UTC, Ary Borenszweig wrote:
 On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu
 wrote:
 This is a tooling issue.
I think D's built-in "unittest" blocks are a mistake. Yes, they are simple and for simple functions and algorithms they work pretty well. However, when you have a big complex project you start having other needs: 1. Named unit-tests, so you can better find what failed 2. Better error messages for assertions 3. Better output to rerun failed tests 4. Setup and teardown hooks 5. Different outputs depending on use case
Everything you propose can be done with a custom unittest runner, using 
the built-in unittest blocks. Compile-time reflection + UDAs + unittests 
is a surprisingly powerful combination, and I don't understand the 
proposals to make unittest names and such part of the ModuleInfo or 
provide special compiler support for them. Such an approach is not as 
scalable, and with compile-time reflection you can do anything you need 
with the current built-in unittest blocks.

The only issue I have with the way unittests are done right now is the 
incredibly annoying requirement of having a main function, and that main 
gets called. It makes generic tooling and CI systems much more annoying, 
as you have to try and guess whether you need to create a fake main() 
(or pass in -main), and worry about whether the code is going to keep 
running after tests complete.
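A rough sketch of the reflection approach described above, with a hypothetical `name` UDA for labelling tests (the UDA and runner are invented; `__traits(getUnitTests, ...)` only yields tests when compiling with -unittest):

```d
import std.stdio : writefln;

// Invented UDA; nothing like it ships in Phobos.
struct name { string value; }

@name("arithmetic works")
unittest
{
    assert(1 + 1 == 2);
}

// Run every unittest block of the given module, preferring the UDA
// label over the compiler-generated identifier. Returns the count run.
size_t runTests(alias mod)()
{
    size_t count;
    foreach (test; __traits(getUnitTests, mod))
    {
        string label = __traits(identifier, test);
        foreach (uda; __traits(getAttributes, test))
            static if (is(typeof(uda) == name))
                label = uda.value;
        writefln("running: %s", label);
        test();
        ++count;
    }
    return count;
}
```

A custom main (or a version(unittest) entry point) would then call `runTests!(some.mod)()` for each module of interest.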
Apr 01 2015
next sibling parent ketmar <ketmar ketmar.no-ip.org> writes:
On Wed, 01 Apr 2015 19:18:52 +0000, Kapps wrote:

 Everything you propose can be done with a custom unittest runner, using
 the builtin unittest blocks. Compile-time reflection + UDAs + unittests
 is a surprisingly powerful combination, and I don't understand the
 proposals to make unittest name and such part of the ModuleInfo or
 provide special compiler support for them. Such an approach is not as
 scalable, and with compile-time reflection you can do anything you need
 with the current built-in unittest blocks.
only if reflection is fully working, which is not the case (see, for 
example, old bug with member enumeration for packages).
Apr 01 2015
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-04-01 21:18, Kapps wrote:

 The only issue I have with the way unittests are done right now, is the
 incredibly annoying requirement of having a main function and that main
 gets called. It makes generic tooling and CI systems much more annoying,
 as you have to try and guess whether you need to create a fake main()
 (or pass in -main), and worry about if the code is going to keep running
 after tests complete.
I just don't compile the module containing the main function. Although, I place my tests in a completely separate directory. -- /Jacob Carlborg
Apr 01 2015
parent "Kapps" <opantm2+spam gmail.com> writes:
On Thursday, 2 April 2015 at 06:33:50 UTC, Jacob Carlborg wrote:
 On 2015-04-01 21:18, Kapps wrote:

 The only issue I have with the way unittests are done right 
 now, is the
 incredibly annoying requirement of having a main function and 
 that main
 gets called. It makes generic tooling and CI systems much more 
 annoying,
 as you have to try and guess whether you need to create a fake 
 main()
 (or pass in -main), and worry about if the code is going to 
 keep running
 after tests complete.
I just don't compile the module congaing the main function. Although I place the my tests in a completely separate directory.
Which is okay for running your own code, but the problem is it's not really a good solution for a CI system. I made a plugin for Bamboo to automatically test my dub projects, and essentially just rely on the fact that 'dub test' works. Without that, I would have no way of knowing whether or not there's a main function, whether or not that main function actually does something, etc. Being able to prevent main from being required and running is, for me, the biggest issue with the current unittest system.
Apr 02 2015
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 1 April 2015 at 18:04:31 UTC, Ary Borenszweig wrote:
 On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu
 wrote:
 This is a tooling issue.
I think D's built-in "unittest" blocks are a mistake. Yes, they are simple and for simple functions and algorithms they work pretty well. However, when you have a big complex project you start having other needs: 1. Named unit-tests, so you can better find what failed 2. Better error messages for assertions 3. Better output to rerun failed tests 4. Setup and teardown hooks 5. Different outputs depending on use case All of this can be done with a library solution. D should have a very good library solution in phobos and it should be encouraged to use that. DMD could even know about this library and have special commands to trigger the tests. The problem is that you can start with "unittest" blocks, but then you realize you need more, so what do you do? You combine both? You can't! I'd say, deprecate "unittest" and write a good test library. You can still provide it for backwards compatibility. By the way, this is the way we do it in Crystal. The source code for the spec library is here, if you need some inspiration: https://github.com/manastech/crystal/tree/master/src/spec . It's just 687 lines long.
I 100% disagree. Having built-in unittest blocks has been a huge win for 
the language and has greatly improved the quality of the library 
ecosystem. The value of standardization and availability is tremendous 
here.

The only problem is that development of the feature has stopped halfway 
and there are still small bits missing here and there. All your 
requested features can be implemented within the existing unittest 
feature via a custom runner, while still running tests properly with the 
default one!
Apr 01 2015
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
P.S. I hate all the Ruby testing facilities, hate with bloody 
passion.
Apr 01 2015
next sibling parent reply "Atila Neves" <atila.neves gmail.com> writes:
On Wednesday, 1 April 2015 at 19:31:37 UTC, Dicebot wrote:
 P.S. I hate all the Ruby testing facilities, hate with bloody 
 passion.
You're going to _love_ my DConf talk ;) I was expecting that already, you let me know what you thought of them last year! Atila
Apr 01 2015
parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 1 April 2015 at 20:48:43 UTC, Atila Neves wrote:
 On Wednesday, 1 April 2015 at 19:31:37 UTC, Dicebot wrote:
 P.S. I hate all the Ruby testing facilities, hate with bloody 
 passion.
You're going to _love_ my DConf talk ;) I was expecting that already, you let me know what you thought of them last year! Atila
Oh yeah, looking forward to listening it :) I had an unpleasant experience of encountering Cucumber when trying to contribute to dstep so this specific name is like a trigger to me :)
Apr 02 2015
next sibling parent reply David Gileadi <gileadis NSPMgmail.com> writes:
On 4/2/15 1:34 PM, Dicebot wrote:
 On Wednesday, 1 April 2015 at 20:48:43 UTC, Atila Neves wrote:
 On Wednesday, 1 April 2015 at 19:31:37 UTC, Dicebot wrote:
 P.S. I hate all the Ruby testing facilities, hate with bloody passion.
You're going to _love_ my DConf talk ;) I was expecting that already, you let me know what you thought of them last year! Atila
Oh yeah, looking forward to listening it :) I had an unpleasant experience of encountering Cucumber when trying to contribute to dstep so this specific name is like a trigger to me :)
Having never used Cucumber but having been interested in it, what was the unpleasantness?
Apr 02 2015
next sibling parent "Dicebot" <public dicebot.lv> writes:
On Thursday, 2 April 2015 at 20:55:04 UTC, David Gileadi wrote:
 On 4/2/15 1:34 PM, Dicebot wrote:
 On Wednesday, 1 April 2015 at 20:48:43 UTC, Atila Neves wrote:
 On Wednesday, 1 April 2015 at 19:31:37 UTC, Dicebot wrote:
 P.S. I hate all the Ruby testing facilities, hate with 
 bloody passion.
You're going to _love_ my DConf talk ;) I was expecting that already, you let me know what you thought of them last year! Atila
Oh yeah, looking forward to listening it :) I had an unpleasant experience of encountering Cucumber when trying to contribute to dstep so this specific name is like a trigger to me :)
Having never used Cucumber but having been interested in it, what was the unpleasantness?
The very fact of being forced to install some external application 
(which is not even available in my distro repositories) to run a set of 
basic tests that could be done with a 10 line D or shell script instead.

It is hardly surprising that so far I have preferred to submit pull 
requests without testing instead.
Apr 02 2015
prev sibling parent reply "Wyatt" <wyatt.epp gmail.com> writes:
On Thursday, 2 April 2015 at 20:55:04 UTC, David Gileadi wrote:
 Having never used Cucumber but having been interested in it, 
 what was the unpleasantness?
Dealing with it at work, I find it puts us scarily at the mercy of regexen in Ruby, which is unsettling to say the least. More pressingly, the "plain English" method of writing tests hinders my ability to figure out what the test is actually trying to do. There's not enough structure to give you good visual anchors that are easy to follow, so I end up having to build a mental model of an entire feature file every time I look at it. It's hugely inconvenient. And if I can't remember what a phrase corresponds to, I have to hunt down the implementation and read that anyway, so it's not saving any time or making life any easier. -Wyatt
Apr 02 2015
next sibling parent David Gileadi <gileadis NSPMgmail.com> writes:
On 4/2/15 2:46 PM, Wyatt wrote:
 On Thursday, 2 April 2015 at 20:55:04 UTC, David Gileadi wrote:
 Having never used Cucumber but having been interested in it, what was
 the unpleasantness?
Dealing with it at work, I find it puts us scarily at the mercy of regexen in Ruby, which is unsettling to say the least. More pressingly, the "plain English" method of writing tests hinders my ability to figure out what the test is actually trying to do. There's not enough structure to give you good visual anchors that are easy to follow, so I end up having to build a mental model of an entire feature file every time I look at it. It's hugely inconvenient. And if I can't remember what a phrase corresponds to, I have to hunt down the implementation and read that anyway, so it's not saving any time or making life any easier. -Wyatt
On 4/2/15 2:32 PM, Dicebot wrote:
 The very fact of being forced to install some external application)
 which is not even available in my distro repositories) to run set of
 basic tests that could be done with 10 line D or shell script instead.

 It is hardly surprising that so far I preferred to submit pull requests
 without testing instead.
Thanks to you both for the answers!
Apr 02 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-04-02 23:46, Wyatt wrote:

 Dealing with it at work, I find it puts us scarily at the mercy of
 regexen in Ruby, which is unsettling to say the least.  More pressingly,
 the "plain English" method of writing tests hinders my ability to figure
 out what the test is actually trying to do. There's not enough structure
 to give you good visual anchors that are easy to follow, so I end up
 having to build a mental model of an entire feature file every time I
 look at it.  It's hugely inconvenient.  And if I can't remember what a
 phrase corresponds to, I have to hunt down the implementation and read
 that anyway, so it's not saving any time or making life any easier.
At work we're using Turnip [1], which basically is Gherkin (Cucumber) 
files running on top of RSpec; best of both worlds, Dicebot ;). It has 
two big advantages compared to regular Cucumber:

* It doesn't use regular expressions for the steps, just plain strings
* The steps are implemented in modules which are later included where 
needed. They're not floating around in global space like in Cucumber

We also made some modifications so we have one file with one module 
matching one scenario, which is automatically included based on the 
scenario name. This made it possible to have steps that don't interfere 
with each other. We can have two steps which are identical in two 
different scenarios, with two different implementations that don't 
conflict. This also made it possible to take full advantage of RSpec, by 
creating instance variables that keep the data across steps.

We're also currently experimenting with a gem (I can't recall its name 
right now) which allows writing the Cucumber steps inline in the RSpec 
tests, looking like this:

describe "foobar" do
  Steps "this is a scenario" do
    Given "some kind of setup" do
    end

    When "when something cool happens" do
    end

    Then "something even cooler will happen" do
    end
  end
end

[1] https://github.com/jnicklas/turnip

-- 
/Jacob Carlborg
Apr 03 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-04-02 22:34, Dicebot wrote:

 Oh yeah, looking forward to listening it :) I had an unpleasant
 experience of encountering Cucumber when trying to contribute to dstep
 so this specific name is like a trigger to me :)
Yeah, I don't like how it ended up in DStep. I'm using it completely 
wrong; I'm just too lazy to fix it. Currently it basically compares two 
strings, the input and the output. I had some big plans for it, but that 
never happened. Currently the only advantage is the runner, i.e. it shows 
nice output, it's possible to run only a single test, and it stops all 
tests at the first failure. All the things Andrei wants to fix in D, it 
seems :)

In the case of DStep you can mostly ignore the Cucumber part and just 
focus on the input and expected files. You can even run the tests without 
it: just run DStep on the input file and compare the output with the 
expected file.

--
/Jacob Carlborg
Apr 03 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-04-01 21:31, Dicebot wrote:
 P.S. I hate all the Ruby testing facilities, hate with bloody passion.
The unit test framework in the Ruby standard library:

require "test/unit"

class FooTest < Test::Unit::TestCase
  def test_foo_bar
    assert 3 == 3
  end
end

Looks like most other testing frameworks out there to me.

--
/Jacob Carlborg
Apr 01 2015
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2015-04-01 21:25, Dicebot wrote:

 I 100% disagree. Having built-in unittest blocks have been a huge win
 for the language and greatly improved quality of library ecosystem.
 Value of standardization and availability is tremendous here.

 Only problem is that development of the feature has stopped half way and
 there are still small bits missing here and there. All your requested
 features can be implemented within existing unittest feature via custom
 runner - while still running tests properly with default one!
The unittest block itself could have been easily implemented as a 
library solution, if we had the following features:

* Trailing delegate syntax
* Executable code at module scope

module bar;

unittest("foo bar")
{
}

Would be lowered to:

unittest("foo bar", {
});

These two features are useful for other things, like benchmarking:

benchmark
{
}

--
/Jacob Carlborg
Apr 01 2015
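For the record, the library side of the lowering Jacob proposes might look
roughly like this in D; all names here are illustrative (`unittest_` stands
in for the keyword, `registeredTests` is hypothetical), a sketch rather
than an existing API:

```d
// Sketch of a library-based named unittest, assuming the lowering above.
module testlib;

struct Test
{
    string name;
    void delegate() run;
}

Test[] registeredTests;  // filled in at module-construction time

// `unittest("foo bar") { ... }` would lower to a call like this one,
// registering the delegate instead of running it eagerly:
void unittest_(string name, void delegate() testBody)
{
    registeredTests ~= Test(name, testBody);
}
```

A runner could then iterate `registeredTests`, printing each name before
invoking it, which is what gives named tests and custom reporting for free.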
parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Thu, 02 Apr 2015 08:37:26 +0200, Jacob Carlborg wrote:

 On 2015-04-01 21:25, Dicebot wrote:
 I 100% disagree. Having built-in unittest blocks have been a huge win
 for the language and greatly improved quality of library ecosystem.
 Value of standardization and availability is tremendous here.

 Only problem is that development of the feature has stopped half way
 and there are still small bits missing here and there. All your
 requested features can be implemented within existing unittest feature
 via custom runner - while still running tests properly with default
 one!
 The unittest block itself could have been easily implemented as a
 library solution, if we had the following features:

 * Trailing delegate syntax
 * Executable code at module scope

 module bar;

 unittest("foo bar")
 {
 }

 Would be lowered to:

 unittest("foo bar", {
 });

 These two features are useful for other things, like benchmarking:

 benchmark
 {
 }
executable code at module scope isn't really necessary, i believe. it's
just a little messier with mixin templates, but i believe that it's
acceptable:

mixin template foo(alias fn) {
  static this () {
    import iv.writer;
    writeln("hello! i'm a test!");
    fn();
    writeln("test passed");
  }
}

// and do it
mixin foo!({assert(42);});
// or with trailing delegate sugar:
mixin foo!{ assert(42); };

sure, `static this` can do anything we want: registering a unittest
delegate in some framework, for example.
Apr 01 2015
next sibling parent ketmar <ketmar ketmar.no-ip.org> writes:
On Thu, 02 Apr 2015 06:52:14 +0000, ketmar wrote:

p.s. ah, sure, it should be `shared static this()`. %$Y^% %.
Apr 01 2015
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2015-04-02 08:52, ketmar wrote:

 executable code at module scope isn't really necessary, i believe.
I think it is, if we want to keep the same syntax.

--
/Jacob Carlborg
Apr 02 2015
prev sibling next sibling parent "Atila Neves" <atila.neves gmail.com> writes:
On Wednesday, 1 April 2015 at 18:04:31 UTC, Ary Borenszweig wrote:
 On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu
 wrote:
 This is a tooling issue.
 I think D's built-in "unittest" blocks are a mistake. Yes, they are
 simple, and for simple functions and algorithms they work pretty well.
 However, when you have a big complex project you start having other
 needs:

 1. Named unit-tests, so you can better find what failed
 2. Better error messages for assertions
 3. Better output to rerun failed tests
 4. Setup and teardown hooks
 5. Different outputs depending on use case

 All of this can be done with a library solution. D should have a very
 good library solution in phobos and it should be encouraged to use
 that. DMD could even know about this library and have special commands
 to trigger the tests.
It has been done, several times, by different people. I'm partial to 
unit-threaded, but then again I would be, since I wrote it :)

Atila
 The problem is that you can start with "unittest" blocks, but
 then you realize you need more, so what do you do? You combine
 both? You can't!

 I'd say, deprecate "unittest" and write a good test library. You
 can still provide it for backwards compatibility.

 By the way, this is the way we do it in Crystal. The source code
 for the spec library is here, if you need some inspiration:
 https://github.com/manastech/crystal/tree/master/src/spec . It's
 just 687 lines long.
Apr 01 2015
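The "custom runner" approach mentioned in this thread hooks into druntime's
`Runtime.moduleUnitTester` property. A minimal sketch follows; the hook and
`ModuleInfo` iteration are standard druntime, while the output format is an
assumption chosen to match the compiler-style "file(line): message" lines
Andrei asks for:

```d
// Minimal custom unittest runner, installed through druntime's
// Runtime.moduleUnitTester hook before main() runs.
import core.runtime;
import std.stdio;

shared static this()
{
    Runtime.moduleUnitTester = function bool()
    {
        size_t failures;
        foreach (m; ModuleInfo)          // walk all registered modules
        {
            if (m is null) continue;
            if (auto test = m.unitTest)  // per-module unittest entry point
            {
                try
                    test();
                catch (Throwable t)
                {
                    // Compiler-style diagnostic so editors can jump
                    // straight to the failing assertion.
                    writefln("%s(%d): Error: unittest failed: %s",
                             t.file, t.line, t.msg);
                    ++failures;
                }
            }
        }
        return failures == 0;  // false aborts the program before main()
    };
}
```

Note that this keeps the granularity at one `unitTest` function per module,
which is all the default runtime exposes; per-test names still need a
library convention on top.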
prev sibling parent "Kagamin" <spam here.lot> writes:
On Wednesday, 1 April 2015 at 18:04:31 UTC, Ary Borenszweig wrote:
 However, when you have a big complex project you start having
 other needs:
 1. Named unit-tests, so you can better find what failed
 2. Better error messages for assertions
 3. Better output to rerun failed tests
 4. Setup and teardown hooks
 5. Different outputs depending on use case
There are test frameworks for D: http://wiki.dlang.org/Libraries_and_Frameworks#Unit_Testing_Framework
 I'd say, deprecate "unittest" and write a good test library.
Test frameworks don't conflict with unittest blocks at all. They have different use cases.
Apr 02 2015
prev sibling parent reply =?UTF-8?B?Ik5vcmRsw7Z3Ig==?= <per.nordlow gmail.com> writes:
On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu 
wrote:
 In brief: I'd like to transition to a model in which 
 unittesting is organically part of the build. After all, you 
 wouldn't want to deploy an application that's failing its 
 unittests.
Overall a good idea, Andrei.

I take the opportunity to share a recurring dream of mine which is highly 
related to your proposal: I would like to have a compiler option for 
*automatic persistent memoization* of unittests that are inferred to be 
strongly pure. Please take a moment to think about how your usage of 
unittests would change if this were available.

Even though D compiles faster than all other languages, big projects 
(including my single-developer ones) will eventually grow so large that 
always waiting for all unittests to compile and run will not be bearable 
for a developer.

The idea of persistent memoization is not new (SCons applies it elegantly 
to build artifacts). It just hasn't been applied in as many cases as it 
could be. In theory it's just a matter of hashing all the code and data 
that a unittest depends upon and using this hash as a memoization key for 
remembering whether the unittest failed (and perhaps also how) or not.

However, I'm not sure how the memoization keys should be calculated in 
practice. I do know that the ELF file format contains a BuildID attribute 
calculated as a SHA-1. Is there any built-in support in ELF for hashing 
individual functions and data (sections)? Is it at all possible to figure 
out what code a unittest depends upon?

Please also think about how such a built-in feature would promote the 
adoption and usage of D from a robustness and productivity point of view.
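A build tool could approximate this without compiler support by caching
test outcomes keyed by a content hash. A toy sketch in D follows; every
name in it is hypothetical, and `depSources` deliberately papers over the
hard part, namely computing the exact set of code a unittest depends on:

```d
// Toy sketch of persistent unittest memoization, as floated above.
import std.digest.sha : sha1Of, toHexString;
import std.file : exists, mkdirRecurse, write;
import std.path : buildPath;

string cacheKey(string testSource, string depSources)
{
    // Hash all code the test depends on; any change invalidates the entry.
    return toHexString(sha1Of(testSource ~ depSources)).idup;
}

bool alreadyPassed(string cacheDir, string key)
{
    return exists(buildPath(cacheDir, key));  // marker file == known-good
}

void recordPass(string cacheDir, string key)
{
    mkdirRecurse(cacheDir);
    write(buildPath(cacheDir, key), "");      // empty marker is enough
}
```

The scheme only stays sound for strongly pure unittests, exactly as the
post says: a test that touches files, time, or globals can fail even when
none of its hashed inputs changed.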
Apr 04 2015
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/4/15 10:38 AM, "Nordlöw" wrote:
 Please also think about how such a builtin feature would promote
 establishment and usage of D from a robustness and productivity point of
 view.
I do think it's a great idea. Sadly, I also have negative staff to put to 
work on it. -- Andrei
Apr 04 2015
parent reply =?UTF-8?B?Ik5vcmRsw7Z3Ig==?= <per.nordlow gmail.com> writes:
On Saturday, 4 April 2015 at 19:00:01 UTC, Andrei Alexandrescu 
wrote:
 On 4/4/15 10:38 AM, "Nordlöw" wrote:
 Please also think about how such a builtin feature would 
 promote
 establishment and usage of D from a robustness and 
 productivity point of
 view.
 I do think it's a great idea. Sadly, I also have negative staff to put
 to work on it. -- Andrei
What's the reason for that?

- Lack of time?
- Fear of increased complexity that becomes difficult to maintain?
- Difficulty getting it exactly right?
- Or is there something fundamentally wrong with the idea itself?
Apr 05 2015
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/5/15 4:04 AM, "Nordlöw" wrote:
 On Saturday, 4 April 2015 at 19:00:01 UTC, Andrei Alexandrescu wrote:
 On 4/4/15 10:38 AM, "Nordlöw" wrote:
 Please also think about how such a builtin feature would promote
 establishment and usage of D from a robustness and productivity point of
 view.
 I do think it's a great idea. Sadly, I also have negative staff to put
 to work on it. -- Andrei
 What's the reason for that?

 - Lack of time?
 - Fear of increased complexity that becomes difficult to maintain?
 - Difficulty getting it exactly right?
 - Or is there something fundamentally wrong with the idea itself?
Lack of staff, i.e. people willing and able to put boots on the ground. 
People following the competent and enthusiastic exchanges in the forums 
would be shocked to know just how few of those would be willing to 
actually _do_ anything for D, even the littlest thing.

Nicely, that doesn't include you - thanks for contributing. -- Andrei
Apr 05 2015