digitalmars.D - unittests are really part of the build, not a special run
- Andrei Alexandrescu (31/31) Mar 30 2015 This is a tooling issue.
- Kapps (9/9) Mar 30 2015 Would this change result in just not running main and changing
- Dicebot (15/15) Mar 30 2015 tl;dr: please, no
- Andrei Alexandrescu (11/25) Mar 30 2015 Violent agreement here. I was just saying unittests should be part of
- Dicebot (11/37) Mar 30 2015 Ok, pardon me for misunderstanding :) I got confused by "you
- w0rp (7/14) Mar 30 2015 I would love to have that JUnit-compatible test runner. I haven't
- Andrei Alexandrescu (3/5) Mar 30 2015 It's very easy - format asserts occuring directly in unittests the same
- Leandro Lucarella (9/25) Apr 06 2015 I think having the default using the same format as compiler
- Andrei Alexandrescu (2/28) Apr 06 2015 YES! I was surprised that any of this was being debated. -- Andrei
- qznc (11/14) Mar 31 2015 Sounds like a good idea to me.
- Andrei Alexandrescu (6/17) Mar 31 2015 Probably not; we're looking at two different builds. The build to be
- Johannes Totz (8/34) Apr 01 2015 I'm starting to see this differently these days (basically since I
- Jeremy Powers via Digitalmars-d (9/19) Apr 02 2015 This.
- H. S. Teoh via Digitalmars-d (7/32) Apr 02 2015 So what do you want the compiler to do? Emit two executables, one
- Andrei Alexandrescu (10/39) Apr 02 2015 The way I see it, the notion of having one build with strippable
- Jeremy Powers via Digitalmars-d (10/19) Apr 02 2015 This works for me. The important part is that the resultant artifacts o...
- ketmar (9/26) Mar 30 2015 ah, i see.
- ketmar (3/3) Mar 30 2015 On Tue, 31 Mar 2015 03:29:38 +0000, ketmar wrote:
- Mathias Lang via Digitalmars-d (12/12) Mar 30 2015 I'd rather see DMD automatically pass the expression that triggered the
- Walter Bright (2/5) Mar 30 2015 You have to look at the code anyway.
- "Nordlöw" (6/12) Apr 04 2015 My experience is that having the failing expression available
- Andrei Alexandrescu (4/17) Mar 30 2015 I disagree.
- Mathias Lang via Digitalmars-d (5/24) Mar 30 2015 Often, not always. You doesn't loose any information by displaying the
- Andrei Alexandrescu (4/5) Mar 30 2015 The thing here is you're not forced to build both the unittest version
- Andy Smith (8/8) Mar 31 2015 A band-aid rather than a solution, but sticking this in
- Andy Smith (10/18) Mar 31 2015 Ah - didn't test for your specific example...
- Jacob Carlborg (4/11) Mar 31 2015 This is what I use in TextMate "^(.*?)@(.*?)\((\d+)\):(.*)?".
- Jacob Carlborg (6/9) Mar 31 2015 It's not difficult. I've modified the D bundle for TextMate to recognize...
- Andrei Alexandrescu (2/10) Mar 31 2015 Problem is doing that for all editors does not scale. -- Andrei
- Jacob Carlborg (8/9) Mar 31 2015 It's not like the error messages used by DMD are in a
- Andrei Alexandrescu (5/12) Mar 31 2015 The idea is to make a SMALL change on our side for a LARGE INSTANT
- Jacob Carlborg (7/10) Mar 31 2015 Why wouldn't you want exceptions to be clickable as well?
- Andrei Alexandrescu (4/12) Mar 31 2015 Well nice then.
- Jacob Carlborg (5/6) Mar 31 2015 That should be printed as well [1]
- Jacob Carlborg (4/17) Mar 31 2015 With a custom unit test runner it's possible to run the unit tests as CT...
- deadalnix (1/1) Mar 30 2015 That would be great if we could JIT the unitests at build time...
- Rikki Cattermole (4/5) Mar 30 2015 While we are at it, how about improving CTFE to be fully JIT'd and
- deadalnix (3/9) Mar 30 2015 I wouldn't go that far as to add external calls into CTFE, but
- Rikki Cattermole (4/13) Mar 30 2015 Yeah, there is a lot of pros and cons to adding it. But the thing is, we...
- Jacob Carlborg (5/8) Mar 31 2015 It's already possible to run unit tests during compile time as CTFE [1].
- Idan Arye (27/51) Mar 31 2015 There is no point in running unittests before `main` in the same
- Kagamin (4/8) Mar 31 2015 Something of the form: `rdmd -test code.d`? Though how that will
- Andrei Alexandrescu (2/8) Mar 31 2015 Yes and agreed. -- Andrei
- Atila Neves (33/69) Mar 31 2015 I actually thought about the whole "it should fail to build if
- Jacob Carlborg (29/50) Mar 31 2015 I kind of agree, RSpec has similar formatting of failed tests. But I
- Nick Sabalausky (9/19) Apr 01 2015 Yea, at one point, a whole system of nifty asserts that did just that
- Ary Borenszweig (25/26) Apr 01 2015 I think D's built-in "unittest" blocks are a mistake.
- Jacob Carlborg (6/10) Apr 01 2015 Ahhh, looks like my old buddy RSpec :). Does it do all the fancy things
- Ary Borenszweig (25/33) Apr 01 2015 No, it's actually much simpler but less powerful. This is because the
- Jacob Carlborg (6/31) Apr 01 2015 This sounds all great. But lowering groups and examples to classes and
- Ary Borenszweig (11/19) Apr 02 2015 We can. But then it becomes harder to understand what's going on. In
- Jacob Carlborg (27/33) Apr 03 2015 It's quite straightforward to implement, in Ruby as least. Something
- Kapps (15/29) Apr 01 2015 Everything you propose can be done with a custom unittest runner,
- ketmar (3/10) Apr 01 2015 only if reflection is fully working, which is not the case (see, for=20
- Jacob Carlborg (5/11) Apr 01 2015 I just don't compile the module containing the main function. Although I
- Kapps (10/25) Apr 02 2015 Which is okay for running your own code, but the problem is it's
- Dicebot (10/37) Apr 01 2015 I 100% disagree. Having built-in unittest blocks have been a huge
- Dicebot (2/2) Apr 01 2015 P.S. I hate all the Ruby testing facilities, hate with bloody
- Atila Neves (4/6) Apr 01 2015 You're going to _love_ my DConf talk ;) I was expecting that
- Dicebot (4/10) Apr 02 2015 Oh yeah, looking forward to listening it :) I had an unpleasant
- David Gileadi (3/14) Apr 02 2015 Having never used Cucumber but having been interested in it, what was
- Dicebot (7/25) Apr 02 2015 The very fact of being forced to install some external
- Wyatt (12/14) Apr 02 2015 Dealing with it at work, I find it puts us scarily at the mercy
- David Gileadi (3/22) Apr 02 2015 Thanks to you both for the answers!
- Jacob Carlborg (31/40) Apr 03 2015 At work we're using Turnip [1], which basically is Gherkin (Cucumber)
- Jacob Carlborg (14/17) Apr 03 2015 Yeah, I don't like how it ended up in DStep. I'm using it completely
- Jacob Carlborg (10/11) Apr 01 2015 The unit test framework in the Ruby standard library:
- Jacob Carlborg (18/25) Apr 01 2015 The the unittest block itself could have been easily implemented as a
- ketmar (20/53) Apr 01 2015 executable code at module scope isn't really necessary, i believe. it's=...
- ketmar (2/2) Apr 01 2015 On Thu, 02 Apr 2015 06:52:14 +0000, ketmar wrote:
- Jacob Carlborg (4/5) Apr 02 2015 I think so to have the same syntax.
- Atila Neves (4/31) Apr 01 2015 It has been done, several times, by different people. I'm partial
- Kagamin (5/13) Apr 02 2015 There are test frameworks for D:
- "Nordlöw" (28/32) Apr 04 2015 Overall a good idea, Andrei.
- Andrei Alexandrescu (3/6) Apr 04 2015 I do think it's a great idea. Sadly I also think how I have negative
- "Nordlöw" (7/15) Apr 05 2015 What's the reason for that?
- Andrei Alexandrescu (6/19) Apr 05 2015 Lack of staff, i.e. people willing and able to put boots on the ground.
This is a tooling issue. D builds and messaging are rigged to consider unittests as a special build followed by a special run of an application.

In brief: I'd like to transition to a model in which unittesting is organically part of the build. After all, you wouldn't want to deploy an application that's failing its unittests.

Detail: Consider running a build vs. a unittest in one of the supported editors/IDEs (emacs, vim, Code::Blocks, Visual D, Xamarin...). During a build, an error will come in a standard format, e.g.

    std/array.d(39): Error: undefined identifier xyz

This format is recognized by the editor and allows the user to click on it and go straight to the offending line etc. In contrast, a failing unittest has a completely different format. In fact, it's a format that's worse than useless because it confuses the editor:

    core.exception.AssertError std/array.d(39): xyz

emacs will recognize the text as a filename and upon clicking would ask the user to open the nonsense file "core.exception.AssertError std/array.d". This error line is followed by a stack trace, which is in no recognizable format. It should be in the format of e.g. gcc remarks providing additional information for an error, and again provide file/line information so the user can click and see the call stack in the source.

Where I want us to be is a place where unittests are considered a part of the build; it should be trivial to set things up such that unittesting is virtually indistinguishable from compilation and linking.

This all is relatively easy to implement but might have a large positive impact. Please chime in before I make this into an issue. Anyone would like to take this?

Andrei
Mar 30 2015
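To make the proposal concrete: druntime already exposes a hook, `Runtime.moduleUnitTester`, that replaces the default unittest runner. A minimal sketch of a runner that reports failed asserts in the compiler's `file(line): message` shape (so editors can jump to the failing line) might look like this; the exact wording of the message is an assumption, not an agreed format:

```d
import core.exception : AssertError;
import core.runtime : Runtime;
import std.stdio : stderr;

shared static this()
{
    // Replace the default runner: run every module's unittests and print
    // failures in compiler-error style instead of a raw exception dump.
    Runtime.moduleUnitTester = function bool()
    {
        bool ok = true;
        foreach (m; ModuleInfo)
        {
            if (m is null) continue;
            auto test = m.unitTest;
            if (test is null) continue;
            try
                test();
            catch (AssertError e)
            {
                // Emits e.g.: std/array.d(39): unittest failure: xyz
                stderr.writefln("%s(%s): unittest failure: %s",
                                e.file, e.line, e.msg);
                ok = false;
            }
        }
        return ok; // returning false aborts before main() runs
    };
}
```

Since the runner returns `false` on any failure, the binary exits before `main`, which matches the "tests gate the build" model discussed below.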
Would this change result in just not running main and changing the default unittest runner output, but still run static constructors (which then allows you to specify a custom unittest runner)? If so, I think it's a nice change. I've always found it quite odd that running unittests still runs the main method, and it results in annoyances when making custom build systems that may or may not need -main depending on if they're a library or an application. Having unittests not care whether main is present or not, and not run main, would be a nice change for tooling.
Mar 30 2015
tl;dr: please, no

We have put quite some effort into fighting the default DMD behaviour of -unittest simply adding to the main function and not replacing it. Initially many applications did run tests on startup because DMD suggested it is a good idea - some rather bad practical experience has shown this was a rather bad suggestion. Accidental tests that start doing I/O on production servers, considerably increased restart times for services - that kind of issue.

And if you suggest to build both the test and normal build as part of a single compiler call (building the test version silently in the background), this is also a very confusing addition hardly worth its gain.

Just tweak your editors if that is truly important. It is not like being able to click some fancy lines in a GUI makes a critical usability addition to testing.
Mar 30 2015
On 3/30/15 3:30 PM, Dicebot wrote:

> We have put quite some effort into fighting default DMD behaviour of -unittest simply adding to main function and not replacing it. [...] Accidental tests that start doing I/O on productions servers, considerably increased restart times for services - that kind of issues.

Violent agreement here. I was just saying unittests should be part of the build process, not the run process. Running unittests and then the app is a bad idea.

> And if you suggest to build both test and normal build as part of single compiler call (building test version silently in the background) this is also very confusing addition hardly worth its gain.

Making the format of unittest failures better would take us a long way. Then we can script builds so the unittest and release build are created concurrently.

> Just tweak your editors if that is truly important. It is not like being able to click some fancy lines in GUI makes critical usability addition to testing.

This is a cultural change more than a pure tooling matter. I think we'd do well to change things on the tooling side instead of expecting editors to do it for us.

Andrei
Mar 30 2015
On Monday, 30 March 2015 at 22:50:21 UTC, Andrei Alexandrescu wrote:

> Violent agreement here. I was just saying unittests should be part of the build process, not the run process. Running unittests and then the app is a bad idea.

Ok, pardon me for misunderstanding :) I got confused by "you don't want to run application that isn't tested" part.

> Making the format of unittest failures better would take us a long way. Then we can script builds so the unittest and release build are created concurrently.

If it is only format that matters you can always change it via custom test runner. For example, we do have a test runner that generates JUnit-compatible XML output for Jenkins - and that was possible to do with plain `unittest` blocks even with D1 :)

Main problem with changing default formatting is that it is pretty hard to choose one that is 100% right. Current one is at least simple and predictable, being just an exception printout.
Mar 30 2015
On Monday, 30 March 2015 at 23:26:38 UTC, Dicebot wrote:

> If it is only format that matters you can always change it via custom test runner. For example, we do have a test runner that generates JUnit-compatible XML output for Jenkins - and that was possible to do with plain `unittest` blocks even with D1 :) [...]

I would love to have that JUnit-compatible test runner. I haven't needed it quite yet, but it would be nice to have, and I'm sure others would appreciate it. I would also like it if there was Cobertura-formatted output for DMD's coverage reports. Then it would be possible to see both test results and code coverage reports in Jenkins.
Mar 30 2015
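The JUnit-compatible runner Dicebot describes is not shown in the thread; as a rough illustration of what its reporting half might look like, here is a hedged sketch that takes collected (test name, failure message) pairs and emits minimal JUnit-style XML. The function name and the pair representation are hypothetical, not an existing API:

```d
import std.stdio : File;

// Emit a minimal JUnit-compatible <testsuite> report that tools like
// Jenkins can ingest. Each entry is [name, failureMessage]; an empty
// failure message means the test passed.
void writeJUnitXml(File sink, string suite, string[2][] results)
{
    sink.writefln(`<testsuite name="%s" tests="%s">`, suite, results.length);
    foreach (r; results)
    {
        if (r[1].length == 0)
            sink.writefln(`  <testcase name="%s"/>`, r[0]);
        else
            sink.writefln(`  <testcase name="%s"><failure message="%s"/></testcase>`,
                          r[0], r[1]);
    }
    sink.writeln(`</testsuite>`);
}
```

A real implementation would also XML-escape the messages and record timing; this only shows the shape of the output.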
On 3/30/15 4:26 PM, Dicebot wrote:

> Main problem with changing default formatting is that it is pretty hard to choose one that is 100% right.

It's very easy - format asserts occurring directly in unittests the same as compilation errors. I don't see why this would be ever debated. -- Andrei
Mar 30 2015
On Monday, 30 March 2015 at 23:26:38 UTC, Dicebot wrote:

>> Making the format of unittest failures better would take us a long way. Then we can script builds so the unittest and release build are created concurrently.
>
> If it is only format that matters you can always change it via custom test runner. For example, we do have a test runner that generates JUnit-compatible XML output for Jenkins - and that was possible to do with plain `unittest` blocks even with D1 :)
>
> Main problem with changing default formatting is that it is pretty hard to choose one that is 100% right. Current one is at least simple and predictable being just an exception printout.

I think having the default using the same format as compiler errors makes perfect sense. Providing extra formatters in Phobos would be a huge gain, like a JUnit-compatible formatter, as it's a very widespread test reporting format that can be used with many tools. I agree the key is the current configurability, but providing better defaults and better out-of-the-box alternatives seems like a very reasonable approach to me.
Apr 06 2015
On 4/6/15 3:16 PM, Leandro Lucarella wrote:

> I think having the default using the same format as compiler errors makes perfect sense. Providing extra formatters in Phobos would be a huge gain, like a JUnit-compatible formatter, as it's a very widespread test reporting format that can be used with many tools. I agree the key is the current configurability, but providing better defaults and better out-of-the-box alternatives seems like a very reasonable approach to me.

YES! I was surprised that any of this was being debated. -- Andrei
Apr 06 2015
On Monday, 30 March 2015 at 22:50:21 UTC, Andrei Alexandrescu wrote:

> Violent agreement here. I was just saying unittests should be part of the build process, not the run process. Running unittests and then the app is a bad idea.

Sounds like a good idea to me. Then -unittest should be enabled by default?

Implementation-wise it sounds like you want another entry point apart from main, e.g. "main_unittest". Then the build process is compile-link-unittest. Afterwards the run process is the usual main call.

It makes binaries bigger though. Maybe unittest-specific code can be placed in a special segment, which can be removed during deployment?
Mar 31 2015
On 3/31/15 9:21 AM, qznc wrote:

> Sounds like a good idea to me. Then -unittest should be enabled by default?

Probably not; we're looking at two different builds. The build to be deployed has no unittest code at all.

> Implementation-wise it sounds like you want another entry point apart from main, e.g. "main_unittest". [...] It makes binaries bigger though. Maybe unittest-specific code can be placed in a special segment, which can be removed during deployment?

Interesting. Or could be a dynamically-loaded library. But... crawl before we walk.

Andrei
Mar 31 2015
On 31/03/2015 19:24, Andrei Alexandrescu wrote:

> Probably not; we're looking at two different builds. The build to be deployed has no unittest code at all.

I'm starting to see this differently these days (basically since I started to use Jenkins for everything): a build you haven't unit tested has implicitly failed. That means the release build that does not have any unit test bits is not deployable.

Instead, compile as usual (both debug and release), and run unit tests against both (e.g. to catch compiler bugs in the optimiser). Then for deployment, drop/strip/remove/don't-package the unit test code.
Apr 01 2015
On Wed, Apr 1, 2015 at 7:31 AM, Johannes Totz via Digitalmars-d wrote:

> A build you haven't unit tested has implicitly failed. That means the release build that does not have any unit test bits is not deployable. [...] Then for deployment, drop/strip/remove/don't-package the unit test code.

This.

I want to run unit tests as part of the build process, and I want my release build to have unit tests run against it. If unit tests haven't passed for a build, it's not release ready. But I don't want my release build to be bloated with unit test code.

Related, unit tests often have dependencies that I _don't_ want as part of my release build. Mocking frameworks are a good example.
Apr 02 2015
On Thu, Apr 02, 2015 at 11:19:32AM -0700, Jeremy Powers via Digitalmars-d wrote:

> I want to run unit tests as part of the build process, and I want my release build to have unit tests run against it. If unit tests haven't passed for a build, it's not release ready. But I don't want my release build to be bloated with unit test code.

So what do you want the compiler to do? Emit two executables, one containing the release, the other containing the unittests? Isn't that just a matter of running dmd with/without -unittest?

T

--
You have to expect the unexpected. -- RL
Apr 02 2015
On 4/2/15 11:44 AM, H. S. Teoh via Digitalmars-d wrote:

> So what do you want the compiler to do? Emit two executables, one containing the release, the other containing the unittests? Isn't that just a matter of running dmd with/without -unittest?

The way I see it, the notion of having one build with strippable unittests is a nice idea but technically challenging. It's also low impact - today concurrent CPU is cheap, so running two concurrent unrelated builds can be made as fast as one.

The simple effective step toward improvement is to uniformize the format of assertion errors in unittests, and to make it easy with tooling to create unittest and non-unittest builds that are gated by the unittests succeeding.

Andrei
Apr 02 2015
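The gating Andrei describes needs no compiler support at all; it can be scripted today. A hedged sketch (paths and flags are illustrative, not a prescribed layout) that builds the unittest binary and the release binary concurrently, then keeps the release artifact only if the tests pass:

```shell
#!/bin/sh
# Two concurrent builds of the same sources: one with unittests and a
# generated empty main (-main), one optimized for deployment.
dmd -unittest -main -ofapp_test src/*.d &
dmd -O -release -ofapp src/*.d &
wait

# Gate the release artifact on the unittest run.
if ./app_test; then
    echo "unittests passed; app is deployable"
else
    echo "unittests failed; discarding release build" >&2
    rm -f app
    exit 1
fi
```

With uniformly formatted assertion failures, the `./app_test` output in this script would be clickable in editors just like the compile steps.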
On Thu, Apr 2, 2015 at 12:04 PM, Andrei Alexandrescu via Digitalmars-d wrote:

> The way I see it, the notion of having one build with strippable unittests is a nice idea but technically challenging. It's also low impact - today concurrent CPU is cheap so running two concurrent unrelated builds can be made as fast as one.

This works for me. The important part is that the resultant artifacts of the build have had their exact code tested; it doesn't really matter if it's the exact same bits or just the logical equivalent. As long as it is the _exact_ same, which means test-only dependencies are not part of the being-tested code.

> The simple effective step toward improvement is to uniformize the format of assertion errors in unittests and to make it easy with tooling to create unittest and non-unittest builds that are gated by the unittests succeeding.

Nomenclature nitpick: one 'build' with concurrent compile/test steps. Artifacts of the build should include a tested library/executable and a (uniformly formatted, of course) test report.
Apr 02 2015
On Mon, 30 Mar 2015 22:30:17 +0000, Dicebot wrote:

> tl;dr: please, no [...] Just tweak your editors if that is truly important. It is not like being able to click some fancy lines in GUI makes critical usability addition to testing.

ah, i see.

* building new phobos version now: several seconds.
* building new phobos version with unittests now: several minutes.

with your suggestion:

* building new phobos version: several minutes.
* building new phobos version with unittests: several minutes.

yep, it's great. i was dreaming of such long compile sessions since i learned that compilers can be fast.
Mar 30 2015
On Tue, 31 Mar 2015 03:29:38 +0000, ketmar wrote: post.
Mar 30 2015
I'd rather see DMD automatically pass the expression that triggered the error (as it is done in C) to replace this useless "Unittest failure" that forces me to look through the code.

D has the advantage that it catches most errors at CT. You can write a lot of code and just compile it to ensure it's more or less correct. I often write code that won't pass the unittests, but I need to check if my template / CT logic is correct. It may take 20 compilation cycles before I run the unittests.

Running the tests as part of the build would REALLY slow down the process - especially given that unittest is communicated to imported modules, which means imported libraries. You don't want to catch unittest failures on every compilation cycle, but rather before your code makes it to the repo - that's what CI systems are for.
Mar 30 2015
On 3/30/2015 4:15 PM, Mathias Lang via Digitalmars-d wrote:

> I'd rather see DMD automatically pass the expression that triggered the error (as it is done in C) to replace this useless "Unittest failure" that forces me to look through the code.

You have to look at the code anyway.
Mar 30 2015
On Monday, 30 March 2015 at 23:51:17 UTC, Walter Bright wrote:

> You have to look at the code anyway.

My experience is that having the failing expression available speeds up the process of figuring out what's wrong with my failing code. That's why I'm using https://github.com/nordlow/justd/blob/master/assert_ex.d
Apr 04 2015
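The assert_ex library linked above is not reproduced in the thread; in the same spirit, a minimal hedged sketch of an assertion helper that carries the compared values in its message (so a failure report shows what was compared instead of a bare "unittest failure") could look like this. The name `assertEqual` is hypothetical, not assert_ex's actual API:

```d
import core.exception : AssertError;
import std.format : format;

// On mismatch, throw an AssertError whose message includes both operands,
// plus the caller's file/line so tooling can format it like a compile error.
void assertEqual(T, U)(T lhs, U rhs,
                       string file = __FILE__, size_t line = __LINE__)
{
    if (lhs != rhs)
        throw new AssertError(format("%s != %s", lhs, rhs), file, line);
}

unittest
{
    assertEqual(1 + 1, 2); // passes silently
    // assertEqual("abc".length, 4); // would fail with "3 != 4"
}
```

A fuller version would also capture the expression text via a template alias or string mixin, which is what makes the C-style `assert(expr)` printout possible.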
On 3/30/15 4:15 PM, Mathias Lang via Digitalmars-d wrote:

> I'd rather see DMD automatically pass the expression that triggered the error (as it is done in C) to replace this useless "Unittest failure" that forces me to look through the code.

Often you need the context.

> [...] You don't want to catch unittest failures on every compilation cycle, but rather before your code makes it to the repo - that's what CI systems are for.

I disagree.

Andrei
Mar 30 2015
2015-03-31 2:46 GMT+02:00 Andrei Alexandrescu via Digitalmars-d:

> Often you need the context.

Often, not always. You don't lose any information by displaying the expression.

> I disagree.

As you are entitled to. But I don't see any argument here.
Mar 30 2015
On 3/30/15 7:14 PM, Mathias Lang via Digitalmars-d wrote:

> As you are entitled to. But I don't see any argument here.

The thing here is you're not forced to build both the unittest version and the deployed version. It's an opt-in thing. The problem currently is we format error messages badly so we make Good Things difficult. -- Andrei
Mar 30 2015
A band-aid rather than a solution, but sticking this in .emacs/init.el will fix up emacs to do the right thing with asserts (only tested with DMD).

Cheers,

A.

(add-to-list 'compilation-error-regexp-alist
             '("^object\\.Exception \\(.*\\)(\\([0-9]+\\)).*" 1 2))
Mar 31 2015
On Tuesday, 31 March 2015 at 08:10:19 UTC, Andy Smith wrote:

> A band-aid rather than a solution, but sticking this in .emacs/init.el will fix up emacs to do the right thing with asserts (only tested with DMD). [...]

Ah - didn't test for your specific example... Need:

(add-to-list 'compilation-error-regexp-alist
             '("^core\\.exception\\.AssertError \\(.*\\)(\\([0-9]+\\)).*" 1 2))

as well... not sure how many variants of these there are, but if regexps can't handle it am sure elisp can.

Cheers,

A.
Mar 31 2015
On 2015-03-31 10:16, Andy Smith wrote:

> Ah - didn't test for your specific example... [...] not sure how many variants of these there are, but if regexps can't handle it am sure elisp can.

This is what I use in TextMate: "^(.*?)@(.*?)\((\d+)\):(.*)?".

--
/Jacob Carlborg
Mar 31 2015
On 2015-03-31 08:10, Andrei Alexandrescu wrote:The thing here is you're not forced to build both the unittest version and the deployed version. It's an opt-in thing. The problem currently is we format error messages badly so we make Good Things difficult. -- AndreiIt's not difficult. I've modified the D bundle for TextMate to recognize exceptions, so this includes failed unit tests. It's not any more difficult than recognizing a compile error. -- /Jacob Carlborg
Mar 31 2015
On 3/31/15 4:22 AM, Jacob Carlborg wrote:On 2015-03-31 08:10, Andrei Alexandrescu wrote:Problem is doing that for all editors does not scale. -- AndreiThe thing here is you're not forced to build both the unittest version and the deployed version. It's an opt-in thing. The problem currently is we format error messages badly so we make Good Things difficult. -- AndreiIt's not difficult. I've modified the D bundle for TextMate to recognize exceptions, so this includes failed unit tests. It's not any more difficult than recognizing a compile error.
Mar 31 2015
On Tuesday, 31 March 2015 at 15:00:28 UTC, Andrei Alexandrescu wrote:Problem is doing that for all editors does not scale. -- AndreiIt's not like the error messages used by DMD are in a standardized format. So hopefully the editors already recognize this format. BTW, what about exceptions, do you think we should change the format for those as well? -- /Jacob Carlborg
Mar 31 2015
On 3/31/15 9:27 AM, Jacob Carlborg wrote:On Tuesday, 31 March 2015 at 15:00:28 UTC, Andrei Alexandrescu wrote:The idea is to make a SMALL change on our side for a LARGE INSTANT benefit for everyone. Sigh.Problem is doing that for all editors does not scale. -- AndreiIt's not like the error messages used by DMD are in a standardized format. So hopefully the editors already recognize this format.BTW, what about exceptions, do you think we should change the format for those as well?I don't see a reason. Andrei
Mar 31 2015
On 2015-03-31 20:26, Andrei Alexandrescu wrote:The idea is to make a SMALL change on our side for a LARGE INSTANT benefit for everyone. Sigh.But do the editors handle the current format of the compile errors?I don't see a reason.Why wouldn't you want exceptions to be clickable as well? In TextMate I handle compile errors, warnings, deprecation messages and exceptions. The exceptions will include the unit test failures as well. -- /Jacob Carlborg
Mar 31 2015
On 3/31/15 1:07 PM, Jacob Carlborg wrote:On 2015-03-31 20:26, Andrei Alexandrescu wrote:At least all editors I use that claim some level of D support.The idea is to make a SMALL change on our side for a LARGE INSTANT benefit for everyone. Sigh.But do the editors handle the current format of the compile errors?Well nice then. AndreiI don't see a reason.Why wouldn't you want exceptions to be clickable as well? In TextMate I handle compile errors, warnings, deprecation messages and exceptions. The exceptions will include the unit test failures as well.
Mar 31 2015
On 2015-03-31 02:46, Andrei Alexandrescu wrote:Often you need the context.That should be printed as well [1] [1] http://thejqr.com/2009/02/06/textmate-rspec-and-dot-spec-party.html -- /Jacob Carlborg
Mar 31 2015
On 2015-03-31 01:15, Mathias Lang via Digitalmars-d wrote:I'd rather see DMD automatically pass the expression that triggered the error (as it is done in C) to replace this useless "Unittest failure" that forces me to look through the code. D has the advantage that it catches most errors at CT. You can write a lot of code and just compile it to ensure it's more or less correct. I often write code that won't pass the unittests, but I need to check if my template / CT logic is correct. It may take 20 compilation cycles before I run the unittests. Running the tests as part of the build would REALLY slow down the process -especially given that unittest is communicated to imported modules, which means imported libraries. You don't want to catch unittest failures on every compilation cycle, but rather before your code makes it to the repo - that's what CI systems are for -.With a custom unit test runner it's possible to run the unit tests as CTFE. -- /Jacob Carlborg
Mar 31 2015
That would be great if we could JIT the unittests at build time...
Mar 30 2015
On 31/03/2015 1:08 p.m., deadalnix wrote:That would be great if we could JIT the unittests at build time...While we are at it, how about improving CTFE to be fully JIT'd and support calling external code? That way it is only a small leap to add unittests to be CTFE'd.
Mar 30 2015
On Tuesday, 31 March 2015 at 00:49:41 UTC, Rikki Cattermole wrote:On 31/03/2015 1:08 p.m., deadalnix wrote:I wouldn't go that far as to add external calls into CTFE, but yeah, that's pretty much the way I see things going.That would be great if we could JIT the unittests at build time...While we are at it, how about improving CTFE to be fully JIT'd and support calling external code? That way it is only a small leap to add unittests to be CTFE'd.
Mar 30 2015
On 31/03/2015 2:07 p.m., deadalnix wrote:On Tuesday, 31 March 2015 at 00:49:41 UTC, Rikki Cattermole wrote:Yeah, there are a lot of pros and cons to adding it. But the thing is, we are not pushing CTFE as far as it can go. We really aren't. And in some ways I like it. It makes it actually easier to work with in a lot of ways.On 31/03/2015 1:08 p.m., deadalnix wrote:I wouldn't go that far as to add external calls into CTFE, but yeah, that's pretty much the way I see things going.That would be great if we could JIT the unittests at build time...While we are at it, how about improving CTFE to be fully JIT'd and support calling external code? That way it is only a small leap to add unittests to be CTFE'd.
Mar 30 2015
On 2015-03-31 02:49, Rikki Cattermole wrote:While we are at it, how about improving CTFE to be fully JIT'd and support calling external code? That way it is only a small leap to add unittests to be CTFE'd.It's already possible to run unit tests during compile time as CTFE [1]. [1] http://forum.dlang.org/thread/ks1brj$1l6c$1 digitalmars.com -- /Jacob Carlborg
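A minimal sketch of the idea (assuming the function under test is CTFE-able): a failed check becomes a compile error, which makes the test literally part of the build.

```d
// Any CTFE-able function can be checked at compile time.
int addOne(int x)
{
    return x + 1;
}

// Evaluated during compilation: if this fails, the build fails.
static assert(addOne(41) == 42);

// The same check can still run as an ordinary runtime unittest.
unittest
{
    assert(addOne(41) == 42);
}
```

This only covers code reachable by CTFE; tests that do I/O or call external libraries still need a runtime run.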
Mar 31 2015
On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu wrote:This is a tooling issue. ... In brief: I'd like to transition to a model in which unittesting is organically part of the build. After all, you wouldn't want to deploy an application that's failing its unittests. Detail: Consider running a build vs. a unittest in one of the supported editors/IDEs (emacs, vim, Code:Blocks, Visual D, Xamarin...). During a build, an error will come in a standard format, e.g. std/array.d(39): Error: undefined identifier xyz This format is recognized by the editor and allows the user to click on it and go straight to the offending line etc. In contrast, a failing unittest has a completely different format. In fact, it's a format that's worse than useless because it confuses the editor: core.exception.AssertError std/array.d(39): xyz ... Where I want us to be is a place where unittests are considered a part of the build; it should be trivial to set things up such that unittesting is virtually indistinguishable from compilation and linking. ... AndreiThere is no point in running unittests before `main` in the same executable, but that doesn't mean the build is the right place for running unittests. The IDE/build-system should be the one that handles running the tests (both unit and integration). The compiler should give the IDE/build-system enough tools to do it properly - not to do it for them. Running UTs as part of the build blocks some options, like building a UT executable and running it in another environment where building is not possible. Ideally, I would like to see the compiler, when ordered to do a unittest build, create an executable that only runs unittests and ignores `main`. As a matter of fact, what it does is run a special "ut-main" entry point function declared in Phobos (or in the runtime - whichever makes more sense) that runs all the unittests, and can possibly catch `AssertError`s and display them in a proper format.
Since we no longer run `main`, the command line arguments can go to the ut-main function. This doesn't change anything with the built-in ut-main since it just ignores them, but if that special entry point is overridable via a compiler flag, an IDE/build-system can supply its own, more complex ut-main that can make use of the command line arguments. That special ut-main can communicate with the IDE/build-system to provide a better UX for unittest running - for example, an IDE could do a graphical display of the unittest run, with its custom ut-main responsible for providing it info about the ut-run progress.
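Something close to this overridable "ut-main" can already be approximated with druntime's `Runtime.moduleUnitTester` hook. A hedged sketch (check the `core.runtime` documentation for the exact semantics; the reporting format here is illustrative):

```d
import core.runtime : Runtime;
import std.stdio : stderr;

shared static this()
{
    // Install a custom runner; the runtime invokes it before main().
    Runtime.moduleUnitTester = function bool()
    {
        bool ok = true;
        foreach (m; ModuleInfo)          // all registered modules
        {
            if (m is null) continue;
            auto fp = m.unitTest;        // aggregated unittest entry, or null
            if (fp is null) continue;
            try
            {
                fp();
            }
            catch (Throwable t)          // AssertError derives from Throwable
            {
                // "file(line): message" - the shape editors already parse
                stderr.writefln("%s(%s): %s", t.file, t.line, t.msg);
                ok = false;
            }
        }
        // Returning false reports failure and prevents main() from running.
        return ok;
    };
}
```

It still lives in the same executable as `main`, so it is a partial answer at best to the separate-ut-executable idea.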
Mar 31 2015
On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu wrote:Where I want us to be is a place where unittests are considered a part of the build; it should be trivial to set things up such that unittesting is virtually indistinguishable from compilation and linking.Something of the form: `rdmd -test code.d`? Though how that will work is a different question.
Mar 31 2015
On 3/31/15 2:48 AM, Kagamin wrote:On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu wrote:Yes and agreed. -- AndreiWhere I want us to be is a place where unittests are considered a part of the build; it should be trivial to set things up such that unittesting is virtually indistinguishable from compilation and linking.Something of the form: `rdmd -test code.d`? Though how that will work is a different question.
Mar 31 2015
I actually thought about the whole "it should fail to build if any of the unit tests fail" idea 2 or 3 weeks ago, so this sounds good. WRT to the error messages and their recognition by text editors, a _massive_ improvement would be compiler-assisted formatting of the assertion errors. This:

core.exception.AssertError foo.d(2): Assertion failure

Is not useful when I wrote `assert(foo == 2)`. This, however, is:

tests.encode.testEncodeMoreThan8Bits:
tests/encode.d:166 - Expected: [158, 234, 3]
tests/encode.d:166 - Got: [158, 234]

In Python, my favourite testing framework is py.test. It reflects on the test code itself and replaces `assert foo == 2` with its own code so that it looks like this in the output:

    def test_foo():
        foo = 5
>       assert foo == 2
E       assert 5 == 2

It also recognises things like `assert x in xs`, which is obviously handy. Since Walter has mentioned the "specialness" of assert before, maybe the compiler could recognise at least the most common kinds and format accordingly (assert ==, assert in, assert is null, assert !is null)? The main reasons I wrote a unit testing library to begin with were: 1. Better error messages when tests fail 2. Named unit tests and running them by name 3. Running unit tests in multiple threads I'm addressing 1. above, 2. has its own thread currently and AFAIK 3. was only done by me in unit-threaded. There are other niceties that I probably won't give up but those were the big 3. Atila On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu wrote:This is a tooling issue. D builds and messaging are rigged to consider unittests as a special build followed by a special run of an application. In brief: I'd like to transition to a model in which unittesting is organically part of the build. After all, you wouldn't want to deploy an application that's failing its unittests. Detail: Consider running a build vs. a unittest in one of the supported editors/IDEs (emacs, vim, Code:Blocks, Visual D, Xamarin...). 
During a build, an error will come in a standard format, e.g. std/array.d(39): Error: undefined identifier xyz This format is recognized by the editor and allows the user to click on it and go straight to the offending line etc. In contrast, a failing unittest has a completely different format. In fact, it's a format that's worse than useless because it confuses the editor: core.exception.AssertError std/array.d(39): xyz emacs will recognize the text as a filename and upon clicking would ask the user to open the nonsense file "core.exception.AssertError std/array.d". This error line is followed by a stacktrace, which is in no recognizable format. It should be in the format of e.g. gcc remarks providing additional information for an error, and again provide file/line information so the user can click and see the call stack in the source. Where I want us to be is a place where unittests are considered a part of the build; it should be trivial to set things up such that unittesting is virtually indistinguishable from compilation and linking. This all is relatively easy to implement but might have a large positive impact. Please chime in before I make this into an issue. Anyone would like to take this? Andrei
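Short of compiler support, the Expected/Got formatting described above can be sketched as a small library helper. The name `assertEqual` is made up for illustration; this is not a Phobos API:

```d
import core.exception : AssertError;
import std.format : format;

/// Fails with both values in the message, keeping the
/// "file(line): message" shape that editors already recognize.
void assertEqual(T, U)(T actual, U expected,
                       string file = __FILE__, size_t line = __LINE__)
{
    if (actual == expected)
        return;
    throw new AssertError(format("Expected: %s, Got: %s", expected, actual),
                          file, line);
}

unittest
{
    assertEqual(1 + 1, 2);  // passes silently
    // assertEqual([158, 234], [158, 234, 3]) would fail with a
    // message carrying both the expected and the actual value.
}
```

The compiler-assisted approach would go further by reproducing the failing expression itself, which a library helper cannot see.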
Mar 31 2015
On 2015-03-31 23:12, Atila Neves wrote:I actually thought about the whole "it should fail to build if any of the unit tests fail" idea 2 or 3 weeks ago, so this sounds good. WRT to the error messages and their recognition by text editors, a _massive_ improvement would be compiler-assisted formatting of the assertion errors. This: core.exception.AssertError foo.d(2): Assertion failure Is not useful when I wrote `assert(foo == 2)`. This, however, is: tests.encode.testEncodeMoreThan8Bits: tests/encode.d:166 - Expected: [158, 234, 3] tests/encode.d:166 - Got: [158, 234] In Python, my favourite testing framework is py.test. It reflects on the test code itself and replaces `assert foo == 2` with its own code so that it looks like this in the output: def test_foo(): foo = 5I kind of agree, RSpec has similar formatting of failed tests. But I'm leaning towards this being handled by a library. RSpec has a lot of matchers (assertions) and supports custom matchers as well. For example, for associative arrays RSpec will print a diff of the two objects. For example, the following test:

describe 'Foo' do
  it 'bar' do
    { foo: 3, bar: 4, baz: 5 }.should == { foo: 3, bar: 4, baz: 6 }
  end
end

Will print the following failures:

Failures:

  1) Foo bar
     Failure/Error: { foo: 3, bar: 4, baz: 5 }.should == { foo: 3, bar: 4, baz: 6 }
       expected: {:foo=>3, :bar=>4, :baz=>6}
            got: {:foo=>3, :bar=>4, :baz=>5} (using ==)
       Diff:
       @@ -1,4 +1,4 @@
        :bar => 4,
       -:baz => 6,
       +:baz => 5,
        :foo => 3,

It also prints the comparison operator used. -- /Jacob Carlborgassert foo == 2 E assert 5 == 2 It also recognises things like `assert x in xs`, which is obviously handy. Since Walter has mentioned the "specialness" of assert before, maybe the compiler could recognise at least the most common kinds and format accordingly (assert ==, assert in, assert is null, assert !is null)?
Mar 31 2015
On 03/31/2015 05:12 PM, Atila Neves wrote:I actually thought about the whole "it should fail to build if any of the unit tests fail" idea 2 or 3 weeks ago, so this sounds good. WRT to the error messages and their recognition by text editors, a _massive_ improvement would be compiler-assisted formatting of the assertion errors. This: core.exception.AssertError foo.d(2): Assertion failure Is not useful when I wrote `assert(foo == 2)`. This, however, is: tests.encode.testEncodeMoreThan8Bits: tests/encode.d:166 - Expected: [158, 234, 3] tests/encode.d:166 - Got: [158, 234]Yea, at one point, a whole system of nifty asserts that did just that was created and submitted to Phobos. It was quickly rejected because people said regular assert could, and would, easily be made to do the same thing. That was several years ago and absolutely nothing has happened. We *could've* at least had it in the std library all these years. But the preference was for vaporware. And now we're back to square one with "Whaddya need sometin' like that for anyway?" >_<
Apr 01 2015
On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu wrote:This is a tooling issue.I think D's built-in "unittest" blocks are a mistake. Yes, they are simple and for simple functions and algorithms they work pretty well. However, when you have a big complex project you start having other needs: 1. Named unit-tests, so you can better find what failed 2. Better error messages for assertions 3. Better output to rerun failed tests 4. Setup and teardown hooks 5. Different outputs depending on use case All of this can be done with a library solution. D should have a very good library solution in phobos and it should be encouraged to use that. DMD could even know about this library and have special commands to trigger the tests. The problem is that you can start with "unittest" blocks, but then you realize you need more, so what do you do? You combine both? You can't! I'd say, deprecate "unittest" and write a good test library. You can still provide it for backwards compatibility. By the way, this is the way we do it in Crystal. The source code for the spec library is here, if you need some inspiration: https://github.com/manastech/crystal/tree/master/src/spec . It's just 687 lines long.
Apr 01 2015
On 2015-04-01 20:04, Ary Borenszweig wrote:By the way, this is the way we do it in Crystal. The source code for the spec library is here, if you need some inspiration: https://github.com/manastech/crystal/tree/master/src/spec . It's just 687 lines long.Ahhh, looks like my old buddy RSpec :). Does it do all the fancy things with classes, instances and inheritance, that is, each describe block is a class and each it block is an instance method? -- /Jacob Carlborg
Apr 01 2015
On 4/1/15 3:57 PM, Jacob Carlborg wrote:On 2015-04-01 20:04, Ary Borenszweig wrote:No, it's actually much simpler but less powerful. This is because the language is not as dynamic as Ruby. But we'd like to keep things as simple as possible. But right now you get these things: 1. You can generate many tests in a simple way:

~~~
[1, 2, 3].each do |num|
  ...
  end
end
~~~

2. You get a summary of all the failures and the lines of the specs that failed. Also, you get errors similar to RSpec for matchers. And a command line is printed for each failing spec so you can rerun it separately. These are the most useful RSpec features for me. 3. You can get dots for each spec or the name of the specs (-format option). 4. You can run a spec given its line number or a regular expression for its name. Eventually it will have more features, as the language evolves, but for now this has proven to be very useful :-) Another good thing about it being just a library is that others send pull requests and patches, and this is easier to understand than some internal logic built into the compiler (compiler code is always harder).By the way, this is the way we do it in Crystal. The source code for the spec library is here, if you need some inspiration: https://github.com/manastech/crystal/tree/master/src/spec . It's just 687 lines long.Ahhh, looks like my old buddy RSpec :). Does it do all the fancy things with classes, instances and inheritance, that is, each describe block is a class and each it block is an instance method?
Apr 01 2015
On 2015-04-01 21:28, Ary Borenszweig wrote:No, it's actually much simpler but less powerful. This is because the language is not as dynamic as Ruby. But we'd like to keep things as simple as possible.Can't you implement that using macros?But right now you get these things: 1. You can generate many tests in a simple way:

~~~
[1, 2, 3].each do |num|
  ...
  end
end
~~~

2. You get a summary of all the failures and the lines of the specs that failed. Also, you get errors similar to RSpec for matchers. And a command line is printed for each failing spec so you can rerun it separately. These are the most useful RSpec features for me. 3. You can get dots for each spec or the name of the specs (-format option). 4. You can run a spec given its line number or a regular expression for its name. Eventually it will have more features, as the language evolves, but for now this has proven to be very useful :-) Another good thing about it being just a library is that others send pull requests and patches, and this is easier to understand than some internal logic built into the compiler (compiler code is always harder).This sounds all great. But lowering groups and examples to classes and methods takes it to the next level. -- /Jacob Carlborg
Apr 01 2015
On 4/2/15 3:32 AM, Jacob Carlborg wrote:On 2015-04-01 21:28, Ary Borenszweig wrote:We can. But then it becomes harder to understand what's going on. In RSpec I don't quite understand what's going on really, and I like a bit of magic but not too much of it. In fact with macros it's not that simple because you need to remember the context where you are defining stuff, so that might need adding those capabilities to macros, which will complicate the language.No, it's actually much simpler but less powerful. This is because the language is not as dynamic as Ruby. But we'd like to keep things as simple as possible.Can't you implement that using macros?Somebody also started writing a minitest clone: https://github.com/ysbaddaden/minitest.cr . Implementing a DSL on top of that using regular code or macros should be possible. But right now the features we have are enough.But right now you get these things:This sounds all great. But lowering groups and examples to classes and methods takes it to the next level.
Apr 02 2015
On 2015-04-02 21:11, Ary Borenszweig wrote:We can. But then it becomes harder to understand what's going on. In RSpec I don't quite understand what's going on really, and I like a bit of magic but not too much of it.It's quite straightforward to implement, in Ruby at least. Something like this:

module DSL
  def describe(name, &block)
    context = Class.new(self)
    context.send(:extend, DSL)
    context.instance_eval(&block)
  end

  def it(name, &block)
    send(:define_method, name, &block)
  end
end

class Foo
  extend DSL

  describe 'foo' do
    it 'bar' do
      p 'asd'
    end
  end
end

You also need to register the tests somehow to be able to run them, but this is the basic idea. I cheated here and used a class to start with to simplify the example.In fact with macros it's not that simple because you need to remember the context where you are defining stuff, so that might need adding those capabilities to macros, which will complicate the language.Yeah, I don't really know how macros work in Crystal. -- /Jacob Carlborg
Apr 03 2015
On Wednesday, 1 April 2015 at 18:04:31 UTC, Ary Borenszweig wrote:On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu wrote:Everything you propose can be done with a custom unittest runner, using the builtin unittest blocks. Compile-time reflection + UDAs + unittests is a surprisingly powerful combination, and I don't understand the proposals to make unittest name and such part of the ModuleInfo or provide special compiler support for them. Such an approach is not as scalable, and with compile-time reflection you can do anything you need with the current built-in unittest blocks. The only issue I have with the way unittests are done right now, is the incredibly annoying requirement of having a main function and that main gets called. It makes generic tooling and CI systems much more annoying, as you have to try and guess whether you need to create a fake main() (or pass in -main), and worry about if the code is going to keep running after tests complete.This is a tooling issue.I think D's built-in "unittest" blocks are a mistake. Yes, they are simple and for simple functions and algorithms they work pretty well. However, when you have a big complex project you start having other needs: 1. Named unit-tests, so you can better find what failed 2. Better error messages for assertions 3. Better output to rerun failed tests 4. Setup and teardown hooks 5. Different outputs depending on use case
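As a concrete illustration of the reflection + UDA combination Kapps mentions, here is a sketch. The `Name` UDA is made up for the example; `__traits(getUnitTests, ...)` is the real primitive, and the module must be compiled with -unittest for the blocks to exist:

```d
module named_tests;

import std.stdio : writeln;

struct Name { string value; }   // a plain UDA used to name tests

@Name("addition works")
unittest { assert(1 + 1 == 2); }

@Name("concatenation works")
unittest { assert("a" ~ "b" == "ab"); }

void main()
{
    // Enumerate the module's unittest blocks with compile-time
    // reflection instead of relying on the default runner.
    foreach (test; __traits(getUnitTests, mixin(__MODULE__)))
    {
        string label = __traits(identifier, test);
        foreach (uda; __traits(getAttributes, test))
            static if (is(typeof(uda) == Name))
                label = uda.value;
        writeln("Running: ", label);
        test();   // each unittest block is callable like a function
    }
}
```

Note that with -unittest the default runner still fires before main unless a custom `Runtime.moduleUnitTester` is installed, which is exactly the main-function annoyance described in the post above.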
Apr 01 2015
On Wed, 01 Apr 2015 19:18:52 +0000, Kapps wrote:Everything you propose can be done with a custom unittest runner, using the builtin unittest blocks. Compile-time reflection + UDAs + unittests is a surprisingly powerful combination, and I don't understand the proposals to make unittest name and such part of the ModuleInfo or provide special compiler support for them. Such an approach is not as scalable, and with compile-time reflection you can do anything you need with the current built-in unittest blocks.only if reflection is fully working, which is not the case (see, for example, old bug with member enumeration for packages).
Apr 01 2015
On 2015-04-01 21:18, Kapps wrote:The only issue I have with the way unittests are done right now, is the incredibly annoying requirement of having a main function and that main gets called. It makes generic tooling and CI systems much more annoying, as you have to try and guess whether you need to create a fake main() (or pass in -main), and worry about if the code is going to keep running after tests complete.I just don't compile the module containing the main function. Although I place my tests in a completely separate directory. -- /Jacob Carlborg
Apr 01 2015
On Thursday, 2 April 2015 at 06:33:50 UTC, Jacob Carlborg wrote:On 2015-04-01 21:18, Kapps wrote:Which is okay for running your own code, but the problem is it's not really a good solution for a CI system. I made a plugin for Bamboo to automatically test my dub projects, and essentially just rely on the fact that 'dub test' works. Without that, I would have no way of knowing whether or not there's a main function, whether or not that main function actually does something, etc. Being able to prevent main from being required and running is, for me, the biggest issue with the current unittest system.The only issue I have with the way unittests are done right now, is the incredibly annoying requirement of having a main function and that main gets called. It makes generic tooling and CI systems much more annoying, as you have to try and guess whether you need to create a fake main() (or pass in -main), and worry about if the code is going to keep running after tests complete.I just don't compile the module containing the main function. Although I place my tests in a completely separate directory.
Apr 02 2015
On Wednesday, 1 April 2015 at 18:04:31 UTC, Ary Borenszweig wrote:On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu wrote:I 100% disagree. Having built-in unittest blocks has been a huge win for the language and has greatly improved the quality of the library ecosystem. The value of standardization and availability is tremendous here. The only problem is that development of the feature has stopped halfway and there are still small bits missing here and there. All your requested features can be implemented within the existing unittest feature via a custom runner - while still running tests properly with the default one!This is a tooling issue.I think D's built-in "unittest" blocks are a mistake. Yes, they are simple and for simple functions and algorithms they work pretty well. However, when you have a big complex project you start having other needs: 1. Named unit-tests, so you can better find what failed 2. Better error messages for assertions 3. Better output to rerun failed tests 4. Setup and teardown hooks 5. Different outputs depending on use case All of this can be done with a library solution. D should have a very good library solution in phobos and it should be encouraged to use that. DMD could even know about this library and have special commands to trigger the tests. The problem is that you can start with "unittest" blocks, but then you realize you need more, so what do you do? You combine both? You can't! I'd say, deprecate "unittest" and write a good test library. You can still provide it for backwards compatibility. By the way, this is the way we do it in Crystal. The source code for the spec library is here, if you need some inspiration: https://github.com/manastech/crystal/tree/master/src/spec . It's just 687 lines long.
Apr 01 2015
P.S. I hate all the Ruby testing facilities, hate with bloody passion.
Apr 01 2015
On Wednesday, 1 April 2015 at 19:31:37 UTC, Dicebot wrote:P.S. I hate all the Ruby testing facilities, hate with bloody passion.You're going to _love_ my DConf talk ;) I was expecting that already, you let me know what you thought of them last year! Atila
Apr 01 2015
On Wednesday, 1 April 2015 at 20:48:43 UTC, Atila Neves wrote:On Wednesday, 1 April 2015 at 19:31:37 UTC, Dicebot wrote:Oh yeah, looking forward to listening to it :) I had an unpleasant experience of encountering Cucumber when trying to contribute to dstep so this specific name is like a trigger to me :)P.S. I hate all the Ruby testing facilities, hate with bloody passion.You're going to _love_ my DConf talk ;) I was expecting that already, you let me know what you thought of them last year! Atila
Apr 02 2015
On 4/2/15 1:34 PM, Dicebot wrote:On Wednesday, 1 April 2015 at 20:48:43 UTC, Atila Neves wrote:Having never used Cucumber but having been interested in it, what was the unpleasantness?On Wednesday, 1 April 2015 at 19:31:37 UTC, Dicebot wrote:Oh yeah, looking forward to listening it :) I had an unpleasant experience of encountering Cucumber when trying to contribute to dstep so this specific name is like a trigger to me :)P.S. I hate all the Ruby testing facilities, hate with bloody passion.You're going to _love_ my DConf talk ;) I was expecting that already, you let me know what you thought of them last year! Atila
Apr 02 2015
On Thursday, 2 April 2015 at 20:55:04 UTC, David Gileadi wrote:On 4/2/15 1:34 PM, Dicebot wrote:The very fact of being forced to install some external application (which is not even available in my distro repositories) to run a set of basic tests that could be done with a 10 line D or shell script instead. It is hardly surprising that so far I preferred to submit pull requests without testing instead.On Wednesday, 1 April 2015 at 20:48:43 UTC, Atila Neves wrote:Having never used Cucumber but having been interested in it, what was the unpleasantness?On Wednesday, 1 April 2015 at 19:31:37 UTC, Dicebot wrote:Oh yeah, looking forward to listening to it :) I had an unpleasant experience of encountering Cucumber when trying to contribute to dstep so this specific name is like a trigger to me :)P.S. I hate all the Ruby testing facilities, hate with bloody passion.You're going to _love_ my DConf talk ;) I was expecting that already, you let me know what you thought of them last year! Atila
Apr 02 2015
On Thursday, 2 April 2015 at 20:55:04 UTC, David Gileadi wrote:Having never used Cucumber but having been interested in it, what was the unpleasantness?Dealing with it at work, I find it puts us scarily at the mercy of regexen in Ruby, which is unsettling to say the least. More pressingly, the "plain English" method of writing tests hinders my ability to figure out what the test is actually trying to do. There's not enough structure to give you good visual anchors that are easy to follow, so I end up having to build a mental model of an entire feature file every time I look at it. It's hugely inconvenient. And if I can't remember what a phrase corresponds to, I have to hunt down the implementation and read that anyway, so it's not saving any time or making life any easier. -Wyatt
Apr 02 2015
On 4/2/15 2:46 PM, Wyatt wrote:On Thursday, 2 April 2015 at 20:55:04 UTC, David Gileadi wrote:On 4/2/15 2:32 PM, Dicebot wrote:Having never used Cucumber but having been interested in it, what was the unpleasantness?Dealing with it at work, I find it puts us scarily at the mercy of regexen in Ruby, which is unsettling to say the least. More pressingly, the "plain English" method of writing tests hinders my ability to figure out what the test is actually trying to do. There's not enough structure to give you good visual anchors that are easy to follow, so I end up having to build a mental model of an entire feature file every time I look at it. It's hugely inconvenient. And if I can't remember what a phrase corresponds to, I have to hunt down the implementation and read that anyway, so it's not saving any time or making life any easier. -WyattThe very fact of being forced to install some external application) which is not even available in my distro repositories) to run set of basic tests that could be done with 10 line D or shell script instead. It is hardly surprising that so far I preferred to submit pull requests without testing instead.Thanks to you both for the answers!
Apr 02 2015
On 2015-04-02 23:46, Wyatt wrote:
> Dealing with it at work, I find it puts us scarily at the mercy of
> regexen in Ruby, which is unsettling to say the least. More pressingly,
> the "plain English" method of writing tests hinders my ability to
> figure out what the test is actually trying to do. There's not enough
> structure to give you good visual anchors that are easy to follow, so I
> end up having to build a mental model of an entire feature file every
> time I look at it. It's hugely inconvenient. And if I can't remember
> what a phrase corresponds to, I have to hunt down the implementation
> and read that anyway, so it's not saving any time or making life any
> easier.

At work we're using Turnip [1], which basically is Gherkin (Cucumber) files running on top of RSpec. Best of both worlds, Dicebot ;). It has two big advantages compared to regular Cucumber:

* It doesn't use regular expressions for the steps, just plain strings
* The steps are implemented in modules which are later included where needed. They're not floating around in global space like in Cucumber

We also made some modifications so we have one file with one module matching one scenario, which is automatically included based on the scenario name. This made it possible to have steps that don't interfere with each other. We can have two steps which are identical in two different scenarios, with two different implementations that don't conflict. This also made it possible to take full advantage of RSpec, by creating instance variables that keep the data across steps.

We're also currently experimenting with a gem (I can't recall its name right now) which allows writing the Cucumber steps inline in the RSpec tests, looking like this:

    describe "foobar" do
      Steps "this is a scenario" do
        Given "some kind of setup" do
        end

        When "when something cool happens" do
        end

        Then "something even cooler will happen" do
        end
      end
    end

[1] https://github.com/jnicklas/turnip

-- 
/Jacob Carlborg
Apr 03 2015
On 2015-04-02 22:34, Dicebot wrote:
> Oh yeah, looking forward to listening to it :) I had an unpleasant
> experience of encountering Cucumber when trying to contribute to dstep,
> so this specific name is like a trigger to me :)

Yeah, I don't like how it ended up in DStep. I'm using it completely wrong, I'm just too lazy to fix it. Currently it basically compares two strings, the input and the output. I had some big plans for it, but that never happened.

Currently the only advantage is the runner, i.e. it shows nice output, it's possible to run only a single test, and it doesn't stop all the tests at the first failure. All the things Andrei wants to fix in D, it seems :)

In the case of DStep you can mostly ignore the Cucumber part and just focus on the input and expected files. You can even run the tests without it: just run DStep on the input file and compare the output with the expected file.

-- 
/Jacob Carlborg
Apr 03 2015
On 2015-04-01 21:31, Dicebot wrote:
> P.S. I hate all the Ruby testing facilities, hate with bloody passion.

The unit test framework in the Ruby standard library:

    class FooTest
      def test_foo_bar
        assert 3 == 3
      end
    end

Looks like most other testing frameworks out there to me.

-- 
/Jacob Carlborg
Apr 01 2015
On 2015-04-01 21:25, Dicebot wrote:
> I 100% disagree. Having built-in unittest blocks has been a huge win
> for the language and greatly improved the quality of the library
> ecosystem. The value of standardization and availability is tremendous
> here. The only problem is that development of the feature has stopped
> half way and there are still small bits missing here and there. All
> your requested features can be implemented within the existing unittest
> feature via a custom runner - while still running tests properly with
> the default one!

The unittest block itself could easily have been implemented as a library solution, if we had the following features:

* Trailing delegate syntax
* Executable code at module scope

    module bar;

    unittest("foo bar")
    {
    }

Would be lowered to:

    unittest("foo bar", {
    });

These two features are useful for other things, like benchmarking:

    benchmark
    {
    }

-- 
/Jacob Carlborg
Apr 01 2015
On Thu, 02 Apr 2015 08:37:26 +0200, Jacob Carlborg wrote:
> On 2015-04-01 21:25, Dicebot wrote:
>> I 100% disagree. Having built-in unittest blocks have been a huge win
>> for the language and greatly improved quality of library ecosystem.
>> Value of standardization and availability is tremendous here. Only
>> problem is that development of the feature has stopped half way and
>> there are still small bits missing here and there. All your requested
>> features can be implemented within existing unittest feature via
>> custom runner - while still running tests properly with default one!
>
> The unittest block itself could have been easily implemented as a
> library solution, if we had the following features:
>
> * Trailing delegate syntax
> * Executable code at module scope
>
>     module bar;
>
>     unittest("foo bar")
>     {
>     }
>
> Would be lowered to:
>
>     unittest("foo bar", {
>     });
>
> These two features are useful for other things, like benchmarking:
>
>     benchmark
>     {
>     }

executable code at module scope isn't really necessary, i believe. it's just a little messier with mixin templates, but i believe that it's acceptable:

    mixin template foo(alias fn) {
      static this () {
        import iv.writer;
        writeln("hello! i'm a test!");
        fn();
        writeln("test passed");
      }
    }

    // and do it
    mixin foo!({assert(42);});

    // or with trailing delegate sugar:
    mixin foo!{ assert(42); };

sure, `static this` can do anything we want; registering unittest delegate in some framework, for example.
Apr 01 2015
On Thu, 02 Apr 2015 06:52:14 +0000, ketmar wrote:

p.s. ah, sure, it should be `shared static this()`. %$Y^%
Apr 01 2015
On 2015-04-02 08:52, ketmar wrote:
> executable code at module scope isn't really necessary, i believe.

I think it is, to have the same syntax.

-- 
/Jacob Carlborg
Apr 02 2015
On Wednesday, 1 April 2015 at 18:04:31 UTC, Ary Borenszweig wrote:
> On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu wrote:
>> This is a tooling issue.
>
> I think D's built-in "unittest" blocks are a mistake. Yes, they are
> simple and for simple functions and algorithms they work pretty well.
> However, when you have a big complex project you start having other
> needs:
>
> 1. Named unit-tests, so you can better find what failed
> 2. Better error messages for assertions
> 3. Better output to rerun failed tests
> 4. Setup and teardown hooks
> 5. Different outputs depending on use case
>
> All of this can be done with a library solution. D should have a very
> good library solution in phobos and it should be encouraged to use
> that. DMD could even know about this library and have special commands
> to trigger the tests.

It has been done, several times, by different people. I'm partial to unit-threaded, but then again I would be since I wrote it :)

Atila

> The problem is that you can start with "unittest" blocks, but then you
> realize you need more, so what do you do? You combine both? You can't!
>
> I'd say, deprecate "unittest" and write a good test library. You can
> still provide it for backwards compatibility.
>
> By the way, this is the way we do it in Crystal. The source code for
> the spec library is here, if you need some inspiration:
> https://github.com/manastech/crystal/tree/master/src/spec . It's just
> 687 lines long.
Apr 01 2015
On Wednesday, 1 April 2015 at 18:04:31 UTC, Ary Borenszweig wrote:
> However, when you have a big complex project you start having other
> needs:
>
> 1. Named unit-tests, so you can better find what failed
> 2. Better error messages for assertions
> 3. Better output to rerun failed tests
> 4. Setup and teardown hooks
> 5. Different outputs depending on use case

There are test frameworks for D:
http://wiki.dlang.org/Libraries_and_Frameworks#Unit_Testing_Framework

> I'd say, deprecate "unittest" and write a good test library.

Test frameworks don't conflict with unittest blocks at all. They have different use cases.
Apr 02 2015
On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu wrote:
> In brief: I'd like to transition to a model in which unittesting is
> organically part of the build. After all, you wouldn't want to deploy
> an application that's failing its unittests.

Overall a good idea, Andrei.

I take the opportunity to share a recurring dream of mine which is highly related to your proposal: I would like to have a compiler option for *automatic persistent memoization* of unittests that are inferred to be strongly pure. Please take a moment to think about how your usage of unittests would change if this was available.

Even though D compiles faster than all other languages, big projects (including my single-developer ones) will eventually grow so large that always waiting for all unittests to compile and run will not be bearable to a developer.

The idea of persistent memoization is not new (SCons applies it elegantly to build artifacts). It just hasn't been applied in as many cases as it could be. In theory it's just a matter of hashing all the code and data that a unittest depends upon and using this hash as a memoization key for remembering whether the unittest failed (and perhaps also how) or not.

However, I'm not sure how the memoization keys should be calculated in practice. I do know that the ELF file format contains a BuildID attribute calculated as a SHA-1. Is there any builtin support in ELF for hashing individual functions and data (sections)? Is it at all possible to figure out what code a unittest depends upon?

Please also think about how such a builtin feature would promote establishment and usage of D from a robustness and productivity point of view.
Apr 04 2015
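Nordlöw's persistent-memoization idea can be roughed out in a few lines. The sketch below is in Python purely for illustration and makes strong simplifying assumptions: it hashes only the test body's compiled bytecode (a real D implementation would have to hash everything the test transitively depends on, which is exactly the hard part Nordlöw raises), and the cache file name is invented.

```python
import hashlib, json, os

CACHE = ".unittest_cache.json"  # invented name for this sketch

def _key(fn):
    # Hash only the test body's bytecode; a real implementation would need
    # the hash to cover all code and data the test transitively depends on.
    return hashlib.sha1(fn.__code__.co_code).hexdigest()

def run_memoized(tests):
    cache = {}
    if os.path.exists(CACHE):
        with open(CACHE) as f:
            cache = json.load(f)
    results = {}
    for fn in tests:
        k = _key(fn)
        if k in cache:
            results[fn.__name__] = cache[k]  # unchanged test: skip the run
            continue
        try:
            fn()
            verdict = "pass"
        except AssertionError:
            verdict = "fail"
        cache[k] = results[fn.__name__] = verdict
    with open(CACHE, "w") as f:
        json.dump(cache, f)
    return results

def test_math():
    assert 2 + 2 == 4

print(run_memoized([test_math]))  # {'test_math': 'pass'}
```

On a second run the verdict comes from the on-disk cache, so the test body never executes; editing the test changes its hash and forces a fresh run. This is the part that only works safely for strongly pure tests, since impure tests can change verdict without their code changing.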
On 4/4/15 10:38 AM, "Nordlöw" wrote:
> Please also think about how such a builtin feature would promote
> establishment and usage of D from a robustness and productivity point
> of view.

I do think it's a great idea. Sadly I have negative staff to put to work on it. -- Andrei
Apr 04 2015
On Saturday, 4 April 2015 at 19:00:01 UTC, Andrei Alexandrescu wrote:
> On 4/4/15 10:38 AM, "Nordlöw" wrote:
>> Please also think about how such a builtin feature would promote
>> establishment and usage of D from a robustness and productivity point
>> of view.
>
> I do think it's a great idea. Sadly I have negative staff to put to
> work on it. -- Andrei

What's the reason for that?

- Lack of time?
- Fear of increased complexity that becomes difficult to maintain?
- Difficult to get it exactly right?
- Or is there something fundamentally wrong with the idea itself?
Apr 05 2015
On 4/5/15 4:04 AM, "Nordlöw" wrote:
> On Saturday, 4 April 2015 at 19:00:01 UTC, Andrei Alexandrescu wrote:
>> I do think it's a great idea. Sadly I have negative staff to put to
>> work on it. -- Andrei
>
> What's the reason for that?
>
> - Lack of time?
> - Fear of increased complexity that becomes difficult to maintain?
> - Difficult to get it exactly right?
> - Or is there something fundamentally wrong with the idea itself?

Lack of staff, i.e. people willing and able to put boots on the ground. People following the competent and enthusiastic exchanges in the forums would be shocked to know just how few of those would be willing to actually _do_ anything for D, even the littlest thing. Nicely, that doesn't include you - thanks for contributing. -- Andrei
Apr 05 2015