March 13, 2012
On Tuesday, March 13, 2012 15:04:09 Ary Manzana wrote:
> How can you re-run just a failing test? (without having to run all the previous tests that will succeed?)

You can't, not without essentially creating your own unit testing framework. D's unit testing framework is quite simple. If you compile with -unittest, then the unittest blocks are all run before main is run. If any unittest block in a module fails, then no further unittest blocks in that module are run (though unittest blocks in other modules still are). If any unittest block fails, then main is never run; otherwise, the program continues normally.
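To make that concrete, here's a minimal sketch of the behavior described above (the module, function, and file names are made up for illustration):

// Build and run with:
//   dmd -unittest app.d && ./app
module app;

import std.stdio;

int square(int x) { return x * x; }

unittest
{
    // Runs before main() because the program was compiled with -unittest.
    assert(square(3) == 9);
}

void main()
{
    // Only reached if every unittest block passed.
    writeln("all tests passed, running main");
}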

There is no way to rerun specific unit tests or to have any control whatsoever over which unit tests run, unless you create separate programs which only compile in specific modules so that only the unit tests in those modules are run. And even then, you have no control over _which_ tests in a unittest block run unless you play with version statements or whatnot.
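For example, one way to use version statements for that (the version identifier and function here are purely hypothetical) is to gate a test behind a -version flag:

module example;

long slowSum(long n)
{
    long total = 0;
    foreach (i; 0 .. n)
        total += i;
    return total;
}

unittest
{
    // Always compiled in with -unittest.
    assert(slowSum(10) == 45);
}

version (SlowTests) unittest
{
    // Only compiled in (and therefore only run) when building with
    //   dmd -unittest -version=SlowTests example.d
    assert(slowSum(1_000_000) == 499_999_500_000);
}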

D's unit testing framework works quite well as long as you're willing to always run all of the tests. If you want more control than that, you have to play games if not outright create your own unit testing framework.

- Jonathan M Davis
March 13, 2012
On 03/13/2012 11:49 AM, Andrej Mitrovic wrote:
> On 3/13/12, Ali Çehreli <acehreli@yahoo.com> wrote:
>> Developers wouldn't
>> want that to happen every time a .d file is compiled.
>
> Well luckily unittests don't run when you compile a .d file but when
> you run the app! :)

Good point. :)

Our C++ unit tests are part of a test binary that has a makefile dependency on the library that the .cc files contribute to.

A changed .cc causes its .o to be rebuilt, the .o causes its .a to be rebuilt, the .a causes its unit test application to be rebuilt, and finally the unit test application is executed as part of the library's post-build step.
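As a rough sketch (the file names and rules below are made up, not our actual build scripts), the dependency chain looks something like this in make terms:

# Hypothetical makefile fragment illustrating the chain:
# foo.cc -> foo.o -> libfoo.a -> libfoo_test -> run as post-build step

all: run_tests

run_tests: libfoo_test
	./libfoo_test

libfoo_test: test_main.o libfoo.a
	$(CXX) -o $@ test_main.o libfoo.a

libfoo.a: foo.o bar.o
	$(AR) rcs $@ $^

%.o: %.cc
	$(CXX) -c -o $@ $<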

Ali
March 13, 2012
On Tue, Mar 13, 2012 at 11:28:24AM -0700, Ali Çehreli wrote: [...]
> We are getting a little off topic here but I've been following the recent unit test thread about writing to files. Unit tests should not have external interactions like that either. For example, no test should connect to an actual server to do some interaction.  Developers wouldn't want that to happen every time a .d file is compiled. :) (Solutions like mocks, fakes, stubs, etc. do exist. And yes, I know that they are sometimes non-trivial.)
[...]

It's not about whether unittests should write to files, but about testing a part of the code that operates on files. So you create some test data in the unittest, write it to a file, and then pass that file to the function being tested.

Ideally, this should be done in some kind of tmpfs, which is only accessible to the program, and which is discarded by the OS once the testing is finished.
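In D terms, a minimal sketch of that approach (the function under test and the file name are made up, and I'm using the system temp directory as an approximation of the tmpfs idea) might look like:

import std.file : remove, tempDir, write;
import std.path : buildPath;

// Hypothetical function under test: counts the lines in a file.
size_t countLines(string path)
{
    import std.algorithm : count;
    import std.file : readText;
    return readText(path).count('\n');
}

unittest
{
    // Create throwaway test data under the system temp directory,
    // pass the file to the function, and clean up afterwards.
    auto name = buildPath(tempDir(), "countlines.testdata");
    write(name, "one\ntwo\nthree\n");
    scope(exit) remove(name);

    assert(countLines(name) == 3);
}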

You still have the case of non-trivial test data, though. Sometimes there's a large dataset with a known result that you want to use in a unittest, to ensure that your latest changes didn't break a known non-trivial working case.

I suppose you could argue that these kinds of tests belong in an external test framework, but that's the kind of thing that discourages people from actually writing test cases in the first place. I know I'd be too lazy to do this if it weren't as simple as adding a unittest block to my code. It's just too much trouble to implement an external framework, write the test case in a separate file, and update the build system to include it in the test suite, only to have to scrap much of that effort later when you suddenly realize there's a better algorithm you can use that invalidates most of the original test case. Whereas if it were just an embedded unittest block, you could delete or comment out the block, replace it with a new test relevant to the new code, and keep going.

It seems like a small difference, but having to edit two or three different files just to update a test case, as opposed to continuing to work in the same source file you've been working on, can make the difference between the programmer putting it off till later (which usually means it doesn't get done) and doing it immediately without too much interruption (so test cases stay up-to-date and more thorough).


T

-- 
I'm still trying to find a pun for "punishment"...