December 06, 2015 Re: DMD unittest fail reporting…
Posted in reply to Russel Winder

On 2015-12-05 21:44, Russel Winder via Digitalmars-d wrote:

> For the purposes of this argument, let's ignore crashes or manually executed panics. The issue is the difference in behaviour between assert and Errorf. assert in languages that use it causes an exception and this causes termination, which means execution of other tests does not happen unless the framework makes sure this happens. D unittest does not. Errorf notes the failure and carries on; this is crucially important for good testing using loops.

I think that the default test runner is completely broken in that it terminates the complete test suite if a test fails, although I do think it should terminate the rest of the test that failed. I also don't think one should test using loops.

> Very true, and that is core to the issue here. asserts raise exceptions which, unless handled by the testing framework properly, cause termination. This is at the heart of the problem. For data-driven testing some form of loop is required. The loop must not terminate if all the tests are to run. pytest.mark.parametrize does the right thing, as do normal loops and Errorf. D assert does the wrong thing.

Nothing says that you have to use assert in a unit test ;) I'm not sure what your data looks like or what you're actually testing. But when I had the need to test multiple values, it was either a data structure, in which case I could do one assert for the whole structure, or I used multiple tests.

> I think this is the evidence that proves that the current D testing framework is in need of work to make it better than it is currently.

Absolutely, the built-in support is almost completely broken.

> If a stacktrace is needed the testing framework is inadequate.

I guess it depends on how you write your tests. If you only test a single function which doesn't call anything else, that will work. But as soon as the function you're testing calls other functions, a stack trace is really needed. What do you do when you get a test failure because some exception/assertion is thrown deep inside code you have never seen before, and you have no idea how the execution got there?

> dspecs

I'm not sure if you're referring to my "framework" [1] or this one [2]. But neither of them will catch any exceptions; they behave just like the standard test runner. I would like to implement a custom runner that catches assertions and continues with the next tests, though.

> and specd are

This one seems to only catch "MatchException". So if any other exception is thrown, including assert errors, it will have the same behavior as the standard test runner.

[1] https://github.com/jacob-carlborg/dspec
[2] https://github.com/youxkei/dspecs

--
/Jacob Carlborg
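To make the "one assert for the whole data structure" approach concrete, here is a minimal sketch; `splitCsv` is a hypothetical function under test, not anything from the thread:

```d
import std.array : split;

// Hypothetical function under test.
string[] splitCsv(string line)
{
    return line.split(",");
}

unittest
{
    // The expected data lives in one literal, and a single assert
    // covers the whole structure instead of a loop of asserts.
    assert(splitCsv("foo,bar,baz") == ["foo", "bar", "baz"]);
}
```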
December 06, 2015 Re: DMD unittest fail reporting…
Posted in reply to Chris Wright

On 2015-12-06 00:09, Chris Wright wrote:

> But there are problems with saying that the builtin assert function should show the entire expression with operand values, nicely formatted.
>
> assert has to serve both unittesting and contract programming. When dealing with contract programming and failed contracts, you risk objects being in invalid states. Trying to call methods on such objects in order to provide descriptive error messages is risky. A helpful stacktrace might be transformed into a segmentation fault, for instance. Or an assert error might be raised while attempting to report an assert error.
>
> assert is a builtin function. It's part of the runtime. That puts rather strict constraints on how much it can do. The runtime can't depend on the standard library, for instance, so if you want assert() to include the values that were problematic, the runtime has to include that formatting code. That doesn't seem like a lot on its own, but std.format is probably a couple thousand lines of code. (About 3,000 semicolons, including unittests.)
>
> I would like these nicely formatted messages. I don't think it's reasonably practical to add them to assert. I'll spend some thought on how to implement them outside the runtime, for a testing framework, though I'm not optimistic on a nice API. Catch does it with macros and by parsing C++, and the nearest equivalent in D is string mixins, which are syntactically more complex. Spock does it with a compiler plugin. I know I can do it with strings and string mixins, but that's not exactly going to be a clean API.

Another good use case for AST macros.

--
/Jacob Carlborg
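As an illustration of why the string-mixin route gets ugly fast, here is a rough sketch along those lines (my own, not the code from Chris's paste; `check` is a hypothetical name). It handles only a single `==` comparison; anything richer means actually parsing D expressions:

```d
import std.format : format;
import std.string : indexOf;

// Generate code that restates the expression's source text together
// with the runtime values of both operands. Only `lhs == rhs` is
// supported; anything richer means actually parsing D.
template check(string expr)
{
    enum pos = expr.indexOf("==");
    static assert(pos != -1, "only `lhs == rhs` expressions are supported");
    enum lhs = expr[0 .. pos];
    enum rhs = expr[pos + 2 .. $];
    enum check = "if (!(" ~ expr ~ ")) assert(false, format(\"" ~ expr
        ~ " failed: lhs = %s, rhs = %s\", " ~ lhs ~ ", " ~ rhs ~ "));";
}

unittest
{
    int i = 1, j = 1;
    // The mixin is what lets the generated code see the locals; with
    // j = 2 this would fail with: i == j failed: lhs = 1, rhs = 2
    mixin(check!"i == j");
}
```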
December 06, 2015 Re: DMD unittest fail reporting…
Posted in reply to Russel Winder

On 2015-12-05 12:12, Russel Winder via Digitalmars-d wrote:

> I put it the other way round: why do you want a stack trace from a failure of a unit test? The stack trace tells you nothing about the code under test that the test doesn't already tell you. All you need to know is which tests failed and why. This of course requires power asserts or horrible things like assertEqual and the like to know the state that caused the assertion fail. For me, PyTest is the model system here, along with Spock, and ScalaTest. Perhaps also Catch.

ScalaTest will print a stack trace on failure, at least when I run it from inside Eclipse. So will RSpec, which I'm guessing ScalaTest is modeled after. In RSpec, the default formatter will print a dot for a passed test and an F for a failed test. Then at the end it will print the stack traces for all failed tests.

> Just because some unittests have done something in the past doesn't mean it is the right thing to do. The question is what does the programmer need for the task at hand. Stack traces add nothing useful to the analysis of the test pass or fail.

I guess it depends on how you write your tests. If you only test a single function which doesn't call anything else, that will work. But as soon as the function you're testing calls other functions, a stack trace is really needed. What do you do when you get a test failure because some exception/assertion is thrown deep inside code you have never seen before, and you have no idea how the execution got there?

> I will be looking at dunit, specd and dcheck. The current hypothesis is though that the built in unit test is not as good as it needs to be, or at least could be.

The built-in runner is so bad it's almost broken.

--
/Jacob Carlborg
December 06, 2015 Re: DMD unittest fail reporting…
Posted in reply to Jacob Carlborg

On Sun, 06 Dec 2015 12:11:08 +0100, Jacob Carlborg wrote:
> I also don't think one should test using loops.
Table-based testing is quite handy in a number of circumstances, as long as you're using a framework that makes it viable. Asserts that throw exceptions make it far less viable.
One other reason it works in Go is that Go programmers already have a tradition of laboriously constructing descriptive error messages by hand, due to the lack of stack traces. But since Spock automates that for you, table-based testing would be more viable with Spock than with D's default unittests.
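For comparison, here is a minimal sketch of the Errorf style in D: record failures inside the loop and assert once at the end, so every table row runs. `reverseStr` is just a stand-in for the code under test:

```d
import std.conv : to;
import std.format : format;
import std.range : retro;

// Stand-in for the code under test.
string reverseStr(string s)
{
    return s.retro.to!string;
}

unittest
{
    struct Case { string input, expected; }
    immutable table = [
        Case("", ""),
        Case("a", "a"),
        Case("ab", "ba"),
    ];

    string[] failures;
    foreach (c; table)
    {
        auto got = reverseStr(c.input);
        if (got != c.expected) // record, like Errorf, instead of throwing
            failures ~= format("reverseStr(%s): got %s, expected %s",
                               c.input, got, c.expected);
    }
    // A single assert at the end means one bad row can't hide the rest.
    assert(failures.length == 0, format("%-(%s\n%)", failures));
}
```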
December 07, 2015 Re: DMD unittest fail reporting…
Posted in reply to ZombineDev

On Friday, 4 December 2015 at 19:38:35 UTC, ZombineDev wrote:
> On Friday, 4 December 2015 at 19:00:37 UTC, Russel Winder wrote:
>>[...]
>
> You can look at some of the DUB packages: http://code.dlang.org/search?q=test for more advanced testing facilities.
>
> I, for example, sometimes use dunit (https://github.com/nomad-software/dunit), which has nice test result reporting:
>
>> [...]
>
> By the way, looking at the code, it shouldn't be too hard to write your own test runner: https://github.com/nomad-software/dunit/blob/master/source/dunit/moduleunittester.d?ts=3
It is hard if you want to run individual unit tests.
Atila
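For context on why "write your own runner" stops short of individual tests: the default hook, `Runtime.moduleUnitTester`, only sees one aggregated test function per module through `ModuleInfo`, so single unittest blocks aren't addressable from there. A minimal sketch of such a per-module runner, which at least keeps going after a failure:

```d
import core.runtime : Runtime;
import std.stdio : writefln;

shared static this()
{
    Runtime.moduleUnitTester = function bool()
    {
        bool allPassed = true;
        foreach (m; ModuleInfo)
        {
            if (m is null)
                continue;
            auto test = m.unitTest; // all unittest blocks of the module, fused
            if (test is null)
                continue;
            try
            {
                test();
                writefln("PASS %s", m.name);
            }
            catch (Throwable t) // AssertError included
            {
                writefln("FAIL %s: %s", m.name, t.msg);
                allPassed = false;
            }
        }
        return allPassed;
    };
}
```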
December 07, 2015 Re: DMD unittest fail reporting…
Posted in reply to Chris Wright

On Sunday, 6 December 2015 at 03:23:53 UTC, Chris Wright wrote:
> I quickly hacked up something to make assertions slightly more verbose: http://dpaste.dzfl.pl/f94b6ed80b3a
>
> This can be extended quite a bit without a ton of effort, but it would eventually devolve into fully parsing D using compile-time function execution. Still, Catch can't even handle logical or, so it should be trivial to beat it in terms of quality of error reports. No real hope of matching Spock.
>
> The interface leaves something to be desired:
> mixin enforce!(q{i == j});
>
> Say what you will, C preprocessor macros are very low on syntactic overhead.
>
> The other ways I know of for passing in an expression involve eager evaluation or converting the expression to an opaque delegate. The mixin is required in order to access local variables.
>
> The name "enforce" is obviously not appropriate, and it should ideally have pluggable error reporting mechanisms. But for a first hack, it's not so bad.
>
> I might clean this up and put it on DUB.
I guess you missed the discussions on std.experimental.testing? I thought of doing something like your enforce, but decided it was too ugly and unwieldy. It's the only way to copy what Catch does... but unfortunately it's not as nice to read or write. Like you, I came to the realization that, for once, preprocessor macros made things easier.
I still think that considering all the alternatives, the `should` functions in unit-threaded are the best way to go. Not surprising since I wrote them, but still.
Atila
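For readers who haven't seen unit-threaded, a small usage sketch of the `should` style (the names follow the project's README and may have changed between versions):

```d
import unit_threaded;

unittest
{
    (1 + 1).shouldEqual(2);
    [1, 2, 3].shouldEqual([1, 2, 3]);
    "foo".shouldNotEqual("bar");
    // On failure these throw an exception whose message includes both
    // values, so no hand-written format string is needed.
}
```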