September 21, 2013
DUnit: Advanced unit testing toolkit.

I've needed this for a project I've been working on, so I created a toolkit that I'm using and happy with. I must thank the community here for helping me with a few issues along the way (mostly due to poor documentation). It uses a lot of compile-time reflection to generate the mocks, which has been very interesting to learn and write (to say the least).
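
Roughly speaking (this isn't DUnit's actual code, just a toy sketch of the general idea), the mock generation boils down to iterating a class's members with __traits and building overrides via a string mixin:

    import std.traits : ReturnType;

    class Greeter
    {
        string greet() { return "hello"; }
    }

    // Toy generator: produce stub overrides for a class's methods.
    // A real mock generator also handles parameters, overloads,
    // Object's own methods, and so on.
    string generateStubs(T)()
    {
        string code;

        foreach (member; __traits(derivedMembers, T))
        {
            static if (is(typeof(__traits(getMember, T, member)) == function))
            {
                code ~= "override "
                    ~ ReturnType!(__traits(getMember, T, member)).stringof
                    ~ " " ~ member ~ "() { return typeof(return).init; }\n";
            }
        }

        return code;
    }

    class MockGreeter : Greeter
    {
        mixin(generateStubs!(Greeter)());
    }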

I think it's useful enough now to release, and it would be nice to receive some guidance as to where it should improve or where it fails spectacularly.

Wikipedia: http://en.wikipedia.org/wiki/Unit_testing

DUnit: https://github.com/kalekold/dunit

See examples and documentation for usage.

Have fun.
September 21, 2013
DUnit has now been added to the DUB registry at http://code.dlang.org/.
September 22, 2013
Have a look at https://github.com/linkrope/dunit, especially at
the "Related Projects".

Until now, my preferred tool for (large-scale) unit testing in D
would be the combination of my dunit framework (of course),
DMocks-revived for mocks, and the 'must' matchers of specd.

How does your toolkit fit in?
September 22, 2013
On Sunday, 22 September 2013 at 13:13:29 UTC, linkrope wrote:
> Have a look at https://github.com/linkrope/dunit, especially at
> the "Related Projects".
>
> Until now, my preferred tool for (large-scale) unit testing in D
> would be the combination of my dunit framework (of course),
> DMocks-revived for mocks, and the 'must' matchers of specd.
>
> How does your toolkit fit in?

I looked at DMocks and specd before I started work on DUnit to see what was out there. I'm not saying that DUnit does anything really different from those two combined, but I'm trying to make it simpler to use and more intuitive.

For example, DMocks uses an intermediary object to handle the mocks. This, I thought, was a bit strange, as this behaviour should be in the mock to begin with. So my first objective was to provide a way of very simply creating a mock object and interacting with that mock object directly. This also fulfilled the secondary objective of moving 'setup' code out of the unit tests, making them easier to read. DUnit also solves a problem that DMocks doesn't address: correctly handling the Object base class methods. All methods can fall through to parent implementations or be replaced at runtime.
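
As a rough illustration of that behaviour (a hand-written sketch, not DUnit's generated code), each mocked method either falls through to the parent implementation or is replaced at runtime:

    class Greeter
    {
        string greet() { return "hello"; }
    }

    class GreeterMock : Greeter
    {
        string delegate() greetReplacement;

        override string greet()
        {
            // use the runtime replacement if one was supplied,
            // otherwise fall through to the parent implementation
            return greetReplacement !is null ? greetReplacement() : super.greet();
        }
    }

    unittest
    {
        auto mock = new GreeterMock;
        assert(mock.greet() == "hello"); // falls through to Greeter

        mock.greetReplacement = delegate() { return "replaced"; };
        assert(mock.greet() == "replaced"); // replaced at runtime
    }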

specd is a nice approach to defining constraints, but again it seems overkill for something that should be simple. I don't do anything different; I just do it in a different way.

    specd: 1.must.be.greater_than(0);
    dunit: 1.assertGreaterThan(0);

The reason I've gone with just providing more specific assert methods is that I can create nice, helpful error messages when things go wrong. For example, this line:

    1.assertEquals(0);

Creates this error:

    +------------------------------------------------------------
    | Failed asserting equal
    +------------------------------------------------------------
    | File: example.d
    | Line: 85
    +------------------------------------------------------------
    | ✓ Expected int: 1
    | ✗ Actual int: 2

This makes debugging what went wrong loads easier. These messages give you so much useful info that you will never go back to only using assert() again.
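
The mechanism is nothing magic; a stripped-down sketch (not DUnit's actual implementation) of an assert helper that captures the caller's file and line looks like this:

    import core.exception : AssertError;
    import std.string : format;

    // Cut-down assertEquals: the real thing builds a much richer report.
    void assertEquals(T, U)(T actual, U expected,
                            string file = __FILE__, size_t line = __LINE__)
    {
        if (actual != expected)
        {
            throw new AssertError(
                format("Failed asserting equal: expected %s, actual %s",
                       expected, actual),
                file, line);
        }
    }

    unittest
    {
        1.assertEquals(1);    // passes
        // 1.assertEquals(0); // would throw, reporting this file and line
    }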
September 22, 2013
On Sunday, 22 September 2013 at 15:54:39 UTC, Gary Willoughby wrote:
> The reason i've gone with just providing more specific assert methods is that i can create nice helpful error message when things go wrong. For example this line:
>
>     1.assertEquals(0);
>
> Creates this error:
>
>     +------------------------------------------------------------
>     | Failed asserting equal
>     +------------------------------------------------------------
>     | File: example.d
>     | Line: 85
>     +------------------------------------------------------------
>     | ✓ Expected int: 1
>     | ✗ Actual int: 2
>
> Making debugging what went wrong loads easier. These messages give you so much useful info that you will never go back to only using assert() again.

Actually that should read:

    +------------------------------------------------------------
    | Failed asserting equal
    +------------------------------------------------------------
    | File: example.d
    | Line: 85
    +------------------------------------------------------------
    | ✓ Expected int: 0
    | ✗ Actual int: 1

But you get the idea. ;)
September 22, 2013
On 2013-09-21 02:40, Gary Willoughby wrote:
> DUnit: Advanced unit testing toolkit.
>
> I've needed this for a project i've been working on so i created a
> toolkit that i'm using and happy with. I must thank the community here
> for helping me with a few issues along the way (mostly due to poor
> documentation). It uses a lot of compile time reflection to generate the
> mocks, which has been very interesting to learn/write (to say the least).
>
> I think it's useful enough now to release and it would be nice to
> perhaps receive some guidance as to where it should improve or fails
> spectacularly.
>
> Wikipedia: http://en.wikipedia.org/wiki/Unit_testing
>
> DUnit: https://github.com/kalekold/dunit
>
> See examples and documentation for usage.
>
> Have fun.

You might want to use "version(unittest)" instead of "debug" for the mocks. I don't know which is better.
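
Something along these lines (just a sketch of the suggestion):

    class Calculator
    {
        int add(int a, int b) { return a + b; }

        // emit mock support only in -unittest builds rather than -debug builds
        version (unittest)
        {
            // mock-generation mixin would go here
        }
    }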

-- 
/Jacob Carlborg
September 23, 2013
On Sunday, 22 September 2013 at 13:13:29 UTC, linkrope wrote:
> Have a look at https://github.com/linkrope/dunit, especially at
> the "Related Projects".
>
> Until now, my preferred tool for (large-scale) unit testing in D
> would be the combination of my dunit framework (of course),
> DMocks-revived for mocks, and the 'must' matchers of specd.

I think it's great to see the D unit testing ecosystem growing. Since it's still relatively small, I think we have a good chance here to create interoperability between the different frameworks.

As I see it, we have:

1. Running unit tests

This is where D shines with the built-in facility for unit tests. However, it suffers a bit from the fact that, if we use assert, it will stop on the first assertion failure, and there is (as far as I've been able to tell) no reliable way to run specific code before or after all the unit tests. If I'm wrong in that assumption, please correct me; that would simplify the spec running for specd.

In specd, the actual code inside the unittest { } sections only collects results, and the reporting is called from a main() supplied by compiling with the version "specrunner" set. I haven't checked to see if your dunit does something similar.
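
Schematically (hypothetical names, not specd's actual code), the pattern is a version-gated entry point that reports after the default runner has executed the unittest blocks:

    version (specrunner)
    {
        void main()
        {
            // With -unittest, the default runner executes all unittest blocks
            // before main is entered, so the collected results can be
            // reported here.
            import std.stdio : writeln;
            writeln("report collected results here");
        }
    }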

2. Asserting results

This varies from the built-in assert() to xUnit-like assertEquals() to the more verbose x.must.equal(y) used in specd.

This could easily be standardized by letting all custom asserts throw an AssertError, though I would prefer to use another exception that encapsulates the expected and actual result, to help with bridging to reporting.
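
A sketch of what such an exception might look like (illustrative names only), deriving from AssertError so existing runners still catch it:

    import core.exception : AssertError;
    import std.conv : text;

    // Carries the expected and actual values so a reporter can format them,
    // while still being catchable as an AssertError by existing runners.
    class ComparisonFailure : AssertError
    {
        string expected;
        string actual;

        this(string expected, string actual,
             string file = __FILE__, size_t line = __LINE__)
        {
            this.expected = expected;
            this.actual = actual;
            super(text("expected ", expected, " but got ", actual), file, line);
        }
    }

    unittest
    {
        try
            throw new ComparisonFailure("1", "2");
        catch (ComparisonFailure e)
            assert(e.expected == "1" && e.actual == "2");
    }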

3. Reporting results

If we have moved beyond basic assert() and use some kind of unit test runner, then we have the ability to report a summary of run tests, and which (and how many) failed.

This is one area where IDE integration would be very nice, and I would very much prefer it if the different unit test frameworks agreed on one standard unit test runner interface, so that the IDE integration problem becomes one of adapting each IDE to one runner interface, instead of adapting each framework to each IDE.

In my experience from the Java and Scala world, the last point is the biggest. Users expect to be able to run unit tests and see the report in whatever standard way their IDE has. In practice this most often means that various libraries pretend to be JUnit when it comes to running tests, because JUnit is supported by all IDEs.

Let's not end up in that situation, but rather work out a common API to run unit tests, and the D unit test community can be the envy of every other unit tester. :)
September 23, 2013
On Monday, 23 September 2013 at 16:40:56 UTC, jostly wrote:
> In specd, the actual code inside the unittest { } sections only collect results, and the reporting is called from a main() supplied by compiling with version "specrunner" set. I haven't checked to see if your dunit do something similar.

I think a more "D-way" approach would be to simply separate tests into logically distinct unittest blocks and run them using static reflection, catching AssertError. That way you get the first failure in a set and then continue to the other sets. Does that make sense?
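
Something along these lines, assuming a compiler with __traits(getUnitTests, ...) support (just a sketch):

    import core.exception : AssertError;
    import std.stdio : writeln;

    // Run each unittest block of a module on its own, catching AssertError
    // so one failing set does not stop the remaining sets from running.
    void runTestsOf(alias mod)()
    {
        foreach (test; __traits(getUnitTests, mod))
        {
            try
            {
                test();
                writeln("PASS: ", __traits(identifier, test));
            }
            catch (AssertError e)
            {
                writeln("FAIL: ", __traits(identifier, test), " - ", e.msg);
            }
        }
    }

    // usage: runTestsOf!(some.module_name)();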
September 23, 2013
On Mon, 2013-09-23 at 18:40 +0200, jostly wrote:
[…]
> I think it's great to see the D unit testing ecosystem growing. Since it's still relatively small, I think we have a good chance here to create interoperability between the different frameworks.

There is also integration and system testing of course, not just unit testing. The same testing framework can generally be used for all forms.

In the Java sphere, JUnit gave way to TestNG exactly because TestNG supports all forms of testing, not just unit testing. Now, though, TestNG is giving way to Spock, which enables elements of BDD as well as TDD.

D has some unit testing capability built in, which is good, but it is also good to have an external testing framework that can do unit, integration and system testing supporting TDD and BDD.

> As I see it, we have:
> 
> 1. Running unit tests
> 
> This is where D shines with the builting facility for unit tests. However, it suffers a bit from the fact that, if we use assert, it will stop on the first assertion failure, and there is (as far as I've been able to tell) no reliable way to run specific code before or after all the unit tests. If I'm wrong on that assumption, please correct me, that would simplify the spec running for specd.

So the built-in is not entirely up to the task of real unit testing?

> In specd, the actual code inside the unittest { } sections only collect results, and the reporting is called from a main() supplied by compiling with version "specrunner" set. I haven't checked to see if your dunit do something similar.
> 
> 2. Asserting results
> 
> Varies from the builtin assert() to xUnit-like assertEquals() to the more verbose x.must.equal(y) used in specd.

In the Scala variant of the JVM arena, ScalaTest mingles really nicely all the classic TDD asserts styles, along with Hamcrest matchers, but also supports the more BDD style test specifications. Spock also does this. Corollary, D must do this.

NB Go is going through all this just now. The built-in unit test capability is minimalist and aimed at testing the Go implementation itself. GoCheck is classic TDD assert style, and there are some candidate BDD styles on the horizon. Will Go beat D to having the capability that the JVM languages already enjoy?

Note that Groovy and the py.test Python test framework dispense with the need for assertEquals and that family of JUnit thingies, in favour of using the built-in assert and catching the AssertionError exception, doing detailed stack analysis and providing very detailed information about the evaluated expression: power asserts. Why should D follow 1990s thinking when there is 2010s thinking that is much better?

> This could easily be standardized by letting all custom asserts throw an AssertError, though I would prefer to use another exception that encapsulates the expected and actual result, to help with bridging to reporting.

See above :-)

> 3. Reporting results
> 
> If we have moved beyond basic assert() and use some kind of unit test runner, then we have the ability to report a summary of run tests, and which (and how many) failed.
> 
> This is one area where IDE integration would be very nice, and I would very much prefer it if the different unit test frameworks agreed on one standard unit test runner interface, so that the IDE integration problem becomes one of adapting each IDE to one runner interface, instead of adapting each framework to each IDE.
> 
> In my experience from the Java and Scala world, the last point is the biggest. Users expect to be able to run unit tests and see the report in whatever standard way their IDE has. In practice this most often means that various libraries pretend to be JUnit when it comes to running tests, because JUnit is supported by all IDEs.
> 
> Let's not end up in that situation, but rather work out a common API to run unit tests, and the D unit test community can be the envy of every other unit tester. :)

Currently, and very sadly, this generally means writing an XML file using the JUnit schema. On the other hand, if D did this, Eclipse, IDEA, NetBeans, etc. would immediately render excellent data displays.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


September 23, 2013
On Monday, 23 September 2013 at 16:40:56 UTC, jostly wrote:
> Let's not end up in that situation, but rather work out a common API to run unit tests, and the D unit test community can be the envy of every other unit tester. :)

You've raised some nice ideas and got me thinking. However, I do think we are missing some way of knowing when unit tests start and stop. I like the built-in unittest blocks, but it would be nice to have something like:

    beforetests
    {
        ...
    }

    aftertests
    {
        ...
    }

to apply code before and after the unit tests have run. These could be used to set up and then execute the reporting environment. I can't think of a way to do this automatically without these constructs.
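
For comparison, module constructors and destructors can bracket a module's tests under the default runner (a rough sketch, and not really the same as dedicated beforetests/aftertests blocks):

    version (unittest)
    {
        import std.stdio : writeln;

        shared static this()
        {
            // runs before this module's unittest blocks under the default runner
            writeln("set up the reporting environment");
        }

        shared static ~this()
        {
            // runs at shutdown, after all the unit tests have finished
            writeln("report results / tear down");
        }
    }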