December 31, 2011
> 
> I think that the AssertError's message (which includes the file and line number of the failure) and its stack trace are plenty. It's exactly what you need and nothing else.
> 
> - Jonathan M Davis

I want to have such a summary. What about running only certain unittests?
December 31, 2011
On Saturday, December 31, 2011 11:05:58 Tobias Pankrath wrote:
> > I think that the AssertError's message (which includes the file and line number of the failure) and its stack trace are plenty. It's exactly what you need and nothing else.
> > 
> > - Jonathan M Davis
> 
> I want to have such a summary.

I don't see any reason to put that in the standard library. There's nothing wrong with 3rd party solutions which give additional functionality, but D's unit test framework is designed to be minimalistic, and I don't think that adding anything beyond what it does now in terms of summary makes any sense. The only major issue in that regard IMHO is the fact that no further unittest blocks within a module are run after one fails. Even if it did, I still don't think that a fancier summary would be worth having - especially in the standard library.

If you want that sort of summary, you probably want it printing stuff out on success too, and that definitely goes against how the built-in framework works (since it follows the typical unix approach of failure printing out stuff and success printing nothing). So, I think that that really makes more sense as a 3rd party solution rather than as part of the standard library. And in general, 3rd party solutions are more likely to be customizable in a way you'd like rather than picking a single way of doing things.
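To make the built-in behavior concrete, here is a minimal sketch (the module and function names are made up for illustration); compile and run it with "dmd -unittest sketch.d && ./sketch":

// sketch.d
module sketch;

int twice(int x) { return 2 * x; }

unittest
{
    assert(twice(2) == 4);  // passes: prints nothing
    assert(twice(3) == 7);  // fails: throws a core.exception.AssertError
                            // carrying this file name and line number
}

void main() {}  // druntime runs the unittest blocks above before main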

> What about running only certain unittests?

D's unit test framework isn't designed that way at this point. You need named unit tests for that to really make sense. It could theoretically be added and would be nice, but that would require changes to the language (though fortunately, they would be backwards compatible changes). So, we may see that eventually but not right now. At this point, the closest that you get to that is to unit test each of your modules separately rather than all at once. And actually, even major unit testing frameworks such as JUnit often end up running all of the unit tests within a module/file when you tell them to run a single unit test (probably in part since one unit test can theoretically affect the ones that follow it, and probably in part due to how they're implemented). So, I'm not sure how common being able to really run a single unit test is anyway. It would be a nice addition, and we may get it eventually, but it's not going to happen right now.
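As a hedged sketch of that per-module approach (the file names here are hypothetical): -unittest only affects the modules actually being compiled, so you can restrict which tests run by restricting which modules go into the test binary.

// test_main.d -- stub entry point for testing a single module
//
//   dmd -unittest mymodule.d test_main.d -oftest_mymodule
//   ./test_mymodule
//
// Only mymodule's unittest blocks are compiled in, so only they run.
module test_main;

void main() {}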

- Jonathan M Davis
December 31, 2011
Jonathan M Davis wrote:

> On Saturday, December 31, 2011 11:05:58 Tobias Pankrath wrote:
>> > I think that the AssertError's message (which includes the file and line number of the failure) and its stack trace are plenty. It's exactly what you need and nothing else.
>> > 
>> > - Jonathan M Davis
>> 
>> I want to have such a summary.
> 
> I don't see any reason to put that in the standard library.

I don't see any reason not to put one in the standard library. On the contrary: not having one in the standard library prevents Phobos modules from using an advanced solution.

> If you want that sort of summary, you probably want it printing stuff out on success too, and that definitely goes against how the built-in framework works (since it follows the typical unix approach of failure printing out stuff and success printing nothing).

If I want to run the program only for the unit tests, yes. But this does not touch the question of whether or not to put a unit test framework in the stdlib.

> So, I think that that
> really makes more sense as a 3rd party solution rather than as part of the
> standard library.

Since unit tests found their way into the language, I really see no reason why they are not a fit for the standard library.

What qualifies a library for Phobos?

> And in general, 3rd party solutions are more likely to
> be customizable in a way you'd like rather than picking a single way of
> doing things.

The quality of other standard libraries shouldn't be an upper bound for Phobos.

> D's unit test framework isn't designed that way at this point.
I wouldn't say that D has a unit test framework right now. It merely has a way to run code at startup.

> You need
> named unit tests for that to really make sense. It could theoretically be
> added and would be nice, but that would require changes to the language
> (though fortunately, they would be backwards compatible changes). So, we
> may see that eventually but not right now. At this point, the closest that
> you get to that is to unit test each of your modules separately rather
> than all at once.
That's my point exactly: we are lacking features, and we could have a good library solution for this.

> So, I'm not sure how common being
> able to really run a single unit test is anyway.

I always run all of them, and if one fails, I want to rerun the ones that failed to examine why they failed.

I agree that we shouldn't add complexity to the built-in language unit tests, apart from some of the additions you mention. But we should define how an advanced unit test framework should interoperate with the core feature, and of course they should work together.
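To make the "work together" point concrete, here is one possible shape such a cooperating layer could take. Nothing like this exists in Phobos; check and reportChecks are invented names, and the sketch deliberately stays on top of the built-in runner instead of replacing it:

// checks.d -- hypothetical helpers used inside ordinary unittest blocks
module checks;

import std.array : join;
import std.string : format;

private string[] failures;

// Record a failed condition but let the unittest block keep going.
void check(bool cond, string msg,
           string file = __FILE__, size_t line = __LINE__)
{
    if (!cond)
        failures ~= format("%s(%s): %s", file, line, msg);
}

// Call at the end of a unittest block; asserting here means the built-in
// runner still sees the block as failed.
void reportChecks()
{
    if (failures.length == 0)
        return;
    auto summary = format("%s check(s) failed:\n%s",
                          failures.length, join(failures, "\n"));
    failures = null;
    assert(false, summary);
}

unittest
{
    check(1 + 1 == 2, "arithmetic still works");
    check(2 + 2 == 5, "recorded, but the block keeps running");
    reportChecks();  // throws an AssertError listing every recorded failure
}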

And since it is something many want to use (look at the unit test frameworks in other languages), it does have its place in the stdlib.


PS: My newsreader (the KDE newsreader from Kontact) seems to kill threading. Does anyone know how to change this without changing the newsreader?



December 31, 2011
On 2011-12-31 04:35, Jonathan M Davis wrote:
> On Friday, December 30, 2011 21:38:07 Jacob Carlborg wrote:
>> On 2011-12-30 19:49, Jonathan M Davis wrote:
>>> On Friday, December 30, 2011 13:41:37 Tobias Pankrath wrote:
>>>> I really think it is and will use one for my D code. Since both worlds
>>>> could live together peacefully there is absolutely no reason not to
>>>> include one in phobos.
>>>
>>> It's one thing to use a fancier framework on your own. It's quite
>>> another to put it in the standard library. D's unit testing framework
>>> is designed to be straightforward and simple. On the whole, it does the
>>> job quite well. And once the issue of not running subsequent unittest
>>> blocks within a module after a failure in that module is fixed, I see
>>> no benefit from adding any additional library support. It just
>>> complicates things further.
>>>
>>> - Jonathan M Davis
>>
>> Will that be able to give a proper report of all failed tests in a nice
>> format?
>
> I think that the AssertError's message (which includes the file and line number
> of the failure) and its stack trace are plenty. It's exactly what you need and
> nothing else.
>
> - Jonathan M Davis

Yes, but what happens when there are many failed tests, i.e. many AssertErrors that have been thrown? It will just print them all one after another, and you have to count them yourself if you want to know how many tests failed?

-- 
/Jacob Carlborg
December 31, 2011
On 2011-12-31 11:37, Jonathan M Davis wrote:
> On Saturday, December 31, 2011 11:05:58 Tobias Pankrath wrote:
>>> I think that the AssertError's message (which includes the file and line
>>> number of the failure) and its stack trace are plenty. It's exactly what
>>> you need and nothing else.
>>>
>>> - Jonathan M Davis
>>
>> I want to have such a summary.
>
> I don't see any reason to put that in the standard library. There's nothing
> wrong with 3rd party solutions which give additional functionality, but D's
> unit test framework is designed to be minimalistic, and I don't think that
> adding anything beyond what it does now in terms of summary makes any sense.
> The only major issue in that regard IMHO is the fact that no further unittest
> blocks within a module are run after one fails. Even if it did, I still don't
> think that a fancier summary would be worth having - especially in the
> standard library.
>
> If you want that sort of summary, you probably want it printing stuff out on
> success too, and that definitely goes against how the built-in framework works
> (since it follows the typical unix approach of failure printing out stuff and
> success printing nothing). So, I think that that really makes more sense as a
> 3rd party solution rather than as part of the standard library. And in
> general, 3rd party solutions are more likely to be customizable in a way you'd
> like rather than picking a single way of doing things.
>
>> What about running only certain unittests?
>
> D's unit test framework isn't designed that way at this point. You need named
> unit tests for that to really make sense. It could theoretically be added and
> would be nice, but that would require changes to the language (though
> fortunately, they would be backwards compatible changes). So, we may see that
> eventually but not right now. At this point, the closest that you get to that
> is to unit test each of your modules separately rather than all at once. And
> actually, even major unit testing frameworks such as JUnit often end up
> running all of the unit tests within a module/file when you tell them to run a
> single unit test (probably in part since one unit test can theoretically affect
> the ones that follow it, and probably in part due to how they're implemented).
> So, I'm not sure how common being able to really run a single unit test is
> anyway. It would be a nice addition, and we may get it eventually, but it's
> not going to happen right now.
>
> - Jonathan M Davis

It would be possible to implement named unit tests in library code alone. It would not have as nice a syntax as if it were implemented in the language, but it is still possible.
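A rough sketch of how that could look with today's language features; every name here (NamedTest, registerTest, runTests) is invented for illustration rather than taken from an existing library:

// namedtests.d -- hypothetical library-only named tests
module namedtests;

import std.stdio : writefln;

alias void function() TestFunc;
private TestFunc[string] registry;  // test name -> test function

// Called by the NamedTest mixin; can also be called by hand.
void registerTest(string name, TestFunc fn)
{
    registry[name] = fn;
}

// Mix this in next to a test function to register it under a name.
mixin template NamedTest(string name, alias fn)
{
    static this() { registerTest(name, &fn); }
}

// Run a single named test, or every registered test when no name is given.
void runTests(string only = null)
{
    foreach (name, test; registry)
    {
        if (only !is null && name != only)
            continue;
        try { test(); writefln("PASS %s", name); }
        catch (Throwable t) { writefln("FAIL %s: %s", name, t.msg); }
    }
}

// Usage in some other (also hypothetical) module:
//
//   import namedtests;
//   void sortingWorks() { assert(1 + 1 == 2); }
//   mixin NamedTest!("sorting", sortingWorks);
//   void main(string[] args) { runTests(args.length > 1 ? args[1] : null); }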

In Ruby on Rails I run single unit tests all the time. Why would I run all the unit tests, which can take five minutes, when I can just run one unit test that takes a second?

When you're doing test/behavior-driven development (T/BDD), it's certainly nice to be able to run single unit tests, because you run them all the time.

-- 
/Jacob Carlborg
December 31, 2011
On 2011-12-31 11:37, Jonathan M Davis wrote:
> On Saturday, December 31, 2011 11:05:58 Tobias Pankrath wrote:
>>> I think that the AssertError's message (which includes the file and line
>>> number of the failure) and its stack trace are plenty. It's exactly what
>>> you need and nothing else.
>>>
>>> - Jonathan M Davis
>>
>> I want to have such a summary.
>
> I don't see any reason to put that in the standard library. There's nothing
> wrong with 3rd party solutions which give additional functionality, but D's
> unit test framework is designed to be minimalistic, and I don't think that
> adding anything beyond what it does now in terms of summary makes any sense.
> The only major issue in that regard IMHO is the fact that no further unittest
> blocks within a module are run after one fails. Even if it did, I still don't
> think that a fancier summary would be worth having - especially in the
> standard library.

BTW, what would be so wrong if the unit tests for the standard library displayed a nice report when finished?

-- 
/Jacob Carlborg
December 31, 2011
On Saturday, December 31, 2011 16:06:49 Jacob Carlborg wrote:
> On 2011-12-31 11:37, Jonathan M Davis wrote:
> > On Saturday, December 31, 2011 11:05:58 Tobias Pankrath wrote:
> >>> I think that the AssertError's message (which includes the file and line number of the failure) and its stack trace are plenty. It's exactly what you need and nothing else.
> >>> 
> >>> - Jonathan M Davis
> >> 
> >> I want to have such a summary.
> > 
> > I don't see any reason to put that in the standard library. There's nothing wrong with 3rd party solutions which give additional functionality, but D's unit test framework is designed to be minimalistic, and I don't think that adding anything beyond what it does now in terms of summary makes any sense. The only major issue in that regard IMHO is the fact that no further unittest blocks within a module are run after one fails. Even if it did, I still don't think that a fancier summary would be worth having - especially in the standard library.
> 
> BTW, what would be so wrong if the unit tests for the standard library displayed a nice report when finished?

My primary issue here is that I don't think that we should be adding stuff to Phobos which is essentially a new unit test framework on top of the built-in one. If 3rd party stuff wants to do that, fine. But the standard library should use the standard facilities. If the standard facilities aren't sufficient, then they should be improved.

As for a "nice report," I don't see anything wrong with just using the stack traces (which include the file, line number, and error message of the assertion failure). That's all the information that's needed. Anything else is superfluous IMHO. Now, if there were something nicer that could be generally agreed upon and added to druntime such that the standard unit test facilities used it, then fine. I don't see any point to it, but at least in that case, the standard library is still using the standard unit test framework. What I really don't want to see is Phobos essentially building a new unit test framework on top of the existing one. Any issues that need to be addressed with the unit test framework for the standard library should be addressed in the standard framework. Any additional framework stuff should be left to 3rd parties.

- Jonathan M Davis
December 31, 2011
On Saturday, December 31, 2011 15:48:16 Jacob Carlborg wrote:
> Yes, but what happens when there are many failed tests, i.e. many AssertErrors that have been thrown? It will just print them all one after another, and you have to count them yourself if you want to know how many tests failed?

What does the number of failures really matter? You just need to know which ones failed and where. The AssertErrors give you that.

- Jonathan M Davis
December 31, 2011
On Saturday, December 31, 2011 16:04:12 Jacob Carlborg wrote:
> It would be possible to implement named unit tests in library code alone. It would not have as nice a syntax as if it were implemented in the language, but it is still possible.
> 
> In Ruby on Rails I run single unit tests all the time. Why would I run all the unit tests, which can take five minutes, when I can just run one unit test that takes a second?
> 
> When you're doing test/behavior-driven development (T/BDD), it's certainly nice to be able to run single unit tests, because you run them all the time.

Yes. I agree that it would be nice, but for it to be done at all cleanly, the language, compiler, and druntime need to be improved to make it possible. However, at least syntactically, such changes should be completely backwards compatible, so they can be added at a future date. Regardless, I don't think that it's a problem that Phobos should be trying to solve.

- Jonathan M Davis
January 01, 2012
On Sat, Dec 31, 2011 at 2:56 PM, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
> On Saturday, December 31, 2011 16:06:49 Jacob Carlborg wrote:
>> On 2011-12-31 11:37, Jonathan M Davis wrote:
>> > On Saturday, December 31, 2011 11:05:58 Tobias Pankrath wrote:
>> >>> I think that the AssertError's message (which includes the file and line number of the failure) and its stack trace are plenty. It's exactly what you need and nothing else.
>> >>>
>> >>> - Jonathan M Davis
>> >>
>> >> I want to have such a summary.
>> >
>> > I don't see any reason to put that in the standard library. There's nothing wrong with 3rd party solutions which give additional functionality, but D's unit test framework is designed to be minimalistic, and I don't think that adding anything beyond what it does now in terms of summary makes any sense. The only major issue in that regard IMHO is the fact that no further unittest blocks within a module are run after one fails. Even if it did, I still don't think that a fancier summary would be worth having - especially in the standard library.
>>
>> BTW, what would be so wrong if the unit tests for the standard library displayed a nice report when finished?
>
> My primary issue here is that I don't think that we should be adding stuff to Phobos which is essentially a new unit test framework on top of the built-in one. If 3rd party stuff wants to do that, fine. But the standard library should use the standard facilities. If the standard facilities aren't sufficient, then they should be improved.

The counterargument is that the language doesn't really provide a framework - it actually provides anonymous parameterless global functions that will be run before main is invoked if code is compiled with -unittest. That isn't considered a framework in any language I've ever used, but it adds just enough functionality to allow a well-integrated fully-featured library solution. Would making such a library solution part of the standard library really be a problem? I'm mostly ambivalent on this issue because I haven't had time to look closely at the proposed framework, but your argument seems to be that all unittesting functionality needs to be built into the language. I don't think that should be necessary or required.


> As for a "nice report," I don't see anything wrong with just using the stack traces (which include the file, line number, and error message of the assertion failure). That's all the information that's needed. Anything else is superfluous IMHO. Now, if there were something nicer that could be generally agreed upon and added to druntime such that the standard unit test facilities used it, then fine. I don't see any point to it, but at least in that case, the standard library is still using the standard unit test framework. What I really don't want to see is Phobos essentially building a new unit test framework on top of the existing one. Any issues that need to be addressed with the unit test framework for the standard library should be addressed in the standard framework. Any additional framework stuff should be left to 3rd parties.

As I said above, I wouldn't consider what we have to be a framework, but it's definitely enough to build an excellent library solution on top of.

As for a report, the problem is that an assertion error isn't what you want when you're running, say, a continuous integration server (or, say, a pull request tester). What you really want is a detailed explanation of what unittests broke, what the tests were testing, and how the result differed from what was expected. You want to be able to have a reasonable idea of what went wrong *without* having to look at someone else's code and figure out exactly what they're testing every time.
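As a small hedged illustration of that last point (assertEqual is not an existing Phobos symbol, just a sketch): a comparison helper can put the expected and actual values straight into the failure message, which is exactly the kind of detail such a report wants.

// hypothetical comparison helper
import std.string : format;

void assertEqual(T)(T actual, T expected,
                    string file = __FILE__, size_t line = __LINE__)
{
    assert(actual == expected,
           format("%s(%s): expected %s but got %s",
                  file, line, expected, actual));
}

unittest
{
    assertEqual(2 + 2, 4);  // passes silently
    assertEqual(2 + 2, 5);  // fails with "expected 5 but got 4" plus file/line
}

void main() {}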