February 11, 2005 Re: Seperating unit tests from programs
Posted in reply to Knud Sørensen

On Fri, 11 Feb 2005 22:40:56 +0100, Knud Sørensen wrote:
> On Thu, 10 Feb 2005 00:13:24 -0800, Walter wrote:
>
>> They're not really meant for including in a released binary. Just an easy way to get them run.
>
> Hi
>
> So far, I have seen that people would like to:
>
> 1) Split the test program from the main program. (Alex)
> 2) Make sure that the tests get executed. (Walter)
> 3) Be able to run single named tests. (Andy)
> 4) Be able to run unit tests on -release programs. (Anders)

And don't forget the desire to run *all* the unittests rather than crash out on the first assert failure.

--
Derek
Melbourne, Australia
February 12, 2005 Re: Seperating unit tests from programs
Posted in reply to Derek

On Sat, 12 Feb 2005 08:53:45 +1100, Derek <derek@psych.ward> wrote:
> On Fri, 11 Feb 2005 22:40:56 +0100, Knud Sørensen wrote:
>
>> On Thu, 10 Feb 2005 00:13:24 -0800, Walter wrote:
>>
>>> They're not really meant for including in a released binary. Just an easy way to get them run.
>>
>> Hi
>>
>> So far, I have seen that people would like to:
>>
>> 1) Split the test program from the main program. (Alex)
>> 2) Make sure that the tests get executed. (Walter)
>> 3) Be able to run single named tests. (Andy)
>> 4) Be able to run unit tests on -release programs. (Anders)
>
> And don't forget the desire to run *all* the unittests rather than crash out on the first assert failure.

Yes, this is very useful for large projects which do nightly builds. Checking unit test output can be part of the build team's duties: they can arrive at work, find all the failures from the build, and throw them at the appropriate person (who would catch them, rethrow them, etc. until it's time to go home).

--
Using Opera's revolutionary e-mail client: http://www.opera.com/m2/
February 12, 2005 Re: Seperating unit tests from programs
Posted in reply to Alex Stevenson

"Alex Stevenson" <ans104@cs.york.ac.uk> wrote in message news:opsl183yzi08qma6@mjolnir.spamnet.local...
> On Sat, 12 Feb 2005 08:53:45 +1100, Derek <derek@psych.ward> wrote:
>
>> On Fri, 11 Feb 2005 22:40:56 +0100, Knud Sørensen wrote:
>>
>>> On Thu, 10 Feb 2005 00:13:24 -0800, Walter wrote:
>>>
>>>> They're not really meant for including in a released binary. Just an easy way to get them run.
>>>
>>> Hi
>>>
>>> So far, I have seen that people would like to:
>>>
>>> 1) Split the test program from the main program. (Alex)
>>> 2) Make sure that the tests get executed. (Walter)
>>> 3) Be able to run single named tests. (Andy)
>>> 4) Be able to run unit tests on -release programs. (Anders)
>>
>> And don't forget the desire to run *all* the unittests rather than crash out on the first assert failure.
>
> Yes, this is very useful for large projects which do nightly builds. Checking unit test output can be part of the build team's duties: they can arrive at work, find all the failures from the build, and throw them at the appropriate person (who would catch them, rethrow them, etc. until it's time to go home).

I see the D unittests as something that a code change must pass in order to be accepted. I can't see checking in code that breaks the unittests; such code should be rejected. If by some accident a change breaks a unittest, it should be backed out.

A complete testing infrastructure would catch more subtle system bugs that could creep in by mistake when a change in one area accidentally causes another to fail. For that system I would want to catch failures, generate nice logs, etc.

Then, if one really wants to control the unittest harness, it isn't hard to do that by hand or by modifying Phobos.
February 12, 2005 Re: Seperating unit tests from programs
Posted in reply to Ben Hinkle

On Fri, 11 Feb 2005 20:46:08 -0500, Ben Hinkle <ben.hinkle@gmail.com> wrote:
> "Alex Stevenson" <ans104@cs.york.ac.uk> wrote in message news:opsl183yzi08qma6@mjolnir.spamnet.local...
>> On Sat, 12 Feb 2005 08:53:45 +1100, Derek <derek@psych.ward> wrote:
>>
>>> On Fri, 11 Feb 2005 22:40:56 +0100, Knud Sørensen wrote:
>>>
>>>> On Thu, 10 Feb 2005 00:13:24 -0800, Walter wrote:
>>>>
>>>>> They're not really meant for including in a released binary. Just an easy way to get them run.
>>>>
>>>> Hi
>>>>
>>>> So far, I have seen that people would like to:
>>>>
>>>> 1) Split the test program from the main program. (Alex)
>>>> 2) Make sure that the tests get executed. (Walter)
>>>> 3) Be able to run single named tests. (Andy)
>>>> 4) Be able to run unit tests on -release programs. (Anders)
>>>
>>> And don't forget the desire to run *all* the unittests rather than crash out on the first assert failure.
>>
>> Yes, this is very useful for large projects which do nightly builds. Checking unit test output can be part of the build team's duties: they can arrive at work, find all the failures from the build, and throw them at the appropriate person (who would catch them, rethrow them, etc. until it's time to go home).
>
> I see the D unittests as something that a code change must pass in order to be accepted. I can't see checking in code that breaks the unittests; such code should be rejected. If by some accident a change breaks a unittest, it should be backed out.

I agree; that's how I see unit tests too. But since multiple code changes may be integrated simultaneously in a multi-developer environment, the unit tests are a necessary first line of defence for catching code which causes problems when a formal build is produced. It's not that developers shouldn't run the unit tests before checking in code (they should!), but running them is also a good first step towards verifying a particular build, since you can't always trust programmers to follow procedure, and lots of code changes interacting can produce unforeseen complications.

> A complete testing infrastructure would catch more subtle system bugs that could creep in by mistake when a change in one area accidentally causes another to fail. For that system I would want to catch failures, generate nice logs, etc.

Of course, unit testing is just the first stage of testing: a set of automatic sanity checks to help determine whether code does what you think it does (later testing should pick up more subtle things, like whether what it does is really what you wanted). Unit testing is great for catching inconsistencies before testing proceeds to more involved stages which tie up test resources (hardware or personnel).

> Then, if one really wants to control the unittest harness, it isn't hard to do that by hand or by modifying Phobos.

True. It is easy enough to do manually, but is it sufficiently useful to warrant automation (as a compiler option or version flag)?

--
Using Opera's revolutionary e-mail client: http://www.opera.com/m2/
February 12, 2005 Re: Seperating unit tests from programs
Posted in reply to Alex Stevenson

"Alex Stevenson" <ans104@cs.york.ac.uk> wrote in message news:opsl2aeynd08qma6@mjolnir.spamnet.local...
> True. It is easy enough to do manually, but is it sufficiently useful to warrant automation (as a compiler option or version flag)?

Sometimes having more compiler switches is more confusing than simply editing dmain2.d to do what you need.
February 12, 2005 Re: Seperating unit tests from programs
Posted in reply to Derek

"Derek" <derek@psych.ward> wrote in message news:hmd6k9c3pmlc$.1q2ml1nysyh0$.dlg@40tude.net...
> And don't forget the desire to run *all* the unittests rather than crash out on the first assert failure.

No problem. There are many options:

1) Instead of using:

    assert(e);

in the unit tests, write:

    myassert(e, "message");

and write the myassert() to log any errors to a suitable log file.

2) Provide a custom implementation of std.asserterror to do the logging.

3) Catch any AssertError exceptions, log them, and proceed with the unit tests:

    unittest
    {
        try
        {
            assert(...);
            ...
        }
        catch (AssertError ae)
        {
            ae.print();
        }
    }

The compiler and language don't care what code is between the { } of the unittest blocks. It can be any valid D code that does whatever you need it to do.
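Option 1 might look like the following minimal sketch. The `myassert` helper, its `failures` counter, and its output format are illustrative assumptions, not part of Phobos; a real version would write to a log file rather than standard output:

```d
import std.stdio;

// Hypothetical helper: records a failure instead of throwing,
// so the remaining checks in the unittest keep running.
int failures = 0;

void myassert(bool condition, char[] message)
{
    if (!condition)
    {
        failures++;
        writefln("unittest failure: %s", message);
    }
}

unittest
{
    myassert(1 + 1 == 2, "basic arithmetic");
    myassert("abc" ~ "def" == "abcdef", "array concatenation");
}
```

Because `myassert` never throws, every check in every unittest block runs, and `failures` holds the total for the whole run.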
February 12, 2005 Re: Seperating unit tests from programs
Posted in reply to Walter

On Fri, 11 Feb 2005 22:19:58 -0800, Walter wrote:
> "Derek" <derek@psych.ward> wrote in message news:hmd6k9c3pmlc$.1q2ml1nysyh0$.dlg@40tude.net...
>> And don't forget the desire to run *all* the unittests rather than crash out on the first assert failure.
>
> No problem. There are many options:
>
> 1) Instead of using:
>
>     assert(e);
>
> in the unit tests, write:
>
>     myassert(e, "message");
>
> and write the myassert() to log any errors to a suitable log file.
>
> 2) Provide a custom implementation of std.asserterror to do the logging.
>
> 3) Catch any AssertError exceptions, log them, and proceed with the unit tests:
>
>     unittest
>     {
>         try
>         {
>             assert(...);
>             ...
>         }
>         catch (AssertError ae)
>         {
>             ae.print();
>         }
>     }
>
> The compiler and language don't care what code is between the { } of the unittest blocks. It can be any valid D code that does whatever you need it to do.

D'oh! Of course! Thanks for these hints, Walter.

--
Derek
Melbourne, Australia
Copyright © 1999-2021 by the D Language Foundation