Wow! Thanks for an excellent reply! I really appreciate it...



On Tue, Dec 24, 2013 at 11:38 AM, bearophile <bearophileHUGS@lycos.com> wrote:
John Carter:


I also attempted to do it in "Best Practice Test Driven Development" form

I think it's important to write a lot of unittests (and use other kinds of tests: automatic testing as in Haskell, fuzz testing, integration tests, smoke tests, etc.), but I think Test Driven Development is mostly silly. It's better to actually think out a good solution before writing your code. Thinking-Driven Development is better.

Part of the aim of the exercise was to explore that difference.

Conclusions from that exploration?

1) Very fine grained steps resulted in fewer bugs and fewer backtracks.
2) Very fine grained steps resulted in better factored code.
3) Watching others do the same exercise led me to the attitude "adding more test cases is Bad".
4) Very fine grained steps slowed me down compared to my more usual somewhat loose style.
5) Thoughtfully adding Provocative test cases in the order of "What I need to Handle Next" is Very Very Good.
6) Replacing "Fine Grained Steps" with, whenever stuck, pull out a smaller chunk and TDD that first (with Provocative test cases) worked Very Very Well.

Bottom Line?

In future I will be:
* Thoughtfully choosing a minimal number of provocative test cases (ones that direct me towards growing the code correctly).
* Taking the largest steps I can where a single change takes me from test fail to test pass.
* Whenever I misjudge and take too large a step, factoring out a smaller function and testing and developing it first via the same process.
 


   assert( i != 0);

Sometimes you can also add a comment in the assert that explains why this precondition has failed

In the Ruby world the test frameworks (plus some of my own utility modules) give a very rich set of assertions and logging information on assert failures. I can usually just look at the output of a Ruby assert failure in my code and tell you exactly what the bug is.

For this exercise I resolved not to get into that, but ultimately I would love to find a D module that gives that richness of information. Any recommendations?
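For the simple cases I see that assert itself takes an optional message argument, so the precondition can at least explain itself (a one-line sketch; the message wording is just illustrative):

   assert( i != 0, "toRoman: expected a non-zero value");

That is still a long way from Ruby-level diagnostics, though.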

I have not studied the logic of this code, but are you sure toUTF8 is needed to convert Arabic numbers to Roman and the other way around?

I'm storing the roman numerals as dchars. I have the vague hope this is more efficient than storing them as strings. My reasoning has more to do with "That's the way I did it with ASCII and C strings and chars" than profiling or deep knowledge.

Earlier versions used strings, but when I wanted to iterate over a string in fromRoman I converted to dchars.
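By "iterate" I mean walking code points rather than UTF-8 bytes, which foreach will do on its own given a dchar loop variable (a tiny sketch, independent of roman.d):

unittest {
   uint count = 0;
   foreach( dchar c; "MCMLXXXIV")   // a dchar loop variable makes foreach decode the UTF-8 string
      ++count;                      // each c is a whole code point, not a single code unit
   assert( count == 9);
}

For plain-ASCII Roman numerals the bytes and code points coincide anyway, which is part of why I suspect my dchar storage is habit rather than necessity.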

 

         return  toUTF8([digit]) ~ toRomansWithZero( i - value);

More missing UCSF.

Alas, my Google search results are clouded by millions of hits for the University of California San Francisco. (Wow! Google search is getting fast these days... it's already getting a hit on your post!)

What do you mean by "missing UCSF"?
 
Do you have negative tests that check the code raises an AssertError when you give a 0 to the function?

Interestingly enough I did.

In the somewhat ugly form of...
unittest {
   try {
      toRoman( 0);       // should trip toRoman's precondition assert...
      assert( false);    // ...so reaching this line is itself a failure
   } catch( core.exception.AssertError assertError) {
      assert( true);     // expected path: the AssertError was thrown
   }
}

But when I added @safe, dmd said...
roman.d(314): Error: can only catch class objects derived from Exception in @safe code, not 'core.exception.AssertError'

I figured @safe was worth more to me than that test so I discarded it.

I would love a way around that.
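The closest I can see to a way around it (a sketch only, I have not tried it against roman.d) is to leave the module @safe but mark that one unittest @system, and let std.exception.assertThrown do the catching:

@system unittest {
   import std.exception : assertThrown;
   import core.exception : AssertError;

   assertThrown!AssertError( toRoman( 0));   // expect the precondition assert to fire
}

That would keep the negative test without giving up @safe everywhere else.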

In Ruby I tend to have several flavours of AssertError, for example PreconditionFailure, PostConditionFailure and InvariantFailure.
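Nothing stops me doing something similar in D, I think, so long as they derive from Error rather than Exception (a sketch; these class names are mine, not anything in Phobos):

class PreconditionFailure : Error {
   this( string msg, string file = __FILE__, size_t line = __LINE__) {
      super( msg, file, line);      // Error already carries message, file and line
   }
}
// ...and likewise PostConditionFailure and InvariantFailure, thrown from my own
// precondition/postcondition helpers rather than from bare asserts.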

   return array(map!((a) => romanToDigit(a))(roman));

That's quite bad code. Better:

    return roman.map!romanToDigit.array;


That certainly looks nicer; alas, dmd is unhappy...
dmd -unittest -main roman.d && ./roman
/usr/include/dmd/phobos/std/algorithm.d(425): Error: function roman.romanToDigit is not accessible from module algorithm
/usr/include/dmd/phobos/std/algorithm.d(459): Error: function roman.romanToDigit is not accessible from module algorithm
/usr/include/dmd/phobos/std/algorithm.d(411): Error: template instance std.algorithm.MapResult!(romanToDigit, string) error instantiating
roman.d(232):        instantiated from here: map!string
roman.d(232): Error: template instance std.algorithm.map!(romanToDigit).map!string error instantiating

As a halfway point I have taken it to...
/// Convert string of char code points to array of UTF32 so we can have random access.
pure immutable uint[] romanToDigitArray( in string roman)
{
   return roman.map!((a) => romanToDigit(a)).array;
}

Xinok <xinok@live.com> said...

There is a series of unittests which could be merged together. Keep the line comments to keep different tests separate, but there's no need to have multiple unittests which contain nothing more than a few simple asserts. Or, as bearophile suggested, use an in-out table.

Actually I do have a reason for keeping them separate.

I have been taking on board what books like "xUnit Test Patterns" by Gerard Meszaros and various videos by J.B. Rainsberger say.

I used to have long "Ramble On" unit tests, but found from painful experience that the above authors are dead right.

Ramble-on tests are fragile, hard to maintain, and they destroy defect localization.

One of the prime values of unit testing is that, unlike an integration test, a unit test failure in a well-designed suite of unit tests tells you exactly where the defect is.

Rainsberger talks about having "Only one reason for a test to fail, only one test to fail if you have a defect". I'm not quite sure how to achieve that.

My main gripe with my current set of tests is that I have too many. Each test should pin exactly one behaviour of the problem, and I should have only one test for each behaviour.

In terms of making them data-driven, it comes back to what I was saying about needing a richer palette of asserts and better feedback from an assert failure.

If I make them data-driven I will lose track of which test case failed.

Ideally I should make them data-driven AND use a more informative assert facility that tells me what the expression was and what the values of the variables that went into that expression were.
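Something like this is what I have in mind (a sketch; the table values are illustrative and I'm assuming fromRoman takes a string):

unittest {
   import std.conv : text;

   auto cases = [ "I":1, "IV":4, "IX":9, "XL":40 ];   // in-out table; values are illustrative
   foreach( roman, expected; cases) {
      auto got = fromRoman( roman);
      assert( got == expected,
              text( "fromRoman(\"", roman, "\") == ", got, ", expected ", expected));
   }
}

That is data-driven, but the message still names the failing row, so defect localization survives.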

// Test the 'I''s
unittest {

Some unittest systems allow you to put that comment "Test the 'I''s" in a more active role, so it's shown on screen when the relevant unittest fails.

Also, I suggest adding some "ddoc unittests". It's a feature recently added to D.

I don't quite understand this comment...

I thought it meant just putting an extra ddoc '/' on the comment (making it '///'), but nothing new happened, either on a test failure or in the "dmd -D" output (version v2.064).
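My best guess now (an untested sketch) is that the unittest itself needs the ddoc comment and has to sit immediately after the documented declaration, so that dmd -D can fold its body into that declaration's Examples section:

/// Convert 4 to "IV" (a toy stand-in so this snippet stands alone).
string toRomanToy( int i) {
   return i == 4 ? "IV" : "?";
}

/// This documented unittest should show up as an Example for toRomanToy in the generated docs.
unittest {
   assert( toRomanToy( 4) == "IV");
}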

A revised version, with most of these comments worked in, is now up at...
https://bitbucket.org/JohnCarter/roman/src/0362f3626c6490490136b097f23c67fc2970c214/roman.d?at=default

Once again, thanks to everyone for fantastic feedback.

--
John Carter
Phone : (64)(3) 358 6639
Tait Electronics                        
PO Box 1645 Christchurch
New Zealand


