December 05, 2014
On Friday, 5 December 2014 at 15:49:13 UTC, eles wrote:
> On Friday, 5 December 2014 at 13:56:27 UTC, H. S. Teoh via Digitalmars-d wrote:
>> On Fri, Dec 05, 2014 at 11:53:10AM +0000, Chris via Digitalmars-d wrote:
>
>>
>> At my day job, you'd be shocked to know how many times things flat-out
>> break in the nastiest, most obvious ways, yet people DO NOT EVEN
>> NOTICE!!!! QA has developed this bad habit of only testing the feature
>> they were asked to test, and the developers have become complacent over
>> time and blindly trusting that QA has done their job,
>
> That's not trust or complacency. That's a worldwide conspiracy to ensure well-paid jobs for poor programmers & testers like us...

Programmers get paid for fixing bugs they have introduced themselves. :-)
December 05, 2014
On Fri, Dec 05, 2014 at 03:55:22PM +0000, Chris via Digitalmars-d wrote:
> On Friday, 5 December 2014 at 15:44:35 UTC, Wyatt wrote:
> >On Friday, 5 December 2014 at 14:53:43 UTC, Chris wrote:
> >>
> >>As I said, I'm not against unit tests and I use them where they make sense (difficult output, not breaking existing tested code). But I often don't bother with them when they tell me what I already know.
> >>
> >>assert(addNumbers(1,1) == 2);
> >>
> >>I've found myself in the position where unit tests give me a false sense of security.

This is an example of a poor unittest. Well, maybe *one* such case isn't a bad idea to stick in a unittest block somewhere (to make sure things haven't broken *outright*, but you'd notice that via other channels pretty quickly!). But this is akin to writing a unittest that computes the square root of a number in order to test a function that computes the square root of a number. Either it's already blindingly obvious and you're just wasting time, or the unittest is so complex that it proves nothing (you could be repeating exactly the same bugs as the code itself!).

No, a better way to write a unittest is to approach it from the user's (i.e., caller's) POV. Given this function as a black box, what kind of behaviour do I expect from it? If I give it unusual arguments, will it still give the correct result? It's well-known that most bugs happen on boundary conditions, not in the general case (which is usually easy to get right the first time). So, unittests should mainly focus on boundary and exceptional cases. For example, in testing a sqrt function, I wouldn't waste time testing sqrt(16) or sqrt(65536) -- at most, I'd do just one such case and move on. Most of the testing should be on the exceptional cases, e.g., what happens with sqrt(17) if the function returns an int? That's one case. What about sqrt(1)? sqrt(0)? What happens if you hand it a negative number?
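Something along these lines, say (intSqrt here is just a made-up stand-in for the function under test):

uint intSqrt(uint n)
{
    // Naive reference implementation, purely for illustration.
    uint r = 0;
    while (cast(ulong)(r + 1) * (r + 1) <= n)
        ++r;
    return r;
}

unittest
{
    // One "obvious" case is plenty:
    assert(intSqrt(16) == 4);

    // The interesting tests are the boundary and awkward cases:
    assert(intSqrt(0) == 0);
    assert(intSqrt(1) == 1);
    assert(intSqrt(2) == 1);
    assert(intSqrt(17) == 4);             // non-perfect square truncates
    assert(intSqrt(uint.max) == 65535);   // largest input doesn't overflow
}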


> >Sure, you need to test the obvious things,
> 
> Everywhere? For each function? It may be desirable, but it's hard to maintain.  Also, unit tests break when you change the behavior of a function, and then you have to redesign the unit test for that particular function. I prefer unit tests for bigger chunks.

Usually, I don't even bother unittesting a function unless it's generic enough that I know it won't drastically change over time. It's when I start factoring out code in generic form that I really start working on the unittests. When I'm still in the experimental / exploratory stage, I'll throw in some tests to catch boundary conditions, but I won't spend too much time on that. Most of the unittests should be aimed at preserving certain guarantees -- e.g., math functions should obey certain identities even around boundary values, API functions should always behave according to what external users would expect, etc. But for internal functions that are subject to a lot of change, I wouldn't do much more than stick in a few things that I know might be problematic (usually while writing the code itself). Any cases not caught by this will be caught at the API boundary when something starts failing API guarantees.
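To give a concrete (if simplistic) sketch of what I mean by guarantee-style tests, using std.math:

import std.math : sqrt, abs, isNaN;

unittest
{
    // Identities that should hold even around the boundaries, rather than
    // pinning down any one particular input/output pair:
    foreach (x; [0.0, 1.0, 2.0, 1e-300, 1e300])
        assert(abs(sqrt(x) * sqrt(x) - x) <= x * 1e-10);

    assert(sqrt(0.0) == 0.0);
    assert(isNaN(sqrt(-1.0)));   // domain error surfaces as NaN, not garbage
}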

Besides these, I'd add a unittest for each bug I fix -- for regression control.
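In D that usually means just dropping one more assert into the relevant unittest block, with a comment pointing back at the bug (the bug here is invented, of course):

unittest
{
    import std.conv : to, ConvException;
    import std.exception : assertThrown;

    // Regression test for a (hypothetical) past bug: trailing junk after
    // the digits used to be silently ignored instead of rejected.
    assertThrown!ConvException("42abc".to!int);
}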

I'm not afraid of outright deleting unittests if the associated function has basically been gutted and rewritten from scratch, and if said unittests are mostly concerned with implementation details. The ones concerned with overall behaviour are kept. This is another reason it's better to put the unittest effort at the API level rather than on overly white-box-dependent parts, since the latter are subject to frequent revision.


> >but I find the real gains come from being able to verify the behaviour of edge cases and pathological input; and, critically, ensuring that that behaviour doesn't change as you refactor.  (My day job involves writing and maintaining legacy network libraries and parsers in pure C.  D's clean and easy unit tests would be a godsend for me.)
> >
> >-Wyatt
> 
> True, true. Unfortunately, the edge cases are usually spotted when using the software, not in unit tests. They can be added later, but new pathological input keeps coming up (especially if you write for third-party software).

I guess it depends on the kind of application you write, but when writing unittests I tend to focus on the ways the code could break, rather than on how it might work. Sure, you won't be able to come up with *all* the cases, and unittests certainly don't guarantee 100% bug-free code, but generally you do catch the most frequent ones, which saves time dealing with the whole cycle of customer reports, bug-fix change orders, QA testing, etc. The ones that weren't caught early will eventually be found in the field, and they get added to the growing body of unittests to prevent future regressions.


> Now don't get me wrong, I wouldn't want to miss unit tests in D, but I use them more carefully now, not everywhere.

As with all things, I'm skeptical of blindly applying some methodology even when it's not applicable or of questionable benefit. So while I definitely recommend D unittests, I wouldn't go so far as to mandate that, for example, every function must have at least 3 test cases, or something like that. And while I do use unittests a lot in D because I find them helpful, I'm skeptical of going all-out TDD. Everything in real life is context- and situation-dependent, and overly zealous rule application usually results in wasted effort for only marginal benefit.


T

-- 
Тише едешь, дальше будешь. (The slower you ride, the farther you'll get.)
December 05, 2014
On Friday, 5 December 2014 at 15:25:19 UTC, Ary Borenszweig wrote:
> This is very true. Especially when mocks come into play: sometimes tests become duplicated code, and every time you make changes in your codebase you have to go and change the expected behaviour of the mocks, which is just tedious and useless.

In my opinion, OOP as a paradigm is quite unfriendly to testing in general. The very need to create mocks is usually an alarm signal.
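To illustrate with a tiny sketch (the names are invented): the mock-driven shape of the code forces every test to rebuild the collaborator's behaviour, whereas passing plain data needs no mock at all.

// Mock-friendly shape: the code under test demands an object, so every
// test has to construct a fake whose behaviour mirrors the real service.
interface PriceService { int priceOf(string item); }

int orderTotal(PriceService svc, string[] items)
{
    int sum = 0;
    foreach (item; items)
        sum += svc.priceOf(item);
    return sum;
}

// Alternative shape: pass the data itself; nothing to mock.
int orderTotal(const int[string] prices, string[] items)
{
    int sum = 0;
    foreach (item; items)
        sum += prices[item];
    return sum;
}

unittest
{
    assert(orderTotal(["tea": 3, "scone": 2], ["tea", "scone", "tea"]) == 8);
}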
December 05, 2014
On Fri, Dec 05, 2014 at 03:11:29PM +0000, Chris via Digitalmars-d wrote: [...]
> be used. All I'm saying is that sometimes unit tests are sold as the be-all-and-end-all of anti-bug design.

I'm not sure where you heard that, but even the name itself should give it away -- it's *unit* testing, not global testing. Even in the best, most ideal case, you can only prove things about that *unit* of code; it says nothing about what happens when you put the units together to form the entire system. There are many ways to put perfectly-functioning components together that result in a malfunctioning system.

Also, while unittests do help to catch many bugs, they're certainly not an "anti-bug" design. There is no such thing! As we all (should) know, there is no such thing as a bug-free system. The best you can do is reduce the total number of bugs; by their very nature, complex systems are far too complex for us to weed out every possible failure. Anyone selling this or that methodology as the be-all and end-all of your bug woes is merely peddling snake oil. :-D


> I think they should be used sensibly not everywhere.

Certainly.


T

-- 
I think Debian's doing something wrong, `apt-get install pesticide', doesn't seem to remove the bugs on my system! -- Mike Dresser
December 05, 2014
On Fri, Dec 05, 2014 at 03:57:14PM +0000, Chris via Digitalmars-d wrote:
> On Friday, 5 December 2014 at 15:49:13 UTC, eles wrote:
> >On Friday, 5 December 2014 at 13:56:27 UTC, H. S. Teoh via Digitalmars-d wrote:
> >>On Fri, Dec 05, 2014 at 11:53:10AM +0000, Chris via Digitalmars-d wrote:
> >
> >>
> >>At my day job, you'd be shocked to know how many times things flat-out break in the nastiest, most obvious ways, yet people DO NOT EVEN NOTICE!!!! QA has developed this bad habit of only testing the feature they were asked to test, and the developers have become complacent over time and blindly trusting that QA has done their job,
> >
> >That's not trust or complacency. That's a worldwide conspiracy to ensure well-paid jobs for poor programmers & testers like us...
> 
> Programmers get paid for fixing bugs they have introduced themselves. :-)

Ah yes, the good ole Make Work Project, subcontracted under Job Securities Inc.

Reminds me of my first job, where there were severe performance problems with the system; we'd go in and find blatantly obvious ways of improving it, like replacing O(n^2) algorithms with trivial O(n) ones, etc. At one point we joked that the sleep()'s were deliberately added to the code so that later, when the customer complained about performance, we could just comment them out one at a time. :-P  Hey, it ensures customers keep coming back to us, right?


T

-- 
It's amazing how careful choice of punctuation can leave you hanging:
December 05, 2014
On 12/5/2014 1:27 AM, Paulo Pinto wrote:
> Just because code has tests, doesn't mean the tests are testing what they
> should. But if they reach the magical percentage number then everyone is happy.

I write unit tests with the goal of exercising every line of code. While one can argue that that doesn't really test what the code is supposed to be doing, my experience is that high coverage percentages strongly correlate with few problems down the road.

December 05, 2014
On 12/5/2014 8:36 AM, H. S. Teoh via Digitalmars-d wrote:
> As with all things, I'm skeptical of blindly applying some methodology
> even when it's not applicable or of questionable benefit.

In general I agree with you, but for unittests, the methodology of using them together with a coverage analyzer to ensure all code paths are executed is extremely effective.

I just found two bugs in dmd's lexer.c merely by writing tests to cover unexercised code.

https://github.com/D-Programming-Language/dmd/pull/4191
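For anyone who hasn't tried that workflow, it's roughly this (file and function names made up):

// cov_demo.d -- compile and run with:
//     dmd -cov -unittest -main cov_demo.d && ./cov_demo
// which writes cov_demo.lst, annotating each line with its execution count.

int classify(int x)
{
    if (x < 0)
        return -1;   // shows up as unexecuted until a negative input is tested
    if (x == 0)
        return 0;
    return 1;
}

unittest
{
    assert(classify(5) == 1);
    assert(classify(0) == 0);
    assert(classify(-3) == -1);   // this is the test that takes the file to 100%
}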
December 05, 2014
On Friday, 5 December 2014 at 20:25:49 UTC, Walter Bright wrote:
> On 12/5/2014 1:27 AM, Paulo Pinto wrote:
>> Just because code has tests, doesn't mean the tests are testing what they
>> should. But if they reach the magical percentage number then everyone is happy.
>
> I write unit tests with the goal of exercising every line of code. While one can argue that that doesn't really test what the code is supposed to be doing, my experience is that high coverage percentages strongly correlate with few problems down the road.

I imagine you haven't seen unit tests written by offshore contractors...

For example, you can have coverage without asserts.
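A sketch of what that looks like in practice (names invented) -- every line executes, the coverage report is green, and nothing is actually verified:

int discount(int price, bool isMember)
{
    int result = price;
    if (isMember)
        result = price * 9 / 10;
    return result;
}

unittest
{
    // Both branches execute, so line coverage is 100%...
    discount(100, true);
    discount(100, false);
    // ...but with no asserts, any result whatsoever would "pass".
}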
December 05, 2014
On 12/5/2014 5:41 AM, H. S. Teoh via Digitalmars-d wrote:
> As for GUI code, I've always been of the opinion that it should be coded
> in such a way as to be fully scriptable. GUIs that can only operate
> when given real user input have failed from the start, IMO, because not
> being scriptable means they're non-automatable (crippled, in my book),
> but more importantly, not auto-testable; you have to hire humans to
> sit all day repeating the same sequence of mouse clicks just to make
> sure the latest dev build is still working properly. That's grossly
> inefficient and a waste of the money spent hiring them.

A complementary approach is to have the UI code call "semantic" routines that live in non-UI code and do all the semantic work. That minimizes the UI code, and hence the testing problem.

Most GUI apps I've seen mix all that code up together.
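Roughly this shape, as a sketch (names invented): the semantic routine knows nothing about the UI, so it can be unittested directly, and the UI layer shrinks to a thin veneer.

// Semantic layer: plain logic, no UI types anywhere, trivially testable.
struct Cart
{
    int[] prices;

    int total() const
    {
        int sum = 0;
        foreach (p; prices)
            sum += p;
        return sum;
    }
}

unittest
{
    Cart c;
    assert(c.total == 0);            // boundary case: empty cart
    c.prices = [300, 150, 50];
    assert(c.total == 500);
}

// UI layer: just translates an event into a semantic call.
// (onCheckoutClicked and showTotal stand in for whatever the toolkit provides.)
void onCheckoutClicked(ref Cart cart)
{
    showTotal(cart.total);
}

void showTotal(int amount) { /* render it somewhere */ }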
December 05, 2014
On Fri, Dec 05, 2014 at 12:35:50PM -0800, Walter Bright via Digitalmars-d wrote:
> On 12/5/2014 8:36 AM, H. S. Teoh via Digitalmars-d wrote:
> >As with all things, I'm skeptical of blindly applying some methodology even when it's not applicable or of questionable benefit.
> 
> In general I agree with you, but for unittests, the methodology of using them together with a coverage analyzer to ensure all code paths are executed is extremely effective.
> 
> I just found two bugs in dmd's lexer.c merely by writing tests to cover unexercised code.
> 
> https://github.com/D-Programming-Language/dmd/pull/4191

Yes, and earlier this week I found a bug in version.c just by trying to increase code coverage with test cases. :-)

The kind of blind application of methodology I had in mind was more along the lines of "OK so I'm required to write unittests for everything, so let's just pad the tests with inane stuff like 1+1==2, or testing that the function's return value equals itself, 'cos it's Friday afternoon and I wanna go home".


T

-- 
"Hi." "'Lo."