December 05, 2014
On Fri, 05 Dec 2014 11:53:10 +0000
Chris via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> Now is the right time to confess. I hardly ever use unit tests although it's included (and encouraged) in D. Why? When I write new code I "unit test" as I go along, with
> 
> debug writefln("result %s", result);
ah, that debug code is so annoying. besides, it requires importing std.stdio, which is not always desirable. and... and... and... you can move it to a unittest section and then just run it with rdmd! hee-hoo, now you've got the best of both worlds: you can check that your code works and you have unittests!

just don't tell anyone that your carefully crafted unittests are simple debug statements moved to another place. i do that all the time and have never been caught.
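it can look like this — a tiny sketch, all names made up by me:

```d
import std.algorithm : max, min;

/// clamps `v` into the range [lo, hi] (hypothetical example function).
int clamp(int v, int lo, int hi)
{
    return min(max(v, lo), hi);
}

unittest
{
    // these started life as `debug writefln("result %s", ...)` lines;
    // now they live here and run with `rdmd -unittest -main clamp.d`
    assert(clamp(5, 0, 3) == 3);
    assert(clamp(-1, 0, 3) == 0);
    assert(clamp(2, 0, 3) == 2);
}
```

no std.stdio import needed anymore, and the checks keep running forever instead of being deleted once the code "works".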


December 05, 2014
> The good thing about unit tests is that they tell you when you break existing code.

That's the great thing about unittests, and the reason why I write them. I work on a fairly complex code base, and every now and then a new feature is requested. Implementing a feature involves changing anywhere from several to a dozen modules, and there's no way I could guarantee that the implementation didn't change the behaviour of existing code. I both hate and love it when my `make` compiles but a unittest fails.

>  But you'll realize that soon enough anyway.

This is not good enough for me. Sometimes "soon enough" means a week or two before somebody actually notices the bug in the implementation (again, a very complex project that simply isn't hand-testable), and that's definitely not soon enough, keeping in mind the amount of $$$ wasted into thin air.



On Friday, 5 December 2014 at 11:53:11 UTC, Chris wrote:
> On Friday, 5 December 2014 at 09:27:16 UTC, Paulo  Pinto wrote:
>> On Friday, 5 December 2014 at 02:25:20 UTC, Walter Bright wrote:
>>> On 12/4/2014 5:32 PM, ketmar via Digitalmars-d wrote:
>>>>> http://www.teamten.com/lawrence/writings/java-for-everything.html
>>>> i didn't read the article, but i bet that this is just another article
>>>> about his language of preference and how any other language he tried
>>>> doesn't have X or Y or Z. and those X, Y and Z are something like "not
>>>> being on the market for long enough", "vendor ACME didn't port ACMElib to
>>>> it", "our staff is trained in G but not in M" and so on. boring.
>>>>
>>>
>>> From the article:
>>>
>>> "Most importantly, the kinds of bugs that people introduce most often aren’t the kind of bugs that unit tests catch. With few exceptions (such as parsers), unit tests are a waste of time."
>>>
>>> Not my experience with unittests, repeated over decades and with different languages. Unit tests are a huge win, even with statically typed languages.
>>
>> Yes, but they cannot test everything. GUI code is especially ugly, as it requires UI automation tooling.
>>
>> They do exist, but only enterprise customers are willing to pay for it.
>>
>> This is why WPF has UI automation built-in.
>>
>> The biggest problem with unit tests is managers who want to see shiny reports, like those produced by tools like Sonar.
>>
>> Teams then spend a ridiculous amount of time writing superfluous unit tests just to meet milestone targets.
>>
>> Just because code has tests, doesn't mean the tests are testing what they should. But if they reach the magical percentage number then everyone is happy.
>>
>> --
>> Paulo
>
> Now is the right time to confess. I hardly ever use unit tests although it's included (and encouraged) in D. Why? When I write new code I "unit test" as I go along, with
>
> debug writefln("result %s", result);
>
> and stuff like this. Stupid? Unprofessional? I don't know. It works. I once started to write unit tests only to find out that indeed they don't catch bugs, because you only put into unit tests what you know (or expect) at a given moment (just like the old writefln()). The bugs I, or other people, discover later would usually not be caught by unit tests simply because you write for your own expectations at a given moment and don't realize that there are millions of other ways to go astray. So the bugs are usually due to a lack of imagination or a tunnel vision at the moment of writing code. This will be reflected in the unit tests as well. So why bother? You merely enshrine your own restricted and circular logic in "tests". Which reminds me of maths when teachers would tell us "And see, it makes perfect sense!", yeah, because they laid down the rules themselves in the first place.
>
> The same goes for comparing your output to some "gold standard". The program claims to have an accuracy of 98%. Sure, because you wrote for the gold standard and not for the real world where it drastically drops to 70%.
>
> The good thing about unit tests is that they tell you when you break existing code. But you'll realize that soon enough anyway.

December 05, 2014
On Friday, 5 December 2014 at 12:06:55 UTC, Nemanja Boric wrote:
>> The good thing about unit tests is that they tell you when you break existing code.
>
> That's the great thing about unittests, and the reason why I write them. I work on a fairly complex code base, and every now and then a new feature is requested. Implementing a feature involves changing anywhere from several to a dozen modules, and there's no way I could guarantee that the implementation didn't change the behaviour of existing code. I both hate and love it when my `make` compiles but a unittest fails.
>
>> But you'll realize that soon enough anyway.
>
> This is not good enough for me. Sometimes "soon enough" means a week or two before somebody actually notices the bug in the implementation (again, a very complex project that simply isn't hand-testable), and that's definitely not soon enough, keeping in mind the amount of $$$ wasted into thin air.

Yes, yes, yes. Unit tests can be useful in cases like this. But I don't think that they are _the_ way to cope with bugs. It's more like "stating the obvious", and bugs are hardly ever obvious, else they wouldn't be bugs.

I've read comments in D code on GitHub saying "extend unit test to include XYZ". So it's already been tested, it works, and the extension will never be added, just like the

debug writeln()

disappears after the code has been thoroughly tested. If there's a bug, it's not in the XYZ that has been tested but in the ZYX nobody thought of (or couldn't think of, because it behaves unexpectedly on Windows) :-).

Usually you run standard tests anyway to see if the old stuff still works as expected. Designing unit tests for each module is a bit tedious. And what if you change a function/method? The unit tests will break and you have to write new ones or comment them out. Blah blah blah.

Maybe people expect too much from unit tests. They are just a way to check that the program still works as expected in the most obvious cases. But they are not a debugging tool.
December 05, 2014
On Fri, Dec 05, 2014 at 02:39:07AM +0000, deadalnix via Digitalmars-d wrote:
> On Friday, 5 December 2014 at 02:25:20 UTC, Walter Bright wrote:
[...]
> >From the article:
> >
> >"Most importantly, the kinds of bugs that people introduce most often aren’t the kind of bugs that unit tests catch. With few exceptions (such as parsers), unit tests are a waste of time."
> >
> >Not my experience with unittests, repeated over decades and with different languages. Unit tests are a huge win, even with statically typed languages.
> 
> Well, truth be told, if you don't test, you don't know there is a bug. Therefore there is no bug.

Yeah, back in my C/C++ days, I also thought unittests were a waste of time. But after having been shamed into writing unittests in D ('cos they are just sooo easy to write, I ran out of excuses not to), I started realizing to my horror how many bugs were actually in my code -- all kinds of corner cases that I missed, typos that slipped past compiler checks, etc. More times than I'm willing to admit, I've revised and revised my code to perfection and "proven" (in my head) that it's correct, only to run it and have it fail the unittests because my brain had unconsciously tuned out a big glaring typo staring me right in the face. Had this been in C/C++, the bug wouldn't have been discovered until much later.

That said, though, for unittests to be actually useful, you sometimes need to change your coding style. Certain coding styles don't lend themselves well to unittesting -- for example, deeply-nested loops are very hard to reach into from a unittest, because it may not be immediately obvious how a unittest could trigger a rare, tricky if-condition buried 3 levels inside nested loops. Usually, such code is never actually tested because it's too hard to test -- it's a rare error condition that doesn't happen with good input (and how many times have we succumbed to the temptation of assuming the program is only ever given well-formed input, with disastrous results), too rare to justify the effort of crafting a unittest that would actually trigger it.

This is where range-based component programming becomes an extremely powerful idiom -- separating out the logical parts of a complex piece of code so that there are no longer deeply-nested loops with hard-to-reach conditions, but everything is brought to the forefront where they can be easily verified with simple unittests.
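A trivial made-up illustration of the idea (the names are mine, not from any real code): instead of a nested loop with the interesting condition buried inside, each stage becomes a range that a unittest can poke at directly.

```d
import std.algorithm : equal, filter, joiner, map, splitter;

// the condition that used to hide several loops deep is now a top-level stage
auto longWords(R)(R lines, size_t minLen)
{
    return lines
        .map!(l => l.splitter(' '))        // split each line into words
        .joiner                            // flatten into one stream of words
        .filter!(w => w.length >= minLen); // the once-buried condition
}

unittest
{
    auto input = ["the quick brown", "fox jumps"];
    assert(longWords(input, 5).equal(["quick", "brown", "jumps"]));
    assert(longWords(input, 6).empty); // the rare "nothing matches" case
}
```

Each stage can also be tested on its own, which is the whole point: no condition is out of reach anymore.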

But, some people may not be willing to change the way they think about their coding problem in order to code in a testable way like this. So they may well resort to trying to rationalize away the usefulness of unittests. Well, the loss is their own, as the lack of unittesting will only result in poorer quality of their code, whereas those who are less arrogant will benefit by developing a much better track record of code correctness. :-)


T

-- 
LINUX = Lousy Interface for Nefarious Unix Xenophobes.
December 05, 2014
On Fri, 2014-12-05 at 11:53 +0000, Chris via Digitalmars-d wrote: […]
> indeed they don't catch bugs, because you only put into unit tests what you know (or expect) at a given moment (just like the old writefln()). The bugs I, or other people, discover later would usually not be caught by unit tests simply because you write for your own expectations at a given moment and don't realize that there are millions of other ways to go astray. So the bugs are usually due to a lack of imagination or a tunnel vision at the moment of writing code. This will be reflected in the unit tests as well. So why bother? You merely enshrine your own restricted and circular logic in "tests". Which reminds me of maths when teachers would tell us "And see, it makes perfect sense!", yeah, because they laid down the rules themselves in the first place.
[…]

Developers need to stop thinking "how is this code supposed to work" when it comes to tests and start thinking "how can I break this code". That is how testers and QA work; sadly, developers all too often fail to.

This is particularly relevant for APIs where there is less likely to be a QA team involved, and developers not looking for error cases is why so many APIs are so broken.
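For a concrete flavour of the difference, here is what the "how can I break it" half looks like against Phobos' own std.conv (my choice of target, purely illustrative):

```d
import std.conv : to, ConvException;
import std.exception : assertThrown;

unittest
{
    // The "supposed to work" case everyone writes:
    assert(to!int("42") == 42);

    // The "how can I break this" cases a tester would start with:
    assertThrown!ConvException(to!int(""));       // empty input
    assertThrown!ConvException(to!int("42abc"));  // trailing junk
    assertThrown!ConvException(to!int("99999999999999999999")); // overflow
}
```

The error cases are where an API's contract actually gets exercised; a test suite containing only the first assertion has documented almost nothing.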

One of the failings of TDD is its emphasis on correct cases, with insufficient emphasis on "how can I make this code fail". But that doesn't mean co-development of tests and system is a bad thing. Exactly the opposite: it is a good thing.

So on the one hand I agree with much of your analysis, but I totally disagree with your conclusion. Unit, integration and system tests are essential. They document the usage of code and outline the test coverage and how well the system is likely to work. Even if a system appears to work and yet has no tests, it is totally untrustworthy. Best response to such code is "rm -rf *".

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


December 05, 2014
On Thu, Dec 04, 2014 at 09:03:59PM -0800, Walter Bright via Digitalmars-d wrote:
> On 12/4/2014 6:47 PM, ketmar via Digitalmars-d wrote:
> >and what i also can't grok is "test-driven development". ah, we spent a lot of time writing those tests that we can't even run 'cause we didn't start working on the actual code yet. it's splendid! we didn't start the real work yet and we are already bored. i don't believe that this is a good way to develop a good project.
> 
> What I find most effective is writing the unit tests and the code they drive at the same time.

Yeah, in D, I find that whenever I'm writing a tricky bit of code, I always do an :sp in vim and start adding unittests past the end of the function to record the tricky cases that come to mind. It's proving to be extremely useful in keeping bugs out, because sometimes there are just too many special cases to remember to test afterwards, so if you don't write out the unittests right then and there, you'll probably forget some subtle corner case which will inevitably come back to bite you at the most inconvenient time afterwards.
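For instance, writing a hypothetical midpoint helper (made-up example, not from any real module), the corner cases go in right below the function before I can forget them:

```d
// hypothetical example: the naive (a + b) / 2 overflows for large a and b,
// so compute the midpoint without ever forming a + b. assumes a <= b.
int midpoint(int a, int b)
{
    return a + (b - a) / 2;
}

unittest
{
    assert(midpoint(0, 10) == 5);                           // the obvious case
    assert(midpoint(-10, 10) == 0);                         // straddling zero
    assert(midpoint(int.max - 2, int.max) == int.max - 1);  // no overflow
}
```

The overflow case is exactly the kind of subtlety I'd never remember to test a week later.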

As for TDD... or OOD, or whatever other acronym / bandwagon methodology gets invented every 5 years, I've always been a skeptic. I'm pretty sure the underlying ideas are beneficial -- unittests, thinking of your data in terms of objects, etc. They are useful tools for getting your job done, and done well. But when you start pushing one of them as the be-all and end-all of programming, it ceases being a tool and becomes an ideology shoved down your throat -- Everything Must Be A Class Even When It Only Has Static Methods, You Must Write Tests All Day Before Writing A Single Line Of Code, ad nauseam -- which inevitably results in needlessly convoluted code that isn't actually *better* than more straightforward code, as well as coders who hold the strange belief that by following the proposed magic formula their code will magically become correct, even if they never bothered to *think* about their programming problem.

As Walter once said:

	I've been around long enough to have seen an endless parade of
	magic new techniques du jour, most of which purport to remove
	the necessity of thought about your programming problem.  In the
	end they wind up contributing one or two pieces to the
	collective wisdom, and fade away in the rearview mirror.
	-- Walter Bright


T

-- 
Music critic: "That's an imitation fugue!"
December 05, 2014
On Friday, 5 December 2014 at 12:42:16 UTC, Chris wrote:
> I read some comments in D code on github saying "extend unit test to include XYZ". So it's already been tested, it works and it will never be added, just like the

We require adding test cases to match Phobos changes not because we need to restate the obvious. They are there for regression control, so that the new behavior won't be broken 2 years later by some random change in the compiler or another part of Phobos.
December 05, 2014
On Fri, Dec 05, 2014 at 09:27:15AM +0000, Paulo Pinto via Digitalmars-d wrote:
> On Friday, 5 December 2014 at 02:25:20 UTC, Walter Bright wrote:
[...]
> >From the article:
> >
> >"Most importantly, the kinds of bugs that people introduce most often aren’t the kind of bugs that unit tests catch. With few exceptions (such as parsers), unit tests are a waste of time."
> >
> >Not my experience with unittests, repeated over decades and with different languages. Unit tests are a huge win, even with statically typed languages.
> 
> Yes, but they cannot test everything. GUI code is especially ugly, as it requires UI automation tooling.

I don't think it was ever claimed that unittests tested *everything*. If they did, they'd be *tests*, not merely *unit*tests. :-)

As for GUI code, I've always been of the opinion that it should be coded in such a way as to be fully scriptable. GUIs that can only operate when given real user input have failed from the start, IMO, because not being scriptable also means not being automatable (crippled, in my book), but more importantly, not being auto-testable; you have to hire humans to sit all day repeating the same sequence of mouse clicks just to make sure the latest dev build still works properly. That's grossly inefficient and a waste of the money spent hiring them.
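A sketch of the sort of thing I mean (entirely hypothetical, tied to no real toolkit): route every user action through a command layer, and suddenly a test script can drive the "GUI" without a single mouse click.

```d
import std.exception : assertThrown;

// hypothetical app logic, reachable without any real UI event loop;
// the widget layer would merely translate clicks into dispatch() calls
struct CounterApp
{
    int value;

    void dispatch(string cmd)
    {
        switch (cmd)
        {
            case "increment": ++value;    break;
            case "reset":     value = 0;  break;
            default: throw new Exception("unknown command: " ~ cmd);
        }
    }
}

unittest
{
    CounterApp app;
    foreach (cmd; ["increment", "increment", "reset", "increment"])
        app.dispatch(cmd);
    assert(app.value == 1);
    assertThrown(app.dispatch("explode")); // bad scripts fail loudly too
}
```

The same dispatch layer that makes the app testable also makes it automatable for power users, for free.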


> They do exist, but only enterprise customers are willing to pay for it.

IMO, GUI toolkits that don't have built-in UI automation are fundamentally flawed.


> This is why WPF has UI automation built-in.

Yay! :-D


> The biggest problem with unit tests is managers who want to see shiny reports, like those produced by tools like Sonar.
>
> Teams then spend a ridiculous amount of time writing superfluous unit tests just to meet milestone targets.
> 
> Just because code has tests, doesn't mean the tests are testing what they should. But if they reach the magical percentage number then everyone is happy.
[...]

Hmm...

	void func1(...) { ... }

	unittest {
		// Stating the obvious
		func1();
		assert(1 == 1);
	}

	unittest {
		// If you gotta state it once, better do it twice for
		// double the coverage
		func1();
		assert(2 == 2);
	}

	unittest {
		// And better state it multiple ways just to be sure
		func1();
		assert(2 == 1 + 1);
		// (even though this has nothing to do with func1() at
		// all)
	}

	int func2() { ... }

	unittest {
		// Just in case the == operator stops working
		assert(func2() == func2());
		assert(is(typeof(1)));
	}

	unittest {
		// Just in case commutativity stops working -- hey, they
		// don't work for floats, better make sure they do for
		// ints!
		assert(func2() + 1 == 1 + func2());
		assert(int.max == int.max);
	}

	unittest {
		// Just in case zero stops behaving like zero, y'know.
		assert(func2() * 0 == 0 * func2());
		assert(1*0 == 0);
	}

Welp, I got 3 unittests per function, I guess I must be doing pretty well, eh? Sounds like an awesome idea, I should start writing unittests like this from now on. It's much easier this way, and I'd feel better about my code just from the sheer number of unittests! Hey, at least I'd know it if a compiler bug causes built-in operators and types to stop working!

:-P


T

-- 
You are only young once, but you can stay immature indefinitely. -- azephrahel
December 05, 2014
On Fri, Dec 05, 2014 at 11:53:10AM +0000, Chris via Digitalmars-d wrote: [...]
> The good thing about unit tests is that they tell you when you break existing code.

That's one of the *major* benefits of unittest IMO: prevent regressions.


> But you'll realize that soon enough anyway.

Hahahahahahahahahaha... How I wish that were true!

At my day job, you'd be shocked to know how many times things flat-out break in the nastiest, most obvious ways, yet people DO NOT EVEN NOTICE!!!! QA has developed this bad habit of only testing the feature they were asked to test, and the developers have become complacent over time and blindly trusting that QA has done their job, when in reality test coverage is extremely poor, and changes get merged into the main code repo that cause all sorts of regressions. I've had to fix the SAME bugs over and over again in various varying forms, simply because we have no unittesting framework to sound the alarms when somebody inadvertently broke the code AGAIN, the 100th time, 'cos they didn't understand what the correct behaviour should be.

Fixing regressions is easily 30-40% of my workload, and almost all of those cases could be prevented had there been unittests to catch regressions. How I wish that with every bugfix I submit, I could also submit a unittest to make sure it complains loudly and clearly the next time somebody breaks it yet again!

There are so many corner cases that we fixed over time, that there's no way for QA to practically re-test all of them (plus, without automated tests, how realistically can you do full regression testing anyway?), and I can almost guarantee that many of these bugs will come back as soon as that piece of code gets touched again.

We always have to add new features, many of which involve extensive code changes, but without unittests, we could be introducing hundreds of subtle bugs every time, and, given the rate of new feature merges, we could be covering over most of these subtle bugs because code paths have changed significantly. As a result, most of these bugs become dormant in the code, and only show up again years later when a new code change uncovers that code path once more. By then, so many changes would've already accumulated that we may have forgotten what the old bug really was and what the bugfix should be. It may take multiple tries before that bug gets re-fixed. All of this needless churn could be eliminated just by having unittests catch regressions up-front.
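What I mean concretely (a made-up fix, not from our actual code base): the fix and its alarm bell travel together in the same commit.

```d
import std.string : indexOf, strip;

// hypothetical helper that has been "re-fixed" several times
string stripComment(string line)
{
    auto i = line.indexOf('#');
    // the fix: the no-comment case used to be returned unstripped
    return (i < 0 ? line : line[0 .. i]).strip;
}

unittest
{
    assert(stripComment("value = 1  # note") == "value = 1");
    // regression guard for the exact case that broke before:
    assert(stripComment("  value = 1  ") == "value = 1");
}
```

The next person who "simplifies" the no-comment path gets told about it at build time, not by a customer two years later.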


T

-- 
Debian GNU/Linux: Cray on your desktop.
December 05, 2014
On Friday, 5 December 2014 at 11:53:11 UTC, Chris wrote:
> and stuff like this. Stupid? Unprofessional? I don't know. It works. I once started to write unit tests only to find out that indeed they don't catch bugs, because you only put into unit tests what you know (or expect) at a given moment (just like the old writefln()). The bugs I, or other people, discover later would usually not be caught by unit tests simply because you write for your own expectations at a given moment and don't realize that there are millions of other ways to go astray.

The code can still break even if those expectations are met. Of course, tests catch only regressions, not every possible sort of bug. But when they do catch one, it's really fascinating.