December 05, 2014
On Fri, Dec 05, 2014 at 12:44:17PM -0800, Walter Bright via Digitalmars-d wrote:
> On 12/5/2014 5:41 AM, H. S. Teoh via Digitalmars-d wrote:
> >As for GUI code, I've always been of the opinion that it should be coded in such a way as to be fully scriptable. GUIs that can only operate when given real user input have failed from the start IMO, because not being scriptable also means they're non-automatable (crippled, in my book), but more importantly, not auto-testable; you have to hire humans to sit all day repeating the same sequence of mouse clicks just to make sure the latest dev build is still working properly. That's grossly inefficient and a waste of the money spent hiring them.
> 
> A complementary approach is to have the UI code call "semantic" routines that are in non-UI code, and those semantic routines do all the semantic work. That minimizes the UI code, and hence the testing problem.
> 
> Most GUI apps I've seen mixed up all that code together.

Agreed, but that doesn't address the problem of how to test the GUI code itself. Modern GUIs are complicated beasts, even in non-semantic code, and, judging by my admittedly limited experience with GUIs, they sorely need to be more thoroughly tested.

I had to implement a drag-and-drop function in JavaScript once, and the thing was one big convoluted mess, even after excluding the semantic part (which in this case was trivial). It left me really longing for some kind of unittest framework to verify that later code changes wouldn't break that fragile house of cards, but alas, we didn't have any such framework available.
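For what it's worth, Walter's suggestion above is exactly what would have made that testable: pull the semantic part of the drop handler out into plain code with no DOM or mouse events. A minimal D sketch of the idea (the `Item` struct and `reorder` function are hypothetical, not from my actual code):

```d
import std.algorithm : remove;

// Hypothetical list item; in the real app this would be whatever
// the GUI is dragging around.
struct Item { string id; }

// Pure "semantic" routine: move the element at index `from` to
// position `to`. No UI involved, so it's trivially unit-testable.
Item[] reorder(Item[] items, size_t from, size_t to)
{
    auto moved = items[from];
    auto rest = items.dup.remove(from); // copy, then drop the moved item
    return rest[0 .. to] ~ moved ~ rest[to .. $];
}

unittest {
    auto items = [Item("a"), Item("b"), Item("c")];
    // Drag the first item to the end.
    assert(reorder(items, 0, 2) == [Item("b"), Item("c"), Item("a")]);
    // Original is untouched.
    assert(items == [Item("a"), Item("b"), Item("c")]);
}
```

The JS event-wrangling mess still exists, of course, but it shrinks to a thin shell around logic like this.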


T

-- 
Try to keep an open mind, but not so open your brain falls out. -- theboz
December 05, 2014
On Fri, Dec 05, 2014 at 08:43:02PM +0000, paulo pinto via Digitalmars-d wrote:
> On Friday, 5 December 2014 at 20:25:49 UTC, Walter Bright wrote:
> >On 12/5/2014 1:27 AM, Paulo Pinto wrote:
> >>Just because code has tests, doesn't mean the tests are testing what they should. But if they reach the magical percentage number then everyone is happy.
> >
> >I write unit tests with the goal of exercising every line of code. While one can argue that that doesn't really test what the code is supposed to be doing, my experience is that high coverage percentages strongly correlate with few problems down the road.
> 
> I imagine you haven't seen unit tests written by off-shore contractors....
> 
> For example, you can have coverage without asserts.

Exactly!!

	auto func(...) { ... }

	unittest {
		auto x = func(...); // woohoo, I got me some test coverage!
	} // haha, nobody would even notice when it fails!
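For contrast, here is the same shape of test once it actually asserts on the result (`add` is a made-up stand-in for `func`; the point is the asserts, not the function):

```d
// Hypothetical function under test.
int add(int a, int b) { return a + b; }

unittest {
    // Coverage *and* verification: a regression now fails the build
    // instead of passing silently.
    assert(add(2, 3) == 5);
    assert(add(-1, 1) == 0);
}
```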


T

-- 
Questions are the beginning of intelligence, but the fear of God is the beginning of wisdom.
December 06, 2014
On 12/5/2014 12:53 PM, H. S. Teoh via Digitalmars-d wrote:
> The kind of blind application of methodology I had in mind was more
> along the lines of "OK so I'm required to write unittests for
> everything, so let's just pad the tests with inane stuff like 1+1==2, or
> testing that the function's return value equals itself, 'cos it's Friday
> afternoon and I wanna go home".

Sure. Of particular meaninglessness is anything about the mere quantity of tests.

One thing that I absolutely love about D's unittests is they have changed the culture, like ddoc has. Trying to release code with no unittests looks sloppy and unprofessional. It's easy for management to say "hey, where is the doc? Where are the unittests?" Anything is far better than nothing.

December 06, 2014
On 12/5/2014 5:26 AM, H. S. Teoh via Digitalmars-d wrote:
> As Walter once said:
>
> 	I've been around long enough to have seen an endless parade of
> 	magic new techniques du jour, most of which purport to remove
> 	the necessity of thought about your programming problem.  In the
> 	end they wind up contributing one or two pieces to the
> 	collective wisdom, and fade away in the rearview mirror.
> 	-- Walter Bright

I'd like to subscribe to his newsletter!

December 06, 2014
On 12/5/2014 1:01 PM, H. S. Teoh via Digitalmars-d wrote:
> On Fri, Dec 05, 2014 at 08:43:02PM +0000, paulo pinto via Digitalmars-d wrote:
>> For example, you can have coverage without asserts.
>
> Exactly!!
>
> 	auto func(...) { ... }
>
> 	unittest {
> 		auto x = func(...); // woohoo, I got me some test coverage!
> 	} // haha, nobody would even notice when it fails!

It does at least two things:

1. the code doesn't crash

2. the code being tested is reachable (coverage testing can reveal unreachable code, which is a bug)

December 06, 2014
On Fri, Dec 05, 2014 at 04:19:46PM -0800, Walter Bright via Digitalmars-d wrote:
> On 12/5/2014 5:26 AM, H. S. Teoh via Digitalmars-d wrote:
> >As Walter once said:
> >
> >	I've been around long enough to have seen an endless parade of
> >	magic new techniques du jour, most of which purport to remove
> >	the necessity of thought about your programming problem.  In the
> >	end they wind up contributing one or two pieces to the
> >	collective wisdom, and fade away in the rearview mirror.
> >	-- Walter Bright
> 
> I'd like to subscribe to his newsletter!

I heard he runs a subscription service through a mirror; you should try asking the guy behind your mirror to see if he knows anything about it.


T

-- 
MACINTOSH: Most Applications Crash, If Not, The Operating System Hangs
December 06, 2014
On Friday, 5 December 2014 at 13:23:00 UTC, Russel Winder via Digitalmars-d wrote:
> Developers need to stop thinking "how is this code supposed to work"
> when it comes to tests and start thinking "how can I break this code".
> It is how testers and QA work, sadly developers all too often fail to.
>

Yes. This week I met a compiler guy who put it roughly as:
"It is fairly easy to compile correct code. What is difficult is
handling everything else."
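The "how can I break this code" mindset maps directly onto D's test tools. A hedged sketch (the `parseDigit` function is hypothetical) where the failure paths get as many asserts as the happy path, via `std.exception.assertThrown`:

```d
import std.conv : ConvException, to;
import std.exception : assertThrown;

// Hypothetical function: parse a single decimal digit.
int parseDigit(string s)
{
    auto n = s.to!int;              // throws ConvException on non-numbers
    if (n < 0 || n > 9)
        throw new Exception("not a single digit");
    return n;
}

unittest {
    // The easy part: correct input.
    assert(parseDigit("7") == 7);
    // The part testers think about: malformed and out-of-range input.
    assertThrown!ConvException(parseDigit("x"));
    assertThrown(parseDigit("42"));
}
```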

> This is particularly relevant for APIs where there is less likely to be
> a QA team involved, and developers not looking for error cases is why so
> many APIs are so broken.
>

Having QA far away from dev tends to create carelessness among
developers (for understandable social reasons: QA is the QA
department's responsibility, not ours; if something is wrong,
they will tell us).

In my experience, having no QA at all actually works better.
Another approach is to have no QA department, but rather one or
two QA people embedded within the dev team.

> One of the failings of TDD is the emphasis on correct cases,
> insufficient emphasis on "how can I make this code fail". But that
> doesn't mean co-development of tests and system is a bad thing. Exactly
> the opposite, it is a good thing.
>

Yes. In addition to creating fewer bugs, it also tends to produce
more decoupled (and therefore more testable) designs. Decoupled
code has many benefits beyond testability, so that is a good
thing.

> So on the one hand I agree with much of your analysis, but I totally
> disagree with your conclusion. Unit, integration and system tests are
> essential. They document the usage of code and outline the test coverage
> and how well the system is likely to work. Even if a system appears to
> work and yet has no tests, it is totally untrustworthy. Best response to
> such code is "rm -rf *".

+1
December 06, 2014
On Friday, 5 December 2014 at 15:00:15 UTC, Ary Borenszweig wrote:
> On 12/5/14, 8:53 AM, Chris wrote:
>> On Friday, 5 December 2014 at 09:27:16 UTC, Paulo  Pinto wrote:
>>> On Friday, 5 December 2014 at 02:25:20 UTC, Walter Bright wrote:
>>>> On 12/4/2014 5:32 PM, ketmar via Digitalmars-d wrote:
>>
>> Now is the right time to confess. I hardly ever use unit tests although
>> it's included (and encouraged) in D. Why? When I write new code I "unit
>> test" as I go along, with
>>
>> debug writefln("result %s", result);
>>
>> and stuff like this. Stupid? Unprofessional? I don't know. It works.
>
> You should trying writing a compiler without unit tests.

Yes, a compiler is THE textbook case where unittests shine:
feed input, check output, no surprises.
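That "feed input, check output" shape is easy to see even in a toy. A deliberately trivial sketch (the `tokenize` function is invented for illustration, splitting only on whitespace, which no real lexer would settle for):

```d
import std.array : split;

// Toy "lexer": whitespace-separated tokens only.
string[] tokenize(string src)
{
    return src.split;
}

unittest {
    // Feed input, check output -- no mocks, no setup, no surprises.
    assert(tokenize("int x = 1 ;") == ["int", "x", "=", "1", ";"]);
    assert(tokenize("").length == 0);
}
```

Every stage of a real compiler (lex, parse, semantic, codegen) has this same pure input-to-output shape, which is why they test so well.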
December 06, 2014
On Friday, 5 December 2014 at 15:28:36 UTC, Chris wrote:
>> This is very true. Especially when mocks come into play: sometimes tests become duplicated code, and every time you make changes in your codebase you have to go and change the expected behaviour of the mocks, which is just tedious and useless.
>
> Thanks for saying that. That's my experience too, especially when a module is under heavy development with frequent changes.

I second this; too much mocking means a lot of work down the road.
December 06, 2014
On Friday, 5 December 2014 at 15:55:23 UTC, Chris wrote:
> Everywhere? For each function? It may be desirable but hard to maintain. Also, unit tests break when you change the behavior of a function, then you have to redesign the unit test for this particular function. I prefer unit tests for bigger chunks.
>

If the change is intentional, you need to change not only the
test but all call sites. You're lucky that the unit test warned
you about it!

>> but I find the real gains come from being able to verify the behaviour of edge cases and pathological input; and, critically, ensuring that that behaviour doesn't change as you refactor.  (My day job involves writing and maintaining legacy network libraries and parsers in pure C.  D's clean and easy unit tests would be a godsend for me.)
>>
>> -Wyatt
>
> True, true. Unfortunately, the edge cases are usually spotted when using the software, not in unit tests. They can be included later, but new pathological input keeps coming up (especially if you write for third party software).
>

This is where a test culture comes into play. It is hard to
retrofit tests onto a bad design (which is why it is advised to
develop tests alongside the code itself).