December 06, 2014
Unit tests are great for avoiding regressions.
Unit tests give confidence.
You can make radical changes to your codebase much more easily if you know
that nothing breaks because of them.

I'm not a fan of TDD. But I do like knowing immediately that there are no regressions.
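The regression-guard idea can be sketched with Python's stdlib `unittest`; the `slugify` helper here is hypothetical, purely for illustration:

```python
import unittest

def slugify(title):
    """Hypothetical helper: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_spaces_become_dashes(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_lowercasing(self):
        # Regression guard: any later refactor of slugify() must keep this.
        self.assertEqual(slugify("README"), "readme")

# Run with: python -m unittest this_module
```

If a refactor breaks either behaviour, the suite says so immediately — that is the "confidence" being described.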
December 06, 2014
On Fri, 2014-12-05 at 12:44 -0800, Walter Bright via Digitalmars-d wrote:
> On 12/5/2014 5:41 AM, H. S. Teoh via Digitalmars-d wrote:
> > As for GUI code, I've always been of the opinion that it should be coded in such a way as to be fully scriptable. GUIs that can only operate when given real user input have failed from the start IMO, because not being scriptable also means it's non-automatable (crippled, in my book), but more importantly, it's not auto-testable; you have to hire humans to sit all day repeating the same sequence of mouse clicks just to make sure the latest dev build is still working properly. That's grossly inefficient and a waste of money spent hiring the employee.

You actually need both. Scripting end-to-end tests, system tests, etc. is important, but most systems also need humans to try things out.

> A complementary approach is to have the UI code call "semantic"
> routines that
> are in non-UI code, and those semantic routines do all the semantic
> work. That
> minimizes the UI code, and hence the testing problem.

The usual model is that GUI code contains only GUI code and uses a Mediator and/or Façade to access any other code. Similarly, separate business rules from database access (DAO, etc.). Separation of Concerns, Three-Tier Model, MVC, MVP — there are masses of labels for the fundamental architecture, but it is all about modularization and testability.
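The split being described here — widget wiring on one side, "semantic" routines behind a façade on the other — can be sketched like this (all class and method names are hypothetical):

```python
class AccountService:
    """Semantic layer: all business logic lives here, no UI imports."""

    def __init__(self):
        self._balances = {}

    def deposit(self, account, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balances[account] = self._balances.get(account, 0) + amount
        return self._balances[account]


class AccountWindow:
    """GUI layer: only widget wiring; delegates everything via the facade."""

    def __init__(self, service):
        self._service = service  # Mediator/Facade to the non-UI code

    def on_deposit_clicked(self, account, text_field_value):
        # The only "logic" here is translating widget input into a call.
        return self._service.deposit(account, int(text_field_value))
```

Unit tests then target `AccountService` directly, with no widget toolkit loaded; the thin `AccountWindow` leaves little left to test by hand.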

> Most GUI apps I've seen mixed up all that code together.

You haven't seen many then :-), and most of them have been crap and should have had the incantation "rm -rf *" applied.

--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder

December 06, 2014
On Fri, 2014-12-05 at 12:59 -0800, H. S. Teoh via Digitalmars-d wrote:
> 
[…]
> I had to implement a drag-n-drop function in Javascript once, and the thing was one big convoluted mess, even after excluding the semantic part (which in this case is trivial). It left me really longing to have some kind of unittest framework to verify that later code changes won't break that fragile tower of cards, but alas, we didn't have any such framework available.

http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks#JavaScript

-- 
Russel.

December 06, 2014
On Sat, 2014-12-06 at 01:15 +0000, deadalnix via Digitalmars-d wrote:
> 
[…]
> In my experience, having no QA actually works better. Another approach is to not have a QA department, but one or 2 QA guys within the dev team.

There's a label for that: DevOps. Except that the term is rapidly losing its proper meaning as people start using it without understanding what it used to mean.
-- 
Russel.

December 06, 2014
On Sat, Dec 06, 2014 at 08:46:58AM +0000, Paulo Pinto via Digitalmars-d wrote:
> On Saturday, 6 December 2014 at 08:26:23 UTC, Brad Roberts via Digitalmars-d wrote:
> >On 12/5/2014 11:54 PM, Paulo Pinto via Digitalmars-d wrote:
> >>On Saturday, 6 December 2014 at 01:31:59 UTC, deadalnix wrote:
> >>>Code review, my friend. Nothing gets in without review, and as one usually doesn't enjoy the prospect of having to fix a coworker's shit, one ensures that the coworker wrote proper tests.
> >>
> >>Good luck making that work in companies.
> >>
> >>Code review is something for open source projects and agile conferences.
> >
> >I've worked at several companies, both large and gigantic, and it's worked very well at all of them.  Code reviews are an important part of healthy and quality code development processes.
> 
> Maybe I have worked at wrong companies then.
> 
> In 20 years of career I can count on one hand those that did it, and most developers hated it. It never lasted more than a few meetings.
[...]

Huh, what...?? Meetings? For code review??? How does that even work...?

Where I work, code review is done as part of the change committing process. No code gets merged into the mainline codebase without somebody reviewing it -- and recently they've upped the process to require 2 or more reviewers who approve the changes, both at the code level and at the higher feature level. These reviews are ongoing all the time -- you work on your code, test it locally, and once you're reasonably confident of it, you submit it to QA for further testing and sanity testing, then once that's approved, you submit it to your team lead and he reviews it, and if it has problems, he will reject it. If it gets approved, then it gets reviewed by a wider panel of reviewers drawn from teams who are responsible for the component(s) touched by the code change. Only when they OK the change, will it get merged into the mainline.

However, all this review kinda loses a lot of its effectiveness because we have no unittesting system, so regressions are out of control. :-(  The code is complex enough that even with all this review, things still slip through. The lack of automation also means QA tests are sometimes rather skimpy and miss obvious regressions. Having automated unittesting would go a long way towards improving this situation.


T

-- 
You only live once.
December 06, 2014
On Sat, Dec 06, 2014 at 03:11:01PM +0000, Russel Winder via Digitalmars-d wrote:
> 
> On Fri, 2014-12-05 at 12:59 -0800, H. S. Teoh via Digitalmars-d wrote:
> > 
> […]
> > I had to implement a drag-n-drop function in Javascript once, and the thing was one big convoluted mess, even after excluding the semantic part (which in this case is trivial). It left me really longing to have some kind of unittest framework to verify that later code changes won't break that fragile tower of cards, but alas, we didn't have any such framework available.
> 
> http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks#JavaScript
[...]

Oh, I *know* there are Javascript testing frameworks out there... the problem is convincing people to actually incorporate said tests as part of the development process. Sure, *I* can run the tests with my own code changes, but there are 50-100 developers working on the project who are also constantly making changes, and unless they also regularly run unittests, the whole exercise is kinda moot.
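One way to stop the exercise being moot is to make the machine run the tests rather than hoping 50-100 developers remember to. A hedged sketch of a git pre-commit hook in Python — the test command and hook path are assumptions; a server-side CI gate achieves the same thing:

```python
#!/usr/bin/env python
"""Hypothetical .git/hooks/pre-commit: reject the commit if the suite fails."""
import subprocess
import sys

def run_suite(command):
    """Run the project's test command; return True when it passes."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        # Show the failure output so the developer sees why.
        sys.stderr.write(result.stdout + result.stderr)
    return result.returncode == 0

def main():
    # Assumed test entry point; substitute whatever the project uses.
    if not run_suite([sys.executable, "-m", "unittest", "discover"]):
        sys.stderr.write("Tests failed; commit rejected.\n")
        return 1
    return 0

# Installed as a hook, the script would finish with: sys.exit(main())
```

The design point is that the tests run at the choke point everyone already passes through (commit or merge), so nobody has to be individually convinced.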


T

-- 
Do not reason with the unreasonable; you lose by definition.
December 06, 2014
On Sat, 2014-12-06 at 01:22 +0000, deadalnix via Digitalmars-d wrote:
> On Friday, 5 December 2014 at 15:28:36 UTC, Chris wrote:
> > > This is very true. Especially when mocks come into play, tests sometimes become duplicated code, and every time you make changes in your codebase you have to go and change the expected behaviour of the mocks, which is just tedious and useless.

Well, that is poor use of mocks anyway.

If a mock is having to change because the code changes (rather than the story changing) then the mock is wrong: inappropriate separation of concerns and use of mocks.
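A mock that follows the story rather than the implementation targets a seam like the one below (all names are hypothetical, using Python's stdlib `unittest.mock`):

```python
from unittest import mock

class Notifier:
    """Seam: the story says 'tell the user', not how delivery happens."""

    def notify(self, user, message):
        raise NotImplementedError

def close_account(store, notifier, user):
    # Hypothetical use case; the story is: close the account, tell the user.
    store.pop(user, None)
    notifier.notify(user, "account closed")

# The test mocks the seam, so refactoring close_account's internals
# (as long as the story holds) never forces the mock to change.
fake = mock.Mock(spec=Notifier)
store = {"alice": {}}
close_account(store, fake, "alice")
fake.notify.assert_called_once_with("alice", "account closed")
```

If a code change breaks this test, the *story* changed — which is exactly when a mock should have to change.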

> > Thanks for saying that. That's my experience too, especially when a module is under heavy development with frequent changes.
> 
> I second this, too much mock is a lot of work down the road.

I find mocks immensely valuable for separating concerns, e.g. a GUI for controlling a network device. For integration testing mocks are invaluable, and they are useful for unit tests.
-- 
Russel.

December 06, 2014
On Sat, Dec 06, 2014 at 03:09:21PM +0000, Russel Winder via Digitalmars-d wrote:
> 
> On Fri, 2014-12-05 at 12:44 -0800, Walter Bright via Digitalmars-d wrote:
> > On 12/5/2014 5:41 AM, H. S. Teoh via Digitalmars-d wrote:
> > > As for GUI code, I've always been of the opinion that it should be coded in such a way as to be fully scriptable. GUIs that can only operate when given real user input have failed from the start IMO, because not being scriptable also means it's non-automatable (crippled, in my book), but more importantly, it's not auto-testable; you have to hire humans to sit all day repeating the same sequence of mouse clicks just to make sure the latest dev build is still working properly. That's grossly inefficient and a waste of money spent hiring the employee.
> 
> You actually need both. Scripting end-to-end tests, system tests, etc. is important, but most systems also need humans to try things out.

Oh, certainly I'm not expecting *all* tests to be automated -- there are some things that are fundamentally non-automatable, like testing look-and-feel, user experience, new feature evaluation, etc.. My complaint was that the lack of automation has caused QA to be completely occupied with menial, repetitive tasks (like navigate the same set of menus 100 times a day to test 100 developer images, just to make sure it still works as expected), that no resources are left for doing more consequential work. Instead, things have gone the opposite direction -- QA is hiring more people because the current staff can no longer keep up with the sheer amount of menial repetitive testing they have to do.

If these automatable tests were actually automated, the QA department would have many resources freed up for doing other important work -- like testing more edge cases for potential problematic areas, etc..


> > A complementary approach is to have the UI code call "semantic" routines that are in non-UI code, and those semantic routines do all the semantic work. That minimizes the UI code, and hence the testing problem.
> 
> The usual model is that GUI code only has GUI code and uses a Mediator and/or Façade to access any other code. Similarly separate business rules from database access (DAO, etc.). Separation of Concerns, Three Tier Model, MVC, MVP, there are masses of labels for the fundamental architecture, but it is all about modularization and testability.
> 
> > Most GUI apps I've seen mixed up all that code together.
> 
> You haven't seen many then :-), and most of them have been crap and should have had the incantation "rm -rf *" applied.
[...]

Unfortunately, I suspect that if I adopted that policy, I'd have to nuke most of my OS. :-P


T

-- 
Knowledge is that area of ignorance that we arrange and classify. -- Ambrose Bierce
December 06, 2014
On Sat, 2014-12-06 at 07:14 -0800, H. S. Teoh via Digitalmars-d wrote:
> 
[…]
> Oh, I *know* there are Javascript testing frameworks out there... the problem is convincing people to actually incorporate said tests as part of the development process. Sure, *I* can run the tests with my own code changes, but there are 50-100 developers working on the project who are also constantly making changes, and unless they also regularly run unittests, the whole exercise is kinda moot.

If the team is 50 to 100 programmers, only one of whom thinks testing is a good idea, then I can see an unmitigatable disaster looming and the technical bosses (*) being sacked. Best bet: get a new job now.


(*) management and accounting bosses never get sacked, because it is
never their fault.

-- 
Russel.

December 06, 2014
On Sat, 2014-12-06 at 07:24 -0800, H. S. Teoh via Digitalmars-d wrote:
> 
[…]
> Oh, certainly I'm not expecting *all* tests to be automated -- there are some things that are fundamentally non-automatable, like testing look-and-feel, user experience, new feature evaluation, etc.. My complaint was that the lack of automation has caused QA to be completely occupied with menial, repetitive tasks (like navigate the same set of menus 100 times a day to test 100 developer images, just to make sure it still works as expected), that no resources are left for doing more consequential work. Instead, things have gone the opposite direction -- QA is hiring more people because the current staff can no longer keep up with the sheer amount of menial repetitive testing they have to do.
> 
> If these automatable tests were actually automated, the QA department would have many resources freed up for doing other important work -- like testing more edge cases for potential problematic areas, etc..

On the plus side, a couple of organizations have had me in to teach their development staff Python so they can write scripting components for their UI, and teach their QA staff Python so they can script all the menial stuff testing away. Worked very well. So well in one case that the QA staff were able to have breaks and relax a bit and do really high quality end-user testing. HR thought they were slacking so sacked half of them. I think the other half of QA have now left and the product testing is clearly suffering relying solely on automated tests that are not being properly updated.

[…]

-- 
Russel.