[phobos] Split std.datetime in two?
February 11, 2011



----- Original Message -----
> From: Andrei Alexandrescu <andrei at erdani.com>

> Not taking one femtosecond to believe that. The hard part is to get the unittest to fail. Once it fails, it is all trivial. Insert a writeln or use a debugger.

Please, please, let's *NOT* make this a standard practice.  If a test fails, I don't want to get out a debugger or start printf debugging *just to find the unit test*.  I want it to tell me where it failed, so I can focus on fixing the problem.

> 
> > 
> > Normal code can afford to be more complex - _especially_ if it's well
> > unit tested. But if you make complicated unit tests, then pretty soon you
> > have a major burden in making sure that your tests are correct rather
> > than your code.
> 
> I am now having a major burden finding the code that does the work in a sea of chaff.

I don't sympathize; we have tools to do this easily, without much burden.  If you want to find a function to read, use your editor's find feature.  Some editors even let you click on a function and jump to its definition.  This argument is a complete red herring.

> 
> > 
> > In the case above, it's testing 5 things, so it's 5 lines. It's
> > simple and therefore less error prone. Unit tests really should favor
> > simplicity and correctness over reduced line count or increased cleverness.
> 
> All code should do that. This is a false choice. Good code must go both inside and outside unittest.

Good unit tests are independent and do not affect one another.  Jonathan is right: unit tests simply have a different goal.  You want easily isolated blocks of code that are simple to understand, so that when something goes wrong you can work through it in your head instead of first figuring out how the unit test works.

This could still be true of loops, as long as the loop is simple and provable.  Overly clever code does not belong in unit tests, and performance is not an issue (unless that is what you are testing).  However, a unit test failure should immediately and unambiguously tell you which test failed.  The way D's unit tests are set up, looping does not do that: it tells you which loop failed, but not the exact case.  This can possibly be alleviated using assert's message feature.
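
For example, something along these lines (a made-up test, not one from datetime) would point at the exact failing iteration:

unittest
{
    import std.format : format;

    // hypothetical function under test
    static int twice(int x) { return 2 * x; }

    foreach (i; 0 .. 5)
    {
        // the optional message argument pinpoints the exact case
        assert(twice(i) == 2 * i, format("twice failed for i == %s", i));
    }
}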

> > The goal of unit testing code is inherently different from normal code.
> > _That_ is why unit testing code is written differently from normal code.
> 
> Not buying it. Unittest code is not exempt from simple good coding principles such as avoiding copy and paste.

With unit tests, you care about one thing -- what happened to make this unit test fail.  A one-line independent unit test is ideal: you need no context reading to figure out how the test is constructed.  It makes it easy to verify that the test itself is not flawed, and it points you quickly at the actual problem.

That being said, if you have repetitive setup or teardown code, that can and should be abstracted.
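
As a sketch of what I mean (the helper and its name are made up):

version (unittest)
{
    // hypothetical shared setup, written once instead of pasted into each test
    int[] makeSampleData() { return [3, 1, 2]; }
}

unittest
{
    import std.algorithm : sort;

    auto data = makeSampleData();
    sort(data);
    assert(data == [1, 2, 3]);
}

unittest
{
    import std.algorithm : maxElement;

    assert(maxElement(makeSampleData()) == 3);
}

Each test stays independent and one line of substance, but the repeated setup lives in one place.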

*That* being said, I have not read through all the datetime unit tests, and I don't know how many of them could be omitted.  I typically write one unit test per functional situation.  What I mean is: if a function takes one code path, I write one test for that code path; if a function takes different code paths depending on its parameters, I try to cover them all.  Random-data unit tests are not helpful.  As justification for the unit tests, it might be useful to add comments on what particular aspect each line or block of lines is testing.  That is a daunting task and will add even more size to the file, but maybe we would find a slew of unnecessary tests.
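
To illustrate the code-path point with a made-up function:

// one branch per code path, one independent test per branch
int clampToZero(int x) { return x < 0 ? 0 : x; }

unittest { assert(clampToZero(-5) == 0); }  // negative path
unittest { assert(clampToZero(7) == 7); }   // non-negative path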

-Steve




February 11, 2011
On Feb 11, 2011, at 2:39 PM, Steve Schveighoffer <schveiguy at yahoo.com> wrote:

>
>
>
>
> ----- Original Message -----
>> From: Andrei Alexandrescu <andrei at erdani.com>
>
>> Not taking one femtosecond to believe that. The hard part is to get
>> the unittest
>> to fail. Once it fails, it is all trivial. Insert a writeln or use
>> a debugger.
>
> Please, please, let's *NOT* make this a standard practice.  If a test fails, I don't want to get out a debugger or start printf debugging *just to find the unit test*.  I want it to tell me where it failed, so I can focus on fixing the problem.

You find the unittest all right. With the coming improvements to assert, you will often see the arguments that caused the trouble.

I don't understand how we both derive vastly different conclusions from the same extensive experience with unittests. To me, a difficult unittest to find is a crashing one; never once in my life have I had problems figuring out why an assert fails in a unittest, and worse, I am incapable of imagining such a problem.

>>
>>>
>>> Normal code can afford to be more complex - _especially_ if it's well
>>> unit tested. But if you make complicated unit tests, then pretty soon
>>> you have a major burden in making sure that your tests are correct
>>> rather than your code.
>>
>> I am now having a major burden finding the code that does the work in a
>> sea of chaff.
>
> I don't sympathize; we have tools to do this easily, without much burden.

A little while ago you didn't care to start a debugger or use writeln - two simple tools. I cry double standard.

>  If you want to find a function to read, use your editor's find
> feature.  Some editors even let you click on a function and jump
> to its definition.  This argument is a complete red herring.

I find it a valid argument. I, and I suspect many others, just browse code to get a feel for it.

>
>>
>>>
>>> In the case above, it's testing 5 things, so it's 5 lines. It's
>>> simple and therefore less error prone. Unit tests really should favor
>>> simplicity and correctness over reduced line count or increased cleverness.
>>
>> All code should do that. This is a false choice. Good code must go
>> both inside and outside unittest.
>
> Good unit tests are independent and do not affect one another.  Jonathan is right: unit tests simply have a different goal.  You want easily isolated blocks of code that are simple to understand, so that when something goes wrong you can work through it in your head instead of first figuring out how the unit test works.

For me, repeated code is worse by all metrics. It has zero advantages, and brings only trouble.

>
> This could still be true of loops, as long as the loop is simple and provable.  Overly clever code does not belong in unit tests, and performance is not an issue (unless that is what you are testing).  However, a unit test failure should immediately and unambiguously tell you which test failed.  The way D's unit tests are set up, looping does not do that: it tells you which loop failed, but not the exact case.  This can possibly be alleviated using assert's message feature.

Then let's use it.

>
>>> The goal of unit testing code is inherently different from normal code.
>>> _That_ is why unit testing code is written differently from normal code.
>>
>> Not buying it. Unittest code is not exempt from simple good coding
>> principles
>> such as avoiding copy and paste.
>
> With unit tests, you care about one thing -- what happened to make this unit test fail.  A one-line independent unit test is ideal: you need no context reading to figure out how the test is constructed.  It makes it easy to verify that the test itself is not flawed, and it points you quickly at the actual problem.
>
> That being said, if you have repetitive setup or teardown code, that can and should be abstracted.

Let's. Thanks, Jonathan, for agreeing to do that.

>
> *That* being said, I have not read through all the datetime unit tests, and I don't know the nature of how many could be omitted.

Please do. I had a completely different opinion (virtually the same as yours) before making a thorough pass through it. Again, as in The Sorcerer's Apprentice, it's not the deed itself, but the frightening scale at which the deed is carried out.

Andrei
February 11, 2011
On 02/11/2011 02:39 PM, Steve Schveighoffer wrote:
> With unit tests, you care about one thing -- what happened to make this unit test fail.  A one line independent unit test is ideal, you need to do no context reading in order to figure out how the test is constructed.  It allows easy proof that the test is not flawed, and quick direction to the actual problem.

This is a design issue with assert. One cannot have it report what one needs to know for diagnosis, even less on demand, e.g. when a regression test suite fails after a change.

Denis
-- 
_________________
vita es estrany
spir.wikidot.com