May 05, 2014
On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote:
> On 5/5/14, 8:55 AM, Dicebot wrote:
>>
>> It was just a most simple example. "Unittests should do no I/O because
>> any sort of I/O can fail because of reasons you don't control from the
>> test suite" is an appropriate generalization of my statement.
>>
>> Full /tmp is not a problem, there is nothing broken about system with
>> full /tmp. Problem is test reporting that is unable to connect failure
>> with /tmp being full unless you do environment verification.
>
> Different strokes for different folks. -- Andrei

There is nothing subjective about it. It is a very well-defined practical goal - getting either reproducible or informative reports for test failures from machines you don't have routine access to, while still keeping test sources maintainable (OK, this part is subjective). It is a relatively simple engineering problem, but you discard a widely adopted solution for it (strict control of test requirements) without proposing any real alternative. "I will yell at someone when it breaks" is not really a solution.
May 05, 2014
On 5/5/14, 10:08 AM, Dicebot wrote:
> On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote:
>> On 5/5/14, 8:55 AM, Dicebot wrote:
>>>
>>> It was just a most simple example. "Unittests should do no I/O because
>>> any sort of I/O can fail because of reasons you don't control from the
>>> test suite" is an appropriate generalization of my statement.
>>>
>>> Full /tmp is not a problem, there is nothing broken about system with
>>> full /tmp. Problem is test reporting that is unable to connect failure
>>> with /tmp being full unless you do environment verification.
>>
>> Different strokes for different folks. -- Andrei
>
> There is nothing subjective about it.

Of course there is. -- Andrei

May 05, 2014
On Monday, 5 May 2014 at 18:24:43 UTC, Andrei Alexandrescu wrote:
> On 5/5/14, 10:08 AM, Dicebot wrote:
>> On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote:
>>> On 5/5/14, 8:55 AM, Dicebot wrote:
>>>>
>>>> It was just a most simple example. "Unittests should do no I/O because
>>>> any sort of I/O can fail because of reasons you don't control from the
>>>> test suite" is an appropriate generalization of my statement.
>>>>
>>>> Full /tmp is not a problem, there is nothing broken about system with
>>>> full /tmp. Problem is test reporting that is unable to connect failure
>>>> with /tmp being full unless you do environment verification.
>>>
>>> Different strokes for different folks. -- Andrei
>>
>> There is nothing subjective about it.
>
> Of course there is. -- Andrei

You are not helping your point look reasonable.
May 05, 2014
On 5/5/14, 11:25 AM, Dicebot wrote:
> On Monday, 5 May 2014 at 18:24:43 UTC, Andrei Alexandrescu wrote:
>> On 5/5/14, 10:08 AM, Dicebot wrote:
>>> On Monday, 5 May 2014 at 16:33:42 UTC, Andrei Alexandrescu wrote:
>>>> On 5/5/14, 8:55 AM, Dicebot wrote:
>>>>>
>>>>> It was just a most simple example. "Unittests should do no I/O because
>>>>> any sort of I/O can fail because of reasons you don't control from the
>>>>> test suite" is an appropriate generalization of my statement.
>>>>>
>>>>> Full /tmp is not a problem, there is nothing broken about system with
>>>>> full /tmp. Problem is test reporting that is unable to connect failure
>>>>> with /tmp being full unless you do environment verification.
>>>>
>>>> Different strokes for different folks. -- Andrei
>>>
>>> There is nothing subjective about it.
>>
>> Of course there is. -- Andrei
>
> You are not helping your point to look reasonable.

My understanding here is you're trying to make dogma out of engineering choices that may vary widely across projects and organizations. No thanks.

Andrei

May 05, 2014
On Monday, 5 May 2014 at 18:29:40 UTC, Andrei Alexandrescu wrote:
> My understanding here is you're trying to make dogma out of engineering choices that may vary widely across projects and organizations. No thanks.
>
> Andrei

I am asking you to either suggest an alternative solution or to clarify why you don't consider it an important problem. A dogmatic approach that solves the issue is still better than ignoring it completely.

Right now I am afraid you will push for quick changes that reduce the elegant simplicity of the D unittest system without providing a sound replacement that actually fits the more ambitious use cases (as the whole "parallel" thing implies).
May 05, 2014
On 5/5/14, 11:47 AM, Dicebot wrote:
> On Monday, 5 May 2014 at 18:29:40 UTC, Andrei Alexandrescu wrote:
>> My understanding here is you're trying to make dogma out of
>> engineering choices that may vary widely across projects and
>> organizations. No thanks.
>>
>> Andrei
>
> I am asking to either suggest an alternative solution or to clarify why
> you don't consider it is an important problem.

"Clean /tmp/ judiciously."

> Dogmatic approach that
> solves the issue is still better than ignoring it completely.

The problem with your stance, i.e.:

> "Unittests should do no I/O because any sort of I/O can fail because
> of reasons you don't control from the test suite" is an appropriate
> generalization of my statement.

is that it immediately generalizes into the unreasonable:

"Unittests should do no $X because any sort of $X can fail because of reasons you don't control from the test suite".

So that gets into machines not having any memory available, with full disks etc.

Just make sure test machines are prepared for running unittests to the extent the unittests expect them to be. We're wasting time trying to frame this as a problem purely related to unittests.
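For what it's worth, the "environment verification" Dicebot mentioned earlier could be as simple as this sketch (not anyone's actual setup; the function and message are made up) - fail fast with an informative report before any I/O-touching test runs:

```d
// A sketch of up-front environment verification: check that the temp
// directory is actually writable before the test suite starts, so a
// full /tmp produces an informative failure instead of a mysterious one.
import std.file : remove, tempDir, write;

void verifyTestEnvironment()
{
    // Cheapest meaningful check: can we create and delete a file there?
    immutable probe = tempDir ~ "/unittest_env_probe";
    try
    {
        write(probe, "probe");
        remove(probe);
    }
    catch (Exception e)
    {
        throw new Exception("test environment broken: cannot write to "
                ~ tempDir ~ " (" ~ e.msg ~ ")");
    }
}

void main()
{
    verifyTestEnvironment(); // then run the actual test suite
}
```

On a healthy machine this is a no-op; on a machine with a full or unwritable temp directory it turns an obscure downstream test failure into a direct report.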

> Right now I am afraid you will push for quick changes that will reduce
> elegant simplicity of D unittest system without providing a sound
> replacement that will actually fit into more ambitious use cases (as
> whole "parallel" thing implies).

If I had my way I'd make parallel the default and single-threaded opt-in, thus penalizing unittests that had issues to start with. But I understand the merits of not breaking backwards compatibility so probably we should start with opt-in parallel unittesting.
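To make the idea concrete - this is not the actual druntime mechanism, just a sketch of what pushing independent, side-effect-free tests through a task pool could look like (all names are made up):

```d
// Sketch: run independent test functions on std.parallelism's default
// task pool. Any assertion failure propagates out of the foreach.
import std.parallelism : parallel;

void testAddition() { assert(1 + 1 == 2); }
void testLength()   { assert("abc".length == 3); }

void runTestsInParallel(void function()[] tests)
{
    foreach (t; parallel(tests))
        t();
}

void main()
{
    runTestsInParallel([&testAddition, &testLength]);
}
```

Tests that share mutable state or a fixed filesystem path would break under such a scheme, which is exactly the "penalizing unittests that had issues to start with" effect.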


Andrei

May 06, 2014
On 30/04/2014 16:43, Andrei Alexandrescu wrote:
> Hello,
>
>
> A coworker mentioned the idea that unittests could be run in parallel
> (using e.g. a thread pool).

There has been a lot of disagreement in this discussion about whether unittest blocks should run in parallel or not. Not everyone agrees with Andrei's idea in the first place. I am another in that position.

True, as Dicebot, Russel, and others mentioned, a Unit Test should be a procedure with no side-effects (or with side-effects independent of the other Unit Tests), and as such able to run in parallel. Otherwise it is an Integration Test.
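For instance (an illustrative sketch, file name made up), compare a side-effect-free unittest block with one that depends on the environment:

```d
// A self-contained unit test next to an environment-dependent one.
import std.file : readText, remove, write;

int add(int a, int b) { return a + b; }

// No side effects, no shared state: safe to run in parallel.
unittest
{
    assert(add(2, 3) == 5);
}

// Touches the filesystem: in the terms above, an Integration Test.
// It can fail for reasons (full disk, permissions) that are outside
// the code under test.
unittest
{
    enum path = "/tmp/parallel_unittest_probe.txt";
    write(path, "hello");
    scope(exit) remove(path);
    assert(readText(path) == "hello");
}
```

Both are perfectly legal unittest blocks today; only the first is trivially parallelizable.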

But before we continue the discussion, we are missing a more basic question here: do we want D to have a Unit-Testing facility, or a Testing facility? In other words, do we want to be able to write automated tests that are Integration tests, or just Unit tests? Because if we go with this option of making D unittest blocks run in parallel, we kill the option of them supporting Integration Tests. I don't think this is good.

Unit testing frameworks in other languages (JUnit for Java, for example) provide full support for Integration tests, despite the "Unit" in their names. This is good. I think Integration tests are much more common in "real-world" applications than people give them credit for.

Personally I find the distinction between Unit tests and Integration tests not very useful in practice. It is accurate, but not very useful. In my mental model I don't make a distinction. I write a test that tests a component, or part of a component. The component might be a low-level component that depends on few or no other components - then I have a Unit test. Or it might be a higher-level component that depends on other components (which might need to be mocked in the test) - then I have an Integration test. But they are not different enough that a different framework should be necessary to write each of them.
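To illustrate the mocking case (a hypothetical sketch; all the names here are made up), the higher-level component can be tested against a mock of its low-level dependency through an interface:

```d
// Testing a higher-level component by mocking its low-level dependency.
interface Storage
{
    string load(string key);
}

// The higher-level component under test.
class Cache
{
    private Storage backend;
    this(Storage backend) { this.backend = backend; }
    string get(string key) { return backend.load(key); }
}

// The mock replaces the real (I/O-bound) storage, so the test needs
// no filesystem or network and stays a Unit test.
class MockStorage : Storage
{
    string load(string key) { return "value-for-" ~ key; }
}

unittest
{
    auto cache = new Cache(new MockStorage);
    assert(cache.get("k") == "value-for-k");
}
```

Swap `MockStorage` for a real backend and the very same test structure becomes an Integration test - which is exactly why I don't think a separate framework is warranted.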

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
May 06, 2014
On 30/04/2014 20:23, Dicebot wrote:
> On Wednesday, 30 April 2014 at 18:19:34 UTC, Jonathan M Davis via
> Digitalmars-d wrote:
>> On Wed, 30 Apr 2014 17:58:34 +0000
>> Atila Neves via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>>> Unit tests though, by definition (and I'm aware there are more than
>>> one) have to be independent. Have to not touch the filesystem, or the
>>> network. Only CPU and RAM.
>>
>> I disagree with this. A unit test is a test that tests a single piece
>> of functionality - generally a function - and there are functions which
>> have to access the file system or network.
>
> They _use_ access to file system or network, but it is _not_ their
> functionality. Unit testing is all about verifying small perfectly
> separated pieces of functionality which don't depend on correctness /
> stability of any other functions / programs. Doing I/O goes against it
> pretty much by definition and is unfortunately one of most common
> testing antipatterns.

It is common, but it is not necessarily an anti-pattern. Rather, it is likely just an Integration test instead of a Unit test. See: http://forum.dlang.org/post/lkb0jm$vp8$1@digitalmars.com

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
May 06, 2014
On 01/05/2014 18:12, Steven Schveighoffer wrote:
> On Thu, 01 May 2014 10:01:19 -0400, Atila Neves <atila.neves@gmail.com>
> wrote:
>
>> On Thursday, 1 May 2014 at 11:44:12 UTC, w0rp wrote:
>>> On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
>>>> On 2014-04-30 23:35, Andrei Alexandrescu wrote:
>>>>
>>>>> Agreed. I think we should look into parallelizing all unittests. --
>>>>> Andrei
>>>>
>>>> I recommend running the tests in random order as well.
>>>
>>> This is a bad idea. Tests could fail only some of the time. Even if
>>> bugs are missed, I would prefer it if tests did exactly the same
>>> thing every time.
>>
>> They _should_ do exactly the same thing every time. Which is why
>> running in threads or at random is a great way to enforce that.
>
> But not a great way to debug it.
>
> If your test failure depends on ordering, then the next run will be
> random too.
>

I agree with Steven here.

Actually, even if the failure does *not* depend on ordering, it can still be useful to run the tests in order when debugging:
If there is a bug in a low level component, that will likely trigger a failure in the tests for that low level component, but also the tests for higher-level components (the components that use the low level component).
As such, when debugging, you would want to run the low-level test first since it will likely be easier to debug the issue there, than with the higher-level test.

Sure, one could say that the solution to this should be mocking the low-level component in the high-level test, but mocking is not always desirable or practical. I can provide some concrete examples.

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
May 06, 2014
On 01/05/2014 08:18, Dicebot wrote:
> On Wednesday, 30 April 2014 at 21:49:06 UTC, Jonathan M Davis via
> Digitalmars-d wrote:
>> On Wed, 30 Apr 2014 21:09:14 +0100
>> Russel Winder via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>>
>>> On Wed, 2014-04-30 at 11:19 -0700, Jonathan M Davis via Digitalmars-d
>>> wrote:
>>> > unittest blocks just like any other unit test. I would very > much
>>> > consider std.file's tests to be unit tests. But even if you > don't
>>> > want to call them unit tests, because they access the file > system,
>>> > the reality of the matter is that tests like them are going > to be
>>> > run in unittest blocks, and we have to take that into > account when
>>> > we decide how we want unittest blocks to be run (e.g. whether
>>> > they're parallelizable or not).
>>>
>>> In which case D is wrong to allow them in the unittest blocks and
>>> should introduce a new way of handling these tests. And even then all
>>> tests can and should be parallelized. If they cannot be then there is
>>> an inappropriate dependency.
>>
>> Why? Because Andrei suddenly proposed that we parallelize unittest
>> blocks? If I want to test a function, I'm going to put a unittest block
>> after it to test it. If that means accessing I/O, then it means
>> accessing I/O. If that means messing with mutable, global variables,
>> then that means messing with mutable, global variables. Why should I
>>> have to put the tests elsewhere or make it so that they don't run when the
>>> -unittest flag is used just because they don't fall under your definition
>> of "unit" test?
>
> You do this because unit tests must be fast. You do this because unit
> tests must be naively parallel. You do this because unit tests verify
> basic application / library sanity and expected to be quickly run after
> every build in deterministic way (contrary to full test suite which can
> take hours).
>

See http://forum.dlang.org/post/lkb0jm$vp8$1@digitalmars.com.
(basically, do we want to support only Unit tests, or Integration tests also?)

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros