May 06, 2014
On Monday, 5 May 2014 at 18:58:37 UTC, Andrei Alexandrescu wrote:
> On 5/5/14, 11:47 AM, Dicebot wrote:
>> On Monday, 5 May 2014 at 18:29:40 UTC, Andrei Alexandrescu wrote:
>>> My understanding here is you're trying to make dogma out of
>>> engineering choices that may vary widely across projects and
>>> organizations. No thanks.
>>>
>>> Andrei
>>
>> I am asking you to either suggest an alternative solution or to clarify
>> why you don't consider it an important problem.
>
> "Clean /tmp/ judiciously."

This is a solution for the "failing test" problem. The problem I am talking about is "figuring out why the test has failed".

> The problem with your stance, i.e.:
>
>> "Unittests should do no I/O because any sort of I/O can fail because
>> of reasons you don't control from the test suite" is an appropriate
>> generalization of my statement.
>
> is that it immediately generalizes into the unreasonable:
>
> "Unittests should do no $X because any sort of $X can fail because of reasons you don't control from the test suite".
>
> So that gets into machines not having any memory available, with full disks etc.

It is great that you have mentioned RAM here, as it nicely draws the border line. Running out of memory throws a specific Error which is unlikely to be caught and which clearly identifies the problem. A disk I/O failure throws an Exception which can easily be consumed somewhere inside the tested control flow, resulting in absolutely mysterious test failures. That is exactly the Error vs Exception border line: a fatal problem incompatible with further execution versus a routine problem the application is expected to handle.
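
To illustrate (contrived sketch; loadConfig and the file name are made up):

import std.file : readText, FileException;

string loadConfig(string path)
{
    try
    {
        return readText(path);  // throws FileException on any I/O failure
    }
    catch (FileException)
    {
        return "default";       // the I/O failure is silently consumed here
    }
}

unittest
{
    // When /tmp/ is full this fails with a plain assert error; the
    // FileException that would explain why is already gone.
    assert(loadConfig("/tmp/app.conf") == "expected contents");
}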

> Just make sure test machines are prepared for running unittests to the extent unittests are expecting them to. We're wasting time trying to frame this as a problem purely related to unittests alone.

Again: you don't have control over the test machines for something like a language standard library. It is not purely a unittest problem, but it is a problem that is hard to solve while staying within the unittest infrastructure.
May 06, 2014
On Tuesday, 6 May 2014 at 15:54:30 UTC, Bruno Medeiros wrote:
> But before we continue the discussion, we are missing a more basic assumption here: Do we want D to have a Unit-Testing facility, or a Testing facility? In other words, do we want to be able to write automated tests that are Integration tests or just Unit tests? Because if we go with this option of making D unittest blocks run in parallel, we kill the option of them supporting Integration Tests. I don't think this is good.

These days I often find myself leaning towards writing mostly integration tests, with only a limited amount of unit tests. But writing a good integration test is very different from writing a good unit test, and usually implies quite a lot of boilerplate. The truth is that D does not currently have any higher-level testing facility at all. It has an _awesome_ unit test facility which often gets misused for writing sloppy integration tests.

I'd love to keep the existing facility as-is and think about providing good library augmentation for any sort of higher-level approach.

The key property of D unit tests is how easy it is to add them inline to an existing project; unconstrained simplicity. In a perfect world they are closer to contracts than to other types of tests. This provides a good basic sanity check for all the modules you recursively import when run via `rdmd -unittest`.
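
Think of something as trivial as this sketch, which costs nothing to keep inline:

ulong toMsec(ulong seconds)
{
    return seconds * 1000;
}

// Sits right next to the function it checks, needs no setup, touches no
// external state, and runs with plain `rdmd -unittest`.
unittest
{
    assert(toMsec(0) == 0);
    assert(toMsec(1) == 1000);
}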

A good integration test is very different. It makes certain assumptions about the initial system state and notifies the user if those are not met. It can take ages to run and can test real-world situations. It is not supposed to be run implicitly and frequently. You don't want to keep your integration tests inline because of the amount of boilerplate code they usually need.
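
Compare the boilerplate even a tiny sketch of such a test needs (paths and names made up):

import std.exception : enforce;
import std.file : exists, mkdirRecurse, rmdirRecurse, readText, write;

void integrationTest()
{
    // Check the assumptions about initial system state up front.
    enforce(exists("/tmp"), "expected /tmp to exist");

    auto dir = "/tmp/mytest";        // hypothetical scratch directory
    mkdirRecurse(dir);
    scope (exit) rmdirRecurse(dir);  // clean up no matter how we exit

    // Exercise a real-world I/O path instead of an in-memory value.
    write(dir ~ "/data.txt", "payload");
    assert(readText(dir ~ "/data.txt") == "payload");
}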

I see no good in trying to unite these very different beasts, and my experience with existing test libraries has been very unpleasant in that regard.
May 06, 2014
On 5/6/14, 10:43 AM, Dicebot wrote:
> A disk I/O failure throws an Exception which can easily be consumed
> somewhere inside the tested control flow, resulting in absolutely
> mysterious test failures.

If you're pointing out full /tmp/ should be nicely diagnosed by the unittest, I agree. -- Andrei
May 06, 2014
On Tuesday, 6 May 2014 at 18:13:01 UTC, Andrei Alexandrescu wrote:
> On 5/6/14, 10:43 AM, Dicebot wrote:
>> A disk I/O failure throws an Exception which can easily be consumed
>> somewhere inside the tested control flow, resulting in absolutely
>> mysterious test failures.
>
> If you're pointing out full /tmp/ should be nicely diagnosed by the unittest, I agree. -- Andrei

Good, we have common ground :)

That inevitably raises the next question though: how can a unittest diagnose it? Catching the exception is not always possible, as it can be consumed inside the tested function, resulting in different observable behavior when /tmp/ is full.

I can't imagine anything better than verifying /tmp/ is not full before running a batch of tests. Will you agree with this one too?
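
Something along these lines, perhaps (POSIX-only sketch; helper name and threshold are made up):

import core.sys.posix.sys.statvfs : statvfs, statvfs_t;
import std.exception : enforce;

void checkTmpSpace(ulong minBytes = 16 * 1024 * 1024)
{
    statvfs_t info;
    enforce(statvfs("/tmp", &info) == 0, "statvfs(/tmp) failed");
    auto freeBytes = cast(ulong) info.f_bavail * info.f_frsize;
    enforce(freeBytes >= minBytes, "/tmp/ is (nearly) full, aborting test run");
}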
May 06, 2014
On 2014-05-06 19:58, Dicebot wrote:

> These days I often find myself leaning towards writing mostly
> integration tests, with only a limited amount of unit tests. But
> writing a good integration test is very different from writing a good
> unit test, and usually implies quite a lot of boilerplate. The truth
> is that D does not currently have any higher-level testing facility at
> all. It has an _awesome_ unit test facility which often gets misused
> for writing sloppy integration tests.
>
> I'd love to keep the existing facility as-is and think about providing
> good library augmentation for any sort of higher-level approach.
>
> The key property of D unit tests is how easy it is to add them inline
> to an existing project; unconstrained simplicity. In a perfect world
> they are closer to contracts than to other types of tests. This
> provides a good basic sanity check for all the modules you recursively
> import when run via `rdmd -unittest`.
>
> A good integration test is very different. It makes certain
> assumptions about the initial system state and notifies the user if
> those are not met. It can take ages to run and can test real-world
> situations. It is not supposed to be run implicitly and frequently.
> You don't want to keep your integration tests inline because of the
> amount of boilerplate code they usually need.
>
> I see no good in trying to unite these very different beasts, and my
> experience with existing test libraries has been very unpleasant in
> that regard.

I don't see why it would be bad to use "unittest" for integration tests, except for the misguided name. It's perfectly fine to place "unittest" in completely different modules and packages. They don't need to be placed inline.

I see it as a good place to put code for testing. Then I don't have to come up with awkward names for regular functions. It's also a good place since D doesn't allow placing statements and expressions at module level. Sure, there are module constructors, but I don't think that's any better.
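
For example (module and names are made up):

module tests.server_test;

import myapp.server;  // hypothetical module under test

// "unittest" used purely as a place to put test code, in a module far
// away from the code it tests.
unittest
{
    auto server = new Server;
    scope (exit) server.shutdown();
    assert(server.ping());
}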

-- 
/Jacob Carlborg
May 06, 2014
On Tuesday, 6 May 2014 at 18:28:27 UTC, Jacob Carlborg wrote:
> I don't see why it would be bad to use "unittest" for integration tests, except for the misguided name. It's perfectly fine to place "unittest" in completely different modules and packages. They don't need to be placed inline.

Well, I am actually guilty of doing exactly that, because it allows me to merge coverage analysis files :) But it is not an optimal situation once you consider something like parallel tests, as the compiler does not know which of those blocks are "true" unit tests.

It also makes it difficult to define a common "idiomatic" way to organize the testing of D projects. I'd also love to see a test library that helps with defining integration test structure (named tests grouped by common environment requirements, with automatic cleanup upon finishing the group/block) without resorting to custom classes AND without interfering with the simplicity of existing unittests.

I think it can all be done by keeping the existing single "unittest" keyword but using various annotations. Integration tests can then be built as a separate application that uses an imaginary Phobos integration test library to interpret those annotations and provide a more complex test structure. And running plain `rdmd -unittest` on the actual application modules will still continue to do the same good old thing.
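
Rough sketch of the library side (@name is the imaginary annotation; untested):

struct name { string value; }

// A separate runner application picks up annotated unittest blocks via
// __traits(getUnitTests) and interprets the UDAs however it likes, while
// plain `rdmd -unittest` keeps working as before.
void runAnnotated(alias mod)()
{
    import std.stdio : writeln;

    foreach (test; __traits(getUnitTests, mod))
    {
        foreach (attr; __traits(getAttributes, test))
        {
            static if (is(typeof(attr) == name))
            {
                writeln("running: ", attr.value);
                test();
            }
        }
    }
}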
May 06, 2014
On 5/6/14, 11:27 AM, Dicebot wrote:
> On Tuesday, 6 May 2014 at 18:13:01 UTC, Andrei Alexandrescu wrote:
>> On 5/6/14, 10:43 AM, Dicebot wrote:
>>> A disk I/O failure throws an Exception which can easily be consumed
>>> somewhere inside the tested control flow, resulting in absolutely
>>> mysterious test failures.
>>
>> If you're pointing out full /tmp/ should be nicely diagnosed by the
>> unittest, I agree. -- Andrei
>
> Good, we have common ground :)
>
> That inevitably raises the next question though: how can a unittest
> diagnose it?

Fail with diagnostic. -- Andrei

May 07, 2014
On 06/05/14 20:39, Dicebot wrote:
> On Tuesday, 6 May 2014 at 18:28:27 UTC, Jacob Carlborg wrote:
>> I don't see why it would be bad to use "unittest" for integration
>> tests, except for the misguided name. It's perfectly fine to place
>> "unittest" in completely different modules and packages. They don't
>> need to be placed inline.
>
> Well, I am actually guilty of doing exactly that, because it allows me
> to merge coverage analysis files :) But it is not an optimal situation
> once you consider something like parallel tests, as the compiler does
> not know which of those blocks are "true" unit tests.
>
> It also makes it difficult to define a common "idiomatic" way to
> organize the testing of D projects. I'd also love to see a test
> library that helps with defining integration test structure (named
> tests grouped by common environment requirements, with automatic
> cleanup upon finishing the group/block) without resorting to custom
> classes AND without interfering with the simplicity of existing
> unittests.
>
> I think it can all be done by keeping the existing single "unittest"
> keyword but using various annotations. Integration tests can then be
> built as a separate application that uses an imaginary Phobos
> integration test library to interpret those annotations and provide a
> more complex test structure. And running plain `rdmd -unittest` on the
> actual application modules will still continue to do the same good old
> thing.

So you're saying to use the "unittest" keyword but with a UDA?

Something I already do, but for unit tests. Well, my idea for a testing framework would work both for unit tests and for other, higher levels of testing.

@describe("toMsec")
{
    @it("returns the time in milliseconds") unittest
    {
        assert(true);
    }
}

-- 
/Jacob Carlborg
May 07, 2014
On Wednesday, 7 May 2014 at 06:34:44 UTC, Jacob Carlborg wrote:
> So you're saying to use the "unittest" keyword but with a UDA?

I think this is the most reasonable compromise, one that does not harm the existing system.

> Something I already do, but for unit tests. Well, my idea for a testing framework would work both for unit tests and for other, higher levels of testing.
>
> @describe("toMsec")
> {
>     @it("returns the time in milliseconds") unittest
>     {
>         assert(true);
>     }
> }

Which is exactly why I'd like to defer the exact annotations to a library solution - the exact requirements for such a framework differ widely between projects. I'd want to see something like this instead:

@name("Network test 2")
@requires("Network test 1") @cleanup!removeTemporaries
unittest
{
    // do stuff
}

I have never liked the fancy description syntax of "smart" testing frameworks.
May 07, 2014
On Tuesday, 6 May 2014 at 20:41:01 UTC, Andrei Alexandrescu wrote:
> Fail with diagnostic. -- Andrei

...and do that for every single test case which is affected. That requires either a clear test execution order (including cross-module test dependencies) or shared boilerplate (which becomes messier as more of the environment needs to be tested). Neither is nicely supported by the built-in construct.
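
I.e. every affected test ends up repeating something like this (checkTmpSpace being the hypothetical helper from earlier in the thread):

unittest
{
    checkTmpSpace();  // repeated environment check
    // ... the actual test ...
}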