February 07, 2019
On Thu, Feb 07, 2019 at 06:10:58PM +0000, jmh530 via Digitalmars-d wrote:
> On Thursday, 7 February 2019 at 15:43:34 UTC, H. S. Teoh wrote:
> > [snip]
> > Of course, on the flip side, unittests that acquire global /
> > external resources that need cleanup, etc., are IMNSHO a code smell.
> 
> That's a good point. I should probably just stick pure on every unittest block by default.

That's a good ideal to strive for, but then it would preclude testing non-pure functions, which seems rather too limiting to me.


T

-- 
There are 10 kinds of people in the world: those who can count in binary, and those who can't.
February 08, 2019
On Thursday, 7 February 2019 at 18:53:22 UTC, H. S. Teoh wrote:
> On Thu, Feb 07, 2019 at 06:10:58PM +0000, jmh530 via Digitalmars-d wrote:
>> On Thursday, 7 February 2019 at 15:43:34 UTC, H. S. Teoh wrote:
>> > [snip]
>> > Of course, on the flip side, unittests that acquire global /
>> > external resources that need cleanup, etc., are IMNSHO a code smell.
>> 
>> That's a good point. I should probably just stick pure on every unittest block by default.
>
> That's a good ideal to strive for, but then it would preclude testing non-pure functions, which seems rather too limiting to me.
>
>
> T

When my functions aren't pure I get sad.

Unfortunately sometimes there's nothing I can do about it because I depend on external code that isn't pure itself. But my default for unit tests is `@safe pure`.
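For anyone curious what that default looks like, here's a minimal sketch: the attributes apply to the whole block, so the compiler statically rejects any impure or unsafe call inside it.

```d
// A @safe pure unittest block: the compiler rejects any impure or
// memory-unsafe operation inside it, so the test cannot touch global
// state, files, or sockets.
@safe pure unittest
{
    import std.algorithm.iteration : sum;
    assert([1, 2, 3].sum == 6);
}
```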
February 08, 2019
On Thursday, 7 February 2019 at 18:06:24 UTC, H. S. Teoh wrote:
> On Thu, Feb 07, 2019 at 04:49:38PM +0000, John Colvin via Digitalmars-d wrote: [...]
>> A fork-based unittest runner would solve some problems without having to restart the process (could be expensive startup) or have people re-write their tests to use a new type of assert.
>> 
>> The process is started, static constructors are run setting up anything needed, the process is then forked & the tests run in the fork until death from success or assert, somehow communicating the index of the last successful test to the runner (let's call that tests[i]). Then, if i < tests.length - 1, do another fork and start from tests[i + 2] to skip the one that failed.
>> 
>> There are probably corner cases where you wouldn't want this behavior, but I can't think of one off the top of my head.
>
> One case is where the unittests depend on the state of the filesystem, e.g., they all write to the same temp file as part of the testing process. I don't recommend this practice, though, for obvious reasons.

Why would this cause a problem? Unless the tests are dependent on the *order* they're run in, which is of course madness. (Note that I am not suggesting running in parallel, and that file descriptors would be inherited in the child fork.)

Can you sketch out a concrete case?

> (In fact, I'm inclined to say unittests should *not* depend on the filesystem at all; IMNSHO `import std.stdio;` is a code smell in unittests. The code should be refactored such that it is testable without involving the actual filesystem (simulated filesystems can be used for testing FS-dependent logic, if the API is designed properly). But I bet most D unittests aren't written up to that standard!)
>
>
> T

I reckon most are reasonably good w.r.t. this, by virtue of being simple unittests of pure-ish code.
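For illustration, here's a rough sketch of the fork-based scheme described above, assuming POSIX and a flat array of test function pointers; error handling, partial pipe reads/writes, and the real druntime plumbing are all elided.

```d
import core.sys.posix.sys.wait : waitpid;
import core.sys.posix.unistd : _exit, close, fork, pipe, read, write;

alias TestFn = void function();

void runForked(TestFn[] tests)
{
    size_t start = 0;
    while (start < tests.length)
    {
        int[2] fds;
        pipe(fds);
        immutable pid = fork();
        if (pid == 0)                        // child
        {
            close(fds[0]);
            foreach (i; start .. tests.length)
            {
                tests[i]();                  // a failed assert kills the child
                write(fds[1], &i, i.sizeof); // report the last success
            }
            _exit(0);
        }
        close(fds[1]);                       // parent: see how far the child got
        size_t idx;
        bool sawAny = false;
        while (read(fds[0], &idx, idx.sizeof) == idx.sizeof)
            sawAny = true;
        close(fds[0]);
        int status;
        waitpid(pid, &status, 0);
        // idx is the last test that passed, so idx + 1 is the one that
        // failed; re-fork starting just past it. This also terminates the
        // loop once a child has run everything to completion.
        start = sawAny ? idx + 2 : start + 1;
    }
}
```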
February 08, 2019
On Friday, February 8, 2019 3:04:35 AM MST John Colvin via Digitalmars-d wrote:
> On Thursday, 7 February 2019 at 18:06:24 UTC, H. S. Teoh wrote:
> > On Thu, Feb 07, 2019 at 04:49:38PM +0000, John Colvin via Digitalmars-d wrote: [...]
> >
> >> A fork-based unittest runner would solve some problems without having to restart the process (could be expensive startup) or have people re-write their tests to use a new type of assert.
> >>
> >> The process is started, static constructors are run setting up anything needed, the process is then forked & the tests run in the fork until death from success or assert, somehow communicating the index of the last successful test to the runner (let's call that tests[i]). Then, if i < tests.length - 1, do another fork and start from tests[i + 2] to skip the one that failed.
> >>
> >> There are probably corner cases where you wouldn't want this behavior, but I can't think of one off the top of my head.
> >
> > One case is where the unittests depend on the state of the filesystem, e.g., they all write to the same temp file as part of the testing process. I don't recommend this practice, though, for obvious reasons.
>
> Why would this cause a problem? Unless the tests are dependent on the *order* they're run in, which is of course madness. (Note that I am not suggesting running in parallel, and that file descriptors would be inherited in the child fork.)
>
> Can you sketch out a concrete case?

I've worked on projects before where a set of tests built on top of the previous ones in order to be faster - e.g. a program operating on a database could have each test add or remove items from the database, with each test then depending on the previous ones, because whoever wrote the tests didn't want to have to recreate everything with each test. IIRC, the tests for an XML parser at a company that I used to work for built on one another in such a manner that they didn't have to keep reading or writing the file from scratch. And I'm pretty sure that I've seen other cases where global variables were used between tests with the assumption that the previous tests left them in a particular state, though I can't think of any other concrete examples at the moment.

In general, I don't think that this is a good way to write tests, but it _can_ reduce how long your tests take, and I've seen it done in practice.

IIRC, in the past, there was some discussion of running unittest blocks in parallel, but it was quickly determined that if we were going to do something like that, we'd need a way either to enforce that certain tests not be run in parallel or to mark them so that they would be run in parallel, because the odds were too high that someone out there was writing tests that required that the unittest blocks be run in order and not in parallel. Forking the program to run each unittest block does change the situation somewhat over running each unittest in its own thread, though not as much as it would for many languages, because of D's thread-local-by-default storage (which makes each thread more or less the same as a fork with regard to most of the state in your typical D program - just not all of it).
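To illustrate the thread-local-by-default point with a sketch: a plain module-level variable gets one copy per thread, so only `__gshared`/`shared` state (and process-wide resources like file descriptors) would leak between thread-per-test runs, whereas a fork isolates all of it.

```d
int counter;         // thread-local by default: one copy per thread
__gshared int total; // explicitly shared: one copy for the whole process

unittest { counter = 1; total = 1; }

unittest
{
    // With the default runner (same thread, declaration order), both
    // survive from the previous test. A new thread would see counter == 0
    // but total == 1; a fresh fork would reset both.
    assert(counter == 1 && total == 1);
}
```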

- Jonathan M Davis



February 08, 2019
On Friday, 8 February 2019 at 13:48:19 UTC, Jonathan M Davis wrote:
> On Friday, February 8, 2019 3:04:35 AM MST John Colvin via Digitalmars-d wrote:
>> On Thursday, 7 February 2019 at 18:06:24 UTC, H. S. Teoh wrote:
>> > On Thu, Feb 07, 2019 at 04:49:38PM +0000, John Colvin via Digitalmars-d wrote: [...]
>> >

> [...]

> In general, I don't think that this is a good way to write tests, but it _can_ reduce how long your tests take, and I've

To me the easiest way to do that is to not talk to a real database except in integration/E2E tests and not have too many of those.
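A hedged sketch of that approach: the code under test depends on a small interface, and the unit tests hand it an in-memory fake; every name below is hypothetical, for illustration only.

```d
// Hypothetical storage interface; production code would wrap the real DB.
interface UserStore
{
    void put(string name) @safe;
    bool has(string name) @safe;
}

// In-memory fake: plenty for unit tests, no database required.
final class FakeUserStore : UserStore
{
    private bool[string] names;
    void put(string name) @safe { names[name] = true; }
    bool has(string name) @safe { return (name in names) !is null; }
}

bool registerUser(UserStore store, string name) @safe
{
    if (store.has(name)) return false;
    store.put(name);
    return true;
}

@safe unittest
{
    auto store = new FakeUserStore;
    assert(registerUser(store, "alice"));
    assert(!registerUser(store, "alice")); // duplicate rejected
}
```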

> IIRC, in the past, there was some discussion of running unittest blocks in parallel, but it was quickly determined that if we were going to do something like that, we'd need a way either to enforce that certain tests not be run in parallel or to mark them so that they would be run in parallel, because the odds were too high that someone out there was writing tests that required that the unittest blocks be run in order and not in parallel.

https://github.com/atilaneves/unit-threaded/blob/183a8cd4d3e271d750e40321ade234ed78554730/subpackages/runner/source/unit_threaded/runner/attrs.d#L10

I always feel dirty when I have to use it, though.
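For reference, usage looks roughly like this; I'm assuming the attribute at the link above is `@Serial` (check attrs.d for the exact name), which makes unit-threaded run the tagged test serially rather than in parallel with the rest.

```d
// Assumed unit-threaded usage: @Serial keeps this test out of the
// parallel pool because it touches a shared file on disk.
import unit_threaded;

@Serial
@("writes to a shared scratch file")
unittest
{
    import std.file : remove, tempDir, write;
    import std.path : buildPath;

    const path = buildPath(tempDir, "ut_scratch.txt");
    scope (exit) remove(path);
    write(path, "hello");
}
```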
February 08, 2019
On Fri, Feb 08, 2019 at 09:02:02AM +0000, Atila Neves via Digitalmars-d wrote:
> On Thursday, 7 February 2019 at 18:53:22 UTC, H. S. Teoh wrote:
> > On Thu, Feb 07, 2019 at 06:10:58PM +0000, jmh530 via Digitalmars-d wrote:
> > > On Thursday, 7 February 2019 at 15:43:34 UTC, H. S. Teoh wrote:
> > > > [snip]
> > > > Of course, on the flip side, unittests that acquire global /
> > > > external resources that need cleanup, etc., are IMNSHO a code
> > > > smell.
> > > 
> > > That's a good point. I should probably just stick pure on every unittest block by default.
> > 
> > That's a good ideal to strive for, but then it would preclude testing non-pure functions, which seems rather too limiting to me.
[...]
> When my functions aren't pure I get sad.

It's funny, my default coding style is pure, but I never bothered to stick 'pure' on my declarations.  I probably should start doing that so that the compiler can catch any non-pure stuff I've overlooked.
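Concretely, once the declaration carries `pure`, any overlooked impurity becomes a compile error instead of a silent habit:

```d
// Marking the function pure turns accidental impurity into a compile error.
int twice(int x) pure @safe
{
    // import std.stdio : writeln;
    // writeln(x); // Error: pure function 'twice' cannot call impure 'writeln'
    return 2 * x;
}

pure @safe unittest
{
    assert(twice(21) == 42);
}
```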


> Unfortunately sometimes there's nothing I can do about it because I depend on external code that isn't pure itself. But my default for unit tests is `@safe pure`.

That's a good practice to have.  But a lot of D code out there in the wild isn't written this way.


T

-- 
A bend in the road is not the end of the road unless you fail to make the turn. -- Brian White
February 10, 2019
On 2019-02-08 14:48, Jonathan M Davis wrote:

> IIRC, the tests
> for an XML parser at a company that I used to work for built on one another
> in such a manner so that they didn't have to keep reading or writing the
> file from scratch.

The parser should be separate from the IO. But I guess you know that :)
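A minimal sketch of that separation, with hypothetical names: the parser is a pure function over an in-memory buffer, and only a thin wrapper touches the filesystem.

```d
struct Document { string rootName; }

// Pure core: parses from memory, trivially unit-testable. (The parsing
// itself is stubbed out here; names are for illustration only.)
Document parseXml(const(char)[] input) pure @safe
{
    return Document(input.length ? "root" : "");
}

// Thin impure shell: the only place that does IO.
Document parseXmlFile(string path) @safe
{
    import std.file : readText;
    return parseXml(readText(path));
}

pure @safe unittest
{
    assert(parseXml("<root/>").rootName == "root");
}
```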

-- 
/Jacob Carlborg
February 10, 2019
On 2019-02-08 11:04, John Colvin wrote:

> Why would this cause a problem? Unless the tests are dependent on the
> *order* they're run in, which is of course madness. (Note that I am not
> suggesting running in parallel, and that file descriptors would be
> inherited in the child fork.)

Ideally the tests should be run in random order, to avoid the tests depending on each other.
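A tiny sketch of how a custom runner could do that, using a logged seed so an order-dependent failure can be reproduced:

```d
import std.random : Random, randomShuffle, unpredictableSeed;
import std.stdio : writeln;

void runShuffled(void function()[] tests)
{
    immutable seed = unpredictableSeed;
    writeln("test order seed: ", seed); // re-run with this seed to reproduce
    auto rng = Random(seed);
    randomShuffle(tests, rng);
    foreach (test; tests)
        test();
}
```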

-- 
/Jacob Carlborg
February 10, 2019
On 2019-02-08 16:03, Atila Neves wrote:

> To me the easiest way to do that is to not talk to a real database
> except in integration/E2E tests and not have too many of those.

When you do need to run against a real database, one can run the tests in transactions and roll back the transaction after each test.
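A hedged sketch of that pattern, with a hypothetical Connection type standing in for whatever DB client is in use (note a scope guard is only guaranteed to run for normal exit and thrown Exceptions, not Errors):

```d
// Hypothetical minimal DB client API; only exec() is assumed here.
interface Connection { void exec(string sql); }

// Run a test inside a transaction and always roll it back, leaving the
// database exactly as it was found.
void withRollback(Connection conn, void delegate(Connection) test)
{
    conn.exec("BEGIN");
    scope (exit) conn.exec("ROLLBACK"); // undo the test's changes on the way out
    test(conn);
}
```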

-- 
/Jacob Carlborg
February 10, 2019
On Sunday, February 10, 2019 4:10:58 AM MST Jacob Carlborg via Digitalmars-d wrote:
> On 2019-02-08 14:48, Jonathan M Davis wrote:
> > IIRC, the tests
> > for an XML parser at a company that I used to work for built on one
> > another in such a manner so that they didn't have to keep reading or
> > writing the file from scratch.
>
> The parser should be separate from the IO. But I guess you know that :)

I'm certainly not trying to argue that it's best practice to have unittest blocks that depend on each other, but I am saying that I have seen the equivalent in other languages in real world code at jobs that I've had. So, I fully expect that some folks out there are writing unit tests in D where unittest blocks depend on the state left by previous unittest blocks - be it the state in memory or what's on disk. So, attempting to run a unittest block after a previous one failed, to run them in a different order, to run them in separate threads, to run them in separate processes, or anything along those lines will likely break actual code (at least as long as we're talking about the standard test runner, as opposed to someone writing their code to work with a particular test runner that might work differently). It was code doing something that it arguably shouldn't be doing, but I fully expect that such code exists - and the odds of it existing increase as D grows.

And yes, the parser should definitely be separate from the code that does the actual I/O, but I've seen plenty of code out there that doesn't do that - especially when it's code written to solve a specific problem for a specific application rather than being a general purpose solution.

- Jonathan M Davis


