April 28, 2015
On Monday, 27 April 2015 at 11:30:04 UTC, Steven Schveighoffer wrote:
> On 4/27/15 6:20 AM, Dicebot wrote:
>> On Monday, 27 April 2015 at 10:15:20 UTC, Kagamin wrote:
>>> On Monday, 27 April 2015 at 09:22:48 UTC, Dicebot wrote:
>>>> Compiling tests of dependencies pretty much never causes any notable
>>>> slowdown.
>>>
>>> This thread doesn't support that view, see the first post.
>>
>> Which part exactly? I only see comparisons for compiling AND running
>> tests for dependencies. And it is usually running which causes the
>> slowdown.
>
> The problem is as follows:
>
> 1. Unit tests for some library are written for that library. They are written to run tests during unit tests of that library only (possibly with certain requirements of environment, including build lines, or expectations of system resource availability).
> 2. People who import that library's modules are not trying to test the library, they are trying to test their code.

Those are two points I fundamentally disagree with. It doesn't matter where the code comes from - in the end the only thing that matters is the correctness of your application as a whole. And considering tests are not necessarily pure, the results may very well differ between running those tests separately and as part of the application test suite as a whole. Unless compiling some specific tests causes some proven _compilation_ slowdown (I have yet to see that), all of them must be compiled and optionally filtered by the runtime test runner.

And if tests are written in such a weird way that they can only be run within that library's test step, those are not really unittests.

Usage of version(MyLibTests) in Nick's SDL library annoyed me so much that I forked it to never deal with those pesky versions again. I don't want to do that with Phobos too.
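
For reference, a minimal sketch of the version-gating pattern in question (the module and version names here are made up for illustration, they are not the actual library's):

module somelib.util;

int twice(int x) { return 2 * x; }

version (SomeLibTests) unittest
{
    // Compiles only when the library's own build passes -version=SomeLibTests,
    // so a downstream `-unittest` build never sees or runs this test.
    assert(twice(2) == 4);
}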
April 28, 2015
On Tuesday, 28 April 2015 at 16:40:05 UTC, Dicebot wrote:
> On Monday, 27 April 2015 at 11:30:04 UTC, Steven Schveighoffer wrote:
>> On 4/27/15 6:20 AM, Dicebot wrote:
>>> On Monday, 27 April 2015 at 10:15:20 UTC, Kagamin wrote:
>>>> On Monday, 27 April 2015 at 09:22:48 UTC, Dicebot wrote:
>>>>> Compiling tests of dependencies pretty much never causes any notable
>>>>> slowdown.
>>>>
>>>> This thread doesn't support that view, see the first post.
>>>
>>> Which part exactly? I only see comparisons for compiling AND running
>>> tests for dependencies. And it is usually running which causes the
>>> slowdown.
>>
>> The problem is as follows:
>>
>> 1. Unit tests for some library are written for that library. They are written to run tests during unit tests of that library only (possibly with certain requirements of environment, including build lines, or expectations of system resource availability).
>> 2. People who import that library's modules are not trying to test the library, they are trying to test their code.
>
> Those are two points I fundamentally disagree with. It doesn't matter where the code comes from - in the end the only thing that matters is the correctness of your application as a whole. And considering tests are not necessarily pure, the results may very well differ between running those tests separately and as part of the application test suite as a whole. Unless compiling some specific tests causes some proven _compilation_ slowdown (I have yet to see that), all of them must be compiled and optionally filtered by the runtime test runner.
>
> And if tests are written in such a weird way that they can only be run within that library's test step, those are not really unittests.
>
> Usage of version(MyLibTests) in Nick's SDL library annoyed me so much that I forked it to never deal with those pesky versions again. I don't want to do that with Phobos too.

Then how do you propose to approach the containers problem?

On one hand, a unittest on containers themselves involves testing the container for integrity after each operation.

On the other hand, a unittest on another module may involve heavy use of containers (say, N operations with a container), and if integrity checks are enabled at this time, it totals to N^2 trivial operations which may not be feasible.
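
A rough sketch of how that quadratic cost shows up (a hypothetical container, not the actual Phobos code):

struct CountedBag
{
    int[] data;
    long cachedSum;

    void insert(int x)
    {
        data ~= x;          // the real operation: O(1) amortized
        cachedSum += x;
        version (unittest) checkIntegrity(); // O(n) rescan after every operation
    }

    void checkIntegrity()
    {
        long sum = 0;
        foreach (v; data) sum += v;
        assert(sum == cachedSum);
    }
}

unittest
{
    // N cheap operations, each triggering an O(n) integrity check:
    // roughly N^2/2 element visits in total.
    CountedBag b;
    foreach (i; 0 .. 1000)
        b.insert(i);
}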

Ivan Kazmenko.
April 28, 2015
On 4/28/15 12:40 PM, Dicebot wrote:
> On Monday, 27 April 2015 at 11:30:04 UTC, Steven Schveighoffer wrote:
>> On 4/27/15 6:20 AM, Dicebot wrote:
>>> On Monday, 27 April 2015 at 10:15:20 UTC, Kagamin wrote:
>>>> On Monday, 27 April 2015 at 09:22:48 UTC, Dicebot wrote:
>>>>> Compiling tests of dependencies pretty much never causes any notable
>>>>> slowdown.
>>>>
>>>> This thread doesn't support that view, see the first post.
>>>
>>> Which part exactly? I only see comparisons for compiling AND running
>>> tests for dependencies. And it is usually running which causes the
>>> slowdown.
>>
>> The problem is as follows:
>>
>> 1. Unit tests for some library are written for that library. They are
>> written to run tests during unit tests of that library only (possibly
>> with certain requirements of environment, including build lines, or
>> expectations of system resource availability).
>> 2. People who import that library's modules are not trying to test the
>> library, they are trying to test their code.
>
> Those are two points I fundamentally disagree with.

I think by default, nobody wants to test already-tested code in their application. That's not the point of unit tests.

For example, if there's a module that has:

struct S(T)
{
  unittest {...}
}

unittest
{
   S!int s; // run unit tests for int
}

Then I can assume that unit test was run and passed. If I want to test it to be sure, I'll run that library's unit tests!

But I don't want my unit test that uses S!int to also test S's unit tests for S!int. That doesn't make sense. I'm wasting cycles on something that is already proven.

Now, if I want to run unit tests for S!MyCustomType, that makes sense. The library didn't test that. Which is why there should be a way to do it (if it's valid!).

Right now, there isn't a way to control what runs and what doesn't. I don't care where it's decided or how it's decided, but somehow I should be able to NOT run S!int tests again.

> Unless compiling some specific tests causes some proven _compilation_
> slowdown (I have yet to see that), all of them must be compiled and
> optionally filtered by the runtime test runner.

For example, RedBlackTree when running unit tests does a sanity check of the tree for every operation. If you then use a RedBlackTree in your unit tests, you are doing that again. Maybe it's worth it to you, but it can increase the runtime of your test by orders of magnitude. Last time I checked, performance was high on people's priority lists.

Or, potentially you could be running INVALID TESTS, because the tests weren't written for your specific usages. I ran into this when others complained that they couldn't test their code that uses RedBlackTree!string, because all the unit tests instantiated with int literals. This is simply a "does not work" thing. You can't turn them off, and you can't test your code. Is it worth it to you for a library to try to compile tests that prevent your test build from happening? Do you enjoy waiting on others to fix their problems so you can test your code? This is a far from ideal situation. And it's not something the compiler tells you will happen.
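
A minimal sketch (a hypothetical template, not the actual Phobos code) of that failure mode: the unittest assumes T can be built from an int literal, so instantiating it with string under -unittest breaks the downstream build.

struct C(T)
{
    T[] items;
    void put(T x) { items ~= x; }

    unittest
    {
        C!T c;
        c.put(1);  // fine for C!int, but a compile error for C!string
        assert(c.items.length == 1);
    }
}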

> And if tests are written in such a weird way that they can only be run within
> that library's test step, those are not really unittests.

This means you should never ever write unit tests into a template. Because it's impossible to create tests that will work for every single instantiation. See RedBlackTree for all the crud you have to put in for this to be viable today. I'd love to get rid of all that.

In which case, let's disallow that capability. I'm fine with that too. This simply means your "all encompassing tests" will not be all encompassing any more; they will only test a select few instantiations, and you can't control that from your application code either.

I want a way to control it as a template designer and as a template instantiator. Flexibility is king.

-Steve
April 28, 2015
On Tuesday, 28 April 2015 at 20:57:01 UTC, Ivan Kazmenko wrote:
> Then how do you propose to approach the containers problem?
>
> On one hand, a unittest on containers themselves involves testing the container for integrity after each operation.
>
> On the other hand, a unittest on another module may involve heavy use of containers (say, N operations with a container), and if integrity checks are enabled at this time, it totals to N^2 trivial operations which may not be feasible.

If it is that slow, I tend to put such tests outside of the tested symbol so that they won't be repeated over and over again for templates. There is still full access to private symbols, so it is always possible. Though that is a very exceptional measure reserved only for the most critical cases (everything below 30-60 seconds total is OK to me).
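
Roughly what that looks like (a hypothetical module, just to illustrate the point): a module-level unittest still has access to private members, but is not duplicated per instantiation.

struct Stack(T)
{
    private T[] items;  // still reachable from tests in this module

    void push(T x) { items ~= x; }
    T top() { return items[$ - 1]; }
}

unittest
{
    // Runs once when this module is compiled with -unittest, instead of
    // once per Stack!T instantiation as a unittest inside the struct would.
    Stack!int s;
    s.push(1);
    s.push(2);
    assert(s.top == 2);
    assert(s.items.length == 2); // private access works within the module
}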
April 28, 2015
On Tuesday, 28 April 2015 at 21:28:09 UTC, Steven Schveighoffer wrote:
> I think by default, nobody wants to test already-tested code in their application. That's not the point of unit tests.

How many more times should I repeat that I am exactly that nobody?

I do want to test everything as part of my app tests, including all possible dependencies, transitively. This is an awesome default. With a simple `rdmd -main -unittest` call I can ensure that a certain app/module works correctly without having to trust the maintainers of dependencies to run tests regularly, and without even knowing what those dependencies are. It is beautiful in its simplicity, which makes it a good default.

> struct S(T)
> {
>   unittest {...}
> }
>
> unittest
> {
>    S!int s; // run unit tests for int
> }

If this is a very slow test (and many instantiations of S are expected), simply put the test blocks outside of the aggregate.

> Now, if I want to run unit tests for S!MyCustomType, that makes sense. The library didn't test that. Which is why there should be a way to do it (if it's valid!).

There is something fundamentally broken with a template that needs to be tested for each new user type argument. Built-in tests must cover all possible type classes or they are incomplete.

>> Unless compiling some specific tests causes some proven _compilation_
>> slowdown (I have yet to see that), all of them must be compiled and
>> optionally filtered by the runtime test runner.
>
> For example, RedBlackTree when running unit tests does a sanity check of the tree for every operation. If you then use a RedBlackTree in your unit tests, you are doing that again. Maybe it's worth it to you, but it can increase the runtime of your test by orders of magnitude. Last time I checked, performance was high on people's priority lists.

Those must be very slow sanity checks :X Sounds weird, but I will accept it as given. Move the tests out of the RBT definition then.

Also, application performance has nothing in common with test performance. I can't comment on the rest because from the very beginning you seem to be making statements about testing in general which do not match my vision at all.
April 29, 2015
On 4/28/15 7:04 PM, Dicebot wrote:
> On Tuesday, 28 April 2015 at 21:28:09 UTC, Steven Schveighoffer wrote:
>> I think by default, nobody wants to test already-tested code in their
>> application. That's not the point of unit tests.
>
> How many more times should I repeat that I am exactly that nobody?

OK, sorry. nobody-1 :)

>
> I do want do test everything as part of my app tests, including all
> possible dependencies, transitively. This is awesome default. With a
> simple `rdmd -main -unittest` call I can ensure that certain app/module
> works correctly without having to trust maintainers of dependencies to
> run tests regularly and without even knowing what those dependencies
> are. It is beautiful in its simplicity which makes it good default.

Or `rdmd -main -unittest` -> fails to build, because the templated unit test doesn't work on your code. Good luck with that.

Again, there are so many reasons I should not have to worry about the unit tests in my library being run with your code. That's on you. I didn't write them for your code to run; if you want to run them, run my unit test script.

-Steve
April 29, 2015
On 2014-09-10 04:13, Nick Sabalausky wrote:
> This is getting to be (or rather, *continuing* to be) a royal PITA:
>
> https://github.com/rejectedsoftware/vibe.d/issues/673
>
> I don't mean to pick on Vibe.d in particular, but can we have a solution
> (that doesn't involve obscure corners of druntime, or demanding everyone
> use the same build system) for running our unittests *without*
> triggering a cascading avalanche of unittests from ALL third party libs
> that don't cooperate with the [frankly] quite clumsy
> version(mylib_unittests) hack?!

The simplest solution, which everybody will hate (except me): put the unit tests in a separate directory.
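
For example (a hypothetical layout, not any existing project):

source/mylib/widget.d    - implementation only, no unittest blocks
tests/widget_test.d      - imports mylib.widget and holds its unittests

Only the library's own test build passes the tests/ directory to the compiler; a downstream -unittest build of an application never even sees those files.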

-- 
/Jacob Carlborg
April 29, 2015
On Tuesday, 28 April 2015 at 16:40:05 UTC, Dicebot wrote:
> Those are two points I fundamentally disagree with. It doesn't matter where the code comes from - in the end the only thing that matters is the correctness of your application as a whole.

3rd party libraries are supposed to be tested already, if you want to test it, you should go and properly run its test suite. Whatever template unittests you accidentally instantiated in your code mean nothing with respect to overall correctness.

> And if tests are written in such a weird way that they can only be run within that library's test step, those are not really unittests.

The library can be tested only when it's compiled in unittest mode as a whole. When you link with its release version its unittests are not even compiled at all.
April 29, 2015
On Wednesday, 29 April 2015 at 04:53:47 UTC, Steven Schveighoffer wrote:
> or rdmd -main -unittest -> fail to build because the templated unit test doesn't work on your code. Good luck with that.

I will create an upstream PR to fix it; problem solved. I have never had a need to do so though, not even a single time.

Also: can you please point me again to the part of RBT that causes compilation slowdowns with version(unittest)? I looked through it and found only runtime checks. And for those, "move out of the aggregate" + "runtime test filtering" does what you want.

> Again, there are so many reasons I should not have to worry about unit tests in my library being run with your code. That's on you. I didn't write it for your code to run, if you want to run it, run my unit test script.

If you don't want to run it, filter it out in the test runner. I assure you, there are at least as many reasons why I shouldn't have to worry about whether you actually run tests for your library and how those need to be run. Both defaults can be circumvented.
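
For the record, druntime already exposes a hook for that kind of filtering. A rough sketch (the "vendored." module prefix is just an assumption for illustration):

import core.runtime;

shared static this()
{
    Runtime.moduleUnitTester = function bool()
    {
        import std.algorithm.searching : startsWith;

        foreach (m; ModuleInfo)
        {
            if (m is null) continue;
            auto test = m.unitTest;   // null if the module has no tests
            if (test is null) continue;
            if (m.name.startsWith("vendored."))
                continue;             // skip dependency tests
            test();
        }
        return true; // report success so execution continues to main()
    };
}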
April 29, 2015
On Wednesday, 29 April 2015 at 07:44:17 UTC, Kagamin wrote:
> On Tuesday, 28 April 2015 at 16:40:05 UTC, Dicebot wrote:
>> Those are two points I fundamentally disagree with. It doesn't matter where the code comes from - in the end the only thing that matters is the correctness of your application as a whole.
>
> 3rd party libraries are supposed to be tested already, if you want to test it, you should go and properly run its test suite. Whatever template unittests you accidentally instantiated in your code mean nothing with respect to overall correctness.

And software is supposed to not have bugs, right. I can put some trust in the regular testing of Phobos, but it ends there. If "accidental" template tests from user code break any 3rd party library and its author refuses to accept a PR with the fix, I will simply call it broken and fork.

>> And if tests are written in such a weird way that they can only be run within that library's test step, those are not really unittests.
>
> The library can be tested only when it's compiled in unittest mode as a whole. When you link with its release version its unittests are not even compiled at all.

Which is exactly why "all source builds with the same flags" is the only reasonable compilation model for D. Providing release static libraries only causes problems and never brings benefits. Even for Phobos it feels like a mistake in the long run.