May 01, 2014
On 5/1/14, 4:35 AM, Johannes Pfau wrote:
> Am Wed, 30 Apr 2014 09:02:54 -0700
> schrieb Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org>:
>
>>> To summarize: It provides a function pointer for every unit test to
>>> druntime or user code. This is actually easy to do. Naming tests
>>> requires changes in the parser, but I guess that shouldn't be
>>> difficult either.
>>
>> That's fantastic, would you be willing to reconsider that work?
>
> Are you still interested in this?

Yes.

> I guess we could build a std.test phobos module to completely replace
> the current unittest implementation, see:
> http://forum.dlang.org/post/ljtbch$lg6$1@digitalmars.com
>
> This seems to be a better solution, especially considering extension
> possibilities.

I think anything that needs more than the user writing unittests and adding a line somewhere would be suboptimal. Basically we need all we have now, just run in parallel.
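A minimal sketch of what "all we have now, just run in parallel" could look like, assuming the default module unit tester is replaced through druntime's `Runtime.moduleUnitTester` hook (the error handling here is illustrative, not the actual druntime implementation):

```d
// Sketch: run each module's aggregated unittest block on the task pool.
// Runtime.moduleUnitTester is a real druntime hook; the rest is a guess
// at how a parallel runner might use it.
import core.runtime : Runtime;
import std.parallelism : parallel;

shared static this()
{
    Runtime.moduleUnitTester = function bool()
    {
        // Collect the modules that actually have unittest blocks.
        ModuleInfo*[] mods;
        foreach (m; ModuleInfo)
            if (m.unitTest !is null)
                mods ~= m;

        bool ok = true; // benign race: only ever flipped to false
        // std.parallelism picks the worker count from the core count,
        // so the user never specifies a thread count.
        foreach (m; parallel(mods))
        {
            auto fp = m.unitTest;
            try fp();
            catch (Throwable) ok = false;
        }
        return ok;
    };
}
```

The user-facing cost stays exactly what Andrei asks for: write `unittest` blocks as usual, add one `shared static this` somewhere.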


Andrei

May 01, 2014
On Thursday, 1 May 2014 at 12:04:57 UTC, Xavier Bigand wrote:
> It's just a lot harder when you are under pressure.
> I am working for a very small company and our deadlines clearly don't help us with that, and because I am in the video game industry it's not really critical to have small bugs.
>
> Not everybody has the capacity or resources (essentially time) to design their code in pure conformance with the unittest definition, but IMO that isn't an excuse to avoid tests completely.
> If a language/standard library can help democratize testing, that's a good thing, so maybe writing tests has to stay relatively simple and straightforward.
>
> My point is just that when you are doing things only for yourself, it's often simpler to do them as they must be done.

I know that, and I don't have the luxury of time for perfect tests either :) But it is more about a state of mind than actual time consumption - once you start keeping higher-level tests with I/O separate, and observing how a piece of functionality can be tested in a contained way, your approach to designing modules changes. At some point one simply starts to write unit-test-friendly modules from the very first go; it is all about actually thinking it through.

Using less OOP and more functional programming helps with that, btw :)

I can readily admit that in real industry projects one is likely to do many different "dirty" things, and that this is inevitable. What I do object to is the statement that this is the way to go in general, especially in a language's standard library.
May 01, 2014
On 5/1/14, 4:41 AM, Jacob Carlborg wrote:
> On 2014-04-30 22:41, Andrei Alexandrescu wrote:
>
>> Yah I think that's possible but I'd like the name to be part of the
>> function name as well e.g. unittest__%s.
>
> Why is that necessary? To have the correct symbol name when debugging?

It's nice to have the name available in other tools (stack trace, debugger).

> The Ruby syntax looks like this:
[snip]
> The unit test runner can also print out a documentation, basically all
> text in the "it" and "describe" parameters. Something like this:
> https://coderwall-assets-0.s3.amazonaws.com/uploads/picture/file/1949/rspec_html_screen.png

That's all nice, but I feel we're going gung ho with overengineering already. If we give unittests names and then offer people a button "parallelize unittests" to push (don't even specify the number of threads! let the system figure it out depending on cores), that's a good step to a better world.


Andrei

May 01, 2014
On 5/1/14, 4:44 AM, w0rp wrote:
> On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
>> On 2014-04-30 23:35, Andrei Alexandrescu wrote:
>>
>>> Agreed. I think we should look into parallelizing all unittests. --
>>> Andrei
>>
>> I recommend running the tests in random order as well.
>
> This is a bad idea. Tests could fail only some of the time. Even if bugs
> are missed, I would prefer it if tests did exactly the same thing every
> time.

I do random testing all the time, and I print the seed of the prng upon startup. When something fails randomly, I take the seed and seed the prng with it to reproduce. -- Andrei
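The reproduce-by-seed workflow Andrei describes can be sketched in a few lines of D; the command-line handling here is illustrative, not an existing tool:

```d
// Sketch: print the PRNG seed at startup; accept a previously printed
// seed as an argument to replay a failing randomized run exactly.
import std.conv : to;
import std.random : Random, uniform, unpredictableSeed;
import std.stdio : writefln;

void main(string[] args)
{
    // Fresh seed normally; a reported seed when reproducing a failure.
    immutable uint seed = args.length > 1 ? args[1].to!uint
                                          : unpredictableSeed;
    writefln("prng seed: %s", seed);

    auto rng = Random(seed);
    // Every randomized test input drawn from rng is now fully
    // determined by the printed seed.
    auto input = uniform(0, 1000, rng);
}
```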
May 01, 2014
On Thu, 01 May 2014 11:04:31 -0400, Andrei Alexandrescu
<SeeWebsiteForEmail@erdani.org> wrote:

> On 5/1/14, 4:05 AM, Jacob Carlborg wrote:
>> On 2014-04-30 23:35, Andrei Alexandrescu wrote:
>>
>>> Agreed. I think we should look into parallelizing all unittests. --
>>> Andrei
>>
>> I recommend running the tests in random order as well.
>
> Great idea! -- Andrei

I think we can configure this at runtime.

Imagine, you have multiple failing unit tests. You see the first failure.
You find the issue, try and fix the problem, or instrument it, and now a
DIFFERENT test fails. Now focus on that one, yet a different one fails.

This is just going to equal frustration.

If you want to run random, we can do that. If you want to run in order,
that also should be possible. In fact, while debugging, you need to run
them in order, and serially.

-Steve
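Steve's point - shuffled for stress, in-order and serial for debugging - amounts to a runtime switch in the test runner. A hedged sketch, where the runner signature and flag are hypothetical:

```d
// Sketch: ordering as a runtime choice. Shuffling catches hidden
// order dependencies; a fixed order (and serial execution) keeps a
// debugging session deterministic.
import std.random : Random, randomShuffle;

void runTests(void function()[] tests, bool shuffled, uint seed)
{
    if (shuffled)
    {
        auto rng = Random(seed);   // fixed seed keeps a failure replayable
        randomShuffle(tests, rng);
    }
    foreach (t; tests)
        t();                       // serial; deterministic given seed/flag
}
```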
May 01, 2014
On 5/1/14, 8:04 AM, Dicebot wrote:
> On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei Alexandrescu wrote:
>> On 5/1/14, 1:34 AM, Dicebot wrote:
>>> I have just recently gone through some of our internal projects removing
>>> all accidental I/O tests, for the very reason that /tmp was full
>>
>> Well a bunch of stuff will not work on a full /tmp. Sorry, hard to
>> elicit empathy with a full /tmp :o). -- Andrei
>
> So you are OK with your unit tests failing randomly with no clear
> diagnostics?

I'm OK with my unit tests failing on a machine with a full /tmp. The machine needs fixing. -- Andrei


May 01, 2014
On Thursday, 1 May 2014 at 15:37:21 UTC, Andrei Alexandrescu wrote:
> On 5/1/14, 8:04 AM, Dicebot wrote:
>> On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei Alexandrescu wrote:
>>> On 5/1/14, 1:34 AM, Dicebot wrote:
>>>> I have just recently gone through some of our internal projects removing
>>>> all accidental I/O tests, for the very reason that /tmp was full
>>>
>>> Well a bunch of stuff will not work on a full /tmp. Sorry, hard to
>>> elicit empathy with a full /tmp :o). -- Andrei
>>
>> So you are OK with your unit tests failing randomly with no clear
>> diagnostics?
>
> I'm OK with my unit tests failing on a machine with a full /tmp. The machine needs fixing. -- Andrei

It got full because of tests (surprise!). Your actions?
May 01, 2014
On 5/1/14, 9:07 AM, Dicebot wrote:
> On Thursday, 1 May 2014 at 15:37:21 UTC, Andrei Alexandrescu wrote:
>> On 5/1/14, 8:04 AM, Dicebot wrote:
>>> On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei Alexandrescu wrote:
>>>> On 5/1/14, 1:34 AM, Dicebot wrote:
>>>>> I have just recently gone through some of our internal projects
>>>>> removing all accidental I/O tests, for the very reason that /tmp
>>>>> was full
>>>>
>>>> Well a bunch of stuff will not work on a full /tmp. Sorry, hard to
>>>> elicit empathy with a full /tmp :o). -- Andrei
>>>
>>> So you are OK with your unit tests failing randomly with no clear
>>> diagnostics?
>>
>> I'm OK with my unit tests failing on a machine with a full /tmp. The
>> machine needs fixing. -- Andrei
>
> It got full because of tests (surprise!). Your actions?

Fix the machine and reduce the output created by the unittests. It's a simple engineering problem. -- Andrei

May 01, 2014
Le 01/05/2014 13:44, w0rp a écrit :
> On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
>> On 2014-04-30 23:35, Andrei Alexandrescu wrote:
>>
>>> Agreed. I think we should look into parallelizing all unittests. --
>>> Andrei
>>
>> I recommend running the tests in random order as well.
>
> This is a bad idea. Tests could fail only some of the time. Even if bugs
> are missed, I would prefer it if tests did exactly the same thing every
> time.
I am in favor of randomized order, because it can help find real bugs.

May 01, 2014
Le 01/05/2014 16:01, Atila Neves a écrit :
> On Thursday, 1 May 2014 at 11:44:12 UTC, w0rp wrote:
>> On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
>>> On 2014-04-30 23:35, Andrei Alexandrescu wrote:
>>>
>>>> Agreed. I think we should look into parallelizing all unittests. --
>>>> Andrei
>>>
>>> I recommend running the tests in random order as well.
>>
>> This is a bad idea. Tests could fail only some of the time. Even if
>> bugs are missed, I would prefer it if tests did exactly the same thing
>> every time.
>
> They _should_ do exactly the same thing every time. Which is why running
> in threads or at random is a great way to enforce that.
>
> Atila
+1