May 01, 2014
On Thu, 01 May 2014 12:07:19 -0400, Dicebot <public@dicebot.lv> wrote:

> On Thursday, 1 May 2014 at 15:37:21 UTC, Andrei Alexandrescu wrote:
>> On 5/1/14, 8:04 AM, Dicebot wrote:
>>> On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei Alexandrescu wrote:
>>>> On 5/1/14, 1:34 AM, Dicebot wrote:
>>>>> I have just recently gone through some of our internal projects removing
>>>>> all accidental I/O tests for the very reason that /tmp was full
>>>>
>>>> Well a bunch of stuff will not work on a full /tmp. Sorry, hard to
>>>> elicit empathy with a full /tmp :o). -- Andrei
>>>
>>> So you are OK with your unit tests failing randomly with no clear
>>> diagnostics?
>>
>> I'm OK with my unit tests failing on a machine with a full /tmp. The machine needs fixing. -- Andrei
>
> It got full because of tests (surprise!). Your actions?

It would be nice to have a uniform mechanism to get a unique system-dependent file location for each specific unit test.

The file should automatically delete itself at the end of the test.
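Roughly, with today's Phobos (the naming scheme below is just an illustration, not an existing API):

unittest
{
    import std.conv : to;
    import std.file : readText, remove, tempDir, write;
    import std.path : buildPath;

    // Unique per call site: module name plus line number.
    auto path = buildPath(tempDir(),
        __MODULE__ ~ "_" ~ __LINE__.to!string ~ ".tmp");
    scope (exit) path.remove(); // cleanup runs whether the test passes or fails

    write(path, "data");
    assert(readText(path) == "data");
}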

-Steve
May 01, 2014
On Thu, 01 May 2014 10:01:19 -0400, Atila Neves <atila.neves@gmail.com> wrote:

> On Thursday, 1 May 2014 at 11:44:12 UTC, w0rp wrote:
>> On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
>>> On 2014-04-30 23:35, Andrei Alexandrescu wrote:
>>>
>>>> Agreed. I think we should look into parallelizing all unittests. -- Andrei
>>>
>>> I recommend running the tests in random order as well.
>>
>> This is a bad idea. Tests could fail only some of the time. Even if bugs are missed, I would prefer it if tests did exactly the same thing every time.
>
> They _should_ do exactly the same thing every time. Which is why running in threads or at random is a great way to enforce that.

But not a great way to debug it.

If your test failure depends on ordering, then the next run will use a different random order, and you won't be able to reproduce the failure.

Proposal: a runtime parameter for pre-main consumption:

./myprog --rndunit[=seed]

This runs the unit tests in random order. Its first order of business is to print the seed value before any test starts. That way, you can repeat the exact same ordering for debugging.
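Rough sketch of the core of it in std.random terms (the hookup to druntime's argument handling is hypothetical):

import std.random : Random, randomShuffle, unpredictableSeed;
import std.stdio : writeln;

// Run the given tests in a shuffled order; with an explicit seed the
// shuffle (and thus any ordering-dependent failure) is reproducible.
void runShuffled(void function()[] tests, uint seed = unpredictableSeed)
{
    writeln("unittest seed: ", seed); // printed before any test runs
    auto rng = Random(seed);
    randomShuffle(tests, rng);
    foreach (t; tests)
        t();
}

--rndunit with no argument would pick a fresh seed; --rndunit=1234 would replay the order that failed.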

-Steve
May 01, 2014
On 5/1/14, 10:09 AM, Steven Schveighoffer wrote:
> On Thu, 01 May 2014 12:07:19 -0400, Dicebot <public@dicebot.lv> wrote:
>
>> On Thursday, 1 May 2014 at 15:37:21 UTC, Andrei Alexandrescu wrote:
>>> On 5/1/14, 8:04 AM, Dicebot wrote:
>>>> On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei Alexandrescu wrote:
>>>>> On 5/1/14, 1:34 AM, Dicebot wrote:
>>>>>> I have just recently gone through some of our internal projects
>>>>>> removing
>>>>>> all accidental I/O tests for the very reason that /tmp was full
>>>>>
>>>>> Well a bunch of stuff will not work on a full /tmp. Sorry, hard to
>>>>> elicit empathy with a full /tmp :o). -- Andrei
>>>>
>>>> So you are OK with your unit tests failing randomly with no clear
>>>> diagnostics?
>>>
>>> I'm OK with my unit tests failing on a machine with a full /tmp. The
>>> machine needs fixing. -- Andrei
>>
>> It got full because of tests (surprise!). Your actions?
>
> It would be nice to have a uniform mechanism to get a unique
> system-dependent file location for each specific unit test.
>
> The file should automatically delete itself at the end of the test.

Looks like /tmp (%TEMP% or C:\TEMP in Windows) in conjunction with the likes of mkstemp is what you're looking for :o).

Andrei


May 01, 2014
On Thursday, 1 May 2014 at 17:24:58 UTC, Andrei Alexandrescu wrote:
> On 5/1/14, 10:09 AM, Steven Schveighoffer wrote:
>> On Thu, 01 May 2014 12:07:19 -0400, Dicebot <public@dicebot.lv> wrote:
>>
>>> On Thursday, 1 May 2014 at 15:37:21 UTC, Andrei Alexandrescu wrote:
>>>> On 5/1/14, 8:04 AM, Dicebot wrote:
>>>>> On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei Alexandrescu wrote:
>>>>>> On 5/1/14, 1:34 AM, Dicebot wrote:
>>>>>>> I have just recently gone through some of our internal projects
>>>>>>> removing
>>>>>>> all accidental I/O tests for the very reason that /tmp was full
>>>>>>
>>>>>> Well a bunch of stuff will not work on a full /tmp. Sorry, hard to
>>>>>> elicit empathy with a full /tmp :o). -- Andrei
>>>>>
>>>>> So you are OK with your unit tests failing randomly with no clear
>>>>> diagnostics?
>>>>
>>>> I'm OK with my unit tests failing on a machine with a full /tmp. The
>>>> machine needs fixing. -- Andrei
>>>
>>> It got full because of tests (surprise!). Your actions?
>>
>> It would be nice to have a uniform mechanism to get a unique
>> system-dependent file location for each specific unit test.
>>
>> The file should automatically delete itself at the end of the test.
>
> Looks like /tmp (%TEMP% or C:\TEMP in Windows) in conjunction with the likes of mkstemp is what you're looking for :o).
>
> Andrei

It hasn't been C:\TEMP for almost 13 years (since before Windows XP, which is now also end-of-life). Use GetTempPath.

http://msdn.microsoft.com/en-us/library/windows/desktop/aa364992(v=vs.85).aspx
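For reference, a minimal D sketch of the call (Phobos' std.file.tempDir already wraps GetTempPathW, so the direct call is rarely needed):

version (Windows)
{
    import core.sys.windows.windows : DWORD, GetTempPathW, MAX_PATH;
    import std.conv : to;

    // Returns e.g. C:\Users\me\AppData\Local\Temp\ for the current user.
    string windowsTempDir()
    {
        wchar[MAX_PATH + 1] buf;
        immutable len = GetTempPathW(cast(DWORD) buf.length, buf.ptr);
        return buf[0 .. len].to!string;
    }
}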
May 01, 2014
On 5/1/14, 10:32 AM, Brad Anderson wrote:
> It hasn't been C:\TEMP for almost 13 years

About the time when I switched :o). -- Andrei
May 01, 2014
On Wednesday, 30 April 2014 at 16:19:48 UTC, Byron wrote:
> On Wed, 30 Apr 2014 09:02:54 -0700, Andrei Alexandrescu wrote:
>
>> 
>> I think indeed a small number of unittests rely on order of execution.
>
> Maybe nested unittests?
>
> unittest OrderTests {
>   // setup for all child tests?
>   unittest a {
>   }
>   unittest b {
>   }
> }

I like my unit tests to be next to the element under test, and it seems like this nesting would impose some limits on that.

Another idea might be to use the level of the unit as an indicator of order dependencies.  If UTs for B call/depend on A, then we would assign A to level 0, run its UTs first, and assign B to level 1.  All 0's run before all 1's.

Could we use a template arg on the UT to indicate level?
unittest!(0) UtA { /* test A */ }
unittest!(1) UtB { /* test B */ }

Or maybe some fancier compiler dependency analysis?
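As a rough sketch of the level idea with what exists today, UDAs plus a custom runner could get close without new syntax. The Level enum and the runner below are hypothetical; __traits(getUnitTests) and Runtime.moduleUnitTester are the real hooks:

module level_demo;

import core.runtime : Runtime;
import std.algorithm : sort;

enum Level : int { a = 0, b = 1 }

@(Level.a) unittest { /* test A */ }
@(Level.b) unittest { /* test B, runs after every level-0 test */ }

version (unittest) shared static this()
{
    Runtime.moduleUnitTester = function bool()
    {
        static struct Entry { int level; void function() run; }
        Entry[] entries;
        // Collect this module's tests along with their Level attribute.
        foreach (t; __traits(getUnitTests, level_demo))
        {
            int lvl = 0;
            foreach (attr; __traits(getAttributes, t))
                static if (is(typeof(attr) == Level))
                    lvl = attr;
            entries ~= Entry(lvl, &t);
        }
        entries.sort!((x, y) => x.level < y.level);
        foreach (e; entries)
            e.run(); // a failing assert still aborts the whole run
        return true;
    };
}

Error handling and multi-module support omitted; this only demonstrates the ordering.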
May 01, 2014
On Thu, 01 May 2014 13:25:00 -0400, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:

> On 5/1/14, 10:09 AM, Steven Schveighoffer wrote:
>> On Thu, 01 May 2014 12:07:19 -0400, Dicebot <public@dicebot.lv> wrote:
>>
>>> On Thursday, 1 May 2014 at 15:37:21 UTC, Andrei Alexandrescu wrote:
>>>> On 5/1/14, 8:04 AM, Dicebot wrote:
>>>>> On Thursday, 1 May 2014 at 14:55:50 UTC, Andrei Alexandrescu wrote:
>>>>>> On 5/1/14, 1:34 AM, Dicebot wrote:
>>>>>>> I have just recently gone through some of our internal projects
>>>>>>> removing
>>>>>>> all accidental I/O tests for the very reason that /tmp was full
>>>>>>
>>>>>> Well a bunch of stuff will not work on a full /tmp. Sorry, hard to
>>>>>> elicit empathy with a full /tmp :o). -- Andrei
>>>>>
>>>>> So you are OK with your unit tests failing randomly with no clear
>>>>> diagnostics?
>>>>
>>>> I'm OK with my unit tests failing on a machine with a full /tmp. The
>>>> machine needs fixing. -- Andrei
>>>
>>> It got full because of tests (surprise!). Your actions?
>>
>> It would be nice to have a uniform mechanism to get a unique
>> system-dependent file location for each specific unit test.
>>
>> The file should automatically delete itself at the end of the test.
>
> Looks like /tmp (%TEMP% or C:\TEMP in Windows) in conjunction with the likes of mkstemp is what you're looking for :o).

No, I'm looking for unittest_getTempFile(Line = __LINE__, File = __FILE__)(), which handles all the magic of opening a temporary file, allowing me to use it for the unit test, and then closing and deleting it at the end, when the test passes.
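Something along these lines, as a rough sketch (here the file is removed whether the test passed or not):

import std.conv : to;
import std.file : exists, remove, tempDir;
import std.path : baseName, buildPath;

struct TempTestFile
{
    string path;
    // RAII: the file disappears when the test scope ends.
    ~this()
    {
        if (path.length && path.exists)
            path.remove();
    }
}

auto unittest_getTempFile(size_t Line = __LINE__, string File = __FILE__)()
{
    // One unique name per instantiation site, e.g. mymodule.d_42.tmp
    return TempTestFile(buildPath(tempDir(),
            File.baseName ~ "_" ~ Line.to!string ~ ".tmp"));
}

unittest
{
    import std.file : readText, write;
    auto f = unittest_getTempFile(); // removed automatically when f dies
    write(f.path, "hello");
    assert(readText(f.path) == "hello");
}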

-Steve
May 01, 2014
On 5/1/14, 10:41 AM, Jason Spencer wrote:
> On Wednesday, 30 April 2014 at 16:19:48 UTC, Byron wrote:
>> On Wed, 30 Apr 2014 09:02:54 -0700, Andrei Alexandrescu wrote:
>>
>>>
>>> I think indeed a small number of unittests rely on order of execution.
>>
>> Maybe nested unittests?
>>
>> unittest OrderTests {
>>   // setup for all child tests?
>>   unittest a {
>>   }
>>   unittest b {
>>   }
>> }
>
> I like my unit tests to be next to the element under test, and it seems
> like this nesting would impose some limits on that.
>
> Another idea might be to use the level of the unit as an indicator of
> order dependencies.  If UTs for B call/depend on A, then we would assign
> A to level 0, run its UTs first, and assign B to level 1.  All 0's run
> before all 1's.
>
> Could we use a template arg on the UT to indicate level?
> unittest!(0) UtA { /* test A */ }
> unittest!(1) UtB { /* test B */ }
>
> Or maybe some fancier compiler dependency analysis?

Well how complicated can we make it all? -- Andrei

May 01, 2014
On Thursday, 1 May 2014 at 17:04:53 UTC, Xavier Bigand wrote:
> Le 01/05/2014 16:01, Atila Neves a écrit :
>> On Thursday, 1 May 2014 at 11:44:12 UTC, w0rp wrote:
>>> On Thursday, 1 May 2014 at 11:05:55 UTC, Jacob Carlborg wrote:
>>>> On 2014-04-30 23:35, Andrei Alexandrescu wrote:
>>>>
>>>>> Agreed. I think we should look into parallelizing all unittests. --
>>>>> Andrei
>>>>
>>>> I recommend running the tests in random order as well.
>>>
>>> This is a bad idea. Tests could fail only some of the time. Even if
>>> bugs are missed, I would prefer it if tests did exactly the same thing
>>> every time.
>>
>> They _should_ do exactly the same thing every time. Which is why running
>> in threads or at random is a great way to enforce that.
>>
>> Atila
> +1

Tests shouldn't be run in a random order all of the time; perhaps once in a while, manually. Having continuous integration randomly report build failures is crap. Either you should always see a build failure, or you should never see it. You can only test things which are deterministic, at least as far as what you observe. Running tests in a random order should be something you do manually, only when you have some ability to figure out why the tests just failed.
May 01, 2014
On 2014-05-01 19:12, Steven Schveighoffer wrote:

> But not a great way to debug it.
>
> If your test failure depends on ordering, then the next run will use a
> different random order, and you won't be able to reproduce the failure.
>
> Proposal: a runtime parameter for pre-main consumption:
>
> ./myprog --rndunit[=seed]
>
> This runs the unit tests in random order. Its first order of business is
> to print the seed value before any test starts. That way, you can repeat
> the exact same ordering for debugging.

That's exactly what RSpec does. I think it works great.

-- 
/Jacob Carlborg