October 17, 2014
On 2014-10-17 10:26, Atila Neves wrote:

> Is cleaning up in a unittest build a problem? I'd say no. If the
> tests fail, it doesn't make much sense to clean up unless it affects the
> reporting of the failing tests.

I have used files in some of my unit tests. I would certainly like those to be properly closed if a test fails (for whatever reason). Now, some of you will argue that one shouldn't use files in unit tests. But that would only work in an ideal and perfect world, which we don't live in.

-- 
/Jacob Carlborg
October 17, 2014
On 2014-10-16 20:50, Walter Bright wrote:

> I don't understand why unittests in druntime/phobos are an issue for
> users. We don't release a DMD unless they all pass - it should be moot
> for users.

There are asserts elsewhere in the code.

-- 
/Jacob Carlborg
October 17, 2014
On 2014-10-16 21:31, Walter Bright wrote:

> Contract errors in Phobos/Druntime should be limited to having passed it
> invalid arguments, which should be documented

That doesn't mean it won't happen.

-- 
/Jacob Carlborg
October 18, 2014
On 10/17/2014 9:05 AM, Jacob Carlborg wrote:
> On 2014-10-16 21:35, Walter Bright wrote:
>
>> Ok, but why would 3rd party library unittests be a concern? They
>> shouldn't have shipped it if their own unittests fail - that's the whole
>> point of having unittests.
>
> They will have asserts in contracts and in other parts of the code that are
> not unit tests.

This particular subthread is about unittests.

October 18, 2014
On 10/17/2014 9:13 AM, Jacob Carlborg wrote:
> On 2014-10-16 21:31, Walter Bright wrote:
>
>> Contract errors in Phobos/Druntime should be limited to having passed it
>> invalid arguments, which should be documented
>
> That doesn't mean it won't happen.

Which means they'll be program bugs, not environmental errors.

It is of great value to distinguish between program bugs and input/environmental errors, and to treat them entirely differently. It makes code easier to understand and more robust, and it allows better/faster code to be generated.

Using asserts to detect input/environmental errors is a bad practice - something like enforce() should be used instead.
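As a hedged sketch of this distinction (the function names are illustrative, not from this thread): assert marks a condition that can only be false if the program itself is wrong, while enforce throws a recoverable Exception for bad input.

```d
import std.exception : enforce;

// Internal invariant: if this assert ever trips, it is by definition a
// bug in the caller or in this module, never an environmental problem.
int midpoint(int lo, int hi)
{
    assert(lo <= hi, "midpoint: documented precondition violated");
    return lo + (hi - lo) / 2;
}

// Input/environmental error: validate with enforce(), which throws a
// recoverable Exception instead of signalling a program bug.
void checkPort(int port)
{
    enforce(port > 0 && port < 65536, "port number out of range");
}
```

An assert failure means the program is wrong and nothing can be assumed about its state; an enforce failure means the input or environment was wrong, and the program can report it and carry on.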

I understand that some have to work with poorly written libraries that incorrectly use assert. If that's the only issue with those libraries, you're probably lucky :-) Short term, I suggest editing the code of those libraries, and pressuring the authors of them. Longer term, we need to establish a culture of using assert/enforce correctly.

This is not as pie-in-the-sky as it sounds. Over the years, a lot of formerly popular bad practices in C and C++ have been relentlessly driven out of existence by getting the influential members of the communities to endorse and advocate proper best practices.

----------------------

I do my best to practice what I preach. In the DMD source code, an assert tripping always, by definition, means it's a compiler bug. It is never used to signal errors in code being compiled or environmental errors. If a badly formed .d file causes dmd to assert, it is always a BUG in dmd.
October 18, 2014
On 10/17/2014 9:10 AM, Jacob Carlborg wrote:
> On 2014-10-17 10:26, Atila Neves wrote:
>
>> Is cleaning up in a unittest build a problem? I'd say no. If the
>> tests fail, it doesn't make much sense to clean up unless it affects the
>> reporting of the failing tests.
>
> I have used files in some of my unit tests. I would certainly like those to be
> properly closed if a test fails (for whatever reason). Now, some of you will
> argue that one shouldn't use files in unit tests. But that would only work in
> an ideal and perfect world, which we don't live in.

This should be fairly straightforward to deal with:

1. Write functions to input/output from/to ranges instead of files. Then, have the unittests "mock up" input to drive them that does not come from files. I've used this technique very successfully in Warp.

2. If (1) cannot be done, then write the unittests like:

  {
    openfile();
    scope (exit) closefile();
    scope (failure) assert(0);
    ... use enforce() instead of assert() ...
  }

3. In a script that compiles/runs the unittests, have the script delete any extraneous generated files.
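As a hedged sketch of technique (1), with a made-up line-counting function: the function consumes any input range, so the unittest can feed it an in-memory array while production code drives it from a real file. The file-based wrapper also shows the scope (exit) guard from technique (2), which closes the file even if an assertion below it fails.

```d
import std.algorithm : count, filter;
import std.range.primitives : isInputRange;
import std.stdio : File;

// Works on any input range of lines, not on a File directly.
size_t countNonEmpty(R)(R lines) if (isInputRange!R)
{
    return lines.filter!(l => l.length > 0).count;
}

// Production code feeds it a real file; scope (exit) closes the
// file even if something after it throws.
size_t countNonEmptyInFile(string path)
{
    auto f = File(path);
    scope (exit) f.close();
    return countNonEmpty(f.byLineCopy);
}

unittest
{
    // The unittest mocks the input with an in-memory array: no
    // files touched, nothing to clean up on failure.
    assert(countNonEmpty(["a", "", "b"]) == 2);
}
```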
October 18, 2014
On Saturday, 18 October 2014 at 05:22:54 UTC, Walter Bright wrote:
> 2. If (1) cannot be done, then write the unittests like:
>
>   {
>     openfile();
>     scope (exit) closefile();
>     scope (failure) assert(0);
>     ... use enforce() instead of assert() ...
>   }
>
> 3. In a script that compiles/runs the unittests, have the script delete any extraneous generated files.

This is bad; it means:

- I risk having my filesystem ruined by running unit tests through the compiler.
- The test environment changes between runs.

Built-in unit tests should have no side effects.

Something along these lines would be a better setup:

1. Load a filesystem from a read-only file to a virtual driver.
2. Run a special initializer for unit tests to set up the in-memory test environment.
3. Create N forks (N=number of cores):
4. Fork the filesystem/program before running a single unit test.
5. Mount the virtual filesystem (from 1)
6. Run the unit test
7. Collect result from child process and print result.
8. goto 4
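Steps 4-8 above can be sketched with plain POSIX fork()/waitpid(); this omits the virtual-filesystem machinery of steps 1, 2 and 5, and the helper name is made up:

```d
import core.sys.posix.sys.wait : waitpid, WIFEXITED, WEXITSTATUS;
import core.sys.posix.unistd : _exit, fork;

// Run one test in a forked child so any side effects (open files,
// mutated globals, leaked memory) die with the child process.
bool runIsolated(void function() test)
{
    auto pid = fork();
    if (pid == 0)               // child: run the test, report via exit code
    {
        try
            test();
        catch (Throwable)
            _exit(1);
        _exit(0);
    }
    int status;                 // parent: collect the child's result
    waitpid(pid, &status, 0);
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```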

But just banning writing to resources would be more suitable. D unit tests are only suitable for testing simple library code anyway.
October 18, 2014
On 2014-10-18 06:36, Walter Bright wrote:

> This particular subthread is about unittests.

That doesn't make the problem go away.

-- 
/Jacob Carlborg
October 18, 2014
On 2014-10-18 07:09, Walter Bright wrote:

> Which means they'll be program bugs, not environmental errors.

Yes, but just because I made a mistake in using a function (hitting an assert) doesn't mean I want to have undefined behavior.

-- 
/Jacob Carlborg
October 18, 2014
On Saturday, 18 October 2014 at 05:10:20 UTC, Walter Bright wrote:
>
> I understand that some have to work with poorly written libraries that incorrectly use assert. If that's the only issue with those libraries, you're probably lucky :-) Short term, I suggest editing the code of those libraries, and pressuring the authors of them. Longer term, we need to establish a culture of using assert/enforce correctly.

So you consider the library interface to be user input? What about calls that are used internally but also exposed as part of the library interface?