February 28, 2013
On Thu, Feb 28, 2013 at 04:31:15PM +0100, Andrej Mitrovic wrote:
> On 2/28/13, H. S. Teoh <hsteoh@quickfur.ath.cx> wrote:
> > Hmm. I was using assertNotThrown... which disappears with -release.
> 
> This shouldn't happen. Could you provide a test case? I can't recreate this behavior.

Ugh. While trying to reduce the failing test case, I discovered that assertNotThrown is actually working correctly. There appears to be some kind of subtle bug in my code, related to assert() being called in an in-contract, that causes a side effect which changes the result of the test.
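A contrived sketch of this kind of pitfall, assuming a side-effecting expression inside an in-contract (the function and counter names are hypothetical, not from the original report; written with the current `do` contract syntax):

```d
import std.stdio;

int counter;

int next()
in
{
    // The increment is a side effect inside the contract: contracts are
    // stripped by -release, so `counter` advances only in debug builds.
    assert(++counter > 0);
}
do
{
    return counter;
}

void main()
{
    // The observable result depends on whether contracts were compiled
    // in, even though assert/assertNotThrown themselves work correctly.
    writeln(next());
}
```

In a unittest guarded by assertNotThrown, such a hidden state change can flip the outcome between debug and -release builds, which is exactly the sort of false alarm described above.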

IOW, my fault, there's nothing wrong with assert/unittest. Sorry for the
false alarm. :-( :-(


T

-- 
All problems are easy in retrospect.
March 01, 2013
On 2013-02-28 16:39, Andrei Alexandrescu wrote:

> http://www.linfo.org/rule_of_silence.html

Well, I see. But RSpec isn't just for running tests and displaying whether they passed or not. You can use it to generate a form of specification for your application.

-- 
/Jacob Carlborg
March 01, 2013
Am Wed, 27 Feb 2013 23:41:26 -0800
schrieb Jonathan M Davis <jmdavisProg@gmx.com>:

> On Thursday, February 28, 2013 08:29:41 monarch_dodra wrote:
> > On Thursday, 28 February 2013 at 04:58:11 UTC, Jonathan M Davis
> > 
> > wrote:
> > > So, while I can understand why you'd think that we have a
> > > problem, we actually
> > > don't.
> > > 
> > > - Jonathan M Davis
> > 
> > One of the issues we may actually have down the road is if/when
> > we want to try to deploy failable tests, e.g.:
> > "Test result 197/205".
> > 
> > If one of the tests fails with an assert, though, you aren't really able to move on to the next test...
> 
> D's unit test facilities are designed to be simple. If you want something fancier, use a 3rd party framework of some kind.
> 
> D's unit test facilities are also designed so that they specifically print _nothing_ on success (and as a command-line tool, this is important), so printing out how many passed or failed will never work unless there are failures (which may or may not conflict with what you're looking for).
> 
> Executing further unittest blocks within a file would often be nice, but it also often would result in erroneous failures (sometimes you're stuck having unittest blocks later in a file rely on those before, even if it's better to avoid that), and it's complete foolishness IMHO to try and continue executing tests within a single unittest block once one assertion fails. Even if they're often independent, it's far too frequent that each subsequent assertion relies on the success of the ones before. So, I'd tend to be against what you're suggesting anyway.
> 
> Also, I suspect that Walter is going to be very reticent to add much of anything to the built in unit tests. They're very simple on purpose, and he generally shoots down suggestions that involve even things as simple as being able to indicate which tests to run.
> 
> Really, if you want fancier unit testing facilities, it's likely going to always be the case that you're going to use a 3rd party framework of some kind. The way the built-in ones are done makes it difficult to extend what they do, and they're _supposed_ to be simple, so any features which go against that would be shot down anyway.
> 
> - Jonathan M Davis

I should really start writing a formal DIP for my old unit test proposal at https://github.com/D-Programming-Language/dmd/pull/1131

There's actually no need to change anything in the language to allow running other unit test blocks after one fails; it's just an implementation detail. Whether this is dangerous is another discussion, but we should allow 3rd party test runners to do it. What output is then actually printed is also an entirely different discussion, as that's defined by the test runner, not by the compiler.
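As a sketch of what such a 3rd-party runner could look like with today's druntime — the `Runtime.moduleUnitTester` hook is a real druntime API, but the reporting format here is made up — a runner can keep going after a failure rather than aborting:

```d
import core.runtime : Runtime;
import core.stdc.stdio : printf;

shared static this()
{
    // Replace the default module unit tester. Returning true tells
    // druntime that the test run as a whole succeeded.
    Runtime.moduleUnitTester = function bool()
    {
        size_t passed, failed;
        foreach (m; ModuleInfo)
        {
            if (m is null)
                continue;
            if (auto test = m.unitTest)
            {
                try
                {
                    test();
                    ++passed;
                }
                catch (Throwable t)
                {
                    // Record the failure and continue with the
                    // remaining modules instead of stopping.
                    ++failed;
                    printf("FAILED: %.*s\n",
                           cast(int) m.name.length, m.name.ptr);
                }
            }
        }
        printf("Test result %zu/%zu\n", passed, passed + failed);
        return failed == 0;
    };
}

void main() {}
```

Note that druntime compiles all unittest blocks of a module into a single function, so this resumes at module granularity; continuing within one block after a failed assert is the separate, harder problem discussed below.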

Continuing the same block after an assert is a different thing. While it can be done by hooking the assert handler in druntime, it's much more dangerous. A better solution would be to introduce an additional check method that does not throw.
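A minimal sketch of such a non-throwing check, assuming it simply reports the failure and bumps a counter so that execution continues (the names `check` and `failures` are illustrative, not an actual druntime API):

```d
import std.stdio : stderr;

size_t failures;

// Hypothetical non-throwing counterpart to assert(): report the
// failing condition, count it, and let the unittest block continue.
bool check(bool condition, lazy string msg = "check failed",
           string file = __FILE__, size_t line = __LINE__)
{
    if (!condition)
    {
        ++failures;
        stderr.writefln("%s(%s): %s", file, line, msg);
    }
    return condition;
}

unittest
{
    check(1 + 1 == 2);      // passes silently
    check(2 + 2 == 5);      // reported, but the block keeps running
    check("abc".length == 3);
}
```

Returning the condition also lets a caller skip dependent checks explicitly (`if (check(x !is null)) check(x.valid)`), which sidesteps the chained-assertion problem mentioned earlier in the thread.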