[dmd-internals] changeset 455
April 27, 2010
Due to popular demand, now all unittests run, even if some of them fail.

No source code changes necessary. Requires a corresponding update to druntime.
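
To illustrate the new behavior (the file name and comments here are
illustrative, not verbatim dmd output):
----------------------------
// tests.d -- compile and run with: dmd -unittest -run tests.d
import std.stdio;

unittest
{
    assert(false, "first test fails");  // no longer aborts the test run
}

unittest
{
    writeln("second test still runs");  // executes despite the failure above
}

void main()
{
    // the runtime may skip main() and exit non-zero when unittests fail
}
----------------------------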
April 28, 2010
On 28/04/10 17:42, Walter Bright wrote:
> Due to popular demand, now all unittests run, even if some of them fail.
>
> No source code changes necessary. Requires a corresponding update to
> druntime.

Wooo! This makes me very happy. Combined with Robert's work on the DWARF output, my birthday is five months early, it seems.
April 28, 2010
Yay!

Sent from my iPhone

On Apr 28, 2010, at 1:42 AM, Walter Bright <walter at digitalmars.com> wrote:

> Due to popular demand, now all unittests run, even if some of them fail.
>
> No source code changes necessary. Requires a corresponding update to
> druntime.
April 28, 2010
I've been meaning to do this for a while, just didn't have the time. Also, at the ACCU conference, there were a lot of very experienced people with unit tests, and they were pretty clear that the only significant problem with D's unit test facility was not being able to get all the failures in one run.

Bernard Helyer wrote:
> On 28/04/10 17:42, Walter Bright wrote:
>> Due to popular demand, now all unittests run, even if some of them fail.
>>
>> No source code changes necessary. Requires a corresponding update to druntime.
>>
>>
>
> Wooo! This makes me very happy. Combined with Robert's work on the DWARF output, my birthday is five months early, it seems.
>
April 28, 2010
Simply running all tests is necessary but not sufficient. After running hundreds of thousands of tests, there needs to be an easy way to figure out which tests failed and review their failures.

Just as an example, I started a project last week at work, and my
current test suite has:
  - 250 passing tests. These are regression tests.
  - 22 tests that fail but represent currently out-of-scope features, so
they should continue to fail.
  - 24 failing tests for work-in-progress changes. These are the main
focus of current effort (AKA test-driven development).

Sent from my iPhone

On Apr 28, 2010, at 9:26 AM, Walter Bright <walter at digitalmars.com> wrote:

> I've been meaning to do this for a while, just didn't have the time. Also, at the ACCU conference, there were a lot of very experienced people with unit tests, and they were pretty clear that the only significant problem with D's unit test facility was not being able to get all the failures in one run.
>
> Bernard Helyer wrote:
>> On 28/04/10 17:42, Walter Bright wrote:
>>> Due to popular demand, now all unittests run, even if some of them fail.
>>>
>>> No source code changes necessary. Requires a corresponding update to druntime.
>>>
>>>
>>
>> Wooo! This makes me very happy. Combined with Robert's work on the DWARF output, my birthday is five months early, it seems.
>>
April 28, 2010
Jason House wrote:
> Simply running all tests is necessary but not sufficient. After running hundreds of thousands of tests, there needs to be an easy way to figure out which tests failed and review their failures.
>

maybe there should be an implicit line added at the top of all unittests:

scope(failure) writefln("unittest at %s:%d failed", __FILE__, __LINE__);
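
Spelled out as a complete example (note that __FILE__ and __LINE__ expand
at the scope(failure) statement itself, and the report only fires for a
failure that actually unwinds the scope):
----------------------------
import std.stdio;

unittest
{
    scope(failure) writefln("unittest at %s:%d failed", __FILE__, __LINE__);
    throw new Exception("boom");  // anything thrown triggers the report
}

void main() { }
----------------------------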

> Just as an example, I started a project last week at work, and my
> current test suite has:
>  - 250 passing tests. These are regression tests.
>  - 22 tests that fail but represent currently out-of-scope features, so
> they should continue to fail.
>  - 24 failing tests for work-in-progress changes. These are the main
> focus of current effort (AKA test-driven development).

some kind of external tool? Or maybe an @expectFail tag?



> Sent from my iPhone

April 28, 2010
On 04/28/2010 09:56 AM, Jason House wrote:
> Simply running all tests is necessary but not sufficient. After running hundreds of thousands of tests, there needs to be an easy way to figure out which tests failed and review their failures.

Yes. This is a huge matter which the change only makes worse.

I had to completely change the unittesting method for Phobos because any segfault would be virtually impossible to track down.

Walter, failing unittests for any reason must display the file and line of failure. I'm not sure to what extent segfaulting is detectable, but we definitely must find good ways to address that too.


Andrei
April 28, 2010
On Apr 28, 2010, at 12:21 PM, Benjamin Shropshire <benjamin at precisionsoftware.us> wrote:

> Jason House wrote:
>> Simply running all tests is necessary but not sufficient. After running hundreds of thousands of tests, there needs to be an easy way to figure out which tests failed and review their failures.

BTW, that should have been hundreds _or_ thousands. I haven't been in a group with more than 10,000 tests.


>>
>
> maybe there should be an implicit line added at the top of all unittests:
>
> scope(failure) writefln("unittest at %s:%d failed", __FILE__, __LINE__);


I would hope that any implicit mixin would reference an overridable druntime function. Summarizing failures by pure count or by user-defined category is very common.

Bonus points for an implicit mixin that permits a variable argument count and curries unit test arguments to it. The biggest thing I'd want is a unit test name. An optional category name would be next on my wish list.
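
druntime does expose a hook in this direction: a program can replace the
whole test runner by assigning Runtime.moduleUnitTester from core.runtime.
A sketch (the reporting format is made up, the hook is per-module rather
than per-test, and it only sees failures that escape as thrown Throwables):
----------------------------
import core.runtime;
import std.stdio;

shared static this()
{
    // Installed before unittests run; the runtime calls this instead of
    // walking the modules itself.  Returning false skips main().
    Runtime.moduleUnitTester = function bool()
    {
        size_t failed = 0;
        foreach (m; ModuleInfo)
        {
            if (m is null)
                continue;
            if (auto test = m.unitTest)
            {
                try
                    test();
                catch (Throwable t)
                {
                    ++failed;
                    writefln("FAILED %s: %s", m.name, t.msg);
                }
            }
        }
        return failed == 0;
    };
}
----------------------------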
April 28, 2010

Andrei Alexandrescu wrote:
> On 04/28/2010 09:56 AM, Jason House wrote:
>> Simply running all tests is necessary but not sufficient. After running hundreds of thousands of tests, there needs to be an easy way to figure out which tests failed and review their failures.
>
> Yes. This is a huge matter which the change only makes worse.
>
> I had to completely change the unittesting method for Phobos because any segfault would be virtually impossible to track down.
>
> Walter, failing unittests for any reason must display the file and line of failure.

It already does:
----------------------------
int x;
void main() { }
unittest {
    assert(x == 3, "x should be 3");
    assert(x == 4);
    assert(x == 5);
}
--------------------------------
Running it:

test3.d(10): x should be 3
test3.d(11): unittest failure
test3.d(12): unittest failure

> I'm not sure to what extent segfaulting is detectable, but we definitely must find good ways to address that too.
>

Debuggers are the standard tool for that.
April 28, 2010
On 04/28/2010 02:53 PM, Walter Bright wrote:
>> I'm not sure to what extent segfaulting is detectable, but we definitely must find good ways to address that too.
>>
>
> Debuggers are the standard tool for that.

I hear you but don't have one, and I swore to never use gdb. Ideally we should find a solution within the confines of the compiler.

Andrei
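
A compiler-independent sketch of one direction (POSIX-only and entirely
illustrative; real options range from per-test child processes to debugger
automation): trap SIGSEGV and at least name the test that was running.
----------------------------
import core.stdc.signal;                    // signal(), SIGSEGV
import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : _Exit;

// Only ever assign string literals here, which are zero-terminated.
__gshared const(char)* currentTest = "none";

extern (C) void onSegv(int) nothrow @nogc
{
    fprintf(stderr, "segfault during unittest: %s\n", currentTest);
    _Exit(1);                               // async-signal-safe exit
}

shared static this()
{
    signal(SIGSEGV, &onSegv);
}

unittest
{
    currentTest = "null pointer test";
    int* p = null;
    // *p = 1;  // would print: segfault during unittest: null pointer test
}

void main() { }
----------------------------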