Thread overview
Re: DMD svn and contract inheritance
Oct 05, 2009
Jason House
Oct 06, 2009
Andrei Alexandrescu
Oct 06, 2009
Leandro Lucarella
Oct 06, 2009
Walter Bright
Oct 06, 2009
Lutger
October 05, 2009
Walter Bright wrote:

> Robert Clipsham wrote:
> > Leandro Lucarella wrote:
> >> Thanks for finally taking this path, Walter =)
> >>
> >> http://www.dsource.org/projects/dmd/timeline
> > 
> > Now that DMD is under version control, it should be fairly easy for me to adapt the automated build system used for ldc to dmd. I can set it up to automatically build dmd after each commit, run dstress, build popular projects and libraries, even package up dmd for "Nightly" builds, and maybe even post the results to the D newsgroups/IRC channels.
> > 
> > If you'd be interested in seeing this, Walter, let me know exactly what you want automated, how and where you want the results, and I'll see about setting it up for you.
> 
> The problem is that if some package fails, I then have a large debugging problem trying to figure out unfamiliar code.

With small commits to dmd, it should be trivial to know which small change in dmd caused a user-observable change in behavior. If things look good on the dmd side, I'd really hope the code authors would help with debugging their code.

Knowing of a failure within an hour is way better than finding out a month later.

BTW, such regression tests work much better when all failing tests can be identified; that can help with figuring out patterns. Stopping on the first failure can be somewhat limiting, especially if the failure will stick around for a week.
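
The harness Jason describes need not be elaborate. Here is a minimal sketch in D, assuming the suite is a directory of standalone .d files that are expected to compile cleanly (the "dstress/run" path is hypothetical); it runs every case, records all failures instead of stopping at the first, and prints the full list:

import std.file : dirEntries, SpanMode;
import std.process : execute;
import std.stdio : writefln, writeln;

void main()
{
    string[] failures;

    // "dstress/run" is a hypothetical location for standalone .d
    // test cases that are expected to compile cleanly.
    foreach (entry; dirEntries("dstress/run", "*.d", SpanMode.depth))
    {
        // -c: compile only; -o-: write no object file. A nonzero
        // exit status from dmd marks the case as failing.
        auto result = execute(["dmd", "-c", "-o-", entry.name]);
        if (result.status != 0)
            failures ~= entry.name; // record the failure, keep going
    }

    writefln("%s failing case(s):", failures.length);
    foreach (name; failures)
        writeln("  ", name);
}

Hooked up to a post-commit trigger, the same list could be mailed to the newsgroup well within the hour Jason mentions.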
October 06, 2009
Jason House wrote:
> Walter Bright wrote:
> 
>> Robert Clipsham wrote:
>>> Leandro Lucarella wrote:
>>>> Thanks for finally taking this path, Walter =)
>>>> 
>>>> http://www.dsource.org/projects/dmd/timeline
>>> Now that DMD is under version control, it should be fairly easy
>>> for me to adapt the automated build system used for ldc to dmd.
>>> I can set it up to automatically build dmd after each commit, run
>>> dstress, build popular projects and libraries, even package up
>>> dmd for "Nightly" builds, and maybe even post the results to the D
>>> newsgroups/IRC channels.
>>> 
>>> If you'd be interested in seeing this, Walter, let me know
>>> exactly what you want automated, how and where you want the
>>> results, and I'll see about setting it up for you.
>> The problem is that if some package fails, I then have a large
>> debugging problem trying to figure out unfamiliar code.
> 
> With small commits to dmd, it should be trivial to know which small
> change in dmd caused a user-observable change in behavior. If things
> look good on the dmd side, I'd really hope the code authors would
> help with debugging their code.
> 
> Knowing of a failure within an hour is way better than finding out a
> month later.
> 
> BTW, such regression tests work much better when all failing tests
> can be identified; that can help with figuring out patterns. Stopping
> on the first failure can be somewhat limiting, especially if the
> failure will stick around for a week.

We clearly can't define the language around a best-effort kind of flow analysis. I consider Walter's extra checks during optimization a nice perk, but definitely not something we can consider a part of the language. The language definition must work without those.

Andrei
October 06, 2009
Andrei Alexandrescu, on October 5 at 19:17, you wrote:
> Jason House wrote:
> >Walter Bright wrote:
> >
> >>Robert Clipsham wrote:
> >>>Leandro Lucarella wrote:
> >>>>Thanks for finally taking this path, Walter =)
> >>>>
> >>>>http://www.dsource.org/projects/dmd/timeline
> >>>Now that DMD is under version control, it should be fairly easy for me to adapt the automated build system used for ldc to dmd. I can set it up to automatically build dmd after each commit, run dstress, build popular projects and libraries, even package up dmd for "Nightly" builds, and maybe even post the results to the D newsgroups/IRC channels.
> >>>
> >>>If you'd be interested in seeing this, Walter, let me know exactly what you want automated, how and where you want the results, and I'll see about setting it up for you.
> >>The problem is that if some package fails, I then have a large
> >>debugging problem trying to figure out unfamiliar code.
> >
> >With small commits to dmd, it should be trivial to know which small change in dmd caused a user-observable change in behavior. If things look good on the dmd side, I'd really hope the code authors would help with debugging their code.
> >
> >Knowing of a failure within an hour is way better than finding out a month later.
> >
> >BTW, such regression tests work much better when all failing tests can be identified; that can help with figuring out patterns. Stopping on the first failure can be somewhat limiting, especially if the failure will stick around for a week.
> 
> We clearly can't define the language around a best-effort kind of flow analysis. I consider Walter's extra checks during optimization a nice perk, but definitely not something we can consider a part of the language. The language definition must work without those.

I guess you replied to the wrong mail; Jason is talking about Robert's offer to set up a build/test bot for DMD that would build and test each commit using dstress and maybe other popular libraries/programs.

-- 
Leandro Lucarella (AKA luca)                      http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
When I was a child I had a fever
My hands felt just like two balloons.
Now I've got that feeling once again
I can't explain you would not understand
This is not how I am.
I have become comfortably numb.
October 06, 2009
Jason House wrote:
> With small commits to dmd, it should be trivial to know which small
> change in dmd caused a user-observable change in behavior.

The problem is, one doesn't know whether it is a problem with the change or with the user code. Determining that requires working with and understanding the user code, and it's just impractical for me to do that with a large code base like that.

> If things
> look good on the dmd side, I'd really hope the code authors would
> help with debugging their code.
> 
> Knowing of a failure within an hour is way better than finding out a
> month later.
> 
> BTW, such regression tests work much better when all failing tests
> can be identified; that can help with figuring out patterns. Stopping
> on the first failure can be somewhat limiting, especially if the
> failure will stick around for a week.
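
The "which change" half of that triage is at least mechanical: it is a binary search over revisions. A sketch in D, where the svn/make/test commands, the file names, and the revision numbers are all placeholders:

import std.format : format;
import std.process : executeShell;
import std.stdio : writefln;

// True if the given dmd revision still compiles the user's reduced
// test case. Checkout, build, and test commands are placeholders.
bool passes(int revision)
{
    auto build = executeShell(
        format("svn update -r%s src && make -C src -f posix.mak", revision));
    if (build.status != 0)
        return false; // treat a broken build as a failing revision
    return executeShell("src/dmd -c -o- usercase.d").status == 0;
}

void main()
{
    int good = 215; // last revision known to pass (hypothetical)
    int bad  = 260; // first revision known to fail (hypothetical)

    // Invariant: good passes, bad fails; halve the gap each step.
    while (bad - good > 1)
    {
        immutable mid = good + (bad - good) / 2;
        if (passes(mid))
            good = mid;
        else
            bad = mid;
    }
    writefln("first failing revision: r%s", bad);
}

The hard part Walter points to, deciding whether that revision or the user code is at fault, still needs a human who understands the code.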
October 06, 2009
Walter Bright wrote:

> Jason House wrote:
>> With small commits to dmd, it should be trivial to know which small change in dmd caused a user-observable change in behavior.
> 
> The problem is, one doesn't know whether it is a problem with the change or with the user code. Determining that requires working with and understanding the user code, and it's just impractical for me to do that with a large code base like that.

But you don't have to take the stance that no regressions may occur (for this very reason).

I think automated builds + tests can be useful in different ways:
- The authors of the code can be notified and can look at the problem. Some will care and will help you spot bugs earlier, or fix their own code earlier. This is at no cost to you; eventually, perhaps this task can even be delegated completely.
- When a lot of stuff breaks, that is at least an indicator.
- While you may need a lot of investment to determine all the problems, at least some categories of problem you may recognize quickly (a sketch of this follows below).
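
That last point, recognizing a whole category of problems quickly, can also be automated. A sketch in D, assuming each failing case's dmd output contains a line with "Error:" (the invocation, e.g. rdmd bucket.d case1.d case2.d, and the file names are placeholders); it buckets failing cases by error message, so one compiler regression that breaks fifty cases shows up as a single pattern rather than fifty unrelated failures:

import std.process : execute;
import std.stdio : writefln, writeln;
import std.string : indexOf, lineSplitter;

void main(string[] args)
{
    // Candidate test files are passed on the command line.
    string[][string] buckets;

    foreach (testFile; args[1 .. $])
    {
        auto result = execute(["dmd", "-c", "-o-", testFile]);
        if (result.status == 0)
            continue; // passing cases need no bucket

        // Key on the text from "Error:" onward, dropping the
        // file/line prefix so identical errors from different
        // cases land in the same bucket.
        string key = "no error message (compiler crash?)";
        foreach (line; result.output.lineSplitter)
        {
            auto idx = line.indexOf("Error:");
            if (idx >= 0)
            {
                key = line[idx .. $];
                break;
            }
        }
        buckets[key] ~= testFile;
    }

    foreach (message, cases; buckets)
    {
        writefln("%s case(s) failed with: %s", cases.length, message);
        foreach (c; cases)
            writeln("  ", c);
    }
}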