On 19 September 2012 01:02, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:
On 9/18/12 5:07 PM, "Øivind" wrote:
* For all tests, the best run is selected, but would it not be
reasonable in some cases to take the average instead? Perhaps excluding
runs that are more than a couple of standard deviations away from the mean.

After extensive tests with a variety of aggregate functions, I can say firmly that taking the minimum time is by far the best when it comes to assessing the speed of a function.

The fastest execution time is rarely useful to me; I'm almost always much more interested in the slowest execution time.
In realtime software, the slowest time is often the only factor that matters; everything must be designed to tolerate that worst case.
I can also imagine situations where multiple workloads compete for time; there, the average time may be the more useful measure.
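
To illustrate, here is a minimal D sketch (using today's std.datetime.stopwatch; the sampleRuns helper is hypothetical, not part of std.benchmark) that keeps every per-run sample so the caller can pick whichever aggregate fits the use case:

import core.time : Duration;
import std.algorithm : maxElement, minElement, sum;
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio : writefln;

// Hypothetical helper: time `fun` once per run and keep every sample,
// so the caller can choose min, mean, or max as the aggregate.
Duration[] sampleRuns(void delegate() fun, size_t runs)
{
    auto samples = new Duration[runs];
    foreach (ref s; samples)
    {
        auto sw = StopWatch(AutoStart.yes);
        fun();
        sw.stop();
        s = sw.peek();
    }
    return samples;
}

void main()
{
    auto samples = sampleRuns({ /* workload under test */ }, 100);
    writefln("min:  %s", samples.minElement); // least noise; what a min-based report shows
    writefln("max:  %s", samples.maxElement); // the worst case realtime code must tolerate
    writefln("mean: %s", samples.sum(Duration.zero) / samples.length);
}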

Side question:
Running a test over and over pre-populates the cache with all the associated data after the first cycle, so every run but the first measures warm-cache performance. The cache would need to be randomised (or at least evicted) between cycles to get realistic results; a rough sketch of one approach follows.
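
A minimal sketch of one way to approximate that: between cycles, walk a scratch buffer larger than the last-level cache so the benchmark's working set is evicted. The 64 MiB size and the evictCaches name are assumptions; pick a size comfortably larger than your LLC.

ubyte sink; // module-level so the walk below cannot be optimised away

void evictCaches()
{
    static ubyte[] scratch;
    if (scratch is null)
        scratch = new ubyte[64 * 1024 * 1024]; // assumption: larger than the LLC
    ubyte acc = 0;
    foreach (ref b; scratch)
    {
        b += 1;   // the write dirties the cache line
        acc += b; // the read forces the load
    }
    sink = acc;
}

Calling evictCaches() between cycles evicts rather than truly randomises the cache contents, but it is usually enough to stop the first-cycle warm-up from flattering every later cycle.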