On 3 June 2013 01:53, Roy Obena <roy.u@gmail.com> wrote:
On Sunday, 2 June 2013 at 14:34:43 UTC, Manu wrote:

Well, this is another classic point, actually. I've been asked by my friends at Cambridge to give their code a once-over on many occasions, and while I may not understand exactly what their code does, I can often spot boat-loads of simple functional errors: basic programming bugs like off-by-ones, broken pointer logic, a clear lack of understanding of floating point, or logical structure that will clearly lead to incorrect/unexpected edge cases.
And it blows my mind that they then run this code over their big data sets, write up analyses and conclusions, present the statistics in some journal somewhere, and are generally accepted as authorities and taken seriously!

You're making this up. I'm sure they do a lot of data-driven tests or simulations that make most errors detectable. They may not be savvy programmers, and their programs may not be error-free, but boat-loads of errors? C'mon.

I'm really not.
I mean, this won't all appear in the same function, but I've seen all these sorts of errors on more than one occasion.
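Take the floating-point one, since it's the easiest to show in a few lines. Here's a minimal D sketch of the pattern I mean (the values and names are made up for illustration, not lifted from anyone's real code):

    import std.math : approxEqual;
    import std.stdio;

    void main()
    {
        double[] xs = [0.1, 0.2, 0.3];

        double sum = 0;
        // The correct bound is i < xs.length; the classic out-by-one
        // writes i <= xs.length, which in C silently reads past the
        // end of the buffer (D at least throws a RangeError).
        for (size_t i = 0; i < xs.length; ++i)
            sum += xs[i];

        // Exact equality on floats is the other classic: none of 0.1,
        // 0.2, 0.3 or 0.6 is exactly representable in binary, so this
        // prints false...
        writeln(sum == 0.6);
        // ...while a tolerance-based comparison prints true.
        writeln(approxEqual(sum, 0.6));
    }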
I suspect that in most cases the errors just inflate their perceived standard deviation; otherwise I'm sure they'd notice the results are all wrong and go looking for the bugs.
But it's sad if a study reports a higher-than-true standard deviation because of code errors, or worse, if the bugs do influence the averages slightly but the result still feels plausible within the expected tolerance.
The scariest state is the idea that their code is *almost correct*.
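To put numbers on that, here's a made-up D example of what a single garbage sample (say, from that out-by-one read) does to the summary statistics; the data is invented purely for illustration:

    import std.algorithm : map, sum;
    import std.math : sqrt;
    import std.stdio;

    double mean(in double[] xs) { return xs.sum / xs.length; }

    // Sample standard deviation.
    double stdev(in double[] xs)
    {
        immutable m = xs.mean;
        return sqrt(xs.map!(x => (x - m) ^^ 2).sum / (xs.length - 1));
    }

    void main()
    {
        // Made-up measurements; in the "buggy" run, one sample is
        // replaced by garbage, as a read past the end of a buffer might.
        double[] clean = [9.8, 10.1, 9.9, 10.2, 10.0, 9.9, 10.1, 10.0];
        auto buggy = clean.dup;
        buggy[$ - 1] = 13.0;

        writefln("clean: mean %.2f, sd %.2f", clean.mean, clean.stdev);
        writefln("buggy: mean %.2f, sd %.2f", buggy.mean, buggy.stdev);
        // mean moves 10.00 -> 10.38, but sd jumps 0.13 -> 1.07: the
        // average still looks plausible while the spread is inflated.
    }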

Clearly, they should be using D ;)