Re: Java > Scala
December 18, 2011
> From: Russel Winder
> Subject: Re: Java > Scala
> Newsgroups: gmane.comp.lang.d.general
> Sat, 17 Dec 2011 22:18:26 -0800

> I really rather object to being labelled an educated idiot.
...
> If you want to look at even more biased benchmarking look at http://shootout.alioth.debian.org/ it is fundamentally designed to show that C is the one true language for writing performance computation.

I rather object to the baseless accusation that the benchmarks game is "designed to show that C is the one true language for writing performance computation."

Your accusation is false.

Your accusation is ignorant (literally).

December 18, 2011
On 12/18/11 1:08 PM, Isaac Gouy wrote:
>> From: Russel Winder
>> Subject: Re: Java > Scala
>> Newsgroups: gmane.comp.lang.d.general
>> Sat, 17 Dec 2011 22:18:26 -0800
>
>> I really rather object to being labelled an educated idiot.
> ....
>> If you want to look at even more biased benchmarking look at
>> http://shootout.alioth.debian.org/ it is fundamentally designed to show
>> that C is the one true language for writing performance computation.
>
> I rather object to the baseless accusation that the benchmarks game is "designed to show that C is the one true language for writing performance computation."
>
> Your accusation is false.
>
> Your accusation is ignorant (literally).

It also strikes me as something rather random to say. Far as I can tell the shootout comes with plenty of warnings and qualifications and uses a variety of tests that don't seem chosen to favor C or generally systems programming languages.

But I'm sure Russel had something in mind. Russel, would you want to expand a bit?


Thanks,

Andrei
December 18, 2011
On 19/12/2011 00:08, Andrei Alexandrescu wrote:
> On 12/18/11 1:08 PM, Isaac Gouy wrote:
>>> From: Russel Winder
>>> Subject: Re: Java > Scala
>>> Newsgroups: gmane.comp.lang.d.general
>>> Sat, 17 Dec 2011 22:18:26 -0800
>>
>>> I really rather object to being labelled an educated idiot.
>> ....
>>> If you want to look at even more biased benchmarking look at http://shootout.alioth.debian.org/ it is fundamentally designed to show that C is the one true language for writing performance computation.
>>
>> I rather object to the baseless accusation that the benchmarks game is "designed to show that C is the one true language for writing performance computation."
>>
>> Your accusation is false.
>>
>> Your accusation is ignorant (literally).
> 
> It also strikes me as something rather random to say. Far as I can tell the shootout comes with plenty of warnings and qualifications and uses a variety of tests that don't seem chosen to favor C or generally systems programming languages.
> 
> But I'm sure Russel had something in mind. Russel, would you want to expand a bit?
> 
> 
> Thanks,
> 
> Andrei

Not only is it random and baseless, my own personal experience is that the shootout actually gives a fairly accurate picture of what one can expect in the areas of speed and memory usage.

Less so for code size, though, because the programs are still too small to take advantage of some language features designed for large-scale programs.

And I still pray to see D back in the shootout.
December 19, 2011
On 12/18/11 5:40 PM, Somedude wrote:
> And I still pray to see D back in the shootout.

Praying might help. Working on it may actually be more effective :o).

Andrei
December 19, 2011
On 12/18/2011 11:08 AM, Isaac Gouy wrote:
> I rather object to the baseless accusation that the benchmarks game is
> "designed to show that C is the one true language for writing performance
> computation."
>
> Your accusation is false.
>
> Your accusation is ignorant (literally).

This is why I quit posting any benchmark results. Someone was always accusing me of bias, sabotage, etc.


December 19, 2011
> From: Walter Bright <newshound2@digitalmars.com>
> Sent: Sunday, December 18, 2011 10:46 PM
> On 12/18/2011 11:08 AM, Isaac Gouy wrote:
>>  I rather object to the baseless accusation that the benchmarks game is
>>  "designed to show that C is the one true language for writing performance
>>  computation."
>> 
>>  Your accusation is false.
>> 
>>  Your accusation is ignorant (literally).
> 
> This is why I quit posting any benchmark results. Someone was always accusing
> me of bias, sabotage, etc.


My feeling is that that used to happen much more often 4 or 5 years ago; these days a third party has usually jumped in to challenge ignorant comments about the benchmarks game before I even notice.

Such ignorant comments are just seen to reflect badly on the person who made them.

December 20, 2011
On Sun, 2011-12-18 at 17:08 -0600, Andrei Alexandrescu wrote:
> On 12/18/11 1:08 PM, Isaac Gouy wrote:
> >> From: Russel Winder
> >> Subject: Re: Java > Scala
> >> Newsgroups: gmane.comp.lang.d.general
> >> Sat, 17 Dec 2011 22:18:26 -0800
> >
> >> I really rather object to being labelled an educated idiot.
> > ....
> >> If you want to look at even more biased benchmarking look at http://shootout.alioth.debian.org/ it is fundamentally designed to show that C is the one true language for writing performance computation.
> >
> > I rather object to the baseless accusation that the benchmarks game is "designed to show that C is the one true language for writing performance computation."

Overstated perhaps, baseless, no.  But this is a complex issue.

> > Your accusation is false.
> >
> > Your accusation is ignorant (literally).

The recent thread between Caligo, myself and others on this list should surely have displayed the futility of arguing in this form.

> It also strikes me as something rather random to say. Far as I can tell the shootout comes with plenty of warnings and qualifications and uses a variety of tests that don't seem chosen to favor C or generally systems programming languages.

The Shootout infrastructure and overall management is great.  Isaac has done a splendid job there.  The data serves a purpose for people who read between the lines and interpret the results with intelligence.  The opening page does indeed set out that you have to be very careful with the data to avoid comparing apples and oranges.  The data is presented in good faith.

The system as set out is biased though, systematically so.  This is not a problem per se, since all the micro-benchmarks are about computationally intensive activity.  Native-code versions are therefore always going to appear better.  But then this is fine: the Shootout is about computationally intensive comparison.  Actually I am surprised that Java does so well in this comparison, given its start-up time issues.

Part of the "problem" I alluded to was people using the numbers without thinking.  No amount of words on pages affect these people, they take the numbers as is and make decisions based solely on them.  C, C++ and Fortran win on most of them and so are the only choice of language.  (OK so Haskell wins on the quad-core thread-ring, which I find very interesting.)

As I understand it, Isaac runs this basically single-handed, relying on folk providing versions of the code.  This means there is a highly restricted resource issue in managing the Shootout.  Hence a definite set of problems and a restricted set of languages to make management feasible.  This leads to interesting situations, such as D not being part of the set while Clean and Mozart/Oz are.  But then Isaac is the final arbiter here, as it is his project, and what he says goes.

I looked at the Java code and the Groovy code a couple of years back (I haven't re-checked the Java code recently), and it was more or less a transliteration of the C code.  This meant that the programming languages were not being shown off at their best.  I started a project with the Groovy community to provide reasonable versions of the Groovy code and was getting some take-up.  Groovy was always going to be with Python and Ruby and nowhere near C, C++, and Fortran, or Java, but the results being displayed at the time were orders of magnitude slower than Groovy could be, as shown by the Java results.  The most obvious problem was that the original Groovy code was written so as to avoid any parallelism at all.

Of course Groovy (like Python) would never be used directly for this sort of computation, a mixed Groovy/Java or Python/C (or Python/C++, Python/Fortran) would be -- the "tight loop" being coded in the static language, the rest in the dynamic language.   Isaac said though that this was not permitted, that only pure single language versions were allowed.  Entirely reasonable in one sense, unfair in another: fair because it is about language performance in the abstract, unfair because it is comparing languages out of real world use context.
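The pattern in question (hot loop in the static language, glue in the dynamic one) can be sketched in pure Python, using a C-implemented builtin as a stand-in for a real C extension.  This is a toy illustration of the principle only, not a shootout-legal program, and the timings are illustrative:

```python
import time

N = 5_000_000

# Pure dynamic-language version: the hot loop runs as interpreted bytecode.
t0 = time.perf_counter()
total_interpreted = 0
for i in range(N):
    total_interpreted += i
interpreted = time.perf_counter() - t0

# Mixed-language pattern: the same loop delegated to sum(), whose inner
# loop runs in C; Python is left as the orchestration layer.
t0 = time.perf_counter()
total_delegated = sum(range(N))
delegated = time.perf_counter() - t0

assert total_interpreted == total_delegated  # same result either way
print(f"interpreted: {interpreted:.3f}s  delegated: {delegated:.3f}s")
```

The delegated version is consistently faster because the per-iteration interpreter dispatch disappears, which is exactly the gain a Python/C or Groovy/Java split buys in real code.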

(It is worth noting that the Python is represented by CPython, and I suspect PyPy would be a lot faster for these micro-benchmarks.  But only when PyPy is Python 3 compliant since Python 3 and not Python 2 is the representative in the Shootout.  A comparison here is between using Erlang and Erlang HiPE.)

In the event, Isaac took Groovy out of the Shootout, so the Groovy rewrite effort was disbanded.  I know Isaac says run your own site, but that rather misses the point, and leads directly to the sort of hassles Walter had when providing a benchmark site.  There is no point in a language development team running a benchmark: the issue is perceived, if not real, bias in the numbers.  Benchmarks have to be run by an independent party even if the contributions are from language development teams.

> But I'm sure Russel had something in mind. Russel, would you want to expand a bit?

Hopefully the above does what you ask.

The summary is that Isaac is running this in good faith, but there are systematic biases in the whole thing, which is entirely fine as long as you appreciate that.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


December 20, 2011
> From: Russel Winder <russel@russel.org.uk>
> Sent: Monday, December 19, 2011 11:29 PM
>>  >> If you want to look at even more biased benchmarking look at
>>  >> http://shootout.alioth.debian.org/ it is fundamentally designed to
>>  >> show that C is the one true language for writing performance
>>  >> computation.

> Overstated perhaps, baseless, no.  But this is a complex issue.

False and baseless, and a simple issue.

Your words are clear - "... designed to show ...".

Your false accusation is about purpose and intention - you should take back that accusation.
December 20, 2011
On 12/20/11 1:29 AM, Russel Winder wrote:
> The system as set out is biased though, systematically so.  This is not
> a problem per se since all the micro-benchmarks are about
> computationally intensive activity.  Native code versions are therefore
> always going to appear better.  But then this is fine the Shootout is
> about computationally intensive comparison.

This is fine, so no bias so far. It's a speed benchmark, so it's supposed to measure speed. It says as much. If native code usually comes in the top places, the word is "expected", not "biased".

> Actually I am surprised
> that Java does so well in this comparison due to its start-up time
> issues.

I suppose this is because the run time of the tests is long enough to bury VM startup time. Alternatively, the benchmark may only measure the effective execution time.

> Part of the "problem" I alluded to was people using the numbers without
> thinking.  No amount of words on pages affect these people, they take
> the numbers as is and make decisions based solely on them.

Well, how is that a bias of the benchmark?

> C, C++ and
> Fortran win on most of them and so are the only choice of language.

The benchmark measures speed. If one is looking for speed wouldn't the choice of language be in keeping with these results? I'd be much more suspicious of the quality and/or good will of the benchmark if other languages would frequently come to the top.

> As I understand it, Isaac runs this basically single-handed, relying on
> folk providing versions of the code.  This means there is a highly
> restricted resource issue in managing the Shootout.  Hence a definite
> set of problems and a restricted set of languages to make management
> feasible.  This leads to interesting situations, such as D not being part
> of the set while Clean and Mozart/Oz are.  But then Isaac is the final
> arbiter here, as it is his project, and what he says goes.

If I recall things correctly, Isaac dropped the D code because it was 32-bit only, which was too much trouble for his setup. Now we have good 64-bit code generation, so it may be a good time to redo the D implementations of the benchmarks and submit them again to Isaac for inclusion in the shootout.

Quite frankly, however, your remark (which I must agree, for all the respect I hold for you, is baseless) is a PR faux pas - and unfortunately not the only one from our community. I'd find it difficult to go now and say, "by the way, Isaac, we're that community that insulted you on a couple of occasions. Now that we've got talking again, how about putting D back in the shootout?"

> I looked at the Java code and the Groovy code a couple of years back (I
> haven't re-checked the Java code recently), and it was more or less a
> transliteration of the C code.

That is contributed code. In order to demonstrate bias you'd need to show that faster code was submitted and refused.

> This meant that the programming
> languages were not being shown off at their best.  I started a project
> with the Groovy community to provide reasonable versions of the Groovy code
> and was getting some take up.  Groovy was always going to be with Python
> and Ruby and nowhere near C, C++, and Fortran, or Java, but the results
> being displayed at the time were orders of magnitude slower than Groovy
> could be, as shown by the Java results.  The most obvious problem was
> that the original Groovy code was written so as to avoid any parallelism
> at all.

Who wrote the code? Is the owner of the shootout site responsible for those poor results?

> Of course Groovy (like Python) would never be used directly for this
> sort of computation, a mixed Groovy/Java or Python/C (or Python/C++,
> Python/Fortran) would be -- the "tight loop" being coded in the static
> language, the rest in the dynamic language.   Isaac said though that
> this was not permitted, that only pure single language versions were
> allowed.  Entirely reasonable in one sense, unfair in another: fair
> because it is about language performance in the abstract, unfair because
> it is comparing languages out of real world use context.

I'd find it a stretch to label that as unfair, for multiple reasons. The shootout measures speed of programming languages, not speed of systems languages wrapped in shells of other languages. The simpler reason is that it's the decision of the site owner to choose the rules. I happen to find them reasonable, but I get your point too (particularly if the optimized routines are part of the language's standard library).

> (It is worth noting that the Python is represented by CPython, and I
> suspect PyPy would be a lot faster for these micro-benchmarks.  But only
> when PyPy is Python 3 compliant since Python 3 and not Python 2 is the
> representative in the Shootout.  A comparison here is between using
> Erlang and Erlang HiPE.)
>
> In the event, Isaac took Groovy out of the Shootout, so the Groovy
> rewrite effort was disbanded.  I know Isaac says run your own site, but
> that rather misses the point, and leads directly to the sort of hassles
> Walter had when providing a benchmark site.

That actually hits the point so hard, the point is blown into so many little pieces, you'd think it wasn't there in the first place. It's a website. If it doesn't do what you want, at the worst case that would be "a bummer". But it's not "unfair", as the whole notion of fairness is inappropriate here. Asking for anything including fairness _does_ miss the point.

> There is no point in a
> language development team running a benchmark.  The issues are
> perceived, if not real, bias in the numbers.  Benchmarks have to be run
> by an independent even if the contributions are from language
> development teams.
>
>> But I'm sure Russel had something in mind. Russel, would you want to
>> expand a bit?
>
> Hopefully the above does what you ask.
>
> The summary is that Isaac is running this in good faith, but there are
> systematic biases in the whole thing, which is entirely fine as long as
> you appreciate that.

Well, to me your elaboration seems like one of those delicious monologues Ricky Gervais gets into in the show "Extras". He makes some remark, figures it's a faux pas, and then tries to mend it but instead it all gets worse and worse.


Andrei
December 21, 2011
> From: Russel Winder <russel@russel.org.uk>
> Sent: Monday, December 19, 2011 11:29 PM


As for your other comments:

> The opening page does indeed set out that you have to be very careful with the data to avoid comparing apples and oranges.

No, the opening page says - "A comparison between programs written in such different languages *is* a comparison between apples and oranges..."


> Actually I am surprised that Java does so well in this comparison due to its start-up time issues.

Perhaps the start-up time issues are less than you suppose.

The Help page shows 4 different measurement approaches for the Java program, and for these tiny tiny programs, with these workloads, the "excluding start-up" "Warmed" times really aren't much different from the usual times that include all the start-up costs -

http://shootout.alioth.debian.org/help.php#java
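The distinction being made there (times with and without start-up) can be sketched in Python rather than Java, as a hedged stand-in for the measurements on that page and not the benchmarks game's own harness: the cold time is the wall-clock of a fresh interpreter process running the workload, the warm time is the identical workload timed in-process.

```python
import subprocess
import sys
import time

# A small stand-in workload; any CPU-bound expression would do.
WORKLOAD = "sum(i * i for i in range(10**6))"

# Cold time: wall-clock of a fresh interpreter running the workload,
# so process spawn and interpreter start-up are included.
t0 = time.perf_counter()
subprocess.run([sys.executable, "-c", WORKLOAD], check=True)
cold = time.perf_counter() - t0

# Warm time: the same workload timed in-process, start-up excluded.
t0 = time.perf_counter()
eval(WORKLOAD)
warm = time.perf_counter() - t0

print(f"cold: {cold:.3f}s  warm: {warm:.3f}s  start-up: {cold - warm:.3f}s")
```

Whether the start-up overhead matters depends on how large the workload is relative to it, which is Isaac's point: with the shootout's workloads, the warm and cold times barely differ.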


> Part of the "problem" I alluded to was people using the numbers without thinking.

Do you include yourself among that group of people?


> I started a project with the Groovy community to provide reasonable version of Groovy codes and was getting some take up.

You took on the task in the first week of March 2009

   http://groovy.329449.n5.nabble.com/the-benchmarks-game-Groovy-programs-td366268.html#a366290

and iirc 6 months later not a single program had been contributed!

   http://groovy.329449.n5.nabble.com/Alioth-Shootout-td368794.html


> In the event, Isaac took Groovy out of the Shootout, so the Groovy rewrite effort was disbanded.

Your "Groovy rewrite effort" didn't contribute a single program in 6 months!


> There is no point in a language development team running a benchmark.


Tell that to the PyPy developers http://speed.pypy.org/


Tell that to Mike Pall http://luajit.org/performance_x86.html

Tell that to the Go developers