Programming languages and performance
April 13, 2015 (Walter Bright)
https://www.reddit.com/r/programming/comments/32f4as/why_most_high_level_languages_are_slow/

Good article, discussion is a bit lame.
April 13, 2015 (weaselcat)
On Monday, 13 April 2015 at 23:28:46 UTC, Walter Bright wrote:
> Good article, discussion is a bit lame.

It's reddit, that's not really surprising.
April 13, 2015 (H. S. Teoh)
On Mon, Apr 13, 2015 at 04:28:45PM -0700, Walter Bright via Digitalmars-d wrote:
> https://www.reddit.com/r/programming/comments/32f4as/why_most_high_level_languages_are_slow/
> 
> Good article, discussion is a bit lame.

While Phobos is making good progress at being allocation-free, it still has a ways to go. And it doesn't help that the current D GC isn't that great, when you do have to allocate -- I've managed to get 30-40% performance improvements just by turning off the default collection schedule and triggering collections myself at more strategic intervals.
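
To make that concrete, a minimal sketch of the manual-collection approach using core.memory; the function names, the chunked workload, and the collection interval are invented for illustration:

    import core.memory : GC;

    void processLargeDataset()
    {
        // Stop the GC from collecting on its own schedule.
        GC.disable();
        scope (exit) GC.enable();   // restore normal behaviour afterwards

        foreach (chunk; 0 .. 100)
        {
            allocationHeavyStep(chunk);

            // Collect at points we choose, between chunks,
            // rather than in the middle of a hot loop.
            if (chunk % 10 == 9)
                GC.collect();
        }
    }

    // Placeholder for real per-chunk work that allocates.
    void allocationHeavyStep(int chunk)
    {
        auto tmp = new int[](4096);
        tmp[] = chunk;
    }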

Not having to box things is a big win IMO, though. Boxing of POD types in Java/C# just screams "inefficient" to me... can you imagine all that extra, needless indirection wreaking havoc on the CPU cache and cache predictions?  Having first-class support for value types is also a big win. I rarely use classes in D except when I actually need polymorphism, which requires heap allocation. With alias this, you can even have a limited amount of inheritance in structs, which is totally cool.
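
For readers who haven't seen the struct side of this, a small sketch of what "a limited amount of inheritance" via alias this looks like; the types are made up for illustration, and everything stays a value type with no heap allocation involved:

    struct Point
    {
        double x, y;
        double dot(Point other) const { return x * other.x + y * other.y; }
    }

    struct LabeledPoint
    {
        Point point;
        alias point this;   // forwards members and converts implicitly to Point
        string label;
    }

    double lengthSquared(Point p) { return p.dot(p); }

    void main()
    {
        auto p = LabeledPoint(Point(3, 4), "corner");
        assert(p.x == 3);                  // member forwarded through alias this
        assert(lengthSquared(p) == 25);    // implicit conversion to Point
    }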

But at the end of the day, the programmer has to know how to write cache-efficient code. No matter how the language/compiler tries to be smart and do the Right Thing(tm), poorly-laid out data is poorly-laid out data, and you're gonna incur cache misses all over the place. Cache-unfriendly algorithms are cache-unfriendly algorithms, and no smart language design / smart optimizer is gonna fix that for you. You have to know how to work with the modern cache hierarchies, how to lay out data for efficient access, and how to write cache-friendly algorithms. To this end, I found the following series of articles extremely enlightening:

	http://lwn.net/Articles/250967/
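
One concrete illustration of the layout point (my own example, not from the articles): scanning one field of an array of big structs drags all the cold fields through the cache along with it, while splitting the hot field out into its own array keeps every fetched cache line full of useful data. The Quote/Quotes types below are invented for the sketch.

    // Array-of-structs: summing prices also pulls symbols and volumes
    // into the cache, one mostly-wasted cache line per element.
    struct Quote
    {
        char[56] symbol;
        double price;
        long volume;
    }

    double sumPricesAoS(const(Quote)[] quotes)
    {
        double total = 0;
        foreach (ref q; quotes)
            total += q.price;   // 8 useful bytes out of a 72-byte struct
        return total;
    }

    // Struct-of-arrays: the hot field is contiguous, so a 64-byte cache
    // line delivers 8 useful doubles at a time.
    struct Quotes
    {
        char[56][] symbols;
        double[] prices;
        long[] volumes;
    }

    double sumPricesSoA(ref const Quotes quotes)
    {
        double total = 0;
        foreach (p; quotes.prices)
            total += p;
        return total;
    }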


T

-- 
Music critic: "That's an imitation fugue!"
April 14, 2015 (Walter Bright)
On 4/13/2015 4:28 PM, Walter Bright wrote:
> https://www.reddit.com/r/programming/comments/32f4as/why_most_high_level_languages_are_slow/
> Good article, discussion is a bit lame.

One of the reasons I've been range-ifying Phobos is not only to remove dependence on the GC, but often to eliminate allocations entirely, by removing the need for temporaries to hold intermediate results.

https://github.com/D-Programming-Language/phobos/pull/3187
https://github.com/D-Programming-Language/phobos/pull/3185
https://github.com/D-Programming-Language/phobos/pull/3179
https://github.com/D-Programming-Language/phobos/pull/3178
https://github.com/D-Programming-Language/phobos/pull/3167
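
For anyone who hasn't followed those pulls, the general shape of the change is roughly this (a simplified sketch, not the actual code in those pulls): an eager formulation that allocates a temporary to hold the intermediate result, versus a lazy range pipeline that never materializes one.

    import std.algorithm : map, splitter, sum;
    import std.array : split;
    import std.conv : to;

    // Eager: split() allocates a temporary array of slices up front.
    int eagerSum(string line)
    {
        int total = 0;
        foreach (field; line.split(","))
            total += field.to!int;
        return total;
    }

    // Lazy: splitter and map just build a range pipeline; each field is
    // produced and converted on the fly, with no intermediate array.
    int lazySum(string line)
    {
        return line.splitter(",")
                   .map!(f => f.to!int)
                   .sum;
    }

    unittest
    {
        assert(eagerSum("1,2,3") == 6);
        assert(lazySum("1,2,3") == 6);
    }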

April 14, 2015 (weaselcat)
On Tuesday, 14 April 2015 at 02:12:18 UTC, Walter Bright wrote:
> On 4/13/2015 4:28 PM, Walter Bright wrote:
>> https://www.reddit.com/r/programming/comments/32f4as/why_most_high_level_languages_are_slow/
>> Good article, discussion is a bit lame.
>
> One of the reasons I've been range-ifying Phobos is not only to remove dependence on the GC, but often to eliminate allocations entirely, by removing the need for temporaries to hold intermediate results.
>

this is essentially fusion/deforestation, correct?
April 14, 2015 (Walter Bright)
On 4/13/2015 7:23 PM, weaselcat wrote:
> this is essentially fusion/deforestation, correct?

??
April 14, 2015 (Laeeth Isharc)
thanks for the links and colour, Walter and HST

> But at the end of the day, the programmer has to know how to write
> cache-efficient code. No matter how the language/compiler tries to be
> smart and do the Right Thing(tm), poorly-laid out data is poorly-laid
> out data, and you're gonna incur cache misses all over the place.
> Cache-unfriendly algorithms are cache-unfriendly algorithms, and no
> smart language design / smart optimizer is gonna fix that for you. You
> have to know how to work with the modern cache hierarchies, how to lay
> out data for efficient access, and how to write cache-friendly
> algorithms.

> While Phobos is making good progress at being allocation-free, it still
> has a ways to go. And it doesn't help that the current D GC isn't that
> great, when you do have to allocate -- I've managed to get 30-40%
> performance improvements just by turning off the default collection
> schedule and triggering collections myself at more strategic intervals.

Would love to see an article sometime on efficient programming in D - both cache efficiency and how to make the GC your friend.  (I get the basic idea of data-driven design, but not yet the subtleties of cache-efficient code, and I am sure many other newcomers to D must be in a similar position).

I found the same thing as you describe with a monster CSV import (files are daily, but data needs to be organized by symbol to be useful).

> Not having to box things is a big win IMO, though. Boxing of POD types
> in Java/C# just screams "inefficient" to me... can you imagine all that
> extra, needless indirection wreaking havoc on the CPU cache and cache
> predictions?

There was an interesting post on Lambda the Ultimate by Mike Pall (sp?  The Lua guy) in which he said certain eyesight decisions in Python meant it would be much harder to ever make Python fast, and one of the PyPy guys agreed with him.  (It was more than just boxing).

I am not in favour of extrapolating trends mindlessly, but I wonder what the world will look like in five or ten years should the gap between processor performance and memory latency continue to widen at similar rates, given continued growth in data set sizes.


Laeeth.
April 14, 2015 (weaselcat)
On Tuesday, 14 April 2015 at 02:39:40 UTC, Walter Bright wrote:
> On 4/13/2015 7:23 PM, weaselcat wrote:
>> this is essentially fusion/deforestation, correct?
>
> ??

http://en.wikipedia.org/wiki/Deforestation_(computer_science)
April 14, 2015 (Laeeth Isharc)
On Tuesday, 14 April 2015 at 02:44:15 UTC, Laeeth Isharc wrote:
> thanks for the links and colour, Walter and HST
>
>> But at the end of the day, the programmer has to know how to write
>> cache-efficient code. No matter how the language/compiler tries to be
>> smart and do the Right Thing(tm), poorly-laid out data is poorly-laid
>> out data, and you're gonna incur cache misses all over the place.
>> Cache-unfriendly algorithms are cache-unfriendly algorithms, and no
>> smart language design / smart optimizer is gonna fix that for you. You
>> have to know how to work with the modern cache hierarchies, how to lay
>> out data for efficient access, and how to write cache-friendly
>> algorithms.
>
>> While Phobos is making good progress at being allocation-free, it still
>> has a ways to go. And it doesn't help that the current D GC isn't that
>> great, when you do have to allocate -- I've managed to get 30-40%
>> performance improvements just by turning off the default collection
>> schedule and triggering collections myself at more strategic intervals.
>
> Would love to see an article sometime on efficient programming in D - both cache efficiency and how to make the GC your friend.  (I get the basic idea of data driven design, but not yet the subtleties of cache efficient code and I am sure many other newcomers to D must be in a similar position).
>
> I found the same thing as you describe with a monster CSV import (files are daily, but data needs to be organized by symbol to be useful).
>
>> Not having to box things is a big win IMO, though. Boxing of POD types
>> in Java/C# just screams "inefficient" to me... can you imagine all that
>> extra, needless indirection wreaking havoc on the CPU cache and cache
>> predictions?
>
> There was an interesting post on Lambda the Ultimate by Mike Pall (sp?  The Lua guy) in which he said certain eyesight

DESIGN, not eyesight. iPad spell check.

> decisions in Python meant it would be much harder to ever make Python fast, and one of the PyPy guys agreed with him.  (It was more than just boxing).
>
> I am not in favour of extrapolating trends mindlessly, but I wonder what the world will look like in five or ten years should the gap between processor performance and memory latency continue to widen at similar rates, given continued growth in data set sizes.
>
>
> Laeeth.

April 14, 2015 (weaselcat)
On Tuesday, 14 April 2015 at 02:45:37 UTC, weaselcat wrote:
> On Tuesday, 14 April 2015 at 02:39:40 UTC, Walter Bright wrote:
>> On 4/13/2015 7:23 PM, weaselcat wrote:
>>> this is essentially fusion/deforestation, correct?
>>
>> ??
>
> http://en.wikipedia.org/wiki/Deforestation_(computer_science)

my bad, accidentally hit send.
there's an example of it on Stack Overflow:
http://stackoverflow.com/questions/578063/what-is-haskells-stream-fusion
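
In D terms the parallel (my illustration, not from the linked answer) is that a chain of lazy range adaptors already behaves like the fused loop: nothing builds an intermediate list, so there is nothing for a deforestation pass to eliminate.

    import std.algorithm : map, sum;
    import std.range : iota;

    // Conceptually "sum (map square [1..n])", but the composition is lazy:
    // each value flows through map straight into sum, with no intermediate
    // array ever allocated.
    long sumOfSquares(long n)
    {
        return iota(1, n + 1)
               .map!(x => x * x)
               .sum;
    }

    unittest
    {
        assert(sumOfSquares(3) == 1 + 4 + 9);
    }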