March 19, 2012
On 03/19/2012 01:33 PM, Derek wrote:
> On Fri, 16 Mar 2012 13:16:18 +1100, Kevin <kevincox.ca@gmail.com> wrote:
>
>> This is in no way D specific but say you have two constant strings.
>>
>> const char[] a = "1234567890";
>> // and
>> const char[] b = "67890";
>>
>> You could lay one out inside the other in memory, i.e. if a.ptr
>> points at the '1', then b.ptr points at the '6'. I'm not sure if this
>> has been done, and I don't think it would apply very often, but it
>> would be kinda cool.
>>
>> I thought of this because I wanted to pre-generate hex representations
>> of some numbers, and I realized I could use half the memory if I
>> nested them. (At least I think it would be half.)
>
> Is the effort to do this really an issue with today's vast amounts of
> RAM (virtual and real) available? How much memory are you expecting to
> 'save'?
>

Using less memory means fewer cache misses and therefore better performance. Saving half the memory can make quite a difference.
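
For illustration, the nesting Kevin describes can be done by hand today; a minimal sketch (as far as I know, no compiler overlaps literals like this automatically):

void main()
{
    static immutable char[10] pool = "1234567890";
    const(char)[] a = pool[];       // "1234567890"
    const(char)[] b = pool[5 .. $]; // "67890" -- same bytes, no extra storage
    assert(b.ptr == a.ptr + 5);     // b lives inside a's storage
}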
March 19, 2012
On Tue, Mar 20, 2012 at 12:05:55AM +0100, Timon Gehr wrote:
> On 03/19/2012 01:33 PM, Derek wrote:
> >On Fri, 16 Mar 2012 13:16:18 +1100, Kevin <kevincox.ca@gmail.com> wrote:
> >
> >>This is in no way D specific but say you have two constant strings.
> >>
> >>const char[] a = "1234567890";
> >>// and
> >>const char[] b = "67890";
> >>
> >>You could lay one out inside the other in memory, i.e. if a.ptr points at the '1', then b.ptr points at the '6'. I'm not sure if this has been done, and I don't think it would apply very often, but it would be kinda cool.
> >>
> >>I thought of this because I wanted to pre-generate hex representations of some numbers, and I realized I could use half the memory if I nested them. (At least I think it would be half.)
> >
> >Is the effort to do this really an issue with today's vast amounts of RAM (virtual and real) available? How much memory are you expecting to 'save'?
> >
> 
> Using less memory means fewer cache misses and therefore better performance. Saving half the memory can make quite a difference.

While the *total* amount of memory used may not matter so much, cache locality matters a LOT. The difference between an inner loop whose accessed memory all fits within the CPU cache and one that triggers at least one cache miss per iteration (because its working set exceeds the cache size by just a little) is *huge*.
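
The effect is easy to demonstrate. Here is a rough sketch (the array sizes are illustrative; tune them to your own cache hierarchy):

import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio : writefln;

void main()
{
    auto small = new long[](2_000);     // ~16 KB: fits in L1 cache
    auto large = new long[](8_000_000); // ~64 MB: far exceeds cache

    static long sum(const(long)[] a)
    {
        long s = 0;
        foreach (x; a) s += x;
        return s;
    }

    long sink;                   // keeps the optimizer from eliding the work
    auto sw = StopWatch(AutoStart.yes);
    foreach (i; 0 .. 4_000)
        sink += sum(small);      // 4_000 x 2_000 = 8M elements touched
    auto tSmall = sw.peek;
    sw.reset();
    sink += sum(large);          // 8M elements touched, but cache-hostile
    writefln("cache-resident: %s   cache-busting: %s   (sink: %s)",
             tSmall, sw.peek, sink);
}

Both runs touch the same number of elements; the only difference is whether the working set fits in cache.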

Disregarding memory usage just because of the abundance of memory is a fallacy.


T

-- 
What do you mean the Internet isn't filled with subliminal messages? What about all those buttons marked "submit"??
March 19, 2012
On 20 March 2012 01:33, Derek <ddparnell@bigpond.com> wrote:
> Is the effort to do this really an issue with today's vast amounts of RAM (virtual and real) available? How much memory are you expecting to 'save'?
>
> And is RAM address alignment an issue here also? Currently most literals are aligned on a 4- or 8-byte boundary, but with this sort of pooling, some literals will not be so aligned any more. That might not be an issue, but I'm just curious.

Gah, I hate this sentiment! It encourages lazy, poor design and practice simply because "RAM/CPU is cheap, dev time is expensive". Yes, RAM and CPU /are/ cheap, and dev time is expensive, but so is losing millions of dollars of revenue because your app's loading times are 100ms too slow and your conversion rate drops. This is the one thing that I hate about the Rails community, since it is their motto.

Sites should be blazingly fast with today's computing power, but a ridiculous focus on "Developer productivity" has meant that no change has happened. I love it when D threads talk about whether or not the compiler does inlining, or loop unrolling, or whether it does, or should, use the correct instructions for the target. Not because I get off on talking about optimisation, but because it shows that there are still people who care about squeezing every last instruction of performance, without compromising on productivity.

Resources cost money; any saving of resources saves money.

--
James Miller
March 20, 2012
On Tue, Mar 20, 2012 at 12:55:29PM +1300, James Miller wrote:
> On 20 March 2012 01:33, Derek <ddparnell@bigpond.com> wrote:
> > Is the effort to do this really an issue with today's vast amounts of RAM (virtual and real) available? How much memory are you expecting to 'save'?
> >
> > And is RAM address alignment an issue here also? Currently most literals are aligned on a 4- or 8-byte boundary, but with this sort of pooling, some literals will not be so aligned any more. That might not be an issue, but I'm just curious.
> 
> Gah, I hate this sentiment! It encourages lazy, poor design and practice simply because "RAM/CPU is cheap, dev time is expensive". Yes, RAM and CPU /are/ cheap, and dev time is expensive, but so is losing millions of dollars of revenue because your app's loading times are 100ms too slow and your conversion rate drops. This is the one thing that I hate about the Rails community, since it is their motto.

Not to mention that this fallacious attitude causes people to overlook the fact that a single byte can make the difference between an inner loop that accesses memory entirely within the CPU cache and one that causes a cache miss on every iteration. The difference in performance is HUGE.

(And no, I'm not suggesting we waste time optimizing bytes, but where memory can be saved, it *should* be saved. Every little bit adds up; the more compact your data structures, the more likely they will fit in the cache and the less likely you'll cause cache misses in performance-critical code.)
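
For instance, compactness can come from field order alone. A small sketch (the sizes assume a typical 64-bit ABI where double is 8-byte aligned):

import std.stdio : writefln;

struct Loose  { bool flag; double value; bool other; } // 1 + 7 pad + 8 + 1 + 7 pad
struct Packed { double value; bool flag; bool other; } // 8 + 1 + 1 + 6 pad

void main()
{
    // Typically prints 24 vs. 16 bytes: same fields, a third less memory.
    writefln("Loose: %s bytes, Packed: %s bytes", Loose.sizeof, Packed.sizeof);
}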


> Sites should be blazingly fast with today's computing power, but a ridiculous focus on "Developer productivity" has meant that no change has happened.

Exactly! In spite of the fact that CPU speed has increased on the order of a millionfold since the old days, and in spite of the fact that memory capacity has increased by several orders of magnitude, today's software is STILL taking forever and two days just to load, and we STILL run out of memory and thrash to swap on almost exactly the same tasks that we did 10 years ago.

Where has all the performance boost drained into? Into bloated code with over-complex designs that suck resources like a sponge due to lack of concern with resource usage, that's what.


> I love it when D threads talk about whether or not the compiler does inlining, or loop unrolling, or whether it does, or should, use the correct instructions for the target. Not because I get off on talking about optimisation, but because it shows that there are still people who care about squeezing every last instruction of performance, without compromising on productivity.

Exactly. Making the *compiler* produce better code is making a difference where it matters. The compiler will be used by hundreds of thousands of projects, so the slightest improvements carry over to all of them *at zero cost to the application programmers*. It's a win-win situation.


> Resources cost money; any saving of resources saves money.
[...]

+1.


T

-- 
English has the lovely word "defenestrate", meaning "to execute by throwing someone out a window", or more recently "to remove Windows from a computer and replace it with something useful". :-) -- John Cowan
March 20, 2012
On 20 March 2012 13:17, H. S. Teoh <hsteoh@quickfur.ath.cx> wrote:
>> Sites should be blazingly fast with today's computing power, but a ridiculous focus on "Developer productivity" has meant that no change has happened.
>
> Exactly! In spite of the fact that CPU speed has increased on the order of a millionfold since the old days, and in spite of the fact that memory capacity has increased by several orders of magnitude, today's software is STILL taking forever and two days just to load, and we STILL run out of memory and thrash to swap on almost exactly the same tasks that we did 10 years ago.
>
> Where has all the performance boost drained into? Into bloated code with over-complex designs that suck resources like a sponge due to lack of concern with resource usage, that's what.

And what's more, developer productivity is not a function of the tools developers use; it's a function of how they use them. Sure, some tools make you more productive than others, but I swear by (g)vim for everything and it hasn't let me down. I use the command line, I ssh where I need to go, all tasks that should make me "less productive", but I'm good at what I do, so there's no difference.

I still spend my time in PHP dealing with platform differences between my local environment and the server environment; I still have to write checks for things that are local-only and things that are server-only. The difference is that these checks live in production code, where compiled code could cut them out.

Ideally, 99% of web apps out there would be CGI/FastCGI processes running behind light webservers like nginx. They would be compiled, native code (or maybe byte-code, like .NET or Java, if you need something special that requires it) and would run at the speed of fucking light. But they aren't. You have servers running a virtual machine, running a framework, running a web app that isn't optimized because "hey, RAM is cheap". You have scripting languages running applications with the scope of massive enterprise software, and they weren't designed for it.
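
For flavour, a compiled CGI responder is only a few lines of D. A toy sketch, not a framework (a light webserver like nginx would invoke it through a CGI/FastCGI wrapper):

import std.stdio : writeln;

void main()
{
    // Classic CGI: print headers, a blank line, then the body;
    // the webserver relays it to the client.
    writeln("Content-Type: text/plain");
    writeln();
    writeln("Hello from a natively compiled process.");
}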

I hate working with PHP, not because the language sucks (though it does) but because I know that every time someone loads a page, a ton of redundant work is done, work that could be done once, at startup, and then never again. Silverstripe has to scan the directory tree to build a manifest file just to get decent performance out of what it does. But it still has to check that those files exist every time a page loads, and it still has to read and parse the manifest. It makes me feel ill thinking about it.
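
In a compiled language, that once-only work can even move to compile time. A small sketch using Kevin's hex-table idea from the start of the thread (CTFE builds the table; nothing is computed per request):

// Runs at compile time via CTFE.
private string[256] makeHexTable()
{
    enum digits = "0123456789abcdef";
    string[256] t;
    foreach (i; 0 .. 256)
        t[i] = "" ~ digits[i >> 4] ~ digits[i & 0xF];
    return t;
}

// Lives in the binary's read-only data; zero work at run time.
static immutable string[256] hexTable = makeHexTable();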

Anyway, I should probably stop ranting about this.

--
James Miller