August 22, 2013
On Wed, 21 Aug 2013 18:50:35 +0200
"Ramon" <spam@thanks.no> wrote:
> 
> I am *not* against keeping an eye on performance, by any means. Looking at Moore's law, however, and at the kind of computing power available nowadays even in smartphones, not to mention 8- and 12-core PCs, I feel that the importance of performance is way overestimated (possibly following a formerly justified tradition).
> 

Even if we assume Moore's law is as alive and well as ever, a related note is that software tends to expand to fill the available computational power. When I can get slowdown in a text-entry box on a 64-bit multi-core, I know that hardware and Moore's law, practically speaking, have very little effect on real performance. At this point, it's code that affects performance far more than anything else. When we hail the great performance of modern web-as-a-platform by the fact that it allows an i7 or some such to run Quake as well as a Pentium 1 or 2 did, then we know Moore's law effectively counts for squat - performance is no longer about hardware, it's about not writing inefficient software.

Now I'm certainly not saying that we should try to wring every last drop of performance out of every place where it doesn't even matter (like C++ tends to do). But software developers' belief in Moore's law has caused many of them to inadvertently cancel out, or even reverse, the hardware speedups with code inefficiencies (which are *easily* compoundable, and can and *do* exceed the 3x slowdown you claimed in another post was unrealistic) - and, as JS-heavy web apps prove, they haven't even gotten considerably more reliable as a result (Not that JS is a good example of a reliability-oriented language - but a lot of people certainly seem to think it is).

August 22, 2013
On Thu, Aug 22, 2013 at 03:28:34PM -0400, Nick Sabalausky wrote:
> On Wed, 21 Aug 2013 18:50:35 +0200
> "Ramon" <spam@thanks.no> wrote:
> > 
> > I am *not* against keeping an eye on performance, by any means. Looking at Moore's law, however, and at the kind of computing power available nowadays even in smartphones, not to mention 8- and 12-core PCs, I feel that the importance of performance is way overestimated (possibly following a formerly justified tradition).
> > 
> 
> Even if we assume Moore's law is as alive and well as ever, a related note is that software tends to expand to fill the available computational power. When I can get slowdown in a text-entry box on a 64-bit multi-core, I know that hardware and Moore's law, practically speaking, have very little effect on real performance. At this point, it's code that affects performance far more than anything else. When we hail the great performance of modern web-as-a-platform by the fact that it allows an i7 or some such to run Quake as well as a Pentium 1 or 2 did, then we know Moore's law effectively counts for squat - performance is no longer about hardware, it's about not writing inefficient software.

I've often heard the argument that inefficiencies in code are OK, because you can just "ask the customer to upgrade to better hardware", and "nobody runs a 386 anymore". From a business POV, that's a profitable outlook -- if you're the one producing the hardware, inefficient software is an incentive for the customer to pay you more money for faster hardware to run it. Conversely, if your software runs *too* well, customers have no motivation to buy new hardware.

This sometimes goes to ludicrous extremes, where an O(n^2) algorithm is justified because "the customer can just upgrade to better hardware" or "next year's CPU will be able to handle this no problem" -- until they realize that when n is large (e.g., the customer says "oh, I'm running your software with about n=8000"), doubling the CPU speed every year just ain't gonna cut it; you'd be waiting a good many years before your software becomes usable again.
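
To put rough numbers on that (a back-of-the-envelope sketch in D; the figures are purely hypothetical):

    import std.math : ceil, log2;
    import std.stdio;

    void main()
    {
        // Hypothetical scenario: the data set grows from n=1000 to n=8000,
        // so an O(n^2) algorithm now does 64x the work. How many yearly
        // CPU-speed doublings does it take just to break even?
        immutable double n0 = 1_000, n1 = 8_000;
        immutable double workGrowth = (n1 * n1) / (n0 * n0);     // 64x
        writefln("Work grew %.0fx; that's %.0f doublings (years) to catch up.",
                 workGrowth, ceil(log2(workGrowth)));            // 6 doublings
    }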


> Now I'm certainly not saying that we should try to wring every last drop of performance out of every place where it doesn't even matter (like C++ tends to do). But software developers' belief in Moore's law has caused many of them to inadvertently cancel out, or even reverse, the hardware speedups with code inefficiencies (which are *easily* compoundable, and can and *do* exceed the 3x slowdown you claimed in another post was unrealistic) - and, as JS-heavy web apps prove, they haven't even gotten considerably more reliable as a result (Not that JS is a good example of a reliability-oriented language - but a lot of people certainly seem to think it is).

Heh. JS? reliable? in the same sentence? Heh.

On the flip side, though, it's true that the performance-conscious crowd among programmers has a tendency toward premature optimization, producing unmaintainable code in the process. I used to be one of them, so I know. :) A profiler is absolutely essential for identifying where the real bottlenecks are. But once a bottleneck is identified, sometimes there's no way to make it faster except by going low-level and writing it in a systems programming language. Like D. ;-)

And sometimes, there *is* no single bottleneck that you can address; you just need the code to be closer to hardware *in general* in order to bridge that last 10% performance gap to reach your target. All those convenient little indirections and virtual method lookups do add up.
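
Just to illustrate the point (a minimal sketch, not a rigorous benchmark; the class and method names are made up), D makes that kind of indirection easy to measure:

    import std.datetime.stopwatch : benchmark;
    import std.stdio;

    class Counter                       // class methods are virtual by default
    {
        int bump(int x) { return x + 1; }
    }

    class LoudCounter : Counter
    {
        override int bump(int x) { return x + 1; }
    }

    class DirectCounter
    {
        final int bump(int x) { return x + 1; }  // final: no vtable lookup
    }

    void main()
    {
        Counter v = new LoudCounter;    // calls go through the vtable
        auto d = new DirectCounter;
        int sink;

        // Each run makes one million calls through each kind of dispatch.
        auto times = benchmark!(
            { foreach (i; 0 .. 1_000_000) sink += v.bump(i); },
            { foreach (i; 0 .. 1_000_000) sink += d.bump(i); }
        )(100);
        writeln("virtual: ", times[0], "  direct: ", times[1], " (", sink, ")");
    }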


T

-- 
It is widely believed that reinventing the wheel is a waste of time; but I disagree: without wheel reinventers, we would still be stuck with wooden horse-cart wheels.
August 22, 2013
On Thursday, 22 August 2013 at 19:28:42 UTC, Nick Sabalausky wrote:
> On Wed, 21 Aug 2013 18:50:35 +0200
> "Ramon" <spam@thanks.no> wrote:
>> 
>> I am *not* against keeping an eye on performance, by any means. Looking at Moore's law, however, and at the kind of computing power available nowadays even in smartphones, not to mention 8- and 12-core PCs, I feel that the importance of performance is way overestimated (possibly following a formerly justified tradition).
>> 
>
> Even if we assume Moore's law is as alive and well as ever, a related
> note is that software tends to expand to fill the available
> computational power. When I can get slowdown in a text-entry box on a
> 64-bit multi-core, I know that hardware and Moore's law, practically
> speaking, have very little effect on real performance. At this point,
> it's code that affects performance far more than anything else. When we
> hail the great performance of modern web-as-a-platform by the fact that
> it allows an i7 or some such to run Quake as well as a Pentium 1 or 2
> did, then we know Moore's law effectively counts for squat -
> performance is no longer about hardware, it's about not writing
> inefficient software.
>
> Now I'm certainly not saying that we should try to wring every last
> drop of performance out of every place where it doesn't even matter
> (like C++ tends to do). But software developers' belief in Moore's law
> has caused many of them to inadvertently cancel out, or even reverse,
> the hardware speedups with code inefficiencies (which are *easily*
> compoundable, and can and *do* exceed the 3x slowdown you claimed in
> another post was unrealistic) - and, as JS-heavy web apps prove, they
> haven't even gotten considerably more reliable as a result (Not that JS
> is a good example of a reliability-oriented language - but a lot of
> people certainly seem to think it is).

I agree. However, I feel we should differentiate:
On one hand we have "Do not waste 3 bytes or 2 cycles!". I don't think that is the answer, or at least not an adequate one, to doubts about the eternal validity of Moore's law.
On the other hand we have what is commonly referred to as "code bloat". *That's* the point where a diet seems promising and reasonable. And btw., there we are talking megabytes instead of 2 or 5 bytes.

Now, while many look for the culprits in the hobby-programmer corner, I'm convinced the real culprits are graphics/GUI and the merciless diktat of marketing.
The latter because marketing *wants* feature-hungry customers and doesn't give developers the time needed to *properly* implement, maintain, and repair those features.
Often connected to this are the graphics guys (connected because graphics is what sells; customers rarely pay for optimized algorithm implementations deep down in the code).

Probably making myself new enemies, I dare to say that GUI, colourful, and generally graphical code is the area of lowest code quality. Simplifying somewhat and being blunt, I'd state: chances are your server will hum along for years without problems; if anything with a GUI runs for some hours without crashing and/or filling up stderr, you can consider yourself lucky.
Not meaning to point fingers, but just these days I happened to stumble over a new GUI project. Guess what? They had a colourful presentation with lots of vanity, pipe dreams and, of course, colours. That matches quite well what I have experienced. GUI/graphical/colour stuff is probably the only area where people seriously do design with PowerPoint. I guess that's how they tick.

You can just as well look at different developers. Guys developing for an embedded system are often hesitant to even use an RTOS. Your average non-GUI developer, say a server developer, will use some libraries, but he will ponder their advantages, size, dependencies, and quality. Now enter the graphics world. John, creating some app, say, to collect, show, and sort photos, will happily and very generously use whatever library sounds remotely useful and doesn't run away fast enough.

Results? An app on a microcontroller that, say, takes care of building management in some 10K. A server in some 100K. And the funny photo app at 12MB plus another 70MB of libraries/dependencies. The embedded system will run forever and be forgotten, the server will be rebooted every other year, and the photo app will crash twice a day.
August 22, 2013
On Thu, Aug 22, 2013 at 10:10:36PM +0200, Ramon wrote: [...]
> Probably making myself new enemies, I dare to say that GUI,
> colourful, and generally graphical code is the area of lowest
> code quality. Simplifying somewhat and being blunt, I'd state:
> chances are your server will hum along for years without
> problems; if anything with a GUI runs for some hours without
> crashing and/or filling up stderr, you can consider yourself
> lucky.
> Not meaning to point fingers, but just these days I happened to
> stumble over a new GUI project. Guess what? They had a colourful
> presentation with lots of vanity, pipe dreams and, of course,
> colours. That matches quite well what I have experienced.
> GUI/graphical/colour stuff is probably the only area where people
> seriously do design with PowerPoint. I guess that's how they tick.

This is also my experience. :) And I don't mean to diss anyone working with GUI code either, but it's true that in the commercial projects that I'm involved in, the GUI component is where the code has rather poor quality. So poor, in fact, that I dread having to look at it at all -- I try to fix the problem in the low-level modules if at all possible, rather than spend 5 days trying to follow the spaghetti code in the GUI module.

(Or rather, lasagna code -- some time ago they ditched the old spaghetti-code source base, and rewrote the whole thing from ground up using a class hierarchy -- I suppose in the hope that the code would be cleaner that way. Well, the spaghetti is gone, but now the code that does the real work is buried so deeply under who knows how many layers of abstractions, most of which are not properly designed and very leaky, that a single method call can literally do *anything*. The only reliable way to know what it actually does is to set a breakpoint in the debugger, because it has been overloaded everywhere in the most non-obvious places and nobody knows where the call will actually end up.)


> You can just as well look at different developers. Guys developing for an embedded system are often hesitant to even use an RTOS. Your average non-GUI developer, say a server developer, will use some libraries, but he will ponder their advantages, size, dependencies, and quality. Now enter the graphics world. John, creating some app, say, to collect, show, and sort photos, will happily and very generously use whatever library sounds remotely useful and doesn't run away fast enough.
> 
> Results? An app on a microcontroller that, say, takes care of building management in some 10K. A server in some 100K. And the funny photo app at 12MB plus another 70MB of libraries/dependencies. The embedded system will run forever and be forgotten, the server will be rebooted every other year, and the photo app will crash twice a day.

LOL... totally sums up my sentiments w.r.t. GUI-dependent apps. :)

I saw through this façade decades ago when Windows 3.1 first came out, and I've hated GUI-based OSes ever since. I stuck to DOS as long as I could through win95 and win98, and then I learned about Linux and I jumped ship and never looked back. But X11 isn't that much better... there are some pretty bloated X11 apps that crash twice a day, too. Sometimes twice an hour. After repeated experiences like that, I decided that the CLI is still the most reliable, and far more expressive to begin with. CLI-based apps are generally far more stable, require far fewer resources, and are *scriptable* and composable, something that GUI apps could never do (or if they could, not very well). I concluded that the only times GUIs are appropriate are (1) when you're working with graphical data like image editing or visualization, and (2) for games. I found that (1) is actually doable with CLI tools like imagemagick, and I rarely do (2) anyway. So I dumped my mouse-based window manager for ratpoison, use my X11 as a glorified text terminal, and now I'm as happy as can be. :-P

Or rather, I *will* be happy as can be once I find a suitable replacement for a browser. Browsers are by far the most ridiculously resource-consuming beasts ever, given that all they do is to display some text and graphics and let you click on stuff. On my office PC, the browser is often the one cause of long compile times when its memory-hungry resource grabbing clashes with the linker trying to link (guess what?) the GUI module of the project. RAM-hungry, IO-bound browser + linker linking gigantic bloated object files of GUI module = 30-minute coffee break while I watch the equally RAM-hungry X server paint the screen pixel-by-pixel as the hard drive thrashes itself to death. :-P  This 30 minutes easily turns into 1 hour if I actually dare to run two browsers simultaneously (y'know, for debugging purposes -- what I would give to be rid of the responsibility of testing different browsers...).


T

-- 
Always remember that you are unique. Just like everybody else. -- despair.com
August 23, 2013
Walter Bright wrote:
> semantically identical

This would be equivalent to finding plagiarism, and would result in a semantic compression of a software base---and it seems to be computationally intractable unless severely restricted.

-manfred
August 23, 2013
On Thursday, 22 August 2013 at 23:59:59 UTC, H. S. Teoh wrote:
> Or rather, I *will* be happy as can be once I find a suitable
> replacement for a browser. Browsers are by far the most ridiculously
> resource-consuming beasts ever, given that all they do is to display
> some text and graphics and let you click on stuff.
>
> T

Pretty much describes my feelings too, although I've made my peace with them beasts and like to use xombrero. Although WebKit-based (translate: bloat), it's relatively(!) modest and keyboard-controllable.

I assume you know links2 and w3m, both text-mode browsers which support tables, frames, and even images. links2 (or was it elinks?) even supported JavaScript for some time.
You also might like that links by default is non-graphical and needs a command-line switch to go graphical.

R
August 23, 2013
On Fri, Aug 23, 2013 at 05:06:01AM +0200, Ramon wrote:
> On Thursday, 22 August 2013 at 23:59:59 UTC, H. S. Teoh wrote:
> >Or rather, I *will* be happy as can be once I find a suitable replacement for a browser. Browsers are by far the most ridiculously resource-consuming beasts ever, given that all they do is to display some text and graphics and let you click on stuff.
> >
> >T
> 
> Pretty much describes my feelings too, although I've made my peace with them beasts and like to use xombrero. Although WebKit-based (translate: bloat), it's relatively(!) modest and keyboard-controllable.

I'm installing it right now. Let's see if it lives up to its promise. ;-)

If it does, I'm ditching opera 12 (the last tolerable version; the latest version, opera 15, has lost everything that made opera opera, and I've no desire to stay with opera) and switching over. I'll keep firefox handy for when bloated features are required; there should be plenty of RAM left over if xombrero isn't as memory-hogging as opera can be. :-P


> I assume you know links2 and w3m, both textmode browsers which
> support tables, frames, and even images. links2 (or was it elinks?)
> even supported javascript for some time.
> You also might like that links by default is non-graphic and needs a
> commandline switch to go graphical.
[...]

I use elinks every now and then... I can't say I'm that impressed with its interface, to be honest. There are better ways of doing text-mode browser UIs. Plus, most sites look trashy in elinks because they're all designed with bloated GUIs in mind.

As for JS, nowadays I turn it off by default anyway, and only enable it when it's actually needed. Makes the web noticeably faster and, in many cases, more pleasant to use. (*cough*dlang.org*cough*)


T

-- 
Caffeine underflow. Brain dumped.
August 23, 2013
On 8/22/2013 7:52 PM, Manfred Nowak wrote:
> Walter Bright wrote:
>> semantically identical
>
> This would be equivalent to finding plagiarism, and would result in a
> semantic compression of a software base---and it seems to be
> computationally intractable unless severely restricted.

I don't think it would be that hard. The structure of the ASTs would need to match, and the types would have to match depending on the operation - for example, a + gives the same result for signed and unsigned types, whereas < does not.
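
A toy sketch of that idea (purely illustrative; nothing like DMD's actual data structures): two expression trees only count as "semantically identical" if their shapes match and sign-sensitive operations also agree on signedness.

    // Purely illustrative node type; a real compiler carries full type info.
    enum Op { add, less }

    struct Node
    {
        Op op;
        bool signedOperands;   // stands in for the operand types
        Node*[] kids;
    }

    bool sameSemantics(const Node* a, const Node* b)
    {
        if (a.op != b.op || a.kids.length != b.kids.length)
            return false;
        // '+' yields the same bits for signed and unsigned; '<' does not.
        if (a.op == Op.less && a.signedOperands != b.signedOperands)
            return false;
        foreach (i; 0 .. a.kids.length)
            if (!sameSemantics(a.kids[i], b.kids[i]))
                return false;
        return true;
    }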

August 23, 2013
On Thursday, 22 August 2013 at 10:34:58 UTC, John Colvin wrote:
> On Thursday, 22 August 2013 at 02:06:13 UTC, Tyler Jameson Little wrote:
>> - array operations (int[] a; int[]b; auto c = a * b;)
>>  - I don't think these are automagically SIMD'd, but there's always hope =D
>
> That isn't allowed. The memory for c must be pre-allocated, and the expression then becomes c[] = a[] * b[];

Oops, that was what I meant.
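
For reference, a minimal complete example of the form that works today (destination allocated up front, here with a GC allocation):

    import std.stdio;

    void main()
    {
        int[] a = [1, 2, 3];
        int[] b = [2, 2, 2];
        auto c = new int[](a.length);  // destination must be pre-allocated
        c[] = a[] * b[];               // element-wise multiply into c
        writeln(c);                    // [2, 4, 6]
    }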

> Is it SIMD'd?
>
> It depends. There is a whole load of hand-written assembler for simple-ish expressions on builtin types, on x86. x86_64 is only supported with 32bit integer types because I haven't finished writing the rest yet...
>
> However, I'm not inclined to do so at the moment as we need a complete overhaul of that system anyway as it's currently a monster*.  It needs to be re-implemented as a template instantiated by the compiler, using core.simd. Unfortunately it's not a priority for anyone right now AFAIK.

That's fine. I was under the impression that it didn't SIMD at all, and that SIMD only works if explicitly stated.

I assume this is something that can be done at runtime:

    int[] a = [1, 2, 3];
    int[] b = [2, 2, 2];
    auto c = a[] * b[]; // dynamically allocates on the stack; computes w/SIMD
    writeln(c); // prints [2, 4, 6]

I haven't yet needed this, but it would be really nice... btw, it seems D does not have dynamic stack allocation. I know C99 does (variable-length arrays), so I know this is technically possible. Is this something we could get? If so, I'll start a thread about it.
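
In the meantime, here's a sketch of what already works when the length is known at compile time; for a runtime length you'd have to fall back on something like core.stdc.stdlib.alloca or a heap allocation:

    import std.stdio;

    void main()
    {
        int[3] a = [1, 2, 3];
        int[3] b = [2, 2, 2];

        int[3] c;          // fixed-size array, lives on the stack
        c[] = a[] * b[];   // same element-wise syntax as with slices
        writeln(c);        // [2, 4, 6]
    }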

> *
> hand-written asm loops. If fully fleshed out there would be:
>   ((aligned + unaligned + legacy mmx) * (x86 + x64) + fallback loop)
>   * number of supported expressions * number of different types
> of them. Then there's unrolling considerations. See druntime/src/rt/arrayInt.d

August 23, 2013
On Thursday, 22 August 2013 at 23:59:59 UTC, H. S. Teoh wrote:
> On Thu, Aug 22, 2013 at 10:10:36PM +0200, Ramon wrote:
> [...]
>> Probably making myself new enemies, I dare to say that GUI, colourful,
>> and generally graphical code is the area of lowest code quality.

All areas are bad, given the way software projects are managed.

The consulting projects I work on are for Fortune 500 companies,
always with at least three development sites and some amount of
off-shoring work.

GUI, embedded, server, database, it doesn't matter. All code is
crap given the amount of time, money and developer quality assigned
to the projects.

Usually the top developers in the teams try to save the code, but
there is only so much one can do when the ratio between the two classes
of developers is skewed so heavily as a way to make the projects profitable.

So the few heroes who try to fix the situation at the beginning of
each project eventually give up around the middle of it.

The customers don't care as long as the software works as intended.

> [...]
> LOL... totally sums up my sentiments w.r.t. GUI-dependent apps. :)
>
> I saw through this façade decades ago when Windows 3.1 first came out,
> and I've hated GUI-based OSes ever since. I stuck to DOS as long as I
> could through win95 and win98, and then I learned about Linux and I
> jumped ship and never looked back. But X11 isn't that much better...
> there are some pretty bloated X11 apps that crash twice a day, too.

Funny, I have a different experience.

Before replacing my ZX Spectrum with a PC, I already knew the Amiga and Atari ST systems, and the IDEs on those environments as well.

So I always favored GUI environments over the CLI. For me, personally, the
CLI is good for system administration, or for programming-related tasks that can benefit from the usual set of tricks with commands and pipes.

For everything else, there's nothing like keyboard+mouse and a nice GUI environment.

Personal opinion; to each their own.

--
Paulo