May 22, 2019
On Wed, May 22, 2019 at 3:33 PM H. S. Teoh via Digitalmars-d-announce <digitalmars-d-announce@puremagic.com> wrote:
>
> On Wed, May 22, 2019 at 02:18:58PM -0700, Manu via Digitalmars-d-announce wrote:
> > On Wed, May 22, 2019 at 10:20 AM Ola Fosheim Grøstad via Digitalmars-d-announce <digitalmars-d-announce@puremagic.com> wrote:
> [...]
> > > But you shouldn't design a UI framework like a game engine.
> > >
> > > Especially not if you also want to run on embedded devices addressing pixels over I2C.
> >
> > I couldn't possibly agree less; I think cool kids would design literally all computer software like a game engine, if they generally cared about fluid experience, perf, and battery life.
> [...]
>
> Wait, wha...?!  Write game-engine-like code if you care about *battery life*??  I mean... fluid experience, sure, perf, OK, but *battery life*?!  Unless I've been living in the wrong universe all this time, that's gotta be the most incredible statement ever.  I've yet to see a fluid, high-perf game engine *not* drain my battery like there's no tomorrow, and now you're telling me that I have to write code like a game engine in order to extend battery life?

Yes. Efficiency == battery life. Game engines tend to be the most
efficient software written these days.
You don't have to run applications at an unbounded rate. I mean, games
will run as fast as possible maximising device resources, but assuming
it's not a game, then you only execute as much as required rather than
trying to produce frames at the highest rate possible. Realtime
software is responding to constantly changing simulation, but non-game
software tends to only respond to input-driven entropy; if entropy
rate is low, then exec-to-sleeping ratio heavily biases towards
sleeping.
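To make that concrete, here is a minimal sketch (Python, with illustrative names) of an input-driven loop: it blocks on a queue, so between events the thread sleeps, and the exec-to-sleep ratio is set entirely by the input rate rather than by a frame clock:

```python
import queue

STOP = object()  # sentinel to shut the loop down

def input_driven_loop(events, handle):
    """Process events as they arrive; block (i.e. sleep) whenever idle."""
    handled = 0
    while True:
        ev = events.get()   # blocking get: the thread sleeps until input arrives
        if ev is STOP:
            return handled
        handle(ev)          # wake, do the minimum work required, sleep again
        handled += 1
```

A game loop would instead execute once per frame regardless; here, zero input means essentially zero executed cycles beyond the blocked wait.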

If you have a transformation to make, and you can do it in 1ms, or 100us, then you burn 10 times less energy doing it in 100us.
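As a simplified back-of-envelope model (it ignores frequency/voltage scaling and wake-up costs): energy burned while awake is roughly active power times active time, so finishing the same work ten times faster and sleeping the rest burns roughly ten times less active energy:

```python
def active_energy_joules(active_power_w, work_seconds):
    """Simplified race-to-sleep model: E = P * t while the CPU is awake."""
    return active_power_w * work_seconds

slow = active_energy_joules(5.0, 1e-3)    # the 1 ms version of the transform
fast = active_energy_joules(5.0, 100e-6)  # the 100 us version: 10x cheaper
```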

> I think I need to sit down.

If you say so :)

May 22, 2019
On Wed, May 22, 2019 at 3:40 PM Ola Fosheim Grøstad via Digitalmars-d-announce <digitalmars-d-announce@puremagic.com> wrote:
>
> On Wednesday, 22 May 2019 at 21:18:58 UTC, Manu wrote:
> > I couldn't possibly agree less; I think cool kids would design
> > literally all computer software like a game engine, if they
> > generally
> > cared about fluid experience, perf, and battery life.
>
> A game engine is designed for full redraw on every frame.

I mean, you don't need to *draw* anything... it's really just a style
of software design that lends itself to efficiency.
Our servers don't draw anything!

> He said he wanted to draw pixel by pixel and only update pixels that change. I guess this would be useful on a slow I2C serial bus. It is also useful for X-Windows. Or any other scenario where you transmit graphics over a wire.
>
> Games aren't really relevant in those two scenarios, but I don't know what the framework is aiming for either.

Minimising wasted calculation is always relevant. If you don't change part of an image, then you'd better have the tech to skip rendering it (or skip transmitting it in this scenario), otherwise you're wasting resources like a boss ;)
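One standard way to skip that waste is dirty-region tracking. A minimal per-cell sketch (Python, illustrative names), assuming a double-buffered frame where only cells that changed since the last frame are transmitted over the wire:

```python
def dirty_cells(prev, curr):
    """Return (x, y, value) for every cell that changed since last frame."""
    changes = []
    for y, (prow, crow) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(prow, crow)):
            if p != c:
                changes.append((x, y, c))
    return changes

def present(prev, curr, transmit):
    """Send only the dirty cells over the (slow) bus, then swap buffers."""
    for change in dirty_cells(prev, curr):
        transmit(change)
    return [row[:] for row in curr]   # curr becomes the new prev
```

An unchanged frame costs zero transmissions; a real renderer would coarsen this to dirty rectangles rather than compare per pixel.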

> > There's a reason games can simulate a rich world full of dynamic data and produce hundreds of frames a second, is
>
> Yes, it is because they cut corners and make good use of special cases... The cool kids in the demo-scene even more so. That does not make them good examples to follow for people who care about accuracy and correctness. But I don't know what the goal of this GUI framework is.

I don't think you know what you're talking about.
I don't think we 'cut corners' (I'm not sure what that even means)...
we have data to process, and aim to maximise efficiency, that is all.
Architecture is carefully designed towards that goal; it changes your
patterns. You won't tend to have OO hierarchies and sparsely allocated
graphs, and you will naturally tend to arrange data in tables destined
for batch processing. These are key to software efficiency in general.
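As a toy illustration of that table arrangement (often called structure-of-arrays or data-oriented design; names here are illustrative):

```python
from array import array

class Particles:
    """Table layout (structure-of-arrays): each field is a dense array,
    so a batch update streams through contiguous memory instead of
    chasing pointers through a sparsely allocated object graph."""
    def __init__(self, n):
        self.x  = array('f', [0.0] * n)
        self.vx = array('f', [0.0] * n)

    def integrate(self, dt):
        x, vx = self.x, self.vx
        for i in range(len(x)):       # one tight loop over one table
            x[i] += vx[i] * dt
```

An OO version would scatter each particle behind its own heap object; the table keeps the whole batch update in one cache-friendly loop.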

> So could you make good use of a GPU, even in the early stages in this case? Yes. If you keep it as a separate stage so that you have no dependencies to the object hierarchy.

'Object hierarchy' is precisely where it tends to go wrong. There are a million ways to approach this problem space; some are naturally much more efficient, some rather follow design pattern books and propagate ideas taught in university to kids.

> I would personally
> have done it in two passes for a prototype. Basically translating
> the object hierarchy into geometric data every frame then use a
> GPU to take that and push it to the screen. Not very efficient,
> perhaps, but good enough to get 60FPS with max flexibility.

Sure, maybe that's a reasonable design. Maybe you can go a step further and transform your arrangement away from a 'hierarchy'? Data structures are everything.

> Is that related to games, yes sure, or any other real-time simulation software. So not really game specific.

Right. I only advocate good software engineering!
But when I look around, the only field I can see that's doing a really
good job at scale is gamedev. Some libs here and there enclose some
tight worker code, but nothing much at the systemic level.

May 22, 2019
On Wed, May 22, 2019 at 05:11:06PM -0700, Manu via Digitalmars-d-announce wrote:
> On Wed, May 22, 2019 at 3:33 PM H. S. Teoh via Digitalmars-d-announce <digitalmars-d-announce@puremagic.com> wrote:
> >
> > On Wed, May 22, 2019 at 02:18:58PM -0700, Manu via Digitalmars-d-announce wrote:
[...]
> > > I couldn't possibly agree less; I think cool kids would design literally all computer software like a game engine, if they generally cared about fluid experience, perf, and battery life.
> > [...]
> >
> > Wait, wha...?!  Write game-engine-like code if you care about *battery life*??  I mean... fluid experience, sure, perf, OK, but *battery life*?!  Unless I've been living in the wrong universe all this time, that's gotta be the most incredible statement ever.  I've yet to see a fluid, high-perf game engine *not* drain my battery like there's no tomorrow, and now you're telling me that I have to write code like a game engine in order to extend battery life?
> 
> Yes. Efficiency == battery life. Game engines tend to be the most efficient software written these days.
>
> You don't have to run applications at an unbounded rate. I mean, games will run as fast as possible maximising device resources, but assuming it's not a game, then you only execute as much as required rather than trying to produce frames at the highest rate possible. Realtime software is responding to constantly changing simulation, but non-game software tends to only respond to input-driven entropy; if entropy rate is low, then exec-to-sleeping ratio heavily biases towards sleeping.
> 
> If you have a transformation to make, and you can do it in 1ms, or 100us, then you burn 10 times less energy doing it in 100us.
[...]

But isn't that just writing good code in general?  'cos when I think of game engines, I think of framerate maximization, which equals maximum battery drain because you're trying to do as much as possible in any given time interval.

Moreover, I've noticed a recent trend of software trying to emulate game-engine-like behaviour, e.g., smooth scrolling, animations, etc. In the old days, GUI apps primarily only responded to input events and that was it -- click once, the code triggered once, did its job, and went back to sleep.  These days, though, apps seem to be bent on animating *everything* and smoothing *everything*, so one click translates to umpteen 60fps animation frames / smooth-scrolling frames instead of only triggering once.

All of which *increases* battery drain rather than decrease it.

And this isn't just for mobile apps; even the pervasive desktop browser nowadays seems bent on eating up as much CPU, memory, and disk as physically possible -- everybody and their neighbour's dog wants ≥60fps hourglass / spinner animations and smooth scrolling, eating up GBs of memory, soaking up 99% CPU, and cluttering the disk with caches of useless paraphernalia like spinner animations.

Such is the result of trying to emulate game-engine-like behaviour. And now you're recommending that everyone should write code like a game engine!

(Once, just out of curiosity (and no small amount of frustration), I went into Firefox's about:config and turned off all smooth scrolling, animation, etc., settings.  The web suddenly sped up by at least an order of magnitude, probably more. Down with 60fps GUIs, I say.  Unless you're making a game, you don't *need* 60fps. It's squandering resources for trivialities where we should be leaving those extra CPU cycles for actual, useful work instead, or *actually* saving battery life by not trying to make everything emulate a ≥60fps game engine.)


T

-- 
Give me some fresh salted fish, please.
May 22, 2019
On Wed, May 22, 2019 at 5:34 PM H. S. Teoh via Digitalmars-d-announce <digitalmars-d-announce@puremagic.com> wrote:
>
> On Wed, May 22, 2019 at 05:11:06PM -0700, Manu via Digitalmars-d-announce wrote:
> > On Wed, May 22, 2019 at 3:33 PM H. S. Teoh via Digitalmars-d-announce <digitalmars-d-announce@puremagic.com> wrote:
> > >
> > > On Wed, May 22, 2019 at 02:18:58PM -0700, Manu via Digitalmars-d-announce wrote:
> [...]
> > > > I couldn't possibly agree less; I think cool kids would design literally all computer software like a game engine, if they generally cared about fluid experience, perf, and battery life.
> > > [...]
> > >
> > > Wait, wha...?!  Write game-engine-like code if you care about *battery life*??  I mean... fluid experience, sure, perf, OK, but *battery life*?!  Unless I've been living in the wrong universe all this time, that's gotta be the most incredible statement ever.  I've yet to see a fluid, high-perf game engine *not* drain my battery like there's no tomorrow, and now you're telling me that I have to write code like a game engine in order to extend battery life?
> >
> > Yes. Efficiency == battery life. Game engines tend to be the most efficient software written these days.
> >
> > You don't have to run applications at an unbounded rate. I mean, games will run as fast as possible maximising device resources, but assuming it's not a game, then you only execute as much as required rather than trying to produce frames at the highest rate possible. Realtime software is responding to constantly changing simulation, but non-game software tends to only respond to input-driven entropy; if entropy rate is low, then exec-to-sleeping ratio heavily biases towards sleeping.
> >
> > If you have a transformation to make, and you can do it in 1ms, or 100us, then you burn 10 times less energy doing it in 100us.
> [...]
>
> But isn't that just writing good code in general?

Yes, but I can't point at many industries that systemically do that.

>  'cos when I think of
> game engines, I think of framerate maximization, which equals maximum
> battery drain because you're trying to do as much as possible in any
> given time interval.

And how do you do "as much as possible"? I mean, if you write some
code, and then push data through the pipe until resources are at
100%... where do you go from there?
... make the pipeline more efficient.
Hardware isn't delivering much improvement these days; we have had to
get MUCH better at efficiency in the last few years to maintain a
competitive advantage.
I don't know any other industry so laser focused on raising the bar on
that front in a hyper-competitive way. We don't write code like we
used to... we're all doing radically different shit these days.


> Moreover, I've noticed a recent trend of software trying to emulate game-engine-like behaviour, e.g., smooth scrolling, animations, etc. In the old days, GUI apps primarily only responded to input events and that was it -- click once, the code triggered once, did its job, and went back to sleep.  These days, though, apps seem to be bent on animating *everything* and smoothing *everything*, so one click translates to umpteen 60fps animation frames / smooth-scrolling frames instead of only triggering once.

That's a different discussion. I don't actually endorse this. I'm a fan of instantaneous response from my productivity software... 'Instantaneous' being key, and running without delay means NOT waiting many cycles of the event pump to flow typical modern event-driven code through some complex latent machine to finally produce an output.

> All of which *increases* battery drain rather than decrease it.

I'm with you. Don't unnecessarily animate!

> And this isn't just for mobile apps; even the pervasive desktop browser nowadays seems bent on eating up as much CPU, memory, and disk as physically possible -- everybody and their neighbour's dog wants ≥60fps hourglass / spinner animations and smooth scrolling, eating up GBs of memory, soaking up 99% CPU, and cluttering the disk with caches of useless paraphernalia like spinner animations.

You're conflating a lot of things here... running smooth and eating
GBs of memory are actually at odds with each other. If you try and do
both things, then you're almost certainly firmly engaged in gratuitous
systemic inefficiency.
I'm entirely against that, that's my whole point!

You should use as little memory as possible. I have no idea how a webpage eats as much memory as it does... that's a perfect example of the sort of terrible software engineering I'm against!

> Such is the result of trying to emulate game-engine-like behaviour.

No, there's ABSOLUTELY NOTHING in common between a webpage and a game
engine. As I see it, they are at polar ends of the spectrum.
Genuinely couldn't be further from each other in terms of software
engineering discipline!

> And now you're recommending that everyone should write code like a game engine!

Yes, precisely so the thing you just described will stop.

> (Once, just out of curiosity (and no small amount of frustration), I went into Firefox's about:config and turned off all smooth scrolling, animation, etc., settings.  The web suddenly sped up by at least an order of magnitude, probably more. Down with 60fps GUIs, I say.

You're placing your resentment in the wrong place.
My 8 MHz Amiga 500 ran 60 Hz GUIs without breaking a sweat... you're
completely misunderstanding the actual issue here.

> Unless you're making a game, you don't *need* 60fps.

Incorrect. My computer is around 100,000 times faster than my Amiga
500. We can have fluid execution.
We just need to stop writing software like fucking retards. The only
industry that I know of that knows how to do that at a systemic level
is gamedev.

> It's squandering resources
> for trivialities where we should be leaving those extra CPU cycles for
> actual, useful work instead, or *actually* saving battery life by not
> trying to make everything emulate a ≥60fps game engine.)

You've missed the point completely.
You speak of systemic waste; I'm talking about state-of-the-art
efficiency as the baseline expectation, and nothing less is acceptable.

May 23, 2019
On Thursday, 23 May 2019 at 00:23:50 UTC, Manu wrote:
> it's really just a style
> of software design that lends itself to efficiency.
> Our servers don't draw anything!

Then it isn't specific to games, or particularly relevant to rendering. Might as well talk about people writing search engines or machine learning code.

> Minimising wasted calculation is always relevant. If you don't change part of an image, then you'd better have the tech to skip rendering it (or skip transmitting it in this scenario), otherwise you're wasting resources like a boss ;)

Well, it all depends on your priorities. The core difference is that (at least for the desktop) a game rendering engine can focus on 0% overhead for the most demanding scenes, while 40% overhead on light scenes has no impact on the game experience. Granted, for mobile engines battery life might change that equation, though I am not sure if gamers would notice a 20% difference in battery life...

For a desktop application you might instead decide to favour 50% GPU overhead across the board as a trade off for a more flexible API that saves application programmer hours and freeing up CPU time to processing application data. (If your application only uses 10% of the GPU, then going to 15% is a low price to pay.)


> I don't think you know what you're talking about.

Let's avoid the ad hominems… I know what I am talking about, but perhaps I don't know what you are talking about? I thought you were talking about the rendering engines used in games, not software engineering as a discipline.


> I don't think we 'cut corners' (I'm not sure what that even means)...

What it means is that in a game you have a negotiation between the application design requirements and the technology requirements. You can change the game design to take advantage of the technology and change the technology to accommodate the game design. Visual quality only matters as seen from the particular vantage points that the gamer will take in that particular game or type of game.

When creating a generic GUI API you cannot really assume too much. Let's say you added ray-traced widgets. It would make little sense to say that you can only have 10 ray-traced widgets on display at the same time for a GUI API. In a game that is completely acceptable. You'd rather have the ability to put some extra impressive visuals on screen in a limited fashion where it matters the most.

So the priorities are more like in film production. You can pay a price in terms of technological special casing to create a more intense emotional experience. You can limit your focus to what the user is supposed to do (both end user and application programmer) and give priority to "emotional impact". And you also have the ability to train a limited set of workers (programmers) to make good use of the novelty of your technology.

When dealing with unknown application programmers writing unknown applications you have to be more conservative.


> patterns. You won't tend to have OO hierarchies and sparsely allocated
> graphs, and you will naturally tend to arrange data in tables destined
> for batch processing. These are key to software efficiency in general.

If you are talking about something that isn't available to the application programmer then that is fine. For a GUI framework the most important thing after providing a decent UI experience is to make the application programmer's life easier and more intuitive. Basically, your goal is to save programmer hours and make it easy to change direction due to changing requirements.  If OO hierarchies are more intuitive to the typical application programmer, then that is what you should use at the API level.

If you write your own internal GUI framework then you have a different trade-off; you might put more of a burden on the application developer in order to make better overall use of your workforce. Or you might limit the scope of the GUI framework to get better end-user results.


> 'Object hierarchy' is precisely where it tends to go wrong. There are a million ways to approach this problem space; some are naturally much more efficient, some rather follow design pattern books and propagate ideas taught in university to kids.

You presume that efficiency is a problem. That's not necessarily the case. If your framework is for embedded LCDs then you are perhaps limited to under 500 objects on screen anyway.

I also know that Open Inventor (from SGI) and VRML made people more productive. It allowed people to create experiences that they otherwise would not have been able to, both in industrial prototypes and artistic works.

Overhead isn't necessarily bad. A design with some overhead might cut the costs enough for the application developer to make a project feasible. Or even make it accessible for tinkering. You see the same thing with the Processing language.


> Sure, maybe that's a reasonable design. Maybe you can go a step further and transform your arrangement away from a 'hierarchy'? Data structures are everything.

In the early stages it is most important to have the freedom to change things, but with an idea of where you could insert spatial data-structures. Having a plan for where you can place accelerating data-structures and algorithms does matter, of course.

But you don't need to start there. So I think he is doing well by keeping rendering simple in the first iterations.


> Right. I only advocate good software engineering!
> But when I look around, the only field I can see that's doing a really good job at scale is gamedev. Some libs here and there enclose some tight worker code, but nothing much at the systemic level.

It is a bit problematic for generic libraries to use worker code (I assume you mean actors running on separate threads), as you put some serious requirements on the architecture of the application. More actor-oriented languages and run-times could make it pleasant though, so maybe it is an infrastructure issue where programming languages need to evolve. But you could do it for a GUI framework, sure.

Although I think the rendering structure used in browser graphical backends is closer to what people would want for a UI than a typical game rendering engine. Especially the styling engine.

May 23, 2019
On 2019-05-22 17:01:39 +0000, Manu said:

> The worst case defines your application performance, and grids are pretty normal.

That's true, but responsive grids are pretty unusual.

> You can make a UI run realtime ;)

I know, that's what we strive for.

> I mean, there are video games that render a complete screen full of zillions of high-detail things every frame!

Show me a game that renders this with a CPU-only approach into a memory buffer, no GPU allowed. Totally different use-case.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster

May 23, 2019
On Thursday, 23 May 2019 at 06:07:53 UTC, Robert M. Münch wrote:
> On 2019-05-22 17:01:39 +0000, Manu said:
>> I mean, there are video games that render a complete screen full of zillions of high-detail things every frame!
>
> Show me a game that renders this with a CPU only approach into a memory buffer, no GPU allowed. Total different use-case.

I wrote a very flexible generic scanline prototype renderer in the 90s that rendered 1024x768 using 11 bits each for red and green and 10 for blue and hardcoded alpha blending. It provided interactive framerates on the lower end for a large number of circular objects covering the screen, but it took almost all the CPU. It even used callbacks for flexibility and X-Windows with shared-memory, so it was written for flexibility, not very high performance.

Today this very simple renderer would probably run at 400-4000FPS on the CPU rendering to RAM.

So, it isn't difficult to write a decent performance scanline renderer today. You just have to think a lot about the specifics of the CPU pipeline and CPU caching. That's all. A tile based one is more work, but will easily perform way beyond any requirement.

I'm not saying you should do it. It would be CPU specific and seems like a waste of time, but the basics are really very simple. Just use a very fast bin sort for the left and right edge in the x-direction, then use a sorting algorithm that is fast for almost-sorted-lists for the z-direction (to handle alpha blending).

Basically brute force, no fancy datastructure. Brute force can perform decently if you use algorithms that tend to be linear on average.
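A minimal sketch of those two ingredients (Python, illustrative names): an O(n) bin sort dropping spans into per-scanline buckets, plus an insertion sort for the z-direction, which is near-linear on almost-sorted input:

```python
def bin_spans_by_line(spans, height):
    """Bin sort: drop each span (y, x0, x1, z) into its scanline bucket, O(n)."""
    lines = [[] for _ in range(height)]
    for span in spans:
        y = span[0]
        if 0 <= y < height:
            lines[y].append(span)
    return lines

def insertion_sort_by_z(spans):
    """Sort spans back-to-front by z for alpha blending. Insertion sort is
    near-linear when the input is almost sorted, as frame-to-frame z order
    usually is."""
    for i in range(1, len(spans)):
        cur = spans[i]
        j = i - 1
        while j >= 0 and spans[j][3] > cur[3]:
            spans[j + 1] = spans[j]
            j -= 1
        spans[j + 1] = cur
    return spans
```

The span layout and bucket granularity here are assumptions for illustration; the point is that both passes avoid any fancy data structures while staying close to linear on typical frames.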

May 23, 2019
On Tuesday, 21 May 2019 at 14:04:29 UTC, Robert M. Münch wrote:

[...]

> Here is a new screencast: https://www.dropbox.com/s/ywywr7dp5v8rfoz/Bildschirmaufnahme%202019-05-21%20um%2015.20.59.mov?dl=0
>
>
> I optimized the whole thing a bit, so now a complete screen with layouting, hittesting, drawing takes about 28ms, that's 8x faster than before. Drawing is still around 10ms, layouting around 16ms, spatial index handling 2ms.

Awesome. Compared to the video you posted some days ago there is also almost no visible aliasing. Do you plan to create a web browser based on your framework?
May 23, 2019
On Wednesday, 22 May 2019 at 21:18:58 UTC, Manu wrote:
> People really should look at games for how to write good
> software in general.

While I agree for some AAA games (and I'm sure your employer can afford excellent development practices), I'd like to counter that point for balance: for good practice of stability, threading and error reporting, people should look at high-availability, long-lived server software. A single memory leak will be a problem there, as will a single deadlock.

Games are also particular software in that they simulate worlds with large numbers of entities, which exercises the limits of OO. That's a bit specific to games! (and possibly UI)

There are also areas where performance matters immensely, such as HFT and video, where people spend more time than in games optimizing the last percent. Arguably, HFT is maybe the one domain that goes the furthest with performance.

If you want an example of how (sometimes) strangely insular game development can be, maybe look at the Jai language. It assumes game development is a gold standard for software and software needs, without ever proving that point.
May 23, 2019
On Thursday, 23 May 2019 at 01:22:20 UTC, Manu wrote:
> That's a different discussion. I don't actually endorse this. I'm a fan of instantaneous response from my productivity software... 'Instantaneous' being key, and running without delay means NOT waiting many cycles of the event pump to flow typical modern event-driven code through some complex latent machine to finally produce an output.

Yes, you are of course right if the effort is spent where it matters. In my mind CygnusED (CED) on the Amiga is STILL the smoothest editor I have ever used, and that was because it used smooth hardware-assisted scrolling (Copper lists), so my eyes could regain focus very fast. I guess the phosphor on the screen helped too, because other editors that try to spin down a scroll gradually do not feel as good as CED did. *shrugs*

One could certainly come up with a better UI experience by combining a good understanding of visual perception with low level optimization and good use of hardware.

But that sounds like different project to me. One would then have to start with a good theoretical understanding of human perception, how the brain works and so on. Then see if you can pick up ideas from interactive software like games.

That would however lead to a new concept for user-interface design. Which would be interesting, for sure, but requires much more than coding up a UI framework.


> You should use as little memory as possible. I have no idea how a webpage eats as much memory as it does... that's a perfect example of the sort of terrible software engineering I'm against!

In Chrome each page runs in a separate process for security reasons, that's how. AFAIK.

Also, service workers are very useful, but it is probably tempting to let them grow large to get better responsiveness (from the network layer). Basically a proxy replicating the web server within the browser, so that you can use the website as an offline app.


> You're placing your resentment in the wrong place.
> My 8 MHz Amiga 500 ran 60 Hz GUIs without breaking a sweat...

But people also used the hardware almost directly though, you could install copper-lists even when using the OS with UI (in full screen mode).

In my mind the copper-list concept was always more impactful than the blitter. I'm not sure where they got the idea to expose it to ordinary applications, but it had a very real impact on the end user experience and what applications could do (e.g. drawing programs).