May 25, 2019
On Sunday, 19 May 2019 at 21:01:33 UTC, Robert M. Münch wrote:
> Hi, we are currently building up our new technology stack and, as part of this, creating a 2D GUI framework.

This entire thread is an embarrassment, and a perfect example of the kind of interaction that keeps professionals away from online communities such as this one.

It's been little more than an echo chamber of people being wrong, congratulating each other on being wrong, encouraging people to continue being wrong and shooting down anyone speaking sense with wrong facts and wrong opinions.

The amount of misinformation flying around in here would make <insert political regime of your own taste here> proud.

Let's just start with being blunt straight up: Congratulations, you've announced a GUI framework that can render a grid of squares less efficiently than Microsoft Excel.

So from there, I'm only going to highlight points that need to be thoroughly shot down.

> So this gives us 36 FPS, which is IMO pretty good for a desktop app target

Wrong. A 144Hz monitor, for example, gives you less than 7 milliseconds to provide a new frame. Break that down further. On Windows, the thread scheduler will give you 4 milliseconds before your thread is put to sleep, and that's if you're a foreground process; background processes only get 1 millisecond. So even for a standard 60Hz monitor, your worst case is that you need to provide a new frame in 1 millisecond.
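
Just to put numbers on those budgets, a quick D sketch (the arithmetic only; the scheduler quantums are as claimed above):

    import std.stdio;

    void main()
    {
        // Milliseconds available per frame at common refresh rates.
        foreach (hz; [60, 120, 144, 240])
            writefln("%3d Hz -> %.2f ms per frame", hz, 1000.0 / hz);
        // 144 Hz comes out at ~6.94 ms, hence "less than 7 milliseconds".
    }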

I currently have 15 programs and 60 browser tabs open. On a laptop. WPF can keep up. You can't.

> But you shouldn't design a UI framework like a game engine.

Wrong. Game engines excel at laying out high-fidelity data in sync with a monitor's default refresh rate. You're insane if you think a 2D interface shouldn't be done in a similar manner. Notice that Unity and Unreal implement their own WIMP frameworks across multiple platforms, designed them like a game engine, and keep them responsive.

And just like a UI framework, whatever the client is doing separate from the layout and rendering is *not* its responsibility.

> Write game-engine-like code if you care about *battery life*??

The core of a game engine aims to do everything as quickly as possible and then go to sleep as soon as possible. Everyone here is assuming a false equivalence between a game engine and the game systems with massive volumes of data that just plain take time to process.

> A game engine is designed for full redraw on every frame.

Wrong. A game engine is designed to render new frames when the viewpoint is dirty. Any engine that decouples simulation frame from monitor frame won't do a full redraw every simulation frame. A game engine will often include effects that get rendered at half of the target framerate to save time.

Your definition for "full redraw" is flawed and wrong.
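
To make "render when dirty" concrete, here's a minimal D sketch with hypothetical names (not any particular engine's API):

    bool sceneDirty = true;

    void updateSimulation(double dt)
    {
        // ... advance UI/game state; set sceneDirty = true only when
        // something visible actually changed ...
    }

    void render()
    {
        // ... full redraw of the framebuffer ...
    }

    void frame(double dt)
    {
        updateSimulation(dt);
        if (sceneDirty)
        {
            render();           // the expensive path runs only on dirty frames
            sceneDirty = false;
        }
        // otherwise: re-present the previous framebuffer and sleep to vsync
    }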

> cos when I think of game engines, I think of framerate maximization, which equals maximum battery drain because you're trying to do as much as possible in any given time interval.

Source: I've released a mobile game that lets you select battery options that basically result in 60Hz/30Hz/20Hz. You know all I did? I decoupled the renderer, ran the simulation 1/2/3 times, and rendered once. That suits burst processing, which is known to be very good for the battery.
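
In hedged sketch form, that decoupling looks something like this (hypothetical names; the simulation keeps ticking at 60Hz while presentation drops to 60/30/20Hz):

    enum double simDt = 1.0 / 60.0;  // simulation always ticks at 60Hz

    void simulate(double dt) { /* ... advance game state ... */ }
    void present() { /* ... render and flip once ... */ }

    // stepsPerPresent = 1, 2 or 3 gives 60Hz, 30Hz or 20Hz presentation.
    void frame(int stepsPerPresent)
    {
        foreach (_; 0 .. stepsPerPresent)
            simulate(simDt);
        present();   // burst the work, then let the CPU and GPU sleep
    }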

If you find a game engine that renders its UI every frame despite having no dirty element, you've found baby's first game UI.

> for good practice of stability, threading and error reporting, people should look at high-availability, long-lived server software. A single memory leak will be a problem there, as will a single deadlock.

Many games *already have* this requirement. There's plenty of knowledge within the industry about reducing server costs through optimisation.

> For instance, there is no spatial data structure that is inherently better or more efficient than all other spatial data structures.

Wrong. Three- and four-dimensional vectors. We have hardware registers to take advantage of them. Represent your object's transformation with an object comprising a translation, a quaternion rotation, and, if you're feeling nice to your users, a scale vector.

WPF does exactly this. In a roundabout way. But it's there.
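
A minimal sketch of that representation (an assumed field layout for illustration; WPF's actual types differ):

    struct Transform
    {
        float[3] translation = [0, 0, 0];    // position
        float[4] rotation    = [0, 0, 0, 1]; // quaternion (x, y, z, w), identity
        float[3] scale       = [1, 1, 1];    // optional non-uniform scale
    }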

> Well, what I meant by "cutting corners" it that games reach efficiency by narrowing down what they allow you to do.

Really. Do tell me more. Actually, don't, because whatever you say is going to be wrong and I'm not going to reply to it anyway. Hint: we provide more flexibility than your out-of-the-box WPF/GTK/etc. for whatever systems we ship.

> Browsers are actually doing quite well with simple 2D graphics today.

Browsers have been rendering that on GPU for years.

Which starts getting us into this point.

> I think CPU rendering has its merits and is underestimated a lot.

> In the 2D realm I don't see so much gain using a GPU over using a CPU.

So. On a 4K or higher desktop (Apple ships 5K monitors). Let's say you need to redraw every one of those 3840x2160 pixels at 60Hz. Let's just assume that by some miracle you've managed to get a pixel filled down to 20 cycles. But that's still 8,294,400 pixels. That's 16.6MHz for one frame. Almost a full GHz to keep it responsive at 60 frames per second. 2.4GHz for a 144Hz display.

So you're going to get one thread doing all that? Maybe vectorise it? And hope there's plenty of blank space so you can run the same algorithm on four contiguous pixels at a time. Hmmm. Oh, I know, multithread it! Parallel for each! Oh, well, now there's an L2 cache to worry about; we'll have to work on different chunks at different times and hope each chunk is roughly equal in cost, since any attempt to redistribute the load into the same cache area another thread is working on will result in constant cache flushes.

OOOOORRRRRRRRRR. Hey. Here's this hardware that executes tiny programs simultaneously. How many shader units does your hardware have? That many tiny programs. And its cache is set up to accept the results of those programs without massive flush penalties. And they're natively SIMD and can handle, say, multi-component RGB colours without breaking a sweat. You don't even have to worry about complicated sorting logic and pixel overwrites; the Z-buffer can handle it if you assign the depth of your UI element to the Z value.
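
As an illustrative sketch, giving each widget a depth the Z-buffer can resolve might look like this (a hypothetical vertex layout, not any shipping framework's):

    struct UIVertex
    {
        float x, y;   // position in screen space
        float z;      // widget stacking order, resolved by the Z-buffer
        float u, v;   // texture coordinates
        uint  rgba;   // packed colour
    }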

And if you *really* want to avoid driver issues with the pixel and vertex pipeline, just write compute shaders for everything and get hardware-independent results.

Oh, hey, wait a minute: Nick's dcompute could be exactly what you want if you're only doing this to show a UI framework can be written in D. Problem solved by doing what Manu suggested and *WORKING WITH COMMUNITY MEMBERS WHO ALREADY INTIMATELY UNDERSTAND THE PROBLEMS INVOLVED*.

---

Right. I'm done. This thread reeks of a "Year of the Linux desktop" mentality and I will likely never read it again, just for my sanity. I expect better from this community if it actually wants to see D used and not have the forums turn into Stack Overflow Lite.
May 25, 2019
On Saturday, 25 May 2019 at 23:23:31 UTC, Ethan wrote:
> So. On a 4K or higher desktop (Apple ships 5K monitors). Let's say you need to redraw every one of those 3840x2160 pixels at 60Hz. Let's just assume that by some miracle you've managed to get a pixel filled down to 20 cycles. But that's still 8,294,400 pixels. That's 16.6MHz for one frame. Almost a full GHz to keep it responsive at 60 frames per second. 2.4GHz for a 144Hz display.

I are math good. 8,294,400 * 20 cycles is 165.9MHz. Times 60 frames per second is 9.95GHz.

CPU rendering is not even remotely the future.
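
For the record, the corrected arithmetic as a quick D check:

    import std.stdio;

    void main()
    {
        enum ulong pixels         = 3840UL * 2160;  // 8,294,400
        enum ulong cyclesPerFrame = pixels * 20;    // ~165.9M cycles
        writefln("cycles per frame:      %s", cyclesPerFrame);
        writefln("clock needed @  60Hz: %5.2f GHz", cyclesPerFrame * 60.0 / 1e9);
        writefln("clock needed @ 144Hz: %5.2f GHz", cyclesPerFrame * 144.0 / 1e9);
    }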
May 26, 2019
On Saturday, 25 May 2019 at 23:23:31 UTC, Ethan wrote:
>> But you shouldn't design a UI framework like a game engine.
>
> Wrong. Game engines excel at laying out high-fidelity data in sync with a monitor's default refresh rate.

You are confusing the rendering engine with the UI API.


>> A game engine is designed for full redraw on every frame.
> Any engine that decouples simulation frame from monitor frame won't do a full redraw every simulation frame. A game engine will often include effects that get rendered at half of the target framerate to save time.

You still do a full redraw of the framebuffer. Full frame. Meaning not just tiny clip rectangles, as on X Windows.

> For instance, there is no spatial data structure that is inherently better or more efficient than all other spatial data structures.
>
> Wrong. Three- and four-dimensional vectors. We have hardware registers to take advantage of them. Represent your object's transformation with an object comprising a translation, a quaternion rotation, and if you're feeling nice to your users a scale vector.

Those are not spatial data structures. (Octrees, BSP trees, etc. are spatial data structures.)


> Really. Do tell me more. Actually, don't, because whatever you say is going to be wrong and I'm not going to reply to it anyway

Good. Drink less, sleep more.

May 26, 2019
On Saturday, 25 May 2019 at 23:23:31 UTC, Ethan wrote:
> On Sunday, 19 May 2019 at 21:01:33 UTC, Robert M. Münch wrote:
>>
>> Browsers are actually doing quite well with simple 2D graphics today.
>
> Browsers have been rendering that on GPU for years.

Just because (for example) Chrome supports GPU rendering doesn't mean every device it runs on does too. For example...

Open an SVG in your browser, take a screenshot and zoom in on an almost vertical / horizontal edge, e.g.:

https://upload.wikimedia.org/wikipedia/commons/f/fd/Ghostscript_Tiger.svg

Look for an almost vertical or almost horizontal line and check whether the antialiasing is stepped or smooth. GPU path rendering typically maxes out at 16x, while on the CPU you generally get 256x analytical coverage. So with the GPU you'll see more granularity in the antialiasing at the edges (runs of a few pixels, then a larger change), while with the CPU each pixel changes a small amount along the edge.

Chrome is still doing path rendering on the CPU for me. (I did make sure that the "use hardware acceleration when available" flag was set in the advanced settings.)





May 26, 2019
On Sunday, 26 May 2019 at 11:09:52 UTC, NaN wrote:
> Chrome is still doing path rendering on the CPU for me. (I did make sure that the "use hardware acceleration when available" flag was set in the advanced settings.)

*nods* Switching hardware acceleration on/off has very little impact on my machine, even for things like slide shows.

However, I suspect that Chrome gets basic hardware acceleration through the OS windowing system whether the setting is on or off.

May 26, 2019
On Saturday, 25 May 2019 at 23:23:31 UTC, Ethan wrote:
> Oh, hey, wait a minute: Nick's dcompute could be exactly what you want if you're only doing this to show a UI framework

FWIW, OpenCL is deprecated on OS X.

You should use Metal for everything.

GPU APIs are not very future-proof.

May 26, 2019
On 2019-05-25 23:23:31 +0000, Ethan said:

> Right. I'm done. This thread reeks of a "Year of the Linux desktop" mentality and I will likely never read it again, just for my sanity.

That's your best statement so far. Great move.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster

May 26, 2019
On Sun, May 26, 2019 at 4:10 AM NaN via Digitalmars-d-announce <digitalmars-d-announce@puremagic.com> wrote:
>
> On Saturday, 25 May 2019 at 23:23:31 UTC, Ethan wrote:
> > On Sunday, 19 May 2019 at 21:01:33 UTC, Robert M. Münch wrote:
> >>
> >> Browsers are actually doing quite well with simple 2D graphics today.
> >
> > Browsers have been rendering that on GPU for years.
>
> Just because (for example) Chrome supports GPU rendering doesn't mean every device it runs on does too. For example...
>
> Open an SVG in your browser, take a screenshot and zoom in on an almost vertical / horizontal edge, e.g.:
>
> https://upload.wikimedia.org/wikipedia/commons/f/fd/Ghostscript_Tiger.svg
>
> Look for an almost vertical or almost horizontal line and check whether the antialiasing is stepped or smooth. GPU path rendering typically maxes out at 16x, while on the CPU you generally get 256x analytical coverage.

What? ... this thread is bizarre.

Why would a high quality SVG renderer decide to limit itself to 16x AA? Are you suggesting that they use hardware super-sampling to render the SVG? Why would you use SSAA to render an SVG that way?
I can't speak for their implementation, which you can only speculate upon unless you read the source code... but I would, for each pixel, calculate the distance from the line and use that as the falloff value relative to the line weighting property.
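
A hedged sketch of that per-pixel falloff, in D (illustrative only, not any browser's actual implementation):

    import std.algorithm : clamp;
    import std.math : abs, sqrt;

    // Coverage fades linearly over one pixel around the stroke's half-width.
    float lineCoverage(float px, float py,   // pixel centre
                       float ax, float ay,   // line start
                       float bx, float by,   // line end
                       float halfWidth)
    {
        // Distance from the pixel centre to the infinite line through a-b.
        immutable dx = bx - ax, dy = by - ay;
        immutable len = sqrt(dx * dx + dy * dy);
        immutable dist = abs((px - ax) * dy - (py - ay) * dx) / len;

        // 1.0 fully inside the stroke, 0.0 a pixel outside, linear between.
        return clamp(halfWidth + 0.5f - dist, 0.0f, 1.0f);
    }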

How is the web browser's SVG renderer even relevant? I have absolutely no idea how this 'example' (or almost anything in this thread) could be tied to the point I made way back at the start before it went way off the rails. Just stop, it's killing me.

May 26, 2019
On Sunday, 26 May 2019 at 16:39:53 UTC, Manu wrote:
> How is the web browser's SVG renderer even relevant? I have absolutely no idea how this 'example' (or almost anything in this thread) could be tied to the point I made way back at the start before it went way off the rails. Just stop, it's killing me.

I don't think the discussion is about your idea that software engineering should be done like it is done in the games industry.

Path rendering on the GPU is a topic that has been covered relatively frequently in papers over the past decade, so… there is more than one approach.

Is the SVG renderer in the browser relevant? It depends. SVG is animated through CSS, so the browser must be able to redraw it on every frame. For some interfaces it certainly would be relevant, but I don't think Robert is aiming for that type of interface.

Anyway, for some interfaces, like VST plugins, you don't need very fancy options. Just blitting and a bit of realtime line drawing. But portability is desired, so JUCE appears to be popular. If Robert creates something simpler than JUCE, but with the same portability, then it could be very useful.


May 26, 2019
On Sunday, 26 May 2019 at 16:56:39 UTC, Ola Fosheim Grøstad wrote:
> Is the SVG renderer in the browser relevant? It depends. SVG is animated through CSS, so the browser must be able to redraw it on every frame. For some interfaces it certainly would be relevant, but I don't think Robert is aiming for that type of interface.

Anyway, Skia is available under a BSD license here:

https://skia.org/

I can't find anything on Ganesh or any other GPU backend, but maybe someone else has found something?

One could probably do worse than using the software renderer in Skia… but I don't know how difficult it is to hook it up.