November 29, 2019
On Thursday, 28 November 2019 at 20:46:45 UTC, Ethan wrote:
> On Thursday, 28 November 2019 at 19:37:47 UTC, Jab wrote:
>> Do you have any more information on the topic? I remember digging through Qt and there are sections that avoid the GPU altogether, as it was too inaccurate for the computation that was required. Can't recall exactly what it was.
>
> This would have been an accurate statement when GPUs were entirely fixed function. But then this little technology called "shaders" was introduced to consumer hardware in 2001.
>
> GPUs these days are little more than programmable number crunchers that work *REALLY FAST* in parallel.

It was Qt5, which is pretty recent, so no fixed-function pipeline is used.

That's kind of what surprised me when looking through Qt. Quite a bit of it is still done on the CPU, including things I wouldn't have expected. Which is why I was wondering if there was any more information on the topic.

IIRC GPUs are limited in what they can do in parallel, so if you only need to do one thing for a specific job, the rest of the GPU isn't really being fully utilized.
November 29, 2019
On Friday, 29 November 2019 at 05:16:08 UTC, Jab wrote:
> IIRC GPUs are limited in what they can do in parallel, so if you only need to do one thing for a specific job, the rest of the GPU isn't really being fully utilized.

GPUs have used VLIW designs, but are moving to RISC-like designs as a result of the GPGPU trend. So they are becoming more like simple CPUs. But what you get, and what you get to access, will vary based on API and hardware. (AI and raytracing will likely cause more changes in the future too.)

So you need a CPU software renderer to fall back on; GPU rendering is more of an optimization on top of CPU rendering. But more and more is moving to the GPU.

Look at the roadmap for Skia to get an idea.
November 29, 2019
On Friday, 29 November 2019 at 06:02:40 UTC, Ola Fosheim Grostad wrote:
> On Friday, 29 November 2019 at 05:16:08 UTC, Jab wrote:
>> IIRC GPUs are limited in what they can do in parallel, so if you only need to do one thing for a specific job, the rest of the GPU isn't really being fully utilized.
>
> GPUs have used VLIW designs, but are moving to RISC-like designs as a result of the GPGPU trend. So they are becoming more like simple CPUs. But what you get, and what you get to access, will vary based on API and hardware. (AI and raytracing will likely cause more changes in the future too.)
>

GPUs are vector processors, typically 16-wide SIMD. The shaders and compute kernels for them are written from a single-"threaded" perspective, but this is converted to SIMD, with one "thread" really being a single value in the 16-wide register. This has all kinds of implications for things like branching and memory accesses. This forum is not the place to go into them.
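To make the branching implication concrete, here is a rough scalar model in D of what the hardware does. This is an illustration only: the 16-lane width follows the post above, real widths vary by vendor, and real hardware skips a branch side entirely when the mask is uniform.

```d
enum laneCount = 16;

// One "wavefront": 16 shader invocations running in lockstep.
// A branch in the shader source becomes both branch bodies,
// with a per-lane mask selecting which result survives.
void runWavefront(const float[laneCount] x, ref float[laneCount] result)
{
    bool[laneCount] mask;
    foreach (i; 0 .. laneCount)
        mask[i] = x[i] > 0.0f;      // the shader's `if (x > 0)`

    float[laneCount] thenVal, elseVal;
    foreach (i; 0 .. laneCount)
        thenVal[i] = x[i] * 2.0f;   // then-side runs for all lanes...
    foreach (i; 0 .. laneCount)
        elseVal[i] = -x[i];         // ...and so does the else-side

    foreach (i; 0 .. laneCount)
        result[i] = mask[i] ? thenVal[i] : elseVal[i];
}
```

Diverged lanes pay for both sides of the branch, which is why branchy shader code can be so much slower than the same logic on a CPU.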

> So you need a CPU software renderer to fall back on; GPU rendering is more of an optimization on top of CPU rendering. But more and more is moving to the GPU.
>
> Look at the roadmap for Skia to get an idea.

Yes, proper drawing of common 2d graphics primitives is hard.
November 29, 2019
On Friday, 29 November 2019 at 02:42:28 UTC, Gregor Mückl wrote:

> And rendering the window contents is where things start to diverge a lot. A game engine is a fundamentally different beast from a renderer for the kind of graphics a UI draws. The graphics primitives that GUI code wants to deal with map awkwardly to the GPU rendering pipeline. Sure, there are ways (some of them quite impressive), but it's a pain. There's no explicit scene graph.

As a company that uses Qt extensively ...
https://doc.qt.io/qt-5/qtquick-visualcanvas-scenegraph.html
November 29, 2019
On Friday, 29 November 2019 at 08:45:30 UTC, Gregor Mückl wrote:
> On Friday, 29 November 2019 at 06:02:40 UTC, Ola Fosheim Grostad wrote:
>> So you need a CPU software renderer to fall back on; GPU rendering is more of an optimization on top of CPU rendering. But more and more is moving to the GPU.
>>
>> Look at the roadmap for Skia to get an idea.
>
> Yes, proper drawing of common 2d graphics primitives is hard.

Accidentally hit send too early. Sorry.

I am not aware of a full GPU implementation of a TTF or OTF font renderer. Glyphs are defined as quadratic (TTF) or cubic (OTF/CFF) Bézier splines, but these are warped according to pretty complex hinting rules. All of that is often done with subpixel precision to get proper antialiasing.
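For the curious, here is a minimal D sketch of the evaluation step only: one quadratic (TTF-style) Bézier segment flattened into points, the way a scanline rasteriser might do before computing coverage. The hinting and subpixel filtering, the genuinely hard parts, are deliberately absent.

```d
import std.stdio;

struct Point { double x, y; }

// Bernstein evaluation of a quadratic Bézier segment with
// on-curve endpoints p0, p2 and control point p1.
Point quadBezier(Point p0, Point p1, Point p2, double t)
{
    immutable u = 1.0 - t;
    return Point(u*u*p0.x + 2*u*t*p1.x + t*t*p2.x,
                 u*u*p0.y + 2*u*t*p1.y + t*t*p2.y);
}

void main()
{
    // Flatten one segment into 8 line segments.
    auto p0 = Point(0, 0), p1 = Point(50, 100), p2 = Point(100, 0);
    foreach (i; 0 .. 9)
        writeln(quadBezier(p0, p1, p2, i / 8.0));
}
```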

The 2D rendering engines in Qt, Cairo, Skia... contain proper implementations of primitives like arcs, polylines with various choices of joints and end caps, filled polygons with correct self-intersection handling, gradients, fill patterns, ... All of these things can be done on GPUs (most of them have been), but I highly doubt that this would be that much faster. You need lots of different shaders for these primitives, and switching state while rendering is expensive.
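The usual mitigation for that switching cost is batching: bucket queued primitives by pipeline state so each shader is bound once per flush instead of once per primitive. A rough D sketch of the idea; the names and the `Kind` key are invented, and this naive sort ignores painter's-order overlap between different primitive kinds, which is precisely what makes 2D batching awkward:

```d
import std.algorithm : sort;

enum Kind { fillPolygon, polyline, arc, gradient }

struct DrawCmd
{
    Kind kind;  // stands in for a full state key (shader, blend, texture)
    int order;  // original submission order
}

void flush(DrawCmd[] cmds)
{
    // Group by state, preserving submission order within a group.
    cmds.sort!((a, b) => a.kind != b.kind ? a.kind < b.kind
                                          : a.order < b.order);
    bool first = true;
    Kind current;
    foreach (cmd; cmds)
    {
        if (first || cmd.kind != current)
        {
            // bindPipelineFor(cmd.kind); // the expensive state change
            current = cmd.kind;
            first = false;
        }
        // issueDraw(cmd);
    }
}
```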
November 29, 2019
On Friday, 29 November 2019 at 08:49:33 UTC, Paolo Invernizzi wrote:
> On Friday, 29 November 2019 at 02:42:28 UTC, Gregor Mückl wrote:
>
>> And rendering the window contents is where things start to diverge a lot. A game engine is a fundamentally different beast from a renderer for the kind of graphics a UI draws. The graphics primitives that GUI code wants to deal with map awkwardly to the GPU rendering pipeline. Sure, there are ways (some of them quite impressive), but it's a pain. There's no explicit scene graph.
>
> As a company that uses Qt extensively ...
> https://doc.qt.io/qt-5/qtquick-visualcanvas-scenegraph.html

Same as with browser engines: Qt Quick gets away with it mostly because the UI is declarative. But declarative UIs have their own tradeoffs, and in the case of Qt Quick it comes in the form of less powerful widgets compared to Qt Widgets.
November 29, 2019
On Friday, 29 November 2019 at 09:00:20 UTC, Gregor Mückl wrote:
> On Friday, 29 November 2019 at 08:45:30 UTC, Gregor Mückl wrote:
>> On Friday, 29 November 2019 at 06:02:40 UTC, Ola Fosheim Grostad wrote:
>>> So you need a CPU software renderer to fall back on; GPU rendering is more of an optimization on top of CPU rendering. But more and more is moving to the GPU.
>>>
>>> Look at the roadmap for Skia to get an idea.
>>
>> Yes, proper drawing of common 2d graphics primitives is hard.
>
> Accidentally hit send too early. Sorry.
>
> I am not aware of a full GPU implementation of a TTF or OTF font renderer. Glyphs are defined as quadratic (TTF) or cubic (OTF/CFF) Bézier splines, but these are warped according to pretty complex hinting rules. All of that is often done with subpixel precision to get proper antialiasing.
>
> The 2D rendering engines in Qt, Cairo, Skia... contain proper

Cairo is not comparable to Skia or Qt; it's more of an intermediate-level API, which can itself use different backends. But it's clearly lower-level than Skia, from the little I know of it.

> implementations of primitives like arcs, polylines with various choices of joints and end caps, filled polygons with correct self-intersection handling, gradients, fill patterns, ... All of these things can be done on GPUs (most of them have been), but I highly doubt that this would be that much faster. You need lots of different shaders for these primitives, and switching state while rendering is expensive.

Back in the early 2010s I used something comparable to Qt Quick, and it had different backends. On Windows we could choose between GDI+ and D2D+DirectWrite. The latter, while using the GPU, was awfully laggy compared to the good old GDI+.

Back to the original topic. What people don't realize is that a 100% D GUI would be a more complex project than the D compiler itself. Just the text features are a huge thing in themselves: Unicode, BDI.
November 29, 2019
On Friday, 29 November 2019 at 02:42:28 UTC, Gregor Mückl wrote:
> They don't concern themselves with how the contents of these quads came to be.

Amazing. Every word of what you just said is wrong.

What, you think stock Win32 widgets are rendered with CPU code under the Aero and later compositors?

You're treating custom user CPU rasterisation on pre-defined bounds as the entire rendering paradigm. And you can be assured that your code is writing to, and reading from, a quarantined section of memory that will later be composited by the layout engine.

If you're going to bring up examples, study WPF and UWP. Entirely GPU-driven WIMP APIs.

But I guess we still need homework assignments.

1) What is a Z buffer?

2) What is a frustum? What does "orthographic" mean in relation to that?

3) Comparing the traditional and Aero+ desktop compositors, which one has the advantage with redraws of any kind? Why?

4) Why does ImGui's code get so complicated behind the scenes? And what advantage does this present to a programmer who wishes to use the API?

5) Using a single untextured quad and a pixel shader, how would you rasterise a curve?

(I've written UI libraries and 3D scene graphs in my career as a console engine programmer, so you're going to want to be *very* thorough if you attempt to answer all these.)
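As a pointer for question 5, one well-known answer is the Loop-Blinn technique from GPU Gems 3: give the vertices (u, v) coordinates that map the quadratic Bézier's control points to (0,0), (1/2,0), (1,1), let the hardware interpolate them, and test the implicit form per pixel. A sketch of the per-pixel test, written as plain D rather than shader code:

```d
// Loop-Blinn inside test: under the (u, v) assignment above, the
// curve becomes v = u^2, so f(u, v) = u^2 - v is negative inside
// and positive outside. In a real pixel shader this is one line;
// antialiasing divides f by the length of its screen-space
// gradient (via ddx/ddy) to approximate a signed distance.
bool insideQuadraticBezier(float u, float v)
{
    return u * u - v <= 0.0f;
}
```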

On Friday, 29 November 2019 at 08:45:30 UTC, Gregor Mückl wrote:
> GPUs are vector processors, typically 16-wide SIMD. The shaders and compute kernels for them are written from a single-"threaded" perspective, but this is converted to SIMD, with one "thread" really being a single value in the 16-wide register. This has all kinds of implications for things like branching and memory accesses. This forum is not the place to go into them.

No, please, continue. Let's see exactly how poorly you understand this.

On Friday, 29 November 2019 at 09:00:20 UTC, Gregor Mückl wrote:
> All of these things can be done on GPUs (most of them have been), but I highly doubt that this would be that much faster. You need lots of different shaders for these primitives, and switching state while rendering is expensive.

When did you last use a GPU API? 1999?

Top-end gaming engines can output near-photorealistic complex scenes at 60 FPS. How many state changes do you think they perform in any given scene?

It's all dependent on API, driver, and even operating system. The WDDM introduced in Vista made breaking changes with XP, splitting a whole ton of the stuff that would traditionally be costly with a state change out of kernel-space code and into user-space code. Modern APIs like DirectX 12, Vulkan, Metal, etc. go one step further and move that responsibility from the driver into user code.

aberba wrote:
> What's holding ~100% D GUI back?

A lack of skilled and knowledgeable people with the necessary time and money to do it correctly.
November 29, 2019
On Friday, 29 November 2019 at 05:16:08 UTC, Jab wrote:
> IIRC GPUs are limited in what they can do in parallel, so if you only need to do one thing for a specific job, the rest of the GPU isn't really being fully utilized.

Yeah, that's not how GPUs work. They have a number of shader units that execute work in parallel. It used to be an explicit split between vertex and pixel pipelines in the early days, where it was very easy to underutilise the vertex pipeline. But shader units have been unified for a long time. Queue up a bunch of work and the driver and hardware will schedule it properly.
November 29, 2019
On 29/11/2019 10:55 PM, Basile B. wrote:
> 
> Back to the original topic. What people don't realize is that a 100% D GUI would be a more complex project than the D compiler itself. Just the text features are a huge thing in themselves: Unicode, BDI.

Assuming you mean BIDI, yes. Text layout is a real pain, mostly because it needs an expert in Unicode to do right. But it's all pretty well defined, with tests described, so it shouldn't be considered out of scope. Font rasterization, on the other hand... ugh.
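As a small taste of why it needs an expert: even counting "characters" already involves the Unicode grapheme segmentation rules, which Phobos happens to implement. A minimal D illustration using std.uni:

```d
import std.stdio;
import std.uni : byGrapheme;
import std.range : walkLength;

void main()
{
    // "Noël" written with a combining diaeresis (U+0308).
    auto s = "Noe\u0308l";
    writeln(s.length);                // 6 UTF-8 code units
    writeln(s.walkLength);            // 5 code points
    writeln(s.byGrapheme.walkLength); // 4 user-perceived characters
}
```

And that is before bidirectional text, shaping, and line breaking even enter the picture.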