May 23, 2019
On 5/23/19 3:52 PM, Ola Fosheim Grøstad wrote:
> On Thursday, 23 May 2019 at 19:32:28 UTC, Nick Sabalausky (Abscissa) wrote:
>> Game engines *MUST* be *EFFICIENT* in order to facilitate the demands the games place on them. And "efficiency" *means* efficiency: it means minimizing wasted processing, and that *inherently* means *both* speed and battery life.
> 
> I think there is a slight disconnect in how different people view efficiency. You argue that it is some kind of absolute metric. I would argue that it is a relative metric, relative to flexibility and power.
> 
> This isn't specific to games.
> 
> For instance, there is no spatial data structure that is inherently better or more efficient than all other spatial data structures.
> 
> It depends on what you need to represent. It depends on how often you need to update it. It depends on what kind of queries you want to run. And so on.
> 
> This is where a generic application/UI framework has to prioritize being useful in the most general sense, favouring flexibility and expressiveness.
> 
> A first-person-shooter engine, however, can make a lot of assumptions. That makes it more efficient for a narrow set of cases, but also completely useless in the general sense. It also limits what you can do, quite severely.
> 

Of course there are always tradeoffs, but I think you are very much overestimating the connection between inherent performance limitations and things like API design, general usefulness, and flexibility. And I think you're *SEVERELY* underestimating the flexibility of modern game engines. And I say this having personally used modern game engines. Have you?

FWIW, on '80s technology I would completely agree with you, and even to some extent on '90s tech. But not today.
May 23, 2019
On Thursday, 23 May 2019 at 20:13:29 UTC, Nick Sabalausky (Abscissa) wrote:
> They want accuracy TO THE EXTENT THEY (and others) CAN PERCEIVE IT. That is the key. Human perception is far more limited than most people realize.

Well, what I meant by "cutting corners" is that games reach efficiency by narrowing down what they allow you to do.

STILL, I think Robert M. Münch is onto something good if he aims for accuracy and provides, say, a canvas that draws Bézier curves to the spec (whether that's PDF or SVG). I think many niche application areas depend on accuracy: a CNC router program, a logo cutter, 3D printing. So I think there is a market.
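To make "to the spec" concrete: both the PDF and SVG path specs define a cubic segment by the same Bernstein/De Casteljau form, so a correct canvas mostly has to evaluate it faithfully and flatten it within a tolerance. A minimal D sketch (the names are mine, purely illustrative):

import std.stdio;

struct Point { double x, y; }

// De Casteljau evaluation of a cubic Bezier segment at parameter t in [0, 1].
// This is the same cubic that both the PDF and SVG path specs define.
Point cubicBezier(Point p0, Point p1, Point p2, Point p3, double t)
{
    Point lerp(Point a, Point b)
    {
        return Point(a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t);
    }
    auto a = lerp(p0, p1), b = lerp(p1, p2), c = lerp(p2, p3);
    auto d = lerp(a, b), e = lerp(b, c);
    return lerp(d, e);
}

void main()
{
    // Flatten one segment into 16 line segments, as a canvas might when stroking.
    auto p0 = Point(0, 0), p1 = Point(0, 100), p2 = Point(100, 100), p3 = Point(100, 0);
    foreach (i; 0 .. 17)
        writeln(cubicBezier(p0, p1, p2, p3, i / 16.0));
}

(A real renderer would subdivide adaptively against a flatness tolerance rather than use a fixed step, but the evaluation itself is this simple.)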

If you can provide everything people need in one framework, then people might want to pay for it.

If you just provide what everyone else sloppily does, then why bother (just use Gtk, Qt or Electron instead)? *shrug*

May 23, 2019
On Thursday, 23 May 2019 at 20:20:52 UTC, Nick Sabalausky (Abscissa) wrote:
> flexibility. And I think you're *SEVERELY* underestimating the flexibility of modern game engines. And I say this having personally used modern game engines. Have you?

No, I don't use them. I read about how they are organized, but I have no need for the big gaming frameworks, which seem bloated and, frankly, limiting. I am not really interested in big static photorealistic landscapes. If I went there I would go for algorithmic surrealistic landscapes, and those frameworks wouldn't fit that. Too static, too Euclidean.

On the rare occasions I do hit the hardware, I tend to favour bare bones for my simple needs, which wouldn't benefit from any big framework. The hardware is fast enough anyway; the limit is in figuring out clever ways to use shaders for things like audio-waveform zooming while getting decent quality out of it.
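For the waveform case, the CPU baseline that any shader trick has to beat is the usual min/max peak reduction per pixel column. A rough D sketch, assuming at least one sample per column (names made up):

import std.algorithm : max, min;

struct Peak { float lo, hi; }

// Reduce each pixel column to the min/max of the samples it covers; the
// renderer then draws a vertical line from lo to hi per column. Assumes
// columns <= samples.length.
Peak[] reduceToColumns(const(float)[] samples, size_t columns)
{
    auto result = new Peak[columns];
    foreach (col; 0 .. columns)
    {
        auto begin = col * samples.length / columns;
        auto end = (col + 1) * samples.length / columns;
        auto p = Peak(float.max, -float.max);
        foreach (s; samples[begin .. end])
        {
            p.lo = min(p.lo, s);
            p.hi = max(p.hi, s);
        }
        result[col] = p;
    }
    return result;
}

void main()
{
    import std.stdio : writeln;
    float[] samples = [0.0f, 0.5f, -0.25f, 1.0f, -1.0f, 0.75f, 0.1f, -0.6f];
    writeln(reduceToColumns(samples, 4)); // two samples per column here
}

The quality question is what happens between zoom levels, which is exactly where the clever shader work would come in.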

But I am moving towards doing everything in the browser, and am adopting Angular for regular UI, which is yet another layer on top of that. Maybe I'll change my mind later, but right now Angular appears to make me more productive than the other options. So the whining about browsers being inefficient is lost on me where regular UI is concerned. Programmer productivity matters.

Browsers are actually doing quite well with simple 2D graphics today. Even some 3D is starting to look ok.


> FWIW, on '80s technology I would completely agree with you, and even to some extent on '90s tech. But not today.

OK. I've always been interested in spatial data structures, audio, 2D/3D and raytracing, and I don't think there have been any significant theoretical achievements or conceptual shifts on a fundamental level since the early 2000s, except perhaps the increased focus on point clouds.

So I think what you are seeing has more to do with GPU performance, the availability of RAM, and more mature frameworks than anything else?

May 24, 2019
On 2019-05-23 19:29:26 +0000, Ola Fosheim Grøstad said:

> When creating a user interface framework you should work with everything from sound editors, oscilloscopes, image editors, vector editors, CAD programs, spreadsheets etc. You cannot really assume much about anything. What you want is max flexibility.

That's exactly the right direction.

> Most GUI frameworks fail at this, so you have to do it all yourself if you want anything of decent quality, but that is not how it should be.

Yep, I couldn't agree more.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster

May 24, 2019
On 2019-05-23 20:22:28 +0000, Ola Fosheim Grøstad said:

> STILL, I think Robert M. Münch is onto something good if he aims for accuracy and provides, say, a canvas that draws Bézier curves to the spec (whether that's PDF or SVG). I think many niche application areas depend on accuracy: a CNC router program, a logo cutter, 3D printing. So I think there is a market.

I don't fully understand the discussion about accuracy WRT GUIs. Of course you need to draw things accurately. And my interjection WRT 35 FPS was just to give an idea of the achievable performance. I like desktop apps that are fast and small, nothing more.

> If you can provide everything people need in one framework, then people might want to pay for it. If you just provide what everyone else sloppily does, then why bother (just use Gtk, Qt or Electron instead)? *shrug*

Exactly. Our goal is to create a GUI framework you can use to make desktop apps without caring about OS specifics (which doesn't mean we limit you in a way that prevents you from caring if you wish). For this we are creating a set of building blocks that fit perfectly together, following a radical KISS and minimal-dependency strategy.

If you want, you should be able to maintain a desktop app using a specific version of the framework for 15+ years, without running into any limitations.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster

May 24, 2019
On 2019-05-23 17:27:09 +0000, Ola Fosheim Grøstad said:

> Yeah, that leaves a lot of headroom to play with. Do you think there is a market for an x86 CPU software renderer, though?

Well, the main market I see for a software renderer is the embedded market and server rendering. Making money with development tools, components or frameworks is most likely only possible in the B2B sector.

One needs to find a niche that companies are interested in: speed and resource efficiency is definitely one.

> Or do you plan to support CPUs where there is no GPU available?

Currently we don't use a GPU; it's CPU-based only. I think CPU rendering has its merits and is underestimated a lot.
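To illustrate (this is not our actual code, just the general idea): the core of a CPU renderer is plain arithmetic on a byte buffer, which is why it ports to anything a compiler targets, with no driver and no API churn. A toy D example:

import std.stdio;

// Toy illustration only: filling pixels is plain arithmetic on a buffer.
void fillRect(ubyte[] rgb, int w, int h, int x0, int y0, int x1, int y1,
              ubyte r, ubyte g, ubyte b)
{
    import std.algorithm : max, min;
    foreach (y; max(y0, 0) .. min(y1, h))
        foreach (x; max(x0, 0) .. min(x1, w))
        {
            auto i = (y * w + x) * 3;
            rgb[i] = r;
            rgb[i + 1] = g;
            rgb[i + 2] = b;
        }
}

void main()
{
    enum w = 320, h = 240;
    auto frame = new ubyte[w * h * 3];
    fillRect(frame, w, h, 40, 40, 280, 200, 200, 60, 60);

    // Dump as binary PPM so the result is viewable without any GUI stack.
    auto f = File("frame.ppm", "wb");
    f.writefln("P6\n%d %d\n255", w, h);
    f.rawWrite(frame);
}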

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster

May 24, 2019
On Friday, 24 May 2019 at 08:35:27 UTC, Robert M. Münch wrote:
> I don't fully understand the discussion about accuracy WRT GUIs. Of course you need to draw things accurately. And my interjection WRT 35 FPS was just to give an idea of the achievable performance. I like desktop apps that are fast and small, nothing more.

Yes. What I meant is that it is better for an application developer to have a GUI framework that is predictable and solid than to have the highest possible performance.

So if someone provides a drawing canvas, then I'd rather have correctly drawn anti-aliased primitives (like Bézier curves) than something that is 20% faster but incorrect. Just an example.

Just in general: predictable means less to worry about, so the application developer can focus on the application and not on the peculiarities of the GUI framework.

> care if you wish). For this we are creating a set of building blocks that fit perfectly together, following a radical KISS and minimal-dependency strategy.

Sounds reasonable.


May 24, 2019
On Friday, 24 May 2019 at 08:42:48 UTC, Robert M. Münch wrote:
> Well, the main market I see for a software renderer is the embedded market and server rendering. Making money with development tools, components or frameworks is most likely only possible in the B2B sector.

Indeed. Software that should be easy to port to new hardware, like point-of-sale terminals, calling systems etc.

I guess server rendering means that you can upgrade the software without touching the clients, so you have a network protocol that transfers the graphics to a simple and cheap client display. Like, for floor information in a building.
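Something as dumb as this would already do for that kind of display. A D sketch, with the wire format and the address invented on the spot:

import std.bitmanip : nativeToBigEndian;
import std.socket;

// Made-up wire format: width and height as big-endian uints, followed by
// raw RGB24 pixels. A cheap client reads the header and blits what follows.
void pushFrame(Socket sock, uint w, uint h, const(ubyte)[] rgb)
{
    auto wb = nativeToBigEndian(w);
    auto hb = nativeToBigEndian(h);
    sock.send(wb[]);
    sock.send(hb[]);
    sock.send(rgb);
}

void main()
{
    // The imagined wall display on the local network.
    auto sock = new TcpSocket(new InternetAddress("192.168.1.50", 7000));
    scope (exit) sock.close();

    uint w = 320, h = 240;
    auto frame = new ubyte[w * h * 3]; // filled by the server-side renderer
    pushFrame(sock, w, h, frame);
}

Upgrades then only ever touch the server.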

>> Or do you plan to support CPUs where there is no GPU available?
>
> Currently we don't use a GPU; it's CPU-based only. I think CPU rendering has its merits and is underestimated a lot.

You are probably right. What I find particularly annoying about GPUs is that the OS vendors keep changing and deprecating the APIs. Like Apple no longer supporting OpenGL, IIRC.

Sadly, GPU features provide a short path to (forced) obsolescence…

May 24, 2019
On Friday, 24 May 2019 at 08:42:48 UTC, Robert M. Münch wrote:
>
> Currently we don't use a GPU; it's CPU-based only. I think CPU rendering has its merits and is underestimated a lot.

+1

One big bottleneck for a CPU renderer is pixel upload, but apart from that it's pretty rad.
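A crude way to get a feel for it: a 1080p RGBA frame is roughly 8 MB that has to move every single frame, so even a plain memory copy puts a floor under the cost, before any driver or upload API is involved. A D timing sketch:

import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio;

void main()
{
    enum size_t bytes = 1920 * 1080 * 4; // one RGBA frame at 1080p, ~8 MB
    auto src = new ubyte[bytes];
    auto dst = new ubyte[bytes];

    enum frames = 100;
    auto sw = StopWatch(AutoStart.yes);
    foreach (_; 0 .. frames)
        dst[] = src[]; // stand-in for handing the frame to the display path
    sw.stop();

    writefln("%.3f ms per frame (plain copy, no real upload)",
             sw.peek.total!"usecs" / 1000.0 / frames);
}

The real upload is typically worse than this, since it crosses the driver and possibly the PCIe bus.
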
May 24, 2019
On 2019-05-24 10:12:10 +0000, Ola Fosheim Grøstad said:

> I guess server rendering means that you can upgrade the software without touching the clients, so you have a network protocol that transfers the graphics to a simple and cheap client display. Like, for floor information in a building.

Even much simpler use cases make sense. An example: render 3D previews of 100,000 CAD models and keep them up to date when things change. You need some CLI tool to render them, but most likely you have neither OpenGL nor a GPU on the server.

Whether it makes sense these days to run an app on a server with its own front-end client instead of a browser, I'm not sure. However, people have tremendous CPU power on their desks that goes unused. So I'm still favoring desktop apps, and a lot of users are too. Being contrarian in this sector makes a lot of sense :-)

> You are probably right. What I find particularly annoying about GPUs is that the OS vendors keep changing and deprecating the APIs. Like Apple no longer supporting OpenGL, IIRC.

Yep, way too much hassle and too many ways for external changes to break things. It can become support hell. Better to stay self-contained as much as possible.

> Sadly, GPU features provide a short path to (forced) obsolescence…

In the 2D realm I don't see that much gain in using a GPU over a CPU.

-- 
Robert M. Münch
http://www.saphirion.com
smarter | better | faster