DConf 2014 Day 2 Talk 3: Designing an Aurora: A Glimpse at the Graphical Future of D by Adam Wilson
July 08, 2014
https://news.ycombinator.com/newest (please find and vote quickly)

https://twitter.com/D_Programming/status/486540487080554496

https://www.facebook.com/dlang.org/posts/881134858566863

http://www.reddit.com/r/programming/comments/2a5ia9/dconf_2014_day_2_talk_3_designing_an_aurora_a/


Andrei
July 08, 2014
On Tuesday, 8 July 2014 at 16:03:36 UTC, Andrei Alexandrescu wrote:
http://www.reddit.com/r/programming/comments/2a5ia9/dconf_2014_day_2_talk_3_designing_an_aurora_a/

Very intriguing.

First question for Adam Wilson, I reckon :)

Is the Immutable Scene Object (ISO) supposed to be an exact copy (same type and same contents) of the User Scene Object (USO), especially with regard to the Model-View-Controller pattern:

https://en.wikipedia.org/wiki/Model-View-Controller

I'm asking because I first thought that

- USO typically maps to the Model (data) and the
- ISO typically maps to the View (visual representation)
July 09, 2014
On Tuesday, 8 July 2014 at 16:03:36 UTC, Andrei Alexandrescu wrote:
>http://www.reddit.com/r/programming/comments/2a5ia9/dconf_2014_day_2_talk_3_designing_an_aurora_a/

Great talk, but I have some reservations about the design. What I am most concerned about is the design of the immediate mode layer. I was one of the few who initially pushed for the immediate mode, but I think you missed the point.

There are several points that I want to address, so I will go through them one at a time. Also, I apologize for the wall of text.

*Scene Graph
Personally I find it odd that the immediate mode knows anything about a scene graph at all. Scene graphs are not an end-all-be-all; they do not make everything simpler to deal with. They are one way to solve the problem, but not always the best. D is supposed to be multi-paradigm; locking users into a scene graph design works against that goal. I personally think that the immediate mode should be designed for performance, and the less performant but 'simpler' modes should be built on top of it.

*Performance vs Simplicity
I know that you have stated quite clearly that you do not believe performance should be a main goal of Aurora, and that simplicity is a more important goal. I propose that there is NO reason at all that Aurora can't have both, in the same way that D itself has both. I think it is just a matter of properly defining the goals of each layer. The retained mode should be designed with simplicity in mind whilst still trying to be performant where possible. On the other hand, the immediate mode should be designed with performance in mind whilst still trying to be simple where possible. The simple mode(s?) should be built on top of the single performance mode.

*Design
Modern graphics hardware has a very well defined interface, and all modern graphics APIs are converging on matching the hardware as closely as possible. Modern graphics is done by sending buffers of data to the card and having programmable shaders operate on the data, period. I believe that the immediate mode layer should match this as closely as possible. If that involves having some DSL for shaders that gets translated into the various other shader languages, then so be it; the differences between them are minimal. If the DSL were a subset of D, all the better.
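To make that concrete, here is a toy sketch (plain Python, every name invented; nothing here is an actual Aurora or OpenGL API) of roughly the surface area such a buffer+shader immediate mode implies: one entry point tying a raw vertex buffer to a shader and a draw count, with nothing scene-graph-shaped in sight.

```python
# Hypothetical sketch only -- all names are invented for illustration.
import struct

class VertexBuffer:
    def __init__(self, floats):
        # Raw bytes, handed to the card as-is.
        self.data = struct.pack("%df" % len(floats), *floats)

class Shader:
    def __init__(self, source):
        # Source in some DSL; a backend would translate it into
        # GLSL/HLSL/etc., whose differences are small.
        self.source = source

def draw(buffer, shader, count):
    # Stand-in for real submission: returns the payload a backend
    # would push to the GPU.
    return (buffer.data, shader.source, count)

tri = VertexBuffer([0.0, 0.0, 1.0, 0.0, 0.0, 1.0])  # 3 x 2D positions
payload = draw(tri, Shader("passthrough"), count=3)
```

Everything simpler (scene graphs included) can be layered on top of an interface of this shape.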

*2D vs 3D
I think the difference you are making between 2D and 3D is largely artificial. In modern graphics APIs the difference between 2D and 3D is merely a matrix multiply. If the immediate mode were designed as I suggest above, then 2D vs 3D is a non-issue.
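As an aside, the "merely a matrix multiply" point is easy to demonstrate with a throwaway sketch (plain Python, no graphics API involved): a "2D" point is just a 3D point with z = 0 pushed through an orthographic projection instead of a perspective one.

```python
# Illustrative only: "2D" rendering is 3D with an orthographic matrix.

def mat_vec(m, v):
    # 4x4 row-major matrix times a 4-component column vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def ortho(w, h):
    # Maps pixel coordinates [0,w]x[0,h] to normalized device
    # coordinates [-1,1]x[-1,1]; z passes through untouched.
    return [
        [2.0 / w, 0.0,     0.0, -1.0],
        [0.0,     2.0 / h, 0.0, -1.0],
        [0.0,     0.0,     1.0,  0.0],
        [0.0,     0.0,     0.0,  1.0],
    ]

# A "2D" point is a 3D point with z = 0.
p = [400.0, 300.0, 0.0, 1.0]        # centre of an 800x600 surface
ndc = mat_vec(ortho(800, 600), p)   # [0.0, 0.0, 0.0, 1.0]
```

Swap the ortho matrix for a perspective one and the same pipeline does "3D"; nothing else changes.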

*Games
D is already appealing to game developers, and with the work on @nogc and Andrei's work on allocators, it is becoming even more appealing. The outcome of Aurora could land D a VERY strong spot in games that would secure it a very good future, but only if it is done right. I think there is a certain level of responsibility in the way Aurora gets designed that needs to be taken into account.


I know that most of my points are not in line with what you said Aurora would and wouldn't be. I just don't think there is any reason Aurora couldn't achieve the above points whilst still maintaining its goal of simplicity.

Also, I am willing to help; I just don't really know what needs working on. I have a lot of experience writing high-performance OpenGL graphics on Windows.
July 09, 2014
On Wednesday, 9 July 2014 at 04:26:55 UTC, Tofu Ninja wrote:
> Modern graphics hardware has a very well defined interface and
> all modern graphics api's are all converging on matching the
> hardware as close as possible. Modern graphics is done by sending
> buffers of data to the card and having programmable shaders to
> operate on the data, period.

That's true, but OpenGL is being left behind now that there is a push to match the low-level workings of GPU drivers. Apple's Metal is oriented towards the tiled PowerVR architecture and scene graphs, probably also with some expectation of supporting the upcoming raytracing accelerators. AMD is in talks with Intel (rumour) with the intent of cooperating on Mantle. DirectX is going lower level… So there is really no stability in the API at the lower level.

But yes, OpenGL is not particularly suitable for rendering a scene graph without an optimizing engine to reduce context switches.

> largely artificial. In modern graphics api's the difference
> between 2D and 3D is merely a matrix multiply. If the immediate
> mode was designed how I suggest above, then 2D vs 3D is a non
> issue.

Actually, modern 2D APIs like Apple's Quartz are backend "independent" and render to PDF. Native PDF support is important if you want to have an advantage in the web space and in the application space in general.

There is almost no chance anyone wanting to do 3D would use something like Aurora… If you can handle 3D math, can't you also handle OpenGL, Mantle, or Metal?

But then again, the official status for Aurora is kind of unclear.
July 09, 2014
On Tuesday, 8 July 2014 at 16:03:36 UTC, Andrei Alexandrescu wrote:
> https://news.ycombinator.com/newest (please find and vote quickly)
>
> https://twitter.com/D_Programming/status/486540487080554496
>
> https://www.facebook.com/dlang.org/posts/881134858566863
>
> http://www.reddit.com/r/programming/comments/2a5ia9/dconf_2014_day_2_talk_3_designing_an_aurora_a/
>
>
> Andrei

http://youtu.be/PRbK7jk0jrk
July 09, 2014
On Wednesday, 9 July 2014 at 05:30:21 UTC, Ola Fosheim Grøstad wrote:
> That's true, but OpenGL is being left behind now that there is a push to match the low level of how GPU drivers work.

As I said, ALL APIs are converging on low-level access, and this includes OpenGL. This means that all major APIs are moving to a buffer+shader model, because this is what the hardware likes (there are also some more interesting things happening with command buffers).

> Apple's Metal is oriented towards the tiled PowerVR and scenegraphs,

I am not exactly sure where you got that idea; Metal is the same, buffers+shaders. The major difference is the command buffer that is being explicitly exposed; this is actually what is meant when they say that the API is getting closer to the hardware. In current APIs (DX/OGL) the command buffers are hidden from the user and constructed behind the scenes; in DX it is done by Microsoft, and in OGL it is done by the driver (NVIDIA/AMD/Intel). There has been a push recently for this to be exposed to the user in some form, which is what Metal does. I believe Mantle does something similar, but I can't be sure because they have not released any documentation.
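A toy illustration of the distinction being drawn (plain Python, invented names; not any real API): in the DX/OGL model the driver assembles the command buffer from individual calls behind the scenes, whereas in the Metal style the application records and submits it explicitly.

```python
# Invented names; a sketch of explicit command-buffer recording in
# the style attributed to Metal (and, reportedly, Mantle).
class CommandBuffer:
    def __init__(self):
        self.commands = []

    def set_pipeline(self, shader):
        self.commands.append(("pipeline", shader))

    def draw(self, vertex_count):
        self.commands.append(("draw", vertex_count))

class Queue:
    def __init__(self):
        self.submitted = []

    def submit(self, cb):
        # The application hands over a whole, explicitly built buffer,
        # instead of the driver inferring batch boundaries from
        # individual state-setting calls.
        self.submitted.append(list(cb.commands))

cb = CommandBuffer()
cb.set_pipeline("unlit")
cb.draw(3)
q = Queue()
q.submit(cb)
```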


> probably also with some expectations of supporting the upcoming raytracing accelerators.

I doubt it.

> AMD is in talks with Intel (rumour) with the intent of cooperating on Mantle.

I don't know anything about that but I also doubt it.

> Direct-X is going lower level… So, there is really no stability in the API at the lower level.

On the contrary, all this movement towards low-level APIs is actually causing the APIs to all look very similar.

>
> But yes, OpenGL is not particularly suitable for rendering a scene graph without an optimizing engine to reduce context switches.

I was not talking explicitly about OpenGL; I am just talking about video cards in general.

> Actually, modern 2D APIs like Apple's Quartz are backend "independent" and render to PDF. Native PDF support is important if you want to have an advantage in the web space and in the application space in general.

This does not really have anything to do with what I am talking about. I am talking about hardware-accelerated graphics; once it gets into the hardware (GPU), there is no real difference between 2D and 3D.

> There is almost no chance anyone wanting to do 3D would use something like Aurora… If you can handle 3D math you also can do OpenGL, Mantle, Metal?

As it stands now, that may be the case, but I honestly don't see a reason it must be so.

> But then again, the official status for Aurora is kind of unclear.

This is true.
July 09, 2014
On Wednesday, 9 July 2014 at 15:03:13 UTC, Tofu Ninja wrote:

Also, I should note that DX and OGL are both moving towards exposing the command buffer.
July 09, 2014
On Wednesday, 9 July 2014 at 15:03:13 UTC, Tofu Ninja wrote:
> I am not exactly sure where you are get that idea, Metal is the same, buffers+shaders. The major difference is the command buffer that is being explicitly exposed, this is actually what is meant when they say that the the api is getting closer to the hardware.

Yes, but 3D APIs are temporary so they don't belong in a stable development library. Hardware and APIs have been constantly changing for 25 years.

My point was that the current move is from heavy graphics contexts with few API calls to explicit command buffers with many API calls. I would think it fits better with tiling, where you defer rendering and sort polygons and therefore get context switches anyway (classic PowerVR on iDevices). It fits better with rendering a display graph directly, or a UI, etc.

> this to be exposed to the user in some form, this is what metal
> does, I believe mantel does something similar but I can't be sure because they have not released any documentation.

Yes, this is what they do. It is closer to what you want for general computing on the GPU. So there is probably a long term strategy for unifying computation and graphics in there somewhere. IIRC Apple claims Metal can be used for general computing as well as 3D.

>> probably also with some expectations of supporting the upcoming raytracing accelerators.
>
> I doubt it.

Why?

Imagination Technologies (PowerVR) purchased the raytracing accelerator (hardware design/patents) that three former Apple employees designed and has just completed the design for mobile devices so it is close to production. The RTU (ray tracing unit) has supposedly been worked into the same series of GPUs that is used in the iPhone. Speculation, sure, but not unlikely either.

http://www.imgtec.com/powervr/raytracing.asp

>> AMD is in talks with Intel (rumour) with the intent of cooperating on Mantle.
>
> I don't know anything about that but I also doubt it.

Why?

Intel has always been willing to cooperate when AMD holds the strong cards (ATI is stronger than Intel's 3D division).

http://www.phoronix.com/scan.php?page=news_item&px=MTcyODY

> On the contrary, all this movement towards low level API is actually causing the API's to all look very similar.

I doubt it. ;-)

Apple wants unique AAA titles on their iDevices to keep Android/WinPhone at bay and to defend their high profit margins. They have no interest in portable low-level access and will just point at OpenGL ES 2 for that.

> graphics, once it gets into the hardware(gpu), there is no real difference between 2d and 3d.

True, but that is not a very stable abstraction level. Display PostScript/PDF etc. is much more stable. It is also a very useful abstraction level, since it means you can use the same graphics API for sending a drawing to the screen, to the printer, or to a file.

> As it stands now, that may be the case, but I honestly don't see a reason it must be so.

Well, having the abstractions for opening a drawing context, input devices, etc. would be useful, but not really a language-level task IMO. Solid cross-platform behaviour on that level will never happen (just think about what you have to wrap up on Android).
July 09, 2014
On Wednesday, 9 July 2014 at 15:22:35 UTC, Tofu Ninja wrote:
> On Wednesday, 9 July 2014 at 15:03:13 UTC, Tofu Ninja wrote:
>
> Also I should note, dx and ogl are both also moving towards exposing the command buffer.

I should say that it looks like they are moving in that direction; both OpenGL and DirectX support indirect draws, which is nearly all the way to a command buffer. It is only a matter of time before it becomes a reality everywhere (it already is, explicitly, with Metal and Mantle).
July 09, 2014
On Wednesday, 9 July 2014 at 16:25:14 UTC, Tofu Ninja wrote:
> is almost nearly all the way to a command buffer, it is only a matter of time before it become a reality(explicitly with metal and mantel).

Yes, of course, but it does not belong in a stable high-level graphics API. It's not gonna work ten years down the road…