Posted by Adam D Ruppe
in reply to Dukc
On Thursday, 9 September 2021 at 08:52:34 UTC, Dukc wrote:
> I thought that the Wayland architecture is in some way fundamentally better than X architecture
That's what the Wayland propagandists like to say (now... earlier they'd play up the similarities to make the "rewrite it" pill easier to swallow), but it isn't really true. Wayland's graphics system is a subset of X's: it takes the direct rendering component, leaves the others out, then combines the X compositor, X window manager, and X server into one component, which they call the Wayland compositor.
But it is important to realize that direct rendering clients on X - which has been around since before Wayland was born - already work essentially the same way. The client gets a gpu buffer and does opengl rendering to it. Then they signal the display manager - whether X server or Wayland compositor - to show the buffer on screen and they do it when the time is right (e.g. vblank syncing).
X does support other models too, and that is a good thing: it allows backward compatibility, better network efficiency, and lets simpler applications just do simpler things. Removing features people actually use in the real world is rarely that beneficial.
So what is the major difference for graphics architecture? In X, the compositor is an entirely optional component. In Wayland, it is integrated into the display manager. That's the only real difference.
What is a compositor? It is responsible for taking the images from each individual application and layering them on top of each other when you want additional scaling, transformation (like rotation), or transparency between windows. Inside an individual window, you have all those things regardless of compositor. Traditional X servers can layer images without transparency or external scaling without any trouble at all. A compositor only comes into play when you want those fancier graphics things to happen without cooperation between windows. If you're thinking "that's not actually all that useful anyway", yeah, me too. I've never used a compositor on my own system at all.
Wayland claims that integrating this optional component provides a performance benefit by avoiding the cost of the additional context switch and synchronization. There's one place where that's legitimately true: exposing a window on a remote network connection. With core X, the server sends an expose event, then the client sends back a blit command from its buffer, or the drawing commands if it doesn't have one. Due to network latency, this can result in some visual lag and some flash as the server paints over a simple background color, then the client paints over it again.
Wayland, of course, sees zero change here because it has zero support for this operation. Its network support - to the extent that there is any at all - is more like VNC screen share than X's remote commands.
Anyway, even locally, that context switch thing is technically true but also almost certainly irrelevant since that cost is insignificant next to the time spent waiting for the vsync anyway if you use the direct rendering facility.
It has a little more truth when comparing non-direct-rendering clients, but again Wayland sees zero benefit here because it simply has zero support for these use cases at all. Those programs just plain don't work. They must be rewritten to target Wayland which means they'd be forced to switch to a direct render system. But if you're rewriting the application anyway, you could just port the X application to the direct render infrastructure too.
Anyway, just for background: a compositor works with these older-style applications a bit differently. Those clients are not written with the extension APIs in mind and assume they are drawing directly to the screen. (Well, actually, there are non-DRI clients that sync to double buffers too: you can render to pixmaps in the traditional API and blit them over, all core X functionality, with a sync trigger that was provided as an extension in 1991; or you can use the XRender extension, which provides an explicit double buffer function and was added in 2000.) The compositor intercepts the final draw to screen and redirects it to another buffer, then does the alpha blend etc. on that before getting to the actual screen. It may need to guess which draws to screen are supposed to be immediate and which are already buffered and synced by the application, which can introduce a bit of additional lag; it might vsync twice, for example. This is rare in practice, but when the implementation gets it wrong, you get either a missed frame or mid-frame screen tear.
But like, that's an older API being used on a buggy implementation. Again, if your fix is to rewrite the application anyway, you can just use the different API and/or fix the bug in your existing implementation. There's no real-world benefit. And once again, Wayland just discards the compatibility and network features of X in the process.
Speaking of Wayland combining features, btw: the Wayland compositor also integrates the X window manager. They do this in the name of simplifying the architecture diagram, and again it buys a little bit of context switching and syncing simplification (which probably doesn't matter anyway). But one of the things people very commonly do on X is swap out window managers; it is one of the most common user preferences on a Linux desktop (and this full customization is one of the only reasons why I actually use Linux). Fun fact: you can even do it on the fly, without restarting any applications, if you just want to try one out. This is *because* it is a separate component. Integrating it, as Wayland does, makes this much harder. There is some variety of Wayland compositors (using library code can help make it feel plug-and-play to developers), but this remains a strength of X in practice.