September 20, 2019
On Thursday, 19 September 2019 at 03:47:05 UTC, Shadowblitz16 wrote:
> Is there a way to make an indexed graphics library that can handle importing and exporting true color images?
>
> I would guess something like this could be simulated with pointers and references right?

My color.d can actually do this.

http://dpldocs.info/experimental-docs/arsd.color.html

It has a true-color-to-indexed quantize function: http://dpldocs.info/experimental-docs/arsd.color.quantize.html

My png.d can load and save them:

http://dpldocs.info/experimental-docs/arsd.png.readPng.html

You will want to cast the result to IndexedImage, or cast it to TrueColorImage and quantize that to get an indexed one.
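Something like this (typed from memory, so double-check the linked docs for quantize's exact signature; the file names are just for illustration):

    import arsd.png;   // readPng, writePng
    import arsd.color; // MemoryImage, TrueColorImage, IndexedImage, quantize

    void main() {
        MemoryImage img = readPng("input.png");

        // If the file was already paletted, the cast succeeds; otherwise
        // fall back to quantizing the true color data.
        auto indexed = cast(IndexedImage) img;
        if (indexed is null)
            indexed = quantize(img.getAsTrueColorImage());

        writePng("output.png", indexed);
    }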

And my simpledisplay.d knows how to display them, but it does so via conversions, so palette swaps won't just happen live like on a real DOS system or whatever.

http://dpldocs.info/experimental-docs/arsd.simpledisplay.Image.fromMemoryImage.html
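A quick untested sketch of displaying one:

    import arsd.simpledisplay;
    import arsd.png;

    void main() {
        auto mem = readPng("input.png");        // MemoryImage, indexed or true color
        auto img = Image.fromMemoryImage(mem);  // converts for display
        auto window = new SimpleWindow(img.width, img.height, "indexed image");
        {
            auto painter = window.draw();
            painter.drawImage(Point(0, 0), img);
        }
        window.eventLoop(0);
    }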


All the modules' source code is in here:
https://github.com/adamdruppe/arsd


But otherwise the support is not great, because it is just whenever I decide to answer emails, and I'm gonna be busy this next week. Still, it might be useful to you.
September 20, 2019
On Friday, 20 September 2019 at 00:41:58 UTC, Adam D. Ruppe wrote:
> On Thursday, 19 September 2019 at 03:47:05 UTC, Shadowblitz16 wrote:
>> [...]
>
> My color.d actually can do it.
>
> [...]


Cool, does this store image data as raw byte[]s?
I might have to use this :D
September 20, 2019
On Thursday, 19 September 2019 at 20:47:45 UTC, Shadowblitz16 wrote:
> Can I do this in D and draw them to a 32bpp bitmap pixel by pixel?
> I would prefer to do this on the GPU but I don't know how.

Conceptually, applying the palette to the index buffer is easy on the GPU. There are two ways to go about it: using either the (programmable) hardware rendering pipeline or a compute shader.

The first thing is that you need to end up with your 4bpp image on the GPU, either as a texture upload or by rendering something into it as a framebuffer. The first option is probably preferable. Since there is no 4bpp texture format, you'll have to abuse an 8bpp integer format like R8UI for the texture.
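For example, assuming C-style OpenGL bindings (bindbc-opengl or similar) with a context already current, and a ubyte[] of palette indices, one per pixel:

    // assumes: import bindbc.opengl; bindings loaded, context current
    GLuint uploadIndexTexture(const(ubyte)[] indices, int width, int height) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        // Integer textures must not be filtered; NEAREST is mandatory.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, width, height, 0,
                     GL_RED_INTEGER, GL_UNSIGNED_BYTE, indices.ptr);
        return tex;
    }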

The current palette goes into a separate texture or buffer object. A uniform buffer object (OpenGL terminology) is probably the simplest option.
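A sketch of the UBO route under the same binding assumptions, uploading a 256-entry RGBA palette to binding point 0:

    // assumes: import bindbc.opengl; palette entries normalized to [0, 1]
    GLuint uploadPalette(const float[4][256] palette) {
        GLuint ubo;
        glGenBuffers(1, &ubo);
        glBindBuffer(GL_UNIFORM_BUFFER, ubo);
        glBufferData(GL_UNIFORM_BUFFER, palette.sizeof, palette.ptr, GL_DYNAMIC_DRAW);
        glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo); // matches "binding = 0" in the shaders
        return ubo;
    }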

Then you dispatch either a fragment shader or a compute shader so that it runs once for every pixel in the input. Each invocation performs the palette lookup and writes the palette color to its output.

If you go with a fragment shader, you need to set up a projection that is just right, write a dummy vertex shader, and set up a framebuffer to render to. And you need a vertex buffer with one or two triangles in it that cover the output so that the graphics pipeline has something to render. In short, it's the classic creative abuse of the rendering pipeline to obtain a full-screen effect.
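The lookup shader itself is tiny. Here is a GLSL sketch, kept in a D string so it can be handed to glShaderSource; the indexTex and Palette names are just placeholders:

    enum paletteFrag = `
    #version 430 core
    layout(std140, binding = 0) uniform Palette { vec4 colors[256]; };
    uniform usampler2D indexTex; // the R8UI index texture
    out vec4 fragColor;

    void main() {
        // One fragment per output pixel: fetch the index, then look it up.
        uint idx = texelFetch(indexTex, ivec2(gl_FragCoord.xy), 0).r;
        fragColor = colors[idx];
    }
    `;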

The slightly more modern version is a compute shader that gets invoked per pixel and does the same thing. The nice thing is that you get to skip all the vertex buffer, vertex shader, and projection stuff. And with DirectX or Vulkan, you even get to bind the current swapchain image to the compute shader, so your shader output goes to the screen as directly as possible. OpenGL requires you to allocate an output texture that you copy to the screen separately (well, it's a bronze-age API...).
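A compute shader sketch under the same assumptions, one invocation per pixel:

    enum paletteCompute = `
    #version 430
    layout(local_size_x = 16, local_size_y = 16) in;
    layout(r8ui, binding = 0) uniform readonly uimage2D indexImg; // input indices
    layout(rgba8, binding = 1) uniform writeonly image2D outImg;  // output colors
    layout(std140, binding = 0) uniform Palette { vec4 colors[256]; };

    void main() {
        ivec2 p = ivec2(gl_GlobalInvocationID.xy);
        if (any(greaterThanEqual(p, imageSize(outImg))))
            return; // guard against partial workgroups at the edges
        uint idx = imageLoad(indexImg, p).r;
        imageStore(outImg, p, colors[idx]);
    }
    `;

Dispatch it with glDispatchCompute((width + 15) / 16, (height + 15) / 16, 1) and the output image fills with the palettized result.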

It's really quite simple if you understand GPUs. There's a lot of terminology to throw around, but once you get a handle on the general ideas behind GPU rendering, stuff like this comes easily.
