January 05, 2015
On Friday, 2 January 2015 at 08:46:25 UTC, Manu via Digitalmars-d wrote:
> Not universal enough? Colours are not exactly niche. Loads of system
> api's, image readers/writers, icons, they all use pixel buffers.
> A full-blown image library will require a lot more design work, sure,
> but I can see room for that in Phobos too.

I feel Phobos needs to be broken up. There is too much esoteric stuff in there and too much essential stuff missing. I think some kind of "extra" hierarchy is needed for more application-specific functionality.

But I agree that colour theory is solid enough to be considered stable and that it would be a great benefit to have a single library used across multiple projects. It is also very suitable for templated types.

A standard image library would have to be templated, with optional compiler-specific optimizations (SIMD) for the most common combinations. There are too many representations used in different types of image processing to find common ground (unless you limit yourself and just select PNG as your design base).
January 05, 2015
On Monday, 5 January 2015 at 15:57:32 UTC, Ola Fosheim Grøstad wrote:
> But I agree that colour theory is solid enough to be considered stable and that it would be a great benefit to have a single library used across multiple projects. It is also very suitable for templated types.

Yeah, in my misc repo, there used to be stand-alone image.d and simpledisplay.d. Now, they both depend on color.d. Even just a basic definition we can use elsewhere is nice to have, so other libs can interop on that level without annoying casts or pointless conversions just to please the type system when the contents are identical.

I went with struct Color { ubyte r, g, b, a; } ... not perfect, probably not good enough for something like Photoshop, and sometimes the bytes need to be shuffled for different formats, but eh, it works for me.
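
As a rough sketch (the shuffle helper is purely illustrative and not from the real color.d):

    // The basic definition: four bytes in RGBA order.
    struct Color
    {
        ubyte r, g, b, a;

        // Illustrative helper: repack into BGRA byte order, the kind of
        // shuffle some APIs expect.
        ubyte[4] toBgra() const
        {
            ubyte[4] result = [b, g, r, a];
            return result;
        }
    }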
January 05, 2015
On Thursday, 1 January 2015 at 06:38:41 UTC, Manu via Digitalmars-d wrote:
> I've been working on a pretty comprehensive module for dealing with
> colours in various formats and colour spaces and conversions between
> all of these.
> It seems like a hot area for duplicated effort, since anything that
> deals with multimedia will need this, and I haven't seen a really
> comprehensive implementation.
>

Indeed, I'll stop you right there: I did one as well in the past, but it was definitely not high enough quality to be interesting to third parties.

> Does it seem like something we should see added to phobos?
>

Yes.
January 05, 2015
On Monday, 5 January 2015 at 16:08:27 UTC, Adam D. Ruppe wrote:
> Yeah, in my misc repo, there used to be stand-alone image.d and simpledisplay.d. Now, they both depend on color.d. Even just a basic definition we can use elsewhere is nice to have, so other libs can interop on that level without annoying casts or pointless conversions just to please the type system when the contents are identical.

Yes, that too. I was thinking more about the ability to create an adapter that extracts colour information from an existing data structure and adds context information such as gamma. Then it would let you build a function that, say, reads floats from 3 LAB pointers and finally returns a tuple with a 16-bit RGB pixel with gamma correction and the residue in a specified format suitable for dithering... ;-]

It is quite a common error to do computations on colours that ignore gamma (or get it wrong), which results in less accurate imaging. E.g. when dithering, you need to make sure that the residue left over from bit truncation is added to the neighbouring pixels as a "linear addition" (without gamma). Making stuff like that less tedious would make it a very useful library.
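
To make that concrete, here is a small sketch (names made up, and a plain 2.2 power law standing in for the real transfer curve): quantise in gamma space, but measure the residue in linear light, because the linear residue is what should be diffused to the neighbouring pixels.

    import std.math : pow;

    // Plain 2.2 power law standing in for the exact transfer curve.
    double toLinear(double g) { return pow(g, 2.2); }

    // Truncate one gamma-encoded channel to `bits` bits and report the
    // residue in linear light.
    void truncateChannel(double gammaValue, uint bits,
                         out double truncated, out double linearResidue)
    {
        immutable levels = (1 << bits) - 1;
        truncated = cast(double)cast(int)(gammaValue * levels) / levels;
        linearResidue = toLinear(gammaValue) - toLinear(truncated);
    }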
January 05, 2015
On 6 January 2015 at 04:11, via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Monday, 5 January 2015 at 16:08:27 UTC, Adam D. Ruppe wrote:
>>
>> Yeah, in my misc repo, there used to be stand-alone image.d and simpledisplay.d. Now, they both depend on color.d. Even just a basic definition we can use elsewhere is nice to have, so other libs can interop on that level without annoying casts or pointless conversions just to please the type system when the contents are identical.
>
>
> Yes, that too. I was thinking more about the ability to create an adapter that extracts colour information from an existing data structure and adds context information such as gamma. Then it would let you build a function that, say, reads floats from 3 LAB pointers and finally returns a tuple with a 16-bit RGB pixel with gamma correction and the residue in a specified format suitable for dithering... ;-]
>
> It is quite a common error to do computations on colours that ignore gamma (or get it wrong), which results in less accurate imaging. E.g. when dithering, you need to make sure that the residue left over from bit truncation is added to the neighbouring pixels as a "linear addition" (without gamma). Making stuff like that less tedious would make it a very useful library.

I have thought about how to handle the residue from lossy encoding, but
I haven't thought of an API I like for that yet.
Dithering operates on neighbourhoods of pixels, so in some ways I feel
it is beyond the scope of colour.d, but the residue is an important
detail for enabling dithering, and it should probably be exposed while
encoding.

Currently, I have a colour template which can be arbitrarily typed, with components defined in some user-specified order. It binds the colourspace to the colour type. 'CTo to(CTo, CFrom)(CFrom colour)' is defined and performs arbitrary conversions between colours.
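
Not the actual code (that's what the PR is for), but the rough shape is something like this (names are illustrative only):

    enum ColourSpace { sRGB, linear }

    // Generates "T r; T g; ..." for a string of component names.
    string declareComponents(string components)
    {
        string code;
        foreach (c; components)
            code ~= "T " ~ c ~ "; ";
        return code;
    }

    // A colour with user-specified component type, component order and
    // colour space, e.g. Colour!(ubyte, "rgba", ColourSpace.sRGB).
    struct Colour(T, string components, ColourSpace space = ColourSpace.sRGB)
    {
        mixin(declareComponents(components));
    }

    // Toy conversion: copies matching component names and casts the
    // component type. The real thing also converts between colour
    // spaces and rescales component ranges.
    CTo to(CTo, CFrom)(CFrom colour)
    {
        CTo result;
        foreach (name; __traits(allMembers, CFrom))
        {
            static if (__traits(hasMember, CTo, name))
            {
                mixin("result." ~ name ~ " = cast(typeof(result." ~ name ~ ")) colour." ~ name ~ ";");
            }
        }
        return result;
    }

A conversion then reads like to!(Colour!(float, "rgba", ColourSpace.linear))(someRgba8).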

I'm finding myself in a constant struggle between speed and maximising
precision. I feel like a lib should maximise precision, but the trouble
then is that it's not actually useful to me...
Very few applications care about colour precision beyond ubyte, so I
feel like using double for much of the processing is overkill :/
I'm not sure what the right balance would look like exactly.
I can make fast paths for common formats; e.g. ubyte conversions
between sRGB/linear can use tables. Performing colourspace conversions
in fixed point (where both sides of the conversion are integer types)
might be possible without significant loss of precision, but it's
tricky... I just pipe everything through double for now, and that's way
overkill.
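
For reference, the "accurate" path is essentially just the sRGB transfer functions evaluated in double (a sketch, not the PR code); the ubyte fast path would simply tabulate srgbToLinear for the 256 possible inputs:

    import std.math : pow;

    // Reference sRGB <-> linear transfer functions in double precision.
    double srgbToLinear(double s)
    {
        return s <= 0.04045 ? s / 12.92
                            : pow((s + 0.055) / 1.055, 2.4);
    }

    double linearToSrgb(double l)
    {
        return l <= 0.0031308 ? l * 12.92
                              : 1.055 * pow(l, 1 / 2.4) - 0.055;
    }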

I'll make a PR tonight some time for criticism.
January 06, 2015
On Monday, 5 January 2015 at 23:39:17 UTC, Manu via Digitalmars-d wrote:
> I'm finding myself at a constant struggle between speed and
> maximizing-precision. I feel like a lib should maximise precision, but
> the trouble then is that it's not actually useful to me...

If you create a "pixel" converter that aims for speed, the programmer might also want it to generate a shader (as a text string) with exactly the same properties. It makes less and less sense to create a performant imaging library that is CPU-only.

I suggest reducing the scope to:

1. Provide generic, accurate conversions and iterators for colours (or, more generally, for arrays of spectral values). Useful for doing batch-like work or initialization.

2. Provide fast colour support for transforms that are simple enough to not warrant GPU processing, but where you accept the cost of building lookup tables before processing. (Build tables using (1).)

> Very few applications care about colour precision beyond ubyte, so I
> feel like using double for much of the processing is overkill :/
> I'm not sure what the right balance would look like exactly.

I think a precise reference implementation using double is a good start. People creating PDFs, SVGs or some other app that does not have real-time requirements probably want that. It is also useful for building LUTs.

One thing to consider is that you also might want to handle colour components that have negative values or values larger than 1.0:

- it is useful in non-realistic rendering as "darklights" ( http://www.glassner.com/wp-content/uploads/2014/04/Darklights.pdf )

- with negative values you can then have a unique representation of a single colour in CIE (the theoretical basis for RGB that was developed in the 1930s).

- it allows the programmer to do his own gamut compression after conversion (a rough sketch follows below)
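
For instance (illustrative only, not a proposed API):

    // Components deliberately left unclamped, so negative or > 1.0
    // values survive the conversion.
    struct LinearRGB
    {
        double r, g, b;
    }

    // One naive gamut compression the programmer might apply afterwards:
    // plain clamping to [0, 1]. Smarter schemes try to preserve hue.
    LinearRGB clampGamut(LinearRGB c)
    {
        import std.algorithm : max, min;
        double clamp01(double x) { return min(max(x, 0.0), 1.0); }
        return LinearRGB(clamp01(c.r), clamp01(c.g), clamp01(c.b));
    }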
January 06, 2015
On Tuesday, 6 January 2015 at 08:52:06 UTC, Ola Fosheim Grøstad wrote:
> - with negative values you can then have a unique representation of a single colour in CIE (the theoretical basis for RGB that was developed in the 1930s).

Actually, what I refer to here is a model of how humans perceive colour:

http://en.wikipedia.org/wiki/CIE_1931_color_space#CIE_standard_observer

January 06, 2015
On Monday, 5 January 2015 at 23:39:17 UTC, Manu via Digitalmars-d wrote:
> On 6 January 2015 at 04:11, via Digitalmars-d
> <digitalmars-d@puremagic.com> wrote:
>> On Monday, 5 January 2015 at 16:08:27 UTC, Adam D. Ruppe wrote:
>>>
>>> Yeah, in my misc repo, there used to be stand-alone image.d and
>>> simpledisplay.d. Now, they both depend on color.d. Even just a basic
>>> definition we can use elsewhere is nice to have so other libs can interop on
>>> that level without annoying casts or pointless conversions just to please
>>> the type system when the contents are identical.
>>
>>
>> Yes, that too. I was more thinking about the ability to create an adapter
>> that extracts colour information from an existing data structure and adds
>> context information such as gamma. Then it would let you build a function that, say,
>> reads floats from 3 LAB pointers and finally returns a tuple with a 16 bit
>> RGB pixel with gamma correction and the residue in a specified format
>> suitable for dithering... ;-]
>>
>> It is quite a common error to do computations on colours that are ignorant of
>> gamma (or do it wrong), which results in less accurate imaging. E.g. when
>> dithering you need to make sure that the residue that is left when doing bit
>> truncation is added to the neighbouring pixels in a "linear addition"
>> (without gamma). Making stuff like that less tedious would make it a very
>> useful library.
>
> I have thought about how to handle residue from lossy-encoding, but I
> haven't thought of an API I like for that yet.
> Dithering operates on neighbourhoods of pixels, so in some ways I feel
> it is beyond the scope of colour.d, but residue is an important detail
> to enable dithering that should probably be expressed while encoding.
>
> Currently, I have a colour template which can be arbitrarily typed and
> components defined in some user-specified order. It binds the
> colourspace to colours. 'CTo to(CTo, CFrom)(CFrom colour)' is defined
> and performs arbitrary conversions between colours.
>
> I'm finding myself at a constant struggle between speed and
> maximizing-precision. I feel like a lib should maximise precision, but
> the trouble then is that it's not actually useful to me...
> Very few applications care about colour precision beyond ubyte, so I
> feel like using double for much of the processing is overkill :/
> I'm not sure what the right balance would look like exactly.
> I can make fast-paths for common formats, like ubyte conversions
> between sRGB/Linear, etc use tables. Performing colourspace
> conversions in fixed point (where both sides of conversion are integer
> types) might be possible without significant loss of precision, but
> it's tricky... I just pipe through double now, and that's way
> overkill.
>
> I'll make a PR tonight some time for criticism.

What's wrong with old-fashioned `Fast` postfixes on entry points where a faster but less precise method is available? Or template arguments like std.algorithm.SwapStrategy?
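
For example (hypothetical names, just to illustrate the template-argument approach):

    enum Quality { accurate, fast }

    // Pick the implementation strategy with a template argument,
    // defaulting to the precise path.
    CTo convert(CTo, CFrom, Quality quality = Quality.accurate)(CFrom colour)
    {
        static if (quality == Quality.fast)
        {
            // table-driven / fixed-point path would go here
        }
        else
        {
            // precise double-precision path would go here
        }
        return CTo.init; // placeholder
    }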
January 06, 2015
On Tuesday, 6 January 2015 at 09:19:46 UTC, John Colvin wrote:
> What's wrong with old-fashioned `Fast` postfixes on entry points where a faster but less precise method is available? Or template arguments like std.algorithm.SwapStrategy?

If this is for Phobos it should follow a common model. But since a fast implementation might involve an object which builds a table before processing...

Another problem is that "precise" is a very loose term if the conversion is for display or for composition in a shader. What you want depends on the kind of post-processing that happens after this and how it affects the nonlinearities of human perception.
January 06, 2015
On 6 January 2015 at 19:31, via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Tuesday, 6 January 2015 at 09:19:46 UTC, John Colvin wrote:
>>
>> What's wrong with old-fashioned `Fast` postfixes on entry points where a faster but less precise method is available? Or template arguments like std.algorithm.SwapStrategy?
>
>
> If this is for phobos it should follow a common model. But since a fast implementation might involve an object which builds a table before processing....
>
> Another problem is that "precise" is a very loose term if the conversion is for display or for composition in a shader. What you want depends on the kind of post processing that happens after this and how it affects the nonlinearities of human perception.

I think an important facet of 'fast' image processing is in the loop
that processes batches of pixels, rather than in the API that
processes a single pixel.
I've gone with accurate; that's the strategy throughout Phobos.

I can't create look-up tables at the moment, since '^^' doesn't work in CTFE! >_<
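
One possible stop-gap, assuming runtime initialisation of the tables is acceptable: fill them in a module constructor at program start-up instead of via CTFE, e.g.

    import std.math : pow;

    // Table for the pow(x, 2.4) part of the sRGB decode, built at
    // start-up rather than at compile time.
    __gshared double[256] pow24;

    shared static this()
    {
        foreach (i; 0 .. 256)
            pow24[i] = pow(i / 255.0, 2.4);
    }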

Here's a PR to see where I'm at: https://github.com/D-Programming-Language/phobos/pull/2845