Yet more colour space confusion

The issue has been addressed and will be fixed in the first maintenance release.


Thanks for informing us.


I saw that too. And that looks really good!
Now what about editing in 10 bits per channel (30-bit)?

I’m attempting to get DxO to answer all these sorts of questions here (link).

John M

DxO Wide Gamut looks very close to Rec. 2020. I wonder if it is the same?

From what I can see, it’s similar to ROMM-RGB, the same one used in Affinity Photo.

And why not integrate and use the ROMM-RGB standard?

Chances are the graph with the DxO wide gamut was provided to some press and media partners ahead of time, maybe as part of an embargoed press release?

DxO’s Wide Gamut does indeed seem very close to Rec. 2020, with very minor differences when comparing the graphs, but I don’t know whether it uses different values on purpose or the graph is just imprecise.

ROMM-RGB (used in Affinity Photo) is the same as ProPhoto RGB, and it doesn’t look like DxO is using that:

So my questions are:

  • What are the advantages of a wide gamut color space like Rec. 2020 over Adobe RGB?
  • Why use Rec. 2020 (made for the TV industry) or something very similar, and not the even larger ProPhoto RGB (made for the Photo industry)?
  • If DxO designed its own color space that is close but not identical to Rec. 2020, why not use that standard instead of a custom color space? Or, reversing that question: are there practical advantages in using standard color spaces like Rec. 2020 or ProPhoto RGB for PhotoLab’s internal calculations and exporting?

To put it simply, DxO probably summed up all the camera-sensor knowledge it has acquired over the years through its well-known profiles into this new Wide Gamut colour space.
In a nutshell: all sensor information from all the cameras DxO ever tested fits inside this colour space, whereas using a different colour space would already “truncate” some colour information when opening a raw file.

Just my opinion; I might be wrong.


Disclaimer: I’m not an engineer working with light. I’ve watched lectures on digital photography, such as this one, but all the math goes over my head; I just understand the analogies and the graphs. ^^

I’m not sure it’s that related to sensors. Sensor pixels measure an electrical signal produced by accumulating photons, and use a color filter array (with the most common patterns being a Bayer pattern with 1 “blue” filter, 1 “red” filter and 2 “green” filters). Then you have to analyze the value for a single pattern and try to figure out what its “blue” or “red” or “green” filtering means exactly, do some averages to produce a RGB color for each pixel, do some edge detection to avoid color artifacts from averages that use color data from a neighboring but different surface, etc.
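The averaging step described above can be sketched in a few lines. This is a toy bilinear interpolation on a hypothetical RGGB mosaic with made-up raw counts; real demosaicers add the edge detection mentioned above, plus much more.

```python
# Toy bilinear demosaicing on an RGGB Bayer mosaic: each photosite
# records only one channel, and the missing channels are interpolated
# from neighbours. Real demosaicers add edge detection to avoid colour
# artifacts along surface boundaries; this sketch skips that.
def green_at(mosaic, y, x):
    """Estimate green at a red or blue photosite by averaging the
    green neighbours directly above, below, left and right."""
    h, w = len(mosaic), len(mosaic[0])
    coords = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    vals = [mosaic[j][i] for j, i in coords if 0 <= j < h and 0 <= i < w]
    return sum(vals) / len(vals)

# 4x4 RGGB mosaic of raw counts: even rows R G R G, odd rows G B G B
mosaic = [
    [200, 100, 210, 110],
    [ 90,  50,  95,  55],
    [205, 105, 215, 115],
    [ 92,  52,  97,  57],
]

# Green at the red photosite (0, 0) = average of its two in-bounds
# green neighbours, (0, 1) and (1, 0)
print(green_at(mosaic, 0, 0))  # (100 + 90) / 2 = 95.0
```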

I’m not sure how all that maps into color spaces, but I think you kinda have to decide on a target color space and map all the raw sensor data onto that color space; and every raw interpreting and demosaicing software — whether an external one like DxO PhotoLab or PureRaw, or the camera’s firmware — does whatever it wants here.

The color mapping algorithm used is probably different for each camera, and software publishers like DxO, Adobe and Phase One probably look at what the in-camera firmware does and compare the output of their own algorithm to the camera firmware’s JPEGs to fine-tune their color mapping.

Ultimately I reckon that deciding to map to sRGB, to Adobe RGB, to ProPhoto RGB or something else is an arbitrary decision, and the target color space is picked not because of the input sensor data but because of other advantages, such as:

  • ability to represent more colors than sRGB can (which is only useful if your screen or print output is going to be able to render at least some of those colors);
  • when producing a JPEG or TIFF, if you’re going to need to map the image’s colors to a specific color space, it’s better to work in a large color space and map down to a smaller one like Adobe RGB or sRGB, than to work in a small space like sRGB and have to map that data to a wider space (kinda like resizing an image down gives better results than resizing an image up);
  • working in a larger color space might make some color manipulations a bit more accurate? (not sure about that).

Personally, I wonder if working with the larger color space is going to help for images with saturated colored highlights, like sunsets (especially close to the sun) and stage lights. Working with sRGB usually means that those colors get squished near the edges of the sRGB triangle, so you lose a lot of nuance (and if you don’t squish them near the edges they get muddy). Adobe RGB is a bit wider but I guess not wide enough if DxO, Affinity Photo and others are going for wider gamuts.

1 Like

I take it the paper and ink simulation is not yet implemented, but I would like to know if this is likely to affect how I get rid of OOG colours using soft proofing, as in this picture.

Before (perfectly in gamut for the screen proof)…

Attempt to remove all OOG warnings for the gamut destination for my favourite paper and ink profile…

As you can see, even here, with this horrible, insipid rendering, there are still some warnings.

Or won’t this work until DxO issues the “coming soon” feature?

1 Like

The only reason I can think of for a wider color space is the eventual presence of an output device with a wider color space.


Yes, that’s my understanding … I’ve seen this explained as follows:

If you have a sRGB monitor:

  • In PL5, colors outside AdobeRGB were modified so that they could fit in the AdobeRGB color space. Then, when converting them to sRGB, it was converting a modified version of the original color.

  • In PL6, colors outside AdobeRGB will be kept because it now uses the DxOWideGamut. Then, when converting them to sRGB, it converts the real original color (not the modified one).
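That two-pipeline difference can be sketched numerically. This is not DxO’s actual pipeline: the matrices are the standard published D65 RGB-to-XYZ primaries, a Rec. 2020-like space stands in for DxO Wide Gamut, the sample colour is made up, and a hard clip stands in for whatever gamut mapping PL5 actually used.

```python
import numpy as np

# Standard D65 RGB-to-XYZ primary matrices (published values, 4 decimals)
REC2020_TO_XYZ = np.array([[0.6370, 0.1446, 0.1689],
                           [0.2627, 0.6780, 0.0593],
                           [0.0000, 0.0281, 1.0610]])
ADOBERGB_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                            [0.2974, 0.6273, 0.0753],
                            [0.0270, 0.0707, 0.9911]])
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

def convert(rgb, src, dst):
    """Linear conversion from one RGB space to another, via XYZ."""
    return np.linalg.inv(dst) @ src @ rgb

wide = np.array([0.9, 0.1, 0.05])  # saturated red in the wide space

# "PL5-style": squeeze into AdobeRGB first (hard clip), then to sRGB
in_adobe = convert(wide, REC2020_TO_XYZ, ADOBERGB_TO_XYZ)
pl5 = convert(np.clip(in_adobe, 0.0, 1.0), ADOBERGB_TO_XYZ, SRGB_TO_XYZ)

# "PL6-style": convert the original wide-gamut colour to sRGB directly
pl6 = convert(wide, REC2020_TO_XYZ, SRGB_TO_XYZ)

print(in_adobe)    # red channel exceeds 1.0, so the PL5 path clips it
print(pl5, pl6)    # the two pipelines yield different sRGB values
```

The point is only that once the intermediate clip has happened, the sRGB conversion can no longer recover the original colour.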

That was my initial thinking too, George - but the info above encourages me to give it a go.

More discussion on all these WCS questions here: Questions for DxO - regarding the new Working Color Space

John M

1 Like

I have the same question. However, I don’t think the OOG tones should be fixed by desaturation, because it gives the image a very insipid rendering, as you said.
How should these OOG tones be addressed?

1 Like

That’s a very good question that we can only hope DxO will answer. If not, the whole soft proofing for printing is a waste of time.

Hello Joanna,

For me, the “Simulate paper and ink” feature is NOT clickable; it won’t enable. Any idea? (Win11)

Welcome to the forum, @ChEV

Paper simulation is not implemented yet and is promised to arrive later, with an update of DPL6.
This is listed in the release notes too.

Yes, thank you, I’ve just seen that in the document you cited.
Best regards!

Before DPL6, we simply did not know whether we had OOG colours or not. Today, with soft proofing, we know that what we see on screen is not what’s in the file. We can still print whatever we like, and get slightly different hues depending on which working colour space we use.

We never see the colours as they are recorded in the RAW file. If we’ve been happy with that, we can still be happy now, but if we use a different WCS, we have to adapt. It’s like using a different RAW developer. We (simply) have to re-establish our way from the initial image to the print. :man_shrugging: