Explain the DNG color differences between dxo5 (legacy) and dxo6 (wide gamut)


I don’t understand your answer. If the current working colorspace is wide gamut, any single computation made while editing will use the extended wide gamut colorspace. This will possibly produce colors that are outside of the camera colorspace. When you export, including as DNG (all corrections except color rendering), the DNG will therefore contain such colors.

Now, when you try to view the image, that image must be “realized” so that all pixels match a color that is inside the colorspace of the viewing device. Whatever viewing software you are using, a conversion will be made between the wide gamut color space (which is probably not recognized by software other than DPL 6) and the colorspace of the viewing device. If the device is “wide gamut”, it will be more likely to correctly display those colors coming from the wide gamut DNG than if it’s not.

In other words, if you have decided to work with an extended gamut, you have to use software, printers and viewing devices that can work with it, or at least a working colorspace as near as possible to the initial extended colorspace. This compatibility must be maintained as long as possible in your workflow. But since there are currently only rare devices able to fully support an extended colorspace like DxO’s wide gamut or ProPhoto, at some moment in your workflow significant color differences will appear. And soft proofing has always been there to check this. It’s new in DPL 6, but other applications have had this feature for a long time. Releasing software that supports a wide gamut colorspace without a soft proofing feature would be nonsense.
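As a rough sketch of what soft proofing boils down to: convert each pixel into the target device space and flag whatever falls outside it. The conversion matrix below is entirely hypothetical (not a real profile, and certainly not DxO’s algorithm), just to show how a saturated color in a wide space lands outside a smaller one:

```python
def convert(pixel, matrix):
    """Apply a 3x3 color conversion matrix to a linear RGB triple."""
    r, g, b = pixel
    return tuple(m[0] * r + m[1] * g + m[2] * b for m in matrix)

def out_of_gamut(pixel):
    """A channel outside [0, 1] cannot be represented by the target device."""
    return any(c < 0.0 or c > 1.0 for c in pixel)

# Hypothetical wide-to-device conversion: saturated source colors map to
# negative or >1 channel values in the smaller destination space.
WIDE_TO_DEVICE = [
    [ 1.4, -0.2, -0.2],
    [-0.1,  1.2, -0.1],
    [-0.1, -0.1,  1.2],
]

saturated = (1.0, 0.0, 0.1)   # a very saturated red in the wide space
converted = convert(saturated, WIDE_TO_DEVICE)
print(converted, out_of_gamut(converted))   # first channel ends up > 1.0
```

Soft proofing then shows you (approximately) what those out-of-range channels will look like after the rendering intent clips or compresses them.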

If the working space is “legacy” (which is closer to sRGB than to ProPhoto or to DxO “wide gamut”), the color differences are more likely to be so small that you don’t notice them.

A similar problem arises when working in the Prophoto colorspace (in Photoshop or Lightroom for example) and trying to print to a personal printer that usually can’t do much more than sRGB.

So yes, I insist: mentioning what type of display the testing was done on is important.

Alright, I’ll allow that maybe this exists. This might just be my ignorance here.

Would love to see some technical information on what the format of demosaiced RAW data is, if not RGB+ICC profile like I expected.

No. Because the data written to the DNG is different while it should be the same. Period.

The DNG data is still in camera space, so NOT in the working space yet. The working space is a matter for whatever software processes the file next; DxO PL6 should have no say in that.

DxO’s legacy gamut is believed to be somewhat like Adobe RGB, from reading older forum posts.
The data in the DNG files that was always written was NOT in Adobe RGB.

So, what I have been saying is that whatever you select in the wide gamut option should have NO effect on the DNG file written, and they even said as much.

The working space is something for the final processing software. So when writing DNG files it should have no effect.
If set to legacy, the DNG is written before the working space is applied (or it is converted back to …).
This is not happening with wide gamut, which is simply a difference that should not be there.

As an example, when you load the DNG back into DxO itself, it says wide gamut can’t be used for these files. Which tells you that the wide gamut information should not have been written to it.

All I’m asking for is that the ‘gamut option’ has no effect when writing DNG files, no matter the export type (optical only or full). Because right now the files get written in a way that no software can use them.

To others:
I know it gets weird when you make color changes in PL6, then export as DNG, and then have certain expectations of how it will look in other software. The simple answer is ‘you cannot’, because the ‘other’ software plays a large part in how it looks.

So, forget about that. Limit your thinking to the preprocessor workflow, the one DxO has embraced with PureRaw. I want to load the file in DxO, do NOTHING with it, and export it as DNG. And then have that DNG look similar (enough) in other software to how the original raw file would look in that same software.

DxO has made sure this worked before; their new wide gamut option destroys this.

If you select wide gamut as the working color space, the image data are necessarily modified (translated) to adapt to a colorspace that in all cases is wider than the camera’s colorspace. As explained above, any computation made before the export will be made in the wide gamut color space. This will generate differences that are more visible than if you had selected the legacy color space.

You seem to consider that the DNG resulting from the export should be similar to the RAW file generated by the camera. It’s not. If you want the image data in the DNG to be the same as the data produced by the camera, use Adobe DNG Converter.

Surely, a linear dng is demosaiced so it is not “sensor” data by definition?

Sensor data is a number value obtained under a filter. The filter is not perfect, so the number doesn’t represent, say, the red level but a mixture of RGB, the amount depending on the accuracy of the filter. The difference between raw converters is in how they interpret this “red” number value for a particular camera model, taking into account the optical characteristics of the filters used, and then interpolating multiple pixels to finally arrive at a “colour”. Therefore, the linear DNG contains RGB data derived from DxO, Adobe etc. The DNG files will not be the same, as each manufacturer will interpret the filter data differently as well as having different colour interpretation algorithms…

I think DxO used to operate in the Adobe RGB space when doing raw conversions; this meant that colours outside of Adobe RGB needed conversion to that space. With the new wide gamut working space this doesn’t need to happen. This means that the linear DNGs will be different from the “legacy” space?

Any data then has to be displayed/printed, and this requires a “conversion” of the data to match the monitor/printer/paper - relative colorimetric or perceptual, for example. DxO appears to use its own, or adds an algorithm that adjusts colours outside of the device gamut, called “Protect Saturated Colours”. This will also result in differences from a system using straight perceptual, relative colorimetric etc.

Personally, from a practical point of view, if I have generated a linear DNG it is to use it in another editing program where I will set the tone curve (which impacts tones, contrast and colours in general), without the restriction of using a TIFF with a fixed white point. So in practical terms, for me a slight change in the linear DNG starting point is totally irrelevant, as I won’t have bothered to set colours, tones etc.

We are talking about color spaces.
Yes, a linear DNG can be - and often is / must be - in the same color space as the raw data.

No, it’s often not raw data coming from the sensor. It’s demosaiced. But that’s completely off topic to the issue.

(Sigma Foveon sensors produce R,G,B data per pixel. There is no demosaicing there. Just to give an example. Also Canon sRAW is RGB data per pixel (the camera already did the demosaicing).
But, as said, this is beside the point).

Just bring it down to this: how are we supposed to use the DNG files written in ‘all corrections’ mode and with the wide gamut option selected, as is the default?
So far, no program - including DxO PL6 itself - can produce OK results with them.

If the answer from DxO is “you are not supposed to create files like that”, then please do not let us create such files. Use the same behaviour as legacy mode / PL5: leave the color space alone when writing DNG files, in both DNG export modes.

Or… tell us how we are supposed to be using them.
PL6 won’t render correct colors, and no other program I tried will. And the embedded matrix / profile is exactly the same as in the legacy behaviour, which can’t be right, since the data has clearly changed.

To be more accurate:
Even if initially loading the image data into a colorspace that is wider than the camera colorspace is not supposed to modify the original data, any computation made while “wide gamut” is enabled will possibly generate colors that do not belong to the initial colorspace. These colors will be taken into account when exporting as DNG.

Sigh… No, I do not consider the data to be the same.
I consider it to be in a color space that can be used by other software reading the DNG.

Which means the embedded data in the DNG must be correct, or similar enough to the color space of the input raw file, or… another solution.

At the moment it’s just unusable… primarily because it’s unknown what is inside of it.

Btw, I do not think DxO wide gamut is dynamic or anything. It’s ‘just’ a very wide, specially created linear profile that can contain all the colors known to be produced by all cameras in the DxO database. That’s my guess, but it’s a guess.

The fact there is / was a bug in the Mac version, where the final output JPG profile conversion wasn’t done, so people ended up with a JPG file that was in a ‘DxO wide gamut’ profile, seems to suggest the profile is static, and perfectly representable in an ICC file. But this is all just speculation on my part, and completely beside the point.

Bring it back to the simple question: how are DNG files written with the DxO wide gamut option enabled to be used, in DxO or another tool? Because at the moment it all looks wrong.

I think your point here appears to be incorrect, because no two raw converters, using their own bespoke demosaicing algorithms, are going to produce the same number for the colour of a pixel. What DxO thinks of as “Red” is not the same as Adobe’s, due to the fact that it is not pure red light that goes through the filter, and then surrounding pixels are analysed to finally come up with what that particular raw converter thinks is the “correct” colour.

A sensor does not have an inherent colour space, because it only has numeric data about the amount of light each pixel has received. To process the data you have to convert it into RGB values within the colour space that you want to use. If the space is wide enough (e.g. ProPhoto, which contains imaginary colours) then no (relative/perceptual) conversion is needed. If you choose to use a smaller colour space then some kind of conversion would be required. The algorithms of the raw converter, plus any colour conversion needed, will change the colours you obtain in the DNG. If all raw converters used the same demosaicing routines then you would obtain identical results; for example, if companies used the same open source algorithms, rather than their own custom-designed ones, I would expect they could give similar results.

That’s why I was talking about ‘a’ piece of software, comparing the original RAW vs the DxO DNG output with no changes made. Not comparing software to software. Comparing the same software.

When DxO has written DNG files, they were always exactly the same - color wise - when opened in other software. I already mentioned this. Open Lightroom, take the original raw file and reset everything to defaults. Then take a DxO exported DNG (PL5 / PL6-legacy / PL6-optical-only), open that in Lightroom, and reset everything to defaults.

They are like for like.

Because the DNG that DxO writes has no camera transform applied to it, when Lightroom opens it and does the same things it does when opening the original RAW file, the resulting colors are the same.

This is valid for Lightroom / Adobe ACR, ON1 Photoraw, Capture1, Darktable, Rawtherapee, FastRawViewer / RawDigger, Affinity Photo… and probably a lot more.

This is not valid anymore when the wide gamut option is set, because then the data is modified… in a way that looks wrong… and with no information on how it is supposed to be opened in other software… which defeats the whole point of writing a DNG file.

Since DxO PL6 itself also displays the colors of that written DNG file wrong, I expect it to be a bug.
If it is not a bug but a limitation by design… then make the workflow more usable by not doing anything with the legacy / wide-gamut toggle when writing DNG files, because that way you always get usable files out of it.

In layman’s terms, here are the steps done with a normal RAW file to produce something of an image on screen. Steps 1, 2 and 3 can be in a different order depending on the raw software. Everyone has a different take on it, which is fine.

  1. Correct CA if needed, because it helps demosaicing algorithms
  2. Apply whitebalance (R/G/B multipliers) (this often also helps demosaicing algorithms)
  3. Demosaic data, if needed
  4. Apply camera matrix and convert to working profile (Linear prophoto? Linear rec2020? Dxo Wide Gamut?)
  5. Make any modifications you want
  6. Convert working profile to output profile (sRGB?)
  7. Output
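The steps above can be sketched for a single pixel. This is a toy illustration only: the white-balance multipliers and camera matrix below are made up, and real pipelines work on whole images in higher precision.

```python
WB_MULTIPLIERS = (2.0, 1.0, 1.5)   # step 2: per-channel R/G/B gains (made up)
CAMERA_MATRIX = [                  # step 4: camera -> working space (made up)
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.1, 0.9],
]

def apply_wb(pixel):
    """Step 2: multiply each channel by its white-balance gain."""
    return tuple(c * m for c, m in zip(pixel, WB_MULTIPLIERS))

def apply_matrix(pixel, matrix):
    """Step 4: 3x3 matrix takes camera RGB into the working profile."""
    return tuple(sum(m * c for m, c in zip(row, pixel)) for row in matrix)

def to_output(pixel, gamma=2.2):
    """Step 6: working -> output profile (here just clamp + a gamma curve)."""
    return tuple(max(0.0, min(1.0, c)) ** (1 / gamma) for c in pixel)

demosaiced = (0.25, 0.40, 0.30)    # a pixel after steps 1-3
working = apply_matrix(apply_wb(demosaiced), CAMERA_MATRIX)
# ...step 5 (your edits) would happen here, in the working space...
print(to_output(working))
```

A DNG written “between step 3 and 4” corresponds to the `demosaiced` values, before any of these transforms have touched the numbers.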

The DNG files DxO has always written were between step 3 and 4. When the wide-gamut option is picked, the export appears to happen after step 4. Which makes the files unusable without other software knowing how to interpret the DxO-wide-gamut profile.

Again, I see it in two ways. Either the DNG export with the wide-gamut option saves wrong data, and it’s a bug; or it’s just not ‘something you are supposed to do in DxO’, but then the option should be ignored completely when writing DNG files… not write unusable DNG files.

A third option - write the correct embedded data in the DNG so 3rd party software can interpret the data correctly - is also a viable option, but maybe not something DxO would want to pursue… but at the moment the embedded matrix and/or profile in the DNG is incorrect with the data inside of it.

In the other thread, people were asking why ‘all corrections’ suddenly had the text ‘except color rendering’ behind it in PL6.

DxO responded with ‘nothing has technically changed, we just clarified what was happening: We apply everything except the camera profile transform’. Which is exactly in line with what I explained always happened in PL5 and in PL6-legacy mode (and PL6-optical-only mode).

But with the wide option, something changes the pixel values, even if no edit is done to it in DxO. What it is, we can only speculate… I’m guessing that there is no camera transform done, but the whole pixel data is still converted to their own DxO-wide-gamut colorspace. But without the camera transform done, it produces odd results.

Again, we can talk all we want about what is possible, what is happening or not, and how I think it should be while others think not… I want to bring it back to a simple question for anyone (but I hope DxO themselves) to answer: how are we supposed to use the DNG files produced with the wide-gamut option selected? If the answer is “you are not”, then I say: please disable it then. If the answer is “it should work fine”, then I say: clearly not, the colors are mangled. Either some color conversion has been applied, or some metadata is missing / wrong in the DNG.


I think that this is where we differ. My understanding is that a linear DNG contains demosaiced data. Therefore it cannot be the raw sensor data. I found this interesting diagram here

This is a real sensor, the Nikon D700, and you can see that its color response cannot be expressed in any standard color space. That means that if you are demosaicing, you have to decide what you’re going to do with the out-of-gamut colors. PL5 put them in (I assume) Adobe RGB, and PL6 can now put them in a wide gamut similar to CIE XYZ. What is contained in a linear DNG is RGB, and it has to be in a color space similar to the triangle-shaped color spaces we see in the chart.

If most software uses a default internal working space of Adobe RGB , then we might expect the linear DNG RGB output to be similar.

But so far we have different processing pipelines:
camera raw ==> DxO wide ==> some RGB color space in DNG ==> dcraw output.
camera raw ==> DxO legacy (aRGB) ==> dcraw output.
camera raw ==> dcraw undemosaiced output

I don’t really know what dcraw does to a linear DNG file when you select the raw colorspace with -o 0 (i.e. raw colour, no output profile conversion), but when you do that to a camera raw file you should get the undemosaiced data and not RGB.


A raw file does not have a colour space. It’s demosaiced into a color space.

? No.

You have values from your sensor. Some of those values are for red, some for green, some for blue (X-Trans is different of course, full-RGB sensors are different of course).
So you end up with a pixel knowing ONLY the red, green or blue value. Not all three.

‘Filling in’ the missing data is demosaicing. And it has nothing to do with what the numbers actually mean or what colour space they are in.
The simplest demosaicing algorithm - with the least artifacts :wink: - is to simply render the data at half size, so you don’t have to ‘fill in’ or ‘invent’ data. But it also gives an image at half the height and width of what you are used to.
Anyway, this works perfectly fine without having to know anything about what the sensor data really means. You don’t even have to know the black and white point of the camera model.
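That half-size approach can be sketched in a few lines. This toy version assumes an RGGB layout (actual layouts vary per camera): each 2x2 tile of the mosaic becomes one RGB pixel, with the two greens averaged, so no data is invented.

```python
def halfsize_demosaic(mosaic):
    """mosaic: 2D list of raw values in an RGGB pattern.
    Returns a half-size 2D list of (R, G, B) pixels."""
    out = []
    for y in range(0, len(mosaic) - 1, 2):
        row = []
        for x in range(0, len(mosaic[y]) - 1, 2):
            r = mosaic[y][x]                                # top-left: red
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2   # the two greens
            b = mosaic[y + 1][x + 1]                        # bottom-right: blue
            row.append((r, g, b))
        out.append(row)
    return out

tile = [
    [100, 200],
    [220, 50],
]
print(halfsize_demosaic(tile))   # [[(100, 210.0, 50)]]
```

Note that nothing here needs to know what the numbers mean colorimetrically; it is pure rearranging.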

Of course the sensor data has a colour space. They are numbers representing colors, so it has a colour space. It’s different for every camera model, maybe even from camera to camera, and it’s of course not one of the universally standard colour spaces, but of course it has a colour space!

Look at the steps I posted above (or for fun, try reading Snigbo’s articles about processing a NEF raw file by hand in ImageMagick; you’ll grasp all the steps needed).

The simplest form is to apply a camera matrix to align the white point of the sensor data, and then just ‘assume’ it is a certain known standard colour space (like linear Rec.2020 or something).

But if a camera is calibrated, there is a known input colour space and a profile to assume, and then you can ‘convert’ to the working space.

All this has nothing to do with demosaicing. These are all steps you also have to do on a linear DNG file. A linear DNG file is the sensor data demosaiced, with the black and white points scaled to sit between 0 and 65535 (that last step isn’t even required, but is what DxO does to normalize the black and white points). Nothing else, really, so the data needs to be handled the same way the data from a raw file needs to be handled: assume the calibrated input profile and convert to the working profile (or apply the camera matrix and assume a working profile), apply tone, apply edits, convert to the output profile and render.

Why? I think you are sprinting over a few steps, because what you describe has to do with getting the file displayed. And demosaicing has nothing to do - yet - with displaying an image. It’s ‘filling in the blanks’ of the sensor data, basically upscaling the captured channels of your sensor.

If you have a 24mp Bayer sensor, it produces a 24mp monochrome file. Actually, just a series of numbers. Some are meant for red, some for green, some for blue.

So you basically get a 6mp red channel, two 6mp green channels, and a 6mp blue channel. These four channels are ‘upscaled’ (smartly interpolated) into a single 24mp R,G,B data set. You can sort of call it an image at this point.
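That split into four subsampled planes is easy to show on a toy mosaic. This sketch assumes an RGGB layout and mirrors, conceptually, what the 4channels tool from libraw extracts:

```python
def split_planes(mosaic):
    """Split an RGGB Bayer mosaic (2D list, even dimensions) into its
    four quarter-resolution planes: R, G1, G2 and B."""
    planes = {"R": [], "G1": [], "G2": [], "B": []}
    for y in range(0, len(mosaic), 2):
        planes["R"].append(mosaic[y][0::2])       # even rows, even columns
        planes["G1"].append(mosaic[y][1::2])      # even rows, odd columns
        planes["G2"].append(mosaic[y + 1][0::2])  # odd rows, even columns
        planes["B"].append(mosaic[y + 1][1::2])   # odd rows, odd columns
    return planes

mosaic = [
    [10, 20, 11, 21],
    [30, 40, 31, 41],
]
print(split_planes(mosaic))
# {'R': [[10, 11]], 'G1': [[20, 21]], 'G2': [[30, 31]], 'B': [[40, 41]]}
```

Each plane is half the width and half the height of the sensor, which is exactly why a 24mp Bayer sensor yields four 6mp channels.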

That interpolating still has nothing to do with dealing with things like out-of-gamut colors or what the values actually mean in CIE XYZ space. That all comes later in the pipeline, after a linear DNG file is written.

Again, think about Sigma Foveon sensors or Canon sRAW files. They already contain an R,G,B value for each pixel. No demosaicing needed. But all the things like black point, white point, white balance, input profile / calibration, camera matrix, working profile… they all still apply.

I tried to find confirmation of that, and have failed so far. Nor have I found any information saying that Linear DNG data is usually RGB data in the CIE XYZ space. The nerd in me is extremely frustrated by the lack of technical information out there. How come no one has made a cool YouTube video showing hexadecimal data for different formats? (Maybe because their audience would be just me :P)

Anyway, would love some feedback or information from a DxO engineer, a DarkTable developer or anyone with deep technical knowledge of these formats.

By the way @jorismak, have you sent a support request on https://support.dxo.com and provided them with sample files? Might be the best way to get DxO people to look at whether they have a bug in their pipeline.

It’s a bit smaller than ProPhoto RGB, and similar to Rec.2020, based on some graphs provided by DxO to reviewers. Here’s a visual comparison:

You know… you are actually right there! I’m always the one explaining on other forums that it’s a community forum, and not the official method for reporting bugs.
I guess something here is true too. It’s meant for DxO’s own feedback… but a forum post is not the same as an official support ticket, as a paying customer.


reported as a ticket, should’ve done that sooner!


I never said that. That’s absolutely not true.

It’s 3 channels of numbers: R, G, B data per pixel. Normally in the same colour space as the raw sensor data. You might call that ‘unknown profile’ or whatever.

If a raw file had the value 25% for red somewhere, the linear DNG will have 25% for red in the same spot. Whatever that 25% means for colour.

If a raw file had no value for red at a certain pixel (because that is what happens, there are ‘gaps’ so to speak), that value will be filled in / demosaiced in the linear DNG file.

Want to get technical? Get a bayer raw file.
_DSC1309.ARW (47.0 MB)
I’ll use this. It has clipped skies, so I know the white-level easily :slight_smile: .

Load it into DxO PL5 and set it to ‘no correction’. Or load it into PL6, set it to ‘no correction’ and set the gamut to ‘legacy’.

Export it as DNG with ‘optical corrections only’.

Now, we’re going to extract the 4 raw channels from the bayer file (original raw file). With the 4channels tool from the libraw project, you extract the raw numbers that are inside the raw file, written as R, G1, G2 and B.

4channels _DSC1309.ARW
It creates files such as _dsc1309.arw.R.tiff.

It is not oriented correctly (it needs to be turned left). Also, it’s something like 2012x3012 pixels. That’s because raw converters often crop out the edges for the demosaicing.

Also, its black level is corrected by 4channels, but the max is just whatever my sensor can produce as maximum readout. Let’s see what it is with ImageMagick:
magick _dsc1309.arw.R.tiff -format "%[max]" info:

It reports a maximum value of 15864. Since all channels are clipped in this file, this is also the maximum value my sensor can produce. I want to scale it between 0 and 65535 instead of 0 and 15864.
65535 / 15864 = 4.131051437216338880484114977, so we multiply every value by that factor to get it to sit between 0 and 65535.

So, we’re going to multiply it, rotate it left, and crop it to 2000x3000 in the middle. And save it as a separate TIFF file.
magick _dsc1309.arw.R.tiff -evaluate multiply 4.131051437216338880484114977 -rotate -90 -gravity center -crop 2000x3000+0+0 +repage -compress none from_arw.tif
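That normalization step is just arithmetic, shown here in a couple of lines:

```python
# Scale the sensor's clipped maximum (15864 in this file) up to the full
# 16-bit range. Every raw value is multiplied by the same factor, so clipped
# pixels land exactly on 65535.
white_level = 15864
scale = 65535 / white_level
print(scale)                        # approximately 4.1310514372163...
print(round(white_level * scale))   # 65535
```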

Right… now let’s look at the DNG.
Use dcraw_emu from the libraw project to ‘render’ the DNG.

  • we don’t want to apply any white balance multipliers (-r 1 1 1 1)
  • we don’t want to apply any camera matrix (-M)
  • we don’t want to convert it to any other profile whatsoever (-o 0)
  • and we want to write it without gamma correction to a 16-bit TIFF (-T -4)

dcraw_emu -T -4 -o 0 -r 1 1 1 1 -M _DSC1309.dng

You could look at this file now. It’ll probably look pretty green. It’s the raw numbers from the Bayer channels, with nothing done to them except letting them sit between 0 and 65535. You’re looking at numbers, not colours, so to speak.

We take that file, take only the red channel, and average it down to 50%. This returns it to 2000x3000.
magick _DSC1309.dng.tiff -channel R -separate -scale 50% -compress none from_dng.tif

Now, compare those files. Name me noteworthy differences. Because in my test file here, they are (exactly) the same.
Look at the statistics of the numbers, like the mean:
magick from_arw.tif -format "%[mean]" info:
magick from_dng.tif -format "%[mean]" info:

7036.49 for one, 7059.74 for the other. To put that into perspective: if the numbers were between 0 and 255, the difference would be less than 1. So… the data is still the same.
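For reference, that comparison expressed in 8-bit units; 16-bit values map to the 8-bit scale by dividing by 257 (since 65535 / 255 == 257):

```python
mean_from_arw = 7036.49   # mean of the channel extracted from the ARW
mean_from_dng = 7059.74   # mean of the channel extracted from the DNG
diff_8bit = (mean_from_dng - mean_from_arw) / 257
print(diff_8bit)          # roughly 0.09, far below one 8-bit step
```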

In other words, I declare the set of numbers to be identical, within the margin of error. Even the demosaicing algorithm used by DxO changed the numbers so little that the mean doesn’t really move.

The data in a linear DNG file is (supposed to be) the same numbers as in your raw file, but ‘with the gaps filled in’. That’s demosaiced. Nothing done, nothing converted, no ‘colour space’, no ‘profile’… demosaicing is just interpolating the missing data from a Bayer or X-Trans sensor, not modifying that data.

This is needed so that lots of tools can read that data and handle the numbers exactly the same as they would handle the numbers from a real RAW file, minus the demosaicing step.

This is all different from a ‘non-linear DNG’ (which DxO can’t write, but is what Adobe DNG Converter makes, for instance), which contains the true Bayer data from your raw file, leaving even the gaps in. That’s why it doesn’t increase in size, but DxO’s DNGs do: they interpolate data, they ‘fill in the gaps’, and that filled-in data is written to file, growing it bigger.

DxO has been doing this correctly for years. Which is awesome! It creates a workflow that others try, but can’t really seem to recreate. DxO even created PureRaw as a cheaper product just to embrace this workflow.

Now DxO PL6-wide-gamut has changed this… and I don’t think it was meant to be that way.


And I haven’t said that you said it. Bit of miscommunication here. :smiley:

That said, I thank you for the detailed explanation! I learned a bunch, including about specific tools and workflows I can use if I want to data-peep at some RAW files.


I believe that filing a bug report with DxO is appropriate. I don’t know what is going on.

I took a sample.cr2 file and ran the following:
dcraw_emu -T -4 -o 0 -r 1 1 1 1 -M sample.cr2
4channels sample.cr2

Just comparing the shape of the Red-channel histogram in the demosaiced file, I can see that it is different from the camera raw Red pixels. This is as I would expect, since in the process of demosaicing, red is reconstructed from adjacent pixels. But I don’t think this tells us much about why you are getting a green tint from wide gamut.