How to read Delta E when using a colour checker?

Hi everyone, I’m a Darktable user, and in that software I used to read the Delta E after doing the colour calibration with a colour checker, to check whether the calibration was performed correctly. Is it possible to expose and read this value in PhotoLab too?

Hello @rikkarlo,

Delta E is a metric that quantifies the perceptual difference between two colors.
When calibrating colors, a low Delta E indicates that the calibration is accurate and the colors closely match the reference.
DxO PhotoLab provides color calibration tools, but the direct display of Delta E values is not a built-in feature.
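To make the metric concrete, here is a minimal Python sketch of the original CIE76 formulation of Delta E, which is simply the Euclidean distance between two colors in CIELAB space (modern workflows often use the more elaborate CIEDE2000 instead; the example values below are made up for illustration):

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two colors in CIELAB space (CIE76)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Two nearly identical greys in L*a*b* (hypothetical values)
reference = (50.0, 0.0, 0.0)
measured = (50.5, 0.3, -0.4)

print(delta_e_cie76(reference, measured))  # ≈ 0.71, below the ~1.0 visibility threshold
```

A Delta E around 1 is commonly cited as the threshold below which two colors are indistinguishable to most observers, which is why calibration workflows aim for values under 1.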

Best Regards,


Thank you very much for the info. So how do I know whether the calibration was performed accurately? Is there free third-party software where I can import the calibrated photo and the uncalibrated one and extract the Delta E value?

In DxO PhotoLab there isn’t a direct Delta E value exposed like in Darktable for color calibration with a color checker. However, PhotoLab offers robust color management tools for accurate color adjustments. Use the color picker and histograms to check color accuracy and calibration results visually.
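If you want a number rather than a visual check, one free route is a short script: sample the average RGB of each chart patch in the exported (calibrated) image, convert it to CIELAB, and compare against the chart’s published Lab reference values. A sketch in plain Python, assuming sRGB output and the D65 white point (the patch values below are hypothetical; a real chart ships with its own reference data):

```python
import math

def srgb_to_lab(rgb):
    """Convert an 8-bit sRGB triplet to CIELAB, assuming the D65 white point."""
    # sRGB -> linear RGB (inverse sRGB transfer function)
    lin = []
    for c in rgb:
        c /= 255.0
        lin.append(c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4)
    r, g, b = lin
    # linear RGB -> XYZ (sRGB/D65 matrix)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # XYZ -> Lab, relative to the D65 reference white
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    xn, yn, zn = 0.95047, 1.0, 1.08883
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def delta_e_cie76(lab1, lab2):
    """CIE76 Delta E: Euclidean distance in Lab space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical data: the chart's reference Lab values vs. the average sRGB
# sampled from the corresponding patches in the calibrated photo.
reference_lab = [(38.0, 13.0, 14.0), (65.0, 18.0, 18.0)]
measured_rgb = [(115, 82, 68), (194, 150, 130)]

for ref, rgb in zip(reference_lab, measured_rgb):
    print(round(delta_e_cie76(ref, srgb_to_lab(rgb)), 2))
```

This is only a sketch: it ignores the Delta E formula variant (CIE76 vs. CIEDE2000), viewing-condition effects, and how the patch averages are sampled, all of which matter if the target is sub-1 accuracy.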

I see, I guess this choice was made because PhotoLab targets hobbyists or artists who do not care about the level of accuracy of the colour calibration. I wanted to use it for scientific purposes, but I guess there is no way to do that at the moment. Let me know if you plan to expose the Delta E value at some point; I might consider coming back to DxO.

A ColorChecker isn’t suitable for truly scientific purposes either.

What do you mean? If the Delta E is smaller than 1 after the colour correction, there is no perceptual difference between the acquired colours and the real colours. This is the level of accuracy that I aim for; I just need to check this value to validate the workflow.

I believe Wlodek meant that ColorChecker patches have too large a variance and do not deliver the accuracy one needs for color-critical calibrations.

For example: “Exploring the limits of color accuracy in technical photography”, Heritage Science.

Ah yes, I understand. In my case I just need to verify that the Delta E is lower than a certain value, but thanks for the very interesting reference!

Correct. For example, I have four grey cards which look almost the same in daylight, but look quite different when lit by low-pressure sodium or even tungsten lamps. If even an old human can see the difference, they are worthless for scientific purposes (perhaps with one exception, but which?). They all have different spectral reflectance distributions, turning grey only under specific illuminants. That said, I still use them in some conditions just to get a rough idea of how the camera auto WB gets fooled by specific lighting (oversimplifying, with a large green or magenta “tint”, when the “right” way to “fix” WB may not be clear). With color checkers the situation is even worse.

I think the OP is an enthusiast who wants to do some research on his own. If he were a scientist, he would already know which tool to use. If not, it looks like just another negative post, a FUD. I have no idea what the OP had in mind speaking about scientific purposes. I think most “color scientists” use their own code to deal with the raw data read by dcraw/libraw code (without using dcraw/libraw demosaicking code, no gamma, perhaps no WB corrections). None of them would use PhotoLab, Lightroom, Capture One or any other software which they don’t fully control, unless they are “studying” the software itself or using it for comparisons.

One has to take into account also what the camera does with the data read from the sensor before it gets written to a file. Some cameras (e.g. most mobiles today) can combine many shots into one, do some denoising, optical corrections, etc., and still call their output “RAW”. They may record the data with different ADC settings for different “color” channels, and so on. Some deeper technical knowledge of the tool is necessary, which consumer camera makers are not likely to share in detail. For some very special purposes, special cameras are made, with more technical data disclosed, e.g. for medical purposes, microscopy, etc. Some other things that come to mind include spectral lens transmittance, chromatic aberrations, readout and thermal noise, crosstalk, dark current, dark frame subtraction, sensor behavior outside the visible light, etc., etc. Even amateur astrophotographers often use specially prepared consumer cameras to record a wider spectral range, and they use specialized software for processing. One other recent example is the firmware C:Ver.2.01 release for the Nikon Z8, with one of the fixed problems described as ‘Green color casting occurred with some pictures taken’. Just one of those “minor details”…

Our brain often “changes” considerably the colors perceived in 3D reality, but does less so for flat photos. The reason for that is still a bit of a mystery and depends on the individual. For example, I know two girls who are very picky about the color of their lips on photos – one wants them more pale than in reality, the other more saturated. They have different perceptions of ‘color accuracy’, or just very different bathroom lightings :slight_smile: .

That’s more or less correct, except that some PL users do care about color accuracy (reproduction work for archives, museums, etc.) but not really on a “scientific level”. It’s software for RAW file development with some additional tools for RGB manipulations and some basic DAM, aimed at the consumer market, including “pro photographers”. Definitely, tools like ‘HSL’, ‘WB’, ‘Smart Lighting’, ‘ClearView’, ‘Chromatic Aberrations’, ‘Moire’, various renderings, etc. can alter colors in a significant way – that’s what they are meant for. Have a look at the ‘Photographic Science and Technology Forum’ as a starter. Also take a look at some special-purpose cameras which may be of interest to you. Some of them have quite detailed specs made public, and you may get even more info from the makers if you use your institution for making contact.


Thank you very much for the in-depth explanation. I’m actually working in the digital heritage field and I was considering using DxO for archiving purposes; those are the “scientific” reasons I was talking about, so the issues I usually deal with are much less technical than those of engineers in the field of optics or camera manufacturing.

Anyway, I will definitely check the references you gave me, thank you again.


I think there are several people here working on something similar. Perhaps ask @Stenis ? I have zero knowledge about this type of work. Cheers.

I don’t know what demands you have when working with cultural heritage, but I think you ought to be fine with a screen properly calibrated to a Delta E smaller than 1. From what I have seen, color accuracy in old images was lost a long time ago, because color image media is the first to get severely corrupted and is mostly partially lost in archives.

I think you ought to consider a few other things too. If, like many heritage organisations, you are storing your RAW image data in DNG files, I would not lean on DxO PhotoLab at all. Lightroom has far better tools for DNG management than DxO has.

With old color images, I would be surprised if most heritage organisations have the resources to even try to restore the color in their images.

Another thing is the new C2PA standard for securing authenticity and provenance in pictures, both of which are important for heritage. There is not even the slightest sign that DxO PhotoLab will be usable in workflows like that in the near future, while Adobe is the company that both started the CAI initiative and is backing the C2PA work. So despite not being a great fan of Lightroom, I think it is a far better option than PhotoLab in your case.


Exactly, in my department they usually only look for a Delta E smaller than 1.

Thanks for the info, I was arriving at the same conclusion.