When I open a raw (CR3) image in DxO PhotoLab for editing, the image obviously undergoes some process to produce a visible/displayable file in RGB or DxO’s own colour space. This implies some demosaicing process is taking place to produce a file which can then be edited.
While some raw viewing programs open the embedded JPEG in the raw file to produce a visible picture on the computer screen (and I believe this is what the camera does to display a visible image on its own rear screen), an 8-bit JPEG would not be able to show the range of subtlety necessary for fine tuning in DxO.
I have been under the impression that demosaicing was conducted in conjunction with DeepPrime denoising, when the file was exported into a JPEG, Tiff, DNG etc. But the loupe must also be demosaicing and denoising a portion of the image within DxO. So clearly I am not understanding the full technical picture.
What file format does DxO use in its workspace (if indeed you can call it a ‘file format’ in the normal sense)? What happens when a raw file is imported into DxO? At what point or points does the demosaicing occur?
(Happy to be pointed to another thread in the forum which answers this, if there is such a thing.)
PhotoLab creates an in-memory image, which represents the demosaiced data from the file. My Nikon D850 produces a 14-bit image, which means it captures up to 16,384 levels for each colour channel, and this is what you are working on.
There is also the consideration of colour gamut, and this is best set to DxO Wide Gamut to make the most of the variety and range of possible colours.
Only when exporting does this memory image get translated and mapped into a physical file and you may need to soft proof to ensure that colours are not oversaturated for the chosen output format and colour space.
The beauty of working this way is that you can soft proof to different virtual copies - sRGB for web sharing and differently for printing to a particular paper.
When I export for printing, it is to a 16bit TIFF file in the ICC profile for my paper. This maintains as high a dynamic range as possible to maintain smooth gradients.
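A quick way to see why 16-bit export helps with smooth gradients is to count how many distinct levels a narrow slice of a gradient gets at each bit depth. This is a general illustration, not DxO code; the `quantise` helper and the 10%-wide slice are my own choices for the sketch.

```python
# Illustrative sketch (not DxO code): quantising the same gradient slice
# to 8-bit vs 16-bit shows why 16-bit TIFF keeps gradients smooth.

def quantise(value, bits):
    """Map a 0.0-1.0 value to the nearest integer level at a given bit depth."""
    levels = (1 << bits) - 1
    return round(value * levels)

# A 10%-wide sliver of a gradient, around mid grey:
lo, hi = 0.45, 0.55
steps_8 = quantise(hi, 8) - quantise(lo, 8)     # distinct levels available
steps_16 = quantise(hi, 16) - quantise(lo, 16)

print(steps_8)    # -> 25 levels: visible banding risk
print(steps_16)   # -> 6553 levels: smooth
```

With only 25 levels to span a tenth of the tonal range, steps between neighbouring tones can become visible as banding; with thousands of levels they cannot.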
An RGB image is represented as a raster with an x and a y axis. Every element (x,y) contains a pixel that holds the three colours. One could also see it as a three-dimensional array (x,y,z), where z=1, z=2 and z=3 contain the colour channels.
An image format as you describe it is a disk file format. There are different methods to store that raster image on disk: JPG, TIFF, DNG, BMP etc. When loading that disk file, it is placed in memory as a raster image again and sent to the monitor or printer.
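The raster described above can be sketched in a few lines of plain Python; the nested-list layout and 8-bit values here are just for brevity (editors typically hold 16 bits per channel, as noted below).

```python
# Sketch of the description above: an RGB raster indexed as [y][x][channel].
width, height = 4, 2

# Every element (x, y) holds the three colour channels, initially black.
raster = [[[0, 0, 0] for _x in range(width)] for _y in range(height)]

raster[0][2] = [255, 128, 0]   # set one pixel (y=0, x=2) to orange
r, g, b = raster[0][2]         # read its three channels back
print(r, g, b)                 # -> 255 128 0
```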
Converters and most editors use a 16-bit structure. Every channel is 16 bits, so a pixel is 48 bits. Sometimes, depending on the program, it is 64 bits, with another 16 bits used for other purposes.
JPG is by definition 8-bit; TIFF can be 8- or 16-bit.
The sensor measures the light as analogue values. In front of the sensor, a colour filter array (CFA) is placed, so each sensor element only measures a specific colour: red, green or blue. The A/D converter creates a 12- or 14-bit digital value from those analogue values.
In the demosaicing process, pixels with 16 bits for each channel are created out of these separate colour values. And the pixel values are converted to the working colour space.
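To make the demosaicing step concrete, here is a toy bilinear interpolation of a single interior pixel of an RGGB Bayer mosaic. This is only the simplest possible scheme to show the idea; the mosaic values are made up, and real converters like PhotoLab use far more sophisticated algorithms.

```python
# Toy bilinear demosaic at one interior pixel of an RGGB Bayer mosaic.
# CFA layout of this 4x4 mosaic of raw sensor values:
#   R G R G
#   G B G B
#   R G R G
#   G B G B
mosaic = [
    [100, 180, 110, 190],
    [170,  60, 175,  65],
    [105, 185, 115, 195],
    [172,  62, 178,  68],
]

def demosaic_at_blue(y, x):
    """Full RGB at a blue site: B is measured, G and R are interpolated."""
    b = mosaic[y][x]
    # Green neighbours sit above/below/left/right of a blue site:
    g = (mosaic[y-1][x] + mosaic[y+1][x] + mosaic[y][x-1] + mosaic[y][x+1]) / 4
    # Red neighbours sit on the diagonals:
    r = (mosaic[y-1][x-1] + mosaic[y-1][x+1] + mosaic[y+1][x-1] + mosaic[y+1][x+1]) / 4
    return r, g, b

print(demosaic_at_blue(1, 1))   # -> (107.5, 177.5, 60)
```

Each output pixel thus gets one measured channel and two interpolated ones, which is exactly why demosaicing has to happen before normal per-pixel RGB editing.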
Most of the editing is done on that in-memory image, but some is done on the raw data, the data before the demosaicing process: exposure correction, DeepPRIME, white balance and maybe some more.
The above is for cameras with a Bayer array, which most of them have.
A camera (like Joanna’s) recording data in 14 bits delivers images with pixel values of 0 to 16’383, but some of that range is cut for technical reasons like noise caused by the recorded light, the sensor, its amplifiers etc. Some cameras display “black” as R/G/B=0/0/0, others as 2048/2048/2048 like my EOS 5Diii. “White” will be somewhere near or below 16’383/16’383/16’383.
DxO PhotoLab can push exposure by up to 4 stops, which means that, in addition to the 14 bits for what comes out of the camera, another 4 bits are needed in order to preserve details in the highlights, should we reduce exposure again. Moreover, all the contrast sliders and other tools might boost exposure locally, and we need another few bits to provide a way back.
All of the above means that PhotoLab’s internal calculations require a lot more bits per channel than what comes out of the files we develop. Upon export, the number of bits is reduced according to file specifications.
All of the above is modelled with integer calculations in mind; floating-point calculations can ease things a bit, provided the range of numbers used preserves the necessary precision (about 6 ppm) throughout all calculations… even though our brains resolve tonality and hues only at around 1%.
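The headroom argument above can be sketched with a few lines of arithmetic. This is not DxO's pipeline, just a simulation of the integer-vs-float point: a 14-bit white pushed 4 stops needs 18 bits, so a 16-bit integer channel has to clip while a float keeps a way back.

```python
# Sketch of the headroom argument: pushing a 14-bit value by 4 stops
# needs 18 bits, so a 16-bit integer pipeline clips; floating point does not.

RAW_WHITE = 16383   # brightest 14-bit value from the camera
U16_MAX = 65535     # ceiling of a 16-bit integer channel

def push_stops_u16(value, stops):
    """Exposure push in a 16-bit integer pipeline: doubles per stop, clips at the ceiling."""
    return min(value << stops, U16_MAX)

def push_stops_float(value, stops):
    """The same push in floating point: nothing clips, detail stays recoverable."""
    return value * 2.0 ** stops

pushed_int = push_stops_u16(RAW_WHITE, 4)      # clipped at 65535: highlight detail gone
pushed_float = push_stops_float(RAW_WHITE, 4)  # 262128.0: pulling back 4 stops restores it

print(pushed_int, pushed_float, pushed_float / 16 == RAW_WHITE)
```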
A 14 bit image does not overflow a 16 bit bucket.
One stop more exposure is like adding another 14 bit image.
Add another three and the bucket is full, anything more will overflow and hence be lost.
Not true: adding one stop just multiplies the value by 2 (doubles the value). For whites you can add two stops of exposure before hitting the 16-bit ceiling.
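The claim is easy to check by brute force: keep doubling a 14-bit white until it no longer fits in 16 bits. (A small sketch of the arithmetic, nothing more.)

```python
# Each stop doubles the value, so count how many extra stops a 14-bit
# white (16383) survives inside a 16-bit bucket (max 65535).
white = 16383
u16_max = 65535

stops = 0
while white * 2 ** (stops + 1) <= u16_max:
    stops += 1

print(stops)   # -> 2 (16383*4 = 65532 fits; 16383*8 = 131064 overflows)
```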
@platypus
I don’t think so.
Let’s say the sensor can measure about 12 stops. After digitising, that whole 12-stop range is covered by the chosen bit depth, whatever it is: 12, 14, 16, 23 etc. Even 2. If your camera can handle 14 stops or any other value, nothing changes about this.
To make it more complicated, the values are corrected twice: once for the eyes and once for the monitor. I believe the raw data are gamma corrected for the eyes (not sure), and for the monitor they are gamma corrected during the demosaicing process. And then of course there is the conversion to the working gamut and the output gamut.
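Wherever in the pipeline it happens, "gamma correction" itself is just a transfer curve. As an illustration (independent of where DxO applies it), this is the standard sRGB encoding from linear light to display values:

```python
# Standard sRGB transfer function: linear 0.0-1.0 -> encoded 0.0-1.0.
# A linear ramp is brightened in the shadows so that encoded steps match
# the eye's roughly logarithmic sensitivity.

def srgb_encode(linear):
    """sRGB opto-electronic transfer: linear segment near black, power curve above."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

# Mid grey in linear light ends up much brighter after encoding:
print(round(srgb_encode(0.18), 3))   # -> 0.461
```

This is why a raw file viewed without any tone curve looks very dark: the linear sensor values have not yet been through a curve like this one.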
Thanks George (and others). That was the bit that I was missing, the formation of a temporary working file in memory which is then written into a disk format when exported.
When I first open a file in DxO (or for that matter in any raw program), it is presented as a picture with certain luminance levels and colour applied. I presume what first happens is DxO reads metadata in the raw file and uses that to create the displayed image. If a preset is applied automatically on opening, DxO will further adjust the file luminance, colour, noise etc in accordance with this preset.
With regard to the diagram posted by John, I note that it shows demosaicing and denoising together. If I open a file in DxO using the No Correction preset, I get a very noisy file with lots of chrominance and luminance noise, so clearly the demosaicing and denoising processes are, at least to some extent, separable, at least with High Quality NR.
If you mean the image shown briefly at first, that’s most likely the JPEG that’s embedded in the raw file.
Soon after that, the application processes the raw data, with or without a preset, and then displays the raw rendering.