How many bits are used for each colour channel internally by DXO Photo Lab?

Hi

How many bits does DXO Photo Lab use internally for colours?

I am interested because I used to use Adobe Photoshop Elements, which uses (or used to use) 8 bits per channel, and then I upgraded to Photoshop, which uses 16 bits.

I am not really an expert, but the explanation I found as to why this might matter is that, while it may not matter for display purposes (though there is a thread here about how it might, if your monitor supports 10 bits), editing an image multiple times (e.g. changing the saturation many times) will lose colour information. In practice I guess this would have to happen a lot of times for a noticeable effect to kick in, and it is unlikely to be an issue for most photo editing workflows, but I am still interested in the question.
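
A tiny standalone sketch of the effect in question (plain NumPy, a simple gain used as a stand-in for a saturation-style edit, nothing to do with any particular editor): the same adjustment is applied down and back up repeatedly, once with re-quantisation to 8 bits after every step and once in floating point.

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(100, 100, 3), dtype=np.uint8)

def gain_8bit(img, factor):
    # Destructive step: apply a gain, then re-quantise to 8 bits straight away.
    return np.clip(np.round(img.astype(np.float64) * factor), 0, 255).astype(np.uint8)

edited_8bit = original.copy()
edited_float = original.astype(np.float64)
for _ in range(50):                                   # edit "many times": down, then back up
    edited_8bit = gain_8bit(gain_8bit(edited_8bit, 0.9), 1 / 0.9)
    edited_float = edited_float * 0.9 * (1 / 0.9)

print(np.abs(edited_8bit.astype(int) - original.astype(int)).max())  # small but permanent errors
print(np.abs(np.round(edited_float) - original).max())               # 0.0: nothing lost
```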

Thanks

Hi Justin,
does this help? – Talking about the bit depth in PL (not your monitor).

Wolfgang


Internally it should have at least 16 bits.
14 bits would be the bare minimum, but at least 16 bits sounds more realistic, and maybe more, to provide better precision when applying functions.
A new working space has been added to retain any colour the sensors can provide.

sensors do NOT provide colors…

Does a sensor not provide me with the colours it has collected, i.e. the colours it can collect are the colours it can provide?

Guess it just depends which side of the sensor you stand.

When you edit a file in PhotoLab, it does not matter how many times you edit it, because you are not actually editing the file itself - unlike, for example, when you edit an 8-bit JPEG in Photoshop and save it back as an 8-bit JPEG. When you start doing that, you do start losing things.
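
A minimal sketch of that difference (a hypothetical toy editor, not how PhotoLab is actually implemented): the edits are just a recipe of settings, and every render starts again from the untouched source, so changing a setting ten times has the same effect as setting it once.

```python
import numpy as np

class ParametricEditor:
    """Toy non-destructive editor: the source pixels are never overwritten."""

    def __init__(self, source):
        self.source = source.astype(np.float64)     # original stays untouched
        self.settings = {"exposure_ev": 0.0, "saturation": 1.0}

    def set(self, name, value):
        self.settings[name] = value                 # only the recipe changes

    def render(self):
        out = self.source * 2.0 ** self.settings["exposure_ev"]
        grey = out.mean(axis=-1, keepdims=True)     # crude saturation control
        out = grey + (out - grey) * self.settings["saturation"]
        return np.clip(out, 0.0, 1.0)

raw = np.random.default_rng(1).random((4, 4, 3))    # stand-in for the source image
editor = ParametricEditor(raw)
for s in (0.2, 2.5, 0.7, 1.3):                      # "change saturation many times"
    editor.set("saturation", s)
final = editor.render()                             # only the last value matters
```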


no, it does not… for a start, input devices do not have a gamut… “colors”, if you mean coordinates in a proper colorimetric color space (we are not touching human perception; some coordinates simply do not correspond to any colors), appear only after you apply an arbitrary color transform from the numbers representing the readout from a sensor (usually a proxy for the charge accumulated in the sensels) into such coordinates. And the fact that there is no single valid way to do that transform is another reason why input devices do not produce “colors”…
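
To put the “no single valid transform” point in concrete terms, here is a minimal sketch with invented numbers (not any real camera’s calibration): the same sensor readout becomes different colorimetric coordinates depending on which matrix the software chooses to apply.

```python
import numpy as np

# White-balanced sensor readout for one pixel (unitless counts, not colours yet).
sensor_rgb = np.array([0.42, 0.55, 0.31])

# Two different, equally "valid" camera-to-XYZ matrices (values invented here).
matrix_a = np.array([[0.65, 0.28, 0.07],
                     [0.27, 0.69, 0.04],
                     [0.02, 0.10, 0.88]])
matrix_b = np.array([[0.60, 0.32, 0.08],
                     [0.22, 0.72, 0.06],
                     [0.01, 0.12, 0.87]])

# Only after choosing a matrix do the numbers become colorimetric coordinates,
# and the choice changes the resulting "colour".
print(matrix_a @ sensor_rgb)
print(matrix_b @ sensor_rgb)
```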

PS: consider cameras that detect something in the IR/UV spectrum - what colors do you think those might be :slight_smile: and how do you place them into your beloved WG?

I can use smart objects… and do parametric editing in PS w/o losing anything, even w/ a 1-bit TIFF file :slight_smile:

Well, the sensor does provide colour information - through the colour filter array’s pattern and location. Each sensor pixel is just a counter, but it only counts photons within a defined energy (colour) range. A colour image can then be built by combining the CFA info (pattern and location) with the values of the counters (sensor pixels) and multipliers that depend on the colour filters’ effectiveness and the character of the light.
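
A very naïve sketch of that reconstruction, assuming an RGGB Bayer pattern and made-up white-balance multipliers (real demosaicing, and certainly DxO’s, is far more sophisticated):

```python
import numpy as np

# A tiny raw mosaic: each value is a photon count behind one CFA filter (RGGB).
mosaic = np.array([[120, 200, 118, 190],
                   [210,  80, 205,  76],
                   [125, 198, 119, 202],
                   [208,  79, 199,  81]], dtype=np.float64)

r = mosaic[0::2, 0::2]                                # red-filtered sensels
g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2     # two green sensels per cell
b = mosaic[1::2, 1::2]                                # blue-filtered sensels

# White-balance multipliers depend on the filters and the light (invented values).
wb = np.array([2.0, 1.0, 1.6])
rgb = np.stack([r * wb[0], g * wb[1], b * wb[2]], axis=-1)
print(rgb.shape)   # (2, 2, 3): a small colour image built from the mosaic
```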

Anyways, PhotoLab has to calculate with a lot more bits per colour channel than the camera provides. Changing exposure settings by one stop adds one bit alone. DPL can set exposure to +4 EV (add 4 bits), change exposure with the tone curve (add 5 or 6 bits) … and needs some margin to prevent posterisation and allow smooth 16-bit-per-channel export. At least 24 bits per channel are needed, and whether DPL uses 24, 32 or 64 bits per channel does not really matter that much any more… unless we get true HDR sensors.
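
Written out as a back-of-the-envelope sum (the figures are the ones from this post, not a published DxO specification):

```python
# Rough headroom estimate per colour channel (figures from the post above,
# not an official DxO specification).
sensor_bits     = 14   # typical raw file precision
exposure_bits   = 4    # +4 EV exposure slider: one extra bit per stop
tone_curve_bits = 6    # additional headroom for tone curve adjustments

required_bits = sensor_bits + exposure_bits + tone_curve_bits
print(required_bits)   # 24 -> why 24/32/64-bit working precision stops mattering
```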

Although the histogram and tone curve tools display values of 0 to 255, that does not mean that DPL is internally calculating with 8 bits per channel. The values of 0-255 simply help us handle reality more easily.
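
A hypothetical illustration of that point: the 0-255 numbers can be nothing more than labels computed for display, while the working data keeps its full precision.

```python
import numpy as np

# Internal working data: high-precision values, here normalised to 0..1.
internal = np.array([0.0, 0.183456789, 0.5, 0.999999])

# What a histogram or tone-curve UI might show: the familiar 0..255 labels.
display_labels = np.round(internal * 255).astype(np.uint8)
print(display_labels)   # [  0  47 128 255] - labels for the UI only
print(internal)         # the precise working values are untouched
```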


Yes, I did mean non-destructive vs destructive. Hence the destructive example.

it does not… no gamut, not one single valid color transform, etc

yes, but only by inventing a purely arbitrary color transform… a digital readout from a sensel (behind any CFA filter or w/o one) has zero relation to any color until you invent that transform … and remove the IR filter, for example, as noted - good luck with colors :slight_smile:

I haven’t found an answer. However, I think the following is more important in preventing the problem you’re concerned about:


The whole game we’re playing is a game from input to output.
The sensor is the input and the monitor/printer is the output.
The sensor collects the intensity of the light through a color filter array. It measures that light as a voltage or current. Every sensor element is covered by a red, green or blue filter. The sensor is sensitive to a certain range of light; together with the physical properties of the filters, this gives it a gamut or dynamic range, which is mentioned in several camera tests.
The output side works the other way around. Where the input collects a certain value through a color filter, the output sends a value to the output device to achieve a color.
The part between the input and output is called conversion and editing.
This is also why I don’t understand the discussions concerning the RAW histogram. It deals with the input side, and I’m interested in the output side. The input side is a station we have already passed.
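
The input/output distinction can be sketched like this (a toy rendering step, not PhotoLab’s actual pipeline): the “raw histogram” is computed over the sensor values before conversion, the normal histogram over the rendered output values after it.

```python
import numpy as np

rng = np.random.default_rng(2)
raw = rng.integers(0, 2**14, size=10_000)             # 14-bit input values

def render(raw_values, exposure_ev=1.0, gamma=2.2):
    # Toy conversion/editing step from input values to 8-bit output values.
    x = raw_values / (2**14 - 1) * 2**exposure_ev      # exposure adjustment
    x = np.clip(x, 0.0, 1.0) ** (1.0 / gamma)          # simple tone mapping
    return np.round(x * 255).astype(np.uint8)

raw_hist, _ = np.histogram(raw, bins=256)              # "raw histogram": input side
out_hist, _ = np.histogram(render(raw), bins=256)      # normal histogram: output side
# With +1 EV the output clips at 255 even though the raw input never clipped.
```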

I think PL is using 16 bits, that being a logical value for addressing computer memory.

George

If cameras provided raw histograms, we could simplify exposure: get as much light as possible without highlight clipping and correct in post. Metering for 18% grey was okay for film; for digital, other approaches would deliver less noise in the shadows.

But modern sensors are low noise… they are lower noise than older sensors, but still noisy enough for DxO to develop the DeepPRIME and DPXD noise reduction algorithms :wink:


As said, the so-called raw histogram is input based, while the normal histogram is output based, and that is our destination. Why use more exposure based on the input when it results in clipping on the output?

George

I would think it’s 16 bpc. That is what Adobe uses, and I assume most RAW converters use a container space of sorts that is 16 bpc, while most RAW files are 12 or 14 bpc. Most TIFFs are 8 or 16 bit and JPEGs are 8 bpc.

Whichever one you import, I think DXO PhotoLab adopts it internally. In the case of TIFF that is 16 bpc, and for RAW it will work in 16 bpc. In the case of an 8 bpc JPEG, I think it will stay in the native bit depth.
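
A hypothetical sketch of what “adopting” the imported data into a 16 bpc container could look like (the scaling here is invented for illustration; it does not gain precision, it only makes room for further edits):

```python
import numpy as np

def to_working_space(values, source_bits, working_bits=16):
    # Hypothetical promotion into a wider integer container: scaling keeps white
    # at white; no precision is gained, there is simply room for further edits.
    scale = (2**working_bits - 1) / (2**source_bits - 1)
    return np.round(values * scale).astype(np.uint16)

jpeg_pixel = np.array([0, 128, 255])          # 8 bpc source
raw_pixel = np.array([0, 8192, 16383])        # 14 bpc source
print(to_working_space(jpeg_pixel, 8))        # [    0 32896 65535]
print(to_working_space(raw_pixel, 14))        # [    0 32770 65535]
```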

If you are confused about bit depth: many years ago I made an in-depth tutorial about everything you never wanted to know about bit depth, pun intended. It should explain everything you might be interested in.

Bit Depth in Depth: All you never wanted to know about bit depth. “Lost Tapes” series


If the output is clipped but the input is not, the cause is in the customising. Early OpticsPro versions used to signal highlights because of what they did with the images; later PhotoLab versions have fixed this.

Okay, we can tweak images or add contrast/microcontrast until highlights clip and shadows drown, and that is perfectly acceptable - if it serves the intended purpose. Nevertheless, we get a wider margin in post if we expose to the right, especially in low-contrast situations.


Hi - yes. Thanks. Non-destructive editing anyway. That makes a lot of sense. My question is off the mark…

But this exposing to the right is limited by the capabilities of the output device, not the input device.

George

ETTR is something one does when taking a picture. Low-contrast subjects can be overexposed, which is then corrected in post. ETTR provides the best possible (lower noise and higher precision) input and, consequently, output.
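
The “lower noise” part follows from photon shot noise: for a shot-noise-limited signal the SNR grows with the square root of the light collected, so one extra stop of exposure (as long as nothing clips) is worth roughly 3 dB. A back-of-the-envelope sketch:

```python
import math

def shot_noise_snr_db(photons):
    # SNR of a purely shot-noise-limited signal: mean / sqrt(mean) = sqrt(mean).
    return 20 * math.log10(photons / math.sqrt(photons))

metered = 1000                                # photons at the metered exposure
print(shot_noise_snr_db(metered))             # ~30.0 dB
print(shot_noise_snr_db(metered * 2))         # ~33.0 dB: +1 EV of ETTR buys ~3 dB
```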
