PL7 Shadow Banding

If we use Exposure + Tone, here is what the DxO PL controls do with a narrow band of raw data (export to linear DNG with all corrections applied):

Raw data in the original raw file: (screenshot)

After DxO PL6 corrections (note that it is demosaicked): (screenshot)

If we use the Tone Curve only, here is what the DxO PL controls do with a narrow band of raw data (export to linear DNG with all corrections applied):

Raw data in the original raw file: (screenshot)

After DxO PL6 corrections (note that it is demosaicked): (screenshot)

And if we zoom in (note that it is demosaicked): (screenshot)
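For anyone who wants to check this kind of export numerically rather than visually, here is a minimal sketch (my illustration, not code from the thread) of how a narrow band of values in the exported linear DNG could be inspected for missing levels. It assumes the pixel data has already been loaded into a NumPy array; the loader and the synthetic example data are hypothetical.

```python
import numpy as np

# Hypothetical sketch: `img` is the exported linear DNG already loaded as a
# NumPy array of 16-bit values (e.g. via tifffile or rawpy, not part of the thread).
def inspect_band(img: np.ndarray, lo: int, hi: int) -> None:
    """Report which code values actually occur inside a narrow band of the data."""
    band = img[(img >= lo) & (img <= hi)]
    levels = np.unique(band)
    print(f"band {lo}..{hi}: {levels.size} distinct levels out of {hi - lo + 1} possible")
    if levels.size > 1:
        print("gaps between occupied levels:", np.unique(np.diff(levels)))
    # Large, regular gaps between the occupied levels are what show up as banding.

# Example on synthetic data quantised in steps of 4: reports gaps of 4.
inspect_band(np.arange(800, 900, dtype=np.uint16) * 4, 3200, 3600)
```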

Whatever you do to increase contrast, if you exceed the number of levels that can be handled by the number of bits in a byte, you are likely to provoke banding.

My point is that, if you use the right tools, you can get a good result (without banding).
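As a rough illustration of the arithmetic behind that claim (my numbers, not from the post): an 8-bit output offers 256 codes, and stretching a narrow tonal range spreads its few input levels across them with gaps in between.

$$
2^{8} = 256 \ \text{output codes}; \qquad \text{a stretch by } k = 8 \ \Rightarrow\ \text{only } 256/8 = 32 \ \text{distinct levels, spaced } 8 \text{ codes apart.}
$$

Steps of 8 codes on a smooth gradient are easily visible as banding.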

Take a look at this post…

… to see what a mess the Selective Tonality tool can make of a smooth gradation. There are other posts where I show better examples of how a step wedge is mistreated when you apply that tool, but I can't find them at the moment.

Hi Joanna,

But this is not JPEG banding; it is coming from DxO's internal processing, which is much wider than 8 bits. The image that I uploaded was artificially chosen to make this clear, but it happens with the Jeep image simply by loading the RAW file into PL7, automatically applying the default preset (DxO Natural) and then lifting the exposure by one to two stops.

Nor is this an issue with the RAW file, as I can process these files in other software with more extreme settings without banding. In COP the limitation is the noise floor of the RAW file itself, which looks like typical RAW noise.

Like it or not, I cannot process the Jeep image (for example; I have others as well) satisfactorily in DxO. I have experience of processing Olympus files in DxO without problems. The issue only became apparent when trying to process GFX100S medium-format images, which have 3 to 4 stops more usable shadow range than the smaller sensor (these are 16-bit RAW files, with probably about 14 bits of meaningful signal data at base ISO).

What I really want to know is whether this is a bug (in which case I will wait for a fix) or an inherent limitation of PL7 in comparison to other RAW processors (in which case the "right tool" is clearly not PL7, at least for MF cameras).

I don’t know if this banding is related to the banding I reported earlier regarding vignetting, but in April 2021, DxO stated that it was a bug. See: Vignetting filter causing banding - #29 by sgospodarenko

In November 2022, there was still no solution, see: Vignetting filter causing banding - #32 by Barbara-S

DxO's long grass is really long, so everything gets lost in it.

Looks like something for support.dxo.com

Depends how much you have to stretch a part of the histogram…
Computers run at 32-bit precision. It would have been smarter to use this precision.

HDR needs floating-point precision, which is not the same as 32-bit integer precision per channel, even if FP numbers are coded on 32 bits.
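A small sketch of the difference (my illustration, not from the thread): a uint16 channel has a fixed absolute step, while float32 carries about 24 bits of relative precision, so its step size shrinks with the value and is much finer in the shadows.

```python
import numpy as np

# uint16 has a fixed absolute step; float32 has ~24 bits of *relative* precision.
print("uint16 step as a fraction of full scale:", 1 / 65535)
print("float32 step near 1.0  :", np.spacing(np.float32(1.0)))    # about 1.2e-7 (2**-23)
print("float32 step near 0.001:", np.spacing(np.float32(0.001)))  # far finer in the shadows
```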


Bit depth for the resulting image and bit depth for processing are not "the same thing":
Just stretch the histogram of an 8-bit JPEG and you get banding (in Photoshop, for example).
Then take the same 8-bit image and do the same processing, but convert it to 16 bits for the processing and save the result back to 8 bits: you won't (generally) get banding in the 8-bit file when the processing was done in 16 bits.
The same thing can happen with smaller, more heavily stretched parts of the histogram when comparing 16-bit and 32-bit processing.
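Here is a minimal NumPy sketch of the experiment described above (my own illustration, not code from the thread): the same darken-then-brighten edit is applied to an 8-bit gradient, once with an 8-bit intermediate and once with a 16-bit intermediate, and only the former loses levels.

```python
import numpy as np

# Smooth 8-bit gradient (one row repeated), values 0..255.
grad = np.tile(np.linspace(0, 255, 2048).astype(np.uint8), (64, 1))

def darken_then_brighten(img, dtype):
    """Darken by 4x, then brighten by 4x, rounding to `dtype` in between."""
    maxval = np.iinfo(dtype).max
    scale = maxval / 255.0
    work = np.clip(np.round(img * scale / 4.0), 0, maxval).astype(dtype)  # intermediate rounding
    return np.clip(np.round(work * 4.0 / scale), 0, 255).astype(np.uint8)

out8 = darken_then_brighten(grad, np.uint8)    # edit done with an 8-bit intermediate
out16 = darken_then_brighten(grad, np.uint16)  # same edit, 16-bit intermediate

print("distinct levels, 8-bit intermediate :", len(np.unique(out8)))   # ~65 -> visible banding
print("distinct levels, 16-bit intermediate:", len(np.unique(out16)))  # 256 -> smooth
```

The output is 8-bit in both cases; only the working precision differs, which is the point being made above.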

Well, even if the original 16-bit fixed-point (uint16) data were converted to 32-bit float, the only improvement would be in the rounding of internal calculations. The original 16-bit discretisation of the data would remain…
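A quick sketch of that point (illustrative, not from the thread): promoting uint16 data to float32 does not create any new levels; it only avoids extra rounding in the maths that follows.

```python
import numpy as np

rng = np.random.default_rng(0)
data16 = rng.integers(0, 2**16, size=100_000, dtype=np.uint16)  # stand-in for 16-bit raw values
data32 = data16.astype(np.float32)                              # promoted to float for processing

# Promotion changes the container, not the information:
print(len(np.unique(data16)), len(np.unique(data32)))  # same number of distinct levels
# float32 only helps by avoiding *additional* rounding in intermediate calculations.
```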

Just try this: the 8-bit vs 16-bit histogram-stretch experiment described above.

When I say that demosaicing results in a 16-bit RGB image, I mean the in-memory image that the converter creates. As a matter of fact, that is the only image; JPG, TIFF, etc. are disk files containing that image in a particular format.

I still want to go back to the original post. I don't see any banding there.

George

I think you mean a 16-bit working space (every operation is then done in 16 bits, whereas 32 bits gives more precision for intermediate steps; see my 8/16-bit real production case above).

Some sensors already provide 16-bit data (in the cinema world at least; photography is maybe still limited to 14 bits), so a 16-bit working space leaves no room for the extra precision needed in intermediate steps.

And future sensors will probably provide more dynamic range.

But yes, this is maybe getting off-topic.

PL, like any other converter, creates an RGB raster image in memory with a bit depth of 16. Most cameras create a raw file with a bit depth of 12 or 14. Those sensel values are used to create RGB pixels with 16-bit values, so the initial RGB image probably has a bit depth of 16 but with values quantised in 12- or 14-bit steps. That will change after the first edit.
Dynamic range is something different from bit depth; don't mix them up, as I often see done. Dynamic range is the relationship between the darkest and the lightest values the sensor can register. The same goes for your output devices.

George
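A small sketch of the point about 12- or 14-bit data living in a 16-bit container (illustrative, not from the thread): scaling 14-bit code values into a 16-bit range simply spaces them 4 codes apart.

```python
import numpy as np

raw14 = np.arange(2**14, dtype=np.uint16)  # every possible 14-bit code value
rgb16 = raw14 << 2                         # placed on a 16-bit scale (multiply by 4)

print(len(np.unique(rgb16)))               # still only 16384 distinct levels
print(np.unique(np.diff(rgb16)))           # the step between levels is 4, not 1
```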

With more dynamic range, you need to store more values (unless you sacrifice precision).
Cinema 16-bit sensors provide more dynamic range than 14-bit photography sensors.

No, no, no.
Dynamic range concerns the sensitivity of the sensor. That range is captured as an analogue value, which is digitised using a certain bit depth. There is an analogue range the sensor can register, an analogue value that is registered, and a number saying what bit depth will be used.
Compare different cameras with different dynamic ranges: they all have the same bit depth.

George
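For reference, this is how the two quantities are usually defined (the full-well and read-noise figures below are illustrative, not from the thread): engineering dynamic range depends on the analogue chain, while bit depth only sets the number of ADC codes.

$$
\mathrm{DR} \approx \log_2\!\left(\frac{\text{full-well capacity}}{\text{read noise}}\right)
\approx \log_2\!\left(\frac{50\,000\,e^-}{3\,e^-}\right) \approx 14 \ \text{stops},
\qquad \text{ADC codes} = 2^{N_\mathrm{bits}}.
$$

The left-hand quantity does not depend on $N_\mathrm{bits}$; the bit depth only determines how many codes that analogue range is sliced into.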


So why, on my D850, do I need to record 14-bit raw at base sensitivity (ISO 64) to keep all the information, when I can save 12-bit raw at high ISO without losing information?
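One way to make the question concrete (illustrative numbers, not measured D850 values) is to compare the quantisation step with the noise floor at each ISO.

$$
\text{12-bit step expressed on a 14-bit scale} = 2^{14-12} = 4 \ \text{DN}.
$$

If the read noise at high ISO amounts to several DN on that 14-bit scale, a 4 DN quantisation step adds nothing visible, so 12-bit files lose nothing; at ISO 64 the noise can be well below 1 DN, so the two extra bits still resolve real shadow detail.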

I don’t know where that statement is coming from.

George

Wasn't it explained by @noname that the problem is with the RAW data coming from the camera? The next post on Kasson's blog gave the reason:
GFX 100 PDAF banding is fixed - the last word (kasson.com)

Am I missing something?