PL7 Shadow Banding

Depends on how much you have to stretch a part of the histogram …
Computers run at 32-bit precision. It would have been smarter to use this precision.

HDR needs floating-point precision, which is not the same as 32-bit precision per channel, even if FP numbers are coded on 32 bits.
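To make that concrete, a quick sketch (just poking at the number formats with NumPy): a 32-bit float spends 8 bits on its exponent, so its steps near full scale are coarser than those of a 32-bit integer scale, but that exponent is exactly what gives it the huge range HDR needs.

```python
import numpy as np

# Step size near full scale (1.0) for float32: about 2**-23.
print(np.finfo(np.float32).eps)     # ~1.19e-07
# Step size of a 32-bit integer scale normalised to 0..1: 2**-32.
print(1 / 2**32)                    # ~2.33e-10  (finer steps than float32)
# But float32 trades those steps for range, which is what HDR is about.
print(np.finfo(np.float32).max)     # ~3.4e+38
```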


Bit depth for the resulting image and bit depth for processing are not “the same thing”:
Just stretch the histogram of an 8-bit JPEG and you get banding (in Photoshop, for example).
Then take the same 8-bit image, do the same processing, but convert it to 16 bits for the processing and save the result back to 8 bits: you won’t (generally) get banding on the 8-bit file when the processing was done in 16 bits.
The same thing can happen between 16 and 32 bits, on smaller and more heavily stretched parts of the histogram.
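A minimal NumPy sketch of that experiment (the gradient and the edit factors are made up purely to illustrate): when every intermediate step is rounded back to 8 bits the tonal levels collapse, while doing the same maths in higher precision and quantising only once at the end keeps them.

```python
import numpy as np

# A smooth full-range 8-bit gradient, standing in for one channel of a real image.
ramp = np.arange(256, dtype=np.uint8).repeat(100)

# Two chained edits (made-up factors): a heavy exposure pull, then a strong lift.
darken, brighten = 0.1, 10.0

# Pipeline A: every intermediate result is rounded back to 8 bits.
step1 = np.clip(np.round(ramp * darken), 0, 255).astype(np.uint8)
out8  = np.clip(np.round(step1 * brighten), 0, 255).astype(np.uint8)
print(len(np.unique(out8)))    # ~27 levels left -> visible banding

# Pipeline B: the same edits carried out in float, quantised to 8 bits only once.
outf = np.clip(np.round(ramp.astype(np.float32) * darken * brighten), 0, 255).astype(np.uint8)
print(len(np.unique(outf)))    # 256 levels -> smooth
```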

Well, even if the original 16-bit fixed-point (uint16) data were converted to 32-bit float, there would only be an improvement in the rounding of internal calculations. The original 16-bit discretization of the data would remain…
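A tiny sketch of that point (hypothetical data): the float copy rounds intermediate maths less, but it cannot bring back levels the 16-bit quantisation never had.

```python
import numpy as np

# Hypothetical 16-bit source that only ever used every 256th code value.
src16 = np.arange(0, 65536, 256, dtype=np.uint32).astype(np.uint16)
as_float = src16.astype(np.float32)

print(len(np.unique(src16)), len(np.unique(as_float)))   # 256 256
# Float maths rounds intermediate results less, but the upconversion
# alone cannot invent tonal levels the 16-bit source never had.
```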

Just try the 8-bit/16-bit histogram-stretch experiment I described above.


When I say that demosaicing results in a 16-bit RGB image, I mean the in-memory image that the converter is creating. As a matter of fact, that’s the only image. JPG, TIFF etc. are disk files containing that image in a certain way.

I still want to go back to the original post. I don’t see any banding there.

George

I think you mean a 16-bit working space (every process is then done in 16 bits), whereas 32 bits gives more precision for intermediate steps (see my 8/16-bit real production case above).

Some sensors already provide 16-bit data (in the cinema world at least; photography may still be limited to 14 bits), so a 16-bit working space leaves no room for the extra precision needed in intermediate steps.

And future sensors will probably provide more dynamic range.

But yes, maybe off-topic.

PL or any other converter creates an RGB raster image in memory with a bit depth of 16. Most cameras create a raw file with a bit depth of 12 or 14. These sensel values are used to create RGB pixels with 16-bit values. So the initial RGB image probably has a bit depth of 16, but in steps of 12 or 14. That will change after the first edit.
Dynamic range is something different from bit depth; don’t mix them up, as I often see done. Dynamic range is the relation between the darkest and the lightest values the sensor can register. The same goes for your output devices.
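A small numeric illustration of those “steps of 12 or 14”, under the simplifying assumption that the converter just shifts the raw codes up into the 16-bit container (a real converter also applies white balance and a colour matrix):

```python
import numpy as np

raw12 = np.arange(4096, dtype=np.uint16)    # 12-bit raw codes: 0..4095
raw14 = np.arange(16384, dtype=np.uint16)   # 14-bit raw codes: 0..16383

in16_from12 = raw12 << 4    # 16-bit container: steps of 2**(16-12) = 16
in16_from14 = raw14 << 2    # 16-bit container: steps of 2**(16-14) = 4

print(np.diff(in16_from12[:3]))   # [16 16]
print(np.diff(in16_from14[:3]))   # [ 4  4]
```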

George

With more dynamic range, you need to store more values (unless you sacrifice precision).
Cinema 16-bit sensors provide more dynamic range than 14-bit photography sensors.
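A back-of-the-envelope way to see the trade-off, assuming a plain linear integer encoding (log or floating-point raw encodings change the picture):

```python
import math

def stops_covered(bits: int) -> float:
    """Ratio between full scale and the smallest nonzero step,
    expressed in stops, for a plain linear integer encoding."""
    return math.log2(2**bits - 1)

for b in (12, 14, 16):
    print(b, round(stops_covered(b), 1))   # 12 -> ~12.0, 14 -> ~14.0, 16 -> ~16.0
# A scene spanning more stops than that needs more bits,
# a nonlinear encoding, or coarser steps somewhere in the range.
```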

NO, NO, NO.
Dynamic range is about the sensitivity of the sensor. That range is captured as an analogue value which is digitized using a certain bit depth. There is an analogue range the sensor can register, an analogue value that is registered, and a number telling what bit depth will be used.
Compare different cameras with different dynamic ranges. They all have the same bit depth.

George


So why, on my D850, do I need to record 14-bit raw at base sensitivity (ISO 64) to keep all the information, while I can save 12-bit raw at high ISO without losing information?
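The usual back-of-the-envelope explanation (with purely hypothetical sensor numbers, not D850 measurements): at base ISO the sensor’s dynamic range can exceed 12 stops, while at high ISO the read noise after analogue gain eats into it, so 12 bits already cover what is left.

```python
import math

def dynamic_range_stops(full_well_e: float, read_noise_e: float) -> float:
    """Engineering dynamic range: log2(full well / read noise).
    The electron counts used below are hypothetical illustrations."""
    return math.log2(full_well_e / read_noise_e)

print(dynamic_range_stops(full_well_e=100_000, read_noise_e=5.0))  # ~14.3 stops, base ISO
print(dynamic_range_stops(full_well_e=3_000,   read_noise_e=2.5))  # ~10.2 stops, high ISO
# In the high-ISO case the data spans well under 12 stops,
# so a 12-bit raw container loses nothing.
```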

I don’t know where that statement is coming from.

George

Wasn’t it explained by @noname that the problem is with the RAW data coming from the camera? The next post in Kasson’s blog gave the reason:
GFX 100 PDAF banding is fixed - the last word (kasson.com)

Am I missing something?