Following a suggestion from Dr Internet I reprocessed the file with all optical corrections disabled (vignetting most particularly) and it does appear to ameliorate the problem significantly.
Banding does remain pretty visible, so I do wonder if there isn’t some underlying issue with DxO’s maths that loses precision here, but it is definitely better. (The obvious suggestion is that there’s a basic precision issue which the vignetting processing then compounds, but anyway, this is getting well beyond what I can really get my head around.)
(No image as it’s 1.15am here and I actually have to finish the edit but I’ll come back tomorrow and post some more samples if people feel it would be helpful).
I have a test image that I’ve given to DxO several times over the years, as it shows the same posterization that you’re seeing when boosting shadows and reducing noise. As you’ve found, it’s a combination of high noise, low image complexity, and additional corrections that alter the gradient (particularly vignetting correction). DxO offering only a single demosaicing algorithm can be a factor, along with denoising (DxO handles demosaicing and denoising in one stage). Other RAW processors with a variety of demosaic options might handle the test image better. Denoise algorithms also vary in their performance. For an image like yours, I like to use Topaz denoise software after using DxO software. Or I’ll give darktable a try.
If we boost shadows, posterisation can start to creep in, because we’re then working with only a small subset of the values from the originally huge pool of values in a RAW file.
We can think of this situation as working not with a 12-bit file, but with maybe a 4-bit file. That means the source has only 16 steps to work with…and boosting them can make the steps visible.
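To make that arithmetic concrete, here is a tiny numpy sketch of the idea (purely illustrative; it is not how any particular RAW converter works): a smooth gradient that lives entirely in the bottom 16 levels of a 12-bit file keeps its ~16 distinct steps no matter how far you push it, and the boost just widens the gap between them.

```python
import numpy as np

# A smooth gradient that sits entirely in the deep shadows of a 12-bit file:
# scene values between 0 and 16 out of 4096, i.e. effectively ~4 bits of data.
true_signal = np.linspace(0.0, 16.0, 1000)

# The sensor/ADC can only record whole levels, so the shadows are quantised
# to roughly 16 distinct steps.
recorded = np.round(true_signal)

# A 4-stop shadow boost (x16) stretches those few steps across a visible range.
boosted = recorded * 16

print("distinct levels before boost:", np.unique(recorded).size)              # 17
print("distinct levels after boost: ", np.unique(boosted).size)               # still 17
print("gap between adjacent output levels:", np.diff(np.unique(boosted))[0])  # 16
```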
Nevertheless, with higher-precision floating-point calculations and smarter interpolation algorithms, posterisation could possibly be reduced. And while this sounds simple enough, it’s actually more demanding: how can the interpolation algorithm know whether that 16-step gradient comes from a curved or a flat surface? The width of the steps gives some clue, but again we get into the “expensive” 80/20 territory, where the last 20 percent of quality improvement could cost 80 percent of the effort.
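Continuing the toy example above, a crude stand-in for “smarter interpolation” is simply smoothing the quantised steps in floating point (the kernel size here is an arbitrary choice, for illustration only). It shows both why this can work and why it is risky: the same filter that hides quantisation steps would also blur a gradient that really is stepped in the scene.

```python
import numpy as np

true_signal = np.linspace(0.0, 16.0, 1000)
recorded = np.round(true_signal)        # the ~16-step shadow gradient again

# Box-filter smoothing in floating point: a crude stand-in for a smarter,
# higher-precision interpolation of the quantised values.
kernel = np.ones(61) / 61.0
smoothed = np.convolve(recorded, kernel, mode="same")

interior = slice(31, -31)               # ignore edge effects of the convolution
print("largest jump before smoothing:", np.abs(np.diff(recorded)).max())                     # 1.0
print("largest jump after smoothing: ", round(np.abs(np.diff(smoothed[interior])).max(), 3))

# The catch: applied blindly, the same filter would also flatten a gradient
# that genuinely is stepped in the scene - the "curved or flat surface"
# ambiguity mentioned above.
```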
In principle, yes - but look at the original RAW before denoising is applied: while there isn’t much in the way of dynamic range or color depth in the lifted shadows, there is little to no posterization inherent in the image. It’s more an effect of how the denoising is done: gradients in fairly uniformly-colored areas of the image aren’t being preserved well enough. With more programming (perhaps another slider in the noise removal palette, or another denoise mode), DxO’s software could be made better able to handle these situations, where there isn’t much fine detail to preserve.
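As a rough illustration of that point (a toy model, not a claim about how DxO’s denoiser actually works): in the noisy original, the read noise effectively dithers the quantised shadow values, so averaging over a neighborhood can recover the smooth gradient; banding appears if a “uniform” area is instead flattened to too few output values. The kernel size and the coarse snapping step below are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

true_signal = np.linspace(0.0, 16.0, 1000)                               # smooth deep-shadow gradient
noisy = np.round(true_signal + rng.normal(0.0, 2.0, true_signal.size))   # read noise + quantisation

# Averaging over a neighborhood: the noise acts as dither, so the result
# lands close to the true smooth ramp even though each sample is quantised.
kernel = np.ones(61) / 61.0
averaged = np.convolve(noisy, kernel, mode="same")

# A crude stand-in for over-aggressive handling of a "uniform" area:
# snapping the region to a handful of coarse levels produces visible bands.
banded = np.round(averaged / 4.0) * 4.0

interior = slice(31, -31)               # ignore convolution edge effects
print("mean error, neighborhood average:", round(np.mean(np.abs(averaged[interior] - true_signal[interior])), 3))
print("mean error, coarse banding:      ", round(np.mean(np.abs(banded[interior] - true_signal[interior])), 3))
```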
Thanks everyone so much for responding. Really interesting. And yes, all of what you say makes sense. One might hope for improvement, but yes of course we’re in the margins here.
For reference, here is the Adobe Enhance/Denoise version of the same image:
(with greens desaturated to partially hide the green tint that this algo loves to add to deep shadow noise)
Sadly, Greg, the number of images I work with on a typical shoot gives me very strong motivation to find and stick to a fairly one-size-fits-all approach for most of my post-processing pipeline. I’m slow enough as it is! After a lot of testing, DxO/XD2 does seem to offer the best overall compromise for demosaicing/denoising. That’s not to say I’m not constantly on the lookout for ways to improve that pipeline.