Clipped highlights, but only in PhotoLab?

yes, saturates, clips, destroys, obliterates, etc

so we count three bugs:

(A) incorrect detection of the true white level / white point / clipping point / sensor saturation point (per channel, if needed) for the camera model / nominal ISO / etc. - for some camera models (thank you OP, again, for producing raw files that so clearly illustrate the matter)

(B) destroying unclipped raw data before or during the demosaick stage - and that happens regardless of “A”

(C) generating linear DNG output (via the NR/Optics option at least) with incorrect data for apps further down the processing pipeline (the demosaicked raw data scaling does not match the declared white level) - that is a direct result of “B” and possibly also of “A” (maybe not, though - too lazy to check) - that is what manifests itself as a “magenta tint” or “magenta skies” later down the road, but thanks to “C” we discovered “B”, and thanks to “B” plus the raws shared by the OP we discovered “A”
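For anyone who wants to verify “A” on their own raws, here is a minimal sketch of the kind of per-channel check RawDigger does, written with rawpy/LibRaw instead (assumptions: a Bayer CFA file, rawpy and numpy installed; the file name is just a placeholder):

```python
# Minimal sketch: compare per-channel raw maxima against the white level
# LibRaw reports, to see whether the data actually reaches the declared
# clipping point or sits well below it.
import numpy as np
import rawpy

with rawpy.imread("ZF_example.NEF") as raw:      # placeholder file name
    data = raw.raw_image_visible
    cfa = raw.raw_colors_visible                 # 0=R, 1=G, 2=B, 3=G2
    declared = raw.camera_white_level_per_channel or [raw.white_level] * 4
    for ch, level in enumerate(declared):
        vals = data[cfa == ch]
        print(f"channel {ch}: max DN = {vals.max()}, "
              f"declared white level = {level}, "
              f"sensels at/above level = {(vals >= level).sum()}")
```

If the maxima stop well short of the declared level yet the converter renders the area as clipped, the clipping is being introduced by the converter, not by the sensor.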


now, as far as I remember, there was a claim that DxO fixed the “magenta tint” issue for some Fuji camera model (that is “C” from above) … we need to check whether, for that camera model, they also fixed “B” (not sure if there was an “A” there like we have for the Nikon Zf)


so much for the “best in class” claim (unless there was fine print like “* in a class of its own”?) and the “bespoke testing” of camera/lens raw files

I always check my pics on a calibrated screen (set to 80 cd/m², with contrast limited to roughly 500:1 for printing) … not by the numbers.

The question could (should) be how to avoid tonal breaks [in German: Tonwertabriss], or in other words how to keep a smooth gradation. Pure white in the pic is no problem as long as it represents the highlights and there is no sharp separation from the next visible tone.

For further investigation it could be interesting to see how the camera plus raw converter react at lower ISO settings, e.g. when set to ISO 100 instead of ISO 3200.

Hi Wolfgang – although the subject captures are not optimally exposed, they are not edge cases either. We’ve all been there. Putting the spot exposure meter on the cheek area and lowering ISO, etc. would probably have resulted in a better capture. But let’s not shoot the messenger. Other apps have shown that there is some useful detail / color data in the areas obliterated by DxO PL. Worrisome too is that accumulating evidence suggests that the problem may not be specific to this camera.

it does not matter for the topic - the RawDigger data (the numbers) shows that there is no clipping

as noted multiple times already - do not do ETTR (intentionally, by error, or by chance) if the intent is for the raws to be processed in DxO PL or poorraw until the bugs are fixed … other apps like ACR or C1 would have no issue dealing with that kid’s face as it was shot, or even with more exposure still


noname: I do not equate optimal exposure with ETTR these days, but I do agree with you.

Yes, ETTL and use DeepPRIME :melting_face: :melting_face: :melting_face:

The problem for DxO is that they’re somewhat stuck with this bug:
If they fix it, there’s a loss of compatibility with the work already done with the current buggy versions.
Unless they leave a Bug/No bug compatibility button … :melting_face: :melting_face: :melting_face:


you say parametric edits can be screwed up… a good note (whether intended or not) - fixing those bugs indeed could screw things up for DxO … a proper company like Adobe or C1 has what is called a “process version” to account for such things: major changes that could break users’ existing parametric edits … so the user either has to upgrade the edits explicitly himself -OR- the new current process version is applied only to new raw files…


it is all about words… optimal for what ? for DxO it is certainly not ETTR … optimal for your raw converter of choice, your camera profile, your habits and skills and equipment, etc… of course w/ all those strings attached my optimal is not yours, etc, etc

For me, if there’s no clipping there’s no problem. But where DxO software clips, I would absolutely want the problem fixed and would be willing to redo previous work to get a better result.

For what it’s worth: at various points in the history of DxO software, there have been changes to the color rendering pipeline that have introduced incompatibilities with older adjustments. Nevertheless, DxO has been pretty good about approximating the same results with older sidecar files in newer software. Speaking for myself, I haven’t cared if the results change: if I’m re-examining an image in newer software, I’m usually willing to redo the edits affecting color and tonality. Detail is another matter: I wouldn’t want to redo healing and cloning adjustments.

So far, I haven’t noticed any highlight recovery problems with Olympus and Panasonic RAW files. But it is outrageous that any camera’s RAW files have this particular problem. DxO should have made fixing this, for all cameras, a very high priority.

Yes to all of that. Let’s not get off track here.

we can test, btw - once I get back home I shall try to find some time for a test


yes, but technically it is done through new options in the tools / a new tool (like we have with the new WGCS pipeline, which is not selected if you open any raw that had a DOP / was recorded in the database before Wide Gamut CS was introduced) … here, if you fix the matter behind the scenes AND that breaks parametric edits that users did unsuspectingly before, suddenly delivering more new data to them → you need to surface this either the way Adobe/C1 do, -or- add a new tool / a new option in some existing tool, for example a “lossless demosaick” checkbox (the existing raw engine in DxO could then be qualified as “lossy demosaick” … not that the loss happens during demosaick - probably before that stage, of course), unchecked by default for existing DOPs (or for raws already in the library) and checked by default for new raws… I vote for the ACR/C1 approach (process version / engine version selection)
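For illustration only, a tiny sketch of how such a process-version gate usually works conceptually - the field names and version numbers below are invented, this is not DxO’s, Adobe’s, or C1’s actual sidecar format:

```python
# Illustrative sketch only: a "process version" flag gating a raw-pipeline
# fix so that existing edits keep rendering as before, while new files get
# the corrected behaviour by default. All names and values are made up.
from dataclasses import dataclass
from typing import Optional

LATEST_PROCESS_VERSION = 2    # hypothetical: 2 = fixed (lossless) raw scaling

@dataclass
class Edit:
    raw_path: str
    process_version: Optional[int] = None   # None = no sidecar yet (new raw)

def resolve_process_version(edit: Edit) -> int:
    # Existing sidecars keep the version they were written with, so their
    # rendering does not change behind the user's back.
    if edit.process_version is not None:
        return edit.process_version
    # New raws default to the current (fixed) pipeline.
    return LATEST_PROCESS_VERSION

def render(edit: Edit) -> str:
    pv = resolve_process_version(edit)
    return "fixed raw scaling" if pv >= 2 else "legacy (clipping) raw scaling"

print(render(Edit("old.NEF", process_version=1)))   # legacy (clipping) raw scaling
print(render(Edit("new.NEF")))                      # fixed raw scaling
```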

Actually, you wrote exactly what I wanted to write. Except that I am using a Lumix GX9, and I bought a PL 7 license anyway… :wink:
The handling of the brightest parts of the image in PL really compares poorly with the competition. I have had the same experience. The Selective Tone sliders are also of little use in PL: they have a wide range of operation, but their real usability is very limited.

For clarity, I point out that there are at least two issues being mentioned here: the comparatively poor performance of the Selective Tone sliders and the clipping of highlights in RAW conversion. I’ve become increasingly bothered by the former, but have found relief through local adjustment masks. The latter is the subject of this thread and has no practical solution when it occurs.

check if the raw data is in danger of being clipped by DxO → if yes, convert to DNG → manually scale the raw data down [a pity that we can’t use floating-point DNs here] below DxO’s destruction point (and, if needed, inject some artificially clipped “RGB” sensels to make sure DxO sees where the true clipping is) → feed that to DxO …

not practical at all, but can be done… just for fun

PS: I think I shall test that on the OP’s raw file … (A) inject fake clipped RGB sensels and/or (B) scale the raw data down … and see what happens: how DxO’s code digests “A” in terms of how many sensels are needed, and whether “B” is of acceptable quality - a rough sketch of both steps is below
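A rough sketch of the two steps in memory, assuming rawpy and numpy; the file name and the 85% safety factor are arbitrary, and writing the modified CFA back into a valid DNG (e.g. with the Adobe DNG SDK) is not shown - this only illustrates the arithmetic:

```python
# Sketch of (B) scale the raw data down and (A) inject fake clipped sensels,
# operating on the Bayer CFA array in memory. Writing the result back into a
# DNG is left out; a DNG writer such as the Adobe DNG SDK would be needed.
import numpy as np
import rawpy

SAFETY = 0.85   # assumed: push everything to ~85% of the white level

with rawpy.imread("ZF_example.NEF") as raw:      # placeholder file name
    cfa = raw.raw_image_visible.astype(np.float32)
    black = float(np.mean(raw.black_level_per_channel))
    white = float(raw.white_level)

    # (B) scale the above-black signal down so nothing sits near the level
    #     at which DxO starts destroying it; integer DNs force a rounding
    #     step here, which is the precision loss lamented above
    scaled = black + (cfa - black) * SAFETY
    scaled = np.clip(np.round(scaled), 0, white).astype(np.uint16)

    # (A) inject a small patch of truly clipped sensels in one corner so the
    #     converter still "sees" where the real clipping point is
    scaled[:4, :4] = int(white)
```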

I still reckon there’s a basic point being overlooked:

  • With the entirely reasonable suggestion that;

In which case, @john4pap/Ioannis: have you reported this example to DxO Support?


camera profile = color transform; it has nothing to do with what happens at or before the demosaick stage… changing the camera model in the EXIF tag results FIRST OF ALL in a change of the processing that happens at or before demosaick … and that is where all the trouble lies… how many times must it be repeated that the camera profile can only be applied after demosaick, when all the damage is already done?

a color transform (rather, a “color” transform, as there is no color, or even “color”, before it is done) is what happens when a “camera profile” is “applied” … it is a process where, for each pixel, a set of numbers obtained through the initial stages of raw data decoding/processing and some form of demosaick (even for the old Foveon sensor implementation you still might need to do some subtraction math on the data in the 3 layers - so that is a simple form of demosaick too) is mapped into coordinates in a proper colorimetric color space (one that has a gamut, for example) … yes, you can construct a color transform that results in an effect that looks like what we see, but it was shown that the damage happens BEFORE any color transform in DxO PL is executed (repeating again - shown by exporting from DxO PL to linear DNG with only NR and optics corrections applied; the image data in that linear DNG is not color-transformed, period, case closed)… so camera profiles have nothing to do with it
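To make the ordering concrete, a toy sketch of what “applying a camera profile” amounts to - the 3x3 matrix is a placeholder, not a real profile for any camera; the point is only that it operates on already-demosaicked values and cannot restore anything destroyed earlier:

```python
# Toy sketch: a camera profile acts as a per-pixel 3x3 transform applied to
# ALREADY-demosaicked, linear camera RGB. The matrix is a placeholder.
import numpy as np

CAM_TO_XYZ = np.array([[0.6, 0.3, 0.1],    # placeholder coefficients only
                       [0.2, 0.7, 0.1],
                       [0.0, 0.1, 0.9]])

def apply_camera_profile(demosaicked_rgb: np.ndarray) -> np.ndarray:
    """demosaicked_rgb: H x W x 3 linear camera RGB (post-demosaick).
    Returns H x W x 3 XYZ. Whatever was lost before this step stays lost."""
    return demosaicked_rgb @ CAM_TO_XYZ.T
```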

I reckon some people should learn the basics before posting.