Clipped highlights, but only in Photolab?

I agree support has never been very good, and I think all the mounting problems are overwhelming them. All too often it was DxO staff here who had to get problems resolved after support had denied there was anything wrong, and now they are no longer here…!
I keep being asked for the same information about the Internal Error bug, which never sounds hopeful, not least their recent attempt to get me to close my ticket on it. Highlight clipping as a reported problem has been around since PL was released, and I expect it was there before. The exact causes may well have been covered here in depth for the first time, rather than just the regular complaints about it. But it’s a long-standing issue that has been ignored by DxO.

Here we touch the core of why I think a lot of people stay with PhotoLab.

This is not because of its interface (sluggish even on top-end computers).
This is not because of its functions (it lacks a lot of basic functions, and the existing ones are very, very … basic).
It is because people think they have a good, if not the best, demosaicer, which allows them to get the best from their camera and lenses (even if the colors are really wrong and need a lot of work).

But clipping the raw and losing data ruins a good part of this!

The demosaicking is nothing special … maybe you mean NR (DP/DPXD), but that is already being approached by Adobe and, I bet, will be approached by C1 next year … and as for optical corrections, for most purposes and for a wider audience those are simply covered by the manufacturer’s own optics corrections embedded in raw files, plus Adobe/C1 make their own profiles… there is no point arguing about it: for mass consumption, those do work… what else? A refuge for people not willing to pay a subscription?

Yes, my words are often not precise enough when I write in English.
But since they say the denoising is done at the raw level, this is what I meant. (I was including in the term “demosaicing”, not very accurately I know, everything that happens at the raw level, such as the optical corrections, which occur at the raw level too.)

Yes, that’s one of their strong points too.

Yes. I think this is quite sad. There are several things that I like about Photolab. But it has very few tools, its performance is noticeably worse than PS, LRC and C1, and now this.
My first negative impression of the program was DxO’s decision not to implement, in the non-destructive environment of Photolab, several powerful tools found in ColorEfex (Nik filters that I miss in Photolab: Pro Contrast, Contrast Color Range, Contrast Only).
The second was DxO’s implementation of the Nik Collection, whose performance is now noticeably worse than it ever was (the picture distorts while editing).
I think that what we are discussing now was the final nail in the coffin.

Let’s hope they react, but I don’t believe it at all…
I would be curious to know how PL’s market share has evolved in recent years. Up or down?
In any case, I think that Adobe dominates by a large margin, followed by C1. DxO probably represents only a very small percentage…

I opened the image in Darktable - with only White balance on - and it seems to be a normally exposed raw file.

Try some of the other options for demosaic and highlight recovery in darktable. Also turn on the “Visualize clipped highlights…” option whilst you are doing it.

The “good” photo (2058) had some channels blown out in raw (white part of the box). In this case the ‘maximum’ field of the LibRaw object wasn’t clipped.

The “bad” photo (2059) had no blown pixels at all (everything below the 16k full scale of a 14-bit raw), but its data maximum was still above the 0.75*16k threshold (0.75 = LIBRAW_DEFAULT_ADJUST_MAXIMUM_THRESHOLD), so the LibRaw.maximum field was clipped down to the data maximum. See the link to the libraw site given above for some background.

This threshold was invented to prevent pink highlights but the default value proved to be too high in this particular case.
Perhaps DxO should provide us with a slider to tune highlight recovery at the RAW data level for specific cases like this. (Just to be sure: this has nothing to do with the ‘Exposure’ or ‘Highlights’ sliders.)
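To make the mechanism concrete, here is a minimal sketch of the adjustment rule as the libraw documentation describes it. The function name and the toy values are mine; this is a paraphrase of the documented behaviour, not LibRaw’s actual code:

```python
# Sketch of LibRaw's "adjust maximum" rule, assuming the documented
# behaviour of adjust_maximum_thr
# (default LIBRAW_DEFAULT_ADJUST_MAXIMUM_THRESHOLD = 0.75).
# Illustrative paraphrase only, not LibRaw source.

def adjust_maximum(data_maximum: int, nominal_maximum: int,
                   threshold: float = 0.75) -> int:
    """Return the effective white level.

    If the brightest recorded value sits between threshold * nominal
    and the nominal maximum, the white level is pulled down ("clipped")
    to it; otherwise the nominal (camera-table) maximum is kept.
    """
    if 0 < data_maximum < nominal_maximum and \
       data_maximum > nominal_maximum * threshold:
        return data_maximum       # maximum gets clipped to the data max
    return nominal_maximum        # e.g. truly blown channels: no change

# 14-bit raw: nominal maximum is 16383 (the "16k" above)
print(adjust_maximum(16383, 16383))  # blown highlights: stays 16383
print(adjust_maximum(15000, 16383))  # above 0.75*16383: clipped to 15000
print(adjust_maximum(11000, 16383))  # below threshold: stays 16383
```

This also shows why the two test photos behaved differently: the blown-out one leaves the maximum untouched, while the one just below full scale triggers the clip.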

I don’t have darktable, but I’ve seen that it has a slider to control highlight recovery. Maybe if it’s moved to the right, the problem can be reproduced there as well.
Probably C1 uses 0.55 or something like that for this threshold. Affinity was plagued by this problem some time ago. Personally, I didn’t have this problem with PL7, but now I understand why.

Side remark: the public version of LibRaw 0.21.2 has no problem reading losslessly compressed Zf raws, but complains about HE compression (as in the ‘pink’ case of a Zf used in a Chinese bar, somewhere in this forum). I just wrote a tool using libraw.dll to analyze some of my photos, but it is still very preliminary: just general statistics and histograms. It’s a command-line version of RawDigger, but with more control. I still have to dig through the libraw sources to be sure I don’t step on a mine (e.g. hot pixels).
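For anyone curious what the statistics pass of such a tool might look like: below is a rough sketch of per-CFA-channel min/max plus a coarse histogram, assuming the mosaic has already been decoded (e.g. by libraw) into a flat list of 14-bit values with a known 2×2 CFA pattern. The function name, the RGGB pattern and the toy data are illustrative, not the actual tool:

```python
# Sketch: per-channel statistics over a decoded Bayer mosaic.
# Assumes the raw values were already extracted (e.g. via libraw);
# all names and sample data here are hypothetical.
from collections import Counter

def channel_stats(mosaic, width, cfa=("R", "G", "G", "B")):
    """Split a Bayer mosaic into CFA channels and report min/max/count/hist."""
    stats = {}
    for i, v in enumerate(mosaic):
        row, col = divmod(i, width)
        ch = cfa[(row % 2) * 2 + (col % 2)]        # position in the 2x2 tile
        s = stats.setdefault(ch, {"min": v, "max": v, "count": 0,
                                  "hist": Counter()})
        s["min"] = min(s["min"], v)
        s["max"] = max(s["max"], v)
        s["count"] += 1
        s["hist"][v >> 6] += 1                     # 256 bins for 14-bit data
    return stats

# Toy 4x4 mosaic, row-major
mosaic = [100, 200, 110, 210,
          300, 400, 310, 410,
          120, 220, 130, 230,
          320, 420, 330, 430]
stats = channel_stats(mosaic, width=4)
print(stats["R"]["max"])   # 130
print(stats["B"]["max"])   # 430
```

Comparing the per-channel maximum against the camera’s nominal white level is exactly what would reveal the threshold situation discussed above.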

What would the purpose be? The image is 100% normal and can be processed in all directions, not excluding the normal standard workflow here in Darktable for brilliant results ;O) Do you suggest I strip my ‘car’ down to find out why it works perfectly? I really can’t see what the problem should be with this image file. Is somebody having fun?

DxO claims they “test” actual cameras, so they should be perfectly capable of establishing, for each camera model (× nominal ISO × whatever), the real clipping point per channel, without any default values or user sliders … it is the job they are paid for by the customers who buy their products… this is raw conversion 101, and they do not do it.

No, I suggest you use the tool you have used with the image and move a couple of sliders (not exposure), and maybe select a few other options from the drop-down menu above those sliders.

I repeat: for what purpose? What is your idea?

He wants to see a burned image produced with darktable, I suppose …
PL wouldn’t be the only one … ??? :face_with_diagonal_mouth:

Thank you, yes. I had already written about it here earlier:

And if the issue is visible depending on the options used to demosaic, then…

I suggest sticking to DxO PL, and if there are issues in DarkTable, then go to the open source cesspit =

As you must insist on playing Captain Oblivious:

It’s not about discussing the tool itself but rather showing that the problem OP presented can be reproduced in other software which provides names for what it’s doing at that point.

That information can be used to learn more (except, for example, by people religiously using one piece of software to pixel peep). As I mentioned, that’s not something I’ll be doing in this case. My head is already too small for all the things I’m trying to learn anyway.

I don’t get this. The raw file in question is close to perfectly exposed according to the histogram in Darktable when everything but the demosaicing and the white balance has been turned off; that means no remapping of extreme values is going on. And yet the soft skin of the model shows all its details. The histogram looks like a plateau with a short distance to pure black and white. The whole scale of tones in between is smoothly represented.
It would take me 5 minutes to make the image as good as any based on a well-exposed raw file.

I am not in the business of examining what could happen if I pushed this or that slider to the extreme. It would be like smashing the windscreen of my good old car to prove that it is made of glass. I don’t want to be irrational. I can use my time better.

I downloaded and tested the raw file in question in Darktable - just to add the fact to the discussion that this file works normally in my editor. It’s not even slightly bad.

Why add this fact to the discussion? Because I accidentally bought DxO PhotoLab 7 in 2023 believing that some strangeness had been corrected, but it hadn’t. After having discussed that in this forum, I get the regular newsletters.

Meanwhile I have not made a final edit in PL7. I use it when something becomes difficult in Darktable - just to see if it would be easier in PL. That has not been the case so far. At least PL works here as a reference.
I have used PureRAW 2-3 with great pleasure, to remove noise and for lens sharpening in some cases. Likewise I use Affinity Photo 2 for pixel jobs, when a corner needs to be added or a small annoyance removed.

98 percent of my editing is in Darktable, but admittedly this system is not as caring as commercial apps; the structure is different and much smarter. But once learned, I can’t imagine anything better, even among Adobe’s products, which I came to know over decades up to 2023. On top of that, Darktable is free to use.

Sorry to sound like a pusher. I just want to share something rational.


First of all, if you’re so sure that darktable will show something wrong on this image, why don’t you do it yourself and show us?
What game are you playing, asking others to tweak things that work in order to make them not work, while not doing it yourself?

Second, since you insist on not seeing what Camera Raw, FRV, Affinity Photo, NXstudio and C1 are able to produce with this raw (shown above) while photolab produces a terrible mess, here are two screenshots from darktable.
I don’t know this software very well, but it seems I used the pipeline recommended by the darktable creators (they said this is the one to use, and that the others are obsolete and not as accurate, if I’m not mistaken).

Look at the history to see which step is shown in these two screenshots (zoom is 100%, as you can verify; click several times on the images to see them at full size):
The first one is the image just loaded, nothing else: indeed a few rare isolated pixels are rendered wrong, but this is very far from the large zone screwed up by photolab.
The last one is after color calibration, and everything is fine: no more wrong pixels, and all the skin details are consistent.

If I am not using darktable the right way (which is quite possible), DO IT YOURSELF AND SHOW US!

image 1 :

image 2 :