Colour Management in PL6

@KeithRJ these look rather good.
Could you add Protect Saturated Colors and the “color rendering category/rendering” to the flowchart?
and these?
Vignetting is corrected first (optical corrections, CA, DeepPRIME denoising and WB before demosaicing, as these depend on calibrated data), then exposure, smart lighting, selective tones, contrast, ClearView, microcontrast and then the custom tone curve.

The optical-module corrections are re-done after every change in color (DeepPRIME denoising only on export, but before demosaicing, so demosaicing is done again after the last edit and all edit settings are applied in one go, just before conversion to the export product).
Exposure and the like are in the edit loop.
The rendering of Protect Saturated Colors is also in the edit loop, because of its push and pull in color and luminance.

(this would complete your flowchart.)

Thanks

Peter

Some explanation from DxO staff, given when PLv4 got its new DNG type:

In fact the DNG that we export has its color data expressed in sensor color space, we don’t switch to any usual color space (like sRGB, AdobeRGB, etc.) to avoid destructive transformation. Interface displays “as shot” but it should rather display “native camera color space”. And on the DNG produced this way, you can then apply any color transformation once reopened in PhotoLab, as you would have done with the direct RAW file.

And what is applied in this “rawDNG”:

Exactly these ones (distortion, vignetting, lens sharpness and chromatic aberration) + denoising and demosaicking. No color rendering applied (the picture remains in sensor color space).

The bolded corrections are all part of the optical module.


Thanks Peter. Perhaps I am remembering wrong, but I keep thinking of reading in several places that the sensor doesn’t have a “Color Space”. My understanding had been that the demosaicing software that converts sensor data to RGB must apply some kind of color space and rendering so that our screens will show us this image.

When PL shows us the RAW image on the screen ‘Something’ must be done so we can view it. Then when PL Exports as DNG with ‘optical and denoise only’ I understand the linear DNG to be “Partially Demosaiced”. Does that mean the same ‘Something’ we saw in PL, or is it something different? Then when I open the DNG in LR, do I see that same something OR does that not apply and I only see the rendering applied by LR?

Sorry if this post sounds confusing. It’s because I AM! :slight_smile:

Oh, I often am too! :sweat_smile:

No, you don’t remember wrong. Raw files don’t have a “color space” like the horseshoe diagram everyone knows.
They have a physical bound: a restriction on which wavelengths the camera sensor can capture. Every sensor has its own balance of wavelength sensitivity, which is “straightened” by the manufacturer’s algorithm to give a “natural” index of the captured light/photons, in order to reproduce colors as they looked when you saw them yourself. So the “sensor color space” is the range of wavelengths it can handle/capture. A spectrum.
(Infrared cameras have been made by “ripping off” the IR filter. I can’t recall how (I do have a link somewhere), but every normal camera can be turned into an IR camera.)

Demosaicing converts the R, G, B, G raster laid over the sensor, which only lets through the part of the wavelength range that charges each sensor well: red-ish, green-ish, blue-ish, and an extra green-ish because our eyes are most sensitive to green. (Note: every well can capture “all wavelengths” if you rip off that Bayer array; to be able to reconstruct “colors” they made a clever grid which gives a grouping in an RGB-like layout.) It’s much more complicated than what I wrote above, though.
After the demosaicing process, the “color” as we know it is present (it’s calculated from those three “wavelength-range” amounts of “light”). The white point for WB is set, which hopefully gives you back the correct color that was radiated from the object into your camera lens. It’s mapped into a preview-like RGB color space: the horseshoe ICC profiles.
The number of bits used to describe the saturation values of each channel determines the size of the color space.
I won’t go any deeper, because then I get lost in the details.
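The mosaic-to-RGB idea above can be sketched with a toy “superpixel” demosaic, where each 2x2 RGGB tile becomes one RGB output pixel. This is only an illustration of the data flow (the function name and sample values are made up); real demosaicers, including DxO’s, are far more sophisticated:

```python
# Crude "superpixel" demosaic: one value per photosite in,
# three values per pixel out. Each 2x2 RGGB tile in the mosaic
# becomes a single half-resolution RGB pixel.

def demosaic_rggb(mosaic):
    """mosaic: 2D list (even dimensions) of raw photosite values,
    laid out as repeating RGGB tiles. Returns a half-resolution RGB image."""
    h, w = len(mosaic), len(mosaic[0])
    rgb = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = mosaic[y][x]                            # red site
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2  # average the two green sites
            b = mosaic[y + 1][x + 1]                    # blue site
            row.append((r, g, b))
        rgb.append(row)
    return rgb

# A uniform grey patch: all photosites record 100 after calibration
print(demosaic_rggb([[100, 100], [100, 100]]))  # -> [[(100, 100.0, 100)]]
```

Note that even this toy version makes the key point: the “color” only exists after the per-site values are combined.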

What they mean is that it’s a “floating white point”: no real WB is set yet (like you have in an OOC JPEG or TIFF). All the color possibilities captured by the sensor and stored in the raw file are still there. (Loading a raw file into a working color space often means some colors are “clipped”, which means gone. DxO earlier used an AdobeRGB working color space, which was too narrow for some users; too much of the original data was clipped/compressed. So the option for them was to create a “rawDNG” with the full range of possible colors still in there, so you can process it in, say, LR, which was using a larger working color space.)
Now we have the Wide Gamut working color space in V6, so that isn’t needed any more.
The reason you might still use the “workaround” is to gain processing speed. (DxO uses real-time rendering and optical-module calculation, which slows down your preview when editing: the blur-blob moments.)
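The clipping point above can be sketched in a toy way: assume a very saturated color whose coordinates, after conversion into the working space, fall outside the representable [0, 1] range per channel. The function and numbers below are made up for illustration; real ICC conversions use 3D gamuts and rendering intents, not simple per-channel clamping:

```python
def clip_to_working_space(rgb):
    """Clamp linear channel values into the [0, 1] range the working
    color space can represent. Anything outside is lost for good."""
    return tuple(min(1.0, max(0.0, c)) for c in rgb)

# Hypothetical coordinates of a saturated color after conversion:
saturated = (1.18, -0.04, 0.35)
clipped = clip_to_working_space(saturated)
print(clipped)                 # -> (1.0, 0.0, 0.35)
print(clipped == saturated)    # -> False: the original color is unrecoverable
```

This is why a narrow working space (like the old AdobeRGB one) throws information away, while a wide-gamut one preserves more of the sensor data.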

The biggest “problem” with the rawDNG is interpretation differences, which can change the overall image color temperature due to the floating WB/white point. (So it can look different in DxO and LR as a starting point.)
Exporting and re-importing into DxO PL is not a problem, because I assume the same rendering is applied as when you keep it inside the working color space.

A TIFF has a “fixed”, pinpointed WB set, so far fewer surprises when opening it in different applications, at the cost of losing many of the colors in the process.

Last note: the original “linear DNG” of DxO is basically a 16-bit TIFF-like file in a DNG container. That’s why a linear DNG is crippled and not much better than a regular TIFF (the color space is narrowed down).
Again, I’ve simplified it, because when I go into depth and detail I get confused too.

edit: Don’t try this at home with your 1000-dollar camera.
It’s a very, eh, quick demonstration of adjusting a camera’s spectrum.

edit 2:
the rawDNG is a pass-through. You skip the working color space and the regular edit area: you just demosaic and pixelise the raw file into RGB values and go straight to export.

These are already there: Block 4 is your Colour Rendering and PSCA is the Protect Saturated Colours Algorithm

The intended purpose of the diagram is to help people understand colour management in PL6 and not to provide a workflow for all editing options in PL6. So, sorry I will not be adding anything that is not Colour Management :slight_smile:

Actually, demosaicing (along with DeepPrime) is the very first step in preparing the RAW file for editing and has nothing to do with WB. WB is initially taken care of by colour profiles which should have the initial WB settings and is applied AFTER demosaicing using WB settings in the camera profile. Manually changing the WB is an editing function and not part of colour management.

Any other comments and/or suggestions for the diagram will obviously be considered.

Below is a slightly updated version of the diagram with a small change to the wording for the application of PSCA during export.

Thanks to all for your comments and suggestions.


But I still think that ‘Soft Proof Preview’ is done in the ‘Monitor Profile’ and that should be part of the diagram.
As far as I understood, all the conversions are done via a ‘Profile Connection Space’, as shown in Soft Proofing - #26 by George. Every image goes via that Profile Connection Space, also the Soft Proof Preview.

George

@George, the monitor profile does not change when soft proofing as it is set by the OS. Soft proofing converts the in-memory image (which is in the monitor profile) to the SP profile and displays it on the monitor thereby simulating what the image will look like on the device for which the SP profile has been selected.

You will only see a difference on screen if the SP profile has a narrower gamut and/or different white point to the monitor profile.

So, soft proofing does display the image on the monitor, in the monitor gamut, but the image has first been converted to the SP profile, which usually has a smaller gamut than the monitor profile, so it should look different.
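The narrower-gamut case can be sketched with a toy model, where a single per-channel limit stands in for the real 3D ICC gamut of the proof device (the function name, the scale parameter and the sample values are all made up for illustration):

```python
def soft_proof(pixel, proof_gamut_scale):
    """Simulate soft proofing with a toy per-channel gamut model:
    the proof device can only reach `proof_gamut_scale` (0..1] of the
    monitor's channel range. Colors beyond that are compressed to the
    proof gamut boundary, then shown on the monitor as-is."""
    return tuple(min(c, proof_gamut_scale) for c in pixel)

vivid = (0.95, 0.40, 0.10)
print(soft_proof(vivid, 0.85))  # -> (0.85, 0.4, 0.1): looks duller on screen
print(soft_proof(vivid, 1.0))   # -> unchanged: proof gamut covers the monitor's
```

The second call shows why soft proofing to a gamut as large as (or larger than) the monitor's changes nothing visible.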

Hope that helps you understand this a bit better.


As a query: do we know if PL is now working with hardware monitor profiles, such as those for BenQ and Dell monitors? I know it didn’t not that long ago. If not, there will be problems for a lot of users, as these monitors are more widely used now. I never got my BenQ calibration to work, so I use Spyder, but it is working on my wife’s PC.

Since PL no longer provides the option to specify the “ICC Profile used for display” in Preferences (as it did pre-PLv6), it’s assumed that PL is now acquiring monitor profile details directly from the OS.

So, if you have a BenQ monitor then PL should be sourcing the ICC Profile for it, as it’s known to the OS.

John M

The display on the monitor will be in the monitor’s gamut.

That’s an important restriction you didn’t put in your diagram. What I see in the soft proof preview doesn’t have to be ‘true’.

George

Soft proofing is a simulation and can’t replace a real print (or projection etc.) - if we care for best quality.


That’s why I have always exported the JPGs, in whatever quality I will use them, to do a final check (and re-edit if need be), as you can’t see what results PRIME etc. will produce without exporting anyway. So far, apart from finding the new gamut warning useful, I haven’t bothered with soft proofing, as it doesn’t show the sometimes big effects of cleaning up noise; only an export will do that.


That is true, BUT if the SP gamut is smaller than the monitor gamut then the image will display as if it were in that smaller gamut, i.e. the colors will be modified to fit the SP gamut, thereby simulating the SP gamut.

I cannot put every detail into the diagram as it is designed to give a high level overview of how color management works in PL6.


Hi John,
I don’t know about your BenQ monitor, but you might want to have a look → here.

Wolfgang
(Eizo CG2730 + Eizo ColorNavigator)

Thanks. There were a lot of reports of problems with Palette Master Element, and in his video he also has major problems with different versions of the program, with videos on ways to overcome the problems (or which versions not to use).
I spent a long time with their support, who had direct access to the PC, and we couldn’t get it to work. Unlike in the video, we had, as they say to do, a USB connection to the PC and the calibration device running from the USB ports on the monitor.
In fact, my wife now has the same problem of it not working properly, and she has switched to just the Spyder calibration. They are very good monitors, but I and others have found a lot of problems with Palette Master Elements, and in many ways it’s probably easier to just use the same calibration program/hardware on both monitors, rather than messing about with two different programs, one of which has a very variable history of problems.

Thanks for that and for all the demosaic information.

It’s been taking me a while to digest all this but I really appreciate your spending the time to explain it all.

Thanks again, Rod

The quote was written by DxO staff.
What he means to say (I think) is that a WB change in the edit room will activate a CA-correction adjustment, which is done before the actual demosaicing to pixels.

FWIW, I just tried to add some back-loop info, which shows that the working space affects some corrections made earlier, in the first stage from raw to preview.
But yes, maybe it’s good not to complicate the flowchart.
This back-loop system prevents DxO from showing the fully rendered image, including denoising, in the preview. Same with the 75% zoom at which CA correction and microcontrast kick in.
Too much processing power is needed.

But it definitely will show you how an image with saturated colors will actually look when exported to disk (and the result then viewed on the same monitor - which is the typical case).

I recommend having Soft Proofing activated at all times … it does no “harm”.

John

Soft proofing contains two conversions: one to the soft-proof gamut and then one to the monitor’s gamut. I think that’s essential to understanding what’s going on. The OOG colors are based on the first conversion, and what you see is based on the second conversion. To me that’s an essential part of color management.
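This two-step description can be illustrated with a toy model, where per-channel limits stand in for the real ICC gamuts (the function and all numbers are made up; whether PL actually performs both steps is exactly what is debated below):

```python
def soft_proof_two_step(pixel, proof_max, monitor_max):
    """Toy model of the two-conversion view of soft proofing:
    step 1 converts to the proof gamut and records which pixels were
    out of gamut; step 2 converts that result to the monitor gamut
    for display. OOG warnings come from step 1, what you see from step 2."""
    in_proof = tuple(min(c, proof_max) for c in pixel)         # conversion 1
    oog = any(c > proof_max for c in pixel)                    # OOG flag
    on_screen = tuple(min(c, monitor_max) for c in in_proof)   # conversion 2
    return on_screen, oog

print(soft_proof_two_step((0.95, 0.40, 0.10), proof_max=0.85, monitor_max=0.90))
# -> ((0.85, 0.4, 0.1), True)
```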

George

@George, sorry but I totally disagree. Doing the double conversion as you suggest would accomplish nothing as you will be back where you started with your monitor gamut.

Soft proofing converts your image from the Working Colour Space to the soft proof profile and displays it on your monitor. You may end up with out-of-gamut colors if the SP profile is larger than your monitor gamut, but the idea is to simulate what it would look like in the SP profile, and there must be NO further conversions, otherwise you will not be simulating the SP profile.

As an aside, soft proofing to a gamut larger than your monitor gamut cannot simulate what the final output will be using the SP profile.

This is all logical if you put your mind to it.