Hi, Sankos, and thanks for your info. If you read my OP, you’ll see a section titled “Bug”, which may, in fact, be a feature. PL applies sharpening/local contrast as part of the adjustments. However, the effect is not as strong as what I see in Adobe Camera Raw (ACR). Also, it looks like ACR applies this extra touch just on the Shadows and Highlights sliders, not the Blacks and Whites, which is another difference from PL.
The conclusion remains: ACR (and presumably LR) works very differently.
I should note that “local contrast” is just an unsharp mask with a large radius and small amount. PL looks more like it’s sharpening and ACR looks more like it’s enhancing local contrast.
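As a rough illustration (a sketch of the general technique, not PL’s or ACR’s actual code): both effects can be modelled with the same unsharp-mask formula, differing only in the radius/amount parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius, amount):
    """Classic USM: add back a fraction of the difference between
    the image and a Gaussian-blurred copy of it."""
    blurred = gaussian_filter(img, sigma=radius)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# Grayscale image as floats in [0, 1]
rng = np.random.default_rng(0)
img = rng.random((64, 64))

sharpen        = unsharp_mask(img, radius=1.0, amount=1.5)   # small radius, large amount
local_contrast = unsharp_mask(img, radius=25.0, amount=0.2)  # large radius, small amount
```

Same code path both times; only the parameters decide whether it reads as “sharpening” or as “local contrast”.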
It would be easy to get into a discussion of changes to PL to make it work better. I appreciate that people have been sticking to analysis and workarounds in this thread. Hopefully, you have created a topic in the New Features forum for your suggestion.
I looked for a thread there, but couldn’t find one, so let me point out that, if you want a curve that shows how the source tonalities are mapped to the final tonalities, it may be impossible. Because of things like sharpening and local contrast changes, there is no 1-to-1 mapping, which such a curve would need. Even if you limit the curve to just the Selective Tone control, it appears to include some sharpening/local contrast.
I understand your intent, but I don’t think it’s possible. If you want to discuss this further, consider posting a new thread.
Quick reference of all workarounds mentioned so far:
Use the Tone Curve instead. But while the Tone Curve control provides a lot more flexibility, it is tough to make small changes with it. It is, however, pretty easy to use for stretching or reducing the maximum tonalities (by adjusting the endpoints).
Use Smart Lighting.
Use Clear View.
Use Contrast and Local Contrast.
Try using the tonality slider in the new HSL (I have yet to try this as a replacement for Selective Tone).
Select a less-contrasty camera profile as a starting point (interesting suggestion!).
Adjust the tonality using Local Adjustments (so as to limit the affected areas). Control points might be useful to both limit the area and limit the affected tonalities, but they have their own problems.
Use Selective Tone last. Try using adjacent sliders to compensate (although I noted problems with this approach above).
Did I miss any?
If you have Photoshop or Lightroom, one additional workaround is to use PL just for lens corrections and noise reduction and then ship a DNG to the Adobe application.
I did communicate this on a different level, and I understand your answer.
Before we discuss new implementations for better tools, we need to understand the present tool behaviour. The video I posted, where the sub-curve line was shown, used the Contrast tool alone.
What works very well is to set Smart Lighting boxes on dark and highlight spots and then use exposure compensation to level the exposure. At both ends, Smart Lighting pulls highlights back inside the dynamic range or lifts shadows when you lower the exposure. It’s a powerful combination: you can use Smart Lighting as a highlight/shadow leveller and the general exposure as an actual centre-weighted exposure.
It works a bit like iDynamic in auto mode on a Panasonic: it lowers the contrast level and lifts shadows while lowering exposure.
The screenshots above also illustrate how Selective Tone sliders modify local contrast in addition to global contrast. Note how the thin vertical lines in the histograms get lower and fatter as a result of the local contrast manipulation. Other tools simply move the thin vertical lines around.
PhotoLab’s local contrast effect is applied by the Selective Tone tool, as well as the Smart Lighting and ClearView Plus tools. The Contrast slider and the Tone Curve do not do any local contrast enhancements.
Lightroom’s Highlights/Shadows/Clarity sliders apply stronger local contrast corrections (prone to haloing) and, as far as I remember, are not just HiRaLoAm using simple USM/Gaussian-blur-based filtering, but make use of a Laplacian pyramid.
PhotoLab’s Microcontrast slider looks like a straightforward HiRaLoAm – you could simulate it by using the Unsharp Mask tool with something like [50 Intensity; 5 Radius], etc.
A typical “HDR-like” setting in Lightroom [-100 Highlights, +100 Shadows] can be simulated in PhotoLab by moving all four sliders [-100 Highlights, +100 Midtones, -30 Shadows, +100 Blacks]. Settings like these are prone to haloing, though, because of the local contrast enhancement hidden behind the sliders.
To get the greyscale ramp perfectly grey, you need to use the Tone Curve Black/White Output triangles (the Y axis) and move them both to 128.
Tone Curve Gamma values above 1 build shadow contrast and compress highlights but do not cause highlight clipping (the Black and White points are not clipped). Gamma values below 1 create highlight contrast at the cost of clipped Blacks.
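For what it’s worth, if the Gamma slider is a plain power function (my assumption, not something DxO documents), its behaviour can be sketched like this:

```python
import numpy as np

def gamma_curve(x, gamma):
    """Power-law tone curve on normalized values. The endpoints 0 and 1
    are fixed, so the curve itself redistributes tones rather than
    clipping them outright."""
    return np.power(x, 1.0 / gamma)

x = np.linspace(0.0, 1.0, 256)

lifted  = gamma_curve(x, 2.2)   # gamma > 1: shadows get steeper (more contrast)
crushed = gamma_curve(x, 0.45)  # gamma < 1: shadow values squeezed toward black

# Slope near black shows where the contrast went.
shadow_slope_lifted  = (lifted[1]  - lifted[0])  / (x[1] - x[0])
shadow_slope_crushed = (crushed[1] - crushed[0]) / (x[1] - x[0])
```

With gamma below 1, the near-black values end up packed so tightly that after quantization to 8 bits they merge into black, which would read as clipped Blacks even though the curve’s endpoints are fixed.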
In PhotoLab 2, the HSL tool has a Lightness slider which can alter global contrast when the All Channels option is selected. Maximum values of Lightness clip to Black or to White (there’s no shadows/highlights protection). This should be different in PhotoLab 3 because of the different colour model used by the improved HSL tool.
As far as a practical application of the above, see e.g. Thomas Niemann’s pdf file, which has some good suggestions on the second page.
OK, I have found another simple trick to push things inside the colour space you work in, so you don’t need to use the Selective Tone sliders too much:
See my videos: I use the Tone Curve and show that colour accentuation, Contrast and Selective Tone are much less useful for containing the colours inside the gamut (shown by the moon and sun blinkies when out of the sRGB gamut), and that the colour rendering profile needs to be addressed too. I was fooling around to see whether Contrast, Selective Tone, saturation protection and the like change anything in the blinkies for oversaturation and blown areas, while using the Tone Curve to lift and lower the 0-255 range to 4-245. Using the Tone Curve for this correction shows that I can push a great amount of contrast, making the image “pop” without blowing the gasket, in combination with colour-specific HSL correction.
In words: when you have colours clipped at the bottom (oversaturated) or blown by too much luminance, I can resolve this quickly using the Tone Curve.
The only thing I’m not sure of yet (I have to test it) is whether I just cut off data or “push it inside” by changing the numbers.
But it is a fast and interesting way to change highlights and blacks that fall off the chart.
(I hope someone who really knows colour spaces can help explain what I am seeing on my screen.)
My conclusion is that my Huelight profile isn’t calibrated for v3 yet and that the generic rendering profile works better in these situations to keep the hue, saturation and brightness inside the sRGB gamut. (Did I use the correct terms to describe my finding? Please correct me if not, to keep it clean.)
My second conclusion is that the sliders in Selective Tone and Contrast aren’t as powerful as I first thought and are more a sort of fine-tuning. I tried to use Smart Lighting and centre-weighted exposure correction to recover those blown, saturated blinky spots, but HSL and the Tone Curve did it much faster and were less destructive to the rest of the image’s colours.
Still, I suspect I’m making a mistake by thresholding blacks and lowering the highlight value, since that cuts off image data directly, while the sliders pull things back inside by selective recovery.
I was convinced that the colour recovery in Color Rendering (the Intensity and Protect Saturated Colors sliders) did much more of the “recovery”/“helping”.
I really hope this kind of management and use of the colour-related tools in the palettes gets described in much more detail in the user manual or in webinars/tutorials, explaining how to use each tool, in which way, and with which restrictions. Fooling around does help in finding tricks and workarounds, but a constructive, logical way of working through image issues to get the best out of them needs some technical background.
Yes, I think that would be very helpful. I’ve used tone curve mostly to deal with overexposed highlights (starting before PL3). But a better understanding of alternatives and their pros & cons would be useful.
Yes, I started to rediscover the Tone Curve through this topic: “black level” and ClearView Plus.
I tried to get images shot through a water surface to look “clearer”, and this opened my eyes to using the Tone Curve not only as a “contrast” curve but also as a “dehaze”.
And now colour “management”.
Same with the HSL tool: playing with it while watching the histogram with “moon” and “sun” on, in L or R or G or B only, shows some control of the spikes if you mess around with the Saturation and Luminance sliders. (Of course your colours change, but in combination with the other tools it’s usable to get things done; the problem is finding out which to use when.)
I don’t believe the phrase “out of gamut of sRGB” makes sense.
RAW images have a colorspace: the colorspace of the sensor. That’s it. Any camera setting for Adobe RGB, sRGB or whatever is used only when the camera creates a JPEG. If you are working with a RAW file, this setting is irrelevant.
I believe the sensor color is interpreted using the chosen Color Rendering, but I don’t think this means that it’s placed in an sRGB colorspace—I would expect it to be stored in a much bigger colorspace.
When you view an image in PL, the color is converted from PL’s internal colorspace to the monitor’s colorspace. When you print it, it is converted to the printer’s colorspace. When you export it, it is converted to the chosen colorspace. All of this is most useful if the starting colorspace is a large one (such as XYZ) rather than a small one like sRGB. The colors that are out-of-gamut could be different in all these cases.
With a large colorspace (like XYZ), you might never have an out-of-gamut color and it is not clear that PL is giving an out-of-gamut indication when you turn on the clipping indicators. It looks like PL converts the color into an 8-bit/channel RGB value (and I’m not sure how—is this the monitor’s colorspace?), which it displays under the histogram. Any channel with a value of 255 is marked as a highlight clip; if the value is 0, it is a shadow clip. (I hope the internal value is more than 8-bits/channel or else we are all wasting our time producing 16-bit TIFFs).
A color that is not indicated as clipped might be out-of-gamut. And colors like (0,0,0) and (255,255,255), which are considered clipped, might not be out-of-gamut.
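If the indicators really work on those 8-bit values, the logic would be something like this sketch (the per-channel test is my guess at the behaviour, not DxO’s documented algorithm):

```python
import numpy as np

# Hypothetical 8-bit RGB samples (a 2x2 image)
rgb = np.array([
    [[0, 12, 40],  [255, 255, 255]],
    [[10, 0, 0],   [200, 255, 30]],
], dtype=np.uint8)

# A pixel gets flagged when ANY channel sits at the extreme of the range.
highlight_clip = (rgb == 255).any(axis=-1)
shadow_clip    = (rgb == 0).any(axis=-1)

# Note that (0,0,0) and (255,255,255) are "clipped" by this test even
# though, as pure greys, they can be perfectly inside the gamut.
```

This also makes the point above concrete: the test says nothing about gamut, only about hitting the ends of the 8-bit range.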
By adjusting the Tone Curve endpoints, you prevent any channel from having a value of 0 or 255 and thus from ever showing a clipped value. The Tone Curve seems to be near the end of the processing pipeline, so this prevents any prior controls from ever generating a clipped value. Whether this is a good thing or not, I’m not sure.
I am far from an expert in the subject of colorspaces, particularly with regard to RAW processing and preview renderings. Anyone who is an expert should feel free to correct me.
I am not certain how this works, but I know the initial “colour space” is the sensor’s, stored in the raw file. Raw files aren’t colour images; they contain numbers representing sensor readings, which raw developers interpret into a colour space like Adobe RGB or sRGB according to testing of that sensor. Most sensors are capable of recording more wavelengths than we can see or need. Much of it is filtered out by a UV/IR filter, for instance; that is what’s removed for IR-converted cameras, bringing them towards a full-spectrum reading.
The preview we see in the raw developer is in a real colour space like sRGB or Adobe RGB. The numbers in the raw file are rendered into the colour space I set up in the preferences, if I can (DxO follows the camera setting, I believe). Hence the clipping tools: they show where the numbers fall off the chart.
And by changing the sliders and such, you recalculate the numbers of the raw file into other hue, saturation and luminance levels. By compressing? That I don’t know for sure, but I think so.
Using a larger colour space than your monitor can handle will affect the outcome, because you are modifying colours in the dark, outside what your screen can display.
One thing I never understood is how the tools work on the colour. I think you need a tool like the one the Mac has to compare colour spaces in 3D (see the added YouTube video), but in real time, so you can see which hue actually falls outside the sRGB colour space. Say a reddish region is bulging out: you could choose to compress all channels (only highlights and/or oversaturated shadows and blacks), shift the image to where you have empty space (by exposure compensation), or just recalculate/compress the reddish channel (the HSL tool?).
I think it’s 12 bits max per channel for colour definition in a raw file (I can’t recall where I read that), while an sRGB JPEG is 8 bits and a TIFF has space for 16 bits/channel to define colours.
I think when I compress the Tone Curve by lifting and lowering the endpoints, I’m actually telling it to recalculate the raw numbers into a smaller range of possibilities: 0-255 becomes 10-245, which is 20 steps fewer than where the clipping tools have their threshold on both sides. (Edit: I think the moon and sun use about 0-10 and 245-255 as thresholds to start showing clipping.)
So that’s what I want to know: if I use the Tone Curve, do I cut off the edges of the sRGB colour space, or do I force it to recalculate the raw file numbers into this new, smaller range of “black and white”?
I fear it cuts off, as in: no real black and no real white.
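To make the two possibilities concrete, here is a sketch of the arithmetic (just an illustration of the difference; not a claim about what PhotoLab actually does internally):

```python
import numpy as np

values = np.array([0, 5, 10, 128, 245, 250, 255], dtype=np.float64)

# Possibility 1 - hard clip: everything outside 10..245 collapses onto
# the limit, so the detail in those ranges is destroyed.
clipped = np.clip(values, 10, 245)

# Possibility 2 - linear compression: the whole 0..255 range is rescaled
# into 10..245, so the extreme values stay distinct, just closer together.
compressed = 10 + values * (245 - 10) / 255.0
```

Moving the Tone Curve *output* endpoints should behave like the second case (a rescale), which would match the earlier observation in this thread that no channel can reach 0 or 255 afterwards; but only DxO knows the pipeline for sure.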
I agree the real stamp of applying a colour space comes at export. Then all numbers representing a colour outside the colour space (like sRGB) are clipped and aren’t stored in the file.
The clipping indicators in PhotoLab, esp. the moon, are a bit misleading, because the latter combines black point clipping info with display out-of-gamut warnings. It would really be useful to have a separate OOG warning for the display and for the working colour space – these two might be more or less similar if you have a wide gamut display very close to Adobe RGB, but what about the users who don’t have such displays?
When you have a display close to sRGB, and you edit your image paying attention to the Moon indicator, you effectively do sRGB soft-proofing for sRGB output. But what if you want to export the file in Adobe RGB, e.g. for printing or for people with wide gamut displays? Your edit won’t be optimized for those outputs because your display is incapable of showing you colours outside of sRGB, and you have no working colour space out-of-gamut warnings (in PhotoLab’s case, an out-of-Adobe-RGB-gamut warning) – you will throw away important colours that PhotoLab is capable of rendering, despite its limiting working colour space.
And one last point – sometimes it’s not worth it to worry about slight out-of-gamut issues because the perceptual rendering intent applied during export should take care of this. Without proper soft-proofing one needs to learn to trust it will do just fine, without you having to resort to the Tone Curve tricks. Currently in PhotoLab you have to do hard-proofing, i.e. export and evaluate if there are no colour rendering issues, or create variants / virtual copies for each kind of output (not really ideal).
Yes, RAW files are just numbers and they are characterized so as to match the numbers with colors in some color space.
I expect RAW processors to use some internal color space. Apparently, Lightroom uses a special version of the ProPhotoRGB color space and it stores channel coordinates in 16 bits. You want a color space that is large enough to encompass all colors that can be captured by all sensors now and in the reasonable future. sRGB, for example, would be a poor choice for an internal color space.
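As an aside on why a big internal space is convenient: once colours are kept in something like XYZ, checking whether one fits into sRGB is just a matrix transform plus a range check. A sketch using the standard D65 XYZ-to-linear-sRGB matrix (nothing PL-specific):

```python
import numpy as np

# Standard D65 XYZ -> linear sRGB matrix (IEC 61966-2-1)
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def out_of_srgb_gamut(xyz):
    """A colour held in XYZ is outside the sRGB gamut when its linear
    sRGB coordinates leave the 0..1 cube."""
    rgb = XYZ_TO_SRGB @ np.asarray(xyz, dtype=np.float64)
    return bool((rgb < 0).any() or (rgb > 1).any())

print(out_of_srgb_gamut([0.1901, 0.2000, 0.2178]))  # a mid grey: inside sRGB
print(out_of_srgb_gamut([0.2, 0.8, 0.1]))           # a saturated green: outside
```

Nothing is lost by keeping the colour in XYZ until output; the check (or conversion) happens only at the edges of the pipeline.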
As sankos’ linked post shows, Photolab produces a histogram based on one of three choices in the Preferences dialog: the monitor’s color profile, sRGB or Adobe RGB. I believe this means that, for the histogram, PL converts colors from its internal format to the selected profile. I believe the RGB value displayed under the histogram comes from a similar conversion.
Change the profile in the Preferences dialog and the histogram changes. I did this with one image. Then I saved the image with the Export dialog using the same color space each time (but different Preferences profiles)—the resulting images were identical. PL’s profile preference affects the displayed histogram and RGB value, but not the internal color space.
I thought I understood clipping, but I don’t think I do. For instance, I have a color swatch that reads (239,27,9). I thought only values of 0 would be indicated by shadow clipping, but turning on just shadow clipping changes the value to (254,255,255). Turning on just highlight clipping changes it to (1,0,0). Of course, PL could base clipping on the luminance value. Unlike RGB, this is not displayed as a number, and so it’s difficult to know what the value might be.
Like most things in PL, when examined closely, the algorithms become rather mysterious.
Yes, but it also crucially affects the preview you’re seeing. If I set my Preference display profile to sRGB, the preview of images would get really oversaturated because I use a wide gamut monitor. Setting it to Adobe RGB would look better but also wouldn’t be accurate because the only correct characterization of my particular display is the ICC profile I make when calibrating/profiling my monitor.
PhotoLab is the only raw converter I know of which computes the histogram, the colour sampler and the clipping warnings on the basis of the display profile. It makes no sense to me – the histogram should be computed on the basis of either the working colour profile or the output profile (when soft-proofing). It would be nice to have the option to display raw histogram as well (like in FastRawViewer, RawTherapee or darktable). The current display warning is useful, but it’s not the most important thing when we edit our photos.
Thank you for bringing the former threads back in.
I remember again: it started with the question of which colour space the raw file data is converted into.
The maximum colour space in PhotoLab is Adobe RGB, and some of you would like to have ProPhoto for printing.
About the histogram (to see if I fully understand):
1. The histogram shown in camera is the one processed by your camera’s raw-to-JPEG setting, i.e. Adobe RGB or sRGB in the internal processor? It’s not the LRGB interpretation of the sensor’s native colour space.
2. DxO has LRGB plus black clipping (moon) and highlight clipping (sun), and it shows the RGB numbers 0-255 per channel when you hover over the image, but you don’t see a crosshair in the histogram showing where this point of the image falls. (Somehow I would like that.)
3. I don’t know if the floating histogram palette/tool window can be enlarged (I know I can do that in another application). This would help in fine-tuning the black point and white point of images and in seeing which channel is oversaturated (coloured spikes clipped at the top of the window).
4. Soft-proofing would need a histogram “in” and a histogram “out”, so you can see both colour spaces you have selected.
Resolving flat blacks (native sensor readout 0-0-0) can’t be done, I believe; there isn’t any detail in them to resolve. But I don’t know if this 0-0-0 is also the Adobe RGB or sRGB black point 0-0-0, or whether the smaller colour space floats inside the bigger one and “black” is not 0-0-0 but 5-5-5 or something.
The reason I ask: if I use a Selective Tone slider (Highlights, Midtones, Shadows, Blacks) and stretch the histogram, I push dark shadows towards the black point and highlights towards the “white point”, which still contain colour data and detail, until they hit the 0-0-0 and 255-255-255 (RGB) borders of the colour space defined by me (sRGB, for instance).
But the working space is Adobe RGB, so working with the preview set to sRGB I could have some colours outside the sRGB colour space. So if I use exposure compensation and the Selective Tone sliders to turn down highlights and bright colours, do I “compress” the range of colour values into my sRGB from the Adobe RGB colour space only, or also from all the data available in the camera’s colour space as encoded in the raw file? Can I retrieve all the data the sensor captured and encoded in the raw file by using the sliders? I hope it works that way.
Because then a well-working clipping-detection system with an adjustable “threshold” could help maximize the image’s tone curve balance: compressing the RGB values that represent a hue, making the brightness less bright so it fits inside the colour space I chose in the preferences, like sRGB. (I know I can’t see colours my display device can’t show; out of gamut is out of gamut.)
My Huelight DCPs for my G80 are stretched closer to the borders than DxO’s generic camera rendering, getting more out of the sensor’s data but apparently clipping faster in my sRGB workspace (preview).
Let me just touch on this: Adobe RGB is not the “maximum” color space. From Wikipedia: " When defining a color space, the usual reference standard is the CIELAB or CIEXYZ color spaces, which were specifically designed to encompass all colors the average human can see."
ProPhoto is a smaller color space (but bigger than Adobe RGB) and Lightroom uses a custom variant of this color space for all internal work. I hope that PL does not use Adobe RGB for its internal color space.
As far as I can tell, this is incorrect. The histogram is based on the color profile chosen in the preferences dialog, not the camera’s JPG rendering. Cameras can often render the same RAW file in several different ways (using “creative” modes), which is what I believe PL’s Color Rendering tool also does. These rendering modes alter the color—color space conversions, on the other hand, try to maintain a color even as the numbers that represent that color change.
As a wild guess, if you want the color as captured by the camera’s sensors, you need to have a profiled monitor, you need to make sure PL is using that color profile, and you probably need to have Color Rendering disabled (or maybe set to Generic Renderings/Camera Default Rendering).
Since individual sensors may be slightly off, you can use something like the X-Rite color chart and software to create a DCP (or is it ICC) that will correct for that specific sensor. The default is for a “typical” sensor for your camera, which isn’t always right.
Thanks, Greg! That was an enlightening link—info from people who know what they are talking about. I agree with some of the responses that the choice of Adobe RGB for an internal color space is a bit short-sighted.
If you disable the Color Rendering tool, PhotoLab still uses the “Camera Default Rendering” profile (with no protection of saturated colours) in order to assign appropriate colour values to the demosaiced pixels in the Adobe RGB working space, before converting them to the monitor profile and showing the preview to the user.
“Camera Default Rendering” is equivalent to what in the Adobe world is called the Camera Standard profile – it’s DxO’s emulation of the look designed by the camera maker for the specific camera model. As such, it’s not the “colour as captured by the camera’s sensor” but an interpretation of an interpretation, if you know what I mean.
Although PhotoLab’s default camera profile is the “Camera Default Rendering”, their baseline profile seems to be the “Neutral color, neutral tonality” one. The rendering profiles (e.g. DxO FilmPack camera emulations, or ICC/DCP camera profiles) seem to be put on top of the “Neutral color, neutral tonality” input profile – that’s at least how I understand the Intensity slider under the Rendering box – that Intensity slider acts like a layer opacity slider in Photoshop. And the “Protect saturated colors” Intensity slider probably works on a formula similar to the one used by the Vibrancy slider (a channel mask). It’s necessary because the profile-embedded tone curve might cause oversaturation if the profile doesn’t employ gamut compression.
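If the Intensity slider really acts like a Photoshop opacity slider (an assumption based on the observation above, not documented behaviour), the underlying arithmetic would be a plain linear blend:

```python
import numpy as np

def apply_rendering(neutral, rendered, intensity):
    """Blend between the neutral baseline and the fully rendered image,
    with intensity in 0..1 acting like layer opacity."""
    return (1.0 - intensity) * neutral + intensity * rendered

neutral  = np.full((4, 4, 3), 0.5)   # made-up flat "neutral" image
rendered = np.full((4, 4, 3), 0.8)   # made-up flat "rendered" image

half = apply_rendering(neutral, rendered, 0.5)   # 0.65 everywhere
```

At intensity 0 you would get the “Neutral color, neutral tonality” baseline back unchanged, and at 1 the full rendering profile – which is consistent with how the slider behaves in the UI.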
Incidentally it’s possible that OXiDant’s Huelight profiles were designed without that gamut compression, that’s why they clip so easily in PhotoLab.
One last thing: the “neutral tonality” profile name is a bit misleading because it suggests we get a linear, scene-referred rendition (“as the camera saw it”). But the profile is gamma encoded, i.e. there is a midtones curve; it’s neutral in the sense that there’s no shadows dip in the curve.