The image sensor contains color filters, typically RGB ones. The exact nature of that red, green and blue depends on the chemicals used to make these filters, and it is slightly different for each camera manufacturer and camera model. That’s what I call “native color space of the camera”. It is not intended for display, it’s what the sensor “sees”. Most cameras allow setting a color space in their settings, typically either sRGB or AdobeRGB. This setting impacts the JPEG images produced, but it…
It is true, DPL is not able to export images in the ProPhoto color space. I keep asking: what is the reason to use the ProPhoto color space, except for scientific purposes? As far as I know, all output media (monitor, projector, printout) and input devices (camera, scanner) have far narrower color fidelity. If we produce output images in the ProPhoto color space, they will look worse in print or on the monitor.
Please find here an educational video comparing the ProPhoto color space, in real life, to the other components of the camera-to-output chain.
The term “Original” means that PhotoLab looks for this EXIF field and uses that color space.
I suspect this is the case for output.
But in what color space do you work in the editor when you select a color space in the export dialog?
I hope/suspect it just works in the biggest working space it can handle (AdobeRGB by default?) and clips the results according to the export settings (export to disk) and the viewing-device setting in the preferences.
Because if so, then an exposure/contrast/color correction can move a color into your export color space (sRGB), so it isn’t lost in DxO PL’s conversion.
If the camera gamut is “clipped” by the selection in export and screen mode, then we have a problem:
all that data is lost, and corrections can’t use data outside the sRGB color space if that is chosen.
A color space here works like a conversion table: it calculates the output image color values appropriate for the next device. Without such a conversion, the saturated part of the colors will simply lose the differences between some shades. (That is why there are 4 types of rendering intent: https://helpx.adobe.com/acrobat/using/color-settings.html ) The ProPhoto color space allows almost the full 16-bit resolution for the R, G and B channels. In real life, output devices are unable to reproduce all of it. To keep the shades on different output devices, the chosen color space ensures the closest possible quality to the original.
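To make the clipping concrete, here is a minimal Python sketch (assuming the standard D65 linear-light matrices for AdobeRGB and sRGB, no gamma): a pure AdobeRGB green maps to sRGB channel values outside [0, 1], so without a proper rendering intent those shades would simply collapse onto the gamut border.

```python
# Standard D65 linear-light matrices (an assumption of this sketch).
ADOBE_TO_XYZ = [(0.5767, 0.1856, 0.1882),
                (0.2973, 0.6274, 0.0753),
                (0.0270, 0.0707, 0.9911)]
XYZ_TO_SRGB = [(3.2406, -1.5372, -0.4986),
               (-0.9689, 1.8758, 0.0415),
               (0.0557, -0.2040, 1.0570)]

def mat_vec(m, v):
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in m)

adobe_green = (0.0, 1.0, 0.0)
srgb = mat_vec(XYZ_TO_SRGB, mat_vec(ADOBE_TO_XYZ, adobe_green))
print(srgb)   # R and B come out negative: this color does not exist in sRGB
```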
You can run a test yourself. In PS, make 3 rectangles and fill them with gradients of RGB colors. Export them as TIFF in 3 different color spaces. Open all 3 in DPL and you will see the difference.
In practice, the best monitors have 96% Adobe RGB coverage, and the best printers - those having a 16-bit driver, which is exceptional - can never reproduce more than 90% of Adobe RGB. I have never heard of any monitor or printer able to reproduce more than Adobe RGB and get closer to the ProPhoto color space.
This I grasp. High sampling (small steps) is only useful if the gradient is wide enough: 4 steps from white to black don’t get better when I sample them at 2×16 bit; 2×2 is enough, higher is just more of the same. The other way around: if the source has 256 steps from white to black and I sample it with four steps, a lot of nuance is lost and can’t be recovered afterwards.
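The quantization point can be sketched in a few lines of Python: once a 256-step gradient has been sampled with only four levels, resampling it at a finer step recovers nothing.

```python
# Quantize a 256-step white-to-black gradient to a given number of levels.
# Resampling coarse data at a finer step does not bring the nuances back.
def quantize(values, levels):
    step = 255 / (levels - 1)
    return [round(round(v / step) * step) for v in values]

gradient = list(range(256))               # 256 distinct shades
coarse = quantize(gradient, 4)            # sampled with only 4 steps
fine_again = quantize(coarse, 16)         # finer resampling of coarse data

print(len(set(gradient)), len(set(coarse)), len(set(fine_again)))  # 256 4 4
```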
quote from Wolf:“PhotoLab converts the sensor colors to AdobeRGB and uses that as working color space. This cannot be configured. During export, you can choose between “as shot” (which converts to the color space you set on your camera) and converting to sRGB, AdobeRGB, or any other color space explicitly. Note that for AdobeRGB, no conversion takes place.”
So he states that the working space, the first conversion table, goes from the camera raw’s gamut to AdobeRGB. This is the color sampling table, which defines the nuances between the steps: how many color differences it can manage before saturation hits 0% or 100% at the same hue and brightness.
Then I can conclude that if I set the viewing device to sRGB, I see less than 100% of this working space.
So AdobeRGB will be the target for many years to come?
How much trouble do you get from working in AdobeRGB, setting the preferences in DPL also to AdobeRGB for the viewing device and the export, while not seeing this correctly because you have a non-calibrated sRGB monitor?
I would like to see whether the settings in Preferences do anything with your monitor in this respect.
I imagine that if I export in AdobeRGB (maintaining the color space of the working space) while using the sRGB viewing profile, I get dark areas (outside sRGB) which I can’t see on my monitor. If I use the Adobe profile, I get fooled by the fact that the monitor is non-calibrated and can’t display Adobe’s gamut.
If I use the profile of the display device, I am not sure what I get. Gamut profile: Dell P2314H.
So I keep it safe at sRGB.
The export could be in AdobeRGB for the future’s sake (at the risk of strange color errors in viewing, until AdobeRGB becomes visible on TV screens and monitors).
The Preferences/Display setting is best kept at “Current profile of the display device”. Then I can make sure in Windows Settings / Color Management that my monitor is correctly associated with the ICC profile I made when calibrating/profiling it. My calibration program (DisplayCAL) keeps track of profile loading at system startup and every couple of minutes, so the second step is not necessary for me.
People who haven’t calibrated/profiled their monitors will have a default Windows setting for their monitor and it is sRGB, so it’s not really necessary to choose the Generic Profile (sRGB) in DxO PhotoLab preferences – unless of course you messed around with your Windows colour management settings. But it’s advisable for people who convert their raw images to hardware-calibrate/profile their monitor.
To sum up – in your particular situation you can choose either option in your PhotoLab settings, either Generic or Current should give you identical results (if you haven’t changed your Windows Color Management settings).
Always export for a specific medium – if you intend to share your photos online, use sRGB. If you want to show them to others on your monitor, export as sRGB (or don’t export at all and do a slideshow from within PhotoLab). If you intend to edit the file in another program (Photoshop, Affinity, Luminar, etc.), export as Adobe RGB, 16-bit tiff (or DNG).
I view with a media player (a GeekBox running Kodi), which currently only supports FHD.
So sRGB is fine.
In the future this will be a more potent player, I think, which can handle 4K/8K and sRGB or wider gamuts, maybe even AdobeRGB as a default viewing possibility.
If so, I think it’s better to view sRGB footage on an AdobeRGB-capable screen when that day comes than AdobeRGB footage on an sRGB-capable screen now.
I changed the display preview and export TIFF settings (16-bit sRGB to 16-bit AdobeRGB).
sRGB is the media standard for now so it’s the safest bet. Keep your raw files and sidecar .dop files backed up and you can always re-generate the jpegs in the future if the standards and your equipment change.
Anyway, this discussion is a bit off-topic. Sorry to the Original Poster.
I did the necessary testing for different color space usage. The process was:
Generate an image 3×256 pixels wide and 2×256 pixels high. It was a gradient image where all pixels have different RGB values. The images attached here have the vertical size reduced to 128 pixels to save space.
Load the image into PS and set the mode to RGB, 16-bit resolution.
Make the gamut warning (Ctrl+Shift+Y) visible. The original image has no saturated part.
Set View/Proof Setup to Customize and select the color spaces sRGB, Adobe RGB and ProPhoto RGB one after another. Make a screenshot of each applied color space result.
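For anyone who wants to reproduce the test image without Photoshop, here is a rough sketch that writes a similar gradient chart as a plain 8-bit PPM (the original test used a 16-bit TIFF; the file name and exact pixel layout here are my own choices, not the poster’s):

```python
import os

# Write a 768x512 gradient chart as a binary 8-bit PPM (no libraries needed).
# Three vertical bands: an R, a G and a B gradient running along x, while the
# other two channels follow y, so neighbouring pixels keep changing value.
W, H = 3 * 256, 2 * 256

with open("gradient_chart.ppm", "wb") as f:
    f.write(b"P6\n%d %d\n255\n" % (W, H))
    for y in range(H):
        row = bytearray()
        for x in range(W):
            band, ramp, other = x // 256, x % 256, y % 256
            px = [other, other, other]
            px[band] = ramp          # this band's primary channel ramps along x
            row += bytes(px)
        f.write(row)

size = os.path.getsize("gradient_chart.ppm")
print(size)
```

The resulting PPM can be converted to TIFF in any image tool before opening it in DPL.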
As you can see, the wider color spaces suffer from color fidelity loss, while the sRGB image is able to display all steps of the gradient colors.
Finally, I enabled the Preserve RGB Numbers feature, which disables the conversion. In the same order, here are the converted and non-converted images for the sRGB, Adobe RGB and ProPhoto color space cases.
As you can see, without proper color space handling on the target medium (in this case the monitor), the shades change significantly.
My conclusion is that without a medium (monitor or printer) capable of the wider color space, there is no real advantage to using a wider (e.g. ProPhoto) color space for viewing images.
I also made a second test to view the difference on RAW images from a 50 Mpx Hasselblad. It was obvious again: if the output image has a different color space than the next medium, the results are worse. Identifying every medium’s actual color space and matching the incoming image to it is an additional job.
Thanks Bencsi for taking the time and effort to investigate this color space behaviour.
OK, to be sure I understand this: sRGB is a full color space replica made in your image from 0-255 steps of RGB combinations; when it is shown in a wider color space without “resampling” (a renewed sampling of all colors), the result is saturated (100%) colors at the outside edge of sRGB.
Logical: you can’t show things which aren’t there to begin with, so it shows the next best thing, 100% of the last known color (100% of the sRGB color). (Or, if your created image is RGB footage of the widest gamut, the clipped patches are saturated blue, red and green beyond 100% of the smaller color space.)
So to reverse this theory, the only proper way is: RAW camera color space (C.S.) to ProPhoto RGB C.S., then sampling down to AdobeRGB C.S. and/or sRGB C.S., narrowing down instead of expanding.
With this direction and the use of warning “blinkies” (which indicate that a pixel is outside the selected color space, e.g. sRGB, and clipped), you can pull an image’s color data as far as possible inside the selected color space (sRGB) by lowering highlights/bright spots and raising low points/shadows, moving global EV until the blinkies are gone (you compress the camera’s color data into the color space you chose).
At the cost of adding noise and artifacts. (Sometimes it’s better to let the “string” clip and lose the data.)
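A minimal sketch of such a “blinkies” check, assuming standard D65 linear-light matrices: a pixel counts as out of gamut when any converted sRGB channel leaves [0, 1]. Here, desaturating toward gray stands in for the corrections described above (note that a plain exposure change cannot fix a negative channel, only an over-bright one).

```python
# "Blinkies" sketch: flag a linear AdobeRGB pixel whose sRGB conversion
# leaves [0, 1] (standard D65 matrices assumed; values are linear light).
ADOBE_TO_XYZ = [(0.5767, 0.1856, 0.1882),
                (0.2973, 0.6274, 0.0753),
                (0.0270, 0.0707, 0.9911)]
XYZ_TO_SRGB = [(3.2406, -1.5372, -0.4986),
               (-0.9689, 1.8758, 0.0415),
               (0.0557, -0.2040, 1.0570)]

def mat_vec(m, v):
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in m)

def out_of_srgb_gamut(adobe_rgb, eps=1e-6):
    srgb = mat_vec(XYZ_TO_SRGB, mat_vec(ADOBE_TO_XYZ, adobe_rgb))
    return any(c < -eps or c > 1 + eps for c in srgb)

saturated = (0.1, 0.9, 0.1)          # a strong AdobeRGB green: it "blinks"
mean = sum(saturated) / 3
desaturated = tuple(mean + 0.5 * (c - mean) for c in saturated)

print(out_of_srgb_gamut(saturated))      # True
print(out_of_srgb_gamut(desaturated))    # False: pulled back inside sRGB
```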
100% agree.
So my only question for the DxO color space specialists is:
if AdobeRGB clips a lot of color data from RAW, is this rendering floating or static?
Floating: then the AdobeRGB triangle moves around over the wider gamut of the camera’s color space when adjusting EV and other color-related parameters, so none of the image data is really lost except when exporting to a color space.
Static: then all data outside the chosen color space is instantly lost forever, and lowering 255 will leave a gap/dent in the color spectrum, flattening the dynamics of the image.
As far as I know the color spaces are static. The underlying CIE color space, which describes the human visible light spectrum is static, because of biological reasons. The coordinates of the corners of the smaller color spaces can be described in constant CIE color space coordinates as can be seen here: https://en.m.wikipedia.org/wiki/ProPhoto_RGB_color_space (ProPhoto RGB (ROMM RGB) Encoding Primaries)
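The static relationship can be demonstrated by computing a color space’s RGB-to-XYZ matrix directly from those constant CIE coordinates. This sketch uses the ProPhoto (ROMM) primaries and D50 white point from the linked Wikipedia page and reproduces the commonly published matrix:

```python
# Derive the RGB -> XYZ matrix of a color space from its CIE xy primaries
# and white point (ProPhoto / ROMM values, D50 white).
def xy_to_xyz(x, y):                 # XYZ with Y normalised to 1
    return (x / y, 1.0, (1 - x - y) / y)

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(a, b):                    # Cramer's rule: solve a @ x = b
    d = det3(a)
    out = []
    for i in range(3):
        ai = [[b[r] if c == i else a[r][c] for c in range(3)] for r in range(3)]
        out.append(det3(ai) / d)
    return out

r = xy_to_xyz(0.7347, 0.2653)        # ProPhoto red primary
g = xy_to_xyz(0.1596, 0.8404)        # ProPhoto green primary
b = xy_to_xyz(0.0366, 0.0001)        # ProPhoto blue primary
white = xy_to_xyz(0.3457, 0.3585)    # D50 white point

P = [[r[i], g[i], b[i]] for i in range(3)]   # primaries as matrix columns
s = solve3(P, list(white))                   # scale so that (1,1,1) -> white
M = [[P[i][j] * s[j] for j in range(3)] for i in range(3)]
print(M[0])   # first row is close to (0.7977, 0.1352, 0.0313)
```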
I meant: is the raw-to-RGB data clipped at the edges of the selected working color space when the conversion to RGB is done, or can you still move camera data into the color space in the Customize workspace (the edit preview)?
The input and output color spaces we cannot modify; they are bound to the media. Apply a wrong color space and the result will be false anyhow. Therefore our freedom is limited. What we can modify is B, the internal correction, only. Usually the color space conversions (ICC profiles) are supplied by the hardware manufacturers. In the case of camera RAW images, the conversion is in the hands of the RAW converter application, like DxO PhotoLab.
One thing I remember is that PRIME denoise does its work before the demosaicing step, i.e. before the color space (A) is used to encode the raw data as RGB data.
And we can change the amount in the workspace.
That’s one of the reasons that Fuji is not easy to add: PRIME denoise would not work there anymore.
Quote from another thread:
PhotoLab is that it already does a lot of processing BEFORE demosaicing, for example to get more efficient denoising (demosaicing changes the noise structure - making grain rougher - and makes it more difficult to remove after that).
The response of the DxO employee in the link posted by the thread owner states:
When you import the RAW file into PhotoLab
– PhotoLab will apply demosaicking and convert the RGB values from the “native color space of the camera” into AdobeRGB.
– PhotoLab will apply any color adjustments (saturation, HSL, but also FilmPack color rendering if you happen to use that, etc.) in that color space. I call this “working color space” because it is the color space PhotoLab does most of its work in.
So, to me, it sounds like all colors outside of Adobe RGB are clipped with or shortly after demosaicing. All color adjustments in ‘Customize’ are calculated inside the Adobe RGB triangle afterwards. So there are no virtual colors that are moved on demand into the working color space if you, for example, change a red gradient into a green one with HSL adjustments.
Clipped does not mean empty spaces, but rather that instead of the actual raw color, a color that lies on the Adobe RGB border is used.
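A tiny sketch of what per-channel clipping (one simple clipping strategy; real rendering intents are smarter) does to distinct shades:

```python
# Per-channel clamping: every out-of-gamut color is replaced by one on the
# gamut border, so two different out-of-gamut shades can become identical.
def clamp(rgb):
    return tuple(min(1.0, max(0.0, c)) for c in rgb)

a = (-0.10, 1.05, 0.20)    # two distinct colors outside the working gamut
b = (-0.25, 1.20, 0.20)
print(clamp(a), clamp(b))  # both collapse to (0.0, 1.0, 0.2)
```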
OK, so we work in AdobeRGB and the warnings are for the space between sRGB and AdobeRGB?
If so, then ProPhoto could be helpful for those who use AdobeRGB as their full working space.
So I am still very interested in whether the demosaicing step is an active rendering with the Customize workspace until the actual export is done (the PRIME thing is processed while exporting).