Raw data has no interpretation of any colors… raw data is a proxy for how many photons make it to each sensel (and that, by the way, includes registering light that produces no sensation at all for a human eye/brain combo → hence no color)… that’s it… there is no gamut, no color space, until you use an arbitrary operation to map those numbers into coordinates in a proper color space (one that has a gamut, which is one of the reasons it is a proper color space). Some of those coordinates might be colors (for a human observer) and some are not. Also, you can’t have a proper RGB color space (a triangle within CIE XYZ / xyY coordinates) whose gamut includes all colors (for a human observer) and excludes all non-colors as a destination mapping.
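A minimal sketch of that "arbitrary operation": the sensor delivers bare numbers, and it is a matrix chosen by the raw converter (not anything in the data itself) that turns them into coordinates in a colorimetric space such as CIE XYZ. The matrix below is purely hypothetical, for illustration only; real converters use per-camera matrices.

```python
# Hypothetical camera-to-XYZ matrix; the raw numbers mean nothing
# colorimetrically until some matrix like this is applied.
CAMERA_TO_XYZ = [
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
]

def raw_to_xyz(raw_rgb):
    """Apply a 3x3 matrix to a demosaiced raw triplet (r, g, b)."""
    return tuple(
        sum(m * c for m, c in zip(row, raw_rgb))
        for row in CAMERA_TO_XYZ
    )

# Equal raw responses do not automatically mean "grey"; the matrix decides
# where those numbers land in XYZ.
print(raw_to_xyz((1.0, 1.0, 1.0)))
```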
Yes, I know, it’s basically a photon count/sensel charge for each sensel, and the comparison between those different charges indicates how much saturation there is of that filtered hue: photons in the red hue, green hue or blue hue.
The sensels’ DR is a factor in the gradation steps you can use when transferring that sensel-grid data, via demosaicing, into a proper color space.
The camera “colorspace” is more about the DR (from the noise floor to saturation, i.e. fully charged, of each R, G, B, G sensel), which is the data used to create an image in a chosen color space. A small camera DR will look worse in a wide color space than a large camera DR. Color banding is one of the issues you’ll see.
That’s why I say that if your camera sensels have 4-bit depth, a 16-bit working color space gives you no advantage; and the other way around, those camera R, G, B, G data will be crushed and you lose a lot of nuance in the colors.
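The point about capture bit depth can be shown in a few lines: a smooth ramp quantised to 4 bits has only 16 distinct levels, and no amount of working-space precision downstream can bring the lost gradations back. This is a generic quantisation sketch, not tied to any particular camera:

```python
def quantize(value, bits):
    """Quantise a value in [0, 1] to the given bit depth and back."""
    levels = (1 << bits) - 1
    return round(value * levels) / levels

# A smooth 1000-step ramp, as a smooth gradient might deliver.
ramp = [i / 999 for i in range(1000)]

levels_4bit = len({quantize(v, 4) for v in ramp})
levels_16bit = len({quantize(v, 16) for v in ramp})
print(levels_4bit, levels_16bit)  # 16 vs 1000 distinct values
```

The 4-bit version collapses the ramp to 16 bands (visible banding); the 16-bit version preserves every step of the ramp.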
Hence the working color space needs to be at least as big as the native “camera (possible) colorspace”.
On a sensor you have a filter grid (say a Bayer array): one filter passes only red hues, one only green, one only blue. So the red filter pass charges the sensels behind it, which a raw developer will use to create “red” in the pixel’s RGB balance; and if, in the same cluster, the green filter pass is charged evenly and no blue light is hitting the blue sensel, it will end up as “yellow” on your screen, in the raw developer’s working color space. If that working color space is too small, that yellow could be clipped or compressed. Or both.
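That red-plus-green-makes-yellow cluster can be sketched with a deliberately naive per-cluster demosaic (real demosaicing interpolates across neighbouring clusters; this toy version just averages the two greens of one RGGB cluster):

```python
def cluster_to_rgb(r, g1, g2, b):
    """Naive demosaic of one RGGB cluster: average the two green sensels."""
    return (r, (g1 + g2) / 2, b)

# Red and both green sensels fully charged, blue sensel dark:
pixel = cluster_to_rgb(r=1.0, g1=1.0, g2=1.0, b=0.0)
print(pixel)  # (1.0, 1.0, 0.0) -> rendered as yellow on screen
```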
I am not able to write long sequences on Saturday evening, so I retire from the topic
Okay, we’re off-topic anyways.
The caution on that one is what Topaz has done with Photo AI. Photo AI’s Autopilot wants to take over, and it still seems to be there even after you kill as many of the automatic things as you can find in Preferences. They integrated Sharpen AI and Denoise AI kinda like Siamese twins, and then stopped development on those two as independent (and MUCH more capable) plugins. The Sharpen and Denoise AI that was left in Photo AI was basically the castrato version.
When you use the Sharpen AI plugin, you have 10 models to consider, each of which you can fine-tune… You can also look at a quad screen and compare four versions of the same view of your photo. The Denoise AI plugin is even better, because in addition to fine-adjusting the models, you can actually get a full- or partial-frame view instead of moving DXO’s dorky little window around.
The point is that to satisfy both ends of the user community, there is a ton of development work to do. Or maybe PhotoLab Essential needs to be further detuned and automated for beginners, or for those who don’t see a need to dive or swim in the deep end.
BTW: Topaz is aware that they may have jammed too many functions/features together into an automated package.
[quote=“cohen5538, post:38, topic:34894, full:true”]
The thing is that in recent years they have sadly given a lot of customers a reason to look elsewhere. The low-water mark for me was when it took them six full months to fix a camera profile for the Sony A7 IV, and when using the profile for the A7 III I wasn’t even able to see any difference. The “starting point” for editing in PhotoLab was practically the same. So, what was I waiting half a year for, unable to use a program I have a license for? That is totally unacceptable and rules out PhotoLab completely as a professional tool. Which pro would live with constraints like that? So, in the meantime, I used Sony’s own Imaging Edge instead, because it is free and really very good. Some people might make that choice instead, too.
Well, I read a shoot-out between Adobe Camera Raw and Sony Imaging Edge which revealed that a main difference between these programs was simply that ACR did not read the Sony metadata in EXIF, which gave ACR users a less advantageous starting point than Imaging Edge users got.
I think they ought to have used that R&D time to fix the problems with Local Adjustments instead. Who will use these features? Product photographers?
There are a lot of good people here that DXO R&D ought to listen more carefully to. I got a mail some time ago where Capture One asked me for my input. They seem to be far more alert and interested. As a user of PhotoMechanic I have had a few cases with Camera Bits too, and comparing their forums with the DXO forums is like night and day. Camera Bits staff really interact swiftly in the various threads to help people and, if necessary, they fix it in days, release a new service version, and get back to these users to ensure they have understood the problem and fixed it correctly to the user’s satisfaction. I have found the same with Hamrick Software, maker of the industry-standard scanning software VueScan. Both are examples of fantastic support.
Thank you for your very well-written post. I really appreciated it, but I have a few things to say about PhotoLab and the Wide Gamut/Classic issues, calibration, soft proofing and printing.
For the first:
I don’t have any general problems with the new Wide Gamut and its rendering with my camera profiles (A7 IV) compared with the Classic, as long as they both use the same profile setting and what seems to be the general Wide Gamut default rendering called “Neutral colors”. For some reason this might have changed a little since version 6. Now the Sony A7 IV is the “Generic Default” both for Wide Gamut and Classic, as far as I can see. So, I don’t experience any difference at all between WG and Classic now.
Since I happen to be one of those users who have left the printing-standard color space Adobe RGB for Display P3, not just for printing but even for screen, just to avoid having to use two different color spaces for print and screen, I now think I can finally standardize on just P3. I use PhotoLab with a screen calibrated for Display P3, and it is the screen profile that determines how I adjust my images in postprocessing. From what I know, Wide Gamut kicks in when we export JPEG files and picks a suitable ICC; it covers both Adobe RGB and P3. The old Classic color space in PhotoLab is said to be equivalent to Adobe RGB, but now there are a few other color spaces in use too, with other devices like high-res TVs that might even use the really wide Rec. 2020. I use Display P3 even with my new TV and it looks really good to me.
I fully agree with your stance on soft proofing, but for different reasons. I have recently done a few tests with a new printing paper that a local photo chain in Sweden sells, and strangely found that this Scandinavian Matte Professional gives exactly as good prints as the fine-art paper Canson Infinity Etching Rag. I practically can’t tell them apart when viewing the result from anywhere between half a meter and a normal viewing distance of 1.5 to 2.5 meters, printed in A2.
There are no good dedicated profiles for this paper, but the Epson Archival Matte ICC works fine, and I get an almost perfect match with my P3 monitor when using Canson’s own ICC for the Epson SC-P900 and the Canson Infinity Etching Rag! BUT when I soft proof, the Canson ICC gives the most distinct, contrasty and saturated colors compared to Epson’s Archival Matte, Fine Art and Velvet Fine Art, not to mention the ICC Scandinavian offers. BUT in reality, it is the Archival Matte that is more contrasty and gives the most saturated colors. So, I realized I had got it a bit wrong looking at the soft proof, and my stance has long been that when printing, it’s the real-life printing result that rules. I look upon soft proofing as a tool that might be helpful, at best for photographers who outsource their print jobs to print shops.
One of these images is Canson Etching Rag printed with the Canson ICC. The others are printed on Scandinavian Matte Pro: two with the Archival Matte ICC and the others with the Canson ICC too. Can you tell them apart just by looking at them? A sheet of Etching Rag costs about 10 USD, or 100 SEK; the Scandinavian Matte Pro costs 10.20 SEK. The Scandinavian is made of cellulose and weighs 240 g/m², while the Canson weighs 310 g/m² and is made of cotton.
Brian, that’s a long post.
I think you are not quite correct regarding the colour space issue. DXO’s Wide Gamut colour space is their new working space: the space where the various calculations are done to convert the raw file to an RGB image, including the edits etc. The old aRGB working space was always a technical restriction of DXO, and I am not aware of any other recent mainstream raw converter that doesn’t use a wide-gamut working space.
DXO was unusual in not using a wide colour space for its editing. E.g. Lightroom uses ProPhoto RGB, with no choice. The wide colour space allows all the complex maths to be carried out without colour space limitations impacting the final result.
If you want to edit for a particular output space, such as sRGB for web use, then you simply turn on Soft Proofing for that space. In Capture One, for example, you don’t have to do that because C1 is always soft proofing for colour. They do have an “Extra” soft proofing mode, which takes the output recipe you have selected and soft proofs not only for colour but also for print size, DPI settings and sharpening. Rather more advanced than DXO, but this is reflected in C1’s price.
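The reason a wide working space plus soft proofing matters can be sketched numerically: a value that is perfectly legal in the wide space may have no representation in the output space, and soft proofing is essentially checking for exactly that. The 3×3 matrix below is a hypothetical wide-to-sRGB transform invented for illustration; it is not DXO’s actual conversion.

```python
# Hypothetical wide-gamut-to-sRGB matrix (illustration only).
WIDE_TO_SRGB = [
    [ 1.66, -0.59, -0.07],
    [-0.12,  1.13, -0.01],
    [-0.02, -0.10,  1.12],
]

def to_srgb(rgb):
    """Convert a wide-space triplet to (linear) sRGB via the toy matrix."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in WIDE_TO_SRGB)

def out_of_gamut(rgb):
    """True if any converted channel falls outside [0, 1]: it would clip."""
    return any(c < 0.0 or c > 1.0 for c in to_srgb(rgb))

# A fully saturated green in the wide space has no sRGB equivalent
# (the red channel goes negative), while a mid grey converts cleanly:
print(out_of_gamut((0.0, 1.0, 0.0)))  # True
print(out_of_gamut((0.5, 0.5, 0.5)))  # False
```

This is all a soft-proof warning really is: flag the pixels that the output conversion would have to clip or compress.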