Read the introduction. It’s the standard for software and monitors; everybody is supposed to support it.
And the conversion of a picture from AdobeRGB to sRGB depends on the software used. It’s not magic.
A nice article by the way. I’ll read it completely later.
As far as I can tell, it is noted as the “as shot” profile and then used as the basis for printing or as the original colour space in software that understands profiles.
“* When you import the RAW file into PhotoLab
– PhotoLab will apply demosaicking and convert the RGB values from the “native color space of the camera” into AdobeRGB.
– PhotoLab will apply any color adjustments (saturation, HSL, but also FilmPack color rendering if you happen to use that, etc.) in that color space. I call this “working color space” because it is the color space PhotoLab does most of its work in.
– To display the image in PhotoLab, PhotoLab will, after all other processing, convert the image into the color space of your screen.
** If you select AdobeRGB for export, that last step will not take place, and everything will stay in AdobeRGB, as you say.*
** If you choose “as shot” for export, PhotoLab looks in the EXIF data of the RAW file whether you set AdobeRGB or sRGB in your camera settings and will either keep AdobeRGB or convert to sRGB.*”
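To make that last conversion step concrete, here is a minimal sketch of the AdobeRGB → sRGB conversion at export, using the published AdobeRGB (1998) and sRGB (IEC 61966-2-1) matrices and transfer curves. The hard clip of out-of-gamut values is my simplification; a real ICC engine may use a smarter rendering intent.

```python
import numpy as np

# AdobeRGB (1998) -> XYZ (D65) and XYZ -> linear sRGB matrices
# (standard published values).
ADOBE_TO_XYZ = np.array([
    [0.5767309, 0.1855540, 0.1881852],
    [0.2973769, 0.6273491, 0.0752741],
    [0.0270343, 0.0706872, 0.9911085],
])
XYZ_TO_SRGB = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

def adobe_to_srgb(rgb):
    """Convert one AdobeRGB pixel (values 0..1) to sRGB, naively clipping
    anything that falls outside the sRGB gamut."""
    rgb = np.asarray(rgb, dtype=float)
    linear = rgb ** 2.19921875                    # undo AdobeRGB gamma (563/256)
    srgb_linear = XYZ_TO_SRGB @ (ADOBE_TO_XYZ @ linear)
    srgb_linear = np.clip(srgb_linear, 0.0, 1.0)  # naive gamut clip
    # sRGB transfer curve (IEC 61966-2-1)
    return np.where(srgb_linear <= 0.0031308,
                    12.92 * srgb_linear,
                    1.055 * srgb_linear ** (1 / 2.4) - 0.055)

white = adobe_to_srgb([1.0, 1.0, 1.0])  # neutrals survive the conversion
green = adobe_to_srgb([0.0, 1.0, 0.0])  # AdobeRGB primary green: R and B clip
```

Saturated AdobeRGB colours (like its green primary) fall outside sRGB and get clipped, which is exactly why the choice of export colour space can visibly matter.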
As has been said before, which one you select does affect the JPEG preview you see on the display on the back of the camera. It was for that reason that I chose AdobeRGB: to avoid getting a restricted colour rendition there.
All you wrote is true.
To be exact.
iDynamic and Active D-Lighting are really two tools in one.
The toning part, the high-key/low-key adjustments that stretch dynamic range (in practice a Smart Lighting-like action), affects the JPEG only.
The part that steps down the initial light metering affects the exposure of the sensor, and thus the raw file.
On or off?
My personal use is based on my normal shooting envelope.
I walk around with my family and take shots of things I think would be nice to turn into an image. No tripod, no minutes of preparation. Only a quick choice of how to get my idea into the camera.
I have iDynamic on Auto, so it only kicks in at -1/3, -2/3 or -3/3 EV if it detects that the dynamic range of the scene overruns the sensor. With my older camera, with its micro sensor, I always shot -1/2 EV compensation as a default, knowing that underexposure is easier to recover than overexposure.
Now I have the best of both worlds: normal exposure, plus an automated correction when the scene needs it.
Less time needed to click a snapshot… fewer moaning family members waiting for me.
From memory (I tested it long ago), my manual EV correction overrules iDynamic.
The weather here is dull, with no high-dynamic scenes, so I can’t test this right now.
Further, I have 4 custom settings:
1. 1.4x electronic zoom, JPEG only, for the moments the lens is too short and I need cropped light metering.
2. Bird mode: back-button focusing with locking, and tracking mode or centre-box mode.
3. A bare aperture mode, all aid systems off. I am in control.
4. Gosh, I don’t know. (I know: again a copy of the best aperture-mode settings…)
(A sign that I should rebuild my custom settings around my current preferences…)
A bonus: when I mess up the settings in A or P, I just turn to the custom modes and see what they have set. Then I know what I decided after closely reading the theory behind each setting. The best option, so to speak.
If any reader here has iDynamic or Active D-Lighting on their camera and it’s summer there, with a high-dynamic scene in the garden, please test Auto mode and a manual override via spot metering and EV compensation. (I am fairly sure it overrides iDynamic’s settings, but I’m not certain.)
In your case, or for anyone who mostly takes photographs rather than snapshots with a point-and-shoot, indeed turn all the “magic” automated aids off.
Also: slow-shutter compensation (shot-noise compensation), an aid which, if I remember correctly (we posted about this here), is a good feature for raw files too.
Lens shading correction (vignetting) is JPEG-only, but strangely enough DxO PhotoLab seems to read and apply it when I activate it on my Panasonic, and thus overshoots the correction because it has the same correction in its optics module… Knowing this, I haven’t turned it off, but in certain shots I look closely to correct this correction overshoot.
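The overshoot is easy to reason about with a toy falloff model (the numbers and the quadratic falloff are invented for illustration, not Panasonic’s or DxO’s actual model):

```python
def falloff(r, strength=0.5):
    """Toy vignetting model: brightness multiplier at normalized
    radius r (0 = image center, 1 = extreme corner)."""
    return 1.0 - strength * r ** 2

scene = 1.0                                     # true brightness of a flat grey wall
r = 1.0                                         # look at the extreme corner

recorded = scene * falloff(r)                   # sensor sees a darkened corner: 0.5
corrected_once = recorded / falloff(r)          # one correction restores 1.0
corrected_twice = corrected_once / falloff(r)   # both corrections applied: 2.0
```

One correction restores the corner; stacking the in-camera flag and the optics module doubles the corner brightness, which is exactly the overshoot you then pull back down by hand.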
What I tried to say is: don’t be afraid to use aid settings, as long as you know their limitations and benefits. And test the effects on your raw files; see if they do what you expect. Never assume that raw files are bare recordings of your manual settings.
Electronics are quite smart these days, sometimes smarter than we think.
Found the slow-shutter thing as well:
Those are not my words. It’s a quote from Wolf himself, part of the link.
A raw file doesn’t have an output color profile, only an input color profile describing the sensor characteristics. Only when exporting an RGB image does the output color profile become important: which monitor is the image meant for? Just to stick with monitors.
Sorry, but in my mind RAW files do not have any color space.
The color space set in the camera is only used for the embedded JPEG, and possibly by the proprietary camera software as a default.
PL works internally with AdobeRGB, and it’s only a working space.
It’s only at export that you choose the color space you want (original, meaning the one from the camera, sRGB or AdobeRGB), and whatever the camera choice, there is no influence or quality loss.
Yes, I agree with you on this wording, and my post above was intended to clarify (for me at least):
So, in other words, I think we all agree that the color space defined in the camera does not have any impact on output color space while exporting from PL.
Every capturing device has a range within which it captures the data it is built for.
In camera sensors it’s a range of hue, saturation and brightness, so a sensor’s color space is just the physical limitation built into the hardware. The second “color space” of the sensor is embedded in the readout of the sensor’s photon charge. (Note that the RGBG filter raster over the four wells creates what we call color data, RGB; the sensor wells themselves just register photons as a changing charge.)
The sensitivity of the wells across the different wavelength ranges, red, green and blue: if calibrated correctly, the sensor readout should be neutral (white light, what we experience as white light).
(But we all know that different manufacturers have different “neutrals” in color sensitivity, caused by their physical build specs.)
This last “color space” is converted into digital data and mapped into the raw file.
This is what most people see as the camera’s color space: the biggest color space you can use on your computer, by using raw-developer software to decode the file.
DxO chooses AdobeRGB as the working space for the pixel preview, and if you set the monitor color space to AdobeRGB it converts this to the monitor’s profile; if sRGB, it converts to sRGB.
One thing is always foggy.
If this conversion is always active, are the clipping points (the borders of the color space) amorphous, changing with the tonal settings?
I think so; this is visible when you use recovery sliders such as highlight and shadow.
So it’s a floating bowl (sRGB) inside a bigger floating bowl (AdobeRGB) inside your sink (the camera’s color space), and you can move both bowls around until you hit the sink wall.
Edit: a profile suggests that it has a zero point, which only exists once a white balance is provided and set.
Raw data itself has no “white light” point; that is given in the EXIF data, which we can also set to a fixed 5600 K.
So the color space we normally speak about is the one DxO produces just after demosaicking, in AdobeRGB, with a calculated white point, black point and white balance.
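That white balance amounts to per-channel gains on the demosaicked data. A minimal sketch, with all numbers invented for illustration (real gains come from the “as shot” EXIF values or a chosen Kelvin setting):

```python
def apply_white_balance(raw_rgb, gains):
    """Scale demosaicked R, G, B so that a neutral patch comes out
    neutral; gains are conventionally normalized to green = 1.0."""
    return [c * g for c, g in zip(raw_rgb, gains)]

# A grey patch shot under warm light: the sensor records more red,
# much less blue (values invented for illustration).
grey_patch = [0.40, 0.50, 0.25]
wb_gains = [1.25, 1.00, 2.00]      # hypothetical "as shot" gains

balanced = apply_white_balance(grey_patch, wb_gains)
# all three channels now equal 0.5: a neutral grey
```

Change the gains and the “white light” point moves, which is why the same raw data can be rendered at 5600 K or at any other chosen temperature.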
Correct, because the exported output color space has a) a profile (hopefully) suited to the landing device: monitor, smart TV, 4K TV, printer, etc., and b) a white balance (white point/black point).
The point is, we humans see a certain color space, so the output of every viewing device needs to be converted to that color space. An X-ray photo is also converted from a röntgen wavelength (“color space”) to your sRGB monitor.
So my floating-bowl analogy is, well, floating…
What you said about developing programs using the embedded JPEG as a reference would imply that the second part of iDynamic/Active D-Lighting (the tonal contrast change) also affects a raw file’s preview in your developer.
I know for sure that Silkypix’s camera-style profile reads the raw file’s EXIF data and even applies iRes settings and the color-saturation profile. So there are camera settings in the EXIF data which aren’t hard-baked into the raw file in the first place, but can be picked up by the developer program as additional corrections.
Apparently, that is an obsolete article:
" Note: This document is obsolete, and is retained here for historical purposes only. It was published on 5 November 1996, as a proposal specification for sRGB as a standard default color space. sRGB has since been standardized within the International Electrotechnical Commission (IEC) as IEC 61966-2-1. During standardization, a small numerical error caused by rounding error was corrected. The viewing conditions were also clarified.
The W3C CSS3 Color specification specifically references “Multimedia systems and equipment - Colour measurement and management - Part 2-1: Colour management - Default RGB colour space - sRGB”. IEC 61966-2-1 (1999-10) ISBN: 2-8318-4989-6 - ICS codes: 33.160.60, 37.080 - TC 100 - 51 pp. as amended by Amendment A1:2003.
The latest official sRGB specification may also be purchased from the IEC."
I noticed. But the basics didn’t change.
Off topic: I just ordered the book “The Girl with the Leica”, about Gerda Taro, girlfriend and partner of Robert Capa.
They have values based on the sensor/camera/color array used.
From that link to Wolf
“That’s what I call “native color space of the camera”. It is not intended for display, it’s what the sensor “sees””
Without the knowledge of what the camera sees, it’s impossible to continue.
So the input color gamut of the raw file must be known, as well as the wanted output color gamut/profile. That conversion is part of the demosaicking process.
I just looked the book up - I knew none of this about her, and about Robert Capa. I’ll probably order the book myself at some point.
When I was growing up, I wanted to be a photojournalist. My “hero” was Bronson, who starred in the TV show “Man With a Camera”. He usually seemed to use a 4x5 Speed Graphic, but had a small Leica too. Back then, I was using a Contax II and later a IIa. That led to a Nikon SP. It takes some searching, but his TV series can still be found.
As I grew up, I loved the way people covered the news, and wanted to be part of it - except that they often ended up dead. Memories. The book should be a fascinating read.
Google “the Mexican suitcase”.
To start with: https://www.rencontres-arles.com/en/expositions/view/648/the-mexican-suitcase
As PhotoLab (still?) doesn’t appear to have AWB available as an option, you might find it worthwhile to let the camera fill in the AWB data in the NEF, giving you another easy option when setting WB in post.
If you mostly shoot outdoors (yeah, me, too), you can create a profile set to “daylight”, then set that to your default profile in preferences.
Letting the camera calculate AWB for you does not affect the RAW data, other than the embedded JPG in the NEF, but if you’re using PL and might sometimes want AWB, you need that data.
I know what you mean, and I agree, but it would be better not to word it the way you do. That’s like saying if I don’t know how the carburetor, the brakes, and the transmission work on my car, it’s impossible to drive.
Using my Nikon I don’t need to know much of anything to take photos. The camera wants to do everything for me.
I certainly agree that the better I understand the things you are all talking about, the better I can sometimes use that knowledge to make better photos. I readily admit that I do NOT know many of these things, and apparently quite a bit of what I thought I knew was WRONG.
Yesterday I tried the 3D focusing on my Nikon for the first time, and in my opinion the results were lousy. Maybe it’s because a bird is too small to trigger the 3D system. With a 70mm max zoom, 1/500th second, ISO 400, I expected to get images as clear as I usually do, but not a single image of the birds worked. It’s not the camera - my shot of boats waiting at a drawbridge was as sharp as I expected. The 3D just couldn’t lock in on something as small as a pelican maybe 20 feet away. In a couple of hours, I’ll try my 50mm lens with a built-in motor, so it’s much faster to respond. I need a longer lens so I don’t need to crop so much - my 80-200 would be fine, but it is too slow, and I can’t afford the better one with a built-in motor. I’m not thoroughly unhappy with my results from yesterday, but I expected more detail. I got tired and bored editing them - whatever the reason, I didn’t capture the images as sharp as I expected.
What I wrote was in the context of color management. You don’t have to know how the carburetor etc. works, but you do have to know that you need gasoline (input) to drive (output).
I don’t do bird shooting. But don’t forget that 3D focusing is partly based on predicting movement; it gives the best results with linear movement.
I just saw your link. Those are great moments. The sharpness could be better, but everything is in them. I don’t know why; I’m just a hobby photographer.