I’m going to address another common misconception: the one about ISO. There’s nothing wrong with ISO 400 (or ISO 100 or ISO 6400), but people have the idea that lower ISOs generate less noise (technically, I should be saying “have a higher signal-to-noise ratio” or SNR). This is incorrect: for any given exposure, higher ISO values have higher SNR (i.e. they are less noisy).
Note the qualifier “for any given exposure”. In this case, this means shutter speed and aperture. Keep the shutter speed and aperture the same and I guarantee that the shots with higher ISOs will be no more noisy and usually less (they will, sadly, also have less dynamic range, which is why we don’t shoot using high ISOs all the time).
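Here’s a minimal numeric sketch of why this happens, assuming a simple noise model in which some electronic noise is added *after* the ISO gain stage. All the noise figures are made up for illustration; real cameras differ.

```python
import math

def snr(photons, gain, read_noise=3.0, adc_noise=6.0):
    """Toy SNR model: signal and shot noise are both amplified by the
    ISO gain, but downstream ADC/readout noise is added after the gain."""
    signal = gain * photons
    noise = math.sqrt(gain**2 * photons          # photon shot noise
                      + gain**2 * read_noise**2  # pre-gain read noise
                      + adc_noise**2)            # post-gain (downstream) noise
    return signal / noise

# Same exposure (same photon count), two ISO settings:
low_iso  = snr(photons=200, gain=1)  # e.g. base ISO
high_iso = snr(photons=200, gain=8)  # e.g. three stops higher
assert high_iso > low_iso            # higher ISO, higher SNR at fixed exposure
```

The higher gain lifts the signal above the fixed downstream noise, so the ratio improves; with zero downstream noise the two would be identical (the ISO-invariant case mentioned below).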
The misconception is so common, I wrote a white paper to address it:
Before you tell me I have it backwards, read the document.
Let me add that this does not apply to any truly ISO invariant camera (none exists although some come close). In such a camera, by definition, the ISO setting is irrelevant and the noisiness of an image depends solely on the exposure. However, even in such a camera, there are advantages to using the higher ISO numbers (except where you have a generally dark image with a large dynamic range).
There are certain environments (such as photographing birds) where the choices of aperture and shutter speed are constrained. In these environments, the best setting for ISO is automatic.
What I was taught, and what I believed before things got so confusing here in the forum, was essentially what you wrote. A raw file was a “capture” of the data on the sensor at the moment the image was taken. The amount and color of the light are converted into data, and that data is recorded in a file.
The camera gets information from that data and creates images we can look at. So does a computer later, when it is loaded with that data. None of the tools we use to edit an image have any effect on the raw data. Changing the light (varying the aperture) will instantly have an effect, along with changing the shutter speed.
It makes sense that any “gain circuitry” driven by an ISO setting would have an effect on the sensor, making it more or less reactive to light. At the extreme, if the gain circuitry were set so the sensor would not react to light at all, there would be no data. The “image” extracted from that sensor data would be a black rectangle. If the gain circuitry were designed the other way, so the sensor was overwhelmed by the light hitting it, the image eventually extracted from that data might be a white rectangle. In this crude terminology, I can imagine how the raw file will lead to a lighter or darker image later on, when it is interpreted by software to make an image.
Once any gain circuitry is set, if someone were to measure the light level recorded by every pixel, one at a time, that is the data that leads to a “raw file”.
…just like in the film days, when film that was known to be underexposed could be “pushed” in the development process, to get more useful data from a negative.
To be honest, we ought to talk more about what happens as you change the ISO dial from 100, to 1000, to 10,000, and maybe to 100,000. Does the raw file change as we might expect it to change?
Sometimes, as in shooting birds like you noted, the shutter speed needs to be quite high, and the aperture needs to give me a “sharp” bird image over a “less sharp” background. I usually don’t care what the ISO ends up as, and with the noise filter technology built into PL4, that is much less important than it used to be.
I used to allow the ISO to be whatever I needed to make the image “work”. Following your suggestion and leaving the camera in “Auto ISO” sounds like a very good option to me, at least most of the time.
A correction to another common misconception: the ISO setting does not affect the sensitivity of the sensor. It applies a gain to the signal coming from the sensor. And it only applies a gain—the signal will never be less, only more.
Clipping can occur when a pixel is saturated with photons. It can also occur when the ISO gain raises the signal above what the analog-to-digital converter (ADC) can record. This is why increasing the ISO drops the dynamic range and why you have maximum dynamic range when using the base ISO (i.e. no gain).
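The clipping arithmetic can be sketched in a few lines. The numbers here are hypothetical (assume the ADC accepts a maximum of 1 V, representing 10,000 photoelectrons at base gain, with an assumed read-noise floor of 3 electrons):

```python
import math

ADC_MAX_VOLTS = 1.0
BASE_VOLTS_PER_ELECTRON = 1.0 / 10_000  # assumed: 1 V spans 10,000 e- at base gain
READ_NOISE_ELECTRONS = 3.0              # assumed noise floor

def clip_level(gain):
    """Highest photoelectron count the ADC can record at this ISO gain."""
    return ADC_MAX_VOLTS / (BASE_VOLTS_PER_ELECTRON * gain)

def dynamic_range_stops(gain):
    """DR as the ratio of the clip point to the noise floor, in stops."""
    return math.log2(clip_level(gain) / READ_NOISE_ELECTRONS)

assert clip_level(1) == 10_000
assert clip_level(2) == 5_000  # doubling the gain halves the clip point
```

In this simplified model, each doubling of the gain costs exactly one stop of dynamic range off the top; in real cameras the noise floor also changes with gain, so the measured loss is usually somewhat less than a full stop.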
By the way, I should add that there is no benefit from using non-native ISO values. Many cameras have, say, a native ISO up to 6400 (using gain circuitry) and then non-native values like 12800 (using digital multipliers after the ADC). The latter are no better than raising the exposure in post and they don’t provide the noise-reduction features of the ISO boost.
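A one-liner shows why a post-ADC digital multiplier adds nothing (illustrative numbers only):

```python
# Extended "digital" ISO is just a multiplication applied after the ADC,
# so it scales signal and noise identically and cannot improve SNR.
signal, noise = 100.0, 10.0
snr_before = signal / noise

k = 2.0  # hypothetical: one stop of extended ISO
snr_after = (k * signal) / (k * noise)

assert snr_after == snr_before  # no SNR benefit over brightening in post
```

This is exactly equivalent to moving the exposure slider in your raw developer, which is why the extended values are mostly a marketing convenience.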
It matters because i.Dynamic and Active D-Lighting change the exposure,
and thus the raw file’s data.
What do you call the wavelength range a sensor is sensitive to?
The Bayer filter lets reddish, bluish, and greenish light through. So that’s the colorspace of the sensor.
Break the Bayer filter off and infrared can affect the sensor too.
So it’s a camera’s colorspace with no white balance.
Black is no photon charge; white is saturated charge. The charge itself is colorless, so the data is greyscale based. The Bayer filter creates R, G, B numbers, and a color is computed out of those.
Nope, ISO doesn’t drive an (electrical) gain. ISO is just a number.
(I was under the same assumption earlier, thinking that ISO defines the sampling size of the analog signal of a well. But it isn’t so, I was told by a guy who really seems to know about this stuff… I need to find the conversation for you because the details are over my head.)
Edit: Definition of ISO: “A raw file has no lightness, it is just exposure measurements.”
“ISO defines the relationship between exposure and lightness such that an exposure of 10/ISO lux-seconds should result in an object rendered with the lightness of 18% grey.”
So there is no connection between ISO and electronic amplification. Just a number.
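The quoted definition amounts to a line of arithmetic (a toy function, not any camera’s API):

```python
def mid_grey_exposure(iso):
    """Per the quoted ISO definition: an exposure of 10/ISO lux-seconds
    should render as 18% grey."""
    return 10 / iso

assert mid_grey_exposure(100) == 0.1
assert mid_grey_exposure(200) == 0.05  # doubling ISO halves the required exposure
```

Note that this says nothing about *how* the camera achieves that rendering, which is exactly the point of the quote: the standard defines a number-to-lightness relationship, not an amplifier setting.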
Yes, agreed, and in this raw file there is a number which defines the ISO.
So the raw developer knows which lightness should be coupled to the numbers.
White balance? True, agreed. There is no WB in the raw file’s latent image; that’s bound to the camera setting in the EXIF data (also part of the raw file).
The colorspace set in camera is for the out-of-camera JPEG, not for the raw file.
Agreed.
Do you know the theory of the latent image on the photo drum of a laser printer?
There, numbers are converted into a laser-beam strength that writes on a charged rotating drum.
The laser light dissipates the charge, “burning” a latent image into the surface charge.
And four of those drums turn synchronized, each catching KCMY toner.
The toner is mixed with a carrier, magnetic material with a coating, and together it is called “developer”.
Only the correct color of toner in those four developer units will show the correct image, visible to us on paper. When I stop the process, every drum has a part of the image stuck to its charge. If one of the charges is off, the end result is off. Black balance, in this case.
Back to the raw file: the numbers include a location for every pixel, so there is a latent image in a raw file. Every pixel of the sensor’s resolution has R, G, B, G numbers attached.
If you process only the grid, you see an unfinished latent image, like on the four drums before the toner is attracted to the surface.
But yes, if you strip the metadata and EXIF data from a raw file, then what’s left is just exposure: charge levels, controlled by the shutter and aperture.
As I said, only the things that affect the electric charge coming out of the sensor matter. Photons affect the charge and exposure affects the photons. ISO gain changes the signal before it reaches the ADC, so that matters as well.
I believe we are in agreement. I also believe everything I said in my prior post was correct.
The RAW file records the raw sensor data. Color information is not stored there. But, yes, if you know the characteristics of the sensor, you can use that to determine how to convert those numbers to colors. The conversion is as good as the sensor characterization.
I think of a colorspace as a way of mapping a value to a precise, specific color. Mapping a RAW number to a color is not precise in the same way. It depends on the accuracy of the sensor profile. Adobe might use one profile, DxO might use another. Given a value and a colorspace, DxO and Adobe would agree on the color; given the same RAW file, they might not.
If a sensor profile were an absolute thing, then one could call it a colorspace.
Using ISO to mean an ISO standard definition to relate exposure to lightness, you’re correct. However, in a camera the native ISO setting is implemented using a gain circuit (non-native ISO is done by a digital multiplication after the ADC). For instance, go to https://photopxl.com/noise-iso-and-dynamic-range-explained/ and check figure 3.
No, every pixel has one number attached. It will be either a red, green or blue channel, depending on the color filter above the pixel. For four pixels, the pattern is typically (but not always) RGGB: one red, two green, and one blue.
The way ISO is implemented on most cameras, raising the ISO lowers the exposure that the camera can accept before the highlights clip. Thus the top end of the DR ratio is reduced, reducing DR.
Changing the ISO doesn’t affect the full well capacity of the sensor at all. However, if it results in a change of voltage gain before the ADC, it does affect the level at which the ADC clips the highlights. Suppose the ADC is set to accept a maximum voltage of one volt, which let’s say represents 10,000 photoelectrons in the sensor. Now we double the voltage gain. One volt now represents 5,000 photoelectrons, so the highlights are clipped at 5,000 rather than 10,000.
Not my text by the way. I quote a guy who tried to explain it to me.
As you see, he talks about gain on the ADC. What is effectively done is changing the charge (i.e. the voltage) to a point where it can be read properly by the ADC.
Say we change the ISO 4 stops while the exposure is stable. Then we don’t overexpose the sensor (it’s still the same exposure); we only overflow the ADC input channels, clipping everything higher than the maximum accepted voltage. That’s why the DR of the camera gets smaller.
Yes, we have an agreement.
ISO is loosely coupled to the gain circuit of the ADC. When we dial the ISO wheel, we tell the camera to change the gain of the ADC input to maximize the voltage for the analog-to-digital conversion, so that the highest exposure is represented. (Edit: along the way this also enlarges the photon noise; shot noise is enlarged by the gain equally with the “image” data. By shortening the shutter time when raising the ISO value, the amount of gathered shot noise is less; that’s why most sensors are better off with raising the ISO than with a longer shutter time.)
Agreed; that’s why every raw developer has a slightly different color and WB interpretation. It’s a “floating” space, not precise, and therefore has no real white point and black point, and also no really defined, balanced white balance. That part is done by the raw conversion algorithm of the raw developer, which can differ in interpretation.
Oh, a misreading/miswriting. I meant the resulting RGB pixel, not the photon well.
That’s indeed one number. When you need four wells (R, G, B, G), the sensor’s resolution is (at least) four times the native photo resolution. Setting an m43 camera to 16:9 is just a crop; the exposure still uses the full sensor. (I know you know this.)
Thanks for this link.
I skipped most of the formulas because of my formula blindness. Dyslexic.
Those DR calculations are a killer.
But I understood most of it in general. The BSI part was very interesting.
The noise part of his explanation was also interesting. SNR.
I did notice some thresholds on my camera but didn’t understand (I had some ideas) which part of the chain picked up the noise. Especially thermal noise due to long exposure times is a b…h which post-processing can’t handle.
That’s why I use ISO 3200 (with DeepPRIME even ISO 6400) in darkness; the (ISO) gain noise and digital ADC noise (banding) are less troublesome for denoising than the long-exposure shot noise, light pollution, stray light, and thermal noise.
One thing is clear: a better lens (better optics) produces fewer of its own stray photons, which depart from the real path through the optics and randomly add to the photon count; it therefore has higher resolving power. In other words, it handles long exposure times better. The better the glass, the less noise caused by stray photons.
On hot days it’s better to raise the ISO than to overdo the shutter time, heating up the sensor and its circuits, which adds more noise to the count. Mostly to the “red” count, right? That’s why long exposures are reddish in the shadows.
First, other than advertising stuff, in your example, apparently there is NO reason to go beyond the native ISO, in this case 6400?
Second, if I accept that, and it does sound logical, how does a person know the “limit” of how high one can go in ISO? I’m looking at my Nikon Df as I type this - the highest ISO listed is 12,800. Then there is H1 and H2. Are those extra values useful for anything other than advertising? And how would I know if the 12,800 is still a “native” ISO, to use your words?
I guess one more question, since this is the PL4 forum - how high can one go in ISO, and still expect the awesome noise software to still get an acceptable result? Based on what you wrote, I suspect the noise won’t get any worse if you go beyond the highest “native value”. Am I correct?
And what about the extended base ISO?
My camera has a native ISO 200, but I can dial in ISO 160 and ISO 100…
I suspect that using those two is dipping into the nearly-black region of the readout.
I would say ISO 12800 starts to be icky.
Take a DR minimum of 7… that’s ISO 25600.
Your color tonal degradation starts to dive under 7 bits of color depth at ISO 3200. But at that point you also see fewer colors with your own eyes.
You are still off. I used to think the same thing: four sensor pixels make one image pixel. If I could just rip off the Bayer filters, I could get a grayscale with 4X the resolution.
Nope. This is what demosaicing is about—you interpolate the missing color values. So those four “wells” are still four pixels. And after demosaicing, all four pixels will each have a complete RGB value. For each pixel, two of those channels will be interpolated.
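Here’s a minimal sketch of that idea, using a hypothetical 4x4 RGGB mosaic with made-up values and crude 3x3 neighbour averaging rather than any real camera’s demosaicing algorithm:

```python
import numpy as np

# Toy 4x4 RGGB Bayer mosaic: each sensor pixel recorded exactly ONE number.
raw = np.array([[10, 20, 12, 22],
                [30, 40, 32, 42],
                [11, 21, 13, 23],
                [31, 41, 33, 43]], dtype=float)

# Which channel each pixel actually sampled (RGGB layout):
r_mask = np.zeros_like(raw, dtype=bool); r_mask[0::2, 0::2] = True
b_mask = np.zeros_like(raw, dtype=bool); b_mask[1::2, 1::2] = True
g_mask = ~(r_mask | b_mask)

def interpolate(mask):
    """Fill one channel everywhere by averaging the sampled values in a
    3x3 window -- a crude stand-in for real demosaicing."""
    h, w = raw.shape
    out = np.empty_like(raw)
    for y in range(h):
        for x in range(w):
            ys = slice(max(y - 1, 0), min(y + 2, h))
            xs = slice(max(x - 1, 0), min(x + 2, w))
            out[y, x] = raw[ys, xs][mask[ys, xs]].mean()
    out[mask] = raw[mask]  # keep the actually measured samples
    return out

rgb = np.dstack([interpolate(m) for m in (r_mask, g_mask, b_mask)])
# Every pixel now carries a full RGB triple; at each pixel, two of the
# three channels were interpolated rather than measured.
assert rgb.shape == (4, 4, 3)
assert rgb[0, 0, 0] == 10.0  # the measured red sample is preserved
```

The key takeaway matches the post above: resolution is not divided by four; instead, each pixel keeps its one measured channel and borrows the other two from neighbours.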
There are precise definitions of colorspace and imprecise ones. In the precise sense, there is no colorspace for RAW files; in the imprecise sense, you can choose a sensor characterization to resolve the sensor numbers into colors and call that a colorspace, but I was using the precise definition.
To hammer the point, your camera’s sensor probably doesn’t match any existing sensor characterization and will probably drift with time (the dyes in the filters might break down). A RAW file’s “colorspace” is always going to be an approximate thing.
That’s my take. Manufacturers are always coming up with weird tricks, so without having a specific camera and an expert on hand, I wouldn’t swear it’s always the case; for most cameras today, though, it’s probably the case. If someone knows an exception, I’d like to hear it.
There’s nothing to say that ISO 6400 is the native max. The max depends on the camera; it could be 1600 or it could be 256,000 (in theory).
I usually try a Google search or see if the manual gives it away somehow. For example, this link (Nikon Df Review - ISO Performance) claims that levels higher than 12,800 are not native.
It looks like Peter also found a link.
Apparently, ISO 50 is also not native, but done by just raising the exposure by a stop and then lowering the exposure in post. The “lowering the exposure in post” is automated and has to be understood by every software tool dealing with the RAW file. Software that missed this rule would display the image one stop overexposed. Basically, you don’t really gain any dynamic range this way and you might blow out some highlights.
I notice that page has the statement “As we move up to higher ISO values, noise obviously starts becoming an issue.” Hopefully, those of you who read my document on ISO will understand that the author was actually lowering the exposure on the comparison shots and that that is the source of the degraded images.
I feel like a kid in Junior High who walked into a college class on calculus and analytic geometry. I think I understand the concepts, but the math and data are way beyond my ability to consider. Still, I think that I can simply decide that on my newer cameras (read Nikon Df, D750, and Leica M10) I can set a personal ISO limit of 6400 and make sure I never go over that.
I have a rough idea of what happens when I go too high, but I don’t see the need to push beyond that.
To avoid this happening unintentionally, I can go into my auto-ISO settings and lock in 6400 at the top end. I think I have it set now to 10,000.
I think an appropriate next step is to take a series of photos at ISO 6400 and confirm that PL4’s “Deep Prime” will create an acceptable result.
There was a similar discussion in the Leica forum many months ago, about how high one can go with ISO. If I can find that link again, I’ll post it here.
Just one last question before I head off to sleep. All of these discussions have been about “most” cameras that use a typical “Bayer” pixel distribution. Fuji has developed an x-trans sensor with a different sensor layout. There are many web pages that talk about this - here’s one of them: https://petapixel.com/2017/03/03/x-trans-vs-bayer-sensors-fantastic-claims-test/
Does the different type of sensor used in the Fuji offer any advantage in terms of maximum ISO? (I own a Fuji X100f, but I’ve been concentrating on my Nikon and Leica cameras lately.)
I shoot birds with my Canon 80D. Its pixels are far smaller than the Nikon Df’s. Too often, I’m shooting flighty birds in a dark forest and only able to max out at f/6.3. This means a lot of ISO 6400 shots. DeepPRIME works well for me and it’s only going to be better for you.
I’m no expert on these different sensor patterns. I’d go with what the author of the article wrote. Interesting article, by the way, with a nice analysis.
I am not into all this hyper-technical stuff either and find it confuses most people rather than making things clearer.
Usually, anything that doesn’t have a “real number” (H1, H2, etc) is not “native”.
I read an article once about ISO, the essence of which was that ISO is not a measure of sensitivity but of how much the signal is being amplified. The premise is that, on the sensor, there is only one “base” ISO (usually 100) and everything else is just a matter of by how much the processor in the camera amplifies the signal produced by the sensor. In other words, it’s not so much about using a stronger antenna for your radio to capture a cleaner signal but more about simply turning up the volume.
Well, the D750 is reckoned to be a good performer in low light conditions - being full frame helps there. Its “native” ISO range tops out at 12,800, which I would confidently use, knowing I have DeepPRIME for processing.
Don’t forget that, as yet, most articles tend to be written with processing via the Adobe chain in mind. Dxo is still not widely recognised and, as such, discussion about acceptable noise tends to devolve down to what PS or Lr or ACR can cope with.
I have a D810 and regularly shoot concerts and other stuff at 10,000 ISO, which, with PL’s DeepPRIME produces superb images. You shouldn’t have any problem with the D750.
The problem with general discussions about high ISO and noise is that it all depends on the make of camera being used by the author. For example, my experience is that Canon cameras produce far more noise per ISO than Nikon and, for that reason amongst others, I never advise people to buy them if they have a choice.
I, personally, never use auto-ISO, which means I can judge, for each scenario, the balance between quality of image and noise that I wish to achieve.
Which DxO do not support and, understandably, have been reticent so to do, presumably because of the vast resources it would involve in writing a demosaicer for just one exception to virtually every other sensor.
From reading a couple of reviews, Fuji’s marketing hype says so but reality doesn’t support that. In fact, from what I understood from the article you linked to, even at lower ISOs, the overall image quality isn’t as good.
Mike, I would rate the D750 as the most competent of your cameras. Pair it with some decent glass and stick with it and you won’t go far wrong.
Oh, and stay away from relying on overly-technical articles to tell you how to take a picture, in favour of the “suck it and see” school of working out what works best. Take some time to familiarise yourself with what your camera can and can’t do. That is how I came to my views on how to take the best possible wide dynamic range shots with all that stuff about over-exposing by 2 stops to achieve a good ETTR histogram.
All of which means that, despite being interesting, we have moved just a tad off-topic for this discussion, which, ostensibly, started out by being about the Nikon Df and PL4.
Ah, what you’re actually saying is: every pixel “borrows” the missing color data from its neighbors.
So an R pixel is completed to RGB by taking over the values of the G and B pixels lying next to it.
It creates the three RGB values from one actually measured exposure value and the others around it.
Understood. (If I think about it, it’s logical. Why waste 3/4 of your sensor space if you can borrow the values you miss and add them together?)
I forgot to write that the DxOMark figures are technical DR, not photographic
(explained in the articles).
So indeed a DxOMark 9 EV range is probably 7.x EV.
For my camera, ISO 3200 is safe and ISO 6400 is an “if I need it, I will use it”. Anything above that is just desperate.
For your camera: an ISO 6400 limit and an “if I need it” ISO 12800.
Noise and lens rendering (“resolving power”, that good, contrasty, sharp look) are all affected by stray light bouncing into the “wrong” well, making the contrast lines softer. That’s why better glass is better.
That’s aside from distortions caused by flawed optics.
So you want to amp up (improve) your camera? Buy better glass…
I had fun; I learned something new about the deeper, darker bowels of a camera. Back to PL.
[quote=“Joanna, post:132, topic:17046”]
Mike, I would rate the D750 as the most competent of your cameras. Pair it with some decent glass and stick with it and you won’t go far wrong…Take some time to familiarise yourself with what your camera can and can’t do. That is how I came to my views on how to take the best possible wide dynamic range shots with all that stuff about over-exposing by 2 stops to achieve a good ETTR histogram. …All of which means that, despite being interesting, we have moved just a tad off-topic for this discussion, which, ostensibly, started out by being about the Nikon Df and PL4…[/quote]
Quick answer. Yes, the “most competent” camera I currently own is the D750. The Df with manual controls is more like the cameras I’ve grown up with. The Leica doesn’t have most of the things people consider important today, but I think it is my “best” camera, with my “best” glass. But the Leica is best for candid photography. Taking as careful a shot as I can, the Df has a better interface. The only way the Leica can compete when I’m using a long lens, is to sort of turn it into a DSLR (Visoflex, or Live View).
My quick way to do ETTR is to view the histogram in the camera as I’m shooting, and make sure there is a lot of empty space to the right. Not scientific, but quick to evaluate. On the Nikon, if there is empty space to the right and the blinks haven’t started yet, I think I’m close to the way you showed me.
There’s a good chance I can get the Covid-19 vaccine in the next week or two. After that, and a couple of weeks waiting time, I can hopefully go back to taking all the photos I want, rather than being stuck in my condo most of the day. All this “free time” however has allowed me to learn so much more, trying to keep up in this forum.
Oh, and if the topic for this thread drifts, that’s fine. It’s all good information, once I assimilate it into my brain, as I did with your exposure examples.