Hi, Joanna,
This is an interesting observation and I am trying to make sense of it. In a very simplistic sense, sensors are basically photon counters and count up until they can't count any higher. ISO affects this only in the sense that, since the photon count is amplified, the dynamic range is reduced: the maximum usable photon count is halved for each ISO step up.
Given this, the goal might be to ensure that the brightest spot in a scene does not exceed the maximum photon count supported by the sensor at a given ISO. It’s interesting to hear that this exposure could be calculated as between 1⅔ and 2 stops above the exposure selected by spot-metering the brightest spot.
I tried to derive this number. Let's say I have a white, evenly lit surface, meter it and take a photo based on the reading. I then read the pixel value of the unadjusted image: what is the pixel's luminance value? I see some web pages saying that meters assume they are metering an 18% gray surface and provide an exposure such that the result comes out 18% gray. So, in an 8-bit image generated from the unadjusted RAW file of my white surface, I would expect to see an RGB value of around (46, 46, 46), since 18% of 255 is about 46.
Raising this by 2 stops multiplies the value by 4, giving me about (184, 184, 184). It looks like one could go another 1/3 stop above this, since 184 × 2^⅓ ≈ 232 is still below 255, while a full half stop would already clip.
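To check my arithmetic, here is a quick Python sketch. It assumes a linear (gamma-free) 8-bit rendering, which a real JPEG pipeline is not, so take it only as a sanity check of the stop math:

```python
import math

# Assumption: linear 8-bit values, no gamma curve applied.
full_scale = 255
middle_gray = round(0.18 * full_scale)  # metered 18% gray -> 46

# Raising the exposure by 2 stops multiplies the linear value by 2^2 = 4.
two_stops_up = middle_gray * 2 ** 2     # -> 184

# Total headroom from metered gray up to clipping, in stops.
headroom_stops = math.log2(full_scale / middle_gray)

print(middle_gray)                # 46
print(two_stops_up)               # 184
print(round(headroom_stops, 2))   # ~2.47 stops of headroom
```

So in this toy model the brightest spot can sit roughly 2⅓ to 2½ stops above the metered value before clipping, which is in the same ballpark as the 1⅔–2 stops you mentioned.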
If a meter reproduces 18% gray as 18% gray and if my camera uses a 12-bit sensor, then 10 stops below 18% gray will yield 0, which is the end of the line. This would not be true for a 14-bit sensor, of course, as it could go 12 stops down. As another corollary, every ISO step halves the maximum count, which is equivalent to dropping a bit, i.e. one stop of dynamic range. So, if base sensitivity is ISO 100 with a 12-bit sensor, then ISO 200 would support only 9 stops of underexposure, ISO 400 would support 8 and so on.
I never really spent much time thinking about what the meter is actually doing. Of course, if one has an EVF, one could just look at the histogram to avoid blowing out the highlights.
Let me know if I misunderstood or made incorrect assumptions.