Unfortunately, Nikon Z cameras can only save multiple-exposure images as JPEG
I knew there was a good reason for hanging on to “old” technology
I usually use “Average” mode for multiple shots but, just out of sheer curiosity, I thought I would play with the Darken mode…
Veeerrryyy interesting. I’m going to have to play some more
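For anyone wondering what the two modes actually compute, here is a minimal numpy sketch (the frames are just random stand-ins): ‘Average’ takes the per-pixel mean of the stack, while ‘Darken’ keeps the darkest sample at each pixel.

```python
import numpy as np

# Stand-in exposures; in reality these would be the individual shots.
frames = [np.random.rand(4, 4) for _ in range(3)]
stack = np.stack(frames)

average = stack.mean(axis=0)  # "Average" mode: per-pixel mean of all frames
darken = stack.min(axis=0)    # "Darken" mode: per-pixel minimum, keeps the darkest value
```

That per-pixel minimum is why Darken looks so different: a bright highlight survives only if it is bright in every frame.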
So, to all you naysayers who said it wasn’t possible to ‘compile’ a set of superimposed shots and output them as a RAW file: Nikon had done this since the D200, stopping 9 years ago with the D850. No Z camera can do this, even as TIFF.
Indeed, a simple Google search for ‘HDR for/with RAW output’ gets you nowhere.
On-chip phase detect might make this trickier for mirrorless, but pixel shift would allow multiple coverage of the same area, with some sort of ‘content-aware’ fill if necessary.
I wrongly assumed this ‘old tech’ had simply grown up: in-camera processing power is considerably higher in an EXPEED 7 than in an EXPEED 5.
Still not quite sure why Nikon introduced this weird NEFX format!
I already talked about that with examples
I know! And thank you for reminding me
So why does everyone still say it CANNOT (not isn’t) be done… when it has been done for 20 years?
Thank you, Pierre, for this PP article.
So, does anyone know what these 10 files, which can be compiled into a single RAW, actually are?
If they can be saved to disc, and I cannot see why not, they can be composited ex-camera at will.
There’s a sensible hint in the PP article that some in-camera lens corrections make this difficult, with some lenses needing so much correction that it’s almost impossible.
However, allowing it just for the lenses that need little correction seems a sensible approach. The Z8 and the 135mm Plena would satisfy this requirement; they are almost the ideal pair, the lens being nearly optically perfect.
But no option.
NEFX presumably represents an image which does not need demosaicking, but which is still raw in other respects: color balance, picture control, lens correction, gamma curve, denoising, …
If so, Nikon could have used the DNG format for this. Why did Nikon choose not to? I could speculate that it’s for commercial reasons.
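As a purely illustrative sketch of what ‘demosaicked but still raw’ could mean (the real NEFX layout is undocumented, and every number here is invented): full RGB per pixel in linear light, with white balance and the tone curve still to be applied.

```python
import numpy as np

# Hypothetical "linear raw" image: full RGB per pixel, no rendering applied yet.
linear_rgb = np.random.rand(4, 6, 3)

def develop(img, wb=(2.0, 1.0, 1.6), gamma=2.2):
    img = img * np.asarray(wb)        # white balance still to be applied
    img = np.clip(img, 0.0, 1.0)
    return img ** (1.0 / gamma)       # gamma/tone curve still to be applied

preview = develop(linear_rgb)
```

DNG does in fact define a ‘linear raw’ variant for exactly this kind of demosaicked-but-unrendered data, which is why it would have seemed a natural fit.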
Well, if you want, you can choose the option to keep the individual NEF files as well as constructing the combined NEF image.
Like I said, they can, optionally, be saved alongside the combined file.
Well, if I could get my hands on one, I have the tools to try and analyse it.
RAW means the digital sensel values. Operations like white balance, exposure corrections and some more are done on these values.
It looks like pixel shifting imitates a kind of Foveon sensor: 3-channel sensels instead of 1-channel sensels. Just a guess.
Shifting the sensor doesn’t make sense to me. It looks more like the CFA is shifted.
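To illustrate that first point, a quick sketch of white balance applied directly to the sensel values of an RGGB mosaic, before any demosaicking (the gains are invented, not from any real camera):

```python
import numpy as np

def apply_wb(mosaic, r_gain=2.0, g_gain=1.0, b_gain=1.6):
    out = mosaic.astype(np.float64)  # copy so the raw mosaic is untouched
    out[0::2, 0::2] *= r_gain        # R sites of the RGGB tile
    out[0::2, 1::2] *= g_gain        # G sites (even rows)
    out[1::2, 0::2] *= g_gain        # G sites (odd rows)
    out[1::2, 1::2] *= b_gain        # B sites
    return out
```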
George
https://onlinemanual.nikonimglib.com/d850/en/18_menu_guide_03_24.html
I can’t see where it actually says the combo file is a NEF?
It would seem everyone is now happy that you CAN combine multiple NEFs into a single NEF! If you can align/combine a vertical stack of NEFs, you can align and stack adjacent NEFs. So an NEFX could just be a massive NEF? Apparently it isn’t!
It is more an imitation of three-sensor imaging, as if a beam splitter were used, but in addition it enhances accuracy by sub-pixel shifting.
Refer to Google Gemini:
Nikon implements pixel shift in the Z8 (and other compatible Z-series cameras like the Zf and Z6III) by mechanically shifting the image sensor by very precise increments.
It is indeed the sensor-CFA (Color Filter Array) assembly that shifts as a block. The camera takes multiple exposures while the sensor is minutely moved:
* Sub-pixel shifts: The sensor is shifted by fractions of a pixel (e.g., 1/2 pixel or 1 pixel) both horizontally and vertically. This allows each photosite on the sensor to capture light from slightly different positions.
* Multiple exposures: The camera takes a sequence of shots (e.g., 4, 8, 16, or 32 photos).
* Software merging: These individual RAW (NEF) files are then merged using Nikon’s NX Studio software (or potentially third-party software with compatible algorithms) to create a single, higher-resolution image.
The benefits of this approach include:
* Improved resolution: By capturing more precise color and detail information across the sensor, the merged image can have significantly higher effective resolution (up to approximately 180 megapixels on the Z8).
* Reduced moiré and false colors: The Bayer filter array on typical sensors interpolates color information. By shifting the sensor, each pixel location can be sampled by different color filters, leading to more accurate color reproduction and reduced artifacts like moiré and color fringing.
* Reduced noise: The oversampling of information at each pixel location can also contribute to lower noise in the final image.
It’s important to note that because this process involves taking multiple sequential shots, it’s best suited for static subjects and requires a tripod. Any movement in the scene or camera during the capture sequence can lead to artifacts in the final merged image.
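To make that description concrete, here is a toy merge in numpy. The one-pixel shift pattern and the border handling are assumptions (Nikon’s actual merge in NX Studio is not public); the point is only that after four one-pixel shifts, every scene pixel has been sampled through R, G and B filters, so no demosaicking interpolation is needed.

```python
import numpy as np

SHIFTS = [(0, 0), (0, 1), (1, 0), (1, 1)]  # assumed one-pixel sensor offsets (dy, dx)
CFA = [["R", "G"], ["G", "B"]]             # RGGB Bayer tile

def merge_pixel_shift(frames, height, width):
    """frames[i][y, x] is the raw value at sensor site (y, x) for shift SHIFTS[i]."""
    rgb = {c: np.zeros((height, width)) for c in "RGB"}
    count = {c: np.zeros((height, width)) for c in "RGB"}
    for (dy, dx), frame in zip(SHIFTS, frames):
        for y in range(height):
            for x in range(width):
                sy, sx = y - dy, x - dx           # sensor site that saw scene pixel (y, x)
                if 0 <= sy < height and 0 <= sx < width:
                    c = CFA[sy % 2][sx % 2]       # colour filter over that site
                    rgb[c][y, x] += frame[sy, sx]
                    count[c][y, x] += 1
    # Average the two G samples; interior pixels now have a true R, G and B reading.
    return {c: rgb[c] / np.maximum(count[c], 1) for c in "RGB"}

frames = [np.random.rand(6, 8) for _ in SHIFTS]   # stand-in raw captures
full_rgb = merge_pixel_shift(frames, 6, 8)
```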
It says the sensor-CFA (Color Filter Array) assembly shifts as a block, and further on it says again that the sensor is moving. Very confusing.
I see Gemini is Google’s AI assistant.
George
The CFA is layered on top of the sensor, forming an assembly
AFAIK, the CFA and the rest of the sensor are a single, movable chunk.
From your link to Nikon:
In pixel shift shooting, taking multiple pictures while shifting the image sensor by one pixel unit allows G or B to be captured at the pixel location where R was captured
That pixel is covering another part of the image when the sensor is moved.
It just doesn’t make sense to me if it’s the sensor that’s moving across a static image. It only makes sense when the CFA is moving.
If the sensor is moving, camera shake is introduced.
George
Everything is absolutely stationary at the moment of exposure.
For the individual images, yes. But every image will have different content because of the pixel shifting.
George
That’s kinda the point! Pixel Shift works in two distinct ways/options.
You can opt to take multiple images of EXACTLY the same ‘view’, which does a great job of removing random noise and improving colour accuracy.
You can also allow the sensor to move by a small increment, thus increasing resolution with intermediate ‘frames’.
You can do both, i.e. take 32 frames per ‘view’. The software knows you’ve moved the sensor and remaps it.
Imagine a shift lens where you bolt the lens down and move the body… as opposed to the usual way. The view doesn’t increase with the former.
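A toy numpy sketch of that remapping idea (the half-pixel pattern and the simple interleave are assumptions, not Nikon’s actual algorithm): repeated frames at one sensor position are averaged to cut random noise, and frames at known half-pixel offsets are interleaved onto a doubled grid for resolution.

```python
import numpy as np

SHIFTS = [(0, 0), (0, 1), (1, 0), (1, 1)]  # assumed offsets in half-pixel units

def merge_views(views):
    """views[i] is a list of repeated frames taken at sensor offset SHIFTS[i]."""
    h, w = views[0][0].shape
    hi_res = np.zeros((2 * h, 2 * w))
    for (dy, dx), frames in zip(SHIFTS, views):
        denoised = np.mean(np.stack(frames), axis=0)  # option 1: average the repeats
        hi_res[dy::2, dx::2] = denoised               # option 2: interleave by known shift
    return hi_res

views = [[np.random.rand(4, 4) for _ in range(8)] for _ in SHIFTS]  # 4 x 8 = 32 frames
print(merge_views(views).shape)  # (8, 8): doubled resolution
```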