Using Nik6 HDR Efex to merge two images (linear 16-bit TIFFs, EV0 and EV-4) produces noticeable brightness discontinuities in the output image. However, if I apply gamma correction to the input images before merging, the discontinuity is greatly reduced. Is gamma correction of the input images necessary for merging?
Additionally, when I tested merging three images (linear 16-bit TIFFs, EV0, EV-2, and EV-4), the brightness discontinuity persisted, so the cause does not seem to be an excessively large EV gap between images.
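By "gamma correction" I mean a simple power-law encode of the linear data before export. A minimal sketch in Python/NumPy (the 2.2 exponent is just an example, not necessarily what my pipeline uses):

```python
import numpy as np

def encode_gamma(linear_u16, gamma=2.2):
    """Power-law gamma encode for a linear 16-bit image.

    linear_u16: uint16 array holding linear sensor data.
    Returns a uint16 array with gamma-encoded values.
    """
    x = linear_u16.astype(np.float64) / 65535.0  # normalize to [0, 1]
    y = np.power(x, 1.0 / gamma)                 # power-law encode
    return np.round(y * 65535.0).astype(np.uint16)

# Mid-gray in linear space is pushed brighter by the encode;
# black and white endpoints are unchanged.
linear = np.array([[0, 32768, 65535]], dtype=np.uint16)
encoded = encode_gamma(linear)
```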
Why are you working with 16 bit TIFF files? Are they scans? In which case, are the film originals at different exposures?
I am able to shoot 14 stop digital images in one RAW file, without the need for multiple HDR exposures.
When you state your exposures in terms of 0EV, how are you measuring that… average, centre weighted or spot?
My personal feeling is that this is not a software problem but a miscomprehension of how to best capture the scene to start with, thus creating the, possibly unnecessary, merge stage.
And, from many years experience with this kind of image, I have concluded that HDR merges are a very old fashioned idea that does not transfer from film to digital.
Here’s a shot I took on my Nikon D850, exposing for the highlights at +1.7EV, without any adjustments…
I am researching ISP pipeline-related matters, starting from the RAW domain and testing the impact of different functions step by step. The original RAW images are 12-bit long and short exposure images, aiming to create a high dynamic range composite image, with Nik6 serving as a reference for the composite result.
To enable Nik6 to use RAW for merging, the 12-bit RAW is converted into 16-bit RGB TIFF (also tested converting 12-bit to 8-bit RGB JPEG, which resulted in the same brightness discontinuity issue). The RAW files are captured using an FHW development board with an IMX307 sensor in stagger mode, and the metering method is 16x16 center-weighted metering.
In my expectation, Nik6 will analyze the brightness of long and short exposures and obtain the ideal composite result through a method similar to
(Long × W_L + Short × W_S) / (W_L + W_S) = HDR, where W_L and W_S are the blending weights for the long and short exposures. Even if some pixels are too dark or too bright, this formula should still produce a smooth, continuous brightness distribution.
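To make my expectation concrete, here is a minimal sketch of that kind of weighted blend (my own assumption of how such a merge could work, not Nik's actual algorithm), with both inputs normalized to [0, 1] and simple brightness-based weights:

```python
import numpy as np

def merge_weighted(long_img, short_img, ev_gap=4.0):
    """Blend a long and a short exposure with per-pixel weights.

    long_img, short_img: float arrays normalized to [0, 1].
    The short exposure is gained up by 2**ev_gap so both frames
    describe the same radiometric scale before blending.
    """
    L = long_img.astype(np.float64)
    S = short_img.astype(np.float64) * (2.0 ** ev_gap)  # align exposure
    # W_L falls to zero as the long exposure nears saturation,
    # so blown highlights come entirely from the short exposure.
    w_l = np.clip(1.0 - long_img, 0.0, 1.0)
    w_s = 1.0 - w_l
    denom = np.maximum(w_l + w_s, 1e-8)  # guard against divide-by-zero
    return (L * w_l + S * w_s) / denom
```

With this scheme a pixel that is well exposed in both frames gets essentially the same value from either source, which is what should keep the output brightness continuous.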
In the official documentation, I saw that when the exposure difference exceeds 3 EV, there is a risk to the quality of the composite image. In actual use, however, at a 4 EV difference the Nik6 merge result depends on whether gamma correction has been applied. Nik6 seems to have more advanced merging methods for producing good HDR images, so I want to ask whether there are any requirements or limitations on the brightness of the inputs in general use.
I have to further ask if you are a researcher/student working on a scientific project or a photographer.
If the former, then IMO you are totally wasting your time - DxO’s PhotoLab software is well able to cope with compressing a 14 stop single image that has been correctly exposed, as you can see from my example.
If the latter, then the results you get are going to differ from one software to another.
What software did you use to export the .tiff files? Obviously something is wrong, as it's not supposed to go this way. What version of Nik are you running? PL doesn't do HDR, but do you have other software that can do stacking and HDR?
I'd rather do HDR or stacking than boost a single image; to me, the final image has that over-boosted shadow effect and also lacks contrast and detail, which is not really desirable. But that's my opinion; to each their own.
I suspect (guessing) the algorithm has a hard time smoothly interpolating between such extreme values at each “pixel”, resulting in a sort of extreme posterization. Colors near gray would be color shifted even with small difference in the multipliers for the RGB channels. The gamma adjustment brings most of the intermediate brightness pixels closer together allowing better interpolation (smoothing).
Not sure if the warning was presented, but a 4 EV difference between images would be challenging based on my attempts using PS years ago.
Actually, I don’t understand what you’re talking about “at all” ( I am researching ISP pipeline-related matters, …), although I’ve been using NIK for a very long time.
For meaningful output from bracketed exposures, you need images ( which you have exported as 16-bit TIFFs and fed into Nik HDR ) that actually contain and display the desired level of detail, whereas with raw files shadows can still be recovered to some extent.
In your screenshot composition the image named “EV0” doesn’t show details around the edge (obviously a tunnel), while in “EV-4” even the sky is heavily underexposed …
So – “back to the drawing board”.
(Canon, PL7+FP7+VP3 on Win 10 + GTX 1050ti)
If you search on that phrase, one of the first few hits is this paper:
Thank you all for your replies and suggestions. In summary, good exposure and an appropriate EV range are needed to achieve a good result.
To Joanna, thank you for your quick response and clear suggestions. I am a researcher, and I don’t think this is a waste of time. In practical engineering, under the demand for real-time HDR, we can only obtain two images, EV0 and EV-4, for merging.
To mikerofoto, the TIFFs were obtained as follows: (1) capturing RAW using an FHW development board with an IMX307 sensor in stagger mode, (2) converting the RAW to TIFF using OpenCV, then merging the TIFFs in Nik6.0 HDR Efex. The image below shows the test results using other merging software.