Viewing the effect of denoising - Is the limitation to a small window really unavoidable?

Hi,

The denoising process is slow; that is why, while developing an image, we are limited to seeing the effect of denoising in a small preview window. The full view cannot be updated in real time, and that’s OK.

Let’s assume I have found apparently good denoising settings for a given image using the information displayed in the small preview window. What if I wanted to confirm them without necessarily exporting the image? Could there be an option that lets me patiently wait for the full image to be denoised, blocking all other modifications while I view the result?

Yes, it has long been known as a round trip:

DxO PL → export to an application as DNG with only NR + optical corrections applied → and that application is? DxO PL!

because “patiently waiting” for DxO to offer even a preview twice as big, like Adobe has, would take a lifetime

I have always gone the DNG route, but I wonder whether the DeepPRIME edits are viewable in soft proofing. I guess they won’t be, but that kind of waters down the usefulness of soft proofing and of using virtual copies.

You could set up a watch folder (on either Windows or Mac) and create an export preset that sends previews to that watch folder. The watch folder would open every image that shows up there in the preview app of your choice (mine is Lilyview on macOS). This would effectively give you the functionality you want.

Two key presses:

  • ⌘-k
  • Return

and your image should pop up automatically for full screen preview in your preferred image browsing app.
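
For anyone who wants to picture the watch-folder half of this, here is a minimal sketch using the Python watchdog package. The folder path and the viewer app are only assumptions (Lilyview on macOS, as above); point your export preset at the same folder.

```python
# Minimal sketch of the watch-folder idea, assuming the third-party
# "watchdog" package (pip install watchdog). Folder name and viewer app
# are placeholders; adjust them to your own export preset and OS.
import subprocess
import time
from pathlib import Path

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

WATCH_DIR = Path.home() / "PL-previews"   # the folder your export preset writes to
WATCH_DIR.mkdir(exist_ok=True)            # the folder must exist before watching it

class OpenInViewer(FileSystemEventHandler):
    def on_created(self, event):
        if event.is_directory:
            return
        # macOS: hand each new preview to the viewer (Lilyview here, as an example)
        subprocess.run(["open", "-a", "Lilyview", event.src_path])

observer = Observer()
observer.schedule(OpenInViewer(), str(WATCH_DIR), recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```

In practice you may want a short delay before opening each file, since the export may still be writing it when the event fires.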


In a similar vein, what I would like to see is the ability to drag the noise-reduction preview window around to check neighbouring areas, as the preview window is just too small for high-resolution images.


Thanks for the proposals.

I was thinking about a procedure that avoids generating a file:

  1. Lock the current editing state.
  2. Generate an in-memory preview and display it (with zooming in and out allowed).
  3. Freeze the preview (that is, block any setting change and image update).
  4. Wait until the user unlocks editing.

That would be faster than going through an intermediate file.
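
Just to make the ordering concrete, a rough sketch of that flow could look like the following. Every function here is a stand-in I made up; nothing is PhotoLab API, and only the sequence of the four steps matters.

```python
# Hypothetical sketch of the proposed flow; all functions are stand-ins,
# only the ordering of steps 1-4 matters.

def lock_editing(image):
    print(f"1. editing locked for {image}")

def render_full_preview(image, settings):
    # the slow part: a full-size, in-memory denoise render, no file written
    print(f"2. rendering full preview of {image} with {settings}")
    return f"preview of {image}"

def show_preview(preview):
    # zooming in and out allowed, but every setting change stays blocked
    print(f"3. showing {preview}")

def wait_until_user_unlocks():
    input("4. press Enter to unlock editing ")

def unlock_editing(image):
    print(f"editing unlocked for {image}")

def preview_full_denoise(image, settings):
    lock_editing(image)
    try:
        show_preview(render_full_preview(image, settings))
        wait_until_user_unlocks()
    finally:
        unlock_editing(image)

preview_full_denoise("IMG_0001.NEF", {"method": "DeepPRIME"})
```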

I use a RAM disk… it is as fast as it gets


Feature suggestion: allow the user to create a rectangle (the same way as creating a selection) plus a toggle switch, so that the image detail shown within the boundaries of the box includes/excludes all adjustments (inside the editing view).

An advanced version could even include a pop-up checklist for which adjustments should be shown in their full glory within that rectangle when “Show enhancements” is enabled.

This way a user can set the size of the preview according to their computer’s performance and the time they are willing to wait for all adjustments to be shown in the preview.
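
Just to make the suggestion concrete, the setting could be imagined as a small structure like the one below. It is entirely hypothetical: every field name is invented, and nothing of the kind exists in PhotoLab today.

```python
# Hypothetical data model for the suggested preview rectangle;
# all field names are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class PreviewRectangle:
    x: int                          # top-left corner, in image pixels
    y: int
    width: int                      # user-chosen size, matched to the machine's speed
    height: int
    show_enhancements: bool = True  # the on/off toggle
    # the "advanced version": only the listed adjustments are rendered inside the box
    adjustments: set[str] = field(default_factory=lambda: {"DeepPRIME", "Lens sharpness"})

# Example: a modest 1024x768 box for a slower machine, denoising only
box = PreviewRectangle(x=2000, y=1500, width=1024, height=768,
                       adjustments={"DeepPRIME"})
```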

(I didn’t vote as the title is a question.)

You do know that you can change which area the preview window displays?
Just press the small crosshair in the preview window and you get a square you can move around in the main window to change the preview.
If you meant resizing the preview, that’s another story…

fortunately :wink:

Torstein, of course I know that. It’s awfully slow and clumsy. Enable the square, move the square, disable the square. A quick drag around the area already in the mini-window would be a heck of a lot more convenient. Without having to press a button of course. Available all the time or with a modifier key.

This would be useful even if I’ve moved the crosshairs manually but want to poke around a bit nearby after turning off the crosshairs.

I do a lot of work with very high ISO images so this is not a casual request for me.

Nope. Just generate your previews with instant display. Doesn’t get any faster than that. The RAM disk idea is interesting but probably unnecessary with SSD speeds unless the images are very large.

You can have this set up with a pixel-perfect preview tomorrow. Or you can wait three years for DxO to create a version of full preview that you don’t completely like.

To really see the DeepPRIME processing in the main window would certainly demand a very powerful PC. Maybe a computer with an RTX 4090 and a strong CPU could manage it?

Anyway, a possible solution might be a resizable preview window, so the user could choose its size according to PC power and acceptable waiting time…

What would be OK for everyone is the ability to draw a custom zone where DeepPRIME would be applied (with the possibility to save several presets for it).
That way we could get the best result depending on hardware and workflow(s).

Full preview alone would of course not be a solution.

This tiny window is ridiculous when used with ~50 MP cameras.
I often can’t even see a subject’s whole eye in it!

Does the DxO team use its own software?

This question can be asked another way: is this another “tested_by_the_developers” case?


Newbie to DPL, first post.
PL7.1 + FP7 + DP4, Win11, 32 GB RAM, i7-14700KF, RTX 4070, NVMe SSDs, 24 MP FF.
Previously used Capture NX2 and Lightroom 5.7 for 10+ years.
Currently I also use Nikon NX Studio, but only for very special cases.

Adding a button to trigger a DP preview would speed up my workflow. For example, I could then quickly decide whether an image is usable, or which color/contrast corrections are needed. I would like the DP preview to be “on demand” only, otherwise perhaps too much heat would be generated.

In my case, generating a DP (XD) preview should take 1-3 seconds, but I’m not sure about the preparation overhead. For JPEG exports this overhead is 5-6 seconds per batch, which is a bit too much for me for an online preview but still better than going through export. Perhaps .NET or the Windows drivers are responsible for that latency(?). Some clues are in the PL logs, which DxO seems to download on every export, so they must be aware of the problem.

I usually work on hundreds of high-ISO images made in the horrible lighting conditions of indoor sports. I have already come across some unwanted artifacts when using DP XD. Just a few: glued eyelashes, strange hair, smeared text. Pure DP seems to be much safer, but it still requires an online review, e.g. to check the colors. For “plastic” faces, I just add some B&W film grain, but a preview would also help there. Using HQ or PRIME NR is out of the question for higher ISO, except for cropping and a very crude preview. DP (XD) sometimes gives quite different lighting, colors, and details, so the overall perception may change dramatically. And perception is all that matters.

You can also reduce the “amount” of NR by dialing the slider to the left.


How did you survive for years before AI/ML-based NR?


Simply put, I had fewer usable pictures, but still a non-zero number. Maybe the audience was also less demanding, or maybe it’s me.

Without DP, I used lower ISO: at most 3200 on the D4, or 6400 if the target was Facebook users and you didn’t have to pull up the shadows (mostly with a 70-200/2.8 lens). Sometimes I used on-camera flash, if possible, but only in the vertical position with a built-in bounce card. My old D4 had an accident, so I bought a D780 and had to switch from LR 5.7 (which sometimes produced terrible tonal transitions, so I had to use Nikon’s slow free software to fix them). It seems DP “adds” at least one stop, and the D780 adds another half, so I get more usable pictures now. I have gone as high as ISO 12800 for a Full HD monitor with non-distracting NR, but haven’t dared to experiment with higher values yet. The colors are a bit washed out, of course, but still usable if the action and emotions are more important. Mixed lighting and color casts from the floor kill nice colors anyway. Otherwise, I still don’t go above ISO 1600.

Yes, but some people have naturally plastic skin even at ISO 100, often just in a small area above the nose. Adding some grain makes them look more natural. Sweat may also cause some problems. For strange reasons hidden in the brain, we (or at least I) don’t notice many skin defects in real life, while photography brings them out (a similar effect as with WB). For photos of women, my wife is much pickier, so I let her make the first selection.