I am new to PL and I am puzzled by this - any advice would be gratefully received.
I have some images of birds, taken with a G9 and Pana 100-400 at some distance. The images are OK; I don’t expect much sharpness from small birds at a distance.
But I have found that if I crop in really hard - so that the image is about a ninth of the original area (1670 x 1252 px from the original 5184 x 3888) - the image becomes much sharper and more detail pops out. This is wonderful - but what is happening?
Lens Softness Compensation and Chromatic Aberration corrections can be previewed only at 75% or higher zoom levels. You may also use the Loupe tool to get a better idea of what DeepPRIME may change.
Does it explain your case or is it something different?
Thanks. I haven’t applied any noise reduction (other than the default lowest level).
If it is the Softness Compensation or Chromatic Aberration correction, does that mean that if I export without the super hard crop, I will still get the sharpness I am previewing in the hard crop? I imagine that would be the case, but I’m not sure.
Must say it is really impressive, whatever it is that is doing it! Cheers
stuck
(Canon, PL7+FP7+VP3 on Win 10 + GTX 1050ti)
Yes. As @Wlodek has already explained, when the view is set to fit the screen, a super hard crop is likely to push the zoom level above 75%, and once the view is above 75%, PL will preview all the sharpening detail.
Try zooming in without cropping; you should see things improve once you get above 75%. When you export, the output image has all the sharpening (and other edits) baked in.
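If it helps to see the numbers, here is a rough sketch (plain Python, nothing PL actually runs; the viewport size is just an assumption, substitute your own window) of why the full frame fits well below 75% while your hard crop fits above it:

```python
# Illustrative sketch, not PL's code: "fit to screen" zoom for the full frame vs the crop.
VIEWPORT_W, VIEWPORT_H = 1920, 1080  # assumed preview area in pixels

def fit_zoom(img_w: int, img_h: int) -> float:
    """Zoom factor at which the whole image fits inside the viewport."""
    return min(VIEWPORT_W / img_w, VIEWPORT_H / img_h)

full = fit_zoom(5184, 3888)   # whole G9 frame
crop = fit_zoom(1670, 1252)   # the hard crop from the original post

print(f"Full frame fits at {full:.0%}")   # ~28% -> below 75%, corrections not previewed
print(f"Hard crop fits at {crop:.0%}")    # ~86% -> above 75%, sharpening is previewed
```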
I don’t need to compare; the sudden increase in sharpness is obvious when I crop. I will check, but apparently it will be just as obvious if I just zoom in.
@OldFartAtPlay Another tip for you. When you are zoomed out to view the entire image, it will tend to render more sharply if you stick to zoom levels where the image is scaled down by a simple whole factor, for example 50% (half size) or 25% (quarter size). This is true with all image applications in my experience.
The simple reason is that a pixel is indivisible - it’s either there or it isn’t; you can’t display half a pixel. When an image is viewed at, say, 66%, each screen pixel falls between two source pixels, so the software has to interpret the image and make a judgement about what value to display. When the zoom level divides the image evenly, each screen pixel lines up with whole source pixels and the rendering is cleaner. I feel I’ve explained this poorly, but I hope you get the idea I’m trying to convey.
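To make that concrete, here is a tiny illustration (plain Python, hypothetical numbers, nothing to do with PL’s actual renderer) of where each screen pixel lands in the source image at 50% versus 66%:

```python
# Illustrative sketch: source-image coordinates sampled for the first few screen pixels.
def source_positions(zoom: float, n_screen_px: int = 8):
    """Source x coordinate that each screen pixel maps back to at a given zoom."""
    return [px / zoom for px in range(n_screen_px)]

print("50%:", source_positions(0.50))  # 0.0, 2.0, 4.0, ... -> whole source pixels, no guessing
print("66%:", source_positions(0.66))  # 0.0, 1.52, 3.03, ... -> between pixels, must interpolate
```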
But also, as others have already said, use the loupe or zoom to 100% to view the true sharpness of the image; in PL this will also show the result of any lens adjustments and noise reduction you may have applied.
Actually, you can apply the highest level all the time as all levels of DeepPRIME are adaptive and don’t over-sharpen, even at low ISOs like 100. What it does do is clarify detail in shadows, etc.
One of the best tools for improving sharpness is the four Fine Contrast sliders that come with FilmPack, integrated in PL. I couldn’t do my work without them.
And, if I may mention it yet again, Topaz Photo AI is absolutely amazing at increasing image size and applying appropriate print sharpening. Just yesterday I had a client who had accidentally shot an important image at 640 x 480 px. Using Topaz, I was able to make him an acceptable A3+ print for an exhibition.
Of course you compare - either impulsively or rationally. It’s in your question.
And if you had compared them at 100%, you wouldn’t have seen any difference.