A question on DeepPRIME internal processing

I have been using Topaz Labs Gigapixel AI (TLG), at least until DxO offers something “similar”. For its own workflow, Topaz Labs recommends:

In our (Topaz) AI products, the framework should be DeNoise AI → Sharpen AI → Gigapixel AI.

My understanding, since DeepPRIME also uses some form of AI technology (neither Topaz Labs nor DxO has published technical engineering papers on which AI neural net, how many layers, etc., has actually been implemented in the software), is that DeepPRIME performs the first two steps (denoise and sharpen) as a single “integrated” step, with multiple “passes” within the internals of the neural network, to reach the best compromise between noise reduction and increased sharpness, rather than as two “independent” steps as (apparently) done by Topaz. (The two Topaz applications in the noise and sharpen steps are independent and do not cycle between the two.) Is this correct, thus in principle making the integrated DxO DeepPRIME design/implementation in some sense “better”?
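To make the distinction concrete, here is a toy sketch of the two architectures (my own illustration only; DeepPRIME’s internals are not published, and these are ordinary box/unsharp filters standing in for neural networks):

```python
import numpy as np

def denoise(signal, k=5):
    """Independent denoise stage: simple moving-average smoothing."""
    return np.convolve(signal, np.ones(k) / k, mode="same")

def sharpen(signal, amount=1.0):
    """Independent sharpen stage: unsharp masking against a blurred copy."""
    blurred = np.convolve(signal, np.ones(3) / 3, mode="same")
    return signal + amount * (signal - blurred)

def sequential(signal):
    """Two independent stages (the Topaz-style chain): the sharpener
    only ever sees the denoiser's output, never the original data."""
    return sharpen(denoise(signal))

def integrated(signal, k=5, amount=0.5):
    """Toy 'integrated' step: one pass that blends smoothing with detail
    estimated from the ORIGINAL input, so the sharpening term is not
    limited to whatever survived the denoiser."""
    smooth = np.convolve(signal, np.ones(k) / k, mode="same")
    detail = signal - smooth
    return smooth + amount * detail
```

The point is purely structural: in `sequential`, any information discarded by `denoise` is gone before `sharpen` runs, whereas `integrated` can trade noise against detail in one step, which is the design advantage being asked about.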

As an aside (my copyrighted images are available), I have been experimenting with TLG after DeepPRIME, working on TIFF, JPEG, and DNG output images, with generally excellent results (more “keepers”). Although TLG and all other such implementations supply interpolated “synthetic” pixels in the “expanded” image, in direct experiments with difficult keeper images of mine, DeepPRIME and TLG both do a “better job” than the nearest commercially available Adobe equivalents.


Maybe this can answer your question.

It seems to go into a bit more detail than other official descriptions.

As far as I can tell, the integrated steps aren’t denoise and sharpen, but demosaicing and denoise.
So from a technical perspective, DeepPRIME is more like a completely separate Bayer-matrix demosaicing algorithm that also denoises, not some sort of separate “post-process” denoiser.
Hence the better results, since the dataset used for the input is about as unchanged as it can get when it comes to ILC images.
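A minimal sketch of why operating before demosaicing matters (my own illustration; a plain box filter stands in for the actual neural network): on an RGGB mosaic, each colour plane can be denoised while it still consists only of real sensor samples, before interpolation invents any pixels.

```python
import numpy as np

def rggb_planes(mosaic):
    """Split an RGGB Bayer mosaic into its four colour planes.
    A denoiser run here sees only real sensor samples, before
    demosaicing creates any interpolated pixels."""
    return {
        "R":  mosaic[0::2, 0::2],
        "G1": mosaic[0::2, 1::2],
        "G2": mosaic[1::2, 0::2],
        "B":  mosaic[1::2, 1::2],
    }

def denoise_plane(plane, k=3):
    """Toy denoiser: k-by-k box filter on a single colour plane."""
    pad = k // 2
    padded = np.pad(plane, pad, mode="edge")
    out = np.empty(plane.shape, dtype=float)
    for i in range(plane.shape[0]):
        for j in range(plane.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out
```

A post-process denoiser instead operates after interpolation, on an image where roughly two thirds of each channel’s values were never measured at all, which is the disadvantage the post above points at.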

Thank you for that URL. The article is popular press without either mathematics or generalised algorithmics (generalised in the sense of the design of a software implementation of a neural network, not hardware AI such as a neurosynaptic processor), as I presume a detailed description would reveal too much intellectual property. Nonetheless, the article is both informative and interesting.

One question: the actual DeepPRIME “work” is done when the raw image is processed to a non-raw format (not just JPEG). In the course of using PL4 Elite with DeepPRIME, one may adjust various “settings”, typically via user-interface sliders. Do these sliders affect the operation of the DeepPRIME neural network itself, typically as adjustments to neural-network “weight and path parameters”? Or is it as though the image were output with the slider adjustments and then input to DeepPRIME, or the DeepPRIME output were fed to deterministic “slider” modules? The Topaz Labs method seems to be the latter: no interaction between the stages. Is this also the way DeepPRIME works? Whatever the DeepPRIME engineers have done works, and better than any other commercially available workflow that I have tried (several, since I have elected not to pay rent to Adobe).

Hi @wildlifephoto,

Thanks for your feedback about combining DeepPRIME and Topaz Gigapixel. Good to know that they play together nicely, most of the time.

As you already guessed, I cannot go into more detail on DeepPRIME than what we did in the blog post. Training neural networks for denoising has become pretty common over the last years. It’s the exact architecture of the network and the exact way we train it that make all the difference. We invested several man-years of work and prefer to keep it secret, sorry 🙂

However I can repeat what the blog post states: DeepPRIME is combined demosaicking and denoising. Since it is not mentioned in the blog post, you might infer that our Lens Sharpness is applied separately, unchanged with respect to PhotoLab 3, and does not use neural networks.

This might, of course, change some day. But these networks get more and more complex and harder and harder to train. So I cannot promise anything, neither that it will work nor when it will be ready.


Hi @Wolf,

Unlike DeepPRIME, which appears to be “silent” during processing, Topaz Labs Gigapixel AI (TLG) presents terse messages. Based upon those, I have surmised that Topaz Labs (TL) is using a convolutional neural network (CNN, a specialisation of the general neural network, NN), not uncommon for imaging with current technology. For the reader with no familiarity with the issue, a broad-brush article is: https://en.wikipedia.org/wiki/Convolutional_neural_network
Several observations from use. Any such NN depends upon both training and the acumen of the trainers in selecting training samples (proof of convergence when trained on selected samples, rather than the universe, is not established). DxO has a very extensive library of images from which to train a NN, presumably due to the lens evaluation and scoring methodology upon which the DxO optics database used by an application such as PL is based. Topaz Labs presumably does not have as extensive a library.

Thus, if I use a relatively tight crop from DeepPRIME (say, the stamens of a flower), TLG does a better job than my experiments with the Adobe offering. However, if I use a less tight crop, say all of the flowers with background vegetation, and then examine the stamens (recall: DeepPRIME is NOT adding pixels, thus the number of stamen pixels in both test images is the same, as I am exporting as TIFF), TLG adds pixels as promised, but the “clarity” of the stamens is not nearly as good “to the eye” as in the first case, with significant edge and other artifacts. I have seen fewer such problems with DeepPRIME, presumably because of DxO’s more extensive training library compared to TL’s. It was the significant improvement of DeepPRIME over PRIME (as a “one step” workflow “solution”) that prompted me to license PL4E: the less clock time I have to spend on workflow for a result the client “likes”, the better.

One question that may not be a secret: presumably, DxO has tested DeepPRIME in various environments and on various platforms. Excluding Apple, which I am not considering, does DeepPRIME work better on some GPUs than on others, including GPUs with similar overall processing throughput as measured by published benchmarks/specifications? Even if DxO has done these tests but cannot comment, at least there would be an answer. Thank you for your candor. Take care. Stay safe. (Get vaccinated.)
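For readers following the CNN link above, the single building block underneath all of this is just a sliding dot product. A minimal sketch (not Topaz’s or DxO’s actual code, and with a hand-written kernel where a real CNN learns its weights from the training library, which is exactly why the extent of that library matters):

```python
import numpy as np

def conv2d(image, kernel):
    """One 'valid' 2-D convolution: the basic operation a CNN stacks,
    with learned kernels and nonlinearities, many layers deep."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 averaging kernel: a hand-written stand-in for one learned filter.
blur_kernel = np.ones((3, 3)) / 9.0
```

In a trained network, thousands of such kernels are fitted to the training images, so subjects well represented in the library (e.g. the animals in the Topaz advertisements) are reconstructed well, and under-represented ones produce the edge artifacts described above.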

Hi @wildlifephoto,

The DeepPRIME network and its weights are the same for all platforms (Win/Mac) and hardware (any CPU or any GPU). Rounding may differ a bit depending on optimizations in the libraries we rely upon, but that should make no difference in overall quality. The only thing that changes is execution speed.
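The rounding point is easy to demonstrate: IEEE-754 floating-point addition is not associative, so two libraries that merely accumulate the same values in a different order can produce bit-different results (a generic illustration, not DxO’s code):

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one evaluation order
right = a + (b + c)  # the order a differently optimised library might use

assert left != right              # associativity fails under IEEE-754
assert abs(left - right) < 1e-15  # yet the discrepancy is negligible
```

With identical network weights on every platform, such reordering is the only source of numerical divergence, and it is far below anything visible in an image.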


@Wolf, I’m sure you are aware of the recent release of Topaz Denoise AI 3.02. I have owned Denoise AI 2 for occasional use with image file formats that are not supported by DeepPRIME, and took advantage of a free upgrade to version 3.02.

With regard to raw files, my experience was that DeepPRIME output easily bested the Denoise AI 2 low-light setting 100% of the time. I decided to compare raw file output again after installing the latest Denoise 3.02 update and was surprised to find that Denoise 3’s low-light output quality is now much improved and is much closer to DeepPRIME than it was previously.

Denoise 3.02 still suffers from artifacts and some loss of fine detail, and DeepPRIME is still the standard, in my opinion. However, DxO may need to consider tweaking DeepPRIME to ensure it remains the best noise reduction available.

Mark


From a “review” posted (advert?) by DPReview:

Photographer Michael Clark on Adobe Super Resolution: ‘An incredible new tool’

Published Mar 19, 2021 | michaeljayclark
If you have gotten this far, and are still reading this full-on pixel-peeping madness, then you might have realized that this could be the best upgrade to any and every camera ever. This is certainly one of the most incredible features Adobe has ever released in Photoshop.

End excerpt.

Clearly, such a review might reduce use not only of the Topaz product, but of PL4 Elite as well. In my own experiments with the rental Adobe feature described above, Topaz Gigapixel AI does better. I did note that on the subjects Topaz shows in its advertisements (such as animals), the product does well, presumably because the neural network was trained on such images. The results are less uniform, but better than Adobe Super Resolution, on all of the images I was working with (all Gigapixel after DeepPRIME), although the artifacts are visible only at significant “zoom” (whereas the original image lost resolution at that much zoom). If I get a chance, I will start a trial of the Topaz workflow that competes with DeepPRIME. I know that pre-AI, the evaluation trial of Topaz showed that the DxO product that would have been its competitor at that epoch was superior. Once Adobe adopted the rental model, my choice was clear.