Hello,
for its GPU comparison tests, the French website Hardwareand.co uses DxO PhotoLab in its latest version on a sample of 88 photos and measures development time with DeepPRIME XD2s. Thanks to these results, I was able to draw up this graph, which will be very useful when choosing the graphics card best suited to DxO.
The performance of the GPUs (desktop versions) is measured relative to the most powerful and most expensive card, the Nvidia RTX 5090, which serves as the benchmark for both performance and price. For example, a GPU with 25% of its performance will take 4 times as long to process the same photos.
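To make the inverse relationship explicit, here is a minimal sketch (my own illustration, not taken from the article's data):

```python
def relative_time(relative_performance: float) -> float:
    """Time multiplier vs. the reference GPU (RTX 5090 = 1.0).

    Processing time is simply the inverse of relative performance.
    """
    return 1.0 / relative_performance

# A GPU with 25% of the reference performance takes 4x as long:
print(relative_time(0.25))  # 4.0
```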
The price indicated is that of a French online retailer, for graphics cards available at the beginning of June 2025. For older GPUs that are no longer available (grey dots), I have used the manufacturer's launch price (MSRP). It is therefore possible to find them at other retailers, at different prices.
The "performance/price ratio" lines indicate which GPU offers the best value for its price. For example, the "performance/price ratio = 2" line indicates a GPU that is half as expensive for its performance as the Nvidia RTX 5090.
By this measure, the AMD Radeon RX 9060 XT 16GB comes out on top: with a ratio of 3.35, it needs only 65% more time for less than a fifth of the price of the Nvidia RTX 5090. By contrast, the cheapest GPU in this table, the Intel Arc A750, costs just under a tenth of the price of the Nvidia RTX 5090 but needs almost 10 times as much time. Not exactly a bargain. It is better to pay 300 euros for an AMD RX 7600 and already get 33% of the performance, or 340 euros and 41% with the Nvidia RTX 5060.
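As a sanity check on the RX 9060 XT figures above, the ratio can be recomputed from the quoted time and price. The 18% price fraction below is my assumption, chosen to be consistent with "less than a fifth"; it is not a value from the graph:

```python
def perf_price_ratio(time_factor: float, price_fraction: float) -> float:
    """Performance/price ratio relative to the RTX 5090 (= 1.0 on both axes)."""
    relative_performance = 1.0 / time_factor  # more time -> less performance
    return relative_performance / price_fraction

# 65% more processing time at ~18% of the RTX 5090's price (assumed value):
print(round(perf_price_ratio(1.65, 0.181), 2))  # ~3.35
```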
Finally, there seems to be every reason to prefer the latest-generation GPUs (AMD RX 9xxx / Nvidia RTX 5xxx), which are generally better placed than the models they replace.
Excellent analysis!
I often watch Hardware & Co's reviews, and it's true that in PhotoLab the Radeons hold their own against the RTX cards.
Since I'm currently looking to build a new PC, there's a very good chance it'll be with an RX 9060 XT or 9070.
I've been wanting to build a full AMD PC for a long time, and that's confirmed.
The only unknown is the Adrenalin drivers, which don't always get good reviews compared to Nvidia Studio…
User feedback on this would be very interesting.
And especially what settings are optimal in these drivers, given that I don't play games at all…
Fantastic work! I've been contemplating a possible GPU upgrade specifically to improve my DxO workflow performance and was eyeing the 9060 XT 16GB. Looks like it's a winner!
What interests us here is the performance in Photolab.
And the Radeons perform very well in this case!
On the other hand, for someone with other uses, such as video editing, the choice might need to be reconsidered, but that's an area I'm not familiar with.
As I mentioned above, it would be interesting to have user feedback on the best settings for the Adrenalin drivers.
Joaquin
(Canon EOS R, Win11 (R9-5950X/64GB/RTX3060TI), PL8/VP5)
Excellent idea and well done. Kudos!
I was spoilt for choice a couple of months ago when looking to speed up PL denoising, and I wish I'd had this graph back then. However, I'm sure it'll be a big help for others in that situation.
Because the data on Apple's integrated GPUs doesn't exist on the high-tech news site I used. On an Apple machine, you can't change the GPU independently.
For Apple Mx and their integrated GPUs, you can look up their FP32 performance to compare with PC GPUs.
This might give you an indication of the DxO denoising performance you can expect.
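As a rough sketch of that comparison (the function and the linear-scaling assumption are mine, and FP32 throughput is only an indication, as noted above):

```python
def estimated_time_factor(gpu_tflops: float, reference_tflops: float) -> float:
    """Estimated processing-time multiplier vs. a reference GPU,
    assuming denoising time scales inversely with FP32 throughput."""
    return reference_tflops / gpu_tflops

# Illustrative placeholder numbers, not measured values: a chip with
# half the reference FP32 throughput would take roughly 2x as long.
print(estimated_time_factor(10.0, 20.0))  # 2.0
```

Real-world DeepPRIME times will also depend on memory bandwidth and, on Apple Silicon, on whether the Neural Engine is used instead of the GPU, so this is only a first-order estimate.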
Apple Silicon benchmarks would be incredibly useful.
DxO have done a very good job (imo) at optimising DeepPrime for the Apple Neural Engine.
On my (now old!) M1 Mac Mini, image export with DeepPrime is significantly faster when using the NPU compared to the GPU.
However, the Pro and Max SoCs have the same number of NPU cores as the base chip. Only the Ultra has more NPU cores than the other chips of the same generation.
However, as you go up the chip hierarchy (i.e. M1, M1 Pro, M1 Max, M1 Ultra), and from older generations of SoC to newer ones, the number of GPU cores does increase. Somewhere along the chip timeline there must be a tipping point where GPU performance exceeds NPU performance, or vice versa.
In addition, it is unclear (to me at least) how much NPU performance improved from the M1 to the M2 to the M3. However, GPU performance does seem to have improved with each processor generation.
I believe the latest M4 does have a significantly improved NPU though.
So essentially, with an Apple Silicon Mac, the GPU performance isn't the only important performance figure to compare between models.