Your configuration is almost identical to mine, so adding a state-of-the-art graphics card such as the RTX 3070 you mention should have the same positive effect!
Thanks for mentioning it!
Hi leecd, I recently replaced my aging RX480, which worked with DeepPRIME XD, with an ARC A750 running the latest Beta drivers and sadly at the moment this does not work with DeepPRIME XD. The DxO preview window when XD is selected displays either a highly pixelated area or just black and when exported produces an image that is 10% colours and 90% black. I currently have trouble tickets out with both Intel and DxO. Suggestions welcome!
Get a supported card (consider it the price paid for not checking compatibility before buying an Intel dGPU), or wait until it is fixed (put the RX480 back in use for now)…
If you REALLY want the BEST one, take the latest series and the most powerful model in it.
In the NVIDIA world, that is the RTX 4090. (I’m not sure they will produce an RTX 4090 Ti; Ti models are typically about 15-30% faster than the non-Ti ones.)
But think about the form factor (it occupies three PCI slots, plus some space for cooling) and about the power supply needed.
If not the BEST one, choose AT LEAST a 20xx series card. No less.
I found you have to look at the power required. The 30 series needs a lot more than the 40 series, and to be safe you would probably need to change the power supply. You also have the problem of power connectors: many of the newer cards use a different connector from the ‘old’ 6/8-pin one, again probably requiring more changes. Many of the new cards are also much longer, and even wider, than older cards, so check the space in your case. So it’s not just which card is better but a range of things you need to look at.
The lessons I learnt, as it had been many years since I built my own PCs: there are clips that hold the card down, and many CPU coolers make access interesting. My original card, being an old one, was no problem, but to remove the new one I will need a plastic ruler to press the clip. A magnetic screwdriver is invaluable for not losing the retaining screws, as the new cards are much taller as well. Cables will also often be shortened with plastic ties (to minimise restriction to airflow); new cards, being bigger, need a longer cable run to reach their socket without an amazingly tight bend, which is not a good thing. The card I ended up with minimised the need for more power and just fitted in my case without too much trouble (ASUS Dual GeForce RTX™ 4070 OC Edition 12GB GDDR6X | Graphics Card | ASUS UK).
In my humble opinion, the 4090 is complete overkill. The CPU is just as important, as most processing is actually done by the CPU.
For instance, I upgraded my PC. I came from an Intel i5 6400 and an AMD RX480, and started by replacing the motherboard/CPU/memory, switching to an AMD Ryzen 5 7600X while still using the RX480. This alone more than halved the processing time: from 55 s on average to 25 s on average with DeepPRIME XD.
Next I replaced the RX480 with an RTX 4070: average processing time with DeepPRIME XD is now 4 seconds.
An RTX 4090 obviously is a lot more powerful. But is it worth investing €300-400 more than the 4070 just to save maybe 1 or 2 seconds?
If it’s used solely for denoising, I think so too.
But the OP asked in the title which is the BEST one. And that is this one.
However, a test that could be done (not by me) is using this card with a very recent and fast motherboard and CPU, very fast SSD and RAM, and seeing how many images can be processed at the same time (thanks to the large amount of memory on this card, once all other bottlenecks are reduced).
Luckily someone already tested that. There’s a spreadsheet available with a defined set of images to benchmark PhotoLab.
There is an RTX 4090 entry paired with a very potent Intel i9 13900. That user processes the batch of five D850 images in 13 seconds with DP XD. The same user also benchmarked the AMD 7900XTX (15 s) and the RTX 4080 (17 s). This should give an idea.
My own machine processes these images in 25s.
So, in a nutshell:
- 4090: 13 s (2.6 s per image)
- 7900XTX: 15 s (3 s per image)
- 4080: 17 s (3.4 s per image)
- 4070 (with a less potent CPU: 6 cores vs. 24 cores): 25 s (5 s per image)
Now, the last entry is not a very good comparison. Single-core performance of the i9 13900K is already quite a bit higher, and due to its sheer number of cores its multicore performance is 2.5 times as high.
It’s up to the user to determine whether the 4090 is worth the extra money.
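For what it’s worth, the per-image figures and the cost question above come down to simple arithmetic. Here is a small Python sketch of it; note the €350 figure is just an assumed midpoint of the €300-400 premium quoted above, and the 1000-image workload is hypothetical:

```python
# Rough cost/benefit arithmetic for the benchmark numbers quoted above.
# Batch times are for the 5-image D850 set processed with DeepPRIME XD.

BATCH_SIZE = 5  # images in the benchmark set

batch_times = {          # seconds to export the whole batch
    "RTX 4090": 13,
    "RX 7900XTX": 15,
    "RTX 4080": 17,
    "RTX 4070": 25,      # note: paired with a much slower CPU
}

per_image = {gpu: t / BATCH_SIZE for gpu, t in batch_times.items()}
print(per_image)  # RTX 4090 -> 2.6 s/image, RTX 4070 -> 5.0 s/image

# Seconds saved per image by the 4090 over the 4070, and a rough
# euro-per-second-saved figure for a hypothetical 1000-image workload:
saved_per_image = per_image["RTX 4070"] - per_image["RTX 4090"]
extra_cost_eur = 350  # assumed midpoint of the quoted €300-400 premium
print(f"{saved_per_image:.1f} s saved per image")
print(f"{extra_cost_eur / (saved_per_image * 1000):.3f} €/s saved over 1000 images")
```

Of course this ignores the different CPUs behind the 4070 and 4090 entries, so treat it as an upper bound on what the GPU alone buys you.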
How many maximum simultaneous images were set in this test? (in the Preferences/Performance/Display_and_process tab)
This was my question: is a setup like this one able to run more simultaneous images and therefore exploit the large memory capacity of 24 GB cards (if all possible bottlenecks are reduced to a minimum: drive speed, CPU speed, RAM speed, PCI Express speed, etc.)?
I think a test of 5 images isn’t enough to tell whether a combo like this takes advantage of processing more simultaneous images…
More threads/simultaneous images just because you have more cores isn’t always faster. It’s not a linear thing.
Others on the forum already tried that. Maybe not with a beast such as the i9 13900K, but with a 10-core processor. This proves that just because you have more cores, you can’t simply raise the number of simultaneous images.
I know all this.
My question is: have any tests been done with that preference setting changed? The one that deals with this value.
If you read the link you’ll see that at least someone did it.
The purpose of the sheet was not to test the performance of PL to that extent. The purpose of the benchmark was to get a general idea of the performance of a CPU/GPU combination with a somewhat predefined setup: at least the same images with the same processing settings.
Whether everyone used the same number of threads is unknown to me, but apparently, according to the thread I linked to, that is rather irrelevant anyway, as more threads ≠ faster processing times.
I don’t see exactly what the specs of the machine used in the link are.
It seems it is a Mac? And another person talks about an 8-core PC?
Anyway, for computations of a few seconds, PCI Express (or something else related to the speed of feeding the graphics card) is probably the bottleneck, even with a top workstation.
And the 4090 probably can’t run at 100% for denoising.
Thanks for the update and am sorry to hear about the ARC A750 GPU card issues as I’m sure they are not fun.
I ended up replacing my graphics card with an AMD RX 6650 XT. Processing time on the Olympus 20 megapixel files has gone from 24 to 6 seconds. Not too bad for the price.
I could probably get faster speeds by also upgrading the motherboard, going from PCIe 3 to PCIe 4 and also from SATA SSDs to NVMe SSDs as well. However, for what I do, I’m happy with the current processing speeds.
After much reading (in this forum, dedicated PC component sites, computer magazines) and fierce budget fights, I decided on a GeForce RTX 3060 Ti card. Not fully state of the art, but a good price/performance ratio and, evidently, good experience among DxO PL users in this forum too. There was enough reserve in my 500 W power supply, and it has 2×8-pin connectors for graphics cards, hence no problem there.
Measurements for PL DP XD (using Studio drivers 536.67 as of 07/18/2023): 10 Canon CR3 35MB photos on SSD: 54 seconds
~5% of the “CPU only” time of 1,100 seconds - highly impressive … almost too good to be true …
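Those two numbers imply roughly a 20× speedup; a couple of lines of Python make the check explicit (figures taken from the measurements above):

```python
# GPU vs. CPU-only comparison for the 10-image Canon CR3 batch above.
gpu_seconds = 54     # RTX 3060 Ti with DeepPRIME XD
cpu_seconds = 1100   # "CPU only" run of the same batch

speedup = cpu_seconds / gpu_seconds
fraction = gpu_seconds / cpu_seconds
print(f"{speedup:.1f}x faster")           # 20.4x faster
print(f"{fraction:.1%} of the CPU time")  # 4.9% of the CPU time
```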
I was a bit sceptical about the graphics card fan noise, but during normal operation it’s not audible; during DP XD processing the fan spins up but isn’t too noisy, totally OK.
So more than happy with that …
For anyone interested, here’s the link to the DxO processing times spreadsheet: DxO DeepPRIME Processing Times - Google Sheets
The spreadsheet doesn’t ask about preferences settings. I suspect in most cases users have the defaults.
Is there one image and one .dop to download to run the test?
Or is everyone doing this with different settings and images ?
The top of the spreadsheet provides links for the test images and simple instructions. No .dop, so:
Image processing times with PhotoLab. All images use the DxO Standard preset plus DeepPRIME or DeepPRIME XD (6.x) noise reduction. Export to JPEG without resizing.
I created this spreadsheet back when DxO 4.0 was current, but it’s been added to since.
Running current video device drivers is as important as good hardware.
Earlier this year NVIDIA, followed by AMD and now Intel, made massive internal improvements to their device drivers that resulted in 30-40% improvement in image processing times on the same hardware.
Intel, as users here have found out, are a bit behind the curve in delivering device drivers that take full advantage of their new Arc/Iris GPU hardware.
I fitted an ASUS Dual GeForce RTX 4070 12GB GDDR6X, and exporting a Sony ARW 25 MB image takes 4 seconds with DeepPRIME. The power demand is less than 100 W even when doing a batch, which is useful as it avoided the power supply upgrade that earlier cards would have needed.