First timer hardware question

Hi. First time here. I have been a long-time user of Nikon DSLRs and DxO Optics Pro 11 Elite. I had tried an early version of PhotoLab but didn’t find it a big improvement over DOP 11.

I recently tried PL6 with its DeepPRIME feature. It is a big leap. However, my PC doesn’t handle it well. There is no point in me buying PL6 until I upgrade my hardware.

I am not looking for any brand recommendation. I am trying to understand which of the following processor combinations will perform better for DeepPRIME processing, both in laptop configurations, not desktop.

  1. Apple M2 Pro chip with 10-core CPU, 16-core GPU, 16-core NE and 16GB RAM.

  2. Intel i9 13950hx with RTX 4070 GPU and 32GB RAM.

Thanks.
Satyaa

Interesting question :slight_smile:

There is a spreadsheet used on this forum where people can add their benchmark results. The benchmark consists of a couple of defined images which need to be processed using defined settings. You can find that spreadsheet here: DxO DeepPRIME Processing Times - Google Sheets

Please note: most of the benchmark results will be using the desktop variant of a GPU. The performance of a mobile GPU could/will be lower than the desktop variant. I assumed the use of desktop GPUs in my comparison.

Unfortunately, neither the M2 Pro nor the RTX 4070 is listed (yet). There are a couple of M1 Pro entries, so I have to guesstimate here.

The RTX 4070 (non-Ti) is roughly as fast as the RTX 3080 (non-Ti), so it is fairly safe to look at the RTX 3080. But the 3080 entry (row 25) used PL4, so it’s not fair to compare against PL6, as the algorithm may have been optimized since then.

Otherwise we can compare against the RTX 3070 plus a 20% performance boost: the RTX 4070 is about 20% to 30% faster than the RTX 3070 in benchmarks. On row 154 you’ll find a user with an RTX 3070 processing the D850 image set in 30 seconds. Performance for the RTX 4070 would then be estimated at about 24 s for the D850 image set.

Now for the M2 Pro. According to benchmarks I found, it’s about 20% faster than the M1 Pro. On row 173 of the sheet you’ll find benchmark results for the M1 Pro using the GPU; row 175 shows results using the Neural Engine.

With the GPU, the D850 set was processed in 102 seconds. Improve that by 20% and you get to about 82 s.
With the Neural Engine, the set was processed in 43 s. Improved by 20%, this would be about 35 s.
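
To make that arithmetic explicit, here is a minimal Python sketch of the scaling above. The 20% speedups are guesstimates from third-party benchmarks, not DeepPRIME measurements, so treat the output as a rough indication only:

```python
# Scale a known benchmark time by an assumed speedup, the way the
# estimates above were made. The 20% figures are guesstimates from
# generic benchmarks, not DeepPRIME measurements.

def estimate_seconds(baseline_s: float, assumed_speedup: float) -> float:
    """Estimated time on hardware assumed to be `assumed_speedup` faster."""
    return baseline_s * (1.0 - assumed_speedup)

# D850 set, baseline times from the spreadsheet (seconds):
print(estimate_seconds(30, 0.20))   # RTX 3070 -> RTX 4070: 24.0 s
print(estimate_seconds(102, 0.20))  # M1 Pro GPU -> M2 Pro GPU: ~82 s
print(estimate_seconds(43, 0.20))   # M1 Pro NE -> M2 Pro NE: ~34-35 s
```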

So, based on this guesstimate, the RTX 4070 would be faster. But this is of course all theoretical; I could be off by quite a bit :slight_smile:

I think that performance will be quite similar.

It’s also worth keeping in mind: any performance comparison is a snapshot of one user’s situation (which kind of RAW, which size of files, which export parameters). I would almost bet that another user with the “same” hardware and “same” workflow could see different results. Next to that, PL evolves and so do hardware manufacturers. I had bugs in some apps until Apple came out with another OS version, and suddenly some issues were gone.

So, it’s never a complete comparison. I would prefer the system I’m most happy with and care less about benchmarks; they will change with new versions of all the components and workflows involved.


Although you’re right, the benchmarks in the spreadsheet use the same RAW files and the same workflow: all benchmarkers should load the same preset (DxO Standard) and export using the same settings. That way the same hardware should return similar results.

But as you already mentioned, there’s more to it than just PL settings. Drivers, OS settings, even ambient temperature can all have consequences for performance; the current high temperatures cost me about 1.5% CPU performance, for instance. But by defining a standard set of images to be processed in a set workflow, benchmarks can be compared quite well. It should at least give you an idea of the performance.

But always make sure you compare apples with apples: I upgraded my computer from an i5-6400 to a Ryzen 7600X using the same GPU (for now). Processing time of an image almost halved because of the much faster CPU. It would be silly to compare my modern CPU against an old one, or vice versa.

@RvL your statement is correct, but that means all the hardware, CPU and GPU. I have written numerous times that there should be an additional test of the time taken to render the image without any noise reduction, which would allow the largest CPU element of the export process to be separated out of the timings!

There is still CPU involvement in the DP and DP XD processes, but the largest element of those operations is down to the power (or otherwise) of the GPU.
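
For what it’s worth, here is a hypothetical sketch of the subtraction that test would enable; the numbers are placeholders, not measurements:

```python
# Hypothetical sketch of the proposed test: export the same batch twice,
# once with noise reduction off and once with DeepPRIME (or DP XD), then
# subtract. The times below are PLACEHOLDERS, not measurements.

no_nr_s = 18.0        # export time with NR disabled (mostly CPU work)
deepprime_s = 30.0    # export time with DeepPRIME enabled

gpu_nr_s = deepprime_s - no_nr_s
print(f"Approx. GPU (NR) contribution: {gpu_nr_s:.1f} s")  # 12.0 s
# Only an approximation: some CPU and GPU work overlaps during export,
# so the subtraction can overstate the GPU-only share.
```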

So for normal work in DxPL the CPU is a key element in getting the image rendered and on the display as editing takes place; the GPU connected to the monitor displaying the PhotoLab workspace plays a minor part, according to my tests!

In fact, in my case it plays no part unless I move the PhotoLab display to one of my 1920 x 1200 outboard displays. I have three connections to three monitors: one 2560 x 1440 display from the onboard GPU, one 1920 x 1200 display also from the onboard GPU, and the other from the discrete graphics card.

At no point does the discrete GPU enter into the real processing of the image on a Windows machine until the image is exported with DP or DP XD.

So in my opinion the Google worksheet is not completely useless, but it is flawed, since all the times shown include contributions from both the CPU and the GPU!

Hence, I have been showing tables like this (comparing the times from my Grandson’s machine and my Son’s machine with the machine I finally opted to build, albeit with a motherboard that could take a Ryzen 5900X or 5950X for an additional £250-£300)

that seek to show which elements use which processor (CPU or GPU) in the export process.

I believe that for the export process alone anything from an RTX 3060 upwards offers a good return on investment, but the further you go above the RTX 3060, the less additional “bang for your buck” you seem to get. For example, the 2080 cost more than double (treble!?) the 2060 when my Son built his and then my Grandson’s machine, but for NR processing (DP and DP XD) it does not show a proportionate return on investment!

However, every second saved represents 1,000 seconds if you are frequently processing and exporting 1,000 images!?

Looking at the machine you quote the passmark is

and at a guess you are looking at

or something similar, which likely exceeds my Son’s machine shown in the table above (the 3950X with an RTX 2080). Please remember, as stated in another post above, that a laptop’s GPU is slower than the same “model” in a desktop machine; a figure of 20% slower is often quoted.
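
To fold that penalty into the earlier desktop guesstimate (a rough sketch; both numbers are guesses from this thread, not measurements):

```python
# Apply the commonly quoted ~20% mobile-GPU penalty to the earlier
# desktop RTX 4070 guesstimate for the D850 set. Both numbers are
# guesses from this thread, not measurements.
desktop_estimate_s = 24.0
mobile_penalty = 0.20  # "20% slower is often quoted"
laptop_estimate_s = desktop_estimate_s / (1.0 - mobile_penalty)
print(f"Laptop RTX 4070 guesstimate: ~{laptop_estimate_s:.0f} s")  # ~30 s
```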

I hope this helps!

You’re of course right; that’s why I said you should compare apples with apples. A Ryzen 7600X with an RX 480 will be faster than an old i5-6400 with the same RX 480, simply because everything that is CPU-dependent is handled a bit faster. I just did that comparison: 64 s with the Ryzen and 73 s with the i5-6400.

This is also the hardest part about benchmarking. There are so many factors to take into account :slight_smile:

Thank you for pointing me to that spreadsheet and for your comparative analysis with other models that are close. I understand it is an approximation / best guess, but it gives me something to start with.

For example, the two rows comparing the M1 Pro’s NE vs its GPU are also helpful in understanding the relative performance of those components.

For my own stats, I use an 8-year-old desktop with a 4th-gen i7 processor and a built-in Intel GPU. I processed a batch of 100 raw files from a GH5M2 and another 100 from a D810. PL does not use my built-in GPU; it only uses the CPU.

The GH5M2 files take about a minute each with DeepPRIME, while the D810 files take almost two minutes each! I did not try XD with my desktop.

A MacBook Pro with M1 Pro and 16GB RAM takes about 3 seconds each for the GH5M2 files and 7 seconds each for the D810 files. With XD, the GH5M2 files take about 20 seconds each.
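
For a rough sense of the gap, here is the arithmetic on my approximate per-file times (numbers rounded from the figures above):

```python
# Rough per-file speedups from my approximate times above (DeepPRIME).
old_desktop_s = {"GH5M2": 60.0, "D810": 120.0}  # 4th-gen i7, CPU only
m1_pro_s = {"GH5M2": 3.0, "D810": 7.0}          # MacBook Pro M1 Pro

for camera, old in old_desktop_s.items():
    print(f"{camera}: ~{old / m1_pro_s[camera]:.0f}x faster on the M1 Pro")
# GH5M2: ~20x faster; D810: ~17x faster
```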

I prefer a laptop to a new desktop, hence my search for performance comparisons. I understand that not only are Intel’s laptop processors slower than their desktop counterparts, but a laptop also heats up faster and throttles performance. On the other hand, there are gaming laptops claiming to use desktop processors and to deliver desktop performance; they are also beyond my budget. I have no concrete information.

Apple uses the same M1/M2 chips in both MacBooks and Mac minis as far as I know. So, performance should not be different, unless I missed something.

Thanks.
Satyaa

If you go the Apple product line, keep in mind that what matters is not when you buy the product but when it was released. The M1, for example, launched in 2020, so buying an M1 machine now means the hardware is already 3 years old, and you basically have 4 years before Apple no longer supports it (roughly 7 years of support, then no more OS upgrades). I made that mistake buying my iMac, which was already 2 years old (hardware-wise) when I bought it; it’s now technically 10 years old even though I got it 8 years ago, with no more support or upgrades from Apple.

M1 machines are cheaper today (all discounted) because this year Apple released the M2 MacBook Pro (January) and, just last week, the 15" M2 MacBook Air, the new M2 Mac mini and the Mac Pro (expensive). I got the new 32 GB M2 MacBook Pro and just ordered the new 15" M2 MacBook Air for my wife. The difference between M1 and M2 is significant, and compared to the old Intel Macs… the speed and performance are not worth comparing.

Just something to keep in mind… These are export times only, and with DeepPRIME enabled.

These are almost completely based on GPU performance. CPU performance does not affect export performance that much (basically not at all if you allow PhotoLab to process two images simultaneously) when you’re exporting DeepPRIME images: the CPU is almost always waiting for the GPU to finish noise reduction.
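
As a sketch of why that is (a toy model, not how PhotoLab’s scheduler actually works): once two exports run side by side, the CPU stage overlaps the GPU stage, and the batch time approaches whatever the slower stage needs:

```python
# Toy pipeline model (an illustration, not PhotoLab's actual scheduler):
# each image needs CPU work (render) and GPU work (DeepPRIME). With
# images in flight, the CPU stage overlaps the GPU stage, so the total
# approaches n_images * max(cpu_s, gpu_s).

def batch_seconds(n_images: int, cpu_s: float, gpu_s: float,
                  simultaneous: int) -> float:
    if simultaneous <= 1:
        return n_images * (cpu_s + gpu_s)        # stages run back to back
    return n_images * max(cpu_s, gpu_s) + min(cpu_s, gpu_s)  # overlapped

# Placeholder per-image times (seconds), just to show the shape:
print(batch_seconds(100, cpu_s=2.0, gpu_s=5.0, simultaneous=1))  # 700.0
print(batch_seconds(100, cpu_s=2.0, gpu_s=5.0, simultaneous=2))  # 502.0
```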

While it’s certainly true that GPU performance is important, CPU performance also matters because it directly affects your editing speed.

Use the performance comparisons as you wish, but realize they have no bearing whatsoever on how long it takes PhotoLab to render on screen after you make an image adjustment.

I used to be all about DeepPRIME export performance. But the reality is that I edit 100-400 images, export them in a batch, and then put the laptop down. Whether it takes 10 or 15 minutes to export doesn’t really impact me that much. But if I was editing for 3 hours versus 2.5 hours, well, that’s a big difference, and the GPU doesn’t help me there. A decent GPU is essential to avoid hours-long export times… but once exports are “fast enough”, the CPU deserves the focus, to speed up your editing time.
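
To put that trade-off in made-up but representative numbers:

```python
# Invented session times to illustrate the trade-off above.
slow_gpu_session = 3.0 * 3600 + 10 * 60   # 3 h editing + 10 min export
fast_cpu_session = 2.5 * 3600 + 15 * 60   # 2.5 h editing + 15 min export

saved = slow_gpu_session - fast_cpu_session
print(f"Faster editing saves ~{saved / 60:.0f} minutes per session")  # ~25
```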

Thanks, @mikerofoto and @MikeR
I will keep those points in mind when I make a decision.

Assuming that the OP uses DxO PL only (and not anything else), and that DxO does not shift more of the processing onto the GPU (they might) next year with DxO PL7… buying a notebook specifically requires some forward thinking about the GPU side (more so on a Mac, where you can’t replace anything, though in most cases you can’t do this with Wintel notebooks either).

I have an ASUS TUF Gaming GeForce RTX 4070 12GB GDDR6X OC Edition.

Batch speed is ~40 × ~50 MP GFX50S RAF raw files (to 16-bit TIFF, 4 simultaneous processes in DxO PL6) = ~11 raw files a minute = ~5.5 s per 50 MP raw file.
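
For anyone who wants to check the arithmetic:

```python
# Sanity check of the throughput figures above.
files = 40
files_per_minute = 11            # ~11 raw files a minute
print(60 / files_per_minute)     # ~5.45 s per 50 MP raw file
print(files / files_per_minute)  # ~3.6 minutes for the 40-file batch
```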

PS: that GPU draws ~200 W while doing this, which a GPU in a notebook can’t, unless it is an external GPU or you have a really big desktop-replacement-class-chassis notebook.

@noname A lot of things could change in the future, but as @RvL reiterated, exporting images is “only” part of the job. And as others have discovered in the past, with laptops that have fast processors but no fast GPU, exporting using DP or DP XD will be a major disappointment.

If you look at my spreadsheet you will see that the speed gains of the 2080 versus the 2060/3060 are not in line with the price difference, i.e. the “law of diminishing returns” has set in.

If unlimited funds are available, then buy the fastest CPU (for the CPU part of the export process and for normal browsing, the continual rendering and re-rendering while editing, database updates, etc.) coupled with the fastest GPU; otherwise, balance the hardware against the budget and against the type and quantity of work to be undertaken.

Although I can replace components on my desktop systems, I am unlikely to spend what I have already spent plus hundreds more for the next-gen GPU model. A desktop system provides that option, but financial constraints may well prevent it. In my case, my wife has an old system of mine, and there is the possibility of replacing my 5600G processor with a 5900X without “wasting” the original investment, by using it as part of a build for my wife (not quite as “cynical” as it sounds)!?

In my spreadsheet the times for the ‘BHT’ and ‘Golf Course’ images are for 20 MP RAWs (from a Lumix G9) exported as 100% JPGs. The 3060 I bought has dropped to £260, a 4060 Ti is about £500, a 4070 about £600, and a 4080 costs what the 2080 cost my Son back in late 2019, i.e. just under £1,200.
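
As a hypothetical way to put numbers on the “diminishing returns” point (only the prices are the ones quoted above; the batch times are placeholders I made up for illustration):

```python
# Hypothetical "bang for your buck" comparison. The prices (GBP) are the
# ones quoted above; the batch times are PLACEHOLDERS, not measurements.
cards = {
    "RTX 3060":    {"price": 260,  "batch_s": 100.0},  # baseline
    "RTX 4060 Ti": {"price": 500,  "batch_s": 80.0},   # placeholder
    "RTX 4070":    {"price": 600,  "batch_s": 75.0},   # placeholder
    "RTX 4080":    {"price": 1200, "batch_s": 65.0},   # placeholder
}

base = cards["RTX 3060"]
for name, card in cards.items():
    saved_s = base["batch_s"] - card["batch_s"]
    extra_gbp = card["price"] - base["price"]
    if saved_s > 0:
        print(f"{name}: £{extra_gbp / saved_s:.2f} per second saved")
# With these placeholder times, £/second-saved climbs with each tier,
# which is the "law of diminishing returns" described above.
```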

With respect to the power consumption of fast graphics cards and the power and heat (and throttling) of powerful CPUs in a laptop package I have no experience.

PS:- You didn’t state the CPU being used nor whether the exports were DP or DP XD.

PPS:- If you upload an image and a “typical” DOP, I will create a batch (exporting 16-bit TIFFs and then 100% JPGs) and run it through my 3060 and 5600G, to see how that compares with a graphics card costing over twice what the 3060 now costs (plus whatever the processor cost)!

DP XD, of course. Why would somebody use anything else, unless there are bugs or they don’t have a proper GPU?!

And the CPU is old: an i7-9700K from 2018. One of the benefits of not using a notebook (with some exceptions, because notebooks with replaceable GPU cards do exist) is that you can stick a modern GPU in an old motherboard (the original GPU in there was from the same era)… Having a ~6-year-old CPU has no material effect on interactive editing in DxO PL6, by the way.

I simply duplicated the Egypt raw file (which everybody has access to) 40 times :slight_smile: so you can repeat the process easily…

XD sometimes overdoes it. That’s why. Occasionally it creates fake detail that is unnatural and looks wrong. It’s always DeepPRIME for me unless something specific needs heavier NR and I can check the output carefully…

Anyway that’s a topic for a different thread.

@noname It may be 6 years old, but it has nearly twice the PassMark of my i7-4790Ks and is only a bit short of my newer 5600G!? I noticed the difference when I moved from the i7s to test on my Son’s and Grandson’s machines; they were a lot more “snappy”!?

I created a batch of 50 of the Egypt and Nikon images but can’t find where I “stashed” them, and I don’t appear to have bothered to add them to the spreadsheet, so maybe it was a plan I got “bored” with!?

I found the batches “hiding” on an unmounted SSD! I will find time later and test a batch of 40.

That is why there are sliders to control it; it is not like mindless PureRAW batch processing.

BTW… I still have an old working notebook with an i7-4810MQ from 2014 and an NVIDIA GTX 870M dGPU (not sure if that GPU will be usable)… I think I shall fire it up and see how DxO PL6 works on it. It was never used for DxO PL6, but ACR and Capture One did not have any interactive UI problems back in the day with raws up to 42 MP from an A7R II (again, not talking about any AI stuff, which was absent back then, just how responsive editing was in the UI)…

Exactly. And that’s exactly the reason not to use it on everything. When I’m working on a gallery of 500 images for a client, I’m going to use DeepPRIME (not XD) and set-and-forget. If there are one or two specific images that I want to get the most out of, I’ll use XD and tweak it until I find the balance…

XD isn’t for everything. At least not for me. That’s my point.
Anyway, this is off topic, so I’m gonna let this go from here…