First timer hardware question

[quote=“MikeR, post:20, topic:33535”]
**Exactly. And that’s exactly the reason not to use it on everything.**
[/quote]

And that is exactly the reason to use it.

[quote=“MikeR, post:20, topic:33535”]
**When I’m working on a gallery of 500 images for a client I’m going to use deepprime (not XD) and set-and-forget.**
[/quote]

I see - you are submitting 500 very, very low-light images (think camera metering auto-ISO at 256K+++ for a given exposure :slight_smile: … is it a private-eye assignment catching an unfaithful spouse in the middle of his/her night out?). Otherwise there will be no harm / artefacts with a tuned-down DPXD.

In any case, our use cases & mileage differ … I simply never push the DPXD sliders into the "plastic fantastic" artefact territory.

DPXD at its defaults can create artefacts at ISO 6400 and above. Either way, you will trust it too much and then occasionally get burned by weird-looking faces in dim light.

And I use DeepPRIME on everything… even ISO 100 images. Set and forget. It’s easier than worrying about which images need it and which ones don’t. Anyway, now for real, I’m out.


We can agree to disagree - I simply do not use the defaults…

@noname I did some tests and also researched your “old” processor, and its performance is not particularly bad for its age! The tests were conducted on my old i7-4790K with a GTX 1050 Ti and on my Ryzen 5600G with an RTX 3060, and the figures are

I made certain “errors”, namely the i7 was set for 3 simultaneous processes instead of 2 and the 5600G to 2 instead of 3 or 4, so I repeated the 5600G tests a number of times.

I set up separate groups for testing - NO NR, DP and DP XD - and included a dummy group of 50 images before the first NO NR test!

I “introduce” all the directories of images to be tested to the DxPL database before any exporting, and wait until everything settles down before starting the tests one after the other while the first “dummy” group is underway, creating a queue that can then be left processing while I get on with my life!

This is the output from the final test with 3 export sessions:

(screenshot: 2023-06-13_193615)

The i7 tests were run with the images on a slowish (by NVMe standards) NVMe drive, with the exports sent to C:, a SATA SSD. On the 5600G the inputs came from a SATA SSD connected via a USB 3 adapter and the exports went to C:, a SATA SSD!

My issues are that your results with 4 export streams of DP XD show

The best figures I achieve with NO NR are 5.05 seconds per image with 3 simultaneous processes, and 11.625 seconds per image with DP XD!?
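For anyone wanting to turn those per-image times into whole-batch estimates, here is a quick back-of-the-envelope sketch (my own arithmetic; the 499-image count matches the Egypt batch discussed below):

```python
# Convert a seconds-per-image figure into total batch time in minutes.
def batch_minutes(seconds_per_image, n_images):
    return seconds_per_image * n_images / 60

print(round(batch_minutes(5.05, 499), 1))    # NO NR  -> 42.0 minutes
print(round(batch_minutes(11.625, 499), 1))  # DP XD  -> 96.7 minutes
```

So at these rates a DP XD run of the same batch takes a bit more than twice as long as a NO NR run.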

The figures with my son’s 3950X and an “old” RTX 2080 for the old benchmarks were

with 2 threads, pulling the images from and exporting back to a USB 3 SATA SSD.

It is certainly possible that the Intel architecture is better suited to PhotoLab processing, but the Ryzen systems are taking as long to process the images without noise reduction as you show for processing with DP XD?

My biggest concern with my own tests is the DP XD figure for the Egypt batch of 499 images with 4 concurrent exports!? I re-ran the test and the result differed by only 1 second!!? The figure is way out of line with the 2 and 3 concurrent-process runs for some reason!?

btw, yesterday I repeated the tests with 50x 40 MP X-Trans CFA raws + DPXD NR, and for my own PC configuration (8-core x 8-thread CPU + RTX 4070 GPU) 3 parallel processes were the best (by a small margin though) vs 2 or 4: 2 or 4 delivered ~12-13 raw files per minute, while 3 managed ~14 files per minute … BUT it should be accounted for in tests, because an extra 1-1.5 raw files a minute is not zero - so changing the default/suggested value in DxO PL6 and testing is surely advised for heavy batch-processing users - there might be gains to squeeze out.
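To put that margin in context, a rough sketch of what the extra throughput means over a large batch (the 500-image batch size is my hypothetical example, not a figure from the test):

```python
# Export time for a 500-image batch at the two measured throughputs.
batch = 500
for files_per_min in (13, 14):
    print(files_per_min, "files/min ->", round(batch / files_per_min, 1), "minutes")
```

That works out to roughly 38.5 vs 35.7 minutes, i.e. close to three minutes saved per 500 images just by picking the better process count.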

That is the way it was… if your concern is that the numbers are too fast - I removed the HDD/SSD effect by using a RAM disk - I have plenty of memory and can put all the raws and output files there… that way it is the purest GPU / CPU test… I think that 4 parallel processes was wrong and I need to use 3 in my case… I shall try to redo the test with the Egypt raws and 3 parallel processes (yesterday I tested with 40 MP X-Trans).

But the speed was more or less the same, per minute: 14 x 40 MP X-Trans (3 parallel) ~= 11 x 50 MP Bayer (4 parallel).
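Normalising both batches to megapixels per minute makes that equivalence explicit (my sanity check of the numbers above, not the original poster's):

```python
# 14 files/min at 40 MP (3 parallel) vs 11 files/min at 50 MP (4 parallel).
xtrans_mp_per_min = 14 * 40  # = 560 MP/min
bayer_mp_per_min = 11 * 50   # = 550 MP/min
print(xtrans_mp_per_min, bayer_mp_per_min)  # within ~2% of each other
```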

Understood. I was not aware of this.
I often read on forums people saying that their 10-year-old MacBook is running great, but I guess it would not have the latest security updates, as you mentioned.

My oldest PC is 8 years old (the desktop I am trying to replace) and it does not qualify for Windows 11. It does get Windows 10 security updates frequently.

MS support is based on the Windows version and not the hardware (obvious, because somebody else makes the hardware, except for their limited Surface models). In these 8 years my PC went from Windows 7 to 8 to 10.

According to this page (Windows 10 Home and Pro - Microsoft Lifecycle | Microsoft Learn) MS says: “Windows 10 will reach end of support on October 14, 2025. The current version, 22H2, will be the final version of Windows 10, and all editions will remain in support with monthly security update releases through that date. Existing LTSC releases will continue to receive updates beyond that date based on their specific lifecycles.”

There is no retirement date specified for Windows 11 yet, but if it gets at least ten years of support like Windows 10, that would run through mid-2031.

Understood.
I did not realize that because my current bottleneck is export times.
As I pointed out in another post above, my PC takes a minute for each 20 MP file with DP in PL6. In comparison, it takes two or three seconds to render the images for editing. While it is rendering the detail, I am thinking about framing, cropping or other things to do.

Is there a DxO article (or maybe a member post here) that explains how PL uses the CPU vs. the GPU? That may help me find the balance between those two processors.

As @BHAYT pointed out, I’d not want to spend all the money on just one of those components. The key would be to find a balance between the two.

Thanks.

I know you were responding to @RvL but I thought I’d add my info as well because the model I am considering also has a similar CPU but more RAM and different GPU.

Thanks.

There is some explanation of how the GPU is used for DeepPRIME on the DeepPRIME marketing page on DxO…

But the reality is very simple. The GPU is only used for the noise reduction when DeepPRIME or DeepPRIME XD is selected. EVERYTHING else is done with the CPU only, including all export processing after debayer/noise reduction. That’s it. If your GPU is really weak, DeepPRIME and DeepPRIME XD can also be processed on the CPU at much slower rates.

Quite simple

The GPU will be quite fast for DeepPRIME. My GPU is marginally slower and runs DeepPRIME on 50 MP images in about 7 seconds (XD in about 20 seconds).

The extra RAM will go unused. PhotoLab doesn’t need it and won’t benefit from it. But it’s nice to have for other things. (16 GB is enough for PL.)

This should run PhotoLab nicely.

If you go into settings you will see that OpenCL is a separate checkbox related to “Display and Process”, separate from DeepPRIME acceleration… if a relatively modern GPU is present, then it is logical to assume that OpenCL will use it rather than being emulated on the CPU.

I have never seen any performance impact, and I have a powerful GPU. It’s possible this only affects the graphics viewport. I’ve never seen any substantial GPU usage during normal editing that would suggest any substantial work is being offloaded to the GPU here.

I am not arguing with that - but do we think it is a fake control, there for pretense only?

PS: I have a notebook with a GTX 870M, which is not a properly supported GPU, being old; switching OpenCL ON hangs DxO PL6 (not immediately, but once you start working with a raw file) - so I think that is a sign it is indeed using the GPU :slight_smile:

That’s a good summary. Thanks.

This helps, thank you.

What is your current platform? Mac or Win? What size should your screen have and will you carry your device around and use it in the field? Any legacy apps that you need to consider?

What I read from other posts/threads is that both should perform well enough. Therefore, other criteria could be important too.

Processing (= export) times depend on image size, whatever other settings have been applied in the image customizing, and the number of parallel exports. But anyway, both of your candidates seem to be worth your consideration.

Thank you for the confirmation on those two configurations I asked about. @MikeR clarified above on the usage of CPU and GPU by PL. That was helpful for me to plan for a balance between the two.

Currently, I use an 8-year-old Windows desktop PC with a 4th-gen i7 processor and 16 GB RAM. It has a built-in Intel GPU, but PL doesn’t use it.

Portability is not a primary goal, because I have a lighter laptop (2-in-1) for general use, but if I could get good performance in a laptop, that is tempting! None of my laptops can process photos at the speed of my “old” desktop.

I use both Nikon editing software and DxO Optics Pro for DSLR images. For new mirrorless camera photos, I plan to use PhotoLab 6 (or 7 by the time I get the new machine).

I have no legacy apps that are a MUST going forward. I play around with open-source applications and source code, which I should be able to do on any platform, including my old Linux laptops.

The image sizes I commonly deal with are 25 MP (GH6), 20 MP (GH5M2), 24 MP (D7200) and 36 MP (D810). When I take a 100 MP hi-res image on the GH6, I see PL taking significantly longer to process that file compared to a standard 25 MP file.

Thanks.

If you really consider adding a Mac to your equipment, you might also consider the iMac. It is easily fast enough and has a really nice screen… but still an M1 SoC. Supposing that an update to new chips is imminent, the timing might be just right.

For Mac stuff, you can read/check Macworld.

They are pretty good with upcoming products, rumors and more. A new iMac should come out next year with M3 chips, or late this fall like the other models (the MacBook Air, MacBook Pro and Mac mini just got M2 chips). Apple usually runs about a two-year cycle between new versions.