To use the GPU I had to select it explicitly in the Preferences, as I’d already established that the Auto setting for Preferences / Performance / DeepPRIME Acceleration resulted in the GPU not being used.
During the EA Beta testing for PhotoLab 4 I was advised that forcing the use of this GPU might cause errors. So far, I’ve exported tens of images with DeepPrime NR (using the EA and commercial releases) without issue.
Added results from my 2013 Mac Pro with 3.3GHz 8-core Xeon, 1TB SSD, 64GB RAM and dual FirePro D500 3GB GPUs. Too bad PhotoLab 4 can’t use both of the GPUs at once. With these images as well as my own, CPU and GPU times have been identical or very nearly so. My MP/s calculation is based on actual pixel dimensions, not camera spec. The D850 wedding images are 33MP, not the camera’s 45MP max - must have been cropped.
Thinking my 8-core CPU might manage better average times if I fed it more images (to utilize all available cores and threads), I downloaded the next 11 images from PhotographyBlog (those are all 45MP) and exported all 16 images at once. Again, CPU and GPU results were identical, but this time MP/s was 0.69, down from 1.5 for just the 5 wedding images. Looks like the larger 45MP images reduce MP/s performance by more than half. Ouch.
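For reference, the throughput drop can be checked with a bit of arithmetic (MP/s here is total actual megapixels divided by export time, as described above):

```python
# Batch composition: five 33 MP wedding shots plus eleven 45 MP
# PhotographyBlog images, exported in one go.
batch_mp = 5 * 33 + 11 * 45
print(batch_mp)              # 660 MP total

# Reported throughput dropped from 1.5 MP/s (wedding shots alone)
# to 0.69 MP/s for the mixed batch:
print(round(0.69 / 1.5, 2))  # 0.46 -> less than half the throughput
```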
Interestingly, when running on GPU, the load was handed off from one GPU to the other partway through; each ran consistently at 86% while under load. When running on CPU, the CPU load was very spiky, remaining low most of the time and peaking at 1100% for 5 seconds about every 30-40 seconds, all while the GPU continued to crank along at 86%, with the load again switching from one GPU to the other partway through. Huh.
My conclusion is that larger batches aren’t handled any more efficiently by the 8-core CPU, and the larger 45MP images of birds substantially reduced the MP/s performance. I’m also left to wonder why the GPU is working just as hard when PL4 is set to CPU-only as when it’s set to GPU-only. The CPU seems to add virtually nothing.
It looks strange to me that export times are absolutely identical between CPU and GPU…
Just to be sure: after changing your DeepPRIME settings, did you restart PhotoLab? Otherwise, the change will not be taken into account…
OK. Disregard everything I wrote above. I’ve retested, relaunching PL4 every time I switched DP prefs, and I’ve updated my spreadsheet entry. The upshot: Whether processing the five 33MP wedding shots, the mix of those with the next 11 45MP shots from the PhotographyBlog gallery, or just the one 51MP Egypte image, I got roughly 0.66MP/s running DeepPRIME on one FirePro D500 GPU. DP with GPU took 1.60x-2.37x longer than PRIME.
I’ve just added my results from my 2018 MacBook Pro, with and without a Vega 64 eGPU.
I also added my Windows machine with an (ancient) i7-6700K and a Vega 64 GPU (which just replaced a GTX 970).
Is DeepPRIME using TensorFlow or a similar framework for acceleration? I’m specifically wondering whether it would take advantage of the Neural Engine in Apple’s M1 chip once PL4 is ported to it.
Thanks for sharing. My 8-core 3.3GHz 2013 Mac Pro with 64GB RAM and FirePro D500 GPU managed only:
Egypte (CPU) - 4:30
Egypte (GPU) - 1:17
Wedding (CPU) - 14:21
Wedding (GPU) - 4:09
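Converting those times to the MP/s metric used earlier in the thread, using the image sizes mentioned above (51 MP for the Egypte file, five 33 MP wedding shots):

```python
def mp_per_s(megapixels, mins, secs):
    """Throughput in megapixels per second for a given export time."""
    return megapixels / (mins * 60 + secs)

# 51 MP Egypte image; five 33 MP wedding shots = 165 MP total.
print(round(mp_per_s(51, 1, 17), 2))    # GPU Egypte  -> 0.66
print(round(mp_per_s(51, 4, 30), 2))    # CPU Egypte  -> 0.19
print(round(mp_per_s(165, 4, 9), 2))    # GPU wedding -> 0.66
print(round(mp_per_s(165, 14, 21), 2))  # CPU wedding -> 0.19
```

Both GPU runs land at the ~0.66 MP/s figure reported above for the FirePro D500.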
Dual GPU support would help, but that’s probably a very low priority. Apple Silicon support would seem to be top priority now. Sure wish DxO would say something about it, as I’ve got a new mini on the way and am dithering about whether to cancel and get an i5/i7 mini + eGPU…
What I was curious about is whether you’re using the frameworks (Metal, TensorFlow) that will allow for (relatively) easy porting to Neural Engine, or whether you rolled your own GPU code. I can understand taking either approach, but I really hope you used the former, and adding Neural Engine support isn’t too painful.
(Marc, macOS Ventura on MBP 16" Intel)
Well, I guess you’re right: for the Mac, Apple Silicon is the future.
Once you have your new mini you can test it out, and if you’re not happy, send it back to Apple for a refund and get an Intel mini instead. But be aware that eGPUs no longer work with Apple Silicon.