I have used DxO OpticsPro/PhotoLab for several years. It’s my favourite raw editing program. However, it has one significant disadvantage: DxO PhotoLab is too slow for editing large (over 30 MP) RAW files. Please support GPU acceleration for RAW editing!
In my recent tests with Topaz Gigapixel 3.1, which uses both CPU and GPU acceleration, it came to light that the GPU (Nvidia GeForce RTX 2070, 2304 CUDA cores) can be 40x faster than the CPU (Intel Xeon 3.50 GHz, 4 cores) when processing the same picture!
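A 40x gap is plausible for highly parallel per-pixel work. As a back-of-the-envelope check (my own illustration, not DxO’s or Topaz’s numbers), Amdahl’s law shows how much of a pipeline has to parallelise for such a speedup to even be reachable:

```python
def amdahl_speedup(parallel_fraction, n_workers):
    # Amdahl's law: overall speedup when a fraction of the work parallelises
    # perfectly over n_workers and the rest stays serial.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_workers)

# Even with effectively unlimited workers, speedup -> 1 / (1 - p),
# so reaching 40x needs p >= 0.975 (97.5% of the work parallelised).
print(amdahl_speedup(0.975, 2304))  # ~39x with 2304 CUDA cores
print(amdahl_speedup(0.975, 4))     # ~3.7x with only 4 CPU cores
```

Even a small serial remainder caps the achievable speedup, which may be part of why porting a CPU-only raw pipeline to the GPU is harder than it sounds.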
Topaz plugins are so slow (even with GPU processing) that they desperately need hardware acceleration. But I agree - PhotoLab is very slow with my 5DS R files and it’s high time DxO adds GPU acceleration to PhotoLab.
The sliders are hopelessly slow unless one follows a very specific workflow (not enabling any lens correction or noise reduction until the very end).
DxO Optics Pro and DxO PhotoLab both already support GPU acceleration; this was implemented years ago (at least since DxO Optics Pro 7 in 2012!).
This is achieved using OpenCL: check in the “Preferences” dialog, under the “Performances” tab, that the OpenCL checkbox is activated. This is supported by both Intel & AMD processors (on the CPU side).
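If you want to double-check what OpenCL actually sees on your machine, independently of PhotoLab, here is a minimal sketch using the third-party pyopencl package (an assumption on my part: PhotoLab doesn’t expose this, and the helper name is mine):

```python
def list_opencl_devices():
    """Return human-readable OpenCL platform/device names, or [] if unavailable."""
    try:
        import pyopencl as cl  # third-party package; may not be installed
    except ImportError:
        return []
    names = []
    for platform in cl.get_platforms():
        for device in platform.get_devices():
            names.append(f"{platform.name} / {device.name}")
    return names

if __name__ == "__main__":
    devices = list_opencl_devices()
    if devices:
        print("\n".join(devices))
    else:
        print("No OpenCL devices found (or pyopencl not installed)")
```

If your GPU shows up here but PhotoLab still greys out the checkbox, the limitation is in PhotoLab’s heuristics rather than in your drivers.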
This is stated by DxO support upon opening a support ticket:
The GPU option you are seeing in the program user guide is only for the Windows version of the program. If your video card and associated video hardware driver are compatible on your Mac for use in the program, GPU acceleration will be automatically turned on for you. Therefore, no GPU option is displayed in the Mac version of the program. There are a much wider variety of video cards available for PCs than Macs. This is why the option is available for Windows.
Thanks for the tip, Required. Mac-side GPU acceleration doesn’t do much at this point. Like you, I don’t see much use of the GPU when monitoring with iStat Menus. It looks like you are using Bresink, but it’s the same thing.
Performance is much better on Sony A7 III 24 MP files than on the Canon 5DS R 50 MP files. Not twice as responsive, but at least three or four times more responsive. Neither is real-time, though.
Thank you for your response.
Is it that the available graphics-acceleration features are costly to implement (e.g. require some expensive licensing)?
Just as an example, I also use the “Da Vinci Resolve” software for video editing, and it fully supports graphics acceleration on the Mac (including with external hardware connected over a high-speed network).
I’m running PhotoLab 2 Elite v2.2.2 build 23730 under Windows 10 64-bit, version 1809.
My processor is an Intel Core i5-8400 at 2800 MHz, with 16 GB of DDR4 3200 MHz.
My GPU is Gigabyte GeForce RTX 2060 WINDFORCE OC 6G (rev. 2.0).
Unfortunately, when I attempt to activate the OpenCL checkbox in the “Performances” tab, I get this warning: “Your graphic card seems to be slower than your CPU. Enabling OpenCL may decrease performance.”
And then I’m prevented from activating the OpenCL checkbox, and RAW processing performance remains of course very poor.
Is this situation normal? Do I need to get a better graphics card?
Right now, I am exporting 50+ MP Canon RAWs, and my Radeon 560 uses all of its 4 GB of VRAM, but the GPU itself is only processing something like 5 frames per second.
GPU processing has been here for at least 20 years and it has never taken off. Back when we had 133 MHz CPUs it made a huge difference. I doubt there is much difference at all now. My old Dell workstation with 2x Xeon CPUs plus a Quadro 5000 added maybe 2 fps in Adobe Premiere on top of the 15 fps I got from the two CPUs alone.
The GPU is great for previews, but it doesn’t seem to bring any improvement in rendering, where a single thread runs per image. GPUs shine at parallel processing, but here there is only one thread per picture.
I recall there was an issue with parallel H.264 encoding: quality degrades if many threads are used on a single image, because you need to split the image into small blocks. You have to choose quality or speed; both at once is not possible.
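The block-splitting idea can be sketched like this: a toy Python example of my own (the invert operation and band sizes are made up), with threads standing in for whatever parallel backend a real encoder or raw pipeline would use:

```python
from concurrent.futures import ThreadPoolExecutor

def invert_tile(tile):
    # Toy per-pixel operation on one tile (8-bit invert).
    return [[255 - px for px in row] for row in tile]

def split_into_bands(image, n_bands):
    # Split an image (a list of pixel rows) into horizontal bands.
    step = max(1, len(image) // n_bands)
    return [image[i:i + step] for i in range(0, len(image), step)]

def process_in_parallel(image, n_bands=4):
    # Process each band independently, then stitch the results back together.
    # pool.map preserves band order, so the output rows line up correctly.
    bands = split_into_bands(image, n_bands)
    with ThreadPoolExecutor() as pool:
        processed = list(pool.map(invert_tile, bands))
    return [row for band in processed for row in band]
```

For a purely per-pixel operation like this, the bands are independent and quality is unaffected; the quality/speed trade-off bites when the operation needs context across block boundaries, as in H.264 motion estimation.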
GPU acceleration has been one of our development axes for quite a while, though it hasn’t come to fruition yet.
It’s a hard balance to strike across all our algorithms, which are purely CPU-based, and we’ve already tried some “porting” that didn’t lead to satisfying results.
It’s a logical evolution that we keep tracking, but one on which, at the moment, we can’t give you a definite answer.
I’ll close this topic and release the votes.
Thanks to you all.