It is not really important whether the CPU or the GPU is used. Using both a little does not mean that responsiveness gets better. It is all about the preview rendering architecture of the tool: how the data to process is derived from the visible part of the image and the current zoom factor, how that visible part is divided into computational chunks, how those chunks are distributed to cores/shader units, which correction algorithms are used, whether they support parallel processing, their order in the processing pipeline, and so on. I have addressed this topic more generally in: Avoid Blurry Previews
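To make the architecture point above concrete, here is a minimal sketch of tile-based preview rendering: only the tiles that intersect the visible viewport at the current zoom are computed, and the work is spread across a pool of workers. All names here (Viewport, render_tile, the 256-pixel tile size) are illustrative assumptions, not DxO's actual internals.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

TILE = 256  # assumed tile edge in screen pixels

@dataclass
class Viewport:
    x: int       # top-left of the visible area, in screen pixels
    y: int
    width: int
    height: int

def visible_tiles(vp: Viewport):
    """Yield (col, row) indices of every tile intersecting the viewport."""
    first_col, first_row = vp.x // TILE, vp.y // TILE
    last_col = (vp.x + vp.width - 1) // TILE
    last_row = (vp.y + vp.height - 1) // TILE
    for row in range(first_row, last_row + 1):
        for col in range(first_col, last_col + 1):
            yield col, row

def render_tile(col: int, row: int, zoom: float):
    # Placeholder for the real per-tile correction pipeline; at low zoom a
    # downsampled source region would be processed instead of full-res data.
    return (col, row, zoom)

def render_preview(vp: Viewport, zoom: float):
    # Distribute only the visible tiles over worker threads (shader units,
    # in a real GPU implementation) and collect results as they finish.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda t: render_tile(*t, zoom), visible_tiles(vp)))
```

The key property is that the cost depends on the viewport and zoom, not on the full image size, which is what makes sliders feel responsive on large files.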
When doing exports there’s a huge benefit to having GPU-supported processing.
Especially when you are on a MacBook Pro - or a Mac mini or perhaps a Mac Pro trashcan - and utilize an external graphics card!
There are more and more people using a hackintosh which can harbor the most powerful GPUs of the time.
As a matter of fact, I didn’t put a powerful one in mine because I knew DOP made no use of it. I would reconsider this option if DPL started using the GPU. There is a large amount of power waiting to be tapped.
Export isn’t really the issue for hardware acceleration. Export should only really be done at full resolution with full detail and no compromises. Adobe removed GPU acceleration from export in Premiere and Photoshop for a reason.
On the other hand, if GPU acceleration will help DxO give us real-time sliders, I’m all for it. It’s not just GPU acceleration though, as Asser points out: it would be calculating and displaying the visible parts of the image for preview (proxies) at the current resolution which would significantly speed up the user interface and workflow in PhotoLab.
Speeding up PhotoLab for large images and on 4K enabled systems is the single most important task in front of DxO if they would like to keep PhotoLab competitive as a RAW tool. Top tier tools like Lightroom and C1 (the only true competition) are basically real time right now in terms of image adjustment and sliders, even on 4K systems with large images.
PS. It’s not great that we have to vote in two places for GPU acceleration. It’s forcing me to pull down sincerely meant votes just to vote these two both up.
GPU acceleration should be used whenever it has a positive effect - be it export, adjustments, rendering or building previews.
Every kind of signal processing will benefit from GPU processing.
Export is no exception and should not be excluded.
If I need to export three different versions of a larger set of photos - say high-res aRGB TIFFs, downsized high-quality sRGB JPEGs and smaller sRGB previews - doing this with PL today will push all my CPU cores to the maximum, slowing down all other processes I might be running at the same time.
If this processing were handed over to the GPU, my computer would be butter smooth, allowing me to continue doing other work - either in PL or in any other application.
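Even without GPU offloading, the responsiveness problem described above can be partly mitigated by capping the export worker pool below the machine's core count. A minimal sketch, with export_one() as a stand-in for the real per-image pipeline (which is not public):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def export_one(path: str) -> str:
    # Placeholder for the real per-image work (demosaic, corrections,
    # encode); only the throttling pattern is the point here.
    return path + ".exported"

def export_batch(paths, headroom: int = 2):
    # Leave `headroom` cores free so the UI and other applications stay
    # responsive. A real exporter would use processes or the GPU; a
    # thread pool is enough to sketch the idea.
    workers = max(1, (os.cpu_count() or 1) - headroom)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(export_one, paths))
```

Handing the work to the GPU goes further, of course, since it frees the CPU cores entirely rather than merely rationing them.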
GPU-accelerated versions are imperfect (based on Adobe’s and Apple’s experience). GPU acceleration is suitable for previews or working copies, not for masters. The most efficient workflow with DxO PhotoLab for multiple output resolutions is to output a set of masters (either a TIFF for 32-bit but much more space, or a zero-compression JPEG) and then use a dedicated resizing/watermarking application to create the other versions.
The issue with using DxO PhotoLab to create multiple sizes is that each version requires full processing from the original, which can be very slow (as slow as 2 minutes per image). Working from TIFF masters means even huge resizing and watermarking jobs take 10 or 15 seconds per image.
Moreover, each of us can choose the web-prep script or application which suits us. Some people have really high watermarking requirements, wanting both visible and invisible watermarking - others want all kinds of changes to GPS location, EXIF data and renaming - and others just need fast, simple, high-quality resizing.
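The masters-first workflow above can be sketched as a simple sizing step: given one full-resolution master, derive the target dimensions for each smaller version while preserving aspect ratio. The profile names and long-edge targets below are invented for illustration; the actual resizing and watermarking would be done by whatever dedicated tool one prefers.

```python
from typing import Optional

# Hypothetical output profiles: long-edge target in pixels,
# or None for the full-resolution master.
PROFILES = {
    "tiff_master": None,
    "jpeg_hq": 3840,
    "jpeg_preview": 1024,
}

def target_size(width: int, height: int, long_edge: Optional[int]):
    """Scale (width, height) so the long edge matches long_edge; never upscale."""
    if long_edge is None or max(width, height) <= long_edge:
        return width, height
    scale = long_edge / max(width, height)
    return round(width * scale), round(height * scale)
```

Because this step works on an already-rendered master, it costs seconds per image instead of re-running the whole RAW pipeline for every output size.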
GPU acceleration of the preview while working with sliders, giving us real-time sliders, would be a game changer in terms of workflow and in terms of attracting professional photographers and retouchers to DxO PhotoLab and other products.
The issue with using DxO PhotoLab to create multiple sizes is that each version requires full processing from the original, which can be very slow (as slow as 2 minutes per image).
Of course it is slow. It’s CPU based.
But pointing to a slow CPU-based implementation as an excuse NOT to rewrite it as a highly efficient GPU-accelerated implementation is not a valid argument.
All outputs, especially multi-output versions, are slow, as PL only uses the CPU to process them. That’s one of the reasons why GPU processing does help!
Even Capture One runs GPU-accelerated export processing, and I have never experienced myself, nor read anything about, quality problems with their images.
If I check four boxes in PL’s export dialog to produce a set of exported files in a single process - just as DxO enables us to do - I want them exported at the highest possible quality, as fast as possible and with the smallest possible CPU utilisation. And all of this in one single step, so that I can continue working.
Not export a first time, max out the CPU, sit and wait for the computer to be done, open another application, select new output streams or route into secondary ingest folders, and so on.
One action. One export.
Multiple versions. At highest possible quality.
At a minimum impact.
But export is only the last step.
Sure, I want PL to use GPU acceleration for as much as possible, and that includes previews, adjustments, filters etc.
You’re right, export should not be excluded if it can benefit from it.
But for me, real-time preview when working with the sliders is much, much more important. I would weight it 100 for preview vs 1 for export.
This is a Chrome-browser-only option and has no effect on PL’s performance.
Here is some info about using hardware acceleration with Chrome which I found informative.
The tip about typing “chrome://gpu” into the Chrome address bar is illuminating. The result is an extensive readout, and the first section is particularly useful, showing which features are actually accelerated.
Believe it or not, I sold off a Nikon D850 (replaced with a D4 + Z6) as the post-production pain in DxO PhotoLab would have been too much. I also borrowed a D810 at one point (36MP); DxO PhotoLab processing speed was fine. With the D850 and the Canon 5DSR before it, processing speed was not fine. Tested with PhotoLab 2 on a Classic Mac Pro 12 x 3.33 (with SSD drives, 96GB memory, RX580 graphics cards). Speed was also fine on a 2011 MBP i7 with SSD. There’s a routine somewhere which hard-fails with images over 40MP, as if a buffer is overrun and the image is partially being loaded and reloaded during processing.
There’s certainly work to do here for DxO to support high MP cameras. That said, 24 MP is plenty for me if the image is of high quality. One runs into the limitations of the glass and the mediocre quality of ultra-tiny pixels when going higher.