I have a system with quite a lot of memory. 64 GB of it in fact.
But Photolab barely uses any of it!
Photolab loads and processes an image when you select it, and then completely discards all the work it just did when you go to the next image. It could instead keep that work in memory for a while in case I come back to the same image, surely? There’s plenty of memory for that. You could flick back and forth comparing two images without it blurring and reloading every time you switch.
And PhotoLab is barely busy either! It’s just little spikes of activity as you switch image. My CPU is idle most of the time. Photolab could be using some of that spare CPU time to be computing the preview of the next photo (or the previous photo if I’ve been going backwards). That way you wouldn’t hit the arrow key and wait for a second before you get to see the image, it would ALREADY BE THERE.
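To show the kind of thing I mean, here’s a minimal Python sketch (names and structure are entirely my own illustration, nothing to do with PhotoLab’s actual internals): while you look at one image, a background worker speculatively renders its neighbours, so stepping to the next photo finds the preview already done.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class PreviewPrefetcher:
    """Speculatively render neighbouring images on spare CPU time.

    `render` is a placeholder for whatever expensive decode/processing
    step produces a displayable preview (hypothetical, for illustration).
    """

    def __init__(self, render):
        self.render = render
        self.cache = {}                        # path -> rendered preview
        self.lock = threading.Lock()
        self.pool = ThreadPoolExecutor(max_workers=2)

    def _prefetch(self, path):
        # runs in the background; store the result for a later show()
        preview = self.render(path)
        with self.lock:
            self.cache[path] = preview

    def show(self, paths, index):
        """Return the preview for paths[index], prefetching its neighbours."""
        current = paths[index]
        with self.lock:
            preview = self.cache.get(current)
        if preview is None:
            preview = self.render(current)     # cache miss: render now
            with self.lock:
                self.cache[current] = preview
        # queue the next and previous images in the background
        for n in (index + 1, index - 1):
            if 0 <= n < len(paths):
                with self.lock:
                    already_done = paths[n] in self.cache
                if not already_done:
                    self.pool.submit(self._prefetch, paths[n])
        return preview
```

A real implementation would also cancel stale prefetches and cap the cache size, but even this naive version hides the render latency when you step through a folder in order.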
I’m a software dev, I know this can be done better, and it’s pretty frustrating. When I process a large number of photos, probably half my time is spent waiting for the computer, which is immensely underutilised.
So here’s my big question:
Are later versions of Photolab any better about this? Did DxO get any better at modern computing in recent versions?
Hi and welcome to DxO Users’ Forums. I know that versions 6 and 7 have a preference choice of “Always see(ing) high quality previews”. I don’t remember if PL5 had this option or not, but if it does then this may help your situation.
As for using more memory, some users’ computers have as little as 8 GB. Even though 16 GB is recommended, 8 GB is allowed.
I think it might have been version 6, though I’m not sure anymore, where they did a lot of under-the-hood improvements; it was all over their promo materials. For me personally it really got better with version 7, which runs much more smoothly on my machine, but not many others have reported the same, so maybe it’s not like that for everyone. It might be best to try a trial version and see if it does the job for you on your machine.
Prefetching and more “intelligent” memory cache usage are probably things for DxO to improve, but keep in mind that an unpacked 100 MP RAW takes 600 MB in memory, and they probably keep at least one other copy. I have DPL7 and it works as expected, more or less.
I think DPL tries to be a “good citizen” on your computer (check the logs), so that it does not make other tasks unresponsive. Also, some other software might reserve resources for itself, like part of the GPU memory and SMs. CPU usage may also depend on your disk performance: threads might wait for I/O or for a GPU task to complete, and one thread may have to wait for other threads to finish. Some subtasks are single-threaded because they cannot be parallelized. With newer Intels you get E-cores and P-cores, so scheduling is more complicated. Some big performance improvements were previously seen after upgrading NVIDIA drivers. And so on, and so on… I would be more cautious before jumping to quick conclusions. Some people are unhappy because Notepad does not use all their computer’s power.
The thread opener’s system RAM easily has space for 100 images of 100 MP with plenty of reserve. Besides the fact that he didn’t state anything about image size, but you did (a guess, I guess?), the question remains why not even 10 % of the available RAM is utilized, although no other intensive jobs are running in parallel. Of course the addressee of that question is DxO, but I can’t detect any conclusion in the original post. In yours I do find some… and the unnecessary mention of Notepad. Was that a subtle hint that “some people will never be satisfied”? Just asking.
I probably should do, not sure why that didn’t occur to me. Duh
That’s fair to a point. Certainly, this is used by hobbyists with very average home computers, and it should operate sensibly. But likewise if it wants to be a serious professional option, it needs to have the capacity to utilise any resources made available.
Your average web browser already scales automatically. Chrome eating RAM is a meme, but it leverages that RAM to great effect (you can disable caching in the developer console if you want to see how much of a difference it makes), and it usually backs off if the system is filling up. Some professional creative software has sliders to set limits on how aggressively it should take system resources, too.
I don’t think that’s an entirely unfair response for them to have had. Forums like these are always full of ridiculous “Hey devs why isn’t this literally magic” threads. But I do have an understanding of how this sort of software might work, and I think I’m being reasonable in my expectations.
I’m not asking for undefined “performance improvements”. I’m asking that PhotoLab uses the abundant memory of a modern computer to avoid throwing work away and repeating it every time I click a new photo. That type of work caching is very common in modern software of all types.
PhotoLab memory management is obviously a good topic for a request to DxO support. I think it hasn’t been touched very often in this forum. Memory management is usually a can of worms, so I’m not sure DxO wants to open it. It’s hard to design a caching policy that would satisfy everybody.
Perhaps more aggressive CPU usage would also be a good topic for a support request. Judging from the logs, they do limit the parallel thread count sometimes, but the rationale for this behavior is unclear. Maybe you can press DxO to make a statement about this. I wouldn’t expect a detailed response, but maybe they will give at least a vague idea of their choices.
GPU resources are mainly used by PL during export operations on photos with DeepPRIME NR. Part of GPU usage is under the control of the OS and other applications, so PL can’t be too aggressive. This can be a hard topic, because many video cards may become unstable under heavy load. For an example of problems some people had with older versions, see
There are many other things going under the hood (problems related to Microsoft, Apple, NVIDIA, Intel, AMD), which we will never know about.
Lastly, there is a whole variety of disks used by customers.
Some have HDDs, some use M.2, some have set up RAM drives.
The performance of thumbnail and preview preparation and of database operations may vary hugely, and surely many people go crazy about it. But we have to adapt ourselves too.
You should also be aware that some corrections, e.g. ‘Lens Softness correction’, ‘Chromatic Aberration’, the ancient ‘Unsharp Mask’, ‘Moire’, and ‘PRIME’ NR, can only be seen from 75% magnification upwards. It’s good for people with old hardware, but others may feel “cheated”.
Additionally, you can view DeepPRIME results only on exported files or in a small preview window. These topics were widely discussed here.
DxO is a small company, so many requests stay in their queue for years.
As a side remark, there are some obvious problems unsolved by much larger companies.
In Win11 there is a seemingly “easy to fix” problem with erratic behavior of ‘Restore previous folder windows at logon’. It hasn’t been fixed for over a year now, so on my laptop I still stay at Win10.
I must admit that my remark concerning Notepad was too toxic.
It was made just after reading some other, irrational posts, as the OP suspected.
Update on this one: PL7 (trial) and PL5 perform pretty much the same in this regard. There’s still that half a second of softness before it loads (even with the high quality previews), which makes comparing two photos for sharpness a nuisance, and still no pre-computing of the next image (which would dramatically speed up that first pass of picking and tagging the good photos). Alas, a bit disappointing.
Just storing the last N images (and putting N on a slider in the settings) would be very straightforward and harm nobody’s existing experience (especially if it were set to 0 by default, which would be the current behaviour, though I’m sure they could get away with 1 or 2). There are much smarter solutions out there, but this would get 75% of the benefit without much magic.
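For the curious, here’s roughly what I have in mind as a Python sketch (the class and everything in it are made up for illustration, not anything from PhotoLab): a least-recently-used cache of the last N rendered previews, where N = 0 reproduces today’s discard-everything behaviour.

```python
from collections import OrderedDict

class RenderedImageCache:
    """Keep the last N rendered previews, evicting the oldest first.

    Hypothetical sketch: N = 0 disables caching entirely, so exposing
    N as a settings slider changes nothing for anyone by default.
    """

    def __init__(self, max_items):
        self.max_items = max_items
        self._items = OrderedDict()           # path -> rendered preview

    def get(self, path):
        preview = self._items.get(path)
        if preview is not None:
            self._items.move_to_end(path)     # mark as most recently used
        return preview

    def put(self, path, preview):
        if self.max_items <= 0:
            return                            # slider at 0: cache disabled
        self._items[path] = preview
        self._items.move_to_end(path)
        while len(self._items) > self.max_items:
            self._items.popitem(last=False)   # evict least recently used
```

Flicking back and forth between two images then becomes two cache hits instead of two full re-renders; the only cost is N previews’ worth of RAM, which the user chose to spend.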
There’s a twofold irony here.
First, this would most strongly benefit those using slower disks, since it wouldn’t need to reload the same files repeatedly.
Second, Windows does already cache file access. “Free” RAM is often used for other tasks, like keeping files in memory.
The Windows caching thing is funny. I’ve got plenty of free RAM of course, so I can browse back and forth without my disk being touched at all. 100% of the delay I’m seeing comes after the file has been read from disk into memory. It’s half of a free solution, but what’s cached is the data from before processing, and the Windows cache isn’t as smart as PhotoLab could be.
You could download the free PL7 trial and test it thoroughly. Someone else’s experience might be different from yours… and only you have your computer.
Checking processor load on my Mac, I found that the CPU is almost fully used if processing does not require a GPU; otherwise, there will be some kind of configuration-dependent balance between CPU and GPU load.
Note that export can also be slowed down by the performance (or lack thereof) of the channel connecting the GPU to the CPU.
There is another thread in which the OP fears the computer’s burnout (so to speak) due to continuous full load while exporting.
Bottom Line: Use what you have and tune it for whatever is your target like e.g. “shortest possible export time”.
Are you talking about the comparison view or about switching from one photo to the next? Because the initial load is very processing-heavy in virtually all applications, unless the image is preloaded in a cache or something. DXO acts like a file browser, so one can just open a folder and start working, but it’s not meant to be used for file management or for choosing between many photos; there are dedicated programs for that.
However, if you want to compare just two photos, or one version to another, it should be pretty instant… once the rendering is done. I would expect this behaviour from such an app. I do agree that the DXO team could optimize and speed it up. For me there is a difference between older and newer versions of PhotoLab; I’m not sure if it’s related to drivers or to the particular workstation.
I know a while back DXO was advertising big improvements for PhotoLab, but I can’t find the press release for that; it could have been going from version 4 to 5, maybe.
I also found this on DXO help section, not sure if it makes a difference. Just sharing.
DeepPRIME* and DeepPRIME XD* hardware acceleration further information.
I imagine it’s a legacy feature, a more old-school way of doing things, probably left over from when DXO made the first version of the program. If I’m not mistaken, they never really developed it further. Unless you know some trick for comparing more than two photos. I usually compare the version of the photo I’m working on with the exported version, since it’s a way to preview DeepPRIME NR over the entire image instead of just a portion.
Yes. It could and should be optimized. Personally I didn’t have as much of a problem with it as some other people, since I don’t process many photos and I prefer the output quality that DXO has put most of its effort into. However, it is a fair point that the program is in its seventh version, it’s 2024, and it should be faster and more optimized in that area as well. Maybe next version, who knows. Whatever they do, I just wish they never go to a subscription model and never compromise quality for the sake of speed; if possible, both should be part of the package.
When I customise my images, I get snappy response times and with these, I don’t really care what the processor load is.
My older 2012 iMac with spinning platter drive took some time for switching images (3-4 seconds) and showing changed slider effects (1-2 seconds), but the new gear (which isn’t that new any more) shows no such delays.
After saving this post, I found that the forum software has added a footer with DxO advertising.
I am NOT a fan of any such obnoxious interference at all.
Please DxO, remove the banners and footers; most of us are already invested in DxO software. Some of it might get undesired feedback here, but adding ads will not help and will more probably worsen the overall feeling towards DxO.