Improve perceived performance by better leveraging the RAM of modern computers

Following on from this thread: PL5 doesn't make use of my workstation's performance at all - fixed in newer versions?

PhotoLab still uses resources like it’s 2007. I’d bet the average user of raw editing software is at least a little bit of an enthusiast and has a decent computer. 8 GB is pretty standard these days and 16 GB is not uncommon in even modest mid-range laptops. PhotoLab rarely uses more than a few GB of RAM during previewing and editing. It simply does not leverage modern computers. On top of that, my CPU is doing nothing most of the time either. PhotoLab could be doing more in the background to have things ready to go for the user.

(And I know there’ll be some users on real budget PCs - the best thing about this performance improvement is that it could simply be turned off for lightweight PCs and the user experience would be exactly the same as it is now.)

Two very simple strategies I could see for dramatic improvements in user experience:

  • Keep the last few images in memory so they can be displayed instantly. When you flick back and forth between two images, it’s recomputing the image every time you switch, only for it to be tossed out again seconds later. Comparing images directly is impossible because there’s a half second of soft fuzzy preview before the whole image is loaded.
  • Pre-compute the next image or two. When looking through new photos for the first time, each push of the right arrow key comes as a TOTAL SURPRISE to PhotoLab, even though it’s totally predictable! When going through a set of 500 or so photos, just having a quick peek at each one takes 20–30 cumulative minutes of waiting for each photo to load and render (not including any time you spend actually looking at each photo). Do that in the background ahead of time, and you can vastly improve the speed users can get through photos.
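To be concrete about how simple these two strategies are, here’s a minimal sketch in Python. `render_preview()` is a hypothetical stand-in for PhotoLab’s actual raw pipeline (which is obviously not public); the cache size of 4 is an arbitrary example, not a recommendation:

```python
import threading
from collections import OrderedDict
from concurrent.futures import ThreadPoolExecutor

def render_preview(path):
    # Hypothetical stand-in for the expensive raw-to-preview pipeline.
    return f"rendered:{path}"

class PreviewCache:
    """Keeps the last few rendered previews and pre-renders likely-next ones."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self._cache = OrderedDict()   # path -> rendered preview, in LRU order
        self._lock = threading.Lock()
        self._pool = ThreadPoolExecutor(max_workers=1)

    def get(self, path):
        with self._lock:
            if path in self._cache:          # hit: instant display, no rendering
                self._cache.move_to_end(path)
                return self._cache[path]
        preview = render_preview(path)       # miss: render on demand (the slow path)
        self._put(path, preview)
        return preview

    def prefetch(self, path):
        # Render the likely-next image in the background, so the next
        # arrow-key press finds it already sitting in the cache.
        with self._lock:
            if path in self._cache:
                return
        self._pool.submit(lambda: self._put(path, render_preview(path)))

    def _put(self, path, preview):
        with self._lock:
            self._cache[path] = preview
            self._cache.move_to_end(path)
            while len(self._cache) > self.capacity:
                self._cache.popitem(last=False)   # evict least-recently-used
```

Usage would be: after displaying `photos[i]`, call `cache.prefetch(photos[i + 1])`; flicking back to a recent image then becomes a dictionary lookup instead of a re-render.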

Those are both very simple strategies to implement. If you wanted to get clever you could:

  • cache more images when there are resources for it (or just give us a slider in the settings and let users pick their own balance)
  • pre-compute backwards as well if the user moves that direction
  • pre-compute both directions when clicking on a random photo (it’s likely it’ll be compared to the adjacent photos)
  • pre-compute when the user starts scrolling or mouse-ing over images before they even click if the system’s fast enough
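The direction-aware variants above amount to a small policy on top of such a cache. A sketch, with hypothetical names (`radius=1` meaning "pre-render one image each side"):

```python
def neighbors_to_prefetch(index, last_index, total, radius=1):
    # Decide which images to pre-render around the one just opened.
    # Moving forward: favour the next images; moving backward: the previous
    # ones; a random jump (e.g. clicking a thumbnail): both directions,
    # since adjacent photos are the likely comparison targets.
    if last_index is not None and index == last_index + 1:
        steps = range(1, radius + 1)                        # forward browsing
    elif last_index is not None and index == last_index - 1:
        steps = range(-radius, 0)                           # backward browsing
    else:
        steps = [s for s in range(-radius, radius + 1) if s != 0]  # random jump
    return [index + s for s in steps if 0 <= index + s < total]
```

Each returned index would just be handed to something like the cache’s `prefetch()`; the scroll/hover case is the same policy triggered by a different event.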

There’s a vast amount of time-savings to be had in this area. This is how you make modern software that’s fast. I shouldn’t be waiting after I click every single time when most of my actions are totally predictable and follow very simple patterns.


PL uses advanced GPUs more than it uses RAM. If you want faster rendering, this is where to look for performance gain. That and the new Mac M series processors, which are astoundingly fast compared to Intel or AMD.

Yes, leveraging more RAM and CPU power is a great idea and would be welcomed, but @Joanna is correct that the GPU makes the difference: with DPXD, the gap between CPU and GPU is astonishing, minutes versus seconds. Therefore I would love to see the GPU used for more processes than just exporting.


I think the OP mainly means caching, to get a more fluid experience when working.
That means caching previously done work, so that comparisons would be possible when going back and forth between images. Right now, in many cases they are not possible, unless you render everything, compare the results, go back to adjust, then render again. Which is an absolute nightmare and absolutely inefficient.
But hey, we can’t even display several images side by side for comparing or adjusting series. So… we have to live with the 90s style (2000 maybe?)…


Politely, this is entirely irrelevant. Improving performance of the actual computation is entirely orthogonal to my suggestion. It doesn’t matter how fast it gets, there’s literally no faster rendering than rendering that’s done before you started. GPUs cannot compute something in zero time, no matter how cleverly they’re used.

Better use of memory would require less complex engineering, and would most dramatically benefit those without the latest greatest computers.

Bingo. None of this would change how long the work takes or how it’s computed. There’s no need to re-engineer any of that. It’s just changing when the work is done so that users are waiting for it as rarely as possible. That’s why I say it improves the perceived performance: it streamlines the user experience.

I fully agree. Especially that one-second delay when looking at the next image is really irritating.


More than irritating: this makes it impossible, for example, to compare different images in the same series without doing a complete rendering of the series, vaguely estimating what needs to be modified, and picking the work up again and again.


I agree man.
