On the first image transferred from LRc, DeepPRIME or DeepPRIME XD runs quite quickly (typically 13-15 seconds for a 45 MP image). Sometimes subsequent images are also fast, but if the second image is from a different camera/lens combo, the processing time extends dramatically to 3 to 5 minutes. Note that PRIME processing times are not affected, so this appears to be a GPU issue. Once the program slows down, it stays slow even if I exit PL and LR and restart them. If I reboot, it comes back to normal (until it slows down again). I have also found that if I exit both PL and LR, clear standby memory with RamMap, and then restart both, the problem goes away until it recurs with a second or subsequent image. The computer is an i9-9900K with 64 GB of RAM and a GTX 1080 driving a 4K monitor. I realize this GPU is older than the recommended minimum, but it works fine until it doesn't, and the computer overall is still quite fast. I would appreciate any thoughts you might have on solving this problem.
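For anyone who wants to script that workaround rather than click through the RamMap UI: Sysinternals RAMMap accepts command-line switches, and `-Et` empties the standby list. A minimal sketch, assuming the executable is named `RAMMap64.exe` and is on your PATH (adjust for your install):

```python
import shutil
import subprocess

# Assumed executable name; point this at wherever Sysinternals RAMMap lives.
RAMMAP = "RAMMap64.exe"

def rammap_cmd() -> list[str]:
    # -Et is RAMMap's switch for emptying the standby list.
    return [RAMMAP, "-Et"]

def clear_standby_list() -> bool:
    """Empty the standby list if RAMMap is found; return True on success."""
    if shutil.which(RAMMAP) is None:
        return False  # RAMMap not installed or not on PATH
    subprocess.run(rammap_cmd(), check=True)
    return True
```

You would still exit PL and LR first, run `clear_standby_list()`, then relaunch both, exactly as described above.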
Version 6.1.1 seems to have solved the problem.
I have the same problem on 6.2.0 build 41 DEMO
It worked fine for a few days, speedy and quick, but now, even after reboots, I have an image that takes minutes/forever to finish. Is it something about DeepPRIME XD? It starts quickly and then bogs down about a third of the way through.
Haha, newbie mistake. DeepPRIME XD denoising was the reason for the sudden slowness in export.
The problem is still there, but I have tracked it down to running Topaz Sharpen while PL6 is open. I work from LR as a starting point, go to PL 6.3 for raw development, and follow up with Topaz Sharpen. If I close PL6 after it exports back to LR and before I open Sharpen, it will process the next image properly (with the GPU), but if I leave it open while running Sharpen, the next image processed with DeepPRIME XD reverts to the CPU and is hence much slower. This problem now shows up on a new computer that I just built, so it is no longer a possible obsolete-GPU issue. The problem exhibited on the old computer with a GTX 1080 as well as on the new one with an RTX 4070 Ti, so it is likely not a ReBAR issue, but rather something generic in how the two programs use the GPU. I am considering putting a second GPU in the new computer and assigning one of the programs to it to see if that solves the problem.
I’ve experienced the same issue, first with a 4 GB RX 5500 and now an 8 GB RX 6600 XT. The problem is Lightroom. With all the GPU boxes checked (display, processing, export), Lightroom seems to use all available VRAM and doesn’t release it when it’s done. You can see this on the GPU graph in the Performance tab of Task Manager. If you do have full GPU acceleration enabled in Lightroom, try unchecking one box at a time, starting with export, and see what happens.
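On NVIDIA cards you can also watch VRAM from the command line instead of the Task Manager graph, using `nvidia-smi --query-gpu=memory.used,memory.total --format=csv,noheader,nounits`. A small sketch that parses that output (the sample reading in the comment is illustrative, not from a real run; AMD cards need a different tool):

```python
import subprocess

def parse_vram(csv_line: str) -> tuple[int, int]:
    """Parse one 'used, total' line (MiB) from nvidia-smi csv,noheader,nounits output."""
    used, total = (int(x.strip()) for x in csv_line.split(","))
    return used, total

def vram_usage() -> tuple[int, int]:
    # Requires an NVIDIA GPU with nvidia-smi on PATH.
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_vram(out.splitlines()[0])

# A hypothetical reading like "7680, 8192" would mean the photo apps have
# pinned ~94% of an 8 GB card, leaving little headroom for DeepPRIME.
```

Sampling this before and after a Lightroom export would show whether the VRAM is actually released when LR is done.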
Definitely a VRAM-related issue. I will try your suggestion, but I am now not seeing any issue using only LR and PL with a 12 GB RTX 4070 Ti. It is only when I add Topaz Sharpen into the mix that PL gets grumpy. I have been closing PL before running Sharpen and have had no issues since; it is much quicker to close and restart the program than to either wait for a CPU-only DeepPRIME XD run or exit all the programs, log out, log in, and start over. I do think I will try the two-GPU approach and assign one of the programs to the second GPU. I have a spare GTX 1660 Ti that will work for the test. If that solves the problem, the Intel Arc A770 seems to really like Sharpen based on some tests by Puget Systems (NVIDIA GeForce 40 Series vs AMD Radeon 7000 for Content Creation | Puget Systems), so I just might nab one of those: they are pretty reasonable, and I have a second PCIe slot and a big enough power supply. With PCIe 4, feeding two GPUs with 8 lanes each shouldn’t slow things down much, and the A770 is also the fastest thing around for H.264/HEVC encoding (thanks to a hot-rod version of Quick Sync built in).
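For the "assign a program to a GPU" part: Windows stores per-application GPU preference as a `REG_SZ` like `GpuPreference=2;` under `HKCU\Software\Microsoft\DirectX\UserGpuPreferences` (0 = let Windows decide, 1 = power-saving GPU, 2 = high-performance GPU). A hedged sketch; the Topaz path in the comment is hypothetical, and note this value only distinguishes power-saving vs high-performance GPUs, so with two discrete cards the Settings > Display > Graphics UI may be needed to pick a specific adapter:

```python
# Per-app GPU choice lives under this HKCU key on Windows 10/11.
KEY_PATH = r"Software\Microsoft\DirectX\UserGpuPreferences"

def gpu_pref_value(preference: int = 2) -> str:
    # 0 = auto, 1 = power saving, 2 = high performance
    return f"GpuPreference={preference};"

def set_app_gpu_preference(exe_path: str, preference: int = 2) -> None:
    import winreg  # Windows-only module, imported lazily
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        winreg.SetValueEx(key, exe_path, 0, winreg.REG_SZ,
                          gpu_pref_value(preference))

# Hypothetical example -- adjust the path to your Topaz install:
# set_app_gpu_preference(r"C:\Program Files\Topaz Labs LLC\Topaz Sharpen AI\Sharpen AI.exe")
```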
I’m not surprised, you could go to the moon on that card
crazy price → e.g. here …
I got it in the US when it first came out. Expensive, but not as crazy as the Euro price. It is very fast for a third-tier card (but obviously still not enough VRAM to overcome the hunger of three photo programs running in tandem). BTW, the new computer I built also has an i9-13900K, which is also quite quick. Definitely cheaper when you build the computer yourself.
Don’t know how widespread or currently valid this is, but it matches my experience: https://tinyurl.com/5fdax7s2
One commenter mentions the same problem when using DxO (I assume with DeepPRIME, although he doesn’t say so specifically).
After building the new computer, it was so much faster that I hadn’t checked what LR’s default performance settings were. It turns out it was only set to use the GPU for display. I checked the other two boxes and LR is now stunningly fast, and the GPU conflict is also almost gone. Only occasionally (depending on the AI algorithm chosen in Topaz Sharpen) will PL turn into a slug after running Sharpen, and all I have to do is close PL and restart it by sending another image from LR, and it works fine again. Definitely counterintuitive, but that was the result. I will run like this for a while and see if anything changes, but the current behavior doesn’t justify stuffing a second GPU in the box. I checked the memory-usage behavior in Task Manager and all three programs seem to be sharing normally.
After having PL quit using the GPU for some time now, I have found a major conflict. If Outlook is running when PL is launched, the GPU will almost always not be used for DeepPRIME XD processing. I have found that if you close Outlook and log out and back in (Outlook doesn’t really close completely until you log out) before starting a photo-processing session, there is minimal conflict between LR, PL, and Topaz, and the photo workflow is not impaired; but if Outlook is open, PL will not use the GPU most of the time. I am not sure what the nature of the conflict is, but the workaround is easy unless you are in an environment where you have to keep Outlook open.