PhotoLab 8 - "Low GPU memory detection" problem

Good morning,

First, my config: Intel i5-2400 / GeForce GT 1030 2GB / 6GB DDR3 / Windows 10 / updated drivers.
With Photolab 7, no problem using DeepPrime XD in GPU mode (it’s “slow” compared to large configs, but much faster than in CPU mode).
With Photolab 8 (trial version), I get the following message when starting up (translated from French):
Title: “Low GPU memory detection”; message: “At least 1024 MB of free GPU memory is required for DeepPRIME or DeepPRIME XD/XD2s. If you continue, DxO PhotoLab will use the default CPU processing for the current session (this will increase the export time).”
Available buttons: “Exit DxO PhotoLab” and “Continue on CPU”.
In the Windows Task Manager, “Performance” tab ==> “GPU”, only 0.3 GB of memory is in use (i.e. 1.7 GB remains free).

In the DxO.PhotoLab.txt logs, I have the following error (in Franglish :)): “Windows Error when calling Device Initialization : Paramètre incorrect.” (“Paramètre incorrect” = “The parameter is incorrect”.)

2024-09-18 19:12:18.875 | DxO.PhotoLab - 13120 - 1 | DxOCorrectionEngine - Info | Reported DXGI device: NVIDIA GeForce GT 1030
2024-09-18 19:12:18.875 | DxO.PhotoLab - 13120 - 1 | DxOCorrectionEngine - Info | DXGI Device available, memory : usage 53596160, current reservation 0, budget 1765693440, available for reservation 934778880
2024-09-18 19:12:18.875 | DxO.PhotoLab - 13120 - 1 | DxOCorrectionEngine - Info | Reported DXGI device: Microsoft Basic Render Driver
2024-09-18 19:12:18.875 | DxO.PhotoLab - 13120 - 1 | DxOCorrectionEngine - Info | DXGI Device available, memory : usage 0, current reservation 0, budget 2879402804, available for reservation 1519684813
2024-09-18 19:12:18.942 | DxO.PhotoLab - 13120 - 1 | DxOCorrectionEngine - Info | InferenceDevice: selected inference device is NVIDIA GeForce GT 1030 (1.9 GiB, type=Discrete GPU, id=0x8062)
2024-09-18 19:12:18.990 | DxO.PhotoLab - 13120 - 1 | DxOCorrectionEngine - Info | Using adapter NVIDIA GeForce GT 1030
2024-09-18 19:12:18.990 | DxO.PhotoLab - 13120 - 1 | DxOCorrectionEngine - Info | VRAM Reservation is enabled: 500000000
2024-09-18 19:12:18.990 | DxO.PhotoLab - 13120 - 1 | DxOCorrectionEngine - Error | Windows Error when calling Device Initialization : Paramètre incorrect.
2024-09-18 19:12:18.990 | DxO.PhotoLab - 13120 - 1 | DxOCorrectionEngine - Info | Reported DXGI device: NVIDIA GeForce GT 1030
2024-09-18 19:12:18.990 | DxO.PhotoLab - 13120 - 1 | DxOCorrectionEngine - Info | DXGI Device available, memory : usage 53596160, current reservation 0, budget 1765693440, available for reservation 934778880
2024-09-18 19:12:18.990 | DxO.PhotoLab - 13120 - 1 | DxOCorrectionEngine - Info | Reported DXGI device: Microsoft Basic Render Driver
2024-09-18 19:12:18.990 | DxO.PhotoLab - 13120 - 1 | DxOCorrectionEngine - Info | DXGI Device available, memory : usage 0, current reservation 0, budget 2879402804, available for reservation 1519684813
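For what it’s worth, the usage / budget / reservation figures in that log look like the counters Windows exposes through DXGI’s QueryVideoMemoryInfo. Purely as an illustration (this is my own sketch, not DxO’s code, and the 1024 MB threshold is simply the figure quoted in the dialog), a “free GPU memory” check based on those counters could look like this:

```cpp
// Minimal sketch of a "low GPU memory" check built on the same DXGI counters
// that appear in the log above (usage, budget, reservation). This is my own
// illustration, not DxO's code; the 1024 MB threshold is simply the figure
// quoted in the dialog.
#include <windows.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    // Enumerate adapters, like the "Reported DXGI device: ..." log lines.
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);

        ComPtr<IDXGIAdapter3> adapter3;
        if (FAILED(adapter.As(&adapter3))) continue;

        // Budget / usage / reservation are exactly the fields printed in the log.
        DXGI_QUERY_VIDEO_MEMORY_INFO mem{};
        if (FAILED(adapter3->QueryVideoMemoryInfo(
                0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &mem)))
            continue;

        const UINT64 freeBytes =
            (mem.Budget > mem.CurrentUsage) ? mem.Budget - mem.CurrentUsage : 0;
        const UINT64 threshold = 1024ull * 1024 * 1024;  // 1024 MB, per the dialog

        wprintf(L"%s: usage %llu, budget %llu, free %llu -> %s\n",
                desc.Description, mem.CurrentUsage, mem.Budget, freeBytes,
                freeBytes >= threshold ? L"GPU OK" : L"fall back to CPU");
    }
    return 0;
}
```

Going by the values logged for the GT 1030 (budget 1765693440 minus usage 53596160, i.e. roughly 1.6 GiB free), a check of this kind should pass comfortably, which makes the “Paramètre incorrect” error a few lines later look like the more likely trigger for the CPU fallback.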

Thank you for your help

The problem is as the message says: your graphics card doesn’t have enough RAM, it has only 2 GB. DxO have evidently added an extra check to PL8, compared with earlier versions, to ensure that a lack of GPU RAM doesn’t crash PL8.

The only way to stop PL8 defaulting to the CPU instead of the GPU will be to upgrade to a more modern graphics card.

The alternative is to accept that exports involving any version of DeepPRIME denoising will take a very long time.

Possibly, but it’s worth the wait for the results you get :grinning:

@Inezant, @stuck & @Joanna I believe we have been here before, with the same or a similar issue, when DxO attempted to avoid the risk of a crash by pre-allocating GPU memory before actually attempting to use it, and in doing so downgraded many users to ‘CPU only’. That is “fatal” if any images have DP XD, or now DP XD2s, assigned.

Please see here DXO PL 6.10 and older GPU’s - #28 by Cecile-C

As a result of the problems other users had experienced, I changed my setting to ‘CPU only’ to see what the outcome might be. This brought my i7 4790Ks to their knees just browsing to an image with DP XD selected: the mouse stuttered, and the performance monitoring software I was running could not even keep plotting the CPU usage graph!!

DxPL was consuming every bit of CPU it could get whenever I browsed to an image with DP XD assigned!

Please note that this is browsing, not exporting. DxPL “sips” a little GPU “juice” when rendering/re-rendering DP XD images, and now does the same with DP XD2s, and I mean a tiny “sip”. But replacing that GPU “sip” with the CPU results in chaos for less powerful CPUs.

If you have an onboard iGPU then it is worth trying to select that, but it will make exporting even worse, if it works at all.

But during the problems that some other users experienced, my two i7-4790Ks both worked fine (when I didn’t deliberately select ‘CPU only’), one with a 4GB GTX 1050Ti and the other with a 2GB GTX 1050 (not much different from your 2GB GT 1030, @Inezant), and the 2GB card was never automatically downgraded to CPU only. By “fine” I mean slow to export but fine for browsing.

You would think that a Beta tester might have suggested that DP XD should have been retained as an option alongside DP XD2(s), to avoid any risk of performance issues with low-powered GPUs :wink:!?

In my tests I concluded that rendering the preview window was the culprit: each time I moved the preview to a new position, the CPU would max out. The Loupe is no longer permanently on, so browsing should be possible without the chaos I saw.

@Joanna but exporting with ‘CPU only’ is truly “horrendous”.

@Inezant please make a support request; the fact that it worked for browsing and (slowly) for exporting with PL7 means you have a reasonable expectation that it would remain the case with PL8.

In truth DxPL doesn’t seem to require much GPU memory; the problem was caused by a design/coding change that sought to pre-allocate GPU memory to avoid problems but actually caused them.
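As an illustration of what that pre-allocation might involve (this is my guess at the mechanism, not DxO’s code: the log only shows “VRAM Reservation is enabled: 500000000” immediately before the error, not which call actually failed), DXGI lets an application reserve part of the local video-memory budget with IDXGIAdapter3::SetVideoMemoryReservation, and a failing HRESULT is formatted by Windows in the system language, which is where a French “Paramètre incorrect.” (“The parameter is incorrect”, E_INVALIDARG) would come from:

```cpp
// Hypothetical sketch of a VRAM pre-reservation step (my guess at the
// mechanism, not DxO's actual code). The 500,000,000-byte figure is taken
// from the "VRAM Reservation is enabled: 500000000" log line.
#include <windows.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    if (factory->EnumAdapters1(0, &adapter) == DXGI_ERROR_NOT_FOUND) return 1;

    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter.As(&adapter3))) return 1;

    // Ask how much of the local (dedicated) segment may currently be reserved.
    DXGI_QUERY_VIDEO_MEMORY_INFO mem{};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &mem);

    // Try to reserve 500,000,000 bytes, the figure from the PL8 log.
    const UINT64 wanted = 500000000ull;
    const HRESULT hr = adapter3->SetVideoMemoryReservation(
        0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, wanted);

    if (FAILED(hr)) {
        // FormatMessage renders the HRESULT in the OS language, which is how an
        // error such as E_INVALIDARG would appear as "Paramètre incorrect." on
        // a French system.
        wchar_t msg[256] = L"";
        FormatMessageW(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                       nullptr, static_cast<DWORD>(hr), 0, msg, 256, nullptr);
        wprintf(L"Reservation of %llu bytes failed (0x%08X): %s\n",
                wanted, static_cast<unsigned int>(hr), msg);
    } else {
        wprintf(L"Reserved %llu of %llu reservable bytes\n",
                wanted, mem.AvailableForReservation);
    }
    return 0;
}
```

If something along these lines is what PL8 does, then a failure of the reservation/initialisation step, rather than a genuine shortage of memory, would explain why a card with well over 1024 MB free can still be downgraded to CPU only.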

Hi there
The requirements for Windows are as follows.

The requirements are ridiculous: they have jumped from a 2060 to a 2080. Arguably the 2080 is rated at 124% compared with my 3060 rated at 100%, and the memory requirement has been doubled, I believe (or has it gone from 6 to 8GB?). In theory my two 2060s now fall outside the parameters on both GPU version and memory available!

I have tested XD2s on my 3060 paired with a 5900X and the DP XD2s times are slightly slower. I now have one of my old i7s as a spare, so I will try to get that going, test my old 2GB 1050 card with PL8 and see what happens.

However, the point I was making above was that the problems users experienced last year were created by DxO “fiddling” with the software; are we seeing a manifestation of that again?

Plus, if a product has worked fine in the past, is there not an expectation that it will continue to work on the latest and greatest release, albeit maybe not as fast?

DxO had a number of potential alternatives:

  1. Leave DP XD alone and make no changes, but they painted themselves into a corner by releasing XD2 with PureRaw 4.
  2. Leave DP XD in place and add DP XD2(s), then DP XD2s is an option for those able to run it successfully.
  3. Abandon a group of users whose hardware simply cannot meet the specifications, which I believe are actually excessive given the amount of GPU memory that appears to be consumed.

DxO chose option 3, and which users will they disenfranchise next year with PL9, I wonder?

This is the first time I have seen software unable to run on all NVIDIA cards of the same generation.
I understand why the 1xxx series is obsolete, but since the 2xxx series all NVIDIA cards have CUDA, Tensor and RT cores.
6 GB of RAM seems enough to run PhotoLab, no? Or maybe XD2s needs more resources?

Isn’t that the case?

OK … Unbelievable that option 2 is not the obvious choice. What do their analysts think?

@JoPoV As I explained in my first post above, I ran PL6 and PL7 perfectly happily (except for exporting on the GTX GPUs) on my 12GB RTX 3060, my 4GB GTX 1050Ti and my 2GB GTX 1050, at that time running on a 5600G, an i7 4790K and an i7 4790K respectively. My current configuration is 5900X (3060 12GB), 5600G (2060 6GB) and i7 4790K (2060 6GB).

All worked fine even when the “bug” caused some other users problems. Why my 2GB 1050 was “spared” from the problems I do not know.

Running PL6 and PL7 with ‘CPU only’ selected, either by user choice or by software diktat, was a nightmare if the CPU wasn’t at least as powerful as the 5600G: browsing any image with DP XD selected consumed as much CPU as it could get!

The ‘Loupe’ in PL8 is user-activated, unlike the old NR preview window, so it is possible that normal running may not be affected as much as with PL6/7. My test machine is a 5900X, so testing the scenario I have described was simply not possible.

Not as far as I know!

If PL8 automatically downgraded from DP XD2s to DP XD when it detected a potential issue with the GPU, that would be really smart, but I believe that the DP XD part of the new DP XD/XD2s option

[screenshot: 2024-09-19_193331]

is actually there for those with X-Trans RAW images, where DP XD2s has not (yet) been implemented, or so I thought!?

As for the choice of the DxO analysts, well, I have to have something to moan about; unfortunately that is all I seem to do, sorry!

It seems like you are mixing up the requirements and the recommendations. The recommendation is an RTX 2080, but the requirement is any RTX with 4GB, or any GTX with 8GB. That means, in the GTX 1xxx generation, at least a GTX 1070 (since the GTX 1060 comes with at most 6GB).

My GeForce GTX 1650 with 4 GB of dedicated GPU memory works fine - though not super fast.

  • Export to disk for DeepPRIME takes 9-10 secs / image.

Also, I’m finding PLv8 much more stable in the way it works with my GPU (compared with PLv7).

Can you explain what you mean by “more stable”? Did it sometimes fail before PL8?

And although it works fine, I expect this just means that DxO won’t provide any support if it fails, since your GPU is below the minimum requirements.

@Lucas Absolutely my concern, and I agree with your previous post criticising my previous post,

but my worry is that they are edging the requirements up release by release, and it gets a lot worse, as the following tests show.

I finally got my spare i7 4790K working, fitted the GTX 1050 (2GB) and then the GTX 1050Ti (4GB), and tested with the trial version of PL8 and with PL7.6, which was already installed on that machine.

The 1050 tests were a major disappointment and the 1050Ti not much better.

Users of older graphics cards, please ignore my statements in an earlier post about DP XD2s being only slightly slower than DP XD, because my export tests with PL8 on the 1050Ti took about 1.52 times as long as PL7.6 on the same machine with the same GPU! I am still waiting for the DP XD2s tests on the 1050 to finish, because PL8 refuses to use the 1050 and will only use the CPU, i.e. I abandoned the test!

Before I show what PL8 “thought” about my 1050, I did some tests with my 5900X / 3060 (12GB) system and got these results for two runs of 11 copies of the Egypt image from the forum benchmark spreadsheet, using GPU-Z to monitor the GPU.

However, the 5900X has no iGPU, so the GPU is in use just to drive the monitors, and the following shows the memory usage for the “idle” state, for 4 export copies and for 2 export copies.

So over 1,000MB is in use in the “idle” state; that increases to 2,241MB with 4 copies and 1,923MB with 2 copies (which is all I can sensibly run on the i7!).

On my i7 there is an iGPU capable of driving 2 monitors, so the GPU card on that machine has no monitor connections and does nothing except support DxPL.

PL8 Refuses to use a 2GB GTX1050:-

Like @Inezant I was greeted by

[screenshot: 2024-09-19_232006]

Please note that the warning says 1024MB must be available, that this is a 2GB 1050 which should be completely empty, and that PL8 still downgraded the system to CPU only!? Please also note the amounts of memory used during the export on the 3060 (and the later figures for PL7 on the 1050 and the 1050Ti)!?

Limiting the product to a single export thread should be acceptable to users, but completely disabling the GPU is unacceptable in my opinion!!

My attempt to then set the GPU was “accepted” by ‘Preferences’, but you need to restart for DxPL to adopt the new choice, and you are then greeted with the same message again.

A warning would be acceptable, but the user is being forced to abandon ANY & ALL exporting with GPU assistance, for DP or DP XD2s, so that only the CPU can be used, and that is a (bad) joke!!

My concerns about attempting to do anything with images that have DP XD(2s) selected when only the CPU is active, a problem which greeted my tests some time ago, were unfounded, I am glad to report.

PL8 seems to be way more controlled in its use of the CPU, but the ‘Loupe’ appears to depend entirely on the CPU and, on the i7 4790K, is very s l o w: it took 37.49 seconds (but I blinked just before it appeared) to render a small Loupe window on the Egypt image.

Increasing the size to the larger ‘Loupe’ took 36.435 seconds, which was probably really the time for the smaller window! Moving the big ‘Loupe’ to another part of the image took 46.692 seconds!!

At a stroke DxO have well and truly hobbled most of the slower machines!

Using the NR preview window in PL7.6 takes 3.6 seconds, admittedly for a much smaller preview, and of course the “option” to use that window has now completely gone.

PL8 & the 1050 Card:-

PL7 & the 1050 Card:-

PL7 versus PL8 Performance:-

There are no timings for the 1050 card on PL8, but on PL7 we have

Please note the GPU memory used (shown as the highest figure) of 856MB!!!

I then installed the 1050Ti(4GB) card and the performance tests yielded

PL8:-

PL7:-

So we have

PL8 on the 1050(2GB) = Not completed
PL7 on the 1050(2GB) = 14 minutes 43 seconds
PL8 on the 1050Ti(4GB) = 20 minutes 2 seconds
PL7 on the 1050Ti(4GB) = 13 minutes 10 seconds

Just a reminder that this is comparing DP XD (in PL7) with DP XD2s (in PL8), so the times are not expected to be identical anyway.

But I do understand that you may want the option to still use DP XD in PL8 so that it can keep working on your GPU. Anyway, that’s what the trial is for: you can see whether the benefits outweigh the downsides for your usage :slightly_smiling_face:

@Lucas The “good” news is that the kit I used for this test is all “obsolete” kit; my main machines are currently the 5900X (3060), a 5600G (2060) and a second i7 4790K (2060). PL8 is already on the 5900X in a pre-release form and will not go to the other machines until I find a reasonable deal.

I undertook the tests to see what might happen if other users upgrade to PL8 with CPUs or GPUs that are not up to the task.

Yes it is, and the real eye-opener was the appalling performance of the ‘Loupe’, which effectively means PL8 will not be going on my remaining “production” i7, even with its 2060, because the ‘Loupe’ seems to need a lot of CPU, way too much in my opinion.

This release has been geared to capturing a “mythical”/“mystical” group of new users who cannot resist the temptation, and is emphatically a “slap in the face with a wet fish” salute (probably accidental) to a group of existing users whose kit doesn’t measure up, in terms of both GPU and CPU!?

Laptop users cannot just replace the GPU but need to replace the whole machine, and even desktop users may find it hard to upgrade the CPU. My purchase of the 5600G was with an eye to replacing it one day with the 5900X, but that is the last CPU compatible with my existing kit; the next upgrade will be a new CPU, new memory, a new motherboard and possibly a new cooler.

Upgrading the GPU is around the £300 mark for a 4060, but if the ‘Loupe’ is important to me then hopefully the 5600G (just over twice the speed of the i7) will help, although that would only halve the 36 seconds!? On the 5900X the Loupe takes 3.566 seconds to present the large Loupe image.

I have an 8-year-old PC with an NVIDIA GeForce GTX 750Ti graphics card with 2GB of dedicated GPU memory. Up to and including PL7, I have been able to use PL quite happily without issue, with my preference set to use the graphics card (which is flagged as only partially supported). I use DeepPRIME noise reduction on all my images, having decided that DeepPRIME XD didn’t offer me any noticeable improvements but did result in substantially longer export times. For my Nikon .NEF RAW images, with DeepPRIME selected the export times are around 25-28s per image. Although this is pretty slow, I’m happy just setting an export going of all the images I’ve worked on - typically 100-200 in a folder - and coming back when they’ve exported.

With PL8 I noticed the forced switch to CPU only (or exit) when I tried setting my preference for graphics card. I raised this issue with DxO as I couldn’t see why I shouldn’t be able to continue using DeepPRIME. The reply I got was:

Regarding the behavior: This year, we decided to implement memory reservation to address instabilities in our software. This is necessary because DeepPrime and the Reshape tool require significantly more resources than older GPUs can provide, leading to processing failures in previous versions.

However, these algorithms are the only ones that heavily utilize GPU resources. You can still work effectively with PL8 without using DeepPrime and Reshape.

I accept that DeepPRIME XD2 may push the graphics card even further than DeepPRIME or XD, but as noted earlier I’ve happily used DeepPRIME (and Reshape) with all releases of PL, including PL7.

So the options I’m left with are: a) upgrade my PC/graphics card (which I admit is about due); b) use PL8 with CPU only and accept export times of 2 minutes+ per image; or c) just continue with PL7 (and forgo the new features in PL8, some of which I find appealing).

Addition:
I should have added that the PL8 message about an inadequate graphics card says a minimum of 1024MB of free GPU memory is required, yet the Windows Task Manager shows I have more than this.

Is this eye mythical or mystical?

[screenshot: deeprime04]

What can I do with such a small window?
And from what I read, a lot of users not included in the group you describe seem to be starting to appreciate and adopt the “Loupe”.

Options that keep a feature working for low-end machines should be kept (especially if they already exist). But features suited to faster machines, and to machines to come, are needed too.

@SAFC01 I am sorry to hear about your dilemma, although I think we should make an error report to support, because the message displayed states that only 1GB is required yet PL8 promptly downgrades your 2GB card and my 2GB card!?

My additional complaint is that, when I scrutinise the memory being used, 2GB is actually enough for exporting. DxO should change the “overly” dramatic action they have implemented to a warning and allow users to use their own discretion @Musashi.

However, the exports using DP XD2s are taking 1.5 times as long with my (spare) i7-4790K and a 1050Ti (4GB) card.

Both the 2060s I identified in the above post were bought second-hand for about £120 each from eBay; one was a little dusty, but they are not going to be hammered hour after hour doing exports.

My other big concern is the ‘Loupe’ performance!?

@JoPoV Have I ever said that they shouldn’t be introduced?

While I realise that it can be hard to maintain an increasing set of diverse feature options, we are talking about keeping two ways of presenting a similar feature: one was already there and the other has been added; one uses an existing software model that works on machines with lower-specified CPUs, and the other absolutely slays such machines!

The old one should have been retained for those who have lived with that feature for so long and whose hardware is simply not up to the job of handling the Loupe. But I cannot believe that the Loupe requires so much CPU to get the job done!?

The problem is that the NR preview was ripped out and would now be hard to put back in place!

I think that was a very useful exercise. I am currently using an RTX 4060 graphics card, but up until several months ago my card was a GTX 1050Ti, which, although a bit slow, was still quite usable with PL7. I know there are at least several posters here who were using that same card, and having foreknowledge of a potential performance hit is very helpful.

Mark

@mwsilvers The test software was installed on my 5600G which was replaced by a 5900X part way through testing.

Although it occurred to me that I could have got down on my old hands and bony knees and swapped the cards over, I did in fact install the 1050Ti (or the 1050!?) alongside the 3060 in the (then) 5600G, but I encountered problems with both PL8 and PL7, so I abandoned the tests, removed the card, and never attempted to repeat the exercise or replace the 3060 with the other cards, sorry!

Isn’t it only a question of size (resolution) for this Loupe to run as fast as the old way?