I’m currently using an Nvidia T600 in my HP SFF. It doesn’t fit full-size video cards, and I have no plans to change to a full-size desktop. I’m at about 40 seconds for processing one image.
I can get an Nvidia T1000. It is supposedly more powerful, but will it give a significant decrease in processing time per image, say down to somewhere in the twenties of seconds?
Some searches suggest that a T1000 is comparable to a GTX 1050Ti.
In which case, if I tell you that my ancient PC, which has a GTX 1050 Ti, takes about 55-65 seconds to export a 35 MB Canon .CR3 RAW file using DeepPRIME XD, does that help you?
Basically I go for PRIME and, when needed, DeepPRIME XD. The processing times don’t seem to differ much between them.
My files are approximately 9.5 MB NEF and 25-30 MB CR2. By the way, a single image takes longer than each image in a series, as the program can run two processing threads in parallel.