Please hire some new competent talent to program and speed up your software

Runs pretty well on my machine; it could run better, but that's hardly a reason or excuse not to get work done. Curious: did you come here to complain and not much else, or do you have something to contribute? I fail to see the logic of coming to a forum just to complain about how everything else is better. If everything else is so much better, then why don't you use it instead of PhotoLab? Shouldn't you be using the programs and tools that work for what you need or want to do? If you have tried all the others and they work as you want, then just use them and don't complain. Simple. The sense of entitlement is truly astonishing.

Mark, have you ever opened a new picture folder with, say, 1000 RAW files in PhotoLab? Well, try that and come back to us :slight_smile:

PhotoMechanic is made to handle millions of pictures - our other converters are not.

In Capture One there is an elegant solution to all these problems around a monolithic picture-library database: they have long offered "Sessions" instead, and CO creates - if you like - a separate database for every session that can be moved and copied as you please. That is the way I have always used Capture One, and it is how CO was originally designed, with studio photographers in mind. The catalog-style "database" came much later.

Never a 1000 to my recollection. However, when I was still a Canon 7D Mark II shooter, on many occasions I had folders with around 800 or more raw images before culling, and the only slowdown was the loading time in Photo Library, which took an extra minute or two. I never had any other significant performance issues in all the years I have been using PhotoLab on the same 2016 desktop. Of course, with the introduction of DeepPRIME I had to upgrade to a compatible graphics card and acquired a GTX 1050 Ti. These days I use an RTX 4060 with an upgraded power supply.

Mark

FastRawConverter (which is not database driven, but relies on XML files) runs very fast with thousands of pictures in a single folder. And it's $20, not $250. (I own both, as I couldn't live without the tagging in PhotoMechanic; in retrospect I might have passed on the Plus features and just bought the tagging tool rather than the catalogue tool.)

DxO could further split PhotoLab into more parts.

  • Library / asset manager
  • Developer / exporter (not the mutilated PureRaw)
  • Printer
  • ViewPoint (already separated)
  • FilmPack (already separated)

Depending on licensing, the respective functionality would be shown/enabled or hidden/disabled - and I really mean hidden!

What about Nik? Same thing.

And still, you’re refusing to declare your system configuration (?)

1 Like

Post specs? The only response that will get from the same three or four posters here with fast computers is that you need a new computer or faster hardware. Is there a solution besides that? Probably not. Do realize that the average user with an average computer probably has a lot of other expenses to deal with, including photography gear.

PhotoLab is only exceptional at NR, and that's the only reason to use it over other software. CaptureOne is better at just about everything else, including color accuracy, but its NR is about as bad as it gets. Needing time to process NR in PhotoLab while exporting is understandable. It's also slow to start up, and runs OK, but when a mask layer is selected everything really grinds down. Masks and layers have been around for a very long time, so a modern photo editor shouldn't slow down that much dealing with them. That's subpar programming. If general users need higher-spec computers to run PhotoLab than all the other competitors require, then that's the company's shortcoming, and it's on the company to smooth things out on its end, not to ask customers to spend more on hardware. CaptureOne was able to speed up its AI select tool from a 30-40 second wait to about 5-7 seconds with an update and improvement to its programming. I'm sure it was customers bringing up the slow operation that got the company to address it. There's no reason why PhotoLab couldn't speed up its software too.

You can complain all you want and make all the assumptions you wish to make. If it is assistance you want, you need to help us help you. If all you want to do is complain that is your prerogative. However, posting your complaints to an end user site, with no intention of taking advantage of the years of expertise available here, is frankly just a waste of your time, and more importantly a waste of ours. But, hey, feel free to continue to vent. Eventually you will get bored and go away.

Mark

4 Likes

It’s a real shame that the bullying and rudeness continues here.

1 Like

I have no problems with the speed at which PL runs, and my Windows 10 PC is ancient:

  • CPU: Intel Core i5 3570 @ 3.40GHz
  • GPU: 4GB NVIDIA GeForce GTX 1050 Ti
  • RAM: 8GB
  • system SSD: 256GB
  • data HDD: 10 TB

I can also add that the sole effect of installing the GTX 1050 Ti card was to reduce DeepPRIME export times from around 4-5 minutes to around 30 seconds (for 30MB RAW files from a Canon 90D). In other words, my third-gen i5 CPU with only 8GB of RAM is more than man enough to cope with zooming in and out, changing sliders, etc.

So I politely ask again, what are the specs of your PC?

3 Likes

This is not about bullying. He created this thread to complain about multiple problems he has with a number of aspects of PhotoLab. We’ve been trying to get him to give us useful feedback so we can attempt to assist him with them. So far he’s shown absolutely no interest in receiving any help from us, preferring to continue his rant. That is his privilege. He’s clearly frustrated with PhotoLab. But, after multiple failed attempts to get him to be more forthcoming with details so we can try to resolve some of his issues, we are becoming frustrated with him.

Mark

4 Likes

Alec, most software - even viewers - has to do some housekeeping, building previews in the background. Even XnView, which I use sometimes, does this. I use it even though I also have PhotoMechanic, since I think it gives me a better and faster overview of all the metadata on the pictures.

What I'm saying is that they might be very fast if they are in sync, but slower if they are not. Sometimes the previews have to be rebuilt, whether they are small ones optimized for scrolling, viewing and culling, or 1:1 previews in PhotoLab.

I think the example I gave earlier, where it freezes in PhotoLab, must just be some update problem.

I've had enough, Mark. I've reported Aye's comment to me, which is over the top, and I'm also blocking him. As you know, I don't mind to-and-fro in exchanges or different viewpoints, but ultimately we are here to help one another and progress as photographers, both technically and creatively. This guy is not.

2 Likes

CaptureOne defaults to trying to do everything on the GPU after it runs the stupidly long timing pass every few driver updates to see what's faster than the CPU. Even if that weren't the case, by default the image it shows you is a preview of what the changes will look like on the raw... as applied to the raw file's embedded full-size JPEG, so it doesn't actually have to process anything. It's really noticeable if you do something like crank up the clarity and "structure" and zoom in; marvel at the emphasized JPEG compression artifacts.

The problem with this in real life is that everything in the program looks noisier than my camera's actual output, because the preview is already quantized to 8-bit (and potentially has in-camera NR applied, since it's the JPEG preview; I'm shooting a Sony A7R IV and normally keep all in-camera settings such that the output raws stay at 14 bits per channel and don't drop to 12, so there's far more leeway), is probably dithered, and so on. If you're not doing extreme adjustments it's fine, but it's basically impossible to tell how much NR you're applying or what the final output will look like on images shot at ISOs high enough to care about noise, when zoomed out of 61MP. That isn't much of the camera's range, but the extended upper ISO settings or underexposed images need it sometimes.

Lightroom moved to using proxy images by default too, I believe. There are only certain points at which it updates the catalog from the raws; most of the time it's working on the JPEG. Photoshop hasn't been fast since Creative Cloud came about unless you're using a cracked copy. It performs far too many phone-home checks and IPC calls to the eight other running components while writing thousands of log lines per second to the temp directory on the OS drive (by default). All the optimization it had was done over 10 years ago. Some of it was fairly ingenious (the undo system is insane). The people who wrote those optimizations are the ones who got moved to the team that made it run quickly on Apple's ARM tablets; you'll never see them on x86/Windows again.

Also, with CaptureOne, I'm almost certain it's not displaying 10bpc color on monitors capable of it; Lightroom definitely doesn't (although Photoshop does).
I don't know whether DxO will use the full display depth; I'm going to do the trial soon.
I see someone else mentioned the low-quality-preview thing above, but I thought I'd add my observations because it goes beyond that.
If you want to see slow in action, try RawTherapee for a day.

I left Adobe two years ago. First they ignored a Lightroom bug that blacklisted AMD GPUs for three years straight, despite AMD having better OpenCL performance at the time and being the better option unless you needed accelerated 3D rendering in Redshift or something. Then Photoshop updated and changed Ctrl-S to "save to cloud", with the option to change it back hidden. Then, when I finally got the system-info screenshots somebody wanted (again) for the blacklisting issue and went back to their forums, one of their "community experts" had pinned a post about how he had discovered that Windows now supported color management in Explorer - the same expert who had accused me of not knowing how color management worked, and had blamed my screenshots showing that Microsoft had indeed added color-management support to Explorer and Photos first on my converting the profiles, then on my running a high-bit-depth display (???). I didn't have the link handy to the MS developer post from months earlier where they'd actually added it, but I demonstrated it clearly and Google would have found it; I got treated as clueless anyway, and he never bothered apologizing in the original thread.

All of that stacking up in one night, when I just wanted to edit 10 photos or so in Lightroom, was enough to make me wipe all my data on Creative Cloud, delete the forum account, and cancel the Creative Cloud subscription that had been going strong for nearly a decade. I tried to give them feedback in the "Why are you leaving" box, but it was 127 characters max, so I told them they were useless and that I hoped their children got cancer, then ragequit and deleted the main account.

CaptureOne looked like a good option, but they're subscription-only now too, and they've always been slow with new features anyway, so it's less a matter of "capturing customers" and more a matter of waiting for the competition to burn all their bridges. I'm about to start a trial of DxO PhotoLab and will probably buy the Elite version. It doesn't appear to have the stupid edit-by-proxy thing CaptureOne does, and the noise reduction is amazing by all accounts (not just the website's). I still have my perpetual CaptureOne license for fast pano / HDR / HDR-pano merges - it's faster than anything else I've tried at that - but they lost me as a customer because I didn't have enough time to get invested. Most of the other software is either too light on features or subscription-based. DxO is the logical next choice based on image quality, and I'm hoping it will properly handle my old Lightroom sidecars, which hold most of the adjustments across nearly 150,000 images from older cameras. CaptureOne claimed it could import the changes, but that turned out to cover only the catalog; it wouldn't write out XMPs, so I already lost two years of edits once to a catalog + backup-catalog corruption bug.

This thread is kinda meh for me. My computer is insane overkill that Houdini doesn't even stress. Without any clue what camera you're even using, or what "not the fastest" means, the post isn't of much use. I remember that Nikon's stupid partly-encrypted, partly-compressed raw format from the D800 opened and processed massively slower than uncompressed raws twice the size from the Sony A7R IV I replaced it with (I switched because Nikon's autofocus quality had been grossly exaggerated, and I mainly shoot stills). I had a similar experience long ago when I finally replaced the Sigma SD9 with a Pentax K10D because I couldn't stand being limited to ISO 100 anymore. Pentax debayering was roughly 10x faster than the complex matrix math needed to extract something resembling RGB from the ranged, overlapping photon readouts of the three silicon layers that Foveon sensors used. I started taking so many more pictures, even during the day, just because processing them wasn't a chore.
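To illustrate the speed point: Bayer debayering is essentially cheap local neighbourhood arithmetic, which is why it runs so much faster than the per-pixel matrix solve a Foveon stack needs. Here's a crude neighbourhood-average sketch, assuming an RGGB pattern (real converters use much smarter interpolation than this box average):

```python
import numpy as np

def demosaic_box(mosaic: np.ndarray) -> np.ndarray:
    """Crude demosaic of an RGGB Bayer mosaic (H and W even).

    Each output channel is the 3x3 neighbourhood average of that channel's
    samples - cruder than real bilinear interpolation, but the same flavour
    of cheap, purely local arithmetic.
    """
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    # Sample masks for RGGB: R at (even,even), G at (even,odd) and (odd,even), B at (odd,odd)
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True
    masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True
    for c in range(3):
        vals = np.where(masks[:, :, c], mosaic, 0.0)
        cnt = masks[:, :, c].astype(float)
        # 3x3 box sum via padding and shifts - no per-pixel solve needed
        pv, pc = np.pad(vals, 1), np.pad(cnt, 1)
        s = sum(pv[i:i + h, j:j + w] for i in range(3) for j in range(3))
        n = sum(pc[i:i + h, j:j + w] for i in range(3) for j in range(3))
        rgb[:, :, c] = s / np.maximum(n, 1)
    return rgb
```

Every output pixel costs a handful of adds and one divide; a Foveon reconstruction instead has to invert a spectral mixing matrix per pixel, which is where the ~10x difference comes from.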

A 24MP sensor these days is almost certainly a Sony-made BSI sensor, but the raw format makes a lot of difference to whether it takes forever to process.

Of course, you also have to define what you mean by "not the fastest". Most computers don't have anywhere near enough RAM, thanks to companies still shipping 8GB and 16GB machines. The days of that being enough are over, and it has nothing to do with bad programming.

My old junk file server has a 10-core/20-thread i7-6950X running at 4GHz; 128GB of DDR4 in two banks of four channels, which is faster than the memory bus on most modern dual-channel DDR5 setups thanks to the XMP scam and how it interacts with DDR5; a Radeon 6900 XT with 16GB of VRAM; a 2TB NVMe as the OS drive; a 12TB secondary data drive; and 60TB of storage on a hardware SAS RAID6. It gathered most of that over a period of eight years on the cheap. I consider 32GB of RAM a bare minimum for Photoshop to be tolerable with anything else open, but I might use it more heavily than you.

My current machine was built as a 3D graphics workstation. It currently has a 32-core/64-thread Threadripper PRO 5975WX; 512GB of PC3200 DDR4 ECC RDIMMs as 64GB sticks in eight channels; 6TB of various NVMe drives, with the OS on the one with the fastest transfer rate and highest IOPS; a 16TB secondary HDD; an RTX 4090 (24GB VRAM) as the primary video card; a 7900 XTX (24GB VRAM) as the secondary, which needs to be sold and replaced with another NVIDIA card since AMD failed this time; and an unfortunate 1Gb/s network link, which is being supplemented by a direct dual 56Gb/s fiber InfiniBand link to the old machine, which has just enough PCIe lanes left for the card. That insanely fast link will cost me a whopping $150 on eBay, because 56Gb is so obsolete in the server realm it's basically trash - they're up to 800Gb now, with 1.6Tb later this year or early next. But I only need to match the speed of the RAID6, really, and 10Gb won't cut it for that, not with the e-waste RAID controller that's around 4-5 generations old and that no business wants anymore.

Or, of course, there might be a bug slowing it down on your particular combination of hardware and software. Not listing any of it, talking up the competition, and just telling their programmers they suck isn't a good way to get results. If I do the trial and it's slow on this hardware, I'll come back here and say so, because that would indicate something severely wrong. Of course, I don't have the fastest hardware either, but I think it's still passable.

2 Likes

One amusing thing is that I saw this a few weeks after the SideFX Houdini 20.5 update came out. They updated their Karma XPU (hybrid GPU-CPU renderer) and it's now 3-5x faster with higher quality (faster in my experience). The catch is that you need an NVIDIA Ada-series GPU or above, because at least 3x of that speed gain comes from shader execution reordering, which is a hardware feature on those cards. I know that for photography software the 4090 I mentioned seems like ridiculous overkill, but ideally (if I had infinite money, that is) I'd be running four Ada A6000s on this board, which have double the VRAM, and Houdini can distribute the render across all of them. A 4K material swatch would take about 30s on that setup; I have to wait a whole 2m 20s on average. Woe is me. Time to go rant at SideFX for not wasting their time supporting my Radeon card too, just because absolutely nobody in the 3D industry uses them and they're a quarter of the speed rendering the same scenes for half the price. They're good-to-great gaming cards, though, which is what AMD made them for. I should probably complain to AMD for not making them better at everything else for the same price as well.

Did you remember to quit and restart PL after the change? That setting needs a restart.

1 Like

I've got about the same workstation configuration as you - more RAM, because Houdini simulations generally need way more, and slightly more recent hardware (7xxx PRO CPU series).

Could you tell me how long it takes to discover new image directories with more than this forum's official limit of 1000 images, and, key point, whether the hardware is far from fully used while doing this?
My experience is that the hardware is far from fully utilized (CPU, GPU and drive usage), and I don't see where the bottleneck could be, other than on the software side.

I like the 100% CPU + 100% GPU usage Houdini can achieve - no need to turn on the heating in winter - but of course it doesn't have the same requirements as a demosaicer/denoiser.

@Aye

I didn't read the whole thread (it's a little too long), but if you need to disable the GPU with graphics/photo software that uses AI, there is a hardware problem: either configuration (driver version, etc.) or unsuitability (obsolescence).