M3 / Pro / Max support

How much of the hardware in the new chips does / will PL7 support?

It’s not clear how many of the extra GPU / CPU cores PL7 makes use of, not to mention the Neural Engine.

I’m less concerned about the export of individual raw files, but the rendering of high quality previews is certainly a point of difference between my MacBook Air and Pro (M2). Although it’s just a small thing, having to wait an extra beat or two makes all the difference. (It’s why programs such as Photo Mechanic exist; they can produce rapid previews.)

Here’s a good and interesting video comparing some of the hardware specs across generations.

I would imagine the editing window is largely dependent on GPU cores, with a healthy dose of CPU. As such, more of these would be a good thing, and the M3 is a good jump faster than the M2.

Regarding DeepPRIME and DeepPRIME XD, these perform best with the Neural Engine, and I am sorry to say that every chip in the M1, M2, and M3 generations, excepting the Ultra variants (which so far exist only for M1 and M2), has exactly 16 NE cores. That said… the M3 NE cores are significantly faster — 15% over the M2 and a whopping 60% over the M1.

As the owner of an M1 MacBook Pro, I am rather looking forward to a 60% speed bump in DeepPRIME when I get my next laptop early next year!

This is what I get on an M1 MacBook Air (2020) after applying several presets to 60 test images with all of them selected (60 images updated in parallel) and DPL at zoom = 100%:

As we can see, most of the work is done by the CPU with a light sprinkle of GPU added. Whether, and to what extent, the Apple Neural Engine cores are involved, I cannot say…because Activity Monitor does not seem to show any information about them.

Thanks. That’s useful. The neural engine in the M3 chips seems to be the same as in previous ones, so it benefits only from the “3nm” process used to build the Apple chips.

I suppose one question is whether PL uses as many CPU / GPU cores as are available, or whether it tops out at using 4, or whatever.

I suspect that the top-of-the-line M3 chips are targeted at video editing, and that photo editing would probably benefit more from additional memory than from extra CPU / GPU cores, but I have no evidence, of course.

Hi,

Yes, it is the same number of NE cores (16), but they are supposed to be faster.
I am also curious to see a benchmark of M2 vs M3 (Pro), or to get an idea of DeepPRIME (XD) export times with these new machines :thinking:

I should probably just say nothing, because it isn’t a subject I know much about, but I wonder whether this is handled largely below the application level. That is to say, the application (PL7) makes requests to the operating system, and the operating system looks at what hardware is available. If the operating system finds more cores available, it sends the work off to them. When the work is done, the operating system replies to the application.
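
To make that idea concrete, here is a minimal sketch of the pattern using Grand Central Dispatch / OperationQueue (Apple’s standard work-scheduling APIs). It is only an illustration of the general model; it is not DxO’s code, and the function name is made up:

```swift
import Foundation

// Hypothetical per-image work (applying a preset, rendering a preview, ...).
func processImage(number n: Int) {
    // ... the actual image work would go here ...
}

// The app only describes 60 independent jobs and hands them to a
// system-managed queue. macOS decides how many threads to create and
// which CPU cores run them; the app never addresses a core directly.
let queue = OperationQueue()
for n in 0..<60 {
    queue.addOperation { processImage(number: n) }
}
queue.waitUntilAllOperationsAreFinished()
```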

I should have said at the outset that the time spent processing the images is not something that I worry about (except if it eats excess battery). A responsive interface (and, in particular, one that produces high quality previews without missing a beat) is what is most important to me.

I use an M1 MacBook Air from 2020 and its responsiveness is at least as good as that of my 2019 8-core iMac.

Apple Silicon does not miss a beat (whatever that means) but like most other software, PhotoLab has some potential for improved performance and UX. Moreover, we can help performance by not opening folders containing thousands of files and by keeping those files local.

For the most part, I believe that is how it works, if the application uses threads. Watching a CPU graph while manipulating sliders suggests PhotoLab is using all my cores, so it seems it does. The real science comes down to whether faster cores or more cores are better for it. It doesn’t seem to peg my lowly M1 cores, so I would guess more cores would be the greater benefit. But… not scientific.

Yes, as I quoted above…

I would guess that the screen, the mouse, and the keyboard are managed by the operating system (macOS). When you click on a button with your mouse, the operating system looks at the button you clicked on and sends that mouse click to the appropriate application. If you click on a button in your web browser, the web browser gets the signal for that mouse click. If you click on a button in PhotoLab 7, then PL7 is informed of the mouse click. The application then figures out what the user wants done. It can do the work itself, or it can tell the operating system to “spawn a new task” (task = thread). Perhaps the work can be broken up into several pieces; in that case, several tasks are “spawned” by the operating system, and these different pieces of work can be sent off to several different cores. When the work is done (the thread is completed), the operating system sends the completed work back to PL7. PL7 then puts all the pieces (threads) of completed work together and sends a request to macOS to update the PL7 window on the screen.

When you turn on your computer, macOS does an inventory of “resources”. Certainly three of those resources are the screen, the keyboard, and the mouse, but the cores are resources too. This way PL7 and all the other applications don’t have to think about how many cores they have to work with; the operating system, macOS, worries about that. If macOS has found more cores, then the work can be completed sooner.
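
As an aside, any app can ask macOS about that inventory through Foundation. A minimal sketch (whether PL7 queries this explicitly, I don’t know):

```swift
import Foundation

// Ask the OS what it found when it took stock of the hardware.
let info = ProcessInfo.processInfo
print("Processor cores:", info.processorCount)
print("Cores active right now:", info.activeProcessorCount)
print("Physical memory (bytes):", info.physicalMemory)
```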

But it is as zkarj says: the application needs to be using “threads”. Not all work can be broken up into different “threads” (pieces of work), but if PL7 can do that, it has probably been doing it since version 1.0.

I’ve seen the number of threads DPL uses rise over the last few years. Whether this has provided better performance, eased some performance bottlenecks, or just created loads of threads to manage - or a mix of all of the above - I’ll not say. It doesn’t matter either, because we have no means to influence what the app and the OS do while we edit our images.

What I clearly see is that DPL has worked more smoothly and quickly since DPL5, compared to earlier versions…on both Intel and Apple Silicon computers and different versions of macOS.

Thinking about it more, I would guess that processing images is exactly the type of work that can be broken up into “threads”. Take the entire image and break it up into 8 or 16 rectangles. The work that needs to be done on each rectangle is the same, or similar, and not dependent on the outcome of any of the other rectangles. Then spawn 8 or 16 threads. The operating system, macOS or Windows, will organize the work. The more cores the better, but in any case PhotoLab doesn’t care where the work is done. If the machine only has 2 or 4 cores, that’s not a problem, other than that it will take a bit longer.
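
That is exactly the classic tiling pattern. A minimal sketch, assuming a made-up image height and 16 horizontal strips (again, an illustration, not DxO’s actual code):

```swift
import Foundation

let imageHeight = 6000                       // example value: rows of pixels
let tileCount = 16
let rowsPerTile = (imageHeight + tileCount - 1) / tileCount

// Each strip is independent, so the OS is free to run the 16 strips on
// 2, 4, or 16 cores; fewer cores simply means it takes a little longer.
DispatchQueue.concurrentPerform(iterations: tileCount) { tile in
    let firstRow = tile * rowsPerTile
    let lastRow = min(firstRow + rowsPerTile, imageHeight)
    for _ in firstRow..<lastRow {
        // ... per-row pixel work would go here ...
    }
}
```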

There’s definitely more of a pause generating high quality previews on an M2 Air than on, eg, an M2 Max.

The Neural Engine is the key for DeepPRIME. I just tested with PureRAW on my MacBook Pro (M1 Pro), trying the GPU, the CPU, and the Neural Engine in turn, and the NE blows the others out of the water, so the faster the Neural Engine the better. However, I only have 16 GPU cores, so with a higher GPU core count the GPU could perhaps overpower the NE.
So, as zkarj said, we can expect a 15% improvement vs the M2, and 60% vs the M1.
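
For anyone who wants to repeat that kind of CPU vs GPU vs Neural Engine comparison with a Core ML model of their own, the compute unit can be forced through Apple’s public MLModelConfiguration API. This is only a sketch; the model file name is hypothetical, and DxO’s actual DeepPRIME models are not public:

```swift
import CoreML

// Load the same compiled model restricted to a particular compute unit.
func loadModel(using units: MLComputeUnits) throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = units        // .cpuOnly, .cpuAndGPU, .all,
                                       // or .cpuAndNeuralEngine (macOS 13+)
    let url = URL(fileURLWithPath: "MyDenoiser.mlmodelc")   // hypothetical path
    return try MLModel(contentsOf: url, configuration: config)
}

do {
    let cpuOnly = try loadModel(using: .cpuOnly)
    let cpuGPU  = try loadModel(using: .cpuAndGPU)
    let withANE = try loadModel(using: .all)   // .all lets Core ML use the Neural Engine
    // ... run the same prediction against each and time it ...
    _ = (cpuOnly, cpuGPU, withANE)
} catch {
    print("This is only a sketch; no model file present:", error)
}
```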

And using an M2 Ultra with 32 NE cores… does this add even more power?
Well, I guess I won’t have the answer, because that processor COSTS a bit :hot_face:

@StevenL maybe DxO could provide us with some Mac benchmarks?
You could, for example, ask a Mac news site like Mac4Ever to test your software on the Macs they try out :thinking:

I would think so, but that’s the problem… only the Ultra SoCs have more than 16 NE cores, and they cost $$$.

Well, I have a 16" M3 Max with 48GB RAM and 16 CPU cores (12 performance and 4 efficiency). This replaces my 32GB M1 Max from two years ago.

Early observations:

  • The latest Sonoma OS / Safari are extremely energy-efficient (on the M1 as well as M3).

  • So far as performance is concerned, the DxO PL7 interface seems just a little more responsive than with the M1 Max. I use high quality previews, and the slight rendering hesitation when working with 50Mpx RAW files is noticeably shorter. The glitches in the interface (eg, when moving from an unconstrained crop to getting PL7 to drop down the menu of crop ratios) remain.

  • I haven’t compared export times, as those don’t generally impinge on my workflow since they happen in the background.

  • I also have high hopes for the new memory management. The M1 Max was easy to provoke into swapping (particularly with Lightroom; less so with PL).

Worth it? Remains to be seen. I certainly find the workflow a bit smoother.

Also, the real price of a new machine is net of the selling price of the old one, which falls with each new generation, so waiting until next year would increase the net cost of upgrading.

RAW file exports are about 15s, compared to 16s with the M2 Max. But, as I say above, the background tasks are not what matter when working.

Hi, I am looking into changing from a low-spec Windows 11 Surface Pro (no dedicated GPU) to a 2024 15" Air M3 with 16GB and only a 512GB SSD. I normally store all processed images on a WD home cloud and use it like an external wireless hard drive. Normally all data are stored there and not on my Surface Pro. All other programs fetch data files directly from my cloud and save back there. To make PL run faster, I normally keep the latest files downloaded from the camera locally, process them, and when finished save them to the home cloud for any subsequent processing, printing, or viewing.

All you Mac users: do you save and access files from the cloud and/or run PL on cloud-stored files, or do you save all files on the internal hard drive? If so, what is the optimum SSD size you use?