DeepPRIME XD introduces purple overlay/chromatic aberration


I don’t need an M1 Pro 14-inch; why would you even pull in some other hardware as an example?

If you were working professionally, you would want to solve the issue and not cry about it. Using a machine with a good GPU would solve it.

I came here to complain

That’s clear. Let me repeat: Apple changed how the Neural Engine works. It’s very, very difficult to program around Apple bugs. Apple bugs sometimes last for years and cripple major software programs. It’s been like that for many years now.

There are three ways out of this for you:

  1. get a computer with a GPU
  2. use DeepPrime instead of DeepPrime XD
  3. install Monterey instead (I believe DeepPrime XD works fine on Monterey).

Your particular computer can’t run Monterey. Buy one which can.

At least DxO should share the status of this issue

I agree with that, although DxO did advise it was an Apple bug at one point and that we all have to wait until Apple fixes the bug.


I call your “Nonsense” nonsense, along with your suggestion to use plain DeepPrime instead of DeepPRIME XD. Here is a screenshot of the marketing materials from DxO’s own website on why DeepPRIME XD is better. And please, stop telling people to buy new hardware or use different hardware. That’s like telling people who bought a Ferrari to stop driving it and use a Ford Escort because “Gas is gas.”

Please note the 2nd paragraph. It explains in plain English why DeepPRIME XD is better. So please, stop putting people down because they are not “professional” enough.


I’ve argued there isn’t much difference, and even that much of the detail is invented: a kind of advanced sharpening. Let’s move beyond theory.

Let’s have a look at the difference between DeepPrime and DeepPrime XD. First the settings:

Here’s what the image itself looks like (this is the Neural Engine DeepPrime XD version):

Now the screenshot of the difference at 100% with DeepPrime set at 20.

There is more perceptible detail in the DeepPrime XD image. In the DeepPrime XD image, the player looks like he has facial scars running along his cheekbones. This shot is in Austria and not in Mexico or El Salvador. The player does not have facial scars. At anything less than 100%, the difference is imperceptible. That said, this experiment has convinced me to do some further experimentation with DeepPrime XD on my own photography. DeepPrime XD does not look substantially worse, and the additional sharpening adds some welcome grit for sports photography. I’ll see about artefacts.

That said, I cannot imagine that the extra sharpening/invented texture would be make-or-break for a photograph. It’s a how-many-angels-dancing-on-the-head-of-a-pin kind of difference that only other photographers would care about.

Just for fun here’s a comparison between DeepPrime XD with Neural Engine processing and GPU Processing.

They appear identical to me, so nothing is lost by using the GPU engine except for a few seconds.

Export times, both Neural Engine and GPU

I often do exports of reasonably large sets (60 images), where I do multiple exports, so quick export is important to me. Here are the export times for this D850 image. The GPU is an M1 Max with 32 cores.

Kind                           Time in seconds
DeepPrime (Neural Engine)      8
DeepPrime (GPU)                10
DeepPrime XD (Neural Engine)   20
DeepPrime XD (GPU)             26

Clearly there’s nothing wrong with GPU processing. The time penalty for DeepPrime XD is roughly 2.5×, whether with the Neural Engine (20/8 = 2.5) or the GPU (26/10 = 2.6).

M1 Pro times should be about double that, perhaps a little less (processing doesn’t always scale at 100%). I don’t have an M1 Pro to test. Even at double those times, DeepPrime XD GPU export is perfectly viable. We used to deal with export times like those for plain old Prime on most graphics cards.
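To put those numbers in batch terms, here is a minimal Python sketch; the per-image times and the 60-image set size come straight from this post, the rest is plain arithmetic:

```python
# Per-image export times (seconds) measured on an M1 Max (32 GPU cores)
# with a Nikon D850 file, as reported above.
times = {
    ("DeepPrime", "Neural Engine"): 8,
    ("DeepPrime", "GPU"): 10,
    ("DeepPrime XD", "Neural Engine"): 20,
    ("DeepPrime XD", "GPU"): 26,
}

# XD penalty relative to plain DeepPrime on the same engine.
for engine in ("Neural Engine", "GPU"):
    ratio = times[("DeepPrime XD", engine)] / times[("DeepPrime", engine)]
    print(f"{engine}: XD is {ratio:.1f}x slower")

# Total wall time for a typical 60-image export batch.
batch = 60
for (method, engine), t in times.items():
    print(f"{method} / {engine}: {batch * t / 60:.0f} min for {batch} images")
```

So even the slowest combination here, DeepPrime XD on the GPU, is 26 minutes for a 60-image set.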


Here are the remaining output files as well as the NEF for anyone who wants to test speed on their own system.

DeepPrime Neural

DeepPrime GPU

DeepPrime XD GPU

Original image and .dop file


There are so many ways to solve this issue of DeepPrime XD color cast on Ventura, it’s really up to the photographer to find a solution which suits his/her requirements and budget.

  • Downgrade to Monterey (works great)
  • Use GPU processing (ideally with at least an M1 Pro)
  • Use DeepPrime (also superb) until the issue is solved

As anyone who works closely with Apple knows, railing at DxO for deep Apple bugs is howling at the wind, futile and childish.


You should work for DxO’s marketing team. Show me one “major software program” (among today’s modern photo editors) that exhibits this bug and hasn’t fixed it yet. None of the other “professional” photo editing programs I own and use (Lightroom, Photoshop, Capture One, ON1, Affinity Photo 2, and even Luminar Neo) has this issue. They all use the same neural engine on Apple silicon. They run on the same “bleeding edge” macOS Ventura.

This is a DxO problem.


Thanks for the info.

Are you sure each of those applications actually uses Apple’s Neural Engine on Apple Silicon to process its images?

I assure you DxO is not deliberately sabotaging their own processing engine. There are issues with DxO for which they deserve a thorough chastising, but those issues mostly concern not allowing us to use features or the software at all. Examples:

Sadly, with these issues there’s no opportunity for the PhotoLab photographer to work around DxO’s block. In this case, the colour cast with DeepPrime XD on Ventura, the photographer has control. S/he can remedy his or her own situation with the workarounds above.

Hi Maxim,

• in my experience, using the GPU on an M1 machine slows down the whole process, yes, but not by a considerable amount… I guess exporting 500 images from my 5D Mark IV using the ANE will save me about 50 minutes at best…

• sometimes, bugs at the OS level are not the easiest to fix… you just have to wait until the OS manufacturer decides to fix them on its side first… and its roadmap/priorities are not yours…

• installing the “latest & the greatest” OS isn’t always a wise choice. In the past I have been let down more than once after upgrading to the latest OS too early…Nowadays, I’m 10-12 months behind. I will probably install Ventura at the end of the summer (and only if it brings some serious/useful new features to my workflow).
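For what it’s worth, the ~50-minute figure in the first bullet checks out arithmetically. A minimal sketch: only the 500-image count and the ~50-minute total come from the bullet above; the per-image times are assumed round numbers that produce the implied 6-second gap.

```python
# Sanity check of the "ANE saves ~50 minutes over 500 images" estimate.
# The per-image times below are ASSUMED round numbers; only the image
# count and the ~50-minute total come from the post above.
images = 500
ane_seconds = 20   # assumed per-image export time on the Neural Engine
gpu_seconds = 26   # assumed per-image export time on the GPU

saved = images * (gpu_seconds - ane_seconds)   # total seconds saved
print(f"ANE saves about {saved / 60:.0f} minutes over {images} images")
```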

And then, because I’m a curious person, I’d like to ask something else.
From your post, you seem to work on timelapses, which of course require hundreds of stills to make a compelling animation.

Speaking of animation, because video codecs always trade quality against compression, why do you export all your images using XD (or any other denoising method)?
With temporal and/or spatial compression, all those tiny details are going to be swallowed up by the video compression anyway, right? Personally, I don’t see any real (AKA visible) benefit in using XD and/or DP…

Could you please explain to me how XD improves your timelapses? I will gladly hear you out. As I said, I’m a curious person, and having done a few timelapses myself, I’m thrilled to learn new skills from someone more advanced on the subject than me…

Thank you.


I have to admit, I cannot say with 100% certainty that each and every one of those programs I mentioned uses Apple’s neural engine. But I can say that Lightroom and Topaz use it. Without having access to their code, I am pretty confident that each of those programs uses the neural engine in one way or another. Apple designed the neural engine to let processes (whether Apple’s or third-party) offload certain tasks previously handled by the CPU/GPU to the NPU. In this sense, Apple handles things behind the scenes whether an app specifically asks to use the NPU or not.

Complex apps (like image processors) may rely purely on Apple’s NPU to “accelerate” machine learning tasks. That’s the whole purpose of the neural engine. Apple’s NPU does not hijack a process and introduce artifacts into 3rd-party apps; otherwise everyone who uses the NPU would have different outcomes. In DxO’s case, again, Apple will never “fix” a technology as widely used as their neural engine just because one vendor is having problems adapting. Can you imagine Apple fixing this one bug to please DxO and in the process breaking everyone else’s apps?

Topaz relies heavily on Apple’s neural engine for its processing. And they, along with everyone else (other than DxO), aren’t having this issue. Here is a relevant post from DPReview. From the sound of it, even regular DeepPRIME uses the neural engine. But this is just my guess from the post below. Maybe normal DeepPRIME doesn’t use Apple’s neural engine at all, or at least found a way to prevent any interaction with Apple’s NPU:

I agree with you on this. I’m just thinking that DxO doesn’t have the expertise to resolve this and they keep hoping Apple will bend for them. We are also aware that there are workarounds for the current issue and we do use the workaround (again, let me remind you that it was this community that discovered the issue and provided the workaround).

The issue is that when you have hundreds or thousands of photos to process, each second counts. An extra 5-10 seconds per photo may not matter to most. But to others who process lots of photos, it really matters.

I’ve run my test images through DeepPRIME and DeepPRIME XD and found DPXD to deliver less noise and finer detail in high (>10k) ISO images only. I also found that the ominous cast does not appear on all images, but found no “rule” that hints at images with that potential issue.

With all of the above, I can easily use DeepPRIME and save the time and energy that is otherwise wasted with DPXD. YMMV though.


This would suggest that it is an issue with DxO, would it not? Anyway, all of this frustration would (and still can) be put to rest if DxO does one of the following:

  1. Show us that this is 100% an issue with Apple, that there is no workaround for it, and that only Apple can fix it.
  2. Tell us that DxO is actively and aggressively working on a solution and give us an update (other than a tiny footnote in the release notes about working with Apple on this).
  3. Tell us that DxO doesn’t have the expertise to fix this, whether it be their code or the way they used the API to Apple’s neural engine.

It’s been half a year. At least give us a real update on this. So far, DxO has been silent. Maybe that is why people are venting. All it takes is frequent and honest communication to alleviate the concerns and frustrations of your customers. We are not bad people. We have the capability to understand some stuff. :wink:


I’m no expert at all on neural networks, AI, and the neural cores in Apple Silicon, but I doubt that the ANE’s contribution is the same for all the code out there that leverages neural networks in different applications…
Each program has its own code and uses those neural cores differently…

Saying “if other applications are using the ANE without any issue, then the issue is on DxO’s side” is like saying that if a knife can slice perfectly through different kinds of fruit but can’t make a clean cut on a sheet of paper, then the paper is to blame… (while in reality your knife is not sharp enough) :upside_down_face:

Another thing: of course Apple is the elephant in the room and might not care at all about one little software company having issues with their neural cores. But even if Nvidia’s market cap is about a quarter of Apple’s, they absolutely own the global GPU market (95% of the AI-related market)… and a few weeks back, they didn’t forget to mention “improved drivers for DxO users” (screenshot below; OK, they said “DxO Photo”, but nobody’s perfect :-D).

So, I’m skeptical… I would not put all the blame on DxO without really knowing what’s going on behind the curtain…



Fair enough. Now if only DxO would update us on this. Good or bad, give it to us. Just don’t say “We are working with Apple on this.” :face_with_hand_over_mouth:


Yes, skin is a tricky thing for these algorithms; it’s best not to overuse them. Same with ClearView: it destroys skin, but somehow DxO even recommends trying it with portraits. Maybe they mean for it to be used with some masking, though.

I also found that Lens Sharpness is better disabled, or lowered quite significantly, when used with DeepPrime XD. Otherwise it introduces too many details that barely exist in the original image.


Yes, some comments on this issue from DxO would be nice to hear.


And yes, results on DeepPrime XD through Neural Engine are “Great!”

It’s not web compression; it’s just how the processed image comes out, with all these artifacts and purple circles. =)

Almost 4 hours of processing time for 500 4K images for me =) I guess that is because the GPU in the M2 MacBook Air is kinda weak.

As for @uncoy, Mr. “I’ll teach you how not to use features that aren’t working properly, or just buy another computer”, I guess his M1 Max has more GPU cores, which is why there’s not so much difference in processing time between the GPU and the Neural Engine.

I tried DeepPrime XD with Neural Engine processing force-enabled, and I am getting 9 seconds per image, while on the GPU it’s 27 seconds per image, three times slower.
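Scaling those per-image times up, a quick Python sketch (the 9 s and 27 s figures are from this post; the 500-image batch size from the earlier remark about 4K sets):

```python
# Per-image DeepPrime XD export times reported above (M2 MacBook Air).
images = 500        # images in one timelapse batch
ane = 9             # seconds per image on the Neural Engine
gpu = 27            # seconds per image on the GPU

print(f"Neural Engine: {images * ane / 3600:.2f} h total")
print(f"GPU:           {images * gpu / 3600:.2f} h total")
print(f"GPU is {gpu / ane:.0f}x slower per image")
```

The 3.75-hour GPU total matches the “almost 4 hours” mentioned above, while the Neural Engine would bring the batch down to 1.25 hours.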

Photos from the drone are close to 4K resolution, so the idea is that after NR and the whole photo-processing pass they should carry maximum detail and minimum noise going into video editing, especially considering that some detail will inevitably be lost during video compression.
I don’t go for video NR for this reason: it’s hard to control the output and avoid losing detail. Photo NR works far, far better than most NR in video software.
Some detail is also lost during video processing because of the stabiliser and its crop.
I export video at high bitrates so that less detail is lost.
So if I follow this rule, maximum detail in, maximum detail out, I get the final video I need.

I understand that if I were working with high-res photos from a DSLR, for example, this would not be so important, because there is room to play with resolution. But in the case of drone photos, that’s the best way for me.

I think you can guess which photo was made with which NR mode.
Both use default settings.

And I don’t think I need to respond to the advice to buy another computer just to use some of the software’s features.

OK, I see, yes. Small sensors used on drones can be a different story, where “getting those extra pixels” will indeed add something to overall quality in the end. :+1:

BONUS :wink:
I do not have a drone (yet? I must say I’m very tempted) and I was wondering: because their batteries usually last 20-30 minutes max (and you also have to fly back), how do you manage to make timelapses that span several hours? Let’s say I want to make one from sunrise to dusk in a particular location… how can I go “there” with my drone multiple times, keeping exactly the same place / same POV / same altitude, to take all the pictures I need? :thinking:

