Artificial Intelligence in PhotoLab: what do you expect?

AI is the new frontier for software applications.

In some ways, Smart Lighting, Prime, ClearView, and ViewPoint could be considered AI algorithms. But what else does the DxO photographer community need?

1. AI masks
Quick AI masking of parts of the picture via a selection menu. For example:
• sky, to select every part of the sky, including through vegetation and between roofs and buildings
• faces and skin, to recognize and select all faces and skin tones
• vegetation, to separate all vegetated parts of a picture from artificial or mineral ones
• sharpness mask: automatic detection of the areas of maximal sharpness, with an opacity gradient following the transition between sharpness and blur - with, of course, the possibility of inverting the selection to choose only the bokeh zones

With this kind of AI selection, it would be easier to adjust brightness, color, or micro-contrast without long and complex manual masks or combinations of U Point selections.
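
As a rough illustration of the sharpness-mask idea above, here is a minimal numpy sketch (a toy measure of my own, not anything DxO ships; the window size is an arbitrary assumption): local gradient energy is box-blurred so the mask fades gradually between sharp and blurred zones, and `1 - mask` would select the bokeh.

```python
import numpy as np

def sharpness_mask(gray, window=5):
    """Opacity mask in [0, 1]: high where the image is locally sharp."""
    gy, gx = np.gradient(gray)            # crude local-sharpness measure
    energy = gx**2 + gy**2
    kernel = np.ones(window) / window     # separable box blur of the energy,
    energy = np.apply_along_axis(np.convolve, 1, energy, kernel, mode="same")
    energy = np.apply_along_axis(np.convolve, 0, energy, kernel, mode="same")
    peak = energy.max()                   # so the mask has a smooth gradient
    return energy / peak if peak > 0 else energy

# Synthetic example: detailed texture on the left, flat grey on the right.
rng = np.random.default_rng(0)
img = np.full((64, 64), 0.5)
img[:, :32] = rng.random((64, 32))
mask = sharpness_mask(img)                # high on the left, ~0 on the right
```

Inverting with `1 - mask` then selects only the blurred (bokeh) zones, as in the fourth bullet.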

2. AI highlight recovery
Cloudy or white skies in landscape pictures often need a different exposure than the rest of the picture. AI could select them and apply a dedicated RAW conversion curve only to them (such as neutral color with a realistic gamma 2.2 tonality, which is the best generic rendering for highlights). Moreover, AI could balance cyan/magenta drift, or even recreate "realistic" information in overexposed channels in order to give a smooth restored gradient in recovered highlights.

The same AI highlight recovery could apply to artificial lights in night shots or concert pictures. Coloured spotlights appear white because they are overexposed. Maybe an AI algorithm could restore their natural colour despite the overexposure.

3. AI restoration after perspective transformation
Some perspective transformations dramatically reduce the length or height of one side of the picture. By recreating the areas "outside the original picture" with AI algorithms, it would be possible to complete the empty parts of the frame with a "realistic" pattern copied from the closest areas of the picture (sky, vegetation, grass, rocks, concrete ground…). This would be very useful to widen the framing after ViewPoint transformations, or at least to include larger and more numerous parts of the subject or environment from the original picture.

These are three kinds of AI implementation I would really like in PhotoLab, and I expect they could make the difference against PhotoLab's competitors. What do you think about these development perspectives? Do you have any other AI applications in mind?

The question is as much for the EA community as for the DxO staff, if you agree ;-).

Everyone, please do not hesitate to react or add to the discussion.

Kind regards.

Very interesting directions (I'm not sure about the last one) within DxO's areas of expertise.
But they don't get my vote for 2019.

To my mind, PhotoLab is an image processing tool, not a creative art tool; therefore it is intended for photographers, not artists.

Artists look at a scene, correct the perspective in their mind and draw/paint the corrected image.

Photographers either use a large format camera with tilt and shift movements, a digital camera with a tilt/shift lens, or they allow room when framing the shot, knowing that it will lose area in post-processing.

Why look for a software tool that uses artificial intelligence when you can use real intelligence to make the picture before it even gets to the computer?


Maybe it's off topic.

For professional photographers, I think AI could be used in a DAM for:

  • automatically describing and tagging photos. Today there are many AI services with this capability.
  • next (not really AI), in search mode it would be useful for big databases to display a dynamic graph network analysis (like this) showing the most important keywords, so images can be found quickly.

Interesting and ambitious. I am impressed with what AI can do in general - but for what it’s worth, I don’t want it to try to read my mind. So I would envision many such adjustments to be more by-request than automatic. For example, I’ve previously asked for more options for auto-crop based on keystoning and perspective correction. The way it’s done now crops for optimal width but not optimal height. I don’t know if that request is being considered, since the choice of autocorrect algorithms is based on a rule of thumb that can never please everyone.

For #2, I've recently come to appreciate how useful spot-weighted Smart Lighting is in PhotoLab. If you use the tools already available to adjust exposure for highlight recovery, any color information that isn't lost should determine what you see. I understand wanting smarter and more flexible local adjustments, but what additional color correction would you like AI to do?

#3: Do you know of any programs that already do this kind of fill-in well? As I recall, Microsoft’s ICE can do it when stitching photos - but the results I’ve obtained from it aren’t good. On the other hand, I’ve had great success doing this manually with a clone stamp - which PhotoLab now has (nicely implemented, in my opinion). The advantage of this is being able to layer and blend details so that repeating patterns are minimized and transitions appear natural.



Some samples:

Artificial intelligence is not fully operational yet, but it can already do at least 50% of the work of this time-consuming task.
Above all, this work will later make it easier to find photos of a specific subject.

In June, IBM produced this:

The Gartner Hype Cycle for Artificial Intelligence, 2019 predicts that computer vision will be operational within 2 to 5 years.

A bit scary, isn't it?

As long as it is in good hands and under some kind of control :grimacing:
Unfortunately we already see some misuses out there.
Let’s hope our favorite RAW converter will never plot against us :rofl:

For me, AI could be used for:

  • intelligent mask creation
  • help with tagging pictures (this can be done with external software, so it's not my priority here).

The latest version of Luminar, version 4, supposedly has AI tools, one of which allows for quick and easy replacement of skies. Whether using these new AI tools is appropriate for advanced photographers, as opposed to being a quick and easy way to improve images for those with poorer photographic and post-processing skills, I will leave to others.

In one review of Luminar 4 the author compared it specifically to PhotoLab when he said:

"The flagship enhancement is called, simply enough, “AI Image Enhancer.” Using it on a variety of images I found that it does an excellent job of making images more pleasing. Until now, I’ve found that DxO’s PhotoLab had the best-automated image process for 1-click image enhancement, but Luminar 4 definitely provides a competitive alternative. "

Notice he didn't say it was superior, merely that it was a competitive alternative to PhotoLab for that specific type of enhancement. Let's not go overboard with AI. It is not a panacea, and AI, which can be poorly implemented, doesn't guarantee superior results.



For me, Luminar is not a tool for photographers but for post-capture creatives.
Those looking to swap skies, or to pull the colour levers to maximum until your eyes bleed.
For the ones who want their captured subdued autumn mornings to glow as if they were on fire.

If they want to AI-swap a sky or AI-remove a tree, then let them.
I do not want a tool that focuses on that.
Many others might. But I do not.


I agree, although I'm hoping AI technology can also be utilized in ways that will enhance post-processing results for more serious photographers. ON1 also has an AI Auto setting that is a one-click adjustment to images. The results are a much poorer starting point for further adjustments than the DxO Standard preset.



Hello guys,

Well, well, well :slight_smile: , this is one for @wolf to analyse, I guess.

Svetlana G.

It seems to be the current buzzword. I guess that software that doesn't have AI components will be judged by some people as weak or old-fashioned. Adobe, Topaz, Skylum, ON1 and perhaps others have components that are claimed to use AI. That doesn't necessarily make their software superior.



I’d rather use real intelligence instead of artificial. Have you seen the mess AI makes of stuff that gets hit by self-driving cars? Not to mention that a real driver has to be alert at all times in case the AI makes a mistake :crazy_face:


Highlight recovery would be the priority for me among the ones listed. Mask creation comes second.


I think so, too. Hand-operated, not AI. :wink:

Most AI is a better version of the old auto systems - auto masking, auto lighting, etc.

Most of the time the AI version works a bit better because it recognises parts of images. But the algorithm behind it may be no different from, or even worse than, PhotoLab's.

I hope DxO will invest in great algorithms and leave the pixel-editing tools to other software. Maybe layer support, with the option to use tools like Luminar Flex or Topaz, would give the best of both worlds.

Today I use Export to Application for that. The drawback is that I can't easily repeat actions and don't have a non-destructive workflow for pixel editing.

Thank you all for your very interesting comments.

I agree with @Corros. AI performs very well at recognizing patterns, objects or even concepts in a picture. It is this capability that I propose to use in #1, AI masks. But it is only a help for selecting specific areas of the image, not an automatic "creative" or "artistic" transformation. I expect to apply my own custom recipe myself.

For #2, AI highlight recovery, I expect better results in highlight recovery. In overexposed channels, information is lost: there is no difference between the 101% and 120% signal levels. Even if you reduce the signal globally to 70% (for example), conventional interpolation cannot decide whether the resulting level should be 70.7% or 84%. This is exactly what happens when you reduce the brightness with U Points, for example: a sad and ugly grey. But through an AI analysis of the areas near the overexposed zone, completed by a library of similar cases from an adequately trained model, it would be possible to approach a predictive and realistic value for each overexposed pixel. It would never be as precise as a real arithmetic transformation on a perfectly exposed image, but I would be satisfied with this result to improve my photos despite my limited and imperfect settings! More seriously, even with fine-tuned exposure, overexposure is often an inevitable compromise due to the dynamic-range limits of CMOS sensors.
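
The arithmetic in this argument is easy to verify (using the poster's own numbers; the 70% pull is just the example figure):

```python
import numpy as np

# True scene levels, in percent of sensor full scale; everything above
# 100 is recorded as 100, so both pixels clip to the same value.
true_levels = np.array([101.0, 120.0])
recorded = np.minimum(true_levels, 100.0)   # -> [100.0, 100.0]

# A global reduction to 70% maps both clipped pixels to one flat grey...
pulled = recorded * 0.70                    # both become the same 70% grey

# ...while the lost originals would have landed at distinct values,
# which is what an AI predictor would try to approximate.
ideal = true_levels * 0.70                  # ~ [70.7, 84.0]
```

No global curve can separate the two pixels once they share the same recorded value; only extra information (neighbouring pixels, a trained prior) can.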

#4 Noise removal improvement

Even if, with Prime, DPL today has the best noise removal results, an AI approach could be useful to improve them further, as in the proof of concept presented by Nvidia:

DPL's competitors are probably looking at this carefully if they want to get back in the race with DxO on noise removal.

#5 An AI-assisted DAM would be very useful, but I think DxO is not the best-placed competitor for this. Google, or even Adobe, with their cloud services, can take advantage of an inexhaustible database to train their own algorithms.

Do you see other useful AI applications for DPL? AI scene modes, AI makeup for portraits, AI motion-blur removal… Why not? :wink:


Hi all,

"AI" is indeed more a buzzword than anything else. We could perfectly well claim, for example, that our Smart Lighting is AI, as it changes the lighting in your photos "intelligently" based on a couple of decisions made by analysing them. It even has a mode where it optimizes exposure for faces detected in the photos. So why does our marketing team not claim AI for this? Probably they should, but the feature is several years old - it was released even before AI became a buzzword - probably they just did not think about it :wink:

However, AI has improved a lot in recent years, due to some breakthrough research work in "neural networks", which are one specific way to implement AI. This does not necessarily mean that every feature on the market claiming AI these days actually uses neural networks. But the progress in neural networks is the reason why AI became such a buzzword.

Does DxO plan to use neural networks? Definitely. We have some promising prototypes here.

Will our marketing team claim AI when these features are released? Definitely :wink:

What will we use them for? That's a question I cannot answer here, partly because we prefer to keep it secret until it's available and partly because we have yet to decide which prototypes will make it into the products. Remember that we're not as big as Google or IBM, so we have to focus. That being said, we see PhotoLab more as a tool to develop photos than to edit them. Therefore, currently at least, sky replacement is not at the top of our list.

Please go ahead with your wishlist, it will help us in arbitrating where to put our efforts.

Have a great day,


I went to university to do a degree in computer science in the early '90s (I was in my late 30s at the time). One subject we studied was IKBS (Intelligent Knowledge Based Systems) - a forerunner of AI.

We learnt that part of the process of creating knowledge systems was called knowledge elicitation - basically gathering knowledge from experts to feed into the system, to enable the system to make decisions based on that knowledge. At the time, one of the perceived problems was that experts were not going to be willing to contribute their expertise, only to be made redundant by the machine that would replace them.

What several of us had problems with, and often discussed strongly, was the difference between a knowledge base that could be consulted in decision making and the algorithm for an actual decision making process.

All current computers rely on binary logic (either yes or no); something that can prove rather limiting in the real world in which we live. There are often decisions that have to be made to which the answer isn’t a simple “yes” or “no”. Sometimes the answer can be “maybe” or “it depends” and, although we can create “degrees” of yes-ness or no-ness in the same way that we can have multiple shades of a colour in a digital image, the result is always either yes or no at a certain level; it is impossible to represent just a little bit more or less than a precise value.

Also, a neural network needs “training” in how to make the right decisions. But what happens if the decision can be equally valid in either of two possible directions? How do you decide which of the decisions is right? The example of a self-driving car comes to mind once again; should the car avoid a collision with the vehicle in front, which has suddenly braked, at the expense of hitting a pedestrian, or should it “take the hit”, risking injuring its own passengers and those in the car in front, in order to save the life of the pedestrian?

Any “feature” in software that claims to be “intelligent” has to have a knowledge base on which to make its decisions and the algorithm that makes those decisions has to be written by someone - whose idea of yes or no may not coincide with ours.

One of the assignments on my IKBS module involved writing 3000 words on "can a machine possess intelligence?". I discussed the nature of God, the nature of man, quoted from the Hitchhiker's Guide to the Galaxy and postulated the need for ternary or even quaternary logic. In the end, my conclusion had to be no, a machine cannot possess intelligence; at least not as it is commonly understood at a human level.

Do we really need more and more AI in DxO? Maybe, or maybe not, it all depends on the purpose.

I would argue that DxO does a pretty amazing job with features like Smart Lighting and ClearView +. Are they truly AI, or more the result of an awful lot of real intelligence from programmers who know their domain very well?

I class myself as a photographer. Part of my 50+ years of experience meant learning how to expose as perfect a negative as possible in the camera, knowing the end result I wanted, and knowing how I was going to develop the negative to achieve the best possible density and contrast. Then came making the best possible print: drawing up a printing plan for dodging and burning under the enlarger, developing the paper, washing it to remove any residual chemistry, and carefully drying the print in a dust-free environment.

All that takes real intelligence - in other words, practice and making lots of mistakes on the way - also known as learning.

Nowadays it seems that photographers no longer want to learn their craft. Instead they want to be able to take thousands of pictures, let a computer decide which is the best, let a computer examine the picture and decide how to make it better and, one could argue, abdicate responsibility for the finished result to a piece of software.

Personally, I don’t feel that AI is anything other than a marketing buzzword, designed to give people the impression that they don’t have to do any work - it can all be done “by magic”.

No, PhotoLab should not offer things like sky replacement - that has nothing to do with photography and everything to do with a poor photographer who is not willing to admit that not every picture is possible and that some may need a return visit. Here's an example of a large-format image I made several years ago; it took five return trips of 150 miles before I was able to get the sky, the lighting and everything else that makes the picture. Graduated filters were used on the lens at the time and no digital post-processing was involved, apart from removing the dust spots from the scanned transparency.

DxO should continue to do what it does extremely well - providing the best RAW processing software out there. Sure, if putting AI in the blurb makes more sales, go ahead. Should DxO spend inordinate amounts of time and effort dreaming up ideas to justify the term? No. There will always be those who want a Swiss Army knife to do all the work for them. Just continue to be the best single-purpose tool a photographer can find :stuck_out_tongue_winking_eye:


I agree.
With or without AI to achieve this goal.

Thank you @wolf for your interesting input.

One more thing… maybe for the marketing team… just skip the word AI and go straight for Quantum :crazy_face: