The issue with AI in photos has nothing to do with noise reduction; it is about adding false elements, or constructing the photo with AI from the start.
Looking back, the processing of glass-plate photographs could serve as a model. Negatives were cleaned by hand, errors were corrected, and images were retouched to make contrasts more visible and perhaps make a beard look a little more impressive. On special customer request, objects were also removed from the image and the area retouched to be invisible. That would be a violation. Afterwards, artistic photographers colored the images, usually from memory. This is where there is a grey area: memory can be deceiving, and a color can change the impression to such an extent that the photographed reality is altered. Anything beyond that would be a violation.
What seems important to me here is the desire to manipulate reality.
Going by similar systems, it learns to recognise characteristic sensor noise patterns in various types of images, from various cameras, at various ISOs. Just a guess, but that seems most probable.
For DxO PRIME and DeepPRIME theory you may search (Google, arxiv.org, www.ipol.im, IEEE Xplore, ACM, patents, …) for papers by Jean-Michel Morel, Gabriele Facciolo, Charles Hessel, Marc Lebrun, Miguel Colom, Frederic Cao, and others I don’t remember now (some of them, if not all, DxO (co)workers at some point). The actual implementation is a different story, kept secret (there is a presentation by Lebrun about the theory-to-practice experience; I can’t find the link, and it’s different from Lebrun’s presentation mentioned at the bottom). Adobe Denoise AI may have its roots in https://groups.csail.mit.edu/graphics/demosaicnet/data/demosaic.pdf . It’s all speculation, useless for the average user, I think. There’s nothing about “creative AI” there, and in fact it can happen that parts of hair are lost somewhere in the middle (“discontinued” and then “reborn”) after applying AI denoising to extremely high ISO photos.
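Since the actual implementations are secret, here is only a generic sketch of the kind of signal-dependent noise model (Poisson shot noise approximated as Gaussian, plus Gaussian read noise) that learned denoisers are commonly trained on. The `gain` and `read_sigma` values are made-up illustrations, not anything from DxO or Adobe:

```python
import numpy as np

def add_sensor_noise(clean, gain=0.01, read_sigma=0.002, rng=None):
    """Simulate signal-dependent sensor noise: shot noise with
    variance = gain * signal, plus constant Gaussian read noise.
    `clean` is a linear image with values in [0, 1]."""
    rng = np.random.default_rng(0) if rng is None else rng
    shot = rng.normal(0.0, np.sqrt(gain * np.clip(clean, 0, 1)))
    read = rng.normal(0.0, read_sigma, clean.shape)
    return np.clip(clean + shot + read, 0.0, 1.0)

# On a flat mid-grey patch the noise std should be close to
# sqrt(gain * 0.5 + read_sigma^2), i.e. roughly 0.071 here.
patch = np.full((256, 256), 0.5)
noisy = add_sensor_noise(patch)
print(round(noisy.std(), 3))
```

Training pairs are then (noisy, clean) images at many simulated ISOs (i.e. many `gain` values), which is how a network can end up “recognising” a camera’s noise character.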
I would trust “common sense”, whatever that means. After all, human judges will decide, perhaps assisted by hopefully double-checked AI, or ML to be exact. Maybe such rules should have two parts: a) informal intent and b) a best-effort explanation of the technical details, plus a comment for the lawyers that the choice will be subjective. If you overuse just the tone curve, microcontrast, blur, or sharpness (strictly non-AI things), you may be discarded in one contest and get the prize in another – for example, think of the ‘Engrave’ preset, mentioned somewhere in this forum, being used in a photojournalism contest. I’m not sure how to classify AI recovery of data blown out in RAW. Trying to define these things precisely is a bit like security – it’s hard to decide when to stop before going crazy. Of course there’s always the possibility of corruption, but that’s not about photography.
BTW, as a side remark, see http://dev.ipol.im/~ibal/Files/Vannes.pdf dated Jan 2015 (old!) for an interesting remark about a DxO patent being “joyfully violated” by Adobe.
If it is truly blown out, there is no data there, so the only option would be to clone into that area. “Recovering” a blown-out area that has no data means adding something to the image that was not present.
Sometimes what we see in camera as blown out is just the JPEG thumbnail being over-cautious. It’s almost as if your camera software were saying, “That’s cutting it a bit fine – maybe do another shot with a different exposure?”
With the raw file, just move your exposure slider to the extremes in turn and you will quite easily see which areas are truly blown out: they will stay white. Likewise, areas with no shadow recovery left will stay black.
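The same “truly blown out” check can be done numerically rather than with a slider. A minimal sketch, using a synthetic 14-bit array in place of real raw data; in practice the white level comes from the raw file’s metadata (e.g. LibRaw reports it), and the small margin below it is a judgment call:

```python
import numpy as np

# Hypothetical 14-bit raw data; white_level would come from file metadata.
white_level = 16383
rng = np.random.default_rng(1)
raw = rng.integers(0, white_level + 1, size=(100, 100))
raw[:10, :10] = white_level  # simulate a blown-out patch

# A pixel is truly clipped when it sits at (or within a tiny margin of)
# the sensor's saturation point; no exposure slider brings detail back.
clipped = raw >= white_level - 4
print(f"{clipped.mean():.2%} of pixels clipped")
```

A real tool would run this per CFA channel, since one channel (often red in sunsets) usually clips well before the others.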
Take, for example, a blown-out sky and a blown-out sun(set). In each case AI/ML (or some “classic” algorithm that is “intelligent” enough) can in theory reconstruct more or less the original using different hues – blue vs. orange in this example. Not sure how to classify this, but I would be more concerned about whether it looks artificial, which is subjective, alas. Are you aware of any classification of “kinds of generative AI”? It’s a new language I’d like to learn. A quick search shows only popular content, or things less interesting to me.
As a side remark: for some years it was a hot topic for various “followers” trying to prove that software X is better than software Y at highlight recovery. Well, you can prove anything “by example” with a suitable setup – e.g., that Java code is faster than C code. Some people followed, though…
I use my own app based on libraw. It’s like RawDigger, but much more “raw”.
With older cameras, like the Nikon D700 and D4, I had to be careful with highlights in raw. I find newer ones, like the Z8, much safer in ‘A’ mode – perhaps “too safe” sometimes (and I’m aware of the different metering modes, ADL, etc., just to be clear). That’s a different story, though.