*“Following consultations with the relevant PSA services, the two organisations, FIAP and
PSA, have agreed on the following common statement regarding the use of AI in salons
held under their joint patronage. This statement must be included in the regulations of
salons or events applying for FIAP patronage.
Permitted AI-enhanced editing (Note 1 below) and prohibited (Note 2 below).
Permitted AI-enhanced editing: includes editing tools that perform
transformations, enhancements, or corrections based exclusively on the existing pixel
data captured in the author’s original photograph without introducing externally sourced
content.
Prohibited AI editing: includes any AI-assisted processes for synthetic image
generation that incorporate external image data, visual elements, textures, objects, or
scenes not originally present in the author’s photograph are prohibited.”*
How do the AI tools of PL 9 fit into these requirements? Which tools can be used and which not?
Anything present in an image can be enhanced or edited.
Add something and you’re out.
→ Most things in FilmPack are potential no-goes (grain, emulations, frames…)
→ Watermark? Adds external content (no-go, but might be an accepted exception)
→ Dust removal? Should be okay, it’s not externally sourced and only “moves” pixels.
Best to check official statements, examples, and comments.
Note: The above assumes that fundamental principles apply, be it with AI or without.
1/ In my humble opinion, this is the case for DxO PL
Prohibited AI editing: includes any AI-assisted processes for synthetic image
generation that incorporate external image data, visual elements, textures, objects, or
scenes not originally present in the author’s photograph are prohibited.”*
2/ In my humble opinion, this is the case for Adobe PS
Does anyone know how DeepPRIME works? I know it was “trained” on images; DxO made a specific claim about the number of images used to train a recent version compared to previous ones.
So… does DeepPRIME have learning that tells it “when you see a red pixel next to a blue pixel, do this operation” or does it have learning that tells it “when you see a red pixel next to a blue pixel, here’s what we found in other images in this situation”?
Also… if you can only use pixels in the original image, I have a plan. I will carry with me a small card that has a full gamut of colours on it and always include it in the shot. Then I can simply copy whichever colour pixel I need from that area and put it anywhere else on the image to create whatever I want there.
In other words, their attempt to be succinct probably makes it less clear than they intended.
The distinction is not easy to make; in my opinion, there is no clear-cut line. The expectation that no “general” information should be involved is debatable, but is it realistic? If the training is based only on limited data, the resulting system may not be capable enough for the task, which defeats the purpose. On this point, I would speak not of a (learning) interface but of a junction. These AI systems will probably not be able to function without additional image information. If that is the case, the focus should be on appropriately compensating the owners of the relevant image information (the photographers).
This does not refer to explicit generative systems. That is a completely different story.
But even that is a blurred line. If I remove an object from a photo, what takes its place? Sure, if I use clone or repair, then it’s pixels from elsewhere in the same image. But tools like Topaz Photo and Photoshop will “generate” something to put there. If I remove a sheep from a field, what would Photoshop put there? I could say “make it more grass like the rest” and it will… take the grass from a different image? Or does it smartly clone from within the existing image? Do we, heck can we, even know?
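The “self-referring” end of that spectrum can at least be pinned down. A minimal sketch of clone-style removal, where the hole left by the removed object is filled with a same-sized patch copied from elsewhere in the same frame, so every resulting pixel is traceable to the original capture (real tools like Photoshop’s generative fill may instead synthesise content from a learned model, which is exactly the part we can’t inspect):

```python
import numpy as np

def clone_fill(img, hole, source):
    """Fill a rectangular hole (y, x, height, width) by copying the
    same-sized rectangle starting at source (y, x) from elsewhere in
    the SAME image. No pixel value that isn't already somewhere in
    the original frame can appear in the result."""
    out = img.copy()
    hy, hx, h, w = hole
    sy, sx = source
    out[hy:hy + h, hx:hx + w] = img[sy:sy + h, sx:sx + w]
    return out

# Toy scene: uniform "grass" with a bright "sheep" in the middle.
field = np.full((4, 8), 5.0)   # grass
field[1:3, 2:4] = 255.0        # the sheep
repaired = clone_fill(field, hole=(1, 2, 2, 2), source=(1, 5))
```

Under the FIAP/PSA wording this kind of operation looks clearly permitted; the open question is whether a given tool actually works this way or quietly generates the patch instead.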
It’s probably easier to just say “we will be the judge, thank you”. I’ve seen some photo competitions say that if you are selected as a winner, you will be required to submit the original RAW file. They can then decide whether you’ve done “too much” and, honestly, as long as reasonable guidelines are given in the entry rules, I’m totally fine with this approach.
Now, separately to this, I don’t enter photo competitions because they favour getting a heck of a lot right in-camera, and that’s not a level I’m at. In some cases, it also favours more expensive gear and/or extreme amounts of time available. For example, one I looked at recently said (their emphasis) “moderate cropping is allowed”. Even with my 450mm lens on APS-C (690mm equivalent) I still need plenty of deep crops on birds!
I think it’s still based on the photo’s pixel data: “or corrections based exclusively on the existing pixel data captured in the author’s original photograph without introducing externally sourced content.”
All photographs entered into the Singles, Stories and Long-Term Projects categories must be made with a camera. No synthetic or artificially generated images are allowed, and no use of artificially generative fill is allowed. Any use of these tools will automatically disqualify the entry from the contest.
However, the use of smart tools, or AI-powered enhancement tools is possible within the contest rules, as long as these tools do not lead to significant changes to the image as a whole, introduce new information to the image, nor remove information from the image that was captured by the camera.
Some examples of tools where limited usage may be allowed are Denoise, automatic adjustments (e.g. on levels, colors, contrast) and object selection for local adjustments. These are permitted up to a certain extent, which is to be determined by the contest organization and global jury. Tools that do immediately breach the contest rules are all AI-powered enlarging tools such as Adobe Super Resolution and Topaz Photo AI. These tools are based on generative AI models that introduce new information to enlarge and sharpen images.
The rules continue with a video of examples, including an example of denoising that is not allowed.
What you are referring to are the rules for press or journalistic photography, which are understandably strict, as this kind of photography is recording historical fact to inform people of real events.
Permitted AI-enhanced editing: includes editing tools that perform transformations, enhancements, or corrections based exclusively on the existing pixel data captured in the author’s original photograph without introducing externally sourced content.
Note the wording carefully.
based exclusively on the existing pixel data captured in the author’s original photograph
This means that you are free to mess around with anything in the original image. If you want to clone or move something within that image to somewhere else in that image, that is allowed.
AI noise reduction is based totally on the data from the original image, as long as it searches for alternative data from within the original image and doesn’t use external generative tools that draw from other images, either on the computer or on the internet.
DxO’s NR is totally sourced from within the original image.
Prohibited AI editing: includes any AI-assisted processes for synthetic image generation that incorporate external image data, visual elements, textures, objects, or scenes not originally present in the author’s photograph are prohibited.
Note the wording carefully.
AI-assisted processes for synthetic image generation that incorporate external…
Once again, as long as the image is only ever self-referring, you are OK.