I would like to discuss whether we can reach an agreement in the forum that photos published here that have been edited or created using generative AI should be labeled as such. It would also be interesting to know in what form AI was used. There are already gray areas: denoising and upsampling are borderline cases. In my opinion, these would not currently be a reason for labeling, but they would have to be monitored. Removing objects or people with convenient tools, however, already falls into this category. Where exactly is the limit? Will it be pushed further? Is a real photo more valuable than one edited with generative AI?
I am completely in favor of that, and ideally, the post should identify the tools that were used. To be honest, I personally wish people already identified the gear that they used to create, process, and post their photos.
I’m not sure if these images should be “labeled”, but I’d prefer to read in the post how the image was created.
I have tried some of these systems and asked them to create an image of an Indian ox-cart on the streets of New York, and the software produced a plausible image (which included technical mistakes, but did look like a "snapshot"). We had a big discussion about this in India, and the question that emerged was: "Will people trust photographs even less than they already do?"
I can think of no reason not to do what you suggest for photos published in these forums. We should encourage people to disclose when and how they use generative AI.
Follow-up question: what do we do about people using a combination of AI and PhotoLab, where the computer, not the photographer, creates the final image?