First of all, I apologize for my poor English, but writing here is the best way to be read and, hopefully, to receive more input.
I have a question about the adjustments that DxO generates. Is there a way to get a value for the corrections applied to each of the three RGB channels (a relative percentage, or a raw value that can be used later to work out the difference)? My question relates to the new certification requirements (IRCC: https://www.ircc.photo/) that are meant to ensure animal images are not manipulated. Excessive abuse will inevitably lead to future restrictions.
To probe the limits of the IRCC criteria, I applied identical adjustments to two photos: one of a pelican in flight, so against the sky, and one of a pelican on the sea (so the average RGB colour is not the same). One received an A, the other a C (too much modification of the R channel). Does anyone know why? PS: DeepPRIME at level 40 does not appear to prevent certification. That's fantastic news.
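As far as I know, DxO does not expose such a number directly, but a rough proxy (not necessarily what IRCC measures; that method is not public) is to export two renders of the same RAW, one with no corrections and one with your adjustments, and compare the channel means yourself. A minimal sketch in Python with NumPy, assuming both exports are 8-bit RGB arrays (e.g. loaded with Pillow):

```python
import numpy as np

def rgb_correction_summary(before, after):
    """Per-channel mean of two HxWx3 uint8 renders plus the
    relative change in percent ((after - before) / before * 100).
    Only a rough proxy; the IRCC metric itself is not public."""
    b = before.astype(float).mean(axis=(0, 1))  # mean R, G, B before
    a = after.astype(float).mean(axis=(0, 1))   # mean R, G, B after
    return {ch: {"before": float(b[i]), "after": float(a[i]),
                 "change_%": float((a[i] - b[i]) / b[i] * 100.0)}
            for i, ch in enumerate("RGB")}
```

This also suggests why the sky and sea shots can grade differently under identical settings: the same adjustment applied to images with different channel means produces different relative shifts per channel.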
Thank you for your time and consideration, and have a pleasant day.
Good morning. This really isn't the domain of an image editing application, and I'm not sure how it would even be possible, since every RAW processing app applies its own algorithm when demosaicing RAW data into a visible image.
Wow! This sounds like an organisation that wants to impose its own idea of excessive manipulation on the world of photography, possibly without regard for the realities of taking and processing images.
Whilst obvious manipulation like copy/pasting and image blending should definitely come under scrutiny, it is hard to see where any clear line could be drawn for what counts as manipulation of colour, tonality, sharpness, etc.
Things like the colour and tonality of a bird's feathers are influenced by the origin, type and direction of the light striking it, and have nothing to do with manipulation. In any case, what do these self-appointed gods of “integrity” say about converting colour RAW images to B&W, or is even that not allowed?
They talk about excessive manipulation of perspective but how would they regard an image taken on a large format film camera with movements? The camera itself is manipulated to correct geometric distortions and the scanned negative/transparency is usually a TIFF file with no manipulation of geometry necessary. Or are scanned film photographs not allowed because it is not possible to provide a RAW file?
All in all, this feels like the dreaded French Académies, which so successfully stifled advances in the world of painting by imposing “normes” on what made a painting good or not, despite the obvious talent and skill of the artist.
Given the variability of lighting when taking a photo, I doubt you are ever going to be able to guarantee approval, even if you leave an image totally untouched. In the example of your pelican, the only way you are going to satisfy such conditions would be to measure the IRCC-approved pelican feather colour and tone, taken at the IRCC-approved colour temperature, and apply that to all your photos of pelicans, which surely counts as manipulation. Or take the example of foxes, which are normally reddish but can be black: are they going to fail assessment?
I have to apologise for my “rant”, but organisations like this are not designed to take account of the real world, where light is ever changing and wildlife is ever evolving.
This is a real question, and for this photographer a real certification issue, so can we try to find an answer rather than just telling him not to use IRCC?
In another forum I suggested that he look inside the .dop sidecar, where PL stores the RGB channel modifications. As far as I understand, for each colour channel each point of the modification is stored as an X/Y coordinate on the curve graph, from 0 to 1 with the neutral at 0.5. Question: is it possible to translate these coordinates into a percentage relative to the neutral point, e.g. if 0.5 = 0%, then 0.6 = +20% and 0.45 = -10%? Maybe I am totally wrong, I know.
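If the stored values really are on that 0-to-1 scale with the neutral at 0.5 (an assumption; I don't have documentation of the .dop format), the conversion you describe is just (value - 0.5) / 0.5 * 100. A tiny sketch:

```python
def coord_to_percent(value, neutral=0.5):
    """Convert a 0-1 curve coordinate into a percentage relative to
    the neutral point: 0.5 -> 0%, 0.6 -> +20%, 0.45 -> -10%."""
    return (value - neutral) / neutral * 100.0

for v in (0.5, 0.6, 0.45):
    print(v, "->", round(coord_to_percent(v), 2))
```

Whether a percentage on the curve graph corresponds to what IRCC actually measures on the rendered pixels is a separate question, of course.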
Unfortunately, the problem is that RGB values are not just stored in one place. You have the colour wheel, saturation, vibrance, etc., and that's just some of the global adjustments. Then you have local adjustments for colour temperature, tint, saturation, vibrance, etc., not to mention the undocumented effects that Smart Lighting and ClearView Plus can have, or the effects of the Channel Mixer, Colour Filters and film emulations.
All these things and more can affect the RGB values of a pixel.
And, as I already said, with different colour temperatures and directions of lighting, combined with nature's ability to reflect infrared and ultraviolet, the whole idea of an “automated” assessment of “fakery” is never going to be reliable.
Measuring colors depends on many things and as long as the IRCC does not sufficiently specify how they measure (tools, methods, references…), you’re left with trial and error.
On Mac, there is a “Digital Color Meter” application that measures the RGB values of whatever screen pixel you sample. Depending on how you set the tool, values will differ from setting to setting…
Even if you get a certificate for a JPEG, it does not mean that the people who look at the certified image will see the colors that you saw on your screen… unless each image came with a color reference, something that is used in repro work but seems to be excluded by IRCC.
Thank you very much for your encouraging and constructive response. I didn't take the time to respond to the individuals who told me not to use IRCC; if I had listened to them, I wouldn't have written this post.
I tried what you suggested, looking at the effect of a light variation, as well as of a single colour variation, in the .dop file… and, as Joanna mentioned, it's not as simple as it seems.
I wrote this post thinking that perhaps I had missed something or had not taken the right approach.
The essential point is that, in the end, we have to avoid using too many controls to optimize colour and light, because we would modify too many RGB values per pixel. So the goal is probably, as you proposed, to follow the modifications stored as X/Y coordinates when only two or three specific controls have been changed, and see the results in the IRCC report.
Anyway thank you very much for your answer.
You're right… the only data I have in the end is the standard files I attached to this post. This is the final report, but they also produce it for each of the nine sections of the photo. This is what I generally did with the trial-and-error method, but certification is not free in this case… This is why I asked whether someone had already thought about this specific point.
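For the nine-section report you mention, a self-check before paying for certification could split both renders into a 3x3 grid and compute the mean per-channel shift in each section. A sketch in Python with NumPy; note that the grid split and the percent-of-255 scale are my assumptions, since the actual IRCC method and thresholds are unknown:

```python
import numpy as np

def channel_shift_per_section(before, after, grid=3):
    """For two HxWx3 uint8 renders, return the mean per-channel
    difference (after - before) in each of the grid x grid sections,
    expressed as a percentage of the full 0-255 range."""
    h, w, _ = before.shape
    results = {}
    for row in range(grid):
        for col in range(grid):
            ys = slice(row * h // grid, (row + 1) * h // grid)
            xs = slice(col * w // grid, (col + 1) * w // grid)
            diff = after[ys, xs].astype(float) - before[ys, xs].astype(float)
            results[(row, col)] = tuple(
                float(x) for x in diff.mean(axis=(0, 1)) / 255.0 * 100.0)
    return results
```

Sections whose R shift stands out (as apparently happened with your C-graded sky shot) would then be visible at a glance.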
Anyway, thank you for your help.
I'd like to understand what “RGB corrections” means and how they can measure this.
What if your image is not perfectly exposed when shot?
What if you use different demosaicing software?
In fact, what does “uncorrected RGB” even mean?
How do they define it?
Now, this is interesting, since sensors are usually RGBG devices whose data has to be demosaiced in order to turn it into a viewable image, and different converters produce different results. I wonder whose software they would use as the standard?
So “they can’t share informations I asked”.
I tried to read the links provided, but it seems the recipe is a secret potion. Unless I missed something, but I don't think so.
If anyone finds more information than I did, I'd be interested to hear it.
A good test would be to send them several RAWs plus JPEGs processed with several demosaicers, without any adjustment applied. From the results we should be able to work out which demosaicer they use, because they have to use one (unless they have developed some AI thing that is supposed to be smart).
Yes, without a doubt… but the test isn't free ;-). According to their website, the software they use is based on medical image analysis research…
Anyway, if you need a tool for this, you can use ImageJ, which is a free image analysis application.