Working in 3D rendering and compositing software, as I do, is a visual activity too.
But those programs work internally in linear space, because applying curves implies a loss of data. Curves are applied at the end of the process, which is either: displaying on screen (while working on the images in those programs) or saving the images. At both of those stages it is also possible to see what happens without a curve applied (so it is possible to view a linear display or to save linear images).
Why?
Because when a curve (gamma or another “visual” curve) is applied, there is a loss of data. So curves are applied at the end of the process:
either on the fly, when displaying work in progress in the software,
or when saving the images.
This is how color management works in software that keeps data loss to a minimum.
PS: this also has other advantages for the functions applied while working on those images.
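Roughly what I mean, as a small Python/NumPy sketch (the 2.2 exponent and the function names here are just placeholders for whatever curve the software really uses):

```python
import numpy as np

# Internally the image stays in linear float, at full precision.
linear = np.random.rand(4, 4, 3).astype(np.float32)  # stand-in for a linear render

def apply_display_curve(img, gamma=2.2):
    """Encode linear values for display/saving; done only at the very end."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

# All processing (exposure, compositing, blurs...) happens on the linear data.
processed = linear * 1.5  # e.g. a simple exposure change, still linear

# The curve is applied on the fly for the viewport...
preview = apply_display_curve(processed)

# ...or when writing the final file; the linear data itself is never overwritten.
save_linear = processed                                                   # float/EXR-style: no curve
save_8bit = np.round(apply_display_curve(processed) * 255).astype(np.uint8)
```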
I’m not sure a curve means a loss of data. Data is corrected but not lost.
I don’t know why one would place the gamma correction anywhere else in the editing sequence. It’s the gamma correction for the human eye. The one for the monitor is taken care of by the OS.
I think modifying data with a curve involves losing original data (where the data is “compressed”) and adding interpolated data (where the data is “stretched”).
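For example, a quick NumPy sketch (assuming 8-bit values and a plain 2.2 gamma, only to illustrate what I mean by “compressed” and “stretched”):

```python
import numpy as np

values = np.arange(256)                                        # every possible 8-bit code
encoded = np.round(255 * (values / 255) ** (1 / 2.2)).astype(np.uint8)

# "Compressed": several different inputs land on the same output code.
print(len(np.unique(encoded)))         # fewer than 256 distinct codes survive

# "Stretched": going back has to reinvent values, and some originals are gone.
decoded = np.round(255 * (encoded / 255) ** 2.2).astype(np.uint8)
print(np.count_nonzero(decoded != values))   # some original values cannot be recovered
```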
Gamma correction doesn’t lose data. The range stays the same, from 0 to 255. The values are just corrected for the difference between the linear response of the sensor and the non-linear response of the human eye.
In the editing chain, pixel values are constantly being changed. They are first created from the sensels, then comes the gamma correction, the conversion to the working color space, the editing itself, the conversion to the monitor’s gamut, and then also the gamma correction for the monitor. And maybe more.
0 to 255 is a big loss of data… hopefully sensors provide more than 8-bit values per channel when demosaiced.
Anyway, what’s lost when applying a curve is not the quantity of data. Whether the base is 8, 16 or 32 bits, or FP16 or FP32, the quantity stays the same. What’s lost is the true information, if you need to go to another color space or if you need to modify that data further.
Am I right or wrong? Is the process reversible without loss (of true data)?
What I’m nearly sure about is that some transformations work “well” in linear space but not in a “modified” space.
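An example of what I mean: averaging two pixel values (as in blending, resizing, blurring…) gives a different result depending on whether it is done on linear or on gamma-encoded values. A rough sketch, again assuming a plain 2.2 gamma just for illustration:

```python
a_lin, b_lin = 0.1, 0.9                       # two linear intensities

# Average in linear space, then encode for display:
avg_linear = ((a_lin + b_lin) / 2) ** (1 / 2.2)

# Encode first, then average the gamma-encoded values (what naive 8-bit editing does):
avg_gamma = (a_lin ** (1 / 2.2) + b_lin ** (1 / 2.2)) / 2

print(avg_linear, avg_gamma)   # roughly 0.73 vs 0.65: the two results differ noticeably
```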
Info or data is what the sensor captures, analogue. Dividing that info into more or fewer pieces in the digitizing process doesn’t mean that you get more or less data. Storing a 12-bit pixel in a 16-bit pixel doesn’t add data. More pieces only means that editing goes more smoothly.
It’s just a way of looking at it.
Yes, only cinematographic sensors can benefit from 16 bits (way more dynamic range than photographic sensors).
And yes, 12 (or 14) bits stored in a 16-bit container should not lose data.
But 12 bits stored in an 8-bit container obviously loses data.
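A quick sketch of that, assuming a straight rescale between the two ranges (just for illustration):

```python
import numpy as np

values_12bit = np.arange(4096)                                   # every possible 12-bit code

# 12 bits into a 16-bit container: everything fits, nothing is lost.
in_16bit = values_12bit.astype(np.uint16)
print(np.array_equal(in_16bit, values_12bit))                    # True

# 12 bits into an 8-bit container: 4096 codes squeezed into 256.
in_8bit = np.round(values_12bit / 4095 * 255).astype(np.uint8)
back = np.round(in_8bit.astype(np.float64) / 255 * 4095).astype(np.uint16)
print(np.count_nonzero(back != values_12bit))                    # most values cannot be recovered
```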
But my English is probably lacking, because I don’t see what this has to do with the subject we were discussing. I must surely be expressing myself very badly.
EDIT: and a very bad illustration of the case, in fact…
English isn’t my native language either.
But I object to that use of the word data, which is common on the internet. Data is the whole pixel, only that. The smaller parts are only calculation parts.