I think I read that even if you change the order of the adjustments, the engine renders the image in a predefined order. I'd like to know what that order is, because I think it would be better to organize my adjustments with that in mind. Thanks in advance!!
Hello and welcome to the Forum!
As for your question, let me ask @wolf to reply.
…we’d have an issue if the order of applying changes differed between customizing and calculating output images.
If there were a difference, we’d have a non-zero-probability of ending up way off WYSIWYG.
Thanks for the reply. I'm just asking what the actual order is: for example, is the Contrast adjustment performed before the Tone Curve or Selective Tone? Is ClearView performed after Exposure Compensation? And so on.
Knowing the exact order of the algorithms in the engine is not really necessary to use them properly, and on top of that documenting all of them would take time, as there are a lot of them. But since your question seems mostly to focus on the handling of light, here is a short partial answer: vignetting is corrected first (as it depends on calibrated data), then exposure, smart lighting, selective tones, contrast, ClearView, microcontrast, and finally the custom tone curve. Hoping this answers your question, have a good day!
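That fixed-order idea can be sketched in a few lines of Python. This is only an illustration of the principle described above; the stage names and functions are assumptions, not DxO's actual internals.

```python
# Hypothetical sketch: a parametric raw engine applies the enabled
# adjustments in a fixed, hard-coded order, regardless of the order in
# which the user set them. Stage names are illustrative only.

PIPELINE_ORDER = [
    "vignetting",      # corrected first (depends on calibrated lens data)
    "exposure",
    "smart_lighting",
    "selective_tones",
    "contrast",
    "clearview",
    "microcontrast",
    "tone_curve",      # custom tone curve comes last
]

def render(value, adjustments):
    """Apply only the enabled adjustments, always in PIPELINE_ORDER."""
    for stage in PIPELINE_ORDER:
        if stage in adjustments:
            value = adjustments[stage](value)
    return value

# The user enabled ClearView before Exposure, but Exposure still runs first:
adjs = {"clearview": lambda x: x + 1, "exposure": lambda x: x * 2}
print(render(10, adjs))  # exposure first (10*2 = 20), then ClearView (+1) -> 21
```

Because the loop walks `PIPELINE_ORDER` rather than the user's dictionary, enabling, disabling, or reordering adjustments never changes the sequence in which they are computed.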
Thanks, that was mainly what I needed to know.
Thank you for this piece of info. I think it should be in the manual somewhere. I hope you don’t mind another question…
What I’d like to know is which tools operate before demosaicing (my guess would be Raw White Balance, PRIME, Chromatic Aberration?), and do the colour manipulation tools come before the tonal tools? I’m specifically interested in when the Color Rendering tool takes place. Is it right after demosaicing? – in other words, do the tonal operations like Exposure Compensation happen in a linear space or do they take place after the base curve used by the Rendering profile?
Hi. Sorry for the late answer and… wow, that's a very precise and relevant question; I'm really amazed by the knowledge of our customers! You nailed it. Applying white balance before demosaicing makes sense, because a good demosaicing algorithm should avoid creating wrong colors, so it helps to define what is grey beforehand. Noise is structured by demosaicing, so it's more efficient to remove noise before demosaicing, and that's what PRIME does. Removing chromatic aberration before demosaicing also makes perfect sense, because trying to interpolate a color channel from other color channels that are shifted by chromatic aberration is a mess. And so on. Light adjustments are mostly (ClearView excluded) done in the linear raw sensor color space (before color rendering), to come as close as possible to how the picture would have looked if you had simply increased or decreased the light in the scene.
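To illustrate the "linear space" point: on linear sensor data, exposure compensation is just a multiplication by a power of two, which is why it behaves like changing the light in the scene. A minimal sketch (the function name is an assumption, not DxO's API):

```python
# Exposure compensation in linear space: scaling by 2**EV mimics changing
# the amount of light hitting the sensor. Illustrative sketch only.

def exposure_comp(linear_value, ev):
    """Scale a linear sensor value by 2**ev (ev = stops of exposure)."""
    return linear_value * (2.0 ** ev)

print(exposure_comp(0.25, 1.0))   # +1 EV doubles the linear value -> 0.5
print(exposure_comp(0.25, -1.0))  # -1 EV halves it -> 0.125
```

Applied after a base tone curve instead, the same slider would no longer correspond to a physical change of scene brightness, which is the reason for doing it before color rendering.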
Thank you, @Benoit, for getting back on this.
Coming to this thread from another one, I'd like to know some more. Are you referring to the 'develop' part or to the edit part? By the develop part I mean the basic routine PL performs to get an image.
I'm asking because editing is a visual job: what I decide to change depends on the current state of the image. If I change an earlier edit, say one I made three steps before, I change that state of the image, because two more adjustments have been made since.
Unlike other RAW editing software, DxO doesn’t have an obvious developing stage. You can just make any adjustments you need in any order you want.
I’m not sure why that would be relevant since all edits appear to be refreshed whenever you make any change.
Let's assume I'm working on a picture and have done edit1, edit2, edit3, edit4. Will the result be the same if I had done edit4, edit2, edit3, edit1? And would I have made the same decisions based on what I saw on the screen?
I assume when you write refreshed you mean recalculated.
There's an edit list, so there is also an edit order. You can step through it with Ctrl-Z. Playing with that, you can see it's collecting keystrokes/mouse movements. See this thread: Photolab 3 keylogger?. Every time I move a slider, it's added to the edit list. Playing with the slider three times results in three edits, without leaving that function.
What you see in the viewer is what you will get, within the limits of the viewer (e.g. you can't preview PRIME NR), so the order in which you make the adjustments won't affect the result. The adjustments will be applied in the optimum order, as described earlier.
And remember, you can copy and paste global and local adjustments from one image to another, and that doesn’t affect the result. You can turn adjustments on and off and that doesn’t affect the final result. The adjustments are always applied in the same order, regardless of what order you add them.
Thank you for this answer. Now I understand why PRIME doesn't react to the ISO value in the EXIF data (as most other raw developers do).
Before demosaicing, the ISO value is just a marker for the brightness level in the exposure data, giving the raw viewer a reference level.
After demosaicing, it is used to set the luminance value (exposure) in the viewer, while the data is still linear.
And if I understand correctly, the exposure (brightness) tools (Selective Tone, Smart Lighting, ClearView, Tone Curve, Contrast) come before the color rendering step?
And @sankos wrote about the generic rendering that it is a neutral tonality over which the color rendering (vibrance and saturation) and also the DCP camera-specific color profiles are laid, added to the linear line to create the tone curve?
And is the color saturation protection slider a gamut compressor?
(I'm trying to understand why my Huelight G80 DCP profile often triggers the clipping warnings; when I do, I can advise or inform the person who makes this camera DCP for DxO so he can fine-tune his profile.)
Thinking about this topic, I ran two Google searches for "lightroom/rawtherapee order of operations".
It seems to me that both of these programs (and probably every other "parametric" raw developer) have a fixed order of execution of operations.
Although RawTherapee is not my favorite tool, this explanation might be useful:
The order of the tools inside RawTherapee’s engine pipeline is hard-coded, so from that point of view it does not matter when you enable or disable a tool. However some tools can make a large impact on other tools, e.g. changing exposure may require you to re-adjust color toning, and some tools may require plenty of CPU power to calculate the preview making updates of the preview from then on slow, so it is for this reason we suggest you stick to this general order of operations:
Of course, in a pixel editor like Photoshop the order of the edits does matter more.
That's what I think: there's always an order. Some of it is hard-coded in the converter itself, operating on the raw data, then global edits on the RGB raster image, then local adjustments. It's logical. I do my editing based on my visual perception; if you change the sequence, the image won't be the same.
LR has a history list. You can go back to an earlier edit and see the image at that point, but as soon as you change that edit, all the later edits are gone. Capture NX works the same way, unless there's a constant recalculation of the image. I know that in Capture NX you could choose to do that, but the calculation took time. I'm speaking of some years ago now.
In Darktable there's a history list too. This one even shows the edits that were done during the conversion.
A parametric editor is no magic. Like CAD, it saves the edits/routines together with the parameters used. They're based on how I want to change the image I see. Those parameters can't be changed.
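The CAD analogy can be sketched like this: the editor stores parameters, not pixels, and re-renders the whole fixed pipeline from the raw data after every change. This is a hypothetical sketch under that assumption; the tool names and formulas are illustrative, not any real editor's code.

```python
# Hypothetical sketch of a parametric edit list: the editor stores
# parameters, not pixels, and re-renders the fixed pipeline from the raw
# data after every change. Tool names and formulas are illustrative only.

TOOLS = {
    "exposure": lambda v, p: v * (2.0 ** p),                    # p in EV stops
    "contrast": lambda v, p: (v - 0.5) * (1 + p / 100.0) + 0.5, # p in percent
}
ENGINE_ORDER = ["exposure", "contrast"]   # fixed, regardless of edit order

def rerender(raw_value, edits):
    """Re-run every enabled tool in the engine's fixed order."""
    v = raw_value
    for tool in ENGINE_ORDER:
        if tool in edits:
            v = TOOLS[tool](v, edits[tool])
    return v

# Changing an "earlier" edit doesn't destroy the later ones; the engine
# simply re-renders from the raw data with the updated parameters.
edits = {"exposure": 1.0, "contrast": 20}
v1 = rerender(0.25, edits)      # exposure, then contrast
edits["exposure"] = 0.0         # tweak the earlier edit
v2 = rerender(0.25, edits)      # contrast is still applied
```

This is the difference from a destructive history list: there is no stored intermediate image to invalidate, so editing step 1 of 4 never discards steps 2 to 4.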
You can make any adjustments you need in any order you want …
Except for white balance, which must come first because of its impact on density and tonality.
ClearView should also be enabled early, for the same reasons.
A beginner can also follow the "Essential tools" palette from the top down.
Let's not forget to manage colour rendition early on, too…