Off-Topic - advice, experiences and examples, for images that will be processed in PhotoLab

Yes, you did lose me, but from what you’ve now written, I understand you are essentially talking about a “fuse box”. I remember those as a kid - lights and power went out, go to the fuse box, unscrew the blown fuse, and screw in a new one. What I’ve got now in my apartment is a “power panel” of switches that can “trip off”, and I need to manually switch them back on.

Since I know I will get this all mixed up in my mind, I just printed it, and I will see if I can use this effectively in a test image. I guess I’ve always been confused about this, and just blindly followed some suggestions from @Joanna.

I don’t remember there being a tool for “dehaze”, “clarity”, and some other terms you mentioned. It’s time for me to make several virtual copies of a test image, and try to see the difference between:

  • Clearview Plus
  • Clarity
  • Contrast
  • Microcontrast (dehaze?)
  • Fine contrast
  • —Highlights
  • —Midtones
  • —Shadows

Is there a section in the documentation that compares all these things, and suggests how and when to use one or more of them?

No. Other applications often talk about a Dehaze tool and a Clarity tool.
I did a comparison between DxO PL v5, I think, and Silkypix v10.
Which is:
ClearView Plus (smart microcontrast) vs smart sharpening in Silkypix.
Microcontrast vs the Dehaze slider.
Fine contrast vs the Clarity slider.
Tone curve vs black level (DxO doesn't have a black level slider, only something close to it, "Black", which changes the exposure of the dark side of the image).
It's somewhere in the Tips, Tutorials and Resources topic; I posted a test with fish in water that shows how the different things react to the sliders at extreme settings (this shows their function).
I used the names dehaze and clarity because most people know them from Lightroom-type applications.

Some info FilmPack Fine contrast question - #27 by rrblint

I did make another one, but I can't find it right now.

True enough. Turns out I was editing in bright daylight, so overshot on the exposure in PL6. Dropping the exposure a few notches produces (I think) a more natural image. Pretty simple.

Mike, have you ever clicked on the ? help option next to each feature in PhotoLab? It will give you a brief descriptive popup of that tool.

Mark

ClearView Plus

Contrast Sliders


“Lightroom???” What’s that?

…er, seriously, I haven’t used Lightroom since the first time @Joanna responded to posts here.

I still make the monthly payment to Adobe, although I wonder why. Every so often I need Photoshop I guess.

I assume you’re using a calibrated display?

I got in trouble with that all the time, until I did a better job of calibrating my display and tried not to edit with daylight streaming into my room.

The photo you just re-posted looks FAR more natural!

I have been struggling to never use any image editor other than PhotoLab. Otherwise I get more confusabobbled than I normally do, which is bad enough.

Don’t change the way you write just because of me. I suspect everyone else here understands you just fine. I often think I understand, only to realize later I was wrong.

I used to do that, but haven’t done so in a very long time. I obviously should start doing so again. For tools I think I’m used to, I expect some kind of result, but doing this would confirm things for me, or warn me off. I enjoy copying things many of you do, to see if I get a similar result.

Hmm, wow, was I wrong. I thought I understood these tools, but I was very wrong. I need to capture a test image tomorrow, even if it’s not beautiful, to try some of these tools again. I’ll probably use the D780 to prevent @Joanna’s blood pressure from boiling over… :slight_smile:

I started editing with Silkypix v5 Pro for Panasonic, which is software calibrated and tailored to Panasonic and some other Japanese brands. I bought Silkypix v10 for its stacking abilities, which PL doesn't have, and to compare user interfaces and the quality level of the tools.

I wanted to understand why PL didn't have a "black level" adjustment, which I used as a dehaze for shots through water and the like. Silkypix has a few smart things I like embedded in its contrast tool set and tone curve tool, so in order to support my feature request I used those as examples to show what I liked about it. That's why I did the comparison between the Silkypix tool set - contrast balance (the midpoint in the tone curve), black level, clarity, dehaze, tone curve, and some more such as selective sharpening, which is microcontrast - and PL's ClearView Plus, advanced contrast sliders and tone curve.

Through that testing I discovered that ClearView is a kind of combination of clarity and dehaze, but that a black level adjustment isn't the same as microcontrast. For that you need to use the tone curve.
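For anyone curious what "black level via the tone curve" means in practice, here is a minimal sketch of my own (not code from Silkypix or PhotoLab), assuming a black-level slider is simply a remap that moves the tone curve's black point to the right and stretches the remaining range:

```python
import numpy as np

def apply_black_level(pixels: np.ndarray, black: float) -> np.ndarray:
    """Remap tones so that `black` becomes the new zero point and the
    remaining range is stretched back to [0, 1]. This is equivalent to
    dragging the black point of a linear tone curve to the right."""
    stretched = (pixels.astype(np.float64) - black) / (1.0 - black)
    return np.clip(stretched, 0.0, 1.0)

# A hazy shot (e.g. through water) has its darkest tones lifted above
# zero; raising the black level restores contrast, which is why a
# black-level slider can double as a crude dehaze.
hazy = np.array([0.15, 0.40, 1.00])
dehazed = apply_black_level(hazy, 0.15)
```

This is only an illustration of the principle; the actual sliders in either application almost certainly apply more sophisticated curves.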

May I suggest you keep a favourite RAW photo, one you really like that has lots of different elements of composition, colour, subject (s), as an “always try something new in this one first” sample.

It will be an image you are familiar with, with both problems and great bits, that you have edited before and probably struggled with.

DxO is non-destructive as you know, so working with an old friend and knowing what you’ve tried before will give you a single, solid reference when weighing one tool against another without breaking anything.

It might be a more focused way of seeing how things change as you apply them, rather than "new image, new tool".

Completely off-topic… we had some discussions earlier about old wood sailing ships. I accidentally stumbled on this video, which I think many of you might find fascinating. I’m not going to say much about it, other than post the link:
How an 18th Century Sailing Battleship Works - YouTube
As for me, I watched a few minutes - will watch the rest later this week.

I will review what you wrote later today, but I don’t want to get even more confusabobbled by studying other image editing software. Right now, I want to stick to “doing”, more so than “understanding”.

My interpretation of what you wrote, is to create “virtual copies” of images I want to work on, and try different techniques, and compare. I think that would be a better way for me to discover what works best, for different types of images, and make comparisons.

That’s part of it, yes, but I was proposing something more fundamental. Don’t feel you need to take new pictures to test new tools or a new idea. You can have one favourite, familiar picture that you’ve worked on, where you already know what you’ve done before and what you liked or didn’t like. When you then use the fine contrast tools (as an example) you will quickly see, “ah, John’s hat is sharper and I don’t have the horrible effect I didn’t like when I tried microcontrast”, or “Narandajit’s turban really pops with a tiny bit of ClearView applied locally, rather than what it did to his face, which I hated when I used it on the whole image last week”.

If you go out with your D780, take new pictures, and go at them right away with new, unfamiliar tools (which you suggest you don’t fully understand) without any reference to what you’ve done before, then I think you make it much harder to bring the whole toolset together as a cohesive solution.

The video is simplified because the subject is very complex, but it seems to be a good general overview with a reasonable amount of detail. This ship is a three-decker (three decks of cannon) and would probably be a “First Rate” ship of over 100 guns, like Admiral Nelson’s “Victory”. Most line-of-battle ships of that period were standard “Third Rate” 74-gun two-deckers. If I haven’t mentioned it before, that is where the commonly used terms first rate, second rate, etc., come from.

Mark.

I will try this starting with my next image, probably tomorrow. Instead of jumping in with the tool-set I’m used to, I’ll start with a “virtual copy” and try tools I’m not yet familiar with. It will likely take me a lot more time, as I’ll be learning so many things as “new”.

I think of this as “painting”, where I used to have a palette of perhaps a dozen colors to work with, compared to an experienced painter who may have 50 or more colors to work with. This is not like mechanics, where a few screwdrivers, pliers, and a crescent wrench (and a hammer) are enough for most things, compared to a mechanic with a huge toolbox filled with hundreds of tools…

As you wish, though I think you misunderstand me :grinning:

I probably don’t understand you perfectly, but I think I understand what you are suggesting.
There is a lot of learning that needs to happen before I can really do what you suggest, but isn’t the best learning tool actually DOING all these things and evaluate and compare the results? I may not yet fully understand you, but what you suggest I do sounds to me like an excellent way at improving my image editing.

I’ve got another concern. Before I post my own thoughts, people might want to read this from today’s news: https://www.theguardian.com/technology/2023/apr/17/photographer-admits-prize-winning-image-was-ai-generated

So, my simplified question for this forum is: does using the tools of PhotoLab to get rid of digital noise turn the result into an AI-manipulated image?

Or, maybe I should ask what tools, if any, in PhotoLab should not be used if we don’t want to be accused of using AI on an image?

Is AI enhancement the same as asking the AI software to create the full image?

DeepPRIME and DeepPRIME XD are AI-based. Are you assuming that AI only exists to create or mimic reality? Perhaps you need to develop a better understanding of what AI actually is. There is a lot more to it than the results you got from that query program. There are also several different types of AI. Why don’t you spend a short while familiarizing yourself with the basic concepts?

Mark

Yeah, calibrated with Spyder Pro but since I wasn’t doing a ‘serious’ edit, I was on my laptop by a ceiling-to-floor window. Calibration won’t be of much help then. Things look fine to your adjusted eyes until you see it again under more conducive light. I do most of my ‘real’ editing at night, so it’s generally not an issue… :slight_smile:

I still don’t know what AI exactly means.

George

I am not the one to teach you about AI. My limited knowledge is based on my personal research.

Mark

I know.
But I think AI is mentioned too often without knowing what it is. I don’t know it either.

George

Which is why it has been suggested that you play with several virtual copies of the same image, in order to see what effect different treatments have, compared to each other. Did you know you can rename virtual copies on a Mac? This means you can more easily differentiate between different versions than just VC1, VC2, etc.

There is a world of difference between the AI that Boris Eldagsen used to create an image and the AI that DxO uses to get rid of noise.

Is it enhancement or correction?

Before electronic computers existed, computers were (mainly) women who computed things. Give them a mathematical problem and they would work it out. The problem was that it all took lots and lots of time and was prone to human error.

Then came the idea of, first mechanical, then electronic, computers, which did exactly the same job, just a lot quicker.

Whereas a human computer used “real intelligence” (the brain), electronic computers were crafted, by a human, to mimic that intelligence, or at least the logical process needed to solve the problem. Essentially, the necessary algorithm was recorded so that it could be executed by the machine.

Noise reduction is just an algorithm that has been recorded as computer code and that simply speeds up the operation. It goes something like…

  • is this pixel noise?
  • are the pixels surrounding it noise?
  • compare the surrounding pixels to see which provides the best replacement

This is something that could be done by a human but, when it comes to a 45 Mpx image, you could expect to wait rather a long time for the result, so it makes sense to provide an automated method. Of course, a noise reduction algorithm is only ever as good as the replacement strategy it has been taught by a human; it is simply a faster way of coming to the same result and, as evidenced by the number of competing NR tools available, some are better than others.
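The three bullet steps above can be sketched as code. This is a toy illustration of my own (not DxO’s actual method, which uses a trained neural network): flag a pixel as noise when it stands out from the median of its neighbours, then replace it with that median.

```python
import numpy as np

def toy_denoise(img: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Toy neighbourhood denoiser following the three steps above."""
    out = img.astype(np.float64).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # gather the 8 surrounding pixels, dropping the centre one
            patch = img[y - 1:y + 2, x - 1:x + 2].astype(np.float64)
            neighbours = np.delete(patch.ravel(), 4)
            estimate = np.median(neighbours)
            # "is this pixel noise?" - does it stand out from its neighbours?
            if abs(float(img[y, x]) - estimate) > threshold:
                # replace it with the best local estimate
                out[y, x] = estimate
    return out
```

Run on a 45 Mpx image, this pixel-by-pixel loop really would take “rather a long time” in pure Python, which is exactly why real NR code is vectorised or runs on the GPU.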

In fact, although labelled as AI, NR algorithms are not truly intelligent - they are simply executing an algorithm taught by a human. It’s just that AI is the new buzzword to help sell so much software nowadays.

On the other hand, AI image generators are taught to “replace” the creative process, in deciding what should constitute an image of a requested subject. Think of it as asking someone else to create a collage, using bits of other people’s images, possibly without the originators’ consent. A bit like the way so much modern “music” is constructed by sampling other musicians’ work.

Is it photography? Well, if you account for those bits of images stolen from other photographers as photographs, then possibly. I would argue it is the result of having taught a machine how to stick bits of images together, which may or may not produce a pleasing result - a sort of machine artist.

But it is definitely not photography, as is commonly understood by using the light entering a camera to record what a photographer sees.

What is worrying is that AI is now being used to create, not only still images, but also video. Take a look at the whole business of “Deep Fake” videos, where one person’s face can be superimposed on another person, so that the viewer gets the impression that someone they know is doing something uncharacteristic. This could lead to AI newsreaders, presenting AI generated news of events that never happened.

Fortunately, Boris Eldagsen was up front and declared his image to be fake. @mikemyers this is where photojournalistic rules come into play. But, if an image is declared to be a work of art, which most images tend to be, then I see no problem with “tidying up” a photograph, or even creating a work of art based on one or more photographs.

The aim of DeepPRIME is not to create something that you couldn’t be bothered to photograph yourself, it is just another (very efficient) tool for improving your own work.


I was going to write what I think it means, but I think this very short definition on the internet does this better:
Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

Yes, thanks to recent posts here, I think that is a wonderful idea. I assume I would do this, immediately after creating a Virtual Copy, before doing anything to it. Seems to me that in the past, renaming my PhotoLab files using macOS confused PhotoLab, so I stopped doing so.

Can you please explain when, and how, I can rename my PhotoLab files so that both PhotoLab and macOS accept the change and I don’t have issues? When I use PL5 to compare my different Virtual Copies, will this still work properly after the rename?

Maybe some day a future version of PhotoLab will provide an “information box” into which I can enter the changes I made, how I did it, and why, so that years from now I will be better able to refresh my memory of what I did, and why.

I agree with you, but where do we draw the line? Do image improvement tools that borrow from other images go too far? If we change the color of a bird, or part of the bird, is that “legal” in this regard?

Wearing my old PhotoJournalist hat, none of these things would have been “legal”, unless I was creating a Photographic Illustration - they cross the boundary from being Photographs.

I thought the only line to be concerned about crossing was whether the image was altered. I guess I shouldn’t be asking that here - I should find one of these contests, possibly the one that rejected the image I was describing, and read their rules.

Question specifically for @Joanna - I’m pretty sure you are comfortable with all the tools available in PhotoLab. Are you equally comfortable with software like Topaz, which shows various possibilities so you select the one you prefer? How about Luminar?

…and as for me, for now, I’m not going to pay attention to any of this, other than how to properly rename my VC images to quickly remind me a year from now what I did, and why.

I’m not going to debate this, but eventually computers will weigh all the choices available and decide on their own which is best. They may be “taught”, the way all of us are taught, learning along the way. So while I don’t agree with you, that doesn’t matter.

I don’t agree with most of what you wrote about AI, but for us, in this forum, it comes down to this:

I have lots of thoughts about this, very few of which are relevant. I would like to be able to use PhotoLab to the best of my capacity to learn it. I know I have access to image editors that can simply replace the sky if I wish, but that’s not my goal.

There are a lot of people in this discussion whom I am learning from, especially about how to best use the tools that PhotoLab provides. As always, tools can be used for “good” or for “evil”. PhotoLab seems to be especially good at taking an image captured by MY camera, and turning the image into what I thought I saw with MY eyes. And as always, “GIGO”.

Garbage IN > Garbage OUT.
If I don’t get it right in the camera, anything I do later is just a band-aid.

Life was easier when I just took snapshots.
Now I want more, much more.

Time to shut up, put away the computer, make breakfast, and start a new day.