It is about using an AI plugin together with Lightroom and Capture One to speed up both the culling and the editing workflow, based on the data in your picture database.
To run the AI editor, simply add your Lightroom or Capture One catalog to the edits screen, choose your AI Profile, and filter for the images you want to edit. Aftershoot will do its thing, and you'll have all your edited images ready to review in minutes!
Although Aftershoot uses AI to automate the editing process, photographers retain full control over their final edits.
Photographers can review and tweak their edits as usual in Lightroom and Capture One, or they can use Profile Adjustments in Aftershoot to fine-tune their AI Profiles and change their styles with ease.
The software continuously learns from a photographer's input and adjustments to keep improving in consistency and quality.
What about an AI that will directly produce the pictures that will make you a great photographer? Just ask, no need to shoot. No camera needed either.
Problem: everyone will become a great photographer. So, there will be no more great photographers. An interesting paradox that great philosophers will certainly study. Oooops! They'll probably ask their AI.
I think Adobe was trying to advertise that you don't need a camera anymore, or to be a photographer; you can just type what you want and their Firefly will give it to you. lol
#ClownWorld
Stenis
(Sten-à ke Sändh (Sony, Win 11, PL 6, CO 16, PM Plus 6, XnView))
⌠but, the difference is that in this case it will suggest a tweek of your pictures that will be based on thousands of the pictures you have polished before and nothing else. That is the difference and as it says, nothing prevents you from adding som extra corrections if you please.
Most people today tend to establish a decent starting point with a dumb preset. In this case it will just be a little smarter than usual. I don't see all that much difference in that, really. You will just get a better starting point and save some boring time on a lot of repetitive tasks. I wonder, though, how big the market will be for these solutions when most lazy photographers have already migrated to phone cameras.
Satire aside - this will be a challenge for professional photographers. In my opinion, the AI-supported "professionalization" of photography will also reduce the need for photographers to maintain old business models. I am following how quickly AI-supported tools have become established in software development - combined with new tasks for reliable and high-quality use of the technology. The same will happen with AI-supported processes in photography. The AI functions provide an inscrutable technology that often produces strange phenomena. In my experience, the results are also rarely reproducible. People, this is no competition for photographers! There must be new areas of activity in the field of AI functions!
Stenis
(Sten-à ke Sändh (Sony, Win 11, PL 6, CO 16, PM Plus 6, XnView))
This is not general generative AI, so it might work a little better than that, since it is based on the photographer's own pictures.
In one field I can definitely agree on disputable productivity: I would never rely on AI to produce any metadata, and especially not keywords. That might be fine if you have low ambitions, but AI-generated keywords can really screw things up. For all the people living in non-English-speaking countries - where phone pictures will most certainly be tagged in a local language, while more serious system-camera photographers publishing on the Internet would most likely add all that metadata in English - I think it would be a pretty serious mess.
I believe this falls into the first category. I haven't read the article, and don't intend to, but the mere concept of "AI culling" gets a giant "NOPE" from me. I know what meets my standards and I know my standards change over time. I cannot conceive how any algorithm is going to have a chance of replicating that.
It probably will, and while capabilities grow, they are getting 480 USD a year in subscription fees, while you possibly need to give up some of your author's rights too.
It's just a way to turn creative works of art into please-all industrial products.
The future: without lol, and without photographers.
Agencies will not need photographers for most of their work.
Only real artists will survive. And they will be far fewer in number and will earn more.
And the social network guys will think they are good photographers (but will not earn their money with it).
The climax of sony clic.
The first mistake I'd expect an AI culling tool to make would be to delete, or even reject, any photos. Because I don't do that. Never have. I select photos I wish to work on and publish. I keep all of them.
I've frequently gone back and re-made decisions on which photos I want to publish, and in some cases those that never made the initial selection have become firm favourites.
Like I said⌠my standards change over time. As does software. What Iâm able to make out of a photo with PhotoLab 7 is a far cry from what I could make with Luminar 2018 only 5 years ago. Throw in some Topaz upscaling and sharpening and itâs a whole new world.
So yeah⌠no AI tool is going to choose nor edit my photos.
Stenis
(Sten-à ke Sändh (Sony, Win 11, PL 6, CO 16, PM Plus 6, XnView))
⌠but say for a start it will suggest all unsharp for delete and it then by machine learning has learnt some of your culling âpatternsâ and add some that falls out of that frame?
I should probably not be the obvious user of a tool like that, because I rarely take all that many copies of the "same" motif - but what about a hard-spraying bird or sports photographer?
This is all about getting a better and more effective starting point, isn't it?
In Capture One they have recently taken quite a few smart steps to speed up our workflows. We can now instantly get a detail preview to monitor, for example, details and sharpness on faces. I can think of a lot of other tools and smart features developed to make the flows more efficient than they have been before.
Some others are more batch-oriented, like the new AI-Crop feature.
The AI support in our converters is just about two years old now. Is there really anyone who believes we have reached the end of that road already? The really interesting AI-Crop feature is just a couple of months old (it came in May 2024), when it surprisingly showed up in the then-new version they released. I guess quite a few product photographers - and, say, school-portrait and studio photographers - are pretty thrilled and happy about these really exciting and effective tools.
⌠and if you look through the video below you will find that the photographers are i total control over how much control they want to hand over to AI. It is not at all to send your pictures in to a black box (like for example with DXO PureRAW) and wait for the results, it is far more versatile than that.
For that reason, I think it is wise to approach this AI-support development with an open mind, because this is far from handing it all over to AI. You choose, and are in control of, how to apply the AI support with a tool like AI-Crop in Capture One. Of course, there is also an "Auto mode", and there will be situations where letting AI take care of it all is the best choice.
When I was working with the FotoWare Enterprise DAM system at the City Museum of Stockholm, I worked very closely with an expert on that system, who told me that a publishing house in southern Sweden had even turned on a function called "Smart Color", a feature that automatically post-processed the picture flow through the system. That was "pre-AI", even if there was some smartness built into a feature like "Smart Color".
At that time they also kept a duplicate, semi-manual flow, open for manual post-processing if needed. After a couple of months they closed down that parallel flow for good and relied totally on the automated flow, which they found "good enough" for their needs.
The thread title mentioned "culling and editing".
I don't cull (and I don't spray, either). So no to that from me.
As for editing, I'll consider it when we are several generations beyond these initial "because we can" implementations. If it actually worked as a training system, where it would suggest edits to me rather than just "here you go", then I might be more amenable. Or perhaps offer me four or more different edits and a means of saying what I like about each.
I'll be honest, I have not watched any of the videos, because I have long ruled out Capture One based on price and am very happy with PhotoLab. That's not to say I will never change, but it's not happening any time soon, and "AI" is actually a negative to me at this stage, particularly if it's in the headline. I've seen many promises under the "AI" headlines and most have been hit and miss at best.
Yes, "good enough" is essential for an automated workflow and always a compromise. I'm not quite sure whether the procedure is suitable for less extensive photo sequences.
I'm not in the business and don't really understand the culling and the exact feature requests. When I think about it, I could imagine the following configurable functions.
Filtering out:
Group photos where one person has their eyes closed
Photos in which parts of people are cut off
Photos with flares
Photos with large areas of overexposure or underexposure
Photos with too much sky or foreground (there are certainly more aspects)
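A configurable rule list like the one imagined above could be sketched roughly as follows. This is a minimal illustration only: the predicate functions (`eyes_closed`, `clipped_exposure`) and the photo dictionaries are hypothetical stand-ins for real image-analysis routines, not any actual product's API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class CullingConfig:
    # Each rule returns True when a photo should be flagged.
    # Flagged photos are filtered out of the selection, never deleted.
    rules: List[Callable[[Dict], bool]] = field(default_factory=list)

def eyes_closed(photo: Dict) -> bool:
    # Hypothetical stand-in for a real eye-state detector.
    return photo.get("closed_eyes", 0) > 0

def clipped_exposure(photo: Dict) -> bool:
    # Flag photos where more than 5% of pixels are fully over- or underexposed.
    return photo.get("clip_fraction", 0.0) > 0.05

def cull(photos: List[Dict], config: CullingConfig) -> List[Dict]:
    # Keep only the photos that no rule flags.
    return [p for p in photos if not any(rule(p) for rule in config.rules)]

config = CullingConfig(rules=[eyes_closed, clipped_exposure])
photos = [
    {"name": "a.raw", "closed_eyes": 0, "clip_fraction": 0.01},
    {"name": "b.raw", "closed_eyes": 2, "clip_fraction": 0.00},
    {"name": "c.raw", "closed_eyes": 0, "clip_fraction": 0.12},
]
kept = cull(photos, config)  # only a.raw survives both rules
```

The point of the rule-list shape is that each criterion stays independently switchable, which matches the "configurable functions" idea above.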
How does culling work for photo sequences at 120 frames per second (Sony Alpha 9 III)? Press the shutter button 10 times during a car race, at roughly a second per burst, and there are already 1200 pictures to cull.
Part of the reason it's not for me. I do not "spray and pray". Although I might take multiple shots of a bird or aircraft as it passes by one time, each and every frame is a button press at a time I choose. I started on film when any other approach was outrageously expensive, and I have not seen any need to change my ways.
Generally speaking, you need a fairly expensive camera/lens combo to make "spray and pray" work anyway, as with cheaper gear, you're likely to get less sharp images than if you just press the button once.
I agree with you except for one point. There is always the human reaction time between the impulse to trigger and the actual shot, so you could use some support. I imagine the process as a kind of cinematic photo series, similar to the earlier experiments with stroboscope shots. Perhaps a photographic sequence presented as a whole could be interesting.
But what difference is there from video sequences that are broken down into individual images? Video editors also have a lot of tools to get the best out of the film material. Culling support is ideal here.
Overall, however, this is probably a specialized area.
Sure, but that just means it's part of the craft of photography. I could also shoot in complete auto mode, but I choose not to.
I am reminded of an experience in one of my favourite places. Zealandia Te Māra a Tāne is a world-class Ecosanctuary 20 minutes from my home. There is a lot of bird life there, which is the main reason I go. I once stood near some supplemental feeders and watched a couple of guys with giant cameras with giant lenses and sizeable backpacks, no doubt with yet more gear. Both were intent on a bird that looked like it was about to take flight.
Click-click-click-click-click… click! They must have fired off many dozens of shots over multiple bursts. During several of those bursts, the bird did nothing.
I took exactly 5 frames, one at a time, and I got this. One of the other frames was almost as good. Three were pedestrian. What I did not have was a big task of finding the "best" shot.
After years of getting frustrated with having to do so much post-processing work on digital images, a few years ago I realised that it was far better to transfer my analogue LF skills to the digital world. Just because I could use automatic everything didn't mean I had to.
With transparency film, I had to "compress" a wide dynamic range into no more than 5 stops in the camera, because there is no easy way to post-process transparencies.
With negative film, I learned to expose for the deepest shadow with detail, over-expose and then under-develop to fit everything in, sometimes up to 14 stops.
Finally, I realised that, for digital sensors, I had to know intimately the dynamic range of the sensor, expose for the brightest highlights (often involving up to 2 stops of over-exposure), and then recover the shadow details in post-processing.
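The stop arithmetic behind that exposure habit can be sketched like this. A minimal illustration only: the luminance ratios are made-up example numbers, not measurements of any particular sensor or scene.

```python
import math

def stops_between(bright: float, dark: float) -> float:
    # Dynamic range between two luminances, in photographic stops (log2
    # of the ratio): each stop is a doubling of light.
    return math.log2(bright / dark)

# A scene with a 1024:1 contrast ratio spans 10 stops:
scene_range = stops_between(1024.0, 1.0)

# Over-exposing by 2 stops quadruples the recorded luminance, lifting
# shadow detail well above the noise floor before it is pulled back
# down in post-processing:
lift = stops_between(4.0, 1.0)
```

This is why the technique only works if you know the sensor's highlight headroom: the 2-stop lift must still leave the brightest highlights below clipping.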
The feeling that I am totally in control appeals to my slight OCD.
Good, isn't it?
And just think of all those "machine-gunners" who will wear out their shutter boxes prematurely. Which is why, I guess, they are all going for mirrorless, so they don't have anything mechanical to wear out.