So that’s quite a setup. You can use this method to create a summary of each forum participant and place the summaries under a separate topic. That would certainly liven up the forum. This is the classic problem of generative AI: it uses data that each of us has published, yet we are now surprised that the information is freely available on the forum.
For example, the AI system might note that a user has received a lot of help but is struggling to learn, or that a user changes their mind frequently, or …
DXO could summarize the forum entries and classify them as “important”, “unimportant” or otherwise relevant, based on the profiles described above. There are certainly no limits to the further possibilities.
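Sketched very roughly in Python, such a summarizer could look like the following (the OpenAI client, the model name and the prompt are purely my assumptions for illustration; nothing like this exists in the forum today):

```python
from openai import OpenAI  # assumes the official openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_user(posts: list[str]) -> str:
    """Summarize one forum participant's posts and label each
    entry as 'important' or 'unimportant'."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "Summarize this forum participant's posts and "
                        "label each one as 'important' or 'unimportant'."},
            {"role": "user", "content": "\n\n".join(posts)},
        ],
    )
    return response.choices[0].message.content
```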
In more heated discussions in the forum, every user could get support from the AI system in order to respond in a quick-witted manner.
Well, brave new world.
The ChatGPT summary first, and please remember you only get a summary if you wrote stuff in the first place. I haven’t tried giving it a few bullet points and letting it write the whole text from scratch.
Summary:
The author responds to a concern about AI using publicly shared forum data, noting that generative AI models are designed with safeguards. For instance, interactions are “ring-fenced” to prevent session data from affecting the broader model, helping protect against manipulation by bad actors.
They reflect on both the promise and limitations of AI, based on personal experience using the free version of ChatGPT. While sometimes frustrating or inconsistent, AI has also been highly effective—especially in helping code applications. The author emphasizes that while AI is a powerful tool, it’s still evolving, and users must remain critical of its output rather than blindly trusting it. Nonetheless, AI should be embraced when it offers genuine help.
Now the text I submitted for summarisation.
@gserim Not according to a deep and meaningful conversation that my eldest son had with one of the chatbots. It indicated that a colossal amount of effort had been expended to get the engines to learn and understand the “truth”, so that when used their answers are based on the truth.
But more importantly it indicated that the area in which it interacted with a user was ring-fenced so that nothing from that session would spill out and potentially “contaminate” the model. This makes a lot of sense because otherwise “bad agents” could seek to corrupt or subvert the model by the way that data was presented to the model during a session.
Either because of my lack of knowledge of how to conduct a dialogue over the course of a number of days, or because of a lack of any real cohesion in the data received and given during a session, it was like “pulling teeth” at times, while at others every step was a success.
Please remember that I am using a free version of the model and ChatGPT has a number of engines to choose from when processing a user request.
As for getting ChatGPT (or similar) to summarise more detailed posts, that has merit, and for the more practised it could be one route to a fast response, but the forum isn’t really supposed to be combative, although it might seem that way sometimes.
I have been using computers since 1965, and they have changed somewhat since then. I see a use for AI in a number of tasks: when it worked for me in coding a new application it was a joy to use, but that could easily descend into real chaos, with ChatGPT telling me what errors I had made when I pointed out the syntax errors in code which it had just written/generated.
Typical “it wasn’t me” behaviour!
It is relatively new technology and will only get better with time, but that is a double-edged sword, and users need to be wary and not swallow everything that is churned out as being 100% spot on!
But wary doesn’t mean scared, so we should not be afraid to use it when it genuinely is saving time and helping create something useful, to at least one person.
There is still a lot to think about and learn in this area. I had already considered initiating a controlled experiment here in the forum, where a forum participant who has access to an AI assistant with a Pro license starts a specific topic and acts as a test interface between the forum and the AI, and we could see what the AI system is capable of in terms of photography and emotion. It would be something like a bot, which we have already suspected from time to time behind various answers.
Once again, you are using it to create programs. What I want is not a program. I want a research assistant.
AI could no sooner solve my keyword drudgery than it could have replaced my mother as a genealogical researcher. Both of these tasks require more than simple logic. For one thing, some of the information is not online so cannot be “scraped” for learning. These are paper documents found via indexes in brick-and-mortar buildings. My stuff probably is largely online, but I also use my own personal knowledge and experience gained over decades.
On an infinite timescale? Possibly. In the foreseeable future? Absolutely not.
@zkarj Indeed I am using it to create programs, and although programming is a prescriptive “art”, there is an element of creativity in it, within the tight bounds prescribed by the language.
For the example I chose to attempt with AI, the AI was left to create the UI and then write the code to support it, sometimes very well, at other times less so.
AI is in its infancy and has been launched now because the developers want to recoup some or all of their considerable investment!
I would suggest that the “Research Assistant” role is an ideal one for AI, but that all depends on whether the particular area you are involved in has attracted the “creators” to ingest material on it into their models.
I do not consider AI to be some magic cure-all, nor some dark, disturbing force; in some situations it is probably both, but that largely comes down to the uses to which it is put.
In your earlier post you stated
AI is not some “wizard” that can go beyond the bounds of the human knowledge that it has been “made” to ingest, except that, providing it is not hobbled in some way, it can “think” beyond the boundaries that humans automatically place because of their upbringing, biases, etc.
It can be a “free spirit” and potentially open the eyes of closed minded humans.
As for it not being able to tell you “what’s the best way to take a photograph”, that is “simple”, given that by your own statement it is not possible for a human to do that.
Put the AI behind the lens and it can see everything that the human can.
It can make value judgements about the light, the composition, etc., and if you gave the camera free rein to move itself on the tripod it could look for a better angle, or just tell you that you are wasting your time.
And you presume that others do not possess the same, or have not also gained experience over decades!? You also assume that the accumulated knowledge of you and millions more cannot be ingested into a probabilistic model and an answer spat out?
In my case I am learning a new (programming) language and used AI to help me both shortcut the development time and provide me with examples from which I can learn.
A very specific case, and I used a model that happens to have some knowledge of the particular programming language that I am currently using, PureBasic, which other AI models do not possess. Most, if not all, can handle Python, etc.
A very specific task that works for me; whether it, or any part of it, will work for you and others remains to be seen.
In both your examples you seem to be suggesting that AI cannot replace the human element.
I must admit that my example was looking to replace me in the actual process of writing the program, leaving me to “only” formulate the original requirements in such a way that the AI model produced what I wanted, and then test the program to see if it could handle the various boundary conditions.
I don’t see AI as a replacement for humans but rather as an augmentation: HAI, Human Augmentation Intelligence.
Here’s what ChatGPT made of my post above
Here’s a summary of the discussion:
The poster responds to zkarj’s comments about AI not being a suitable research assistant, arguing that AI can play that role depending on how it’s used and trained. While the responder primarily uses AI for programming, they acknowledge that AI still has creative potential within limits. They believe AI is still in its infancy, but it can augment human capabilities, especially in specific, well-defined tasks.
They challenge zkarj’s view that AI can’t provide nuanced answers like “what’s the best way to take a photograph,” stating that even humans struggle with such questions. AI, if given the right input and freedom (like control over a camera), could theoretically assess conditions and offer valuable insights.
They also counter zkarj’s assertion that certain human tasks (e.g., genealogy or keyword tagging) are beyond AI, arguing that AI could still assist if trained on sufficient data. Ultimately, the poster sees AI not as a human replacement but as Human Augmentation Intelligence (HAI)—a tool to extend human ability, not eliminate it.
If you ask me “what’s the best way to take a photograph” I can absolutely answer your question by having a conversation with you.
Lest you think you can have a conversation with AI, well, sort of. I have witnessed first hand that the conversation can go off in pointless directions and even get stuck in loops. All that “AI” is, is pattern-making. It cannot think, it cannot intuit, it cannot reason, it can’t (in my experience) even reasonably judge its own answers to be “on topic”.
No doubt there are humans that have some of the same problems, but there are hundreds, thousands more humans who each have their own experiences.
Going back to my original point — if I want to identify the four aircraft in the photo, to establish their precise identities and therefore marks and models, it can be done by research, which is not deterministic. Believe me when I say I have looked for resources online that detail which aircraft were present at the particular show, and they are incomplete. However, someone once told me something that led me to make a particular assumption about one of the identities, and thence to research that aircraft in an attempt to prove it was there and, therefore, on balance, likely to be the aircraft in the photo. This is a complete reversal of the identification task.
It is a process that requires thinking, intuition, and reasoning.
In the case of my mother, she travelled halfway around the world and visited churches to read books. I’d stake money that AI could not solve that problem, and while theoretically it could solve my problem, my experience, logic, and many research papers (what happens if we ask AI to research AI, I wonder?!) say that current technology has significant limitations.
You want it to write programs. It’s decently good at that because it is a clearly bounded model. There are only so many languages and each has a very restricted syntax. Even so, the possible permutations are staggeringly large. My own use of AI for programming says that it regularly imagines things that it claims “work”, and even claims “are tested”, that do not work at all.
AI will not succeed in adding the detailed keywords to my photos. It can add “dog” and “aeroplane” and possibly even identify types (though I expect some angles would confuse it). But — and feel free to prove me wrong — it cannot identify the aircraft in a series of photos unless the identities are clearly visible in the photo. In which case I don’t need AI.
@zkarj Thank you for your informative response.
With respect to your mother’s undertaking, as you rightly point out in your post, if the data is not digitised and then analysed, then no amount of AI, or even remote HI (Human Intelligence), can uncover what your mother can by direct interaction with the records.
With respect to AI being useful for the task of recognising the specific model of an aircraft in a photo, I asked AI for a comment and this is what I got in response.
2025-07-03_142852_.pdf (1.4 MB)
Yes, I am asking AI about AI, but my eldest son had a very long “discussion” with ChatGPT, and it appears that the underlying principle, at the moment, is to ensure that responses are as accurate as they can be; humans will seek to add a distortion layer on top of the models to serve their own ends, but a model that simply lies about anything and everything is ultimately no use to anyone.
However, my coding efforts were hampered by the product “forgetting” the exact syntax of PureBasic, then suggesting, as an explanation, that I was using a different version of the product, rather than admitting it was confusing PureBasic with another version of Basic, old or current, or using a deprecated element of PureBasic!?
It was very polite whenever I or the compiler found another error but did seek to place the blame on me, at times, for some errors in code that it had just written. It seems to suffer from selective memory, just like humans.
The process of getting the model to do exactly what you want can be a very tedious exercise or one where the law of diminishing returns means settling for something less than exact/perfect, from time to time.
Nevertheless, in my specific case, it has helped me code what I wanted/needed and in double quick time.
Regards
Bryan
I’m trying to identify the specific airframe. But even just sticking to the model, I’d be interested to see if any AI could distinguish between models that differ only by internal components!
The worst part of that is that the models learn from humans. Who lie and distort. Garbage in, garbage out.
Notwithstanding recommending glue on pizzas or suggesting the right intake of rocks per day for good nutrition (real examples), even the best AI coding assistants need to be told they are wrong. Frequently.
“Thank you for being so observant! You are right that version X of language Y does not allow… blah blah blah.”
As for lying: I once tried to use AI to search out a specific XKCD comic I recalled. Not only did it fail to find the comic in question, and eventually get stuck in a loop between just two different comics, but every comic it suggested was completely unrelated, and it also assigned the comic number incorrectly. That is to say, 100% of the time it gave me links to comics, telling me they were number X when the link led to number Y.
Someone recently mentioned one of the (Mac) calendar software vendors was adding AI to their calendar. My response was formulated based on personal experience.
“Why didn’t you tell me I had a meeting at 11am?”
“Your meeting is at noon.”
“No, it clearly says 11am.”
“Thank you for being vigilant! You are correct, your meeting is at 11am.”
Which would, of course, be utterly pointless.
A contrived example, for sure, but this is the “logic” we have to assume exists in all LLM interactions.
@zkarj I don’t think that is the case, and I believe that user sessions are ring-fenced to stop attempts to subvert the model by introducing falsehoods during a user’s session.
The models have cost a lot to create, notwithstanding issues of where the training material was obtained from, and their value lies in their ability to answer questions well and as accurately as possible, unless the owner wants the model to toe the “party” line; but I believe that is restricted to the upper layers, and even the corrupted models start spewing out facts after a short while of use.
To be honest, not in the conversations I have had, nor, apparently, in the discussions my eldest son had with an AI model.
Why you seem to be so unlucky in your interactions I am not sure, but I have had the model stray from the programming language I am using from time to time, while at other times it is spot on.
Occasionally it seems to find it difficult to understand exactly what change needs to be made (a lack of accuracy in my description, perhaps), and that any such change is not to be made by completely rewriting the application, i.e. by discovering a completely different way of coding the existing requirements rather than just remodelling the code to accommodate the revised requirements.
However, the first iteration of the backup utility I described in a post above was entirely designed and written by AI, from as exact a set of instructions as I could manage to give it, and it probably took a total of two hours to get from a simpler version to that final version, in a number of sessions over the course of a day.
Things got somewhat less predictable thereafter, so what happened (changed) I am not sure, but I did get it to change one set of conditions and then, on a later occasion, to add a database backup option to the “product”.
However, neither of those amendments was as easy and smooth to achieve as the original development phase.
If it was as predictably bad and unreliable as you are suggesting then I wouldn’t have got anywhere near as far as I have!?
A program that is to be compiled either passes the syntax checks or it does not; it either works for “normal” inputs or it does not; and it either works successfully when testing boundary conditions or it does not!
Any muddled thinking from the AI will result in a program that doesn’t compile, doesn’t work at all, or breaks when stress-tested; although I think the original might have a weakness in one particular boundary test case, it actually works for most inputs.
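To make that “either it works or it does not” gate concrete, here is a minimal sketch in Python (the compiler command, file name and test inputs are made up for illustration; my real checks were done by hand with the PureBasic compiler):

```python
import subprocess

def passes_the_gate(source_file: str) -> bool:
    """An AI-written program either compiles or it does not, and it
    either survives normal and boundary inputs or it does not."""
    # Syntax check: hand the source to a compiler (cc here, hypothetically).
    if subprocess.run(["cc", "-o", "prog", source_file]).returncode != 0:
        return False
    # Normal input, empty input, and an oversized input as a stress test.
    for data in ["normal input\n", "", "x" * 1_000_000]:
        result = subprocess.run(["./prog"], input=data,
                                capture_output=True, text=True)
        if result.returncode != 0:
            return False
    return True

print(passes_the_gate("generated.c"))  # hypothetical AI-generated source
```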
One case that needs to be addressed is that in my original specification I requested that the backup should be capable of traversing the subdirectories, if they existed, which it does, but it flattens the structure in the backup.
So I failed to specify that the original structure should be maintained in the backed-up directories. Whether the model should have decided that, given the input was structured, the output should be the same, I don’t know.
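For illustration, here is a minimal sketch (in Python rather than the PureBasic the AI wrote, with made-up paths) of the structure-preserving traversal I should have specified in the first place:

```python
import shutil
from pathlib import Path

def backup(source: str, target: str) -> None:
    """Copy every file under source into target, recreating the
    relative subdirectory layout instead of flattening it."""
    src = Path(source)
    for item in src.rglob("*"):          # walk all subdirectories
        if item.is_file():
            dest = Path(target) / item.relative_to(src)
            dest.parent.mkdir(parents=True, exist_ok=True)  # keep the tree
            shutil.copy2(item, dest)     # copy contents plus timestamps

backup("C:/Data", "D:/Backup")  # hypothetical source and target
```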
@zkarj your scepticism of AI may well be warranted, but in the example I quoted here, and in other utilities I have asked it to create, it does anything from a reasonable job to an excellent one, and given that I am still learning the language, it provides me with templates I can copy without forever referring to the manual (online or PDF).
It works for me; I am sorry if it doesn’t work for you, but I will continue to use it, hoping that it uses the same model within ChatGPT that produced the first set of code.
You’ve hit the nail on the head there.
I didn’t say it cannot be made to work, just that… as per your own examples… it is flawed.
Again, this is a highly documented and tightly bounded domain. Exactly what my problem domain is not.
@zkarj You went a lot further than that; you effectively quoted the examples frequently presented to make out that AI is either biased, takes shortcuts, or lies.
Although coding is a special case, the AI I was using “mostly” stayed within the bounds of the syntax, and there is a lot of that in PureBasic, while creating from scratch: “it was making it up”, but within tight boundaries, and it worked.
What I believe was happening was that the boundaries were not tight enough within the model and it was straying into other Basic dialects erroneously; there was an element of confusion from time to time, but that is why AI is evolving all the time…
Your domain of aircraft recognition is just what AI was built for; the same approach is being deployed very successfully in recognising tumours, a bounded model once again, and aircraft recognition is no less bounded.
The pdf I included showed the interested parties who are keen to see such technology working successfully and it undoubtedly will get there!
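As an illustration of how bounded a task this already is, an off-the-shelf classifier can be run in a few lines of Python (torchvision’s pretrained ResNet-50 here; its general ImageNet labels only go as fine as “airliner” or “warplane”, so the specific-airframe level is exactly what those interested parties are still working towards):

```python
import torch
from PIL import Image
from torchvision import models

# General-purpose pretrained classifier; its 1000 ImageNet labels
# include broad classes such as "airliner" and "warplane".
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open("airshow.jpg")        # hypothetical photo
batch = preprocess(img).unsqueeze(0)   # add a batch dimension
with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]
best = int(probs.argmax())
print(weights.meta["categories"][best], float(probs[best]))
```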
You were deliberately highly selective with the phrases of mine you used to support your case; you carefully excluded the “but …” that followed in my text. I was being “polite”. You are entitled to be selective, but your post was full of the “tropes” trotted out to “discredit” AI, and they are the opposite of what I have encountered.
You would be a deliberately flawed model if we were looking at you as a piece of AI, in the context of this particular topic, not on your skill/ability as a photographer or photograph editor.
Take care (AI is probably out to get you)
Bryan
Yes, I was selective, as you have been. And you still don’t understand my requirement. But we’re going in circles. I hope it gives you results for your needs, but we are on entirely different pages. There I shall leave it.