Published - 16 Jun 2025
Forest Industry, Opticom, Healthcare Industry
Is AI losing its shine? A Q&A with Niklas Schulz*
- How often do you use AI?
- Daily. New advances are dropping every week, and it is important to stay updated. Used correctly and with the right checks and balances, AI is a useful tool that helps us work more efficiently.
- What is AI actually good for?
- AI is brilliant at speed and repeatability. Generating a short explanation of a concept or process, or producing a first pass at clustering 3,000 open ends, takes seconds. It can also be a good sounding board for bouncing around ideas.
Here’s the thing: quality in = quality out. If the data it’s been fed are of high quality, the output will be too.
But the converse also holds: garbage in = garbage out, which is where checks and balances come in. If the input data carry bias or gaps, the output only amplifies them, faster.
AI cannot be fully trusted because it can only ever be as good as the quality of its input data. I would never take what AI says on blind faith; I’ve been disappointed too often.

- How does AI disappoint?
- AI still stumbles over basics that skilled researchers take for granted.
The latest SHADES benchmark shows large language models simply carrying English-language stereotypes into every other tongue, so bias gets amplified at scale.
Accuracy is no safer: recent medical benchmarks find that even the best models still hallucinate - confidently but incorrectly - roughly one answer in ten.
In-depth research is about more than facts. At Opticom we turn data into meaning. Trust, intuition, nuance, contradictions, and negations: on that playing field, AI is no match for the human touch.
- An example of where man beats machine?
- Let’s say a respondent answers a question with a double negation: “It is not unimportant for us that we…”. Models can flag a sentiment shift but struggle when a stakeholder says “yes, but…” and later adds “…unless…”. Turning these contradictions into strategy is still a human sport.
AI is not yet adept at understanding context clashes like these and has proved unreliable in interpreting them. Add in cultural and linguistic differences and it becomes even more problematic.
- AI is multilingual though, and fast replacing translators?
- It can do the simple stuff: it nails literal meaning. But again, when it comes to an in-depth understanding of the way people think and communicate, it can easily get lost in translation. We know the value of understanding culture, context, dialect and idiom, of adapting to the respondent, prompting and probing. AI is no match for a human here, and least of all one trained by Opticom with our 30+ years of industry experience.
- Your conclusion? Man or machine?
- Both. Used correctly, AI is useful. It can handle volume, consistency, and first-pass patterning. Replacing human intuition and chemistry, however, is an entirely different matter. We all know the pros and cons of conversing with a chatbot. The idea that AI could match our expert interviewers, i.e., generate trust, inspire confidence, read between the lines, ensure nothing is lost in translation, listen to what isn’t said as well as to what is, and understand cultural and contextual nuance, bias and contradiction? That I cannot see. Skilled interviewers obtain the highest-quality data while rigorously supervised AI supports the analysis. This is how we keep our clients ahead of the curve.
*Niklas is an Environmental Engineer and Research & Sustainability Team Lead at Opticom