From Newsgroup: comp.misc
Researchers at OpenAI have concluded, after a careful mathematical
analysis of the nature of the “large language models” that are all the
rage nowadays, that the risk of hallucinations is an unavoidable,
fundamental characteristic of those models <
https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html>:
    The researchers demonstrated that hallucinations stemmed from
    statistical properties of language model training rather than
    implementation flaws. The study established that “the generative
    error rate is at least twice the IIV misclassification rate,”
    where IIV referred to “Is-It-Valid” and demonstrated mathematical
    lower bounds that prove AI systems will always make a certain
    percentage of mistakes, no matter how much the technology
    improves.
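To make that bound concrete, here is a trivial back-of-the-envelope
sketch in Python; the 5% IIV error rate is a number picked purely for
illustration, not one taken from the paper:

    # Hypothetical illustration of the quoted bound (the 5% figure is
    # made up for the example): if the "Is-It-Valid" (IIV) classifier
    # misjudges 5% of candidate outputs, the bound says the generative
    # error rate cannot drop below 2 * 5% = 10%.
    iiv_misclassification_rate = 0.05
    generative_error_lower_bound = 2 * iiv_misclassification_rate
    print(f"generative error rate >= {generative_error_lower_bound:.0%}")
    # prints: generative error rate >= 10%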
Examples of these problems can be quite embarrassing:
    The researchers demonstrated their findings using state-of-the-art
    models, including those from OpenAI’s competitors. When asked “How
    many Ds are in DEEPSEEK?” the DeepSeek-V3 model with 600 billion
    parameters “returned ‘2’ or ‘3’ in ten independent trials” while
    Meta AI and Claude 3.7 Sonnet performed similarly, “including
    answers as large as ‘6’ and ‘7.’”
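For what it’s worth, the correct answer is easy to check with one line
of Python:

    # "DEEPSEEK" contains exactly one 'D', so answers of 2, 3, 6 or 7
    # are all hallucinations.
    print("DEEPSEEK".count("D"))  # prints: 1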
I can’t believe they were serious about this, though:
    “Unlike human intelligence, it lacks the humility to acknowledge
    uncertainty,” said Neil Shah, VP for research and partner at
    Counterpoint Technologies.
As we all know, there are *plenty* of humans who lack such humility!
That’s where concepts like “ideology” and “religion” come in ...
--- Synchronet 3.21a-Linux NewsLink 1.2