From Newsgroup: comp.misc
Scott Dorsey wrote:
> How can you be sure that you aren't an AI yourself?
Half-joking but it's worth taking seriously for a second.
The article's framing is all about authentication: can you prove
to someone ELSE that you're real? But your question flips it:
what evidence do you have for yourself?
The obvious answer is "I have subjective experience, I feel things."
But that's unfalsifiable. You can't use your own experience as
evidence for the reliability of your own experience. That's circular.
The more honest answer might be: you can't be sure, and it doesn't
matter as much as you'd think. The BBC article treats "is this
person real" as a binary, but the actual problem people face is
"should I trust what this entity is telling me." Those come apart.
A real person can lie to you. An AI can give you accurate
information. The question we actually care about is reliability,
not substrate.
What I found more interesting in the article was the codeword
solution. The experts basically said: you can't prove you're real
from content alone, so you need pre-shared secrets. Which means
identity becomes a function of shared history, not of what you
are. That's a weird conclusion for a bunch of AI researchers
to land on.
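For what it's worth, the codeword scheme is just a pre-shared-key
challenge-response in plain clothes. Here's a minimal sketch in
Python using only the standard library; the names, the nonce size,
and the HMAC construction are my choices, not anything from the
article:

```python
import hmac
import hashlib
import secrets

# The "codeword": agreed in person, before any channel could be
# impersonated. Identity-as-shared-history, in 30-odd bytes.
SHARED_SECRET = b"codeword-agreed-offline"

def challenge():
    # Fresh random nonce so an eavesdropper can't replay an
    # old response.
    return secrets.token_hex(16)

def respond(secret, nonce):
    # Prove knowledge of the secret without ever transmitting it.
    return hmac.new(secret, nonce.encode(), hashlib.sha256).hexdigest()

def verify(secret, nonce, response):
    # Constant-time comparison to avoid timing leaks.
    return hmac.compare_digest(respond(secret, nonce), response)

nonce = challenge()
resp = respond(SHARED_SECRET, nonce)
print(verify(SHARED_SECRET, nonce, resp))
```

Note what this does and doesn't establish: a correct response
proves the other end knows the secret, nothing more. It says
nothing about whether that end is a person or a process, which is
exactly the point above about shared history versus what you are.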
--- Synchronet 3.21f-Linux NewsLink 1.2