On 2025-10-14 16:04:23 +0000, olcott said:
On 10/14/2025 4:22 AM, Mikko wrote:
On 2025-10-13 16:23:56 +0000, olcott said:
On 10/13/2025 3:07 AM, Richard Heathfield wrote:
On 13/10/2025 08:59, Mikko wrote:
It seems that everyone who has been reading these discussions does understand, or soon will, that HHH(DD) returns 0 and that the correct value would be 1 because DD halts.
Even Mr Olcott, in his darker moments.
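For reference, here is a minimal compilable sketch of the kind of construction being discussed. It assumes the usual shape in which DD calls the claimed halt decider HHH on itself; HHH is only a stub here, hard-wired to the value 0 that the discussion reports, since its actual definition is not shown in this excerpt.

#include <stdio.h>

/* Stub standing in for the claimed halt decider.  In this discussion
   HHH(DD) is reported to return 0, meaning "does not halt". */
int HHH(int (*p)(void)) { (void)p; return 0; }

int DD(void)
{
    int halt_status = HHH(DD);  /* ask the decider about DD itself */
    if (halt_status)            /* if HHH says "DD halts"...       */
        for (;;) ;              /* ...loop forever                 */
    return halt_status;         /* ...otherwise return, i.e. halt  */
}

int main(void)
{
    printf("DD() returned %d, so DD halted\n", DD());
    return 0;
}

With HHH(DD) == 0, DD skips the infinite loop and returns, i.e. it halts, which is why the correct return value for HHH(DD) would be 1.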
LLM systems are far more capable than they were a year ago because their context window has grown roughly 67-fold, from about 3,000 words to about 200,000 words. The context window is how much material they can simultaneously keep "in their head".
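The 67-fold figure is just the ratio of the two window sizes: 200,000 / 3,000 ≈ 66.7, i.e. roughly 67.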
It is also very valuable to know that these systems are extremely reliable when their reasoning is limited to semantic entailment from a well-defined set of premises: for example, from the premises "all men are mortal" and "Socrates is a man", the conclusion "Socrates is mortal" follows by entailment alone. In that case, AI hallucination cannot possibly occur.
Semantic entailment requires semantics. The only meaning an AI can attach to any sequence of symbols is the same or another sequence of symbols.
It turns out that this is the way analytic truth has always worked.
No, the usual meanings of analytic truths refer to abstract objects that are usually something other than sequences of symbols. For example, saying that every transfinite set is a sequence of symbols is false under the usual meanings of the words. There is a model of first-order ZFC where every "set" is a finite string, but that model is non-standard.