On 2025-10-21, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> wrote:
i don't get y polcott keep hanging onto ai for dear life. anyone with
Throngs of dumb boomers are falling for AI generated videos, believing
them to be real. This is much the same thing.
AI is just another thing Olcott has no understanding of. He's not
researched the fundamentals of what it means to train a language
network, and how it is ultimately just token prediction.
It excels at generating good syntax. The reason for that is that the
vast amount of training data exhibits good syntax. (Where it has bad
syntax, it is idiosyncratic; whereas good syntax is broadly shared.)
I provide a basis to it and it does perform valid
semantic logical entailment on this basis and shows
But you're incapable of recognizing valid entailment from invalid.
Any freaking idiot can spew out baseless rhetoric
such as this. I could do the same sort of thing
and say you are wrong and stupidly wrong.
But you don't?
It is a whole other ballgame when one attempts
to point out actual errors that are not anchored
in one's own lack of comprehension.
You don't comprehend the pointing-out.
You need to have a sound reasoning basis to prove
that an error is an actual error.
No; /YOU/ need to have sound reasonings to prove /YOUR/
extraordinary claims. The burden is on you.
We already have the solid reasoning which says things are other than as
you say, and you don't have the faintest idea how to put a dent in it.
In other words you assume that I must be wrong
entirely on the basis that what I say does not
conform to conventional wisdom.
Yes; you are wrong entirely on the basis that what you say does not
follow a valid mode of inference for refuting an argument.
If you are trying to refute something which is not only a widely
accepted result, but whose reasoning anyone can follow to see it
for themselves, you are automatically assumed wrong.
The established result is presumed correct, pending your
presentation of a convincing argument.
That's not just wanton arbitrariness: your claims are being
directly refuted by elements of the established result which
we can refer to.
I cannot identify any flaw in the halting theorem. It's not simply
that I believe it because of the Big Names attached to it.
I'm convinced by the argumentation; and that conviction has
the side effect of convincing me of the falsehood of your
ineffective, contrary argumentation.
That is not any actual rebuttal of the specific points that I make.
No, indeed /that/ isn't; but plenty of those have also been made not
only by me but various others, over a considerable time span.
On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
On 2025-10-21, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
On 2025-10-19, dart200 <user7160@newsgrouper.org.invalid> wrote: >>>>>>>>>>> i don't get y polcott keep hanging onto ai for dear life. >>>>>>>>>>> anyone with
Throngs of dumb boomers are falling for AI generated videos, >>>>>>>>>> believing
them to be real. This is much the same thing.
AI is just another thing Olcott has no understanding of. He's not >>>>>>>>>> researched the fundamentals of what it means to train a language >>>>>>>>>> network, and how it is ultimately just token prediction.
It excels at generating good syntax. The reason for that is >>>>>>>>>> that the
vast amount of training data exhibits good syntax. (Where it >>>>>>>>>> has bad
syntax, it is idiosyncratic; whereas good syntax is broadly >>>>>>>>>> shared.)
I provide a basis to it and it does perform valid
semantic logical entailment on this basis and shows
But you're incapable of recognizing valid entailment from invalid. >>>>>>>>
Any freaking idiot can spew out baseless rhetoric
such as this. I could do the same sort of thing
and say you are wrong and stupidly wrong.
But you don't?
It is a whole other ballgame when one attempts
to point out actual errors that are not anchored
in one's own lack of comprehension.
You don't comprehend the pointing-out.
You need to have a sound reasoning basis to prove
that an error is an actual error.
No; /YOU/ need to have sound reasonings to prove /YOUR/
extraordinary claims. The burden is on you.
We already have the solid reasoning which says things are other than as >>>> you say, and you don't have the faintest idea how to put a dent in it. >>>>
In other words you assume that I must be wrong
entirely on the basis that what I say does not
conform to conventional wisdom.
Yes; you are wrong entirely on the basis that what you say does not
follow a valid mode of inference for refuting an argument.
If you are trying to refute something which is not only a widely
accepted result, but whose reasoning anyone can follow to see it
for themselves, you are automatically assumed wrong.
The established result is presumed correct, pending your
presentation of a convincing argument.
That's not just wanton arbitrariness: your claims are being
directly refuted by elements of the established result which
we can refer to.
I cannot identify any flaw in the halting theorem. It's not simply
that I believe it because of the Big Names attached to it.
And when I identify a flaw you simply ignore
whatever I say.
I'm convinced by the argumentation; and that conviction has
the side effect of convincing me of the falsehood of your
ineffective, contrary argumentation.
Not really; it actually gives you the bias to refuse
to pay attention.
That is not any actual rebuttal of the specific points that I make.
No, indeed /that/ isn't; but plenty of those have also been made not
only by me but various others, over a considerable time span.
Never any actual rebuttal ever since Professor
Sipser agreed with my words. Those exact same
words still form the basis of my whole proof.
Fritz Feldhase <franz.fri...@gmail.com> writes:
On Monday, March 6, 2023 at 3:56:52 AM UTC+1, olcott wrote:
On 3/5/2023 8:33 PM, Fritz Feldhase wrote:
On Monday, March 6, 2023 at 3:30:38 AM UTC+1, olcott wrote:
Does Sipser support your view/claim that you have refuted the halting theorem?
I needed Sipser for people [bla]
Professor Sipser only agreed that [...]
Does he write/teach that the halting theorem is invalid?
Tell us, oh genius!
So the answer is no. Noted.
Because he has >250 students he did not have time to examine anything
else. [...]
Oh, a CS professor does not have the time to check a refutation of the halting theorem. *lol*

I exchanged emails with him about this. He does not agree with anything substantive that PO has written. I won't quote him, as I don't have permission, but he was, let's say... forthright, in his reply to me.
joes <noreply@example.org> writes:
Am Wed, 21 Aug 2024 20:55:52 -0500 schrieb olcott:
Professor Sipser clearly agreed that an H that does a finite simulation
of D is to predict the behavior of an unlimited simulation of D.
If the simulator *itself* would not abort. The H called by D is,
by construction, the same and *does* abort.
We don't really know what context Sipser was given. I got in touch at
the time, so I do know he had enough context to know that PO's ideas were "wacky" and that he had agreed to what he considered a "minor remark".
Since PO considers his words finely crafted and key to his so-called
work, I think it's clear that Sipser did not take the "minor remark" he agreed to to mean what PO takes it to mean! My own take is that he
(Sipser) read it as a general remark about how to determine some cases,
i.e. that D names an input that H can partially simulate to determine
its halting or otherwise. We all know or could construct some such
cases.
I suspect he was tricked because PO used H and D as the names without
making it clear that D was constructed from H in the usual way (Sipser
uses H and D in at least one of his proofs). Of course, he is clued in
enough to know that, if D is indeed constructed from H like that, the
"minor remark" becomes true by being a hypothetical: if the moon is made
of cheese, the Martians can look forward to a fine fondue. But,
personally, I think the professor is more straight talking than that,
and he simply took it as a method that can work for some inputs. That's
the only way it could be seen as a "minor remark" without being accused of being disingenuous.
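For readers outside the thread, the "constructed from H in the usual way" step can be sketched in a few lines of Python. This is a hedged illustration only; make_D, H, and D are expository names, not anyone's actual code:

```python
def make_D(H):
    """Standard diagonal construction: build a program D that asks
    the would-be halt decider H about D itself, then does the
    opposite of whatever H predicts."""
    def D():
        if H(D):           # H predicts "D halts" ...
            while True:    # ... so D loops forever
                pass
        # H predicts "D never halts", so D just halts
    return D

# Whatever total H we pick, it is wrong about its own diagonal case.
# Example: the decider that always answers "never halts" (False):
H = lambda program: False
D = make_D(H)
D()                    # returns immediately, i.e. D halts
assert H(D) is False   # yet H classified D as non-halting
```

Picking an H that always answers True fails symmetrically: D then loops forever even though H claimed it halts. That is why no single H can be right about every D built this way.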
So that PO will have no cause to quote me as supporting his case: what Sipser understood he was agreeing to was NOT what PO interprets it as meaning. Sipser would not agree that the conclusion applies in PO's HHH(DDD) scenario, where DDD halts.
PO is trying to interpret Sipser's quote:
--- Start Sipser quote
If simulating halt decider H correctly simulates its input D
until H correctly determines that its simulated D would never
stop running unless aborted then
H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.
--- End Sipser quote
The following interpretation is ok:
If H is given input D, and while simulating D gathers enough
information to deduce that UTM(D) would never halt, then
H can abort its simulation and decide D never halts.
I'd say it's obvious that this is what Sipser is saying, because it's natural, correct, and relevant to what was being discussed (valid
strategy for a simulating halt decider). It is trivial to check that
what my interpretation says is valid:
if UTM(D) would never halt, then D never halts, so if H(D) returns
never_halts then that is the correct answer for the input. QED :)
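The "valid for some inputs" reading above is easy to demonstrate concretely. Here is a minimal Python sketch, under the strong simplifying assumption that programs run over a finite state space, where a repeated state genuinely proves non-halting; the names simulating_decider, step, and halt_state are hypothetical, for illustration only:

```python
def simulating_decider(step, start, halt_state):
    """Partial simulating halt decider for a deterministic program
    given as a step function over a FINITE state space.  It simulates
    while recording visited states; a repeated state proves the run
    cycles forever, so aborting and answering "never_halts" is sound.
    This deduction is only available because the state space is
    finite -- it does not generalize to all programs."""
    seen = set()
    state = start
    while state != halt_state:
        if state in seen:
            return "never_halts"   # cycle detected: abort and reject
        seen.add(state)
        state = step(state)
    return "halts"

# Counting down from 5 reaches the halt state 0:
assert simulating_decider(lambda s: s - 1, 5, 0) == "halts"
# Oscillating between 0 and 1 never reaches state 99:
assert simulating_decider(lambda s: 1 - s, 1, 99) == "never_halts"
```

This matches the interpretation above: the decider aborts only after it has actually gathered enough information to deduce that an unaborted simulation would never halt.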
On 10/20/2025 11:00 PM, olcott wrote:
[...]
Never any actual rebuttal ever since Professor
Sipser agreed with my words. Those exact same
words still form the basis of my whole proof.
You mean the words where he didn't agree with your interpretation of them?
On 10/20/2025 10:05 PM, dbush wrote:
[...]
You mean the words where he didn't agree with your interpretation of
them?
According to a Claude AI analysis there
are only two interpretations and one of
them is wrong and the other one is my
interpretation.
On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
[...]
No, indeed /that/ isn't; but plenty of those have also been made not
only by me but various others, over a considerable time span.
Never any actual rebuttal ever since Professor
Sipser agreed with my words.
Those exact same
words still form the basis of my whole proof.
*Here is an accurate paraphrase of those words*
<Input to LLM systems>
Perform the complete simulation specified in main() and provide a
full execution trace before doing anything else.
Simulating Termination Analyzer HHH correctly simulates its input until:
(a) It correctly detects that its simulated input cannot
possibly reach its own simulated final halt state then:
abort simulation and return 0 rejecting its input as non-halting.
On 10/20/2025 11:13 PM, olcott wrote:
On 10/20/2025 10:05 PM, dbush wrote:
You mean the words where he didn't agree with your interpretation of
them?
According to a Claude AI analysis there
are only two interpretations and one of
them is wrong and the other one is my
interpretation.
Whether you think one interpretation is wrong is irrelevant. What is relevant is that that's how everyone else including Sipser interpreted
those words, so you lie by implying that he agrees with your
interpretation.
On 2025-10-21, olcott <polcott333@gmail.com> wrote:
[...]
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
I'm convinced by the argumentation; and that conviction has
the side effect of convincing me of the falsehood of your
ineffective, contrary argumentation.
Not really it actually gives you the bias to refuse
to pay attention.
LOL! the world at large is incredibly biased against giving a crank
like you any attention.
On 10/20/2025 10:16 PM, dbush wrote:
[...]
Whether you think one interpretation is wrong is irrelevant. What is
relevant is that that's how everyone else including Sipser interpreted
those words, so you lie by implying that he agrees with your
interpretation.
<repeat of previously refuted point>
conform to conventional wisdom.
Yes; you are wrong entirely on the basis that what you say does not
follow a valid mode of inference for refuting an argument.
If you are trying to refute something which is not only a widely
accepted result, but whose reasoning anyone can follow to see it
for themselves, you are automatically assumed wrong.
The established result is presumed correct, pending your
presentation of a convincing argument.
That's not just wanton arbitrariness: your claims are being
directly refuted by elements of the established result which
we can refer to.
I cannot identify any flaw in the halting theorem. It's not simply
that I believe it because of the Big Names attached to it.
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim to have identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore, the halting problem is wrong.
On 10/22/2025 7:56 AM, olcott wrote:
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore, the halting problem is wrong.
False:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
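The two clauses above can be restated as a single correctness predicate: on every input, a decider's answer must equal the directly observed behavior. A minimal sketch (the function name is invented here; `directly_halts` stands in for the behavior of the direct execution, which the mapping fixes whether or not any decider can compute it):

```c
#include <assert.h>

/* The required halting mapping as a predicate: an answer for
 * (<X>,Y) is correct iff it is 1 exactly when X(Y) halts when
 * executed directly, and 0 exactly when it does not. */
int answer_is_correct(int decider_answer, int directly_halts)
{
    return decider_answer == directly_halts;
}
```

In particular, if DD() halts when executed directly, then 0 is not a correct answer for the input describing DD: `answer_is_correct(0, 1)` is false.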
On 10/22/2025 7:25 AM, dbush wrote:
False:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed
directly
Yes, that is the exact error that I have been
referring to.
In the case of HHH(DD), the above requires HHH to
report on the behavior of its caller, and HHH has
no way to even know who its caller is.
My simulating halt decider exposed this gap of
false assumptions.
On 10/22/2025 8:48 AM, olcott wrote:
Yes, that is the exact error that I have been
referring to.
That is not an error. That is simply a mapping that you have admitted exists.
In the case of HHH(DD) the above requires HHH to
report on the behavior of its caller
False. It requires HHH to report on the behavior of the machine
described by its input.
On 10/22/2025 8:00 AM, dbush wrote:
In the case of HHH(DD) the above requires HHH to
report on the behavior of its caller
False. It requires HHH to report on the behavior of the machine
described by its input.
That includes that DD calls HHH(DD) in recursive
simulation.
On 10/22/2025 9:47 AM, olcott wrote:
In the case of HHH(DD) the above requires HHH to
report on the behavior of its caller
False. It requires HHH to report on the behavior of the machine
described by its input.
That includes that DD calls HHH(DD) in recursive
simulation.
Which therefore includes the fact that HHH(DD) will return 0 and that DD will subsequently halt.
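That claim can be checked mechanically with a stub: replace HHH by a stand-in that returns 0, the value reported for HHH(DD), and observe that DD then halts. This is a hedged sketch of the shape of the construction only; the stub is not olcott's simulating HHH, whose x86 simulation machinery is not reproduced here.

```c
#include <assert.h>

typedef int (*fn)(void);

/* Stand-in for HHH: unconditionally reports 0 ("does not halt"),
 * mirroring the answer HHH(DD) is said to return. */
static int HHH(fn p)
{
    (void)p;
    return 0;
}

/* Diagonal-style caller: does the opposite of what HHH predicts. */
static int DD(void)
{
    int status = HHH(DD);
    if (status)        /* "halts" predicted: loop forever      */
        for (;;) { }
    return status;     /* "never halts" predicted: halt at once */
}
```

Since HHH(DD) returns 0, DD() returns 0 and halts; by the mapping stated earlier in the thread, 0 was therefore the wrong answer for this input.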
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
Nope; all the ways you claim to have identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Blah, Blah Blah, no Olcott you are wrong, I know
that you are wrong because I simply don't believe you.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs; therefore, the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
Blah, Blah Blah, no Olcott you are wrong, I know
that you are wrong because I simply don't believe you.
You are wrong because (1) I don't see that gaping flaw in the
definition of the halting problem, and (2) you don't even
try to explain how such a flaw can exist. Where, how, and why
is any decider being asked to decide something other than
an input representable as a finite string?
I've repeated many times that the diagonal case is constructable as a
finite string, whose halting status can be readily ascertained.
Because it's obvious to me, of course I'm going to reject
baseless claims that simply ask me to /believe/ otherwise.
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
even though I do remember that you did do this once.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
On 10/22/2025 8:50 AM, dbush wrote:
On 10/22/2025 9:47 AM, olcott wrote:
On 10/22/2025 8:00 AM, dbush wrote:
On 10/22/2025 8:48 AM, olcott wrote:
On 10/22/2025 7:25 AM, dbush wrote:
On 10/22/2025 7:56 AM, olcott wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
On 2025-10-21, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 9:11 PM, Kaz Kylheku wrote:
On 2025-10-21, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 8:27 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 4:03 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote: >>>>>>>>>>>>>>> On 10/20/2025 1:29 PM, Kaz Kylheku wrote:
On 2025-10-20, olcott <polcott333@gmail.com> wrote: >>>>>>>>>>>>>>>>> On 10/19/2025 2:39 PM, Kaz Kylheku wrote:
But you're incapable of recognizing valid entailment >>>>>>>>>>>>>>>> from invalid.On 2025-10-19, dart200
<user7160@newsgrouper.org.invalid> wrote: >>>>>>>>>>>>>>>>>>> i don't get y polcott keep hanging onto ai for dear >>>>>>>>>>>>>>>>>>> life. anyone with
Throngs of dumb boomers are falling for AI generated >>>>>>>>>>>>>>>>>> videos, believing
them to be real. This is much the same thing. >>>>>>>>>>>>>>>>>>
AI is just another thing Olcott has no understanding >>>>>>>>>>>>>>>>>> of. He's not
researched the fundamentals of what it means to train >>>>>>>>>>>>>>>>>> a language
network, and how it is ultimately just token prediction. >>>>>>>>>>>>>>>>>>
It excels at generating good syntax. The reason for >>>>>>>>>>>>>>>>>> that is that the
vast amount of training data exhibits good syntax. >>>>>>>>>>>>>>>>>> (Where it has bad
syntax, it is idiosyncratic; whereas good syntax is >>>>>>>>>>>>>>>>>> broadly shared.)
I provide a basis to it and it does perform valid >>>>>>>>>>>>>>>>> semantic logical entailment on this basis and shows >>>>>>>>>>>>>>>>
Any freaking idiot can spew out baseless rhetoric >>>>>>>>>>>>>>> such as this. I could do the same sort of thing
and say you are wrong and stupidly wrong.
But you don't?
It is a whole other ballgame when one attempts
to point out actual errors that are not anchored >>>>>>>>>>>>>>> in one's own lack of comprehension.
You don't comprehend the pointing-out.
You need to have a sound reasoning basis to prove
that an error is an actual error.
No; /YOU/ need to have sound reasoning to prove /YOUR/
extraordinary claims. The burden is on you.
We already have the solid reasoning which says things are other than as
you say, and you don't have the faintest idea how to put a dent in it.
In other words you assume that I must be wrong
entirely on the basis that what I say does not
conform to conventional wisdom.
Yes; you are wrong entirely on the basis that what you say does not
follow a valid mode of inference for refuting an argument.
If you are trying to refute something which is not only a widely
accepted result, but whose reasoning anyone can follow to see it
for themselves, you are automatically assumed wrong.
The established result is presumed correct, pending your
presentation of a convincing argument.
That's not just wanton arbitrariness: your claims are being
directly refuted by elements of the established result which
we can refer to.
I cannot identify any flaw in the halting theorem. It's not simply
that I believe it because of the Big Names attached to it.
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
False:
(<X>,Y) maps to 1 if and only if X(Y) halts when executed directly
(<X>,Y) maps to 0 if and only if X(Y) does not halt when executed directly
Yes, that is the exact error that I have been
referring to.
That is not an error. That is simply a mapping that you have
admitted exists.
In the case of HHH(DD) the above requires HHH to
report on the behavior of its caller
False. It requires HHH to report on the behavior of the machine
described by its input.
That includes that DD calls HHH(DD) in recursive
simulation.
Which therefore includes the fact that HHH(DD) will return 0 and that
DD will subsequently halt.
You keep ignoring that we are only focusing on
DD correctly simulated by HHH.
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
DD can be passed as an argument to any decider, not only HHH.
For instance, don't you have a HHH1 such that HHH1(DD)
correctly steps DD to the end and returns the correct value 1?
DD's behavior is dependent on a decider which it calls;
but not dependent on anything which is analyzing DD.
Even when those two are the same, they are different
instances/activations.
DD creates an activation of HHH on whose result it depends.
The definition of DD's behavior does not depend on the ongoing
activation of something which happens to be analyzing it;
it has no knowledge of that.
On 2025-10-22 12:40, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire description of HHH.
André
On 10/22/2025 2:24 PM, André G. Isaak wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
André
That includes that HHH(DD) keeps simulating yet
another instance of itself and DD forever and ever
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
On 10/22/2025 1:40 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 10:40 AM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/20/2025 10:20 PM, Kaz Kylheku wrote:
And when I identify a flaw you simply ignore
whatever I say.
Nope; all the ways you claim you've identified a flaw have been
dissected by multiple people in much greater detail than they deserve.
It is disingenuous to say that you've simply had your details ignored.
Turing machines in general can only compute mappings
from their inputs. The halting problem requires computing
mappings that in some cases are not provided in the
inputs therefore the halting problem is wrong.
The halting problem positively does not propose anything
like that, which would be gapingly wrong.
It only seems that way because you are unable to
No, it doesn't only seem that way. Thanks for playing.
provide the actual mapping that the actual input
to HHH(DD) specifies when DD is simulated by HHH
according to the semantics of the C language,
DD is a "finite string input" which specifies a behavior that is
independent of what simulates it,
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
That too is stupidly incorrect.
It is the job of every simulating halt decider
to predict what the behavior of its simulated
input would be if it never aborted.
When a person is asked a yes or no question
there are not two separate people in parallel
universes one that answers yes and one that
answers no. There is one person that thinks
through both hypothetical possibilities and
then provides one answer.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
Yet again with deflection.
That the input to HHH(DD) specifies non-halting behavior
False, as you have admitted otherwise:
On 10/20/2025 10:45 PM, dbush wrote:
And it is a semantic tautology that a finite string description of a
Turing machine is stipulated to specify all semantic properties of the
described machine, including whether it halts when executed directly.
And it is this semantic property that halt deciders are required to
report on.
Yes, that is all correct.
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
I made sure to read what you said all the way through
this time.
DD correctly simulated by HHH cannot possibly
reach its own final halt state no matter what HHH does.
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH,
but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
Thus proving that DD correctly simulated by HHH
cannot possibly reach its own simulated final halt
state no matter what HHH does.
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the
"finite string input" DD *must* include as a substring the entire
description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH, >>>>> but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
Thus proving that DD correctly simulated by HHH
cannot possibly reach its own simulated final halt
state no matter what HHH does.
I explained that a nested simulation tower is two dimensional.
One dimension is the simulation level, the nesting itself;
that goes out to infinity.
Due to the aborting behavior of HHH,
it is not actually realized in simulation; we have to step
through the aborted simulations to keep it going.
The other dimension is the execution /within/ the simulations.
That can be halting or non-halting.
In the HHH(DD) simulation tower, though that is infinite,
the simulations are halting.
I said that before. Your memory of that has vaporized, and you have now focused only on my statement that the simulation tower is infinite.
The depth of the simulation tower, and the halting of the simulations
within that tower, are independent phenomena.
A decider must not mistake one for the other.
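The interplay described above, where the abort cuts the nested simulation short while the simulated program nonetheless halts when actually run, can be sketched in a toy model. Everything below is an illustrative assumption: the names HHH and DD come from the thread, but the one-flag "pattern detection" and exception-based abort merely stand in for whatever a real simulating decider would do.

```python
# Toy sketch of the thread's HHH/DD pair. The abort mechanism here
# (a global flag plus an exception) is an assumption for illustration,
# not the actual x86-level simulator discussed in the thread.

class Abort(Exception):
    """Raised when HHH tears down a nested simulation."""

simulating = False  # are we currently inside HHH's simulation?

def HHH(p):
    """Toy 'simulating halt decider': run p; report 1 (halts) if the
    run completes, 0 (non-halting) if the abort pattern fires."""
    global simulating
    if simulating:
        # The simulated input has invoked its own simulator again:
        # the toy's pattern detection fires and aborts everything.
        raise Abort
    simulating = True
    try:
        p()            # "simulate" the input by running it
        return 1       # simulated run reached its final state
    except Abort:
        return 0       # aborted: report non-halting
    finally:
        simulating = False

def DD():
    """The diagonal-style input built on HHH."""
    if HHH(DD) == 0:   # HHH says "DD does not halt" ...
        return 1       # ... whereupon DD promptly halts
    while True:        # otherwise DD would loop forever
        pass
```

In this toy, HHH(DD) returns 0 because its simulation of DD is aborted before reaching DD's return, yet calling DD() directly returns 1: the aborted simulation fails to reproduce a halt state that DD, run as an actual program, does reach.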
On 10/22/2025 6:15 PM, Kaz Kylheku wrote:
I explained that a nested simulation tower is two dimensional.
One dimension is the simulation level, the nesting itself;
that goes out to infinity.
Great. Thus the input to HHH(DD) specifies behavior such that the correctly simulated DD cannot possibly reach its own simulated final halt state.
On 10/22/2025 7:24 PM, olcott wrote:
Great. Thus the input to HHH(DD)
i.e. finite string DD, which is the description of machine DD, i.e. <DD>, and therefore stipulated to specify all semantic properties of machine DD, including the fact that it halts when executed directly.
specifies behavior such that the correctly simulated DD
i.e. UTM(DD)
cannot possibly reach its own simulated final halt state.
False, as proven by UTM(DD) halting.
On 2025-10-22 12:40, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 12:07 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
That is stupidly incorrect.
That DD calls HHH(DD) (its own simulator) IS PART OF
THE BEHAVIOR THAT THE INPUT TO HHH(DD) SPECIFIES.
In no way am I saying that DD is not built on HHH, and
does not have a behavior dependent on that of HHH.
Why would I ever say that?
But that entire bundle is one fixed case DD, with a single behavior,
which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the "finite string input" DD *must* include as a substring the entire description of HHH.
On 22/10/2025 21:24, André G. Isaak wrote:
The problem is you are a bunch of fucking retards,
you spamming pieces of shit and 10 years of flooding
all channels with just retarded bullshit, and rigorously
cross-posted.
*Plonk*
Julio
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
I made sure to read what you said all the way through
this time. DD correctly simulated by HHH cannot possibly
reach its own final halt state no matter what HHH does.
The /simulation/ of DD by HHH will not /reproduce/ the halt
state of DD, which DD undeniably /has/.
DD specifies a procedure that transitions to a terminating state, whether or not any given simulation of it is carried far enough to show that.
On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
*Hence the halting problem is wrong*
The /simulation/ of DD by HHH will not /reproduce/ the halt
state of DD, which DD undeniably /has/.
Turing machine deciders only compute the mapping
from their finite string inputs to an accept state
or reject state on the basis that this input finite
string specifies a semantic or syntactic property.
The halting problem requires that halt deciders do what no Turing machine decider can do: report on the semantic property of non-inputs.
On 2025-10-23, olcott <polcott333@gmail.com> wrote:
I made sure to read what you said all the way through
this time. DD correctly simulated by HHH cannot possibly
reach its own final halt state no matter what HHH does.
The /simulation/ of DD by HHH will not /reproduce/ the halt
state of DD, which DD undeniably /has/.
The much simpler explanation is that the decider is wrong. By being wrong, it contributes one point of evidence that confirms the theorem.
Turing machine deciders only compute the mapping
from their finite string inputs to an accept state
or reject state on the basis that this input finite
string specifies a semantic or syntactic property.
I don't understand what you think you're achieving by repeating this, but its inclusion does mean that your posting has four correct lines, improving its correctness average.
The halting problem requires that halt deciders do what
no Turing machine decider can do report on the semantic
property of non-inputs.
It positively doesn't.
there will still be a nested simulation tower
Even the diagonal cases that defeat deciders are
all valid inputs: self-contained finite strings denoting machines, which perpetrate their trick without referencing anything outside of their own description.
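The self-contained diagonal construction described here can be sketched generically: given any purported total halt decider, close over it to build the input that does the opposite of its prediction. The decider interface below (a callable returning True for "halts") is an assumption for illustration only.

```python
def make_diagonal(h):
    """Given any total decider h(program) -> bool (True = 'halts'),
    build the program that defeats h by doing the opposite of
    whatever h predicts about it."""
    def d():
        if h(d):            # h predicts d halts ...
            while True:     # ... so d runs forever
                pass
        # h predicts d loops forever, so d simply halts
    return d

# A decider that always answers "does not halt" is wrong about
# the diagonal program built from it, which halts immediately:
always_no = lambda p: False
d = make_diagonal(always_no)
```

Whichever answer h gives on d, it is wrong, and d is a perfectly ordinary self-contained program whose text merely embeds h's text.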
You are just making up nonsense and presenting without a shred of
rational evidence (which, of course, doesn't exist for a falsehood).
HHH(DD) does report on the behavior that its actual
input actually specifies:
On 10/23/2025 11:47 AM, Kaz Kylheku wrote:
The much simpler explanation is that the decider is wrong.
https://www.liarparadox.org/Simple_but_Wrong.png
The halting problem requires that halt deciders do what
no Turing machine decider can do report on the semantic
property of non-inputs.
It positively doesn't.
HHH(DD) does report on the behavior that its actual
input actually specifies:
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
there will still be a nested simulation tower
The halting problem requires HHH(DD) to report on
something else, QED the halting problem is wrong.
Turing machine deciders only compute the mapping
from their finite string inputs to an accept state
or reject state on the basis that this input finite
string specifies a semantic or syntactic property.
This means that the ultimate measure of the behavior
that a finite string input D specifies is D correctly
simulated by simulating halt decider H.
On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
The /simulation/ of DD by HHH will not /reproduce/ the halt
state of DD, which DD undeniably /has/.
The finite string as an actual input to HHH(DD)
*does not have the halting property*
Turing machine deciders only compute the mapping
from their finite string inputs
Turing machine deciders only compute the mapping
from their finite string inputs
Turing machine deciders only compute the mapping
from their finite string inputs
The DD that has the halting property is not an input
The DD that has the halting property is not an input
The DD that has the halting property is not an input
On 10/23/2025 6:08 PM, olcott wrote:
The finite string as an actual input to HHH(DD)
i.e. finite string DD which is the description of machine DD and
therefore is stipulated to specify all semantic properties of the
described machine, including halting when executed directly.
*does not have the halting property*
False, see above.
Turing machine deciders only compute the mapping
from their finite string inputs
And the finite string input DD has the halting property as shown above.
Turing machine deciders only compute the mapping
from their finite string inputs
Turing machine deciders only compute the mapping
from their finite string inputs
The DD that has the halting property
i.e. finite string DD which is an input to HHH
is not an input
False, see above.
On 10/23/2025 5:40 PM, dbush wrote:
Correct simulation is defined as simulation
according to the semantics of the specification
language: C, x86 or TM description.
The execution trace of DD correctly simulated by HHH
Does not exist because HHH aborts.
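The disputed notion of an execution trace of an aborted simulation can be made concrete in a small sketch: model a program as a generator of trace steps, and a simulator with a step budget. The five-step dd_steps stand-in is hypothetical, not DD's real x86 trace.

```python
def dd_steps():
    """Hypothetical step-by-step trace of a halting program;
    falling off the end models reaching the final halt state."""
    for i in range(5):
        yield f"step{i}"

def simulate(prog, max_steps):
    """Run prog's steps up to max_steps.
    Returns (trace, halted): halted is True only if the run
    actually reached its final state within the budget."""
    trace, halted = [], False
    it = prog()
    for _ in range(max_steps):
        try:
            trace.append(next(it))
        except StopIteration:
            halted = True
            break
    return trace, halted
```

A patient (UTM-like) simulation sees the program halt; an aborted one yields only a proper prefix of that trace, which never contains the final halt state even though the program has one.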
On 10/22/2025 6:01 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 3:20 PM, Kaz Kylheku wrote:
On 2025-10-22, olcott <polcott333@gmail.com> wrote:
On 10/22/2025 2:52 PM, Kaz Kylheku wrote:
On 2025-10-22, André G Isaak <agisaak@gm.invalid> wrote:
On 2025-10-22 12:40, Kaz Kylheku wrote:
But that entire bundle is one fixed case DD, with a single behavior, >>>>>>>> which is a property of DD, which is a finite string.
I think part of the problem here is that Olcott doesn't grasp that the >>>>>>> "finite string input" DD *must* include as a substring the entire >>>>>>> description of HHH.
Furthermore, he doesn't get that it doesn't literally have to be HHH, >>>>>> but the same algorithm: a workalike.
The HHH analyzing DD's halting could be in C, while the HHH
called by DD could be in Python.
DD does call HHH(DD) in recursive simulation
and you try to get away with lying about it.
I'm saying that's not a requirement in the halting problem.
DD does not have to use that implementation of HHH; it can have
its own clean-room implementation and it can be in any language.
But nonetheless, yes, there will still be a nested simulation tower.
I made sure to read what you said all the way through
this time. DD correctly simulated by HHH cannot possibly
reach its own final halt state no matter what HHH does.
The /simulation/ of DD by HHH will not /reproduce/ the halt
state of DD, which DD undeniably /has/.
The finite string as an actual input to HHH(DD)
*does not have the halting property*
The DD that has the halting property is not an input
On 2025-10-23, olcott <polcott333@gmail.com> wrote:
Then show all the steps of DD simulated by HHH.
The finite string as an actual input to HHH(DD)
*does not have the halting property*
It obviously does.
On 10/23/2025 5:40 PM, dbush wrote:
On 10/23/2025 6:08 PM, olcott wrote:
The finite string as an actual input to HHH(DD)
i.e. finite string DD which is the description of machine DD and
therefore is stipulated to specify all semantic properties of the
described machine, including halting when executed directly.
*does not have the halting property*
False, see above.
Turing machine deciders only compute the mapping
from their finite string inputs
And the finite string input DD has the halting property as shown above.
Turing machine deciders only compute the mapping
from their finite string inputs
The DD that has the halting property
i.e. finite string DD which is an input to HH
is not an input
False, see above.
Correct simulation is defined as simulation
according to the semantics of the specification
language: C, x86 or TM description.
The execution trace of DD correctly simulated
by HHH differs from the execution trace of
DD correctly simulated by HHH1, proving that I
am right and you are stupid or dishonest.
On 2025-10-23, olcott <polcott333@gmail.com> wrote:
Correct simulation is defined as simulation
according to the semantics of the specification
language: C, x86 or TM description.
Correct simulation must continue while the
final instruction has not been reached.
The correct simulation of a non-terminating machine
never stops.
The correct simulation of a terminating machine
must reach its halt state.
The execution trace of DD correctly simulated
by HHH differs from the execution trace of
DD correctly simulated by HHH1, proving that I
am right and you are stupid or dishonest.
Someone identifiable as an engineer, and not necessarily even
a great one, will immediately know that if two simulations
(of a deterministic program that has a single behavior)
do not agree, /at most/ one of them can be called "correct".
They could be both wrong, but they cannot be both right.
If you think so, then obviously you must be stupid
or dishonest.
On 10/23/2025 6:45 PM, Kaz Kylheku wrote:
On 2025-10-23, olcott <polcott333@gmail.com> wrote:
Then show all the steps of DD simulated by HHH.
The finite string as an actual input to HHH(DD)
*does not have the halting property*
It obviously does.
according to the semantics of the C programming
language where DD reaches its own final halt
state by pure simulation with no inference by
anything.
This is exactly what I mean:

int DD()
{
    int Halt_Status = UTM(DD);
    if (Halt_Status)
        HERE: goto HERE;
    return Halt_Status;
}

int main()
{
    UTM(DD);
}
On 2025-10-28, dbush <dbush.mobile@gmail.com> wrote:
On 10/28/2025 4:57 PM, olcott wrote:
On 10/28/2025 2:37 PM, Kaz Kylheku wrote:
On 2025-10-28, olcott <polcott333@gmail.com> wrote:
On 10/28/2025 11:35 AM, Kaz Kylheku wrote:
On 2025-10-28, olcott <polcott333@gmail.com> wrote:
Deciders only compute a mapping from their actual
inputs. Computing the mapping from non-inputs is
outside of the scope of Turing machines.
Calculating the halting of certain inputs is indeed impossible
for some halting algorithms.
Not just impossible: outside of the scope of every Turing machine.
It's the same kind of thing as requiring the purely mental object
of a Turing machine to bake a birthday cake.
It simply isn't. Inputs that are not correctly solvable by some
deciders are decided by some others.
THIS INPUT IS SOLVABLE
THE NON-INPUT IS OUT-OF-SCOPE
Then why do you claim that H(D) must decide on this non input?
Because he claims that D is two things; it has two properties:
D.input and D.noninput.
H(D) is solving D.input (as machines are required) and (believe
him when he says) that D.input is nonterminating.
What is terminating is D.noninput (he acknowledges).
If some H_other decider is tested on H_other(D), then the "Olcott
reality distortion wave function" (ORDF) collapses, and D.input
becomes the same as D.noninput.
When observed by H, D.input and D.noninput have different quantum
states: D is effectively split, identifiable as two particles. When
observed by non-H, no difference in any quantum properties (charge, spin
...) is observed, and so D.input and D.noninput must be one and the
same; they are indistinguishable particles: https://en.wikipedia.org/wiki/Indistinguishable_particles
Olcott is a top quantum computer scientist, on the level of Dirac or
Feynman.
On 10/28/2025 5:14 PM, Kaz Kylheku wrote:
Because he claims that D is two things; it has two properties:
D.input and D.noninput.
H(D) is solving D.input (as machines are required) and (believe
him when he says) that D.input is nonterminating.
What is terminating is D.noninput (he acknowledges).
Good job.
Your naming conventions make things very clear.
If some H_other decider is tested on H_other(D), then the "Olcott
reality distortion wave function" (ORDF) collapses, and D.input
becomes the same as D.noninput.
*So we need to clarify*
D.input_to_H versus
D.non_input_to_H which can be:
D.input_to_H1
D.executed_from_main
I take Rice's semantic properties of programs and
clarify that this has always meant the semantic
properties of finite string machine descriptions.
Then I further divide this into
(a) semantic properties of INPUT finite strings
(b) semantic properties of NON_INPUT finite strings
On 2025-10-28, olcott <polcott333@gmail.com> wrote:
*So we need to clarify*
D.input_to_H versus
D.non_input_to_H which can be:
D.input_to_H1
D.executed_from_main
I take Rice's semantic properties of programs and
clarify that this has always meant the semantic
properties of finite string machine descriptions.
Then I further divide this into
(a) semantic properties of INPUT finite strings
(b) semantic properties of NON_INPUT finite strings
The problem is that "input" is just a role of some datum in a context,
like a function
The input versus non-input distinction cannot be found in any
binary digit of the bit string comprising the datum itself.
And so you need an algorithm
is_input(function, datum)
to denote the abstract property.
Then we need to define the property: what does it mean to be an
input or non-input. Just like we do with halting: we know what
it means to halt or not halt.
Next question: can you calculate this property? Say we know
what it means to be a non-input; can we reliably calculate it
for all possible <function, datum> pairs?
If that property is itself incalculable, how does it help defeat the halting theorem?
On 10/28/2025 6:52 PM, Kaz Kylheku wrote:
The problem is that "input" is just a role of some datum in a context,
like a function
The input versus non-input distinction cannot be found in any
binary digit of the bit string comprising the datum itself.
Unless you bother to pay attention to the fact that
the sequence of steps of D simulated by H is a different
sequence of steps than D simulated by H1.
On 2025-10-29, olcott <polcott333@gmail.com> wrote:
Unless you bother to pay attention to the fact that
the sequence of steps of D simulated by H is a different
sequence of steps than D simulated by H1.
How do we know what is correct?
Suppose I'm given an input D, and two deciders X and Y.
I know nothing about these. (But they are simulating deciders,
producing different executions.)
X accepts, Y rejects.
Do I regard both of them as right? Or one of them wrong?
If so, which one? Or both wrong?
Suppose I happen to be sure that D halts. So I know X is correct.
Under your system, I don't know whether Y is correct.
Y could be a broken decider that is wrongly deciding D (and /that/
is why its execution trace differs from X).
Or it could be the case that D is a non-input to Y, in which case Y is
deemed to be correct because D being a non-input to Y means that D
denotes non-halting semantics to Y (and /that/ is why its execution
trace differs from X).
The fact that the execution trace differs doesn't inform.
We need to know the value of is_input(Y, D): we need to /decide/ whether
D is non-input or input to Y in order to /decide/ whether its rejection
is correct.
Do you not see that your concept leaves decision problems?
You are not looking at it from the perspective of a /consumer/ of a
/decider product/ actually trying to use deciders and trust their
answer.
If you allow halting programs to be decided as non-halting in situations
in which they are non-inputs, the end user of deciders has to /know/
when they are looking at that case, and when they are dealing with a
broken decider.
That's a decision problem you have punted to the end user.
I say that that decision problem you have punted to the end user
is incomputable!!!
So even if ostensibly you resolved the halting problem on one level
by simply /excusing/ some programs from calculating the traditional
halting value when they are given non-inputs, you've not actually
solved the real halting problem: that of the end-user simply wanting
to know whether a given program will terminate or not. The
end user has no way of knowing whether a program was excused on
a non-input, or whether it is just fumbling an input.
It doesn't appear you've improved the situation at all; you've
just reshuffled how incomputability fits into the picture.
On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
How do we know what is correct?
Because all deciders only report on what their
input specifies: H(D)==0 and H1(D)==1 are both correct.
Suppose I'm given an input D, and two deciders X and Y.
I know nothing about these. (But they are simulating deciders,
producing different executions.)
X accepts, Y rejects.
Do I regard both of them as right? Or one of them wrong?
If so, which one? Or both wrong?
Suppose I happen to be sure that D halts. So I know X is correct.
As far as a DOS (denial of service) attack
goes H(D)==0 correctly rejects its input.
On 10/28/2025 5:33 PM, olcott wrote:
On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
As far as a DOS (denial of service) attack
goes H(D)==0 correctly rejects its input.
Imvho, I don't think you know what to do with a proper DoS, or a DDoS.
Think of a bunch of connections coming in: they send some commands to the
server, communicate a little, then stop for, say, a random amount of time,
then continue, then stop. Then a flood of connections comes in that
execute commands that upload/download big random files, over and over
again. Some of them say done! Some of them hang around for, say, 20
minutes holding your socket open. I don't think you know what a DoS even
is? Humm... lol.
[...]
On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
The fact that the execution trace differs doesn't inform.
We need to know the value of is_input(Y, D): we need to /decide/ whether
D is non-input or input to Y in order to /decide/ whether its rejection
is correct.
Whatever is a correct simulation of an input by
a decider is the behavior that must be reported on.
You are not looking at it from the perspective of a /consumer/ of a
/decider product/ actually trying to use deciders and trust their
answer.
Whatever is a correct simulation of an input by
a decider is the behavior that must be reported on.
On 2025-10-29, olcott <polcott333@gmail.com> wrote:
Whatever is a correct simulation of an input by
a decider is the behavior that must be reported on.
But under your system, if I am a user of deciders, and have been
given a decider H which is certified to be correct, I cannot
rely on it to decide halting.
I want to know whether D halts, that's all.
H says no. It is certified correct under your paradigm,
so I don't have to suspect that if it is given an /input/
it will be wrong.
But: I have no idea whether D is an input to H or a non-input!
When H says 0, I have no idea whether it's being judged non-halting
as an input, or whether it's being judged as a non-input (whereby
either value is the correct answer as far as H is concerned).
Again, I just want to know, does D halt?
Under your paradigm, even though I have a certified correct H,
I am not informed.
Under the standard halting problem, I am not informed because
I /don't/ have a certified correct H; it doesn't exist.
Under your paradigm, I have a deemed-correct H, which is
excused for giving a garbage answer on non-inputs, which
I have no way to identify.
How am I better off in your paradigm?
Do I use 10 different certified deciders, and take a majority vote?
But the function which combines 10 deciders into a majority vote
is itself a decider! And that 10-majority-decider function can be
targeted by a diagonal test case ... and such a test case is now
a non-input. See?
You are not looking at it from the perspective of a /consumer/ of a
/decider product/ actually trying to use deciders and trust their
answer.
Whatever is a correct simulation of an input by
a decider is the behavior that must be reported on.
But how does the user interpret that result?
The user just wants to know, does this thing halt or not?
How does it answer the user's question?
The user's question is not incorrect; it may be incorrect when
posed to H, in which case the user needs some other H, like H1.
How do they decide between H and H1?
On 10/28/2025 9:19 PM, Kaz Kylheku wrote:
On 2025-10-29, olcott <polcott333@gmail.com> wrote:
On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
Under your system, I don't know whether Y is correct.
Y could be a broken decider that is wrongly deciding D (and /that/
is why its execution trace differs from X).
Or it could be the case that D is a non-input to Y, in which case Y is
deemed to be correct because D being a non-input to Y means that D
denotes non-halting semantics to Y (and /that/ is why its execution
trace differs from X).
The fact that the execution trace differs doesn't inform.
We need to know the value of is_input(Y, D): we need to /decide/ whether
D is non-input or input to Y in order to /decide/ whether its rejection
is correct.
Whatever is a correct simulation of an input by
a decider is the behavior that must be reported on.
But under your system, if I am a user of deciders, and have been
given a decider H which is certified to be correct, I cannot
rely on it to decide halting.
When halting is defined correctly:
Does this input specify a sequence of moves that
reach a final halt state?
and not defined incorrectly: to require something
that is not specified in the input then this does
overcome the halting problem proof and shows that
the halting problem itself has always been a category
error. (Flibble's brilliant term).
I want to know whether D halts, that's all.
H says no. It is certified correct under your paradigm, so
I don't have to suspect that if it is given an /input/
it will be wrong.
But: I have no idea whether D is an input to H or a non-input!
That is ridiculous. If it is an argument
to the decider function then it is an input.
When H says 0, I have no idea whether it's being judged non-halting
as an input, or whether it's being judged as a non-input (whereby
either value is the correct answer as far as H is concerned).
Judging by anything besides an input has always
been incorrect. H(D) maps its input to a reject
value on the basis of the behavior that this
argument to H specifies.
Again, I just want to know, does D halt?
You might also want a purely mental Turing
machine to bake you a birthday cake.
Under your paradigm, even though I have a certified correct H,
I am not informed.
Under the standard halting problem, I am not informed because
I /don't/ have a certified correct H; it doesn't exist.
The standard halting problem requires behavior
that is out-of-scope for Turing machines, like
requiring that they bake birthday cakes.
Under your paradigm, I have a deemed-correct H, which is
excused for giving a garbage answer on non-inputs, which
I have no way to identify.
int sum(int x, int y){return x + y;}
Expecting sum(3,4) to return the sum of 5 + 7 is nuts.
A function only computes from its arguments.
How am I better off in your paradigm?
In my paradigm you face reality rather than
ignoring it.
Do I use 10 different certified deciders, and take a majority vote?
sum(3,4) computes the sum of 3+4 even if
the sum of 5+6 is required from sum(3,4).
Whatever behavior is measured by the decider's
simulation of its input *is* the behavior that
it must report on.
But the function which combines 10 deciders into a majority vote
is itself a decider! And that 10-majority-decider function can be
targeted by a diagonal test case ... and such a test case is now
a non-input. See?
You are not looking at it from the perspective of a /consumer/ of a
/decider product/ actually trying to use deciders and trust their
answer.
Whatever is a correct simulation of an input by
a decider is the behavior that must be reported on.
But how does the user interpret that result?
The input to this decider specifies a sequence
that cannot possibly reach its final halt state.
The user just wants to know, does this thing halt or not?
The user may equally want a purely imaginary
Turing machine to bake a birthday cake.
How does it answer the user's question?
As far as theoretical limitations go I have addressed
them. Practical workarounds can be addressed after I
am published and my work is accepted.
The user's question is not incorrect; it may be incorrect when
posed to H, in which case the user needs some other H, like H1.
How do they decide between H and H1?
On 28/10/2025 21:58, dbush wrote:
On 10/28/2025 4:51 PM, olcott wrote:
<snip>
<repeat of previously refuted point>
So again you admit that Kaz's code proves that D is halting.
Credit where credit is due. By returning 0*, /olcott's/ code proves that
D is halting.
*i.e. non-halting.
On 10/28/2025 10:16 PM, Richard Heathfield wrote:
On 28/10/2025 21:58, dbush wrote:
On 10/28/2025 4:51 PM, olcott wrote:
<snip>
<repeat of previously refuted point>
So again you admit that Kaz's code proves that D is halting.
Credit where credit is due. By returning 0*, /olcott's/ code proves
that D is halting.
*i.e. non-halting.
Yet (as I have said hundreds of times) Turing machines
can only compute the mapping from *INPUT* finite string
machine descriptions to the behavior that these *INPUT*
finite string machine descriptions *ACTUALLY SPECIFY*
On 10/28/2025 10:16 PM, Richard Heathfield wrote:
On 28/10/2025 21:58, dbush wrote:
On 10/28/2025 4:51 PM, olcott wrote:
<snip>
<repeat of previously refuted point>
So again you admit that Kaz's code proves that D is halting.
Credit where credit is due. By returning 0*, /olcott's/ code proves that
D is halting.
*i.e. non-halting.
Yet (as I have said hundreds of times) Turing machines
can only compute the mapping from *INPUT* finite string
machine descriptions to the behavior that these *INPUT*
finite string machine descriptions *ACTUALLY SPECIFY*
D simulated by H SPECIFIES NOT-HALTING BEHAVIOR.
The halting problem itself makes a category error
when it requires deciders to report on behavior
other than the behavior that *THEIR INPUT SPECIFIES*
On 2025-10-29, olcott <polcott333@gmail.com> wrote:
On 10/28/2025 9:19 PM, Kaz Kylheku wrote:
On 2025-10-29, olcott <polcott333@gmail.com> wrote:
On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
Under your system, I don't know whether Y is correct.
Y could be a broken decider that is wrongly deciding D (and /that/
is why its execution trace differs from X).
Or it could be the case that D is a non-input to Y, in which case Y is
deemed to be correct because D being a non-input to Y means that D
denotes non-halting semantics to Y (and /that/ is why its execution
trace differs from X).
The fact that the execution trace differs doesn't inform.
We need to know the value of is_input(Y, D): we need to /decide/ whether
D is non-input or input to Y in order to /decide/ whether its rejection
is correct.
Whatever is a correct simulation of an input by
a decider is the behavior that must be reported on.
But under your system, if I am a user of deciders, and have been
given a decider H which is certified to be correct, I cannot
rely on it to decide halting.
When halting is defined correctly:
Does this input specify a sequence of moves that
reach a final halt state?
and not defined incorrectly: to require something
that is not specified in the input then this does
overcome the halting problem proof and shows that
the halting problem itself has always been a category
error. (Flibble's brilliant term).
I want to know whether D halts, that's all.
H says no. It is certified correct under your paradigm, so
I don't have to suspect that if it is given an /input/
it will be wrong.
But: I have no idea whether D is an input to H or a non-input!
That is ridiculous. If it is an argument
to the decider function then it is an input.
So how is it supposed to work that an otherwise halting D
is a non-halting input to H?
When the non-halting D is an input to H (which it undeniably is, as you
have now decided), D is non-halting.
With respect to H, it's as if the halting D exists in another dimension;
/that/ D is not the input.
Okay, but anyway ...
- The decider user has some program P.
- P terminates, but it takes three years on the user's hardware.
- The user does not know this; they tried running P for weeks,
months, but it never terminated.
- The user has H which they have been assured is correct under
the Olcott Halting Paradigm.
- The user applies H to P, and H rejects it.
- The program P is actually D, but the user doesn't know this.
What should the user believe? Does D halt or not?
How is the user /not/ deceived if they believe that P doesn't halt?
When H says 0, I have no idea whether it's being judged non-halting
as an input, or whether it's being judged as a non-input (whereby
either value is the correct answer as far as H is concerned).
Judging by anything besides an input has always
been incorrect. H(D) maps its input to a reject
value on the basis of the behavior that this
argument to H specifies.
But that behavior is only real /as/ an argument to H; it is not the
behavior that the halter-decider customer wants reported on.
How is the user supposed to know which inputs are handled by their
decider and which are not?
Again, I just want to know, does D halt?
You might also want a purely mental Turing
machine to bake you a birthday cake.
Are you insinuating that the end user for halt deciders is wrong to want
to know whether something halts?
And /that's/ how you ultimately refute the halting problem?
The standard halting problem and its theorem tells the user
they cannot have a halting algorithm that will decide everything;
stop wanting that!
Your paradigm tells the user that the question is wrong, or at least for
some programs, and doesn't tell them which.
Under your paradigm, even though I have a certified correct H,
I am not informed.
Under the standard halting problem, I am not informed because
I /don't/ have a certified correct H; it doesn't exist.
The standard halting problem requires behavior
that is out-of-scope for Turing machines, like
requiring that they bake birthday cakes.
But what changes if we simply /stop requiring/ that behavior?
How am I better off in your paradigm?
In my paradigm you face reality rather than
ignoring it.
So does that reality provide an algorithm to decide the
halting of any machine, or not?
Do I use 10 different certified deciders, and take a majority vote?
sum(3,4) computes the sum of 3+4 even if
the sum of 5+6 is required from sum(3,4).
Whatever behavior is measured by the decider's
simulation of its input *is* the behavior that
it must report on.
That's the internally focused discussion. How are you
solving the end user's demand for halting decision?
But the function which combines 10 deciders into a majority vote
is itself a decider! And that 10-majority-decider function can be
targeted by a diagonal test case ... and such a test case is now
a non-input. See?
You are not looking at it from the perspective of a /consumer/ of a
/decider product/ actually trying to use deciders and trust their
answer.
Whatever is a correct simulation of an input by
a decider is the behavior that must be reported on.
But how does the user interpret that result?
The input to this decider specifies a sequence
that cannot possibly reach its final halt state.
But you have inputs for which that is reported, which
readily halt when they are executed.
Don't you think the user wants to know /that/, and not what happens
under the decider (if that is different)?
The user just wants to know, does this thing halt or not?
The user may equally want a purely imaginary
Turing machine to bake a birthday cake.
How does it answer the user's question?
As far as theoretical limitations go I have addressed
them.
By address, do you mean remove?
Practical workarounds can be addressed after I
am published and my work is accepted.
Workarounds for what? You've left something unsolved in halting; what is that?
On 10/29/2025 12:36 AM, Kaz Kylheku wrote:
On 2025-10-29, olcott <polcott333@gmail.com> wrote:
On 10/28/2025 9:19 PM, Kaz Kylheku wrote:
On 2025-10-29, olcott <polcott333@gmail.com> wrote:
On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
Under your system, I don't know whether Y is correct.
Y could be a broken decider that is wrongly deciding D (and /that/
is why its execution trace differs from X).
Or it could be the case that D is a non-input to Y, in which case Y is
deemed to be correct because D being a non-input to Y means that D
denotes non-halting semantics to Y (and /that/ is why its execution
trace differs from X).
The fact that the execution trace differs doesn't inform.
We need to know the value of is_input(Y, D): we need to /decide/ whether
D is non-input or input to Y in order to /decide/ whether its rejection
is correct.
Whatever is a correct simulation of an input by
a decider is the behavior that must be reported on.
But under your system, if I am a user of deciders, and have been
given a decider H which is certified to be correct, I cannot
rely on it to decide halting.
When halting is defined correctly:
Does this input specify a sequence of moves that
reach a final halt state?
and not defined incorrectly: to require something
that is not specified in the input then this does
overcome the halting problem proof and shows that
the halting problem itself has always been a category
error. (Flibble's brilliant term).
I want to know whether D halts, that's all.
H says no. It is certified correct under your paradigm, so
I don't have to suspect that if it is given an /input/
it will be wrong.
But: I have no idea whether D is an input to H or a non-input!
That is ridiculous. If it is an argument
to the decider function then it is an input.
So how is it supposed to work that an otherwise halting D
is a non-halting input to H?
int D()
{
    int Halt_Status = H(D);
    if (Halt_Status)
        HERE: goto HERE;
    return Halt_Status;
}
H simulates D
that calls H(D) to simulate D
that calls H(D) to simulate D
that calls H(D) to simulate D
that calls H(D) to simulate D
that calls H(D) to simulate D
until H sees this repeating pattern.
When the non-halting D is an input to H (which it undeniably is, as you
have now decided), D is non-halting.
That D.input_to_H is non-halting is confirmed in that
D simulated by H cannot possibly reach its own
"return" statement final halt state. This divides
non-halting from stopping running.
With respect to H, it's as if the halting D exists in another dimension;
/that/ D is not the input.
Okay, but anyway ...
- The decider user has some program P.
- P terminates, but it takes three years on the user's hardware.
- The user does not know this; they tried running P for weeks,
months, but it never terminated.
- The user has H which they have been assured is correct under
the Olcott Halting Paradigm.
- The user applies H to P, and H rejects it.
That would mean that P has specifically targeted
H in an attempt to thwart a correct assessment.
- The program P is actually D, but the user doesn't know this.
The system works on source-code.
What should the user believe? Does D halt or not?
When the input P targets the decider H or does not target
the decider H input P simulated by decider H always reports
on the basis of whether P can reach its own final halt state.
How is the user /not/ deceived if they believe that P doesn't halt?
When H says 0, I have no idea whether it's being judged non-halting
as an input, or whether it's being judged as a non-input (whereby
either value is the correct answer as far as H is concerned).
Judging by anything besides an input has always
been incorrect. H(D) maps its input to a reject
value on the basis of the behavior that this
argument to H specifies.
But that behavior is only real /as/ an argument to H; it is not the
behavior that the halter-decider customer wants reported on.
When what the customer wants and what is in the scope of
Turing machines differ, the user must face reality. There
may be practical workarounds; these are outside the scope
of the theoretical limits.
The halting problem has always been incorrect, so just
like ZFC eliminated Russell's Paradox I have eliminated
the halting problem.
The halting problem as defined requires something
that is outside of the scope of all Turing machines.
On 2025-10-29, olcott <polcott333@gmail.com> wrote:
The halting problem as defined requires something
that is outside of the scope of all Turing machines.
That is false. The halting problem simply /asks/ a question whether a
Turing machine can exist which does something. The answer comes back
negative in the form of a theorem.
The halting problem doesn't require anything of Turing computation
that it cannot do; it's not a requirements specification.
The input cases which are not decided correctly by a given partial
decider are real, constructible entities which have a definite
halting status. They are not paradoxical absurdities that cannot
exist; you cannot dismiss something which exists.
It's like "solving" indivisibility by two by banning odd numbers
as incorrect.
On 2025-10-29, olcott <polcott333@gmail.com> wrote:
On 10/29/2025 12:36 AM, Kaz Kylheku wrote:
On 2025-10-29, olcott <polcott333@gmail.com> wrote:
On 10/28/2025 9:19 PM, Kaz Kylheku wrote:
On 2025-10-29, olcott <polcott333@gmail.com> wrote:
On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
Under your system, I don't know whether Y is correct.
Y could be a broken decider that is wrongly deciding D (and /that/
is why its execution trace differs from X).
Or it could be the case that D is a non-input to Y, in which case Y is
deemed to be correct because D being a non-input to Y means that D
denotes non-halting semantics to Y (and /that/ is why its execution
trace differs from X).
The fact that the execution trace differs doesn't inform.
We need to know the value of is_input(Y, D): we need to /decide/ whether
D is non-input or input to Y in order to /decide/ whether its rejection
is correct.
Whatever is a correct simulation of an input by
a decider is the behavior that must be reported on.
But under your system, if I am a user of deciders, and have been
given a decider H which is certified to be correct, I cannot
rely on it to decide halting.
When halting is defined correctly:
Does this input specify a sequence of moves that
reach a final halt state?
and not defined incorrectly: to require something
that is not specified in the input then this does
overcome the halting problem proof and shows that
the halting problem itself has always been a category
error. (Flibble's brilliant term).
I want to know whether D halts, that's all.
H says no. It is certified correct under your paradigm, so
I don't have to suspect that if it is given an /input/
it will be wrong.
But: I have no idea whether D is an input to H or a non-input!
That is ridiculous. If it is an argument
to the decider function then it is an input.
So how is it supposed to work that an otherwise halting D
is a non-halting input to H?
int D()
{
    int Halt_Status = H(D);
    if (Halt_Status)
        HERE: goto HERE;
    return Halt_Status;
}
H simulates D
that calls H(D) to simulate D
that calls H(D) to simulate D
that calls H(D) to simulate D
that calls H(D) to simulate D
that calls H(D) to simulate D
until H sees this repeating pattern.
When the non-halting D is an input to H (which it undeniably is, as you
have now decided), D is non-halting.
That D.input_to_H is non-halting is confirmed in that
D simulated by H cannot possibly reach its own
"return" statement final halt state. This divides
non-halting from stopping running.
With respect to H, it's as if the halting D exists in another dimension;
/that/ D is not the input.
Okay, but anyway ...
- The decider user has some program P.
- P terminates, but it takes three years on the user's hardware.
- The user does not know this; they tried running P for weeks,
months, but it never terminated.
- The user has H which they have been assured is correct under
the Olcott Halting Paradigm.
- The user applies H to P, and H rejects it.
That would mean that P has specifically targeted
H in an attempt to thwart a correct assessment.
- The program P is actually D, but the user doesn't know this.
The system works on source-code.
Whatever results you have, they have to be valid
for any representation of Turing machines whatsoever.
Source code can be obfuscated, and it can be extremely large
as well.
That the user can have source code (not required at all by the Turing
model) doesn't change that the user has no idea that P is actually a D
program with respect to their halting decider.
Also, the question "is this input P a diagonal input targeting this
decider H" is an undecidable problem!!!
What should the user believe? Does D halt or not?
When the input P targets the decider H or does not target
the decider H input P simulated by decider H always reports
on the basis of whether P can reach its own final halt state.
But that's not the output we want; we want to know whether P()
halts.
How is the user /not/ deceived if they believe that P doesn't halt?
When H says 0, I have no idea whether it's being judged non-halting
as an input, or whether it's being judged as a non-input (whereby
either value is the correct answer as far as H is concerned).
Judging by anything besides an input has always
been incorrect. H(D) maps its input to a reject
value on the basis of the behavior that this
argument to H specifies.
But that behavior is only real /as/ an argument to H; it is not the
behavior that the halter-decider customer wants reported on.
When what the customer wants and what is in the scope of
Turing machines differ, the user must face reality.
So under your paradigm, the user is told they must face reality: some
machines cannot be decided, so you cannot get an answer for whether P()
halts. Sometimes you get an answer for whether P is considered to be
halting when simulated by H, which doesn't match whether P() halts. Suck
it up!
That's just a stupidly convoluted version of what they are told
under the standard Halting Problem, which informs them that for
every halting decider, there are inputs for which it is wrong or nonterminating.
You are not improving the standard halting problem and its theorem
one iota; just vandalizing it with impertinent content and details.
Under your paradigm, a halting decider reports rubbish for some inputs,
and is called correct; e.g. rejecting a halting input.
How is what you are doing different from calling a tail "leg",
and claiming that canines are five-legged animals?
There
may be practical workarounds these are outside the scope
of the theoretical limits.
I cannot think of any example of an engineering technique
which overcomes theoretical limits.
There are no workarounds for the undecidability of halting;
you can only work within the limits not overcome them.
The halting problem has always been incorrect, so just
like ZFC eliminated Russell's Paradox I have eliminated
the halting problem.
Only, you've not eliminated it sufficiently far that you wouldn't have
to tell the halting decider client to accept reality?
ZFC eliminating Russell's Paradox is a formal-system-level
change.
You're not changing the formal system; you are staying in
the Turing Model.
Also, Russell's Paradox is nonsense. Whereas a program H
deciding a program P which integrates H itself in some shape
is not nonsense; it is constructible.
Get it? Russell's silly set cannot even be imagined, let alone
constructed.
Failing test cases for halting can be constructed; they
are real.
On 2025-10-29, olcott <polcott333@gmail.com> wrote:
On 10/29/2025 12:36 AM, Kaz Kylheku wrote:
On 2025-10-29, olcott <polcott333@gmail.com> wrote:
On 10/28/2025 9:19 PM, Kaz Kylheku wrote:
On 2025-10-29, olcott <polcott333@gmail.com> wrote:
On 10/28/2025 7:25 PM, Kaz Kylheku wrote:
Under your system, I don't know whether Y is correct.
Y could be a broken decider that is wrongly deciding D (and /that/ >>>>>>> is why its execution trace differs from X).
Or it could be the case that D is a non-input to Y, in which case Y is >>>>>>> deemed to be correct because D being a non-input to Y means that D >>>>>>> denotes non-halting semantics to Y (and /that/ is why its execution >>>>>>> trace differs from X).
The fact that the execution trace differs doesn't inform.
We need to know the value of is_input(Y, D): we need to /decide/ whether
D is non-input or input to Y in order to /decide/ whether its rejection >>>>>>> is correct.
Whatever is a correct simulation of an input by
a decider is the behavior that must be reported on.
But under your system, if I am a user of deciders, and have been
given a decider H which is certified to be correct, I cannot
rely on it to decide halting.
When halting is defined correctly:
Does this input specify a sequence of moves that
reach a final halt state?
and not defined incorrectly: to require something
that is not specified in the input then this does
overcome the halting problem proof and shows that
the halting problem itself has always been a category
error. (Flibble's brilliant term).
I want to know whether D halts, that's all.
H says no. It is certified correct under your paradigm, so
so I don't have to suspect that if it is given an /input/
it will be wrong.
But: I have no idea whether D is an input to H or a non-input!
That is ridiculous. If it is an argument
to the decider function then it is an input.
So how it's supposed to work that an otherwise halting D
is a non-halting input to H.
int D()
{
int Halt_Status = H(D);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
H simulates D
that calls H(D) to simulate D
that calls H(D) to simulate D
that calls H(D) to simulate D
that calls H(D) to simulate D
that calls H(D) to simulate D
until H sees this repeating pattern.
When the non-halting D is an input to H (which it undeniably as you have >>> now decided) D is non-halting.
D.input_to_H is non-halting is confirmed in that
D simulated by cannot possibly reach its own
"return" statement final halt state. This divides
non-halting from stopping running.
With respect to H, it's as if the halting D exists in another dimension; >>> /that/ D is not the input.
Okay, but anyway ...
- The decider user has some program P.
- P terminates, but it takes three years on the user's hardware.
- The user does not know this; they tried running P for weeks,
months, but it never terminated.
- The user has H which they have been assured is correct under
the Olcott Halting Paradigm.
- The user applies H to P, and H rejects it.
That would mean that P has specifically targeted
H in an attempt to thwart a correct assessment.
- The program P is actually D, but the user doesn't know this.
The system works on source-code.
Whatever results you have, they have to be valid
for any representation of Turing machines whatsoever.
Source code can be obfuscated, as well as extremely large.
That the user can have source code (not required at all by the Turing
model) doesn't change the fact that the user has no idea that P is
actually a D program with respect to their halting decider.
Also, the question "is this input P a diagonal input targeting this
decider H" is an undecidable problem!!!
What should the user believe? Does D halt or not?
Whether or not the input P targets the decider H, the input P
simulated by decider H is always reported on the basis of whether
P can reach its own final halt state.
But that's not the output we want; we want to know whether P()
halts.
How is the user /not/ deceived if they believe that P doesn't halt?
When H says 0, I have no idea whether it's being judged non-halting
as an input, or whether it's being judged as a non-input (whereby
either value is the correct answer as far as H is concerned).
Judging by anything besides an input has always
been incorrect. H(D) maps its input to a reject
value on the basis of the behavior that this
argument to H specifies.
But that behavior is only real /as/ an argument to H; it is not the
behavior that the halter-decider customer wants reported on.
When what the customer wants and what is in the scope of
Turing machines differ, the user must face reality.
So under your paradigm, the user is told they must face reality: some
machines cannot be decided, so you cannot get an answer for whether P()
halts. Sometimes you get an answer for whether P is considered to be
halting when simulated by H, which doesn't match whether P() halts.
Suck it up!
That's just a stupidly convoluted version of what they are told
under the standard Halting Problem, which informs them that for
every halting decider, there are inputs for which it is wrong or nonterminating.
You are not improving the standard halting problem and its theorem
one iota; just vandalizing it with impertinent content and details.
Under your paradigm, a halting decider reports rubbish for some inputs,
and is called correct; e.g. rejecting a halting input.
How is what you are doing different from calling a tail "leg",
and claiming that canines are five-legged animals?