• Re: Proof that the halting problem itself is a category error

    From Oleksiy Gapotchenko@alex.s.gap@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 6 01:24:39 2026
    From Newsgroup: comp.ai.philosophy

    Just an external observation:

    A lot of tech innovations in the software-optimization area get discarded
    from the very beginning because the people who work on them perceive the
    halting problem as a dogma. As a result, certain practical things (in code
    analysis) are not even tried, because it is assumed that they are bound by
    the halting problem.

    In practice, however, the halting problem is rarely a limitation. And
    even when one hits it, one can safely discard the particular analysis
    branch by marking it as inconclusive.

    At the very least, the halting problem could be framed better so that it
    does not sound like a dogma. In practice, algorithmic inconclusiveness has
    something like a 0.001 probability, not the 100% guarantee that many
    engineers perceive it to be.
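
    A minimal sketch of that "mark it as inconclusive" idea (my own
    illustration, not from the post; the step/Verdict names are assumptions):
    bound the analysis and return a third verdict instead of demanding a
    total halting decision.

    from enum import Enum

    class Verdict(Enum):
        HALTS = "halts"
        INCONCLUSIVE = "inconclusive"   # analysis budget exhausted

    def bounded_halts(step, state, budget=10_000):
        # Run an abstract machine for at most `budget` steps.
        # `step` returns the next state, or None once the machine halts.
        for _ in range(budget):
            state = step(state)
            if state is None:
                return Verdict.HALTS
        return Verdict.INCONCLUSIVE     # safely discard this analysis branch

    # Example: a counter that halts once it reaches 5.
    print(bounded_halts(lambda n: None if n >= 5 else n + 1, 0))  # Verdict.HALTS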

    On 12/11/2025 12:03 AM, polcott wrote:
    On 12/10/2025 4:58 PM, wij wrote:
    On Wed, 2025-12-10 at 16:43 -0600, polcott wrote:
    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine
    this is always a category error.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that
    its finite string input specifies.

    If you honestly admit you are solving POO Problem, everything is fine.


    *It has taken me 21 years to boil it down to this*

    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine this
    is always a category error.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that
    its finite string input specifies.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 5 18:39:52 2026
    From Newsgroup: comp.ai.philosophy

    On 1/5/2026 6:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get discarded
    from the very beginning because people who work on them perceive the
    halting problem as a dogma. As result, certain practical things (in code analysis) are not even tried because it's assumed that they are bound by
    the halting problem.

    In practice, however, the halting problem is rarely a limitation. And
    even when one hits it, they can safely discard a particular analysis
    branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a dogma,
    at least. In practice, algorithmic inconclusiveness has 0.001
    probability, not a 100% guarantee as many engineers perceive it.


    The true issue with the misconception of undecidability
    is that it prevents
    "true on the basis of meaning expressed in language"
    from being reliably computable for the body of knowledge.

    On 12/11/2025 12:03 AM, polcott wrote:
    On 12/10/2025 4:58 PM, wij wrote:
    On Wed, 2025-12-10 at 16:43 -0600, polcott wrote:
    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine
    this is always a category error.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that
    its finite string input specifies.

    If you honestly admit you are solving POO Problem, everything is fine.


    *It has taken me 21 years to boil it down to this*

    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine this
    is always a category error.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that
    its finite string input specifies.


    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 5 23:47:13 2026
    From Newsgroup: comp.ai.philosophy

    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get discarded
    from the very beginning because people who work on them perceive the
    halting problem as a dogma. As result, certain practical things (in code analysis) are not even tried because it's assumed that they are bound by
    the halting problem.

    In practice, however, the halting problem is rarely a limitation. And
    even when one hits it, they can safely discard a particular analysis
    branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a dogma,
    at least. In practice, algorithmic inconclusiveness has 0.001
    probability, not a 100% guarantee as many engineers perceive it.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for any
    given machine, just that a "general" decider does not exist for all
    machines ...

    heck, it must be certain that for any given machine there must exist a
    partial decider that can decide on it ... because otherwise a paradox
    would have to address all possible partial deciders in a computable
    fashion, and that runs up against its own limit to classical computing.
    therefore some true decider must exist for any given machine that exists
    ... we just can't funnel the knowledge thru a general interface.
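
    A small aside (my illustration, not dart200's): the standard way to see
    that a correct decider exists for any one fixed machine M is that one of
    the two constant programs below is right about M; undecidability only
    rules out a single decider that is correct for all machines at once, and
    we cannot in general compute which constant program is the right one.

    def decider_says_halts(machine_description: str) -> bool:
        return True      # correct for every machine that halts

    def decider_says_loops(machine_description: str) -> bool:
        return False     # correct for every machine that runs forever

    # For each fixed M, exactly one of these trivially decides the
    # one-element problem {M}; no algorithm picks the right one uniformly.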

    i think the actual problem is that TM computing is not sufficient to
    describe all computable relationships. TM computing is considered the
    gold standard for what is computable, but we haven't actually proved that.

    the CT thesis is a thesis, not a proof. we've been treating it as a law
    ... but we never actually justified that it should be law. this whole
    time we've been discarding things like a general halting decider
    because TM computing can be used to create paradoxes in regard to it,
    but maybe the problem is that TM computing is not sufficient to describe
    a general halting decider, not that a general halting decider is
    impossible.

    that's my new attack vector on the consensus understanding: the CT
    thesis. i aim to describe a general algo that *we* can obviously compute
    using deterministic steps, but such an algo cannot be funneled thru a
    general interface because TM computing will read and paradox it.


    On 12/11/2025 12:03 AM, polcott wrote:
    On 12/10/2025 4:58 PM, wij wrote:
    On Wed, 2025-12-10 at 16:43 -0600, polcott wrote:
    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine
    this is always a category error.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that
    its finite string input specifies.

    If you honestly admit you are solving POO Problem, everything is fine.


    *It has taken me 21 years to boil it down to this*

    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine this
    is always a category error.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that
    its finite string input specifies.


    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 6 15:23:55 2026
    From Newsgroup: comp.ai.philosophy

    On 06/01/2026 02:24, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get discarded
    from the very beginning because people who work on them perceive the
    halting problem as a dogma.

    It is a dogma in the same sense as 2 * 3 = 6 is a dogma: a provably
    true sentence of a certain theory.
    --
    Mikko
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng,sci.logic,sci.math on Tue Jan 6 08:02:27 2026
    From Newsgroup: comp.ai.philosophy

    On 1/6/2026 7:23 AM, Mikko wrote:
    On 06/01/2026 02:24, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get discarded
    from the very beginning because people who work on them perceive the
    halting problem as a dogma.

    It is a dogma in the same sense as 2 * 3 = 6 is a dogma: a provably
    true sentence of a certain theory.


    ...We are therefore confronted with a proposition which
    asserts its own unprovability. 15 … (Gödel 1931:40-41)

    Gödel, Kurt 1931.
    On Formally Undecidable Propositions of
    Principia Mathematica And Related Systems

    F ⊢ G_F ↔ ¬Prov_F(⌜G_F⌝)
    "F proves that: G_F is equivalent to
    Gödel_Number(G_F) is not provable in F"
    https://plato.stanford.edu/entries/goedel-incompleteness/#FirIncTheCom

    Stripping away the inessential baggage using a formal
    language with its own self-reference operator and
    provability operator (thus outside of arithmetic)

    G := (F ⊬ G) // G asserts its own unprovability in F

    A proof of G in F would be a sequence of inference
    steps in F that prove that they themselves do not exist.
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 6 19:26:45 2026
    From Newsgroup: comp.ai.philosophy

    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get discarded
    from the very beginning because people who work on them perceive the
    halting problem as a dogma. As result, certain practical things (in
    code analysis) are not even tried because it's assumed that they are
    bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation. And
    even when one hits it, they can safely discard a particular analysis
    branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a dogma,
    at least. In practice, algorithmic inconclusiveness has 0.001
    probability, not a 100% guarantee as many engineers perceive it.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for any
    given machine, just that a "general" decider does not exist for all
    machines ...

    heck it must be certain that for any given machine there must exist a partial decider that can decide on it ... because otherwise a paradox
    would have to address all possible partial deciders in a computable
    fashion and that runs up against it's own limit to classical computing. therefore some true decider must exist for any given machine that
    exists ... we just can't funnel the knowledge thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.
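
    A hedged sketch of that construction (the names H, H1, D and the toy
    heuristic are mine, not part of the proof): from any fixed claimed
    decider H we build a D that inverts H's verdict about D, so H is wrong
    about D, while a different decider H1 can still answer correctly for
    that same D.

    import inspect

    def H(func, arg) -> bool:
        # A (necessarily imperfect) claimed halting decider: a toy heuristic
        # that guesses "halts" whenever the source contains no while-loop.
        return "while True" not in inspect.getsource(func)

    def D(func):
        # Diagonal case: do the opposite of whatever H predicts about D(func).
        if H(D, func):           # H says D(func) halts ...
            while True:          # ... so D loops forever
                pass
        return None              # otherwise D halts immediately

    def H1(func, arg) -> bool:
        # A different decider that happens to be right about D: it knows D
        # inverts H, so it reports the opposite of H's verdict.
        return not H(D, func)

    print(H(D, D), H1(D, D))     # H is wrong about D; H1 gets this D right,
                                 # but the same trick builds a D1 defeating H1.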

    i think the actual problem is the TM computing is not sufficient to
    describe all computable relationships. TM computing is considered the gold-standard for what is computable, but we haven't actually proved that.

    the CT-thesis is a thesis, not a proof. we've been treating it as a
    law ... but we never actually justified that it should be law. this
    whole time we've been discarding things like a general halting decidable because TM computing can be used to create paradoxes in regards to it,
    but maybe the problem is that TM computing is not sufficient to describe
    a general halting decider, not that a general halting decider is
    impossible.

    that's my new attack vector on the consensus understanding: the CT
    thesis. i am to describe a general algo that *we* can obviously compute using deterministic steps, but such algo cannot be funneled thru a
    general interface because TM computing will read and paradox it.


    On 12/11/2025 12:03 AM, polcott wrote:
    On 12/10/2025 4:58 PM, wij wrote:
    On Wed, 2025-12-10 at 16:43 -0600, polcott wrote:
    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine
    this is always a category error.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that
    its finite string input specifies.

    If you honestly admit you are solving POO Problem, everything is fine.

    *It has taken me 21 years to boil it down to this*

    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine this
    is always a category error.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that
    its finite string input specifies.



    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 6 19:03:38 2026
    From Newsgroup: comp.ai.philosophy

    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get discarded
    from the very beginning because people who work on them perceive the
    halting problem as a dogma. As result, certain practical things (in
    code analysis) are not even tried because it's assumed that they are
    bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation. And
    even when one hits it, they can safely discard a particular analysis
    branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a
    dogma, at least. In practice, algorithmic inconclusiveness has 0.001
    probability, not a 100% guarantee as many engineers perceive it.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for any
    given machine, just that a "general" decider does not exist for all
    machines ...

    heck it must be certain that for any given machine there must exist a
    partial decider that can decide on it ... because otherwise a paradox
    would have to address all possible partial deciders in a computable
    fashion and that runs up against it's own limit to classical
    computing. therefore some true decider must exist for any given
    machine that exists ... we just can't funnel the knowledge thru a
    general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit


    i think the actual problem is the TM computing is not sufficient to
    describe all computable relationships. TM computing is considered the
    gold-standard for what is computable, but we haven't actually proved
    that.

    the CT-thesis is a thesis, not a proof. we've been treating it as a
    law ... but we never actually justified that it should be law. this
    whole time we've been discarding things like a general halting
    decidable because TM computing can be used to create paradoxes in
    regards to it, but maybe the problem is that TM computing is not
    sufficient to describe a general halting decider, not that a general
    halting decider is impossible.

    that's my new attack vector on the consensus understanding: the CT
    thesis. i am to describe a general algo that *we* can obviously
    compute using deterministic steps, but such algo cannot be funneled
    thru a general interface because TM computing will read and paradox it.


    On 12/11/2025 12:03 AM, polcott wrote:
    On 12/10/2025 4:58 PM, wij wrote:
    On Wed, 2025-12-10 at 16:43 -0600, polcott wrote:
    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine
    this is always a category error.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that
    its finite string input specifies.

    If you honestly admit you are solving POO Problem, everything is fine.

    *It has taken me 21 years to boil it down to this*

    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine this
    is always a category error.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that
    its finite string input specifies.





    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 6 22:33:44 2026
    From Newsgroup: comp.ai.philosophy

    On 1/6/2026 9:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As result, certain
    practical things (in code analysis) are not even tried because it's
    assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a
    dogma, at least. In practice, algorithmic inconclusiveness has 0.001
    probability, not a 100% guarantee as many engineers perceive it.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machines ...

    heck it must be certain that for any given machine there must exist a
    partial decider that can decide on it ... because otherwise a paradox
    would have to address all possible partial deciders in a computable
    fashion and that runs up against it's own limit to classical
    computing. therefore some true decider must exist for any given
    machine that exists ... we just can't funnel the knowledge thru a
    general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit


    No, it is not that.
    I have spent 20,000 hours on this over 20 years,
    the equivalent of ten full-time years.

    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*

    The simplest 100% correct resolution to the
    actual definition of the Halting Problem
    (that includes the counter-example input)
    is that, in the case of the counter-example input,
    the halting problem asks a yes/no question
    that has no correct yes/no answer.

    *The HP asks an incorrect question*
    *The HP asks an incorrect question*
    *The HP asks an incorrect question*
    *The HP asks an incorrect question*

    We can only get to your idea of a different
    interface when we change the definition of
    that Halting Problem. The original problem
    itself is simply incorrect.

    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*

    function LoopIfYouSayItHalts (bool YouSayItHalts):
      if YouSayItHalts () then
        while true do {}
      else
        return false;

    Does this program Halt?

    (Your (YES or NO) answer is to be considered
    translated to Boolean as the function's input
    parameter)

    Please ONLY PROVIDE CORRECT ANSWERS!

    https://groups.google.com/g/sci.logic/c/Hs78nMN6QZE/m/ID2rxwo__yQJ
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Wed Jan 7 00:56:01 2026
    From Newsgroup: comp.ai.philosophy

    On 1/6/26 8:33 PM, olcott wrote:
    On 1/6/2026 9:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As result, certain
    practical things (in code analysis) are not even tried because it's
    assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a
    dogma, at least. In practice, algorithmic inconclusiveness has
    0.001 probability, not a 100% guarantee as many engineers perceive it.
    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machines ...

    heck it must be certain that for any given machine there must exist
    a partial decider that can decide on it ... because otherwise a
    paradox would have to address all possible partial deciders in a
    computable fashion and that runs up against it's own limit to
    classical computing. therefore some true decider must exist for any
    given machine that exists ... we just can't funnel the knowledge
    thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit


    No it is not that.
    After spending 20,000 hours on this over 20 years
    equivalent to ten full time years.

    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*

    The simplest 100% correct resolution to the
    actual definition of the Halting Problem
    (that includes the counter-example input)
    Is that (in the case of the counter-example input)
    The halting problem asks a yes/no question
    that has no correct yes/no answer.

    i love how you agree with the consensus position but think u don't


    *The HP asks an incorrect question*
    *The HP asks an incorrect question*
    *The HP asks an incorrect question*
    *The HP asks an incorrect question*

    We can only get to your idea of a different
    interface when we change the definition of
    that Halting Problem. The original problem
    itself is simply incorrect.

    the question's a fine expectation

    TMs can't represent the answer tho, and that's the real problem


    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*

    function LoopIfYouSayItHalts (bool YouSayItHalts):
      if YouSayItHalts () then
        while true do {}
      else
        return false;

    Does this program Halt?

    (Your (YES or NO) answer is to be considered
     translated to Boolean as the function's input
     parameter)

    Please ONLY PROVIDE CORRECT ANSWERS!

    https://groups.google.com/g/sci.logic/c/Hs78nMN6QZE/m/ID2rxwo__yQJ


    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng on Wed Jan 7 05:50:14 2026
    From Newsgroup: comp.ai.philosophy

    On 1/7/2026 2:56 AM, dart200 wrote:
    On 1/6/26 8:33 PM, olcott wrote:
    On 1/6/2026 9:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As result, certain
    practical things (in code analysis) are not even tried because
    it's assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a
    dogma, at least. In practice, algorithmic inconclusiveness has
    0.001 probability, not a 100% guarantee as many engineers perceive
    it.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machines ...

    heck it must be certain that for any given machine there must exist
    a partial decider that can decide on it ... because otherwise a
    paradox would have to address all possible partial deciders in a
    computable fashion and that runs up against it's own limit to
    classical computing. therefore some true decider must exist for any
    given machine that exists ... we just can't funnel the knowledge
    thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit


    No it is not that.
    After spending 20,000 hours on this over 20 years
    equivalent to ten full time years.

    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*

    The simplest 100% correct resolution to the
    actual definition of the Halting Problem
    (that includes the counter-example input)
    Is that (in the case of the counter-example input)
    The halting problem asks a yes/no question
    that has no correct yes/no answer.

    i love how you agree with the consensus position but think u don't


    *The HP asks an incorrect question*
    *The HP asks an incorrect question*
    *The HP asks an incorrect question*
    *The HP asks an incorrect question*

    We can only get to your idea of a different
    interface when we change the definition of
    that Halting Problem. The original problem
    itself is simply incorrect.

    the question's a fine expectation

    TMs can't represent the answer tho, and that's the real problem


    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*

    function LoopIfYouSayItHalts (bool YouSayItHalts):
      if YouSayItHalts () then
        while true do {}
      else
        return false;

    Does this program Halt?

    (Your (YES or NO) answer is to be considered
    translated to Boolean as the function's input
    parameter)

    Please ONLY PROVIDE CORRECT ANSWERS!

    https://groups.google.com/g/sci.logic/c/Hs78nMN6QZE/m/ID2rxwo__yQJ

    *yes/no questions lacking a correct yes/no answer are incorrect*
    *yes/no questions lacking a correct yes/no answer are incorrect*
    *yes/no questions lacking a correct yes/no answer are incorrect*
    *yes/no questions lacking a correct yes/no answer are incorrect*

    The above is a yes/no question such that both yes and
    no are the wrong answer making the question itself incorrect.

    The logical law of polar questions
    Peter Olcott
    Feb 20, 2015, 11:38:48 AM
    The logical law of polar questions

    When posed to a man who has never been married,
    the question: Have you stopped beating your wife?
    Is an incorrect polar question because neither yes nor
    no is a correct answer.

    All polar questions (including incorrect polar questions)
    have exactly one answer from the following:
    1) No
    2) Yes
    3) Neither // Only applies to incorrect polar questions

    As far as I know I am the original discoverer of the
    above logical law, thus copyright 2015 by Peter Olcott.

    Permission to copy and freely distribute the above
    is hereby granted as long as it is distributed in
    its entirety, including this license agreement.

    https://groups.google.com/g/sci.lang/c/AO5Vlupeelo/m/nxJy7N2vULwJ
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory,comp.ai.philosophy,comp.software-eng on Wed Jan 7 14:05:35 2026
    From Newsgroup: comp.ai.philosophy

    On 06/01/2026 09:47, dart200 wrote:

    ...

    i think the actual problem is the TM computing is not sufficient to
    describe all computable relationships.

    It is not. Although we don't know any way to compute what is not Turing
    computable, we can imagine a machine that can compute what a Turing
    machine can't. Such a machine can use a tape as an input and output
    device as well as working storage, just like a Turing machine, but has
    additional instructions for operations that are not Turing computable.
    While it is possible to imagine a halting decider for Turing machines
    implemented with such an extended machine, a halting decider for all such
    machines still requires that the decider can compute something that
    none of the machines in its input domain can.
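
    A sketch of that point with hypothetical names (neither primitive below is
    implementable; they only mark where the extra power sits): give machines
    an oracle instruction tm_halts for ordinary Turing-machine halting, and
    the diagonal construction immediately re-appears one level up, against any
    claimed halting decider for the extended machines themselves.

    def tm_halts(tm_description: str, tm_input: str) -> bool:
        # Stands in for the extra, non-Turing-computable instruction.
        raise NotImplementedError("oracle primitive, assumed rather than built")

    def claimed_om_halts(om_description: str, om_input: str) -> bool:
        # A claimed halting decider for the oracle machines (what cannot exist).
        raise NotImplementedError("the decider this argument rules out")

    DIAGONAL_OM = """
    an oracle machine that, given its own description d, runs
    claimed_om_halts(d, d) and then does the opposite of the verdict
    """
    # claimed_om_halts(DIAGONAL_OM, DIAGONAL_OM) can be neither True nor False
    # without contradiction, so deciding halting for the extended machines
    # again needs something none of those machines can compute.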
    --
    Mikko
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory,comp.ai.philosophy,comp.software-eng,sci.logic,sci.math on Wed Jan 7 14:10:11 2026
    From Newsgroup: comp.ai.philosophy

    On 06/01/2026 16:02, olcott wrote:
    On 1/6/2026 7:23 AM, Mikko wrote:
    On 06/01/2026 02:24, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get discarded
    from the very beginning because people who work on them perceive the
    halting problem as a dogma.

    It is a dogma in the same sense as 2 * 3 = 6 is a dogma: a provably
    true sentence of a certain theory.


    ...We are therefore confronted with a proposition which
    asserts its own unprovability. 15 … (Gödel 1931:40-41)

    Gödel, Kurt 1931.
    On Formally Undecidable Propositions of
    Principia Mathematica And Related Systems

    F ⊢ G_F ↔ ¬Prov_F(⌜G_F⌝)
    "F proves that: G_F is equivalent to
    Gödel_Number(G_F) is not provable in F"
    https://plato.stanford.edu/entries/goedel-incompleteness/#FirIncTheCom

    Stripping away the inessential baggage using a formal
    language with its own self-reference operator and
    provability operator (thus outside of arithmetic)

    G := (F ⊬ G)   // G asserts its own unprovability in F

    A proof of G in F would be a sequence of inference
    steps in F that prove that they themselves do not exist.

    From the way G is constructed it can be meta-proven that either
    G is true and unprovable in F (which means that F is incomplete)
    or G is false and provable in F (which means that F is inconsistent).
    --
    Mikko
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng,sci.logic,sci.math on Wed Jan 7 07:06:37 2026
    From Newsgroup: comp.ai.philosophy

    On 1/7/2026 6:10 AM, Mikko wrote:
    On 06/01/2026 16:02, olcott wrote:
    On 1/6/2026 7:23 AM, Mikko wrote:
    On 06/01/2026 02:24, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma.

    It is a dogma in the same sense as 2 * 3 = 6 is a dogma: a provably
    true sentence of a certain theory.


    ...We are therefore confronted with a proposition which
    asserts its own unprovability. 15 … (Gödel 1931:40-41)

    Gödel, Kurt 1931.
    On Formally Undecidable Propositions of
    Principia Mathematica And Related Systems

    F ⊢ G_F ↔ ¬Prov_F (⌜G_F⌝)
    "F proves that: G_F is equivalent to
    Gödel_Number(G_F) is not provable in F"
    https://plato.stanford.edu/entries/goedel-incompleteness/#FirIncTheCom

    Stripping away the inessential baggage using a formal
    language with its own self-reference operator and
    provability operator (thus outside of arithmetic)

    G := (F ⊬ G)   // G asserts its own unprovability in F

    A proof of G in F would be a sequence of inference
    steps in F that prove that they themselves do not exist.

    From the way G is constructed it can be meta-proven that either

    Did you hear me stutter ?
    A proof of G in F would be a sequence of inference
    steps in F that prove that they themselves do not exist.

    G is true and unprovable in F (which means that F is incomplete)
    or G is false and provable in F (which means that F is inconsistent).

    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory,comp.ai.philosophy,comp.software-eng,sci.logic,sci.math on Thu Jan 8 12:21:15 2026
    From Newsgroup: comp.ai.philosophy

    On 07/01/2026 15:06, olcott wrote:
    On 1/7/2026 6:10 AM, Mikko wrote:
    On 06/01/2026 16:02, olcott wrote:
    On 1/6/2026 7:23 AM, Mikko wrote:
    On 06/01/2026 02:24, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma.

    It is a dogma in the same sense as 2 * 3 = 6 is a dogma: a provably
    true sentence of a certain theory.


    ...We are therefore confronted with a proposition which
    asserts its own unprovability. 15 … (Gödel 1931:40-41)

    Gödel, Kurt 1931.
    On Formally Undecidable Propositions of
    Principia Mathematica And Related Systems

    F ⊢ G_F ↔ ¬Prov_F (⌜G_F⌝)
    "F proves that: G_F is equivalent to
    Gödel_Number(G_F) is not provable in F"
    https://plato.stanford.edu/entries/goedel-incompleteness/#FirIncTheCom

    Stripping away the inessential baggage using a formal
    language with its own self-reference operator and
    provability operator (thus outside of arithmetic)

    G := (F ⊬ G)   // G asserts its own unprovability in F

    A proof of G in F would be a sequence of inference
    steps in F that prove that they themselves do not exist.

     From the way G is constructed it can be meta-proven that either

    Did you hear me stutter ?
    A proof of G in F would be a sequence of inference
    steps in F that prove that they themselves do not exist.

    In an F where such a sequence really exists, both G and
    the negation of G are provable in that F.

    In an F where such a sequence does not exist, G is unprovable by
    definition. However, it is meta-provable from the way it is
    constructed, and therefore true in every interpretation where
    the natural numbers contained in F have their standard properties.
    --
    Mikko
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng,sci.logic,sci.math on Thu Jan 8 08:18:30 2026
    From Newsgroup: comp.ai.philosophy

    On 1/8/2026 4:21 AM, Mikko wrote:
    On 07/01/2026 15:06, olcott wrote:
    On 1/7/2026 6:10 AM, Mikko wrote:
    On 06/01/2026 16:02, olcott wrote:
    On 1/6/2026 7:23 AM, Mikko wrote:
    On 06/01/2026 02:24, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma.

    It is a dogma in the same sense as 2 * 3 = 6 is a dogma: a provably
    true sentence of a certain theory.


    ...We are therefore confronted with a proposition which
    asserts its own unprovability. 15 … (Gödel 1931:40-41)

    Gödel, Kurt 1931.
    On Formally Undecidable Propositions of
    Principia Mathematica And Related Systems

    F ⊢ G_F ↔ ¬Prov_F (⌜G_F⌝)
    "F proves that: G_F is equivalent to
    Gödel_Number(G_F) is not provable in F"
    https://plato.stanford.edu/entries/goedel-incompleteness/#FirIncTheCom
    Stripping away the inessential baggage using a formal
    language with its own self-reference operator and
    provability operator (thus outside of arithmetic)

    G := (F ⊬ G)   // G asserts its own unprovability in F

    A proof of G in F would be a sequence of inference
    steps in F that prove that they themselves do not exist.

     From the way G is constructed it can be meta-proven that either

    Did you hear me stutter ?
    A proof of G in F would be a sequence of inference
    steps in F that prove that they themselves do not exist.

    An F where such sequence really exists then in that F both G and
    the negation of G are provable.

    G := (F ⊬ G) // G asserts its own unprovability in F

    A proof of G in F would be a sequence of inference
    steps in F that prove that they themselves do not exist.
    Such a proof does not exist because it contradicts itself.

    René Descartes: "I think, therefore thoughts do not exist"
    is simply incorrect because it contradicts itself.

    In an F where such sequence does not exist G is unprovable by
    definition. However it is meta-provable from the way it is
    constructed and therefore true in every interpretation where
    the natural numbers contained in F have their standard properties.


    Self-contradictory gibberish is never true or provable.
    It is better to reject it as gibberish before
    proceeding; otherwise someone might make an
    incompleteness theorem out of it and falsely
    conclude that math is incomplete.

    This sentence is not true:
    "This sentence is not true"
    is true because the inner sentence
    is self-contradictory gibberish.

    This sentence cannot be proven in F:
    "This sentence cannot be proven in F"
    is true because the inner sentence
    is self-contradictory gibberish.
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory,comp.ai.philosophy,comp.software-eng,sci.logic,sci.math on Sat Jan 10 11:25:01 2026
    From Newsgroup: comp.ai.philosophy

    On 08/01/2026 16:18, olcott wrote:
    On 1/8/2026 4:21 AM, Mikko wrote:
    On 07/01/2026 15:06, olcott wrote:
    On 1/7/2026 6:10 AM, Mikko wrote:
    On 06/01/2026 16:02, olcott wrote:
    On 1/6/2026 7:23 AM, Mikko wrote:
    On 06/01/2026 02:24, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma.

    It is a dogma in the same sense as 2 * 3 = 6 is a dogma: a provably
    true sentence of a certain theory.


    ...We are therefore confronted with a proposition which
    asserts its own unprovability. 15 … (Gödel 1931:40-41)

    Gödel, Kurt 1931.
    On Formally Undecidable Propositions of
    Principia Mathematica And Related Systems

    F ⊢ G_F ↔ ¬Prov_F (⌜G_F⌝)
    "F proves that: G_F is equivalent to
    Gödel_Number(G_F) is not provable in F"
    https://plato.stanford.edu/entries/goedel-incompleteness/#FirIncTheCom
    Stripping away the inessential baggage using a formal
    language with its own self-reference operator and
    provability operator (thus outside of arithmetic)

    G := (F ⊬ G)   // G asserts its own unprovability in F

    A proof of G in F would be a sequence of inference
    steps in F that prove that they themselves do not exist.

     From the way G is constructed it can be meta-proven that either

    Did you hear me stutter ?
    A proof of G in F would be a sequence of inference
    steps in F that prove that they themselves do not exist.

    An F where such sequence really exists then in that F both G and
    the negation of G are provable.

    G := (F ⊬ G)   // G asserts its own unprovability in F

    A proof of G in F would be a sequence of inference
    steps in F that prove that they themselves do not exist.
    Does not exist because is contradicts itself.

    That conclusion needs the additional assumption that F is consistent,
    which requires that first-order Peano arithmetic is consistent.
    If F is not consistent then both G and its negation are provable in F.
    First-order Peano arithmetic is believed to be consistent, but its
    consistency is not proven.
    --
    Mikko
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 12 07:06:54 2026
    From Newsgroup: comp.ai.philosophy

    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As result, certain
    practical things (in code analysis) are not even tried because it's
    assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a
    dogma, at least. In practice, algorithmic inconclusiveness has 0.001
    probability, not a 100% guarantee as many engineers perceive it.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machines ...

    heck it must be certain that for any given machine there must exist a
    partial decider that can decide on it ... because otherwise a paradox
    would have to address all possible partial deciders in a computable
    fashion and that runs up against it's own limit to classical
    computing. therefore some true decider must exist for any given
    machine that exists ... we just can't funnel the knowledge thru a
    general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit


    Nope, because the proof doesn't actually need to talk about HOW the
    decider actually made its decision, and thus it is not limited to Turing
    machines.

    All it needs is that the decider be limited by the rules of a computation.

    All the arguments against the proof seem to begin with the error that
    the decider can be changed after the fact and that such a change changes
    the input to match, but that breaks the fundamental property of
    computations: that they are fixed algorithms.

    The proof shows that the SPECIFIC decider that the input was made from
    will get the wrong answer, and we can make such an input for ANY
    specific decider, and thus no decider can get all answers correct.

    That the input HAS a correct answer (just the opposite of what that
    specific decider gives) shows that there IS a correct answer, so there
    is nothing wrong about the question of its halting, and thus a
    non-answer like "its behavior is contrary" is valid.

    Everyone trying to make these arguments just shows they don't understand
    the basics of what a computation is.


    i think the actual problem is the TM computing is not sufficient to
    describe all computable relationships. TM computing is considered the
    gold-standard for what is computable, but we haven't actually proved
    that.

    the CT-thesis is a thesis, not a proof. we've been treating it as a
    law ... but we never actually justified that it should be law. this
    whole time we've been discarding things like a general halting
    decidable because TM computing can be used to create paradoxes in
    regards to it, but maybe the problem is that TM computing is not
    sufficient to describe a general halting decider, not that a general
    halting decider is impossible.

    that's my new attack vector on the consensus understanding: the CT
    thesis. i am to describe a general algo that *we* can obviously
    compute using deterministic steps, but such algo cannot be funneled
    thru a general interface because TM computing will read and paradox it.


    On 12/11/2025 12:03 AM, polcott wrote:
    On 12/10/2025 4:58 PM, wij wrote:
    On Wed, 2025-12-10 at 16:43 -0600, polcott wrote:
    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine
    this is always a category error.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that
    its finite string input specifies.

    If you honestly admit you are solving POO Problem, everything is
    fine.


    *It has taken me 21 years to boil it down to this*

    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine this
    is always a category error.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that
    its finite string input specifies.








    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 12 07:12:07 2026
    From Newsgroup: comp.ai.philosophy

    On 1/6/26 11:33 PM, olcott wrote:
    On 1/6/2026 9:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As result, certain
    practical things (in code analysis) are not even tried because it's
    assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a
    dogma, at least. In practice, algorithmic inconclusiveness has
    0.001 probability, not a 100% guarantee as many engineers perceive it.
    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machines ...

    heck it must be certain that for any given machine there must exist
    a partial decider that can decide on it ... because otherwise a
    paradox would have to address all possible partial deciders in a
    computable fashion and that runs up against it's own limit to
    classical computing. therefore some true decider must exist for any
    given machine that exists ... we just can't funnel the knowledge
    thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit


    No it is not that.
    After spending 20,000 hours on this over 20 years
    equivalent to ten full time years.

    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*

    Maybe what is broken is your concept of truth, because it is just wrong.




    The simplest 100% correct resolution to the
    actual definition of the Halting Problem
    (that includes the counter-example input)
    Is that (in the case of the counter-example input)
    The halting problem asks a yes/no question
    that has no correct yes/no answer.

    *The HP asks an incorrect question*
    *The HP asks an incorrect question*
    *The HP asks an incorrect question*
    *The HP asks an incorrect question*

    But it doesn't.

    Your "logic" is based on FALSE concepts, like subjecteive questions are
    valid.


    We can only get to your idea of a different
    interface when we change the definition of
    that Halting Problem. The original problem
    itself is simply incorrect.

    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*

    Nope.

    function LoopIfYouSayItHalts (bool YouSayItHalts):
      if YouSayItHalts () then
        while true do {}
      else
        return false;

    Does this program Halt?

    (Your (YES or NO) answer is to be considered
     translated to Boolean as the function's input
     parameter)

    Please ONLY PROVIDE CORRECT ANSWERS!

    https://groups.google.com/g/sci.logic/c/Hs78nMN6QZE/m/ID2rxwo__yQJ



    WHich just shows you didn't know what the Halting Problem was then, and
    you still don't as the halting problem isn't based on TELLING the
    machine you answer, but on predicting the behavior of the input program
    for its specific input.

    LoopIfYouSayItHalts(false) is halting.
    LoopIfYouSayItHalts(true) is non-halting.

    One answer per input is the specification of the problem, and there is a decider that can give the answer.

    Note also, your above program is just a syntax error: YouSayItHalts
    is defined to be a bool, but then you "call" it in a malformed if
    statement whose opening parenthesis is misplaced.
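
    A hedged, runnable rendering of that behavior table (my Python
    translation, treating the parameter as the plain bool it is declared to
    be rather than the malformed self-call): each fixed input has exactly one
    correct answer.

    def loop_if_you_say_it_halts(you_say_it_halts: bool):
        if you_say_it_halts:
            while True:          # non-halting branch
                pass
        return False             # halting branch

    print(loop_if_you_say_it_halts(False))    # halts, prints False
    # loop_if_you_say_it_halts(True) would run forever, so it is left
    # commented out: one answer per input, as stated above.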
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 12 14:09:07 2026
    From Newsgroup: comp.ai.philosophy

    On 1/12/26 4:06 AM, Richard Damon wrote:
    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As result, certain
    practical things (in code analysis) are not even tried because it's
    assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a
    dogma, at least. In practice, algorithmic inconclusiveness has
    0.001 probability, not a 100% guarantee as many engineers perceive it.
    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machines ...

    heck it must be certain that for any given machine there must exist
    a partial decider that can decide on it ... because otherwise a
    paradox would have to address all possible partial deciders in a
    computable fashion and that runs up against it's own limit to
    classical computing. therefore some true decider must exist for any
    given machine that exists ... we just can't funnel the knowledge
    thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit


    Nope, because the proof doesn't actually need to talk about HOW the
    decider actually made its decision, and thus not limited to Turing
    Machines.

    All it needs is that the decider be limited by the rules of a computation.

    All the arguements against the proof seem to begin with the error that
    the decider can be changed after the fact and such change changes the
    input to match, but that breaks the fundamental property of
    computations, that they are fixed algorithms

    The proof shows that the SPECIFIC decider that the input was made from
    will get the wrong answer, and we can make such an input for ANY
    specific decider, and thus no decider can get all answers correct.

    That the input HAS a correct answer (just the opposite of what that
    specific decider gives) shows that there IS a correct answer, so there
    is nothing wrong about the question of its halting, and thus a non-
    answer like "its behavior is contrary" is valid.

    Everyone trying to make the arguements just shows they don't understand
    the basics of what a computation is.

    missed ya dick!

    given that deciders are inherently part of the execution path they are deciding on ... ofc deciders can modify their behavior based on an input
    which they are included within, like they can modify their behavior
    based on the properties of the input.

    this is how partial deciders can intelligently block on responding to
    input that cannot be answered thru their particular interface.

    i'm not aware of any method that can prove a partial decider can't be
    more efficient than brute force, because again, they can block when encountering a paradox specific to their interface.

    furthermore this doesn't disprove a general algorithm backing the
    partial deciders, all the general algorithm needs is a "self" input
    which identifies the particular interface it's computing for. this
    general algo for partial deciders will have three outputs: HALTS, LOOPS,
    and PARADOX. when partial deciders receive PARADOX back from their algo
    run they will then just loop forever to never respond.
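    As a concrete sketch of the interface being described here (Python; the
    names and the general_algo stub are illustrative only, and nothing in
    this sketch claims that such a backing algorithm exists - it only shows
    the HALTS/LOOPS/PARADOX plumbing and the loop-forever-on-PARADOX
    behavior):

        from enum import Enum

        class Verdict(Enum):
            HALTS = "halts"
            LOOPS = "loops"
            PARADOX = "paradox"   # pathological for this particular interface

        def general_algo(self_description, program, argument) -> Verdict:
            """Stub for the claimed common backing algorithm."""
            raise NotImplementedError   # no implementation is claimed here

        def make_partial_decider(self_description):
            """Build a partial decider that answers via general_algo,
            or else never answers at all."""
            def partial_decider(program, argument):
                verdict = general_algo(self_description, program, argument)
                if verdict is Verdict.PARADOX:
                    while True:    # block forever rather than answer wrongly
                        pass
                return verdict     # Verdict.HALTS or Verdict.LOOPS
            return partial_decider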

    yes i'm aware "interfaces" are complete descriptions of a partial
    decider, and that's what i mean by passing in a self. the partial
    decider must have a quine that allows it to recognize itself, and it
    passes this into the general algo.
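    As an aside on the quine point, it is at least true that a program can
    carry an exact copy of its own text. The classic Python quine, shown
    only to illustrate that such self-description is possible:

        # Running this prints its own source code exactly.
        s = 's = %r\nprint(s %% s)'
        print(s % s)

    The same trick lets a program pass its own description to another
    routine instead of printing it.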

    "but i can loop over all partial deciders to produce a paradox" ... uhh
    no you can't? traditional computing cannot iterate over all functionally equivalent machines, so it certainly can't iterate over all almost functionally equivalent machines, so you cannot claim to produce a
    general paradox for the general algo as such a computation is outside
    the scope of classical computing limits.

    i await to see how you purposefully misunderstand this
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 12 22:16:48 2026
    From Newsgroup: comp.ai.philosophy

    On 1/12/26 5:09 PM, dart200 wrote:
    On 1/12/26 4:06 AM, Richard Damon wrote:
    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As result, certain
    practical things (in code analysis) are not even tried because
    it's assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation. >>>>>> And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a
    dogma, at least. In practice, algorithmic inconclusiveness has
    0.001 probability, not a 100% guarantee as many engineers perceive it.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machiens ...

    heck it must be certain that for any given machine there must exist
    a partial decider that can decide on it ... because otherwise a
    paradox would have to address all possible partial deciders in a
    computable fashion and that runs up against it's own limit to
    classical computing. therefore some true decider must exist for any
    given machine that exists ... we just can't funnel the knowledge
    thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit


    Nope, because the proof doesn't actually need to talk about HOW the
    decider actually made its decision, and thus not limited to Turing
    Machines.

    All it needs is that the decider be limited by the rules of a
    computation.

    All the arguements against the proof seem to begin with the error that
    the decider can be changed after the fact and such change changes the
    input to match, but that breaks the fundamental property of
    computations, that they are fixed algorithms

    The proof shows that the SPECIFIC decider that the input was made from
    will get the wrong answer, and we can make such an input for ANY
    specific decider, and thus no decider can get all answers correct.

    That the input HAS a correct answer (just the opposite of what that
    specific decider gives) shows that there IS a correct answer, so there
    is nothing wrong about the question of its halting, and thus a non-
    answer like "its behavior is contrary" is valid.

    Everyone trying to make the arguements just shows they don't
    understand the basics of what a computation is.

    missed ya dick!

    given that deciders are inherently part of the execution path they are deciding on ... ofc deciders can modify their behavior based on an input which they are included within, like they can modify their behavior
    based on the properties of the input.

    No, that behavior had to always have been in them, and thus seen by the "pathological" input.


    this is how partial deciders can intelligently block on responding to
    input that cannot be answered thru their particular interface.

    i'm not aware of any method that can prove a partial decider can't be
    more efficient that brute force, because again, they can block when encountering a paradox specific to their interface.

    How does "brute force" determine non-halting?

    And nothing in the theory disagrees with partial halt deciders existing,
    they just can NEVER give an answer to the pathological program based on
    them, as if they give an answer, it will be wrong.


    furthermore this doesn't disprove a general algorithm backing the
    partial deciders, all the general algorithm needs is a "self" input
    which identifies the particular interface it's computing for. this
    general algo for partial deciders will have three outputs: HALTS, LOOPS,
    and PARADOX. when partial deciders receive PARADOX back from their algo
    run they will then just loop forever to never respond.

    Sure it does.

    The problem is it doesn't get a "self" input, and by its nature it
    can't determine if the input is just using a computational equivalent of
    itself that doesn't match its idea of what it looks like.

    This FACT just breaks your concept, as you just assume you can detect
    what is proven to be undetectable in full generality.

    And, even if it CAN detect that the input is using a copy of itself,
    that doesn't help it as it still can't get the right answer, and the pathological input based on your general algorithm effectively uses
    copies of all the algorithms it enumerates, so NONE of them can give the
    right answer.


    yes i'm aware "interfaces" are complete descriptions of a partial
    decider, and that's what i mean by passing in a self. the partial
    decider must have a quine that allows it to recognize itself, and it
    passes this into the general algo.

    Nope, an "Interface" is NOT a complete description of ANY machine, so
    you are just showing you are fundamentally incorrect in your basis.

    You can't "run" an "interface", only an actual program that implements it.


    "but i can loop over all partial deciders to produce a paradox" ... uhh
    no you can't? traditional computing cannot iterate over all functionally equivalent machines, so it certainly can't iterate over all almost functionally equivalent machines, so you cannot claim to produce a
    general paradox for the general algo as such a computation is outside
    the scope of classical computing limits.

    So, you're just admitting you can't use an enumerated list of partial
    deciders to get the answer.

    The pathological program doesn't need to enumerate the deciders, it just
    needs to use whatever you make your final decider, which can only
    partially enumerate the partial deciders.


    i await to see how you purposefully misunderstand this


    It seems you are the one that doesn't understand.

    Programs can't CHANGE their behavior, they HAVE specific behavior that
    depends on the input, and ALWAYS have that behavior for that input.

    The definition of a computation means it can't squirrel away the fact
    that it was once used on this particular input, and needs to do
    something different, which is what is needed to CHANGE their behavior.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 12 20:21:27 2026
    From Newsgroup: comp.ai.philosophy

    On 1/12/26 7:16 PM, Richard Damon wrote:
    On 1/12/26 5:09 PM, dart200 wrote:
    On 1/12/26 4:06 AM, Richard Damon wrote:
    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As result, certain
    practical things (in code analysis) are not even tried because
    it's assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a
    dogma, at least. In practice, algorithmic inconclusiveness has
    0.001 probability, not a 100% guarantee as many engineers
    perceive it.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist
    for all machiens ...

    heck it must be certain that for any given machine there must
    exist a partial decider that can decide on it ... because
    otherwise a paradox would have to address all possible partial
    deciders in a computable fashion and that runs up against it's own
    limit to classical computing. therefore some true decider must
    exist for any given machine that exists ... we just can't funnel
    the knowledge thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit

    Nope, because the proof doesn't actually need to talk about HOW the
    decider actually made its decision, and thus not limited to Turing
    Machines.

    All it needs is that the decider be limited by the rules of a
    computation.

    All the arguements against the proof seem to begin with the error
    that the decider can be changed after the fact and such change
    changes the input to match, but that breaks the fundamental property
    of computations, that they are fixed algorithms

    The proof shows that the SPECIFIC decider that the input was made
    from will get the wrong answer, and we can make such an input for ANY
    specific decider, and thus no decider can get all answers correct.

    That the input HAS a correct answer (just the opposite of what that
    specific decider gives) shows that there IS a correct answer, so
    there is nothing wrong about the question of its halting, and thus a
    non- answer like "its behavior is contrary" is valid.

    Everyone trying to make the arguements just shows they don't
    understand the basics of what a computation is.

    missed ya dick!

    given that deciders are inherently part of the execution path they are
    deciding on ... ofc deciders can modify their behavior based on an
    input which they are included within, like they can modify their
    behavior based on the properties of the input.

    No, that behavior had to always have been in them, and thus seen by the "pathological" input.


    this is how partial deciders can intelligently block on responding to
    input that cannot be answered thru their particular interface.

    i'm not aware of any method that can prove a partial decider can't be
    more efficient that brute force, because again, they can block when
    encountering a paradox specific to their interface.

    How does "brute force" determine non-halting?

    well it does for the halting computations (bounded time, bounded space),

    and proper infinite loops (unbounded time, bounded space),

    just not runaway infinite computation (unbounded time, unbounded space)
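    A minimal sketch of that brute-force split (Python; initial_config,
    is_halted and step are hypothetical hooks into some machine simulator -
    the point is only that a halt is observed directly, a repeated
    configuration proves a proper infinite loop, and a run that keeps
    visiting new configurations is never decided):

        def brute_force_classify(machine, tape):
            """Simulate one machine, remembering every configuration seen."""
            seen = set()
            config = machine.initial_config(tape)
            while True:
                if machine.is_halted(config):
                    return "HALTS"    # bounded time, bounded space
                if config in seen:
                    return "LOOPS"    # repeat configuration: proper infinite loop
                seen.add(config)
                config = machine.step(config)   # never returns on a runaway
                                                # (unbounded-space) computation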


    And nothing in the theory disagrees with partial halt deciders existing, they just can NEVER give an answer to the pathological program based on them, as if they give an answer, it will be wrong.

    which means: for any given machine, there is a decider out there that
    must decide correctly on it. so, for any given machine there is a method
    that does correctly decide on it without ever giving any wrong answers to
    any other machine (tho it may not give answers at times).

    as a man, i'm not subject to having my output read and contradicted like turing machine deciders are. i'm not subject to having to block
    indefinitely because some input is pathological to me. and because some
    method must exist that can correctly decide on any given input... i can
    know that method for any given input, and therefore i can decide on any
    given input.

    this is why i'm really starting to think the ct-thesis is cooked. you
    say i can't do that because turing machines can't do that ... but
    where's the proof that turing machines encompass all of computing? why am
    i limited by the absolute nonsense that is turing machines producing pathological input to themselves?

    because turing machines *are* the fundamentals of computation??? but
    again: that's just an assumption. we never proved it, yet here you are treating it like unquestionable law.

    that's the flaw bro, one we've been sitting on for almost a century. i
    don't even have a proof to deconstruct, it's literally just an
    assumption, so all i need to do is construct the scenarios where
    something is obviously generally computable, but that computation cannot
    be generally expressed thru a turing machine computation input/output specification.



    furthermore this doesn't disprove a general algorithm backing the
    partial deciders, all the general algorithm needs is a "self" input
    which identifies the particular interface it's computing for. this
    general algo for partial deciders will have three outputs: HALTS,
    LOOPS, and PARADOX. when partial deciders receive PARADOX back from
    their algo run they will then just loop forever to never respond.

    Sure it does.

    The problem is it doesn't get a "self" input, and by its nature. it

    i'm defining the algo, so i say it does

    can't determine if the input is just using a computational equivalent of itself that doesn't match its idea of what it looks like.

    This FACT just breaks you concept, as you just assume you can detect
    what is proven to be undetectable in full generality.

    using the very paradox i'm trying to solve, so that's begging the
    question. it's really kinda sad how much begging the question is going
    on in the fundamental theory of computing


    And, even if it CAN detect that the input is using a copy of itself,
    that doesn't help it as it still can't get the right answer, and the

    it's the general algo behind the partial deciders - all it needs to do is either
    it returns PARADOX in which case the partial decider decides to loop(),
    or maybe we can just extract that functionality into the general partial
    algo itself...

    pathological input based on your general algorithm effectively uses
    copies of all the algorithms it enumerates, so NONE of them can give the right answer.


    yes i'm aware "interfaces" are complete descriptions of a partial
    decider, and that's what i mean by passing in a self. the partial
    decider must have a quine that allows it to recognize itself, and it
    passes this into the general algo.

    Nope, an "Interface" is NOT a complete description of ANY machine, so
    you are just showing you are fundamentally incorrect in your basis.

    You can't "run" and "interface", only an actual program that implements it.

    sure, but all the partial deciders do is construct a self-reference using
    a quine and pass that along with the input to a common backing
    algorithm. all valid partial deciders will need accurate quines in order
    to ascertain where their output feeds back into and affects the prediction
    they are making.

    and yes the partial deciders do contain full descriptions of the common backing algo, but they still really do just act as an interface to that
    common algo

    they act like an exposed API/interface into the common algo



    "but i can loop over all partial deciders to produce a paradox" ...
    uhh no you can't? traditional computing cannot iterate over all
    functionally equivalent machines, so it certainly can't iterate over
    all almost functionally equivalent machines, so you cannot claim to
    produce a general paradox for the general algo as such a computation
    is outside the scope of classical computing limits.

    So, you just admitting you can't use an emumerated list of partial
    deciders to get the answer.

    which is fine, it's just not necessary


    The pathological program doesn't need to enumerate the deciders, it just needs to user what you make your final decider, which can only partially enumerate the partial deciders.

    it would in order to break the general algo across all selves. the self
    acts as the fixed point of reference to which the decision is made ...
    and while no fixed point can decide on all input, for any given input
    there is a fixed point of self that can decide on that input. and this
    can be encapsulated into a general algorithm that encapsulates a general
    procedure even if any given fixed point of self is not general.

    therefore a general algo exists, even if any particular fixed point of decision making is contradicted.

    dear god, why is computing theory such a shit show? cause we've been
    almost blindly following what the forefathers of computing said?



    i await to see how you purposefully misunderstand this


    It seems you are the one that doesn't understand.

    Programs can't CHANGE there behavior, they HAVE specific behavior that depends on the input, and ALWAYS have that behavior for that input.

    The definition of a computation means it can't squirrel away the fact
    that it was once used on this particular input, and needs to do
    something different, which is what is needed to CHANGE their behavior.

    i'm aware, i'm not really sure why ur repeating that

    and i await ur further purposeful misunderstanding
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.ai.philosophy,sci.lang on Tue Jan 13 07:02:46 2026
    From Newsgroup: comp.ai.philosophy

    On 13/01/2026 04:21, dart200 wrote:
    On 1/12/26 7:16 PM, Richard Damon wrote:

    The problem is it doesn't get a "self" input, and by its nature. it

    i'm defining the algo, so i say it does

    This is the god complex. While it's true that definition is a volition,
    like transforming into a jupiter-sized victoria sponge it is not always
    a free choice.

    Also, people often use "define" to mean "define a constraint for". You
    can define a constraint as freely as you can describe it without needing
    the god complex, but perhaps there is no definition of a solution for a
    system of constraints that includes it.

    I want to know the right terminology for what you did in your statement
    "i'm defining the algo, so i say it does": you have one meaning for
    "defining" in "i'm defining the algo" which is "defining a constraint
    for", but a different meaning in a re-interpretation of the supposed
    world that's referenced by "so" in "so i say it does" which is actual
    defining.
    --
    Tristan Wibberley

    The message body is Copyright (C) 2026 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 13 07:09:44 2026
    From Newsgroup: comp.ai.philosophy

    On 1/12/26 11:21 PM, dart200 wrote:
    On 1/12/26 7:16 PM, Richard Damon wrote:
    On 1/12/26 5:09 PM, dart200 wrote:
    On 1/12/26 4:06 AM, Richard Damon wrote:
    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on
    them perceive the halting problem as a dogma. As result, certain
    practical things (in code analysis) are not even tried because
    it's assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a
    limitation. And even when one hits it, they can safely discard a
    particular analysis branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a
    dogma, at least. In practice, algorithmic inconclusiveness has
    0.001 probability, not a 100% guarantee as many engineers
    perceive it.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists
    for any given machine, just that a "general" decider does not
    exist for all machiens ...

    heck it must be certain that for any given machine there must
    exist a partial decider that can decide on it ... because
    otherwise a paradox would have to address all possible partial
    deciders in a computable fashion and that runs up against it's
    own limit to classical computing. therefore some true decider
    must exist for any given machine that exists ... we just can't
    funnel the knowledge thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit

    Nope, because the proof doesn't actually need to talk about HOW the
    decider actually made its decision, and thus not limited to Turing
    Machines.

    All it needs is that the decider be limited by the rules of a
    computation.

    All the arguements against the proof seem to begin with the error
    that the decider can be changed after the fact and such change
    changes the input to match, but that breaks the fundamental property
    of computations, that they are fixed algorithms

    The proof shows that the SPECIFIC decider that the input was made
    from will get the wrong answer, and we can make such an input for
    ANY specific decider, and thus no decider can get all answers correct.
    That the input HAS a correct answer (just the opposite of what that
    specific decider gives) shows that there IS a correct answer, so
    there is nothing wrong about the question of its halting, and thus a
    non- answer like "its behavior is contrary" is valid.

    Everyone trying to make the arguements just shows they don't
    understand the basics of what a computation is.

    missed ya dick!

    given that deciders are inherently part of the execution path they
    are deciding on ... ofc deciders can modify their behavior based on
    an input which they are included within, like they can modify their
    behavior based on the properties of the input.

    No, that behavior had to always have been in them, and thus seen by
    the "pathological" input.


    this is how partial deciders can intelligently block on responding to
    input that cannot be answered thru their particular interface.

    i'm not aware of any method that can prove a partial decider can't be
    more efficient that brute force, because again, they can block when
    encountering a paradox specific to their interface.

    How does "brute force" determine non-halting?

    well it does for the halting computations (bounded time, bounded space),

    and proper infinite loops (unbounded time, bounded space),

    just not runaway infinite computation (unbounded time, unbounded space)

    Which exists and is a form of non-halting.

    Yes, Non-Turing Complete systems, with bounded space, are Halt Decidable
    with simple deciders as long as the decider is allowed to be
    sufficiently bigger than the program it is deciding on. That has been
    known for a long time, but isn't an exception to the Halting Problem, as
    it doesn't meet the basic requirements.
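    A sketch of why bounded space gives decidability, assuming a
    hypothetical simulator API (the whole argument is the counting bound:
    a machine with q states and g tape symbols confined to n cells has at
    most q * n * g**n distinct configurations, so any run longer than that
    must have repeated a configuration and can never halt):

        def halts_in_bounded_space(machine, tape, n_cells):
            """Decide halting for a machine restricted to n_cells tape cells."""
            bound = machine.num_states * n_cells * machine.num_symbols ** n_cells
            config = machine.initial_config(tape)
            for _ in range(bound + 1):
                if machine.is_halted(config):
                    return True
                config = machine.step(config)
            return False   # ran past every possible configuration: never halts

    The bound is exponential in the space allowance, which is the sense in
    which the decider has to be much bigger than the program it decides on.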



    And nothing in the theory disagrees with partial halt deciders
    existing, they just can NEVER give an answer to the pathological
    program based on them, as if they give an answer, it will be wrong.

    which means: for any given machine, there is a decider out there that
    must decide correctly on it. so, for any given machine there is a method that does correctly decide on it without ever given any wrong answers to
    any other machine (tho it may not give answers at times).

    So? Of course there is a decider that gets it right, as one of the
    deciders AlwaysSayHalts or AlwaysSaysNonhalting will be right, we just
    don't know which.
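    For concreteness, those two trivial deciders are just constants; the
    point is purely nonconstructive - for any particular input exactly one
    of them answers correctly, but nothing computable tells us which one to
    trust (names as used above):

        def AlwaysSayHalts(program, argument):
            return True       # correct on exactly the halting inputs

        def AlwaysSaysNonhalting(program, argument):
            return False      # correct on exactly the non-halting inputs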

    And, even if there is an always correct partial decider that gets it
    right, it can't be part of that enumerable set, so we don't know to use it.


    as a man, i'm not subject to having my output read and contradicted like turing machine deciders are. i'm not subject to having to block
    indefinitely because some input is pathological to me. and because some method must exist that can correct decide on any given input... i can
    know that method for any given input, and therefor i can decide on any
    given input.

    As a man, you are not bound by the rules of computations, so aren't
    eligible to be entered as a solution for a computation problem.

    So, all you are stating is you are too stupid to understand the nature
    of the problem.

    And, you CAN'T claim to know the method for any given input, as there
    are an infinite number of inputs, but only finite knowledge.

    I guess you have fallen for the Olcott trap of convincing yourself that
    you are God.


    this is why i'm really starting to think the ct-thesis is cooked. you
    say i can't do that because turing machines can't do that ... but
    where's the proof that turing machine encompass all of computing? why am
    i limited by the absolute nonsense that is turing machines producing pathological input to themselves?

    But the problem goes back to the fact that you are just showing you
    don't understand what a computation is.

    Yes, there is no "Proof" that Turing Machines encompass all of
    computing, that is why CT is just a thesis and not a theorem. It has
    shown itself to be correct for everything we have tried so far.


    because turing machines *are* the fundamentals of computation??? but
    again: that's just an assumption. we never proved it, yet here you are treating it like unquestionable law.

    that's the flaw bro, one we've been sitting on for almost a century. i
    don't even have a proof to deconstruct, it's literally just an
    assumption, so all i need to do is construct the scenarios where
    something is obviously generally computable, but that computation cannot
    be generally expressed thru a turing machine computation input/ouput specification.

    No, the problem is you don't understand what computing is, and thus,
    just like Zeno, think you have come up with a paradox.

    It SEEMS (to you) that this should be computable, but it turns out it
    isn't.

    The halting problem proof can be generalized to ANY computation
    platform, and shown to be true.




    furthermore this doesn't disprove a general algorithm backing the
    partial deciders, all the general algorithm needs is a "self" input
    which identifies the particular interface it's computing for. this
    general algo for partial deciders will have three outputs: HALTS,
    LOOPS, and PARADOX. when partial deciders receive PARADOX back from
    their algo run they will then just loop forever to never respond.

    Sure it does.

    The problem is it doesn't get a "self" input, and by its nature. it

    i'm defining the algo, so i say it does

    Sorry, but you don't get to do that.

    And the problem is that even being given a sample "self" input, doesn't
    mean it can detect the other "self" that exists in the paradox input.

    Or that doing so helps it get the right answer.

    Whatever answer your algorithm gives will be wrong.

    There is a right answer, just the opposite of the one the algorithm gives.

    That doesn't make the name for that behavior "Paradox", it will still be
    one of "Halting" or "Non-Halting", so the third answer can't be correct.


    can't determine if the input is just using a computational equivalent
    of itself that doesn't match its idea of what it looks like.

    This FACT just breaks you concept, as you just assume you can detect
    what is proven to be undetectable in full generality.

    using the very paradox i'm trying to solve, so that's begging the
    question. it's really kinda sad how much begging the question is going
    on in the fundamental theory of computing

    No. Your algorithm begs the question by assuming you can compute
    something uncomputable.

    I guess your problem is you don't understand what an actual ALGORITHM is.



    And, even if it CAN detect that the input is using a copy of itself,
    that doesn't help it as it still can't get the right answer, and the

    it's general algo to the partial deciders - all it needs to do is either
    it returns PARADOX in which case the partial decider decides to loop(),
    or maybe we can just extract that functionality into the general partial algo itself...

    But the "Halting Behavior" of the input isn't "PARADOX", so that can't
    be a correct answer.

    You don't seem to understand that LYING is just LYING and incorrect.

    It seems that the result of your description is that ALL your partial
    deciders are going to just loop forever, and thus your claim that one of
    them will answer is just a lie.


    pathological input based on your general algorithm effectively uses
    copies of all the algorithms it enumerates, so NONE of them can give
    the right answer.


    yes i'm aware "interfaces" are complete descriptions of a partial
    decider, and that's what i mean by passing in a self. the partial
    decider must have a quine that allows it to recognize itself, and it
    passes this into the general algo.

    Nope, an "Interface" is NOT a complete description of ANY machine, so
    you are just showing you are fundamentally incorrect in your basis.

    You can't "run" and "interface", only an actual program that
    implements it.

    sure, but all the partial decider do is construct a self-reference using
    a quine and pass that along with the input to a common backing
    algorithm. all valid partial deciders will need accurate quines in order
    to ascertain where their output feedbacks into affect the prediction
    they are making.

    But that is the problem, since the input uses the machine that is
    enumerating all those deciders, every one of them (if they can detect
    themselves) will detect themselves and fail to answer.



    and yes the partial deciders do contain full descriptions of the common backing algo, but they still really do just act as an interface to that common algo

    they act like an exposed API/interface into the common algo

    and thus the paradox input has the code to act counter to that common algorithm, and it can NEVER give the right answer.




    "but i can loop over all partial deciders to produce a paradox" ...
    uhh no you can't? traditional computing cannot iterate over all
    functionally equivalent machines, so it certainly can't iterate over
    all almost functionally equivalent machines, so you cannot claim to
    produce a general paradox for the general algo as such a computation
    is outside the scope of classical computing limits.

    So, you just admitting you can't use an emumerated list of partial
    deciders to get the answer.

    which is fine, it's just not necessary

    So, you are just admitting that your claim is based on needing to
    compute the uncomputable, in other words, is just a lie.

    Your enumerable set of partial deciders will just never give an answer,
    and thus you can't say that some partial decider can answer for every
    possible input.



    The pathological program doesn't need to enumerate the deciders, it
    just needs to user what you make your final decider, which can only
    partially enumerate the partial deciders.

    it would in order to break the general algo across all self's. the self
    acts as the fixed point of reference to which the decision is made ...
    and while no fixed point can decide on all input, for any given input
    there is a fixed point of self that can decide on that input. and this
    can be encapsulated into a general algorithm that encapsulated a general procedure even if any given fixed point of self is not general.

    No it doesn't, it uses the fact that your outer enumerator does it.

    Your logic is based on LIES, in that you just assume you can do it.


    therefore a general algo exists, even if any particular fixed point of decision making is contradicted.

    Nope. Again, your logic is based on the fallacy of assuming the conclusion.


    dear god, why is computing theory is such a shit show? cause we've been almost blindly following what the forefathers of computing said?

    No, YOU are the shit show because you don't understand that truth
    exists, but isn't always knowable. There ARE limits to what is
    computable, as problem space grows faster than solution space.




    i await to see how you purposefully misunderstand this


    It seems you are the one that doesn't understand.

    Programs can't CHANGE there behavior, they HAVE specific behavior that
    depends on the input, and ALWAYS have that behavior for that input.

    The definition of a computation means it can't squirrel away the fact
    that it was once used on this particular input, and needs to do
    something different, which is what is needed to CHANGE their behavior.

    i'm aware, i'm not really sure why ur reapting that

    and i await ur further purposeful misunderstanding


    Because you keep on trying to think of ways to get around that limitation.

    An algorithm does what the algorithm does, and can't change itself.

    IF what the algorithm does is just give the wrong answer to a problem,
    we can't say that the right answer doesn't exist just because a
    DIFFERENT problem based on a DIFFERENT algorithm gives a different answer.

    No machine has "Paradoxical" halting behavior as its specific behavior.
    It might be described as contrary to a given instance of a general
    algorithm, but all that shows was that algorithm was WRONG, as it still
    had specific behavior as to its actual behavior.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 13 12:33:18 2026
    From Newsgroup: comp.ai.philosophy

    On 1/13/26 4:09 AM, Richard Damon wrote:
    On 1/12/26 11:21 PM, dart200 wrote:
    On 1/12/26 7:16 PM, Richard Damon wrote:
    On 1/12/26 5:09 PM, dart200 wrote:
    On 1/12/26 4:06 AM, Richard Damon wrote:
    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on
    them perceive the halting problem as a dogma. As result,
    certain practical things (in code analysis) are not even tried
    because it's assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a
    limitation. And even when one hits it, they can safely discard
    a particular analysis branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a
    dogma, at least. In practice, algorithmic inconclusiveness has
    0.001 probability, not a 100% guarantee as many engineers
    perceive it.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists
    for any given machine, just that a "general" decider does not
    exist for all machiens ...

    heck it must be certain that for any given machine there must
    exist a partial decider that can decide on it ... because
    otherwise a paradox would have to address all possible partial
    deciders in a computable fashion and that runs up against it's
    own limit to classical computing. therefore some true decider
    must exist for any given machine that exists ... we just can't
    funnel the knowledge thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic
    limit


    Nope, because the proof doesn't actually need to talk about HOW the
    decider actually made its decision, and thus not limited to Turing
    Machines.

    All it needs is that the decider be limited by the rules of a
    computation.

    All the arguements against the proof seem to begin with the error
    that the decider can be changed after the fact and such change
    changes the input to match, but that breaks the fundamental
    property of computations, that they are fixed algorithms

    The proof shows that the SPECIFIC decider that the input was made
    from will get the wrong answer, and we can make such an input for
    ANY specific decider, and thus no decider can get all answers correct.

    That the input HAS a correct answer (just the opposite of what that
    specific decider gives) shows that there IS a correct answer, so
    there is nothing wrong about the question of its halting, and thus
    a non- answer like "its behavior is contrary" is valid.

    Everyone trying to make the arguements just shows they don't
    understand the basics of what a computation is.

    missed ya dick!

    given that deciders are inherently part of the execution path they
    are deciding on ... ofc deciders can modify their behavior based on
    an input which they are included within, like they can modify their
    behavior based on the properties of the input.

    No, that behavior had to always have been in them, and thus seen by
    the "pathological" input.


    this is how partial deciders can intelligently block on responding
    to input that cannot be answered thru their particular interface.

    i'm not aware of any method that can prove a partial decider can't
    be more efficient that brute force, because again, they can block
    when encountering a paradox specific to their interface.

    How does "brute force" determine non-halting?

    well it does for the halting computations (bounded time, bounded space),

    and proper infinite loops (unbounded time, bounded space),

    just not runaway infinite computation (unbounded time, unbounded space)

    Which exist and is a form of non-halting.

    Yes, Non-Turing Complete systems, with bounded space, are Halt Decidable

    infinite loops in turing complete systems are also fully enumerable just
    like halting machines are. they will always result in repeat
    configurations, and this is decidable within an unbounded amount of time
    using brute force.
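    A sketch of that repeat-configuration check done with constant extra
    memory (Floyd's tortoise-and-hare cycle detection in Python;
    initial_config, is_halted and step are hypothetical simulator hooks,
    and like any brute-force method it only ever returns on runs that halt
    or that revisit a configuration - it runs forever on a run that uses
    unbounded space):

        def detect_halt_or_cycle(machine, tape):
            """Floyd's two-pointer walk over the sequence of configurations."""
            slow = machine.initial_config(tape)
            fast = machine.initial_config(tape)
            while True:
                for _ in range(2):                # the hare takes two steps
                    if machine.is_halted(fast):
                        return "HALTS"
                    fast = machine.step(fast)
                slow = machine.step(slow)         # the tortoise takes one step
                if slow == fast:
                    return "LOOPS"                # some configuration repeats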

    with simple deciders as long as the decider is allowed to be
    sufficiently bigger than the program it is deciding on. That has been
    known for a long time, but isn't an exception to the Halting Problem, as
    it doesn't meet the basic requirements.



    And nothing in the theory disagrees with partial halt deciders
    existing, they just can NEVER give an answer to the pathological
    program based on them, as if they give an answer, it will be wrong.

    which means: for any given machine, there is a decider out there that
    must decide correctly on it. so, for any given machine there is a
    method that does correctly decide on it without ever given any wrong
    answers to any other machine (tho it may not give answers at times).

    So? Of course there is a decider that gets it right, as one of the
    deciders AlwaysSayHalts or AlwaysSaysNonhalting will be right, we just
    don't know.

    those are not valid partial deciders, why are you still bringing up red herrings?


    And, even if there is an always correct partial decider that gets it
    right, it can't be part of that enumerable set, so we don't know to use it.

    and where's the proof i must enumerate all partial deciders in order to
    know it for any given machine???



    as a man, i'm not subject to having my output read and contradicted
    like turing machine deciders are. i'm not subject to having to block
    indefinitely because some input is pathological to me. and because
    some method must exist that can correct decide on any given input... i
    can know that method for any given input, and therefor i can decide on
    any given input.

    As a man, you are not bound by the rules of computations, so aren't
    eligable to be entered as a solution for a computation problem.

    why am i not bound by the rules of computation when performing valid computation?


    So, all you are stating is you are too stupid to understand the nature
    of the problem.

    And, you CAN'T claim to know the mathod for any given input, as there
    are an infinite number of inputs, but only finite knowledge.

    or i can algorithmically determine the correct method upon receiving the
    input ...


    I guess you have fallen for the Olcott trap of convinsing yourself that
    you are God.


    this is why i'm really starting to think the ct-thesis is cooked. you
    say i can't do that because turing machines can't do that ... but
    where's the proof that turing machine encompass all of computing? why
    am i limited by the absolute nonsense that is turing machines
    producing pathological input to themselves?

    But the problem goes back to the fact that you are just showing you
    don't understand what a computation is.

    red herrings and now gaslighting


    Yes, there is no "Proof" that Turing Machines encompass all of
    computing, that is why CT is just a thesis and not a theorem. It has
    shown itself to be correct for everything we have tried so far.

    there we go, u don't have proof yet u keep asserting it's a law because
    a bandwagon has convinced u



    because turing machines *are* the fundamentals of computation??? but
    again: that's just an assumption. we never proved it, yet here you are
    treating it like unquestionable law.

    that's the flaw bro, one we've been sitting on for almost a century. i
    don't even have a proof to deconstruct, it's literally just an
    assumption, so all i need to do is construct the scenarios where
    something is obviously generally computable, but that computation
    cannot be generally expressed thru a turing machine computation input/
    ouput specification.

    No, the problem is you don't understand what computating is, and thus,
    just like Zeno, think you have come up with a paradox.

    It SEEMS (to you) that this should be computable, but it turns out it
    isn't.

    heck it even is computable, just not from a fixed point of reference


    The halting problem proof can be generalized to ANY computation
    platform, and shown to be true.




    furthermore this doesn't disprove a general algorithm backing the
    partial deciders, all the general algorithm needs is a "self" input
    which identifies the particular interface it's computing for. this
    general algo for partial deciders will have three outputs: HALTS,
    LOOPS, and PARADOX. when partial deciders receive PARADOX back from
    their algo run they will then just loop forever to never respond.

    Sure it does.

    The problem is it doesn't get a "self" input, and by its nature. it

    i'm defining the algo, so i say it does

    Sorry, but you don't get to do that.

    And the problem is that even being given a sample "self" input, doesn't
    mean it can detect the other "self" that exists in the paradox input.

    again circular logic


    Or that doing so helps it get the right answer.

    What ever answer you algorithm gives, will be wrong.

    There is a right answer, just the opposite of the one the algorithm gives.

    That doesn't make the name for that behavior "Paradox", it will still be
    one of "Halting" or "Non-Halting", so the third answer can't be correct.


    can't determine if the input is just using a computational equivalent
    of itself that doesn't match its idea of what it looks like.

    This FACT just breaks you concept, as you just assume you can detect
    what is proven to be undetectable in full generality.

    using the very paradox i'm trying to solve, so that's begging the
    question. it's really kinda sad how much begging the question is going
    on in the fundamental theory of computing

    No. you algorithm begs the question by assuming you can compute
    something uncomputable.

    i'm showing how it being computable can co-exist with pathological
    input. i realize ur not honest enough of a person to acknowledge what my actual argument is, but that's because if u started acknowledging i'm
    right about things ur haunted house of cards goes tumbling down


    I guess your problem is you don't understand what an actual ALGORITHM is.

    gaslighting again, why must u argue like a child?




    And, even if it CAN detect that the input is using a copy of itself,
    that doesn't help it as it still can't get the right answer, and the

    it's general algo to the partial deciders - all it needs to do is
    either it returns PARADOX in which case the partial decider decides to
    loop(), or maybe we can just extract that functionality into the
    general partial algo itself...

    But the "Halting Behavior" of the input isn't "PARADOX", so that can't
    be a correct answer.

    it's correct from the fixed point of the decision, or we can just block.


    You don't seem to understand that LYING is just LYING and incorrect.

    unfortunately turing machines are not powerful enough of a computation paradigm to fit the pigeonhole fallacy ur now making

    this is a problem with assuming the CT-thesis is correct.


    It seems that the result of your description is that ALL your partial deciders are going to just loop forever, and thus your claim that one of them will answer is just a lie.

    no



    pathological input based on your general algorithm effectively uses
    copies of all the algorithms it enumerates, so NONE of them can give
    the right answer.


    yes i'm aware "interfaces" are complete descriptions of a partial
    decider, and that's what i mean by passing in a self. the partial
    decider must have a quine that allows it to recognize itself, and it
    passes this into the general algo.

    Nope, an "Interface" is NOT a complete description of ANY machine, so
    you are just showing you are fundamentally incorrect in your basis.

    You can't "run" and "interface", only an actual program that
    implements it.

    sure, but all the partial decider do is construct a self-reference
    using a quine and pass that along with the input to a common backing
    algorithm. all valid partial deciders will need accurate quines in
    order to ascertain where their output feedbacks into affect the
    prediction they are making.

    But that is the problem, since the input uses the machine that is enumerating all those deciders, everyone (if they can detect themselves) will detect themselves and fail to answer.

    no it's not



    and yes the partial deciders do contain full descriptions of the
    common backing algo, but they still really do just act as an interface
    to that common algo

    they act like an exposed API/interface into the common algo

    and thus the paradox input has the code to act counter to that common algorithm, and it can NEVER give the right answer.

    it doesn't counter a general algo that decides across all fixed points;
    the pathological input can only counter a subset of fixed points under classical limitations to computing.





    "but i can loop over all partial deciders to produce a paradox" ...
    uhh no you can't? traditional computing cannot iterate over all
    functionally equivalent machines, so it certainly can't iterate over
    all almost functionally equivalent machines, so you cannot claim to
    produce a general paradox for the general algo as such a computation
    is outside the scope of classical computing limits.

    So, you just admitting you can't use an emumerated list of partial
    deciders to get the answer.

    which is fine, it's just not necessary

    So, you are just admitting that your claim is based on needing to
    compute the uncomputable, in other words, is just a lie.

    i'm saying the total enumeration is not necessary


    Your enumerable set of partial deciders will just never give an answer,
    and thus you can't say that some partial decider can answer for every possible input.



    The pathological program doesn't need to enumerate the deciders, it
    just needs to user what you make your final decider, which can only
    partially enumerate the partial deciders.

    it would in order to break the general algo across all self's. the
    self acts as the fixed point of reference to which the decision is
    made ... and while no fixed point can decide on all input, for any
    given input there is a fixed point of self that can decide on that
    input. and this can be encapsulated into a general algorithm that
    encapsulated a general procedure even if any given fixed point of self
    is not general.

    No it doesn't, it uses the fact that your outer enumerator does it.

    Your logic is based on LIES that you assume you can do it. But,


    therefore a general algo exists, even if any particular fixed point of
    decision making is contradicted.

    Nope. Again, you logic is based on the fallacy of assuming the conclusion.

    beating up that straw man, keep up the good work!



    dear god, why is computing theory is such a shit show? cause we've
    been almost blindly following what the forefathers of computing said?

    No, YOU are the shit show because you don't understand that truth
    exsits, but isn't always knowable. There ARE limits to what is
    computable, as problem space grows faster than solution space.

    how many fallacies have i identified in ur arguments? like 6 just in
    this one?

    ur just as garbage as polcott tbh





    i await to see how you purposefully misunderstand this


    It seems you are the one that doesn't understand.

    Programs can't CHANGE there behavior, they HAVE specific behavior
    that depends on the input, and ALWAYS have that behavior for that input.

    The definition of a computation means it can't squirrel away the fact
    that it was once used on this particular input, and needs to do
    something different, which is what is needed to CHANGE their behavior.

    i'm aware, i'm not really sure why ur reapting that

    and i await ur further purposeful misunderstanding


    Because you keep on trying to think of ways to get around that limitation.

    And algorithm does what the algorithm does, and can't change itself.

    IF what the algorithm does is just give the wrong answer to a problem,
    we can't say that the right answer doesn't exist just because a
    DIFFERENT problem based on a DIFFERENT algorithm gives a different answer.

    No machine has "Paradoxical" halting behavior as its specific behavior.
    It might be described as contrary to a given instance of a general algorithm, but all that shows was that algorithm was WRONG, as it still
    had specific behavior as to its actual behavior.
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Wed Jan 14 22:43:35 2026
    From Newsgroup: comp.ai.philosophy

    On 1/13/26 3:33 PM, dart200 wrote:
    On 1/13/26 4:09 AM, Richard Damon wrote:
    On 1/12/26 11:21 PM, dart200 wrote:
    On 1/12/26 7:16 PM, Richard Damon wrote:
    On 1/12/26 5:09 PM, dart200 wrote:
    On 1/12/26 4:06 AM, Richard Damon wrote:
    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on
    them perceive the halting problem as a dogma. As result, certain
    practical things (in code analysis) are not even tried because
    it's assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as a
    dogma, at least. In practice, algorithmic inconclusiveness has
    0.001 probability, not a 100% guarantee as many engineers
    perceive it.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists
    for any given machine, just that a "general" decider does not
    exist for all machiens ...

    heck it must be certain that for any given machine there must
    exist a partial decider that can decide on it ... because
    otherwise a paradox would have to address all possible partial
    deciders in a computable fashion and that runs up against it's
    own limit to classical computing. therefore some true decider
    must exist for any given machine that exists ... we just can't
    funnel the knowledge thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit


    Nope, because the proof doesn't actually need to talk about HOW
    the decider actually made its decision, and thus not limited to
    Turing Machines.

    All it needs is that the decider be limited by the rules of a
    computation.

    All the arguments against the proof seem to begin with the error
    that the decider can be changed after the fact and such change
    changes the input to match, but that breaks the fundamental
    property of computations, that they are fixed algorithms.

    The proof shows that the SPECIFIC decider that the input was made
    from will get the wrong answer, and we can make such an input for
    ANY specific decider, and thus no decider can get all answers
    correct.

    That the input HAS a correct answer (just the opposite of what
    that specific decider gives) shows that there IS a correct answer,
    so there is nothing wrong about the question of its halting, and
    thus a non-answer like "its behavior is contrary" is valid.

    Everyone trying to make the arguments just shows they don't
    understand the basics of what a computation is.

    missed ya dick!

    given that deciders are inherently part of the execution path they
    are deciding on ... ofc deciders can modify their behavior based on
    an input which they are included within, like they can modify their
    behavior based on the properties of the input.

    No, that behavior had to always have been in them, and thus seen by
    the "pathological" input.


    this is how partial deciders can intelligently block on responding
    to input that cannot be answered thru their particular interface.

    i'm not aware of any method that can prove a partial decider can't
    be more efficient than brute force, because again, they can block
    when encountering a paradox specific to their interface.

    How does "brute force" determine non-halting?

    well it does for the halting computations (bounded time, bounded space),
    and proper infinite loops (unbounded time, bounded space),

    just not runaway infinite computation (unbounded time, unbounded space)

    Which exist and is a form of non-halting.

    Yes, Non-Turing Complete systems, with bounded space, are Halt Decidable

    infinite loops in turing complete systems are also fully enumerable just
    like halting machines are. they will always result in repeat
    configurations, and this is decidable within an unbounded amount of time
    using brute force.

    You seem to confuse "enumerable" with effectively enumerable.

    Enumerable means that via the process of the Axiom of Choice, some
    enumeration is possible to be found. Doing this might require having a
    "God Function" to determine if a given candidate is part of the set.

    Just because an infinite set is enumerable doesn't mean that you CAN
    create that set in a computation.

    While the looping machines are enumerable, that doesn't mean you can
    generate a set of all such machines.

    Unless you can actually SHOW how to DECIDE on this, in countable
    unbounded time, you can't claim it.

    The problem is that while N is enumerable, and N^k is enumerable, 2^N
    is not.
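    An illustrative sketch of that distinction (not from any post in this
    thread): the classic Cantor pairing function gives a computable bijection
    between N x N and N, which is why N^k stays effectively enumerable;
    nothing analogous exists for 2^N.

    # Cantor pairing enumerates N x N in a fixed, computable order, so N^2
    # (and by iteration N^k) is effectively enumerable.

    def cantor_pair(x: int, y: int) -> int:
        """Bijection from N x N to N."""
        return (x + y) * (x + y + 1) // 2 + y

    def cantor_unpair(z: int) -> tuple[int, int]:
        """Inverse of cantor_pair (exact for the small values used here)."""
        w = int(((8 * z + 1) ** 0.5 - 1) // 2)   # largest w with w*(w+1)/2 <= z
        t = w * (w + 1) // 2
        y = z - t
        x = w - y
        return x, y

    # Walk the first few pairs in enumeration order.
    for z in range(10):
        print(z, cantor_unpair(z))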

    Your problem is you can't tell when you have simulated a machine long
    enough to say that it is no longer a bounded space machine, and thus
    "looping" is not decidable, only recognizable.


    with simple deciders as long as the decider is allowed to be
    sufficiently bigger than the program it is deciding on. That has been
    known for a long time, but isn't an exception to the Halting Problem,
    as it doesn't meet the basic requirements.



    And nothing in the theory disagrees with partial halt deciders
    existing, they just can NEVER give an answer to the pathological
    program based on them, as if they give an answer, it will be wrong.

    which means: for any given machine, there is a decider out there that
    must decide correctly on it. so, for any given machine there is a
    method that does correctly decide on it without ever given any wrong
    answers to any other machine (tho it may not give answers at times).

    So? Of course there is a decider that gets it right, as one of the
    deciders AlwaysSaysHalts or AlwaysSaysNonhalting will be right, we just
    don't know which.

    those are not valid partial deciders, why are you still bringing up red herrings?

    Because you keep on trying to assert them.

    You claim that for any given machine, there must be a machine to
    correctly decide it.

    If you don't mean those trivial deciders, PROVE your claim. You just
    assert it without any proof.

    Note, perhaps you are overlooking the implied requirement: we must KNOW
    that it is correct for that input. Otherwise we get the trivial
    variation of the above, which checks if it is THIS machine, and if so one
    says halting and the other says non-halting, and if it is any other
    machine, they just keep looping.

    I hope you admit that this would not be a valid answer to your claim.
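    An illustrative sketch of that trivial variation (not a construction from
    the thread): for any fixed machine description, one of the two deciders
    below answers it correctly and never answers anything else wrongly (it
    just never answers), but nothing tells us which of the two to trust.

    def make_trivial_deciders(target_description: str):
        """Build the pair of "check if it is THIS machine" partial deciders."""

        def says_halts(description: str) -> bool:
            if description == target_description:
                return True              # claims: the target machine halts
            while True:                  # every other input: never answer
                pass

        def says_loops(description: str) -> bool:
            if description == target_description:
                return False             # claims: the target machine never halts
            while True:
                pass

        return says_halts, says_loops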



    And, even if there is an always correct partial decider that gets it
    right, it can't be part of that enumerable set, so we don't know to
    use it.

    and where's the proof i must enumerate all partial deciders in order to
    know it for any given machine???



    as a man, i'm not subject to having my output read and contradicted
    like turing machine deciders are. i'm not subject to having to block
    indefinitely because some input is pathological to me. and because
    some method must exist that can correct decide on any given input...
    i can know that method for any given input, and therefor i can decide
    on any given input.

    As a man, you are not bound by the rules of computations, so aren't
    eligable to be entered as a solution for a computation problem.

    why am i not bound by the rules of computation when performing valid computation?

    Because you are not fixed and deterministic. If I can't build a
    computation on you, you are not a computation, as that is one of the
    fundamentals of a computation.



    So, all you are stating is you are too stupid to understand the nature
    of the problem.

    And, you CAN'T claim to know the mathod for any given input, as there
    are an infinite number of inputs, but only finite knowledge.

    or i can algorithmically determine the correct method upon receiving the input ...

    "Get the correct answer" is not an algorithm, and resorting to it just
    shows you don't know what you are talking about.

    You "logic" is apparently still based on lying to yourself that you can
    do it, believing that lie, and then living in your stupidity,.

    If you could algorithmically determine the correct method, then the
    input could have done the same algorithm to determine the method you
    will uses, and then break that result.

    All you are doing is showing you don't understand what algorithmic means.
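    An illustrative sketch of that point (not from any post in the thread;
    select_and_decide and source_of are hypothetical names): if "pick the
    right method for this input" were itself an algorithm, the combination
    would just be another decider, and the usual diagonal program can be
    built against it.

    def make_diagonal(select_and_decide, source_of):
        """Build the program D that contradicts the combined selector+decider."""

        def D(_input: str):
            # Ask the selector+decider what D itself will do on its own
            # description, then do the opposite.
            if select_and_decide(source_of(D), source_of(D)):
                while True:              # predicted "halts" -> loop forever
                    pass
            return                       # predicted "does not halt" -> halt now

        return D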



    I guess you have fallen for the Olcott trap of convinsing yourself
    that you are God.


    this is why i'm really starting to think the ct-thesis is cooked. you
    say i can't do that because turing machines can't do that ... but
    where's the proof that turing machine encompass all of computing? why
    am i limited by the absolute nonsense that is turing machines
    producing pathological input to themselves?

    But the problem goes back to the fact that you are just showing you
    don't understand what a computation is.

    red herrings and now gaslighting

    Nope, as you keep on making statements that are just incorrect about computations.

    Name calling rather than showing the actual error just shows that you
    are out of lies to use.



    Yes, there is no "Proof" that Turing Machines encompass all of
    computing, that is why CT is just a thesis and not a theorem. It has
    shown itself to be correct for everything we have tried so far.

    there we go, u don't have proof yet u keep asserting it's a law because
    a bandwagon has convinced u

    The "Halting Problem" is specifically written about Turing Machines, and
    thus doesn't depend on CT.

    On the assumption of CT, it can be extended to all computations.

    Since no method of computation has been found that allows us to
    COMPUTE beyond what a Turing Machine can do, as far as we know the
    generalization holds.

    Note, what a "Computation" is, is rigidly defined, and isn't done by
    reference to a Turing Machine. This includes that an algorithm that
    performs a computation is based on a series of finitely and
    deterministically defined operations. This means that given a specific
    input, that algorithm will ALWAYS produce the same results.
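    An illustrative sketch of that "same input, same result" property (not
    from any post in the thread): the first function below is a fixed
    mapping; the second is not a computation over its input alone, because
    hidden state lets the same input produce different outputs.

    def square(n: int) -> int:
        return n * n                     # fixed mapping: same input, same output

    _calls = 0
    def counting(n: int) -> int:
        global _calls
        _calls += 1                      # hidden state outside the input
        return n + _calls

    assert square(3) == square(3)        # always 9
    assert counting(3) != counting(3)    # 4 then 5: not a function of the input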




    because turing machines *are* the fundamentals of computation??? but
    again: that's just an assumption. we never proved it, yet here you
    are treating it like unquestionable law.

    that's the flaw bro, one we've been sitting on for almost a century.
    i don't even have a proof to deconstruct, it's literally just an
    assumption, so all i need to do is construct the scenarios where
    something is obviously generally computable, but that computation
    cannot be generally expressed thru a turing machine computation
    input/ ouput specification.

    No, the problem is you don't understand what computing is, and thus,
    just like Zeno, think you have come up with a paradox.

    It SEEMS (to you) that this should be computable, but it turns out it
    isn't.

    heck it even is computable, just not from a fixed point of reference

    That doesn't make sense, since computations are fixed.

    All you are doing is showing you just don't fundamentally understand
    what computation is about, perhaps because you (like Olcott) think that
    'algorithms' can actually 'think' or 'decide' rather than just 'compute'.


    The halting problem proof can be generalized to ANY computation
    platform, and shown to be true.




    furthermore this doesn't disprove a general algorithm backing the
    partial deciders, all the general algorithm needs is a "self" input
    which identifies the particular interface it's computing for. this
    general algo for partial deciders will have three outputs: HALTS,
    LOOPS, and PARADOX. when partial deciders receive PARADOX back from
    their algo run they will then just loop forever to never respond.
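    Read literally, the proposal above seems to amount to roughly the
    interface sketched below (a guess at it, not working code; the naive
    substring self-check is exactly the step the rest of this thread
    disputes, since a functionally equivalent copy of the decider need not
    contain its literal source text, and the backing analysis is left as an
    unimplemented placeholder).

    def analyze_halts(program_source: str, inp: str) -> bool:
        # Placeholder for the claimed "common backing algorithm"; no total
        # analysis like this is known for Turing-complete programs.
        raise NotImplementedError

    def general_decide(self_source: str, program_source: str, inp: str) -> str:
        if self_source in program_source:            # naive, easily evaded check
            return "PARADOX"
        return "HALTS" if analyze_halts(program_source, inp) else "LOOPS"

    def partial_decider(self_source: str, program_source: str, inp: str) -> bool:
        verdict = general_decide(self_source, program_source, inp)
        if verdict == "PARADOX":
            while True:                              # never respond
                pass
        return verdict == "HALTS"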

    Sure it does.

    The problem is it doesn't get a "self" input, and by its nature. it

    i'm defining the algo, so i say it does

    Sorry, but you don't get to do that.

    And the problem is that even being given a sample "self" input,
    doesn't mean it can detect the other "self" that exists in the paradox
    input.

    again circular logic

    Really? Try to show it wrong.

    Give the definite algorithm that can detect its functional equivalent in
    the paradox input.



    Or that doing so helps it get the right answer.

    Whatever answer your algorithm gives will be wrong.

    There is a right answer, just the opposite of the one the algorithm
    gives.

    That doesn't make the name for that behavior "Paradox", it will still
    be one of "Halting" or "Non-Halting", so the third answer can't be
    correct.


    can't determine if the input is just using a computational
    equivalent of itself that doesn't match its idea of what it looks like.

    This FACT just breaks your concept, as you just assume you can detect
    what is proven to be undetectable in full generality.

    using the very paradox i'm trying to solve, so that's begging the
    question. it's really kinda sad how much begging the question is
    going on in the fundamental theory of computing

    No, your algorithm begs the question by assuming you can compute
    something uncomputable.

    i'm showing how it being computable can co-exist with pathological
    input. i realize ur not honest enough of a person to acknowledge what
    my actual argument is, but that's because if u started acknowledging
    i'm right about things ur haunted house of cards goes tumbling down

    No, you assume it is computable, and then, from that false
    assumption, conclude that it should be.

    This is why you can't actually present your "base algorithm": you
    need to keep changing it to handle the pathological input that gets to
    know your algorithm.



    I guess your problem is you don't understand what an actual ALGORITHM is.

    gaslighting again, why u must argue like a child?


    No, it seems it is YOU that has been inhaling too much gas.

    Since you clearly can't show what you claim.




    And, even if it CAN detect that the input is using a copy of itself,
    that doesn't help it as it still can't get the right answer, and the

    it's general algo to the partial deciders - all it needs to do is
    either it returns PARADOX in which case the partial decider decides
    to loop(), or maybe we can just extract that functionality into the
    general partial algo itself...

    But the "Halting Behavior" of the input isn't "PARADOX", so that can't
    be a correct answer.

    it's correct from the fixed point of the decision, or we can just block.

    Nope. There is no halting behavior of "Paradox"; every machine either
    halts or it doesn't.

    The best you can do is not answer, but that doesn't help you, as you end
    up not answering for a lot of inputs.



    You don't seem to understand that LYING is just LYING and incorrect.

    unfortunately turing machines are not powerful enough of a computation paradigm to fit the pigeonhole fallacy ur now making

    They are the most powerful machines we know of, so I guess you are just
    saying that we don't know how to actually compute.


    this is a problem with assuming the CT-thesis is correct.

    No, your fallacy is assuming it is incorrect, and that you have an
    unknown system that is better.

    You WANT there to be something better, and if you could actually create
    one, the world might beat a path to your door (or try to annihilate you
    for breaking too many existing systems).

    It was thought that Quantum Computing might do it, but so far we haven't
    found it able to do anything unique, just do some things that were known
    to be computable, just impractical.



    It seems that the result of your description is that ALL your partial
    deciders are going to just loop forever, and thus your claim that one
    of them will answer is just a lie.

    no

    So, which of your deciders is going to answer for the unbounded space
    machine that has no discernible patterns in its growth?




    pathological input based on your general algorithm effectively uses
    copies of all the algorithms it enumerates, so NONE of them can give
    the right answer.


    yes i'm aware "interfaces" are complete descriptions of a partial
    decider, and that's what i mean by passing in a self. the partial
    decider must have a quine that allows it to recognize itself, and
    it passes this into the general algo.

    Nope, an "Interface" is NOT a complete description of ANY machine,
    so you are just showing you are fundamentally incorrect in your basis. >>>>
    You can't "run" and "interface", only an actual program that
    implements it.

    sure, but all the partial deciders do is construct a self-reference
    using a quine and pass that along with the input to a common backing
    algorithm. all valid partial deciders will need accurate quines in
    order to ascertain where their output feeds back into and affects the
    prediction they are making.

    But that is the problem: since the input uses the machine that is
    enumerating all those deciders, every one of them (if they can detect
    themselves) will detect themselves and fail to answer.

    no it's not



    and yes the partial deciders do contain full descriptions of the
    common backing algo, but they still really do just act as an
    interface to that common algo

    they act like an exposed API/interface into the common algo

    and thus the paradox input has the code to act counter to that common
    algorithm, and it can NEVER give the right answer.

    it doesn't counter a general algo that decides across all fixed points,
    the pathological input can only counter a subset of fixed points under
    classical limitations to computing.

    No such thing.

    The problem is that the "pathological input" isn't the only input that
    causes problems, just a simple one to show.

    There are other problems (like Busy Beaver) that are shown to be
    fundamentally uncomputable, and thus can be a base for making halting undecidable.






    "but i can loop over all partial deciders to produce a paradox" ... >>>>> uhh no you can't? traditional computing cannot iterate over all
    functionally equivalent machines, so it certainly can't iterate
    over all almost functionally equivalent machines, so you cannot
    claim to produce a general paradox for the general algo as such a
    computation is outside the scope of classical computing limits.

    So, you're just admitting you can't use an enumerated list of partial
    deciders to get the answer.

    which is fine, it's just not necessary

    So, you are just admitting that your claim is based on needing to
    compute the uncomputable, in other words, is just a lie.

    i'm saying the total enumeration is not necessary

    But just ASSUMING you can find the right one, IF it exists.

    Maybe you can say that all machines of the given pathological structure
    of the proof can be decided by some other partial decider, that is a
    very different thing than that ALL machines can be decided by some
    partial decider.



    Your enumerable set of partial deciders will just never give an
    answer, and thus you can't say that some partial decider can answer
    for every possible input.



    The pathological program doesn't need to enumerate the deciders, it
    just needs to use what you make your final decider, which can only
    partially enumerate the partial deciders.

    it would in order to break the general algo across all selves. the
    self acts as the fixed point of reference from which the decision is
    made ... and while no fixed point can decide on all input, for any
    given input there is a fixed point of self that can decide on that
    input. and this can be encapsulated into a general algorithm that
    encapsulates a general procedure even if any given fixed point of
    self is not general.

    No it doesn't, it uses the fact that your outer enumerator does it.

    Your logic is based on LIES that you assume you can do it. But,


    therefore a general algo exists, even if any particular fixed point
    of decision making is contradicted.

    Nope. Again, your logic is based on the fallacy of assuming the
    conclusion.

    beating up that straw man, keep up the good work!



    dear god, why is computing theory such a shit show? cause we've
    been almost blindly following what the forefathers of computing said?

    No, YOU are the shit show because you don't understand that truth
    exists, but isn't always knowable. There ARE limits to what is
    computable, as problem space grows faster than solution space.

    how many fallacies have i identified in ur arguments? like 6 just in
    this one?

    ur just as garbage as polcott tbh

    Pot calling the kettle black.

    You are still showing that you don't know what computations actually are,
    as it seems your thesis is that there must be a system more powerful
    than Turing Machines, even if you have no idea how it would work.






    i await to see how you purposefully misunderstand this


    It seems you are the one that doesn't understand.

    Programs can't CHANGE their behavior; they HAVE specific behavior
    that depends on the input, and ALWAYS have that behavior for that
    input.

    The definition of a computation means it can't squirrel away the
    fact that it was once used on this particular input, and needs to do
    something different, which is what is needed to CHANGE their behavior.

    i'm aware, i'm not really sure why ur repeating that

    and i await ur further purposeful misunderstanding


    Because you keep on trying to think of ways to get around that
    limitation.

    An algorithm does what the algorithm does, and can't change itself.

    IF what the algorithm does is just give the wrong answer to a problem,
    we can't say that the right answer doesn't exist just because a
    DIFFERENT problem based on a DIFFERENT algorithm gives a different
    answer.

    No machine has "Paradoxical" halting behavior as its specific
    behavior. It might be described as contrary to a given instance of a
    general algorithm, but all that shows was that algorithm was WRONG, as
    it still had specific behavior as to its actual behavior.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Thu Jan 15 04:23:35 2026
    From Newsgroup: comp.ai.philosophy

    On 1/14/26 7:43 PM, Richard Damon wrote:
    On 1/13/26 3:33 PM, dart200 wrote:
    On 1/13/26 4:09 AM, Richard Damon wrote:
    On 1/12/26 11:21 PM, dart200 wrote:
    On 1/12/26 7:16 PM, Richard Damon wrote:
    On 1/12/26 5:09 PM, dart200 wrote:
    On 1/12/26 4:06 AM, Richard Damon wrote:
    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get >>>>>>>>>>> discarded from the very beginning because people who work on >>>>>>>>>>> them perceive the halting problem as a dogma. As result, >>>>>>>>>>> certain practical things (in code analysis) are not even >>>>>>>>>>> tried because it's assumed that they are bound by the halting >>>>>>>>>>> problem.

    In practice, however, the halting problem is rarely a
    limitation. And even when one hits it, they can safely
    discard a particular analysis branch by marking it as
    inconclusive.

    Halting problem for sure can be better framed to not sound as >>>>>>>>>>> a dogma, at least. In practice, algorithmic inconclusiveness >>>>>>>>>>> has 0.001 probability, not a 100% guarantee as many engineers >>>>>>>>>>> perceive it.

    god it's been such a mind-fuck to unpack the halting problem, >>>>>>>>>>
    but the halting problem does not mean that no algorithm exists >>>>>>>>>> for any given machine, just that a "general" decider does not >>>>>>>>>> exist for all machiens ...

    heck it must be certain that for any given machine there must >>>>>>>>>> exist a partial decider that can decide on it ... because >>>>>>>>>> otherwise a paradox would have to address all possible partial >>>>>>>>>> deciders in a computable fashion and that runs up against it's >>>>>>>>>> own limit to classical computing. therefore some true decider >>>>>>>>>> must exist for any given machine that exists ... we just can't >>>>>>>>>> funnel the knowledge thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular >>>>>>>> interface is a flaw of TM computing, not an inherent algorithmic >>>>>>>> limit


    Nope, because the proof doesn't actually need to talk about HOW >>>>>>> the decider actually made its decision, and thus not limited to >>>>>>> Turing Machines.

    All it needs is that the decider be limited by the rules of a
    computation.

    All the arguements against the proof seem to begin with the error >>>>>>> that the decider can be changed after the fact and such change
    changes the input to match, but that breaks the fundamental
    property of computations, that they are fixed algorithms

    The proof shows that the SPECIFIC decider that the input was made >>>>>>> from will get the wrong answer, and we can make such an input for >>>>>>> ANY specific decider, and thus no decider can get all answers
    correct.

    That the input HAS a correct answer (just the opposite of what
    that specific decider gives) shows that there IS a correct
    answer, so there is nothing wrong about the question of its
    halting, and thus a non- answer like "its behavior is contrary" >>>>>>> is valid.

    Everyone trying to make the arguements just shows they don't
    understand the basics of what a computation is.

    missed ya dick!

    given that deciders are inherently part of the execution path they >>>>>> are deciding on ... ofc deciders can modify their behavior based
    on an input which they are included within, like they can modify
    their behavior based on the properties of the input.

    No, that behavior had to always have been in them, and thus seen by >>>>> the "pathological" input.


    this is how partial deciders can intelligently block on responding >>>>>> to input that cannot be answered thru their particular interface.

    i'm not aware of any method that can prove a partial decider can't >>>>>> be more efficient that brute force, because again, they can block >>>>>> when encountering a paradox specific to their interface.

    How does "brute force" determine non-halting?

    well it does for the halting computations (bounded time, bounded
    space),

    and proper infinite loops (unbounded time, bounded space),

    just not runaway infinite computation (unbounded time, unbounded space) >>>
    Which exist and is a form of non-halting.

    Yes, Non-Turing Complete systems, with bounded space, are Halt Decidable >>
    infinite loops in turing complete system are also fully enumerable
    just like halting machines are. they will always result in repeat
    configurations, and this is decidable within an unbounded amount of
    time using brute force.

    You seem to confuse "enumerable" with effectively enumerable.

    Enumerable means that via the process of the Axiom of Choice, some enumeration is possible to be found. Doing this might require having a
    "God Function" to determine if a given candidate is part of the set.

    Just because an infinite set is enumberable, doesn't mean that you CAN create that set in a computation.

    While the looping machines are enumerable, that doesn't mean you can generate a set of all such machines.

    Unless you can actually SHOW how to DECIDE on this, in countable
    unbounded time, you can't claim it.

    THe problem is that while N is enumerable, and N^k is enumberable, 2^N
    is not.

    Your problem is you can't tell when you have simulated a machine long
    enough to say that it is no longer a bounded space machine, and thus "looping" is not deciable, only recognizable.


    with simple deciders as long as the decider is allowed to be
    sufficiently bigger than the program it is deciding on. That has been
    known for a long time, but isn't an exception to the Halting Problem,
    as it doesn't meet the basic requirements.



    And nothing in the theory disagrees with partial halt deciders
    existing, they just can NEVER give an answer to the pathological
    program based on them, as if they give an answer, it will be wrong.

    which means: for any given machine, there is a decider out there
    that must decide correctly on it. so, for any given machine there is
    a method that does correctly decide on it without ever given any
    wrong answers to any other machine (tho it may not give answers at
    times).

    So? Of course there is a decider that gets it right, as one of the
    deciders AlwaysSayHalts or AlwaysSaysNonhalting will be right, we
    just don't know.

    those are not valid partial deciders, why are you still bringing up
    red herrings?

    Because you keep on trying to assert them.

    You claim that for any given machine, there must be a machine to
    correctxl decide it.

    If you don't mean those trivial decider, PROVE your claim. You just
    assert without any proof.

    Note, perhaps you are overlooking the implied requirement, we must KNOW
    that it is correct for that input, otherwise we get to the trivial
    variation of the above, that check if it is THIS machine, and if so one
    says halting and the other says non-halting, and if it is any other
    machine, just keep looping.

    I hope you admit that this would not be a valid answer to you claim.



    And, even if there is an always correct partial decider that gets it
    right, it can't be part of that enumerable set, so we don't know to
    use it.

    and where's the proof i must enumerate all partial deciders in order
    to know it for any given machine???



    as a man, i'm not subject to having my output read and contradicted
    like turing machine deciders are. i'm not subject to having to block
    indefinitely because some input is pathological to me. and because
    some method must exist that can correct decide on any given input...
    i can know that method for any given input, and therefor i can
    decide on any given input.

    As a man, you are not bound by the rules of computations, so aren't
    eligable to be entered as a solution for a computation problem.

    why am i not bound by the rules of computation when performing valid
    computation?

    Because you are not fixed and deterministic. If I can't build a
    computation on you, you are not a computation, as that is one of the fundamentals of a computation.



    So, all you are stating is you are too stupid to understand the
    nature of the problem.

    And, you CAN'T claim to know the mathod for any given input, as there
    are an infinite number of inputs, but only finite knowledge.

    or i can algorithmically determine the correct method upon receiving
    the input ...

    "Get the correct answer" is not an algorithm, and resorting to it just
    shows you don't know what you are talking about.

    You "logic" is apparently still based on lying to yourself that you can
    do it, believing that lie, and then living in your stupidity,.

    If you could algorithmically determine the correct method, then the
    input could have done the same algorithm to determine the method you
    will uses, and then break that result.

    All you are doing is showing you don't understand what algorithmic means.



    I guess you have fallen for the Olcott trap of convinsing yourself
    that you are God.


    this is why i'm really starting to think the ct-thesis is cooked.
    you say i can't do that because turing machines can't do that ...
    but where's the proof that turing machine encompass all of
    computing? why am i limited by the absolute nonsense that is turing
    machines producing pathological input to themselves?

    But the problem goes back to the fact that you are just showing you
    don't understand what a computation is.

    red herrings and now gaslighting

    Nope, as you keep on making statements that are just incorrect about computations.

    Name calling rather than showing the actual error just shows that you
    are out of lies to use.



    Yes, there is no "Proof" that Turing Machines encompass all of
    computing, that is why CT is just a thesis and not a theorem. It has
    shown itself to be correct for everything we have tried so far.

    there we go, u don't have proof yet u keep asserting it's a law
    because a bandwagon has convinced u

    The "Halting Problem" is specifically written about Turing Machines, and thus doesn't depend on CT.

    On the assumption of CT, it can be extended to all computations.

    Since no method of computation has been found that allows us to
    COMPUTE beyond what a Turing Machine can do, as far as we know the

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model, like
    RTMs, u then told me i *can't* do that because muh ct-thesis, and here u
    are crying about how no superior method has been found as if u'd ever
    even tried to look past the ct-thesis...

    generalization holds.

    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Thu Jan 15 22:28:00 2026
    From Newsgroup: comp.ai.philosophy

    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model, like
    RTMs, u then told me i *can't* do that because muh ct-thesis, and here u
    are crying about how no superior method has been found as if u'd ever
    even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed you
    don't know what that means.

    You don't get to change what a "computation" is; that isn't part of
    the "model".

    The model would be the format of the machine, and while your RTMs might
    be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as they violate the basic rules of what a computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete ability
    is the ability to cascade algorithms.

    Your RTMs break that capability, and thus become less than Turing Complete.

    And, any algorithm that actually USES their capability to detect whether
    it has been nested will become incorrect as a decider. A decider is a
    machine that computes a specific mapping of its input to its output, and
    if that result changes in the sub-machine, only one of the answers it
    gives (as a stand-alone, or as the sub-machine) can be right, so you
    just show that it gave a wrong answer.

    This is sort of like the problem with a RASP machine architecture:
    sub-machines on such a platform are not necessarily computations if
    they use the machine's capability to pass information not allowed by the
    rules of a computation. Your RTMs similarly break that property.

    Remember, Computations are NOT just whatever some model of processing
    produces, but are specifically defined as producing a specific
    mapping of input to output, so if (even as a sub-machine) a specific
    input might produce different output, your architecture is NOT doing a
    computation.

    And without that property, using what the machine could do becomes a
    pretty worthless criterion, as you can't actually talk much about it.
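    An illustrative sketch of that objection (not from any post in the
    thread; the "nested" flag stands in for whatever reflective capability an
    RTM-like machine would expose): once a sub-machine can consult hidden
    context, the same input no longer determines a single output, so it is
    not computing one fixed mapping.

    def decide(description: str, nested: bool) -> bool:
        # The "same" decider answers differently depending on whether it is
        # run directly or inside another machine.
        return not nested

    print(decide("some_machine", nested=False))   # True
    print(decide("some_machine", nested=True))    # False
    # Two runs on identical input disagree, so at most one of them can be the
    # value of a single halting mapping for that input.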
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Fri Jan 16 01:08:19 2026
    From Newsgroup: comp.ai.philosophy

    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model, like
    RTMs, u then told me i *can't* do that because muh ct-thesis, and here
    u are crying about how no superior method has been found as if u'd
    ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed you
    don't knoww what that means.

    You don't get to change what a "computation" is, that isn't part of the "model".

    you honestly could have just said that cause the rest of this is just u repeating urself as if that makes it more correct


    The model would be the format of the machine, and while your RTM might
    be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a compuation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete ability,
    is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction that
    dumps static meta-data + copies tape ... how have they *lost* power with
    that??? clearly they can express anything that TMs can ...


    And, any algorithm that actually USES their capability to detect if they have been nested will become incorrect as a decider, as a decider is a machine that computes a specific mapping of its input to its output, and
    if that result changes in the submachine, only one of the answers it
    gives (as a stand-alone, or as the sub-machine) can be right, so you
    just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the "one
    true way". seems like u just enjoy shooting urself in the foot, with the
    only actual rational way being it's just the "one true way"


    This is sort of like the problem with a RASP machine architecture, sub- machines on such a platform are not necessarily computations, if they
    use the machines capability to pass information not allowed by the rules
    of a computation. Your RTM similarly break that property.

    Remember, Computations are NOT just what some model of processing
    produce, but specifically is defined based on producing a specific
    mapping of input to output, so if (even as a sub-machine) a specific
    input might produce different output, your architecture is NOT doing a computation.

    And without that property, using what the machine could do, becomes a
    pretty worthless criteria, as you can't actually talk much about it.

    the output is still well-defined and deterministic at runtime,

    context-dependent computations are still computations. the fact TMs
    don't capture them is an indication that the ct-thesis may be false
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Fri Jan 16 11:46:56 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model, like
    RTMs, u then told me i *can't* do that because muh ct-thesis, and
    here u are crying about how no superior method has been found as if
    u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed you
    don't knoww what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is just u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it, as you think you get to.



    The model would be the format of the machine, and while your RTM might
    be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a compuation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction that
    dumps static meta-data + copies tape ... how have they *lost* power with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how TMs work, as they don't have that
    sort of "instruction".



    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a decider
    is a machine that computes a specific mapping of its input to its
    output, and if that result changes in the submachine, only one of the
    answers it gives (as a stand-alone, or as the sub-machine) can be
    right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the "one
    true way". seems like u just enjoy shooting urself in the foot, with the only actual rational way being it's just the "one true way"

    IT IS THE DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by it,
    which MUST be a specific mapping of input to output.



    This is sort of like the problem with a RASP machine architecture,
    sub- machines on such a platform are not necessarily computations, if
    they use the machines capability to pass information not allowed by
    the rules of a computation. Your RTM similarly break that property.

    Remember, Computations are NOT just what some model of processing
    produce, but specifically is defined based on producing a specific
    mapping of input to output, so if (even as a sub-machine) a specific
    input might produce different output, your architecture is NOT doing a
    computation.

    And without that property, using what the machine could do, becomes a
    pretty worthless criteria, as you can't actually talk much about it.

    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes "hidden"
    state from outside that input stored elsewhere in the machine.


    context-dependent computations are still computations. the fact TMs
    don't capture them is an indication that the ct-thesis may be false


    Nope. Not unless the "context" is made part of the "input", and if you
    do that, you find that since you are trying to make it so the caller
    can't just define that context, your system is less than turing complete.

    Your system breaks the property of building a computation by the
    concatenation of sub-computations.
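    An illustrative sketch of the concatenation property (not from any post
    in the thread): because f and g are each fixed input-to-output mappings,
    their composition is too, and g cannot tell whether its argument came
    from a caller or from f; a sub-step whose result depends on that
    distinction breaks this substitution.

    def f(x: int) -> int:
        return x + 1

    def g(x: int) -> int:
        return x * 2

    def composed(x: int) -> int:
        return g(f(x))          # g behaves identically however it is reached

    assert composed(3) == g(4) == 8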
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Fri Jan 16 14:21:04 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model,
    like RTMs, u then told me i *can't* do that because muh ct-thesis,
    and here u are crying about how no superior method has been found as
    if u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed you
    don't knoww what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is just
    u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think you get
    to,

    but repeating urself doesn't make it more true




    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a compuation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction that
    dumps static meta-data + copies tape ... how have they *lost* power
    with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't have that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the list
    of possible operations with RTMs, my god dude...

    see how fucking unhelpful u are???




    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a
    decider is a machine that computes a specific mapping of its input to
    its output, and if that result changes in the submachine, only one of
    the answers it gives (as a stand-alone, or as the sub-machine) can be
    right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the "one
    true way". seems like u just enjoy shooting urself in the foot, with
    the only actual rational way being it's just the "one true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by it,
    which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,

    that's why it's the ct-thesis dude, not ct-law,

    ur just affirming the consequent without proof.

    add that to the list of growing fallacies i've pointed out in ur recent
    arguments, which i'm sure ur not actually tracking, as that would be far
    more honesty than u are capable of putting out.




    This is sort of like the problem with a RASP machine architecture,
    sub- machines on such a platform are not necessarily computations, if
    they use the machines capability to pass information not allowed by
    the rules of a computation. Your RTM similarly break that property.

    Remember, Computations are NOT just what some model of processing
    produce, but specifically is defined based on producing a specific
    mapping of input to output, so if (even as a sub-machine) a specific
    input might produce different output, your architecture is NOT doing
    a computation.

    And without that property, using what the machine could do, becomes a
    pretty worthless criteria, as you can't actually talk much about it.

    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes "hidden" state from outside that input stored elsewhere in the machine.


    context-dependent computations are still computations. the fact TMs
    don't capture them is an indication that the ct-thesis may be false


    Nope. Not unless the "context" is made part of the "input", and if you
    do that, you find that since you are trying to make it so the caller
    can't just define that context, your system is less than turing complete.

    Your system breaks the property of building a computation by the
    concatenation of sub-computations.

    ...including a context-dependent sub-computation makes ur overall
    computation context-dependent too ... if u dont want a context-dependent computation don't include context-dependent sub-computation.

    but in order to be complete and coherent, certain computations *must*
    have context-awareness and are therefore context-dependent. these
    computations aren't generally computable by TMs because TMs lack the
    necessary mechanisms to grant context-awareness.

    unless u can produce some actual proof of some computation that actually breaks in context-dependence, rather than just listing things u assume
    are true, i won't believe u know what ur talking about
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng,sci.logic,sci.math on Fri Jan 16 16:58:20 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/2026 4:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model,
    like RTMs, u then told me i *can't* do that because muh ct-thesis,
    and here u are crying about how no superior method has been found
    as if u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed
    you don't know what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is just
    u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think you
    get to,

    but repeating urself doesn't make it more true




    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction
    that dumps static meta-data + copies tape ... how have they *lost*
    power with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't have
    that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the list
    of possible operations with RTMs, my god dude...

    see how fucking unhelpful u are???




    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a
    decider is a machine that computes a specific mapping of its input
    to its output, and if that result changes in the submachine, only
    one of the answers it gives (as a stand-alone, or as the sub-
    machine) can be right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the
    "one true way". seems like u just enjoy shooting urself in the foot,
    with the only actual rational way being it's just the "one true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by it,
    which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,


    *The essence of all Computation generically defined*

    Computation only applies finite string
    transformation rules to finite string inputs.

    Computable functions are Computations that
    always stop running.

    The empty string counts as a string.
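
    Read literally, that amounts to a string-to-string rule; a minimal
    sketch of one possible reading (a Python illustration assumed here, not
    anything posted in this thread):

        # A "computation": a finite string transformation rule applied to a
        # finite string input; the empty string "" is a legal input.
        def reverse_string(s: str) -> str:
            return s[::-1]

        # A "computable function" in this reading is a computation whose rule
        # always stops running; reverse_string does, for every finite input.
        assert reverse_string("") == ""
        assert reverse_string("abc") == "cba"
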
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng,sci.logic,sci.math on Fri Jan 16 18:21:05 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/26 5:58 PM, olcott wrote:
    On 1/16/2026 4:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model,
    like RTMs, u then told me i *can't* do that because muh ct-thesis,
    and here u are crying about how no superior method has been found
    as if u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed
    you don't know what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is
    just u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think you
    get to,

    but repeating urself doesn't make it more true




    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.
    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction
    that dumps static meta-data + copies tape ... how have they *lost*
    power with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't have
    that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the
    list of possible operations with RTMs, my god dude...

    see how fucking unhelpful u are???




    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a
    decider is a machine that computes a specific mapping of its input
    to its output, and if that result changes in the submachine, only
    one of the answers it gives (as a stand-alone, or as the sub-
    machine) can be right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the
    "one true way". seems like u just enjoy shooting urself in the foot,
    with the only actual rational way being it's just the "one true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by it,
    which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,


    *The essence of all Computation generically defined*

    Computation only applies finite string
    transformation rules to finite string inputs.

    Computable functions are Computations that
    always stop running.

    The empty string counts as a string.


    In other words, you admit to not knowing what you are talking about.

    Note that Computable Functions are a subset of the category Function,
    which are MAPPINGS of input to output (not something a computing machine
    does; that is a meaning in a different field of Computer Science).

    No "algorithm" needed to define a function, and it doesn't run in "steps"

    Computable Functions are Functions for which there exists a proper
    finite algorithm that can compute them for all values.

    It is that finite algorithm that needs to stop, not the "function", as
    functions don't run at all; they just are.
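
    A minimal sketch of that distinction (a toy Python illustration assumed
    here): the function is just a mapping, written below as plain data; the
    algorithm is a separate, halting procedure that happens to agree with it.

        # The "function": a mapping from inputs to outputs, given as data.
        # It doesn't run and it has no steps; it just is.
        parity_function = {0: "even", 1: "odd", 2: "even", 3: "odd"}

        # The "algorithm": a finite procedure that halts on every input and
        # computes the same mapping, which is what makes the function computable.
        def parity_algorithm(n: int) -> str:
            return "even" if n % 2 == 0 else "odd"

        assert all(parity_algorithm(n) == v for n, v in parity_function.items())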

    You have had this told to you many times, but you don't care that you
    are just repeating your lies, as "facts" aren't important to you. It
    seems words are flexible in meaning and you just redefine them as you
    want, which means everything you say is actually meaningless, as your
    words no longer have established meaning.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Fri Jan 16 18:21:10 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/26 5:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model,
    like RTMs, u then told me i *can't* do that because muh ct-thesis,
    and here u are crying about how no superior method has been found
    as if u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed
    you don't know what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is just
    u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think you
    get to,

    but repeating urself doesn't make it more true

    And your ignoring it doesn't make it false.





    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction
    that dumps static meta-data + copies tape ... how have they *lost*
    power with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't have
    that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the list
    of possible operations with RTMs, my god dude...

    But the only "operations" that a turing machine does is write a
    specified value to the tape, move the tape, and change state.


    see how fucking unhelpful u are???

    So, how is your "operation" of the same class as what they do?

    Try to specify the tuple that your "operation" is.
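
    For reference, a minimal sketch of the classic tuple form being asked
    about (a toy Python encoding assumed here; entries map a state and read
    symbol to a write symbol, a head move, and a next state):

        # delta[(state, read_symbol)] = (write_symbol, move, next_state),
        # with move restricted to "L" or "R" -- writing, moving the tape, and
        # changing state are the only effects a classic TM tuple provides.
        delta = {
            ("q0", "0"): ("1", "R", "q0"),
            ("q0", "1"): ("0", "R", "q0"),
            ("q0", "_"): ("_", "L", "halt"),   # blank cell: stop
        }

        def step(state, tape, head):
            write, move, nxt = delta[(state, tape[head])]
            tape[head] = write
            head += 1 if move == "R" else -1
            return nxt, head

        # Flip the bits of "0110" by running until the machine enters "halt".
        tape, head, state = list("0110") + ["_"], 0, "q0"
        while state != "halt":
            state, head = step(state, tape, head)
        assert "".join(tape[:4]) == "1001"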





    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a
    decider is a machine that computes a specific mapping of its input
    to its output, and if that result changes in the submachine, only
    one of the answers it gives (as a stand-alone, or as the sub-
    machine) can be right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the
    "one true way". seems like u just enjoy shooting urself in the foot,
    with the only actual rational way being it's just the "one true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by it,
    which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,




    that's why it's the ct-thesis dude, not ct-law,

    ur just affirming the consequent without proof.

    No, the DEFINITION of a computation defines what it can be irrespective
    of the actual machinery used to perform it.

    It is, by definition, an algorithm computing a given mapping.

    Said maps are, BY DEFINITION, mappings from the "input" to the "output".

    If the machine can produce two different outputs from the same input,
    the machine cannot be a computation.


    add that to list of the growing fallacies i've pointed out in ur recent arguments, which i'm sure ur not actually tracking, as that would be far more honesty than u are capable of putting out.

    So, what is the fallacy?

    It seems you just assume you are allowed to change the definition,
    perhaps because you never bothered to learn it.





    This is sort of like the problem with a RASP machine architecture:
    sub-machines on such a platform are not necessarily computations,
    if they use the machine's capability to pass information not allowed
    by the rules of a computation. Your RTMs similarly break that property.
    Remember, Computations are NOT just what some model of processing
    produce, but specifically is defined based on producing a specific
    mapping of input to output, so if (even as a sub-machine) a specific
    input might produce different output, your architecture is NOT doing
    a computation.

    And without that property, using what the machine could do, becomes
    a pretty worthless criteria, as you can't actually talk much about it.

    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes
    "hidden" state from outside that input stored elsewhere in the machine.


    context-dependent computations are still computations. the fact TMs
    don't capture them is an indication that the ct-thesis may be false


    Nope. Not unless the "context" is made part of the "input", and if you
    do that, you find that since you are trying to make it so the caller
    can't just define that context, your system is less than turing complete.

    Your system breaks the property of building a computation by the
    concatenation of sub-computations.

    ...including a context-dependent sub-computation makes ur overall computation context-dependent too ... if u dont want a context-dependent computation don't include context-dependent sub-computation.

    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.


    but in order to be complete and coherent, certain computations *must*
    have context-awareness and are therefore context-dependent. these computations aren't generally computable by TMs because TMs lack the necessary mechanisms to grant context-awareness.

    In other words, you require some computations to not be actual computations.


    unless u can produce some actual proof of some computation that actually breaks in context-dependence, rather than just listing things u assume
    are true, i won't believe u know what ur talking about


    The definition.

    A computation produces the well defined result based on the INPUT.

    Your context, being not part of the input, can't change the well-defined result.

    Should 1 + 2 become 4 on Thursdays? Or if asked of a gingerbread man?

    All you are doing is saying you disagree with the definition.

    Go ahead, try to define an alternate version of Computation Theory where
    the result can depend on things that aren't part of the actual input to
    the machine, and see what you can show that is useful.

    The problem becomes that you can't really say anything about what you
    will get, since you don't know what the "hidden" factors are.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng on Fri Jan 16 18:35:03 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/2026 4:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model,
    like RTMs, u then told me i *can't* do that because muh ct-thesis,
    and here u are crying about how no superior method has been found
    as if u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed
    you don't know what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is just
    u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think you
    get to,

    but repeating urself doesn't make it more true




    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction
    that dumps static meta-data + copies tape ... how have they *lost*
    power with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't have
    that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the list
    of possible operations with RTMs, my god dude...

    see how fucking unhelpful u are???




    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a
    decider is a machine that computes a specific mapping of its input
    to its output, and if that result changes in the submachine, only
    one of the answers it gives (as a stand-alone, or as the sub-
    machine) can be right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the
    "one true way". seems like u just enjoy shooting urself in the foot,
    with the only actual rational way being it's just the "one true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by it,
    which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,


    Computation only applies finite string
    transformation rules to finite string inputs.

    Deciders transform finite string inputs by
    finite string transformation rules into
    {Accept, Reject} values.
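
    As a minimal sketch of that framing (a toy Python illustration assumed
    here, not olcott's code), a decider is then a total rule from finite
    strings to the two values Accept and Reject:

        # A toy decider: maps every finite string input to Accept or Reject
        # by a fixed transformation rule, and always halts.
        ACCEPT, REJECT = "Accept", "Reject"

        def balanced_parens_decider(s: str) -> str:
            depth = 0
            for ch in s:
                if ch == "(":
                    depth += 1
                elif ch == ")":
                    depth -= 1
                    if depth < 0:
                        return REJECT
            return ACCEPT if depth == 0 else REJECT

        assert balanced_parens_decider("(())") == ACCEPT
        assert balanced_parens_decider(")(") == REJECT
        assert balanced_parens_decider("") == ACCEPT   # the empty string counts
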
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Fri Jan 16 16:43:27 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/26 3:21 PM, Richard Damon wrote:
    On 1/16/26 5:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model,
    like RTMs, u then told me i *can't* do that because muh ct-thesis,
    and here u are crying about how no superior method has been found
    as if u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed
    you don't know what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is
    just u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think you
    get to,

    but repeating urself doesn't make it more true

    And your ignoring it doesn't make it false.





    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.
    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction
    that dumps static meta-data + copies tape ... how have they *lost*
    power with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't have
    that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the
    list of possible operations with RTMs, my god dude...

    But the only "operations" that a turing machine does is write a
    specified value to the tape, move the tape, and change state.

    yes RTMs are an extension of TMs, please do pay attention



    see how fucking unhelpful u are???

    So, how is your "operation" of the same class as what they do?

    cause it's just as mechanically feasible. mechanical feasibility is
    self-evident just like with the other rules of turing machines.


    Try to specify the tuple that your "operation" is.

    idk what you mean by this, REFLECT is just another operation like
    HEAD_LEFT, HEAD_RIGHT, or WRITE_<symbol>. the transition table has a
    list of transition functions:

    cur_state, head_symbol -> action, nxt_state

    and REFLECT goes into the action slot specifying the action that should
    be taken to transition the tape to the next step.
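
    One possible reading of that, as a minimal sketch only (a toy Python
    encoding assumed here; the REFLECT payload layout below is an assumption,
    not a spec from this thread): REFLECT sits in the same action slot as the
    head moves and writes, and its effect is to append a snapshot of the
    machine to the tape.

        # Hypothetical RTM step: the same (state, symbol) -> (action, state)
        # table as a TM, but the action set gains one extra member, REFLECT.
        def rtm_step(table, machine_desc, state, tape, head):
            action, nxt = table[(state, tape[head])]
            if action == "HEAD_LEFT":
                head -= 1
            elif action == "HEAD_RIGHT":
                head += 1
            elif action.startswith("WRITE_"):
                tape[head] = action[len("WRITE_"):]
            elif action == "REFLECT":
                # Assumed payload: the machine's own description, the current
                # state, and a copy of the tape, appended after a separator.
                snapshot = list(tape)
                tape = tape + ["#", machine_desc, state] + snapshot
            return nxt, tape, head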






    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a
    decider is a machine that computes a specific mapping of its input
    to its output, and if that result changes in the submachine, only
    one of the answers it gives (as a stand-alone, or as the sub-
    machine) can be right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the
    "one true way". seems like u just enjoy shooting urself in the foot,
    with the only actual rational way being it's just the "one true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by it,
    which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,

    that's why it's the ct-thesis dude, not ct-law,

    ur just affirming the consequent without proof.

    No, the DEFINITION of a computation defines what it can be irrespective
    of the actual machinery used to perform it.

    It is, by definition, the algorithm computing of a given mapping.

    Said maps, are BY DEFINITION mappings from the "input" to the "output".

    If the machine can produce two different output from the same input, the machine can not be a computation.

    a context-dependent computation is computing a mapping that isn't
    directly specified by the formal input params. it's computing a mapping of:

    (context, input) -> output

    or more generally just

    context -> output

    since the formal input is just a specific part of the context. and the
    reason we got stuck on the halting problem for a fucking century is
    ignoring that context matters.
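
    A minimal sketch of that shape (a toy Python illustration assumed here;
    the particular context field is hypothetical): the mapping stays
    deterministic, it just takes the context as an argument instead of only
    the formal parameters.

        # A deterministic "context-dependent" mapping: (context, input) -> output.
        # The assumed context here is just the nesting depth of the run.
        def toy_decider(context: dict, program: str) -> bool:
            return context.get("depth", 0) == 0   # answer varies with context

        top_level = {"depth": 0}
        nested = {"depth": 2}
        assert toy_decider(top_level, "P") != toy_decider(nested, "P")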



    add that to list of the growing fallacies i've pointed out in ur
    recent arguments, which i'm sure ur not actually tracking, as that
    would be far more honesty than u are capable of putting out.

    So, what is the fallacy?

    AFFIRMING THE CONSEQUENT


    It seems you just assume you are allowed to change the definition,
    perhaps because you never bothered to learn it.





    This is sort of like the problem with a RASP machine architecture:
    sub-machines on such a platform are not necessarily computations,
    if they use the machine's capability to pass information not allowed
    by the rules of a computation. Your RTMs similarly break that property.
    Remember, Computations are NOT just what some model of processing
    produces, but are specifically defined based on producing a specific
    mapping of input to output, so if (even as a sub-machine) a
    specific input might produce different outputs, your architecture is
    NOT doing a computation.

    And without that property, using what the machine could do becomes a
    pretty worthless criterion, as you can't actually talk much about it.
    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes
    "hidden" state from outside that input stored elsewhere in the machine.


    context-dependent computations are still computations. the fact TMs
    don't capture them is an indication that the ct-thesis may be false


    Nope. Not unless the "context" is made part of the "input", and if
    you do that, you find that since you are trying to make it so the
    caller can't just define that context, your system is less than
    turing complete.

    Your system breaks the property of building a computation by the
    concatenation of sub-computations.

    ...including a context-dependent sub-computation makes ur overall
    computation context-dependent too ... if u dont want a context-
    dependent computation don't include context-dependent sub-computation.

    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.

    i'm not shifting meaning dude. i'm directly claiming it's a distinct
    type of computation that has been ignored by the theory of computing
    thus far

    nice try tho



    but in order to be complete and coherent, certain computations *must*
    have context-awareness and are therefore context-dependent. these
    computations aren't generally computable by TMs because TMs lack the
    necessary mechanisms to grant context-awareness.

    In other words, you require some computations to not be actual
    computations.


    unless u can produce some actual proof of some computation that
    actually breaks in context-dependence, rather than just listing things
    u assume are true, i won't believe u know what ur talking about


    The definition.

    A computation produces the well defined result based on the INPUT.

    context-dependent computation simply expands its input to include the
    entire computing context, not just the formal parameters. it's still
    well defined and it grants us access to meta computation that is not
    as expressible in TM computing.

    ct-thesis is cooked dude


    Your context, being not part of the input, can't change the well-defined result.

    Should 1 + 2 become 4 on Thursdays? of it asked of a gingerbread man?

    ur overgeneralizing. just because some computation is context-dependent
    doesn't mean all computation is context-dependent.

    another fallacy.


    All you are doing is saying you disagree with the definition.

    Go ahead, try to define an alternate version of Computation Theory where
    the result can depend on things that aren't part of the actual input to
    the machine, and see what you can show that is useful.

    The problem becomes that you can't really say anything about what you
    will get, since you don't know what the "hidden" factors are.

    ??? i was very clear multiple times over what the "hidden" input was.
    there's nothing random about it, context-dependent computation is just
    as well-defined and deterministic as context-independent computation
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Fri Jan 16 22:24:21 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/26 7:43 PM, dart200 wrote:
    On 1/16/26 3:21 PM, Richard Damon wrote:
    On 1/16/26 5:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model,
    like RTMs, u then told me i *can't* do that because muh ct-thesis,
    and here u are crying about how no superior method has been found
    as if u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed
    you don't know what that means.

    You don't get to change what a "computation" is, that isn't part
    of the "model".

    you honestly could have just said that cause the rest of this is
    just u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think you
    get to,

    but repeating urself doesn't make it more true

    And your ignoring it doesn't make it false.





    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.

    Computations are specific algorithms acting on just the input data.
    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction
    that dumps static meta-data + copies tape ... how have they *lost*
    power with that??? clearly they can express anything that TMs can ...
    Which means you don't understand how "TM"s work, as they don't have
    that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the
    list of possible operations with RTMs, my god dude...

    But the only "operations" that a turing machine does is write a
    specified value to the tape, move the tape, and change state.

    yes RTMs are an extension of TMs, please do pay attention

    Nope, because they don't have the actual form of a TM.

    Their operations don't follow the basic principles of a TM.

    I think your problem is you don't actually know how a TM works, and thus
    this is meaningless.

    Please try to show how you would actually DEFINE one of your RTMs, in a
    system similar to how you would define a regular TM.

    Not just a hand-waving argument, but an actually encoded RTM that looks
    like just an extension of some TM that has been encoded, and an
    explanation of how such a hardware platform could be constructed.




    see how fucking unhelpful u are???

    So, how is your "operation" of the same class as what they do?

    cause it's just as mechanically feasible. mechanical feasibility to self-evident just like with the other rules of turing machines.

    No, it is trying to put a hyper-cube into a flat plane drawing of a square.

    It seems you are just showing that you don't understand what you are
    actually talking about, but are trying to baffle people with your
    bullshit, hoping they won't notice your ignorance.



    Try to specify the tuple that your "operation" is.

    idk what you mean by this, REFLECT is just another operation like
    HEAD_LEFT, HEAD_RIGHT, or WRITE_<symbol>, the. transition table has a
    list of transition functions:

    So, it is a "tape motion". and how do you move the tape a "reflect"?


    cur_state, head_symbol -> action, nxt_state

    and REFLECT goes into the action slot specifying the action that should
    be taking to transition the tape to the next step.

    That isn't an "action" slot, in classic representation it is a binary
    field for tape motion direction.







    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a decider
    is a machine that computes a specific mapping of its input to its
    output, and if that result changes in the submachine, only one of the
    answers it gives (as a stand-alone, or as the sub-machine) can be
    right, so you just show that it gave a wrong answer.
    u have proof that doesn't work yet you keep asserting this is the
    "one true way". seems like u just enjoy shooting urself in the
    foot, with the only actual rational way being it's just the "one
    true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by
    it, which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,

    that's why it's the ct-thesis dude, not ct-law,

    ur just affirming the consequent without proof.

    No, the DEFINITION of a computation defines what it can be
    irrespective of the actual machinery used to perform it.

    It is, by definition, the algorithm computing of a given mapping.

    Said maps, are BY DEFINITION mappings from the "input" to the "output".

    If the machine can produce two different output from the same input,
    the machine can not be a computation.

    a context-dependent computation is computing a mapping that isn't
    directly specified by the formal input params. it's computing a mapping of:

    (context, input) -> output

    Which means that you are calling the context part of your input.

    But you also say that you can't set it, so you


    or more generally just

    context -> output

    since the formal input is just a specific part of the context. and the reason we got stuck on the halting problem of a fucking century is
    ignoring that context matters.

    And thus your system no longer has composition, as you have defined that
    the context wasn't changeable by the "caller" of a sub-computation.

    This makes your system strictly LESS powerful than a Turing Machine.
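
    A minimal sketch of the composition worry (a toy Python illustration
    assumed here, with a hypothetical am_i_nested() standing in for
    REFLECT-derived context): once a sub-machine can detect nesting,
    wrapping it changes its answer, so the composite no longer computes a
    fixed mapping of the original input.

        # Hypothetical nesting detector standing in for REFLECT-style context.
        _depth = 0

        def am_i_nested() -> bool:
            return _depth > 0

        def sub_decider(x: str) -> bool:
            # Consults the context, so its answer depends on how it is embedded.
            return not am_i_nested()

        def composed(x: str) -> bool:
            # "Cascading" sub_decider inside another machine bumps the depth,
            # which silently flips what sub_decider reports for the same x.
            global _depth
            _depth += 1
            try:
                return sub_decider(x)
            finally:
                _depth -= 1

        assert sub_decider("P") != composed("P")   # same input, two answers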




    add that to list of the growing fallacies i've pointed out in ur
    recent arguments, which i'm sure ur not actually tracking, as that
    would be far more honesty than u are capable of putting out.

    So, what is the fallacy?

    AFFIRMING THE CONSEQUENT

    Where did I do that?

    I stated the DEFINITION of the term, something it seems you are just
    affirming you don't understand.



    It seems you just assume you are allowed to change the definition,
    perhaps because you never bothered to learn it.





    This is sort of like the problem with a RASP machine architecture:
    sub-machines on such a platform are not necessarily computations,
    if they use the machine's capability to pass information not
    allowed by the rules of a computation. Your RTMs similarly break
    that property.

    Remember, Computations are NOT just what some model of processing
    produces, but are specifically defined based on producing a specific
    mapping of input to output, so if (even as a sub-machine) a
    specific input might produce different outputs, your architecture
    is NOT doing a computation.

    And without that property, using what the machine could do,
    becomes a pretty worthless criteria, as you can't actually talk
    much about it.

    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes
    "hidden" state from outside that input stored elsewhere in the machine. >>>>

    context-dependent computations are still computations. the fact TMs
    don't capture them is an indication that the ct-thesis may be false


    Nope. Not unless the "context" is made part of the "input", and if
    you do that, you find that since you are trying to make it so the
    caller can't just define that context, your system is less than
    turing complete.

    Your system breaks the property of building a computation by the
    concatenation of sub-computations.

    ...including a context-dependent sub-computation makes ur overall
    computation context-dependent too ... if u dont want a context-
    dependent computation don't include context-dependent sub-computation.

    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.

    i'm not shifting meaning dude. i'm directly claiming it's a distinct
    type of computation that has been ignored by the theory of computing
    thus far

    nice try tho

    But you don't actually do that, as you then claim to be in the same
    field to solve a problem specified in the field.

    As I said, if you want to try to define a new field based on a new
    definition of what a computation is, go ahead.

    You need to work out your formal definition.

    Show how the system actually works out.

    Show what it can show.

    And show why anyone would want to use it.




    but in order to be complete and coherent, certain computations *must*
    have context-awareness and are therefore context-dependent. these
    computations aren't generally computable by TMs because TMs lack the
    necessary mechanisms to grant context-awareness.

    In other words, you require some computations to not be actual
    computations.


    unless u can produce some actual proof of some computation that
    actually breaks in context-dependence, rather than just listing
    things u assume are true, i won't believe u know what ur talking about


    The definition.

    A computation produces the well defined result based on the INPUT.

    context-dependent computation simply expands it's input to include the entire computing context, not just the formal parameters. it's still
    well defined and it grants us access to meta computation that is not as expressible in TM computing.

    ct-thesis is cooked dude

    Nope, because you are just putting yourself outside the field it is
    written about.

    You can't change the definition of a computation, and still talk about
    things as if you were in the same system.

    That just shows you are smoking some bad weed.



    Your context, being not part of the input, can't change the well-
    defined result.

    Should 1 + 2 become 4 on Thursdays? of it asked of a gingerbread man?

    ur overgeneralizing. just become some computation is context-dependent doesn't mean all computation is context-dependent.

    another fallacy.

    Right, but nothing that actually is a computation can be context-dependent.



    All you are doing is saying you disagree with the definition.

    Go ahead, try to define an alternate version of Computation Theory
    where the result can depend on things that aren't part of the actual
    input to the machine, and see what you can show that is useful.

    The problem becomes that you can't really say anything about what you
    will get, since you don't know what the "hidden" factors are.

    ??? i was very clear multiple times over what the "hidden" input was. there's nothing random about it, context-dependent computation is just
    as well-defend and deterministic as context-independent computation


    The problem is that when you look at the computation itself (which might
    be embedded into a larger computation) you don't know which of the
    infinitely many contexts it might be within.

    Thus, what you can say about that "computation" is very limited.

    You don't seem to understand that a key point of the theory is about
    being able to build complicated things from simpler pieces.

    It comes out of how logic works: we build complicated theories based on
    simpler theories and the axioms. If those simpler things were "context
    dependent" it would be much harder for them to specify what they
    actually do in all contexts, and to then use them in all contexts.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Fri Jan 16 23:23:01 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/26 7:24 PM, Richard Damon wrote:
    On 1/16/26 7:43 PM, dart200 wrote:
    On 1/16/26 3:21 PM, Richard Damon wrote:
    On 1/16/26 5:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model,
    like RTMs, u then told me i *can't* do that because muh ct-thesis,
    and here u are crying about how no superior method has been found
    as if u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed
    you don't know what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is
    just u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think
    you get to,

    but repeating urself doesn't make it more true

    And your ignoring it doesn't make it false.





    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability is the ability to cascade algorithms.

    Your RTMs break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction
    that dumps static meta-data + copies tape ... how have they *lost*
    power with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't have
    that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the
    list of possible operations with RTMs, my god dude...

    But the only "operations" that a turing machine does is write a
    specified value to the tape, move the tape, and change state.

    yes RTMs are an extension of TMs, please do pay attention

    Nope, because they don't have the actual form of a TM.

    Their operations isn't by the basic principles of a TM.

    I think your problem is you don't actually know how a TM works, and thus this is meaningless.

    Please try to show how you would actually DEFINE in a system similar to
    how you would define a regular TM one of your RTMS.

    RTMs can run TM machine_descriptions directly without modification
    because REFLECT is just an operation that need not be used in the
    computation


    Not just hand-waving arguement, and actually encoded RTM that looks like just an extension of some TM that has been encoded, and an explaination
    of how such a hardware platform could be constructed.




    see how fucking unhelpful u are???

    So, how is your "operation" of the same class as what they do?

    cause it's just as mechanically feasible. mechanical feasibility to
    self-evident just like with the other rules of turing machines.

    No, it is trying to put a hyper-cube into a flat plane drawing of a square.

    It seems you are just showing that you don't understand what you are actually talking about, but are trying to baffle people with your
    bullshit hopeing they won't notice your ignorance.

    or u just don't understand what i mean by RTM,

    maybe ur just too old for me to teach any new tricks...




    Try to specify the tuple that your "operation" is.

    idk what you mean by this, REFLECT is just another operation like
    HEAD_LEFT, HEAD_RIGHT, or WRITE_<symbol>, the. transition table has a
    list of transition functions:

    So, it is a "tape motion". and how do you move the tape a "reflect"?

    it's a tape operation like all the rest of the operations



    cur_state, head_symbol -> action, nxt_state

    and REFLECT goes into the action slot specifying the action that
    should be taking to transition the tape to the next step.

    That isn't an "action" slot, in classic representation it is a binary
    field for tape motion direction.

    richard, please do actually read turing's paper one of these days. i've already posted at you his first machine description in text, and now
    i'll post it in image form:

    https://imgur.com/a/pzhHTMb

    do let me know when ur done with retardedly quibbling over syntax so we
    can actually get around to discussing semantics one of these days,

    god i wish i had someone like turing to discuss this with, but so far ur
    the only one still responding to any depth.








    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a decider
    is a machine that computes a specific mapping of its input to its
    output, and if that result changes in the submachine, only one of the
    answers it gives (as a stand-alone, or as the sub-machine) can be
    right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the
    "one true way". seems like u just enjoy shooting urself in the
    foot, with the only actual rational way being it's just the "one
    true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by
    it, which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,

    that's why it's the ct-thesis dude, not ct-law,

    ur just affirming the consequent without proof.

    No, the DEFINITION of a computation defines what it can be
    irrespective of the actual machinery used to perform it.

    It is, by definition, the algorithm computing of a given mapping.

    Said maps, are BY DEFINITION mappings from the "input" to the "output".

    If the machine can produce two different output from the same input,
    the machine can not be a computation.

    a context-dependent computation is computing a mapping that isn't
    directly specified by the formal input params. it's computing a
    mapping of:

    (context, input) -> output

    WHich means that you are calling context as part of your input.

    But you alse say that you can't set it, so you

    the context comes from REFLECT



    or more generally just

    context -> output

    since the formal input is just a specific part of the context. and the
    reason we got stuck on the halting problem of a fucking century is
    ignoring that context matters.

    And thus you system no longer has composition, as you have defined that
    the context wasn't changable by the "caller" of a sub-computation.

    This makes your system strictly LESS powerful than a Turing Machine.

    no it doesn't because using context is an optional feature, not a
    requirement for RTM machine descriptions. like i said RTMs can run TMs directly so they include all TM computations as well.





    add that to list of the growing fallacies i've pointed out in ur
    recent arguments, which i'm sure ur not actually tracking, as that
    would be far more honesty than u are capable of putting out.

    So, what is the fallacy?

    AFFIRMING THE CONSEQUENT

    Where did I do that.

    I stated the DEFINITION of the term, something it seems you are just aferming you don't understand.

    what makes that definition right beyond you repeating yourself?
    sometimes we get definitions wrong dude.

    it's pretty crazy i can produce a machine (even if u haven't understood
    it yet) that produces a consistent deterministic result that is "not a computation".

    not sure what the fuck it's doing if it's not a computation




    It seems you just assume you are allowed to change the definition,
    perhaps because you never bothered to learn it.





    This is sort of like the problem with a RASP machine
    architecture: sub-machines on such a platform are not
    necessarily computations, if they use the machine's capability to
    pass information not allowed by the rules of a computation. Your
    RTMs similarly break that property.

    Remember, Computations are NOT just what some model of processing
    produces, but are specifically defined based on producing a
    specific mapping of input to output, so if (even as a sub-
    machine) a specific input might produce different outputs, your
    architecture is NOT doing a computation.

    And without that property, using what the machine could do
    becomes a pretty worthless criterion, as you can't actually talk
    much about it.

    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes
    "hidden" state from outside that input stored elsewhere in the
    machine.


    context-dependent computations are still computations. the fact
    TMs don't capture them is an indication that the ct-thesis may be
    false


    Nope. Not unless the "context" is made part of the "input", and if
    you do that, you find that since you are trying to make it so the
    caller can't just define that context, your system is less than
    turing complete.

    Your system breaks the property of building a computation by the
    concatenation of sub-computations.

    ...including a context-dependent sub-computation makes ur overall
    computation context-dependent too ... if u dont want a context-
    dependent computation don't include context-dependent sub-computation.

    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.

    i'm not shifting meaning dude. i'm directly claiming it's a distinct
    type of computation that has been ignored by the theory of computing
    thus far

    nice try tho

    But you don't actually do that, as you then claim to be in the same
    field to solve a problem specified in the field.

    As I said, if you want to try to define a new field based on a new definition of what a computation is, go ahead.

    it's not a new field, it's a mild extension of turing machines, with one
    new operation.


    You need to work out your formal definition.

    Show how the system actually works out.

    Show what it can show.

    And show why anyone would want to use it.




    but in order to be complete and coherent, certain computations
    *must* have context-awareness and are therefore context-dependent.
    these computations aren't generally computable by TMs because TMs
    lack the necessary mechanisms to grant context-awareness.

    In other words, you require some computations to not be actual
    computations.


    unless u can produce some actual proof of some computation that
    actually breaks in context-dependence, rather than just listing
    things u assume are true, i won't believe u know what ur talking about

    The definition.

    A computation produces the well defined result based on the INPUT.

    context-dependent computation simply expands its input to include the
    entire computing context, not just the formal parameters. it's still
    well defined and it grants us access to meta computation that is not
    as expressible in TM computing.

    ct-thesis is cooked dude

    Nope, because you are just putting yourself outside the field it is
    written about.

    You can't change the definition of a computation, and still talk about things as if you were in the same system.


    That just shows you are smoking some bad weed.



    Your context, being not part of the input, can't change the well-
    defined result.

    Should 1 + 2 become 4 on Thursdays? Or if asked of a gingerbread man?

    ur overgeneralizing. just because some computation is context-dependent
    doesn't mean all computation is context-dependent.

    another fallacy.

    Right, but nothing that actually is a computation can be context-dependent.

    ur just arguing in circles with this.




    All you are doing is saying you disagree with the definition.

    Go ahead, try to define an alternate version of Computation Theory
    where the result can depend on things that aren't part of the actual
    input to the machine, and see what you can show that is useful.

    The problem becomes that you can't really say anything about what you
    will get, since you don't know what the "hidden" factors are.
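    A toy illustration of that objection (a sketch only, not anyone's formal definition): once the
    output can also depend on state the caller never passes in, the same explicit input may map to
    different outputs.

        # "hidden_depth" stands in for "context" outside the formal input.
        hidden_depth = 0

        def decide(x: str) -> bool:
            # the result depends on hidden_depth as well as on x
            return (len(x) + hidden_depth) % 2 == 0

        print(decide("ab"))       # True  (hidden_depth == 0)
        hidden_depth = 1
        print(decide("ab"))       # False (same explicit input, different output)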

    ??? i was very clear multiple times over what the "hidden" input was.
    there's nothing random about it, context-dependent computation is just
    as well-defined and deterministic as context-independent computation


    The problem is that when you look at the computation itself (that might
    be embedded into a larger computation) you don't know which of the
    infinite contexts it might be within.

    depth is not infinite for any given step,

    AND THAT'S WHERE REFLECT COMES IN: IT DUMPS THE FULL MACHINE DESCRIPTION
    OF THE RUNNING MACHINE, THE CURRENT STATE NUMBER, AND A FULL COPY OF THE
    TAPE ...

    all the info required to compute all configurations between the
    beginning and the current step of the computation, which can allow it to compute anything that is "knowable" about where it is in the computation
    at time of the REFLECT operation...

    the problem is ur literally not reading what i'm writing to an
    appreciable degree of comprehension, being too focused on isolated
    responses that lack overall *contextual* awareness of the conversation...


    Thus, what you can say about that "computation" is very limited.

    You don't seem to understand that a key point of the theory is about
    being able to build complicated things from simpler pieces.

    It comes out of how logic works: we build complicated theories based on simpler theories and the axioms. If those simpler things were "context dependent" it makes it much harder for them to specify what they
    actually do in all contexts, and to then use them in all contexts.

    i'm sorry context-dependent computation aren't as simple 🫩🫩🫩

    if the simplest theory was always correct we'd still be using newtonian gravity for everything
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 17 12:17:11 2026
    From Newsgroup: comp.ai.philosophy

    On 16/01/2026 23:21, Richard Damon wrote:
    the only "operations" that a turing machine does is write a specified
    value to the tape, move the tape, and change state.

    And arbitrarily long sequences of those. It says so in his 1936 paper.
    --
    Tristan Wibberley

    The message body is Copyright (C) 2026 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 17 07:33:13 2026
    From Newsgroup: comp.ai.philosophy

    On 1/17/26 2:23 AM, dart200 wrote:
    On 1/16/26 7:24 PM, Richard Damon wrote:
    On 1/16/26 7:43 PM, dart200 wrote:
    On 1/16/26 3:21 PM, Richard Damon wrote:
    On 1/16/26 5:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...
    when i tried to suggest improvements to the computational
    model, like RTMs, u then told me i *can't* do that because muh
    ct-thesis, and here u are crying about how no superior method
    has been found as if u'd ever even tried to look past the
    ct-thesis...

    No, you didn't suggest improvements to the model, you just
    showed you don't know what that means.

    You don't get to change what a "computation" is, that isn't part
    of the "model".

    you honestly could have just said that cause the rest of this is
    just u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think
    you get to,

    but repeating urself doesn't make it more true

    And your ignoring it doesn't make it false.





    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't
    do COMPUTATIONS, as it violates the basic rules of what a
    computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability is the ability to cascade algorithms.

    Your RTM breaks that capability, and thus becomes less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction
    that dumps static meta-data + copies tape ... how have they
    *lost* power with that??? clearly they can express anything that
    TMs can ...

    Which means you don't understand how "TM"s work, as they don't
    have that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the
    list of possible operations with RTMs, my god dude...

    But the only "operations" that a turing machine does is write a
    specified value to the tape, move the tape, and change state.
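    For reference, a minimal sketch of a single step in that classic form, in Python; the encoding
    here (a dict tape, "_" as the blank symbol) is an illustrative choice, not something from the
    thread.

        from typing import Dict, Tuple

        # (state, scanned symbol) -> (symbol to write, move L/R, next state)
        Delta = Dict[Tuple[str, str], Tuple[str, str, str]]

        def step(delta: Delta, tape: Dict[int, str], head: int, state: str):
            sym = tape.get(head, "_")            # "_" plays the role of the blank
            write, move, nxt = delta[(state, sym)]
            tape[head] = write                   # 1) write a specified value
            head += 1 if move == "R" else -1     # 2) move along the tape
            return tape, head, nxt               # 3) change state

        delta = {("q0", "_"): ("1", "R", "q1")}
        print(step(delta, {}, 0, "q0"))          # ({0: '1'}, 1, 'q1')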

    yes RTMs are an extension of TMs, please do pay attention

    Nope, because they don't have the actual form of a TM.

    Their operations isn't by the basic principles of a TM.

    I think your problem is you don't actually know how a TM works, and
    thus this is meaningless.

    Please try to show how you would actually DEFINE in a system similar
    to how you would define a regular TM one of your RTMS.

    RTMs can run TM machine_descriptions directly without modification
    because REFLECT is just an operation that need not be used in the computation

    So, you admit you can't do it, or are just too stupid to understand what
    it means to DEFINE something.



    Not just a hand-waving argument, but an actually encoded RTM that looks
    like just an extension of some TM that has been encoded, and an
    explanation of how such a hardware platform could be constructed.




    see how fucking unhelpful u are???

    So, how is your "operation" of the same class as what they do?

    cause it's just as mechanically feasible. mechanical feasibility is
    self-evident just like with the other rules of turing machines.

    No, it is trying to put a hyper-cube into a flat plane drawing of a
    square.

    It seems you are just showing that you don't understand what you are
    actually talking about, but are trying to baffle people with your
    bullshit hoping they won't notice your ignorance.

    or u just don't understand what i mean by RTM,

    I think I understand what you are trying to do. But your problem is you
    don't seem to understand it well enough to actually define it.


    maybe ur just too old for me to teach any new tricks...

    I doubt that. I think it is more that you are too ignorant of the field
    to understand your issues.





    Try to specify the tuple that your "operation" is.

    idk what you mean by this, REFLECT is just another operation like
    HEAD_LEFT, HEAD_RIGHT, or WRITE_<symbol>. the transition table has a
    list of transition functions:

    So, it is a "tape motion". and how do you move the tape a "reflect"?

    it's a tape operation like all the rest of the operations

    No, it isn't. Which way is "Reflect"?

    The closest that can mean is to flip the tape end to end.

    Your problem is you don't seem to understand the need to specify in
    precise detail what that instruction does.




    cur_state, head_symbol -> action, nxt_state

    and REFLECT goes into the action slot specifying the action that
    should be taken to transition the tape to the next step.
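    Read that way, the table shape being described might look roughly like this (an illustrative
    sketch only, using the operation names mentioned above):

        # (cur_state, head_symbol) -> (action, nxt_state), one action per transition
        transition_table = {
            ("q0", "_"): ("WRITE_1", "q1"),
            ("q1", "1"): ("HEAD_RIGHT", "q2"),
            ("q2", "_"): ("REFLECT", "q3"),      # the proposed extra operation
            ("q3", "_"): ("HEAD_LEFT", "halt"),
        }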

    That isn't an "action" slot, in classic representation it is a binary
    field for tape motion direction.

    richard, please do actually read turing's paper one of these days. i've already posted at you his first machine description in text, and now
    i'll post it in image form:

    https://imgur.com/a/pzhHTMb

    So, did you read the descriptions of what those operations were?

    Have you looked at how the description evolved over time as it was refined?


    do let me know when ur done with retardedly quibbling over syntax so we
    can actually get around to discussing semantics one of these days,

    When you actually DEFINE what you mean by your "Reflect" instruction, as
    an actually implementable operation.


    god i wish i had someone like turing to discuss this with, but so far ur
    the only one still responding to any depth.

    I don't think it would help.









    And, any algorithm that actually USES their capability to detect
    if they have been nested will become incorrect as a decider, as
    a decider is a machine that computes a specific mapping of its
    input to its output, and if that result changes in the
    submachine, only one of the answers it gives (as a stand-alone,
    or as the sub-machine) can be right, so you just show that it
    gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the
    "one true way". seems like u just enjoy shooting urself in the
    foot, with the only actual rational way being it's just the "one
    true way"

    IT IS THE DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by
    it, which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,

    that's why it's the ct-thesis dude, not ct-law,

    ur just affirming the consequent without proof.

    No, the DEFINITION of a computation defines what it can be
    irrespective of the actual machinery used to perform it.

    It is, by definition, the algorithm computing of a given mapping.

    Said maps are, BY DEFINITION, mappings from the "input" to the "output".

    If the machine can produce two different outputs from the same input,
    the machine cannot be a computation.

    a context-dependent computation is computing a mapping that isn't
    directly specified by the formal input params. it's computing a
    mapping of:

    (context, input) -> output

    Which means that you are calling context as part of your input.

    But you also say that you can't set it, so you

    the context comes from REFLECT

    So, "Reflect" makes up the context? How does it know what to do?




    or more generally just

    context -> output

    since the formal input is just a specific part of the context. and
    the reason we got stuck on the halting problem for a fucking century
    is ignoring that context matters.
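    Taken at face value, that is a claim about the shape of the mapping; a minimal sketch of the
    distinction (illustrative Python with invented names, not a definition from the thread):

        # Ordinary computation: the output is a function of the explicit input only.
        def parity(n: int) -> int:
            return n % 2

        # "Context-dependent" mapping as described above: the first argument is a
        # snapshot supplied by the machine itself, not chosen by the caller.
        def parity_in_context(context: dict, n: int) -> int:
            # context might hold e.g. {"description": ..., "state": ..., "tape": ...}
            return (n + len(context.get("tape", ""))) % 2

        print(parity(3), parity_in_context({"tape": "01"}, 3))   # 1 1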

    And thus your system no longer has composition, as you have defined
    that the context isn't changeable by the "caller" of a sub-computation.

    This makes your system strictly LESS powerful than a Turing Machine.

    no it doesn't because using context is an optional feature, not a requirement for RTM machine descriptions. like i said RTMs can run TMs directly so they include all TM computations as well.

    You don't seem to understand, that the fact that the sub-computation
    COULD use the reflect operation means that you can't control its input.

    Thus, perhaps the better way to say it is that the computations able to be
    done with machines that actually use your feature are less than Turing Complete.

    There is nothing that your RTMs can do that a TM can't do, except to
    define handcuffs for themselves.

    The USE of your extension weakens the machine.






    add that to the list of the growing fallacies i've pointed out in ur
    recent arguments, which i'm sure ur not actually tracking, as that
    would be far more honest than u are capable of putting out.

    So, what is the fallacy?

    AFFIRMING THE CONSEQUENT

    Where did I do that.

    I stated the DEFINITION of the term, something it seems you are just
    affirming you don't understand.

    what makes that definition right beyond you repeating yourself?
    sometimes we get definitions wrong dude.

    Because it IS the definition.

    I guess you don't understand the rules of logic.


    it's pretty crazy i can produce a machine (even if u haven't understood
    it yet) that produces a consistent deterministic result that is "not a computation".

    Because you get that result only by equivocating on your definitions.

    If the context is part of the input to make the output deterministic from
    the input, then they fail to be usable as sub-computations, as we can't
    control that context part of the input.

    When we look at just the controllable input for a sub-computation, the
    output is NOT a deterministic function of that input.


    not sure what the fuck it's doing if it's not a computation

    It's using hidden inputs that the caller can't control.

    Showing that you really don't understand what you are talking about.





    It seems you just assume you are allowed to change the definition,
    perhaps because you never bothered to learn it.





    This is sort of like the problem with a RASP machine
    architecture: sub-machines on such a platform are not
    necessarily computations if they use the machine's capability to
    pass information not allowed by the rules of a computation. Your
    RTM similarly breaks that property.

    Remember, computations are NOT just what some model of processing
    produces, but are specifically defined based on producing a
    specific mapping of input to output, so if (even as a sub-machine)
    a specific input might produce different output, your
    architecture is NOT doing a computation.

    And without that property, using what the machine could do
    becomes a pretty worthless criterion, as you can't actually talk
    much about it.

    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes
    "hidden" state from outside that input stored elsewhere in the
    machine.


    context-dependent computations are still computations. the fact
    TMs don't capture them is an indication that the ct-thesis may be
    false


    Nope. Not unless the "context" is made part of the "input", and if >>>>>> you do that, you find that since you are trying to make it so the >>>>>> caller can't just define that context, your system is less than
    turing complete.

    Your system break to property of building a computation by the
    concatination of sub-computations.

    ...including a context-dependent sub-computation makes ur overall
    computation context-dependent too ... if u dont want a context-
    dependent computation don't include context-dependent sub-computation.
    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.

    i'm not shifting meaning dude. i'm directly claiming it's a distinct
    type of computation that has been ignored by the theory of computing
    thus far

    nice try tho

    But you don't actually do that, as you then claim to be in the same
    field to solve a problem specified in the field.

    As I said, if you want to try to define a new field based on a new
    definition of what a computation is, go ahead.

    it's not a new field, it's a mild extension of turing machines, with one
    new operation.

    No, it is, as you are changing essential core definitions.

    That is like saying that spherical geometry is the same field as plane geometry, we just added a small extension.



    You need to work out your formal definition.

    Show how the system actually works out.

    Show what it can show.

    And show why anyone would want to use it.




    but in order to be complete and coherent, certain computations
    *must* have context-awareness and are therefore context-dependent.
    these computations aren't generally computable by TMs because TMs
    lack the necessary mechanisms to grant context-awareness.

    In other words, you require some computations to not be actual
    computations.


    unless u can produce some actual proof of some computation that
    actually breaks in context-dependence, rather than just listing
    things u assume are true, i won't believe u know what ur talking about

    The definition.

    A computation produces the well defined result based on the INPUT.

    context-dependent computation simply expands its input to include
    the entire computing context, not just the formal parameters. it's
    still well defined and it grants us access to meta computation that
    is not as expressible in TM computing.

    ct-thesis is cooked dude

    Nope, because you are just putting yourself outside the field it is
    written about.

    You can't change the definition of a computation, and still talk about
    things as if you were in the same system.


    That just shows you are smoking some bad weed.



    Your context, being not part of the input, can't change the well-
    defined result.

    Should 1 + 2 become 4 on Thursdays? Or if asked of a gingerbread man?

    ur overgeneralizing. just because some computation is context-
    dependent doesn't mean all computation is context-dependent.

    another fallacy.

    Right, but nothing that actually is a computation can be context-
    dependent.

    ur just arguing in circles with this.

    No, you are just lying to yourself to try to disagree with the definition.





    All you are doing is saying you disagree with the definition.

    Go ahead, try to define an alternate version of Computation Theory
    where the result can depend on things that aren't part of the actual
    input to the machine, and see what you can show that is useful.

    The problem becomes that you can't really say anything about what
    you will get, since you don't know what the "hidden" factors are.

    ??? i was very clear multiple times over what the "hidden" input was.
    there's nothing random about it, context-dependent computation is
    just as well-defined and deterministic as context-independent computation

    The problem is that when you look at the computation itself (that
    might be embedded into a larger computation) you don't know which of
    the infinite contexts it might be within.

    depth is not infinite for any given step,

    I didn't say infinite depth, I said from infinite contexts.


    AND THAT'S WHERE REFLECT COMES IN: IT DUMPS THE FULL MACHINE DESCRIPTION
    OF THE RUNNING MACHINE, THE CURRENT STATE NUMBER, AND A FULL COPY OF THE TAPE ...

    And WHICH machine description does it dump? The problem is the machine description isn't unique.


    all the info required to compute all configurations between the
    beginning and the current step of the computation, which can allow it to compute anything that is "knowable" about where it is in the computation
    at time of the REFLECT operation...

    And where did it store that information?

    Remember, the starting tape was unbounded in length (but finite).

    The machine itself is bounded in size, plus the unbounded tape.


    the problem is ur literally not reading what i'm writing to an
    appreciable degree of comprehension, being too focused on isolated
    responses that lack overall *contextual* awareness of the conversation...

    No, you are ignoring the requirements to implement what you desire.





    Thus, what you can say about that "computation" is very limited.

    You don't seem to understand that a key point of the theory is about
    being able to build complicated things from simpler pieces.

    It comes out of how logic works, we build complicated theories based
    on simpler theories and the axioms. If those simpler things were
    "context dependent" it makes it much harder for them to specify what
    they actually do in all contexts, and to then use them in all contexts.

    i'm sorry context-dependent computation aren't as simple 🫩🫩🫩

    Which is why you need to actually FULLY DEFINE them, and admit it is a
    new field.


    if the simplest theory was always correct we'd still be using newtonian gravity for everything


    You can't change a thing and it still be the same thing.

    I guess that truth is something you don't understand
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 17 08:15:30 2026
    From Newsgroup: comp.ai.philosophy

    On 1/17/26 7:17 AM, Tristan Wibberley wrote:
    On 16/01/2026 23:21, Richard Damon wrote:
    the only "operations" that a turing machine does is write a specified
    value to the tape, move the tape, and change state.

    And arbitrarily long sequences of those. It says so in his 1936 paper.


    Yes, but only as sequenced by the base atomic operations.

    The key point is what is an "atomic operation" that a computation can be
    built from as a sequence of.
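    As a sketch of that point (illustrative Python, not from the thread): a "macro" step made of
    several operations can be compiled into single-operation transitions by introducing fresh
    intermediate states, which is why the two presentations end up equivalent.

        # A "macro" step such as "write 1, then move right twice" compiled into
        # single-operation transitions. "*" is shorthand here for "whatever
        # symbol is scanned" to keep the sketch short.
        def expand(state, symbol, ops, final_state):
            table, cur = {}, (state, symbol)
            for i, op in enumerate(ops[:-1]):
                nxt = f"{state}_tmp{i}"          # fresh intermediate state
                table[cur] = (op, nxt)
                cur = (nxt, "*")
            table[cur] = (ops[-1], final_state)
            return table

        print(expand("q0", "_", ["WRITE_1", "HEAD_RIGHT", "HEAD_RIGHT"], "q1"))
        # {('q0', '_'): ('WRITE_1', 'q0_tmp0'),
        #  ('q0_tmp0', '*'): ('HEAD_RIGHT', 'q0_tmp1'),
        #  ('q0_tmp1', '*'): ('HEAD_RIGHT', 'q1')}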
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 17 09:47:27 2026
    From Newsgroup: comp.ai.philosophy

    On 1/17/26 4:17 AM, Tristan Wibberley wrote:
    On 16/01/2026 23:21, Richard Damon wrote:
    the only "operations" that a turing machine does is write a specified
    value to the tape, move the tape, and change state.

    And arbitrarily long sequences of those. It says so in his 1936 paper.


    turing was like: "yeah, it's supposed to be only one operation per state transition... but that's inefficient to write, including several is equivalent, so we're doing that!"

    LOL, i can imagine rick telling off turing for not conforming to mUh dEfInITiOnS...
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 17 15:31:16 2026
    From Newsgroup: comp.ai.philosophy

    On 1/17/26 12:47 PM, dart200 wrote:
    On 1/17/26 4:17 AM, Tristan Wibberley wrote:
    On 16/01/2026 23:21, Richard Damon wrote:
    the only "operations" that a turing machine does is write a specified
    value to the tape, move the tape, and change state.

    And arbitrarily long sequences of those. It says so in his 1936 paper.


    turing was like: "yeah, it's supposed to be only one operation per state transition... but that's inefficient to write, including several is equivalent, so we're doing that!"

    LOL, i can imagine rick telling off turing for not conforming to mUh dEfInITiOnS...


    No, because as I remember, the more complicated form was the first version,
    which he improved to allow the simpler form, and he showed how you
    could convert one form to the other.

    Something you have resisted doing, perhaps because you know you can't do it.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 17 19:14:47 2026
    From Newsgroup: comp.ai.philosophy

    On 1/17/26 4:33 AM, Richard Damon wrote:
    On 1/17/26 2:23 AM, dart200 wrote:
    On 1/16/26 7:24 PM, Richard Damon wrote:
    On 1/16/26 7:43 PM, dart200 wrote:
    On 1/16/26 3:21 PM, Richard Damon wrote:
    On 1/16/26 5:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...
    when i tried to suggest improvements to the computational
    model, like RTMs, u then told me i *can't* do that because muh
    ct-thesis, and here u are crying about how no superior method
    has been found as if u'd ever even tried to look past the
    ct-thesis...

    No, you didn't suggest improvements to the model, you just
    showed you don't know what that means.

    You don't get to change what a "computation" is, that isn't
    part of the "model".

    you honestly could have just said that cause the rest of this is
    just u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think
    you get to,

    but repeating urself doesn't make it more true

    And your ignoring it doesn't make it false.





    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability is the ability to cascade algorithms.

    Your RTM breaks that capability, and thus becomes less than
    Turing Complete.

    i'm sorry, RTMs are literally just TMs with one added
    instruction that dumps static meta-data + copies tape ... how
    have they *lost* power with that??? clearly they can express
    anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't
    have that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to
    the list of possible operations with RTMs, my god dude...

    But the only "operations" that a turing machine does is write a
    specified value to the tape, move the tape, and change state.

    yes RTMs are an extension of TMs, please do pay attention

    Nope, because they don't have the actual form of a TM.

    Their operations isn't by the basic principles of a TM.

    I think your problem is you don't actually know how a TM works, and
    thus this is meaningless.

    Please try to show how you would actually DEFINE in a system similar
    to how you would define a regular TM one of your RTMS.

    RTMs can run TM machine_descriptions directly without modification
    because REFLECT is just an operation that need not be used in the
    computation

    So, you admit you can't do it, or are just too stupid to understand what
    it means to DEFINE something.

    take the TM definition and add REFLECT to its set of possible operations.

    regurgitating a TM definition to do that is not interesting to me, i'm
    sure a gpt can help u out with that.




    Not just a hand-waving argument, but an actually encoded RTM that looks
    like just an extension of some TM that has been encoded, and an
    explanation of how such a hardware platform could be constructed.




    see how fucking unhelpful u are???

    So, how is your "operation" of the same class as what they do?

    cause it's just as mechanically feasible. mechanical feasibility is
    self-evident just like with the other rules of turing machines.

    No, it is trying to put a hyper-cube into a flat plane drawing of a
    square.

    It seems you are just showing that you don't understand what you are
    actually talking about, but are trying to baffle people with your
    bullshit hoping they won't notice your ignorance.

    or u just don't understand what i mean by RTM,

    I think I understand what you are trying to do. But your problem is you don't seem to understand it well enough to actually define it.


    maybe ur just too old for me to teach any new tricks...

    I doubt that. I think it is more that you are too ignorant of the field
    to understand your issues.





    Try to specify the tuple that your "operation" is.

    idk what you mean by this, REFLECT is just another operation like
    HEAD_LEFT, HEAD_RIGHT, or WRITE_<symbol>. the transition table has
    a list of transition functions:

    So, it is a "tape motion". and how do you move the tape a "reflect"?

    it's a tape operation like all the rest of the operations

    No, it isn't. Which way is "Reflect"?

    The closest that can mean is to flip the tape end to end.

    Your problem is you don't seem to understand the need to specify in
    precise detail what that instruction does.

    i've described what REFLECT does several times to you by now, clearly u
    aren't paying attention so idk why one more time would make a difference:

    REFLECT will cause a bunch of machine meta-information to be written to
    the tape, starting at the head, overwriting anything in its path. at the
    end of the operation, the head will still be in the same position as at
    the start of the operation. the information written to tape will include
    3 components:

    - machine description
    - current state transition
    - current tape (the tape state before command runs)
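    One possible reading of that description, as a rough interpreter sketch in Python (the dump
    encoding and helper names are invented here for illustration; this is not a formal
    specification of REFLECT):

        def run(table, tape, state="q0", head=0, blank="_", max_steps=1000):
            tape = dict(enumerate(tape))                  # position -> symbol
            for _ in range(max_steps):
                if state == "halt":
                    break
                action, nxt = table[(state, tape.get(head, blank))]
                if action == "HEAD_LEFT":
                    head -= 1
                elif action == "HEAD_RIGHT":
                    head += 1
                elif action.startswith("WRITE_"):
                    tape[head] = action[len("WRITE_"):]
                elif action == "REFLECT":
                    # snapshot of the tape as it was before this operation
                    lo, hi = min(tape, default=0), max(tape, default=0)
                    snapshot = "".join(tape.get(i, blank) for i in range(lo, hi + 1))
                    dump = f"[{table}|{state}|{snapshot}]"
                    for i, ch in enumerate(dump):         # written from the head onward,
                        tape[head + i] = ch               # head position itself unchanged
                state = nxt
            return tape, state, head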






    cur_state, head_symbol -> action, nxt_state

    and REFLECT goes into the action slot specifying the action that
    should be taken to transition the tape to the next step.

    That isn't an "action" slot, in classic representation it is a binary
    field for tape motion direction.

    richard, please do actually read turing's paper one of these days.
    i've already posted at you his first machine description in text, and
    now i'll post it in image form:

    https://imgur.com/a/pzhHTMb

    So, did you read the descriptions of what those operations were?

    Have you looked at how the description evolved over time as it was refined?


    do let me know when ur done with retardedly quibbling over syntax so
    we can actually get around to discussing semantics one of these days,

    When you actually DEFINE what you mean by your "Reflect" instruction, as
    an actually implementable operation.


    god i wish i had someone like turing to discuss this with, but so far
    ur the only one still responding to any depth.

    I don't think it would help.

    agree to disagree dick










    And, any algorithm that actually USES their capability to
    detect if they have been nested will become incorrect as a
    decider, as a decider is a machine that computes a specific
    mapping of its input to its output, and if that result changes
    in the submachine, only one of the answers it gives (as a
    stand-alone, or as the sub-machine) can be right, so you just
    show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is
    the "one true way". seems like u just enjoy shooting urself in
    the foot, with the only actual rational way being it's just the
    "one true way"

    IT IS THE DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by
    it, which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,

    that's why it's the ct-thesis dude, not ct-law,

    ur just affirming the consequent without proof.

    No, the DEFINITION of a computation defines what it can be
    irrespective of the actual machinery used to perform it.

    It is, by definition, the algorithm computing of a given mapping.

    Said maps are, BY DEFINITION, mappings from the "input" to the
    "output".

    If the machine can produce two different outputs from the same
    input, the machine cannot be a computation.

    a context-dependent computation is computing a mapping that isn't
    directly specified by the formal input params. it's computing a
    mapping of:

    (context, input) -> output

    Which means that you are calling context as part of your input.

    But you also say that you can't set it, so you

    the context comes from REFLECT

    So, "Reflect" makes up the context? How does it know what to do?

    not answering to red herrings from someone who isn't paying attention in
    the slightest





    or more generally just

    context -> output

    since the formal input is just a specific part of the context. and
    the reason we got stuck on the halting problem for a fucking century
    is ignoring that context matters.

    And thus your system no longer has composition, as you have defined
    that the context isn't changeable by the "caller" of a sub-computation.

    This makes your system strictly LESS powerful than a Turing Machine.

    no it doesn't because using context is an optional feature, not a
    requirement for RTM machine descriptions. like i said RTMs can run TMs
    directly so they include all TM computations as well.

    You don't seem to understand, that the fact that the sub-computation
    COULD use the reflect operation means that you can't control its input.

    machines are well defined at the start of the run, so whether it does
    or does not utilize REFLECT is knowable before the machine runs u moron
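    That much is indeed a static property of the machine description; a trivial sketch of such a
    check (illustrative Python, table format assumed as in the sketches above):

        def uses_reflect(table) -> bool:
            # whether REFLECT appears anywhere is fixed by the description itself
            return any(action == "REFLECT" for action, _nxt in table.values())

        tm_like  = {("q0", "_"): ("WRITE_1", "halt")}
        rtm_like = {("q0", "_"): ("REFLECT", "halt")}
        print(uses_reflect(tm_like), uses_reflect(rtm_like))   # False True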


    Thus, perhaps the better way to say it is that the computations able to be
    done with machines that actually use your feature are less than Turing Complete.

    There is nothing that your RTMs can do that a TM can't do, except to
    define handcuffs for themselves.

    The USE of your extension weakens the machine.






    add that to the list of the growing fallacies i've pointed out in ur
    recent arguments, which i'm sure ur not actually tracking, as that
    would be far more honest than u are capable of putting out.

    So, what is the fallacy?

    AFFIRMING THE CONSEQUENT

    Where did I do that.

    I stated the DEFINITION of the term, something it seems you are just
    affirming you don't understand.

    what makes that definition right beyond you repeating yourself?
    sometimes we get definitions wrong dude.

    Because it IS the definition.

    not an argument


    I guess you don't understand the rules of logic.

    also not an argument



    it's pretty crazy i can produce a machine (even if u haven't
    understood it yet) that produces a consistent deterministic result
    that is "not a computation".

    Because you get that result only by equivocating on your definitions.

    If the context is part of the input to make the output deterministic from
    the input, then they fail to be usable as sub-computations, as we can't control that context part of the input.

    When we look at just the controllable input for a sub-computation, the output is NOT a deterministic function of that input.


    not sure what the fuck it's doing if it's not a computation

    It's using hidden inputs that the caller can't control.

    which we do all the time in normal programming, something which
    apparently u think the tHeOrY oF CoMpUtInG fails to encapsulate

    pretty crazy we do a bunch "non-computating" in the normal act of
    programming computers


    Showing that you really don't understand what you are talking about.





    It seems you just assume you are allowed to change the definition,
    perhaps because you never bothered to learn it.





    This is sort of like the problem with a RASP machine
    architecture: sub-machines on such a platform are not
    necessarily computations if they use the machine's capability
    to pass information not allowed by the rules of a computation.
    Your RTM similarly breaks that property.

    Remember, computations are NOT just what some model of
    processing produces, but are specifically defined based on
    producing a specific mapping of input to output, so if (even as
    a sub-machine) a specific input might produce different
    output, your architecture is NOT doing a computation.

    And without that property, using what the machine could do
    becomes a pretty worthless criterion, as you can't actually talk
    much about it.

    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes
    "hidden" state from outside that input stored elsewhere in the
    machine.


    context-dependent computations are still computations. the fact
    TMs don't capture them is an indication that the ct-thesis may
    be false


    Nope. Not unless the "context" is made part of the "input", and >>>>>>> if you do that, you find that since you are trying to make it so >>>>>>> the caller can't just define that context, your system is less
    than turing complete.

    Your system break to property of building a computation by the
    concatination of sub-computations.

    ...including a context-dependent sub-computation makes ur overall
    computation context-dependent too ... if u dont want a context-
    dependent computation don't include context-dependent sub-
    computation.

    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.

    i'm not shifting meaning dude. i'm directly claiming it's a distinct
    type of computation that has been ignored by the theory of computing
    thus far

    nice try tho

    But you don't actually do that, as you then claim to be in the same
    field to solve a problem specified in the field.

    As I said, if you want to try to define a new field based on a new
    definition of what a computation is, go ahead.

    it's not a new field, it's a mild extension of turing machines, with
    one new operation.

    No, it is, as you are changing essential core definitions.

    That is like saying that spherical geometry is the same field as plane geometry, we just added a small extension.

    what did the nut say when it was all grown up???




    You need to work out your formal definition.

    Show how the system actually works out.

    Show what it can show.

    And show why anyone would want to use it.




    but in order to be complete and coherent, certain computations
    *must* have context-awareness and are therefore context-dependent.
    these computations aren't generally computable by TMs because TMs
    lack the necessary mechanisms to grant context-awareness.

    In other words, you require some computations to not be actual
    computations.


    unless u can produce some actual proof of some computation that
    actually breaks in context-dependence, rather than just listing
    things u assume are true, i won't believe u know what ur talking
    about


    The definition.

    A computation produces the well defined result based on the INPUT.

    context-dependent computation simply expands its input to include
    the entire computing context, not just the formal parameters. it's
    still well defined and it grants us access to meta computation that
    is not as expressible in TM computing.

    ct-thesis is cooked dude

    Nope, because you are just putting yourself outside the field it is
    written about.

    You can't change the definition of a computation, and still talk
    about things as if you were in the same system.


    That just shows you are smoking some bad weed.



    Your context, being not part of the input, can't change the well-
    defined result.

    Should 1 + 2 become 4 on Thursdays? Or if asked of a gingerbread man?

    ur overgeneralizing. just because some computation is context-
    dependent doesn't mean all computation is context-dependent.

    another fallacy.

    Right, but nothing that actually is a computation can be context-
    dependent.

    ur just arguing in circles with this.

    No, you are just lying to yourself to try to disagree with the definition.





    All you are doing is saying you disagree with the definition.

    Go ahead, try to define an alternate version of Computation Theory
    where the result can depend on things that aren't part of the
    actual input to the machine, and see what you can show that is useful.
    The problem becomes that you can't really say anything about what
    you will get, since you don't know what the "hidden" factors are.

    ??? i was very clear multiple times over what the "hidden" input
    was. there's nothing random about it, context-dependent computation
    is just as well-defined and deterministic as context-independent
    computation


    The problem is that when you look at the computation itself (that
    might be embedded into a larger computation) you don't know which of
    the infinite contexts it might be within.

    depth is not infinite for any given step,

    I didn't say infinite depth, I said from infinite contexts.


    AND THAT'S WHERE REFLECT COMES IN: IT DUMPS THE FULL MACHINE
    DESCRIPTION OF THE RUNNING MACHINE, THE CURRENT STATE NUMBER, AND A
    FULL COPY OF THE TAPE ...

    And WHICH machine description does it dump? The problem is the machine description isn't unique.


    all the info required to compute all configurations between the
    beginning and the current step of the computation, which can allow it
    to compute anything that is "knowable" about where it is in the
    computation at time of the REFLECT operation...

    And where did it store that information?

    Remember, the starting tape was unbounded in length (but finite).

    The machine itself is bounded in size, plus the unbounded tape.


    the problem is ur literally not reading what i'm writing to an
    appreciable degree of comprehension, being too focused on isolated
    responses that lack overall *contextual* awareness of the conversation...

    No, you are ignoring the requirements to implement what you desire.





    Thus, what you can say about that "computation" is very limited.

    You don't seem to understand that a key point of the theory is about
    being able to build complicated things from simpler pieces.

    It comes out of how logic works, we build complicated theories based
    on simpler theories and the axioms. If those simpler things were
    "context dependent" it makes it much harder for them to specify what
    they actually do in all contexts, and to then use them in all contexts.

    i'm sorry context-dependent computation aren't as simple 🫩🫩🫩

    Which is why you need to actually FULLY DEFINE them, and admit it is a
    new field.

    well it'd be great if someone fucking helped me out there, but all i get
    is a bunch of adversarial dismissal cause i'm stuck on a god-forsaken
    planet of fucking half-braindead clowns



    if the simplest theory was always correct we'd still be using
    newtonian gravity for everything


    You can't change a thing and it still be the same thing.

    I guess that truth is something you don't understand
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 17 22:28:39 2026
    From Newsgroup: comp.ai.philosophy

    On 1/17/26 10:14 PM, dart200 wrote:
    On 1/17/26 4:33 AM, Richard Damon wrote:
    On 1/17/26 2:23 AM, dart200 wrote:
    On 1/16/26 7:24 PM, Richard Damon wrote:
    On 1/16/26 7:43 PM, dart200 wrote:
    On 1/16/26 3:21 PM, Richard Damon wrote:
    On 1/16/26 5:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...
    when i tried to suggest improvements to the computational
    model, like RTMs, u then told me i *can't* do that because
    muh ct-thesis, and here u are crying about how no superior
    method has been found as if u'd ever even tried to look past
    the ct-thesis...

    No, you didn't suggest improvements to the model, you just
    showed you don't know what that means.

    You don't get to change what a "computation" is, that isn't
    part of the "model".

    you honestly could have just said that cause the rest of this
    is just u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think
    you get to,

    but repeating urself doesn't make it more true

    And your ignoring it doesn't make it false.





    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't
    do COMPUTATIONS, as it violates the basic rules of what a
    computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing
    Complete ability is the ability to cascade algorithms.

    Your RTM breaks that capability, and thus becomes less than
    Turing Complete.

    i'm sorry, RTMs are literally just TMs with one added
    instruction that dumps static meta-data + copies tape ... how
    have they *lost* power with that??? clearly they can express
    anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't
    have that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to
    the list of possible operations with RTMs, my god dude...

    But the only "operations" that a turing machine does is write a
    specified value to the tape, move the tape, and change state.

    yes RTMs are an extension of TMs, please do pay attention

    Nope, because they don't have the actual form of a TM.

    Their operations isn't by the basic principles of a TM.

    I think your problem is you don't actually know how a TM works, and
    thus this is meaningless.

    Please try to show how you would actually DEFINE in a system similar
    to how you would define a regular TM one of your RTMS.

    RTMs can run TM machine_descriptions directly without modification
    because REFLECT is just an operation that need not be used in the
    computation

    So, you admit you can't do it, or are just too stupid to understand
    what it means to DEFINE something.

    take the TM definition and add REFLECT to its set of possible operations.

    regurgitating a TM definition to do that is not interesting to me, i'm
    sure a gpt can help u out with that.




    Not just a hand-waving argument, but an actually encoded RTM that looks
    like just an extension of some TM that has been encoded, and an
    explanation of how such a hardware platform could be constructed.




    see how fucking unhelpful u are???

    So, how is your "operation" of the same class as what they do?

    cause it's just as mechanically feasible. mechanical feasibility is
    self-evident just like with the other rules of turing machines.

    No, it is trying to put a hyper-cube into a flat plane drawing of a
    square.

    It seems you are just showing that you don't understand what you are
    actually talking about, but are trying to baffle people with your
    bullshit hoping they won't notice your ignorance.

    or u just don't understand what i mean by RTM,

    I think I understand what you are trying to do. But your problem is you
    don't seem to understand it well enough to actually define it.


    maybe ur just too old for me to teach any new tricks...

    I doubt that. I think it is more that you are too ignorant of the
    field to understand your issues.





    Try to specify the tuple that your "operation" is.

    idk what you mean by this, REFLECT is just another operation like
    HEAD_LEFT, HEAD_RIGHT, or WRITE_<symbol>. the transition table has
    a list of transition functions:

    So, it is a "tape motion". and how do you move the tape a "reflect"?

    it's a tape operation like all the rest of the operations

    No, it isn't. Which way is "Reflect"?

    The closest that can mean is to flip the tape end to end.

    Your problem is you don't seem to understand the need to specify in
    precise detail what that instruction does.

    i've described what REFLECT does several times to you by now, clearly u aren't paying attention so idk why one more time would make a difference:

    REFLECT will cause a bunch of machine meta-information to be written to
    the tape, starting at the head, overwriting anything in its path. at the
    end of the operation, the head will still be in the same position as at
    the start of the operation. the information written to tape will include
    3 components:

    - machine description
    - current state transition
    - current tape (the tape state before command runs)






    cur_state, head_symbol -> action, nxt_state

    and REFLECT goes into the action slot specifying the action that
    should be taken to transition the tape to the next step.

    That isn't an "action" slot, in classic representation it is a
    binary field for tape motion direction.

    richard, please do actually read turing's paper one of these days.
    i've already posted at you his first machine description in text, and
    now i'll post it in image form:

    https://imgur.com/a/pzhHTMb

    So, did you read the descriptions of what those operations were?

    Have you looked at how the description evolved over time as it was
    refined.


    do let me know when ur done with retardedly quibbling over syntax so
    we can actually get around to discussing semantics one of these days,

    When you actually DEFINE what you mean by your "Reflect" instruction,
    as an actually implementable operation.


    god i wish i had someone like turing to discuss this with, but so far
    ur the only one still responding to any depth.

    I don't think it would help.

    agree to disagree dick










    And, any algorithm that actually USES their capability to
    detect if they have been nested will become incorrect as a
    decider, as a decider is a machine that computes a specific
    mapping of its input to its output, and if that result changes
    in the submachine, only one of the answers it gives (as a
    stand-alone, or as the sub-machine) can be right, so you just
    show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is
    the "one true way". seems like u just enjoy shooting urself in
    the foot, with the only actual rational way being it's just the
    "one true way"

    IT IS THE DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated
    by it, which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,

    that's why it's the ct-thesis dude, not ct-law,

    ur just affirming the consequent without proof.

    No, the DEFINITION of a computation defines what it can be
    irrespective of the actual machinery used to perform it.

    It is, by definition, the algorithm computing of a given mapping.

    Said maps are, BY DEFINITION, mappings from the "input" to the
    "output".

    If the machine can produce two different outputs from the same
    input, the machine cannot be a computation.

    a context-dependent computation is computing a mapping that isn't
    directly specified by the formal input params. it's computing a
    mapping of:

    (context, input) -> output

    Which means that you are calling context as part of your input.

    But you also say that you can't set it, so you

    the context comes from REFLECT

    So, "Reflect" makes up the context? How does it know what to do?

    not answering to red herrings from someone who isn't paying attention in
    the slightest





    or more generally just

    context -> output

    since the formal input is just a specific part of the context. and
    the reason we got stuck on the halting problem for a fucking century
    is ignoring that context matters.

    And thus your system no longer has composition, as you have defined
    that the context isn't changeable by the "caller" of a sub-computation.
    This makes your system strictly LESS powerful than a Turing Machine.

    no it doesn't because using context is an optional feature, not a
    requirement for RTM machine descriptions. like i said RTMs can run
    TMs directly so they include all TM computations as well.

    You don't seem to understand, that the fact that the sub-computation
    COULD use the reflect operation means that you can't control its input.

    machines are well defined at the start of the run, so whether it does
    or does not utilize REFLECT is knowable before the machine runs u moron


    Thus, perhaps the better way to say it is that the computations able to be
    done with machines that actually use your feature are less than Turing
    Complete.

    There is nothing that your RTMs can do that a TM can't do, except to
    define handcuffs for themselves.

    The USE of your extension weakens the machine.






    add that to the list of the growing fallacies i've pointed out in ur
    recent arguments, which i'm sure ur not actually tracking, as
    that would be far more honest than u are capable of putting out.
    So, what is the fallacy?

    AFFIRMING THE CONSEQUENT

    Where did I do that.

    I stated the DEFINITION of the term, something it seems you are just
    affirming you don't understand.

    what makes that definition right beyond you repeating yourself?
    sometimes we get definitions wrong dude.

    Because it IS the definition.

    not an argument

    Ok, so you are just admitting that you are stupid, illogical, and a liar.

    Good luck starving to death when your money runs out.



    I guess you don't understand the rules of logic.

    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even if u haven't
    understood it yet) that produces a consistent deterministic result
    that is "not a computation".

    Because you get that result only by equivocating on your definitions.

    If the context is part of the input to make the output deterministic from
    the input, then they fail to be usable as sub-computations, as we can't
    control that context part of the input.

    When we look at just the controllable input for a sub-computation, the
    output is NOT a deterministic function of that input.


    not sure what the fuck it's doing if it's not a computation

    It's using hidden inputs that the caller can't control.

    which we do all the time in normal programming, something which
    apparently u think the tHeOrY oF CoMpUtInG fails to encapsulate

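    A small sketch of that "hidden input" point as it shows up in everyday
    code (plain Python, nothing assumed beyond that):

        config = {"mode": "fast"}            # state the caller never passes

        def plan(x: int) -> str:
            return f"{config['mode']}:{x}"   # reads that hidden state

        print(plan(7))          # fast:7
        config["mode"] = "safe"
        print(plan(7))          # safe:7 -- same argument, different result

    As a map from its formal parameter alone, plan() is not a fixed
    input-to-output mapping; as a map from (config, x) it is.
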
    Right, but that isn't about computations.


    pretty crazy we do a bunch of "non-computing" in the normal act of programming computers

    Why?

    As I have said, "Computatations" is NOT about how modern computers work.

    I guess you are just showing that you fundamentally don't understand the problem field you are betting your life on.



    Showing that you really don't understand what you are talking about.





    It seems you just assume you are allowed to change the definition,
    perhaps because you never bothered to learn it.





    This is sort of like the problem with a RASP machine
    architecture: sub-machines on such a platform are not
    necessarily computations, if they use the machine's capability
    to pass information not allowed by the rules of a computation.
    Your RTMs similarly break that property.

    Remember, Computations are NOT just what some model of
    processing produces, but are specifically defined based on
    producing a specific mapping of input to output, so if (even
    as a sub-machine) a specific input might produce different
    output, your architecture is NOT doing a computation.

    And without that property, using what the machine could do
    becomes a pretty worthless criterion, as you can't actually
    talk much about it.

    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes
    "hidden" state from outside that input, stored elsewhere in the
    machine.


    context-dependent computations are still computations. the fact
    TMs don't capture them is an indication that the ct-thesis may
    be false


    Nope. Not unless the "context" is made part of the "input", and >>>>>>>> if you do that, you find that since you are trying to make it so >>>>>>>> the caller can't just define that context, your system is less >>>>>>>> than turing complete.

    Your system breaks the property of building a computation by the
    concatenation of sub-computations.

    ...including a context-dependent sub-computation makes ur overall
    computation context-dependent too ... if u dont want a context-dependent
    computation don't include a context-dependent sub-computation.

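    A hedged sketch of the composition point (the names are made up for
    illustration): once one step in a pipeline reads ambient context, the
    composite result depends on that context too, not just on the argument
    handed to the pipeline.

        ambient = {"offset": 0}            # context the caller does not pass

        def context_step(x: int) -> int:
            return x + ambient["offset"]   # the context-dependent piece

        def pipeline(x: int) -> int:
            return 2 * context_step(x)     # pure step composed after it

        print(pipeline(10))                # 20
        ambient["offset"] = 5
        print(pipeline(10))                # 30 -- the whole pipeline
                                           # inherited the dependence
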
    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.

    i'm not shifting meaning dude. i'm directly claiming it's a
    distinct type of computation that has been ignored by the theory of
    computing thus far

    nice try tho

    But you don't actually do that, as you then claim to be in the same
    field to solve a problem specified in the field.

    As I said, if you want to try to define a new field based on a new
    definition of what a computation is, go ahead.

    it's not a new field, it's a mild extension of turing machines, with
    one new operation.

    No, it is, as you are changing essential core definitions.

    That is like saying that spherical geometry is the same field as
    plane geometry; we just added a small extension.

    what did the nut say when it was all grown up???




    You need to work out your formal definition.

    Show how the system actually works out.

    Show what it can show.

    And show why anyone would want to use it.




    but in order to be complete and coherent, certain computations
    *must* have context-awareness and are therefore context-dependent.
    these computations aren't generally computable by TMs because TMs
    lack the necessary mechanisms to grant context-awareness.

    In other words, you require some computations to not be actual
    computations.


    unless u can produce some actual proof of some computation that
    actually breaks in context-dependence, rather than just listing
    things u assume are true, i won't believe u know what ur talking
    about


    The definition.

    A computation produces a well-defined result based on the INPUT.

    context-dependent computation simply expands its input to include
    the entire computing context, not just the formal parameters. it's
    still well defined and it grants us access to meta computation that
    is not as expressible in TM computing.

    ct-thesis is cooked dude

    Nope, because you are just putting yourself outside the field it is
    written about.

    You can't change the definition of a computation, and still talk
    about things as if you were in the same system.


    That just shows you are smoking some bad weed.



    Your context, being not part of the input, can't change the
    well-defined result.

    Should 1 + 2 become 4 on Thursdays? or if asked of a gingerbread man?
    ur overgeneralizing. just because some computation is context-dependent
    doesn't mean all computation is context-dependent.

    another fallacy.

    Right, but nothing that actually is a computation can be context-
    dependent.

    ur just arguing in circles with this.

    No, you are just lying to yourself to try to disagree with the
    definition.





    All you are doing is saying you disagree with the definition.

    Go ahead, try to define an alternate version of Computation Theory
    where the result can depend on things that aren't part of the
    actual input to the machine, and see what you can show that is
    useful.

    The problem becomes that you can't really say anything about what
    you will get, since you don't know what the "hidden" factors are.

    ??? i was very clear multiple times over what the "hidden" input
    was. there's nothing random about it, context-dependent computation
    is just as well-defined and deterministic as context-independent
    computation


    The problem is that when you look at the computation itself (that
    might be embedded into a larger computation) you don't know which of
    the infinite contexts it might be within.

    depth is not infinite for any given step,

    I didn't say infinite depth, I said from infinite contexts.


    AND THAT'S WHERE REFLECT COMES IN: IT DUMPS THE FULL MACHINE
    DESCRIPTION OF THE RUNNING MACHINE, THE CURRENT STATE NUMBER, AND A
    FULL COPY OF THE TAPE ...

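    Since REFLECT has not been formally specified in this thread, the
    following is only a toy reading of the all-caps description above: a
    step that hands back a copy of the machine's own description, its
    current state, and the tape. Every name and encoding here is an
    assumption made for illustration.

        def run(program, tape, state="q0", pos=0, max_steps=100):
            tape = list(tape)
            for _ in range(max_steps):
                action = program.get((state, tape[pos]))
                if action is None or action[0] == "HALT":
                    return tape, None
                if action[0] == "REFLECT":
                    # snapshot: description, current state, tape copy, head
                    return tape, (dict(program), state, list(tape), pos)
                write, move, state = action
                tape[pos] = write
                pos += 1 if move == "R" else -1
            return tape, None

        prog = {("q0", "1"): ("0", "R", "q0"),
                ("q0", "0"): ("REFLECT",)}
        print(run(prog, ["1", "1", "0"])[1])   # the reflected snapshot
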
    And WHICH machine description does it dump? The problem is the machine
    description isn't unique.


    all the info required to compute all configurations between the
    beginning and the current step of the computation, which can allow it
    to compute anything that is "knowable" about where it is in the
    computation at time of the REFLECT operation...

    And where did it store that information?

    Remember, the starting tape was unbounded in length (but finite).

    The machine itself is bounded in size, plus the unbounded tape.


    the problem is ur literally not reading what i'm writing to an
    appreciable degree of comprehension, being too focused on isolated
    responses that lack overall *contextual* awareness of the
    conversation...

    No, you are ignoring the requirements to implement what you desire.





    Thus, what you can say about that "computation" is very limited.

    You don't seem to understand that a key point of the theory is about
    being able to build complicated things from simpler pieces.

    It comes out of how logic works: we build complicated theories based
    on simpler theories and the axioms. If those simpler things were
    "context dependent" it makes it much harder for them to specify
    what they actually do in all contexts, and to then use them in all
    contexts.

    i'm sorry context-dependent computations aren't as simple 🫩🫩🫩

    Which is why you need to actually FULLY DEFINE them, and admit it is a
    new field.

    well it'd be great if someone fucking helped me out there, but all i get
    is a bunch of adversarial dismissal cause i'm stuck on a god forsaken
    planet of fucking half-braindead clowns

    Help has been offered, but you just reject it as it doesn't match your
    ideas, largely because you don't understand what you are trying to get into.

    When your errors are explained, you just curse back.

    I can't fix stupid.

    You aren't stuck on a planet of clowns, you are the clown that doesn't understand the world.




    if the simplest theory was always correct we'd still be using
    newtonian gravity for everything


    You can't change a thing and it still be the same thing.

    I guess that truth is something you don't understand



    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 17 22:05:25 2026
    From Newsgroup: comp.ai.philosophy

    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out.

    one can only hope for so much sometimes 🙏




    [snip]

    As I have said, "Computatations" is NOT about how modern computers work.

    I guess you are just showing that you fundamentally don't understand the problem field you are betting your life on.

    one would presume the fundamental theory of computing would be general
    enough to encapsulate everything computed by real world computers, no???




    [snip]

    Help has been offered,

    not constructively, ur barely even paying attention to what i write

    [snip]

    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 18 07:05:09 2026
    From Newsgroup: comp.ai.philosophy

    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    [snip]

    one would presume the fundamental theory of computing would be general enough to encapsulate everything computed by real world computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the computer as
    you know it.

    All you are doing is showing your ignorance of what you are talking about.





    [snip]

    Help has been offered,

    not constructively, ur barely even paying attention to what i write

    No, perhaps the problem is I assume you at least attempt to learn what
    you are trying to talk about.



    [snip]




    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 18 10:15:05 2026
    From Newsgroup: comp.ai.philosophy

    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    [snip]

    one would presume the fundamental theory of computing would be general
    enough to encapsulate everything computed by real world computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the computer as
    you know it.

    so ur saying it's outdated and needs updating in regards to new things
    we do with computers that apparently turing machines as a model don't
    have variations of ...

    or what ... someone writes down a fundamental theory and then it just
    sticks around like an unchanging law when u haven't even proven the
    ct-thesis correct???


    [snip]

    Help has been offered,

    not constructively, ur barely even paying attention to what i write

    No, perhaps the problem is I assume you at least attempt to learn what
    you are trying to talk about.

    u don't even know what constructive help is to be frank

    i'm not ur student, ur not my teachers, this isn't a hierarchical relationship,

    and until u recognize that ur going to continue to be non-constructive




    [snip]




    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 18 15:56:02 2026
    From Newsgroup: comp.ai.philosophy

    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    [snip]

    one would presume the fundamental theory of computing would be
    general enough to encapsulate everything computed by real world
    computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the computer as
    you know it.

    so ur saying it's outdated and needs updating in regards to new things
    we do with computers that apparently turing machines as a model don't
    have variations of ...

    No, it still handles that which it was developed for.


    or what ... someone writes down a fundamental theory and then it just
    sticks around like an unchanging law when u haven't even proven the ct-thesis correct???

    Why does it need to change?

    If a new problem comes up, a new theory might be needed to handle it.



    [snip]

    Help has been offered,

    not constructively, ur barely even paying attention to what i write

    No, perhaps the problem is I assume you at least attempt to learn what
    you are trying to talk about.

    u don't even know what constructive help is to be frank

    i'm not ur student, ur not my teachers, this isn't a hierarchical relationship,

    and until u recognize that ur going to continue to be non-constructive

    YOU were the one asking for help to develop your ideas, if only by
    posting them and asking for comments.

    I have just pointed out the fundamental errors in your analysis.

    You need to make a choice of directions.

    Either you work in the currently established theory, so you can use
    things in it, and see if you can develop something new.

    Or, you branch out and start a brand new theory, and start at the ground
    floor, fully define what you mean by things, show what your ideas can do,
    and why that would be useful.

    It seems you want to change the foundation, but keep most of the
    building on top, without even knowing how that building was built and
    how it connects to the foundation.

    That just doesn't work.





    [snip]







    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 18 13:50:48 2026
    From Newsgroup: comp.ai.philosophy

    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    [snip]

    so ur saying it's outdated and needs updating in regards to new things
    we do with computers that apparently turing machines as a model don't
    have variations of ...

    No, it still handles that which it was developed for.

    well it was developed to be a general theory of computing, and
    apparently modern computing has transcended that theory ...



    or what ... someone writes down a fundamental theory and then it just
    sticks around like an unchanging law when u haven't even proven the
    ct-thesis correct???

    Why does it need to change?

    why does the fundamental theory of computing need to encapsulate all
    that is possible within computing??

    idk, that's what i thot a fundamental theory is supposed to do, but i
    guess you don't agree???

    like, if the fundamental theory doesn't encapsulate everything done
    within computing ... then idk why u think the halting problem should
    apply to modern computing???


    If a new problem comes up, a new theory might be needed to handle it.

    or maybe new techniques could rectify old problems ...

    talk about a lack of curiosity. ur confusing regurgitation of rote
    learning with actual intelligence, but i suppose that's all u need
    working for a military contractor...

    military intelligence is an oxymoron, remember?




    All you are doing is showing your ignorance of what you are talking
    about.





    Showing that you really don't understand what you are talking about.




    It seems you just assume you are allowed to change the definition,
    perhaps because you never bothered to learn it.




    This is sort of like the problem with a RASP machine architecture:
    sub-machines on such a platform are not necessarily computations, if
    they use the machine's capability to pass information not allowed by
    the rules of a computation. Your RTM similarly breaks that property.

    Remember, Computations are NOT just what some model of processing
    produces, but specifically are defined based on producing a specific
    mapping of input to output, so if (even as a sub-machine) a specific
    input might produce different output, your architecture is NOT doing
    a computation.

    And without that property, using what the machine could do becomes a
    pretty worthless criterion, as you can't actually talk much about it.

    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes
    "hidden" state from outside that input stored elsewhere in the
    machine.


    context-dependent computations are still computations. the fact TMs
    don't capture them is an indication that the ct-thesis may be false


    Nope. Not unless the "context" is made part of the "input", and if
    you do that, you find that since you are trying to make it so the
    caller can't just define that context, your system is less than
    turing complete.

    Your system breaks the property of building a computation by the
    concatenation of sub-computations.

    ...including a context-dependent sub-computation makes ur overall
    computation context-dependent too ... if u dont want a
    context-dependent computation don't include a context-dependent
    sub-computation.

    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.

    i'm not shifting meaning dude. i'm directly claiming it's a distinct
    type of computation that has been ignored by the theory of computing
    thus far

    nice try tho

    But you don't actually do that, as you then claim to be in the same
    field to solve a problem specified in the field.

    As I said, if you want to try to define a new field based on a new
    definition of what a computation is, go ahead.

    it's not a new field, it's a mild extension of turing machines, with
    one new operation.

    No, it is, as you are changing essential core definitions.

    That is like saying that spherical geometry is the same field as
    plane geometry, we just added a small extension.

    what did the nut say when it was all grown up???




    You need to work out your formal definition.

    Show how the system actually works out.

    Show what it can show.

    And show why anyone would want to use it.




    but in order to be complete and coherent, certain computations *must*
    have context-awareness and are therefore context-dependent. these
    computations aren't generally computable by TMs because TMs lack the
    necessary mechanisms to grant context-awareness.

    In other words, you require some computations to not be actual
    computations.


    unless u can produce some actual proof of some computation that
    actually breaks in context-dependence, rather than just listing
    things u assume are true, i won't believe u know what ur talking
    about


    The definition.

    A computation produces the well-defined result based on the INPUT.

    context-dependent computation simply expands its input to include
    the entire computing context, not just the formal parameters. it's
    still well defined and it grants us access to meta computation that
    is not as expressible in TM computing.
    ct-thesis is cooked dude

    Nope, because you are just putting yourself outside the field it is
    written about.

    You can't change the definition of a computation, and still talk
    about things as if you were in the same system.


    That just shows you are smoking some bad weed.



    Your context, being not part of the input, can't change the
    well-defined result.

    Should 1 + 2 become 4 on Thursdays? Or if asked of a gingerbread man?

    ur overgeneralizing. just because some computation is
    context-dependent doesn't mean all computation is context-dependent.
    another fallacy.

    Right, but nothing that actually is a computation can be
    context-dependent.

    ur just arguing in circles with this.

    No, you are just lying to yourself to try to disagree with the
    definition.





    All you are doing is saying you disagree with the definition.

    Go ahead, try to define an alternate version of Computation Theory
    where the result can depend on things that aren't part of the actual
    input to the machine, and see what you can show that is useful.

    The problem becomes that you can't really say anything about what
    you will get, since you don't know what the "hidden" factors are.

    ??? i was very clear multiple times over what the "hidden" input
    was. there's nothing random about it, context-dependent computation
    is just as well-defined and deterministic as context-independent
    computation


    The problem is that when you look at the computation itself (that
    might be embedded into a larger computation) you don't know which of
    the infinite contexts it might be within.

    depth is not infinite for any given step,

    I didn't say infinite depth, I said from infinite contexts.


    AND THAT'S WHERE REFLECT COMES IN: IT DUMPS THE FULL MACHINE
    DESCRIPTION OF THE RUNNING MACHINE, THE CURRENT STATE NUMBER, AND A
    FULL COPY OF THE TAPE ...

    And WHICH machine description does it dump? The problem is the
    machine description isn't unique.


    all the info required to compute all configurations between the
    beginning and the current step of the computation, which can allow
    it to compute anything that is "knowable" about where it is in the
    computation at time of the REFLECT operation...

    And where did it store that information?

    Remember, the starting tape was unbounded in length (but finite).
    The machine itself is bounded in size, plus the unbounded tape.


    the problem is ur literally not reading what i'm writing to an
    appreciable degree of comprehension, being too focused on isolated
    responses that lack overall *contextual* awareness of the
    conversation...

    No, you are ignoring the requirements to implement what you desire.




    Thus, what you can say about that "computation" is very limited.

    You don't seem to understand that a key point of the theory is about
    being able to build complicated things from simpler pieces.

    It comes out of how logic works, we build complicated theories based
    on simpler theories and the axioms. If those simpler things were
    "context dependent" it makes it much harder for them to specify what
    they actually do in all contexts, and to then use them in all
    contexts.

    i'm sorry context-dependent computations aren't as simple 🫩🫩🫩

    Which is why you need to actually FULLY DEFINE them, and admit it is
    a new field.

    well it'd be great if someone fucking helped me out there, but all i
    get is a bunch of adversarial dismissal cause i'm stuck on a god
    forsaken planet of fucking half-braindead clowns

    Help has been offered,

    not constructively, ur barely even paying attention to what i write

    No, perhaps the problem is I assume you at least attempt to learn
    what you are trying to talk about.

    u don't even know what constructive help is to be frank

    i'm not ur student, ur not my teachers, this isn't a hierarchical
    relationship,

    and until u recognize that ur going to continue to be non-constructive

    YOU were the one asking for help to develop your ideas, if only by
    posting them and asking for comments.

    I have just pointed out the fundamental errors in your analysis

    You need to make a choice of directions.

    Either you work in the currently established theory, so you can use
    things in it, and see if you can develop something new.

    Or, you branch out and start a brand new theory, and start at the
    ground floor, fully define what you mean by things, show what your
    ideas can do,
    and why that would be useful.

    false dichotomy, add that to growing list of fallacies u shat out at me


    It seems you want to change the foundation, but keep most of the
    building on top, without even knowing how that building was built and
    how it connects to the foundation.

    That just doesn't work.





    but you just reject it as it doesn't match your ideas, largely
    because you don't understand what you are trying to get in.

    When your errors are explained, you just curse back.

    I can't fix stupid.

    You aren't stuck on a planet of clowns, you are the clown that
    doesn't understand the world.




    if the simplest theory was always correct we'd still be using
    newtonian gravity for everything


    You can't change a thing and it still be the same thing.

    I guess that truth is something you don't understand







    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 18 22:27:45 2026
    From Newsgroup: comp.ai.philosophy

    On 18/01/2026 21:50, dart200 wrote:
    well it was developed to be a general theory of computing, and
    apparently modern computing has transcended that theory

    In what ways is that apparent to you?

    Note "computing (the field, today)" does not refer to exactly the same
    relata as "computation" or "computing (the activity, then)".

    computing to
    --
    Tristan Wibberley

    The message body is Copyright (C) 2026 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 18 22:28:45 2026
    From Newsgroup: comp.ai.philosophy

    On 18/01/2026 21:50, dart200 wrote:
    well it was developed to be a general theory of computing, and
    apparently modern computing has transcended that theory

    In what ways is that apparent to you?

    Note "computing (the field, today)" does not refer to exactly the same
    relata as "computation" or "computing (the activity, then)".

    computing today means watching netflix and laughing at cats in jars. The
    term has somewhat bleached.
    --
    Tristan Wibberley

    The message body is Copyright (C) 2026 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 18 15:01:31 2026
    From Newsgroup: comp.ai.philosophy

    On 1/18/26 2:27 PM, Tristan Wibberley wrote:
    On 18/01/2026 21:50, dart200 wrote:
    well it was developed to be a general theory of computing, and
    apparently modern computing has transcended that theory

    In what ways is that apparent to you?

    modern computing utilizes context-dependent functions, whereas turing
    machine computations cannot be context-dependent.

    like a simple total stack trace cannot be generally implemented for
    turing machines cause the top level runtime cannot be deduced by the computation. there's no mechanism to do that.
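
    a minimal Python sketch of the kind of context-dependence described
    above (the function names are hypothetical, only the idea comes from
    the thread): a function whose result depends on its caller rather
    than on its arguments, read off the interpreter's own stack via the
    standard inspect module.

    import inspect

    def who_called_me():
        # inspect.stack()[1] is the frame of the immediate caller, so
        # the return value depends on calling context, not on any
        # explicit argument
        return inspect.stack()[1].function

    def from_alice():
        return who_called_me()

    def from_bob():
        return who_called_me()

    print(from_alice())  # prints "from_alice"
    print(from_bob())    # prints "from_bob"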


    Note "computing (the field, today)" does not refer to exactly the same
    relata as "computation" or "computing (the activity, then)".

    computing to

    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 18 19:28:24 2026
    From Newsgroup: comp.ai.philosophy

    On 1/18/26 6:01 PM, dart200 wrote:
    On 1/18/26 2:27 PM, Tristan Wibberley wrote:
    On 18/01/2026 21:50, dart200 wrote:
    well it was developed to be a general theory of computing, and
    apparently modern computing has transcended that theory

    In what ways is that apparent to you?

    modern computing utilized context-dependent functions, whereas turing machine computations cannot be context-dependent.

    like a simple total stack trace cannot be generally implemented for
    turing machines cause the top level runtime cannot be deduced by the computation. there's no mechanism to do that.


    Note "computing (the field, today)" does not refer to exactly the same
    relata as "computation" or "computing (the activity, then)".

    computing to




    Since Turing Machines don't HAVE a stack, that is just a red herring.

    And yes, there IS a mechanism, if the sub part needs to know that to do
    its computation, then you pass that as part of its input or state.

    It just means you need to be EXPLICIT about what you are doing.

    Explicit helps us know exactly what is happening.

    Most subroutines shouldn't care about their caller, and if they do, it
    should be explicit.
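
    A minimal Python sketch of that point (the names and the 'mode' flag
    are hypothetical, not from the thread): the same logic written once
    against hidden state and once with the context passed in explicitly,
    so the second form is a fixed mapping from its inputs to its output.

    mode = "verbose"          # hidden context living outside the routine

    def label_hidden(x):
        # reads the global 'mode'; the same x can give different results
        return f"value={x}" if mode == "verbose" else str(x)

    def label_explicit(x, mode):
        # same mapping, but the context is now part of the input
        return f"value={x}" if mode == "verbose" else str(x)

    print(label_hidden(3))               # changes whenever 'mode' changes
    print(label_explicit(3, "verbose"))  # determined by its arguments alone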
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 18 19:28:29 2026
    From Newsgroup: comp.ai.philosophy

    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out.

    one can only hope for so much sometimes 🙏




    I guess you don't understand the rules of logic.

    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even if u haven't >>>>>>>>> understood it yet) that produces a consistent deterministic >>>>>>>>> result that is "not a computation".

    Because you get that result only by equivocating on your
    definitions.

    If the context is part of the inpt to make the output
    determistic from the input, then they fail to be usable as sub- >>>>>>>> computations as we can't control that context part of the input. >>>>>>>>
    When we look at just the controllable input for a sub-
    computation, the output is NOT a deterministic function of that >>>>>>>> inut.


    not sure what the fuck it's doing if it's not a computation

    Its using hidden inputs that the caller can't control.

    which we do all the time in normal programming, something which >>>>>>> apparently u think the tHeOrY oF CoMpUtInG fails to encapsulate

    Right, but that isn't about computations.


    pretty crazy we do a bunch "non-computating" in the normal act of >>>>>>> programming computers

    Why?

    As I have said, "Computatations" is NOT about how modern computers >>>>>> work.

    I guess you are just showing that you fundamentally don't
    understand the problem field you are betting your life on.

    one would presume the fundamental theory of computing would be
    general enough to encapsulate everything computed by real world
    computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the computer
    as you know it.

    so ur saying it's outdated and needs updating in regards to new
    things we do with computers that apparently turing machines as a
    model don't have variations of ...

    No, it still handles that which it was developed for.

    well it was developed to be a general theory of computing, and
    apparently modern computing has transcended that theory ...

    Not really.

    The way modern processors work, "sub-routines" can fail to be
    computations, but whole programs will tend to be. Sub-routines CAN
    be built with care to fall under its guidance.

    There ARE advantages to doing so, as that DOES add a lot of correctness provability to the code.

    The biggest part of code not being analyzable/provable is when it
    deviates from the requirements of being a computation.




    or what ... someone writes down a fundamental theory and then it just
    sticks around like an unchanging law when u haven't even proven the
    ct- thesis correct???

    Why does it need to change?

    why does the fundamental theory of computing need to encapsulate all
    that is possible within computing??

    That is like asking whether number theory shouldn't talk about everything in mathematics.


    idk, what's what i thot a fundamental theory is supposed to do, but i
    guess you don't agree???

    Nope, it handles ONE ASPECT of the general field.

    We not only have Computation Theory, but we also get things like
    Complexity Theory,


    like, if the fundamental theory doesn't encapsulate everything done
    within computing ... then idk why u think the halting problem should
    apply to modern computing???

    Because it DOES present a limitation of what modern computers can do.

    After all, every non-computation can be converted into a computation by forcing all the "hidden inputs" to be considered as inputs.

    This just shows the limitation in controllability of the interface.
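
    A rough Python sketch of that conversion (the greeting example is
    hypothetical, only the idea is from the post): a routine that reads
    a hidden input becomes a computation once that input is made an
    explicit parameter, at the cost that the caller must now supply it.

    import os

    def greet():
        # "non-computation": silently reads the process environment
        return "hi " + os.environ.get("USER", "unknown")

    def greet_as_computation(env):
        # same logic, with the hidden input made explicit
        return "hi " + env.get("USER", "unknown")

    print(greet_as_computation({"USER": "nick"}))   # -> hi nick
    print(greet_as_computation({}))                 # -> hi unknown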



    If a new problem comes up, a new theory might be needed to handle it.

    or maybe new techniques could rectify old problems ...

    talk about a lack of curiosity. you confusing regurgitation of route learning with actual intelligence, but i suppose that's all u need
    working for a military contractor...

    military intelligence is an oxymoron, remember?

    You might be surprised about that statement.

    You don't want a "smart bomb" locked onto you.





    All you are doing is showing your ignorance of what you are talking
    about.





    Showing that you really don't understand what you are talking >>>>>>>> about.





    It seems you just assume you are allowed to change the >>>>>>>>>>>> definition, perhaps because you never bothered to learn it. >>>>>>>>>>>>




    This is sort of like the problem with a RASP machine >>>>>>>>>>>>>>>> architecture, sub- machines on such a platform are not >>>>>>>>>>>>>>>> necessarily computations, if they use the machines >>>>>>>>>>>>>>>> capability to pass information not allowed by the rules >>>>>>>>>>>>>>>> of a computation. Your RTM similarly break that property. >>>>>>>>>>>>>>>>
    Remember, Computations are NOT just what some model of >>>>>>>>>>>>>>>> processing produce, but specifically is defined based on >>>>>>>>>>>>>>>> producing a specific mapping of input to output, so if >>>>>>>>>>>>>>>> (even as a sub- machine) a specific input might produce >>>>>>>>>>>>>>>> different output, your architecture is NOT doing a >>>>>>>>>>>>>>>> computation.

    And without that property, using what the machine could >>>>>>>>>>>>>>>> do, becomes a pretty worthless criteria, as you can't >>>>>>>>>>>>>>>> actually talk much about it.

    the output is still well-defined and deterministic at >>>>>>>>>>>>>>> runtime,

    Not from the "input" to the piece of algorithm, as it >>>>>>>>>>>>>> includes "hidden" state from outside that input stored >>>>>>>>>>>>>> elsewhere in the machine.


    context-dependent computations are still computations. >>>>>>>>>>>>>>> the fact TMs don't capture them is an indication that the >>>>>>>>>>>>>>> ct- thesis may be false


    Nope. Not unless the "context" is made part of the >>>>>>>>>>>>>> "input", and if you do that, you find that since you are >>>>>>>>>>>>>> trying to make it so the caller can't just define that >>>>>>>>>>>>>> context, your system is less than turing complete. >>>>>>>>>>>>>>
    Your system break to property of building a computation by >>>>>>>>>>>>>> the concatination of sub-computations.

    ...including a context-dependent sub-computation makes ur >>>>>>>>>>>>> overall computation context-dependent too ... if u dont >>>>>>>>>>>>> want a context- dependent computation don't include >>>>>>>>>>>>> context- dependent sub- computation.

    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.

    i'm not shifting meaning dude. i'm directly claiming it's a >>>>>>>>>>> distinct type of computation that has been ignored by the >>>>>>>>>>> theory of computing thus far

    nice try tho

    But you don't actually do that, as you then claim to be in the >>>>>>>>>> same field to solve a problem specified in the field.

    As I said, if you want to try to define a new field based on a >>>>>>>>>> new definition of what a computation is, go ahead.

    it's not a new field, it's a mild extension of turing machines, >>>>>>>>> with one new operation.

    No, it is, as you are changing essential core defintions.

    That is like saying that spherical geometery is the same field >>>>>>>> as plane geometry, we just added a small extension.

    what the did the nut say when it was all grown up???




    You need to work out your formal definition.

    Show how the system actually works out.

    Show what it can show.

    And show why anyone would want to use it.




    but in order to be complete and coherent, certain
    computations *must* have context-awareness and are
    therefore context- dependent. these computations aren't >>>>>>>>>>>>> generally computable by TMs because TMs lack the necessary >>>>>>>>>>>>> mechanisms to grant context- awareness.

    In other words, you require some computations to not be >>>>>>>>>>>> actual computations.


    unless u can produce some actual proof of some computation >>>>>>>>>>>>> that actually breaks in context-dependence, rather than >>>>>>>>>>>>> just listing things u assume are true, i won't believe u >>>>>>>>>>>>> know what ur talking about


    The definition.

    A computation produces the well defined result based on the >>>>>>>>>>>> INPUT.

    context-dependent computation simply expands it's input to >>>>>>>>>>> include the entire computing context, not just the formal >>>>>>>>>>> parameters. it's still well defined and it grants us access >>>>>>>>>>> to meta computation that is not as expressible in TM computing. >>>>>>>>>>>
    ct-thesis is cooked dude

    Nope, because you are just putting yourself outside the field >>>>>>>>>> it is written about.

    You can't change the definition of a computation, and still >>>>>>>>>> talk about things as if you were in the same system.


    That just shows you are smoking some bad weed.



    Your context, being not part of the input, can't change the >>>>>>>>>>>> well- defined result.

    Should 1 + 2 become 4 on Thursdays? of it asked of a
    gingerbread man?

    ur overgeneralizing. just become some computation is context- >>>>>>>>>>> dependent doesn't mean all computation is context-dependent. >>>>>>>>>>>
    another fallacy.

    Right, but nothing that actually is a computation can be
    context- dependent.

    ur just arguing in circles with this.

    No, you are just lying to yourself to try to disagree with the >>>>>>>> definition.





    All you are doing is saying you disagree with the definition. >>>>>>>>>>>>
    Go ahead, try to define an alternate version of Computation >>>>>>>>>>>> Theory where the result can depend on things that aren't >>>>>>>>>>>> part of the actual input to the machine, and see what you >>>>>>>>>>>> can show that is useful.

    The problem becomes that you can't really say anything about >>>>>>>>>>>> what you will get, since you don't know what the "hidden" >>>>>>>>>>>> factors are.

    ??? i was very clear multiple times over what the "hidden" >>>>>>>>>>> input was. there's nothing random about it, context-dependent >>>>>>>>>>> computation is just as well-defend and deterministic as >>>>>>>>>>> context- independent computation


    The problem is that when you look at the computation itself >>>>>>>>>> (that might be imbedded into a larger computation) you don't >>>>>>>>>> know which of the infinite contexts it might be within.

    depth is not infinite for any given step,

    I didn't say infinite depth, I said from infinite contexts.


    AND THAT'S WHERE REFLECT COMES IN: IT DUMPS THE FULL MACHINE >>>>>>>>> DESCRIPTION OF THE RUNNING MACHINE, THE CURRENT STATE NUMBER, >>>>>>>>> AND A FULL COPY OF THE TAPE ...

    And WHICH machine description does it dump? The problem is the >>>>>>>> machine description isn't unique.


    all the info required to compute all configurations between the >>>>>>>>> beginning and the current step of the computation, which can >>>>>>>>> allow it to compute anything that is "knowable" about where it >>>>>>>>> is in the computation at time of the REFLECT operation...

    And where did it store that information?

    Remember, the starting tape was unbounded in length (but finite). >>>>>>>>
    The machine itself is bounded in size, plus the unbounded tape. >>>>>>>>

    the problem is ur literally not reading what i'm writing to an >>>>>>>>> appreciable degree of comprehension, being too focused on
    isolated responses that lack overall *contextual* awareness of >>>>>>>>> the conversation...

    No, you are ignoring the requirements to implement what you desire. >>>>>>>>




    Thus, what you can say about that "computation" is very limited. >>>>>>>>>>
    You don't seem to understand that a key point of the theory is >>>>>>>>>> about being able to build complicate things from simpler pieces. >>>>>>>>>>
    It comes out of how logic works, we build complicated theories >>>>>>>>>> based on simpler theories and the axioms. If those simplere >>>>>>>>>> things were "context dependent" it makes it much harder for >>>>>>>>>> them to specifiy what they actually do in all contexts, and to >>>>>>>>>> then use them in all contexts.

    i'm sorry context-dependent computation aren't as simple 🫩🫩🫩 >>>>>>>>
    Which is why you need to actually FULLY DEFINE them, and admit >>>>>>>> it is a new field.

    well it'd be great if someone fucking helped me out there, but
    all i get is a bunch adversarial dismissal cause i'm stuck on a >>>>>>> god forsaking planet of a fucking half-braindead clowns

    Help has been offered,

    not constructively, ur barely even paying attention to what i write

    No, perhaps the problem is I assume you at least attempt to learn
    what you are trying to talk about.

    u don't even know what constructive help is to be frank

    i'm not ur student, ur not my teachers, this isn't a hierarchical
    relationship,

    and until u recognize that ur going to continue to be non-constructive

    YOU were the one asking for help to develop your ideas, if only by
    posting them and asking for comments.

    I have just pointed out the fundamental errors in your analysis

    You need to make a choice of directions.

    Either you work in the currently established theory, so you can use
    things in it, and see if you can develop something new.

    Or, you branch out and start a brand new theory, and start at the
    ground floor, fully define what you mean by things, show what you
    ideas can do, and why that would be useful.

    false dichotomy, add that to growing list of fallacies u shat out at me

    No real dichotomy.

    Follow the rules and you can stay in the system.

    Change anything and you are outside, and need to show what still can apply.

    To say you can change the foundation but keep the building is just lying.



    It seems you want to change the foundation, but keep most of the
    building on top, without even knowing how that building was built and
    how it connects to the foundation.

    That just doesn't work.





    but you just reject it as it doesn't match your ideas, largely
    because you don't understand what you are trying to get in.

    When your errors are explained, you just curse back.

    I can't fix stupid.

    You aren't stuck on a planet of clowns, you are the clown that
    doesn't understand the world.




    if the simplest theory was always correct we'd still be using >>>>>>>>> newtonian gravity for everything


    You can't change a thing and it still be the same thing.

    I guess that truth is something you don't understand










    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 18 20:30:46 2026
    From Newsgroup: comp.ai.philosophy

    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 6:01 PM, dart200 wrote:
    On 1/18/26 2:27 PM, Tristan Wibberley wrote:
    On 18/01/2026 21:50, dart200 wrote:
    well it was developed to be a general theory of computing, and
    apparently modern computing has transcended that theory

    In what ways is that apparent to you?

    modern computing utilized context-dependent functions, whereas turing
    machine computations cannot be context-dependent.

    like a simple total stack trace cannot be generally implemented for
    turing machines cause the top level runtime cannot be deduced by the
    computation. there's no mechanism to do that.


    Note "computing (the field, today)" does not refer to exactly the same
    relata as "computation" or "computing (the activity, then)".

    computing to




    Since Turing Machines don't HAVE a stack, that is just a red herring.

    it's amazing how retared one can be even when technically correct


    And yes, there IS a mechanism, if the sub part needs to know that to do
    its computation, then you pass that as part of its input or state.

    It just means you need to be EXPLICIT about what you are doing.

    Explicit helps us know exactly what is happening.

    that's a design choice, sure


    Most subroutines shouldn't care about their caller, and if they do, it should be explicit.

    but if i want to care generally, i can't do that because turing machines
    don't have the mechanisms in place to ensure i have theoretically robust access to whatever would be the stack trace equivalent
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 18 20:51:11 2026
    From Newsgroup: comp.ai.philosophy

    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out.

    one can only hope for so much sometimes 🙏




    I guess you don't understand the rules of logic.

    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even if u haven't >>>>>>>>>> understood it yet) that produces a consistent deterministic >>>>>>>>>> result that is "not a computation".

    Because you get that result only by equivocating on your
    definitions.

    If the context is part of the inpt to make the output
    determistic from the input, then they fail to be usable as sub- >>>>>>>>> computations as we can't control that context part of the input. >>>>>>>>>
    When we look at just the controllable input for a sub-
    computation, the output is NOT a deterministic function of that >>>>>>>>> inut.


    not sure what the fuck it's doing if it's not a computation >>>>>>>>>
    Its using hidden inputs that the caller can't control.

    which we do all the time in normal programming, something which >>>>>>>> apparently u think the tHeOrY oF CoMpUtInG fails to encapsulate >>>>>>>
    Right, but that isn't about computations.


    pretty crazy we do a bunch "non-computating" in the normal act >>>>>>>> of programming computers

    Why?

    As I have said, "Computatations" is NOT about how modern
    computers work.

    I guess you are just showing that you fundamentally don't
    understand the problem field you are betting your life on.

    one would presume the fundamental theory of computing would be
    general enough to encapsulate everything computed by real world
    computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the computer >>>>> as you know it.

    so ur saying it's outdated and needs updating in regards to new
    things we do with computers that apparently turing machines as a
    model don't have variations of ...

    No, it still handles that which it was developed for.

    well it was developed to be a general theory of computing, and
    apparently modern computing has transcended that theory ...

    Not really.

    THe way modern processors work, "sub-routines" can fail to be
    computations, but whole programs will tend to be. Sub-routines CAN be
    built with care to fall under its guidance.

    lol, what are they even if not "computations"???


    THere ARE advantages to doing so, as that DOES add a lot of correctness provability to the code.

    The biggest part of code not being analyzable/provable is when it
    deviates from the requirements of being a computation.




    or what ... someone writes down a fundamental theory and then it
    just sticks around like an unchanging law when u haven't even proven
    the ct- thesis correct???

    Why does it need to change?

    why does the fundamental theory of computing need to encapsulate all
    that is possible within computing??

    That is like asking about shouldn't number theory talk about everything mathematics.


    idk, what's what i thot a fundamental theory is supposed to do, but i
    guess you don't agree???

    Nope, it handles ONE ASPECT of the general field.

    We not only have Computation Theory, but we also get things like
    Complexity Theory,

    complexity theory is built on top of the fundamentals of computing ...



    like, if the fundamental theory doesn't encapsulate everything done
    within computing ... then idk why u think the halting problem should
    apply to modern computing???

    Because it DOES present a limitation of what modern computers can do.

    After all, every non-computation can be converted into a computation by forcing all the "hidden inputs" to be considered as inputs.

    lol schrodinger's computation


    This just shows the limitation in controlability of the interface.



    If a new problem comes up, a new theory might be needed to handle it.

    or maybe new techniques could rectify old problems ...

    talk about a lack of curiosity. you confusing regurgitation of route
    learning with actual intelligence, but i suppose that's all u need
    working for a military contractor...

    military intelligence is an oxymoron, remember?

    You might be surprised about that statement.

    You don't want a "smart bomb" locked onto you.

    they also don't want that if they know what's best for them






    All you are doing is showing your ignorance of what you are talking >>>>> about.





    Showing that you really don't understand what you are talking >>>>>>>>> about.





    It seems you just assume you are allowed to change the >>>>>>>>>>>>> definition, perhaps because you never bothered to learn it. >>>>>>>>>>>>>




    This is sort of like the problem with a RASP machine >>>>>>>>>>>>>>>>> architecture, sub- machines on such a platform are not >>>>>>>>>>>>>>>>> necessarily computations, if they use the machines >>>>>>>>>>>>>>>>> capability to pass information not allowed by the rules >>>>>>>>>>>>>>>>> of a computation. Your RTM similarly break that property. >>>>>>>>>>>>>>>>>
    Remember, Computations are NOT just what some model of >>>>>>>>>>>>>>>>> processing produce, but specifically is defined based >>>>>>>>>>>>>>>>> on producing a specific mapping of input to output, so >>>>>>>>>>>>>>>>> if (even as a sub- machine) a specific input might >>>>>>>>>>>>>>>>> produce different output, your architecture is NOT >>>>>>>>>>>>>>>>> doing a computation.

    And without that property, using what the machine could >>>>>>>>>>>>>>>>> do, becomes a pretty worthless criteria, as you can't >>>>>>>>>>>>>>>>> actually talk much about it.

    the output is still well-defined and deterministic at >>>>>>>>>>>>>>>> runtime,

    Not from the "input" to the piece of algorithm, as it >>>>>>>>>>>>>>> includes "hidden" state from outside that input stored >>>>>>>>>>>>>>> elsewhere in the machine.


    context-dependent computations are still computations. >>>>>>>>>>>>>>>> the fact TMs don't capture them is an indication that >>>>>>>>>>>>>>>> the ct- thesis may be false


    Nope. Not unless the "context" is made part of the >>>>>>>>>>>>>>> "input", and if you do that, you find that since you are >>>>>>>>>>>>>>> trying to make it so the caller can't just define that >>>>>>>>>>>>>>> context, your system is less than turing complete. >>>>>>>>>>>>>>>
    Your system break to property of building a computation >>>>>>>>>>>>>>> by the concatination of sub-computations.

    ...including a context-dependent sub-computation makes ur >>>>>>>>>>>>>> overall computation context-dependent too ... if u dont >>>>>>>>>>>>>> want a context- dependent computation don't include >>>>>>>>>>>>>> context- dependent sub- computation.

    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.

    i'm not shifting meaning dude. i'm directly claiming it's a >>>>>>>>>>>> distinct type of computation that has been ignored by the >>>>>>>>>>>> theory of computing thus far

    nice try tho

    But you don't actually do that, as you then claim to be in >>>>>>>>>>> the same field to solve a problem specified in the field. >>>>>>>>>>>
    As I said, if you want to try to define a new field based on >>>>>>>>>>> a new definition of what a computation is, go ahead.

    it's not a new field, it's a mild extension of turing
    machines, with one new operation.

    No, it is, as you are changing essential core defintions.

    That is like saying that spherical geometery is the same field >>>>>>>>> as plane geometry, we just added a small extension.

    what the did the nut say when it was all grown up???




    You need to work out your formal definition.

    Show how the system actually works out.

    Show what it can show.

    And show why anyone would want to use it.




    but in order to be complete and coherent, certain >>>>>>>>>>>>>> computations *must* have context-awareness and are >>>>>>>>>>>>>> therefore context- dependent. these computations aren't >>>>>>>>>>>>>> generally computable by TMs because TMs lack the necessary >>>>>>>>>>>>>> mechanisms to grant context- awareness.

    In other words, you require some computations to not be >>>>>>>>>>>>> actual computations.


    unless u can produce some actual proof of some computation >>>>>>>>>>>>>> that actually breaks in context-dependence, rather than >>>>>>>>>>>>>> just listing things u assume are true, i won't believe u >>>>>>>>>>>>>> know what ur talking about


    The definition.

    A computation produces the well defined result based on the >>>>>>>>>>>>> INPUT.

    context-dependent computation simply expands it's input to >>>>>>>>>>>> include the entire computing context, not just the formal >>>>>>>>>>>> parameters. it's still well defined and it grants us access >>>>>>>>>>>> to meta computation that is not as expressible in TM computing. >>>>>>>>>>>>
    ct-thesis is cooked dude

    Nope, because you are just putting yourself outside the field >>>>>>>>>>> it is written about.

    You can't change the definition of a computation, and still >>>>>>>>>>> talk about things as if you were in the same system.


    That just shows you are smoking some bad weed.



    Your context, being not part of the input, can't change the >>>>>>>>>>>>> well- defined result.

    Should 1 + 2 become 4 on Thursdays? of it asked of a >>>>>>>>>>>>> gingerbread man?

    ur overgeneralizing. just become some computation is
    context- dependent doesn't mean all computation is context- >>>>>>>>>>>> dependent.

    another fallacy.

    Right, but nothing that actually is a computation can be >>>>>>>>>>> context- dependent.

    ur just arguing in circles with this.

    No, you are just lying to yourself to try to disagree with the >>>>>>>>> definition.





    All you are doing is saying you disagree with the definition. >>>>>>>>>>>>>
    Go ahead, try to define an alternate version of Computation >>>>>>>>>>>>> Theory where the result can depend on things that aren't >>>>>>>>>>>>> part of the actual input to the machine, and see what you >>>>>>>>>>>>> can show that is useful.

    The problem becomes that you can't really say anything >>>>>>>>>>>>> about what you will get, since you don't know what the >>>>>>>>>>>>> "hidden" factors are.

    ??? i was very clear multiple times over what the "hidden" >>>>>>>>>>>> input was. there's nothing random about it, context-
    dependent computation is just as well-defend and
    deterministic as context- independent computation


    The problem is that when you look at the computation itself >>>>>>>>>>> (that might be imbedded into a larger computation) you don't >>>>>>>>>>> know which of the infinite contexts it might be within.

    depth is not infinite for any given step,

    I didn't say infinite depth, I said from infinite contexts.


    AND THAT'S WHERE REFLECT COMES IN: IT DUMPS THE FULL MACHINE >>>>>>>>>> DESCRIPTION OF THE RUNNING MACHINE, THE CURRENT STATE NUMBER, >>>>>>>>>> AND A FULL COPY OF THE TAPE ...

    And WHICH machine description does it dump? The problem is the >>>>>>>>> machine description isn't unique.


    all the info required to compute all configurations between >>>>>>>>>> the beginning and the current step of the computation, which >>>>>>>>>> can allow it to compute anything that is "knowable" about >>>>>>>>>> where it is in the computation at time of the REFLECT
    operation...

    And where did it store that information?

    Remember, the starting tape was unbounded in length (but finite). >>>>>>>>>
    The machine itself is bounded in size, plus the unbounded tape. >>>>>>>>>

    the problem is ur literally not reading what i'm writing to an >>>>>>>>>> appreciable degree of comprehension, being too focused on >>>>>>>>>> isolated responses that lack overall *contextual* awareness of >>>>>>>>>> the conversation...

    No, you are ignoring the requirements to implement what you >>>>>>>>> desire.





    Thus, what you can say about that "computation" is very limited. >>>>>>>>>>>
    You don't seem to understand that a key point of the theory >>>>>>>>>>> is about being able to build complicate things from simpler >>>>>>>>>>> pieces.

    It comes out of how logic works, we build complicated
    theories based on simpler theories and the axioms. If those >>>>>>>>>>> simplere things were "context dependent" it makes it much >>>>>>>>>>> harder for them to specifiy what they actually do in all >>>>>>>>>>> contexts, and to then use them in all contexts.

    i'm sorry context-dependent computation aren't as simple 🫩🫩🫩

    Which is why you need to actually FULLY DEFINE them, and admit >>>>>>>>> it is a new field.

    well it'd be great if someone fucking helped me out there, but >>>>>>>> all i get is a bunch adversarial dismissal cause i'm stuck on a >>>>>>>> god forsaking planet of a fucking half-braindead clowns

    Help has been offered,

    not constructively, ur barely even paying attention to what i write >>>>>
    No, perhaps the problem is I assume you at least attempt to learn
    what you are trying to talk about.

    u don't even know what constructive help is to be frank

    i'm not ur student, ur not my teachers, this isn't a hierarchical
    relationship,

    and until u recognize that ur going to continue to be non-constructive

    YOU were the one asking for help to develop your ideas, if only by
    posting them and asking for comments.

    I have just pointed out the fundamental errors in your analysis

    You need to make a choice of directions.

    Either you work in the currently established theory, so you can use
    things in it, and see if you can develop something new.

    Or, you branch out and start a brand new theory, and start at the
    ground floor, fully define what you mean by things, show what you
    ideas can do, and why that would be useful.

    false dichotomy, add that to growing list of fallacies u shat out at me

    No real dichotomy.

    no, i don't have to totally rewrite the system to transcend a few
    classical limits.


    Follow the rules and you can stay in the system.

    Change anything and you are outside, and need to show what still can apply.

    To say you can change the foundation but keep the building is just lying.

    you can in fact replace foundation without even lifting the house bro

    i'm not invalidating most of computing, just gunning for a few classical limits that don't actually do anything interesting anyways. not really
    sure why people are so bent up about them




    It seems you want to change the foundation, but keep most of the
    building on top, without even knowing how that building was built and
    how it connects to the foundation.

    That just doesn't work.





    but you just reject it as it doesn't match your ideas, largely
    because you don't understand what you are trying to get in.

    When your errors are explained, you just curse back.

    I can't fix stupid.

    You aren't stuck on a planet of clowns, you are the clown that
    doesn't understand the world.




    if the simplest theory was always correct we'd still be using >>>>>>>>>> newtonian gravity for everything


    You can't change a thing and it still be the same thing.

    I guess that truth is something you don't understand










    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 20 00:29:29 2026
    From Newsgroup: comp.ai.philosophy

    On 1/18/26 11:30 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 6:01 PM, dart200 wrote:
    On 1/18/26 2:27 PM, Tristan Wibberley wrote:
    On 18/01/2026 21:50, dart200 wrote:
    well it was developed to be a general theory of computing, and
    apparently modern computing has transcended that theory

    In what ways is that apparent to you?

    modern computing utilized context-dependent functions, whereas turing
    machine computations cannot be context-dependent.

    like a simple total stack trace cannot be generally implemented for
    turing machines cause the top level runtime cannot be deduced by the
    computation. there's no mechanism to do that.


    Note "computing (the field, today)" does not refer to exactly the same >>>> relata as "computation" or "computing (the activity, then)".

    computing to




    Since Turing Machines don't HAVE a stack, that is just a red herring.

    it's amazing how retared one can be even when technically correct

    And how stupid you are by showing you are just incorrect.



    And yes, there IS a mechanism, if the sub part needs to know that to
    do its computation, then you pass that as part of its input or state.

    It just means you need to be EXPLICIT about what you are doing.

    Explicit helps us know exactly what is happening.

    that's a design choice, sure

    One that helps with being correct.



    Most subroutines shouldn't care about their caller, and if they do, it
    should be explicit.

    but if want to care generally, i can't do that because turing machines
    don't have the mechanisms in place to ensure i have theoretically robust access to whatever would be the stack trace equivalent


    That is the problem, there ISN'T an equivalent.

    Since code reuse is done by code duplication, the current state shows
    the equivalent of the stack trace, as each "parent" call to a given
    "function" creates its own copy of that function.

    Since computability isn't about efficiency or resource usage, the
    duplication isn't a problem.
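
    A small Python analogy for that duplication idea (the function names
    are made up, and a real Turing machine would duplicate states rather
    than Python functions): when each caller gets its own copy of a
    helper, the copy that is executing already identifies its caller, so
    no separate stack trace is needed.

    def square_for_area(x):       # the copy of 'square' used by area()
        return x * x

    def square_for_energy(x):     # the copy used by kinetic_energy()
        return x * x

    def area(side):
        return square_for_area(side)

    def kinetic_energy(mass, v):
        return 0.5 * mass * square_for_energy(v)

    # square_for_area only ever runs on behalf of area(), and
    # square_for_energy only on behalf of kinetic_energy()
    print(area(3), kinetic_energy(2.0, 3))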
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 20 00:29:30 2026
    From Newsgroup: comp.ai.philosophy

    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out.

    one can only hope for so much sometimes 🙏




    I guess you don't understand the rules of logic.

    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even if u haven't >>>>>>>>>>> understood it yet) that produces a consistent deterministic >>>>>>>>>>> result that is "not a computation".

    Because you get that result only by equivocating on your
    definitions.

    If the context is part of the inpt to make the output
    determistic from the input, then they fail to be usable as >>>>>>>>>> sub- computations as we can't control that context part of the >>>>>>>>>> input.

    When we look at just the controllable input for a sub-
    computation, the output is NOT a deterministic function of >>>>>>>>>> that inut.


    not sure what the fuck it's doing if it's not a computation >>>>>>>>>>
    Its using hidden inputs that the caller can't control.

    which we do all the time in normal programming, something which >>>>>>>>> apparently u think the tHeOrY oF CoMpUtInG fails to encapsulate >>>>>>>>
    Right, but that isn't about computations.


    pretty crazy we do a bunch "non-computating" in the normal act >>>>>>>>> of programming computers

    Why?

    As I have said, "Computatations" is NOT about how modern
    computers work.

    I guess you are just showing that you fundamentally don't
    understand the problem field you are betting your life on.

    one would presume the fundamental theory of computing would be
    general enough to encapsulate everything computed by real world >>>>>>> computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the
    computer as you know it.

    so ur saying it's outdated and needs updating in regards to new
    things we do with computers that apparently turing machines as a
    model don't have variations of ...

    No, it still handles that which it was developed for.

    well it was developed to be a general theory of computing, and
    apparently modern computing has transcended that theory ...

    Not really.

    THe way modern processors work, "sub-routines" can fail to be
    computations, but whole programs will tend to be. Sub-routines CAN be
    built with care to fall under its guidance.

    lol, what are they even if not "computations"???

    not-computations



    THere ARE advantages to doing so, as that DOES add a lot of
    correctness provability to the code.

    The biggest part of code not being analyzable/provable is when it
    deviates from the requirements of being a computation.




    or what ... someone writes down a fundamental theory and then it
    just sticks around like an unchanging law when u haven't even
    proven the ct- thesis correct???

    Why does it need to change?

    why does the fundamental theory of computing need to encapsulate all
    that is possible within computing??

    That is like asking about shouldn't number theory talk about
    everything mathematics.


    idk, what's what i thot a fundamental theory is supposed to do, but i
    guess you don't agree???

    Nope, it handles ONE ASPECT of the general field.

    We not only have Computation Theory, but we also get things like
    Complexity Theory,

    complexity theory is built on top of the fundamentals of computing ...

    Yes, just like computability/computation theory.

    The field of "Computer Science" has a bunch of subfields/theories within it.

    You seem to confuse Computation Theory with the fundamentals of computing.



    like, if the fundamental theory doesn't encapsulate everything done
    within computing ... then idk why u think the halting problem should
    apply to modern computing???

    Because it DOES present a limitation of what modern computers can do.

    After all, every non-computation can be converted into a computation
    by forcing all the "hidden inputs" to be considered as inputs.

    lol schrodinger's computation

    Model conversion.



    This just shows the limitation in controlability of the interface.



    If a new problem comes up, a new theory might be needed to handle it.

    or maybe new techniques could rectify old problems ...

    talk about a lack of curiosity. you confusing regurgitation of route
    learning with actual intelligence, but i suppose that's all u need
    working for a military contractor...

    military intelligence is an oxymoron, remember?

    You might be surprised about that statement.

    You don't want a "smart bomb" locked onto you.

    they also don't want that if they know what's best for them






    All you are doing is showing your ignorance of what you are
    talking about.





    Showing that you really don't understand what you are talking >>>>>>>>>> about.





    It seems you just assume you are allowed to change the >>>>>>>>>>>>>> definition, perhaps because you never bothered to learn it. >>>>>>>>>>>>>>




    This is sort of like the problem with a RASP machine architecture:
    sub-machines on such a platform are not necessarily computations, if
    they use the machine's capability to pass information not allowed by
    the rules of a computation. Your RTM similarly breaks that property.

    Remember, Computations are NOT just what some model of processing
    produces, but are specifically defined based on producing a specific
    mapping of input to output, so if (even as a sub-machine) a specific
    input might produce different output, your architecture is NOT doing
    a computation.

    And without that property, using what the machine could do becomes a
    pretty worthless criterion, as you can't actually talk much about it.

    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes "hidden"
    state from outside that input stored elsewhere in the machine.

    context-dependent computations are still computations. the fact TMs
    don't capture them is an indication that the ct-thesis may be false

    Nope. Not unless the "context" is made part of the "input", and if you
    do that, you find that since you are trying to make it so the caller
    can't just define that context, your system is less than turing complete.

    Your system breaks the property of building a computation by the
    concatenation of sub-computations.

    ...including a context-dependent sub-computation makes ur overall
    computation context-dependent too ... if u dont want a context-dependent
    computation don't include a context-dependent sub-computation.

    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.

    i'm not shifting meaning dude. i'm directly claiming it's a distinct
    type of computation that has been ignored by the theory of computing
    thus far

    nice try tho

    But you don't actually do that, as you then claim to be in the same
    field to solve a problem specified in the field.

    As I said, if you want to try to define a new field based on a new
    definition of what a computation is, go ahead.

    it's not a new field, it's a mild extension of turing
    machines, with one new operation.

    No, it is, as you are changing essential core definitions.

    That is like saying that spherical geometry is the same field as plane
    geometry, we just added a small extension.

    what did the nut say when it was all grown up???




    You need to work out your formal definition.

    Show how the system actually works out.

    Show what it can show.

    And show why anyone would want to use it.




    but in order to be complete and coherent, certain computations *must*
    have context-awareness and are therefore context-dependent. these
    computations aren't generally computable by TMs because TMs lack the
    necessary mechanisms to grant context-awareness.

    In other words, you require some computations to not be actual
    computations.

    unless u can produce some actual proof of some computation that
    actually breaks in context-dependence, rather than just listing things
    u assume are true, i won't believe u know what ur talking about

    The definition.

    A computation produces the well defined result based on the INPUT.

    context-dependent computation simply expands its input to include the
    entire computing context, not just the formal parameters. it's still
    well defined and it grants us access to meta computation that is not
    as expressible in TM computing.

    ct-thesis is cooked dude

    Nope, because you are just putting yourself outside the field it is
    written about.

    You can't change the definition of a computation, and still talk about
    things as if you were in the same system.


    That just shows you are smoking some bad weed.



    Your context, being not part of the input, can't change the
    well-defined result.

    Should 1 + 2 become 4 on Thursdays? Or if asked of a gingerbread man?

    ur overgeneralizing. just because some computation is context-dependent
    doesn't mean all computation is context-dependent.

    another fallacy.

    Right, but nothing that actually is a computation can be
    context-dependent.

    ur just arguing in circles with this.

    No, you are just lying to yourself to try to disagree with the
    definition.





    All you are doing is saying you disagree with the definition.

    Go ahead, try to define an alternate version of Computation Theory
    where the result can depend on things that aren't part of the actual
    input to the machine, and see what you can show that is useful.

    The problem becomes that you can't really say anything about what you
    will get, since you don't know what the "hidden" factors are.

    ??? i was very clear multiple times over what the "hidden" input was.
    there's nothing random about it, context-dependent computation is just
    as well-defined and deterministic as context-independent computation

    The problem is that when you look at the computation itself (that
    might be imbedded into a larger computation) you don't know which of
    the infinite contexts it might be within.

    depth is not infinite for any given step,

    I didn't say infinite depth, I said from infinite contexts.

    AND THAT'S WHERE REFLECT COMES IN: IT DUMPS THE FULL MACHINE
    DESCRIPTION OF THE RUNNING MACHINE, THE CURRENT STATE NUMBER, AND A
    FULL COPY OF THE TAPE ...

    And WHICH machine description does it dump? The problem is the machine
    description isn't unique.

    all the info required to compute all configurations between the
    beginning and the current step of the computation, which can allow it
    to compute anything that is "knowable" about where it is in the
    computation at time of the REFLECT operation...
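
    One way that step could be sketched in Python, as a guess at what is
    being described (the dict-of-transitions encoding, the REFLECT action
    name, and the resume-in-the-next-state convention are all assumptions
    here, not anything formally specified in the thread):

        def run(machine, tape, state=0, blank="_", max_steps=100):
            # machine: dict mapping (state, symbol) -> (write, move, next_state),
            # or the special action "REFLECT".
            tape, head = list(tape) or [blank], 0
            for _ in range(max_steps):
                action = machine.get((state, tape[head]))
                if action is None:            # no transition defined: halt
                    break
                if action == "REFLECT":
                    # Dump the machine description, current state number, and a
                    # full copy of the tape, appended past the current work area.
                    snapshot = repr((machine, state, tape[:]))
                    tape.extend(snapshot)
                    state += 1                # assumed convention: resume in the next state
                    continue
                write, move, state = action   # ordinary Turing-machine step
                tape[head] = write
                head += 1 if move == "R" else -1
                if head < 0:                  # keep the head on the tape
                    head, tape = 0, [blank] + tape
                elif head == len(tape):
                    tape.append(blank)
            return "".join(tape)

        # Write a "1", move right, REFLECT on the "0", then halt.
        demo = {(0, "_"): ("1", "R", 0), (0, "0"): "REFLECT"}
        print(run(demo, "_0"))  # "10" followed by the dumped (machine, state, tape) text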

    And where did it store that information?

    Remember, the starting tape was unbounded in length (but finite).

    The machine itself is bounded in size, plus the unbounded tape.

    the problem is ur literally not reading what i'm writing to an
    appreciable degree of comprehension, being too focused on isolated
    responses that lack overall *contextual* awareness of the conversation...

    No, you are ignoring the requirements to implement what you desire.





    Thus, what you can say about that "computation" is very limited.

    You don't seem to understand that a key point of the theory is about
    being able to build complicated things from simpler pieces.

    It comes out of how logic works, we build complicated theories based
    on simpler theories and the axioms. If those simpler things were
    "context dependent" it makes it much harder for them to specify what
    they actually do in all contexts, and to then use them in all contexts.

    i'm sorry context-dependent computation aren't as simple 🫩🫩🫩

    Which is why you need to actually FULLY DEFINE them, and admit it is a
    new field.

    well it'd be great if someone fucking helped me out there, but all i
    get is a bunch of adversarial dismissal cause i'm stuck on a god
    forsaken planet of fucking half-braindead clowns

    Help has been offered,

    not constructively, ur barely even paying attention to what i write

    No, perhaps the problem is I assume you at least attempt to learn what
    you are trying to talk about.

    u don't even know what constructive help is to be frank

    i'm not ur student, ur not my teachers, this isn't a hierarchical
    relationship,

    and until u recognize that ur going to continue to be non-constructive
    YOU were the one asking for help to develop your ideas, if only by
    posting them and asking for comments.

    I have just pointed out the fundamental errors in your analysis

    You need to make a choice of directions.

    Either you work in the currently established theory, so you can use
    things in it, and see if you can develop something new.

    Or, you branch out and start a brand new theory, and start at the
    ground floor, fully define what you mean by things, show what your
    ideas can do, and why that would be useful.

    false dichotomy, add that to growing list of fallacies u shat out at me

    No real dichotomy.

    no, i don't have to totally rewrite the system to transcend a few
    classical limits.

    Sure you do. You need to figure out what might have changed.

    Remove the first floor of your building and see what happens.



    Follow the rules and you can stay in the system.

    Change anything and you are outside, and need to show what still can
    apply.

    To say you can change the foundation but keep the building is just lying.

    you can in fact replace foundation without even lifting the house bro

    Not in logic.

    I guess you don't understand the use of figures of speech.


    i'm not invalidating most of computing, just gunning for a few classical limits that don't actually do anything interesting anyways. not really
    sure why people are so bent up about them

    And, if you don't understand what those changes do, you don't know if
    your system is valid.





    It seems you want to change the foundation, but keep most of the
    building on top, without even knowing how that building was built
    and how it connects to the foundation.

    That just doesn't work.





    but you just reject it as it doesn't match your ideas, largely because
    you don't understand what you are trying to get in.

    When your errors are explained, you just curse back.

    I can't fix stupid.

    You aren't stuck on a planet of clowns, you are the clown that doesn't
    understand the world.




    if the simplest theory was always correct we'd still be using
    newtonian gravity for everything

    You can't change a thing and it still be the same thing.

    I guess that truth is something you don't understand













    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 19 22:18:59 2026
    From Newsgroup: comp.ai.philosophy

    On 1/19/26 9:29 PM, Richard Damon wrote:
    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result but is
    somehow not a computation!

    fucking dick is just pulling shit out of his ass, 🤮🤮🤮




    Sure you do. You need to figure out what might have changed.

    nothing about this change affects computation without REFLECT ... so
    everything we already could compute is still computable.

    that fact that's not obvious to you is just u being willfully ignorant
    at this point.


    Remove the first floor of your building and see what happens.

    false analogy! wow, another fallacy!




    I guess you don't understand the use of figures of speech.

    or i just don't care for ur false analogy



    And, if you don't understand what those changes do, you don't know if
    your system is valid.

    ur just commenting on how little u've tried to understand it






    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 20 07:59:08 2026
    From Newsgroup: comp.ai.philosophy

    On 1/20/26 1:18 AM, dart200 wrote:
    great, a set of deterministic steps that produces a result but is
    somehow not a computation!

    Because it isn't deterministically based on the INPUT, but includes
    other "unknown" factors.

    The key point is that a computation always gives the same answer for a
    given input; if it doesn't, it can't be a computation.

    If you can't control the whole input, it isn't as useful, if it has
    any usefulness at all.
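
    That criterion is easy to state as a (purely illustrative) Python
    check: replay a routine on the same input a few times and see whether
    the answer is a function of that input alone. It only catches
    non-determinism probabilistically, but it makes the point.

        import random

        def is_function_of_input(routine, inputs, trials=5):
            for x in inputs:
                if len({routine(x) for _ in range(trials)}) > 1:
                    return False              # same input, different answers
            return True

        square = lambda x: x * x                      # a computation
        noisy = lambda x: x + random.randint(0, 9)    # depends on a hidden factor

        print(is_function_of_input(square, range(5)))  # True
        print(is_function_of_input(noisy, range(5)))   # almost certainly False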


    fucking dick is just pulling shit out of his ass, 🤮🤮🤮


    It seems you are stuffing yours with shit.





    nothing about this change affects computation without REFLECT ... so
    everything we already could compute is still computable.

    But only if you DON'T use reflect.


    that fact that's not obvious to you is just u being willfully ignorant
    at this point.

    The problem is, once your "machine" definition can do non-computations,
    you can't assume it does a computation, and thus your guarantees go
    away, so you can say less about what it does.



    Remove the first floor of your building and see what happens.

    false analogy! wow, another fallacy!

    Nope, that is EXACTLY what changing a foundational rule without seeing
    what it supported does.

    I guess you don't understand cause and effect.





    or i just don't care for ur false analogy

    In other words, you don't understand what an analogy is.

    Too bad you are dooming yourself and your wife to starvation.




    ur just commenting on how little u've tried to understand it

    I'm trying to get you off the wrong track.

    What would you do if you saw someone cutting the branch they were
    sitting on, being outside the cut they were making?







    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 20 17:55:22 2026
    From Newsgroup: comp.ai.philosophy

    On 1/20/26 4:59 AM, Richard Damon wrote:
    great, a set of deterministic steps that produces a result but is
    somehow not a computation!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    it may or may not have an input, and in fact the entirety of turing
    machine computing can be expressed by enumerating only the turing
    machines that do NOT take input.
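
    That folding-in is easy to illustrate (a sketch only; "machine" here is
    just a Python callable, not a formal TM encoding): any machine-plus-input
    pair can be traded for a no-input machine with the input baked in, so
    enumerating only the no-input machines still covers every combination.

        def bake_in(machine, w):
            # Return a zero-input "machine" that behaves as machine run on w.
            return lambda: machine(w)

        double = lambda tape: tape + tape   # stand-in for some machine M
        m_w = bake_in(double, "abab")       # the corresponding no-input machine M_w

        print(double("abab"))   # 'abababab' -- M run on input w
        print(m_w())            # 'abababab' -- M_w run on nothing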

    but includes other "unknown" factors.

    lol, so when u print a stack trace, u consider those factors "unknown"?


    The key point is that a computation always gives the same answer for a
    given input, if it doesn't, it can't be a computation.

    If you can't control the whole input, it isn't as useful, if it has any usefullness at all.


    fucking dick is just pulling shit out of his ass, 🤮🤮🤮

    It seems you are stuffing yours with shit.





    THere ARE advantages to doing so, as that DOES add a lot of
    correctness provability to the code.

    The biggest part of code not being analyzable/provable is when it
    deviates from the requirements of being a computation.




    or what ... someone writes down a fundamental theory and then it >>>>>>>> just sticks around like an unchanging law when u haven't even >>>>>>>> proven the ct- thesis correct???

    Why does it need to change?

    why does the fundamental theory of computing need to encapsulate
    all that is possible within computing??

    That is like asking about shouldn't number theory talk about
    everything mathematics.


    idk, what's what i thot a fundamental theory is supposed to do,
    but i guess you don't agree???

    Nope, it handles ONE ASPECT of the general field.

    We not only have Computation Theory, but we also get things like
    Complexity Theory,

    complexity theory is built on top of the fundamentals of computing ...

    Yes, just like computability/comptation theory.

    The field of "Computer Science" has a bunch of subfields/theories
    within it.

    You seem to confuse Computation THeory with fundamental of computing.



    like, if the fundamental theory doesn't encapsulate everything
    done within computing ... then idk why u think the halting problem >>>>>> should apply to modern computing???

    Because it DOES present a limitation of what modern computers can do. >>>>>
    After all, every non-computation can be converted into a
    computation by forcing all the "hidden inputs" to be considered as
    inputs.

    lol schrodinger's computation

    Model conversion.



    This just shows the limitation in controlability of the interface.



    If a new problem comes up, a new theory might be needed to handle >>>>>>> it.

    or maybe new techniques could rectify old problems ...

    talk about a lack of curiosity. you confusing regurgitation of
    route learning with actual intelligence, but i suppose that's all >>>>>> u need working for a military contractor...

    military intelligence is an oxymoron, remember?

    You might be surprised about that statement.

    You don't want a "smart bomb" locked onto you.

    they also don't want that if they know what's best for them






    All you are doing is showing your ignorance of what you are >>>>>>>>> talking about.





    Showing that you really don't understand what you are >>>>>>>>>>>>> talking about.





    It seems you just assume you are allowed to change the >>>>>>>>>>>>>>>>> definition, perhaps because you never bothered to learn >>>>>>>>>>>>>>>>> it.





    This is sort of like the problem with a RASP >>>>>>>>>>>>>>>>>>>>> machine architecture, sub- machines on such a >>>>>>>>>>>>>>>>>>>>> platform are not necessarily computations, if they >>>>>>>>>>>>>>>>>>>>> use the machines capability to pass information not >>>>>>>>>>>>>>>>>>>>> allowed by the rules of a computation. Your RTM >>>>>>>>>>>>>>>>>>>>> similarly break that property.

    Remember, Computations are NOT just what some model >>>>>>>>>>>>>>>>>>>>> of processing produce, but specifically is defined >>>>>>>>>>>>>>>>>>>>> based on producing a specific mapping of input to >>>>>>>>>>>>>>>>>>>>> output, so if (even as a sub- machine) a specific >>>>>>>>>>>>>>>>>>>>> input might produce different output, your >>>>>>>>>>>>>>>>>>>>> architecture is NOT doing a computation. >>>>>>>>>>>>>>>>>>>>>
    And without that property, using what the machine >>>>>>>>>>>>>>>>>>>>> could do, becomes a pretty worthless criteria, as >>>>>>>>>>>>>>>>>>>>> you can't actually talk much about it. >>>>>>>>>>>>>>>>>>>>
    the output is still well-defined and deterministic >>>>>>>>>>>>>>>>>>>> at runtime,

    Not from the "input" to the piece of algorithm, as it >>>>>>>>>>>>>>>>>>> includes "hidden" state from outside that input >>>>>>>>>>>>>>>>>>> stored elsewhere in the machine.


    context-dependent computations are still >>>>>>>>>>>>>>>>>>>> computations. the fact TMs don't capture them is an >>>>>>>>>>>>>>>>>>>> indication that the ct- thesis may be false >>>>>>>>>>>>>>>>>>>>

    Nope. Not unless the "context" is made part of the >>>>>>>>>>>>>>>>>>> "input", and if you do that, you find that since you >>>>>>>>>>>>>>>>>>> are trying to make it so the caller can't just define >>>>>>>>>>>>>>>>>>> that context, your system is less than turing complete. >>>>>>>>>>>>>>>>>>>
    Your system break to property of building a >>>>>>>>>>>>>>>>>>> computation by the concatination of sub-computations. >>>>>>>>>>>>>>>>>>
    ...including a context-dependent sub-computation makes >>>>>>>>>>>>>>>>>> ur overall computation context-dependent too ... if u >>>>>>>>>>>>>>>>>> dont want a context- dependent computation don't >>>>>>>>>>>>>>>>>> include context- dependent sub- computation. >>>>>>>>>>>>>>>>>
    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.

    i'm not shifting meaning dude. i'm directly claiming >>>>>>>>>>>>>>>> it's a distinct type of computation that has been >>>>>>>>>>>>>>>> ignored by the theory of computing thus far

    nice try tho

    But you don't actually do that, as you then claim to be >>>>>>>>>>>>>>> in the same field to solve a problem specified in the field. >>>>>>>>>>>>>>>
    As I said, if you want to try to define a new field based >>>>>>>>>>>>>>> on a new definition of what a computation is, go ahead. >>>>>>>>>>>>>>
    it's not a new field, it's a mild extension of turing >>>>>>>>>>>>>> machines, with one new operation.

    No, it is, as you are changing essential core definitions.

    That is like saying that spherical geometry is the same field as
    plane geometry, we just added a small extension.
    what the did the nut say when it was all grown up???




    You need to work out your formal definition.

    Show how the system actually works out.

    Show what it can show.

    And show why anyone would want to use it.




    but in order to be complete and coherent, certain computations *must*
    have context-awareness and are therefore context-dependent. these
    computations aren't generally computable by TMs because TMs lack the
    necessary mechanisms to grant context-awareness.

    In other words, you require some computations to not be actual
    computations.

    unless u can produce some actual proof of some computation that
    actually breaks in context-dependence, rather than just listing
    things u assume are true, i won't believe u know what ur talking
    about

    The definition.

    A computation produces the well-defined result based on the INPUT.

    context-dependent computation simply expands its input to include the
    entire computing context, not just the formal parameters. it's still
    well defined and it grants us access to meta computation that is not
    as expressible in TM computing.

    ct-thesis is cooked dude

    Nope, because you are just putting yourself outside the >>>>>>>>>>>>>>> field it is written about.

    You can't change the definition of a computation, and >>>>>>>>>>>>>>> still talk about things as if you were in the same system. >>>>>>>>>>>>>>

    That just shows you are smoking some bad weed.



    Your context, being not part of the input, can't change the
    well-defined result.

    Should 1 + 2 become 4 on Thursdays? or if asked of a gingerbread man?

    ur overgeneralizing. just because some computation is
    context-dependent doesn't mean all computation is context-dependent.

    another fallacy.

    Right, but nothing that actually is a computation can be
    context-dependent.

    ur just arguing in circles with this.

    No, you are just lying to yourself to try to disagree with >>>>>>>>>>>>> the definition.





    All you are doing is saying you disagree with the >>>>>>>>>>>>>>>>> definition.

    Go ahead, try to define an alternate version of Computation Theory
    where the result can depend on things that aren't part of the actual
    input to the machine, and see what you can show that is useful.

    The problem becomes that you can't really say anything about what you
    will get, since you don't know what the "hidden" factors are.

    ??? i was very clear multiple times over what the "hidden" input was.
    there's nothing random about it, context-dependent computation is
    just as well-defined and deterministic as context-independent
    computation

    The problem is that when you look at the computation itself (that
    might be embedded into a larger computation) you don't know which of
    the infinite contexts it might be within.

    depth is not infinite for any given step,

    I didn't say infinite depth, I said from infinite contexts. >>>>>>>>>>>>>

    AND THAT'S WHERE REFLECT COMES IN: IT DUMPS THE FULL MACHINE
    DESCRIPTION OF THE RUNNING MACHINE, THE CURRENT STATE NUMBER, AND A
    FULL COPY OF THE TAPE ...

    And WHICH machine description does it dump? The problem is the
    machine description isn't unique.

    all the info required to compute all configurations between the
    beginning and the current step of the computation, which can allow it
    to compute anything that is "knowable" about where it is in the
    computation at time of the REFLECT operation...
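
    A minimal sketch of the idea described above (all names here are
    hypothetical illustrations, not the poster's formal definition): a
    plain Turing-machine simulator plus one extra REFLECT action that
    writes a snapshot of the machine description, current state, and tape
    contents back onto the tape, where an ordinary rule could only write
    a single symbol.

    BLANK = "_"

    class Rtm:
        def __init__(self, rules, start, accept):
            # rules: {(state, symbol): (new_state, write, move)} where
            # move is -1, 0 or +1 and write may be the pseudo-symbol "REFLECT"
            self.rules, self.state, self.accept = rules, start, accept
            self.tape, self.head = {}, 0

        def snapshot(self):
            # Encode machine description + current state + head + tape.
            desc = ";".join(f"{s},{r}->{n},{w},{m}"
                            for (s, r), (n, w, m) in sorted(self.rules.items()))
            lo, hi = min(self.tape, default=0), max(self.tape, default=0)
            tape = "".join(self.tape.get(i, BLANK) for i in range(lo, hi + 1))
            return f"[{desc}|state={self.state}|head={self.head}|tape={tape}]"

        def run(self, input_string, max_steps=1000):
            self.tape = dict(enumerate(input_string))
            for _ in range(max_steps):
                if self.state == self.accept:
                    return True                    # halted, accepting
                symbol = self.tape.get(self.head, BLANK)
                if (self.state, symbol) not in self.rules:
                    return False                   # halted, no rule applies
                new_state, write, move = self.rules[(self.state, symbol)]
                if write == "REFLECT":
                    # The one non-classical step: dump a description of the
                    # running machine onto the tape at the head position.
                    for i, ch in enumerate(self.snapshot()):
                        self.tape[self.head + i] = ch
                else:
                    self.tape[self.head] = write
                self.state, self.head = new_state, self.head + move
            return None                            # step budget exhausted

    # Example: read a "1", dump the reflection onto the tape, accept.
    rules = {("q0", "1"): ("qa", "REFLECT", 1)}
    print(Rtm(rules, "q0", "qa").run("1"))         # True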

    And where did it store that information?

    Remember, the starting tape was unbounded in length (but finite).

    The machine itself is bounded in size, plus the unbounded tape.


    the problem is ur literally not reading what i'm writing >>>>>>>>>>>>>> to an appreciable degree of comprehension, being too >>>>>>>>>>>>>> focused on isolated responses that lack overall
    *contextual* awareness of the conversation...

    No, you are ignoring the requirements to implement what you >>>>>>>>>>>>> desire.





    Thus, what you can say about that "computation" is very limited.

    You don't seem to understand that a key point of the theory is about
    being able to build complicated things from simpler pieces.

    It comes out of how logic works: we build complicated theories based
    on simpler theories and the axioms. If those simpler things were
    "context dependent" it makes it much harder for them to specify what
    they actually do in all contexts, and to then use them in all
    contexts.
    i'm sorry context-dependent computation aren't as simple >>>>>>>>>>>>>> 🫩 🫩🫩

    Which is why you need to actually FULLY DEFINE them, and >>>>>>>>>>>>> admit it is a new field.

    well it'd be great if someone fucking helped me out there, >>>>>>>>>>>> but all i get is a bunch adversarial dismissal cause i'm >>>>>>>>>>>> stuck on a god forsaking planet of a fucking half-braindead >>>>>>>>>>>> clowns

    Help has been offered,

    not constructively, ur barely even paying attention to what i >>>>>>>>>> write

    No, perhaps the problem is I assume you at least attempt to >>>>>>>>> learn what you are trying to talk about.

    u don't even know what constructive help is to be frank

    i'm not ur student, ur not my teachers, this isn't a
    hierarchical relationship,

    and until u recognize that ur going to continue to be non-
    constructive

    YOU were the one asking for help to develop your ideas, if only >>>>>>> by posting them and asking for comments.

    I have just pointed out the fundamental errors in your analysis

    You need to make a choice of directions.

    Either you work in the currently established theory, so you can use
    things in it, and see if you can develop something new.

    Or, you branch out and start a brand new theory, and start at the
    ground floor, fully define what you mean by things, show what your
    ideas can do, and why that would be useful.

    false dichotomy, add that to growing list of fallacies u shat out >>>>>> at me

    No real dichotomy.

    no, i don't have to totally rewrite the system to transcend a few
    classical limits.

    Sure you do. You need to figure out what might have changed.

    nothing about this change affect computation without REFLECT ... so
    everything we already could compute is still computable.

    But only if you DON'T use reflect.

    but so no power has been lost



    that fact that's not obvious to you is just u being willfully ignorant
    at this point.

    The problem is, once your "machine" definition can do
    non-computations, you can't assume it does a computation, and thus
    your guarantees go away, so you can say less about what it does.

    i think ur just pulling a definist fallacy. until u make it produce a contradiction, i don't really care what u label it as.




    Remove the first floor of your building and see what happens.

    false analogy! wow, another fallacy!

    Nope, that is EXACTLY what changing a foundational rule without seeing
    what it supported does.

    I guess you don't understand cause and effect.





    Follow the rules and you can stay in the system.

    Change anything and you are outside, and need to show what still
    can apply.

    To say you can change the foundation but keep the building is just
    lying.

    you can in fact replace foundation without even lifting the house bro

    Not in logic.

    I guess you don't understand the use of figures of speech.

    or i just don't care for ur false analogy

    In other words, you don't understand what an analogy is.

    Too bad you are dooming yourself and your wife to starvation.

    pretty nuts u think u need to keep bringing that up,

    lol, u think ur on the right side here???





    i'm not invalidating most of computing, just gunning for a few
    classical limits that don't actually do anything interesting
    anyways. not really sure why people are so bent up about them

    And, if you don't understand what those changes do, you don't know if
    your system is valid.

    ur just commenting on how little u've tried to understand it

    I'm trying to get you off the wrong track.

    and yet all u do is push me down the track further cause ain't accept ur fallacies bro


    What would you do if you saw someone cutting the branch they were
    sitting on, being outside the cut they were making.







    It seems you want to change the foundation, but keep most of the >>>>>>> building on top, without even knowing how that building was built >>>>>>> and how it connects to the foundation.

    That just doesn't work.





    but you just reject it as it doesn't match your ideas,
    largely because you don't understand what you are trying to >>>>>>>>>>> get in.

    When your errors are explained, you just curse back.

    I can't fix stupid.

    You aren't stuck on a planet of clowns, you are the clown >>>>>>>>>>> that doesn't understand the world.




    if the simplest theory was always correct we'd still be >>>>>>>>>>>>>> using newtonian gravity for everything


    You can't change a thing and it still be the same thing. >>>>>>>>>>>>>
    I guess that truth is something you don't understand

    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 09:44:25 2026
    From Newsgroup: comp.ai.philosophy

    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out.

    one can only hope for so much sometimes 🙏




    I guess you don't understand the rules of logic.

    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even if u >>>>>>>>>>>>>>> haven't understood it yet) that produces a consistent >>>>>>>>>>>>>>> deterministic result that is "not a computation". >>>>>>>>>>>>>>
    Because you get that result only by equivocating on your >>>>>>>>>>>>>> definitions.

    If the context is part of the inpt to make the output >>>>>>>>>>>>>> determistic from the input, then they fail to be usable as >>>>>>>>>>>>>> sub- computations as we can't control that context part of >>>>>>>>>>>>>> the input.

    When we look at just the controllable input for a sub- >>>>>>>>>>>>>> computation, the output is NOT a deterministic function of >>>>>>>>>>>>>> that inut.


    not sure what the fuck it's doing if it's not a computation >>>>>>>>>>>>>>
    Its using hidden inputs that the caller can't control. >>>>>>>>>>>>>
    which we do all the time in normal programming, something >>>>>>>>>>>>> which apparently u think the tHeOrY oF CoMpUtInG fails to >>>>>>>>>>>>> encapsulate

    Right, but that isn't about computations.


    pretty crazy we do a bunch "non-computating" in the normal >>>>>>>>>>>>> act of programming computers

    Why?

    As I have said, "Computatations" is NOT about how modern >>>>>>>>>>>> computers work.

    I guess you are just showing that you fundamentally don't >>>>>>>>>>>> understand the problem field you are betting your life on. >>>>>>>>>>>
    one would presume the fundamental theory of computing would >>>>>>>>>>> be general enough to encapsulate everything computed by real >>>>>>>>>>> world computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the >>>>>>>>>> computer as you know it.

    so ur saying it's outdated and needs updating in regards to new >>>>>>>>> things we do with computers that apparently turing machines as >>>>>>>>> a model don't have variations of ...

    No, it still handles that which it was developed for.

    well it was developed to be a general theory of computing, and
    apparently modern computing has transcended that theory ...

    Not really.

    The way modern processors work, "sub-routines" can fail to be
    computations, but whole programs will tend to be. Sub-routines CAN be
    built with care to fall under its guidance.

    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result but is
    somehow not a computation!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.


    it may or may not have an input, and in fact the entirety of turing
    machine computing can be expressed by enumerating only the turing
    machines that do NOT take input.

    In which case the input can be thought of as the empty set and the
    output is a constant.
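
    As a tiny illustration of that move (entirely mine, using Python's
    functools.partial): any machine-plus-input pair can be folded into an
    input-free machine by baking the input in, in the spirit of partial
    application / the s-m-n construction.

    from functools import partial

    def add(x, y):
        return x + y

    add_2_3 = partial(add, 2, 3)   # a "machine" that takes no input...
    print(add_2_3())               # ...and always outputs the constant 5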


    but include other "unknown" factors.

    lol, so when u print a stack trace, u consider those factors "unknown"?

    Thus making your definist fallacy of confusing an instance of a
    computation with the definition and use of the computation itself.

    If I am trying to document an API, but the results depend on
    something not provided through that API, as far as that documentation
    is concerned those details are "unknown".



    The key point is that a computation always gives the same answer for
    a given input; if it doesn't, it can't be a computation.

    If you can't control the whole input, it isn't as useful, if it has
    any usefulness at all.
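
    A small sketch of the distinction being argued here, and of the
    "force the hidden inputs to be inputs" conversion mentioned a little
    further down (the function names are mine, purely illustrative): the
    first routine is not a function of its declared input alone, while
    the second is.

    mode = "metric"        # "hidden" state living outside the parameter list

    def scaled(x):
        # Result depends on hidden state the caller never passes in.
        return x * 100 if mode == "metric" else x * 39.37

    def scaled_explicit(x, unit_mode):
        # Same mapping, but the context is now part of the input, so the
        # output is determined by the arguments alone.
        return x * 100 if unit_mode == "metric" else x * 39.37

    print(scaled(2.0))                     # 200.0
    mode = "imperial"
    print(scaled(2.0))                     # 78.74 -- same input, new output
    print(scaled_explicit(2.0, "metric"))  # 200.0 every time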


    fucking dick is just pulling shit out of his ass, 🤮🤮🤮

    It seems you are stuffing yours with shit.





    There ARE advantages to doing so, as that DOES add a lot of
    correctness provability to the code.

    The biggest part of code not being analyzable/provable is when it
    deviates from the requirements of being a computation.




    or what ... someone writes down a fundamental theory and then >>>>>>>>> it just sticks around like an unchanging law when u haven't >>>>>>>>> even proven the ct- thesis correct???

    Why does it need to change?

    why does the fundamental theory of computing need to encapsulate >>>>>>> all that is possible within computing??

    That is like asking whether number theory shouldn't talk about
    everything in mathematics.


    idk, what's what i thot a fundamental theory is supposed to do, >>>>>>> but i guess you don't agree???

    Nope, it handles ONE ASPECT of the general field.

    We not only have Computation Theory, but we also get things like
    Complexity Theory,

    complexity theory is built on top of the fundamentals of computing ... >>>>
    Yes, just like computability/computation theory.

    The field of "Computer Science" has a bunch of subfields/theories
    within it.

    You seem to confuse Computation Theory with the fundamentals of
    computing.


    like, if the fundamental theory doesn't encapsulate everything
    done within computing ... then idk why u think the halting
    problem should apply to modern computing???

    Because it DOES present a limitation of what modern computers can do.

    After all, every non-computation can be converted into a computation
    by forcing all the "hidden inputs" to be considered as inputs.

    lol schrodinger's computation

    Model conversion.



    This just shows the limitation in controllability of the interface.


    If a new problem comes up, a new theory might be needed to
    handle it.

    or maybe new techniques could rectify old problems ...


    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 14:36:45 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out. >>>>>>>>>>>>
    one can only hope for so much sometimes 🙏




    I guess you don't understand the rules of logic.

    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even if u >>>>>>>>>>>>>>>> haven't understood it yet) that produces a consistent >>>>>>>>>>>>>>>> deterministic result that is "not a computation". >>>>>>>>>>>>>>>
    Because you get that result only by equivocating on your >>>>>>>>>>>>>>> definitions.

    If the context is part of the inpt to make the output >>>>>>>>>>>>>>> determistic from the input, then they fail to be usable >>>>>>>>>>>>>>> as sub- computations as we can't control that context >>>>>>>>>>>>>>> part of the input.

    When we look at just the controllable input for a sub- >>>>>>>>>>>>>>> computation, the output is NOT a deterministic function >>>>>>>>>>>>>>> of that inut.


    not sure what the fuck it's doing if it's not a computation >>>>>>>>>>>>>>>
    Its using hidden inputs that the caller can't control. >>>>>>>>>>>>>>
    which we do all the time in normal programming, something >>>>>>>>>>>>>> which apparently u think the tHeOrY oF CoMpUtInG fails to >>>>>>>>>>>>>> encapsulate

    Right, but that isn't about computations.


    pretty crazy we do a bunch "non-computating" in the normal >>>>>>>>>>>>>> act of programming computers

    Why?

    As I have said, "Computatations" is NOT about how modern >>>>>>>>>>>>> computers work.

    I guess you are just showing that you fundamentally don't >>>>>>>>>>>>> understand the problem field you are betting your life on. >>>>>>>>>>>>
    one would presume the fundamental theory of computing would >>>>>>>>>>>> be general enough to encapsulate everything computed by real >>>>>>>>>>>> world computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the >>>>>>>>>>> computer as you know it.

    so ur saying it's outdated and needs updating in regards to >>>>>>>>>> new things we do with computers that apparently turing
    machines as a model don't have variations of ...

    No, it still handles that which it was developed for.

    well it was developed to be a general theory of computing, and >>>>>>>> apparently modern computing has transcended that theory ...

    Not really.

    THe way modern processors work, "sub-routines" can fail to be
    computations, but whole programs will tend to be. Sub-routines
    CAN be built with care to fall under its guidance.

    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result but is
    somehow not a compution!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY EQUIVALENT
    THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW



    it may or may not have an input, and in fact the entirety of turing
    machine computing can be expressed by enumerating only the turing
    machines that do NOT take input.

    In which case the input can be thought of as the empty set and the
    output is a constant.


    but include other "unknown" factors.

    lol, so when u print a stack trace, u consider those factors "unknown"?

    Thus making your definist fallacy of confusing an instance of a
    computation with the definition and use of the computation itself.

    If I am trying to document an API, but the results depend on
    something not provided through that API, as far as that documentation
    is concerned those details are "unknown".

    in react we deal with contexts all the time to encapsulate state
    with the functionality that renders/modifies it.

    it doesn't make it "unknown", just not passed thru the formal
    parameters

    and yes it does need to be documented because you'll need to setup
    that state at the root of the tree so it can be used by components
    lower in the react tree.
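
    A rough analogue of that pattern in Python (my own sketch, using the
    standard contextvars module; the React specifics stay as described in
    prose above): state is installed once near the root and read further
    down the call tree without being threaded through every parameter
    list.

    import contextvars

    theme = contextvars.ContextVar("theme", default="light")

    def leaf_component():
        # Reads ambient context rather than a formal parameter.
        return f"rendering with {theme.get()} theme"

    def app():
        return leaf_component()

    def root(theme_name):
        # The "provider": set the context at the root, then render the tree.
        ctx = contextvars.copy_context()
        ctx.run(theme.set, theme_name)
        return ctx.run(app)

    print(root("dark"))   # rendering with dark theme
    print(app())          # rendering with light theme (no provider above)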




    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 17:06:26 2026
    From Newsgroup: comp.ai.philosophy

    On 1/6/2026 1:47 AM, dart200 wrote:

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machines ...

    heck it must be certain that for any given machine there must exist
    a partial decider that can decide on it ... because otherwise a
    paradox would have to address all possible partial deciders in a
    computable fashion and that runs up against its own limit to
    classical computing. therefore some true decider must exist for any
    given machine that exists ... we just can't funnel the knowledge thru
    a general interface.
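
    A minimal sketch of such a partial decider (entirely my own,
    hypothetical names): it answers "halts" only when it can actually
    observe termination within a step budget and otherwise reports
    "inconclusive" rather than guessing, so it is never wrong, just
    incomplete.

    import sys

    def partial_halt_check(program, arg, fuel=100_000):
        """Run program(arg) under a step budget.
        Return 'halts' if it finished, else 'inconclusive'."""
        steps = 0

        def tracer(frame, event, _):
            nonlocal steps
            steps += 1
            if steps > fuel:
                raise TimeoutError     # budget exhausted, give up
            return tracer

        sys.settrace(tracer)
        try:
            program(arg)
            return "halts"
        except TimeoutError:
            return "inconclusive"
        finally:
            sys.settrace(None)

    def countdown(n):
        while n > 0:
            n -= 1

    def spin(_):
        while True:
            pass

    print(partial_halt_check(countdown, 10))   # halts
    print(partial_halt_check(spin, 0))         # inconclusive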

    i think the actual problem is that TM computing is not sufficient to
    describe all computable relationships. TM computing is considered the
    gold-standard for what is computable, but we haven't actually proved
    that.

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.

    This required establishing a new foundation
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 19:52:21 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given a
    representation of another computation and its input, determines for
    all cases whether that computation will halt does nothing to further
    the question of whether Turing Machines are the most powerful form of
    computation.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 19:52:30 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out. >>>>>>>>>>>>>
    one can only hope for so much sometimes 🙏




    I guess you don't understand the rules of logic. >>>>>>>>>>>>>>>
    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even if u >>>>>>>>>>>>>>>>> haven't understood it yet) that produces a consistent >>>>>>>>>>>>>>>>> deterministic result that is "not a computation". >>>>>>>>>>>>>>>>
    Because you get that result only by equivocating on your >>>>>>>>>>>>>>>> definitions.

    If the context is part of the inpt to make the output >>>>>>>>>>>>>>>> determistic from the input, then they fail to be usable >>>>>>>>>>>>>>>> as sub- computations as we can't control that context >>>>>>>>>>>>>>>> part of the input.

    When we look at just the controllable input for a sub- >>>>>>>>>>>>>>>> computation, the output is NOT a deterministic function >>>>>>>>>>>>>>>> of that inut.


    not sure what the fuck it's doing if it's not a >>>>>>>>>>>>>>>>> computation

    Its using hidden inputs that the caller can't control. >>>>>>>>>>>>>>>
    which we do all the time in normal programming, something >>>>>>>>>>>>>>> which apparently u think the tHeOrY oF CoMpUtInG fails to >>>>>>>>>>>>>>> encapsulate

    Right, but that isn't about computations.


    pretty crazy we do a bunch "non-computating" in the >>>>>>>>>>>>>>> normal act of programming computers

    Why?

    As I have said, "Computatations" is NOT about how modern >>>>>>>>>>>>>> computers work.

    I guess you are just showing that you fundamentally don't >>>>>>>>>>>>>> understand the problem field you are betting your life on. >>>>>>>>>>>>>
    one would presume the fundamental theory of computing would >>>>>>>>>>>>> be general enough to encapsulate everything computed by >>>>>>>>>>>>> real world computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the >>>>>>>>>>>> computer as you know it.

    so ur saying it's outdated and needs updating in regards to >>>>>>>>>>> new things we do with computers that apparently turing
    machines as a model don't have variations of ...

    No, it still handles that which it was developed for.

    well it was developed to be a general theory of computing, and >>>>>>>>> apparently modern computing has transcended that theory ...

    Not really.

    THe way modern processors work, "sub-routines" can fail to be >>>>>>>> computations, but whole programs will tend to be. Sub-routines >>>>>>>> CAN be built with care to fall under its guidance.

    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result but is
    somehow not a computation!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY EQUIVALENT
    THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW

    But I can say that Computations, as defined, are all that they can do.

    Just because you don't understand what the words mean, doesn't mean you
    get to change them.

    "Computations" have a fairly precise definition (well, a couple for
    different classes of them) based on the needs of a couple specific
    fields of mathematics.

    That definition *IS* the definition of a Computation.

    If you want to look at something else, go ahead; just best not to be
    deceptive and call it by the same name without being clear you are
    doing something different.




    it may or may not have an input, and in fact the entirety of turing
    machine computing can be expressed by enumerating only the turing
    machines that do NOT take input.

    In which case the input can be thought of as the empty set and the
    output is a constant.


    but include other "unknown" factors.

    lol, so when u print a stack trace, u consider those factors "unknown"?

    Thus making your definist fallacy of confusing an instance of a
    computation with the definition and use of the computation itself.

    If I am trying to document an API, but the results depend on something
    not provided through that API, then as far as that documentation is
    concerned those details are "unknown".

    in react we deal with contexts all the time to encapsulate state
    with the functionality that renders/modifies it.

    it doesn't make it "unknown", just not passed through the formal parameters

    and yes it does need to be documented, because you'll need to set up that
    state at the root of the tree so it can be used by components lower in
    the react tree.

    And "react" likely isn't trying to compute things about something
    outside of itself, but is just a support engine whose actions are
    dependent on context.

    It's not like a system trying to model the weather around the world, and
    making the weather in Paris be dependent upon where YOU are, as opposed
    to where Paris is.





    The key point is that a computation always gives the same answer for
    a given input; if it doesn't, it can't be a computation.

    If you can't control the whole input, it isn't as useful, if it has
    any usefulness at all.
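
    A small, made-up Python contrast of the two cases being argued about
    here: a routine whose answer depends on state outside its declared
    input, versus one whose answer is fixed by its arguments alone.

        counter = 0                          # hidden state outside the function

        def peeks_at_hidden_state(x):
            global counter
            counter += 1
            return x + counter               # same x, answer drifts per call

        def fixed_by_its_input(x, counter):
            return x + counter               # same arguments, same answer

        print(peeks_at_hidden_state(10), peeks_at_hidden_state(10))   # 11 12
        print(fixed_by_its_input(10, 1), fixed_by_its_input(10, 1))   # 11 11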


    fucking dick is just pulling shit out of his ass, 🤮🤮🤮

    It seems you are stuffing yours with shit.





    THere ARE advantages to doing so, as that DOES add a lot of
    correctness provability to the code.

    The biggest part of code not being analyzable/provable is when >>>>>>>> it deviates from the requirements of being a computation.




    or what ... someone writes down a fundamental theory and then >>>>>>>>>>> it just sticks around like an unchanging law when u haven't >>>>>>>>>>> even proven the ct- thesis correct???

    Why does it need to change?

    why does the fundamental theory of computing need to
    encapsulate all that is possible within computing??

    That is like asking whether number theory should talk about
    everything in mathematics.


    idk, what's what i thot a fundamental theory is supposed to do, >>>>>>>>> but i guess you don't agree???

    Nope, it handles ONE ASPECT of the general field.

    We not only have Computation Theory, but we also get things like >>>>>>>> Complexity Theory,

    complexity theory is built on top of the fundamentals of
    computing ...

    Yes, just like computability/computation theory.

    The field of "Computer Science" has a bunch of subfields/theories
    within it.

    You seem to confuse Computation Theory with the fundamentals of computing.


    like, if the fundamental theory doesn't encapsulate everything >>>>>>>>> done within computing ... then idk why u think the halting
    problem should apply to modern computing???

    Because it DOES present a limitation of what modern computers >>>>>>>> can do.

    After all, every non-computation can be converted into a
    computation by forcing all the "hidden inputs" to be considered >>>>>>>> as inputs.

    lol schrodinger's computation

    Model conversion.



    This just shows the limitation in controlability of the interface. >>>>>>>>


    If a new problem comes up, a new theory might be needed to >>>>>>>>>> handle it.

    or maybe new techniques could rectify old problems ...

    talk about a lack of curiosity. you confusing regurgitation of >>>>>>>>> route learning with actual intelligence, but i suppose that's >>>>>>>>> all u need working for a military contractor...

    military intelligence is an oxymoron, remember?

    You might be surprised about that statement.

    You don't want a "smart bomb" locked onto you.

    they also don't want that if they know what's best for them






    All you are doing is showing your ignorance of what you are >>>>>>>>>>>> talking about.





    Showing that you really don't understand what you are >>>>>>>>>>>>>>>> talking about.





    It seems you just assume you are allowed to change >>>>>>>>>>>>>>>>>>>> the definition, perhaps because you never bothered >>>>>>>>>>>>>>>>>>>> to learn it.





    This is sort of like the problem with a RASP >>>>>>>>>>>>>>>>>>>>>>>> machine architecture, sub- machines on such a >>>>>>>>>>>>>>>>>>>>>>>> platform are not necessarily computations, if >>>>>>>>>>>>>>>>>>>>>>>> they use the machines capability to pass >>>>>>>>>>>>>>>>>>>>>>>> information not allowed by the rules of a >>>>>>>>>>>>>>>>>>>>>>>> computation. Your RTM similarly break that >>>>>>>>>>>>>>>>>>>>>>>> property.

    Remember, Computations are NOT just what some >>>>>>>>>>>>>>>>>>>>>>>> model of processing produce, but specifically is >>>>>>>>>>>>>>>>>>>>>>>> defined based on producing a specific mapping of >>>>>>>>>>>>>>>>>>>>>>>> input to output, so if (even as a sub- machine) >>>>>>>>>>>>>>>>>>>>>>>> a specific input might produce different output, >>>>>>>>>>>>>>>>>>>>>>>> your architecture is NOT doing a computation. >>>>>>>>>>>>>>>>>>>>>>>>
    And without that property, using what the >>>>>>>>>>>>>>>>>>>>>>>> machine could do, becomes a pretty worthless >>>>>>>>>>>>>>>>>>>>>>>> criteria, as you can't actually talk much about it. >>>>>>>>>>>>>>>>>>>>>>>
    the output is still well-defined and >>>>>>>>>>>>>>>>>>>>>>> deterministic at runtime,

    Not from the "input" to the piece of algorithm, as >>>>>>>>>>>>>>>>>>>>>> it includes "hidden" state from outside that input >>>>>>>>>>>>>>>>>>>>>> stored elsewhere in the machine.


    context-dependent computations are still >>>>>>>>>>>>>>>>>>>>>>> computations. the fact TMs don't capture them is >>>>>>>>>>>>>>>>>>>>>>> an indication that the ct- thesis may be false >>>>>>>>>>>>>>>>>>>>>>>

    Nope. Not unless the "context" is made part of the >>>>>>>>>>>>>>>>>>>>>> "input", and if you do that, you find that since >>>>>>>>>>>>>>>>>>>>>> you are trying to make it so the caller can't just >>>>>>>>>>>>>>>>>>>>>> define that context, your system is less than >>>>>>>>>>>>>>>>>>>>>> turing complete.

    Your system break to property of building a >>>>>>>>>>>>>>>>>>>>>> computation by the concatination of sub-computations. >>>>>>>>>>>>>>>>>>>>>
    ...including a context-dependent sub-computation >>>>>>>>>>>>>>>>>>>>> makes ur overall computation context-dependent >>>>>>>>>>>>>>>>>>>>> too ... if u dont want a context- dependent >>>>>>>>>>>>>>>>>>>>> computation don't include context- dependent sub- >>>>>>>>>>>>>>>>>>>>> computation.

    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.

    i'm not shifting meaning dude. i'm directly claiming >>>>>>>>>>>>>>>>>>> it's a distinct type of computation that has been >>>>>>>>>>>>>>>>>>> ignored by the theory of computing thus far >>>>>>>>>>>>>>>>>>>
    nice try tho

    But you don't actually do that, as you then claim to >>>>>>>>>>>>>>>>>> be in the same field to solve a problem specified in >>>>>>>>>>>>>>>>>> the field.

    As I said, if you want to try to define a new field >>>>>>>>>>>>>>>>>> based on a new definition of what a computation is, go >>>>>>>>>>>>>>>>>> ahead.

    it's not a new field, it's a mild extension of turing >>>>>>>>>>>>>>>>> machines, with one new operation.

    No, it is, as you are changing essential core defintions. >>>>>>>>>>>>>>>>
    That is like saying that spherical geometery is the same >>>>>>>>>>>>>>>> field as plane geometry, we just added a small extension. >>>>>>>>>>>>>>>
    what the did the nut say when it was all grown up??? >>>>>>>>>>>>>>>



    You need to work out your formal definition. >>>>>>>>>>>>>>>>>>
    Show how the system actually works out.

    Show what it can show.

    And show why anyone would want to use it.




    but in order to be complete and coherent, certain >>>>>>>>>>>>>>>>>>>>> computations *must* have context-awareness and are >>>>>>>>>>>>>>>>>>>>> therefore context- dependent. these computations >>>>>>>>>>>>>>>>>>>>> aren't generally computable by TMs because TMs lack >>>>>>>>>>>>>>>>>>>>> the necessary mechanisms to grant context- awareness. >>>>>>>>>>>>>>>>>>>>
    In other words, you require some computations to not >>>>>>>>>>>>>>>>>>>> be actual computations.


    unless u can produce some actual proof of some >>>>>>>>>>>>>>>>>>>>> computation that actually breaks in context- >>>>>>>>>>>>>>>>>>>>> dependence, rather than just listing things u >>>>>>>>>>>>>>>>>>>>> assume are true, i won't believe u know what ur >>>>>>>>>>>>>>>>>>>>> talking about


    The definition.

    A computation produces the well defined result based >>>>>>>>>>>>>>>>>>>> on the INPUT.

    context-dependent computation simply expands it's >>>>>>>>>>>>>>>>>>> input to include the entire computing context, not >>>>>>>>>>>>>>>>>>> just the formal parameters. it's still well defined >>>>>>>>>>>>>>>>>>> and it grants us access to meta computation that is >>>>>>>>>>>>>>>>>>> not as expressible in TM computing.

    ct-thesis is cooked dude

    Nope, because you are just putting yourself outside >>>>>>>>>>>>>>>>>> the field it is written about.

    You can't change the definition of a computation, and >>>>>>>>>>>>>>>>>> still talk about things as if you were in the same >>>>>>>>>>>>>>>>>> system.


    That just shows you are smoking some bad weed. >>>>>>>>>>>>>>>>>>


    Your context, being not part of the input, can't >>>>>>>>>>>>>>>>>>>> change the well- defined result.

    Should 1 + 2 become 4 on Thursdays? of it asked of a >>>>>>>>>>>>>>>>>>>> gingerbread man?

    ur overgeneralizing. just become some computation is >>>>>>>>>>>>>>>>>>> context- dependent doesn't mean all computation is >>>>>>>>>>>>>>>>>>> context- dependent.

    another fallacy.

    Right, but nothing that actually is a computation can >>>>>>>>>>>>>>>>>> be context- dependent.

    ur just arguing in circles with this.

    No, you are just lying to yourself to try to disagree >>>>>>>>>>>>>>>> with the definition.





    All you are doing is saying you disagree with the >>>>>>>>>>>>>>>>>>>> definition.

    Go ahead, try to define an alternate version of >>>>>>>>>>>>>>>>>>>> Computation Theory where the result can depend on >>>>>>>>>>>>>>>>>>>> things that aren't part of the actual input to the >>>>>>>>>>>>>>>>>>>> machine, and see what you can show that is useful. >>>>>>>>>>>>>>>>>>>>
    The problem becomes that you can't really say >>>>>>>>>>>>>>>>>>>> anything about what you will get, since you don't >>>>>>>>>>>>>>>>>>>> know what the "hidden" factors are.

    ??? i was very clear multiple times over what the >>>>>>>>>>>>>>>>>>> "hidden" input was. there's nothing random about it, >>>>>>>>>>>>>>>>>>> context- dependent computation is just as well-defend >>>>>>>>>>>>>>>>>>> and deterministic as context- independent computation >>>>>>>>>>>>>>>>>>>

    The problem is that when you look at the computation >>>>>>>>>>>>>>>>>> itself (that might be imbedded into a larger >>>>>>>>>>>>>>>>>> computation) you don't know which of the infinite >>>>>>>>>>>>>>>>>> contexts it might be within.

    depth is not infinite for any given step,

    I didn't say infinite depth, I said from infinite contexts. >>>>>>>>>>>>>>>>

    AND THAT'S WHERE REFLECT COMES IN: IT DUMPS THE FULL >>>>>>>>>>>>>>>>> MACHINE DESCRIPTION OF THE RUNNING MACHINE, THE CURRENT >>>>>>>>>>>>>>>>> STATE NUMBER, AND A FULL COPY OF THE TAPE ... >>>>>>>>>>>>>>>>
    And WHICH machine description does it dump? The problem >>>>>>>>>>>>>>>> is the machine description isn't unique.


    all the info required to compute all configurations >>>>>>>>>>>>>>>>> between the beginning and the current step of the >>>>>>>>>>>>>>>>> computation, which can allow it to compute anything >>>>>>>>>>>>>>>>> that is "knowable" about where it is in the computation >>>>>>>>>>>>>>>>> at time of the REFLECT operation...

    And where did it store that information?

    Remember, the starting tape was unbounded in length (but >>>>>>>>>>>>>>>> finite).

    The machine itself is bounded in size, plus the >>>>>>>>>>>>>>>> unbounded tape.
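
    For what it's worth, here is one possible toy reading of the REFLECT
    operation being described (the names and the encoding below are invented
    for illustration and are not dart200's actual formalism): a stepper whose
    extra instruction appends an encoding of the transition table, the
    current state, and a copy of the tape onto the tape.

        def step(delta, state, tape, head):
            symbol = tape[head] if head < len(tape) else "_"
            new_state, action, move = delta[(state, symbol)]
            if action == "REFLECT":
                # dump: own description + current state + full tape snapshot
                snapshot = f"#<{sorted(delta.items())}|state={state}|tape={''.join(tape)}>"
                tape = tape + list(snapshot)
            else:
                tape = tape[:head] + [action] + tape[head + 1:]   # ordinary write
            return new_state, tape, max(0, head + move)

        # one-rule machine: on reading 'a' in state 0, reflect and move to state 1
        delta = {(0, "a"): (1, "REFLECT", 0)}
        state, tape, head = step(delta, 0, list("ab"), 0)
        print(state, "".join(tape))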


    the problem is ur literally not reading what i'm >>>>>>>>>>>>>>>>> writing to an appreciable degree of comprehension, >>>>>>>>>>>>>>>>> being too focused on isolated responses that lack >>>>>>>>>>>>>>>>> overall *contextual* awareness of the conversation... >>>>>>>>>>>>>>>>
    No, you are ignoring the requirements to implement what >>>>>>>>>>>>>>>> you desire.





    Thus, what you can say about that "computation" is >>>>>>>>>>>>>>>>>> very limited.

    You don't seem to understand that a key point of the >>>>>>>>>>>>>>>>>> theory is about being able to build complicate things >>>>>>>>>>>>>>>>>> from simpler pieces.

    It comes out of how logic works, we build complicated >>>>>>>>>>>>>>>>>> theories based on simpler theories and the axioms. If >>>>>>>>>>>>>>>>>> those simplere things were "context dependent" it >>>>>>>>>>>>>>>>>> makes it much harder for them to specifiy what they >>>>>>>>>>>>>>>>>> actually do in all contexts, and to then use them in >>>>>>>>>>>>>>>>>> all contexts.

    i'm sorry context-dependent computation aren't as >>>>>>>>>>>>>>>>> simple 🫩 🫩🫩

    Which is why you need to actually FULLY DEFINE them, and >>>>>>>>>>>>>>>> admit it is a new field.

    well it'd be great if someone fucking helped me out >>>>>>>>>>>>>>> there, but all i get is a bunch adversarial dismissal >>>>>>>>>>>>>>> cause i'm stuck on a god forsaking planet of a fucking >>>>>>>>>>>>>>> half- braindead clowns

    Help has been offered,

    not constructively, ur barely even paying attention to what >>>>>>>>>>>>> i write

    No, perhaps the problem is I assume you at least attempt to >>>>>>>>>>>> learn what you are trying to talk about.

    u don't even know what constructive help is to be frank

    i'm not ur student, ur not my teachers, this isn't a
    hierarchical relationship,

    and until u recognize that ur going to continue to be non- >>>>>>>>>>> constructive

    YOU were the one asking for help to develop your ideas, if >>>>>>>>>> only by posting them and asking for comments.

    I have just pointed out the fundamental errors in your analysis >>>>>>>>>>
    You need to make a choice of directions.

    Either you work in the currently established theory, so you >>>>>>>>>> can use things in it, and see if you can develop something new. >>>>>>>>>>
    Or, you branch out and start a brand new theory, and start at >>>>>>>>>> the ground floor, fully define what you mean by things, show >>>>>>>>>> what you ideas can do, and why that would be useful.

    false dichotomy, add that to growing list of fallacies u shat >>>>>>>>> out at me

    No real dichotomy.

    no, i don't have to totally rewrite the system to transcend a few
    classical limits.

    Sure you do. You need to figure out what might have changed.

    nothing about this change affects computation without REFLECT ... so
    everything we already could compute is still computable.

    But only if you DON'T use reflect.

    but so no power has been lost



    the fact that's not obvious to you is just u being willfully
    ignorant at this point.

    The problem is, once your "machine" definition can do non-computations,
    you can't assume it does a computation, and thus your guarantees go
    away, so you can say less about what it does.

    i think ur just pulling a definist fallacy. until u make it produce a
    contradiction, i don't really care what u label it as.




    Remove the first floor of your building and see what happens.

    false analogy! wow, another fallacy!

    Nope, that is EXACTLY what changing a foundational rule without
    seeing what it supported does.

    I guess you don't understand cause and effect.





    Follow the rules and you can stay in the system.

    Change anything and you are outside, and need to show what still >>>>>>>> can apply.

    To say you can change the foundation but keep the building is >>>>>>>> just lying.

    you can in fact replace foundation without even lifting the house >>>>>>> bro

    Not in logic.

    I guess you don't understand the use of figures of speech.

    or i just don't care for ur false analogy

    In other words, you don't understand what an analogy is.

    Too bad you are dooming yourself and your wife to starvation.

    pretty nuts u think u need to keep bringing that up,

    lol, u think ur on the right side here???





    i'm not invalidating most of computing, just gunning for a few
    classical limits that don't actually do anything interesting
    anyways. not really sure why people are so bent up about them

    And, if you don't understand what those changes do, you don't know >>>>>> if your system is valid.

    ur just commenting on how little u've tried to understand it

    I'm trying to get you off the wrong track.

    and yet all u do is push me down the track further cause i ain't
    accepting ur fallacies bro


    What would you do if you saw someone cutting the branch they were
    sitting on, being outside the cut they were making?







    It seems you want to change the foundation, but keep most of >>>>>>>>>> the building on top, without even knowing how that building >>>>>>>>>> was built and how it connects to the foundation.

    That just doesn't work.





    but you just reject it as it doesn't match your ideas, >>>>>>>>>>>>>> largely because you don't understand what you are trying >>>>>>>>>>>>>> to get in.

    When your errors are explained, just just curse back. >>>>>>>>>>>>>>
    I can't fix stupid.

    You aren't stuck on a planet of clowns, you are the clown >>>>>>>>>>>>>> that doesn't understand the world.




    if the simplest theory was always correct we'd still be >>>>>>>>>>>>>>>>> using newtonian gravity for everything


    You can't change a thing and it still be the same thing. >>>>>>>>>>>>>>>>
    I guess that truth is something you don't understand >>>>>>>>>>>>>>>

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 18:05:08 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given a representation of another computation and its input, determine for all
    cases if the computation will halt does nothing to further the question
    of are Turing Machines the most powerful form of computation.

    context-aware machines compute functions:

    (context,input) -> output
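
    One way to read that signature, as a minimal sketch rather than a
    settled formalism (the names below are invented for illustration): once
    the context is written down as a value, a "context-aware" routine is an
    ordinary function over the pair.

        from typing import Any, Callable, Dict

        ContextAware = Callable[[Dict[str, Any], str], str]

        def greet(context: Dict[str, Any], name: str) -> str:
            return f"{context['greeting']}, {name}!"

        fn: ContextAware = greet
        print(fn({"greeting": "hello"}, "world"))   # hello, world!
        print(fn({"greeting": "salut"}, "monde"))   # salut, monde!
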
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 18:24:54 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out. >>>>>>>>>>>>>>
    one can only hope for so much sometimes 🙏




    I guess you don't understand the rules of logic. >>>>>>>>>>>>>>>>
    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even if u >>>>>>>>>>>>>>>>>> haven't understood it yet) that produces a consistent >>>>>>>>>>>>>>>>>> deterministic result that is "not a computation". >>>>>>>>>>>>>>>>>
    Because you get that result only by equivocating on >>>>>>>>>>>>>>>>> your definitions.

    If the context is part of the inpt to make the output >>>>>>>>>>>>>>>>> determistic from the input, then they fail to be usable >>>>>>>>>>>>>>>>> as sub- computations as we can't control that context >>>>>>>>>>>>>>>>> part of the input.

    When we look at just the controllable input for a sub- >>>>>>>>>>>>>>>>> computation, the output is NOT a deterministic function >>>>>>>>>>>>>>>>> of that inut.


    not sure what the fuck it's doing if it's not a >>>>>>>>>>>>>>>>>> computation

    Its using hidden inputs that the caller can't control. >>>>>>>>>>>>>>>>
    which we do all the time in normal programming, >>>>>>>>>>>>>>>> something which apparently u think the tHeOrY oF >>>>>>>>>>>>>>>> CoMpUtInG fails to encapsulate

    Right, but that isn't about computations.


    pretty crazy we do a bunch "non-computating" in the >>>>>>>>>>>>>>>> normal act of programming computers

    Why?

    As I have said, "Computatations" is NOT about how modern >>>>>>>>>>>>>>> computers work.

    I guess you are just showing that you fundamentally don't >>>>>>>>>>>>>>> understand the problem field you are betting your life on. >>>>>>>>>>>>>>
    one would presume the fundamental theory of computing >>>>>>>>>>>>>> would be general enough to encapsulate everything computed >>>>>>>>>>>>>> by real world computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the >>>>>>>>>>>>> computer as you know it.

    so ur saying it's outdated and needs updating in regards to >>>>>>>>>>>> new things we do with computers that apparently turing >>>>>>>>>>>> machines as a model don't have variations of ...

    No, it still handles that which it was developed for.

    well it was developed to be a general theory of computing, and >>>>>>>>>> apparently modern computing has transcended that theory ... >>>>>>>>>
    Not really.

    THe way modern processors work, "sub-routines" can fail to be >>>>>>>>> computations, but whole programs will tend to be. Sub-routines >>>>>>>>> CAN be built with care to fall under its guidance.

    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result but is >>>>>> somehow not a compution!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY EQUIVALENT
    THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW

    But I can say that Computations as defined, are all that they can do.

    i will never care about you complaining about the fact that the
    computations i'm talking about don't fit within the particular box you
    call a "Computation", because it just doesn't mean anything,

    u and the entire field can be wrong about how u specified "Computation",

    and that potential is well codified by the fact that the ct-thesis is
    still a thesis and not a law.

    i will not respond to more comments on this because it's a boring, lazy,
    non-argument that is a fucking waste of both our time.
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 20:35:50 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/2026 6:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.


    It is categorically impossible to define a
    computation more powerful than that above.

    The fact that it is impossible to build a computation that, given a representation of another computation and its input, determine for all
    cases if the computation will halt does nothing to further the question
    of are Turing Machines the most powerful form of computation.
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.

    This required establishing a new foundation
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 18:38:15 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/26 6:35 PM, olcott wrote:
    On 1/24/2026 6:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.


    It is categorically impossible to define a
    computation more powerful than that above.

    i mean turing machines are just a method to specify string
    transformations on the tape ???

    they are primarily defined by a large transition table for what
    operation is done based on the current state of the machine and the
    symbol under the head...
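
    As a toy illustration of that "transition table" reading (my sketch, not
    anything from the post), here is a tiny Turing-machine stepper in
    Python; the particular machine flips every bit of its input and halts
    at the blank.

        # Toy sketch: a Turing machine as a table keyed on (state, symbol read).
        delta = {
            ("flip", "0"): ("flip", "1", +1),
            ("flip", "1"): ("flip", "0", +1),
            ("flip", "_"): ("halt", "_",  0),
        }

        def run_tm(delta, tape, state="flip", blank="_"):
            tape, head = list(tape) + [blank], 0
            while state != "halt":
                state, write, move = delta[(state, tape[head])]
                tape[head] = write
                head += move
                if head == len(tape):
                    tape.append(blank)
            return "".join(tape).rstrip(blank)

        print(run_tm(delta, "10110"))   # -> 01001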


    The fact that it is impossible to build a computation that, given a
    representation of another computation and its input, determine for all
    cases if the computation will halt does nothing to further the
    question of are Turing Machines the most powerful form of computation.


    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 20:53:34 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/2026 8:38 PM, dart200 wrote:
    On 1/24/26 6:35 PM, olcott wrote:
    On 1/24/2026 6:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.


    It is categorically impossible to define a
    computation more powerful than that above.

    i mean turing machines are just a method to specify string
    transformations on the tape ???

    they are primarily defined by a large transition table for what
    operation is done based on the state of the machine...


    No, if you look at the Chomsky Hierarchy,
    they are much more powerful than finite
    state machines.

    https://en.wikipedia.org/wiki/Chomsky_hierarchy
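
    A standard example of the power gap being pointed at, sketched in
    Python (my illustration, not from either post): the language
    { a^n b^n : n >= 0 } needs unbounded counting, so no finite-state
    machine recognizes it, while anything with unbounded memory, a single
    counter, let alone a Turing machine, handles it.

        def is_anbn(s):
            """Recognize { a^n b^n : n >= 0 } with one unbounded counter."""
            n = i = 0
            while i < len(s) and s[i] == "a":   # count leading a's
                n += 1
                i += 1
            while i < len(s) and s[i] == "b":   # cancel them against b's
                n -= 1
                i += 1
            return i == len(s) and n == 0

        print(is_anbn("aaabbb"), is_anbn("aabbb"), is_anbn("ba"))   # True False False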


    The fact that it is impossible to build a computation that, given a
    representation of another computation and its input, determine for
    all cases if the computation will halt does nothing to further the
    question of are Turing Machines the most powerful form of computation.




    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.

    This required establishing a new foundation
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 19:12:27 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/26 6:53 PM, olcott wrote:
    On 1/24/2026 8:38 PM, dart200 wrote:
    On 1/24/26 6:35 PM, olcott wrote:
    On 1/24/2026 6:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.


    It is categorically impossible to define a
    computation more powerful than that above.

    i mean turing machines are just a method to specify string
    transformations on the tape ???

    they are primarily defined by a large transition table for what
    operation is done based on the state of the machine...


    No if you look at the Chomsky Hierarchy
    they are much more powerful than finite
    state machines.

    https://en.wikipedia.org/wiki/Chomsky_hierarchy

    sorry idk what u mean: Type-0 recursively enumerable languages,
    "recognized" by turing machines, are the most "powerful" in that they
    encompass the "most" computations ... ?

    ... huh, a bit unrelated, but it's interesting to note that despite
    being technically the same cardinality, the Type-0 languages encompass
    "more" computations than, say, the Type-1, Type-2, or Type-3 languages.

    sure we call this "power" and not "size", but the fundamental fact is
    that Type-0 includes the computations of the Type-1, 2, and 3 languages
    plus more that aren't included in any of those, so it includes "more"
    computations than the more limited types.



    The fact that it is impossible to build a computation that, given a
    representation of another computation and its input, determine for
    all cases if the computation will halt does nothing to further the
    question of are Turing Machines the most powerful form of computation.






    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 21:42:32 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/2026 9:12 PM, dart200 wrote:
    On 1/24/26 6:53 PM, olcott wrote:
    On 1/24/2026 8:38 PM, dart200 wrote:
    On 1/24/26 6:35 PM, olcott wrote:
    On 1/24/2026 6:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.


    It is categorically impossible to define a
    computation more powerful than that above.

    i mean turing machines are just a method to specify string
    transformations on the tape ???

    they are primarily defined by a large transition table for what
    operation is done based on the state of the machine...


    No if you look at the Chomsky Hierarchy
    they are much more powerful than finite
    state machines.

    https://en.wikipedia.org/wiki/Chomsky_hierarchy

    sorry idk what u mean: Type-0 recursively enumerable langauges,
    "recognized" by turing machines, are the most "powerful" in that they encompass the "most" computations ... ?


    It requires the most powerful machine to recognize them.
    Regular languages, and thus finite-state machines, are the weakest.

    ... huh a bit unrelated but it's interesting to note that despite being technically the same cardinality, the Type-0 language encompasses "more" computations than say Type-1 Type-2 or Type-3 language.

    sure we call this "power" and not "size", but the fundamental fact is
    that Type-0 includes computations of Type 1, 2, and 3 languages + more
    that aren't included in any of those, so it includes "more" computations than the more limited types.



    The fact that it is impossible to build a computation that, given a >>>>> representation of another computation and its input, determine for
    all cases if the computation will halt does nothing to further the
    question of are Turing Machines the most powerful form of computation. >>>>







    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.

    This required establishing a new foundation
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 20:03:15 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/26 7:42 PM, olcott wrote:
    On 1/24/2026 9:12 PM, dart200 wrote:
    On 1/24/26 6:53 PM, olcott wrote:
    On 1/24/2026 8:38 PM, dart200 wrote:
    On 1/24/26 6:35 PM, olcott wrote:
    On 1/24/2026 6:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.


    It is categorically impossible to define a
    computation more powerful than that above.

    i mean turing machines are just a method to specify string
    transformations on the tape ???

    they are primarily defined by a large transition table for what
    operation is done based on the state of the machine...


    No if you look at the Chomsky Hierarchy
    they are much more powerful than finite
    state machines.

    https://en.wikipedia.org/wiki/Chomsky_hierarchy

    sorry idk what u mean: Type-0 recursively enumerable langauges,
    "recognized" by turing machines, are the most "powerful" in that they
    encompass the "most" computations ... ?


    It requires the most powerful machine to recognize them.
    Regular thus finite-state-machines are the weakest.

    i literally said turing machine, not finite-state-automata... ???


    ... huh a bit unrelated but it's interesting to note that despite
    being technically the same cardinality, the Type-0 language
    encompasses "more" computations than say Type-1 Type-2 or Type-3
    language.

    sure we call this "power" and not "size", but the fundamental fact is
    that Type-0 includes computations of Type 1, 2, and 3 languages + more
    that aren't included in any of those, so it includes "more"
    computations than the more limited types.



    The fact that it is impossible to build a computation that, given >>>>>> a representation of another computation and its input, determine
    for all cases if the computation will halt does nothing to further >>>>>> the question of are Turing Machines the most powerful form of
    computation.










    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 22:06:22 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/2026 10:03 PM, dart200 wrote:
    On 1/24/26 7:42 PM, olcott wrote:
    On 1/24/2026 9:12 PM, dart200 wrote:
    On 1/24/26 6:53 PM, olcott wrote:
    On 1/24/2026 8:38 PM, dart200 wrote:
    On 1/24/26 6:35 PM, olcott wrote:
    On 1/24/2026 6:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.


    It is categorically impossible to define a
    computation more powerful than that above.

    i mean turing machines are just a method to specify string
    transformations on the tape ???

    they are primarily defined by a large transition table for what
    operation is done based on the state of the machine...


    No if you look at the Chomsky Hierarchy
    they are much more powerful than finite
    state machines.

    https://en.wikipedia.org/wiki/Chomsky_hierarchy

    sorry idk what u mean: Type-0 recursively enumerable langauges,
    "recognized" by turing machines, are the most "powerful" in that they
    encompass the "most" computations ... ?


    It requires the most powerful machine to recognize them.
    Regular thus finite-state-machines are the weakest.

    i literally said turing machine, not finite-state-automota... ???


    Yes you did. I reread what you said.
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.

    This required establishing a new foundation
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sat Jan 24 21:45:05 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/26 8:06 PM, olcott wrote:
    On 1/24/2026 10:03 PM, dart200 wrote:
    On 1/24/26 7:42 PM, olcott wrote:
    On 1/24/2026 9:12 PM, dart200 wrote:
    On 1/24/26 6:53 PM, olcott wrote:
    On 1/24/2026 8:38 PM, dart200 wrote:
    On 1/24/26 6:35 PM, olcott wrote:
    On 1/24/2026 6:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about. >>>>>>>>

    It is categorically impossible to define a
    computation more powerful than that above.

    i mean turing machines are just a method to specify string
    transformations on the tape ???

    they are primarily defined by a large transition table for what
    operation is done based on the state of the machine...


    No if you look at the Chomsky Hierarchy
    they are much more powerful than finite
    state machines.

    https://en.wikipedia.org/wiki/Chomsky_hierarchy

    sorry idk what u mean: Type-0 recursively enumerable langauges,
    "recognized" by turing machines, are the most "powerful" in that
    they encompass the "most" computations ... ?


    It requires the most powerful machine to recognize them.
    Regular thus finite-state-machines are the weakest.

    i literally said turing machine, not finite-state-automota... ???


    Yes you did. I reread what you said.


    i hope u've been paying at least a little bit of attention to what i've
    been writing recently.

    i have a more formal statement to make on my more recent developments,
    where i'm ultimately proposing to just ignore the undecidable paradoxes.

    might be worthy of another few r/logic posts, email spam, maybe even
    paying for academia.edu again...
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 25 13:21:51 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/26 9:24 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out. >>>>>>>>>>>>>>>
    one can only hope for so much sometimes 🙏




    I guess you don't understand the rules of logic. >>>>>>>>>>>>>>>>>
    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even if u >>>>>>>>>>>>>>>>>>> haven't understood it yet) that produces a consistent >>>>>>>>>>>>>>>>>>> deterministic result that is "not a computation". >>>>>>>>>>>>>>>>>>
    Because you get that result only by equivocating on >>>>>>>>>>>>>>>>>> your definitions.

    If the context is part of the inpt to make the output >>>>>>>>>>>>>>>>>> determistic from the input, then they fail to be >>>>>>>>>>>>>>>>>> usable as sub- computations as we can't control that >>>>>>>>>>>>>>>>>> context part of the input.

    When we look at just the controllable input for a sub- >>>>>>>>>>>>>>>>>> computation, the output is NOT a deterministic >>>>>>>>>>>>>>>>>> function of that inut.


    not sure what the fuck it's doing if it's not a >>>>>>>>>>>>>>>>>>> computation

    Its using hidden inputs that the caller can't control. >>>>>>>>>>>>>>>>>
    which we do all the time in normal programming, >>>>>>>>>>>>>>>>> something which apparently u think the tHeOrY oF >>>>>>>>>>>>>>>>> CoMpUtInG fails to encapsulate

    Right, but that isn't about computations.


    pretty crazy we do a bunch "non-computating" in the >>>>>>>>>>>>>>>>> normal act of programming computers

    Why?

    As I have said, "Computatations" is NOT about how modern >>>>>>>>>>>>>>>> computers work.

    I guess you are just showing that you fundamentally >>>>>>>>>>>>>>>> don't understand the problem field you are betting your >>>>>>>>>>>>>>>> life on.

    one would presume the fundamental theory of computing >>>>>>>>>>>>>>> would be general enough to encapsulate everything >>>>>>>>>>>>>>> computed by real world computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the >>>>>>>>>>>>>> computer as you know it.

    so ur saying it's outdated and needs updating in regards to >>>>>>>>>>>>> new things we do with computers that apparently turing >>>>>>>>>>>>> machines as a model don't have variations of ...

    No, it still handles that which it was developed for.

    well it was developed to be a general theory of computing, >>>>>>>>>>> and apparently modern computing has transcended that theory ... >>>>>>>>>>
    Not really.

    THe way modern processors work, "sub-routines" can fail to be >>>>>>>>>> computations, but whole programs will tend to be. Sub-routines >>>>>>>>>> CAN be built with care to fall under its guidance.

    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result but is >>>>>>> somehow not a compution!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY EQUIVALENT
    THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW

    But I can say that Computations as defined, are all that they can do.

    i will never care about you complaining about the fact the computations
    i'm talking about don't fit within the particular box you call a "Computation", because i just doesn't mean anything,

    In other words, you are just saying you don't care about computation
    theory, so why are you complaining about what it says about
    computations?



    u and the entire field can be wrong about how u specified "Computation",

    No, you just don't understand the WHY of computation theory.


    and that potential is well codified by the fact the ct-thesis is still a thesis and not a law.

    It might just be a thesis, because it IS an unprovable truth.


    i will not respond to more comments on this because it's a boring, lazy, non-argument that is fucking waste of both our time.


    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 25 13:23:05 2026
    From Newsgroup: comp.ai.philosophy

    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given a
    representation of another computation and its input, determine for all
    cases if the computation will halt does nothing to further the
    question of are Turing Machines the most powerful form of computation.

    contexts-aware machines compute functions:

    (context,input) -> output


    And what problems of interest to computation theory are of that form?

    Computation Theory was developed to answer questions of logic and mathematics.

    What logic or math is dependent on "context"?
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 25 13:04:44 2026
    From Newsgroup: comp.ai.philosophy

    On 1/25/26 10:23 AM, Richard Damon wrote:
    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given a
    representation of another computation and its input, determine for
    all cases if the computation will halt does nothing to further the
    question of are Turing Machines the most powerful form of computation.

    contexts-aware machines compute functions:

    (context,input) -> output


    And what problems of interest to computation theory are of that form?

    Computation Theory was to answer questions of logic and mathematics.

    What logic or math is dependent on "context"

    *mechanically computing* the answer *generally* is dependent on context,

    and ignoring that is the underlying cause of the halting problem

    clearly novel techniques will be required to resolve long-standing
    problems, eh richard???

    fuck
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 25 13:05:16 2026
    From Newsgroup: comp.ai.philosophy

    On 1/25/26 10:21 AM, Richard Damon wrote:
    On 1/24/26 9:24 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out. >>>>>>>>>>>>>>>>
    one can only hope for so much sometimes 🙏




    I guess you don't understand the rules of logic. >>>>>>>>>>>>>>>>>>
    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even if u >>>>>>>>>>>>>>>>>>>> haven't understood it yet) that produces a >>>>>>>>>>>>>>>>>>>> consistent deterministic result that is "not a >>>>>>>>>>>>>>>>>>>> computation".

    Because you get that result only by equivocating on >>>>>>>>>>>>>>>>>>> your definitions.

    If the context is part of the inpt to make the output >>>>>>>>>>>>>>>>>>> determistic from the input, then they fail to be >>>>>>>>>>>>>>>>>>> usable as sub- computations as we can't control that >>>>>>>>>>>>>>>>>>> context part of the input.

    When we look at just the controllable input for a >>>>>>>>>>>>>>>>>>> sub- computation, the output is NOT a deterministic >>>>>>>>>>>>>>>>>>> function of that inut.


    not sure what the fuck it's doing if it's not a >>>>>>>>>>>>>>>>>>>> computation

    Its using hidden inputs that the caller can't control. >>>>>>>>>>>>>>>>>>
    which we do all the time in normal programming, >>>>>>>>>>>>>>>>>> something which apparently u think the tHeOrY oF >>>>>>>>>>>>>>>>>> CoMpUtInG fails to encapsulate

    Right, but that isn't about computations.


    pretty crazy we do a bunch of "non-computating" in the normal act of
    programming computers

    Why?

    As I have said, "Computations" is NOT about how modern computers work.

    I guess you are just showing that you fundamentally don't understand
    the problem field you are betting your life on.

    one would presume the fundamental theory of computing would be general
    enough to encapsulate everything computed by real world computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the computer as
    you know it.

    so ur saying it's outdated and needs updating in regards to new things
    we do with computers that apparently turing machines as a model don't
    have variations of ...

    No, it still handles that which it was developed for.

    well it was developed to be a general theory of computing, and
    apparently modern computing has transcended that theory ...

    Not really.

    The way modern processors work, "sub-routines" can fail to be
    computations, but whole programs will tend to be. Sub-routines CAN be
    built with care to fall under its guidance.
    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result but is
    somehow not a computation!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY EQUIVALENT
    THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW

    But I can say that Computations as defined, are all that they can do.

    i will never care about you complaining about the fact that the
    computations i'm talking about don't fit within the particular box you
    call a "Computation", because it just doesn't mean anything,

    In other words, you are just saying you don't care about computation
    theory, and thus why are you complaining about what it says about computations.

    no i'm saying i don't care about ur particular definition, richard

    do better than trying to "define" me as wrong. meaning: put in the work
    to demonstrate actual contradictions




    u and the entire field can be wrong about how u specified "Computation",

    No, you just don't understand the WHY of computation theory.

    u don't give a why u stupid fucking retarded faggot, and u never will
    because the ct-thesis isn't proven, and u've already gone down the
    moronic hole of "maybe my favorite truth isn't even provable!!!??"



    and that potential is well codified by the fact the ct-thesis is still
    a thesis and not a law.

    It might just be a thesis, because it IS an unprovable truth.

    lookie u just accepting things as "muh unprovable truths". holy fucking hypocritical fucking faggot

    imagine if i pulled that argument out on you wildly unfair irrational bastard??

    u make a complete mockery of reason with the disgustingly idiot dogshit
    u post over and over again...

    holy fuck you dude eat a bag of dicks



    i will not respond to more comments on this because it's a boring,
    lazy non-argument that is a fucking waste of both our time.


    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 25 17:36:01 2026
    From Newsgroup: comp.ai.philosophy

    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:21 AM, Richard Damon wrote:
    On 1/24/26 9:24 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out. >>>>>>>>>>>>>>>>>
    one can only hope for so much sometimes 🙏 >>>>>>>>>>>>>>>>>



    I guess you don't understand the rules of logic. >>>>>>>>>>>>>>>>>>>
    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even if >>>>>>>>>>>>>>>>>>>>> u haven't understood it yet) that produces a >>>>>>>>>>>>>>>>>>>>> consistent deterministic result that is "not a >>>>>>>>>>>>>>>>>>>>> computation".

    Because you get that result only by equivocating on >>>>>>>>>>>>>>>>>>>> your definitions.

    If the context is part of the inpt to make the >>>>>>>>>>>>>>>>>>>> output determistic from the input, then they fail to >>>>>>>>>>>>>>>>>>>> be usable as sub- computations as we can't control >>>>>>>>>>>>>>>>>>>> that context part of the input.

    When we look at just the controllable input for a >>>>>>>>>>>>>>>>>>>> sub- computation, the output is NOT a deterministic >>>>>>>>>>>>>>>>>>>> function of that inut.


    not sure what the fuck it's doing if it's not a >>>>>>>>>>>>>>>>>>>>> computation

    Its using hidden inputs that the caller can't control. >>>>>>>>>>>>>>>>>>>
    which we do all the time in normal programming, >>>>>>>>>>>>>>>>>>> something which apparently u think the tHeOrY oF >>>>>>>>>>>>>>>>>>> CoMpUtInG fails to encapsulate

    Right, but that isn't about computations.


    pretty crazy we do a bunch "non-computating" in the >>>>>>>>>>>>>>>>>>> normal act of programming computers

    Why?

    As I have said, "Computatations" is NOT about how >>>>>>>>>>>>>>>>>> modern computers work.

    I guess you are just showing that you fundamentally >>>>>>>>>>>>>>>>>> don't understand the problem field you are betting >>>>>>>>>>>>>>>>>> your life on.

    one would presume the fundamental theory of computing >>>>>>>>>>>>>>>>> would be general enough to encapsulate everything >>>>>>>>>>>>>>>>> computed by real world computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES >>>>>>>>>>>>>>>> the computer as you know it.

    so ur saying it's outdated and needs updating in regards >>>>>>>>>>>>>>> to new things we do with computers that apparently turing >>>>>>>>>>>>>>> machines as a model don't have variations of ...

    No, it still handles that which it was developed for. >>>>>>>>>>>>>
    well it was developed to be a general theory of computing, >>>>>>>>>>>>> and apparently modern computing has transcended that >>>>>>>>>>>>> theory ...

    Not really.

    THe way modern processors work, "sub-routines" can fail to >>>>>>>>>>>> be computations, but whole programs will tend to be. Sub- >>>>>>>>>>>> routines CAN be built with care to fall under its guidance. >>>>>>>>>>>
    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result but >>>>>>>>> is somehow not a compution!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY EQUIVALENT
    THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW

    But I can say that Computations as defined, are all that they can do.

    i will never care about you complaining about the fact the
    computations i'm talking about don't fit within the particular box
    you call a "Computation", because i just doesn't mean anything,

    In other words, you are just saying you don't care about computation
    theory, and thus why are you complaining about what it says about
    computations.

    no i'm saying i don't care about ur particular definition, richard

    do better that trying to "define" me as wrong. meaning: put in the work
    to demonstrate actual contradictions

    In other words, you want me to prove there isn't a teapot in the
    asteroid belt.

    YOU are the one assuming things can be done, but refuse to actually try
    to define an actual algorithm that does so.

    An actual algorithm being an actual sequence of finite atomic steps, and
    using bounded loops.
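
    A rough, purely illustrative sketch of the distinction being drawn here,
    in Python (to match the "pseudo-pyscript" spirit of the thread); the
    function names are invented for the example and are not anyone's
    proposal from this discussion:

        def is_prime_bounded(n: int) -> bool:
            # finite atomic steps with a bounded loop: the trial divisor can
            # never exceed n, so an upper bound on the work is known up front
            if n < 2:
                return False
            for d in range(2, n):
                if n % d == 0:
                    return False
            return True

        def next_prime_unbounded(n: int) -> int:
            # an unbounded search: it happens to terminate (there is always a
            # larger prime), but no bound is written into the loop itself
            k = n + 1
            while True:
                if is_prime_bounded(k):
                    return k
                k += 1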





    u and the entire field can be wrong about how u specified "Computation",

    No, you just don't understand the WHY of computation theory.

    u don't give a why u stupid fucking retarded faggot, and u never will because the ct-thesis isn't proven, and u've already gone down the
    moronic hole of "maybe my favorite truth isn't even provable!!!??"

    I have mentioned it, but have you bothered to look into it?

    Computation Theory was developed to see if "Computations" of this sort
    could be used to generate proofs of the great problems of mathematics
    and logic.

    It was hoped that it would provide a solution to the then seemingly
    intractable problems that seemed to have an answer which just couldn't
    be found.

    Instead, it showed that it was a provable fact that some problems would
    not have a solution. And thus we had to accept that we couldn't prove
    everything we might want.




    and that potential is well codified by the fact the ct-thesis is
    still a thesis and not a law.

    It might just be a thesis, because it IS an unprovable truth.

    lookie u just accepting things as "muh unprovable truths". holy fucking hypocritical fucking faggot

    It isn't "just accepting", it is looking at the proofs and understanding
    the logic of them.

    imagine if i pulled that argument out on you wildly unfair irrational bastard??

    But all you can do is make baseless claims. My statements of unprovable
    truths are based on real proofs, which seem to be beyond your ability to
    understand.


    u make a complete mockery of reason with the disgustingly idiot dogshit
    u post over and over again...

    How is looking at proofs and accepting their results a mockery of reason?

    It is the rejection of proofs and thinking things must be different that
    is the mockery.


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,



    i will not respond to more comments on this because it's a boring,
    lazy, non-argument that is fucking waste of both our time.





    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 25 17:40:12 2026
    From Newsgroup: comp.ai.philosophy

    On 1/25/26 4:04 PM, dart200 wrote:
    On 1/25/26 10:23 AM, Richard Damon wrote:
    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given a
    representation of another computation and its input, determines for
    all cases whether the computation will halt does nothing to settle the
    question of whether Turing Machines are the most powerful form of
    computation.

    context-aware machines compute functions:

    (context,input) -> output


    And what problems of interest to computation theory are of that form?

    Computation Theory was to answer questions of logic and mathematics.

    What logic or math is dependent on "context"

    *mechanically computing* the answer *generally* is dependent on context,

    Really?

    Most problems don't care about the context of the person asking it, just
    the context of the thing being looked at.


    and ignoring that is the underlying cause of the halting problem

    Nope.


    clearly novel techniques will be required to resolve long standing
    problems, eh richard???

    Or just lying as you try.

    I guess you think the speed of light is just a suggestion. (Yes, there
    are some thoughts about how to break it, but they require things totally outside our current physics).

    Yes, there may be a new definition of "Computations" that is actually
    useful and generates answers to some things we currently think of as
    uncomputable, but until you can actually figure out what that is,
    assuming it exists is just science fiction.


    fuck


    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 25 21:56:54 2026
    From Newsgroup: comp.ai.philosophy

    On 1/25/26 2:36 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:21 AM, Richard Damon wrote:
    On 1/24/26 9:24 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out. >>>>>>>>>>>>>>>>>>
    one can only hope for so much sometimes 🙏 >>>>>>>>>>>>>>>>>>



    I guess you don't understand the rules of logic. >>>>>>>>>>>>>>>>>>>>
    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even if >>>>>>>>>>>>>>>>>>>>>> u haven't understood it yet) that produces a >>>>>>>>>>>>>>>>>>>>>> consistent deterministic result that is "not a >>>>>>>>>>>>>>>>>>>>>> computation".

    Because you get that result only by equivocating on >>>>>>>>>>>>>>>>>>>>> your definitions.

    If the context is part of the inpt to make the >>>>>>>>>>>>>>>>>>>>> output determistic from the input, then they fail >>>>>>>>>>>>>>>>>>>>> to be usable as sub- computations as we can't >>>>>>>>>>>>>>>>>>>>> control that context part of the input. >>>>>>>>>>>>>>>>>>>>>
    When we look at just the controllable input for a >>>>>>>>>>>>>>>>>>>>> sub- computation, the output is NOT a deterministic >>>>>>>>>>>>>>>>>>>>> function of that inut.


    not sure what the fuck it's doing if it's not a >>>>>>>>>>>>>>>>>>>>>> computation

    Its using hidden inputs that the caller can't control. >>>>>>>>>>>>>>>>>>>>
    which we do all the time in normal programming, >>>>>>>>>>>>>>>>>>>> something which apparently u think the tHeOrY oF >>>>>>>>>>>>>>>>>>>> CoMpUtInG fails to encapsulate

    Right, but that isn't about computations. >>>>>>>>>>>>>>>>>>>

    pretty crazy we do a bunch "non-computating" in the >>>>>>>>>>>>>>>>>>>> normal act of programming computers

    Why?

    As I have said, "Computatations" is NOT about how >>>>>>>>>>>>>>>>>>> modern computers work.

    I guess you are just showing that you fundamentally >>>>>>>>>>>>>>>>>>> don't understand the problem field you are betting >>>>>>>>>>>>>>>>>>> your life on.

    one would presume the fundamental theory of computing >>>>>>>>>>>>>>>>>> would be general enough to encapsulate everything >>>>>>>>>>>>>>>>>> computed by real world computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES >>>>>>>>>>>>>>>>> the computer as you know it.

    so ur saying it's outdated and needs updating in regards >>>>>>>>>>>>>>>> to new things we do with computers that apparently >>>>>>>>>>>>>>>> turing machines as a model don't have variations of ... >>>>>>>>>>>>>>>
    No, it still handles that which it was developed for. >>>>>>>>>>>>>>
    well it was developed to be a general theory of computing, >>>>>>>>>>>>>> and apparently modern computing has transcended that >>>>>>>>>>>>>> theory ...

    Not really.

    THe way modern processors work, "sub-routines" can fail to >>>>>>>>>>>>> be computations, but whole programs will tend to be. Sub- >>>>>>>>>>>>> routines CAN be built with care to fall under its guidance. >>>>>>>>>>>>
    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result but >>>>>>>>>> is somehow not a compution!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY EQUIVALENT >>>>>> THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW

    But I can say that Computations as defined, are all that they can do. >>>>
    i will never care about you complaining about the fact the
    computations i'm talking about don't fit within the particular box
    you call a "Computation", because i just doesn't mean anything,

    In other words, you are just saying you don't care about computation
    theory, and thus why are you complaining about what it says about
    computations.

    no i'm saying i don't care about ur particular definition, richard

    do better that trying to "define" me as wrong. meaning: put in the
    work to demonstrate actual contradictions

    In other words, you want me to prove there isn't a teapot in the
    asteroid belt.

    lol, what. asking for a proof of contradiction is now akin to russell's
    teapot???

    are u even doing math here or is this just a giant definist fallacy
    shitshow???


    YOU are the one assuming things can be done, but refuse to actually try
    to define an actual algorithm that does so.

    An actual algorithm being an actual sequence of finite atomic steps, and using bounded loops.





    u and the entire field can be wrong about how u specified
    "Computation",

    No, you just don't understand the WHY of computation theory.

    u don't give a why u stupid fucking retarded faggot, and u never will
    because the ct-thesis isn't proven, and u've already gone down the
    moronic hole of "maybe my favorite truth isn't even provable!!!??"

    I have mentioned it, but have you bothered to look into it?

    Comptation Theory was developed to see if "Computations" of this sort
    could be used to generate proofs of the great problems of mathematics
    and logic.

    It was hoped that it would provide a solution to the then curretly
    seeming intractable problems that seemed to have an answer, but they
    just couldn't be found.

    Insteed, it showed that it was a provable fact that some problems would
    not have a solution. And thus we had to accept that we couldn't prove everything we might want.


    and that fact was only shown, for computing in regards to itself, by
    using self-referential set-classification paradoxes, like the halting
    problem

    which is the part i'm trying to reconcile, that very specific (but quite
    broad within tm computing) problem...

    i'm not here to spoon feed humanity a general decision algo, cause we assuredly do not have enough number theory to build that at this time.

    i'm trying to deal with all the claims of hubris that such a general
    decision algo *cannot* exist, by showing *how* it could exist alongside
    the potential for self-referential set-classification paradoxes:

    either by showing that we can just ignore the paradoxes, or by utilizing reflective turing machines to decide on them in a context aware manner,
    both are valid resolutions.

    i know u want me to spoon feed you all the answers here, but i'm one
    freaking dude, with very limited time, and training, stuck with
    discussion that is willfully antagonistic and soaked with fallacy after fallacy,

    turing spent years coming up with his turing jump nonsense, on a brand
    new fresh theory, and with people that likely actually tried to be
    collaborative,

    while i've gotta reconcile a massive almost century old bandwagon, /thru argument alone/

    i don't even have the luxury of pointing to an experiment, i've gotta
    come up with a set of purely logical arguments that stand entirely on
    their own right. einstein had it easier




    and that potential is well codified by the fact the ct-thesis is
    still a thesis and not a law.

    It might just be a thesis, because it IS an unprovable truth.

    lookie u just accepting things as "muh unprovable truths". holy
    fucking hypocritical fucking faggot

    It isn't "just accepting", it is looking at the proofs and understanding
    the logic of them.

    YOU HAVEN'T PROVEN THE CT-THESIS, MY GOD


    imagine if i pulled that argument out on you wildly unfair irrational
    bastard??

    But all you can do is make baseless claims. My statements of unprovable truths is based on real proofs, that seem to be beyond you ability to understand.

    YOU ALSO HAVEN'T PROVEN THAT THE CT-THESIS IS UNPROVABLE, MY FUCKING GOD



    u make a complete mockery of reason with the disgustingly idiot
    dogshit u post over and over again...

    How is looking at proofs and accepting their results.

    BECAUSE UR JUST ARBITRARILY OVERGENERALIZING WITHOUT PROOF,

    OH MY FUCKING GOD

    godel's result is a curse on this species even if he wasn't wrong to
    produce it


    It is the rejection of proofs and thinking things must be different that
    is the mockery.


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick ✌️




    i will not respond to more comments on this because it's a boring,
    lazy, non-argument that is fucking waste of both our time.





    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Sun Jan 25 22:50:18 2026
    From Newsgroup: comp.ai.philosophy

    On 1/25/26 2:40 PM, Richard Damon wrote:
    On 1/25/26 4:04 PM, dart200 wrote:
    On 1/25/26 10:23 AM, Richard Damon wrote:
    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given a
    representation of another computation and its input, determine for
    all cases if the computation will halt does nothing to further the
    question of are Turing Machines the most powerful form of computation.
    contexts-aware machines compute functions:

    (context,input) -> output


    And what problems of interest to computation theory are of that form?

    Computation Theory was to answer questions of logic and mathematics.

    What logic or math is dependent on "context"

    *mechanically computing* the answer *generally* is dependent on context,

    Really?

    Most problems don't care about the context of the person asking it, just
    the context of the thing being looked at.

    well, yes, most problems don't involve pathologically querying a decider specifically for the purpose of then contradicting the decision... 🙄



    and ignoring that is the underlying cause of the halting problem

    Nope.


    clearly novel techniques will be required to resolve long standing
    problems, eh richard???

    Or just lying as you try.

    I guess you think the speed of light is just a suggestion. (Yes, there
    are some thoughts about how to break it, but they require things totally outside our current physics).

    Yes, there may be a new definition of "Computations" that is actually useful, and generates answers to some things we currently think as uncomputable, but until you can actually figure out what that is,
    assuming it is just science fiction.

    or u'd just call it lying over and over again with no serious
    consideration to what's really being said ...



    fuck


    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 26 01:35:45 2026
    From Newsgroup: comp.ai.philosophy

    On 1/25/26 10:50 PM, dart200 wrote:
    On 1/25/26 2:40 PM, Richard Damon wrote:
    On 1/25/26 4:04 PM, dart200 wrote:
    On 1/25/26 10:23 AM, Richard Damon wrote:
    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given >>>>>> a representation of another computation and its input, determine
    for all cases if the computation will halt does nothing to further >>>>>> the question of are Turing Machines the most powerful form of
    computation.

    contexts-aware machines compute functions:

    (context,input) -> output


    And what problems of interest to computation theory are of that form?

    Computation Theory was to answer questions of logic and mathematics.

    What logic or math is dependent on "context"

    *mechanically computing* the answer *generally* is dependent on context,

    Really?

    Most problems don't care about the context of the person asking it,
    just the context of the thing being looked at.

    well, yes, most problems don't involve pathologically querying a decider specifically for the purpose of then contradicting the decision... 🙄

    or put more generally:

    well, yes, most problems don't involve pathologically querying the truth specifically for the purpose of then contradicting the truth... 🫩🫩🫩
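
    For readers following along, a minimal Python sketch of the "pathological
    query" pattern being referred to: a program D feeds itself to a claimed
    halt decider H and then does the opposite of whatever H predicts. H and D
    are the stand-in names of the classic textbook construction, not code
    taken from this thread:

        def H(prog, arg) -> bool:
            # a pretend total halt decider: True iff prog(arg) would halt
            raise NotImplementedError("no such total decider exists")

        def D(prog):
            # ask the decider about prog applied to itself...
            if H(prog, prog):
                while True:      # ...and if it says "halts", loop forever
                    pass
            return None          # ...and if it says "loops", halt at once

        # The trap: H(D, D) == True forces D(D) to loop, while
        # H(D, D) == False forces D(D) to halt, so H is wrong about D
        # either way.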




    and ignoring that is the underlying cause of the halting problem

    Nope.


    clearly novel techniques will be required to resolve long standing
    problems, eh richard???

    Or just lying as you try.

    I guess you think the speed of light is just a suggestion. (Yes, there
    are some thoughts about how to break it, but they require things
    totally outside our current physics).

    Yes, there may be a new definition of "Computations" that is actually
    useful, and generates answers to some things we currently think as
    uncomputable, but until you can actually figure out what that is,
    assuming it is just science fiction.

    or u'd just call it lying over and over again with no serious
    consideration to what's really being said ...



    fuck



    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 26 11:39:59 2026
    From Newsgroup: comp.ai.philosophy

    On 1/26/26 12:56 AM, dart200 wrote:
    On 1/25/26 2:36 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:21 AM, Richard Damon wrote:
    On 1/24/26 9:24 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>> On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out. >>>>>>>>>>>>>>>>>>>
    one can only hope for so much sometimes 🙏 >>>>>>>>>>>>>>>>>>>



    I guess you don't understand the rules of logic. >>>>>>>>>>>>>>>>>>>>>
    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even >>>>>>>>>>>>>>>>>>>>>>> if u haven't understood it yet) that produces a >>>>>>>>>>>>>>>>>>>>>>> consistent deterministic result that is "not a >>>>>>>>>>>>>>>>>>>>>>> computation".

    Because you get that result only by equivocating >>>>>>>>>>>>>>>>>>>>>> on your definitions.

    If the context is part of the inpt to make the >>>>>>>>>>>>>>>>>>>>>> output determistic from the input, then they fail >>>>>>>>>>>>>>>>>>>>>> to be usable as sub- computations as we can't >>>>>>>>>>>>>>>>>>>>>> control that context part of the input. >>>>>>>>>>>>>>>>>>>>>>
    When we look at just the controllable input for a >>>>>>>>>>>>>>>>>>>>>> sub- computation, the output is NOT a >>>>>>>>>>>>>>>>>>>>>> deterministic function of that inut. >>>>>>>>>>>>>>>>>>>>>>

    not sure what the fuck it's doing if it's not a >>>>>>>>>>>>>>>>>>>>>>> computation

    Its using hidden inputs that the caller can't >>>>>>>>>>>>>>>>>>>>>> control.

    which we do all the time in normal programming, >>>>>>>>>>>>>>>>>>>>> something which apparently u think the tHeOrY oF >>>>>>>>>>>>>>>>>>>>> CoMpUtInG fails to encapsulate

    Right, but that isn't about computations. >>>>>>>>>>>>>>>>>>>>

    pretty crazy we do a bunch "non-computating" in the >>>>>>>>>>>>>>>>>>>>> normal act of programming computers

    Why?

    As I have said, "Computatations" is NOT about how >>>>>>>>>>>>>>>>>>>> modern computers work.

    I guess you are just showing that you fundamentally >>>>>>>>>>>>>>>>>>>> don't understand the problem field you are betting >>>>>>>>>>>>>>>>>>>> your life on.

    one would presume the fundamental theory of computing >>>>>>>>>>>>>>>>>>> would be general enough to encapsulate everything >>>>>>>>>>>>>>>>>>> computed by real world computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES >>>>>>>>>>>>>>>>>> the computer as you know it.

    so ur saying it's outdated and needs updating in >>>>>>>>>>>>>>>>> regards to new things we do with computers that >>>>>>>>>>>>>>>>> apparently turing machines as a model don't have >>>>>>>>>>>>>>>>> variations of ...

    No, it still handles that which it was developed for. >>>>>>>>>>>>>>>
    well it was developed to be a general theory of >>>>>>>>>>>>>>> computing, and apparently modern computing has
    transcended that theory ...

    Not really.

    THe way modern processors work, "sub-routines" can fail to >>>>>>>>>>>>>> be computations, but whole programs will tend to be. Sub- >>>>>>>>>>>>>> routines CAN be built with care to fall under its guidance. >>>>>>>>>>>>>
    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result >>>>>>>>>>> but is somehow not a compution!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY
    EQUIVALENT THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW

    But I can say that Computations as defined, are all that they can do. >>>>>
    i will never care about you complaining about the fact the
    computations i'm talking about don't fit within the particular box
    you call a "Computation", because i just doesn't mean anything,

    In other words, you are just saying you don't care about computation
    theory, and thus why are you complaining about what it says about
    computations.

    no i'm saying i don't care about ur particular definition, richard

    do better that trying to "define" me as wrong. meaning: put in the
    work to demonstrate actual contradictions

    In other words, you want me to prove there isn't a teapot in the
    asteroid belt.

    lol, what. asking for a proof of contradiction is now akin to russel's teapot???

    You are asking me to disprove something that you won't (and can't) define.


    are u even doing math here or this just a giant definist fallacy
    shitshow???

    No, you just don't know what that means.



    YOU are the one assuming things can be done, but refuse to actually
    try to define an actual algorithm that does so.

    An actual algorithm being an actual sequence of finite atomic steps,
    and using bounded loops.





    u and the entire field can be wrong about how u specified
    "Computation",

    No, you just don't understand the WHY of computation theory.

    u don't give a why u stupid fucking retarded faggot, and u never will
    because the ct-thesis isn't proven, and u've already gone down the
    moronic hole of "maybe my favorite truth isn't even provable!!!??"

    I have mentioned it, but have you bothered to look into it?

    Comptation Theory was developed to see if "Computations" of this sort
    could be used to generate proofs of the great problems of mathematics
    and logic.

    It was hoped that it would provide a solution to the then curretly
    seeming intractable problems that seemed to have an answer, but they
    just couldn't be found.

    Insteed, it showed that it was a provable fact that some problems
    would not have a solution. And thus we had to accept that we couldn't
    prove everything we might want.


    and that fact was only shown, for computing in regards to itself, by
    using self-referential set-classification paradoxes, like the halting problem





    which is the part i'm trying to reconcile, that very specific (but quite broad within tm computing) problem...

    But you are only saying that there must be something else (that is,
    Russell's teapot must exist) but can't show it.

    Thus, it is incumbent on YOU to prove or at least define what you are
    claiming to exist.


    i'm not here to spoon feed humanity a general decision algo, cause we assuredly do not have enough number theory to build that at this time.

    It seems you are not here to do anything constructive, only engage in
    flights of fancy imagining things that are not, but assuming they are.


    i'm trying to deal with all the claims of hubris that such a general decision algo *cannot* exist, by showing *how* it could exist alongside
    the potential for self-referential set-classification paradoxes:

    either by showing that we can just ignore the paradoxes, or by utilizing reflective turing machines to decide on them in a context aware manner,
    both are valid resolutions.

    In other words, by ignoring the reality,


    i know u want me to spoon feed you all the answers here, but i'm one freaking dude, with very limited time, and training, stuck with
    discussion that is willfully antagonistic and soaked with fallacy after fallacy,

    turing spend years coming up with his turing jump nonsense, on a brand
    new fresh theory, and people that likely actually tried to be
    collaborative,

    while i've gotta reconcile a massive almost century old bandwagon, /thru argument alone/

    i don't even have the luxury of pointing to an experiment, i've gotta
    come up with a set of purely logical arguments that stand entirely on
    their own right. einstein had it easier

    But, if you listened to people to make sure you were working on solid
    ground, and not flights of fancy, it might be easier, or at least become evident that it is a dead end.

    Even Einstein admitted that his theory was likely "wrong", but was
    better than what we currently had, and WOULD be refined in the future.
    Just like classical mechanics were "wrong" in some cases, but close
    enough for most of the work that they were being used for.

    In the same way, yes, perhaps there is a refinement needed to the
    definition of what a "Computation" is, but just like Einstein's theory,
    it doesn't change the results significantly for what we currently can see.

    Your issue is you need to find that "improved" definition that still
    works for the common cases that we know about, before you can start to
    work out what it implies.

    STARTING with assumptions of that implication is like assuming you can
    find a road network to drive from New York to Paris, France.






    and that potential is well codified by the fact the ct-thesis is
    still a thesis and not a law.

    It might just be a thesis, because it IS an unprovable truth.

    lookie u just accepting things as "muh unprovable truths". holy
    fucking hypocritical fucking faggot

    It isn't "just accepting", it is looking at the proofs and
    understanding the logic of them.

    YOU HAVEN'T PROVEN THE CT-THESIS, MY GOD


    imagine if i pulled that argument out on you wildly unfair irrational
    bastard??

    But all you can do is make baseless claims. My statements of
    unprovable truths is based on real proofs, that seem to be beyond you
    ability to understand.

    YOU ALSO HAVEN'T PROVEN THAT THE CT-THESIS IS UNPROVABLE, MY FUCKING GOD



    u make a complete mockery of reason with the disgustingly idiot
    dogshit u post over and over again...

    How is looking at proofs and accepting their results.

    BECAUSE UR JUST ARBITRARILY OVERGENERALIZING WITHOUT PROOF,

    OH MY FUCKING GOD

    godel's result is a curse on this species even if he wasn't wrong to
    produce it


    It is the rejection of proofs and thinking things must be different
    that is the mockery.


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick ✌️




    i will not respond to more comments on this because it's a boring,
    lazy, non-argument that is fucking waste of both our time.








    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 26 11:43:08 2026
    From Newsgroup: comp.ai.philosophy

    On 1/26/26 1:50 AM, dart200 wrote:
    On 1/25/26 2:40 PM, Richard Damon wrote:
    On 1/25/26 4:04 PM, dart200 wrote:
    On 1/25/26 10:23 AM, Richard Damon wrote:
    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given >>>>>> a representation of another computation and its input, determine
    for all cases if the computation will halt does nothing to further >>>>>> the question of are Turing Machines the most powerful form of
    computation.

    contexts-aware machines compute functions:

    (context,input) -> output


    And what problems of interest to computation theory are of that form?

    Computation Theory was to answer questions of logic and mathematics.

    What logic or math is dependent on "context"

    *mechanically computing* the answer *generally* is dependent on context,

    Really?

    Most problems don't care about the context of the person asking it,
    just the context of the thing being looked at.

    well, yes, most problems don't involve pathologically querying a decider specifically for the purpose of then contradicting the decision... 🙄

    Which is a problem that doesn't actually depend on the context of the
    asker, so using the context just makes you wrong.




    and ignoring that is the underlying cause of the halting problem

    Nope.


    clearly novel techniques will be required to resolve long standing
    problems, eh richard???

    Or just lying as you try.

    I guess you think the speed of light is just a suggestion. (Yes, there
    are some thoughts about how to break it, but they require things
    totally outside our current physics).

    Yes, there may be a new definition of "Computations" that is actually
    useful, and generates answers to some things we currently think as
    uncomputable, but until you can actually figure out what that is,
    assuming it is just science fiction.

    or u'd just call it lying over and over again with no serious
    consideration to what's really being said ...

    Yep, that is a good description of what you are doing.

    You forget to consider the topic you are talking about.

    Either you accept the current definitions, or you actually supply your
    own new ones. Just assuming you can change them without actually doing
    so makes your argument baseless.




    fuck




    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 26 11:43:39 2026
    From Newsgroup: comp.ai.philosophy

    On 1/26/26 8:39 AM, Richard Damon wrote:
    On 1/26/26 12:56 AM, dart200 wrote:
    On 1/25/26 2:36 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:21 AM, Richard Damon wrote:
    On 1/24/26 9:24 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>> On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out. >>>>>>>>>>>>>>>>>>>>
    one can only hope for so much sometimes 🙏 >>>>>>>>>>>>>>>>>>>>



    I guess you don't understand the rules of logic. >>>>>>>>>>>>>>>>>>>>>>
    also not an argument

    Again, YOUR PROBLEM.





    it's pretty crazy i can produce a machine (even >>>>>>>>>>>>>>>>>>>>>>>> if u haven't understood it yet) that produces a >>>>>>>>>>>>>>>>>>>>>>>> consistent deterministic result that is "not a >>>>>>>>>>>>>>>>>>>>>>>> computation".

    Because you get that result only by equivocating >>>>>>>>>>>>>>>>>>>>>>> on your definitions.

    If the context is part of the inpt to make the >>>>>>>>>>>>>>>>>>>>>>> output determistic from the input, then they fail >>>>>>>>>>>>>>>>>>>>>>> to be usable as sub- computations as we can't >>>>>>>>>>>>>>>>>>>>>>> control that context part of the input. >>>>>>>>>>>>>>>>>>>>>>>
    When we look at just the controllable input for a >>>>>>>>>>>>>>>>>>>>>>> sub- computation, the output is NOT a >>>>>>>>>>>>>>>>>>>>>>> deterministic function of that inut. >>>>>>>>>>>>>>>>>>>>>>>

    not sure what the fuck it's doing if it's not a >>>>>>>>>>>>>>>>>>>>>>>> computation

    Its using hidden inputs that the caller can't >>>>>>>>>>>>>>>>>>>>>>> control.

    which we do all the time in normal programming, >>>>>>>>>>>>>>>>>>>>>> something which apparently u think the tHeOrY oF >>>>>>>>>>>>>>>>>>>>>> CoMpUtInG fails to encapsulate

    Right, but that isn't about computations. >>>>>>>>>>>>>>>>>>>>>

    pretty crazy we do a bunch "non-computating" in >>>>>>>>>>>>>>>>>>>>>> the normal act of programming computers >>>>>>>>>>>>>>>>>>>>>
    Why?

    As I have said, "Computatations" is NOT about how >>>>>>>>>>>>>>>>>>>>> modern computers work.

    I guess you are just showing that you fundamentally >>>>>>>>>>>>>>>>>>>>> don't understand the problem field you are betting >>>>>>>>>>>>>>>>>>>>> your life on.

    one would presume the fundamental theory of >>>>>>>>>>>>>>>>>>>> computing would be general enough to encapsulate >>>>>>>>>>>>>>>>>>>> everything computed by real world computers, no??? >>>>>>>>>>>>>>>>>>>
    Why?

    Remember, the fundamental theory of Computing >>>>>>>>>>>>>>>>>>> PREDATES the computer as you know it.

    so ur saying it's outdated and needs updating in >>>>>>>>>>>>>>>>>> regards to new things we do with computers that >>>>>>>>>>>>>>>>>> apparently turing machines as a model don't have >>>>>>>>>>>>>>>>>> variations of ...

    No, it still handles that which it was developed for. >>>>>>>>>>>>>>>>
    well it was developed to be a general theory of >>>>>>>>>>>>>>>> computing, and apparently modern computing has >>>>>>>>>>>>>>>> transcended that theory ...

    Not really.

    THe way modern processors work, "sub-routines" can fail >>>>>>>>>>>>>>> to be computations, but whole programs will tend to be. >>>>>>>>>>>>>>> Sub- routines CAN be built with care to fall under its >>>>>>>>>>>>>>> guidance.

    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result >>>>>>>>>>>> but is somehow not a compution!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY
    EQUIVALENT THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW

    But I can say that Computations as defined, are all that they can >>>>>>> do.

    i will never care about you complaining about the fact the
    computations i'm talking about don't fit within the particular box >>>>>> you call a "Computation", because i just doesn't mean anything,

    In other words, you are just saying you don't care about
    computation theory, and thus why are you complaining about what it
    says about computations.

    no i'm saying i don't care about ur particular definition, richard

    do better that trying to "define" me as wrong. meaning: put in the
    work to demonstrate actual contradictions

    In other words, you want me to prove there isn't a teapot in the
    asteroid belt.

    lol, what. asking for a proof of contradiction is now akin to russel's
    teapot???

    You are asking me to disprove something that you won't (and can't) define.

    i tried to but ur incredibly uncooperative



    are u even doing math here or this just a giant definist fallacy
    shitshow???

    No, you just don't know what that means.



    YOU are the one assuming things can be done, but refuse to actually
    try to define an actual algorithm that does so.

    An actual algorithm being an actual sequence of finite atomic steps,
    and using bounded loops.





    u and the entire field can be wrong about how u specified
    "Computation",

    No, you just don't understand the WHY of computation theory.

    u don't give a why u stupid fucking retarded faggot, and u never
    will because the ct-thesis isn't proven, and u've already gone down
    the moronic hole of "maybe my favorite truth isn't even provable!!!??"

    I have mentioned it, but have you bothered to look into it?

    Comptation Theory was developed to see if "Computations" of this sort
    could be used to generate proofs of the great problems of mathematics
    and logic.

    It was hoped that it would provide a solution to the then curretly
    seeming intractable problems that seemed to have an answer, but they
    just couldn't be found.

    Insteed, it showed that it was a provable fact that some problems
    would not have a solution. And thus we had to accept that we couldn't
    prove everything we might want.


    and that fact was only shown, for computing in regards to itself, by
    using self-referential set-classification paradoxes, like the halting
    problem


    which is the part i'm trying to reconcile, that very specific (but
    quite broad within tm computing) problem...

    But you are only saying that there must be something else (that is
    Russel's teapot must exist) but can't show it.

    Thus, it is encumbent on YOU to prove or at least define what you are claiming to exist.


    i'm not here to spoon feed humanity a general decision algo, cause we
    assuredly do not have enough number theory to build that at this time.

    It seems you are not here to do anything constructive, only engage in flights of fancy imagining things that are not, but assuming they are.

    debunking a widely accepted misproof is constructive in ways neither of
    us can imagine

    i don't need to make ALL the progress in order to make SOME progress.
    i'm *extremely* tired of people spouting perfectionist fallacies at me

    (oooo, add that fallacy to the list rick! what number are we at???)



    i'm trying to deal with all the claims of hubris that such a general
    decision algo *cannot* exist, by showing *how* it could exist
    alongside the potential for self-referential set-classification
    paradoxes:

    either by showing that we can just ignore the paradoxes, or by
    utilizing reflective turing machines to decide on them in a context
    aware manner, both are valid resolutions.

    In other words, by ignoring the reality,

    gaslighting again



    i know u want me to spoon feed you all the answers here, but i'm one
    freaking dude, with very limited time, and training, stuck with
    discussion that is willfully antagonistic and soaked with fallacy
    after fallacy,

    turing spend years coming up with his turing jump nonsense, on a brand
    new fresh theory, and people that likely actually tried to be
    collaborative,

    while i've gotta reconcile a massive almost century old bandwagon, /
    thru argument alone/

    i don't even have the luxury of pointing to an experiment, i've gotta
    come up with a set of purely logical arguments that stand entirely on
    their own right. einstein had it easier

    But, if you listened to people to make sure you were working on solid ground, and not flights of fancy, it might be easier, or at least become evident that it is a dead end.

    lol, u claim it's a dead end but can't even explain why other than by
    repeatedly crying definist fallacy over and over again. heck u can't even
    explain to me what i think tbh, and i know u can't.

    i refuse to buy into fallacy gishgallop, and that's a good thing


    Even Einstein admitted that his theory was likely "wrong", but was
    better than what we currently had, and WOULD be refined in the future.
    Just like classical mechanics were "wrong" in some cases, but close
    enough for most of the work that they were being used for.

    In the same way, yes, perhaps there is a refinement needed to the
    definition of what a "Computation" is, but just like Einstein's theory,
    it doesn't change the results significantly for what we currently can see.

    u haven't acknowledged any specific refinement, so u can't say that it
    can or cannot change in terms of results. ur just begging the question
    due to hubris.


    Your issue is you need to find that "improved" definition that still
    works for the common cases that we know about, before you can start to
    work out what it implies.

    STARTING with assumptions of that implicaion, is like assuming you can
    find a road network to drive from New York to Paris, France.






    and that potential is well codified by the fact the ct-thesis is
    still a thesis and not a law.

    It might just be a thesis, because it IS an unprovable truth.

    lookie u just accepting things as "muh unprovable truths". holy
    fucking hypocritical fucking faggot

    It isn't "just accepting", it is looking at the proofs and
    understanding the logic of them.

    YOU HAVEN'T PROVEN THE CT-THESIS, MY GOD


    imagine if i pulled that argument out on you wildly unfair
    irrational bastard??

    But all you can do is make baseless claims. My statements of
    unprovable truths is based on real proofs, that seem to be beyond you
    ability to understand.

    YOU ALSO HAVEN'T PROVEN THAT THE CT-THESIS IS UNPROVABLE, MY FUCKING GOD



    u make a complete mockery of reason with the disgustingly idiot
    dogshit u post over and over again...

    How is looking at proofs and accepting their results.

    BECAUSE UR JUST ARBITRARILY OVERGENERALIZING WITHOUT PROOF,

    OH MY FUCKING GOD

    godel's result is a curse on this species even if he wasn't wrong to
    produce it


    It is the rejection of proofs and thinking things must be different
    that is the mockery.


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick ✌️




    i will not respond to more comments on this because it's a boring, >>>>>> lazy, non-argument that is fucking waste of both our time.








    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 26 11:45:58 2026
    From Newsgroup: comp.ai.philosophy

    On 1/26/26 8:43 AM, Richard Damon wrote:
    On 1/26/26 1:50 AM, dart200 wrote:
    On 1/25/26 2:40 PM, Richard Damon wrote:
    On 1/25/26 4:04 PM, dart200 wrote:
    On 1/25/26 10:23 AM, Richard Damon wrote:
    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given >>>>>>> a representation of another computation and its input, determine >>>>>>> for all cases if the computation will halt does nothing to
    further the question of are Turing Machines the most powerful
    form of computation.

    contexts-aware machines compute functions:

    (context,input) -> output


    And what problems of interest to computation theory are of that form? >>>>>
    Computation Theory was to answer questions of logic and mathematics. >>>>>
    What logic or math is dependent on "context"

    *mechanically computing* the answer *generally* is dependent on
    context,

    Really?

    Most problems don't care about the context of the person asking it,
    just the context of the thing being looked at.

    well, yes, most problems don't involve pathologically querying a
    decider specifically for the purpose of then contradicting the
    decision... 🙄

    Which is a problem that doesn't actually depend on the context of the
    asker, so using the context just makes you wrong.

    yes it does.

    the self-referential set-classification paradox can *only* provably
    happen when a decider is called from within a pathological context (the paradoxical input machine), which is why i don't think it
    over-generalizes to disproving our ability to compute the answer in non-pathological contexts.

    TMs don't have an ability to discern between contexts, which is why
    current theory accepts that it does over-generalize...

    the point of my work on RTMs is to grant computation an ability to
    discern between contexts so that we can transcend *that* particular limit.
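
    As a sketch of the signature that idea implies (purely illustrative
    Python; the names and the "context" representation below are hypothetical
    stand-ins, not a specification of RTMs):

        def ordinary_partial_decider(program, argument):
            """A plain partial decider: True/False when it can tell, None
            when it cannot."""
            return None  # placeholder: inconclusive by default

        def context_aware_decider(context, program, argument):
            """A decider of the (context, input) -> output shape described
            above."""
            if context.get("caller") == "pathological_wrapper":
                # Assumed ability: notice the query comes from inside the
                # contradicting construction and refuse to feed it.
                return "inconclusive"
            return ordinary_partial_decider(program, argument)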

    this doesn't remove *all* unknowns, i'm not resolving problems of actual complexity or unknowns due to lack of number theory. i'm resolving the self-referential set-classification paradox that underlies much of uncomputability, and to hopefully put a wrench in this rather odd, paradoxical, and quite frankly fallacy drenched feelings of certainty
    about unknowable unknowns.

    WHICH IS FINE, i don't need total instant perfection to make significant progress, my fucking god...





    and ignoring that is the underlying cause of the halting problem

    Nope.


    clearly novel techniques will be required to resolve long standing
    problems, eh richard???

    Or just lying as you try.

    I guess you think the speed of light is just a suggestion. (Yes,
    there are some thoughts about how to break it, but they require
    things totally outside our current physics).

    Yes, there may be a new definition of "Computations" that is actually
    useful, and generates answers to some things we currently think of as
    uncomputable, but until you can actually figure out what that is,
    assuming it exists is just science fiction.

    or u'd just call it lying over and over again with no serious
    consideration to what's really being said ...

    Yep, that is a good description of what you are doing.

    You forget to consider the topic you are talking about.

    Either you accept the current definitions, or you actually supply your
    own new ones. Just assuming you can change them without actually doing
    so makes your argument baseless.

    false dichotomy ...

    cause why can't a "new" one just be in fact a rather minor adjustment???





    fuck




    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 26 17:17:18 2026
    From Newsgroup: comp.ai.philosophy

    On 1/26/26 2:43 PM, dart200 wrote:
    On 1/26/26 8:39 AM, Richard Damon wrote:
    On 1/26/26 12:56 AM, dart200 wrote:
    On 1/25/26 2:36 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:21 AM, Richard Damon wrote:
    On 1/24/26 9:24 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out.

    one can only hope for so much sometimes 🙏

    I guess you don't understand the rules of logic.

    also not an argument

    Again, YOUR PROBLEM.

    it's pretty crazy i can produce a machine (even if u haven't
    understood it yet) that produces a consistent deterministic result
    that is "not a computation".

    Because you get that result only by equivocating on your definitions.

    If the context is part of the input to make the output deterministic
    from the input, then they fail to be usable as sub-computations, as
    we can't control that context part of the input.

    When we look at just the controllable input for a sub-computation,
    the output is NOT a deterministic function of that input.

    not sure what the fuck it's doing if it's not a computation

    It's using hidden inputs that the caller can't control.

    which we do all the time in normal programming, something which
    apparently u think the tHeOrY oF CoMpUtInG fails to encapsulate

    Right, but that isn't about computations.

    pretty crazy we do a bunch of "non-computing" in the normal act of
    programming computers

    Why?

    As I have said, "Computations" is NOT about how modern computers work.

    I guess you are just showing that you fundamentally don't understand
    the problem field you are betting your life on.

    one would presume the fundamental theory of computing would be
    general enough to encapsulate everything computed by real world
    computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the computer
    as you know it.

    so ur saying it's outdated and needs updating in regards to new
    things we do with computers that apparently turing machines as a
    model don't have variations of ...

    No, it still handles that which it was developed for.

    well it was developed to be a general theory of computing, and
    apparently modern computing has transcended that theory ...

    Not really.

    The way modern processors work, "sub-routines" can fail to be
    computations, but whole programs will tend to be. Sub-routines CAN
    be built with care to fall under its guidance.

    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result but is
    somehow not a computation!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY
    EQUIVALENT THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW

    But I can say that Computations, as defined, are all that they can do.

    i will never care about you complaining about the fact the
    computations i'm talking about don't fit within the particular
    box you call a "Computation", because it just doesn't mean anything,

    In other words, you are just saying you don't care about
    computation theory, and thus why are you complaining about what it
    says about computations.

    no i'm saying i don't care about ur particular definition, richard

    do better than trying to "define" me as wrong. meaning: put in the
    work to demonstrate actual contradictions

    In other words, you want me to prove there isn't a teapot in the
    asteroid belt.

    lol, what. asking for a proof of contradiction is now akin to
    russel's teapot???

    You are asking me to disprove something that you won't (and can't)
    define.

    i tried to but ur incredibly uncooperative

    No, because a PROOF starts with things actually defined, and is not
    based on an assumption of something that isn't.

    ALL your proofs have been based on the assumption of something being computable that isn't, sometimes being a complete enumeration of a class
    or sometimes some operation that isn't computable.

    When I point out what isn't computable, rather than showing how it IS computable, you ask me to prove that it isn't.

    THAT is not how a proof goes, YOU need to actually justify all your assumptions, and if one is questioned, show that it is correct.

    Sorry, you are just proving you don't understand your task at hand.





    are u even doing math here or is this just a giant definist fallacy
    shitshow???

    No, you just don't know what that means.



    YOU are the one assuming things can be done, but refuse to actually
    try to define an actual algorithm that does so.

    An actual algorithm being an actual sequence of finite atomic steps,
    and using bounded loops.
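
    To illustrate the distinction being drawn (a toy Python sketch, not
    anything defined in this thread): the first function uses a loop whose
    bound is fixed by the input, while the second uses a while-loop whose
    termination for every input is exactly the kind of open question at
    issue (the Collatz iteration):

        def sum_of_squares(n):
            """Bounded loop: at most n iterations, known before it runs."""
            total = 0
            for i in range(1, n + 1):
                total += i * i
            return total

        def collatz_steps(n):
            """Unbounded loop: nobody has proven this halts for every n > 0."""
            steps = 0
            while n != 1:
                n = 3 * n + 1 if n % 2 else n // 2
                steps += 1
            return steps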





    u and the entire field can be wrong about how u specified
    "Computation",

    No, you just don't understand the WHY of computation theory.

    u don't give a why u stupid fucking retarded faggot, and u never
    will because the ct-thesis isn't proven, and u've already gone down
    the moronic hole of "maybe my favorite truth isn't even provable!!!??"
    I have mentioned it, but have you bothered to look into it?

    Computation Theory was developed to see if "Computations" of this
    sort could be used to generate proofs of the great problems of
    mathematics and logic.

    It was hoped that it would provide a solution to the then seemingly
    intractable problems that seemed to have an answer, but whose answers
    just couldn't be found.

    Instead, it showed that it was a provable fact that some problems
    would not have a solution. And thus we had to accept that we
    couldn't prove everything we might want.


    and that fact was only shown, for computing in regards to itself, by
    using self-referential set-classification paradoxes, like the halting
    problem


    which is the part i'm trying to reconcile, that very specific (but
    quite broad within tm computing) problem...

    But you are only saying that there must be something else (that is
    Russel's teapot must exist) but can't show it.

    Thus, it is incumbent on YOU to prove or at least define what you are
    claiming to exist.


    i'm not here to spoon feed humanity a general decision algo, cause we
    assuredly do not have enough number theory to build that at this time.

    It seems you are not here to do anything constructive, only engage in
    flights of fancy imagining things that are not, but assuming they are.

    debunking a widely accepted misproof is constructive in ways neither of
    us can imagine

    Then try to show where the ERROR in the proof is.

    If there isn't an error, it isn't a "misproof"


    i don't need to make ALL the progress in order to make SOME progress.
    i'm *extremely* tired of people spouting perfectionist fallacies at me

    But to claim you can handle the actual Halting problem, YOU NEED to be perfect.

    I guess you just are doing your lying definitions again.


    (oooo, add that fallacy to list rick! what number are we at???)





    i'm trying to deal with all the claims of hubris that such a general
    decision algo *cannot* exist, by showing *how* it could exist
    alongside the potential for self-referential set-classification
    paradoxes:

    either by showing that we can just ignore the paradoxes, or by
    utilizing reflective turing machines to decide on them in a context
    aware manner, both are valid resolutions.

    In other words, by ignoring the reality,

    gaslighting again

    Nope, but I think your brain went to sleep from the gas.




    i know u want me to spoon feed you all the answers here, but i'm one
    freaking dude, with very limited time, and training, stuck with
    discussion that is willfully antagonistic and soaked with fallacy
    after fallacy,

    turing spent years coming up with his turing jump nonsense, on a
    brand new fresh theory, and people that likely actually tried to be
    collaborative,

    while i've gotta reconcile a massive almost century old bandwagon, /
    thru argument alone/

    i don't even have the luxury of pointing to an experiment, i've gotta
    come up with a set of purely logical arguments that stand entirely on
    their own right. einstein had it easier

    But, if you listened to people to make sure you were working on solid
    ground, and not flights of fancy, it might be easier, or at least
    become evident that it is a dead end.

    lol, u claim it's a dead end but can't even explain why other than
    repeatedly crying definist fallacy over and over again. heck u can't
    even explain to me what i think tbh, and i know u can't.

    It isn't "definist fallacy" to quote the actual definition.

    In fact to try to use that label on the actual definition is the
    definist fallacy.


    i refuse to buy into fallacy gishgallop, and that's a good thing

    Nope, you refuse to face reality, and it is slapping you in the face silly.



    Even Einstein admitted that his theory was likely "wrong", but was
    better than what we currently had, and WOULD be refined in the future.
    Just like classical mechanics were "wrong" in some cases, but close
    enough for most of the work that they were being used for.

    In the same way, yes, perhaps there is a refinement needed to the
    definition of what a "Computation" is, but just like Einstein's
    theory, it doesn't change the results significantly for what we
    currently can see.

    u haven't acknowledged any specific refinement, so u can't say that it
    can or cannot change in terms of results. ur just begging the question
    due to hubris.

    You haven't given a SPECIFIC refinement, just vague claims with no backing.


    Results based on false premises are not valid,

    If you want to change the rules, you need to actually define your new game.

    So far, its just, lets assume things can be different.



    Your issue is you need to find that "improved" definition that still
    works for the common cases that we know about, before you can start to
    work out what it implies.

    STARTING with assumptions of that implication is like assuming you can
    find a road network to drive from New York to Paris, France.






    and that potential is well codified by the fact the ct-thesis is
    still a thesis and not a law.

    It might just be a thesis, because it IS an unprovable truth.

    lookie u just accepting things as "muh unprovable truths". holy
    fucking hypocritical fucking faggot

    It isn't "just accepting", it is looking at the proofs and
    understanding the logic of them.

    YOU HAVEN'T PROVEN THE CT-THESIS, MY GOD


    imagine if i pulled that argument out on you wildly unfair
    irrational bastard??

    But all you can do is make baseless claims. My statements of
    unprovable truths are based on real proofs that seem to be beyond
    your ability to understand.

    YOU ALSO HAVEN'T PROVEN THAT THE CT-THESIS IS UNPROVABLE, MY FUCKING GOD


    u make a complete mockery of reason with the disgustingly idiot
    dogshit u post over and over again...

    How is looking at proofs and accepting their results a mockery?

    BECAUSE UR JUST ARBITRARILY OVERGENERALIZING WITHOUT PROOF,

    OH MY FUCKING GOD

    godel's result is a curse on this species even if he wasn't wrong to
    produce it


    It is the rejection of proofs and thinking things must be different
    that is the mockery.


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick ✌️




    i will not respond to more comments on this because it's a
    boring, lazy, non-argument that is a fucking waste of both our time.










    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 26 17:28:17 2026
    From Newsgroup: comp.ai.philosophy

    On 1/26/26 2:45 PM, dart200 wrote:
    On 1/26/26 8:43 AM, Richard Damon wrote:
    On 1/26/26 1:50 AM, dart200 wrote:
    On 1/25/26 2:40 PM, Richard Damon wrote:
    On 1/25/26 4:04 PM, dart200 wrote:
    On 1/25/26 10:23 AM, Richard Damon wrote:
    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that,
    given a representation of another computation and its input,
    determines for all cases whether the computation will halt does
    nothing to further the question of whether Turing Machines are the
    most powerful form of computation.

    context-aware machines compute functions:

    (context,input) -> output


    And what problems of interest to computation theory are of that form?
    Computation Theory was developed to answer questions of logic and mathematics.
    What logic or math is dependent on "context"?

    *mechanically computing* the answer *generally* is dependent on
    context,

    Really?

    Most problems don't care about the context of the person asking it,
    just the context of the thing being looked at.

    well, yes, most problems don't involve pathologically querying a
    decider specifically for the purpose of then contradicting the
    decision... 🙄

    Which is a problem that doesn't actually depend on the context of the
    asker, so using the context just makes you wrong.

    yes it does.

    the self-referential set-classification paradox can *only* provably
    happen when a decider is called from within a pathological context (the paradoxical input machine), which is why i don't think it over-
    generalizes to disproving our ability to compute the answer in
    non-pathological contexts.

    No, because the machine in question's halting behavior is fully defined,
    since the SPECIFIC machine it was built on had to be defined.

    Thus, the "paradox", like all real paradoxes is only apparent, as in
    only when we think of the "generalized" template, not the actual machine
    that is the input.

    You have your problem because you think of the machine as being built to
    an API, but it isn't, it is built to a SPECIFIC decider, or it isn't
    actually a computation, since part of being a computation is having an
    explicit and complete listing of the algorithm used, which can't just
    reference an "API", but needs the implementation of it.

    The "Template" is built to the API, but the input isn't the template,
    but the actual machine, which means the specific decider, and thus there
    is no real paradox, only an incorrect machine, as all the other ones
    have a chance of being correct (if they are correct partial deciders)
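
    Put in code terms (a rough Python sketch; h1 and h2 are made-up
    stand-ins for concrete partial deciders, not anything defined in this
    thread):

        def make_confounder(specific_decider):
            """The 'template': it only becomes a concrete machine once one
            SPECIFIC decider is baked into it."""
            def confounder(program):
                if specific_decider(program, program):  # predicted to halt...
                    while True:                         # ...so loop instead
                        pass
                return True                             # predicted to loop, so halt
            return confounder

        def h1(program, argument):
            return True    # one concrete (and sometimes wrong) partial decider

        def h2(program, argument):
            return False   # a different concrete decider

        # make_confounder(h1) and make_confounder(h2) are different machines;
        # each one defeats only the decider built into it, so every OTHER
        # decider can still be right about it.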


    TMs don't have an ability to discern between contexts, which is why
    current theory accepts that it does...

    And neither do computations as defined. Even in your model, you try to
    call the context part of the input because you know it has to be.


    the point of my work on RTMs is to grant computation an ability to
    discern between contexts so that we can transcend *that* particular limit.

    And the problem is that the problem space doesn't see past that limit.

    If you want to talk about context dependent computations, you need to
    work out how you are going to actually define that, then figure out what
    you can possibly say about them.


    this doesn't remove *all* unknowns, i'm not resolving problems of actual complexity or unknowns due to lack of number theory. i'm resolving the self-referential set-classification paradox that underlies much of uncomputability, and to hopefully put a wrench in this rather odd, paradoxical, and quite frankly fallacy drenched feelings of certainty
    about unknowable unknowns.

    WHICH IS FINE, i don't need total instant perfection to make significant progress, my fucking god...

    So, tackle the part that you can, and not the part that even your
    context dependent part doesn't help with,

    After all, the "Halting Problem" asks a question that is NOT dependent on
    the context it is being asked in, as that machine's behavior was defined
    not to so depend on it. Thus a "Context Dependent Computation" can't use
    context to help answer it; at best it might help a partial decider be
    able to answer a bigger slice of the pie.
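
    A partial decider in that sense could look something like this (toy
    Python only; the string checks below are illustrative stand-ins, not a
    real analysis, and they ignore recursion and other subtleties):

        def partial_halts(program_source, argument):
            """Answer True/False on an easy slice of cases, None otherwise."""
            if "while" not in program_source and "for" not in program_source:
                return True    # toy heuristic: no loop keywords at all
            if program_source.strip() == "while True: pass":
                return False   # the trivial infinite loop
            return None        # inconclusive: refuse to guess on the rest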






    and ignoring that is the underlying cause of the halting problem

    Nope.


    clearly novel techniques will be required to resolve long standing
    problems, eh richard???

    Or just lying as you try.

    I guess you think the speed of light is just a suggestion. (Yes,
    there are some thoughts about how to break it, but they require
    things totally outside our current physics).

    Yes, there may be a new definition of "Computations" that is
    actually useful, and generates answers to some things we currently
    think of as uncomputable, but until you can actually figure out what
    that is, assuming it exists is just science fiction.

    or u'd just call it lying over and over again with no serious
    consideration to what's really being said ...

    Yep, that is a good description of what you are doing.

    You forget to consider the topic you are talking about.

    Either you accept the current definitions, or you actually supply your
    own new ones. Just assuming you can change them without actually doing
    so makes your argument baseless.

    false dichotomy ...

    cause why can't a "new" one just be in fact a rather minor adjustment???

    You can't make a "minor adjustment" to a fixed system.

    That is like saying that 22/7 is close enough to the value of Pi to be
    pi for all uses.






    fuck







    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng,alt.messianic,alt.buddha.short.fat.guy on Mon Jan 26 14:29:09 2026
    From Newsgroup: comp.ai.philosophy

    On 1/26/26 2:17 PM, Richard Damon wrote:
    On 1/26/26 2:43 PM, dart200 wrote:
    On 1/26/26 8:39 AM, Richard Damon wrote:
    On 1/26/26 12:56 AM, dart200 wrote:
    On 1/25/26 2:36 PM, Richard Damon wrote:
    On 1/25/26 4:05 PM, dart200 wrote:
    On 1/25/26 10:21 AM, Richard Damon wrote:
    On 1/24/26 9:24 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 5:36 PM, dart200 wrote:
    On 1/24/26 6:44 AM, Richard Damon wrote:
    On 1/20/26 8:55 PM, dart200 wrote:
    On 1/20/26 4:59 AM, Richard Damon wrote:
    On 1/20/26 1:18 AM, dart200 wrote:
    On 1/19/26 9:29 PM, Richard Damon wrote:
    On 1/18/26 11:51 PM, dart200 wrote:
    On 1/18/26 4:28 PM, Richard Damon wrote:
    On 1/18/26 4:50 PM, dart200 wrote:
    On 1/18/26 12:56 PM, Richard Damon wrote:
    On 1/18/26 1:15 PM, dart200 wrote:
    On 1/18/26 4:05 AM, Richard Damon wrote:
    On 1/18/26 1:05 AM, dart200 wrote:
    On 1/17/26 7:28 PM, Richard Damon wrote:
    On 1/17/26 10:14 PM, dart200 wrote:

    Good luck starving to death when your money runs out.

    one can only hope for so much sometimes 🙏

    I guess you don't understand the rules of logic.

    also not an argument

    Again, YOUR PROBLEM.

    it's pretty crazy i can produce a machine (even if u haven't
    understood it yet) that produces a consistent deterministic result
    that is "not a computation".

    Because you get that result only by equivocating on your definitions.

    If the context is part of the input to make the output deterministic
    from the input, then they fail to be usable as sub-computations, as
    we can't control that context part of the input.

    When we look at just the controllable input for a sub-computation,
    the output is NOT a deterministic function of that input.

    not sure what the fuck it's doing if it's not a computation

    It's using hidden inputs that the caller can't control.

    which we do all the time in normal programming, something which
    apparently u think the tHeOrY oF CoMpUtInG fails to encapsulate

    Right, but that isn't about computations.

    pretty crazy we do a bunch of "non-computing" in the normal act of
    programming computers

    Why?

    As I have said, "Computations" is NOT about how modern computers work.

    I guess you are just showing that you fundamentally don't understand
    the problem field you are betting your life on.

    one would presume the fundamental theory of computing would be
    general enough to encapsulate everything computed by real world
    computers, no???

    Why?

    Remember, the fundamental theory of Computing PREDATES the computer
    as you know it.

    so ur saying it's outdated and needs updating in regards to new
    things we do with computers that apparently turing machines as a
    model don't have variations of ...

    No, it still handles that which it was developed for.

    well it was developed to be a general theory of computing, and
    apparently modern computing has transcended that theory ...

    Not really.

    The way modern processors work, "sub-routines" can fail to be
    computations, but whole programs will tend to be. Sub-routines CAN
    be built with care to fall under its guidance.

    lol, what are they even if not "computations"???

    not-computations

    great, a set of deterministic steps that produces a result but is
    somehow not a computation!

    Because it isn't deterministically based on the INPUT,

    no it's just a series of steps to produce some output.

    Nope, not in the formulation of the theory.

    again: YOU HAVE NOT PROVEN THAT TURING MACHINES, OR ANY
    EQUIVALENT THEORY, ENCOMPASS ALL POSSIBLE COMPUTATIONS

    like holy fuck, how many times will i need to repeat that???

    it's a ct-THESIS not a ct-LAW

    But I can say that Computations, as defined, are all that they can do.

    i will never care about you complaining about the fact the
    computations i'm talking about don't fit within the particular
    box you call a "Computation", because it just doesn't mean anything,

    In other words, you are just saying you don't care about
    computation theory, and thus why are you complaining about what
    it says about computations.

    no i'm saying i don't care about ur particular definition, richard

    do better than trying to "define" me as wrong. meaning: put in the
    work to demonstrate actual contradictions

    In other words, you want me to prove there isn't a teapot in the
    asteroid belt.

    lol, what. asking for a proof of contradiction is now akin to
    russel's teapot???

    You are asking me to disprove something that you won't (and can't)
    define.

    i tried to but ur incredibly uncooperative

    No, because a PROOF starts with things actually defined, and is not
    based on an assumption of something that isn't.

    ALL your proofs have been based on the assumption of something being computable that isn't, sometimes being a complete enumeration of a class
    or sometimes some operation that isn't computable.

    When I point out what isn't computable, rather than showing how it IS computable, you ask me to prove that it isn't.

    THAT is not how a proof goes, YOU need to actually justify all your assumptions, and if one is questioned, show that it is correct.

    Sorry, you are just proving you don't understand your task at hand.


    bro u should've just agreed ur being uncooperative,

    and it could've taken less words

    but that would've required at least an iota of being cooperative,

    so here we are 😵‍💫😵‍💫😵‍💫

    #god





    are u even doing math here or is this just a giant definist fallacy
    shitshow???

    No, you just don't know what that means.



    YOU are the one assuming things can be done, but refuse to actually
    try to define an actual algorithm that does so.

    An actual algorithm being an actual sequence of finite atomic
    steps, and using bounded loops.





    u and the entire field can be wrong about how u specified
    "Computation",

    No, you just don't understand the WHY of computation theory.

    u don't give a why u stupid fucking retarded faggot, and u never
    will because the ct-thesis isn't proven, and u've already gone
    down the moronic hole of "maybe my favorite truth isn't even
    provable!!!??"

    I have mentioned it, but have you bothered to look into it?

    Computation Theory was developed to see if "Computations" of this
    sort could be used to generate proofs of the great problems of
    mathematics and logic.

    It was hoped that it would provide a solution to the then seemingly
    intractable problems that seemed to have an answer, but whose
    answers just couldn't be found.

    Instead, it showed that it was a provable fact that some problems
    would not have a solution. And thus we had to accept that we
    couldn't prove everything we might want.


    and that fact was only shown, for computing in regards to itself, by
    using self-referential set-classification paradoxes, like the
    halting problem


    which is the part i'm trying to reconcile, that very specific (but
    quite broad within tm computing) problem...

    But you are only saying that there must be something else (that is
    Russel's teapot must exist) but can't show it.

    Thus, it is incumbent on YOU to prove or at least define what you are
    claiming to exist.


    i'm not here to spoon feed humanity a general decision algo, cause
    we assuredly do not have enough number theory to build that at this
    time.

    It seems you are not here to do anything constructive, only engage in
    flights of fancy imagining things that are not, but assuming they are.

    debunking a widely accepted misproof is constructive in ways neither
    of us can imagine

    Then try to show where the ERROR in the proof is.

    If there isn't an error, it isn't a "misproof"


    i don't need to make ALL the progress in order to make SOME progress.
    i'm *extremely* tired of people spouting perfectionist fallacies at me

    But to claim you can handle the actual Halting problem, YOU NEED to be perfect.

    wow, after i bring up a perfection fallacy you then in the next sentence
    u double down on it by claiming i NEED to be perfect???

    like holy fuck dude do u have even a semblance of actual self-awareness???

    my dear lord being all that was, is, and ever will be...

    have mercy on us all for the abject stupidity displayed in this here group

    🙏🙏🙏


    I guess you just are doing your lying definitions again.


    (oooo, add that fallacy to list rick! what number are we at???)





    i'm trying to deal with all the claims of hubris that such a general
    decision algo *cannot* exist, by showing *how* it could exist
    alongside the potential for self-referential set-classification
    paradoxes:

    either by showing that we can just ignore the paradoxes, or by
    utilizing reflective turing machines to decide on them in a context
    aware manner, both are valid resolutions.

    In other words, by ignoring the reality,

    gaslighting again

    Nope, but I think your brain went to sleep from the gas.




    i know u want me to spoon feed you all the answers here, but i'm one
    freaking dude, with very limited time, and training, stuck with
    discussion that is willfully antagonistic and soaked with fallacy
    after fallacy,

    turing spent years coming up with his turing jump nonsense, on a
    brand new fresh theory, and people that likely actually tried to be
    collaborative,

    while i've gotta reconcile a massive almost century old bandwagon, /
    thru argument alone/

    i don't even have the luxury of pointing to an experiment, i've
    gotta come up with a set of purely logical arguments that stand
    entirely on their own right. einstein had it easier

    But, if you listened to people to make sure you were working on solid
    ground, and not flights of fancy, it might be easier, or at least
    become evident that it is a dead end.

    lol, u claim it's a dead end but can't even explain why other than
    repeatedly crying definist fallacy over and over again. heck u can't
    even explain to me what i think tbh, and i know u can't.

    It isn't "definist fallacy" to quote the actual definition.

    In fact to try to use that label on the actual definition is the
    definist fallacy.


    i refuse to buy into fallacy gishgallop, and that's a good thing

    Nope, you refuse to face reality, and it is slapping you in the face silly.



    Even Einstein admitted that his theory was likely "wrong", but was
    better than what we currently had, and WOULD be refined in the
    future. Just like classical mechanics were "wrong" in some cases, but
    close enough for most of the work that they were being used for.

    In the same way, yes, perhaps there is a refinement needed to the
    definition of what a "Computation" is, but just like Einstein's
    theory, it doesn't change the results significantly for what we
    currently can see.

    u haven't acknowledged any specific refinement, so u can't say that it
    can or cannot change in terms of results. ur just begging the question
    due to hubris.

    You haven't given a SPECIFIC refinement, just vague claims with no backing.

    i gave a *very* specific *additional* operation for the machine,
    specified exactly what it does, and gave a demonstration of it in a
    simple case.

    could you even begin to tell me what that was? like what was the name of
    that operation even??? see if u can't even name me what the operation
    was...

    that is a definitive sign of an entirely antagonistic attitude


    Results based on false premises are not valid,

    If you want to change the rules, you need to actually define your new game.

    So far, its just, lets assume things can be different.



    Your issue is you need to find that "improved" definition that still
    works for the common cases that we know about, before you can start
    to work out what it implies.

    STARTING with assumptions of that implication is like assuming you
    can find a road network to drive from New York to Paris, France.






    and that potential is well codified by the fact the ct-thesis is
    still a thesis and not a law.

    It might just be a thesis, because it IS an unprovable truth.

    lookie u just accepting things as "muh unprovable truths". holy
    fucking hypocritical fucking faggot

    It isn't "just accepting", it is looking at the proofs and
    understanding the logic of them.

    YOU HAVEN'T PROVEN THE CT-THESIS, MY GOD


    imagine if i pulled that argument out on you wildly unfair
    irrational bastard??

    But all you can do is make baseless claims. My statements of
    unprovable truths are based on real proofs that seem to be beyond
    your ability to understand.

    YOU ALSO HAVEN'T PROVEN THAT THE CT-THESIS IS UNPROVABLE, MY FUCKING
    GOD



    u make a complete mockery of reason with the disgustingly idiot
    dogshit u post over and over again...

    How is looking at proofs and accepting their results a mockery?

    BECAUSE UR JUST ARBITRARILY OVERGENERALIZING WITHOUT PROOF,

    OH MY FUCKING GOD

    godel's result is a curse on this species even if he wasn't wrong to
    produce it


    It is the rejection of proofs and thinking things must be different
    that is the mockery.


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick ✌️




    i will not respond to more comments on this because it's a
    boring, lazy, non-argument that is a fucking waste of both our time.










    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 27 00:00:20 2026
    From Newsgroup: comp.ai.philosophy

    On 1/26/26 2:28 PM, Richard Damon wrote:
    On 1/26/26 2:45 PM, dart200 wrote:
    On 1/26/26 8:43 AM, Richard Damon wrote:
    On 1/26/26 1:50 AM, dart200 wrote:
    On 1/25/26 2:40 PM, Richard Damon wrote:
    On 1/25/26 4:04 PM, dart200 wrote:
    On 1/25/26 10:23 AM, Richard Damon wrote:
    On 1/24/26 9:05 PM, dart200 wrote:
    On 1/24/26 4:52 PM, Richard Damon wrote:
    On 1/24/26 6:06 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:

    the CT-thesis is a thesis, not a proof.
    *I think that I fixed that*
    It seems to me that if something cannot be computed
    by applying finite string transformation rules to
    input finite strings then it cannot be computed.

    As soon as this is shown to be categorically impossible
    then the thesis turns into a proof.


    In other words, you just don't know what you are talking about.

    The fact that it is impossible to build a computation that, given a
    representation of another computation and its input, determines for
    all cases whether the computation will halt does nothing to further
    the question of whether Turing Machines are the most powerful form
    of computation.

    context-aware machines compute functions:

    (context,input) -> output


    And what problems of interest to computation theory are of that form?

    Computation Theory was developed to answer questions of logic and mathematics.
    What logic or math is dependent on "context"

    *mechanically computing* the answer *generally* is dependent on
    context,

    Really?

    Most problems don't care about the context of the person asking it,
    just the context of the thing being looked at.

    well, yes, most problems don't involve pathologically querying a
    decider specifically for the purpose of then contradicting the
    decision... 🙄

    Which is a problem that doesn't actually depend on the context of the
    asker, so using the context just makes you wrong.

    yes it does.

    the self-referential set-classification paradox can *only* provably
    happen when a decider is called from within a pathological context
    (the paradoxical input machine), which is why i don't think it over-
    generalizes to disproving our ability to compute the answer in non-
    pathological contexts.

    No, because the machine in question's halting behavior is fully defined,
    since the SPECIFIC machine it was built on had to be defined.

    Thus, the "paradox", like all real paradoxes is only apparent, as in
    only when we think of the "generalized" template, not the actual machine that is the input.

    You have your problem because you think of the machine as being built to
    an API, but it isn't, it is built to a SPECIFIC decider, or it isn't
    actually a computation, since part of being a computation is having an
    explicit and complete listing of the algorithm used, which can't just
    reference an "API", but needs the implementation of it.

    The "Template" is built to the API, but the input isn't the template,
    but the actual machine, which means the specific decider, and thus there
    is no real paradox, only an incorrect machine, as all the other ones
    have a chance of being correct (if they are correct partial deciders)

    this actually just supports my point that paradoxes only happen when a
    decider is called within a pathological context



    TMs don't have an ability to discern between contexts, which is why
    current theory accepts that it does...

    And neither do computations as defined.

    idk where ur getting this definition u keep bringing up or who defined it

    Even in your model, you try to
    call the context part of the input because you know it has to be.


    the point of my work on RTMs is to grant computation an ability to
    discern between contexts so that we can transcend *that* particular
    limit.

    And the problem is that the problem space doesn't see past that limit.

    If you want to talk about context dependent computations, you need to
    work out how you are going to actually define that, then figure out what
    you can possibly say about them.

    i already did, multiple times, u just refuse to acknowledge what i wrote



    this doesn't remove *all* unknowns, i'm not resolving problems of
    actual complexity or unknowns due to lack of number theory. i'm
    resolving the self-referential set-classification paradox that
    underlies much of uncomputability, and to hopefully put a wrench in
    this rather odd, paradoxical, and quite frankly fallacy drenched
    feelings of certainty about unknowable unknowns.

    WHICH IS FINE, i don't need total instant perfection to make
    significant progress, my fucking god...

    So, tackle the part that you can, and not the part that even your
    context dependent part doesn't help with,

    After all, the "Halting Problem" asks a question that is NOT dependent on

    *mechanically computing* the answer *generally* however is. the ability
    itself to compute the answer is context-dependent.

    the context it is being asked in, as that machine's behavior was defined
    not to so depend on it. Thus a "Context Dependent Computation" can't use
    context to help answer it; at best it might help a partial decider be
    able to answer a bigger slice of the pie.






    and ignoring that is the underlying cause of the halting problem

    Nope.


    clearly novel techniques will be required to resolve long standing
    problems, eh richard???

    Or just lying as you try.

    I guess you think the speed of light is just a suggestion. (Yes,
    there are some thoughts about how to break it, but they require
    things totally outside our current physics).

    Yes, there may be a new definition of "Computations" that is
    actually useful, and generates answers to some things we currently
    think of as uncomputable, but until you can actually figure out what
    that is, assuming it exists is just science fiction.

    or u'd just call it lying over and over again with no serious
    consideration to what's really being said ...

    Yep, that is a good description of what you are doing.

    You forget to consider the topic you are talking about.

    Either you accept the current definitions, or you actually supply
    your own new ones. Just assuming you can change them without actually
    doing so makes your argument baseless.

    false dichotomy ...

    cause why can't a "new" one just be in fact a rather minor adjustment???

    You can't make a "minor adjustment" to a fixed system.

    lots of people made adjustments to turing machines u absolute beyond
    dogshit moron

    why can't i??? because i'm not special enough, and everyone else was???

    now just special pleading.

    add it to the list, my god u have committed more named fallacies than
    anyone i have ever talked to

    what am i doing here?


    That is like saying that 22/7 is close enough to the value of Pi to be
    pi for all uses.






    fuck







    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Dude@punditster@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng,alt.messianic,alt.buddha.short.fat.guy on Tue Jan 27 13:31:21 2026
    From Newsgroup: comp.ai.philosophy

    On 1/26/2026 2:29 PM, dart200 wrote:
    On 1/26/26 2:17 PM, Richard Damon wrote:
    On 1/26/26 2:43 PM, dart200 wrote:
    On 1/26/26 8:39 AM, Richard Damon wrote:
    On 1/26/26 12:56 AM, dart200 wrote:

    You haven't given a SPECIFIC refinement, just vague claims with no
    backing.

    i gave a *very* specific *additional* operation for the machine,
    specified exactly what it does, and gave a demonstration of it in a
    simple case.

    So, I'm not sure you've thought this through. It may not be that simple
    to open the door, Nick. There might be a ghost in the machine.

    "I'm sorry, Dave. I can't do that." - HAL

    could you even begin to tell me what that was? like what was the name of that operation even??? see if u can't even name me what the operation
    was...

    Let's be clear: You still haven't explained why that dude rode his horse
    all the way through a desert without giving the old mare a name?

    that is a definitive sign of an entirely antagonistic attitude

    Let's not get too personal, Nick!


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick ✌️

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 27 14:07:41 2026
    From Newsgroup: comp.ai.philosophy

    On 1/25/2026 2:36 PM, Richard Damon wrote:
    [...]

    An actual algorithm being an actual sequence of finite atomic steps, and using bounded loops.

    Why must an algorithm use bounded loops? It can run and run...
    generating results along the way...

    [...]
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng,alt.messianic,alt.buddha.short.fat.guy on Wed Jan 28 01:12:27 2026
    From Newsgroup: comp.ai.philosophy

    On 1/27/26 1:31 PM, Dude wrote:
    On 1/26/2026 2:29 PM, dart200 wrote:
    On 1/26/26 2:17 PM, Richard Damon wrote:
    On 1/26/26 2:43 PM, dart200 wrote:
    On 1/26/26 8:39 AM, Richard Damon wrote:
    On 1/26/26 12:56 AM, dart200 wrote:

    You haven't given a SPECIFIC refinement, just vague claims with no
    backing.

    i gave a *very* specific *additional* operation for the machine,
    specified exactly what it does, and gave a demonstration of it in a
    simple case.

    So, I'm not sure you've thought this through. It may not be that simple
    to open the door, Nick. There might be a ghost in the machine.

    "I'm sorry, Dave. I can't do that." - HAL

    could you even begin to tell me what that was? like what was the name
    of that operation even??? see if u can't even name me what the
    operation was...

    Let's be clear: You still haven't explained why that dude rode his horse
    all the way through a desert without giving the old mare a name?

    that is a definitive sign of an entirely antagonistic attitude

    Let's not get too personal, Nick!

    tbh, i'm fairly personally offended at the lack of cooperation dude


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick ✌️

    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@news.x.richarddamon@xoxy.net to comp.theory,comp.ai.philosophy,comp.software-eng on Wed Jan 28 07:23:24 2026
    From Newsgroup: comp.ai.philosophy

    On 1/27/26 5:07 PM, Chris M. Thomasson wrote:
    On 1/25/2026 2:36 PM, Richard Damon wrote:
    [...]

    An actual algorithm being an actual sequence of finite atomic steps,
    and using bounded loops.

    Why must an algorithm use bounded loops? It can run and run...
    generating results along the way...

    [...]

    The classic definition of finite computations requires the computation to finish, as it is allowed to overwrite its interim results.

    There is a second definition for infinite computations, where the
    machine can write unerasable partial results that can be used even while
    the machine is continuing to produce more results. This is used for computations that produce "Real" results.

    The "Halting Problem" as normally stated is about the first type.

    The second type of machine typically just never halts, and its
    equivalent to the halting problem would be to determine if a machine
    just gets to a point where it fails to progress and write another digit
    to the output. If the calculation ends up producing a result that could
    be expressed in a finite length real number, these machines are supposed
    to just "end" in a loop that just continues to emit "0", not just stop.
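
    A small Python sketch of that second kind of machine (illustrative only):
    it never halts, and each digit it emits is final and never overwritten,
    which is what lets partial results be used while it keeps running:

        def decimal_digits(numerator, denominator):
            """Emit the decimal expansion of numerator/denominator forever."""
            remainder = numerator % denominator
            while True:
                remainder *= 10
                yield remainder // denominator   # this digit is never revised
                remainder %= denominator

        # e.g. the first few outputs of decimal_digits(1, 7) are
        # 1, 4, 2, 8, 5, 7, 1, ... with the generator never halting.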
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Dude@punditster@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng,alt.messianic,alt.buddha.short.fat.guy on Wed Jan 28 13:29:42 2026
    From Newsgroup: comp.ai.philosophy

    On 1/28/2026 1:12 AM, dart200 wrote:
    On 1/27/26 1:31 PM, Dude wrote:
    On 1/26/2026 2:29 PM, dart200 wrote:
    On 1/26/26 2:17 PM, Richard Damon wrote:
    On 1/26/26 2:43 PM, dart200 wrote:
    On 1/26/26 8:39 AM, Richard Damon wrote:
    On 1/26/26 12:56 AM, dart200 wrote:

    You haven't given a SPECIFIC refinement, just vague claims with no
    backing.

    i gave a *very* specific *additional* operation for the machine,
    specified exactly what it does, and gave a demonstration of it in a
    simple case.

    So, I'm not sure you've thought this through. It may not be that
    simple to open the door, Nick. There might be a ghost in the machine.

    "I'm sorry, Dave. I can't do that." - HAL
    ;
    could you even begin to tell me what that was? like what was the name
    of that operation even??? see if u can't even name me what the
    operation was...

    Let's be clear: You still haven't explained why that dude rode his
    horse all the way through a desert without giving the old mare a name?
    ;
    that is a definitive sign of an entirely antagonistic attitude

    Let's not get too personal, Nick!

    tbh, i'm fairly personally offended at the lack of cooperation dude

    What I'm personally offended about is all the electricity you're using
    every day to send texts to total strangers. You cooking with gas?

    Let me remind you again, that incense you see at your local convenience
    store is not real herbal incense. It may look like Indian Incense and
    the label may even say Indian Incense, but they are probably just punk
    sticks and glue.

    Don't be deceived!


    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick ✌️



    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng,alt.messianic,alt.buddha.short.fat.guy on Wed Jan 28 13:37:49 2026
    From Newsgroup: comp.ai.philosophy

    On 1/28/26 1:29 PM, Dude wrote:
    On 1/28/2026 1:12 AM, dart200 wrote:
    On 1/27/26 1:31 PM, Dude wrote:
    On 1/26/2026 2:29 PM, dart200 wrote:
    On 1/26/26 2:17 PM, Richard Damon wrote:
    On 1/26/26 2:43 PM, dart200 wrote:
    On 1/26/26 8:39 AM, Richard Damon wrote:
    On 1/26/26 12:56 AM, dart200 wrote:

    You haven't given a SPECIFIC refinement, just vague claims with no
    backing.

    i gave a *very* specific *additional* operation for the machine,
    specified exactly what it does, and gave a demonstration of it in a
    simple case.

    So, I'm not sure you've thought this through. It may not be that
    simple to open the door, Nick. There might be a ghost in the machine.

    "I'm sorry, Dave. I can't do that." - HAL
    ;
    could you even begin to tell me what that was? like what was the
    name of that operation even??? see if u can't even name me what the
    operation was...

    Let's be clear: You still haven't explained why that dude rode his
    horse all the way through a desert without giving the old mare a name?
    ;
    that is a definitive sign of an entirely antagonistic attitude

    Let's not get too personal, Nick!

    tbh, i'm fairly personally offended at the lack of cooperation dude

    What I'm personally offended about is all the electricity you're using

    video, ai, and porn vastly outclass text messaging dude

    every day to send texts to total strangers. You cooking with gas?

    lol, yes but gas is less efficient dude, not more

    i'd prefer thermally controlled induction like the breville control freak


    Let me remind you again, that incense you see at your local convenience store is not real herbal incense. It may look like Indian Incense and
    the label may even say Indian Incense, but they are probably just punk sticks and glue.

    Don't be deceived!

    i don't. i only use grass greenhouse grown in santa barbara with zero
    sprays. lady bugs are instead used for pest control

    shout out to autumn brands!



    holy fuck you dude eat a bag of dicks

    It seems you have eaten them all already,

    sure thing dick ✌️



    --
    hi, i'm nick! let's end war 🙃

    --- Synchronet 3.21b-Linux NewsLink 1.2