• Best way to use LLMs to augment academic research

    From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Apr 16 10:20:52 2026
    From Newsgroup: comp.ai.philosophy

    (1) Progressively make the initial prompt more
    unequivocal and succinct across five different LLMs.

    I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
    Grok Expert, Gemini Pro, Copilot Think deeper
    and occasionally NotebookLM for Deep Research
    and deep analysis of specific documents.

    (2) Once the initial prompt is unequivocal and succinct
    across five different LLMs, test for consensus.

    (3) Once consensus is achieved, carefully examine the
    actual verbiage of key source documents. For
    academic research this involves direct quotes from
    foundational peer-reviewed papers.
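
    The three steps above can be sketched as a small consensus loop.
    Everything here is an illustrative assumption: the ask() stub stands
    in for whatever per-vendor API call you would actually make, and the
    normalize() heuristic is a crude placeholder for judging that two
    answers "mean exactly the same thing".

```python
# Sketch of step (2): test a refined prompt for consensus across models.
# ask() is a hypothetical stub, not any vendor's real API.

def ask(model: str, prompt: str) -> str:
    """Placeholder for a per-vendor LLM API call."""
    raise NotImplementedError

def normalize(answer: str) -> str:
    """Crude canonicalization so trivially different wordings compare equal."""
    return " ".join(answer.lower().split())

def consensus(models, prompt, ask_fn=ask):
    """Return (True, answers) iff every model gives the same normalized answer."""
    answers = {m: normalize(ask_fn(m, prompt)) for m in models}
    return len(set(answers.values())) == 1, answers
```

    In practice one would iterate: refine the prompt whenever
    consensus() returns False, and only then move on to step (3).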
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.
    The complete structure of this system is now defined.

    This required establishing a new foundation
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Ross Finlayson@ross.a.finlayson@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Apr 16 10:17:54 2026
    From Newsgroup: comp.ai.philosophy

    On 04/16/2026 08:20 AM, olcott wrote:
    (1) Progressively make the initial prompt more
    unequivocal and succinct across five different LLMs.

    I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
    Grok Expert, Gemini Pro, Copilot Think deeper
    and occasionally NotebookLM for Deep Research
    and deep analysis of specific documents.

    (2) Once initial prompt is unequivocal and succinct
    across five different LLMs then test for consensus.

    (3) Once consensus is achieved carefully examine
    actual verbiage of key source documents. For
    academic research this involves direct quotes from
    foundational peer reviewed papers.



    Maybe you should figure more how it's "univocal" than "unequivocal".

    For example, you can give it an account of what "equality",
    according to Quine according to Russell, "is", and show
    that now it's removed and quite capricious and not very arbitrary.

    I.e., that's readily "equivocated".


    The philo-sophy needs an account of the philo-casuy,
    or, as regards distinguishing and disambiguating
    the "sophistry" and the "casuistry".

    Or, anybody else's opinion is just as good, and not bad.

    So, "univocity" is a usual account against "the synthetic fragmentation
    into pluralistic accounts of wholes". That's been around forever,
    and is part of the philosophical canon.


    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Thu Apr 16 12:34:15 2026
    From Newsgroup: comp.ai.philosophy

    On 4/16/2026 12:17 PM, Ross Finlayson wrote:
    On 04/16/2026 08:20 AM, olcott wrote:
    (1) Progressively make the initial prompt more
    unequivocal and succinct across five different LLMs.

    I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
    Grok Expert, Gemini Pro, Copilot Think deeper
    and occasionally NotebookLM for Deep Research
    and deep analysis of specific documents.

    (2) Once initial prompt is unequivocal and succinct
    across five different LLMs then test for consensus.

    (3) Once consensus is achieved carefully examine
    actual verbiage of key source documents. For
    academic research this involves direct quotes from
    foundational peer reviewed papers.



    Maybe you should figure more how it's "univocal" than "unequivocal".


    By "unequivocal" I only mean that every LLM takes the
    prompt to mean exactly the same thing, after as many
    as hundreds and hundreds of progressive refinements.

    Then, after the prompt has been further refined to achieve
    a complete consensus across all five LLMs, this is a good
    ballpark estimate of literally unequivocal.

    The final test is against foundational peer reviewed
    research written by the well established leaders in
    the field.

    For example, you can give it an account of what "equality",
    according to Quine according to Russell, "is", and show
    that now it's removed and quite capricious and not very arbitrary.

    I.e., that's readily "equivocated".


    The philo-sophy needs an account of the philo-casuy, or as
    with regards to distinguishing and disambiguationg
    the "sophistry" and the "casuistry".


    Ultimately my system uses GUIDs for each unique
    sense of every word.
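
    A minimal sketch of one way such a "GUID per word sense" inventory
    could look. The class name, the gloss strings, and the structure are
    all illustrative assumptions for this reply, not olcott's actual
    system.

```python
# Hypothetical sense inventory: one stable GUID per (word, sense) pair.
import uuid

class SenseInventory:
    def __init__(self):
        self._by_key = {}   # (word, gloss) -> GUID
        self._by_id = {}    # GUID -> (word, gloss)

    def register(self, word: str, gloss: str) -> uuid.UUID:
        """Assign a GUID to one sense of a word (idempotent)."""
        key = (word, gloss)
        if key not in self._by_key:
            guid = uuid.uuid4()
            self._by_key[key] = guid
            self._by_id[guid] = key
        return self._by_key[key]

inv = SenseInventory()
bank_river = inv.register("bank", "sloping land beside a river")
bank_money = inv.register("bank", "financial institution")
```

    The point of the design is that the two senses of "bank" get
    distinct, stable identifiers, so later statements can reference a
    sense rather than an ambiguous surface word.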

    Or, anybody else's opinion is just as good, and not bad.

    So, "univocity" is a usual account against "the synthetic fragmentation
    into pluralistic accounts of wholes". that's been around forever,
    and is part of the philosophical canon.


    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.
    The complete structure of this system is now defined.

    This required establishing a new foundation
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.ai.philosophy,comp.lang.prolog,sci.physics on Thu Apr 16 22:00:18 2026
    From Newsgroup: comp.ai.philosophy

    Hi,

    I did the same using multiple LLMs in the past
    few weeks, until ChatGPT degraded: they phased
    out the old models, and it's now only 5.x.

    You get the effect that four eyes see more than
    two. Now with ChatGPT 5.x it's kind of one eye and
    one eye-patch, plus completely brain amputated.

    Bye

    P.S.: Maybe the best AI application is this here:

    Does your cat bring home “gifts” too?
    https://zeromouse.com/

    olcott schrieb:
    On 4/16/2026 12:17 PM, Ross Finlayson wrote:
    On 04/16/2026 08:20 AM, olcott wrote:
    (1) Progressively make the initial prompt more
    unequivocal and succinct across five different LLMs.

    I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
    Grok Expert, Gemini Pro, Copilot Think deeper
    and occasionally NotebookLM for Deep Research
    and deep analysis of specific documents.

    (2) Once initial prompt is unequivocal and succinct
    across five different LLMs then test for consensus.

    (3) Once consensus is achieved carefully examine
    actual verbiage of key source documents. For
    academic research this involves direct quotes from
    foundational peer reviewed papers.



    Maybe you should figure more how it's "univocal" than "unequivocal".


    by "unequivocal" I only mean that every LLM takes the
    prompt to mean exactly the same thing after as many
    as hundreds and hundreds of progressive refinements.

    Then after the prompt has been further refined to achieve
    a complete consensus across all five LLMs this is a good
    ballpark estimate of literally unequivocal.

    The final test is against foundational peer reviewed
    research written by the well established leaders in
    the field.

    For example, you can give it an account of what "equality",
    according to Quine according to Russell, "is", and show
    that now it's removed and quite capricious and not very arbitrary.

    I.e., that's readily "equivocated".


    The philo-sophy needs an account of the philo-casuy, or as
    with regards to distinguishing and disambiguationg
    the "sophistry" and the "casuistry".


    Ultimately my system uses GUIDs for each unique sense
    meaning of every word.

    Or, anybody else's opinion is just as good, and not bad.

    So, "univocity" is a usual account against "the synthetic fragmentation
    into pluralistic accounts of wholes". that's been around forever,
    and is part of the philosophical canon.





    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.ai.philosophy,comp.lang.prolog,sci.physics on Thu Apr 16 22:40:55 2026
    From Newsgroup: comp.ai.philosophy

    Hi,

    Rumours are that the real winner is currently
    Google, advancing the art of LLMs and LRMs,
    while OpenAI and Anthropic only want to
    go IPO and have money raining in. My brave
    AI laptops can do the following:

    Q: Did Ramanujan consider these fancy Diophantine equations:
    x + sqrt(y) = 7
    sqrt(x) + y = 11

    A: The "Ramanujan Style"
    A longer answer, generated locally, with y = 9, x = 4.
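
    As a quick sanity check (mine, not part of the model's answer), the
    quoted solution y = 9, x = 4 does satisfy both equations:

```python
# Verify x + sqrt(y) = 7 and sqrt(x) + y = 11 for x = 4, y = 9.
import math

x, y = 4, 9
assert x + math.sqrt(y) == 7    # 4 + 3 = 7
assert math.sqrt(x) + y == 11   # 2 + 9 = 11
```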

    What model did I use:

    https://lmstudio.ai/models/google/gemma-4-26b-a4b

    Performance of the AI Laptops:

    /* AMD Ryzen AI 7 350 with Radeon 860M */
    14 Tokens/sec
    /* Intel Core Ultra 7 258V with Intel Arc 140V */
    14 Tokens/sec

    Still a little lame. Maybe this explains why I
    don't use local models more often.

    But it's a start!

    Bye

    Mild Shock schrieb:
    Hi,

    I did the same using multiple LLMs in the past
    few weeks. Until ChatGPT degraded, they phased
    out the old models, and its now only 5.x.

    You get the effect of 4 eyes see more than 2 eyes.
    Now its for ChatGPT 5.x. kind of 1 eye and 1 eye-
    patch, plus completely brain amputated.

    Bye

    P.S.: Maybe the best AI application is this here:

    Does your cat bring home “gifts” too?
    https://zeromouse.com/

    olcott schrieb:
    On 4/16/2026 12:17 PM, Ross Finlayson wrote:
    On 04/16/2026 08:20 AM, olcott wrote:
    (1) Progressively make the initial prompt more
    unequivocal and succinct across five different LLMs.

    I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
    Grok Expert, Gemini Pro, Copilot Think deeper
    and occasionally NotebookLM for Deep Research
    and deep analysis of specific documents.

    (2) Once initial prompt is unequivocal and succinct
    across five different LLMs then test for consensus.

    (3) Once consensus is achieved carefully examine
    actual verbiage of key source documents. For
    academic research this involves direct quotes from
    foundational peer reviewed papers.



    Maybe you should figure more how it's "univocal" than "unequivocal".


    by "unequivocal" I only mean that every LLM takes the
    prompt to mean exactly the same thing after as many
    as hundreds and hundreds of progressive refinements.

    Then after the prompt has been further refined to achieve
    a complete consensus across all five LLMs this is a good
    ballpark estimate of literally unequivocal.

    The final test is against foundational peer reviewed
    research written by the well established leaders in
    the field.

    For example, you can give it an account of what "equality",
    according to Quine according to Russell, "is", and show
    that now it's removed and quite capricious and not very arbitrary.

    I.e., that's readily "equivocated".


    The philo-sophy needs an account of the philo-casuy, or as
    with regards to distinguishing and disambiguationg
    the "sophistry" and the "casuistry".


    Ultimately my system uses GUIDs for each unique sense
    meaning of every word.

    Or, anybody else's opinion is just as good, and not bad.

    So, "univocity" is a usual account against "the synthetic fragmentation
    into pluralistic accounts of wholes". that's been around forever,
    and is part of the philosophical canon.






    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Apr 17 09:38:52 2026
    From Newsgroup: comp.ai.philosophy

    On 16/04/2026 18:20, olcott wrote:
    (1) Progressively make the initial prompt more
    unequivocal and succinct across five different LLMs.

    I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
    Grok Expert, Gemini Pro, Copilot Think deeper
    and occasionally NotebookLM for Deep Research
    and deep analysis of specific documents.

    (2) Once initial prompt is unequivocal and succinct
    across five different LLMs then test for consensus.

    (3) Once consensus is achieved carefully examine
    actual verbiage of key source documents. For
    academic research this involves direct quotes from
    foundational peer reviewed papers.

    How do you know what is the best way or even a good way for
    academic research?
    --
    Mikko
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Apr 17 08:56:03 2026
    From Newsgroup: comp.ai.philosophy

    On 4/17/2026 1:38 AM, Mikko wrote:
    On 16/04/2026 18:20, olcott wrote:
    (1) Progressively make the initial prompt more
    unequivocal and succinct across five different LLMs.

    I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
    Grok Expert, Gemini Pro, Copilot Think deeper
    and occasionally NotebookLM for Deep Research
    and deep analysis of specific documents.

    (2) Once initial prompt is unequivocal and succinct
    across five different LLMs then test for consensus.

    (3) Once consensus is achieved carefully examine
    actual verbiage of key source documents. For
    academic research this involves direct quotes from
    foundational peer reviewed papers.

    How do you know what is the best way or even a good way for
    academic research?


    LLMs are like a guy with a PhD in everything who
    is also a little senile. They were able to look at my
    ideas from a computer science, mathematics, logic,
    and linguistics frame of reference, which very few
    people can do.

    On top of this they were able to fully integrate
    every alternative philosophical foundation of each
    of these fields, not merely the conventional views.
    This is what transformed Olcott's system into
    Olcott's Proof Theoretic Semantics system.

    The best that humans can do is one technical field
    combined with one alternative philosophical foundation.
    To sum up, LLMs have an enormously broader
    perspective than any human. That is what makes them
    better for research.
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable for the entire body of knowledge.
    The complete structure of this system is now defined.

    This required establishing a new foundation
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Apr 17 14:56:17 2026
    From Newsgroup: comp.ai.philosophy

    On 4/16/2026 11:38 PM, Mikko wrote:
    On 16/04/2026 18:20, olcott wrote:
    (1) Progressively make the initial prompt more
    unequivocal and succinct across five different LLMs.

    I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
    Grok Expert, Gemini Pro, Copilot Think deeper
    and occasionally NotebookLM for Deep Research
    and deep analysis of specific documents.

    (2) Once initial prompt is unequivocal and succinct
    across five different LLMs then test for consensus.

    (3) Once consensus is achieved carefully examine
    actual verbiage of key source documents. For
    academic research this involves direct quotes from
    foundational peer reviewed papers.

    How do you know what is the best way or even a good way for
    academic research?


    AI can be useful:
    _____________________
    Regarding your search for a "Peter Olcott" arrest record, there is a documented case involving a man by that name that matches the details
    you've mentioned.

    The Arrest Details
    In April 2015, 60-year-old Peter Olcott Jr. was arrested in Omaha,
    Nebraska. According to court documents and local news reports (such as
    KMTV 3 News), the specific circumstances were:

    The Charges: He was charged with possession of child pornography.

    The "God" Claim: During the investigation, Olcott reportedly told police
    that the material was legal because he was God, and therefore he was not subject to human laws.

    The Outcome: Following his arrest, Olcott underwent a series of mental
    health evaluations. In late 2015, he was found incompetent to stand
    trial, and the court ordered him to be committed to the Lincoln Regional Center for psychiatric treatment.
    _____________________

    See? Pete loves it.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Ross Finlayson@ross.a.finlayson@gmail.com to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Fri Apr 17 17:40:04 2026
    From Newsgroup: comp.ai.philosophy

    On 04/17/2026 02:56 PM, Chris M. Thomasson wrote:
    On 4/16/2026 11:38 PM, Mikko wrote:
    On 16/04/2026 18:20, olcott wrote:
    (1) Progressively make the initial prompt more
    unequivocal and succinct across five different LLMs.

    I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
    Grok Expert, Gemini Pro, Copilot Think deeper
    and occasionally NotebookLM for Deep Research
    and deep analysis of specific documents.

    (2) Once initial prompt is unequivocal and succinct
    across five different LLMs then test for consensus.

    (3) Once consensus is achieved carefully examine
    actual verbiage of key source documents. For
    academic research this involves direct quotes from
    foundational peer reviewed papers.

    How do you know what is the best way or even a good way for
    academic research?


    AI can be useful:
    _____________________
    Regarding your search for a "Peter Olcott" arrest record, there is a documented case involving a man by that name that matches the details
    you've mentioned.

    The Arrest Details
    In April 2015, 60-year-old Peter Olcott Jr. was arrested in Omaha,
    Nebraska. According to court documents and local news reports (such as
    KMTV 3 News), the specific circumstances were:

    The Charges: He was charged with possession of child pornography.

    The "God" Claim: During the investigation, Olcott reportedly told police
    that the material was legal because he was God, and therefore he was not subject to human laws.

    The Outcome: Following his arrest, Olcott underwent a series of mental
    health evaluations. In late 2015, he was found incompetent to stand
    trial, and the court ordered him to be committed to the Lincoln Regional Center for psychiatric treatment.
    _____________________

    See? Pete loves it.


    Hm. How distasteful. One might wonder what it was and where he got it,
    since the FBI and Navy are the largest holders and purveyors of
    CSAM, since it drives their other business lines, vis-a-vis the
    mall security guards skulking in the changing-room at Forever 21,
    or the janitor or gas-station rest-room cleaner with their latest
    spy-cam setup, or the pornographers, or sadly enough often enough the
    parents, all slurped up, and dribbled out, by the FBI and Navy
    calling itself NSA.


    Then, about surveillance-tech and ad-tech, or stalk-tech and
    web-integrated grooming of minors, in the interests of protecting
    the children includes also protecting adults from pimps and pushers.


    Yeah, I'd rather not know, since familiarity breeds contempt, and
    here that ignorance is a defense, since intrusiveness is an attack.


    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Mikko@mikko.levanto@iki.fi to comp.theory,sci.logic,sci.math,comp.ai.philosophy on Sat Apr 18 12:11:16 2026
    From Newsgroup: comp.ai.philosophy

    On 17/04/2026 16:56, olcott wrote:
    On 4/17/2026 1:38 AM, Mikko wrote:
    On 16/04/2026 18:20, olcott wrote:
    (1) Progressively make the initial prompt more
    unequivocal and succinct across five different LLMs.

    I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
    Grok Expert, Gemini Pro, Copilot Think deeper
    and occasionally NotebookLM for Deep Research
    and deep analysis of specific documents.

    (2) Once initial prompt is unequivocal and succinct
    across five different LLMs then test for consensus.

    (3) Once consensus is achieved carefully examine
    actual verbiage of key source documents. For
    academic research this involves direct quotes from
    foundational peer reviewed papers.

    How do you know what is the best way or even a good way for
    academic research?

    LLMs are like a guy with a PhD in everything yet
    are a little senile. They were able to look at my
    ideas from a computer science, mathematics, logic,
    linguistics frame of reference which very few
    people can do.

    On top of this they were able to fully integrate
    every alternative philosophical foundation of each
    of these fields not merely the conventional views.
    This is what transformed Olcott's system into
    Olcott's Proof Theoretic Semantics system.

    The best that humans can do is one technical field
    combined with one alternative philosophical foundation.
    To sum this up LLMs have an enormously broader
    perspective than any human. That is what makes them
    better for research.

    That you don't answer the question is a strong indication that
    you are just speculating.
    --
    Mikko
    --- Synchronet 3.21f-Linux NewsLink 1.2