(1) Progressively make the initial prompt more
unequivocal and succinct across five different LLMs.
I use ChatGPT 5.3, Claude AI Sonnet 4.6 Extended,
Grok Expert, Gemini Pro, and Copilot Think Deeper,
and occasionally NotebookLM for Deep Research
and deep analysis of specific documents.
(2) Once the initial prompt is unequivocal and succinct
across five different LLMs, test for consensus.
(3) Once consensus is achieved, carefully examine
the actual verbiage of key source documents. For
academic research this involves direct quotes from
foundational peer-reviewed papers.
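The three-step protocol above can be sketched in code. This is purely illustrative: `query_llm` is a hypothetical stand-in for whatever API each vendor actually exposes, and "consensus" is reduced here to exact agreement on each model's one-sentence restatement of the prompt.

```python
# Sketch of the refine-then-test protocol. `query_llm` is a
# hypothetical placeholder; real model APIs differ per vendor.
MODELS = ["ChatGPT", "Claude", "Grok", "Gemini", "Copilot"]

def query_llm(model: str, prompt: str) -> str:
    """Hypothetical: return the model's one-sentence restatement
    of what it takes the prompt to mean."""
    raise NotImplementedError

def restatements(prompt: str, ask=query_llm) -> list[str]:
    """Steps (1)/(2): collect each model's reading of the prompt."""
    return [ask(m, prompt) for m in MODELS]

def has_consensus(readings: list[str]) -> bool:
    """Step (2): consensus here means every model produced the
    same restatement after trivial normalization."""
    normalized = {r.strip().lower() for r in readings}
    return len(normalized) == 1
```

In practice step (1) would loop: inspect where the restatements diverge, tighten the prompt, and re-query until `has_consensus` holds; step (3), checking the agreed reading against source documents, remains manual.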
On 04/16/2026 08:20 AM, olcott wrote:
Maybe you should figure more how it's "univocal" than "unequivocal".
For example, you can give it an account of what "equality",
according to Quine according to Russell, "is", and show
that now it's removed and quite capricious and not very arbitrary.
I.e., that's readily "equivocated".
The philo-sophy needs an account of the philo-casuy, or as
with regards to distinguishing and disambiguating
the "sophistry" and the "casuistry".
Or, anybody else's opinion is just as good, and not bad.
So, "univocity" is a usual account against "the synthetic fragmentation
into pluralistic accounts of wholes". That's been around forever,
and is part of the philosophical canon.
On 4/16/2026 12:17 PM, Ross Finlayson wrote:
On 04/16/2026 08:20 AM, olcott wrote:
Maybe you should figure more how it's "univocal" than "unequivocal".
By "unequivocal" I only mean that every LLM takes the
prompt to mean exactly the same thing, after as many
as hundreds and hundreds of progressive refinements.
Then, after the prompt has been further refined to achieve
complete consensus across all five LLMs, that is a good
ballpark approximation of literally unequivocal.
The final test is against foundational peer-reviewed
research written by the well-established leaders in
the field.
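One rough way to quantify how close the five restatements are to "exactly the same thing" is a mean pairwise similarity score. This is a sketch, not olcott's actual method: the stdlib `difflib` measure and the 0.95 threshold are arbitrary illustrative choices.

```python
from difflib import SequenceMatcher
from itertools import combinations

def agreement(readings: list[str]) -> float:
    """Mean pairwise similarity (0.0-1.0) across all model
    restatements, using stdlib difflib as a crude proxy for
    'takes the prompt to mean the same thing'."""
    pairs = list(combinations(readings, 2))
    if not pairs:
        return 1.0
    scores = [SequenceMatcher(None, a.lower(), b.lower()).ratio()
              for a, b in pairs]
    return sum(scores) / len(scores)

def is_near_consensus(readings: list[str], threshold: float = 0.95) -> bool:
    """Refinement would stop once every pair of readings is nearly
    identical; 0.95 is an illustrative cutoff, not a principled one."""
    return agreement(readings) >= threshold
```

A real pipeline would compare meanings rather than surface strings, but even a surface score makes the "progressive refinement" loop measurable: each prompt revision should move the score toward 1.0.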
For example, you can give it an account of what "equality",
according to Quine according to Russell, "is", and show
that now it's removed and quite capricious and not very arbitrary.
I.e., that's readily "equivocated".
The philo-sophy needs an account of the philo-casuy, or as
with regards to distinguishing and disambiguating
the "sophistry" and the "casuistry".
Ultimately my system uses GUIDs: one for each unique
sense meaning of every word.
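A minimal sketch of that idea, using deterministic UUIDv5 so the same (word, sense) pair always maps to the same GUID. The namespace string, the sense labels, and the lexicon shape are all illustrative assumptions, not olcott's actual system.

```python
import uuid

# Illustrative namespace; any fixed UUID works as a uuid5 namespace.
SENSE_NS = uuid.uuid5(uuid.NAMESPACE_DNS, "word-sense-lexicon.example")

def sense_guid(word: str, sense: str) -> uuid.UUID:
    """Deterministically derive one GUID per (word, sense) pair,
    so every distinct meaning gets its own stable identifier."""
    return uuid.uuid5(SENSE_NS, f"{word.lower()}|{sense.lower()}")

# Hypothetical mini-lexicon: the word "bank" with two senses.
lexicon = {
    sense_guid("bank", "financial institution"): ("bank", "financial institution"),
    sense_guid("bank", "edge of a river"): ("bank", "edge of a river"),
}
```

Because uuid5 is content-derived, two independently built lexicons that agree on the (word, sense) strings assign identical GUIDs, which is the property a shared unequivocal vocabulary would need.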
Hi,
I did the same using multiple LLMs in the past
few weeks, until ChatGPT degraded: they phased
out the old models, and it's now only 5.x.
You get the effect of four eyes seeing more than two.
Now, with ChatGPT 5.x, it's kind of one eye and one
eye-patch, plus completely brain-amputated.
Bye
P.S.: Maybe the best AI application is this here:
Does your cat bring home “gifts” too?
https://zeromouse.com/
On 16/04/2026 18:20, olcott wrote:
How do you know what is the best way or even a good way for
academic research?
On 4/16/2026 11:38 PM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
How do you know what is the best way or even a good way for
academic research?
AI can be useful:
_____________________
Regarding your search for a "Peter Olcott" arrest record, there is a documented case involving a man by that name that matches the details
you've mentioned.
The Arrest Details
In April 2015, 60-year-old Peter Olcott Jr. was arrested in Omaha,
Nebraska. According to court documents and local news reports (such as
KMTV 3 News), the specific circumstances were:
The Charges: He was charged with possession of child pornography.
The "God" Claim: During the investigation, Olcott reportedly told police
that the material was legal because he was God, and therefore he was not subject to human laws.
The Outcome: Following his arrest, Olcott underwent a series of mental
health evaluations. In late 2015, he was found incompetent to stand
trial, and the court ordered him to be committed to the Lincoln Regional Center for psychiatric treatment.
_____________________
See? Pete loves it.
On 4/17/2026 1:38 AM, Mikko wrote:
On 16/04/2026 18:20, olcott wrote:
How do you know what is the best way or even a good way for
academic research?
LLMs are like a guy with a PhD in everything who
is nevertheless a little senile. They were able to look
at my ideas from a computer science, mathematics, logic,
and linguistics frame of reference, which very few
people can do.
On top of this they were able to fully integrate
every alternative philosophical foundation of each
of these fields, not merely the conventional views.
This is what transformed Olcott's system into
Olcott's Proof Theoretic Semantics system.
The best that humans can do is one technical field
combined with one alternative philosophical foundation.
To sum this up, LLMs have an enormously broader
perspective than any human. That is what makes them
better for research.