• Prologers are hurt the most by LLMs

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jan 10 11:09:39 2025

    Hi,

Prologers, with their pipe dream of Ontologies
with Axioms, are hurt the most by LLMs, which work
more on the basis of Fuzzy Logic.

    Even good old "hardmath" is not immune to
    this coping mechanism:

    "I've cast one of my rare votes-to-delete. It is
    a self-answer to the OP's off-topic "question".
    Rather than improve the original post, the effort
    has been made to "promote" some so-called RETRO
    Project by linking YouTube and arxiv.org URLs.
Not worth retaining IMHO."
    -- hardmath

    https://math.meta.stackexchange.com/a/38051/1482376

    Bye
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jan 10 11:12:48 2025

    Hi,

I already posted this on sci.math, sci.logic and
sci.physics. It's probably the most important addition
to current LLMs, i.e. Retrieval-Augmented Generation (RAG).

But somehow the morons of MSE don't understand a bit
of what's going on in the world around them. They are
quite immune to progress in AI. Like stupid cows.

    ------------------ cut here --------------------

For more details on RAG, see the RETRO Project (*) discussed here at t=12:01:

    What's wrong with LLMs and what we should be building instead
    Tom Dietterich - 10.07.2023
    https://youtu.be/cEyHsMzbZBs

So it's not a very new technique, and it is now appearing
in generative AIs on the market as well. Some chat bots
are even able to show quite clearly the source documents
used in their answer. The MSE end user can still edit a
citation by hand to conform more closely to the SEN
format, if that were the issue.

Also, the MSE end user can now explicitly ask a chat
bot for sources, which he will get most of the time.
Or he can give a chat bot a source for review and
discussion. This works as well. So there is no longer
this "remoteness" of an LLM from the actual virtual
world of documents. It's more that they now inhabit
the actual virtual world and interact with it.

Another issue I see is that in certain countries and
educational institutions, it might be the case that
working with a chat bot is something that the students
learn, yet they are not officially allowed to use it on
MSE, because MSE policies are based on outdated views
about generative AI.

    See also:

    (*) RETRO Project:

    Improving language models by retrieving from trillions of tokens
    Sebastian Borgeaud et al. - 7 Feb 2022
    https://arxiv.org/abs/2112.04426

    ------------------ cut here --------------------
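
To make the mechanism concrete, here is a minimal sketch of the
retrieve-then-generate loop that RAG and RETRO build on. Everything
in it is an invented toy: the corpus, the scoring by word overlap
(standing in for vector search), and the stub generator (standing
in for an actual LLM).

# Toy sketch of Retrieval-Augmented Generation (RAG): fetch the
# best-matching documents first, then hand them to the generator
# together with the question, so the answer can cite its sources.

CORPUS = {
    "zadeh1965": "Fuzzy sets generalize classical sets with graded membership.",
    "borgeaud2022": "RETRO improves language models by retrieving from trillions of tokens.",
    "dietterich2023": "Retrieval grounds LLM answers in source documents.",
}

def retrieve(question, k=2):
    """Rank documents by naive word overlap (a stand-in for vector search)."""
    q = set(question.lower().split())
    return sorted(CORPUS.items(),
                  key=lambda kv: len(q & set(kv[1].lower().split())),
                  reverse=True)[:k]

def generate(question, passages):
    """Stub generator: a real RAG system would prompt an LLM here."""
    cited = "; ".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return f"Q: {question}\nA (grounded in sources): {cited}"

question = "How do language models improve by retrieving tokens?"
print(generate(question, retrieve(question)))

The point is the retrieve step: because the passages come out of an
explicit document store, the document identifiers are available for
citation, which is exactly what the policy quoted below claims is
impossible.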

    Bye

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jan 10 11:15:23 2025

    Hi,

In any case, I still hold my position, and you can review
it in 3-5 years, when stupid cows like the MSE people
have done all their learning:

    This here:

The content you provide must either be your own original work, or your
summary of the properly referenced work of others. [...] Generative
artificial intelligence tools are not capable of citing the sources of
knowledge used up to the standards of the Stack Exchange network.
https://math.stackexchange.com/help/gen-ai-policy

is most likely outdated. It ignores RAG:

What is Retrieval-Augmented Generation (RAG)?
https://aws.amazon.com/what-is/retrieval-augmented-generation/

    Bye

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jan 10 11:32:01 2025

    Hi,

I have switched to using the term "Fuzzy Logic", since
"Probability" and/or "Bayes" would surely be misleading.
"Fuzzy Logic" is quite old:

In 1965, in his essay Fuzzy Sets [5] - which had been
cited more than 70,000 times by mid-2017 - he first
presented his concept of a theory of fuzzy sets, which
became the nucleus and basis of the rapidly developing
fuzzy logic. (content: The Logic of Uncertainty)
https://de.wikipedia.org/wiki/Lotfi_Zadeh#Leistungen
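
As an illustration only, here is what Zadeh-style graded membership
looks like in a few lines of Python; the "tall" predicate and its
breakpoints are invented for the example.

# Toy fuzzy set in Zadeh's sense: membership is a degree in [0, 1],
# not the yes/no answer of classical set theory.

def tall(height_cm):
    """Membership degree in the fuzzy set 'tall' (breakpoints made up)."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0  # linear ramp between the breakpoints

# The standard fuzzy connectives: AND as min, OR as max, NOT as 1 - x.
for h in (155, 170, 185, 195):
    print(h, round(tall(h), 2))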

This is also quite interesting, though PostgreSQL is not
the only database management system that provides
such retrieval extensions:

Vectors are the new JSON
https://www.postgresql.eu/events/pgconfeu2023/sessions/session/4592/slides/435/pgconfeu2023_vectors.pdf
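
What such an extension evaluates inside a SQL query is essentially a
nearest-neighbour search over embedding vectors. The sketch below
redoes that in plain Python over made-up three-dimensional vectors;
real systems use learned embeddings with hundreds of dimensions and
indexed approximate search.

import math

# Toy nearest-neighbour retrieval over embedding vectors, i.e. the
# kind of distance computation a vector extension runs in-database.

DOCS = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.2, 0.8, 0.1],
    "doc_c": [0.1, 0.2, 0.9],
}

def cosine_distance(u, v):
    """1 - cosine similarity; 0 means same direction, 2 means opposite."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

query = [0.8, 0.2, 0.1]
for doc_id, vec in sorted(DOCS.items(),
                          key=lambda kv: cosine_distance(query, kv[1])):
    print(doc_id, round(cosine_distance(query, vec), 3))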

    Bye

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jan 10 12:05:17 2025

    Hi,

    Another example of total nonsense:

CfR: Vienna World Logic Day Lecture
Joao Marques-Silva on Trustable Explainable AI
14 Jan 2025, Online [WLD Event]
https://resources.illc.uva.nl/LogicList/newsitem.php?id=12030

The abstract is out of date. XAI was a problem a few
years ago, but it has nothing to do with ChatGPT,
because ChatGPT is not the machine learning that XAI
is trying to fix. The fuzzy logic in ChatGPT has nothing
to do with deep learning and latent parameters:
ChatGPT throws everything back onto natural language
and data.

There are virtually no invented latent parameters.
There are no ontologies with top and bottom in the
vectors. They are quite flat attribute structures that
capture not only words, but also sentences and
polysemy. See also:

    Sentence embedding
    https://en.wikipedia.org/wiki/Sentence_embedding
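
To see what flat attribute structures over whole sentences mean in
practice: a sentence embedding maps each sentence to a single vector,
and vector proximity tracks meaning in context rather than surface
words. A small sketch, assuming the open-source sentence-transformers
package and one of its published example models; it illustrates the
technique, not what ChatGPT uses internally.

# Illustrative sentence embeddings (assumes `pip install
# sentence-transformers`; "all-MiniLM-L6-v2" is a public example model).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The bank raised its interest rates.",    # financial sense of "bank"
    "The lender increased the cost of loans.",
    "They picnicked on the river bank.",      # geographic sense of "bank"
]
embeddings = model.encode(sentences)

# Polysemy in action: the two finance sentences should score closer
# to each other than either does to the river sentence, even though
# sentence 1 and sentence 3 share the word "bank".
print(util.cos_sim(embeddings[0], embeddings[1]).item())
print(util.cos_sim(embeddings[0], embeddings[2]).item())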

This means that the academic world is completely
overwhelmed, and now stares in mental shock, not
noticing that "traditions" like XAI are already out of date.

Best regards

    P.S.: Here's the abstract, it's complete nonsense:

Abstract:
Explainable artificial intelligence (XAI) is a mainstay of
trustworthy AI. Recent years have witnessed massive efforts
towards delivering some sort of XAI solutions. Most of these
efforts are based on non-symbolic methods, and invariably will
produce erroneous results. As a result, even if the predictions of
a machine learning model could be trusted, the lack of reliable
explanations will also make those predictions unworthy of trust.
This talk provides a brief glimpse of the emerging field of
logic-based explainable AI, a rigorous alternative to the still
widely-used but extremely problematic non-symbolic methods.
https://resources.illc.uva.nl/LogicList/newsitem.php?id=12030

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jan 10 12:21:25 2025

    Hi,

    As I said, “traditions” are a hindrance.
    XAI was itself promoted by DARPA:

    XAI: Explainable Artificial Intelligence

    https://www.darpa.mil/research/programs/explainable-artificial-intelligence

    This means that certain machine learning
    methods have been problematized. But that
    was 2018, and it's still like a virus
    in people's minds.

You just have to say the keyword "machine learning"
and then XAI comes up. The academics have been
conditioned like Pavlovian dogs, and they can't get
out of this "tradition" anymore.

    Bye

    --- Synchronet 3.20a-Linux NewsLink 1.114