• Latent Thinking the Forbidden Fruit (Was: Prolog totally missed the AI Boom)

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Nov 2 11:58:55 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Taking this one:

    Sam, Jakub, and Wojciech on the future of OpenAI https://www.youtube.com/watch?v=ngDCxlZcecw

    There are some funny parts where Jakub stutters:

    OpenAI is Deploying the Forbidden Method: GPT-6 is Different! https://www.youtube.com/watch?v=tR2M6JDyrRw

    The energy part: 20 billion USD for 1 GW over 5 years.
    I wonder how, when, and why the bubble will burst.
    Or is the bubble here to stay?
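
    Taking the quoted figures at face value, some back-of-the-envelope
    arithmetic (my own breakdown, assuming the 20 billion USD buys 1 GW
    of capacity that runs for the full 5 years):

    # Rough unit breakdown of "20 billion USD for 1 GW over 5 years".
    capex_usd = 20e9          # the quoted 20 billion USD
    capacity_w = 1e9          # 1 GW in watts
    years = 5

    usd_per_watt = capex_usd / capacity_w               # 20 USD per watt
    usd_per_year = capex_usd / years                    # 4 billion USD per year
    kwh_if_always_on = capacity_w * 24 * 365 * years / 1000
    usd_per_kwh = capex_usd / kwh_if_always_on          # about 0.46 USD per kWh

    print(usd_per_watt, usd_per_year, round(usd_per_kwh, 2))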

    Bye

    Mild Shock wrote:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we were to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into a transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well, ILP might have its merits; maybe we should not ask
    for a marriage of LLM and Prolog, but of Autoencoders and ILP.
    But it's tricky, I am still trying to decode the da Vinci code of
    things like stacked tensors: are they related to k-literal clauses?
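
    To make the question concrete, here is a toy sketch of one possible
    correspondence (purely my own assumption, not anything from the
    ILP-at-30 paper): a set of clauses with at most k literals, packed
    into a single stacked tensor, one slab per clause.

    import numpy as np

    # Toy encoding: clauses with at most k literals as a tensor of shape
    # (num_clauses, k, num_predicates, 2); the last axis one-hot encodes
    # polarity (positive vs negated literal).
    predicates = ["father", "parent", "male"]   # hypothetical predicate symbols
    k = 3                                       # max literals per clause

    # father(X,Y) :- parent(X,Y), male(X).  In clausal form the head is a
    # positive literal and the body literals are negated.
    clauses = [
        [("father", True), ("parent", False), ("male", False)],
    ]

    tensor = np.zeros((len(clauses), k, len(predicates), 2))
    for c, clause in enumerate(clauses):
        for i, (pred, positive) in enumerate(clause):
            tensor[c, i, predicates.index(pred), 0 if positive else 1] = 1.0

    print(tensor.shape)   # (1, 3, 3, 2) -- one stacked slab per clause
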
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Nov 2 12:19:25 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Taking this one:

    Sam, Jakub, and Wojciech on the future of OpenAI https://www.youtube.com/watch?v=ngDCxlZcecw

    There are some funny parts where Jakub stutters:

    OpenAI is Deploying the Forbidden Method: GPT-6 is Different! https://www.youtube.com/watch?v=tR2M6JDyrRw

    What even is "Latent Thinking"? Some thinking
    models go through verbalization loops and realize a
    form of "Loud Thinking", i.e. they think out loud.

    Autoencoders build a latent space during the
    training phase anyway, so one can do a chain of thought
    in the latent space, providing a form of "Silent Thinking".
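
    A minimal sketch of that idea, under toy assumptions (a tiny linear
    autoencoder with random weights standing in for a trained one,
    nothing from OpenAI's actual method): encode the input once, iterate
    a learned step in latent space, and decode only at the end.

    import numpy as np

    # Toy "silent thinking": an autoencoder gives a latent space, and a
    # small "think" step is iterated there before anything is decoded.
    rng = np.random.default_rng(0)

    d_in, d_lat = 16, 4
    W_enc = rng.normal(size=(d_lat, d_in)) * 0.1     # encoder (would be trained)
    W_dec = rng.normal(size=(d_in, d_lat)) * 0.1     # decoder (would be trained)
    W_think = rng.normal(size=(d_lat, d_lat)) * 0.1  # latent reasoning step

    def encode(x):
        return np.tanh(W_enc @ x)      # input -> latent code

    def decode(z):
        return W_dec @ z               # latent code -> reconstruction

    def silent_chain_of_thought(x, steps=8):
        z = encode(x)                  # nothing is verbalized ...
        for _ in range(steps):         # ... we just iterate in latent space
            z = np.tanh(W_think @ z)
        return decode(z)               # decode only the final latent state

    x = rng.normal(size=d_in)
    print(silent_chain_of_thought(x).shape)   # (16,)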

    The energy part: 20 billion USD for 1 GW over 5 years.
    I wonder how, when, and why the bubble will burst.
    Or is the bubble here to stay?

    Bye

    Mild Shock wrote:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we were to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into a transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986 https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well, ILP might have its merits; maybe we should not ask
    for a marriage of LLM and Prolog, but of Autoencoders and ILP.
    But it's tricky, I am still trying to decode the da Vinci code of
    things like stacked tensors: are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History) https://www.youtube.com/watch?v=OFS90-FX6pg


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Nov 2 13:20:15 2025
    From Newsgroup: comp.lang.prolog


    Hi,

    And what about the fully automated AI researcher on
    your team, as suggested in the OpenAI chat? Well, latent
    spaces were already there, with autoencoders sprouting

    before OpenAI picked them up. They challenge your own
    "framing" culture, from classic to modern, from Aristotle's
    Categories, to Russell's Complexes, to who knows what

    inside the scholarly world. But to cater to its human client,
    the AI has to be trained to use this "framing" culture in
    conversation. But could it equally well communicate in
    latent-space talk?

    The cryptic "framing" that the AI training invents by itself?
    This kind of interaction might have some benefit, so
    society must arm itself with AI researchers?

    Bye

    P.S.: Napoleon was the same Peace Lover as Drump?

    The cartoon of Napoleon III below (EXAMPLE 4)
    appeared in Punch on Feb. 19, 1859.
    Premise: Napoleon declares that “The Empire embodies
    peace” (“L’Empire c’est la paix”).
    Premise: Napoleon has surrounded himself with many armaments.
    Conclusion: Napoleon may sound inoffensive when he says that
    “The Empire embodies peace,” but his build up of armaments
    suggests we should be wary of the empire he has built.
    https://plato.stanford.edu/entries/logic-informal/



    Mild Shock wrote:
    Hi,

    Taking this one:

    Sam, Jakub, and Wojciech on the future of OpenAI https://www.youtube.com/watch?v=ngDCxlZcecw

    There are some funny parts where Jakub stutters:

    OpenAI is Deploying the Forbidden Method: GPT-6 is Different! https://www.youtube.com/watch?v=tR2M6JDyrRw

    What even is "Latent Thinking"? Some thinking
    models go through verbalization loops and realize a
    form of "Loud Thinking", i.e. they think out loud.

    Autoencoders build a latent space during the
    training phase anyway, so one can do a chain of thought
    in the latent space, providing a form of "Silent Thinking".

    The energy part: 20 billion USD for 1 GW over 5 years.
    I wonder how, when, and why the bubble will burst.
    Or is the bubble here to stay?

    Bye

    Mild Shock wrote:

    Inductive logic programming at 30
    https://arxiv.org/abs/2102.10556

    The paper contains not a single reference to autoencoders!
    Still they show this example:

    Fig. 1 ILP systems struggle with structured examples that
    exhibit observational noise. All three examples clearly
    spell the word "ILP", with some alterations: 3 noisy pixels,
    shifted and elongated letters. If we were to learn a
    program that simply draws "ILP" in the middle of the picture,
    without noisy pixels and elongated letters, that would
    be a correct program.

    I guess ILP is 30 years behind the AI boom. An early autoencoder
    turned into a transformer was already reported here (*):

    SERIAL ORDER, Michael I. Jordan - May 1986
    https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf

    Well, ILP might have its merits; maybe we should not ask
    for a marriage of LLM and Prolog, but of Autoencoders and ILP.
    But it's tricky, I am still trying to decode the da Vinci code of
    things like stacked tensors: are they related to k-literal clauses?
    The paper I referenced is found in this excellent video:

    The Making of ChatGPT (35 Year History)
    https://www.youtube.com/watch?v=OFS90-FX6pg



    --- Synchronet 3.21a-Linux NewsLink 1.2