• NVIDIA Jetson Orin controlled by Prolog

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jan 3 22:20:10 2025

    Hi,

    OK, this one is only 250 bucks for a TPU:

    Introducing NVIDIA Jetson Orin™ Nano Super
    https://www.youtube.com/watch?v=S9L2WGf1KrM

    Now I am planning to do the following:

    Create a tensor flow Domain Specific Language (DSL).

    With these use cases:

    - Run the tensor flow DSL locally in
      your Prolog system, interpreted.

    - Run the tensor flow DSL locally in
      your Prolog system, compiled.

    - Run the tensor flow DSL locally on
      your Tensor Processing Unit (TPU).

    - Run the tensor flow DSL remotely
      on a compute server.

    - What else?

    Maybe also support the ONNX file format?
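
    Not a fixed design, but a minimal sketch of the
    first use case, "interpreted locally in your
    Prolog system", with a made-up term syntax
    (lit/1, add/2, mul/2, matmul/2) and a hypothetical
    eval_t/2 interpreter over plain lists-of-lists,
    sticking to rank-2 tensors for brevity
    (SWI-Prolog flavored):

    % Hypothetical tensor DSL terms: lit(T) wraps a plain list-of-lists,
    % add/2 and mul/2 are elementwise, matmul/2 is matrix product.
    % eval_t(+Expr, -Tensor) interprets an expression to a concrete tensor.
    eval_t(lit(T), T).
    eval_t(add(X, Y), T) :-
        eval_t(X, A), eval_t(Y, B),
        maplist(maplist(plus_), A, B, T).
    eval_t(mul(X, Y), T) :-
        eval_t(X, A), eval_t(Y, B),
        maplist(maplist(times_), A, B, T).
    eval_t(matmul(X, Y), T) :-
        eval_t(X, A), eval_t(Y, B),
        transpose_(B, BT),
        maplist(row_times(BT), A, T).

    plus_(A, B, C) :- C is A + B.
    times_(A, B, C) :- C is A * B.

    dot([], [], 0).
    dot([X|Xs], [Y|Ys], S) :- dot(Xs, Ys, S0), S is S0 + X*Y.
    row_times(BT, Row, Out) :- maplist(dot(Row), BT, Out).

    transpose_([[]|_], []) :- !.
    transpose_(M, [Col|Cols]) :-
        maplist(head_tail, M, Col, Rest),
        transpose_(Rest, Cols).
    head_tail([H|T], H, T).

    For example, ?- eval_t(matmul(lit([[1,2],[3,4]]),
    lit([[5],[6]])), T) yields T = [[17],[39]]. The
    compiled use case would translate the same terms
    to straight-line host arithmetic, and the TPU and
    compute-server use cases would translate them to
    kernel launches or RPC calls instead.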

    Bye
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jan 3 22:31:10 2025

    Hi,

    Maybe one can get a better grip on this intimate
    relationship simply by going hands-on?

    Linear Algebraic Approaches to Logic Programming

    Katsumi Inoue (National Institute of Informatics, Japan)

    Abstract: Integration of symbolic reasoning and machine
    learning is important for robust AI. Realization of
    symbolic reasoning based on algebraic methods is promising
    to bridge between symbolic reasoning and machine learning,
    since algebraic data structures have been used in machine
    learning. To this end, Sakama, Inoue and Sato have defined
    notable relations between logic programming and linear
    algebra and have proposed algorithms to compute logic
    programs numerically using tensors. This work has been
    extended in various ways, to compute supported and stable
    models of normal logic programs, to enhance the efficiency
    of computation using sparse methods, and to enable abduction
    for abductive logic programming. A common principle in
    this approach is to formulate logical formulas as vectors/
    matrices/tensors, and linear algebraic operations are
    applied on these elements for computation of logic programming.
    Partial evaluation can be realized in parallel and by
    self-multiplication, showing the potential for exponential
    speedup. Furthermore, the idea to represent logic programs
    as tensors and matrices and to transform logical reasoning
    to numeric computation can be the basis of the differentiable
    methods for learning logic programs.

    https://www.iclp24.utdallas.edu/invited-speakers/
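
    Hands-on, then. Below is a minimal sketch of the
    core idea in SWI-Prolog, under a simplifying
    assumption the papers do not make (each atom is
    the head of at most one rule; multiple rules per
    head need extra machinery): encode a definite
    program as a matrix M with entry 1/k in row h,
    column b for each rule h :- b1,...,bk, then
    iterate v := max(v, theta(M v)), where theta
    thresholds at 1, until a fixpoint; that fixpoint
    is the least model.

    % Atoms are indexed 1..N. Rules are rule(Head, Body) terms with Head
    % an atom index and Body a list of indices; a fact has Body = [].
    program_matrix(N, Rules, M) :-
        numlist(1, N, Is),
        maplist(matrix_row(N, Rules), Is, M).
    matrix_row(N, Rules, H, Row) :-
        numlist(1, N, Js),
        maplist(cell(Rules, H), Js, Row).
    cell(Rules, H, B, V) :-
        (   member(rule(H, Body), Rules), Body \== [], memberchk(B, Body)
        ->  length(Body, K), V is 1 rdiv K   % rational keeps the test exact
        ;   V = 0
        ).

    % theta thresholds at 1: a head fires only when all its body atoms
    % are true, i.e. all k entries of 1/k contribute.
    theta(X, 1) :- X >= 1, !.
    theta(_, 0).

    mat_vec(M, V, W) :- maplist(row_dot(V), M, W).
    row_dot(V, Row, S) :- foldl(acc, Row, V, 0, S).
    acc(A, B, S0, S) :- S is S0 + A*B.

    % One T_P step: fire all rules at once, keep what was already true.
    tp_step(M, V, V1) :-
        mat_vec(M, V, W0),
        maplist(theta, W0, W),
        maplist(max_, V, W, V1).
    max_(A, B, C) :- C is max(A, B).

    tp_fix(M, V, Fix) :-
        tp_step(M, V, V1),
        (   V1 == V -> Fix = V ; tp_fix(M, V1, Fix) ).

    fact_vector(N, Rules, V0) :-
        numlist(1, N, Is),
        maplist(fact_bit(Rules), Is, V0).
    fact_bit(Rules, I, 1) :- memberchk(rule(I, []), Rules), !.
    fact_bit(_, _, 0).

    With atoms 1=p, 2=q, 3=r and Rules =
    [rule(1,[2,3]), rule(2,[]), rule(3,[2])], i.e.
    p :- q,r. q. r :- q., the fixpoint from the fact
    vector [0,1,0] is [1,1,1], the least model
    {p,q,r}. As I read the abstract, the
    "self-multiplication" trick amounts to repeatedly
    squaring M instead of stepping once at a time,
    which is where the potential exponential speedup
    in the number of steps comes from.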

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jan 5 20:08:41 2025


    John Sowa shows clear signs of coping problems.
    We just have an instance of “The Emperor’s New
    Clothes”: some companies have been left naked by
    the advent of GPT.

    I don’t think it is productive to postulate
    some CANNOT, as here:

    Linguists say that LLMs cannot be a language model.
    - Tensors do not make the linguistic information explicit.
    - They do not distinguish the syntax, semantics, and ontology.
    - GPT cannot use the 60+ years of AI research and development.
    https://www.youtube.com/watch?v=6K6F_zsQ264

    Then in the next slide he embraces tensors for
    his new Prolog system nevertheless. WTF! Basically
    this is a very narrow narrative, which is totally
    unfounded in my opinion. Just check out these papers:

    GRIN: GRadient-INformed MoE
    https://arxiv.org/abs/2409.12136

    A Survey on Mixture of Experts
    https://arxiv.org/abs/2407.06204

    This paints a totally different picture of LLMs;
    it seems they are more in the tradition of CYC
    by Douglas Lenat.

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jan 5 20:13:38 2025


    What’s also interesting: the recent physics
    Nobel Prize recipient Geoffrey Hinton also has a
    roughly 30-year-old paper about MoE, which has
    some 6652 citations:

    Adaptive Mixtures of Local Experts
    https://www.cs.toronto.edu/~fritz/absps/jjnh91.pdf

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jan 5 21:43:24 2025

    Douglas Lenat died on August 31, 2023. I don’t
    know whether CYC and Cycorp will make a dent in
    the future. CYC addressed the common knowledge
    bottleneck, and so do LLMs. I am using CYC mainly
    as a historical reference.

    The “common knowledge bottleneck” in AI is a
    challenge that plagued early AI systems. It stems
    from the difficulty of encoding vast amounts of
    everyday, implicit human knowledge: things we take
    for granted but that computers historically
    struggled to understand. Currently LLMs by design
    focus more on shallow knowledge, whereas systems
    such as CYC might exhibit deeper knowledge in
    certain domains, making them possibly more suitable
    when the stakeholders expect more reliable
    analytic capabilities.

    The problem is not explainability,
    the problem is intelligence.


  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jan 5 21:45:52 2025


    Notice John Sowa calls the LLM the “store” of GPT.
    This could be a misconception that matches what
    Permion did for their cognitive memory. But matters
    are a little bit more complicated, to say the
    least, especially since OpenAI insists that GPT
    itself is also an LLM. What might highlight the
    situation is Fig. 6 of this paper, which postulates
    two Mixture of Experts (MoE) modules, one on the
    attention mechanism and one on the feed-forward
    layer:

    A Survey on Mixture of Experts
    https://arxiv.org/abs/2407.06204
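
    For intuition, the routing principle itself is
    small: a gate scores the experts for a given
    input, only the top-K experts are evaluated, and
    their outputs are combined with softmax weights
    over the selected scores. A toy sketch in
    SWI-Prolog, with made-up experts and a made-up
    gate standing in for a learned scoring network
    (nothing here reflects actual GPT internals):

    % Three toy experts, each mapping a number to a number.
    experts([double, square, negate]).
    apply_expert(double, X, Y) :- Y is 2*X.
    apply_expert(square, X, Y) :- Y is X*X.
    apply_expert(negate, X, Y) :- Y is -X.

    % Made-up gate scores, a stand-in for a learned scoring network.
    gate(double, X, S) :- S is X.
    gate(square, X, S) :- S is abs(X)/2.
    gate(negate, X, S) :- S is -X.

    score(X, E, S-E) :- gate(E, X, S).

    % moe(+X, +K, -Y): evaluate only the top-K experts by gate score,
    % combine their outputs weighted by softmax over the selected scores.
    moe(X, K, Y) :-
        experts(Es),
        maplist(score(X), Es, Scored),
        msort(Scored, Asc),
        reverse(Asc, Desc),          % best-scoring experts first
        length(Top, K),
        append(Top, _, Desc),        % keep the K best
        pairs_keys(Top, Ss),
        maplist(exp_, Ss, Exps),
        sum_list(Exps, Z),           % softmax normalizer
        foldl(combine(X, Z), Top, 0, Y).

    exp_(S, E) :- E is exp(S).

    combine(X, Z, S-Expert, Acc0, Acc) :-
        apply_expert(Expert, X, Out),
        W is exp(S)/Z,
        Acc is Acc0 + W*Out.

    For example, ?- moe(3, 2, Y) routes through double
    and square and yields Y of about 6.55. Per Fig. 6
    of the survey, such routing can sit on the
    feed-forward sublayer, on the attention heads,
    or on both.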

    Disclaimer: a pity Marvin Minsky didn’t describe
    these things already in his Society of Mind! It
    would make them easier to understand now…
