• AI most hated by formal verification (Was: Fuzzy Testing is your Swiss Knife)

    From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Nov 26 17:02:37 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    So Boris the Loris and Nazi Retartd Julio are
    not alone. There is now a mobilization of the
    kind of rage against the machine,

    fighting for methods without randomness. It's
    almost as if Albert Einstein rose from his
    grave and is now preaching,

    "God does not play dice"

    So how it started:

    PIVOT was an interactive program verifier designed by
    L. Peter Deutsch for his Ph.D. dissertation.
    Posted here by permission of L. Peter Deutsch. https://softwarepreservation.computerhistory.org/pivot/

    How it's going:

    Formal Methods: Whence and Whither?
    The text also highlights the evolving role of formal
    methods amidst technological advancements, such as
    AI, and explores educational and standardization issues
    related to their adoption. https://de.slideshare.net/slideshow/formal-methods-whence-and-whither-keynote/273708245

    Can the Don Quixotes win and fight the AI windmills?

    LoL

    Bye

    Mild Shock wrote:

    I see it as fuzzy testing of the community.
    It is certainly beneficial if used correctly.

    Fuzzy Testing also goes by the name QuickCheck.
    You can also use Fuzzy Testing for benchmarking.
    Mathematically it relies on the Law of Large Numbers:

    Law of large numbers
    https://en.wikipedia.org/wiki/Law_of_large_numbers

    This means you don't even need a random generator
    with a programmable seed, so that a comparison
    involves exactly the same random number sequences.

    Just assume that your individual results have a
    standard deviation σ. Then the variance of the mean X
    of n experiments decreases inversely with n, i.e. the
    noise gets washed out:

    VAR(X) = σ^2 / n

    For example, with σ = 10 ms per run and n = 100 runs,
    the standard deviation of the mean drops to
    σ/sqrt(n) = 1 ms.

    A third use case of Fuzzy Testing is to determine
    frequentist probabilities. For example, I determined
    that 25% of the triples are not transitive for a
    variant of @kuniaki.mukai's compare/3.
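
    As an illustration, here is a minimal sketch of how such a
    rate could be estimated with SWI-Prolog's aggregate_all/3,
    using the fuzzy/1 random term generator that appears further
    below; violation/1 and violation_rate/3 are made-up names,
    and the rate is taken over all sampled triples:

    % violation(+Cmp): a random triple X, Y, Z where the
    % comparison Cmp gives the same order C for X-Y and Y-Z,
    % but not for X-Z.
    violation(Cmp) :-
        fuzzy(X), fuzzy(Y), fuzzy(Z),
        call(Cmp, C, X, Y),
        call(Cmp, C, Y, Z),
        \+ call(Cmp, C, X, Z).

    % violation_rate(+Cmp, +N, -Rate): fraction of N sampled
    % triples that exhibit such a violation.
    violation_rate(Cmp, N, Rate) :-
        aggregate_all(count,
            ( between(1, N, _), once(violation(Cmp)) ),
            Bad),
        Rate is Bad / N.

    % e.g. ?- violation_rate(mercio, 10000, Rate).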


    Mild Shock wrote:
    You can also use Fuzzy Testing for
    benchmarking, not only to find faults.
    For example, when I benchmark mercio/3 via
    fuzzy/1, I find it doesn't fare extremely badly:

    ?- time((between(1,100,_), mercio, fail; true)).
    % 4,386,933 inferences, 0.375 CPU in 0.376 seconds (100% CPU, 11698488 Lips)
    true.

    And I am not using some of the optimizations
    that @kuniaki.mukai posted elsewhere and that
    I posted on 06.08.2025 on comp.lang.prolog. In fact,
    it is only ca. 20% slower than SWI-Prolog's compare/3:

    ?- time((between(1,100,_), swi, fail; true)).
    % 3,786,880 inferences, 0.312 CPU in 0.325 seconds (96% CPU, 12118016 Lips)
    true.

    The test harness was:

    swi :-
         between(1,1000,_),
         fuzzy(X), fuzzy(Y),
         swi(_, X, Y), fail; true.

    mercio :-
         between(1,1000,_),
         fuzzy(X), fuzzy(Y),
         mercio(_, X, Y), fail; true.
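
    The fuzzy/1 generator is not spelled out here. A minimal
    sketch of what such a random term generator could look like,
    assuming SWI-Prolog's library(random) and small binary trees
    that are occasionally tied back into themselves to yield
    rational (cyclic) terms:

    :- use_module(library(random)).

    % fuzzy(-Term): random term of depth at most 4 over a small
    % signature; with probability 0.3 one remaining variable is
    % bound to the whole term, producing a cyclic term.
    fuzzy(T) :-
        fuzzy(4, T),
        (   random(R), R < 0.3, term_variables(T, [V|_])
        ->  V = T
        ;   true
        ).

    fuzzy(0, T) :- !, leaf(T).
    fuzzy(D, T) :-
        random(R),
        (   R < 0.4 -> leaf(T)
        ;   R < 0.5 -> true                % leave a variable hole
        ;   D1 is D - 1,
            fuzzy(D1, A), fuzzy(D1, B),
            T = f(A, B)
        ).

    leaf(T) :- random_member(T, [a, b, c, 0, 1]).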

    The difficulty was to find a 100% Prolog compare/3
    that corresponds to SWI-Prolog's. But below you find a
    fresh implementation in 100% Prolog using a union-find
    structure:

    % swi(-Atom, +Term, +Term)
    swi(C, X, Y) :-
        swi(C, X, Y, [], _).

    % swi(-Atom, +Term, +Term, +List, -List)
    swi(C, X, Y, L, R) :- compound(X), compound(Y), !,
        sys_union_find(X, L, Z),
        sys_union_find(Y, L, T),
        swi_found(C, Z, T, L, R).
    swi(C, X, Y, L, L) :- compare(C, X, Y).

    % swi_found(-Atom, +Term, +Term, +List, -List)
    swi_found(C, X, Y, L, L) :-
        same_term(X, Y), !, C = (=).
    swi_found(C, X, Y, _, _) :-
        functor(X, F, N),
        functor(Y, G, M),
        compare(D, N/F, M/G),
        D \== (=), !, C = D.
    swi_found(C, X, Y, L, R) :-
        X =.. [_|P],
        Y =.. [_|Q],
        foldl(swi(C), P, Q, [X-Y|L], R).

    % sys_union_find(+Term, +List, -Term)
    % Follow the chain of already visited pairs, so that cyclic
    % (rational) terms do not cause infinite descent.
    sys_union_find(X, L, T) :-
        member(Y-Z, L),
        same_term(X, Y), !,
        sys_union_find(Z, L, T).
    sys_union_find(X, _, X).
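
    As a quick sanity check (not in the original post), comparing
    two structurally identical cyclic terms terminates, since the
    visited pairs in the accumulator stop the descent:

    ?- X = f(a, X), Y = f(a, Y), swi(C, X, Y).
    % expected: C = (=)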


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 27 14:19:54 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I am speculating that an NPU could give 1000x more
    LIPS for certain combinatorial search problems. It all
    boils down to implementing this thingy:

    In June 2020, Stockfish introduced the efficiently
    updatable neural network (NNUE) approach, based
    on earlier work by computer shogi programmers https://en.wikipedia.org/wiki/Stockfish_%28chess%29

    There are varying degrees of what gets updated in
    a neural network. But the specs of an NPU tell
    me very simply the following:

    - An NPU can make 40 TFLOPS; all my AI laptops
    from 2025 can do that right now. The brands
    are Intel Ultra, AMD Ryzen and Snapdragon X,

    but I guess there might be more brands around
    which can do that with a price tag of less
    than 1000 USD.

    - SWI-Prolog can make 30 MLIPS, Dogelog Player
    runs similarly, some Prolog systems are faster.

    Now that is 10^12 versus 10^6. If some of the
    LIPS can be delegated to an NPU, and if we assume,
    for example, less locality or more primitive

    operations that require layering, we could assume
    that a factor of 1000 of the NPU's 10^12 goes
    away. So we might still see 10^9 LIPS emerge.

    Now make the calculation:

    - Without NPU: MLIPS
    - With NPU: GLIPS
    - Ratio: 1000x faster

    Have fun!

    Bye

    Mild Shock wrote:
    Mercio’s Algorithm (2012) for Rational
    Tree Compare is specified mathematically here.
    It is based on computing truncations A′ = (A_0,
    A_1, ...) of a rational tree A:

    A < B ⟺ A′ <_lex B′

    https://math.stackexchange.com/a/210730

    Here is an implementation in Prolog.
    First the truncation:

    trunc(_, T, T) :- var(T), !.
    trunc(0, T, F) :- !, functor(T, F, _).
    trunc(N, T, S) :-
       M is N-1,
       T =.. [F|L],
       maplist(trunc(M), L, R),
       S =.. [F|R].

    And then the iterative deepening:

    mercio(N, X, Y, C) :-
       trunc(N, X, A),
       trunc(N, Y, B),
       compare(D, A, B),
       D \== (=), !, C = D.
    mercio(N, X, Y, C) :-
       M is N + 1,
       mercio(M, X, Y, C).

    The main entry first uses (==)/2 as a
    terminating equality check, and if the
    rational trees are not syntactically equal,
    falls back to the iterative deepening:

    mercio(C, X, Y) :- X == Y, !, C = (=).
    mercio(C, X, Y) :- mercio(0, X, Y, C).
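
    As a usage sketch (not from the original post), two distinct
    rational trees are separated after one deepening step:

    ?- X = f(X), Y = f(g(Y)), mercio(C, X, Y).
    % expected: C = (<), since the depth-1 truncations
    % f(f) and f(g) already differ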

    I couldn't find a triple yet that violates
    transitivity. But I am also not very happy
    with the code. It looks a little expensive
    to create a truncation copy iteratively.

    Provided there is really no counterexample,
    maybe we can do it smarter and faster? It
    might also stand the test of conservativity?

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 27 14:51:17 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I already posted how to do SAT and the Clark Completion
    with ReLU. This was a post from 15.03.2025, 16:13,
    see also below. But can we do CLP as well? Here

    is a take on the dif/2 constraint, or more precisely
    a very primitive (#\=)/2 from CLP(FD), going towards
    analogical computing. It might work for domains that

    fit into the quantization size of an NPU:

    1) First note that we can model abs() via ReLU:

    abs(x) = ReLU(x) + ReLU(- x)

    2) Then note that for non-negative integer values, we
    can model chi(x>0), the characteristic function of the
    predicate x > 0:

    chi(x>0) = 1 - ReLU(1 - x)

    3) Now chi(x=\=y) is simply:

    chi(x=\=y) = chi(abs(x - y) > 0)

    Now insert the formula for chi(x>0) based on ReLU
    and the formula for abs() based on ReLU. Et voilà, you
    get a manually created neural network for the

    (#\=)/2 constraint of CLP(FD), constraint logic
    programming over finite domains.
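
    A minimal sketch of this construction in plain Prolog, just
    to check the arithmetic; relu/2 and chi_neq/3 are made-up
    helper names, not part of any library:

    % relu(+Expr, -Value): the rectified linear unit.
    relu(X, Y) :- Y is max(X, 0).

    % chi_neq(+X, +Y, -C): C is 1 if X =\= Y and 0 if X =:= Y,
    % built only from ReLU, sums and constant shifts.
    chi_neq(X, Y, C) :-
        relu(X - Y, A1),
        relu(Y - X, A2),
        A is A1 + A2,               % abs(X - Y)
        relu(1 - A, R),
        C is 1 - R.                 % chi(abs(X - Y) > 0)

    % ?- chi_neq(3, 5, C).   gives C = 1
    % ?- chi_neq(4, 4, C).   gives C = 0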

    Have Fun!

    Bye

    Mild Shock wrote:
    A storm of symbolic differentiation libraries
    was posted. But what can these Prolog code
    fossils do?

    Does one of these libraries support Python's symbolic
    Piecewise? For example, one can define the rectified
    linear unit (ReLU) with it:

                      /   x      x  >= 0
          ReLU(x) := <
                      \   0      otherwise

    With the above one can already translate a
    propositional logic program, that uses negation
    as failure, into a neural network:

    NOT     \+ p             1 - x
    AND     p1, ..., pn      ReLU(x1 + ... + xn - (n-1))
    OR      p1; ...; pn      1 - ReLU(-x1 - ... - xn + 1)

    For clauses, just use the Clark Completion; it makes
    the defined predicate a new neuron, dependent on
    other predicate neurons,

    through a network of intermediate neurons. Because
    of the constant shift in AND and OR, the neurons
    will have a bias b.

    So rule-based zero-order logic is a subset of
    neural networks.
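
    A minimal sketch of this translation table in plain Prolog,
    assuming truth values in {0,1}; neg/2, conj/2 and disj/2 are
    made-up names for illustration:

    relu(X, Y) :- Y is max(X, 0).

    neg(X, Y) :- Y is 1 - X.                   % NOT

    conj(Xs, Y) :-                             % AND: ReLU(x1+...+xn-(n-1))
        length(Xs, N),
        sum_list(Xs, S),
        relu(S - (N - 1), Y).

    disj(Xs, Y) :-                             % OR: 1 - ReLU(1-x1-...-xn)
        sum_list(Xs, S),
        relu(1 - S, R),
        Y is 1 - R.

    % The Clark Completion of  p :- q, r.  p :- s.  is
    % p <-> (q, r) ; s, evaluated for example as:
    % ?- Q = 1, R = 0, S = 1, conj([Q, R], A), disj([A, S], P).
    %    gives A = 0, P = 1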

    Python symbolic Pieceweise

    https://how-to-data.org/how-to-write-a-piecewise-defined-function-in-python-using-sympy/


    rectified linear unit (ReLU) https://en.wikipedia.org/wiki/Rectifier_(neural_networks)

    Clark Completion
    https://www.cs.utexas.edu/~vl/teaching/lbai/completion.pdf

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 27 15:03:18 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    What mindset is needed to program an NPU? Most likely
    a mindset based on fork/join parallelism is nonsense.
    What could be more fruitful is to view the AI accelerator

    as a black box that runs a neural network, whereby
    a neural network can effectively be viewed as a form
    of hardware, although under the hood it is open weights

    and matrix operations. So the mindset needs:

    Zeus: A Language for Expressing Algorithms in Hardware
    K. J. Lieberherr - 01 February 1985 https://dl.acm.org/doi/10.1109/MC.1985.1662799

    What has changed compared to back then?

    - 80s: Field Programmable Gate Arrays (FPGA)

    - 2020s AI Boom: NPUs, Unified Memory and Routing Fabric

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 27 15:23:53 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Well, I am currently looking at local AI when I
    consider NPUs, which have very small floats.
    An example of bigger AI accelerators are TPUs,

    which can deal with larger floats. Consequently
    they can also deal with a larger integer range.
    I only read this anecdote yesterday:

    "In December 2017, Stockfish 8 was used as a
    benchmark to test Google division DeepMind's
    AlphaZero, with Stockfish running on CPU and
    AlphaZero running on Google's proprietary
    Tensor Processing Units (TPUs).

    AlphaZero was trained through self-play for
    a total of nine hours, and reached Stockfish's
    level after just four. AlphaZero also played
    twelve 100-game matches against Stockfish starting
    from twelve popular openings for a final score
    of 290 wins, 886 draws and 24 losses, for a
    point score of 733:467." https://en.wikipedia.org/wiki/Stockfish_(chess)#Stockfish_8_versus_AlphaZero

    And then:

    "AlphaZero's victory over Stockfish sparked a
    flurry of activity in the computer chess community,
    leading to a new open-source engine aimed at
    replicating AlphaZero, known as Leela Chess Zero.
    The two engines remained close in strength for a
    while, but Stockfish has pulled away since the
    introduction of NNUE, winning every TCEC
    season since Season 18."

    Meanwhile the Prolog community: Sleepy Joe

    LoL

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 28 14:50:46 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    I am 100% serious about Giga Logical Inferences
    per Second (GLIPS), leaving behind the sequential
    constraint-solving world:

    The Complexity of Constraint Satisfaction Revisited https://www.cs.ubc.ca/~mack/Publications/AIP93.pdf

    Only I have missed the deep learning bandwagon and
    never programmed with PyTorch or Keras. So even
    for the banal problem of coding some

    ReLU networks and shipping them to a GPU or NPU,
    or a hybrid, I don't have much experience. So
    I am marveling at papers such as:

    Learning Variable Ordering Heuristics
    for Solving Constraint Satisfaction Problems
    https://arxiv.org/abs/1912.10762

    Given that the AI boom started after 2019,
    the above paper is already old, and it has
    curious antique terminology like Multilayer

    Perceptron, which is not so common anymore.
    It also does more than what I want to demonstrate;
    it also does policy learning.

    Bye

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Nov 28 15:12:29 2025
    From Newsgroup: comp.lang.prolog

    Hi,

    Notably, we try to do something with the AI laptops
    that came out in 2025, tapping into their NPU.
    Back in the late 80s, GigaLIPS were rather

    hypothetical; small machines could hardly
    do KLIPS. Today small machines easily do MLIPS.
    But hardly any popular Prolog system yet taps

    into the AI boom, they are all Sleepy Joes.
    Not to mention populated by morons like Boris the
    Loris and Nazi Retard Julio.

    They simply cannot connect the dots.

    Bye

    P.S.: A nice trip into the past is this paper:

    Is a GigaLIP Fast Enough?
    Tom W. Keller - December 1988
    DOI: 10.1007/BF00436711

    --- Synchronet 3.21a-Linux NewsLink 1.2