I see it as fuzzy testing of the community.
It is certainly beneficial if used correctly.
Fuzzy Testing also goes by the name QuickCheck.
You can also use Fuzzy Testing for benchmarking.
Mathematically it relies on the Law of Large Numbers:

Law of large numbers
https://en.wikipedia.org/wiki/Law_of_large_numbers

This means you don't even need a random generator
with a programmable seed, so that a comparison
involves the exact same random number sequences.
Just assume that a single result has standard
deviation σ. Then the variance of the average
decreases inversely with the number n of
experiments, i.e. the noise gets washed out:

VAR(X̄) = σ^2 / n
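As a one-line check, assuming the n results are
independent with equal variance σ^2:

VAR(X̄) = VAR((X_1 + ... + X_n)/n)
       = (1/n^2) · n·σ^2 = σ^2/n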
A third use case of Fuzzy Testing is to determine
frequentist probabilities, like when I determined
that 25% of the triples of a variant of
@kuniaki.mukai's compare/3 are not transitive.
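Such a rate can be estimated with a counting
harness along these lines, assuming a random term
generator fuzzy/1 and a comparison cmp/3 under
test (both are placeholder names):

estimate(N, Rate) :-
   aggregate_all(count,
      ( between(1, N, _),
        fuzzy(X), fuzzy(Y), fuzzy(Z),
        cmp(<, X, Y), cmp(<, Y, Z),
        \+ cmp(<, X, Z)
      ), Bad),
   Rate is Bad / N.

?- estimate(10000, Rate).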
Mild Shock wrote:
You can use Fuzzy Testing also for
benchmarking. Not only to find faults.
For example when I benchmark mercio/3 via
fuzzy/1, I find it doesn't fare extremely badly:
?- time((between(1,100,_), mercio, fail; true)).
% 4,386,933 inferences, 0.375 CPU in 0.376 seconds (100% CPU, 11698488 Lips)
true.
And I am not using some of the optimizations
that @kuniaki.mukai posted elsewhere and that
I posted 06.08.2025 on comp.lang.prolog. Fact is,
it is only ca. 20% slower than SWI-Prolog's compare/3:
?- time((between(1,100,_), swi, fail; true)).
% 3,786,880 inferences, 0.312 CPU in 0.325 seconds (96% CPU, 12118016 Lips)
true.
The test harness was:
swi :-
   between(1,1000,_),
   fuzzy(X), fuzzy(Y),
   swi(_, X, Y), fail; true.

mercio :-
   between(1,1000,_),
   fuzzy(X), fuzzy(Y),
   mercio(_, X, Y), fail; true.
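If one nevertheless wants both runs to see the
exact same random sequences, SWI-Prolog allows
fixing the generator seed, for example:

?- set_random(seed(42)),
   time((between(1,100,_), swi, fail; true)).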
The difficulty was to find a 100% Prolog compare/3
that corresponds to SWI-Prolog's. But you find a
fresh implementation in 100% Prolog using a
union-find structure below:
% swi(-Atom, +Term, +Term)
swi(C, X, Y) :-
   swi(C, X, Y, [], _).

% swi(-Atom, +Term, +Term, +List, -List)
swi(C, X, Y, L, R) :- compound(X), compound(Y), !,
   sys_union_find(X, L, Z),
   sys_union_find(Y, L, T),
   swi_found(C, Z, T, L, R).
swi(C, X, Y, L, L) :- compare(C, X, Y).

% swi_found(-Atom, +Term, +Term, +List, -List)
swi_found(C, X, Y, L, L) :-
   same_term(X, Y), !, C = (=).
swi_found(C, X, Y, _, _) :-
   functor(X, F, N),
   functor(Y, G, M),
   compare(D, N/F, M/G),
   D \== (=), !, C = D.
swi_found(C, X, Y, L, R) :-
   X =.. [_|P],
   Y =.. [_|Q],
   swi_args(P, Q, C, [X-Y|L], R).

% swi_args(+List, +List, -Atom, +List, -List)
% compare the argument lists lexicographically,
% stopping at the first pair that is not (=)
swi_args([], [], (=), L, L).
swi_args([X|P], [Y|Q], C, L, R) :-
   swi(D, X, Y, L, H),
   (  D == (=)
   -> swi_args(P, Q, C, H, R)
   ;  C = D, R = H
   ).

% sys_union_find(+Term, +List, -Term)
sys_union_find(X, L, T) :-
   member(Y-Z, L),
   same_term(X, Y), !,
   sys_union_find(Z, L, T).
sys_union_find(X, _, X).
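A small sanity query, for example on two equal
rational trees, where the union-find list makes
the traversal terminate:

?- X = f(a,X), Y = f(a,Y), swi(C, X, Y).
C = (=).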
Mercio’s Algorithm (2012) for Rational
Tree Compare is specified here mathematically.
It is based on computing truncations A′ = (A_0,
A_1, ...) of a rational tree A:
A < B ⟺ A′ <_lex B′
https://math.stackexchange.com/a/210730
Here is an implementation in Prolog.
First the truncation:
trunc(_, T, T) :- var(T), !.
trunc(0, T, F) :- !, functor(T, F, _).
trunc(N, T, S) :-
   M is N-1,
   T =.. [F|L],
   maplist(trunc(M), L, R),
   S =.. [F|R].
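For example, truncating a cyclic term at depth 2
should give:

?- X = f(a,X), trunc(2, X, T).
T = f(a, f(a, f)).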
And then the iterative deepening:
mercio(N, X, Y, C) :-
   trunc(N, X, A),
   trunc(N, Y, B),
   compare(D, A, B),
   D \== (=), !, C = D.
mercio(N, X, Y, C) :-
   M is N + 1,
   mercio(M, X, Y, C).
The main entry first uses (==)/2 for a
terminating equality check and, if the
rational trees are not equal, falls back
to the iterative deepening:
mercio(C, X, Y) :- X == Y, !, C = (=).
mercio(C, X, Y) :- mercio(0, X, Y, C).
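For example, two rational trees that differ in
their first argument should compare like this:

?- X = f(a,X), Y = f(b,Y), mercio(C, X, Y).
C = (<).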
I couldn't yet find a triple that violates
transitivity. But I am also not very happy
with the code. It looks a little expensive
to create a truncation copy at each iteration.
Provided there is really no counterexample,
maybe we can do it more smartly and faster? It
might also stand the test of conservativity?
A storm of symbolic differentiation libraries
was posted. But what can these Prolog code
fossils do?
Does one of these libraries support Python's
symbolic Piecewise? For example one can define
the rectified linear unit (ReLU) with it:
           / x   x >= 0
ReLU(x) := <
           \ 0   otherwise
With the above one can already translate a
propositional logic program that uses negation
as failure into a neural network:
NOT   \+ p           1 - x
AND   p1, ..., pn    ReLU(x1 + ... + xn - (n-1))
OR    p1; ...; pn    1 - ReLU(-x1 - ... - xn + 1)
For clauses just use the Clark Completion: it
makes the defined predicate a new neuron,
dependent on other predicate neurons through a
network of intermediate neurons. Because of the
constant shift in AND and OR, the neurons will
have a bias b. So rule-based reasoning in
zero-order logic is a subset of neural networks.
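As a minimal sketch, the three encodings can be
evaluated on 0/1 truth values in plain Prolog
(the predicate names are placeholders, not from
any library):

% ReLU(x) = max(0, x)
relu(X, Y) :- Y is max(0, X).

% NOT: 1 - x
not_(X, Y) :- Y is 1 - X.

% AND p1, ..., pn: ReLU(x1 + ... + xn - (n-1))
and_(Xs, Y) :-
   length(Xs, N),
   sum_list(Xs, S),
   T is S - (N - 1),
   relu(T, Y).

% OR p1; ...; pn: 1 - ReLU(-x1 - ... - xn + 1)
or_(Xs, Y) :-
   sum_list(Xs, S),
   T is 1 - S,
   relu(T, R),
   Y is 1 - R.

For example ?- and_([1,1,0], Y) gives Y = 0 and
?- or_([0,0,1], Y) gives Y = 1.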
Python symbolic Piecewise
https://how-to-data.org/how-to-write-a-piecewise-defined-function-in-python-using-sympy/

rectified linear unit (ReLU)
https://en.wikipedia.org/wiki/Rectifier_(neural_networks)

Clark Completion
https://www.cs.utexas.edu/~vl/teaching/lbai/completion.pdf
Hi,
I am speculating that an NPU could give 1000x
more LIPS for certain combinatorial search
problems. It all boils down to implementing
this thingy:
In June 2020, Stockfish introduced the efficiently
updatable neural network (NNUE) approach, based
on earlier work by computer shogi programmers.
https://en.wikipedia.org/wiki/Stockfish_%28chess%29
There are varying degrees of what gets updated
in a neural network. But the specs of an NPU
tell me very simply the following:
- An NPU can do 40 TFLOPS; all my AI laptops
from 2025 can do that right now. The brands
are Intel Ultra, AMD Ryzen and Snapdragon X,
but I guess there might be more brands around
that can do that with a price tag of less
than 1000 USD.
- SWI-Prolog can do 30 MLIPS, Dogelog Player
runs similarly, and some Prolog systems are faster.
Now that is 10^12 versus 10^6. Suppose some of
the LIPS can be delegated to an NPU, and assume,
for example, that less locality or more primitive
operations require a layering, so that a factor
of 1000 of the NPU's 10^12 goes away. We might
then still see 10^9 LIPS emerge. Now do the
calculation:

- Without NPU: MLIPS
- With NPU: GLIPS
- Ratio: 1000x faster
Have fun!
Bye
Hi,
I already posted how to do SAT and the Clark
Completion with ReLU; this was a post from
15.03.2025, 16:13, see also below. But can we do
CLP as well? Here is a take on the dif/2
constraint, or more precisely a very primitive
(#\=)/2 from CLP(FD), going towards analogical
computing. It might work for domains that fit
into the quantization size of an NPU:
1) First note that we can model abs() via ReLU:

abs(x) = ReLU(x) + ReLU(-x)

2) Then note that for nonnegative integer values,
we can model chi(x>0), the characteristic function
of the predicate x > 0:

chi(x>0) = 1 - ReLU(1 - x)

(For negative x the right-hand side would go
negative, but abs() only ever feeds it
nonnegative values.)

3) Now chi(x=\=y) is simply:

chi(x=\=y) = chi(abs(x - y) > 0)
Now insert the formula for chi(x>0) based on ReLU
and the formula for abs() based on ReLU. Et voilà,
you get a manually created neural network for the
(#\=)/2 constraint of CLP(FD), constraint logic
programming over finite domains.
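A minimal sketch evaluating this composition in
plain Prolog, for integer inputs (the predicate
names are placeholders):

% ReLU(x) = max(0, x)
relu(X, Y) :- Y is max(0, X).

% abs(x) = ReLU(x) + ReLU(-x)
abs_relu(X, Y) :-
   relu(X, A),
   NX is -X,
   relu(NX, B),
   Y is A + B.

% chi(x>0) = 1 - ReLU(1 - x), for x >= 0
chi_pos(X, Y) :-
   T is 1 - X,
   relu(T, R),
   Y is 1 - R.

% chi(x=\=y) = chi(abs(x - y) > 0)
chi_neq(X, Y, C) :-
   D is X - Y,
   abs_relu(D, A),
   chi_pos(A, C).

For example ?- chi_neq(3, 3, C) gives C = 0 and
?- chi_neq(3, 5, C) gives C = 1.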
Have Fun!
Bye
Hi,
What mindset is needed to program an NPU? Most
likely a mindset based on fork/join parallelism
is nonsense. What could be more fruitful is to
view the AI accelerator as a black box that runs
a neural network, whereby the neural network can
effectively be viewed as a form of hardware,
although under the hood it is open weights and
matrix operations. So the mindset needs:
Zeus: A Language for Expressing Algorithms in Hardware
K. J. Lieberherr - 01 February 1985
https://dl.acm.org/doi/10.1109/MC.1985.1662799
What has changed since back then?
- 80's: Field Programmable Gate Arrays (FPGA)
- 20's AI Boom: NPUs, Unified Memory and Routing Fabric
Bye
Hi,
I am 100% serious about Giga Logical Inferences
per Second (GLIPS), leaving behind the sequential
constraint solving world:

The Complexity of Constraint Satisfaction Revisited
https://www.cs.ubc.ca/~mack/Publications/AIP93.pdf
Only, I have missed the deep learning bandwagon
and never programmed with PyTorch or Keras. So
even for the banal problem of coding some ReLU
networks and shipping them to a GPU or NPU, or a
hybrid, I don't have much experience. So I am
marveling at papers such as:
Learning Variable Ordering Heuristics
for Solving Constraint Satisfaction Problems
https://arxiv.org/abs/1912.10762
Given that the AI Boom started after 2019, the
above paper is already old, and it has curious
antique terminology like Multilayer Perceptron,
which is not so common anymore. It also does more
than what I want to demonstrate: it does policy
learning as well.
Bye