We are now exploring file systems with Novacore.
Here and there we have a couple of primitives
and then do some bootstrapping. The code currently lands
in library(random) until we find a better place:
% directory_member(+Atom, -Atom)
directory_member(F, N) :-
   directory_files(F, L),
   member(N, L).

% ensure_directory(+Atom)
ensure_directory(F) :-
   file_exists(F),
   file_property(F, type(directory)),
   !.
ensure_directory(F) :-
   make_directory(F).
Guess what, pinning down the semantics and support of
directory_files/2, file_exists/1 and file_property/2
is already non-trivial.
LogNonsenseTalk with its brainwash is totally
useless. This here is already wrong:
file_exists(File) :-
   absolute_file_name(File, ExpandedPath),
   {exists_file(ExpandedPath)}.
https://github.com/LogtalkDotOrg/logtalk3/blob/master/library/os/os.lgt
Because, for example, exists_file/1 in SWI-Prolog
means a regular file exists. But file_exists/1
should mean a file of any type exists. Just
look up what GNU Prolog provides. In OS lingo,
file often means regular file, directory, etc.
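For the record, a minimal sketch of what I mean, assuming
SWI-Prolog's exists_file/1 and exists_directory/1 as the
underlying probes (it still misses special files, but already
covers more than the regular-file-only reading):

file_exists(File) :-
   absolute_file_name(File, Path),
   (  exists_file(Path)        % regular file
   ;  exists_directory(Path)   % directory
   ), !.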
The new multilingual strings are also an exercise in
Novacore. There were a few issues that needed novel
Prolog solutions to make a Novacore solution.
One problem was I didn't want to use library(format)
and format/3 to format multilingual strings when
generating error messages. This concerns the later
processing of multilingual strings more than the
multilingual strings store itself. So how to resolve this
paradox? Here is my take, a mini format/3 bootstrapped
from the Dogelog Player specific atom_split/3:
% sys_inter_polate(+Stream, +Atom, +List)
sys_inter_polate(Stream, Template, Args) :-
   atom_split(Template, '~', [Head|Tail]),
   put_atom(Stream, Head),
   sys_zipper_output(Args, Tail, Stream).

% sys_zipper_output(+List, +List, +Stream)
sys_zipper_output([Arg|Args], [Head|Tail], Stream) :-
   writeq(Stream, Arg),
   put_atom(Stream, Head),
   sys_zipper_output(Args, Tail, Stream).
sys_zipper_output([], [], _).
It only understands the format specifier '~', but that is sufficient:
/* German Text */
strings('syntax_error.singleton_var', de, 'Alleinstehende Variable(n) ~, anonyme Variable(n) (_) benutzen.').
/* English and Fallback Text */
strings('syntax_error.singleton_var', '', 'Singleton variable(s) ~, use anonymous variable(s) (_).').
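For illustration, a hypothetical call rendering the fallback
text to the current output, assuming the strings/3 facts above
are loaded and current_output/1 is available; roughly, it prints:

?- strings('syntax_error.singleton_var', '', Template),
   current_output(Stream),
   sys_inter_polate(Stream, Template, [['X','Y']]).
Singleton variable(s) ['X','Y'], use anonymous variable(s) (_).
true.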
LoL
Hi,
To capture some critical examples of float-to-string
conversion I went with a little excess precision and
had this float-to-string conversion:

return shape_number(num.toPrecision(17));

Which gives this unfortunate result, still seen in
release 1.2.1 of Dogelog Player for JavaScript:

?- between(1,10,N), X is (20+N)/10, write(X), nl, fail; true.
2.1000000000000001
2.2000000000000002
2.2999999999999998
2.3999999999999999
2.5
2.6000000000000001
2.7000000000000002
2.7999999999999998
2.8999999999999999
3.0
One workaround is to check whether precision 16 would
also round-trip. Like this code here:

let res = num.toPrecision(16);
if (Number(res) === num) {
    return shape_number(res);
} else {
    return shape_number(num.toPrecision(17));
}
The results are much more eye-friendly:
?- between(1,10,N), X is (20+N)/10, write(X), nl, fail; true.
2.1
2.2
2.3
2.4
2.5
2.6
2.7
2.8
2.9
3.0
true.
Can we accept this solution? Will it slow down printing?
Further test cases are:
?- X is 370370367037037036703703703670 / 123456789012345678901234567890.
X = 3.0000000000000004.
?- X is 0.1+0.1+0.1+0.1+0.1+0.1+0.1+0.1.
X = 0.7999999999999999.
The first test case doesn't work in SWI-Prolog,
since it recently improved its realization of the
(/)/2 arithmetic function. In most other Prolog
systems we should get the above results, since
neither the division equals 3.0 nor the sum equals
0.8 when we use floating point numbers, and
when we convert to floating point numbers first,
before doing the division. The adaptive algorithm
is more expensive than just calling num.toPrecision(17).
It will at minimum call num.toPrecision(16) and do
the back conversion, i.e. Number(res). So unparsing
has a parsing cost. And for critical numbers, it
has the cost of a second unparsing via num.toPrecision(17).
But I guess we can accept this little slowdown.
Here are two test cases for memory
management of a Prolog system:
/* bomb */
app([], X, X).
app([X|Y], Z, [X|T]) :- app(Y, Z, T).
garbage(0, [0]) :- !.
garbage(N, L) :- M is N-1, garbage(M, R), app(R, R, L).
foo :- garbage(12,_), foo.
/* xbetween */
xbetween1(L, _, L).
xbetween1(L, U, N) :- L < U, M is L+1, xbetween1(M, U, N).
They possibly test something different. xbetween does
not produce a lot of objects during tail recursion,
it only increments one integer. The xbetween example
might be ok, whereas the bomb example might nevertheless
not be ok, especially since, unlike the xbetween example,
the bomb example also has an "intermediate" variable.
The "intermediate" variable is "_":
foo :- garbage(12,_), foo.
The xbetween example has no such variable. All
variables in the xbetween example are either in
the head or in the tail recursive call, making
it a more trivial example than the bomb example.
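Hypothetical driver goals to make the difference observable
(the bounds are arbitrary):

/* should run in constant memory, only counting up */
?- xbetween1(1, 100000000, N), N >= 100000000, !.

/* should keep the garbage collector busy forever,
   reclaiming the dead "_" list on each iteration */
?- foo.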
How about some of these PIPs:
PIP401: library(lists)
Provide predicates such as nth1/3, last/2, etc…
PIP402: library(sets)
Provide predicates such as union/3, subset/2, etc…
PIP403: library(sequence)
Provide predicates such as call_nth/2, limit/2, etc…
PIP404: library(aggregate)
Provide predicates such as aggregate_all/3, aggregate/3, etc…
There was once an effort Prolog Commons, which has
some overlap with the above PIPs:
The Prolog Commons Group - PART I - Library https://prolog-commons.org/PrologCommons.html/part_library.html
But it is more modularized than just a Prologue to Prolog,
which has a few list predicates and miscellaneous ones.
Also an interesting development: there is something
like “Prologue to Prolog C”, where C possibly
stands for cryptography. So maybe further PIPs:
PIP301: library(crypto)
Predicates such as crypto_data_hash/3, etc…
PIP302: library(charsio)
Predicates such as open_memory_file/2, etc…
PIP303: library(format)
Predicates such as format/3, etc…
In the above I have also added charsio and format,
which are useful in many contexts. A remark about
the 30x modules: they are usually based more on
additional native routines. This is unlike the 40x
modules, which can have a pure Prolog reference
implementation, as sketched below.
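For example, hedged sketches of pure Prolog reference
implementations for a PIP401 and a PIP402 candidate
(memberchk/2 assumed from the host's list library):

% last(+List, -Elem): Elem is the last element of List
last([X], X).
last([_|T], X) :- last(T, X).

% subset(+SubSet, +Set): every element of SubSet occurs in Set
subset([], _).
subset([X|Xs], S) :- memberchk(X, S), subset(Xs, S).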
And this one:
PIP304: library(math)
Evaluable functions such as popcount/1, etc…
The idea of a library(math) is brand new;
I first had it named library(random) and
then it got out of hand, exploded, and
ended up with many bitwise operations.
The implementation that allows native libraries to register
new evaluable functions works the same way as consulting
predicates. Take these two files:
$ cat > foo.p
test :- hello.
$ cat > bar.p
hello :- write('hello'), nl.
If I consult foo.p before bar.p there is a forward reference.
But if I ultimately consult bar.p the forward reference is
resolved. How does a Prolog system do that?
$ target/release/scryer-prolog
?- ['foo.p'].
true.
?- test.
error(existence_error(procedure,hello/0),hello/0).
?- ['bar.p'].
true.
?- test.
hello
true.
Now enhance is/2, etc., so that it uses the same resolution
mechanism, but only for the evaluable function namespace,
and provide an FFI where one can register evaluable functions
in the evaluable function namespace, et voilà, you are done,
making your Prolog extensible. This allows registering evaluable
functions like gcd/2, popcount/1, etc., outside of the core.
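A minimal sketch of that late binding, assuming a dynamic
registry sys_eval/3 that the FFI asserts into; the names
my_is/2, my_is_args/2 and sys_eval/3 are made up for this
illustration, and the gcd registration assumes the host's
is/2 already provides gcd/2, as e.g. SWI-Prolog does:

:- dynamic(sys_eval/3).

% my_is(-Result, +Expr): like is/2, but the function symbol
% is resolved at call time through the sys_eval/3 registry,
% so a later registration fixes an earlier forward reference.
my_is(R, E) :-
   compound(E), !,
   E =.. [F|Args],
   my_is_args(Args, Vals),
   (  sys_eval(F, Vals, R) -> true
   ;  length(Vals, N),
      throw(error(type_error(evaluable, F/N), my_is/2))
   ).
my_is(E, E) :- number(E).

my_is_args([], []).
my_is_args([A|As], [V|Vs]) :- my_is(V, A), my_is_args(As, Vs).

Before registration, my_is(X, gcd(12, 18)) throws
type_error(evaluable, gcd/2); after registration it succeeds:

?- assertz((sys_eval(gcd, [X,Y], R) :- R is gcd(X, Y))).
true.
?- my_is(X, gcd(12, 18)).
X = 6.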
The ISO core standard probably set the
stage for a couple of performance sins.
In 7.1.6.1 Variants of a term we find
these test cases:
- f(A, B, A) is a variant of f(X, Y, X).
- g(A, B) is a variant of g(_, _).
- P+Q is a variant of P+Q.
What is dubious here is the last test
case with P+Q. Do we need to test terms
that have common variables?
Let's assume we have situations where we
don't need variant/2 to work with common
variables in the two argument terms; what
about then using this bootstrapping:
variant_term(X, Y) :-
   subsumes_term(X, Y),
   subsumes_term(Y, X).
Here is some testing; does it work ok? Take this code:
enum_arg(_, 1).
enum_arg(_, _).
enum_arg(X, X).
enum_list(_, []).
enum_list(X, [H|T]) :- enum_arg(X, H), enum_list(X, T).
boole(G, 1) :- G, !.
boole(_, 0).
nok(L, R) :-
   length(L, 6), length(R, 6),
   enum_list(_, L), enum_list(_, R),
   boole(variant(L, R), A), boole(variant_term(L, R), B),
   A \== B.
Seems to work fine:
?- nok(L, R).
false.
How much faster is it? Here is some test harness:

test :- length(L, 6), length(R, 6),
   enum_list(_, L), enum_list(_, R),
   variant(L, R), fail; true.

test2 :- length(L, 6), length(R, 6),
   enum_list(_, L), enum_list(_, R),
   variant_term(L, R), fail; true.
Here are some results:
- SWI-Prolog 9.3.11:
?- time(test).
% 2,126,498 inferences, 0.734 CPU in 0.752 seconds
(98% CPU, 2895657 Lips)
true.
?- time(test2).
% 2,159,795 inferences, 0.234 CPU in 0.236 seconds
(99% CPU, 9215125 Lips)
true.
- Trealla Prolog 2.57.16:
?- time(test).
% Time elapsed 3.128s, 14827949 Inferences, 4.741 MLips
true.
?- time(test2).
% Time elapsed 1.079s, 7775516 Inferences, 7.206 MLips
true.
- Scryer Prolog:
?- time(test3).
% CPU time: 5.653s, 5_192_831 inferences
true.
?- time(test2).
% CPU time: 1.544s, 6_116_163 inferences
true.
Note: test3 is like test, only it uses builtins:variant/2.
So what can we do with this insight? Here is
a little showcase of a new distinct/1
that is eager and non-variant enumerating:
/* eagerness */
?- distinct(repeat), !.
true.
d(f(A,A)).
d(f(a,a)).
d(f(B,B)).
/* non-variant enumerating */
?- distinct(d(X)).
X = f(_70321, _70321);
X = f(a, a);
fail.
It is implemented with variant_term/2, since the
remembered list of witnesses archived so far
consists of copies of terms anyway. So variant_term/2
is appropriate; we never have to check
two terms that have variables in common.
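A rough sketch of such an eager distinct/1, assuming SWI-style
non-backtrackable globals nb_setval/2 and nb_getval/2 for the
witness store; a real implementation would thread this state
natively and reentrantly:

% emit each answer of Goal as soon as it arrives, unless a
% weak variant of it was already archived
distinct(Goal) :-
   nb_setval(distinct_seen, []),
   call(Goal),
   copy_term(Goal, Witness),
   nb_getval(distinct_seen, Seen),
   \+ (member(Old, Seen), variant_term(Old, Witness)),
   nb_setval(distinct_seen, [Witness|Seen]).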
Cool!
Now I was struggling to give this predicate
a better name, namely variant_term/2, bootstrapped
as follows:

variant_term(X, Y) :-
   subsumes_term(X, Y),
   subsumes_term(Y, X).

Why does it need a better name? Well, because it
is not really the variant relation, as realized
for example by SWI-Prolog's (=@=)/2.
For example I find:
?- variant_term(f(X,Y,Z,T), f(A,B,C,D)).
true.
?- variant_term(f(X,Y,Z,T), f(A,B,Z,D)).
true.
?- variant_term(f(A,Y,Z,T), f(A,B,Z,D)).
true.
?- variant_term(f(A,Y,Z,T), f(A,B,C,Z)).
fail.
?- variant_term(f(X,A,Z,T), f(A,B,Z,D)).
fail.
The first 3 test cases match what variant/2
usually does. But the last 2 test cases don't.
So how can we characterize the behaviour of
this weak variant? I came up with this observation:
- A weak variant maps variables that appear both
in the left hand side and in the right hand side,
i.e. common variables, only through an identity
relation.
- Otherwise, variables that are specific to the
right hand side or specific to the left hand
side are associated by a bijection relation.
I came up with names like "twin" and "sibling" for
this relation. But I also got inspired by the
fact that builtins:variant/2 from Scryer Prolog
leaves bindings. So I was looking for a name similar
to "unify", but for its weak variant that
leaves a binding trace. My idea is to use "marry"!
Ok, I tried a native implementation of variant/2.
I did it in Dogelog Player for Java. That's a
Prolog system that doesn't have unification for
cyclic terms, so the situation is a little simpler.
Comparing on the same machine and with the data
we have already gathered, it is shockingly fast!
/* Dogelog Player 1.2.4, JDK 22 */
?- time(test).
% Zeit 211 ms, GC 0 ms, Lips 7567459, Uhr 15.10.2024 01:30
true.
?- time(test2).
% Zeit 413 ms, GC 0 ms, Lips 10703217, Uhr 15.10.2024 01:30
true.
For comparison, here again the earlier results:
- SWI-Prolog 9.3.11:
?- time(test).
% 2,126,498 inferences, 0.734 CPU in 0.752 seconds
(98% CPU, 2895657 Lips)
true.
?- time(test2).
% 2,159,795 inferences, 0.234 CPU in 0.236 seconds
(99% CPU, 9215125 Lips)
true.
@herme wrote:
Just a quick comment: note that you can make
and discuss the PIP proposals directly on
the PIPs discourse.
Here is my response:
I will probably never go there, since somebody
tried censoring my comments and said I don’t
work towards the Prolog cause. The good thing
about the SWI-Prolog discourse is that it has become
quite calm concerning attempts to censor people,
possibly because some particular people left.
Which is in my opinion the best thing that
could happen to this forum. There is no
guarantee in other forums to really have free speech.
Hi,
Not only are the fucking PIPs behind a
login wall that doesn't work:
Dictionaries in Prolog
https://gitlab.software.imdea.org/prolog-lang/pip-0102
I cannot read gitlab.software.imdea.org,
since it redirects to some internal server
during login.
Also formal verification doesn't have the
same track record as fuzzing. Take the SWI-Prolog
red-black trees. Are they really red-black trees?
I used fuzz testing to find the test case.
Exhaustive enumeration of permutations seemed to
be computationally out of reach.
So I used these random permutations to find a
discrepancy in resulting tree depth:
fuzzer(R) :-
   numlist(1, 16, L),
   between(1, 1000, _),
   random_permutation(L, R).
Maybe the same technique can be used to find smaller
discrepancies, and then the smaller examples can be
used to pin down differences in the implemented balance
rules or to implement a rebalancing strategy.
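A hypothetical harness on top of it; build_nb_rb/2,
build_okasaki/2 and tree_depth/2 are placeholders for
the respective tree builders and a depth measure:

depth_discrepancy(Perm, D1, D2) :-
   fuzzer(Perm),
   build_nb_rb(Perm, T1), tree_depth(T1, D1),
   build_okasaki(Perm, T2), tree_depth(T2, D2),
   D1 =\= D2.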
Bye
So what is the spec? Well, one can use:
Red-Black Trees in a Functional Setting https://www.cs.tufts.edu/~nr/cs257/archive/chris-okasaki/redblack99.pdf
insert :: (Ord a) => a -> Tree a -> Tree a
insert x s = makeBlack $ ins s
  where ins E = T R E x E
        ins (T color a y b)
          | x < y  = balance color (ins a) y b
          | x == y = T color a y b
          | x > y  = balance color a y (ins b)
        makeBlack (T _ a y b) = T B a y b
The balancing would be as follows, again Haskell code:
balance :: Color -> Tree a -> a -> Tree a -> Tree a
balance B (T R (T R a x b) y c) z d = T R (T B a x b) y (T B c z d)
balance B (T R a x (T R b y c)) z d = T R (T B a x b) y (T B c z d)
balance B a x (T R (T R b y c) z d) = T R (T B a x b) y (T B c z d)
balance B a x (T R b y (T R c z d)) = T R (T B a x b) y (T B c z d)
balance color a x b = T color a x b
Does SWI-Prolog implement Okasaki red-black trees somewhere?
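For comparison, a direct Prolog transcription of the four
balance rules; a sketch with tree terms t(Color, Left, Key,
Right) and e for the empty tree, the clauses committing to
the first matching rule:

balance(black, t(red, t(red, A, X, B), Y, C), Z, D, T) :- !,
   T = t(red, t(black, A, X, B), Y, t(black, C, Z, D)).
balance(black, t(red, A, X, t(red, B, Y, C)), Z, D, T) :- !,
   T = t(red, t(black, A, X, B), Y, t(black, C, Z, D)).
balance(black, A, X, t(red, t(red, B, Y, C), Z, D), T) :- !,
   T = t(red, t(black, A, X, B), Y, t(black, C, Z, D)).
balance(black, A, X, t(red, B, Y, t(red, C, Z, D)), T) :- !,
   T = t(red, t(black, A, X, B), Y, t(black, C, Z, D)).
balance(Color, A, X, B, t(Color, A, X, B)).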
Results of a fuzzer comparison:
library(nb_rbtrees) from 2014 in SWI-Prolog:
/* SWI-Prolog 9.3.14 */
?- L = [16,10,12,13,15,5,11,8,14,1,7,6,3,2,4,9], rb_new(T),
(member(X,L), nb_rb_insert(T, X, true),
fail; true), rb_display(T, 0).
$BLK 12-true
$RED 5-true
$BLK 2-true
$BLK 1-true
$NIL
$NIL
$BLK 3-true
$NIL
$RED 4-true
$NIL
$NIL
$BLK 10-true
$RED 7-true
$BLK 6-true
$NIL
$NIL
$BLK 8-true
$NIL
$RED 9-true
$NIL
$NIL
$BLK 11-true
$NIL
$NIL
$BLK 15-true
$BLK 13-true
$NIL
$RED 14-true
$NIL
$NIL
$BLK 16-true
$NIL
$NIL
On the other hand the rules by Okasaki give a
different resulting tree, which is better balanced,
has less overall depth, and doesn’t have 12 as root
but rather 8. Implementation based on change_arg/3:
/* Dogelog Player 1.2.5 */
?- L = [16,10,12,13,15,5,11,8,14,1,7,6,3,2,4,9], tree_new(T),
(member(X,L), tree_set(T, X, true),
fail; true), tree_display(T, 0).
$BLK 8-true
$BLK 6-true
$RED 3-true
$BLK 1-true
$NIL
$RED 2-true
$NIL
$NIL
$BLK 5-true
$RED 4-true
$NIL
$NIL
$NIL
$BLK 7-true
$NIL
$NIL
$BLK 12-true
$BLK 10-true
$RED 9-true
$NIL
$NIL
$RED 11-true
$NIL
$NIL
$RED 15-true
$BLK 13-true
$NIL
$RED 14-true
$NIL
$NIL
$BLK 16-true
$NIL
$NIL
I think Okasaki’s is the more common red-black
tree implementation; if I am not mistaken, Java’s
method fixAfterInsertion() from the class TreeMap
also implements the Okasaki rules. Bug or
feature that SWI-Prolog doesn’t use Okasaki?
Hi,
Spain is one of the main recipients of EU recovery funds,
with a total of 163 billion euros ($178 billion) earmarked
for the country, approximately half in grants and the rest
in loans. It has already received 37 billion euros.
Madrid and software.imdea.org are no exception. Probably
the same disaster as the Valencia floods: the money
is just siphoned into some crooks' pockets. The biggest
crooks at the moment are ALP & friends,
just publishing a series of failure reports. Every
paper just reports some problems, never solutions.
Most recent cringe example:
BoostRLR: The beauty of Prolog for statistical
relational learning
https://www.informatik.uni-wuerzburg.de/fileadmin/10030600/2024/KI_2004_paper_174.pdf
What the fuck does this mean:
In SWI-Prolog, this can rely on an efficient specialised
implementation of aggregate_all(count,_,_), while we provide
a dedicated low-level XSB implementation in a count module
that we include with our program, courtesy of David S.
Warren (personal communication).
Why is change_arg/3 not common among Prolog systems?
Why do we deal with aggregates like we are still in
the stone age? Aggregates are a well-known discipline
of database technology.
Why only aggregate_all/2 and not also a memory-savvy
solution for aggregate/3? Aggregates are the bread
and butter of statistics.
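To make the point concrete: the textbook bootstrapping of
aggregate_all(count, Goal, Count) materializes every solution
first, which is exactly the stone-age part:

% a minimal sketch: findall/3 builds the whole solution
% list only to measure its length
aggregate_all(count, Goal, Count) :-
   findall(x, Goal, Solutions),
   length(Solutions, Count).

% a memory-savvy variant would instead bump a counter
% destructively per solution, e.g. via change_arg/3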
Bye
On 06/11/2024 09:51, Mild Shock wrote:
> What the fuck does this mean:
The Nazi-retarded shithole is all but going to fix itself.
> Why is change_arg/3 not common among Prolog systems.
Only poisoned meatballs for the public.
https://github.com/SWI-Prolog/swipl-devel/issues/1328
-Julio
SICStus Prolog is overrated. A lot of things
in Prolog logic programming are overrated, so
that it has lost a lot of turf over
the last decade. But of course clinging to
an old code base, and reselling the same old
shit over and over again, is a very lucrative
business compared to real innovation.
Until OpenAI comes and does LLMs. I am pretty
sure we will see some blocks stumble. For
example the paper I was citing was traditional
machine learning; it was not even deep learning.
Most likely Prolog will be hit much harder than
it already is, with Python & Co. taking over
a lot of data science turf.
LoL
Julio Di Egidio schrieb:
On 06/11/2024 09:51, Mild Shock wrote:
What the fuck does this mean:
The Nazi-retarded shithole is all but going to fix itself.
Why is change_arg/3 not common among Prolog systems.
Only poisoned meatballs for the public.
https://github.com/SWI-Prolog/swipl-devel/issues/1328
-Julio
Hi,
What's also extremely cringe is the
struggle that Scryer Prolog is experiencing.
It has adopted a bloat of post-Herbrand
Prolog nonsense, due to Colmerauer, namely
fucking cyclic term unification. It's
crashing under its own bloat of nonsense;
just go to GitHub and count the issues.
Last time I looked it had like 200 issues,
and now after a blink it's already at 300 issues.
335 Open - Scryer Prolog - Issues https://github.com/mthom/scryer-prolog/issues
WTF?
Bye
Hi,
But I don't have a final solution. I am
still a seeker, a free seeker free from
legacy and orthodoxy, using free speech.
Just like Luce:
The official mascot for the Catholic
Church’s 2025 Jubilee Year is named
“Luce,” which is Italian for “light.” https://www.catholicnewsagency.com/news/260129/meet-luce-the-vatican-s-cartoon-mascot-for-jubilee-2025
LoL
Bye
About deep learning: do we really need
GPU and Torch libraries; aren't there
other methods to arrive at the same?
Do we really need to invest in nuclear
power plants and shove money up a