Just watched 2 videos from the Scala world,
from the year 2023, and nearly choked on my yoghurt,
especially when I saw that the first presenter
got a headache from early returns. LoL
Async/Await for the Monadic Programmer https://www.youtube.com/watch?v=OH5cxLNTTPo
DIRECT STYLE SCALA Scalar Conference 2023 https://www.youtube.com/watch?v=0Fm0y4K4YO8
What's the bottom line? All because they discovered
they could do async/await as well?
BTW: The Go programming language, which has direct
style, runs the primes example in 0.6 seconds.
The primes example is here:
The Go Playground
https://go.dev/play/
Just choose the Concurrent Prime Sieve example.
Twice as slow as the 0.3 seconds of Haskell but
nevertheless faster than Prolog.
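For reference, the Concurrent Prime Sieve on the Playground is
essentially the classic channel-per-filter pipeline; a sketch from
memory (the real example prints the first 10 primes, the benchmark
presumably takes 5000):
```
// Sketch of the Go Playground "Concurrent Prime Sieve": one goroutine
// generates candidates, and each discovered prime adds a filter stage.
package main

import "fmt"

// generate sends the sequence 2, 3, 4, ... into ch.
func generate(ch chan<- int) {
	for i := 2; ; i++ {
		ch <- i
	}
}

// filter forwards values from in to out, dropping multiples of prime.
func filter(in <-chan int, out chan<- int, prime int) {
	for {
		if i := <-in; i%prime != 0 {
			out <- i
		}
	}
}

func main() {
	ch := make(chan int)
	go generate(ch)
	for i := 0; i < 10; i++ { // 5000 for the benchmark above
		prime := <-ch
		fmt.Println(prime)
		ch1 := make(chan int)
		go filter(ch, ch1, prime)
		ch = ch1
	}
}
```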
For those interested, I was comparing to:
Prolog Hand Rolled Lazy Lists via call/n:
=============================================
iter(F,A,A,iter(F,B)) :- call(F,A,B).
take(0, _, []) :- !.
take(N, C, [X|L]) :- M is N-1, call(C, X, D), take(M, D, L).
modfilt(C, M, X, E) :- call(C, Y, D), modfilt2(D, M, Y, X, E).
modfilt2(D, M, Y, X, E) :- Y mod M =:= 0, !, modfilt(D, M, X, E).
modfilt2(D, M, X, X, modfilt(D,M)).
primes(C, X, primes(modfilt(D,X))) :- call(C, X, D).
/* on @emiruz machine */
?- time(take(5000,primes(iter(succ,2)),_)).
% 38,009,267 inferences, 2.005 CPU in 2.005
seconds (100% CPU, 18954407 Lips)
true.
Haskell: David Turner's sieve (SASL Language Manual, 1983)
=============================================
sieve :: [Integer] -> [Integer]
sieve [] = []
sieve (p:xs) = p : sieve [x | x <- xs, rem x p > 0]
/* on @emiruz machine */
time ./primes
% real 0m0.312s
% user 0m0.294s
% sys 0m0.005s
https://wiki.haskell.org/Prime_numbers#Turner.27s_sieve_-_Trial_division
Disclaimer: I didn't run the Go version myself yet;
I first have to install Go, etc. Somebody else ran
it and reported 0.6 seconds.
To be fair, the problem itself is two loops, like here:
Primes via two loops
https://go.dev/play/p/3XNt-e3Ozw8?v=gotip
Which Go can do in 0.04 seconds. Is this translatable
to Prolog? Maybe via nb_setarg or via Schimpf loops?
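I haven't got the linked snippet in front of me, but "primes via two
loops" presumably means plain nested-loop trial division, roughly like
this (a reconstruction, not the actual Playground code):
```
// Hypothetical reconstruction of the "two loops" version: an outer loop
// over candidates and an inner trial-division loop, no channels involved.
package main

import "fmt"

func main() {
	count := 0
	for n := 2; ; n++ {
		prime := true
		for d := 2; d*d <= n; d++ {
			if n%d == 0 {
				prime = false
				break
			}
		}
		if prime {
			count++
			if count == 5000 { // same N as the Prolog benchmark
				fmt.Println(n)
				return
			}
		}
	}
}
```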
I tried Erlang. But it cannot deal with the problem, because actors
in Erlang have unbounded inboxes, and there is no backpressure from
the consumer back to the producers. So the iterator producer will spin
wildly and clog the processor. I can run N=100, but N=5000 explodes:
```
-module(sieve).
-export([main/1]).

above(PID, AT) ->
    PID ! AT,
    AT2 = AT+1,
    above(PID, AT2).

filter(PID, MOD) ->
    receive
        AT -> if AT rem MOD =/= 0 ->
                     PID ! AT;
                 true -> true
              end
    end,
    filter(PID, MOD).

sift(PID) ->
    receive
        AT -> PID ! AT,
              PID2 = spawn(fun() -> sift(PID) end),
              filter(PID2, AT)
    end.

take(COUNT) ->
    if COUNT =/= 0 ->
           receive
               AT -> if COUNT =:= 1 -> io:format("~w~n", [AT]);
                        true -> true
                     end
           end,
           COUNT2 = COUNT-1,
           take(COUNT2);
       true -> true
    end.

main(_) ->
    PID = self(),
    PID2 = spawn(fun() -> sift(PID) end),
    spawn(fun() -> above(PID2, 2) end),
    take(100).
% explodes:
% take(5000).
```
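For contrast, and this is just how I read the difference: a Go channel
is a bounded mailbox (here even a zero-capacity one), so the producer
cannot run ahead of the consumer:
```
// Minimal illustration: with an unbuffered channel every send blocks
// until the receiver takes the value, so a slow consumer automatically
// throttles the producer. An Erlang mailbox has no such bound.
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan int) // capacity 0: send waits for receive
	go func() {
		for i := 2; ; i++ {
			ch <- i // producer parks here while the consumer is busy
		}
	}()
	for j := 0; j < 5; j++ {
		fmt.Println(<-ch)
		time.Sleep(100 * time.Millisecond) // deliberately slow consumer
	}
}
```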
An interesting experiment could be to distribute such actors
via SWI-Prolog pengines over a couple of nodes in a network. Can
it handle backpressure?
How would it compare to these suggestions here:
Golang: out-of-box backpressure handling with gRPC, proven by a Grafana dashboard
gRPC and the underlying HTTP/2 protocol handle flow control. This means the server can only send data as fast as the client can consume, preventing overwhelming the client.
https://www.linkedin.com/pulse/golang-out-of-box-backpressure-handling-grpc-proven-grafana-melo
What would be the dashboard?
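Not gRPC, but the blocking-writer behaviour the article ascribes to
HTTP/2 flow control can be mimicked with nothing more than the standard
library's io.Pipe, where a write only completes once the reader has
consumed the data:
```
// Toy model of flow control: io.Pipe writes block until the reader has
// consumed the data, so the writer runs at the reader's pace.
package main

import (
	"fmt"
	"io"
	"time"
)

func main() {
	r, w := io.Pipe()
	go func() {
		defer w.Close()
		for i := 0; i < 3; i++ {
			fmt.Fprintf(w, "chunk %d\n", i) // blocks until read
			fmt.Println("producer: sent chunk", i)
		}
	}()
	buf := make([]byte, 16)
	for {
		time.Sleep(200 * time.Millisecond) // slow consumer
		n, err := r.Read(buf)
		if n > 0 {
			fmt.Print("consumer: got ", string(buf[:n]))
		}
		if err != nil {
			return // io.EOF once the writer closes
		}
	}
}
```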
What would be cool is if the actors could interact with each
other via websockets. Not sure. Or anything that could provide
some backpressure feedback. A websocket could provide backpressure
in two ways: either implicitly, by blocking a single channel when
sending payload, if that is possible, or more explicitly, by using
the other channel for some flow-control information.
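A rough sketch of the explicit variant, assuming the
github.com/gorilla/websocket package; the credit protocol (the client
sends a number to grant that many more messages) is invented here
purely for illustration:
```
// Sketch of explicit, credit-based backpressure over a websocket:
// a reader goroutine turns client messages into "credits", and the
// producer only writes while it has credit left.
package main

import (
	"log"
	"net/http"
	"strconv"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

func streamHandler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println(err)
		return
	}
	defer conn.Close()

	credits := make(chan int)
	go func() { // control direction: every client message grants credit
		for {
			_, msg, err := conn.ReadMessage()
			if err != nil {
				close(credits)
				return
			}
			if n, err := strconv.Atoi(string(msg)); err == nil {
				credits <- n
			}
		}
	}()

	budget := 0
	for n := 2; ; n++ { // payload direction: just stream numbers
		for budget == 0 { // no credit left: wait, i.e. backpressure
			more, ok := <-credits
			if !ok {
				return
			}
			budget += more
		}
		if err := conn.WriteMessage(websocket.TextMessage, []byte(strconv.Itoa(n))); err != nil {
			return
		}
		budget--
	}
}

func main() {
	http.HandleFunc("/stream", streamHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```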
BTW, I missed this one, already 5 years old, from
just before Corona [RFC 8441]:
HTTP/2 WebSockets
HTTP/2 was standardized in 2015 without any mention
of WebSockets. For most of the time since then I assumed
that there would be no WebSockets over HTTP/2. That
changed in September last year with the publication of
RFC 8441, which will be supported in browsers in 2019
approximately 10 years after WebSockets were first introduced.
https://medium.com/@pgjones/http-2-websockets-81ae3aab36dd
Woah! Not the direct style:
Chapter 13: Asynchronous Tasks https://github.com/benweidig/a-functional-approach-to-java/tree/main/part-2/13-asynchronous-tasks
From this book:
Functional Approach to Java - Ben Weidig https://github.com/benweidig/a-functional-approach-to-java/tree/main/part-2/13-asynchronous-tasks
So was this book already waste paper when it was published?
What does your crystal ball say?
My recent Async/Await surrogates for JDK 21
provide coroutines with suspend/resume semantics.
They are not continuations. Their aim is to provide
async/await and not only setTimeout().
As a result you don't need to write libraries
with continuation parameters. This is very unlike
nonsense such as the JavaScript Express web framework.
stackfulness
In contrast to a stackless coroutine a stackful
coroutine can be suspended from within a nested
stackframe. Execution resumes at exactly the same
point in the code where it was suspended before.
stackless
With a stackless coroutine, only the top-level routine
may be suspended. Any routine called by that top-level
routine may not itself suspend. This prohibits
providing suspend/resume operations in routines within
a general-purpose library.
https://www.boost.org/doc/libs/1_57_0/libs/coroutine/doc/html/coroutine/intro.html#coroutine.intro.stackfulness
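Goroutines are stackful in exactly this sense; a tiny illustration that
blocks three frames deep and resumes right there:
```
// A goroutine can suspend (block on a channel) arbitrarily deep in its
// call stack and later resumes at exactly that point, like a stackful
// coroutine, unlike the stackless case described above.
package main

import "fmt"

func inner(ch chan int) int {
	v := <-ch // suspends here, three frames deep
	return v * 2
}

func middle(ch chan int) int { return inner(ch) + 1 }

func outer(ch chan int, done chan int) {
	done <- middle(ch) // resumes right where it left off
}

func main() {
	ch, done := make(chan int), make(chan int)
	go outer(ch, done)
	ch <- 21            // wake the suspended goroutine
	fmt.Println(<-done) // prints 43
}
```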
It's also a proof of concept that no stack copying
is necessary. Well, that's not 100% true: the Prolog
interpreter does a little bit of unwind and rewind
during the '$YIELD'/1 instruction. But nowhere do we
copy some native stack; this is unlike Martin Odersky's
speculation that he might implement something with
stack copying. Except that virtual threads might use
copying when they resize their stack, I don't see any
need for copying.
Also, sometimes a callback can be piggybacked on
an existing coroutine if it doesn't yield itself;
I am already using this in Dogelog Player as an
optimization. The idea to use semaphores in my
implementation can be credited to this paper
from 1980, where semaphores are the main switch point:
Extension of Pascal and its Application to
Quasi-Parallel Programming and Simulation, Software -
Practice and Experience, 10 (1980), 773-789
J. Kriz and H. Sandmayr
https://www.academia.edu/47139332
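In Go terms I picture that semaphore switch point roughly like this (a
generic sketch, not the Dogelog or Jekejeke implementation): each
coroutine waits on its own binary semaphore, and transferring control
signals the peer and then waits on one's own.
```
// Generic sketch of "semaphores as switch points" between two coroutines:
// transferring control means signalling the peer's semaphore and then
// waiting on one's own. Channels of capacity 1 stand in for semaphores.
package main

import "fmt"

type sem chan struct{}

func (s sem) signal() { s <- struct{}{} }
func (s sem) wait()   { <-s }

func main() {
	a, b := make(sem, 1), make(sem, 1)

	go func() { // coroutine B
		for i := 0; i < 3; i++ {
			b.wait()
			fmt.Println("B step", i)
			a.signal() // transfer back to A
		}
	}()

	for i := 0; i < 3; i++ { // coroutine A (the main goroutine)
		fmt.Println("A step", i)
		b.signal() // transfer to B ...
		a.wait()   // ... and suspend until B transfers back
	}
}
```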
But my experience with JDK 21 virtual threads
is still limited; I am only beginning to explore them
as a way to have a large number of coroutines.
Can Go's interface handling be compared to lazy/sharing
in the non-strict semantics of Haskell? Just curious; maybe
there is nevertheless a chance to speed up Prolog call/n.
After all, Haskell and Prolog are very similar in that they
champion non-strict features. Most novice Prolog programmers are
surprised by this behaviour, and that they need to invoke is/2
to make evaluation explicit: why is it not implicit like in every
other programming language? (Warning: the example could
be misleading, it's not what I am attacking, only motivation here.)
?- X = 1+2.
X = 1+2
But then there are other, more important corners where Prolog
and Haskell are basically the same, i.e. call/n.
The sharing in Haskell could then give a semantics to monomorphic
and polymorphic caches. Did somebody write a paper about
Go's interface handling, relating it to non-strict behaviour?
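For readers who don't know Go, this is the interface handling I mean:
as far as I understand, an interface value pairs the concrete value
with a per-(type, interface) method table that is resolved at the
conversion and cached, so the call site is a single indirect call. A
minimal example:
```
// Minimal example of Go interface dispatch: g carries the concrete value
// plus a method table, so step just makes one indirect call through it.
package main

import "fmt"

type Gen interface {
	Next() int
}

type Counter struct{ n int }

func (c *Counter) Next() int { c.n++; return c.n }

// step calls through the interface, i.e. dynamic dispatch.
func step(g Gen) int { return g.Next() }

func main() {
	var g Gen = &Counter{n: 1}             // conversion: pairs value with method table
	fmt.Println(step(g), step(g), step(g)) // prints 2 3 4
}
```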
Motivation: it is now a widespread assumption that
Prolog has call/n. But testing shows that both
Scryer Prolog and Trealla Prolog choke even on
call/1. But then we find this proposal, which
includes maplist/n, foldl/4, etc.:
A Prologue for Prolog - post-N290 https://www.complang.tuwien.ac.at/ulrich/iso-prolog/prologue
But is there a Prolog technology that makes
call/n fast in the first place?
BTW: I retract my claim that Trealla Prolog
chokes on call/1. This is amazing. Try this program:
?- [user].
p(X) :- X = (Y is 1+2, _ is Y+3).
First SWI-Prolog:
/* SWI-Prolog */
?- p(X), time((between(1,1000000,_),call(X),fail; true)).
% 2,999,998 inferences, 0.516 CPU in 0.508 seconds (101% CPU, 5818178 Lips)
X = (_A is 1+2, _ is _A+3).
?- time((between(1,1000000,_),p(X),call(X),fail; true)).
% 3,999,998 inferences, 0.594 CPU in 0.580 seconds (102% CPU, 6736839 Lips)
Then Trealla Prolog:
?- p(X), time((between(1,1000000,_),X,fail; true)).
% Time elapsed 0.244s, 4000003 Inferences, 16.363 MLips
X = (_A is 1+2,_B is _A+3).
?- time((between(1,1000000,_),p(X),X,fail; true)).
% Time elapsed 0.399s, 6000003 Inferences, 15.047 MLips
true.
What's going on, why is it faster?
I can make the first test case faster in SWI-Prolog
by removing the call/1, but it doesn't have an impact
on the second test case:
?- p(X), time((between(1,1000000,_),X,fail; true)).
% 2,999,998 inferences, 0.172 CPU in 0.183 seconds (94% CPU, 17454534 Lips)
X = (_A is 1+2, _ is _A+3).
?- time((between(1,1000000,_),p(X),X,fail; true)).
% 3,999,998 inferences, 0.594 CPU in 0.597 seconds (99% CPU, 6736839 Lips)
true.
Now I am testing the same for SWI-Prolog and Trealla
Prolog. SWI-Prolog is a tad faster in the first
test case, but a tad slower in the second test case.
But I cannot retract my claim about Scryer Prolog;
it definitely chokes:
?- p(X), time((between(1,1000000,_),X,fail; true)).
% CPU time: 2.233s, 11_000_023 inferences
X = (_A is 1+2,_B is _A+3).
?- time((between(1,1000000,_),p(X),X,fail; true)).
% CPU time: 6.180s, 13_000_045 inferences
true.
That's an order of magnitude slower than Trealla and SWI-Prolog.
A small sanity test: how do the former Jekejeke Prolog
and Dogelog Player perform? Dogelog Player does not yet
have call/N, and I do not promote using it, since I am
still waiting for a good idea to compile it a little
bit more statically. The former Jekejeke Prolog uses a
dynamic inline cache, polymorphic in the arity.
But this test case is anyway less about call/N and
more about call/1 and is/2. I get:
For the former Jekejeke Prolog:
/* Jekejeke Prolog 1.6.6 */
?- p(X), time((between(1,1000000,_),X,fail; true)).
% Zeit 540 ms, GC 3 ms, Uhr 01.03.2024 19:37
X = (_0 is 1+2, _1 is _0+3).
?- time((between(1,1000000,_),p(X),X,fail; true)).
% Zeit 867 ms, GC 3 ms, Uhr 01.03.2024 19:37
true.
For Dogelog Player:
/* Dogelog Player 1.1.6 */
?- p(X), time((between(1,1000000,_),X,fail; true)).
% Zeit 447 ms, GC 0 ms, Lips 15660199, Uhr 01.03.2024 19:38
X = (_16030782 is 1+2, _16030783 is _16030782+3).
?- time((between(1,1000000,_),p(X),X,fail; true)).
% Zeit 1780 ms, GC 0 ms, Lips 14606802, Uhr 01.03.2024 19:38
true.
Jekejeke Prolog and Dogelog Player do not choke
like Scryer Prolog does. The performance of Dogelog
Player is amazing, since call/1 is implemented in
pure Prolog, not via an interpreter, but via a little
compiler and an intermediate form that we anyway use
when we generate cross-compiled code or
static/dynamic clauses.
Let's not forget GNU Prolog. It lags a little bit
behind SWI-Prolog for the first test case:
?- p(X), (between(1,1000000,_),X,fail; true).
X = (A is 1+2,_ is A+3)
(453 ms) yes
?- (between(1,1000000,_),p(X),X,fail; true).
(578 ms) yes
Also, this kind of testing might be a little bit
too generous, by omitting the time/1 call.