--- Synchronet 3.20a-Linux NewsLink 1.114
Not only does the speed no longer double every year,
the density of transistors no longer doubles
every year either. See also:
‘Moore’s Law’s dead,’ Nvidia CEO https://www.marketwatch.com/story/moores-laws-dead-nvidia-ceo-jensen-says-in-justifying-gaming-card-price-hike-11663798618
So there is some hope in FPGAs. The article says:
"In the latter paper, which includes a great overview of
the state of the art, Pilch and colleagues summarize
this as shifting the processing from time to space —
from using slow sequential CPU processing to hardware
complexity, using the FPGA’s configurable fabric
and inherent parallelism."
In reference to (no paywall):
An FPGA-based real quantum computer emulator
15 December 2018 - Pilch et al. https://link.springer.com/article/10.1007/s10825-018-1287-5
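The time-to-space idea is easiest to see from what such an emulator actually computes: an n-qubit state is a vector of 2^n amplitudes, and each gate is a small matrix update over amplitude pairs; the loop iterations below are independent, which is what an FPGA can lay out in parallel. A minimal software sketch (my own illustration, not the Pilch et al. design):

```python
import math

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit statevector (length 2**n)."""
    new = list(state)
    step = 1 << target
    for i in range(1 << n):
        if i & step == 0:              # pair (i, i|step): target qubit is 0 resp. 1
            a0, a1 = state[i], state[i | step]
            new[i]        = gate[0][0] * a0 + gate[0][1] * a1
            new[i | step] = gate[1][0] * a0 + gate[1][1] * a1
    return new

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

# One qubit starting in |0> = [1, 0]; a Hadamard gives the equal superposition.
state = apply_single_qubit_gate([1.0, 0.0], H, 0, 1)
print(state)   # ≈ [0.7071, 0.7071]
```

On hardware each amplitude pair gets its own multiply-accumulate unit, which is the "space" side of the trade-off.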
Mild Shock wrote on Tuesday, June 20, 2023 at 17:20:27 UTC+2:
To hell with GPUs. Here come the FPGA qubits:
Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/
The superposition property enables a quantum computer
to be in multiple states at once. https://www.techtarget.com/whatis/definition/qubit
Maybe their new board is even less suited for hitting
a ship with a torpedo than some machine learning?
OK, OpenAI is dead. But we need to get out of the claws
of the computing cloud. We need the spirit of Niklaus
Wirth, who combined computer science and
electronics. We need to solve the problem of
parallel silicon. We should have another look at these
quantum computers. Can we have them on the edge?
Mild Shock wrote on Friday, June 23, 2023 at 11:14:15 UTC+2:
[...]
How my Dogelog Player garbage collector works:
Ashes to ashes, funk to funky
We know Major Tom's a junkie
Strung out in heaven's high
Hitting an all-time low
https://www.youtube.com/watch?v=CMThz7eQ6K0
Unfortunately no generational garbage collector yet. :-(
To advance the state of the art and track performance improvements,
some automation would be helpful. I can test WASM manually via
https://dev.swi-prolog.org/wasm/shell . Since my recent
performance tuning of Dogelog Player for JavaScript, I beat 32-bit
WASM SWI-Prolog. This does not yet hold for the SAT solver test cases,
which need GC improvements, but it does for the core test cases.
I have only tested on my Ryzen; I don't know the results for the Yoga yet:
dog swi
nrev 1247 1223
crypt 894 2351
deriv 960 1415
poly 959 1475
sortq 1313 1825
tictac 1587 2400
queens 1203 2316
query 1919 4565
mtak 1376 1584
perfect 1020 1369
calc 1224 1583
Total 13702 22106
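The claim can be cross-checked from the table itself; a quick script (my own, units as reported in the post) sums both columns and computes the overall factor:

```python
# dog = Dogelog Player on JavaScript, swi = 32-bit WASM SWI-Prolog,
# values copied from the table above.
dog = {"nrev": 1247, "crypt": 894, "deriv": 960, "poly": 959, "sortq": 1313,
       "tictac": 1587, "queens": 1203, "query": 1919, "mtak": 1376,
       "perfect": 1020, "calc": 1224}
swi = {"nrev": 1223, "crypt": 2351, "deriv": 1415, "poly": 1475, "sortq": 1825,
       "tictac": 2400, "queens": 2316, "query": 4565, "mtak": 1584,
       "perfect": 1369, "calc": 1583}

total_dog, total_swi = sum(dog.values()), sum(swi.values())
print(total_dog, total_swi)             # 13702 22106, matching the Total row
print(round(total_swi / total_dog, 2))  # 1.61, the overall speedup factor
```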
LoL
Mild Shock wrote on Saturday, November 25, 2023 at 16:23:07 UTC+1:
[...]
Scryer Prolog has made amazing leaps recently concerning
performance; it is now only about 2-3 times slower than
SWI-Prolog! What prevents it from getting faster than SWI-Prolog?
See for yourself; here is some testing with a very recent version.
Interestingly, tictac shows it has some problems with
negation-as-failure and/or call/1. Maybe they should allocate more
time to these areas instead of to inference-count formatting:
$ target/release/scryer-prolog -v
v0.9.3-50-gb8ef3678
nrev % CPU time: 0.304s, 3_024_548 inferences
crypt % CPU time: 0.422s, 4_392_537 inferences
deriv % CPU time: 0.462s, 3_150_149 inferences
poly % CPU time: 0.394s, 3_588_369 inferences
sortq % CPU time: 0.481s, 3_654_653 inferences
tictac % CPU time: 1.591s, 3_285_766 inferences
queens % CPU time: 0.517s, 5_713_596 inferences
query % CPU time: 0.909s, 8_678_936 inferences
mtak % CPU time: 0.425s, 6_901_822 inferences
perfect % CPU time: 0.763s, 5_321_436 inferences
calc % CPU time: 0.626s, 6_700_379 inferences
true.
Compared to SWI-Prolog on the same machine:
$ swipl --version
SWI-Prolog version 9.1.18 for x86_64-linux
nrev % 2,994,497 inferences, 0.067 CPU in 0.067 seconds
crypt % 4,166,441 inferences, 0.288 CPU in 0.287 seconds
deriv % 2,100,068 inferences, 0.139 CPU in 0.139 seconds
poly % 2,087,479 inferences, 0.155 CPU in 0.155 seconds
sortq % 3,624,602 inferences, 0.173 CPU in 0.173 seconds
tictac % 1,012,615 inferences, 0.184 CPU in 0.184 seconds
queens % 4,596,063 inferences, 0.266 CPU in 0.266 seconds
query % 8,639,878 inferences, 0.622 CPU in 0.622 seconds
mtak % 3,943,818 inferences, 0.162 CPU in 0.162 seconds
perfect % 3,241,199 inferences, 0.197 CPU in 0.197 seconds
calc % 3,060,151 inferences, 0.180 CPU in 0.180 seconds
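The "2-3 times slower" figure can be checked from the two listings above (my own tally of the reported CPU times, in seconds):

```python
# CPU times copied from the Scryer Prolog and SWI-Prolog runs above,
# in benchmark order nrev..calc.
scryer = [0.304, 0.422, 0.462, 0.394, 0.481, 1.591, 0.517, 0.909, 0.425, 0.763, 0.626]
swipl  = [0.067, 0.288, 0.139, 0.155, 0.173, 0.184, 0.266, 0.622, 0.162, 0.197, 0.180]

print(round(sum(scryer), 3), round(sum(swipl), 3))  # 6.894 2.433
print(round(sum(scryer) / sum(swipl), 2))           # 2.83, inside the 2-3x range
```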
Mild Shock wrote on Saturday, November 25, 2023 at 22:10:20 UTC+1:
[...]
Testing scryer-prolog doesn't make any sense. It's not a
Prolog system. It has memory leaks somewhere.
Just try my SAT solver test suite:
?- between(1,100,_), suite_quiet, fail; true.
VSZ and RSS memory keep going up and up, with no end,
clogging my machine. I don't think it should happen
that a failure-driven loop eats all memory?
That's just a fraud. How do you set some limits?
Mild Shock wrote on Monday, November 27, 2023 at 18:31:25 UTC+1:
[...]
With limits I get this result:
$ target/release/scryer-prolog -v
v0.9.3-57-ge8d8b09e
$ ulimit -m 2000000
$ ulimit -v 2000000
$ target/release/scryer-prolog
?- ['program2.p'].
true.
?- between(1,100,_), suite_quiet, fail; true.
Segmentation fault
Not OK! It should keep running to the end.
Mild Shock wrote on Wednesday, November 29, 2023 at 06:47:15 UTC+1:
[...]
Don't buy your pearls in Hong Kong. They are all fake.
So what do you prefer, this Haskell monster: https://www.cs.nott.ac.uk/~pszgmh/countdown.pdf
Terence Tao, "Machine Assisted Proof" https://www.youtube.com/watch?v=AayZuuDDKP0
Mostowski Collapse wrote:
[...]
I haven't done all my homework yet.
For example, just fiddling around with CLP(FD), I get:
?- maplist(in, Vs, [1\/3..4, 1..2\/4, 1..2\/4,
   1..3, 1..3, 1..6]), all_distinct(Vs).
false.
Does Scryer Prolog's CLP(Z) have some explanation facility for that?
What exactly is the conflict that makes it fail?
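As it happens, the conflict can be exhibited by brute force: the first five domains together contain only the four values {1,2,3,4}, a Hall set, so no all-distinct assignment exists. A small check (my own analysis, not Scryer output):

```python
from itertools import product

# Domains as in the query above: 1\/3..4, 1..2\/4, 1..2\/4, 1..3, 1..3, 1..6
domains = [{1, 3, 4}, {1, 2, 4}, {1, 2, 4}, {1, 2, 3}, {1, 2, 3}, set(range(1, 7))]

# Exhaustive search confirms there is no all-distinct assignment:
solutions = [vs for vs in product(*domains) if len(set(vs)) == 6]
print(solutions)   # []

# The conflict: five variables share only four values (pigeonhole).
union_first_five = set().union(*domains[:5])
print(union_first_five)   # {1, 2, 3, 4}
```

Detecting such Hall sets is exactly what the global all_distinct propagator does, which is why it fails without any labeling.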
Mild Shock wrote:
[...]
Or a more striking example: Peter Norvig's impossible
Sudoku, which he claims took him 1439 seconds
to show unsolvable:
/* Peter Norvig */
problem(9, [[_,_,_,_,_,5,_,8,_],
[_,_,_,6,_,1,_,4,3],
[_,_,_,_,_,_,_,_,_],
[_,1,_,5,_,_,_,_,_],
[_,_,_,1,_,6,_,_,_],
[3,_,_,_,_,_,_,_,5],
[5,3,_,_,_,_,_,6,1],
[_,_,_,_,_,_,_,_,4],
[_,_,_,_,_,_,_,_,_]]).
https://norvig.com/sudoku.html
whereby SWI-Prolog with all_distinct/1 does
it in a blink, even without labeling:
?- problem(9, M), time(sudoku(M)).
% 316,054 inferences, 0.016 CPU in 0.020 seconds (80% CPU, 20227456 Lips)
false.
Pretty cool!
Mild Shock wrote:
[...]
Now I have the feeling there are no difficult 9x9
Sudokus for the computer. At least not for computers
running SWI-Prolog and using CLP(FD) with the global
constraint all_distinct/1.
I was fishing among the 17-clue Sudokus, and the
hardest I could find so far was this one:
/* Gordon Royle #3668 */
problem(11,[[_,_,_,_,_,_,_,_,_],
[_,_,_,_,_,_,_,1,2],
[_,_,3,_,_,4,_,_,_],
[_,_,_,_,_,_,_,_,3],
[_,1,_,2,5,_,_,_,_],
[6,_,_,_,_,_,7,_,_],
[_,_,_,_,2,_,_,_,_],
[_,_,7,_,_,_,4,_,_],
[5,_,_,1,6,_,_,8,_]]).
But SWI-Prolog still does it in around 3 seconds.
SWI-Prolog does other 17-clue Sudokus in less than 100ms.
Are there any 17-clue Sudokus that take more time?
https://academic.timwylie.com/17CSCI4341/sudoku.pdf
Your new Scrum Master is here! - ChatGPT, 2023 https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years
LoL
Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA
Terminating intuitionistic calculus
Giulio Fellin and Sara Negri
https://philpapers.org/rec/FELATI
Could be a wake-up call, this many participants
already in the committee, that the whole logic
world was asleep for many years:
Non-Classical Logics. Theory and Applications XI,
5-8 September 2024, Lodz (Poland)
https://easychair.org/cfp/NCL24
Why is Minimal Logic at the core of many things?
Because it is the logic of the Curry-Howard isomorphism
for simple types:
  ---------------    (axiom)
  Γ ∪ { A } ⊢ A

  Γ ∪ { A } ⊢ B
  ---------------    (→-introduction)
    Γ ⊢ A → B

  Γ ⊢ A → B    Δ ⊢ A
  -------------------    (→-elimination)
      Γ ∪ Δ ⊢ B
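Under Curry-Howard the three rules above are exactly the typing rules of the simply typed λ-calculus: the axiom is variable lookup, →-introduction is abstraction, and →-elimination is application. A minimal type-inference sketch (my own illustration):

```python
# Types are atom strings or ('->', A, B); terms are ('var', x),
# ('lam', x, A, body) and ('app', f, u).

def type_of(term, ctx):
    """Infer the simple type of a lambda term, mirroring the three rules."""
    kind = term[0]
    if kind == 'var':      # axiom: Γ ∪ {x:A} ⊢ x : A
        return ctx[term[1]]
    if kind == 'lam':      # →-intro: from Γ ∪ {x:A} ⊢ t : B infer Γ ⊢ λx.t : A→B
        _, x, a, body = term
        return ('->', a, type_of(body, {**ctx, x: a}))
    if kind == 'app':      # →-elim: from Γ ⊢ f : A→B and Δ ⊢ u : A infer Γ ∪ Δ ⊢ f u : B
        f = type_of(term[1], ctx)
        a = type_of(term[2], ctx)
        assert f[0] == '->' and f[1] == a, "ill-typed application"
        return f[2]
    raise ValueError(term)

# λx.x proves A → A; λx.λy.x proves A → (B → A):
i = ('lam', 'x', 'A', ('var', 'x'))
k = ('lam', 'x', 'A', ('lam', 'y', 'B', ('var', 'x')))
print(type_of(i, {}))   # ('->', 'A', 'A')
print(type_of(k, {}))   # ('->', 'A', ('->', 'B', 'A'))
```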
And funny things can happen, especially when people
hallucinate duality or think symmetry is given, for
example in newer inventions such as λμ-calculus,
but then omg ~~p => p is nevertheless not provable,
because they forgot an inference rule. LoL
Recommended reading so far:
Propositional Logics Related to Heyting’s and Johansson’s
February 2008 - Krister Segerberg https://www.researchgate.net/publication/228036664
The Logic of Church and Curry
Jonathan P. Seldin - 2009 https://www.sciencedirect.com/handbook/handbook-of-the-history-of-logic/vol/5/suppl/C
Meanwhile I am going back to tinkering with my
Prolog system, which provides an even more primitive
logic than minimal logic; pure Prolog is minimal
logic without embedded implication.
Mild Shock wrote:
[...]
Hi,
A few years ago I was impressed by
the output of either Negri or Plato,
or the two together.
Now they are just an annoyance; all
they show is that they are neither talented
nor sufficiently trained.
Just have a look at:
Terminating intuitionistic calculus
Giulio Fellin and Sara Negri
https://philpapers.org/rec/FELATI
Besides the too obvious creative idea and motive
behind it, it is most likely completely useless
nonsense. Already the presentation in the
paper shows utter incompetence:

Γ, A → B ⊢ A    Γ, A → B, B ⊢ Δ
--------------------------------
        Γ, A → B ⊢ Δ

Everybody in the business knows that the
looping resulting from the A → B copying
is a fact. But it can be reduced, since the
copying in the right-hand premise is not needed.

Γ, A → B ⊢ A    Γ, B ⊢ Δ
-------------------------
      Γ, A → B ⊢ Δ

The above variant is enough, just like Dragalin
presented the calculus. I really wish people
would completely understand these masterpieces
before they even touch multi-consequent calculi:
Mathematical Intuitionism: Introduction to Proof Theory
Albert Grigorevich Dragalin - 1988
https://www.amazon.com/dp/0821845209
Contraction-Free Sequent Calculi for Intuitionistic Logic
Roy Dyckhoff - 1992
http://www.cs.cmu.edu/~fp//courses/atp/cmuonly/D92.pdf
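The point of the reduced rule is that it yields a terminating, contraction-free proof search, as in Dyckhoff's calculus. A sketch for the implicational fragment (my own illustration of the idea, not code from the cited works):

```python
# Formulas are atoms (strings) or ('->', A, B). Contraction-free rules:
# the principal implication is consumed, never copied, so search terminates.

def prove(ctx, goal):
    ctx = frozenset(ctx)
    if isinstance(goal, tuple):                  # right rule: Γ ⊢ A→B from Γ, A ⊢ B
        return prove(ctx | {goal[1]}, goal[2])
    if goal in ctx:                              # axiom: atomic goal in context
        return True
    for f in ctx:
        if isinstance(f, tuple):
            _, a, b = f
            rest = ctx - {f}
            if isinstance(a, str):               # atomic antecedent: p, p→B become p, B
                if a in ctx and prove(rest | {b}, goal):
                    return True
            else:                                # (C→D)→B becomes D→B, then B
                c, d = a[1], a[2]
                if prove(rest | {('->', d, b)}, a) and prove(rest | {b}, goal):
                    return True
    return False

imp = lambda a, b: ('->', a, b)
print(prove(set(), imp('a', 'a')))                         # True
print(prove(set(), imp('a', imp('b', 'a'))))               # True (K)
print(prove(set(), imp(imp(imp('a', 'b'), 'a'), 'a')))     # False: Peirce's law
```

No multiset bookkeeping or loop check is needed; the rules shrink a measure on the context at every step.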
What's the deeper semantic (sic!) explanation of the
two calculi GHPC and GCPC? I have a Kripke semantics
explanation in my notes, but haven't released it yet.
Have Fun!
Mild Shock wrote:
[...]
The meteoric rise of the Curry-Howard isomorphism
and minimal logic, possibly because proof assistants
such as Lean, Agda, etc. all use it, is quite ironic
in the light of this statement:
Because of the vagueness of the notions of “constructive
proof”, “constructive operation”, the BHK-interpretation
has never become a versatile technical tool in the way
classical semantics has. Perhaps it is correct to say
that by most people the BHK-interpretation has never been
seen as an intuitionistic counterpart to classical semantics.
https://festschriften.illc.uva.nl/j50/contribs/troelstra/troelstra.pdf
Mild Shock wrote:
[...]
Hi,
There are possibly issues with interdisciplinary
work. For example, Sorensen & Urzyczyn, in their
Lectures on the Curry-Howard Isomorphism, say that
the logic LP has no name in the literature.
On the other hand, Segerberg's paper shows that
a logic LP, in his labeling JP, that stems from
accepting Peirce's Law is equivalent to a logic
accepting Curry's refutation rule,
i.e. the logic JE with:

Γ, A => B |- A
-----------------
Γ |- A

But the logic JE also implies that LEM was added!
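The remark that JE brings back LEM can be made precise: Peirce's law instantiated at False already yields excluded middle intuitionistically. A sketch in Lean 4 (my own formalization, using only core logic):

```lean
-- Peirce's law at the instance q := p ∨ ¬p, r := False gives LEM:
-- the premise (q → False) → q holds because from h : q → False we can
-- build ¬p, hence the right disjunct of q.
theorem peirce_gives_lem (p : Prop)
    (peirce : (((p ∨ ¬p) → False) → (p ∨ ¬p)) → (p ∨ ¬p)) : p ∨ ¬p :=
  peirce (fun h => Or.inr (fun hp => h (Or.inl hp)))
```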
Bye
Mild Shock wrote:
[...]
Hi,
Now I had an extremely resilient correspondent who
wants to do proof extraction but at the same
time refuses to learn the Curry-Howard isomorphism.
But it's so easy; I was just watching:
Hyperon Session with Dr. Ben Goertzel https://www.youtube.com/watch?v=5Uy3j4WCiXQ
At t=1853 he mentions C. S. Peirce's thirdness, which
you can use to explain the Curry-Howard isomorphism:

1 *\           Γ = Context
  |  \
  |    * 3     t = λ-Expression
  |  /
2 */           α = Type

The above is a trikonic visualization of the judgement
Γ |- t : α, applying the art of making three-fold divisions.
But I guess C. S. Peirce is not read in France, since
it requires English. Or maybe there is a French translation?
Bye
Mild Shock wrote:
[...]
Hi,
Actually, thirdness is not only the art of making
three-fold divisions. Usually one aims at finding
a 3 that is the relation between 1 and 2, so that
we have this relation satisfied:

3(1, 2)

Of course we can take the stance that |-
does that already. Only |- is highly ambiguous:
if you see Γ |- α, you don't know what the last
inference rule applied was. But for proof extraction
you want to know exactly that.
Bye
P.S.: And Peirce isn't wrong when he says thirdness
is enough; just take set theory, which can do all
of mathematics. It is based on this thirdness only:

x ∈ y

The set membership. But set membership is as ugly as |-;
it also doesn't say why an element belongs to a set.
LoL
Mild Shock schrieb:
Hi,
Now I had an extremly resilient correspondent, who
wants to do proof extraction, but at the same
time refuses to learn the Curry-Howard isomorphism.
But its so easy, was just watching:
Hyperon Session with Dr. Ben Goertzel
https://www.youtube.com/watch?v=5Uy3j4WCiXQ
At t=1853 he mentions C. S. Peirce thirdness, which
you can use to explain the Curry-Howard isomorphism:
1 *\ Γ = Context
| \
| * 3 t = λ-Expression
| /
2 */ α = Type
The above is a trikonic visualization of the judgement
Γ |- t : α, applying the art of making three-fold divisions.
But I guess C. S. Peirce is not read in France, since
it requires English. Or maybe there is a french translation?
Bye
Mild Shock schrieb:
Could be a wake-up call this many participants
already in the commitee, that the whole logic
world was asleep for many years:
Non-Classical Logics. Theory and Applications XI,
5-8 September 2024, Lodz (Poland)
https://easychair.org/cfp/NCL24
Why is Minimal Logic at the core of many things?
Because it is the logic of Curry-Howard isomoprhism
for symple types:
----------------
Γ ∪ { A } ⊢ A
Γ ∪ { A } ⊢ B
----------------
Γ ⊢ A → B
Γ ⊢ A → B Δ ⊢ A
----------------------------
Γ ∪ Δ ⊢ B
And funny things can happen, especially when people
hallucinate duality or think symmetry is given, for
example in newer inventions such as λμ-calculus,
but then omg ~~p => p is nevertheless not provable,
because they forgot an inference rule. LoL
Recommended reading so far:
Propositional Logics Related to Heyting’s and Johansson’s
February 2008 - Krister Segerberg
https://www.researchgate.net/publication/228036664
The Logic of Church and Curry
Jonathan P. Seldin - 2009
https://www.sciencedirect.com/handbook/handbook-of-the-history-of-logic/vol/5/suppl/C
Meanwhile I am going back to my tinkering with my
Prolog system, which even provides a more primitive
logic than minimal logic: pure Prolog is minimal
logic without embedded implication.
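That "pure Prolog is minimal logic without embedded implication" can be made concrete: clause bodies contain only atoms, so a goal never requires the rule that discharges an assumption. A propositional SLD resolution sketch in Python (the program representation is my own, not any Prolog system's):

```python
# Propositional SLD resolution for pure Prolog: a program is a list
# of Horn clauses (head, body), where body is a tuple of atoms only.
# No goal of the form A => B ever arises, i.e. no embedded implication.

def sld(program, goals, depth=20):
    """Try to derive all goals from the Horn clauses, depth-bounded."""
    if not goals:
        return True
    if depth == 0:
        return False
    first, rest = goals[0], goals[1:]
    for head, body in program:            # try each clause whose head matches
        if head == first and sld(program, list(body) + rest, depth - 1):
            return True
    return False

# p :- q.  q.  -- so p is derivable, r is not:
prog = [('p', ('q',)), ('q', ())]
```

Adding embedded implication to goal bodies would give you the full →I rule of minimal logic, which is exactly what pure Prolog leaves out.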
Mild Shock schrieb:
Your new Scrum Master is here! - ChatGPT, 2023
https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years
LoL
Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05
UTC+2:
Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA
Hi,
In 2023 Dr. Ben Goertzel praised the return to
normal; today in 2024 everybody has mysterious
eye infections and a new wave is reported:
Flirt-Varianten: Sommer-Coronawelle nimmt Fahrt auf
(FLiRT variants: the summer Covid wave is picking up speed)
https://www.mdr.de/wissen/medizin-gesundheit/corona-fallzahlen-sommerwelle-100.html
Bye
von Plato (p. 83 of Elements of Logical
Reasoning) … excellent book
Hi,
I am not hallucinating that Negri is nonsense:
This calculus does not terminate (e.g. on Peirce’s
formula). Negri [42] shows how to add a loop-checking
mechanism to ensure termination. The effect on complexity
isn’t yet clear; but the loop-checking is expensive.
Intuitionistic Decision Procedures since Gentzen
The Jägerfest - 2013
https://apt13.unibe.ch/slides/Dyckhoff.pdf
Bye
Sequent calculus offers a good possibility for
exhaustive proof search in propositional logic:
We can check through all the possibilities for
making a derivation. If none of them worked,
i.e., if each had at least one branch in which
no rule applied and no initial sequent was reached,
the given sequent is underivable. The
symbol |/- is used for underivability.
The premisses are simpler than the conclusion
in all the rules except possibly in the left
premiss of rule L=>. That is the only source
of non-termination. Rules other than L=> can
produce duplication, if an active formula had
another occurrence in the antecedent. This
source of duplication comes to an end.
The sad news is, the book is only
worth some firewood.
von Plato (p. 83 of Elements of Logical Reasoning):
Interestingly the book uses non-classical
logic, since it says:
Sequent calculus offers a good possibility for
exhaustive proof search in propositional logic:
We can check through all the possibilities for
making a derivation. If none of them worked,
i.e., if each had at least one branch in which
no rule applied and no initial sequent was reached,
the given sequent is underivable. The
symbol |/- is used for underivability.
And then it has unprovable:
c. |/- A v ~A
d. |/- ~~A => A
But most likely the book has a blind spot, some
serious errors, or totally unfounded claims, since
for example with such a calculus, the unprovability
of Peirce’s Law cannot be shown so easily.
Exhaustive proof search will usually not terminate.
There are some terminating calculi, like Dyckhoff's
LJT, but a naive calculus based on Gentzen's take
will not terminate.
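The non-termination is easy to reproduce. Here is a naive depth-bounded Python sketch of root-first search for the implicational fragment of minimal logic, with a Gentzen-style L=> that keeps the principal formula (my own toy encoding, deliberately not Dyckhoff's LJT):

```python
# Naive root-first proof search for implicational minimal logic.
# Atoms are strings; A => B is the tuple (A, B). The L=> rule keeps
# the implication in Γ, the source of non-termination. A depth bound
# turns looping into failure, so False only means "no proof within
# this bound", never underivability.

def prove(gamma, phi, depth):
    if phi in gamma:                      # initial sequent Γ, A ⊢ A
        return True
    if depth == 0:
        return False
    if isinstance(phi, tuple):            # R=>: from Γ, A ⊢ B infer Γ ⊢ A => B
        a, b = phi
        return prove(gamma | {a}, b, depth - 1)
    for f in gamma:                       # L=> on every implication in Γ
        if isinstance(f, tuple):
            a, b = f
            if prove(gamma, a, depth - 1) and prove(gamma | {b}, phi, depth - 1):
                return True
    return False

# A => A is found at once; on Peirce's formula ((P => Q) => P) => P the
# left L=> premise keeps re-attacking the same implication until the
# budget runs out:
peirce = ((('P', 'Q'), 'P'), 'P')
```

The backtracking over every implication in Γ in effect tries all choices of active formula up to the bound, but the failure on Peirce's formula proves nothing about underivability, which is exactly the problem with the naive calculus.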
The single-succedent sequent calculus of proof
search of Table 4.1 is a relatively recent invention:
Building on the work of Albert Dragalin (1978) on the
invertibility of logical rules in sequent calculi,
Anne Troelstra worked out the details of the proof
theory of this `contraction-free' calculus in the
book Basic Proof Theory (2000).
Propositional Dynamic Logic of Regular Programs
Fischer & Ladner - 1979 https://www.sciencedirect.com/science/article/pii/0022000079900461
The modal systems K, T, S4, S5 (cf. Ladner [16]) are
recognizable subsystems of propositional dynamic logic.
K allows only the modality A,
T allows only the modality A u λ,
S4 allows only the modality A*,
S5 allows only the modality (A u A-)*.
Rather read the original, von Plato
takes his wisdom from:
The single-succedent sequent calculus of proof
search of Table 4.1 is a relatively recent invention:
Building on the work of Albert Dragalin (1978) on the
invertibility of logical rules in sequent calculi,
Anne Troelstra worked out the details of the proof
theory of this `contraction-free' calculus in the
book Basic Proof Theory (2000).
But the book by Troelstra (1939-2019) and
Schwichtenberg (1949-) doesn't contain a "minimal
logic is decidable" theorem based on some "loop
checking", as indicated by von Plato on page 78.
The problem situation is similar to Prolog SLD
resolution, where S stands for selection function.
Since the (L=>) inference rule is not invertible, it
involves a selection function σ
that picks the active formula:
Γ, A => B |- A    Γ, B |- C
---------------------------- (L=>)
       Γ, A => B |- C

where a selection function σ picked A => B
from the left-hand side.
One selection function might loop, another
selection function might not. In Jens Otten's
ileansep.p, through backtracking over the predicate
select/3 and iterative deepening, all selections
are tried. To show unprovability you have to show
looping for all possible selection functions, which
is obviously less trivial than the "root-first proof
search" humbug from von Plato's vegan products
store that offers "naturally growing trees".
It is interesting to note that almost all the major
subfields of AI mirror subfields of philosophy: The AI
analogue of philosophy of language is computational
linguistics; what philosophers call "practical
reasoning” is called “planning and acting” in
AI; ontology (indeed, much of metaphysics
and epistemology) corresponds to knowledge
representation in AI; and automated reasoning
is one of the AI analogues of logic.
– C.2.1.1 Intentions, practitions, and the ought-to-do.
Should AI workers study philosophy? Yes,
unless they are content to reinvent the wheel
every few days. When AI reinvents a wheel, it is
typically square, or at best hexagonal, and
can only make a few hundred revolutions before
it stops. Philosopher’s wheels, on the other hand,
are perfect circles, require in principle no
lubrication, and can go in at least two directions
at once. Clearly a meeting of minds is in order.
– C.4 Summary
Hi,
Yes, maybe we are just before a kind
of 2nd Cognitive Turn. The first Cognitive
Turn is characterized as:
The cognitive revolution was an intellectual
movement that began in the 1950s as an
interdisciplinary study of the mind and its
processes, from which emerged a new
field known as cognitive science.
https://en.wikipedia.org/wiki/Cognitive_revolution
The current mainstream belief is that
Chat Bots and the progress in AI is mainly
based on "Machine Learning", whereas
most of the progress is more based on
"Deep Learning". But I am also sceptical
about "Deep Learning"; in the end a frequentist
is again lurking. In the worst case the
no-Bayesian-Brain shock will come with a
technological singularity, in that the current
short inferencing of LLMs is enhanced by
some long inferencing, like here:
A week ago, I posted that I was cooking a
logical reasoning benchmark as a side project.
Now it's finally ready! Introducing 🦓 𝙕𝙚𝙗𝙧𝙖𝙇𝙤𝙜𝙞𝙘,
designed for evaluating LLMs with Logic Puzzles.
https://x.com/billyuchenlin/status/1814254565128335705
making it possible not just for LLMs to excel
in such puzzles, but to advance to more
elaborate scientific models that can somehow
overcome fallacies such as:
- Kochen-Specker paradox, some fallacies
caused by averaging?
- Gluts and Gaps in Bayesian Reasoning,
some fallacies by consistency assumptions?
- What else?
So on quiet paws AI might become the new overlord
of science which we will happily depend on.
Jeff Barnett schrieb:
You are surprised; I am saddened. Not only have
we lost contact with the primary studies of knowledge
and reasoning, we have also lost contact with the
studies of methods and motivation. Psychology
was the basic home room of Alan Newell and many
other AI all stars. What is now called AI, I think
incorrectly, is just ways of exercising large amounts
of very cheap computer power to calculate approximates
to correlations and other statistical approximations.
The problem with all of this in my mind, is that we
learn nothing about the capturing of knowledge, what
it is, or how it is used. Both logic and heuristic reasoning
are needed and we certainly believe that intelligence is
not measured by its ability to discover "truth" or its
infallibly consistent results. Newton's thought process
was pure genius but known to produce fallacious results
when you know what Einstein knew at a later time.
I remember reading Ted Shortliffe's dissertation about
MYCIN (an early AI medical consultant for diagnosing
blood-borne infectious diseases) where I learned about
one use of the term "staph disease", or just "staph" for short.
In patient care areas there always seems to be an in-
house infection that changes over time. It changes
because sick patients brought into the area contribute
whatever is making them sick in the first place. In the
second place there is rapid mutations driven by all sorts
of factors present in hospital-like environments. The
result is that the local staph is varying, literally, minute
by minute. In a days time, the samples you took are
no longer valid, i.e., their day old cultures may be
meaningless. The underlying mathematical problem is
that probability theory doesn't really have the tools to
make predictions when the basic probabilities are
changing faster than observations can be
turned into inferences.
Why do I mention the problems of unstable probabilities
here? Because new AI uses fancy ideas of correlation
to simulate probabilistic inference, e.g., Bayesian inference.
Since actual probabilities may not exist in any meaningful
ways, the simulations are often based on air.
A hallmark of excellent human reasoning is the ability to
explain how we arrived at our conclusions. We are also
able to repair our inner models when we are in error if
we can understand why. The abilities to explain and
repair are fundamental to excellence of thought processes.
By the way, I'm not claiming that all humans or I have these
reflective abilities. Those who do are few and far between.
However, any AI that doesn't have some of these
capabilities isn't very interesting.
For more on reasons why logic and truth are only part of human
ability to reasonably reason, see
https://www.yahoo.com/news/opinion-want-convince-conspiracy-theory-100258277.html
-- Jeff Barnett
My impression is that Cognitive Science was never
Bayesian Brain, so I guess I made a joke.
The time scale, its start in the 1950s, and that
it is still a relatively unknown subject
would explain:
- why my father or mother never tried to
educate me towards cognitive science.
It could be that they are totally blank
in this respect?
- why my grandfather or grandmothers never
tried to educate me towards cognitive
science. Ditto: it could be that they are totally
blank in this respect?
- it could be that there are rare cases where
some philosophers already had a glimpse of
cognitive science. But when I open for
example this booklet:
System der Logik
Friedrich Ueberweg
Bonn - 1868
https://philpapers.org/rec/UEBSDL
One can feel the dry swimming that is reported
for several millennia. What happened in the
1950s was the possibility of computer modelling.
And "cognitive science" has recently pursued
the relation of intentional mental activities
to neural processes in the brain.
Cognitive science is an interdisciplinary
science that deals with the processing of
information in the context of perception,
thinking and decision-making processes,
both in humans and in animals or machines.
BTW: Friedrich Ueberweg is quite good
and funny to browse; he reports relatively
unfiltered what we would nowadays call
forms of "rational behaviour", so it's a little
potpourri, except for his sections where he
explains some schemas, like the Aristotelian
figures, which are more pure logic of the form.
And bang, you get a guy talking pages and
pages about pure and form:
"Pure" logic, ontology, and phenomenology
David Woodruff Smith
https://www.cairn.info/revue-internationale-de-philosophie-2003-2-page-21.htm
But the above is a species of philosophy
that is endangered now. Its predators are
abstractions on the computer like lambda
calculus and the Curry-Howard isomorphism. The
revue has become an irrelevant cabaret, only
dead people would be interested in, like
my father, grandfather etc...
Well we all know about this rule:
- Never ask a woman about her weight
- Never ask a woman about her age
There is a similar rule for philosophers:
- Never ask a philosopher what is cognitive science
- Never ask a philosopher what is formula-as-types
Explanation: They like to be the champions of
pure form like in this paper below, so they
don’t like other disciplines dealing with pure
form or even having pure form on the computer.
"Pure” logic, ontology, and phenomenology
David Woodruff Smith - Revue internationale de philosophie 2003/2
Mild Shock schrieb:
There are more and more papers of this sort:
Reliable Reasoning Beyond Natural Language
To address this, we propose a neurosymbolic
approach that prompts LLMs to extract and encode
all relevant information from a problem statement as
logical code statements, and then use a logic programming
language (Prolog) to conduct the iterative computations of
explicit deductive reasoning.
[2407.11373] Reliable Reasoning Beyond Natural Language
The future of Prolog is bright?
Hi,
Let's say one milestone in cognitive science
is the concept of "bounded rationality".
It seems LLMs have some traits that are also
found in humans. For example the anchoring effect
is a psychological phenomenon in which an
individual’s judgements or decisions
are influenced by a reference point or “anchor”
which can be completely irrelevant. Like for example
when discussing the Curry-Howard isomorphism with
a real-world philosopher, one that might
not know the Curry-Howard isomorphism but
nevertheless be tempted to hallucinate some nonsense.
https://en.wikipedia.org/wiki/Anchoring_effect
One highly cited paper in this respect is Tversky &
Kahneman 1974. R.I.P. Daniel Kahneman,
March 27, 2024. The paper is still cited today:
Artificial Intelligence and Cognitive Biases: A Viewpoint
https://www.cairn.info/revue-journal-of-innovation-economics-2024-2-page-223.htm
Maybe using deeper and/or more careful reasoning,
possibly backed up by Prolog engine, could have
a positive effect? It's very difficult also for a
Prolog engine, since there is a trade-off
between producing no answer at all if the software
agent is too careful, and of producing a wealth
of nonsense otherwise.
Bye
Now I wonder whether LLMs should be an
inch more informed by results from Neuro-
endocrinology research. I remember Marvin
Minsky publishing his ‘The Society of Mind’:
Introduction to 'The Society of Mind'
https://www.youtube.com/watch?v=-pb3z2w9gDg
But this made me think about multi-agent
systems. Now with LLMs, what about a new
connectionist and deep learning approach,
plus Prolog for the prefrontal cortex (PFC)?
But who can write a blueprint? Now there
is this amazing guy called Robert M. Sapolsky,
who recently published Determined: A Science
of Life without Free Will, and who
calls consciousness just a hiccup. His turtles
all the way down model is a tour de force
through an unsettling conclusion: We may not
grasp the precise marriage of nature and nurture
that creates the physics and chemistry at the
base of human behavior, but that doesn't mean it
doesn't exist. But the prefrontal cortex (PFC)
seems to be still quite brittle, not extremely
performant, and quite energy hungry.
So Prolog might excel?
Determined: A Science of Life Without Free Will
https://www.amazon.de/dp/0525560998
The carbon emissions of writing and illustrating
are lower for AI than for humans
https://www.nature.com/articles/s41598-024-54271-x
Perplexity CEO Aravind Srinivas says that the cost per
query in AI models has decreased by 100x in the past
2 years and quality will improve as hallucinations
decrease 10x per year
https://twitter.com/tsarnick/status/1830045611036721254
Disclaimer: Can't verify the latter claim... need to find a paper.
Hold your breath, the bartender at your next
vacation destination will most likely be an AI
robot. Let's say in 5 years from now. Right?
Michael Sheen The Robot Bartender
https://www.youtube.com/watch?v=tV4Fxy5IyBM
What bullshit:
Another concern is the potential for AI to displace
jobs and exacerbate economic inequality. A recent
study by McKinsey estimates that up to 800 million
jobs could be automated by 2030. While Murati believes
that AI will ultimately create more jobs than it
displaces, she acknowledges the need for policies to
support workers through the transition, such as job
retraining programs and strengthened social safety nets.
https://expertbeacon.com/mira-murati-shaping-the-future-of-ai-ethics-and-innovation-at-openai/
Let's say there is a wine valley. All workers
are replaced by AI robots. Where do they go?
In some cultures you don't find people over
30 that are lifelong learners. What should they
learn? In another valley, where they harvest
oranges, they also replaced everybody with AI
robots. And so on in the next valley, and the
next valley. We need NGOs and a Greta Thunberg
for AI ethics, not a nice face from OpenAI.
Hi,
SAN FRANCISCO/NEW YORK, Sept 4 - Safe
Superintelligence (SSI), newly co-founded by OpenAI's
former chief scientist Ilya Sutskever, has raised $1
billion in cash to help develop safe artificial
intelligence systems that far surpass human
capabilities, company executives told Reuters. https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/
Now they are dancing https://twitter.com/AIForHumansShow/status/1831465601782706352
Bye
It's amazing how we are in the midst of new buzzwords
such as superintelligence, superhuman, etc… I used
the term “long inferencing” in one post somewhere
for a combination of LLM with more capable inferencing,
compared to current LLMs that rather show “short inferencing”.
Then just yesterday it was Strawberry and Orion, as the
next leap by OpenAI. Is the leap getting out of control?
OpenAI wanted to do “Superalignment” but lost a figurehead.
Now there is a new company which wants to do safety-focused
non-narrow AI. But they chose another name. If I translate
superhuman to German I might end up with “Übermensch”,
first used by Nietzsche and later by Hitler and the
Nazi regime. How ironic!
Nick Bostrom - Superintelligence https://www.orellfuessli.ch/shop/home/artikeldetails/A1037878459
Mild Shock wrote:
Hi,
SAN FRANCISCO/NEW YORK, Sept 4 - Safe
Superintelligence (SSI), newly co-founded by OpenAI's
former chief scientist Ilya Sutskever, has raised $1
billion in cash to help develop safe artificial
intelligence systems that far surpass human
capabilities, company executives told Reuters.
https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/
Now they are dancing
https://twitter.com/AIForHumansShow/status/1831465601782706352
Bye
Hi,
Not sure whether this cinematic masterpiece
contains a rendition when I was hunted recently
by a virus and had some hypomanic episodes.
But the chapter "Electromagnetic Waves" is fun:
Three Thousand Years of Longing https://youtu.be/id8-z5vANvc?si=h3mvNLs11UuY8HnD&t=3881
Bye
Mild Shock wrote:
It's amazing how we are in the midst of new buzzwords
such as superintelligence, superhuman, etc… I used
the term “long inferencing” in one post somewhere
for a combination of LLM with more capable inferencing,
compared to current LLMs that rather show “short inferencing”.
Then just yesterday it was Strawberry and Orion, as the
next leap by OpenAI. Is the leap getting out of control?
OpenAI wanted to do “Superalignment” but lost a figurehead.
Now there is a new company which wants to do safety-focused
non-narrow AI. But they chose another name. If I translate
superhuman to German I might end up with “Übermensch”,
first used by Nietzsche and later by Hitler and the
Nazi regime. How ironic!
Nick Bostrom - Superintelligence
https://www.orellfuessli.ch/shop/home/artikeldetails/A1037878459
Mild Shock wrote:
Hi,
SAN FRANCISCO/NEW YORK, Sept 4 - Safe
Superintelligence (SSI), newly co-founded by OpenAI's
former chief scientist Ilya Sutskever, has raised $1
billion in cash to help develop safe artificial
intelligence systems that far surpass human
capabilities, company executives told Reuters.
https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/
Now they are dancing
https://twitter.com/AIForHumansShow/status/1831465601782706352
Bye
Trump: They're eating the dogs, the cats https://www.youtube.com/watch?v=5llMaZ80ErY
https://twitter.com/search?q=trump+cat
Hi,
MIS is an acronym for management information systems.
In the past, people from MIS offered consulting by means
of the balanced scorecard, which could be beneficial for companies:
Balanced Scorecard
https://en.wikipedia.org/wiki/Balanced_scorecard
Now after big data, artificial intelligence, etc., we can
do text scraping and venture into Luhmann's autopoiesis,
i.e. self-preservation through navel-gazing:
Are we on the right track? an update to Lyytinen
et al.’s commentary on why the old world cannot publish https://www.tandfonline.com/doi/pdf/10.1080/0960085X.2021.1940324
LoL
Regards, Jan
P.S.: Autopoiesis
Autopoietic systems generate and enable themselves.
"We want to call systems autopoietic that produce and
reproduce the elements of which they consist through
the very elements of which they consist. (...) An
autopoietic system is a self-referentially, circularly
closed network of operations." https://luhmann.fandom.com/de/wiki/Autopoiesis
Mild Shock wrote:
Trump: They're eating the dogs, the cats
https://www.youtube.com/watch?v=5llMaZ80ErY
https://twitter.com/search?q=trump+cat
The carbon emissions of writing and illustrating
are lower for AI than for humans https://www.nature.com/articles/s41598-024-54271-x
Perplexity CEO Aravind Srinivas says that the cost per
query in AI models has decreased by 100x in the past
2 years and quality will improve as hallucinations
decrease 10x per year
https://twitter.com/tsarnick/status/1830045611036721254
Disclaimer: Can't verify the latter claim... need to find a paper.
Mild Shock wrote:
Your new Scrum Master is here! - ChatGPT, 2023
https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years
LoL
Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA
Trump: They're eating the dogs, the cats https://www.youtube.com/watch?v=5llMaZ80ErY
https://twitter.com/search?q=trump+cat
You know the USA has a problem
when Oracle enters the race:
To source the 131,072 GPU AI "supercluster,"
Larry Ellison appealed directly to Jensen Huang
during a dinner joined by Elon Musk at Nobu.
"I would describe the dinner as me and Elon
begging Jensen for GPUs. Please take our money.
We need you to take more of our money. Please!” https://twitter.com/benitoz/status/1834741314740756621
Meanwhile a contender in Video GenAI
FLUX.1 from Germany, Hurray! With Open Source:
OK. Now I'm Scared... AI Better Than Reality https://www.youtube.com/watch?v=cvMAVWDD-DU
Mild Shock wrote:
The carbon emissions of writing and illustrating
are lower for AI than for humans
https://www.nature.com/articles/s41598-024-54271-x
Perplexity CEO Aravind Srinivas says that the cost per
query in AI models has decreased by 100x in the past
2 years and quality will improve as hallucinations
decrease 10x per year
https://twitter.com/tsarnick/status/1830045611036721254
Disclaimer: Can't verify the latter claim... need to find a paper.
Mild Shock wrote:
Your new Scrum Master is here! - ChatGPT, 2023
https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years
LoL
Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA
Hi,
The blue are AfD, the green are:
German greens after losing badly https://www.dw.com/en/german-greens-suffer-major-loss-of-votes-in-eu-elections-nina-haase-reports/video-69316755
Time to start a yellow party, the first party
with an Artificial Intelligence Ethics agenda?
Bye
P.S.: Here I tried some pig-wrestling with
ChatGPT, demonstrating that Mira Murati is just
a nice face. But ChatGPT is just like a child,
spamming me with large bullet lists from
its huge lexical memory, without any deep
understanding. But it also gave me an interesting
list of potential high-caliber AI critics. Any new
Greta Thunberg of Artificial Intelligence
Ethics among them?
Mira Murati Education Background https://chatgpt.com/c/fbc385d4-de8d-4f29-b925-30fac75072d4
Mild Shock wrote:
What bullshit:
Another concern is the potential for AI to displace
jobs and exacerbate economic inequality. A recent
study by McKinsey estimates that up to 800 million
jobs could be automated by 2030. While Murati believes
that AI will ultimately create more jobs than it
displaces, she acknowledges the need for policies to
support workers through the transition, such as job
retraining programs and strengthened social safety nets.
https://expertbeacon.com/mira-murati-shaping-the-future-of-ai-ethics-and-innovation-at-openai/
Let's say there is a wine valley. All workers
are replaced by AI robots. Where do they go?
In some cultures you don't find people over
30 who are lifelong learners. What should they
learn? In another valley where they harvest
oranges, they also replaced everybody with AI
robots. And so on in the next valley, and the
next valley. We need NGOs and a Greta Thunberg
for AI ethics, not a nice face from OpenAI.
It could be a wake-up call, with this many participants
already on the committee, that the whole logic
world was asleep for many years:
Non-Classical Logics. Theory and Applications XI,
5-8 September 2024, Lodz (Poland)
https://easychair.org/cfp/NCL24
Why is Minimal Logic at the core of many things?
Because it is the logic of the Curry-Howard isomorphism
for simple types:
----------------  (Axiom)
Γ ∪ { A } ⊢ A

Γ ∪ { A } ⊢ B
----------------  (→ Introduction)
Γ ⊢ A → B

Γ ⊢ A → B    Δ ⊢ A
----------------------------  (→ Elimination)
Γ ∪ Δ ⊢ B
And funny things can happen, especially when people
hallucinate duality or think symmetry is given, for
example in newer inventions such as λμ-calculus,
but then omg ~~p => p is nevertheless not provable,
because they forgot an inference rule. LoL
Recommended reading so far:
Propositional Logics Related to Heyting’s and Johansson’s
February 2008 - Krister Segerberg https://www.researchgate.net/publication/228036664
The Logic of Church and Curry
Jonathan P. Seldin - 2009 https://www.sciencedirect.com/handbook/handbook-of-the-history-of-logic/vol/5/suppl/C
Meanwhile I am going back to tinkering with my
Prolog system, which provides an even more primitive
logic than minimal logic: pure Prolog is minimal
logic without embedded implication.
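The last remark can be made concrete. Here is a minimal Python sketch (not from the post; the clause names are hypothetical) of backward chaining over propositional Horn clauses, i.e. "pure Prolog" with facts and rules but no embedded implication in goals:

```python
# Backward chaining over propositional Horn clauses.
# A clause is (head, [body atoms]); facts have an empty body.
def prove(goal, clauses, depth=8):
    if depth == 0:          # crude guard against runaway recursion
        return False
    return any(
        all(prove(b, clauses, depth - 1) for b in body)
        for head, body in clauses
        if head == goal
    )

clauses = [
    ("b", ["a"]),   # b :- a.
    ("c", ["b"]),   # c :- b.
    ("a", []),      # a.
]
print(prove("c", clauses))  # True: a, then a->b, then b->c
print(prove("d", clauses))  # False: no clause with head d
```

Embedded implication (as in minimal logic's →-introduction) would mean proving a goal like `(a => b)` by temporarily adding `a` to the clause database; the sketch above deliberately has no such case.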
Mild Shock wrote:
Your new Scrum Master is here! - ChatGPT, 2023
https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years
LoL
Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA
Hi,
ChatGPT is rather dry, always giving me some
choice lists displaying its knowledge. The
interaction is not very "involving".
Could this be improved? There are possibly two
traits missing:
Feelings:
- Emotional states
- Temporariness
- Reaction to external circumstances
- Changeability
- Subjective sensations
Soul:
- Spirituality
- Immortality
- Innermost being
- Essence of an individual
- Deep, enduring aspects of human existence
Most likely we will see both traits added to AI.
"Emotional AI" has been more discussed already,
"Spiritual AI" seems to be rather new.
In a "Spiritual AI" Faith would probably be important,
which is probably at the upper end of credulous
reasoning. This means that such a ChatGPT could
also babble that in a Prisoner Dilemma Game,
cooperation is always the better alternative,
e.g. promoting "altruistic" motives, etc.
I also suspect that “Spiritual AI” and “Emotional
AI” could coexist. Many religions give Cosmopolitan
magazine-style life advice, and not just theological
dogmas. There will probably soon be an “Inner Engineering”
app from Sadhguru that works with AI. Sadhguru is
also sometimes satirically referred to as Chadguru:
Sat Guru Parody | Carryminati
https://www.youtube.com/watch?v=PlZqxP5MXFs
Mild Shock wrote:
It could be a wake-up call, with this many participants
already on the committee, that the whole logic
world was asleep for many years:
Non-Classical Logics. Theory and Applications XI,
5-8 September 2024, Lodz (Poland)
https://easychair.org/cfp/NCL24
Why is Minimal Logic at the core of many things?
Because it is the logic of the Curry-Howard isomorphism
for simple types:
----------------  (Axiom)
Γ ∪ { A } ⊢ A

Γ ∪ { A } ⊢ B
----------------  (→ Introduction)
Γ ⊢ A → B

Γ ⊢ A → B    Δ ⊢ A
----------------------------  (→ Elimination)
Γ ∪ Δ ⊢ B
And funny things can happen, especially when people
hallucinate duality or think symmetry is given, for
example in newer inventions such as λμ-calculus,
but then omg ~~p => p is nevertheless not provable,
because they forgot an inference rule. LoL
Recommended reading so far:
Propositional Logics Related to Heyting’s and Johansson’s
February 2008 - Krister Segerberg
https://www.researchgate.net/publication/228036664
The Logic of Church and Curry
Jonathan P. Seldin - 2009
https://www.sciencedirect.com/handbook/handbook-of-the-history-of-logic/vol/5/suppl/C
Meanwhile I am going back to tinkering with my
Prolog system, which provides an even more primitive
logic than minimal logic: pure Prolog is minimal
logic without embedded implication.
Mild Shock wrote:
Your new Scrum Master is here! - ChatGPT, 2023
https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years
LoL
Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA
I will probably never get a Turing Award or something
for what I did 23 years ago. So why is its read
count on ResearchGate suddenly going up?
Knowledge, Planning and Language,
November 2001
I guess because of this: the same topic tackled by
Microsoft's recent model GRIN. Shit. I really should
find some investor and pump up a start-up!
"Mixture-of-Experts (MoE) models scale more
effectively than dense models due to sparse
computation through expert routing, selectively
activating only a small subset of expert modules." https://arxiv.org/pdf/2409.12136
But somehow I am happy with my dolce vita as
it is now... Or maybe I am deceiving myself?
P.S.: From the GRIN paper, here you see how
expert domains modules relate with each other:
Figure 6 (b): MoE Routing distribution similarity
across MMLU 57 tasks for the control recipe.
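The quoted routing idea is simple enough to sketch. Below is a hedged Python toy (not GRIN's actual code; the logits, expert count, and `k` are hypothetical) showing the core of sparse MoE routing: pick the top-k experts per token and renormalize their gate weights, so only a small subset of expert modules is activated:

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of floats
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def moe_route(router_logits, k=2):
    """Select the top-k experts by router logit and renormalize
    their gate weights; all other experts stay inactive."""
    topk = sorted(range(len(router_logits)),
                  key=lambda i: router_logits[i], reverse=True)[:k]
    gates = softmax([router_logits[i] for i in topk])
    return list(zip(topk, gates))

# 8 experts, only 2 activated for this token (hypothetical logits):
print(moe_route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3], k=2))
```

The token's output would then be the gate-weighted sum of just those k expert modules, which is why MoE scales more cheaply than a dense model of the same parameter count.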
How it started:
How Hezbollah used pagers and couriers to counter
July 9, 2024 https://www.reuters.com/world/middle-east/pagers-drones-how-hezbollah-aims-counter-israels-high-tech-surveillance-2024-07-09/
How its going:
What we know about the Hezbollah pager explosions
Sept 17, 2024
https://www.bbc.com/news/articles/cz04m913m49o
Mild Shock wrote:
Trump: They're eating the dogs, the cats
https://www.youtube.com/watch?v=5llMaZ80ErY
https://twitter.com/search?q=trump+cat
Your new Scrum Master is here! - ChatGPT, 2023 https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years
LoL
Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA
Hi,
Happy Birthday, 75 Years of Artificial Intelligence. Most likely AI
was born around 1950. Here is what happened in that decade:
1) "Perceptron":
Rosenblatt's perceptrons were initially simulated on an
IBM 704 computer at Cornell Aeronautical Laboratory in 1957.
Mark I Perceptron machine, the first implementation of
the perceptron algorithm. It was connected to a camera
with 20×20 cadmium sulfide photocells to
make a 400-pixel image.
https://de.wikipedia.org/wiki/Perzeptron
2) "Voder"
The Bell Telephone Laboratory's Voder (abbreviation of
Voice Operating Demonstrator) was the first attempt to
electronically synthesize human speech by breaking it down
into its acoustic components. The Voder was developed from
research into compression schemes for transmission of voice
on copper wires and for voice encryption. https://www.youtube.com/watch?v=TsdOej_nC1M
3) "Mini-Chess"
Los Alamos chess was the first chess-like game played by a
computer program. This program was written at Los Alamos
Scientific Laboratory by Paul Stein and Mark Wells for the
MANIAC I computer in 1956. The computer was primarily
constructed to perform calculations in support of hydrogen bomb
research at the Laboratory, but it could also play chess! https://www.youtube.com/watch?v=aAVT4rZbcGE
Bye
Mild Shock wrote:
Your new Scrum Master is here! - ChatGPT, 2023
https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years
LoL
Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA
Hi,
So this study colleague with his Flavia, the female
mathematician, has given me something to think about.
Why don't I react exactly the same as him?
Maybe he's a different strain of Homo sapiens,
and therefore ticks differently than me? At least
I never had a fetish for female mathematicians.
A contradiction to determinism?
Ha ha, now I can feed you something again:
What is it Like to be a Bat?
the hard problem of consciousness
https://www.youtube.com/watch?v=aaZbCctlll4
Bye
Mild Shock wrote:
Hi,
Happy Birthday, 75 Years of Artificial Intelligence. Most likely AI
was born around 1950. Here is what happened in that decade:
1) "Perceptron":
Rosenblatt's perceptrons were initially simulated on an
IBM 704 computer at Cornell Aeronautical Laboratory in 1957.
Mark I Perceptron machine, the first implementation of
the perceptron algorithm. It was connected to a camera
with 20×20 cadmium sulfide photocells to
make a 400-pixel image.
https://de.wikipedia.org/wiki/Perzeptron
2) "Voder"
The Bell Telephone Laboratory's Voder (abbreviation of
Voice Operating Demonstrator) was the first attempt to
electronically synthesize human speech by breaking it down
into its acoustic components. The Voder was developed from
research into compression schemes for transmission of voice
on copper wires and for voice encryption.
https://www.youtube.com/watch?v=TsdOej_nC1M
3) "Mini-Chess"
Los Alamos chess was the first chess-like game played by a
computer program. This program was written at Los Alamos
Scientific Laboratory by Paul Stein and Mark Wells for the
MANIAC I computer in 1956. The computer was primarily
constructed to perform calculations in support of hydrogen bomb
research at the Laboratory, but it could also play chess!
https://www.youtube.com/watch?v=aAVT4rZbcGE
Bye
Mild Shock wrote:
Your new Scrum Master is here! - ChatGPT, 2023
https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years
LoL
Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA
Not only does the speed no longer double every year;
the density of transistors also no longer doubles
every year. See also:
‘Moore’s Law’s dead,’ Nvidia CEO https://www.marketwatch.com/story/moores-laws-dead-nvidia-ceo-jensen-says-in-justifying-gaming-card-price-hike-11663798618
So there is some hope in FPGAs. The article writes:
"In the latter paper, which includes a great overview of
the state of the art, Pilch and colleagues summarize
this as shifting the processing from time to space —
from using slow sequential CPU processing to hardware
complexity, using the FPGA’s configurable fabric
and inherent parallelism."
In reference to (no pay wall):
An FPGA-based real quantum computer emulator
15 December 2018 - Pilch et al. https://link.springer.com/article/10.1007/s10825-018-1287-5
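What such an emulator ultimately computes, whether in FPGA fabric or in software, is linear algebra on a state vector of complex amplitudes. A hedged Python sketch (one qubit only, nothing FPGA-specific, my own toy and not Pilch et al.'s design) of a Hadamard gate producing an equal superposition:

```python
import math

def hadamard(state):
    """Apply the Hadamard gate H to a one-qubit state [amp0, amp1]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

state = [1.0, 0.0]        # the basis state |0>
state = hadamard(state)   # now (|0> + |1>) / sqrt(2)
probs = [abs(amp) ** 2 for amp in state]
print(probs)              # [0.5, 0.5]: measuring gives 0 or 1 equally
```

The "time to space" point from the quote is that an FPGA can compute all the amplitude updates of a gate in parallel hardware, instead of looping over them sequentially on a CPU.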
Mild Shock wrote on Tuesday, June 20, 2023 at 17:20:27 UTC+2:
To hell with GPUs. Here come the FPGA qubits:
Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous
https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/
The superposition property enables a quantum computer
to be in multiple states at once.
https://www.techtarget.com/whatis/definition/qubit
Maybe their new board is even less suited for hitting
a ship with a torpedo than some machine learning?
Hi,
Happy Birthday, 75 Years of Artificial Intelligence. Most likely AI
was born around 1950. Here is what happened in that decade:
1) "Perceptron":
Rosenblatt's perceptrons were initially simulated on an
IBM 704 computer at Cornell Aeronautical Laboratory in 1957.
Mark I Perceptron machine, the first implementation of
the perceptron algorithm. It was connected to a camera
with 20×20 cadmium sulfide photocells to
make a 400-pixel image.
https://de.wikipedia.org/wiki/Perzeptron
2) "Voder"
The Bell Telephone Laboratory's Voder (abbreviation of
Voice Operating Demonstrator) was the first attempt to
electronically synthesize human speech by breaking it down
into its acoustic components. The Voder was developed from
research into compression schemes for transmission of voice
on copper wires and for voice encryption. https://www.youtube.com/watch?v=TsdOej_nC1M
3) "Mini-Chess"
Los Alamos chess was the first chess-like game played by a
computer program. This program was written at Los Alamos
Scientific Laboratory by Paul Stein and Mark Wells for the
MANIAC I computer in 1956. The computer was primarily
constructed to perform calculations in support of hydrogen bomb
research at the Laboratory, but it could also play chess! https://www.youtube.com/watch?v=aAVT4rZbcGE
Bye
Mild Shock wrote:
Your new Scrum Master is here! - ChatGPT, 2023
https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years
LoL
Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA
To hell with GPUs. Here come the FPGA qubits:
Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/
The superposition property enables a quantum computer
to be in multiple states at once. https://www.techtarget.com/whatis/definition/qubit
Maybe their new board is even less suited for hitting
a ship with a torpedo than some machine learning?
Hi,
Next issue 2:1 scheduled for January 2025 https://www.iospress.com/catalog/journals/neurosymbolic-artificial-intelligence
What is Neuro-Symbolic AI? https://allegrograph.com/what-is-neuro-symbolic-ai/
Connectionist methods combined with symbolic
methods? BTW: Not something really new, but
nevertheless, the current times might ask for
more interdisciplinary work.
The article by Ron Sun, Dual-process theories,
cognitive architectures, and hybrid neural-
symbolic models, even admits it:
"This idea immediately harkens back to the 1990s
when hybrid models first emerged [..] Besides
being termed neural-symbolic or neurosymbolic models,
they have also been variously known as connectionist
symbolic model, hybrid symbolic neural networks,
or simply hybrid models or systems.
I argued back then and am still arguing today [..]
. In particular, within the human mental architecture,
we need to take into account dual processes (e.g.,
as has been variously termed as implicit versus explicit,
unconscious versus conscious, intuition versus reason,
System 1 versus System 2, and so on, albeit sometimes
with somewhat different connotations). Incidentally,
dual-process (or two-system) theories have become quite
popular lately."
Ok, I will take a nap and let my automatic
processing do the digesting of what he wrote.
LoL
Mild Shock wrote:
To hell with GPUs. Here come the FPGA qubits:
Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous
https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/
The superposition property enables a quantum computer
to be in multiple states at once.
https://www.techtarget.com/whatis/definition/qubit
Maybe their new board is even less suited for hitting
a ship with a torpedo than some machine learning?
To hell with GPUs. Here come the FPGA qubits:
Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/
The superposition property enables a quantum computer
to be in multiple states at once. https://www.techtarget.com/whatis/definition/qubit
Maybe their new board is even less suited for hitting
a ship with a torpedo than some machine learning?
You know the USA has a problem
when Oracle enters the race:
To source the 131,072 GPU AI "supercluster,"
Larry Ellison appealed directly to Jensen Huang
during a dinner joined by Elon Musk at Nobu.
"I would describe the dinner as me and Elon
begging Jensen for GPUs. Please take our money.
We need you to take more of our money. Please!” https://twitter.com/benitoz/status/1834741314740756621
Meanwhile a contender in Video GenAI
FLUX.1 from Germany, Hurray! With Open Source:
OK. Now I'm Scared... AI Better Than Reality https://www.youtube.com/watch?v=cvMAVWDD-DU
Hi,
Now I had the idea of a new book project,
termed "WomanLogic". It would be an
introduction to logic and computational
thinking, by means of Prolog. Especially
taylored and respecting the needs of a
female brain. Just in the spirit of
extending the "WomanSphere":
"She Will Be in the Shop": Women's Sphere of
Trade in Eighteenth-Century Philadelphia and New York
Author(s): Patricia Cleary
Source: The Pennsylvania Magazine of History and Biography,
Vol. 119, No. 3 (Jul., 1995), pp. 181-202
doi: 10.2307/20092959
"WomanLogic" would enabling woman to participate
in Web 3.0, like doing crypto trading or
program AI robot traders?
Bye
Mild Shock wrote:
You know the USA has a problem
when Oracle enters the race:
To source the 131,072 GPU AI "supercluster,"
Larry Ellison appealed directly to Jensen Huang
during a dinner joined by Elon Musk at Nobu.
"I would describe the dinner as me and Elon
begging Jensen for GPUs. Please take our money.
We need you to take more of our money. Please!”
https://twitter.com/benitoz/status/1834741314740756621
Meanwhile a contender in Video GenAI
FLUX.1 from Germany, Hurray! With Open Source:
OK. Now I'm Scared... AI Better Than Reality
https://www.youtube.com/watch?v=cvMAVWDD-DU
I told you so, not worth a dime:
"I have something to share with you. After much reflection,
I have made the difficult decision to leave OpenAI." https://twitter.com/miramurati/status/1839025700009030027
Who is stepping in with the difficult task, Sam Altman himself?
The Intelligence Age
September 23, 2024
https://ia.samaltman.com/
Mild Shock wrote:
Hi,
The blue are AfD, the green are:
German greens after losing badly
https://www.dw.com/en/german-greens-suffer-major-loss-of-votes-in-eu-elections-nina-haase-reports/video-69316755
Time to start a yellow party, the first party
with an Artificial Intelligence Ethics agenda?
Bye
P.S.: Here I tried some pig-wrestling with
ChatGPT, demonstrating that Mira Murati is just
a nice face. But ChatGPT is just like a child,
spamming me with large bullet lists from
its huge lexical memory, without any deep
understanding. But it also gave me an interesting
list of potential high-caliber AI critics. Any new
Greta Thunberg of Artificial Intelligence
Ethics among them?
Mira Murati Education Background
https://chatgpt.com/c/fbc385d4-de8d-4f29-b925-30fac75072d4
Mild Shock wrote:
What bullshit:
Another concern is the potential for AI to displace
jobs and exacerbate economic inequality. A recent
study by McKinsey estimates that up to 800 million
jobs could be automated by 2030. While Murati believes
that AI will ultimately create more jobs than it
displaces, she acknowledges the need for policies to
support workers through the transition, such as job
retraining programs and strengthened social safety nets.
https://expertbeacon.com/mira-murati-shaping-the-future-of-ai-ethics-and-innovation-at-openai/
Let's say there is a wine valley. All workers
are replaced by AI robots. Where do they go?
In some cultures you don't find people over
30 who are lifelong learners. What should they
learn? In another valley where they harvest
oranges, they also replaced everybody with AI
robots. And so on in the next valley, and the
next valley. We need NGOs and a Greta Thunberg
for AI ethics, not a nice face from OpenAI.
That's a funny quote:
"Once you have a truly massive amount of information
integrated as knowledge, then the human-software
system will be superhuman, in the same sense that
mankind with writing is superhuman compared to
mankind before writing."
https://en.wikipedia.org/wiki/Douglas_Lenat#Quotes
ChatGPT is definitely unreliable.
Hi,
Let's say I have to choose between pig-wrestling with a
grammar-nazi Stack Overflow user with 100k reputation, or
interacting with ChatGPT, which puts a lot of
effort into understanding the least cue I give and isn't
locked into English only; you can also use it in
German, Turkish, etc., whatever.
Who do I use as a programming companion, Stack Overflow
or ChatGPT? I think ChatGPT is the clear winner;
it doesn't feature the abomination of a virtual
prison like Stack Overflow. Or as Cycorp, Inc. has put
it already decades ago:
Common Sense Reasoning – From Cyc to Intelligent Assistant
Doug Lenat et al. - August 2006
2 The Case for an Ambient Research Assistant
2.3 Components of a Truly Intelligent Computational Assistant
Natural Language:
An assistant system must be able to remember
questions, statements, etc. from the user, and
what its own response was, in order to understand
the kinds of language ‘shortcuts’ people normally use
in context.
https://www.researchgate.net/publication/226813714
Bye
Mild Shock wrote:
That's a funny quote:
"Once you have a truly massive amount of information
integrated as knowledge, then the human-software
system will be superhuman, in the same sense that
mankind with writing is superhuman compared to
mankind before writing."
https://en.wikipedia.org/wiki/Douglas_Lenat#Quotes
The biggest flop in logic programming
history: Scryer Prolog is dead. The poor
thing is a Prolog system without garbage
collection, not very useful. So how will
Austria get out of all this?
With 50 PhDs and 10 postdocs?
"To develop its foundations, BILAI employs a
Bilateral AI approach, effectively combining
sub-symbolic AI (neural networks and machine learning)
with symbolic AI (logic, knowledge representation,
and reasoning) in various ways."
https://www.bilateral-ai.net/jobs/
LoL
Mild Shock wrote:
You know the USA has a problem
when Oracle enters the race:
To source the 131,072 GPU AI "supercluster,"
Larry Ellison appealed directly to Jensen Huang
during a dinner joined by Elon Musk at Nobu.
"I would describe the dinner as me and Elon
begging Jensen for GPUs. Please take our money.
We need you to take more of our money. Please!”
https://twitter.com/benitoz/status/1834741314740756621
Meanwhile a contender in Video GenAI
FLUX.1 from Germany, Hurray! With Open Source:
OK. Now I'm Scared... AI Better Than Reality
https://www.youtube.com/watch?v=cvMAVWDD-DU
Mild Shock wrote:
The carbon emissions of writing and illustrating
are lower for AI than for humans
https://www.nature.com/articles/s41598-024-54271-x
Perplexity CEO Aravind Srinivas says that the cost per
query in AI models has decreased by 100x in the past
2 years and quality will improve as hallucinations
decrease 10x per year
https://twitter.com/tsarnick/status/1830045611036721254
Disclaimer: Can't verify the latter claim... need to find a paper.
Mild Shock schrieb:
Your new Scrum Master is here! - ChatGPT, 2023
https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years
LoL
Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
Prolog Class Signpost - American Style 2018
https://www.youtube.com/watch?v=CxQKltWI0NA