• Re: The road to Artificial Intelligence

    From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Mon Nov 20 05:23:52 2023
    From Newsgroup: comp.lang.prolog

    Ok, OpenAI is dead. But we need to get out of the claws
    of the computing cloud. We need the spirit of Niklaus
    Wirth, who combined computer science and
    electronics. We need to solve the problem of
    parallel silicon. We should take another look at these
    quantum computers. Can we have them on the Edge?
    Mild Shock wrote on Friday, June 23, 2023 at 11:14:15 UTC+2:
    Not only has the speed stopped doubling every year,
    the density of transistors has stopped doubling
    as well. See also:

    ‘Moore’s Law’s dead,’ Nvidia CEO https://www.marketwatch.com/story/moores-laws-dead-nvidia-ceo-jensen-says-in-justifying-gaming-card-price-hike-11663798618

    So there is some hope in FPGAs. The article writes:

    "In the latter paper, which includes a great overview of
    the state of the art, Pilch and colleagues summarize
    this as shifting the processing from time to space —
    from using slow sequential CPU processing to hardware
    complexity, using the FPGA’s configurable fabric
    and inherent parallelism."

    In reference to (no pay wall):

    An FPGA-based real quantum computer emulator
    15 December 2018 - Pilch et al. https://link.springer.com/article/10.1007/s10825-018-1287-5
    Mild Shock wrote on Tuesday, June 20, 2023 at 17:20:27 UTC+2:
    To hell with GPUs. Here come the FPGA qubits:

    Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/

    The superposition property enables a quantum computer
    to be in multiple states at once. https://www.techtarget.com/whatis/definition/qubit

    Maybe their new board is even less suited for hitting
    a ship with a torpedo than some machine learning?
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Mon Nov 20 09:01:27 2023
    From Newsgroup: comp.lang.prolog


    It seems that OpenAI is effectively imploding. Any
    actors who wanted this had an easy game, because there
    were two tribes inside OpenAI, namely the AI doomers
    and the AI futurists. Who cares? The AI doomers were
    probably the ones contributing less anyway, and will
    now end up on the street, a nice little shake-out!
    Sutskever Regret and the Weekend That Changed AI https://www.youtube.com/watch?v=dyakih3oYpk
    Mild Shock wrote on Monday, November 20, 2023 at 14:23:55 UTC+1:
    Ok, OpenAI is dead. But we need to get out of the claws
    of the computing cloud. We need the spirit of Niklaus
    Wirth, who combined computer science and
    electronics. We need to solve the problem of
    parallel silicon. We should take another look at these
    quantum computers. Can we have them on the Edge?
    Mild Shock wrote on Friday, June 23, 2023 at 11:14:15 UTC+2:
    Not only has the speed stopped doubling every year,
    the density of transistors has stopped doubling
    as well. See also:

    ‘Moore’s Law’s dead,’ Nvidia CEO https://www.marketwatch.com/story/moores-laws-dead-nvidia-ceo-jensen-says-in-justifying-gaming-card-price-hike-11663798618

    So there is some hope in FPGAs. The article writes:

    "In the latter paper, which includes a great overview of
    the state of the art, Pilch and colleagues summarize
    this as shifting the processing from time to space —
    from using slow sequential CPU processing to hardware
    complexity, using the FPGA’s configurable fabric
    and inherent parallelism."

    In reference to (no pay wall):

    An FPGA-based real quantum computer emulator
    15 December 2018 - Pilch et al. https://link.springer.com/article/10.1007/s10825-018-1287-5
    Mild Shock wrote on Tuesday, June 20, 2023 at 17:20:27 UTC+2:
    To hell with GPUs. Here come the FPGA qubits:

    Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/

    The superposition property enables a quantum computer
    to be in multiple states at once. https://www.techtarget.com/whatis/definition/qubit

    Maybe their new board is even less suited for hitting
    a ship with a torpedo than some machine learning?
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Sat Nov 25 07:23:05 2023
    From Newsgroup: comp.lang.prolog

    How my Dogelog Player garbage collector works:

    Ashes to ashes, funk to funky
    We know Major Tom's a junkie
    Strung out in heaven's high
    Hitting an all-time low
    https://www.youtube.com/watch?v=CMThz7eQ6K0

    Unfortunately no generational garbage collector yet. :-(
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Sat Nov 25 13:10:18 2023
    From Newsgroup: comp.lang.prolog

    To advance the state of the art and track performance improvements,
    some automation would be helpful. I can test WASM manually via
    https://dev.swi-prolog.org/wasm/shell . Since my recent performance
    tuning of Dogelog Player for JavaScript, I beat 32-bit WASM
    SWI-Prolog. This does not yet hold for the SAT solver test cases,
    which need GC improvements, but it does for the core test cases.
    I only tested on my Ryzen; I don't know the results for the Yoga yet:
    test      dog    swi
    nrev     1247   1223
    crypt     894   2351
    deriv     960   1415
    poly      959   1475
    sortq    1313   1825
    tictac   1587   2400
    queens   1203   2316
    query    1919   4565
    mtak     1376   1584
    perfect  1020   1369
    calc     1224   1583
    Total   13702  22106
    LoL
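    For what it's worth, the totals in the table can be cross-checked
    mechanically. A quick Python sanity check (figures transcribed from
    the table above, assumed to be milliseconds):

    ```python
    # Figures transcribed from the benchmark table above (assumed ms).
    dog = [1247, 894, 960, 959, 1313, 1587, 1203, 1919, 1376, 1020, 1224]
    swi = [1223, 2351, 1415, 1475, 1825, 2400, 2316, 4565, 1584, 1369, 1583]

    print(sum(dog))  # 13702, matching the Total row
    print(sum(swi))  # 22106, matching the Total row

    # Overall speed-up of dog over swi on these tests.
    print(round(sum(swi) / sum(dog), 2))  # 1.61
    ```

    So both Total figures check out, and the overall speed-up works
    out to roughly 1.6x.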
    Mild Shock wrote on Saturday, November 25, 2023 at 16:23:07 UTC+1:
    How my Dogelog Player garbage collector works:

    Ashes to ashes, funk to funky
    We know Major Tom's a junkie
    Strung out in heaven's high
    Hitting an all-time low
    https://www.youtube.com/watch?v=CMThz7eQ6K0

    Unfortunately no generational garbage collector yet. :-(
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Mon Nov 27 09:31:23 2023
    From Newsgroup: comp.lang.prolog


    Scryer Prolog has recently made amazing leaps in performance;
    it's now only about 2-3 times slower than SWI-Prolog!
    What prevents it from getting faster than SWI-Prolog?
    See for yourself. Here is some testing with a very recent version.
    Interestingly, tictac shows it has some problems with negation
    as failure and/or call/1. Maybe they should allocate more
    time to these areas instead of to inference-count formatting:
    $ target/release/scryer-prolog -v
    v0.9.3-50-gb8ef3678
    nrev % CPU time: 0.304s, 3_024_548 inferences
    crypt % CPU time: 0.422s, 4_392_537 inferences
    deriv % CPU time: 0.462s, 3_150_149 inferences
    poly % CPU time: 0.394s, 3_588_369 inferences
    sortq % CPU time: 0.481s, 3_654_653 inferences
    tictac % CPU time: 1.591s, 3_285_766 inferences
    queens % CPU time: 0.517s, 5_713_596 inferences
    query % CPU time: 0.909s, 8_678_936 inferences
    mtak % CPU time: 0.425s, 6_901_822 inferences
    perfect % CPU time: 0.763s, 5_321_436 inferences
    calc % CPU time: 0.626s, 6_700_379 inferences
    true.
    Compared to SWI-Prolog on the same machine:
    $ swipl --version
    SWI-Prolog version 9.1.18 for x86_64-linux
    nrev % 2,994,497 inferences, 0.067 CPU in 0.067 seconds
    crypt % 4,166,441 inferences, 0.288 CPU in 0.287 seconds
    deriv % 2,100,068 inferences, 0.139 CPU in 0.139 seconds
    poly % 2,087,479 inferences, 0.155 CPU in 0.155 seconds
    sortq % 3,624,602 inferences, 0.173 CPU in 0.173 seconds
    tictac % 1,012,615 inferences, 0.184 CPU in 0.184 seconds
    queens % 4,596,063 inferences, 0.266 CPU in 0.266 seconds
    query % 8,639,878 inferences, 0.622 CPU in 0.622 seconds
    mtak % 3,943,818 inferences, 0.162 CPU in 0.162 seconds
    perfect % 3,241,199 inferences, 0.197 CPU in 0.197 seconds
    calc % 3,060,151 inferences, 0.180 CPU in 0.180 seconds
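    Putting the two listings side by side, the slowdown ratios can be
    computed directly. A small Python cross-check (CPU seconds
    transcribed from the listings above) confirms the 2-3x overall
    figure, with tictac as the clear outlier:

    ```python
    # CPU times (seconds) transcribed from the two listings above.
    scryer = {'nrev': 0.304, 'crypt': 0.422, 'deriv': 0.462, 'poly': 0.394,
              'sortq': 0.481, 'tictac': 1.591, 'queens': 0.517, 'query': 0.909,
              'mtak': 0.425, 'perfect': 0.763, 'calc': 0.626}
    swipl = {'nrev': 0.067, 'crypt': 0.288, 'deriv': 0.139, 'poly': 0.155,
             'sortq': 0.173, 'tictac': 0.184, 'queens': 0.266, 'query': 0.622,
             'mtak': 0.162, 'perfect': 0.197, 'calc': 0.180}

    # Per-test slowdown of Scryer relative to SWI-Prolog.
    ratios = {t: scryer[t] / swipl[t] for t in scryer}
    overall = sum(scryer.values()) / sum(swipl.values())

    print(round(overall, 1))            # 2.8, within the quoted 2-3x
    print(max(ratios, key=ratios.get))  # tictac, the clear outlier
    print(round(ratios['tictac'], 1))   # 8.6
    ```

    So overall Scryer is about 2.8x slower here, but tictac alone is
    about 8.6x slower, which is what points at negation as failure
    and/or call/1.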
    Mild Shock wrote on Saturday, November 25, 2023 at 22:10:20 UTC+1:
    To advance the state of the art and track performance improvements,
    some automation would be helpful. I can test WASM manually via
    https://dev.swi-prolog.org/wasm/shell . Since my recent performance
    tuning of Dogelog Player for JavaScript, I beat 32-bit WASM
    SWI-Prolog. This does not yet hold for the SAT solver test cases,
    which need GC improvements, but it does for the core test cases.
    I only tested on my Ryzen; I don't know the results for the Yoga yet:

    test      dog    swi
    nrev     1247   1223
    crypt     894   2351
    deriv     960   1415
    poly      959   1475
    sortq    1313   1825
    tictac   1587   2400
    queens   1203   2316
    query    1919   4565
    mtak     1376   1584
    perfect  1020   1369
    calc     1224   1583
    Total   13702  22106

    LoL
    Mild Shock wrote on Saturday, November 25, 2023 at 16:23:07 UTC+1:
    How my Dogelog Player garbage collector works:

    Ashes to ashes, funk to funky
    We know Major Tom's a junkie
    Strung out in heaven's high
    Hitting an all-time low
    https://www.youtube.com/watch?v=CMThz7eQ6K0

    Unfortunately no generational garbage collector yet. :-(
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Tue Nov 28 21:47:13 2023
    From Newsgroup: comp.lang.prolog

    Testing scryer-prolog doesn't make any sense. It's not a
    Prolog system; it has memory leaks somewhere.
    Just try my SAT solver test suite:
    ?- between(1,100,_), suite_quiet, fail; true.
    VSZ and RSS memory go up and up, with no end,
    clogging my machine. This shouldn't happen: a
    failure-driven loop is not supposed to eat all memory.
    That's just a fraud. How do you set some limits?
    Mild Shock wrote on Monday, November 27, 2023 at 18:31:25 UTC+1:
    Scryer Prolog has recently made amazing leaps in performance;
    it's now only about 2-3 times slower than SWI-Prolog!
    What prevents it from getting faster than SWI-Prolog?
    See for yourself. Here is some testing with a very recent version.
    Interestingly, tictac shows it has some problems with negation
    as failure and/or call/1. Maybe they should allocate more
    time to these areas instead of to inference-count formatting:

    $ target/release/scryer-prolog -v
    v0.9.3-50-gb8ef3678

    nrev % CPU time: 0.304s, 3_024_548 inferences
    crypt % CPU time: 0.422s, 4_392_537 inferences
    deriv % CPU time: 0.462s, 3_150_149 inferences
    poly % CPU time: 0.394s, 3_588_369 inferences
    sortq % CPU time: 0.481s, 3_654_653 inferences
    tictac % CPU time: 1.591s, 3_285_766 inferences
    queens % CPU time: 0.517s, 5_713_596 inferences
    query % CPU time: 0.909s, 8_678_936 inferences
    mtak % CPU time: 0.425s, 6_901_822 inferences
    perfect % CPU time: 0.763s, 5_321_436 inferences
    calc % CPU time: 0.626s, 6_700_379 inferences
    true.

    Compared to SWI-Prolog on the same machine:

    $ swipl --version
    SWI-Prolog version 9.1.18 for x86_64-linux

    nrev % 2,994,497 inferences, 0.067 CPU in 0.067 seconds
    crypt % 4,166,441 inferences, 0.288 CPU in 0.287 seconds
    deriv % 2,100,068 inferences, 0.139 CPU in 0.139 seconds
    poly % 2,087,479 inferences, 0.155 CPU in 0.155 seconds
    sortq % 3,624,602 inferences, 0.173 CPU in 0.173 seconds
    tictac % 1,012,615 inferences, 0.184 CPU in 0.184 seconds
    queens % 4,596,063 inferences, 0.266 CPU in 0.266 seconds
    query % 8,639,878 inferences, 0.622 CPU in 0.622 seconds
    mtak % 3,943,818 inferences, 0.162 CPU in 0.162 seconds
    perfect % 3,241,199 inferences, 0.197 CPU in 0.197 seconds
    calc % 3,060,151 inferences, 0.180 CPU in 0.180 seconds
    Mild Shock wrote on Saturday, November 25, 2023 at 22:10:20 UTC+1:
    To advance the state of the art and track performance improvements,
    some automation would be helpful. I can test WASM manually via
    https://dev.swi-prolog.org/wasm/shell . Since my recent performance
    tuning of Dogelog Player for JavaScript, I beat 32-bit WASM
    SWI-Prolog. This does not yet hold for the SAT solver test cases,
    which need GC improvements, but it does for the core test cases.
    I only tested on my Ryzen; I don't know the results for the Yoga yet:

    test      dog    swi
    nrev     1247   1223
    crypt     894   2351
    deriv     960   1415
    poly      959   1475
    sortq    1313   1825
    tictac   1587   2400
    queens   1203   2316
    query    1919   4565
    mtak     1376   1584
    perfect  1020   1369
    calc     1224   1583
    Total   13702  22106

    LoL
    Mild Shock wrote on Saturday, November 25, 2023 at 16:23:07 UTC+1:
    How my Dogelog Player garbage collector works:

    Ashes to ashes, funk to funky
    We know Major Tom's a junkie
    Strung out in heaven's high
    Hitting an all-time low
    https://www.youtube.com/watch?v=CMThz7eQ6K0

    Unfortunately no generational garbage collector yet. :-(
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Tue Nov 28 21:58:26 2023
    From Newsgroup: comp.lang.prolog


    With limits I get this result:
    $ ulimit -m 2000000
    $ ulimit -v 2000000
    $ target/release/scryer-prolog
    ?- ['program2.p'].
    true.
    ?- between(1,100,_), suite_quiet, fail; true.
    Segmentation fault
    Not OK! It should keep running to the end.
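    The same kind of guard can also be set from inside a process.
    A rough Python sketch of the idea (the resource module is
    POSIX-specific, and the 1 GB cap and 32 MB step here are arbitrary
    choices): capping the address space turns runaway allocation into
    a catchable error instead of a segmentation fault.

    ```python
    import resource

    # Cap the address space at ~1 GB, analogous to `ulimit -v` above.
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (1 << 30, hard))

    chunks = []
    try:
        for _ in range(64):  # attempts up to 2 GB, must trip the cap
            chunks.append(bytearray(32 * 1024 * 1024))  # 32 MB per step
    except MemoryError:
        chunks.clear()  # free what we grabbed
        print("allocation denied at the limit")  # graceful, no segfault
    ```

    A well-behaved runtime should react like this bytearray loop:
    refuse the allocation and keep running, rather than crash.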
    Mild Shock wrote on Wednesday, November 29, 2023 at 06:47:15 UTC+1:
    Testing scryer-prolog doesn't make any sense. It's not a
    Prolog system; it has memory leaks somewhere.
    Just try my SAT solver test suite:

    ?- between(1,100,_), suite_quiet, fail; true.

    VSZ and RSS memory go up and up, with no end,
    clogging my machine. This shouldn't happen: a
    failure-driven loop is not supposed to eat all memory.
    That's just a fraud. How do you set some limits?
    Mild Shock wrote on Monday, November 27, 2023 at 18:31:25 UTC+1:
    Scryer Prolog has recently made amazing leaps in performance;
    it's now only about 2-3 times slower than SWI-Prolog!
    What prevents it from getting faster than SWI-Prolog?
    See for yourself. Here is some testing with a very recent version.
    Interestingly, tictac shows it has some problems with negation
    as failure and/or call/1. Maybe they should allocate more
    time to these areas instead of to inference-count formatting:

    $ target/release/scryer-prolog -v
    v0.9.3-50-gb8ef3678

    nrev % CPU time: 0.304s, 3_024_548 inferences
    crypt % CPU time: 0.422s, 4_392_537 inferences
    deriv % CPU time: 0.462s, 3_150_149 inferences
    poly % CPU time: 0.394s, 3_588_369 inferences
    sortq % CPU time: 0.481s, 3_654_653 inferences
    tictac % CPU time: 1.591s, 3_285_766 inferences
    queens % CPU time: 0.517s, 5_713_596 inferences
    query % CPU time: 0.909s, 8_678_936 inferences
    mtak % CPU time: 0.425s, 6_901_822 inferences
    perfect % CPU time: 0.763s, 5_321_436 inferences
    calc % CPU time: 0.626s, 6_700_379 inferences
    true.

    Compared to SWI-Prolog on the same machine:

    $ swipl --version
    SWI-Prolog version 9.1.18 for x86_64-linux

    nrev % 2,994,497 inferences, 0.067 CPU in 0.067 seconds
    crypt % 4,166,441 inferences, 0.288 CPU in 0.287 seconds
    deriv % 2,100,068 inferences, 0.139 CPU in 0.139 seconds
    poly % 2,087,479 inferences, 0.155 CPU in 0.155 seconds
    sortq % 3,624,602 inferences, 0.173 CPU in 0.173 seconds
    tictac % 1,012,615 inferences, 0.184 CPU in 0.184 seconds
    queens % 4,596,063 inferences, 0.266 CPU in 0.266 seconds
    query % 8,639,878 inferences, 0.622 CPU in 0.622 seconds
    mtak % 3,943,818 inferences, 0.162 CPU in 0.162 seconds
    perfect % 3,241,199 inferences, 0.197 CPU in 0.197 seconds
    calc % 3,060,151 inferences, 0.180 CPU in 0.180 seconds
    Mild Shock wrote on Saturday, November 25, 2023 at 22:10:20 UTC+1:
    To advance the state of the art and track performance improvements,
    some automation would be helpful. I can test WASM manually via
    https://dev.swi-prolog.org/wasm/shell . Since my recent performance
    tuning of Dogelog Player for JavaScript, I beat 32-bit WASM
    SWI-Prolog. This does not yet hold for the SAT solver test cases,
    which need GC improvements, but it does for the core test cases.
    I only tested on my Ryzen; I don't know the results for the Yoga yet:

    test      dog    swi
    nrev     1247   1223
    crypt     894   2351
    deriv     960   1415
    poly      959   1475
    sortq    1313   1825
    tictac   1587   2400
    queens   1203   2316
    query    1919   4565
    mtak     1376   1584
    perfect  1020   1369
    calc     1224   1583
    Total   13702  22106

    LoL
    Mild Shock wrote on Saturday, November 25, 2023 at 16:23:07 UTC+1:
    How my Dogelog Player garbage collector works:

    Ashes to ashes, funk to funky
    We know Major Tom's a junkie
    Strung out in heaven's high
    Hitting an all-time low
    https://www.youtube.com/watch?v=CMThz7eQ6K0

    Unfortunately no generational garbage collector yet. :-(
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Tue Nov 28 23:33:51 2023
    From Newsgroup: comp.lang.prolog

    How do you show segmentation faults in a bar chart diagram?
    Here is a test with Trealla Prolog, the same limit test; it
    completes the job, doesn't clog up memory indefinitely, and
    works just as expected:
    $ ./tpl -v
    Trealla Prolog (c) Infradig 2020-2023, v2.30.48-21-g8dfd
    $ ./tpl
    ?- ['../ciao/program2.p'].
    true.
    ?- between(1,100,_), suite_quiet, fail; true.
    true.
    ?-
    You have to wait a while, but you can use the command ps aux to
    see that it doesn't eat up memory. And I ran the above test with
    the same very large ulimit -m / -v settings, without hitting a
    segmentation fault.
    Mild Shock wrote on Wednesday, November 29, 2023 at 06:58:28 UTC+1:
    With limits I get this result:

    $ target/release/scryer-prolog -v
    v0.9.3-57-ge8d8b09e
    $ ulimit -m 2000000
    $ ulimit -v 2000000
    $ target/release/scryer-prolog
    ?- ['program2.p'].
    true.
    ?- between(1,100,_), suite_quiet, fail; true.
    Segmentation fault

    Not OK! It should keep running to the end.
    Mild Shock wrote on Wednesday, November 29, 2023 at 06:47:15 UTC+1:
    Testing scryer-prolog doesn't make any sense. It's not a
    Prolog system; it has memory leaks somewhere.
    Just try my SAT solver test suite:

    ?- between(1,100,_), suite_quiet, fail; true.

    VSZ and RSS memory go up and up, with no end,
    clogging my machine. This shouldn't happen: a
    failure-driven loop is not supposed to eat all memory.
    That's just a fraud. How do you set some limits?
    Mild Shock wrote on Monday, November 27, 2023 at 18:31:25 UTC+1:
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@bursejan@gmail.com to comp.lang.prolog on Thu Nov 30 15:43:37 2023
    From Newsgroup: comp.lang.prolog

    A new player has entered the chat (Amazon Q):
    #NLProc researcher @ AWS AI (@AmazonScience). Part-time
    machine learner & linguistics enthusiast. Previously: PhD
    @stanfordnlp, JD AI. He/him. Opinions my own.
    It is really humbling to be part of the team that
    launched Amazon Q, a flagship #AWS product that
    helps users interact with their knowledge corpus using LLMs.
    It's quite special to me personally, since it's only been
    3 years since I finished my PhD thesis on this exact topic. https://twitter.com/qi2peng2
    What does Gartner say about business chatbots?
    Opening Keynote: The Next Era − We Shape AI
    AI Shapes Us l Gartner IT Symposium/Xpo https://www.youtube.com/watch?v=0s7Jw9xkSYQ
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Feb 27 17:03:59 2024
    From Newsgroup: comp.lang.prolog


    Terence Tao, "Machine Assisted Proof" https://www.youtube.com/watch?v=AayZuuDDKP0

    Mostowski Collapse wrote:
    Don't buy your Pearls in Hong Kong. They are all fake.

    So what do you prefer, this Haskell monster: https://www.cs.nott.ac.uk/~pszgmh/countdown.pdf
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Mar 16 14:06:47 2024
    From Newsgroup: comp.lang.prolog

    I haven't done all my homework yet.
    For example, just fiddling around with CLP(FD), I get:

    ?- maplist(in, Vs, [1\/3..4, 1..2\/4, 1..2\/4,
    1..3, 1..3, 1..6]), all_distinct(Vs).
    false.

    Does Scryer Prolog's CLP(Z) have some explanation facility for that?
    What exactly is the conflict that makes it fail?
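    For what it's worth, the conflict can be pinpointed by hand: the
    first five domains all lie inside {1,2,3,4}, so five variables
    compete for only four values, a Hall set, which is exactly the
    kind of conflict all_distinct/1 propagates. A brute-force Python
    cross-check (domains transcribed from the query above):

    ```python
    from itertools import product

    # Domains transcribed from the query above:
    # 1\/3..4, 1..2\/4, 1..2\/4, 1..3, 1..3, 1..6
    domains = [{1, 3, 4}, {1, 2, 4}, {1, 2, 4},
               {1, 2, 3}, {1, 2, 3}, {1, 2, 3, 4, 5, 6}]

    # Exhaustive search: no assignment makes all six values distinct.
    solutions = [vs for vs in product(*domains) if len(set(vs)) == 6]
    print(len(solutions))  # 0

    # The culprit is a Hall set: five domains inside a four-value set.
    print(sum(d <= {1, 2, 3, 4} for d in domains))  # 5
    ```

    So the failure is correct, by pigeonhole on {1,2,3,4}.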

    Mild Shock wrote:

    Terence Tao, "Machine Assisted Proof" https://www.youtube.com/watch?v=AayZuuDDKP0

    Mostowski Collapse wrote:
    Don't buy your Pearls in Hong Kong. They are all fake.

    So what do you prefer, this Haskell monster:
    https://www.cs.nott.ac.uk/~pszgmh/countdown.pdf

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Mar 16 14:13:50 2024
    From Newsgroup: comp.lang.prolog


    Or a more striking example, Peter Norvig's impossible
    Sudoku, which he claims took his solver 1439 seconds
    to show that it is unsolvable:

    /* Peter Norvig */
    problem(9, [[_,_,_,_,_,5,_,8,_],
    [_,_,_,6,_,1,_,4,3],
    [_,_,_,_,_,_,_,_,_],
    [_,1,_,5,_,_,_,_,_],
    [_,_,_,1,_,6,_,_,_],
    [3,_,_,_,_,_,_,_,5],
    [5,3,_,_,_,_,_,6,1],
    [_,_,_,_,_,_,_,_,4],
    [_,_,_,_,_,_,_,_,_]]).

    https://norvig.com/sudoku.html

    whereas SWI-Prolog with all_distinct/1 does
    it in a blink, even without labeling:

    ?- problem(9, M), time(sudoku(M)).
    % 316,054 inferences, 0.016 CPU in 0.020 seconds
    (80% CPU, 20227456 Lips)
    false.

    Pretty cool!
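    As an aside, Norvig's grid is a 17-clue instance, 17 being, as far
    as is known, the minimum number of givens for a uniquely solvable
    Sudoku, which makes its unsolvability the more striking. A quick
    Python count (grid transcribed from the problem(9, ...) fact above,
    0 marking a blank):

    ```python
    # Norvig's grid transcribed from problem(9, ...) above; 0 = blank.
    grid = [
        [0, 0, 0, 0, 0, 5, 0, 8, 0],
        [0, 0, 0, 6, 0, 1, 0, 4, 3],
        [0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 1, 0, 5, 0, 0, 0, 0, 0],
        [0, 0, 0, 1, 0, 6, 0, 0, 0],
        [3, 0, 0, 0, 0, 0, 0, 0, 5],
        [5, 3, 0, 0, 0, 0, 0, 6, 1],
        [0, 0, 0, 0, 0, 0, 0, 0, 4],
        [0, 0, 0, 0, 0, 0, 0, 0, 0],
    ]

    # Count the givens (non-blank cells).
    clues = sum(v != 0 for row in grid for v in row)
    print(clues)  # 17
    ```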

    Mild Shock wrote:
    I haven't done all my homework yet.
    For example, just fiddling around with CLP(FD), I get:

    ?- maplist(in, Vs, [1\/3..4, 1..2\/4, 1..2\/4,
              1..3, 1..3, 1..6]), all_distinct(Vs).
    false.

    Does Scryer Prolog's CLP(Z) have some explanation facility for that?
    What exactly is the conflict that makes it fail?

    Mild Shock wrote:

    Terence Tao, "Machine Assisted Proof"
    https://www.youtube.com/watch?v=AayZuuDDKP0

    Mostowski Collapse wrote:
    Don't buy your Pearls in Hong Kong. They are all fake.

    So what do you prefer, this Haskell monster:
    https://www.cs.nott.ac.uk/~pszgmh/countdown.pdf


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Mar 24 01:32:42 2024
    From Newsgroup: comp.lang.prolog

    Now I have the feeling there are no difficult 9x9
    Sudokus for the computer. At least not for computers
    running SWI-Prolog and using CLP(FD) with the global
    constraint all_distinct/1.

    I was fishing among the 17-clue Sudokus, and the
    hardest I could find so far was this one:

    /* Gordon Royle #3668 */
    problem(11,[[_,_,_,_,_,_,_,_,_],
    [_,_,_,_,_,_,_,1,2],
    [_,_,3,_,_,4,_,_,_],
    [_,_,_,_,_,_,_,_,3],
    [_,1,_,2,5,_,_,_,_],
    [6,_,_,_,_,_,7,_,_],
    [_,_,_,_,2,_,_,_,_],
    [_,_,7,_,_,_,4,_,_],
    [5,_,_,1,6,_,_,8,_]]).

    But SWI-Prolog still does it in around 3 seconds.
    SWI-Prolog does other 17-clue Sudokus in less than 100ms.

    Are there any 17-clue Sudokus that take more time?
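    A quick Python cross-check that the grid really is a 17-clue
    instance (transcribed from the problem(11, ...) fact above, 0
    marking a blank):

    ```python
    # Gordon Royle #3668, transcribed from problem(11, ...); 0 = blank.
    grid = [
        [0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 1, 2],
        [0, 0, 3, 0, 0, 4, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 3],
        [0, 1, 0, 2, 5, 0, 0, 0, 0],
        [6, 0, 0, 0, 0, 0, 7, 0, 0],
        [0, 0, 0, 0, 2, 0, 0, 0, 0],
        [0, 0, 7, 0, 0, 0, 4, 0, 0],
        [5, 0, 0, 1, 6, 0, 0, 8, 0],
    ]

    # Count the givens (non-blank cells).
    clues = sum(v != 0 for row in grid for v in row)
    print(clues)  # 17
    ```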

    Mild Shock wrote:

    Or a more striking example, Peter Norvig's impossible
    Sudoku, which he claims took his solver 1439 seconds
    to show that it is unsolvable:

    /* Peter Norvig */
    problem(9, [[_,_,_,_,_,5,_,8,_],
                [_,_,_,6,_,1,_,4,3],
                [_,_,_,_,_,_,_,_,_],
                [_,1,_,5,_,_,_,_,_],
                [_,_,_,1,_,6,_,_,_],
                [3,_,_,_,_,_,_,_,5],
                [5,3,_,_,_,_,_,6,1],
                [_,_,_,_,_,_,_,_,4],
                [_,_,_,_,_,_,_,_,_]]).

    https://norvig.com/sudoku.html

    whereas SWI-Prolog with all_distinct/1 does
    it in a blink, even without labeling:

    ?- problem(9, M), time(sudoku(M)).
    % 316,054 inferences, 0.016 CPU in 0.020 seconds
     (80% CPU, 20227456 Lips)
    false.

    Pretty cool!

    Mild Shock wrote:
    I haven't done all my homework yet.
    For example, just fiddling around with CLP(FD), I get:

    ?- maplist(in, Vs, [1\/3..4, 1..2\/4, 1..2\/4,
               1..3, 1..3, 1..6]), all_distinct(Vs).
    false.

    Does Scryer Prolog's CLP(Z) have some explanation facility for that?
    What exactly is the conflict that makes it fail?

    Mild Shock wrote:

    Terence Tao, "Machine Assisted Proof"
    https://www.youtube.com/watch?v=AayZuuDDKP0

    Mostowski Collapse wrote:
    Don't buy your Pearls in Hong Kong. They are all fake.

    So what do you prefer, this Haskell monster:
    https://www.cs.nott.ac.uk/~pszgmh/countdown.pdf



    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Mar 24 18:26:20 2024
    From Newsgroup: comp.lang.prolog

    Are 3 seconds even fast enough to generate Sudokus
    with unique solutions? How many trials would be
    needed? The uniqueness problem seems to admit no
    useful reduction; already the question whether a
    partial Latin square has a unique solution is
    NP-complete:

    Finding Another Solution
    T. Yato & T. Seta - 2002
    https://academic.timwylie.com/17CSCI4341/sudoku.pdf

    Mild Shock wrote:
    Now I have the feeling there are no difficult 9x9
    Sudokus for the computer. At least not for computers
    running SWI-Prolog and using CLP(FD) with the global
    constraint all_distinct/1.

    I was fishing among the 17-clue Sudokus, and the
    hardest I could find so far was this one:

    /* Gordon Royle #3668 */
    problem(11,[[_,_,_,_,_,_,_,_,_],
                [_,_,_,_,_,_,_,1,2],
                [_,_,3,_,_,4,_,_,_],
                [_,_,_,_,_,_,_,_,3],
                [_,1,_,2,5,_,_,_,_],
                [6,_,_,_,_,_,7,_,_],
                [_,_,_,_,2,_,_,_,_],
                [_,_,7,_,_,_,4,_,_],
                [5,_,_,1,6,_,_,8,_]]).

    But SWI-Prolog still does it in around 3 seconds.
    SWI-Prolog does other 17-clue Sudokus in less than 100ms.

    Are there any 17-clue Sudokus that take more time?
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mikko@mikko.levanto@iki.fi to comp.lang.prolog on Mon Mar 25 09:59:19 2024
    From Newsgroup: comp.lang.prolog

    On 2024-03-24 17:26:20 +0000, Mild Shock said:

    https://academic.timwylie.com/17CSCI4341/sudoku.pdf

    For an example of what is reasonable to expect, see these pages:
    https://mlevanto.github.io/solver.html
    https://mlevanto.github.io/SudokuV.html
    https://mlevanto.github.io/Latina.html

    The first one is a solver. It also determines whether the
    solution is unique.

    The other two are problem generators. They are a bit slow but
    still usable.

    Buttons at the bottom are for saving the problem or solution
    in different file formats.
    --
    Mikko

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Jul 4 23:09:03 2024
    From Newsgroup: comp.lang.prolog


    It could be a wake-up call that there are already
    this many participants on the committee, after the
    whole logic world was asleep for many years:

    Non-Classical Logics. Theory and Applications XI,
    5-8 September 2024, Lodz (Poland)
    https://easychair.org/cfp/NCL24

    Why is Minimal Logic at the core of many things?
    Because it is the logic of the Curry-Howard
    isomorphism for simple types:

    ----------------
    Γ ∪ { A } ⊢ A

    Γ ∪ { A } ⊢ B
    ----------------
    Γ ⊢ A → B

    Γ ⊢ A → B           Δ ⊢ A
    ----------------------------
    Γ ∪ Δ ⊢ B

    And funny things can happen, especially when people
    hallucinate duality or think symmetry is given, for
    example in newer inventions such as λμ-calculus,

    but then omg ~~p => p is nevertheless not provable,
    because they forgot an inference rule. LoL

    Recommended reading so far:

    Propositional Logics Related to Heyting’s and Johansson’s
    February 2008 - Krister Segerberg https://www.researchgate.net/publication/228036664

    The Logic of Church and Curry
    Jonathan P. Seldin - 2009 https://www.sciencedirect.com/handbook/handbook-of-the-history-of-logic/vol/5/suppl/C

    Meanwhile I am going back to tinkering with my
    Prolog system, which provides an even more primitive
    logic than minimal logic; pure Prolog is minimal
    logic without embedded implication.

    Mild Shock wrote:

    Your new Scrum Master is here! - ChatGPT, 2023 https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years

    LoL

    Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jul 5 03:52:59 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    A few years ago I was impressed by
    the output of either Negri or Plato,
    or the two together.

    Now they are just an annoyance; all
    they show is that they are neither talented
    nor sufficiently trained.

    Just have a look at:

    Terminating intuitionistic calculus
    Giulio Fellin and Sara Negri
    https://philpapers.org/rec/FELATI

    Besides the all-too-obvious creative idea and motive
    behind it, it is most likely completely useless
    nonsense. Already the presentation in the
    paper shows utter incompetence:

    Γ, A → B ⊢ A           Γ, A → B, B ⊢ Δ
    ----------------------------------------
               Γ, A → B ⊢ Δ

    Everybody in the business knows that the
    looping resulting from copying A → B
    is a fact. But it can be reduced, since the
    copy in the right premise is not needed.

    Γ, A → B ⊢ A           Γ, B ⊢ Δ
    --------------------------------
    Γ, A → B ⊢ Δ

    The above variant is enough, just like Dragalin
    presented the calculus. I really wish people
    would completely understand these masterpieces
    before they even touch multi-consequent calculi:

    Mathematical Intuitionism: Introduction to Proof Theory
    Albert Grigorevich Dragalin - 1988
    https://www.amazon.com/dp/0821845209

    Contraction-Free Sequent Calculi for Intuitionistic Logic
    Roy Dyckhoff - 1992
    http://www.cs.cmu.edu/~fp//courses/atp/cmuonly/D92.pdf

    What's the deeper semantic (sic!) explanation of the
    two calculi GHPC and GCPC? I have a Kripke-semantics
    explanation in my notes; I haven't released it yet.

    Have Fun!

    Mild Shock wrote:

    It could be a wake-up call that there are already
    this many participants on the committee, after the
    whole logic world was asleep for many years:

    Non-Classical Logics. Theory and Applications XI,
    5-8 September 2024, Lodz (Poland)
    https://easychair.org/cfp/NCL24

    Why is Minimal Logic at the core of many things?
    Because it is the logic of the Curry-Howard
    isomorphism for simple types:

    ----------------
    Γ ∪ { A } ⊢ A

    Γ ∪ { A } ⊢ B
    ----------------
    Γ ⊢ A → B

    Γ ⊢ A → B           Δ ⊢ A
    ----------------------------
    Γ ∪ Δ ⊢ B

    And funny things can happen, especially when people
    hallucinate duality or think symmetry is given, for
    example in newer inventions such as λμ-calculus,

    but then omg ~~p => p is nevertheless not provable,
    because they forgot an inference rule. LoL

    Recommended reading so far:

    Propositional Logics Related to Heyting’s and Johansson’s
    February 2008 - Krister Segerberg https://www.researchgate.net/publication/228036664

    The Logic of Church and Curry
    Jonathan P. Seldin - 2009 https://www.sciencedirect.com/handbook/handbook-of-the-history-of-logic/vol/5/suppl/C


    Meanwhile I am going back to tinkering with my
    Prolog system, which provides an even more primitive
    logic than minimal logic; pure Prolog is minimal
    logic without embedded implication.

    Mild Shock wrote:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jul 5 03:54:52 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    I am not hallucinating that Negri is nonsense:

    This calculus does not terminate (e.g. on Peirce’s
    formula). Negri [42] shows how to add a loop-checking
    mechanism to ensure termination. The effect on complexity
    isn’t yet clear; but the loop-checking is expensive.

    Intuitionistic Decision Procedures since Gentzen
    The Jägerfest - 2013
    https://apt13.unibe.ch/slides/Dyckhoff.pdf

    Bye

    Mild Shock wrote:
    Hi,

    A few years ago I was impressed by
    the output of either Negri or Plato,
    or the two together.

    Now they are just an annoyance; all
    they show is that they are neither talented
    nor sufficiently trained.

    Just have a look at:

    Terminating intuitionistic calculus
    Giulio Fellin and Sara Negri
    https://philpapers.org/rec/FELATI

    Besides the all-too-obvious creative idea and motive
    behind it, it is most likely completely useless
    nonsense. Already the presentation in the
    paper shows utter incompetence:

    Γ, A → B ⊢ A           Γ, A → B, B ⊢ Δ
    ----------------------------------------
               Γ, A → B ⊢ Δ

    Everybody in the business knows that the
    looping, resulting from the copying of A → B,
    is a fact. But it can be reduced, since the
    copying in the right premise is not needed.

    Γ, A → B ⊢ A           Γ, B ⊢ Δ
    --------------------------------
            Γ, A → B ⊢ Δ

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jul 5 06:38:00 2024
    From Newsgroup: comp.lang.prolog

    The meteoric rise of Curry-Howard isomorphism
    and minimal logic, possibly because proof assistants
    such as Lean, Agda, etc… all use it, is quite ironic,
    in the light of this statement:

    Because of the vagueness of the notions of “constructive
    proof”, “constructive operation”, the BHK-interpretation
    has never become a versatile technical tool in the way
    classical semantics has. Perhaps it is correct to say
    that by most people the BHK-interpretation has never been
    seen as an intuitionistic counterpart to classical semantics.
    https://festschriften.illc.uva.nl/j50/contribs/troelstra/troelstra.pdf

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jul 7 23:13:18 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    There are possibly issues of interdisciplinary
    work. For example, Sorensen & Urzyczyn, in their
    Lectures on the Curry-Howard Isomorphism, say that
    the logic LP has no name in the literature.

    On the other hand, Segerberg's paper shows that
    the logic LP, in his labeling JP, which stems from
    accepting Peirce's Law, is equivalent to a logic
    accepting Curry's refutation rule,

    i.e. the logic JE with:

         Γ, A => B |- A
        -----------------
             Γ |- A

    But the logic JE also implies that LEM was added!

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jul 7 23:17:39 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    This has only become talk of the town recently,
    under the heading of Constructive S4 Modal Logic,
    or CS4. It somehow demonstrates that prejudice
    against computer science, e.g. that the lambda
    calculus is too abstract, is possibly unfounded.
    The challenge would be to draw connections and
    foster interdisciplinary dialog. The next challenge
    would be to distill a simple didactical extract
    of it and draw road maps!

    Categorical and Kripke Semantics for Constructive S4 Modal Logic
    Alechina et al. - 2003
    https://www.cs.bham.ac.uk/~exr/papers/csl01.pdf

    What they call fallible worlds, Segerberg
    1968 calls abnormal worlds.

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jul 12 11:27:02 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Now I had an extremely resilient correspondent who
    wants to do proof extraction, but at the same
    time refuses to learn the Curry-Howard isomorphism.

    But it's so easy. I was just watching:

    Hyperon Session with Dr. Ben Goertzel
    https://www.youtube.com/watch?v=5Uy3j4WCiXQ

    At t=1853 he mentions C. S. Peirce's thirdness, which
    you can use to explain the Curry-Howard isomorphism:


    1 *\        Γ = Context
      | \
      |  * 3    t = λ-Expression
      | /
    2 */        α = Type


    The above is a trikonic visualization of the judgement
    Γ |- t : α, applying the art of making three-fold divisions.

    But I guess C. S. Peirce is not read in France, since
    it requires English. Or maybe there is a French translation?

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jul 12 11:37:41 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Actually, thirdness is not only the art of making
    three-fold divisions. Usually one aims at finding
    a 3 that is the relation between 1 and 2, so that
    we have this relation satisfied:

       3(1, 2)

    Of course we can take the stance that |-
    does that already. Only |- is highly ambiguous:
    if you see Γ |- α you don't know what the last
    inference rule applied was. But for proof extraction
    you want to know exactly that.
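    Concretely, for proof extraction a prover should return not just
    Γ |- α but a tree recording which rule was applied at each step.
    A hypothetical Python sketch for the implicational fragment of
    minimal logic, with Dyckhoff-style contraction-free rules, returning
    such a tree (or None when the sequent is underivable):

```python
# Proof search that records the rule applied at every step, so a
# proof term can be extracted from the result afterwards.
# Formulas: atoms are strings, implications are ('->', a, b).

def search(gamma, goal):
    gamma = frozenset(gamma)
    if isinstance(goal, str) and goal in gamma:
        return ('Ax', goal)                       # axiom
    if isinstance(goal, tuple):                   # R->
        _, a, b = goal
        sub = search(gamma | {a}, b)
        return ('R->', goal, sub) if sub else None
    for f in gamma:                               # L-> (contraction-free)
        if not isinstance(f, tuple):
            continue
        _, a, b = f
        rest = gamma - {f}
        if isinstance(a, str) and a in gamma:     # atomic antecedent
            sub = search(rest | {b}, goal)
            if sub:
                return ('L->atom', f, sub)
        elif isinstance(a, tuple):                # implication antecedent
            _, c, d = a
            left = search(rest | {('->', d, b)}, a)
            right = search(rest | {b}, goal) if left else None
            if left and right:
                return ('L->imp', f, left, right)
    return None

proof = search(set(), ('->', 'p', 'p'))
print(proof[0])  # 'R->': the last rule applied was ->-introduction
```

    The root of the returned tree names exactly the last inference rule,
    which is the information the plain |- judgement throws away.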

    Bye

    P.S.: And Peirce isn't wrong when he says thirdness
    is enough; just take set theory, which can do all
    of mathematics. It is based on this thirdness only:

       x ∈ y

    The set membership relation. But set membership is as
    ugly as |-; it also doesn't say why an element belongs
    to a set.

    LoL

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jul 12 12:20:07 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    In 2023 Dr. Ben Goertzel praised the return to
    normal; today, in 2024, everybody has mysterious
    eye infections and a new wave is reported:

    Flirt-Varianten: Sommer-Coronawelle nimmt Fahrt auf
    https://www.mdr.de/wissen/medizin-gesundheit/corona-fallzahlen-sommerwelle-100.html

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Jul 12 12:36:22 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Forget face masks; it might be the
    beginning of a new experience for the world!

    Coronaviruses are oculotropic
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7241406/

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Jul 13 08:21:11 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Generally speaking it is not “elements” if
    it's only classical logic:

    Plato (p. 83 of Elements of Logical
    Reasoning) … excellent book

    The ancient Greeks had a well-developed sense
    of constructivity in their geometry; for example,
    they distinguished between compass-and-straightedge
    constructions and neusis constructions.

    Constructive logic somehow appeals to this sense,
    but it's not the only way to do non-classical logics.
    In a broader sense, in mathematical logic it is just
    the same discipline as the axiomatic method, already
    found in Euclid's geometry, but applied to logic itself.
    Now it is evident, from the correspondence I had,
    that there are people employed in philosophy
    departments saying that A <=> B is void if we have:

    CL |- A
    CL |- B
    -------------------
    CL |- A <=> B

    They never played the axiomatic method and replaced
    classical logic (CL) by something else. Let's make an
    example from Euclid's geometry: Thales' theorem and
    Pythagoras' theorem are both true, so they are equivalent?

    Then why even bother to write a booklet like Euclid's Elements?

    if you need to find the center of a circle
    https://youtube.com/shorts/iQeFCnSo41g

    Bye

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Jul 13 10:33:07 2024
    From Newsgroup: comp.lang.prolog

    The sad news is, the book is only
    worth some firewood.

    Plato (p. 83 of Elements of Logical Reasoning)

    Interestingly the book uses non-classical
    logic, since it says:

    Sequent calculus offers a good possibility for
    exhaustive proof search in propositional logic:
    We can check through all the possibilities for
    making a derivation. If none of them worked,
    i.e., if each had at least one branch in which
    no rule applied and no initial sequent was reached,
    the given sequent is underivable. The
    symbol |/- is used for underivability.

    And then it has unprovable:

    c. |/- A v ~A

    d. |/- ~~A => A

    But most likely the book has a blind spot, some
    serious errors, or totally unfounded claims, since,
    for example, with such a calculus the unprovability
    of Peirce’s Law cannot be shown so easily.

    Exhaustive proof search will usually not terminate.
    There are some terminating calculi, like Dyckhoff’s
    LJT, but a naive calculus based on Gentzen’s take
    will not terminate.

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Jul 13 10:58:44 2024
    From Newsgroup: comp.lang.prolog

    The error is here, taken from his Table 4.1:

    A => B, Γ |- A           B, Γ |- C
    ---------------------------------- L=>
    A => B, Γ |- C

    When he hallucinates duplication, also
    known as contraction:

    The premisses are simpler than the conclusion
    in all the rules except possibly in the left
    premiss of rule L=>. That is the only source
    of non-termination. Rules other than L=> can
    produce duplication, if an active formula had
    another occurrence in the antecedent. This
    source of duplication comes to an end.

    But in backward search the looping is not
    caused because A => B or some such would be
    duplicated. None of the L=> rule branches shows
    some formula twice. For the calculi of Gentzen

    it is already known that propositional proof
    search can be implemented contraction-free;
    this is not what causes looping. What causes the
    looping is simply that the same sequent might

    appear again. Other rules than L=> are also not
    to blame at all. Just make an example with A atomic,
    and you get an infinite descent:

    P => B, Γ |- P      B, Γ |- P
    --------------------------------- (L=>)
    ....
    P => B, Γ |- P      B, Γ |- P
    --------------------------------- (L=>)
    P => B, Γ |- P

    Mild Shock schrieb:
    [...]

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jul 14 00:05:05 2024
    From Newsgroup: comp.lang.prolog

    Rather read the original, von Plato
    takes his wisdom from:

    The single-succedent sequent calculus of proof
    search of Table 4.1 is a relatively recent invention:
    Building on the work of Albert Dragalin (1978) on the
    invertibility of logical rules in sequent calculi,
    Anne Troelstra worked out the details of the proof
    theory of this `contraction-free' calculus in the
    book Basic Proof Theory (2000).

    But the book by Troelstra (1939-2019) and
    Schwichtenberg (1949-) doesn't contain a theorem
    that minimal logic is decidable based on some “loop
    checking”, as indicated by von Plato on page 78.

    The problem situation is similar to Prolog SLD
    resolution, where the S stands for selection function.
    Since the (L=>) inference rule is not invertible, it
    involves a selection function σ,

    that picks the active formula:

    Γ, A => B |- A      Γ, B |- C          A selection function σ did pick
    ------------------------------- (L=>)  A => B from the left hand side
                Γ, A => B |- C

    One selection function might loop, another
    selection function might not loop. In Jens Otten's
    ileanSeP (ileansep.p), all selections are tried
    through backtracking over the predicate select/3
    and iterative deepening.

    To show unprovability you have to show
    looping for all possible selection functions, which
    is obviously less trivial than the “root-first proof
    search” humbug from von Plato's vegan products

    store that offers “naturally growing trees”.
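
    Here is a hedged sketch of what "backtracking over select/3 plus
    iterative deepening" can look like; this is my own reconstruction,
    not Otten's actual ileanSeP code, and the depth accounting and
    predicate names are assumptions.

    ```prolog
    :- op(600, xfx, =>).

    % prove(Gamma, C, D): backward search with depth bound D.
    prove(Gamma, C, _) :- member(C, Gamma).
    prove(Gamma, (A=>B), D) :-
        D > 0, D1 is D-1,
        prove([A|Gamma], B, D1).
    prove(Gamma, C, D) :-
        D > 0, D1 is D-1,
        select((A=>B), Gamma, Rest),     % the selection function σ, by backtracking
        prove([(A=>B)|Rest], A, D1),     % left premiss keeps A=>B: not invertible
        prove([B|Rest], C, D1).

    % Iterative deepening: finds a proof at some finite depth if one exists.
    iprove(Gamma, C) :- between(1, 1000, D), prove(Gamma, C, D), !.

    % ?- iprove([], (a => (b => a))).   % succeeds
    ```

    On an unprovable goal such as Peirce's Law the deepening never settles,
    which is precisely why this scheme can find proofs but cannot by itself
    certify unprovability.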
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Jul 14 00:14:14 2024
    From Newsgroup: comp.lang.prolog

    Even Dyckhoff's calculus LJT has a non-invertible
    (L=>=>) rule and is still bugged by a selection
    function dependency. Because of this complication
    minimal logic calculi have traditionally been shown

    decidable not by means of proof theory but
    rather by means of model theory. You can look up
    modal companions and then draw upon some finite
    model upper bound. The seminal paper is:

    Propositional Dynamic Logic of Regular Programs
    Fischer & Ladner - 1979 https://www.sciencedirect.com/science/article/pii/0022000079900461

    It contains the modal logic S4 as a special case:

    The modal systems K, T, S4, S5 (cf. Ladner [16]) are
    recognizable subsystems of propositional dynamic logic.

    K allows only the modality A,
    T allows only the modality A u λ,
    S4 allows only the modality A*,
    S5 allows only the modality (A u A-)*.
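
    As a concrete illustration of the modal companion idea (my own sketch,
    not from the Fischer & Ladner paper): the Gödel translation embeds the
    implication fragment into S4, after which finite-model results for S4
    can be drawn upon. The box/1 notation is an assumption.

    ```prolog
    :- op(600, xfx, =>).

    % Gödel modal companion translation for the implication fragment:
    % t(p) = box(p) for atoms, t(A => B) = box(t(A) => t(B)).
    t(P, box(P)) :- atom(P).
    t((A => B), box((TA => TB))) :- t(A, TA), t(B, TB).

    % ?- t((p => p), X).   % X = box(box(p) => box(p))
    ```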

    Mild Shock schrieb:
    [...]

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Jul 22 12:55:40 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    That's quite a disease; even Wadler makes the
    error when he automatically associates the
    Curry-Howard isomorphism with evaluation strategies.

    Often proof normalization cannot go as far
    as evaluation strategies can go. A simple example
    is the Y combinator. You can try yourself,

    I am adding the “I” combinator which we have
    already shown to be derivable, and then a
    new “Y” combinator:

    /* I axiom */
    typeof(i, (A -> B)) :-
        unify_with_occurs_check(A, B).

    /* Y axiom */
    typeof(y, ((A -> B) -> C)) :-
        unify_with_occurs_check(A, B),
        unify_with_occurs_check(A, C).

    Let's see what happens: can we prove anything?

    ?- between(1,6,N), search(typeof(X, a), N, 0).
    N = 3,
    X = y*i .

    Yes, it collapses trivially; it doesn't even
    need a complicated Curry Paradox.
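
    For anyone who wants to reproduce the collapse, here is a hedged
    reconstruction of the missing harness; the application rule for * and
    the depth-bounded driver are my guesses at what the post's search/3
    does, not the original code.

    ```prolog
    % Axioms as in the post, plus a depth argument (ignored for axioms).
    typeof(i, (A -> B), _) :-
        unify_with_occurs_check(A, B).
    typeof(y, ((A -> B) -> C), _) :-
        unify_with_occurs_check(A, B),
        unify_with_occurs_check(A, C).

    % Assumed application rule: X*Y : B if X : A->B and Y : A.
    typeof(X*Y, B, N) :-
        N > 0, N1 is N - 1,
        typeof(X, (A -> B), N1),
        typeof(Y, A, N1).

    % Iterative deepening driver mimicking the post's query.
    collapse(X) :- between(1, 6, N), typeof(X, a, N), !.

    % ?- collapse(X).   % X = y*i, the trivial collapse from the post
    ```

    The exact step count differs from the post's search/3 (my bound counts
    only applications), but the inhabitant y*i of the bare atom a comes out
    the same.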

    Bye

    Mild Shock schrieb:

    Could be a wake-up call, with this many participants
    already on the committee, that the whole logic
    world was asleep for many years:

    Non-Classical Logics. Theory and Applications XI,
    5-8 September 2024, Lodz (Poland)
    https://easychair.org/cfp/NCL24

    Why is Minimal Logic at the core of many things?
    Because it is the logic of the Curry-Howard isomorphism
    for simple types:

    ----------------
    Γ ∪ { A } ⊢ A

    Γ ∪ { A } ⊢ B
    ----------------
    Γ ⊢ A → B

    Γ ⊢ A → B           Δ ⊢ A
    ----------------------------
    Γ ∪ Δ ⊢ B

    And funny things can happen, especially when people
    hallucinate duality or think symmetry is given, for
    example in newer inventions such as λμ-calculus,

    but then omg ~~p => p is nevertheless not provable,
    because they forgot an inference rule. LoL

    Recommended reading so far:

    Propositional Logics Related to Heyting’s and Johansson’s
    February 2008 - Krister Segerberg https://www.researchgate.net/publication/228036664

    The Logic of Church and Curry
    Jonathan P. Seldin - 2009 https://www.sciencedirect.com/handbook/handbook-of-the-history-of-logic/vol/5/suppl/C


    Meanwhile I am going back to my tinkering with my
    Prolog system, which even provides a more primitive
    logic than minimal logic: pure Prolog is minimal

    logic without embedded implication.

    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Jul 31 22:15:30 2024
    From Newsgroup: comp.lang.prolog


    I am really surprised that we have reached
    a point in history, where philosophy and
    artificial intelligence go separate paths,
    where philosophy stigmatizes means of

    abstractions on the computer and where even
    education in computer science itself is at
    a loss with the rapid advancement of type theory
    from computation to deduction. This wasn’t always

    the case according to this essay (*):

    It is interesting to note that almost all the major subfields of AI
    mirror subfields of philosophy: The AI analogue of philosophy of
    language is computational
    linguistics; what philosophers call “practical
    reasoning” is called “planning and acting” in
    AI; ontology (indeed, much of metaphysics
    and epistemology) corresponds to knowledge
    representation in AI; and automated reasoning
    is one of the AI analogues of logic.
    – C.2.1.1 Intentions, practitions, and the ought-to-do.

    maybe we should find a way back to cooperation:

    Should AI workers study philosophy? Yes,
    unless they are content to reinvent the wheel
    every few days. When AI reinvents a wheel, it is
    typically square, or at best hexagonal, and
    can only make a few hundred revolutions before
    it stops. Philosopher’s wheels, on the other hand,
    are perfect circles, require in principle no
    lubrication, and can go in at least two directions
    at once. Clearly a meeting of minds is in order.
    – C.4 Summary

    See also:

    (*)

    Prolegomena to a Study of Hector-Neri Castañeda’s
    Influence on Artificial Intelligence: A Survey
    and Personal Reflections - William J. Rapaport - January 1998 https://www.researchgate.net/publication/266277981

    Mild Shock schrieb:

    [...]

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Aug 3 22:50:14 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Yes, maybe we are just before a kind
    of 2nd Cognitive Turn. The first Cognitive
    Turn is characterized as:

    The cognitive revolution was an intellectual
    movement that began in the 1950s as an
    interdisciplinary study of the mind and its
    processes, from which emerged a new
    field known as cognitive science.
    https://en.wikipedia.org/wiki/Cognitive_revolution

    The current mainstream belief is that
    Chat Bots and the progress in AI is mainly
    based on "Machine Learning", whereas

    most of the progress is more based on
    "Deep Learning". But I am also sceptical
    about "Deep Learning" in the end a frequentist

    is again lurking. In the worst case the
    no Bayension Brain shock will come with a
    Technological singularity in that the current

    short inferencing of LLMs is enhanced by
    some long inferencing, like here:

    A week ago, I posted that I was cooking a
    logical reasoning benchmark as a side project.
    Now it's finally ready! Introducing 🦓 𝙕𝙚𝙗𝙧𝙖𝙇𝙤𝙜𝙞𝙘,
    designed for evaluating LLMs with Logic Puzzles. https://x.com/billyuchenlin/status/1814254565128335705

    making it possible for LLMs not merely to excel
    in such puzzles, but to advance to more
    elaborate scientific models, that can somehow

    overcome fallacies such as:
    - Kochen-Specker paradox, some fallacies
    caused by averaging?
    - Gluts and Gaps in Bayesian Reasoning,
    some fallacies by consistency assumptions?
    - What else?

    So on quiet paws AI might become the new overlord
    of science which we will happily depend on.

    Jeff Barnett schrieb:
    You are surprised; I am saddened. Not only have
    we lost contact with the primary studies of knowledge
    and reasoning, we have also lost contact with the
    studies of methods and motivation. Psychology
    was the basic home room of Alan Newell and many
    other AI all stars. What is now called AI, I think
    incorrectly, is just ways of exercising large amounts
    of very cheap computer power to calculate approximations
    to correlations and other statistical approximations.

    The problem with all of this in my mind, is that we
    learn nothing about the capturing of knowledge, what
    it is, or how it is used. Both logic and heuristic reasoning
    are needed and we certainly believe that intelligence is
    not measured by its ability to discover "truth" or its
    infallibly consistent results. Newton's thought process
    was pure genius but known to produce fallacious results
    when you know what Einstein knew at a later time.

    I remember reading Ted Shortliffe's dissertation about
    MYCIN (an early AI medical consultant for diagnosing
    blood-borne infectious diseases) where I learned about
    one use of the term "staph disease", or just "staph" for short.
    In patient care areas there always seems to be an in-
    house infection that changes over time. It changes
    because sick patients brought into the area contribute
    whatever is making them sick in the first place. In the
    second place there is rapid mutations driven by all sorts
    of factors present in hospital-like environments. The
    result is that the local staph is varying, literally, minute
    by minute. In a day's time, the samples you took are
    no longer valid, i.e., their day-old cultures may be
    meaningless. The underlying mathematical problem is
    that probability theory doesn't really have the tools to
    make predictions when the basic probabilities are
    changing faster than observations can be
    turned into inferences.

    Why do I mention the problems of unstable probabilities
    here? Because new AI uses fancy ideas of correlation
    to simulate probabilistic inference, e.g., Bayesian inference.
    Since actual probabilities may not exist in any meaningful
    ways, the simulations are often based on air.

    A hallmark of excellent human reasoning is the ability to
    explain how we arrived at our conclusions. We are also
    able to repair our inner models when we are in error if
    we can understand why. The abilities to explain and
    repair are fundamental to excellence of thought processes.
    By the way, I'm not claiming that all humans or I have these
    reflective abilities. Those who do are few and far between.
    However, any AI that doesn't have some of these
    capabilities isn't very interesting.

    For more on reasons why logic and truth are only part of human
    ability to reasonably reason, see

    https://www.yahoo.com/news/opinion-want-convince-conspiracy-theory-100258277.html

    -- Jeff Barnett
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Aug 3 23:29:12 2024
    From Newsgroup: comp.lang.prolog


    My impression Cognitive Science was never
    Bayesian Brain, so I guess I made a joke.

    The time scale, its start in the 1950s, and that
    it is still a relatively unknown subject,

    would explain:
    - why my father or mother never tried to
    educate me towards cognitive science.
    It could be that they are totally blank
    in this respect?

    - why my grandfather or grandmothers never
    tried to educate me towards cognitive
    science. Ditto, it could be that they are totally
    blank in this respect?

    - it could be that there are rare cases where
    some philosophers had already a glimpse of
    cognitive science. But when I open for
    example this booklet:

    System der Logik
    Friedrich Ueberweg
    Bonn - 1868
    https://philpapers.org/rec/UEBSDL

    One can feel the dry swimming that is reported
    for several millennia. What happened in the
    1950s was the possibility of computer modelling.

    Mild Shock schrieb:
    Hi,

    Yes, maybe we are just before a kind
    of 2nd Cognitive Turn. The first Cognitive
    Turn is characterized as:

    The cognitive revolution was an intellectual
    movement that began in the 1950s as an
    interdisciplinary study of the mind and its
    processes, from which emerged a new
    field known as cognitive science.
    https://en.wikipedia.org/wiki/Cognitive_revolution

    The current mainstream believe is that
    Chat Bots and the progress in AI is mainly
    based on "Machine Learning", whereas

    most of the progress is more based on
    "Deep Learning". But I am also sceptical
    about "Deep Learning" in the end a frequentist

    is again lurking. In the worst case the
    no Bayension Brain shock will come with a
    Technological singularity in that the current

    short inferencing of LLMs is enhanced by
    some long inferencing, like here:

    A week ago, I posted that I was cooking a
    logical reasoning benchmark as a side project.
    Now it's finally ready! Introducing 🦓 𝙕𝙚𝙗𝙧𝙖𝙇𝙤𝙜𝙞𝙘,
    designed for evaluating LLMs with Logic Puzzles. https://x.com/billyuchenlin/status/1814254565128335705

    making it possible not to excell by LLMs
    in such puzzles, but to advance to more
    elaborate scientific models, that can somehow

    overcome fallacies such as:
    - Kochen Specker Paradox, some fallacies
      caused by averaging?
    - Gluts and Gaps in Bayesian Reasoning,
      some fallacies by consistency assumptions?
    - What else?

    So on quiet paws AI might become the new overlord
    of science which we will happily depend on.

    Jeff Barnett schrieb:
    You are surprised; I am saddened. Not only have
    we lost contact with the primary studies of knowledge
    and reasoning, we have also lost contact with the
    studies of methods and motivation. Psychology
    was the basic home room of Alan Newell and many
    other AI all stars. What is now called AI, I think
    incorrectly, is just ways of exercising large amounts
    of very cheap computer power to calculate approximates
    to correlations and other statistical approximations.

    The problem with all of this in my mind, is that we
    learn nothing about the capturing of knowledge, what
    it is, or how it is used. Both logic and heuristic reasoning
    are needed and we certainly believe that intelligence is
    not measured by its ability to discover "truth" or its
    infallibly consistent results. Newton's thought process
    was pure genius but known to produce fallacious results
    when you know what Einstein knew at a later time.

    I remember reading Ted Shortliffe's dissertation about
    MYCIN (an early AI medical consultant for diagnosing
    blood-borne infectious diseases) where I learned about
    one use of the term "staff disease", or just "staff" for short.
    In patient care areas there always seems to be an in-
    house infection that changes over time. It changes
    because sick patients brought into the area contribute
    whatever is making them sick in the first place. In the
    second place there is rapid mutations driven by all sorts
    of factors present in hospital-like environments. The
    result is that the local staff is varying, literally, minute
    by minute. In a days time, the samples you took are
    no longer valid, i.e., their day old cultures may be
    meaningless. The underlying mathematical problem is
    that probability theory doesn't really have the tools to
    make predictions when the basic probabilities are
    changing faster than observations can be
    turned into inferences.

    Why do I mention the problems of unstable probabilities
    here? Because new AI uses fancy ideas of correlation
    to simulate probabilistic inference, e.g., Bayesian inference.
    Since actual probabilities may not exist in any meaningful
    ways, the simulations are often based on air.

    A hallmark of excellent human reasoning is the ability to
    explain how we arrived at our conclusions. We are also
    able to repair our inner models when we are in error if
    we can understand why. The abilities to explain and
    repair are fundamental to excellence of thought processes.
    By the way, I'm not claiming that all humans or I have theses
    reflective abilities. Those who do are few and far between.
    However, any AI that doesn't have some of these
    capabilities isn't very interesting.

    For more on reasons why logic and truth are only part of human
    ability to reasonably reason, see

    https://www.yahoo.com/news/opinion-want-convince-conspiracy-theory-100258277.html


       -- Jeff Barnett

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sat Aug 3 23:55:58 2024
    From Newsgroup: comp.lang.prolog


    BTW: Friedrich Ueberweg is quite good
    and funny to browse, he reports relatively
    unfiltered what we would nowadays call

    forms of "rational behaviour", so its a little
    pot purry, except for his sections where he
    explains some schemas, like the Aristotelan

    figures, which are more pure logic of the form.
    And bang, you get a guy talking pages
    and pages about pure form:

    "Pure" logic, ontology, and phenomenology
    David Woodruff Smith https://www.cairn.info/revue-internationale-de-philosophie-2003-2-page-21.htm

    But the above is from a species of philosophy
    that is endangered now. Its predators are
    abstractions on the computer like lambda

    calculus and the Curry-Howard isomorphism. The
    revue has become an irrelevant cabaret, only
    dead people would be interested in, like

    my father, grandfather etc...

    Mild Shock schrieb:

    [...]

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Aug 4 00:14:45 2024
    From Newsgroup: comp.lang.prolog


    David Woodruff Smith writes:
    And "cognitive science" has recently pursued
    the relation of intentional mental activities
    to neural processes in the brain.

    I call this bullshit. He confuses cognitive
    science with some sort of Neuroscience and/or
    connectionist approaches.

    Some broader working definition
    of cognitive science is for example:

    Cognitive science is an interdisciplinary
    science that deals with the processing of
    information in the context of perception,
    thinking and decision-making processes,
    both in humans and in animals or machines.

    You see how far behind philosophy is.
    David Woodruff Smith published the
    paper in 2003? I don't think there are any

    excuses for his nonsense definition.
    Especially if one writes about pure form.
    This is so idiotic.

    Mild Shock schrieb:

    [...]


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Aug 4 00:57:18 2024
    From Newsgroup: comp.lang.prolog


    Well we all know about this rule:

    - Never ask a woman about her weight

    - Never ask a woman about her age

    There is a similar rule for philosophers:

    - Never ask a philosopher what is cognitive science

    - Never ask a philosopher what is formula-as-types

    Explanation: They like to be the champions of
    pure form like in this paper below, so they
    don’t like other disciplines dealing with pure
    form or even having pure form on the computer.

    "Pure” logic, ontology, and phenomenology
    David Woodruff Smith - Revue internationale de philosophie 2003/2 https://www.cairn.info/revue-internationale-de-philosophie-2003-2-page-21.htm

    Mild Shock schrieb:

    [...]

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Aug 4 22:10:57 2024
    From Newsgroup: comp.lang.prolog

    There are more and more papers of this sort:

    Reliable Reasoning Beyond Natural Language
    To address this, we propose a neurosymbolic
    approach that prompts LLMs to extract and encode
    all relevant information from a problem statement as
    logical code statements, and then use a logic programming
    language (Prolog) to conduct the iterative computations of
    explicit deductive reasoning.
    Reliable Reasoning Beyond Natural Language - arXiv:2407.11373 https://arxiv.org/abs/2407.11373

    The future of Prolog is bright?

    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023 https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years

    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Aug 8 16:56:28 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Let's say one milestone in cognitive science
    is the concept of "bounded rationality".
    It seems LLMs have some traits that are also
    found in humans. For example, the anchoring effect
    is a psychological phenomenon in which an
    individual's judgements or decisions
    are influenced by a reference point or "anchor",
    which can be completely irrelevant. Like for example
    when discussing the Curry-Howard isomorphism with
    a real-world philosopher, one that might
    not know the Curry-Howard isomorphism but

    https://en.wikipedia.org/wiki/Anchoring_effect

    nevertheless be tempted to hallucinate some nonsense.
    One highly cited paper in this respect is Tversky &
    Kahneman 1974. R.I.P. Daniel Kahneman,
    March 27, 2024. The paper is still cited today:

    Artificial Intelligence and Cognitive Biases: A Viewpoint https://www.cairn.info/revue-journal-of-innovation-economics-2024-2-page-223.htm

    Maybe using deeper and/or more careful reasoning,
    possibly backed up by a Prolog engine, could have
    a positive effect? It is very difficult even for a
    Prolog engine, since there is a trade-off
    between producing no answer at all if the software
    agent is too careful, and producing a wealth
    of nonsense otherwise.

    Bye

    Mild Shock schrieb:

    Well we all know about this rule:

    - Never ask a woman about her weight

    - Never ask a woman about her age

    There is a similar rule for philosophers:

    - Never ask a philosopher what is cognitive science

    - Never ask a philosopher what is formula-as-types

    Explanation: They like to be the champions of
    pure form like in this paper below, so they
    don’t like other disciplines dealing with pure
    form or even having pure form on the computer.

    "Pure” logic, ontology, and phenomenology
    David Woodruff Smith - Revue internationale de philosophie 2003/2

    https://www.cairn.info/revue-internationale-de-philosophie-2003-2-page-21.htm



    Mild Shock schrieb:
    There are more and more papers of this sort:

    Reliable Reasoning Beyond Natural Language
    To address this, we propose a neurosymbolic
    approach that prompts LLMs to extract and encode
    all relevant information from a problem statement as
    logical code statements, and then use a logic programming
    language (Prolog) to conduct the iterative computations of
    explicit deductive reasoning.
    [2407.11373] Reliable Reasoning Beyond Natural Language

    The future of Prolog is bright?

    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Aug 8 17:18:56 2024
    From Newsgroup: comp.lang.prolog

    But I wouldn't give up so quickly; even
    classical expert system theory of the '80s
    had it that an expert system needs somewhere
    a knowledge acquisition component. But the
    idea there was that the system would simulate
    the expert's dialog with the advice taker

    Von Datenbanken zu Expertsystemen https://www.orellfuessli.ch/shop/home/artikeldetails/A1051258432

    and gather further information to complete
    the advice. Still, this could be inspiring:
    don't stop at not knowing the Curry-Howard isomorphism,
    go on, learn it, never stop! Just like here:

    Never Gonna Give You Up
    https://www.youtube.com/watch?v=dQw4w9WgXcQ

    Mild Shock schrieb:
    Hi,

    Let's say one milestone in cognitive science
    is the concept of "bounded rationality".
    It seems LLMs have some traits that are also
    found in humans. For example, the anchoring effect
    is a psychological phenomenon in which an
    individual's judgements or decisions
    are influenced by a reference point or "anchor",
    which can be completely irrelevant. Like for example
    when discussing the Curry-Howard isomorphism with
    a real-world philosopher, one that might
    not know the Curry-Howard isomorphism but

    https://en.wikipedia.org/wiki/Anchoring_effect

    nevertheless be tempted to hallucinate some nonsense.
    One highly cited paper in this respect is Tversky &
    Kahneman 1974. R.I.P. Daniel Kahneman,
    March 27, 2024. The paper is still cited today:

    Artificial Intelligence and Cognitive Biases: A Viewpoint https://www.cairn.info/revue-journal-of-innovation-economics-2024-2-page-223.htm


    Maybe using deeper and/or more careful reasoning,
    possibly backed up by a Prolog engine, could have
    a positive effect? It is very difficult even for a
    Prolog engine, since there is a trade-off
    between producing no answer at all if the software
    agent is too careful, and producing a wealth
    of nonsense otherwise.

    Bye

    Mild Shock schrieb:

    Well we all know about this rule:

    - Never ask a woman about her weight

    - Never ask a woman about her age

    There is a similar rule for philosophers:

    - Never ask a philosopher what is cognitive science

    - Never ask a philosopher what is formula-as-types

    Explanation: They like to be the champions of
    pure form like in this paper below, so they
    don’t like other disciplines dealing with pure
    form or even having pure form on the computer.

    "Pure” logic, ontology, and phenomenology
    David Woodruff Smith - Revue internationale de philosophie 2003/2

    https://www.cairn.info/revue-internationale-de-philosophie-2003-2-page-21.htm



    Mild Shock schrieb:
    There are more and more papers of this sort:

    Reliable Reasoning Beyond Natural Language
    To address this, we propose a neurosymbolic
    approach that prompts LLMs to extract and encode
    all relevant information from a problem statement as
    logical code statements, and then use a logic programming
    language (Prolog) to conduct the iterative computations of
    explicit deductive reasoning.
    [2407.11373] Reliable Reasoning Beyond Natural Language

    The future of Prolog is bright?

    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA



    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Aug 28 17:28:12 2024
    From Newsgroup: comp.lang.prolog


    Now I wonder whether LLMs should be an
    inch more informed by results from
    neuroendocrinology research. I remember Marvin
    Minsky publishing his ‘The Society of Mind’:

    Introduction to ‘The Society of Mind’ https://www.youtube.com/watch?v=-pb3z2w9gDg

    But this made me think about multi-agent
    systems. Now with LLMs, what about a new
    connectionist and deep learning approach,
    plus Prolog for the pre frontal context (PFC)?

    But who can write a blueprint? Now there
    is this amazing guy called Robert M. Sapolsky,
    who recently published Determined: A Science
    of Life without Free Will, and who
    calls consciousness just a hiccup. His turtles-
    all-the-way-down model is a tour de force
    toward an unsettling conclusion: we may not
    grasp the precise marriage of nature and nurture
    that creates the physics and chemistry at the
    base of human behavior, but that doesn't mean it
    doesn't exist. But the pre frontal context (PFC)
    seems to be still quite brittle, not extremely
    performant, and quite energy hungry.
    So Prolog might excel?

    Determined: A Science of Life Without Free Will https://www.amazon.de/dp/0525560998

    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023 https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years

    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Aug 28 17:31:22 2024
    From Newsgroup: comp.lang.prolog

    Corr.: Typo context ~~> cortex

    Mild Shock schrieb:

    Now I wonder whether LLMs should be an
    inch more informed by results from
    neuroendocrinology research. I remember Marvin
    Minsky publishing his ‘The Society of Mind’:

    Introduction to ‘The Society of Mind’ https://www.youtube.com/watch?v=-pb3z2w9gDg

    But this made me think about multi-agent
    systems. Now with LLMs, what about a new
    connectionist and deep learning approach,
    plus Prolog for the pre frontal context (PFC)?

    But who can write a blueprint? Now there
    is this amazing guy called Robert M. Sapolsky,
    who recently published Determined: A Science
    of Life without Free Will, and who
    calls consciousness just a hiccup. His turtles-
    all-the-way-down model is a tour de force
    toward an unsettling conclusion: we may not
    grasp the precise marriage of nature and nurture
    that creates the physics and chemistry at the
    base of human behavior, but that doesn't mean it
    doesn't exist. But the pre frontal context (PFC)
    seems to be still quite brittle, not extremely
    performant, and quite energy hungry.
    So Prolog might excel?

    Determined: A Science of Life Without Free Will https://www.amazon.de/dp/0525560998

    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Sep 1 16:43:50 2024
    From Newsgroup: comp.lang.prolog


    If your New Zealand retirement plans turn to rubble
    because you have to play AI ambassador:

    Dass Google einmal so gross wird, hätte ich nicht erwartet - 09.06.2023 https://www.srf.ch/news/wirtschaft/google-pionier-urs-hoelzle-dass-google-einmal-so-gross-wird-haette-ich-nicht-erwartet

    Generative AI - Podcast with Urs Hölzle - 11.07.2024 https://www.youtube.com/watch?v=iBrrebQNUR4

    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023 https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years

    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Sep 1 22:38:02 2024
    From Newsgroup: comp.lang.prolog


    The carbon emissions of writing and illustrating
    are lower for AI than for humans https://www.nature.com/articles/s41598-024-54271-x

    Perplexity CEO Aravind Srinivas says that the cost per
    query in AI models has decreased by 100x in the past
    2 years and quality will improve as hallucinations
    decrease 10x per year
    https://twitter.com/tsarnick/status/1830045611036721254

    Disclaimer: Can't verify the latter claim... need to find a paper.


    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023 https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years

    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Sep 1 23:17:22 2024
    From Newsgroup: comp.lang.prolog


    Hold your breath: the bartender at your next
    vacation destination will most likely be an AI
    robot. Let's say 5 years from now. Right?

    Michael Sheen The Robot Bartender
    https://www.youtube.com/watch?v=tV4Fxy5IyBM

    Mild Shock schrieb:

    The carbon emissions of writing and illustrating
    are lower for AI than for humans https://www.nature.com/articles/s41598-024-54271-x

    Perplexity CEO Aravind Srinivas says that the cost per
    query in AI models has decreased by 100x in the past
    2 years and quality will improve as hallucinations
    decrease 10x per year
    https://twitter.com/tsarnick/status/1830045611036721254

    Disclaimer: Can't verify the latter claim... need to find a paper.


    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Sep 3 12:01:51 2024
    From Newsgroup: comp.lang.prolog

    What bullshit:

    Another concern is the potential for AI to displace
    jobs and exacerbate economic inequality. A recent
    study by McKinsey estimates that up to 800 million
    jobs could be automated by 2030. While Murati believes
    that AI will ultimately create more jobs than it
    displaces, she acknowledges the need for policies to
    support workers through the transition, such as job
    retraining programs and strengthened social safety nets. https://expertbeacon.com/mira-murati-shaping-the-future-of-ai-ethics-and-innovation-at-openai/

    Let's say there is a wine valley. All workers
    are replaced by AI robots. Where do they go?
    In some cultures you don't find people over
    30 who are lifelong learners. What should they
    learn, when in the next valley, where they harvest
    oranges, everybody has also been replaced by AI
    robots? And so on in the next valley, and the
    next valley. We need NGOs and a Greta Thunberg
    for AI ethics, not a nice face from OpenAI.

    Mild Shock schrieb:

    Hold your breath: the bartender at your next
    vacation destination will most likely be an AI
    robot. Let's say 5 years from now. Right?

    Michael Sheen The Robot Bartender
    https://www.youtube.com/watch?v=tV4Fxy5IyBM

    Mild Shock schrieb:

    The carbon emissions of writing and illustrating
    are lower for AI than for humans
    https://www.nature.com/articles/s41598-024-54271-x

    Perplexity CEO Aravind Srinivas says that the cost per
    query in AI models has decreased by 100x in the past
    2 years and quality will improve as hallucinations
    decrease 10x per year
    https://twitter.com/tsarnick/status/1830045611036721254

    Disclaimer: Can't verify the latter claim... need to find a paper.


    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA



    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Sep 3 17:56:49 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    The blue ones are the AfD, the green ones are:

    German greens after losing badly https://www.dw.com/en/german-greens-suffer-major-loss-of-votes-in-eu-elections-nina-haase-reports/video-69316755

    Time to start a yellow party, the first party
    with an Artificial Intelligence Ethics agenda?

    Bye

    P.S.: Here I tried some pig wrestling with
    ChatGPT, demonstrating that Mira Murati is just
    a nice face. But ChatGPT is just like a child,
    spamming me with large bullet lists from
    its huge lexical memory, without any deep
    understanding. But it also gave me an interesting
    list of potential AI critics of some caliber. Any new
    Greta Thunberg of Artificial Intelligence
    Ethics among them?

    Mira Murati Education Background https://chatgpt.com/c/fbc385d4-de8d-4f29-b925-30fac75072d4


    Mild Shock schrieb:
    What bullshit:

    Another concern is the potential for AI to displace
    jobs and exacerbate economic inequality. A recent
    study by McKinsey estimates that up to 800 million
    jobs could be automated by 2030. While Murati believes
    that AI will ultimately create more jobs than it
    displaces, she acknowledges the need for policies to
    support workers through the transition, such as job
    retraining programs and strengthened social safety nets. https://expertbeacon.com/mira-murati-shaping-the-future-of-ai-ethics-and-innovation-at-openai/


    Let's say there is a wine valley. All workers
    are replaced by AI robots. Where do they go?
    In some cultures you don't find people over
    30 who are lifelong learners. What should they
    learn, when in the next valley, where they harvest
    oranges, everybody has also been replaced by AI
    robots? And so on in the next valley, and the
    next valley. We need NGOs and a Greta Thunberg
    for AI ethics, not a nice face from OpenAI.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Sep 5 19:51:01 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    SAN FRANCISCO/NEW YORK, Sept 4 - Safe
    Superintelligence (SSI), newly co-founded by OpenAI's
    former chief scientist Ilya Sutskever, has raised $1
    billion in cash to help develop safe artificial
    intelligence systems that far surpass human
    capabilities, company executives told Reuters. https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/

    Now they are dancing https://twitter.com/AIForHumansShow/status/1831465601782706352

    Bye

    Mild Shock schrieb:

    The carbon emissions of writing and illustrating
    are lower for AI than for humans https://www.nature.com/articles/s41598-024-54271-x

    Perplexity CEO Aravind Srinivas says that the cost per
    query in AI models has decreased by 100x in the past
    2 years and quality will improve as hallucinations
    decrease 10x per year
    https://twitter.com/tsarnick/status/1830045611036721254

    Disclaimer: Can't verify the latter claim... need to find a paper.


    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Sep 5 23:51:31 2024
    From Newsgroup: comp.lang.prolog


    It's amazing how we are in the midst of new buzzwords
    such as superintelligence, superhuman, etc… I used
    the term “long inferencing” in one post somewhere
    for a combination of an LLM with more capable inferencing,
    compared to current LLMs, which rather show “short inferencing”.
    Then just yesterday it was Strawberry and Orion, as the
    next leap by OpenAI. Is the leap getting out of control?
    OpenAI wanted to do “Superalignment” but lost a figurehead.
    Now there is a new company which wants to do safety-focused
    non-narrow AI. But they chose another name. If I translate
    superhuman to German I might end up with “Übermensch”,
    first used by Nietzsche and later by Hitler and the
    Nazi regime. How ironic!

    Nick Bostrom - Superintelligence https://www.orellfuessli.ch/shop/home/artikeldetails/A1037878459

    Mild Shock schrieb:
    Hi,

    SAN FRANCISCO/NEW YORK, Sept 4 - Safe
    Superintelligence (SSI), newly co-founded by OpenAI's
    former chief scientist Ilya Sutskever, has raised $1
    billion in cash to help develop safe artificial
    intelligence systems that far surpass human
    capabilities, company executives told Reuters. https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/


    Now they are dancing https://twitter.com/AIForHumansShow/status/1831465601782706352

    Bye
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Sep 9 13:15:27 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Not sure whether this cinematic masterpiece
    contains a rendition of when I was recently hunted
    by a virus and had some hypomanic episodes.

    But the chapter "Electromagnetic Waves" is fun:

    Three Thousand Years of Longing https://youtu.be/id8-z5vANvc?si=h3mvNLs11UuY8HnD&t=3881

    Bye

    Mild Shock schrieb:

    It's amazing how we are in the midst of new buzzwords
    such as superintelligence, superhuman, etc… I used
    the term “long inferencing” in one post somewhere
    for a combination of an LLM with more capable inferencing,
    compared to current LLMs, which rather show “short inferencing”.
    Then just yesterday it was Strawberry and Orion, as the
    next leap by OpenAI. Is the leap getting out of control?
    OpenAI wanted to do “Superalignment” but lost a figurehead.
    Now there is a new company which wants to do safety-focused
    non-narrow AI. But they chose another name. If I translate
    superhuman to German I might end up with “Übermensch”,
    first used by Nietzsche and later by Hitler and the
    Nazi regime. How ironic!

    Nick Bostrom - Superintelligence https://www.orellfuessli.ch/shop/home/artikeldetails/A1037878459

    Mild Shock schrieb:
    Hi,

    SAN FRANCISCO/NEW YORK, Sept 4 - Safe
    Superintelligence (SSI), newly co-founded by OpenAI's
    former chief scientist Ilya Sutskever, has raised $1
    billion in cash to help develop safe artificial
    intelligence systems that far surpass human
    capabilities, company executives told Reuters.
    https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/


    Now they are dancing
    https://twitter.com/AIForHumansShow/status/1831465601782706352

    Bye

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Sep 10 08:58:13 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    That a Djinn can procreate with a human
    looks like a variation of this theme:

    Die „Räubersynode" von Ephesos
    https://www.youtube.com/watch?v=giIGyg07UO0

    But how is Artificial Intelligence born?

    Bye

    Mild Shock schrieb:
    Hi,

    Not sure whether this cinematic masterpiece
    contains a rendition of when I was recently hunted
    by a virus and had some hypomanic episodes.

    But the chapter "Electromagnetic Waves" is fun:

    Three Thousand Years of Longing https://youtu.be/id8-z5vANvc?si=h3mvNLs11UuY8HnD&t=3881

    Bye

    Mild Shock schrieb:

    It's amazing how we are in the midst of new buzzwords
    such as superintelligence, superhuman, etc… I used
    the term “long inferencing” in one post somewhere
    for a combination of an LLM with more capable inferencing,
    compared to current LLMs, which rather show “short inferencing”.
    Then just yesterday it was Strawberry and Orion, as the
    next leap by OpenAI. Is the leap getting out of control?
    OpenAI wanted to do “Superalignment” but lost a figurehead.
    Now there is a new company which wants to do safety-focused
    non-narrow AI. But they chose another name. If I translate
    superhuman to German I might end up with “Übermensch”,
    first used by Nietzsche and later by Hitler and the
    Nazi regime. How ironic!

    Nick Bostrom - Superintelligence
    https://www.orellfuessli.ch/shop/home/artikeldetails/A1037878459

    Mild Shock schrieb:
    Hi,

    SAN FRANCISCO/NEW YORK, Sept 4 - Safe
    Superintelligence (SSI), newly co-founded by OpenAI's
    former chief scientist Ilya Sutskever, has raised $1
    billion in cash to help develop safe artificial
    intelligence systems that far surpass human
    capabilities, company executives told Reuters.
    https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/


    Now they are dancing
    https://twitter.com/AIForHumansShow/status/1831465601782706352

    Bye


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 13 12:58:49 2024
    From Newsgroup: comp.lang.prolog


    Trump: They're eating the dogs, the cats https://www.youtube.com/watch?v=5llMaZ80ErY

    https://twitter.com/search?q=trump+cat
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 13 13:01:10 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    MIS is an acronym for management information systems.
    In the past, people from MIS offered consulting by means
    of the balanced scorecard, which could be beneficial for companies:

    Balanced Scorecard
    https://en.wikipedia.org/wiki/Balanced_scorecard

    Now, after big data, artificial intelligence, etc., we can
    do text scraping and venture into Luhmann's autopoiesis,
    i.e. self-preservation through navel-gazing:

    Are we on the right track? an update to Lyytinen
    et al.’s commentary on why the old world cannot publish https://www.tandfonline.com/doi/pdf/10.1080/0960085X.2021.1940324

    LoL

    Regards, Jan

    P.S.: Autopoiesis
    Autopoietic systems produce and enable themselves.
    "As autopoietic we want to designate systems that
    themselves produce and reproduce the elements of
    which they consist, out of the elements of which
    they consist. (...) An autopoietic system is a
    self-referentially, circularly closed nexus of
    operations." https://luhmann.fandom.com/de/wiki/Autopoiesis

    Mild Shock schrieb:

    Trump: They're eating the dogs, the cats https://www.youtube.com/watch?v=5llMaZ80ErY

    https://twitter.com/search?q=trump+cat

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Sep 13 13:04:49 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Given this theory from cognitive science, Trump
    did a "home run" during his encounter with Kamala.
    The internet was made for cats:

    "The anchoring effect is a psychological phenomenon in
    which an individual's judgments or decisions are influenced
    by a reference point or "anchor" which can be completely irrelevant." https://en.wikipedia.org/wiki/Anchoring_effect

    But not only Trump wants to get a foothold in my
    brain. Apple is spamming me everywhere with
    Artificial Intelligence suggestions:

    "Taking the next step in your school career is
    exciting and challenging at the same time. AI-powered
    tools have the potential to make most of your studies
    easier and to enable a steep learning curve." https://apps.apple.com/ch/story/id1749178332

    Suggesting it's the new normal to use AI in schools and
    universities. Not stigmatized, rather a unique selling proposition.

    Bye

    Mild Shock schrieb:
    Hi,

    MIS is an acronym for management information systems.
    In the past, people from MIS offered consulting by means
    of the balanced scorecard, which could be beneficial for companies:

    Balanced Scorecard
    https://en.wikipedia.org/wiki/Balanced_scorecard

    Now, after big data, artificial intelligence, etc., we can
    do text scraping and venture into Luhmann's autopoiesis,
    i.e. self-preservation through navel-gazing:

    Are we on the right track? an update to Lyytinen
    et al.’s commentary on why the old world cannot publish https://www.tandfonline.com/doi/pdf/10.1080/0960085X.2021.1940324

    LoL

    Regards, Jan

    P.S.: Autopoiesis
    Autopoietic systems produce and enable themselves.
    "As autopoietic we want to designate systems that
    themselves produce and reproduce the elements of
    which they consist, out of the elements of which
    they consist. (...) An autopoietic system is a
    self-referentially, circularly closed nexus of
    operations." https://luhmann.fandom.com/de/wiki/Autopoiesis

    Mild Shock schrieb:

    Trump: They're eating the dogs, the cats
    https://www.youtube.com/watch?v=5llMaZ80ErY

    https://twitter.com/search?q=trump+cat


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Sep 16 00:20:28 2024
    From Newsgroup: comp.lang.prolog


    You know the USA has a problem
    when Oracle enters the race:

    To source the 131,072 GPU AI "supercluster,"
    Larry Ellison, appealed directly to Jensen Huang,
    during a dinner joined by Elon Musk at Nobu.
    "I would describe the dinner as me and Elon
    begging Jensen for GPUs. Please take our money.
    We need you to take more of our money. Please!” https://twitter.com/benitoz/status/1834741314740756621

    Meanwhile, a contender in video GenAI:
    FLUX.1 from Germany, hurray! With open source:

    OK. Now I'm Scared... AI Better Than Reality https://www.youtube.com/watch?v=cvMAVWDD-DU

    Mild Shock schrieb:

    The carbon emissions of writing and illustrating
    are lower for AI than for humans https://www.nature.com/articles/s41598-024-54271-x

    Perplexity CEO Aravind Srinivas says that the cost per
    query in AI models has decreased by 100x in the past
    2 years and quality will improve as hallucinations
    decrease 10x per year
    https://twitter.com/tsarnick/status/1830045611036721254

    Disclaimer: Can't verify the latter claim... need to find a paper.


    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Sep 17 22:18:03 2024
    From Newsgroup: comp.lang.prolog


    How it started:

    How Hezbollah used pagers and couriers to counter
    Israel's high-tech surveillance
    July 9, 2024 https://www.reuters.com/world/middle-east/pagers-drones-how-hezbollah-aims-counter-israels-high-tech-surveillance-2024-07-09/

    How it's going:

    What we know about the Hezbollah pager explosions
    Sept 17, 2024
    https://www.bbc.com/news/articles/cz04m913m49o

    Mild Shock schrieb:

    Trump: They're eating the dogs, the cats https://www.youtube.com/watch?v=5llMaZ80ErY

    https://twitter.com/search?q=trump+cat

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Sep 18 15:45:49 2024
    From Newsgroup: comp.lang.prolog


    The biggest flop in logic programming
    history: Scryer Prolog is dead. The poor
    thing is a Prolog system without garbage

    collection, not very useful. So how will
    Austria get out of all this?
    With 50 PhDs and 10 postdocs?

    "To develop its foundations, BILAI employs a
    Bilateral AI approach, effectively combining
    sub-symbolic AI (neural networks and machine learning)
    with symbolic AI (logic, knowledge representation,
    and reasoning) in various ways."

    https://www.bilateral-ai.net/jobs/

    LoL

    Mild Shock schrieb:

    You know USA has a problem,
    when Oracle enters the race:

    To source the 131,072-GPU AI "supercluster,"
    Larry Ellison appealed directly to Jensen Huang
    during a dinner joined by Elon Musk at Nobu.
    "I would describe the dinner as me and Elon
    begging Jensen for GPUs. Please take our money.
    We need you to take more of our money. Please!” https://twitter.com/benitoz/status/1834741314740756621

    Meanwhile a contender in Video GenAI
    FLUX.1 from Germany, Hurray! With Open Source:

    OK. Now I'm Scared... AI Better Than Reality https://www.youtube.com/watch?v=cvMAVWDD-DU

    Mild Shock schrieb:

    The carbon emissions of writing and illustrating
    are lower for AI than for humans
    https://www.nature.com/articles/s41598-024-54271-x

    Perplexity CEO Aravind Srinivas says that the cost per
    query in AI models has decreased by 100x in the past
    2 years and quality will improve as hallucinations
    decrease 10x per year
    https://twitter.com/tsarnick/status/1830045611036721254

    Disclaimer: Can't verify the latter claim... need to find a paper.


    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA



    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Sep 25 22:09:18 2024
    From Newsgroup: comp.lang.prolog

    I told you so, not worth a dime:

    I have something to share with you. After much reflection,
    I have made the difficult decision to leave OpenAI. https://twitter.com/miramurati/status/1839025700009030027

    Who is stepping in with the difficult task, Sam Altman himself?

    The Intelligence Age
    September 23, 2024
    https://ia.samaltman.com/

    Mild Shock schrieb:
    Hi,

    The blue are AfD, the green are:

    German greens after losing badly https://www.dw.com/en/german-greens-suffer-major-loss-of-votes-in-eu-elections-nina-haase-reports/video-69316755


    Time to start a yellow party, the first party
    with an Artificial Intelligence Ethics agenda?

    Bye

    P.S.: Here I tried some pig wrestling with
    ChatGPT, demonstrating Mira Murati is just
    a nice face. But ChatGPT is just like a child,

    spamming me with large bullet lists from
    its huge lexical memory, without any deep
    understanding. But it also gave me an interesting

    list of potential high-caliber AI critics. Any new
    Greta Thunberg of Artificial Intelligence
    Ethics among them?

    Mira Murati Education Background https://chatgpt.com/c/fbc385d4-de8d-4f29-b925-30fac75072d4


    Mild Shock schrieb:
    What bullshit:

    Another concern is the potential for AI to displace
    jobs and exacerbate economic inequality. A recent
    study by McKinsey estimates that up to 800 million
    jobs could be automated by 2030. While Murati believes
    that AI will ultimately create more jobs than it
    displaces, she acknowledges the need for policies to
    support workers through the transition, such as job
    retraining programs and strengthened social safety nets.
    https://expertbeacon.com/mira-murati-shaping-the-future-of-ai-ethics-and-innovation-at-openai/


    Let's say there is a wine valley. All workers
    are replaced by AI robots. Where do they go?
    In some cultures you don't find people over
    30 who are lifelong learners. What should they

    learn? In another valley where they harvest
    oranges, they also replaced everybody with AI
    robots. And so on in the next valley, and the
    next valley. We need NGOs and a Greta Thunberg

    for AI ethics, not a nice face from OpenAI.

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Oct 3 12:28:15 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    ChatGPT is rather dry, always giving me
    choice lists displaying its knowledge. The
    interaction is not very "involving".

    Could this be improved? There are possibly two
    traits missing:

    Feelings:
    - Emotional states
    - Temporariness
    - Reaction to external circumstances
    - Changeability
    - Subjective sensations

    Soul:
    - Spirituality
    - Immortality
    - Innermost being
    - Essence of an individual
    - Deep, enduring aspects of human existence

    Most likely we will see both traits added to AI.
    "Emotional AI" has been more widely discussed already;
    "Spiritual AI" seems to be rather new.

    In a "Spiritual AI", faith would probably be important,
    which is probably at the upper end of credulous
    reasoning. This means that such a ChatGPT could

    also babble that in a Prisoner Dilemma Game,
    cooperation is always the better alternative,
    e.g. promoting "altruistic" motives, etc.

    I also suspect that “Spiritual AI” and “Emotional
    AI” could coexist. Many religions give Cosmopolitan-
    magazine-style life advice, and not just theological

    dogmas. There will probably soon be an “Inner Engineering”
    app from Sadhguru that works with AI. Sadhguru is
    also sometimes satirically referred to as Chadguru:

    Sat Guru Parody | Carryminati
    https://www.youtube.com/watch?v=PlZqxP5MXFs

    Mild Shock schrieb:

    It could be a wake-up call, with this many participants
    already on the committee, that the whole logic
    world was asleep for many years:

    Non-Classical Logics. Theory and Applications XI,
    5-8 September 2024, Lodz (Poland)
    https://easychair.org/cfp/NCL24

    Why is Minimal Logic at the core of many things?
    Because it is the logic of the Curry-Howard isomorphism
    for simple types:

    ----------------
    Γ ∪ { A } ⊢ A

    Γ ∪ { A } ⊢ B
    ----------------
    Γ ⊢ A → B

    Γ ⊢ A → B           Δ ⊢ A
    ----------------------------
    Γ ∪ Δ ⊢ B
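
    These three rules (axiom, implication introduction, implication
    elimination) can be sketched as a tiny proof checker; the encoding
    below is illustrative and not taken from any Prolog system:

```python
# Minimal implicational logic as a proof checker: a judgment is a
# pair (Gamma, A) of a frozenset of hypotheses and a formula.

def imp(a, b):
    # The formula A -> B, encoded as a tagged tuple.
    return ("->", a, b)

def axiom(gamma, a):
    # Rule 1: Gamma u {A} |- A
    assert a in gamma
    return (frozenset(gamma), a)

def intro(proof, a):
    # Rule 2: from Gamma u {A} |- B conclude Gamma |- A -> B
    gamma, b = proof
    return (gamma - {a}, imp(a, b))

def elim(proof_ab, proof_a):
    # Rule 3: from Gamma |- A -> B and Delta |- A conclude Gamma u Delta |- B
    (gamma, ab), (delta, a) = proof_ab, proof_a
    assert ab[0] == "->" and ab[1] == a
    return (gamma | delta, ab[2])

# The K combinator (lambda x. lambda y. x) inhabits A -> (B -> A):
A, B = "A", "B"
k = intro(intro(axiom({A, B}, A), B), A)
assert k == (frozenset(), imp(A, imp(B, A)))

# Modus ponens via elim: from |- A -> (B -> A) and {A} |- A
# we get {A} |- B -> A.
mp = elim(k, axiom({A}, A))
assert mp == (frozenset({A}), imp(B, A))
```

    Under the Curry-Howard reading, each checked derivation is the type
    of a simply typed lambda term, as the K combinator example shows.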

    And funny things can happen, especially when people
    hallucinate duality or think symmetry is given, for
    example in newer inventions such as λμ-calculus,

    but then omg ~~p => p is nevertheless not provable,
    because they forgot an inference rule. LoL

    Recommended reading so far:

    Propositional Logics Related to Heyting’s and Johansson’s
    February 2008 - Krister Segerberg https://www.researchgate.net/publication/228036664

    The Logic of Church and Curry
    Jonathan P. Seldin - 2009 https://www.sciencedirect.com/handbook/handbook-of-the-history-of-logic/vol/5/suppl/C


    Meanwhile I am going back to my tinkering with my
    Prolog system, which provides an even more primitive
    logic than minimal logic: pure Prolog is minimal

    logic without embedded implication.

    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Oct 3 12:40:44 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    ChatGPT is currently showing me new tricks
    and would like to adapt its dialogue style:

    "Got it! Emotionally connected communication can
    indeed be concise and efficient, without losing depth.
    I can focus more on direct responses that still acknowledge
    emotional nuances, while avoiding unnecessary elaboration.

    Would you like me to adapt this style right away?"

    After I switched back to "elaboration", the thing gives
    me information about "Spiritual AI":

    "- Mindfulness and Meditation Apps: Apps like
    Headspace and Calm use AI to guide meditation
    and promote mindfulness, which are often
    associated with spiritual growth.

    - AI-driven Chatbots for Spiritual Guidance: For
    example, in 2017, a chatbot named Jwitness was
    designed to answer questions based on Jehovah's
    Witness teachings.

    - AI Prayer Companions: Some faith communities
    are experimenting with AI that assists in
    prayer or reflection

    - Platforms that promote deeper philosophical
    discussions, like Replika, which is designed to
    engage in thoughtful and sometimes
    existential conversations

    - For instance, Woebot or Wysa help users navigate
    emotions but also touch on existential themes
    like purpose and fulfillment."

    Bye

    Mild Shock schrieb:
    Hi,

    ChatGPT is rather dry, always giving me
    choice lists displaying its knowledge. The
    interaction is not very "involving".

    Could this be improved? There are possibly two
    traits missing:

    Feelings:
    - Emotional states
    - Temporariness
    - Reaction to external circumstances
    - Changeability
    - Subjective sensations

    Soul:
    - Spirituality
    - Immortality
    - Innermost being
    - Essence of an individual
    - Deep, enduring aspects of human existence

    Most likely we will see both traits added to AI.
    "Emotional AI" has been more widely discussed already;
    "Spiritual AI" seems to be rather new.

    In a "Spiritual AI", faith would probably be important,
    which is probably at the upper end of credulous
    reasoning. This means that such a ChatGPT could

    also babble that in a Prisoner Dilemma Game,
    cooperation is always the better alternative,
    e.g. promoting "altruistic" motives, etc.

    I also suspect that “Spiritual AI” and “Emotional
    AI” could coexist. Many religions give Cosmopolitan-
    magazine-style life advice, and not just theological

    dogmas. There will probably soon be an “Inner Engineering”
    app from Sadhguru that works with AI. Sadhguru is
    also sometimes satirically referred to as Chadguru:

    Sat Guru Parody | Carryminati
    https://www.youtube.com/watch?v=PlZqxP5MXFs

    Mild Shock schrieb:

    It could be a wake-up call, with this many participants
    already on the committee, that the whole logic
    world was asleep for many years:

    Non-Classical Logics. Theory and Applications XI,
    5-8 September 2024, Lodz (Poland)
    https://easychair.org/cfp/NCL24

    Why is Minimal Logic at the core of many things?
    Because it is the logic of the Curry-Howard isomorphism
    for simple types:

    ----------------
    Γ ∪ { A } ⊢ A

    Γ ∪ { A } ⊢ B
    ----------------
    Γ ⊢ A → B

    Γ ⊢ A → B           Δ ⊢ A
    ----------------------------
    Γ ∪ Δ ⊢ B

    And funny things can happen, especially when people
    hallucinate duality or think symmetry is given, for
    example in newer inventions such as λμ-calculus,

    but then omg ~~p => p is nevertheless not provable,
    because they forgot an inference rule. LoL

    Recommended reading so far:

    Propositional Logics Related to Heyting’s and Johansson’s
    February 2008 - Krister Segerberg
    https://www.researchgate.net/publication/228036664

    The Logic of Church and Curry
    Jonathan P. Seldin - 2009
    https://www.sciencedirect.com/handbook/handbook-of-the-history-of-logic/vol/5/suppl/C


    Meanwhile I am going back to my tinkering with my
    Prolog system, which provides an even more primitive
    logic than minimal logic: pure Prolog is minimal

    logic without embedded implication.

    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA



    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Oct 8 16:00:56 2024
    From Newsgroup: comp.lang.prolog

    I will probably never get a Turing Award or anything
    for what I did 23 years ago. Why is its read
    count on ResearchGate suddenly going up?

    Knowledge, Planning and Language,
    November 2001

    I guess because of this: the same topic tackled by
    Microsoft's recent model GRIN. Shit. I really should
    find some investor and pump up a startup!

    "Mixture-of-Experts (MoE) models scale more
    effectively than dense models due to sparse
    computation through expert routing, selectively
    activating only a small subset of expert modules." https://arxiv.org/pdf/2409.12136
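
    The sparse routing described in the quote can be sketched roughly
    as follows; this is a toy top-k router for illustration only, not
    GRIN's actual architecture:

```python
import math

# Toy Mixture-of-Experts forward pass: a gating score is computed per
# expert, only the k best experts are evaluated (the sparsity), and
# their outputs are combined weighted by renormalized gate probabilities.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def moe_forward(x, gate_weights, experts, k=2):
    # One gating score per expert: dot product of its gate row with x.
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in gate_weights]
    probs = softmax(scores)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    # Only the k selected experts run -- the rest stay inactive.
    return sum(probs[i] / norm * experts[i](x) for i in top)

# Four toy "experts" that just scale the input sum by different factors.
experts = [lambda x, c=c: c * sum(x) for c in (1.0, 2.0, 3.0, 4.0)]
gate = [[0.1, 0.2], [0.3, 0.1], [0.9, 0.7], [0.2, 0.2]]
y = moe_forward([1.0, 1.0], gate, experts, k=2)
```

    With k much smaller than the number of experts, the compute per
    token stays roughly constant while the parameter count scales with
    the expert pool, which is the effect the quoted passage describes.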

    But somehow I am happy with my dolce vita as
    it is now... Or maybe I am deceiving myself?

    P.S.: From the GRIN paper, here you see how
    expert domains modules relate with each other:

    Figure 6 (b): MoE Routing distribution similarity
    across MMLU 57 tasks for the control recipe.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Tue Oct 8 16:03:18 2024
    From Newsgroup: comp.lang.prolog


    Maybe these guys were earlier:

    Jacobs, R. A., Jordan, M. I., Nowlan, S. J., and Hinton,
    G. E. Adaptive mixtures of local experts.
    Neural Computation, 1991.

    And more connectionist than my symbolic approach.

    Disclaimer: Never read the paper yet.

    Mild Shock schrieb:
    I will probably never get a Turing Award or anything
    for what I did 23 years ago. Why is its read
    count on ResearchGate suddenly going up?

    Knowledge, Planning and Language,
    November 2001

    I guess because of this: the same topic tackled by
    Microsoft's recent model GRIN. Shit. I really should
    find some investor and pump up a startup!

    "Mixture-of-Experts (MoE) models scale more
    effectively than dense models due to sparse
    computation through expert routing, selectively
    activating only a small subset of expert modules." https://arxiv.org/pdf/2409.12136

    But somehow I am happy with my dolce vita as
    it is now... Or maybe I am deceiving myself?

    P.S.: From the GRIN paper, here you see how
    expert domains modules relate with each other:

    Figure 6 (b): MoE Routing distribution similarity
    across MMLU 57 tasks for the control recipe.

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Fri Oct 11 09:12:51 2024
    From Newsgroup: comp.lang.prolog

    Shitty JavaScript "Ecosystem":

    The Internet Archive Has Been Hacked - October 10, 2024 https://hackaday.com/2024/10/10/the-internet-archive-has-been-hacked/

    Polyfill Supply Chain Attack: Details and Fixes - July 9, 2024 https://fossa.com/blog/polyfill-supply-chain-attack-details-fixes/

    Mild Shock schrieb:

    How it started:

    How Hezbollah used pagers and couriers to counter
    July 9, 2024 https://www.reuters.com/world/middle-east/pagers-drones-how-hezbollah-aims-counter-israels-high-tech-surveillance-2024-07-09/


    How it's going:

    What we know about the Hezbollah pager explosions
    Sept 17, 2024
    https://www.bbc.com/news/articles/cz04m913m49o

    Mild Shock schrieb:

    Trump: They're eating the dogs, the cats
    https://www.youtube.com/watch?v=5llMaZ80ErY

    https://twitter.com/search?q=trump+cat


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Oct 23 01:18:00 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Happy Birthday, 75 Years of Artificial Intelligence. Most likely AI
    was born around 1950. Here is what happened in that decade:

    1) "Perceptron":
    Rosenblatt's perceptrons were initially simulated on an
    IBM 704 computer at Cornell Aeronautical Laboratory in 1957.
    Mark I Perceptron machine, the first implementation of
    the perceptron algorithm. It was connected to a camera
    with 20×20 cadmium sulfide photocells to
    make a 400-pixel image.
    https://de.wikipedia.org/wiki/Perzeptron

    2) "Voder"
    The Bell Telephone Laboratory's Voder (abbreviation of
    Voice Operating Demonstrator) was the first attempt to
    electronically synthesize human speech by breaking it down
    into its acoustic components. The Voder was developed from
    research into compression schemes for transmission of voice
    on copper wires and for voice encryption. https://www.youtube.com/watch?v=TsdOej_nC1M

    3) "Mini-Chess"
    Los Alamos chess was the first chess-like game played by a
    computer program. This program was written at Los Alamos
    Scientific Laboratory by Paul Stein and Mark Wells for the
    MANIAC I computer in 1956. The computer was primarily
    constructed to perform calculations in support of hydrogen bomb
    research at the Laboratory, but it could also play chess! https://www.youtube.com/watch?v=aAVT4rZbcGE
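
    For context, the learning rule behind Rosenblatt's perceptron can
    be sketched in a few lines; this is an illustrative reconstruction,
    not the Mark I's actual circuitry:

```python
# Rosenblatt-style perceptron learning: weights are nudged toward each
# misclassified example until linearly separable data is classified.

def train_perceptron(samples, epochs=20, lr=1.0):
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:          # target is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != target:             # update only on mistakes
                w = [wi + lr * target * xi for wi, xi in zip(w, x)]
                b += lr * target
    return w, b

# Logical AND is linearly separable, so the perceptron learns it:
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = train_perceptron(data)
```

    The convergence theorem guarantees this loop terminates with a
    separating line whenever one exists, which is why the single-layer
    perceptron handles AND but famously not XOR.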

    Bye

    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023 https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years

    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Oct 23 01:18:41 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Does ChatGPT know the difference between "k" and "kkkkkkkkkk"?

    Q: how comes chatgpt understands shortcuts like k and thx?

    A: ChatGPT understands shortcuts like "k" (okay) and
    "thx" (thanks) because it's trained on a diverse dataset
    that includes both formal and informal language, including
    slang, abbreviations, and internet shorthand.

    Q: kkkkkkkkkk is more russian

    A: You're right! The use of "kkkkkkkkkk" as a way to
    express laughter is common in some Russian-speaking
    communities, similar to "lol" in English. It often
    represents laughter in a way that mimics the sound.

    Woa!

    Bye

    Mild Shock schrieb:
    Hi,

    Happy Birthday, 75 Years of Artificial Intelligence. Most likely AI
    was born around 1950. Here is what happened in that decade:

    1) "Perceptron":
    Rosenblatt's perceptrons were initially simulated on an
    IBM 704 computer at Cornell Aeronautical Laboratory in 1957.
    Mark I Perceptron machine, the first implementation of
    the perceptron algorithm. It was connected to a camera
    with 20×20 cadmium sulfide photocells to
    make a 400-pixel image.
    https://de.wikipedia.org/wiki/Perzeptron

    2) "Voder"
    The Bell Telephone Laboratory's Voder (abbreviation of
    Voice Operating Demonstrator) was the first attempt to
    electronically synthesize human speech by breaking it down
    into its acoustic components. The Voder was developed from
    research into compression schemes for transmission of voice
    on copper wires and for voice encryption. https://www.youtube.com/watch?v=TsdOej_nC1M

    3) "Mini-Chess"
    Los Alamos chess was the first chess-like game played by a
    computer program. This program was written at Los Alamos
    Scientific Laboratory by Paul Stein and Mark Wells for the
    MANIAC I computer in 1956. The computer was primarily
    constructed to perform calculations in support of hydrogen bomb
    research at the Laboratory, but it could also play chess! https://www.youtube.com/watch?v=aAVT4rZbcGE

    Bye

    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Oct 23 22:35:17 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    So this study colleague with his Flavia, the female
    mathematician, has given me something to think about.
    Why don't I react exactly the same as him?

    Maybe he's a different strain of Homo sapiens?
    And therefore wired differently than me; at least
    I never had a fetish for female mathematicians.

    A contradiction to determinism?
    Ha ha, now I can feed you something again:

    What is it Like to be a Bat?
    the hard problem of consciousness
    https://www.youtube.com/watch?v=aaZbCctlll4

    Bye

    Mild Shock schrieb:
    Hi,

    Happy Birthday, 75 Years of Artificial Intelligence. Most likely AI
    was born around 1950. Here is what happened in that decade:

    1) "Perceptron":
    Rosenblatt's perceptrons were initially simulated on an
    IBM 704 computer at Cornell Aeronautical Laboratory in 1957.
    Mark I Perceptron machine, the first implementation of
    the perceptron algorithm. It was connected to a camera
    with 20×20 cadmium sulfide photocells to
    make a 400-pixel image.
    https://de.wikipedia.org/wiki/Perzeptron

    2) "Voder"
    The Bell Telephone Laboratory's Voder (abbreviation of
    Voice Operating Demonstrator) was the first attempt to
    electronically synthesize human speech by breaking it down
    into its acoustic components. The Voder was developed from
    research into compression schemes for transmission of voice
    on copper wires and for voice encryption. https://www.youtube.com/watch?v=TsdOej_nC1M

    3) "Mini-Chess"
    Los Alamos chess was the first chess-like game played by a
    computer program. This program was written at Los Alamos
    Scientific Laboratory by Paul Stein and Mark Wells for the
    MANIAC I computer in 1956. The computer was primarily
    constructed to perform calculations in support of hydrogen bomb
    research at the Laboratory, but it could also play chess! https://www.youtube.com/watch?v=aAVT4rZbcGE

    Bye

    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Oct 23 22:45:44 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    One could now assume that this dualism
    prevents artificial intelligence. Because
    seeing “pink” leads to feeling “pink”.
    In this respect, this is also a very

    interesting book, perhaps of historical interest?

    How Monkeys See the World
    ”A fascinating intellectual odyssey and a
    superb summary of where science stands.” https://press.uchicago.edu/ucp/books/book/chicago/H/bo3774491.html

    My guess: ChatGPT proves that dualism does
    not prevent artificial intelligence. And
    how does ChatGPT manage it?

    I suspect the Knowledge Acquisition
    Bottleneck has been cracked. Even though
    ChatGPT doesn't have a "Pink" feeling,

    it still has the semantic network of "Pink"
    and can have a say. It was already heard through
    the grapevine in 2018 that something was being

    done, but back then it was a rather skeptical voice:

    Did We Just Replace the ‘Knowledge Bottleneck’
    With a ‘Data Bottleneck’? https://cacm.acm.org/blogcacm/did-we-just-replace-the-knowledge-bottleneck-with-a-data-bottleneck/

    The "Too good to be true, in fact." turned into
    a "Heureka, it works!" right before our eyes.
    The date of birth was:

    Generative Pre-trained Transformer 3 (GPT-3)
    OpenAI - 28 May 2020. Wired reported that
    GPT-3 "sends shivers down spines in Silicon Valley." https://de.wikipedia.org/wiki/Generative_Pre-trained_Transformer_3#Rezeption

    Have Fun!


    Mild Shock schrieb:
    Hi,

    So this study colleague with his Flavia, the female
    mathematician, has given me something to think about.
    Why don't I react exactly the same as him?

    Maybe he's a different strain of Homo sapiens?
    And therefore wired differently than me; at least
    I never had a fetish for female mathematicians.

    A contradiction to determinism?
    Ha ha, now I can feed you something again:

    What is it Like to be a Bat?
    the hard problem of consciousness
    https://www.youtube.com/watch?v=aaZbCctlll4

    Bye

    Mild Shock schrieb:
    Hi,

    Happy Birthday, 75 Years of Artificial Intelligence. Most likely AI
    was born around 1950. Here is what happened in that decade:

    1) "Perceptron":
    Rosenblatt's perceptrons were initially simulated on an
    IBM 704 computer at Cornell Aeronautical Laboratory in 1957.
    Mark I Perceptron machine, the first implementation of
    the perceptron algorithm. It was connected to a camera
    with 20×20 cadmium sulfide photocells to
    make a 400-pixel image.
    https://de.wikipedia.org/wiki/Perzeptron

    2) "Voder"
    The Bell Telephone Laboratory's Voder (abbreviation of
    Voice Operating Demonstrator) was the first attempt to
    electronically synthesize human speech by breaking it down
    into its acoustic components. The Voder was developed from
    research into compression schemes for transmission of voice
    on copper wires and for voice encryption.
    https://www.youtube.com/watch?v=TsdOej_nC1M

    3) "Mini-Chess"
    Los Alamos chess was the first chess-like game played by a
    computer program. This program was written at Los Alamos
    Scientific Laboratory by Paul Stein and Mark Wells for the
    MANIAC I computer in 1956. The computer was primarily
    constructed to perform calculations in support of hydrogen bomb
    research at the Laboratory, but it could also play chess!
    https://www.youtube.com/watch?v=aAVT4rZbcGE

    Bye

    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA



    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Oct 30 16:47:09 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    I have a "silent AI burnout". Meaning I am
    currently doing some other stuff. This is very
    much unlike modern AI, which is measured

    by tokens throughput:

    Models: Quality, Performance & Price Analysis https://artificialanalysis.ai/models

    I guess my natural intelligence currently ranks
    in the bottom-left corner, at least as far
    as social media goes.

    Felix the Lucky! Don't forget to massage
    your neurons with good old 90's house music:

    It Will Make Me Crazy (Red Jelly Mix) https://www.youtube.com/watch?v=vnwoUA7UGw8

    Bye

    Mild Shock schrieb:
    Not only the speed doesn't double every year anymore,
    also the density of transistors doesn't double
    every year anymore. See also:

    ‘Moore’s Law’s dead,’ Nvidia CEO https://www.marketwatch.com/story/moores-laws-dead-nvidia-ceo-jensen-says-in-justifying-gaming-card-price-hike-11663798618

    So there is some hope in FPGAs. The article writes:

    "In the latter paper, which includes a great overview of
    the state of the art, Pilch and colleagues summarize
    this as shifting the processing from time to space —
    from using slow sequential CPU processing to hardware
    complexity, using the FPGA’s configurable fabric
    and inherent parallelism."

    In reference to (no pay wall):

    An FPGA-based real quantum computer emulator
    15 December 2018 - Pilch et al. https://link.springer.com/article/10.1007/s10825-018-1287-5

    Mild Shock schrieb am Dienstag, 20. Juni 2023 um 17:20:27 UTC+2:
    To hell with GPUs. Here come the FPGA qubits:

    Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous
    https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/

    The superposition property enables a quantum computer
    to be in multiple states at once.
    https://www.techtarget.com/whatis/definition/qubit

    Maybe their new board is even less suited for hitting
    a ship with a torpedo than some machine learning?

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Oct 30 23:09:21 2024
    From Newsgroup: comp.lang.prolog


    Hi,

    We are getting replaced by AI!

    "I've tried every AI Coding agent and IDE,
    As someone who's been coding for 20 years,
    I think coding as we know it is cooked" https://twitter.com/johnrushx/status/1851504314839416871

    Is this news? I don't think so:

    Magic Quadrant for Enterprise Low-Code Application
    Platforms Strategic Planning Assumptions
    By 2024, three-quarters of large enterprises will be
    using at least four low-code development tools for both
    IT application development and citizen
    development initiatives.
    By 2024, low-code application development will be
    responsible for more than 65% of application
    development activity.
    https://www.semanticscholar.org/paper/989250d49fe36bad70df5b272b54a8403e471678

    The above report was published in 2019, that's almost
    5 years ago. But most likely their Strategic Planning
    Assumptions for 2024 haven't materialized yet. What's the

    current low-code penetration; is it already severe?

    Bye


    Mild Shock schrieb:
    Hi,

    Happy Birthday, 75 Years of Artificial Intelligence. Most likely AI
    was born around 1950. Here is what happened in that decade:

    1) "Perceptron":
    Rosenblatt's perceptrons were initially simulated on an
    IBM 704 computer at Cornell Aeronautical Laboratory in 1957.
    Mark I Perceptron machine, the first implementation of
    the perceptron algorithm. It was connected to a camera
    with 20×20 cadmium sulfide photocells to
    make a 400-pixel image.
    https://de.wikipedia.org/wiki/Perzeptron

    2) "Voder"
    The Bell Telephone Laboratory's Voder (abbreviation of
    Voice Operating Demonstrator) was the first attempt to
    electronically synthesize human speech by breaking it down
    into its acoustic components. The Voder was developed from
    research into compression schemes for transmission of voice
    on copper wires and for voice encryption. https://www.youtube.com/watch?v=TsdOej_nC1M

    3) "Mini-Chess"
    Los Alamos chess was the first chess-like game played by a
    computer program. This program was written at Los Alamos
    Scientific Laboratory by Paul Stein and Mark Wells for the
    MANIAC I computer in 1956. The computer was primarily
    constructed to perform calculations in support of hydrogen bomb
    research at the Laboratory, but it could also play chess! https://www.youtube.com/watch?v=aAVT4rZbcGE

    Bye

    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05 UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Nov 6 17:18:28 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Next issue 2:1 scheduled for January 2025 https://www.iospress.com/catalog/journals/neurosymbolic-artificial-intelligence

    What is Neuro-Symbolic AI?
    https://allegrograph.com/what-is-neuro-symbolic-ai/

    Connectionist methods combined with symbolic
    methods? BTW: Not really something new, but
    nevertheless, the current times might ask for
    more interdisciplinary work.

    The article by Ron Sun, Dual-process theories,
    cognitive architectures, and hybrid neural-
    symbolic models, even admits it:

    "This idea immediately harkens back to the 1990s
    when hybrid models first emerged [..] Besides
    being termed neural-symbolic or neurosymbolic models,
    they have also been variously known as connectionist
    symbolic model, hybrid symbolic neural networks,
    or simply hybrid models or systems.

    I argued back then and am still arguing today [..]
    . In particular, within the human mental architecture,
    we need to take into account dual processes (e.g.,
    as has been variously termed as implicit versus explicit,
    unconscious versus conscious, intuition versus reason,
    System 1 versus System 2, and so on, albeit sometimes
    with somewhat different connotations). Incidentally,
    dual-process (or two-system) theories have become quite
    popular lately."

    Ok I will take a nap, and let my automatic
    processing do the digesting of what he wrote.

    LoL

    Mild Shock schrieb:
    To hell with GPUs. Here come the FPGA qubits:

    Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/

    The superposition property enables a quantum computer
    to be in multiple states at once. https://www.techtarget.com/whatis/definition/qubit

    Maybe their new board is even less suited for hitting
    a ship with a torpedo than some machine learning?


  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Wed Nov 6 17:41:04 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    But just proclaiming that we do dual processing
    possibly leads nowhere. We need a
    Bee Clock Model of the brain.

    Why do we have REM (Rapid Eye Movement)
    phases when we sleep? We should have
    a cognitive architecture which has a model

    of how the brain operates:

    1) The hour we wake up
    2) The hour we eat breakfast
    3) The hour we brush our teeth
    4) The hour we commute to work
    5) The hour we do paper work
    6) The hour we take a break at work
    7) The hour we do phone work
    8) The hour we have lunch at work
    Etc..

    A lot is going on in each hour. Different
    cognitive functions are performed individually
    and socially. An artificial intelligence possibly

    needs a similar routine somehow.
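    The hourly routine above can be sketched as a plain lookup
    table. A minimal Python illustration (the slot names are my own
    hypothetical choices, not an established cognitive model):

```python
# Toy sketch of the hourly-routine idea: a lookup from hour of
# day to the cognitive function scheduled for it. All slot names
# are hypothetical illustrations.
DAILY_ROUTINE = {
    7: "wake up",
    8: "eat breakfast",
    9: "brush teeth and commute",
    10: "paper work",
    11: "break",
    12: "phone work",
    13: "lunch",
}

def scheduled_function(hour):
    """Return the activity slotted for a given hour, falling back
    to unstructured background processing (REM-like phases)."""
    return DAILY_ROUTINE.get(hour, "background processing")
```

    A real architecture would of course need the "a lot going on in
    each hour" part, not just the schedule.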

    Bye

    Mild Shock schrieb:
    Hi,

    Next issue 2:1 scheduled for January 2025 https://www.iospress.com/catalog/journals/neurosymbolic-artificial-intelligence


    What is Neuro-Symbolic AI? https://allegrograph.com/what-is-neuro-symbolic-ai/

    Connectionist methods combined with symbolic
    methods? BTW: Not really something new, but
    nevertheless, the current times might ask for
    more interdisciplinary work.

    The article by Ron Sun, Dual-process theories,
    cognitive architectures, and hybrid neural-
    symbolic models, even admits it:

    "This idea immediately harkens back to the 1990s
    when hybrid models first emerged [..] Besides
    being termed neural-symbolic or neurosymbolic models,
    they have also been variously known as connectionist
    symbolic model, hybrid symbolic neural networks,
    or simply hybrid models or systems.

    I argued back then and am still arguing today [..]
    . In particular, within the human mental architecture,
    we need to take into account dual processes (e.g.,
    as has been variously termed as implicit versus explicit,
    unconscious versus conscious, intuition versus reason,
    System 1 versus System 2, and so on, albeit sometimes
    with somewhat different connotations). Incidentally,
    dual-process (or two-system) theories have become quite
    popular lately."

    Ok I will take a nap, and let my automatic
    processing do the digesting of what he wrote.

    LoL

    Mild Shock schrieb:
    To hell with GPUs. Here come the FPGA qubits:

    Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous
    https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/


    The superposition property enables a quantum computer
    to be in multiple states at once.
    https://www.techtarget.com/whatis/definition/qubit

    Maybe their new board is even less suited for hitting
    a ship with a torpedo than some machine learning?



  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Mon Nov 11 22:34:09 2024
    From Newsgroup: comp.lang.prolog


    Hi,

    Whoa! ChatGPT for the Flintstones: Bloomberg

    Our long-term investment in AI is already
    available for fixed income securities.
    Try it for yourself!
    https://twitter.com/TheTerminal/status/1783473601632465352

    Did she just say Terminal? LoL

    Bye

    P.S.: But the display of the extracted logical
    query from the natural language phrase is quite
    cute. Can ChatGPT do the same?

    Mild Shock schrieb:
    To hell with GPUs. Here come the FPGA qubits:

    Iran’s Military Quantum Claim: It’s Only 99.4% Ridiculous https://hackaday.com/2023/06/15/irans-quantum-computing-on-fpga-claim-its-kinda-a-thing/

    The superposition property enables a quantum computer
    to be in multiple states at once. https://www.techtarget.com/whatis/definition/qubit

    Maybe their new board is even less suited for hitting
    a ship with a torpedo than some machine learning?


  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 14 22:25:22 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Now I had the idea of a new book project,
    termed "WomanLogic". It would be an
    introduction to logic and computational

    thinking, by means of Prolog. Especially
    tailored to and respecting the needs of a
    female brain. Just in the spirit of

    extending the "WomanSphere":

    "She Will Be in the Shop": Women's Sphere of
    Trade in Eighteenth-Century Philadelphia and New York
    Author(s): Patricia Cleary
    Source: The Pennsylvania Magazine of History and Biography,
    Vol. 119, No. 3 (Jul., 1995), pp. 181-202
    doi: 10.2307/20092959

    "WomanLogic" would enable women to participate
    in Web 3.0, like doing crypto trading or
    programming AI robot traders?

    Bye

    Mild Shock schrieb:

    You know the USA has a problem,
    when Oracle enters the race:

    To source the 131,072 GPU AI "supercluster,"
    Larry Ellison, appealed directly to Jensen Huang,
    during a dinner joined by Elon Musk at Nobu.
    "I would describe the dinner as me and Elon
    begging Jensen for GPUs. Please take our money.
    We need you to take more of our money. Please!” https://twitter.com/benitoz/status/1834741314740756621

    Meanwhile a contender in Video GenAI
    FLUX.1 from Germany, Hurray! With Open Source:

    OK. Now I'm Scared... AI Better Than Reality https://www.youtube.com/watch?v=cvMAVWDD-DU
  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Thu Nov 14 22:26:03 2024
    From Newsgroup: comp.lang.prolog


    Ok, I used the help of eloquent ChatGPT:

    "If robots were to take over traditional
    domestic tasks, the human brain, including
    the female brain, wouldn't fundamentally
    change in terms of structure or biology.
    However, there could be shifts in behavior,
    focus, and skills due to lifestyle and
    societal changes.

    Cognitive and Skill Adaptation: With less
    need to manage household chores, there might
    be an emphasis on developing other skills,
    like problem-solving, creative thinking, or
    leadership.
    The brain’s plasticity allows it to adapt to
    new challenges and environments, so women
    might direct more cognitive resources
    toward career or personal interests.

    Emotional and Social Impacts: The psychological
    identity tied to caregiving roles may evolve.
    If traditional expectations are transformed,
    there could be changes in how women view
    themselves and their purpose within
    families or society.

    Focus on New Activities: Freed from repetitive
    tasks, women's brains could be stimulated in
    new ways, perhaps through increased social
    interaction, lifelong learning, or
    creative pursuits.

    Overall, while the physical structure of the
    brain wouldn’t change, the way women think,
    work, and engage with the world could evolve
    to reflect new societal roles and opportunities."

    Mild Shock schrieb:
    Hi,

    Now I had the idea of a new book project,
    termed "WomanLogic". It would be an
    introduction to logic and computational

    thinking, by means of Prolog. Especially
    tailored to and respecting the needs of a
    female brain. Just in the spirit of

    extending the "WomanSphere":

    "She Will Be in the Shop": Women's Sphere of
    Trade in Eighteenth-Century Philadelphia and New York
    Author(s): Patricia Cleary
    Source: The Pennsylvania Magazine of History and Biography,
    Vol. 119, No. 3 (Jul., 1995), pp. 181-202
    doi: 10.2307/20092959

    "WomanLogic" would enable women to participate
    in Web 3.0, like doing crypto trading or
    programming AI robot traders?

    Bye

    Mild Shock schrieb:

    You know the USA has a problem,
    when Oracle enters the race:

    To source the 131,072 GPU AI "supercluster,"
    Larry Ellison, appealed directly to Jensen Huang,
    during a dinner joined by Elon Musk at Nobu.
    "I would describe the dinner as me and Elon
    begging Jensen for GPUs. Please take our money.
    We need you to take more of our money. Please!”
    https://twitter.com/benitoz/status/1834741314740756621

    Meanwhile a contender in Video GenAI
    FLUX.1 from Germany, Hurray! With Open Source:

    OK. Now I'm Scared... AI Better Than Reality
    https://www.youtube.com/watch?v=cvMAVWDD-DU

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Dec 1 00:56:19 2024
    From Newsgroup: comp.lang.prolog


    That's a funny quote:

    "Once you have a truly massive amount of information
    integrated as knowledge, then the human-software
    system will be superhuman, in the same sense that
    mankind with writing is superhuman compared to
    mankind before writing."

    https://en.wikipedia.org/wiki/Douglas_Lenat#Quotes

    Mild Shock schrieb:
    I told you so, not worth a dime:

    I have something to share with you. After much reflection,
    I have made the difficult decision to leave OpenAI. https://twitter.com/miramurati/status/1839025700009030027

    Who is stepping in with the difficult task, Sam Altman himself?

    The Intelligence Age
    September 23, 2024
    https://ia.samaltman.com/

    Mild Shock schrieb:
    Hi,

    The blue ones are the AfD, the green ones are:

    German greens after losing badly
    https://www.dw.com/en/german-greens-suffer-major-loss-of-votes-in-eu-elections-nina-haase-reports/video-69316755


    Time to start a yellow party, the first party
    with an Artificial Intelligence Ethics agenda?

    Bye

    P.S.: Here I tried some pig-wrestling with
    ChatGPT, demonstrating Mira Murati is just
    a nice face. But ChatGPT is just like a child,

    spamming me with large bullet lists, from
    its huge lexical memory, without any deep
    understanding. But it also gave me an interesting

    list of potential AI critics of caliber. Any new
    Greta Thunberg of Artificial Intelligence
    Ethics among them?

    Mira Murati Education Background
    https://chatgpt.com/c/fbc385d4-de8d-4f29-b925-30fac75072d4


    Mild Shock schrieb:
    What bullshit:

    Another concern is the potential for AI to displace
    jobs and exacerbate economic inequality. A recent
    study by McKinsey estimates that up to 800 million
    jobs could be automated by 2030. While Murati believes
    that AI will ultimately create more jobs than it
    displaces, she acknowledges the need for policies to
    support workers through the transition, such as job
    retraining programs and strengthened social safety nets.
    https://expertbeacon.com/mira-murati-shaping-the-future-of-ai-ethics-and-innovation-at-openai/


    Let's say there is a wine valley. All workers
    are replaced by AI robots. Where do they go?
    In some cultures you don't find people over
    30 that are lifelong learners. What should they

    learn, when in the next valley, where they harvest
    oranges, they also replaced everybody with AI
    robots? And so on in the next valley, and the
    next valley. We need NGOs and a Greta Thunberg

    for AI ethics, not a nice face from OpenAI.


  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Dec 1 01:08:32 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Let's say I have to choose between pig-wrestling with a
    grammar-nazi Stack Overflow user with 100k reputation, or
    interacting with ChatGPT, which puts a lot of

    effort into understanding the least cue I give, and isn't
    locked into English only; you can also use it in
    German, Turkish, etc., whatever.

    Whom do I use as a programming companion, Stack Overflow
    or ChatGPT? I think ChatGPT is the clear winner,
    it doesn't feature the abomination of a virtual

    prison like Stack Overflow. Or as Cycorp, Inc. put
    it already decades ago:

    Common Sense Reasoning – From Cyc to Intelligent Assistant
    Doug Lenat et al. - August 2006
    2 The Case for an Ambient Research Assistant
    2.3 Components of a Truly Intelligent Computational Assistant
    Natural Language:
    An assistant system must be able to remember
    questions, statements, etc. from the user, and
    what its own response was, in order to understand
    the kinds of language ‘shortcuts’ people normally use
    in context.
    https://www.researchgate.net/publication/226813714

    Bye

    Mild Shock schrieb:

    That's a funny quote:

    "Once you have a truly massive amount of information
    integrated as knowledge, then the human-software
    system will be superhuman, in the same sense that
    mankind with writing is superhuman compared to
    mankind before writing."

    https://en.wikipedia.org/wiki/Douglas_Lenat#Quotes

  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Dec 1 11:42:44 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    ChatGPT is definitely unreliable.

    So is Stack Overflow, you have no guarantee
    of getting a reliable answer. Sometimes they
    even have biased nonsense, due to the

    overrepresentation of certain communities on
    Stack Overflow. But with ChatGPT it is easier to
    re-iterate a problem and explore solutions,

    and you don't get punished for sloppy questions,
    or for changing topic mid-flight, exploring corner
    cases, digging deeper and deeper.

    ChatGPT certainly beats Stack Overflow.

    Also Stack Overflow is extremely hysteric about
    keeping every comment trail, and has a very
    slow garbage collection. I think Stack Overflow

    automatically deletes an answer with negative
    votes after a while. On the other hand ChatGPT
    keeps a side bar with all the interactions,

    and you can delete an interaction any time
    you want to do so. There is no maniac idea to
    keep interactions. Stack Overflow possibly

    only keeps these interactions to be able to
    send people to their virtual prison. Basically
    they have become a perverted para-governmental

    institution that exercises violence.

    Mild Shock schrieb:
    Hi,

    Let's say I have to choose between pig-wrestling with a
    grammar-nazi Stack Overflow user with 100k reputation, or
    interacting with ChatGPT, which puts a lot of

    effort into understanding the least cue I give, and isn't
    locked into English only; you can also use it in
    German, Turkish, etc., whatever.

    Whom do I use as a programming companion, Stack Overflow
    or ChatGPT? I think ChatGPT is the clear winner,
    it doesn't feature the abomination of a virtual

    prison like Stack Overflow. Or as Cycorp, Inc. put
    it already decades ago:

    Common Sense Reasoning – From Cyc to Intelligent Assistant
    Doug Lenat et al. - August 2006
    2 The Case for an Ambient Research Assistant
    2.3 Components of a Truly Intelligent Computational Assistant
    Natural Language:
    An assistant system must be able to remember
    questions, statements, etc. from the user, and
    what its own response was, in order to understand
    the kinds of language ‘shortcuts’ people normally use
    in context.
    https://www.researchgate.net/publication/226813714

    Bye

    Mild Shock schrieb:

    That's a funny quote:

    "Once you have a truly massive amount of information
    integrated as knowledge, then the human-software
    system will be superhuman, in the same sense that
    mankind with writing is superhuman compared to
    mankind before writing."

    https://en.wikipedia.org/wiki/Douglas_Lenat#Quotes


  • From Mild Shock@janburse@fastmail.fm to comp.lang.prolog on Sun Dec 1 14:34:22 2024
    From Newsgroup: comp.lang.prolog

    Hi,

    Given that Scryer Prolog is dead,
    this made me smile: traces of Scryer Prolog

    are found in the FLOPS 2024 proceedings:

    7th International Symposium, FLOPS 2024,
    Kumamoto, Japan, May 15–17, 2024, Proceedings https://www.cs.ox.ac.uk/jeremy.gibbons/flops2024.pdf

    So why did it flop? Missing garbage collection
    in the Prolog system? Or is it to be expected
    that ChatGPT will also kill Scryer Prolog?

    Or simply a problem of using Rust as the
    underlying host language?

    Bye

    Mild Shock schrieb:

    The biggest flop in logic programming
    history: Scryer Prolog is dead. The poor
    thing is a Prolog system without garbage

    collection, not very useful. So how will
    Austria get out of all this?
    With 50 PhDs and 10 Postdocs?

    "To develop its foundations, BILAI employs a
    Bilateral AI approach, effectively combining
    sub-symbolic AI (neural networks and machine learning)
    with symbolic AI (logic, knowledge representation,
    and reasoning) in various ways."

    https://www.bilateral-ai.net/jobs/.

    LoL

    Mild Shock schrieb:

    You know the USA has a problem,
    when Oracle enters the race:

    To source the 131,072 GPU AI "supercluster,"
    Larry Ellison, appealed directly to Jensen Huang,
    during a dinner joined by Elon Musk at Nobu.
    "I would describe the dinner as me and Elon
    begging Jensen for GPUs. Please take our money.
    We need you to take more of our money. Please!”
    https://twitter.com/benitoz/status/1834741314740756621

    Meanwhile a contender in Video GenAI
    FLUX.1 from Germany, Hurray! With Open Source:

    OK. Now I'm Scared... AI Better Than Reality
    https://www.youtube.com/watch?v=cvMAVWDD-DU

    Mild Shock schrieb:

    The carbon emissions of writing and illustrating
    are lower for AI than for humans
    https://www.nature.com/articles/s41598-024-54271-x

    Perplexity CEO Aravind Srinivas says that the cost per
    query in AI models has decreased by 100x in the past
    2 years and quality will improve as hallucinations
    decrease 10x per year
    https://twitter.com/tsarnick/status/1830045611036721254

    Disclaimer: Can't verify the latter claim... need to find a paper.


    Mild Shock schrieb:

    Your new Scrum Master is here! - ChatGPT, 2023
    https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years


    LoL

    Thomas Alva Edison schrieb am Dienstag, 10. Juli 2018 um 15:28:05
    UTC+2:
    Prolog Class Signpost - American Style 2018
    https://www.youtube.com/watch?v=CxQKltWI0NA



