• Re: Port forwarding from RPi to Windows machine

    From 68g.1503@68g.1504@exr3.net to comp.sys.raspberry-pi on Fri Feb 9 20:41:11 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 2/9/24 5:00 AM, mm0fmf wrote:
    On 09/02/2024 09:30, Björn Lundin wrote:
    On 2024-02-09 10:14, 68g.1502 wrote:

    I have a script called by cron every 15mins

    #! /bin/bash

    wget -O - 'http://freedns.afraid.org/dynamic/update.php?<some_magic_UID>' >> /tmp/afraid_dns.log 2>&1
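    (For reference, the crontab entry that drives it would be something like
    the line below - the script path here is only an example.)

    */15 * * * * /home/pi/bin/update_afraid_dns.sh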


       DO gloss over the source code, just in case  :-)


    Gloss over wget's source code?
    Because that is the only one mentioned here. No daemons, just plain
    wget. And I got an example with curl, installed by the OS.





    I blocked the nym changing troll sometime back. At first I thought it
    was someone's AI experiment. But it's too full of shit and wrong so much
    of the time that even AI isn't that dumb.


    Feelings of inadequacy dude ? :-)

    Wget and the daemons for dynDNS and friends are very
    different things BTW.

    And hmmmm ... when IS the last time anyone actually
    DID look-over wget's source code ??? The best place
    to hide evil is inside something deemed "old and
    reliable" .....
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Ahem A Rivet's Shot@steveo@eircom.net to comp.sys.raspberry-pi on Sat Feb 10 07:14:27 2024
    From Newsgroup: comp.sys.raspberry-pi

    On Fri, 9 Feb 2024 20:41:11 -0500
    "68g.1503" <68g.1504@exr3.net> wrote:

    Wget and the daemons for dynDNS and friends are very
    different things BTW.

    No they are not - every dynamic DNS service I know of updates in
    essentially the same way: you make an http(s) request with the domain, an
    authentication key and optionally the IP address (otherwise it uses the
    source address of the request). Usually curl or wget is used to make the
    request; it is good practice to minimise the requests by checking for IP
    changes first, but that is not strictly necessary. My ddns daemon looks
    like this:

    #!/bin/sh
    old_external=''
    while true
    do
        external_ip=`fetch -q -o - http://ifconfig.me/ip`
        if [ "$external_ip" -a "$old_external" != "$external_ip" ]
        then
            curl -s -4 "https://<HOST>:<KEY>@dyn.dns.he.net/nic/update?hostname=<HOST>"
            old_external="$external_ip"   # remember what was last announced
        fi
        sleep 30
    done
    --
    Steve O'Hara-Smith
    Odds and Ends at http://www.sohara.org/
    For forms of government let fools contest
    Whate're is best administered is best - Alexander Pope
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Andy Burns@usenet@andyburns.uk to comp.sys.raspberry-pi on Sat Feb 10 07:40:58 2024
    From Newsgroup: comp.sys.raspberry-pi


    Ahem A Rivet's Shot wrote:

    every dynamic dns service I know of updates in essentially the same
    way, you make an http(s) request with the domain, an authentication
    key and optionally the IP address

    And then there are DNS providers which accept RFC1996 compliant NOTIFY transactions, rather than rolling their own ...

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Ahem A Rivet's Shot@steveo@eircom.net to comp.sys.raspberry-pi on Sat Feb 10 08:14:57 2024
    From Newsgroup: comp.sys.raspberry-pi

    On Sat, 10 Feb 2024 07:40:58 +0000
    Andy Burns <usenet@andyburns.uk> wrote:


    Ahem A Rivet's Shot wrote:

    every dynamic dns service I know of updates in essentially the same
    way, you make an http(s) request with the domain, an authentication
    key and optionally the IP address

    And then there are DNS providers which accept RFC1996 compliant NOTIFY transactions, rather than rolling their own ...

    That's for relaying updates from master to slave DNS servers not
    for announcing a change of dynamic IP to the master.
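    Where the provider (or your own master server) accepts client updates
    directly, the mechanism for that is RFC 2136 dynamic update - e.g. with
    nsupdate and a TSIG key; the names and address below are made up:

    nsupdate -k Khome.example.net.key <<'EOF'
    server ns1.example.net
    zone example.net
    update delete home.example.net. A
    update add home.example.net. 300 A 203.0.113.7
    send
    EOF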
    --
    Steve O'Hara-Smith
    Odds and Ends at http://www.sohara.org/
    For forms of government let fools contest
    Whate're is best administered is best - Alexander Pope
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Björn Lundin@bnl@nowhere.com to comp.sys.raspberry-pi on Sat Feb 10 11:47:32 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 2024-02-10 02:41, 68g.1503 wrote:
      And hmmmm ... when IS the last time anyone actually
      DID look-over wget's source code ??? The best place
      to hide evil is inside something deemed "old and
      reliable" .....

    Are you saying you are looking into EVERY package's source code you
    download via apt BEFORE you install it?
    Or are you saying that one should do it?

    Sounds a bit hysterical to me.

    You do know that both wget and curl are provided by Debian-based (and
    most other) distributions via their repository tool?

    If you don't trust your distribution, you should switch, or roll your
    own distribution.
    But then - I guess you have lots of work ahead of you verifying/looking
    over source code ...
    --
    /Björn

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From The Natural Philosopher@tnp@invalid.invalid to comp.sys.raspberry-pi on Sat Feb 10 10:56:05 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 10/02/2024 10:47, Björn Lundin wrote:
    On 2024-02-10 02:41, 68g.1503 wrote:
       And hmmmm ... when IS the last time anyone actually
       DID look-over wget's source code ??? The best place
       to hide evil is inside something deemed "old and
       reliable" .....

    Are you saying you are looking into EVERY package's source code you
    download via apt BEFORE you install it?
    Or are you saying that one should do it?

    Sounds a bit hysterical to me.

    You do know that both wget and curl are provided by Debian-based (and
    most other) distributions via their repository tool?

    If you don't trust your distribution, you should switch, or roll your
    own distribution.
    But then - I guess you have lots of work ahead of you verifying/looking
    over source code ...

    Not hard to build a simple wget clone using libcurl
    --
    “It is hard to imagine a more stupid decision or more dangerous way of making decisions than by putting those decisions in the hands of people
    who pay no price for being wrong.”

    Thomas Sowell

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Ahem A Rivet's Shot@steveo@eircom.net to comp.sys.raspberry-pi on Sat Feb 10 13:44:06 2024
    From Newsgroup: comp.sys.raspberry-pi

    On Sat, 10 Feb 2024 10:56:05 +0000
    The Natural Philosopher <tnp@invalid.invalid> wrote:

    Not hard to build a simple wget clone using libcurl

    Well yes libcurl does all the work.
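    Its stock front end curl(1) already does most of what wget gets used for
    day to day, so a throwaway wrapper is about all that's left to write -
    purely for illustration:

    #!/bin/sh
    # toy wget-alike: save each URL to a file named after its last path component
    for url in "$@"
    do
        curl -fsSL -o "$(basename "$url")" "$url" || exit 1
    done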
    --
    Steve O'Hara-Smith
    Odds and Ends at http://www.sohara.org/
    For forms of government let fools contest
    Whate're is best administered is best - Alexander Pope
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Björn Lundin@bnl@nowhere.com to comp.sys.raspberry-pi on Sat Feb 10 19:37:04 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 2024-02-10 11:56, The Natural Philosopher wrote:
    On 10/02/2024 10:47, Björn Lundin wrote:
    On 2024-02-10 02:41, 68g.1503 wrote:
       And hmmmm ... when IS the last time anyone actually
       DID look-over wget's source code ??? The best place
       to hide evil is inside something deemed "old and
       reliable" .....

    Are you saying you are looking into EVERY package's source code you
    download via apt BEFORE you install it?
    Or are you saying that one should do it?

    Sounds a bit hysterical to me.

    You do know that both wget and curl are provided by Debian-based (and
    most other) distributions via their repository tool?

    If you don't trust your distribution, you should switch, or roll your
    own distribution.
    But then - I guess you have lots of work ahead of you
    verifying/looking over source code ...

    Not hard to build a simple wget clone using libcurl


    Of course not - but I was referring to the fact that if you don't trust the
    binaries from your distribution, then you'd need to check ALL source code.
    Not only wget/curl - but everything from kernel all the way to web browsers

    Either you trust your distribution or not.
    --
    /Björn

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Ahem A Rivet's Shot@steveo@eircom.net to comp.sys.raspberry-pi on Sat Feb 10 19:12:44 2024
    From Newsgroup: comp.sys.raspberry-pi

    On Sat, 10 Feb 2024 19:37:04 +0100
    Björn Lundin <bnl@nowhere.com> wrote:

    Of course not - but I was referring to the fact that if you don't trust the
    binaries from your distribution, then you'd need to check ALL source code. Not only wget/curl - but everything from kernel all the way to web
    browsers

    Do all that and you are *still* open to Ken Thompson's attack via a poisoned compiler.
    --
    Steve O'Hara-Smith
    Odds and Ends at http://www.sohara.org/
    For forms of government let fools contest
    Whate're is best administered is best - Alexander Pope
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Björn Lundin@bnl@nowhere.com to comp.sys.raspberry-pi on Sat Feb 10 23:13:20 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 2024-02-10 20:12, Ahem A Rivet's Shot wrote:
    On Sat, 10 Feb 2024 19:37:04 +0100
    Björn Lundin <bnl@nowhere.com> wrote:

    Of course not - but I was referring to the fact that if you don't trust the
    binaries from your distribution, then you'd need to check ALL source code. Not only wget/curl - but everything from kernel all the way to web
    browsers

    Do all that and you are *still* open to Ken Thompson's attack via a poisoned compiler.


    Well, verifying gcc sources could be included in the above - 'between
    kernel and web browser'. At least if you are installing compilers
    --
    /Björn

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Ahem A Rivet's Shot@steveo@eircom.net to comp.sys.raspberry-pi on Sun Feb 11 05:17:08 2024
    From Newsgroup: comp.sys.raspberry-pi

    On Sat, 10 Feb 2024 23:13:20 +0100
    Björn Lundin <bnl@nowhere.com> wrote:

    On 2024-02-10 20:12, Ahem A Rivet's Shot wrote:
    On Sat, 10 Feb 2024 19:37:04 +0100
    Björn Lundin <bnl@nowhere.com> wrote:

    Of course not - but I was referring to the fact that if you don't trust the
    binaries from your distribution, then you'd need to check ALL source
    code. Not only wget/curl - but everything from kernel all the way to
    web browsers

    Do all that and you are *still* open to Ken Thompson's attack
    via a poisoned compiler.


    Well, verifying gcc sources could be included in the above - 'between
    kernel and web browser'. At least if you are installing compilers

    The point of Ken Thompson's attack is that you have to compile those
    gcc sources and that compiler can poison the binary you produce from the
    clean gcc sources. So inspecting sources doesn't help you.
    --
    Steve O'Hara-Smith
    Odds and Ends at http://www.sohara.org/
    For forms of government let fools contest
    Whate're is best administered is best - Alexander Pope
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From The Natural Philosopher@tnp@invalid.invalid to comp.sys.raspberry-pi on Sun Feb 11 08:58:50 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 11/02/2024 05:17, Ahem A Rivet's Shot wrote:
    On Sat, 10 Feb 2024 23:13:20 +0100
    Björn Lundin <bnl@nowhere.com> wrote:

    On 2024-02-10 20:12, Ahem A Rivet's Shot wrote:
    On Sat, 10 Feb 2024 19:37:04 +0100
    Björn Lundin <bnl@nowhere.com> wrote:

    Of course not - but I was referring to the fact that if you don't trust the
    binaries from your distribution, then you'd need to check ALL source
    code. Not only wget/curl - but everything from kernel all the way to
    web browsers

    Do all that and you are *still* open to Ken Thompson's attack
    via a poisoned compiler.


    Well, verifying gcc sources could be included in the above - 'between
    kernel and web browser'. At least if you are installing compilers

    The point of Ken Thompson's attack is that you have to compile those
    gcc sources and that compiler can poison the binary you produce from the clean gcc sources. So inspecting sources doesn't help you.

    Obviously one must write one's own compiler!

    A mere snap for any greybeard unix sysadmin...
    --
    “It is hard to imagine a more stupid decision or more dangerous way of making decisions than by putting those decisions in the hands of people
    who pay no price for being wrong.”

    Thomas Sowell

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Ahem A Rivet's Shot@steveo@eircom.net to comp.sys.raspberry-pi on Sun Feb 11 09:07:42 2024
    From Newsgroup: comp.sys.raspberry-pi

    On Sun, 11 Feb 2024 08:58:50 +0000
    The Natural Philosopher <tnp@invalid.invalid> wrote:

    On 11/02/2024 05:17, Ahem A Rivet's Shot wrote:
    On Sat, 10 Feb 2024 23:13:20 +0100

    The point of Ken Thompson's attack is that you have to compile
    those gcc sources and that compiler can poison the binary you produce
    from the clean gcc sources. So inspecting sources doesn't help you.

    Obviously one must write one's own compiler!

    So what do you compile it with ? If the compiler you use to compile your clean room compiler is poisoned then so will be the compiled compiler despite your clean room code. That's the Thompson trap.

    The only way out of the Thompson trap is to write a new compiler
    from scratch in assembler and assemble it by hand. Then you just have to
    trust the hardware.
    --
    Steve O'Hara-Smith
    Odds and Ends at http://www.sohara.org/
    For forms of government let fools contest
    Whate're is best administered is best - Alexander Pope
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From The Natural Philosopher@tnp@invalid.invalid to comp.sys.raspberry-pi on Sun Feb 11 10:02:46 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 11/02/2024 09:07, Ahem A Rivet's Shot wrote:
    On Sun, 11 Feb 2024 08:58:50 +0000
    The Natural Philosopher <tnp@invalid.invalid> wrote:

    On 11/02/2024 05:17, Ahem A Rivet's Shot wrote:
    On Sat, 10 Feb 2024 23:13:20 +0100

    The point of Ken Thompson's attack is that you have to compile
    those gcc sources and that compiler can poison the binary you produce
    from the clean gcc sources. So inspecting sources doesn't help you.

    Obviously one must write one's own compiler!

    So what do you compile it with ?

    You don't. You already wrote your own assembler. In machine code!

    You compile the proto-compiler with THAT.

    THEN you can use it to compile future versions of itself.

    An assembler is - or ought to be - a 1:1 translator from human-readable
    to machine-readable commands.

    It can't be corrupted.

    Unless the chip itself is compromised :-)

    So design your own chip!

    That is what happened in the early days of the ARM chips. I know people involved in writing those compilers.


    If the compiler you use to compile
    your clean room compiler is poisoned then so will be the compiled compiler despite your clean room code. That's the Thompson trap.

    The only way out of the Thompson trap is to write a new compiler
    from scratch in assembler and assemble it by hand. Then you just have to trust the hardware.

    Or design it yourself.

    The ARM is a special CPU that was designed initially to beat the 6502
    and walk all over z80s and 8080s.

    Because they couldn't afford massive wafers, it was strictly limited in hardware. All they could do was a very basic instruction set and a lot
    of on-chip registers. And a three-stage instruction pipeline and clock it
    as fast as it would go. And a 32 bit address bus. To take advantage of
    a lot more RAM that was getting cheaper by the day. The low power was
    simply a cost saving measure - a plastic cased low dissipation chip was *cheaper*.

    And a few - maybe only one - very bright boys (Sophie Wilson) looked
    at the absolute minimum of what those instructions had to do.

    Complex instructions could be written to use those registers and
    accessing them was super fast.

    In a way it mimicked what was happening in CISC but pulled the microcode
    out of the chip into external software libraries.

    And there it was. The Acorn RISC Machine. Not a widely used chip
    until mobile computing came along, when its low-power performance and
    ability to be integrated under license into anyone else's hardware chips
    made it a world beater.

    https://arstechnica.com/gadgets/2022/09/a-history-of-arm-part-1-building-the-first-chip/

    Read the whole story.
    --
    For in reason, all government without the consent of the governed is the
    very definition of slavery.

    Jonathan Swift


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Björn Lundin@bnl@nowhere.com to comp.sys.raspberry-pi on Sun Feb 11 13:22:37 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 2024-02-11 06:17, Ahem A Rivet's Shot wrote:


    The point of Ken Thompson's attack is that you have to compile those
    gcc sources and that compiler can poison the binary you produce from the clean gcc sources. So inspecting sources doesn't help you.


    Ah, oh, didn't know that.
    --
    /Björn

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Ahem A Rivet's Shot@steveo@eircom.net to comp.sys.raspberry-pi on Sun Feb 11 13:48:42 2024
    From Newsgroup: comp.sys.raspberry-pi

    On Sun, 11 Feb 2024 13:22:37 +0100
    Björn Lundin <bnl@nowhere.com> wrote:

    On 2024-02-11 06:17, Ahem A Rivet's Shot wrote:
    The point of Ken Thompson's attack is that you have to compile
    those gcc sources and that compiler can poison the binary you produce
    from the clean gcc sources. So inspecting sources doesn't help you.


    Ah, oh, didn't know that.

    It was a classic eye-opening paper; it is extremely hard to get a system that is certain not to contain any hidden surprises - you have
    to write your own assembler in machine code, your compiler in assembler and
    build your own hardware, from scratch.
    --
    Steve O'Hara-Smith
    Odds and Ends at http://www.sohara.org/
    For forms of government let fools contest
    Whate're is best administered is best - Alexander Pope
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Ahem A Rivet's Shot@steveo@eircom.net to comp.sys.raspberry-pi on Sun Feb 11 14:04:38 2024
    From Newsgroup: comp.sys.raspberry-pi

    On Sun, 11 Feb 2024 10:02:46 +0000
    The Natural Philosopher <tnp@invalid.invalid> wrote:

    So design your own chip!

    Do you trust the chip layout software not to embed a backdoor or something ? What do you run that chip layout software on and why do you
    trust that system. You'd best start from MSI TTL/CMOS logic and build your
    own system to run the chip design software (that you write or at least
    audit) to design the chips.

    The ARM is a special CPU that was designed initially to beat the 6502
    and walk all over z80s and 8080s.

    I know - I was in Cambridge and in the business when it was being
    done. I knew about the ARM before it was released, they were pretty good at keeping it out of the rumour mill but nothing is completely secret in Cambridge. Earliest rumours had Andy Hopper involved.

    The modern ARMv8 architecture bears little resemblance to the
    original ARM used in the Archimedes, it has become massively complex. Even
    so for the time the performance of the original ARM was stunning, matched
    only by the Transputer which was weird and expensive. Once thought to be
    the future of computing the Transputer is all but forgotten, while ARMv8 has become the dominant 64 bit architecture (measured in numbers of CPUs manufactured).
    --
    Steve O'Hara-Smith
    Odds and Ends at http://www.sohara.org/
    For forms of government let fools contest
    Whate're is best administered is best - Alexander Pope
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From The Natural Philosopher@tnp@invalid.invalid to comp.sys.raspberry-pi on Sun Feb 11 14:52:15 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 11/02/2024 14:04, Ahem A Rivet's Shot wrote:
    On Sun, 11 Feb 2024 10:02:46 +0000
    The Natural Philosopher <tnp@invalid.invalid> wrote:

    So design your own chip!

    Do you trust the chip layout software not to embed a backdoor or something ? What do you run that chip layout software on and why do you
    trust that system. You'd best start from MSI TTL/CMOS logic and build your own system to run the chip design software (that you write or at least
    audit) to design the chips.


    I think that it's pretty difficult to encode an invisible backdoor in the silicon and not have it spotted at some fairly early stage.

    So many of these 'threat narratives' are, when examined closely,
    implausible to the point of downright impossibility.

    You can examine the machine code that your compiler and linker
    assembles. And people do. I certainly have done. If it doesn't match
    what you asked for in the high level language, there are questions to be answered.
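    (For anyone who hasn't tried it, it is a couple of commands per object
    file - the file names below are just examples:)

    cc -O2 -c suspect.c
    objdump -d suspect.o               # disassemble what the compiler actually emitted
    cc -O2 -S -o suspect.s suspect.c   # or ask for the assembler output directly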

    And of course the lower level the language the easier it is to check.
    One reason why I don't like C++ and friends.

    And likewise, if the silicon is updating bits of memory you didn't ask
    it to, or doing stuff that you don't recognise as valid...

    Let's face it, I was watching an illicit free stream of an F1 race and
    suddenly frames started dropping... and I looked at my network widget and
    saw my uplink to the internet was being saturated. I was, it seemed, part
    of some JavaScript botnet. It finished as soon as I closed the browser
    window.

    These things get *noticed*.

    The ARM is a special CPU that was designed initially to beat the 6502
    and walk all over z80s and 8080s.

    I know - I was in Cambridge and in the business when it was being
    done. I knew about the ARM before it was released, they were pretty good at keeping it out of the rumour mill but nothing is completely secret in Cambridge. Earliest rumours had Andy Hopper involved.

    The modern ARMv8 architecture bears little resemblance to the
    original ARM used in the Archimedes, it has become massively complex.

    True, but not germane to the point that it's possible for a very small
    number of people to create a CPU, write an assembler, write a compiler in assembler and bootstrap their way to a known (to them, at least) good,
    secure chipset and toolchain.

    And the antidote to all these people who assure you that lizards are
    running the earth and bugging their brains with embedded chips is to
    tell them to do precisely that.

    Or assure them that the only reason they think that is *because the
    lizards want them to*.

    :-)


    --
    "When one man dies it's a tragedy. When thousands die it's statistics."

    Josef Stalin


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From mm0fmf@none@invalid.com to comp.sys.raspberry-pi on Sun Feb 11 14:53:16 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 11/02/2024 14:04, Ahem A Rivet's Shot wrote:
    On Sun, 11 Feb 2024 10:02:46 +0000
    The Natural Philosopher <tnp@invalid.invalid> wrote:

    So design your own chip!

    Do you trust the chip layout software not to embed a backdoor or something ? What do you run that chip layout software on and why do you
    trust that system. You'd best start from MSI TTL/CMOS logic and build your own system to run the chip design software (that you write or at least
    audit) to design the chips.

    The ARM is a special CPU that was designed initially to beat the 6502
    and walk all over z80s and 8080s.

    I know - I was in Cambridge and in the business when it was being
    done. I knew about the ARM before it was released, they were pretty good at keeping it out of the rumour mill but nothing is completely secret in Cambridge. Earliest rumours had Andy Hopper involved.

    The modern ARMv8 architecture bears little resemblance to the
    original ARM used in the Archimedes, it has become massively complex. Even
    so for the time the performance of the original ARM was stunning, matched only by the Transputer which was weird and expensive. Once thought to be
    the future of computing the Transputer is all but forgotten, while ARMv8 has become the dominant 64 bit architecture (measured in numbers of CPUs manufactured).


    I have a Transputer T9000 coffee mug somewhere. It has 4 handles spaced
    90deg apart... for those that know ;-)
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Ahem A Rivet's Shot@steveo@eircom.net to comp.sys.raspberry-pi on Sun Feb 11 15:31:09 2024
    From Newsgroup: comp.sys.raspberry-pi

    On Sun, 11 Feb 2024 14:53:16 +0000
    mm0fmf <none@invalid.com> wrote:

    I have a Transputer T9000 coffee mug somewhere. It has 4 handles spaced 90deg apart... for those that know ;-)

    When I first read about the Transputer I wanted to hook 16 of them
    up into a hypercube.
    --
    Steve O'Hara-Smith
    Odds and Ends at http://www.sohara.org/
    For forms of government let fools contest
    Whate're is best administered is best - Alexander Pope
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From mm0fmf@none@invalid.com to comp.sys.raspberry-pi on Sun Feb 11 16:37:59 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 11/02/2024 15:31, Ahem A Rivet's Shot wrote:
    On Sun, 11 Feb 2024 14:53:16 +0000
    mm0fmf <none@invalid.com> wrote:

    I have a Transputer T9000 coffee mug somewhere. It has 4 handles spaced
    90deg apart... for those that know ;-)

    When I first read about the Transputer I wanted to hook 16 of them
    up into a hypercube.


    I have programmed hypercubes of Analog Devices SHARC processors. The
    number crunching power was "awesome" back in the late 90s.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Andy Burns@usenet@andyburns.uk to comp.sys.raspberry-pi on Mon Feb 12 16:14:25 2024
    From Newsgroup: comp.sys.raspberry-pi

    mm0fmf wrote:

    I have a Transputer T9000 coffee mug somewhere. It has 4 handles spaced 90deg apart... for those that know 😉

    I wanted to do my final year project using transputers, but unfortunately it turned out the department only had one dev board and couldn't get any
    more ... ended up using Apollo workstations instead.

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From druck@news@druck.org.uk to comp.sys.raspberry-pi on Mon Feb 12 21:46:52 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 11/02/2024 14:52, The Natural Philosopher wrote:
    I think that it's pretty difficult to encode an invisible backdoor in the silicon and not have it spotted at some fairly early stage.

    Then don't hide it, have it there in plain sight - like the Intel
    Management Engine, and the AMD equivalent.

    So many of these 'threat narratives' are, when examined closely,
    implausible to the point of downright impossibility.

    The more you examine the details of the IME that we know about, the more
    worrying it gets. It's a CPU within the CPU running closed software with
    higher privilege than the main CPU, able to access all memory and any hardware
    and to create its own network connections.

    You can examine the machine code that your compiler and linker
    assembles. And people do. I certainly have done. If it doesn't match
    what you asked for in the high level language, there are questions to be answered.

    You can look at the assembler of the main CPU as much as you like, but
    you've no idea what is running on the IME.

    Luckily ARM doesn't have a management engine - yet!

    ---druck
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Theo@theom+news@chiark.greenend.org.uk to comp.sys.raspberry-pi on Mon Feb 12 22:17:42 2024
    From Newsgroup: comp.sys.raspberry-pi

    druck <news@druck.org.uk> wrote:
    Luckily ARM doesn't have a management engine - yet!

    Arm doesn't have a management engine, because Arm (mostly) don't make chips. That's up to Qualcomm, Samsung or whoever. You don't get a full datasheet
    for what's in one of those.

    In the case of the original Pi, the Arm *is* the management engine. It was used for managing the GPU, which was the main function of the chip originally.

    (well sorta, the original Broadcom chips didn't have an Arm in them)

    Theo
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From The Natural Philosopher@tnp@invalid.invalid to comp.sys.raspberry-pi on Tue Feb 13 09:13:36 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 12/02/2024 21:46, druck wrote:
    On 11/02/2024 14:52, The Natural Philosopher wrote:
    I think that it's pretty difficult to encode an invisible backdoor in
    the silicon and not have it spotted at some fairly early stage.

    Then don't hide it, have it there in plain sight - like the Intel
    Management Engine, and the AMD equivalent.

    So many of these 'threat narratives' are, when examined closely,
    implausible to the point of downright impossibility.

    The more you examine the details of the IME that we know about, the more worrying it gets. It's a CPU within the CPU running closed software with
    higher privilege than the main CPU, able to access all memory and any hardware
    and to create its own network connections.

    You can examine the machine code that your compiler and linker
    assembles. And people do. I certainly have done. If it doesn't match
    what you asked for in the high level language, there are questions to
    be answered.

    You can look at the assembler of the main CPU as much as you like, but you've no idea what is running on the IME.

    Luckily ARM doesn't have a management engine - yet!

    ---druck

    Indeed. CISC processors running microcode are definitely in the 'secret software' class.
    Which is the nice thing about ARM. Keep it simple and run it blazingly
    fast. Although my friend who worked on the first chip at Acorn says it
    is massively more complex today than the original incarnation.

    But even microcode can be disassembled if you have the right kit and
    skills...
    --
    "Corbyn talks about equality, justice, opportunity, health care, peace, community, compassion, investment, security, housing...."
    "What kind of person is not interested in those things?"

    "Jeremy Corbyn?"


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From The Natural Philosopher@tnp@invalid.invalid to comp.sys.raspberry-pi on Tue Feb 13 09:16:03 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 12/02/2024 22:17, Theo wrote:
    druck <news@druck.org.uk> wrote:
    Luckily ARM doesn't have a management engine - yet!

    Arm doesn't have a management engine, because Arm (mostly) don't make chips. That's up to Qualcomm, Samsung or whoever. You don't get a full datasheet for what's in one of those.

    In the case of the original Pi, the Arm *is* the management engine. It was used for managing the GPU, which was the main function of the chip originally.

    (well sorta, the original Broadcom chips didn't have an Arm in them)

    Theo

    Tell me more. This is a corner of history I am only vaguely familiar
    with. Wasn't the original chip a failed set-top-box chip? Which is why
    it always had HDMI ...
    --
    "First, find out who are the people you can not criticise. They are your oppressors."
    - George Orwell

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Theo@theom+news@chiark.greenend.org.uk to comp.sys.raspberry-pi on Tue Feb 13 10:55:58 2024
    From Newsgroup: comp.sys.raspberry-pi

    The Natural Philosopher <tnp@invalid.invalid> wrote:
    On 12/02/2024 22:17, Theo wrote:
    druck <news@druck.org.uk> wrote:
    Luckily ARM doesn't have a management engine - yet!

    Arm doesn't have a management engine, because Arm (mostly) don't make chips.
    That's up to Qualcomm, Samsung or whoever. You don't get a full datasheet for what's in one of those.

    In the case of the original Pi, the Arm *is* the management engine. It was used for managing the GPU, which was the main function of the chip originally.

    (well sorta, the original Broadcom chips didn't have an Arm in them)

    Theo

    Tell me more. This is a corner of history I am only vaguely familiar
    with. Wasn't the original chip a failed set-top-box chip? Which is why
    it always had HDMI ...

    https://en.wikipedia.org/wiki/Alphamosaic
    https://en.wikipedia.org/wiki/VideoCore

    The original Videocore graphics processor was designed by a Cambridge
    company called Alphamosaic. They were then bought by Broadcom in 2004.
    They were used as GPUs that were in addition to the primary processor in the system - eg in phones and media players (the 5th gen video iPod has a
    Videocore 2, the Nokia 808 Pureview has a VC4 [*]). There wasn't a CPU in
    them at this point - all the software was running on the GPU, which would communicate with the host CPU on a separate chip. For example the video
    iPod used an ARM7 CPU (PortalPlayer 5021C-TDF, dual ARM7 at about 75MHz).

    When the Pi project was coming together, the folks at Cambridge Broadcom
    were looking for a suitable chip. The VC4 was already in production and did roughly what they wanted but had no CPU, so the decision was made to modify
    the existing chip. The story goes that they raided their parts bin, found
    an already-decade-old ARM1176 and slapped it in the VC4 - I heard the
    timeline for this was a month. The rest is history.

    This is why the Pi Zero and 1 have the ancient ARMv6 CPU architecture, where other vendors were already shipping ARMv7 CPUs (Cortex A8 and similar)
    around 2007 or so. And also why the boot process on Pi 1-3 is backwards
    from what we're used to: they boot the GPU first and only later does the GPU start up the Arm. The GPU is the main processor in the system, the Arm is
    a secondary processor.

    Theo

    [*] I believe the image processing stack for the 808 Pureview with its then-massive sensor was written by Broadcom and/or Nokia Cambridge folks,
    some of whom later worked on the Pi camera interface.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From scott@scott@alfter.diespammersdie.us (Scott Alfter) to comp.sys.raspberry-pi on Tue Feb 13 17:46:22 2024
    From Newsgroup: comp.sys.raspberry-pi

    In article <uqfbs0$20vgk$3@dont-email.me>,
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    Indeed. CISC processors running microcode are definitely in the 'secret software' class.
    Which is the nice thing about ARM. Keep it simple and run it blazingly
    fast. Although my friend who worked on the first chip at Acorn says it
    is massively more complex today than the original incarnation.

    I've been poking around at these lately:

    https://github.com/BrunoLevy/learn-fpga/
    https://github.com/bl0x/learn-fpga-amaranth/

    Two different HDLs, but with the same goal: start from nothing to bring up a RISC-V CPU on an FPGA. In its simplest form, you get only 11 instructions...not even integer multiply. It's even simpler than the 6502
    on which I got through high school and most of college.
    --
    _/_
    / v \ Scott Alfter (remove the obvious to send mail)
    (IIGS( https://alfter.us/ Top-posting!
    \_^_/ >What's the most annoying thing on Usenet?
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Andrew Smallshaw@andrews@sdf.org to comp.sys.raspberry-pi on Wed Feb 14 21:02:37 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 2024-02-11, The Natural Philosopher <tnp@invalid.invalid> wrote:

    An assembler is - or ought to be - a 1:1 translator from human-readable
    to machine-readable commands.

    There is _plenty_ of scope for an assembler to choose whatever
    opcodes it wants. Consider the simplest case of a NOP - some
    architectures have a specific NOP opcode; on others it is simply a
    shorthand for an operation that does nothing, e.g. add 0 to a
    register and so on. Regardless there are plenty of alternatives
    that can be chosen. Things get more opaque once addressing modes,
    the size of jumps, immediate operands and so on are considered.
    I'm reminded of the warning in the A86 manual "this assembler
    generates a unique fingerprint in these cases, which I can detect
    in the binary whether you are registered or not".

    If the compiler you use to compile
    your clean room compiler is poisoned then so will be the compiled compiler despite your clean room code. That's the Thompson trap.

    There's a lot of mysticism that has been attached to that over the
    years, mostly by people who have never read the original report.
    It wasn't some magical AI code fairy that could identify that you
    were compiling any abstract compiler or login program and automatically
    conjure up appropriate code for the circumstances; it used
    fingerprinting IIRC at the token level (i.e. after the code is
    broken into "words", but before parsing to figure out how those
    "words" are associated with each other). An independent implementation
    of functionally equivalent code, or even the same code after heavy
    edits over time, would not be affected.

    The ARM is a special CPU that was designed initially to beat the 6502
    and walk all over z80s and 8080s.

    No, it was designed for the Archimedes, pure and simple. The 8086
    was already on the market, thus the rest of the industry essentially
    bitters are few and far between. There's 8086-80286, MSP430, and...
    err... Well there's the original 68000 but that was 32 bit from a
    software viewpoint.

    Because they couldn't afford massive wafers, it was strictly limited in hardware. All they could do was a very basic instruction set and a lot
    of on-chip registers. And a three-stage instruction pipeline and clock it
    as fast as it would go. And a 32 bit address bus. To take advantage of
    a lot more RAM that was getting cheaper by the day. The low power was
    simply a cost saving measure - a plastic cased low dissipation chip was *cheaper*.

    And a few - maybe only one - very bright boys (Sophie Wilson) looked
    at the absolute minimum of what those instructions had to do.

    That's a very romanticised view; it often happens in science and
    engineering when one of the characters has an interesting personal
    story - Alan Turing and Stephen Hawking would be others that come
    to mind. The feature set was a committee effort; the high-level
    design was Roger/Sophie Wilson's and the low-level Steve Furber's.
    But as above, it was designed for the Archimedes, no more and no
    less.

    The primary design objectives were a low per-unit cost (not design
    cost as sometimes stated) and a minimum of glue logic between major
    subsystems. I recall seeing a "triangle" diagram with the corners
    cut off, the centre of the triangle was the CPU, the corners were
    memory controller, graphics, and peripheral bus.

    You're correct to identify a plastic package as a design criterion,
    from memory the target was £2/chip which implied that over a ceramic
    one. None of the group had any chip design experience, they knew
    a plastic package meant no more than a 1-2W power dissipation, but
    had no idea what that meant in terms of design. Thus they optimised
    for power at every opportunity and undercut the target by orders
    of magnitude.

    The other dimension to lowering the cost of the package was reducing
    pin out to the bare minimum, hence the 24 bit (not 32 bit) address
    bus. Size of the wafer was an irrelevance since they never baked
    their own chips, die size yes they wanted to keep small to lower
    cost but not an over-riding consideration - it wasn't that much
    smaller than many other designs of the period.

    This is from my lecture notes and also a couple of pints while at
    Uni 25 years ago. The lecturer for hardware design was none other
    than Steve Furber who co-designed and literally wrote the book on
    the thing.
    --
    Andrew Smallshaw
    andrews@sdf.org
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Theo@theom+news@chiark.greenend.org.uk to comp.sys.raspberry-pi on Wed Feb 14 22:37:37 2024
    From Newsgroup: comp.sys.raspberry-pi

    Andrew Smallshaw <andrews@sdf.org> wrote:
    The primary design objectives were a low per-unit cost (not design
    cost as sometimes stated) and a minimum of glue logic between major subsystems. I recall seeing a "triangle" diagram with the corners
    cut off, the centre of the triangle was the CPU, the corners were
    memory controller, graphics, and peripheral bus.

    You're correct to identify a plastic package as a design criterion,
    from memory the target was £2/chip which implied that over a ceramic
    one. None of the group had any chip design experience, they knew
    a plastic package meant no more than a 1-2W power dissipation, but
    had no idea what that meant in terms of design. Thus they optimised
    for power at every opportunity and undercut the target by orders
    of magnitude.

    The other dimension to lowering the cost of the package was reducing
    pin out to the bare minimum, hence the 24 bit (not 32 bit) address
    bus. Size of the wafer was an irrelevance since they never baked
    their own chips, die size yes they wanted to keep small to lower
    cost but not an over-riding consideration - it wasn't that much
    smaller than many other designs of the period.

    This is from my lecture notes and also a couple of pints while at
    Uni 25 years ago. The lecturer for hardware design was none other
    than Steve Furber who co-designed and literally wrote the book on
    the thing.

    That's about right - ARM1/ARM2 was designed specifically for the Archimedes, and various design decisions that remain in AArch32 are because of specific
    designed to make best use of FPM DRAM. Every instruction took two cycles except some where sequential memory accesses could be completed in a single cycle - hence LDM/STM instructions.

    Matt Evans (another of Steve's former students) did a good talk on this at
    CCC a few years ago: https://media.ccc.de/v/36c3-10703-the_ultimate_acorn_archimedes_talk

    Theo
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From druck@news@druck.org.uk to comp.sys.raspberry-pi on Thu Feb 15 21:26:17 2024
    From Newsgroup: comp.sys.raspberry-pi

    On 14/02/2024 22:37, Theo wrote:
    That's about right - ARM1/ARM2 was designed specifically for the Archimedes, and various design decisions that remain in AArch32 are because of specific constraints on that platform. For example ARM2 had no cache and was
    designed to make best use of FPM DRAM. Every instruction took two cycles except some where sequential memory accesses could be completed in a single cycle - hence LDM/STM instructions.

    I know what you mean there, but just to clarify, the majority of
    arithmetic instructions took one cycle (except an extra cycle when using
    shift by a register or where the PC was the destination, and multiplies
    were up to 3 cycles).

    Memory loads and stores were two cycles, one to set up the transfer and
    one to do the transfer, but the memory system allowed a read or write to
    the next word in just one cycle. So the LDM and STM instructions were
    included which could transfer from 1 to 16 registers at a cost of 1 +
    number of registers transferred cycles (as long as it was within the
    same memory page). That did make quite a high upper bound on the
    interrupt latency though, which was an issue for real-time use.
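    (By that arithmetic a ten-register LDM costs 1 + 10 = 11 cycles, where
    ten separate loads would cost 20.)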

    Strange how I can remember that from 35 years ago, but then you only had
    to know a few classes of instruction timings in order to be able to
    write highly optimised assembler. It became more and more complex with
    each subsequent ARM generation, and has been best left to a compiler for
    quite a while.

    ---druck
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.sys.raspberry-pi on Mon Feb 19 05:39:00 2024
    From Newsgroup: comp.sys.raspberry-pi

    On Fri, 9 Feb 2024 20:41:11 -0500, 68g.1503 wrote:

    And hmmmm ... when IS the last time anyone actually
    DID look-over wget's source code ??? The best place
    to hide evil is inside something deemed "old and
    reliable" .....

    That’s what they want you to think.
    --- Synchronet 3.20a-Linux NewsLink 1.114