• In-Memory Computing

    From Lawrence D'Oliveiro@ldo@nz.invalid to comp.arch,comp.lang.misc on Wed Nov 13 06:30:34 2024

    Has anyone heard of this idea? It apparently delegates some
    lower-level computing functions directly to the memory itself, to get
    a speedup over doing everything in the CPU. It seems to be an
    outgrowth of the “memristor” component that was discovered/invented by
    some researchers at HP a few decades ago.

    The researchers in this paper have come up with a Python library to
    make the technology easier to write programs for.

    <https://www.tomshardware.com/pc-components/cpus/researchers-develop-python-code-that-is-compatible-with-in-memory-computing-python-commands-converted-into-machine-code-to-be-executed-in-the-computers-memory>
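
    The article does not show the library's actual API, so the following is
    only a toy NumPy sketch of the underlying programming model, with every
    name invented for illustration: a simulated memory array whose rows can
    be NOR-ed together "in place", so the bulk of the work never has to
    pass through a CPU.

        import numpy as np

        # Toy model of a memristive memory array that can NOR whole rows
        # "in place".  Illustrative only; this is not the API of the
        # library described in the article.
        class ToyPIMArray:
            def __init__(self, rows, cols):
                self.cells = np.zeros((rows, cols), dtype=bool)  # one bit per cell

            def write_row(self, r, bits):
                self.cells[r, :len(bits)] = bits

            def nor_rows(self, a, b, dest):
                # In memristive "stateful logic" the cells themselves compute
                # the NOR; here we only model the result.  The whole row is
                # processed in parallel, with no per-bit traffic to a CPU.
                self.cells[dest] = ~(self.cells[a] | self.cells[b])

        mem = ToyPIMArray(rows=4, cols=8)
        mem.write_row(0, [1, 0, 1, 1, 0, 0, 1, 0])
        mem.write_row(1, [0, 0, 1, 0, 1, 0, 1, 1])
        mem.nor_rows(0, 1, dest=2)            # row 2 := NOR(row 0, row 1)
        print(mem.cells[2].astype(int))       # [0 1 0 0 0 1 0 0]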
  • From Terje Mathisen@terje.mathisen@tmsw.no to comp.arch,comp.lang.misc on Wed Nov 13 07:46:28 2024

    Lawrence D'Oliveiro wrote:
    Has anyone heard of this idea? It apparently delegates some
    lower-level computing functions directly to the memory itself, to get
    a speedup over doing everything in the CPU. It seems to be an
    outgrowth of the “memristor” component that was discovered/invented by
    some researchers at HP a few decades ago.

    Delegating memory operations to lower layers in the hierarchy is one
    of those wheel of re-incarnation ideas that pop back up every decade
    or two.

    You typically start with shared atomic operations and very simple
    computation, like a LOCK XADD, then once you are on this slippery slope
    you quickly decide to add more advanced capabilities, quickly ending up
    with something like Bunny Chang's distributed virtual machine which can securely distribute its code anywhere in the cluster.
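
    For anyone who has not met it, LOCK XADD is x86's atomic fetch-and-add:
    return the old value and add to it as one indivisible step. A minimal
    Python sketch of the same semantics, with an explicit lock standing in
    for the hardware bus lock (the function names are mine, not a standard
    API):

        from multiprocessing import Process, Value

        def fetch_and_add(counter, delta):
            # Software analogue of LOCK XADD: read the old value and add
            # delta as one indivisible step.  The lock attached to the
            # shared Value supplies the atomicity that the LOCK prefix
            # supplies in hardware.
            with counter.get_lock():
                old = counter.value
                counter.value = old + delta
                return old

        def worker(counter, n):
            for _ in range(n):
                fetch_and_add(counter, 1)

        if __name__ == "__main__":
            counter = Value("q", 0)           # shared 64-bit integer
            procs = [Process(target=worker, args=(counter, 10_000))
                     for _ in range(4)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            print(counter.value)              # 40000: no lost updates
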
    Taken to its extreme, any cloud datacenter works this way, but at a far
    higher granularity.
    Terje
    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.arch,comp.lang.misc on Wed Nov 13 08:19:29 2024

    On Wed, 13 Nov 2024 07:46:28 +0100, Terje Mathisen wrote:

    Taken to its extreme, any cloud datacenter works this way, but at a far higher granularity.

    Maybe you’re thinking of supercomputers. They have a processor
    interconnect which is sufficiently fast as to make the collection of nodes behave more like a single machine. Just having a cloud data centre filled
    with nodes on a conventional network isn’t quite the same thing.
  • From Thomas Koenig@tkoenig@netcologne.de to comp.arch,comp.lang.misc on Wed Nov 13 09:14:24 2024

    Terje Mathisen <terje.mathisen@tmsw.no> schrieb:

    You typically start with shared atomic operations and very simple computation, like a LOCK XADD, then once you are on this slippery slope
    you quickly decide to add more advanced capabilities, quickly ending up
    with something like Bunny Chang's distributed virtual machine which can securely distribute its code anywhere in the cluster.

    What is Bunny Chang's distributed virtual machine?
  • From John Ames@commodorejohn@gmail.com to comp.arch,comp.lang.misc on Wed Nov 13 07:44:50 2024

    On Wed, 13 Nov 2024 06:30:34 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    <https://www.tomshardware.com/pc-components/cpus/researchers-develop-python-code-that-is-compatible-with-in-memory-computing-python-commands-converted-into-machine-code-to-be-executed-in-the-computers-memory>

    Quoth the article:

    Compute code written for conventional computers has purportedly
    "barely changed" since the 1940s.

    I'm curious to hear what the practical implications of the real thing
    are, but that right there is some laughable clickbait garbage written
    by someone who has absolutely no idea what they're talking about.

  • From mitchalsup@mitchalsup@aol.com (MitchAlsup1) to comp.arch,comp.lang.misc on Wed Nov 13 19:38:13 2024

    On Wed, 13 Nov 2024 6:30:34 +0000, Lawrence D'Oliveiro wrote:

    Has anyone heard of this idea? It apparently delegates some
    lower-level computing functions directly to the memory itself, to get
    a speedup over doing everything in the CPU. It seems to be an
    outgrowth of the “memristor” component that was discovered/invented by some researchers at HP a few decades ago.

    Denelcor: 1980:: had atomic memory ops in memory; so at least 40 years old.
  • From Terje Mathisen@terje.mathisen@tmsw.no to comp.arch,comp.lang.misc on Thu Nov 14 11:51:29 2024

    Thomas Koenig wrote:
    Terje Mathisen <terje.mathisen@tmsw.no> schrieb:

    You typically start with shared atomic operations and very simple
    computation, like a LOCK XADD, then once you are on this slippery slope
    you quickly decide to add more advanced capabilities, quickly ending up
    with something like Bunny Chang's distributed virtual machine which can
    securely distribute its code anywhere in the cluster.

    What is Bunny Chang's distributed virtual machine?


    Oops, typo! I meant Andrew "Bunnie" Huang, not Chang. :-(

    He's the guy who got famous for reverse engineering all the encrypted
    Xbox stuff. He's done a lot since then, including one time when he
    talked about hacking thumb drives to do basically anything:

    Every single flash drive contains a full 32-bit CPU. Upon first boot
    it surveys all the connected flash blocks and tests them, eventually
    deciding how many are good and then picking the correct size for the
    drive.

    There is nothing that prevents such a CPU from being reprogrammed to
    also act as a keyboard or mouse input device, or to hide half the
    available disk space and use it to keep copies of everything that gets
    deleted from the visible part.
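
    A toy model of that first-boot block survey (invented numbers, nothing
    like real controller firmware, but it shows the shape of the job):

        import random

        BLOCK_SIZE = 128 * 1024          # a typical NAND erase-block size

        def test_block(block_no):
            # Stand-in for a write/read-back test; about 2% of blocks "fail".
            return random.random() > 0.02

        def survey(num_blocks):
            # Build the good-block list, then size the drive that gets exposed.
            good = [b for b in range(num_blocks) if test_block(b)]
            return good, len(good) * BLOCK_SIZE

        good_blocks, capacity = survey(num_blocks=8192)
        print(f"{len(good_blocks)} good blocks -> {capacity // (1024 * 1024)} MiB exposed")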

    Terje
    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.arch,comp.lang.misc on Fri Nov 15 03:19:55 2024

    On Wed, 13 Nov 2024 19:38:13 +0000, MitchAlsup1 wrote:

    On Wed, 13 Nov 2024 6:30:34 +0000, Lawrence D'Oliveiro wrote:

    Has anyone heard of this idea? It apparently delegates some
    lower-level computing functions directly to the memory itself, to get
    a speedup over doing everything in the CPU. It seems to be an
    outgrowth of the “memristor” component that was discovered/invented by
    some researchers at HP a few decades ago.

    Denelcor: 1980:: had atomic memory ops in memory; so at least 40 years old.

    They didn’t have memristors back then, though. This paper uses
    memristors in place of traditional DRAM/SRAM memory cells. The
    resulting read/write networks look remarkably like old-style
    magnetic-core memories, except that these cells can act as logic gates
    to perform operations in parallel.
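
    A sketch of what "operations in parallel" can buy, assuming only a
    row-wide Boolean step (the kind a memristive array can apply to every
    column at once): store the numbers bit-sliced, one bit position per
    row, and a handful of row operations adds all of them simultaneously.
    Toy NumPy model, all details invented:

        import numpy as np

        WORDS, BITS = 4, 8

        def to_slices(values):
            # rows[k][w] = bit k of word w  (bit-sliced storage)
            return np.array([[(v >> k) & 1 for v in values]
                             for k in range(BITS)], dtype=bool)

        def from_slices(rows):
            return [sum(int(rows[k][w]) << k for k in range(BITS))
                    for w in range(WORDS)]

        def add_sliced(a, b):
            # Ripple-carry add: each of the BITS steps is a row-wide Boolean
            # operation, i.e. it advances the addition for every word at once.
            out = np.zeros_like(a)
            carry = np.zeros(WORDS, dtype=bool)
            for k in range(BITS):
                out[k] = a[k] ^ b[k] ^ carry
                carry = (a[k] & b[k]) | (carry & (a[k] ^ b[k]))
            return out

        x = to_slices([3, 100, 27, 200])
        y = to_slices([5, 28, 99, 55])
        print(from_slices(add_sliced(x, y)))   # [8, 128, 126, 255]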
  • From mitchalsup@mitchalsup@aol.com (MitchAlsup1) to comp.arch,comp.lang.misc on Sun Nov 17 21:32:29 2024

    On Fri, 15 Nov 2024 3:19:55 +0000, Lawrence D'Oliveiro wrote:

    On Wed, 13 Nov 2024 19:38:13 +0000, MitchAlsup1 wrote:

    On Wed, 13 Nov 2024 6:30:34 +0000, Lawrence D'Oliveiro wrote:

    Has anyone heard of this idea? It apparently delegates some
    lower-level computing functions directly to the memory itself, to get
    a speedup over doing everything in the CPU. It seems to be an
    outgrowth of the “memristor” component that was discovered/invented by
    some researchers at HP a few decades ago.

    Denelcor: 1980:: had atomic memory ops in memory; so at least 40 years old.

    They didn’t have memristors back then, though. This paper uses
    memristors in place of traditional DRAM/SRAM memory cells. The
    resulting read/write networks look remarkably like old-style
    magnetic-core memories, except that these cells can act as logic gates
    to perform operations in parallel.

    In a Sph. project, we were given a ferrite core (~1 pound) and were told
    to use it as a counter, adding up when a new car entered a parking lot,
    and subtracting down when a car left. So, doing arithmetic in ferrite
    cores has been around for a very long time, indeed. {{OH, BTW, the
    purpose of the count was to prevent overflowing of the parking lot}}
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.arch,comp.lang.misc on Sun Nov 17 23:17:54 2024

    On Sun, 17 Nov 2024 21:32:29 +0000, MitchAlsup1 wrote:

    ... doing arithmetic in ferrite cores has been around for a very long
    time, indeed.

    Memristors are a new kind of electronic component, where the resistance is proportional to the integral of applied voltage over time.
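
    A toy numerical version of that statement, with invented parameters
    (and far better behaved than any real device, as discussed further down
    the thread): the conductance tracks the running integral of the applied
    voltage, clamped between two limits.

        import numpy as np

        G_MIN, G_MAX = 1e-4, 1e-2     # conductance limits, siemens
        K = 5e-3                      # conductance change per volt-second

        def simulate(v_samples, dt):
            # Flux-controlled toy memristor: G depends on the integral of v(t).
            flux = 0.0
            g_hist, i_hist = [], []
            for v in v_samples:
                flux += v * dt                          # integral of voltage
                g = min(max(G_MIN + K * flux, G_MIN), G_MAX)
                g_hist.append(g)
                i_hist.append(g * v)                    # i = G(flux) * v
            return np.array(g_hist), np.array(i_hist)

        t = np.linspace(0.0, 2.0, 2001)
        v = 1.5 * np.sin(2 * np.pi * t)                 # 1 Hz sine drive
        g, i = simulate(v, t[1] - t[0])
        print(f"conductance swings between {g.min():.1e} S and {g.max():.1e} S")
        # Plotting i against v would show the pinched hysteresis loop that
        # is usually taken as the memristor's signature (i = 0 whenever v = 0).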
  • From John Levine@johnl@taugh.com to comp.arch,comp.lang.misc on Mon Nov 18 15:25:37 2024

    According to Lawrence D'Oliveiro <ldo@nz.invalid>:
    On Sun, 17 Nov 2024 21:32:29 +0000, MitchAlsup1 wrote:

    ... doing arithmetic in ferrite cores has been around for a very long
    time, indeed.

    Memristors are a new kind of electronic component, where the
    resistance is proportional to the integral of applied voltage over
    time.

    This is a rather capacious version of "new" since memristors were invented in 1971.

    My impression is that they are real, they work, but they don't work well enough to
    replace conventional components.

    There is a very long article about them in Wikipedia.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly
  • From Michael S@already5chosen@yahoo.com to comp.arch,comp.lang.misc on Mon Nov 18 20:09:17 2024

    On Mon, 18 Nov 2024 15:25:37 -0000 (UTC)
    John Levine <johnl@taugh.com> wrote:

    According to Lawrence D'Oliveiro <ldo@nz.invalid>:
    On Sun, 17 Nov 2024 21:32:29 +0000, MitchAlsup1 wrote:

    ... doing arithmetic in ferrite cores has been around for a very
    long time, indeed.

    Memristors are a new kind of electronic component, where the
    resistance is proportional to the integral of applied voltage over
    time.

    This is a rather capacious version of "new" since memristors were
    invented in 1971.

    My impression is that they are real, they work, but they don't work
    well enough to replace conventional components.

    There is a very long article about them in Wikipedia.

    My impression from the Wikipedia article is different: memristors are
    not real. That is, there are no physical devices that approximate the
    mathematical abstraction proposed in 1971. There are some devices that
    look like that, but only until a researcher starts to pay attention to
    details. Once the details get attention, it typically turns out that
    the device resistance does not really depend on charge, but on
    something else that happens to correlate with charge over larger or
    smaller parts of the characteristic curves.

    What does exist and work, though not well enough relative to
    conventional tech, are the various variants of ReRAM. But the memory
    elements of those ReRAMs are *not* memristors. That applies as much to
    HP's not-quite-working "memristor" ReRAM as to all the other ReRAMs in
    existence, including those that work relatively better.

  • From Thomas Koenig@tkoenig@netcologne.de to comp.arch on Mon Nov 18 18:50:46 2024

    MitchAlsup1 <mitchalsup@aol.com> schrieb:

    In a Sph. project, we were given a ferrite core (~1 pound) and were told
    to use it as a counter, adding up when a new car entered a parking lot,
    and subtracting down when a car left. So, doing arithmetic in ferrite
    cores has been around for a very long time, indeed. {{OH, BTW, the
    purpose of the count was to prevent overflowing of the parking lot}}

    Sounds like an interesting project, I assume you could add some
    extra logic :-)

    Were there enough cores so you could use a one-hot representation,
    or did you have to do something more elaborate?
  • From mitchalsup@mitchalsup@aol.com (MitchAlsup1) to comp.arch on Mon Nov 18 21:21:11 2024

    On Mon, 18 Nov 2024 18:50:46 +0000, Thomas Koenig wrote:

    MitchAlsup1 <mitchalsup@aol.com> schrieb:

    In a Sph. project, we were given a ferrite core (~1 pound) and were told
    to use it as a counter, adding up when a new car entered a parking lot,
    and subtracting down when a car left. So, doing arithmetic in ferrite
    cores has been around for a very long time, indeed. {{OH, BTW, the
    purpose of the count was to prevent overflowing of the parking lot}}

    Sounds like an interesting project, I assume you could add some
    extra logic :-)

    Were there enough cores so you could use a one-hot representation,
    or did you have to do something more elaborate?

    We used the hysteresis of the core to do the counting.
    There was only one core.
    Each count took a unit of energy to count up or down.
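
    A toy model of that scheme, assuming the counting worked by partial
    flux switching: each drive pulse moves the core one step along its
    hysteresis loop, and saturation means the lot is full. Numbers are
    invented for illustration.

        CAPACITY = 200     # flux steps between negative and positive saturation

        class CoreCounter:
            # One ferrite core used as a saturating up/down counter.
            def __init__(self):
                self.flux = 0              # 0 = empty lot

            def car_in(self):
                if self.flux >= CAPACITY:
                    return False           # core saturated: lot is full
                self.flux += 1             # one pulse, one flux step up
                return True

            def car_out(self):
                if self.flux > 0:
                    self.flux -= 1         # opposite pulse, one step down

        lot = CoreCounter()
        for _ in range(205):
            lot.car_in()
        print(lot.flux)                    # 200: the count clamps at saturation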
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.arch,comp.lang.misc on Mon Nov 18 22:28:56 2024

    On Mon, 18 Nov 2024 15:25:37 -0000 (UTC), John Levine wrote:

    ... memristors were invented in 1971.

    They were theorized in 1971, but no physical component that came close
    to the theoretical behaviour was created until somewhat more recently.

    There is a very long article about them in Wikipedia.

    Yes <https://en.wikipedia.org/wiki/Memristor>. Among other things, it says their distinguishing characteristic is a linear relationship between the
    rate of change of flux and the rate of change of charge.

    But the paper that I linked to in the posting that started this thread
    shows a connection between the resistance and the integral of voltage
    over time.

    Ah, I get it: the Wikipedia article says the “flux” thing is indeed the integral of voltage with respect to time.
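
    Spelling that out (standard definitions only, nothing specific to the
    paper):

        \varphi(t) = \int_{-\infty}^{t} v(\tau)\,d\tau,
        \qquad
        q(t) = \int_{-\infty}^{t} i(\tau)\,d\tau

        \text{memristor:}\quad d\varphi = M(q)\,dq
        \quad\Longrightarrow\quad
        v(t) = M\bigl(q(t)\bigr)\,i(t)

    So at any instant it behaves as a resistor, but the resistance M
    depends on the charge that has flowed, and hence on the history of the
    applied voltage, which is how the Wikipedia wording and the paper's
    description line up.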
  • From David Brown@david.brown@hesbynett.no to comp.arch,comp.lang.misc on Tue Nov 19 08:20:55 2024

    On 18/11/2024 19:09, Michael S wrote:
    On Mon, 18 Nov 2024 15:25:37 -0000 (UTC)
    John Levine <johnl@taugh.com> wrote:

    According to Lawrence D'Oliveiro <ldo@nz.invalid>:
    On Sun, 17 Nov 2024 21:32:29 +0000, MitchAlsup1 wrote:

    ... doing arithmetic in ferrite cores has been around for a very
    long time, indeed.

    Memristors are a new kind of electronic component, where the
    resistance is proportional to the integral of applied voltage over
    time.

    This is a rather capacious version of "new" since memristors were
    invented in 1971.

    My impression is that they are real, they work, but they don't work
    well enough to replace conventional components.

    There is a very long article about them in Wikipedia.

    My impression from the Wikipedia article is different: memristors are
    not real. That is, there are no physical devices that approximate the
    mathematical abstraction proposed in 1971. There are some devices that
    look like that, but only until a researcher starts to pay attention to
    details. Once the details get attention, it typically turns out that
    the device resistance does not really depend on charge, but on
    something else that happens to correlate with charge over larger or
    smaller parts of the characteristic curves.


    All electronic devices are approximations. There is no such thing as a
    pure resistor, a pure capacitor, or a pure inductor. Current
    memristors are no different in principle, but are - for now, at least -
    poorer approximations than the more common components. Whether they
    will ever be close enough to be of practical use remains to be seen.

    What does exist and work, though not well enough relative to
    conventional tech, are the various variants of ReRAM. But the memory
    elements of those ReRAMs are *not* memristors. That applies as much to
    HP's not-quite-working "memristor" ReRAM as to all the other ReRAMs in
    existence, including those that work relatively better.


    Yes, that is my understanding too - there are a variety of memory
    devices that have been made with different properties and niches, but I
    don't believe any of them are based on devices that are close enough to
    ideal memristors to justify using the term.


