• Arm to run within IBM z

    From Michael S@already5chosen@yahoo.com to comp.arch on Sat Apr 4 21:34:01 2026
  • From quadi@quadibloc@ca.invalid to comp.arch on Sat Apr 4 20:39:19 2026
    From Newsgroup: comp.arch

    On Sat, 04 Apr 2026 21:34:01 +0300, Michael S wrote:

https://newsroom.ibm.com/2026-04-02-ibm-announces-strategic-collaboration-with-arm-to-shape-the-future-of-enterprise-computing

    I would consider this a profoundly _unnatural_ appendage to IBM's history
    of hardware innovation... but then what do I know?

    John Savard
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Sat Apr 4 22:08:11 2026
    From Newsgroup: comp.arch

    On Sat, 4 Apr 2026 20:39:19 -0000 (UTC), quadi wrote:

    I would consider this a profoundly _unnatural_ appendage to IBM's
    history of hardware innovation... but then what do I know?

    What next? Build a small machine based on another non-IBM-made
    processor ... say an Intel 8088? And have it run a completely
    non-IBM-made OS, licensed from some small upstart software house?
    With an open expansion bus that anyone can make cards for?
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From quadi@quadibloc@ca.invalid to comp.arch on Sat Apr 4 23:32:36 2026
    From Newsgroup: comp.arch

    On Sat, 04 Apr 2026 22:08:11 +0000, Lawrence D’Oliveiro wrote:
    On Sat, 4 Apr 2026 20:39:19 -0000 (UTC), quadi wrote:

    I would consider this a profoundly _unnatural_ appendage to IBM's
    history of hardware innovation... but then what do I know?

    What next? Build a small machine based on another non-IBM-made processor
    ... say an Intel 8088? And have it run a completely non-IBM-made OS,
    licensed from some small upstart software house?
    With an open expansion bus that anyone can make cards for?

    I hadn't considered that as an appropriate comparison.

    At the time the IBM PC came out, though, many people were disappointed
    that it wasn't based on the Motorola 68000, which had a much nicer ISA,
    they felt.

    But Intel had made the 8088 available, which gave the 16-bit 8086 an 8-bit external bus, which allowed cheaper computers to be built using this architecture. It wasn't until much later that Motorola came out with the equivalent 68008, which allowed the Sinclair QL to reach the market.

    However, there is another way to cut costs when building a computer around
    a 16-bit microprocessor.

However, if IBM had used the same technique to build an inexpensive
68000-based computer that Texas Instruments did to build an inexpensive
9900-based computer... then the IBM PC, even if IBM didn't also make Texas
Instruments' _other_ mistake of not facilitating the availability of
third-party software for the machine, would have performed so badly as to go
down to ignominious failure like the 99/4.

    John Savard
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From David Schultz@david.schultz@earthlink.net to comp.arch on Sat Apr 4 20:12:14 2026
    From Newsgroup: comp.arch

    On 4/4/26 6:32 PM, quadi wrote:
    But Intel had made the 8088 available, which gave the 16-bit 8086 an 8-bit external bus, which allowed cheaper computers to be built using this architecture. It wasn't until much later that Motorola came out with the equivalent 68008, which allowed the Sinclair QL to reach the market.

    Not as big of a problem as you might think.

    Sure the memory bus was 16 bits but most implementations would be using
    enough memory chips that it would be a wash. The IBM PC motherboard had
    space for 64KB of 16KX1 DRAM. The only difference between 64KX8 and
    32KX16 is that your cheapest model has 8 more DRAM chips in it.
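The chip-count arithmetic above can be sketched quickly. This is a hypothetical illustration (the helper function and its name are not from the post), assuming one 16Kx1 part per data line per bank and ignoring parity:

```python
# Hypothetical chip-count sketch: one 16Kx1 DRAM per data line per bank.
CHIP_BITS = 16 * 1024  # a 16Kx1 part stores 16K one-bit cells

def chips_needed(total_kb, bus_width_bits):
    """Chips required to populate total_kb on a data bus of the given width.

    Each bank supplies one chip per data line, so a bank holds
    CHIP_BITS * bus_width_bits / 8 bytes; banks are added until
    the requested capacity is covered.
    """
    bank_bytes = CHIP_BITS * bus_width_bits // 8
    banks = -(-total_kb * 1024 // bank_bytes)  # ceiling division
    return banks * bus_width_bits

# A fully populated 64KB board uses 32 chips either way...
assert chips_needed(64, 8) == chips_needed(64, 16) == 32
# ...but the smallest one-bank configuration differs by 8 chips.
assert chips_needed(16, 8) == 8    # 8-bit bus: 16KB minimum
assert chips_needed(32, 16) == 16  # 16-bit bus: 32KB minimum
```

So at full population the bus width is indeed "a wash"; the difference shows up only in the cheapest, one-bank model.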

    For peripherals, the 68000 had a 6800 compatibility mode so that you
    could use those old 8 bit I/O chips if you desired.
    --
    http://davesrocketworks.com
    David Schultz
    "It's just this little chromium switch here..."
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.arch on Sun Apr 5 02:35:39 2026
    From Newsgroup: comp.arch

    Michael S <already5chosen@yahoo.com> wrote:
    https://newsroom.ibm.com/2026-04-02-ibm-announces-strategic-collaboration-with-arm-to-shape-the-future-of-enterprise-computing

That announcement has little content. In particular I do not see
an "Arm within Z" statement. Since Z people are involved this is
likely, but equally likely IBM may wish to deliver servers with
Arm cores but a peripheral system taken from Z. Or something
entirely different.

Now, the interesting question is what will happen to Power? Using
Arm may mean that IBM no longer considers Power a viable platform.
Also, if IBM delivers machines capable of running both Arm and
Z code, then the likely consequence will be the demise of Z as a
hardware architecture. That is, once performance critical things
run on Arm they would no longer need real hardware for Z; software
emulation will be good enough.
    --
    Waldek Hebisch
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Sun Apr 5 02:56:50 2026
    From Newsgroup: comp.arch

    On Sun, 5 Apr 2026 02:35:39 -0000 (UTC), Waldek Hebisch wrote:

    Now, interesting question is what will happen to Power? Using Arm
    may mean that IBM no longer considers Power a viable platform. Also,
    if IBM delivers machines capable of running both Arm and Z code,
    then likely consequence will be demise of Z as a hardware
architecture. That is, once performance critical things run on Arm
    they would no longer need real hardware for Z, software emulation
    will be good enough.

    Mainframes were never about “performance critical” stuff -- unless
    your meaning of “performance” was purely about I/O throughput, not CPU power. Because I/O throughput is what mainframes are/were optimized
    for.

But nowadays you can get high I/O throughput much more cheaply and
    simply, with a storage server with say 24, 48 or 72 disk trays,
    running Linux.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Stephen Fuld@sfuld@alumni.cmu.edu.invalid to comp.arch on Sat Apr 4 20:56:50 2026
    From Newsgroup: comp.arch

    On 4/4/2026 7:35 PM, Waldek Hebisch wrote:
    Michael S <already5chosen@yahoo.com> wrote:
    https://newsroom.ibm.com/2026-04-02-ibm-announces-strategic-collaboration-with-arm-to-shape-the-future-of-enterprise-computing

That announcement has little content.

    Agreed.

In particular I do not see
an "Arm within Z" statement. Since Z people are involved this is
likely, but equally likely IBM may wish to deliver servers with
Arm cores but a peripheral system taken from Z. Or something
entirely different.

Of course, I may be way off base here. But I thought it could be
something like this: the Z series currently has specialized cores or some
mechanism for running crypto stuff, and at least some AI stuff. So I
thought it may be a mechanism to have ARM cores as specialized cores to
run some Android-like stuff. Perhaps the advantage of running this
stuff in the Z series itself instead of remotely is better communication
between the Android app and whatever is running on the Z series.

Once again, I emphasize that I have no inside knowledge, nor do I know
if such a scheme makes any sense.


    Now, interesting question is what will happen to Power?

    Probably nothing. Power and Z are pretty much independent.


    Using
    Arm may mean that IBM no longer considers Power a viable platform.

    They make a lot of money from it.


    Also, if IBM delivers machines capable of running both Arm and
    Z code, then likely consequence will be demise of Z as a hardware architecture.

I don't think so. IBM is unlikely to port things like CICS or TPF to
ARM, and without those, ARM couldn't replace Z series CPUs.

That is, once performance critical things run on
    Arm they would no longer need real hardware for Z, software emulation
    will be good enough.

    Again, I may be totally off base here, but I saw nothing to indicate
    that they were going to move any substantial amount of software to ARM.
    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Stephen Fuld@sfuld@alumni.cmu.edu.invalid to comp.arch on Sat Apr 4 21:11:05 2026
    From Newsgroup: comp.arch

    On 4/4/2026 7:56 PM, Lawrence D’Oliveiro wrote:
    On Sun, 5 Apr 2026 02:35:39 -0000 (UTC), Waldek Hebisch wrote:

    Now, interesting question is what will happen to Power? Using Arm
    may mean that IBM no longer considers Power a viable platform. Also,
    if IBM delivers machines capable of running both Arm and Z code,
    then likely consequence will be demise of Z as a hardware
architecture. That is, once performance critical things run on Arm
    they would no longer need real hardware for Z, software emulation
    will be good enough.

    Mainframes were never about “performance critical” stuff -- unless
    your meaning of “performance” was purely about I/O throughput, not CPU power. Because I/O throughput is what mainframes are/were optimized
    for.

But nowadays you can get high I/O throughput much more cheaply and
    simply, with a storage server with say 24, 48 or 72 disk trays,
    running Linux.

    Yes, but . . .

    I think IBM's mainframe business survives (in fact generates billions of dollars in revenue for IBM) based on two factors.

    1. Their customer's tie to proprietary software that would cost them
    many millions of dollars to convert, and may not even be possible due to
    loss of expertise, etc.

2. Their customers relying on a single company to provide a totally
integrated, extraordinarily high-RAS solution.
    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From George Neuner@gneuner2@comcast.net to comp.arch on Sun Apr 5 00:14:17 2026
    From Newsgroup: comp.arch

    On Sat, 4 Apr 2026 20:12:14 -0500, David Schultz
    <david.schultz@earthlink.net> wrote:

    On 4/4/26 6:32 PM, quadi wrote:
But Intel had made the 8088 available, which gave the 16-bit 8086 an 8-bit
external bus, which allowed cheaper computers to be built using this
architecture. It wasn't until much later that Motorola came out with the
equivalent 68008, which allowed the Sinclair QL to reach the market.

    Not as big of a problem as you might think.

Sure the memory bus was 16 bits but most implementations would be using
enough memory chips that it would be a wash. The IBM PC motherboard had
space for 64KB of 16KX1 DRAM. The only difference between 64KX8 and
32KX16 is that your cheapest model has 8 more DRAM chips in it.

    The PC's RAM had parity checking - 9 chips per bank.

    Also remember RAM was very expensive in 1981: the 64KB PC cost 50%
    more than the 32KB PC - roughly $3000 vs $2000 at retail - with the
    only difference being the amount of DRAM.
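The parity point can be checked with a quick back-of-the-envelope calculation (a hypothetical sketch, not from the post): with one parity bit per byte-wide bank, each bank of 16Kx1 parts takes 9 chips, so a fully populated 64KB board carries 36 chips rather than 32.

```python
# Hypothetical parity-bank arithmetic for a fully populated 64KB PC board.
DATA_LANES = 8       # 8-bit data bus
PARITY_LANES = 1     # one parity bit per byte-wide bank
BANK_KB = 16         # a bank of 16Kx1 parts holds 16KB of data

chips_per_bank = DATA_LANES + PARITY_LANES  # 9 chips per bank
banks = 64 // BANK_KB                       # 4 banks for 64KB
total_chips = banks * chips_per_bank

assert chips_per_bank == 9
assert total_chips == 36
```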


    For peripherals, the 68000 had a 6800 compatibility mode so that you
    could use those old 8 bit I/O chips if you desired.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Sun Apr 5 04:34:13 2026
    From Newsgroup: comp.arch

    On Sat, 4 Apr 2026 21:11:05 -0700, Stephen Fuld wrote:

    I think IBM's mainframe business survives (in fact generates
    billions of dollars in revenue for IBM) based on two factors.

    [factors omitted]

    I don’t think it does generate “billions of dollars in revenue” any
    more. Which is why, after years, decades of losses and layoffs, the
    entire IBM company is but a shadow of its former self. Which is why it
    started embracing Linux in a big way a quarter century ago, and why it
    acquired Red Hat. And why it is now looking to support ARM-based
    workloads on top of that.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Sun Apr 5 04:36:03 2026
    From Newsgroup: comp.arch

    On Sat, 4 Apr 2026 20:56:50 -0700, Stephen Fuld wrote:

    On 4/4/2026 7:35 PM, Waldek Hebisch wrote:

    Using Arm may mean that IBM no longer considers Power a viable
    platform.

    They make a lot of money from it.

    POWER still has a significant presence in the Top500 list of the
    world’s most powerful computers. I’m sure there’s a decent bit of
    profit to be made in one of the few low-volume, high-margin sectors
    still left in the computing market.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Stephen Fuld@sfuld@alumni.cmu.edu.invalid to comp.arch on Sat Apr 4 22:04:01 2026
    From Newsgroup: comp.arch

    On 4/4/2026 9:34 PM, Lawrence D’Oliveiro wrote:
    On Sat, 4 Apr 2026 21:11:05 -0700, Stephen Fuld wrote:

    I think IBM's mainframe business survives (in fact generates
    billions of dollars in revenue for IBM) based on two factors.

    [factors omitted]

    I don’t think it does generate “billions of dollars in revenue” any more.

IBM doesn't publicly break out the revenue from the Z series; however, a few quotations from

    https://www.theregister.com/2026/01/29/ibm_q4_2025/

    Arvind Krishna, IBM chairman, president, and chief executive officer, was speaking as the company turned in full-year results that showed a leap in mainframe sales.

    Note that IBM includes mainframe sales in "infrastructure"


    However, the strongest growth came in IBM's infrastructure business, which grew revenues 12 percent for the year, and 21 percent in the fourth quarter.

    That infrastructure boost was in large part powered by the launch of IBM's z17 series of mainframes.

    In his prepared remarks, Krishna said: "Innovation value can also be seen in our IBM Z performance, up 48 percent this year, achieving the highest annual revenue for Z in about 20 years."

    CFO James Kavanaugh referred to a "record z17 launch, achieving the highest annual revenue for IBM Z in about 20 years and outpacing z16 over the first three quarters of the program."

    show that mainframe sales are significant to IBM.
    --
    - Stephen Fuld
    (e-mail address disguised to prevent spam)
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From quadi@quadibloc@ca.invalid to comp.arch on Sun Apr 5 05:24:47 2026
    From Newsgroup: comp.arch

    On Sun, 05 Apr 2026 02:35:39 +0000, Waldek Hebisch wrote:

That announcement has little content. In particular I do not see an "Arm
within Z" statement. Since Z people are involved this is likely, but
equally likely IBM may wish to deliver servers with Arm cores but a
peripheral system taken from Z. Or something entirely different.

    Well, it began by referring to "dual-architecture hardware", so it seems
    like it does refer to a way of making ARM code run on System z mainframes.

    Why? Well, the announcement also focuses on running Arm applications on
    IBM hardware. Obviously, a System z mainframe is a lot heavier than a smartphone. But ARM is trying to enter the server marketplace too.

    So I think the idea is that if you want to use a popular application that happens to run on Arm servers, but you already have a System z mainframe
    for stuff you need to do on that architecture, now you will be able to run that application on your existing computer. At least if you have one of
    the new ones with some ARM chips in it - or older ones, if this is just
    some licensed software emulation.

    John Savard

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.arch on Sun Apr 5 12:51:33 2026
    From Newsgroup: comp.arch

    On Sun, 5 Apr 2026 04:34:13 -0000 (UTC)
    Lawrence D’Oliveiro <ldo@nz.invalid> wrote:
    On Sat, 4 Apr 2026 21:11:05 -0700, Stephen Fuld wrote:

    I think IBM's mainframe business survives (in fact generates
    billions of dollars in revenue for IBM) based on two factors.

    [factors omitted]

    I don’t think it does generate “billions of dollars in revenue” any more. Which is why, after years, decades of losses and layoffs, the
    entire IBM company is but a shadow of its former self. Which is why it started embracing Linux in a big way a quarter century ago, and why it acquired Red Hat. And why it is now looking to support ARM-based
    workloads on top of that.
I am pretty sure that you got it backward: most likely IBM lost serious
amounts of money on POWER-based supercomputers, like Summit and Sierra
(both decommissioned in 2024-2025). But they are good PR and even
better for relationships with the US Government.
OTOH, IBM z hardware sales are very profitable.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From anton@anton@mips.complang.tuwien.ac.at (Anton Ertl) to comp.arch on Sun Apr 5 09:56:49 2026
    From Newsgroup: comp.arch

    David Schultz <david.schultz@earthlink.net> writes:
Sure the memory bus was 16 bits but most implementations would be using
enough memory chips that it would be a wash. The IBM PC motherboard had
space for 64KB of 16KX1 DRAM. The only difference between 64KX8 and
32KX16 is that your cheapest model has 8 more DRAM chips in it.

    Comparing the pin-out of 8088 and 8086, the former multiplexes only 8
    data lines with address lines, while the latter multiplexes 16 data
    lines with address lines. These lines had to be demultiplexed, which
    takes more support circuitry with the 8086 than with the 8088. I
    don't know how much more expensive the board would have become by
    running 16 data lines around the board instead of 8. Wouldn't they
    also have needed 16 bits worth of ROMs for the BIOS?

    Not just IBM, but most others in the market favoured the 8088 over the
    8086. So going for the 8088 apparently had serious advantages that
    more than made up for its performance disadvantages.

    - anton
    --
    'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
    Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.arch on Sun Apr 5 13:41:36 2026
    From Newsgroup: comp.arch

    On Sun, 5 Apr 2026 04:36:03 -0000 (UTC)
    Lawrence D’Oliveiro <ldo@nz.invalid> wrote:
    On Sat, 4 Apr 2026 20:56:50 -0700, Stephen Fuld wrote:

    On 4/4/2026 7:35 PM, Waldek Hebisch wrote:

    Using Arm may mean that IBM no longer considers Power a viable
    platform.

    They make a lot of money from it.

    POWER still has a significant presence in the Top500 list of the
    world’s most powerful computers. I’m sure there’s a decent bit of profit to be made in one of the few low-volume, high-margin sectors
    still left in the computing market.
    Depends on what you consider significant.
    There are 4 POWER-based systems in top500 list
    #23 Sierra (DOE/NNSA/LLNL, USA)
    #99 Lassen (DOE/NNSA/LLNL, USA)
    #104 PANGEA III (Total Exploration Production, France)
    #194 AiMOS (Rensselaer Polytechnic Institute Center for Computational Innovations, USA)
Out of those, Sierra and Lassen are officially decommissioned.
PANGEA III is still working, but overshadowed by the next-generation
non-POWER-based PANGEA 4.
    AiMOS is in academia, so will likely remain used for many years.
    All four systems rely on NVIDIA GPUs rather than on POWER CPUs to
    deliver overwhelming majority of their FLOPS. That's not unique to
    POWER-based systems. 9 out of 10 systems in Top10 deliver majority of
    FLOPS from GPUs.
What is unique, however, is that POWER9 has built-in NVLink 2.0 that
allows better integration with [contemporary] NV GPUs than their Intel
and AMD competitors. POWER10 and supposedly POWER11 have no NVLink on
chip, so they have to compete on even ground.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From David Schultz@david.schultz@earthlink.net to comp.arch on Sun Apr 5 06:12:21 2026
    From Newsgroup: comp.arch

    On 4/4/26 11:14 PM, George Neuner wrote:
    Also remember RAM was very expensive in 1981: the 64KB PC cost 50%
    more than the 32KB PC - roughly $3000 vs $2000 at retail - with the
    only difference being the amount of DRAM.

I remember 1981, and 16K DRAM didn't cost anywhere near that much. A
quick check of a random ad in the September 1981 BYTE says $4 each.
Retail.
    --
    http://davesrocketworks.com
    David Schultz
    "It's just this little chromium switch here..."
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From David Schultz@david.schultz@earthlink.net to comp.arch on Sun Apr 5 06:17:24 2026
    From Newsgroup: comp.arch

    On 4/5/26 4:56 AM, Anton Ertl wrote:
    Wouldn't they
    also have needed 16 bits worth of ROMs for the BIOS?

    How many ROM chips were on that motherboard? If it was just one, then 8
    bits makes a difference.
    --
    http://davesrocketworks.com
    David Schultz
    "It's just this little chromium switch here..."
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.arch on Sun Apr 5 12:44:40 2026
    From Newsgroup: comp.arch

    In article <20260404213401.0000593a@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:

https://newsroom.ibm.com/2026-04-02-ibm-announces-strategic-collaboration-with-arm-to-shape-the-future-of-enterprise-computing

There are a few interesting phrases in there:

    IBM and Arm aim to extend this track record of innovation by
    combining IBM's enterprise leadership in systems reliability,
    security, and scalability with Arm's own leadership in power-
    efficient architecture, workload enablement expertise, and
    broad software ecosystem,

    IBM are good at RAS, but their mainframes have been developing in
    different ways from the rest of the industry for decades. ARM, meanwhile,
    are pretty good at getting software from other parts of the industry
    running on their platforms.

IBM wants software from other sources running on their RAS-enabled
systems. But there are obstacles for anyone who wants to run on
Z/Architecture: it's big-endian, which lots of newer software has never
supported, and the terminology and conventions of the architecture and
the virtualisation systems you need to run Linux on it are very different
from more familiar platforms.

    I think IBM wants to run ARM software to give their mainframes more
    "relevance" to current fashions in computing, and ARM wants to learn
    about high-grade RAS.

    Intel and AMD assume all software already runs on their platforms. ARM
    don't, and - from personal experience - can be quite effective in helping
    with transitions. I think IBM wants to take advantage of that. Also,
    adding ARM cores to an IBM processor die will be easier than Intel or AMD cores, simply because that's ARM's business model.

    John
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.arch on Sun Apr 5 15:04:51 2026
    From Newsgroup: comp.arch

    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> writes:
    On Sat, 4 Apr 2026 21:11:05 -0700, Stephen Fuld wrote:

    I think IBM's mainframe business survives (in fact generates
    billions of dollars in revenue for IBM) based on two factors.

    [factors omitted]

I don’t think it does generate “billions of dollars in revenue” any more.

    You do realize that IBMs annual reports are available for
    free download on the web, right?

    The infrastructure segment (i.e. Z-series) produced
    15.71 billion in revenue in 2025, out of total revenue
    of 67.5 billion dollars, with a 58% gross margin.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.arch on Sun Apr 5 15:12:15 2026
    From Newsgroup: comp.arch

    jgd@cix.co.uk (John Dallman) writes:
In article <20260404213401.0000593a@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:



I think IBM wants to run ARM software to give their mainframes more
"relevance" to current fashions in computing, and ARM wants to learn
about high-grade RAS.


    Indeed, although I would suggest that ARM already leads the microprocessor market in understanding high-grade RAS, designing it into the ARMv8 architecture from the start.

Intel and AMD assume all software already runs on their platforms. ARM
don't, and - from personal experience - can be quite effective in helping
with transitions. I think IBM wants to take advantage of that. Also,
adding ARM cores to an IBM processor die will be easier than Intel or AMD
cores, simply because that's ARM's business model.

    ARM9 cores are quite powerful and require very little area
    on the chip. It would be very straightforward to slap an
    ARM subsystem on a large die (or MCM or chiplet) that
    would compete with the ARM servers offered by the existing
    cloud vendors, which would potentially attract more customers to IBMs
    cloud offering.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.arch on Sun Apr 5 16:09:03 2026
    From Newsgroup: comp.arch

    David Schultz <david.schultz@earthlink.net> wrote:
    On 4/4/26 6:32 PM, quadi wrote:
But Intel had made the 8088 available, which gave the 16-bit 8086 an 8-bit
external bus, which allowed cheaper computers to be built using this
architecture. It wasn't until much later that Motorola came out with the
equivalent 68008, which allowed the Sinclair QL to reach the market.

    Not as big of a problem as you might think.

    Sure the memory bus was 16 bits but most implementations would be using enough memory chips that it would be a wash. The IBM PC motherboard had space for 64KB of 16KX1 DRAM. The only difference between 64KX8 and
    32KX16 is that your cheapest model has 8 more DRAM chips in it.

    For peripherals, the 68000 had a 6800 compatibility mode so that you
    could use those old 8 bit I/O chips if you desired.

I remember the explanation for the choice given by IBM people: a design
with a 16-bit bus would cost about 40-50 dollars more to manufacture.
I do not remember if this included the 68000 CPU or if this was a design
with the 8086. Anyway, according to IBM accounting rules this would
lead to an about 200 dollars higher retail price. IBM market research
established that 3000 dollars (which was the base price that they attained)
was the critical price, and claimed that 200 dollars more would lead
to a failed product.

One can say that IBM rules inflated the price too much, but that
was how IBM worked (and it was not much different from the rest of the
computer business). Given the popularity of the PC it is quite possible
that IBM market research was wrong and the PC would have been a success
even at a higher price. But the IBM design team worked with the data they
obtained from marketing and had no reason to question it.

IIUC there were also secondary concerns:
- Intel promised availability of the 8088 in whatever quantities IBM
wanted. Availability of the 68000 in higher quantities was less
clear.
- Intel had more peripheral chips and the IBM team had already
used them in earlier designs. Using 6800 peripheral chips
would mean extra design work, so probably a longer time to
market.

From a different point of view, the PC was quite a success. Clearly
using the 8088 was no obstacle to this. In other words, the IBM team
made the right choices. Given the scale of the success, they probably
could have changed some aspects and still had a successful product. But
changes were unlikely to make more money for IBM and had a much greater
chance of lowering the amount of money IBM made.
    --
    Waldek Hebisch
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.arch on Sun Apr 5 16:16:11 2026
    From Newsgroup: comp.arch

    Lawrence D’Oliveiro <ldo@nz.invalid> wrote:
    On Sun, 5 Apr 2026 02:35:39 -0000 (UTC), Waldek Hebisch wrote:

    Now, interesting question is what will happen to Power? Using Arm
    may mean that IBM no longer considers Power a viable platform. Also,
    if IBM delivers machines capable of running both Arm and Z code,
    then likely consequence will be demise of Z as a hardware
architecture. That is, once performance critical things run on Arm
    they would no longer need real hardware for Z, software emulation
    will be good enough.

    Mainframes were never about “performance critical” stuff -- unless
    your meaning of “performance” was purely about I/O throughput, not CPU power. Because I/O throughput is what mainframes are/were optimized
    for.

Businesses have some job to do and some performance expectation.
If performance did not matter they would only use lower-end
Z machines and IBM would not bother to make bigger Z boxes.

In fact, if performance did not matter IBM could go with emulation
(possibly using an approach similar to the AS/400 and later i series).
Or they could use standard FPGAs: that probably would be slower
than emulation, but IBM could still claim that Z runs on real
hardware, which has some marketing advantage.
    --
    Waldek Hebisch
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.arch on Sun Apr 5 17:04:24 2026
    From Newsgroup: comp.arch

    Stephen Fuld <sfuld@alumni.cmu.edu.invalid> wrote:
    On 4/4/2026 7:35 PM, Waldek Hebisch wrote:
    Michael S <already5chosen@yahoo.com> wrote:
    https://newsroom.ibm.com/2026-04-02-ibm-announces-strategic-collaboration-with-arm-to-shape-the-future-of-enterprise-computing

That announcement has little content.

    Agreed.

In particular I do not see
an "Arm within Z" statement. Since Z people are involved this is
likely, but equally likely IBM may wish to deliver servers with
Arm cores but a peripheral system taken from Z. Or something
entirely different.

    Of course, I may be way off base here. But I thought it could be
    something like - Z series currently has specialized cores or some
    mechanism for running crypto stuff, and at least some AI stuff. So I thought it may be a mechanism to have ARM cores as specialized cores to
    run some Android like stuff. Perhaps the advantage of running this
    stuff in the Z series itself instead of remotely is better communication between the Android app and whatever is running on the Z series.

Yes, that is one possible reading. But just one of several
possibilities.

    Now, interesting question is what will happen to Power?

    Probably nothing. Power and Z are pretty much independent.

Well, why does IBM think that it is important to run ARM code and
apparently (by keeping Power and Z separate) not think
it is important to run Power code?

    Using
    Arm may mean that IBM no longer considers Power a viable platform.

    They make a lot of money from it.

Making a lot of money in the short/middle term is a good way to kill an
architecture. To avoid reputational damage IBM may keep making
Power for a long time, but by adopting ARM they no longer pretend
that it is competitive. In other words, a legacy business.

    Also, if IBM delivers machines capable of running both Arm and
    Z code, then likely consequence will be demise of Z as a hardware
    architecture.

    I don't think so. IBM is unlikely to port things like CICS, or TPF to
    ARM and without those, ARM couldn't replace Z series CPUs.

Well, the question is what IBM really wants. Starting at least in the
seventies they were moving their software to high-level languages.
And technologies like compiling assembly for a different architecture
seem to work reasonably well. So I do not see serious technical
trouble with porting IBM software. Of course, there is cost and
time; such things do not happen overnight.

    One possibility is that IBM wants to get rid of most of its CPU
    business, basically only producing their variation of ARM chips
    (which could be called "IBM silicon"). If that is the goal, then
    porting z/OS and associated software is the logical thing to do.

    Of course, another possibility is that IBM just wants to add one
    more processor to their current zoo.

    That is, once performance-critical things run on
    Arm they would no longer need real hardware for Z; software emulation
    would be good enough.

    Again, I may be totally off base here, but I saw nothing to indicate
    that they were going to move any substantial amount of software to ARM.

    I do not know what they plan, but for customers there will be a
    temptation to use the ARM cores for their own software. If IBM does
    not offer its software on ARM cores, it will leave room for third
    parties, which can possibly lead to a complete migration off the IBM
    ecosystem.

    My impression was that an important part of customer lock-in was that
    customers use many IBM things, both hardware features and software.
    Changing many things simultaneously significantly increases the risk
    of migration. ARM cores potentially offer an intermediate stage of
    migration, lowering the barrier to exit from the IBM world. It is
    interesting how IBM wants to manage this. One way is to offer
    Z software on the ARM cores.
    --
    Waldek Hebisch
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From John Levine@johnl@taugh.com to comp.arch on Sun Apr 5 19:14:22 2026
    From Newsgroup: comp.arch

    According to quadi <quadibloc@ca.invalid>:
    On Sun, 05 Apr 2026 02:35:39 +0000, Waldek Hebisch wrote:

    That announcement has little content. In particular I do not see an
    "Arm within Z" statement. Since Z people are involved this is likely,
    but equally likely IBM may wish to deliver servers with Arm cores,
    but a peripheral system taken from Z. Or something entirely different.

    Well, it began by referring to "dual-architecture hardware", so it seems
    like it does refer to a way of making ARM code run on System z mainframes.

    Why? Well, the announcement also focuses on running Arm applications
    on IBM hardware. Obviously, a System z mainframe is a lot heavier
    than a smartphone. But ARM is trying to enter the server marketplace too.

    They're already there. AWS is on its fifth generation of Arm
    architecture Graviton chips.

    All of my virtual servers at AWS are Arm. They're cheaper than x64
    and run all of the Linux and FreeBSD software.

    Google's GCP has Arm architecture Axion chips, same idea.

    Microsoft Azure has second generation Arm architecture Cobalt 200 chips, also same idea.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Mon Apr 6 00:30:28 2026
    From Newsgroup: comp.arch

    On Sat, 4 Apr 2026 22:04:01 -0700, Stephen Fuld wrote:

    Note that IBM includes mainframe sales in "infrastructure"

    But what proportion of “infrastructure” does it make up?

    However, the strongest growth came in IBM's infrastructure
    business, which grew revenues 12 percent for the year, and 21
    percent in the fourth quarter.

    show that mainframe sales are significant to IBM.

    Only if it makes up the lion’s share of that business.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Mon Apr 6 00:31:51 2026
    From Newsgroup: comp.arch

    On Sun, 5 Apr 2026 16:16:11 -0000 (UTC), Waldek Hebisch wrote:

    Lawrence D’Oliveiro <ldo@nz.invalid> wrote:

    Mainframes were never about “performance critical” stuff -- unless
    your meaning of “performance” was purely about I/O throughput, not
    CPU power. Because I/O throughput is what mainframes are/were
    optimized for.

    Businesses have some job to do and some performance expectation. If
    performance did not matter they would only use lower-end Z machines
    and IBM would not bother to make bigger Z boxes.

    As long as there are dyed-in-the-wool long-time IBM diehards who
    believe that, no doubt there will be some market for new Z-series
    boxes.

    Even if they do mostly seem to be running Linux these days ...
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Mon Apr 6 00:35:10 2026
    From Newsgroup: comp.arch

    On Sun, 5 Apr 2026 12:43 +0100 (BST), John Dallman wrote:

    But there are obstacles for anyone who wants to run on
    Z/Architecture: it's big-endian, which lots of newer software has
    never supported ...

    Open-source stuff has had to deal with that going back decades.
    It’s no big deal for that segment.

    Linux has been running on the Z series going back a long time now.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From quadi@quadibloc@ca.invalid to comp.arch on Mon Apr 6 03:57:07 2026
    From Newsgroup: comp.arch

    On Sun, 05 Apr 2026 16:16:11 +0000, Waldek Hebisch wrote:

    Businesses have some job to do and some performance expectation.
    If performance did not matter they would only use lower-end Z machines
    and IBM would not bother to make bigger Z boxes.

    In fact, if performance did not matter IBM could go with emulation
    (possibly using a similar approach to the AS/400 and later i series).
    Or they could use standard FPGAs: that probably would be slower than
    emulation, but IBM could still claim that Z runs on real hardware,
    which has some marketing advantage.

    Yes, the people who buy System z hardware would prefer more bang for the
    buck rather than less. But they settle for less bang for the buck than
    they would get with PowerPC hardware or commodity x86 or ARM because of
    two things:

    1) Old System/360 or 370 software they want to still use instead of converting, and

    2) The superior reliability features of IBM's mainframes.

    So there's no contradiction; what there is, instead, is a constraint:
    an additional criterion that must be satisfied, which limits the
    available performance.

    John Savard

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From quadi@quadibloc@ca.invalid to comp.arch on Mon Apr 6 04:04:19 2026
    From Newsgroup: comp.arch

    On Sun, 05 Apr 2026 17:04:24 +0000, Waldek Hebisch wrote:

    To avoid reputational damage IBM may keep making Power
    for a long time, but by adopting ARM they no longer pretend that it
    is competitive. In other words, a legacy business.

    That could well be true. I am not aware of IBM making great efforts to
    make new generations of POWER chips that are on smaller processes and
    include innovations to improve performance further.
    However, this is not necessarily the case; IBM may simply wish to widen
    the versatility of their mainframes. The issue driving the inclusion of
    ARM on IBM mainframes, after all, if we are to believe the press release, isn't so that these mainframes can deliver more bang for the buck whenever
    a customer has an application not tied to the System z architecture.
    No, it's to allow those mainframes to run software specifically designed
    for ARM and not System z. And so all this implies is what we already know
    - there's more software written for ARM than there is for POWER. Nothing
    to do with how powerful or efficient the chips are.

    John Savard
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Mon Apr 6 07:04:31 2026
    From Newsgroup: comp.arch

    On Mon, 6 Apr 2026 03:57:07 -0000 (UTC), quadi wrote:

    2) The superior reliability features of IBM's mainframes.

    Batch systems were never designed for high uptime. There is a paper on Bitsavers dated 1986 that says that, to turn daylight saving on or off
    on an IBM mainframe, you have to reboot.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.arch on Mon Apr 6 12:29:15 2026
    From Newsgroup: comp.arch

    On Mon, 6 Apr 2026 04:04:19 -0000 (UTC)
    quadi <quadibloc@ca.invalid> wrote:

    On Sun, 05 Apr 2026 17:04:24 +0000, Waldek Hebisch wrote:

    To avoid reputational damage IBM may keep making Power
    for a long time, but by adopting ARM they no longer pretend that it
    is competitive. In other words, a legacy business.

    That could well be true. I am not aware of IBM making great efforts
    to make new generations of POWER chips that are on smaller processes
    and include innovations to improve performance further.

    If we believe the article linked below, since POWER10 IBM has changed
    the focus of their POWER line. Nowadays their business model is more
    similar to z than to earlier generations of POWER: high-end
    enterprise computing only.
    https://www.hwcooling.net/en/power11-is-here-new-generation-of-ibm-risc-processors-detailed/
    Should we believe the article? I don't know.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Mon Apr 6 12:21:34 2026
    From Newsgroup: comp.arch

    In article <jZuAR.982150$WDc7.366361@fx16.iad>,
    Scott Lurndal <slp53@pacbell.net> wrote:
    jgd@cix.co.uk (John Dallman) writes:
    In article <20260404213401.0000593a@yahoo.com>, already5chosen@yahoo.com
    (Michael S) wrote:

    I think IBM wants to run ARM software to give their mainframes more
    "relevance" to current fashions in computing, and ARM wants to learn
    about high-grade RAS.

    Indeed, although I would suggest that ARM already leads the
    microprocessor market in understanding high-grade RAS, designing it
    into the ARMv8 architecture from the start.

    Better than AMD with MCAX and their own internal enhancements?

    I feel like RAS is something that's now highly relevant to the
    x86 ecosystem in the same way it is to ARM.

    Intel and AMD assume all software already runs on their platforms.
    ARM don't, and - from personal experience - can be quite effective in
    helping with transitions. I think IBM wants to take advantage of
    that. Also, adding ARM cores to an IBM processor die will be easier
    than Intel or AMD cores, simply because that's ARM's business model.

    ARMv9 cores are quite powerful and require very little area
    on the chip. It would be very straightforward to slap an
    ARM subsystem on a large die (or MCM or chiplet) that
    would compete with the ARM servers offered by the existing
    cloud vendors, which would potentially attract more customers to IBM's
    cloud offering.

    Indeed. ARM isn't just for your cellphone anymore; it is in,
    and has been in, data centers for some time now.

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From MitchAlsup@user5857@newsgrouper.org.invalid to comp.arch on Mon Apr 6 16:35:32 2026
    From Newsgroup: comp.arch


    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> posted:

    On Mon, 6 Apr 2026 03:57:07 -0000 (UTC), quadi wrote:

    2) The superior reliability features of IBM's mainframes.

    Batch systems were never designed for high uptime. There is a paper on Bitsavers dated 1986 that says that, to turn daylight saving on or off
    on an IBM mainframe, you have to reboot.

    I find it interesting that MS made it virtually impossible to transport an application from one machine to another while Linux made it absolutely
    simple.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.arch on Mon Apr 6 16:46:27 2026
    From Newsgroup: comp.arch

    cross@spitfire.i.gajendra.net (Dan Cross) writes:
    In article <jZuAR.982150$WDc7.366361@fx16.iad>,
    Scott Lurndal <slp53@pacbell.net> wrote:
    jgd@cix.co.uk (John Dallman) writes:
    In article <20260404213401.0000593a@yahoo.com>, already5chosen@yahoo.com
    (Michael S) wrote:

    I think IBM wants to run ARM software to give their mainframes more
    "relevance" to current fashions in computing, and ARM wants to learn
    about high-grade RAS.

    Indeed, although I would suggest that ARM already leads the
    microprocessor market in understanding high-grade RAS, designing it
    into the ARMv8 architecture from the start.

    Better than AMD with MCAX and their own internal enhancements?

    Similar to MCAX in many respects. A standard error record format
    is defined, and a standard mechanism for accessing said records
    (both internal to the core and for external IP elements such as the
    interrupt controller or IOMMU).

    I feel like RAS is something that's now highly relevant to the
    x86 ecosystem in the same way it is to ARM.

    IMO it has always been relevant; it just wasn't recognized as
    such until relatively recently.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.arch on Mon Apr 6 16:48:02 2026
    From Newsgroup: comp.arch

    MitchAlsup <user5857@newsgrouper.org.invalid> writes:

    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> posted:

    On Mon, 6 Apr 2026 03:57:07 -0000 (UTC), quadi wrote:

    2) The superior reliability features of IBM's mainframes.

    Batch systems were never designed for high uptime. There is a paper on
    Bitsavers dated 1986 that says that, to turn daylight saving on or off
    on an IBM mainframe, you have to reboot.

    I find it interesting that MS made it virtually impossible to
    transport an application from one machine to another while Linux
    made it absolutely simple.

    I find it interesting that you actually accept L'do's bullshit about
    IBM history. The idea that "batch systems were never designed for
    high uptime" is just another in a long string of incorrect statements
    about computing history from L'do.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From George Neuner@gneuner2@comcast.net to comp.arch on Mon Apr 6 13:09:52 2026
    From Newsgroup: comp.arch

    On Sun, 5 Apr 2026 06:12:21 -0500, David Schultz
    <david.schultz@earthlink.net> wrote:

    On 4/4/26 11:14 PM, George Neuner wrote:
    Also remember RAM was very expensive in 1981: the 64KB PC cost 50%
    more than the 32KB PC - roughly $3000 vs $2000 at retail - with the
    only difference being the amount of DRAM.

    I remember 1981, and 16K DRAM didn't cost anywhere near that much. A
    quick check of a random ad in the September 1981 BYTE says $4 each.
    Retail.

    I was commenting on the sale price of the original PC.


    I wasn't yet computing in 1981, and my first IBM was a 256KB AT. But
    I do remember it costing a LOT to expand to 512KB because all 18 RAM
    chips [2 banks of 8 + parity] had to be replaced.

    I believe it was common [still is] for vendors to use all the RAM
    sockets and fill them with the smallest chips/modules that fill the
    order.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Mon Apr 6 18:41:32 2026
    From Newsgroup: comp.arch

    In article <1775493332-5857@newsgrouper.org>,
    MitchAlsup <user5857@newsgrouper.org.invalid> wrote:

    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> posted:

    On Mon, 6 Apr 2026 03:57:07 -0000 (UTC), quadi wrote:

    2) The superior reliability features of IBM's mainframes.

    Batch systems were never designed for high uptime. There is a paper on
    Bitsavers dated 1986 that says that, to turn daylight saving on or off
    on an IBM mainframe, you have to reboot.

    I find it interesting that MS made it virtually impossible to
    transport an application from one machine to another while Linux
    made it absolutely simple.

    I'm not sure that's at all true, but I suppose it depends on the
    definition of an "application". What precisely do you mean?

    Also, to echo what Scott said, Lawrence is a troll.

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From MitchAlsup@user5857@newsgrouper.org.invalid to comp.arch on Mon Apr 6 22:32:17 2026
    From Newsgroup: comp.arch


    scott@slp53.sl.home (Scott Lurndal) posted:

    MitchAlsup <user5857@newsgrouper.org.invalid> writes:

    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> posted:

    On Mon, 6 Apr 2026 03:57:07 -0000 (UTC), quadi wrote:

    2) The superior reliability features of IBM's mainframes.

    Batch systems were never designed for high uptime. There is a paper on
    Bitsavers dated 1986 that says that, to turn daylight saving on or off
    on an IBM mainframe, you have to reboot.

    I find it interesting that MS made it virtually impossible to
    transport an application from one machine to another while Linux
    made it absolutely simple.

    I find it interesting that you actually accept L'do's bullshit about
    IBM history. The idea that "batch systems were never designed for
    high uptime" is just another in a long string of incorrect statements
    about computing history from L'do.

    My comment had and has nothing to do with IBM or batch systems.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From MitchAlsup@user5857@newsgrouper.org.invalid to comp.arch on Mon Apr 6 22:33:32 2026
    From Newsgroup: comp.arch


    cross@spitfire.i.gajendra.net (Dan Cross) posted:

    In article <1775493332-5857@newsgrouper.org>,
    MitchAlsup <user5857@newsgrouper.org.invalid> wrote:

    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> posted:

    On Mon, 6 Apr 2026 03:57:07 -0000 (UTC), quadi wrote:

    2) The superior reliability features of IBM's mainframes.

    Batch systems were never designed for high uptime. There is a paper on
    Bitsavers dated 1986 that says that, to turn daylight saving on or off
    on an IBM mainframe, you have to reboot.

    I find it interesting that MS made it virtually impossible to
    transport an application from one machine to another while Linux
    made it absolutely simple.

    I'm not sure that's at all true, but I suppose it depends on the
    definition of an "application". What precisely do you mean?

    Say, MS-Office or CorelDraw.

    Also, to echo what Scott said, Lawrence is a troll.

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.arch on Tue Apr 7 02:31:52 2026
    From Newsgroup: comp.arch

    On Mon, 06 Apr 2026 22:33:32 GMT
    MitchAlsup <user5857@newsgrouper.org.invalid> wrote:

    cross@spitfire.i.gajendra.net (Dan Cross) posted:

    In article <1775493332-5857@newsgrouper.org>,
    MitchAlsup <user5857@newsgrouper.org.invalid> wrote:

    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> posted:

    On Mon, 6 Apr 2026 03:57:07 -0000 (UTC), quadi wrote:

    2) The superior reliability features of IBM's mainframes.

    Batch systems were never designed for high uptime. There is a
    paper on Bitsavers dated 1986 that says that, to turn daylight
    saving on or off on an IBM mainframe, you have to reboot.

    I find it interesting that MS made it virtually impossible to
    transport an application from one machine to another while Linux
    made it absolutely simple.

    I'm not sure that's at all true, but I suppose it depends on the
    definition of an "application". What precisely do you mean?

    Say, MS-Office or CorelDraw.


    Technically or legally?

    Also, to echo what Scott said, Lawrence is a troll.

    - Dan C.



    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From quadi@quadibloc@ca.invalid to comp.arch on Tue Apr 7 02:16:00 2026
    From Newsgroup: comp.arch

    On Mon, 06 Apr 2026 16:35:32 +0000, MitchAlsup wrote:

    I find it interesting that MS made it virtually impossible to transport
    an application from one machine to another while Linux made it
    absolutely simple.

    I don't think this has anything to do with anything that Microsoft
    (specifically) did.

    Applications written for Windows are distributed as binaries. Applications written for Linux are available as source code, which anyone can recompile.

    The reasons for this are well known, but I don't think any of them
    include anything nefarious on the part of Microsoft. Naturally, like
    other operating-system developers, they knew that the way to sell
    many copies of their operating system and make money was to encourage
    people to write software for it, by letting them make money too.
    Which meant not having stuff like the GPL, or even the LGPL, which
    doesn't require disclosure of source but does require making hooks
    into one's code possible.

    John Savard

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Tue Apr 7 04:55:38 2026
    From Newsgroup: comp.arch

    On Tue, 7 Apr 2026 02:16:00 -0000 (UTC), quadi wrote:

    The reasons for this are well known, but I don't think any of them
    include anything nefarious on the part of Microsoft.

    Nothing nefarious. Just the consequence of a decades-long chain of
    management decisions prioritizing short-term profit over the
    long-term integrity of the platform. By now they have evolved
    themselves into a corner, with no clear way out.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Thomas Koenig@tkoenig@netcologne.de to comp.arch on Tue Apr 7 05:44:49 2026
    From Newsgroup: comp.arch

    quadi <quadibloc@ca.invalid> schrieb:
    On Mon, 06 Apr 2026 16:35:32 +0000, MitchAlsup wrote:

    I find it interesting that MS made it virtually impossible to transport
    an application from one machine to another while Linux made it
    absolutely simple.

    I don't think this has anything to do with anything that Microsoft
    (specifically) did.

    Applications written for Windows are distributed as binaries.
    Applications written for Linux are available as source code, which
    anyone can recompile.

    That turns out not to be the case. There is a lot of commercial software
    for Linux distributed in binary-only form.
    --
    This USENET posting was made without artificial intelligence,
    artificial impertinence, artificial arrogance, artificial stupidity,
    artificial flavorings or artificial colorants.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Tue Apr 7 07:53:34 2026
    From Newsgroup: comp.arch

    On Tue, 7 Apr 2026 05:44:49 -0000 (UTC), Thomas Koenig wrote:

    There is a lot of commercial software for Linux distributed in
    binary-only form.

    s/commercial/proprietary/
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Tue Apr 7 14:41:42 2026
    From Newsgroup: comp.arch

    In article <1775514812-5857@newsgrouper.org>,
    MitchAlsup <user5857@newsgrouper.org.invalid> wrote:

    cross@spitfire.i.gajendra.net (Dan Cross) posted:

    In article <1775493332-5857@newsgrouper.org>,
    MitchAlsup <user5857@newsgrouper.org.invalid> wrote:

    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> posted:

    On Mon, 6 Apr 2026 03:57:07 -0000 (UTC), quadi wrote:

    2) The superior reliability features of IBM's mainframes.

    Batch systems were never designed for high uptime. There is a paper on
    Bitsavers dated 1986 that says that, to turn daylight saving on or off
    on an IBM mainframe, you have to reboot.

    I find it interesting that MS made it virtually impossible to
    transport an application from one machine to another while Linux
    made it absolutely simple.

    I'm not sure that's at all true, but I suppose it depends on the
    definition of an "application". What precisely do you mean?

    Say, MS-Office or CorelDraw.

    I'm sorry; I'm still confused by these statements.

    I suppose I'm not clear on what you mean by moving from one
    machine to another: it seems almost trivial to install MS Office
    on two computers; moving documents is also easy. Taking a
    binary build of a random program installed on a Linux computer,
    perhaps from source, and copying it to another machine? Well,
    often these things drop files all over the filesystem; better
    make sure you get them all into a tarball or something.
    Fortunately, most distributions have some sort of package
    manager that does that for you.

    Unfortunately, all of those managers are different, and each is
    incompatible with all the others. Hence Flatpak, Snap, etc., and
    heavy-weight solutions like containers to sort out the shared
    library mess. Bottom line: it's not, actually, all that simple
    in the Linux ecosystem. MS has a package manager kind of thing
    too, and I'd say their single offering is better than the
    plethora of mutually-incompatible options in the Linux world.

    On the other hand, if you mean the base level of portability
    between different types of hardware (say, across different ISAs)
    then perhaps Linux has some small advantage here; I'd chalk that
    up more to compilers than the applications themselves. I don't
    know how many x86'isms are in MS Office, but I suspect it runs
    on ARM pretty well, so probably not that many. It likely has
    many msvc dependencies, though.

    Also, to echo what Scott said, Lawrence is a troll.

    (FWIW, this is still true.)

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Tue Apr 7 14:44:06 2026
    From Newsgroup: comp.arch

    In article <10r1pd0$2he75$1@dont-email.me>,
    quadi <quadibloc@ca.invalid> wrote:
    On Mon, 06 Apr 2026 16:35:32 +0000, MitchAlsup wrote:

    I find it interesting that MS made it virtually impossible to transport
    an application from one machine to another while Linux made it
    absolutely simple.

    I don't think this has anything to do with anything that Microsoft
    (specifically) did.

    Applications written for Windows are distributed as binaries.
    Applications written for Linux are available as source code, which
    anyone can recompile.

    Maybe, maybe not. Setting aside the matter of binary-only
    programs (that are not uncommon on Linux, FWIW) there is _also_
    the matter of software that has a long list of requirements, and
    may or may not work particularly well on any given distribution
    of Linux (of which there are far, far too many).

    The reasons for this are well known, but I don't think any of them
    include anything nefarious on the part of Microsoft. Naturally, like
    other operating systems developers, they knew that the way to sell
    many copies of their operating system and make money was to encourage
    people to write software for it, by letting them make money too.
    Which meant not having stuff like the GPL, or even the LGPL, which
    doesn't require disclosure of source, but which does require making
    hooks into one's code possible.

    I doubt this is true. I'm quite certain that lots of GPL'd code
    gets run on Windows.

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From John Levine@johnl@taugh.com to comp.arch on Tue Apr 7 19:35:42 2026
    From Newsgroup: comp.arch

    According to MitchAlsup <user5857@newsgrouper.org.invalid>:
    I find it interesting that MS made it virtually impossible to
    transport an application from one machine to another while Linux
    made it absolutely simple.

    I don't understand what this means, either. Most Windows applications
    are licensed, so it's deliberate that you can't install them on a
    bunch of machines without getting more license keys.

    If you mean something like x86 to ARM, or 32- to 64-bit x86, who
    knows? We don't have the source code. Windows 11 runs on both AMD64
    and ARM64, and they provide applications like Office for both. It's
    all written in high-level languages, so I'd think that it would be
    little more than recompiling.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From John Levine@johnl@taugh.com to comp.arch on Tue Apr 7 19:39:14 2026
    From Newsgroup: comp.arch

    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    Maybe, maybe not. Setting aside the matter of binary-only
    programs (that are not uncommon on Linux, FWIW) there is _also_
    the matter of software that has a long list of requirements, and
    may or may not work particularly well on any given distribution
    of Linux (of which there are far, far too many).

    Maybe he's thinking of what's known as DLL Hell, which resulted
    from Microsoft's neglecting to put a version number in DLL filenames.
    Programs typically ship with the library DLLs they use, and each
    time a program was installed, its DLLs would replace any previous
    ones with the same name, even though it might be a different
    incompatible version.

    They've mostly fixed that, I think by having each program in
    some sort of environment so it's tied to the libraries it was
    using when it was installed.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Wed Apr 8 11:39:07 2026
    From Newsgroup: comp.arch

    In article <10r3mh2$1ifl$2@gal.iecc.com>, John Levine <johnl@taugh.com> wrote:
    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    Maybe, maybe not. Setting aside the matter of binary-only
    programs (that are not uncommon on Linux, FWIW) there is _also_
    the matter of software that has a long list of requirements, and
    may or may not work particularly well on any given distribution
    of Linux (of which there are far, far too many).

    Maybe he's thinking of what's known as DLL Hell, which resulted
    from Microsoft's neglecting to put a version number in DLL filenames.
    Programs typically ship with the library DLLs they use, and each
    time a program was installed, its DLLs would replace any previous
    ones with the same name, even though it might be a different
    incompatible version.

    They've mostly fixed that, I think by having each program in
    some sort of environment so it's tied to the libraries it was
    using when it was installed.

    Quite possibly! It's interesting to note that Linux has also
    suffered from similar issues, exacerbated by libraries that
    don't handle versioning well. Containers were the solution.

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From John Levine@johnl@taugh.com to comp.arch on Wed Apr 8 19:25:09 2026
    From Newsgroup: comp.arch

    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    Maybe he's thinking of what's known as DLL Hell, which resulted
    from Microsoft's neglecting to put a version number in DLL filenames. ...

    Quite possibly! It's interesting to note that Linux has also
    suffered from similar issues, exacerbated by libraries that
    don't handle versioning well. Containers were the solution.

    That's kind of sad.

    ELF library names include multi-part version numbers, with the plan being
    that if the API changes you bump the major number, while if it's a bug fix
    or otherwise compatible you just bump the minor number. Then you symlink
    the name with the minor version number back to the name with just the
    major number, so the lists of imported library names just have the major
    number. This seems to work OK on FreeBSD. What happens on linux?
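    The scheme can be sketched with empty dummy files; libfoo here is a
    hypothetical name, and nothing is compiled or loaded. It only shows the
    symlink chain and what a compatible minor bump looks like:

```shell
# Sketch of the ELF major/minor symlink scheme, using empty dummy
# files in a scratch directory (libfoo is hypothetical; nothing is
# actually compiled or loaded here).
set -e
dir=$(mktemp -d)
cd "$dir"
touch libfoo.so.1.2.3              # the real file, full version
ln -s libfoo.so.1.2.3 libfoo.so.1  # soname: what binaries import
ln -s libfoo.so.1 libfoo.so        # bare name: what ld uses at link time
# A compatible minor bump installs the new file and repoints the soname
# link; binaries that imported "libfoo.so.1" pick it up unchanged.
touch libfoo.so.1.2.4
ln -sf libfoo.so.1.2.4 libfoo.so.1
readlink libfoo.so.1
```

    An incompatible change would instead ship a new soname (libfoo.so.2)
    alongside the old one, so existing binaries keep resolving to the old
    major version.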
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Thomas Koenig@tkoenig@netcologne.de to comp.arch on Wed Apr 8 19:48:39 2026
    From Newsgroup: comp.arch

    John Levine <johnl@taugh.com> schrieb:
    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    Maybe, maybe not. Setting aside the matter of binary-only
    programs (that are not uncommon on Linux, FWIW) there is _also_
    the matter of software that has a long list of requirements, and
    may or may not work particularly well on any given distribution
    of Linux (of which there are far, far too many).

    Maybe he's thinking of what's known as DLL Hell, which resulted
    from Microsoft's neglecting to put a version number in DLL filenames.
    Programs typically ship with the library DLLs they use, and each
    time a program was installed, its DLLs would replace any previous
    ones with the same name, even though it might be a different
    incompatible version.

    Seems that shared libraries don't work too well on Linux,
    either, or there would be no need for snap:

    $ find ~/snap -name '*.so' | wc -l
    162
    $ find /snap -name '*.so' 2>/dev/null | wc -l
    21567

    If they are shipping shared libraries for single applications, why not
    just do away with the shared library overhead and link these in
    statically?

    They've mostly fixed that, I think by having each program in
    some sort of environment so it's tied to the libraries it was
    using when it was installed.

    See above...
    --
    This USENET posting was made without artificial intelligence,
    artificial impertinence, artificial arrogance, artificial stupidity,
    artificial flavorings or artificial colorants.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From David Schultz@david.schultz@earthlink.net to comp.arch on Wed Apr 8 15:48:22 2026
    From Newsgroup: comp.arch

    On 4/8/26 2:25 PM, John Levine wrote:
    ELF library names include multi-part version numbers, with the plan being that
    if the API changes you bump the major number, while if it's a bug fix or otherwise compatible you just bump the minor number. Then you symlink the name
    with the minor version number back to the name with just the major number so the
    lists of imported library names just have the major number. This seems to work
    OK on FreeBSD. What happens on linux?

    A random example:


    /usr/lib64$ ls -l libpng*
    lrwxrwxrwx. 1 root root 19 Feb 12 18:00 libpng16.so -> libpng16.so.16.55.0
    lrwxrwxrwx. 1 root root 19 Feb 12 18:00 libpng16.so.16 -> libpng16.so.16.55.0
    -rwxr-xr-x. 1 root root 241112 Feb 12 18:00 libpng16.so.16.55.0
    lrwxrwxrwx. 1 root root 11 Feb 12 18:00 libpng.so -> libpng16.so
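    The consumer side can be seen the same way: a binary's import list
    records only the major-versioned names. A quick check with ldd, using
    /bin/sh as an arbitrary example; the exact list varies by system, and
    this assumes a glibc-style toolchain:

```shell
# List the shared libraries a binary imports. On a typical glibc
# system the names carry only the major version (e.g. libc.so.6),
# matching the symlink scheme in the listing above.
ldd /bin/sh
```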
    --
    http://davesrocketworks.com
    David Schultz
    "It's just this little chromium switch here..."
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Wed Apr 8 20:54:43 2026
    From Newsgroup: comp.arch

    In article <10r6a2l$57g$1@gal.iecc.com>, John Levine <johnl@taugh.com> wrote:
    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    Maybe he's thinking of what's known as DLL Hell, which resulted
    from Microsoft's neglecting to put a version number in DLL filenames. ...

    Quite possibly! It's interesting to note that Linux has also
    suffered from similar issues, exacerbated by libraries that
    don't handle versioning well. Containers were the solution.

    That's kind of sad.

    ELF library names include multi-part version numbers, with the plan being that
    if the API changes you bump the major number, while if it's a bug fix or
    otherwise compatible you just bump the minor number. Then you symlink the name
    with the minor version number back to the name with just the major number so the
    lists of imported library names just have the major number. This seems to work
    OK on FreeBSD. What happens on linux?

    That works fine, as long as it's actually done. It's when the
    plethora of dependencies a given program uses don't play by the
    rules that one runs into problems.

    I suppose the issue is that on Windows it cannot be avoided due
    to a technical limitation, while on Linux, it can be, but often
    is not.

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.arch on Thu Apr 9 00:07:45 2026
    From Newsgroup: comp.arch

    On Wed, 8 Apr 2026 20:54:43 -0000 (UTC)
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    In article <10r6a2l$57g$1@gal.iecc.com>, John Levine
    <johnl@taugh.com> wrote:
    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    Maybe he's thinking of what's known as DLL Hell, which resulted
    from Microsoft's neglecting to put a version number in DLL
    filenames. ...

    Quite possibly! It's interesting to note that Linux has also
    suffered from similar issues, exacerbated by libraries that
    don't handle versioning well. Containers were the solution.

    That's kind of sad.

    ELF library names include multi-part version numbers, with the plan
    being that if the API changes you bump the major number, while if
    it's a bug fix or otherwise compatible you just bump the minor
    number. Then you symlink the name with the minor version number
    back to the name with just the major number so the lists of imported
    library names just have the major number. This seems to work OK on
    FreeBSD. What happens on linux?

    That works fine, as long as it's actually done. It's when the
    plethora of dependencies a given program uses don't play by the
    rules that one runs into problems.

    I suppose the issue is that on Windows it cannot be avoided due
    to a technical limitation, while on Linux, it can be, but often
    is not.

    - Dan C.


    In Windows you always have an option of copying desired DLL into the
    same directory with exe. This directory is always first in DLL search
    order.
    So, the only technical limitation here is the size of storage. The rest
    is about culture.





    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.arch on Wed Apr 8 18:41:28 2026
    From Newsgroup: comp.arch

    On 4/4/2026 11:34 AM, Michael S wrote:
    https://newsroom.ibm.com/2026-04-02-ibm-announces-strategic-collaboration-with-arm-to-shape-the-future-of-enterprise-computing


    Appendix 42 in Principles of Operation? Trying to remember it. About a lock-free stack?
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.arch on Thu Apr 9 02:27:07 2026
    From Newsgroup: comp.arch

    On 4/6/2026 5:21 AM, Dan Cross wrote:
    In article <jZuAR.982150$WDc7.366361@fx16.iad>,
    Scott Lurndal <slp53@pacbell.net> wrote:
    jgd@cix.co.uk (John Dallman) writes:
    In article <20260404213401.0000593a@yahoo.com>, already5chosen@yahoo.com
    (Michael S) wrote:

    I think IBM wants to run ARM software to give their mainframes more
    "relevance" to current fashions in computing, and ARM wants to learn
    about high-grade RAS.

    Indeed, although I would suggest that ARM already leads the microprocessor
    market in understanding high-grade RAS, designing it into the ARMv8
    architecture from the start.

    Better than AMD with MCAX and their own internal enhancements?

    I feel like RAS is something that's now highly relevant to the
    x86 ecosystem in the same way it is to ARM.

    Intel and AMD assume all software already runs on their platforms. ARM
    don't, and - from personal experience - can be quite effective in helping
    with transitions. I think IBM wants to take advantage of that. Also,
    adding ARM cores to an IBM processor die will be easier than Intel or AMD
    cores, simply because that's ARM's business model.

    ARM9 cores are quite powerful and require very little area
    on the chip. It would be very straightforward to slap an
    ARM subsystem on a large die (or MCM or chiplet) that
    would compete with the ARM servers offered by the existing
    cloud vendors, which would potentially attract more customers to IBM's
    cloud offering.

    Indeed. ARM isn't just for your cellphone anymore; it is in,
    and has been in, data centers for some time now.

    IBM motherboards with clusters of ARM among others "plexed" in a way to
    handle the traffic...? Sorry for going off the deep end...
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Stefan Monnier@monnier@iro.umontreal.ca to comp.arch on Thu Apr 9 09:49:02 2026
    From Newsgroup: comp.arch

    Seems that shared libraries don't work too well on Linux,
    either, or there would be no need for snap:

    $ find ~/snap -name '*.so' | wc -l
    162
    $ find /snap -name '*.so' 2>/dev/null | wc -l
    21567

    I've never seen "DLL hell" (or thereabouts) mentioned as the motivation
    for snaps.

    If they are shipping shared libraries for single applications, why not
    just do away with the shared library overhead and link these in
    statically?

    There are so many things that make no sense about snaps.
    I think for this particular question the answer is "because it's
    expedient" (AFAIK this same answer explains most of the other problems
    with snaps).


    === Stefan
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.arch on Thu Apr 9 14:49:27 2026
    From Newsgroup: comp.arch

    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    On 4/6/2026 5:21 AM, Dan Cross wrote:
    In article <jZuAR.982150$WDc7.366361@fx16.iad>,
    Scott Lurndal <slp53@pacbell.net> wrote:
    jgd@cix.co.uk (John Dallman) writes:
    In article <20260404213401.0000593a@yahoo.com>, already5chosen@yahoo.com
    (Michael S) wrote:

    I think IBM wants to run ARM software to give their mainframes more
    "relevance" to current fashions in computing, and ARM wants to learn
    about high-grade RAS.

    Indeed, although I would suggest that ARM already leads the microprocessor
    market in understanding high-grade RAS, designing it into the ARMv8
    architecture from the start.

    Better than AMD with MCAX and their own internal enhancements?

    I feel like RAS is something that's now highly relevant to the
    x86 ecosystem in the same way it is to ARM.

    Intel and AMD assume all software already runs on their platforms. ARM
    don't, and - from personal experience - can be quite effective in helping
    with transitions. I think IBM wants to take advantage of that. Also,
    adding ARM cores to an IBM processor die will be easier than Intel or AMD
    cores, simply because that's ARM's business model.

    ARM9 cores are quite powerful and require very little area
    on the chip. It would be very straightforward to slap an
    ARM subsystem on a large die (or MCM or chiplet) that
    would compete with the ARM servers offered by the existing
    cloud vendors, which would potentially attract more customers to IBM's
    cloud offering.

    Indeed. ARM isn't just for your cellphone anymore; it is in,
    and has been in, data centers for some time now.

    IBM motherboards with clusters of ARM among others "plexed" in a way to
    handle the traffic...? Sorry for going off the deep end...

    DAGS "Google Axion"
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From George Neuner@gneuner2@comcast.net to comp.arch on Thu Apr 9 17:51:18 2026
    From Newsgroup: comp.arch

    On Wed, 8 Apr 2026 20:54:43 -0000 (UTC), cross@spitfire.i.gajendra.net
    (Dan Cross) wrote:

    In article <10r6a2l$57g$1@gal.iecc.com>, John Levine <johnl@taugh.com> wrote:
    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    Maybe he's thinking of what's known as DLL Hell, which resulted
    from Microsoft's neglecting to put a version number in DLL filenames. ...
    Quite possibly! It's interesting to note that Linux has also
    suffered from similar issues, exacerbated by libraries that
    don't handle versioning well. Containers were the solution.

    That's kind of sad.

    ELF library names include multi-part version numbers, with the plan being that
    if the API changes you bump the major number, while if it's a bug fix or
    otherwise compatible you just bump the minor number. Then you symlink the name
    with the minor version number back to the name with just the major number so the
    lists of imported library names just have the major number. This seems to work
    OK on FreeBSD. What happens on linux?

    That works fine, as long as it's actually done. It's when the
    plethora of dependencies a given program uses don't play by the
    rules that one runs into problems.

    I suppose the issue is that on Windows it cannot be avoided due
    to a technical limitation, while on Linux, it can be, but often
    is not.

    - Dan C.

    NTFS had the capability to hard link files since the beginning: the
    links were used to support dual (long and 8.3) file names. Other than
    renaming files, there was no support for it.

    NTFS 3.0 (Windows 2000) added symlinks. Initially there was utility
    support (linkd.exe,junction.exe) only for directory symlinks. A
    utility for manipulating file links (mklink.exe) finally appeared in
    Vista.


    Windows executables (programs and libraries) have always had embedded
    version information, but the LoadLibrary_ functions did not permit
    specifying what versions were acceptable.
    [And the developer had to remember to update the build version.
    Microsoft's tool chain, by default, did not do this automatically.]

    It was fine to have multiple versions of a DLL in the same directory
    (modulo unique filenames), but if a program cared about what version
    it used, it had to check manually and load the DLL explicitly (rather
    than letting the system loader do it).


    dotNET tried to fix this - at least for shared (system) DLLs - by
    maintaining a list of them in the registry. The dotNET loader would
    look for the right version and load it if possible.

    However, installers still could break it by overwriting existing files
    without checking versions. Or by not updating the registry.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Fri Apr 10 11:47:20 2026
    From Newsgroup: comp.arch

    In article <2d2gtkpm5ihtc9jkvfm6gq1dojco8fgtaf@4ax.com>,
    George Neuner <gneuner2@comcast.net> wrote:
    On Wed, 8 Apr 2026 20:54:43 -0000 (UTC), cross@spitfire.i.gajendra.net
    (Dan Cross) wrote:

    In article <10r6a2l$57g$1@gal.iecc.com>, John Levine <johnl@taugh.com> wrote:
    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    Maybe he's thinking of what's known as DLL Hell, which resulted
    from Microsoft's neglecting to put a version number in DLL filenames. ...
    Quite possibly! It's interesting to note that Linux has also
    suffered from similar issues, exacerbated by libraries that
    don't handle versioning well. Containers were the solution.

    That's kind of sad.

    ELF library names include multi-part version numbers, with the plan being that
    if the API changes you bump the major number, while if it's a bug fix or
    otherwise compatible you just bump the minor number. Then you symlink the name
    with the minor version number back to the name with just the major number so the
    lists of imported library names just have the major number. This seems to work
    OK on FreeBSD. What happens on linux?

    That works fine, as long as it's actually done. It's when the
    plethora of dependencies a given program uses don't play by the
    rules that one runs into problems.

    I suppose the issue is that on Windows it cannot be avoided due
    to a technical limitation, while on Linux, it can be, but often
    is not.

    NTFS had the capability to hard link files since the beginning: the
    links were used to support dual (long and 8.3) file names. Other than
    renaming files, there was no support for it.

    NTFS 3.0 (Windows 2000) added symlinks. Initially there was utility
    support (linkd.exe,junction.exe) only for directory symlinks. A
    utility for manipulating file links (mklink.exe) finally appeared in
    Vista.

    Windows executables (programs and libraries) have always had embedded
    version information, but the LoadLibrary_ functions did not permit
    specifying what versions were acceptable.
    [And the developer had to remember to update the build version.
    Microsoft's tool chain, by default, did not do this automatically.]

    It was fine to have multiple versions of a DLL in the same directory
    (modulo unique filenames), but if a program cared about what version
    it used, it had to check manually and load the DLL explicitly (rather
    than letting the system loader do it).

    dotNET tried to fix this - at least for shared (system) DLLs - by
    maintaining a list of them in the registry. The dotNET loader would
    look for the right version and load it if possible.

    However, installers still could break it by overwriting existing files
    without checking versions. Or by not updating the registry.

    It occurs to me that another problem with the Linux way of doing
    things (and really, this is true of any Unix-style system, and
    probably Windows as well; perhaps any system generally) is that
    dependencies can be compiled in different ways, and expose
    different functionality as a result, with no change in version.

    Systems that desire to have one copy of a library installed in
    one place suffer from the obvious problem of needing to expose
    the union of functionality expected of all programs that depend
    on them, either directly or indirectly. The number of programs
    that may depend on a library is obviously unbounded (up to the
    physical limits of the universe, before someone jumps in with
    some overly pedantic interpretation of that statement).
    Computing the required functionality set is thus non-trivial,
    and if programs make mutually exclusive demands of it, then
    generally undecidable. Hence, something like containers or
    localized copies to isolate the transitive closure of
    dependencies.
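    The "localized copies" approach amounts to shipping a private library
    directory next to each program and making the dynamic loader search it
    first. A minimal sketch of the wiring, with a hypothetical bundle called
    myapp and a stand-in binary; no real libraries are bundled here:

```shell
# Sketch of the "localized copies" approach: a bundle carries private
# library copies plus a launcher that points the dynamic loader at them.
# Everything here is hypothetical scaffolding.
set -e
root=$(mktemp -d)
mkdir -p "$root/myapp/bin" "$root/myapp/lib"

# Launcher: prepend the bundle's private lib dir to the loader search
# path, roughly as snap/AppImage-style bundles do.
cat > "$root/myapp/run" <<'EOF'
#!/bin/sh
here=$(dirname "$0")
LD_LIBRARY_PATH="$here/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" \
    exec "$here/bin/real-binary" "$@"
EOF
chmod +x "$root/myapp/run"

# Stand-in for the real program: just show what the loader would search.
printf '#!/bin/sh\necho "search path: $LD_LIBRARY_PATH"\n' \
    > "$root/myapp/bin/real-binary"
chmod +x "$root/myapp/bin/real-binary"

"$root/myapp/run"   # the private lib dir appears first in the search path
```

    Real bundles usually bake this in at link time with an $ORIGIN rpath
    instead of a wrapper script, but the isolation effect is the same: the
    transitive closure of dependencies travels with the program.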

    Wouldn't it be nice if software wasn't such a mess?

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From John Levine@johnl@taugh.com to comp.arch on Sat Apr 11 15:09:56 2026
    From Newsgroup: comp.arch

    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    It occurs to me that another problem with the Linux way of doing
    things (and really, this is true of any Unix-style system, and
    probably Windows as well; perhaps any system generally) is that
    dependencies can be compiled in different ways, and expose
    different functionality as a result, with no change in version.

    This sounds like "don't do that" territory. If you're going to
    provide a library, it needs to have a stable API. If the
    API changes, change the version and document the change. If
    it can be compiled in different ways that change the API (as
    opposed to, say, using different optimizations) those need to
    have different names. I realize that too many people do not
    understand why this matters.

    Systems that desire to have one copy of a library installed in
    one place suffer from the obvious problem of needing to expose
    the union of functionality expected of all programs that depend
    on them, ....

    Sorry but what's "them" here? The library, some group of
    libraries, every program that uses the library?
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From MitchAlsup@user5857@newsgrouper.org.invalid to comp.arch on Sat Apr 11 17:53:04 2026
    From Newsgroup: comp.arch


    John Levine <johnl@taugh.com> posted:

    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    It occurs to me that another problem with the Linux way of doing
    things (and really, this is true of any Unix-style system, and
    probably Windows as well; perhaps any system generally) is that
    dependencies can be compiled in different ways, and expose
    different functionality as a result, with no change in version.

    This sounds like "don't do that" territory. If you're going to
    provide a library, it needs to have a stable API.

    Why are APIs changing all the time--it seems to me that the API
    is not set until one has a stable interface.

    It also seems to me that when additions are made to an API, the
    additions go in a different dynamic library than the original.

    What am I missing ??
    If the
    API changes, change the version and document the change. If
    it can be compiled in different ways that change the API (as
    opposed to, say, using different optimizations) those need to
    have different names. I realize that too many people do not
    understand why this matters.

    Systems that desire to have one copy of a library installed in
    one place suffer from the obvious problem of needing to expose
    the union of functionality expected of all programs that depend
    on them, ....

    Sorry but what's "them" here? The library, some group of
    libraries, every program that uses the library?

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From John Levine@johnl@taugh.com to comp.arch on Sat Apr 11 19:20:35 2026
    From Newsgroup: comp.arch

    According to MitchAlsup <user5857@newsgrouper.org.invalid>:

    John Levine <johnl@taugh.com> posted:

    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    It occurs to me that another problem with the Linux way of doing
    things (and really, this is true of any Unix-style system, and
    probably Windows as well; perhaps any system generally) is that
    dependencies can be compiled in different ways, and expose
    different functionality as a result, with no change in version.

    This sounds like "don't do that" territory. If you're going to
    provide a library, it needs to have a stable API.

    Why are APIs changing all the time--it seems to me that the API
    is not set until one has a stable interface.

    It also seems to me that when additions are made to an API, the
    additions go in a different dynamic library than the original.

    What am I missing ??

    Nothing. I just upgraded MySQL from version 8.0 to 8.4, and
    the shared library name changed from libmysqlclient.so.21
    to libmysqlclient.so.24. There's new stuff in the .24 library
    that wasn't in the .21.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Sat Apr 11 21:37:36 2026
    From Newsgroup: comp.arch

    On Sat, 11 Apr 2026 17:53:04 GMT, MitchAlsup wrote:

    Why are APIs changing all the time--it seems to me that the API is
    not set until one has a stable interface.

    So long as it changes in backward-compatible fashion, there should be
    zero impact on existing code.

    It also seems to me that when additions are made to an API, the
    additions go in a different dynamic library than the original.

    Shared library versioning is for dealing with backward-incompatible
    changes to the ABI, not (necessarily) the API.

    For example, some struct that is passed to a library call might have
    some more fields added to it. The setup call sets those fields to
    sensible defaults, so existing client code can be recompiled against
    the new interface, linked against the new library version, and
    continue to work unchanged.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.arch on Sat Apr 11 14:49:21 2026
    From Newsgroup: comp.arch

    On 4/11/2026 2:37 PM, Lawrence D’Oliveiro wrote:
    On Sat, 11 Apr 2026 17:53:04 GMT, MitchAlsup wrote:

    Why are APIs changing all the time--it seems to me that the API is
    not set until one has a stable interface.

    So long as it changes in backward-compatible fashion, there should be
    zero impact on existing code.

    It also seems to me that when additions are made to an API, the
    additions go in a different dynamic library than the original.

    Shared library versioning is for dealing with backward-incompatible
    changes to the ABI, not (necessarily) the API.

    For example, some struct that is passed to a library call might have
    some more fields added to it. The setup call sets those fields to
    sensible defaults, so existing client code can be recompiled against
    the new interface, linked against the new library version, and
    continue to work unchanged.

    Windows on Alpha, Windows on MIPS, etc...
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Sat Apr 11 23:37:13 2026
    From Newsgroup: comp.arch

    On Sat, 11 Apr 2026 14:49:21 -0700, Chris M. Thomasson wrote:

    On 4/11/2026 2:37 PM, Lawrence D’Oliveiro wrote:

    Shared library versioning is for dealing with backward-incompatible
    changes to the ABI, not (necessarily) the API.

    For example, some struct that is passed to a library call might
    have some more fields added to it. The setup call sets those fields
    to sensible defaults, so existing client code can be recompiled
    against the new interface, linked against the new library version,
    and continue to work unchanged.

    Windows on Alpha, Windows on MIPS, etc...

    Windows doesn’t do shared library versioning though, does it?
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Sun Apr 12 20:17:40 2026
    From Newsgroup: comp.arch

    In article <10rdo84$e4s$1@gal.iecc.com>, John Levine <johnl@taugh.com> wrote:
    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    It occurs to me that another problem with the Linux way of doing
    things (and really, this is true of any Unix-style system, and
    probably Windows as well; perhaps any system generally) is that
    dependencies can be compiled in different ways, and expose
    different functionality as a result, with no change in version.

    This sounds like "don't do that" territory. If you're going to
    provide a library, it needs to have a stable API. If the
    API changes, change the version and document the change. If
    it can be compiled in different ways that change the API (as
    opposed to, say, using different optimizations) those need to
    have different names. I realize that too many people do not
    understand why this matters.

    It may not even affect the external API. Consider a library
    that can, optionally, be compiled to take advantage of multiple
    threads of execution internally. Software that expects to use
    that may have to take additional care to avoid conflicts between
    threads (e.g., perhaps the library takes a reference to a
    callback function that can be invoked by one of these threads;
    now, the callback has to be carefully written to accommodate
    the possibility of concurrent callback invocation, whereas in
    the single-threaded version, it does not). In this case, I
    would argue that the API is the same, though I could see an
    argument that it is not.

    Systems that desire to have one copy of a library installed in
    one place suffer from the obvious problem of needing to expose
    the union of functionality expected of all programs that depend
    on them, ....

    Sorry but what's "them" here? The library, some group of
    libraries, every program that uses the library?

    The set of libraries on the system, any one of which has only
    one version installed.

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Sun Apr 12 20:22:41 2026
    From Newsgroup: comp.arch

    In article <1775929984-5857@newsgrouper.org>,
    MitchAlsup <user5857@newsgrouper.org.invalid> wrote:

    John Levine <johnl@taugh.com> posted:

    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    It occurs to me that another problem with the Linux way of doing
    things (and really, this is true of any Unix-style system, and
    probably Windows as well; perhaps any system generally) is that
    dependencies can be compiled in different ways, and expose
    different functionality as a result, with no change in version.

    This sounds like "don't do that" territory. If you're going to
    provide a library, it needs to have a stable API.

    Why are APIs changing all the time--it seems to me that the API
    is not set until one has a stable interface.

    It also seems to me that when additions are made to an API, the
    additions go in a different dynamic library than the original.

    What am I missing ??

    It's not that the API is changing. It's that the behavior of
    the library changes depending on how it's compiled; this may, of
    course, present as a different interface (indeed, hopefully it
    does), but that need not be the case.

    For example, a number of Unix-y packages over the years have
    wanted to link against a key/value store library, like DBM,
    NDBM, GDBM, or Berkeley DB. Which is chosen when and how might
    be a matter of site policy, but this introduces obvious
    differences in terms of what files are produced by those
    packages, what tools are needed to inspect those files, and so
    on.

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From George Neuner@gneuner2@comcast.net to comp.arch on Sun Apr 12 23:03:22 2026
    From Newsgroup: comp.arch

    On Sat, 11 Apr 2026 23:37:13 -0000 (UTC), Lawrence D’Oliveiro
    <ldo@nz.invalid> wrote:

    On Sat, 11 Apr 2026 14:49:21 -0700, Chris M. Thomasson wrote:

    On 4/11/2026 2:37 PM, Lawrence D’Oliveiro wrote:

    Shared library versioning is for dealing with backward-incompatible
    changes to the ABI, not (necessarily) the API.

    For example, some struct that is passed to a library call might
    have some more fields added to it. The setup call sets those fields
    to sensible defaults, so existing client code can be recompiled
    against the new interface, linked against the new library version,
    and continue to work unchanged.

    Windows on Alpha, windows on MIPS, ect...

    Windows doesn’t do shared library versioning though, does it?

    Yes and no.

    For the most part, on Windows you /presume/ that libraries are
    backward compatible unless the release notes say differently.

    FWIW, Microsoft does /try/ not to break things: they add new functions
    instead of changing old ones, etc., and - other than fixing
    acknowledged bugs - rarely break the behavior of an old function with
    a new implementation. They haven't always been successful, but in 30
    years I can think of only a couple of instances.


    The binaries absolutely do have version stamps [which you can see by
    examining the file properties], but Microsoft's tools did not
    automatically version - the programmer had to do that manually (or
    script it), and installers generally just replaced an old file with a
    new file of the same name.

    The Windows system loader works solely by pathnames and has no way
    to specify which file versions are acceptable. So if a loaded file
    is not compatible, you typically don't find out about it until
    your program misbehaves/crashes trying to use it.


    Given identical filenames, the only alternative is /not/ to use the
    toolchain to link your programs and /not/ to use the system loader.
    Rather you load and use libraries dynamically via
    LoadLibrary/GetProcAddress [the Windows analogues of dlopen/dlsym].
    That way the program can query a DLL's property data and check its
    version before loading it, and maybe fail gracefully rather than
    crashing.
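
    [The check-before-relying-on-it pattern described above can be
    sketched with the POSIX analogues dlopen/dlsym that the post
    mentions; the library name `libm.so.6` is glibc-specific and
    chosen only for illustration:]

    ```c
    /* Sketch of loading a library at runtime and failing gracefully,
       using dlopen/dlsym (the POSIX analogues of the Windows pair
       LoadLibrary/GetProcAddress described above). */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void) {
        /* like LoadLibrary: ask for the library by name */
        void *h = dlopen("libm.so.6", RTLD_NOW);
        if (h == NULL) {
            fprintf(stderr, "cannot load library: %s\n", dlerror());
            return 1;                       /* fail gracefully */
        }
        /* like GetProcAddress: look the symbol up before using it */
        double (*cosine)(double) = (double (*)(double))dlsym(h, "cos");
        if (cosine == NULL) {
            fprintf(stderr, "symbol missing: %s\n", dlerror());
            dlclose(h);
            return 1;
        }
        printf("cos(0) = %.1f\n", cosine(0.0));
        dlclose(h);
        return 0;
    }
    ```

    [On older glibc this needs `-ldl` at link time; since glibc 2.34
    the dl* functions live in libc itself.]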


    dotNET tried to fix the problem by using different filenames and
    recording version information in the Windows registry. The dotNET
    loader then would look up the library's base name in the registry, see
    if there was a mapping for a compatible version, and then load that
    file. This worked (mostly) as long as program installers played by the
    rules.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From John Levine@johnl@taugh.com to comp.arch on Mon Apr 13 03:36:13 2026
    From Newsgroup: comp.arch

    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    It may not even affect the external API. Consider a library
    that can, optionally, be compiled to take advantage of multiple
    threads of execution internally. Software that expects to use
    that may have to take additional care to avoid conflicts between
    threads (e.g., perhaps the library takes a reference to a
    callback function that can be invoked by one of these threads;
    now, the callback has to be carefully written to accommodate
    the possibility of concurrent callback invocation, whereas in
    the single-threaded version, it does not). In this case, I
    would argue that the API is the same, though I could see an
    argument that it is not.

    That still sounds like "don't do that." If the library might
    be multithreaded, the calling program needs to deal with that.
    A broken program that sometimes works due to luck is still broken.

    This all seems pretty hypothetical, though. Yes, there are plenty
    of packages that can be compiled with different flavors of dbm but
    I don't ever recall seeing one that then exported a library to
    be used by other applications. I mostly work on FreeBSD so I
    could believe that linux packages are sloppier.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Mon Apr 13 11:27:52 2026
    From Newsgroup: comp.arch

    In article <10rhobd$19hs$1@gal.iecc.com>, John Levine <johnl@taugh.com> wrote:
    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    It may not even affect the external API. Consider a library
    that can, optionally, be compiled to take advantage of multiple
    threads of execution internally. Software that expects to use
    that may have to take additional care to avoid conflicts between
    threads (e.g., perhaps the library takes a reference to a
    callback function that can be invoked by one of these threads;
    now, the callback has to be carefully written to accommodate
    the possibility of concurrent callback invocation, whereas in
    the single-threaded version, it does not). In this case, I
    would argue that the API is the same, though I could see an
    argument that it is not.

    That still sounds like "don't do that."

    It does. And yet, in the real world, that sort of thing has
    been a factor in production outages. One can but shake one's
    head in disbelief at the folly of one's fellows.

    If the library might
    be multithreaded, the calling program needs to deal with that.
    A broken program that sometimes works due to luck is still broken.

    There was a messy period of time when threading was not quite
    ubiquitous yet, but not particularly stable, either. A lot of
    programs that had been written assuming a single-threaded
    environment went through a painful period of adjustment as the
    ecosystem overall went to multithreading. Or, putting it
    another way, those programs were written when they could assume
    that their dependencies weren't playing fast and loose with
    threads behind the scenes (or at least not in a way that they
    could observe), and at some point during the lifecycles of those
    programs, that all changed.
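
    [The callback hazard described earlier in the thread can be
    sketched in C with pthreads; the names (`on_event`, `worker`) are
    invented, with the worker thread standing in for a library that
    has silently become multithreaded:]

    ```c
    /* Sketch: a callback written for a single-threaded world must
       grow synchronization once the library invokes it from several
       threads at once. Without the mutex, counter++ is a data race. */
    #include <pthread.h>
    #include <stdio.h>

    static int counter = 0;                       /* shared state */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* the callback the library may now call concurrently */
    static void on_event(void) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }

    /* stand-in for a library worker thread driving the callback */
    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            on_event();
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("events seen: %d\n", counter);   /* 200000 with the lock */
        return 0;
    }
    ```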

    This all seems pretty hypothetical, though. Yes, there are plenty
    of packages that can be compiled with different flavors of dbm but
    I don't ever recall seeing one that then exported a library to
    be used by other applications. I mostly work on FreeBSD so I
    could believe that linux packages are sloppier.

    It is admittedly hypothetical, though I have actually had the
    DB thing be an issue; probably due to some assumption in a shell
    script a sysadmin had written or something.

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Andy Valencia@vandys@vsta.org to comp.arch on Mon Apr 13 09:21:09 2026
    From Newsgroup: comp.arch

    John Levine <johnl@taugh.com> writes:
    According to Dan Cross <cross@spitfire.i.gajendra.net>:
    It's interesting to note that Linux has also
    suffered from similar issues, exacerbated by libraries that
    don't handle versioning well. Containers were the solution.
    That's kind of sad.

    It has become a vicious cycle. Devs have lost interest in
    compatibility, especially WRT APIs. Users of the API thus have to
    make each app a snapshot of the one blend of libraries which work
    with each other. This frees the devs to change their API willy-nilly,
    since it's just a candidate for whatever montage snapshot happens to
    make it work this month/year.

    And then a security flaw is found, and it's actually becoming more
    cost-efficient to just ignore it rather than find and fix it in each
    one-off snapshot among all the myriad library-combination snapshots
    out there.

    I came up the SW ranks with an expectation of rigor and attention to detail. Since these show no sign of ever returning, perhaps it's this AI thing which will eventually be used to jump out of this pit.

    Andy Valencia
    Home page: https://www.vsta.org/andy/
    To contact me: https://www.vsta.org/contact/andy.html
    No AI was used in the composition of this message
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Thomas Koenig@tkoenig@netcologne.de to comp.arch on Mon Apr 13 17:21:23 2026
    From Newsgroup: comp.arch

    Andy Valencia <vandys@vsta.org> schrieb:

    I came up the SW ranks with an expectation of rigor and attention to detail. Since these show no sign of ever returning, perhaps it's this AI thing which will eventually be used to jump out of this pit.

    Right now, it seems people are managing to dig themselves deeper into
    the pit, much faster, with the help of AI.

    To quote something that just got forwarded to me today:

    "It's more like asking the agent to build a building from floorplans
    and spec, and it produces everything in the right measurements and
    passes all tests. Except then you find out that the walls and
    beams are made of foam and the art is load-bearing".

    Andy Valencia
    Home page: https://www.vsta.org/andy/
    To contact me: https://www.vsta.org/contact/andy.html
    No AI was used in the composition of this message

    See my own .sig :-)
    --
    This USENET posting was made without artificial intelligence,
    artificial impertinence, artificial arrogance, artificial stupidity,
    artificial flavorings or artificial colorants.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From legalize+jeeves@legalize+jeeves@mail.xmission.com (Richard) to comp.arch on Mon Apr 13 18:55:36 2026
    From Newsgroup: comp.arch

    Thomas Koenig <tkoenig@netcologne.de> spake the secret code <10rj8mj$3fme6$1@dont-email.me> thusly:

    Andy Valencia <vandys@vsta.org> schrieb:

    I came up the SW ranks with an expectation of rigor and attention to detail. Since these show no sign of ever returning, perhaps it's this AI thing which will eventually be used to jump out of this pit.

    Right now, it seems people are managing to dig themselves deeper into
    the pit, much faster, with the help of AI.

    My take on the situation (and I've been using the AI tools more and
    more over time) is that the tools are great in the hands of someone
    with enough experience to "call the AI out on its shit". I'm not
    sure how well a n00b or junior engineer is able to spot the obvious
    bad decisions the AI sometimes makes. The AI definitely acts like a
    "confident incompetent". It will always admit failure when you point
    it out, but what happens if you don't?

    Also, crafting prompts and properly seeding the context is essential
    to getting the most out of the AI and this is a new skill that, while
    not difficult to obtain, you have to invest in.
    --
    "The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
    The Terminals Wiki <http://terminals-wiki.org>
    The Computer Graphics Museum <http://computergraphicsmuseum.org>
    Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Mon Apr 13 19:56:46 2026
    From Newsgroup: comp.arch

    On Mon, 13 Apr 2026 09:21:09 -0700, Andy Valencia wrote:

    It has become a vicious cycle. Devs have lost interest in
    compatibility, especially WRT API's. Users of the API thus have to
    make each app a snapshot of the one blend of libraries which work
    with each other. This frees the devs to change their API
    willy-nilly, since it's just a candidate for whatever montage
    snapshot happens to make it work this month/year.

    And then a security flaw is found, and it's actually becoming more
    cost efficient to just ignore it rather than find and fix it in each
    one-off snapshot in all the myriad types of library combinatoric
    snapshotting out there.

    This may be an accurate description of the proprietary world, but I’m
    not aware of this sort of thing happening in the open-source world.
    Can you give an example or two?
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Mon Apr 13 20:15:03 2026
    From Newsgroup: comp.arch

    In article <10rje78$1d4f3$1@news.xmission.com>, Richard <> wrote:
    Thomas Koenig <tkoenig@netcologne.de> spake the secret code <10rj8mj$3fme6$1@dont-email.me> thusly:

    Andy Valencia <vandys@vsta.org> schrieb:

    I came up the SW ranks with an expectation of rigor and attention to detail.
    Since these show no sign of ever returning, perhaps it's this AI thing which
    will eventually be used to jump out of this pit.

    Right now, it seems people are managing to dig themselves deeper into
    the pit, much faster, with the help of AI.

    My take on the situation (and I've been using the AI tools more and
    more over time) is that the tools are great in the hands of someone
    with enough experience to "call the AI out on it's shit". I'm not
    sure how well a n00b or junior engineer is able to spot the obvious
    bad decisions the AI sometimes makes. The AI definitely acts like a
    "confident incompetent". It will always admit failure when you point
    it out, but what happens if you don't?

    Also, crafting prompts and properly seeding the context is essential
    to getting the most out of the AI and this is a new skill that, while
    not difficult to obtain, you have to invest in.

    I've been working with these things to get a sense of their
    capabilities, and so far it seems like they work best when you
    can constrain their behavior by pinning them to some set of
    external criteria that they cannot fudge. For example,
    if you have a formal specification of a thing, and have checked
    it with some automated system (e.g., expressed in a modeling
    language like Promela, TLA+ or Alloy and checked using a tool
    like SPIN or TLC or something) and you tell the LLM that the
    model is both immutable and the basis for what they must do,
    then you can get it to do a decent job.

    On the other hand, prompts alone don't seem to be sufficient to
    keep it honest, but maybe I am just not good at generating them.

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Thomas Koenig@tkoenig@netcologne.de to comp.arch on Mon Apr 13 20:20:46 2026
    From Newsgroup: comp.arch

    Richard <legalize+jeeves@mail.xmission.com> schrieb:
    The AI definitely acts like a
    "confident incompetent". It will always admit failure when you point
    it out, but what happens if you don't?

    The amusing thing is that it will also admit failure when it is
    actually right.

    Or pretend to have corrected something when it hasn't, especially in
    pictures.

    For a long time, I've wanted it to draw a path which splits into
    two and then comes together again. The result has been unmitigated
    disaster.

    But at least, after years of trying, I have made one of
    my favorite misquotes into a picture: "Before the gates of
    scale-up the high gods have placed scale-down." It is a wisdom
    that too few lab chemists heed. (Warning: shameless plug)
    https://www.linkedin.com/feed/update/urn:li:activity:7448403583332413440/
    --
    This USENET posting was made without artificial intelligence,
    artificial impertinence, artificial arrogance, artificial stupidity,
    artificial flavorings or artificial colorants.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From legalize+jeeves@legalize+jeeves@mail.xmission.com (Richard) to comp.arch on Tue Apr 14 04:51:16 2026
    From Newsgroup: comp.arch

    Thomas Koenig <tkoenig@netcologne.de> spake the secret code <10rjj6u$3j33t$1@dont-email.me> thusly:

    For a long time, I've wanted it to draw a path which splits into
    two and then comes together again. The result has been unmitigated
    disaster.

    I've had good results using it as a coding assistant.

    I tried using Google Gemini for writing terminals wiki articles and it
    was pretty horrible. When pinned down to explain its horrible
    ability to do what I asked, it simply confessed that its primary job
    was trying to make me happy, everything else be damned. So it would
    hallucinate URLs to "primary sources" and make up a bunch of other
    stuff in the generated prose that I could spot as obvious B.S. Worse,
    it wouldn't consistently follow my rules, even though that's how it's
    supposed to work.

    Working with chatgpt was much less frustrating, was more reliable at
    providing me links to primary sources and followed my rules pretty
    well.

    Then I tried to get them to generate a picture and I found the process
    entirely frustrating. Most of my experiments have been with
    researching vintage computing stuff and writing code, both areas where
    I have enough personal knowledge to spot hallucinations.

    Again, I think it comes back to having the bot do the grunt work while
    being overseen by someone with enough knowledge to spot the b.s.
    --
    "The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
    The Terminals Wiki <http://terminals-wiki.org>
    The Computer Graphics Museum <http://computergraphicsmuseum.org>
    Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Thomas Koenig@tkoenig@netcologne.de to comp.arch on Tue Apr 14 05:54:20 2026
    From Newsgroup: comp.arch

    Richard <legalize+jeeves@mail.xmission.com> schrieb:
    Thomas Koenig <tkoenig@netcologne.de> spake the secret code
    <10rjj6u$3j33t$1@dont-email.me> thusly:

    For a long time, I've wanted it to draw a path which splits into
    two and then comes together again. The result has been unmitigated
    disaster.

    I've had good results using it as a coding assistant.

    I tried using Google Gemini for writing terminals wiki articles and it
    was pretty horrible. When pinned down to explain its horrible
    ability to do what I asked, it simply confessed that its primary job
    was trying to make me happy, everything else be damned. So it would
    hallucinate URLs to "primary sources" and make up a bunch of other
    stuff in the generated prose that I could spot as obvious B.S. Worse,
    it wouldn't consistently follow my rules, even though that's how it's
    supposed to work.

    You know Wikipedia has a policy on AI-generated content, https://en.wikipedia.org/wiki/Wikipedia:LLM_use_disclosure ,
    and an "AI style guide" to spot it, https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing ?
    --
    This USENET posting was made without artificial intelligence,
    artificial impertinence, artificial arrogance, artificial stupidity,
    artificial flavorings or artificial colorants.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From legalize+jeeves@legalize+jeeves@mail.xmission.com (Richard) to comp.arch on Tue Apr 14 16:23:14 2026
    From Newsgroup: comp.arch

    Thomas Koenig <tkoenig@netcologne.de> spake the secret code <10rkkqc$3t7iu$1@dont-email.me> thusly:

    Richard <legalize+jeeves@mail.xmission.com> schrieb:
    I tried using google gemini for writing terminals wiki articles [...]

    You know Wikipedia has a policy on AI-generated content,

    Terminals wiki is mine and I decide the policies; IDGAF what
    wikipenis(tm) decides to do :).
    --
    "The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
    The Terminals Wiki <http://terminals-wiki.org>
    The Computer Graphics Museum <http://computergraphicsmuseum.org>
    Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.arch on Wed Apr 15 12:23:40 2026
    From Newsgroup: comp.arch

    In article <10rao08$jv9$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    Systems that desire to have one copy of a library installed in
    one place suffer from the obvious problem of needing to expose
    the union of functionality expected of all programs that depend
    on them, either directly or indirectly.

    There are at least three related, but different, use cases for shared
    libraries on Unix-like systems:

    1) Breaking up the operating system functionality into manageable-size
    chunks of sensibly related material. This is the case that shared library system designers are usually dealing with. In some cases, notably macOS,
    there are unspoken assumptions that this is _always_ the case, and shared libraries have strings embedded in them that are copied into programs
    built against them. The purpose of these strings is to tell the loader
    where to find them. This works fine for libraries that have a canonical location in the filesystem, but see below for ones that don't.

    2) Breaking up applications into related chunks of functionality. This
    can be helpful for organisation of code, for producing commercial
    applications with subsets of "full" functionality, and so on. The
    important point here is that the shared libraries are used by a single application, or a suite of related applications. On macOS, this is
    tackled by some special values in the embedded strings that tell the
    loader to look in a filesystem location relative to the application. That avoids the need for an application to have a fixed installation directory, which implies that you can't have two different versions installed.

    3) The fairly rare case of shared libraries intended as "software
    components," to be used in many different applications, but which are
    _not_ extensions to the operating system. Different applications may (and likely do) have different versions of such libraries. On macOS, this
    requires the application developer to modify the embedded strings in the
    shared library before linking against it. Apple provide a tool for doing
    this, but it's fairly obscure. The stuff I work on is in this category.

    John
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.arch on Wed Apr 15 12:23:40 2026
    From Newsgroup: comp.arch

    In article <10r5eor$b6q$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    Quite possibly! It's interesting to note that Linux has also
    suffered from similar issues, exacerbated by libraries that
    don't handle versioning well. Containers were the solution.

    Containers were designed to make it easy to run lots of different
    applications on the same cloud servers. The companies that offer cloud
    services don't want to solve such problems in the applications - it's
    hard to blame them - and the SaaS companies have learned that their
    customers want cheap, not good, software.

    John
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Wed Apr 15 13:51:41 2026
    From Newsgroup: comp.arch

    In article <memo.20260415122259.25212A@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10rao08$jv9$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    Systems that desire to have one copy of a library installed in
    one place suffer from the obvious problem of needing to expose
    the union of functionality expected of all programs that depend
    on them, either directly or indirectly.

    There are at least three related, but different, use cases for shared
    libraries on Unix-like systems:

    I don't know what this has to do with what I wrote that you
    quoted, but I'm afraid it's mostly incorrect.

    The usual use cases for shared objects are a) sharing of text
    and r/o data between processes linked against the same image,
    b) providing fixes to libraries without having to relink
    programs, and c) providing extensibility via the ability to
    dynamically load shared objects into the address space of a
    running process, find e.g. callable functions in those objects
    by looking up entries in their symbol tables, and accessing
    functionality provided by those objects by calling them (using
    a well-defined ABI).

    The last bit is a particularly powerful thing, and is how a
    language interpreter can so easily take advantage of advanced
    functionality that is not built-in or written in that language.
    C.f. Python and its use in the data processing ecosystem, which
    relies heavily on FFI calls to numerical analysis libraries
    written in FORTRAN and C.

    1) Breaking up the operating system functionality into manageable-size
    chunks of sensibly related material. This is the case that shared library
    system designers are usually dealing with.

    This doesn't make much sense to me. It is not at all clear what
    you mean by, "operating system functionality." What do you mean
    by "manageable-size chunks of sensibly related material"?

    These statements are so vague I can only speculate what they may
    mean. You write that this is, "the case that shared library
    system designers are usually dealing with", and I confess that I
    have no idea what that might possibly mean.

    Do you mean something like `libc`? If so, bear in mind that
    Unix-style systems have provided libraries for the use of user
    space code since approximately the beginning; they did so with
    static libraries (".a" files) for at least a decade before Unix
    grew support for shared libraries in anything resembling the
    modern sense.

    In some cases, notably macOS,
    there are unspoken assumptions that this is _always_ the case, and shared
    libraries have strings embedded in them that are copied into programs
    built against them. The purpose of these strings is to tell the loader
    where to find them. This works fine for libraries that have a canonical
    location in the filesystem, but see below for ones that don't.

    Pretty much every dynamic executable has a list of libraries
    that must be loaded by the runtime linker; macOS isn't
    particularly notable in this regard, though they chose to stick
    with Mach-O and .dylibs as the file format, and not something
    like ELF; that makes them a bit of an outlier, but comparing
    `otool -L` on my Mac workstation to `ldd` on Linux doesn't show
    much that is conceptually different:

    ```
    mac% otool -L /bin/ls
    /bin/ls:
    /usr/lib/libutil.dylib (compatibility version 1.0.0, current version 1.0.0)
    /usr/lib/libncurses.5.4.dylib (compatibility version 5.4.0, current version 5.4.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1356.0.0)
    mac%
    ```

    (Aside: I presume the dependency on `ncurses` is so that `ls`
    can figure out how wide the terminal window is for columnar
    output; possibly for handling colors or something as well. I
    dunno; I try to turn as much of that off as I can.)

    ```
    linux% ldd /bin/ls
    linux-vdso.so.1 (0x00007fc9cc29c000)
    libcap.so.2 => /usr/lib/libcap.so.2 (0x00007fc9cc23f000)
    libc.so.6 => /usr/lib/libc.so.6 (0x00007fc9cc04e000)
    /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fc9cc29e000)
    linux%
    ```

    Of course, there are _some_ variations: Linux has the vDSO,
    and macOS has libSystem (it's probably worth noting that the
    system call interface on macOS is opaque, versus Linux where the
    KBI is rigidly defined; perhaps this is what you meant when you
    wrote, "unspoken assumptions that this is _always_ the case"
    above).

    2) Breaking up applications into related chunks of functionality. This
    can be helpful for organisation of code, for producing commercial
    applications with subsets of "full" functionality, and so on. The
    important point here is that the shared libraries are used by a single
    application, or a suite of related applications. On macOS, this is
    tackled by some special values in the embedded strings that tell the
    loader to look in a filesystem location relative to the application. That
    avoids the need for an application to have a fixed installation directory,
    which implies that you can't have two different versions installed.

    Huh. That's an interesting idea, but really it's something that
    is facilitated by having shared objects, not something that was
    (or is) a primary motivating factor for shared libraries in the
    first place.

    3) The fairly rare case of shared libraries intended as "software
    components," to be used in many different applications, but which are
    _not_ extensions to the operating system. Different applications may (and
    likely do) have different versions of such libraries. On macOS, this
    requires the application developer to modify the embedded strings in the
    shared library before linking against it. Apple provide a tool for doing
    this, but it's fairly obscure. The stuff I work on is in this category.

    Actually, I'd posit that this is very common.

    Everything that I installed from homebrew that cares about, say,
    the PNG library is picking up a single shared object:
    /opt/homebrew/opt/libpng/lib/libpng16.16.dylib.

    Again, the motivation was primarily sharing; any program using
    the same shared objects running concurrently shares the text,
    read-only data, and metadata of those objects with every other
    such program, as opposed to each statically linked binary
    getting its own copy. It does so at the expense of some
    additional bookkeeping in the operating system, but the overhead
    is not that bad.

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.arch on Wed Apr 15 13:53:31 2026
    From Newsgroup: comp.arch

    In article <memo.20260415122259.25212B@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10r5eor$b6q$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    Quite possibly! It's interesting to note that Linux has also
    suffered from similar issues, exacerbated by libraries that
    don't handle versioning well. Containers were the solution.

    Containers were designed to make it easy to run lots of different
    applications on the same cloud servers. The companies that offer cloud
    services don't want to solve such problems in the applications - it's
    hard to blame them - and the SaaS companies have learned that their
    customers want cheap, not good, software.

    The companies that are offering such things on "cloud servers"
    are not allowing their customers to run those applications on
    the bare metal.

    - Dan C.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.arch on Wed Apr 15 14:43:31 2026
    From Newsgroup: comp.arch

    cross@spitfire.i.gajendra.net (Dan Cross) writes:
    In article <memo.20260415122259.25212A@jgd.cix.co.uk>,
    John Dallman <jgd@cix.co.uk> wrote:
    In article <10rao08$jv9$1@reader1.panix.com>,
    cross@spitfire.i.gajendra.net (Dan Cross) wrote:

    Systems that desire to have one copy of a library installed in
    one place suffer from the obvious problem of needing to expose
    the union of functionality expected of all programs that depend
    on them, either directly or indirectly.

    There are at least three related, but different, use cases for shared
    libraries on Unix-like systems:

    I don't know what this has to do with what I wrote that you
    quoted, but I'm afraid it's mostly incorrect.

    The usual use cases for shared objects are a) sharing of text
    and r/o data between processes linked against the same image,
    b) providing fixes to libraries without having to relink
    programs, and c) providing extensibility via the ability to
    dynamically load shared objects into the address space of a
    running process, find e.g. callable functions in those objects
    by looking up entries in their symbol tables, and accessing
    functionality provided by those objects by calling them (using
    a well-defined ABI).

    The last bit is a particularly powerful thing, and is how a
    language interpreter can so easily take advantage of advanced
    functionality that is not built-in or written in that language.
    C.f. Python and its use in the data processing ecosystem, which
    relies heavily on FFI calls to numerical analysis libraries
    written in FORTRAN and C.

    This describes a major use of shared objects. The SoC simulator
    that I work on models a number of discrete SoCs and dynamically loads
    various shared objects based on the model of SoC being simulated.

    <snip>


    2) Breaking up applications into related chunks of functionality. This
    can be helpful for organisation of code, for producing commercial
    applications with subsets of "full" functionality, and so on. The
    important point here is that the shared libraries are used by a single
    application, or a suite of related applications. On macOS, this is
    tackled by some special values in the embedded strings that tell the
    loader to look in a filesystem location relative to the application. That
    avoids the need for an application to have a fixed installation directory,
    which implies that you can't have two different versions installed.

    Huh. That's an interesting idea, but really it's something that
    is facilitated by having shared objects, not something that was
    (or is) a primary motivating factor for shared libraries in the
    first place.

    Indeed, and Unix-like systems have LD_LIBRARY_PATH, which provides a
    mechanism for "telling the loader where to look for shared objects";
    they also support binding the search path into the ELF executable at
    link time.


    3) The fairly rare case of shared libraries intended as "software components," to be used in many different applications, but which are _not_ extensions to the operating system. Different applications may (and likely do) have different versions of such libraries. On macOS, this requires the application developer to modify the embedded strings in the shared library before linking against it. Apple provide a tool for doing this, but it's fairly obscure. The stuff I work on is in this category.

    Actually, I'd posit that this is very common.

    Indeed. Things like libxml and libxslt, for example. Or openssl.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.arch on Wed Apr 15 12:55:43 2026
    From Newsgroup: comp.arch

    On 4/11/2026 4:37 PM, Lawrence D’Oliveiro wrote:
    On Sat, 11 Apr 2026 14:49:21 -0700, Chris M. Thomasson wrote:

    On 4/11/2026 2:37 PM, Lawrence D’Oliveiro wrote:

    Shared library versioning is for dealing with backward-incompatible
    changes to the ABI, not (necessarily) the API.

    For example, some struct that is passed to a library call might
    have some more fields added to it. The setup call sets those fields
    to sensible defaults, so existing client code can be recompiled
    against the new interface, linked against the new library version,
    and continue to work unchanged.

    Windows on Alpha, Windows on MIPS, etc...

    Windows doesn’t do shared library versioning though, does it?

    DLL HELL? A fun one.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Wed Apr 15 22:28:43 2026
    From Newsgroup: comp.arch

    On Wed, 15 Apr 2026 12:22 +0100 (BST), John Dallman wrote:

    Containers were designed to make it easy to run lots of different applications on the same cloud servers.

    Linux systems have been serving multirole operations for years. Time
    was that the built-in multiuser capabilities were sufficient to keep
    apps isolated from each other.

    The problem seems to be with increasing use of proprietary apps. The
    developers of those seem to be accustomed to thinking that they have
    full control over the machine their software is running on.

    So virtualization became popular as a way of dealing with this, by
    isolating each problem app into what it thinks is its own machine.

    Full virtualization has a certain cost in terms of resources used. For
    example, each VM needs its own OS installation. Containerization is a
    much lighter-weight solution that shares the OS kernel among multiple
    userlands, while still keeping the latter isolated from each other.

    This requires special features of the OS kernel that are only
    available in Linux. It also means that the apps must be developed for
    Linux. This seems to have happened anyway; Windows Server has very
    little presence in the cloud, even in Microsoft’s cloud.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.arch on Thu Apr 16 00:49:52 2026
    From Newsgroup: comp.arch

    On Wed, 15 Apr 2026 12:22 +0100 (BST), John Dallman wrote:

    There are at least three related, but different, use cases for
    shared libraries on Unix-like systems:

    The first two use cases you mention have to do with “libraries”, not necessarily with “shared” libraries.

    3) The fairly rare case of shared libraries intended as "software components," to be used in many different applications, but which
    are _not_ extensions to the operating system.

    I don’t know why you think these are “rare”. Look at this collection <http://ftp.nz.debian.org/debian/pool/main/> of the standard packages
    for Debian, just for example: notice how half of the names of the
    subdirectory groupings begin with “lib”? Those are all shared
    libraries.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From John Levine@johnl@taugh.com to comp.arch on Thu Apr 16 01:24:26 2026
    From Newsgroup: comp.arch

    According to Scott Lurndal <slp53@pacbell.net>:
    3) The fairly rare case of shared libraries intended as "software components," to be used in many different applications, but which are _not_ extensions to the operating system. Different applications may (and likely do) have different versions of such libraries. On macOS, this requires the application developer to modify the embedded strings in the shared library before linking against it. Apple provide a tool for doing this, but it's fairly obscure. The stuff I work on is in this category.

    Actually, I'd posit that this is very common.

    Indeed. Things like libxml and libxslt, for example. Or openssl.

    Or for that matter, libc.so which is linked into every linux or BSD
    program. In my experience that is by far the major use of shared libraries.

    An important but distant second is loadable code modules like the ones
    many python libraries use.
    --
    Regards,
    John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
    Please consider the environment before reading this e-mail. https://jl.ly
    --- Synchronet 3.21f-Linux NewsLink 1.2