https://newsroom.ibm.com/2026-04-02-ibm-announces-strategic-collaboration-with-arm-to-shape-the-future-of-enterprise-computing
I would consider this a profoundly _unnatural_ appendage to IBM's
history of hardware innovation... but then what do I know?
On Sat, 4 Apr 2026 20:39:19 -0000 (UTC), quadi wrote:
I would consider this a profoundly _unnatural_ appendage to IBM's
history of hardware innovation... but then what do I know?
What next? Build a small machine based on another non-IBM-made processor
... say an Intel 8088? And have it run a completely non-IBM-made OS,
licensed from some small upstart software house?
With an open expansion bus that anyone can make cards for?
But Intel had made the 8088 available, a version of the 16-bit 8086 with an 8-bit external bus, which allowed cheaper computers to be built using this architecture. It wasn't until much later that Motorola came out with the equivalent 68008, which allowed the Sinclair QL to reach the market.
Michael S <already5chosen@yahoo.com> wrote:
https://newsroom.ibm.com/2026-04-02-ibm-announces-strategic-collaboration-with-arm-to-shape-the-future-of-enterprise-computing
That announcement has little content.
In particular, I do not see an
"Arm within Z" statement. Since Z people are involved this is
likely, but equally likely IBM may wish to deliver servers with
Arm cores but a peripheral system taken from Z. Or something
entirely different.
Now, an interesting question is what will happen to Power? Using
Arm may mean that IBM no longer considers Power a viable platform.
Also, if IBM delivers machines capable of running both Arm and
Z code, then a likely consequence will be the demise of Z as a hardware architecture.
That is, once performance-critical things run on
Arm they would no longer need real hardware for Z; software emulation
will be good enough.
On Sun, 5 Apr 2026 02:35:39 -0000 (UTC), Waldek Hebisch wrote:
Now, an interesting question is what will happen to Power? Using Arm
may mean that IBM no longer considers Power a viable platform. Also,
if IBM delivers machines capable of running both Arm and Z code,
then a likely consequence will be the demise of Z as a hardware
architecture. That is, once performance-critical things run on Arm
they would no longer need real hardware for Z; software emulation
will be good enough.
Mainframes were never about “performance critical” stuff -- unless
your meaning of “performance” was purely about I/O throughput, not CPU power. Because I/O throughput is what mainframes are/were optimized
for.
But nowadays you can get high I/O throughput much more cheaply and
simply, with a storage server with say 24, 48 or 72 disk trays,
running Linux.
On 4/4/26 6:32 PM, quadi wrote:
But Intel had made the 8088 available, a version of the 16-bit 8086 with an 8-bit external bus, which allowed cheaper computers to be built using this
architecture. It wasn't until much later that Motorola came out with the
equivalent 68008, which allowed the Sinclair QL to reach the market.
Not as big of a problem as you might think.
Sure the memory bus was 16 bits but most implementations would be using enough memory chips that it would be a wash. The IBM PC motherboard had space for 64KB of 16KX1 DRAM. The only difference between 64KX8 and
32KX16 is that your cheapest model has 8 more DRAM chips in it.
For peripherals, the 68000 had a 6800 compatibility mode so that you
could use those old 8 bit I/O chips if you desired.
I think IBM's mainframe business survives (in fact generates
billions of dollars in revenue for IBM) based on two factors.
[factors omitted]
On 4/4/2026 7:35 PM, Waldek Hebisch wrote:
Using Arm may mean that IBM no longer considers Power a viable
platform.
They make a lot of money from it.
On Sat, 4 Apr 2026 21:11:05 -0700, Stephen Fuld wrote:
I think IBM's mainframe business survives (in fact generates
billions of dollars in revenue for IBM) based on two factors.
[factors omitted]
I don’t think it does generate “billions of dollars in revenue” any more.
Arvind Krishna, IBM chairman, president, and chief executive officer, was speaking as the company turned in full-year results that showed a leap in mainframe sales.
However, the strongest growth came in IBM's infrastructure business, which grew revenues 12 percent for the year, and 21 percent in the fourth quarter.
That infrastructure boost was in large part powered by the launch of IBM's z17 series of mainframes.
In his prepared remarks, Krishna said: "Innovation value can also be seen in our IBM Z performance, up 48 percent this year, achieving the highest annual revenue for Z in about 20 years."
CFO James Kavanaugh referred to a "record z17 launch, achieving the highest annual revenue for IBM Z in about 20 years and outpacing z16 over the first three quarters of the program."
On Sat, 4 Apr 2026 21:11:05 -0700, Stephen Fuld wrote:
I am pretty sure that you got it backward: most likely IBM lost serious
I think IBM's mainframe business survives (in fact generates
billions of dollars in revenue for IBM) based on two factors.
[factors omitted]
I don’t think it does generate “billions of dollars in revenue” any more.
Which is why, after years, decades of losses and layoffs, the
entire IBM company is but a shadow of its former self. Which is why it started embracing Linux in a big way a quarter century ago, and why it acquired Red Hat. And why it is now looking to support ARM-based
workloads on top of that.
Sure the memory bus was 16 bits but most implementations would be using enough memory chips that it would be a wash. The IBM PC motherboard had space for 64KB of 16KX1 DRAM. The only difference between 64KX8 and
32KX16 is that your cheapest model has 8 more DRAM chips in it.
On Sat, 4 Apr 2026 20:56:50 -0700, Stephen Fuld wrote:
Depends on what you consider significant.
On 4/4/2026 7:35 PM, Waldek Hebisch wrote:
Using Arm may mean that IBM no longer considers Power a viable
platform.
They make a lot of money from it.
POWER still has a significant presence in the Top500 list of the
world’s most powerful computers. I’m sure there’s a decent bit of profit to be made in one of the few low-volume, high-margin sectors
still left in the computing market.
Also remember RAM was very expensive in 1981: the 64KB PC cost 50%
more than the 32KB PC - roughly $3000 vs $2000 at retail - with the
only difference being the amount of DRAM.
Wouldn't they
also have needed 16 bits worth of ROMs for the BIOS?
https://newsroom.ibm.com/2026-04-02-ibm-announces-strategic-collaboration-with-arm-to-shape-the-future-of-enterprise-computing
IBM and Arm aim to extend this track record of innovation by
combining IBM's enterprise leadership in systems reliability,
security, and scalability with Arm's own leadership in power-
efficient architecture, workload enablement expertise, and
broad software ecosystem,
On Sat, 4 Apr 2026 21:11:05 -0700, Stephen Fuld wrote:
I think IBM's mainframe business survives (in fact generates
billions of dollars in revenue for IBM) based on two factors.
[factors omitted]
I don’t think it does generate “billions of dollars in revenue” any more.
In article <20260404213401.0000593a@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:
I think IBM wants to run ARM software to give their mainframes more "relevance" to current fashions in computing, and ARM wants to learn
about high-grade RAS.
Intel and AMD assume all software already runs on their platforms. ARM
don't, and - from personal experience - can be quite effective in helping with transitions. I think IBM wants to take advantage of that. Also,
adding ARM cores to an IBM processor die will be easier than Intel or AMD cores, simply because that's ARM's business model.
On 4/4/2026 7:35 PM, Waldek Hebisch wrote:
Michael S <already5chosen@yahoo.com> wrote:
https://newsroom.ibm.com/2026-04-02-ibm-announces-strategic-collaboration-with-arm-to-shape-the-future-of-enterprise-computing
That announcement has little content.
Agreed.
In particular, I do not see an
"Arm within Z" statement. Since Z people are involved this is
likely, but equally likely IBM may wish to deliver servers with
Arm cores but a peripheral system taken from Z. Or something
entirely different.
Of course, I may be way off base here. But I thought it could be
something like this: the Z series currently has specialized cores or some
mechanism for running crypto stuff, and at least some AI stuff. So I thought it may be a mechanism to have ARM cores as specialized cores to
run some Android-like stuff. Perhaps the advantage of running this
stuff on the Z series itself instead of remotely is better communication between the Android app and whatever is running on the Z series.
Now, an interesting question is what will happen to Power?
Probably nothing. Power and Z are pretty much independent.
Using
Arm may mean that IBM no longer considers Power a viable platform.
They make a lot of money from it.
Also, if IBM delivers machines capable of running both Arm and
Z code, then a likely consequence will be the demise of Z as a hardware
architecture.
I don't think so. IBM is unlikely to port things like CICS or TPF to
ARM, and without those, ARM couldn't replace Z series CPUs.
That is, once performance-critical things run on
Arm they would no longer need real hardware for Z; software emulation
will be good enough.
Again, I may be totally off base here, but I saw nothing to indicate
that they were going to move any substantial amount of software to ARM.
On Sun, 05 Apr 2026 02:35:39 +0000, Waldek Hebisch wrote:
That announcement has little content. In particular, I do not see an
"Arm within Z" statement. Since Z people are involved this is likely, but
equally likely IBM may wish to deliver servers with Arm cores but a
peripheral system taken from Z. Or something entirely different.
Well, it began by referring to "dual-architecture hardware", so it seems like it does refer to a way of making ARM code run on System z mainframes.
Why? Well, the announcement also focuses on running Arm applications on
IBM hardware. Obviously, a System z mainframe is a lot heavier than a smartphone. But ARM is trying to enter the server marketplace too.
Note that IBM includes mainframe sales in "infrastructure":
However, the strongest growth came in IBM's infrastructure
business, which grew revenues 12 percent for the year, and 21
percent in the fourth quarter.
These numbers show that mainframe sales are significant to IBM.
Lawrence D’Oliveiro <ldo@nz.invalid> wrote:
Mainframes were never about “performance critical” stuff -- unless
your meaning of “performance” was purely about I/O throughput, not
CPU power. Because I/O throughput is what mainframes are/were
optimized for.
Businesses have some job to do and some performance expectation. If
performance did not matter they would only use lower-end Z machines
and IBM would not bother to make bigger Z boxes.
But there are obstacles for anyone who wants to run on
Z/Architecture: it's big-endian, which lots of newer software has
never supported ...
Businesses have some job to do and some performance expectation.
If performance did not matter they would only use lower-end Z machines
and IBM would not bother to make bigger Z boxes.
In fact, if performance did not matter IBM could go with emulation
(possibly using a similar approach to the AS/400 and later i series). Or they
could use a standard FPGA: that probably would be slower than emulation,
but IBM could still claim that Z runs on real hardware, which has some marketing advantage.
To avoid reputational damage IBM may keep making Power
for a long time, but by adopting ARM they would no longer pretend that it is competitive. In other words, a legacy business.
2) The superior reliability features of IBM's mainframes.
On Sun, 05 Apr 2026 17:04:24 +0000, Waldek Hebisch wrote:
To avoid reputational damage IBM may keep making Power
for a long time, but by adopting ARM they would no longer pretend that it is competitive. In other words, a legacy business.
That could well be true. I am not aware of IBM making great efforts
to make new generations of POWER chips that are on smaller processes
and include innovations to improve performance further.
jgd@cix.co.uk (John Dallman) writes:
In article <20260404213401.0000593a@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:
I think IBM wants to run ARM software to give their mainframes more "relevance" to current fashions in computing, and ARM wants to learn
about high-grade RAS.
Indeed, although I would suggest that ARM already leads the microprocessor market in understanding high-grade RAS, designing it into the ARMv8 architecture from the start.
Intel and AMD assume all software already runs on their platforms. ARM don't, and - from personal experience - can be quite effective in helping with transitions. I think IBM wants to take advantage of that. Also,
adding ARM cores to an IBM processor die will be easier than Intel or AMD cores, simply because that's ARM's business model.
Armv9 cores are quite powerful and require very little area
on the chip. It would be very straightforward to slap an
ARM subsystem on a large die (or MCM or chiplet) that
would compete with the ARM servers offered by the existing
cloud vendors, which would potentially attract more customers to IBM's
cloud offering.
On Mon, 6 Apr 2026 03:57:07 -0000 (UTC), quadi wrote:
2) The superior reliability features of IBM's mainframes.
Batch systems were never designed for high uptime. There is a paper on Bitsavers dated 1986 that says that, to turn daylight saving on or off
on an IBM mainframe, you have to reboot.
In article <jZuAR.982150$WDc7.366361@fx16.iad>,
Scott Lurndal <slp53@pacbell.net> wrote:
jgd@cix.co.uk (John Dallman) writes:
In article <20260404213401.0000593a@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:
I think IBM wants to run ARM software to give their mainframes more "relevance" to current fashions in computing, and ARM wants to learn about high-grade RAS.
Indeed, although I would suggest that ARM already leads the microprocessor market in understanding high-grade RAS, designing it into the ARMv8 architecture from the start.
Better than AMD with MCAX and their own internal enhancements?
I feel like RAS is something that's now highly relevant to the
x86 ecosystem in the same way it is to ARM.
Lawrence D’Oliveiro <ldo@nz.invalid> posted:
On Mon, 6 Apr 2026 03:57:07 -0000 (UTC), quadi wrote:
2) The superior reliability features of IBM's mainframes.
Batch systems were never designed for high uptime. There is a paper on
Bitsavers dated 1986 that says that, to turn daylight saving on or off
on an IBM mainframe, you have to reboot.
I find it interesting that MS made it virtually impossible to transport an application from one machine to another while Linux made it absolutely simple.
On 4/4/26 11:14 PM, George Neuner wrote:
Also remember RAM was very expensive in 1981: the 64KB PC cost 50%
more than the 32KB PC - roughly $3000 vs $2000 at retail - with the
only difference being the amount of DRAM.
I remember 1981 and 16K DRAM didn't cost anywhere near that much. A
quick check of a random ad in the September 1981 BYTE says $4 each.
Retail.
MitchAlsup <user5857@newsgrouper.org.invalid> writes:
Lawrence D’Oliveiro <ldo@nz.invalid> posted:
On Mon, 6 Apr 2026 03:57:07 -0000 (UTC), quadi wrote:
2) The superior reliability features of IBM's mainframes.
Batch systems were never designed for high uptime. There is a paper on
Bitsavers dated 1986 that says that, to turn daylight saving on or off
on an IBM mainframe, you have to reboot.
I find it interesting that MS made it virtually impossible to transport an application from one machine to another while Linux made it absolutely simple.
I find it interesting that you actually accept L'do's bullshit about
IBM history. The idea that "batch systems were never designed for
high uptime" is just another in a long string of incorrect statements
about computing history from L'do.
In article <1775493332-5857@newsgrouper.org>,
MitchAlsup <user5857@newsgrouper.org.invalid> wrote:
Lawrence D’Oliveiro <ldo@nz.invalid> posted:
On Mon, 6 Apr 2026 03:57:07 -0000 (UTC), quadi wrote:
2) The superior reliability features of IBM's mainframes.
Batch systems were never designed for high uptime. There is a paper on
Bitsavers dated 1986 that says that, to turn daylight saving on or off
on an IBM mainframe, you have to reboot.
I find it interesting that MS made it virtually impossible to transport an application from one machine to another while Linux made it absolutely simple.
I'm not sure that's at all true, but I suppose it depends on the
definition of an "application". What precisely do you mean?
Also, to echo what Scott said, Lawrence is a troll.
- Dan C.
cross@spitfire.i.gajendra.net (Dan Cross) posted:
In article <1775493332-5857@newsgrouper.org>,
MitchAlsup <user5857@newsgrouper.org.invalid> wrote:
Lawrence D’Oliveiro <ldo@nz.invalid> posted:
On Mon, 6 Apr 2026 03:57:07 -0000 (UTC), quadi wrote:
2) The superior reliability features of IBM's mainframes.
Batch systems were never designed for high uptime. There is a
paper on Bitsavers dated 1986 that says that, to turn daylight
saving on or off on an IBM mainframe, you have to reboot.
I find it interesting that MS made it virtually impossible to
transport an application from one machine to another while Linux
made it absolutely simple.
I'm not sure that's at all true, but I suppose it depends on the
definition of an "application". What precisely do you mean?
Say, MS-Office or CorelDraw.
Also, to echo what Scott said, Lawrence is a troll.
- Dan C.
I find it interesting that MS made it virtually impossible to transport
an application from one machine to another while Linux made it
absolutely simple.
The reasons for this are well known, but I don't think any of them
include anything nefarious on the part of Microsoft.
On Mon, 06 Apr 2026 16:35:32 +0000, MitchAlsup wrote:
I find it interesting that MS made it virtually impossible to transport
an application from one machine to another while Linux made it
absolutely simple.
I didn't think this has anything to do with anything that Microsoft (specifically) did.
Applications written for Windows are distributed as binaries. Applications written for Linux are available as source code, which anyone can recompile.
There is a lot of commercial software for Linux distributed in
binary-only form.
cross@spitfire.i.gajendra.net (Dan Cross) posted:
In article <1775493332-5857@newsgrouper.org>,
MitchAlsup <user5857@newsgrouper.org.invalid> wrote:
Lawrence D’Oliveiro <ldo@nz.invalid> posted:
On Mon, 6 Apr 2026 03:57:07 -0000 (UTC), quadi wrote:
2) The superior reliability features of IBM's mainframes.
Batch systems were never designed for high uptime. There is a paper on
Bitsavers dated 1986 that says that, to turn daylight saving on or off
on an IBM mainframe, you have to reboot.
I find it interesting that MS made it virtually impossible to transport an application from one machine to another while Linux made it absolutely
simple.
I'm not sure that's at all true, but I suppose it depends on the
definition of an "application". What precisely do you mean?
Say, MS-Office or CorelDraw.
Also, to echo what Scott said, Lawrence is a troll.
On Mon, 06 Apr 2026 16:35:32 +0000, MitchAlsup wrote:
I find it interesting that MS made it virtually impossible to transport
an application from one machine to another while Linux made it
absolutely simple.
I didn't think this has anything to do with anything that Microsoft (specifically) did.
Applications written for Windows are distributed as binaries. Applications written for Linux are available as source code, which anyone can recompile.
The reasons for this are well known, but I don't think any of them include anything nefarious on the part of Microsoft. Naturally, like other
operating systems developers, they knew that the way to sell many copies
of their operating system and make money was to encourage people to write software for it, by letting them make money too. Which meant not having stuff like the GPL, or even the LGPL, which doesn't require disclosure of source, but which does require making hooks into one's code possible.
I find it interesting that MS made it virtually impossible to transport an application from one machine to another while Linux made it absolutely simple.
Maybe, maybe not. Setting aside the matter of binary-only
programs (that are not uncommon on Linux, FWIW) there is _also_
the matter of software that has a long list of requirements, and
may or may not work particularly well on any given distribution
of Linux (of which there are far, far too many).
Maybe, maybe not. Setting aside the matter of binary-only
programs (that are not uncommon on Linux, FWIW) there is _also_
the matter of software that has a long list of requirements, and
may or may not work particularly well on any given distribution
of Linux (of which there are far, far too many).
Maybe he's thinking of what's known as DLL Hell, which resulted
from Microsoft's neglecting to put a version number in DLL filenames. Programs typically ship with the library DLLs they use, and each
time a program was installed, its DLLs would replace any previous
ones with the same name, even though it might be a different
incompatible version.
They've mostly fixed that, I think by having each program in
some sort of environment so it's tied to the libraries it was
using when it was installed.
Maybe he's thinking of what's known as DLL Hell, which resulted
from Microsoft's neglecting to put a version number in DLL filenames. ...
Quite possibly! It's interesting to note that Linux has also
suffered from similar issues, exacerbated by libraries that
don't handle versioning well. Containers were the solution.
According to Dan Cross <cross@spitfire.i.gajendra.net>:
Maybe, maybe not. Setting aside the matter of binary-only
programs (that are not uncommon on Linux, FWIW) there is _also_
the matter of software that has a long list of requirements, and
may or may not work particularly well on any given distribution
of Linux (of which there are far, far too many).
Maybe he's thinking of what's known as DLL Hell, which resulted
from Microsoft's neglecting to put a version number in DLL filenames. Programs typically ship with the library DLLs they use, and each
time a program was installed, its DLLs would replace any previous
ones with the same name, even though it might be a different
incompatible version.
They've mostly fixed that, I think by having each program in
some sort of environment so it's tied to the libraries it was
using when it was installed.
ELF library names include multi-part version numbers, with the plan being that
if the API changes you bump the major number, while if it's a bug fix or otherwise compatible you just bump the minor number. Then you symlink the name
with the minor version number back to the name with just the major number so the
lists of imported library names just have the major number. This seems to work
OK on FreeBSD. What happens on Linux?
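To make that scheme concrete, here is a minimal C sketch of how the major-number binding behaves at run time; "libfoo" and its version numbers are invented for illustration (link with -ldl on Linux):

    /* On disk, the convention looks like this (names invented):
     *   libfoo.so.1.4  - the real file
     *   libfoo.so.1    -> libfoo.so.1.4  (runtime link, bumped on an ABI break)
     *   libfoo.so      -> libfoo.so.1    (dev link, used only at link time)
     * Executables record only the soname "libfoo.so.1", so minor updates
     * are picked up automatically; a major bump requires a relink. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *h = dlopen("libfoo.so.1", RTLD_NOW);  /* binds to the major version */
        if (h == NULL) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        dlclose(h);
        return 0;
    }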
Maybe he's thinking of what's known as DLL Hell, which resulted
from Microsoft's neglecting to put a version number in DLL filenames. ...
Quite possibly! It's interesting to note that Linux has also
suffered from similar issues, exacerbated by libraries that
don't handle versioning well. Containers were the solution.
That's kind of sad.
ELF library names include multi-part version numbers, with the plan being that if the API changes you bump the major number, while if it's a bug fix or otherwise compatible you just bump the minor number. Then you symlink the name
with the minor version number back to the name with just the major number so the
lists of imported library names just have the major number. This seems to work OK on FreeBSD. What happens on Linux?
In article <10r6a2l$57g$1@gal.iecc.com>, John Levine
<johnl@taugh.com> wrote:
According to Dan Cross <cross@spitfire.i.gajendra.net>:
Maybe he's thinking of what's known as DLL Hell, which resulted
from Microsoft's neglecting to put a version number in DLL
filenames. ...
Quite possibly! It's interesting to note that Linux has also
suffered from similar issues, exacerbated by libraries that
don't handle versioning well. Containers were the solution.
That's kind of sad.
ELF library names include multi-part version numbers, with the plan
being that if the API changes you bump the major number, while if
it's a bug fix or otherwise compatible you just bump the minor
number. Then you symlink the name with the minor version number
back to the name with just the major number so the lists of imported library names just have the major number. This seems to work OK on
FreeBSD. What happens on Linux?
That works fine, as long as it's actually done. It's when the
plethora of dependencies a given program uses don't play by the
rules that one runs into problems.
I suppose the issue is that on Windows it cannot be avoided due
to a technical limitation, while on Linux, it can be, but often
is not.
- Dan C.
https://newsroom.ibm.com/2026-04-02-ibm-announces-strategic-collaboration-with-arm-to-shape-the-future-of-enterprise-computing
In article <jZuAR.982150$WDc7.366361@fx16.iad>,
Scott Lurndal <slp53@pacbell.net> wrote:
jgd@cix.co.uk (John Dallman) writes:
In article <20260404213401.0000593a@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:
I think IBM wants to run ARM software to give their mainframes more
"relevance" to current fashions in computing, and ARM wants to learn
about high-grade RAS.
Indeed, although I would suggest that ARM already leads the microprocessor market in understanding high-grade RAS, designing it into the ARMv8
architecture from the start.
Better than AMD with MCAX and their own internal enhancements?
I feel like RAS is something that's now highly relevant to the
x86 ecosystem in the same way it is to ARM.
Intel and AMD assume all software already runs on their platforms. ARM
don't, and - from personal experience - can be quite effective in helping with transitions. I think IBM wants to take advantage of that. Also,
adding ARM cores to an IBM processor die will be easier than Intel or AMD cores, simply because that's ARM's business model.
Armv9 cores are quite powerful and require very little area
on the chip. It would be very straightforward to slap an
ARM subsystem on a large die (or MCM or chiplet) that
would compete with the ARM servers offered by the existing
cloud vendors, which would potentially attract more customers to IBM's
cloud offering.
Indeed. ARM isn't just for your cellphone anymore; it is in,
and has been in, data centers for some time now.
Seems that shared libraries don't work too well on Linux,
either, or there would be no need for snap:
$ find ~/snap -name '*.so' | wc -l
162
$ find /snap -name '*.so' 2>/dev/null | wc -l
21567
If they are shipping shared libraries for single applications, why not
just do away with the shared library overhead and link these in
statically?
On 4/6/2026 5:21 AM, Dan Cross wrote:
In article <jZuAR.982150$WDc7.366361@fx16.iad>,
Scott Lurndal <slp53@pacbell.net> wrote:
jgd@cix.co.uk (John Dallman) writes:
In article <20260404213401.0000593a@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:
I think IBM wants to run ARM software to give their mainframes more
"relevance" to current fashions in computing, and ARM wants to learn
about high-grade RAS.
Indeed, although I would suggest that ARM already leads the microprocessor market in understanding high-grade RAS, designing it into the ARMv8
architecture from the start.
Better than AMD with MCAX and their own internal enhancements?
I feel like RAS is something that's now highly relevant to the
x86 ecosystem in the same way it is to ARM.
Intel and AMD assume all software already runs on their platforms. ARM don't, and - from personal experience - can be quite effective in helping with transitions. I think IBM wants to take advantage of that. Also,
adding ARM cores to an IBM processor die will be easier than Intel or AMD cores, simply because that's ARM's business model.
Armv9 cores are quite powerful and require very little area
on the chip. It would be very straightforward to slap an
ARM subsystem on a large die (or MCM or chiplet) that
would compete with the ARM servers offered by the existing
cloud vendors, which would potentially attract more customers to IBM's
cloud offering.
Indeed. ARM isn't just for your cellphone anymore; it is in,
and has been in, data centers for some time now.
IBM motherboards with clusters of ARM among others "plexed" in a way to handle the traffic...? Sorry for going off the deep end...
In article <10r6a2l$57g$1@gal.iecc.com>, John Levine <johnl@taugh.com> wrote:
According to Dan Cross <cross@spitfire.i.gajendra.net>:
Maybe he's thinking of what's known as DLL Hell, which resulted
from Microsoft's neglecting to put a version number in DLL filenames. ...
Quite possibly! It's interesting to note that Linux has also
suffered from similar issues, exacerbated by libraries that
don't handle versioning well. Containers were the solution.
That's kind of sad.
ELF library names include multi-part version numbers, with the plan being that
if the API changes you bump the major number, while if it's a bug fix or otherwise compatible you just bump the minor number. Then you symlink the name
with the minor version number back to the name with just the major number so the
lists of imported library names just have the major number. This seems to work
OK on FreeBSD. What happens on Linux?
That works fine, as long as it's actually done. It's when the
plethora of dependencies a given program uses don't play by the
rules that one runs into problems.
I suppose the issue is that on Windows it cannot be avoided due
to a technical limitation, while on Linux, it can be, but often
is not.
- Dan C.
On Wed, 8 Apr 2026 20:54:43 -0000 (UTC), cross@spitfire.i.gajendra.net
(Dan Cross) wrote:
In article <10r6a2l$57g$1@gal.iecc.com>, John Levine <johnl@taugh.com> wrote:
According to Dan Cross <cross@spitfire.i.gajendra.net>:
Maybe he's thinking of what's known as DLL Hell, which resulted
from Microsoft's neglecting to put a version number in DLL filenames. ...
Quite possibly! It's interesting to note that Linux has also
suffered from similar issues, exacerbated by libraries that
don't handle versioning well. Containers were the solution.
That's kind of sad.
ELF library names include multi-part version numbers, with the plan being that
if the API changes you bump the major number, while if it's a bug fix or otherwise compatible you just bump the minor number. Then you symlink the name
with the minor version number back to the name with just the major number so the
lists of imported library names just have the major number. This seems to work
OK on FreeBSD. What happens on Linux?
That works fine, as long as it's actually done. It's when the
plethora of dependencies a given program uses don't play by the
rules that one runs into problems.
I suppose the issue is that on Windows it cannot be avoided due
to a technical limitation, while on Linux, it can be, but often
is not.
NTFS had the capability to hard link files since the beginning: the
links were used to support dual (long and 8.3) file names. Other than renaming files, there was no support for it.
NTFS 3.0 (Windows 2000) added symlinks. Initially there was utility
support (linkd.exe, junction.exe) only for directory symlinks. A
utility for manipulating file links (mklink.exe) finally appeared in
Vista.
Windows executables (programs and libraries) have always had embedded
version information, but the LoadLibrary_ functions did not permit
specifying what versions were acceptable.
[And the developer had to remember to update the build version.
Microsoft's tool chain, by default, did not do this automatically.]
It was fine to have multiple versions of a DLL in the same directory
(modulo unique filenames), but if a program cared about what version
it used, it had to check manually and load the DLL explicitly (rather
than letting the system loader do it).
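A hedged C sketch of that gap follows; "foo.dll" is an invented name, and the calls shown are the ordinary Win32 entry points:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* LoadLibrary resolves purely by name and search order; nothing
           in this call lets the caller demand a minimum version of the
           hypothetical foo.dll. */
        HMODULE h = LoadLibraryA("foo.dll");
        if (h == NULL) {
            printf("LoadLibrary failed: %lu\n", GetLastError());
            return 1;
        }
        /* Checking the version is a separate, manual step: read the
           file's embedded VERSIONINFO resource (GetFileVersionInfo and
           friends) and compare before trusting the loaded module. */
        FreeLibrary(h);
        return 0;
    }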
dotNET tried to fix this - at least for shared (system) DLLs - by
maintaining a list of them in the registry. The dotNET loader would
look for the right version and load it if possible.
However, installers still could break it by overwriting existing files without checking versions. Or by not updating the registry.
It occurs to me that another problem with the Linux way of doing
things (and really, this is true of any Unix-style system, and
probably Windows as well; perhaps any system generally) is that
dependencies can be compiled in different ways, and expose
different functionality as a result, with no change in version.
Systems that desire to have one copy of a library installed in
one place suffer from the obvious problem of needing to expose
the union of functionality expected of all programs that depend
on them, ....
According to Dan Cross <cross@spitfire.i.gajendra.net>:
It occurs to me that another problem with the Linux way of doing
things (and really, this is true of any Unix-style system, and
probably Windows as well; perhaps any system generally) is that dependencies can be compiled in different ways, and expose
different functionality as a result, with no change in version.
This sounds like "don't do that" territory. If you're going to
provide a library, it needs to have a stable API.
If the
API changes, change the version and document the change. If
it can be compiled in different ways that change the API (as
opposed to, say, using different optimizations) those need to
have different names. I realize that too many people do not
understand why this matters.
Systems that desire to have one copy of a library installed in
one place suffer from the obvious problem of needing to expose
the union of functionality expected of all programs that depend
on them, ....
Sorry but what's "them" here? The library, some group of
libraries, every program that uses the library?
John Levine <johnl@taugh.com> posted:
According to Dan Cross <cross@spitfire.i.gajendra.net>:
It occurs to me that another problem with the Linux way of doing
things (and really, this is true of any Unix-style system, and
probably Windows as well; perhaps any system generally) is that
dependencies can be compiled in different ways, and expose
different functionality as a result, with no change in version.
This sounds like "don't do that" territory. If you're going to
provide a library, it needs to have a stable API.
Why are APIs changing all the time--it seems to me that the API
is not set until one has a stable interface.
It also seems to me that when additions are made to an API, the
additions go in a different dynamic library than the original.
What am I missing ??
On Sat, 11 Apr 2026 17:53:04 GMT, MitchAlsup wrote:
Why are APIs changing all the time--it seems to me that the API is
not set until one has a stable interface.
So long as it changes in a backward-compatible fashion, there should be
zero impact on existing code.
It also seems to me that when additions are made to an API, the
additions go in a different dynamic library than the original.
Shared library versioning is for dealing with backward-incompatible
changes to the ABI, not (necessarily) the API.
For example, some struct that is passed to a library call might have
some more fields added to it. The setup call sets those fields to
sensible defaults, so existing client code can be recompiled against
the new interface, linked against the new library version, and
continue to work unchanged.
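A minimal C sketch of that pattern, with invented names; the setup call is what keeps old call sites working when the struct grows:

    #include <string.h>

    /* Hypothetical library parameter block, v2 of the ABI. */
    typedef struct {
        int buffer_size;   /* field since v1 */
        int timeout_ms;    /* field since v1 */
        int use_threads;   /* field added in v2 */
    } lib_params;

    /* Fills every field with a sensible default, so code written
       against v1 and recompiled against v2 never sees garbage in the
       fields it does not know about; the v2 default preserves the old
       single-threaded behaviour. */
    void lib_params_init(lib_params *p)
    {
        memset(p, 0, sizeof *p);
        p->buffer_size = 4096;
        p->timeout_ms  = 1000;
        p->use_threads = 0;
    }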
On 4/11/2026 2:37 PM, Lawrence D’Oliveiro wrote:
Shared library versioning is for dealing with backward-incompatible
changes to the ABI, not (necessarily) the API.
For example, some struct that is passed to a library call might
have some more fields added to it. The setup call sets those fields
to sensible defaults, so existing client code can be recompiled
against the new interface, linked against the new library version,
and continue to work unchanged.
Windows on Alpha, Windows on MIPS, etc...
On Sat, 11 Apr 2026 14:49:21 -0700, Chris M. Thomasson wrote:
On 4/11/2026 2:37 PM, Lawrence D’Oliveiro wrote:
Shared library versioning is for dealing with backward-incompatible
changes to the ABI, not (necessarily) the API.
For example, some struct that is passed to a library call might
have some more fields added to it. The setup call sets those fields
to sensible defaults, so existing client code can be recompiled
against the new interface, linked against the new library version,
and continue to work unchanged.
Windows on Alpha, Windows on MIPS, etc...
Windows doesn’t do shared library versioning though, does it?
It may not even affect the external API. Consider a library
that can, optionally, be compiled to take advantage of multiple
threads of execution internally. Software that expects to use
that may have to take additional care to avoid conflicts between
threads (e.g., perhaps the library takes a reference to a
callback function that can be invoked by one of these threads;
now, the callback has to be carefully written to accommodate
the possibility of concurrent callback invocation, whereas in
the single-threaded version, it does not). In this case, I
would argue that the API is the same, though I could see an
argument that it is not.
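A short C sketch of that hazard, with invented names; the callback has to protect its own state in case the library was built with internal threading:

    #include <pthread.h>

    static pthread_mutex_t cb_lock = PTHREAD_MUTEX_INITIALIZER;
    static long bytes_done;

    /* Registered with the hypothetical library as a progress callback.
       The multithreaded build may enter this from several worker
       threads at once; with the single-threaded build the mutex is
       redundant but harmless. */
    void progress_cb(long n)
    {
        pthread_mutex_lock(&cb_lock);
        bytes_done += n;
        pthread_mutex_unlock(&cb_lock);
    }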
It may not even affect the external API. Consider a library
that can, optionally, be compiled to take advantage of multiple
threads of execution internally. Software that expects to use
that may have to take additional care to avoid conflicts between
threads (e.g., perhaps the library takes a reference to a
callback function that can be invoked by one of these threads;
now, the callback has to be carefully written to accommodate
the possibility of concurrent callback invocation, whereas in
the single-threaded version, it does not). In this case, I
would argue that the API is the same, though I could see an
argument that it is not.
That still sounds like "don't do that."
If the library might
be multithreaded, the calling program needs to deal with that.
A broken program that sometimes works due to luck is still broken.
This all seems pretty hypothetical, though. Yes, there are plenty
of packages that can be compiled with different flavors of dbm but
I don't ever recall seeing one that then exported a library to
be used by other applications. I mostly work on FreeBSD so I
could believe that linux packages are sloppier.
According to Dan Cross <cross@spitfire.i.gajendra.net>:
It's interesting to note that Linux has also
suffered from similar issues, exacerbated by libraries that
don't handle versioning well. Containers were the solution.
That's kind of sad.
I came up the SW ranks with an expectation of rigor and attention to detail. Since these show no sign of ever returning, perhaps it's this AI thing which will eventually be used to jump out of this pit.
Andy Valencia
Home page: https://www.vsta.org/andy/
To contact me: https://www.vsta.org/contact/andy.html
No AI was used in the composition of this message
Andy Valencia <vandys@vsta.org> schrieb:
I came up the SW ranks with an expectation of rigor and attention to detail. Since these show no sign of ever returning, perhaps it's this AI thing which will eventually be used to jump out of this pit.
Right now, it seems people are managing to dig themselves deeper into
the pit, much faster, with the help of AI.
It has become a vicious cycle. Devs have lost interest in
compatibility, especially WRT APIs. Users of the API thus have to
make each app a snapshot of the one blend of libraries which work
with each other. This frees the devs to change their API
willy-nilly, since it's just a candidate for whatever montage
snapshot happens to make it work this month/year.
And then a security flaw is found, and it's actually becoming more
cost efficient to just ignore it rather than find and fix it in each
one-off snapshot in all the myriad types of library combinatoric
snapshotting out there.
Thomas Koenig <tkoenig@netcologne.de> spake the secret code <10rj8mj$3fme6$1@dont-email.me> thusly:
Andy Valencia <vandys@vsta.org> schrieb:
I came up the SW ranks with an expectation of rigor and attention to detail.
Since these show no sign of ever returning, perhaps it's this AI thing which
will eventually be used to jump out of this pit.
Right now, it seems people are managing to dig themselves deeper into
the pit, much faster, with the help of AI.
My take on the situation (and I've been using the AI tools more and
more over time) is that the tools are great in the hands of someone
with enough experience to "call the AI out on its shit". I'm not
sure how well a n00b or junior engineer is able to spot the obvious
bad decisions the AI sometimes makes. The AI definitely acts like a "confident incompetent". It will always admit failure when you point
it out, but what happens if you don't?
Also, crafting prompts and properly seeding the context is essential
to getting the most out of the AI and this is a new skill that, while
not difficult to obtain, you have to invest in.
The AI definitely acts like a
"confident incompetent". It will always admit failure when you point
it out, but what happens if you don't?
For a long time, I've wanted it to draw a path which splits into
two and then comes together again. The result has been unmitigated
disaster.
Thomas Koenig <tkoenig@netcologne.de> spake the secret code
<10rjj6u$3j33t$1@dont-email.me> thusly:
For a long time, I've wanted it to draw a path which splits into
two and then comes together again. The result has been unmitigated disaster.
I've had good results using it as a coding assistant.
I tried using google gemini for writing terminals wiki articles and it
was pretty horrible. When pinned down to explain its horrible
ability to do what I asked, it simply confessed that its primary job
was trying to make me happy, everything else be damned. So it would hallucinate URLs to "primary sources" and make up a bunch of other
stuff in the generated prose that I could spot as obvious B.S. Worse,
it wouldn't consistently follow my rules, even though that's how it's supposed to work.
Richard <legalize+jeeves@mail.xmission.com> schrieb:
I tried using google gemini for writing terminals wiki articles [...]
You know Wikipedia has a policy on AI-generated content?
In article <10rao08$jv9$1@reader1.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
Systems that desire to have one copy of a library installed in
one place suffer from the obvious problem of needing to expose
the union of functionality expected of all programs that depend
on them, either directly or indirectly.
There are at least three related, but different, use cases for shared libraries on Unix-like systems:
1) Breaking up the operating system functionality into manageable-size
chunks of sensibly related material. This is the case that shared library system designers are usually dealing with.
In some cases, notably macOS,
there are unspoken assumptions that this is _always_ the case, and shared libraries have strings embedded in them that are copied into programs
built against them. The purpose of these strings is to tell the loader
where to find them. This works fine for libraries that have a canonical location in the filesystem, but see below for ones that don't.
2) Breaking up applications into related chunks of functionality. This
can be helpful for organisation of code, for producing commercial applications with subsets of "full" functionality, and so on. The
important point here is that the shared libraries are used by a single application, or a suite of related applications. On macOS, this is
tackled by some special values in the embedded strings that tell the
loader to look in a filesystem location relative to the application. That avoids the need for an application to have a fixed installation directory, which implies that you can't have two different versions installed.
3) The fairly rare case of shared libraries intended as "software components," to be used in many different applications, but which are
_not_ extensions to the operating system. Different applications may (and likely do) have different versions of such libraries. On macOS, this
requires the application developer to modify the embedded strings in the shared library before linking against it. Apple provide a tool for doing this, but it's fairly obscure. The stuff I work on is in this category.
In article <10r5eor$b6q$1@reader1.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
Quite possibly! It's interesting to note that Linux has also
suffered from similar issues, exacerbated by libraries that
don't handle versioning well. Containers were the solution.
Containers were designed to make it easy to run lots of different applications on the same cloud servers. The companies that offer cloud services don't want to solve such problems in the applications - it's
hard to blame them - and the SaaS companies have learned that their
customers want cheap, not good, software.
In article <memo.20260415122259.25212A@jgd.cix.co.uk>,
John Dallman <jgd@cix.co.uk> wrote:
In article <10rao08$jv9$1@reader1.panix.com>,
cross@spitfire.i.gajendra.net (Dan Cross) wrote:
Systems that desire to have one copy of a library installed in
one place suffer from the obvious problem of needing to expose
the union of functionality expected of all programs that depend
on them, either directly or indirectly.
There are at least three related, but different, use cases for shared libraries on Unix-like systems:
I don't know what this has to do with what I wrote that you
quoted, but I'm afraid it's mostly incorrect.
The usual use cases for shared objects are a) sharing of text
and r/o data between processes linked against the same image,
b) providing fixes to libraries without having to relink
programs, and c) providing extensibility via the ability to
dynamically load shared objects into the address space of a
running process, find e.g. callable functions in those objects
by looking up entries in their symbol tables, and accessing
functionality provided by those objects by calling them (using
a well-defined ABI).
The last bit is a particularly powerful thing, and is how a
language interpreter can so easily take advantage of advanced
functionality that is not built-in or written in that language.
C.f. Python and its use in the data processing ecosystem, which
relies heavily on FFI calls to numerical analysis libraries
written in FORTRAN and C.
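A minimal C sketch of use case (c); "plugin.so" and "plugin_init" are invented names (link with -ldl on Linux):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Load a shared object into this process's address space. */
        void *h = dlopen("./plugin.so", RTLD_NOW);
        if (h == NULL) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        /* Find a callable function by looking its name up in the
           object's symbol table. */
        int (*init)(void) = (int (*)(void))dlsym(h, "plugin_init");
        if (init == NULL) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        int rc = init();   /* call through the ordinary C ABI */
        dlclose(h);
        return rc;
    }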
2) Breaking up applications into related chunks of functionality. This
can be helpful for organisation of code, for producing commercial applications with subsets of "full" functionality, and so on. The
important point here is that the shared libraries are used by a single application, or a suite of related applications. On macOS, this is
tackled by some special values in the embedded strings that tell the
loader to look in a filesystem location relative to the application. That avoids the need for an application to have a fixed installation directory, which implies that you can't have two different versions installed.
Huh. That's an interesting idea, but really it's something that
is facilitated by having shared objects, not something that was
(or is) a primary motivating factor for shared libraries in the
first place.
3) The fairly rare case of shared libraries intended as "software components," to be used in many different applications, but which are _not_ extensions to the operating system. Different applications may (and likely do) have different versions of such libraries. On macOS, this requires the application developer to modify the embedded strings in the shared library before linking against it. Apple provide a tool for doing this, but it's fairly obscure. The stuff I work on is in this category.
Actually, I'd posit that this is very common.
Containers were designed to make it easy to run lots of different applications on the same cloud servers.
There are at least three related, but different, use cases for
shared libraries on Unix-like systems:
3) The fairly rare case of shared libraries intended as "software components," to be used in many different applications, but which
are _not_ extensions to the operating system.
3) The fairly rare case of shared libraries intended as "software components," to be used in many different applications, but which are _not_ extensions to the operating system. Different applications may (and likely do) have different versions of such libraries. On macOS, this requires the application developer to modify the embedded strings in the shared library before linking against it. Apple provide a tool for doing this, but it's fairly obscure. The stuff I work on is in this category.
Actually, I'd posit that this is very common.
Indeed. Things like libxml and libxslt, for example. Or openssl.