• Downwardly Scalable Systems

    From Ben Collver@bencollver@tilde.pink to comp.misc on Sat Apr 13 15:56:23 2024
    From Newsgroup: comp.misc

    Downwardly Scalable Systems
    ===========================
    David N. Welton
    davidw@dedasys.com
    2004-11-14

    A lot of thought has been dedicated to scalability in computer
    systems. However, outside of the embedded systems arena, most of this
    effort has gone into making systems ever larger and faster. I would
    like to take the opposite approach, and discuss the implications of
    programming languages that "scale down". I choose programming
    languages because it's a subject I take a great deal of interest in,
    but I believe that the basic ideas here are applicable to many
    complex systems that require human interaction.

    Good systems should be able to scale down as well as up. They should
    run on slower computers that don't have as much memory or disk
    storage as the latest models. Likewise, from the human point of view,
    downwardly scalable systems should also be small enough to learn and
    use without being an expert programmer. It's not always possible to
    have all of these things. Engineering is often the art of balancing
    several compromises, after all, and at times, it's necessary to add
    computing power in order to attain simplicity. However, it is always
    a good idea to keep all these targets in mind, even if design
    dictates that one or the other of them may be less important.

    Naturally, we can't forget the "scalability" part of the equation
    while focusing on "simple". It's not that hard to make something
    small and easy to use. The trick is that it's also got to be
    extensible, in order to grow. The system should be easy to pick up,
    but it should also be easy to add to, and should also provide tools
    to modularize and otherwise compartmentalize the growing system in
    order to not end up with what is known in technical terms as a
    "godawful mess". And of course it should still be fun to work with
    for the more experienced programmer.

    Another part of the balancing act is that oftentimes "simple"
    equates to hidden complexity. For example, programming languages like
    Tcl or Python are "simpler" than C, but this is a result of the
    implementation hiding a lot of the complexity from the programmer. In
    this case, scaling down from a human point of view conflicts with
    scaling down resource usage to some degree, in that these languages
    require a bigger runtime, and more processor power to accomplish
    similar tasks.
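
    As a rough illustration of this tradeoff (a sketch, not from the
    original article), here is a small Tcl script that counts the lines
    in a file. The interpreter takes care of the buffering, memory
    allocation and string handling that an equivalent C program would
    have to manage explicitly - the complexity hasn't disappeared, it
    has simply moved into the runtime.

        # count_lines.tcl - hypothetical example: count lines in a file.
        # Usage: tclsh count_lines.tcl somefile.txt
        set f [open [lindex $argv 0] r]  ;# file name from the command line
        set n 0
        # gets returns -1 at end of file, so this loop counts every line
        while {[gets $f line] >= 0} {
            incr n
        }
        close $f
        puts "$n lines"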

    Why scale down?
    ===============
    Ease of use vs breadth of user base

    Systems that scale down are not just nice in theory, but have many
    benefits. To start with, they have the potential to create more
    value. By being accessible to people with less experience and talent,
    they allow more people to get more things done than a system
    requiring more advanced skills would. The nicest tool in the world is
    of no use to you if you cannot figure out how to make use of it in the time
    you have. You may object, saying that not everyone should be able to
    use every tool, and of course that's true. We don't all expect to be
    able to fly airplanes, and yet they are still valuable to us. The
    point is, however, that where two systems are capable of doing the
    same job equally well, the more accessible tool is more valuable in
    that more people are productive with it.

    It's especially important to realize that in some cases, this is more
    than just a matter of degree - there is actually a cutoff point,
    meaning that some people will be able to do no work with a tool that
    is too difficult. So to these people, the complex, hard to use option
    is simply not an option at all!

    Of course, having more users isn't necessarily a benefit in and of
    itself, but if you have a reasonably good user community, it often is
    [1].

    Disruptive Technologies
    =======================
    In some ways, systems that scale down may be winners in the long term
    even if they don't scale up as well as others. Consider the
    definition of "disruptive technology":

    A disruptive technology is a new technological innovation, product,
    or service that eventually overturns the existing dominant
    technology in the market, despite the fact that the disruptive
    technology is both radically different than the leading technology
    and that it often initially performs worse than the leading
    technology according to existing measures of performance. [2]

    In short: when the programming technology that scales down is "good
    enough", it may open up a field of programming to people who
    previously were not able to accomplish anything at all - and give
    many more people the chance to write "good enough" code much faster.

    Perhaps experienced programmers are grumbling about "the right tool
    for the job", and complaining about putting sharp tools in the hands
    of novices.

    To a degree, they're right. It's best to use hammers for nails, and
    saws to cut boards, and to know something about them before taking
    them to a board. Computer systems, especially programming languages,
    are very complex tools though, and take a lot longer to learn for
    many people than learning to hit nails with a hammer. What this means
    is that many people find it valuable to be able to do more things
    with the tools they know how to use. Maybe the tool isn't perfect,
    but if it does a reasonable job of solving the problem at hand, it's
    probably a net gain to be able to write the code in the first place.
    For example, I once did some work for a client who was, at heart, a
    marketing guy who came up with a clever idea for a product. He wrote
    the initial code that made it work, and...frankly, it was pretty
    obvious that he was a marketing guy rather than a programmer. But -
    and this is crucial - he was able to make it work with the scripting
    language he used. If he'd had to use something like Java or C that
    was over his head, he might not have got his product to the point
    where he was making enough money to hire me to clean up his code.

    Scaling down doesn't necessarily equate to a disruptive technology,
    of course. In many cases it's easy to rejig things just a bit, or
    create a better, clearer interface without revolutionizing either
    your product or the market in which it resides. Truly disruptive
    technologies often create new markets where none existed.

    "Case Studies"
    ==============
    Let's look at some real-world examples:

    Dr. John Ousterhout, author of the Tcl programming language and the
    Tk GUI toolkit, realized that scripting languages were a winning
    proposition early on, creating a revolutionary way to easily write
    graphical applications. This made Tcl and Tk a big success, and they
    continue to be very widely used, long after complex, proprietary
    rivals such as Motif have fallen by the wayside. Tcl has had some
    struggles with scaling up, but in general hasn't managed too badly.
    Perhaps Tk isn't well suited to writing a huge, complex application
    like an office suite; however, the speed of a scripting language is
    perfectly adequate for many GUIs. After all, the time it takes you
    to move the mouse and click is an eternity in computer cycles.

    PHP is another example. While PHP is not a particularly elegant
    system, and suffers from some flaws in its approach to simplicity
    (it's too tied to the web world, which has limited its uptake for
    other uses), it has done a phenomenal job of making dynamic web
    programming available to many people who would otherwise not be able
    to do it at all, or who would have to spend much more time to make
    their code work.
    "But what about all the bad PHP code out there?!". Sure - it's there,
    but consider that the people doing the ugly programming are almost
    certainly not trained programmers. Perhaps they are doctors,
    businessmen, artists with deep knowledge of other fields. PHP is
    therefore the tool that, through its approachability, makes it
    possible for them to do something that they otherwise could not do.
    If they are successful in their endeavors, maybe they will attract
    "real" programmers who will help them out either because it's a
    successful business venture, or because it's a popular open source
    idea. In any case, they are better off with an ugly program they
    wrote to do the job than an elegant programming language that they
    can't make heads or tails of.

    Contrast these languages with Java. Java isn't a bad language,
    particularly. It's easier than C in some ways, and certainly safer
    for the average programmer, who has to worry less about memory
    issues, walking off the end of arrays, and things of that ilk that,
    in C, not only crash the system but may create security risks. Java
    is fairly resource intensive - it requires lots of disk space and
    memory. Worse, it's not something that is easy to pick up quickly for
    the 'average guy'. It has a verbose syntax that is unpleasant to use
    if you don't know your tools well, or even if you just want to whip
    up something quickly. The target for "downwardly scalable languages"
    that we're talking about is the individual who, due to a lack of
    experience, is more likely to end up with bad tools than a veteran
    programmer who can play emacs like a piano. Java also makes it
    difficult to do easy things. To even get started, you have to have
    some notions of object oriented programming, you have to split your
    code up into lots of little files that must be properly named, and
    you must have a few ideas about packages to even print something to
    the screen with System.out.println("hello world"). Of course, if you
    are writing thousands upon thousands of lines of code, some of these
    language features that initially seem like annoyances turn into
    advantages, and would be sensible rules to follow for any programming
    team wishing to keep their sanity. While it's fine to say "it just
    wasn't meant for that", perhaps Java could have been made easier for
    the casual user. Java scales up to projects with multiple teams and
    many programmers better than other systems do. So they have done
    something right, but haven't been able (or willing) to make the
    system downwardly scalable.

    Ok, but how?
    ============
    Making systems that are easy to use is an art in and of itself, and
    high quality code doesn't necessarily translate into something that
    automatically scales down. Here are some ideas on how to make your
    product scale down.

    * Make it small, and make it easy to extend. This way, you don't have
    a lot of luggage to carry around, and yet you can extend the system
    to do more things later. The C programming language is a pretty
    good example of this. The Scheme language got the small part right,
    but until more recently has had trouble with the 'extend it easily'
    bit.

    * Make it simple and coherent. Don't give it too many fancy rules.
    Make it internally coherent in the way it's used, so that people
    can make reasonable assumptions when they see a new extension or
    library. They will see patterns in your system which will enhance
    their comprehension. Tcl and Python show these qualities (despite
    having grown some warts over time). C++ and Perl tend toward the
    opposite pole: big and complex, with many rules to remember. You
    can always choose to use a subset, but when you communicate
    functionality to other people, perhaps in the form of source code,
    you can't count on the wider world sticking to the same simpler set
    of rules.

    * Make it do something "out of the box". It should be easy to get set
    up and doing productive things with the system. Complex
    configurations with thousands of options are useless if it's
    impossible to get a basic configuration working. The Apache web
    server requires little configuration to make it serve web pages, so
    while it is in some ways complex (configuring it from scratch might
    prove quite difficult, in fact), it scales down by being useful
    right away.

    * Given several options, make the common one the default, and the
    others a possibility. As programmers, we tend to think that we
    should give the user all kinds of clever options. However, most
    people are going to use one of the options most of the time, so in
    reality, it's often best to make that easy, and add a means of
    performing the less frequently used options when they are desired.
    Tcl, when it gets things right, does a great job of this. For
    instance, its socket command is very easy to use. The default is to
    open a socket to a given hostname and port. Then, if you need to
    configure things further, such as setting the character encoding,
    blocking or non-blocking behavior, and so forth, Tcl provides a
    command to configure those options (see the sketch after this list).
    So people who just want a socket get it very simply, with no clutter:
    set sk [socket www.google.com 80]

    * Use simple, open protocols. If you have to transmit data, or have
    configuration files, make them easy to understand, and simple to
    write - by hand or via other programs. This lets people interact
    with your system in new and interesting ways, as well as keeping it
    accessible to the newcomer.

    * Always keep one eye on the future, when you may need to scale up, to
    large groups of programmers, for work that requires speed and the
    ability to work with large quantities of data. Design the system so
    that scaling up is easy, or at least possible.
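
    As an aside on the socket example above, here is a minimal sketch of
    how that default-plus-options approach looks in practice (the host
    name and the particular options are just placeholders, not taken
    from the original article):

        # The common case needs no options at all: open a TCP connection
        # and get back an ordinary Tcl channel.
        set sk [socket www.google.com 80]

        # The less common cases are handled by a separate command,
        # fconfigure, which adjusts the channel only when the defaults
        # are not what you want.
        fconfigure $sk -blocking 0 -buffering line -encoding iso8859-1

        # The channel is then used like any other, and closed when done.
        close $sk

    The asymmetry is the point: the one-line case stays a one-liner,
    while the extra knobs live behind an explicit, separate command.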

    Conclusions
    ===========
    Naturally, it's not always a good idea to scale down. There are
    systems that really do need to be as fast as possible, or to deal
    with very complex problems, and trying to broaden the user base is a
    secondary consideration compared with these other goals. However, in
    many cases these goals are not incompatible, and the simplicity and
    clarity involved in scaling down will make for a larger, more vibrant
    market for your product.

    References
    ==========
    * Worse is Better
    * Scripting: Higher Level Programming for the 21st Century
    * About Face 2.0
    * Scalable computer programming languages
    * The Innovator's Dilemma

    Copyright © 2000-2015 David N. Welton

    From: <https://web.archive.org/web/20170427181823/http://www.welton.it/articles/scalable_systems.html>
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From ram@ram@zedat.fu-berlin.de (Stefan Ram) to comp.misc on Sat Apr 13 17:17:34 2024
    From Newsgroup: comp.misc

    Ben Collver <bencollver@tilde.pink> wrote or quoted:
    programming languages that "scale down".

    David forgot to tell us what it means for a programming language
    to "scale down".

    Tcl or Python are "simpler" than C, but this is a result of the

    And again, he uses those quotes! How to define or measure the
    "simplicity" of a programming language?

    difficult to do easy things. To even get started, you have to have
    some notions of object oriented programming, you have to split your
    code up into lots of little files that must be properly named, and

    This is a Java program. It usually should be in a file "Main.java".

    public final class Main
    { public static void main( final java.lang.String[] args )
    { java.lang.System.out.println
    ( "Hello world!" ); }}

    . In my Basic course I tell the participants to ignore the first
    three lines: They are just boilerplate material that is copied to
    every program. Then we explore what can be done in the last line!

    As long as your programs are small (even with many more lines than
    just four), there is no need "to split your code up into lots of
    little files that must be properly named". And when you split up
    your Python code into modules, they must be properly named, too.

    But as the course develops, gradually, my participants learn the
    meaning and purpose of the first three lines.

    So, when you start out with vague terms like programming
    languages that "scale down" and have a certain "simplicity",
    you can then give vague recommendations!
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From David LaRue@huey.dll@tampabay.rr.com to comp.misc on Sat Apr 13 18:33:30 2024
    From Newsgroup: comp.misc

    ram@zedat.fu-berlin.de (Stefan Ram) wrote in news:Java-20240413181713@ram.dialup.fu-berlin.de:

    Ben Collver <bencollver@tilde.pink> wrote or quoted:
    programming languages that "scale down".

    David forgot to tell use what it means for a programming language
    to "scale down".

    <snip>
    I've worked for many companies that used C/C++ or even a wide variety
    of languages with features that not every programmer would know. For me,
    scaling a language down is just limiting what language features you should
    use in your program.

    Teachers and beginning programmers do this in reverse. They first
    learn what a language can do and find some way to solve their
    particular problem. Later the learner can revisit their solutions to
    perhaps solve the problem with fewer lines, new features they've
    learned, or just something new they'd like to explore.

    Consider BASIC. Learners start with print and perhaps math statements.
    Later they can add input, loops, and branching operations to build a
    more complete program starting from their original program's goal.

    Another reason to scale down a language is to accommodate more
    primitive versions of the same language, or perhaps to add support
    for different processors or platforms. Sometimes more advanced
    features of a language aren't supported or available on a different
    target machine's compiler/interpreter.

    I'm not sure what else "scaling down" a language might mean.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From ram@ram@zedat.fu-berlin.de (Stefan Ram) to comp.misc on Sat Apr 13 18:51:19 2024
    From Newsgroup: comp.misc

    David LaRue <huey.dll@tampabay.rr.com> writes:
    >For me,
    >scaling a language down is just limiting what language features you should
    >use in your program.

    I can see how that'd be one take on "scaling down a language",
    kinda reminds me of that quote:

    |An interesting difference between "Effective Java" and
    |"Effective C++" is that my reaction to the latter was to come
    |up with a set of SOPs that mainly boil down to "don't use C++
    |feature x".
    Bjorn Borud.

    (BTW: Bjorn also wrote:

    |Java books are usually about doing stuff. C++ books are
    |mostly about not shooting yourself in the foot. That just
    |about sums up how you use the two environments too.

    .)
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From kludge@kludge@panix.com (Scott Dorsey) to comp.misc on Sat Apr 13 19:28:47 2024
    From Newsgroup: comp.misc

    When I think about the utility of scaling systems down, I think more
    about libraries, applications and operating systems than programming
    languages and development tools.

    But having used programming languages such as PL/1 and Ada where the
    language features were so extensive that no one programmer really
    knew all of them and everyone programmed in their own subset, I can
    say that giant languages with a lot of features lead to maintenance
    issues in spite of their advantages for initial coding. Scaling down
    may help.

    And who ever thought it was a good idea to add pointers to Fortran
    90? Engineers should never be allowed to touch pointers. This is
    like giving a Zippo lighter to a baby. Scaling down is needed here.
    --scott
    --
    "C'est un Nagra. C'est suisse, et tres, tres precis."
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From not@not@telling.you.invalid (Computer Nerd Kev) to comp.misc on Sun Apr 14 08:48:53 2024
    From Newsgroup: comp.misc

    Stefan Ram <ram@zedat.fu-berlin.de> wrote:
    Ben Collver <bencollver@tilde.pink> wrote or quoted:
    programming languages that "scale down".

    David forgot to tell use what it means for a programming language
    to "scale down".

    Wasn't that in the second paragraph?

    "Good systems should be able to scale down as well as up. They
    should run on slower computers that don't have as much memory or
    disk storage as the latest models. Likewise, from the human point
    of view, downwardly scalable systems should also be small enough to
    learn and use without being an expert programmer." ...

    I read it mainly out of interest in his ideas for the first aspect
    with running on slower computers, but it turns out he doesn't
    really discuss that at all. They tend to be contradictory goals, so
    without proposing a way to unify them it makes that aspect purely
    aspirational.

    In fact, in terms of memory and disk storage GCC keeps going
    backwards there, even for C/C++. Compiling large C/C++ programs with
    -Os in ever newer GCC versions keeps producing ever bigger binaries
    for unchanged code. Of course other compilers are available and I'm
    not sure how other popular ones compare.
    --
    __ __
    #_ < |\| |< _#
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Ben Collver@bencollver@tilde.pink to comp.misc on Sat Apr 13 23:54:07 2024
    From Newsgroup: comp.misc

    On 2024-04-13, Computer Nerd Kev <not@telling.you.invalid> wrote:
    "Good systems should be able to scale down as well as up. They
    should run on slower computers that don't have as much memory or
    disk storage as the latest models. Likewise, from the human point
    of view, downwardly scalable systems should also be small enough to
    learn and use without being an expert programmer." ...

    I read it mainly out of interest in his ideas for the first aspect
    with running on slower computers, but it turns out he doesn't
    really discuss that at all. They tend to be contradictory goals, so
    without proposing a way to unify them it makes that aspect purely aspirational.

    I like your distinction. My perspective:

    1. Scale down to cheap or embedded hardware.

    2. Scale down to "human scale."

    Both imply restraint, which goes against the flow. It takes
    discipline to trim the fat and simplify design.

    Our task is not to find the maximum amount of content in a work of
    art, much less to squeeze more content out of the work than is
    already there. Our task is to cut back content so that we can see
    the thing at all.

    From: <https://dadadrummer.substack.com/p/against-innovation>
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From nospam@nospam@example.net to comp.misc on Sun Apr 14 20:48:28 2024
    From Newsgroup: comp.misc



    On Sun, 14 Apr 2024, Computer Nerd Kev wrote:

    Stefan Ram <ram@zedat.fu-berlin.de> wrote:
    Ben Collver <bencollver@tilde.pink> wrote or quoted:
    programming languages that "scale down".

    David forgot to tell use what it means for a programming language
    to "scale down".

    Wasn't that in the second paragraph?

    "Good systems should be able to scale down as well as up. They
    should run on slower computers that don't have as much memory or
    disk storage as the latest models. Likewise, from the human point
    of view, downwardly scalable systems should also be small enough to
    learn and use without being an expert programmer." ...

    I read it mainly out of interest in his ideas for the first aspect
    with running on slower computers, but it turns out he doesn't
    really discuss that at all. They tend to be contradictory goals, so
    without proposing a way to unify them it makes that aspect purely aspirational.

    In fact in terms of memory and disk storage GCC keeps going
    backwards that even for C/C++. Compiling large C/C++ programs with
    -Os in ever newer GCC versions keeps producing ever bigger binaries
    for unchanged code. Of course other compilers are available and I'm
    not sure how other popular ones compare.

    Why do they go backwards? I mean larger binaries must come with some
    benefit right?
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From not@not@telling.you.invalid (Computer Nerd Kev) to comp.misc on Mon Apr 15 08:12:36 2024
    From Newsgroup: comp.misc

    D <nospam@example.net> wrote:
    On Sun, 14 Apr 2024, Computer Nerd Kev wrote:

    In fact in terms of memory and disk storage GCC keeps going
    backwards that even for C/C++. Compiling large C/C++ programs with
    -Os in ever newer GCC versions keeps producing ever bigger binaries
    for unchanged code. Of course other compilers are available and I'm
    not sure how other popular ones compare.

    Why do they go backwards?

    I'd be quite interested to find out as well. When it comes to the
    more fine-tuned optimisation options (a set of which -Os enables),
    the GCC documentation is often very lacking in detail, especially
    when it comes to changes between versions.

    I mean larger binaries must come with some benefit right?

    The benchmarks that they're chasing are for speed rather than
    binary size. -Os turns on some optimisations which may make a
    program run a little slower in return for a smaller binary. My
    guess is that the GCC developers aren't very interested in -Os
    anymore, but I haven't seen an easy path to understanding why
    exactly it keeps getting less effective than in earlier GCC
    versions.
    --
    __ __
    #_ < |\| |< _#
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From nospam@nospam@example.net to comp.misc on Mon Apr 15 12:17:04 2024
    From Newsgroup: comp.misc



    On Mon, 15 Apr 2024, Computer Nerd Kev wrote:

    D <nospam@example.net> wrote:
    On Sun, 14 Apr 2024, Computer Nerd Kev wrote:

    In fact in terms of memory and disk storage GCC keeps going
    backwards that even for C/C++. Compiling large C/C++ programs with
    -Os in ever newer GCC versions keeps producing ever bigger binaries
    for unchanged code. Of course other compilers are available and I'm
    not sure how other popular ones compare.

    Why do they go backwards?

    I'd be quite interested to find out as well. When it comes to the
    more fine-tuned optimisation options (a set of which -Os enables),
    the GCC documentation is often very lacking in detail, especially
    when it comes to changes between versions.

    I mean larger binaries must come with some benefit right?

    The benchmarks that they're chasing are for speed rather than
    binary size. -Os turns on some optimisations which may make a
    program run a little slower in return for a smaller binary. My
    guess is that the GCC developers aren't very interested in -Os
    anymore, but I haven't seen an easy path to understanding why
    exactly it keeps getting less effective than in earlier GCC
    versions.

    Got it! Thank you for the information. I guess perhaps it's similar
    to the old argument that emacs is "too big". With today's disks/SSDs
    it matters less and less.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From candycanearter07@candycanearter07@candycanearter07.nomail.afraid to comp.misc on Mon Apr 15 14:30:10 2024
    From Newsgroup: comp.misc

    D <nospam@example.net> wrote at 10:17 this Monday (GMT):


    On Mon, 15 Apr 2024, Computer Nerd Kev wrote:

    D <nospam@example.net> wrote:
    On Sun, 14 Apr 2024, Computer Nerd Kev wrote:

    In fact in terms of memory and disk storage GCC keeps going
    backwards that even for C/C++. Compiling large C/C++ programs with
    -Os in ever newer GCC versions keeps producing ever bigger binaries
    for unchanged code. Of course other compilers are available and I'm
    not sure how other popular ones compare.

    Why do they go backwards?

    I'd be quite interested to find out as well. When it comes to the
    more fine-tuned optimisation options (a set of which -Os enables),
    the GCC documentation is often very lacking in detail, especially
    when it comes to changes between versions.

    I mean larger binaries must come with some benefit right?

    The benchmarks that they're chasing are for speed rather than
    binary size. -Os turns on some optimisations which may make a
    program run a little slower in return for a smaller binary. My
    guess is that the GCC developers aren't very interested in -Os
    anymore, but I haven't seen an easy path to understanding why
    exactly it keeps getting less effective than in earlier GCC
    versions.

    Got it! Thank you for the information. I guess perhaps it's similar to the old argument that emacs is "too big". With todays disks/ssds it matters
    less and less.


    Most programs are below 1 GB.
    --
    user <candycane> is generated from /dev/urandom
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard Kettlewell@invalid@invalid.invalid to comp.misc on Mon Apr 15 18:40:16 2024
    From Newsgroup: comp.misc

    not@telling.you.invalid (Computer Nerd Kev) writes:
    D <nospam@example.net> wrote:
    On Sun, 14 Apr 2024, Computer Nerd Kev wrote:
    In fact in terms of memory and disk storage GCC keeps going
    backwards that even for C/C++. Compiling large C/C++ programs with
    -Os in ever newer GCC versions keeps producing ever bigger binaries
    for unchanged code. Of course other compilers are available and I'm
    not sure how other popular ones compare.

    Why do they go backwards?

    I'd be quite interested to find out as well. When it comes to the
    more fine-tuned optimisation options (a set of which -Os enables),
    the GCC documentation is often very lacking in detail, especially
    when it comes to changes between versions.

    Interesting question, and I don’t know the answer, but it’s not hard
    to come up with a small concrete example. https://godbolt.org/z/sG5d99v5z
    has the same code compiled at -Os with three different versions, plus
    -O2 for comparison, and it does get a bit longer somewhere between 9.1
    and 11.1.

    The longer code uses one extra register (ebp) and because it’s run out
    of callee-owned registers it must generate a push/pop pair for it
    (adding 2 bytes to a 90-byte function, in this case).

    It’s not easily explainable why it would do so: the gcc 9.1 object
    code happily uses eax for the same purpose, without having to shuffle
    anything else around, since eax isn’t being used for anything else at
    the time.

    Perhaps worth a bug report.
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From nospam@nospam@example.net to comp.misc on Mon Apr 15 21:40:52 2024
    From Newsgroup: comp.misc


    On Mon, 15 Apr 2024, Richard Kettlewell wrote:

    not@telling.you.invalid (Computer Nerd Kev) writes:
    D <nospam@example.net> wrote:
    On Sun, 14 Apr 2024, Computer Nerd Kev wrote:
    In fact in terms of memory and disk storage GCC keeps going
    backwards that even for C/C++. Compiling large C/C++ programs with
    -Os in ever newer GCC versions keeps producing ever bigger binaries
    for unchanged code. Of course other compilers are available and I'm
    not sure how other popular ones compare.

    Why do they go backwards?

    I'd be quite interested to find out as well. When it comes to the
    more fine-tuned optimisation options (a set of which -Os enables),
    the GCC documentation is often very lacking in detail, especially
    when it comes to changes between versions.

    Interesting question, and I don’t know the answer, but it’s not hard to come up with a small concrete example. https://godbolt.org/z/sG5d99v5z
    has the same code compiled at -Os with three different versions, plus
    -O2 for comparison, and it does get a bit longer somewhere between 9.1
    and 11.1.

    The longer code uses one extra register (ebp) and because it’s run out
    of callee-owned registers it must generate a push/pop pair for it
    (adding 2 bytes to a 90-byte function, in this case).

    It’s not easily explainable why it would do so: the gcc 9.1 object code happily uses eax for the same purpose, without having to shuffle else anything around, since eax isn’t being used for anything else at the
    time.

    Perhaps worth a bug report.

    Or maybe, just maybe... another legendary supply chain attack? ;)
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From kludge@kludge@panix.com (Scott Dorsey) to comp.misc on Tue Apr 16 01:47:54 2024
    From Newsgroup: comp.misc

    D <nospam@example.net> wrote:

    Why do they go backwards? I mean larger binaries must come with some
    benefit right?

    The hello world executable generated with gcc under Oracle Linux 8 is
    42Mb long, which is more MASS STORAGE than I had on the first Unix
    system I ever used. I can't see this as being a good thing.
    --scott
    --
    "C'est un Nagra. C'est suisse, et tres, tres precis."
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard Kettlewell@invalid@invalid.invalid to comp.misc on Tue Apr 16 08:31:36 2024
    From Newsgroup: comp.misc

    kludge@panix.com (Scott Dorsey) writes:
    D <nospam@example.net> wrote:
    Why do they go backwards? I mean larger binaries must come with some
    benefit right?

    The hello world executable generated with gcc under Oracle Linux 8 is
    42Mb long, which is more MASS STORAGE than I had on the first Unix
    system I ever used. I can't see this as being a good thing.

    I’m not sure how you managed to make it be 42MB. If I use the official
    container image it comes out under 20KB.

    [root@4b3cfcf8c484 /]# cat t.c
    #include <stdio.h>

    int main(void) { return printf("Hello, world\n"); }
    [root@4b3cfcf8c484 /]# gcc -O2 -o t t.c
    [root@4b3cfcf8c484 /]# ./t
    Hello, world
    [root@4b3cfcf8c484 /]# ls -l t
    -rwxr-xr-x 1 root root 18096 Apr 16 07:27 t
    --
    https://www.greenend.org.uk/rjk/
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From nospam@nospam@example.net to comp.misc on Tue Apr 16 10:52:32 2024
    From Newsgroup: comp.misc



    On Tue, 16 Apr 2024, Scott Dorsey wrote:

    D <nospam@example.net> wrote:

    Why do they go backwards? I mean larger binaries must come with some
    benefit right?

    The hello world executable generated with gcc under Oracle Linux 8 is
    42Mb long, which is more MASS STORAGE than I had on the first Unix
    system I ever used. I can't see this as being a good thing.
    --scott


    Well, for you and me it is not. But for the "cool" k8s crowd and the
    "modern man" storage is infinite, so why bother with a hello world of 42
    Mb? ;)
    --- Synchronet 3.20a-Linux NewsLink 1.114