• Re: Functional programming is not always what it seems

    From David Brown@david.brown@hesbynett.no to comp.lang.misc on Mon Nov 21 16:23:32 2022
    From Newsgroup: comp.lang.misc

    On 18/11/2022 14:14, Dmitry A. Kazakov wrote:
    On 2022-11-18 13:36, David Brown wrote:

    If you want to know what a function does before running it, look at
    its definition - along with the definition of any other functions it
    calls, what data it has, what constants it uses, and so on.  That
    applies to all languages.

    That is exactly the problem. In mathematics and in declarative
    approaches that follow closely, definition is the function itself. E.g.
    all sorts of recursive definitions. Analysis of function behavior is in
    the core of mathematics. Basically it presents something, you do not
    know what, though its definition is before your eyes! Your objective is
    to figure it out, to study it. This is because mathematical functions
    exist on their own.


    "Functions" in functional programming and in mathematics are not the
    same thing.

    In mathematics, the functions "sin(x)" and "cos(x - π/2)" are exactly
    the same thing. They are indistinguishable. Any definition that gives
    the same mapping from the source domain to the destination domain is the
    same function.

    In functional programming languages - real, practical functional
    programming languages, that is not the case. You might argue that in a hypothetical sense two functional programming language functions that
    give the same results are the same function - but you can argue exactly
    the same about any kind of programming language. In reality, functional programming language functions are much like functions in imperative
    languages - compilers can manipulate them to make variants that give the
    same results more efficiently (this is "optimisation"), but otherwise
    they do what the function definition says.
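
    For illustration, a minimal Haskell sketch (the names f1 and f2 are made
    up): both definitions denote the same mathematical mapping, but as program
    definitions they are distinct objects that a compiler treats separately
    unless it can prove them equal.

        f1 :: Double -> Double
        f1 x = sin x

        f2 :: Double -> Double
        f2 x = cos (x - pi / 2)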

    A difference, perhaps, is that imperative functions go from the bottom
    up (or start to finish) while functional programming language functions
    are often more top-down (describe the end result that you want, and then
    the partial results to get there).
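
    As a rough sketch of the "describe the end result" style, here is a
    hypothetical Haskell definition that states what the result is (the sum of
    the squares of the even elements) rather than how to loop over the input:

        sumOfEvenSquares :: [Int] -> Int
        sumOfEvenSquares = sum . map (^ 2) . filter even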

    This is not how engineering and programming as engineering activity
    work. There you create something in order to achieve something.

    Sorry, but you are completely wrong in your distinction.

    Programming with FP languages is as much "engineering" as programming
    with imperative languages or any other paradigm. When telephone
    switching systems are programmed in the functional programming language
    Erlang (which was developed with that application in mind), do you think
    it is not "achieving something"? When people make programmable logic
    designs using FP languages such as Lava (built on Haskell), Confluence
    (from OCaml), or SpinalHDL (from Scala), it is as clear and solid
    engineering as you can get. And of course, "normal" functional
    programming is programming just like any other programming.


    Pragmatically, separation of specification and implementation is
    difficult when functions become first class objects.

    Again, that is an imaginary distinction.

    Whether the language supports first-class functions or not makes no
    difference as to how well specified functions are, or whether the implementation follows that specification or not. The only difference
    is that with functional programming, it is often easier to see that the function implements the specification - but regardless of the paradigm,
    this depends on the complexity of the function.


    Another issue is treatment of types when each function is an operation
    on some types and nothing else.


    I can't understand what you mean. Functional programming language
    functions are not operations on types. Conversely, all functions in all languages operate on some types and nothing else.

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Mon Nov 21 17:08:22 2022
    From Newsgroup: comp.lang.misc

    On 2022-11-21 16:23, David Brown wrote:
    On 18/11/2022 14:14, Dmitry A. Kazakov wrote:
    On 2022-11-18 13:36, David Brown wrote:

    If you want to know what a function does before running it, look at
    its definition - along with the definition of any other functions it
    calls, what data it has, what constants it uses, and so on.  That
    applies to all languages.

    That is exactly the problem. In mathematics and in declarative
    approaches that follow closely, definition is the function itself.
    E.g. all sorts of recursive definitions. Analysis of function behavior
    is in the core of mathematics. Basically it presents something, you do
    not know what, though its definition is before your eyes! Your
    objective is to figure it out, to study it. This is because
    mathematical functions exist on their own.

    "Functions" in functional programming and in mathematics are not the
    same thing.

    In mathematics, the functions "sin(x)" and "cos(x - π/2)" are exactly
    the same thing.  They are indistinguishable.  Any definition that gives the same mapping from the source domain to the destination domain is the same function.

    In functional programming languages - real, practical functional
    programming languages, that is not the case.  You might argue that in a hypothetical sense two functional programming language functions that
    give the same results are the same function - but you can argue exactly
    the same about any kind of programming language.  In reality, functional programming language functions are much like functions in imperative languages - compilers can manipulate them to make variants that give the same results more efficiently (this is "optimisation"), but otherwise
    they do what the function definition says.

    Yes. In short, the functional paradigm does not live up to its promises. That
    is an open secret... (:-))

    A difference, perhaps, is that imperative functions go from the bottom
    up (or start to finish) while functional programming language functions
    are often more top-down (describe the end result that you want, and then
    the partial results to get there).

    Well, not really. Whether the ultimate program is

    do_it;

    or

    let_it_be_done;

    makes little difference. Levels of indirection do not necessarily translate
    into higher abstraction, or conversely. The imperative approach has one
    level fewer.

    This is not how engineering and programming as engineering activity
    work. There you create something in order to achieve something.

    Sorry, but you are completely wrong in your distinction.

    Programming with FP languages is as much "engineering" as programming
    with imperative languages or any other paradigm.  When telephone
    switching systems are programmed in the functional programming language Erlang (which was developed with that application in mind), do you think
    it is not "achieving something"?  When people make programmable logic designs using FP languages such as Lava (build on Haskell), Confluence
    (from OCaml), or SpinalHDL (from Scala), it is as clear and solid engineering as you can get.  And of course, "normal" functional
    programming is programming just like any other programming.

    You confuse application with intent. Yes, you can achieve something by programming in awful languages in an awful manner. I do not mean FPLs specifically here, just for the sake of argument. You would be surprised to learn
    what software has been written in VisualBasic...

    Pragmatically, separation of specification and implementation is
    difficult when functions become first class objects.

    Again, that is an imaginary distinction.

    It is not imaginary. If you cannot use functions as a vehicle to define interfaces, you need to come up with something else or drop the idea
    altogether.

    Whether the language supports first-class functions or not makes no difference as to how well specified functions are, or whether the implementation follows that specification or not.

    Specification is a declarative layer on top of the object language. In imperative procedural languages that layer is declaration of
    subprograms. In OOPL it is types (classes) defined in terms of methods (members are built-in getter/setter methods). In FPL, typically, there
    is none, unless you introduce some meta functions, whatever. E.g. generics/templates exist for infinity and still have no reasonable specifications, and thus, are fundamentally non-testable. I do not care
    even a little bit about FP, but, my guess is that it must have similar
    issues.

    Another issue is treatment of types when each function is an operation
    on some types and nothing else.

    I can't understand what you mean.  Functional programming language functions are not operations on types.

    Yep.

    Conversely, all functions in all
    languages operate on some types and nothing else.

    No, in OOPL a method operates on the class. A "free function" takes some arguments in unrelated types.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Mon Nov 21 20:56:51 2022
    From Newsgroup: comp.lang.misc

    On 21/11/2022 17:08, Dmitry A. Kazakov wrote:
    On 2022-11-21 16:23, David Brown wrote:
    On 18/11/2022 14:14, Dmitry A. Kazakov wrote:
    On 2022-11-18 13:36, David Brown wrote:

    If you want to know what a function does before running it, look at
    its definition - along with the definition of any other functions it
    calls, what data it has, what constants it uses, and so on.  That
    applies to all languages.

    That is exactly the problem. In mathematics and in declarative
    approaches that follow closely, definition is the function itself.
    E.g. all sorts of recursive definitions. Analysis of function
    behavior is in the core of mathematics. Basically it presents
    something, you do not know what, though its definition is before your
    eyes! Your objective is to figure it out, to study it. This is
    because mathematical functions exist on their own.

    "Functions" in functional programming and in mathematics are not the
    same thing.

    In mathematics, the functions "sin(x)" and "cos(x - π/2)" are exactly
    the same thing.  They are indistinguishable.  Any definition that
    gives the same mapping from the source domain to the destination
    domain is the same function.

    In functional programming languages - real, practical functional
    programming languages, that is not the case.  You might argue that in
    a hypothetical sense two functional programming language functions
    that give the same results are the same function - but you can argue
    exactly the same about any kind of programming language.  In reality,
    functional programming language functions are much like functions in
    imperative languages - compilers can manipulate them to make variants
    that give the same results more efficiently (this is "optimisation"),
    but otherwise they do what the function definition says.

    Yes. In short, the functional paradigm does not live up to its promises. That
    is an open secret... (:-))


    Functional programming languages are programming languages where
    functions are first-class objects. I don't know what you are thinking
    about, but it is /not/ functional programming languages.

    A difference, perhaps, is that imperative functions go from the bottom
    up (or start to finish) while functional programming language
    functions are often more top-down (describe the end result that you
    want, and then the partial results to get there).

    Well, not really. Whether the ultimate program is

       do_it;

    That would be imperative coding.


    or

       let_it_be_done;

    That would be declarative. (Note that functional programming is not the
    only kind of declarative programming.)


    makes little difference. Levels of indirection do not necessarily translate
    into higher abstraction, or conversely. The imperative approach has one
    level fewer.

    I think perhaps, like many programmers, you have worked all your life
    with imperative programming and can't imagine or understand other
    paradigms. You consider imperative as "the best" or "the most natural", simply because it is the most familiar to you.


    This is not how engineering and programming as engineering activity
    work. There you create something in order to achieve something.

    Sorry, but you are completely wrong in your distinction.

    Programming with FP languages is as much "engineering" as programming
    with imperative languages or any other paradigm.  When telephone
    switching systems are programmed in the functional programming
    language Erlang (which was developed with that application in mind),
    do you think it is not "achieving something"?  When people make
    programmable logic designs using FP languages such as Lava (built on
    Haskell), Confluence (from OCaml), or SpinalHDL (from Scala), it is as
    clear and solid engineering as you can get.  And of course, "normal"
    functional programming is programming just like any other programming.

    You confuse application with intent. Yes, you can achieve something by programming in awful languages in an awful manner. I do not mean FPLs specifically here, just for the sake of argument. You would be surprised to learn
    what software has been written in VisualBasic...


    Oh, so you think functional programming languages are designed and
    intended to be useless, and it is only by accident or stubbornness that
    anyone can actually make use of them? That is an "interesting" argument.

    Pragmatically, separation of specification and implementation is
    difficult when functions become first class objects.

    Again, that is an imaginary distinction.

    It is not imaginary. If you cannot use functions as a vehicle to define interfaces, you need to come up with something else or drop the idea altogether.

    Again, I can't figure out what you are talking about.


    Whether the language supports first-class functions or not makes no
    difference as to how well specified functions are, or whether the
    implementation follows that specification or not.

    Specification is a declarative layer on top of the object language. In imperative procedural languages that layer is declaration of
    subprograms.

    No, that is not what "specification" means. A declaration is just the
    name, parameters, types, etc., of a function. A specification says what
    the function /does/. In most programming, specifications are not
    written in the language itself though some allow a limited form of
    formal specification (such as pre-conditions and post-conditions).

    In OOPL it is types (classes) defined in terms of methods
    (members are built-in getter/setter methods). In FPL, typically, there
    is none, unless you introduce some meta functions, whatever. E.g. generics/templates exist for infinity and still have no reasonable specifications, and thus, are fundamentally non-testable. I do not care
    even a little bit about FP, but, my guess is that it must have similar issues.


    So you think that in C, this is a "specification" :

    int times_two(int);

    while the Haskell equivalent :

    times :: Int -> Int

    is somehow completely different?
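
    Just to make the distinction concrete, a minimal Haskell sketch (the
    definition below is hypothetical, matching the declaration above only in
    shape): the first line is the type signature, the second the actual
    definition.

        times_two :: Int -> Int
        times_two n = 2 * n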


    Another issue is treatment of types when each function is an
    operation on some types and nothing else.

    I can't understand what you mean.  Functional programming language
    functions are not operations on types.

    Yep.

    Oh, so you think functions in functional programming languages don't
    have types or act on types? Maybe you are thinking of Forth, rather
    than functional programming languages? Certainly you seem to have got a
    very strange idea of functional programming.

    Functional programming languages have supported generic programming and
    type inference for a lot longer than most imperative languages, but
    those are both standard for any serious modern imperative language (in
    C++ you have templates and "auto", in other languages you have similar features).


    Conversely, all functions in all languages operate on some types and
    nothing else.

    No, in OOPL a method operates on the class. A "free function" takes some arguments in unrelated types.


    Methods in OOPL (or what most people think of as Object Oriented programming,
    such as C++, Java, Python, etc., rather than the original intention of
    OOP, which is now commonly called the "actors" paradigm) are syntactic sugar
    for a function with the class instance as the first parameter.




    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Mon Nov 21 22:25:21 2022
    From Newsgroup: comp.lang.misc

    On 2022-11-21 20:56, David Brown wrote:
    On 21/11/2022 17:08, Dmitry A. Kazakov wrote:

    I think perhaps, like many programmers, you have worked all your life
    with imperative programming and can't imagine or understand other paradigms.  You consider imperative as "the best" or "the most natural", simply because it is the most familiar to you.

    Actually I worked a lot with declarative languages in AI (Prologue,
    expert systems etc) and pattern matching (e.g. SNOBOL). My deep distrust
    of the declarative approach comes from that time, greatly reinforced by
    the relational paradigm.

    Oh, so you think functional programming languages are designed and
    intended to be useless, and it is only by accident or stubbornness that anyone can actually make use of them?  That is an "interesting" argument.

    Absolutely. Most languages fall into this category. It is not a unique
    feature of FPLs...

    Pragmatically, separation of specification and implementation is
    difficult when functions become first class objects.

    Again, that is an imaginary distinction.

    It is not imaginary. If you cannot use functions as a vehicle to
    define interfaces, you need to come up with something else or drop the
    idea altogether.

    Again, I can't figure out what you are talking about.

    About specifications stating some part of behavior, but not implementing
    the behavior.

    Whether the language supports first-class functions or not makes no
    difference as to how well specified functions are, or whether the
    implementation follows that specification or not.

    Specification is a declarative layer on top of the object language. In
    imperative procedural languages that layer is declaration of subprograms.

    No, that is not what "specification" means.  A declaration is just the name, parameters, types, etc., of a function.  A specification says what the function /does/.

    That is the same. When you specify a type, you say what the object "does" by
    being of that type.

    In most programming, specifications are not
    written in the language itself though some allow a limited form of
    formal specification (such as pre-conditions and post-conditions).

    You either have them in the language or you have not. Clearly specifications
    form a meta language on top of the core object language.

    In OOPL it is types (classes) defined in terms of methods (members are
    built-in getter/setter methods). In FPL, typically, there is none,
    unless you introduce some meta functions, whatever. E.g.
    generics/templates exist for infinity and still have no reasonable
    specifications, and thus, are fundamentally non-testable. I do not
    care even a little bit about FP, but, my guess is that it must have
    similar issues.


    So you think that in C, this is a "specification" :

        int times_two(int);

    while the Haskell equivalent :

        times :: Int -> Int

    is somehow completely different?

    No. But also there is nothing specifically "functional" in these
    primitive specifications. They are not even first class. You didn't write:

    int (int) times_two; // Some functional C (no pun intended (:-))

    The point is that if you take some really fancy functional stuff, it
    would be difficult or, maybe, useless to formally describe in some meta language of specifications.

    Another issue is treatment of types when each function is an
    operation on some types and nothing else.

    I can't understand what you mean.  Functional programming language
    functions are not operations on types.

    Yep.

    Oh, so you think functions in functional programming languages don't
    have types or act on types?

    It was you who said "Functional programming language functions are not operations on types." I only agreed with you. Again, it might be
    possible to build a complete type algebra on top of "functional" quirks.
    But there are enough unresolved problems already without bringing
    first-class functions in. So, why bother? Passing a subprogram as
    parameter (downward closure) covers all my needs. Objects parametrized
    by functions? That looks too much. Yes, I hate templates and generics,
    before you ask... (:-))

    Functional programming languages have supported generic programming and
    type inference for a lot longer than most imperative languages, but
    those are both standard for any serious modern imperative language (in
    C++ you have templates and "auto", in other languages you have similar features).

    Type inference is a separate, and very controversial issue.

    Conversely, all functions in all languages operate on some types and
    nothing else.

    No, in OOPL a method operates on the class. A "free function" takes
    some arguments in unrelated types.

    Methods in OOPL (or what most people think of as Object Oriented programming, such as C++, Java, Python, etc., rather than the original intention of
    OOP, which is now commonly called the "actors" paradigm) are syntactic sugar
    for a function with the class instance as the first parameter.

    Not really, because methods dispatch. A method acts on the class and its implementation consists of separate bodies, which is the core of OO decomposition as opposed to other paradigms.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Tue Nov 22 15:11:02 2022
    From Newsgroup: comp.lang.misc

    On 21/11/2022 22:25, Dmitry A. Kazakov wrote:
    On 2022-11-21 20:56, David Brown wrote:
    On 21/11/2022 17:08, Dmitry A. Kazakov wrote:

    I think perhaps, like many programmers, you have worked all your life
    with imperative programming and can't imagine or understand other
    paradigms.  You consider imperative as "the best" or "the most
    natural", simply because it is the most familiar to you.

    Actually I worked a lot with declarative languages in AI (Prologue,
    expert systems etc) and pattern matching (e.g. SNOBOL). My deep distrust
    of the declarative approach comes from that time, greatly reinforced by
    the relational paradigm.

    Assuming you mean Prolog (there may be a language called Prologue that I
    don't know about), it is declarative (rather than imperative) but it is
    not a functional programming language. And SNOBOL is considered
    imperative, not declarative, but is really in a class of its own.
    Familiarity with these gives you a broader background than many
    programmers, but no insight or experience with functional programming.


    Oh, so you think functional programming languages are designed and
    intended to be useless, and it is only by accident or stubbornness
    that anyone can actually make use of them?  That is an "interesting"
    argument.

    Absolutely. Most languages fall into this category. It is not a unique feature of FPLs...


    That is quite a cynical viewpoint!

    Pragmatically, separation of specification and implementation is
    difficult when functions become first class objects.

    Again, that is an imaginary distinction.

    It is not imaginary. If you cannot use functions as a vehicle to
    define interfaces, you need to come up with something else or drop the
    idea altogether.

    Again, I can't figure out what you are talking about.

    About specifications stating some part of behavior, but not implementing
    the behavior.

    Whether the language supports first-class functions or not makes no
    difference as to how well specified functions are, or whether the
    implementation follows that specification or not.

    Specification is a declarative layer on top of the object language.
    In imperative procedural languages that layer is declaration of
    subprograms.

    No, that is not what "specification" means.  A declaration is just the
    name, parameters, types, etc., of a function.  A specification says
    what the function /does/.

    That is the same. When you specify a type, you say what the object "does" by
    being of that type.


    No, they are not the same at all. A declaration is needed to use the
    function (or other object) in the language. A specification says what
    the function does (or should do). There may be a bit of overlap (maybe
    both say they take an integer input parameter), but that's usually all.

    In most programming, specifications are not written in the language
    itself though some allow a limited form of formal specification (such
    as pre-conditions and post-conditions).

    You either have them in the language or you have not. Clearly specifications
    form a meta language on top of the core object language.


    No, it is not "clearly" at all. If a programming language allows specifications of some sort as part of the language, then it is part of
    the language. If it does not, then it is not part of the language - specifications then have to be written independently (such as in a
    separate human-language document) and are not in a "meta language".

    In OOPL it is types (classes) defined in terms of methods (members
    are built-in getter/setter methods). In FPL, typically, there is
    none, unless you introduce some meta functions, whatever. E.g.
    generics/templates exist for infinity and still have no reasonable
    specifications, and thus, are fundamentally non-testable. I do not
    care even a little bit about FP, but, my guess is that it must have
    similar issues.


    So you think that in C, this is a "specification" :

         int times_two(int);

    while the Haskell equivalent :

         times :: Int -> Int

    is somehow completely different?

    No. But also there is nothing specifically "functional" in these
    primitive specifications.

    There was not intended to be anything "functional" about them. You said
    that in imperative languages, a declaration is a specification, while in functional programming languages you have no specifications. I showed
    that you can have exactly the same kind of declarations in both
    languages - the differences you are claiming are imaginary. (Such declarations are not specifications in either language.)

    Incidentally, you /do/ understand that "object oriented" is orthogonal
    to imperative/declarative?  Many functional programming languages are object
    oriented, just as many imperative languages are not.

    They are not even first class.

    "Int" is a first class object type in Haskell and C. The "times"
    function is a first class object type in Haskell, but not in C.

    You didn't write:

       int (int) times_two;  // Some functional C (no pun intended (:-))


    Indeed I didn't write that - it is not syntactically correct in either
    sample language.

    I could have written a declaration for a higher order function in Haskell:

    do_twice :: (Int -> Int) -> (Int -> Int)

    That takes a function that is int-to-int, and returns a function that is int-to-int.

    But then it would have been impossible to give the equivalent in C. The nearest you could do would be :

    typedef int (*int_to_int_fp)(int);
    int_to_int_fp do_twice(int_to_int_fp);

    It needs a typedef (there is, AFAIK, no way to return a function pointer
    type without it), and it is in terms of function pointers, not functions.

    And again - these are declarations, not specifications.

    The point is that if you take some really fancy functional stuff, it
    would be difficult or, maybe, useless to formally describe in some meta language of specifications.


    A function that cannot sensibly be described is of little use to anyone.
    That is completely independent of the programming language paradigm.
    If no one can tell you what "foo" does, you can't use it. I cannot see
    any way in which imperative languages differ from declarative languages
    in that respect.

    Another issue is treatment of types when each function is an
    operation on some types and nothing else.

    I can't understand what you mean.  Functional programming language
    functions are not operations on types.

    Yep.

    Oh, so you think functions in functional programming languages don't
    have types or act on types?

    It was you who said "Functional programming language functions are not operations on types." I only agreed with you.

    I think I see the source of confusion. When you wrote "each function is
    an operation on some types", you mean functions that operate on
    /instances/ of certain types. There is a vast difference between
    operating on /instances/, and operating on /types/.

    In functional programming languages (and languages that support
    functional programming features, such as C++), functions can operate on /instances/ (like "foo(100)") and on /functions/ (like "foo(bar)"), but
    not on /types/ (like "foo(int)"). A function on types could be, say, a "make_pair" function that took a type as its parameter and returned a
    type that was a tuple of two of that type.

    Some languages support this kind of thing, to some extent at least. In
    a functional programming language you might have "constructor" functions
    that create new values of a given type, and of course you could have a higher-order function that takes constructor functions and returns
    constructor functions. In C++, you can have a class template that takes
    other types as parameters. In Python, you can operate on classes as first-class objects. In C, you can do some type manipulation with macros.
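
    A small Haskell sketch of those three levels (all names are illustrative):
    a function applied to an instance, a higher-order function applied to a
    function, and a type-level constructor playing the role of the
    hypothetical "make_pair".

        addOne :: Int -> Int
        addOne n = n + 1                 -- operates on instances: addOne 100

        applyTwice :: (a -> a) -> a -> a
        applyTwice f x = f (f x)         -- operates on functions: applyTwice addOne 100

        type Pair a = (a, a)             -- type-level: Pair Int is (Int, Int)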


    I think what you meant to say was that in imperative languages, you
    define functions to act on particular types (the types of the
    parameters), while in functional programming you don't give the types of
    the parameters so they operate on "anything".

    This is, of course, wrong in both senses. More advanced imperative languages
    let you define functions that operate on many types - they are known as
    template functions in C++, generics in Ada, and similarly in other
    languages. And in functional programming languages you can specify
    types as precisely or generically as you want.
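
    A hedged Haskell sketch of that spectrum (names invented for the example):
    the same kind of operation written fully generically, constrained
    generically, and pinned to one concrete type.

        len :: [a] -> Int           -- works for any element type
        len = length

        total :: Num a => [a] -> a  -- generic, but constrained to numeric types
        total = sum

        totalInts :: [Int] -> Int   -- fixed to one concrete type
        totalInts = sum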


    Again, it might be
    possible to build a complete type algebra on top of "functional" quirks.
    But there are enough unresolved problems already without bringing first-class functions in. So, why bother? Passing a subprogram as
    parameter (downward closure) covers all my needs. Objects parametrized
    by functions? That looks too much. Yes, I hate templates and generics, before you ask... (:-))

    You like a programming language where you can understand a near
    one-to-one correspondence between the source language and the generated assembly. Fair enough - that's a valid and reasonable preference.

    I have nothing against preferences. I just don't understand how people
    can dismiss other options as impractical, useless, unintuitive,
    impossible to use, or whatever, simply because those other languages are
    not a style that they are familiar with or not a style they like.


    Functional programming languages have supported generic programming
    and type inference for a lot longer than most imperative languages,
    but those are both standard for any serious modern imperative language
    (in C++ you have templates and "auto", in other languages you have
    similar features).

    Type inference is a separate, and very controversial issue.

    It is only as controversial as any other feature where you have a
    trade-off between implicit and explicit information. It is not really
    any more controversial than the implicit conversions found in many
    languages and user types.

    And it is not a separate issue - unless you are using a dynamic language
    that supports objects of any type in any place (with run-time checking
    when the objects are used), type inference is essential to how generic programming works. A function (or class, or whatever) is defined with a generic parameter, and when it is used in the code, type inference is
    used to determine the real type in use and therefore the real function
    to make and use. (Type inference can also be used without generics,
    such as the "__auto_type" gcc extension to C, which may be part of
    future C23.)
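
    For instance, a small Haskell sketch (hypothetical function): the generic
    definition is written once, and the compiler infers the concrete type at
    each use site.

        doubleAll :: Num a => [a] -> [a]
        doubleAll = map (* 2)

        main :: IO ()
        main = print (doubleAll [1, 2, 3 :: Int])   -- `a` is inferred to be Int here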


    Conversely, all functions in all languages operate on some types and
    nothing else.

    No, in OOPL a method operates on the class. A "free function" takes
    some arguments in unrelated types.

    Methods in OOPL (or what most people think of as Object Oriented
    programming, such as C++, Java, Python, etc., rather than the original
    intention of OOP, which is now commonly called the "actors" paradigm) are
    syntactic sugar for a function with the class instance as the first
    parameter.

    Not really, because methods dispatch. A method acts on the class and its implementation consists of separate bodies, which is the core of OO decomposition as opposed to other paradigms.


    Methods do not act on classes - they act on instances of a class (plus
    any static members of the class). They can be very neat and convenient syntactic sugar, but that's all they are. Some languages allow syntaxes "x.foo()" and "foo(x)" as alternatives that mean exactly the same thing.
    The precise method of matching up the call syntax with the function
    code to use varies - you have different kinds of name lookups, with some
    done at compile-time, some at run-time, some via names and some via
    structure information (that would include C++ virtual methods).

    Tying function code to method names or free function names is just
    naming syntax details, not an operation on the class or type itself.
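
    One compile-time flavour of that lookup, sketched in Haskell with type
    classes (the types and bodies are invented for the example): the call
    "area c" is resolved to the body for the concrete type of "c", typically
    at compile time once that type is known.

        class Shape s where
          area :: s -> Double

        newtype Circle = Circle Double
        newtype Square = Square Double

        instance Shape Circle where
          area (Circle r) = pi * r * r

        instance Shape Square where
          area (Square a) = a * a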

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Nov 22 16:24:44 2022
    From Newsgroup: comp.lang.misc

    On 2022-11-22 15:11, David Brown wrote:

    Familiarity with these gives you a broader background than many
    programmers, but no insight or experience with functional programming.

    I don't pretend. I said I don't buy the declarative approach, and I don't buy first-class functions, however you shape them.

    Oh, so you think functional programming languages are designed and
    intended to be useless, and it is only by accident or stubbornness
    that anyone can actually make use of them?  That is an "interesting"
    argument.

    Absolutely. Most languages fall into this category. It is not a unique
    feature of FPLs...

    That is quite a cynical viewpoint!

    Grows from familiarity... (:-))

    Whether the language supports first-class functions or not makes no difference as to how well specified functions are, or whether the
    implementation follows that specification or not.

    Specification is a declarative layer on top of the object language.
    In imperative procedural languages that layer is declaration of
    subprograms.

    No, that is not what "specification" means.  A declaration is just
    the name, parameters, types, etc., of a function.  A specification
    says what the function /does/.

    That is the same. When you specify a type, you say what the object "does" by
    being of that type.

    No, they are not the same at all.  A declaration is needed to use the function (or other object) in the language.

    Almost no language declares naked names. Most combine that with a
    specification of what these names are supposed to mean, i.e. how they behave.

    In most programming, specifications are not written in the language
    itself though some allow a limited form of formal specification (such
    as pre-conditions and post-conditions).

    You either have them in the language or you have not. Clearly specifications
    form a meta language on top of the core object language.

    No, it is not "clearly" at all.  If a programming language allows specifications of some sort as part of the language, then it is part of
    the language.  If it does not, then it is not part of the language - specifications then have to be written independently (such as in a
    separate human-language document) and are not in a "meta language".

    If it is not a part of the language, then there is nothing to talk about.

    In OOPL it is types (classes) defined in terms of methods (members
    are built-in getter/setter methods). In FPL, typically, there is
    none, unless you introduce some meta functions, whatever. E.g.
    generics/templates exist for infinity and still have no reasonable
    specifications, and thus, are fundamentally non-testable. I do not
    care even a little bit about FP, but, my guess is that it must have
    similar issues.


    So you think that in C, this is a "specification" :

         int times_two(int);

    while the Haskell equivalent :

         times :: Int -> Int

    is somehow completely different?

    No. But also there is nothing specifically "functional" in these
    primitive specifications.

    There was not intended to be anything "functional" about them.  You said that in imperative languages, a declaration is a specification, while in functional programming languages you have no specifications.  I showed
    that you can have exactly the same kind of declarations in both
    languages - the differences you are claiming are imaginary.  (Such declarations are not specifications in either language.)

    You showed a non-functional part. I said that marrying specifications
    with "functional" would be IMO difficult. Too much power of function construction to predict and constrain the behavior.

    Incidentally, you /do/ understand that "object oriented" is orthogonal
    to imperative/declarative?  Many functional programming languages are object oriented, just as many imperative languages are not.

    They are not even first class.

    "Int" is a first class object type in Haskell and C.  The "times"
    function is a first class object type in Haskell, but not in C.

    You didn't write:

        int (int) times_two;  // Some functional C (no pun intended (:-))


    Indeed I didn't write that - it is not syntactically correct in either sample language.

    I could have written a declaration for a higher order function in Haskell:

        do_twice :: (Int -> Int) -> (Int -> Int)

    That takes a function that is int-to-int, and returns a function that is int-to-int.

    Not the same. int (int) times_two; is supposedly a variable whose values
    are int-valued functions taking one int argument. That would be the
    simplest possible case of first-class functions.

    And again - these are declarations, not specifications.

    They are rudimentary specifications. To elaborate them is difficult in C
    and in an FPL.

    The point is that if you take some really fancy functional stuff, it
    would be difficult or, maybe, useless to formally describe in some
    meta language of specifications.

    A function that cannot sensibly be described is of little use to anyone.
     That is completely independent of the programming language paradigm.
    If no one can tell you what "foo" does, you can't use it.  I cannot see
    any way in which imperative languages differ from declarative languages
    in that respect.

    That comment was about first-class functions. If you have a
    sufficiently complex algebra generating functions, especially during
    run-time, it becomes difficult to describe the result in specifications.

    The declarative approach is simply difficult to understand, which makes code
    difficult to reuse and maintain.

    Another issue is treatment of types when each function is an
    operation on some types and nothing else.

    I can't understand what you mean.  Functional programming language functions are not operations on types.

    Yep.

    Oh, so you think functions in functional programming languages don't
    have types or act on types?

    It was you who said "Functional programming language functions are not
    operations on types." I only agreed with you.

    I think I see the source of confusion.  When you wrote "each function is
    an operation on some types", you mean functions that operate on
    /instances/ of certain types.  There is a vast difference between
    operating on /instances/, and operating on /types/.

    I did not mean first-class type objects (types of types). They raise the
    same objections as first-class procedural objects (types of
    subprograms). It would surely be interesting as academic research and
    a nightmare in production code.

    Acting on a type means being a function of the type domain. E.g. sine
    acts on R. R is a type ("field" etc).

    I think what you meant to say was that in imperative languages, you
    define functions to act on particular types (the types of the
    parameters), while in functional programming you don't give the types of
    the parameters so they operate on "anything".

    No, that would be untyped. Most FPLs are typed, I hope.

    This is, of course, wrong in both senses.  More advanced imperative languages
    let you define functions that operate on many types - they are known as
    template functions in C++, generics in Ada, and similarly in other languages.

    That is acting on a class. A class is a set of types, e.g. built by
    derivation closure in OO or ad-hoc as "any macro expansion the compiler
    would swallow" in the case of templates... (:-))

    And in functional programming languages you can specify
    types as precisely or generically as you want.

    I don't see why an FPL could not include some "non-functional" core,
    just as C++ contains non-OO stuff borrowed from C. But when talking about the
    advantages and disadvantages of a given paradigm, we must discuss the new stuff. Old stuff usually indicates incompleteness of the paradigm. E.g.
    C++, Java and Ada are nowhere close to being 100% OO. On the contrary, they
    are non-OO most of the time.

    You like a programming language where you can understand a near
    one-to-one correspondence between the source language and the generated assembly.  Fair enough - that's a valid and reasonable preference.

    Much weaker than that. I want to be able to recognize complexity and
    algorithms, and to have limited effects on the computational environment. Basically, small things must look small and big things big. If there is
    a loop or recursion, I'd like to be aware of it. I also like to have "uninteresting" details hidden. But I want things like memory management, blocking etc. sufficiently exposed.

    I have nothing against preferences.  I just don't understand how people
    can dismiss other options as impractical, useless, unintuitive,
    impossible to use, or whatever, simply because those other languages are
    not a style that they are familiar with or not a style they like.

    You get me wrong. It is not about style, though that is often more
    important than anything else. My concern is with the paradigm as a whole.
    Functional programming languages have supported generic programming
    and type inference for a lot longer than most imperative languages,
    but those are both standard for any serious modern imperative
    language (in C++ you have templates and "auto", in other languages
    you have similar features).

    Type inference is a separate, and very controversial issue.

    It is only as controversial as any other feature where you have a
    trade-off between implicit and explicit information.  It is not really
    any more controversial than the implicit conversions found in many
    languages and user types.

    Yes, and there should be none in a well-typed program.

    And it is not a separate issue - unless you are using a dynamic language that supports objects of any type in any place (with run-time checking
    when the objects are used), type inference is essential to how generic programming works.

    No, you can have static polymorphism without inference. You possibly
    mean some evil automatic instantiations of generics instead, like in
    C++. I don't want automatic instantiation either.

    Conversely, all functions in all languages operate on some types
    and nothing else.

    No, in OOPL a method operates on the class. A "free function" takes
    some arguments in unrelated types.

    Methods in OOPL (or what most people think of as Object Oriented
    programming, such as C++, Java, Python, etc., rather than the
    original intention of OOP, which is now commonly called the "actors"
    paradigm) are syntactic sugar for a function with the class instance
    as the first parameter.

    Not really, because methods dispatch. A method acts on the class and
    its implementation consists of separate bodies, which is the core of
    OO decomposition as opposed to other paradigms.

    Methods do not act on classes - they act on instances of a class (plus
    any static members of the class).

    Rather, on the closure of the class. A class is a set of types. A method acts
    on all values of all instances of the set.

    Tying function code to method names or free function names is just
    naming syntax details, not an operation on the class or type itself.

    No, a free function does not dispatch if a class is involved. Its single
    body is valid for values of all instances. A method selects a body
    according to the actual instance. The selected body could be composed
    out of several bodies, like C++'s constructors and destructors, which
    are not methods, but one could imagine a language with that sort of
    composition of methods.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Tue Nov 22 20:01:15 2022
    From Newsgroup: comp.lang.misc

    On 22/11/2022 15:24, Dmitry A. Kazakov wrote:
    On 2022-11-22 15:11, David Brown wrote:
    [...] It is not really any more controversial than the implicit
    conversions found in many languages and user types.
    Yes, and there should be none in a well-typed program.

    Virtually every computer language since time began has glossed
    over the distinction between constants and variables in simple arithmetic expressions [and in contexts such as parameters to functions]. You can
    use an implicit dereference or you can make things harder for programmers;
    I know which I prefer.

    Personally, in the interests of making programming easier, I'm in favour of more, not fewer, implicit conversions. If these are well-chosen,
    you can [and should] avoid almost all explicit conversions. I see little
    point in telling people that they can't use "sqrt(2)" but must write "sqrt(2.0)" [or "sqrt((real) i))"] instead; can't write "print (x)" but
    must write "(void) print (x)" [as "print" happens to return the number of characters printed, which you happen not to care about on this occasion];
    or, for consistency, can't write "j := i" but must write "j := (deref) i" instead. It's not as though in any of these constructs there is reason
    ever to expect the implicit coercions not to apply, so that some errors
    are missed. But some people seem to prefer hair shirts.
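
    As an aside, a minimal Haskell illustration of how a language can avoid
    this particular annoyance without an explicit conversion: numeric literals
    are overloaded, so the 2 below is simply read at type Double.

        root2 :: Double
        root2 = sqrt 2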
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Hertel
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Nov 22 21:26:26 2022
    From Newsgroup: comp.lang.misc

    On 2022-11-22 21:01, Andy Walker wrote:
    On 22/11/2022 15:24, Dmitry A. Kazakov wrote:
    On 2022-11-22 15:11, David Brown wrote:
    [...] It is not really any more controversial than the implicit
    conversions found in many languages and user types.
    Yes, and there should be none in a well-typed program.

        Virtually every computer language since time began has glossed
    over the distinction between constants and variables in simple arithmetic expressions [and in contexts such as parameters to functions].  You can
    use an implicit dereference or you can make things harder for programmers;
    I know which I prefer.

        Personally, in the interests of making programming easier, I'm in favour of more, not fewer, implicit conversions.

    If you want no conversions you must state that the value is of a subtype.

    If these are well-chosen,

    By the programmer, see above.

    you can [and should] avoid almost all explicit conversions.  I see little point in telling people that they can't use "sqrt(2)" but must write "sqrt(2.0)"

    They could, should 2 be a subtype of Float (or whatever). There is no
    reason why this must be determined by the language. Technically, it
    would introduce a lot of ambiguities and thus make programming more
    difficult because the programmer must then use qualified expressions to disambiguate.

    or, for consistency, can't write "j := i" but must write "j := (deref) i" instead.

    Surely a pointer type can be a subtype of the target type. It is so in
    Ada for array index, record member, function call. Not for assignment,
    though. But again, that must be up to the programmer. The type system
    must provide ad-hoc subtypes.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Wed Nov 23 11:32:31 2022
    From Newsgroup: comp.lang.misc

    On 22/11/2022 16:24, Dmitry A. Kazakov wrote:
    On 2022-11-22 15:11, David Brown wrote:

    Familiarity with these gives you a broader background than many
    programmers, but no insight or experience with functional programming.

    I don't pretend. I said I don't buy the declarative approach, and I don't buy first-class functions, however you shape them.


    OK, I suppose - but I hope you'll forgive me if I don't believe you have demonstrated enough experience or knowledge for your thoughts on
    functional programming languages to carry any weight beyond personal
    dislike. (I won't argue with personal tastes, as long as it is clear
    that it is /your/ preferences for the programming /you/ do.)

    Oh, so you think functional programming languages are designed and
    intended to be useless, and it is only by accident or stubbornness
    that anyone can actually make use of them?  That is an "interesting" argument.

    Absolutely. Most languages fall into this category. It is not a
    unique feature of FPLs...

    That is quite a cynical viewpoint!

    Grows from familiarity... (:-))

    Whether the language supports first-class functions or not makes
    no difference as to how well specified functions are, or whether
    the implementation follows that specification or not.

    Specification is a declarative layer on top of the object language. In imperative procedural languages that layer is declaration of
    subprograms.

    No, that is not what "specification" means.  A declaration is just
    the name, parameters, types, etc., of a function.  A specification
    says what the function /does/.

    That is the same. When you specify a type, you say what the object "does" by
    being of that type.

    No, they are not the same at all.  A declaration is needed to use the
    function (or other object) in the language.

    Almost no language declares naked names. Most combine that with a
    specification of what these names are supposed to mean, i.e. how they behave.


    Again - you are mixing up specifications and declarations.

    "int foo(int x)" is a /declaration/. It is all you need in C to be able
    to use the function to compile code, and similar declarations are all
    you need in most other languages. It is /not/ a /specification/. It
    says /nothing/ about what the function means, does, or how it behaves.

    If I ask you "write a function called "check" that takes a double and
    returns a boolean", what would you do? You can't write that function -
    you have no idea what it is going to do. I, on the other hand, can put
    "bool check(double);" in my code and compile my program. How the whole
    thing behaves is another matter, because you have no specification for
    the function - merely its declaration.
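
    To make that concrete with a Haskell sketch (using the hypothetical check
    name from above): the declaration compiles and can be called against, even
    though nothing has been said about its behaviour.

        check :: Double -> Bool
        check = undefined   -- type-checks; the behaviour is entirely unspecified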

    /Please/ tell me you understand the difference! We are not going to get
    far if you don't see that.

    In most programming, specifications are not written in the language
    itself though some allow a limited form of formal specification
    (such as pre-conditions and post-conditions).

    You either have them in the language or you have not. Clearly
    specifications form a meta language on top of the core object language.

    No, it is not "clearly" at all.  If a programming language allows
    specifications of some sort as part of the language, then it is part
    of the language.  If it does not, then it is not part of the language
    - specifications then have to be written independently (such as in a
    separate human-language document) and are not in a "meta language".

    If it is not a part of the language, then there is nothing to talk about.

    In OOPL it is types (classes) defined in terms of methods (members
    are built-in getter/setter methods). In FPL, typically, there is
    none, unless you introduce some meta functions, whatever. E.g.
    generics/templates exist for infinity and still have no reasonable
    specifications, and thus, are fundamentally non-testable. I do not
    care even a little bit about FP, but, my guess is that it must have similar issues.


    So you think that in C, this is a "specification" :

         int times_two(int);

    while the Haskell equivalent :

         times :: Int -> Int

    is somehow completely different?

    No. But also there is nothing specifically "functional" in these
    primitive specifications.

    There was not intended to be anything "functional" about them.  You
    said that in imperative languages, a declaration is a specification,
    while in functional programming languages you have no specifications.
    I showed that you can have exactly the same kind of declarations in
    both languages - the differences you are claiming are imaginary.
    (Such declarations are not specifications in either language.)

    You showed a non-functional part. I said that marrying specifications
    with "functional" would be IMO difficult. Too much power of function construction to predict and constrain the behavior.


    I haven't given any specifications - nor have you. We have not
    discussed any particular concrete functions that could be specified.

    If you like, we could write this in Haskell :

    do_twice :: (a -> a) -> (a -> a)
    do_twice f x = f (f x)

    The first line is the declaration, which is often not necessary as the compiler will figure it out for itself. Programmers might want more
    control - for example, if you only want the function to handle int to
    int functions, you could write :

    do_twice :: (Int -> Int) -> (Int -> Int)
    do_twice f x = f (f x)


    As a specification, you could say "function "do_twice" should take a
    function as an argument, and return a function that has the same effect
    as applying the original function twice".

    So, now you have an example of a higher level function and its
    specification - /and/ an implementation, /and/ a declaration. Was that
    so difficult?
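
    For comparison - purely as a sketch, with invented names and assuming
    C++11's <functional> - roughly the same higher-order function in an
    imperative language:

        #include <functional>

        // Takes an int-to-int function and returns a function with the same
        // effect as applying the original function twice.
        std::function<int(int)> do_twice(std::function<int(int)> f) {
            return [f](int x) { return f(f(x)); };
        }

        // Usage: do_twice([](int x) { return x + 1; })(5) == 7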


    Incidentally, you /do/ understand that "object oriented" is orthogonal
    to imperative/declarative?  Many functional programming languages are
    object oriented, just as many imperative languages are not.

    They are not even first class.

    "Int" is a first class object type in Haskell and C.  The "times"
    function is a first class object type in Haskell, but not in C.

    You didn't write:

        int (int) times_two;  // Some functional C (no pun intended (:-))

    Indeed I didn't write that - it is not syntactically correct in either
    sample language.

    I could have written a declaration for a higher order function in
    Haskell:

         do_twice :: (Int -> Int) -> (Int -> Int)

    That takes a function that is int-to-int, and returns a function that
    is int-to-int.

    Not the same. int (int) times_two; is supposedly a variable whose values
    are int-valued functions taking one int argument. That would be the
    minimal case of first-class functions.


    I am sorry, I really cannot understand what you are trying to say here.
    You seem to be mixing up types, variables and functions here.

    I could have written :

    type IntFunc = Int -> Int
    do_twice :: IntFunc -> IntFunc
    do_twice f x = f (f x)

    This would have been the same thing as the previous Int only "do_twice".


    And again - these are declarations, not specifications.

    They are rudimentary specifications. To elaborate them is difficult in C
    and in an FPL.


    No, declarations are not specifications at all. They give you the name
    and the types involved - they don't say what the function should do.
    The nearest you have to a specification is a good choice of name.

    Arguably, functional programming is often just a formalised and precise language for writing your specifications. Functional programming is
    primarily concerned with what a function should /do/, and much less
    concerned about /how/ it should do it. The Haskell implementation of "do_twice" above is as clear a specification as the English language specification I gave.


    For real functional code, you will often define functions in a way that
    gives the general direction for how it will work. For example, a simple quicksort Haskell function could be :

        qs [] = []
        qs (x : xs) = (qs left) ++ [x] ++ (qs right)
            where
            left = filter (< x) xs
            right = filter (>= x) xs

    That could be considered a technical specification of a quicksort
    algorithm. A more advanced Haskell implementation would use in-place
    sorting for efficiency.


    The point is that if you take some really fancy functional stuff, it
    would be difficult or, maybe, useless to formally describe in some
    meta language of specifications.

    A function that cannot sensibly be described is of little use to
    anyone.   That is completely independent of the programming language
    paradigm. If no one can tell you what "foo" does, you can't use it.  I
    cannot see any way in which imperative languages differ from
    declarative languages in that respect.

    That comment was on the first-class functions. If you have a
    sufficiently complex algebra generating functions, especially during run-time, it becomes difficult to describe the result in specifications.


    Is your dislike of functional programming really just that it is
    possible to write higher order functions that you can't understand or describe? People can write crap in any language. Just have a look at <https://thedailywtf.com/> and you can see examples of indescribable
    code in every language you can imagine.

    The declarative approach is simply difficult to understand, and thus the
    code is difficult to reuse and maintain.


    It is okay to say "/I/ don't understand language X, so I don't use it".
    It is not okay to make general claims about what other people
    understand. You can be sure that people who program in functional
    programming languages /do/ understand what they are doing, and /can/
    maintain and reuse their code.

    Some languages have a reputation for being "write only", with Perl being
    the prime contender. I've never heard that about any functional
    programming language - and I challenge you to back up your claims with
    references. Fail that, and you are just another whiner who mocks things
    they don't understand rather than accept they don't know everything.

    Another issue is treatment of types when each function is an
    operation on some types and nothing else.

    I can't understand what you mean.  Functional programming language
    functions are not operations on types.

    Yep.

    Oh, so you think functions in functional programming languages don't
    have types or act on types?

    It was you who said "Functional programming language functions are
    not operations on types." I only agreed with you.

    I think I see the source of confusion.  When you wrote "each function
    is an operation on some types", you mean functions that operate on
    /instances/ of certain types.  There is a vast difference between
    operating on /instances/, and operating on /types/.

    I did not mean first-class type objects (types of types). They raise the
    same objections as first-class procedural objects (types of
    subprograms). It would surely be interesting as academic research and
    a nightmare in production code.

    Acting on a type means being a function of the type domain. E.g. sine
    acts on R. R is a type ("field" etc).

    So when you write "act on a type", you mean "act on an instance of a
    type" - just like "sine" acts on real numbers, or members of R, and not
    on the set R.


    I think what you meant to say was that in imperative languages, you
    define functions to act on particular types (the types of the
    parameters), while in functional programming you don't give the types
    of the parameters so they operate on "anything".

    No, that would be untyped. Most FPLs are typed, I hope.

    No, it would be generic programming. Generic programming is not the
    same as untyped programming - in generic programming your functions are defined to work with many types but each /use/ of the function is on
    values of specific types. A fair bit of functional programming is
    generic, but some of it is type-specific - you can have both (just as
    you can in many languages).


    This is, of course, wrong in both senses.  More advanced imperative
    languages let you define functions that operate on many types - they are
    known as template functions in C++, generics in Ada, and similarly in
    other languages.

    That is acting on a class. Class is a set of types, e.g. built by
    derivation closure in OO or ad-hoc as "any macro expansion the compiler would swallow" in the case of templates... (:-))


    Again - no. While the definition of "class" (and even "type") varies
    between languages, classes are not "sets of types". (And again, you are mixing up "acting on something" with "acting on instances of
    something".) What you are describing there is a type hierarchy in a
    language with nominal subtyping - basically, the way class inheritance
    works in C++ and Java. That is not the only way to do object-oriented
    typing in a language.

    And in functional programming languages you can specify types as
    precisely or generically as you want.

    I don't see why an FPL could not include some "non-functional" core.
    Like C++ contains non-OO stuff borrowed from C. But talking about
    advantages and disadvantages of a given paradigm we must discuss the new
    stuff. Old stuff usually indicates incompleteness of the paradigm. E.g.
    C++, Java, Ada are nowhere close to being 100% OO. On the contrary, they
    are non-OO most of the time.

    Smalltalk is one of the few languages that could be considered entirely object-oriented.

    But yes, many successful programming languages are multi-paradigm, and
    support different ways of coding. C++ and Python support many different
    ways, including functional, object-oriented, generic and procedural coding.
    OCaml is primarily a functional programming language but it also
    supports imperative coding. (It is also object oriented, but as I've
    said before that is orthogonal to functional/imperative choices.)

    Supporting multiple types of coding makes a language more flexible. But
    it also comes at a cost. C++'s support for function overloading and functional coding means there is no longer the near one-to-one
    relationship between source code and assembly code that is so useful in
    C for low-level debugging. OCaml's support for modifiable variables
    means you don't get the level of correctness provability or thread
    safety found in purer functional programming languages.


    You like a programming language where you can understand a near
    one-to-one correspondence between the source language and the
    generated assembly.  Fair enough - that's a valid and reasonable
    preference.

    Much weaker than that. I want to be able to recognize complexity and
    algorithms and have limited effects on the computational environment.
    Basically, small things must look small and big things big. If there is
    a loop or recursion I'd like to be aware of these. I also like to have "uninteresting" details hidden. But I want sufficient stuff like memory management, blocking etc exposed.


    You are still drawing an arbitrary line and saying "I like stuff on this
    side, I don't want anything to do with stuff on the other side". Again, there's nothing wrong with that - I think everyone does this.
    (Personally, I work with a very wide range of programming. The line I
    use for small embedded systems is in a very different place from the one
    I use for PC programming.)

    What is /not/ fine is to take your line and say "The stuff on this side
    is good - it is good engineering and programming, letting people write
    clear and maintainable code. The stuff on the other side is
    incomprehensible, impractical nonsense that doesn't work."

    (I've been through this with others - I really cannot understand how
    some people get so narrow-minded and insular that they believe anything different from their familiar little world is /wrong/.)


    I have nothing against preferences.  I just don't understand how
    people can dismiss other options as impractical, useless, unintuitive,
    impossible to use, or whatever, simply because those other languages
    are not a style that they are familiar with or not a style they like.

    You get me wrong. It is not about style, though that is often more
    important than anything else. My concern is with the paradigm as a whole.
    Functional programming languages have supported generic programming
    and type inference for a lot longer than most imperative languages,
    but those are both standard for any serious modern imperative
    language (in C++ you have templates and "auto", in other languages
    you have similar features).

    Type inference is a separate, and very controversial issue.

    It is only as controversial as any other feature where you have a
    trade-off between implicit and explicit information.  It is not really
    any more controversial than the implicit conversions found in many
    languages and user types.

    Yes, and there should be none in a well-typed program.


    Nonsense.

    And it is not a separate issue - unless you are using a dynamic
    language that supports objects of any type in any place (with run-time
    checking when the objects are used), type inference is essential to
    how generic programming works.

    No, you can have static polymorphism without inference. You possibly
    mean some evil automatic instantiations of generics instead, like in
    C++. I don't want automatic instantiation either.


    You cannot have generic programming without type inference - that's what
    I said, and that's what I mean. What you /want/, or what you personally choose to use, is irrelevant.

        template <class T>
        T add_one(T x) { return x + 1; }

        int foo(int x) {
            return add_one(x);
        }

    Type inference is how the compiler calls the right "add_one" instance
    and how it determines what the type "T" actually is in the concrete
    function call.

    Type inference is not magic - compilers already have to figure out the
    type of object instances in order to do type checking, implicit
    conversions, and the like. gcc's "typeof" operator exposes it as a
    simple extension to C (and it will perhaps become part of C23).

    Using it too much in a language like C++ can make code harder to
    understand - when you have "auto x = bar();", anyone reading the code
    needs to make more of an effort to find the real type of "x". But if
    the types involved are complicated, then it makes code much clearer,
    more flexible and maintainable, and far more likely to be correct. It
    is a sharp tool, and must be used responsibly.
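
    As a rough illustration of that trade-off (standard C++; the names are
    only for the example):

        #include <map>
        #include <string>
        #include <vector>

        std::map<std::string, std::vector<int>> table;

        // Explicit type: correct, but long and easy to get subtly wrong.
        std::map<std::string, std::vector<int>>::const_iterator it1 = table.find("key");

        // Type inference: the compiler already knows the type of
        // table.find("key"), so "auto" simply reuses it.
        auto it2 = table.find("key");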

    Conversely, all functions in all languages operate on some types
    and nothing else.

    No, in OOPL a method operates on the class. A "free function" takes
    some arguments of unrelated types.

    Methods in OOPL (or what most people think of as Object Oriented
    programming, such as C++, Java, Python, etc., rather than the
    original intention of OOP, which is now commonly called the "actors"
    paradigm) are syntactic sugar for a function with the class instance
    as the first parameter.

    Not really, because methods dispatch. A method acts on the class and
    its implementation consists of separate bodies, which is the core of
    OO decomposition as opposed to other paradigms.

    Methods do not act on classes - they act on instances of a class (plus
    any static members of the class).

    Rather on the closure of the class. Class is a set of types. Method acts
    on all values from all instances of the set.


    I am assuming you are not using the word "closure" in the sense normally
    used in programming. You mean a hierarchy of object types in a nominal
    typing system with inheritance. Methods act on instances of a class,
    and the same method may be able to act on instances of more than one
    class in the hierarchy. Any given invocation is on a specific instance
    of a specific class.

    Tying function code to method names or free function names is just
    naming syntax details, not an operation on the class or type itself.

    No, a free function does not dispatch if a class is involved. It has the
    same body, valid for values of all instances. A method selects a body
    according to the actual instance. The selected body could be composed
    out of several bodies like C++'s constructors and destructors, which
    are not methods, but one could imagine a language with that sort of
    composition of methods.


    Suppose you have a class "A" with a non-virtual method "foo", and a
    virtual method "bar". class "B" inherits from "A" and overrides both
    methods.

    A a;
    B b;
    A * ap;
    B * bp;

    a.foo(); // A_foo(&a)
    b.foo(); // B_foo(&b)

    ap = &a;
    ap->foo(); // A_foo(&a)

    ap = &b;
    ap->foo(); // A_foo(&b) - probably wrong

    bp = &a; // Compile-time error

    bp = &b;
    bp->foo(); // B_foo(&b)


    a.bar(); // A_bar(&a)
    b.bar(); // B_bar(&b)

    ap = &a;
    ap->bar(); // A_bar(&a) - dynamic dispatch

    ap = &b;
    ap->bar(); // B_bar(&b) - dynamic dispatch

    bp = &a; // Compile-time error

    bp = &b;
    bp->bar(); // B_bar(&b)


    For the non-virtual methods, you could have written :

    void foo(A* a) { A_foo(a); }
    void foo(B* b) { B_foo(b); }

    Now there is no difference between "x.foo();" and "foo(&x);".

    Method calls for non-virtual methods are just the same as free
    functions, but with a convenient syntax.


    The virtual methods are more interesting, since the dispatch is dynamic.
    This is handled by giving each instance of the classes a hidden
    pointer to a virtual method table. So now the free function "bar" is a
    bit more complicated:

    void bar(A* a) { a->__vt[index_of_bar](a); }

    (Multiple inheritance makes this messier, but the principle is the same.)
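
    A minimal compilable sketch of the same idea, if anyone wants to try it
    (the names here are invented, not from the snippet above):

        #include <iostream>

        struct A {
            void foo()          { std::cout << "A::foo\n"; }  // non-virtual
            virtual void bar()  { std::cout << "A::bar\n"; }  // virtual
        };

        struct B : A {
            void foo()          { std::cout << "B::foo\n"; }  // hides A::foo
            void bar() override { std::cout << "B::bar\n"; }  // overrides A::bar
        };

        int main() {
            B b;
            A* ap = &b;
            ap->foo();  // prints "A::foo" - chosen from the static type of ap
            ap->bar();  // prints "B::bar" - dynamic dispatch through the vtable
        }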






    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Wed Nov 23 11:44:25 2022
    From Newsgroup: comp.lang.misc

    On 22/11/2022 21:01, Andy Walker wrote:
    On 22/11/2022 15:24, Dmitry A. Kazakov wrote:
    On 2022-11-22 15:11, David Brown wrote:
    [...] It is not really any more controversial than the implicit
    conversions found in many languages and user types.
    Yes, and there should be none in a well-typed program.

        Virtually every computer language since time began has glossed
    over the distinction between constants and variables in simple arithmetic expressions [and in contexts such as parameters to functions].  You can
    use an implicit dereference or you can make things harder for programmers;
    I know which I prefer.

        Personally, in the interests of making programming easier, I'm in favour of more, not fewer, implicit conversions.  If these are well-chosen, you can [and should] avoid almost all explicit conversions.  I see little point in telling people that they can't use "sqrt(2)" but must write "sqrt(2.0)" [or "sqrt((real) i))"] instead;  can't write "print (x)" but must write "(void) print (x)" [as "print" happens to return the number of characters printed, which you happen not to care about on this occasion];
    or, for consistency, can't write "j := i" but must write "j := (deref) i" instead.  It's not as though in any of these constructs there is reason
    ever to expect the implicit coercions not to apply, so that some errors
    are missed.  But some people seem to prefer hair shirts.


    There are always trade-offs.

    int x;
    unsigned int y;

    What should be the result of "x + y" ? A signed int? An unsigned int?
    Whichever choice you made, you could get an overflow depending on the
    values of x and y, while the other choice would have been fine. Should
    both types be bumped up to a bigger size of integer so the result is
    always correct? Should it be an error and require an explicit
    conversion to say what the programmer actually wants?
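
    A small C++ illustration of why the choice matters (the values are just
    examples):

        #include <iostream>

        int main() {
            int x = -1;
            unsigned int y = 1;

            // Usual arithmetic conversions: x is converted to unsigned, so the
            // sum is computed modulo 2^N. Here it still comes out as 0, but for
            // other values one of the two plausible result types would overflow.
            std::cout << x + y << "\n";    // prints 0

            // The same conversion makes this false: -1 becomes a huge unsigned
            // value, so it is not less than 1.
            std::cout << (x < y) << "\n";  // prints 0
        }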

    If you write "x / 2", should you have implicit conversion to floating
    point to get a mathematically correct result?

    Sometimes I want the integer square root if I use a square root function
    on an integer - it is massively faster than floating point square root
    on small systems.
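
    For instance, an integer square root can be done with nothing but integer
    divisions (a sketch, not from any particular library):

        // Largest r with r*r <= n, found by binary search on r.
        // Divisions are used instead of r*r so nothing overflows.
        unsigned isqrt(unsigned n) {
            unsigned lo = 0, hi = n / 2 + 1;   // isqrt(n) <= n/2 + 1 for all n
            while (lo < hi) {
                unsigned mid = lo + (hi - lo + 1) / 2;
                if (mid <= n / mid)
                    lo = mid;
                else
                    hi = mid - 1;
            }
            return lo;
        }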

    Explicit conversions give the programmer more choices and more control,
    and avoid many types of error. But they also need more code, can
    distract from the "important bits" of the code, and can be too constraining.

    A good compromise in some cases is to support function overloads. You
    can define :

    double sqrt(double);

    and if integer square root is a useful feature for what you are doing,
    you can also define :

    int sqrt(int);

    and

    complex float sqrt(complex float);

    or whatever interests you.
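
    With such an overload set in scope, the argument type picks the overload,
    so the caller never needs an explicit conversion (a sketch; the "demo"
    namespace and the lazy integer version are mine, not a real library):

        #include <cmath>
        #include <complex>

        namespace demo {
            double sqrt(double x) { return std::sqrt(x); }
            int    sqrt(int n)    { return static_cast<int>(std::sqrt(double(n))); }
            std::complex<float> sqrt(std::complex<float> z) { return std::sqrt(z); }
        }

        int main() {
            auto a = demo::sqrt(2);      // overload resolution picks sqrt(int)
            auto b = demo::sqrt(2.0);    // picks sqrt(double)
            auto c = demo::sqrt(std::complex<float>(-1.0f, 0.0f));  // the complex one
            (void)a; (void)b; (void)c;
        }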

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Wed Nov 23 15:29:19 2022
    From Newsgroup: comp.lang.misc

    On 2022-11-23 11:32, David Brown wrote:
    On 22/11/2022 16:24, Dmitry A. Kazakov wrote:
    On 2022-11-22 15:11, David Brown wrote:

    Familiarity with these gives you a broader background than many
    programmers, but no insight or experience with functional programming.

    I don't pretend. I said I don't buy declarative approach and I don't
    buy first-class functions, however you shape them.


    OK, I suppose - but I hope you'll forgive me if I don't believe you have demonstrated enough experience or knowledge for your thoughts on
    functional programming languages to carry any weight beyond personal dislike.  (I won't argue with personal tastes, as long as it is clear
    that it is /your/ preferences for the programming /you/ do.)

    Just as with astrology, it is my preference not to have much
    experience with it. Consider it a personal dislike.

    [If you want to argue objective properties of one or another approach, you
    are welcome]

    Almost no language declares naked names. Most combine that with a
    specification of what these names are supposed to mean = how they behave.

    Again - you are mixing up specifications and declarations.

    Again, there is no difference. If no behavior is specified, no
    declaration is even needed. Even forward declarations specify some
    minimal behavior.

    "int foo(int x)" is a /declaration/.

    and a specification. E.g. it does not return double.

    It is all you need in C to be able
    to use the function to compile code, and similar declarations are all
    you need in most other languages.  It is /not/ a /specification/.  It
    says /nothing/ about what the function means, does, or how it behaves.

    It says which arguments and results it has and thus defines a call frame
    as well as things necessary for linking.

    No specification defines all behavior. That is the idea of having them.
    They can constrain behavior while allowing multiple implementations.
    Specifications can be written in a more powerful declarative language
    than the object language itself. E.g. some specifications might be
    non-implementable, yet far easier to write.

    /Please/ tell me you understand the difference!  We are not going to get far if you don't see that.

    There is no qualitative difference. Most specifications stop at being
    simply typed. The next step is specifications ensuring substitutability
    of subtypes. The next step is contracts on the results and so on.

    As a specification, you could say "function "do_twice" should take a function as an argument, and return a function that has the same effect
    as applying the original function twice".

    So, now you have an example of a higher level function and its
    specification - /and/ an implementation, /and/ a declaration.  Was that
    so difficult?

    Not a helpful contract as it says nothing about the result. E.g.
    consider an implementation of the sine function as an example.

    I am sorry, I really cannot understand what you are trying to say here.
     You seem to be mixing up types, variables and functions here.

    I thought you wanted to demonstrate the declaration of a variable with a
    function type. But what you actually showed was a higher-order function.

    Arguably, functional programming is often just a formalised and precise language for writing your specifications.  Functional programming is primarily concerned with what a function should /do/, and much less concerned about /how/ it should do it.

    Since there is nothing else, what = how. The program is what you see.
    What happens behind the scenes is the "hardware." You cannot have it
    both ways, which is the major argument against the declarative approach.

    For real functional code, you will often define functions in a way that gives the general direction for how it will work.  For example, a simple quicksort Haskell function could be :

        qs [] = []
        qs (x : xs) = (qs left) ++ [x] ++ (qs right)
            where
            left = filter (< x) xs
            right = filter (>= x) xs

    That could be considered a technical specification of a quicksort
    algorithm.

    No, it is a program. You can argue that some simple algorithms can be
    expressed in a very naive form. Yes, that is true, but it is worthless for
    programming, just like the school square root formula is.

    A more advanced Haskell implementation would use in-place
    sorting for efficiency.

    And that is a program too.

    The point is that if you take some really fancy functional stuff, it
    would be difficult or, maybe, useless to formally describe in some
    meta language of specifications.

    A function that cannot sensibly be described is of little use to
    anyone.   That is completely independent of the programming language
    paradigm. If no one can tell you what "foo" does, you can't use it.
    I cannot see any way in which imperative languages differ from
    declarative languages in that respect.

    That comment was on the first-class functions. If you have a
    sufficiently complex algebra generating functions, especially during
    run-time, it becomes difficult to describe the result in specifications.


    Is your dislike of functional programming really just that it is
    possible to write higher order functions that you can't understand or describe?

    No, my dislike stems from extensive use of patterns, which some consider
    a precursor of functional languages. These things become unmaintainable in
    the blink of an eye.

    People can write crap in any language.  Just have a look at <https://thedailywtf.com/> and you can see examples of indescribable
    code in every language you can imagine.

    And there is no reason to help them.

    The declarative approach is simply difficult to understand, and thus the
    code is difficult to reuse and maintain.

    It is okay to say "/I/ don't understand language X, so I don't use it".
    It is not okay to make general claims about what other people
    understand.  You can be sure that people who program in functional programming languages /do/ understand what they are doing, and /can/ maintain and reuse their code.

    People are using and enjoy even stranger things.

    Some languages have a reputation for being "write only", with Perl being
    the prime contender.  I've never heard that about any functional
    programming language - and I challenge you to back up your claims with
    references.  Fail that, and you are just another whiner who mocks things
    they don't understand rather than accept they don't know everything.

    I doubt there is any serious research on the subject of programming
    paradigms in the context of software engineering. Personal anecdotes are
    entertaining but of little interest.

    Another issue is treatment of types when each function is an
    operation on some types and nothing else.

    I can't understand what you mean.  Functional programming
    language functions are not operations on types.

    Yep.

    Oh, so you think functions in functional programming languages
    don't have types or act on types?

    It was you who said "Functional programming language functions are
    not operations on types." I only agreed with you.

    I think I see the source of confusion.  When you wrote "each function
    is an operation on some types", you mean functions that operate on
    /instances/ of certain types.  There is a vast difference between
    operating on /instances/, and operating on /types/.

    I did not mean first-class type objects (types of types). They raise
    the same objections as first-class procedural objects (types of
    subprograms). It would surely be interesting as academic research
    and a nightmare in production code.

    Acting on a type means being a function of the type domain. E.g. sine
    acts on R. R is a type ("field" etc).

    So when you write "act on a type", you mean "act on an instance of a
    type" - just like "sine" acts on real numbers, or members of R, and not
    on the set R.

    Not sure.

    instance of a type = value of the type
    real numbers = members of real
    set R is ambiguous.

    Usually the set R presumes real numbers rather than just a bunch of real
    values. E.g. you assume the elements of the set have some structure that
    makes them real numbers. Basically that is the difference between a type
    and a set of its values. A type includes operations acting on the type =
    taking some values of the type as arguments and/or returning results of it.

    I think what you meant to say was that in imperative languages, you
    define functions to act on particular types (the types of the
    parameters), while in functional programming you don't give the types
    of the parameters so they operate on "anything".

    No, that would be untyped. Most FPLs are typed, I hope.

    No, it would be generic programming.  Generic programming is not the
    same as untyped programming - in generic programming your functions are defined to work with many types but each /use/ of the function is on
    values of specific types.

    That is irrelevant. Typing applies to the operation itself (to the
    "macro"), not to the effect of it (the "macro expansion"). In the case of
    generics, C++ templates are untyped as you can use anything for the
    generic parameter; it is basically textual substitution. Ada generics
    are mostly weakly typed. But that is how generics are and why there
    should be none.

    A fair bit of functional programming is
    generic, but some of it is type-specific - you can have both (just as
    you can in many languages).

    If you mean higher-level functions, then, if structural equivalence is
    used, they are likely weakly typed to untyped.

    This is, of course, wrong in both senses.  More advanced imperative
    languages let you define functions that operate on many types - they are
    known as template functions in C++, generics in Ada, and similarly in
    other languages.

    That is acting on a class. Class is a set of types, e.g. built by
    derivation closure in OO or ad-hoc as "any macro expansion the
    compiler would swallow" in the case of templates... (:-))


    Again - no.  While the definition of "class" (and even "type") varies between languages, classes are not "sets of types".  (And again, you are mixing up "acting on something" with "acting on instances of
    something".)  What you are describing there is a type hierarchy in a language with nominal subtyping - basically, the way class inheritance
    works in C++ and Java.  That is not the only way to do object-oriented typing in a language.

    No, it is not limited to dynamic polymorphism. A class can be ad-hoc. E.g.
    types from generic instances form an ad-hoc generic class. A class can be
    implicit, e.g. all integer types in C form a class. The difference is
    what you can do with values from the class, e.g. define operations
    *acting* on the class. You cannot do that with C's class of integers.
    You can do that at compile time only with generics. You can do that
    without limit with OO classes since they have run-time values.

    And in functional programming languages you can specify types as
    precisely or generically as you want.

    I don't see why an FPL could not include some "non-functional" core.
    Like C++ contains non-OO stuff borrowed from C. But talking about
    advantages and disadvantages of a given paradigm we must discuss the
    new stuff. Old stuff usually indicates incompleteness of the paradigm.
    E.g. C++, Java, Ada are nowhere close to being 100% OO. On the contrary,
    they are non-OO most of the time.

    Smalltalk is one of the few languages that could be considered entirely object-oriented.

    No, it is not. An entirely OO language would have classes for all
    first-class types. E.g. you could derive from Boolean and override "and".
    Then you could have a variable whose specific type is determined at
    run-time to be Boolean or the derived type.

    What is /not/ fine is to take your line and say "The stuff on this side
    is good - it is good engineering and programming, letting people write
    clear and maintainable code.  The stuff on the other side is incomprehensible, impractical nonsense that doesn't work."

    Your argument is that no objective criteria exist. I disagree.

    (I've been through this with others - I really cannot understand how
    some people get so narrow-minded and insular that they believe anything different from their familiar little world is /wrong/.)

    Sometimes it is called experience. In other cases - prudence.

    And it is not a separate issue - unless you are using a dynamic
    language that supports objects of any type in any place (with
    run-time checking when the objects are used), type inference is
    essential to how generic programming works.

    No, you can have static polymorphism without inference. You possibly
    mean some evil automatic instantiations of generics instead, like in
    C++. I don't want automatic instantiation either.

    You cannot have generic programming without type inference - that's what
    I said, and that's what I mean.  What you /want/, or what you personally choose to use, is irrelevant.

        template <class T>
        T add_one(T x) { return x + 1; }

        int foo(int x) {
            return add_one(x);
        }

    Type inference is how the compiler calls the right "add_one" instance
    and how it determines what the type "T" actually is in the concrete
    function call.

    Of course I can. Here is an example without any type inference:

    --
    -- Specification
    --
    generic
       type T is range <>;
    function Add_One (X : T) return T;
    --
    -- Implementation
    --
    function Add_One (X : T) return T is
    begin
       return X + 1;
    end Add_One;
    --
    -- Instantiation, note instantiations are explicit in Ada
    --
    function Integer_Add_One is new Add_One (Integer);
    --
    -- Usage
    --
    function Foo (X : Integer) return Integer is
    begin
       return Integer_Add_One (X);
    end Foo;

    Using it too much in a language like C++ can make code harder to
    understand - when you have "auto x = bar();", anyone reading the code
    needs to make more of an effort to find the real type of "x".  But if
    the types involved are complicated, then it makes code much clearer,

    Here is a contradiction. If you cannot figure out the type, how is it
    clearer?

    more flexible and maintainable, and far more likely to be correct.

    No, you cannot make it incorrect by attributing a type. You can make it illegal. So, it could be less fragile.

    Yet there is always a danger that if you generalize things too much, then
    a lot of unexpected stuff might become inferable where you do not expect
    it at all. A manifest declaration is a check against a "klugscheisser" compiler.

    It is a sharp tool, and must be used responsibly.

    I prefer a design where you would not need to infer the type, but also
    not forced to specify obvious types. It is a very fine balance.

    Conversely, all functions in all languages operate on some types
    and nothing else.

    No, in OOPL a method operates on the class. A "free function"
    takes some arguments of unrelated types.

    Methods in OOPL (or what most people think of as Object Oriented
    programming, such as C++, Java, Python, etc., rather than the
    original intention of OOP, which is now commonly called the "actors"
    paradigm) are syntactic sugar for a function with the class
    instance as the first parameter.

    Not really, because methods dispatch. A method acts on the class and
    its implementation consists of separate bodies, which is the core of
    OO decomposition as opposed to other paradigms.

    Methods do not act on classes - they act on instances of a class
    (plus any static members of the class).

    Rather on the closure of the class. Class is a set of types. Method
    acts on all values from all instances of the set.


    I am assuming you are not using the word "closure" in the sense normally used in programming.

    I use closure in mathematical sense. All values of all types from the
    class. In the case of OO it is the values of the root type and values of
    all types derived from it.

    You mean a hierarchy of object types in a nominal
    typing system with inheritance.  Methods act on instances of a class,

    Rather, type-specific implementations do; a method is a combination of
    these implementations, so the method acts on the whole class (closure)
    using dispatch to the specific implementation. That's the idea of
    dynamic polymorphism: having operations acting on multiple types. E.g. a
    virtual function in C++ is a method. It can be called on the base type
    (C++ names it "class") and any type derived from it.

    and the same method may be able to act on instances of more than one
    class in the hierarchy.

    A type-specific implementation can be inherited by composing the
    original implementation with some type/references conversions.

    Any given invocation is on a specific instance
    of a specific class.

    When the language is properly typed that applies to all subprograms anyway.

    Now there is no difference between "x.foo();" and "foo(&x);".

    Method calls for non-virtual methods are just the same as free
    functions, but with a convenient syntax.

    Right. Non-virtual "methods" are not methods, they are just
    free functions. It is C++'s unfortunate choice to bind dispatch to the
    first argument and dot-notation. There is no reason for that. It bars
    C++ from multiple dispatch.

    BTW, there have been some proposals to introduce multiple dispatch in C++.
    However, that would throw away many cherished C++ idiosyncrasies:
    hidden arguments, method declarations nested inside class declarations,
    prefix notation.
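
    For reference, the usual C++ workaround is double dispatch - two chained
    virtual calls (a sketch with invented classes, not one of the actual
    proposals):

        #include <iostream>

        struct Circle;
        struct Square;

        struct Shape {
            virtual ~Shape() = default;
            virtual void collide(const Shape& other) const = 0;   // 1st dispatch
            virtual void collideWith(const Circle&) const = 0;    // 2nd dispatch
            virtual void collideWith(const Square&) const = 0;
        };

        struct Circle : Shape {
            void collide(const Shape& other) const override { other.collideWith(*this); }
            void collideWith(const Circle&) const override { std::cout << "Circle x Circle\n"; }
            void collideWith(const Square&) const override { std::cout << "Square x Circle\n"; }
        };

        struct Square : Shape {
            void collide(const Shape& other) const override { other.collideWith(*this); }
            void collideWith(const Circle&) const override { std::cout << "Circle x Square\n"; }
            void collideWith(const Square&) const override { std::cout << "Square x Square\n"; }
        };

        int main() {
            Circle c; Square s;
            const Shape& a = c;
            const Shape& b = s;
            a.collide(b);   // prints "Circle x Square" - chosen on both dynamic types
        }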

    The virtual methods are more interesting, since the dispatch is dynamic.
     This is handled by giving each instance of the classes a hidden
    pointer to a virtual method table.  So now the free function "bar" is a
    bit more complicated:

        void bar(A* a) { a->__vt[index_of_bar](a); }

    Embedding the type tag (e.g. vptr in C++) is not a requirement. You can
    have classes with no embedded tags. If you wanted a fully OO language
    where even int could have a class, such objects must have no type tag in
    their representation. That would be impossible for C++, which conflates
    class-wide types and specific types, but possible in Ada, which
    distinguishes them.

    (Multiple inheritance makes this messier, but the principle is the same.)

    Multiple dispatch is far messier as the dispatching table becomes
    multi-dimensional.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Wed Nov 23 19:11:23 2022
    From Newsgroup: comp.lang.misc

    On 22/11/2022 20:26, Dmitry A. Kazakov wrote:
    [I wrote:]
         Personally, in the interests of making programming easier, I'm in
    favour of more, not fewer, implicit conversions.
    If you want no conversions you must state that the value is of a subtype.

    Was some part of "more, not fewer" causing difficulties?

    If these are well-chosen,
    By the programmer, see above.

    The whole point of implicit conversions is that they take place
    with no overt action needed by the programmer. Of course, that places a responsibility on the language design to make such conversions safe, so
    that you can't normally stumble into one that gives unexpected results.

    you can [and should] avoid almost all explicit conversions.  I see little
    point in telling people that they can't use "sqrt(2)" but must write
    "sqrt(2.0)"
    They could, should 2 be a subtype of Float (or whatever). There is no
    reason why this must be determined by the language. Technically, it
    would introduce a lot of ambiguities and thus make programming more
    difficult because the programmer must then use qualified expressions
    to disambiguate.

    If, as is commonly the case, "sqrt" is a function taking a real
    [or "float"] parameter and there is no competing "sqrt", then there is
    no plausible ambiguity. Either "2" [or "i + 1" or whatever] can be
    implicitly converted to type "real", in which case that is, beyond
    reasonable doubt, what the programmer intended, or it cannot, in which
    case it's a compile-time error. For the specific case of "int -> real",
    there is no sensible competing meaning; for cases such as "bool -> int"
    or "char -> int", or "pointer to pointer to int -> real", there could be,
    and that is then up to the language designer. If conversions are not
    implicit, then they /must/ be provided by the programmer, which is always
    at least as difficult as /sometimes/ being needed [as it can do no harm
    to supply a conversion which is also the default behaviour]. Somehow,
    many languages manage to define implicit conversions without the raft
    of difficulties you imply.

    or, for consistency, can't write "j := i" but must write "j := (deref) i"
    instead.
    Surely a pointer type can be a subtype of the target type. It is so
    in Ada for array index, record member, function call. Not for
    assignment, though. But again, that must be up to the programmer. The
    type system must provide ad-hoc subtypes.

    If the assignment [in the context of "int i, j;" with the usual meaning] "j := 2;" is legal and also "j := i;" is legal, then since "2"
    and "i" are not interchangeable in general [you can't write "2 := j;",
    for example], there must be an implicit conversion [eg, from "integer
    variable" to "integer"] taking place. You can call it "ad hoc" if you
    like, but it's much more convenient than having to make that conversion explicit, which is what you seem to favour.

    The whole point of HLLs is to make programming easier, not to
    enforce some strict purity regime. Sometimes, the easier way is also
    the purer way; if not, then the easier way is better. Note that
    "easier" includes issues such as bug detection, esp at compile time,
    safety, regularity of design, and so on; I'm not advocating slapdash programming or language design.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Marpurg
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Wed Nov 23 20:52:33 2022
    From Newsgroup: comp.lang.misc

    On 2022-11-23 20:11, Andy Walker wrote:
    On 22/11/2022 20:26, Dmitry A. Kazakov wrote:

    If these are well-chosen,
    By the programmer, see above.

        The whole point of implicit conversions is that they take place
    with no overt action needed by the programmer.

    This is what I meant. You declare that A is a subtype of B (inheriting in-
    or out- or all operations) once. Then you enjoy the effect distributed
    everywhere that declaration has effect.

    Of course, that places a
    responsibility on the language design to make such conversions safe, so
    that you can't normally stumble into one that gives unexpected results.

    These conversions are fundamentally unsafe because different types are,
    well, different. If they weren't, then one type would suffice and there
    would be no need for conversions.

    you can [and should] avoid almost all explicit conversions.  I see little
    point in telling people that they can't use "sqrt(2)" but must write
    "sqrt(2.0)"
    They could, should 2 be a subtype of Float (or whatever). There is no
    reason why this must be determined by the language. Technically, it
    would introduce a lot of ambiguities and thus make programming more
    difficult because the programmer must then use qualified expressions
    to disambiguate.

        If, as is commonly the case, "sqrt" is a function taking a real
    [or "float"] parameter and there is no competing "sqrt", then there is
    no plausible ambiguity.

    It could be integer-valued or complex-valued sqrt. Or a sqrt returning a lesser precision type etc.

    Either "2" [or "i + 1" or whatever] can be
    implicitly converted to type "real", in which case that is, beyond
    reasonable doubt, what the programmer intended, or it cannot, in which
    case it's a compile-time error.

    Are you talking about a type error here, or something else?

    Anyway, there could be many floating-point, fixed-point, complex types
    around. Imagine it as a disjoint graph in a strongly typed language.
    Conversions introduce paths into the graph. Paths get connected and
    suddenly you have a dish full of spaghetti.

     Somehow,
    many languages manage to define implicit conversions without the raft
    of difficulties you imply.

    With a rudimentary type system like that of C or PL/1 you possibly could
    do that and enjoy the resulting mess. But in the presence of user-defined
    types, you cannot do that. So, leave that to the programmer, who hopefully
    knows what he is doing.

    or, for consistency, can't write "j := i" but must write "j :=
    (deref) i"
    instead.
    Surely a pointer type can be a subtype of the target type. It is so
    in Ada for array index, record member, function call. Not for
    assignment, though. But again, that must be up to the programmer. The
    type system must provide ad-hoc subtypes.

        If the assignment [in the context of "int i, j;" with the usual meaning] "j := 2;" is legal and also "j := i;" is legal, then since "2"
    and "i" are not interchangeable in general [you can't write "2 := j;",
    for example], there must be an implicit conversion [eg, from "integer variable" to "integer"] taking place.

    That does not compute. 2 := j; is illegal merely because 2 (after
    overload resolution to int) is a constant while the first argument of
    := must be mutable.

        The whole point of HLLs is to make programming easier, not to enforce some strict purity regime.

    Sure. The disagreement is about achieving that ease.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Fri Nov 25 11:28:20 2022
    From Newsgroup: comp.lang.misc

    On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
    [implicit conversions:]
    By the programmer, see above.
         The whole point of implicit conversions is that they take place
    with no overt action needed by the programmer.
    This is what I meant. You declare that A is a subtype B (inheriting
    in- or out- or all operations) once. Then you enjoy the effect
    distributed everywhere that declaration has effect.

    If you have to declare that "int" is a subtype of "real", then it's
    not "implicit"; overt action by the programmer is needed. Further, ...

    Of course, that places a
    responsibility on the language design to make such conversions safe, so
    that you can't normally stumble into one that gives unexpected results.
    These conversions are fundamentally unsafe because different types
    are, well, different. If they weren't then one type would suffice and
    there would be no need in conversions.

    ..., allowing such /overt/ action means that the compiler has to
    check whether the declaration is sensible [I surely can't declare that
    "real" is a sub-type of "int"] and also has to check for ambiguities and
    other safety issues. The point of implicit "int -> real" conversion, for example, is that the /language/ already knows about it, and knows, eg,
    that in contexts requiring a "real", an "int" may safely be supplied
    instead. The types are different, but the conversion is safe.

    Note 1: There is no implication that the conversion is /always/ applied
    regardless of context; there is a pre-condition that the compiler knows
    that "real" is required. Eg, "print (2)" should output "2", not "2.00".
    Note 2: The conversion does not extend to other types, such as variables;
    if a real variable is required, you can't supply an integer variable.

    you can [and should] avoid almost all explicit conversions.  I see little
    point in telling people that they can't use "sqrt(2)" but must write
    "sqrt(2.0)"
    They could, should 2 be a subtype of Float (or whatever). There is no
    reason why this must be determined by the language. Technically, it
    would introduce a lot of ambiguities and thus make programming more
    difficult because the programmer must then use qualified expressions
    to disambiguate.

    The whole point of it being determined by the language is that the language knows that there is no ambiguity in "int -> real" /in contexts
    where "real" is expected/. In most languages, such contexts are well- understood by the compiler and described by the language standard.

         If, as is commonly the case, "sqrt" is a function taking a real
    [or "float"] parameter and there is no competing "sqrt", then there is
    no plausible ambiguity.
    It could be integer-valued or complex-valued sqrt. Or a sqrt
    returning a lesser precision type etc.

    It could indeed. But the compiler knows the signature of whichever "sqrt" is in scope, and therefore still knows that a "real" [or "complex"
    or "long long real" or ...] parameter is needed, and so can arrange to
    convert the supplied "int" with no action by the programmer.

    Either "2" [or "i + 1" or whatever] can be
    implicitly converted to type "real", in which case that is, beyond
    reasonable doubt, what the programmer intended, or it cannot, in which
    case it's a compile-time error.
    You are talking about type-error here or something else?

    I'm talking about a hypothetical language in which "int" is not reliably a sub-type of "real", in which case "x := 2" is indeed a type
    error. That particular case is, IRL, unlikely, but conversions such as
    "char -> int", "X -> array of X with a single element", ... are more problematic [possible implicitly in some languages, not in others].

    Anyway, there could be many floating-point, fixed-point, complex types
    around. Imagine it as a disjoint graph in a strongly typed language.
    Conversions introduce paths into the graph. Paths get connected and
    suddenly you have a dish full of spaghetti.

    That's why the language has to rule on which such paths are
    possible, and in what circumstances. The objective should always be
    to make life easier for the programmer, not to pile complexity on top
    of complexity. You can't expect engineers, physicists, biologists,
    ... to understand arcane rules of programming that make sense only to
    CS professionals and a sub-set of mathematicians.

    [...]
         If the assignment [in the context of "int i, j;" with the usual
    meaning] "j := 2;" is legal and also "j := i;" is legal, then since "2"
    and "i" are not interchangeable in general [you can't write "2 := j;",
    for example], there must be an implicit conversion [eg, from "integer
    variable" to "integer"] taking place.
    That does not compute. 2 := j; is illegal merely because 2 (after
    overload resolution to int) is a constant while the first argument
    of := must be mutable.

    Yes. That shows that "2" and "j" have different types. Which
    shows, in turn, that in places where you can write "2" /or/ "j" with
    the same meaning, an implicit conversion must have taken place. If
    you look at the compiled code for "i := 2" and "i := j", you will find
    that there is a difference.

         The whole point of HLLs is to make programming easier, not to
    enforce some strict purity regime.
    Sure. The disagreement is about achieving that ease.

    Yes. I would suggest that the evidence of code snippets posted
    here in relation to Ada and C shows that neither is even close. Nor is,
    for example, Pascal. Some dialects of Basic are much better, at least
    within a limited class of problems. In the interests of /not/ starting
    "holier than thou" language wars, I shall say no more.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Handel
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Fri Nov 25 16:46:00 2022
    From Newsgroup: comp.lang.misc

    On 2022-11-25 12:28, Andy Walker wrote:
    On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
    [implicit conversions:]
    By the programmer, see above.
         The whole point of implicit conversions is that they take place
    with no overt action needed by the programmer.
    This is what I meant. You declare that A is a subtype B (inheriting
    in- or out- or all operations) once. Then you enjoy the effect
    distributed everywhere that declaration has effect.

        If you have to declare that "int" is a subtype of "real", then it's not "implicit";  overt action by the programmer is needed.  Further, ...

    The same is true when the programmer's actions are needed to declare X int
    and Y real. What is implicit here is the conversion in X + Y, not the
    declarations.

    The difference is that by default int and real are unrelated types. The
    programmer could explicitly put them in some common class, e.g. a class
    of additive types having a + operation. That would make + an operation
    from the class (int, real). The implementation of the now-valid cross
    operation:

    + : int : real -> real

    could be created by composing real + with int-to-real conversion
    (provided by the programmer). The full dispatching table could be

    + : int : int -> int
    + : int : real -> real
    + : real : int -> real

    This is the mechanics the language can provide in order to move the
    nasty stuff out of its core.
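
    Hand-written, the same table is just an overload set plus one
    programmer-supplied conversion (a C++ sketch; "add" stands in for +,
    since operators on the built-in types cannot be overloaded):

        double to_real(int i) { return static_cast<double>(i); }   // the conversion

        int    add(int a, int b)       { return a + b; }           // + : int  : int  -> int
        double add(int a, double b)    { return to_real(a) + b; }  // + : int  : real -> real
        double add(double a, int b)    { return a + to_real(b); }  // + : real : int  -> real
        double add(double a, double b) { return a + b; }           // + : real : real -> real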

    BTW, people like James would surely ask for

    + : real : real -> int

    for the cases when the result is a whole integer. But we won't let them!
    (:-))

    Of course, that places a
    responsibility on the language design to make such conversions safe, so
    that you can't normally stumble into one that gives unexpected results.
    These conversions are fundamentally unsafe because different types
    are, well, different. If they weren't, then one type would suffice and
    there would be no need for conversions.

        ..., allowing such /overt/ action means that the compiler has to check whether the declaration is sensible [I surely can't declare that
    "real" is a sub-type of "int"] and also has to check for ambiguities and other safety issues.  The point of implicit "int -> real" conversion, for example, is that the /language/ already knows about it, and knows, eg,
    that in contexts requiring a "real", an "int" may safely be supplied instead.  The types are different, but the conversion is safe.

    The result would be a very rigid, over-specified language. E.g. what if
    the programmer wanted to implement a custom floating-point type? How would
    the compiler know that this type must enjoy implicit conversions to int?

    Note 1:  There is no implication that the conversion is /always/ applied
      regardless of context;  there is a pre-condition that the compiler knows
      that "real" is required.  Eg, "print (2)" should output "2", not "2.00".

    In C++ it is called dominance rules, a big can of nasty worms, BTW.

    Note 2:  The conversion does not extend to other types, such as variables;
      if a real variable is required, you can't supply an integer variable.

    You probably mean some algebraic types built upon it, like a
    pointer to real. It is not so obvious. You might want some operations of
    such types to enjoy conversions as well. For example, array indexing,
    record members etc:

    Real_Array (I) := 2; -- No?
    Real_Array (1..3) := (1, 2, 3); -- No?

    etc. IMO, the cleanest way is to have this stuff properly typed (=no conversions at all) and achieve desired effects through inter-type relationships designed by the programmer.

    you can [and should] avoid almost all explicit conversions.  I see little
    point in telling people that they can't use "sqrt(2)" but must write
    "sqrt(2.0)"
    They could, should 2 be a subtype of Float (or whatever). There is no
    reason why this must be determined by the language. Technically, it
    would introduce a lot of ambiguities and thus make programming more
    difficult because the programmer must then use qualified expressions
    to disambiguate.

        The whole point of it being determined by the language is that the language knows that there is no ambiguity in "int -> real" /in contexts
    where "real" is expected/.  In most languages, such contexts are well- understood by the compiler and described by the language standard.

    I doubt one could formulate such rules consistently without introducing
    ambiguities. Ada, which, differently from bottom-up languages, indeed
    considers the context when resolving types, still has cases where types
    must be disambiguated using type qualifiers. And that is already without
    integer to real conversions!

         If, as is commonly the case, "sqrt" is a function taking a real
    [or "float"] parameter and there is no competing "sqrt", then there is
    no plausible ambiguity.
    It could be an integer-valued or complex-valued sqrt. Or a sqrt
    returning a lesser-precision type, etc.

        It could indeed.  But the compiler knows the signature of whichever "sqrt" is in scope, and therefore still knows that a "real" [or "complex"
    or "long long real" or ...] parameter is needed, and so can arrange to convert the supplied "int" with no action by the programmer.

    The typical case in Ada is that you have competing resolutions. E.g. you
    have something like

    print(sqrt(2))

    There are lots of prints there, accepting all possible sqrts.

    It requires a lot of fine tuning to make such things usable. You cannot
    do that at the language level, IMO.
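
    Sketched in Ada (the declarations are purely illustrative: the renamings
    stand in for "all possible sqrts" and the null Print bodies for the many
    prints), the competing resolutions look like this, even with a real
    literal and before any integer-to-real conversion enters the picture, and
    a qualified expression is what resolves them:

       with Ada.Numerics.Elementary_Functions;
       with Ada.Numerics.Long_Elementary_Functions;
       procedure Demo is
          function Sqrt (X : Float) return Float
             renames Ada.Numerics.Elementary_Functions.Sqrt;
          function Sqrt (X : Long_Float) return Long_Float
             renames Ada.Numerics.Long_Elementary_Functions.Sqrt;

          procedure Print (X : Float) is null;       --  stand-ins for the
          procedure Print (X : Long_Float) is null;  --  overloaded prints
       begin
          --  Print (Sqrt (2.0));              --  rejected: ambiguous
          Print (Float'(Sqrt (2.0)));          --  OK: qualified expression
          Print (Long_Float'(Sqrt (2.0)));     --  OK
       end Demo;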

    Either "2" [or "i + 1" or whatever] can be
    implicitly converted to type "real", in which case that is, beyond
    reasonable doubt, what the programmer intended, or it cannot, in which
    case it's a compile-time error.
    Are you talking about a type error here, or something else?

        I'm talking about a hypothetical language in which "int" is not reliably a sub-type of "real", in which case "x := 2" is indeed a type error.  That particular case is, IRL, unlikely, but conversions such as "char -> int", "X -> array of X with a single element", ... are more problematic [possible implicitly in some languages, not in others].

    Yes. Ada, for example, has a special rule for array aggregates that
    requires keyed notation for single element arrays. The reason is to disambiguate

    (100)

    When it is an array aggregate, it must be keyed

    (1 => 100)

    while

    (100,200)

    is OK to be positional.
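
    Spelled out as compilable Ada (the type and object names here are mine):

       type Real_Array is array (Positive range <>) of Float;

       A : Real_Array := (1 => 100.0);     --  one element: keyed form required
       B : Real_Array := (100.0, 200.0);   --  two or more: positional is fine
       --  C : Real_Array := (100.0);      --  rejected: (100.0) is merely a
       --                                      parenthesised Float expression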

    Anyway, there could be many floating-point, fixed-point, complex types
    around. Imagine them as a disjoint graph in a strongly typed language.
    Conversions introduce paths into that graph. Paths get connected and
    suddenly you have a dish full of spaghetti.

        That's why the language has to rule on which such paths are possible, and in what circumstances.  The objective should always be
    to make life easier for the programmer, not to pile complexity on top
    of complexity.  You can't expect engineers, physicists, biologists,
    ... to understand arcane rules of programming that make sense only to
    CS professionals and a sub-set of mathematicians.

    Usually a language has subsets of different complexity for different
    audiences. Designing a reusable module with types convertible to each
    other is not a job for an engineer; he would be an end-user in this case.

    [...]
         If the assignment [in the context of "int i, j;" with the usual
    meaning] "j := 2;" is legal and also "j := i;" is legal, then since "2"
    and "i" are not interchangeable in general [you can't write "2 := j;",
    for example], there must be an implicit conversion [eg, from "integer
    variable" to "integer"] taking place.
    That does not compute. 2 := j; is illegal merely because 2 (after
    overloading resolution to int) is a constant while the first argument
    of := must be mutable.

        Yes.  That shows that "2" and "j" have different types.

    Different, yet related.

    Which
    shows, in turn, that in places where you can write "2" /or/ "j" with
    the same meaning, an implicit conversion must have taken place.

    Right, because of the subtyping relation, though the conversion would
    likely be a no-op since both would have the same representation.

    If
    you look at the compiled code for "i := 2" and "i := j", you will find
    that there is a difference.

    Yes, one could play with different representations and constant
    folding. But that does not influence the semantics, which says that
    the set of values of both types is the same. The sets of operations differ
    and the representations may differ too.

         The whole point of HLLs is to make programming easier, not to
    enforce some strict purity regime.
    Sure. The disagreement is about achieving that ease.

        Yes.  I would suggest that the evidence of code snippets posted here in relation to Ada and C shows that neither is even close.  Nor is,
    for example, Pascal.  Some dialects of Basic are much better, at least within a limited class of problems.  In the interests of /not/ starting "holier than thou" language wars, I shall say no more.

    My take on this is that it is not possible to do at the language level.
    I think that the language must be much simpler than even Ada, which is
    four or so times smaller than modern C++. Yet it must be powerful enough
    to express ideas like implicit conversions at the library level.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Sun Nov 27 23:28:19 2022
    From Newsgroup: comp.lang.misc

    On 25/11/2022 15:46, Dmitry A. Kazakov wrote:
    [I wrote:]
         If you have to declare that "int" is a subtype of "real", then it's
    not "implicit";  overt action by the programmer is needed.  Further, ...
    Same is when the programmer's actions are needed to declare X int and
    Y real. Implicit here is the conversion in X + Y, not the
    declarations.

    In normal languages, the compiler has no reason to suppose that "X"
    is an "int" unless there is an overt declaration to that effect. In normal languages, there is a reason to suppose that the parameter to "sqrt" has
    type "real". So it is not /un/reasonable for a parameter of type "int" to
    be converted to "real" with no further action by the programmer. It is not compulsory that a language provide such an implicit conversion; but it
    makes life slightly easier for the programmer, and similarly for other such conversions. It is a matter of where to draw the line. Meanwhile, I don't personally know of any language in which the conversion from variable to constant in expressions such as "i := j + 2" is not implicit; which is not
    to claim that no such language exists [but it would be inconvenient for
    many simple programming tasks].

    The difference is that by default int and real are unrelated types.

    That may be the default in /some/ languages; others know, with no further action by the programmer, that "int"s may be converted to "real"s
    /when required/.

    The programmer could explicitly put them in some common class, e.g. a
    class of additive types having + operation. That would make + an
    operation from the class (int, real).

    Yes, lots of things are possible. Would you really want to use a language in which the programmer of [eg] "scientific" code with much use
    of integer and floating-point arithmetic has to invoke many lines of such placement before embarking on even the simplest of programs? We knew how
    to avoid such make-work more than 60 years ago; we should move on rather
    than back.

    [...] The point of implicit "int -> real" conversion, for
    example, is that the /language/ already knows about it, and knows, eg,
    that in contexts requiring a "real", an "int" may safely be supplied
    instead.  The types are different, but the conversion is safe.
    The result would be a very rigid over-specified language. E.g. what
    if the programmer wanted to implement a custom floating-point type?
    How would the compiler know that this type must enjoy implicit
    conversions to int?

    You presumably mean /from/ "int"? The compiler doesn't know that,
    and can't be expected to. If a conversion is not one of those specified
    by the language, then it is not implicit. What facilities the language supplies for extending the list of conversions is another matter.

    [...]
    Note 2:  The conversion does not extend to other types, such as variables;
       if a real variable is required, you can't supply an integer variable.
    You probably mean some type-algebraic types built upon it, like a
    pointer to real.

    No, I meant what I wrote. ...

    It is not so obvious. You might want some operations
    of such types to enjoy conversions as well. For example, array
    indexing, record members etc:
       Real_Array (I) := 2;  -- No?
       Real_Array (1..3) := (1, 2, 3);  -- No?

    ... Both of those are, for example, legal [mutatis mutandis] in
    Algol 68; the compiler knows that on the RHS a real constant, resp an
    array of real constants, is needed and is able to do the conversion.
    But if a pointer to real is required, then you cannot [in A68] supply
    a pointer to int instead. There is no default path from "pointer to
    int" to "pointer to real"; there /is/ one from "pointer to int" to
    "real". What other languages do is up to them.

    etc. IMO, the cleanest way is to have this stuff properly typed (=no conversions at all) and achieve desired effects through inter-type relationships designed by the programmer.

    That may [perhaps] be clean. It is not, IMO, something which
    should burden the average engineer, physicist, ... who merely wants to
    write simple programs,

         The whole point of it being determined by the language is that the
    language knows that there is no ambiguity in "int -> real" /in contexts
    where "real" is expected/.  In most languages, such contexts are well-
    understood by the compiler and described by the language standard.
    I doubt one could formulate such rules consistently without
    introducing ambiguities.

    Feel free to point out the ambiguities in the Algol 68 Revised
    Report Section 6, where it is all explained in seven pages, inc comments
    and examples. You may not [and many do not] agree with those rules, but
    they are [IMO] consistent and unambiguous.

    The typical case in Ada is that you have competing resolutions. E.g.
    you have something like
        print(sqrt(2))
    There are lots of prints there, accepting all possible sqrts.
    It requires a lot of fine tuning to make such things usable. You
    cannot do that at the language level, IMO.

    Perhaps you should again look at A68; RR 10.5.1d, which
    refers back to 10.3.31a which is where all the hard work is done.
    It is only a couple of pages, despite all the "lots of prints".
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Hummel
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Mon Nov 28 11:10:31 2022
    From Newsgroup: comp.lang.misc

    On 2022-11-28 00:28, Andy Walker wrote:
    On 25/11/2022 15:46, Dmitry A. Kazakov wrote:
    [I wrote:]
         If you have to declare that "int" is a subtype of "real", then it's
    not "implicit";  overt action by the programmer is needed.  Further, ... >> Same is when the programmer's actions are needed to declare X int and
    Y real. Implicit here is the conversion in X + Y, not the
    declarations.

        In normal languages, the compiler has no reason to suppose that "X" is an "int" unless there is an overt declaration to that effect.

    Is X declared int or not?

    The difference is that by default int and real are unrelated types.

        That may be the default in /some/ languages;  others know, with no further action by the programmer, that "int"s may be converted to "real"s /when required/.

    You ignore the question of who requires the conversion and who defines the
    conversion semantics, including rounding, truncation, overflows, and the
    result of operations such as exponentiation. The language designer?

    The programmer could explicitly put them in some common class, e.g. a
    class of additive types having + operation. That would make + an
    operation from the class (int, real).

        Yes, lots of things are possible.  Would you really want to use a language in which the programmer of [eg] "scientific" code with much use
    of integer and floating-point arithmetic has to invoke many lines of such placement before embarking on even the simplest of programs?

    It is all about the accuracy and precision of the data involved, e.g. measured
    in a complex process of AD conversion and back to DA.

    We knew how
    to avoid such make-work more than 60 years ago;  we should move on rather than back.

    The nature and tasks of engineering involving computers have changed significantly since mainframes and punched tapes.

    The compiler doesn't know that,
    and can't be expected to.  If a conversion is not one of those specified
    by the language, then it is not implicit.

    Is integer to single precision IEEE float conversion safe?

    etc. IMO, the cleanest way is to have this stuff properly typed (=no
    conversions at all) and achieve desired effects through inter-type
    relationships designed by the programmer.

        That may [perhaps] be clean.  It is not, IMO, something which should burden the average engineer, physicist, ... who merely wants to
    write simple programs,

    Built-in numeric types are insufficient for typical engineering tasks
    like automation and control.
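
    For instance (illustrative declarations; the names are mine), the natural
    types in a control loop are programmer-defined scaled types rather than
    any predefined int or float:

       type ADC_Reading is delta 1.0 / 4096.0 range 0.0 .. 1.0;   --  12-bit converter
       type Temperature is delta 0.1 range -55.0 .. 125.0;        --  sensor, degrees C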

    The typical case in Ada is that you have competing resolutions. E.g.
    you have something like
        print(sqrt(2))
    There are lots of prints there, accepting all possible sqrts.  It
    requires a lot of fine tuning to make such things usable. You
    cannot do that at the language level, IMO.

        Perhaps you should again look at A68;  RR 10.5.1d, which
    refers back to 10.3.31a which is where all the hard work is done.
    It is only a couple of pages, despite all the "lots of prints".

    Then you should be able to explain how it is resolved between printing

    1. complex sqrt of 2
    2. single precision IEEE sqrt of 2
    3. double precision IEEE sqrt of 2
    4. long double sqrt of 2
    5. fixed-point (how many?) sqrt of 2
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Mon Nov 28 18:04:41 2022
    From Newsgroup: comp.lang.misc

    On 28/11/2022 10:10, Dmitry A. Kazakov wrote:
    Same is when the programmer's actions are needed to declare X int and
    Y real. Implicit here is the conversion in X + Y, not the
    declarations.
         In normal languages, the compiler has no reason to suppose that "X"
    is an "int" unless there is an overt declaration to that effect.
    Is X declared int or not?

    Of course it is. Were you thinking of the implicit declarations
    in some languages, depending on [eg] the first letter of an identifier?
    But that has nothing to do with implicit conversions [eg] from "int" to
    "real" or from "int variable" to "int [constant]"?

    The difference is that by default int and real are unrelated types.
         That may be the default in /some/ languages;  others know, with no
    further action by the programmer, that "int"s may be converted to "real"s
    /when required/.
    You ignore the question who requires conversion and who defines the conversion semantics including rounding, truncation, overflows,
    operation result like in the case of exponentiation etc. The language designer?

    Yes; wasn't that implied by "some languages"? These are design decisions. But it would be surprising if "int -> real" conversion was difficult in these ways. Going the other way may raise problems.

    [...]
    If a conversion is not one of those specified
    by the language, then it is not implicit.
    Is integer to single precision IEEE float conversion safe?

    You would have to ask the language designer.

    It requires a lot of fine tuning to make such things usable. You
    cannot do that at the language level, IMO.
         Perhaps you should again look at A68;  RR 10.5.1d, which
    refers back to 10.3.31a which is where all the hard work is done.
    It is only a couple of pages, despite all the "lots of prints".
    Then you should be able to explain how is it resolved between printing
    1. complex sqrt of 2
    2. single precision IEEE sqrt of 2
    3. double precision IEEE sqrt of 2
    4. long double sqrt of 2
    5. fixed-point (how many?) sqrt of 2

    The RR predates IEEE-754 by more than a decade, so there are no guarantees there about IEEE conformance [but A68G is set up to find out
    whether your computer supports IEEE, so I would expect it to work]. Lo,
    here is your task implemented in A68G, straight out of the box:

    $ cat Dmitry.a68g
    print (("complex: ", csqrt(2), newline,
            "complex: [with im part] ", csqrt(2 I 3), newline,
            "single: ", sqrt(2), newline,
            "double: ", long sqrt(2), newline,
            "long double: ", long long sqrt(2), newline,
            "fixed: [eg] ", fixed (sqrt(2), 10, 6), newline))
    $ a68g Dmitry.a68g
    complex: +1.41421356237310e  +0+0.00000000000000e  +0
    complex: [with im part] +1.67414922803554e  +0+8.95977476129838e  -1
    single: +1.41421356237310e  +0
    double: +1.4142135623730950488016887242096981e  +0
    long double: +1.41421356237309504880168872420969807856967187537694807317667974e  +0
    fixed: [eg]  +1.414214
    $

    No explicit conversions of "2" to "real" or complex/long versions in
    sight. The compiler knows that [eg] "long sqrt" requires a "long real" parameter, and automatically converts "2" accordingly. Now can we see
    your Ada equivalent for a direct comparison of ease of use? In what
    way is the A68G version unusable or in need of fine tuning or not done
    at the language level? [If you're not happy with the default output,
    A68 also provides formatted transput (comparable with C's) and direct
    ways to convert numbers to strings.]
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Mayer
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Mon Nov 28 20:59:08 2022
    From Newsgroup: comp.lang.misc

    On 2022-11-28 19:04, Andy Walker wrote:
    On 28/11/2022 10:10, Dmitry A. Kazakov wrote:
    Same is when the programmer's actions are needed to declare X int and
    Y real. Implicit here is the conversion in X + Y, not the
    declarations.
         In normal languages, the compiler has no reason to suppose that "X"
    is an "int" unless there is an overt declaration to that effect.
    Is X declared int or not?

        Of course it is.  Were you thinking of the implicit declarations
    in some languages, depending on [eg] the first letter of an identifier?
    But that has nothing to do with implicit conversions [eg] from "int" to "real" or from "int variable" to "int [constant]"?

    Now you see that the subtyping relation need not be declared for each expression, not even for each object declaration, but just once!

        Yes;  wasn't that implied by "some languages"?  These are design decisions.  But it would be surprising if "int -> real" conversion was difficult in these ways.  Going the other way may raise problems.

    No, it is quite logical considering the multitude of integer and real types
    with different properties.

    [...]
    If a conversion is not one of those specified
    by the language, then it is not implicit.
    Is integer to single precision IEEE float conversion safe?

        You would have to ask the language designer.

    I thought you were going to give advice to the language designer on
    the subject. Like "Sure! Piece of cake! Go ahead!"

    It requires a lot of fine tuning to make such things usable. You
    cannot do that at the language level, IMO.
         Perhaps you should again look at A68;  RR 10.5.1d, which
    refers back to 10.3.31a which is where all the hard work is done.
    It is only a couple of pages, despite all the "lots of prints".
    Then you should be able to explain how is it resolved between printing
    1. complex sqrt of 2
    2. single precision IEEE sqrt of 2
    3. double precision IEEE sqrt of 2
    4. long double sqrt of 2
    5. fixed-point (how many?) sqrt of 2

        The RR predates IEEE-754 by more than a decade, so there are no guarantees there about IEEE conformance [but A68G is set up to find out whether your computer supports IEEE, so I would expect it to work].  Lo, here is your task implemented in A68G, straight out of the box:

      $ cat Dmitry.a68g
      print (("complex: ", csqrt(2), newline,
              "complex: [with im part] ", csqrt(2 I 3), newline,
              "single: ", sqrt(2), newline,
              "double: ", long sqrt(2), newline,
              "long double: ", long long sqrt(2), newline,
              "fixed: [eg] ", fixed (sqrt(2), 10, 6), newline))
      $ a68g Dmitry.a68g
      complex: +1.41421356237310e  +0+0.00000000000000e  +0
      complex: [with im part] +1.67414922803554e  +0+8.95977476129838e  -1
      single: +1.41421356237310e  +0
      double: +1.4142135623730950488016887242096981e  +0
      long double: +1.41421356237309504880168872420969807856967187537694807317667974e  +0
      fixed: [eg]  +1.414214
      $

    Ah, see? Because of the conversions you cannot have just sqrt. In Ada,
    which has no implicit conversions, sqrt is called sqrt for whatever
    argument, as it should be!
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Wed Nov 30 20:17:07 2022
    From Newsgroup: comp.lang.misc

    On 28/11/2022 19:59, Dmitry A. Kazakov wrote:
    Same is when the programmer's actions are needed to declare X int and
    Y real. Implicit here is the conversion in X + Y, not the
    declarations.
         In normal languages, the compiler has no reason to suppose that "X"
    is an "int" unless there is an overt declaration to that effect.
    Is X declared int or not?
         Of course it is.  Were you thinking of the implicit declarations
    in some languages, depending on [eg] the first letter of an identifier?
    But that has nothing to do with implicit conversions [eg] from "int" to
    "real" or from "int variable" to "int [constant]"?
    Now you see that subtyping relation need not to be declared for each expression, not even for each object declaration, but just once!

    Not for the first time, I cannot make any sense of what you say. Implicit relationships don't need to be declared /at all/; not once, not
    every time, not at all, else they aren't implicit. Different languages
    have different rules about what is implicit.

         Yes;  wasn't that implied by "some languages"?  These are design
    decisions.  But it would be surprising if "int -> real" conversion was
    difficult in these ways.  Going the other way may raise problems.
    No, it is quite logical considering multitude of integer and real
    types with different properties.

    So are you claiming that conversions of "real" to "int" don't
    raise problems? Or what? Whether a language has a "multitude of integer
    and real types" is also something that varies between languages. Some languages [most of the early ones!] managed for a long time with only
    "int" and "real"; BCPL barely has types at all.

    Is integer to single precision IEEE float conversion safe?
         You would have to ask the language designer.
    I thought you were going to give an advise for the language designer
    on the subject. Like "Sure! Piece of cake! Go ahead!"

    Why would I be giving such advice to the language designer? It
    depends on the hardware and on the purpose of the language. What, in
    any case do you mean by "safe"? "Safe" as in "2" -> "2.00000" rather
    than "2.01234", or as in "the contexts where this conversion happens
    have been carefully designed"?

    It requires a lot of fine tuning to make such things usable. You
    cannot do that at the language level, IMO.
         Perhaps you should again look at A68;  RR 10.5.1d, which
    refers back to 10.3.31a which is where all the hard work is done.
    It is only a couple of pages, despite all the "lots of prints".
    Then you should be able to explain how is it resolved between printing
    1. complex sqrt of 2
    2. single precision IEEE sqrt of 2
    3. double precision IEEE sqrt of 2
    4. long double sqrt of 2
    5. fixed-point (how many?) sqrt of 2
         The RR predates IEEE-754 by more than a decade, so there are no
    guarantees there about IEEE conformance [but A68G is set up to find out
    whether your computer supports IEEE, so I would expect it to work].  Lo,
    here is your task implemented in A68G, straight out of the box:

       $ cat Dmitry.a68g
       print (("complex: ", csqrt(2), newline,
               "complex: [with im part] ", csqrt(2 I 3), newline,
               "single: ", sqrt(2), newline,
               "double: ", long sqrt(2), newline,
               "long double: ", long long sqrt(2), newline,
               "fixed: [eg] ", fixed (sqrt(2), 10, 6), newline))
       $ a68g Dmitry.a68g
       complex: +1.41421356237310e  +0+0.00000000000000e  +0
       complex: [with im part] +1.67414922803554e  +0+8.95977476129838e  -1
       single: +1.41421356237310e  +0
       double: +1.4142135623730950488016887242096981e  +0
       long double: +1.41421356237309504880168872420969807856967187537694807317667974e  +0
       fixed: [eg]  +1.414214
       $

    Ah, see? Because of the conversions you cannot have just sqrt. In Ada
    with has no implicit conversions sqrt is called sqrt for whatever
    argument as it should be!

    But you may note that (a) "2" is converted to all those different
    types safely and reliably, (b) "print" works safely and reliably with all
    those different types and (c) no explicit conversions are needed, despite
    all those different results. The RR does not include a jumbo "sqrt" that
    takes "any" parameter(s); if you want one, you get to roll your own, and
    the code in the RR for "print" will show you how. Again, other languages
    have made different choices about what is in the language and library.

    Meanwhile, I note that you have snipped/ducked the challenge in my
    PP, to show us the equivalent in Ada [or other language of your choice, if
    you prefer]. Is it six lines? Others can feel free to join in!
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Farnaby
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Wed Nov 30 21:53:31 2022
    From Newsgroup: comp.lang.misc

    On 2022-11-30 21:17, Andy Walker wrote:
    On 28/11/2022 19:59, Dmitry A. Kazakov wrote:

        Not for the first time, I cannot make any sense of what you say. Implicit relationships don't need to be declared /at all/;  not once, not every time, not at all, else they aren't implicit.  Different languages
    have different rules about what is implicit.

    You claimed that a subtyping relationship is equivalent to explicit
    conversions. I said that you are wrong. A subtyping relationship must be
    declared just once, while explicit type conversions must be applied each
    time, for each instance of the type.

         Yes;  wasn't that implied by "some languages"?  These are design
    decisions.  But it would be surprising if "int -> real" conversion was
    difficult in these ways.  Going the other way may raise problems.
    No, it is quite logical considering multitude of integer and real
    types with different properties.

        So are you claiming that conversions of "real" to "int" doesn't raise problems?

    Maybe not. E.g. a fixed-point real type with delta 13 (the difference
    between two adjacent values) and a range of a few thousand values can be
    converted to int.

    Whether a language has a "multitude of integer
    and real types" is also something that varies between languages.

    No. It is a proposition:

    IF there are multiple numeric types
    THEN implicit conversions do not fly

    You asked why, I explained.

    Is integer to single precision IEEE float conversion safe?
         You would have to ask the language designer.
    I thought you were going to give an advise for the language designer
    on the subject. Like "Sure! Piece of cake! Go ahead!"

        Why would I be giving such advice to the language designer?

    You already did. You basically said: reduce the numeric types to a bare
    minimum and then implicit conversions could possibly be defined. I never
    argued otherwise. Yes, a language with a primitive type system can
    have them. PL/1 had, C had.

    It requires a lot of fine tuning to make such things usable. You
    cannot do that at the language level, IMO.
         Perhaps you should again look at A68;  RR 10.5.1d, which
    refers back to 10.3.31a which is where all the hard work is done.
    It is only a couple of pages, despite all the "lots of prints".
    Then you should be able to explain how is it resolved between printing
    1. complex sqrt of 2
    2. single precision IEEE sqrt of 2
    3. double precision IEEE sqrt of 2
    4. long double sqrt of 2
    5. fixed-point (how many?) sqrt of 2
         The RR predates IEEE-754 by more than a decade, so there are no
    guarantees there about IEEE conformance [but A68G is set up to find out
    whether your computer supports IEEE, so I would expect it to work].  Lo,
    here is your task implemented in A68G, straight out of the box:

       $ cat Dmitry.a68g
       print (("complex: ", csqrt(2), newline,
               "complex: [with im part] ", csqrt(2 I 3), newline,
               "single: ", sqrt(2), newline,
               "double: ", long sqrt(2), newline,
               "long double: ", long long sqrt(2), newline,
               "fixed: [eg] ", fixed (sqrt(2), 10, 6), newline))
       $ a68g Dmitry.a68g
       complex: +1.41421356237310e  +0+0.00000000000000e  +0
       complex: [with im part] +1.67414922803554e  +0+8.95977476129838e  -1
       single: +1.41421356237310e  +0
       double: +1.4142135623730950488016887242096981e  +0
       long double:
    +1.41421356237309504880168872420969807856967187537694807317667974e  +0
       fixed: [eg]  +1.414214
       $

    Ah, see? Because of the conversions you cannot have just sqrt. In Ada
    with has no implicit conversions sqrt is called sqrt for whatever
    argument as it should be!

        But you may note that (a) "2" is converted to all those different types safely and reliably,

    At the price of Hungarian notation? Thanks, but no.

        Meanwhile, I note that you have snipped/ducked the challenge in my PP, to show us the equivalent in Ada [or other language of your choice, if you prefer].  Is it six lines?  Others can feel free to join in!

    It was MY example of ambiguous expressions impossible to resolve in the
    presence of conversions (or, equivalently, a subtyping relation). You
    resorted to mangling names. This is trivial to do in any language, in
    Ada, in C etc.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Fri Dec 2 13:36:16 2022
    From Newsgroup: comp.lang.misc

    On 30/11/2022 20:53, Dmitry A. Kazakov wrote:
    [I wrote:]
         Not for the first time, I cannot make any sense of what you say.
    Implicit relationships don't need to be declared /at all/;  not once, not
    every time, not at all, else they aren't implicit.  Different languages
    have different rules about what is implicit.
    You claimed that subtyping relationship is equivalent to explicit conversions.

    I have checked back through my contributions to this thread and
    can find nothing even remotely similar to such a claim.

    Whether a language has a "multitude of integer
    and real types" is also something that varies between languages.
    No.

    You surely mean "yes". Most early languages had only one integer
    type and one real type; that is not a "multitude". Some had two lengths
    of one or both; that is still not a multitude. Other, mostly more recent, languages do have many numeric types; that may or may not cause problems
    of conversion.

    It is a proposition:
      IF there are multiple numeric types
      THEN implicit conversions do not fly
    You asked why, I explained.

    Another sub-thread where you seem to have gone off at a tangent
    from whatever you think I may have been asking. But as a proposition,
    that is clearly false [unless you are claiming that /only/ implicit
    conversions are to be allowed], as demonstrated in this thread with
    specific examples.

    Is integer to single precision IEEE float conversion safe?
         You would have to ask the language designer.
    I thought you were going to give an advise for the language designer
    on the subject. Like "Sure! Piece of cake! Go ahead!"
         Why would I be giving such advice to the language designer?
    You already did. You basically said: reduce the numeric types to a
    bare minimum and then implicit conversions could possibly be defined.
    I never argued it otherwise. Yes, a language with a primitive type
    system can have them. PL/1 had, C had.

    I said, basically or otherwise, no such thing.

    [Left in this article for convenient reference:]
    Lo, here is your task implemented in A68G, straight out of the box:

       $ cat Dmitry.a68g
       print (("complex: ", csqrt(2), newline,
               "complex: [with im part] ", csqrt(2 I 3), newline,
               "single: ", sqrt(2), newline,
               "double: ", long sqrt(2), newline,
               "long double: ", long long sqrt(2), newline,
               "fixed: [eg] ", fixed (sqrt(2), 10, 6), newline))
       $ a68g Dmitry.a68g
       complex: +1.41421356237310e  +0+0.00000000000000e  +0
       complex: [with im part] +1.67414922803554e  +0+8.95977476129838e  -1
       single: +1.41421356237310e  +0
       double: +1.4142135623730950488016887242096981e  +0
       long double: +1.41421356237309504880168872420969807856967187537694807317667974e  +0
       fixed: [eg]  +1.414214
       $
         But you may note that (a) "2" is converted to all those different
    types safely and reliably,
    At the price of Hungarian notation? Thanks, but no.

    You snipped (b) and (c); what you seem to want -- it's always hard
    to be sure -- is perfectly possible in A68G, but the language as supplied contains only the one "sqrt" function.

         Meanwhile, I note that you have snipped/ducked the challenge in my
    PP, to show us the equivalent in Ada [or other language of your choice, if
    you prefer].  Is it six lines?  Others can feel free to join in!

    Challenge ducked again.

    It was MY example of ambiguous expressions impossible to resolve in
    presence of conversions (or, equivalently a subtyping relation). You
    resorted to mangling names. This is trivial to do in any language, in
    Ada, in C etc.

    Then you will have no difficulty in meeting your own challenge in
    Ada or C. Your choice. But I'm guessing it will be harder to code and to explain than the A68G given above.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Valentine
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Sat Dec 3 00:07:30 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-02 14:36, Andy Walker wrote:
    On 30/11/2022 20:53, Dmitry A. Kazakov wrote:
    [I wrote:]
         Not for the first time, I cannot make any sense of what you say.
    Implicit relationships don't need to be declared /at all/;  not once, not
    every time, not at all, else they aren't implicit.  Different languages
    have different rules about what is implicit.
    You claimed that subtyping relationship is equivalent to explicit
    conversions.

        I have checked back through my contributions to this thread and
    can find nothing even remotely similar to such a claim.

    Sorry, but then I don't understand your point. Conversions are implicit
    in both cases: arguments in the expression appear as-is. What's the
    objection again?

    Whether a language has a "multitude of integer
    and real types" is also something that varies between languages.
    No.

        You surely mean "yes".  Most early languages had only one integer type and one real type;  that is not a "multitude".

    This subthread was started by James about designing a *new* language. If
    you said that James should have looked no further than Excel, which had
    no integer type, then I would not care to respond. Your answer suggested
    that implicit conversion could somehow exist in a moderately *modern*
    and reasonably typed language.

    It is a proposition:
       IF there are multiple numeric types
       THEN implicit conversions do not fly
    You asked why, I explained.

        Another sub-thread where you seem to have gone off at a tangent
    from whatever you think I may have been asking.  But as a proposition,
    that is clearly false [unless you are claiming that /only/ implicit conversions are to be allowed], as demonstrated in this thread with
    specific examples.

    I have no idea what you mean.

    [Left in this article for convenient reference:]
    Lo, here is your task implemented in A68G, straight out of the box:

       $ cat Dmitry.a68g
       print (("complex: ", csqrt(2), newline,
               "complex: [with im part] ", csqrt(2 I 3), newline,
               "single: ", sqrt(2), newline,
               "double: ", long sqrt(2), newline,
               "long double: ", long long sqrt(2), newline,
               "fixed: [eg] ", fixed (sqrt(2), 10, 6), newline))
       $ a68g Dmitry.a68g
       complex: +1.41421356237310e  +0+0.00000000000000e  +0
       complex: [with im part] +1.67414922803554e  +0+8.95977476129838e  -1
       single: +1.41421356237310e  +0
       double: +1.4142135623730950488016887242096981e  +0
       long double: +1.41421356237309504880168872420969807856967187537694807317667974e  +0
       fixed: [eg]  +1.414214
       $
         But you may note that (a) "2" is converted to all those different
    types safely and reliably,
    At the price of Hungarian notation? Thanks, but no.

        You snipped (b) and (c);  what you seem to want -- it's always hard to be sure -- is perfectly possible in A68G, but the language as supplied contains only the one "sqrt" function.

    Why then is sqrt spelt differently on each line? To qualify, it would have to be this:

    print (("complex: ", sqrt(2), newline,
    "complex: [with im part] ", sqrt(2), newline,
    "single: ", sqrt(2), newline,
    "double: ", sqrt(2), newline,
    "long double: ", sqrt(2), newline,
    "fixed: [eg] ", sqrt(2), newline))

    That is *not* possible in any language.

         Meanwhile, I note that you have snipped/ducked the challenge in my
    PP, to show us the equivalent in Ada [or other language of your choice, if
    you prefer].  Is it six lines?  Others can feel free to join in!

        Challenge ducked again.

    It was MY example of ambiguous expressions impossible to resolve in
    presence of conversions (or, equivalently a subtyping relation). You
    resorted to mangling names. This is trivial to do in any language, in
    Ada, in C etc.

        Then you will have no difficulty in meeting your own challenge in Ada or C.  Your choice.  But I'm guessing it will be harder to code and to explain than the A68G given above.

    My example is impossible to implement due to ambiguities. The wrong
    answer you gave is trivial to have in any language. E.g. in Ada

       function sqrt (X : Integer) return Float is
       begin
          return Ada.Numerics.Elementary_Functions.sqrt (Float (X));
       end sqrt;

       Put_Line (sqrt(2)'Image);

    Done. If I wanted Long_Float as well, I would do

       function sqrt (X : Integer) return Long_Float is
       begin
          return
             Ada.Numerics.Long_Elementary_Functions.sqrt (Long_Float (X));
       end sqrt;

    Then without learning Hungarian I could write each sqrt as sqrt.

       Put_Line (Float'(sqrt(2))'Image);
       Put_Line (Long_Float'(sqrt(2))'Image);

    P.S. Ada does not support ad-hoc subtyping, which would remove the necessity
    of writing a wrapper/delegator for each function the programmer wanted
    to export to another type. C++ has such support in a very limited form. See:


    https://www.ibm.com/docs/en/zos/2.1.0?topic=conversions-conversion-functions
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 4 11:21:39 2022
    From Newsgroup: comp.lang.misc

    On 22/11/2022 20:01, Andy Walker wrote:
    On 22/11/2022 15:24, Dmitry A. Kazakov wrote:
    On 2022-11-22 15:11, David Brown wrote:
    [...] It is not really any more controversial than the implicit
    conversions found in many languages and user types.
    Yes, and there should be none in a well-typed program.

        Virtually every computer language since time began has glossed
    over the distinction between constants and variables in simple arithmetic expressions [and in contexts such as parameters to functions].  You can
    use an implicit dereference or you can make things harder for programmers;
    I know which I prefer.

        Personally, in the interests of making programming easier, I'm in favour of more, not fewer, implicit conversions.  If these are well-chosen, you can [and should] avoid almost all explicit conversions.  I see little point in telling people that they can't use "sqrt(2)" but must write "sqrt(2.0)" [or "sqrt((real) i))"] instead;  can't write "print (x)" but must write "(void) print (x)" [as "print" happens to return the number of characters printed, which you happen not to care about on this occasion];
    or, for consistency, can't write "j := i" but must write "j := (deref) i" instead.  It's not as though in any of these constructs there is reason
    ever to expect the implicit coercions not to apply, so that some errors
    are missed.  But some people seem to prefer hair shirts.

    Speaking of errors being missed I find it's easy to miss some comments
    in such a monolithic blob of text! Do you find that paragraph easier to
    read than something which includes more whitespace?

    Contrary to your viewpoint I'd prefer 2.0 ** 0.5 or to require i to be converted to float as needed. In fact, if "real" is a generic name for multiple types, as a programmer I'd want to be able to specify what I
    want done, e.g.

    real_32(i) ** 0.5

    I agree with your print example.

    For your latter assignment example I'd prefer

    j = i*

    where the trailing * indicates deref. That's not too much of a hair
    shirt, is it...?
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 4 12:47:13 2022
    From Newsgroup: comp.lang.misc

    On 25/11/2022 11:28, Andy Walker wrote:
    On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
    [implicit conversions:]

    ...

    The point of implicit "int -> real" conversion, for
    example, is that the /language/ already knows about it, and knows, eg,
    that in contexts requiring a "real", an "int" may safely be supplied instead.  The types are different, but the conversion is safe.

    "safe"? There's an assumption, perhaps from the C world, that
    int-to-float conversions are lossless but it's not true.

    Conversions from int to float may be lossless for small values (which
    can lead a programmer into a false sense of security) but lossy for
    large ones.
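
    For example, in Ada (a sketch assuming GNAT's 64-bit Long_Long_Integer
    and IEEE-double Long_Float), the conversion already loses information
    just past 2**53:

       with Ada.Text_IO; use Ada.Text_IO;
       procedure Lossy is
          N : constant Long_Long_Integer := 2**53 + 1;   --  9_007_199_254_740_993
          F : constant Long_Float := Long_Float (N);     --  rounds to 2.0**53
       begin
          Put_Line (Boolean'Image (Long_Long_Integer (F) = N));   --  FALSE
       end Lossy;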

    The solution? Don't support implicit conversions which are potentially
    lossy.
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Sun Dec 4 15:20:37 2022
    From Newsgroup: comp.lang.misc

    On 04/12/2022 12:47, James Harris wrote:
    [I wrote:]
    The point of implicit "int -> real" conversion, for
    example, is that the /language/ already knows about it, and knows, eg,
    that in contexts requiring a "real", an "int" may safely be supplied
    instead.  The types are different, but the conversion is safe.
    "safe"? There's an assumption, perhaps from the C world, that
    int-to-float conversions are lossless but it's not true.

    Lossless and safe are different concepts, and the difference was
    well understood decades before C, eg using computers where both "int" and "real" were 48 bits and "real" types had to include an exponent in that.
    Sadly, numerical analysis is somewhat of a lost art these days.

    Conversions from int to float may be lossless for small values (which
    can lead a programmer into a false sense of security) but lossy for
    large ones.

    I wouldn't want to do anything serious on a computer for which
    [eg] "maxint" was too large to be converted to "real". Whether the
    conversion is lossless is quite another matter.

    The solution? Don't support implicit conversions which are
    potentially lossy.

    How, in your mind, does "f(i)" [with an implicit conversion of
    the parameter to type "real"] differ from "f((real) i)" [where the
    parameter is explicitly cast]? The result is the same in both cases.
    I deduce that the problems, if any, are nothing to do with conversion
    being implicit, and everything to do with whatever guarantees and
    facilities /your/ language and hardware provide. If you want the
    conversion of "int" to "real" and back to "int" to be guaranteed to be
    lossless as well as safe, then write that into your language standard.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Daquin
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Sun Dec 4 16:00:47 2022
    From Newsgroup: comp.lang.misc

    On 04/12/2022 11:21, James Harris wrote:
    Speaking of errors being missed I find it's easy to miss some
    comments in such a monolithic blob of text! Do you find that
    paragraph easier to read than something which includes more
    whitespace?

    It was only eleven lines! Easier to read than your paragraph,
    which consisted of one line of ~200 characters, which therefore has to
    be [re-]wrapped before reading and replying!

    Contrary to your viewpoint I'd prefer 2.0 ** 0.5 or to require i to
    be converted to float as needed. In fact, if "real" is a generic name
    for multiple types, as a programmer I'd want to be able to specify
    what I want done, e.g.
      real_32(i) ** 0.5

    As a programmer, I'd prefer "sqrt" to "** 0.5" every time --
    clearer [IMO], and very possibly more efficient [eg, you can use
    Newton-Raphson effectively for square roots, whereas the more general
    case typically needs different treatments for very large or very small exponents]. As a practical programmer, I hate the idea of having to
    specify "real_32"; I'm sure there are CS reasons why it's sometimes
    useful, but my experience of astrophysicists and other scientists is
    that they just expect defaults to work properly.

    [...]
    For your latter assignment example I'd prefer
      j = i*
    where the trailing * indicates deref. That's not too much of a hair
    shirt, is it...?

    Good luck trying to get that past the average programmer!
    Perhaps worth noting that the particular choice of "*" as the "deref"
    symbol rather overloads it as you want it also for the multiplication
    sign, and in [your preference for] the power symbol, and perhaps in
    things like "j *= 2" and "/* comment */".
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Daquin
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 4 16:37:15 2022
    From Newsgroup: comp.lang.misc

    On 04/12/2022 15:20, Andy Walker wrote:
    On 04/12/2022 12:47, James Harris wrote:
    [I wrote:]
    The point of implicit "int -> real" conversion, for
    example, is that the /language/ already knows about it, and knows, eg,
    that in contexts requiring a "real", an "int" may safely be supplied
    instead.  The types are different, but the conversion is safe.
    "safe"? There's an assumption, perhaps from the C world, that
    int-to-float conversions are lossless but it's not true.

        Lossless and safe are different concepts, and the difference was well understood decades before C, eg using computers where both "int" and "real" were 48 bits and "real" types had to include an exponent in that. Sadly, numerical analysis is somewhat of a lost art these days.

    What do you think of as unsafe and do you have an example of lossy but
    'safe'?


    Conversions from int to float may be lossless for small values (which
    can lead a programmer into a false sense of security) but lossy for
    large ones.

        I wouldn't want to do anything serious on a computer for which
    [eg] "maxint" was too large to be converted to "real".  Whether the conversion is lossless is quite another matter.

    It would be a very strange computer in which maxint could not be
    converted to real as long as you don't mind loss of precision.


    The solution? Don't support implicit conversions which are
    potentially lossy.

        How, in your mind, does "f(i)" [with an implicit conversion of
    the parameter to type "real"] differ from "f((real) i)" [where the
    parameter is explicitly cast]?  The result is the same in both cases.

    I've done very little on floats so far but I can say I don't intend to
    support implicit conversions to float. Nor would I have just one float
    type. The conversion you mention would have to be more explicit such as
    either of

    f(<float 32>(i))
    f(<float 64>(i))


    I deduce that the problems, if any, are nothing to do with conversion
    being implicit, and everything to do with whatever guarantees and
    facilities /your/ language and hardware provide.  If you want the
    conversion of "int" to "real" and back to "int" to be guaranteed to be lossless as well as safe, then write that into your language standard.

    Yes, it comes from a desire to avoid vagueness and to enforce rigour and portability.
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 4 16:59:44 2022
    From Newsgroup: comp.lang.misc

    On 04/12/2022 16:00, Andy Walker wrote:
    On 04/12/2022 11:21, James Harris wrote:
    Speaking of errors being missed I find it's easy to miss some
    comments in such a monolithic blob of text! Do you find that
    paragraph easier to read than something which includes more
    whitespace?

        It was only eleven lines!  Easier to read than your paragraph, which consisted on one line of ~200 characters, which therefore has to
    be [re-]wrapped before reading and replying!

    Do you have to rewrap my posts before replying? The two newsreaders I
    use both compose paragraphs without line breaks and that is
    significantly more logical but it must be a pain if your newsreader
    doesn't do the same.

    As for readability, paragraphs are fine if they are purely prose. Your
    post mixed prose and code examples in a single paragraph 'blob' which
    still makes my eyes water when I try to read it!


    Contrary to your viewpoint I'd prefer 2.0 ** 0.5 or to require i to
    be converted to float as needed. In fact, if "real" is a generic name
    for multiple types, as a programmer I'd want to be able to specify
    what I want done, e.g.
       real_32(i) ** 0.5

        As a programmer, I'd prefer "sqrt" to "** 0.5" every time --
    clearer [IMO], and very possibly more efficient [eg, you can use Newton-Raphson effectively for square roots, whereas the more general
    case typically needs different treatments for very large or very small exponents].

    I was thinking that x ** 0.5 (with 0.5 being a literal) could be
    /implemented/ as sqrt(x) but it sounds as though there are traps
    awaiting large and small exponents. I hate the numerical analysis
    stuff because I don't know enough of it and I never use floating point
    so I never have to deal with it.

    What I don't like about having a sqrt function is that while it is the
    most common root it's not the only one. It's hard to justify that a
    language should include a specific function for square root but not cube
    root and not fourth root, etc.

    As a practical programmer, I hate the idea of having to
    specify "real_32";  I'm sure there are CS reasons why it's sometimes
    useful, but my experience of astrophysicists and other scientists is
    that they just expect defaults to work properly.

    That's helpful to hear. I allow for and anticipate that users would
    write their own type names as in

    typedef real = float 64

    Then they could declare

    x: real
    y: real

    elsewhere in the code. Presumably the scientists you mention would be
    able to do that.


    [...]
    For your latter assignment example I'd prefer
       j = i*
    where the trailing * indicates deref. That's not too much of a hair
    shirt, is it...?

        Good luck trying to get that past the average programmer!
    Perhaps worth noting that the particular choice of "*" as the "deref"
    symbol rather overloads it as you want it also for the multiplication
    sign, and in [your preference for] the power symbol, and perhaps in
    things like "j *= 2" and "/* comment */".

    The trailing * makes more sense when you see it in the context of the
    language as a whole. It's akin to C's prefix * ... except that it's in
    the right place!
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 4 17:18:48 2022
    From Newsgroup: comp.lang.misc

    On 25/11/2022 11:28, Andy Walker wrote:
    On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
    [implicit conversions:]

    ...

    That does not compute. 2 := j; is illegal merely because 2 (after
    overloading resolution to int) is a constant and the first argument
    of := is mutable.

        Yes.  That shows that "2" and "j" have different types.

    Does it? They may have different protections (one is intrinsically
    read-only but being wrongly used in a context in which something
    writeable is required) but when did protections become part of the type?

    I am not saying that protection cannot be part of the type but I don't
    see the necessity to conflate the two concepts.
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 4 17:26:54 2022
    From Newsgroup: comp.lang.misc

    On 25/11/2022 15:46, Dmitry A. Kazakov wrote:
    On 2022-11-25 12:28, Andy Walker wrote:
    On 23/11/2022 19:52, Dmitry A. Kazakov wrote:

    [implicit conversions:]

    ...

         The whole point of implicit conversions is that they take place
    with no overt action needed by the programmer.
    This is what I meant. You declare that A is a subtype B (inheriting
    in- or out- or all operations) once. Then you enjoy the effect
    distributed everywhere that declaration has effect.

         If you have to declare that "int" is a subtype of "real", then it's
    not "implicit";  overt action by the programmer is needed.  Further, ...

    Same is when the programmer's actions are needed to declare X int and Y real. Implicit here is the conversion in X + Y, not the declarations.

    The difference is that by default int and real are unrelated types. The programmer could explicitly put them in some common class, e.g. a class
    of additive types having + operation. That would make + an operation
    from the class (int, real). The implementation of now valid cross
    operation:

       + : int : real -> real

    could be created by composing real + with int-to-real conversion
    (provided by the programmer). The full dispatching table could be

       + : int  : int  -> int
       + : int  : real -> real
       + : real : int  -> real

    This is the mechanics the language can provide in order to move the
    nasty stuff out of its core.
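
    As a rough sketch of that mechanism (Python, purely illustrative; the
    add_table dictionary and the int_to_real helper are invented names,
    not taken from any actual language), the cross entries can be composed
    from a programmer-supplied conversion plus the homogeneous operation:

        # Dispatch table for "+" over int and real (float); the mixed
        # entries are built from int_to_real plus the real "+".
        def int_to_real(i):
            return float(i)

        add_table = {
            (int, int):     lambda a, b: a + b,                 # int  : int  -> int
            (int, float):   lambda a, b: int_to_real(a) + b,    # int  : real -> real
            (float, int):   lambda a, b: a + int_to_real(b),    # real : int  -> real
            (float, float): lambda a, b: a + b,                 # real : real -> real
        }

        def add(a, b):
            return add_table[type(a), type(b)](a, b)

        print(add(2, 3))      # 5
        print(add(2, 3.5))    # 5.5, via the composed cross operation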

    BTW, people like James would surely ask for

       + : real : real -> int

    for the cases when the result is a whole integer. But we won't let them! (:-))

    On the contrary, assuming I understand your notation I would have only

    + : int : int -> int
    + : uint : uint -> uint
    + : float : float -> float

    IOW different types would need one operand to be converted.

    ...

         The whole point of HLLs is to make programming easier, not to
    enforce some strict purity regime.
    Sure. The disagreement is about achieving that ease.

         Yes.  I would suggest that the evidence of code snippets posted
    here in relation to Ada and C shows that neither is even close.  Nor is,
    for example, Pascal.  Some dialects of Basic are much better, at least
    within a limited class of problems.  In the interests of /not/ starting
    "holier than thou" language wars, I shall say no more.

    My take on this is that it is not possible to do on the language level.
    I think that the language must be much simpler than even Ada, which is
    four or so times smaller than modern C++. Yet it must be powerful enough
    to express the ideas like implicit conversions at the library level.

    Which implicit conversions would you want and why could they not be
    explicit?
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From ram@ram@zedat.fu-berlin.de (Stefan Ram) to comp.lang.misc on Sun Dec 4 17:31:20 2022
    From Newsgroup: comp.lang.misc

    James Harris <james.harris.1@gmail.com> writes:
    On 25/11/2022 11:28, Andy Walker wrote:
    On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
    [implicit conversions:]
    ...
    That does not compute. 2 := j; is illegal merely because 2 (after
    overloading resolution to int) is a constant and the first argument
    of := is mutable.
    Yes.  That shows that "2" and "j" have different types.
    Does it? They may have different protections (one is intrinsically
    read-only but being wrongly used in a context in which something
    writeable is required) but when did protections become part of the type?

    FWIW: In Python (at least in some recent versions of the
    standard implementation CPython), the following program
    "sets 2 to j"; it prints 65.

    import ctypes

    j = 65

    def deref( addr, typ ):
        return ctypes.cast( addr, ctypes.POINTER( typ ))

    deref( id( 2 ), ctypes.c_int )[ 6 ] = j

    print( 2 )


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Bart@bc@freeuk.com to comp.lang.misc on Sun Dec 4 17:54:45 2022
    From Newsgroup: comp.lang.misc

    On 04/12/2022 16:59, James Harris wrote:
    On 04/12/2022 16:00, Andy Walker wrote:
    On 04/12/2022 11:21, James Harris wrote:
    Speaking of errors being missed I find it's easy to miss some
    comments in such a monolithic blob of text! Do you find that
    paragraph easier to read than something which includes more
    whitespace?

         It was only eleven lines!  Easier to read than your paragraph,
    which consisted of one line of ~200 characters, which therefore has to
    be [re-]wrapped before reading and replying!

    Do you have to rewrap my posts before replying? The two newsreaders I
    use both compose paragraphs without line breaks, which is
    significantly more logical, but it must be a pain if your newsreader
    doesn't do the same.

    As for readability, paragraphs are fine if they are purely prose. Your
    post mixed prose and code examples in a single paragraph 'blob' which
    still makes my eyes water when I try to read it!


    Contrary to your viewpoint I'd prefer 2.0 ** 0.5 or to require i to
    be converted to float as needed. In fact, if "real" is a generic name
    for multiple types, as a programmer I'd want to be able to specify
    what I want done, e.g.
       real_32(i) ** 0.5

         As a programmer, I'd prefer "sqrt" to "** 0.5" every time --
    clearer [IMO], and very possibly more efficient [eg, you can use
    Newton-Raphson effectively for square roots, whereas the more general
    case typically needs different treatments for very large or very small
    exponents].

    I was thinking that x ** 0.5 (with 0.5 being a literal) could be /implemented/ as sqrt(x) but it sounds as though there are traps
    awaiting for large and small exponents. I hate the numerical analysis
    stuff because I don't know enough of it and I never use floating point
    so I never have to deal with it.

    What I don't like about having a sqrt function is that while it is the
    most common root it's not the only one. It's hard to justify that a
    language should include a specific function for square root but not cube root and not fourth root, etc.

    My Casio calculator has square root on its own button. Cube root is on a shifted button, and an Nth root is on another shifted button.

    If it's given special treatment on a calculator, then why not in a language?

    The same calculator has a dedicated button for 'squared', and another
    for x^n. My languages also have dedicated operators for square root, square,
    and exponentiation.

    Also, sqrt is often a built-in processor instruction (as are min and max
    on x64), so another reason a language should treat them specially.


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Bart@bc@freeuk.com to comp.lang.misc on Sun Dec 4 18:05:15 2022
    From Newsgroup: comp.lang.misc

    On 04/12/2022 16:00, Andy Walker wrote:
    On 04/12/2022 11:21, James Harris wrote:
    Speaking of errors being missed I find it's easy to miss some
    comments in such a monolithic blob of text! Do you find that
    paragraph easier to read than something which includes more
    whitespace?

        It was only eleven lines!  Easier to read than your paragraph, which consisted of one line of ~200 characters, which therefore has to
    be [re-]wrapped before reading and replying!

    Contrary to your viewpoint I'd prefer 2.0 ** 0.5 or to require i to
    be converted to float as needed. In fact, if "real" is a generic name
    for multiple types, as a programmer I'd want to be able to specify
    what I want done, e.g.
       real_32(i) ** 0.5

        As a programmer, I'd prefer "sqrt" to "** 0.5" every time --
    clearer [IMO], and very possibly more efficient [eg, you can use Newton-Raphson effectively for square roots, whereas the more general
    case typically needs different treatments for very large or very small exponents].  As a practical programmer, I hate the idea of having to
    specify "real_32";  I'm sure there are CS reasons why it's sometimes
    useful,

    If you are using libraries such as OpenGL and Raylib, then 32-bit floats
    are used extensively. Your language needs to be capable of denoting
    suitable types.

    The approach I use is to provide:

    real
    real32 or r32 or f32 (I like to give a choice!)
    real64 or r64 or f64

    Use 'real' when you just want the default floating point type (which used
    to be 32 bits at one time, now is 64 bits); use specific widths when an
    API demands it, or the choice of width is significant and needs to be highlighted.


    but my experience of astrophysicists and other scientists is
    that they just expect defaults to work properly.

    Didn't Fortran have types like REAL*4 and REAL*8, as well as REAL and DOUBLEPRECISION?

    (I used to have the *N syntax too, then I decided I didn't need a
    special syntax as the possibilities for N were very limited.)
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Sun Dec 4 20:40:40 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-04 18:26, James Harris wrote:
    On 25/11/2022 15:46, Dmitry A. Kazakov wrote:
    On 2022-11-25 12:28, Andy Walker wrote:
    On 23/11/2022 19:52, Dmitry A. Kazakov wrote:

    [implicit conversions:]

    ...

         The whole point of implicit conversions is that they take place
    with no overt action needed by the programmer.
    This is what I meant. You declare that A is a subtype B (inheriting
    in- or out- or all operations) once. Then you enjoy the effect
    distributed everywhere that declaration has effect.

         If you have to declare that "int" is a subtype of "real", then it's
    not "implicit";  overt action by the programmer is needed.  Further, ... >>
    Same is when the programmer's actions are needed to declare X int and
    Y real. Implicit here is the conversion in X + Y, not the declarations.

    The difference is that by default int and real are unrelated types.
    The programmer could explicitly put them in some common class, e.g. a
    class of additive types having + operation. That would make + an
    operation from the class (int, real). The implementation of now valid
    cross operation:

        + : int : real -> real

    could be created by composing real + with int-to-real conversion
    (provided by the programmer). The full dispatching table could be

        + : int  : int  -> int
        + : int  : real -> real
        + : real : int  -> real

    This is the mechanics the language can provide in order to move the
    nasty stuff out of its core.

    BTW, people like James would surely ask for

        + : real : real -> int

    for the cases when the result is a whole integer. But we won't let
    them! (:-))

    On the contrary, assuming I understand your notation I would have only

      + : int : int -> int
      + : uint : uint -> uint
      + : float : float -> float

    IOW different types would need one operand to be converted.

    The example was about introducing *user-defined* ad-hoc subtyping.

         The whole point of HLLs is to make programming easier, not to >>>>> enforce some strict purity regime.
    Sure. The disagreement is about achieving that ease.

         Yes.  I would suggest that the evidence of code snippets posted
    here in relation to Ada and C shows that neither is even close.  Nor is,
    for example, Pascal.  Some dialects of Basic are much better, at least
    within a limited class of problems.  In the interests of /not/ starting
    "holier than thou" language wars, I shall say no more.

    My take on this is that it is not possible to do on the language
    level. I think that the language must be much simpler than even Ada,
    which is four or so times smaller than modern C++. Yet it must be
    powerful enough to express the ideas like implicit conversions at the
    library level.

    Which implicit conversions would you want and why could they not be explicit?

    A language without certain conversions would be intolerable. Start with
    access mode subtypes. Clearly 'in out T' is convertible to 'in T'. So
    are derived types inheriting operations etc.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Sun Dec 4 20:45:08 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-04 19:05, Bart wrote:

    Didn't Fortran have types like REAL*4 and REAL*8, as well as REAL and DOUBLEPRECISION?

    I used to pass REAL*4 to INTEGER*4 subroutines in order to use faster
    integer arithmetic e.g. for some tests. What a joy! (:-))

    FORTRAN-IV was an ideal implicit conversion language. Any type was
    convertible to any at zero cost! It simply didn't check anything...
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 4 20:02:30 2022
    From Newsgroup: comp.lang.misc

    On 04/12/2022 19:40, Dmitry A. Kazakov wrote:
    On 2022-12-04 18:26, James Harris wrote:
    On 25/11/2022 15:46, Dmitry A. Kazakov wrote:
    On 2022-11-25 12:28, Andy Walker wrote:
    On 23/11/2022 19:52, Dmitry A. Kazakov wrote:

    [implicit conversions:]

    ...

         The whole point of implicit conversions is that they take place
    with no overt action needed by the programmer.
    This is what I meant. You declare that A is a subtype B (inheriting
    in- or out- or all operations) once. Then you enjoy the effect
    distributed everywhere that declaration has effect.

         If you have to declare that "int" is a subtype of "real", then >>>> it's
    not "implicit";  overt action by the programmer is needed.  Further, >>>> ...

    Same is when the programmer's actions are needed to declare X int and
    Y real. Implicit here is the conversion in X + Y, not the declarations.

    The difference is that by default int and real are unrelated types.
    The programmer could explicitly put them in some common class, e.g. a
    class of additive types having + operation. That would make + an
    operation from the class (int, real). The implementation of now valid
    cross operation:

        + : int : real -> real

    could be created by composing real + with int-to-real conversion
    (provided by the programmer). The full dispatching table could be

        + : int  : int  -> int
        + : int  : real -> real
        + : real : int  -> real

    This is the mechanics the language can provide in order to move the
    nasty stuff out of its core.

    BTW, people like James would surely ask for

        + : real : real -> int

    for the cases when the result is a whole integer. But we won't let
    them! (:-))

    On the contrary, assuming I understand your notation I would have only

       + : int : int -> int
       + : uint : uint -> uint
       + : float : float -> float

    IOW different types would need one operand to be converted.

    The example was about introducing *user-defined* ad-hoc subtyping.

    I never know what people mean by subtyping. To some it seems to be a
    smaller range of another type as in 'tiny' below.

    small is range 0..999
    tiny is small range 0..99

    To others an OO 'subtype' is a class which inherits from a superclass.

    In either case the subtype inherits operations from its parent.

    Maybe those examples are wrong. Maybe there are other kinds of
    'subtype'. I don't know. What do you mean by subtyping in the current
    context, Dmitry?


         The whole point of HLLs is to make programming easier, not to
    enforce some strict purity regime.
    Sure. The disagreement is about achieving that ease.

         Yes.  I would suggest that the evidence of code snippets posted
    here in relation to Ada and C shows that neither is even close.  Nor is,
    for example, Pascal.  Some dialects of Basic are much better, at least
    within a limited class of problems.  In the interests of /not/ starting
    "holier than thou" language wars, I shall say no more.

    My take on this is that it is not possible to do on the language
    level. I think that the language must be much simpler than even Ada,
    which is four or so times smaller than modern C++. Yet it must be
    powerful enough to express the ideas like implicit conversions at the
    library level.

    Which implicit conversions would you want and why could they not be
    explicit?

    A language without certain conversions would be intolerable. Start with access mode subtypes. Clearly 'in out T' is convertible to 'in T'. So
    are derived types inheriting operations etc.

    Putting aside access modes (which I see as orthogonal to types) what
    other implicit conversions would you see the absence of as intolerable?
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 4 20:14:45 2022
    From Newsgroup: comp.lang.misc

    On 04/12/2022 17:54, Bart wrote:
    On 04/12/2022 16:59, James Harris wrote:

    ...

    I was thinking that x ** 0.5 (with 0.5 being a literal) could be
    /implemented/ as sqrt(x) but it sounds as though there are traps
    awaiting for large and small exponents. I hate the numerical analysis
    stuff because I don't know enough of it and I never use floating point
    so I never have to deal with it.

    What I don't like about having a sqrt function is that while it is the
    most common root it's not the only one. It's hard to justify that a
    language should include a specific function for square root but not
    cube root and not fourth root, etc.

    My Casio calculator has square root on its own button. Cube root is on a shifted button, and an Nth root is on another shifted button.

    If it's given special treatment on a calculator, then why not in a
    language?

    Because languages are not designed by Casio...?

    Calculators may also have keys for percent, factorial, and 000. Does
    that mean a language should do the same?


    The same calculator has a dedicated button for 'squared', and another
    for x^n. My languages also have dedicated operators for square root, square,
    and exponentiation.

    You have

    square(x)

    as well as

    x ** 2

    ?


    Also, sqrt is often a built-in processor instruction (as are min and max
    on x64), so another reason a language should treat them specially.

    As I say, x ** 0.5 could be implemented by a sqrt instruction (numerical analyses permitting).
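
    As a minimal sketch of that idea (Python, illustrative only; the
    power() helper and its exponent_is_literal flag are invented for the
    example), a front end could special-case a literal 0.5 exponent:

        import math

        # If the exponent is the literal 0.5, perform a square root;
        # otherwise fall back to general exponentiation.
        def power(x, exponent, exponent_is_literal=True):
            if exponent_is_literal and exponent == 0.5:
                return math.sqrt(x)
            return x ** exponent

        print(power(2.0, 0.5))   # 1.4142135623730951
        print(power(2.0, 3.0))   # 8.0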

    I could see it either way. I am not really averse to having a few sqrt functions (one for each float type) but I am interested to hear that
    people think they /should/ be included.

    Someone (David, I think) even suggested an integer square root. That
    would open up a can of worms called 'rounding'!
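
    For what it's worth, one existing answer to that rounding question is
    to define the integer square root to round down; Python's math.isqrt
    (shown here purely as an illustration) does exactly that:

        import math

        # math.isqrt returns the floor of the exact square root.
        print(math.isqrt(15))   # 3
        print(math.isqrt(16))   # 4
        print(math.isqrt(17))   # 4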
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 4 20:26:05 2022
    From Newsgroup: comp.lang.misc

    On 28/11/2022 10:10, Dmitry A. Kazakov wrote:
    On 2022-11-28 00:28, Andy Walker wrote:

    ...

    The compiler doesn't know that,
    and can't be expected to.  If a conversion is not one of those specified
    by the language, then it is not implicit.

    Is integer to single precision IEEE float conversion safe?

    By my definition of 'safe', no. Numbers above 16.7 million (2^24) or thereabouts will lose precision. I presume that was the point you were
    making.

    Understandably, I often see programmers overlook that converting a
    32-bit int to a 32-bit float can lose information. Perhaps it's because
    (1) C silently converts int to float and (2) small numbers are
    unchanged, lulling people into a false sense of security.
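
    A quick way to see the effect (Python, just as an illustration; the
    round-trip helper below is not from any of the languages discussed):

        import struct

        # Push a whole number through an IEEE-754 single-precision float
        # and back, to show where exactness stops.
        def via_float32(i):
            return int(struct.unpack('f', struct.pack('f', float(i)))[0])

        for i in (1000, 2**24, 2**24 + 1, 2**31 - 1):
            print(i, '->', via_float32(i))
        # 1000       -> 1000         (exact)
        # 16777216   -> 16777216     (2**24, still exact)
        # 16777217   -> 16777216     (information lost)
        # 2147483647 -> 2147483648   (rounded to the nearest float)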
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Sun Dec 4 22:24:35 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-04 21:26, James Harris wrote:
    On 28/11/2022 10:10, Dmitry A. Kazakov wrote:
    On 2022-11-28 00:28, Andy Walker wrote:

    ...

    The compiler doesn't know that,
    and can't be expected to.  If a conversion is not one of those specified
    by the language, then it is not implicit.

    Is integer to single precision IEEE float conversion safe?

    By my definition of 'safe', no. Numbers above 16.7 million (2^24) or thereabouts will lose precision. I presume that was the point you were making.

    Technical terms are:

    - Accuracy
    - Precision

    Understandably, I often see programmers overlook that converting a
    32-bit int to a 32-bit float will lose information.

    It will lose precision. Accuracy depends on how you interpret the
    mantissa. If you do it as an interval, with the bounds defined by the
    mantissa and exponent, e.g. M**EXP +/- EPS, then it is accurate when the
    original integer is inside that interval.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Sun Dec 4 22:40:43 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-04 21:02, James Harris wrote:
    On 04/12/2022 19:40, Dmitry A. Kazakov wrote:
    On 2022-12-04 18:26, James Harris wrote:
    On 25/11/2022 15:46, Dmitry A. Kazakov wrote:
    On 2022-11-25 12:28, Andy Walker wrote:
    On 23/11/2022 19:52, Dmitry A. Kazakov wrote:

    [implicit conversions:]

    ...

         The whole point of implicit conversions is that they take place
    with no overt action needed by the programmer.
    This is what I meant. You declare that A is a subtype B (inheriting
    in- or out- or all operations) once. Then you enjoy the effect
    distributed everywhere that declaration has effect.

         If you have to declare that "int" is a subtype of "real", then it's
    not "implicit";  overt action by the programmer is needed.
    Further, ...

    Same is when the programmer's actions are needed to declare X int
    and Y real. Implicit here is the conversion in X + Y, not the
    declarations.

    The difference is that by default int and real are unrelated types.
    The programmer could explicitly put them in some common class, e.g.
    a class of additive types having + operation. That would make + an
    operation from the class (int, real). The implementation of now
    valid cross operation:

        + : int : real -> real

    could be created by composing real + with int-to-real conversion
    (provided by the programmer). The full dispatching table could be

        + : int  : int  -> int
        + : int  : real -> real
        + : real : int  -> real

    This is the mechanics the language can provide in order to move the
    nasty stuff out of its core.

    BTW, people like James would surely ask for

        + : real : real -> int

    for the cases when the result is a whole integer. But we won't let
    them! (:-))

    On the contrary, assuming I understand your notation I would have only

       + : int : int -> int
       + : uint : uint -> uint
       + : float : float -> float

    IOW different types would need one operand to be converted.

    The example was about introducing *user-defined* ad-hoc subtyping.

    I never know what people mean by subtyping. To some it seems to be a
    smaller range of another type as in 'tiny' below.

      small is range 0..999
      tiny is small range 0..99

    These are new types. The correct syntax is

    subtype Small is Integer range 0..999;

    Now Small is a subtype of Integer.

    To others an OO 'subtype' is a class which inherits from a superclass.

    Both are the same. E.g. Small inherits + from Integer:

    X : Small;
    Y : Integer;
    begin
    Y := Y + X; -- See any conversion here? That is what subtype does.

    In either case the subtype inherits operations from its parent.

    Yes.

    Subtype (in the Liskov definition) means that you can substitute an X
    of S, a subtype of T, in an operation Foo of T.

    Since you can do that, you do not need any explicit conversions. Elementary.

    Putting aside access modes (which I see as orthogonal to types) what
    other implicit conversions would you see the absence of as intolerable?

    Nope, not putting them aside. Again, definition:

    type = values + operations

    Does 'in T' have the same operations as 'in out T'? No; ergo, it is
    another type.

    Another implicit conversion you mentioned yourself: all inherited methods
    in OO. If S inherits Foo from T, you need not explicitly convert to T
    when calling Foo on an S.

    Yet another example is transparent pointers. E.g.

    type String_Access is access String;      -- Pointer to String
    Ptr : String_Access := new String'("xy");
    begin
    Ptr (1..2) := "ab";   -- shorthand for Ptr.all (1..2) := "ab";

    That is OK, Ptr is a subtype of String in array indexing operations.
    Implicit type conversion here is pointer dereferencing.
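
    A rough model of such a transparent pointer (Python, illustrative
    only; the Ref class is invented for the example and is not how Ada
    implements it) is a reference that forwards indexing and slicing to
    its target, so no explicit dereference appears at the use site:

        # A reference that behaves like its target in indexing operations.
        class Ref:
            def __init__(self, target):
                self.target = target
            def __getitem__(self, key):         # read through the reference
                return self.target[key]
            def __setitem__(self, key, value):  # write through the reference
                self.target[key] = value

        buf = list("xyz")
        p = Ref(buf)
        p[0:2] = "ab"        # analogous to Ptr (1..2) := "ab";
        print(buf)           # ['a', 'b', 'z']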

    C++ conversion operators were already mentioned. They introduce an
    ad-hoc subtype.

    C++ references T& are subtypes of T etc.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 4 22:21:08 2022
    From Newsgroup: comp.lang.misc

    On 04/12/2022 21:40, Dmitry A. Kazakov wrote:
    On 2022-12-04 21:02, James Harris wrote:
    On 04/12/2022 19:40, Dmitry A. Kazakov wrote:
    On 2022-12-04 18:26, James Harris wrote:
    On 25/11/2022 15:46, Dmitry A. Kazakov wrote:
    On 2022-11-25 12:28, Andy Walker wrote:
    On 23/11/2022 19:52, Dmitry A. Kazakov wrote:

    [implicit conversions:]

    ...

         The whole point of implicit conversions is that they take place
    with no overt action needed by the programmer.
    This is what I meant. You declare that A is a subtype B (inheriting
    in- or out- or all operations) once. Then you enjoy the effect
    distributed everywhere that declaration has effect.

         If you have to declare that "int" is a subtype of "real", then it's
    not "implicit";  overt action by the programmer is needed.
    Further, ...

    Same is when the programmer's actions are needed to declare X int
    and Y real. Implicit here is the conversion in X + Y, not the
    declarations.

    The difference is that by default int and real are unrelated types.
    The programmer could explicitly put them in some common class, e.g.
    a class of additive types having + operation. That would make + an
    operation from the class (int, real). The implementation of now
    valid cross operation:

        + : int : real -> real

    could be created by composing real + with int-to-real conversion
    (provided by the programmer). The full dispatching table could be

        + : int  : int  -> int
        + : int  : real -> real
        + : real : int  -> real

    This is the mechanics the language can provide in order to move the
    nasty stuff out of its core.

    BTW, people like James would surely ask for

        + : real : real -> int

    for the cases when the result is a whole integer. But we won't let
    them! (:-))

    On the contrary, assuming I understand your notation I would have only
       + : int : int -> int
       + : uint : uint -> uint
       + : float : float -> float

    IOW different types would need one operand to be converted.

    The example was about introducing *user-defined* ad-hoc subtyping.

    I never know what people mean by subtyping. To some it seems to be a
    smaller range of another type as in 'tiny' below.

       small is range 0..999
       tiny is small range 0..99

    These are new types. The correct syntax is

       subtype Small is Integer range 0..999;

    Now Small is subtype of Integer.

    OK. What is an operation which is out of range as in

    u : integer;
    v : small := 999;

    u := v + 1;
    v := v + 1;

    defined to leave in u and v?

    And does

    subtype tiny is small range 0..9;

    declare a further subtype compatible with integer and small?


    To others an OO 'subtype' is a class which inherits from a superclass.

    Both are same. E.g. Small inherits + from Integer:

       X : Small;
       Y : Integer;
    begin
       Y := Y + X; -- See any conversion here? That is what subtype does.

    Yes, although a subtype isn't required for that. As you know, C effects various promotions (some unwisely but they are effected nonetheless) and
    I define the narrower operand to be widened to match the other.

    Type /compatibility/ is an open issue for me. I gather that Ada allows multiple subtypes of integer all to be compatible with each other even
    if one is not derived from the other; that's different from inherited compatibility as neither is a superclass of the other.


    In either case the subtype inherits operations from its parent.

    Yes.

    Subtype (in Liskov definition) means that you can substitute X of S, a subtype of T in an operation Foo of T.

    Since you can do that, you do not need any explicit conversions.
    Elementary.

    Good point, Sherlock. ;-)


    Putting aside access modes (which I see as orthogonal to types) what
    other implicit conversions would you see the absence of as intolerable?

    Nope, not putting them aside. Again, definition:

       type = values + operations

    Does 'in T' has same operations of 'in out T'? No, ergo, this is another type.

    I won't go there. You and I have debated the meaning of 'type' before
    and we are not in total agreement.


    Other implicit conversion you mentioned yourself. All inherited methods
    in OO. If S inherits Foo from T you need not to explicitly convert to T
    when calling Foo on an S.

    Yet another example is transparent pointers. E.g.

       type Ptr is access String; -- Pointer to String
    begin
       Ptr (1..2) := "ab";

    That is OK, Ptr is a subtype of String in array indexing operations. Implicit type conversion here is pointer dereferencing.

    FWIW I would explicitly dereference such a pointer with

    Ptr*

    I have thought about declaring a reference which is automatically
    dereferenced a certain number of times so that it can be treated as an
    object, but I haven't bottomed out the concomitant issues yet, such as:
    when the programmer wants to mention the reference without all that
    automatic dereferencing, how does he specify that?

    ...
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Bart@bc@freeuk.com to comp.lang.misc on Sun Dec 4 23:29:26 2022
    From Newsgroup: comp.lang.misc

    On 04/12/2022 20:14, James Harris wrote:
    On 04/12/2022 17:54, Bart wrote:
    On 04/12/2022 16:59, James Harris wrote:

    ...

    I was thinking that x ** 0.5 (with 0.5 being a literal) could be
    /implemented/ as sqrt(x) but it sounds as though there are traps
    awaiting for large and small exponents. I hate the numerical analysis
    stuff because I don't know enough of it and I never use floating
    point so I never have to deal with it.

    What I don't like about having a sqrt function is that while it is
    the most common root it's not the only one. It's hard to justify that
    a language should include a specific function for square root but not
    cube root and not fourth root, etc.

    My Casio calculator has square root on its own button. Cube root is on
    a shifted button, and an Nth root is on another shifted button.

    If it's given special treatment on a calculator, when why not in a
    language?

    Because languages are not designed by Casio...?

    Calculators may also have keys for percent, factorial, and 000. Does
    that mean a language should do the same?

    Casio designs for ergonomics. They consider square-root important enough
    to be given its own button. Factorial on mine is a shifted button.

    000 is more for data-entry, so would be editor-related not language.
    Percent, I've never used.


    The same calculator has a dedicated button for 'squared', and another
    for x^n. My languages also dedicated operators for square root,
    square, and exponentiation.

    You have

      square(x)

    as well as

      x ** 2

    ?

    Yes. I've always had it, while ** only came along later. It's also an
    easy optimisation for a primitive compiler.

    Besides, some implementations of ** (or pow() in C) are designed for and
    will use floating point.

    Even with integer **, you have to cross your fingers a little and hope
    that a compiler (even yours) will optimise X**2 into a simple multiply.
    With sqr(X) you can be 100% confident.


    Also, sqrt is often a built-in processor instruction (as are min and
    max on x64), so another reason a language should treat them specially.

    As I say, x ** 0.5 could be implemented by a sqrt instruction (numerical analyses permitting).

    I could see it either way. I am not really averse to having a few sqrt functions (one for each float type) but I am interested to hear that
    people think they /should/ be included.

    My sqrt operator can be overloaded. However, my compiler's type analyser
    will force operands of maths to f64, including sqrt. I've just changed
    that (it took a minute), and now sqrt(x) is implemented as either the
    `sqrtss` or `sqrtsd` SIMD instruction depending on the type of x.

    (When x has integer type, it converts to f64, with a result of f64.)

    That was worth doing since 32-bit sqrt is nearly 30% faster than 64-bit
    sqrt.

    And I can do it with a single 'sqrt' operator.

    In my scripting language, I haven't implemented sqrt for bignum
    types. There, one difficulty is deciding how accurate the result should
    be, as with no limits, it can keep going forever.
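
    That trade-off shows up in arbitrary-precision libraries too; for
    example (Python's decimal module, shown only as an illustration), the
    caller has to pick a precision before asking for a square root:

        from decimal import Decimal, getcontext

        # The digits of sqrt(2) never terminate, so the context precision
        # decides where to stop.
        getcontext().prec = 50
        print(Decimal(2).sqrt())   # 50 significant digits of sqrt(2)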
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Mon Dec 5 08:50:24 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-04 23:21, James Harris wrote:
    On 04/12/2022 21:40, Dmitry A. Kazakov wrote:
    On 2022-12-04 21:02, James Harris wrote:
    On 04/12/2022 19:40, Dmitry A. Kazakov wrote:
    On 2022-12-04 18:26, James Harris wrote:
    On 25/11/2022 15:46, Dmitry A. Kazakov wrote:
    On 2022-11-25 12:28, Andy Walker wrote:
    On 23/11/2022 19:52, Dmitry A. Kazakov wrote:

    [implicit conversions:]

    ...

         The whole point of implicit conversions is that they take place
    with no overt action needed by the programmer.
    This is what I meant. You declare that A is a subtype B (inheriting
    in- or out- or all operations) once. Then you enjoy the effect
    distributed everywhere that declaration has effect.

         If you have to declare that "int" is a subtype of "real", then it's
    not "implicit";  overt action by the programmer is needed.
    Further, ...

    Same is when the programmer's actions are needed to declare X int
    and Y real. Implicit here is the conversion in X + Y, not the
    declarations.

    The difference is that by default int and real are unrelated
    types. The programmer could explicitly put them in some common
    class, e.g. a class of additive types having + operation. That
    would make + an operation from the class (int, real). The
    implementation of now valid cross operation:

        + : int : real -> real

    could be created by composing real + with int-to-real conversion
    (provided by the programmer). The full dispatching table could be

        + : int  : int  -> int
        + : int  : real -> real
        + : real : int  -> real

    This is the mechanics the language can provide in order to move
    the nasty stuff out of its core.

    BTW, people like James would surely ask for

        + : real : real -> int

    for the cases when the result is a whole integer. But we won't let
    them! (:-))

    On the contrary, assuming I understand your notation I would have only
       + : int : int -> int
       + : uint : uint -> uint
       + : float : float -> float

    IOW different types would need one operand to be converted.

    The example was about introducing *user-defined* ad-hoc subtyping.

    I never know what people mean by subtyping. To some it seems to be a
    smaller range of another type as in 'tiny' below.

       small is range 0..999
       tiny is small range 0..99

    These are new types. The correct syntax is

        subtype Small is Integer range 0..999;

    Now Small is subtype of Integer.

    OK. What is an operation which is out of range as in

      u : integer;
      v : small := 999;

      u := v + 1;
      v := v + 1;

    defined to leave in u and v?

    "+" is inherited from Integer. Thus

    v + 1 = Integer'(v) + Integer'(1) = Integer'(1000)
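
    A small sketch of what that implies (Python, purely illustrative; the
    Small class below only mimics the Ada behaviour, where the operation
    happens in the base type and the range check fires on the store back
    into the subtype, in Ada raising Constraint_Error):

        # Operations are inherited from the base type (plain int here);
        # only storing a result back into Small is range-checked.
        class Small:
            LO, HI = 0, 999
            def __init__(self, value):
                if not (self.LO <= value <= self.HI):
                    raise ValueError(f"{value} not in {self.LO}..{self.HI}")
                self.value = value

        v = Small(999)
        u = v.value + 1              # u := v + 1;  u is 1000, no check on integer
        print(u)                     # 1000
        try:
            v = Small(v.value + 1)   # v := v + 1;  the store back into Small fails
        except ValueError as e:
            print("constraint error:", e)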

    And does

      subtype tiny is small range 0..9;

    declare a further subtype compatible with integer and small?

    Yes.

    To others an OO 'subtype' is a class which inherits from a superclass.

    Both are same. E.g. Small inherits + from Integer:

        X : Small;
        Y : Integer;
    begin
        Y := Y + X; -- See any conversion here? That is what subtype does.

    Yes, although a subtype isn't required for that. As you know, C effects various promotions (some unwisely but they are effected nonetheless) and
    I define the narrower operand to be widened to match the other.

    C promotions are subtypes.

    Type /compatibility/ is an open issue for me. I gather that Ada allows multiple subtypes of integer all to be compatible with each other even
    if one is not derived from the other; that's different from inherited compatibility as neither is a superclass of the other.

    Subtyping is a transitive relation: A<:B<:C. Ada's subtype introduces
    both Small<:Integer and Integer<:Small, because Small exports its
    operations to Integer. E.g.

    procedure Foo (X : Small);

    Now

    Y : Integer;

    Foo (Y); -- This is OK, Small is a supertype of Integer

    This is why all Ada subtypes form an equivalence class.

    If I designed a new language I would have ad-hoc sub- and super-types
    separated and not require the same representation. More like C++
    conversion operators.

    Putting aside access modes (which I see as orthogonal to types) what
    other implicit conversions would you see the absence of as intolerable?

    Nope, not putting them aside. Again, definition:

        type = values + operations

    Does 'in T' has same operations of 'in out T'? No, ergo, this is
    another type.

    I won't go there. You and I have debated the meaning of 'type' before
    and we are not in total agreement.

    When you invent something better than the standard definition values + operations, let me know... (:-))

    Other implicit conversion you mentioned yourself. All inherited
    methods in OO. If S inherits Foo from T you need not to explicitly
    convert to T when calling Foo on an S.

    Yet another example is transparent pointers. E.g.

        type Ptr is access String; -- Pointer to String
    begin
        Ptr (1..2) := "ab";

    That is OK, Ptr is a subtype of String in array indexing operations.
    Implicit type conversion here is pointer dereferencing.

    FWIW I would explicitly dereference such a pointer with

      Ptr*

    I have thought about declaring a reference which is automatically dereferenced a certain number of times so that it can be treated as an object but haven't bottomed out the concomitant issues yet such as when
    the programmer wants to mention the reference without all that automatic dereferencing how does he specify to?

    1. Such cases do not exist. The only one is deep vs. shallow copy in assignment. In Ada assignment is not inherited. So P1 := P2 is shallow.
    P1.all := P2.all is deep.

    2. You can always qualify the type and/or operation. E.g. in Ada it is
    denoted as T'(E). T is the type, E is expression/object.

    Anyway, it is not specific to pointers. Automatic dereferencing is
    subtyping. So if you do not want to build it into the language, you do
    not need to, provided you allow the programmer to declare it a subtype.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Mon Dec 5 14:45:59 2022
    From Newsgroup: comp.lang.misc

    On 04/12/2022 16:37, James Harris wrote:
    What do you think of as unsafe and do you have an example of lossy
    but 'safe'?

    "Unsafe", to me, means that the behaviour is or could be undefined
    or could cause an [unexpected/untrapped] exception; there is lots of that around, esp if you forget the elementary checks [such as pointers being non-null, arithmetic not overflowing, indexes being within bounds, ...]. Converting "int -> real" is safe [esp if, as with Algol, the Standard
    defines it so -- RR 2.1.3.1e] but may be lossy [as discussed earlier].

         How, in your mind, does "f(i)" [with an implicit conversion of
    the parameter to type "real"] differ from "f((real) i)" [where the
    parameter is explicitly cast]?  The result is the same in both cases.
    I've done very little on floats so far but I can say I don't intend
    to support implicit conversions to float. Nor would I have just one
    float type. The conversion you mention would have to be more explicit
    such as either of
      f(<float 32>(i))
      f(<float 64>(i))

    If the implicit conversion is possible, then one of your two
    explicit conversions is wrong. I don't intend to write an essay here,
    but this is an area where Algol works very hard to make function calls
    such as "f(i)" work smoothly while also permitting operands the usual
    freedoms so that you can add all numeric types with the operator "+".
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Mendelssohn
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Mon Dec 5 15:05:32 2022
    From Newsgroup: comp.lang.misc

    On 04/12/2022 17:18, James Harris wrote:
    On 25/11/2022 11:28, Andy Walker wrote:
    On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
    That does not compute. 2 := j; is illegal merely because 2 (after
    overloading resolution to int) is a constant and the first argument
    of := is mutable.
         Yes.  That shows that "2" and "j" have different types.
    Does it? They may have different protections (one is intrinsically
    read-only but being wrongly used in a context in which something
    writeable is required) but when did protections become part of the
    type?

    Not to do with "protection". Unless your [or your language's]
    model of the computer is seriously weird, "j" is some allocated storage
    in the computer which contains an integer [which in the present case is
    2]. Storage is not the same sort of object as the thing it contains.
    Read-only storage is still not the same as its contents. Covering up
    that distinction, as with C's talk of "lvalues" and "rvalues" is not,
    IMO, helpful. Two objects of the same type ought to be syntactically
    and semantically interchangeable [modulo some quibbles not relevant
    here].

    I am not saying that protection cannot be part of the type but I
    don't see the necessity to conflate the two concepts.

    Lots of things can be part of the type! A significant part of
    the C standard is taken up by a discussion of the various adjectives
    than can adorn types. But storage is still not the same type as what
    is stored there.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Mendelssohn

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Bart@bc@freeuk.com to comp.lang.misc on Mon Dec 5 15:33:58 2022
    From Newsgroup: comp.lang.misc

    On 05/12/2022 15:05, Andy Walker wrote:
    On 04/12/2022 17:18, James Harris wrote:
    On 25/11/2022 11:28, Andy Walker wrote:
    On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
    That does not compute. 2 := j; is illegal merely because 2 (after
    overloading resolution to int) is a constant and the first argument
    of := is mutable.
         Yes.  That shows that "2" and "j" have different types.
    Does it? They may have different protections (one is intrinsically
    read-only but being wrongly used in a context in which something
    writeable is required) but when did protections become part of the
    type?

        Not to do with "protection".  Unless your [or your language's] model of the computer is seriously weird, "j" is some allocated storage
    in the computer which contains an integer [which in the present case is
    2].  Storage is not the same sort of object as the thing it contains. Read-only storage is still not the same as its contents.  Covering up
    that distinction, as with C's talk of "lvalues" and "rvalues" is not,
    IMO, helpful.  Two objects of the same type ought to be syntactically
    and semantically interchangeable [modulo some quibbles not relevant
    here].

    Do i and j in this bit of A68 have the same type:

    INT i=2;
    INT j:=3;

    print((i,j, newline));

    i:=j;

    print((i,j, newline))

    They both have values that display as 2 and 3, both integers. But the assignment doesn't work. This does however:

    j:=i;

    This depends on what you mean by the type. Sometimes only the target type (INT)
    is of interest, then the resulting values will have the same type.

    But also important is how you get there from the denotation. Algol68
    likes to make that distinction, other languages gloss over it unless
    some extra levels of indirection are added (explicit pointers), which is
    how mine works:

    const int a = 2
    int b := 3
    ref int c := &b

    println (1).typestr # i64
    println a.typestr # i64
    println b.typestr # i64
    println c.typestr # ref i64

    The discrepancy is because of the automatic dereference of a variable's
    name that is common to HLLs.


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Mon Dec 5 17:22:41 2022
    From Newsgroup: comp.lang.misc

    On 05/12/2022 15:33, Bart wrote:
    Do i and j in this bit of A68 have the same type:
      INT i=2;
      INT j:=3;

    No. We've been through this several times before! That code
    defines the identifier "i" to be a[n alternative] way of describing the
    integer 2 [more useful in cases such as "REAL pi = 4*arctan(1);"], and
    "j" to be a way of describing newly-allocated storage initialised to
    contain the integer 3. So "i" has type "INT" and "j" has type "REF INT"
    [ie, storage suitable to hold an "INT"].

    *** It is universally acknowledged that this is confusing. ***
    It was driven by the necessity to conform with other "scientific"
    languages of the period. A completely new invention, without the
    historical baggage, would/should look quite different, though since
    I share that baggage [as do C, Pascal, Ada, ...], I have no concrete
    proposals for what "different" should actually be.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Mendelssohn
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Mon Dec 5 22:41:07 2022
    From Newsgroup: comp.lang.misc

    On 04/12/2022 23:29, Bart wrote:
    On 04/12/2022 20:14, James Harris wrote:
    On 04/12/2022 17:54, Bart wrote:

    ...

    The same calculator has a dedicated button for 'squared', and another
    for x^n. My languages also dedicated operators for square root,
    square, and exponentiation.

    You have

       square(x)

    as well as

       x ** 2

    ?

    Yes. I've always had it, while ** only came along later. It's also an
    easy optimisation for a primitive compiler.

    Besides, some implementations of ** (or pow() in C) are designed for and will use floating point.

    Even with integer **, you have to cross your fingers a little and hope
    that a compiler (even yours) will optimise X**2 into a simple multiply.
    With sqr(X) you can be 100% confident.

    Why could a compiler not be guaranteed to convert x ** 2 into x * x?
    Isn't it a traditional strength reduction which would evaluate to
    exactly the same answer?
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Mon Dec 5 23:04:11 2022
    From Newsgroup: comp.lang.misc

    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
    On 2022-12-04 23:21, James Harris wrote:
    On 04/12/2022 21:40, Dmitry A. Kazakov wrote:
    On 2022-12-04 21:02, James Harris wrote:

    ...

    I never know what people mean by subtyping. To some it seems to be a
    smaller range of another type as in 'tiny' below.

       small is range 0..999
       tiny is small range 0..99

    These are new types. The correct syntax is

        subtype Small is Integer range 0..999;

    Now Small is subtype of Integer.

    OK. What is an operation which is out of range as in

       u : integer;
       v : small := 999;

       u := v + 1;
       v := v + 1;

    defined to leave in u and v?

    "+" is inherited from Integer. Thus

       v + 1 = Integer'(v) + Integer'(1) = Integer'(1000)

    That seems odd when v's limit is 999.


    And does

       subtype tiny is small range 0..9;

    declare a further subtype compatible with integer and small?

    Yes.

    OK.


    To others an OO 'subtype' is a class which inherits from a superclass.

    Both are same. E.g. Small inherits + from Integer:

        X : Small;
        Y : Integer;
    begin
        Y := Y + X; -- See any conversion here? That is what subtype does.
    Yes, although a subtype isn't required for that. As you know, C
    effects various promotions (some unwisely but they are effected
    nonetheless) and I define the narrower operand to be widened to match
    the other.

    C promotions are subtypes.

    Even C's promotions of int to float?


    Type /compatibility/ is an open issue for me. I gather that Ada allows
    multiple subtypes of integer all to be compatible with each other even
    if one is not derived from the other; that's different from inherited
    compatibility as neither is a superclass of the other.

    Subtyping is a transitive relation. A<:B<:C. Ada's subtype introduces
    both Small<:Integer and Integer<:Small. Because Small exports its
    operations to Integer. E.g.

       procedure Foo (X : Small);

    Now

       Y : Integer;

       Foo (Y); -- This is OK, Small is a supertype of Integer

    So Small is both a subtype and a supertype of Integer? That seems a bit mad.


    This is why all Ada subtypes form an equivalence class.

    If I designed a new language I would have ad-hoc sub- and super-type separated and not require same representation. More like C++ conversion operators.

    Not being familiar with C++ I'm not sure what that last paragraph means.
    ATM I feel there is a 'landing zone' for type compatibility but I cannot
    yet make it out.


    Putting aside access modes (which I see as orthogonal to types) what
    other implicit conversions would you see the absence of as intolerable?
    Nope, not putting them aside. Again, definition:

        type = values + operations

    Does 'in T' has same operations of 'in out T'? No, ergo, this is
    another type.

    I won't go there. You and I have debated the meaning of 'type' before
    and we are not in total agreement.

    When you invent something better than the standard definition values + operations, let me know... (:-))

    Didn't you used to say values only? At least you've now added operations
    so you are getting there. ;-)


    Other implicit conversion you mentioned yourself. All inherited
    methods in OO. If S inherits Foo from T you need not to explicitly
    convert to T when calling Foo on an S.

    Yet another example is transparent pointers. E.g.

        type Ptr is access String; -- Pointer to String
    begin
        Ptr (1..2) := "ab";

    That is OK, Ptr is a subtype of String in array indexing operations.
    Implicit type conversion here is pointer dereferencing.

    FWIW I would explicitly dereference such a pointer with

       Ptr*

    I have thought about declaring a reference which is automatically
    dereferenced a certain number of times so that it can be treated as an
    object but haven't bottomed out the concomitant issues yet such as
    when the programmer wants to mention the reference without all that
    automatic dereferencing how does he specify to?

    1. Such cases do not exist. The only one is deep vs. shallow copy in assignment. In Ada assignment is not inherited. So P1 := P2 is shallow. P1.all := P2.all is deep.

    Shallow is fine. Deep is troublesome. Data structures have nodes at
    different depths.


    2. You can always qualify the type and/or operation. E.g. in Ada it is denoted as T'(E). T is the type, E is expression/object.

    Then one gets into the Algol68 approach of "resolve until you get a type match". If a programmer has three levels of declared-automatic reference before getting to the target, i.e.

    p -> 1 -> 2 -> target

    then (to repeat, for declared-automatic dereference) a use of p would
    normally access the target. But the programmer might want to access p or
    1 or 2 in different circumstances.

    No worries if you don't know what I mean. It's just something I've got
    to resolve.


    Anyway, it is not specific to pointers. Automatic dereferencing is subtyping. So if you do not want to build in it in the language, you do
    not need to if you allowed the programmer to declare it a subtype.

    Everything's subtyping these days! :-o
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Mon Dec 5 23:19:44 2022
    From Newsgroup: comp.lang.misc

    On 05/12/2022 14:45, Andy Walker wrote:
    On 04/12/2022 16:37, James Harris wrote:
    What do you think of as unsafe and do you have an example of lossy
    but 'safe'?

        "Unsafe", to me, means that the behaviour is or could be undefined or could cause an [unexpected/untrapped] exception;  there is lots of that around, esp if you forget the elementary checks [such as pointers being non-null, arithmetic not overflowing, indexes being within bounds, ...]. Converting "int -> real" is safe [esp if, as with Algol, the Standard
    defines it so -- RR 2.1.3.1e] but may be lossy [as discussed earlier].

    OK.


         How, in your mind, does "f(i)" [with an implicit conversion of >>> the parameter to type "real"] differ from "f((real) i)" [where the
    parameter is explicitly cast]?  The result is the same in both cases.
    I've done very little on floats so far but I can say I don't intend
    to support implicit conversions to float. Nor would I have just one
    float type. The conversion you mention would have to be more explicit
    such as either of
       f(<float 32>(i))
       f(<float 64>(i))

        If the implicit conversion is possible, then one of your two explicit conversions is wrong.  I don't intend to write an essay here,
    but this is an area where Algol works very hard to make function calls
    such as "f(i)" work smoothly while also permitting operands the usual freedoms so that you can add all numeric types with the operator "+".

    The implicit conversion would not be possible. Programmers would have to specify the conversion because they would be changing the type. I see
    that you prefer a different approach and that's fine but I prefer such
    changes to be manifest in the syntax, especially if they can be lossy.
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Mon Dec 5 23:28:57 2022
    From Newsgroup: comp.lang.misc

    On 05/12/2022 15:05, Andy Walker wrote:
    On 04/12/2022 17:18, James Harris wrote:
    On 25/11/2022 11:28, Andy Walker wrote:
    On 23/11/2022 19:52, Dmitry A. Kazakov wrote:

    That does not compute. 2 := j; is illegal merely because 2 (after
    overloading resolution to int) is a constant and the first argument
    of := is mutable.
         Yes.  That shows that "2" and "j" have different types.
    Does it? They may have different protections (one is intrinsically
    read-only but being wrongly used in a context in which something
    writeable is required) but when did protections become part of the
    type?

        Not to do with "protection".  Unless your [or your language's] model of the computer is seriously weird, "j" is some allocated storage
    in the computer which contains an integer [which in the present case is
    2].  Storage is not the same sort of object as the thing it contains. Read-only storage is still not the same as its contents.  Covering up
    that distinction, as with C's talk of "lvalues" and "rvalues" is not,
    IMO, helpful.  Two objects of the same type ought to be syntactically
    and semantically interchangeable [modulo some quibbles not relevant
    here].

    Both j and 2 can be modelled as 'storage' and assigned a location. In
    fact, that gives a consistent picture of operands. While some literals
    (esp small integers) can be placed in the program code, in the general
    case (let's call them large literals) they won't fit and would need to
    be placed in storage. So why not initially place them all there?

    Even if the compiler initially assigns locations for all literals the optimiser can ensure that small integers are moved out of storage and
    into the program text so nothing is lost. But the models for j and 2 as
    seen in the program text can be the same.
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Bart@bc@freeuk.com to comp.lang.misc on Tue Dec 6 00:25:27 2022
    From Newsgroup: comp.lang.misc

    On 05/12/2022 22:41, James Harris wrote:
    On 04/12/2022 23:29, Bart wrote:
    On 04/12/2022 20:14, James Harris wrote:
    On 04/12/2022 17:54, Bart wrote:

    ...

    The same calculator has a dedicated button for 'squared', and
    another for x^n. My languages also have dedicated operators for square
    root, square, and exponentiation.

    You have

       square(x)

    as well as

       x ** 2

    ?

    Yes. I've always had it, while ** only came along later. It's also an
    easy optimisation for a primitive compiler.

    Besides, some implementations of ** (or pow() in C) are designed for
    and will use floating point.

    Even with integer **, you have to cross your fingers a little and hope
    that a compiler (even yours) will optimise X**2 into a simple
    multiply. With sqr(X) you can be 100% confident.

    Why could a compiler not be guaranteed to convert x ** 2 into x * x?
    Isn't it a traditional strength reduction which would evaluate to
    exactly the same answer?
    Laziness? I doubt my compilers bother with that particular reduction,
    and testing them now, they don't.

    Why should I when I already have sqr? (I might want to look at x**0 and
    x**1, but any use of ** is uncommon.)

    Neither does CPython 3.10. C of course doesn't even have integer power
    ops; however, pow(a,2) gets optimised with gcc -O1 or above. With
    integers, that still involves converting to and from floats.
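
    For what it's worth, with an integer ** the two spellings must agree
    exactly; the only question is whether the compiler turns ** 2 into a
    single multiply. A small sketch (in Ada, which has an integer **; the
    names are only illustrative):

        with Ada.Text_IO; use Ada.Text_IO;
        procedure Sqr_Demo is
           function Sqr (X : Integer) return Integer is
           begin
              return X * X;
           end Sqr;
           X : constant Integer := 12_345;
        begin
           --  Integer ** is exact, so X ** 2 and Sqr (X) must agree;
           --  reducing ** 2 to one multiply is purely an optimisation.
           Put_Line (Boolean'Image (X ** 2 = Sqr (X)));
        end Sqr_Demo;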

    You can of course have a language where you are expected to do x**2 and
    x**0.5 instead of sqr(x) and sqrt(x).

    You can also have one where you do exp(log(x)*2) and exp(log(x)*0.5)
    instead of x**2 and x**0.5.

    It's about convenience and also making your intentions absolutely clear.

    If cube roots were that common, would you write that as x**0.33333333333
    or x**(1.0/3.0)? There you would welcome cuberoot(x)!


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 6 09:09:56 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
    On 2022-12-04 23:21, James Harris wrote:
    On 04/12/2022 21:40, Dmitry A. Kazakov wrote:
    On 2022-12-04 21:02, James Harris wrote:

    ...

    I never know what people mean by subtyping. To some it seems to be
    a smaller range of another type as in 'tiny' below.

       small is range 0..999
       tiny is small range 0..99

    These are new types. The correct syntax is

        subtype Small is Integer range 0..999;

    Now Small is subtype of Integer.

    OK. What is an operation which is out of range as in

       u : integer;
       v : small := 999;

       u := v + 1;
       v := v + 1;

    defined to leave in u and v?

    "+" is inherited from Integer. Thus

        v + 1 = Integer'(v) + Integer'(1) = Integer'(1000)

    That seems odd when v's limit is 999.

    That is v's limit, not the limit of v + 1 which is Integer. BTW,

    v := v + 1;

    Gives Constraint_Error, but not because of +, because of :=.

    [ The technical term for the subject is covariance vs. contravariance.
    If + were covariant in its result, then, inherited by Small, it would
    return Small and so v + 1 would raise an exception. But + is
    contravariant, and the result remains Integer. The merits of covariance
    vs. contravariance are a story for another day. Ada's choice for numbers
    is motivated by the design principle that when the result of an
    expression is mathematically correct, it is not an error. ]
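
    Spelt out as a complete program, the example above behaves like this
    (a minimal sketch; the procedure and variable names are only
    illustrative):

        with Ada.Text_IO; use Ada.Text_IO;
        procedure Small_Demo is
           subtype Small is Integer range 0 .. 999;
           U : Integer;
           V : Small := 999;
        begin
           U := V + 1;   --  fine: "+" yields Integer, and 1000 fits Integer
           Put_Line (Integer'Image (U));
           V := V + 1;   --  the "+" is fine; the assignment back into Small
                         --  raises Constraint_Error
        exception
           when Constraint_Error =>
              Put_Line ("Constraint_Error on V := V + 1");
        end Small_Demo;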

    C promotions are subtypes.

    Even C's promotions of int to float?

    Sure. If you implicitly convert int to float in some operation f then
    int is a subtype of float in f.

    Type /compatibility/ is an open issue for me. I gather that Ada
    allows multiple subtypes of integer all to be compatible with each
    other even if one is not derived from the other; that's different
    from inherited compatibility as neither is a superclass of the other.

    Subtyping is a transitive relation. A<:B<:C. Ada's subtype introduces
    both Small<:Integer and Integer<:Small. Because Small exports its
    operations to Integer. E.g.

        procedure Foo (X : Small);

    Now

        Y : Integer;

        Foo (Y); -- This is OK, Small is a supertype of Integer

    So Small is both a subtype and a supertype of Integer? That seems a bit
    mad.

    Why, if that is desired effect? You want Foo (Y) illegal?
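
    For the record, what is legal here can be spelt out as a small program
    (a sketch; the names are only illustrative):

        with Ada.Text_IO; use Ada.Text_IO;
        procedure Super_Demo is
           subtype Small is Integer range 0 .. 999;
           procedure Foo (X : Small) is
           begin
              Put_Line ("Foo got" & Integer'Image (X));
           end Foo;
           Y : Integer := 500;
        begin
           Foo (Y);    --  legal: Y is merely checked against Small's range
           Y := 1_000;
           Foo (Y);    --  still compiles, but raises Constraint_Error here
        exception
           when Constraint_Error =>
              Put_Line ("Constraint_Error: 1000 is not a Small value");
        end Super_Demo;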

    If I designed a new language I would have ad-hoc sub- and super-type
    separated and not require same representation. More like C++
    conversion operators.

    Not being familiar with C++ I'm not sure what that last paragraph means.
    ATM I feel there is a 'landing zone' for type compatibility but I cannot
    yet make it out.

    In C++ you can

    class T
    {
    public:
        operator int ();
        ...
    };

    then

    T X;
    int Y = 1 + X;  // implicit conversion of X to int

    Effectively class T is a subtype of int.

    Putting aside access modes (which I see as orthogonal to types)
    what other implicit conversions would you see the absence of as
    intolerable?

    Nope, not putting them aside. Again, definition:

        type = values + operations

    Does 'in T' have the same operations as 'in out T'? No; ergo, it is
    another type.

    I won't go there. You and I have debated the meaning of 'type' before
    and we are not in total agreement.

    When you invent something better than the standard definition values +
    operations, let me know... (:-))

    Didn't you used to say values only?

    Me? Never.

    At least you've now added operations
    so you are getting there. ;-)

    Good. Now you see why in T and in out T cannot be the same type?

    Another implicit conversion you mentioned yourself: all inherited
    methods in OO. If S inherits Foo from T you need not explicitly
    convert to T when calling Foo on an S.

    Yet another example is transparent pointers. E.g.

        type Ptr is access String; -- pointer to String
        P : Ptr := new String'("xy");
    begin
        P (1..2) := "ab";

    That is OK, Ptr is a subtype of String in array indexing operations.
    Implicit type conversion here is pointer dereferencing.

    FWIW I would explicitly dereference such a pointer with

       Ptr*

    I have thought about declaring a reference which is automatically
    dereferenced a certain number of times so that it can be treated as
    an object but haven't bottomed out the concomitant issues yet such as
    when the programmer wants to mention the reference without all that
    automatic dereferencing how does he specify to?

    1. Such cases do not exist. The only one is deep vs. shallow copy in
    assignment. In Ada assignment is not inherited. So P1 := P2 is
    shallow. P1.all := P2.all is deep.
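
    Both points, the implicit dereference and shallow vs. deep copy, fit in
    one small sketch (the names are only illustrative):

        with Ada.Text_IO; use Ada.Text_IO;
        procedure Ptr_Demo is
           type Ptr is access String;
           P1 : Ptr := new String'("xy");
           P2 : Ptr := new String'("no");
        begin
           P1 (1 .. 2) := "ab";  --  implicit dereference: same as P1.all (1 .. 2)
           P1 := P2;             --  shallow: P1 and P2 now designate the same String
           P1.all := "ok";       --  deep: overwrites the designated String
           Put_Line (P2.all);    --  prints "ok", shared via the shallow copy
        end Ptr_Demo;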

    Shallow is fine. Deep is troublesome. Data structures have nodes at different depths.

    Nope. It seems that you still do not accept types as a fundamental
    concept. When you assign, assignment is an operation like any other. How
    it copies, or whether it copies at all, is of no interest to you. You
    just call it. Done.

    When pointer to T is a subtype of T in assignment then you have to
    decide which implementation you take (overriding vs. inheriting).

    2. You can always qualify the type and/or operation. E.g. in Ada it is
    denoted as T'(E). T is the type, E is expression/object.

    Then one gets into the Algol68 approach of "resolve until you get a type match". If a programmer has three levels of declared-automatic reference before getting to the target, i.e.

      p -> 1 -> 2 -> target

    then (to repeat, for declared-automatic dereference) a use of p would normally access the target. But the programmer might want to access p or
    1 or 2 in different circumstances.

    I still see no problem. Whatever object you want, it has a type and
    that type has a name. Use the name in the qualifier.

    Anyway, it is not specific to pointers. Automatic dereferencing is
    subtyping. So if you do not want to build it into the language, you
    need not, provided you allow the programmer to declare it a subtype.

    Everything's subtyping these days! :-o

    Implicit conversions are.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Tue Dec 6 16:30:10 2022
    From Newsgroup: comp.lang.misc

    On 05/12/2022 23:28, James Harris wrote:
    [I wrote:]
    [...] Two objects of the same type ought to be syntactically
    and semantically interchangeable [modulo some quibbles not relevant
    here].
    Both j and 2 can be modelled as 'storage' and assigned a location. In
    fact, that gives a consistent picture of operands. While some
    literals (esp small integers) can be placed in the program code, in
    the general case (let's call them large literals) they won't fit and
    would need to be placed in storage. So why not initially place them
    all there?

    /Implementation/ details don't affect types! Yes, you can if you
    like implement "2" as

    int secret := 2;

    but that gives "secret" and "j" the same type, not "j" and "2". The fact
    will remain that there are many contexts in which you can use "j" but not
    "2" [and you can't, as a programmer, use "secret", because it's secret].

    Even if the compiler initially assigns locations for all literals the optimiser can ensure that small integers are moved out of storage and
    into the program text so nothing is lost. But the models for j and 2
    as seen in the program text can be the same.

    What the compiler and optimiser do is up to them. But if the
    /program text/ fails to distinguish an integer from storage containing
    an integer, it's going to make programming "interesting". As in those languages where "2" is just an identifier, and you /can/ assign "2 := 3"
    so that "2 * 2 == 9" [unless "9" has also been re-defined!]. I hope
    you're not going down that route.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Grieg
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Tue Dec 6 21:09:43 2022
    From Newsgroup: comp.lang.misc

    On 02/12/2022 23:07, Dmitry A. Kazakov wrote:
    You claimed that subtyping relationship is equivalent to explicit
    conversions.
         I have checked back through my contributions to this thread and
    can find nothing even remotely similar to such a claim.
    Sorry, but I then don't understand your point. Conversions are
    implicit in both cases = arguments in expression appear as is. What's
    the objection again?

    I object to you saying that I claimed X when there is nothing even remotely resembling X in the thread. Nor can I parse your second sentence above into anything sensible; what are the "both cases"?

    Whether a language has a "multitude of integer
    and real types" is also something that varies between languages.
    No.
         You surely mean "yes".

    [IOW, you can surely not be denying that some languages have many
    and some have rather few integer/real types?]

    Most early languages had only one integer
    type and one real type;  that is not a "multitude".
    This subthread was started by James about designing a *new* language.
    If you said that James should have looked no further than Excel,
    which had no integer type, then I would not care to respond. Your
    answer suggested that implicit conversion could somehow exist in a
    moderately *modern* and reasonably typed language.

    Of course it can. You yourself [at the bottom of your article]
    point us at the implicit conversions in the 2023 C++ standard; and any language loosely related to C has them in its decays and promotions.

    It is a proposition
        IF there are multiple numeric types
       THEN implicit conversions do not fly
    You asked why, I explained.
         Another sub-thread where you seem to have gone off at a tangent
    from whatever you think I may have been asking.  But as a proposition,
    that is clearly false [unless you are claiming that /only/ implicit
    conversions are to be allowed], as demonstrated in this thread with
    specific examples.
    I have no idea what you mean.

    You produced a proposition. C and C++, as well as more modern languages related to them, show the proposition to be false.

    [...]
         You snipped (b) and (c);  what you seem to want -- it's always hard
    to be sure -- is perfectly possible in A68G, but the language as supplied
    contains only the one "sqrt" function.
    Why then each line has sqrt spelt differently?

    Because A68G, as supplied, contains only one "sqrt" function [it also has "longsqrt", "csqrt", "shortshortsqrt", "longlonglonglonglongsqrt" and many others, but they are different functions (different, though related, names, different return types, different parameter types)].

    To qualify it shall be this:

    print (("complex: ", sqrt(2), newline,
            "complex: [with im part] ", sqrt(2), newline,
            "single: ", sqrt(2), newline,
            "double: ", sqrt(2), newline,
            "long double: ", sqrt(2), newline,
            "fixed: [eg] ", sqrt(2), newline))
    That is *not* possible in any language.

    But that's not what you /said/ you wanted, and it's unreasonable. I don't want to use a strongly-typed language where you can't deduce the type
    of each construct. As usual, you create a puzzle by writing unfathomable sentences and expecting others to interpret them.

    [...]
    My example is impossible to implement due to ambiguities. The wrong
    answer you gave is trivial to have in any language. E.g. in Ada
       function sqrt (X : Integer) return Float is
       begin
          return Ada.Numerics.Elementary_Functions.sqrt (Float (X));
       end sqrt;
       Put_Line (sqrt(2)'Image);

    If that's "trivial" compared with the A68G equivalent, I'd hate
    to see some non-trivial code.

    [... snip further code ...]
    Then without learning Hungarian I could write each sqrt as sqrt.
       Put_Line (Float'(sqrt(2))'Image);
       Put_Line (Long_Float'(sqrt(2))'Image);

    Why do you regard "Long_Float'(sqrt(2))" as more readable than "longsqrt(2)" [which works in A68G with no further "trivial" code to
    write]? You've just mangled names in a different and less clear way.
    Algol "coercions" [implicit casts] just work; they're safe, accord
    with common sense, and avoid the programmer having to write explicit
    coercions for no other reason than so that the compiler can check that
    you got them all right.

    [...]
    https://www.ibm.com/docs/en/zos/2.1.0?topic=conversions-conversion-functions
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Grieg
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 6 23:06:55 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-06 22:09, Andy Walker wrote:
    On 02/12/2022 23:07, Dmitry A. Kazakov wrote:
    You claimed that subtyping relationship is equivalent to explicit
    conversions.
         I have checked back through my contributions to this thread and
    can find nothing even remotely similar to such a claim.
    Sorry, but I then don't understand your point. Conversions are
    implicit in both cases = arguments in expression appear as is. What's
    the objection again?

        I object to you saying that I claimed X when there is nothing even remotely resembling X in the thread.  Nor can I parse your second sentence above into anything sensible;  what are the "both cases"?

    Built-in conversions vs. user-defined subtypes.

    Whether a language has a "multitude of integer
    and real types" is also something that varies between languages.
    No.
         You surely mean "yes".

        [IOW, you can surely not be denying that some languages have many and some have rather few integer/real types?]

    There exist some languages of no interest in this particular discussion,
    as far as I understood James' question and from what he described
    about his type system. He will not have a single integer type.

    Most early languages had only one integer
    type and one real type;  that is not a "multitude".
    This subthread was started by James about designing a *new* language.
    If you said that James should have looked no further than Excel,
    which had no integer type, then I would not care to respond. Your
    answer suggested that implicit conversion could somehow exist in a
    moderately *modern* and reasonably typed language.

        Of course it can.  You yourself [at the bottom of your article] point us at the implicit conversions in the 2023 C++ standard;  and any language loosely related to C has them in its decays and promotions.

    They are *user-defined*, which was the whole point about how implicit conversions could be *reasonably* introduced, if at all.

    It is a proposition
        IF there are multiple numeric types
       THEN implicit conversions do not fly
    You asked why, I explained.
         Another sub-thread where you seem to have gone off at a tangent
    from whatever you think I may have been asking.  But as a proposition,
    that is clearly false [unless you are claiming that /only/ implicit
    conversions are to be allowed], as demonstrated in this thread with
    specific examples.
    I have no idea what you mean.

        You produced a proposition.  C and C++, as well as more modern languages related to them, show the proposition to be false.

    No, it is true for C and C++ where implicit conversions in the form of
    type promotions are considered inherently unsafe and still do not remove ambiguities.

    To qualify it shall be this:

    print (("complex: ", sqrt(2), newline,
             "complex: [with im part] ", sqrt(2), newline,
             "single: ", sqrt(2), newline,
             "double: ", sqrt(2), newline,
             "long double: ", sqrt(2), newline,
             "fixed: [eg] ", sqrt(2), newline))
    That is *not* possible in any language.

        But that's not what you /said/ you wanted, and it's unreasonable.

    I said that implicit conversions lead to ambiguities. All that, provided
    sqrt is spelt "sqrt", print is spelt "print", 2 is spelt "2", etc. If
    in your language they are called differently for each possible type, or
    maybe depend on the line number, then my deepest condolences, you left
    the race before it even started...

    Then without learning Hungarian I could write each sqrt as sqrt.
        Put_Line (Float'(sqrt(2))'Image);
        Put_Line (Long_Float'(sqrt(2))'Image);

        Why do you regard "Long_Float'(sqrt(2))" as more readable than "longsqrt(2)"

    My point was about inevitable ambiguities. Any properly designed
    language provides tools to resolve such ambiguities without resorting to
    silly naming games.
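
    In Ada, for instance, every overload can be spelt sqrt and a qualifier
    picks the one you mean. A minimal sketch, assuming the standard
    Elementary_Functions packages are available (the procedure name is only
    illustrative):

        with Ada.Text_IO;                            use Ada.Text_IO;
        with Ada.Numerics.Elementary_Functions;
        with Ada.Numerics.Long_Elementary_Functions;
        procedure Sqrt_Demo is
           function Sqrt (X : Integer) return Float is
           begin
              return Ada.Numerics.Elementary_Functions.Sqrt (Float (X));
           end Sqrt;
           function Sqrt (X : Integer) return Long_Float is
           begin
              return Ada.Numerics.Long_Elementary_Functions.Sqrt (Long_Float (X));
           end Sqrt;
        begin
           --  Same name for both; the qualified type resolves the overloading.
           Put_Line (Float'Image (Float'(Sqrt (2))));
           Put_Line (Long_Float'Image (Long_Float'(Sqrt (2))));
        end Sqrt_Demo;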
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Wed Dec 7 16:42:20 2022
    From Newsgroup: comp.lang.misc

    On 06/12/2022 00:25, Bart wrote:

    ...

    You can of course have a language where you are expected to do x**2 and x**0.5 instead of sqr(x) and sqrt(x).

    Yes.


    You can also have one where you do exp(log(x)*2) and exp(log(x)*0.5)
    instead of x**2 and x**0.5.

    It's about convenience and also making your intentions absolutely clear.

    Yes. The only question is whether square root should be given special treatment or not.


    If cube roots were that common, would you write that as x**0.33333333333
    or x**(1.0/3.0)? There you would welcome cuberoot(x)!

    In language terms I think I'd go for x ** (1.0 / 3.0). If a programmer
    wanted to put it in a function there would be nothing stopping him.
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Wed Dec 7 17:53:41 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-07 17:42, James Harris wrote:
    On 06/12/2022 00:25, Bart wrote:

    If cube roots were that common, would you write that as
    x**0.33333333333 or x**(1.0/3.0)? There you would welcome cuberoot(x)!

    In language terms I think I'd go for x ** (1.0 / 3.0). If a programmer wanted to put it in a function there would be nothing stopping him.

    I think the point Bart was making was that 1/3 has no exact
    representation in binary floating-point numbers. If cube root used a
    special algorithm, you would have trouble deciding when to switch to it.
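
    A trivial sketch of the point: 1.0/3.0 is rounded to a nearby machine
    number, so even a perfect cube need not come out exactly (the procedure
    name is only illustrative):

        with Ada.Text_IO;                        use Ada.Text_IO;
        with Ada.Numerics.Elementary_Functions;  use Ada.Numerics.Elementary_Functions;
        procedure Cube_Demo is
           X : constant Float := 8.0;
        begin
           --  "**" with a Float exponent comes from Elementary_Functions;
           --  the exponent is not exactly one third, so the result is only
           --  an approximation to the cube root of 8.0.
           Put_Line (Float'Image (X ** (1.0 / 3.0)));
        end Cube_Demo;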
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Wed Dec 7 17:42:35 2022
    From Newsgroup: comp.lang.misc

    On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
    On 2022-12-04 23:21, James Harris wrote:
    On 04/12/2022 21:40, Dmitry A. Kazakov wrote:
    On 2022-12-04 21:02, James Harris wrote:

    I never know what people mean by subtyping.

    ...

    C promotions are subtypes.

    Even C's promotions of int to float?

    Sure. If you implicitly convert int to float in some operation f then
    int is a subtype of float in f.

    int may have a defined conversion to float but that doesn't make it
    below ('sub') the other even for a specific operation. Is this
    OO-specific terminology and unrelated to other programming?



    Type /compatibility/ is an open issue for me. I gather that Ada
    allows multiple subtypes of integer all to be compatible with each
    other even if one is not derived from the other; that's different
    from inherited compatibility as neither is a superclass of the other.

    Subtyping is a transitive relation. A<:B<:C. Ada's subtype introduces
    both Small<:Integer and Integer<:Small. Because Small exports its
    operations to Integer. E.g.

        procedure Foo (X : Small);

    Now

        Y : Integer;

        Foo (Y); -- This is OK, Small is a supertype of Integer

    So Small is both a subtype and a supertype of Integer? That seems a
    bit mad.

    Why, if that is desired effect? You want Foo (Y) illegal?

    Conversions are OK. Saying each is a subtype of the other seems to be
    abusing the English language.

    ...

    Putting aside access modes (which I see as orthogonal to types)
    what other implicit conversions would you see the absence of as
    intolerable?

    Nope, not putting them aside. Again, definition:

        type = values + operations

    Does 'in T' have the same operations as 'in out T'? No; ergo, it is
    another type.

    I won't go there. You and I have debated the meaning of 'type'
    before and we are not in total agreement.

    When you invent something better than the standard definition values
    + operations, let me know... (:-))

    Didn't you used to say values only?

    Me? Never.

    OK. That's surprising.

    ...

    Anyway, it is not specific to pointers. Automatic dereferencing is
    subtyping. So if you do not want to build it into the language, you
    need not, provided you allow the programmer to declare it a subtype.

    Everything's subtyping these days! :-o

    Implicit conversions are.

    Where was it decided that implicit conversions implied a subtype
    relationship?
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Wed Dec 7 18:09:42 2022
    From Newsgroup: comp.lang.misc

    On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:

    ...

    At least you've now added operations so you are getting there. ;-)

    Good. Now you see why in T and in out T cannot be the same type?

    No. Types are not the only control in a programming language. An
    /object/ has a type. Just because in some contexts one is not allowed to modify it (perhaps simply because of promising not to do so) changes
    neither its type nor the operations that can be applied to it.

    An apple is an edible fruit. Someone may promise not to eat it but it is
    still an edible fruit.

    ...

    2. You can always qualify the type and/or operation. E.g. in Ada it
    is denoted as T'(E). T is the type, E is expression/object.

    Then one gets into the Algol68 approach of "resolve until you get a
    type match". If a programmer has three levels of declared-automatic
    reference before getting to the target, i.e.

       p -> 1 -> 2 -> target

    then (to repeat, for declared-automatic dereference) a use of p would
    normally access the target. But the programmer might want to access p
    or 1 or 2 in different circumstances.

    I still see no problem. Whatever object you want, it has a type and
    that type has a name. Use the name in the qualifier.

    In C terms the above chain has four objects:

    p
    *p
    **p
    ***p

    where the first three are pointers and ***p is the target. I could use
    the same simple model but there are situations in which it may be
    convenient for the programmer to declare a reference, p, which will be
    treated as the target so that writing

    p + 1

    would mean

    target + 1

    Call it auto dereferencing. I showed three auto dereferences to make a
    point though there would normally be only one.

    What I was saying was that if I allow such declarations then the
    question arises over what a programmer could write if instead of the
    target he wanted to access one of the references in the chain. I guess
    it may be something like

    refchain(p, 0) ;p itself
    refchain(p, 1) ;1 away from p
    refchain(p, 2) ;2 away from p
    refchain(p, -1) ;one before the target

    The last two would both refer to object "2" in the chain

    p -> 1 -> 2 -> target
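
    (For comparison, here is the same chain with every step written out
    explicitly, in standard Ada; the type and object names are only
    illustrative.)

        with Ada.Text_IO; use Ada.Text_IO;
        procedure Chain_Demo is
           type Int_Ptr   is access Integer;
           type Int_Ptr_2 is access Int_Ptr;
           type Int_Ptr_3 is access Int_Ptr_2;
           Target : constant Integer := 42;
           R2 : constant Int_Ptr   := new Integer'(Target);
           R1 : constant Int_Ptr_2 := new Int_Ptr'(R2);
           P  : constant Int_Ptr_3 := new Int_Ptr_2'(R1);
        begin
           --  Four distinct objects: P, P.all, P.all.all and P.all.all.all.
           --  "p + 1" meaning "target + 1" is the fully dereferenced form:
           Put_Line (Integer'Image (P.all.all.all + 1));
        end Chain_Demo;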
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Wed Dec 7 20:37:35 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-07 18:42, James Harris wrote:
    On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
    On 2022-12-04 23:21, James Harris wrote:
    On 04/12/2022 21:40, Dmitry A. Kazakov wrote:
    On 2022-12-04 21:02, James Harris wrote:

    I never know what people mean by subtyping.

    ...

    C promotions are subtypes.

    Even C's promotions of int to float?

    Sure. If you implicitly convert int to float in some operation f then
    int is a subtype of float in f.

    int may have a defined conversion to float but that doesn't make it
    below ('sub') the other even for a specific operation.

    Sub is not necessarily below. Compare: subset, subzero. Here it means included, less.

    Is this
    OO-specific terminology and unrelated to other programming?

    It is related to types.

    Type /compatibility/ is an open issue for me. I gather that Ada
    allows multiple subtypes of integer all to be compatible with each
    other even if one is not derived from the other; that's different
    from inherited compatibility as neither is a superclass of the other.
    Subtyping is a transitive relation. A<:B<:C. Ada's subtype
    introduces both Small<:Integer and Integer<:Small. Because Small
    exports its operations to Integer. E.g.

        procedure Foo (X : Small);

    Now

        Y : Integer;

        Foo (Y); -- This is OK, Small is a supertype of Integer

    So Small is both a subtype and a supertype of Integer? That seems a
    bit mad.

    Why, if that is desired effect? You want Foo (Y) illegal?

    Conversions are OK. Saying each is a subtype of the other seems to be abusing the English language.

    Why?

    Firstly neither sub- nor type are English words! (:-))

    Secondly subtype means a part of a type. Which part? The inherited
    operations, the substitutable values. OK?

    Anyway, it is not specific to pointers. Automatic dereferencing is
    subtyping. So if you do not want to build it into the language, you
    need not, provided you allow the programmer to declare it a subtype.

    Everything's subtyping these days! :-o

    Implicit conversions are.

    Where was it decided that implicit conversions implied a subtype relationship?

    Because if sqrt(2) is OK, then 2 looks as if 2 (integer) were 2.0
    (float). You can substitute integer for float in sqrt. This is
    Liskov's definition of subtyping (ignoring behavior).
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Wed Dec 7 20:39:55 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-07 19:09, James Harris wrote:
    On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:

    ...

    At least you've now added operations so you are getting there. ;-)

    Good. Now you see why in T and in out T cannot be the same type?

    No. Types are not the only control in a programming language. An
    /object/ has a type.

    If you want to use the term for something else you are free to do so.
    But if you accept the standard definition you must also accept all the consequences of it.

    Just because in some contexts one is not allowed to
    modify it (perhaps simply because of promising not to do so) changes
    neither its type nor the operations that can be applied to it.

    An apple is an edible fruit. Someone may promise not to eat it but it is still an edible fruit.

    1. This is obviously wrong. For a cat apple is not edible.

    2. This does not in the least resemble the case, which is about operations
    (properties) added/removed. E.g. a car without wheels is still a car,
    yet it is one you might treat a bit differently from one with wheels.

    2. You can always qualify the type and/or operation. E.g. in Ada it
    is denoted as T'(E). T is the type, E is expression/object.

    Then one gets into the Algol68 approach of "resolve until you get a
    type match". If a programmer has three levels of declared-automatic
    reference before getting to the target, i.e.

       p -> 1 -> 2 -> target

    then (to repeat, for declared-automatic dereference) a use of p would
    normally access the target. But the programmer might want to access p
    or 1 or 2 in different circumstances.

    I still see no problem. Whatever object you want, it has a type and
    that type has a name. Use the name in the qualifier.

    In C terms the above chain has four objects:

      p
      *p
      **p
      ***p

    where the first three are pointers and ***p is the target.

    Good to them.

    Do they have types? Name them! Let them be T, T1, T2, T3. Let X be declared

    X : T3;

    Let all types T, T1, T2, T3 have operation named Bar. You want to call
    Bar of T1 on X? Just say so:

    Bar (T1'(X))

    IF T3 is a subtype of T1, X will be dereferenced to T2, then to T1
    because that is the conversion attached to the subtyping relationship
    and because subtyping is transitive: T3<:T2<:T1<:T.

    IF it is not, you get a type error.

    What's the problem, again?

    I could use
    the same simple model but there are situations in which it may be
    convenient for the programmer to declare a reference, p, which will be treated as the target so that writing

      p + 1

    would mean

      target + 1

    Call it auto dereferencing.

    No, call it subtyping! (:-))

    What I was saying was that if I allow such declarations then the
    question arises over what a programmer could write if instead of the
    target he wanted to access one of the references in the chain.

    He would qualify the type. Each pointer type is a type. Each type has a
    name. Each name can be used to disambiguate object's type. What's the
    problem?

    I guess
    it may be something like

      refchain(p, 0) ;p itself
      refchain(p, 1) ;1 away from p
      refchain(p, 2) ;2 away from p
      refchain(p, -1) ;one before the target

    The last two would both refer to object "2" in the chain

      p -> 1 -> 2 -> target

    Looks disgusting, but that was the intent, right? (:-)) Anyway it is
    beside the point. See above.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Fri Dec 9 21:48:18 2022
    From Newsgroup: comp.lang.misc

    On 06/12/2022 22:06, Dmitry A. Kazakov wrote:
    [...] Conversions are implicit in both cases = arguments in
    expression appear as is.
    [...] Nor can I parse your second sentence above into anything
    sensible; what are the "both cases"?
    Built-in conversions vs. user-defined subtypes.

    A /user-defined/ subtype is not "implicit", any more than "x"
    is implicit merely because the type of "x" is defined in its defining
    instance and is not repeated every time "x" is used.

    [...] Your answer suggested than implicit conversion could
    somehow exist in a moderately *modern* and reasonably typed
    language.
    Of course it can. You yourself [at the bottom of your article]
    point us at the implicit conversions in the 2023 C++ standard; and
    any language loosely related to C has them in its decays and
    promotions.
    They are *user-defined*, which was the whole point about how
    implicit conversions could be *reasonable* introduced if any.

    The integer promotions and conversions [eg N2478, 6.3.1.1.1,2] of
    C and the lvalue conversion [6.3.2.1.2] are not user-defined. If you want
    to add user-defined conversions to a program, then indeed there are great potential difficulties in setting out the formal rules that make that
    possible and unambiguous, but that's quite another matter.

    I said that implicit conversions lead to ambiguities. All that, provided
    sqrt is spelt "sqrt", print is spelt "print", 2 is spelt "2", etc.

    Nothing to do with "sqrt" except as an example. Algol went to
    great lengths to avoid ambiguities in its implicit conversions. Feel
    free to try to find one; but AFAIK none have been found in the best
    part of half a century. Sadly, other languages have not been defined
    as formally as Algol, with inevitable consequences.

    If in your language they are called differently for each possible
    type, or maybe depend on the line number, then my deepest
    condolences, you left the race before it even started...

    In Algol and C [in particular; other languages are available]
    an applied instance of an identifier always relates back to a defining occurrence, where the type of the identifier is specified. You may
    prefer languages where types are more vague. For procedures, the type
    includes the types of the result and of any parameters. Different type, different identifier. Anything else would be confusing. So I still
    don't see why you are surprised by the fact that, in Algol, the square
    root function that takes a real parameter and returns a real is called
    "sqrt" while the square root function that takes and returns "long real"
    is called "longsqrt", and I'll leave you to guess what the parameter and
    return types of "shortshortsqrt" and "longlonglonglongsqrt" are. Even
    less do I see why you are confused by those and apparently prefer to
    write out a type specifier with each call. But if you want it in
    Algol, you may have it. You like:

    Then without learning Hungarian I could write each sqrt as sqrt.
    Put_Line (Float'(sqrt(2))'Image); Put_Line
    (Long_Float'(sqrt(2))'Image);

    So here, just for you, is some [uncommented, sorry!] Algol:

    $ cat Dmitry2.a68g
    MODE U = UNION (INT, REAL, LONG REAL),
    OP UI = (U x) INT: ( x | (INT i): i | ROUND UR x ),
    UR = (U x) REAL: ( x | (INT i): i, (REAL r): r | SHORTEN UL x ),
    UL = (U x) LONG REAL: ( x | (LONG REAL l): l | UR x);
    PROC mysqrt = (U x) U: ( x | (LONG REAL l): long sqrt (l) | sqrt (UR x) );

    ( PROC (U) U sqrt = mysqrt;
    print (( UI sqrt (17), UR sqrt (17), newline, UL sqrt (LONG 17.0), newline)) )
    $

    [note the last line], with output:

    $ a68g ---no-warnings Dmitry2.a68g
    +4+4.12310562561766e +0
    +4.1231056256176605498214098559740770e +0
    $

    Why do you regard "Long_Float'(sqrt(2))" as more readable than
    "longsqrt(2)"
    My point was about inevitable ambiguities. Any properly designed
    language provides tools to resolve such ambiguities without resorting
    to silly naming games.

    Algol has no such ambiguities, so they aren't "inevitable"; and
    you have still not explained why "Long_Float'(sqrt(2))" is good but "longsqrt(2)" is a "silly naming game". Algol provides long and short
    versions of a wide variety of functions, operators and constants, making
    the life of a programmer much simpler. In particular, much simpler than
    the "Dmitry2" program above which is in essence the same as the Ada you
    gave in your previous article.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Strauss
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 11 17:00:40 2022
    From Newsgroup: comp.lang.misc

    On 07/12/2022 19:37, Dmitry A. Kazakov wrote:
    On 2022-12-07 18:42, James Harris wrote:
    On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
    On 2022-12-04 23:21, James Harris wrote:

    ...

    Type /compatibility/ is an open issue for me. I gather that Ada
    allows multiple subtypes of integer all to be compatible with each
    other even if one is not derived from the other; that's different
    from inherited compatibility as neither is a superclass of the other.
    Subtyping is a transitive relation. A<:B<:C. Ada's subtype
    introduces both Small<:Integer and Integer<:Small. Because Small
    exports its operations to Integer. E.g.

        procedure Foo (X : Small);

    Now

        Y : Integer;

        Foo (Y); -- This is OK, Small is a supertype of Integer

    So Small is both a subtype and a supertype of Integer? That seems a
    bit mad.

    Why, if that is desired effect? You want Foo (Y) illegal?

    Conversions are OK. Saying each is a subtype of the other seems to be
    abusing the English language.

    Why?

    Firstly neither sub- nor type are English words! (:-))

    Secondly subtype means a part of a type. Which part? The inherited operations, the substitutable values. OK?

    It's true that where there is an implicit conversion the two types would likely share a subset of operations but the common subset would be a
    subset of both, without the two types being sub-anything of each other.
    If the two types are T and U they may share a set of values and
    operations S. One could say that:

    S is a subtype of T
    S is a subtype of U

    but that does not imply that T and U are subtypes of each other.


    Anyway, it is not specific to pointers. Automatic dereferencing is
    subtyping. So if you do not want to build it into the language,
    you need not, provided you allow the programmer to declare it a
    subtype.

    Everything's subtyping these days! :-o

    Implicit conversions are.

    Where was it decided that implicit conversions implied a subtype
    relationship?

    Because if sqrt(2) is OK, then 2 looks as if 2 (integer) were 2.0
    (float). You can substitute integer for float in sqrt. This is
    Liskov's definition of subtyping (ignoring behavior).

    But where did you read that the specific term 'subtype' should
    thereafter be applied to implicit conversions?
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 11 17:18:47 2022
    From Newsgroup: comp.lang.misc

    On 07/12/2022 19:39, Dmitry A. Kazakov wrote:
    On 2022-12-07 19:09, James Harris wrote:
    On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:

    ...

    At least you've now added operations so you are getting there. ;-)

    Good. Now you see why in T and in out T cannot be the same type?

    No. Types are not the only control in a programming language. An
    /object/ has a type.

    If you want to use the term for something else you are free to do so.
    But if you accept the standard definition you must also accept all the consequences of it.

    Potentially fair but where did you read a definition of 'type' which
    backs up your point?


    Just because in some contexts one is not allowed to modify it (perhaps
    simply because of promising not to do so) changes neither its type nor
    the operations that can be applied to it.

    An apple is an edible fruit. Someone may promise not to eat it but it
    is still an edible fruit.

    1. This is obviously wrong. For a cat apple is not edible.

    Same with gold apples but irrelevant. Whatever a cat apple is assume we
    are talking about edible apples, as stated.


    2. This does not in the least resemble the case, which is about operations
    (properties) added/removed. E.g. a car without wheels is still a car,
    yet it is one you might treat a bit differently from one with wheels.

    That analogy is inapt and irrelevant. And seemingly evasive....


    2. You can always qualify the type and/or operation. E.g. in Ada it
    is denoted as T'(E). T is the type, E is expression/object.

    Then one gets into the Algol68 approach of "resolve until you get a
    type match". If a programmer has three levels of declared-automatic
    reference before getting to the target, i.e.

       p -> 1 -> 2 -> target

    then (to repeat, for declared-automatic dereference) a use of p
    would normally access the target. But the programmer might want to
    access p or 1 or 2 in different circumstances.

    I still see no problem. Whatever object you want, it has a type
    and that type has a name. Use the name in the qualifier.

    In C terms the above chain has four objects:

       p
       *p
       **p
       ***p

    where the first three are pointers and ***p is the target.

    Good to them.

    Do they have types? Name them! Let them be T, T1, T2, T3. Let X be declared

       X : T3;

    Let all types T, T1, T2, T3 have operation named Bar. You want to call
    Bar of T1 on X? Just say so:

       Bar (T1'(X))

    IF T3 is a subtype of T1, X will be dereferenced to T2, then to T1
    because that is the conversion attached to the subtyping relationship
    and because subtyping is transitive: T3<:T2<:T1<:T.

    IF it is not, you get a type error.

    What's the problem, again?

    I think I see what you mean. The types would be

    ref ref ref T
    ref ref T
    ref T
    T

    so there would be no ambiguity as to what was being referred to,
    although it would require a departure from normal bottom-up type
    propagation. Such inconsistency is something that should be resisted.


    I could use the same simple model but there are situations in which it
    may be convenient for the programmer to declare a reference, p, which
    will be treated as the target so that writing

       p + 1

    would mean

       target + 1

    Call it auto dereferencing.

    No, call it subtyping! (:-))

    What I was saying was that if I allow such declarations then the
    question arises over what a programmer could write if instead of the
    target he wanted to access one of the references in the chain.

    He would qualify the type. Each pointer type is a type. Each type has a name. Each name can be used to disambiguate object's type. What's the problem?

    I guess it may be something like

       refchain(p, 0) ;p itself
       refchain(p, 1) ;1 away from p
       refchain(p, 2) ;2 away from p
       refchain(p, -1) ;one before the target

    The last two would both refer to object "2" in the chain

       p -> 1 -> 2 -> target

    Looks disgusting,

    Why?

    but that was the intent, right? (:-)) Anyway it is
    beside the point. See above.

    As mentioned, using types would work for a language such as Algol68
    which already defines dereferencing until types match but not in a
    language in which type changes are propagated bottom up.
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Sun Dec 11 18:21:23 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-11 18:00, James Harris wrote:
    On 07/12/2022 19:37, Dmitry A. Kazakov wrote:
    On 2022-12-07 18:42, James Harris wrote:
    On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
    On 2022-12-04 23:21, James Harris wrote:

    Type /compatibility/ is an open issue for me. I gather that Ada
    allows multiple subtypes of integer all to be compatible with
    each other even if one is not derived from the other; that's
    different from inherited compatibility as neither is a superclass
    of the other.

    Subtyping is a transitive relation. A<:B<:C. Ada's subtype
    introduces both Small<:Integer and Integer<:Small. Because Small
    exports its operations to Integer. E.g.

        procedure Foo (X : Small);

    Now

        Y : Integer;

        Foo (Y); -- This is OK, Small is a supertype of Integer

    So Small is both a subtype and a supertype of Integer? That seems a
    bit mad.

    Why, if that is desired effect? You want Foo (Y) illegal?

    Conversions are OK. Saying each is a subtype of the other seems to be
    abusing the English language.

    Why?

    Firstly neither sub- nor type are English words! (:-))

    Secondly subtype means a part of a type. Which part? The inherited
    operations, the substitutable values. OK?

    It's true that where there is an implicit conversion the two types would likely share a subset of operations but the common subset would be a
    subset of both, without the two types being sub-anything of each other.
    If the two types are T and U they may share a set of values and
    operations S. One could say that:

      S is a subtype of T
      S is a subtype of U

    but that does not imply that T and U are subtypes of each other.

    I am not sure what you are trying to say.

    Subtyping is a relation denoted as S<:T. It is transitive, i.e.

    S<:T and T<:U => S<:U.

    From S<:T and S<:U follows nothing.

    Ada's subtypes introduce both S<:T and T<:S. Which is why per
    transitivity two Ada subtypes get connected:

    S<:T and T<:S and U<:T and T<:U => S<:U and U<:S

    [ "Sharing" is not a word. You must formalize it. E.g. Unsigned_32 and
    and Unsigned_64 may mathematically share + and 1 that does not make them formal subtypes until you declare them as such. Then you create a mapping:

    1 of Unsigned_32 corresponds to 1 of Unsigned_64
    2 of Unsigned_32 corresponds to 1 of Unsigned_64
    ...
    + of Unsigned_32 corresponds to 1 of Unsigned_64

    Etc. Once you done all that you know at the language level what happens
    with conversions and all, likely nasty, implications.

    I presume we are talking about nominal type equivalence. ]
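
    A small Ada sketch of how two such connected subtypes behave (the
    names are only illustrative):

        with Ada.Text_IO; use Ada.Text_IO;
        procedure Siblings is
           subtype Small is Integer range 0 .. 999;
           subtype Tiny  is Integer range 0 .. 99;
           S : Small := 50;
           T : Tiny  := 7;
        begin
           T := S;   --  legal: only a run-time range check against Tiny
           S := T;   --  legal the other way round as well
           Put_Line (Integer'Image (S) & Integer'Image (T));
        end Siblings;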

    Anyway, it is not specific to pointers. Automatic dereferencing is
    subtyping. So if you do not want to build it into the language,
    you need not, provided you allow the programmer to declare it a
    subtype.

    Everything's subtyping these days! :-o

    Implicit conversions are.

    Where was it decided that implicit conversions implied a subtype
    relationship?

    Because if sqrt(2) is OK, then 2 looks as if 2 (integer) were 2.0
    (float). You can substitute integer for float in sqrt. This is
    Liskov's definition of subtyping (ignoring behavior).

    But where did you read that the specific term 'subtype' should
    thereafter be applied to implicit conversions?

    Conversion is an implementation of substitution. How would you
    substitute int for float without a conversion?
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 11 17:34:16 2022
    From Newsgroup: comp.lang.misc

    On 06/12/2022 16:30, Andy Walker wrote:
    On 05/12/2022 23:28, James Harris wrote:
    [I wrote:]
    [...] Two objects of the same type ought to be syntactically
    and semantically interchangeable [modulo some quibbles not relevant
    here].
    Both j and 2 can be modelled as 'storage' and assigned a location. In
    fact, that gives a consistent picture of operands. While some
    literals (esp small integers) can be placed in the program code, in
    the general case (let's call them large literals) they won't fit and
    would need to be placed in storage. So why not initially place them
    all there?

        /Implementation/ details don't affect types!  Yes, you can if you like implement "2" as

      int secret := 2;

    but that gives "secret" and "j" the same type, not "j" and "2".  The fact will remain that there are many contexts in which you can use "j" but not
    "2" [and you can't, as a programmer, use "secret", because it's secret].

    Even if the compiler initially assigns locations for all literals the
    optimiser can ensure that small integers are moved out of storage and
    into the program text so nothing is lost. But the models for j and 2
    as seen in the program text can be the same.

        What the compiler and optimiser do is up to them.  But if the /program text/ fails to distinguish an integer from storage containing
    an integer, it's going to make programming "interesting".  As in those languages where "2" is just an identifier, and you /can/ assign "2 := 3"
    so that "2 * 2 == 9" [unless "9" has also been re-defined!].  I hope
    you're not going down that route.

    You'll be glad to know I won't support any of that nonsense (such as
    allowing a redefinition of what 2 means) but what do you mean about the program text? Say the text has

    two: const = 2

    f(2)
    f(two)

    I don't see any semantic difference between 2 and two. Nor does either
    form imply either storage or the lack of storage.

    That being the case, do you still see a semantic difference between 2
    and two?
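
    For comparison, a named number in Ada behaves in exactly that way
    (a minimal sketch; the names are only illustrative):

        with Ada.Text_IO; use Ada.Text_IO;
        procedure Two_Demo is
           Two : constant := 2;        --  a named number: no storage is implied
           procedure F (X : Integer) is
           begin
              Put_Line (Integer'Image (X));
           end F;
        begin
           F (2);
           F (Two);                    --  semantically the same call as F (2)
        end Two_Demo;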
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Sun Dec 11 18:43:06 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-11 18:18, James Harris wrote:
    On 07/12/2022 19:39, Dmitry A. Kazakov wrote:
    On 2022-12-07 19:09, James Harris wrote:
    On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:

    ...

    At least you've now added operations so you are getting there. ;-)

    Good. Now you see why in T and in out T cannot be the same type?

    No. Types are not the only control in a programming language. An
    /object/ has a type.

    If you want to use the term for something else you are free to do so.
    But if you accept the standard definition you must also accept all
    the consequences of it.

    Potentially fair but where did you read a definition of 'type' which
    backs up your point?

    It is a commonly accepted definition AFAIK coming from mathematical type theories of the beginning of the last century.

    Wikipedia (ADT = abstract datatype):

    "Formally, an ADT may be defined as a "class of objects whose logical
    behavior is defined by a set of values and a set of operations"; this is analogous to an algebraic structure in mathematics."

    I think I see what you mean. The types would be

      ref ref ref T
      ref ref T
      ref T
      T

    so there would be no ambiguity as to what was being referred to,
    although it would require a departure from normal bottom-up type propagation.

    How is that related to the way you resolve the types? If your resolver
    is incapable of resolving types, you have an ambiguity. An ambiguity can *always* be resolved by qualifying types. Nothing more, nothing less.

    Such inconsistency is something that should be resisted.

    What inconsistency?

    What I was saying was that if I allow such declarations then the
    question arises over what a programmer could write if instead of the
    target he wanted to access one of the references in the chain.

    He would qualify the type. Each pointer type is a type. Each type has
    a name. Each name can be used to disambiguate object's type. What's
    the problem?

    I guess it may be something like

       refchain(p, 0) ;p itself
       refchain(p, 1) ;1 away from p
       refchain(p, 2) ;2 away from p
       refchain(p, -1) ;one before the target

    The last two would both refer to object "2" in the chain

       p -> 1 -> 2 -> target

    Looks disgusting,

    Why?

    Judging by the weird stomach movements that just looking at it causes... (:-))

    but that was the intent, right? (:-)) Anyway it is beside the point.
    See above.

    As mentioned, using types would work for a language such as Algol68
    which already defines dereferencing until types match but not in a
    language in which type changes are propagated bottom up.

    No idea. My impression of Algol68 is of a language full of weird and
    plainly wrong concepts.

    If you want automatic dereferencing there is a wide range of ways to
    limit it. E.g. you can introduce ref T<:T but not ref ref T<:ref T. Then it
    will stop. It is all up to the programmer if you expose the ad-hoc
    subtyping mechanism.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 11 18:01:09 2022
    From Newsgroup: comp.lang.misc

    On 07/12/2022 16:53, Dmitry A. Kazakov wrote:
    On 2022-12-07 17:42, James Harris wrote:
    On 06/12/2022 00:25, Bart wrote:

    If cube roots were that common, would you write that as
    x**0.33333333333 or x**(1.0/3.0)? There you would welcome cuberoot(x)!

    In language terms I think I'd go for x ** (1.0 / 3.0). If a programmer
    wanted to put it in a function there would be nothing stopping him.

    I think the point Bart was making was that 1/3 had no exact
    representation in binary floating-point numbers. If cube root used a
    special algorithm, you would have trouble deciding when to switch to it.

    If you and Bart mean that a cuberoot function would have to decide on a rounding direction for 1.0/3.0 then I agree; it's a good point. I
    haven't worked through the many issues surrounding floats, yet, but they
    would probably work similarly to integers, i.e. delimited areas of code
    could have a default rounding mode. One could write something along the
    lines of

      with float-rounding-mode = round-to-positive-infinity
        return x ** (1.0 / 3.0)
      with end

    In such code the division would be rounded up to something like 0.3333334.

    To incorporate that much control in a function would require many
    functions such as

      cuberoot_round_up
      cuberoot_round_down
      cuberoot_round_towards_zero

    and other similar horrors. Those names are, in fact, misleading as they
    seem to apply to the cube root rather than the division within it -
    which makes the idea of a function even worse.

    All the more reason to have a programmer write cube root calculations explicitly rather than wrapping the subtleties in functions.
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 11 18:25:43 2022
    From Newsgroup: comp.lang.misc

    On 11/12/2022 17:21, Dmitry A. Kazakov wrote:
    On 2022-12-11 18:00, James Harris wrote:
    On 07/12/2022 19:37, Dmitry A. Kazakov wrote:
    On 2022-12-07 18:42, James Harris wrote:
    On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
    On 2022-12-04 23:21, James Harris wrote:

    Type /compatibility/ is an open issue for me. I gather that Ada allows multiple subtypes of integer all to be compatible with each other even if one is not derived from the other; that's
    different from inherited compatibility as neither is a
    superclass of the other.

    Subtyping is a transitive relation. A<:B<:C. Ada's subtype
    introduces both Small<:Integer and Integer<:Small. Because Small exports its operations to Integer. E.g.

        procedure Foo (X : Small);

    Now

        Y : Integer;

        Foo (Y); -- This is OK, Small is a supertype of Integer

    So Small is both a subtype and a supertype of Integer? That seems a bit mad.

    Why, if that is the desired effect? You want Foo (Y) to be illegal?

    Conversions are OK. Saying each is a subtype of the other seems to
    be abusing the English language.

    Why?

    Firstly neither sub- nor type are English words! (:-))

    Secondly subtype means a part of a type. Which part? The inherited
    operations, the substitutable values. OK?

    It's true that where there is an implicit conversion the two types
    would likely share a subset of operations but the common subset would
    be a subset of both, without the two types being sub-anything of each
    other. If the two types are T and U they may share a set of values and
    operations S. One could say that:

       S is a subtype of T
       S is a subtype of U

    but that does not imply that T and U are subtypes of each other.

    I am not sure what you are trying to say.

    Subtyping is a relation denoted as S<:T. It is transitive, i.e.

       S<:T and T<:U => S<:U.

    To be clear, that's not what I described.


    From S<:T and S<:U follows nothing.

    Ada's subtypes introduce both S<:T and T<:S. Which is why, per
    transitivity two Ada subtypes get connected:

       S<:T and T<:S and U<:T and T<:U => S<:U and U<:S

    [ "Sharing" is not a word. You must formalize it. E.g. Unsigned_32 and
    and Unsigned_64 may mathematically share + and 1 that does not make them formal subtypes until you declare them as such. Then you create a mapping:

      1 of Unsigned_32 corresponds to 1 of Unsigned_64
      2 of Unsigned_32 corresponds to 2 of Unsigned_64
      ...
      + of Unsigned_32 corresponds to + of Unsigned_64

    Etc. Once you have done all that you know at the language level what happens
    with conversions and all, likely nasty, implications.

    I presume we are talking about nominal type equivalence. ]

    Anyway, it is not specific to pointers. Automatic dereferencing is subtyping. So if you do not want to build it into the
    language, you do not need to if you allowed the programmer to
    declare it a subtype.

    Everything's subtyping these days! :-o

    Implicit conversions are.

    Where was it decided that implicit conversions implied a subtype
    relationship?

    Because if sqrt(2) is OK, then 2 looks as if 2 (integer) were 2.0
    (float). You can substitute integer for float in sqrt. This is
    Liskov's definition of subtyping (ignoring behavior).

    But where did you read that the specific term 'subtype' should
    thereafter be applied to implicit conversions?

    Conversion is an implementation of substitution.

    Conversion is a conversion, not a substitution!

    How would you
    substitute int for float without a conversion?

    I don't have implicit conversions of declared types. They must be
    written explicitly. Undeclared types (big int, big uint, big float,
    character, string, boolean etc) would have automatic conversions. Not
    sure if that affects your assessment.
    --
    James Harris



    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Sun Dec 11 18:45:57 2022
    From Newsgroup: comp.lang.misc

    On 11/12/2022 17:43, Dmitry A. Kazakov wrote:
    On 2022-12-11 18:18, James Harris wrote:
    On 07/12/2022 19:39, Dmitry A. Kazakov wrote:
    On 2022-12-07 19:09, James Harris wrote:
    On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:

    ...

    At least you've now added operations so you are getting there. ;-)

    Good. Now you see why in T and in out T cannot be the same type?

    No. Types are not the only control in a programming language. An
    /object/ has a type.

    If you want to use the term for something else you are free to do so.
    But if you accept the standard definition you must also accept all
    consequences of.

    Potentially fair but where did you read a definition of 'type' which
    backs up your point?

    It is a commonly accepted definition AFAIK coming from mathematical type theories of the beginning of the last century.

    Wikipedia (ADT = abstract datatype):

    "Formally, an ADT may be defined as a "class of objects whose logical behavior is defined by a set of values and a set of operations"; this is analogous to an algebraic structure in mathematics."

    That rather backs up my assertion that types are of /objects/ rather
    than of /uses/. IOW a piece of code could treat an object as read-only
    but that doesn't change the type of the object.


    I think I see what you mean. The types would be

       ref ref ref T
       ref ref T
       ref T
       T

    so there would be no ambiguity as to what was being referred to,
    although it would require a departure from normal bottom-up type
    propagation.

    How is that related to the way you resolve the types? If your resolver
    is incapable to resolve types, you have an ambiguity. An ambiguity can *always* be resolved by qualifying types. Nothing more, nothing less.

    Bottom-up type resolution follows type-inference rules. The only
    /automatic/ change I allow is widening. For example,

    [int16] s
    [int32] t
    [float64] g

    s = t ;prohibited because is narrowing
    t = s ;permitted because is widening
    return s + t ;permitted and results in wider type (int32)
    g = s ;prohibited as different types (even though in range)

    Similarly for references,

    [int32] i, j
    [ref int32] ri, rj
    [ref ref int32] rri, rrj

    ri = j ;prohibited due to type mismatch
    ri = rj ;permitted as types match
    ri = rrj ;prohibited due to type mismatch


    Such inconsistency is something that should be resisted.

    What inconsistency?

    See the example, above, on references. It would be inconsistent to automatically dereference when the rest of the language requires
    explicit type matching.


    What I was saying was that if I allow such declarations then the
    question arises over what a programmer could write if instead of the
    target he wanted to access one of the references in the chain.

    He would qualify the type. Each pointer type is a type. Each type has
    a name. Each name can be used to disambiguate object's type. What's
    the problem?

    I guess it may be something like

       refchain(p, 0) ;p itself
       refchain(p, 1) ;1 away from p
       refchain(p, 2) ;2 away from p
       refchain(p, -1) ;one before the target

    The last two would both refer to object "2" in the chain

       p -> 1 -> 2 -> target

    Looks disgusting,

    Why?

    Judging by weird stomach movements just on look at it causes... (:-))

    :-o
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Sun Dec 11 20:08:07 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-11 19:25, James Harris wrote:
    On 11/12/2022 17:21, Dmitry A. Kazakov wrote:
    On 2022-12-11 18:00, James Harris wrote:
    On 07/12/2022 19:37, Dmitry A. Kazakov wrote:

    But where did you read that the specific term 'subtype' should
    thereafter be applied to implicit conversions?

    Conversion is an implementation of substitution.

    Conversion is a conversion, not a substitution!

    I did not say it is. I said it is an implementation of it.

    How would you substitute int for float without a conversion?

    I don't have implicit conversions of declared types. They must be
    written explicitly. Undeclared types (big int, big uint, big float, character, string, boolean etc) would have automatic conversions. Not
    sure if that affects your assessment.

    Call implicit automatic, automatic implicit. It does not change the
    semantics and the intent. The intent and the meaning is to substitute
    one type for another transparently to the syntax.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Sun Dec 11 20:21:47 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-11 19:45, James Harris wrote:
    On 11/12/2022 17:43, Dmitry A. Kazakov wrote:
    On 2022-12-11 18:18, James Harris wrote:
    On 07/12/2022 19:39, Dmitry A. Kazakov wrote:
    On 2022-12-07 19:09, James Harris wrote:
    On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:

    ...

    At least you've now added operations so you are getting there. ;-)
    Good. Now you see why in T and in out T cannot be the same type?

    No. Types are not the only control in a programming language. An
    /object/ has a type.

    If you want to use the term for something else you are free to do
    so. But if you accept the standard definition you must also accept
    all consequences of.

    Potentially fair but where did you read a definition of 'type' which
    backs up your point?

    It is a commonly accepted definition AFAIK coming from mathematical
    type theories of the beginning of the last century.

    Wikipedia (ADT = abstract datatype):

    "Formally, an ADT may be defined as a "class of objects whose logical
    behavior is defined by a set of values and a set of operations"; this
    is analogous to an algebraic structure in mathematics."

    That rather backs up my assertion that types are of /objects/ rather
    than of /uses/. IOW a piece of code could treat an object as read-only
    but that doesn't change the type of the object.

    No idea what this is supposed to mean. You asked where it comes from, I
    gave you the quote.

    Again, you do not accept the common definition, give your own and explain
    how stupid the rest of the world is for not using it.

    I think I see what you mean. The types would be

       ref ref ref T
       ref ref T
       ref T
       T

    so there would be no ambiguity as to what was being referred to,
    although it would require a departure from normal bottom-up type
    propagation.

    How is that related to the way you resolve the types? If your resolver
    is incapable to resolve types, you have an ambiguity. An ambiguity can
    *always* be resolved by qualifying types. Nothing more, nothing less.

    Bottom-up type resolution follows type-inference rules. The only
    /automatic/ change I allow is widening. For example,

      [int16] s
      [int32] t
      [float64] g

      s = t ;prohibited because is narrowing
      t = s ;permitted because is widening
      return s + t ;permitted and results in wider type (int32)
      g = s ;prohibited as different types (even though in range)

    Similarly for references,

      [int32] i, j
      [ref int32] ri, rj
      [ref ref int32] rri, rrj

      ri = j ;prohibited due to type mismatch
      ri = rj ;permitted as types match
      ri = rrj ;prohibited due to type mismatch

    So what? Again, any particular weakness of your type system by no means
    changes the point. Which, let me repeat, is: you can always resolve
    ambiguities by qualifying types.

    Such inconsistency is something that should be resisted.

    What inconsistency?

    See the example, above, on references. It would be inconsistent to automatically dereference when the rest of the language requires
    explicit type matching.

    I do not see any inconsistency. See the definition of consistency:

    Wikipedia:

    "In classical deductive logic, a consistent theory is one that does not
    lead to a logical contradiction."
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Sun Dec 11 20:32:26 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-11 19:01, James Harris wrote:
    On 07/12/2022 16:53, Dmitry A. Kazakov wrote:
    On 2022-12-07 17:42, James Harris wrote:
    On 06/12/2022 00:25, Bart wrote:

    If cube roots were that common, would you write that as
    x**0.33333333333 or x**(1.0/3.0)? There you would welcome cuberoot(x)!

    In language terms I think I'd go for x ** (1.0 / 3.0). If a
    programmer wanted to put it in a function there would be nothing
    stopping him.

    I think the point Bart was making was that 1/3 had no exact
    representation in binary floating-point numbers. If cube root used a
    special algorithm, you would have a trouble to decide when to switch
    to it.

    If you and Bart mean that a cuberoot function would have to decide on a rounding direction for 1.0/3.0 then I agree; it's a good point. I
    haven't worked through the many issues surrounding floats, yet, but they would probably have similar to integers, i.e. delimited areas of code
    could have a default rounding mode. One could write something along the lines of

      with float-rounding-mode = round-to-positive-infinity
        return x ** (1.0 / 3.0)
      with end

    You cannot do that with available hardware in a reasonable way. Hardware rounding is set.

    In such code the division would be rounded up to something like 0.3333334.

    To incorporate that much control in a function would require many
    functions such as

      cuberoot_round_up
      cuberoot_round_down
      cuberoot_round_towards_zero

    and other similar horrors.

    It is no horror, it is interval computation. Interval-valued function
    F returns an interval containing the mathematically correct result.
    Hardware implementations are interval-valued with the interval width of
    two adjacent machine numbers. Then rounding chooses one of the bounds.

    But that does not resolve the problem. You need some solid numeric
    analysis of the relation between err1 and err2 in

    X ** (1/3 + err1)

    and

    X ** 1/3 + err2

    Exponentiation to positive powers below 1 is a "well-behaved" function
    and one could nevertheless use one for the other up to a point.
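
    A rough first-order illustration of that relation (a Python 3.9+ sketch,
    not the numeric analysis itself): since d(x**p)/dp = x**p * ln(x), an
    exponent error err1 turns into a result error err2 of roughly
    x**(1/3) * ln(x) * err1.

        import math

        x = 1.0e6
        p = 1.0 / 3.0                      # nearest double to one third
        err1 = 0.5 * math.ulp(p)           # bound on the representation error of 1/3
        err2 = (x ** p) * math.log(x) * err1

        print(err1)                        # ~2.8e-17
        print(err2)                        # ~3.8e-14, a few ulps of the result 100.0
        # Shifting the exponent by one ulp moves the result by a comparable amount:
        print(x ** math.nextafter(p, 1.0) - x ** p)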
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Mon Dec 12 00:20:54 2022
    From Newsgroup: comp.lang.misc

    On 11/12/2022 18:01, James Harris wrote:
    If [Bmitry] and Bart mean that a cuberoot function would have to
    decide on a rounding direction for 1.0/3.0 then I agree; it's a good
    point.
    Cube roots are arguably used often enough to be worth implementing separately from "** (1/3)" or near equivalent. Then the error from "1/3"
    is irrelevant. But if this matters to you, you would need to use serious numerical analysis and computation beyond the normal scope of this group,
    such as interval arithmetic and Chebychev polynomials.

    [A68G provides a stand-alone "cbrt" function (and, presumably to
    annoy Dmitry, "long long long long cbrt", "short cbrt" and so on), which
    I've used a few times. But that's an extension to true A68.]
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Dandrieu
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Mon Dec 12 10:11:01 2022
    From Newsgroup: comp.lang.misc

    On 12/12/2022 01:20, Andy Walker wrote:
    On 11/12/2022 18:01, James Harris wrote:
    If [Bmitry] and Bart mean that a cuberoot function would have to
    decide on a rounding direction for 1.0/3.0 then I agree; it's a good
    point.
        Cube roots are arguably used often enough to be worth implementing separately from "** (1/3)" or near equivalent.

    Really? In what context? Square roots occur all over the place, but I
    can't think of a single realistic use of cube roots that are not general
    nth roots (such as calculating geometric means). Maybe I just haven't
    had enough coffee yet, and there are some "obvious" cases that I am
    missing. Feel free to give some examples of where roots bigger than 2
    are common-place in programming. (And please don't say "for solving
    cubic equations" without also saying when you would want to solve cubic equations.)

      Then the error from "1/3"
    is irrelevant.  But if this matters to you, you would need to use serious numerical analysis and computation beyond the normal scope of this group, such as interval arithmetic and Chebychev polynomials.


    You would almost certainly not use Chebychev's or other polynomials for calculating cube roots, unless you were making something like a
    specialised pipelined implementation in an ASIC or FPGA. Newton-Raphson iteration, or related algorithms, are simpler and converge quickly
    without all the messiness of ranges.
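
    For what it's worth, a minimal sketch of that Newton-Raphson idea in
    Python (not any library's actual cbrt; assumes x > 0):

        import math

        def cuberoot(x):
            m, e = math.frexp(x)            # x = m * 2**e, with 0.5 <= m < 1
            y = math.ldexp(1.0, e // 3)     # crude seed of roughly the right magnitude
            for _ in range(60):             # Newton step: y -> (2*y + x/y**2) / 3
                y_new = (2.0 * y + x / (y * y)) / 3.0
                if y_new == y:              # converged to machine precision
                    break
                y = y_new
            return y

        print(cuberoot(27.0))   # ~3.0
        print(cuberoot(1e30))   # ~1e10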
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Bart@bc@freeuk.com to comp.lang.misc on Mon Dec 12 10:26:10 2022
    From Newsgroup: comp.lang.misc

    On 11/12/2022 18:01, James Harris wrote:
    On 07/12/2022 16:53, Dmitry A. Kazakov wrote:
    On 2022-12-07 17:42, James Harris wrote:
    On 06/12/2022 00:25, Bart wrote:

    If cube roots were that common, would you write that as
    x**0.33333333333 or x**(1.0/3.0)? There you would welcome cuberoot(x)!

    In language terms I think I'd go for x ** (1.0 / 3.0). If a
    programmer wanted to put it in a function there would be nothing
    stopping him.

    I think the point Bart was making was that 1/3 had no exact
    representation in binary floating-point numbers. If cube root used a
    special algorithm, you would have a trouble to decide when to switch
    to it.

    If you and Bart mean that a cuberoot function would have to decide on a rounding direction for 1.0/3.0 then I agree; it's a good point.

    Actually, I was concerned with aesthetics.

    The precision of 1.0/3 is something I hadn't considered, but DAK's point
    is, if a compiler knew of a fast cube root algorithm and wanted to special-case A**B when B was 1.0/3, then how close to 'one third' would
    B have to be before it could assume that a cube-root was in fact what
    was intended? Given that there is no precise representation of 'one
    third' anyway.

    Far easier to just provide something like 'cuberoot()'; then it will
    know, and so will people reading the code.

    I
    haven't worked through the many issues surrounding floats, yet, but they would probably have similar to integers, i.e. delimited areas of code
    could have a default rounding mode. One could write something along the lines of

      with float-rounding-mode = round-to-positive-infinity
        return x ** (1.0 / 3.0)
      with end

    In such code the division would be rounded up to something like 0.3333334.



    To incorporate that much control in a function would require many
    functions such as

      cuberoot_round_up
      cuberoot_round_down
      cuberoot_round_towards_zero

    Huh? Who cares about the rounding of cube root?! As I said it was merely
    about detecting whether this was a cube root.

    and other similar horrors. Those names are, in fact, misleading as they
    seem to apply to the cube root rather than the division within it -
    which makes the idea of a function even worse.

    Oh, you are talking about that! Maybe you need a function like isthisreallyathirdorjustsomethingclose() instead.

    All the more reason to have a programmer write cube root calculations explicitly rather than wrapping the subtleties in functions.

    And as David said, cube roots aren't that common. There might be a button
    for it on my Casio, but that might be because it's oriented towards solving
    school maths problems.




    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Mon Dec 12 11:31:34 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-12 10:11, David Brown wrote:
    On 12/12/2022 01:20, Andy Walker wrote:

      Then the error from "1/3"
    is irrelevant.  But if this matters to you, you would need to use serious numerical analysis and computation beyond the normal scope of this group,
    such as interval arithmetic and Chebychev polynomials.

    You would almost certainly not use Chebychev's or other polynomials for calculating cube roots, unless you were making something like a
    specialised pipelined implementation in an ASIC or FPGA.

    Yes, Chebyshev's approximation is not good for these. E.g. Luke in his
    book gives a Padé approximation for the cube root.

    Then, when implementing a library function rather than doing some
    specific computations, there is a problem with handling machine-number precision, which iterative methods are better at.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Mon Dec 12 16:18:24 2022
    From Newsgroup: comp.lang.misc

    On 12/12/2022 11:26, Bart wrote:

    And as David said, cube roots aren't that common. It might have a button
    for it on my Casio, that might be because it's oriented towards solving school maths problems.


    I think rather than having a cube root function, perhaps an nth root
    function would make some sense. Then you could write "root(x, 3)" -
    that's about as neat and potentially as accurate as any other solution.
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Mon Dec 12 22:04:17 2022
    From Newsgroup: comp.lang.misc

    On 11/12/2022 19:21, Dmitry A. Kazakov wrote:
    On 2022-12-11 19:45, James Harris wrote:
    On 11/12/2022 17:43, Dmitry A. Kazakov wrote:
    On 2022-12-11 18:18, James Harris wrote:
    On 07/12/2022 19:39, Dmitry A. Kazakov wrote:
    On 2022-12-07 19:09, James Harris wrote:
    On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:

    ...

    At least you've now added operations so you are getting there. ;-)
    Good. Now you see why in T and in out T cannot be the same type?

    No. Types are not the only control in a programming language. An
    /object/ has a type.

    If you want to use the term for something else you are free to do
    so. But if you accept the standard definition you must also accept
    all consequences of.

    Potentially fair but where did you read a definition of 'type' which
    backs up your point?

    It is a commonly accepted definition AFAIK coming from mathematical
    type theories of the beginning of the last century.

    Wikipedia (ADT = abstract datatype):

    "Formally, an ADT may be defined as a "class of objects whose logical
    behavior is defined by a set of values and a set of operations"; this
    is analogous to an algebraic structure in mathematics."

    That rather backs up my assertion that types are of /objects/ rather
    than of /uses/. IOW a piece of code could treat an object as read-only
    but that doesn't change the type of the object.

    No idea what this is supposed to mean. You asked where it comes from, I
    gave you the quote.

    You referred to an ADT rather than a type. From the same source:

    "A data type constrains the possible values that an expression, such as
    a variable or a function, might take. This data type defines the
    operations that can be done on the data, the meaning of the data, and
    the way values of that type can be stored."

    https://en.wikipedia.org/wiki/Data_type

    Note, the operations which "can be done on the data", not "can be done
    by a certain function".


    Again, you do not accept common definition, give your own and explain
    how stupid the rest of the world is by not using it.

    Ahem, I'm happy with the aforementioned definition. It's you who is
    seeking an alteration.


    I think I see what you mean. The types would be

       ref ref ref T
       ref ref T
       ref T
       T

    so there would be no ambiguity as to what was being referred to,
    although it would require a departure from normal bottom-up type
    propagation.

    How is that related to the way you resolve the types? If your
    resolver is incapable to resolve types, you have an ambiguity. An
    ambiguity can *always* be resolved by qualifying types. Nothing more,
    nothing less.

    Bottom-up type resolution follows type-inference rules. The only
    /automatic/ change I allow is widening. For example,

       [int16] s
       [int32] t
       [float64] g

       s = t ;prohibited because is narrowing
       t = s ;permitted because is widening
       return s + t ;permitted and results in wider type (int32)
       g = s ;prohibited as different types (even though in range)

    Similarly for references,

       [int32] i, j
       [ref int32] ri, rj
       [ref ref int32] rri, rrj

       ri = j ;prohibited due to type mismatch
       ri = rj ;permitted as types match
       ri = rrj ;prohibited due to type mismatch

    So what? Again any particular weakness of your type system by no means change the point. Which, let me repeat is, you can always resolve ambiguities by qualifying types.

    There's no weakness in that type system. On the contrary, it requires
    and would enforce precision and explicitness from the programmer.


    Such inconsistency is something that should be resisted.

    What inconsistency?

    See the example, above, on references. It would be inconsistent to
    automatically dereference when the rest of the language requires
    explicit type matching.

    I do not see any inconsistency. See the definition of:

    Wikipedia:

    "In classical deductive logic, a consistent theory is one that does not
    lead to a logical contradiction."

    I mean 'inconsistency' such as requiring explicit conversions in one
    place but not in another.
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Mon Dec 12 22:15:58 2022
    From Newsgroup: comp.lang.misc

    On 11/12/2022 19:32, Dmitry A. Kazakov wrote:
    On 2022-12-11 19:01, James Harris wrote:
    On 07/12/2022 16:53, Dmitry A. Kazakov wrote:
    On 2022-12-07 17:42, James Harris wrote:
    On 06/12/2022 00:25, Bart wrote:

    If cube roots were that common, would you write that as
    x**0.33333333333 or x**(1.0/3.0)? There you would welcome cuberoot(x)!
    In language terms I think I'd go for x ** (1.0 / 3.0). If a
    programmer wanted to put it in a function there would be nothing
    stopping him.

    I think the point Bart was making was that 1/3 had no exact
    representation in binary floating-point numbers. If cube root used a
    special algorithm, you would have a trouble to decide when to switch
    to it.

    If you and Bart mean that a cuberoot function would have to decide on
    a rounding direction for 1.0/3.0 then I agree; it's a good point. I
    haven't worked through the many issues surrounding floats, yet, but
    they would probably have similar to integers, i.e. delimited areas of
    code could have a default rounding mode. One could write something
    along the lines of

       with float-rounding-mode = round-to-positive-infinity
         return x ** (1.0 / 3.0)
       with end

    You cannot do that with available hardware in a reasonable way. Hardware rounding is set.

    Not so. Rounding towards infinities is provided in hardware.

    A certain language (Ada?) might have you believe there is only one
    rounding mode in hardware but that's not so. IEEE 754 defines five
    according to

    https://en.wikipedia.org/wiki/IEEE_754#Rounding_rules

    I currently define 12 rounding modes for integers and I guess that
    something similar may apply for floats but I've not explored that yet. Either way,
    if a programmer wanted a mode which the IEEE did not define for hardware
    it would be the compiler's job to emit equivalent instructions.
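
    Python's decimal module (not IEEE binary floats, so only an analogy, but
    its rounding modes are directly settable) shows the effect of such a
    delimited rounding mode on 1/3:

        from decimal import Decimal, getcontext, ROUND_CEILING, ROUND_FLOOR

        ctx = getcontext()
        ctx.prec = 7                      # 7 significant digits, to match the example

        ctx.rounding = ROUND_CEILING      # round towards +infinity
        print(Decimal(1) / Decimal(3))    # 0.3333334

        ctx.rounding = ROUND_FLOOR        # round towards -infinity
        print(Decimal(1) / Decimal(3))    # 0.3333333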


    In such code the division would be rounded up to something like
    0.3333334.

    To incorporate that much control in a function would require many
    functions such as

       cuberoot_round_up
       cuberoot_round_down
       cuberoot_round_towards_zero

    and other similar horrors.

    It is no horrors, it is interval computations. Interval-valued function
    F returns an interval containing the mathematically correct result.
    Hardware implementations are interval-valued with the interval width of
    two adjacent machine numbers. Then rounding chooses one of the bounds.

    OT but I'd like to see a float implementation which instead of a single
    value always calculated upper and lower bounds for fp results.


    But that does not resolve the problem. You need some solid numeric
    analysis of relation between err1 and err2 in

       X ** (1/3 + err1)

    and

       X ** 1/3 + err2

    Exponentiation of positive powers below 1 is a "good-behaving" function
    and one could use one for another to a point, nevertheless.

    --
    James Harris

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Mon Dec 12 22:22:09 2022
    From Newsgroup: comp.lang.misc

    On 11/12/2022 19:08, Dmitry A. Kazakov wrote:
    On 2022-12-11 19:25, James Harris wrote:
    On 11/12/2022 17:21, Dmitry A. Kazakov wrote:
    On 2022-12-11 18:00, James Harris wrote:
    On 07/12/2022 19:37, Dmitry A. Kazakov wrote:

    But where did you read that the specific term 'subtype' should
    thereafter be applied to implicit conversions?

    Conversion is an implementation of substitution.

    Conversion is a conversion, not a substitution!

    It did not say it is. I said it an implementation of.

    How would you substitute int for float without a conversion?

    I don't have implicit conversions of declared types. They must be
    written explicitly. Undeclared types (big int, big uint, big float,
    character, string, boolean etc) would have automatic conversions. Not
    sure if that affects your assessment.

    Call implicit automatic, automatic implicit. It does not change the semantics and the intent. The intent and the meaning is to substitute
    one type for another transparently to the syntax.

    Fine but that still doesn't imply a 'sub' type - for any reasonable
    meaning of the term.

    I accept that you have a different interpretation of 'subtype' and
    that's fine but it seems a confusing use of language. Maybe there's a
    better term that everyone would be happy with but I think of a subtype
    as being diminutive compared to another, such as integers which have a
    smaller range.

    thou: integer range 0..999
    hund: s range 0..99

    where hund could be understood to be a subtype of thou. But not the
    other way round!
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Mon Dec 12 22:30:26 2022
    From Newsgroup: comp.lang.misc

    On 12/12/2022 15:18, David Brown wrote:
    On 12/12/2022 11:26, Bart wrote:

    And as David said, cube roots aren't that common. It might have a
    button for it on my Casio, that might be because it's oriented towards
    solving school maths problems.


    I think rather than having a cube root function, perhaps an nth root function would make some sense.  Then you could write "root(x, 3)" -
    that's about as neat and potentially as accurate as any other solution.

    I like that! I'd have to work through issues including integers vs
    floats but it would appear to be simple, clear, and general. I may do something similar, perhaps with the root first to permit results to be
    tuples.

    root(3, ....) ;cube root

    so that

    root(3, 8) -> 2
    root(3, 8, 1000) -> (2, 10)
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Mon Dec 12 22:41:29 2022
    From Newsgroup: comp.lang.misc

    On 12/12/2022 10:26, Bart wrote:
    On 11/12/2022 18:01, James Harris wrote:
    On 07/12/2022 16:53, Dmitry A. Kazakov wrote:
    On 2022-12-07 17:42, James Harris wrote:
    On 06/12/2022 00:25, Bart wrote:

    If cube roots were that common, would you write that as
    x**0.33333333333 or x**(1.0/3.0)? There you would welcome cuberoot(x)!
    In language terms I think I'd go for x ** (1.0 / 3.0). If a
    programmer wanted to put it in a function there would be nothing
    stopping him.

    I think the point Bart was making was that 1/3 had no exact
    representation in binary floating-point numbers. If cube root used a
    special algorithm, you would have a trouble to decide when to switch
    to it.

    If you and Bart mean that a cuberoot function would have to decide on
    a rounding direction for 1.0/3.0 then I agree; it's a good point.

    Actually, I was concerned with aesthetics.

    That's also important.


    The precision of 1.0/3 is something I hadn't considered, but DAK's point
    is, if a compiler knew of a fast cube root algorithm and wanted to special-case A**B when B was 1.0/3, then how close to 'one third' would
    B have to be before it could assume that a cube-root was in fact what
    was intended? Given that there is no precise representation of 'one
    third' anyway.

    Far easier to just a provide something like 'cuberoot()' then it will
    know, and so will people reading the code.

    What of David's suggestion, mentioned below?


    I haven't worked through the many issues surrounding floats, yet, but
    they would probably have similar to integers, i.e. delimited areas of
    code could have a default rounding mode. One could write something
    along the lines of

       with float-rounding-mode = round-to-positive-infinity
         return x ** (1.0 / 3.0)
       with end

    In such code the division would be rounded up to something like
    0.3333334.



    To incorporate that much control in a function would require many
    functions such as

       cuberoot_round_up
       cuberoot_round_down
       cuberoot_round_towards_zero

    Huh? Who cares about the rounding of cube root?! As I said it was merely about detecting whether this was a cube root.

    Since floats are, in general, /approximations/ the rounding mode on any particular operation can matter to how precise a result is; though IME programming languages tend to brush that fact under the carpet.


    and other similar horrors. Those names are, in fact, misleading as
    they seem to apply to the cube root rather than the division within it
    - which makes the idea of a function even worse.

    Oh, you are talking abut that! Maybe you need a function like isthisreallyathirdorjustsomethingclose() instead.

    What do you make of David's suggestion to have a "root" function which I
    would probably have as

    root(2, x) -> square root of x
    root(3, x) -> cube root of x

    etc?
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Tue Dec 13 00:16:08 2022
    From Newsgroup: comp.lang.misc

    On 11/12/2022 17:34, James Harris wrote:
    You'll be glad to know I won't support any of that nonsense (such as
    allowing a redefinition of what 2 means) but what do you mean about
    the program text? Say the text has
      two: const = 2
      f(2)
      f(two)
    I don't see any semantic difference between 2 and two. Nor does
    either form imply either storage or the lack of storage.

    I'm assuming that the intention is similar to, perhaps identical
    with, the Algol 68 "int two = 2;"? But we don't really have enough of a
    formal definition of your language to be sure.

    That being the case, do you still see a semantic difference between 2
    and two?

    The above A68 simply gives a new name, "two", to the integer 2.
    If your language effectively does the same, then I see no interesting difference between "2" and "two". But I could certainly imagine cases
    where there would be a difference: eg,

    begin int two = 2; ... two ... end; ... two ... # two no longer valid #
    begin ... 2 ... end; ... 2 ... # but 2 still works (of course) #

    Whether you regard that as "interesting" is a matter of taste. Another potential problem is with pointers; if you regard "const" as merely an adjective [so that "two" is an integer "variable" that happens to be a constant], then there is presumably no objection, for you, to setting up
    a pointer to point to "two", whereas there might be to pointing at "2"
    [which may require an intermediate integer variable]. Without a much
    fuller description of your language, we can't tell whether this is a
    real difference.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Haydn
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Bart@bc@freeuk.com to comp.lang.misc on Tue Dec 13 00:27:45 2022
    From Newsgroup: comp.lang.misc

    On 12/12/2022 22:41, James Harris wrote:
    On 12/12/2022 10:26, Bart wrote:

    What do you make of David's suggestion to have a "root" function which I would probably have as

      root(2, x) -> square root of x
      root(3, x) -> cube root of x

    It's OK. It solves the problem of special-casing cube-roots, and others
    of interest.

    Probably I wouldn't have it as built-in, as I have no great interest in
    cube roots (other than allowing ∛(x) would be cool). Maybe as an
    ordinary macro or function that implements it on top of **, but is a
    clearer alternative. nthroot() is another possibility (as is an infix version `3 root x`, with precedence the same as **).

    But I would continue to use sqrt since I first saw that in a language in
    '75, and I see no reason to drop it.
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 09:40:47 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-12 23:22, James Harris wrote:
    On 11/12/2022 19:08, Dmitry A. Kazakov wrote:
    On 2022-12-11 19:25, James Harris wrote:
    On 11/12/2022 17:21, Dmitry A. Kazakov wrote:
    On 2022-12-11 18:00, James Harris wrote:
    On 07/12/2022 19:37, Dmitry A. Kazakov wrote:

    But where did you read that the specific term 'subtype' should
    thereafter be applied to implicit conversions?

    Conversion is an implementation of substitution.

    Conversion is a conversion, not a substitution!

    It did not say it is. I said it an implementation of.

    How would you substitute int for float without a conversion?

    I don't have implicit conversions of declared types. They must be
    written explicitly. Undeclared types (big int, big uint, big float,
    character, string, boolean etc) would have automatic conversions. Not
    sure if that affects your assessment.

    Call implicit automatic, automatic implicit. It does not change the
    semantics and the intent. The intent and the meaning is to substitute
    one type for another transparently to the syntax.

    Fine but that still doesn't imply a 'sub' type - for any reasonable
    meaning of the term.

    It does not imply it, it *is* a subtype. There is a difference between A=>B
    and A=B. It is just the thing as defined.

    I accept that you have a different interpretation of 'subtype' and
    that's fine but it seems a confusing use of language. Maybe there's a
    better term that everyone would be happy with but I think of a subtype
    as being diminutive compared to another, such as integers which have a smaller range.

      thou: integer range 0..999
      hund: s range 0..99

    where hund could be understood to be a subtype of thou. But not the
    other way round!

    Special cases of subtyping regarding the sets of values and/or
    operations of a type are:

    - Generalization (you add members to the sets)
    - Specialization (you remove members from the sets)

    What you mentioned is setting up a constraint, like range, in order to
    create a new subtype. It is a case of specialization.

    E.g. circle is a specialized ellipse. Ellipse is a generalized circle.
    Either can be a subtype of the other, or both; it is a free choice of the programmer modeling them [*].

    There are mixed cases, of course. But none is relevant to the meaning of
    "sub" in subtypes. The meaning is same as "sub" in substitution. It
    means a direction: S is taken for T (in some context, read, some
    operation). S appears to be T for the reader. How this is achieved is implementation = irrelevant. OK?

    ---------------------
    * Neither choice gives LSP behavioral substitutability, because that is impossible. Which is why the internet is full of pointless discussions of whether Circle/Square is Ellipse/Rectangle. But for practical purposes language
    level substitutability is good enough for either choice.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 10:02:54 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-12 23:04, James Harris wrote:
    On 11/12/2022 19:21, Dmitry A. Kazakov wrote:
    On 2022-12-11 19:45, James Harris wrote:
    On 11/12/2022 17:43, Dmitry A. Kazakov wrote:
    On 2022-12-11 18:18, James Harris wrote:
    On 07/12/2022 19:39, Dmitry A. Kazakov wrote:
    On 2022-12-07 19:09, James Harris wrote:
    On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:

    ...

    At least you've now added operations so you are getting there. ;-)

    Good. Now you see why in T and in out T cannot be the same type?

    No. Types are not the only control in a programming language. An
    /object/ has a type.

    If you want to use the term for something else you are free to do
    so. But if you accept the standard definition you must also accept
    all consequences of.

    Potentially fair but where did you read a definition of 'type'
    which backs up your point?

    It is a commonly accepted definition AFAIK coming from mathematical
    type theories of the beginning of the last century.

    Wikipedia (ADT = abstract datatype):

    "Formally, an ADT may be defined as a "class of objects whose
    logical behavior is defined by a set of values and a set of
    operations"; this is analogous to an algebraic structure in
    mathematics."

    That rather backs up my assertion that types are of /objects/ rather
    than of /uses/. IOW a piece of code could treat an object as
    read-only but that doesn't change the type of the object.

    No idea what this is supposed to mean. You asked where it comes from,
    I gave you the quote.

    You referred to an ADT rather than a type.

    That is the same. Historically, abstract meant non-machine. Clearly any
    machine type is abstract for some other machine or emulator.

    From the same source:

    "A data type constrains the possible values that an expression, such as
    a variable or a function, might take. This data type defines the
    operations that can be done on the data, the meaning of the data, and
    the way values of that type can be stored."

    Right. Replace sloppy "constrains" with "has" and you have the standard definition. The author probably unconsciously tried to take untyped
    languages on board, moving from untyped to typed by constraining the
    former. That is not the way things are done, e.g. in mathematics. You
    always start "typed" and don't even consider things outside the scope.
    E.g. talking about integer numbers nobody says, ah, BTW, it could be a
    tensor too.

      https://en.wikipedia.org/wiki/Data_type

    Note, the operations which "can be done on the data", not "can be done
    by a certain function".

    "Certain function" = operation of the type. All functions that take
    arguments and/or return values of the type are "certain." OK?

    Again, you do not accept common definition, give your own and explain
    how stupid the rest of the world is by not using it.

    Ahem, I'm happy with the aforementioned definition. It's you who is
    seeking an alteration.

    Nope, the definitions above minus sloppy wording are just fine.

    So what? Again any particular weakness of your type system by no means
    change the point. Which, let me repeat is, you can always resolve
    ambiguities by qualifying types.

    There's no weakness in that type system. On the contrary, it requires
    and would enforce precision and explicitness from the programmer.

    OK, let me reformulate, the excellence and unprecedented strength of
    your type system ... and we continue ... is irrelevant as you can always resolve ambiguities by qualifying types.

    Better?
    Such inconsistency is something that should be resisted.

    What inconsistency?

    See the example, above, on references. It would be inconsistent to
    automatically dereference when the rest of the language requires
    explicit type matching.

    I do not see any inconsistency. See the definition of:

    Wikipedia:

    "In classical deductive logic, a consistent theory is one that does
    not lead to a logical contradiction."

    I mean 'inconsistency' such as requiring explicit conversions in one
    place but not in another.

    The word is "irregularity."

    It is a language designer's choice. I was talking about the means to achieve substitution that an advanced type system may offer. Other questions are:

    - If a language should offer automatic dereferencing
    - If a language should allow the programmer to introduce automatic dereferencing
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 10:17:43 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-12 23:15, James Harris wrote:
    On 11/12/2022 19:32, Dmitry A. Kazakov wrote:
    On 2022-12-11 19:01, James Harris wrote:
    On 07/12/2022 16:53, Dmitry A. Kazakov wrote:
    On 2022-12-07 17:42, James Harris wrote:
    On 06/12/2022 00:25, Bart wrote:

    If cube roots were that common, would you write that as
    x**0.33333333333 or x**(1.0/3.0)? There you would welcome
    cuberoot(x)!

    In language terms I think I'd go for x ** (1.0 / 3.0). If a
    programmer wanted to put it in a function there would be nothing
    stopping him.

    I think the point Bart was making was that 1/3 had no exact
    representation in binary floating-point numbers. If cube root used a
    special algorithm, you would have a trouble to decide when to switch
    to it.

    If you and Bart mean that a cuberoot function would have to decide on
    a rounding direction for 1.0/3.0 then I agree; it's a good point. I
    haven't worked through the many issues surrounding floats, yet, but
    they would probably have similar to integers, i.e. delimited areas of
    code could have a default rounding mode. One could write something
    along the lines of

       with float-rounding-mode = round-to-positive-infinity
         return x ** (1.0 / 3.0)
       with end

    You cannot do that with available hardware in a reasonable way.
    Hardware rounding is set.

    Not so. Rounding towards infinities is provided in hardware.

    And why is it not so?

    A certain language (Ada?) might have you believe there is only one
    rounding mode in hardware but that's not so.

    Ada does not believe in anything. In Ada you can query the machine
    rounding, e.g. Float'Machine_Rounds returns True if the machine rounds when performing machine operations. But you cannot influence it.

    IEEE 754 defines five
    according to

      https://en.wikipedia.org/wiki/IEEE_754#Rounding_rules

    I currently define 12 rounding modes for integers and I guess that
    similar may apply for floats but I've not explored that yet. Either way,
    if a programmer wanted a mode which the IEEE did not define for hardware
    it would be the compiler's job to emit equivalent instructions.

    Yes, the problem is if the machine:

    1. Supports the rounding mode you need.

    2. Lets you set this kind of rounding for the given type/core/thread independently of others without massive overhead.

    Maybe you could find such hardware somewhere...

    My implementation of interval arithmetic tries to use whatever rounding the machine does to get at the interval boundaries.

    In such code the division would be rounded up to something like
    0.3333334.

    To incorporate that much control in a function would require many
    functions such as

       cuberoot_round_up
       cuberoot_round_down
       cuberoot_round_towards_zero

    and other similar horrors.

    It is no horrors, it is interval computations. Interval-valued
    function F returns an interval containing the mathematically correct
    result. Hardware implementations are interval-valued with the interval
    width of two adjacent machine numbers. Then rounding chooses one of
    the bounds.

    OT but I'd like to see a float implementation which instead of a single value always calculated upper and lower bounds for fp results.

    Welcome to interval arithmetic my friend! I too would like to ditch floating-point for floating-point intervals. Memory is not an issue
    anymore while rounding errors always are.
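
    A toy sketch of the idea in Python (not my actual implementation; since
    the hardware rounding mode cannot be changed from here, each result is
    simply widened outward by one ulp with math.nextafter, Python 3.9+):

        import math

        def widen(lo, hi):
            return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

        def interval_add(a, b):
            return widen(a[0] + b[0], a[1] + b[1])

        def interval_mul(a, b):
            products = [x * y for x in a for y in b]
            return widen(min(products), max(products))

        third = widen(1.0 / 3.0, 1.0 / 3.0)     # an interval containing the real 1/3
        print(third)
        print(interval_mul(third, (3.0, 3.0)))  # contains exactly 1.0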
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Tue Dec 13 15:06:18 2022
    From Newsgroup: comp.lang.misc

    On 12/12/2022 23:30, James Harris wrote:
    On 12/12/2022 15:18, David Brown wrote:
    On 12/12/2022 11:26, Bart wrote:

    And as David said, cube roots aren't that common. It might have a
    button for it on my Casio, that might be because it's oriented
    towards solving school maths problems.


    I think rather than having a cube root function, perhaps an nth root
    function would make some sense.  Then you could write "root(x, 3)" -
    that's about as neat and potentially as accurate as any other solution.

    I like that! I'd have to work through issues including integers vs
    floats but it would appear to be simple, clear, and general. I may do something similar, perhaps with the root first to permit results to be tuples.

      root(3, ....)  ;cube root

    so that

      root(3, 8) -> 2
      root(3, 8, 1000) -> (2, 10)


    This last one is :

    root(n, a, b) -> (a ^ 1/n, b ^ 1/n)

    ?

    Is that a general feature you have - allowing functions to take extra arguments and returning tuples? If so, what is the point? (I don't
    mean I think it is pointless, I mean I'd like to know what you think is
    the use-case!)


    Just to annoy Bart :-), you could do this by implementing currying in
    your language along with syntactic sugar for the common functional
    programming "map" function (which applies a function to every element in
    a list).

    Given a function "foo(a, b)" taking two inputs, "currying" would let the
    user treat "foo(a)" as a function that takes one input "b" and returns
    "foo(a, b)". Thus "foo(a, b)" and "foo(a)(b)" do the same thing. (In Haskell, you don't need the parentheses around parameters, and
    associativity means that "foo a b" is the same thing as "(foo a) b" - all
    use of multiple parameters is by currying.)

    Now allow the syntax "foo [a, b, c]" to mean "map foo [a, b, c]" and
    thus "[foo(a), foo(b), foo(c)]" - i.e., application of a single-input
    function to a list/tuple should return a list/tuple of that function
    applied to each element.

    The user (or library) can then define the single function "root(n, x)",
    and the user can write "root(3)[8, 1000]" to get "[2, 10]" without any
    special consideration in the definition of "root".
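
    For comparison, Python's functools.partial plus map gives roughly the
    same effect (the names here are just examples, not a proposal for
    anyone's syntax):

        from functools import partial

        def root(n, x):
            return x ** (1.0 / n)

        cuberoot = partial(root, 3)             # "root(3)" as a one-argument function
        print(list(map(cuberoot, [8, 1000])))   # approximately [2.0, 10.0]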


    If only I could think of a good use of this feature!

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Tue Dec 13 15:10:04 2022
    From Newsgroup: comp.lang.misc

    On 13/12/2022 01:27, Bart wrote:
    On 12/12/2022 22:41, James Harris wrote:
    On 12/12/2022 10:26, Bart wrote:

    What do you make of David's suggestion to have a "root" function which
    I would probably have as

       root(2, x) -> square root of x
       root(3, x) -> cube root of x

    It's OK. It solves the problem of special-casing cube-roots, and others
    of interest.

    Probably I wouldn't have it as built-in, as I have no great interest in
    cube roots (other than allowing ∛(x) would be cool). Maybe as an
    ordinary macro or function that implements it on top of **, but is a
    clearer alternative. nthroot() is a another possibility (as is an infix version `3 root x`, with precedence the same as **).

    But I would continue to use sqrt since I first saw that in a language in '75, and I see no reason to drop it.

    Square roots are very common, so it makes sense to have an individual
    function for them. At the very least, the implementation of "root"
    should have a special case for handling roots 0, 1 and 2.


    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library. And if you can't
    make library functions as efficient as builtins, find a better way to
    handle your standard library - standard libraries don't need to follow platform ABI's.)


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Bart@bc@freeuk.com to comp.lang.misc on Tue Dec 13 15:06:13 2022
    From Newsgroup: comp.lang.misc

    On 13/12/2022 14:06, David Brown wrote:
    On 12/12/2022 23:30, James Harris wrote:

    I like that! I'd have to work through issues including integers vs
    floats but it would appear to be simple, clear, and general. I may do
    something similar, perhaps with the root first to permit results to be
    tuples.

       root(3, ....)  ;cube root

    so that

       root(3, 8) -> 2
       root(3, 8, 1000) -> (2, 10)


    This last one is :

        root(n, a, b) -> (a ^ 1/n, b ^ 1/n)

    ?

    Is that a general feature you have - allowing functions to take extra arguments and return tuples?  If so, what is the point?  (I don't
    mean I think it is pointless, I mean I'd like to know what you think is
    the use-case!)


    Just to annoy Bart :-), you could do this by implementing currying in
    your language along with syntactic sugar for the common functional programming "map" function (which applies a function to every element in
    a list).

    Given a function "foo(a, b)" taking two inputs, "currying" would let the user treat "foo(a)" as a function that takes one input "b" and returns "foo(a, b)".  Thus "foo(a, b)" and "foo(a)(b)" do the same thing.  (In Haskell, you don't need the parentheses around parameters, and
    associativity means that "foo a b" is the same thing as "(foo a) b" - all
    use of multiple parameters is by currying.)

    Now allow the syntax "foo [a, b, c]" to mean "map foo [a, b, c]" and
    thus "[foo(a), foo(b), foo(c)]" - i.e., application of a single-input function to a list/tuple should return a list/tuple of that function
    applied to each element.

    The user (or library) can then define the single function "root(n, x)",
    and the user can write "root(3)[8, 1000]" to get "[2, 10]" without any special consideration in the definition of "root".

    It doesn't need to be that complicated. I can get that functionality
    just by using a form of 'map':

    println mapsv(root, 3, (8,27,64))

    fun root(n, x) = x**(1/n)

    Output is (2.0, 3.0, 4.0). But with dedicated operators or functions,
    it's a bit sweeter:

    fun cuberoot(x) = root(3,x)

    println mapv(sqrt, (10,20,30)) # sqrt is an operator
    println mapv(cuberoot, (10,20,30))

    To get rid of that intrusive `map`, there are other ways, easier with dynamic code:

    func nthroot(n, x) =
        if x.islist then
            mapsv(root, n, x)
        else
            root(n, x)
        fi
    end

    println nthroot(3, 10)
    println nthroot(3, (10,20,30))
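
    The same shape in Python, for what it's worth - run-time dispatch on the argument's type, standing in for the x.islist test above:

        def root(n, x):
            return x ** (1.0 / n)

        def nthroot(n, x):
            # A list or tuple comes back element-wise, a scalar as a plain number.
            if isinstance(x, (list, tuple)):
                return tuple(root(n, v) for v in x)
            return root(n, x)

        print(nthroot(3, 10))            # about 2.154
        print(nthroot(3, (10, 20, 30)))  # about (2.154, 2.714, 3.107)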

    To further remove the parentheses around (10,20,30), requires a form of variadic parameters, something I'm not keen on.

    But there is a cruder method (requiring a tweak in the language) which
    is to allow some functions not to need parentheses around the one
    parameter. Then the call would be:

    nthroot (3, 10, 20, 30)

    The single argument is the list (3,10,20,30), and it can take it from
    there (eg. it can return mapsv(root, head(x), tail(x))).

    There are other ways too, eg. using a piping operator:

    print (10,20,30) -> nthroot(3)

    There are very many possibilities without going into a full-blown functional
    language; however, my examples do make use of dynamic types.

    I think James' language is not that high-level, so a functional style would not
    be a good fit, but he says this is making use of 'tuples'.

    I've never really got tuples (so many languages use them in place of
    proper record types), but if they are generally available in the language
    for function parameters and for providing multiple return values, then they
    can be available for 'root' too.




    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 16:09:27 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-13 15:06, David Brown wrote:

    Now allow the syntax "foo [a, b, c]" to mean "map foo [a, b, c]" and
    thus "[foo(a), foo(b), foo(c)]" - i.e., application of a single-input function to a list/tuple should return a list/tuple of that function
    applied to each element.

    There is nothing "functional" (no pun intended) in that. In mathematical notation it is normal to do that for vectors and matrices.

    However, be warned, exp (A) when A is a square matrix is not exp of its elements (unless diagonal).
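
    A quick illustration of that warning, assuming NumPy and SciPy are available (scipy.linalg.expm is the matrix exponential, np.exp is element-wise):

        import numpy as np
        from scipy.linalg import expm

        A = np.array([[0.0, 1.0],
                      [0.0, 0.0]])

        print(expm(A))    # matrix exponential: [[1, 1], [0, 1]]
        print(np.exp(A))  # element-wise exp:   [[1, e], [1, 1]] - not the same thing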

    There are lots of ways to compose functions with series and sequences.
    Whether they should have special syntax is another question. It depends
    on how frequently such stuff is used. Not much, because there are
    pitfalls when it comes to real-life programs. Real-life programs are
    more about things that look correct on paper but do not work as expected.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 16:13:00 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-13 15:10, David Brown wrote:
    On 13/12/2022 01:27, Bart wrote:
    On 12/12/2022 22:41, James Harris wrote:
    On 12/12/2022 10:26, Bart wrote:

    What do you make of David's suggestion to have a "root" function
    which I would probably have as

       root(2, x) -> square root of x
       root(3, x) -> cube root of x

    It's OK. It solves the problem of special-casing cube-roots, and
    others of interest.

    Probably I wouldn't have it as built-in, as I have no great interest
    in cube roots (other than allowing ∛(x) would be cool). Maybe as an
    ordinary macro or function that implements it on top of **, but is a
    clearer alternative. nthroot() is a another possibility (as is an
    infix version `3 root x`, with precedence the same as **).

    But I would continue to use sqrt since I first saw that in a language
    in '75, and I see no reason to drop it.

    Square roots are very common, so it makes sense to have an individual function for them.  At the very least, the implementation of "root"
    should have a special case for handling roots 0, 1 and 2.

    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library.  And if you can't make library functions as efficient as builtins, find a better way to
    handle your standard library - standard libraries don't need to follow platform ABI's.)

    If x**y denotes power, then x//y should do the opposite. So x//2 is
    square root. I cannot help the proponents of x^y notation... (:-))
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Bart@bc@freeuk.com to comp.lang.misc on Tue Dec 13 15:22:38 2022
    From Newsgroup: comp.lang.misc

    On 13/12/2022 15:13, Dmitry A. Kazakov wrote:
    On 2022-12-13 15:10, David Brown wrote:

    Square roots are very common, so it makes sense to have an individual
    function for them.  At the very least, the implementation of "root"
    should have a special case for handling roots 0, 1 and 2.

    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library.  And if you
    can't make library functions as efficient as builtins, find a better
    way to handle your standard library - standard libraries don't need to
    follow platform ABI's.)

    If x**y denotes power, then x//y should do the opposite. So x//2 is
    square root. I cannot help the proponents of x^y notation... (:-))

    I had thought the same thing. But "//" is more useful for other things
    (aside from comments), eg Python uses it for integer divide; I'd
    reserved it to construct rational numbers.

    But then thinking about it some more:

    x**3 means x*x*x
    x//3 doesn't mean x/x/x


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Tue Dec 13 16:32:18 2022
    From Newsgroup: comp.lang.misc

    On 13/12/2022 16:13, Dmitry A. Kazakov wrote:
    On 2022-12-13 15:10, David Brown wrote:
    On 13/12/2022 01:27, Bart wrote:
    On 12/12/2022 22:41, James Harris wrote:
    On 12/12/2022 10:26, Bart wrote:

    What do you make of David's suggestion to have a "root" function
    which I would probably have as

       root(2, x) -> square root of x
       root(3, x) -> cube root of x

    It's OK. It solves the problem of special-casing cube-roots, and
    others of interest.

    Probably I wouldn't have it as built-in, as I have no great interest
    in cube roots (other than allowing ∛(x) would be cool). Maybe as an
    ordinary macro or function that implements it on top of **, but is a
    clearer alternative. nthroot() is a another possibility (as is an
    infix version `3 root x`, with precedence the same as **).

    But I would continue to use sqrt since I first saw that in a language
    in '75, and I see no reason to drop it.

    Square roots are very common, so it makes sense to have an individual
    function for them.  At the very least, the implementation of "root"
    should have a special case for handling roots 0, 1 and 2.

    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library.  And if you
    can't make library functions as efficient as builtins, find a better
    way to handle your standard library - standard libraries don't need to
    follow platform ABI's.)

    If x**y denotes power, then x//y should do the opposite. So x//2 is
    square root. I cannot help the proponents of x^y notation... (:-))


    x ⌄ y, perhaps? :-)

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 16:37:00 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-13 16:22, Bart wrote:
    On 13/12/2022 15:13, Dmitry A. Kazakov wrote:
    On 2022-12-13 15:10, David Brown wrote:

    Square roots are very common, so it makes sense to have an individual
    function for them.  At the very least, the implementation of "root"
    should have a special case for handling roots 0, 1 and 2.

    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library.  And if you
    can't make library functions as efficient as builtins, find a better
    way to handle your standard library - standard libraries don't need
    to follow platform ABI's.)

    If x**y denotes power, then x//y should do the opposite. So x//2 is
    square root. I cannot help the proponents of x^y notation... (:-))

    I had thought the same thing. But "//" is more useful for other things (aside from comments), eg Python uses it for integer divide; I'd
    reserved it to construct rational numbers.

    But then thinking about it some more:

       x**3 means x*x*x
       x//3 doesn't mean x/x/x

    x*3 means x+x+x
    x/3 does not mean x-x-x

    The logic here is different:

    exp(log(x**y)) = exp(log(x)*y) --> **
    exp(log(x**(1/y))) = exp(log(x)/y) --> //
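
    The same symmetry, checked numerically in Python:

        import math

        x, y = 7.0, 3.0
        print(x ** y,         math.exp(math.log(x) * y))  # power: multiply in log space
        print(x ** (1.0 / y), math.exp(math.log(x) / y))  # root:  divide in log space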

    Another logic could be this. Let us denote exp(log(x)... as x*... then exp(log(x)/y) could be x*/y (:-))
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 16:38:36 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-13 16:32, David Brown wrote:
    On 13/12/2022 16:13, Dmitry A. Kazakov wrote:
    On 2022-12-13 15:10, David Brown wrote:
    On 13/12/2022 01:27, Bart wrote:
    On 12/12/2022 22:41, James Harris wrote:
    On 12/12/2022 10:26, Bart wrote:

    What do you make of David's suggestion to have a "root" function
    which I would probably have as

       root(2, x) -> square root of x
       root(3, x) -> cube root of x

    It's OK. It solves the problem of special-casing cube-roots, and
    others of interest.

    Probably I wouldn't have it as built-in, as I have no great interest
    in cube roots (other than allowing ∛(x) would be cool). Maybe as an ordinary macro or function that implements it on top of **, but is a
    clearer alternative. nthroot() is another possibility (as is an
    infix version `3 root x`, with precedence the same as **).

    But I would continue to use sqrt since I first saw that in a
    language in '75, and I see no reason to drop it.

    Square roots are very common, so it makes sense to have an individual
    function for them.  At the very least, the implementation of "root"
    should have a special case for handling roots 0, 1 and 2.

    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library.  And if you
    can't make library functions as efficient as builtins, find a better
    way to handle your standard library - standard libraries don't need
    to follow platform ABI's.)

    If x**y denotes power, then x//y should do the opposite. So x//2 is
    square root. I cannot help the proponents of x^y notation... (:-))


    x ⌄ y, perhaps?  :-)

    I dismissed it because it looks like x or/union y... (:-))
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Tue Dec 13 18:07:13 2022
    From Newsgroup: comp.lang.misc

    On 13/12/2022 16:38, Dmitry A. Kazakov wrote:
    On 2022-12-13 16:32, David Brown wrote:
    On 13/12/2022 16:13, Dmitry A. Kazakov wrote:
    On 2022-12-13 15:10, David Brown wrote:
    On 13/12/2022 01:27, Bart wrote:
    On 12/12/2022 22:41, James Harris wrote:
    On 12/12/2022 10:26, Bart wrote:

    What do you make of David's suggestion to have a "root" function
    which I would probably have as

       root(2, x) -> square root of x
       root(3, x) -> cube root of x

    It's OK. It solves the problem of special-casing cube-roots, and
    others of interest.

    Probably I wouldn't have it as built-in, as I have no great
    interest in cube roots (other than allowing ∛(x) would be cool).
    Maybe as an ordinary macro or function that implements it on top of **, but is a clearer alternative. nthroot() is another
    possibility (as is an infix version `3 root x`, with precedence the same as **).

    But I would continue to use sqrt since I first saw that in a
    language in '75, and I see no reason to drop it.

    Square roots are very common, so it makes sense to have an
    individual function for them.  At the very least, the implementation of "root" should have a special case for handling roots 0, 1 and 2.

    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library.  And if you
    can't make library functions as efficient as builtins, find a better
    way to handle your standard library - standard libraries don't need
    to follow platform ABI's.)

    If x**y denotes power, then x//y should do the opposite. So x//2 is
    square root. I cannot help the proponents of x^y notation... (:-))


    x ⌄ y, perhaps?  :-)

    I dismissed it because it looks as x or/union y... (:-))


    To some (C programmers), "x ^ y" looks like "x xor y", while to others (logicians) it looks like "x and y". We have too few symbols,
    especially if we restrict ourselves to ones easily typed on many keyboards!

    And what operator shall we use for tetration? "x ^^ y" ? "x *** y" ?



    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 18:35:42 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-13 18:07, David Brown wrote:
    On 13/12/2022 16:38, Dmitry A. Kazakov wrote:
    On 2022-12-13 16:32, David Brown wrote:
    On 13/12/2022 16:13, Dmitry A. Kazakov wrote:
    On 2022-12-13 15:10, David Brown wrote:
    On 13/12/2022 01:27, Bart wrote:
    On 12/12/2022 22:41, James Harris wrote:
    On 12/12/2022 10:26, Bart wrote:

    What do you make of David's suggestion to have a "root" function which I would probably have as

       root(2, x) -> square root of x
       root(3, x) -> cube root of x

    It's OK. It solves the problem of special-casing cube-roots, and
    others of interest.

    Probably I wouldn't have it as built-in, as I have no great
    interest in cube roots (other than allowing ∛(x) would be cool). Maybe as an ordinary macro or function that implements it on top
    of **, but is a clearer alternative. nthroot() is another
    possibility (as is an infix version `3 root x`, with precedence
    the same as **).

    But I would continue to use sqrt since I first saw that in a
    language in '75, and I see no reason to drop it.

    Square roots are very common, so it makes sense to have an
    individual function for them.  At the very least, the
    implementation of "root" should have a special case for handling
    roots 0, 1 and 2.

    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library.  And if you can't make library functions as efficient as builtins, find a
    better way to handle your standard library - standard libraries
    don't need to follow platform ABI's.)

    If x**y denotes power, then x//y should do the opposite. So x//2 is
    square root. I cannot help the proponents of x^y notation... (:-))

    x ⌄ y, perhaps?  :-)

    I dismissed it because it looks as x or/union y... (:-))

    To some (C programmers), "x ^ y" looks like "x xor y", while to others (logicians) it looks like "x and y".  We have too few symbols,
    especially if we restrict ourselves to ones easily typed on many keyboards!

    And what operator shall we use for tetration?  "x ^^ y" ?  "x *** y" ?

    ^^ or ****, if the rule is to double the previous operation notation.
    What about infinite exponentiation? E.g. a function defined by the equation

    f(x) = x**f(x)

    I vote for an infinite sequence of * as the most annoying ... (:-))
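
    (One way to read "infinite exponentiation" is as the power tower x**x**x**..., which converges for x between e**-e and e**(1/e) and satisfies y = x**y. A quick Python sketch:)

        def tower(x, iters=200):
            y = 1.0
            for _ in range(iters):
                y = x ** y      # iterate towards the fixed point y = x**y
            return y

        print(tower(2 ** 0.5))  # sqrt(2)'s tower converges to 2
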
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Wed Dec 14 00:42:43 2022
    From Newsgroup: comp.lang.misc

    On 12/12/2022 09:11, David Brown wrote:
    [I wrote:]
         Cube roots are arguably used often enough to be worth implementing separately from "** (1/3)" or near equivalent.
    Really? In what context? Square roots occur all over the place, but
    I can't think of a single realistic use of cube roots that are not
    general nth roots (such as calculating geometric means). [...]

    Well, the most obvious one is finding the edge of a cube of given volume! Yes, I've done that occasionally. I found a couple of occurrences
    of "cbrt" in my solutions to some [~200] of the Euler problems [I tackle
    only ones that seem interesting], in both cases to find upper bounds on how
    far a calculation of n^3 needs to go for some condition to hold. Another
    use was when our prof of number theory came to me with some twisted cubics
    [qv] and asked me to find some solutions to associated cubic equations
    [sadly, it was ~15 years ago and I've forgotten the details]. My most
    recent personal use seems to have been in constructing a colour map for a Mandelbrot program, so aesthetic rather than important. IOW, I wouldn't
    claim major usage, but not negligible either.

    Then the error from "1/3"
    is irrelevant.  But if this matters to you, you would need to use serious numerical analysis and computation beyond the normal scope of this group,
    such as interval arithmetic and Chebychev polynomials.
    You would almost certainly not use Chebychev's or other polynomials
    for calculating cube roots, unless you were making something like a specialised pipelined implementation in an ASIC or FPGA.

    Chebychev polynomials have the advantage of minimising the error
    over a range, so typically converge significantly faster than other ways
    of calculating library functions /as long as/ you can pre-compute the
    number of terms needed and thus the actual conversion back into a normal polynomial. This typically saves an iteration or two /and/ having a
    loop [as you can unroll it] /and/ testing whether to go round again,
    compared with an iterative method. I had a colleague who was very keen
    on Pade approximations, but I didn't find them any faster; admittedly,
    that was in the '60s and '70s when I was heavily involved in numerical
    work for astrophysics [and hitting limits on f-p numbers, store sizes,
    time allocations, ...]; I don't know what the current state of the art
    is on modern hardware.
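
    As a concrete sketch of the fixed-number-of-terms approach (a least-squares Chebyshev fit in NumPy rather than a true minimax fit, but the same shape - degree chosen up front, evaluation is a straight polynomial with no loop or convergence test):

        import numpy as np

        ts = np.linspace(0.5, 1.0, 200)
        cheb = np.polynomial.Chebyshev.fit(ts, np.cbrt(ts), deg=6)

        worst = np.max(np.abs(cheb(ts) - np.cbrt(ts)))
        print(f"degree-6 fit of cbrt on [0.5, 1]: worst sampled error {worst:.1e}")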

    Newton-Raphson iteration, or related algorithms, are simpler and
    converge quickly without all the messiness of ranges.

    Simpler for casual use, certainly, esp if you have no inside
    knowledge of the hardware. But you still need to do range reduction,
    as N-R is only linear if you start a long way from the root. [Atlas
    had a nifty facility whereby the f-p exponent was a register, so you
    could get reasonably close to an n-th root simply by dividing that
    register by n.]
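
    A sketch of that exponent trick in Python, using frexp/ldexp in place of direct access to the exponent register, followed by Newton-Raphson on the reduced range (assumes x > 0; no handling of zero, negatives, infinities or NaNs):

        import math

        def cbrt(x):
            m, e = math.frexp(x)        # x == m * 2**e, with 0.5 <= m < 1
            q, r = divmod(e, 3)
            t = math.ldexp(m, r)        # t in [0.5, 4); cbrt(x) == cbrt(t) * 2**q
            y = (t + 2.0) / 3.0         # crude seed, within ~26% of the true root
            for _ in range(6):          # Newton-Raphson steps for y**3 == t
                y = (2.0 * y + t / (y * y)) / 3.0
            return math.ldexp(y, q)

        print(cbrt(27.0), cbrt(0.2))    # 3.0 and about 0.5848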

    [But my real point here, as in my previous article, is not to
    promote any particular numerical technique, but to point out that the calculation of library functions in the Real World is not an amateur
    activity, and needs serious NA, well beyond what most undergraduates
    encounter, and well beyond the normal scope of this group.]
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Fisher
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Wed Dec 14 15:12:50 2022
    From Newsgroup: comp.lang.misc

    On 14/12/2022 01:42, Andy Walker wrote:
    On 12/12/2022 09:11, David Brown wrote:
    [I wrote:]
         Cube roots are arguably used often enough to be worth implementing
    separately from "** (1/3)" or near equivalent.
    Really?  In what context?  Square roots occur all over the place, but
    I can't think of a single realistic use of cube roots that are not
    general nth roots (such as calculating geometric means). [...]

        Well, the most obvious one is finding the edge of a cube of given volume!  Yes, I've done that occasionally.

    That's got to count as a pretty obscure need - not something worth
    having a standard library function to handle.

      I found a couple of occurrences
    of "cbrt" in my solutions to some [~200] of the Euler problems [I tackle
    only ones that seem interesting], in both cases to find upper bounds on how far a calculation of n^3 needs to go for some condition to hold.

    Again, pretty unusual. And if you are fond of doing maths problems or calculations (I am), then I still don't see any reason to consider cube
    root worthy of a function by itself. I am as likely to use other roots
    as I am to use cube roots. (Square roots are, of course, overwhelmingly
    more useful.)

    Another
    use was when our prof of number theory came to me with some twisted cubics [qv] and asked me to find some solutions to associated cubic equations [sadly, it was ~15 years ago and I've forgotten the details].  My most recent personal use seems to have been in constructing a colour map for a Mandelbrot program, so aesthetic rather than important.  IOW, I wouldn't claim major usage, but not negligible either.


    My use of cubics usually goes the other way - cubic splines for
    approximating functions, rather than having to solve the cubics.

    Then the error from "1/3"
    is irrelevant.  But if this matters to you, you would need to use
    serious
    numerical analysis and computation beyond the normal scope of this
    group,
    such as interval arithmetic and Chebychev polynomials.
    You would almost certainly not use Chebychev's or other polynomials
    for calculating cube roots, unless you were making something like a
    specialised pipelined implementation in an ASIC or FPGA.

        Chebychev polynomials have the advantage of minimising the error over a range, so typically converge significantly faster than other ways
    of calculating library functions /as long as/ you can pre-compute the
    number of terms needed and thus the actual conversion back into a normal polynomial.

    Having the minimum RMS error over a range, for a given polynomial
    degree, does not necessarily translate to fastest calculation.
    Certainly Chebychev's are used for many kinds of function approximation,
    but iterative processes like Newton-Raphson or related algorithms can
    often converge much faster. Real-life performance can, however, depend
    on division speeds which are often much slower than multiplications, and
    can depend on pipelining and OOO execution. An ideal implementation
    might use a short polynomial approximation to get a good starting point,
    then a couple of rounds of NR to fill out the accuracy.
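
    A sketch of that hybrid shape in Python - a short polynomial seed on an already-reduced range, then two Newton rounds. The seed coefficients are fitted by least squares here purely for illustration; a real implementation would use hand-tuned minimax coefficients:

        import numpy as np

        ts = np.linspace(0.5, 1.0, 64)
        seed = np.polynomial.Polynomial.fit(ts, np.cbrt(ts), deg=2)

        def cbrt_reduced(t):
            # t is assumed to be already range-reduced into [0.5, 1).
            y = seed(t)                            # polynomial starting point
            for _ in range(2):                     # two Newton rounds
                y = (2.0 * y + t / (y * y)) / 3.0
            return y

        print(cbrt_reduced(0.7), 0.7 ** (1.0 / 3.0))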

    This typically saves an iteration or two /and/ having a
    loop [as you can unroll it] /and/ testing whether to go round again,
    compared with an iterative method.  I had a colleague who was very keen
    on Pade approximations, but I didn't find them any faster;  admittedly,
    that was in the '60s and '70s when I was heavily involved in numerical
    work for astrophysics [and hitting limits on f-p numbers, store sizes,
    time allocations, ...];  I don't know what the current state of the art
    is on modern hardware.


    The challenge here is that the answer is usually "it's complicated". If
    you can push the calculations into SIMD, you can do lots in parallel -
    but the latency for a single calculation is going to be much higher.
    (Mind you, if you are just doing one cube root, who cares about its speed?)

    Newton-Raphson iteration, or related algorithms, are simpler and
    converge quickly without all the messiness of ranges.

        Simpler for casual use, certainly, esp if you have no inside knowledge of the hardware.  But you still need to do range reduction,
    as N-R is only linear if you start a long way from the root.  [Atlas
    had a nifty facility whereby the f-p exponent was a register, so you
    could get reasonably close to an n-th root simply by dividing that
    register by n.]

        [But my real point here, as in my previous article, is not to promote any particular numerical technique, but to point out that the calculation of library functions in the Real World is not an amateur activity, and needs serious NA, well beyond what most undergraduates encounter, and well beyond the normal scope of this group.]


    Fair enough.

    Amateurs can have fun with it, however!
    --- Synchronet 3.19c-Linux NewsLink 1.113