• Re: Functional programming is not always what it seems

    From James Harris@james.harris.1@gmail.com to comp.lang.misc on Mon Dec 12 22:22:09 2022
    From Newsgroup: comp.lang.misc

    On 11/12/2022 19:08, Dmitry A. Kazakov wrote:
    On 2022-12-11 19:25, James Harris wrote:
    On 11/12/2022 17:21, Dmitry A. Kazakov wrote:
    On 2022-12-11 18:00, James Harris wrote:
    On 07/12/2022 19:37, Dmitry A. Kazakov wrote:

    But where did you read that the specific term 'subtype' should
    thereafter be applied to implicit conversions?

    Conversion is an implementation of substitution.

    Conversion is a conversion, not a substitution!

    I did not say it is. I said it is an implementation of it.

    How would you substitute int for float without a conversion?

    I don't have implicit conversions of declared types. They must be
    written explicitly. Undeclared types (big int, big uint, big float,
    character, string, boolean etc) would have automatic conversions. Not
    sure if that affects your assessment.

    Call implicit automatic, automatic implicit. It does not change the semantics and the intent. The intent and the meaning is to substitute
    one type for another transparently to the syntax.

    Fine but that still doesn't imply a 'sub' type - for any reasonable
    meaning of the term.

    I accept that you have a different interpretation of 'subtype' and
    that's fine but it seems a confusing use of language. Maybe there's a
    better term that everyone would be happy with but I think of a subtype
    as being diminutive compared to another, such as integers which have a
    smaller range.

    thou: integer range 0..999
    hund: integer range 0..99

    where hund could be understood to be a subtype of thou. But not the
    other way round!
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Mon Dec 12 22:30:26 2022
    From Newsgroup: comp.lang.misc

    On 12/12/2022 15:18, David Brown wrote:
    On 12/12/2022 11:26, Bart wrote:

    And as David said, cube roots aren't that common. It might have a
    button for it on my Casio, but that might be because it's oriented towards
    solving school maths problems.


    I think rather than having a cube root function, perhaps an nth root function would make some sense.  Then you could write "root(x, 3)" -
    that's about as neat and potentially as accurate as any other solution.

    I like that! I'd have to work through issues including integers vs
    floats but it would appear to be simple, clear, and general. I may do something similar, perhaps with the root first to permit results to be
    tuples.

    root(3, ....) ;cube root

    so that

    root(3, 8) -> 2
    root(3, 8, 1000) -> (2, 10)
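
    As a rough Python rendering of that shape (the variadic argument list
    and the scalar-versus-tuple result are my assumptions here, not a
    settled design):

        def root(n, *values):
            # Hypothetical n-th root: one value gives a scalar,
            # several give a tuple.
            results = tuple(x ** (1.0 / n) for x in values)
            return results[0] if len(results) == 1 else results

        print(root(3, 8))        # 2.0 on a typical platform
        # the second call gives (2.0, 9.999999999999998) rather than
        # (2.0, 10.0), because 1/3 has no exact binary representation
        print(root(3, 8, 1000))
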
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Mon Dec 12 22:41:29 2022
    From Newsgroup: comp.lang.misc

    On 12/12/2022 10:26, Bart wrote:
    On 11/12/2022 18:01, James Harris wrote:
    On 07/12/2022 16:53, Dmitry A. Kazakov wrote:
    On 2022-12-07 17:42, James Harris wrote:
    On 06/12/2022 00:25, Bart wrote:

    If cube roots were that common, would you write that as
    x**0.33333333333 or x**(1.0/3.0)? There you would welcome cuberoot(x)!
    In language terms I think I'd go for x ** (1.0 / 3.0). If a
    programmer wanted to put it in a function there would be nothing
    stopping him.

    I think the point Bart was making was that 1/3 had no exact
    representation in binary floating-point numbers. If cube root used a
    special algorithm, you would have trouble deciding when to switch
    to it.

    If you and Bart mean that a cuberoot function would have to decide on
    a rounding direction for 1.0/3.0 then I agree; it's a good point.

    Actually, I was concerned with aesthetics.

    That's also important.


    The precision of 1.0/3 is something I hadn't considered, but DAK's point
    is, if a compiler knew of a fast cube root algorithm and wanted to special-case A**B when B was 1.0/3, then how close to 'one third' would
    B have to be before it could assume that a cube-root was in fact what
    was intended? Given that there is no precise representation of 'one
    third' anyway.

    Far easier to just provide something like 'cuberoot()'; then it will
    know, and so will people reading the code.

    What of David's suggestion, mentioned below?


    I haven't worked through the many issues surrounding floats, yet, but
    they would probably be handled similarly to integers, i.e. delimited areas of
    code could have a default rounding mode. One could write something
    along the lines of

       with float-rounding-mode = round-to-positive-infinity
         return x ** (1.0 / 3.0)
       with end

    In such code the division would be rounded up to something like
    0.3333334.
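
    Python's decimal module gives a rough analogue of such a delimited
    rounding block, shown here only for the shape of it (the hardware
    rounding of binary floats is not reachable from Python, so decimal
    stands in):

        from decimal import Decimal, localcontext
        from decimal import ROUND_CEILING, ROUND_FLOOR

        with localcontext() as ctx:
            ctx.prec = 7                    # 7 significant digits
            ctx.rounding = ROUND_CEILING    # round towards +infinity
            up = Decimal(1) / Decimal(3)    # 0.3333334

        with localcontext() as ctx:
            ctx.prec = 7
            ctx.rounding = ROUND_FLOOR      # round towards -infinity
            down = Decimal(1) / Decimal(3)  # 0.3333333

        print(up, down)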



    To incorporate that much control in a function would require many
    functions such as

       cuberoot_round_up
       cuberoot_round_down
       cuberoot_round_towards_zero

    Huh? Who cares about the rounding of a cube root?! As I said, it was merely about detecting whether this was a cube root.

    Since floats are, in general, /approximations/, the rounding mode on any particular operation can matter to how precise a result is; though IME programming languages tend to brush that fact under the carpet.


    and other similar horrors. Those names are, in fact, misleading as
    they seem to apply to the cube root rather than the division within it
    - which makes the idea of a function even worse.

    Oh, you are talking about that! Maybe you need a function like isthisreallyathirdorjustsomethingclose() instead.

    What do you make of David's suggestion to have a "root" function which I
    would probably have as

    root(2, x) -> square root of x
    root(3, x) -> cube root of x

    etc?
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Tue Dec 13 00:16:08 2022
    From Newsgroup: comp.lang.misc

    On 11/12/2022 17:34, James Harris wrote:
    You'll be glad to know I won't support any of that nonsense (such as
    allowing a redefinition of what 2 means) but what do you mean about
    the program text? Say the text has
      two: const = 2
      f(2)
      f(two)
    I don't see any semantic difference between 2 and two. Nor does
    either form imply either storage or the lack of storage.

    I'm assuming that the intention is similar to, perhaps identical
    with, the Algol 68 "int two = 2;"? But we don't really have enough of a
    formal definition of your language to be sure.

    That being the case, do you still see a semantic difference between 2
    and two?

    The above A68 simply gives a new name, "two", to the integer 2.
    If your language effectively does the same, then I see no interesting difference between "2" and "two". But I could certainly imagine cases
    where there would be a difference: eg,

    begin int two = 2; ... two ... end; ... two ... # two no longer valid #
    begin ... 2 ... end; ... 2 ... # but 2 still works (of course) #

    Whether you regard that as "interesting" is a matter of taste. Another potential problem is with pointers; if you regard "const" as merely an adjective [so that "two" is an integer "variable" that happens to be a constant], then there is presumably no objection, for you, to setting up
    a pointer to point to "two", whereas there might be to pointing at "2"
    [which may require an intermediate integer variable]. Without a much
    fuller description of your language, we can't tell whether this is a
    real difference.
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Haydn
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Bart@bc@freeuk.com to comp.lang.misc on Tue Dec 13 00:27:45 2022
    From Newsgroup: comp.lang.misc

    On 12/12/2022 22:41, James Harris wrote:
    On 12/12/2022 10:26, Bart wrote:

    What do you make of David's suggestion to have a "root" function which I would probably have as

      root(2, x) -> square root of x
      root(3, x) -> cube root of x

    It's OK. It solves the problem of special-casing cube-roots, and others
    of interest.

    Probably I wouldn't have it as built-in, as I have no great interest in
    cube roots (other than allowing ∛(x) would be cool). Maybe as an
    ordinary macro or function that implements it on top of **, but is a
    clearer alternative. nthroot() is another possibility (as is an infix version `3 root x`, with precedence the same as **).

    But I would continue to use sqrt since I first saw that in a language in
    '75, and I see no reason to drop it.
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 09:40:47 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-12 23:22, James Harris wrote:
    On 11/12/2022 19:08, Dmitry A. Kazakov wrote:
    On 2022-12-11 19:25, James Harris wrote:
    On 11/12/2022 17:21, Dmitry A. Kazakov wrote:
    On 2022-12-11 18:00, James Harris wrote:
    On 07/12/2022 19:37, Dmitry A. Kazakov wrote:

    But where did you read that the specific term 'subtype' should
    thereafter be applied to implicit conversions?

    Conversion is an implementation of substitution.

    Conversion is a conversion, not a substitution!

    I did not say it is. I said it is an implementation of it.

    How would you substitute int for float without a conversion?

    I don't have implicit conversions of declared types. They must be
    written explicitly. Undeclared types (big int, big uint, big float,
    character, string, boolean etc) would have automatic conversions. Not
    sure if that affects your assessment.

    Call implicit automatic, automatic implicit. It does not change the
    semantics and the intent. The intent and the meaning is to substitute
    one type for another transparently to the syntax.

    Fine but that still doesn't imply a 'sub' type - for any reasonable
    meaning of the term.

    It does not imply it, it *is* a subtype. There is a difference between A=>B
    and A=B. It is just the thing as defined.

    I accept that you have a different interpretation of 'subtype' and
    that's fine but it seems a confusing use of language. Maybe there's a
    better term that everyone would be happy with but I think of a subtype
    as being diminutive compared to another, such as integers which have a smaller range.

      thou: integer range 0..999
      hund: integer range 0..99

    where hund could be understood to be a subtype of thou. But not the
    other way round!

    Special cases of subtyping regarding the sets of values and/or
    operations of a type are:

    - Generalization (you add members to the sets)
    - Specialization (you remove members from the sets)

    What you mentioned is setting up a constraint, like range, in order to
    create a new subtype. It is a case of specialization.

    E.g. circle is a specialized ellipse. Ellipse is a generalized circle.
    Either can be a subtype of the other, or both; it is a free choice of the programmer modeling them [*].

    There are mixed cases, of course. But none is relevant to the meaning of
    "sub" in subtypes. The meaning is the same as "sub" in substitution. It
    means a direction: S is taken for T (in some context, read, some
    operation). S appears to be T for the reader. How this is achieved is an
    implementation detail, i.e. irrelevant. OK?

    ---------------------
    * Neither choice gives LSP behavioral substitutability, because that is
    impossible. Which is why the internet is full of pointless discussions about
    whether Circle/Square is Ellipse/Rectangle. But for practical purposes
    language-level substitutability is good enough for either choice.
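
    A toy Python rendering of the footnote's point (not anyone's proposed
    design): language-level substitution goes through, while the
    behavioural guarantee quietly fails.

        class Ellipse:
            def __init__(self, a, b):
                self.a, self.b = a, b    # semi-axes

            def stretch_x(self, factor):
                self.a *= factor         # fine for a general ellipse

        class Circle(Ellipse):           # specialization: constrain a == b
            def __init__(self, r):
                super().__init__(r, r)

        c = Circle(1.0)
        c.stretch_x(2.0)     # accepted wherever an Ellipse is accepted...
        print(c.a, c.b)      # 2.0 1.0 - the circle invariant is gone
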
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 10:02:54 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-12 23:04, James Harris wrote:
    On 11/12/2022 19:21, Dmitry A. Kazakov wrote:
    On 2022-12-11 19:45, James Harris wrote:
    On 11/12/2022 17:43, Dmitry A. Kazakov wrote:
    On 2022-12-11 18:18, James Harris wrote:
    On 07/12/2022 19:39, Dmitry A. Kazakov wrote:
    On 2022-12-07 19:09, James Harris wrote:
    On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
    On 2022-12-06 00:04, James Harris wrote:
    On 05/12/2022 07:50, Dmitry A. Kazakov wrote:

    ...

    At least you've now added operations so you are getting there. ;-)

    Good. Now you see why in T and in out T cannot be the same type?

    No. Types are not the only control in a programming language. An
    /object/ has a type.

    If you want to use the term for something else you are free to do
    so. But if you accept the standard definition you must also accept
    all consequences of.

    Potentially fair but where did you read a definition of 'type'
    which backs up your point?

    It is a commonly accepted definition AFAIK coming from mathematical
    type theories of the beginning of the last century.

    Wikipedia (ADT = abstract datatype):

    "Formally, an ADT may be defined as a "class of objects whose
    logical behavior is defined by a set of values and a set of
    operations"; this is analogous to an algebraic structure in
    mathematics."

    That rather backs up my assertion that types are of /objects/ rather
    than of /uses/. IOW a piece of code could treat an object as
    read-only but that doesn't change the type of the object.

    No idea what this is supposed to mean. You asked where it comes from,
    I gave you the quote.

    You referred to an ADT rather than a type.

    That is the same. Historically, abstract meant non-machine. Clearly any
    machine type is abstract for some other machine or emulator.

    From the same source:

    "A data type constrains the possible values that an expression, such as
    a variable or a function, might take. This data type defines the
    operations that can be done on the data, the meaning of the data, and
    the way values of that type can be stored."

    Right. Replace sloppy "constrains" with "has" and you have the standard definition. The author probably unconsciously tried to take untyped
    languages on board, moving from untyped to typed by constraining the
    former. That is not the way things are done, e.g. in mathematics. You
    always start "typed" and don't even consider things outside the scope.
    E.g. talking about integer numbers nobody says, ah, BTW, it could be a
    tensor too.

      https://en.wikipedia.org/wiki/Data_type

    Note, the operations which "can be done on the data", not "can be done
    by a certain function".

    "Certain function" = operation of the type. All functions that take
    arguments and/or return values of the type are "certain." OK?

    Again, you do not accept the common definition, give your own, and explain
    how stupid the rest of the world is for not using it.

    Ahem, I'm happy with the aforementioned definition. It's you who is
    seeking an alteration.

    Nope, the definitions above minus sloppy wording are just fine.

    So what? Again, any particular weakness of your type system by no means
    changes the point. Which, let me repeat, is: you can always resolve
    ambiguities by qualifying types.

    There's no weakness in that type system. On the contrary, it requires
    and would enforce precision and explicitness from the programmer.

    OK, let me reformulate, the excellence and unprecedented strength of
    your type system ... and we continue ... is irrelevant as you can always resolve ambiguities by qualifying types.

    Better?
    Such inconsistency is something that should be resisted.

    What inconsistency?

    See the example, above, on references. It would be inconsistent to
    automatically dereference when the rest of the language requires
    explicit type matching.

    I do not see any inconsistency. See the definition of:

    Wikipedia:

    "In classical deductive logic, a consistent theory is one that does
    not lead to a logical contradiction."

    I mean 'inconsistency' such as requiring explicit conversions in one
    place but not in another.

    The word is "irregularity."

    It is a language designer's choice. I was talking about the means of achieving substitution that an advanced type system may offer. Other questions are:

    - If a language should offer automatic dereferencing
    - If a language should allow the programmer to introduce automatic dereferencing
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 10:17:43 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-12 23:15, James Harris wrote:
    On 11/12/2022 19:32, Dmitry A. Kazakov wrote:
    On 2022-12-11 19:01, James Harris wrote:
    On 07/12/2022 16:53, Dmitry A. Kazakov wrote:
    On 2022-12-07 17:42, James Harris wrote:
    On 06/12/2022 00:25, Bart wrote:

    If cube roots were that common, would you write that as
    x**0.33333333333 or x**(1.0/3.0)? There you would welcome
    cuberoot(x)!

    In language terms I think I'd go for x ** (1.0 / 3.0). If a
    programmer wanted to put it in a function there would be nothing
    stopping him.

    I think the point Bart was making was that 1/3 had no exact
    representation in binary floating-point numbers. If cube root used a
    special algorithm, you would have trouble deciding when to switch
    to it.

    If you and Bart mean that a cuberoot function would have to decide on
    a rounding direction for 1.0/3.0 then I agree; it's a good point. I
    haven't worked through the many issues surrounding floats, yet, but
    they would probably be handled similarly to integers, i.e. delimited areas of
    code could have a default rounding mode. One could write something
    along the lines of

       with float-rounding-mode = round-to-positive-infinity
         return x ** (1.0 / 3.0)
       with end

    You cannot do that with available hardware in a reasonable way.
    Hardware rounding is set.

    Not so. Rounding towards infinities is provided in hardware.

    And why is it not so?

    A certain language (Ada?) might have you believe there is only one
    rounding mode in hardware but that's not so.

    Ada does not believe in anything. In Ada you can query the machine
    rounding, e.g. Float'Machine_Rounds returns True if the machine rounds when performing machine operations. But you cannot influence it.

    IEEE 754 defines five rounding rules according to

      https://en.wikipedia.org/wiki/IEEE_754#Rounding_rules

    I currently define 12 rounding modes for integers and I guess that
    something similar may apply to floats, but I've not explored that yet. Either way,
    if a programmer wanted a mode which the IEEE did not define for hardware
    it would be the compiler's job to emit equivalent instructions.

    Yes, the problem is whether the machine:

    1. Supports the rounding mode you need.

    2. Lets you set this kind of rounding for the given type/core/thread independently of others without massive overhead.

    Maybe you could find such hardware somewhere...

    My implementation of interval arithmetic tries to use whatever rounding the machine does to get at the interval boundaries.

    In such code the division would be rounded up to something like
    0.3333334.

    To incorporate that much control in a function would require many
    functions such as

       cuberoot_round_up
       cuberoot_round_down
       cuberoot_round_towards_zero

    and other similar horrors.

    Those are no horrors, that is interval computation. An interval-valued
    function F returns an interval containing the mathematically correct
    result. Hardware implementations are interval-valued with an interval
    width of two adjacent machine numbers. Then rounding chooses one of
    the bounds.

    OT but I'd like to see a float implementation which instead of a single value always calculated upper and lower bounds for fp results.

    Welcome to interval arithmetic, my friend! I too would like to ditch
    floating-point for floating-point intervals. Memory is not an issue
    anymore, while rounding errors always are.
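
    For the flavour of it, a toy sketch in Python (my own illustration;
    one-ulp outward widening stands in for proper directed rounding):

        import math

        def widen(lo, hi):
            # Widen outward by one ulp so the true result stays enclosed.
            return (math.nextafter(lo, -math.inf),
                    math.nextafter(hi, math.inf))

        class Interval:
            def __init__(self, lo, hi=None):
                self.lo = lo
                self.hi = lo if hi is None else hi

            def __add__(self, other):
                return Interval(*widen(self.lo + other.lo,
                                       self.hi + other.hi))

            def __mul__(self, other):
                p = [a * b for a in (self.lo, self.hi)
                           for b in (other.lo, other.hi)]
                return Interval(*widen(min(p), max(p)))

            def __repr__(self):
                return f"[{self.lo!r}, {self.hi!r}]"

        x = Interval(0.1) + Interval(0.2)
        print(x)   # encloses the exact sum of the two stored doubles
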
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Tue Dec 13 15:06:18 2022
    From Newsgroup: comp.lang.misc

    On 12/12/2022 23:30, James Harris wrote:
    On 12/12/2022 15:18, David Brown wrote:
    On 12/12/2022 11:26, Bart wrote:

    And as David said, cube roots aren't that common. It might have a
    button for it on my Casio, but that might be because it's oriented
    towards solving school maths problems.


    I think rather than having a cube root function, perhaps an nth root
    function would make some sense.  Then you could write "root(x, 3)" -
    that's about as neat and potentially as accurate as any other solution.

    I like that! I'd have to work through issues including integers vs
    floats but it would appear to be simple, clear, and general. I may do something similar, perhaps with the root first to permit results to be tuples.

      root(3, ....)  ;cube root

    so that

      root(3, 8) -> 2
      root(3, 8, 1000) -> (2, 10)


    This last one is :

    root(n, a, b) -> (a ^ 1/n, b ^ 1/n)

    ?

    Is that a general feature you have - allowing functions to take extra arguments and returning tuples? If so, what is the point? (I don't
    mean I think it is pointless, I mean I'd like to know what you think is
    the use-case!)


    Just to annoy Bart :-), you could do this by implementing currying in
    your language along with syntactic sugar for the common functional
    programming "map" function (which applies a function to every element in
    a list).

    Given a function "foo(a, b)" taking two inputs, "currying" would let the
    user treat "foo(a)" as a function that takes one input "b" and returns
    "foo(a, b)". Thus "foo(a, b)" and "foo(a)(b)" do the same thing. (In Haskell, you don't need the parentheses around parameters, and
    associativity means that "foo a b" is the same thing as "(foo a) b" - all
    use of multiple parameters is by currying.)

    Now allow the syntax "foo [a, b, c]" to mean "map foo [a, b, c]" and
    thus "[foo(a), foo(b), foo(c)]" - i.e., application of a single-input
    function to a list/tuple should return a list/tuple of that function
    applied to each element.

    The user (or library) can then define the single function "root(n, x)",
    and the user can write "root(3)[8, 1000]" to get "[2, 10]" without any
    special consideration in the definition of "root".
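
    For a concrete (if prosaic) rendering, Python's functools.partial plus
    map gives the same shape - a sketch of the idea only, not a suggestion
    about James's syntax:

        from functools import partial

        def root(n, x):
            return x ** (1.0 / n)

        cbrt = partial(root, 3)      # plays the role of "root(3)"
        # prints about [2.0, 10.0], modulo the 1/3 rounding noted elsewhere
        print(list(map(cbrt, [8, 1000])))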


    If only I could think of a good use of this feature!

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Tue Dec 13 15:10:04 2022
    From Newsgroup: comp.lang.misc

    On 13/12/2022 01:27, Bart wrote:
    On 12/12/2022 22:41, James Harris wrote:
    On 12/12/2022 10:26, Bart wrote:

    What do you make of David's suggestion to have a "root" function which
    I would probably have as

       root(2, x) -> square root of x
       root(3, x) -> cube root of x

    It's OK. It solves the problem of special-casing cube-roots, and others
    of interest.

    Probably I wouldn't have it as built-in, as I have no great interest in
    cube roots (other than allowing ∛(x) would be cool). Maybe as an
    ordinary macro or function that implements it on top of **, but is a
    clearer alternative. nthroot() is a another possibility (as is an infix version `3 root x`, with precedence the same as **).

    But I would continue to use sqrt since I first saw that in a language in '75, and I see no reason to drop it.

    Square roots are very common, so it makes sense to have an individual
    function for them. At the very least, the implementation of "root"
    should have a special case for handling roots 0, 1 and 2.


    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library. And if you can't
    make library functions as efficient as builtins, find a better way to
    handle your standard library - standard libraries don't need to follow platform ABI's.)


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Bart@bc@freeuk.com to comp.lang.misc on Tue Dec 13 15:06:13 2022
    From Newsgroup: comp.lang.misc

    On 13/12/2022 14:06, David Brown wrote:
    On 12/12/2022 23:30, James Harris wrote:

    I like that! I'd have to work through issues including integers vs
    floats but it would appear to be simple, clear, and general. I may do
    something similar, perhaps with the root first to permit results to be
    tuples.

       root(3, ....)  ;cube root

    so that

       root(3, 8) -> 2
       root(3, 8, 1000) -> (2, 10)


    This last one is :

        root(n, a, b) -> (a ^ 1/n, b ^ 1/n)

    ?

    Is that a general feature you have - allowing functions to take extra arguments and returning tuples?  If so, what is the point?  (I don't
    mean I think it is pointless, I mean I'd like to know what you think is
    the use-case!)


    Just to annoy Bart :-), you could do this by implementing currying in
    your language along with syntactic sugar for the common functional programming "map" function (which applies a function to every element in
    a list).

    Given a function "foo(a, b)" taking two inputs, "currying" would let the user treat "foo(a)" as a function that takes one input "b" and returns "foo(a, b)".  Thus "foo(a, b)" and "foo(a)(b)" do the same thing.  (In Haskell, you don't need the parentheses around parameters, and
    associativity means that "foo a b" is the same thing as "(foo a) b" - all
    use of multiple parameters is by currying.)

    Now allow the syntax "foo [a, b, c]" to mean "map foo [a, b, c]" and
    thus "[foo(a), foo(b), foo(c)]" - i.e., application of a single-input function to a list/tuple should return a list/tuple of that function
    applied to each element.

    The user (or library) can then define the single function "root(n, x)",
    and the user can write "root(3)[8, 1000]" to get "[2, 10]" without any special consideration in the definition of "root".

    It doesn't need to be that complicated. I can get that functionality
    just by using a form of 'map':

    println mapsv(root, 3, (8,27,64))

    fun root(n, x) = x**(1/n)

    Output is (2.0, 3.0, 4.0). But with dedicated operators or functions,
    it's a bit sweeter:

    fun cuberoot(x) = root(3,x)

    println mapv(sqrt, (10,20,30)) # sqrt is an operator
    println mapv(cuberoot, (10,20,30))

    To get rid of that intrusive `map`, there are other ways, easier with dynamic code:

    func nthroot(n, x) =
        if x.islist then
            mapsv(root, n, x)
        else
            root(n, x)
        fi
    end

    println nthroot(3, 10)
    println nthroot(3, (10,20,30))

    Further removing the parentheses around (10,20,30) requires a form of variadic parameters, something I'm not keen on.

    But there is a cruder method (requiring a tweak in the language) which
    is to allow some functions not to need parentheses around the one
    parameter. Then the call would be:

    nthroot (3, 10, 20, 30)

    The single argument is the list (3,10,20,30), and it can take it from
    there (eg. it can return mapsv(root, head(x), tail(x))).

    There are other ways too, eg. using a piping operator:

    print (10,20,30) -> nthroot(3)

    Very many possibilities without going into a full-blown functional
    language; however, my examples do make use of dynamic types.

    I think James' language is not that high level so functional would not
    be a good fit, but he says this is making use of 'tuples'.

    I've never really got tuples (so many languages use them in place of
    proper record types), but if that is generally available in the language
    for function parameters, and to provide multiple return values, then it
    can be available for 'root' too.




    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 16:09:27 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-13 15:06, David Brown wrote:

    Now allow the syntax "foo [a, b, c]" to mean "map foo [a, b, c]" and
    thus "[foo(a), foo(b), foo(c)]" - i.e., application of a single-input function to a list/tuple should return a list/tuple of that function
    applied to each element.

    There is nothing "functional" (no pun intended) in that. In mathematical notation it is normal to do that for vectors and matrices.

    However, be warned, exp (A) when A is a square matrix is not exp of its elements (unless diagonal).

    There are lots of ways to compose functions with series and sequences.
    Whether they should have special syntax is another question. It depends
    on how frequently such stuff is used. Not much, because there are
    pitfalls when it comes to real-life programs. Real-life programs are
    more about things looking correct on paper but not working as expected.
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 16:13:00 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-13 15:10, David Brown wrote:
    On 13/12/2022 01:27, Bart wrote:
    On 12/12/2022 22:41, James Harris wrote:
    On 12/12/2022 10:26, Bart wrote:

    What do you make of David's suggestion to have a "root" function
    which I would probably have as

       root(2, x) -> square root of x
       root(3, x) -> cube root of x

    It's OK. It solves the problem of special-casing cube-roots, and
    others of interest.

    Probably I wouldn't have it as built-in, as I have no great interest
    in cube roots (other than allowing ∛(x) would be cool). Maybe as an
    ordinary macro or function that implements it on top of **, but is a
    clearer alternative. nthroot() is a another possibility (as is an
    infix version `3 root x`, with precedence the same as **).

    But I would continue to use sqrt since I first saw that in a language
    in '75, and I see no reason to drop it.

    Square roots are very common, so it makes sense to have an individual function for them.  At the very least, the implementation of "root"
    should have a special case for handling roots 0, 1 and 2.

    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library.  And if you can't make library functions as efficient as builtins, find a better way to
    handle your standard library - standard libraries don't need to follow platform ABI's.)

    If x**y denotes power, then x//y should do the opposite. So x//2 is
    square root. I cannot help the proponents of x^y notation... (:-))
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Bart@bc@freeuk.com to comp.lang.misc on Tue Dec 13 15:22:38 2022
    From Newsgroup: comp.lang.misc

    On 13/12/2022 15:13, Dmitry A. Kazakov wrote:
    On 2022-12-13 15:10, David Brown wrote:

    Square roots are very common, so it makes sense to have an individual
    function for them.  At the very least, the implementation of "root"
    should have a special case for handling roots 0, 1 and 2.

    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library.  And if you
    can't make library functions as efficient as builtins, find a better
    way to handle your standard library - standard libraries don't need to
    follow platform ABI's.)

    If x**y denotes power, then x//y should do the opposite. So x//2 is
    square root. I cannot help the proponents of x^y notation... (:-))

    I had thought the same thing. But "//" is more useful for other things
    (aside from comments), eg Python uses it for integer divide; I'd
    reserved it to construct rational numbers.

    But then thinking about it some more:

    x**3 means x*x*x
    x//3 doesn't mean x/x/x


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Tue Dec 13 16:32:18 2022
    From Newsgroup: comp.lang.misc

    On 13/12/2022 16:13, Dmitry A. Kazakov wrote:
    On 2022-12-13 15:10, David Brown wrote:
    On 13/12/2022 01:27, Bart wrote:
    On 12/12/2022 22:41, James Harris wrote:
    On 12/12/2022 10:26, Bart wrote:

    What do you make of David's suggestion to have a "root" function
    which I would probably have as

       root(2, x) -> square root of x
       root(3, x) -> cube root of x

    It's OK. It solves the problem of special-casing cube-roots, and
    others of interest.

    Probably I wouldn't have it as built-in, as I have no great interest
    in cube roots (other than allowing ∛(x) would be cool). Maybe as an
    ordinary macro or function that implements it on top of **, but is a
    clearer alternative. nthroot() is a another possibility (as is an
    infix version `3 root x`, with precedence the same as **).

    But I would continue to use sqrt since I first saw that in a language
    in '75, and I see no reason to drop it.

    Square roots are very common, so it makes sense to have an individual
    function for them.  At the very least, the implementation of "root"
    should have a special case for handling roots 0, 1 and 2.

    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library.  And if you
    can't make library functions as efficient as builtins, find a better
    way to handle your standard library - standard libraries don't need to
    follow platform ABI's.)

    If x**y denotes power, then x//y should do the opposite. So x//2 is
    square root. I cannot help the proponents of x^y notation... (:-))


    x ⌄ y, perhaps? :-)

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 16:37:00 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-13 16:22, Bart wrote:
    On 13/12/2022 15:13, Dmitry A. Kazakov wrote:
    On 2022-12-13 15:10, David Brown wrote:

    Square roots are very common, so it makes sense to have an individual
    function for them.  At the very least, the implementation of "root"
    should have a special case for handling roots 0, 1 and 2.

    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library.  And if you
    can't make library functions as efficient as builtins, find a better
    way to handle your standard library - standard libraries don't need
    to follow platform ABI's.)

    If x**y denotes power, then x//y should do the opposite. So x//2 is
    square root. I cannot help the proponents of x^y notation... (:-))

    I had thought the same thing. But "//" is more useful for other things (aside from comments), eg Python uses it for integer divide; I'd
    reserved it to construct rational numbers.

    But then thinking about it some more:

       x**3 means x*x*x
       x//3 doesn't mean x/x/x

    x*3 means x+x+x
    x/3 does not mean x-x-x

    The logic here is different:

    exp(log(x**y)) = exp(log(x)*y) --> **
    exp(log(x**(1/y))) = exp(log(x)/y) --> //

    Another logic could be this. Let us denote exp(log(x)... as x*... then exp(log(x)/y) could be x*/y (:-))
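
    The identity behind it is easy to sanity-check numerically, e.g. in
    Python:

        import math

        x, y = 2.0, 10.0
        # each pair prints essentially the same value:
        print(x ** y, math.exp(math.log(x) * y))          # ~1024.0
        print(x ** (1.0 / y), math.exp(math.log(x) / y))  # ~1.0717735
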
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 16:38:36 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-13 16:32, David Brown wrote:
    On 13/12/2022 16:13, Dmitry A. Kazakov wrote:
    On 2022-12-13 15:10, David Brown wrote:
    On 13/12/2022 01:27, Bart wrote:
    On 12/12/2022 22:41, James Harris wrote:
    On 12/12/2022 10:26, Bart wrote:

    What do you make of David's suggestion to have a "root" function
    which I would probably have as

       root(2, x) -> square root of x
       root(3, x) -> cube root of x

    It's OK. It solves the problem of special-casing cube-roots, and
    others of interest.

    Probably I wouldn't have it as built-in, as I have no great interest
    in cube roots (other than allowing ∛(x) would be cool). Maybe as an
    ordinary macro or function that implements it on top of **, but is a
    clearer alternative. nthroot() is another possibility (as is an
    infix version `3 root x`, with precedence the same as **).

    But I would continue to use sqrt since I first saw that in a
    language in '75, and I see no reason to drop it.

    Square roots are very common, so it makes sense to have an individual
    function for them.  At the very least, the implementation of "root"
    should have a special case for handling roots 0, 1 and 2.

    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library.  And if you
    can't make library functions as efficient as builtins, find a better
    way to handle your standard library - standard libraries don't need
    to follow platform ABI's.)

    If x**y denotes power, then x//y should do the opposite. So x//2 is
    square root. I cannot help the proponents of x^y notation... (:-))


    x ⌄ y, perhaps?  :-)

    I dismissed it because it looks like x or/union y... (:-))
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Tue Dec 13 18:07:13 2022
    From Newsgroup: comp.lang.misc

    On 13/12/2022 16:38, Dmitry A. Kazakov wrote:
    On 2022-12-13 16:32, David Brown wrote:
    On 13/12/2022 16:13, Dmitry A. Kazakov wrote:
    On 2022-12-13 15:10, David Brown wrote:
    On 13/12/2022 01:27, Bart wrote:
    On 12/12/2022 22:41, James Harris wrote:
    On 12/12/2022 10:26, Bart wrote:

    What do you make of David's suggestion to have a "root" function
    which I would probably have as

       root(2, x) -> square root of x
       root(3, x) -> cube root of x

    It's OK. It solves the problem of special-casing cube-roots, and
    others of interest.

    Probably I wouldn't have it as built-in, as I have no great
    interest in cube roots (other than allowing ∛(x) would be cool).
    Maybe as an ordinary macro or function that implements it on top of
    **, but is a clearer alternative. nthroot() is another
    possibility (as is an infix version `3 root x`, with precedence the
    same as **).

    But I would continue to use sqrt since I first saw that in a
    language in '75, and I see no reason to drop it.

    Square roots are very common, so it makes sense to have an
    individual function for them.  At the very least, the implementation
    of "root" should have a special case for handling roots 0, 1 and 2.

    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library.  And if you
    can't make library functions as efficient as builtins, find a better
    way to handle your standard library - standard libraries don't need
    to follow platform ABI's.)

    If x**y denotes power, then x//y should do the opposite. So x//2 is
    square root. I cannot help the proponents of x^y notation... (:-))


    x ⌄ y, perhaps?  :-)

    I dismissed it because it looks as x or/union y... (:-))


    To some (C programmers), "x ^ y" looks like "x xor y", while to others (logicians) it looks like "x and y". We have too few symbols,
    especially if we restrict ourselves to ones easily typed on many keyboards!

    And what operator shall we use for tetration? "x ^^ y" ? "x *** y" ?



    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Dmitry A. Kazakov@mailbox@dmitry-kazakov.de to comp.lang.misc on Tue Dec 13 18:35:42 2022
    From Newsgroup: comp.lang.misc

    On 2022-12-13 18:07, David Brown wrote:
    On 13/12/2022 16:38, Dmitry A. Kazakov wrote:
    On 2022-12-13 16:32, David Brown wrote:
    On 13/12/2022 16:13, Dmitry A. Kazakov wrote:
    On 2022-12-13 15:10, David Brown wrote:
    On 13/12/2022 01:27, Bart wrote:
    On 12/12/2022 22:41, James Harris wrote:
    On 12/12/2022 10:26, Bart wrote:

    What do you make of David's suggestion to have a "root" function
    which I would probably have as

       root(2, x) -> square root of x
       root(3, x) -> cube root of x

    It's OK. It solves the problem of special-casing cube-roots, and
    others of interest.

    Probably I wouldn't have it as built-in, as I have no great
    interest in cube roots (other than allowing ∛(x) would be cool).
    Maybe as an ordinary macro or function that implements it on top
    of **, but is a clearer alternative. nthroot() is another
    possibility (as is an infix version `3 root x`, with precedence
    the same as **).

    But I would continue to use sqrt since I first saw that in a
    language in '75, and I see no reason to drop it.

    Square roots are very common, so it makes sense to have an
    individual function for them.  At the very least, the
    implementation of "root" should have a special case for handling
    roots 0, 1 and 2.

    (As always, I would never have something as a built-in function or
    keyword if it could equally well be made in a library.  And if you
    can't make library functions as efficient as builtins, find a
    better way to handle your standard library - standard libraries
    don't need to follow platform ABI's.)

    If x**y denotes power, then x//y should do the opposite. So x//2 is
    square root. I cannot help the proponents of x^y notation... (:-))

    x ⌄ y, perhaps?  :-)

    I dismissed it because it looks as x or/union y... (:-))

    To some (C programmers), "x ^ y" looks like "x xor y", while to others (logicians) it looks like "x and y".  We have too few symbols,
    especially if we restrict ourselves to ones easily typed on many keyboards!

    And what operator shall we use for tetration?  "x ^^ y" ?  "x *** y" ?

    ^^ or ****, if the rule is to double the previous operation notation.
    What about infinite exponentiation? E.g. a function defined by the equation

    f(x) = f(x)**x

    I vote for an infinite sequence of * as the most annoying ... (:-))
    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From Andy Walker@anw@cuboid.co.uk to comp.lang.misc on Wed Dec 14 00:42:43 2022
    From Newsgroup: comp.lang.misc

    On 12/12/2022 09:11, David Brown wrote:
    [I wrote:]
         Cube roots are arguably used often enough to be worth implementing
    separately from "** (1/3)" or near equivalent.
    Really? In what context? Square roots occur all over the place, but
    I can't think of a single realistic use of cube roots that are not
    general nth roots (such as calculating geometric means). [...]

    Well, the most obvious one is finding the edge of a cube of given volume! Yes, I've done that occasionally. I found a couple of occurrences
    of "cbrt" in my solutions to some [~200] of the Euler problems [I tackle
    only ones that seem interesting], in both cases to find upper bounds on how
    far a calculation of n^3 needs to go for some condition to hold. Another
    use was when our prof of number theory came to me with some twisted cubics
    [qv] and asked me to find some solutions to associated cubic equations
    [sadly, it was ~15 years ago and I've forgotten the details]. My most
    recent personal use seems to have been in constructing a colour map for a Mandelbrot program, so aesthetic rather than important. IOW, I wouldn't
    claim major usage, but not negligible either.

    Then the error from "1/3"
    is irrelevant.  But if this matters to you, you would need to use serious
    numerical analysis and computation beyond the normal scope of this group,
    such as interval arithmetic and Chebychev polynomials.
    You would almost certainly not use Chebychev's or other polynomials
    for calculating cube roots, unless you were making something like a specialised pipelined implementation in an ASIC or FPGA.

    Chebychev polynomials have the advantage of minimising the error
    over a range, so typically converge significantly faster than other ways
    of calculating library functions /as long as/ you can pre-compute the
    number of terms needed and thus the actual conversion back into a normal polynomial. This typically saves an iteration or two /and/ having a
    loop [as you can unroll it] /and/ testing whether to go round again,
    compared with an iterative method. I had a colleague who was very keen
    on Pade approximations, but I didn't find them any faster; admittedly,
    that was in the '60s and '70s when I was heavily involved in numerical
    work for astrophysics [and hitting limits on f-p numbers, store sizes,
    time allocations, ...]; I don't know what the current state of the art
    is on modern hardware.

    Newton-Raphson iteration, or related algorithms, are simpler and
    converge quickly without all the messiness of ranges.

    Simpler for casual use, certainly, esp if you have no inside
    knowledge of the hardware. But you still need to do range reduction,
    as N-R is only linear if you start a long way from the root. [Atlas
    had a nifty facility whereby the f-p exponent was a register, so you
    could get reasonably close to an n-th root simply by dividing that
    register by n.]
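
    The Atlas trick sketches easily enough in a few lines of Python (a toy,
    not serious NA): divide the exponent by three to get within a factor of
    two of the root, then let Newton-Raphson finish.

        import math

        def cbrt_nr(x):
            if x == 0.0:
                return 0.0
            sign = -1.0 if x < 0.0 else 1.0
            x = abs(x)
            e = math.frexp(x)[1]           # x = m * 2**e, 0.5 <= m < 1
            y = math.ldexp(1.0, e // 3)    # crude range reduction
            for _ in range(40):            # Newton-Raphson for y**3 = x
                y_next = (2.0 * y + x / (y * y)) / 3.0
                if y_next == y:
                    break
                y = y_next
            return sign * y

        print(cbrt_nr(1000.0))             # ~10.0
        print(cbrt_nr(8.0))                # ~2.0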

    [But my real point here, as in my previous article, is not to
    promote any particular numerical technique, but to point out that the calculation of library functions in the Real World is not an amateur
    activity, and needs serious NA, well beyond what most undergraduates
    encounter, and well beyond the normal scope of this group.]
    --
    Andy Walker, Nottingham.
    Andy's music pages: www.cuboid.me.uk/andy/Music
    Composer of the day: www.cuboid.me.uk/andy/Music/Composers/Fisher
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Wed Dec 14 15:12:50 2022
    From Newsgroup: comp.lang.misc

    On 14/12/2022 01:42, Andy Walker wrote:
    On 12/12/2022 09:11, David Brown wrote:
    [I wrote:]
         Cube roots are arguably used often enough to be worth implementing
    separately from "** (1/3)" or near equivalent.
    Really?  In what context?  Square roots occur all over the place, but
    I can't think of a single realistic use of cube roots that are not
    general nth roots (such as calculating geometric means). [...]

        Well, the most obvious one is finding the edge of a cube of given volume!  Yes, I've done that occasionally.

    That's got to count as a pretty obscure need - not something worth
    having a standard library function to handle.

      I found a couple of occurrences
    of "cbrt" in my solutions to some [~200] of the Euler problems [I tackle
    only ones that seem interesting], in both cases to find upper bounds on how far a calculation of n^3 needs to go for some condition to hold.

    Again, pretty unusual. And if you are fond of doing maths problems or calculations (I am), then I still don't see any reason to consider cube
    root worthy of a function by itself. I am as likely to use other roots
    as I am to use cube roots. (Square roots are, of course, overwhelmingly
    more useful.)

    Another
    use was when our prof of number theory came to me with some twisted cubics [qv] and asked me to find some solutions to associated cubic equations [sadly, it was ~15 years ago and I've forgotten the details].  My most recent personal use seems to have been in constructing a colour map for a Mandelbrot program, so aesthetic rather than important.  IOW, I wouldn't claim major usage, but not negligible either.


    My use of cubics usually goes the other way - cubic splines for
    approximating functions, rather than having to solve the cubics.

    Then the error from "1/3"
    is irrelevant.  But if this matters to you, you would need to use
    serious
    numerical analysis and computation beyond the normal scope of this
    group,
    such as interval arithmetic and Chebychev polynomials.
    You would almost certainly not use Chebychev's or other polynomials
    for calculating cube roots, unless you were making something like a
    specialised pipelined implementation in an ASIC or FPGA.

        Chebychev polynomials have the advantage of minimising the error over a range, so typically converge significantly faster than other ways
    of calculating library functions /as long as/ you can pre-compute the
    number of terms needed and thus the actual conversion back into a normal polynomial.

    Having the minimum RMS error over a range, for a given polynomial
    degree, does not necessarily translate to fastest calculation.
    Certainly Chebychev's are used for many kinds of function approximation,
    but iterative processes like Newton-Raphson or related algorithms can
    often converge much faster. Real-life performance can, however, depend
    on division speeds which are often much slower than multiplications, and
    can depend on pipelining and OOO execution. An ideal implementation
    might use a short polynomial approximation to get a good starting point,
    then a couple of rounds of NR to fill out the accuracy.

    This typically saves an iteration or two /and/ having a
    loop [as you can unroll it] /and/ testing whether to go round again,
    compared with an iterative method.  I had a colleague who was very keen
    on Pade approximations, but I didn't find them any faster;  admittedly,
    that was in the '60s and '70s when I was heavily involved in numerical
    work for astrophysics [and hitting limits on f-p numbers, store sizes,
    time allocations, ...];  I don't know what the current state of the art
    is on modern hardware.


    The challenge here is that the answer is usually "it's complicated". If
    you can push the calculations into SIMD, you can do lots in parallel -
    but the latency for a single calculation is going to be much higher.
    (Mind you, if you are just doing one cube root, who cares about its speed?)

    Newton-Raphson iteration, or related algorithms, are simpler and
    converge quickly without all the messiness of ranges.

        Simpler for casual use, certainly, esp if you have no inside knowledge of the hardware.  But you still need to do range reduction,
    as N-R is only linear if you start a long way from the root.  [Atlas
    had a nifty facility whereby the f-p exponent was a register, so you
    could get reasonably close to an n-th root simply by dividing that
    register by n.]

        [But my real point here, as in my previous article, is not to promote any particular numerical technique, but to point out that the calculation of library functions in the Real World is not an amateur activity, and needs serious NA, well beyond what most undergraduates encounter, and well beyond the normal scope of this group.]


    Fair enough.

    Amateurs can have fun with it, however!
    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Tue Dec 12 17:07:10 2023
    From Newsgroup: comp.lang.misc

    On 13/12/2022 14:06, David Brown wrote:
    On 12/12/2022 23:30, James Harris wrote:
    On 12/12/2022 15:18, David Brown wrote:
    On 12/12/2022 11:26, Bart wrote:

    And as David said, cube roots aren't that common. It might have a
    button for it on my Casio, but that might be because it's oriented
    towards solving school maths problems.


    I think rather than having a cube root function, perhaps an nth root
    function would make some sense.  Then you could write "root(x, 3)" -
    that's about as neat and potentially as accurate as any other solution.

    I like that! I'd have to work through issues including integers vs
    floats but it would appear to be simple, clear, and general. I may do
    something similar, perhaps with the root first to permit results to be
    tuples.

       root(3, ....)  ;cube root

    so that

       root(3, 8) -> 2
       root(3, 8, 1000) -> (2, 10)


    This last one is :

        root(n, a, b) -> (a ^ 1/n, b ^ 1/n)

    ?

    Yes, although not restricted to one or two values.


    Is that a general feature you have - allowing functions to take extra arguments and returning tuples?  If so, what is the point?  (I don't
    mean I think it is pointless, I mean I'd like to know what you think is
    the use-case!)

    No, it was not meant to be any kind of proposal at this stage (or of a functional approach such as you mentioned in text now snipped), only a principle of including fixed parameters before any that may potentially
    be repeated.
    --
    James Harris


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Tue Dec 12 17:10:50 2023
    From Newsgroup: comp.lang.misc

    On 13/12/2022 15:06, Bart wrote:

    ...


    I think James' language is not that high level so functional would not
    be a good fit, but he says this is making use of 'tuples'.

    You are right. I wouldn't gratuitously add high-level features to a
    low-level language.

    And I went off tuples after hearing too many people call them tupples.

    :-)
    --
    James Harris


    --- Synchronet 3.20a-Linux NewsLink 1.114