On 2022-11-18 13:36, David Brown wrote:
If you want to know what a function does before running it, look at
its definition - along with the definition of any other functions it
calls, what data it has, what constants it uses, and so on. That
applies to all languages.
That is exactly the problem. In mathematics and in declarative
approaches that follow it closely, the definition is the function itself. E.g.
all sorts of recursive definitions. Analysis of function behavior is at
the core of mathematics. Basically it presents something, you do not
know what, though its definition is before your eyes! Your objective is
to figure it out, to study it. This is because mathematical functions
exist on their own.
This is not how engineering, and programming as an engineering activity,
work. There you create something in order to achieve something.
Pragmatically, separation of specification and implementation is
difficult when functions become first class objects.
Another issue is treatment of types when each function is an operation
on some types and nothing else.
On 18/11/2022 14:14, Dmitry A. Kazakov wrote:
On 2022-11-18 13:36, David Brown wrote:
If you want to know what a function does before running it, look at
its definition - along with the definition of any other functions it
calls, what data it has, what constants it uses, and so on. That
applies to all languages.
That is exactly the problem. In mathematics and in declarative
approaches that follow it closely, the definition is the function itself.
E.g. all sorts of recursive definitions. Analysis of function behavior
is at the core of mathematics. Basically it presents something, you do
not know what, though its definition is before your eyes! Your
objective is to figure it out, to study it. This is because
mathematical functions exist on their own.
"Functions" in functional programming and in mathematics are not the
same thing.
In mathematics, the functions "sin(x)" and "cos(x - π/2)" are exactly
the same thing. They are indistinguishable. Any definition that gives the same mapping from the source domain to the destination domain is the same function.
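A quick numeric sanity check of this kind of identity (using the phase-shifted form sin(x) = cos(x − π/2)), sketched in Python:

```python
import math

# Two different definitions of one and the same mathematical function:
f = lambda x: math.sin(x)
g = lambda x: math.cos(x - math.pi / 2)   # phase-shifted cosine

# The mappings agree everywhere (up to floating-point rounding),
# which is exactly the mathematical notion of function equality.
for x in [0.0, 0.5, 1.0, 2.5, -3.0]:
    assert math.isclose(f(x), g(x), abs_tol=1e-12)
```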
In functional programming languages - real, practical functional
programming languages, that is not the case. You might argue that in a hypothetical sense two functional programming language functions that
give the same results are the same function - but you can argue exactly
the same about any kind of programming language. In reality, functional programming language functions are much like functions in imperative languages - compilers can manipulate them to make variants that give the same results more efficiently (this is "optimisation"), but otherwise
they do what the function definition says.
A difference, perhaps, is that imperative functions go from the bottom
up (or start to finish) while functional programming language functions
are often more top-down (describe the end result that you want, and then
the partial results to get there).
This is not how engineering, and programming as an engineering activity,
work. There you create something in order to achieve something.
Sorry, but you are completely wrong in your distinction.
Programming with FP languages is as much "engineering" as programming
with imperative languages or any other paradigm. When telephone
switching systems are programmed in the functional programming language Erlang (which was developed with that application in mind), do you think
it is not "achieving something"? When people make programmable logic designs using FP languages such as Lava (built on Haskell), Confluence
(from OCaml), or SpinalHDL (from Scala), it is as clear and solid engineering as you can get. And of course, "normal" functional
programming is programming just like any other programming.
Pragmatically, separation of specification and implementation is
difficult when functions become first class objects.
Again, that is an imaginary distinction.
Whether the language supports first-class functions or not makes no difference as to how well specified functions are, or whether the implementation follows that specification or not.
Another issue is treatment of types when each function is an operation
on some types and nothing else.
I can't understand what you mean. Functional programming language functions are not operations on types.
Conversely, all functions in all
languages operate on some types and nothing else.
On 2022-11-21 16:23, David Brown wrote:
On 18/11/2022 14:14, Dmitry A. Kazakov wrote:
On 2022-11-18 13:36, David Brown wrote:
If you want to know what a function does before running it, look at
its definition - along with the definition of any other functions it
calls, what data it has, what constants it uses, and so on. That
applies to all languages.
That is exactly the problem. In mathematics and in declarative
approaches that follow it closely, the definition is the function itself.
E.g. all sorts of recursive definitions. Analysis of function
behavior is at the core of mathematics. Basically it presents
something, you do not know what, though its definition is before your
eyes! Your objective is to figure it out, to study it. This is
because mathematical functions exist on their own.
"Functions" in functional programming and in mathematics are not the
same thing.
In mathematics, the functions "sin(x)" and "cos(x - π/2)" are exactly
the same thing. They are indistinguishable. Any definition that
gives the same mapping from the source domain to the destination
domain is the same function.
In functional programming languages - real, practical functional
programming languages, that is not the case. You might argue that in
a hypothetical sense two functional programming language functions
that give the same results are the same function - but you can argue
exactly the same about any kind of programming language. In reality,
functional programming language functions are much like functions in
imperative languages - compilers can manipulate them to make variants
that give the same results more efficiently (this is "optimisation"),
but otherwise they do what the function definition says.
Yes. In short: the functional paradigm does not live up to its promises.
That is an open secret... (:-))
A difference, perhaps, is that imperative functions go from the bottom
up (or start to finish) while functional programming language
functions are often more top-down (describe the end result that you
want, and then the partial results to get there).
Well, not really. Whether the ultimate program is
do_it;
or
let_it_be_done;
makes little difference. Levels of indirection do not necessarily translate
into higher abstraction, or conversely. The imperative approach has one
level less.
This is not how engineering, and programming as an engineering activity,
work. There you create something in order to achieve something.
Sorry, but you are completely wrong in your distinction.
Programming with FP languages is as much "engineering" as programming
with imperative languages or any other paradigm. When telephone
switching systems are programmed in the functional programming
language Erlang (which was developed with that application in mind),
do you think it is not "achieving something"? When people make
programmable logic designs using FP languages such as Lava (built on
Haskell), Confluence (from OCaml), or SpinalHDL (from Scala), it is as
clear and solid engineering as you can get. And of course, "normal"
functional programming is programming just like any other programming.
You confuse application with intent. Yes, you can achieve something by programming in awful languages in an awful manner. I do not mean FPLs here specifically; just for the sake of argument. You would be surprised to learn
what software has been written in Visual Basic...
Pragmatically, separation of specification and implementation is
difficult when functions become first class objects.
Again, that is an imaginary distinction.
It is not imaginary. If you cannot use functions as a vehicle to define interfaces, you need to come up with something else or drop the idea altogether.
Whether the language supports first-class functions or not makes no
difference as to how well specified functions are, or whether the
implementation follows that specification or not.
Specification is a declarative layer on top of the object language. In imperative procedural languages that layer is the declaration of
subprograms.
In OOPL it is types (classes) defined in terms of methods
(members are built-in getter/setter methods). In FPL, typically, there
is none, unless you introduce some meta functions, whatever. E.g. generics/templates have been around forever and still have no reasonable specifications, and thus are fundamentally non-testable. I do not care
even a little bit about FP, but my guess is that it must have similar issues.
Another issue is treatment of types when each function is an
operation on some types and nothing else.
I can't understand what you mean. Functional programming language
functions are not operations on types.
Yep.
Conversely, all functions in all languages operate on some types and
nothing else.
No, in OOPL a method operates on the class. A "free function" takes some arguments of unrelated types.
On 21/11/2022 17:08, Dmitry A. Kazakov wrote:
I think perhaps, like many programmers, you have worked all your life
with imperative programming and can't imagine or understand other paradigms. You consider imperative as "the best" or "the most natural", simply because it is the most familiar to you.
Oh, so you think functional programming languages are designed and
intended to be useless, and it is only by accident or stubbornness that anyone can actually make use of them? That is an "interesting" argument.
Pragmatically, separation of specification and implementation is
difficult when functions become first class objects.
Again, that is an imaginary distinction.
It is not imaginary. If you cannot use functions as a vehicle to
define interfaces, you need to come up with something else or drop the
idea altogether.
Again, I can't figure out what you are talking about.
Whether the language supports first-class functions or not makes no
difference as to how well specified functions are, or whether the
implementation follows that specification or not.
Specification is a declarative layer on top of the object language. In
imperative procedural languages that layer is declaration of subprograms.
No, that is not what "specification" means. A declaration is just the name, parameters, types, etc., of a function. A specification says what the function /does/.
In most programming, specifications are not
written in the language itself though some allow a limited form of
formal specification (such as pre-conditions and post-conditions).
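As a sketch of what such pre- and post-conditions look like, here is a hypothetical `contract` decorator in Python (the decorator and all names are assumptions for illustration, not a standard library facility):

```python
import functools

def contract(pre, post):
    """Attach a machine-checked pre-condition and post-condition
    to a function. Hypothetical sketch, not a standard library."""
    def wrap(f):
        @functools.wraps(f)
        def checked(*args):
            assert pre(*args), "pre-condition violated"
            result = f(*args)
            assert post(result, *args), "post-condition violated"
            return result
        return checked
    return wrap

# The specification (what the function does) is now checked at run time,
# separately from the declaration (its name and parameter types).
@contract(pre=lambda n: n >= 0,
          post=lambda r, n: r * r <= n < (r + 1) ** 2)
def isqrt(n: int) -> int:
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r
```

With this, `isqrt(10)` returns 3, and `isqrt(-1)` fails its pre-condition before the body runs.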
In OOPL it is types (classes) defined in terms of methods (members are
built-in getter/setter methods). In FPL, typically, there is none,
unless you introduce some meta functions, whatever. E.g.
generics/templates exist for infinity and still have no reasonable
specifications, and thus, are fundamentally non-testable. I do not
care even a little bit about FP, but, my guess is that it must have
similar issues.
So you think that in C, this is a "specification" :
int times_two(int);
while the Haskell equivalent :
times :: Int -> Int
is somehow completely different?
Another issue is treatment of types when each function is an
operation on some types and nothing else.
I can't understand what you mean. Functional programming language
functions are not operations on types.
Yep.
Oh, so you think functions in functional programming languages don't
have types or act on types?
Functional programming languages have supported generic programming and
type inference for a lot longer than most imperative languages, but
those are both standard for any serious modern imperative language (in
C++ you have templates and "auto", in other languages you have similar features).
Conversely, all functions in all languages operate on some types and
nothing else.
No, in OOPL a method operates on the class. A "free function" takes
some arguments in unrelated types.
Methods in OOPL (or what most people think of as Object Oriented programming, such as C++, Java, Python, etc., rather than the original intention of
OOP, which is now commonly called the "actors" paradigm) are syntactic sugar
for a function with the class instance as the first parameter.
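This equivalence is directly visible in Python, where a bound-method call and an explicit function call on the class are interchangeable:

```python
class Counter:
    def __init__(self):
        self.n = 0

    def bump(self, by):
        # 'self' is just the first parameter of an ordinary function.
        self.n += by
        return self.n

c = Counter()
# Method-call syntax and explicit function-call syntax do the same thing:
assert c.bump(2) == 2
assert Counter.bump(c, 3) == 5
```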
On 2022-11-21 20:56, David Brown wrote:
On 21/11/2022 17:08, Dmitry A. Kazakov wrote:
I think perhaps, like many programmers, you have worked all your life
with imperative programming and can't imagine or understand other
paradigms. You consider imperative as "the best" or "the most
natural", simply because it is the most familiar to you.
Actually I worked a lot with declarative languages in AI (Prolog,
expert systems, etc.) and pattern matching (e.g. SNOBOL). My deep distrust
of the declarative approach comes from that time, greatly reinforced by the
relational paradigm.
Oh, so you think functional programming languages are designed and
intended to be useless, and it is only by accident or stubbornness
that anyone can actually make use of them? That is an "interesting"
argument.
Absolutely. Most languages fall into this category. It is not a unique feature of FPLs...
Pragmatically, separation of specification and implementation is
difficult when functions become first class objects.
Again, that is an imaginary distinction.
It is not imaginary. If you cannot use functions as a vehicle to
define interfaces, you need to come up with something else or drop the
idea altogether.
Again, I can't figure out what you are talking about.
About specifications stating some part of behavior, but not implementing
the behavior.
Whether the language supports first-class functions or not makes no
difference as to how well specified functions are, or whether the
implementation follows that specification or not.
Specification is a declarative layer on top of the object language.
In imperative procedural languages that layer is declaration of
subprograms.
No, that is not what "specification" means. A declaration is just the
name, parameters, types, etc., of a function. A specification says
what the function /does/.
That is the same thing. When you specify a type, you say what the object "does" by
being of that type.
In most programming, specifications are not written in the language
itself though some allow a limited form of formal specification (such
as pre-conditions and post-conditions).
Either you have them in the language or you do not. Clearly, specifications
form a meta language on top of the core object language.
In OOPL it is types (classes) defined in terms of methods (members
are built-in getter/setter methods). In FPL, typically, there is
none, unless you introduce some meta functions, whatever. E.g.
generics/templates exist for infinity and still have no reasonable
specifications, and thus, are fundamentally non-testable. I do not
care even a little bit about FP, but, my guess is that it must have
similar issues.
So you think that in C, this is a "specification" :
int times_two(int);
while the Haskell equivalent :
times :: Int -> Int
is somehow completely different?
No. But also there is nothing specifically "functional" in these
primitive specifications.
They are not even first class.
You didn't write:
int (int) times_two; // Some functional C (no pun intended (:-))
The point is that if you take some really fancy functional stuff, it
would be difficult, or maybe useless, to describe it formally in some meta language of specifications.
Another issue is treatment of types when each function is an
operation on some types and nothing else.
I can't understand what you mean. Functional programming language
functions are not operations on types.
Yep.
Oh, so you think functions in functional programming languages don't
have types or act on types?
It was you who said "Functional programming language functions are not operations on types." I only agreed with you.
Again, it might be
possible to build a complete type algebra on top of "functional" quirks.
But there are enough unresolved problems already without bringing first-class functions in. So, why bother? Passing a subprogram as a
parameter (a downward closure) covers all my needs. Objects parametrized
by functions? That looks like too much. Yes, I hate templates and generics, before you ask... (:-))
Functional programming languages have supported generic programming
and type inference for a lot longer than most imperative languages,
but those are both standard for any serious modern imperative language
(in C++ you have templates and "auto", in other languages you have
similar features).
Type inference is a separate, and very controversial issue.
Conversely, all functions in all languages operate on some types and
nothing else.
No, in OOPL a method operates on the class. A "free function" takes
some arguments in unrelated types.
Methods in OOPL (or what most people think of as Object Oriented
programming, such as C++, Java, Python, etc., rather than the original
intention of OOP, which is now commonly called the "actors" paradigm) are
syntactic sugar for a function with the class instance as the first
parameter.
Not really, because methods dispatch. A method acts on the class and its implementation consists of separate bodies, which is the core of OO decomposition as opposed to other paradigms.
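The dispatch point can be sketched in Python: one operation name on the class, with separate bodies selected by the instance's class at run time:

```python
class Shape:
    def area(self):                   # one operation on the class...
        raise NotImplementedError

class Square(Shape):
    def __init__(self, s):
        self.s = s
    def area(self):                   # ...one separate body
        return self.s * self.s

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):                   # ...another separate body
        return 3.14159 * self.r ** 2

# The call site names one operation; dispatch picks the body at run time.
shapes = [Square(2), Circle(1)]
assert [round(s.area(), 2) for s in shapes] == [4, 3.14]
```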
Familiarity with these gives you a broader background than many
programmers, but no insight or experience with functional programming.
Oh, so you think functional programming languages are designed and
intended to be useless, and it is only by accident or stubbornness
that anyone can actually make use of them? That is an "interesting"
argument.
Absolutely. Most languages fall into this category. It is not a unique
feature of FPLs...
That is quite a cynical viewpoint!
Whether the language supports first-class functions or not makes no difference as to how well specified functions are, or whether the
implementation follows that specification or not.
Specification is a declarative layer on top of the object language.
In imperative procedural languages that layer is declaration of
subprograms.
No, that is not what "specification" means. A declaration is just
the name, parameters, types, etc., of a function. A specification
says what the function /does/.
That is the same thing. When you specify a type, you say what the object "does" by
being of that type.
No, they are not the same at all. A declaration is needed to use the function (or other object) in the language.
In most programming, specifications are not written in the language
itself though some allow a limited form of formal specification (such
as pre-conditions and post-conditions).
Either you have them in the language or you do not. Clearly, specifications
form a meta language on top of the core object language.
No, it is not "clearly" at all. If a programming language allows specifications of some sort as part of the language, then it is part of
the language. If it does not, then it is not part of the language - specifications then have to be written independently (such as in a
separate human-language document) and are not in a "meta language".
In OOPL it is types (classes) defined in terms of methods (members
are built-in getter/setter methods). In FPL, typically, there is
none, unless you introduce some meta functions, whatever. E.g.
generics/templates exist for infinity and still have no reasonable
specifications, and thus, are fundamentally non-testable. I do not
care even a little bit about FP, but, my guess is that it must have
similar issues.
So you think that in C, this is a "specification" :
int times_two(int);
while the Haskell equivalent :
times :: Int -> Int
is somehow completely different?
No. But also there is nothing specifically "functional" in these
primitive specifications.
There was not intended to be anything "functional" about them. You said that in imperative languages, a declaration is a specification, while in functional programming languages you have no specifications. I showed
that you can have exactly the same kind of declarations in both
languages - the differences you are claiming are imaginary. (Such declarations are not specifications in either language.)
Incidentally, you /do/ understand that "object oriented" is orthogonal
to imperative/declarative? Many functional programming languages are object oriented, just as many imperative languages are not.
They are not even first class.
"Int" is a first class object type in Haskell and C. The "times"
function is a first class object type in Haskell, but not in C.
You didn't write:
int (int) times_two; // Some functional C (no pun intended (:-))
Indeed I didn't write that - it is not syntactically correct in either sample language.
I could have written a declaration for a higher order function in Haskell:
do_twice :: (Int -> Int) -> (Int -> Int)
That takes a function that is int-to-int, and returns a function that is int-to-int.
And again - these are declarations, not specifications.
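For comparison, the same higher-order shape can be sketched in Python (the names are illustrative):

```python
from typing import Callable

def do_twice(f: Callable[[int], int]) -> Callable[[int], int]:
    # Takes an int-to-int function, returns an int-to-int function.
    return lambda x: f(f(x))

times_two = lambda x: 2 * x
times_four = do_twice(times_two)
assert times_four(3) == 12
```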
The point is that if you take some really fancy functional stuff, it
would be difficult or, maybe, useless to formally describe in some
meta language of specifications.
A function that cannot sensibly be described is of little use to anyone.
That is completely independent of the programming language paradigm.
If no one can tell you what "foo" does, you can't use it. I cannot see
any way in which imperative languages differ from declarative languages
in that respect.
Another issue is treatment of types when each function is an
operation on some types and nothing else.
I can't understand what you mean. Functional programming language functions are not operations on types.
Yep.
Oh, so you think functions in functional programming languages don't
have types or act on types?
It was you who said "Functional programming language functions are not
operations on types." I only agreed with you.
I think I see the source of confusion. When you wrote "each function is
an operation on some types", you mean functions that operate on
/instances/ of certain types. There is a vast difference between
operating on /instances/, and operating on /types/.
I think what you meant to say was that in imperative languages, you
define functions to act on particular types (the types of the
parameters), while in functional programming you don't give the types of
the parameters so they operate on "anything".
This is, of course, wrong in both senses. More advanced imperative languages
let you define functions that operate on many types - they are known as
template functions in C++, generics in Ada, and similarly in other languages.
And in functional programming languages you can specify
types as precisely or generically as you want.
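For instance, a sketch in Python's typing module shows both ends of that spectrum, a fully generic signature next to a precise one:

```python
from typing import TypeVar

T = TypeVar("T")

def identity(x: T) -> T:      # generic: works for any type
    return x

def double(x: int) -> int:    # precise: int to int only (by annotation)
    return 2 * x

assert identity("abc") == "abc"
assert identity(7) == 7
assert double(21) == 42
```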
You like a programming language where you can understand a near
one-to-one correspondence between the source language and the generated assembly. Fair enough - that's a valid and reasonable preference.
I have nothing against preferences. I just don't understand how people
can dismiss other options as impractical, useless, unintuitive,
impossible to use, or whatever, simply because those other languages are
not a style that they are familiar with or not a style they like.
Functional programming languages have supported generic programming
and type inference for a lot longer than most imperative languages,
but those are both standard for any serious modern imperative
language (in C++ you have templates and "auto", in other languages
you have similar features).
Type inference is a separate, and very controversial issue.
It is only as controversial as any other feature where you have a
trade-off between implicit and explicit information. It is not really
any more controversial than the implicit conversions found in many
languages and user types.
And it is not a separate issue - unless you are using a dynamic language that supports objects of any type in any place (with run-time checking
when the objects are used), type inference is essential to how generic programming works.
Conversely, all functions in all languages operate on some types
and nothing else.
No, in OOPL a method operates on the class. A "free function" takes
some arguments in unrelated types.
Methods in OOPL (or what most people think of as Object Oriented
programming, such as C++, Java, Python, etc., rather than the
original intention of OOP, which is now commonly called the "actors"
paradigm) are syntactic sugar for a function with the class instance
as the first parameter.
Not really, because methods dispatch. A method acts on the class and
its implementation consists of separate bodies, which is the core of
OO decomposition as opposed to other paradigms.
Methods do not act on classes - they act on instances of a class (plus
any static members of the class).
Tying function code to method names or free function names is just
naming syntax details, not an operation on the class or type itself.
On 2022-11-22 15:11, David Brown wrote:
[...] It is not really any more controversial than the implicit
conversions found in many languages and user types.
Yes, and there should be none in a well-typed program.
On 22/11/2022 15:24, Dmitry A. Kazakov wrote:
On 2022-11-22 15:11, David Brown wrote:
[...] It is not really any more controversial than the implicit
conversions found in many languages and user types.
Yes, and there should be none in a well-typed program.
Virtually every computer language since time began has glossed
over the distinction between constants and variables in simple arithmetic expressions [and in contexts such as parameters to functions]. You can
use an implicit dereference or you can make things harder for programmers;
I know which I prefer.
Personally, in the interests of making programming easier, I'm in favour of more, not fewer, implicit conversions.
If these are well-chosen,
you can [and should] avoid almost all explicit conversions. I see little point in telling people that they can't use "sqrt(2)" but must write "sqrt(2.0)"
or, for consistency, can't write "j := i" but must write "j := (deref) i" instead.
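Python behaves the same way: the integer argument is converted implicitly, so `sqrt(2)` just works:

```python
import math

# The int argument 2 is implicitly converted to the float 2.0:
assert math.sqrt(2) == math.sqrt(2.0)

# Mixed-type arithmetic converts implicitly as well:
assert 1 + 0.5 == 1.5
```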
On 2022-11-22 15:11, David Brown wrote:
Familiarity with these gives you a broader background than many
programmers, but no insight or experience with functional programming.
I don't pretend otherwise. I said I don't buy the declarative approach and I don't buy first-class functions, however you shape them.
Oh, so you think functional programming languages are designed and
intended to be useless, and it is only by accident or stubbornness
that anyone can actually make use of them? That is an "interesting" argument.
Absolutely. Most languages fall into this category. It is not a
unique feature of FPLs...
That is quite a cynical viewpoint!
Grows from familiarity... (:-))
Whether the language supports first-class functions or not makes
no difference as to how well specified functions are, or whether
the implementation follows that specification or not.
Specification is a declarative layer on top of the object language. In imperative procedural languages that layer is declaration of
subprograms.
No, that is not what "specification" means. A declaration is just
the name, parameters, types, etc., of a function. A specification
says what the function /does/.
That is same. When you specify type, to say what the object "does" by
being of that type.
No, they are not the same at all. A declaration is needed to use the
function (or other object) in the language.
Almost no language declares naked names. Most combine declaration with a
specification of what these names are supposed to mean, i.e. how they behave.
In most programming, specifications are not written in the language
itself though some allow a limited form of formal specification
(such as pre-conditions and post-conditions).
You have them in the languages or you have not. Clearly
specifications form a meta language on top of the object core language.
No, it is not "clearly" at all. If a programming language allows
specifications of some sort as part of the language, then it is part
of the language. If it does not, then it is not part of the language
- specifications then have to be written independently (such as in a
separate human-language document) and are not in a "meta language".
If it is not a part of the language, then there is nothing to talk about.
In OOPL it is types (classes) defined in terms of methods (members
are built-in getter/setter methods). In FPL, typically, there is
none, unless you introduce some meta functions, whatever. E.g.
generics/templates exist for infinity and still have no reasonable
specifications, and thus, are fundamentally non-testable. I do not
care even a little bit about FP, but, my guess is that it must have similar issues.
So you think that in C, this is a "specification" :
int times_two(int);
while the Haskell equivalent :
times :: Int -> Int
is somehow completely different?
No. But also there is nothing specifically "functional" in these
primitive specifications.
There was not intended to be anything "functional" about them. You
said that in imperative languages, a declaration is a specification,
while in functional programming languages you have no specifications.
I showed that you can have exactly the same kind of declarations in
both languages - the differences you are claiming are imaginary.
(Such declarations are not specifications in either language.)
You showed a non-functional part. I said that marrying specifications
with the "functional" part would IMO be difficult. There is too much power of function construction to predict and constrain the behavior.
Incidentally, you /do/ understand that "object oriented" is orthogonal
to imperative/declarative? Many functional programming languages are object
oriented, just as many imperative languages are not.
They are not even first class.
"Int" is a first class object type in Haskell and C. The "times"
function is a first class object type in Haskell, but not in C.
You didn't write:
int (int) times_two; // Some functional C (no pun intended (:-))
Indeed I didn't write that - it is not syntactically correct in either
sample language.
I could have written a declaration for a higher order function in
Haskell:
do_twice :: (Int -> Int) -> (Int -> Int)
That takes a function that is int-to-int, and returns a function that
is int-to-int.
Not the same. int (int) times_two; is supposedly a variable whose values
are int-valued functions taking one int argument. That would be the
simplest possible case of first-class functions.
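In Python such a function-valued variable is immediate; the working C equivalent would be a function pointer, `int (*times_two)(int);`. A sketch:

```python
# A variable whose values are int-valued functions of one int argument:
times_two = lambda x: 2 * x
assert times_two(5) == 10

# Being a variable, it can be reassigned to another such function:
times_two = lambda x: x + x
assert times_two(5) == 10
```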
And again - these are declarations, not specifications.
They are rudimentary specifications. Elaborating them is difficult both in C
and in an FPL.
The point is that if you take some really fancy functional stuff, it
would be difficult or, maybe, useless to formally describe in some
meta language of specifications.
A function that cannot sensibly be described is of little use to
anyone. That is completely independent of the programming language
paradigm. If no one can tell you what "foo" does, you can't use it. I
cannot see any way in which imperative languages differ from
declarative languages in that respect.
That comment was about first-class functions. If you have a
sufficiently complex algebra generating functions, especially at run time, it becomes difficult to describe the result in specifications.
The declarative approach merely makes code difficult to understand, and thus to reuse
and maintain.
Another issue is treatment of types when each function is an
operation on some types and nothing else.
I can't understand what you mean. Functional programming language functions are not operations on types.
Yep.
Oh, so you think functions in functional programming languages don't
have types or act on types?
It was you who said "Functional programming language functions are
not operations on types." I only agreed with you.
I think I see the source of confusion. When you wrote "each function
is an operation on some types", you mean functions that operate on
/instances/ of certain types. There is a vast difference between
operating on /instances/, and operating on /types/.
I did not mean first-class type objects (types of types). They raise the same objections as first-class procedural objects (types of
subprograms). It would surely be interesting as academic research and
a nightmare in production code.
Acting on a type means being a function of the type domain. E.g. sine
acts on R. R is a type ("field" etc).
I think what you meant to say was that in imperative languages, you
define functions to act on particular types (the types of the
parameters), while in functional programming you don't give the types
of the parameters so they operate on "anything".
No, that would be untyped. Most FPLs are typed, I hope.
This is, of course, wrong in both senses. More advanced imperative languages
let you define functions that operate on many types - they are known
as template functions in C++, generics in Ada, and similarly in other
languages.
That is acting on a class. Class is a set of types, e.g. built by
derivation closure in OO or ad-hoc as "any macro expansion the compiler would swallow" in the case of templates... (:-))
And in functional programming languages you can specify types as
precisely or generically as you want.
I don't see why an FPL could not include some "non-functional" core.
Like C++ contains non-OO stuff borrowed from C. But talking about
advantages and disadvantages of a given paradigm we must discuss the new stuff. Old stuff usually indicates incompleteness of the paradigm. E.g.
C++, Java, Ada are nowhere close to being 100% OO. On the contrary, they
are non-OO most of the time.
You like a programming language where you can understand a near
one-to-one correspondence between the source language and the
generated assembly. Fair enough - that's a valid and reasonable
preference.
Much weaker than that. I want to be able to recognize complexity and the
algorithm, and to have limited effects on the computational environment.
Basically, small things must look small and big things big. If there is
a loop or recursion I'd like to be aware of it. I also like to have
"uninteresting" details hidden. But I want sufficient stuff like memory
management, blocking etc. exposed.
I have nothing against preferences. I just don't understand how
people can dismiss other options as impractical, useless, unintuitive,
impossible to use, or whatever, simply because those other languages
are not a style that they are familiar with or not a style they like.
You get me wrong. It is not about style, though that is often more
important than anything else. My concern is with the paradigm as a whole.
Functional programming languages have supported generic programming
and type inference for a lot longer than most imperative languages,
but those are both standard for any serious modern imperative
language (in C++ you have templates and "auto", in other languages
you have similar features).
Type inference is a separate, and very controversial issue.
It is only as controversial as any other feature where you have a
trade-off between implicit and explicit information. It is not really
any more controversial than the implicit conversions found in many
languages and user types.
Yes, and there should be none in a well-typed program.
And it is not a separate issue - unless you are using a dynamic
language that supports objects of any type in any place (with run-time
checking when the objects are used), type inference is essential to
how generic programming works.
No, you can have static polymorphism without inference. You possibly
mean some evil automatic instantiations of generics instead, like in
C++. I don't want automatic instantiation either.
Conversely, all functions in all languages operate on some types
and nothing else.
No, in OOPL a method operates on the class. A "free function" takes
some arguments in unrelated types.
Methods in OOPL (or most people think of as Object Oriented
programming, such as C++, Java, Python, etc., rather than the
original intention of OOP which is now commonly called "actors"
paradigm) are syntactic sugar for a function with the class instance
as the first parameter.
Not really, because methods dispatch. A method acts on the class and
its implementation consists of separate bodies, which is the core of
OO decomposition as opposed to other paradigms.
Methods do not act on classes - they act on instances of a class (plus
any static members of the class).
Rather on the closure of the class. Class is a set of types. Method acts
on all values from all instances of the set.
Tying function code to method names or free function names is just
naming syntax details, not an operation on the class or type itself.
No, a free function does not dispatch if a class is involved. It has the
same body, valid for values of all instances. A method selects a body
according to the actual instance. The selected body could be composed
out of several bodies, like C++'s constructors and destructors, which
are not methods, but one could imagine a language with that sort of
composition of methods.
a.foo();   // A_foo(&a)
pa->foo(); // A_foo(&b) - probably wrong
b.foo();   // B_foo(&b)
a.bar();   // A_bar(&a) - dynamic dispatch
pa->bar(); // B_bar(&b) - dynamic dispatch
b.bar();   // B_bar(&b)
On 22/11/2022 15:24, Dmitry A. Kazakov wrote:
On 2022-11-22 15:11, David Brown wrote:
[...] It is not really any more controversial than the implicit
conversions found in many languages and user types.
Yes, and there should be none in a well-typed program.
Virtually every computer language since time began has glossed
over the distinction between constants and variables in simple arithmetic expressions [and in contexts such as parameters to functions]. You can
use an implicit dereference or you can make things harder for programmers;
I know which I prefer.
Personally, in the interests of making programming easier, I'm in favour of more, not fewer, implicit conversions. If these are well-chosen, you can [and should] avoid almost all explicit conversions. I see little point in telling people that they can't use "sqrt(2)" but must write "sqrt(2.0)" [or "sqrt((real) i))"] instead; can't write "print (x)" but must write "(void) print (x)" [as "print" happens to return the number of characters printed, which you happen not to care about on this occasion];
or, for consistency, can't write "j := i" but must write "j := (deref) i" instead. It's not as though in any of these constructs there is reason
ever to expect the implicit coercions not to apply, so that some errors
are missed. But some people seem to prefer hair shirts.
On 22/11/2022 16:24, Dmitry A. Kazakov wrote:
On 2022-11-22 15:11, David Brown wrote:
Familiarity with these gives you a broader background than many
programmers, but no insight or experience with functional programming.
I don't pretend. I said I don't buy declarative approach and I don't
buy first-class functions, however you shape them.
OK, I suppose - but I hope you'll forgive me if I don't believe you have demonstrated enough experience or knowledge for your thoughts on
functional programming languages to carry any weight beyond personal dislike. (I won't argue with personal tastes, as long as it is clear
that it is /your/ preferences for the programming /you/ do.)
Almost no language declares naked names. Most combine that with a
specification of what these names are supposed to mean = behave.
Again - you are mixing up specifications and declarations.
"int foo(int x)" is a /declaration/.
It is all you need in C to be able
to use the function to compile code, and similar declarations are all
you need in most other languages. It is /not/ a /specification/. It
says /nothing/ about what the function means, does, or how it behaves.
/Please/ tell me you understand the difference! We are not going to get far if you don't see that.
As a specification, you could say "function "do_twice" should take a function as an argument, and return a function that has the same effect
as applying the original function twice".
So, now you have an example of a higher level function and its
specification - /and/ an implementation, /and/ a declaration. Was that
so difficult?
I am sorry, I really cannot understand what you are trying to say here.
You seem to be mixing up types, variables and functions here.
Arguably, functional programming is often just a formalised and precise language for writing your specifications. Functional programming is primarily concerned with what a function should /do/, and much less concerned about /how/ it should do it.
For real functional code, you will often define functions in a way that gives the general direction for how it will work. For example, a simple quicksort Haskell function could be :
qs [] = []
qs (x : xs) = (qs left) ++ [x] ++ (qs right)
  where
    left  = filter (< x) xs
    right = filter (>= x) xs
That could be considered a technical specification of a quicksort
algorithm.
A more advanced Haskell implementation would use in-place
sorting for efficiency.
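For comparison, the filter-based sketch above transliterates almost line for line into deliberately non-in-place C++ (a rough illustration, not the thread's code):

```cpp
#include <vector>

// Mirror of the Haskell sketch: take the head as pivot, partition the
// tail into "left" (< pivot) and "right" (>= pivot), sort each part
// recursively and concatenate.
std::vector<int> qs(std::vector<int> v) {
    if (v.empty()) return v;
    int x = v.front();
    std::vector<int> left, right;
    for (std::size_t i = 1; i < v.size(); ++i) {
        (v[i] < x ? left : right).push_back(v[i]);
    }
    left = qs(left);
    left.push_back(x);
    right = qs(right);
    left.insert(left.end(), right.begin(), right.end());
    return left;
}
```

Written this way it reads as a specification first and an implementation second, which is the point being made above.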
The point is that if you take some really fancy functional stuff, it
would be difficult or, maybe, useless to formally describe in some
meta language of specifications.
A function that cannot sensibly be described is of little use to
anyone. That is completely independent of the programming language
paradigm. If no one can tell you what "foo" does, you can't use it.
I cannot see any way in which imperative languages differ from
declarative languages in that respect.
That comment was on the first-class functions. If you have a
sufficiently complex algebra generating functions, especially during
run-time, it becomes difficult to describe the result in specifications.
Is your dislike of functional programming really just that it is
possible to write higher order functions that you can't understand or describe?
People can write crap in any language. Just have a look at <https://thedailywtf.com/> and you can see examples of indescribable
code in every language you can imagine.
Declarative approach is merely difficult to understand and thus to
reuse and maintain code.
It is okay to say "/I/ don't understand language X, so I don't use it".
It is not okay to make general claims about what other people
understand. You can be sure that people who program in functional programming languages /do/ understand what they are doing, and /can/ maintain and reuse their code.
Some languages have a reputation for being "write only", with Perl being
the prime contender. I've never heard that about any functional programming language - and I challenge you to back up your claims with references. Failing that, you are just another whiner who mocks things they don't understand rather than accept they don't know everything.
Another issue is treatment of types when each function is an
operation on some types and nothing else.
I can't understand what you mean. Functional programming
language functions are not operations on types.
Yep.
Oh, so you think functions in functional programming languages
don't have types or act on types?
It was you who said "Functional programming language functions are
not operations on types." I only agreed with you.
I think I see the source of confusion. When you wrote "each function
is an operation on some types", you mean functions that operate on
/instances/ of certain types. There is a vast difference between
operating on /instances/, and operating on /types/.
I did not mean first-class type objects (types of types). They raise
the same objections as first-class procedural objects (types of
subprograms). It would surely be interesting as academic research
and a nightmare in production code.
Acting on a type means being a function of the type domain. E.g. sine
acts on R. R is a type ("field" etc).
So when you write "act on a type", you mean "act on an instance of a
type" - just like "sine" acts on real numbers, or members of R, and not
on the set R.
I think what you meant to say was that in imperative languages, you
define functions to act on particular types (the types of the
parameters), while in functional programming you don't give the types
of the parameters so they operate on "anything".
No, that would be untyped. Most FPLs are typed, I hope.
No, it would be generic programming. Generic programming is not the
same as untyped programming - in generic programming your functions are defined to work with many types but each /use/ of the function is on
values of specific types.
A fair bit of functional programming is
generic, but some of it is type-specific - you can have both (just as
you can in many languages).
This is, of course, wrong in both senses. More advanced imperative languages
let you define functions that operate on many types - they are known
as template functions in C++, generics in Ada, and similarly in other
languages.
That is acting on a class. Class is a set of types, e.g. built by
derivation closure in OO or ad-hoc as "any macro expansion the
compiler would swallow" in the case of templates... (:-))
Again - no. While the definition of "class" (and even "type") varies between languages, classes are not "sets of types". (And again, you are mixing up "acting on something" with "acting on instances of
something".) What you are describing there is a type hierarchy in a language with nominal subtyping - basically, the way class inheritance
works in C++ and Java. That is not the only way to do object-oriented typing in a language.
And in functional programming languages you can specify types as
precisely or generically as you want.
I don't see why an FPL could not include some "non-functional" core.
Like C++ contains non-OO stuff borrowed from C. But talking about
advantages and disadvantages of a given paradigm we must discuss the
new stuff. Old stuff usually indicates incompleteness of the paradigm.
E.g. C++, Java, Ada are nowhere close to being 100% OO. On the contrary,
they are non-OO most of the time.
Smalltalk is one of the few languages that could be considered entirely object-oriented.
What is /not/ fine is to take your line and say "The stuff on this side
is good - it is good engineering and programming, letting people write
clear and maintainable code. The stuff on the other side is incomprehensible, impractical nonsense that doesn't work."
(I've been through this with others - I really cannot understand how
some people get so narrow-minded and insular that they believe anything different from their familiar little world is /wrong/.)
And it is not a separate issue - unless you are using a dynamic
language that supports objects of any type in any place (with
run-time checking when the objects are used), type inference is
essential to how generic programming works.
No, you can have static polymorphism without inference. You possibly
mean some evil automatic instantiations of generics instead, like in
C++. I don't want automatic instantiation either.
You cannot have generic programming without type inference - that's what
I said, and that's what I mean. What you /want/, or what you personally choose to use, is irrelevant.
template <class T>
T add_one(T x) { return x + 1; }
int foo(int x) {
return add_one(x);
}
Type inference is how the compiler calls the right "add_one" instance
and how it determines what the type "T" actually is in the concrete
function call.
Using it too much in a language like C++ can make code harder to
understand - when you have "auto x = bar();", anyone reading the code
needs to make more of an effort to find the real type of "x". But if
the types involved are complicated, then it makes code much clearer,
more flexible and maintainable, and far more likely to be correct.
It is a sharp tool, and must be used responsibly.
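Both points - deduction in generic calls and "auto" hiding a long type name - can be shown in a minimal C++ sketch (the names are invented for the example):

```cpp
#include <map>
#include <string>

// Template argument deduction: the caller never names T explicitly.
// add_one(41) deduces T = int; add_one(1.5) deduces T = double.
template <class T>
T add_one(T x) { return x + 1; }

// "auto" hides a type that would be tedious to spell out.
int lookup_a() {
    std::map<std::string, int> m{{"a", 1}};
    auto it = m.find("a");  // std::map<std::string, int>::iterator
    return it->second;
}
```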
Conversely, all functions in all languages operate on some types
and nothing else.
No, in OOPL a method operates on the class. A "free function"
takes some arguments in unrelated types.
Methods in OOPL (or most people think of as Object Oriented
programming, such as C++, Java, Python, etc., rather than the
original intention of OOP which is now commonly called "actors"
paradigm) are syntactic sugar for a function with the class
instance as the first parameter.
Not really, because methods dispatch. A method acts on the class and
its implementation consists of separate bodies, which is the core of
OO decomposition as opposed to other paradigms.
Methods do not act on classes - they act on instances of a class
(plus any static members of the class).
Rather on the closure of the class. Class is a set of types. Method
acts on all values from all instances of the set.
I am assuming you are not using the word "closure" in the sense normally used in programming.
You mean a hierarchy of object types in a nominal
typing system with inheritance. Methods act on instances of a class,
and the same method may be able to act on instances of more than one
class in the hierarchy.
Any given invocation is on a specific instance
of a specific class.
Now there is no difference between "x.foo();" and "foo(x);".
Method calls for non-virtual methods are just the same as free
functions, but with a convenient syntax.
The virtual methods are more interesting, since the dispatch is dynamic.
This is handled by giving each instance of the classes a hidden
pointer to a virtual method table. So now the free function "bar" is a
bit more complicated:
void bar(A* a) { a->__vt[index_of_bar](a); }
(Multiple inheritance makes this messier, but the principle is the same.)
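The hidden-table mechanism can be spelled out by hand; this C++ sketch mirrors the pseudo-code above (the names "__vt" and "index_of_bar" are kept from it, everything else is illustrative):

```cpp
// Hand-rolled dynamic dispatch: each instance carries a pointer to a
// table of function pointers, as a compiler would generate for
// virtual methods.
struct A;
typedef void (*method_t)(A*);

struct A {
    method_t const* __vt;  // hidden pointer to the virtual method table
    int result;
};

enum { index_of_bar = 0 };

void A_bar(A* a) { a->result = 1; }
void B_bar(A* a) { a->result = 2; }

method_t const A_table[] = { A_bar };
method_t const B_table[] = { B_bar };  // "derived" instances use this table

// The free function "bar" dispatches through the instance's table:
void bar(A* a) { a->__vt[index_of_bar](a); }
```

Calling bar() on an instance wired to A_table runs A_bar; on one wired to B_table it runs B_bar, with the same call site.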
Personally, in the interests of making programming easier, I'm in
favour of more, not fewer, implicit conversions.
If you want no conversions you must state that the value is of a subtype.
If these are well-chosen,
By the programmer, see above.
you can [and should] avoid almost all explicit conversions. I see little
point in telling people that they can't use "sqrt(2)" but must write
"sqrt(2.0)"
They could, should 2 be a subtype of Float (or whatever). There is no
reason why this must be determined by the language. Technically, it
would introduce a lot of ambiguities and thus make programming more
difficult because the programmer must then use qualified expressions
to disambiguate.
or, for consistency, can't write "j := i" but must write "j := (deref) i"
instead.
Surely a pointer type can be a subtype of the target type. It is so
in Ada for array index, record member, function call. Not for
assignment, though. But again, that must be up to the programmer. The
type system must provide ad-hoc subtypes.
On 22/11/2022 20:26, Dmitry A. Kazakov wrote:
If these are well-chosen,
By the programmer, see above.
The whole point of implicit conversions is that they take place
with no overt action needed by the programmer.
Of course, that places a
responsibility on the language design to make such conversions safe, so
that you can't normally stumble into one that gives unexpected results.
you can [and should] avoid almost all explicit conversions. I see little
point in telling people that they can't use "sqrt(2)" but must write
"sqrt(2.0)"
They could, should 2 be a subtype of Float (or whatever). There is no
reason why this must be determined by the language. Technically, it
would introduce a lot of ambiguities and thus make programming more
difficult because the programmer must then use qualified expressions
to disambiguate.
If, as is commonly the case, "sqrt" is a function taking a real
[or "float"] parameter and there is no competing "sqrt", then there is
no plausible ambiguity.
Either "2" [or "i + 1" or whatever] can be
implicitly converted to type "real", in which case that is, beyond
reasonable doubt, what the programmer intended, or it cannot, in which
case it's a compile-time error.
Somehow,
many languages manage to define implicit conversions without the raft
of difficulties you imply.
or, for consistency, can't write "j := i" but must write "j := (deref) i"
instead.
Surely a pointer type can be a subtype of the target type. It is so
in Ada for array index, record member, function call. Not for
assignment, though. But again, that must be up to the programmer. The
type system must provide ad-hoc subtypes.
If the assignment [in the context of "int i, j;" with the usual meaning] "j := 2;" is legal and also "j := i;" is legal, then since "2"
and "i" are not interchangeable in general [you can't write "2 := j;",
for example], there must be an implicit conversion [eg, from "integer variable" to "integer"] taking place.
The whole point of HLLs is to make programming easier, not to enforce some strict purity regime.
By the programmer, see above.
The whole point of implicit conversions is that they take place
with no overt action needed by the programmer.
This is what I meant. You declare that A is a subtype of B (inheriting
in- or out- or all operations) once. Then you enjoy the effect
distributed everywhere that declaration has effect.
Of course, that places a
responsibility on the language design to make such conversions safe, so
that you can't normally stumble into one that gives unexpected results.
These conversions are fundamentally unsafe because different types
are, well, different. If they weren't then one type would suffice and
there would be no need in conversions.
you can [and should] avoid almost all explicit conversions. I see little
point in telling people that they can't use "sqrt(2)" but must write
"sqrt(2.0)"
They could, should 2 be a subtype of Float (or whatever). There is no
reason why this must be determined by the language. Technically, it
would introduce a lot of ambiguities and thus make programming more
difficult because the programmer must then use qualified expressions
to disambiguate.
If, as is commonly the case, "sqrt" is a function taking a real
[or "float"] parameter and there is no competing "sqrt", then there is
no plausible ambiguity.
It could be integer-valued or complex-valued sqrt. Or a sqrt
returning a lesser precision type etc.
Either "2" [or "i + 1" or whatever] can be
implicitly converted to type "real", in which case that is, beyond
reasonable doubt, what the programmer intended, or it cannot, in which
case it's a compile-time error.
Are you talking about a type error here, or something else?
Anyway there could be many floating-point, fixed-point, complex types
around. Imagine it as a disjoint graph in a strongly typed language.
Conversions introduce paths in the graph. Paths get connected and
suddenly you have a dish full of spaghetti.
If the assignment [in the context of "int i, j;" with the usual
meaning] "j := 2;" is legal and also "j := i;" is legal, then since "2"
and "i" are not interchangeable in general [you can't write "2 := j;",
for example], there must be an implicit conversion [eg, from "integer
variable" to "integer"] taking place.
That does not compute. 2 := j; is illegal merely because 2 (after
overloading resolution to int) is a constant and the first argument
of := is mutable.
The whole point of HLLs is to make programming easier, not to
enforce some strict purity regime.
Sure. The disagreement is about achieving that ease.
On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
[implicit conversions:]
By the programmer, see above.
The whole point of implicit conversions is that they take place
with no overt action needed by the programmer.
This is what I meant. You declare that A is a subtype of B (inheriting
in- or out- or all operations) once. Then you enjoy the effect
distributed everywhere that declaration has effect.
If you have to declare that "int" is a subtype of "real", then it's not "implicit"; overt action by the programmer is needed. Further, ...
Of course, that places a
responsibility on the language design to make such conversions safe, so
that you can't normally stumble into one that gives unexpected results.
These conversions are fundamentally unsafe because different types
are, well, different. If they weren't then one type would suffice and
there would be no need in conversions.
..., allowing such /overt/ action means that the compiler has to check whether the declaration is sensible [I surely can't declare that
"real" is a sub-type of "int"] and also has to check for ambiguities and other safety issues. The point of implicit "int -> real" conversion, for example, is that the /language/ already knows about it, and knows, eg,
that in contexts requiring a "real", an "int" may safely be supplied instead. The types are different, but the conversion is safe.
Note 1: There is no implication that the conversion is /always/ applied
regardless of context; there is a pre-condition that the compiler knows
that "real" is required. Eg, "print (2)" should output "2", not "2.00".
Note 2: The conversion does not extend to other types, such as variables;
if a real variable is required, you can't supply an integer variable.
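Both notes can be illustrated with a small C++ sketch (the name "my_sqrt" is invented): the conversion fires only where the parameter's declared type requires it.

```cpp
#include <cmath>

// my_sqrt demands a double; calling it with the int literal 2 relies
// on the implicit int -> double conversion. No cast is written.
double my_sqrt(double x) { return std::sqrt(x); }

// my_sqrt(2) behaves exactly like my_sqrt(2.0); the conversion applies
// only because the context (the parameter type) requires a double.
```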
you can [and should] avoid almost all explicit conversions. I see little
point in telling people that they can't use "sqrt(2)" but must write
"sqrt(2.0)"
They could, should 2 be a subtype of Float (or whatever). There is no
reason why this must be determined by the language. Technically, it
would introduce a lot of ambiguities and thus make programming more
difficult because the programmer must then use qualified expressions
to disambiguate.
The whole point of it being determined by the language is that the language knows that there is no ambiguity in "int -> real" /in contexts
where "real" is expected/. In most languages, such contexts are well
understood by the compiler and described by the language standard.
If, as is commonly the case, "sqrt" is a function taking a real
[or "float"] parameter and there is no competing "sqrt", then there is
no plausible ambiguity.
It could be integer-valued or complex-valued sqrt. Or a sqrt
returning a lesser precision type etc.
It could indeed. But the compiler knows the signature of whichever "sqrt" is in scope, and therefore still knows that a "real" [or "complex"
or "long long real" or ...] parameter is needed, and so can arrange to convert the supplied "int" with no action by the programmer.
Either "2" [or "i + 1" or whatever] can be
implicitly converted to type "real", in which case that is, beyond
reasonable doubt, what the programmer intended, or it cannot, in which
case it's a compile-time error.
Are you talking about a type error here, or something else?
I'm talking about a hypothetical language in which "int" is not reliably a sub-type of "real", in which case "x := 2" is indeed a type error. That particular case is, IRL, unlikely, but conversions such as "char -> int", "X -> array of X with a single element", ... are more problematic [possible implicitly in some languages, not in others].
Anyway there could be many floating-point, fixed-point, complex types
around. Imagine it as a disjoint graph in a strongly typed language.
Conversions introduce paths in the graph. Paths get connected and
suddenly you have a dish full of spaghetti.
That's why the language has to rule on which such paths are possible, and in what circumstances. The objective should always be
to make life easier for the programmer, not to pile complexity on top
of complexity. You can't expect engineers, physicists, biologists,
... to understand arcane rules of programming that make sense only to
CS professionals and a sub-set of mathematicians.
[...]
If the assignment [in the context of "int i, j;" with the usual
meaning] "j := 2;" is legal and also "j := i;" is legal, then since "2"
and "i" are not interchangeable in general [you can't write "2 := j;",
for example], there must be an implicit conversion [eg, from "integer
variable" to "integer"] taking place.
That does not compute. 2 := j; is illegal merely because 2 (after
overloading resolution to int) is a constant and the first argument
of := is mutable.
Yes. That shows that "2" and "j" have different types.
Which
shows, in turn, that in places where you can write "2" /or/ "j" with
the same meaning, an implicit conversion must have taken place.
If
you look at the compiled code for "i := 2" and "i := j", you will find
that there is a difference.
The whole point of HLLs is to make programming easier, not to
enforce some strict purity regime.
Sure. The disagreement is about achieving that ease.
Yes. I would suggest that the evidence of code snippets posted here in relation to Ada and C shows that neither is even close. Nor is,
for example, Pascal. Some dialects of Basic are much better, at least within a limited class of problems. In the interests of /not/ starting "holier than thou" language wars, I shall say no more.
If you have to declare that "int" is a subtype of "real", then it's
not "implicit"; overt action by the programmer is needed. Further, ...
Same is when the programmer's actions are needed to declare X int and
Y real. Implicit here is the conversion in X + Y, not the
declarations.
The difference is that by default int and real are unrelated types.
The programmer could explicitly put them in some common class, e.g. a
class of additive types having + operation. That would make + an
operation from the class (int, real).
[...] The point of implicit "int -> real" conversion, for
example, is that the /language/ already knows about it, and knows, eg,
that in contexts requiring a "real", an "int" may safely be supplied
instead. The types are different, but the conversion is safe.
The result would be a very rigid over-specified language. E.g. what
if the programmer wanted to implement a custom floating-point type?
How would the compiler know that this type must enjoy implicit
conversions to int?
Note 2: The conversion does not extend to other types, such as variables;
if a real variable is required, you can't supply an integer variable.
You probably mean some algebraic types built upon it, like a
pointer to real.
It is not so obvious. You might want some operations
of such types to enjoy conversions as well. For example, array
indexing, record members etc:
Real_Array (I) := 2; -- No?
Real_Array (1..3) := (1, 2, 3); -- No?
etc. IMO, the cleanest way is to have this stuff properly typed (=no conversions at all) and achieve desired effects through inter-type relationships designed by the programmer.
The whole point of it being determined by the language is that the
language knows that there is no ambiguity in "int -> real" /in contexts
where "real" is expected/. In most languages, such contexts are well
understood by the compiler and described by the language standard.
I doubt one could formulate such rules consistently without
introducing ambiguities.
The typical case in Ada is that you have competing resolutions. E.g.
you have something like
print(sqrt(2))
There are lots of prints there, accepting all possible sqrts.
It requires a lot of fine tuning to make such things usable. You
cannot do that at the language level, IMO.
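The competing-resolutions problem is easy to reproduce in C++ too; a small illustrative overload set (names invented) shows how the outcome depends on the whole set, not on any single declaration:

```cpp
// Two competing "f"s; which one a literal resolves to depends on the
// full overload set, like the many print/sqrt pairs discussed above.
int f(int)    { return 1; }
int f(double) { return 2; }

// f(2) picks f(int) (exact match); f(2.0) picks f(double).
// With only f(double) in scope, f(2) would implicitly convert instead;
// with f(float) and f(double) both in scope, f(2) would be ambiguous
// and rejected at compile time.
```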
On 25/11/2022 15:46, Dmitry A. Kazakov wrote:
[I wrote:]
If you have to declare that "int" is a subtype of "real", then it's
not "implicit"; overt action by the programmer is needed. Further, ...
Same is when the programmer's actions are needed to declare X int and
Y real. Implicit here is the conversion in X + Y, not the
declarations.
In normal languages, the compiler has no reason to suppose that "X" is an "int" unless there is an overt declaration to that effect.
The difference is that by default int and real are unrelated types.
That may be the default in /some/ languages; others know, with no further action by the programmer, that "int"s may be converted to "real"s /when required/.
The programmer could explicitly put them in some common class, e.g. a
class of additive types having + operation. That would make + an
operation from the class (int, real).
Yes, lots of things are possible. Would you really want to use a language in which the programmer of [eg] "scientific" code with much use
of integer and floating-point arithmetic has to invoke many lines of such placement before embarking on even the simplest of programs?
We knew how
to avoid such make-work more than 60 years ago; we should move on rather than back.
The compiler doesn't know that,
and can't be expected to. If a conversion is not one of those specified
by the language, then it is not implicit.
etc. IMO, the cleanest way is to have this stuff properly typed (=no
conversions at all) and achieve desired effects through inter-type
relationships designed by the programmer.
That may [perhaps] be clean. It is not, IMO, something which should
burden the average engineer, physicist, ... who merely wants to
write simple programs.
The typical case in Ada is that you have competing resolutions. E.g.
you have something like print(sqrt(2))
There are lots of prints there, accepting all possible sqrts. It
requires a lot of fine tuning to make such things usable. You
cannot do that at the language level, IMO.
Perhaps you should again look at A68; RR 10.5.1d, which
refers back to 10.3.31a which is where all the hard work is done.
It is only a couple of pages, despite all the "lots of prints".
Same is when the programmer's actions are needed to declare X int and
Y real. Implicit here is the conversion in X + Y, not the
declarations.
In normal languages, the compiler has no reason to suppose that "X"
is an "int" unless there is an overt declaration to that effect.
Is X declared int or not?
The difference is that by default int and real are unrelated types.
That may be the default in /some/ languages; others know, with no
further action by the programmer, that "int"s may be converted to
"real"s /when required/.
You ignore the question who requires conversion and who defines the
conversion semantics including rounding, truncation, overflows,
operation result like in the case of exponentiation etc. The language
designer?
If a conversion is not one of those specified
by the language, then it is not implicit.
Is integer to single precision IEEE float conversion safe?
It requires a lot of fine tuning to make such things usable. You
cannot do that at the language level, IMO.
Perhaps you should again look at A68; RR 10.5.1d, which
refers back to 10.3.31a which is where all the hard work is done.
It is only a couple of pages, despite all the "lots of prints".
Then you should be able to explain how it is resolved between printing
1. complex sqrt of 2
2. single precision IEEE sqrt of 2
3. double precision IEEE sqrt of 2
4. long double sqrt of 2
5. fixed-point (how many?) sqrt of 2
On 28/11/2022 10:10, Dmitry A. Kazakov wrote:
Same is when the programmer's actions are needed to declare X int and
Y real. Implicit here is the conversion in X + Y, not the
declarations.
In normal languages, the compiler has no reason to suppose that "X"
is an "int" unless there is an overt declaration to that effect.
Is X declared int or not?
Of course it is. Were you thinking of the implicit declarations
in some languages, depending on [eg] the first letter of an identifier?
But that has nothing to do with implicit conversions [eg] from "int" to "real" or from "int variable" to "int [constant]"?
Yes; wasn't that implied by "some languages"? These are design decisions. But it would be surprising if "int -> real" conversion was difficult in these ways. Going the other way may raise problems.
[...]
If a conversion is not one of those specified
by the language, then it is not implicit.
Is integer to single precision IEEE float conversion safe?
You would have to ask the language designer.
It requires a lot of fine tuning to make such things usable. You
cannot do that at the language level, IMO.
Perhaps you should again look at A68; RR 10.5.1d, which
refers back to 10.3.31a which is where all the hard work is done.
It is only a couple of pages, despite all the "lots of prints".
Then you should be able to explain how it is resolved between printing
1. complex sqrt of 2
2. single precision IEEE sqrt of 2
3. double precision IEEE sqrt of 2
4. long double sqrt of 2
5. fixed-point (how many?) sqrt of 2
The RR predates IEEE-754 by more than a decade, so there are no guarantees there about IEEE conformance [but A68G is set up to find out whether your computer supports IEEE, so I would expect it to work]. Lo, here is your task implemented in A68G, straight out of the box:
$ cat Dmitry.a68g
print (("complex: ", csqrt(2), newline,
"complex: [with im part] ", csqrt(2 I 3), newline,
"single: ", sqrt(2), newline,
"double: ", long sqrt(2), newline,
"long double: ", long long sqrt(2), newline,
"fixed: [eg] ", fixed (sqrt(2), 10, 6), newline))
$ a68g Dmitry.a68g
complex: +1.41421356237310e +0+0.00000000000000e +0
complex: [with im part] +1.67414922803554e +0+8.95977476129838e -1
single: +1.41421356237310e +0
double: +1.4142135623730950488016887242096981e +0
long double: +1.41421356237309504880168872420969807856967187537694807317667974e +0
fixed: [eg] +1.414214
$
Same is when the programmer's actions are needed to declare X int and
Y real. Implicit here is the conversion in X + Y, not the
declarations.
In normal languages, the compiler has no reason to suppose that "X"
is an "int" unless there is an overt declaration to that effect.
Is X declared int or not?
Of course it is. Were you thinking of the implicit declarations
in some languages, depending on [eg] the first letter of an identifier?
Now you see that the subtyping relation need not be declared for each
expression, not even for each object declaration, but just once!
But that has nothing to do with implicit conversions [eg] from "int" to
"real" or from "int variable" to "int [constant]"?
Yes; wasn't that implied by "some languages"? These are design
decisions. But it would be surprising if "int -> real" conversion was
difficult in these ways. Going the other way may raise problems.
No, it is quite logical considering the multitude of integer and real
types with different properties.
Is integer to single precision IEEE float conversion safe?
You would have to ask the language designer.
I thought you were going to give advice to the language designer
on the subject. Like "Sure! Piece of cake! Go ahead!"
It requires a lot of fine tuning to make such things usable. You
cannot do that at the language level, IMO.
Perhaps you should again look at A68; RR 10.5.1d, which
refers back to 10.3.31a which is where all the hard work is done.
It is only a couple of pages, despite all the "lots of prints".
Then you should be able to explain how it is resolved between printing
1. complex sqrt of 2
2. single precision IEEE sqrt of 2
3. double precision IEEE sqrt of 2
4. long double sqrt of 2
5. fixed-point (how many?) sqrt of 2
The RR predates IEEE-754 by more than a decade, so there are no
guarantees there about IEEE conformance [but A68G is set up to find out
whether your computer supports IEEE, so I would expect it to work]. Lo,
here is your task implemented in A68G, straight out of the box:
$ cat Dmitry.a68g
print (("complex: ", csqrt(2), newline,
"complex: [with im part] ", csqrt(2 I 3), newline,
"single: ", sqrt(2), newline,
"double: ", long sqrt(2), newline,
"long double: ", long long sqrt(2), newline,
"fixed: [eg] ", fixed (sqrt(2), 10, 6), newline))
$ a68g Dmitry.a68g
complex: +1.41421356237310e +0+0.00000000000000e +0
complex: [with im part] +1.67414922803554e +0+8.95977476129838e -1
single: +1.41421356237310e +0
double: +1.4142135623730950488016887242096981e +0
long double: +1.41421356237309504880168872420969807856967187537694807317667974e +0
fixed: [eg] +1.414214
$
Ah, see? Because of the conversions you cannot have just sqrt. In Ada,
which has no implicit conversions, sqrt is called sqrt for whatever
argument, as it should be!
On 28/11/2022 19:59, Dmitry A. Kazakov wrote:
Not for the first time, I cannot make any sense of what you say. Implicit relationships don't need to be declared /at all/; not once, not every time, not at all, else they aren't implicit. Different languages
have different rules about what is implicit.
Yes; wasn't that implied by "some languages"? These are design
decisions. But it would be surprising if "int -> real" conversion was
difficult in these ways. Going the other way may raise problems.
No, it is quite logical considering the multitude of integer and real
types with different properties.
So are you claiming that conversions of "real" to "int" don't raise problems?
Whether a language has a "multitude of integer
and real types" is also something that varies between languages.
Is integer to single precision IEEE float conversion safe?
You would have to ask the language designer.
I thought you were going to give advice to the language designer
on the subject. Like "Sure! Piece of cake! Go ahead!"
Why would I be giving such advice to the language designer?
It requires a lot of fine tuning to make such things usable. You
cannot do that at the language level, IMO.
Perhaps you should again look at A68; RR 10.5.1d, which
refers back to 10.3.31a which is where all the hard work is done.
It is only a couple of pages, despite all the "lots of prints".
Then you should be able to explain how it is resolved between printing
1. complex sqrt of 2
2. single precision IEEE sqrt of 2
3. double precision IEEE sqrt of 2
4. long double sqrt of 2
5. fixed-point (how many?) sqrt of 2
The RR predates IEEE-754 by more than a decade, so there are no
guarantees there about IEEE conformance [but A68G is set up to find out
whether your computer supports IEEE, so I would expect it to work]. Lo,
here is your task implemented in A68G, straight out of the box:
$ cat Dmitry.a68g
print (("complex: ", csqrt(2), newline,
"complex: [with im part] ", csqrt(2 I 3), newline,
"single: ", sqrt(2), newline,
"double: ", long sqrt(2), newline,
"long double: ", long long sqrt(2), newline,
"fixed: [eg] ", fixed (sqrt(2), 10, 6), newline))
$ a68g Dmitry.a68g
complex: +1.41421356237310e +0+0.00000000000000e +0
complex: [with im part] +1.67414922803554e +0+8.95977476129838e -1
single: +1.41421356237310e +0
double: +1.4142135623730950488016887242096981e +0
long double:
+1.41421356237309504880168872420969807856967187537694807317667974e +0
fixed: [eg] +1.414214
$
Ah, see? Because of the conversions you cannot have just sqrt. In Ada,
which has no implicit conversions, sqrt is called sqrt for whatever
argument, as it should be!
But you may note that (a) "2" is converted to all those different types safely and reliably,
Meanwhile, I note that you have snipped/ducked the challenge in my PP, to show us the equivalent in Ada [or other language of your choice, if you prefer]. Is it six lines? Others can feel free to join in!
Not for the first time, I cannot make any sense of what you say.
Implicit relationships don't need to be declared /at all/; not once, not
every time, not at all, else they aren't implicit. Different languages
have different rules about what is implicit.
You claimed that subtyping relationship is equivalent to explicit
conversions.
Whether a language has a "multitude of integer
and real types" is also something that varies between languages.
No.
It is a proposition:
IF there are multiple numeric types
THEN implicit conversions do not fly
You asked why, I explained.
Is integer to single precision IEEE float conversion safe?
You would have to ask the language designer.
I thought you were going to give advice to the language designer
on the subject. Like "Sure! Piece of cake! Go ahead!"
Why would I be giving such advice to the language designer?
You already did. You basically said: reduce the numeric types to a
bare minimum and then implicit conversions could possibly be defined.
I never argued otherwise. Yes, a language with a primitive type
system can have them. PL/1 had, C had.
Lo, here is your task implemented in A68G, straight out of the box:
$ cat Dmitry.a68g
print (("complex: ", csqrt(2), newline,
"complex: [with im part] ", csqrt(2 I 3), newline,
"single: ", sqrt(2), newline,
"double: ", long sqrt(2), newline,
"long double: ", long long sqrt(2), newline,
"fixed: [eg] ", fixed (sqrt(2), 10, 6), newline))
$ a68g Dmitry.a68g
complex: +1.41421356237310e +0+0.00000000000000e +0
complex: [with im part] +1.67414922803554e +0+8.95977476129838e -1
single: +1.41421356237310e +0
double: +1.4142135623730950488016887242096981e +0
long double: +1.41421356237309504880168872420969807856967187537694807317667974e +0
fixed: [eg] +1.414214
$
But you may note that (a) "2" is converted to all those different
types safely and reliably,
At the price of Hungarian notation? Thanks, but no.
Meanwhile, I note that you have snipped/ducked the challenge in my
PP, to show us the equivalent in Ada [or other language of your choice,
if you prefer]. Is it six lines? Others can feel free to join in!
It was MY example of ambiguous expressions impossible to resolve in the
presence of conversions (or, equivalently, a subtyping relation). You
resorted to mangling names. This is trivial to do in any language, in
Ada, in C etc.
On 30/11/2022 20:53, Dmitry A. Kazakov wrote:
[I wrote:]
Not for the first time, I cannot make any sense of what you say.
Implicit relationships don't need to be declared /at all/; not once, not
every time, not at all, else they aren't implicit. Different languages
have different rules about what is implicit.
You claimed that subtyping relationship is equivalent to explicit
conversions.
I have checked back through my contributions to this thread and
can find nothing even remotely similar to such a claim.
Whether a language has a "multitude of integer
and real types" is also something that varies between languages.
No.
You surely mean "yes". Most early languages had only one integer type and one real type; that is not a "multitude".
It is a proposition:
IF there are multiple numeric types
THEN implicit conversions do not fly
You asked why, I explained.
Another sub-thread where you seem to have gone off at a tangent
from whatever you think I may have been asking. But as a proposition,
that is clearly false [unless you are claiming that /only/ implicit conversions are to be allowed], as demonstrated in this thread with
specific examples.
[Left in this article for convenient reference:]
Lo, here is your task implemented in A68G, straight out of the box:
$ cat Dmitry.a68g
print (("complex: ", csqrt(2), newline,
"complex: [with im part] ", csqrt(2 I 3), newline,
"single: ", sqrt(2), newline,
"double: ", long sqrt(2), newline,
"long double: ", long long sqrt(2), newline,
"fixed: [eg] ", fixed (sqrt(2), 10, 6), newline))
$ a68g Dmitry.a68g
complex: +1.41421356237310e +0+0.00000000000000e +0
complex: [with im part] +1.67414922803554e +0+8.95977476129838e -1
single: +1.41421356237310e +0
double: +1.4142135623730950488016887242096981e +0
long double: +1.41421356237309504880168872420969807856967187537694807317667974e +0
fixed: [eg] +1.414214
$
But you may note that (a) "2" is converted to all those different
types safely and reliably,
At the price of Hungarian notation? Thanks, but no.
You snipped (b) and (c); what you seem to want -- it's always hard to be sure -- is perfectly possible in A68G, but the language as supplied contains only the one "sqrt" function.
Meanwhile, I note that you have snipped/ducked the challenge in my
PP, to show us the equivalent in Ada [or other language of your choice,
if you prefer]. Is it six lines? Others can feel free to join in!
Challenge ducked again.
It was MY example of ambiguous expressions impossible to resolve in the
presence of conversions (or, equivalently, a subtyping relation). You
resorted to mangling names. This is trivial to do in any language, in
Ada, in C etc.
Then you will have no difficulty in meeting your own challenge in Ada or C. Your choice. But I'm guessing it will be harder to code and to explain than the A68G given above.
On 22/11/2022 15:24, Dmitry A. Kazakov wrote:
On 2022-11-22 15:11, David Brown wrote:
[...] It is not really any more controversial than the implicit
conversions found in many languages and user types.
Yes, and there should be none in a well-typed program.
Virtually every computer language since time began has glossed
over the distinction between constants and variables in simple arithmetic expressions [and in contexts such as parameters to functions]. You can
use an implicit dereference or you can make things harder for programmers;
I know which I prefer.
Personally, in the interests of making programming easier, I'm in
favour of more, not fewer, implicit conversions. If these are
well-chosen, you can [and should] avoid almost all explicit conversions.
I see little point in telling people that they can't use "sqrt(2)" but
must write "sqrt(2.0)" [or "sqrt((real) i))"] instead; can't write
"print (x)" but must write "(void) print (x)" [as "print" happens to
return the number of characters printed, which you happen not to care
about on this occasion]; or, for consistency, can't write "j := i" but
must write "j := (deref) i" instead. It's not as though in any of these
constructs there is reason ever to expect the implicit coercions not to
apply, so that some errors are missed. But some people seem to prefer
hair shirts.
On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
[implicit conversions:]
The point of implicit "int -> real" conversion, for
example, is that the /language/ already knows about it, and knows, eg,
that in contexts requiring a "real", an "int" may safely be supplied instead. The types are different, but the conversion is safe.
"safe"? There's an assumption, perhaps from the C world, that
int-to-float conversions are lossless, but it's not true.
Conversions from int to float may be lossless for small values (which
can lead a programmer into a false sense of security) but lossy for
large ones.
The solution? Don't support implicit conversions which are
potentially lossy.
Speaking of errors being missed I find it's easy to miss some
comments in such a monolithic blob of text! Do you find that
paragraph easier to read than something which includes more
whitespace?
Contrary to your viewpoint I'd prefer 2.0 ** 0.5 or to require i to
be converted to float as needed. In fact, if "real" is a generic name
for multiple types, as a programmer I'd want to be able to specify
what I want done, e.g.
real_32(i) ** 0.5
For your latter assignment example I'd prefer
j = i*
where the trailing * indicates deref. That's not too much of a hair
shirt, is it...?
On 04/12/2022 12:47, James Harris wrote:
[I wrote:]
The point of implicit "int -> real" conversion, for
example, is that the /language/ already knows about it, and knows, eg,
that in contexts requiring a "real", an "int" may safely be supplied
instead. The types are different, but the conversion is safe.
"safe"? There's an assumption, perhaps from the C world, that
int-to-float conversions are lossless, but it's not true.
Lossless and safe are different concepts, and the difference was well understood decades before C, eg using computers where both "int" and "real" were 48 bits and "real" types had to include an exponent in that. Sadly, numerical analysis is somewhat of a lost art these days.
Conversions from int to float may be lossless for small values (which
can lead a programmer into a false sense of security) but lossy for
large ones.
I wouldn't want to do anything serious on a computer for which
[eg] "maxint" was too large to be converted to "real". Whether the conversion is lossless is quite another matter.
The solution? Don't support implicit conversions which are
potentially lossy.
How, in your mind, does "f(i)" [with an implicit conversion of
the parameter to type "real"] differ from "f((real) i)" [where the
parameter is explicitly cast]? The result is the same in both cases.
I deduce that the problems, if any, are nothing to do with conversion
being implicit, and everything to do with whatever guarantees and
facilities /your/ language and hardware provide. If you want the
conversion of "int" to "real" and back to "int" to be guaranteed to be
lossless as well as safe, then write that into your language standard.
On 04/12/2022 11:21, James Harris wrote:
Speaking of errors being missed I find it's easy to miss some
comments in such a monolithic blob of text! Do you find that
paragraph easier to read than something which includes more
whitespace?
It was only eleven lines! Easier to read than your paragraph,
which consisted of one line of ~200 characters, which therefore has to
be [re-]wrapped before reading and replying!
Contrary to your viewpoint I'd prefer 2.0 ** 0.5 or to require i to
be converted to float as needed. In fact, if "real" is a generic name
for multiple types, as a programmer I'd want to be able to specify
what I want done, e.g.
real_32(i) ** 0.5
As a programmer, I'd prefer "sqrt" to "** 0.5" every time --
clearer [IMO], and very possibly more efficient [eg, you can use Newton-Raphson effectively for square roots, whereas the more general
case typically needs different treatments for very large or very small exponents].
As a practical programmer, I hate the idea of having to
specify "real_32"; I'm sure there are CS reasons why it's sometimes
useful, but my experience of astrophysicists and other scientists is
that they just expect defaults to work properly.
[...]
For your latter assignment example I'd prefer
j = i*
where the trailing * indicates deref. That's not too much of a hair
shirt, is it...?
Good luck trying to get that past the average programmer!
Perhaps worth noting that the particular choice of "*" as the "deref"
symbol rather overloads it as you want it also for the multiplication
sign, and in [your preference for] the power symbol, and perhaps in
things like "j *= 2" and "/* comment */".
On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
[implicit conversions:]
That does not compute. 2 := j; is illegal merely because 2 (after
overloading resolution to int) is a constant and the first argument
of := must be mutable.
Yes. That shows that "2" and "j" have different types.
On 2022-11-25 12:28, Andy Walker wrote:
On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
[implicit conversions:]
The whole point of implicit conversions is that they take place
with no overt action needed by the programmer.
This is what I meant. You declare that A is a subtype of B (inheriting
in- or out- or all operations) once. Then you enjoy the effect
distributed everywhere that declaration has effect.
If you have to declare that "int" is a subtype of "real", then it's
not "implicit"; overt action by the programmer is needed. Further, ...
Same is when the programmer's actions are needed to declare X int and Y real. Implicit here is the conversion in X + Y, not the declarations.
The difference is that by default int and real are unrelated types. The programmer could explicitly put them in some common class, e.g. a class
of additive types having + operation. That would make + an operation
from the class (int, real). The implementation of now valid cross
operation:
+ : int : real -> real
could be created by composing real + with int-to-real conversion
(provided by the programmer). The full dispatching table could be
+ : int : int -> int
+ : int : real -> real
+ : real : int -> real
This is the mechanics the language can provide in order to move the
nasty stuff out of its core.
BTW, people like James would surely ask for
+ : real : real -> int
for the cases when the result is a whole integer. But we won't let them! (:-))
The whole point of HLLs is to make programming easier, not to
enforce some strict purity regime.
Sure. The disagreement is about achieving that ease.
Yes. I would suggest that the evidence of code snippets posted
here in relation to Ada and C shows that neither is even close. Nor is,
for example, Pascal. Some dialects of Basic are much better, at least
within a limited class of problems. In the interests of /not/ starting
"holier than thou" language wars, I shall say no more.
My take on this is that it is not possible to do on the language level.
I think that the language must be much simpler than even Ada, which is
four or so times smaller than modern C++. Yet it must be powerful enough
to express the ideas like implicit conversions at the library level.
On 25/11/2022 11:28, Andy Walker wrote:
On 23/11/2022 19:52, Dmitry A. Kazakov wrote:...
[implicit conversions:]
That does not compute. 2 := j; is illegal merely because 2 (after
overloading resolution to int) is a constant and the first argument
of := must be mutable.
Yes. That shows that "2" and "j" have different types.
Does it? They may have different protections (one is intrinsically
read-only but being wrongly used in a context in which something
writeable is required) but when did protections become part of the type?
On 04/12/2022 16:00, Andy Walker wrote:
On 04/12/2022 11:21, James Harris wrote:
Speaking of errors being missed I find it's easy to miss some
comments in such a monolithic blob of text! Do you find that
paragraph easier to read than something which includes more
whitespace?
It was only eleven lines! Easier to read than your paragraph,
which consisted of one line of ~200 characters, which therefore has to
be [re-]wrapped before reading and replying!
Do you have to rewrap my posts before replying? The two newsreaders I
use both compose paragraphs without line breaks and that is
significantly more logical but it must be a pain if your newsreader
doesn't do the same.
As for readability, paragraphs are fine if they are purely prose. Your
post mixed prose and code examples in a single paragraph 'blob' which
still makes my eyes water when I try to read it!
Contrary to your viewpoint I'd prefer 2.0 ** 0.5 or to require i to
be converted to float as needed. In fact, if "real" is a generic name
for multiple types, as a programmer I'd want to be able to specify
what I want done, e.g.
real_32(i) ** 0.5
As a programmer, I'd prefer "sqrt" to "** 0.5" every time --
clearer [IMO], and very possibly more efficient [eg, you can use
Newton-Raphson effectively for square roots, whereas the more general
case typically needs different treatments for very large or very small
exponents].
I was thinking that x ** 0.5 (with 0.5 being a literal) could be
/implemented/ as sqrt(x) but it sounds as though there are traps
for large and small exponents. I hate the numerical analysis
stuff because I don't know enough of it and I never use floating point
so I never have to deal with it.
What I don't like about having a sqrt function is that while it is the
most common root it's not the only one. It's hard to justify that a
language should include a specific function for square root but not cube root and not fourth root, etc.
On 25/11/2022 15:46, Dmitry A. Kazakov wrote:
On 2022-11-25 12:28, Andy Walker wrote:
On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
[implicit conversions:]
...
The whole point of implicit conversions is that they take place
with no overt action needed by the programmer.
This is what I meant. You declare that A is a subtype of B (inheriting
in- or out- or all operations) once. Then you enjoy the effect
distributed everywhere that declaration has effect.
If you have to declare that "int" is a subtype of "real", then it's
not "implicit"; overt action by the programmer is needed. Further, ...
Same is when the programmer's actions are needed to declare X int and
Y real. Implicit here is the conversion in X + Y, not the declarations.
The difference is that by default int and real are unrelated types.
The programmer could explicitly put them in some common class, e.g. a
class of additive types having + operation. That would make + an
operation from the class (int, real). The implementation of now valid
cross operation:
+ : int : real -> real
could be created by composing real + with int-to-real conversion
(provided by the programmer). The full dispatching table could be
+ : int : int -> int
+ : int : real -> real
+ : real : int -> real
This is the mechanics the language can provide in order to move the
nasty stuff out of its core.
BTW, people like James would surely ask for
+ : real : real -> int
for the cases when the result is a whole integer. But we won't let
them! (:-))
On the contrary, assuming I understand your notation I would have only
+ : int : int -> int
+ : uint : uint -> uint
+ : float : float -> float
IOW different types would need one operand to be converted.
The whole point of HLLs is to make programming easier, not to
enforce some strict purity regime.
Sure. The disagreement is about achieving that ease.
Yes. I would suggest that the evidence of code snippets posted
here in relation to Ada and C shows that neither is even close. Nor is,
for example, Pascal. Some dialects of Basic are much better, at least
within a limited class of problems. In the interests of /not/ starting
"holier than thou" language wars, I shall say no more.
My take on this is that it is not possible to do on the language
level. I think that the language must be much simpler than even Ada,
which is four or so times smaller than modern C++. Yet it must be
powerful enough to express the ideas like implicit conversions at the
library level.
Which implicit conversions would you want and why could they not be explicit?
Didn't Fortran have types like REAL*4 and REAL*8, as well as REAL and
DOUBLE PRECISION?
On 2022-12-04 18:26, James Harris wrote:
On 25/11/2022 15:46, Dmitry A. Kazakov wrote:
On 2022-11-25 12:28, Andy Walker wrote:
On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
[implicit conversions:]
...
The whole point of implicit conversions is that they take place
with no overt action needed by the programmer.
This is what I meant. You declare that A is a subtype of B (inheriting
in- or out- or all operations) once. Then you enjoy the effect
distributed everywhere that declaration has effect.
If you have to declare that "int" is a subtype of "real", then it's
not "implicit"; overt action by the programmer is needed. Further, ...
Same is when the programmer's actions are needed to declare X int and
Y real. Implicit here is the conversion in X + Y, not the declarations.
The difference is that by default int and real are unrelated types.
The programmer could explicitly put them in some common class, e.g. a
class of additive types having + operation. That would make + an
operation from the class (int, real). The implementation of now valid
cross operation:
+ : int : real -> real
could be created by composing real + with int-to-real conversion
(provided by the programmer). The full dispatching table could be
+ : int : int -> int
+ : int : real -> real
+ : real : int -> real
This is the mechanics the language can provide in order to move the
nasty stuff out of its core.
BTW, people like James would surely ask for
+ : real : real -> int
for the cases when the result is a whole integer. But we won't let
them! (:-))
On the contrary, assuming I understand your notation I would have only
+ : int : int -> int
+ : uint : uint -> uint
+ : float : float -> float
IOW different types would need one operand to be converted.
The example was about introducing *user-defined* ad-hoc subtyping.
The whole point of HLLs is to make programming easier, not to
enforce some strict purity regime.
Sure. The disagreement is about achieving that ease.
Yes. I would suggest that the evidence of code snippets posted
here in relation to Ada and C shows that neither is even close. Nor is,
for example, Pascal. Some dialects of Basic are much better, at least
within a limited class of problems. In the interests of /not/ starting
"holier than thou" language wars, I shall say no more.
My take on this is that it is not possible to do on the language
level. I think that the language must be much simpler than even Ada,
which is four or so times smaller than modern C++. Yet it must be
powerful enough to express the ideas like implicit conversions at the
library level.
Which implicit conversions would you want and why could they not be
explicit?
A language without certain conversions would be intolerable. Start with access mode subtypes. Clearly 'in out T' is convertible to 'in T'. So
are derived types inheriting operations etc.
On 04/12/2022 16:59, James Harris wrote:
I was thinking that x ** 0.5 (with 0.5 being a literal) could be
/implemented/ as sqrt(x) but it sounds as though there are traps
awaiting for large and small exponents. I hate the numerical analysis
stuff because I don't know enough of it and I never use floating point
so I never have to deal with it.
What I don't like about having a sqrt function is that while it is the
most common root it's not the only one. It's hard to justify that a
language should include a specific function for square root but not
cube root and not fourth root, etc.
My Casio calculator has square root on its own button. Cube root is on a shifted button, and an Nth root is on another shifted button.
If it's given special treatment on a calculator, then why not in a
language?
The same calculator has a dedicated button for 'squared', and another
for x^n. My languages also have dedicated operators for square root, square,
and exponentiation.
Also, sqrt is often a built-in processor instruction (as are min and max
on x64), so that is another reason for a language to treat them specially.
On 2022-11-28 00:28, Andy Walker wrote:
The compiler doesn't know that,
and can't be expected to. If a conversion is not one of those specified
by the language, then it is not implicit.
Is integer to single precision IEEE float conversion safe?
On 28/11/2022 10:10, Dmitry A. Kazakov wrote:
On 2022-11-28 00:28, Andy Walker wrote:
...
The compiler doesn't know that,
and can't be expected to. If a conversion is not one of those specified >>> by the language, then it is not implicit.
Is integer to single precision IEEE float conversion safe?
By my definition of 'safe', no. Numbers above 16.7 million (2^24) or thereabouts will lose precision. I presume that was the point you were making.
Understandably, I often see programmers overlook the fact that converting a
32-bit int to a 32-bit float can lose information.
On 04/12/2022 19:40, Dmitry A. Kazakov wrote:
On 2022-12-04 18:26, James Harris wrote:
On 25/11/2022 15:46, Dmitry A. Kazakov wrote:
On 2022-11-25 12:28, Andy Walker wrote:
On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
[implicit conversions:]
...
The whole point of implicit conversions is that they take place
with no overt action needed by the programmer.
This is what I meant. You declare that A is a subtype of B (inheriting
in-, out-, or all operations) once. Then you enjoy the effect
distributed everywhere that declaration has effect.
If you have to declare that "int" is a subtype of "real", then it's
not "implicit"; overt action by the programmer is needed.
Further, ...
The same is true when the programmer's actions are needed to declare X int
and Y real. What is implicit here is the conversion in X + Y, not the
declarations.
The difference is that by default int and real are unrelated types.
The programmer could explicitly put them in some common class, e.g.
a class of additive types having + operation. That would make + an
operation from the class (int, real). The implementation of now
valid cross operation:
+ : int : real -> real
could be created by composing real + with int-to-real conversion
(provided by the programmer). The full dispatching table could be
+ : int : int -> int
+ : int : real -> real
+ : real : int -> real
This is the mechanics the language can provide in order to move the
nasty stuff out of its core.
BTW, people like James would surely ask for
+ : real : real -> int
for the cases when the result is a whole integer. But we won't let
them! (:-))
On the contrary, assuming I understand your notation I would have only
+ : int : int -> int
+ : uint : uint -> uint
+ : float : float -> float
IOW different types would need one operand to be converted.
The example was about introducing *user-defined* ad-hoc subtyping.
I never know what people mean by subtyping. To some it seems to be a
smaller range of another type as in 'tiny' below.
small is range 0..999
tiny is small range 0..99
To others an OO 'subtype' is a class which inherits from a superclass.
In either case the subtype inherits operations from its parent.
Putting aside access modes (which I see as orthogonal to types) what
other implicit conversions would you see the absence of as intolerable?
On 2022-12-04 21:02, James Harris wrote:
On 04/12/2022 19:40, Dmitry A. Kazakov wrote:
On 2022-12-04 18:26, James Harris wrote:
On 25/11/2022 15:46, Dmitry A. Kazakov wrote:
On 2022-11-25 12:28, Andy Walker wrote:
On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
[implicit conversions:]
...
The whole point of implicit conversions is that they take place
with no overt action needed by the programmer.
This is what I meant. You declare that A is a subtype of B (inheriting
in-, out-, or all operations) once. Then you enjoy the effect
distributed everywhere that declaration has effect.
If you have to declare that "int" is a subtype of "real", then it's
not "implicit"; overt action by the programmer is needed.
Further, ...
The same is true when the programmer's actions are needed to declare X int
and Y real. What is implicit here is the conversion in X + Y, not the
declarations.
The difference is that by default int and real are unrelated types.
The programmer could explicitly put them in some common class, e.g.
a class of additive types having + operation. That would make + an
operation from the class (int, real). The implementation of now
valid cross operation:
+ : int : real -> real
could be created by composing real + with int-to-real conversion
(provided by the programmer). The full dispatching table could be
+ : int : int -> int
+ : int : real -> real
+ : real : int -> real
This is the mechanics the language can provide in order to move the
nasty stuff out of its core.
BTW, people like James would surely ask for
+ : real : real -> int
for the cases when the result is a whole integer. But we won't let
them! (:-))
On the contrary, assuming I understand your notation I would have only
+ : int : int -> int
+ : uint : uint -> uint
+ : float : float -> float
IOW different types would need one operand to be converted.
The example was about introducing *user-defined* ad-hoc subtyping.
I never know what people mean by subtyping. To some it seems to be a
smaller range of another type as in 'tiny' below.
small is range 0..999
tiny is small range 0..99
These are new types. The correct syntax is
subtype Small is Integer range 0..999;
Now Small is a subtype of Integer.
To others an OO 'subtype' is a class which inherits from a superclass.
Both are the same. E.g. Small inherits + from Integer:
X : Small;
Y : Integer;
begin
Y := Y + X; -- See any conversion here? That is what subtype does.
In either case the subtype inherits operations from its parent.
Yes.
Subtype (in the Liskov definition) means that you can substitute an X of type S, a subtype of T, in an operation Foo of T.
Since you can do that, you do not need any explicit conversions.
Elementary.
Putting aside access modes (which I see as orthogonal to types) what
other implicit conversions would you see the absence of as intolerable?
Nope, not putting them aside. Again, definition:
type = values + operations
Does 'in T' have the same operations as 'in out T'? No; ergo, this is another type.
The other implicit conversion you mentioned yourself: all inherited methods
in OO. If S inherits Foo from T you need not explicitly convert to T
when calling Foo on an S.
Yet another example is transparent pointers. E.g.
type Ptr is access String; -- Pointer to String
begin
Ptr (1..2) := "ab";
That is OK, Ptr is a subtype of String in array indexing operations. Implicit type conversion here is pointer dereferencing.
On 04/12/2022 17:54, Bart wrote:
On 04/12/2022 16:59, James Harris wrote:
...
I was thinking that x ** 0.5 (with 0.5 being a literal) could be
/implemented/ as sqrt(x) but it sounds as though there are traps
awaiting for large and small exponents. I hate the numerical analysis
stuff because I don't know enough of it and I never use floating
point so I never have to deal with it.
What I don't like about having a sqrt function is that while it is
the most common root it's not the only one. It's hard to justify that
a language should include a specific function for square root but not
cube root and not fourth root, etc.
My Casio calculator has square root on its own button. Cube root is on
a shifted button, and an Nth root is on another shifted button.
If it's given special treatment on a calculator, then why not in a
language?
Because languages are not designed by Casio...?
Calculators may also have keys for percent, factorial, and 000. Does
that mean a language should do the same?
The same calculator has a dedicated button for 'squared', and another
for x^n. My languages also dedicated operators for square root,
square, and exponentiation.
You have
square(x)
as well as
x ** 2
?
Also, sqrt is often a built-in processor instruction (as are min and
max on x64), so another reason a language should treat them specially.
As I say, x ** 0.5 could be implemented by a sqrt instruction (numerical analyses permitting).
I could see it either way. I am not really averse to having a few sqrt functions (one for each float type) but I am interested to hear that
people think they /should/ be included.
On 04/12/2022 21:40, Dmitry A. Kazakov wrote:
On 2022-12-04 21:02, James Harris wrote:
On 04/12/2022 19:40, Dmitry A. Kazakov wrote:
On 2022-12-04 18:26, James Harris wrote:
On 25/11/2022 15:46, Dmitry A. Kazakov wrote:
On 2022-11-25 12:28, Andy Walker wrote:
On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
[implicit conversions:]
...
The whole point of implicit conversions is that they take place
with no overt action needed by the programmer.
This is what I meant. You declare that A is a subtype of B (inheriting
in-, out-, or all operations) once. Then you enjoy the effect
distributed everywhere that declaration has effect.
If you have to declare that "int" is a subtype of "real", then it's
not "implicit"; overt action by the programmer is needed.
Further, ...
The same is true when the programmer's actions are needed to declare X int
and Y real. What is implicit here is the conversion in X + Y, not the
declarations.
The difference is that by default int and real are unrelated
types. The programmer could explicitly put them in some common
class, e.g. a class of additive types having + operation. That
would make + an operation from the class (int, real). The
implementation of now valid cross operation:
+ : int : real -> real
could be created by composing real + with int-to-real conversion
(provided by the programmer). The full dispatching table could be
+ : int : int -> int
+ : int : real -> real
+ : real : int -> real
This is the mechanics the language can provide in order to move
the nasty stuff out of its core.
BTW, people like James would surely ask for
+ : real : real -> int
for the cases when the result is a whole integer. But we won't let
them! (:-))
On the contrary, assuming I understand your notation I would have only
+ : int : int -> int
+ : uint : uint -> uint
+ : float : float -> float
IOW different types would need one operand to be converted.
The example was about introducing *user-defined* ad-hoc subtyping.
I never know what people mean by subtyping. To some it seems to be a
smaller range of another type as in 'tiny' below.
small is range 0..999
tiny is small range 0..99
These are new types. The correct syntax is
subtype Small is Integer range 0..999;
Now Small is subtype of Integer.
OK. What is an operation that goes out of range, as in
u : integer;
v : small := 999;
u := v + 1;
v := v + 1;
defined to leave in u and v?
And does
subtype tiny is small range 0..9;
declare a further subtype compatible with integer and small?
To others an OO 'subtype' is a class which inherits from a superclass.
Both are same. E.g. Small inherits + from Integer:
X : Small;
Y : Integer;
begin
Y := Y + X; -- See any conversion here? That is what subtype does.
Yes, although a subtype isn't required for that. As you know, C effects various promotions (some unwisely but they are effected nonetheless) and
I define the narrower operand to be widened to match the other.
Type /compatibility/ is an open issue for me. I gather that Ada allows multiple subtypes of integer all to be compatible with each other even
if one is not derived from the other; that's different from inherited compatibility as neither is a superclass of the other.
Putting aside access modes (which I see as orthogonal to types) what
other implicit conversions would you see the absence of as intolerable?
Nope, not putting them aside. Again, definition:
type = values + operations
Does 'in T' has same operations of 'in out T'? No, ergo, this is
another type.
I won't go there. You and I have debated the meaning of 'type' before
and we are not in total agreement.
Other implicit conversion you mentioned yourself. All inherited
methods in OO. If S inherits Foo from T you need not to explicitly
convert to T when calling Foo on an S.
Yet another example is transparent pointers. E.g.
type Ptr is access String; -- Pointer to String
begin
Ptr (1..2) := "ab";
That is OK, Ptr is a subtype of String in array indexing operations.
Implicit type conversion here is pointer dereferencing.
FWIW I would explicitly dereference such a pointer with
Ptr*
I have thought about declaring a reference which is automatically dereferenced a certain number of times so that it can be treated as an object, but I haven't bottomed out the concomitant issues yet, such as: when the programmer wants to mention the reference without all that automatic dereferencing, how does he specify that?
What do you think of as unsafe and do you have an example of lossy
but 'safe'?
How, in your mind, does "f(i)" [with an implicit conversion of
the parameter to type "real"] differ from "f((real) i)" [where the
parameter is explicitly cast]? The result is the same in both cases.
I've done very little on floats so far but I can say I don't intend
to support implicit conversions to float. Nor would I have just one
float type. The conversion you mention would have to be more explicit,
such as either of
f(<float 32>(i))
f(<float 64>(i))
On 25/11/2022 11:28, Andy Walker wrote:
On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
That does not compute. 2 := j; is illegal merely because 2 (after
overloading resolution to int) is a constant and the first argument
of := is mutable.
Yes. That shows that "2" and "j" have different types.
Does it? They may have different protections (one is intrinsically
read-only but being wrongly used in a context in which something
writeable is required) but when did protections become part of the
type?
I am not saying that protection cannot be part of the type but I
don't see the necessity to conflate the two concepts.
On 04/12/2022 17:18, James Harris wrote:
On 25/11/2022 11:28, Andy Walker wrote:
On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
That does not compute. 2 := j; is illegal merely because 2 (after
overloading resolution to int) is a constant and the first argument
of := is mutable.
Yes. That shows that "2" and "j" have different types.
Does it? They may have different protections (one is intrinsically
read-only but being wrongly used in a context in which something
writeable is required) but when did protections become part of the
type?
Not to do with "protection". Unless your [or your language's] model of the computer is seriously weird, "j" is some allocated storage
in the computer which contains an integer [which in the present case is
2]. Storage is not the same sort of object as the thing it contains. Read-only storage is still not the same as its contents. Covering up
that distinction, as with C's talk of "lvalues" and "rvalues" is not,
IMO, helpful. Two objects of the same type ought to be syntactically
and semantically interchangeable [modulo some quibbles not relevant
here].
Do i and j in this bit of A68 have the same type:
INT i=2;
INT j:=3;
On 04/12/2022 20:14, James Harris wrote:
On 04/12/2022 17:54, Bart wrote:
The same calculator has a dedicated button for 'squared', and another
for x^n. My languages also dedicated operators for square root,
square, and exponentiation.
You have
square(x)
as well as
x ** 2
?
Yes. I've always had it, while ** only came along later. It's also an
easy optimisation for a primitive compiler.
Besides, some implementations of ** (or pow() in C) are designed for and will use floating point.
Even with integer **, you have to cross your fingers a little and hope
that a compiler (even yours) will optimise X**2 into a simple multiply.
With sqr(X) you can be 100% confident.
On 2022-12-04 23:21, James Harris wrote:
On 04/12/2022 21:40, Dmitry A. Kazakov wrote:
On 2022-12-04 21:02, James Harris wrote:
I never know what people mean by subtyping. To some it seems to be a
smaller range of another type as in 'tiny' below.
small is range 0..999
tiny is small range 0..99
These are new types. The correct syntax is
subtype Small is Integer range 0..999;
Now Small is subtype of Integer.
OK. What is an operation which is out of range as in
u : integer;
v : small := 999;
u := v + 1;
v := v + 1;
defined to leave in u and v?
"+" is inherited from Integer. Thus
v + 1 = Integer'(v) + Integer'(1) = Integer'(1000)
And does
subtype tiny is small range 0..9;
declare a further subtype compatible with integer and small?
Yes.
To others an OO 'subtype' is a class which inherits from a superclass.
Both are the same. E.g. Small inherits + from Integer:
X : Small;
Y : Integer;
begin
Y := Y + X; -- See any conversion here? That is what subtype does.
Yes, although a subtype isn't required for that. As you know, C
effects various promotions (some unwisely but they are effected
nonetheless) and I define the narrower operand to be widened to match
the other.
C promotions are subtypes.
Type /compatibility/ is an open issue for me. I gather that Ada allows
multiple subtypes of integer all to be compatible with each other even
if one is not derived from the other; that's different from inherited
compatibility as neither is a superclass of the other.
Subtyping is a transitive relation. A<:B<:C. Ada's subtype introduces
both Small<:Integer and Integer<:Small. Because Small exports its
operations to Integer. E.g.
procedure Foo (X : Small);
Now
Y : Integer;
Foo (Y); -- This is OK, Small is a supertype of Integer
This is why all Ada subtypes form an equivalence class.
If I designed a new language I would have ad-hoc sub- and super-type separated and not require same representation. More like C++ conversion operators.
Putting aside access modes (which I see as orthogonal to types) what
other implicit conversions would you see the absence of as intolerable?
Nope, not putting them aside. Again, definition:
type = values + operations
Does 'in T' have the same operations as 'in out T'? No; ergo, this is
another type.
I won't go there. You and I have debated the meaning of 'type' before
and we are not in total agreement.
When you invent something better than the standard definition values + operations, let me know... (:-))
Other implicit conversion you mentioned yourself. All inherited
methods in OO. If S inherits Foo from T you need not to explicitly
convert to T when calling Foo on an S.
Yet another example is transparent pointers. E.g.
type Ptr is access String; -- Pointer to String
begin
Ptr (1..2) := "ab";
That is OK, Ptr is a subtype of String in array indexing operations.
Implicit type conversion here is pointer dereferencing.
FWIW I would explicitly dereference such a pointer with
Ptr*
I have thought about declaring a reference which is automatically
dereferenced a certain number of times so that it can be treated as an
object but haven't bottomed out the concomitant issues yet such as
when the programmer wants to mention the reference without all that
automatic dereferencing how does he specify to?
1. Such cases do not exist. The only one is deep vs. shallow copy in assignment. In Ada assignment is not inherited. So P1 := P2 is shallow. P1.all := P2.all is deep.
2. You can always qualify the type and/or operation. E.g. in Ada it is denoted as T'(E). T is the type, E is expression/object.
Anyway, it is not specific to pointers. Automatic dereferencing is
subtyping. So if you do not want to build it into the language, you do
not need to, provided you allow the programmer to declare it a subtype.
On 04/12/2022 16:37, James Harris wrote:
What do you think of as unsafe and do you have an example of lossy
but 'safe'?
"Unsafe", to me, means that the behaviour is or could be undefined or could cause an [unexpected/untrapped] exception; there is lots of that around, esp if you forget the elementary checks [such as pointers being non-null, arithmetic not overflowing, indexes being within bounds, ...]. Converting "int -> real" is safe [esp if, as with Algol, the Standard
defines it so -- RR 2.1.3.1e] but may be lossy [as discussed earlier].
How, in your mind, does "f(i)" [with an implicit conversion of
the parameter to type "real"] differ from "f((real) i)" [where the
parameter is explicitly cast]? The result is the same in both cases.
I've done very little on floats so far but I can say I don't intend
to support implicit conversions to float. Nor would I have just one
float type. The conversion you mention would have to be more explicit,
such as either of
f(<float 32>(i))
f(<float 64>(i))
If the implicit conversion is possible, then one of your two explicit conversions is wrong. I don't intend to write an essay here,
but this is an area where Algol works very hard to make function calls
such as "f(i)" work smoothly while also permitting operands the usual freedoms so that you can add all numeric types with the operator "+".
On 04/12/2022 17:18, James Harris wrote:
On 25/11/2022 11:28, Andy Walker wrote:
On 23/11/2022 19:52, Dmitry A. Kazakov wrote:
That does not compute. 2 := j; is illegal merely because 2 (after
overloading resolution to int) is a constant and the first argument
of := is mutable.
Yes. That shows that "2" and "j" have different types.
Does it? They may have different protections (one is intrinsically
read-only but being wrongly used in a context in which something
writeable is required) but when did protections become part of the
type?
Not to do with "protection". Unless your [or your language's] model of the computer is seriously weird, "j" is some allocated storage
in the computer which contains an integer [which in the present case is
2]. Storage is not the same sort of object as the thing it contains. Read-only storage is still not the same as its contents. Covering up
that distinction, as with C's talk of "lvalues" and "rvalues" is not,
IMO, helpful. Two objects of the same type ought to be syntactically
and semantically interchangeable [modulo some quibbles not relevant
here].
On 04/12/2022 23:29, Bart wrote:
On 04/12/2022 20:14, James Harris wrote:
On 04/12/2022 17:54, Bart wrote:
...
The same calculator has a dedicated button for 'squared', and
another for x^n. My languages also have dedicated operators for square
root, square, and exponentiation.
You have
square(x)
as well as
x ** 2
?
Yes. I've always had it, while ** only came along later. It's also an
easy optimisation for a primitive compiler.
Besides, some implementations of ** (or pow() in C) are designed for
and will use floating point.
Even with integer **, you have to cross your fingers a little and hope
that a compiler (even yours) will optimise X**2 into a simple
multiply. With sqr(X) you can be 100% confident.
Why could a compiler not be guaranteed to convert x ** 2 into x * x?
Isn't it a traditional strength reduction which would evaluate to
exactly the same answer?
Laziness? I doubt my compilers bother with that particular reduction,
On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
On 2022-12-04 23:21, James Harris wrote:
On 04/12/2022 21:40, Dmitry A. Kazakov wrote:
On 2022-12-04 21:02, James Harris wrote:
...
I never know what people mean by subtyping. To some it seems to be
a smaller range of another type as in 'tiny' below.
small is range 0..999
tiny is small range 0..99
These are new types. The correct syntax is
subtype Small is Integer range 0..999;
Now Small is subtype of Integer.
OK. What is an operation which is out of range as in
u : integer;
v : small := 999;
u := v + 1;
v := v + 1;
defined to leave in u and v?
"+" is inherited from Integer. Thus
v + 1 = Integer'(v) + Integer'(1) = Integer'(1000)
That seems odd when v's limit is 999.
C promotions are subtypes.
Even C's promotions of int to float?
Type /compatibility/ is an open issue for me. I gather that Ada
allows multiple subtypes of integer all to be compatible with each
other even if one is not derived from the other; that's different
from inherited compatibility as neither is a superclass of the other.
Subtyping is a transitive relation. A<:B<:C. Ada's subtype introduces
both Small<:Integer and Integer<:Small. Because Small exports its
operations to Integer. E.g.
procedure Foo (X : Small);
Now
Y : Integer;
Foo (Y); -- This is OK, Small is a supertype of Integer
So Small is both a subtype and a supertype of Integer? That seems a bit
mad.
If I designed a new language I would have ad-hoc sub- and super-type
separated and not require same representation. More like C++
conversion operators.
Not being familiar with C++ I'm not sure what that last paragraph means.
ATM I feel there is a 'landing zone' for type compatibility but I cannot
yet make it out.
Putting aside access modes (which I see as orthogonal to types)
what other implicit conversions would you see the absence of as
intolerable?
Nope, not putting them aside. Again, definition:
type = values + operations
Does 'in T' has same operations of 'in out T'? No, ergo, this is
another type.
I won't go there. You and I have debated the meaning of 'type' before
and we are not in total agreement.
When you invent something better than the standard definition values +
operations, let me know... (:-))
Didn't you used to say values only?
At least you've now added operations
so you are getting there. ;-)
Other implicit conversion you mentioned yourself. All inherited
methods in OO. If S inherits Foo from T you need not to explicitly
convert to T when calling Foo on an S.
Yet another example is transparent pointers. E.g.
type Ptr is access String; -- Pointer to String
begin
Ptr (1..2) := "ab";
That is OK, Ptr is a subtype of String in array indexing operations.
Implicit type conversion here is pointer dereferencing.
FWIW I would explicitly dereference such a pointer with
Ptr*
I have thought about declaring a reference which is automatically
dereferenced a certain number of times so that it can be treated as
an object but haven't bottomed out the concomitant issues yet such as
when the programmer wants to mention the reference without all that
automatic dereferencing how does he specify to?
1. Such cases do not exist. The only one is deep vs. shallow copy in
assignment. In Ada assignment is not inherited. So P1 := P2 is
shallow. P1.all := P2.all is deep.
Shallow is fine. Deep is troublesome. Data structures have nodes at different depths.
2. You can always qualify the type and/or operation. E.g. in Ada it is
denoted as T'(E). T is the type, E is expression/object.
Then one gets into the Algol68 approach of "resolve until you get a type match". If a programmer has three levels of declared-automatic reference before getting to the target, i.e.
p -> 1 -> 2 -> target
then (to repeat, for declared-automatic dereference) a use of p would normally access the target. But the programmer might want to access p or
1 or 2 in different circumstances.
Anyway, it is not specific to pointers. Automatic dereferencing is
subtyping. So if you do not want to build in it in the language, you
do not need to if you allowed the programmer to declare it a subtype.
Everything's subtyping these days! :-o
[...] Two objects of the same type ought to be syntactically
and semantically interchangeable [modulo some quibbles not relevant
here].
Both j and 2 can be modelled as 'storage' and assigned a location. In
fact, that gives a consistent picture of operands. While some
literals (esp small integers) can be placed in the program code, in
the general case (let's call them large literals) they won't fit and
would need to be placed in storage. So why not initially place them
all there?
Even if the compiler initially assigns locations for all literals the optimiser can ensure that small integers are moved out of storage and
into the program text so nothing is lost. But the models for j and 2
as seen in the program text can be the same.
You claimed that subtyping relationship is equivalent to explicit
conversions.
I have checked back through my contributions to this thread and
can find nothing even remotely similar to such a claim.
Sorry, but I then don't understand your point. Conversions are
implicit in both cases = arguments in expression appear as is. What's
the objection again?
Whether a language has a "multitude of integer
and real types" is also something that varies between languages.
No.
You surely mean "yes". Most early languages had only one integer
type and one real type; that is not a "multitude".
This subthread was started by James about designing a *new* language.
If you said that James should have looked no further than Excel,
which had no integer type, then I would not care to respond. Your
answer suggested that implicit conversion could somehow exist in a
moderately *modern* and reasonably typed language.
I have no idea what you mean.
It is a proposition:
IF there are multiple numeric types
THEN implicit conversions do not fly
You asked why, I explained.
Another sub-thread where you seem to have gone off at a tangent
from whatever you think I may have been asking. But as a proposition,
that is clearly false [unless you are claiming that /only/ implicit
conversions are to be allowed], as demonstrated in this thread with
specific examples.
You snipped (b) and (c); what you seem to want -- it's always hard
to be sure -- is perfectly possible in A68G, but the language as supplied
contains only the one "sqrt" function.
Why then does each line have sqrt spelt differently?
To qualify, it shall be this:
print (("complex: ", sqrt(2), newline,
"complex: [with im part] ", sqrt(2), newline,
"single: ", sqrt(2), newline,
"double: ", sqrt(2), newline,
"long double: ", sqrt(2), newline,
"fixed: [eg] ", sqrt(2), newline))
That is *not* possible in any language.
My example is impossible to implement due to ambiguities. The wrong
answer you gave is trivial to have in any language. E.g. in Ada:
function sqrt (X : Integer) return Float is
begin
   return Ada.Numerics.Elementary_Functions.sqrt (Float (X));
end sqrt;
Put_Line (sqrt(2)'Image);
Then without learning Hungarian I could write each sqrt as sqrt:
Put_Line (Float'(sqrt(2))'Image);
Put_Line (Long_Float'(sqrt(2))'Image);
https://www.ibm.com/docs/en/zos/2.1.0?topic=conversions-conversion-functions--
On 02/12/2022 23:07, Dmitry A. Kazakov wrote:
You claimed that subtyping relationship is equivalent to explicit conversions.
I have checked back through my contributions to this thread and can find nothing even remotely similar to such a claim.
Sorry, but I then don't understand your point. Conversions are implicit in both cases = arguments in expression appear as is. What's the objection again?
I object to you saying that I claimed X when there is nothing even remotely resembling X in the thread. Nor can I parse your second sentence above into anything sensible; what are the "both cases"?
Whether a language has a "multitude of integer and real types" is also something that varies between languages. Most early languages had only one integer type and one real type; that is not a "multitude".
You surely mean "yes".
No. [IOW, you can surely not be denying that some languages have many and some have rather few integer/real types?]
This subthread was started by James about designing a *new* language. If you said that James should have looked no further than Excel, which had no integer type, then I would not care to respond. Your answer suggested that implicit conversion could somehow exist in a moderately *modern* and reasonably typed language.
Of course it can. You yourself [at the bottom of your article] point us at the implicit conversions in the 2023 C++ standard; and any language loosely related to C has them in its decays and promotions.
I have no idea what you mean. It is a proposition:
IF there are multiple numeric types
THEN implicit conversions do not fly
You asked why, I explained.
Another sub-thread where you seem to have gone off at a tangent from whatever you think I may have been asking. But as a proposition, that is clearly false [unless you are claiming that /only/ implicit conversions are to be allowed], as demonstrated in this thread with specific examples.
You produced a proposition. C and C++, as well as more modern languages related to them, show the proposition to be false.
To qualify it shall be this:
print (("complex: ", sqrt(2), newline,
"complex: [with im part] ", sqrt(2), newline,
"single: ", sqrt(2), newline,
"double: ", sqrt(2), newline,
"long double: ", sqrt(2), newline,
"fixed: [eg] ", sqrt(2), newline))
That is *not* possible in any language.
But that's not what you /said/ you wanted, and it's unreasonable.
Then without learning Hungarian I could write each sqrt as sqrt.
Put_Line (Float'(sqrt(2))'Image);
Put_Line (Long_Float'(sqrt(2))'Image);
Why do you regard "Long_Float'(sqrt(2))" as more readable than "longsqrt(2)"?
You can of course have a language where you are expected to do x**2 and x**0.5 instead of sqr(x) and sqrt(x).
You can also have one where you do exp(log(x)*2) and exp(log(x)*0.5)
instead of x**2 and x**0.5.
It's about convenience and also making your intentions absolutely clear.
If cube roots were that common, would you write that as x**0.33333333333
or x**(1.0/3.0)? There you would welcome cuberoot(x)!
On 06/12/2022 00:25, Bart wrote:
If cube roots were that common, would you write that as
x**0.33333333333 or x**(1.0/3.0)? There you would welcome cuberoot(x)!
In language terms I think I'd go for x ** (1.0 / 3.0). If a programmer wanted to put it in a function there would be nothing stopping him.
On 2022-12-06 00:04, James Harris wrote:
On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
On 2022-12-04 23:21, James Harris wrote:
On 04/12/2022 21:40, Dmitry A. Kazakov wrote:
On 2022-12-04 21:02, James Harris wrote:
I never know what people mean by subtyping.
C promotions are subtypes.
Even C's promotions of int to float?
Sure. If you implicitly convert int to float in some operation f then
int is a subtype of float in f.
Type /compatibility/ is an open issue for me. I gather that Ada
allows multiple subtypes of integer all to be compatible with each
other even if one is not derived from the other; that's different
from inherited compatibility as neither is a superclass of the other.
Subtyping is a transitive relation. A<:B<:C. Ada's subtype introduces
both Small<:Integer and Integer<:Small. Because Small exports its
operations to Integer. E.g.
procedure Foo (X : Small);
Now
Y : Integer;
Foo (Y); -- This is OK, Small is a supertype of Integer
So Small is both a subtype and a supertype of Integer? That seems a
bit mad.
Why, if that is the desired effect? You want Foo (Y) illegal?
Putting aside access modes (which I see as orthogonal to types)
what other implicit conversions would you see the absence of as
intolerable?
Nope, not putting them aside. Again, definition:
type = values + operations
Does 'in T' have the same operations as 'in out T'? No; ergo, this is
another type.
I won't go there. You and I have debated the meaning of 'type'
before and we are not in total agreement.
When you invent something better than the standard definition values
+ operations, let me know... (:-))
Didn't you use to say values only?
Me? Never.
Anyway, it is not specific to pointers. Automatic dereferencing is
subtyping. So if you do not want to build it into the language, you
do not need to, provided you allow the programmer to declare it a subtype.
Everything's subtyping these days! :-o
Implicit conversions are.
On 2022-12-06 00:04, James Harris wrote:
On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
At least you've now added operations so you are getting there. ;-)
Good. Now you see why in T and in out T cannot be the same type?
2. You can always qualify the type and/or operation. E.g. in Ada it
is denoted as T'(E). T is the type, E is expression/object.
Then one gets into the Algol68 approach of "resolve until you get a
type match". If a programmer has three levels of declared-automatic
reference before getting to the target, i.e.
p -> 1 -> 2 -> target
then (to repeat, for declared-automatic dereference) a use of p would
normally access the target. But the programmer might want to access p
or 1 or 2 in different circumstances.
I still see no problem. Whatever object you want, it has a type and
that type has a name. Use the name in the qualifier.
On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
On 2022-12-06 00:04, James Harris wrote:
On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
On 2022-12-04 23:21, James Harris wrote:
On 04/12/2022 21:40, Dmitry A. Kazakov wrote:
On 2022-12-04 21:02, James Harris wrote:
I never know what people mean by subtyping.
...
C promotions are subtypes.
Even C's promotions of int to float?
Sure. If you implicitly convert int to float in some operation f then
int is a subtype of float in f.
int may have a defined conversion to float but that doesn't make it
below ('sub') the other, even for a specific operation. Is this
OO-specific terminology, unrelated to other programming?
Type /compatibility/ is an open issue for me. I gather that Ada allows multiple subtypes of integer all to be compatible with each other even if one is not derived from the other; that's different from inherited compatibility as neither is a superclass of the other.
Subtyping is a transitive relation. A<:B<:C. Ada's subtype introduces
both Small<:Integer and Integer<:Small. Because Small exports its
operations to Integer. E.g.
procedure Foo (X : Small);
Now
Y : Integer;
Foo (Y); -- This is OK, Small is a supertype of Integer
So Small is both a subtype and a supertype of Integer? That seems a
bit mad.
Why, if that is the desired effect? You want Foo (Y) illegal?
Conversions are OK. Saying each is a subtype of the other seems to be abusing the English language.
Anyway, it is not specific to pointers. Automatic dereferencing is
subtyping. So if you do not want to build it into the language, you
do not need to, provided you allow the programmer to declare it a subtype.
Everything's subtyping these days! :-o
Implicit conversions are.
Where was it decided that implicit conversions implied a subtype relationship?
On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
On 2022-12-06 00:04, James Harris wrote:
On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
...
At least you've now added operations so you are getting there. ;-)
Good. Now you see why in T and in out T cannot be the same type?
No. Types are not the only control in a programming language. An
/object/ has a type.
That in some contexts one is not allowed to
modify it (perhaps simply because one has promised not to do so) changes
neither its type nor the operations that can be applied to it.
An apple is an edible fruit. Someone may promise not to eat it but it is still an edible fruit.
2. You can always qualify the type and/or operation. E.g. in Ada it
is denoted as T'(E). T is the type, E is expression/object.
Then one gets into the Algol68 approach of "resolve until you get a
type match". If a programmer has three levels of declared-automatic
reference before getting to the target, i.e.
p -> 1 -> 2 -> target
then (to repeat, for declared-automatic dereference) a use of p would
normally access the target. But the programmer might want to access p
or 1 or 2 in different circumstances.
I still see no problem. Whatever object you want, it has a type and
that type has a name. Use the name in the qualifier.
In C terms the above chain has four objects:
p
*p
**p
***p
where the first three are pointers and ***p is the target.
I could use
the same simple model but there are situations in which it may be
convenient for the programmer to declare a reference, p, which will be treated as the target so that writing
p + 1
would mean
target + 1
Call it auto dereferencing.
What I was saying was that if I allow such declarations then the
question arises over what a programmer could write if instead of the
target he wanted to access one of the references in the chain.
I guess
it may be something like
refchain(p, 0) ;p itself
refchain(p, 1) ;1 away from p
refchain(p, 2) ;2 away from p
refchain(p, -1) ;one before the target
The last two would both refer to object "2" in the chain
p -> 1 -> 2 -> target
[...] Conversions are implicit in both cases = arguments in expression appear as is.
[...] Nor can I parse your second sentence above into anything sensible; what are the "both cases"?
Built-in conversions vs. user-defined subtypes.
[...] Your answer suggested that implicit conversion could somehow exist in a moderately *modern* and reasonably typed language.
Of course it can. You yourself [at the bottom of your article] point us at the implicit conversions in the 2023 C++ standard; and any language loosely related to C has them in its decays and promotions.
They are *user-defined*, which was the whole point about how implicit conversions could be *reasonably* introduced, if any.
I said that implicit conversions lead to ambiguities. All that holds
provided sqrt spells "sqrt", print spells "print", 2 spells "2", etc.
If in your language they are called differently for each possible
type, or maybe depend on the line number, then my deepest
condolences, you left the race before it even started...
Then without learning Hungarian I could write each sqrt as sqrt.
Put_Line (Float'(sqrt(2))'Image); Put_Line
(Long_Float'(sqrt(2))'Image);
Why do you regard "Long_Float'(sqrt(2))" as more readable than "longsqrt(2)"?
My point was about inevitable ambiguities. Any properly designed
language provides tools to resolve such ambiguities without resorting
to silly naming games.
On 2022-12-07 18:42, James Harris wrote:
On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
On 2022-12-06 00:04, James Harris wrote:
On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
On 2022-12-04 23:21, James Harris wrote:
Type /compatibility/ is an open issue for me. I gather that Ada allows multiple subtypes of integer all to be compatible with each other even if one is not derived from the other; that's different from inherited compatibility as neither is a superclass of the other.
Subtyping is a transitive relation. A<:B<:C. Ada's subtype introduces
both Small<:Integer and Integer<:Small. Because Small exports its
operations to Integer. E.g.
procedure Foo (X : Small);
Now
Y : Integer;
Foo (Y); -- This is OK, Small is a supertype of Integer
So Small is both a subtype and a supertype of Integer? That seems a
bit mad.
Why, if that is the desired effect? You want Foo (Y) illegal?
Conversions are OK. Saying each is a subtype of the other seems to be
abusing the English language.
Why?
Firstly neither sub- nor type are English words! (:-))
Secondly subtype means a part of a type. Which part? The inherited operations, the substitutable values. OK?
Anyway, it is not specific to pointers. Automatic dereferencing is
subtyping. So if you do not want to build it into the language,
you do not need to, provided you allow the programmer to declare it a
subtype.
Everything's subtyping these days! :-o
Implicit conversions are.
Where was it decided that implicit conversions implied a subtype
relationship?
Because if sqrt(2) is OK, then it looks as if 2 (integer) were 2.0
(float). You can substitute integer for float in sqrt. This is
Liskov's definition of subtyping (ignoring behavior).
On 2022-12-07 19:09, James Harris wrote:
On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
On 2022-12-06 00:04, James Harris wrote:
On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
...
At least you've now added operations so you are getting there. ;-)
Good. Now you see why in T and in out T cannot be the same type?
No. Types are not the only control in a programming language. An
/object/ has a type.
If you want to use the term for something else you are free to do so.
But if you accept the standard definition you must also accept all consequences of it.
Just because in some contexts one is not allowed to modify it (perhaps
simply because of promising not to do so) changes neither its type nor
the operations that can be applied to it.
An apple is an edible fruit. Someone may promise not to eat it but it
is still an edible fruit.
1. This is obviously wrong. For a cat, an apple is not edible.
2. This does not in the least resemble the case, which is about operations
(properties) being added or removed. E.g. a car without wheels is still a car,
yet one you might treat a bit differently from one with wheels.
2. You can always qualify the type and/or operation. E.g. in Ada it is denoted as T'(E). T is the type, E is expression/object.
Then one gets into the Algol68 approach of "resolve until you get a
type match". If a programmer has three levels of declared-automatic
reference before getting to the target, i.e.
p -> 1 -> 2 -> target
then (to repeat, for declared-automatic dereference) a use of p
would normally access the target. But the programmer might want to
access p or 1 or 2 in different circumstances.
I still see no problem. Whatever object you want, it is has a type
and that type has a name. Use the name in the qualifier.
In C terms the above chain has four objects:
p
*p
**p
***p
where the first three are pointers and ***p is the target.
Good to them.
Do they have types? Name them! Let them be T, T1, T2, T3. Let X be declared
X : T3;
Let all types T, T1, T2, T3 have operation named Bar. You want to call
Bar of T1 on X? Just say so:
Bar (T1'(X))
If T3 is a subtype of T1, X will be dereferenced to T2, then to T1,
because that is the conversion attached to the subtyping relationship
and because subtyping is transitive: T3<:T2<:T1<:T.
If it is not, you get a type error.
What's the problem, again?
I could use the same simple model but there are situations in which it
may be convenient for the programmer to declare a reference, p, which
will be treated as the target so that writing
p + 1
would mean
target + 1
Call it auto dereferencing.
No, call it subtyping! (:-))
What I was saying was that if I allow such declarations then the
question arises over what a programmer could write if instead of the
target he wanted to access one of the references in the chain.
He would qualify the type. Each pointer type is a type. Each type has a name. Each name can be used to disambiguate the object's type. What's the problem?
I guess it may be something like
refchain(p, 0) ;p itself
refchain(p, 1) ;1 away from p
refchain(p, 2) ;2 away from p
refchain(p, -1) ;one before the target
The last two would both refer to object "2" in the chain
p -> 1 -> 2 -> target
Looks disgusting,
but that was the intent, right? (:-)) Anyway it is
beside the point. See above.
On 07/12/2022 19:37, Dmitry A. Kazakov wrote:
On 2022-12-07 18:42, James Harris wrote:
On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
On 2022-12-06 00:04, James Harris wrote:
On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
On 2022-12-04 23:21, James Harris wrote:
Type /compatibility/ is an open issue for me. I gather that Ada allows multiple subtypes of integer all to be compatible with each other even if one is not derived from the other; that's different from inherited compatibility as neither is a superclass of the other.
Subtyping is a transitive relation. A<:B<:C. Ada's subtype
introduces both Small<:Integer and Integer<:Small. Because Small
exports its operations to Integer. E.g.
procedure Foo (X : Small);
Now
Y : Integer;
Foo (Y); -- This is OK, Small is a supertype of Integer
So Small is both a subtype and a supertype of Integer? That seems a bit mad.
Why, if that is the desired effect? You want Foo (Y) illegal?
Conversions are OK. Saying each is a subtype of the other seems to be
abusing the English language.
Why?
Firstly neither sub- nor type are English words! (:-))
Secondly subtype means a part of a type. Which part? The inherited
operations, the substitutable values. OK?
It's true that where there is an implicit conversion the two types would likely share a subset of operations but the common subset would be a
subset of both, without the two types being sub-anything of each other.
If the two types are T and U they may share a set of values and
operations S. One could say that:
S is a subtype of T
S is a subtype of U
but that does not imply that T and U are subtypes of each other.
Anyway, it is not specific to pointers. Automatic dereferencing is
subtyping. So if you do not want to build it into the language,
you do not need to, provided you allow the programmer to declare it a
subtype.
Everything's subtyping these days! :-o
Implicit conversions are.
Where was it decided that implicit conversions implied a subtype
relationship?
Because if sqrt(2) is OK, then it looks as if 2 (integer) were 2.0
(float). You can substitute integer for float in sqrt. This is
Liskov's definition of subtyping (ignoring behavior).
But where did you read that the specific term 'subtype' should
thereafter be applied to implicit conversions?
On 05/12/2022 23:28, James Harris wrote:
[I wrote:]
[...] Two objects of the same type ought to be syntactically and semantically interchangeable [modulo some quibbles not relevant here].
Both j and 2 can be modelled as 'storage' and assigned a location. In
fact, that gives a consistent picture of operands. While some
literals (esp small integers) can be placed in the program code, in
the general case (let's call them large literals) they won't fit and
would need to be placed in storage. So why not initially place them
all there?
/Implementation/ details don't affect types! Yes, you can if you like implement "2" as
int secret := 2;
but that gives "secret" and "j" the same type, not "j" and "2". The fact will remain that there are many contexts in which you can use "j" but not
"2" [and you can't, as a programmer, use "secret", because it's secret].
Even if the compiler initially assigns locations for all literals the
optimiser can ensure that small integers are moved out of storage and
into the program text so nothing is lost. But the models for j and 2
as seen in the program text can be the same.
What the compiler and optimiser do is up to them. But if the /program text/ fails to distinguish an integer from storage containing
an integer, it's going to make programming "interesting". As in those languages where "2" is just an identifier, and you /can/ assign "2 := 3"
so that "2 * 2 == 9" [unless "9" has also been re-defined!]. I hope
you're not going down that route.
On 07/12/2022 19:39, Dmitry A. Kazakov wrote:
On 2022-12-07 19:09, James Harris wrote:
On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
On 2022-12-06 00:04, James Harris wrote:
On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
...
At least you've now added operations so you are getting there. ;-)
Good. Now you see why in T and in out T cannot be the same type?
No. Types are not the only control in a programming language. An
/object/ has a type.
If you want to use the term for something else you are free to do so.
But if you accept the standard definition you must also accept all
consequences of it.
Potentially fair but where did you read a definition of 'type' which
backs up your point?
I think I see what you mean. The types would be
ref ref ref T
ref ref T
ref T
T
so there would be no ambiguity as to what was being referred to,
although it would require a departure from normal bottom-up type propagation.
Such inconsistency is something that should be resisted.
What I was saying was that if I allow such declarations then the
question arises over what a programmer could write if instead of the
target he wanted to access one of the references in the chain.
He would qualify the type. Each pointer type is a type. Each type has
a name. Each name can be used to disambiguate the object's type. What's
the problem?
I guess it may be something like
refchain(p, 0) ;p itself
refchain(p, 1) ;1 away from p
refchain(p, 2) ;2 away from p
refchain(p, -1) ;one before the target
The last two would both refer to object "2" in the chain
p -> 1 -> 2 -> target
Looks disgusting,
Why?
but that was the intent, right? (:-)) Anyway it is beside the point.
See above.
As mentioned, using types would work for a language such as Algol68
which already defines dereferencing until types match but not in a
language in which type changes are propagated bottom up.
On 2022-12-07 17:42, James Harris wrote:
On 06/12/2022 00:25, Bart wrote:
If cube roots were that common, would you write that as
x**0.33333333333 or x**(1.0/3.0)? There you would welcome cuberoot(x)!
In language terms I think I'd go for x ** (1.0 / 3.0). If a programmer
wanted to put it in a function there would be nothing stopping him.
I think the point Bart was making was that 1/3 has no exact
representation in binary floating-point numbers. If cube root used a
special algorithm, you would have trouble deciding when to switch to it.
On 2022-12-11 18:00, James Harris wrote:
On 07/12/2022 19:37, Dmitry A. Kazakov wrote:
On 2022-12-07 18:42, James Harris wrote:
On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
On 2022-12-06 00:04, James Harris wrote:
On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
On 2022-12-04 23:21, James Harris wrote:
Type /compatibility/ is an open issue for me. I gather that Ada allows multiple subtypes of integer all to be compatible with each other even if one is not derived from the other; that's different from inherited compatibility as neither is a superclass of the other.
Subtyping is a transitive relation. A<:B<:C. Ada's subtype
introduces both Small<:Integer and Integer<:Small. Because Small
exports its operations to Integer. E.g.
procedure Foo (X : Small);
Now
Y : Integer;
Foo (Y); -- This is OK, Small is a supertype of Integer
So Small is both a subtype and a supertype of Integer? That seems a bit mad.
Why, if that is the desired effect? You want Foo (Y) illegal?
Conversions are OK. Saying each is a subtype of the other seems to
be abusing the English language.
Why?
Firstly neither sub- nor type are English words! (:-))
Secondly subtype means a part of a type. Which part? The inherited
operations, the substitutable values. OK?
It's true that where there is an implicit conversion the two types
would likely share a subset of operations but the common subset would
be a subset of both, without the two types being sub-anything of each
other. If the two types are T and U they may share a set of values and
operations S. One could say that:
S is a subtype of T
S is a subtype of U
but that does not imply that T and U are subtypes of each other.
I am not sure what you are trying to say.
Subtyping is a relation denoted as S<:T. It is transitive, i.e.
S<:T and T<:U => S<:U.
From S<:T and S<:U follows nothing.
Ada's subtypes introduce both S<:T and T<:S. Which is why per
transitivity two Ada subtypes get connected:
S<:T and T<:S and U<:T and T<:U => S<:U and U<:S
[ "Sharing" is not a word. You must formalize it. E.g. Unsigned_32 and
Unsigned_64 may mathematically share + and 1, but that does not make them formal subtypes until you declare them as such. Then you create a mapping:
1 of Unsigned_32 corresponds to 1 of Unsigned_64
2 of Unsigned_32 corresponds to 2 of Unsigned_64
...
+ of Unsigned_32 corresponds to + of Unsigned_64
Etc. Once you have done all that, you know at the language level what happens
with conversions and all the likely nasty implications.
I presume we are talking about nominal type equivalence. ]
Anyway, it is not specific to pointers. Automatic dereferencing is
subtyping. So if you do not want to build it into the language,
you do not need to, provided you allow the programmer to declare it a
subtype.
Everything's subtyping these days! :-o
Implicit conversions are.
Where was it decided that implicit conversions implied a subtype
relationship?
Because if sqrt(2) is OK, then it looks as if 2 (integer) were 2.0
(float). You can substitute integer for float in sqrt. This is
Liskov's definition of subtyping (ignoring behavior).
But where did you read that the specific term 'subtype' should
thereafter be applied to implicit conversions?
Conversion is an implementation of substitution.
How would you
substitute int for float without a conversion?
On 2022-12-11 18:18, James Harris wrote:
On 07/12/2022 19:39, Dmitry A. Kazakov wrote:
On 2022-12-07 19:09, James Harris wrote:
On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
On 2022-12-06 00:04, James Harris wrote:
On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
...
At least you've now added operations so you are getting there. ;-)
Good. Now you see why in T and in out T cannot be the same type?
No. Types are not the only control in a programming language. An
/object/ has a type.
If you want to use the term for something else you are free to do so.
But if you accept the standard definition you must also accept all
consequences of it.
Potentially fair but where did you read a definition of 'type' which
backs up your point?
It is a commonly accepted definition AFAIK coming from mathematical type theories of the beginning of the last century.
Wikipedia (ADT = abstract datatype):
"Formally, an ADT may be defined as a "class of objects whose logical behavior is defined by a set of values and a set of operations"; this is analogous to an algebraic structure in mathematics."
I think I see what you mean. The types would be
ref ref ref T
ref ref T
ref T
T
so there would be no ambiguity as to what was being referred to,
although it would require a departure from normal bottom-up type
propagation.
How is that related to the way you resolve the types? If your resolver
is incapable of resolving types, you have an ambiguity. An ambiguity can *always* be resolved by qualifying types. Nothing more, nothing less.
Such inconsistency is something that should be resisted.
What inconsistency?
What I was saying was that if I allow such declarations then the
question arises over what a programmer could write if instead of the
target he wanted to access one of the references in the chain.
He would qualify the type. Each pointer type is a type. Each type has
a name. Each name can be used to disambiguate the object's type. What's
the problem?
I guess it may be something like
refchain(p, 0) ;p itself
refchain(p, 1) ;1 away from p
refchain(p, 2) ;2 away from p
refchain(p, -1) ;one before the target
The last two would both refer to object "2" in the chain
p -> 1 -> 2 -> target
Looks disgusting,
Why?
Judging by the weird stomach movements a mere look at it causes... (:-))
On 11/12/2022 17:21, Dmitry A. Kazakov wrote:
On 2022-12-11 18:00, James Harris wrote:
On 07/12/2022 19:37, Dmitry A. Kazakov wrote:
But where did you read that the specific term 'subtype' should
thereafter be applied to implicit conversions?
Conversion is an implementation of substitution.
Conversion is a conversion, not a substitution!
How would you substitute int for float without a conversion?
I don't have implicit conversions of declared types. They must be
written explicitly. Undeclared types (big int, big uint, big float, character, string, boolean etc) would have automatic conversions. Not
sure if that affects your assessment.
On 11/12/2022 17:43, Dmitry A. Kazakov wrote:
On 2022-12-11 18:18, James Harris wrote:
On 07/12/2022 19:39, Dmitry A. Kazakov wrote:
On 2022-12-07 19:09, James Harris wrote:
On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
On 2022-12-06 00:04, James Harris wrote:
On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
...
At least you've now added operations so you are getting there. ;-)
Good. Now you see why in T and in out T cannot be the same type?
No. Types are not the only control in a programming language. An
/object/ has a type.
If you want to use the term for something else you are free to do
so. But if you accept the standard definition you must also accept
all consequences of it.
Potentially fair but where did you read a definition of 'type' which
backs up your point?
It is a commonly accepted definition AFAIK coming from mathematical
type theories of the beginning of the last century.
Wikipedia (ADT = abstract datatype):
"Formally, an ADT may be defined as a "class of objects whose logical
behavior is defined by a set of values and a set of operations"; this
is analogous to an algebraic structure in mathematics."
That rather backs up my assertion that types are of /objects/ rather
than of /uses/. IOW a piece of code could treat an object as read-only
but that doesn't change the type of the object.
I think I see what you mean. The types would be
ref ref ref T
ref ref T
ref T
T
so there would be no ambiguity as to what was being referred to,
although it would require a departure from normal bottom-up type
propagation.
How is that related to the way you resolve the types? If your resolver
is incapable of resolving types, you have an ambiguity. An ambiguity can
*always* be resolved by qualifying types. Nothing more, nothing less.
Bottom-up type resolution follows type-inference rules. The only
/automatic/ change I allow is widening. For example,
[int16] s
[int32] t
[float64] g
s = t ;prohibited because it is narrowing
t = s ;permitted because it is widening
return s + t ;permitted and results in the wider type (int32)
g = s ;prohibited as different types (even though in range)
Similarly for references,
[int32] i, j
[ref int32] ri, rj
[ref ref int32] rri, rrj
ri = j ;prohibited due to type mismatch
ri = rj ;permitted as types match
ri = rrj ;prohibited due to type mismatch
Such inconsistency is something that should be resisted.
What inconsistency?
See the example, above, on references. It would be inconsistent to automatically dereference when the rest of the language requires
explicit type matching.
On 07/12/2022 16:53, Dmitry A. Kazakov wrote:
On 2022-12-07 17:42, James Harris wrote:
On 06/12/2022 00:25, Bart wrote:
If cube roots were that common, would you write that as
x**0.33333333333 or x**(1.0/3.0)? There you would welcome cuberoot(x)!
In language terms I think I'd go for x ** (1.0 / 3.0). If a
programmer wanted to put it in a function there would be nothing
stopping him.
I think the point Bart was making was that 1/3 has no exact
representation in binary floating-point numbers. If cube root used a
special algorithm, you would have trouble deciding when to switch
to it.
If you and Bart mean that a cuberoot function would have to decide on a rounding direction for 1.0/3.0 then I agree; it's a good point. I
haven't worked through the many issues surrounding floats, yet, but they would probably be similar to integers, i.e. delimited areas of code
could have a default rounding mode. One could write something along the lines of
with float-rounding-mode = round-to-positive-infinity
return x ** (1.0 / 3.0)
with end
In such code the division would be rounded up to something like 0.3333334.
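The directed rounding James sketches can be imitated with Python's decimal module (decimal rather than binary floats, but the principle of a delimited region with its own rounding mode is the same; precision 7 is chosen here just to match his 0.3333334):

```python
from decimal import Decimal, localcontext, ROUND_CEILING, ROUND_FLOOR

# A delimited region with its own rounding mode, roughly analogous to
# the proposed 'with float-rounding-mode' block.
with localcontext() as ctx:
    ctx.prec = 7
    ctx.rounding = ROUND_CEILING
    up = Decimal(1) / Decimal(3)      # rounded towards +infinity

with localcontext() as ctx:
    ctx.prec = 7
    ctx.rounding = ROUND_FLOOR
    down = Decimal(1) / Decimal(3)    # rounded towards -infinity

print(up, down)   # 0.3333334 0.3333333
```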
To incorporate that much control in a function would require many
functions such as
cuberoot_round_up
cuberoot_round_down
cuberoot_round_towards_zero
and other similar horrors.
On 11/12/2022 18:01, James Harris wrote:
If [Dmitry] and Bart mean that a cuberoot function would have to
decide on a rounding direction for 1.0/3.0 then I agree; it's a good
point.
Cube roots are arguably used often enough to be worth implementing
separately from "** (1/3)" or near equivalent. Then the error from "1/3"
is irrelevant. But if this matters to you, you would need to use serious numerical analysis and computation beyond the normal scope of this group, such as interval arithmetic and Chebychev polynomials.
On 07/12/2022 16:53, Dmitry A. Kazakov wrote:
On 2022-12-07 17:42, James Harris wrote:
On 06/12/2022 00:25, Bart wrote:
If cube roots were that common, would you write that as
x**0.33333333333 or x**(1.0/3.0)? There you would welcome cuberoot(x)!
In language terms I think I'd go for x ** (1.0 / 3.0). If a
programmer wanted to put it in a function there would be nothing
stopping him.
I think the point Bart was making was that 1/3 had no exact
representation in binary floating-point numbers. If cube root used a
special algorithm, you would have trouble deciding when to switch
to it.
If you and Bart mean that a cuberoot function would have to decide on a rounding direction for 1.0/3.0 then I agree; it's a good point.
I
haven't worked through the many issues surrounding floats, yet, but they would probably be similar to integers, i.e. delimited areas of code
could have a default rounding mode. One could write something along the lines of
with float-rounding-mode = round-to-positive-infinity
return x ** (1.0 / 3.0)
with end
In such code the division would be rounded up to something like 0.3333334.
To incorporate that much control in a function would require many
functions such as
cuberoot_round_up
cuberoot_round_down
cuberoot_round_towards_zero
and other similar horrors. Those names are, in fact, misleading as they
seem to apply to the cube root rather than the division within it -
which makes the idea of a function even worse.
All the more reason to have a programmer write cube root calculations explicitly rather than wrapping the subtleties in functions.
On 12/12/2022 01:20, Andy Walker wrote:
Then the error from "1/3"
is irrelevant. But if this matters to you, you would need to use serious numerical analysis and computation beyond the normal scope of this group,
such as interval arithmetic and Chebychev polynomials.
You would almost certainly not use Chebychev's or other polynomials for calculating cube roots, unless you were making something like a
specialised pipelined implementation in an ASIC or FPGA.
And as David said, cube roots aren't that common. There might be a button
for it on my Casio, but that might be because it's oriented towards
solving school maths problems.
On 2022-12-11 19:45, James Harris wrote:
On 11/12/2022 17:43, Dmitry A. Kazakov wrote:
On 2022-12-11 18:18, James Harris wrote:
On 07/12/2022 19:39, Dmitry A. Kazakov wrote:
On 2022-12-07 19:09, James Harris wrote:
On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
On 2022-12-06 00:04, James Harris wrote:
On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
...
At least you've now added operations so you are getting there. ;-)
Good. Now you see why in T and in out T cannot be the same type?
No. Types are not the only control in a programming language. An
/object/ has a type.
If you want to use the term for something else you are free to do
so. But if you accept the standard definition you must also accept
all consequences of it.
Potentially fair but where did you read a definition of 'type' which
backs up your point?
It is a commonly accepted definition AFAIK coming from mathematical
type theories of the beginning of the last century.
Wikipedia (ADT = abstract datatype):
"Formally, an ADT may be defined as a "class of objects whose logical
behavior is defined by a set of values and a set of operations"; this
is analogous to an algebraic structure in mathematics."
That rather backs up my assertion that types are of /objects/ rather
than of /uses/. IOW a piece of code could treat an object as read-only
but that doesn't change the type of the object.
No idea what this is supposed to mean. You asked where it comes from, I
gave you the quote.
Again, you do not accept the common definition; give your own and explain
how stupid the rest of the world is for not using it.
I think I see what you mean. The types would be
ref ref ref T
ref ref T
ref T
T
so there would be no ambiguity as to what was being referred to,
although it would require a departure from normal bottom-up type
propagation.
How is that related to the way you resolve the types? If your
resolver is incapable of resolving types, you have an ambiguity. An
ambiguity can *always* be resolved by qualifying types. Nothing more,
nothing less.
Bottom-up type resolution follows type-inference rules. The only
/automatic/ change I allow is widening. For example,
[int16] s
[int32] t
[float64] g
s = t ;prohibited because is narrowing
t = s ;permitted because is widening
return s + t ;permitted and results in wider type (int32)
g = s ;prohibited as different types (even though in range)
Similarly for references,
[int32] i, j
[ref int32] ri, rj
[ref ref int32] rri, rrj
ri = j ;prohibited due to type mismatch
ri = rj ;permitted as types match
ri = rrj ;prohibited due to type mismatch
So what? Again, any particular weakness of your type system by no means changes the point, which, let me repeat, is: you can always resolve ambiguities by qualifying types.
Such inconsistency is something that should be resisted.
What inconsistency?
See the example, above, on references. It would be inconsistent to
automatically dereference when the rest of the language requires
explicit type matching.
I do not see any inconsistency. See the definition of:
Wikipedia:
"In classical deductive logic, a consistent theory is one that does not
lead to a logical contradiction."
On 2022-12-11 19:01, James Harris wrote:
On 07/12/2022 16:53, Dmitry A. Kazakov wrote:
On 2022-12-07 17:42, James Harris wrote:
On 06/12/2022 00:25, Bart wrote:
If cube roots were that common, would you write that as
x**0.33333333333 or x**(1.0/3.0)? There you would welcome cuberoot(x)!
In language terms I think I'd go for x ** (1.0 / 3.0). If a
programmer wanted to put it in a function there would be nothing
stopping him.
I think the point Bart was making was that 1/3 had no exact
representation in binary floating-point numbers. If cube root used a
special algorithm, you would have trouble deciding when to switch
to it.
If you and Bart mean that a cuberoot function would have to decide on
a rounding direction for 1.0/3.0 then I agree; it's a good point. I
haven't worked through the many issues surrounding floats, yet, but
they would probably be similar to integers, i.e. delimited areas of
code could have a default rounding mode. One could write something
along the lines of
with float-rounding-mode = round-to-positive-infinity
return x ** (1.0 / 3.0)
with end
You cannot do that with available hardware in a reasonable way. Hardware rounding is set.
In such code the division would be rounded up to something like
0.3333334.
To incorporate that much control in a function would require many
functions such as
cuberoot_round_up
cuberoot_round_down
cuberoot_round_towards_zero
and other similar horrors.
These are no horrors; this is interval computation. An interval-valued function
F returns an interval containing the mathematically correct result.
Hardware implementations are interval-valued with the interval width of
two adjacent machine numbers. Then rounding chooses one of the bounds.
But that does not resolve the problem. You need some solid numeric
analysis of relation between err1 and err2 in
X ** (1/3 + err1)
and
X ** 1/3 + err2
Exponentiation to positive powers below 1 is a well-behaved function,
and one could nevertheless use one for the other, up to a point.
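The relation between err1 and err2 that Dmitry raises can be estimated from the derivative: since d/dp (x**p) = x**p * ln(x), a small error err1 in the exponent produces err2 ≈ x**(1/3) * ln(x) * err1 in the result. A rough numeric check (the concrete numbers are mine, purely illustrative):

```python
import math

# Illustrative check of how an error in the exponent (err1) relates to
# an error in the result (err2) for x ** (1/3 + err1).
x = 1000.0
exact = x ** (1.0 / 3.0)                  # ~10.0
err1 = 1e-7                               # perturbation of the exponent
perturbed = x ** (1.0 / 3.0 + err1)

# First-order prediction: err2 ~= x**(1/3) * ln(x) * err1
predicted_err2 = exact * math.log(x) * err1
print(perturbed - exact, predicted_err2)  # both ~6.9e-6
```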
On 2022-12-11 19:25, James Harris wrote:
On 11/12/2022 17:21, Dmitry A. Kazakov wrote:
On 2022-12-11 18:00, James Harris wrote:
On 07/12/2022 19:37, Dmitry A. Kazakov wrote:
But where did you read that the specific term 'subtype' should
thereafter be applied to implicit conversions?
Conversion is an implementation of substitution.
Conversion is a conversion, not a substitution!
I did not say it is. I said it is an implementation of one.
How would you substitute int for float without a conversion?
I don't have implicit conversions of declared types. They must be
written explicitly. Undeclared types (big int, big uint, big float,
character, string, boolean etc) would have automatic conversions. Not
sure if that affects your assessment.
Call implicit automatic, automatic implicit. It does not change the semantics and the intent. The intent and the meaning is to substitute
one type for another transparently to the syntax.
On 12/12/2022 11:26, Bart wrote:
And as David said, cube roots aren't that common. There might be a
button for it on my Casio, but that might be because it's oriented
towards solving school maths problems.
I think rather than having a cube root function, perhaps an nth root function would make some sense. Then you could write "root(x, 3)" -
that's about as neat and potentially as accurate as any other solution.
On 11/12/2022 18:01, James Harris wrote:
On 07/12/2022 16:53, Dmitry A. Kazakov wrote:
On 2022-12-07 17:42, James Harris wrote:
On 06/12/2022 00:25, Bart wrote:
If cube roots were that common, would you write that as
x**0.33333333333 or x**(1.0/3.0)? There you would welcome cuberoot(x)!
In language terms I think I'd go for x ** (1.0 / 3.0). If a
programmer wanted to put it in a function there would be nothing
stopping him.
I think the point Bart was making was that 1/3 had no exact
representation in binary floating-point numbers. If cube root used a
special algorithm, you would have trouble deciding when to switch
to it.
If you and Bart mean that a cuberoot function would have to decide on
a rounding direction for 1.0/3.0 then I agree; it's a good point.
Actually, I was concerned with aesthetics.
The precision of 1.0/3 is something I hadn't considered, but DAK's point
is, if a compiler knew of a fast cube root algorithm and wanted to special-case A**B when B was 1.0/3, then how close to 'one third' would
B have to be before it could assume that a cube-root was in fact what
was intended? Given that there is no precise representation of 'one
third' anyway.
Far easier to just provide something like 'cuberoot()'; then it will
know, and so will people reading the code.
I haven't worked through the many issues surrounding floats, yet, but
they would probably be similar to integers, i.e. delimited areas of
code could have a default rounding mode. One could write something
along the lines of
with float-rounding-mode = round-to-positive-infinity
return x ** (1.0 / 3.0)
with end
In such code the division would be rounded up to something like
0.3333334.
To incorporate that much control in a function would require many
functions such as
cuberoot_round_up
cuberoot_round_down
cuberoot_round_towards_zero
Huh? Who cares about the rounding of the cube root?! As I said, it was merely about detecting whether this was a cube root.
and other similar horrors. Those names are, in fact, misleading as
they seem to apply to the cube root rather than the division within it
- which makes the idea of a function even worse.
Oh, you are talking about that! Maybe you need a function like isthisreallyathirdorjustsomethingclose() instead.
You'll be glad to know I won't support any of that nonsense (such as
allowing a redefinition of what 2 means) but what do you mean about
the program text? Say the text has
two: const = 2
f(2)
f(two)
I don't see any semantic difference between 2 and two. Nor does
either form imply either storage or the lack of storage.
That being the case, do you still see a semantic difference between 2
and two?
On 12/12/2022 10:26, Bart wrote:
What do you make of David's suggestion to have a "root" function which I would probably have as
root(2, x) -> square root of x
root(3, x) -> cube root of x
On 11/12/2022 19:08, Dmitry A. Kazakov wrote:
On 2022-12-11 19:25, James Harris wrote:
On 11/12/2022 17:21, Dmitry A. Kazakov wrote:
On 2022-12-11 18:00, James Harris wrote:
On 07/12/2022 19:37, Dmitry A. Kazakov wrote:
But where did you read that the specific term 'subtype' should
thereafter be applied to implicit conversions?
Conversion is an implementation of substitution.
Conversion is a conversion, not a substitution!
I did not say it is. I said it is an implementation of one.
How would you substitute int for float without a conversion?
I don't have implicit conversions of declared types. They must be
written explicitly. Undeclared types (big int, big uint, big float,
character, string, boolean etc) would have automatic conversions. Not
sure if that affects your assessment.
Call implicit automatic, automatic implicit. It does not change the
semantics and the intent. The intent and the meaning is to substitute
one type for another transparently to the syntax.
Fine but that still doesn't imply a 'sub' type - for any reasonable
meaning of the term.
I accept that you have a different interpretation of 'subtype' and
that's fine but it seems a confusing use of language. Maybe there's a
better term that everyone would be happy with but I think of a subtype
as being diminutive compared to another, such as integers which have a smaller range.
thou: integer range 0..999
hund: integer range 0..99
where hund could be understood to be a subtype of thou. But not the
other way round!
On 11/12/2022 19:21, Dmitry A. Kazakov wrote:
On 2022-12-11 19:45, James Harris wrote:
On 11/12/2022 17:43, Dmitry A. Kazakov wrote:
On 2022-12-11 18:18, James Harris wrote:
On 07/12/2022 19:39, Dmitry A. Kazakov wrote:
On 2022-12-07 19:09, James Harris wrote:
On 06/12/2022 08:09, Dmitry A. Kazakov wrote:
On 2022-12-06 00:04, James Harris wrote:
On 05/12/2022 07:50, Dmitry A. Kazakov wrote:
...
At least you've now added operations so you are getting there. ;-)
Good. Now you see why in T and in out T cannot be the same type?
No. Types are not the only control in a programming language. An
/object/ has a type.
If you want to use the term for something else you are free to do
so. But if you accept the standard definition you must also accept
all consequences of it.
Potentially fair but where did you read a definition of 'type'
which backs up your point?
It is a commonly accepted definition AFAIK coming from mathematical
type theories of the beginning of the last century.
Wikipedia (ADT = abstract datatype):
"Formally, an ADT may be defined as a "class of objects whose
logical behavior is defined by a set of values and a set of
operations"; this is analogous to an algebraic structure in
mathematics."
That rather backs up my assertion that types are of /objects/ rather
than of /uses/. IOW a piece of code could treat an object as
read-only but that doesn't change the type of the object.
No idea what this is supposed to mean. You asked where it comes from,
I gave you the quote.
You referred to an ADT rather than a type.
From the same source:
"A data type constrains the possible values that an expression, such as
a variable or a function, might take. This data type defines the
operations that can be done on the data, the meaning of the data, and
the way values of that type can be stored."
https://en.wikipedia.org/wiki/Data_type
Note, the operations which "can be done on the data", not "can be done
by a certain function".
Again, you do not accept the common definition; give your own and explain
how stupid the rest of the world is for not using it.
Ahem, I'm happy with the aforementioned definition. It's you who is
seeking an alteration.
So what? Again, any particular weakness of your type system by no means
changes the point, which, let me repeat, is: you can always resolve
ambiguities by qualifying types.
There's no weakness in that type system. On the contrary, it requires
and would enforce precision and explicitness from the programmer.
Such inconsistency is something that should be resisted.
What inconsistency?
See the example, above, on references. It would be inconsistent to
automatically dereference when the rest of the language requires
explicit type matching.
I do not see any inconsistency. See the definition of:
Wikipedia:
"In classical deductive logic, a consistent theory is one that does
not lead to a logical contradiction."
I mean 'inconsistency' such as requiring explicit conversions in one
place but not in another.
On 11/12/2022 19:32, Dmitry A. Kazakov wrote:
On 2022-12-11 19:01, James Harris wrote:
On 07/12/2022 16:53, Dmitry A. Kazakov wrote:
On 2022-12-07 17:42, James Harris wrote:
On 06/12/2022 00:25, Bart wrote:
If cube roots were that common, would you write that as
x**0.33333333333 or x**(1.0/3.0)? There you would welcome
cuberoot(x)!
In language terms I think I'd go for x ** (1.0 / 3.0). If a
programmer wanted to put it in a function there would be nothing
stopping him.
I think the point Bart was making was that 1/3 had no exact
representation in binary floating-point numbers. If cube root used a
special algorithm, you would have trouble deciding when to switch
to it.
If you and Bart mean that a cuberoot function would have to decide on
a rounding direction for 1.0/3.0 then I agree; it's a good point. I
haven't worked through the many issues surrounding floats, yet, but
they would probably be similar to integers, i.e. delimited areas of
code could have a default rounding mode. One could write something
along the lines of
with float-rounding-mode = round-to-positive-infinity
return x ** (1.0 / 3.0)
with end
You cannot do that with available hardware in a reasonable way.
Hardware rounding is set.
Not so. Rounding towards infinities is provided in hardware.
A certain language (Ada?) might have you believe there is only one
rounding mode in hardware but that's not so.
IEEE 754 defines five, according to
https://en.wikipedia.org/wiki/IEEE_754#Rounding_rules
I currently define 12 rounding modes for integers and I guess that
similar may apply for floats but I've not explored that yet. Either way,
if a programmer wanted a mode which the IEEE did not define for hardware
it would be the compiler's job to emit equivalent instructions.
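The kind of integer rounding modes James mentions can be expressed as division helpers. A few common directed modes, sketched here with names of my own choosing (not from his language):

```python
# Three directed integer rounding modes, as division helpers.
def div_floor(a: int, b: int) -> int:
    return a // b                       # round towards -infinity (Python's //)

def div_ceil(a: int, b: int) -> int:
    return -((-a) // b)                 # round towards +infinity

def div_trunc(a: int, b: int) -> int:
    q = abs(a) // abs(b)                # round towards zero
    return q if (a < 0) == (b < 0) else -q

print(div_floor(-7, 2), div_ceil(-7, 2), div_trunc(-7, 2))  # -4 -3 -3
```

The three agree for exact quotients and for same-sign operands under truncation; they differ only in which neighbouring integer an inexact quotient is pushed to.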
In such code the division would be rounded up to something like
0.3333334.
To incorporate that much control in a function would require many
functions such as
cuberoot_round_up
cuberoot_round_down
cuberoot_round_towards_zero
and other similar horrors.
These are no horrors; this is interval computation. An interval-valued
function F returns an interval containing the mathematically correct
result. Hardware implementations are interval-valued with the interval
width of two adjacent machine numbers. Then rounding chooses one of
the bounds.
OT but I'd like to see a float implementation which instead of a single value always calculated upper and lower bounds for fp results.
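A toy version of what James describes might look like the class below. This is only a sketch of the idea (a real interval type would also have to direct the rounding of the bounds themselves, which plain Python floats cannot do):

```python
# A toy interval type: every operation returns bounds that bracket the
# mathematically exact result (ignoring rounding of the bounds).
class Interval:
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other: "Interval") -> "Interval":
        # All four corner products; the true result lies between the extremes.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self) -> str:
        return f"[{self.lo}, {self.hi}]"

third = Interval(0.333333, 0.333334)   # brackets 1/3
print(third * Interval(3.0, 3.0))      # an interval containing 1.0
```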
On 12/12/2022 15:18, David Brown wrote:
On 12/12/2022 11:26, Bart wrote:
And as David said, cube roots aren't that common. There might be a
button for it on my Casio, but that might be because it's oriented
towards solving school maths problems.
I think rather than having a cube root function, perhaps an nth root
function would make some sense. Then you could write "root(x, 3)" -
that's about as neat and potentially as accurate as any other solution.
I like that! I'd have to work through issues including integers vs
floats but it would appear to be simple, clear, and general. I may do something similar, perhaps with the root first to permit results to be tuples.
root(3, ....) ;cube root
so that
root(3, 8) -> 2
root(3, 8, 1000) -> (2, 10)
On 12/12/2022 22:41, James Harris wrote:
On 12/12/2022 10:26, Bart wrote:
What do you make of David's suggestion to have a "root" function which
I would probably have as
root(2, x) -> square root of x
root(3, x) -> cube root of x
It's OK. It solves the problem of special-casing cube-roots, and others
of interest.
Probably I wouldn't have it as built-in, as I have no great interest in
cube roots (other than allowing ∛(x) would be cool). Maybe as an
ordinary macro or function that implements it on top of **, but is a
clearer alternative. nthroot() is another possibility (as is an infix version `3 root x`, with precedence the same as **).
But I would continue to use sqrt since I first saw that in a language in '75, and I see no reason to drop it.
On 12/12/2022 23:30, James Harris wrote:
I like that! I'd have to work through issues including integers vs
floats but it would appear to be simple, clear, and general. I may do
something similar, perhaps with the root first to permit results to be
tuples.
root(3, ....) ;cube root
so that
root(3, 8) -> 2
root(3, 8, 1000) -> (2, 10)
This last one is :
root(n, a, b) -> (a ^ 1/n, b ^ 1/n)
?
Is that a general feature you have - allowing functions to take extra
arguments and return tuples? If so, what is the point? (I don't
mean I think it is pointless, I mean I'd like to know what you think is
the use-case!)
Just to annoy Bart :-), you could do this by implementing currying in
your language along with syntactic sugar for the common functional programming "map" function (which applies a function to every element in
a list).
Given a function "foo(a, b)" taking two inputs, "currying" would let the user treat "foo(a)" as a function that takes one input "b" and returns "foo(a, b)". Thus "foo(a, b)" and "foo(a)(b)" do the same thing. (In Haskell, you don't need the parentheses around parameters, and
associativity means that "foo a b" is the same thing as "(foo a) b" - all
use of multiple parameters is by currying.)
Now allow the syntax "foo [a, b, c]" to mean "map foo [a, b, c]" and
thus "[foo(a), foo(b), foo(c)]" - i.e., application of a single-input function to a list/tuple should return a list/tuple of that function
applied to each element.
The user (or library) can then define the single function "root(n, x)",
and the user can write "root(3)[8, 1000]" to get "[2, 10]" without any special consideration in the definition of "root".
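David's currying-plus-map idea can be sketched in Python. Python has no auto-currying, so the decorator below fakes it for two-argument functions, and calling the partially-applied function with a list stands in for his "foo [a, b, c]" map syntax; all names here are mine:

```python
# Fake currying for a two-argument function, with list arguments mapped.
def curried2(f):
    def outer(n):
        def inner(x):
            if isinstance(x, list):          # "map" case: apply elementwise
                return [f(n, e) for e in x]
            return f(n, x)                   # ordinary single-value case
        return inner
    return outer

@curried2
def root(n, x):
    return round(x ** (1.0 / n))   # rounded, to match the integer examples

print(root(3)(8))          # 2
print(root(3)([8, 1000]))  # [2, 10]
```

As David says, "root" itself needs no special consideration; the list behaviour comes entirely from the generic wrapper.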
On 13/12/2022 01:27, Bart wrote:
On 12/12/2022 22:41, James Harris wrote:
On 12/12/2022 10:26, Bart wrote:
What do you make of David's suggestion to have a "root" function
which I would probably have as
root(2, x) -> square root of x
root(3, x) -> cube root of x
It's OK. It solves the problem of special-casing cube-roots, and
others of interest.
Probably I wouldn't have it as built-in, as I have no great interest
in cube roots (other than allowing ∛(x) would be cool). Maybe as an
ordinary macro or function that implements it on top of **, but is a
clearer alternative. nthroot() is another possibility (as is an
infix version `3 root x`, with precedence the same as **).
But I would continue to use sqrt since I first saw that in a language
in '75, and I see no reason to drop it.
Square roots are very common, so it makes sense to have an individual function for them. At the very least, the implementation of "root"
should have a special case for handling roots 0, 1 and 2.
(As always, I would never have something as a built-in function or
keyword if it could equally well be made in a library. And if you can't make library functions as efficient as builtins, find a better way to
handle your standard library - standard libraries don't need to follow platform ABI's.)
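A library-level "root" with the special cases David suggests might look like this (the behaviour for n = 0 is my choice; a real library would have to decide what the 0th root means, if anything):

```python
import math

# Hypothetical library 'root' with special cases for n = 0, 1 and 2.
def root(n: int, x: float) -> float:
    if n == 0:
        raise ValueError("0th root is undefined")
    if n == 1:
        return x                   # identity
    if n == 2:
        return math.sqrt(x)        # the common case gets the tuned routine
    return x ** (1.0 / n)          # general case

print(root(2, 9.0))   # 3.0
```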
On 2022-12-13 15:10, David Brown wrote:
Square roots are very common, so it makes sense to have an individual
function for them. At the very least, the implementation of "root"
should have a special case for handling roots 0, 1 and 2.
(As always, I would never have something as a built-in function or
keyword if it could equally well be made in a library. And if you
can't make library functions as efficient as builtins, find a better
way to handle your standard library - standard libraries don't need to
follow platform ABI's.)
If x**y denotes power, then x//y should do the opposite. So x//2 is
square root. I cannot help the proponents of x^y notation... (:-))
On 2022-12-13 15:10, David Brown wrote:
On 13/12/2022 01:27, Bart wrote:
On 12/12/2022 22:41, James Harris wrote:
On 12/12/2022 10:26, Bart wrote:
What do you make of David's suggestion to have a "root" function
which I would probably have as
root(2, x) -> square root of x
root(3, x) -> cube root of x
It's OK. It solves the problem of special-casing cube-roots, and
others of interest.
Probably I wouldn't have it as built-in, as I have no great interest
in cube roots (other than allowing ∛(x) would be cool). Maybe as an
ordinary macro or function that implements it on top of **, but is a
clearer alternative. nthroot() is another possibility (as is an
infix version `3 root x`, with precedence the same as **).
But I would continue to use sqrt since I first saw that in a language
in '75, and I see no reason to drop it.
Square roots are very common, so it makes sense to have an individual
function for them. At the very least, the implementation of "root"
should have a special case for handling roots 0, 1 and 2.
(As always, I would never have something as a built-in function or
keyword if it could equally well be made in a library. And if you
can't make library functions as efficient as builtins, find a better
way to handle your standard library - standard libraries don't need to
follow platform ABI's.)
If x**y denotes power, then x//y should do the opposite. So x//2 is
square root. I cannot help the proponents of x^y notation... (:-))
On 13/12/2022 15:13, Dmitry A. Kazakov wrote:
On 2022-12-13 15:10, David Brown wrote:
Square roots are very common, so it makes sense to have an individual
function for them. At the very least, the implementation of "root"
should have a special case for handling roots 0, 1 and 2.
(As always, I would never have something as a built-in function or
keyword if it could equally well be made in a library. And if you
can't make library functions as efficient as builtins, find a better
way to handle your standard library - standard libraries don't need
to follow platform ABI's.)
If x**y denotes power, then x//y should do the opposite. So x//2 is
square root. I cannot help the proponents of x^y notation... (:-))
I had thought the same thing. But "//" is more useful for other things (aside from comments), e.g. Python uses it for integer division; I'd
reserved it to construct rational numbers.
But then thinking about it some more:
x**3 means x*x*x
x//3 doesn't mean x/x/x
On 13/12/2022 16:13, Dmitry A. Kazakov wrote:
On 2022-12-13 15:10, David Brown wrote:
On 13/12/2022 01:27, Bart wrote:
On 12/12/2022 22:41, James Harris wrote:
On 12/12/2022 10:26, Bart wrote:
What do you make of David's suggestion to have a "root" function
which I would probably have as
root(2, x) -> square root of x
root(3, x) -> cube root of x
It's OK. It solves the problem of special-casing cube-roots, and
others of interest.
Probably I wouldn't have it as built-in, as I have no great interest
in cube roots (other than allowing ∛(x) would be cool). Maybe as an
ordinary macro or function that implements it on top of **, but is a
clearer alternative. nthroot() is another possibility (as is an
infix version `3 root x`, with precedence the same as **).
But I would continue to use sqrt since I first saw that in a
language in '75, and I see no reason to drop it.
Square roots are very common, so it makes sense to have an individual
function for them. At the very least, the implementation of "root"
should have a special case for handling roots 0, 1 and 2.
(As always, I would never have something as a built-in function or
keyword if it could equally well be made in a library. And if you
can't make library functions as efficient as builtins, find a better
way to handle your standard library - standard libraries don't need
to follow platform ABI's.)
If x**y denotes power, then x//y should do the opposite. So x//2 is
square root. I cannot help the proponents of x^y notation... (:-))
x ⌄ y, perhaps? :-)
On 2022-12-13 16:32, David Brown wrote:
On 13/12/2022 16:13, Dmitry A. Kazakov wrote:
On 2022-12-13 15:10, David Brown wrote:
On 13/12/2022 01:27, Bart wrote:
On 12/12/2022 22:41, James Harris wrote:
On 12/12/2022 10:26, Bart wrote:
What do you make of David's suggestion to have a "root" function
which I would probably have as
root(2, x) -> square root of x
root(3, x) -> cube root of x
It's OK. It solves the problem of special-casing cube-roots, and
others of interest.
Probably I wouldn't have it as built-in, as I have no great
interest in cube roots (other than allowing ∛(x) would be cool).
Maybe as an ordinary macro or function that implements it on top of
**, but is a clearer alternative. nthroot() is another
possibility (as is an infix version `3 root x`, with precedence the
same as **).
But I would continue to use sqrt since I first saw that in a
language in '75, and I see no reason to drop it.
Square roots are very common, so it makes sense to have an
individual function for them. At the very least, the implementation
of "root" should have a special case for handling roots 0, 1 and 2.
(As always, I would never have something as a built-in function or
keyword if it could equally well be made in a library. And if you
can't make library functions as efficient as builtins, find a better
way to handle your standard library - standard libraries don't need
to follow platform ABI's.)
If x**y denotes power, then x//y should do the opposite. So x//2 is
square root. I cannot help the proponents of x^y notation... (:-))
x ⌄ y, perhaps? :-)
I dismissed it because it looks like x or/union y... (:-))
On 13/12/2022 16:38, Dmitry A. Kazakov wrote:
On 2022-12-13 16:32, David Brown wrote:
On 13/12/2022 16:13, Dmitry A. Kazakov wrote:
On 2022-12-13 15:10, David Brown wrote:
On 13/12/2022 01:27, Bart wrote:
On 12/12/2022 22:41, James Harris wrote:
On 12/12/2022 10:26, Bart wrote:
What do you make of David's suggestion to have a "root" function
which I would probably have as
root(2, x) -> square root of x
root(3, x) -> cube root of x
It's OK. It solves the problem of special-casing cube-roots, and
others of interest.
Probably I wouldn't have it as built-in, as I have no great
interest in cube roots (other than allowing ∛(x) would be cool).
Maybe as an ordinary macro or function that implements it on top
of **, but is a clearer alternative. nthroot() is another
possibility (as is an infix version `3 root x`, with precedence
the same as **).
But I would continue to use sqrt since I first saw that in a
language in '75, and I see no reason to drop it.
Square roots are very common, so it makes sense to have an
individual function for them. At the very least, the
implementation of "root" should have a special case for handling
roots 0, 1 and 2.
(As always, I would never have something as a built-in function or
keyword if it could equally well be made in a library. And if you
can't make library functions as efficient as builtins, find a
better way to handle your standard library - standard libraries
don't need to follow platform ABI's.)
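As a minimal sketch of the "root" library function being discussed, with the special cases for roots 0, 1 and 2 (plus 3) handled before falling back on **; this is a hypothetical illustration in Python, not anyone's actual proposal, and the general branch carries the usual small rounding error from computing 1/n:

```python
import math

def root(n, x):
    # Hypothetical "root" function as discussed above:
    # root(2, x) is the square root, root(3, x) the cube root.
    if n == 0:
        raise ValueError("0th root is undefined")
    if n == 1:
        return float(x)
    if n == 2:
        return math.sqrt(x)  # dedicated (usually hardware-backed) sqrt
    if n == 3:
        # Cube root via |x|**(1/3) with the sign restored, so negative
        # x works too (math.cbrt in Python 3.11+ does this directly).
        return math.copysign(abs(x) ** (1.0 / 3.0), x)
    return x ** (1.0 / n)    # general case; 1/n is inexact
```

A language could expose this as an ordinary library function and still special-case the common roots, which is the point being argued above.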
If x**y denotes power, then x//y should do the opposite. So x//2 is
square root. I cannot help the proponents of x^y notation... (:-))
x ⌄ y, perhaps? :-)
I dismissed it because it looks as x or/union y... (:-))
To some (C programmers), "x ^ y" looks like "x xor y", while to others (logicians) it looks like "x and y". We have too few symbols,
especially if we restrict ourselves to ones easily typed on many keyboards!
And what operator shall we use for tetration? "x ^^ y" ? "x *** y" ?
Cube roots are arguably used often enough to be worth implementing
separately from "** (1/3)" or near equivalent.
Really? In what context? Square roots occur all over the place, but
I can't think of a single realistic use of cube roots that are not
general nth roots (such as calculating geometric means). [...]
Then the error from "1/3" is irrelevant. But if this matters to you,
you would need to use serious numerical analysis and computation
beyond the normal scope of this group, such as interval arithmetic
and Chebychev polynomials.
You would almost certainly not use Chebychev's or other polynomials
for calculating cube roots, unless you were making something like a
specialised pipelined implementation in an ASIC or FPGA.
Newton-Raphson iteration, or related algorithms, are simpler and
converge quickly without all the messiness of ranges.
On 12/12/2022 09:11, David Brown wrote:
[I wrote:]
Cube roots are arguably used often enough to be worth implementing
separately from "** (1/3)" or near equivalent.
Really? In what context? Square roots occur all over the place, but
I can't think of a single realistic use of cube roots that are not
general nth roots (such as calculating geometric means). [...]
Well, the most obvious one is finding the edge of a cube of given
volume! Yes, I've done that occasionally. I found a couple of
occurrences of "cbrt" in my solutions to some [~200] of the Euler
problems [I tackle only ones that seem interesting], in both cases to
find upper bounds on how far a calculation of n^3 needs to go for
some condition to hold. Another use was when our prof of number
theory came to me with some twisted cubics [qv] and asked me to find
some solutions to associated cubic equations [sadly, it was ~15 years
ago and I've forgotten the details]. My most recent personal use
seems to have been in constructing a colour map for a Mandelbrot
program, so aesthetic rather than important. IOW, I wouldn't claim
major usage, but not negligible either.
Then the error from "1/3" is irrelevant. But if this matters to you,
you would need to use serious numerical analysis and computation
beyond the normal scope of this group, such as interval arithmetic
and Chebychev polynomials.
You would almost certainly not use Chebychev's or other polynomials
for calculating cube roots, unless you were making something like a
specialised pipelined implementation in an ASIC or FPGA.
Chebychev polynomials have the advantage of minimising the error over
a range, so typically converge significantly faster than other ways
of calculating library functions /as long as/ you can pre-compute the
number of terms needed and thus the actual conversion back into a
normal polynomial. This typically saves an iteration or two /and/
having a loop [as you can unroll it] /and/ testing whether to go
round again, compared with an iterative method. I had a colleague who
was very keen on Padé approximations, but I didn't find them any
faster; admittedly, that was in the '60s and '70s when I was heavily
involved in numerical work for astrophysics [and hitting limits on
f-p numbers, store sizes, time allocations, ...]; I don't know what
the current state of the art is on modern hardware.
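To illustrate the pre-compute-then-convert idea above, here is a sketch in Python with NumPy; the degree and interval are arbitrary choices for this example, and `Chebyshev.fit` does a least-squares fit at sample points rather than a true minimax approximation, so a real library implementation would be tuned more carefully:

```python
import numpy as np
from numpy.polynomial import Chebyshev, Polynomial

# Approximate cbrt on the reduced range [1, 8): after range
# reduction, any positive x can be written as m * 8**k with m in
# [1, 8), so one polynomial on this interval suffices.
xs = np.linspace(1.0, 8.0, 2001)
cheb = Chebyshev.fit(xs, np.cbrt(xs), deg=12)

# Convert once, offline, to an ordinary power-series polynomial
# that can be evaluated with an unrolled Horner scheme at run time -
# no loop, no convergence test.
poly = cheb.convert(kind=Polynomial)
max_err = float(np.max(np.abs(poly(xs) - np.cbrt(xs))))
```

The conversion step is the "actual conversion back into a normal polynomial" mentioned above: all the Chebychev machinery happens once, ahead of time.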
Newton-Raphson iteration, or related algorithms, are simpler and
converge quickly without all the messiness of ranges.
Simpler for casual use, certainly, esp. if you have no inside
knowledge of the hardware. But you still need to do range reduction,
as N-R is only linear if you start a long way from the root. [Atlas
had a nifty facility whereby the f-p exponent was a register, so you
could get reasonably close to an n-th root simply by dividing that
register by n.]
[But my real point here, as in my previous article, is not to promote any particular numerical technique, but to point out that the calculation of library functions in the Real World is not an amateur activity, and needs serious NA, well beyond what most undergraduates encounter, and well beyond the normal scope of this group.]
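The Atlas exponent trick above can be mimicked in software: `math.frexp` exposes the floating-point exponent, dividing it by n gives a starting guess within roughly a factor of two of the root, and a few Newton-Raphson steps polish it. A sketch only (positive x, n >= 1; a serious library version needs the careful NA the poster describes):

```python
import math

def nthroot(x, n):
    # Range reduction in the spirit of the Atlas facility described
    # above: write x = m * 2**e, divide the exponent by n for a rough
    # first guess, then polish with Newton-Raphson.
    assert x > 0 and n >= 1
    _, e = math.frexp(x)          # x = m * 2**e, with 0.5 <= m < 1
    y = math.ldexp(1.0, e // n)   # guess ~ 2**(e/n)
    for _ in range(64):
        # Newton-Raphson step for f(y) = y**n - x
        y_next = ((n - 1) * y + x / y ** (n - 1)) / n
        if abs(y_next - y) <= 1e-15 * y:
            return y_next
        y = y_next
    return y
```

Because the exponent-division guess is already close, only a handful of the quadratically-converging N-R steps are needed, which is the point about range reduction above.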