The SCALE intrinsic allows one to change the
floating point exponent for a REAL entity.
For example,
program foo
real x
x = 1
print *, scale(x,1) ! print 2
end program
This scaling does not incur a floating point
rounding error.
Question. Anyone know why the Fortran standard (aka J3)
restricted X to be a REAL entity? It would seem that X
could be COMPLEX with obvious equivalence of
SCALE(X,N) = CMPLX(SCALE(X%RE,N),SCALE(X%IM,N),KIND(X%IM))
Should the Fortran standard be amended?
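For what it's worth, the proposed equivalence is easy to write out with the
F2008 complex part designators; a minimal sketch (the program name and values
are only illustrative):

program cscale_demo
   implicit none
   complex :: z
   z = (1.0, 3.0)
   ! scale each part by 2**1; SCALE itself introduces no rounding error
   print *, cmplx(scale(z%re, 1), scale(z%im, 1), kind(z))   ! (2.0, 6.0)
end program cscale_demo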
On 11/4/23 2:21 PM, Steven G. Kargl wrote:
The SCALE intrinsic allows one to change the
floating point exponent for a REAL entity.
For example,
program foo
real x
x = 1
print *, scale(x,1) ! print 2
end program
This scaling does not incur a floating point
rounding error.
Question. Anyone know why the Fortran standard (aka J3)
restricted X to be a REAL entity? It would seem that X
could be COMPLEX with obvious equivalence of
SCALE(X,N) = CMPLX(SCALE(X%RE,N),SCALE(X%IM,N),KIND(X%IM))
Should the Fortran standard be amended?
Wow, no answer yet.
It does seem that sometimes Fortran is slow to add features, especially
when need for them isn't shown.
Le 16/11/2023 à 02:28, gah4 a écrit :
On 11/4/23 2:21 PM, Steven G. Kargl wrote:
The SCALE intrinsic allows one to change the floating point exponent
for a REAL entity.
For example,
program foo
real x
x = 1
print *, scale(x,1) ! print 2
end program
This scaling does not incur a floating point rounding error.
Question. Anyone know why the Fortran standard (aka J3) restricted X
to be a REAL entity? It would seem that X could be COMPLEX with
obvious equivalence of
SCALE(X,N) = CMPLX(SCALE(X%RE,N),SCALE(X%IM,N),KIND(X%IM))
Should the Fortran standard be amended?
Wow, no answer yet.
It does seem that sometimes Fortran is slow to add features, especially
when need for them isn't shown.
The reason is maybe because the standard doesn't specify how a complex
number is internally represented. In practice it is always represented
by a pair (real, imag), but nothing would prevent a compiler from
representing it by (modulus, argument), for instance. Given that, the
standard cannot guarantee the absence of rounding errors.
The reason is maybe because the standard doesn't specify how a complex
number is internally represented. In practice it is always represented
by a pair (real, imag), but nothing would prevent a compiler from
representing it by (modulus, argument), for instance. Given that, the
standard cannot guarantee the absence of rounding errors.
You are correct that the Fortran standard does not specify
internal details, and this could be extended to COMPLEX.
It would however be quite strange for a Fortran vendor to
use magnitude and phase
given that the Fortran standard does
quite often refer to the real and imaginary parts of a COMPLEX
entity.
Not to mention, the Fortran standard has introduced:
3.60.1
complex part designator
9.4.4 Complex parts
R915 complex-part-designator is designator % RE
or designator % IM
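A small illustration of those designators in use (assuming a compiler at the
F2008 level or later; the values are arbitrary):

program part_designators
   implicit none
   complex :: c
   c%re = 1.0            ! define the real part through its designator
   c%im = 2.0            ! define the imaginary part
   print *, c%re, c%im   ! prints 1.0 and 2.0, modulo list-directed formatting
end program part_designators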
PS: If a Fortran vendor used magnitude and phase, then the vendor
would need to specify a sign convention for the phasor. I'm not
aware of any vendor that does.
Le 16/11/2023 à 21:01, Steven G. Kargl a écrit :
The reason is maybe because the standard doesn't specify how a
complex number is internally represented. In practice it is
always represented by a pair (real, imag), but nothing would
prevent a compiler from representing it by (modulus, argument),
for instance. Given that, the standard cannot guarantee the absence
of rounding errors.
You are correct that the Fortran standard does not specify
internal details, and this could be extended to COMPLEX.
It would however be quite strange for a Fortran vendor to
use magnitude and phase
I fully agree that it would be strange, and I can't see any advantage
to such implementation. Yet, it is not prohibited by the standard.
given that the Fortran standard does
quite often refer to the real and imaginary parts of a COMPLEX
entity.
Yes, but it's at the conceptual level.
Not to mention, the Fortran standard has introduced:
3.60.1
complex part designator
9.4.4 Complex parts
R915 complex-part-designator is designator % RE
or designator % IM
Yes again, but under the hood c%re and c%im could be the functions
m*cos(p) and m*sin(p). And on assignment c%re = <expr> or c%im =
<expr>, the (m,p) pair could be fully recomputed.
PS: If a Fortran vendor used magnitude and phase, then the vendor
would need to specify a sign convention for the phasor. I'm not
aware of any vendor that does.
I don't think so, as the phase component would not be directly
accessible by the user. The vendor could choose any convention as
long as the whole internal machinery is consistent; they could also
choose to store a scaled version of the phase in order to get better
accuracy...
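A sketch of what that hypothetical (modulus, phase) layout might look like as a
user-level derived type, with %re/%im-style accessors computed on the fly (the
type and all names here are invented for illustration):

module polar_complex
   implicit none
   type :: polar_t
      real :: m = 0.0    ! modulus
      real :: p = 0.0    ! phase, in whatever internal convention is chosen
   contains
      procedure :: re => polar_re   ! "real part" computed as m*cos(p)
      procedure :: im => polar_im   ! "imaginary part" computed as m*sin(p)
   end type polar_t
contains
   pure real function polar_re(z)
      class(polar_t), intent(in) :: z
      polar_re = z%m * cos(z%p)
   end function polar_re
   pure real function polar_im(z)
      class(polar_t), intent(in) :: z
      polar_im = z%m * sin(z%p)
   end function polar_im
end module polar_complex

With such a layout an assignment to either part would indeed have to recompute
the whole (m,p) pair, which is where the rounding mentioned above creeps in.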
pehache <pehache.7@gmail.com> schrieb:
Le 16/11/2023 à 02:28, gah4 a écrit :
On 11/4/23 2:21 PM, Steven G. Kargl wrote:
The SCALE intrinsic allows one to change the
floating point exponent for a REAL entity.
For example,
program foo
real x
x = 1
print *, scale(x,1) ! print 2
end program
This scaling does not incur a floating point
rounding error.
Question. Anyone know why the Fortran standard (aka J3)
restricted X to be a REAL entity? It would seem that X
could be COMPLEX with obvious equivalence of
SCALE(X,N) = CMPLX(SCALE(X%RE,N),SCALE(X%IM,N),KIND(X%IM))
Should the Fortran standard be amended?
Wow, no answer yet.
It does seem that sometimes Fortran is slow to add features, especially
when need for them isn't shown.
The reason is maybe because the standard doesn't specify how a complex
number is internally represented.
I disagree almost entirely.
Subclause 19.6.5 of F2018, "Events that cause variables to become
defined" has
(13) When a default complex entity becomes defined, all partially
associated default real entities become defined.
(14) When both parts of a default complex entity become defined as
a result of partially associated default real or default complex
entities becoming defined, the default complex entity becomes
defined.
Which means that something like
real :: a(2)
complex :: c
equivalence (a,c)
allows you to set values for a(1) and a(2) and you can expect the
components of c to get the corresponding values.
This is important for FFT.
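A complete, runnable version of that storage-association example (assuming
default real and default complex kinds, and the usual (re,im) layout):

program equiv_demo
   implicit none
   real :: a(2)
   complex :: c
   equivalence (a, c)
   a(1) = 3.0        ! defining both partially associated default reals...
   a(2) = 4.0
   print *, c        ! ...defines the complex entity: prints (3.0, 4.0)
end program equiv_demo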
There seems no reason why the standard might not be extended to allow
the two different types of representations of complex variables to
exist in the same program, as separate data-types, and to interact when required. Two major questions are:
(i) whether there are any applications that would be more readily and usefully programmed using the modulus-phase representation?
(ii) the relative speed of both addition and multiplication in the two representations?
David Jones <dajhawkxx@nowherel.com> schrieb:
There seems no reason why the standard might not be extended to allow
the two different types of representations of complex variables to
exist in the same program, as separate data-types, and to interact when
required. Two major questions are:
(i) whether there are any applications that would be more readily and
usefully programmed using the modulus-phase representation?
(ii) the relative speed of both addition and multiplication in the two
representations?
Multiplication and especially division would likely be faster - you
would have to multiply the two moduli, add the phases, and normalize
the phase to lie between 0 and 2*pi.
However, the normalization step can have unintended execution speed consequences if the processor implements it via branches, and branches
can be quite expensive if mispredicted.
_Addition_ is very expensive indeed in polar notation. You have to
calculate the sin() and cos() of each number, add them, and then call
atan2() (with a normalization) to get back the original representation.
If you're doing a lot of multiplication, and not a lot of addition,
that could actually pay off.
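To make the asymmetry concrete, a rough sketch with a hypothetical (modulus,
phase) pair held in a length-2 real array (the representation and all names are
purely illustrative):

program polar_ops
   implicit none
   real, parameter :: two_pi = 8.0 * atan(1.0)
   real :: a(2), b(2), prod(2), total(2), xr, xi
   a = [2.0, 0.5]                        ! modulus 2.0, phase 0.5 rad
   b = [3.0, 1.0]                        ! modulus 3.0, phase 1.0 rad
   ! multiplication: multiply moduli, add phases, renormalize (cheap)
   prod(1) = a(1) * b(1)
   prod(2) = modulo(a(2) + b(2), two_pi)
   ! addition: convert to rectangular, add, convert back (sin/cos/hypot/atan2)
   xr = a(1)*cos(a(2)) + b(1)*cos(b(2))
   xi = a(1)*sin(a(2)) + b(1)*sin(b(2))
   total(1) = hypot(xr, xi)
   total(2) = modulo(atan2(xi, xr), two_pi)
   print *, 'product (m,p):', prod
   print *, 'sum     (m,p):', total
end program polar_ops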
On Sun, 19 Nov 2023 13:28:18 +0000, Thomas Koenig wrote:
David Jones <dajhawkxx@nowherel.com> schrieb:
There seems no reason why the standard might not be extended to allow
the two different types of representations of complex variables to
exist in the same program, as separate data-types, and to interact when
required. Two major questions are:
(i) whether there are any applications that would be more readily and
usefully programmed using the modulus-phase representation?
(ii) the relative speed of both addition and multiplication in the two
representations?
Multiplication and especially division would likely be faster - you
would have to multiply the two moduli, add the phases, and normalize
the phase to lie between 0 and 2*pi.
However, the normalization step can have unintended execution speed consequences if the processor implements it via branches, and
branches can be quite expensive if mispredicted.
Addition is very expensive indeed in polar notation. You have to
calculate the sin() and cos() of each number, add them, and then
call atan2() (with a normalization) to get back the original representation.
If you're doing a lot of multiplication, and not a lot of addition,
that could actually pay off.
If a vendor used magnitude and phase as the internal representation,
then that vendor would not be around very long. Consider cmplx(0,1).
The magnitude is easy. It is 1. Mathematically, the phase is
pi/2, which is of course not exactly representable.
% tlibm acos -f -a 0.
x = 0.00000000e+00f, /* 0x00000000 */
libm = 1.57079637e+00f, /* 0x3fc90fdb */
mpfr = 1.57079637e+00f, /* 0x3fc90fdb */
ULP = 0.36668
% tlibm cos -f -a 1.57079637
x = 1.57079625e+00f, /* 0x3fc90fda */
libm = 7.54979013e-08f, /* 0x33a22169 */
mpfr = 7.54979013e-08f, /* 0x33a22169 */
ULP = 0.24138
7.549... is significantly different when compared to 0.
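The same point can be reproduced from Fortran itself; a tiny sketch, assuming
default real is IEEE single precision and a reasonably accurate math library:

program phase_roundoff
   implicit none
   real :: half_pi
   half_pi = 2.0 * atan(1.0)   ! nearest default-real value to pi/2
   print *, cos(half_pi)       ! tiny but nonzero, so the round trip is inexact
end program phase_roundoff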
Steven G. Kargl wrote:
On Sun, 19 Nov 2023 13:28:18 +0000, Thomas Koenig wrote:
David Jones <dajhawkxx@nowherel.com> schrieb:
There seems no reason why the standard might not be extended to allow
the two different types of representations of complex variables to
exist in the same program, as separate data-types, and to interact when
required. Two major questions are:
(i) whether there are any applications that would be more readily and
usefully programmed using the modulus-phase representation?
(ii) the relative speed of both addition and multiplication in the two
representations?
Multiplication and especially division would likely be faster - you
would have to multiply the two moduli, add the phases, and normalize
the phase to lie between 0 and 2*pi.
However, the normalization step can have unintended execution speed
consequences if the processor implements it via branches, and
branches can be quite expensive if mispredicted.
Addition is very expensive indeed in polar notation. You have to
calculate the sin() and cos() of each number, add them, and then
call atan2() (with a normalization) to get back the original
representation.
If you're doing a lot of multiplication, and not a lot of addition,
that could actually pay off.
If a vendor used magnitude and phase as the internal representation,
then that vendor would not be around very long. Consider cmplx(0,1).
The magnitude is easy. It is 1. Mathematically, the phase is
pi/2, which is of course not exactly representable.
% tlibm acos -f -a 0.
x = 0.00000000e+00f, /* 0x00000000 */
libm = 1.57079637e+00f, /* 0x3fc90fdb */
mpfr = 1.57079637e+00f, /* 0x3fc90fdb */
ULP = 0.36668
% tlibm cos -f -a 1.57079637
x = 1.57079625e+00f, /* 0x3fc90fda */
libm = 7.54979013e-08f, /* 0x33a22169 */
mpfr = 7.54979013e-08f, /* 0x33a22169 */
ULP = 0.24138
7.549... is significantly different when compared to 0.
If it were worth doing, the obvious thing to do would be to use a
formulation where you store a multiple of pi or 2*pi as the effective argument, with computations done to respect a standard range.
David Jones <dajhawkxx@nowherel.com> schrieb:
Steven G. Kargl wrote:
On Sun, 19 Nov 2023 13:28:18 +0000, Thomas Koenig wrote:
David Jones <dajhawkxx@nowherel.com> schrieb:
There seems no reason why the standard might not be extended to allow
the two different types of representations of complex variables to
exist in the same program, as separate data-types, and to interact when
required. Two major questions are:
(i) whether there are any applications that would be more readily and
usefully programmed using the modulus-phase representation?
(ii) the relative speed of both addition and multiplication in the two
representations?
Multiplication and especially division would likely be faster - you
would have to multiply the two moduli, add the phases, and normalize
the phase to lie between 0 and 2*pi.
However, the normalization step can have unintended execution speed
consequences if the processor implements it via branches, and
branches can be quite expensive if mispredicted.
Addition is very expensive indeed in polar notation. You have to
calculate the sin() and cos() of each number, add them, and then
call atan2() (with a normalization) to get back the original
representation.
If you're doing a lot of multiplication, and not a lot of addition,
that could actually pay off.
If a vendor used magnitude and phase as the internal representation,
then that vendor would not be around very long. Consider cmplx(0,1).
The magnitude is easy. It is 1. Mathematically, the phase is
pi/2, which is of course not exactly representable.
% tlibm acos -f -a 0.
x = 0.00000000e+00f, /* 0x00000000 */
libm = 1.57079637e+00f, /* 0x3fc90fdb */
mpfr = 1.57079637e+00f, /* 0x3fc90fdb */
ULP = 0.36668
% tlibm cos -f -a 1.57079637
x = 1.57079625e+00f, /* 0x3fc90fda */
libm = 7.54979013e-08f, /* 0x33a22169 */
mpfr = 7.54979013e-08f, /* 0x33a22169 */
ULP = 0.24138
7.549... is significantly different when compared to 0.
If it were worth doing, the obvious thing to do would be to use a formulation where you store a multiple of pi or 2*pi as the
effective argument, with computations done to respect a standard
range.
It could also make sense to use a fixed-point representation for
the phase; having special accuracy around zero, as floating point
numbers do, may not be a large advantage.
The normalization step could then be a simple "and", masking
away the top bits.
This is, however, more along the lines of what a user-defined
complex type could look like, not what Fortran compilers could
reasonably provide :-)
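A sketch of that user-defined flavour, storing the phase as a fixed-point
fraction of a full turn so that normalization is a single IAND (the module and
all names are invented for illustration):

module fixed_phase
   implicit none
   integer, parameter :: phase_bits = 30
   integer, parameter :: phase_mask = 2**phase_bits - 1   ! low 30 bits hold the phase
contains
   ! Add two phases stored as fractions of a turn scaled by 2**30;
   ! masking away the top bits performs the "mod 2*pi" step.
   pure integer function add_phase(p1, p2)
      integer, intent(in) :: p1, p2
      add_phase = iand(p1 + p2, phase_mask)
   end function add_phase
   ! Convert a stored phase to radians only when a trig value is needed.
   pure real function phase_to_radians(p)
      integer, intent(in) :: p
      phase_to_radians = real(p) * (8.0 * atan(1.0)) / real(2**phase_bits)
   end function phase_to_radians
end module fixed_phase

The granularity is then a uniform 2*pi/2**30 over the whole circle, which is in
line with the point about not needing extra accuracy near zero.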