My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the
code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know whether any compiler does it this way.
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the
code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know whether any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work correctly.
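A minimal sketch of that example (assuming a hosted C implementation;
the printed value depends only on the target's int width, not the
host's):

    #include <stdio.h>

    /* The initializer is an integer constant expression, so its value is
       determined at translation time.  With a 16-bit unsigned int it wraps
       to 0u; with a 32-bit (or wider) unsigned int it is 65536u.  A
       conforming compiler must fold it to the *target's* value. */
    static const unsigned int folded = 65535u + 1u;

    int main(void) {
        printf("65535u + 1u = %u\n", folded);
        return 0;
    }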
On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the code.
What happens in this case?
A constant expression must be evaluated in the way that would happen
if it were translated to code on the target machine.
Thus, if necessary, the features of the target machine's arithmetic must
be simulated on the build machine.
(Modulo issues not relevant to the debate, like if the expression
has ambiguous evaluation orders that affect the result, or undefined
behaviors, they don't have to play out the same way under different
modes of processing in the same implementation.)
The solution I can think of is emulation when evaluating constant
expressions. But I don't know whether any compiler does it this way.
They have to; if a constant-folding optimization produces a different
result (in an expression which has no issue) that is then an incorrect
optimization.
GCC uses arbitrary-precision libraries (GNU GMP for integer, and GNU
MPFR for floating-point), which are in part for this issue, I think.
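As a sketch of that idea (not GCC's actual internals, just the shape of
it): fold the value exactly with GMP's arbitrary-precision integers,
then reduce it modulo 2^N, where N is the target's unsigned int width.

    #include <stdio.h>
    #include <gmp.h>

    int main(void) {
        const unsigned long target_uint_bits = 16;   /* pretend the target has 16-bit int */

        mpz_t a, b, sum;
        mpz_init(a);
        mpz_init(b);
        mpz_init(sum);

        mpz_set_ui(a, 65535u);
        mpz_set_ui(b, 1u);
        mpz_add(sum, a, b);                          /* exact result: 65536 */
        mpz_fdiv_r_2exp(sum, sum, target_uint_bits); /* wrap to the target width: 0 */

        gmp_printf("folded value for the target: %Zd\n", sum);

        mpz_clear(a);
        mpz_clear(b);
        mpz_clear(sum);
        return 0;
    }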
On 2025-08-29 16:19, Kaz Kylheku wrote:
On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the code.
What happens in this case?
A constant expression must be evaluated in the way that would happen
if it were translated to code on the target machine.
Thus, if necessary, the features of the target machine's arithmetic must
be simulated on the build machine.
The solution I can think of is emulation when evaluating constant
expressions. But I don't know whether any compiler does it this way.
They have to; if a constant-folding optimization produces a different
result (in an expression which has no issue) that is then an incorrect
optimization.
Emulation is necessary only if the value of the constant expression
changes which code is generated. If the value is simply used in the
calculations, then it can be calculated at run time on the target
machine, as if done before the start of main().
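A sketch of that distinction (hypothetical code): the array size below
changes what the compiler must emit, so it has to be folded, with the
target's semantics, at compile time; the initializer of x could, per
the reasoning above, be computed on the target before main() starts.

    #include <stdio.h>

    #define N (65535u + 1u)          /* value depends on the target's int width */

    static unsigned char buf[N + 1]; /* selects how much storage is emitted:
                                        must be known at compile time */
    static unsigned int x = N;       /* merely a value: could in principle be
                                        computed by start-up code on the target */

    int main(void) {
        printf("sizeof buf = %zu, x = %u\n", sizeof buf, x);
        return 0;
    }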
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the
code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know whether any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work correctly.
Yes, this is the kind of example I had in mind.
So in theory it has to be the same result. This may be hard to achieve.
On 29/08/2025 22:10, Thiago Adams wrote:
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know whether any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work correctly.
So in theory it has to be the same result. This may be hard to achieve.
Yes, it can be hard to achieve in some cases. For things like integer arithmetic, it's no serious challenge - floating point is the biggie for
the challenge of getting the details correct when the host and the
target are different. (And even if the compiler is native, different floating point options can lead to significantly different results.)
Compilers have to make sure they can do compile-time evaluation that is bit-perfect to run-time evaluation before they can use it as an
optimisation - any risk of an error is a bug in the compiler. I don't
know about other compilers, but gcc has a /huge/ library that is used to simulate floating point on a wide range of targets and options,
precisely so that it can get this right.
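One way to see whether a given compiler and target live up to that
(just a sketch; the volatile qualifier is only there to keep the
compiler from folding the run-time side):

    #include <stdio.h>

    int main(void) {
        /* A constant expression the compiler is free to fold at compile time. */
        double folded = 1.0 / 3.0 * 3.0;

        /* The same computation forced to happen at run time on the target. */
        volatile double a = 1.0, b = 3.0;
        double runtime = a / b * b;

        printf("folded  = %.17g\n", folded);
        printf("runtime = %.17g\n", runtime);
        printf("%s\n", folded == runtime ? "equal" : "DIFFERENT");
        return 0;
    }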
On 9/1/2025 5:10 AM, David Brown wrote:
On 29/08/2025 22:10, Thiago Adams wrote:
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know whether any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work correctly.
So in theory it has to be the same result. This may be hard to achieve.
Yes, it can be hard to achieve in some cases. For things like integer
arithmetic, it's no serious challenge - floating point is the biggie
for the challenge of getting the details correct when the host and the
target are different. (And even if the compiler is native, different
floating point options can lead to significantly different results.)
Compilers have to make sure they can do compile-time evaluation that
is bit-perfect to run-time evaluation before they can use it as an
optimisation - any risk of an error is a bug in the compiler. I don't
know about other compilers, but gcc has a /huge/ library that is used
to simulate floating point on a wide range of targets and options,
precisely so that it can get this right.
Interesting.
Yes, I think for integers it is not so difficult.
If the compiler has the range int8_t ... int64_t, then it is just a
matter of selecting the fixed-size type that corresponds to the
abstract type for that platform.
For floating point, I think that at least for "desktop" computers the
result may be the same.
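A sketch of that selection idea (hypothetical helpers, not from any
real compiler): pick the host fixed-width type that matches the
target's unsigned int and let it wrap.

    #include <stdint.h>
    #include <stdio.h>

    /* Fold 65535u + 1u as a target with 16-bit unsigned int would.  Note
       that the host's integer promotions apply to a + b, hence the cast
       back to uint16_t to reproduce the target's wraparound. */
    static uint16_t fold_for_16bit_target(void) {
        uint16_t a = 65535u;
        uint16_t b = 1u;
        return (uint16_t)(a + b);   /* 0, as on the target */
    }

    /* ... and as a target with 32-bit unsigned int would. */
    static uint32_t fold_for_32bit_target(void) {
        uint32_t a = 65535u;
        uint32_t b = 1u;
        return a + b;               /* 65536 */
    }

    int main(void) {
        printf("16-bit target: %u\n",  (unsigned)fold_for_16bit_target());
        printf("32-bit target: %lu\n", (unsigned long)fold_for_32bit_target());
        return 0;
    }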
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation that
is bit-perfect to run-time evaluation before they can use it as an
optimisation - any risk of an error is a bug in the compiler. I don't
know about other compilers, but gcc has a /huge/ library that is used
to simulate floating point on a wide range of targets and options,
precisely so that it can get this right.
What library are you referring to? Is it something internal to gcc?
I know gcc uses GMP, but that's not really floating-point emulation.
On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use it
as an optimisation - any risk of an error is a bug in the
compiler. I don't know about other compilers, but gcc has a
/huge/ library that is used to simulate floating point on a wide
range of targets and options, precisely so that it can get this
right.
What library are you referring to? Is it something internal to gcc?
I know gcc uses GMP, but that's not really floating-point
emulation.
GCC uses not only GNU GMP but also GNU MPFR.
On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
Kaz Kylheku <643-408-1753@kylheku.com> wrote:
On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use it
as an optimisation - any risk of an error is a bug in the
compiler. I don't know about other compilers, but gcc has a
/huge/ library that is used to simulate floating point on a wide
range of targets and options, precisely so that it can get this
right.
What library are you referring to? Is it something internal to gcc?
I know gcc uses GMP, but that's not really floating-point
emulation.
GCC also uses not only GNU GMP but also GNU MPFR.
MPFR is of no help when you want to emulate the exact behavior of a
particular hardware format.
On 2025-09-02, Michael S <already5chosen@yahoo.com> wrote:
On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
Kaz Kylheku <643-408-1753@kylheku.com> wrote:
On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com>
wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use
it as an optimisation - any risk of an error is a bug in the
compiler. I don't know about other compilers, but gcc has a
/huge/ library that is used to simulate floating point on a wide
range of targets and options, precisely so that it can get this
right.
What library are you referring to? Is it something internal to
gcc? I know gcc uses GMP, but that's not really floating-point
emulation.
GCC also uses not only GNU GMP but also GNU MPFR.
MPFR is of no help when you want to emulate the exact behavior of a
particular hardware format.
Then why is it there?
David Brown <david.brown@hesbynett.no> wrote:
On 29/08/2025 22:10, Thiago Adams wrote:
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that
runs the code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know whether any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work
correctly.
So in theory it has to be the same result. This may be hard to
achieve.
Yes, it can be hard to achieve in some cases. For things like
integer arithmetic, it's no serious challenge - floating point is
the biggie for the challenge of getting the details correct when
the host and the target are different. (And even if the compiler
is native, different floating point options can lead to
significantly different results.)
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use it
as an optimisation - any risk of an error is a bug in the compiler.
I don't know about other compilers, but gcc has a /huge/ library
that is used to simulate floating point on a wide range of targets
and options, precisely so that it can get this right.
For the majority of "interesting" targets, native compilers do not
exist. AFAIK, in normal mode gcc does not consider differences between
compile-time and run-time evaluation of floating point constants to be
a bug. And they may differ, with compile-time evaluation usually giving
more accuracy. OTOH, they care very much that the cross-compiler and
the native compiler produce the same results.
So they do not use native floating point arithmetic to evaluate
constants. Rather, both the native compiler and the cross compiler
use the same portable library (that is, MPFR). One can probably
request a stricter mode.
If it is available, then I do not know how it is done. One
possible approach is to delay anything non-trivial to run time.
For floating point, non-trivial likely means transcendental
functions: different libraries will almost surely produce different
results, and for legal reasons alone the compiler cannot assume
access to the target library.
The four ordinary arithmetic operations for IEEE are easy: rounding is
handled by MPFR, and things like overflow, infinities, etc. are just
a bunch of tedious special cases. But transcendental functions
usually do not have well-specified rounding behaviour, so exact
rounding in MPFR is of no help when trying to reproduce results
from runtime libraries.
Old (and possibly some new) embedded targets are in a sense more "interesting", as they implemented basic operations in software,
frequently taking some shortcuts to gain speed.
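For the easy cases, a sketch of what emulating one IEEE-754 double
addition with MPFR looks like (this is not GCC's code; exponent range,
subnormals, infinities and NaNs are exactly the "tedious special
cases" mentioned above and are not handled here):

    #include <stdio.h>
    #include <mpfr.h>

    int main(void) {
        mpfr_t a, b, r;
        mpfr_init2(a, 53);              /* 53 bits: the precision of IEEE double */
        mpfr_init2(b, 53);
        mpfr_init2(r, 53);

        mpfr_set_d(a, 0.1, MPFR_RNDN);
        mpfr_set_d(b, 0.2, MPFR_RNDN);
        mpfr_add(r, a, b, MPFR_RNDN);   /* round to nearest, like the target's '+' */

        printf("emulated: %.17g\n", mpfr_get_d(r, MPFR_RNDN));
        printf("native:   %.17g\n", 0.1 + 0.2);

        mpfr_clear(a);
        mpfr_clear(b);
        mpfr_clear(r);
        return 0;
    }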
On Tue, 2 Sep 2025 16:58:45 -0000 (UTC)
Kaz Kylheku <643-408-1753@kylheku.com> wrote:
On 2025-09-02, Michael S <already5chosen@yahoo.com> wrote:
On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
Kaz Kylheku <643-408-1753@kylheku.com> wrote:
On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com>
wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use
it as an optimisation - any risk of an error is a bug in the
compiler. I don't know about other compilers, but gcc has a
/huge/ library that is used to simulate floating point on a wide
range of targets and options, precisely so that it can get this
right.
What library are you referring to? Is it something internal to
gcc? I know gcc uses GMP, but that's not really floating-point
emulation.
GCC also uses not only GNU GMP but also GNU MPFR.
MPFR is of no help when you want to emulate an exact behavior of
particular hardware format.
Then why is it there?
Most likely because people who think that compilers make an
extraordinary effort to match FP results evaluated at compile time
with those evaluated at run time do not know what they are talking about.
As suggested above by Waldek Hebisch, compilers are quite happy to do compile-time evaluation at higher (preferably much higher) precision
than at run time.
On 01/09/2025 23:11, Keith Thompson wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation that
is bit-perfect to run-time evaluation before they can use it as an
optimisation - any risk of an error is a bug in the compiler. I don't
know about other compilers, but gcc has a /huge/ library that is used
to simulate floating point on a wide range of targets and options,
precisely so that it can get this right.
What library are you referring to? Is it something internal to gcc?
I know gcc uses GMP, but that's not really floating-point emulation.
I am afraid I don't know the details here, and to what extent it is
internal to the GCC project or external. I /think/, but I could easily
be wrong, that general libraries like GMP are used for the actual calculations, while there is GCC-specific stuff to make sure things
match up with the target details.