• How does cross compilation work?

    From Thiago Adams@thiago.adams@gmail.com to comp.lang.c on Fri Aug 29 15:46:44 2025
    From Newsgroup: comp.lang.c

    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs
    the code. What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions, but I don't know if any compiler does it this way.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Fri Aug 29 12:54:27 2025
    From Newsgroup: comp.lang.c

    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the
    code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions, but I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise. (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)
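
    A minimal sketch of that example, assuming only standard C: the same
    translation unit prints a different value depending on the width of
    the target's unsigned int.

        #include <stdio.h>

        int main(void)
        {
            /* 65535u and 1u have type unsigned int; unsigned arithmetic
               wraps modulo UINT_MAX + 1, so the sum is 0u where int is
               16 bits and 65536u where it is 32 bits or wider. */
            printf("%lu\n", (unsigned long)(65535u + 1u));
            return 0;
        }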

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system. Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Thiago Adams@thiago.adams@gmail.com to comp.lang.c on Fri Aug 29 17:10:25 2025
    From Newsgroup: comp.lang.c

    Em 29/08/2025 16:54, Keith Thompson escreveu:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the
    code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions, but I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise. (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    Yes, this is the kind of example I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system. Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    So in theory it has to be the same result. This may be hard to achieve.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Fri Aug 29 20:19:34 2025
    From Newsgroup: comp.lang.c

    On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs
    the code. What happens in this case?

    A constant expression must be evaluated in the way that would happen
    if it were translated to code on the target machine.

    Thus, if necessary, the features of the target machine's arithmetic must
    be simulated on the build machine.

    (Modulo issues not relevant to the debate, such as expressions whose
    evaluation order is ambiguous in a way that affects the result, or
    undefined behaviors; those don't have to play out the same way under
    different modes of processing in the same implementation.)

    The solution I can think of is emulation when evaluating constant
    expressions, but I don't know if any compiler does it this way.

    They have to; if a constant-folding optimization produces a different
    result (for an expression that has no such issues), then that is an
    incorrect optimization.
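
    A rough sketch of that target-faithful folding (a hypothetical helper,
    not GCC's actual code): do the arithmetic in a wide host type and
    reduce it modulo the width of the target's unsigned int.

        #include <stdint.h>

        /* Fold a + b as the target's 'unsigned int' would, given the
           target's int width in bits (16, 32, ...). */
        static uint64_t fold_uadd(uint64_t a, uint64_t b, unsigned target_int_bits)
        {
            uint64_t mask = (target_int_bits >= 64)
                                ? UINT64_MAX
                                : ((UINT64_C(1) << target_int_bits) - 1);
            return (a + b) & mask;   /* wraps exactly as the target would */
        }

        /* fold_uadd(65535, 1, 16) == 0;  fold_uadd(65535, 1, 32) == 65536 */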

    GCC uses arbitrary-precision libraries (GNU GMP for integers, and GNU
    MPFR for floating point), which are in part for this issue, I think.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From cross@cross@spitfire.i.gajendra.net (Dan Cross) to comp.lang.c on Fri Aug 29 21:54:16 2025
    From Newsgroup: comp.lang.c

    In article <20250829131023.130@kylheku.com>,
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs
    the code. What happens in this case?

    A constant expression must be evaluated in the way that would happen
    if it were translated to code on the target machine.

    Thus, if necessary, the features of the target machine's arithmetic must
    be simulated on the build machine.

    (Modulo issues not relevant to the debate, such as expressions whose
    evaluation order is ambiguous in a way that affects the result, or
    undefined behaviors; those don't have to play out the same way under
    different modes of processing in the same implementation.)

    The solution I can think of is emulation when evaluating constant
    expressions, but I don't know if any compiler does it this way.

    They have to; if a constant-folding optimization produces a different
    result (for an expression that has no such issues), then that is an
    incorrect optimization.

    GCC uses arbitrary-precision libraries (GNU GMP for integer, and GNU
    MPFR for floating-point), which are in part for this issue, I think.

    Dealing with integer arithmetic, boolean expressions, character
    manipulation, and so on is often pretty straightforward to handle
    for a given target system at compile time. The thing that throws a
    lot of systems off is floating point: there exist FPUs with
    different hardware characteristics, even within a single
    architectural family, that can yield different results in a way
    that is simply unknowable until runtime. A classic example is
    hardware that uses 80-bit internal representations for
    double-precision FP arithmetic, versus a 64-bit representation. In
    that world, unless you know precisely what microarchitecture the
    program is going to run on, you just can't make a "correct"
    decision at compile time at all in the general case.
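
    A classic illustration (a sketch; the outcome depends on compiler
    flags and the FPU in use): with x87 excess precision the fresh
    quotient may still carry 80-bit precision while the stored double has
    already been rounded to 64 bits, so the comparison can fail, whereas
    with strict binary64 arithmetic (e.g. SSE2) it holds.

        #include <stdio.h>

        int main(void)
        {
            volatile double a = 1.0, b = 3.0; /* volatile: keep the division at run time */
            double q = a / b;                 /* stored: rounded to a 64-bit double */
            /* On x87 with excess precision, a / b below may be compared at
               80-bit precision against the 64-bit q, making the test false. */
            printf("equal: %d\n", q == a / b);
            return 0;
        }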

    - Dan C.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c on Fri Aug 29 20:20:14 2025
    From Newsgroup: comp.lang.c

    On 2025-08-29 16:19, Kaz Kylheku wrote:
    On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs
    the code. What happens in this case?

    A constant expression must be evaluated in the way that would happen
    if it were translated to code on the target machine.

    Thus, if necessary, the features of the target machine's arithmetic must
    be simulated on the build machine.
    The solution I can think of is emulation when evaluating constant
    expressions, but I don't know if any compiler does it this way.

    They have to; if a constant-folding optimization produces a different
    result (for an expression that has no such issues), then that is an
    incorrect optimization.

    Emulation is necessary only if the value of the constant expression
    changes which code is generated. If the value is simply used in
    calculations, then it can be computed at run time on the target
    machine, as if done before the start of main().
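
    A rough sketch of that distinction (hypothetical names, assuming only
    standard C): some uses of a constant expression decide what the
    compiler emits, while a static initializer only needs the value, which
    could in principle be filled in before main() starts.

        #include <limits.h>

        #if UINT_MAX == 65535u            /* decides which code/types are emitted:
                                             must be evaluated by the compiler */
        typedef long int32ish;
        #else
        typedef int int32ish;
        #endif

        char buffer[(65535u + 1u == 0u) ? 32 : 64];  /* array size: also needs
                                                        the compile-time value */

        static unsigned wrapped = 65535u + 1u;  /* static initializer: the value
                                                   could be computed at startup */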
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Sat Aug 30 01:00:35 2025
    From Newsgroup: comp.lang.c

    On 2025-08-30, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    On 2025-08-29 16:19, Kaz Kylheku wrote:
    On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs
    the code.
    What happens in this case?

    A constant expression must be evaluated in the way that would happen
    if it were translated to code on the target machine.

    Thus, if necessary, the features of the target machine's arithmetic must
    be simulated on the build machine.
    The solution I can think of is emulation when evaluating constant
    expressions, but I don't know if any compiler does it this way.

    They have to; if a constant-folding optimization produces a different
    result (for an expression that has no such issues), then that is an
    incorrect optimization.

    Emulation is necessary only if the value of the constant expression
    changes which code is generated. If the value is simply used in
    calculations, then it can be computed at run time on the target
    machine, as if done before the start of main().

    But since the former situation occurs regularly (e.g. dead-code
    elimination based on conditionals with constant test expressions), you
    will need to implement that target evaluation strategy anyway. Then, if
    you have it, why wouldn't you just use it for all constant expressions,
    rather than make arrangements for load-time initialization?
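
    For instance (a sketch of the kind of constant test meant here), the
    compiler has to know the target's result just to decide which branch
    survives:

        void setup(void)
        {
            if (65535u + 1u == 0u) {
                /* reached only where int is 16 bits wide; elsewhere this
                   branch is discarded as dead code */
            } else {
                /* setup for targets with a wider int */
            }
        }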
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Mon Sep 1 10:10:17 2025
    From Newsgroup: comp.lang.c

    On 29/08/2025 22:10, Thiago Adams wrote:
    Em 29/08/2025 16:54, Keith Thompson escreveu:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the
    code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions, but I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise.  (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    Yes, this is the kind of example I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system.  Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    So in theory it has to be the same result. This may be hard to achieve.


    Yes, it can be hard to achieve in some cases. For things like integer arithmetic, it's no serious challenge - floating point is the biggie for
    the challenge of getting the details correct when the host and the
    target are different. (And even if the compiler is native, different
    floating point options can lead to significantly different results.)

    Compilers have to make sure they can do compile-time evaluation that is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.
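
    Concrete examples of what bit-perfect means here (assuming an IEEE-754
    binary32/binary64 target): if the compiler folds these initializers,
    the stored bit patterns must match what the target's own arithmetic
    would produce.

        /* On an IEEE-754 target:
           1.0f / 3.0f rounds to 0x3EAAAAAB (about 0.33333334f)
           0.1 + 0.2   rounds to 0x3FD3333333333334 (0.30000000000000004) */
        static const float  third = 1.0f / 3.0f;
        static const double sum   = 0.1 + 0.2;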



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Thiago Adams@thiago.adams@gmail.com to comp.lang.c on Mon Sep 1 08:14:52 2025
    From Newsgroup: comp.lang.c

    On 9/1/2025 5:10 AM, David Brown wrote:
    On 29/08/2025 22:10, Thiago Adams wrote:
    Em 29/08/2025 16:54, Keith Thompson escreveu:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the
    code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions, but I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise.  (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    Yes, this is the kind of example I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system.  Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    So in theory it has to be the same result. This may be hard to achieve.


    Yes, it can be hard to achieve in some cases.  For things like integer arithmetic, it's no serious challenge - floating point is the biggie for
    the challenge of getting the details correct when the host and the
    target are different.  (And even if the compiler is native, different floating point options can lead to significantly different results.)

    Compilers have to make sure they can do compile-time evaluation that is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler.  I don't
    know about other compilers, but gcc has a /huge/ library that is used to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.




    Interesting.
    Yes, I think for integers it is not so difficult.
    If the compiler has the range int8_t ... int64_t, then it is just a
    matter of selecting the fixed size that corresponds to the abstract
    type on that platform.

    For floating point I think that, at least for "desktop" computers, the
    result may be the same.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris M. Thomasson@chris.m.thomasson.1@gmail.com to comp.lang.c on Mon Sep 1 12:48:28 2025
    From Newsgroup: comp.lang.c

    On 9/1/2025 4:14 AM, Thiago Adams wrote:
    On 9/1/2025 5:10 AM, David Brown wrote:
    On 29/08/2025 22:10, Thiago Adams wrote:
    Em 29/08/2025 16:54, Keith Thompson escreveu:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the
    code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions, but I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise.  (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    Yes, this is the kind of example I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system.  Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    So in theory it has to be the same result. This may be hard to achieve.


    Yes, it can be hard to achieve in some cases.  For things like integer
    arithmetic, it's no serious challenge - floating point is the biggie
    for the challenge of getting the details correct when the host and the
    target are different.  (And even if the compiler is native, different
    floating point options can lead to significantly different results.)

    Compilers have to make sure they can do compile-time evaluation that
    is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler.  I don't
    know about other compilers, but gcc has a /huge/ library that is used
    to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.




    Interesting.
    Yes, I think for integers it is not so difficult.
    If the compiler has the range int8_t ... int64_t, then it is just a
    matter of selecting the fixed size that corresponds to the abstract
    type on that platform.

    For floating point I think that, at least for "desktop" computers, the
    result may be the same.

    Think of a program that is sensitive to floating point errors...
    Something like this crazy shit:

    https://groups.google.com/g/comp.lang.c++/c/bB1wA4wvoFc/m/GdzmMd41AQAJ



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Keith Thompson@Keith.S.Thompson+u@gmail.com to comp.lang.c on Mon Sep 1 14:11:47 2025
    From Newsgroup: comp.lang.c

    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation that
    is bit-perfect to run-time evaluation before they can use it as an optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used
    to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.

    What library are you referring to? Is it something internal to gcc?
    I know gcc uses GMP, but that's not really floating-point emulation.
    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Tue Sep 2 06:40:43 2025
    From Newsgroup: comp.lang.c

    On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation that
    is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used
    to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.

    What library are you referring to? Is it something internal to gcc?
    I know gcc uses GMP, but that's not really floating-point emulation.

    GCC also uses not only GNU GMP but also GNU MPFR.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Tue Sep 2 13:06:09 2025
    From Newsgroup: comp.lang.c

    On 01/09/2025 23:11, Keith Thompson wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation that
    is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used
    to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.

    What library are you referring to? Is it something internal to gcc?
    I know gcc uses GMP, but that's not really floating-point emulation.


    I am afraid I don't know the details here, and to what extent it is
    internal to the GCC project or external. I /think/, but I could easily
    be wrong, that general libraries like GMP are used for the actual calculations, while there is GCC-specific stuff to make sure things
    match up with the target details.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Tue Sep 2 14:48:33 2025
    From Newsgroup: comp.lang.c

    On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation
    that is bit-perfect to run-time evaluation before they can use it
    as an optimisation - any risk of an error is a bug in the
    compiler. I don't know about other compilers, but gcc has a
    /huge/ library that is used to simulate floating point on a wide
    range of targets and options, precisely so that it can get this
    right.

    What library are you referring to? Is it something internal to gcc?
    I know gcc uses GMP, but that's not really floating-point
    emulation.

    GCC also uses not only GNU GMP but also GNU MPFR.


    MPFR is of no help when you want to emulate the exact behavior of a
    particular hardware format. What's more, it cannot even emulate the
    exact behavior of the IEEE-754 binary formats, because it uses a much
    wider exponent range than any of them. Even IEEE binary256 has only
    19 exponent bits; OTOH, AFAIR the exponent range in MPFR cannot be
    reduced below 32 bits.

    The problem is not dissimilar to the inability of the x87 FPU to
    emulate the exact behavior of IEEE binary32 and binary64, because x87
    arithmetic ops always use a wider (15-bit) exponent.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Tue Sep 2 16:58:45 2025
    From Newsgroup: comp.lang.c

    On 2025-09-02, Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation
    that is bit-perfect to run-time evaluation before they can use it
    as an optimisation - any risk of an error is a bug in the
    compiler. I don't know about other compilers, but gcc has a
    /huge/ library that is used to simulate floating point on a wide
    range of targets and options, precisely so that it can get this
    right.

    What library are you referring to? Is it something internal to gcc?
    I know gcc uses GMP, but that's not really floating-point
    emulation.

    GCC also uses not only GNU GMP but also GNU MPFR.


    MPFR is of no help when you want to emulate the exact behavior of a
    particular hardware format.

    Then why is it there? I can easily see how such a library can be
    used as the underlying framework for that kind of computation.

    It provides the substrate in which an exact answer can be calculated in
    a platform-independent way, and could then be coerced into a
    platform-specific result according to the abstract rule by which the
    platform reduces the abstract result to the actual one.

    My hard-earned intuition (lovely oxymoron, ha!) says that this would be
    easier than ad-hoc methods not using a floating point calculation
    library.

    Disclaimer: the preceding remarks are just conjecture, not based on
    examining how MPFR is used in the GNU Compiler Collection.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Tue Sep 2 17:32:56 2025
    From Newsgroup: comp.lang.c

    David Brown <david.brown@hesbynett.no> wrote:
    On 29/08/2025 22:10, Thiago Adams wrote:
    Em 29/08/2025 16:54, Keith Thompson escreveu:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that runs the
    code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions, but I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise.  (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    Yes, this is the kind of example I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system.  Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    So in theory it has to be the same result. This may be hard to achieve.


    Yes, it can be hard to achieve in some cases. For things like integer arithmetic, it's no serious challenge - floating point is the biggie for
    the challenge of getting the details correct when the host and the
    target are different. (And even if the compiler is native, different floating point options can lead to significantly different results.)

    Compilers have to make sure they can do compile-time evaluation that is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.

    AFAIK in normal mode gcc does not consider differences between
    compile-time and run-time evaluation of floating point constants to be
    a bug. And they may differ, with compile-time evaluation usually
    giving more accuracy. OTOH they care very much that the cross-compiler
    and the native compiler produce the same results. So they do not use
    native floating point arithmetic to evaluate constants. Rather, both
    the native compiler and the cross-compiler use the same portable
    library (that is, MPFR). One can probably request a stricter mode.
    If it is available, then I do not know how it is done. One possible
    approach is to delay anything non-trivial to runtime. Non-trivial for
    floating point likely means transcendental functions: different
    libraries will almost surely produce different results, and for legal
    reasons alone the compiler cannot assume access to the target library.

    The ordinary four arithmetic operations for IEEE are easy: rounding is
    handled by MPFR, and things like overflow, infinities, etc., are just
    a bunch of tedious special cases. But transcendental functions
    usually do not have well-specified rounding behaviour, so exact
    rounding in MPFR is of no help when trying to reproduce results
    from runtime libraries.
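
    A rough sketch of that approach (using the public MPFR API; not how
    GCC actually wires it up, and ignoring the exponent-range and
    subnormal corner cases raised elsewhere in this thread): one correctly
    rounded operation at 53-bit precision already matches IEEE binary64
    for in-range values.

        #include <mpfr.h>
        #include <stdio.h>

        int main(void)
        {
            mpfr_t a, b, r;
            mpfr_inits2(53, a, b, r, (mpfr_ptr) 0);  /* 53-bit significand = binary64 */

            mpfr_set_d(a, 0.1, MPFR_RNDN);
            mpfr_set_d(b, 0.2, MPFR_RNDN);
            mpfr_add(r, a, b, MPFR_RNDN);            /* correctly rounded at 53 bits */

            /* For in-range values this equals the target's own 0.1 + 0.2. */
            printf("%.17g\n", mpfr_get_d(r, MPFR_RNDN));

            mpfr_clears(a, b, r, (mpfr_ptr) 0);
            return 0;
        }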

    Old (and possibly some new) embedded targets are in a sense more
    "interesting", as they implemented basic operations in software,
    frequently taking some shortcuts to gain speed.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Rosario19@Ros@invalid.invalid to comp.lang.c on Tue Sep 2 21:22:13 2025
    From Newsgroup: comp.lang.c

    On Mon, 1 Sep 2025 10:10:17 +0200, David Brown wrote:
    On 29/08/2025 22:10, Thiago Adams wrote:
    Em 29/08/2025 16:54, Keith Thompson escreveu:
    Thiago Adams writes:
    My curiosity is the following:

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise.  (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)
    ...
    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work correctly.

    So in theory it has to be the same result. This may be hard to achieve.


    Yes, it can be hard to achieve in some cases. For things like integer
    arithmetic, it's no serious challenge - floating point is the biggie
    for the challenge of getting the details correct when the host and the
    target are different.

    Isn't floating point standardized by IEEE?

    (And even if the compiler is native, different
    floating point options can lead to significantly different results.)

    Compilers have to make sure they can do compile-time evaluation that is
    bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used
    to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.



    I think there are few problems when one uses a type of fixed size such
    as u32 or unsigned int32_t.
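
    A small sketch of that point (assuming the <stdint.h> types): the
    fixed-width type makes the wrap width explicit, so the value no longer
    depends on the width of the target's int.

        #include <stdint.h>

        uint32_t a = (uint32_t)65535u + 1u;    /* 65536 on every target providing uint32_t */
        uint16_t b = (uint16_t)(65535u + 1u);  /* 0 on every target: truncated to 16 bits */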

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Tue Sep 2 22:40:57 2025
    From Newsgroup: comp.lang.c

    On Tue, 2 Sep 2025 16:58:45 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-09-02, Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com>
    wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation
    that is bit-perfect to run-time evaluation before they can use
    it as an optimisation - any risk of an error is a bug in the
    compiler. I don't know about other compilers, but gcc has a
    /huge/ library that is used to simulate floating point on a wide
    range of targets and options, precisely so that it can get this
    right.

    What library are you referring to? Is it something internal to
    gcc? I know gcc uses GMP, but that's not really floating-point
    emulation.

    GCC also uses not only GNU GMP but also GNU MPFR.


    MPFR is of no help when you want to emulate the exact behavior of a
    particular hardware format.

    Then why is it there?

    Most likely because people who think that compilers make an
    extraordinary effort to match FP results evaluated at compile time
    with those evaluated at run time do not know what they are talking
    about. As suggested above by Waldek Hebisch, compilers are quite happy
    to do compile-time evaluation at higher (preferably much higher)
    precision than at run time.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Tue Sep 2 22:59:59 2025
    From Newsgroup: comp.lang.c

    On Tue, 2 Sep 2025 17:32:56 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    On 29/08/2025 22:10, Thiago Adams wrote:
    Em 29/08/2025 16:54, Keith Thompson escreveu:
    Thiago Adams <thiago.adams@gmail.com> writes:
    My curiosity is the following:

    Given a constant expression, the result may be different on the
    machine that compiles the code compared with the machine that
    runs the code.
    What happens in this case?

    The solution I can think of is emulation when evaluating constant
    expressions, but I don't know if any compiler does it this way.

    For example, 65535u + 1u will evaluate to 0u if the target system
    has 16-bit int, 65536u otherwise.  (I picked an example that
    doesn't depend on UINT_MAX or any other macros defined in the
    standard library.)

    Yes, this is the kind of example I had in mind.

    Any compiler, cross- or not, is required to evaluate constant
    expressions correctly for the target system.  Whether they do so
    by some sort of emulation is an implementation detail.

    Even a non-cross compiler might not be implemented in exactly
    the same language and configuration as the code it's compiling,
    so evaluating constant expressions locally might not work
    correctly.
    So in theory it has to be the same result. This may be hard to
    achieve.

    Yes, it can be hard to achieve in some cases. For things like
    integer arithmetic, it's no serious challenge - floating point is
    the biggie for the challenge of getting the details correct when
    the host and the target are different. (And even if the compiler
    is native, different floating point options can lead to
    significantly different results.)

    Compilers have to make sure they can do compile-time evaluation
    that is bit-perfect to run-time evaluation before they can use it
    as an optimisation - any risk of an error is a bug in the compiler.
    I don't know about other compilers, but gcc has a /huge/ library
    that is used to simulate floating point on a wide range of targets
    and options, precisely so that it can get this right.

    AFAIK in normal mode gcc does not consider differences between
    compile-time and run-time evaluation of floating point constants to
    be a bug. And they may differ, with compile-time evaluation usually
    giving more accuracy. OTOH they care very much that the cross-compiler
    and the native compiler produce the same results.
    For the majority of "interesting" targets, native compilers do not exist.
    So they do not use native
    floating point arithmetic to evaluate constants. Rather, both the
    native compiler and the cross-compiler use the same portable library
    (that is, MPFR). One can probably request a stricter mode.
    If it is available, then I do not know how it is done. One
    possible approach is to delay anything non-trivial to runtime.
    I certainly would not be happy if a compiler that I am using for
    embedded targets, which typically do not have hardware support for
    'double', failed to evaluate DP constant expressions at compile time.
    Luckily, that never happens.
    Non-trivial for floating point likely means transcendental
    functions: different libraries will almost surely produce different
    results, and for legal reasons alone the compiler cannot assume access
    to the target library.

    Right now in C, including C23, transcendental functions cannot be part
    of a constant expression.
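
    For example (a sketch; exact diagnostics vary by compiler and mode), a
    call such as sin() is not permitted in an arithmetic constant
    expression, though some compilers fold it as an extension:

        #include <math.h>

        static double quarter_turn = 3.14159265358979323846 / 2.0;  /* OK: arithmetic
                                                                        constant expression */
        static double folded = sin(1.0);  /* not a constant expression in standard C;
                                             accepted by some compilers as an extension */
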
    The ordinary four arithmetic operations for IEEE are easy: rounding is
    handled by MPFR, and things like overflow, infinities, etc., are just
    a bunch of tedious special cases. But transcendental functions
    usually do not have well-specified rounding behaviour, so exact
    rounding in MPFR is of no help when trying to reproduce results
    from runtime libraries.

    Old (and possibly some new) embedded targets are in a sense more "interesting", as they implemented basic operations in software,
    frequently taking some shortcuts to gain speed.

    Why "some new"? Ovewhelming majority of microcontrollers, both old and
    new, do not implement double precision FP math in hardware.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From David Brown@david.brown@hesbynett.no to comp.lang.c on Wed Sep 3 08:46:27 2025
    From Newsgroup: comp.lang.c

    On 02/09/2025 21:40, Michael S wrote:
    On Tue, 2 Sep 2025 16:58:45 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-09-02, Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com>
    wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation
    that is bit-perfect to run-time evaluation before they can use
    it as an optimisation - any risk of an error is a bug in the
    compiler. I don't know about other compilers, but gcc has a
    /huge/ library that is used to simulate floating point on a wide
    range of targets and options, precisely so that it can get this
    right.

    What library are you referring to? Is it something internal to
    gcc? I know gcc uses GMP, but that's not really floating-point
    emulation.

    GCC also uses not only GNU GMP but also GNU MPFR.


    MPFR is of no help when you want to emulate the exact behavior of a
    particular hardware format.

    Then why is it there?

    Most likely because people who think that compilers make an
    extraordinary effort to match FP results evaluated at compile time
    with those evaluated at run time do not know what they are talking
    about. As suggested above by Waldek Hebisch, compilers are quite happy
    to do compile-time evaluation at higher (preferably much higher)
    precision than at run time.


    Doing the calculations at higher precision does not necessarily help.

    If a run-time calculation can give slightly inaccurate results because
    the rounding errors happen to build up that way, a compile-time
    calculation has to replicate that if it is a valid optimisation.
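
    A small illustration of rounding building up differently (assuming an
    IEEE binary64 target): the two groupings give results one ulp apart,
    so a compiler folding either one must reproduce exactly the grouping
    and format the target would use.

        #include <stdio.h>

        int main(void)
        {
            double left  = (0.1 + 0.2) + 0.3;   /* 0.60000000000000009 */
            double right = 0.1 + (0.2 + 0.3);   /* 0.59999999999999998 */
            printf("%.17g\n%.17g\n", left, right);   /* differ by one ulp */
            return 0;
        }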

    I don't know the details of how GCC achieves this, but I /do/ know that
    a significant body of code is used to get this right - across widely
    different targets, and supporting a range of different floating point
    options. If the compiler can't see how to get it bit-perfect at compile
    time, it has to do the calculation at run-time.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From aph@aph@littlepinkcloud.invalid to comp.lang.c on Wed Sep 3 09:16:56 2025
    From Newsgroup: comp.lang.c

    David Brown <david.brown@hesbynett.no> wrote:
    On 01/09/2025 23:11, Keith Thompson wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    Compilers have to make sure they can do compile-time evaluation that
    is bit-perfect to run-time evaluation before they can use it as an
    optimisation - any risk of an error is a bug in the compiler. I don't
    know about other compilers, but gcc has a /huge/ library that is used
    to simulate floating point on a wide range of targets and options,
    precisely so that it can get this right.

    What library are you referring to? Is it something internal to gcc?
    I know gcc uses GMP, but that's not really floating-point emulation.


    I am afraid I don't know the details here, and to what extent it is
    internal to the GCC project or external. I /think/, but I could easily
    be wrong, that general libraries like GMP are used for the actual calculations, while there is GCC-specific stuff to make sure things
    match up with the target details.

    Indeed. There's emulation for everything, even decimal floating
    point.

    See https://github.com/gcc-mirror/gcc/blob/master/gcc/real.cc

    Andrew.
    --- Synchronet 3.21a-Linux NewsLink 1.2