• Re: Optimization of cascading recursions in "C" compilers?

    From Rosario19@Ros@invalid.invalid to comp.lang.c on Thu Oct 23 00:37:37 2025
    From Newsgroup: comp.lang.c

    On Tue, 30 Sep 2025 16:12:14 -0000 (UTC), Kaz Kylheku wrote:

    On 2025-09-30, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    In another newsgroup the question came up to what degree the
    GNU "C" compilers can handle optimization of recursions like
    fib() or ack(), i.e. cascading ones. Somebody here may know?

    What exactly are you talking about? The ability to unroll one or more
    levels of recursion by inlining cases of a function into itself
    to create a larger function that makes fewer calls?

    The common way fib is written recursively doesn't lend itself to
    tail call optimization; it has to be refactored for that.

    The usual way Ackermann is written, it has some tail calls
    and a non-tail call, since one of its cases returns
    a call to ack, which has an argument computed by calling ack.

    Can we expect that (with the higher optimization levels) the
    exponential complexity from cascaded recursion gets linearized?

    I'm reasonably sure that GCC is not going to recognize and linearize
    "return fib(n - 1) + fib(n - 2)" recursion. You have to do it yourself
    with explicit memoization or iterative rewrite.
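
    [For concreteness, explicit memoization along those lines might look
    like the following minimal sketch; fib_memo and its table are
    illustrative, not from the original posts:]

    /* Memoized fib sketch: each value is computed once, so the number
       of calls is linear in n rather than exponential. Assumes 32-bit
       unsigned and n < 48, since fib(48) overflows 32 bits. */
    unsigned fib_memo(unsigned n)
    {
        static unsigned cache[48];
        static unsigned char known[48];

        if (n < 2)
            return n;
        if (!known[n]) {
            cache[n] = fib_memo(n - 1) + fib_memo(n - 2);
            known[n] = 1;
        }
        return cache[n];
    }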

    The iterative fibonacci seems many times faster than the recursive
    one. I don't know if the compiler can optimize to that point...

    #include <stdio.h>

    /* Naive recursive fibonacci: two recursive calls per invocation. */
    unsigned fib(unsigned n)
    {
        if (n < 2) return n;
        return fib(n - 1) + fib(n - 2);
    }

    /* Iterative fibonacci: one loop step per value of n. */
    unsigned f(unsigned n)
    {
        unsigned r, i, j;
        if (n == 0) return 0;
        n -= 1;
        for (r = j = i = 1; n; --n)
            { r = j; j += i; i = r; }
        return r;
    }

    int main(void)
    {
        unsigned i, r;

        for (i = 0; i < 40; ++i) {
            r = fib(i);
            printf("fibonacciRic(%u)=%u\n", i, r);
        }
        for (i = 0; i < 40; ++i) {
            r = f(i);
            printf("fibonacciIter(%u)=%u\n", i, r);
        }
        return 0;
    }
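
    [To put a number on "many times faster", a clock()-based harness
    such as this sketch could replace the main() above; it assumes the
    fib() and f() definitions from this post:]

    #include <stdio.h>
    #include <time.h>

    /* Times one exponential and one linear call; the volatile stores
       keep the calls from being optimized away. */
    int main(void)
    {
        clock_t t0 = clock();
        volatile unsigned a = fib(35);   /* exponential version */
        clock_t t1 = clock();
        volatile unsigned b = f(35);     /* linear version */
        clock_t t2 = clock();

        printf("fib(35)=%u took %.4f s; f(35)=%u took %.6f s\n",
               (unsigned)a, (double)(t1 - t0) / CLOCKS_PER_SEC,
               (unsigned)b, (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }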

    ---------------
    fibonacciRic(0)=0
    fibonacciRic(1)=1
    fibonacciRic(2)=1
    fibonacciRic(3)=2
    fibonacciRic(4)=3
    fibonacciRic(5)=5
    fibonacciRic(6)=8
    fibonacciRic(7)=13
    fibonacciRic(8)=21
    fibonacciRic(9)=34
    fibonacciRic(10)=55
    fibonacciRic(11)=89
    fibonacciRic(12)=144
    fibonacciRic(13)=233
    fibonacciRic(14)=377
    fibonacciRic(15)=610
    fibonacciRic(16)=987
    fibonacciRic(17)=1597
    fibonacciRic(18)=2584
    fibonacciRic(19)=4181
    fibonacciRic(20)=6765
    fibonacciRic(21)=10946
    fibonacciRic(22)=17711
    fibonacciRic(23)=28657
    fibonacciRic(24)=46368
    fibonacciRic(25)=75025
    fibonacciRic(26)=121393
    fibonacciRic(27)=196418
    fibonacciRic(28)=317811
    fibonacciRic(29)=514229
    fibonacciRic(30)=832040
    fibonacciRic(31)=1346269
    fibonacciRic(32)=2178309
    fibonacciRic(33)=3524578
    fibonacciRic(34)=5702887
    fibonacciRic(35)=9227465
    fibonacciRic(36)=14930352
    fibonacciRic(37)=24157817
    fibonacciRic(38)=39088169
    fibonacciRic(39)=63245986
    fibonacciIter(0)=0
    fibonacciIter(1)=1
    fibonacciIter(2)=1
    fibonacciIter(3)=2
    fibonacciIter(4)=3
    fibonacciIter(5)=5
    fibonacciIter(6)=8
    fibonacciIter(7)=13
    fibonacciIter(8)=21
    fibonacciIter(9)=34
    fibonacciIter(10)=55
    fibonacciIter(11)=89
    fibonacciIter(12)=144
    fibonacciIter(13)=233
    fibonacciIter(14)=377
    fibonacciIter(15)=610
    fibonacciIter(16)=987
    fibonacciIter(17)=1597
    fibonacciIter(18)=2584
    fibonacciIter(19)=4181
    fibonacciIter(20)=6765
    fibonacciIter(21)=10946
    fibonacciIter(22)=17711
    fibonacciIter(23)=28657
    fibonacciIter(24)=46368
    fibonacciIter(25)=75025
    fibonacciIter(26)=121393
    fibonacciIter(27)=196418
    fibonacciIter(28)=317811
    fibonacciIter(29)=514229
    fibonacciIter(30)=832040
    fibonacciIter(31)=1346269
    fibonacciIter(32)=2178309
    fibonacciIter(33)=3524578
    fibonacciIter(34)=5702887
    fibonacciIter(35)=9227465
    fibonacciIter(36)=14930352
    fibonacciIter(37)=24157817
    fibonacciIter(38)=39088169
    fibonacciIter(39)=63245986


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Wed Oct 22 23:36:47 2025
    From Newsgroup: comp.lang.c

    On 2025-10-22, Rosario19 <Ros@invalid.invalid> wrote:
    On Tue, 30 Sep 2025 16:12:14 -0000 (UTC), Kaz Kylheku wrote:

    On 2025-09-30, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    In another newsgroup the question came up to what degree the
    GNU "C" compilers can handle optimization of recursions like
    fib() or ack(), i.e. cascading ones. Somebody here may know?

    What exactly are you talking about? The ability to unroll one or more
    levels of recursion by inlining cases of a function into itself
    to create a larger function that makes fewer calls?

    The common way fib is written recursively doesn't lend itself to
    tail call optimization; it has to be refactored for that.

    The usual way Ackermann is written, it has some tail calls
    and a non-tail call, since one of its cases returns
    a call to ack, which has an argument computed by calling ack.

    Can we expect that (with the higher optimization levels) the
    exponential complexity from cascaded recursion gets linearized?

    I'm reasonably sure that GCC is not going to recognize and linearize
    "return fib(n - 1) + fib(n - 2)" recursion. You have to do it yourself
    with explicit memoization or iterative rewrite.

    The iterative fibonacci seems many times faster than the recursive
    one. I don't know if the compiler can optimize to that point...

    iterative fibonacci can be expressed by tail recursion, which
    the compiler can optimize to iteration.

    You have to refactor fib for tail recursion; it may be easier
    to start with the iterative version and turn the loop into
    a tail call.

    Iterative fib(n) starts with the vector v = <0, 1> and then
    performs the step n times, returning the second element of
    the vector.

    The step is to multiply the vector by the matrix

                 [ 0 1 ]
        v_next = [ 1 1 ] v

    In other words

    v_next[0] = v[1]
    v_next[1] = v[0] + v[1]

    Thus a tail-recursive fib can look like this. The
    recursive helper function:

    static int fib_tail(int v0, int v1, int n)
    {
        if (n == 0)
            return v0 + 1;
        else
            return fib_tail(v1, v0 + v1, n - 1);
    }

    plus the entry point that conforms to the expected fib API:

    int fib(int n)
    {
        return fib_tail(0, 1, n);
    }

    When I compile that with GCC11 for X86-64, using -O3 -S,
    I get this [edited for brevity by omitting various environment-related
    cruft]:

    fib:
            movl    $1, %eax
            xorl    %edx, %edx
            testl   %edi, %edi
            jne     .L2
            jmp     .L8
    .L5:
            movl    %ecx, %eax
    .L2:
            leal    (%rdx,%rax), %ecx
            movl    %eax, %edx
            subl    $1, %edi
            jne     .L5
            addl    $1, %eax
            ret
    .L8:
            ret

    The static helper function fib_tail has disappeared, inlined into fib, and turned from recursion to iteration.
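
    [For reference: the pass responsible is controlled by GCC's
    -foptimize-sibling-calls option ("optimize sibling and tail recursive
    calls" in the GCC manual), which is enabled at -O2 and above.]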

    GCC is not going to turn this:

    int fib(int n)
    {
        if (n < 2)
            return 1;
        else
            return fib(n - 1) + fib(n - 2);
    }

    into the above pair of functions, or anything similar.

    That's not ordinary optimization of the type which improves the code generated for the user's algorithm; that's recognizing and redesigning the algorithm.

    The line between those is blurred, but not /that/ blurred.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Rosario19@Ros@invalid.invalid to comp.lang.c on Fri Oct 24 09:53:17 2025
    From Newsgroup: comp.lang.c

    On Wed, 22 Oct 2025 23:36:47 -0000 (UTC), Kaz Kylheku wrote:
    On 2025-10-22, Rosario19 <Ros@invalid.invalid> wrote:
    On Tue, 30 Sep 2025 16:12:14 -0000 (UTC), Kaz Kylheku wrote:

    On 2025-09-30, Janis Papanagnou wrote:
    In another newsgroup the question came up to what degree the
    GNU "C" compilers can handle optimization of recursions like
    fib() or ack(), i.e. cascading ones. Somebody here may know?

    What exactly are you talking about? The ability to unroll one or more
    levels of recursion by inlining cases of a function into itself
    to create a larger function that makes fewer calls?

    The common way fib is written recursively doesn't lend itself to
    tail call optimization; it has to be refactored for that.

    The usual way Ackermann is written, it has some tail calls
    and a non-tail call, since one of its cases returns
    a call to ack, which has an argument computed by calling ack.

    Can we expect that (with the higher optimization levels) the
    exponential complexity from cascaded recursion gets linearized?

    I'm reasonably sure that GCC is not going to recognize and linearize
    "return fib(n - 1) + fib(n - 2)" recursion. You have to do it yourself
    with explicit memoization or iterative rewrite.

    The iterative fibonacci seems many times faster than the recursive
    one. I don't know if the compiler can optimize to that point...

    iterative fibonacci can be expressed by tail recursion, which
    the compiler can optimize to iteration.

    You have to refactor fib for tail recursion; it may be easier
    to start with the iterative version and turn the loop into
    a tail call.

    Iterative fib(n) starts with the vector v = <0, 1> and then
    performs the step n times, returning the second element of
    the vector.

    The step is to multiply the vector by the matrix

                 [ 0 1 ]
        v_next = [ 1 1 ] v

    In other words

    v_next[0] = v[1]
    v_next[1] = v[0] + v[1]

    Thus a tail-recursive fib can look like this. The
    recursive helper function:

    static int fib_tail(int v0, int v1, int n)
    {
        if (n == 0)
            return v0 + 1;
        else
            return fib_tail(v1, v0 + v1, n - 1);
    }

    plus the entry point that conforms to the expected fib API:

    int fib(int n)
    {
        return fib_tail(0, 1, n);
    }

    When I compile that with GCC11 for X86-64, using -O3 -S,
    I get this [edited for brevity by omitting various environment-related cruft]:

    fib:
            movl    $1, %eax
            xorl    %edx, %edx
            testl   %edi, %edi
            jne     .L2
            jmp     .L8
    .L5:
            movl    %ecx, %eax
    .L2:
            leal    (%rdx,%rax), %ecx
            movl    %eax, %edx
            subl    $1, %edi
            jne     .L5
            addl    $1, %eax
            ret
    .L8:
            ret

    Yes, this looks like the iterative one.

    The static helper function fib_tail has disappeared, inlined into fib, and
    turned from recursion to iteration.

    GCC is not going to turn this:

    int fib(int n)
    {
        if (n < 2)
            return 1;
        else
            return fib(n - 1) + fib(n - 2);
    }


    into the above pair of functions, or anything similar.

    That's not ordinary optimization of the type which improves the code generated
    for the user's algorithm; that's recognizing and redesigning the algorithm.

    The line between those is blurred, but not /that/ blurred.

    int fib1(int n){return fib_tail(0, 1, n);}

    is different from

    int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}


    Both of these functions compute the result from the seeds 0 and 1,
    but fib() starts from n, goes down until n=0, then goes back up to
    n=input, whereas fib1() only goes down on n. Looking at the calls
    (fib makes 2 calls per call, while fib_tail() makes just 1), I think
    fib() is O(n^2) whereas fib1 is O(n).

    --------------------------------
    #include <stdio.h>

    /* Naive recursive fibonacci. */
    unsigned fib(unsigned n)
    {
        if (n < 2) return n;
        return fib(n - 1) + fib(n - 2);
    }

    /* Tail-recursive helper: carries the running pair (i, j) down. */
    unsigned fibr(unsigned i, unsigned j, unsigned n)
    {
        if (n == 0) return i;
        return fibr(j, i + j, n - 1);
    }

    unsigned fib1(unsigned n)
    {
        return fibr(0, 1, n);
    }

    /* Iterative fibonacci. */
    unsigned f(unsigned n)
    {
        unsigned r, i, j;
        if (n == 0) return 0;
        n -= 1;
        for (r = j = i = 1; n; --n)
            { r = j; j += i; i = r; }
        return r;
    }


    int main(void)
    {
        unsigned i, r;

        for (i = 0; i < 40; ++i) {
            r = fib1(i);
            printf("fibonacciRic(%u)=%u\n", i, r);
        }
        for (i = 0; i < 40; ++i) {
            r = f(i);
            printf("fibonacciIter(%u)=%u\n", i, r);
        }
        return 0;
    }


    result:

    fibonacci
    fibonacciRic(0)=0
    fibonacciRic(1)=1
    fibonacciRic(2)=1
    fibonacciRic(3)=2
    fibonacciRic(4)=3
    fibonacciRic(5)=5
    fibonacciRic(6)=8
    fibonacciRic(7)=13
    fibonacciRic(8)=21
    fibonacciRic(9)=34
    fibonacciRic(10)=55
    fibonacciRic(11)=89
    fibonacciRic(12)=144
    fibonacciRic(13)=233
    fibonacciRic(14)=377
    fibonacciRic(15)=610
    fibonacciRic(16)=987
    fibonacciRic(17)=1597
    fibonacciRic(18)=2584
    fibonacciRic(19)=4181
    fibonacciRic(20)=6765
    fibonacciRic(21)=10946
    fibonacciRic(22)=17711
    fibonacciRic(23)=28657
    fibonacciRic(24)=46368
    fibonacciRic(25)=75025
    fibonacciRic(26)=121393
    fibonacciRic(27)=196418
    fibonacciRic(28)=317811
    fibonacciRic(29)=514229
    fibonacciRic(30)=832040
    fibonacciRic(31)=1346269
    fibonacciRic(32)=2178309
    fibonacciRic(33)=3524578
    fibonacciRic(34)=5702887
    fibonacciRic(35)=9227465
    fibonacciRic(36)=14930352
    fibonacciRic(37)=24157817
    fibonacciRic(38)=39088169
    fibonacciRic(39)=63245986
    fibonacciIter(0)=0
    fibonacciIter(1)=1
    fibonacciIter(2)=1
    fibonacciIter(3)=2
    fibonacciIter(4)=3
    fibonacciIter(5)=5
    fibonacciIter(6)=8
    fibonacciIter(7)=13
    fibonacciIter(8)=21
    fibonacciIter(9)=34
    fibonacciIter(10)=55
    fibonacciIter(11)=89
    fibonacciIter(12)=144
    fibonacciIter(13)=233
    fibonacciIter(14)=377
    fibonacciIter(15)=610
    fibonacciIter(16)=987
    fibonacciIter(17)=1597
    fibonacciIter(18)=2584
    fibonacciIter(19)=4181
    fibonacciIter(20)=6765
    fibonacciIter(21)=10946
    fibonacciIter(22)=17711
    fibonacciIter(23)=28657
    fibonacciIter(24)=46368
    fibonacciIter(25)=75025
    fibonacciIter(26)=121393
    fibonacciIter(27)=196418
    fibonacciIter(28)=317811
    fibonacciIter(29)=514229
    fibonacciIter(30)=832040
    fibonacciIter(31)=1346269
    fibonacciIter(32)=2178309
    fibonacciIter(33)=3524578
    fibonacciIter(34)=5702887
    fibonacciIter(35)=9227465
    fibonacciIter(36)=14930352
    fibonacciIter(37)=24157817
    fibonacciIter(38)=39088169
    fibonacciIter(39)=63245986

    fib1 and f seem equally fast here.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Fri Oct 24 15:36:58 2025
    From Newsgroup: comp.lang.c

    On 24.10.2025 09:53, Rosario19 wrote:


    into the above pair of functions, or anything similar.

    That's not ordinary optimization of the type which improves the code generated
    for the user's algorithm; that's recognizing and redesigning the algorithm.
    The line between those is blurred, but not /that/ blurred.

    int fib1(int n){return fib_tail(0, 1, n);}

    is different from

    int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}


    Both of these functions compute the result from the seeds 0 and 1,
    but fib() starts from n, goes down until n=0, then goes back up to
    n=input, whereas fib1() only goes down on n. Looking at the calls
    (fib makes 2 calls per call, while fib_tail() makes just 1), I think
    fib() is O(n^2) whereas fib1 is O(n).

    It's worse; it's O(2^N).

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Fri Oct 24 17:35:49 2025
    From Newsgroup: comp.lang.c

    On Fri, 24 Oct 2025 15:36:58 +0200
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    On 24.10.2025 09:53, Rosario19 wrote:


    into the above pair of functions, or anything similar.

    That's not ordinary optimization of the type which improves the
    code generated for the user's algorithm; that's recognizing and
    redesigning the algorithm.

    The line between those is blurred, but not /that/ blurred.

    int fib1(int n){return fib_tail(0, 1, n);}

    is different from

    int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}


    Both of these functions compute the result from the seeds 0 and 1,
    but fib() starts from n, goes down until n=0, then goes back up to
    n=input, whereas fib1() only goes down on n. Looking at the calls
    (fib makes 2 calls per call, while fib_tail() makes just 1), I think
    fib() is O(n^2) whereas fib1 is O(n).

    It's worse; it's O(2^N).

    Janis


    Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
    So, O(1.618**n) - a little less bad than your suggestion.
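
    [For reference, that count follows from the call-count recurrence:
    the two base cases make one call each, and every other call makes
    two more, so

        C(0) = C(1) = 1
        C(n) = 1 + C(n-1) + C(n-2)

    and induction gives C(n) = 2*FIB(n) - 1 for the variant with
    FIB(0) = FIB(1) = 1. Since FIB(n) grows proportionally to phi^n
    with phi = (1+sqrt(5))/2 = 1.6180..., the call count is
    Theta(phi^n).]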


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Fri Oct 24 17:22:51 2025
    From Newsgroup: comp.lang.c

    On 24.10.2025 16:35, Michael S wrote:
    On Fri, 24 Oct 2025 15:36:58 +0200
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    On 24.10.2025 09:53, Rosario19 wrote:


    into the above pair of functions, or anything similar.

    That's not ordinary optimization of the type which improves the
    code generated for the user's algorithm; that's recognizing and
    redesigning the algorithm.

    The line between those is blurred, but not /that/ blurred.

    int fib1(int n){return fib_tail(0, 1, n);}

    is different from

    int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}


    Both of these functions compute the result from the seeds 0 and 1,
    but fib() starts from n, goes down until n=0, then goes back up to
    n=input, whereas fib1() only goes down on n. Looking at the calls
    (fib makes 2 calls per call, while fib_tail() makes just 1), I think
    fib() is O(n^2) whereas fib1 is O(n).

    It's worse; it's O(2^N).

    Janis


    Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
    So, O(1.618**n) - a little less bad than your suggestion.

    It's new to me that O-complexities would be written with decimals.

    You also don't say that 'insertion sort' is O((N^2)/2); it's O(N^2).

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Fri Oct 24 17:14:29 2025
    From Newsgroup: comp.lang.c

    On 2025-10-24, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 24.10.2025 16:35, Michael S wrote:
    On Fri, 24 Oct 2025 15:36:58 +0200
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    On 24.10.2025 09:53, Rosario19 wrote:


    into the above pair of functions, or anything similar.

    That's not ordinary optimization of the type which improves the
    code generated for the user's algorithm; that's recognizing and
    redesigning the algorithm.

    The line between those is blurred, but not /that/ blurred.

    int fib1(int n){return fib_tail(0, 1, n);}

    is different from

    int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}


    Both of these functions compute the result from the seeds 0 and 1,
    but fib() starts from n, goes down until n=0, then goes back up to
    n=input, whereas fib1() only goes down on n. Looking at the calls
    (fib makes 2 calls per call, while fib_tail() makes just 1), I think
    fib() is O(n^2) whereas fib1 is O(n).

    It's worse; it's O(2^N).

    Janis


    Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
    So, O(1.618**n) - a little less bad than your suggestion.

    It's new to me that O-complexities would be written with decimals.

    You also don't say that 'insertion sort' is O((N^2)/2); it's O(N^2).

    I think, whereas constant factors and constant terms are understood
    not to matter in asymptotic complexity, the choice of exponential
    base in exponential complexity does, similarly to the choice of
    exponent in the category of polynomial time: e.g. cubic is slower
    than quadratic.

    If we have 1 < a < b then O(b^n) is a faster growth than O(a^n).

    2^n doubles with every increment in n.

    Whereas 1.02^n doubles --- here we can use the 'rule of 72' from
    finance --- for a +36 increase in n.

    If spending twice the computing time buys us only a +1 increase in
    the input parameter n, versus +36, that's a big deal.
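
    [More precisely, 1.02^n doubles when n grows by log 2 / log 1.02,
    which is about 35; the rule of 72 approximates this as 72/2 = 36.]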

    A term added to n in 2^n wouldn't matter, because 2^(n+c)
    is (2^c)(2^n), translating to a constant factor. But a coefficient
    on n would matter: 2^(kn) = (2^k)^n. It changes the base.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Fri Oct 24 20:31:27 2025
    From Newsgroup: comp.lang.c

    On 24.10.2025 19:14, Kaz Kylheku wrote:
    On 2025-10-24, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 24.10.2025 16:35, Michael S wrote:
    On Fri, 24 Oct 2025 15:36:58 +0200
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    On 24.10.2025 09:53, Rosario19 wrote:


    into the above pair of functions, or anything similar.

    That's not ordinary optimization of the type which improves the
    code generated for the user's algorithm; that's recognizing and
    redesigning the algorithm.

    The line between those is blurred, but not /that/ blurred.

    int fib1(int n){return fib_tail(0, 1, n);}

    is different from

    int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}


    Both of these functions compute the result from the seeds 0 and 1,
    but fib() starts from n, goes down until n=0, then goes back up to
    n=input, whereas fib1() only goes down on n. Looking at the calls
    (fib makes 2 calls per call, while fib_tail() makes just 1), I think
    fib() is O(n^2) whereas fib1 is O(n).

    It's worse; it's O(2^N).

    Janis


    Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
    So, O(1.618**n) - a little less bad than your suggestion.

    It's new to me that O-complexities would be written with decimals.

    You also don't say that 'insertion sort' is O((N^2)/2); it's O(N^2).

    I think, whereas constant factors and constant terms are understood
    not to matter in asymptotic complexity, the choice of exponential
    base in exponential complexity does, similarly to the choice of
    exponent in the category of polynomial time: e.g. cubic is slower
    than quadratic.

    I don't know about you, but my (40-year-old) knowledge is reflected
    by the table you find in https://de.wikipedia.org/wiki/Landau-Symbole.
    We have differentiated O(1), O(log log N), O(log N), O(N), O(N log N),
    O(N^2), O(N^3) - other O(N^c) for c>3 were practically "irrelevant";
    no need to show me an example of O(N^4) or O(N^3 log N), I know there
    are examples - O(2^N), and O(n!). (There are two more listed on Wiki.)
    (Just note that all listed complexity classes are here(!) functions of
    N alone, where "1" is "N^0" and "N^M" just an abbreviated notation
    to not list the individual, mostly anyway irrelevant, exponents.)

    I think that if we compare, as done here, different complexity classes,
    O(N) vs. O(2^N), the decimals are anyway irrelevant! If we're comparing
    functions within the same class O(2^N) it makes sense to differentiate
    an O(2^N) complexity with concrete numbers of 1.618**n vs. 2**n.
    But we've never said that these two were different complexity classes.

    The class O(a^N) (for a>1) is anyway just called exponential complexity.
    And since O defines an _order_ (with a specific characteristic) there's
    no point in differentiating a class O(1.618^N) from a class O(1.617^N),
    to accentuate the point. In case those classes are differentiated -
    say, in the US domain? - I can't tell, okay. I just think it makes no
    sense if we're speaking about complexity classes and orders.

    It could be that there are different schools of teaching, if you compare
    e.g. the slight notation variances in the complexity class tables of
    https://de.wikipedia.org/wiki/Landau-Symbole vs.
    https://en.wikipedia.org/wiki/Big_O_notation
    or the definitions of the O metrics, but I'm not sure about that since
    the definitions are at least equal: lim sup_{x->a} |f(x)|/|g(x)| < inf.

    (The reasoning given below is not convincing to me.)

    Relevant for this thread is: fib() is of exponential complexity, and
    the linearized form is of linear complexity.

    Janis


    If we have 1 < a < b then O(b^n) is a faster growth than O(a^n).

    2^n doubles with every increment in n.

    Whereas 1.02^n doubles --- here we can use the 'rule of 72' from
    finance --- for a +36 increase in n.

    If spending twice the computing time buys us only a +1 increase in
    the input parameter n, versus +36, that's a big deal.

    A term added to n in 2^n wouldn't matter, because 2^(n+c)
    is (2^c)(2^n), translating to a constant factor. But a coefficient
    on n would matter: 2^(kn) = (2^k)^n. It changes the base.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Fri Oct 24 23:59:33 2025
    From Newsgroup: comp.lang.c

    On 2025-10-24, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 24.10.2025 19:14, Kaz Kylheku wrote:
    On 2025-10-24, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 24.10.2025 16:35, Michael S wrote:
    On Fri, 24 Oct 2025 15:36:58 +0200
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    On 24.10.2025 09:53, Rosario19 wrote:


    into the above pair of functions, or anything similar.

    That's not ordinary optimization of the type which improves the
    code generated for the user's algorithm; that's recognizing and
    redesigning the algorithm.

    The line between those is blurred, but not /that/ blurred.

    int fib1(int n){return fib_tail(0, 1, n);}

    is different from

    int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}


    Both of these functions compute the result from the seeds 0 and 1,
    but fib() starts from n, goes down until n=0, then goes back up to
    n=input, whereas fib1() only goes down on n. Looking at the calls
    (fib makes 2 calls per call, while fib_tail() makes just 1), I think
    fib() is O(n^2) whereas fib1 is O(n).

    It's worse; it's O(2^N).

    Janis


    Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
    So, O(1.618**n) - a little less bad than your suggestion.

    It's new to me that O-complexities would be written with decimals.

    You also don't say that 'insertion sort' is O((N^2)/2); it's O(N^2).

    I think, whereas constant factors and constant terms are understood
    not to matter in asymptotic complexity, the choice of exponential
    base in exponential complexity does, similarly to the choice of
    exponent in the category of polynomial time: e.g. cubic is slower
    than quadratic.

    I don't know about you, but my (40-year-old) knowledge is reflected
    by the table you find in https://de.wikipedia.org/wiki/Landau-Symbole.
    We have differentiated O(1), O(log log N), O(log N), O(N), O(N log N),
    O(N^2), O(N^3) - other O(N^c) for c>3 were practically "irrelevant";
    no need to show me an example of O(N^4) or O(N^3 log N), I know there
    are examples - O(2^N), and O(n!). (There are two more listed on Wiki.)
    (Just note that all listed complexity classes are here(!) functions of
    N alone, where "1" is "N^0" and "N^M" just an abbreviated notation
    to not list the individual, mostly anyway irrelevant, exponents.)

    I think that if we compare, as done here, different complexity classes,
    O(N) vs. O(2^N), the decimals are anyway irrelevant! If we're comparing
    functions within the same class O(2^N) it makes sense to differentiate
    an O(2^N) complexity with concrete numbers of 1.618**n vs. 2**n.
    But we've never said that these two were different complexity classes.

    The class O(a^N) (for a>1) is anyway just called exponential complexity.

    But that's the same as "polynomial time" including quadratic, cubic,
    quartic, ...

    And since O defines an _order_ (with a specific characteristic) there's
    no point in differentiating a class O(1.618^N) from a class O(1.617^N),

    But these orders are different.

    The complexity upper bound O(1.618^n) includes faster function growth than
    the complexity upper bound O(1.617^n).

    If we know something runs in K * 1.618^n time, then it isn't
    O(1.617^n); it blows past that tighter upper bound.

    For any positive constant factor K, given a sufficiently large n:

    1.617^n < K * 1.618^n

    No matter how small you make K, after a certain n value, the right side
    will permanently overtake the left.

    Both complexities are in the exponential class, but they are not
    of the same order.

    Complexity classes are coarse-grained containers of complexity orders:

    https://en.wikipedia.org/wiki/Complexity_class

    For instance everything polynomial is lumped into the complexity
    class P, even though you make an algorithm have significantly better
    complexity if you can reduce it from cubed to quadratic.

    The exponential time complexity class contains a recognized subclass
    called E which is O(2^l(n)) where l(n) is a linear polynomial of n.

    Both O(2^n) and O(1.618^n) are found within E yet are not of the
    same order.

    1.618^n can be rewritten as approximately 2^(0.694n), showing
    that it belongs to class E.

    But that 0.694 coefficient on n cannot be ignored when it is inside
    the exponent; it leads to slower growth, of a lower order.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c on Mon Oct 27 05:48:08 2025
    From Newsgroup: comp.lang.c

    On 2025-10-24 11:22, Janis Papanagnou wrote:
    On 24.10.2025 16:35, Michael S wrote:
    Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
    So, O(1.618**n) - a little less bad than your suggestion.

    It's new to me that O-complexities would be written with decimals.

    Complexities are determined by the highest power of the relevant
    parameter; that power needn't be integral, though it usually is. I'd be interested in the derivation of that result.

    You also don't say that 'insertion sort' is O((N^2)/2); it's O(N^2).
    While you're correct, I wonder why you said that? I don't see any
    examples of anyone saying anything like that above.


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Janis Papanagnou@janis_papanagnou+ng@hotmail.com to comp.lang.c on Mon Oct 27 12:52:03 2025
    From Newsgroup: comp.lang.c

    On 27.10.2025 10:48, James Kuyper wrote:
    On 2025-10-24 11:22, Janis Papanagnou wrote:
    On 24.10.2025 16:35, Michael S wrote:
    Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
    So, O(1.618**n) - a little less bad than your suggestion.

    It's new to me that O-complexities would be written with decimals.

    Complexities are determined by the highest power of the relevant
    parameter; that power needn't be integral, though it usually is. I'd be interested in the derivation of that result.

    You also don't say that 'insertion sort' is O((N^2)/2); it's O(N^2).

    While you're correct, I wonder why you said that? I don't see any
    examples of anyone saying anything like that above.

    (I haven't said or implied that anyone said that.)

    I tried to anticipate, to avoid unnecessary follow-ups. - To
    expand on that...

    My point was that we used O-notation to classify and compare
    algorithms, with the goal to improve efficiencies. For that it
    was irrelevant whether a constant algorithm was characterized
    by a constant runtime factor of 20000 or 4; with increasing N
    it was still constant. Similarly for linear algorithms; one may
    have a slower runtime depending on 3*N, the other on 2*N; both
    are O(N). The example above was just randomly chosen, since
    sorting appeared to me to be a commonly known task with commonly
    known algorithms. Typical O(N**2) algorithms have a runtime of
    c*N*(N-1)/2. Some others, like Quicksort in its elementary form,
    cannot guarantee O(N*log N); they are O(N**2). For exponential
    O-bounds we said that these are of order O(2**N); and that's
    where we were differing in our discussion. (I'm not keen to
    dispute whether one of the doctrines is the only right view.
    That's BTW one reason why I also didn't continue with Kaz's
    latest reply; another reason was that he seemed to have mostly
    just repeated his previous post, just adding another complexity
    measure that's IMO not clearing up the O-notation question, but
    mixing different complexity areas.)

    For the initially mentioned goal, to classify, compare and modify
    algorithms to get a better complexity, it had in my application
    areas (and also back in my university time) been the aim to get
    better algorithms for specific tasks. We've done O(N)->O(1),
    O(N**2)->O(N*log N), or similar with higher order functions.
    (In practice the O(N) might even have a better runtime if the
    constant factor is huge and, for some side-conditions, the value
    of N is practically bounded. But that's not the point of using
    the O-notation to determine the real-time runtime of algorithms.)

    The example in this thread was especially noteworthy since it
    reduced an exponential complexity even to a linear order by a
    sensible transformation! It's irrelevant whether the original
    exponential complexity has a runtime 2**N or 1.6**N; we called
    it O(2**N), and others obviously call it O(c**N), depending on
    two parameters. It's irrelevant for the order we are talking
    about. But if others think it's important, I'm fine with that.

    Janis

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c on Mon Oct 27 14:41:11 2025
    From Newsgroup: comp.lang.c

    On 2025-10-27 07:52, Janis Papanagnou wrote:
    On 27.10.2025 10:48, James Kuyper wrote:
    On 2025-10-24 11:22, Janis Papanagnou wrote:
    On 24.10.2025 16:35, Michael S wrote:
    Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
    So, O(1.618**n) - a little less bad than your suggestion.

    It's new to me that O-complexities would be written with decimals.

    Complexities are determined by the highest power of the relevant
    parameter; that power needn't be integral, though it usually is. I'd be
    interested in the derivation of that result.

    My apologies. For some reason I read that as N^1.618, not 1.618^N.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Michael S@already5chosen@yahoo.com to comp.lang.c on Mon Oct 27 22:33:19 2025
    From Newsgroup: comp.lang.c

    On Mon, 27 Oct 2025 14:41:11 -0400
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 2025-10-27 07:52, Janis Papanagnou wrote:
    On 27.10.2025 10:48, James Kuyper wrote:
    On 2025-10-24 11:22, Janis Papanagnou wrote:
    On 24.10.2025 16:35, Michael S wrote:
    Actually, the number of calls in naive recursive variant =
    2*FIB(N)-1.
    So, O(1.618**n) - a little less bad than your suggestion.

    It's new to me that O-complexities would be written with
    decimals.

    Complexities are determined by the highest power of the relevant
    parameter; that power needn't be integral, though it usually is.
    I'd be interested in the derivation of that result.

    My apologies. For some reason I read that at N^1.618, not 1.618^N.

    IMHO, you don't have to apologize. Janis's argument makes no sense
    in both cases.

    Will A1+B1*pow(N, 2) always outpace A2+B2*pow(N, 1.618) when N is
    big enough ? Yes, it will.

    Will A1+B1*pow(2, N) always outpace A2+B2*pow(1.618, N) when N is
    big enough ? Yes, it will.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From James Kuyper@jameskuyper@alumni.caltech.edu to comp.lang.c on Mon Oct 27 16:52:09 2025
    From Newsgroup: comp.lang.c

    On 2025-10-27 16:33, Michael S wrote:
    On Mon, 27 Oct 2025 14:41:11 -0400
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 2025-10-27 07:52, Janis Papanagnou wrote:
    On 27.10.2025 10:48, James Kuyper wrote:
    On 2025-10-24 11:22, Janis Papanagnou wrote:
    On 24.10.2025 16:35, Michael S wrote:
    Actually, the number of calls in naive recursive variant =
    2*FIB(N)-1.
    So, O(1.618**n) - a little less bad than your suggestion.

    It's new to me that O-complexities would be written with
    decimals.

    Complexities are determined by the highest power of the relevant
    parameter; that power needn't be integral, though it usually is.
    I'd be interested in the derivation of that result.

    My apologies. For some reason I read that at N^1.618, not 1.618^N.

    IMHO, you don't have to apologize. Janis's argument makes no sense
    in both cases.

    I wasn't addressing his argument, only his comment about (so I thought)
    whether you could have a complexity of O(N^1.618), which would lie
    between O(N) and O(N^2). O(1.618^N) is a very different thing; I think,
    for sufficiently large N, it's higher than any polynomial. I think
    they're both legitimate possibilities, but I would be very interested in
    the derivation of either one.

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Kaz Kylheku@643-408-1753@kylheku.com to comp.lang.c on Mon Oct 27 21:03:09 2025
    From Newsgroup: comp.lang.c

    On 2025-10-27, Michael S <already5chosen@yahoo.com> wrote:
    Will A1+B1*pow(2, N) always outpace A2+B2*pow(1.618, N) when N is
    big enough ? Yes, it will.

    pow(1.618, N) is equivalent to approximately pow(2, 0.69421 * N).

    Both are in the complexity class of pow(2, L(N)) where L(N) is a linear
    function of N.

    This is called E, which is a subset of EXPTIME.

    Within any complexity class, not all functions grow equally fast;
    that would be the same order, not the same class.

    E.g. in class P, pow(n, 2) grows slower than pow(n, 3).

    pow(2, N) unquestionably grows faster than pow(1.618, N).

    Given any positive constant factor K, there is always an N such that
    for all n >= N:

    pow(2, n) > K * pow(1.618, n)

    No matter how you scale the base 1.618 exponential by a constant,
    the base 2 exponential will outgrow it after some N.
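
    [A quick numeric illustration of that claim - a sketch; the
    constant K = 1e12 is an arbitrary choice:]

    #include <stdio.h>
    #include <math.h>

    /* Finds the n from which 2^n exceeds K * 1.618^n for a deliberately
       huge K; from that point on the base-2 exponential stays ahead. */
    int main(void)
    {
        const double K = 1e12;
        for (int n = 0; n < 1000; n++) {
            if (pow(2.0, n) > K * pow(1.618, n)) {
                printf("2^n > K*1.618^n for all n >= %d (K = %g)\n", n, K);
                break;
            }
        }
        return 0;
    }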
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.lang.c on Tue Oct 28 01:58:43 2025
    From Newsgroup: comp.lang.c

    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 24.10.2025 16:35, Michael S wrote:
    On Fri, 24 Oct 2025 15:36:58 +0200
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    On 24.10.2025 09:53, Rosario19 wrote:


    into the above pair of functions, or anything similar.

    That's not ordinary optimization of the type which improves the
    code generated for the user's algorithm; that's recognizing and
    redesigning the algorithm.

    The line between those is blurred, but not /that/ blurred.

    int fib1(int n){return fib_tail(0, 1, n);}

    is different from

    int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}


    Both of these functions compute the result from the seeds 0 and 1,
    but fib() starts from n, goes down until n=0, then goes back up to
    n=input, whereas fib1() only goes down on n. Looking at the calls
    (fib makes 2 calls per call, while fib_tail() makes just 1), I think
    fib() is O(n^2) whereas fib1 is O(n).

    It's worse; it's O(2^N).

    Janis


    Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
    So, O(1.618**n) - a little less bad than your suggestion.

    It's new to me that O-complexities would be written with decimals.

    Look and learn. True values of parameters frequently are irrational
    and sometimes there is no explicit formula. Decimals allow easy and
    reasonably good comparison of complexity classes.

    And to be clear: O((4/3)^n) and O(1.332^n) are two different Landau
    symbols corresponding to two different complexity classes.

    Strictly speaking, 1.618 above is wrong: 1.618 < phi, where phi is the
    golden ratio, so the corresponding class is lower (smaller) than
    O(phi^n) (which is the correct complexity class). But using a decimal
    approximation gives a reasonably good feel for the complexity and is
    established practice when writing about complexity results.
    --
    Waldek Hebisch
    --- Synchronet 3.21a-Linux NewsLink 1.2