On 2025-09-30, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
In another newsgroup the question came up to what degree the
GNU "C" compilers can handle optimization of recursions like
fib() or ack(), i.e. cascading ones. Somebody here may know?
What exactly are you talking about? The ability to unroll one or more
levels of recursion by inlining cases of a function into itself
to create a larger function that makes fewer calls?
The common way fib is written recursively doesn't lend itself to
tail call optimization; it has to be refactored for that.
The usual way Ackermann is written, it has some tail calls
and a non-tail call, since one of its cases returns
a call to ack, which has an argument computed by calling ack.
Can we expect that (with the higher optimization levels) the
exponential complexity from cascaded recursion gets linearized?
I'm reasonably sure that GCC is not going to recognize and linearize
"return fib(n - 1) + fib(n - 2)" recursion. You have to do it yourself
with explicit memoization or iterative rewrite.
On Tue, 30 Sep 2025 16:12:14 -0000 (UTC), Kaz Kylheku wrote:
On 2025-09-30, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
In another newsgroup the question came up to what degree the
GNU "C" compilers can handle optimization of recursions like
fib() or ack(), i.e. cascading ones. Somebody here may know?
What exactly are you talking about? The ability to unroll one or more
levels of recursion by inlining cases of a function into itself
to create a larger function that makes fewer calls?
The common way fib is written recursively doesn't lend itself to
tail call optimization; it has to be refactored for that.
The usual way Ackermann is written, it has some tail calls
and a non-tail call, since one of its cases returns
a call to ack, which has an argument computed by calling ack.
Can we expect that (with the higher optimization levels) the
exponential complexity from cascaded recursion gets linearized?
I'm reasonably sure that GCC is not going to recognize and linearize
"return fib(n - 1) + fib(n - 2)" recursion. You have to do it yourself
with explicit memoization or iterative rewrite.
Iterative fibonacci seems many times faster than the recursive one.
I don't know if the compiler can optimize to that point...
On 2025-10-22, Rosario19 <Ros@invalid.invalid> wrote:
On Tue, 30 Sep 2025 16:12:14 -0000 (UTC), Kaz Kylheku wrote:
On 2025-09-30, Janis Papanagnou wrote:
In another newsgroup the question came up to what degree the
GNU "C" compilers can handle optimization of recursions like
fib() or ack(), i.e. cascading ones. Somebody here may know?
What exactly are you talking about? The ability to unroll one or more
levels of recursion by inlining cases of a function into itself
to create a larger function that makes fewer calls?
The common way fib is written recursively doesn't lend itself to
tail call optimization; it has to be refactored for that.
The usual way Ackermann is written, it has some tail calls
and a non-tail call, since one of its cases returns
a call to ack, which has an argument computed by calling ack.
Can we expect that (with the higher optimization levels) the
exponential complexity from cascaded recursion gets linearized?
I'm reasonably sure that GCC is not going to recognize and linearize
"return fib(n - 1) + fib(n - 2)" recursion. You have to do it yourself
with explicit memoization or iterative rewrite.
Iterative fibonacci seems many times faster than the recursive one.
I don't know if the compiler can optimize to that point...
iterative fibonacci can be expressed by tail recursion, which
the compiler can optimize to iteration.
You have to refactor fib for tail recursion; it may be easier
to start with the iterative version and turn the loop into
a tail call.
Iterative fib(n) starts with the vector v = <0, 1> and then
performs the step n times, returning the second element of
the vector.
The step is to multiply the vector by the matrix:

             [ 0 1 ]
    v_next = [ 1 1 ] v
In other words
v_next[0] = v[1]
v_next[1] = v[0] + v[1]
Thus a tail-recursive fib can look like this. The
recursive helper function:
static int fib_tail(int v0, int v1, int n)
{
    if (n == 0)
        return v1;
    else
        return fib_tail(v1, v0 + v1, n - 1);
}
plus the entry point that conforms to the expected fib API:
int fib(int n)
{
    return fib_tail(0, 1, n);
}
When I compile that with GCC 11 for x86-64, using -O3 -S,
I get this [edited for brevity by omitting various environment-related cruft]:
fib:
        movl    $1, %eax
        xorl    %edx, %edx
        testl   %edi, %edi
        jne     .L2
        jmp     .L8
.L5:
        movl    %ecx, %eax
.L2:
        leal    (%rdx,%rax), %ecx
        movl    %eax, %edx
        subl    $1, %edi
        jne     .L5
        addl    $1, %eax
        ret
.L8:
        ret
The static helper function fib_tail has disappeared, inlined into fib,
and turned from recursion to iteration.
GCC is not going to turn this:
int fib(int n)
{
    if (n < 2)
        return 1;
    else
        return fib(n - 1) + fib(n - 2);
}
into the above pair of functions, or anything similar.
That's not ordinary optimization of the type which improves the code
generated for the user's algorithm; that's recognizing and redesigning
the algorithm.
The line between those is blurred, but not /that/ blurred.
into the above pair of functions, or anything similar.
That's not ordinary optimization of the type which improves the code generated
for the user's algorithm; that's recognizing and redesigning the algorithm.
The line between those is blurred, but not /that/ blurred.
int fib1(int n){return fib_tail(0, 1, n);}
is different from
int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}
Both of these functions compute the result starting from 0 and 1, but
fib() starts at n, goes down to n=0, then back up to n=input, while
fib1() only counts n down. Looking at the calls (fib makes 2 calls
per call, while fib_tail() makes just 1), I think fib() is O(n^2)
while fib1 is O(n).
On 24.10.2025 09:53, Rosario19 wrote:
into the above pair of functions, or anything similar.
That's not ordinary optimization of the type which improves the
code generated for the user's algorithm; that's recognizing and
redesigning the algorithm.
The line between those is blurred, but not /that/ blurred.
int fib1(int n){return fib_tail(0, 1, n);}
is different from
int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}
Both of these functions compute the result starting from 0 and 1, but
fib() starts at n, goes down to n=0, then back up to n=input, while
fib1() only counts n down. Looking at the calls (fib makes 2 calls
per call, while fib_tail() makes just 1), I think fib() is O(n^2)
while fib1 is O(n).
It's worse; it's O(2^N).
Janis
On Fri, 24 Oct 2025 15:36:58 +0200
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 24.10.2025 09:53, Rosario19 wrote:
into the above pair of functions, or anything similar.
That's not ordinary optimization of the type which improves the
code generated for the user's algorithm; that's recognizing and
redesigning the algorithm.
The line between those is blurred, but not /that/ blurred.
int fib1(int n){return fib_tail(0, 1, n);}
is different from
int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}
Both of these functions compute the result starting from 0 and 1, but
fib() starts at n, goes down to n=0, then back up to n=input, while
fib1() only counts n down. Looking at the calls (fib makes 2 calls
per call, while fib_tail() makes just 1), I think fib() is O(n^2)
while fib1 is O(n).
It's worse; it's O(2^N).
Janis
Actually, the number of calls in the naive recursive variant is 2*FIB(N)-1.
So, O(1.618**n) - a little less bad than your suggestion.
On 24.10.2025 16:35, Michael S wrote:
On Fri, 24 Oct 2025 15:36:58 +0200
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 24.10.2025 09:53, Rosario19 wrote:
into the above pair of functions, or anything similar.
That's not ordinary optimization of the type which improves the
code generated for the user's algorithm; that's recognizing and
redesigning the algorithm.
The line between those is blurred, but not /that/ blurred.
int fib1(int n){return fib_tail(0, 1, n);}
is different from
int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}
Both of these functions compute the result starting from 0 and 1, but
fib() starts at n, goes down to n=0, then back up to n=input, while
fib1() only counts n down. Looking at the calls (fib makes 2 calls
per call, while fib_tail() makes just 1), I think fib() is O(n^2)
while fib1 is O(n).
It's worse; it's O(2^N).
Janis
Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
So, O(1.618**n) - a little less bad than your suggestion.
It's new to me that O-complexities would be written with decimals.
You also don't say that 'insertion sort' is O((N^2)/2); it's O(N^2).
On 2025-10-24, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 24.10.2025 16:35, Michael S wrote:
On Fri, 24 Oct 2025 15:36:58 +0200
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 24.10.2025 09:53, Rosario19 wrote:
into the above pair of functions, or anything similar.
That's not ordinary optimization of the type which improves the
code generated for the user's algorithm; that's recognizing and
redesigning the algorithm.
The line between those is blurred, but not /that/ blurred.
int fib1(int n){return fib_tail(0, 1, n);}
is different from
int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}
Both of these functions compute the result starting from 0 and 1, but
fib() starts at n, goes down to n=0, then back up to n=input, while
fib1() only counts n down. Looking at the calls (fib makes 2 calls
per call, while fib_tail() makes just 1), I think fib() is O(n^2)
while fib1 is O(n).
It's worse; it's O(2^N).
Janis
Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
So, O(1.618**n) - a little less bad than your suggestion.
It's new to me that O-complexities would be written with decimals.
You also don't say that 'insertion sort' is O((N^2)/2); it's O(N^2).
I think, whereas constant factors and constant terms are understood not
to matter in asymptotic complexity, the choice of exponential base in
exponential complexity does. Similarly to the choice of exponent
in the category of polynomial time: e.g. cubic is slower than quadratic.
If we have 1 < a < b then O(b^n) is a faster growth than O(a^n).
2^n doubles with every increment in n.
Whereas 1.02^n doubles --- here we can use the 'rule of 72' from
finance --- for a +36 increase in n.
If spending twice the computing time buys us only a +1 increase in
the input parameter n, versus +36, that's a big deal.
A term added to n in 2^n wouldn't matter, because 2^(n+c)
is (2^c)(2^n), translating to a constant factor. But a coefficient
on n would matter: 2^(kn) = (2^k)^n. It changes the base.
On 24.10.2025 19:14, Kaz Kylheku wrote:
On 2025-10-24, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 24.10.2025 16:35, Michael S wrote:
On Fri, 24 Oct 2025 15:36:58 +0200
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 24.10.2025 09:53, Rosario19 wrote:
into the above pair of functions, or anything similar.
That's not ordinary optimization of the type which improves the
code generated for the user's algorithm; that's recognizing and
redesigning the algorithm.
The line between those is blurred, but not /that/ blurred.
int fib1(int n){return fib_tail(0, 1, n);}
is different from
int fib(int n){if(n<2) return 1; return fib(n - 1) + fib(n - 2);}
Both of these functions compute the result starting from 0 and 1, but
fib() starts at n, goes down to n=0, then back up to n=input, while
fib1() only counts n down. Looking at the calls (fib makes 2 calls
per call, while fib_tail() makes just 1), I think fib() is O(n^2)
while fib1 is O(n).
It's worse; it's O(2^N).
Janis
Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
So, O(1.618**n) - a little less bad than your suggestion.
It's new to me that O-complexities would be written with decimals.
You also don't say that 'insertion sort' is O((N^2)/2); it's O(N^2).
I think, whereas constant factors and constant terms are understood not
to matter in asymptotic complexity, the choice of exponential base in
exponential complexity does. Similarly to the choice of exponent
in the category of polynomial time: e.g. cubic is slower than quadratic.
I don't know about you, but my (40 year old) knowledge is reflected
by the table you find in https://de.wikipedia.org/wiki/Landau-Symbole.
We have differentiated O(1), O(log log N), O(log N), O(N), O(N log N),
O(N^2), O(N^3) - other O(N^c) for c>3 were practically "irrelevant";
no need to show me an example of O(N^4), or O(N^3 log N), I know there
are examples - plus O(2^N) and O(N!). (There are two more listed on Wiki.)
(Just note that all the complexity classes listed here(!) are functions
of N alone, where "1" is "N^0" and "N^M" is just abbreviated notation
to avoid listing the individual, mostly irrelevant, exponents.)
I think that if we compare, as done here, different complexity classes,
O(N) vs. O(2^N), the decimals are irrelevant anyway! If we're comparing
functions within the same class it makes sense to differentiate
an O(2^N) complexity with concrete numbers like 1.618**n vs. 2**n.
But we've never said that these two were different complexity classes.
The class O(a^N) (for a>1) is anyway just called exponential complexity.
And since O defines an _order_ (with a specific characteristic) there's
no point in differentiating a class O(1.618^N) from a class O(1.617^N).
On 2025-10-24 11:22, Janis Papanagnou wrote:
On 24.10.2025 16:35, Michael S wrote:
Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
So, O(1.618**n) - a little less bad than your suggestion.
It's new to me that O-complexities would be written with decimals.
Complexities are determined by the highest power of the relevant
parameter; that power needn't be integral, though it usually is.
I'd be interested in the derivation of that result.
You also don't say that 'insertion sort' is O((N^2)/2); it's O(N^2).
While you're correct, I wonder why you said that? I don't see any
examples of anyone saying anything like that above.
On 27.10.2025 10:48, James Kuyper wrote:
On 2025-10-24 11:22, Janis Papanagnou wrote:
On 24.10.2025 16:35, Michael S wrote:
Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
So, O(1.618**n) - a little less bad than your suggestion.
It's new to me that O-complexities would be written with decimals.
Complexities are determined by the highest power of the relevant
parameter; that power needn't be integral, though it usually is. I'd be
interested in the derivation of that result.
On 2025-10-27 07:52, Janis Papanagnou wrote:
On 27.10.2025 10:48, James Kuyper wrote:
On 2025-10-24 11:22, Janis Papanagnou wrote:
On 24.10.2025 16:35, Michael S wrote:
Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
So, O(1.618**n) - a little less bad than your suggestion.
It's new to me that O-complexities would be written with
decimals.
Complexities are determined by the highest power of the relevant
parameter; that power needn't be integral, though it usually is.
I'd be interested in the derivation of that result.
My apologies. For some reason I read that as N^1.618, not 1.618^N.
On Mon, 27 Oct 2025 14:41:11 -0400
James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 2025-10-27 07:52, Janis Papanagnou wrote:
On 27.10.2025 10:48, James Kuyper wrote:
On 2025-10-24 11:22, Janis Papanagnou wrote:
On 24.10.2025 16:35, Michael S wrote:
Actually, the number of calls in naive recursive variant = 2*FIB(N)-1.
So, O(1.618**n) - a little less bad than your suggestion.
It's new to me that O-complexities would be written with
decimals.
Complexities are determined by the highest power of the relevant
parameter; that power needn't be integral, though it usually is.
I'd be interested in the derivation of that result.
My apologies. For some reason I read that as N^1.618, not 1.618^N.
IMHO, you don't have to apologize. Janis's argument makes no sense
in both cases.
Will A1+B1*pow(2, N) always outpace A2+B2*pow(1.618, N) when N is
big enough? Yes, it will.