1. Originally Posted by CaptainBlack
As f(x) > 0 for all x >= 0, all the roots are negative. Let these be -b1, -b2, ..., -bn,
with all the b's > 0.

Then:

f(x) = (x + b1)(x + b2) ... (x + bn)

Also, as the constant term is 1, we know that b1*b2* ... *bn = 1.

Now 2 + bk = 1 + 1 + bk >= 3*cuberoot(1*1*bk), by the arithmetic mean-geometric
mean (AM-GM) inequality.
So 2 + bk >= 3*cuberoot(bk).

So for x=2:

f(2) = (2 + b1)(2 + b2) ... (2 + bn) >= 3^n * cuberoot(b1*b2* ... *bn) = 3^n * cuberoot(1) = 3^n
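CaptainBlack's bound can be checked numerically. Here is a quick sanity check (the test harness and the sampling range are my own choices, not from the proof): we draw random positive b's, rescale the last one so that b1*b2*...*bn = 1, and verify that (2 + b1)...(2 + bn) >= 3^n every time.

```python
import random

def f_at_2(bs):
    """Evaluate f(2) = (2 + b1)(2 + b2)...(2 + bn) for the given b's."""
    prod = 1.0
    for b in bs:
        prod *= (2.0 + b)
    return prod

random.seed(0)
for n in (2, 3, 5):
    for _ in range(1000):
        # Draw n - 1 positive values, then force the product of all n to be 1.
        bs = [random.uniform(0.1, 10.0) for _ in range(n - 1)]
        prod = 1.0
        for b in bs:
            prod *= b
        bs.append(1.0 / prod)
        # AM-GM predicts f(2) >= 3^n (small tolerance for float rounding).
        assert f_at_2(bs) >= 3 ** n - 1e-9
print("f(2) >= 3^n held in every trial")
```

Equality is attained exactly when every bk = 1, i.e. f(x) = (x + 1)^n, since AM-GM is tight only when all three averaged terms are equal.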

RonL
That was the thing I needed. I suspected it was true, but I didn't know it was a theorem. Good problem.

2. Originally Posted by ecMathGeek
Was my proof wrong?
Yes, because the inductive step does not hold: since you take the product of the
first k r's to be 1, the (k+1)-st r must also be 1. So this proof only works if
all the roots are -1, which is not true in general.

RonL

3. Originally Posted by CaptainBlack
Yes, because the inductive step does not hold: since you take the product of the
first k r's to be 1, the (k+1)-st r must also be 1. So this proof only works if
all the roots are -1, which is not true in general.

RonL
I wasn't arguing that every root must be 1. Rather, I was arguing that if the roots in the n = k case are r_1, r_2, ..., r_k (all real), and the first k roots in the n = k + 1 case are those same values r_1, r_2, ..., r_k, then the (k+1)-st root r_{k+1} must be 1.

 There is a problem with this logic that just occurred to me: the roots in the n = k + 1 case don't have to be the same as those in the n = k case.
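That last observation is exactly the gap, and it is easy to see concretely. The two polynomials below are my own illustrative choices (not from the thread): both are monic with all-negative roots and constant term 1, yet they share no roots at all, so knowing the roots for n = k gives no handle on the roots for n = k + 1.

```python
# Degree 2: (x + 2)(x + 1/2), constant term 2 * 0.5 = 1.
roots_k = [-2.0, -0.5]
# Degree 3: (x + 4)(x + 1)(x + 1/4), constant term 4 * 1 * 0.25 = 1.
roots_k1 = [-4.0, -1.0, -0.25]

def poly_at(x, roots):
    """Evaluate the monic polynomial with the given roots at x."""
    p = 1.0
    for r in roots:
        p *= (x - r)
    return p

# Both constant terms f(0) equal 1 ...
print(poly_at(0, roots_k), poly_at(0, roots_k1))
# ... yet the two root sets have no element in common.
print(set(roots_k) & set(roots_k1))
```

So an induction on the degree cannot assume the first k roots carry over unchanged; that is why the inductive step fails.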
