Originally Posted by **Wiredg**

Hello all,

I've read some of the previous posts regarding Big-O notation, and it seems to remain my Achilles' heel in discrete math.

The definition given in my book is this:

|f(x)| <= C|g(x)| when x > k (as I've seen on other posts as well).

My confusion comes from the first example in the book, given here:

Show that f(x) = x^2 + 2x + 1 is O(x^2).

Solution:

We observe that we can readily estimate the size of f(x) when x > 1 because x < x^2 and 1 < x^2 when x > 1. It follows that:

0 <= x^2 + 2x + 1 <= x^2 + 2x^2 + x^2 = 4x^2 whenever x > 1. Consequently, we can take C = 4 and k = 1 as witnesses. That is, f(x) = x^2 + 2x + 1 <= 4x^2 whenever x > 1.

Alternatively, we can estimate f(x) when x > 2. When x > 2, we have 2x <= x^2 and 1 <= x^2. Consequently, if x > 2, we have

0 <= x^2 + 2x + 1 <= x^2 + x^2 + x^2 = 3x^2, so we can take C = 3 and k = 2 as witnesses.

End Solution
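To make the book's claims concrete for myself, I tried a quick numeric sanity check (my own sketch, not from the book): it just evaluates f(x) at many points and confirms that both pairs of witnesses really do bound it.

```python
# Numeric sanity check of the two bounds from the book's solution.
# This doesn't prove anything (only the algebra does), but it shows
# the inequalities holding on a sampled range.

def f(x):
    return x**2 + 2*x + 1

# Witnesses C = 4, k = 1: f(x) <= 4x^2 for all sampled x > 1
assert all(f(x) <= 4 * x**2 for x in range(2, 1000))

# Witnesses C = 3, k = 2: f(x) <= 3x^2 for all sampled x > 2
assert all(f(x) <= 3 * x**2 for x in range(3, 1000))

print("both bounds hold on the sampled range")
```

Running this raises no assertion errors, which at least matches what the solution claims.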

I don't really understand the "x < x^2 and 1 < x^2 when x > 1" part, nor do I see how they arrive at x^2 + 2x^2 + x^2. The same goes for the x > 2 case.

Anyone able to explain the above?

Thanks!

W

(First post!)