Big O notation understanding and problem

Hi there!

I am having a lot of trouble understanding big-O notation. I don't understand at all why the definition of big-O notation mentions the constant C. Here is the definition from my book:

"Let f and g be functions. We say that f(x) is O(g(x)) if there are constants C and k such that:

|f(x)| ≤ C|g(x)|

whenever x > k"

I understand why absolute values are used and why x must be bigger than k, but I don't understand the purpose of the constant C. If anyone could explain that in a simple way, if there is a simple way, I would be really grateful.

Now, here is a problem from my book:

Show that f(x) = x^{2} + 2x + 1 is O(x^{2})

Thanks so much to anyone who reads this!

Re: Big O notation understanding and problem

Quote:

Originally Posted by

**Nora314** I understand why absolute values are used and why x must be bigger than k, but I don't understand the purpose of the constant C.

Big-O notation is often used to describe the complexity of computer algorithms. Suppose a C program reads an input n and allocates an array of *long int* of length n. On an architecture where long int takes 4 bytes, the memory complexity of this program is 4n bytes, and we can say it is O(4n). But what if the same program is compiled on an architecture where long int takes 8 bytes? Then its complexity is 8n. Without the constant C in the definition of big-O, the complexity would no longer be O(4n), because 8n ≤ 4n fails for every n > 0.

For another example, suppose a program reads an input n and does n iterations of a loop with, say, 5 assignments. If we measure the time complexity as the number of loop iterations, then the complexity is O(n). However, if we measure the number of assignments, or the number of assembly or machine code instructions after the program is compiled, then without the constant C in the definition of big-O it would no longer be O(n).

The presence of C is driven by the desire to have a rougher measure of complexity and not distinguish between various implementation details, such as architecture, programming language and so on. When we vary such details, complexity can usually be multiplied or divided by a constant. Therefore, big-O is designed to compare functions up to a constant factor.
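To see the constant at work with the numbers above, one can check the definition's inequality directly for the two byte counts:

```latex
% 8n is O(4n): take C = 2 and k = 0; then for all n > 0,
|8n| = 8n \le 2 \cdot |4n| .
% Likewise 4n and 8n are both O(n), with C = 4 and C = 8 respectively,
% which is why one normally just writes O(n) for this program's memory use.
```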

Quote:

Originally Posted by

**Nora314** Show that f(x) = x^{2} + 2x + 1 is O(x^{2})

Note that for x ≥ 1 we have 2x ≤ 2x^{2} and 1 ≤ x^{2}, so x^{2} + 2x + 1 ≤ 4x^{2}. Hence the definition is satisfied with C = 4 and k = 1. (The tighter bound x^{2} + 2x + 1 ≤ 3x^{2} fails at x = 1, since 4 > 3; it only holds from about x ≥ 1.37, so C = 4 is the cleaner choice.)
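Written out as a single chain of inequalities, choosing the witnesses C = 4 and k = 1, the verification might look like:

```latex
% For x \ge 1, each term is bounded by a multiple of x^2:
\begin{aligned}
|x^2 + 2x + 1| &= x^2 + 2x + 1 \\
  &\le x^2 + 2x^2 + x^2
     && \text{(since } x \ge 1 \Rightarrow 2x \le 2x^2 \text{ and } 1 \le x^2\text{)} \\
  &= 4x^2 = 4\,|x^2| .
\end{aligned}
% So |f(x)| \le C |g(x)| holds with C = 4 whenever x > k = 1,
% which is exactly the definition of f(x) being O(x^2).
```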

Re: Big O notation understanding and problem

Thank you so much for the long and nice answer! :)

I have also looked at the problem a lot, and I think I have figured out how to solve it now. I really admire all you people who know discrete math :p