Big-O notation is often used to describe the complexity of computer algorithms. Suppose a C program reads an input n and allocates an array of long int of length n. Suppose also that on some architecture long int takes 4 bytes. Then the memory complexity of this program is 4n bytes, and we can say it is O(4n). But what if the same program is compiled on an architecture where long int takes 8 bytes? Then its complexity is 8n. If there were no constant C in the definition of big-O, the complexity would no longer be O(4n), because 8n is not O(4n) under such a definition.

For another example, suppose a program reads an input n and performs n iterations of a loop containing, say, 5 assignments. If we measure time complexity as the number of loop iterations, the complexity is O(n). However, if we instead count assignments (5n of them), or the assembly or machine instructions executed after the program is compiled, then it is no longer O(n) without the constant C in the definition of big-O.

The presence of C is driven by the desire for a coarser measure of complexity, one that does not distinguish between implementation details such as architecture, programming language, and so on. Varying such details usually multiplies or divides the complexity by a constant. Big-O is therefore designed to compare functions up to a constant factor.

Note that for x ≥ 1 we have 2x ≤ 2x^{2} and 1 ≤ x^{2}, so x^{2} + 2x + 1 ≤ 4x^{2}; hence x^{2} + 2x + 1 is O(x^{2}) with C = 4.