# System Performance - Big Oh and Big Omega


• Jul 6th 2010, 03:48 AM
kerrymaid
System Performance - Big Oh and Big Omega
Hi,

Apologies in advance for the long mail but I have a lot to say! I’ve posted before about Big Oh notation and sometimes I think I get it and then something comes along and confuses me again.

As I currently understand it Big Oh notation represents the worst case performance for a system/algorithm. I also understand that Big Omega represents the best case performance for a system/algorithm and from what I can gather Big Theta is when the worst and best case are the same (Please correct me if I’m wrong).

So I’ve a paper and they’re measuring how long it takes for the pointers of every device in the system (every device has two pointers and there are N devices in the system) to become correct when all devices boot up. The paper calls this the time complexity. The devices have an opportunity to fix their pointers according to an interval t. So every interval t every device will try to fix its pointers.

The paper states that in the worst case scenario all devices will have the correct pointers at time = (n+1)·t = N·t, and in the best case all devices will have the correct pointers at time = n·t = (N−1)·t (so here n = N−1). For example, for a network with 1000 devices and an interval of 30 s, the worst case time complexity is 30000 seconds whereas the best case is 29970 seconds. The paper further states that since both the lower and the upper bound are O(N), there is a tight bound of Θ(N) on the time complexity. This is what I cannot understand, as I thought the best case performance would be O(N-1).

Now when I posted previously I was told that O(n-1) = O(N) and that Big-O notation has nothing to do with specific values of N, but is an asymptotic property.

Can someone please explain this to me further (maybe in less mathematical language)? I don’t see how the best and worst case can be the same and if they’re the same “asymptotically” then what does this asymptotically mean? I don't understand how it would be useful to describe system performance in this way.

Many thanks in advance for help provided.
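(As an aside, not part of the original thread: the gap between the paper's best and worst cases can be sketched numerically. The function names and sample values below are purely illustrative.)

```python
# Sketch: the paper's worst case is N*t and its best case is (N-1)*t.
# As N grows, the ratio best/worst tends to 1 -- the two differ by a
# single interval, which is why they are "the same asymptotically".

def worst_case(N, t):
    return N * t        # all pointers correct after N intervals

def best_case(N, t):
    return (N - 1) * t  # one interval fewer

t = 30  # seconds per interval, as in the example above
for N in (10, 1000, 1_000_000):
    w, b = worst_case(N, t), best_case(N, t)
    print(N, w, b, b / w)
# For N = 1000 this reproduces the example: worst 30000 s, best 29970 s
```

For N = 10 the ratio is 0.9, already close to 1; for N = 1000000 it is 0.999999. The constant-offset difference vanishes relative to N, which is what the asymptotic notation captures.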
• Jul 6th 2010, 04:37 AM
CaptainBlack
Quote:

Originally Posted by kerrymaid

Well I can't explain with less mathematical language as it is a mathematical concept, but I can point to a conceptual error or misunderstanding that is evident in your post.

Big-O/Big-Theta notation is not directly anything to do with worst/best case performance of a system. The notation deals with the rate of growth of functions. So if the worst case performance of your system were (exactly) $\displaystyle f(n)=n^2+4n+7$ then we would say that $\displaystyle f(n)=O(n^2)$, but that is because of the rate of growth of $\displaystyle f(n)$. We can as easily write $\displaystyle f(n)=\Theta(n^2)$ in this case, as for large $\displaystyle n$:

$\displaystyle n^2<f(n)<12n^2$

CB
• Jul 6th 2010, 04:52 AM
kerrymaid
Hi CB,

Ok - I clearly haven't been thinking about this correctly. The paper goes on to say:

"We have shown that the time complexity to a consistent state is Θ(N). One might be apt to argue that the linear convergence behavior is only due to the assumption that there is a single join point. However, the time complexity from a set of k join points is Θ(k + ⌈(N − k)/k⌉) = Θ(N), and thus the asymptotic time complexity of the bootstrapping protocol remains unchanged."

Can you please explain why this is?
• Jul 6th 2010, 12:55 PM
CaptainBlack
Quote:

Originally Posted by kerrymaid
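(An illustrative check, not part of the original thread: the sandwich $\displaystyle n^2<f(n)<12n^2$ from the growth-rate example earlier in the thread can be verified directly over a range of n.)

```python
# Verify the bounds on f(n) = n^2 + 4n + 7: for n >= 2,
#     n^2 < f(n) < 12 * n^2,
# which is exactly what f(n) = Theta(n^2) asserts. The constants
# 1 and 12 are just one valid choice, not unique.

def f(n):
    return n**2 + 4 * n + 7

for n in range(2, 10_000):
    assert n**2 < f(n) < 12 * n**2
print("bounds hold for 2 <= n < 10000")
```

The lower-order terms 4n + 7 never overtake the chosen constant multiple of n², which is why they are discarded by the notation.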
If \displaystyle f(N) \in \Theta\left(k + \left\lceil \frac{N-k}{k} \right\rceil \right) then there exist constants \displaystyle a,b>0 such that: \displaystyle a\left(k + \left\lceil \frac{N-k}{k} \right\rceil \right)<|f(N)|<b\left(k + \left\lceil \frac{N-k}{k} \right\rceil \right) but: \displaystyle U \le \lceil U \rceil \le U+1, so: \displaystyle a\left(k + \frac{N-k}{k} \right)<|f(N)|<b\left(k + \frac{N-k}{k}+1 \right) So for \displaystyle$$ N$ large enough and $\displaystyle k\ge 1$

$\displaystyle a\frac{N}{k} <|f(N)|<2b\frac{N}{k}$

and so $\displaystyle f(N)\in \Theta(N)$

(The above shows that $\displaystyle \Theta\left(k + \left\lceil \frac{N-k}{k} \right\rceil \right)\subseteq \Theta(N)$ which is all you really require, but the argument can be run in the reverse direction to show that these two equivalence classes of functions are equal)
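(A numerical sketch of the same point, not part of the original thread; the function name `g` is my own label for the paper's bound.)

```python
import math

# Sketch: for fixed k, g(N) = k + ceil((N - k)/k) grows like N/k,
# so it is Theta(N) -- the k join points change the constant in
# front of N, but not the linear rate of growth.

def g(N, k):
    return k + math.ceil((N - k) / k)

N = 10**6
for k in (1, 5, 50):
    ratio = g(N, k) / N
    print(k, ratio)                 # approaches 1/k for large N
    assert abs(ratio - 1 / k) < 1e-3
```

For k = 1 the bound reduces to exactly N, and for larger fixed k the ratio g(N)/N settles at 1/k, confirming that g(N) is sandwiched between constant multiples of N just as the derivation requires.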

CB