Apologies in advance for the long mail, but I have a lot to say! I've posted before about Big-O notation, and sometimes I think I get it, and then something comes along and confuses me again.
As I currently understand it, Big-O notation represents the worst-case performance of a system/algorithm. I also understand that Big Omega represents the best-case performance, and from what I can gather, Big Theta is used when the worst and best case are the same (please correct me if I'm wrong).
So I've been reading a paper in which the authors measure how long it takes, after all devices boot up, for the pointers of every device in the system to become correct (every device has two pointers, and there are N devices in the system). The paper calls this the time complexity. Each device gets an opportunity to fix its pointers once per interval t; that is, every t seconds each device tries to fix its pointers.
The paper states that in the worst case all devices will have correct pointers by time (n+1)·t = N·t, and in the best case by time n·t = (N-1)·t (so here n = N-1). For example, for a network of 1000 devices and an interval of 30 s, the worst-case time is 30,000 seconds, whereas the best case is 29,970 seconds. The paper further states that since both the lower and the upper bound are O(N), there is a tight bound of Θ(N) on the time complexity. This is what I cannot understand, as I thought the best-case performance would be O(N-1).
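To check my arithmetic on the example, here is a quick Python sketch I put together myself (the function names are my own, not from the paper) that just evaluates the two formulas:

```python
# My reading of the paper's formulas:
#   worst case: all pointers correct by time N * t
#   best case:  all pointers correct by time (N - 1) * t

def worst_case_time(n_devices: int, interval: float) -> float:
    """Worst-case completion time, N * t."""
    return n_devices * interval

def best_case_time(n_devices: int, interval: float) -> float:
    """Best-case completion time, (N - 1) * t."""
    return (n_devices - 1) * interval

N, t = 1000, 30  # 1000 devices, 30-second interval
print(worst_case_time(N, t))  # 30000 seconds
print(best_case_time(N, t))   # 29970 seconds
# The two differ by exactly one interval t, regardless of N.
```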
Now, when I posted previously, I was told that O(N-1) = O(N), and that Big-O notation has nothing to do with specific values of N but is an asymptotic property.
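For what it's worth, I tried tabulating the ratio of the two formulas myself (my own sketch, not from the paper). The interval t cancels, so the ratio of best to worst case is just (N-1)/N:

```python
# Ratio of best-case to worst-case completion time:
#   ((N - 1) * t) / (N * t) = (N - 1) / N
# The interval t cancels out, so the ratio depends only on N.
for n in [10, 100, 1000, 10_000, 100_000]:
    ratio = (n - 1) / n
    print(n, ratio)
# The ratio creeps closer and closer to 1 as N grows, which I
# *think* is what the earlier advice about O(N-1) = O(N) meant.
```

Even so, I still don't really get why this justifies calling the two bounds "the same".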
Can someone please explain this to me further (maybe in less mathematical language)? I don't see how the best and worst case can be the same, and if they're the same "asymptotically", then what does "asymptotically" actually mean? I don't understand how it is useful to describe system performance in this way.
Many thanks in advance for any help provided.