No, that's not quite right.
Complexity analysis describes a class of functions that represent an algorithm’s behavior in relation to the size of its input. Its utility is most consequential for large inputs.
Big O is a tight upper bound == worst-case behavior.
Big Ω is a tight lower bound == best-case behavior.
Big Θ is a tight upper and lower bound, meaning Big O == Big Ω.
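For reference, these are the standard formal definitions (f is the algorithm’s actual cost function, g is the class function):

\[
\begin{aligned}
f(n) = O(g(n)) &\iff \exists\, c > 0,\ n_0 : f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0 \\
f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 : f(n) \ge c \cdot g(n) \ \text{for all } n \ge n_0 \\
f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
\end{aligned}
\]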
Example for bubble sort with early quit after a no-swap pass:
• Big O = n²: input is in reverse order, so we make n passes over n elements.
• Big Ω = n: input is already sorted, so we make 1 pass over n elements.
• Big Θ = does not exist: the upper and lower bounds differ (n² ≠ n), so no single tight bound covers bubble sort’s behavior.
Example for heap sort:
• Big O = n log n — n “sink” operations, each “sink” costs log n.
• Big Ω = n log n — n “sink” operations, each “sink” costs log n.
• Big Θ = n log n — n “sink” operations, each “sink” costs log n (Big O == Big Ω).
(In other words, heap sort behaves the same no matter what the input sort order.)
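If you want to see where those “sink” operations come from, here is a minimal heap sort sketch (the sink/heapsort names are mine, just for illustration). Each sink walks down at most the height of the heap, which is log n:

#include <utility>  // std::swap

// Sink xs[i] down until the max-heap property holds in xs[0..n).
void sink( int xs[], int n, int i )
{
    while (2*i + 1 < n)                              // while i still has a child
    {
        int child = 2*i + 1;                         // left child
        if (child + 1 < n && xs[child+1] > xs[child])
            child++;                                 // prefer the larger child
        if (xs[i] >= xs[child])
            break;                                   // heap property restored
        std::swap( xs[i], xs[child] );               // drop one level (at most log n levels)
        i = child;
    }
}

void heapsort( int xs[], int n )
{
    for (int i = n/2 - 1; i >= 0; i--)               // build phase: ~n sinks
        sink( xs, n, i );
    for (int end = n - 1; end > 0; end--)            // extract phase: n-1 sinks of cost log n
    {
        std::swap( xs[0], xs[end] );                 // move current max into final position
        sink( xs, end, 0 );
    }
}

Notice that nothing in either loop looks at how sorted the input already is, which is why the best and worst cases coincide.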
In all cases, “Big” notation indicates a tight limit, meaning that the representative function is very close (if not identical) to the class function. For example, n²+2n is very tight to n². (n² dominates 2n as n→∞. In other words, n² represents the class of all functions whose dominant term is n².)
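To make “very tight” concrete, here is the n²+2n example worked out with explicit constants:

\[
n^2 \;\le\; n^2 + 2n \;\le\; 3n^2 \quad \text{for all } n \ge 1,
\]

so with c₁ = 1, c₂ = 3, and n₀ = 1, we get n² + 2n = Θ(n²).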
For a loose bound, you use “Little” notation. This is rarely useful, and typically only applied when the underlying function is too complex or too poorly understood to bound tightly, so a simpler function is used as the limit. For example, you could say heap sort has a Little o time behavior of o(n²). The bound is strict: if f(n) = o(g(n)), then f grows strictly slower than g. And again, O/o are upper limits; Ω/ω are lower limits; and a little θ does not exist.
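The limit form makes the “strictly” part precise:

\[
f(n) = o(g(n)) \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0,
\qquad \text{e.g.}\ \lim_{n \to \infty} \frac{n \log n}{n^2} = \lim_{n \to \infty} \frac{\log n}{n} = 0,
\]

which is why heap sort’s n log n qualifies as o(n²).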
Computing f(n)
This is the trick: loops matter.
Example 1: constant time.
int identity( int n )
{
    return n;
}
This is pretty easy. There are no loops. No matter what n’s value is, the function always returns immediately.
It does not actually matter what else the function does; as long as the function does not need to perform any computation that depends on the size of n, it is O(1). We can even discount O(n) stuff if n is really small.
#include <chrono>
#include <iostream>
#include <thread>

using namespace std::chrono_literals;

int quux( int n )
{
    std::cout << "The value of n is " << n << ".\n";  // T(C+18+S(n)+2)
    std::this_thread::sleep_for( 2s );                // T(2 seconds)
    return n;                                         // T(1)
}
// C is whatever it takes to get cout working
// 18+2 is the number of characters printed by the two string literals
// S(n) is the cost to convert n to a string: O(number of digits), which is
// bounded since an int has at most 32 bits
For our purposes, this function is O(1). Remember, we can discount S(n) since an int’s digit count is bounded by a small constant.
This idea that all constant values do not matter is important!
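A quick worked instance: a function that performs 2n operations is still O(n), because the constant folds into c:

\[
2n \le c \cdot n \quad \text{for all } n \ge 1 \text{ with } c = 2, \qquad \text{so } 2n = O(n).
\]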
Example 2: linear time
// Find index of first x in xs[n], else -1
int index_of( int x, int *xs, int n )
{
    for (int index = 0; index < n; index++)
        if (xs[ index ] == x)
            return index;
    return -1;
}
There is one loop. Worst case is that x is not found in xs[], so we will look at every element of xs[] before returning -1. That is O(n).
The best case is that x is the first element in xs[], meaning that we perform exactly 1 operation before returning. That is Ω(1).
Since O(n) ≠ Ω(1), there is no Θ. (Remember, ‘n’ matters as n→∞; only for arrays of size 1 does the function degenerate into a constant case.)
Example 3: quadratic time
1   void bubblesort( int xs[], int n )
2   {
3       for (int i = 0; i < n; i++)
4           for (int j = 1; j < n; j++)
5               if (xs[j] < xs[j-1])
6                   std::swap( xs[j], xs[j-1] );
7   }
The question you should ask yourself is: “How many times does this function loop?”
On line 3 we loop n times.
Inside that loop on line 4 we also loop n times. (n-1 is the same as n as n→∞.)
Lines 5 and 6 are O(1).
So that is n * (n + 2 + C); dropping the constants, we get n². So this particular version of bubble sort is Θ(n²).
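Written as a sum, with the O(1) body of lines 5 and 6 counted once per inner iteration:

\[
T(n) \;=\; \sum_{i=0}^{n-1} \sum_{j=1}^{n-1} O(1) \;=\; n\,(n-1)\cdot O(1) \;=\; \Theta(n^2).
\]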
If we wish to add the “stop if no swaps” trick, that is easy enough:
#include <utility>  // for std::swap

void bubblesort( int xs[], int n )
{
    bool swapped = true;
    while (swapped)
    {
        swapped = false;
        for (int j = 1; j < n; j++)
            if (xs[j] < xs[j-1])
            {
                std::swap( xs[j], xs[j-1] );
                swapped = true;
            }
    }
}
Analysis is similar. The number of times the outer loop runs depends entirely on the sortedness of xs[], so the best case is that the array is already sorted and the inner loop runs only once: Ω(n). The worst case is that the outer loop runs n times, because the element at xs[0] must move to xs[n-1] or vice versa: O(n²).
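If you want to poke at the best and worst cases yourself, a minimal driver might look like this (main and the test arrays are mine, purely for illustration; it assumes the bubblesort above is in scope):

#include <iostream>

int main()
{
    int sorted[]   = { 1, 2, 3, 4, 5 };  // best case: one pass, no swaps -> Ω(n)
    int reversed[] = { 5, 4, 3, 2, 1 };  // worst case: an element travels n-1 slots -> O(n²)

    bubblesort( sorted,   5 );
    bubblesort( reversed, 5 );

    for (int x : reversed)
        std::cout << x << ' ';           // prints: 1 2 3 4 5
    std::cout << '\n';
}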
Well, hope this helps.