Analysis of algorithms is one of the core areas of study in computer science. It is a central topic in several undergraduate courses and in many graduate courses.
Analysis of algorithms is an attempt to understand the computational costs of various algorithms in a mathematical sense. Algorithms are dissected and studied carefully to determine the relationship between inputs, the algorithm, and the number of computations required.
One of the basic tools in algorithm analysis is the Big-O notation, or order of operations.
Since computing algorithms can be extremely complex, and their exact implementation will vary across different languages and hardware, predicting the exact behavior of an algorithm is too difficult a task to pursue. Part of the analysis process is eliminating factors that will not contribute significantly to the run time.
For example, the run time of a fairly simple algorithm may turn out to be 4n² - 5n + 4 seconds, where n is the number of elements in the input.
If n is 100, then 4n² is 40,000, -5n is -500, and 4 is just 4. We see here that the -5n and 4 account for only about 1% of the run time. As n gets larger, they become even less significant, since n² grows much faster than n. Furthermore, we are most interested in run times when n is large, and the run times are long. When n is very small, the -5n and 4 terms may be more significant, but who cares? The program will complete in practically no time anyway.
In short, the dominant term in the run time formula when n is large is 4n². We therefore discard the other terms to simplify the process of predicting run time. We also discard the constant factor 4, since it will vary with the language and hardware, and therefore tells us nothing about the algorithm.
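To see this numerically, here is a short Python sketch (ours, not part of the original example) that compares the full formula 4n² - 5n + 4 with the dominant term 4n² for several values of n:

```python
# Compare the full run-time formula with its dominant term.
# The formula 4n^2 - 5n + 4 is the hypothetical example from the text.
for n in (10, 100, 1000, 10000):
    full = 4 * n**2 - 5 * n + 4          # complete formula
    dominant = 4 * n**2                  # dominant term only
    diff_pct = 100 * abs(full - dominant) / full
    print(f"n={n:6d}  full={full:12d}  4n^2={dominant:12d}  difference={diff_pct:6.3f}%")
```

At n = 100 the lower-order terms contribute a little over 1% of the total, and by n = 10,000 they contribute about 0.01%.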
Finally, the actual run time will depend on many factors that are difficult or impossible to predict, such as other load on the computer at the time it runs.
The algorithm itself determines what the run time will be proportional to, and the constant of proportionality will be determined empirically for each computer we run it on.
Hence, for this algorithm, we say the complexity is O(n²) ("order n squared"). This means that the run time is approximately proportional to n².
If we know the order of operations for an algorithm and have a single sample run time, we can predict run times for other input sizes fairly accurately, if certain assumptions hold.
If an O(N²) program takes 10 seconds to process 10,000 elements on a given computer, how long will it take to process 50,000? Writing the run time as T = K × N², where K is the constant of proportionality for that computer, the measured run gives K = 10 / 10,000². Processing 50,000 elements, 5 times as many, should therefore take about 5² = 25 times as long, or 250 seconds.
The assumption is that the value of K is the same for both values of N. This will only be true if the CPU and memory performance are the same in both runs. One case where this might not hold is when N is large enough that the system runs out of RAM and begins using swap space. In this case, memory performance will drop, and K will increase.
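Here is a minimal Python sketch of this extrapolation, assuming the run time follows T = K × N² with the same K for both runs; the function name predict_runtime is our own:

```python
def predict_runtime(t_known, n_known, n_new, power=2):
    """Predict run time for n_new elements from one measured run,
    assuming T = K * n**power with the same constant K for both runs."""
    k = t_known / n_known**power      # constant of proportionality for this computer
    return k * n_new**power

# 10 seconds for 10,000 elements -> prediction for 50,000 elements
print(predict_runtime(10.0, 10_000, 50_000))   # 250.0 seconds
```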
The order of operations is relatively easy to determine for many algorithms, whereas the exact run time is nearly impossible to determine.
Determining the order of operations for a given algorithm requires carefully analyzing what the algorithm does.
Example 28.1. Selection Sort
Consider the selection sort algorithm. Sorting algorithms consist mainly of comparisons and swaps.
To find the smallest element in a list of N elements, the selection sort must perform N-1 comparisons. It then swaps this element with the first element.
To find the next smallest element, it must perform N-2 comparisons, and then does another swap.
Continuing on, it will do N-3 comparisons and another swap, then N-4 and another swap, and so on, until the last round, where it does 1 comparison and 1 swap.
In all, it does (N-1) + (N-2) + ... + 1 = N(N-1)/2 comparisons, which is approximately N²/2, and N-1 swaps. Since the number of comparisons has the highest order, the algorithm is O(N²).
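Below is a simple Python sketch of selection sort; the comparison and swap counters are added only to illustrate the counts derived above and are not part of the algorithm itself:

```python
def selection_sort(items):
    """Sort a list in place, returning (comparisons, swaps) for illustration."""
    n = len(items)
    comparisons = swaps = 0
    for i in range(n - 1):
        low = i                          # index of smallest element seen so far
        for j in range(i + 1, n):
            comparisons += 1
            if items[j] < items[low]:
                low = j
        items[i], items[low] = items[low], items[i]   # swap into position
        swaps += 1
    return comparisons, swaps

data = [5, 3, 8, 1, 9, 2]
print(selection_sort(data), data)   # (15, 5) [1, 2, 3, 5, 8, 9]
```

For N = 6 elements this reports 15 comparisons (N(N-1)/2) and 5 swaps (N-1), matching the analysis.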
Example 28.2. Binary Search
Now consider searching a sorted list, such as a phone book. To conduct this search systematically, we might start in the middle and see how the middle entry compares to the search key (the item we're searching for). If the middle entry matches the key, we're done. If it is less than or greater than the key, we have eliminated half the list as potential candidates, and we go to the middle of the half that remains. We repeat this process until we either find the key or we're down to one element. This algorithm is known as the Binary Search.
So what is the order of operations? If the search key is in the list, the actual number of comparisons is a matter of luck: we could hit it on the first try, or on any try after that, and we can't predict which without examining the list in advance. What we are interested in, though, is the worst and average case. The worst case occurs when the key is either not found or is the last item we check. In that case, the number of comparisons is the number of times we can divide the list in half, which is log₂(n). (2ⁿ is the size of the list we get if we start with one element and double its size n times, so a list of n elements can be halved about log₂(n) times.)
Hence, the order of operations is O(log(n)). We drop the base 2, since changing the base only changes the result by a constant factor: log₂(n) = ln(n) / ln(2).
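A minimal iterative binary search in Python might look like the following; the comparison counter is included only to show that the count stays close to log₂(n):

```python
def binary_search(sorted_items, key):
    """Return the index of key in sorted_items, or -1 if not found."""
    lo, hi = 0, len(sorted_items) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_items[mid] == key:
            print(comparisons, "comparisons")
            return mid
        elif sorted_items[mid] < key:
            lo = mid + 1                  # key must be in the upper half
        else:
            hi = mid - 1                  # key must be in the lower half
    print(comparisons, "comparisons")
    return -1

print(binary_search(list(range(1000)), 999))   # about log2(1000) ~ 10 comparisons
```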
Most commonly used algorithms have been carefully studied and their orders of operation are well known.
Note that for multidimensional data, n is taken to be a single dimension. For example, for a 100 x 100 matrix, n is 100. For a non-square matrix, we can use n = sqrt(rows * cols).
Table 28.1. Common Orders of Operation
Algorithm | Order of Operation
---|---
Selection Sort | O(n²)
Bubble Sort | O(n²)
Quicksort | O(n*log(n))
Heapsort | O(n*log(n))
Matrix Addition | O(n²)
Matrix Multiplication | O(n³)
Traveling Salesman | O(2ⁿ)
Note that the Traveling Salesman problem has exponential order. The goal of the traveling salesman problem is to determine the shortest route for a salesman to visit each of n cities exactly once. No known method finds the shortest route deterministically (with certainty) without doing an amount of work that grows exponentially with n: the straightforward approach is to construct every possible route, and for n cities there are (n-1)! routes that visit each city once.
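To make the growth concrete, here is a brute-force Python sketch that examines every route for a small set of randomly placed cities; the coordinates and the choice of n = 8 are made up purely for illustration:

```python
import itertools
import math
import random

random.seed(1)
n = 8                                              # made-up number of cities
cities = [(random.random(), random.random()) for _ in range(n)]

def route_length(route):
    """Total length of a closed tour visiting the cities in the given order."""
    return sum(math.dist(cities[route[i]], cities[route[(i + 1) % len(route)]])
               for i in range(len(route)))

# Fix city 0 as the start and try every ordering of the rest: (n-1)! routes.
best = min(itertools.permutations(range(1, n)), key=lambda r: route_length((0,) + r))
print(math.factorial(n - 1), "routes examined; best length:", route_length((0,) + best))
```

With only 8 cities this already examines 5,040 routes; each additional city multiplies the count by roughly the number of cities.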
Any algorithm with an exponential order is considered intractable (not solvable in a reasonable amount of time), even on a very fast computer, because the number of calculations required grows too quickly for large values of n. Even worse than exponential time are O(n!) and O(nⁿ).
When faced with problems like this, computer scientists may look for heuristic (approximate, often probabilistic) solutions. A good heuristic won't guarantee an optimal result, but will provide a near-optimal result with high probability, using far less computation.
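As one illustration (a greedy nearest-neighbor heuristic, chosen here only as an example, and reusing the cities and route_length from the brute-force sketch above), the following builds a tour by always moving to the closest unvisited city; it runs quickly but offers no guarantee of an optimal route:

```python
def nearest_neighbor_tour(cities, start=0):
    """Greedy heuristic: repeatedly hop to the closest unvisited city."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        here = tour[-1]
        nxt = min(unvisited, key=lambda c: math.dist(cities[here], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbor_tour(cities)
print("heuristic tour length:", route_length(tour))   # near-optimal, not guaranteed optimal
```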