# Class 34: Algorithm Analysis (2)

Back to Algorithm Analysis (1). On to Algorithm Analysis (3).

Held: Tuesday, April 4, 2006

Summary: Today we formalize techniques for comparatively evaluating the running time of algorithms.

Notes:

• There are no readings for tomorrow.
• I was disappointed that so few of you remembered the efficient exponentiation algorithm. I was tempted to give you a quiz today, but decided that was overkill.
• Today will continue the lecture/discussion model. Tomorrow will be lab.

Overview:

• Asymptotic Analysis.
• Asymptotic Analysis in Practice.
• Supporting Asymptotic Analysis with Experiments.
• Eliminating Constants.
• Relating Functions.

## Asymptotic Analysis

• Noting problems in providing a precise analysis of the running time of programs, computer scientists developed a technique which is often called asymptotic analysis. In asymptotic analysis of algorithms, one describes the general behavior of algorithms in terms of the size of input, but without delving into precise details.
• The analysis is asymptotic in that we look at the behavior as the input gets larger.
• There are many issues to consider in analyzing the asymptotic behavior of a program. One particularly useful metric is an upper bound on the running time of an algorithm. We call this the Big-O of an algorithm.
• Computer scientists use Big-O notation in a variety of ways. Big-O notation has a formal, mathematical underpinning. However, most computer scientists use it more informally.
• Informally, Big-O gives the general shape of the curve of the graph of running time vs. input size for sufficiently big input.
• That is, is it linear, quadratic, logarithmic, ...
• Big-O is defined mathematically, as a relationship between functions.
• f(n) is in O(g(n)) iff
• there exists a number n0
• there exists a number d > 0
• for all n > n0, abs(f(n)) <= abs(d*g(n))
• What does this say? It says that after a certain value, n0, f(n) is bounded above by a constant (that is, d) times g(n).
• The constant, d, helps accommodate the variation in the algorithm.
• We don't usually identify the d precisely.
• The n0 says for big enough n.
• We can apply big-O to algorithms.
• n is the size of the input (e.g., the number of items in a list or vector to be manipulated).
• f(n) is the running time of the algorithm.
• Some common Big-O bounds
• An algorithm that is in O(1) takes constant time. That is, the running time is independent of the input. Getting the size of a vector should be an O(1) algorithm.
• An algorithm that is in O(n) takes time linear in the size of the input. That is, we basically do constant work for each element of the input. Finding the smallest element in a list is often an O(n) algorithm.
• An algorithm that is in O(log₂ n) takes logarithmic time. While the running time depends on the size of the input, it is clear that not every element of the input is processed. Many such algorithms involve the strategy of divide and conquer.
• An algorithm that is in O(n²) is said to have quadratic running time.
• Many interesting algorithms are O(n log₂ n), which we pronounce "en-log-en".
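To get a feel for these common bounds, it can help to tabulate them at a few input sizes and watch how quickly each grows. The following sketch (not from the class notes; the particular sizes are arbitrary) does exactly that:

```python
import math

# Tabulate the common Big-O growth functions at a few input sizes
# to see the general shape of each curve.
def growth_table(sizes):
    """Return rows of (n, 1, log2 n, n, n log2 n, n^2) for each n."""
    rows = []
    for n in sizes:
        rows.append((n, 1, math.log2(n), n, n * math.log2(n), n ** 2))
    return rows

for row in growth_table([16, 256, 4096]):
    print("n={0:5d}  O(1)={1}  O(log n)={2:4.0f}  O(n)={3:5d}  "
          "O(n log n)={4:8.0f}  O(n^2)={5:10d}".format(*row))
```

Even at n = 4096, the logarithmic column has barely moved while the quadratic column has exploded; that difference in shape is what Big-O captures.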

## Asymptotic Analysis in Practice

• We now have a theoretical grounding for asymptotic analysis. How do we do it in practice?
• For iterative algorithms, it's often best to count the steps in an algorithm and then add them up.
• Assume most things take one step.
• If you call a function, you'll need to analyze the running time of that function.
• For loops, analyze the body of the loop and then multiply by the number of times the loop repeats.
• For conditionals, analyze the two halves of the conditional and take the largest.
• We may find other ways to count, too.
• What do we do about recursive functions? A few things.
• Rephrase the actions of the recursive function iteratively.
• After you've taken combinatorics, you can use recurrence relations for recursive functions.
• We may do some informal recurrence relations.
• Over the next few days, we'll look at a number of examples. Some starting ones.
• Finding the smallest/largest element in a vector of integers.
• Finding the average of all the elements in a vector of integers.
• Putting the largest element in an array at the end of the array, if we're only allowed to swap adjacent elements.
• Binary search
• ...
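The first starting example can be sketched directly, instrumented with a step counter so the counting rules above can be checked against the code. This is an illustrative sketch (Python lists stand in for the vectors mentioned in the notes; the counter is an addition for teaching purposes):

```python
# Finding the smallest element in a list, with a comparison counter.
# The loop body does constant work, and the loop repeats once per
# remaining element, so the total work is linear in the input: O(n).
def smallest(values):
    """Return (smallest element, number of comparisons performed)."""
    comparisons = 0
    small = values[0]
    for v in values[1:]:
        comparisons += 1          # one comparison per remaining element
        if v < small:
            small = v
    return small, comparisons
```

A list of n elements always costs exactly n - 1 comparisons here, so doubling the input doubles the work, as the O(n) analysis predicts.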

## Experimental Analysis

• Note that in addition to this informal analysis of running time, we often do experimental analysis to see if the data we gather match the predicted running times.
• What do I mean when I say analyze them experimentally? I mean that we can time them on a variety of inputs and graph the results.
• If the experimental and the abstract results match, we can be fairly confident in the abstract results.
• If they don't, we may need to reflect and try again.
• Our analysis may be correct and our implementation incorrect.
• Our analysis may be correct and our data may all be outliers.
• Our analysis may be incorrect.
• Our analysis is for worst case, and the data are often for average case.
• ...
• Note also that some analyses can be very difficult.
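A minimal experimental setup along these lines might look like the following sketch; the choice of `sum` and of the particular sizes is illustrative, not from the notes:

```python
import time

# Time a function on inputs of increasing size and collect the results,
# so they can be graphed and compared against the predicted running time.
def time_algorithm(f, sizes):
    """Return a list of (n, elapsed seconds) pairs for f on range(n)."""
    results = []
    for n in sizes:
        data = list(range(n))
        start = time.perf_counter()
        f(data)
        results.append((n, time.perf_counter() - start))
    return results

# sum should look linear: each doubling of n should roughly double
# the elapsed time (modulo noise and outliers, as noted above).
for n, t in time_algorithm(sum, [10_000, 20_000, 40_000]):
    print(n, t)
```

In practice one would repeat each timing several times and take a median, since single measurements on small inputs are dominated by noise.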

## Eliminating Constants

• One of the nice things about asymptotic analysis is that it makes constants unimportant because they can be hidden in the d.
• If f(n) is 100n seconds and g(n) is 0.5n seconds, then
• f(n) is in O(g(n)) [let d be 200]
• g(n) is in O(f(n)) [let d be 1]
• If f(n) is 100n seconds and g(n) is n² seconds, then f(n) is in O(g(n)) [let n0 be 100 and d be 1]
• However, g(n) is not in O(f(n)). Why not?
• Suppose there were an n0 and a d.
• Consider what happens for n = 101*d.
• d*f(n) = d*100*(101*d) = 100*101*d*d.
• However, g(n) = (101*d)² = 101*101*d*d, which is even larger, since 101 > 100.
• If n0 is greater than 101*d, we'll still have this problem at larger n [proof left to reader].
• Since constants can be eliminated, we normally don't write them.
• That is, we say that the running time of an algorithm is O(n) or O(n²) or ....
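The worked example above, with f(n) = 100n and g(n) = n², can be spot-checked numerically. This is a minimal sketch; the particular range and the value of d in the second half are chosen for illustration:

```python
# f(n) is in O(g(n)): with n0 = 100 and d = 1, the definition holds.
def f(n): return 100 * n
def g(n): return n * n

n0, d = 100, 1
assert all(abs(f(n)) <= abs(d * g(n)) for n in range(n0 + 1, 1000))

# The other direction fails: for any d, once n exceeds 100*d
# we have g(n) = n*n > d*100*n = d*f(n).
d = 3
n = 100 * d + 1
print(g(n) > d * f(n))   # True: no constant d can bound n^2 by 100n
```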

## Relating Functions

• Note that some of the counting can be hard, so we may also need to estimate throughout the process.
• We can often express the running time of an algorithm as a composition of other functions.
• For example, we might have a primary control structure which repeats its contents O(h(n)) times and within that control structure, we have a subalgorithm that takes O(g(n)) time.
• Similarly, we might break our algorithm up into two independent algorithms, one of which takes O(h(n)) time and one of which takes O(g(n)) time.
• We might also look at other relationships.
• Here are some questions we might ask ourselves about composition of functions.
• If f(n) is in O(h(n)) and g(n) is in O(k(n)), then what can we say about f(n) + g(n)?
• If f(n) is in O(h(n)) and g(n) is in O(k(n)), then what can we say about f(n) * g(n)?
• If f(n) is in O(g(n)) and g(n) is in O(h(n)), then what can we say about f(n) and h(n)?
• If f(n) is in O(kg(n)) and k is a constant, then what else can we say about f(n) and g(n)?
• If f(n) is in O(g(n) + h(n)) and g(n) is in O(h(n)), then what simpler thing can we say about f(n)?
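The first two questions can be explored numerically before answering them formally. In the sketch below, the particular functions and the constant d are illustrative choices, not part of the notes; the expected answers being spot-checked are the standard ones (the sum is in O(h(n) + k(n)) and the product is in O(h(n) * k(n))):

```python
# Concrete witnesses: f(n) = 3n is in O(h(n)) with h(n) = n, and
# g(n) = 2n^2 is in O(k(n)) with k(n) = n^2.
f = lambda n: 3 * n
g = lambda n: 2 * n * n
h = lambda n: n
k = lambda n: n * n

d = 6   # one constant that works for both bounds below
for n in range(1, 1000):
    assert f(n) + g(n) <= d * (h(n) + k(n))   # sum bounded by h + k
    assert f(n) * g(n) <= d * (h(n) * k(n))   # product bounded by h * k
print("bounds hold for n = 1 .. 999")
```

A numeric check like this is no substitute for a proof, but it is a quick way to catch a wrong conjecture before trying to prove it.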


Disclaimer: I usually create these pages on the fly, which means that I rarely proofread them and they may contain bad grammar and incorrect details. It also means that I tend to update them regularly (see the history for more details). Feel free to contact me with any suggestions for changes.

This document was generated by Siteweaver on Tue May 9 08:31:46 2006.
The source to the document was last modified on Thu Jan 12 14:58:06 2006.
This document may be found at `http://www.cs.grinnell.edu/~rebelsky/Courses/CS152/2006S/Outlines/outline.34.html`.


Samuel A. Rebelsky, rebelsky@grinnell.edu