CSC152 2005S, Class 30: Algorithm Analysis (2)
Admin:
* Cookies!
* Assignment due!
* Exam to be returned Friday. (By special assignment from the president.)
* Homework for break:
* Figure out a way to keep Rolf in check
* Figure out a way to get the "East siders" to participate
* Jonathan also must figure out how to get Sam to stop throwing things
Overview:
* Iterative efficient exponentiation
* Analyzing algorithms: Ignoring the details
* Analyzing algorithms: Big O notation
* Analyzing algorithms: In practice
Iterative Efficient Exponentiation:
* See sample code (a sketch appears after this list)
* Natural conclusion: Recursion is your friend
* Other conclusions:
* We can make recursive functions "iterative" (use while loops)
* Accumulators help
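The sample code itself isn't reproduced in these notes. Here is a minimal
sketch of what the iterative version might look like; the name expt3 and
the types are assumptions, not necessarily the class's exact code:

  // A sketch of iterative efficient exponentiation by repeated squaring.
  // Invariant: result * x^n stays equal to the original x^n.
  public static long expt3(long x, int n) {
    long result = 1;            // the accumulator
    while (n > 0) {
      if (n % 2 == 1) {         // odd exponent: peel off one factor of x
        result = result * x;
        n = n - 1;
      } else {                  // even exponent: square the base, halve n
        x = x * x;
        n = n / 2;
      } // if/else
    } // while
    return result;
  } // expt3

For example, expt3(2, 10) returns 1024 after about 2*log_2(10) loop
repetitions, rather than 10 separate multiplications.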
Given two algorithms that solve the same problem, how do we
determine which is "the fastest"?
* Experimentally/empirically:
Run lots of time tests on the two algorithms (a timing sketch appears after this list)
* Note: If we had only tried 100, 200, 300 as exponents, we would have preferred v3 (while) to v2 (recursive)
* Count the number of steps each algorithm will take "in the worst possible case"
* Take the one that will be fastest
* We'll do both! (The experiments will help support our counting)
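A rough sketch of this kind of timing test; the method name expt2 and the
trial counts are illustrative, not the actual lab code:

  // A single call is too fast to time reliably, so time many calls.
  // (Inside main, say:)
  long before = System.currentTimeMillis();
  for (int trial = 0; trial < 10000; trial++) {
    expt2(2, 1000);             // the recursive version, say
  }
  long after = System.currentTimeMillis();
  System.out.println("expt2: " + (after - before) + " milliseconds");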
More precisely, we'll write a function f(n) that gives "the number
of steps the algorithm takes on input of size n"
* Problem: Some steps take longer than others
* Addition almost always takes less time than multiplication
* Multiplying really large numbers takes longer than multiplying small numbers (at least if you're using BigDecimals)
* Multiplying floats or doubles takes longer than multiplying ints or longs
* Problem: The time an operation takes depends on the computer it runs on
* Problem: In experiments:
* Two inputs of the same size may seem to take different times
* The same input may seem to take different times on different runs
* Problem: Don't want to experiment with large inputs
* Goal: Predict running time for large inputs
* The mathematician's solution:
* Ignore particular details (e.g. constant multipliers)
* Predict the "shape" of the function
We'll use Lorelei's solution of "count the worst case" and then
throw away constant multipliers and lower-order terms.
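For example (illustrative numbers, not a count from class): if the
worst-case count comes out to f(n) = 3*n^2 + 5*n + 7, then
f(n) <= 4*n^2 for all n >= 7, so we report simply O(n^2).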
Formalisms:
* A function f(n) (typically the running time of a procedure) is in
Big-O of g(n), written O(g(n)), iff
* There exist c > 0 and n0 s.t. for all n > n0, |f(n)| <= c*|g(n)|
* Englishisms:
* n>n0 "for big enough n"
* c: "We don't care about constant multipliers"
* Related formalisms:
* Theta(g(n)): "f and g have the same shape"
There exist c1, c2 > 0 s.t. c1*|g(n)| <= |f(n)| <= c2*|g(n)| for all n > n0
* little o(g(n)): "f grows strictly more slowly than g"
For every c > 0, |f(n)| < c*|g(n)| for all n > n0
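For example (illustrative): f(n) = 3n + 5 is in O(n); take c = 4 and
n0 = 5, since 3n + 5 <= 4n whenever n >= 5. It is also in Theta(n)
(take c1 = 3 and c2 = 4), but it is not in o(n): for c = 3, the
requirement 3n + 5 < 3n never holds.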
Our goal (in practice)
* Bound running time above 'closely'
* Accept approximations
Consider expt1: for (i = 0; i < n; i++) { result = result*x; }
This is O(n), assuming that each multiplication takes a constant time
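Counting it out (a sketch; the step counts assume each multiplication is
one constant-time step):

  long result = 1;                  // 1 step
  for (int i = 0; i < n; i++) {     // the body repeats n times
    result = result * x;            // 1 multiplication per repetition
  } // for
  // Total: about 1 + n steps, which is O(n)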
Consider expt3: while (n > 0) { ... }
At worst, in every other repetition of the while, we split the exponent (n) in half
Total repetitions: 2 * "the number of times we have to split n in half to get 1"
= 2 * log_2(n)
But we can throw away the 2* (as well as the five or so steps in the body)
O(log_2(n)); and since log bases differ only by a constant multiplier, we simply say O(log n)
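Concretely (illustrative numbers): for n = 1024, expt1 does about 1024
multiplications, while expt3's loop repeats at most about
2 * log_2(1024) = 2 * 10 = 20 times.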
What do we do in general?
* For a simple statement (assignment with some expression): Count 1
* For a conditional: Count both halves and take the max
* For a loop: Count (a) the number of times the loop repeats;
(b) the time each repetition takes; Multiply those two values
* For a function/procedure/method call: Look up the amount of time that function takes (or analyze it separately)
* Finally: Add 'em all up
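For example, counting a generic skeleton (placeholder statements, not real code):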
  statement1;                      // 1 step
  statement2;                      // 1 step
  statement3;                      // 1 step
  statement4;                      // 1 step
  statement5;                      // 1 step
  while (test)                     // repeats, say, r times
    bodyStatement;                 // 1 step per repetition
  if (test)                        // take the max of the two branches
    thenStatement;                 // 1 step
  else
    elseStatement;                 // 1 step
  // Total: 5 + r + 1 steps
Tomorrow: Examples!