Course:CPSC 320/Midterm 2 Reference Sheet

CPSC 320 2009W2 Midterm Exam #2 Reference Sheet

Do not remove: This reference sheet is the appendix for Midterm #2. Only the first 4 pages are guaranteed to be printed, so be compact! Deadline for edits: 11:59 PM Tuesday, Mar 9.

Optimal vs. Greedy

  • Optimization problem: The problem of finding the best solution from all feasible solutions, according to some metric.
  • Greedy algorithm: Repeatedly make the "locally best choice" until the choices form a complete solution.
  • Optimal substructure: Optimal solution to the problem is composed of pieces which are themselves optimal solutions to subproblems.
  • Greedy-choice property: Locally optimal choices can be extended to a globally optimal solution.

How does one prove that a greedy solution is optimal? Two ways:

  • Greedy algorithm stays ahead: Measure the greedy algorithm's progress step by step, and show that at every step it is doing at least as well as any other algorithm.
  • Exchange argument: Gradually transform any optimal solution into the greedy solution without ever making it worse.

E.g., exchange argument for Activity Selection

  • Let O be some optimal solution and G the greedy solution (where we repeatedly pick the earliest finish time) to an activity selection problem.
  • Let Ai be the first activity in G that differs from O, and let ALT be the set of activities in O that conflict with Ai.
  • Three cases:
    • |ALT| = 0 (no conflicts): contradiction, because then we could add Ai to O and get a better solution, so O was not optimal.
    • |ALT| = 1: we can swap Ai for the one conflicting activity, because Ai ends no later than it and conflicts with nothing else in O. (If Ai did not end earlier, G would have picked the other activity instead.)
    • |ALT| >= 2: the activities in ALT are mutually compatible, so one of them must end before another begins, and hence before Ai ends. This is a contradiction, because G would have picked that earlier-ending activity instead of Ai.

Since O's conflict with G is always a single activity that can be swapped, keep swapping until O == G. Now size(O) == size(G), so G gave us an optimal solution.
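
As a concrete sketch of the greedy being analyzed (added for illustration; the (start, finish) input format is an assumption, not something the sheet fixes):

  # Earliest-finish-time greedy for activity selection (illustrative sketch).
  # Activities are assumed to be (start, finish) pairs; not from the sheet.
  def select_activities(activities):
      chosen = []
      last_finish = float("-inf")
      for start, finish in sorted(activities, key=lambda a: a[1]):  # sort by finish time
          if start >= last_finish:          # compatible with everything chosen so far
              chosen.append((start, finish))
              last_finish = finish
      return chosen

  # Example: select_activities([(1, 4), (3, 5), (0, 6), (5, 7)]) returns [(1, 4), (5, 7)].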

Graphs

Graph definitions

A graph G = (V, E) is composed of a set V of vertices and a set E of edges, where each edge connects two vertices.

  • sparse: |E| is much less than |V|^2
  • dense: |E| is close to |V|^2
  • acyclic: a graph is acyclic if it contains no cycles; unicyclic if it contains exactly one cycle; and pancyclic if it contains cycles of every possible length (from 3 to the order of the graph)
  • weakly connected: replacing all of its directed edges with undirected edges produces a connected (undirected) graph
  • strongly connected: there is a path from each vertex in the graph to every other vertex. Implies the graph is directed
  • spanning tree: a subset of edges from a connected graph that touches all vertices in the graph (spans the graph) and forms a tree (is connected and contains no cycles)
  • minimum spanning tree: the spanning tree with the least total edge cost
  • cut (S, V−S): a partition of the vertices into a set S and its complement V−S
  • light edge: an edge crossing a cut whose weight is minimum among all edges crossing the cut

Representing a graph in memory

  • adjacency-list representation: efficient for sparse graphs
    • memory required: Θ(V + E)
  • adjacency-matrix representation: efficient for dense graphs
    • memory required: Θ(V^2)
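
For illustration (a sketch, not part of the original sheet; vertex names 0..n−1 are an assumption), both representations in Python:

  # Build both standard graph representations for an undirected graph.
  # Vertices are assumed to be numbered 0..n-1 (an assumption of this sketch).
  def build_adjacency_list(n, edges):
      adj = [[] for _ in range(n)]           # Theta(V + E) memory
      for u, v in edges:
          adj[u].append(v)
          adj[v].append(u)                   # undirected: record both directions
      return adj

  def build_adjacency_matrix(n, edges):
      mat = [[False] * n for _ in range(n)]  # Theta(V^2) memory
      for u, v in edges:
          mat[u][v] = mat[v][u] = True
      return mat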

Kruskal (MST algorithm)

TODO: proof of correctness (the runtime is analyzed below).

  1. Let Q be a min-priority queue of the edges in E, keyed by edge weight
  2. Let T = {} (the minimum spanning tree being built)
  3. While Q is not empty do:
    1. Extract the minimum-weight edge (u, v) from Q
    2. If u is not connected to v in T:
      1. Add (u, v) to T
      2. Record that u and v are now connected.

Note: To efficiently implement the priority queue, use a binary heap. To efficiently implement checking for and changing connectivity, use the up-tree disjoint-set union-find data structure (with union-by-rank and path compression).

Where E is the number of edges in the graph and V is the number of vertices, Kruskal's algorithm can be shown to run in O(E log E) time, or equivalently, O(E log V) time, all with simple data structures.
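
A compact Python sketch combining these pieces (added for illustration; the (weight, u, v) edge format and 0..n−1 vertex names are assumptions). It uses a binary heap for the priority queue and an up-tree union-find with union-by-rank and path halving (a common variant of path compression):

  import heapq

  # Kruskal's algorithm: illustrative sketch, not from the original sheet.
  def kruskal(n, edges):
      # edges: list of (weight, u, v) tuples; vertices are 0..n-1 (assumed).
      heapq.heapify(edges)                   # min-priority queue keyed on weight
      parent = list(range(n))                # up-tree forest for union-find
      rank = [0] * n

      def find(x):                           # root of x's tree, with path halving
          while parent[x] != x:
              parent[x] = parent[parent[x]]
              x = parent[x]
          return x

      mst = []
      while edges and len(mst) < n - 1:
          w, u, v = heapq.heappop(edges)
          ru, rv = find(u), find(v)
          if ru != rv:                       # u and v are not yet connected
              mst.append((u, v, w))
              if rank[ru] < rank[rv]:        # union by rank: hang shorter tree below
                  ru, rv = rv, ru
              parent[rv] = ru
              if rank[ru] == rank[rv]:
                  rank[ru] += 1
      return mst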

Prim (MST algorithm)

The algorithm continuously increases the size of a tree starting with a single vertex until it spans all the vertices.

  • Input: a connected weighted graph with vertices V and edges E.
  • Initialize: Vnew = {x}, where x is an arbitrary starting vertex from V; Enew = {}.
  • Repeat until Vnew = V:
    • Choose an edge (u, v) with minimal weight such that u is in Vnew and v is not (if there are multiple edges with the same weight, choose arbitrarily but consistently).
    • Add v to Vnew and (u, v) to Enew.
  • Output: Vnew and Enew describe a minimum spanning tree.

Time Complexity:

Minimum edge weight data structure     Time complexity (total)
adjacency matrix, searching            O(V^2)
binary heap and adjacency list         O((V + E) log V) = O(E log V)
Fibonacci heap and adjacency list      O(E + V log V)

Proof: Let P be a connected, weighted graph. At every iteration of Prim's algorithm, an edge must be found that connects a vertex in the subgraph built so far to a vertex outside it. Since P is connected, such an edge always exists. The output Y of Prim's algorithm is a tree, because each added edge connects the added vertex to the existing tree. Let Y1 be a minimum spanning tree of P. If Y1 = Y, then Y is a minimum spanning tree. Otherwise, let e be the first edge added during the construction of Y that is not in Y1, and let V be the set of vertices connected by the edges added before e. Then one endpoint of e is in V and the other is not. Since Y1 is a spanning tree of P, there is a path in Y1 joining the two endpoints. As one travels along the path, one must encounter an edge f joining a vertex in V to one that is not in V. Now, at the iteration when e was added to Y, f could also have been added, and it would have been added instead of e if its weight were less than e's. Since f was not added, we conclude that w(f) >= w(e).

Let Y2 be the graph obtained by removing f from Y1 and adding e. It is easy to show that Y2 is connected, has the same number of edges as Y1, and that the total weight of its edges is not larger than that of Y1; therefore Y2 is also a minimum spanning tree of P, and it contains e and all the edges added before e during the construction of Y. Repeat the steps above, and we eventually obtain a minimum spanning tree of P that is identical to Y. This shows that Y is a minimum spanning tree.
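
A binary-heap Python sketch matching the O(E log V) row of the table above (added for illustration; the adjacency format {vertex: [(weight, neighbor), ...]} is an assumption):

  import heapq

  # Prim's algorithm: illustrative sketch, not from the original sheet.
  def prim(adj, start):
      # adj: dict mapping each vertex to a list of (weight, neighbor) pairs (assumed).
      visited = {start}
      frontier = [(w, start, v) for (w, v) in adj[start]]
      heapq.heapify(frontier)                # candidate edges leaving the tree
      mst = []
      while frontier:
          w, u, v = heapq.heappop(frontier)  # lightest edge crossing the cut
          if v in visited:
              continue                       # stale entry: v joined the tree earlier
          visited.add(v)
          mst.append((u, v, w))
          for (w2, x) in adj[v]:
              if x not in visited:
                  heapq.heappush(frontier, (w2, v, x))
      return mst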

Shortest-path (Dijkstra's)

Dijkstra's algorithm finds shortest paths from a single source in a weighted graph whose edge weights are non-negative.

Description of the algorithm

Suppose you want to find the shortest path between two intersections on a map: a starting point and a destination. To accomplish this, you could highlight the streets (tracing them with a marker) in a certain order, until you have a route highlighted from the starting point to the destination. The order is conceptually simple: at each iteration, form the set of all unmarked intersections that are directly connected to a marked intersection; these are the intersections under consideration. From that set, find the one closest to the starting point (this is the "greedy" part, as described above), highlight it, mark the street leading to it, and draw an arrow showing the direction travelled; then repeat. Each stage marks just one new intersection. When you reach the destination, follow the arrows backwards. There will be only one path back against the arrows, and it is the shortest one.

Pseudocode

In the following algorithm, the code u := vertex in Q with smallest dist[] searches for the vertex u in the vertex set Q that has the least dist[u] value; that vertex is then removed from the set Q. dist_between(u, v) calculates the length of the edge between the two neighbouring nodes u and v. The variable alt on line 13 is the length of the path from the source node to the neighbour node v if it were to go through u. If this path is shorter than the current shortest path recorded for v, that current path is replaced with this alt path. The previous array records, for each node, the preceding node on the shortest route back to the source.

 1  function Dijkstra(Graph, source):
 2      for each vertex v in Graph:           // Initializations
 3          dist[v] := infinity               // Unknown distance function from source to v
 4          previous[v] := undefined          // Previous node in optimal path from source
 5      dist[source] := 0                     // Distance from source to source
 6      Q := the set of all nodes in Graph
        // All nodes in the graph are unoptimized - thus are in Q
 7      while Q is not empty:                 // The main loop
 8          u := vertex in Q with smallest dist[]
 9          if dist[u] = infinity:
10              break                         // all remaining vertices are inaccessible from source
11          remove u from Q
12          for each neighbor v of u:         // where v has not yet been removed from Q.
13              alt := dist[u] + dist_between(u, v) 
 14              if alt < dist[v]:             // Relax the edge (u,v)
15                  dist[v] := alt
16                  previous[v] := u
17      return dist[]
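
The pseudocode translates directly to Python; in this sketch (added for illustration) a binary heap replaces the linear search on line 8, and the adjacency format {u: [(v, weight), ...]} is an assumption:

  import heapq

  # Dijkstra's algorithm: illustrative heap-based sketch, not from the sheet.
  def dijkstra(graph, source):
      # graph: dict mapping every vertex u to a list of (v, weight) pairs (assumed).
      dist = {v: float("inf") for v in graph}
      previous = {v: None for v in graph}
      dist[source] = 0
      heap = [(0, source)]                   # (tentative distance, vertex)
      while heap:
          d, u = heapq.heappop(heap)
          if d > dist[u]:
              continue                       # stale entry: u was already finalized
          for v, w in graph[u]:
              alt = d + w                    # relax the edge (u, v)
              if alt < dist[v]:
                  dist[v] = alt
                  previous[v] = u
                  heapq.heappush(heap, (alt, v))
      return dist, previous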

Old material

Master theorem

If T(n) = a·T(n/b) + f(n) with constants a >= 1 and b > 1, and T(n) = Θ(1) in the base case, where n/b can be either ⌊n/b⌋ or ⌈n/b⌉, then:

  • If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a))
    • Dominated by leaf cost
  • If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n)
    • Balanced cost
  • If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) <= c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n))
    • Dominated by root cost. It is important in this case to check that f(n) is well behaved (the regularity condition). More specifically:
    • For sufficiently large n, a·f(n/b) <= c·f(n)

The following recurrences cannot be solved using the master theorem:

  • T(n) = 2^n · T(n/2) + n^n: a is not a constant
  • T(n) = 2T(n/2) + n/log n: non-polynomial difference between f(n) and n^(log_b a)
  • T(n) = 0.5·T(n/2) + n: a < 1; cannot have less than one subproblem
  • T(n) = 64T(n/8) − n^2·log n: f(n) is not positive
  • T(n) = T(n/2) + n(2 − cos n): case 3, but the regularity condition is violated

Also, case 3 (including its regularity condition) always holds when f(n) = n^k and k > log_b a.
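
Worked examples (added for illustration; these are standard textbook instances):

  • T(n) = 2T(n/2) + Θ(n): n^(log_2 2) = n = Θ(f(n)), so case 2 gives T(n) = Θ(n log n) (mergesort).
  • T(n) = 9T(n/3) + n: n^(log_3 9) = n^2 and n = O(n^(2−ε)), so case 1 gives T(n) = Θ(n^2).
  • T(n) = 3T(n/4) + n·log n: n^(log_4 3) ≈ n^0.793, f(n) = Ω(n^(0.793+ε)), and 3·f(n/4) <= (3/4)·f(n), so case 3 gives T(n) = Θ(n log n).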

Log Laws

For all real a > 0, b > 0, c > 0, and n (with logarithm bases not equal to 1):

  • a = b^(log_b a)
  • log_c(ab) = log_c a + log_c b
  • log_b(a^n) = n·log_b a
  • log_b a = (log_c a) / (log_c b)
  • log_b(1/a) = −log_b a
  • log_b a = 1 / (log_a b)
  • a^(log_b c) = c^(log_b a)

Summation

  • Arithmetic series: sum_{i=1}^{n} i = n(n+1)/2
  • Sum of squares: sum_{i=1}^{n} i^2 = n(n+1)(2n+1)/6
  • Sum of cubes: sum_{i=1}^{n} i^3 = n^2(n+1)^2/4
  • Geometric series: sum_{i=0}^{n} x^i = (x^(n+1) − 1)/(x − 1) for real x ≠ 1
  • Infinite decreasing: sum_{i=0}^{∞} x^i = 1/(1 − x) for |x| < 1
  • Telescoping: sum_{i=1}^{n} (a_i − a_(i−1)) = a_n − a_0
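
A quick numeric sanity check of these closed forms (added for illustration):

  # Spot-check the closed forms above for one small n (illustrative only).
  n, x = 10, 0.5
  assert sum(range(1, n + 1)) == n * (n + 1) // 2
  assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
  assert sum(i ** 3 for i in range(1, n + 1)) == n ** 2 * (n + 1) ** 2 // 4
  assert abs(sum(x ** i for i in range(n + 1)) - (x ** (n + 1) - 1) / (x - 1)) < 1e-9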

Decision Tree-Related Notes

  • for a list of n elements, there are n! permutations
  • a binary tree of height h has at most 2^h leaves
  • therefore, a binary tree with at least n! leaves must have height at least log_2(n!)
  • to get the height of a tree: the longest path (say we divide n by 3/2 at each level) finishes when n·(2/3)^h = 1, so the height is h = log_{3/2} n
  • log(n!) = Θ(n log n), which we can establish by proving big-O and big-Ω bounds separately (pumping "up" or "down" the values of the terms in the factorial and the overall number of terms as needed)
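
For example, the bounds in the last item can be worked out as follows (added for illustration):

  log(n!) = log 1 + log 2 + ... + log n <= n·log n, so log(n!) = O(n log n)
  log(n!) >= log(n/2) + ... + log n >= (n/2)·log(n/2) = Ω(n log n)   (keep only the n/2 largest terms, "pumping down")
  hence any comparison sort's decision tree has height at least log_2(n!) = Θ(n log n)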

Stirling's Approximation

n! = √(2πn) · (n/e)^n · (1 + Θ(1/n)), so log(n!) = Θ(n log n)

Assignment Solutions

Consider our algorithm’s first choice of school on a particular problem; call it i. Compare to an optimal solution OPT. If i is in OPT, we’re done. If i is not in OPT, then select the school j in OPT that made the fewest requests for tickets (or even just any school in OPT). It cannot be that n_j < n_i, or our algorithm would have selected j before it selected i. So n_j >= n_i. In that case, we can build the solution (OPT − {j}) ∪ {i}, which demands no more tickets in total than OPT (and so is a feasible solution) and includes the same number of schools as OPT (and so is also optimal). Thus, our algorithm’s first choice can always be extended to an optimal solution. This corresponds tightly to our notion of the “greedy choice property”. An algorithm for a problem exhibits the greedy choice property if its initial choice can be extended to an optimal solution, i.e., it’s a “good” choice. (Note: we should be careful about the case where our algorithm does not allocate any tickets. Technically, that doesn’t clearly fall under this question, but let’s address it here. In that case, even the school that requests the fewest tickets requests too many to distribute. Alternatively, there is no smallest school because no schools request tickets. In either case, the optimal (and greedy) solution is to allocate no tickets.)

Assume S is an optimal solution to the problem of distributing tickets to schools other than i, and OPT is an optimal solution to the overall problem. For contradiction, say that S ∪ {i} is not an optimal solution. Then OPT gives out tickets to more schools than S ∪ {i}; that is, |OPT| > |S ∪ {i}|. Now, remove i from each solution to get S and OPT − {i}. Both are solutions to the subproblem (the former by assumption and the latter because removing i “frees up” just enough tickets to match the subproblem’s reduced capacity). |OPT − {i}| must also be greater than |S|, meaning S is not the optimal way to give out capacity − n_i tickets, but this is a contradiction. QED. This corresponds tightly to our notion of the “optimal substructure” property. A problem exhibits optimal substructure if we can form an optimal solution by combining any “good” choice that breaks the problem into smaller subproblem(s) with any optimal solution(s) to the subproblem(s). Note that (1) optimal substructure is a property of the problem, not the algorithm (unlike the “greedy choice property”) and (2) the “good” choice and the optimal subproblem solution(s) need not be obtained by a greedy algorithm or any particular algorithm (or even by the same algorithm as each other).
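
A sketch of the greedy these two arguments justify (added for illustration; the function name and input format are assumptions, and the original assignment statement is not reproduced here):

  # Greedy ticket allocation: serve the smallest requests first to maximize
  # the number of schools served. Illustrative sketch, not the official solution.
  # requests: list where requests[i] = n_i tickets asked for by school i (assumed).
  def allocate(requests, capacity):
      chosen = []
      for i, need in sorted(enumerate(requests), key=lambda p: p[1]):
          if need <= capacity:
              chosen.append(i)               # school i receives its n_i tickets
              capacity -= need
      return chosen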

Claim: any two minimum spanning trees of a graph contain the same number of edges of each weight. For contradiction, consider two minimum spanning trees of a graph with different numbers of edges of a given weight. Consider the least weight w at which they differ. Arbitrarily order the edges of weight w and consider the first one (u, v) at which they differ. Without loss of generality, let MST1 contain (u, v) and MST2 exclude it. Then MST1 contains (u, v) and no other path from u to v. MST2 does not contain (u, v) but does contain a different path from u to v. That path must include an edge (x, y) of weight w' >= w on which the trees differ. (MST1 and MST2 don’t differ on edges with weights smaller than w. So, if the path had only such edges, then MST1 would also contain that path and therefore, with the addition of (u, v), would contain a cycle, which is a contradiction.) If w' > w, then swapping (x, y) for (u, v) would produce a lighter tree, so MST2 was not a minimum spanning tree. If w' = w, then the new tree is also a minimum spanning tree and has the same number of edges of weight w as MST2. However, we can repeat this process on the new tree and the old MST1 (which are now “a pair of MSTs with different numbers of edges of weight w”) until we eliminate all differences between the two MSTs on edges of weight w. This is a contradiction.