Find the shortest path from a source vertex to every other vertex
Runtime: O(|V|²) or O((|V|+|E|) log |V|)
Depends on implementation: the former using an array/linked list, and the latter using a binary heap.
Notes: Cannot process negative edge weights.
function dijkstra(G,s):
dist(v in V) = infinity
prev(v in V) = null
dist(s) = 0
PQ = PriorityQueue(V)
tree = []
while PQ is not empty:
u = PQ.removeMin()
for all edges (u,v):
if dist(v) > dist(u) + cost(u,v):
dist(v) = dist(u) + cost(u,v)
prev(v) = u
PQ.replaceKey(v, dist(v))
for v in V:
if v.prev is not null:
tree.append(connecting_edge(v,v.prev))
return tree
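For reference, a minimal runnable version in Python (not part of the original notes). It assumes the graph is an adjacency dict `graph[u] = [(v, weight), ...]` with every vertex as a key; since Python's heapq has no replaceKey, it pushes duplicate entries and skips stale ones instead:

```python
import heapq

def dijkstra(graph, source):
    # graph[u] = [(v, weight), ...]; every vertex appears as a key
    dist = {v: float("inf") for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale entry; u was already settled with a smaller key
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, prev
```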
Runtime: O(|V|*|E|)
Notes: Relaxes |E| edges, |V|-1 times. Can process negative edge weights.
function BellmanFord(V,E,src):
for each vertex v in vertices:
v.dist = infinity
if v = src:
v.dist = 0
v.prev = null
repeat |V|-1 times:
for each edge (u,v) in E:
if u.dist + (u,v).weight < v.dist:
v.dist = u.dist + (u,v).weight
v.prev = u
// checking for negative cycles
for each edge (u,v) in E:
if u.dist + (u,v).weight < v.dist:
throw new NegativeCycleException()
// build tree
tree = []
for v in V:
if v.prev is not null:
tree.append(connecting_edge(v, v.prev))
return tree
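A minimal runnable sketch of the same idea in Python, assuming an edge-list input (a list of `(u, v, weight)` tuples); the names are illustrative, not from the notes:

```python
def bellman_ford(vertices, edges, src):
    # vertices: iterable of vertex ids; edges: list of (u, v, weight) tuples
    dist = {v: float("inf") for v in vertices}
    prev = {v: None for v in vertices}
    dist[src] = 0
    for _ in range(len(dist) - 1):      # relax every edge |V| - 1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                prev[v] = u
    for u, v, w in edges:               # one more pass: any improvement means a negative cycle
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from the source")
    return dist, prev
```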
Produce a topological ordering of a directed acyclic graph (DAG)
Runtime: O(|V|+|E|)
function TopSort(G):
S = Stack()
L = []
for each vertex in G:
if vertex has no incoming edges:
S.push(vertex)
while S is not empty:
v = S.pop()
L.append(v)
for each outgoing edge e from v:
w = e.destination
delete e
if w has no incoming edges:
S.push(w)
return L
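A runnable Python sketch of the same algorithm (Kahn's), assuming an adjacency-dict input `graph[u] = [v, ...]`; instead of physically deleting edges it tracks in-degrees:

```python
from collections import deque

def top_sort(graph):
    # graph[u] = [v, ...] for a directed graph; every vertex appears as a key
    indeg = {v: 0 for v in graph}
    for u in graph:
        for v in graph[u]:
            indeg[v] += 1
    queue = deque(v for v in graph if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            indeg[v] -= 1               # "delete" the edge (u, v)
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != len(graph):
        raise ValueError("graph has a cycle; no topological order exists")
    return order
```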
Find the minimum spanning tree (MST) of a connected graph, or the minimum spanning forest of a disconnected graph
Runtime: O((|V|+|E|) log|V|)
Notes: Best to prove correctness by induction. This is a greedy algorithm.
function prim(G):
for all vertices v in G:
v.cost = infinity
v.prev = null
source = a random v in V
source.cost = 0
MST = []
PQ = PriorityQueue(V)
while PQ is not empty:
v = PQ.removeMin()
if v.prev != null:
MST.append((v,v.prev))
for all incident edges (v,u) of v:
if u.cost > (v,u).weight:
u.cost = (v,u).weight
u.prev = v
PQ.replaceKey(u,u.cost)
return MST
Runtime: O(|E|log|E|)
This runtime relies on union find with path compression; without union find/recursive path compression, the runtime is O(|V|³).
Notes: Sort all edges of the graph by weight in ascending order; add each edge to the MST if adding it doesn’t create a cycle. Detect cycles with “clouds” (disjoint sets).
function kruskal(G):
for vertices v in G:
v.parent = v
v.rank = 0
MST = []
for all edges (u,v) in G, sorted by weight:
if u and v are not in same cloud:
MST.append((u,v))
mergeClouds(pathCompress(u), pathCompress(v)) // merge the roots of the two clouds
return MST
Helper Functions
Runtime: Amortized O(1) (more precisely O(α(n)), inverse Ackermann, which is effectively constant)
function mergeClouds(p1, p2): // union
if p1.rank > p2.rank:
p2.parent = p1
else if p1.rank < p2.rank:
p1.parent = p2
else:
p2.parent = p1
p1.rank++
function pathCompress(v): // find
if v != v.parent:
v.parent = pathCompress(v.parent)
return v.parent
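Putting the two helpers together, a minimal runnable union-find sketch in Python (the dict-based storage and class name are illustrative assumptions, not from the notes):

```python
class UnionFind:
    def __init__(self, items):
        self.parent = {x: x for x in items}
        self.rank = {x: 0 for x in items}

    def find(self, x):                   # pathCompress
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):               # mergeClouds (by rank)
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False                 # already in the same cloud
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```

Kruskal's loop then reduces to: sort the edges by weight and keep an edge (u,v) whenever union(u, v) returns True.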
Sort a list
Algorithm | Time | Notes |
---|---|---|
Selection sort | O(n²) | in-place; slow (good for small inputs) |
Insertion sort | O(n²) | in-place; slow (good for small inputs) |
Merge sort | O(n log n) | fast (good for large inputs) |
Quick sort | O(n log n) expected | randomized; fastest in practice (good for large inputs) |
Radix sort | O(nd) | d = number of digits in the largest number; basically linear when d is small |
Mnemonic: SQuIRM (n², n log n, n², nd, n log n)
Runtime: O(n log n)
Notes: not in-place
function merge_sort(A):
n = A.length()
if n <= 1:
return A
mid = n/2
left = merge_sort(A[0 .. mid-1])
right = merge_sort(A[mid .. n-1])
return merge(left,right)
function merge(A, B):
result = []
aIndex = bIndex = 0
while aIndex < A.length() and bIndex < B.length():
if A[aIndex] < B[bIndex]:
result.append(A[aIndex])
aIndex++
else:
result.append(B[bIndex])
bIndex++
if aIndex < A.length():
result = result + A[aIndex...end]
if bIndex < B.length():
result = result + B[bIndex...end]
return result
Runtime: O(n2)
Notes: usually written iteratively and in-place. Finds the min and moves it to the first index (then second, third, etc.)
function selection_sort(A):
// Input: Unsorted List
// Output: Sorted List
n = A.length
for i = 0 to n-2:
min = argmin(A[i...n-1])
swap A[i] with A[min]
Runtime: O(n2)
Notes: very fast if already partially sorted
function insertion_sort(A):
n = A.length()
for i = 1 to n-1:
for j = i down to 1:
if A[j] < A[j-1]:
swap A[j] and A[j-1]
else:
break
Runtime: O(n)
Notes: in-place
function reverse(A):
n = A.length
for i = 0 to n/2 - 1:
temp = A[i]
A[i] = A[n-1-i]
A[n-1-i] = temp
Runtime: worst case O(n²), expected O(n log n)
Notes: divide and conquer around a random pivot. Partition the input list into L, E, and G for elements less than, equal to, or greater than the pivot, then recursively sort L and G.
function quick_sort(A):
// Input: Unsorted List
// Output: Sorted List
n = A.length
if n <= 1:
return A
pivot = random element from A
L, E, G = [], [], []
for each x in A:
if x < pivot:
L.append(x)
else if x > pivot:
G.append(x)
else:
E.append(x)
return quick_sort(L) + E + quick_sort(G)
Runtime: O(nd), where d is the number of digits in the largest number
Notes: not in-place
function radix_sort(A):
// Input: unsorted list
// Output: sorted list
buckets = array of 10 lists
for place = least to most significant digit:
for number in A:
d = digit in number at place
buckets[d].append(number)
A = concatenate all buckets in order
empty all buckets
return A
Algorithms to find things
Quickselect: find the k-th smallest element in an unsorted list
Runtime:
expected: O(n)
worst case: O(n²)
function select(list, k):
// base case omitted
pivot = list[rand(0, list.size)]
L = E = R = []
for x in list:
if x < pivot: L.append(x)
if x = pivot: E.append(x)
if x > pivot: R.append(x)
if k <= L.size:
return select(L,k)
else if k > L.size and k <= (L.size + E.size):
return pivot
else:
return select(R,k - (L.size + E.size))
Runtime: O(n) worst case (median-of-medians select)
function momSelect(list, k):
//base cases omitted
miniLists = divide list into n/5 lists of 5
medians = []
for miniList in miniLists:
// sort is O(1) because list is always of length 5
sort(miniList)
medians.append(miniList[2])
pivot = momSelect(medians, medians.size()/2)
L = E = R = []
for x in list:
if x < pivot: L.append(x)
if x = pivot: E.append(x)
if x > pivot: R.append(x)
if k <= L.size:
return momSelect(L, k)
else if k > L.size and k <= (L.size + E.size):
return pivot
else:
return momSelect(R, k - (L.size + E.size))
Runtime: O(log n)
Notes:
Input: A – a sorted array, x – the item to find
Output: true if x is in the array
function binarySearch(A, x):
lo = 0
hi = A.size - 1
while lo < hi:
mid = (lo + hi) / 2
if A[mid] == x:
return true
if A[mid] < x:
lo = mid + 1
if A[mid] > x:
hi = mid - 1
return A[lo] == x
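In Python, the standard library's bisect module gives the same O(log n) behavior; a small illustrative sketch (the helper name is made up for this example):

```python
from bisect import bisect_left

def contains(A, x):
    # A must already be sorted in ascending order
    i = bisect_left(A, x)               # first position where x could be inserted
    return i < len(A) and A[i] == x
```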
Search through a tree
Runtime: O(n)
Notes: every traversal visits each node exactly once
function bft(root):
// Input: root node of tree
// Output: None
Q = new Queue()
enqueue root
while Q is not empty:
node = Q.dequeue()
visit(node)
enqueue node’s left & right children
function dft(root):
// Input: root node of tree
// Output: None
S = new Stack()
push root onto S
while S is not empty:
node = S.pop()
visit(node)
push node’s left & right children
function preorder(node):
// Input: root node of tree
// Output: None
visit(node)
if node has left child
preorder(node.left)
if node has right child
preorder(node.right)
function inorder(node):
// Input: root node of tree
// Output: None
if node has left child
inorder(node.left)
visit(node)
if node has right child
inorder(node.right)
function postorder(node):
// Input: root node of tree
// Output: None
if node has left child
postorder(node.left)
if node has right child
postorder(node.right)
visit(node)
When is a Graph a Tree?
A graph G with n vertices is a tree if and only if it satisfies any of the following 5 (equivalent) conditions:
- G is connected and has no cycles
- G is connected and has n - 1 edges
- G is acyclic and has n - 1 edges
- There is exactly one simple path between every pair of vertices
- G is connected, and removing any single edge disconnects it
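A minimal runnable check in Python based on condition 2 (connected with exactly n - 1 edges); the adjacency-dict representation is an assumption for this sketch, not from the notes:

```python
from collections import deque

def is_tree(graph):
    # graph: dict mapping each vertex to a list of neighbors (undirected)
    n = len(graph)
    if n == 0:
        return True                     # convention; some definitions require >= 1 vertex
    edges = sum(len(nbrs) for nbrs in graph.values()) // 2   # each edge appears twice
    if edges != n - 1:
        return False
    seen = {next(iter(graph))}          # BFS from an arbitrary vertex
    queue = deque(seen)
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n               # connected and |E| = n - 1  =>  tree
```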
Various types, functions, runtimes, and properties
ADT | Structure | Main Methods | Properties |
---|---|---|---|
PQ | Heap | insert(key,element) O(log n); removeMin() O(log n); upheap() O(log n); downheap() O(log n) | Binary: each node has <= 2 children; key/priority at each node; left-complete; preserves heap order (min or max): min: n.key >= n.parent.key, max: n.key <= n.parent.key |
Treap | BST + Heap | insert(key,element) O(log n); removeMin() O(log n); upheap() O(log n); downheap() O(log n) | A non-empty tree T with a key and a priority at each node; T has only one node, or T has a root R whose left and right subtrees are both treaps; a node's priority preserves heap order; a node's key preserves BST ordering |
Tree | | tree methods: size(), isEmpty(), root(); node methods: parent(), children(), isInternal(), isExternal(), isRoot(); binary tree node methods: hasLeft(), hasRight(), left(), right() | # nodes = n; # edges = n - 1; height = h; perfect binary tree: height h = log(n+1) - 1 (base 2), # nodes n = 2^(h+1) - 1 = 2L - 1, # leaves L = 2^h |
Trie | Prefix Tree | insert(); find() | each node has a dictionary of children |
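The trie is the only structure above without pseudocode elsewhere in these notes; a minimal runnable sketch in Python using the dict-of-children idea from the table (class and method bodies are illustrative assumptions):

```python
class Trie:
    def __init__(self):
        self.children = {}              # char -> child Trie node
        self.is_word = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def find(self, word):
        node = self
        for ch in word:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_word
```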
Use cases, Runtimes, Data Structures
| Problem | Algorithm | Runtime | Data Structure |
|---|---|---|---|
| Sorting | Insertion sort | n² | |
| | Selection sort | n² | |
| | Merge sort | n log n | |
| | Quick sort | n log n expected | |
| | Radix sort | dn | |
| Tree Traversals | Breadth first | | Queue |
| | Depth first | | Stack |
| | Pre-order | | |
| | In-order | | |
| | Post-order | | |
| | Euler tour | | left side: pre-order; bottom: in-order; right side: post-order |
| Graph stuff | BFT | V+E | Queue |
| | DFT | V+E | Stack |
| | Top Sort | V+E | Stack, Queue, etc. |
| Shortest Paths | Dijkstra | V² or (V+E) log V | Array/linked-list PQ or binary-heap PQ, etc. |
| | A* | (V+E) log V | PQ / binary heap, etc. |
| | Bellman-Ford | VE | |
| MSTs | Prim-Jarnik | (V+E) log V | PQ |
| | Kruskal's (plus union find) | E log E | |