Homework 1 Solutions

Problem 1

In the following, use either a direct proof (by giving values for c and n0 in the definition of big-Oh notation) or cite one of the rules given in the book or in the lecture slides.

(a) Show that if f(n) is O(g(n)) and d(n) is O(h(n)), then f(n) + d(n) is O(g(n) + h(n)).

Solution
Recall the definition of big-Oh notation: we need constants c > 0 and n0 ≥ 1 such that f(n) + d(n) ≤ c(g(n) + h(n)) for every integer n ≥ n0.

"f(n) is O(g(n))" means that there exist cf > 0 and an integer n0f ≥ 1 such that f(n) ≤ cf·g(n) for every n ≥ n0f. Similarly, "d(n) is O(h(n))" means that there exist cd > 0 and an integer n0d ≥ 1 such that d(n) ≤ cd·h(n) for every n ≥ n0d.

Let n0 = max(n0f, n0d) and c = max(cf, cd). Then for every n ≥ n0,

    f(n) + d(n) ≤ cf·g(n) + cd·h(n) ≤ c·g(n) + c·h(n) = c(g(n) + h(n)).

Therefore f(n) + d(n) is O(g(n) + h(n)).

(b) Show that 3(n + 1)^7 + 2n log n is O(n^7). Hint: Try applying the rules of Proposition 1.16.

Solution
Let us apply the rules of Proposition 1.16:
• log n is O(n) (Rule 10)
• 2n log n is O(2n^2) (Rule 6)
• 3(n + 1)^7 is a polynomial of degree 7, therefore it is O(n^7) (Rule 7)
• 3(n + 1)^7 + 2n log n is O(n^7 + 2n^2) (Problem 1.a of this homework)
• 3(n + 1)^7 + 2n log n is O(n^7), since n^7 + 2n^2 is a polynomial of degree 7 (Rule 7)

(c) Algorithm A executes 10n log n operations, while algorithm B executes n^2 operations. Determine the minimum integer value n0 such that A executes fewer operations than B for n ≥ n0.

Solution
We must find the minimum integer n0 such that 10n log n < n^2 (all logarithms are base 2). Since n describes the size of the input data set that the algorithms operate upon, it is always positive. Since n is positive, we may divide both sides of the inequality by n, giving us 10 log n < n. The left and right hand sides of this inequality have one intersection point for n > 1, located between n = 58 and n = 59. Indeed, 10 log 58 ≈ 58.58 > 58, while 10 log 59 ≈ 58.83 < 59.
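The crossover point can be confirmed numerically. The following sketch (assuming base-2 logarithms, as above) searches for the first n at which 10n log n drops below n^2:

```python
import math

# Find the smallest n >= 2 for which 10 * n * log2(n) < n * n.
# Dividing both sides by n > 0 reduces this to 10 * log2(n) < n.
n = 2
while 10 * n * math.log2(n) >= n * n:
    n += 1
print(n)  # -> 59
```

Since n − 10 log n is increasing for n > 10/ln 2 ≈ 14.4, the inequality keeps holding for all larger n, so the first such n is indeed the n0 we want.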
So for 1 ≤ n ≤ 58 we have 10n log n ≥ n^2, and for n ≥ 59 we have 10n log n < n^2. The n0 we are looking for is therefore 59.

Problem 2

(a) What does the following algorithm do? Analyze its worst-case running time, and express it using "Big-Oh" notation.

Algorithm Foo(a, n):
    Input: two integers, a and n

CS 16 — Introduction to Algorithms and Data Structures

Semester II, 99–00

DRAFT


    Output: ?
    k ← 0
    b ← 1
    while k < n do
        k ← k + 1
        b ← b * a
    return b

Solution
This algorithm computes a^n. The running time of this algorithm is O(n) because
• the initial assignments take constant time,
• each iteration of the while loop takes constant time, and
• there are exactly n iterations.

(b) What does the following algorithm do? Analyze its worst-case running time, and express it using "Big-Oh" notation.

Algorithm Bar(a, n):
    Input: two integers, a and n
    Output: ?
    k ← n
    b ← 1
    c ← a
    while k > 0 do
        if k mod 2 = 0 then
            k ← k/2
            c ← c * c
        else
            k ← k − 1
            b ← b * c
    return b

Solution
This algorithm does the same thing as the one in part (a): it computes a^n. Its running time is O(log n) for the following reasons. The initialization and the if statement and its contents take constant time, so we need to figure out how many times the while loop runs. Since k decreases (it is either halved or decremented by one) at each step, and it is equal to n initially, at worst the loop executes n times. But we can (and should) do better in our analysis.

Note that if k is even, it gets halved, and if it is odd, it gets decremented and then halved in the next iteration. So at least every second iteration of the while loop halves k. One can halve a number n at most ⌈log n⌉ times before it becomes ≤ 1 (each time we halve a number we shift it right by one bit, and a number has ⌈log n⌉ bits). If we decrement the number in between halvings, we still get to halve it no more than ⌈log n⌉ times. Since we can only decrement k between two halving iterations (unless n is odd, or it is the last iteration), we perform a decrementing iteration at most ⌈log n⌉ + 2 times. So we can have at most 2⌈log n⌉ + 2 iterations. This is obviously O(log n).

Problem 3
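Returning to Problem 2 for a moment: both exponentiation algorithms transcribe directly into Python, which makes a convenient sanity check (a sketch; the function names are mine, not from the text):

```python
def power_slow(a, n):
    """Algorithm Foo from Problem 2(a): computes a**n with n multiplications."""
    k, b = 0, 1
    while k < n:
        k += 1
        b = b * a
    return b

def power_fast(a, n):
    """Algorithm Bar from Problem 2(b): computes a**n in O(log n) iterations.

    Invariant: b * c**k == a**n holds before every loop iteration.
    """
    k, b, c = n, 1, a
    while k > 0:
        if k % 2 == 0:
            k //= 2      # halve the exponent ...
            c = c * c    # ... and square the base
        else:
            k -= 1       # peel off one factor of c into the accumulator
            b = b * c
    return b
```

The invariant noted in the docstring is exactly why Bar is correct: it holds initially (1 · a^n = a^n), is preserved by both branches, and at termination k = 0 gives b = a^n.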


(a) Describe the output of the following series of stack operations on a single, initially empty stack: push(5), push(3), pop(), push(2), push(8), pop(), pop(), push(9), push(1), pop(), push(7), push(6), pop(), pop(), push(4), pop(), pop().

Solution
The contents of the stack after each operation, with the bottom of the stack on the left:

    5, 53, 5, 52, 528, 52, 5, 59, 591, 59, 597, 5976, 597, 59, 594, 59, 5

The pop() operations return, in order: 3, 8, 2, 1, 6, 7, 4, 9.

(b) Describe the output of the following series of queue operations on a single, initially empty queue: enqueue(5), enqueue(3), dequeue(), enqueue(2), enqueue(8), dequeue(), dequeue(), enqueue(9), enqueue(1), dequeue(), enqueue(7), enqueue(6), dequeue(), dequeue(), enqueue(4), dequeue(), dequeue().

Solution
The contents of the queue after each operation, with the head of the queue on the left:

    5, 53, 3, 32, 328, 28, 8, 89, 891, 91, 917, 9176, 176, 76, 764, 64, 4

The dequeue() operations return, in order: 5, 3, 2, 8, 9, 1, 7, 6.
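Both traces can be replayed mechanically. The sketch below runs the same operation sequence against a Python list (as the stack) and a collections.deque (as the queue) and records what the pop/dequeue calls return:

```python
from collections import deque

# The shared operation sequence from parts (a) and (b); "push" doubles as
# "enqueue" and "pop" as "dequeue".
ops = [("push", 5), ("push", 3), ("pop",), ("push", 2), ("push", 8),
       ("pop",), ("pop",), ("push", 9), ("push", 1), ("pop",),
       ("push", 7), ("push", 6), ("pop",), ("pop",), ("push", 4),
       ("pop",), ("pop",)]

stack, stack_returns = [], []
for op in ops:
    if op[0] == "push":
        stack.append(op[1])
    else:
        stack_returns.append(stack.pop())       # LIFO: remove from the top

queue, queue_returns = deque(), []
for op in ops:
    if op[0] == "push":
        queue.append(op[1])
    else:
        queue_returns.append(queue.popleft())   # FIFO: remove from the head

print(stack_returns)        # [3, 8, 2, 1, 6, 7, 4, 9]
print(queue_returns)        # [5, 3, 2, 8, 9, 1, 7, 6]
print(stack, list(queue))   # [5] [4]
```

The final states match the last entries of the traces above: 5 remains on the stack and 4 remains in the queue.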


(c) Describe in pseudo-code a linear-time algorithm for reversing a queue Q. To access the queue, you are only allowed to use the methods of the queue ADT. Hint: Consider using an auxiliary data structure.

Solution
We empty queue Q into an initially empty stack S, and then empty S back into Q.

Algorithm ReverseQueue(Q):
    Input: queue Q
    Output: queue Q in reverse order
    let S be an empty stack
    while (!Q.isEmpty()) do
        S.push(Q.dequeue())
    while (!S.isEmpty()) do
        Q.enqueue(S.pop())

(d) Describe how to implement two stacks using one array. The total number of elements in both stacks is limited by the array length; all stack operations should run in O(1) time.

Solution
Let us make the stacks (S1 and S2) grow from the beginning and the end of the array (A) in opposite directions. Let the indices T1 and T2 represent the tops of S1 and S2 respectively.

[Figure: array A, with Stack 1 growing right from the left end (top of Stack 1 at T1) and Stack 2 growing left from the right end (top of Stack 2 at T2).]

S1 occupies places A[0 . . . T1], while S2 occupies places A[T2 . . . (n − 1)]. The size of S1 is T1 + 1; the size of S2 is n − T2. Stack S1 grows right while stack S2 grows left. We can then perform all the stack operations in constant time, much as in the basic array implementation of a stack, with some modifications to account for the fact that the second stack grows in the opposite direction. To check whether the stacks are full, we check whether S1.size() + S2.size() = n. In other words, the stacks do not overlap as long as their total size does not exceed n.

Problem 4

In year 2069 the eleventh hovercraft of the class MARK III came off the assembly lines of the Boeing Company's (misnamed) rotorcraft division. This hovercraft was called "Nebuchadnezzar." Unfortunately, the core libraries of Nebuchadnezzar were corrupted during installation, so the only uncorrupted data structure left was a simple stack. Boeing software engineers set out to reimplement all the other data structures in terms of stacks, and they started out with queues.

(a) The following are parts of their original implementation of a queue using two stacks (in_stack and out_stack). Analyze the worst-case running times of its enqueue and dequeue methods and express them using "Big-Oh" notation.
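Stepping back to part (d) of Problem 3, the two-stacks-in-one-array layout can be sketched as follows (a sketch only; the class and method names are mine, not from the text):

```python
class TwoStacks:
    """Two stacks sharing one fixed-size array, growing toward each other."""

    def __init__(self, n):
        self.A = [None] * n
        self.t1 = -1        # top index of S1; S1 is empty when t1 == -1
        self.t2 = n         # top index of S2; S2 is empty when t2 == n

    def _full(self):
        # size(S1) + size(S2) == n, i.e. the next push would collide
        return (self.t1 + 1) + (len(self.A) - self.t2) == len(self.A)

    def push1(self, x):             # S1 grows rightward
        if self._full():
            raise OverflowError("array is full")
        self.t1 += 1
        self.A[self.t1] = x

    def push2(self, x):             # S2 grows leftward
        if self._full():
            raise OverflowError("array is full")
        self.t2 -= 1
        self.A[self.t2] = x

    def pop1(self):
        if self.t1 < 0:
            raise IndexError("S1 is empty")
        x, self.t1 = self.A[self.t1], self.t1 - 1
        return x

    def pop2(self):
        if self.t2 >= len(self.A):
            raise IndexError("S2 is empty")
        x, self.t2 = self.A[self.t2], self.t2 + 1
        return x
```

Every method does a constant amount of index arithmetic, so all operations are O(1), and the fullness test is exactly the size(S1) + size(S2) = n condition from the solution.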


Algorithm enqueue(o):
    in_stack.push(o)

Algorithm dequeue():
    while (!in_stack.isEmpty()) do
        out_stack.push(in_stack.pop())
    if (out_stack.isEmpty()) then
        throw a QueueEmptyException
    obj ← out_stack.pop()
    while (!out_stack.isEmpty()) do
        in_stack.push(out_stack.pop())
    return obj

Solution
In this queue implementation, we maintain the following invariant: the in_stack, read from top to bottom, forms the queue from tail to head. To dequeue an element, we push all elements onto the out_stack, take the top element of out_stack (formerly the bottom element of in_stack, and thus the head of the queue), and then push the remaining elements back onto the in_stack (see Figure 1). After the dust settles, the new head of the queue (marked N in the figure) is at the bottom of the in_stack, just as the previous head of the queue was before it.

[Figure 1: Dequeue operation step by step (in_stack is to the left). Panels (a)-(d) show the head H, tail T, and new head N moving between the two stacks.]

Note that whether we choose an array-based or a list-based implementation of stacks, all stack operations still take O(1) time. Thus the enqueue operation takes O(1) time, since all it does is perform a single push operation. Let n be the current size of the queue. The first while loop of the dequeue operation pops all elements off the in_stack and pushes them onto the out_stack; that is n iterations of the loop, each taking O(1) time (one push and one pop operation). It takes O(1) time to get the head of the queue. Then we move the remaining elements from the out_stack back into the in_stack, which takes n − 1 iterations of the second while loop, each again taking O(1) time. So the time to perform the dequeue operation is


O(n + 1 + (n − 1)) = O(n).

(b) (Description of the valiant fight of the resistance snipped)

Algorithm enqueue(o):
    in_stack.push(o)

Algorithm dequeue():
    if (out_stack.isEmpty()) then
        while (!in_stack.isEmpty()) do
            out_stack.push(in_stack.pop())
    if (out_stack.isEmpty()) then
        throw a QueueEmptyException
    return out_stack.pop()

What is the worst-case complexity of performing a series of 2n enqueue and n dequeue operations in an unspecified order? Express this using "Big-Oh" notation. Hint: Try using techniques presented in Section 3.1.3.

Solution
This implementation maintains the following invariant: the two stacks, placed bottom to bottom, form the queue. Unlike the implementation in part (a), here the dequeue operation only moves the contents of in_stack to out_stack when out_stack is empty, and never moves anything back. The enqueue operation is the same as before, so it takes O(1) time.
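This improved design translates directly into Python, using plain lists as the two stacks (a sketch; the class name is mine):

```python
class StackQueue:
    """FIFO queue built from two LIFO stacks, as in Problem 4(b)."""

    def __init__(self):
        self.in_stack = []    # receives enqueued elements
        self.out_stack = []   # serves dequeues in FIFO order

    def enqueue(self, o):
        self.in_stack.append(o)             # a single O(1) push

    def dequeue(self):
        if not self.out_stack:
            # Transfer everything, reversing the order; each element
            # makes this trip at most once in its lifetime.
            while self.in_stack:
                self.out_stack.append(self.in_stack.pop())
        if not self.out_stack:
            raise IndexError("queue is empty")
        return self.out_stack.pop()
```

An individual dequeue can still cost O(n) when the transfer happens, but the transfer is rare, which is exactly what the amortized analysis below makes precise.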

[Figure: the dequeue operation (a) when out_stack is not empty and (b) when out_stack is empty, showing the head H, tail T, and new head N of the queue.]
To analyze the dequeue operation we use a technique described in Section 3.1.3 called amortization. The trick is to notice that any element of the queue moves from one stack to the other at


most once. The life of an element in this queue consists of being pushed onto the in_stack, being moved to the out_stack when the latter becomes empty, and finally being popped from the out_stack when it reaches the top and a dequeue operation is called.

Let us "charge" 2 time units (or cyber-dollars, as the book calls them) for each enqueue operation and 1 time unit for each dequeue operation. When out_stack is empty and dequeue is called, we move the contents of in_stack to out_stack and then remove the top element of out_stack. The 1 time unit that we allocated for the dequeue pays for that last call to pop(), and we have already paid for the move by overcharging enqueue. So the time it takes to perform 2n enqueue and n dequeue operations is O(2n · 2 + n) = O(n). Alternatively, we can charge 3 time units to enqueue and 0 to dequeue, with similar results.

Problem 5

A program Thunk written by one of the cs16 TAs uses an implementation of the sequence ADT as its main component. It performs atRank, insertAtRank and remove operations in some unspecified order. It is known that Thunk performs n^2 atRank operations, 2n insertAtRank operations, and n remove operations. Which implementation of the sequence ADT should the TA use in the interest of efficiency: the array-based one or the one that uses a doubly-linked list? Explain.

Solution
See Section 3.3.4 for the running times of these methods.

                    Array-based    List-based
    atRank          O(1)           O(n)
    insertAtRank    O(n)           O(n)
    remove          O(n)           O(1)

    Total time:
    Array-based: n^2 · O(1) + 2n · O(n) + n · O(n) = O(n^2 + 2n^2 + n^2) = O(n^2)
    List-based:  n^2 · O(n) + 2n · O(n) + n · O(1) = O(n^3 + 2n^2 + n)  = O(n^3)

Since the list-based implementation runs in O(n^3) worst-case time, and the array-based one runs in O(n^2) time, we prefer the array-based one.
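The comparison can be illustrated with concrete numbers. The sketch below uses a unit-cost model (my own illustrative simplification: each O(1) operation costs 1 step and each O(n) operation costs n steps) and evaluates both totals for a sample n:

```python
# Unit-cost model for Thunk's operation mix: n^2 atRank, 2n insertAtRank,
# and n remove operations. Each cost function maps n to the per-call cost.
def total_steps(n, at_rank, insert_at_rank, remove):
    return (n * n) * at_rank(n) + (2 * n) * insert_at_rank(n) + n * remove(n)

n = 100
array_based = total_steps(n, at_rank=lambda n: 1,
                          insert_at_rank=lambda n: n, remove=lambda n: n)
list_based = total_steps(n, at_rank=lambda n: n,
                         insert_at_rank=lambda n: n, remove=lambda n: 1)
print(array_based, list_based)  # 40000 1020100
```

Already at n = 100 the array-based implementation is more than 25 times cheaper under this model, in line with the O(n^2) versus O(n^3) totals above.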
