## Lecture 8. Dynamic Programming

Author: Juniper Chapman
T. H. Cormen, C. E. Leiserson, and R. L. Rivest, Introduction to Algorithms, 3rd Edition, MIT Press, 2009
Networking Laboratory, Sungkyunkwan University
Hyunseung Choo ([email protected])

Dynamic Programming

Dynamic programming solves optimization problems by combining the solutions to subproblems. "Programming" here refers to a tabular method built on a series of choices, not to writing code.

Recall the divide-and-conquer approach:
- Partition the problem into independent subproblems
- Solve the subproblems recursively
- Combine the solutions of the subproblems

Dynamic programming is applicable when the subproblems are not independent, i.e., when subproblems share subsubproblems. Solve every subsubproblem only once and store the answer for use when it reappears. On such problems, a divide-and-conquer approach would do more work than necessary, repeatedly solving the shared subsubproblems.

Dynamic Programming Solution

The four steps of a dynamic-programming solution:
1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution in a bottom-up fashion
4. Construct an optimal solution from the computed information

Matrix Multiplication

Two matrices A and B can be multiplied only when the number of columns of A equals the number of rows of B:

A(p×q) × B(q×r) → C(p×r)

The number of scalar multiplications is p×q×r.

For a p×q matrix A and a q×r matrix B, the product AB is the p×r matrix

AB = C = [c_{i,j}]_{p×r},  where  c_{i,j} = Σ_{k=1}^{q} a_{i,k} b_{k,j}
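The definition above can be checked with a short sketch (not from the slides; names are mine): the triple loop performs exactly p×q×r scalar multiplications.

```python
def matrix_multiply(A, B):
    """Multiply a p x q matrix A by a q x r matrix B; also count
    the scalar multiplications performed (always p*q*r)."""
    p, q = len(A), len(A[0])
    q2, r = len(B), len(B[0])
    assert q == q2, "columns of A must equal rows of B"
    C = [[0] * r for _ in range(p)]
    mults = 0
    for i in range(p):
        for j in range(r):
            for k in range(q):  # c[i][j] = sum over k of a[i][k] * b[k][j]
                C[i][j] += A[i][k] * B[k][j]
                mults += 1
    return C, mults

A = [[1, 2, 3], [4, 5, 6]]        # 2 x 3
B = [[1, 0], [0, 1], [1, 1]]      # 3 x 2
C, mults = matrix_multiply(A, B)  # mults == 2 * 3 * 2 == 12
```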


Matrix-Chain Multiplication

Recall that A(p×q) × B(q×r) → C(p×r) costs p×q×r scalar multiplications.

Example: < A1(10×100), A2(100×5), A3(5×50) >
- ((A1A2)A3): 10×100×5 + 10×5×50 = 5000 + 2500 = 7,500 multiplications
- (A1(A2A3)): 100×5×50 + 10×100×50 = 25,000 + 50,000 = 75,000 multiplications
- The first order is 10 times faster

The problem: given a sequence (chain) of n matrices to be multiplied, where for i = 1, 2, …, n matrix Ai has dimension p_{i-1}×p_i, fully parenthesize the product A1A2…An in a way that minimizes the number of scalar multiplications. That is, determine an order for multiplying the matrices that has the lowest cost.
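The two costs in the example are plain arithmetic over the dimension sequence p = [10, 100, 5, 50] (a sketch, variable names mine):

```python
p = [10, 100, 5, 50]  # A1: 10x100, A2: 100x5, A3: 5x50

# ((A1 A2) A3): first A1*A2 (10x100x5), then the 10x5 result times A3 (10x5x50)
left_first = p[0] * p[1] * p[2] + p[0] * p[2] * p[3]

# (A1 (A2 A3)): first A2*A3 (100x5x50), then A1 times the 100x50 result (10x100x50)
right_first = p[1] * p[2] * p[3] + p[0] * p[1] * p[3]

# left_first == 7500, right_first == 75000: a factor-of-10 difference
```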

Matrix-Chain Multiplication

Determine an order for multiplying the matrices that has the lowest cost.

Counting the number of parenthesizations of A1 A2 … Ak Ak+1 … An-1 An:

P(n) = 1                              if n = 1
P(n) = Σ_{k=1}^{n-1} P(k) P(n−k)      if n ≥ 2

P(n) = Ω(2^n) (Exercise 15.2-3 on page 338), so it is impractical to check all possible parenthesizations.
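The recurrence for P(n) can be evaluated directly (a sketch, not from the slides); the values are the Catalan numbers shifted by one, which is why exhaustive checking is hopeless:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n):
    """Number of full parenthesizations of a chain of n matrices:
    P(1) = 1, P(n) = sum over k of P(k) * P(n - k)."""
    if n == 1:
        return 1
    return sum(P(k) * P(n - k) for k in range(1, n))

# P(1..5) = 1, 1, 2, 5, 14 and P(10) = 4862: growth is Omega(2^n)
```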

Step 1

The structure of an optimal parenthesization.

Notation: A_{i..j} denotes the matrix that results from evaluating AiAi+1…Aj (i ≤ j).

Any parenthesization of AiAi+1…Aj must split the product between Ak and Ak+1 for some integer k in the range i ≤ k < j. The cost of this parenthesization is

cost of computing A_{i..k}
+ cost of computing A_{k+1..j}
+ cost of multiplying A_{i..k} and A_{k+1..j} together

Step 1

Suppose that an optimal parenthesization of AiAi+1…Aj splits the product between Ak and Ak+1.
- The parenthesization of the prefix subchain AiAi+1…Ak must be an optimal parenthesization of AiAi+1…Ak
- The parenthesization of the suffix subchain Ak+1Ak+2…Aj must be an optimal parenthesization of Ak+1Ak+2…Aj

That is, an optimal solution to the problem contains within it optimal solutions to subproblems.

Step 1

Example with A1A2A3A4A5A6A7A8A9. Suppose

((A1A2)(A3((A4A5)A6))) ((A7A8)A9)

is optimal, with total cost Cost_{A1..6} + Cost_{A7..9} + p0 p6 p9, where Cost_{A1..6} is minimal. Then

(A1A2)(A3((A4A5)A6))

must be optimal for A1A2A3A4A5A6. Otherwise, if

(A1(A2A3))((A4A5)A6)

were optimal for A1A2A3A4A5A6, then

((A1(A2A3))((A4A5)A6)) ((A7A8)A9)

would be better than ((A1A2)(A3((A4A5)A6))) ((A7A8)A9). Contradiction!

Step 2

A recursive solution.

Subproblem: determine the minimum cost of a parenthesization of AiAi+1…Aj (1 ≤ i ≤ j ≤ n).

Let m[i, j] be the minimum number of scalar multiplications needed to compute the matrix A_{i..j}.

If the optimal split is at k, then m[i, j] = m[i, k] + m[k+1, j] + p_{i-1} p_k p_j. However, we do not know the optimal value of k (= s[i, j]) in advance, so we have to try all j − i possibilities:

m[i, j] = 0                                                        if i = j
m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + p_{i-1} p_k p_j }  if i < j

A direct recursive solution takes exponential time.
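The recurrence transcribes directly into a recursive procedure (a sketch, names mine). It is correct but exponential, because it recomputes shared subproblems:

```python
def recursive_matrix_chain(p, i, j):
    """Minimum scalar multiplications for A_i..A_j, where A_i is
    p[i-1] x p[i]; direct recursion, no table (exponential time)."""
    if i == j:
        return 0
    return min(recursive_matrix_chain(p, i, k)
               + recursive_matrix_chain(p, k + 1, j)
               + p[i - 1] * p[k] * p[j]
               for k in range(i, j))

# recursive_matrix_chain([10, 100, 5, 50], 1, 3) gives the 7,500 from the example
```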

Step 3

Computing the optimal costs.

How many subproblems are there in total? One for each choice of i and j satisfying 1 ≤ i ≤ j ≤ n: Θ(n²).

MATRIX-CHAIN-ORDER(p)
- Input: a sequence p = < p0, p1, p2, …, pn > (length[p] = n + 1)
- Fill in the table m in a manner that corresponds to solving the parenthesization problem on matrix chains of increasing length
- Lines 4-12: compute m[i, i+1] for all i, then m[i, i+2] for all i, and so on
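A Python sketch of MATRIX-CHAIN-ORDER (variable names follow the text, but the code itself is mine): the tables are filled for chain lengths l = 2, 3, …, n, so every subproblem m[i][k] and m[k+1][j] is ready before m[i][j] is computed.

```python
import math

def matrix_chain_order(p):
    """Bottom-up computation of m (minimum costs) and s (optimal splits)
    for the chain A_1..A_n, where A_i is p[i-1] x p[i]."""
    n = len(p) - 1  # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):          # l = chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):      # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

m, s = matrix_chain_order([8, 3, 5, 10])
# m[1][3] == 390 with split s[1][3] == 1, matching the worked example below
```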

Step 3

Example: < A1(8×3), A2(3×5), A3(5×10) >
- m[1,2] = 8×3×5 = 120
- m[2,3] = 3×5×10 = 150
- m[1,3] = 390, since
  m[1,1] + m[2,3] + p0 p1 p3 = 0 + 150 + 8×3×10 = 390
  m[1,2] + m[3,3] + p0 p2 p3 = 120 + 0 + 8×5×10 = 520

m table:

| i \ j | 1 | 2   | 3   |
|-------|---|-----|-----|
| 1     | 0 | 120 | 390 |
| 2     |   | 0   | 150 |
| 3     |   |     | 0   |

s table:

| i \ j | 2 | 3 |
|-------|---|---|
| 1     | 1 | 1 |
| 2     |   | 2 |

Step 3

MATRIX-CHAIN-ORDER runs in O(n³) and Ω(n³) time, hence Θ(n³) running time, and uses Θ(n²) space for the tables.

Example of filling the table for p = < 30, 35, 15, 5, 10, 20, 25 >:
- l = 2: m[2,3] = 35×15×5 = 2625, m[5,6] = 10×20×25 = 5000, …
- l = 3: m[3,5] = min {
    m[3,3] + m[4,5] + 15×5×20 = 0 + 1000 + 1500 = 2500,
    m[3,4] + m[5,5] + 15×10×20 = 750 + 0 + 3000 = 3750
  } = 2500

Step 4

Constructing an optimal solution: each entry s[i, j] records the value of k such that the optimal parenthesization of AiAi+1…Aj splits the product between Ak and Ak+1.

- A_{1..n} → A_{1..s[1,n]} A_{s[1,n]+1..n}
- A_{1..s[1,n]} → A_{1..s[1, s[1,n]]} A_{s[1, s[1,n]]+1..s[1,n]}
- … and so on, recursively
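Recursing on the split table s yields the parenthesization itself; this is a sketch of that step (function name mine, corresponding to PRINT-OPTIMAL-PARENS in CLRS):

```python
def optimal_parens(s, i, j):
    """Build the optimal parenthesization of A_i..A_j as a string,
    splitting at k = s[i][j] at every level."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"

# Split table from the worked < A1(8x3), A2(3x5), A3(5x10) > example:
s = [[0] * 4 for _ in range(4)]
s[1][2], s[1][3], s[2][3] = 1, 1, 2
# optimal_parens(s, 1, 3) -> "(A1(A2A3))"
```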

Step 4

Constructing an optimal solution, e.g.,

( A1 ( A2 A3 ) ) ( ( A4 A5 ) A6 )

Elements of Dynamic Programming

Optimal substructure: an optimal solution contains within it optimal solutions to subproblems, so an optimal solution can be built from optimal solutions to subproblems.

Example (matrix-chain multiplication): an optimal parenthesization of AiAi+1…Aj that splits the product between Ak and Ak+1 contains within it optimal solutions to the problems of parenthesizing AiAi+1…Ak and Ak+1Ak+2…Aj.

Elements of Dynamic Programming

Overlapping subproblems: the space of subproblems must be "small" in the sense that a recursive algorithm for the problem solves the same subproblems over and over, rather than always generating new subproblems. Typically, the total number of distinct subproblems is polynomial in the input size.

By contrast, divide-and-conquer is suitable when the recursion generates brand-new subproblems at each step.

Characteristics of Optimal Substructure

How many subproblems are used in an optimal solution to the original problem?
- Matrix-chain multiplication: 2 (A1A2…Ak and Ak+1Ak+2…Aj)

How many choices do we have in determining which subproblems to use in an optimal solution?
- Matrix-chain multiplication: j − i (choices for k)

Informally, the running time of a dynamic-programming algorithm depends on the number of subproblems overall times the number of choices examined per subproblem.
- Matrix-chain multiplication: Θ(n²) subproblems × O(n) choices each = O(n³)


Overlapping Subproblems

In the recursion tree of the direct recursive procedure, the subproblem m[3,4] is computed twice.

Recursive Procedure for Matrix-Chain Multiplication

The time to compute m[1, n] recursively is at least exponential in n:

T(1) ≥ 1
T(n) ≥ 1 + Σ_{k=1}^{n-1} ( T(k) + T(n−k) + 1 )
T(n) ≥ 2 Σ_{i=1}^{n-1} T(i) + n

Prove T(n) = Ω(2^n) using the substitution method, by showing that T(n) ≥ 2^{n-1}:

T(n) ≥ 2 Σ_{i=1}^{n-1} 2^{i-1} + n = 2 Σ_{i=0}^{n-2} 2^i + n = 2(2^{n-1} − 1) + n = (2^n − 2) + n ≥ 2^{n-1}

Memoization

A variation of dynamic programming that often offers the efficiency of the usual bottom-up approach while maintaining a top-down strategy:
- Memoize the natural, but inefficient, recursive algorithm
- Maintain a table of subproblem solutions, but fill it with a control structure that follows the recursive algorithm

Memoization for matrix-chain multiplication:
- Calls in which m[i, j] = ∞ (the entry still has to be computed): Θ(n²) calls
- Calls in which m[i, j] < ∞ (a simple table lookup): O(n³) calls
- This turns an Ω(2^n)-time algorithm into an O(n³)-time algorithm


LOOKUP-CHAIN(p, i, j)
  if m[i, j] < ∞
    then return m[i, j]
  if i = j
    then m[i, j] ← 0
  else for k ← i to j − 1
    do q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + p_{i-1} p_k p_j
       if q < m[i, j]
         then m[i, j] ← q
  return m[i, j]
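The same top-down control flow as LOOKUP-CHAIN can be sketched in Python (function names mine), with math.inf as the "not yet computed" marker:

```python
import math

def memoized_matrix_chain(p):
    """Initialize every m[i][j] to infinity, then solve top-down."""
    n = len(p) - 1
    m = [[math.inf] * (n + 1) for _ in range(n + 1)]
    return lookup_chain(m, p, 1, n)

def lookup_chain(m, p, i, j):
    if m[i][j] < math.inf:   # already solved: just look it up
        return m[i][j]
    if i == j:
        m[i][j] = 0
    else:
        for k in range(i, j):
            q = (lookup_chain(m, p, i, k)
                 + lookup_chain(m, p, k + 1, j)
                 + p[i - 1] * p[k] * p[j])
            if q < m[i][j]:
                m[i][j] = q
    return m[i][j]

# memoized_matrix_chain([10, 100, 5, 50]) recovers the 7,500 from the example
```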


Dynamic Programming vs. Memoization

If all subproblems must be solved at least once, a bottom-up dynamic-programming algorithm usually outperforms a top-down memoized algorithm by a constant factor:
- No overhead for recursion, and less overhead for maintaining the table
- For some problems, the regular pattern of table accesses in the bottom-up algorithm can be exploited to reduce the time or space requirements even further

Self-Study

Two more dynamic-programming problems:
- Section 15.4 Longest Common Subsequence
- Section 15.5 Optimal Binary Search Trees

Longest Common Subsequence (LCS)

Problem: given two sequences X = < x1, x2, …, xm > and Y = < y1, y2, …, yn >, find a longest sequence Z = < z1, z2, …, zk > that is common to X and Y.
- A subsequence of X is obtained by picking elements of X in strictly increasing index order (not necessarily contiguous)
- There are 2^m subsequences of X, so checking all subsequences is impractical for long sequences

Example: X = < A, B, C, B, D, A, B > and Y = < B, D, C, A, B, A >
- Common subsequences include < B >, < C, A >, < B, C, A >, < B, C, B, A >, etc.
- The longest common subsequences are < B, C, B, A > and < B, D, A, B >

Step 1: Optimal Structure of an LCS

Let X = < x1, …, xm > and Y = < y1, …, yn > be sequences, and let Z = < z1, …, zk > be any LCS of X and Y. (X_i denotes the prefix < x1, x2, …, xi >.)
- If xm = yn, then zk = xm = yn and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}
- If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of X_{m-1} and Y
- If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Y_{n-1}

Step 2: Recursive Solution (1/2)

The subproblems overlap: finding an LCS of X and Y may require the LCSs of X_{m-1} and Y and of X and Y_{n-1}, and both of these share the subproblem of finding an LCS of X_{m-1} and Y_{n-1}.

Step 2: Recursive Solution (2/2)

Define c[i, j] = length of an LCS of the prefixes Xi and Yj. Then

c[i, j] = 0                              if i = 0 or j = 0
c[i, j] = c[i−1, j−1] + 1                if i, j > 0 and xi = yj
c[i, j] = max( c[i, j−1], c[i−1, j] )    if i, j > 0 and xi ≠ yj

Step 3: Computing the Length of an LCS

b[i, j] points to the table entry corresponding to the optimal subproblem solution chosen when computing c[i, j].

LCS-LENGTH(X, Y) runs in O(mn) time.

Step 4: Constructing an LCS

PRINT-LCS(b, X, i, j) runs in O(m + n) time.
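The two LCS procedures can be sketched together in Python (a sketch, not the book's pseudocode: here the O(m+n) reconstruction walks the c table directly instead of keeping a separate b table):

```python
def lcs_length(X, Y):
    """Fill c[i][j] = length of an LCS of X[:i] and Y[:j], bottom-up, O(mn)."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

def print_lcs(c, X, Y):
    """Walk back from c[m][n] to recover one LCS, O(m + n)."""
    i, j = len(X), len(Y)
    out = []
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:          # matched: part of the LCS
            out.append(X[i - 1])
            i -= 1
            j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:  # move toward the larger subproblem
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

X, Y = "ABCBDAB", "BDCABA"
c = lcs_length(X, Y)
# c[len(X)][len(Y)] == 4; this walk recovers the LCS "BCBA"
```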