We want to find an order in which to multiply the chain of matrices

$M_1 \cdot M_2 \cdot M_3 \cdots M_n$

which has the least cost (i.e. will be the fastest).
Assume that the matrices may have different dimensions, but that the multiplication is well defined (i.e. #cols of $M_i$ = #rows of $M_{i+1}$).
First consider the trivial case in which we want to multiply the three matrices $M_1 \cdot M_2 \cdot M_3$ - this operation can be completed in two different ways:
The following is a straightforward algorithm to multiply two matrices:
Matrix A of dimension $p \times q$
Matrix B of dimension $q \times r$

```
mat_mult(A, B, p, q, r):
    let C be a new p x r matrix
    for i = 1 to p
        for j = 1 to r
            C[i][j] = 0
            for k = 1 to q
                C[i][j] = C[i][j] + A[i][k] * B[k][j]
```
The total number of multiplications performed in the inner loop is $p \cdot q \cdot r$
The total time complexity is $\Theta(p \cdot q \cdot r)$
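The pseudocode above translates directly into Python - a sketch using plain nested lists (no libraries), with the same argument names as the pseudocode:

```python
def mat_mult(A, B, p, q, r):
    """Multiply a p x q matrix A by a q x r matrix B in Theta(p*q*r) time."""
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
    return C

# Example: two 2x2 matrices
print(mat_mult([[1, 2], [3, 4]], [[5, 6], [7, 8]], 2, 2, 2))
# [[19, 22], [43, 50]]
```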
In general, matrix multiplication is not commutative, that is,

$M_1 \cdot M_2 \neq M_2 \cdot M_1$

However, it is associative:

$M_1 \cdot (M_2 \cdot M_3) = (M_1 \cdot M_2) \cdot M_3$
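These two properties can be checked numerically - a quick sketch with arbitrary 2×2 integer matrices (the specific matrices here are just illustrative):

```python
def mult(A, B):
    """Multiply two square matrices of the same size."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M1 = [[1, 2], [3, 4]]
M2 = [[0, 1], [1, 0]]
M3 = [[2, 0], [0, 3]]

print(mult(M1, M2) != mult(M2, M1))                      # True: not commutative
print(mult(M1, mult(M2, M3)) == mult(mult(M1, M2), M3))  # True: associative
```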
Note that calculating the result of $M_1 \cdot M_2 \cdot M_3$ can take vastly different amounts of time, depending on which parenthesising choice is made
Consider $M_1$ is 10×100 and $M_2$ is 100×5
The cost of $M_1 \cdot M_2$ is 10 · 100 · 5 = 5,000 iterations
1.3 - Example - Importance of Order
Now consider $M_1$ is 10×100, $M_2$ is 100×5 and $M_3$ is 5×50
Case 1: $(M_1 \cdot M_2) \cdot M_3$
From before, we know that computing $M_1 \cdot M_2$ takes 5,000 iterations
Given $M_1 \cdot M_2$, computing $(M_1 \cdot M_2) \cdot M_3$ requires an additional 10 · 5 · 50 = 2,500 iterations
In total, this is 7,500 iterations
Case 2: $M_1 \cdot (M_2 \cdot M_3)$
The cost of computing $M_2 \cdot M_3$ is 100 · 5 · 50 = 25,000 iterations
The cost of computing $M_1 \cdot (M_2 \cdot M_3)$, given the pre-computed $M_2 \cdot M_3$, is 10 · 100 · 50 = 50,000 iterations
In total, this is 75,000 iterations - 10 times more than the other solution
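The arithmetic above can be sanity-checked with a short script - a sketch, where cost(a, b, c) is the a · b · c rule for multiplying an a×b matrix by a b×c matrix:

```python
def cost(a, b, c):
    """Scalar multiplications needed to multiply an a x b by a b x c matrix."""
    return a * b * c

# M1: 10x100, M2: 100x5, M3: 5x50
case1 = cost(10, 100, 5) + cost(10, 5, 50)    # (M1*M2)*M3
case2 = cost(100, 5, 50) + cost(10, 100, 50)  # M1*(M2*M3)
print(case1, case2)  # 7500 75000
```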
1.4 - Task
We want to find an order in which to multiply the chain of matrices

$M_1 \cdot M_2 \cdot M_3 \cdots M_n$

which has the least cost (i.e. will be the fastest), assuming that each matrix $M_i$ has dimensions $p_{i-1} \times p_i$ so that, by definition, each pair of adjacent matrices is compatible.
Additionally, the cost of $M_i \cdot M_{i+1}$ is $p_{i-1} \cdot p_i \cdot p_{i+1}$, and the resulting matrix $M_i \cdot M_{i+1}$ has dimension $p_{i-1} \times p_{i+1}$
1.4.1 - Naive Solution
Why can't we just enumerate each possible way of multiplying n matrices together, and see which one is cheapest?
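Because the number of full parenthesisations blows up exponentially. A sketch that counts them with the natural recurrence (split the chain at every possible point k) - these counts are the Catalan numbers:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_parens(n):
    """Number of ways to fully parenthesise a chain of n matrices."""
    if n <= 1:
        return 1
    # split the chain into a left part of k matrices and a right part of n - k
    return sum(num_parens(k) * num_parens(n - k) for k in range(1, n))

print([num_parens(n) for n in range(1, 8)])  # [1, 1, 2, 5, 14, 42, 132]
```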
We notice that each sub-problem $C_{i..j}$ depends on every sub-sub-problem $C_{i..k}$ and $C_{k+1..j}$.
Furthermore, each subproblem may be re-calculated many times in the recursive definition
Converting this into a dynamic programming solution allows us to turn an exponential-time algorithm into a polynomial-time one, $\Theta(n^3)$
We store the solution $C_{i..j}$ at m[i, j] in an $n \times n$ matrix, m.
Thus, m[i, j] is the minimum cost parenthesisation of $M_i \cdots M_j$
Additionally, we are only ever interested in entries m[i, j] where $i \le j$, since a cost such as $C_{5..2}$ is nonsensical.
We should calculate sub-problems in an order such that, when we need them, they are already computed.
For $1 \le i \le n-1$, the cases m[i, i+1] depend on m[i, i] and m[i+1, i+1], and they should be calculated next.
Thus, we should:
Calculate m[i, i] for all $1 \le i \le n$
Calculate m[i, i+1] for all $1 \le i \le n-1$
Calculate m[i, i+2] for all $1 \le i \le n-2$
…and so on, along the diagonals of m.
```
matrix_chain_order(p, n):
    m = new int[n, n]
    s = new int[n, n]
    for i = 1 to n
        m[i, i] = 0
        s[i, i] = i
    for L = 2 to n                  // L = length of the chain
        for i = 1 to n - L + 1
            j = i + L - 1           // so that i <= k < j below
            m[i, j] = infinity
            for k = i to j - 1      // try every split point k
                q = m[i, k] + m[k + 1, j] + (p_{i-1} * p_k * p_j)
                if q < m[i, j]
                    m[i, j] = q
                    s[i, j] = k
```
For each of the $\Theta(n^2)$ sub-problems, we must minimise over $\Theta(n)$ possible split points, which makes it a $\Theta(n^3)$ algorithm
We additionally create a matrix s in which we store the optimal split points, so that the optimal parenthesisation itself can be reconstructed.
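The algorithm above can be sketched directly in Python, indexing from 1 to match the pseudocode. Here p is the dimension list, so $M_i$ is p[i-1] × p[i] and a chain of n matrices needs len(p) == n + 1:

```python
import math

def matrix_chain_order(p):
    """Return (m, s) tables for the chain with dimension list p, 1-indexed."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        s[i][i] = i
    for L in range(2, n + 1):            # L = length of the chain
        for i in range(1, n - L + 2):
            j = i + L - 1
            m[i][j] = math.inf
            for k in range(i, j):        # try every split point k
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k

    return m, s

# The running example: M1 is 10x100, M2 is 100x5, M3 is 5x50
m, s = matrix_chain_order([10, 100, 5, 50])
print(m[1][3])  # 7500
print(s[1][3])  # 2, i.e. split after M2: (M1 * M2) * M3
```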