Unifying Local and Exhaustive Search
John Hooker, Carnegie Mellon University
Encuentro Mexicano de Computación, September 2005
Exhaustive vs. Local Search
• Local search methods typically examine only a portion of the solution space.
– Simulated annealing, tabu search, genetic algorithms, GRASP (greedy randomized adaptive search procedure).
• Exhaustive methods examine every possible solution, at least implicitly.
– Branch and bound, Benders decomposition.
• They are generally regarded as very different.
Exhaustive vs. Local Search
• However, exhaustive and local search are often closely related.
• “Heuristic algorithm” = “search algorithm”
– “Heuristic” is from the Greek εὑρίσκειν (to search, to find).
• Two classes of exhaustive search methods are very similar to corresponding local search methods:
– Branching methods.
– Nogood-based search.
Type of search | Exhaustive search examples | Local search examples
Branching | Branch and bound; DPL for SAT | Simulated annealing; GRASP
Nogood-based | Benders decomposition; DPL with clause learning; partial-order dynamic backtracking | Tabu search
Why Unify Exhaustive & Local Search?
• Suggests how techniques used in exhaustive search can carry over to local search.
– And vice-versa.
• Encourages design of algorithms that have several exhaustive and inexhaustive options.
– Can move from exhaustive to inexhaustive options as problem size increases.
Why Unify Exhaustive & Local Search?
• We will use an example (the traveling salesman problem with time windows) to show:
– Exhaustive branching can suggest a generalization of a local search method (GRASP).
– Exhaustive nogood-based search can suggest a generalization of a local search method (tabu search).
• The bounding mechanism in branch and bound also carries over to generalized GRASP.
Outline
• Branching search.
– Generic algorithm (exhaustive & inexhaustive)
– Exhaustive example: Branch and bound
– Inexhaustive examples: Simulated annealing, GRASP
– Solving TSP with time windows using exhaustive branching & generalized GRASP
• Nogood-based search.
– Generic algorithm (exhaustive & inexhaustive)
– Exhaustive example: Benders decomposition
– Inexhaustive example: Tabu search
– Solving TSP with time windows using exhaustive nogood-based search and generalized tabu search
Branching Search
• Each node of the branching tree corresponds to a restriction P of the original problem.
– Restriction = constraints are added.
• Branch by generating restrictions of P.
– Add a new leaf node for each restriction.
• Keep branching until the problem is “easy” to solve.
• Notation:
– feas(P) = feasible set of P
– relax(P) = a relaxation of P
Branching Search Algorithm
• Repeat while leaf nodes remain:
– Select a problem P at a leaf node.
– If P is “easy” to solve then
• If the solution of P is better than the previous best solution, save it.
• Remove P from the tree.
– Else
• If the optimal value of relax(P) is better than the previous best solution, then
– If the solution of relax(P) is feasible for P, then P is “easy”; save the solution and remove P from the tree.
– Else branch.
• Else remove P from the tree.
Branching Search Algorithm
• To branch:
– If the set of restrictions P1, …, Pk of P generated so far is complete, then
• Remove P from the tree.
– Else
• Generate new restrictions Pk+1, …, Pm and leaf nodes for them.
• Exhaustive vs. heuristic algorithm:
– In exhaustive search, “complete” = exhaustive: ∪i feas(Pi) = feas(P).
– In a heuristic algorithm, “complete” ≠ exhaustive.
• A minimal code sketch of the generic loop follows below.
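As a concrete rendering of the generic algorithm, here is a minimal Python sketch. It is not from the talk; the callback names (solve_relax, is_feasible, branch) are mine. Whether the search is exhaustive or heuristic depends entirely on whether branch generates a complete set of restrictions.

def branching_search(original, solve_relax, is_feasible, branch):
    """Generic branching search sketch.
    solve_relax(P)      -> (value, solution) of relax(P), or None if infeasible
    is_feasible(P, sol) -> does the relaxation's solution actually solve P?
    branch(P)           -> list of restrictions of P (a complete set makes
                           the search exhaustive; an incomplete set, heuristic)
    """
    best_val, best_sol = float('inf'), None
    leaves = [original]                    # leaf nodes of the branching tree
    while leaves:                          # repeat while leaf nodes remain
        P = leaves.pop()                   # select a problem P at a leaf node
        r = solve_relax(P)
        if r is None:
            continue                       # relax(P) infeasible: remove P from tree
        val, sol = r
        if val >= best_val:
            continue                       # bound cannot beat incumbent: remove P
        if is_feasible(P, sol):            # P turned out to be "easy"
            best_val, best_sol = val, sol  # save the improving solution
        else:
            leaves.extend(branch(P))       # add a new leaf node for each restriction
    return best_val, best_sol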
Exhaustive search: Branch and bound
[Figure: branching tree. The original problem is the root; below previously removed nodes sits the current leaf node P, from which restrictions P1, P2, …, Pk branch. Each restriction is created by splitting a variable domain.]
• Every restriction P is initially too “hard” to solve, so solve the LP relaxation. If the LP solution is feasible for P, then P is “easy.”
• Create more branches if the value of relax(P) is better than the previous solution and P1, …, Pk are not exhaustive.
Heuristic algorithm: Simulated annealing
[Figure: two-level tree. The original problem is the root, with restrictions P1, P2, …, Pk (solution of Pk = x) and previously removed nodes. The current leaf node P was generated because {P1, …, Pk} is not complete.]
• The search tree has 2 levels.
• Second-level problems are always “easy” to solve by searching the neighborhood of the previous solution: feas(P) = neighborhood of x.
• Randomly select y ∈ feas(P). The solution of P is y if y is better than x; otherwise it is y with probability p and x with probability 1 − p. (A code sketch follows below.)
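A minimal, hypothetical Python sketch of this two-level search. The neighbors and cost callbacks and the geometric cooling schedule are illustrative choices, not details from the talk.

import math
import random

def simulated_annealing(x0, neighbors, cost, temp=1.0, cooling=0.99, steps=1000):
    """Simulated annealing in the talk's terms: each restriction P has
    feas(P) = neighborhood of the current solution x, and P is 'solved' by
    sampling y from feas(P) and applying the Metropolis acceptance rule."""
    x = best = x0
    for _ in range(steps):
        y = random.choice(neighbors(x))        # randomly select y in feas(P)
        if cost(y) <= cost(x):
            x = y                              # y better than x: accept it
        elif random.random() < math.exp((cost(x) - cost(y)) / temp):
            x = y                              # accept worse y with probability p
        if cost(x) < cost(best):
            best = x                           # keep best solution seen so far
        temp *= cooling                        # lower the temperature
    return best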
Heuristic algorithm: GRASP
Greedy randomized adaptive search procedure.
[Figure: branching tree for the greedy phase: from the original problem, fix x1 = v1, then x2 = v2, then x3 = v3, …; each partial assignment P is “hard” to solve, and relax(P) contains no constraints. The node with all variables fixed is “easy” to solve, with solution v. For the local search phase, restrictions P1, P2, …, Pk hang from the original problem, with feas(P2) = neighborhood of v.]
• Greedy phase: select randomized greedy values until all variables are fixed.
• Local search phase: search neighborhoods of previous solutions.
• Stop the local search when a “complete” set of neighborhoods has been searched. The process then starts over with a new greedy phase. (A code sketch follows below.)
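A minimal GRASP sketch in Python, again hypothetical: the two phases are passed in as callbacks, and rcl_choice shows one standard way (a restricted candidate list) that “randomized greedy values” might be drawn; the talk does not prescribe this particular rule.

import random

def rcl_choice(candidates, score, alpha=0.3):
    """Pick uniformly from the best alpha-fraction of candidates
    (a restricted candidate list, the usual GRASP randomization)."""
    ranked = sorted(candidates, key=score)
    k = max(1, int(len(ranked) * alpha))
    return random.choice(ranked[:k])

def grasp(build_greedy_randomized, local_search, cost, restarts=10):
    """GRASP = greedy solution + local search, repeated with restarts."""
    best = None
    for _ in range(restarts):
        x = build_greedy_randomized()   # greedy phase: fix variables one by one
        x = local_search(x)             # local phase: search neighborhoods until "complete"
        if best is None or cost(x) < cost(best):
            best = x                    # keep the best solution over all restarts
    return best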
Algorithm | Restriction P | relax(P) | P easy enough to solve?
Branch and bound (exhaustive) | Created by splitting a variable domain | LP relaxation | Never. Always solve relax(P).
Simulated annealing (inexhaustive) | Created by defining neighborhood of previous solution | Not used | Always. Examine a random element of the neighborhood.
GRASP (inexhaustive), greedy phase | Created by assigning the next variable a randomized greedy value | No constraints, so the solution is always infeasible and branching is necessary | Not until all variables are assigned values.
GRASP (inexhaustive), local search phase | Created by defining neighborhood of previous solution | Not used | Always. Search the neighborhood.
TSP with Time Windows: An Example
• A salesman must visit several cities.
• Find a minimum-length tour that visits each city exactly once and returns to home base.
• Each city must be visited within a time window.
TSP with Time Windows: An Example
[Figure: five cities with pairwise travel times (edge labels 3–8). A is the home base; the time windows are D [10,30], C [15,25], B [20,35], E [25,35].]
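For the sketches that follow it helps to fix a machine-readable instance. The time windows below are the ones on the slide; the travel times are illustrative placeholders, since the figure’s edge labels did not survive extraction with their endpoints attached.

TRAVEL = {('A', 'B'): 5, ('A', 'C'): 7, ('A', 'D'): 3, ('A', 'E'): 7,
          ('B', 'C'): 8, ('B', 'D'): 6, ('B', 'E'): 6,
          ('C', 'D'): 4, ('C', 'E'): 5, ('D', 'E'): 5}

def t(i, j):
    """Symmetric travel time between cities i and j (placeholder values)."""
    return TRAVEL[(i, j)] if (i, j) in TRAVEL else TRAVEL[(j, i)]

# Time windows from the slide; A (home base) is effectively unconstrained.
WINDOWS = {'A': (0, 999), 'B': (20, 35), 'C': (15, 25), 'D': (10, 30), 'E': (25, 35)}
CITIES = ['A', 'B', 'C', 'D', 'E']   # 'A' is the home base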
Relaxation of TSP
• Suppose that customers x0, x1, …, xk have been visited so far (the sequence of customers visited, with x0 the home base).
• Let tij = travel time from customer i to j.
• Then the total travel time of the completed route is bounded below by

  T + Σ_{j∉{x0,…,xk}} min{ t_{xk j}, min_{i∉{j,x0,…,xk}} t_{ij} } + min_{j∉{x0,…,xk}} t_{j0}

where T is the earliest time the vehicle can leave customer xk, each term of the sum is the minimum time from customer j’s predecessor to j, and the last term is the minimum time from the last customer back to home. (A code sketch follows below.)
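A direct transcription of this bound into Python, assuming the instance conventions (t, CITIES) from the sketch above; the function name is mine.

def tsptw_bound(prefix, T, cities, t, home='A'):
    """Lower bound on the completed route, following the slide's formula:
    T + sum over unvisited j of min(t[last, j], min over other unvisited i of t[i, j])
      + min over unvisited j of t[j, home],
    where T is the earliest time the vehicle can leave the last visited customer."""
    unvisited = [j for j in cities if j not in prefix]
    if not unvisited:
        return T + t(prefix[-1], home)               # tour is complete: just go home
    last = prefix[-1]
    total = T
    for j in unvisited:
        preds = [t(last, j)] + [t(i, j) for i in unvisited if i != j]
        total += min(preds)                          # cheapest way j's predecessor reaches j
    total += min(t(j, home) for j in unvisited)      # cheapest return to home base
    return total

# e.g., having visited A then D and being able to leave D at time 13:
#   tsptw_bound(['A', 'D'], 13, CITIES, t)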
Exhaustive Branch-and-Bound
[Search tree: each node is a partial tour that starts and ends at home base A; branching fixes the next customer.]
• Root A⋅⋅⋅⋅A. Branch to AB⋅⋅⋅A, AC⋅⋅⋅A, AD⋅⋅⋅A, AE⋅⋅⋅A.
• Expand AD⋅⋅⋅A into ADB⋅⋅A, ADC⋅⋅A, ADE⋅⋅A.
• ADCBEA is feasible with value 36.
• ADCEBA is feasible with value 34: the new incumbent.
• ADB⋅⋅A: relaxation value = 36, prune.
• ADE⋅⋅A: relaxation value = 40, prune.
• AC⋅⋅⋅A: relaxation value = 31, so continue in this fashion.
• Optimal solution: ADCEBA, value 34. (A code sketch follows below.)
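A minimal depth-first rendering of this search in Python, reusing t, WINDOWS, CITIES and tsptw_bound from the sketches above. It assumes zero service times and that arriving early means waiting for the window to open; both are my simplifications, not statements from the talk.

def branch_and_bound_tsptw(cities, t, windows, bound, home='A'):
    """Branch and bound for the TSPTW sketch: each tree node fixes the next
    customer; `bound` is a lower-bound function such as tsptw_bound."""
    best_val, best_tour = float('inf'), None

    def extend(prefix, T):
        nonlocal best_val, best_tour
        if len(prefix) == len(cities):                 # all customers fixed: "easy"
            total = T + t(prefix[-1], home)            # close the tour
            if total < best_val:
                best_val, best_tour = total, prefix + [home]
            return
        if bound(prefix, T, cities, t, home) >= best_val:
            return                                     # prune by relaxation value
        for j in cities:
            if j in prefix:
                continue
            arrive = T + t(prefix[-1], j)
            lo, hi = windows[j]
            if arrive > hi:
                continue                               # window already closed: infeasible branch
            extend(prefix + [j], max(arrive, lo))      # wait if we arrive early

    extend([home], 0)
    return best_val, best_tour

# best, tour = branch_and_bound_tsptw(CITIES, t, WINDOWS, tsptw_bound)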
Generalized GRASP
• Basically, GRASP = greedy solution + local search. Begin with greedy assignments that can be viewed as creating “branches.”
• Greedy phase: visit the customer that can be served earliest from A, giving AD⋅⋅⋅A; next, visit the customer that can be served earliest from D, giving ADC⋅⋅A; continue until all customers are visited. The resulting solution ADCBEA is feasible with value 34: save it.
• Local search phase: backtrack randomly, delete the subtree already traversed (ADC⋅⋅A), and randomly select a partial solution in the neighborhood of the current node, e.g. ADE⋅⋅A.
• Greedy phase: complete the solution in greedy fashion, giving ADEBCA, which is infeasible.
• Local search phase: randomly backtrack, this time to A⋅⋅⋅⋅A; the next dive gives AB⋅⋅⋅A, ABD⋅⋅A, ABDECA, also infeasible. Continue in similar fashion.
• So an exhaustive search algorithm (branching search) suggests a generalization of a heuristic algorithm (GRASP). (A code sketch follows below.)
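A sketch of this generalized GRASP in Python, under the same instance conventions as above. The backtracking rule (uniform over tree levels) and the bounded iteration count are my assumptions; the greedy rule (earliest service time) is the one on the slides.

import random

def generalized_grasp(cities, t, windows, iters=30, home='A'):
    """Greedy phase: dive down the branching tree by always visiting the
    customer that can be served earliest. Local search phase: randomly
    backtrack and dive again, skipping branches already traversed
    (the deleted subtrees)."""
    best_val, best_tour = float('inf'), None
    tried = set()                       # prefixes already traversed, as tuples
    path, times = [home], [0]           # current node: partial tour + service times

    for _ in range(iters):
        while len(path) < len(cities):                 # greedy dive
            options = []
            for j in cities:
                if j in path or tuple(path + [j]) in tried:
                    continue
                arrive = times[-1] + t(path[-1], j)
                lo, hi = windows[j]
                if arrive <= hi:                       # window still open
                    options.append((max(arrive, lo), j))
            if not options:
                break                                  # dead end: backtrack below
            s, j = min(options)                        # earliest service time wins
            path.append(j)
            times.append(s)
            tried.add(tuple(path))
        if len(path) == len(cities):                   # feasible complete tour
            val = times[-1] + t(path[-1], home)
            if val < best_val:
                best_val, best_tour = val, path + [home]
        if len(path) > 1:                              # local search phase:
            k = random.randint(1, len(path) - 1)       # randomly backtrack
            del path[k:]
            del times[k:]
    return best_val, best_tour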
Generalized GRASP with relaxation
• Exhaustive search suggests an improvement on a heuristic algorithm: use relaxation bounds to reduce the search.
• Greedy phase: A⋅⋅⋅⋅A → AD⋅⋅⋅A → ADC⋅⋅A → ADCBEA, feasible with value 34.
• Local search phase: backtrack randomly. At ADE⋅⋅A the relaxation value is 40, so prune.
• Randomly backtrack to A⋅⋅⋅⋅A. At AB⋅⋅⋅A the relaxation value is 38, so prune. (A code sketch follows below.)
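One hypothetical way to graft the bounding mechanism onto the dive of generalized_grasp above: filter the candidate children through the relaxation test, exactly as branch and bound does. This helper assumes tsptw_bound and the window conventions from the earlier sketches.

def pruned_options(path, T, best_val, cities, t, windows, home='A'):
    """Children of the current node that survive the relaxation test
    (generalized GRASP with relaxation)."""
    options = []
    for j in cities:
        if j in path:
            continue
        arrive = T + t(path[-1], j)
        lo, hi = windows[j]
        if arrive > hi:
            continue                                   # time window violated
        s = max(arrive, lo)
        if tsptw_bound(path + [j], s, cities, t, home) >= best_val:
            continue                                   # prune by bound, as in B&B
        options.append((s, j))
    return options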
Algorithm | Restriction P | relax(P) | P easy enough to solve?
Branch and bound for TSPTW (exhaustive) | Created by fixing the next variable | TSPTW relaxation | Never. Always solve relax(P).
Generalized GRASP for TSPTW (inexhaustive), greedy phase | Created by assigning a greedy value to the next variable | TSPTW relaxation | Never. Always solve relax(P).
Generalized GRASP for TSPTW (inexhaustive), local search phase | Randomly backtrack to a previous node | Not used | Yes. Randomly select a value for the next variable.
Nogood-Based Search
• Search is directed by nogoods.
– Nogood = a constraint that excludes solutions already examined (explicitly or implicitly).
• The next solution examined is a solution of the current nogood set.
– Nogoods may be processed so that the nogood set is easy to solve.
• Search stops when the nogood set is complete in some sense.
Nogood-Based Search Algorithm
• Let N be the set of nogoods, initially empty.
• Repeat while N is incomplete:
– Select a restriction P of the original problem.
– Select a solution x of relax(P) ∪ N.
– If x is feasible for P then
• If x is the best solution so far, keep it.
• Add to N a nogood that excludes x and perhaps other solutions that are no better.
– Else add to N a nogood that excludes x and perhaps other solutions that are infeasible.
– Process the nogoods in N.
Nogood-Based Search Algorithm
• To process the nogood set N:
– Infer new nogoods from existing ones.
– Delete (redundant) nogoods if desired.
– Goal: make it easy to find a feasible solution of N.
• Exhaustive vs. heuristic algorithm:
– In an exhaustive search, complete = infeasible.
– In a heuristic algorithm, complete = large enough.
(A minimal code sketch of the generic loop follows below.)
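A hypothetical Python skeleton of the generic loop; all callback names are mine. N collects nogoods, each iteration solves relax(P) ∪ N, and a new nogood then excludes the solution just examined.

def nogood_search(solve_N, is_feasible, make_nogood, process, complete, cost):
    """Generic nogood-based search sketch.
    solve_N(N)           -> a solution of relax(P) ∪ N, or None if N is infeasible
    make_nogood(x, feas) -> nogood excluding x (and perhaps more)
    process(N)           -> processed nogood set (inference / deletion)
    complete(N)          -> stopping test (infeasible, or 'searched long enough')
    """
    N = []
    best_val, best_sol = float('inf'), None
    while not complete(N):
        x = solve_N(N)                       # next solution examined
        if x is None:
            break                            # N infeasible: every solution excluded
        feas = is_feasible(x)
        if feas and cost(x) < best_val:
            best_val, best_sol = cost(x), x  # best solution so far: keep it
        N.append(make_nogood(x, feas))       # exclude x (and perhaps others)
        N = process(N)                       # keep N easy to solve
    return best_val, best_sol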
Exhaustive search: Benders decomposition
• Problem: minimize f(x) + cy subject to g(x) + Ay ≥ b.
• N = master problem constraints; relax(P) = ∅. Nogoods are Benders cuts; they are not processed. N is complete when infeasible.
[Flowchart:]
1. Start with N = {v < ∞}.
2. Let (v*, x*) minimize v subject to N (master problem). This selects an optimal solution of N; formally, the selected solution is (x*, y) where y is arbitrary.
3. If the master problem is infeasible, stop: (x*, y*) is the solution.
4. Otherwise let y* minimize cy subject to Ay ≥ b − g(x*) (subproblem). The subproblem generates the nogoods: add v ≥ f(x) + u(b − g(x)) (Benders cut) and v < f(x*) + cy* to N, where u is the dual solution of the subproblem. Return to step 2.
(A toy code sketch follows below.)
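The flowchart can be exercised end to end on a toy instance. The following minimal sketch is mine, under stated assumptions: scalar y, a finite grid for x, and a constant dual-feasible multiplier u = c/a standing in for the subproblem dual (valid by weak duality). None of these numbers come from the talk.

def benders_toy():
    """Toy Benders decomposition:
    minimize f(x) + c*y  s.t.  g(x) + a*y >= b,  y >= 0,
    with f(x) = x**2, g(x) = x, a = 2, c = 3, b = 10, x in {0, ..., 10}."""
    a, c, b = 2.0, 3.0, 10.0
    f = lambda x: x * x
    g = lambda x: x
    u = c / a                              # dual-feasible multiplier for the subproblem
    xs = range(11)
    cuts = []                              # the nogood set N (Benders cuts)
    best, incumbent, eps = float('inf'), None, 1e-9

    while True:
        # master: min v s.t. all cuts and v < best (modeled as v <= best - eps)
        def v_lb(x):
            return max((cut(x) for cut in cuts), default=-1e18)
        feasible = [x for x in xs if v_lb(x) < best - eps]
        if not feasible:
            return best, incumbent         # master infeasible: N is complete
        x_star = min(feasible, key=v_lb)
        # subproblem: min c*y s.t. a*y >= b - g(x_star), y >= 0 (solved in closed form)
        y_star = max(b - g(x_star), 0.0) / a
        val = f(x_star) + c * y_star
        if val < best:
            best, incumbent = val, (x_star, y_star)
        cuts.append(lambda x, u=u: f(x) + u * (b - g(x)))   # Benders cut nogood

# benders_toy() returns (14.5, (1, 4.5)) on this toy instance.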
Nogood-Based Search Algorithm
• Other forms of exhaustive nogood-based search:
– Davis-Putnam-Loveland method with clause learning (for the propositional satisfiability problem).
– Partial-order dynamic backtracking.
Heuristic algorithm: Tabu search
• The nogood set N is the tabu list.
• In each iteration, search the neighborhood of the current solution for the best solution not on the tabu list.
• N is “complete” when one has searched long enough.
[Flowchart:]
1. Start with N = ∅.
2. Let the feasible set of P be the neighborhood of x*. (The neighborhood of the current solution x* is feas(relax(P)).)
3. Let x* be the best solution of P ∪ N: solve P ∪ N by searching the neighborhood.
4. Add the nogood x ≠ x* to N. Process N by removing old nogoods from the tabu list.
5. If N is “complete,” stop; else return to step 2.
(A code sketch follows below.)
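A minimal tabu search sketch in Python, in the talk’s vocabulary: the tabu list is the nogood set N, nogood processing deletes entries older than the tenure, and “complete” simply means we searched long enough. The neighbors/cost callbacks and the tenure value are illustrative assumptions.

def tabu_search(x0, neighbors, cost, tenure=5, steps=100):
    """Tabu search as nogood-based search."""
    x = best = x0
    tabu = []                                  # the nogood set N = tabu list
    for _ in range(steps):                     # N is "complete" after enough steps
        candidates = [y for y in neighbors(x) if y not in tabu]
        if not candidates:
            break                              # every neighbor excluded by a nogood
        x = min(candidates, key=cost)          # best solution of P ∪ N
        tabu.append(x)                         # add nogood x != x*
        if len(tabu) > tenure:
            tabu.pop(0)                        # process N: remove old nogoods
        if cost(x) < cost(best):
            best = x
    return best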
TSP with Time Windows: An Example
[The five-city example figure from above, repeated: A is the home base; time windows D [10,30], C [15,25], B [20,35], E [25,35].]
Exhaustive nogood-based search
• In this problem, P is the original problem and relax(P) has no constraints, so relax(P) ∪ N = N. This is a special case of partial-order dynamic backtracking.
• Greedy solution of the current nogood set: go to the closest customer consistent with the nogoods.
• Iteration 0: solution ADCBEA, value 36. New nogood ADCB, which excludes the current solution by excluding any solution that begins ADCB.
• Iteration 1: solution ADCEBA, value 34. New nogood ADCE. The current nogoods ADCB, ADCE rule out any solution beginning ADC, so process the nogood set by replacing ADCB, ADCE with their parallel resolvent ADC. This makes it possible to solve the nogood set with a greedy algorithm.
• Iteration 2: solution ADBEAC, infeasible. Not only is ADBEAC infeasible, but we observe that no solution beginning ADB can be completed within the time windows: new nogood ADB.
• Iteration 3: solution ADEBCA, infeasible; new nogood ADE. Process the nogoods ADB, ADC, ADE to obtain the parallel resolvent AD.
• At the end of the search, the processed nogood set rules out all solutions (i.e., is infeasible). The optimal solution is ADCEBA, value 34.

Iter. | relax(P) ∪ N | Solution of N | Sol. value | New nogoods
0 | ∅ | ADCBEA | 36 | ADCB
1 | ADCB | ADCEBA | 34 | ADCE
2 | ADC | ADBEAC | infeasible | ADB
3 | ADB, ADC | ADEBCA | infeasible | ADE
4 | AD | ACDBEA | 38 | ACDB
5 | ACDB, AD | ACDEBA | 36 | ACDE
6 | ACD, AD | ACBEDA | infeasible | ACBE
7 | ACD, ACBE, AD | ACBDEA | 40 | ACBD
8 | ACB, ACD, AD | ACEBDA | infeasible | ACEB
9 | ACB, ACD, ACEB, AD | ACEDBA | 40 | ACED
10 | AC, AD | ABDECA | infeasible | ABD, ABC
11 | ABD, ABC, AC, AD | ABEDCA | infeasible | ABE
12 | AB, AC, AD | AEBCDA | infeasible | AEB, AEC
13 | AB, AC, AD, AEB, AEC | AEDBCA | infeasible | AED
14 | A | none (N is infeasible) | — | —

(A code sketch follows below.)
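A Python sketch of this search under stated assumptions, reusing t, WINDOWS, CITIES from the instance sketch above. Nogoods are route prefixes; a feasible solution is excluded via its length-(n−1) prefix (the last customer is forced); an infeasible one via the prefix ending at the first missed window, which is weaker than the slide’s “cannot be completed within the time windows” analysis. resolve implements parallel resolution.

def resolve(N, cities):
    """Parallel resolution: whenever every child of a prefix is excluded,
    replace the children by the prefix itself."""
    changed = True
    while changed:
        changed = False
        for p in {n[:-1] for n in N if len(n) > 1}:
            kids = [j for j in cities if j not in p]
            if kids and all(p + (j,) in N for j in kids):
                N = {n for n in N if n[:len(p)] != p} | {p}
                changed = True
                break                              # N changed: recompute parents
    return N

def nogood_search_tsptw(cities, t, windows, home='A'):
    """Exhaustive nogood-based search for the TSPTW sketch: solve N greedily
    (go to the closest customer consistent with the nogoods); N is complete
    when the prefix (home,) excludes every route."""
    N = set()
    best_val, best_tour = float('inf'), None

    def excluded(seq):
        return any(tuple(seq[:len(n)]) == n for n in N)

    while (home,) not in N:
        path = [home]                              # greedy solution of relax(P) ∪ N = N
        while len(path) < len(cities):
            options = [(t(path[-1], j), j) for j in cities
                       if j not in path and not excluded(path + [j])]
            if not options:
                break
            path.append(min(options)[1])           # closest consistent customer
        if len(path) < len(cities):
            N.add(tuple(path))                     # dead end: exclude this prefix
        else:
            T, violated = 0, None                  # check time windows along the route
            for k in range(1, len(path)):
                arrive = T + t(path[k - 1], path[k])
                lo, hi = windows[path[k]]
                if arrive > hi:
                    violated = k                   # window missed: prefix is a nogood
                    break
                T = max(arrive, lo)
            if violated is not None:
                N.add(tuple(path[:violated + 1]))
            else:
                total = T + t(path[-1], home)
                if total < best_val:
                    best_val, best_tour = total, path + [home]
                N.add(tuple(path[:-1]))            # excludes exactly this solution
        N = resolve(N, cities)
    return best_val, best_tour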
Inexhaustive nogood-based search
• Start as before.
• Generate stronger nogoods by ruling out subsequences other than those starting with A. For example, at iteration 2 the solution ADBECA is infeasible, yielding the nogoods ADB and EC. This requires more intensive processing (full resolution), which is possible because the nogood set is small.
• To process the nogood set, list all subsequences beginning with A that are ruled out by the current nogoods. This requires a full resolution algorithm.
• Remove old nogoods from the nogood set, so the method is inexhaustive. Adjust the length of the nogood list to avoid cycling, as in tabu search. The stopping point is arbitrary.

Iter. | relax(P) ∪ N | Solution of N | Sol. value | New nogoods
0 | ∅ | ADCBEA | 36 | ADCB
1 | ADCB | ADCEBA | 34 | ADCE
2 | ADC | ADBECA | infeasible | ADB, EC
3 | ABEC, ADB, ADC | ADEBCA | infeasible | ADE, EB
4 | ABEC, ACEB, AD, AEB | ACDBEA | 38 | ACDB
⋮ | | | |

• Continue in this fashion, but start dropping old nogoods.
• So exhaustive nogood-based search suggests a more sophisticated variation of tabu search.
Algorithm | Restriction P | relax(P) | Nogoods | Nogood processing | Solution of relax(P) ∪ N
Benders decomposition (exhaustive) | Same as original problem | No constraints | Benders cuts, obtained by solving the subproblem | None | Optimal solution of master problem + arbitrary values for subproblem variables
Tabu search (inexhaustive) | Created by defining neighborhood of previous solution | Same as P | Tabu list | Delete old nogoods | Find best solution in neighborhood not on tabu list
Partial-order dynamic backtracking for TSPTW (exhaustive) | Same as original problem | No constraints | Rule out last sequence tried | Parallel resolution | Greedy solution
Partial-order dynamic backtracking for TSPTW (inexhaustive) | Same as original problem | No constraints | Rule out last sequence tried & infeasible subsequences | Full resolution; delete old nogoods | Greedy solution