Local Search
Local search is a technique that takes as an input a feasible solution to a combinatorial optimization problem, and tries to produce a better solution. The idea, at a high level, is to search locally in a neighborhood of the input point for a solution that immediately improves the objective value. If no such solution can be found, our input is locally optimal and is returned. Otherwise, we recursively call our local search algorithm on the superior point found.
More formally, suppose we are given some optimization problem
\[ \begin{aligned} &\min & f(\mathbf x)\\ &\text{subject to}& \mathbf x &\in S \end{aligned} \]where \( S \subset \mathbb R^n \) and \( f \colon S \to \mathbb R \) . In local search, for every \( \mathbf x \in S \) , we must specify some set \( N_{\mathbf x} \subseteq S \) to be the neighborhood of \( \mathbf x \) . Then given some initial feasible solution \( \mathbf x_0 \) , for \( n = 1,2,\ldots \) , we let
\[ \mathbf x_{n} = \mathop{\rm arg\, min}_{\mathbf x \in N_{\mathbf x_{n-1}}} \left\{ f(\mathbf x)\right\} \]To run a local search algorithm, you begin with \( \mathbf x_0 \) and iterate the above process until it converges, i.e. until \( \mathbf x_{n+1} = \mathbf x_n \) .
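The iteration above can be sketched as follows. This is a minimal illustrative example, not tied to any particular problem: the objective \( f(x) = (x-3)^2 \) and the neighborhood \( N_x = \{x-1, x+1\} \) are stand-ins chosen so the loop structure is easy to see.

```java
import java.util.List;

public class LocalSearch {

    // Illustrative objective f(x) = (x - 3)^2, minimized at x = 3.
    static double f(int x) {
        return (x - 3) * (x - 3);
    }

    // Illustrative neighborhood N_x = {x - 1, x + 1}.
    static List<Integer> neighbors(int x) {
        return List.of(x - 1, x + 1);
    }

    // Repeatedly move to the best point in the current neighborhood;
    // stop once no neighbor improves the objective (local optimality).
    static int localSearch(int x0) {
        int current = x0;
        while (true) {
            int best = current;
            for (int candidate : neighbors(current)) {
                if (f(candidate) < f(best)) {
                    best = candidate;
                }
            }
            if (best == current) {
                return current; // no neighbor improves f
            }
            current = best;
        }
    }
}
```

Here the loop replaces the recursion in the description above; the two formulations are equivalent, since each recursive call simply continues from the improved point.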
The key to making local search work is how we choose \( N_{\mathbf x} \) . We need to be able to efficiently optimize over \( N_{\mathbf x} \) , otherwise our local search heuristic will be no easier to solve than our original problem. However, if we define \( N_{\mathbf x} \) to be too small a set, then we may get stuck in a local optimum that isn't very good.
A few warnings about local search:
- Without special problem structure, local search provides no guarantee on the quality of the solution produced relative to the global optimum
- Generally speaking, we do not have a good upper bound on the number of iterations until local search converges. We have the trivial bounds of \( |S| \) , which may be exponentially large, and (if the objective is guaranteed to be integer-valued) \( |f(\mathbf x^*) - f(\mathbf x_0)| \) , which can still often be exponential (assuming the problem has weights which are encoded in binary).
An example
TODO
Two-Opt for TSP
Two-Opt is one of the simplest of many local search techniques for TSP. The idea is very simple: given a tour, choose any two non-adjacent edges, and swap their endpoints to make a new tour. For example, in the figure below, we begin with the tour given by the node ordering 1, 2, 3, 4, 5, 6, 7, 8, and we pick the edge from 2 to 3 and the edge from 5 to 6 and swap, producing the node ordering 1, 2, 5, 4, 3, 6, 7, 8.
[Figure: the tour 1, 2, 3, 4, 5, 6, 7, 8 before and after the Two-Opt swap.]
For a given tour, the neighborhood is the set of tours that can be made with such a swap (if \( |V| = n \) , there are \( \binom{n}{2} \) of them). To optimize over this neighborhood, you simply enumerate the swaps and compute the cost of each relative to the existing tour (this can be done in \( O(1) \) per swap, as you only need to add and subtract the costs of the four edges being adjusted). There are a variety of heuristics that can be used to speed up this process.
The origin of the name is as follows. For any tour, we can represent it as a vector \( \mathbf x \in \{0,1\}^{|E|} \) where for each \( e \in E \) , we have \( x_e = 1 \) if \( e \) is in the tour and \( x_e = 0 \) otherwise. If \( T \subset E \) is the set of edges in the current tour being considered by Two-Opt, and \( \mathcal T \subset \{0,1\}^{|E|} \) is the set of all feasible tours, then the set of tours considered by Two-Opt this round is exactly
\[ \begin{aligned} \sum_{e \in E \setminus T} x_e &\leq 2,\\ \mathbf x &\in \mathcal T. \end{aligned} \]More generally, one can consider swapping the endpoints of up to \( k \) edges, giving the local search algorithm \( k \) -Opt. However, with a naïve implementation, this requires \( O(n^k) \) time per call on \( n \) vertices (times the recursive depth!), which quickly becomes cost prohibitive.
Implementing Two-Opt for User Generated Solutions
We have provided you with a simple Two-Opt implementation, TwoOptSolver. As a minor optimization, it caches every tour whose neighborhood it has checked, so it can abort the search as soon as it determines that it will end up in a local optimum that has already been visited. It has a simple interface:
| Method Name | Return Type | Arguments | Description |
|---|---|---|---|
| searchNeighborhoodEdgeList | Set<E> | List<E> startTour | Applies Two-Opt recursively to startTour; returns a locally optimal tour, or null if an already visited tour is hit. |
| searchNeighborhoodEdgeSet | Set<E> | Set<E> startTour | Same as above, but must pay a one-time \( O(|V|^2) \) cost to convert into the List<E> format. |