h1. Local Search
Local search is a technique that takes as input a feasible solution to a combinatorial optimization problem and tries to produce a better solution. The idea, at a high level, is to search _locally_ in a _neighborhood_ of the input point for a solution that immediately improves the objective value. If no such solution can be found, our input is _locally optimal_ and is returned. Otherwise, we recursively call our local search algorithm on the superior point found.
More formally, suppose we are given some optimization problem
{mathdisplay}
\begin{aligned}
&\min & f(\mathbf x)\\
&\text{subject to}& \mathbf x &\in S
\end{aligned}
{mathdisplay}
where {mathinline} S \subset \mathbb R^n{mathinline} and {mathinline} f \colon S \to \mathbb R{mathinline}. In local search, for every {mathinline} \mathbf x \in S{mathinline}, we must specify a set {mathinline}N_{\mathbf x} \subseteq S{mathinline}, the _neighborhood_ of {mathinline} \mathbf x{mathinline}. Then given some initial feasible solution {mathinline} \mathbf x_0{mathinline}, for {mathinline} n = 1,2,\ldots{mathinline}, we let
{mathdisplay}
\mathbf x_{n} = \mathop{\rm arg\, min}_{\mathbf x \in N_{\mathbf x_{n-1}}} \left\{ f(\mathbf x)\right\}
{mathdisplay}
To run a local search algorithm, you begin with {mathinline}\mathbf x_0{mathinline} and iterate the above process until it converges, i.e. {mathinline}\mathbf x_{n+1} = \mathbf x_n{mathinline}.
The key to making local search work is how we choose {mathinline}N_{\mathbf x}{mathinline}. We need to be able to efficiently optimize over {mathinline}N_{\mathbf x}{mathinline}, otherwise each step of our local search heuristic will be no easier than our original problem. However, if we define {mathinline}N_{\mathbf x}{mathinline} to be too small a set, then we may get stuck in a _local optimum_ that isn't very good.
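The iteration above can be sketched generically. In this illustrative snippet, the caller supplies the objective {mathinline}f{mathinline} and a function enumerating the neighborhood {mathinline}N_{\mathbf x}{mathinline} of a solution; the names are assumptions, not a fixed API:

```python
def local_search(x0, f, neighbors):
    """Repeatedly move to the best neighbor until no neighbor improves f."""
    x = x0
    while True:
        # Optimize f over the neighborhood N_x of the current solution.
        best = min(neighbors(x), key=f, default=x)
        if f(best) >= f(x):
            return x  # no neighbor improves on x, so x is locally optimal
        x = best
```

For example, minimizing {mathinline}f(x) = x^2{mathinline} over the integers with neighborhood {mathinline}N_x = \{x-1, x+1\}{mathinline}, starting from {mathinline}x_0 = 5{mathinline}, converges to the (here also global) optimum 0.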
A few warnings about local search:
* Without special problem structure, local search provides *no guarantee* on the quality of the solution produced relative to the global optimum
* Generally speaking, we do not have a good upper bound on the number of iterations until local search converges. We have the trivial bound of {mathinline}|S|{mathinline}, which may be exponentially large, and (if the objective is guaranteed to be integer) the bound {mathinline}|f(\mathbf x^*) - f(\mathbf x_0)|{mathinline}, since each iteration improves the objective by at least one; this too can often be exponential (assuming the problem has weights which are input in binary).
h3. An example
TODO
h1. Two-Opt for TSP
Two-opt is one of the simplest of many local search techniques for TSP. The idea is very simple: given a tour, remove any two non-adjacent edges, and reconnect their endpoints the other way. This reverses the segment of the tour between the two removed edges and produces a new tour.
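The swap and the surrounding improvement loop can be sketched as follows, assuming cities are labeled {mathinline}0,\ldots,n-1{mathinline} and `dist` is a symmetric distance matrix (both assumptions for illustration):

```python
def two_opt_swap(tour, i, j):
    # Remove edges (tour[i-1], tour[i]) and (tour[j], tour[j+1]),
    # then reconnect by reversing the segment tour[i..j].
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def tour_length(tour, dist):
    # tour[k-1] with k = 0 wraps to the last city, closing the cycle.
    return sum(dist[tour[k - 1]][tour[k]] for k in range(len(tour)))

def two_opt(tour, dist):
    """Apply improving 2-opt swaps until the tour is locally optimal."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                candidate = two_opt_swap(tour, i, j)
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour
```

On four cities at the corners of a unit square, starting from the crossing tour `[0, 2, 1, 3]`, this sketch uncrosses the two diagonal edges and returns a tour of length 4.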