
TSPLIB performance

Run your new code (through main, with both the lazy and userCut options) on the same TSPLIB problems as before and compare the performance. My running times were:

Problem Name         eil51   bier127    ch130    ch150      d198
Without User Cuts     0.11      0.77     1.98     5.09     17.57
With User Cuts        0.40      3.76    12.52    83.62    103.93

We see that the running time increased on every example! What is going on here? Let's add a few well-placed print statements to our UserCutCallback to try to understand why the performance is worse.

System.out.println("hello world!");
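
To actually see where the time goes, we might also bracket the expensive separation step with a timing print, along these lines (a sketch only, not the tutorial's actual code; the elided part stands for whatever your callback does to find violated cuts):

// Inside UserCutCallback: wrap the expensive separation work with
// wall-clock timing so the output shows where the time is being spent.
long separationStart = System.currentTimeMillis();
// ... run the max flow min cut separation here ...
System.out.println("separation took "
        + (System.currentTimeMillis() - separationStart) + " ms");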

Let's run the code again on d198 and take a closer look at the output.

output...

Running the code both with and without user cuts and judging from the print statements (which are not entirely reliable), it seems that the bottleneck is actually solving the \( |V|-1 \) max flow min cut problems in our separation routine. While we know from our previous discussion that there are better ways to compute a global minimum cut, we will instead explore some heuristics that vastly improve performance without requiring us to write much code.
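
To see why this is so costly, recall the shape of the exact separation: fix any source node and solve a max flow min cut problem to each of the remaining \( |V|-1 \) nodes, keeping the lightest cut found. The sketch below is only an illustration; every name in it (MinCutSolver, minCutWeight, and so on) is a hypothetical stand-in for whatever max flow code you already have, not part of the tutorial.

import java.util.Set;

public class ExactCutsetSeparation {

    /** Stand-in interface for an existing max flow min cut routine on the support graph. */
    public interface MinCutSolver {
        double minCutWeight(int s, int t);
        Set<Integer> minCutSourceSide(int s, int t);
    }

    // Fix one source and solve |V|-1 max flow problems, one per sink.
    // For d198 that is 197 max flow computations every time the callback fires.
    public static Set<Integer> lightestCut(int numNodes, MinCutSolver solver) {
        double bestWeight = Double.POSITIVE_INFINITY;
        int bestSink = -1;
        int source = 0; // any fixed node will do
        for (int sink = 1; sink < numNodes; sink++) {
            double weight = solver.minCutWeight(source, sink); // one max flow per sink
            if (weight < bestWeight) {
                bestWeight = weight;
                bestSink = sink;
            }
        }
        // A best weight below 2 (minus a tolerance) means the corresponding
        // cutset constraint is violated and can be added as a user cut.
        return bestSink < 0 ? null : solver.minCutSourceSide(source, bestSink);
    }
}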

Look at the first column of CPLEX's output, which gives the number of nodes generated thus far. When this column is zero, that means we are still working on the root node. Here are a few observations about our cuts:

  • More often than not, when there is a violated cut, the most violated cut has weight zero.
  • This is particularly true towards the end of the optimization, when we are doing branch and bound (perhaps the extreme action of forcing a variable to take the value one or zero causes the solution to form loops).
  • The value of the root LP is significantly larger (which is better) with the cutset constraints, as seen below.
  • The total number of nodes created using the cutset constraints is much smaller, as seen below.

These final two bullet points indicate that we do stand to gain a lot from the cutset formulation if we can get the running time of our separation routine under control. We now give a few heuristics to improve performance.

Value of the root LP:

Problem Name         eil51   bier127    ch130    ch150    d198
Without User Cuts       99        99       99       99      99
With User Cuts        99.7      99.5     99.7     99.4    99.7

Total number of nodes created:

Problem Name         eil51   bier127    ch130    ch150    d198
Without User Cuts       50        50       50       50      50
With User Cuts           2        19       44      429     138

Check Connected Components First
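
As noted above, the most violated cut usually has weight zero, which means the support graph (the graph containing only the edges with positive LP value) is disconnected. When that happens we do not need any max flow computations at all: every connected component S already violates its cutset constraint, since no LP weight leaves S. The heuristic is therefore to compute connected components first and only fall back to the \( |V|-1 \) max flow min cut problems when the support graph turns out to be connected. A minimal sketch of the component computation is below (a hypothetical helper, not the tutorial's code; it assumes the edge LP values are stored in a dense matrix):

import java.util.*;

public class SupportGraphComponents {

    // Connected components of the support graph: only edges whose LP value
    // exceeds a small tolerance are treated as present.
    public static List<Set<Integer>> connectedComponents(int numNodes,
                                                          double[][] edgeValue,
                                                          double tolerance) {
        List<Set<Integer>> components = new ArrayList<>();
        boolean[] visited = new boolean[numNodes];
        for (int start = 0; start < numNodes; start++) {
            if (visited[start]) {
                continue;
            }
            // A depth first search from each unvisited node collects one component.
            Set<Integer> component = new HashSet<>();
            Deque<Integer> stack = new ArrayDeque<>();
            stack.push(start);
            visited[start] = true;
            while (!stack.isEmpty()) {
                int u = stack.pop();
                component.add(u);
                for (int v = 0; v < numNodes; v++) {
                    if (!visited[v] && edgeValue[u][v] > tolerance) {
                        visited[v] = true;
                        stack.push(v);
                    }
                }
            }
            components.add(component);
        }
        return components;
    }
}

If this returns more than one component, we can immediately add the cutset constraint for each component and return from the callback; only when it returns a single component do we run the expensive exact separation.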

Running times:

Problem Name                  eil51   bier127    ch130    ch150     d198
Without User Cuts              0.11      0.77     1.98     5.09    17.57
With User Cuts                    5         6        7        8       20
Connected Components First        5         6        7        8       20

Use Only Connected Components After Root Node
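
Since most of the violated cuts found after the root node have weight zero anyway, a further simplification is to run the exact max flow min cut separation only while we are still at the root node and, once branching has started, rely on the connected component check alone. A sketch of what this might look like is below; it assumes the number of processed branch and bound nodes can be read with getNnodes() from CPLEX's MIP callback methods, and the two branches are left as comments because they just call the routines discussed above. This is not the tutorial's actual class.

import ilog.concert.IloException;
import ilog.cplex.IloCplex;

// Sketch only: switch separation strategy depending on whether we are
// still processing the root node.
public class CutsetUserCutCallback extends IloCplex.UserCutCallback {

    @Override
    public void main() throws IloException {
        boolean atRoot = (getNnodes() == 0); // zero processed nodes == still at the root
        if (atRoot) {
            // Root node: connected component check first, then the full
            // |V|-1 max flow min cut separation if the support graph is connected.
        } else {
            // After the root: only the connected component check; if the support
            // graph is connected, add nothing and return.
        }
    }
}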

Running times:

Problem Name                  eil51   bier127    ch130    ch150     d198
Without User Cuts              0.11      0.77     1.98     5.09    17.57
With User Cuts                    5         6        7        8       20
Connected Components First        5         6        7        8       20
Stop After Root                   5         6        7        8       20

Use a Backoff Function After the Root Node
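
Skipping the exact separation entirely after the root node is a blunt instrument; a gentler option is to keep the cheap connected component check on every callback invocation but attempt the expensive exact separation only on a thinning schedule, so it runs less and less often as the search goes on. One possible backoff rule is sketched below (hypothetical names and schedule; any slowly growing gap between attempts serves the same purpose):

// Sketch of a backoff schedule: attempt the expensive exact separation only
// when the number of callback invocations since leaving the root node is a
// power of two, so attempts become geometrically rarer over time.
public class SeparationBackoff {

    private long callsSinceRoot = 0;

    /** Returns true if this callback invocation should run the exact separation. */
    public boolean shouldAttemptExactSeparation() {
        callsSinceRoot++;
        // True exactly when callsSinceRoot is 1, 2, 4, 8, 16, ...
        return (callsSinceRoot & (callsSinceRoot - 1)) == 0;
    }
}

The callback would consult shouldAttemptExactSeparation() only after the connected component check has failed to find a violated cut.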

Running times:

Problem Name                  eil51   bier127    ch130    ch150     d198
Without User Cuts              0.11      0.77     1.98     5.09    17.57
With User Cuts                    5         6        7        8       20
Connected Components First        5         6        7        8       20
Stop After Root                   5         6        7        8       20
Backoff after root                5         6        7        8       20

(Optional) Check Random s-t Cuts
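
Finally, when we do reach for max flow, we do not have to solve all \( |V|-1 \) problems: we can instead pick a handful of random s-t pairs and solve only those, adding a cut whenever one of them has weight below 2 (minus a tolerance). This gives up the guarantee of finding a violated cutset constraint whenever one exists, which is acceptable for a user cut. A sketch is below (hypothetical names throughout; the MinCutSolver interface stands in for whatever max flow code you already have):

import java.util.Random;

public class RandomStSeparation {

    /** Stand-in for an existing max flow min cut routine on the support graph. */
    public interface MinCutSolver {
        double minCutWeight(int s, int t);
    }

    private final Random rng = new Random();

    // Try a fixed number of random s-t pairs; report whether any of them
    // exposed a cut of weight below 2, i.e. a violated cutset constraint.
    public boolean foundViolatedCut(int numNodes, int numTrials,
                                    MinCutSolver solver, double tolerance) {
        for (int trial = 0; trial < numTrials; trial++) {
            int s = rng.nextInt(numNodes);
            int t = rng.nextInt(numNodes);
            if (s == t) {
                continue; // degenerate pair, skip it
            }
            if (solver.minCutWeight(s, t) < 2.0 - tolerance) {
                // The caller would build and add the corresponding user cut here.
                return true;
            }
        }
        return false;
    }
}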

Running times:

Problem Name                  eil51   bier127    ch130    ch150     d198
Without User Cuts              0.11      0.77     1.98     5.09    17.57
With User Cuts                    5         6        7        8       20
Connected Components First        5         6        7        8       20
Stop After Root                   5         6        7        8       20
Backoff after root                5         6        7        8       20
Randomized s-t                    5         6        7        8       20
