GENI Ants For The Traveling Salesman Problem
Abstract. In this paper, the probabilistic nearest neighbor heuristic, which is at the core of classical ant
colony systems for the Traveling Salesman Problem, is replaced by an alternative insertion procedure known
as the GENI heuristic. The benefits provided by GENI-based ants are empirically demonstrated on a set
of benchmark problems, through a comparison with the classical ant colony system and an iterated GENI
heuristic.
1. Introduction
Ant colony optimization (ACO) represents a relatively novel approach for solving com-
binatorial optimization problems (Dorigo and Di Caro, 1999). It is inspired by the way real ants find short paths between their nest and food sources by laying down pheromone on the ground. Ant-based algorithms were first tested on the classical Traveling Salesman Problem (TSP) (Dorigo, 1992; Dorigo, Maniezzo, and Colorni, 1996), al-
though other combinatorial problems have since been addressed (Bullnheimer, Hartl,
and Strauss, 1999; Dorigo, Di Caro, and Gambardella, 1999; Gambardella and Dorigo,
2000; Gambardella, Taillard, and Dorigo, 1999; Stützle and Hoos, 2000).
The TSP is a canonical problem which can be easily stated: given a complete graph
G = (V , E), where V is the set of n vertices and E is the set of edges, and a traveling
distance between each pair of vertices, the problem is to find a shortest tour that visits
each vertex exactly once (Lawler et al., 1985). In this paper, we focus on Euclidean
TSPs, where the distance between vertices is defined through the Euclidean metric. Ant-
based algorithms for the TSP construct solutions by iteratively adding new vertices to the
current partial tour. The choice of the next vertex to visit is based on a greedy heuristic,
typically the nearest neighbor (NN) heuristic, which considers both the distance to the
next vertex and the amount of pheromone deposited on the edge leading to that vertex.
The latter represents the “memory” of the system as it indicates how often that edge
was used in the past to find good solutions. In this paper, we propose the more complex GENI (GENeralized Insertion) heuristic (Gendreau, Hertz, and Laporte, 1992) as an alternative for incrementally extending the partial tour. Our goal here is not to obtain the
best possible results on benchmark problems, but rather to empirically demonstrate that
sophisticated constructive ants positively impact solution quality, when compared with
classical nearest neighbor ants. We also demonstrate that this remains true when the
tours are further improved with a 2-opt local search heuristic.
The rest of the paper is organized as follows. Section 2 first introduces the Ant
Colony System (ACS), a recently developed ant-based algorithm. Sections 3 and 4 then
describe the GENI insertion heuristic for the TSP and its integration within the ACS
framework. Computational results on Euclidean TSPs are reported in section 5. Finally,
concluding remarks are made.
2. The ant colony system

When ants look for food sources, they initially look around in a random manner. When
they find something, they come back to the nest, laying down a chemical compound on
the ground, known as pheromone, on their way back. The pheromone intensity is related
to the quality of the food source, namely the quantity of food and the distance from the
nest. The next ants will thus search in a less random fashion, as they will be attracted by
the pheromone trails. More ants will be attracted to paths with more pheromone, which
in turn leads to more and more pheromone being laid down on these paths. Ultimately,
all ants will be attracted to the most interesting paths. Ant colony optimization is motivated
by this way of communicating through pheromone trails to find good paths.
In the following, the ACS framework for solving the TSP (Dorigo and Gam-
bardella, 1997) is presented (readers interested in other ant-based metaphors are re-
ferred to (Bonabeau, Dorigo, and Theraulaz, 2000; Dorigo, Bonabeau, and Theraulaz,
2000; Dorigo, Di Caro, and Gambardella, 1999; Dorigo, Maniezzo, and Colorni, 1996)).
Broadly speaking, ACS works as follows: m ants are positioned on the n vertices, ac-
cording to some a priori assignment procedure (which may be a random assignment). At
each iteration of the algorithm, each ant builds a tour by repeatedly selecting a new ver-
tex to visit through the application of a probabilistic nearest neighbor heuristic. While
constructing a tour, the ant modifies the pheromone level on the edges it follows by
applying a local update rule. When all ants have completed their respective tours, the
pheromone level on each edge is modified again through a global update rule which fa-
vors the edges associated with the best tour found from the start (i.e., global-best update,
as opposed to iteration-best update (Dorigo and Gambardella, 1997)).
During the solution construction process, the ants are guided both by the amount
of pheromone on the edges and by the length of these edges. In particular, a short edge
with a large amount of pheromone is highly desirable, since it was part of many good
solutions and it only slightly increases the total length of the tour.
The ACS procedure is summarized below using a pseudo-code notation. The main
steps of this algorithm are then explained.
1. randomly assign each ant to a starting vertex;
2. put some a priori level of pheromone on every edge;
3. for I iterations do:
3.1 for each ant (sequentially), construct a tour and update the pheromone locally;
3.2 update the pheromone globally;
4. output the best solution found.
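To make the control flow concrete, the following Python sketch mirrors the four steps above. The helper callables (construct_tour, local_update, global_update) stand in for the state transition and pheromone update rules detailed next; all names and default parameter values are illustrative and are not taken from the authors' implementation (which was written in C).

```python
import random

def tour_length(tour, dist):
    """Length of a closed tour given a distance matrix."""
    n = len(tour)
    return sum(dist[tour[a]][tour[(a + 1) % n]] for a in range(n))

def ant_colony_system(dist, construct_tour, local_update, global_update,
                      n_ants=10, n_iterations=100, tau0=0.0):
    """Skeleton of the ACS loop; the three callables implement steps 3.1 and 3.2."""
    n = len(dist)
    tau = [[tau0] * n for _ in range(n)]           # step 2: a priori pheromone
    best_tour, best_len = None, float("inf")
    for _ in range(n_iterations):                  # step 3
        for _ant in range(n_ants):                 # step 3.1 (each ant in turn)
            start = random.randrange(n)            # step 1: random starting vertex
            tour = construct_tour(start, dist, tau)
            local_update(tour, tau)                # local pheromone update
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
        global_update(best_tour, best_len, tau)    # step 3.2: reinforce best tour
    return best_tour, best_len                     # step 4
```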
In step 3.1, the ant constructs a tour through the repeated addition of new vertices. As-
suming that ant r is currently located at vertex i, a state transition rule is used to select
the next vertex j to move to, namely:
$$
j =
\begin{cases}
\arg\max_{k \in J_r}\{\tau_{ik}\,\eta_{ik}^{\beta}\}, & \text{if } q \le q_0,\\
J, & \text{otherwise,}
\end{cases}
\tag{1}
$$

where $J$ is a random variable with the probability distribution

$$
p(J = j) =
\begin{cases}
\dfrac{\tau_{ij}\,\eta_{ij}^{\beta}}{\sum_{k \in J_r} \tau_{ik}\,\eta_{ik}^{\beta}}, & \text{if } j \in J_r,\\[4pt]
0, & \text{otherwise,}
\end{cases}
\tag{2}
$$
and where ηij is the inverse of the length cij of edge (i, j ), τij is the amount of
pheromone on edge (i, j ), β is a weighting parameter that determines the relative impor-
tance of length versus pheromone, q is a random number uniformly distributed between
0 and 1, q0 is a threshold parameter and Jr is the set of vertices not yet visited by ant r.
From these equations, we can see that edge (i, j) becomes more attractive as its length
decreases and its pheromone level increases. Parameter β > 0 is used to put more or
less emphasis on length versus pheromone. When a randomly generated value q is less
than or equal to the threshold q0 , the best edge according to equation (1) is chosen to
reach the next vertex. Otherwise, equation (2) is used to probabilistically select the next
vertex, with a bias towards the best edges.
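As an illustration, a minimal Python sketch of this state transition rule is given below. The default parameter values (β = 2, q0 = 0.9) are only indicative; dist and tau are assumed to be n × n matrices of edge lengths and pheromone levels, and unvisited is the set J_r.

```python
import random

def next_vertex(i, unvisited, dist, tau, beta=2.0, q0=0.9):
    """Choose the next vertex for an ant at vertex i (equations (1)-(2))."""
    def score(k):
        # Attractiveness of edge (i, k): pheromone times (1/length)^beta.
        return tau[i][k] * (1.0 / dist[i][k]) ** beta

    if random.random() <= q0:
        # Exploitation: take the best edge (equation (1)).
        return max(unvisited, key=score)
    # Biased exploration: sample proportionally to the scores (equation (2)).
    candidates = list(unvisited)
    weights = [score(k) for k in candidates]
    if sum(weights) == 0.0:               # no pheromone deposited anywhere yet
        return random.choice(candidates)
    return random.choices(candidates, weights=weights, k=1)[0]
```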
When ant r constructs a tour, the pheromone level on each edge (i, j ) visited by the ant
is updated as follows:
$$
\tau_{ij} \leftarrow (1 - \rho)\,\tau_{ij} + \rho\,\tau_0 \tag{3}
$$
where 0 < ρ < 1 is a parameter and τ0 is the initial pheromone level, which is set
to 0 in our computational experiments (Dorigo and Gambardella, 1997). When ρ is
close to one, the amount of pheromone is substantially reduced on the edges of the tour
constructed by the ant (assuming that τ0 is small). Consequently, the other ants will be
less likely to follow the same tour, and the search will be more diversified. This could be
beneficial, although care must be taken not to alter the pheromone trails too much, so as
to avoid a purely random search pattern.
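A corresponding sketch of the local update rule, assuming the tour is a list of vertex indices (closed implicitly) and tau a symmetric pheromone matrix:

```python
def local_update(tour, tau, rho=0.5, tau0=0.0):
    """Apply equation (3) to every edge of the tour just built by an ant."""
    n = len(tour)
    for a in range(n):
        i, j = tour[a], tour[(a + 1) % n]          # edge (i, j); tour is closed
        tau[i][j] = (1.0 - rho) * tau[i][j] + rho * tau0
        tau[j][i] = tau[i][j]                      # symmetric TSP: mirror the entry
```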
The p-neighborhood of a vertex u, denoted N_p(u), is the set of the p vertices on the tour
that are closest to u. Then, for a given choice of parameter p, GENI only looks for insertions
such that i, j ∈ N_p(u), k ∈ N_p(i+1) and l ∈ N_p(j+1). In addition, the insertion of u between
two consecutive vertices i and i+1 is examined if i ∈ N_p(u). In practice, p is set to a relatively
small value, usually between 4 and 7.
The GENI algorithm may now be summarized as follows:
1. create an initial tour by randomly selecting a subset of three vertices;
2. initialize the p-neighborhood of each vertex;
3. randomly select a vertex u not yet included in the tour;
4. insert it at least cost, by considering all insertions of type I and II;
5. update the p-neighborhood of each vertex (given that u is now in the tour);
6. if all vertices are in the tour, STOP. Otherwise, go back to 3.
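This loop can be sketched in Python as follows. Since the Type I and Type II insertions themselves are defined in (Gendreau, Hertz, and Laporte, 1992) and are not reproduced here, step 4 is represented by a placeholder callable best_insertion; for simplicity, the p-neighborhoods are recomputed at each step rather than updated incrementally.

```python
import random

def p_neighbourhood(u, tour, dist, p=7):
    """N_p(u): the p vertices currently on the tour that are closest to u."""
    return sorted(tour, key=lambda v: dist[u][v])[:p]

def geni(dist, best_insertion, p=7):
    """Outer loop of the GENI construction (steps 1-6 above).

    best_insertion(u, tour, hoods, dist) is a placeholder for step 4: it must
    return the tour obtained by inserting u at least cost over all Type I and
    Type II insertions restricted to the p-neighbourhoods in hoods."""
    vertices = list(range(len(dist)))
    random.shuffle(vertices)
    tour = vertices[:3]                              # step 1: random 3-vertex tour
    remaining = vertices[3:]
    while remaining:                                 # step 6: loop until complete
        u = remaining.pop()                          # step 3: random unrouted vertex
        # Steps 2 and 5: (re)compute the p-neighbourhoods used by the insertion.
        hoods = {v: p_neighbourhood(v, tour, dist, p) for v in tour + [u]}
        tour = best_insertion(u, tour, hoods, dist)  # step 4
    return tour
```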
4. The GACS
This section describes how the GENI heuristic presented in section 3 has been integrated
within the ACS framework to produce the GACS (GENI Ant Colony System). This
integration leads to two important modifications to GENI which impact the evaluation
and selection of the next move to be performed: (1) the cost of an edge now depends
on its length and the amount of pheromone deposited on it and (2) a probabilistic state
transition rule is used to choose the next insertion to be performed (cf. step 4 of the GENI
algorithm). Since the local and global update of pheromone trails remain unchanged
with regard to the original ACS, we focus only on these two issues in the following.
In the ACS framework, the amount of pheromone deposited on the edges is the mechanism
by which the ants communicate with one another to share information about good solutions.
These pheromone trails can thus be regarded as the memory of the system. In the clas-
sical ACS, the crucial point is the selection of the next vertex to be visited, given that
the insertion always takes place at the end of the current path. The pheromone trails
are thus used to modify the likelihood of visiting some (yet unvisited) vertex j after the
current vertex i. In the case of the GENI insertion heuristic, this is quite different. The
next vertex to be visited is selected at random, but the insertion place is carefully cho-
sen. Thus, pheromone trails are used to modify the edge costs, so that edges with more
pheromone look less costly. More precisely, the formula for adjusting the length of edge
(i, j ) is:
$$
\bar{c}_{ij} = \frac{c_{ij}}{1 + \gamma\,\tau^{R}_{ij}} \tag{6}
$$

$$
\tau^{R}_{ij} = \frac{\tau_{ij}}{\max_{(k,l) \in E} \tau_{kl}}. \tag{7}
$$
In equation (7), the pheromone level on each edge is first normalized between
0 and 1. If the denominator is equal to 0 (i.e., there is no pheromone on the edges of
the graph), τijR is set to 0 for every edge. In equation (6), the relative importance of
pheromone is adjusted through parameter γ , which is similar to parameter β in the clas-
sical ACS. This equation provides well defined bounds on the range of values associated
with the modified edge costs. For example, with γ = 1, the modified edge costs cannot
be less than half the original costs. And, clearly, the modified costs never exceed the
original costs. It is worth noting that the edge with the largest amount of pheromone does
not need to be identified anew after every local pheromone update, since the maximum in
equation (7) remains the same as long as the maximal edge is not part of the tour just constructed.
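In code, the adjustment of equations (6)-(7) amounts to a one-line rescaling; the sketch below assumes the maximum pheromone level tau_max is maintained separately, as suggested above.

```python
def modified_cost(i, j, dist, tau, tau_max, gamma=0.5):
    """Pheromone-adjusted length of edge (i, j) (equations (6)-(7))."""
    tau_rel = tau[i][j] / tau_max if tau_max > 0.0 else 0.0   # equation (7)
    return dist[i][j] / (1.0 + gamma * tau_rel)               # equation (6)
```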
Consider the state transition rule of the classical ACS where some ant r, currently located
at vertex i, must choose which vertex to visit next among those that are yet unvisited. In
this case, vertex j is selected according to a probabilistic state transition rule, where the
selection probability of vertex j is proportional to the ratio of τij to cij^β. Then, vertex j is
inserted at the end of the sequence. Clearly, the solution produced would always be the
same, for a given starting vertex, without the probabilistic bias. The latter, by modifying
the selection order of the vertices, allows the method to generate different solutions.
Since the insertion mechanism at the core of GENI is more complex, it is less
sensitive to the selection order of the vertices. As opposed to the classical ACS, where
a different selection order necessarily leads to a different solution, GENI may well end
up with two identical solutions, even for two different selection orders. An additional
mechanism is thus provided to diversify the search. This is done by using a probabilistic
state transition rule similar to the one found in the ACS, to determine which insertion
will be performed on the randomly chosen vertex (instead of always performing the best
insertion). The number of transitions from the current solution thus corresponds to the
number of insertions of type I and II that may be applied to the randomly chosen vertex.
This number is typically much larger than in the case of the ACS (where the number of
transitions is equal to the number of vertices that have not yet been visited).
To associate a selection probability with each GENI insertion, a rank-based ap-
proach is used in place of the pure proportional scheme (Bullnheimer, Hartl, and Strauss,
1999; Whitley, 1989). This approach does not put any emphasis on the magnitude of the
difference between two competing insertions to derive the probability distribution. Only
their relative order is important. With this approach, two insertions of almost equal value
will be better discriminated, while a dominant insertion (i.e., an insertion which is much
better than the others) will not be assigned a selection probability too close to 1, thus
avoiding an overly deterministic search pattern.
The rank-based scheme works as follows for a given randomly selected vertex.
First, the S best GENI insertions, over all insertions of types I and II, are considered
and ranked from the one that leads to the smallest increase to the tour length (rank
S) to the one that leads to the largest increase (rank 1). Preliminary tests have shown
that parameter S is important and should be set to a small value, like S = 5 in our
experiments, to avoid the selection of a bad insertion that could be difficult to “repair”
in the following iterations. The state transition rule then consists in selecting a rank t
between 1 and S, and applying the insertion with the corresponding rank, that is:
$$
t =
\begin{cases}
S, & \text{if } q \le q_0,\\
T, & \text{otherwise,}
\end{cases}
\tag{8}
$$
where parameters q and q0 are the same as those found in equation (1) and T is a random
variable with the probability distribution
$$
p(T = t) = \frac{t}{\sum_{t'=1}^{S} t'} = \frac{2t}{S(S+1)}, \qquad t = 1, \ldots, S. \tag{9}
$$
Clearly, the insertion of rank t = S, which leads to the smallest increase to the
tour length, has the highest selection probability, according to equation (9). Conversely,
the insertion of rank t = 1, which leads to the largest increase, has the lowest selection
probability. The probabilities for the other insertions are linearly distributed between
these two values, with a constant gap between two insertions with consecutive ranks.
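A sketch of this rank-based selection is shown below; candidates is assumed to be the list of feasible Type I/II insertions for the chosen vertex and cost(x) the increase in tour length caused by insertion x (both are placeholders, as the paper does not fix these data structures).

```python
import random

def select_insertion(candidates, cost, q0=0.98, S=5):
    """Rank-based choice of the GENI insertion to apply (equations (8)-(9))."""
    # Keep the S cheapest insertions and rank them: rank S = smallest increase,
    # rank 1 = largest increase among those retained.
    best = sorted(candidates, key=cost)[:S]
    best.reverse()                        # best[t - 1] now has rank t
    S = len(best)                         # fewer than S candidates may exist
    if random.random() <= q0:
        t = S                             # equation (8): cheapest insertion
    else:
        ranks = list(range(1, S + 1))
        # Equation (9): p(T = t) = 2t / (S (S + 1)), linear in the rank.
        t = random.choices(ranks, weights=ranks, k=1)[0]
    return best[t - 1]
```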
5. Computational results
The GACS was implemented in C and the tests were performed on a 400 MHz Ultra-
Sparc2 processor. We first examine the impact of different parameter settings on solu-
tion quality. Then, a comparison with the multi-start GENI heuristic, namely the GACS
without pheromone trails, and the ACS is presented on Euclidean problems found in the
TSPLIB library (Reinelt, 1991).
Table 1
Impact of parameter γ on solution quality.
Problem      γ = 0      γ = 0.25   γ = 0.5    γ = 0.75   γ = 1      γ = 5
kroA100 21282.0 21282.0 21282.0 21282.0 21282.0 21282.0
pr299 48442.3 48428.0 48429.2 48437.0 48422.7 48487.1
u574 37684.5 37715.7 37715.7 37682.8 37748.9 37828.6
alea100 775.8 776.4 775.7 775.7 776.0 777.0
alea300 3936.3 3936.1 3934.1 3937.2 3941.0 3950.3
alea500 8426.4 8423.6 8422.8 8428.9 8434.9 8455.0
To study the impact of different parameter values, six TSP instances have been used.
Three instances, alea100, alea300 and alea500 of size 100, 300 and 500, respectively,
were randomly generated. The three other instances, kroA100, pr299 and u574 of size
100, 299 and 574, respectively, were taken from the TSPLIB library. Each result is the
average of 10 different runs, based on n iterations, where n is the problem size. We used
m = 10 ants in all experiments (as in Dorigo and Gambardella, 1997). Also, the initial
level of pheromone τ0 was set to 0. Consequently, the adjusted length corresponds to the
original length at the start of the algorithm (see equation (6)).
When the value of one parameter was tested, the others were set at their default
value, namely: ρ = 0.5, α = 0.5, γ = 0.5 and q0 = 0.95. The value for the neighbor-
hood parameter in GENI was fixed at p = 7.
Parameter γ
Table 1 shows the impact of parameter γ on solution quality. Through this parameter, it
is possible to adjust the relative importance of pheromone trails when evaluating the cost
of an edge. These results indicate that incorporating pheromone into the edge cost has
generally a positive impact on solution quality. The values leading to the best solutions
vary widely between 0 and 1, but a larger number of best results are obtained with values
around γ = 0.5.
Parameter α
Table 2 shows the impact of parameter α on solution quality. This parameter is used
in the global update rule, when the pheromone is added on the edges of the best tour.
For α = 1, equation (4) was applied rather than equation (5), to avoid a division by 0.
Better solutions are found with smaller values of α between 0.1 and 0.5. This makes
sense, as larger values tend to strongly bias the search toward elite solutions and to lead
to premature convergence.
Parameter ρ
Table 3 shows the impact of parameter ρ on solution quality. This parameter is used in
the local update rule, when some pheromone is removed from the edges of the current
tour to diversify the search. Clearly, the best solutions are obtained with larger values
of ρ, such as ρ = 0.75, which indicates that a significant level of diversification is desirable.
Table 2
Impact of parameter α on solution quality.
Problem      α = 0      α = 0.1    α = 0.25   α = 0.5    α = 0.75   α = 0.9    α = 1
kroA100 21282.0 21282.0 21282.0 21282.0 21282.0 21282.0 21326.3
pr299 48610.2 48457.6 48455.0 48429.2 48449.5 48594.7 49070.0
u574 37984.5 37695.6 37727.9 37715.7 37716.9 37993.6 38166.1
alea100 777.0 775.8 775.8 775.8 775.8 778.2 789.1
alea300 3957.0 3939.7 3940.2 3935.3 3937.2 3955.4 3996.0
alea500 8483.7 8428.1 8429.7 8422.8 8429.7 8491.3 8573.7
Table 3
Impact of parameter ρ on solution quality.
Problem      ρ = 0      ρ = 0.1    ρ = 0.25   ρ = 0.5    ρ = 0.75   ρ = 0.9    ρ = 1
kroA100 21282.0 21282.0 21282.0 21282.0 21282.0 21282.0 21282.0
pr299 48500.4 48446.6 48487.1 48429.2 48441.0 48642.8 48459.0
u574 37938.1 37735.4 37726.6 37715.7 37715.7 38018.6 37738.7
alea100 780.8 776.9 775.8 775.8 775.6 777.4 775.6
alea300 3946.6 3940.2 3937.2 3937.2 3935.7 3956.9 3936.0
alea500 8471.9 8442.5 8431.2 8422.8 8416.5 8485.7 8430.1
Table 4
Impact of parameter q0 on solution quality.
Problem      q0 = 0.9   q0 = 0.95   q0 = 0.98   q0 = 1
kroA100 21283.0 21282.0 21282.0 21282.0
pr299 48543.6 48429.2 48366.0 48384.7
u574 37773.3 37715.7 37620.7 37616.8
alea100 775.8 775.8 775.4 776.4
alea300 3942.3 3937.2 3930.6 3930.6
alea500 8443.4 8422.8 8410.7 8420.9
Parameter q0
Table 4 shows the impact of parameter q0 on solution quality. This parameter is used
in equation (8) when the GENI insertion, to be applied to the currently selected vertex,
is chosen. When q0 = 1, the best GENI insertion is always chosen. When q0 < 1,
some perturbation is introduced, using a probability distribution biased toward the best
insertions. The numbers in the table indicate that values closer to 1 are better, although
a slight amount of perturbation is desirable.
Table 5
Results obtained with the best combination
of parameter values.
Problem Solution
kroA100 21282.0
pr299 48366.0
u574 37616.4
alea100 775.4
alea300 3934.0
alea500 8407.8
Given the results shown in the previous tables, the values γ = 0.5, α = 0.25,
ρ = 0.75 and q0 = 0.98 were chosen. The results obtained with this combination
of parameters on the six test problems are shown in table 5. Given the quality of the
solutions, it appears that no significant undesirable cross-effects are observed with this
combination of values. The same parameter sensitivity analysis was redone by setting
this new combination as the default one. We do not show the results here, because no
improvement was obtained after this second iteration.
The ants communicate with one another through pheromone trails, which contain in-
formation about the past history of the search. By removing those trails, we obtain a
multi-start GENI heuristic (mGENI), where each run is performed independently of the
previous ones. This section provides a comparison of GACS with this algorithm and the
classical ACS, using all Euclidean problems in the TSPLIB library (i.e., EUC_2D type)
with fewer than 1,000 vertices, except the three TSPLIB problems used in the parameter
sensitivity analysis. In the case of ACS, the parameter values reported in Dorigo and
Gambardella (1997) were used.
The first column in table 6 corresponds to the problem identifier. The column CPU
is the total computation time in seconds allocated to the three methods to solve each
problem (which is the time needed to run 5n iterations of GACS). Under the GACS,
mGENI and ACS columns, we show the gap in percentage with the optimum for the
average and best run, over three different runs, as well as the average CPU time in
seconds to reach the best solution (using the format: average run/best run/CPU).
These results show the superiority of GACS over mGENI and ACS. In fact,
a pairwise comparison of the three methods, based on the Wilcoxon signed rank test,
indicates that the hypothesis of the superiority of GACS over mGENI and ACS can be
accepted at a level of confidence well beyond 99%, for both the average and best runs
(note that GACS is either as good or better than its competitors on almost every problem
instance). The GENI-based heuristics also exhibit more stability than ACS, as indicated
by the gap between the best and average runs, and converge faster to their best solu-
tions.
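For reference, such a pairwise comparison can be run with a standard statistics library; the sketch below (assuming a recent SciPy) uses the average gaps of only the first four instances of table 6, purely to illustrate the call, whereas the test reported above is based on all 45 instances.

```python
from scipy.stats import wilcoxon

# Average gaps (%) for a280, berlin52, bier127 and ch130, taken from table 6.
gacs_gaps = [0.81, 0.00, 0.12, 0.01]
acs_gaps  = [1.14, 0.40, 1.07, 0.88]

# One-sided Wilcoxon signed rank test of "GACS gaps are smaller than ACS gaps".
# With only four pairs the test has little power; this is just the mechanics.
stat, p_value = wilcoxon(gacs_gaps, acs_gaps, alternative="less")
print(f"W = {stat}, p = {p_value:.4f}")
```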
Table 6
GACS, mGENI and ACS.
Problem CPU GACS mGENI ACS
a280 4095s 0.81%/0.43%/3509s 1.10%/0.81%/2691s 1.14%/0.43%/602s
berlin52 25s 0.00%/0.00%/1s 0.00%/0.00%/1s 0.40%/0.22%/1s
bier127 819s 0.12%/0.00%/427s 0.16%/0.09%/457s 1.07%/0.25%/521s
ch130 855s 0.01%/0.00%/185s 0.00%/0.00%/336s 0.88%/0.67%/230s
ch150 1157s 0.37%/0.32%/122s 0.36%/0.32%/392s 0.43%/0.32%/130s
d198 1937s 0.15%/0.13%/1035s 0.20%/0.13%/744s 1.32%/0.99%/1880s
d493 13822s 1.39%/1.18%/9549s 1.52%/1.44%/8901s 3.26%/2.73%/12181s
d657 25599s 1.97%/1.86%/7280s 1.81%/1.72%/10250s 2.08%/1.46%/20521s
eil51 55s 0.00%/0.00%/8s 0.00%/0.00%/7s 0.17%/0.17%/22s
eil76 275s 0.00%/0.00%/80s 0.06%/0.00%/81s 0.06%/0.00%/35s
eil101 496s 0.05%/0.00%/222s 0.11%/0.00%/172s 0.80%/0.11%/35s
fl417 8560s 0.22%/0.19%/3296s 0.32%/0.24%/2649s 0.82%/0.64%/8175s
gil262 3577s 0.41%/0.34%/1256s 0.57%/0.46%/1605s 1.89%/1.40%/1519s
kroB100 475s 0.00%/0.00%/170s 0.00%/0.00%/157s 0.38%/0.26%/19s
kroC100 550s 0.00%/0.00%/8s 0.00%/0.00%/14s 0.84%/0.00%/3s
kroD100 305s 0.00%/0.00%/37s 0.00%/0.00%/10s 0.65%/0.45%/83s
kroE100 480s 0.00%/0.00%/79s 0.00%/0.00%/110s 0.22%/0.00%/154s
kroA150 1130s 0.00%/0.00%/457s 0.00%/0.00%/713s 0.42%/0.12%/389s
kroB150 1125s 0.00%/0.00%/560s 0.01%/0.00%/676s 0.48%/0.24%/250s
kroA200 2060s 0.18%/0.12%/1026s 0.25%/0.15%/750s 1.07%/0.80%/83s
kroB200 2030s 0.04%/0.00%/1747s 0.04%/0.01%/676s 0.65%/0.15%/1074s
lin105 135s 0.00%/0.00%/5s 0.00%/0.00%/6s 0.10%/0.00%/14s
lin318 5300s 1.32%/1.14%/2157s 1.62%/1.32%/2350s 0.84%/0.30%/3202s
p654 23044s 0.57%/0.48%/11352s 0.70%/0.59%/10049s 1.98%/0.64%/17368s
pcb442 10919s 1.82%/1.69%/2746s 1.94%/1.89%/8597s 2.58%/2.40%/7785s
pr76 230s 0.00%/0.00%/34s 0.00%/0.00%/30s 0.00%/0.00%/10s
pr107 240s 0.00%/0.00%/17s 0.00%/0.00%/27s 0.50%/0.30%/25s
pr124 315s 0.00%/0.00%/36s 0.00%/0.00%/39s 0.13%/0.00%/55s
pr136 885s 0.32%/0.25%/567s 0.38%/0.34%/576s 0.82%/0.42%/264s
pr144 943s 0.02%/0.00%/215s 0.04%/0.00%/376s 0.05%/0.00%/784s
pr152 1045s 0.00%/0.00%/165s 0.06%/0.00%/255s 0.18%/0.18%/395s
pr226 2326s 0.01%/0.00%/852s 0.01%/0.00%/1430s 0.66%/0.34%/1010s
pr264 3559s 0.17%/0.09%/2977s 0.39%/0.30%/2382s 1.01%/0.79%/2311s
pr439 10626s 0.94%/0.82%/6058s 1.03%/0.87%/5663s 2.70%/0.86%/8731s
rat99 470s 0.00%/0.00%/80s 0.00%/0.00%/165s 0.69%/0.17%/142s
rat195 1968s 1.51%/1.29%/704s 1.85%/1.59%/957s 2.21%/1.73%/337s
rat575 19118s 2.63%/2.60%/8362s 2.92%/2.85%/8508s 1.98%/1.74%/9782s
rat783 41808s 2.97%/2.85%/13455s 2.98%/2.94%/15530s 2.71%/2.49%/29962s
rd100 320s 0.00%/0.00%/27s 0.00%/0.00%/73s 1.19%/0.85%/11s
rd400 8871s 1.25%/1.09%/5542s 1.41%/1.26%/4923s 1.74%/1.19%/4909s
st70 145s 0.00%/0.00%/3s 0.00%/0.00%/2s 0.74%/0.00%/5s
ts225 2732s 0.08%/0.06%/1201s 0.13%/0.09%/1018s 0.34%/0.26%/803s
tsp225 2654s 0.09%/0.00%/1682s 0.16%/0.00%/1452s 0.66%/0.53%/1037s
u159 1250s 0.00%/0.00%/113s 0.00%/0.00%/167s 0.53%/0.26%/114s
u724 33717s 2.45%/2.29%/16855s 2.63%/2.47%/18694s 2.76%/2.40%/26243s
Avg. 5912s 0.48%/0.43%/2361s 0.60%/0.53%/2803s 1.04%/0.68%/3626s
Table 7
GACS, mGENI and ACS plus 2-opt.
Problem CPU GACS mGENI ACS
a280 4095s 0.55%/0.38%/2400s 0.95%/0.51%/2703s 0.86%/0.38%/1514s
berlin52 25s 0.00%/0.00%/1s 0.00%/0.00%/1s 0.00%/0.00%/1s
bier127 819s 0.08%/0.00%/321s 0.16%/0.00%/461s 0.48%/0.10%/136s
ch130 855s 0.00%/0.00%/218s 0.00%/0.00%/338s 0.45%/0.21%/236s
ch150 1157s 0.28%/0.12%/123s 0.32%/0.25%/382s 0.39%/0.25%/21s
d198 1937s 0.14%/0.11%/1349s 0.16%/0.11%/948s 0.61%/0.55%/229s
d493 13822s 1.54%/1.32%/9807s 1.52%/1.44%/8854s 2.08%/1.92%/4924s
d657 25599s 1.61%/1.46%/10066s 1.70%/1.52%/14227s 1.50%/1.42%/14706s
eil51 55s 0.00%/0.00%/8s 0.00%/0.00%/7s 0.08%/0.00%/1s
eil76 275s 0.00%/0.00%/80s 0.00%/0.00%/83s 0.00%/0.00%/3s
eil101 496s 0.05%/0.00%/222s 0.11%/0.00%/165s 0.64%/0.08%/28s
fl417 8560s 0.11%/0.00%/4112s 0.16%/0.14%/2448s 0.26%/0.23%/5048s
gil262 3577s 0.41%/0.34%/910s 0.55%/0.44%/1550s 0.91%/0.75%/155s
kroB100 475s 0.00%/0.00%/135s 0.00%/0.00%/159s 0.23%/0.00%/55s
kroC100 550s 0.00%/0.00%/8s 0.00%/0.00%/14s 0.00%/0.00%/41s
kroD100 305s 0.00%/0.00%/31s 0.00%/0.00%/10s 0.06%/0.00%/19s
kroE100 480s 0.00%/0.00%/119s 0.00%/0.00%/105s 0.33%/0.15%/94s
kroA150 1130s 0.00%/0.00%/555s 0.00%/0.00%/718s 0.01%/0.00%/661s
kroB150 1125s 0.01%/0.00%/254s 0.01%/0.00%/681s 0.05%/0.05%/93s
kroA200 2060s 0.18%/0.12%/786s 0.15%/0.09%/759s 0.50%/0.28%/72s
kroB200 2030s 0.04%/0.00%/1358s 0.04%/0.01%/1808s 0.35%/0.03%/1459s
lin105 135s 0.00%/0.00%/8s 0.00%/0.00%/6s 0.00%/0.00%/9s
lin318 5300s 0.94%/0.86%/1718s 1.13%/0.97%/2350s 0.64%/0.35%/1121s
p654 23044s 0.43%/0.34%/10879s 0.53%/0.48%/10049s 1.10%/0.72%/13582s
pcb442 10919s 1.58%/1.52%/2398s 1.86%/1.68%/8524s 1.99%/1.60%/3219s
pr76 230s 0.00%/0.00%/10s 0.00%/0.00%/8s 0.00%/0.00%/3s
pr107 240s 0.00%/0.00%/19s 0.00%/0.00%/16s 0.40%/0.30%/15s
pr124 315s 0.00%/0.00%/36s 0.00%/0.00%/26s 0.05%/0.00%/9s
pr136 885s 0.28%/0.22%/516s 0.32%/0.26%/381s 0.51%/0.12%/104s
pr144 943s 0.00%/0.00%/375s 0.01%/0.00%/283s 0.00%/0.00%/813.7s
pr152 1045s 0.00%/0.00%/301s 0.00%/0.00%/479s 0.06%/0.00%/428s
pr226 2326s 0.01%/0.00%/1252s 0.01%/0.00%/1101s 0.31%/0.04%/840s
pr264 3559s 0.14%/0.08%/1629s 0.39%/0.30%/1394s 0.64%/0.27%/674s
pr439 10626s 0.84%/0.79%/5583s 0.92%/0.85%/5263s 1.75%/1.24%/6710s
rat99 470s 0.00%/0.00%/80s 0.00%/0.00%/165s 0.25%/0.00%/137s
rat195 1968s 1.24%/1.16%/666s 1.35%/1.19%/970s 1.44%/1.10%/464s
rat575 19118s 2.13%/2.05%/12624s 2.22%/2.05%/6250s 1.63%/1.49%/11955s
rat783 41808s 2.02%/1.94%/7048s 2.38%/2.24%/14530s 2.11%/1.97%/28120s
rd100 320s 0.00%/0.00%/27s 0.40%/0.10%/74s 0.77%/0.45%/9s
rd400 8871s 0.98%/0.87%/6465s 1.21%/1.06%/4923s 1.34%/1.18%/4581s
st70 145s 0.00%/0.00%/3s 0.00%/0.00%/2s 0.40%/0.00%/26s
ts225 2732s 0.08%/0.06%/587s 0.13%/0.09%/1022s 0.18%/0.10%/34s
tsp225 2654s 0.09%/0.00%/1046s 0.46%/0.29%/1401s 0.56%/0.40%/673s
u159 1250s 0.00%/0.00%/142s 0.00%/0.00%/68s 0.47%/0.19%/223s
u724 33717s 2.05%/1.94%/14398s 2.28%/2.02%/12300s 2.26%/1.85%/24340s
Avg. 5912s 0.39%/0.34%/2237s 0.48%/0.40%/2400s 0.63%/0.44%/2835s
Clearly, better solutions can be obtained by integrating local search improvement heuris-
tics into the above methods. This is illustrated here by locally optimizing each con-
structed solution with 2-opt (Lin, 1965) and by using the resulting solution to update the
pheromone trails. The format of table 7 is similar to table 6, except that each method
now applies a 2-opt to every constructed solution (until a local optimum is obtained). As
we can see, the three methods now generate better solutions, while converging faster to
their best solution. Although ACS has benefited the most from the postoptimization, thus
narrowing the gap with GACS and mGENI, the ranking of the three methods remains
the same.
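For completeness, a plain 2-opt descent of the kind used here can be sketched as follows; this is a generic first-improvement implementation, not the authors' code.

```python
def two_opt(tour, dist):
    """Improve a closed tour by 2-opt moves until no improving move remains."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for a in range(n - 1):
            for b in range(a + 2, n):
                i, j = tour[a], tour[a + 1]
                k, l = tour[b], tour[(b + 1) % n]
                if i == l:                 # the two edges share a vertex; skip
                    continue
                # Gain of replacing edges (i, j) and (k, l) by (i, k) and (j, l).
                delta = dist[i][k] + dist[j][l] - dist[i][j] - dist[k][l]
                if delta < -1e-10:
                    tour[a + 1:b + 1] = reversed(tour[a + 1:b + 1])
                    improved = True
    return tour
```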
As in the previous experiments, the Wilcoxon signed rank test indicates the superi-
ority of GACS over mGENI and ACS at a level of confidence beyond 99%, for both the
average and best runs.
6. Conclusion
This paper has proposed the use of sophisticated GENI ants within ant colony systems.
The superiority of this approach has been empirically demonstrated over a multi-start
GENI heuristic and classical nearest neighbor ants. This indicates that better solutions
emerge from the exploitation of pheromone information and from the use of more so-
phisticated tour construction strategies. The superiority remains when each constructed
tour is further improved with a 2-opt. It is thus beneficial to use the higher quality tours
constructed by GENI ants to feed the local search heuristic.
Acknowledgments
Financial support for this work was provided by the Canadian Natural Sciences and
Engineering Research Council (NSERC) and by the Quebec Fonds pour la Formation de
Chercheurs et l’Aide à la Recherche (FCAR). This support is gratefully acknowledged.
Thanks also to Serge Bisaillon and Jean-Philippe Tardif for running the computational
experiments.
References
Bonabeau, E., M. Dorigo, and G. Theraulaz. (2000). “Inspiration for Optimization from Social Insect Be-
havior.” Nature 406, 39–42.
Bullnheimer, B., R.F. Hartl, and C. Strauss. (1999). “Applying the Ant System to the Vehicle Routing
Problem.” In S. Voss, S. Martello, I.H. Osman, and C. Roucairol (eds.), Meta-Heuristics – Advances
and Trends in Local Search Paradigms for Optimization. Kluwer, pp. 285–296.
Dorigo, M. (1992). “Optimization, Learning and Natural Algorithms.” Ph.D. Dissertation, Dipartimento di
Elettronica, Politecnico di Milano, Italy (in Italian).
Dorigo, M., E. Bonabeau, and G. Theraulaz. (2000). “Ant Algorithms and Stigmergy.” Future Generation
Computer Systems 16, 851–871.
Dorigo, M. and G. Di Caro. (1999). “The Ant Colony Optimization Meta-Heuristic.” In D. Corne,
M. Dorigo, and F. Glover (eds.), New Ideas in Optimization. London, UK: McGraw-Hill, pp. 11–32.
Dorigo, M., G. Di Caro, and L.M. Gambardella. (1999). “Ant Algorithms for Discrete Optimization.” Arti-
ficial Life 5, 137–172.
Dorigo, M. and L.M. Gambardella. (1997). “Ant Colony System: A Cooperative Learning Approach to the
Traveling Salesman Problem.” IEEE Transactions on Evolutionary Computation 1, 53–66.
Dorigo, M., V. Maniezzo, and A. Colorni. (1996). “Ant System: Optimization by a Colony of Cooperating
Agents.” IEEE Transactions on Systems, Man and Cybernetics 26, 29–41.
Gambardella, L.-M. and M. Dorigo. (2000). “Ant Colony System Hybridized with a New Local Search for
the Sequential Ordering Problem.” INFORMS Journal on Computing 12, 237–255.
Gambardella, L.-M., E.D. Taillard, and M. Dorigo. (1999). “Ant Colonies for the Quadratic Assignment
Problem.” Journal of the Operational Research Society 50, 167–176.
Gendreau, M., A. Hertz, and G. Laporte. (1992). “New Insertion and Postoptimization Procedures for the
Traveling Salesman Problem.” Operations Research 40, 1086–1094.
Lawler, E.L., J.K. Lenstra, A.H.G. Rinnooy Kan, and D.B. Shmoys (eds.). (1985). The Traveling Salesman
Problem. Wiley.
Lin, S. (1965). “Computer Solutions of the Traveling Salesman Problem.” Bell System Technical Journal
44, 2245–2269.
Reinelt, G. (1991). “TSPLIB – A Traveling Salesman Problem Library.” ORSA Journal on Computing 3,
376–384.
Stützle, T. and H.H. Hoos. (2000). “MAX-MIN Ant System.” Future Generation Computer Systems
16, 889–914.
Whitley, D. (1989). “The GENITOR Algorithm: Why Rank-Based Allocation of Reproductive Trials Is
Best.” In J.D. Schaffer (ed.), Proceedings of the Third International Conference on Genetic Algorithms.
Morgan Kaufmann, pp. 116–121.