
Chapter 11
Approximation Algorithms

Slides by Kevin Wayne.
Copyright © 2005 Pearson-Addison Wesley. All rights reserved.


Approximation Algorithms

Q. Suppose I need to solve an NP-hard problem. What should I do?
A. Theory says you're unlikely to find a poly-time algorithm.

Must sacrifice one of three desired features.
 Solve problem to optimality.
 Solve problem in poly-time.
 Solve arbitrary instances of the problem.

ρ-approximation algorithm.
 Guaranteed to run in poly-time.
 Guaranteed to solve arbitrary instance of the problem.
 Guaranteed to find solution within ratio ρ of true optimum.

Challenge. Need to prove a solution's value is close to optimum, without
even knowing what optimum value is!

11.1 Load Balancing


Load Balancing

Input. m identical machines; n jobs, job j has processing time tj.
 Job j must run contiguously on one machine.
 A machine can process at most one job at a time.

Def. Let J(i) be the subset of jobs assigned to machine i. The
load of machine i is Li = Σj ∈ J(i) tj.

Def. The makespan is the maximum load on any machine L = maxi Li.

Load balancing. Assign each job to a machine to minimize makespan.
Load Balancing: List Scheduling

List-scheduling algorithm.
 Consider n jobs in some fixed order.
 Assign job j to machine whose load is smallest so far.

   List-Scheduling(m, n, t1,t2,…,tn) {
      for i = 1 to m {
         Li ← 0                      load on machine i
         J(i) ← φ                    jobs assigned to machine i
      }
      for j = 1 to n {
         i = argmink Lk              machine i has smallest load
         J(i) ← J(i) ∪ {j}           assign job j to machine i
         Li ← Li + tj                update load of machine i
      }
      return J(1), …, J(m)
   }

Implementation. O(n log m) using a priority queue.
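A runnable Python version of this pseudocode (a sketch; heapq plays the
role of the priority queue, and the function and variable names are ours):

   import heapq

   def list_schedule(times, m):
       # Graham's list scheduling: assign each job, in the given order,
       # to the machine whose load is smallest so far. O(n log m).
       heap = [(0, i) for i in range(m)]        # (load Li, machine i)
       assignment = [[] for _ in range(m)]      # J(i) for each machine i
       for j, t in enumerate(times):
           load, i = heapq.heappop(heap)        # machine with smallest load
           assignment[i].append(j)              # assign job j to machine i
           heapq.heappush(heap, (load + t, i))  # update load of machine i
       return assignment, max(load for load, _ in heap)   # J(1..m), makespan

   # Worst case analyzed below: m(m-1) unit jobs then one job of length m
   # gives makespan 2m - 1 = 19 for m = 10, while the optimum is m = 10.
   jobs = [1] * (10 * 9) + [10]
   print(list_schedule(jobs, 10)[1])   # 19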
Load Balancing: List Scheduling Analysis

Theorem. [Graham, 1966] Greedy algorithm is a 2-approximation.
 First worst-case analysis of an approximation algorithm.
 Need to compare resulting solution with optimal makespan L*.

Lemma 1. The optimal makespan L* ≥ maxj tj.
Pf. Some machine must process the most time-consuming job. ▪

Lemma 2. The optimal makespan L* ≥ (1/m) Σj tj.
Pf.
 The total processing time is Σj tj.
 One of m machines must do at least a 1/m fraction of total work. ▪
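These two lower bounds are exactly what let us certify a schedule's
quality without knowing the optimum, per the challenge on the first
slide. A tiny illustration (the function name is ours):

   def makespan_lower_bound(times, m):
       # L* >= max_j tj (Lemma 1) and L* >= (1/m) sum_j tj (Lemma 2)
       return max(max(times), sum(times) / m)

   # For any schedule with makespan L, the ratio
   # L / makespan_lower_bound(times, m) certifies an approximation factor.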

Load Balancing: List Scheduling Analysis

Theorem. Greedy algorithm is a 2-approximation.
Pf. Consider load Li of bottleneck machine i.
 Let j be last job scheduled on machine i.
 When job j assigned to machine i, i had smallest load. Its load
   before assignment is Li - tj ⇒ Li - tj ≤ Lk for all 1 ≤ k ≤ m.
 Sum inequalities over all k and divide by m:

      Li - tj ≤ (1/m) Σk Lk = (1/m) Σk tk ≤ L*       (Lemma 2)

 Now Li = (Li - tj) + tj ≤ 2L*, since Li - tj ≤ L* by the above and
   tj ≤ maxk tk ≤ L* by Lemma 1. ▪

(figure: loads of machines 1..m; the blue jobs are scheduled before j
on the bottleneck machine i, which finishes at L = Li)
Load Balancing: List Scheduling Analysis

Q. Is our analysis tight?
A. Essentially yes.

Ex: m machines, m(m-1) jobs of length 1, one job of length m.

(figure: m = 10; under list scheduling machines 2-10 sit idle while the
long job finishes, so makespan = 19; the optimal makespan = 10)

Load Balancing: LPT Rule

Longest processing time (LPT). Sort n jobs in descending order of
processing time, and then run list scheduling algorithm.

   LPT-List-Scheduling(m, n, t1,t2,…,tn) {
      Sort jobs so that t1 ≥ t2 ≥ … ≥ tn

      for i = 1 to m {
         Li ← 0                      load on machine i
         J(i) ← φ                    jobs assigned to machine i
      }
      for j = 1 to n {
         i = argmink Lk              machine i has smallest load
         J(i) ← J(i) ∪ {j}           assign job j to machine i
         Li ← Li + tj                update load of machine i
      }
      return J(1), …, J(m)
   }
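The same sketch adapted to the LPT rule; only the job order changes
(again a Python sketch with our own names):

   import heapq

   def lpt_schedule(times, m):
       # LPT rule: consider jobs in descending order of processing time,
       # then assign each to the machine with the smallest current load.
       heap = [(0, i) for i in range(m)]
       assignment = [[] for _ in range(m)]
       for j in sorted(range(len(times)), key=lambda j: -times[j]):
           load, i = heapq.heappop(heap)
           assignment[i].append(j)
           heapq.heappush(heap, (load + times[j], i))
       return assignment, max(load for load, _ in heap)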
Load Balancing: LPT Rule

Observation. If at most m jobs, then list-scheduling is optimal.
Pf. Each job put on its own machine. ▪

Lemma 3. If there are more than m jobs, L* ≥ 2 tm+1.
Pf.
 Consider first m+1 jobs t1, …, tm+1.
 Since the ti's are in descending order, each takes at least tm+1 time.
 There are m+1 jobs and m machines, so by pigeonhole principle, at
   least one machine gets two jobs. ▪

Theorem. LPT rule is a 3/2-approximation algorithm.
Pf. Same basic approach as for list scheduling. Let j be the last job on
the bottleneck machine i. If j ≤ m, machine i holds only job j and the
schedule is optimal; otherwise j > m (by the observation, we can assume
the number of jobs exceeds m), so tj ≤ tm+1 ≤ ½ L* by Lemma 3, and

      Li = (Li - tj) + tj ≤ L* + ½ L* = (3/2) L*. ▪
Load Balancing: LPT Rule

Q. Is our 3/2 analysis tight?
A. No.

Theorem. [Graham, 1969] LPT rule is a 4/3-approximation.
Pf. More sophisticated analysis of same algorithm.

Q. Is Graham's 4/3 analysis tight?
A. Essentially yes.

Ex: m machines, n = 2m+1 jobs, 2 jobs each of length m, m+1, …, 2m-1 and
one more job of length m.


11.2 Center Selection

Center Selection Problem

Input. Set of n sites s1, …, sn and integer k > 0.

Center selection problem. Select k centers C so that maximum
distance from a site to nearest center is minimized.

(figure: n sites, k = 4 centers, covering radius r(C))

Notation.
 dist(x, y) = distance between x and y.
 dist(si, C) = min c ∈ C dist(si, c) = distance from si to closest center.
 r(C) = maxi dist(si, C) = smallest covering radius.

Goal. Find set of centers C that minimizes r(C), subject to |C| = k.

Distance function properties.
 dist(x, x) = 0                           (identity)
 dist(x, y) = dist(y, x)                  (symmetry)
 dist(x, y) ≤ dist(x, z) + dist(z, y)     (triangle inequality)
Center Selection Example

Ex: each site is a point in the plane, a center can be any point in the
plane, dist(x, y) = Euclidean distance.

Remark: search can be infinite!

(figure: k = 2 centers placed among sites in the plane, radius r(C))


Greedy Algorithm: A False Start

Greedy algorithm. Put the first center at the best possible location
for a single center, and then keep adding centers so as to reduce the
covering radius each time by as much as possible.

Remark: arbitrarily bad!

(figure: k = 2 centers; greedy center 1 lands between two clusters of
sites, so the second center barely helps)

Center Selection: Greedy Algorithm

Greedy algorithm. Repeatedly choose the next center to be the site
farthest from any existing center.

   Greedy-Center-Selection(k, n, s1,s2,…,sn) {
      C = φ
      repeat k times {
         Select a site si with maximum dist(si, C)     site farthest from any center
         Add si to C
      }
      return C
   }

Observation. Upon termination all centers in C are pairwise at least r(C)
apart.
Pf. By construction of algorithm. ▪
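A Python sketch of the greedy algorithm for sites in the plane
(Euclidean distance assumed; since dist(si, C) is +∞ when C is empty,
the first center is an arbitrary site, here sites[0]):

   import math

   def dist(p, q):
       return math.hypot(p[0] - q[0], p[1] - q[1])

   def greedy_centers(sites, k):
       # Repeatedly choose the next center to be the site farthest
       # from any existing center; O(nk) via incremental distances.
       centers = [sites[0]]
       d = [dist(s, centers[0]) for s in sites]    # d[u] = dist(su, C)
       while len(centers) < k:
           t = max(range(len(sites)), key=lambda u: d[u])   # farthest site
           centers.append(sites[t])
           d = [min(d[u], dist(sites[u], sites[t])) for u in range(len(sites))]
       return centers, max(d)                      # C and covering radius r(C)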


Center Selection: Analysis of Greedy Algorithm

Theorem. Let C* be an optimal set of centers. Then r(C) ≤ 2r(C*).
Pf. (by contradiction) Assume r(C*) < ½ r(C).
 For each site ci in C, consider ball of radius ½ r(C) around it.
 Exactly one ci* in each ball; let ci be the site paired with ci*.
 Consider any site s and its closest center ci* in C*.
 dist(s, C) ≤ dist(s, ci) ≤ dist(s, ci*) + dist(ci*, ci) ≤ 2r(C*)
   (triangle inequality; dist(s, ci*) ≤ r(C*) since ci* is the closest
   optimal center to s, and dist(ci*, ci) ≤ r(C*) since ci* covers ci).
 Thus r(C) ≤ 2r(C*). ▪

(figure: balls of radius ½ r(C) around the centers ci in C, each
containing exactly one optimal center ci*; sites s nearby)
Center Selection

Theorem. Let C* be an optimal set of centers. Then r(C) ≤ 2r(C*).

Theorem. Greedy algorithm is a 2-approximation for center selection
problem.

Remark. Greedy algorithm always places centers at sites, but is still
within a factor of 2 of best solution that is allowed to place centers
anywhere (e.g., points in the plane).

Question. Is there hope of a 3/2-approximation? 4/3?

Theorem. Unless P = NP, there is no ρ-approximation for center-selection
problem for any ρ < 2.


11.4 The Pricing Method: Vertex Cover

Weighted Vertex Cover

Weighted vertex cover. Given a graph G with vertex weights, find a
vertex cover of minimum weight.

(figure: two covers of the same 4-vertex weighted graph, one of
weight 2 + 2 + 4 and one of weight 9)


Pricing Method

Pricing method. Each edge must be covered by some vertex.
Edge e = (i, j) pays price pe ≥ 0 to use vertex i and j.

Fairness. Edges incident to vertex i should pay ≤ wi in total:

      for each vertex i:   Σe=(i,j) pe ≤ wi

Lemma. For any vertex cover S and any fair prices pe:  Σe pe ≤ w(S).
Pf.
      Σe∈E pe  ≤  Σi∈S Σe=(i,j) pe  ≤  Σi∈S wi  =  w(S). ▪

      (first inequality: each edge e is covered by at least one node in S;
       second: sum the fairness inequalities for each node in S)
Pricing Method

Pricing method. Set prices and find vertex cover simultaneously.

   Weighted-Vertex-Cover-Approx(G, w) {
      foreach e in E
         pe = 0

      while (∃ edge i-j such that neither i nor j are tight)     node i is tight if Σe=(i,j) pe = wi
         select such an edge e
         increase pe as much as possible until i or j tight

      S ← set of all tight nodes
      return S
   }

(Figure 11.8: a sample execution, tracking the price of each edge and
each vertex weight.)
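A Python sketch of the pricing method (edge list plus a weight dict are
our own representation; a single pass over the edges suffices, since
after each price raise at least one endpoint becomes and stays tight):

   def pricing_vertex_cover(edges, w):
       # Set fair prices and find a vertex cover simultaneously.
       paid = {i: 0 for i in w}              # total price paid to vertex i
       price = {}
       for (i, j) in edges:
           if paid[i] < w[i] and paid[j] < w[j]:    # neither endpoint tight
               delta = min(w[i] - paid[i], w[j] - paid[j])
               price[(i, j)] = delta         # raise pe until i or j is tight
               paid[i] += delta
               paid[j] += delta
       return {i for i in w if paid[i] >= w[i]}     # S = all tight nodes

   # e.g. a path a-b-c with weights 3, 1, 4: edge (a, b) raises its
   # price to 1, making b tight; the returned cover is {b}.
   print(pricing_vertex_cover([("a", "b"), ("b", "c")],
                              {"a": 3, "b": 1, "c": 4}))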

Pricing Method: Analysis

Theorem. Pricing method is a 2-approximation.
Pf.
 Algorithm terminates since at least one new node becomes tight
   after each iteration of while loop.
 Let S = set of all tight nodes upon termination of algorithm. S is a
   vertex cover: if some edge i-j is uncovered, then neither i nor j is
   tight. But then while loop would not terminate.
 Let S* be optimal vertex cover. We show w(S) ≤ 2w(S*):

      w(S) = Σi∈S wi = Σi∈S Σe=(i,j) pe ≤ Σi∈V Σe=(i,j) pe = 2 Σe∈E pe ≤ 2w(S*)

      (all nodes in S are tight; S ⊆ V and prices ≥ 0; each edge counted
       twice; fairness lemma) ▪


11.6 LP Rounding: Vertex Cover
Weighted Vertex Cover

Weighted vertex cover. Given an undirected graph G = (V, E) with
vertex weights wi ≥ 0, find a minimum weight subset of nodes S such
that every edge is incident to at least one vertex in S.

(figure: a graph on vertices A-J with vertex weights; the highlighted
vertex cover has total weight = 55)


Weighted Vertex Cover: IP Formulation

Integer programming formulation.
 Model inclusion of each vertex i using a 0/1 variable xi:

      xi = 0 if vertex i is not in vertex cover
      xi = 1 if vertex i is in vertex cover

   Vertex covers are in 1-1 correspondence with 0/1 assignments:
   S = {i ∈ V : xi = 1}.

 Objective function: minimize Σi wi xi.

 Must take either i or j: xi + xj ≥ 1.

Weighted Vertex Cover: IP Formulation

Weighted vertex cover. Integer programming formulation.

      (ILP)  min  Σi∈V wi xi
             s.t. xi + xj ≥ 1      (i, j) ∈ E
                  xi ∈ {0, 1}      i ∈ V

Observation. If x* is optimal solution to (ILP), then S = {i ∈ V : x*i = 1}
is a min weight vertex cover.


Integer Programming

INTEGER-PROGRAMMING. Given integers aij and bi, find integers xj that
satisfy:

      Σj=1..n aij xj ≥ bi      1 ≤ i ≤ m
      xj ≥ 0                   1 ≤ j ≤ n
      xj integral              1 ≤ j ≤ n

or, in matrix form:  max c^t x  s.t.  Ax ≥ b,  x integral.

Observation. Vertex cover formulation proves that integer programming is
an NP-hard search problem, even if all coefficients are 0/1 and at most
two variables appear per inequality.
Linear Programming

Linear programming. Max/min linear objective function subject to
linear inequalities.
 Input: integers cj, bi, aij.
 Output: real numbers xj.

      (P)  max  c^t x        i.e.   (P)  max  Σj=1..n cj xj
           s.t. Ax ≤ b                    s.t. Σj=1..n aij xj ≤ bi   1 ≤ i ≤ m
                x ≥ 0                          xj ≥ 0                1 ≤ j ≤ n

Linear. No x², xy, arccos(x), x(1-x), etc.

Simplex algorithm. [Dantzig 1947] Can solve LP in practice.
Ellipsoid algorithm. [Khachian 1979] Can solve LP in poly-time.


LP Feasible Region

LP geometry in 2D.

(figure: feasible region in the plane bounded by x1 = 0, x2 = 0,
x1 + 2x2 = 6, and 2x1 + x2 = 6)

Weighted Vertex Cover: LP Relaxation

Weighted vertex cover. Linear programming formulation.

      (LP)  min  Σi∈V wi xi
            s.t. xi + xj ≥ 1      (i, j) ∈ E
                 xi ≥ 0           i ∈ V

Observation. Optimal value of (LP) is ≤ optimal value of (ILP).
Pf. LP has fewer constraints.

Note. LP is not equivalent to vertex cover.
(figure: on a triangle, xi = ½ at every vertex is LP-feasible)

Q. How can solving LP help us find a small vertex cover?
A. Solve LP and round fractional values.


Weighted Vertex Cover

Theorem. If x* is optimal solution to (LP), then S = {i ∈ V : x*i ≥ ½} is a
vertex cover whose weight is at most twice the min possible weight.

Pf. [S is a vertex cover]
 Consider an edge (i, j) ∈ E.
 Since x*i + x*j ≥ 1, either x*i ≥ ½ or x*j ≥ ½ ⇒ (i, j) covered.

Pf. [S has desired cost]
 Let S* be optimal vertex cover. Then

      Σi∈S* wi  ≥  Σi∈S wi x*i  ≥  ½ Σi∈S wi

      (first inequality: LP is a relaxation; second: x*i ≥ ½ for i ∈ S) ▪
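A sketch of the LP-rounding algorithm using scipy.optimize.linprog
(scipy is assumed available; vertices are 0..n-1 and the constraint
xi + xj ≥ 1 is rewritten as -xi - xj ≤ -1 for the solver's
A_ub x ≤ b_ub form):

   from scipy.optimize import linprog

   def lp_round_vertex_cover(n, edges, w):
       A_ub, b_ub = [], []
       for i, j in edges:
           row = [0.0] * n
           row[i] = row[j] = -1.0            # -(xi + xj) <= -1
           A_ub.append(row)
           b_ub.append(-1.0)
       res = linprog(c=w, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
       # round: keep every vertex whose LP value is at least 1/2
       return {i for i in range(n) if res.x[i] >= 0.5 - 1e-9}

   # Triangle with unit weights: LP optimum is 3/2 (all xi = 1/2); the
   # rounded cover is all three vertices, weight 3 <= 2 * (3/2).
   print(lp_round_vertex_cover(3, [(0, 1), (1, 2), (0, 2)], [1, 1, 1]))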
Weighted Vertex Cover

Theorem. The LP-rounding algorithm is a 2-approximation algorithm for
weighted vertex cover.

Theorem. [Dinur-Safra 2001] If P ≠ NP, then no ρ-approximation
for ρ < 1.3607, even with unit weights.      (1.3607 ≈ 10 √5 - 21)

Open research problem. Close the gap.


* 11.7 Load Balancing Reloaded

Generalized Load Balancing

Input. Set of m machines M; set of n jobs J.
 Job j must run contiguously on an authorized machine in Mj ⊆ M.
 Job j has processing time tj.
 Each machine can process at most one job at a time.

Def. Let J(i) be the subset of jobs assigned to machine i. The
load of machine i is Li = Σj ∈ J(i) tj.

Def. The makespan is the maximum load on any machine = maxi Li.

Generalized load balancing. Assign each job to an authorized machine
to minimize makespan.


Generalized Load Balancing: Integer Linear Program and Relaxation

ILP formulation. xij = time machine i spends processing job j.

      (IP)  min  L
            s.t. Σi xij = tj        for all j ∈ J
                 Σj xij ≤ L         for all i ∈ M
                 xij ∈ {0, tj}      for all j ∈ J and i ∈ Mj
                 xij = 0            for all j ∈ J and i ∉ Mj

LP relaxation.

      (LP)  min  L
            s.t. Σi xij = tj        for all j ∈ J
                 Σj xij ≤ L         for all i ∈ M
                 xij ≥ 0            for all j ∈ J and i ∈ Mj
                 xij = 0            for all j ∈ J and i ∉ Mj
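A sketch of this LP relaxation with scipy.optimize.linprog (scipy
assumed available; one variable per authorized pair (i, j) plus one
final variable playing the role of L):

   from scipy.optimize import linprog

   def glb_lp(m, t, Mj):
       # t[j] = processing time of job j; Mj[j] = machines authorized for j.
       n = len(t)
       pairs = [(i, j) for j in range(n) for i in Mj[j]]
       col = {p: k for k, p in enumerate(pairs)}
       L = len(pairs)                         # column index of variable L
       c = [0.0] * L + [1.0]                  # objective: min L
       A_eq, b_eq = [], []                    # sum_i xij = tj for each job j
       for j in range(n):
           row = [0.0] * (L + 1)
           for i in Mj[j]:
               row[col[(i, j)]] = 1.0
           A_eq.append(row)
           b_eq.append(t[j])
       A_ub, b_ub = [], []                    # sum_j xij - L <= 0 per machine
       for i in range(m):
           row = [0.0] * (L + 1)
           for j in range(n):
               if (i, j) in col:
                   row[col[(i, j)]] = 1.0
           row[L] = -1.0
           A_ub.append(row)
           b_ub.append(0.0)
       res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                     bounds=[(0, None)] * (L + 1))
       return res.x[L], {p: res.x[col[p]] for p in pairs}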
Generalized Load Balancing: Lower Bounds

Lemma 1. Let L be the optimal value to the LP. Then the optimal
makespan L* ≥ L.
Pf. LP has fewer constraints than the IP formulation. ▪

Lemma 2. The optimal makespan L* ≥ maxj tj.
Pf. Some machine must process the most time-consuming job. ▪


Generalized Load Balancing: Structure of LP Solution

Lemma 3. Let x be solution to LP. Let G(x) be the graph with an edge
from machine i to job j if xij > 0. Then G(x) is acyclic.
Pf. (deferred; can transform x into another LP solution where G(x) is
acyclic, if the LP solver doesn't return such an x)

(figure: bipartite graph on machine and job nodes with an edge where
xij > 0; one acyclic G(x), one cyclic G(x))

Generalized Load Balancing: Rounding

Rounded solution. Find LP solution x where G(x) is a forest. Root
forest G(x) at some arbitrary machine node r.
 If job j is a leaf node, assign j to its parent machine i.
 If job j is not a leaf node, assign j to one of its children.

Lemma 4. Rounded solution only assigns jobs to authorized machines.
Pf. If job j is assigned to machine i, then xij > 0. LP solution can only
assign positive value to authorized machines. ▪
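A sketch of this rounding step (it assumes x is given as a dict
{(i, j): value > 0} whose graph G(x) is already a forest; machine and
job nodes are tagged to keep the bipartition explicit, and the names
are ours):

   from collections import defaultdict, deque

   def round_glb(x):
       # Root each tree of G(x) at a machine; assign leaf jobs to their
       # parent machine and every non-leaf job to one of its children.
       adj = defaultdict(set)
       for i, j in x:
           adj[('M', i)].add(('J', j))
           adj[('J', j)].add(('M', i))
       assign, seen = {}, set()
       for root in [u for u in adj if u[0] == 'M']:   # root trees at machines
           if root in seen:
               continue
           parent = {root: None}
           queue = deque([root])
           seen.add(root)
           while queue:
               u = queue.popleft()
               children = [v for v in adj[u] if v not in seen]
               if u[0] == 'J':
                   # leaf job -> parent machine; non-leaf -> a child machine
                   target = parent[u] if not children else children[0]
                   assign[u[1]] = target[1]
               for v in children:
                   seen.add(v)
                   parent[v] = u
                   queue.append(v)
       return assign                                  # {job j: machine i}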

Generalized Load Balancing: Analysis

Lemma 5. If job j is a leaf node and machine i = parent(j), then xij = tj.
Pf. Since j is a leaf, xkj = 0 for all machines k ≠ parent(j). The LP
constraint Σi xij = tj then forces xij = tj. ▪

Lemma 6. At most one non-leaf job is assigned to a machine.
Pf. The only possible non-leaf job assigned to machine i is parent(i). ▪
Generalized Load Balancing: Analysis

Theorem. Rounded solution is a 2-approximation.
Pf.
 Let J(i) be the jobs assigned to machine i.
 By Lemma 6, the load Li on machine i has two components:

   – leaf nodes:

        Σj∈J(i), j leaf tj  =  Σj∈J(i), j leaf xij  ≤  Σj∈J xij  ≤  L  ≤  L*

        (Lemma 5; LP constraint on machine i; L = optimal value of LP;
         Lemma 1, since the LP is a relaxation)

   – parent(i):  tparent(i) ≤ L*      (Lemma 2)

 Thus, the overall load Li ≤ 2L*. ▪


Generalized Load Balancing: Flow Formulation

Flow formulation of LP.

      Σi xij = tj      for all j ∈ J
      Σj xij ≤ L       for all i ∈ M
      xij ≥ 0          for all j ∈ J and i ∈ Mj
      xij = 0          for all j ∈ J and i ∉ Mj

(figure: each job j supplies tj units of flow, each machine has
capacity L, and edges of capacity ∞ join authorized pairs)

Observation. Solutions to the feasible flow problem with value L are in
one-to-one correspondence with LP solutions of value L.

Generalized Load Balancing: Structure of Solution

Lemma 3. Let (x, L) be solution to LP. Let G(x) be the graph with an
edge from machine i to job j if xij > 0. We can find another solution
(x', L) such that G(x') is acyclic.

Pf. Let C be a cycle in G(x).
 Augment flow along the cycle C (flow conservation is maintained).
 At least one edge from C is removed (and none are added).
 Repeat until G(x') is acyclic. ▪

(figure: augmenting along a cycle C of G(x) zeroes out one of its
edges, yielding G(x'))


Conclusions

Running time. The bottleneck operation in our 2-approximation is
solving one LP with mn + 1 variables.

Remark. Can solve LP using flow techniques on a graph with m+n+1 nodes:
given L, find a feasible flow if it exists. Binary search to find L*.

Extensions: unrelated parallel machines. [Lenstra-Shmoys-Tardos 1990]
 Job j takes tij time if processed on machine i.
 2-approximation algorithm via LP rounding.
 No 3/2-approximation algorithm unless P = NP.
11.8 Knapsack Problem


Polynomial Time Approximation Scheme

PTAS. (1 + ε)-approximation algorithm for any constant ε > 0.
 Load balancing. [Hochbaum-Shmoys 1987]
 Euclidean TSP. [Arora 1996]

Consequence. PTAS produces arbitrarily high quality solution, but trades
off accuracy for time.

This section. PTAS for knapsack problem via rounding and scaling.

Knapsack Problem

Knapsack problem.
 Given n objects and a "knapsack."
 Item i has value vi > 0 and weighs wi > 0.        (we'll assume wi ≤ W)
 Knapsack can carry weight up to W.

Goal: fill knapsack so as to maximize total value.

Ex: { 3, 4 } has value 40.

      Item   Value   Weight
        1       1       1
        2       6       2
        3      18       5
        4      22       6
        5      28       7

      W = 11


Knapsack is NP-Complete

KNAPSACK: Given a finite set X, nonnegative weights wi, nonnegative
values vi, a weight limit W, and a target value V, is there a subset
S ⊆ X such that:

      Σi∈S wi ≤ W
      Σi∈S vi ≥ V

SUBSET-SUM: Given a finite set X, nonnegative values ui, and an integer
U, is there a subset S ⊆ X whose elements sum to exactly U?

Claim. SUBSET-SUM ≤P KNAPSACK.
Pf. Given instance (u1, …, un, U) of SUBSET-SUM, create KNAPSACK
instance:

      vi = wi = ui          Σi∈S ui ≤ U
      V = W = U             Σi∈S ui ≥ U
Knapsack Problem: Dynamic Programming I

Def. OPT(i, w) = max value subset of items 1, ..., i with weight limit w.
 Case 1: OPT does not select item i.
   – OPT selects best of 1, …, i-1 using up to weight limit w.
 Case 2: OPT selects item i.
   – new weight limit = w - wi
   – OPT selects best of 1, …, i-1 using up to weight limit w - wi.

      OPT(i, w) =  0                                            if i = 0
                   OPT(i-1, w)                                  if wi > w
                   max { OPT(i-1, w), vi + OPT(i-1, w - wi) }   otherwise

Running time. O(n W).
 W = weight limit.
 Not polynomial in input size!


Knapsack Problem: Dynamic Programming II

Def. OPT(i, v) = min weight subset of items 1, …, i that yields value
exactly v.
 Case 1: OPT does not select item i.
   – OPT selects best of 1, …, i-1 that achieves exactly value v.
 Case 2: OPT selects item i.
   – consumes weight wi, new value needed = v - vi
   – OPT selects best of 1, …, i-1 that achieves exactly value v - vi.

      OPT(i, v) =  0                                            if v = 0
                   ∞                                            if i = 0, v > 0
                   OPT(i-1, v)                                  if vi > v
                   min { OPT(i-1, v), wi + OPT(i-1, v - vi) }   otherwise

Running time. O(n V*) = O(n² vmax).
 V* = optimal value = maximum v such that OPT(n, v) ≤ W.     (V* ≤ n vmax)
 Not polynomial in input size!
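A Python sketch of dynamic program II, which the FPTAS below will
reuse (names are ours; for simplicity the table indexes values up to
Σi vi rather than stopping at V*, so it is O(n · Σ vi)):

   import math

   def knapsack_by_value(v, w, W):
       # opt[i][val] = min weight of a subset of items 0..i-1 with
       # total value exactly val (math.inf if unachievable).
       n, V = len(v), sum(v)
       opt = [[math.inf] * (V + 1) for _ in range(n + 1)]
       for i in range(n + 1):
           opt[i][0] = 0                       # value 0 costs no weight
       for i in range(1, n + 1):
           for val in range(1, V + 1):
               opt[i][val] = opt[i - 1][val]   # case 1: skip item i
               if v[i - 1] <= val:             # case 2: take item i
                   opt[i][val] = min(opt[i][val],
                                     w[i - 1] + opt[i - 1][val - v[i - 1]])
       # V* = largest value achievable within the weight limit W
       best = max(val for val in range(V + 1) if opt[n][val] <= W)
       items, val = [], best                   # trace back the chosen items
       for i in range(n, 0, -1):
           if opt[i][val] != opt[i - 1][val]:
               items.append(i - 1)
               val -= v[i - 1]
       return best, items[::-1]

   # The slide's example: items {3, 4} (0-indexed {2, 3}) give value 40.
   print(knapsack_by_value([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11))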

Knapsack: FPTAS

Intuition for approximation algorithm.
 Round all values up to lie in smaller range.
 Run dynamic programming algorithm on rounded instance.
 Return optimal items in rounded instance.

      original instance (W = 11)           rounded instance (W = 11)

      Item      Value      Weight          Item   Value   Weight
        1         934,221      1             1       1       1
        2       5,956,342      2             2       6       2
        3      17,810,013      5             3      18       5
        4      21,217,800      6             4      22       6
        5      27,343,199      7             5      28       7


Knapsack: FPTAS

Knapsack FPTAS. Round up all values:   v̄i = ⌈vi/θ⌉ θ,   v̂i = ⌈vi/θ⌉
 vmax = largest value in original instance
 ε = precision parameter
 θ = scaling factor = ε vmax / n

Observation. Optimal solutions to the problems with values v̄ or v̂ are
equivalent.

Intuition. v̄ is close to v, so an optimal solution using v̄ is nearly
optimal; v̂ is small and integral, so the dynamic programming algorithm
is fast.

Running time. O(n³ / ε). Dynamic program II runs in time O(n² v̂max), where

      v̂max = ⌈vmax/θ⌉ = ⌈n/ε⌉.
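A sketch of the full FPTAS, reusing knapsack_by_value from the previous
sketch (θ = ε vmax / n as defined above):

   import math

   def knapsack_fptas(v, w, W, eps):
       # Round values up to v̂i = ceil(vi / θ), θ = ε vmax / n, then run
       # the exact value-indexed DP on the small rounded values.
       n = len(v)
       theta = eps * max(v) / n
       v_hat = [math.ceil(vi / theta) for vi in v]   # each at most ceil(n/ε)
       _, items = knapsack_by_value(v_hat, w, W)     # optimal on rounded instance
       return items, sum(v[i] for i in items)        # value in original units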
Knapsack: FPTAS

Knapsack FPTAS. Round up all values:  v̄i = ⌈vi/θ⌉ θ.

Theorem. If S is the solution found by our algorithm and S* is any other
feasible solution, then

      (1 + ε) Σi∈S vi  ≥  Σi∈S* vi.

Pf. Let S* be any feasible solution satisfying the weight constraint.

      Σi∈S* vi  ≤  Σi∈S* v̄i          always round up
                ≤  Σi∈S v̄i           solve rounded instance optimally
                ≤  Σi∈S (vi + θ)      never round up by more than θ
                ≤  Σi∈S vi + nθ       |S| ≤ n
                ≤  (1 + ε) Σi∈S vi    nθ = ε vmax, and vmax ≤ Σi∈S vi
                                      (the algorithm can always take the
                                       max-value item alone, since wi ≤ W) ▪


Extra Slides

Load Balancing on 2 Machines

Claim. Load balancing is hard even if only 2 machines.
Pf. NUMBER-PARTITIONING ≤P LOAD-BALANCE.
      (NUMBER-PARTITIONING is NP-complete by Exercise 8.26)

(figure: jobs a-g; machine 1 runs a, d, f and machine 2 runs b, c, e, g,
both finishing at time L, so the answer is yes)


Center Selection: Hardness of Approximation

Theorem. Unless P = NP, there is no ρ-approximation algorithm for
metric k-center problem for any ρ < 2.

Pf. We show how we could use a (2 - ε)-approximation algorithm for
k-center to solve DOMINATING-SET in poly-time.
 Let G = (V, E), k be an instance of DOMINATING-SET.      (see Exercise 8.29)
 Construct instance G' of k-center with sites V and distances
   – d(u, v) = 1 if (u, v) ∈ E
   – d(u, v) = 2 if (u, v) ∉ E
 Note that G' satisfies the triangle inequality.
 Claim: G has dominating set of size k iff there exist k centers C*
   with r(C*) = 1.
 Thus, if G has a dominating set of size k, a (2 - ε)-approximation
   algorithm on G' must find a solution C* with r(C*) = 1, since it
   cannot use any edge of distance 2. ▪
