Found Comput Math (2016) 16:329–342
DOI 10.1007/s10208-014-9243-7
Computing the Permanent of (Some) Complex Matrices
Alexander Barvinok
Received: 24 June 2014 / Revised: 22 November 2014 / Accepted: 30 November 2014 /
Published online: 6 January 2015
© SFoCM 2014
Abstract We present a deterministic algorithm which, for any given 0 < ε < 1 and any n × n real or complex matrix A = (a_ij) such that |a_ij − 1| ≤ 0.19 for all i, j, computes the permanent of A within relative error ε in n^{O(ln n − ln ε)} time. The method can be extended to computing hafnians and multidimensional permanents.
Keywords Permanent · Hafnian · Algorithm
Mathematics Subject Classification 15A15 · 68C25 · 68W25
Communicated by Stephen Cook.

This research was partially supported by NSF Grants DMS 0856640 and DMS 1361541.

A. Barvinok, Department of Mathematics, University of Michigan, Ann Arbor, MI 48109-1043, USA. e-mail: [email protected]

1 Introduction and Main Results

The permanent of an n × n matrix A = (a_ij) is defined as

    per A = Σ_{σ ∈ S_n} ∏_{i=1}^{n} a_{iσ(i)},

where S_n is the symmetric group of permutations of the set {1, . . . , n}. The problem of efficient computation of the permanent has attracted a lot of attention. It is #P-hard already for 0–1 matrices [18], but a fully polynomial randomized approximation scheme, based on the Markov Chain Monte Carlo approach, is constructed for all non-negative matrices [14]. A deterministic polynomial time algorithm based on matrix
negative matrices [14]. A deterministic polynomial time algorithm based on matrix
scaling for computing the permanent of non-negative matrices within a factor of en is
constructed in [15], and the bound was recently improved to 2n in [13]. An approach
based on the idea of “correlation decay” from statistical physics results in a determin-
istic polynomial time algorithm approximating per A within a factor of (1 + )n for
any > 0, fixed in advance, if A is the adjacency matrix of a constant degree expander
[11].
There is also interest in computing permanents of complex matrices [1]. The well-known Ryser's algorithm (see, for example, [16, Chapter 7]) computes the permanent of a matrix A over any field in O(n 2^n) time. A randomized approximation algorithm of Fürer [10] computes the permanent of a complex matrix within a (properly defined) relative error ε in O(3^{n/2} ε^{−2}) time. The randomized algorithm of Gurvits [12], see also [1] for an exposition, computes the permanent of a complex matrix A in time polynomial in n and 1/ε within an additive error of ε‖A‖^n, where ‖A‖ is the operator norm of A.
In this paper, we present a new approach to computing permanents of real or complex matrices A and show that if |a_ij − 1| ≤ γ for some absolute constant γ > 0 (we can choose γ = 0.19) and all i and j, then, for any ε > 0, the value of per A can be computed within relative error ε in n^{O(ln n − ln ε)} time (we say that α ∈ C approximates per A within relative error 0 < ε < 1 if per A = α(1 + ρ) where |ρ| < ε). We also discuss how the method can be extended to computing hafnians of symmetric matrices and multidimensional permanents of tensors.
1.1 The Idea of the Algorithm
Let J denote the n × n matrix filled with 1s. Given an n × n complex matrix A, we consider (a branch of) the univariate function

    f(z) = ln per(J + z(A − J)).    (1.1)

Clearly,

    f(0) = ln per J = ln n!  and  f(1) = ln per A.
Hence, our goal is to approximate f(1), and we do it by using the Taylor polynomial expansion of f at z = 0:

    f(1) ≈ f(0) + Σ_{k=1}^{m} (1/k!) (d^k/dz^k) f(z)|_{z=0}.    (1.2)
It turns out that the right hand side of (1.2) can be computed in n^{O(m)} time. We present the algorithm in Sect. 2. The quality of the approximation (1.2) depends on the location of complex zeros of the permanent.
Lemma 1.1 Suppose that there exists a real β > 1 such that

    per(J + z(A − J)) ≠ 0 for all z ∈ C satisfying |z| ≤ β.

Then for all z ∈ C with |z| ≤ 1 the value of

    f(z) = ln per(J + z(A − J))

is well defined by the choice of the branch of the logarithm for which f(0) is a real number, and the right hand side of (1.2) approximates f(1) within an additive error of

    n / ((m + 1) β^m (β − 1)).

In particular, for a fixed β > 1, to ensure an additive error of 0 < ε < 1, we can choose m = O(ln n − ln ε), which results in the algorithm for approximating per A within relative error ε in n^{O(ln n − ln ε)} time. We prove Lemma 1.1 in Sect. 2.
Thus, we have to identify a class of matrices A for which the number β > 1 of
Lemma 1.1 exists. We prove the following result.
Theorem 1.2 There is an absolute constant δ > 0 (we can choose δ = 0.195) such that if Z = (z_ij) is a complex n × n matrix satisfying

    |z_ij − 1| ≤ δ for all i, j,

then

    per Z ≠ 0.
We prove Theorem 1.2 in Sect. 3. For any matrix A = (a_ij) satisfying

    |a_ij − 1| ≤ 0.19 for all i, j,

we can choose β = 195/190 in Lemma 1.1 and thus obtain an approximation algorithm for computing per A.
The sharp value of the constant δ in Theorem 1.2 is not known to the author. A simple example of a 2 × 2 matrix

    A = ( (1+i)/2   (1−i)/2
          (1−i)/2   (1+i)/2 )

for which per A = 0 shows that in Theorem 1.2 we must have

    δ < √2/2 ≈ 0.71.
What is also not clear is whether the constant δ can improve as the size of the matrix
grows.
1.2 Question
Is it true that for any 0 < ε < 1 there is a positive integer N(ε) such that if Z = (z_ij) is a complex n × n matrix with n > N(ε) and

    |z_ij − 1| ≤ 1 − ε for all i, j,

then per Z ≠ 0?
In geometric terms, Theorem 1.2 asserts that the ℓ∞-distance from the matrix J of all 1s to the complex hypersurface per Z = 0 in C^{n×n} is bounded from below by a positive absolute constant, independent of n. The ℓ2-distance from a point to a complex algebraic variety has been studied recently in [8].
We note that for any 0 < ε < 1, fixed in advance, a deterministic polynomial time algorithm based on scaling approximates the permanent of a given n × n real matrix A = (a_ij) satisfying

    ε ≤ a_ij ≤ 1 for all i, j

within a multiplicative factor of n^{κ(ε)} for some κ(ε) > 0 [6].
1.3 Ramifications
In Sect. 4, we discuss how our approach can be used for computing hafnians of sym-
metric matrices and multidimensional permanents of tensors. The same approach can
be used for computing partition functions associated with cliques in graphs [5] and
graph homomorphisms [7], although the most general framework under which our
approach works is still not quite clear. In each case, the main problem is to come up
with a version of Theorem 1.2 bounding the complex roots of the partition function
away from the vector of all 1s. Isolating zeros of complex extensions of real parti-
tion functions is a problem studied in statistical physics and also in connection to
combinatorics, see, for example, [17].
An anonymous referee asked what “basepoint” matrices other than J can be used
in the algorithm. As follows from Sect. 2, such a base matrix (call it X ) should have
the property that the permanents of its square submatrices are efficiently computable.
One candidate for such an X would be a matrix of a small (fixed in advance) rank, cf.
[3]. On the other hand, the way we prove Theorem 1.2 in Sect. 3 would require that the
arguments of entries of X (as complex numbers) are close to each other. The current
choice of J appears to be the easiest to handle and produces the best estimates.
2 The Algorithm
2.1 The Algorithm for Approximating the Permanent
Given an n × n complex matrix A = (a_ij), we present an algorithm which computes the right hand side of the approximation (1.2) for the function f(z) defined by (1.1).
Let

    g(z) = per(J + z(A − J)),    (2.1)

so f(z) = ln g(z). Hence

    f′(z) = g′(z)/g(z)  and  g′(z) = g(z) f′(z).
Therefore, for k ≥ 1 we have

    (d^k/dz^k) g(z)|_{z=0} = Σ_{j=0}^{k−1} (k−1 choose j) [(d^j/dz^j) g(z)|_{z=0}] [(d^{k−j}/dz^{k−j}) f(z)|_{z=0}]    (2.2)

(we agree that the 0th derivative of g is g).
We note that g(0) = n!. If we compute the values of

    (d^k/dz^k) g(z)|_{z=0} for k = 1, . . . , m,    (2.3)

then the formulas (2.2) for k = 1, . . . , m provide a non-degenerate triangular system of linear equations that allows us to compute

    (d^k/dz^k) f(z)|_{z=0} for k = 1, . . . , m.

Hence our goal is to compute the values (2.3).
We have

    (d^k/dz^k) g(z)|_{z=0} = (d^k/dz^k) Σ_{σ∈S_n} ∏_{i=1}^{n} (1 + z(a_{iσ(i)} − 1)) |_{z=0}
        = Σ_{σ∈S_n} Σ_{1≤i_1,...,i_k≤n pairwise distinct} (a_{i_1 σ(i_1)} − 1) · · · (a_{i_k σ(i_k)} − 1)
        = (n − k)! Σ_{1≤i_1,...,i_k≤n; 1≤j_1,...,j_k≤n} (a_{i_1 j_1} − 1) · · · (a_{i_k j_k} − 1),

where the last sum is over all pairs of ordered k-subsets (i_1, . . . , i_k) and (j_1, . . . , j_k) of the set {1, . . . , n}. Since the last sum contains (n!/(n − k)!)² = n^{O(k)} terms, the complexity of the algorithm is indeed n^{O(m)}.
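To make the procedure concrete, here is a minimal Python sketch (the function names and the test matrix are ours): it evaluates the derivatives (2.3) by the direct enumeration above, recovers the derivatives of f = ln g from the triangular system (2.2), and sums the Taylor polynomial (1.2). The brute-force permanent is included only to verify the approximation on a tiny matrix; nothing here is optimized.

```python
import cmath
import math
from itertools import permutations

def per_exact(A):
    # Brute-force permanent; usable only for tiny n, for verification.
    n = len(A)
    return sum(math.prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def g_derivatives(A, m):
    # g^{(k)}(0) for g(z) = per(J + z(A - J)), k = 0..m, via
    # g^{(k)}(0) = (n-k)! * sum over ordered k-subsets (i_1..i_k), (j_1..j_k)
    # of prod_t (a_{i_t j_t} - 1); direct enumeration with n^{O(k)} terms.
    n = len(A)
    B = [[A[i][j] - 1 for j in range(n)] for i in range(n)]
    derivs = [float(math.factorial(n))]
    for k in range(1, m + 1):
        if k > n:  # g is a polynomial of degree at most n
            derivs.append(0.0)
            continue
        total = 0.0
        for rows in permutations(range(n), k):
            for cols in permutations(range(n), k):
                total += math.prod(B[rows[t]][cols[t]] for t in range(k))
        derivs.append(math.factorial(n - k) * total)
    return derivs

def approx_per(A, m):
    # Recover f^{(k)}(0), f = ln g, from the triangular system (2.2),
    # then evaluate the Taylor polynomial (1.2) and exponentiate.
    g = g_derivatives(A, m)
    f = [0.0] * (m + 1)
    for k in range(1, m + 1):
        s = sum(math.comb(k - 1, j) * g[j] * f[k - j] for j in range(1, k))
        f[k] = (g[k] - s) / g[0]
    log_per = math.log(g[0]) + sum(f[k] / math.factorial(k)
                                   for k in range(1, m + 1))
    return cmath.exp(log_per)

A = [[1.10, 0.95, 1.00],
     [0.90, 1.05, 1.08],
     [1.00, 1.10, 0.92]]
exact = per_exact(A)
approx = approx_per(A, 8)
```

Since the entries above are within 0.1 of 1, the zeros of g are well outside the unit disc and already m = 8 gives a very accurate value.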
As an anonymous referee pointed out, the kth number in (2.3) is k!(n − k)! times the sum of permanents of all k × k submatrices of A − J, and hence one can apply the algorithm of Friedland and Gurvits [9] to speed up the computation of (2.3). If one
uses the algorithm of Friedland and Gurvits [9], the complexity of computing (2.3) becomes m n^{m + O(1)} provided m ≤ n, which is still n^{O(m)}.
In the bit model of computation (assuming that the input matrix A is complex rational), the complexity of the algorithm is L^{O(m)}, where L is the length of the input. Indeed, the complexity of computing (2.3) is obviously bounded by L^{O(m)}, and the system (2.2) of linear equations is well conditioned, since the matrix of the system is lower triangular with diagonal entries equal to g(0) = n!.
Proof of Lemma 1.1 The function g(z) defined by (2.1) is a polynomial in z of degree d ≤ n with g(0) = n! ≠ 0, so we factor

    g(z) = g(0) ∏_{i=1}^{d} (1 − z/α_i),

where α_1, . . . , α_d are the roots of g(z). By the condition of Lemma 1.1, we have

    |α_i| ≥ β > 1 for i = 1, . . . , d.
Therefore,

    f(z) = ln g(z) = ln g(0) + Σ_{i=1}^{d} ln(1 − z/α_i) for |z| ≤ 1,    (2.4)

where we choose the branch of ln g(z) that is real at z = 0. Using the standard Taylor expansion, we obtain

    ln(1 − 1/α_i) = − Σ_{k=1}^{m} (1/k)(1/α_i)^k + ζ_m,
where

    |ζ_m| = | Σ_{k=m+1}^{+∞} (1/k)(1/α_i)^k | ≤ 1 / ((m + 1) β^m (β − 1)).
Therefore, from (2.4) we obtain

    f(1) = f(0) + Σ_{k=1}^{m} Σ_{i=1}^{d} ( −(1/k)(1/α_i)^k ) + η_m,

where

    |η_m| ≤ n / ((m + 1) β^m (β − 1)).
It remains to notice that

    Σ_{i=1}^{d} ( −(1/k)(1/α_i)^k ) = (1/k!) (d^k/dz^k) f(z)|_{z=0}.
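The tail estimate for ζ_m is easy to sanity-check numerically. The following sketch (ours, not from the paper) compares a truncated tail of the logarithmic series against the bound 1/((m + 1) β^m (β − 1)) for a few real roots with |α| ≥ β:

```python
def log_tail(alpha, m, terms=2000):
    # zeta_m: the tail sum_{k > m} (1/k)(1/alpha)^k of the series
    # for ln(1 - 1/alpha), truncated far enough to be accurate.
    return sum((1.0 / alpha) ** k / k for k in range(m + 1, m + 1 + terms))

beta = 1.2
checks = []
for alpha in [1.2, -1.5, 2.0, -3.0]:  # all satisfy |alpha| >= beta
    for m in [3, 5, 10]:
        bound = 1.0 / ((m + 1) * beta ** m * (beta - 1))
        checks.append(abs(log_tail(alpha, m)) <= bound)
```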
As an anonymous referee pointed out, it follows from the proof of Lemma 1.1 that choosing m = O(ln n − ln ε) we achieve an additive error of ε/(ln n − ln ε) in (1.2), which is slightly better than what is claimed in Sect. 1.1.
3 Proof of Theorem 1.2
Let us denote by U^{n×n}(δ) ⊂ C^{n×n} the closed polydisc

    U^{n×n}(δ) = { Z = (z_ij) : |z_ij − 1| ≤ δ for all i, j }.

Thus Theorem 1.2 asserts that per Z ≠ 0 for Z ∈ U^{n×n}(δ) and δ = 0.195.
First, we establish a simple geometric lemma.
Lemma 3.1 Let u_1, . . . , u_n ∈ R^d be nonzero vectors such that for some 0 ≤ α < π/2 the angle between any two vectors u_i and u_j does not exceed α. Let u = u_1 + · · · + u_n. Then

    ‖u‖ ≥ √(cos α) Σ_{i=1}^{n} ‖u_i‖.
Proof We have

    ‖u‖² = Σ_{1≤i,j≤n} ⟨u_i, u_j⟩ ≥ Σ_{1≤i,j≤n} ‖u_i‖ ‖u_j‖ cos α = (cos α) ( Σ_{i=1}^{n} ‖u_i‖ )²,

and the proof follows.
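A quick empirical illustration (ours): planar vectors whose directions lie in an arc of width α automatically have pairwise angles at most α, so the inequality of Lemma 3.1 can be tested on random samples in R²:

```python
import math
import random

# Sample vectors with directions in an arc of width alpha < pi/2 and
# compare ||u_1 + ... + u_n|| with sqrt(cos alpha) * (||u_1|| + ... + ||u_n||).
random.seed(0)
alpha = 1.0
worst_slack = float("inf")
for _ in range(200):
    n = random.randint(1, 8)
    vecs = []
    for _ in range(n):
        phi = random.uniform(0.0, alpha)   # direction within the arc
        r = random.uniform(0.1, 3.0)       # arbitrary positive length
        vecs.append((r * math.cos(phi), r * math.sin(phi)))
    sx = sum(v[0] for v in vecs)
    sy = sum(v[1] for v in vecs)
    lhs = math.hypot(sx, sy)
    rhs = math.sqrt(math.cos(alpha)) * sum(math.hypot(vx, vy) for vx, vy in vecs)
    worst_slack = min(worst_slack, lhs - rhs)
```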
We prove Theorem 1.2 by induction on n, using Lemma 3.1 and the following two
lemmas.
Lemma 3.2 For an n × n matrix Z = (z_ij) and j = 1, . . . , n, let Z_j be the (n − 1) × (n − 1) matrix obtained from Z by crossing out the first row and the jth column of Z. Suppose that for some δ > 0 and for some 0 < τ < 1, for any Z ∈ U^{n×n}(δ) we have per Z ≠ 0 and

    |per Z| ≥ τ Σ_{j=1}^{n} |z_{1j}| |per Z_j|.
Let A, B ∈ U^{n×n}(δ) be any two n × n matrices that differ in one column (or in one row) only. Then the angle between the two complex numbers per A and per B, interpreted as vectors in R² = C, does not exceed

    θ = 2δ / ((1 − δ)τ).
Proof Since per Z ≠ 0 for all Z ∈ U^{n×n}(δ), we may consider a branch of ln per Z defined for Z ∈ U^{n×n}(δ). Using the expansion

    per Z = Σ_{j=1}^{n} z_{1j} per Z_j,    (3.1)

we conclude that

    (∂/∂z_{1j}) ln per Z = per Z_j / per Z for j = 1, . . . , n.

Therefore, since |z_{1j}| ≥ 1 − δ for j = 1, . . . , n, we conclude that for any Z ∈ U^{n×n}(δ), we have

    Σ_{j=1}^{n} | (∂/∂z_{1j}) ln per Z | ≤ 1 / ((1 − δ)τ).    (3.2)
Since the permanent is invariant under permutations of rows, permutations of columns, and taking the transpose of the matrix, without loss of generality we may assume that the matrix B ∈ U^{n×n}(δ) is obtained from A ∈ U^{n×n}(δ) by replacing the entries a_{1j} by numbers b_{1j} such that

    |b_{1j} − 1| ≤ δ for j = 1, . . . , n.
Then

    |ln per A − ln per B| ≤ ( sup_{Z ∈ U^{n×n}(δ)} Σ_{j=1}^{n} | (∂/∂z_{1j}) ln per Z | ) · max_{j=1,...,n} |a_{1j} − b_{1j}|.

Since

    |b_{1j} − a_{1j}| ≤ 2δ for all j = 1, . . . , n,

the proof follows from (3.2).
Lemma 3.3 Suppose that for some

    0 ≤ θ < π/2 − 2 arcsin δ

and for any two matrices A, B ∈ U^{n×n}(δ) which differ in one row (or in one column), the angle between the two complex numbers per A and per B, interpreted as vectors in R² = C, does not exceed θ. Then for any matrix Z ∈ U^{(n+1)×(n+1)}(δ), we have

    |per Z| ≥ τ Σ_{j=1}^{n+1} |z_{1j}| |per Z_j|

with

    τ = √(cos(θ + 2 arcsin δ)),

where Z_j is the n × n matrix obtained from Z by crossing out the first row and the jth column.
Proof We use the first row expansion (3.1) and observe that any two matrices Z_j and Z_k can be obtained from one another by replacing one column followed by a permutation of columns. Therefore, the angle between any two complex numbers per Z_j and per Z_k does not exceed θ. Since

    −arcsin δ ≤ arg z_{1j} ≤ arcsin δ for j = 1, . . . , n + 1,

the angle between any two numbers z_{1j} per Z_j and z_{1k} per Z_k does not exceed θ + 2 arcsin δ. The proof follows by Lemma 3.1.
Proof of Theorem 1.2 One can see that for a sufficiently small δ > 0, the equation

    θ = 2δ / ((1 − δ) √(cos(θ + 2 arcsin δ)))    (3.3)

has a solution 0 < θ < π/2. Numerical computations show that we can choose δ = 0.195 and

    θ ≈ 0.7611025121.

Let

    τ = √(cos(θ + 2 arcsin δ)) ≈ 0.6365398112.
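These numerical values are easy to reproduce; a small fixed-point iteration (our sketch) converges from θ = 0, since the right hand side of (3.3) is increasing in θ with derivative below 1 near the solution:

```python
import math

def solve_theta(delta, iters=300):
    # Iterate theta <- 2*delta / ((1 - delta) * sqrt(cos(theta + 2*arcsin(delta)))),
    # i.e. Eq. (3.3), starting from theta = 0.
    theta = 0.0
    for _ in range(iters):
        tau = math.sqrt(math.cos(theta + 2.0 * math.asin(delta)))
        theta = 2.0 * delta / ((1.0 - delta) * tau)
    return theta

theta = solve_theta(0.195)
tau = math.sqrt(math.cos(theta + 2.0 * math.asin(0.195)))
```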
We proceed by induction on n. More precisely, we prove the following three statements (3.4)–(3.6) by induction on n:

(3.4) For every Z ∈ U^{n×n}(δ), we have per Z ≠ 0;
(3.5) Suppose A, B ∈ U^{n×n}(δ) are two matrices which differ by one row (or one column). Then the angle between the two complex numbers per A and per B, interpreted as vectors in R² = C, does not exceed θ;

(3.6) For a matrix Z ∈ U^{n×n}(δ), Z = (z_ij), let Z_j be the (n − 1) × (n − 1) matrix obtained by crossing out the first row and the jth column. Then

    |per Z| ≥ τ Σ_{j=1}^{n} |z_{1j}| |per Z_j|.
For n = 1, the statement (3.4) is obviously true. Moreover, the angle between any two numbers a, b ∈ U^{1×1}(δ) does not exceed

    2 arcsin δ ≈ 0.3925149004 < θ,

so (3.5) holds as well. The statement (3.6) is vacuous.
Lemma 3.3 implies that if the statement (3.5) holds for n × n matrices then the
statement (3.6) holds for (n + 1) × (n + 1) matrices.
The statement (3.6) for (n + 1) × (n + 1) matrices together with the statement (3.4)
for n × n matrices implies the statement (3.4) for (n + 1) × (n + 1) matrices.
Finally, Lemma 3.2 implies that if the statement (3.6) holds for (n + 1) × (n + 1) matrices then the angle between two complex numbers per A and per B, where A, B ∈ U^{(n+1)×(n+1)}(δ) are two matrices that differ in one row (or in one column), does not exceed

    2δ / ((1 − δ)τ) = 2δ / ((1 − δ) √(cos(θ + 2 arcsin δ))) = θ,

and hence the statement (3.5) holds for (n + 1) × (n + 1) matrices.

This concludes the proof of (3.4)–(3.6) for all positive integers n.
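The theorem is also easy to probe experimentally for small n. The following sketch (ours) samples matrices uniformly from the polydisc U^{n×n}(0.195) and checks that the brute-force permanent, normalized by n!, stays bounded away from 0, as the inductive bound (3.6) predicts:

```python
import cmath
import math
import random
from itertools import permutations

def per(A):
    # Brute-force permanent; fine for the tiny n used here.
    n = len(A)
    return sum(math.prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def rand_in_disc(center, radius):
    # Uniform random point in the closed disc of the given center and radius.
    r = radius * math.sqrt(random.random())
    t = random.uniform(0.0, 2.0 * math.pi)
    return center + r * cmath.exp(1j * t)

random.seed(1)
smallest = float("inf")
for _ in range(300):
    n = random.randint(1, 5)
    Z = [[rand_in_disc(1.0, 0.195) for _ in range(n)] for _ in range(n)]
    smallest = min(smallest, abs(per(Z)) / math.factorial(n))
```

Iterating (3.6) gives |per Z| ≥ ((1 − δ)τ)^{n−1}(1 − δ) n! ≈ 0.055 · n! for n = 5, so the observed minimum comfortably clears a threshold of 0.01.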
4 Ramifications
A similar approach can be applied to computing other quantities of interest.
4.1 Hafnians
Let A = (a_ij) be a 2n × 2n symmetric real or complex matrix. The quantity

    haf A = Σ_{{i_1,j_1},...,{i_n,j_n}} a_{i_1 j_1} · · · a_{i_n j_n},

where the sum is taken over all (2n)!/(n! 2^n) unordered partitions of the set {1, . . . , 2n} into n pairwise disjoint unordered pairs {i_1, j_1}, . . . , {i_n, j_n}, is called the hafnian of A; see, for example, [16, Section 8.2]. For any n × n matrix A we have
    haf ( 0    A
          Aᵀ   0 ) = per A
and hence computing the permanent of an n × n matrix reduces to computing the hafnian of a symmetric 2n × 2n matrix. The computational complexity of hafnians is understood less well than that of permanents. Unlike in the case of the permanent, no fully polynomial (randomized or deterministic) approximation scheme is known to compute the hafnian of a non-negative real symmetric matrix. Also unlike in the case of the permanent, no deterministic polynomial time algorithm is known that approximates the hafnian of a 2n × 2n non-negative symmetric matrix within a factor of c^n, where c > 0 is an absolute constant. On the other hand, there is a polynomial time randomized algorithm, based on the representation of the hafnian as the expectation of the determinant of a random matrix, which approximates the hafnian of a given non-negative symmetric 2n × 2n matrix within a factor of c^n, where c ≈ 0.56 [4].
Also, for any 0 < ε < 1 fixed in advance, there is a deterministic polynomial time algorithm based on scaling, which, given a 2n × 2n symmetric matrix A = (a_ij) satisfying

    ε ≤ a_ij ≤ 1 for all i, j,

computes haf A within a multiplicative factor of n^{κ(ε)} for some κ(ε) > 0 [6].
With minimal changes, the approach of this paper can be applied to computing hafnians. Namely, let J denote the 2n × 2n matrix filled with 1s and let us define

    f(z) = ln haf(J + z(A − J)).

Then

    f(0) = ln haf J = ln ( (2n)! / (n! 2^n) )  and  f(1) = ln haf A
and one can use the Taylor polynomial approximation (1.2) to estimate f(1). As in Sect. 2, one can compute the right hand side of (1.2) in n^{O(m)} time. The statement and the proof of Theorem 1.2 carry over to hafnians almost verbatim. Namely, let δ > 0 be a real number for which Eq. (3.3) has a solution 0 < θ < π/2 (hence one can choose δ = 0.195). Then haf Z ≠ 0 as long as Z = (z_ij) is a 2n × 2n symmetric complex matrix satisfying

    |z_ij − 1| ≤ δ for all i, j.
Instead of the row expansion of the permanent (3.1) used in Lemmas 3.2 and 3.3, one should use the row expansion of the hafnian

    haf Z = Σ_{j=2}^{2n} z_{1j} haf Z_j,
where Z_j is the symmetric (2n − 2) × (2n − 2) matrix obtained from Z by crossing out the first and the jth rows and the first and the jth columns. As in Sect. 2, we obtain an algorithm of n^{O(ln n − ln ε)} complexity for approximating haf Z within relative error ε > 0, where Z = (z_ij) is a 2n × 2n symmetric complex matrix satisfying

    |z_ij − 1| ≤ γ for all i, j,

and γ > 0 is an absolute constant (one can choose γ = 0.19).
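The row expansion above also yields a straightforward (exponential-time) recursive hafnian, which can be used to check the reduction from permanents to hafnians on small matrices; a sketch (ours, for verification only):

```python
import math
import random
from itertools import permutations

def per(A):
    # Brute-force permanent for comparison.
    n = len(A)
    return sum(math.prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def haf(A):
    # Hafnian by the first row expansion: pair index 0 with each j > 0,
    # cross out both indices and recurse; haf of the empty matrix is 1.
    m = len(A)
    if m == 0:
        return 1.0
    total = 0.0
    for j in range(1, m):
        rest = [r for r in range(1, m) if r != j]
        total += A[0][j] * haf([[A[a][b] for b in rest] for a in rest])
    return total

random.seed(2)
n = 3
A = [[random.uniform(0.5, 1.5) for _ in range(n)] for _ in range(n)]
# Symmetric block matrix [[0, A], [A^T, 0]]; its hafnian equals per A.
block = [[0.0] * n + A[i] for i in range(n)] + \
        [[A[i][j] for i in range(n)] + [0.0] * n for j in range(n)]
```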
4.2 Multidimensional Permanents
Let us fix an integer ν ≥ 2 and let

    A = (a_{i_1 ... i_ν}), 1 ≤ i_1, . . . , i_ν ≤ n,

be a ν-dimensional cubical n × · · · × n array of real or complex numbers. We define

    PER A = Σ_{σ_1,...,σ_{ν−1} ∈ S_n} ∏_{i=1}^{n} a_{i σ_1(i) ... σ_{ν−1}(i)}.
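In code, the definition reads as follows (our brute-force sketch, feasible only for very small n and ν); for ν = 2 it reduces to the usual permanent, and PER of the all-ones array J equals (n!)^{ν−1}:

```python
import math
from itertools import permutations, product

def PER(A, nu):
    # Brute-force nu-dimensional permanent of an n x ... x n nested-list
    # array: sum over (nu - 1)-tuples of permutations of the products
    # a[i][sigma_1(i)]...[sigma_{nu-1}(i)].
    n = len(A)
    total = 0
    for sigmas in product(permutations(range(n)), repeat=nu - 1):
        term = 1
        for i in range(n):
            entry = A[i]
            for s in sigmas:
                entry = entry[s[i]]
            term *= entry
        total += term
    return total

J3 = [[[1] * 3 for _ in range(3)] for _ in range(3)]  # all-ones, n = 3, nu = 3
M = [[1, 2], [3, 4]]                                  # ordinary 2 x 2 matrix
```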
If ν = 2 then A is an n × n matrix and PER A = per A. For ν > 2, it is already an NP-hard problem to tell PER A from 0 even if a_{i_1...i_ν} ∈ {0, 1}, since the problem reduces to detecting a perfect matching in a hypergraph; see, for example, [2, Problem SP1]. However, for any 0 < ε < 1, fixed in advance, there is a polynomial time deterministic algorithm based on scaling, which, given a real array A satisfying

    ε ≤ a_{i_1...i_ν} ≤ 1 for all 1 ≤ i_1, . . . , i_ν ≤ n,

computes PER A within a multiplicative factor of n^{κ(ε,ν)} for some κ(ε, ν) > 0 [6].
With some modifications, the method of this paper can be applied to computing this multidimensional version of the permanent. Namely, let J be the array filled with 1s and let us define

    f(z) = ln PER(J + z(A − J)).

Then

    f(0) = ln PER J = (ν − 1) ln n!  and  f(1) = ln PER A
and one can use the Taylor polynomial approximation (1.2) to estimate f(1). As in Sect. 2, one can compute the right hand side of (1.2) in n^{O(m)} time, where the implicit constant in “O(m)” depends on ν. The proof of Theorem 1.2 carries over to multidimensional permanents with some modifications. Namely, for some sufficiently small δ_ν > 0 the equation
    θ = 2δ_ν / ((1 − δ_ν) √(cos((ν − 1)θ + 2 arcsin δ_ν)))

has a solution θ ≥ 0 such that (ν − 1)θ + 2 arcsin δ_ν < π/2. For ν = 2, we get Eq. (3.3) with a possible choice of δ_2 = 0.195, while for ν = 3 we can choose δ_3 = 0.125 and for ν = 4 we can choose δ_4 = 0.093. Then PER Z ≠ 0 as long as Z = (z_{i_1...i_ν}) is an array of complex numbers satisfying

    |z_{i_1...i_ν} − 1| ≤ δ_ν for all 1 ≤ i_1, . . . , i_ν ≤ n.
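The constants δ_ν can be checked by the same fixed-point iteration as for (3.3), now with the factor (ν − 1) in front of θ; a sketch (ours), which also verifies the side condition (ν − 1)θ + 2 arcsin δ_ν < π/2:

```python
import math

def solve_theta_nu(delta, nu, iters=500):
    # Iterate theta <- 2*delta / ((1 - delta) * sqrt(cos((nu-1)*theta + 2*arcsin(delta)))),
    # returning None if the cosine argument ever leaves [0, pi/2).
    theta = 0.0
    for _ in range(iters):
        arg = (nu - 1) * theta + 2.0 * math.asin(delta)
        if arg >= math.pi / 2:
            return None
        theta = 2.0 * delta / ((1.0 - delta) * math.sqrt(math.cos(arg)))
    return theta

theta3 = solve_theta_nu(0.125, 3)
theta4 = solve_theta_nu(0.093, 4)
```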
We proceed as in the proof of Theorem 1.2, only instead of the first row expansion of the permanent (3.1) used in Lemmas 3.2 and 3.3, we use the first index expansion

    PER Z = Σ_{1≤j_2,...,j_ν≤n} z_{1 j_2 ... j_ν} PER Z_{j_2...j_ν},

where Z_{j_2...j_ν} is the ν-dimensional array of size (n − 1) × · · · × (n − 1) obtained from Z by crossing out the section with the first index equal to 1, the section with the second index equal to j_2, and so forth, concluding with crossing out the section with the last index equal to j_ν. As in Sect. 2, we obtain an algorithm of n^{O(ln n − ln ε)} complexity for approximating PER Z within relative error ε > 0, where Z is a ν-dimensional cubical n × · · · × n array of complex numbers satisfying

    |z_{i_1...i_ν} − 1| ≤ γ_ν for all 1 ≤ i_1, . . . , i_ν ≤ n,

and 0 < γ_ν < δ_ν are absolute constants (one can choose γ_2 = 0.19, γ_3 = 0.12 and γ_4 = 0.09).
Acknowledgments The author is grateful to the anonymous referees for their careful reading of the paper,
useful suggestions and interesting questions.
References
1. S. Aaronson and A. Arkhipov, The computational complexity of linear optics, Theory of Computing 9
(2013), 143–252.
2. G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A. Marchetti-Spaccamela, and M. Protasi, Complexity and Approximation. Combinatorial Optimization Problems and their Approximability Properties, Springer-Verlag, Berlin, 1999.
3. A. Barvinok, Two algorithmic results for the traveling salesman problem, Mathematics of Operations
Research 21 (1996), no. 1, 65–84.
4. A. Barvinok, Polynomial time algorithms to approximate permanents and mixed discriminants within
a simply exponential factor, Random Structures & Algorithms 14 (1999), no. 1, 29–61.
5. A. Barvinok, Computing the partition function for cliques in a graph, preprint arXiv:1405.1974 (2014).
6. A. Barvinok and A. Samorodnitsky, Computing the partition function for perfect matchings in a
hypergraph, Combinatorics, Probability and Computing 20 (2011), no. 6, 815–835.
7. A. Barvinok and P. Soberón, Computing the partition function for graph homomorphisms, preprint
arXiv:1406.1771 (2014).
8. J. Draisma, E. Horobet, G. Ottaviani, B. Sturmfels and R.R. Thomas, The Euclidean distance degree
of an algebraic variety, preprint arXiv:1309.0049 (2013).
9. S. Friedland and L. Gurvits, Generalized Friedland-Tverberg inequality: applications and extensions, preprint arXiv:math/0603410 (2006).
10. M. Fürer, Approximating permanents of complex matrices, Proceedings of the Thirty-Second Annual
ACM Symposium on Theory of Computing, ACM, New York 2000, pp. 667–669.
11. D. Gamarnik and D. Katz, A deterministic approximation algorithm for computing the permanent of
a 0, 1 matrix, Journal of Computer and System Sciences 76 (2010), no. 8, 879–883.
12. L. Gurvits, On the complexity of mixed discriminants and related problems, Mathematical Foundations
of Computer Science 2005, Lecture Notes in Computer Science, vol. 3618, Springer, Berlin, 2005, pp.
447–458.
13. L. Gurvits and A. Samorodnitsky, Bounds on the permanent and some applications, preprint
arXiv:1408.0976 (2014).
14. M. Jerrum, A. Sinclair and E. Vigoda, A polynomial-time approximation algorithm for the permanent
of a matrix with nonnegative entries, Journal of the ACM 51 (2004), no. 4, 671–697.
15. N. Linial, A. Samorodnitsky, and A. Wigderson, A deterministic strongly polynomial algorithm for
matrix scaling and approximate permanents, Combinatorica 20 (2000), no. 4, 545–568.
16. H. Minc, Permanents. Encyclopedia of Mathematics and its Applications, Vol. 6, Addison-Wesley
Publishing Co., Reading, Mass., 1978.
17. A.D. Scott and A.D. Sokal, The repulsive lattice gas, the independent-set polynomial, and the Lovász
local lemma, Journal of Statistical Physics 118 (2005), no. 5–6, 1151–1261.
18. L.G. Valiant, The complexity of computing the permanent, Theoretical Computer Science 8 (1979),
no. 2, 189–201.