Convexity
Georgia Tech ECE 6270 Notes by M. Davenport and J. Romberg. Last updated 9:47, August 29, 2022
Convex sets
In this section, we will be introduced to some of the mathematical
fundamentals of convex sets. In order to motivate some of the defini-
tions, we will look at the closest point problem from several different
angles. The tools and concepts we develop here, however, have many
other applications both in this course and beyond.
A set C ⊂ RN is convex if for every x, y ∈ C and every θ ∈ [0, 1], we have θx + (1 − θ)y ∈ C; that is, the line segment connecting any two points in C lies entirely within C.
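This definition is easy to spot-check numerically. The short Python sketch below is illustrative only: the membership test in_set and the choice of the unit ℓ2 ball in R3 are placeholders for whatever set is of interest. It draws random pairs of points from the set and verifies that random convex combinations remain in the set; finding a violation would certify nonconvexity, while passing the check is merely consistent with convexity.

import numpy as np

def in_set(x, tol=1e-9):
    # Membership test for an example convex set: the unit l2 ball in R^3.
    return np.linalg.norm(x) <= 1 + tol

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.uniform(-1, 1, size=3), rng.uniform(-1, 1, size=3)
    if not (in_set(x) and in_set(y)):
        continue                      # only test pairs that are actually in the set
    theta = rng.uniform()
    z = theta * x + (1 - theta) * y   # a convex combination of x and y
    assert in_set(z), "found a convex combination outside the set"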
Examples of convex (and nonconvex) sets
• Subspaces. Recall that if S is a subspace of RN , then
x, y ∈ S ⇒ ax + by ∈ S for all a, b ∈ R.
So S is clearly convex.
• Affine sets. Affine sets are just subspaces that have been offset from the origin:
{x ∈ RN : x = y + v, y ∈ T }, T = subspace, v ∈ RN fixed.
Affine sets are also clearly convex.
• Boxes. The set of vectors whose entries obey upper and lower bounds,
C = {x ∈ RN : ℓ1 ≤ x1 ≤ u1, ℓ2 ≤ x2 ≤ u2, . . . , ℓN ≤ xN ≤ uN },
is convex.
• The set
{x ∈ RN : x1 + x2 + · · · + xN ≤ 1, x1, x2, . . . , xN ≥ 0}
is convex.
• Any subset of RN that can be expressed as a set of linear inequality constraints,
{x ∈ RN : Ax ≤ b},
is convex. Both of the previous examples fall into this category — for the previous example, take

A = [  1    1    1   · · ·    1
      −1    0    0   · · ·    0
       0   −1    0   · · ·    0
       ⋮    ⋮          ⋱      ⋮
       0    0    0   · · ·   −1  ],        b = [ 1, 0, 0, . . . , 0 ]^T;

that is, the first row of A encodes the constraint x1 + x2 + · · · + xN ≤ 1, and the remaining rows encode −xn ≤ 0 for each n. (A small numerical sanity check of this construction is sketched after these examples.)
• Norm balls. For any valid norm ‖ · ‖ on RN , the ball of radius r,
Br = {x : ‖x‖ ≤ r},
is a convex set.
• Ellipsoids. An ellipsoid is a set of the form
{x ∈ RN : (x − xc)^T P^{−1} (x − xc) ≤ 1},
where xc ∈ RN is the center and P is a symmetric positive definite matrix; ellipsoids are also convex.
• The set
{x ∈ R2 : x1² − 2x1 − x2 + 1 ≥ 0}
is not convex.
• The set
{x ∈ R2 : x1² − 2x1 − x2 + 1 = 0}
is certainly not convex.
• Sets defined by linear inequality constraints where only some of
the constraints have to hold are in general not convex. For
example
{x ∈ R2 : x1 − x2 ≤ −1 and x1 + x2 ≤ −1}
is convex, while
{x ∈ R2 : x1 − x2 ≤ −1 or x1 + x2 ≤ −1}
is not convex.
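To make the Ax ≤ b description above concrete, here is a brief sketch (illustrative; N = 4 and the specific points are chosen arbitrarily) that builds the A and b for the example {x : x1 + · · · + xN ≤ 1, xn ≥ 0} and checks that a convex combination of two feasible points remains feasible.

import numpy as np

N = 4
# First row encodes x1 + ... + xN <= 1; the remaining rows encode -xn <= 0.
A = np.vstack([np.ones((1, N)), -np.eye(N)])
b = np.concatenate([[1.0], np.zeros(N)])

def in_polyhedron(x, tol=1e-9):
    return bool(np.all(A @ x <= b + tol))

x = np.array([0.10, 0.20, 0.30, 0.10])
y = np.array([0.25, 0.25, 0.25, 0.25])
theta = 0.6
print(in_polyhedron(x), in_polyhedron(y))             # True True
print(in_polyhedron(theta * x + (1 - theta) * y))     # True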
Cones
A cone is a set C such that
x∈C ⇒ θx ∈ C for all θ ≥ 0.
Convex cones are sets which are both convex and a cone. C is a
convex cone if
x1 , x2 ∈ C ⇒ θ1x1 + θ2x2 ∈ C for all θ1, θ2 ≥ 0.
Given any two vectors x1, x2, the set of all linear combinations with non-negative
weights makes a wedge. For practice, sketch the region below that
consists of all such combinations of x1 and x2:
[Figure: two vectors x1 and x2 in the plane, for sketching the set of their non-negative combinations.]
Examples:
Non-negative orthant. The set of vectors whose entries are non-
negative,
RN+ = {x ∈ RN : xn ≥ 0, for n = 1, . . . , N },
is a proper cone.
Positive semi-definite cone. The set of N × N symmetric matri-
ces with non-negative eigenvalues is a proper cone.
Non-negative polynomials. Vectors of coefficients of non-negative
polynomials on [0, 1],
{x ∈ RN : x1 + x2 t + x3 t² + · · · + xN t^(N−1) ≥ 0 for all 0 ≤ t ≤ 1},
form a proper cone. Notice that it is not necessary that all the
xn ≥ 0; for example t − t² (x1 = 0, x2 = 1, x3 = −1) is
non-negative on [0, 1]. (A quick numerical membership check for this cone appears in the sketch after these examples.)
Norm cones. The subset of RN+1 defined by
{(x, t) : x ∈ RN , t ∈ R, ‖x‖ ≤ t}
¹ See the Technical Details for a precise definition.
5
Georgia Tech ECE 6270 Notes by M. Davenport and J. Romberg. Last updated 9:47, August 29, 2022
is a proper cone for any valid norm ‖ · ‖; note that the constraint ‖x‖ ≤ t forces t ≥ 0. (We encountered this cone in our discussion about SOCPs in the last set of notes.)
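Membership in the cones above is easy to test numerically. The sketch below is illustrative: the polynomial test uses a fine grid on [0, 1] rather than an exact certificate, and the helper names are placeholders. It checks the non-negative polynomial example t − t² and a second-order (norm) cone membership.

import numpy as np

def in_poly_cone(x, grid=np.linspace(0.0, 1.0, 1001)):
    # Approximate check that x1 + x2*t + ... + xN*t^(N-1) >= 0 on [0, 1]
    # by evaluating the polynomial on a grid (np.polyval wants highest degree first).
    return bool(np.all(np.polyval(x[::-1], grid) >= -1e-12))

def in_norm_cone(x, t):
    # Second-order (norm) cone membership: ||x||_2 <= t.
    return bool(np.linalg.norm(x) <= t + 1e-12)

print(in_poly_cone(np.array([0.0, 1.0, -1.0])))   # t - t^2: True
print(in_poly_cone(np.array([0.0, -1.0, 0.0])))   # -t: False
print(in_norm_cone(np.array([3.0, 4.0]), 5.0))    # True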
A proper cone K defines a generalized inequality (a partial ordering) on RN :
x ⪯K y when y − x ∈ K.
We will typically just write ⪯ when the context makes the cone clear. In fact,
for RN+ we will just write x ≤ y (as we did above) to mean that
the entries in x are component-by-component upper-bounded by the
entries in y.
Generalized inequalities behave in many ways like the usual ordering on R; for example, they are preserved under addition:
x ⪯ y, u ⪯ v ⇒ x + u ⪯ y + v.
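Numerically, these generalized inequalities reduce to simple tests on y − x. A small sketch (illustrative helper names) for the non-negative orthant and the positive semi-definite cone:

import numpy as np

def leq_orthant(x, y):
    # x <= y with respect to RN+: every entry of y - x is non-negative.
    return bool(np.all(y - x >= 0))

def leq_psd(X, Y):
    # X <= Y with respect to the PSD cone: Y - X (symmetric) has non-negative eigenvalues.
    return bool(np.all(np.linalg.eigvalsh(Y - X) >= -1e-9))

rng = np.random.default_rng(1)
x, u = rng.standard_normal(5), rng.standard_normal(5)
y, v = x + rng.uniform(0, 1, 5), u + rng.uniform(0, 1, 5)
print(leq_orthant(x, y), leq_orthant(u, v), leq_orthant(x + u, y + v))  # preserved under addition

A = rng.standard_normal((3, 3))
X = A @ A.T                        # PSD by construction
print(leq_psd(X, X + np.eye(3)))   # True, since (X + I) - X = I is PSD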
Affine sets
Recall the definition of a linear subspace: a set T ⊂ RN is a subspace
if
x, y ∈ T ⇒ αx + βy ∈ T , for all α, β ∈ R.
Affine sets (also referred to as affine spaces) are not fundamentally
different than subspaces. An affine set S is simply a subspace that
has been offset from the origin:
S = T + v0,
for some subspace T and v0 ∈ RN . (It thus makes sense to talk
about the dimension of S as being the dimension of this underlying
subspace.) We can recast this as a definition similar to the above: a
set S ⊂ RN is affine if
x, y ∈ S ⇒ λx + (1 − λ)y ∈ S, for all λ ∈ R.
Just as we can find the smallest subspace that contains a finite set
of vectors {v1, . . . , vK } by taking their span,
Span({v1, . . . , vK }) = {x ∈ RN : x = α1 v1 + · · · + αK vK , αk ∈ R},
we can define the affine hull (the smallest affine set that contains
the vectors) as
Aff({v1, . . . , vK }) = {x ∈ RN : x = λ1 v1 + · · · + λK vK , λk ∈ R, λ1 + · · · + λK = 1}.
Example: Let
v1 = (1, 0),    v2 = (1/2, 1/2).
Then Span({v1, v2}) is all of R2 while Aff({v1, v2}) is the line that
connects v1 and v2,
Aff({v1, v2}) = {x ∈ R2 : x1 + x2 = 1}.
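A quick numerical illustration of the difference (a minimal sketch using the v1, v2 from the example): affine combinations always land on the line x1 + x2 = 1, while unrestricted linear combinations reach any point of R2.

import numpy as np

v1, v2 = np.array([1.0, 0.0]), np.array([0.5, 0.5])
rng = np.random.default_rng(2)

# Affine combinations: weights sum to 1, so every point satisfies x1 + x2 = 1.
lam = rng.uniform(-5, 5, size=100)
pts = np.outer(lam, v1) + np.outer(1 - lam, v2)
print(np.allclose(pts.sum(axis=1), 1.0))                     # True

# Linear combinations: v1, v2 are independent, so any target in R^2 is reachable.
target = np.array([3.0, -2.0])
alpha = np.linalg.solve(np.column_stack([v1, v2]), target)
print(np.allclose(alpha[0] * v1 + alpha[1] * v2, target))    # True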
Subspaces and affine sets can also be described implicitly as solution sets of systems of linear equations. A subspace T is the null space of some matrix A,
x ∈ T ⇔ Ax = 0,
and the corresponding affine set S = T + v0 is the solution set of the same system with a nonzero right-hand side,
x ∈ S ⇔ Ax = b, where b = Av0.
An important special case is a hyperplane, a set of the form
{x ∈ RN : ⟨x, a⟩ = t}
8
Georgia Tech ECE 6270 Notes by M. Davenport and J. Romberg. Last updated 9:47, August 29, 2022
for some fixed vector a ≠ 0 and scalar t. When t = 0, this set is
a subspace of dimension N − 1, and contains all vectors that are
orthogonal to a. For t ≠ 0, this is an affine space consisting of all
the vectors orthogonal to a (call this set A⊥) offset by some x0:
{x ∈ RN : ⟨x, a⟩ = t} = {x ∈ RN : x = x0 + y, y ∈ A⊥},
for any x0 with ⟨x0, a⟩ = t. We might take x0 = t · a/‖a‖₂², for
instance. The point is, a is a normal vector of the set.
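For a concrete example with a = (2, 1) and t = 5 (values chosen here purely for illustration), a short sketch confirms that x0 = t · a/‖a‖₂² lies on the hyperplane and that adding any vector orthogonal to a keeps us on it:

import numpy as np

a, t = np.array([2.0, 1.0]), 5.0
x0 = t * a / np.dot(a, a)                        # one particular point on the hyperplane
print(np.isclose(np.dot(x0, a), t))              # True: <x0, a> = t

y = np.array([1.0, -2.0])                        # <y, a> = 0, i.e. y is orthogonal to a
print(np.isclose(np.dot(x0 + 3.7 * y, a), t))    # True: still on the hyperplane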
[Figures: hyperplanes {x ∈ R2 : ⟨x, a⟩ = t} sketched in the plane for several normal vectors a and offsets t ∈ {−1, 1, 5}.]
Separating hyperplanes
If two convex sets are disjoint, then there is a hyperplane that sepa-
rates them. Here is a picture:
[Figure: two disjoint convex sets C and D separated by a hyperplane.]
(We will make this statement precise in the next section.) It is also not true in general if one of the sets is
nonconvex; observe:
[Figure: two disjoint sets, one of them nonconvex, that cannot be separated by a hyperplane.]
More precisely, a hyperplane H = {x : ⟨x, a⟩ = t} separates C and D if
⟨c, a⟩ ≤ t ≤ ⟨d, a⟩ for all c ∈ C, d ∈ D.
Note that we can switch the roles of C and D above, i.e. we also say
H separates C and D if ⟨d, a⟩ ≤ t ≤ ⟨c, a⟩ for all c ∈ C, d ∈ D.
Strong separating hyperplane theorem
Let C and D be disjoint nonempty closed convex sets and let C
be bounded. Then there is a hyperplane that strongly separates
C and D.
To prove this, let c ∈ C and d ∈ D be a pair of points achieving the minimum distance between the two sets (such a pair exists since C is compact and D is closed), set
a = d − c,   t = (‖d‖₂² − ‖c‖₂²)/2,   ε = ‖c − d‖₂²/2,
and consider the affine function f (x) = ⟨x, a⟩ − t. We will show that for any point u ∈ D, we have f (u) ≥ ε.
Here is a picture to help visualize the proof:
[Figure: the sets C and D with their closest points c ∈ C and d ∈ D, together with the separating hyperplane {x : ⟨x, a⟩ = t}, where a = d − c and t = (‖d‖₂² − ‖c‖₂²)/2.]
First, we prove the basic geometric fact that for any two vectors x, y,
if ‖x + θy‖₂ ≥ ‖x‖₂ for all θ ∈ [0, 1], then ⟨x, y⟩ ≥ 0.   (2)
To establish this, we expand the norm as
‖x + θy‖₂² = ‖x‖₂² + θ²‖y‖₂² + 2θ⟨x, y⟩,
from which we can immediately deduce that
(θ/2)‖y‖₂² + ⟨x, y⟩ ≥ 0 for all θ ∈ [0, 1]   ⇒   ⟨x, y⟩ ≥ 0.
Now take any u ∈ D. Since D is convex, d + θ(u − d) ∈ D for all θ ∈ [0, 1], and since d is the point in D closest to c, we have ‖(d − c) + θ(u − d)‖₂ ≥ ‖d − c‖₂ for all θ ∈ [0, 1], and so by (2), we know that
⟨d − c, u − d⟩ ≥ 0.
This means
f (u) = ⟨u, d − c⟩ − (‖d‖₂² − ‖c‖₂²)/2
      = ⟨u, d − c⟩ − ⟨d + c, d − c⟩/2
      = ⟨u − (d + c)/2, d − c⟩
      = ⟨u − d + d/2 − c/2, d − c⟩
      = ⟨u − d, d − c⟩ + ‖c − d‖₂²/2
      ≥ ‖c − d‖₂²/2 = ε,
where the last inequality uses ⟨u − d, d − c⟩ ≥ 0 from above.
The argument that f (v) ≤ −ε for every v ∈ C is exactly the same.
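The construction in this proof is easy to check numerically. In the sketch below (illustrative; the two sets are unit disks in R2 whose closest points c and d are known in closed form), the function f is at most −ε on one set and at least ε on the other:

import numpy as np

# C is the unit disk centered at the origin, D is the unit disk centered at (4, 0).
# Their closest points are c = (1, 0) and d = (3, 0).
c, d = np.array([1.0, 0.0]), np.array([3.0, 0.0])
a = d - c
t = (np.dot(d, d) - np.dot(c, c)) / 2.0
eps = np.dot(c - d, c - d) / 2.0
f = lambda x: x @ a - t

rng = np.random.default_rng(3)
pts = rng.standard_normal((2000, 2))
pts /= np.maximum(np.linalg.norm(pts, axis=1, keepdims=True), 1.0)  # samples in the unit disk

print(np.all(f(pts) <= -eps + 1e-12))                           # f <= -eps on C
print(np.all(f(pts + np.array([4.0, 0.0])) >= eps - 1e-12))     # f >= eps on D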
We will not prove it here, but there is an even more interesting result
that says that the sets C and D do not even have to be disjoint —
they can intersect at one or more points along their boundaries as
shown here:
Separating hyperplane theorem
Nonempty convex sets C, D ⊂ RN can be (properly) separated by
a hyperplane if and only if their relative interiors are disjoint:
relint(C) ∩ relint(D) = ∅.
See the Technical Details for what exactly is meant by “relative interior”; roughly speaking, it is everything not on the natural boundary of the set, once we account for the fact that the set might have dimension smaller than N .
Supporting hyperplanes
A direct consequence of the separating hyperplane theorem is that
every point on the boundary of a convex set C can be separated from
its interior.
If a ≠ 0 satisfies ⟨x, a⟩ ≤ ⟨x0, a⟩ for all x ∈ C, then the hyperplane {x : ⟨x, a⟩ = ⟨x0, a⟩} is called a supporting hyperplane of C at x0. The supporting hyperplane theorem states that if C is convex, then there is at least one supporting hyperplane at every point x0 on the boundary of C.
Proof of this theorem (and the general separating hyperplane theo-
rem above) can be found in [Roc70, Chap. 11].
Note that there might be more than one supporting hyperplane at a given boundary point x0 (this happens, for example, at a corner of the set).
The closest point problem
Given a point x0 ∈ RN and a closed convex set C, the closest point (projection) problem is to find the point in C that minimizes ‖x0 − x‖₂ over x ∈ C. Let's recall how we solve this problem in the special case where C := T is a K-dimensional subspace. In this case, the solution
x̂ = PT (x0) is unique, and is characterized by the orthogonality
principle:
x0 − x̂ ⊥ T
meaning that ⟨y, x0 − x̂⟩ = 0 for all y ∈ T . The proof of this fact
is reviewed in the Technical Details section at the end of these notes.
If {v1, . . . , vK } is a basis for T , then we can write x̂ = α1 v1 + · · · + αK vK , and solving for the expansion coefficients αk is the same as solving for x̂.
We know that
⟨x0 − (α1 v1 + · · · + αK vK ), vk ⟩ = 0, for k = 1, . . . , K,
Collecting the basis vectors as the columns of an N × K matrix V = [v1 · · · vK ], these K equations become V^T V α = V^T x0, and so α̂ = (V^T V )^{−1} V^T x0. Using these expansion coefficients to reconstruct x̂ yields
x̂ = V α̂ = V (V^T V )^{−1} V^T x0.
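A minimal numerical sketch of this least-squares computation (with a random Gaussian basis V standing in for the subspace T , which is full rank with probability one) confirms the orthogonality principle for the result:

import numpy as np

rng = np.random.default_rng(4)
N, K = 6, 3
V = rng.standard_normal((N, K))    # columns form a basis for a K-dimensional subspace T
x0 = rng.standard_normal(N)

alpha = np.linalg.solve(V.T @ V, V.T @ x0)   # normal equations: V^T V alpha = V^T x0
xhat = V @ alpha                             # projection of x0 onto T

# Orthogonality principle: the residual x0 - xhat is orthogonal to every basis vector.
print(np.allclose(V.T @ (x0 - xhat), 0.0))   # True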
The same approach extends to projecting onto an affine set C = T + v0 = {x : x = y + v0, y ∈ T }: we can shift by v0, project onto the subspace T , and then shift back, so that PC (x0) = v0 + PT (x0 − v0).
Uniqueness of closest point
If C ⊂ RN is closed and convex, then for any x0, the program
minimize ‖x0 − x‖₂ over x ∈ C     (4)
has exactly one minimizer x̂ = PC (x0).
First, let's argue that at least one minimizer to (4) exists. Let x′ be any
point in C, and set B = {x : ‖x − x0‖₂ ≤ ‖x0 − x′‖₂}. By
construction, if a minimizer exists, it must be in the set C ∩ B. Since
C ∩ B is closed and bounded and ‖x0 − y‖₂ is a continuous function of
y, by the Weierstrass extreme value theorem we know that there is at
least one point in the set where this function achieves its minimum.
Hence there exists at least one solution x̂ to (4).
We can now argue that x̂ is the only minimizer of (4). Consider first
all the points y ∈ C such that y − x0 is co-aligned with x̂ − x0. Let
I = {α ∈ R : x̂ + α(x0 − x̂) ∈ C}.
(Note that if y = x̂ + α(x0 − x̂), then y − x0 = (1 − α)(x̂ − x0)
and so the two difference vectors are co-aligned.) Since C is convex
and closed, this is a closed interval of the real line (that contains at
least the point α = 0). The function
g(α) = ‖x0 − x̂ − α(x0 − x̂)‖₂² = (1 − α)²‖x0 − x̂‖₂²,
captures the squared distance from x0 to the corresponding point y = x̂ + α(x0 − x̂) for every value of α.
Since, as a function of α, this is a parabola with strictly positive second
derivative, it attains its minimum over the interval I at exactly one point,
and by construction this is α = 0. So any y ≠ x̂ with y − x0
co-aligned with x̂ − x0 cannot be another minimizer of (4).
Now let y be any point in C such that the difference vectors are not
co-aligned. We will show that y cannot minimize (4) because the
point x̂/2 + y/2 ∈ C is strictly closer to x0. We have
‖x0 − x̂/2 − y/2‖₂² = ‖(x0 − x̂)/2 + (x0 − y)/2‖₂²
  = ‖x0 − x̂‖₂²/4 + ‖x0 − y‖₂²/4 + ⟨x0 − x̂, x0 − y⟩/2
  < ‖x0 − x̂‖₂²/4 + ‖x0 − y‖₂²/4 + ‖x0 − x̂‖₂ ‖x0 − y‖₂/2
  = (‖x0 − x̂‖₂/2 + ‖x0 − y‖₂/2)²
  ≤ ‖x0 − y‖₂²,
where the strict inequality is Cauchy–Schwarz (strict because x0 − x̂ and x0 − y are not co-aligned), and the final step uses ‖x0 − x̂‖₂ ≤ ‖x0 − y‖₂, which holds because x̂ is a minimizer.
Obtuseness principle
PC (x0) = x̂ if and only if
⟨y − x̂, x0 − x̂⟩ ≤ 0 for all y ∈ C,
that is, if and only if the vector from x̂ to x0 makes an obtuse (or right) angle with the vector from x̂ to every other point of C.
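The obtuseness principle is easy to verify numerically for sets where the projection is known in closed form. In the sketch below (illustrative; C is the box [0, 1]^4, whose projection is just entrywise clipping), the inequality holds for thousands of sampled points y ∈ C:

import numpy as np

lo, hi = np.zeros(4), np.ones(4)        # C is the box [0, 1]^4
project = lambda z: np.clip(z, lo, hi)  # projection onto a box is entrywise clipping

rng = np.random.default_rng(5)
x0 = 3.0 * rng.standard_normal(4)
xhat = project(x0)

Y = rng.uniform(0.0, 1.0, size=(5000, 4))          # sampled points y in C
print(np.all((Y - xhat) @ (x0 - xhat) <= 1e-9))    # <y - xhat, x0 - xhat> <= 0 for all samples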
Technical Details: closest point to a subspace
In this section, we establish the orthogonality principle for projection
of a point x0 onto a subspace T . Let x̂ ∈ T be a vector which obeys
ê = x0 − x̂ ⊥ T .
We will show that x̂ is the unique closest point to x0 in T . Let y
be any other vector in T , and set
e = x0 − y.
We will show that
‖e‖ > ‖ê‖ (i.e. that ‖x0 − y‖ > ‖x0 − x̂‖).
Note that
‖e‖² = ‖x0 − y‖² = ‖ê − (y − x̂)‖²
  = ⟨ê − (y − x̂), ê − (y − x̂)⟩
  = ‖ê‖² + ‖y − x̂‖² − ⟨ê, y − x̂⟩ − ⟨y − x̂, ê⟩.
Since y − x̂ ∈ T and ê ⊥ T ,
⟨ê, y − x̂⟩ = 0, and ⟨y − x̂, ê⟩ = 0,
and so
‖e‖² = ‖ê‖² + ‖y − x̂‖².
Since all three quantities in the expression above are non-negative and
‖y − x̂‖ > 0 ⇔ y ≠ x̂,
we see that
y ≠ x̂ ⇔ ‖e‖ > ‖ê‖.
We leave it as an exercise to establish the converse; that if ⟨y, x̂ − x0⟩ = 0 for all y ∈ T , then x̂ is the projection of x0 onto T .
Technical Details: Basic analysis in RN
This section contains a brief review of basic topological concepts in
RN . Our discussion will take place using the standard Euclidean
distance measure (i.e. the ℓ2 norm), but all of these definitions can be
generalized to other metrics. An excellent source for this material is
[Rud76].
Basic topology
There are many ways to define closed sets. The easiest is that a
set X is closed if its complement is open. A more illuminating (and
equivalent) definition is that X is closed if it contains all of its limit
points. A vector x̂ is a limit point of X if there exists a sequence
of vectors {xk } ⊂ X that converges to x̂.
The closure of a general set X , denoted cl(X ), is the set of all limit
points of X . Note that every x ∈ X is trivially a limit point (take
the sequence xk = x), so X ⊂ cl(X ). By construction, cl(X ) is the
smallest closed set that contains X .
Related to the definition of open and closed sets are the technical
definitions of boundary and interior. The interior of a set X is the
collection of points around which we can place a ball of some positive radius that remains entirely in the set:
int(X ) = {x ∈ X : there exists ε > 0 such that {y : ‖y − x‖₂ ≤ ε} ⊂ X }.
This notion of interior is not useful for sets whose dimension is smaller than that of the space they live in. Consider, for example, the unit simplex in R3,
∆ = {x ∈ R3 : x1 + x2 + x3 = 1, xi ≥ 0}.
No ball of positive radius around any point of ∆ stays inside ∆, so the interior of ∆ is empty. The relative interior fixes this by measuring the interior relative to the affine hull of the set,
relint(X ) = {x ∈ X : there exists ε > 0 such that {y ∈ Aff(X ) : ‖y − x‖₂ ≤ ε} ⊂ X },
where Aff(X ) is the smallest affine set that contains X . This means
that if the set we are analyzing can be embedded in a low-dimensional
affine space, then we define interior points relative to this set. For
the simplex, we have
Aff(∆) = {x : x1 + x2 + x3 = 1},
and
relint(∆) = {x ∈ ∆ : x1, x2, x3 > 0}.
For convex sets, we also have the equivalent and perhaps more intu-
itive definition of relative interior,
relint(C) = {x ∈ C : for all y ∈ C, there exists λ > 1 such that λx + (1 − λ)y ∈ C}.
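This characterization can also be spot-checked numerically. The sketch below is a heuristic check rather than a proof (it uses a grid of λ values and sampled y, and the helper names are illustrative), applied to the simplex ∆ from above:

import numpy as np

def in_simplex(z, tol=1e-9):
    return bool(abs(z.sum() - 1.0) <= tol and np.all(z >= -tol))

def looks_like_relint(x, n_trials=200, lambdas=np.linspace(1.001, 1.5, 50), seed=6):
    # For sampled y in the simplex, check that some lambda > 1 keeps
    # lambda * x + (1 - lambda) * y inside the simplex.
    rng = np.random.default_rng(seed)
    for _ in range(n_trials):
        y = rng.dirichlet(np.ones(3))
        if not any(in_simplex(lam * x + (1 - lam) * y) for lam in lambdas):
            return False
    return True

print(looks_like_relint(np.array([0.2, 0.3, 0.5])))   # True: all entries strictly positive
print(looks_like_relint(np.array([0.0, 0.5, 0.5])))   # False: x is on the boundary of the simplex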
The boundary of X is the set of points in cl(X ) that are not in the
(relative) interior:
bd(X ) = cl(X )\ relint(X ).
Suprema and infima
A minimum or maximum of a function over a set does not always exist. For example, the minimum of e^(−x) over x ≥ 0 does not exist; there is no value x0 that we can choose where we can definitively say that e^(−x0) ≤ e^(−x) for all x ≥ 0. The infimum, however, always exists:
inf_{x ≥ 0} e^(−x) = 0.
When there is a point in X where f achieves its infimum, then of
course the operations agree, e.g.
inf_{x ∈ [0,1]} (x − 1/2)² = min_{x ∈ [0,1]} (x − 1/2)² = 0.
The Weierstrass extreme value theorem says that if X is closed and bounded and f is continuous on X , then f achieves its supremum and infimum at points of X . In this setting, then, we can freely replace sup with max and inf with min.
This might be viewed as a fundamental result in optimization, as it
gives very general (sufficient) conditions under which optimization
problems have well-defined solutions.
References
[BV04] S. Boyd and L. Vandenberghe. Convex Optimization.
Cambridge University Press, 2004.
[Roc70] R. T. Rockafellar. Convex Analysis. Princeton University
Press, 1970.
[Rud76] W. Rudin. Principles of Mathematical Analysis.
McGraw-Hill, 1976.