Riemannian Analysis
LECLERC Gaétan
7 June 2019 - 24 August 2019
1 Introduction
This paper is an internship report written at the end of my second year at the ENS Rennes, at the mathematics department of the University of Freiburg (Germany), under the supervision of Nadine Große.
The main goal was to learn as much differential geometry as I could, so I discovered the areas of Riemannian geometry and geometric analysis.
To be more specific, I first worked through a textbook for some weeks to introduce myself to the subject (An Introduction to Riemannian Geometry with Applications to Mechanics and Relativity, Leonor Godinho and José Natário, 2014), then I spent the rest of my time trying to understand a research paper of Justin Corvino, written in 1999: Scalar Curvature Deformation and a Gluing Construction for the Einstein Constraint Equations.
Here, I will try to give the reader a short but clear introduction to Riemannian geometry, and then we will explain the main ideas of the above paper of Justin Corvino.
Let us give a motivating example. Suppose you are a physicist trying to understand the properties of a given function defined on the surface of the Earth, say $f : S^2 \to \mathbb{R}$. For example, we are trying to find the extrema of this function. What can we do? In the more usual case $f : \mathbb{R}^n \to \mathbb{R}^m$, the first questions that come to the mind of an analyst are: Is this function smooth? Can I differentiate it?
If the function is smooth, then studying it becomes much easier: we can find its extrema, most of the time, just by looking at the points where the differential vanishes; we can know if $f$ is a local diffeomorphism by checking if $(df)_x$ is invertible; we can also compute some nice polynomial approximations of $f$ by using a Taylor formula, bounding the error made.
One of the goals of differential geometry is to define a way for our physicist to differentiate maps that are not defined on an open subset of $\mathbb{R}^n$. More generally, differential geometry is a branch of geometry that nicely mixes algebra and analysis to study the so-called smooth manifolds and the smooth maps we can define on them.
In the next part, I will recall some elementary differential geometry, but since the main subject of this paper is Riemannian manifolds, I will just quote the main definitions and the main results, without proving anything yet. All the missing proofs can be found in the textbook of Godinho and Natário.
2 Differential Geometry
2.1 Smooth manifolds
Definition 1. A topological manifold M of dimension n is a topological space such that:
• M is Hausdorff
• M is second countable
• every point of M has an open neighbourhood homeomorphic to an open subset of $\mathbb{R}^n$
Of course, submanifolds are naturally seen as abstract manifolds, where the atlas is given by all the conveniently chosen restrictions of the diffeomorphisms that appear in the above definition.
Let us make a short remark here. The isomorphisms of the world of manifolds are the diffeomorphisms. Meaning that, from the point of view of the differentiable structure, two diffeomorphic manifolds are considered the same. Hence, it seems important to me to actually be able to visualise what this means. For example, let us visualise $S^n$ and play with it a bit. Every diffeomorphism of $\mathbb{R}^{n+1}$ will send $S^n$ to a diffeomorphic sister of it. Which means that the relation "being diffeomorphic" is very jelly-ish. It is really important to visualise this correctly to be able to fully understand what will be going on when we add a metric on our manifolds, later in this paper.
2.2 Differentials
Now, we would like to be able to define actual differentials and derivatives on manifolds. On a submanifold of the Euclidean space, there is a natural way to do it:
$$(df)_p : T_pM \longrightarrow T_{f(p)}N, \qquad h \longmapsto \frac{d}{dt}(f\circ\gamma)(0)$$
where $\gamma$ is any smooth curve on $M$ with $\gamma(0) = p$ and $\dot\gamma(0) = h$.
If $f$ is actually the corestriction to M and N of a smooth map $\tilde f$ defined on some open subset of $\mathbb{R}^N$, then the differential of $f$ will just be the restriction of the differential:
$$\forall h \in T_pM, \quad (df)_p(h) = (d\tilde f)_p(h).$$
For example, consider the antipodal map
$$f : S^n \longrightarrow S^n, \qquad x \longmapsto -x.$$
Let $\gamma$ be a smooth curve on $S^n$ such that $\gamma(0) = p$ and $\dot\gamma(0) = h \in T_pS^n$. Then $f(\gamma(t)) = -\gamma(t)$, so $\frac{d}{dt}(f\circ\gamma)(0) = -\dot\gamma(0)$, and hence:
$$(df)_p : T_pS^n \longrightarrow T_{-p}S^n, \qquad h \longmapsto -h.$$
This works well. Hence, in the case of abstract manifolds, we are going to do the same. First of all, we have to define the derivative of smooth curves on a manifold. It is clear, in the submanifold case, that the operator
$$C^\infty(p) \longrightarrow \mathbb{R}, \qquad f \longmapsto \frac{d}{dt}(f\circ\gamma)(0),$$
where $C^\infty(p)$ is the set of all maps $f : M \to \mathbb{R}$ that are smooth around $p$, characterises the vector $\dot\gamma(0)$. Indeed, if we choose $f = \pi_i : x = (x_1, x_2, \ldots, x_n) \mapsto x_i$, then we are able to recover all the coordinates of our vector. This allows us to identify $\dot\gamma(0)$ with the previous operator.
Definition 6. Let M be a manifold, and $\gamma : (-1, 1) \to M$ a smooth curve on M with $\gamma(0) = p$. We define the tangent vector of $\gamma$ at $p$:
$$\dot\gamma(0) : C^\infty(p) \longrightarrow \mathbb{R}, \qquad f \longmapsto \frac{d}{dt}(f\circ\gamma)(0)$$
A tangent vector to M at $p$ is a tangent vector to some differentiable curve $\gamma : (-1, 1) \to M$ with $\gamma(0) = p$. The tangent space of M at $p$ is the space $T_pM$ of all tangent vectors at $p$.
We also define $TM := \bigsqcup_p T_pM$, the tangent bundle of M.
Example 2. What we actually did is identify a vector with the associated operator that takes the directional derivative. Let us choose a parameterization $\varphi : U \to \varphi(U)$ around $p \in M$, with $U \subset \mathbb{R}^n$ open, and let $x = \varphi^{-1}(p)$. Now let us consider the special curves defined by $\gamma_i(t) = \varphi(x_1, \ldots, x_i + t, \ldots, x_n)$.
Then:
$$\dot\gamma_i(0)(f) = \frac{d}{dt}(f\circ\gamma_i)(0) = \frac{\partial (f\circ\varphi)}{\partial x^i}\big(\varphi^{-1}(p)\big) =: \frac{\partial}{\partial x^i}\bigg|_p(f)$$
Hence, the tangent vector at $p$, $\frac{\partial}{\partial x^i}\big|_p$, represents the speed of the curve $\gamma_i$ when it goes through $p$.
Also, notice that
$$\frac{\partial\varphi}{\partial x^i}(x) = \frac{\partial}{\partial x^i}\bigg|_p \in T_pM$$
Example 3. Consider the parameterization $\psi : (0, \pi)\times(-\pi, \pi) \to S^2$ given by $\psi(\theta, \varphi) = (\sin\theta\cos\varphi, \sin\theta\sin\varphi, \cos\theta)$.
Example 4. Let $\gamma : (-\varepsilon, \varepsilon) \to M$ be a smooth curve in M, and $f \in C^\infty(p)$. Let us fix a chart $\varphi$ around $\gamma(0)$: in coordinates, we write $\hat\gamma(t) := \varphi(\gamma(t)) = (x^1(t), \ldots, x^n(t))$.
Hence:
$$\dot\gamma(t) = \sum_{i=1}^n \dot x^i(t)\,\frac{\partial}{\partial x^i}\bigg|_{\gamma(t)} \in T_{\gamma(t)}M$$
In the same way, the differential of a smooth map $f : M \to N$,
$$(df)_p : T_pM \longrightarrow T_{f(p)}N, \qquad h \longmapsto \frac{d}{dt}(f\circ\gamma)(0),$$
can be computed in coordinates: writing $\hat f$ for the expression of $f$ in the charts $x$ on M and $y$ on N, we get
$$(df)_p(h) = \frac{d}{dt}(f\circ\gamma)(0) = \sum_j \frac{d}{dt}(\hat f^j\circ\hat\gamma)(0)\,\frac{\partial}{\partial y^j}\bigg|_{f(p)} = \sum_{i,j} h^i\,\frac{\partial \hat f^j}{\partial x^i}\,\frac{\partial}{\partial y^j}\bigg|_{f(p)} \in T_{f(p)}N$$
In particular, for a smooth map $f : M \to \mathbb{R}$:
$$(df)_p = \sum_i \frac{\partial\hat f}{\partial x^i}(x(p))\,(dx^i)_p$$
We happily find that the formulas are the same as in the usual Euclidean case. The usual properties of the differential are true and will be used without any justification from now on: the differential vanishes at extrema, if the differential is invertible then $f$ is a local diffeomorphism, the chain rule is satisfied, etc.
The maps $p \mapsto (dx^i)_p$ and $p \mapsto \frac{\partial}{\partial x^i}\big|_p$ are the first examples of tensor fields. Tensor fields appear naturally in a lot of areas of physics and mathematics: they will be briefly introduced in the next section.
2.3 Vector fields
One of the simplest examples of a tensor field is the so-called vector field.
Definition 8. A vector field on a smooth manifold M is a map that to each point p ∈ M assigns
a vector tangent to M at p:
X : M −→ T M
p 7−→ Xp ∈ Tp M
Locally, we can write:
$$X_p = X^1(p)\,\frac{\partial}{\partial x^1}\bigg|_p + \cdots + X^n(p)\,\frac{\partial}{\partial x^n}\bigg|_p$$
If the maps X i : U ⊂ M → R are smooth, we say that the vector field X is smooth. We denote by
X(M ) the linear space of all smooth vector fields.
Example 7. Let $f : M \to N$ be a smooth map. Define the push-forward:
$$f_* : TM \longrightarrow TN, \qquad h \in T_pM \longmapsto (df)_p(h) \in T_{f(p)}N$$
Given two vector fields X and Y, one can show that there is a unique vector field Z, called the Lie bracket $[X, Y]$ of X and Y, such that:
$$\forall f \in C^\infty(M, \mathbb{R}), \quad Z\cdot f = X\cdot(Y\cdot f) - Y\cdot(X\cdot f)$$
The Lie bracket will actually become quite important later, when we define the Levi-Civita connection, so let us explain a bit more what it represents. Take two vector fields X and Y. Now, we are going to follow the flow of X for a very short amount of time, and then we are going to follow Y for the same little amount of time. Next, we do the same, but following Y first and then X. The "vector" that infinitesimally represents the gap between the arrival points of the first trajectory and the second one is the Lie bracket of X and Y.
Actually, understanding in this way what a Lie bracket is, we should expect some link between the flows of two vector fields and their Lie bracket. Fortunately for us, such links exist.
Theorem 2. Let X, Y be two vector fields on a compact manifold M.
Then their flows commute, i.e. $\psi_t^X \circ \psi_s^Y = \psi_s^Y \circ \psi_t^X$, iff $[X, Y] = 0$.
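Theorem 2 is easy to probe numerically. The sketch below (plain Python; the RK4 `flow` helper is my own, not from the report) approximates the flows of $X = \partial_x$ and $Y = x\,\partial_y$ on $\mathbb{R}^2$, whose bracket is $[X, Y] = \partial_y \neq 0$, and of the commuting pair $\partial_x$, $\partial_y$:

```python
def flow(V, p, t, steps=200):
    """Approximate the flow psi_t^V(p) of a planar vector field V by RK4."""
    x, y = p
    h = t / steps
    for _ in range(steps):
        k1 = V(x, y)
        k2 = V(x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = V(x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = V(x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return (x, y)

X = lambda x, y: (1.0, 0.0)   # X = d/dx
Y = lambda x, y: (0.0, x)     # Y = x d/dy, so [X, Y] = d/dy != 0

p = (0.0, 0.0)
a = flow(Y, flow(X, p, 1.0), 1.0)   # psi_1^Y o psi_1^X : (0,0) -> (1,0) -> (1,1)
b = flow(X, flow(Y, p, 1.0), 1.0)   # psi_1^X o psi_1^Y : (0,0) -> (0,0) -> (1,0)
print(a, b)   # the flows do NOT commute: the y-coordinates differ

Z = lambda x, y: (0.0, 1.0)   # Z = d/dy commutes with X
c = flow(Z, flow(X, p, 1.0), 1.0)
d = flow(X, flow(Z, p, 1.0), 1.0)
print(c, d)   # both (1.0, 1.0)
```

The gap between `a` and `b` is exactly what the bracket detects; the compactness hypothesis of the theorem only ensures that the flows are globally defined, and plays no role in this local experiment.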
2.4 Tensor fields
Definition 11. Let V be a finite dimensional vector space.
A multilinear map $T : V\times\cdots\times V\times V^*\times\cdots\times V^* \longrightarrow \mathbb{R}$, with k copies of V and m copies of $V^*$, is called a (k, m)-tensor.
We say that T is k times covariant and m times contravariant.
The vector space of all the (k, m)-tensors on V is often denoted by $\mathcal{T}^{k,m}(V^*, V)$.
Definition 12. Let M be a manifold.
A (k, m)-tensor field on M is a map that, to each point $p \in M$, assigns a (k, m)-tensor on $T_pM$:
$$T : p \in M \longmapsto T_p \in \mathcal{T}^{k,m}(T_pM^*, T_pM)$$
Example 8. On a manifold of dimension n, tangent spaces are finite dimensional vector spaces, which implies a natural isomorphism $T_pM \simeq (T_pM)^{**}$.
For example, one can naturally identify the tangent vector $\frac{\partial}{\partial x^i}\big|_p$ with a linear map $T_pM^* \to \mathbb{R}$, defined by:
$$\frac{\partial}{\partial x^i}\bigg|_p\big((dx^j)_p\big) := (dx^j)_p\Big(\frac{\partial}{\partial x^i}\bigg|_p\Big) = \delta_i^j$$
Hence, vector fields are naturally seen as 1-contravariant tensor fields:
$$X_p = X^1(p)\,\frac{\partial}{\partial x^1}\bigg|_p + \cdots + X^n(p)\,\frac{\partial}{\partial x^n}\bigg|_p \in T_pM^{**} = \mathcal{T}^{0,1}(T_pM^*, T_pM)$$
Similarly, for a smooth map $f : M \to \mathbb{R}$,
$$df : p \in M \longmapsto (df)_p = \sum_i \frac{\partial\hat f}{\partial x^i}(x(p))\,(dx^i)_p \in T_pM^*$$
is a 1-covariant tensor field.
Now, we can use those two examples to construct a lot of new tensor fields. Here is one way to
do it :
Definition 13. Let T, S be two tensor fields over M, respectively K- and N-covariant.
We define: $(T\otimes S)_p(v_1, \ldots, v_K, w_1, \ldots, w_N) := T_p(v_1, \ldots, v_K)\,S_p(w_1, \ldots, w_N)$.
If T and S are covariant (or contravariant) tensor fields, then $T\otimes S$ is a (K+N)-covariant (contravariant) tensor field. Note that this operation is bilinear, and is NOT commutative.
Example 10. The usual inner product on $\mathbb{R}^n$ is a 2-covariant tensor field:
$$\langle\,\cdot\,,\,\cdot\,\rangle = \sum_i dx^i\otimes dx^i$$
Roughly speaking, this means that one can understand tensor fields as maps that take a bunch of vector fields and assign to them another bunch of vector fields, like the differential above.
Again, we say that a tensor field is smooth if its coordinate functions $T_{i_1\ldots i_k}^{\;j_1\ldots j_m}$ are smooth.
2.5 Differential forms
There is a really important class of tensor fields that appears in differential geometry: the differential forms. The goal of differential n-forms is to be integrated over n-dimensional manifolds, like a measure. They are, in a sense, linked to the idea of ("smooth") measures of the form $f(x)\,d\lambda^n(x)$, where $\lambda^n$ is the usual Lebesgue measure on $\mathbb{R}^n$. Indeed, the definition of an abstract smooth manifold allows us to believe that we can (locally!) transport the measure from open subsets of $\mathbb{R}^n$ to our manifold.
One might ask if one of those measures is more natural than the others. What makes the Lebesgue measure so special in the Euclidean case? There, we have at our disposal a natural notion of length, provided by the usual inner product, and the Lebesgue measure is the only measure that actually behaves well with the natural idea of volume induced by this notion of length. On an abstract manifold, we do not have this chance: there is no natural idea of length, nor a natural idea of volume, and so no measure will be special in the same way that $\lambda$ is.
Manifolds that actually carry a notion of length, like submanifolds of $\mathbb{R}^n$, are called Riemannian manifolds. For them, one special measure will actually arise.
Let us get more into the details. First of all, we define the pullback of a k-covariant tensor: this should be understood as a change of variables.
Definition 14. Let M and N be two smooth manifolds. Let $\alpha$ be a k-covariant tensor field over N, and $f : M \to N$ a smooth map. We define the pullback of $\alpha$ by $f$ as the following k-covariant tensor field on M:
$$(f^*\alpha)_p(v_1, \ldots, v_k) := \alpha_{f(p)}\big((df)_p v_1, \ldots, (df)_p v_k\big)$$
This kind of behavior (a determinant factoring out) often occurs when one works with alternating multilinear maps. Having the determinant in mind, this leads us to the following definitions:
Definition 15. Let T be a k-covariant tensor over a vector space V.
We say that T is alternating iff $T(h_1, \ldots, h_i, \ldots, h_j, \ldots, h_k) = -T(h_1, \ldots, h_j, \ldots, h_i, \ldots, h_k)$ for all $i \neq j$.
We denote by $\Lambda^k(V)$ the vector space of all alternating k-covariant tensors.
Definition 16. Let M be a smooth manifold.
We say that a k-covariant tensor field $\alpha$ over M is a differential k-form iff $\alpha_p \in \Lambda^k(T_pM)$ for all $p \in M$.
We denote by $\Omega^k(M)$ the space of all smooth differential k-forms.
By definition, $\Omega^0(M) = C^\infty(M)$.
Note also that every 1-covariant tensor is trivially alternating.
Definition 17. Let $\alpha$ and $\beta$ be a k-form and an l-form. We define the exterior product of $\alpha$ and $\beta$ as the following (k+l)-form:
$$(\alpha\wedge\beta)_p(v_1, \ldots, v_{k+l}) = \sum_{\sigma\in S_{k+l}} \varepsilon(\sigma)\,(\alpha\otimes\beta)_p(v_{\sigma(1)}, \ldots, v_{\sigma(k+l)})$$
We check that this is indeed an alternating tensor. Moreover, the expected change-of-variables behavior is verified. Indeed, if one chooses a smooth map $\varphi : U \to V$, with $U, V \subset \mathbb{R}^n$ open, we have:
$$\varphi^*(dx^1\wedge\cdots\wedge dx^n) = \det(d\varphi)\;dx^1\wedge\cdots\wedge dx^n$$
We say that a smooth map $\varphi : V \to U$ preserves the orientation iff $\det(d\varphi) > 0$.
In this case, and if $\varphi$ is a diffeomorphism, one can reformulate the formula for the change of variables in the following way:
$$\int_{\varphi(V)} \omega = \int_V \varphi^*\omega$$
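This change-of-variables formula can be checked numerically in the simplest case of a 1-form on an interval. The sketch below (plain Python, midpoint Riemann sums; the particular form and substitution are my own illustration) compares both sides for $\omega = \cos(x)\,dx$ and $\varphi(t) = t^2$ on $V = (0, 1)$:

```python
import math

def riemann(g, a, b, n=100000):
    """Midpoint-rule approximation of the integral of g over (a, b)."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = math.cos                   # omega = cos(x) dx, a 1-form on phi(V) = (0, 1)
phi = lambda t: t * t          # phi'(t) = 2t > 0 on V = (0, 1): orientation-preserving

lhs = riemann(f, 0.0, 1.0)                               # integral of omega over phi(V)
rhs = riemann(lambda t: f(phi(t)) * 2 * t, 0.0, 1.0)     # integral of phi* omega over V
print(lhs, rhs)   # both ~ sin(1) = 0.84147...
```

Here $\varphi^*\omega = \cos(t^2)\,2t\,dt$, which is just the usual substitution rule from calculus.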
2.6 Integration on orientable manifolds
We now have enough tools to define the integral of an n-form on an n-manifold. We will also define the exterior derivative, and, finally, quote the Stokes theorem.
The main idea to define the integral is to copy the formula from above and apply it to the case of an abstract manifold. Since the above theorem only holds for orientation-preserving maps, we will have to define what it means for a chart $\varphi$ to preserve the orientation (recall that "$\det(d\varphi)$" only makes sense if $d\varphi$ is an endomorphism). A manifold that admits an atlas of orientation-preserving maps will be called an orientable manifold.
Definition 19. Let V be a finite dimensional vector space, and consider two ordered bases $B_1 = (u_1, \ldots, u_n)$ and $B_2 = (v_1, \ldots, v_n)$. There exists a unique linear map $f$ such that $f(u_i) = v_i$.
If $\det f > 0$, we say that $B_1$ and $B_2$ are equivalent.
This defines an equivalence relation that divides the set of all ordered bases of V into two equivalence classes. An orientation for V is an assignment of a positive sign to the elements of one equivalence class and a negative sign to the elements of the other.
Example 17. In Rn , the canonical basis is usually positively oriented.
Definition 20. Let M be a manifold. An orientation for M is the choice of an orientation for all tangent spaces $T_pM$, and of an atlas $\mathcal{A} = \{(\varphi_\alpha, U_\alpha)\}$ such that all the maps $\varphi_\alpha$ preserve the orientation, i.e. $(d\varphi_\alpha)_x : \mathbb{R}^n \to T_pM$ maps positively oriented bases to positively oriented bases.
Notice that, given an oriented manifold and two parameterizations $\varphi_\alpha$ and $\varphi_\beta$, the map $\varphi_\alpha^{-1}\circ\varphi_\beta$ is still orientation preserving, which implies: $\det d(\varphi_\alpha^{-1}\circ\varphi_\beta)_x > 0$.
Definition 21. Let M be an oriented manifold.
Let $\omega \in \Omega^n(M)$ with compact support $S := \overline{\{p \in M \mid \omega_p \neq 0\}}$.
Let $(V_i)$ be a finite open covering of S, where the $V_i$ are coordinate neighbourhoods associated to orientation-preserving maps $\varphi_i : U_i \to V_i$.
Let $\chi_i : M \to \mathbb{R}$ be an associated partition of unity, i.e.:
• $\operatorname{supp}\chi_i \subset V_i$
• $\sum_i \chi_i = 1$ on S.
We define:
$$\int_M \omega := \sum_i \int_{U_i} \varphi_i^*(\chi_i\,\omega)$$
We verify that this definition depends neither on the choice of the covering $(V_i)$, nor on the choice of the coordinates $(\varphi_i)$, nor on the choice of the partition of unity $(\chi_i)$.
Example 18. If the manifold is covered, up to an (n-1)-submanifold, by one coordinate chart $\varphi : U \to M$, then we can write:
$$\int_M \omega = \int_U \varphi^*\omega$$
For example, let $S^1 \subset \mathbb{R}^2$ be the circle, oriented anticlockwise.
Let $dx, dy \in \Omega^1(S^1)$ be defined by:
$$(dx)_p, (dy)_p : T_pS^1 \longrightarrow \mathbb{R}, \qquad (h_1, h_2) \longmapsto h_1,\; h_2 \text{ respectively}$$
We want to compute the following integral:
$$\int_{S^1} x\,dy - y\,dx$$
For this, we have to parameterize the circle. Let $\gamma : (0, 2\pi) \to S^1$ be defined by $\gamma(\theta) = (\cos\theta, \sin\theta)$. We have:
$$\int_{S^1} x\,dy - y\,dx = \int_{(0,2\pi)} \gamma^*(x\,dy - y\,dx) = \int_0^{2\pi} \big(\cos(\theta)^2 + \sin(\theta)^2\big)\,d\theta = 2\pi$$
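The 1-form $x\,dy - y\,dx$ in fact computes twice the enclosed area of any closed curve, which gives an easy numerical sanity check. The sketch below (plain Python; the ellipse is my own added illustration, not from the report) integrates the pulled-back form along the unit circle and along an ellipse with semi-axes 2 and 3:

```python
import math

def integrate_form(c, cdot, t0, t1, n=100000):
    """Midpoint-rule approximation of the integral of x dy - y dx along a curve."""
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h
        x, y = c(t)
        dx, dy = cdot(t)
        total += (x * dy - y * dx) * h
    return total

circle = integrate_form(lambda t: (math.cos(t), math.sin(t)),
                        lambda t: (-math.sin(t), math.cos(t)),
                        0.0, 2 * math.pi)
ellipse = integrate_form(lambda t: (2 * math.cos(t), 3 * math.sin(t)),
                         lambda t: (-2 * math.sin(t), 3 * math.cos(t)),
                         0.0, 2 * math.pi)
print(circle, ellipse)   # ~ 2*pi and ~ 12*pi (twice the area pi*a*b)
```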
Example 19. With the same notations as before, we want this time to compute the integral:
$$\int_{S^2} x\,dy\wedge dz - y\,dx\wedge dz + z\,dx\wedge dy$$
Using the spherical parameterization $\psi$, the pullback of this 2-form is $\sin\theta\,d\theta\wedge d\varphi$, and hence:
$$\int_{S^2} x\,dy\wedge dz - y\,dx\wedge dz + z\,dx\wedge dy = \int_0^{2\pi}\!\!\int_0^\pi \sin(\theta)\,d\theta\,d\varphi = 4\pi$$
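One can double-check the pullback computation numerically, without doing the algebra by hand: evaluate the 2-form on the two coordinate tangent vectors of $\psi$, obtained by finite differences. This Python sketch (the helper names are mine) recovers $\sin\theta$ pointwise and the value $4\pi$ for the integral:

```python
import math

def psi(theta, phi):
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def pullback(theta, phi, h=1e-6):
    """Evaluate psi*(x dy^dz - y dx^dz + z dx^dy) on (d/dtheta, d/dphi),
    approximating the two coordinate tangent vectors by central differences."""
    u = [(a - b) / (2 * h) for a, b in zip(psi(theta + h, phi), psi(theta - h, phi))]
    v = [(a - b) / (2 * h) for a, b in zip(psi(theta, phi + h), psi(theta, phi - h))]
    x, y, z = psi(theta, phi)
    dydz = u[1] * v[2] - u[2] * v[1]
    dxdz = u[0] * v[2] - u[2] * v[0]
    dxdy = u[0] * v[1] - u[1] * v[0]
    return x * dydz - y * dxdz + z * dxdy

# Pointwise, the pullback is sin(theta); integrating over the chart gives 4*pi.
n = 200
total = 0.0
for i in range(n):
    theta = (i + 0.5) * math.pi / n
    for j in range(n):
        phi = -math.pi + (j + 0.5) * 2 * math.pi / n
        total += pullback(theta, phi) * (math.pi / n) * (2 * math.pi / n)
print(total)   # ~ 4*pi = 12.566...
```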
Now that we know how to compute integrals of n-forms over our manifold, it is natural to search for a way to differentiate them. The difficulty resides (it often does) in the fact that we want the operation to be independent of the choice of coordinates.
It appears that the following, the exterior derivative, provides a way to do this.
Definition 22. Let $\omega \in \Omega^k(M)$. We define the exterior derivative $d\omega \in \Omega^{k+1}(M)$ in the following way: on a coordinate neighbourhood, given a choice of coordinates, we write:
$$\omega_p = \sum_{i_1<\cdots<i_k} \omega_{i_1\ldots i_k}(p)\,(dx^{i_1})_p\wedge\cdots\wedge(dx^{i_k})_p$$
We then define
$$(d\omega)_p = \sum_{i_1<\cdots<i_k} (d\omega_{i_1\ldots i_k})_p\wedge(dx^{i_1})_p\wedge\cdots\wedge(dx^{i_k})_p$$
We can show that this expression is independent of the choice of coordinates, and thus, that this is well defined on all of M.
The exterior derivative plays an important role in geometry: it is the basic tool for studying the De Rham cohomology, and is one of the central notions that appears in the Stokes formula.
We recall some important formulae.
Theorem 5. Let ω ∈ Ωk (M ) and α ∈ Ωj (M ). Let f : N → M be a smooth map.
• d(dω) = 0
• d(f ∗ ω) = f ∗ (dω)
• d(ω ∧ α) = dω ∧ α + (−1)k ω ∧ dα
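For instance, the first identity can be checked by hand for a 0-form $f$, directly from the coordinate definition of $d$; the only ingredient is the symmetry of second partial derivatives (this short verification is mine, not from the report):

```latex
d(df) = d\Big(\sum_i \partial_i f\, dx^i\Big)
      = \sum_{i,j} \partial_j\partial_i f\; dx^j \wedge dx^i
      = \sum_{j<i} \big(\partial_j\partial_i f - \partial_i\partial_j f\big)\, dx^j \wedge dx^i
      = 0
```

using $dx^i\wedge dx^j = -\,dx^j\wedge dx^i$ and Schwarz's theorem; the general identity $d(d\omega) = 0$ follows by applying the same computation to each coefficient $\omega_{i_1\ldots i_k}$.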
Finally, we will quote the Stokes theorem.
This theorem is a generalization of the fundamental theorem of calculus, $\int_a^b f'(t)\,dt = f(b) - f(a)$.
It has a lot of powerful corollaries, used every day by analysts and physicists.
First, we have to define what a manifold with boundary is.
Definition 23. We denote by $H = \{(x_1, \ldots, x_n) \in \mathbb{R}^n \mid x_n \geq 0\}$ the upper half space.
A smooth manifold with boundary is a topological manifold with boundary of dimension n together with a family of parameterizations $\varphi_\alpha : U_\alpha \subset H \to M$ such that:
• $\bigcup_\alpha \varphi_\alpha(U_\alpha) = M$
Example 20. If $\Omega \subset M$ is a domain in a smooth manifold M of dimension n, we sometimes say that $\Omega$ is a smooth domain if $\Omega$ is a submanifold with boundary of dimension n.
This corollary of the Stokes formula is often used when dealing with PDEs (but only in $\mathbb{R}^n$: it becomes absolutely false on a general manifold).
3 Riemannian Manifolds
The goal here is to do analysis on manifolds.
What we did up to now was more algebraically-minded, in the same way that an abstract vector space only offers us the possibility to do algebra. An analyst, to work, needs some way to measure lengths: on vector spaces, this gives rise to the well known notions of Banach and Hilbert spaces.
We are then going to add an abstract "inner product" on our manifold.
A Riemannian manifold will be a manifold where measuring the speed of a curve c(t) running on it will make sense, and where we will be able to talk about angles. This will naturally induce a special notion of length and distance on our manifold, and this will lead to a rigidification of the notion of manifold. The jelly-ish structure of an abstract manifold solidifies into the paper-ish structure of a Riemannian manifold.
The sphere of radius 1 and the sphere of radius 2 will no longer be seen as the same objects, as the lengths on the first one are dilated when going to the second one.
A real piece of paper defines a Riemannian manifold. By putting it flat on a table, one can measure the distance between two points. This piece of paper will define the same Riemannian manifold as long as we play with it without tearing or stretching it: we can roll it a bit, for example.
Given some local coordinates, one can always write, for any 2-covariant tensor g:
$$g = \sum_{ij} g_{ij}\,dx^i\otimes dx^j, \qquad \text{with} \quad g_{ij} = g\Big(\frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j}\Big)$$
One can then verify that g is symmetric iff the matrix $(g_{ij})$ is, and that g is symmetric positive definite iff $(g_{ij})$ is too.
The positive definite condition comes from the fact that $f$ is supposed to be an immersion, preventing $(df)_p u$ from vanishing when $u$ is not zero.
In the special case where $N \subset M$ and where the immersion $f = \iota : N \hookrightarrow M$ is the inclusion, we say that N is a submanifold of M. The induced metric, $\iota^*g$, is just the restriction of the ambient metric g.
Example 22. The Euclidean space of dimension n, $(\mathbb{R}^n, g)$, with
$$g = \sum_i dx^i\otimes dx^i$$
is a Riemannian manifold.
In the simplified case of the plane $\mathbb{R}^2$, it becomes:
$$g = dx\otimes dx + dy\otimes dy$$
Example 23. The 2-sphere $S^2 \subset \mathbb{R}^3$ is a submanifold of the Riemannian manifold $\mathbb{R}^3$, and thus can naturally be seen as a Riemannian manifold. The induced metric g is the restriction of the usual inner product of $\mathbb{R}^3$.
In spherical coordinates, with the usual $\psi(\theta, \varphi) = (\sin\theta\cos\varphi, \sin\theta\sin\varphi, \cos\theta)$, recall that we have:
$$\frac{\partial}{\partial\theta}\bigg|_p = (\cos\theta\cos\varphi, \cos\theta\sin\varphi, -\sin\theta)\;; \qquad \frac{\partial}{\partial\varphi}\bigg|_p = (-\sin\theta\sin\varphi, \sin\theta\cos\varphi, 0)$$
Computing the pairwise inner products of these two vectors gives:
$$g = d\theta\otimes d\theta + \sin(\theta)^2\,d\varphi\otimes d\varphi$$
Example 24. We set $H^2 = \{(x, y) \in \mathbb{R}^2 \mid y > 0\}$. This manifold, equipped with the following metric:
$$g = \frac{1}{y^2}\big(dx\otimes dx + dy\otimes dy\big)$$
is called the hyperbolic plane.
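To get a feel for this metric, here is a small numerical experiment (plain Python; the `hyp_length` helper is my own). It compares the hyperbolic length of the straight segment from $(-1, 1)$ to $(1, 1)$ with that of the circular arc $x^2 + y^2 = 2$ joining the same points: the arc, although longer for the Euclidean eye, is shorter for the hyperbolic metric (it is in fact a geodesic, as we will see later):

```python
import math

def hyp_length(c, cdot, t0, t1, n=100000):
    """Length of a curve c in the hyperbolic metric g = (dx^2 + dy^2)/y^2."""
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h
        x, y = c(t)
        dx, dy = cdot(t)
        total += math.sqrt(dx * dx + dy * dy) / y * h
    return total

# Straight segment from (-1, 1) to (1, 1):
seg = hyp_length(lambda t: (t, 1.0), lambda t: (1.0, 0.0), -1.0, 1.0)

# Circular arc x^2 + y^2 = 2 (centre on the x-axis) through the same points:
r = math.sqrt(2)
arc = hyp_length(lambda t: (r * math.cos(t), r * math.sin(t)),
                 lambda t: (-r * math.sin(t), r * math.cos(t)),
                 math.pi / 4, 3 * math.pi / 4)

print(seg, arc)   # seg = 2, arc = 2*ln(1 + sqrt(2)) ~ 1.7627: the arc is shorter
```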
Two Riemannian manifolds will be regarded as the same if they are isometric.
Definition 27. Let (M, g) and (N, h) be Riemannian manifolds. A diffeomorphism $f : M \to N$ is said to be an isometry if $f^*h = g$, i.e., $h_{f(p)}((df)_p u, (df)_p v) = g_p(u, v)$ for all $p \in M$ and $u, v \in T_pM$.
Similarly, a local diffeomorphism $f : M \to N$ is said to be a local isometry if $f^*h = g$.
Example 25. For any $a \in \mathbb{R}$ and $b > 0$, the map
$$f : H^2 \longrightarrow H^2, \qquad (x, y) \longmapsto (a + bx, by)$$
is an isometry of the hyperbolic plane.
A Riemannian metric allows us to measure the length $\|u\| := g(u, u)^{1/2}$ of a vector (as well as the angle between two vectors with the same base point). Therefore, we can measure the length $\ell(c)$ of a piecewise smooth curve $c : [a, b] \to M$:
$$\ell(c) := \int_a^b \|\dot c(t)\|\,dt$$
Finally, this induces a natural distance over our Riemannian manifold: the distance between two points p and q is the infimum of the lengths of the piecewise smooth paths joining them.
Now that we have a metric, we can choose a special n-form on our manifold to integrate with: the Riemannian volume form, characterised by taking the value 1 on positively oriented orthonormal bases. In coordinates, any n-form can be written
$$\omega_p = f(p)\,(dx^1)_p\wedge\cdots\wedge(dx^n)_p, \qquad \text{with} \quad f = \omega\Big(\frac{\partial}{\partial x^1}, \ldots, \frac{\partial}{\partial x^n}\Big)$$
And we can choose the coordinates such that f is strictly positive. Then, since $\omega$ is n-linear and alternating, and because of the normalization condition, we have:
$$f = \det S$$
where S is the matrix of the components of the $\frac{\partial}{\partial x^i}$ in some positively oriented orthonormal basis; one can check that $\det S = \sqrt{\det(g_{ij})}$.
This gives us a natural way to integrate functions over M, and a natural space of integrable functions, where, as usual, we quotient by the subspace of all the maps that vanish almost everywhere (that is, $\int_M |f| = 0$), creating the well known normed space $L^1(M)$.
Example 26. At the end of the last part, we introduced a "surface measure" on hypersurfaces of $\mathbb{R}^n$. This is the same measure as the one induced by its associated Riemannian volume form, which can be written
$$\omega_x(v_1, \ldots, v_{n-1}) = \det(n\;\, v_1\;\ldots\; v_{n-1})$$
with $n$ the outward pointing unit normal vector at x, written in Euclidean coordinates, and $v_1, \ldots, v_{n-1} \in T_xM$. From this, we check that $\omega$ is indeed a Riemannian volume form over our submanifold.
Example 27. We study the circle of radius r, denoted $S^1(r)$, oriented anticlockwise.
Seen as a submanifold of $\mathbb{R}^2$, the unit normal vector is $n = (x/r, y/r)$, and so: $dvol = \frac{1}{r}(x\,dy - y\,dx)$.
With the coordinates $\gamma(\theta) = (r\cos\theta, r\sin\theta)$, we can write it in the following way:
$$\gamma^*(dvol) = \frac{1}{r}\big(r\cos\theta\cdot r\cos\theta + r\sin\theta\cdot r\sin\theta\big)\,d\theta = r\,d\theta$$
This is coherent with the above formula: in those coordinates, we have $\frac{\partial}{\partial\theta} = (-r\sin\theta, r\cos\theta)$, and so the metric can be written as:
$$g = \langle\partial_\theta, \partial_\theta\rangle\,d\theta\otimes d\theta = r^2\,d\theta\otimes d\theta$$
And so: $dvol = \sqrt{\det(g_{ij})}\,d\theta = r\,d\theta$, as wanted.
Hence, integrating $f : S^1(r) \to \mathbb{R}$ over the circle of radius r becomes:
$$\int_{S^1(r)} f = \int_0^{2\pi} f(r\cos\theta, r\sin\theta)\,r\,d\theta$$
Now, we are going to understand what happens when we do a change of coordinates: let us say we switch from $(x^i)$ to some $(y^i)$.
More precisely, $x : U \subset M \to \mathbb{R}^n$ and $y : V \subset M \to \mathbb{R}^n$ are two coordinate charts, and we study what happens when changing coordinates on $U \cap V \neq \emptyset$.
First of all, let us try to understand how the vectors $\frac{\partial}{\partial x^i}$ and $\frac{\partial}{\partial y^i}$ are linked. We have, for any smooth $f : M \to \mathbb{R}$:
$$\frac{\partial}{\partial x^i}\bigg|_p \cdot f = \frac{\partial (f\circ x^{-1})}{\partial x^i}(x(p)) = \frac{\partial (f\circ y^{-1}\circ y\circ x^{-1})}{\partial x^i}(x(p)) = \sum_j \frac{\partial (y^j\circ x^{-1})}{\partial x^i}(x(p))\,\frac{\partial (f\circ y^{-1})}{\partial y^j}(y(p)) = \sum_j \frac{\partial y^j}{\partial x^i}(p)\;\frac{\partial}{\partial y^j}\bigg|_p \cdot f$$
This is the usual chain rule. Now, let us see what happens for the $(dx^i)$. We have:
$$(dx^i)_p = d(x^i\circ y^{-1}\circ y)_p = \sum_j \frac{\partial (x^i\circ y^{-1})}{\partial y^j}(y(p))\,(dy^j)_p$$
Hence:
$$dx^i = \sum_j \frac{\partial x^i}{\partial y^j}\,dy^j$$
It seems here that covariant and contravariant tensors behave nicely when a change of variables occurs. For example, let us study what happens to the metric under this change of coordinates. Let us denote by $g_{ij}$ the coordinates of the metric in the chart x and by $\tilde g_{ab}$ the coordinates of the metric in the chart y. We have, using the multilinearity of the tensor product:
$$g = \sum_{ij} g_{ij}\,dx^i\otimes dx^j = \sum_{ijab} g_{ij}\,\frac{\partial x^i}{\partial y^a}\frac{\partial x^j}{\partial y^b}\,dy^a\otimes dy^b$$
Hence:
$$\tilde g_{ab} = \sum_{ij} g_{ij}\,\frac{\partial x^i}{\partial y^a}\frac{\partial x^j}{\partial y^b}$$
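As a sanity check, one can apply this transformation law numerically to the Euclidean metric $g_{ij} = \delta_{ij}$ with new coordinates $(y^1, y^2) = (r, \theta)$ (a plain-Python sketch, with the Jacobian approximated by central differences; the helper name is mine):

```python
import math

def xy(r, theta):   # Cartesian coordinates as functions of the new coordinates (r, theta)
    return (r * math.cos(theta), r * math.sin(theta))

def pulled_back_metric(r, theta, h=1e-6):
    """g~_ab = sum_ij g_ij (dx^i/dy^a)(dx^j/dy^b), with g_ij = delta_ij (Euclidean);
    the Jacobian dx^j/dy^a is approximated by central differences."""
    J = [[(p - m) / (2 * h) for p, m in zip(xy(r + h, theta), xy(r - h, theta))],
         [(p - m) / (2 * h) for p, m in zip(xy(r, theta + h), xy(r, theta - h))]]
    return [[sum(J[a][i] * J[b][i] for i in range(2)) for b in range(2)]
            for a in range(2)]

G = pulled_back_metric(2.0, 0.8)
print(G)   # ~ [[1, 0], [0, 4]]
```

We recover the familiar expression $g = dr\otimes dr + r^2\,d\theta\otimes d\theta$ of the Euclidean metric in polar coordinates.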
For example, we know that the family of maps $(g_{ij})$ defines a 2-covariant tensor field, because we verified that it behaves nicely under a change of coordinates. Let us define a new tensor field using this one.
We denote by $g^{ij}$ the (i,j)-th coefficient of the inverse of the matrix $(g_{ij})$, i.e. we have:
$$\sum_j g^{ij}g_{jk} = \delta^i_k$$
One can verify that the family of maps $(g^{ij})$ also behaves well under a change of coordinates, and thus defines a 2-contravariant tensor field.
This nice behavior of the inverse of the metric allows us to create a lot of new tensors by "raising and lowering" the indices. For example, take a vector field X, written $X = \sum_i X^i\frac{\partial}{\partial x^i}$. We can define a new 1-covariant tensor field, denoted by $X^\flat = \sum_i X_i\,dx^i$, whose coordinates are defined by
$$X_i := \sum_j X^j g_{ji}.$$
Because of the behavior of the metric under a change of coordinates, one can verify that the family of maps $(X_i)$ behaves well under a change of coordinates, and thus defines a 1-covariant tensor field.
Another example: given a 2-covariant tensor with coordinates $R_{ij}$, one can define a new (1,1)-tensor field via the following coordinates:
$$R_i{}^j := \sum_k R_{ik}\,g^{kj}.$$
We can even make all the indices go up if we want: the following coordinates define a 2-contravariant tensor field:
$$R^{ij} = \sum_a R_a{}^j\,g^{ai} = \sum_{ab} R_{ab}\,g^{ai}g^{bj}$$
Now, let us use this new formalism to generalise a bit the domain of definition of the metric. We already know how to measure the norm of vectors: $|X|^2 = \langle X, X\rangle = \sum_{ij} X^iX^j g_{ij}$.
We can as well generalize this metric to more general tensors: since $g_{ij} = \langle\frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j}\rangle$, and because of the wanted behavior under a change of coordinates, it seems natural to define:
$$\langle dx^i, dx^j\rangle := g^{ij}$$
Let us generalize it further: because of the very idea of a tensor product, it seems natural to define
$$\langle R\otimes S, T\otimes U\rangle := \langle R, T\rangle\langle S, U\rangle$$
This generalizes the metric to all tensor fields.
For two (k,m)-tensor fields denoted in coordinates by $T_{i_1\ldots i_k}{}^{j_1\ldots j_m}(p)$ and $S_{i_1\ldots i_k}{}^{j_1\ldots j_m}(p)$, this gives us the following formula:
$$\langle T, S\rangle = \sum T_{a_1\ldots a_k}{}^{b_1\ldots b_m}\,S_{c_1\ldots c_k}{}^{d_1\ldots d_m}\,g^{a_1c_1}\cdots g^{a_kc_k}\,g_{b_1d_1}\cdots g_{b_md_m}$$
Which can be reformulated in the following way, using the previous formalism:
$$\langle T, S\rangle = \sum T_{a_1\ldots a_k}{}^{b_1\ldots b_m}\,S^{a_1\ldots a_k}{}_{b_1\ldots b_m}$$
For example, the norm of the metric can be computed easily:
$$|g|^2 = \sum_{ij} g_{ij}\,g^{ij} = \sum_i \delta_i^i = n.$$
We can also verify, by the symmetry of the formula, that the operation of raising and lowering the indices is an isometry. For example, given a vector field X:
$$|X|^2 = \sum_{ij} X^iX^j g_{ij} = \sum_i X^iX_i = \sum_{ij} X_iX_j\,g^{ij} = |X^\flat|^2$$
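Here is a tiny numerical instance of this index gymnastics in dimension 2 (plain Python; the particular positive definite metric matrix is an arbitrary example of mine):

```python
g = [[2.0, 1.0], [1.0, 3.0]]                  # a symmetric positive definite metric
det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
ginv = [[ g[1][1] / det, -g[0][1] / det],
        [-g[1][0] / det,  g[0][0] / det]]     # the inverse metric g^{ij}

X = [1.0, 2.0]                                # contravariant components X^i
X_flat = [sum(X[j] * g[j][i] for j in range(2)) for i in range(2)]   # X_i = X^j g_ji

norm_up = sum(X[i] * X[j] * g[i][j] for i in range(2) for j in range(2))
norm_down = sum(X_flat[i] * X_flat[j] * ginv[i][j] for i in range(2) for j in range(2))
norm_g = sum(g[i][j] * ginv[i][j] for i in range(2) for j in range(2))

print(norm_up, norm_down, norm_g)   # 18.0 18.0 2.0
```

The first two numbers agree, illustrating that $\flat$ is an isometry, and the last one returns $|g|^2 = n = 2$.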
Moreover, this norm is the same as the norm found naturally by considering tensors as multilinear maps. Indeed, for any (k,m)-tensor T, and for any vector fields $X_1, \ldots, X_k$ and 1-covariant tensor fields $Y_1, \ldots, Y_m$, one can verify that:
3.3 Affine Connections
Take a smooth manifold M.
We do know how to compute the speed of a smooth curve c(t) on M: at each time t, this will be a vector $\dot c(t)$ in the tangent space $T_{c(t)}M$. But what if we want to compute the acceleration of our curve c? We are going to need an operator that allows us to differentiate vector fields.
Let us try to understand what it could mean on submanifolds of $\mathbb{R}^N$. Take c(t) a smooth curve on $M \subset \mathbb{R}^N$. This represents a little human moving on our manifold.
The vector $\dot c(t)$ is the speed of our human. This vector is tangent to the manifold at the point where our human is.
Now, let us take a look at the acceleration. We can write
$$\ddot c(t) = \ddot c^\parallel(t) + \ddot c^\perp(t)$$
where $\ddot c^\parallel(t) \in T_{c(t)}M$ and $\ddot c^\perp(t) \in T_{c(t)}M^\perp$. In this decomposition, $\ddot c^\parallel$ is the only acceleration that the human feels. The orthogonal contribution only serves to keep the curve on the manifold. (Imagine going around the world walking on the equator: the human seems to go at a constant speed; this is because all the acceleration points toward the center of the Earth.)
More generally, for two vector fields X and Y defined on M, one can define the covariant derivative of Y in the direction X, denoted by $\nabla_X Y$. It is the vector field on M defined in the following way:
$$(\nabla_X Y)_p := \pi_p(D_X Y) \in T_pM$$
where $D_X Y$ is the usual directional derivative of Y in the direction X, and $\pi_p$ is the orthogonal projection onto $T_pM$.
Studying this operator a bit leads us to the following definition:
Definition 29. Let M be a smooth manifold.
An affine connection on M is a map $\nabla : \mathfrak{X}(M)\times\mathfrak{X}(M) \to \mathfrak{X}(M)$ such that
• $\nabla_{fX+gY}Z = f\,\nabla_X Z + g\,\nabla_Y Z$
• $\nabla_X(Y + Z) = \nabla_X Y + \nabla_X Z$
• $\nabla_X(fY) = (X\cdot f)\,Y + f\,\nabla_X Y$
In coordinates, this is a straightforward use of the properties of a connection:
$$\nabla_X Y = \nabla_X\Big(\sum_i Y^i\frac{\partial}{\partial x^i}\Big) = \sum_i (X\cdot Y^i)\,\frac{\partial}{\partial x^i} + \sum_i Y^i\,\nabla_X\frac{\partial}{\partial x^i} = \sum_i (X\cdot Y^i)\,\frac{\partial}{\partial x^i} + \sum_{ij} Y^iX^j\,\nabla_{\frac{\partial}{\partial x^j}}\frac{\partial}{\partial x^i}.$$
Defining $\Gamma^k_{ij} := dx^k\Big(\nabla_{\frac{\partial}{\partial x^i}}\frac{\partial}{\partial x^j}\Big)$ (i.e. $\nabla_{\frac{\partial}{\partial x^i}}\frac{\partial}{\partial x^j} = \sum_k \Gamma^k_{ij}\frac{\partial}{\partial x^k}$) and relabeling the indices gives:
$$\nabla_X Y = \sum_i\Big(X\cdot Y^i + \sum_{jk}\Gamma^i_{jk}X^jY^k\Big)\frac{\partial}{\partial x^i}.$$
The local maps $\Gamma^k_{ij}$ are called the Christoffel symbols associated to the connection $\nabla$. Locally, an affine connection is uniquely determined by specifying its Christoffel symbols on a coordinate neighborhood. However, the choices of Christoffel symbols on different charts are not independent, as the covariant derivative must agree on the overlap.
Let us compute what happens to the Christoffel symbols when one applies a change of coordinates: we set
$$\Gamma^k_{ij} := dx^k\Big(\nabla_{\frac{\partial}{\partial x^i}}\frac{\partial}{\partial x^j}\Big) \qquad \text{and} \qquad \tilde\Gamma^c_{ab} := dy^c\Big(\nabla_{\frac{\partial}{\partial y^a}}\frac{\partial}{\partial y^b}\Big).$$
We have:
$$\tilde\Gamma^c_{ab} = dy^c\Big(\nabla_{\frac{\partial}{\partial y^a}}\frac{\partial}{\partial y^b}\Big) = \sum_k \frac{\partial y^c}{\partial x^k}\,dx^k\Big(\nabla_{\frac{\partial}{\partial y^a}}\sum_j \frac{\partial x^j}{\partial y^b}\frac{\partial}{\partial x^j}\Big) = \sum_{jk} \frac{\partial y^c}{\partial x^k}\,\frac{\partial^2 x^j}{\partial y^a\partial y^b}\,\delta^k_j + \sum_{jk} \frac{\partial y^c}{\partial x^k}\,\frac{\partial x^j}{\partial y^b}\,dx^k\Big(\nabla_{\frac{\partial}{\partial y^a}}\frac{\partial}{\partial x^j}\Big).$$
Hence:
$$\tilde\Gamma^c_{ab} = \sum_j \frac{\partial y^c}{\partial x^j}\,\frac{\partial^2 x^j}{\partial y^a\partial y^b} + \sum_{ijk} \frac{\partial y^c}{\partial x^k}\,\frac{\partial x^j}{\partial y^b}\,\frac{\partial x^i}{\partial y^a}\,\Gamma^k_{ij}.$$
Thus, the Christoffel symbols do not behave like a tensor under a change of coordinates, and thus do NOT define a tensor field. I.e., there is no (2,1)-tensor field T defined on all of M such that, in coordinates, $T_{ij}{}^k = \Gamma^k_{ij}$.
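The transformation law can be tested numerically: starting from Cartesian coordinates on $\mathbb{R}^2$, where all the $\Gamma^k_{ij}$ vanish, only the second-derivative term survives, and it should produce the well-known polar-coordinate symbols $\tilde\Gamma^r_{\theta\theta} = -r$ and $\tilde\Gamma^\theta_{r\theta} = 1/r$. A sketch in plain Python (all derivatives by central differences; the helper names are mine):

```python
import math

def xmap(r, th):   # Cartesian coordinates x(y) as functions of y = (r, theta)
    return (r * math.cos(th), r * math.sin(th))

def christoffel_polar(r, th, h=1e-4):
    """Gamma~^c_ab = sum_j (dy^c/dx^j) d2x^j/(dy^a dy^b): the transformation law
    applied to the flat connection, whose Cartesian symbols vanish."""
    def x_at(d):
        return xmap(r + d[0], th + d[1])
    # Jacobian J[a][j] = dx^j/dy^a, then dy^c/dx^j as its (transposed) inverse.
    J = []
    for a in range(2):
        e = [0.0, 0.0]; e[a] = h
        plus, minus = x_at(e), x_at([-e[0], -e[1]])
        J.append([(plus[j] - minus[j]) / (2 * h) for j in range(2)])
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    Jinv = [[ J[1][1] / det, -J[0][1] / det],   # Jinv[j][c] = dy^c/dx^j
            [-J[1][0] / det,  J[0][0] / det]]
    Gamma = [[[0.0, 0.0], [0.0, 0.0]] for _ in range(2)]   # Gamma[c][a][b]
    for a in range(2):
        for b in range(2):
            ea = [0.0, 0.0]; ea[a] = h
            eb = [0.0, 0.0]; eb[b] = h
            for j in range(2):
                d2 = (x_at([ea[0] + eb[0], ea[1] + eb[1]])[j]
                      - x_at([ea[0] - eb[0], ea[1] - eb[1]])[j]
                      - x_at([eb[0] - ea[0], eb[1] - ea[1]])[j]
                      + x_at([-ea[0] - eb[0], -ea[1] - eb[1]])[j]) / (4 * h * h)
                for c in range(2):
                    Gamma[c][a][b] += Jinv[j][c] * d2
    return Gamma

G = christoffel_polar(2.0, 0.5)
print(G[0][1][1], G[1][0][1])   # ~ -2.0 (= -r) and ~ 0.5 (= 1/r)
```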
Example 28. The usual connection on $\mathbb{R}^n$ is the usual directional derivative. In this case,
$$\nabla_{\frac{\partial}{\partial x^i}}\frac{\partial}{\partial x^j} = \frac{\partial e_j}{\partial x^i} = 0$$
where $e_j$ is the j-th vector of the canonical basis. Hence, we find that $\Gamma^k_{ij} = 0$, and then, in coordinates, we verify that the above formula gives us the usual definition of the directional derivative of vector fields:
$$\nabla_X Y = \sum_i (X\cdot Y^i)\,\frac{\partial}{\partial x^i}.$$
Hence, we verify the following symmetry formula:
$$\nabla_X Y - \nabla_Y X = \sum_i (X\cdot Y^i - Y\cdot X^i)\,\frac{\partial}{\partial x^i} = [X, Y].$$
All connections that satisfy this last equality will be called symmetric. The same computation shows that all the natural connections on submanifolds of $\mathbb{R}^n$ are symmetric. Symmetry of the connection is equivalent to $\Gamma^k_{ij} = \Gamma^k_{ji}$.
Connections give us a way to differentiate vector fields. But once again, the same "problem" that appeared with n-forms occurs: the choice of a connection on a general manifold, even among symmetric ones, is fairly arbitrary.
But again, things get better when dealing with a Riemannian manifold.
In the case of submanifolds M of $\mathbb{R}^n$ again, we see that, for any vector fields X, Y, Z on M:
$$X\cdot\langle Y, Z\rangle = \langle\nabla_X Y, Z\rangle + \langle Y, \nabla_X Z\rangle$$
A symmetric connection satisfying this compatibility with the metric is called a Levi-Civita connection, and the Levi-Civita theorem asserts that every Riemannian manifold carries exactly one such connection. As we already saw, the natural connection on a submanifold satisfies those properties, and hence is the unique Levi-Civita connection on it.
Example 29. Recall the metric of $S^2$ in spherical coordinates:
$$g = d\theta\otimes d\theta + (\sin\theta)^2\,d\varphi\otimes d\varphi$$
Applying the formula for the Christoffel symbols of the Levi-Civita connection gives:
$$\Gamma^\theta_{\varphi\varphi} = -\sin\theta\cos\theta, \qquad \Gamma^\varphi_{\theta\varphi} = \Gamma^\varphi_{\varphi\theta} = \cot\theta,$$
all the other symbols being zero. Similarly, for the hyperbolic plane one finds:
$$\Gamma^y_{xx} = 1/y, \qquad \Gamma^y_{yy} = -1/y, \qquad \Gamma^x_{xy} = \Gamma^x_{yx} = -1/y$$
Hence:
$$\nabla_{\frac{\partial}{\partial x}}\frac{\partial}{\partial x} = \frac{1}{y}\frac{\partial}{\partial y}\;; \qquad \nabla_{\frac{\partial}{\partial y}}\frac{\partial}{\partial y} = -\frac{1}{y}\frac{\partial}{\partial y}\;; \qquad \nabla_{\frac{\partial}{\partial x}}\frac{\partial}{\partial y} = \nabla_{\frac{\partial}{\partial y}}\frac{\partial}{\partial x} = -\frac{1}{y}\frac{\partial}{\partial x}$$
Now that we have a nice objects to compute directionnal derivatives on our manifolds, we can
define the covariant derivative.
DV X
i
X
i j k ∂
(t) = V̇ (t) + Γjk (c(t))V (t)ẋ (t) .
dt i
∂xi c(t)
jk
If this is zero, then the map $V(t)$ will feel like it is not turning as $c(t)$ moves: we call this parallel transport. $V$ being parallel transported along $c$ exactly means that, in coordinates:
\[
\forall i, \quad \dot V^i(t) + \sum_{jk} \Gamma^i_{jk}(c(t))\, V^j(t)\, \dot x^k(t) = 0.
\]
This is a first-order system of ODEs. Hence, given an initial vector $V_0 \in T_{c(0)}M$ and a smooth curve $c$, usual ODE theory tells us that there exists a unique $V(t)$, defined for $t$ small enough, such that $V(t)$ is parallel transported along $c$ with $V(0) = V_0$.
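Since parallel transport is just a linear ODE, it is easy to experiment with numerically. A small sketch (assuming numpy and scipy are available; the latitude $\theta_0 = \pi/3$ is an arbitrary choice): we transport a vector once around the latitude circle $\theta = \theta_0$ of $S^2$, using the Christoffel symbols of the sphere, and observe that the vector does not come back to itself — this failure is called holonomy, and it is a first sign of curvature.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parallel transport on the unit sphere S^2 along the latitude circle
# theta = theta0, phi = t.  The equations V'^i + Gamma^i_jk V^j x'^k = 0 become
#   dV^theta/dt = sin(theta0) cos(theta0) V^phi
#   dV^phi/dt   = -cot(theta0) V^theta
# using Gamma^theta_{phi phi} = -sin cos and Gamma^phi_{theta phi} = cot.
theta0 = np.pi / 3

def transport(t, V):
    Vtheta, Vphi = V
    return [np.sin(theta0) * np.cos(theta0) * Vphi,
            -Vtheta / np.tan(theta0)]

sol = solve_ivp(transport, (0.0, 2 * np.pi), [1.0, 0.0],
                rtol=1e-10, atol=1e-12)
V_end = sol.y[:, -1]
# After one full loop, the vector has rotated by 2*pi*cos(theta0) = pi:
print(V_end)  # approximately [-1, 0]
```

Note that the latitude circle is not a geodesic (only the equator is), but parallel transport makes sense along any smooth curve; the transported vector keeps a constant $g$-norm, as one can check on the output.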
Given a smooth curve $c$, we can also consider its acceleration $\frac{D\dot c}{dt}$. If this vanishes, then the curve has the feeling of going forward at a constant speed. Curves with null acceleration will be called geodesics. The equations of the geodesics are:
\[
\forall i, \quad \ddot x^i + \sum_{jk} \Gamma^i_{jk}\, \dot x^j \dot x^k = 0.
\]
This is a second-order system of ODEs. So this time, the usual theorems tell us that each geodesic is characterised by a given starting point $c(0)$ and an initial speed $\dot c(0)$.
Example 31. On the Euclidean space $\mathbb{R}^n$, the Christoffel symbols vanish. Hence, geodesics are actual straight lines.
Example 32. Imagine living on Earth. You decide to go straight forward and see what happens. Your trajectory will describe a great circle around the Earth: these are actually the geodesics of $S^2$. Let's verify this by computing the acceleration of the following curve, moving along the equator: $c(t) = (\cos t, \sin t, 0)$. In coordinates, it corresponds to $\theta(t) = \pi/2$ and $\varphi(t) = t$. Thus $\dot c(t) = \big( \frac{\partial}{\partial\varphi} \big)_{c(t)}$, and hence:
\[
\frac{D\dot c}{dt} = \nabla_{\frac{\partial}{\partial\varphi}} \frac{\partial}{\partial\varphi} = -\sin(\pi/2)\cos(\pi/2)\, \frac{\partial}{\partial\theta} = 0,
\]
making our equator a geodesic. More generally, the equations of the geodesics on $S^2$ are:
\[
\ddot\theta - \sin\theta\cos\theta\, \dot\varphi^2 = 0, \qquad \ddot\varphi + 2\cot\theta\, \dot\theta\dot\varphi = 0.
\]
Example 33. On the hyperbolic plane, geodesics satisfy the equations:
\[
\ddot x\, y - 2\dot x \dot y = 0, \qquad \ddot y\, y - \dot y^2 + \dot x^2 = 0.
\]
Geodesics are then vertical lines and half-circles centred on the $x$-axis.
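One can verify symbolically that the half-circles centred on the $x$-axis solve these equations. A quick sketch with sympy, using the (convenient, but otherwise arbitrary) parametrisation $x = r\tanh t$, $y = r/\cosh t$ of the half-circle $x^2 + y^2 = r^2$:

```python
import sympy as sp

t, r = sp.symbols('t r', positive=True)
# Half-circle of radius r centred at the origin of the x-axis:
x = r * sp.tanh(t)
y = r / sp.cosh(t)
xd, yd = sp.diff(x, t), sp.diff(y, t)
xdd, ydd = sp.diff(xd, t), sp.diff(yd, t)

# The two geodesic equations of the hyperbolic plane:
eq1 = sp.simplify(xdd * y - 2 * xd * yd)
eq2 = sp.simplify(ydd * y - yd**2 + xd**2)
print(eq1, eq2)  # both 0
```

This parametrisation is in fact the unit-speed one for the hyperbolic metric up to a constant, but for checking the geodesic equations any parametrisation proportional to arclength works.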
3.4 Connection and metric
On a Riemannian manifold, we now know how to compute the derivatives of vector fields. Our goal now is to generalize this construction to all tensor fields.
First of all, for any 0-tensor, i.e., for any smooth map $f : M \to \mathbb{R}$, there is a natural generalization of the notation. Given local coordinates $(x^i)$, we note:
\[
\nabla_i f = \frac{\partial f}{\partial x^i} = \partial_i f.
\]
With this notation, the differential of $f$ is the covariant tensor $\nabla f$, with coordinates $\nabla_i f$.
Similarly, for a given vector field $X$, one can define a (1,1)-tensor $\nabla X$ that, in coordinates, is defined by:
\[
\nabla_i X^j := \big( \nabla_{\frac{\partial}{\partial x^i}} X \big)^j = \partial_i X^j + \sum_k \Gamma^j_{ik} X^k.
\]
Now, let's generalize this to 1-forms. Let $\omega$ be a 1-covariant tensor field over $M$, written in coordinates $\omega = \sum_i \omega_i\, dx^i$. By bilinearity of the map
\[
\Omega^1(M) \times \mathfrak{X}(M) \longrightarrow C^\infty(M, \mathbb{R}), \qquad (\omega, X) \longmapsto \omega(X),
\]
it is natural to define $\nabla_{\frac{\partial}{\partial x^i}} \omega$ through the Leibniz rule $\big( \nabla_{\frac{\partial}{\partial x^i}} \omega \big)(X) := \partial_i \big( \omega(X) \big) - \omega\big( \nabla_{\frac{\partial}{\partial x^i}} X \big)$. In particular, for the forms $dx^j$, this gives us:
\[
\big( \nabla_{\frac{\partial}{\partial x^i}} dx^j \big)(X) = \partial_i X^j - dx^j(\nabla_i X) = -\sum_k \Gamma^j_{ik} X^k.
\]
Hence:
\[
\nabla_i\, dx^j = -\sum_k \Gamma^j_{ik}\, dx^k.
\]
Then, linearity of the derivative and the formula for deriving a product give us:
\[
\nabla_{\frac{\partial}{\partial x^i}} \omega = \sum_j (\partial_i \omega_j)\, dx^j + \sum_j \omega_j\, \nabla_i\, dx^j = \sum_j \Big( \partial_i \omega_j - \sum_k \Gamma^k_{ij}\, \omega_k \Big) dx^j.
\]
In fact, we can once again define $\nabla\omega$, a 2-covariant tensor field, defined in coordinates by:
\[
\nabla_i \omega_j = \big( \nabla_{\frac{\partial}{\partial x^i}} \omega \big)_j = \partial_i \omega_j - \sum_k \Gamma^k_{ij}\, \omega_k.
\]
Finally, the tensor product being naturally bilinear, we can naturally generalize our derivative to all tensors $T$. Let's take $T$ a $(k,m)$-tensor field over $M$, written:
\[
T = \sum T_{i_1 \dots i_k}^{\ j_1 \dots j_m}\, dx^{i_1} \otimes \dots \otimes dx^{i_k} \otimes \frac{\partial}{\partial x^{j_1}} \otimes \dots \otimes \frac{\partial}{\partial x^{j_m}}.
\]
We can naturally define $\nabla_i T$ by just deriving it like a product of $(1+k+m)$ terms. This defines $\nabla T$ as a $(k+1,m)$-tensor field, and some computations allow us to see that, in coordinates:
\[
\nabla_i T_{i_1 \dots i_k}^{\ j_1 \dots j_m} = \partial_i T_{i_1 \dots i_k}^{\ j_1 \dots j_m} - \sum_l \Gamma^l_{i i_1} T_{l \dots i_k}^{\ j_1 \dots j_m} - \dots - \sum_l \Gamma^l_{i i_k} T_{i_1 \dots l}^{\ j_1 \dots j_m} + \sum_l \Gamma^{j_1}_{il} T_{i_1 \dots i_k}^{\ l \dots j_m} + \dots + \sum_l \Gamma^{j_m}_{il} T_{i_1 \dots i_k}^{\ j_1 \dots l}.
\]
Notice that this definition is coherent with what we wanted for our 1-forms: for any covariant tensor $\omega$ and vector fields $X_1, \dots, X_n$, we still have:
\[
\partial_i \big( \omega(X_1, \dots, X_n) \big) = (\nabla_i \omega)(X_1, \dots, X_n) + \sum_j \omega(X_1, \dots, \nabla_i X_j, \dots, X_n).
\]
We have defined the covariant derivative of any tensor. This actually helps in some computations, and allows us to reformulate some statements.
For example, the Levi-Civita connection is supposed to be compatible with the metric, i.e.:
\[
X \cdot \langle Y, Z \rangle = \langle \nabla_X Y, Z \rangle + \langle Y, \nabla_X Z \rangle,
\]
which exactly means $\nabla g = 0$. In coordinates, this gives us the following useful formula:
\[
\partial_i g_{jk} = \sum_l \Gamma^l_{ij}\, g_{lk} + \sum_l \Gamma^l_{ik}\, g_{jl}.
\]
It is then natural to ask if $\nabla g^{-1}$, the covariant derivative of the inverse metric $(g^{ij})$, also vanishes. Let's do the computations. We know that:
\[
\sum_j g_{ij}\, g^{jk} = \delta_i^k.
\]
Before going on, let's explain a bit more what is behind this formula.
Creating a (1,1)-tensor out of a (2,0)-tensor and a (0,2)-tensor by this sort of formula is an example of a process called contraction. A contraction of two tensors $T$ and $S$, noted in coordinates $T_{i_1 \dots i_k}^{\ j_1 \dots j_m}$ and $S_{I_1 \dots I_K}^{\ J_1 \dots J_M}$, is a tensor $R$ defined in coordinates by a relation of the type:
\[
R_{i_1 \dots i_k I_1 \dots I_K}^{\ j_1 \dots j_m J_1 \dots J_M} = \sum_s T_{i_1 \dots i_k}^{\ j_1 \dots s}\, S_{s \dots I_K}^{\ J_1 \dots J_M}.
\]
This special use of the coordinates of $T$ and $S$ allows us to verify that this family of maps indeed defines a tensor. Hence, $\nabla R$ makes sense, and one can verify through a bit of computation that the following natural formula is satisfied:
\[
\nabla_l R_{i_1 \dots i_k I_1 \dots I_K}^{\ j_1 \dots j_m J_1 \dots J_M} = \sum_s \big( \nabla_l T_{i_1 \dots i_k}^{\ j_1 \dots s} \big) S_{s \dots I_K}^{\ J_1 \dots J_M} + \sum_s T_{i_1 \dots i_k}^{\ j_1 \dots s} \big( \nabla_l S_{s \dots I_K}^{\ J_1 \dots J_M} \big).
\]
It is important to realize that this formula is not trivial. The first derivative acts on a (k+K, m+M)-tensor, while the second and third act on (k,m)- and (K,M)-tensors. Strictly speaking, those are not the same objects, and thus there is no apparent simple reason for them to behave like this. One has to do the computations, making the Christoffel symbols appear, to convince oneself of this formula.
Be careful that, in this kind of expression, the first derivative is deriving a tensor. For instance, for the connection laplacian $\Delta f := \sum_i \nabla^i \nabla_i f$, a more explicit formula is:
\[
\sum_i \nabla^i \nabla_i f = \sum_{ij} g^{ij}\, \nabla_j \nabla_i f = \sum_{ij} g^{ij} \Big( \partial_j \partial_i f - \sum_k \Gamma^k_{ji}\, \partial_k f \Big).
\]
One can prove, after some computation, the following expression for the connection laplacian:
\[
\Delta f = \frac{1}{\sqrt{|\det g|}} \sum_{ij} \partial_i \Big( \sqrt{|\det g|}\; g^{ij}\, \partial_j f \Big).
\]
Again, recall that all of those complicated objects $\nabla$ exist to give a meaning to derivatives that does not depend on the choice of coordinates. Naively defining a laplacian by $\sum_i \partial^2_{ii} f$ would have given a different formula depending on our coordinates. Our connection laplacian, here, has the good property that the formula does not depend on the choice of coordinates.
Example 34. Take the usual Euclidean space $(\mathbb{R}^n, g)$, with
\[
g = \sum_i dx^i \otimes dx^i,
\]
i.e., $(g_{ij})$ is the identity matrix. We know that, in cartesian coordinates, the Christoffel symbols are zero. Then, we can write down the connection laplacian associated to this metric:
\[
\Delta f = \sum_i \frac{\partial^2 f}{\partial x_i^2}.
\]
Now let's change coordinates. We move to spherical coordinates $(r, \theta, \varphi)$, in which the metric reads:
\[
g = dr \otimes dr + r^2\, d\theta \otimes d\theta + r^2 (\sin\theta)^2\, d\varphi \otimes d\varphi.
\]
Hence, we find that $\sqrt{\det(g_{ij})} = r^2 \sin\theta$, and thus the general formula for the connection laplacian gives us the following expression:
\[
\Delta f = \frac{1}{r^2} \frac{\partial}{\partial r}\Big( r^2\, \frac{\partial f}{\partial r} \Big) + \frac{1}{r^2 \sin\theta} \frac{\partial}{\partial \theta}\Big( \sin\theta\, \frac{\partial f}{\partial \theta} \Big) + \frac{1}{r^2 (\sin\theta)^2} \frac{\partial^2 f}{\partial \varphi^2},
\]
which is the usual formula for the laplacian in spherical coordinates.
Example 35. Let's compute the connection laplacian on $S^2$, equipped with the metric induced from $\mathbb{R}^3$:
\[
g = d\theta \otimes d\theta + (\sin\theta)^2\, d\varphi \otimes d\varphi.
\]
Hence $\sqrt{\det(g_{ij})} = \sin\theta$, and thus the formula gives us the spherical laplacian:
\[
\Delta f = \frac{1}{\sin\theta} \frac{\partial}{\partial\theta}\Big( \sin\theta\, \frac{\partial f}{\partial\theta} \Big) + \frac{1}{(\sin\theta)^2} \frac{\partial^2 f}{\partial \varphi^2}.
\]
This is a natural expression to obtain, since it is the previous one without the derivatives in $r$, evaluated at $r = 1$.
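As a quick check of this formula, the function $f = \cos\theta$ on $S^2$ (the restriction of the linear coordinate $z$ to the sphere) should be an eigenfunction of the spherical laplacian with eigenvalue $-2$. A sympy sketch:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)
f = sp.cos(theta)  # restriction of the coordinate z to the unit sphere

# Spherical laplacian from the formula above, with sqrt(det g) = sin(theta):
lap = (sp.diff(sp.sin(theta) * sp.diff(f, theta), theta) / sp.sin(theta)
       + sp.diff(f, phi, 2) / sp.sin(theta)**2)
print(sp.simplify(lap))  # -2*cos(theta)
```

Any function of the form $f(\theta, \varphi)$ can be substituted here; $\cos\theta$ is simply the first nontrivial spherical harmonic.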
Example 36. On the hyperbolic plane, recall that the metric is given by:
\[
g = \frac{1}{y^2} \big( dx \otimes dx + dy \otimes dy \big).
\]
Hence $\sqrt{\det(g_{ij})} = 1/y^2$ and $g^{xx} = g^{yy} = y^2$, and we obtain the following connection laplacian:
\[
\Delta f = y^2 \Big( \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} \Big).
\]
4 Curvature
Now that we know how to do a bit of analysis on Riemannian manifolds, let's take a break to talk about more geometric aspects: the curvature.
Given a Riemannian manifold $(M, g)$, there exist a lot of different ways to describe its curvature. Here is one way to understand it. Suppose you and a friend decide to go north at a constant speed, starting from the equator. You will describe geodesics on Earth, with the same initial direction. At first, it will seem like your two trajectories are parallel, but as time goes on, you will actually approach each other, and finally meet at the north pole.
This is because the Earth is curved. If you were living on $\mathbb{R}^2$ instead, you would never have met up, and your trajectories would have stayed parallel. Instead, you experienced a deflection from your original parallelism and approached your friend: this is a sign that the Earth is positively curved. On the hyperbolic plane, instead of approaching your friend, you would have been moving away from each other. This is because of the negative curvature of the hyperbolic plane.
Another way to understand curvature is to look at triangles. On $\mathbb{R}^2$, the angles of a triangle always add up to $\pi$. But on the Earth, triangles made from geodesics will always have angles that add up to more than $\pi$. For example, one can easily construct a triangle on Earth that has 3 right angles.
On the hyperbolic plane, every geodesic triangle has angles that add up to less than $\pi$.
These ways of measuring curvature are by essence intrinsic, and will stay unchanged under an isometry. For example, a piece of paper is flat, and so is a piece of paper rolled into a cylinder. (Recall that an isometry is anything that does not tear our paper apart.) It can sound counter-intuitive that a cylinder is called flat, but think about it this way: if you roll a piece of paper, there will always be a direction that stays straight. You cannot turn your paper into a sphere.
The Riemann curvature tensor of the connection $\nabla$ is defined by:
\[
R(X, Y)Z = \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]} Z
\]
for all vector fields $X, Y, Z \in \mathfrak{X}(M)$.
The Riemann tensor is a way of measuring the non-commutativity of the connection. The fact that this indeed defines a tensor is not clear at first. First of all, recall that the Lie bracket $[X, Y]$ is defined as the only vector field such that, for all smooth real-valued maps $f$:
\[
[X, Y] \cdot f = X \cdot (Y \cdot f) - Y \cdot (X \cdot f),
\]
and that it satisfies, for any smooth real-valued map $f$:
• $[fX, Y] = f [X, Y] - (Y \cdot f) X$.
Now we can try to prove that $R$ defines a tensor. As a map that takes three vector fields and maps them to one vector field, we only have to check the $C^\infty$-multilinearity of $R$.
Let $X, \tilde X, Y, Z$ be vector fields over $M$, and $f$ a smooth real-valued map. Using $\nabla_{fX} = f \nabla_X$ and $[fX, Y] = f[X, Y] - (Y \cdot f)X$, we have:
\[
R(fX + \tilde X, Y)Z = f \nabla_X \nabla_Y Z + \nabla_{\tilde X} \nabla_Y Z - \nabla_Y (f \nabla_X Z) - \nabla_Y \nabla_{\tilde X} Z - f \nabla_{[X,Y]} Z + (Y \cdot f)\nabla_X Z - \nabla_{[\tilde X, Y]} Z
\]
\[
= R(\tilde X, Y)Z + f R(X, Y)Z.
\]
Thus, $R$ is $C^\infty$-linear with respect to its first variable. Since $R(X, Y)Z = -R(Y, X)Z$, we see immediately that $R$ is also $C^\infty$-linear with respect to its second variable. Finally, for any vector fields $X, Y, Z, \tilde Z$ and real-valued map $f$, additivity $R(X, Y)(Z + \tilde Z) = R(X, Y)Z + R(X, Y)\tilde Z$ is clear, and:
\[
R(X, Y)(fZ) = \nabla_X \nabla_Y (fZ) - \nabla_Y \nabla_X (fZ) - \nabla_{[X,Y]} (fZ)
\]
\[
= \nabla_X \big( (Y \cdot f) Z + f \nabla_Y Z \big) - \nabla_Y \big( (X \cdot f) Z + f \nabla_X Z \big) - ([X,Y] \cdot f) Z - f \nabla_{[X,Y]} Z
\]
\[
= \big( X \cdot (Y \cdot f) \big) Z + (Y \cdot f)\nabla_X Z + (X \cdot f)\nabla_Y Z + f \nabla_X \nabla_Y Z - \big( Y \cdot (X \cdot f) \big) Z - (X \cdot f)\nabla_Y Z - (Y \cdot f)\nabla_X Z - f \nabla_Y \nabla_X Z - ([X,Y] \cdot f) Z - f \nabla_{[X,Y]} Z
\]
\[
= f R(X, Y)Z.
\]
Hence, $R$ defines a (3,1)-tensor on $M$. Given some coordinates, we can then write:
\[
R = \sum R_{ijk}^{\ \ \ l}\, dx^i \otimes dx^j \otimes dx^k \otimes \frac{\partial}{\partial x^l},
\]
with
\[
R_{ijk}^{\ \ \ l} = dx^l \Big( R\Big( \frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j} \Big) \frac{\partial}{\partial x^k} \Big).
\]
We can actually compute those coefficients. First of all, remark that $\big[ \frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j} \big] = 0$. Indeed, for all smooth real-valued maps $f$, we have:
\[
\Big[ \frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j} \Big] \cdot f = \frac{\partial^2 f}{\partial x^i \partial x^j} - \frac{\partial^2 f}{\partial x^j \partial x^i} = 0.
\]
Hence, we have:
\[
R\Big( \frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j} \Big) \frac{\partial}{\partial x^k} = \nabla_{\frac{\partial}{\partial x^i}} \nabla_{\frac{\partial}{\partial x^j}} \frac{\partial}{\partial x^k} - \nabla_{\frac{\partial}{\partial x^j}} \nabla_{\frac{\partial}{\partial x^i}} \frac{\partial}{\partial x^k}
\]
\[
= \nabla_{\frac{\partial}{\partial x^i}} \Big( \sum_m \Gamma^m_{jk}\, \frac{\partial}{\partial x^m} \Big) - \nabla_{\frac{\partial}{\partial x^j}} \Big( \sum_m \Gamma^m_{ik}\, \frac{\partial}{\partial x^m} \Big)
\]
\[
= \sum_m \Big( \frac{\partial \Gamma^m_{jk}}{\partial x^i} - \frac{\partial \Gamma^m_{ik}}{\partial x^j} \Big) \frac{\partial}{\partial x^m} + \sum_{ml} \big( \Gamma^m_{jk} \Gamma^l_{mi} - \Gamma^m_{ik} \Gamma^l_{mj} \big) \frac{\partial}{\partial x^l}
\]
\[
= \sum_l \bigg( \frac{\partial \Gamma^l_{jk}}{\partial x^i} - \frac{\partial \Gamma^l_{ik}}{\partial x^j} + \sum_m \big( \Gamma^m_{jk} \Gamma^l_{mi} - \Gamma^m_{ik} \Gamma^l_{mj} \big) \bigg) \frac{\partial}{\partial x^l}.
\]
Hence:
\[
R_{ijk}^{\ \ \ l} = \frac{\partial \Gamma^l_{jk}}{\partial x^i} - \frac{\partial \Gamma^l_{ik}}{\partial x^j} + \sum_m \big( \Gamma^m_{jk} \Gamma^l_{mi} - \Gamma^m_{ik} \Gamma^l_{mj} \big).
\]
Example 37. In the Euclidean space $(\mathbb{R}^n, g)$, since all the Christoffel symbols vanish, the Riemann tensor vanishes too.
This tensor appears naturally when one is dealing with the idea mentioned before: the deflection of geodesics. One of the most well-known examples is the case of the so-called Jacobi fields. Let's quickly explain this.
Recall that a geodesic is a smooth curve $c$ such that $\frac{D\dot c}{dt} = 0$, i.e., such that:
\[
\forall i, \quad \ddot x^i + \sum_{jk} \Gamma^i_{jk}\, \dot x^j \dot x^k = 0.
\]
This is a second-order equation, hence the local solution is uniquely determined by an initial point and an initial velocity. Given a point $p \in M$, we can then define the exponential map $\exp_p : U \subset T_p M \to M$ as the map that takes $v \in T_p M$ sufficiently small, and maps it to $c(1)$, where $c$ is the geodesic that starts from $p$ with initial velocity $v$. With this notation, we check that the geodesic starting at $p$ with initial velocity $v$ is $c(t) = \exp_p(tv)$.
Now, we can study what happens when, given $p \in M$ and $v_0$ sufficiently small, one makes a little deflection of $v_0$. Namely, take a smooth map $v : (-\varepsilon, \varepsilon) \to T_p M$ such that $v(0) = v_0$, and define $f(s, t) := \exp_p(t\, v(s))$. The vector field $J(t) := \frac{\partial f}{\partial s}(0, t)$ along the geodesic $c(t) = \exp_p(t v_0)$ measures the deflection of the nearby geodesics, and satisfies the so-called Jacobi equation:
\[
\frac{D^2 J}{dt^2} = R(\dot c, J)\dot c.
\]
Let's prove this, to familiarize ourselves a bit more with covariant derivatives and this new curvature tensor. First of all, since $t \mapsto \exp_p(t\, v(s))$ is a geodesic, we have by definition:
\[
\frac{D}{dt} \frac{\partial f}{\partial t} = 0.
\]
Recall that $\frac{DV}{dt}$ is defined as $\nabla_{\dot\gamma} V$, where $\gamma$ is the path along which $V(t)$ is taken, in the sense that $V(t) \in T_{\gamma(t)} M$. Hence:
\[
0 = \frac{D}{ds} \frac{D}{dt} \frac{\partial f}{\partial t} = \nabla_{\frac{\partial f}{\partial s}} \nabla_{\frac{\partial f}{\partial t}} \frac{\partial f}{\partial t} = \frac{D}{dt} \frac{D}{ds} \frac{\partial f}{\partial t} + R\Big( \frac{\partial f}{\partial s}, \frac{\partial f}{\partial t} \Big) \frac{\partial f}{\partial t}.
\]
Moreover, writing $\hat f^i$ for the coordinates of $f$, a computation shows the symmetry of the mixed covariant derivatives:
\[
\frac{D}{ds} \frac{\partial f}{\partial t} = \sum_i \frac{\partial^2 \hat f^i}{\partial s \partial t}\, \frac{\partial}{\partial x^i} + \sum_{ij} \frac{\partial \hat f^i}{\partial t} \frac{\partial \hat f^j}{\partial s}\, \nabla_{\frac{\partial}{\partial x^j}} \frac{\partial}{\partial x^i} = \frac{D}{dt} \frac{\partial f}{\partial s}.
\]
Hence:
\[
0 = \frac{D}{dt} \frac{D}{dt} \frac{\partial f}{\partial s} + R\Big( \frac{\partial f}{\partial s}, \frac{\partial f}{\partial t} \Big) \frac{\partial f}{\partial t} = \frac{D^2}{dt^2} \frac{\partial f}{\partial s} + R\Big( \frac{\partial f}{\partial s}, \frac{\partial f}{\partial t} \Big) \frac{\partial f}{\partial t},
\]
and evaluating at $s = 0$, where $\frac{\partial f}{\partial s} = J$ and $\frac{\partial f}{\partial t} = \dot c$, we get $\frac{D^2 J}{dt^2} = -R(J, \dot c)\dot c = R(\dot c, J)\dot c$.
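To connect this with the earlier picture of converging and diverging geodesics, here is a short worked case (a sketch, under the assumptions that $c$ is a unit-speed geodesic and that the Jacobi field is written $J = j(t)E(t)$ along a parallel unit field $E$ orthogonal to $\dot c$) on a manifold of constant sectional curvature $K$:

```latex
% Jacobi equation in constant sectional curvature K, with J = j(t) E(t):
\frac{D^2 J}{dt^2} = R(\dot c, J)\dot c = -K\,J
\quad\Longleftrightarrow\quad
\ddot{j} + K\,j = 0.
% K > 0 : j(t) \propto \sin(\sqrt{K}\,t)    -> nearby geodesics refocus at t = \pi/\sqrt{K}
% K = 0 : j(t) is affine                    -> geodesics stay parallel
% K < 0 : j(t) \propto \sinh(\sqrt{-K}\,t)  -> geodesics spread apart exponentially
```

On $S^2$ ($K = 1$) this gives $j(t) = \sin t$: two geodesics leaving the equator northwards reconverge after a distance $\pi$, i.e. at the pole, exactly as in the story told at the beginning of this section.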
Hence, we see that the Riemann tensor appears when studying how geodesics deflect. Let's now study some more properties of this tensor.
Theorem 14. (Bianchi identity) If $M$ is a Riemannian manifold, then the associated curvature satisfies
\[
R(X, Y)Z + R(Y, Z)X + R(Z, X)Y = 0.
\]
This result is a direct consequence of the Jacobi identity for vector fields, namely:
\[
[[X, Y], Z] + [[Y, Z], X] + [[Z, X], Y] = 0.
\]
Sometimes, the Riemann tensor is more convenient to use if we lower the last index, creating the curvature tensor. More precisely, we define the 4-covariant tensor:
\[
R(X, Y, Z, W) := \langle R(X, Y)Z, W \rangle.
\]
In coordinates, we have:
\[
R_{ijkl} = R\Big( \frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j}, \frac{\partial}{\partial x^k}, \frac{\partial}{\partial x^l} \Big) = \Big\langle \sum_m R_{ijk}^{\ \ \ m}\, \frac{\partial}{\partial x^m}, \frac{\partial}{\partial x^l} \Big\rangle = \sum_m R_{ijk}^{\ \ \ m}\, g_{ml}.
\]
Hence, this definition is a special case of what we previously called "lowering an index".
This curvature tensor inherits symmetries from those of the Riemann tensor and of the Levi-Civita connection. I'll just quote the main symmetries.
Theorem 15. If X, Y, Z, W are vector fields on (M, g) we have :
• R(X, Y, Z, W ) + R(Y, Z, X, W ) + R(Z, X, Y, W ) = 0
• R(X, Y, Z, W ) = −R(Y, X, Z, W )
• R(X, Y, Z, W ) = −R(X, Y, W, Z)
• R(X, Y, Z, W ) = R(Z, W, X, Y )
These symmetries show us that this 4-tensor is a bit too large to contain all its information. An equivalent way of encoding the Riemann tensor is the following definition:
Definition 32. Let $\Pi$ be a 2-dimensional subspace of $T_p M$ and let $X_p, Y_p$ be two linearly independent elements of $\Pi$. Then, the sectional curvature of $\Pi$ is defined as
\[
K(\Pi) = -\frac{R(X_p, Y_p, X_p, Y_p)}{|X_p|^2 |Y_p|^2 - \langle X_p, Y_p \rangle^2}.
\]
Note that $|X_p|^2 |Y_p|^2 - \langle X_p, Y_p \rangle^2$ is the square of the area of the parallelogram in $T_p M$ spanned by $X_p, Y_p$, and thus the above definition of sectional curvature does not depend on the choice of the linearly independent vectors $X_p, Y_p$. Indeed, when we change the basis of $\Pi$, both $R(X_p, Y_p, X_p, Y_p)$ and $|X_p|^2 |Y_p|^2 - \langle X_p, Y_p \rangle^2$ change by the square of the determinant of the change-of-basis matrix.
It is possible to show that the values of all the sectional curvatures indeed characterize the Riemann tensor.
In the case of 2-dimensional manifolds, "sectional curvature" is often what someone means when talking about curvature. In this case, the sectional curvature at a point $p$ is often denoted by $K_p$. Let's compute it on some examples.
Example 38. The sectional curvature of $\mathbb{R}^2$ is zero. Indeed, for $p \in \mathbb{R}^2$, denoting by $\frac{\partial}{\partial x}, \frac{\partial}{\partial y}$ the canonical basis, one has:
\[
K_p = -\Big\langle R\Big( \frac{\partial}{\partial x}, \frac{\partial}{\partial y} \Big) \frac{\partial}{\partial x}, \frac{\partial}{\partial y} \Big\rangle = -\Big\langle 0, \frac{\partial}{\partial y} \Big\rangle = 0.
\]
Example 39. Let's compute the sectional curvature of $S^2(r)$, the sphere of radius $r$. The spherical coordinates give us the following metric on $S^2(r)$:
\[
g = r^2\, d\theta \otimes d\theta + r^2 (\sin\theta)^2\, d\varphi \otimes d\varphi.
\]
Then, using the formula giving the Christoffel symbols of the Levi-Civita connection in terms of the metric, we find the nonvanishing Christoffel symbols:
\[
\Gamma^\theta_{\varphi\varphi} = -\sin\theta\cos\theta, \qquad \Gamma^\varphi_{\theta\varphi} = \Gamma^\varphi_{\varphi\theta} = \cot\theta,
\]
from which one computes $K(S^2(r)) = 1/r^2$.
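The whole chain metric → Christoffel symbols → Riemann tensor → sectional curvature can be checked symbolically. A sketch with sympy (our choice of tool), recovering $K(S^2(r)) = 1/r^2$ from the coordinate formula for $R_{ijk}^{\ \ \ l}$:

```python
import sympy as sp

theta, phi, r = sp.symbols('theta phi r', positive=True)
coords = [theta, phi]
g = sp.Matrix([[r**2, 0], [0, r**2 * sp.sin(theta)**2]])  # metric of S^2(r)
ginv = g.inv()
n = 2

# Gamma[k][i][j] = Gamma^k_ij, from the metric:
Gamma = [[[sp.simplify(sum(
    ginv[k, l] * (sp.diff(g[j, l], coords[i]) + sp.diff(g[i, l], coords[j])
                  - sp.diff(g[i, j], coords[l])) for l in range(n)) / 2)
    for j in range(n)] for i in range(n)] for k in range(n)]

def riem(i, j, k, l):
    # R_{ijk}^l = d_i Gamma^l_jk - d_j Gamma^l_ik
    #             + sum_m (Gamma^m_jk Gamma^l_mi - Gamma^m_ik Gamma^l_mj)
    return sp.simplify(
        sp.diff(Gamma[l][j][k], coords[i]) - sp.diff(Gamma[l][i][k], coords[j])
        + sum(Gamma[m][j][k] * Gamma[l][m][i] - Gamma[m][i][k] * Gamma[l][m][j]
              for m in range(n)))

# Lower the last index and form the sectional curvature of (d_theta, d_phi):
R0101 = sp.simplify(sum(riem(0, 1, 0, m) * g[m, 1] for m in range(n)))
K = sp.simplify(-R0101 / (g[0, 0] * g[1, 1] - g[0, 1]**2))
print(K)  # 1/r**2
```

Since in dimension 2 there is only one 2-plane at each point, this single number is the full curvature information of $S^2(r)$.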
Example 40. We define the hyperbolic plane of radius $r$ as the manifold $H^2(r) := \{(x, y) \in \mathbb{R}^2 \mid y > 0\}$ equipped with the following metric:
\[
g := \frac{r^2}{y^2} \big( dx \otimes dx + dy \otimes dy \big).
\]
The formula gives us all the nonvanishing Christoffel symbols of its Levi-Civita connection:
\[
\Gamma^x_{xy} = \Gamma^x_{yx} = -\frac{1}{y}, \qquad \Gamma^y_{xx} = \frac{1}{y}, \qquad \Gamma^y_{yy} = -\frac{1}{y}.
\]
Hence:
\[
\nabla_{\frac{\partial}{\partial x}} \frac{\partial}{\partial x} = \frac{1}{y} \frac{\partial}{\partial y} ; \qquad \nabla_{\frac{\partial}{\partial y}} \frac{\partial}{\partial y} = -\frac{1}{y} \frac{\partial}{\partial y} ; \qquad \nabla_{\frac{\partial}{\partial x}} \frac{\partial}{\partial y} = \nabla_{\frac{\partial}{\partial y}} \frac{\partial}{\partial x} = -\frac{1}{y} \frac{\partial}{\partial x}.
\]
We can then compute the sectional curvature of the hyperbolic plane of radius $r$. We have:
\[
\nabla_{\frac{\partial}{\partial x}} \nabla_{\frac{\partial}{\partial y}} \frac{\partial}{\partial x} = \nabla_{\frac{\partial}{\partial x}} \Big( -\frac{1}{y} \frac{\partial}{\partial x} \Big) = -\frac{1}{y^2} \frac{\partial}{\partial y}, \qquad \nabla_{\frac{\partial}{\partial y}} \nabla_{\frac{\partial}{\partial x}} \frac{\partial}{\partial x} = \nabla_{\frac{\partial}{\partial y}} \Big( \frac{1}{y} \frac{\partial}{\partial y} \Big) = -\frac{2}{y^2} \frac{\partial}{\partial y},
\]
hence:
\[
K(H^2(r)) = -\frac{y^4}{r^4} \Big\langle \nabla_{\frac{\partial}{\partial x}} \nabla_{\frac{\partial}{\partial y}} \frac{\partial}{\partial x} - \nabla_{\frac{\partial}{\partial y}} \nabla_{\frac{\partial}{\partial x}} \frac{\partial}{\partial x}, \frac{\partial}{\partial y} \Big\rangle = -\frac{y^4}{r^4} \Big\langle \frac{1}{y^2} \frac{\partial}{\partial y}, \frac{\partial}{\partial y} \Big\rangle = -\frac{y^4}{r^4} \cdot \frac{1}{y^2} \cdot \frac{r^2}{y^2} = -\frac{1}{r^2}.
\]
Hence, $H^2(r)$ has constant sectional curvature $-1/r^2$; in particular, the standard hyperbolic plane ($r = 1$) has sectional curvature $-1$. Notice that, with all these examples, we have just proved that there exist manifolds of constant curvature $\kappa$ for all $\kappa \in \mathbb{R}$.
4.2 Ricci tensor, scalar curvature
Let's talk a bit about general relativity. In general relativity, we often say that mass and energy "deform spacetime". What does this actually mean?
In general relativity, we think about space and time as $M = \mathbb{R}^4$ equipped with a (pseudo-)metric $g$. In pure vacuum, without any presence of mass anywhere, the metric $g$ is given by:
\[
g = -c^2\, dt \otimes dt + dx \otimes dx + dy \otimes dy + dz \otimes dz,
\]
where $c$ is the speed of light.
Stricto sensu, this is not a metric, because it does not satisfy the positive-definiteness axiom. Instead, we call it a pseudo-metric, and $(\mathbb{R}^4, g)$ is called a pseudo-Riemannian manifold. Nearly everything that was defined on Riemannian manifolds can still be defined on pseudo-Riemannian manifolds, such as the Christoffel symbols (via the explicit formula that involves the metric, for example), the associated connection, the Riemann tensor...
A vector $u$ is said to be timelike if $\langle u, u \rangle < 0$, null if $\langle u, u \rangle = 0$, and spacelike if $\langle u, u \rangle > 0$. For example, the vector $(1, 0, 0, 0) \in T_{(t,x,y,z)} M$ is timelike and represents a jump in time of 1 second, without moving in space. The vector $(0, 1, 1, 1) \in T_{(t,x,y,z)} M$ is spacelike and represents a jump, at constant $t$, from where we are to the point $(x+1, y+1, z+1)$.
With those definitions, let's represent a ray of light moving through space and time. Say that we shoot a laser along the $x$-axis. A chosen photon in this light ray will have the following trajectory through spacetime: $\gamma(t) = (t, ct, 0, 0)$. It has the following speed in spacetime: $\dot\gamma(t) = (1, c, 0, 0)$. Hence:
\[
\langle \dot\gamma, \dot\gamma \rangle = -c^2 + c^2 = 0.
\]
In particular, we see that, in the case of an absolutely empty spacetime with no mass, gravity, or energy anywhere, light rays describe straight lines through spacetime and have null length. Notice that, in this particular case, the Christoffel symbols associated to the metric vanish (because the $g_{ij}$ are constant), and thus straight lines are the geodesics of this manifold. The Riemann tensor also vanishes, making our manifold "flat". We then say that the trajectories of light describe null geodesics.
But it is well known that, most of the time, light doesn't go straight: its trajectory is curved. For example, when approaching a star, a light ray will be deflected from its initial straight path. (This phenomenon is used by astrophysicists and is called gravitational lensing.)
This is because the above metric is deformed when we add massive objects to our spacetime, like the Sun. In general, the trajectory of light still describes null geodesics, but with a curved metric instead.
For example, around the Sun, the metric of spacetime is often approximated by the Schwarzschild metric. Given spherical coordinates centered at the Sun, we can express it like this (it is valid only outside the Sun):
\[
g = -\Big( 1 - \frac{r_s}{r} \Big) c^2\, dt \otimes dt + \Big( 1 - \frac{r_s}{r} \Big)^{-1} dr \otimes dr + r^2\, d\theta \otimes d\theta + r^2 (\sin\theta)^2\, d\varphi \otimes d\varphi,
\]
where $r_s$ is a constant called the Schwarzschild radius (it is way smaller than the radius of the Sun).
The trajectory of light in this curved spacetime will be null geodesics, i.e., $\nabla_{\dot\gamma} \dot\gamma = 0$ and $\langle \dot\gamma, \dot\gamma \rangle = 0$. Hence, one can (theoretically) compute the Christoffel symbols of the Schwarzschild metric, write down the equations of the geodesics and the nullity condition on the norm of $\dot\gamma$, and solve this set of equations to determine the trajectory of light under these conditions.
The Einstein equation tells us, given a distribution of mass and energy in space, what the possible metrics are. To state it, we need to define the so-called Ricci tensor and the scalar curvature.
Definition 33. Let $(M, g)$ be a (pseudo-)Riemannian manifold. We denote by $R$ the Riemann tensor associated to the Levi-Civita connection on $M$. Given some coordinates, we define $\mathrm{Ric}$ as the following symmetric 2-tensor:
\[
\mathrm{Ric}(X, Y) := \sum_k dx^k \Big( R\Big( \frac{\partial}{\partial x^k}, X \Big) Y \Big).
\]
This is a contraction of the Riemann tensor, and hence we know that $\mathrm{Ric}$ is well defined (i.e., it does not depend on the chosen coordinates).
The fact that this tensor is symmetric comes directly from the symmetries of the Riemann tensor that we previously mentioned. The Ricci tensor is another piece of information expressing something about the curvature of a Riemannian manifold.
Then, by contracting the Ricci tensor with the metric, one can define the so-called scalar curvature:
\[
S := \sum_i \mathrm{Ric}_i^{\ i} = \sum_{ij} \mathrm{Ric}_{ij}\, g^{ji} = \langle \mathrm{Ric}, g \rangle.
\]
This curvature information appears in the following Einstein equation, the "master equation" of general relativity, here in the vacuum case:
\[
\mathrm{Ric} = 0.
\]
Remember that this is just a fancy notation for a huge system of equations that only depends on the metric $g$. Indeed, one can write the coordinates of the Ricci tensor using the Christoffel symbols:
\[
\mathrm{Ric}_{ij} = \sum_k R_{kij}^{\ \ \ k} = \sum_k \frac{\partial \Gamma^k_{ij}}{\partial x^k} - \sum_k \frac{\partial \Gamma^k_{kj}}{\partial x^i} + \sum_{km} \big( \Gamma^m_{ij} \Gamma^k_{mk} - \Gamma^m_{kj} \Gamma^k_{mi} \big).
\]
Hence, if we make explicit the dependence of Ricci on the metric, we see that $\mathrm{Ric}(g) = 0$ is nothing more than a system of $\frac{n(n+1)}{2}$ highly non-linear differential equations in the $\frac{n(n+1)}{2}$ variables $g_{ij}$. In the case of general relativity, $n = 4$, and hence we have a huge system of 10 equations, which is very difficult to solve.
In the case of Riemannian manifolds of dimension 2, the scalar curvature and the sectional curvature actually encode the same information. Indeed, one can prove that, in this case, we have $S = 2K$.
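The contraction formulas above can likewise be tested symbolically: for the hyperbolic plane $H^2(r)$, they should give $S = 2K = -2/r^2$. A sympy sketch:

```python
import sympy as sp

x, y, r = sp.symbols('x y r', positive=True)
coords = [x, y]
g = (r**2 / y**2) * sp.eye(2)      # metric of H^2(r)
ginv = g.inv()
n = 2

# Gamma[k][i][j] = Gamma^k_ij, from the metric:
Gamma = [[[sp.simplify(sum(
    ginv[k, l] * (sp.diff(g[j, l], coords[i]) + sp.diff(g[i, l], coords[j])
                  - sp.diff(g[i, j], coords[l])) for l in range(n)) / 2)
    for j in range(n)] for i in range(n)] for k in range(n)]

def riem(i, j, k, l):
    # R_{ijk}^l from the Christoffel symbols
    return sp.simplify(
        sp.diff(Gamma[l][j][k], coords[i]) - sp.diff(Gamma[l][i][k], coords[j])
        + sum(Gamma[m][j][k] * Gamma[l][m][i] - Gamma[m][i][k] * Gamma[l][m][j]
              for m in range(n)))

# Ricci by contraction, then the scalar curvature S = g^{ij} Ric_{ij}:
Ric = sp.Matrix(n, n, lambda i, j: sum(riem(k, i, j, k) for k in range(n)))
S = sp.simplify(sum(ginv[i, j] * Ric[i, j] for i in range(n) for j in range(n)))
print(S)  # -2/r**2
```

In dimension 2 this is exactly twice the sectional curvature $-1/r^2$ computed in Example 40, as predicted by $S = 2K$.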
4.3 Theorema egregium
Historically, the Riemann tensor, the sectional curvature, the Ricci tensor and the scalar curvature were not the first ways to talk about the curvature of surfaces. If we take an embedded Riemannian 2-manifold in $\mathbb{R}^3$, that is, a submanifold of dimension 2 in $\mathbb{R}^3$, one natural way to see if something is curved is to look at whether the normal vector moves.
More precisely, take $M \subset \mathbb{R}^3$ a smooth surface. Choose $\varphi(x, y)$ a local parameterization of our surface in a coordinate neighbourhood $W$. Then, in our coordinate chart, there exists a unique normalized vector $n_p \in T_p M^\perp$ such that $\big( \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, n_p \big)$ is a positively oriented basis of $\mathbb{R}^3$.
This induces a local map, called the Gauss map, defined by:
\[
G : p \in W \subset M \mapsto n_p \in S^2.
\]
This map is smooth, and its differential can be identified with a symmetric endomorphism:
\[
(dG)_p : h \in T_p M \mapsto (dG)_p h \in T_{G(p)} S^2 \simeq T_p M.
\]
Hence, there exist orthonormal directions $(u, v)$ in $T_p M$ and eigenvalues $\lambda, \mu$ such that:
\[
(dG)_p u = \lambda u ; \qquad (dG)_p v = \mu v.
\]
The values $\lambda$ and $\mu$ are called the principal curvatures at $p$. They, of course, depend on the immersion of the manifold $M$ into $\mathbb{R}^3$: i.e., they can change under the action of an isometry. We call $H := \frac{1}{2}(\lambda + \mu)$ the mean curvature at $p$, and $\kappa := \lambda\mu$ the Gauss curvature at $p$.
Naturally, the mean curvature depends on the embedding. But, surprisingly, the Gauss curvature can be proved not to depend on the chosen embedding, but only on the intrinsic geometry of the surface. In fact, one can prove that the Gauss curvature of a surface is equal to its sectional curvature, which is invariant under the action of an isometry.
Gauss was so pleased by this discovery that he called this result the theorema egregium (remarkable theorem).
Example 41. We take a sheet of paper: $M := \,]0,1[^2 \times \{0\} \subset \mathbb{R}^3$, equipped with the induced metric $g = dx \otimes dx + dy \otimes dy$. The Gauss map is given by:
\[
G(p) = \frac{\partial}{\partial z} \in S^2.
\]
It is a constant map, hence its differential is null, and we have:
\[
\lambda = \mu = 0 ; \qquad \lambda\mu = 0.
\]
Now, we roll our piece of paper into a cylinder using the following isometry:
\[
\varphi(x, y) := (x, \cos y, \sin y).
\]
We check that this is indeed an isometry and that we haven't torn our paper:
\[
\langle (d\varphi)_p u, (d\varphi)_p v \rangle = \big\langle (u_x, -\sin(y) u_y, \cos(y) u_y), (v_x, -\sin(y) v_y, \cos(y) v_y) \big\rangle = u_x v_x + u_y v_y = \langle u, v \rangle.
\]
Hence, our (almost) cylinder $\varphi(M)$ is our same sheet of paper, but isometrically rolled. We have:
\[
\frac{\partial \varphi}{\partial x} = (1, 0, 0) ; \qquad \frac{\partial \varphi}{\partial y} = (0, -\sin y, \cos y).
\]
Hence, the Gauss map is given by:
\[
G : p \in \varphi(M) \mapsto \frac{\partial \varphi}{\partial x} \times \frac{\partial \varphi}{\partial y} = (0, -\cos y, -\sin y) \in S^2.
\]
We easily find the eigenvalues of its differential:
\[
\partial_x G = 0 \times \frac{\partial \varphi}{\partial x} ; \qquad \partial_y G = -\frac{\partial \varphi}{\partial y}.
\]
Hence $\lambda = 0$, $\mu = -1$, and so $\lambda\mu = 0$.
In this example, we indeed verified that the Gauss curvature stays unchanged under an isometry, which is not the case for the principal curvatures.
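The computation of this example is easy to reproduce symbolically; a quick sketch with sympy:

```python
import sympy as sp

x, y = sp.symbols('x y')
phi = sp.Matrix([x, sp.cos(y), sp.sin(y)])   # the rolled sheet of paper
px, py = sp.diff(phi, x), sp.diff(phi, y)

G = px.cross(py)                 # Gauss map; already of unit length here
Gx, Gy = sp.diff(G, x), sp.diff(G, y)

print(G.T)          # (0, -cos(y), -sin(y))
print(Gx.T)         # zero row: dG(d/dx) = 0, so lambda = 0
print((Gy + py).T)  # zero row: dG(d/dy) = -d/dy, so mu = -1
```

The Gauss curvature $\lambda\mu$ of the cylinder is therefore $0$, matching the flat sheet, while the individual principal curvatures changed.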
5 Geometric analysis
Now that we have introduced the concept of Riemannian manifolds and discussed curvature and a bit of physics, let's try to do a bit more analysis. My goal here will be to introduce Sobolev spaces on Riemannian manifolds, for maps and tensors. Then, I will briefly explain the topic of the paper of Justin Corvino, quote the theorem that I studied, and present some of the techniques used.
Then, with $(k, m)$ fixed, the set of all (equivalence classes of) $(k,m)$-tensors in $L^2(\Omega)$, equipped with the previous inner product, is a Hilbert space.
These notations can seem ambiguous, but in practice it will always be clear from the context in which $L^2$-space we work.
Now we are going to define the Sobolev spaces. First of all, let's denote by $C_k(\Omega)$ the following vector space:
\[
C_k(\Omega) := \{ \varphi \in C^\infty(\Omega) \mid \forall l \le k,\ \nabla^{(l)} \varphi \in L^2(\Omega) \},
\]
where we have noted $\nabla^{(k)} \varphi := \nabla \dots \nabla \varphi$ ($k$ times).
For example, in the particular case of the space $C_1(\Omega)$, and if $\Omega$ is covered entirely by one coordinate chart, the condition $\nabla f \in L^2(\Omega)$ can be written in the more explicit way:
\[
\int_\Omega |\nabla f|^2 = \sum_i \int_\Omega (\nabla_i f)(\nabla^i f) = \sum_{ij} \int_\Omega (\nabla_i f)(\nabla_j f)\, g^{ij} < \infty.
\]
Definition 34. For all $\varphi, \psi \in C_k(\Omega)$, we define:
\[
\langle \varphi, \psi \rangle_{H^k(\Omega)} = \int_\Omega \varphi\psi + \int_\Omega \langle \nabla\varphi, \nabla\psi \rangle + \dots + \int_\Omega \langle \nabla^{(k)}\varphi, \nabla^{(k)}\psi \rangle.
\]
Definition 35. We then define $H^k(\Omega)$ as the completion of $C_k(\Omega)$ with respect to this inner product.
Definition 36. Analogously, we can define the set of all $(K,M)$-tensors in $H^k(\Omega)$ by the following construction: define
\[
C_k^{(K,M)}(\Omega) := \{ T \text{ smooth } (K,M)\text{-tensor field over } \Omega \mid \forall l \le k,\ \nabla^{(l)} T \in L^2(\Omega) \},
\]
then define the following inner product on $C_k^{(K,M)}(\Omega)$:
\[
\langle T, S \rangle_{H^k(\Omega)} = \int_\Omega \langle T, S \rangle + \int_\Omega \langle \nabla T, \nabla S \rangle + \dots + \int_\Omega \langle \nabla^{(k)} T, \nabla^{(k)} S \rangle.
\]
We then define $H^k(\Omega)$ as the completion of $C_k^{(K,M)}(\Omega)$ with respect to this inner product.
The Sobolev spaces arise naturally when we try to solve differential equations. A lot of operators that exist on $C_k(\Omega)$ can be extended by continuity to $H^k(\Omega)$. Differential operators, such as the connection laplacian, will naturally be studied in these spaces.
Example 42. We are going to define the notion of weak derivative. The linear map
\[
\nabla : C_1(\Omega) \to L^2(\Omega)
\]
is clearly continuous, since
\[
\int_\Omega |\nabla\varphi|^2 \le \|\varphi\|^2_{H^1(\Omega)}.
\]
Then, as $C_1(\Omega)$ is dense in $H^1(\Omega)$, and since $L^2(\Omega)$ is a Hilbert space, we know that there exists a unique continuous extension
\[
\nabla : H^1(\Omega) \to L^2(\Omega).
\]
This is called the weak derivative of a map $f \in H^1(\Omega)$. Analogously, there exists a unique way to continuously extend all the maps $\nabla^{(l)} : C_{k+l}(\Omega) \subset H^{k+l}(\Omega) \to C_k(\Omega) \subset H^k(\Omega)$.
By continuity, weak derivatives of maps still behave like we are used to.
This notion of weak derivative allows us to give an explicit formula for the induced norm on $H^k(\Omega)$, by just taking the natural continuation:
\[
\langle f, g \rangle_{H^k(\Omega)} = \int_\Omega fg + \int_\Omega \langle \nabla f, \nabla g \rangle + \dots + \int_\Omega \langle \nabla^{(k)} f, \nabla^{(k)} g \rangle.
\]
With this formula for the norms $\|\cdot\|_{H^k(\Omega)}$, it is not difficult to see that the weak derivative defines a continuous map $H^{k+1}(\Omega) \to H^k(\Omega)$.
Example 43. Computing integrals requires having coordinate-free objects at our disposal. Only tensors that are defined globally will have a chance to behave well.
For example, suppose that our subset $\Omega$ is covered by one coordinate neighbourhood, and choose coordinates $(x^i)$ over $\Omega$. Therefore, and only because we supposed so, for $f \in H^1(\Omega)$, $\nabla_i f$ is defined globally. Most of the time, only the tensor $\nabla f$ would be defined globally, and the expression $\nabla f = \sum_i \nabla_i f\, dx^i$ would only make sense locally.
But let's suppose that our coordinates allow us to compute $\nabla_i f$ on all of $\Omega$. Since $f \in H^1(\Omega)$, we know that $\nabla f \in L^2(\Omega)$, which means exactly that:
\[
\sum_{ij} \int_\Omega (\nabla_i f)(\nabla_j f)\, g^{ij} < \infty.
\]
Notice that this condition does not imply that $\nabla_i f \in L^2(\Omega)$, because we don't know how the $g^{ij}$ behave!
Still, in the special case where $\Omega$ is compact, we can prove that $f \in H^1(\Omega)$ implies $\nabla_i f \in L^2(\Omega)$ for all $i$. (But once again, this only works if we can cover our domain entirely by one coordinate chart, since otherwise it won't even make any sense.)
Example 44. Choose a smooth vector field $X$ on $\Omega$. Given some coordinates, we define:
\[
\mathrm{div}\, X := \sum_i \nabla_i X^i.
\]
This is the contraction of the (1,1)-tensor $\nabla X$ with itself, and hence this quantity does not depend on the choice of coordinates.
Making the Christoffel symbols appear, we can write it like this:
\[
\mathrm{div}\, X = \sum_i \partial_i X^i + \sum_{ij} \Gamma^i_{ij}\, X^j.
\]
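On the hyperbolic plane, this coordinate formula can be compared with the classical expression $\mathrm{div}\, X = \frac{1}{\sqrt{\det g}} \sum_i \partial_i(\sqrt{\det g}\, X^i)$ (a standard identity, not stated in this report). A sympy sketch checking that the two agree for an arbitrary vector field:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
X1 = sp.Function('X1')(x, y)   # arbitrary components of a vector field
X2 = sp.Function('X2')(x, y)

# Hyperbolic plane: Gamma^x_xy = Gamma^y_yy = -1/y, Gamma^y_xx = 1/y,
# so sum_i Gamma^i_{ix} = 0 and sum_i Gamma^i_{iy} = -2/y.
div_christoffel = sp.diff(X1, x) + sp.diff(X2, y) - (2 / y) * X2

# Alternative formula (1/sqrt(det g)) d_i (sqrt(det g) X^i), sqrt(det g) = 1/y^2:
div_det = y**2 * (sp.diff(X1 / y**2, x) + sp.diff(X2 / y**2, y))

print(sp.simplify(div_christoffel - div_det))  # 0
```

Both expressions are manifestly first-order differential operators in $X$; the check confirms that the Christoffel correction terms are exactly what the volume factor $\sqrt{\det g}$ produces.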
It seems natural to hope that this can be extended into a continuous map $\mathrm{div} : H^1(\Omega) \to L^2(\Omega)$. Indeed, this is the case. We are going to prove it by using some special coordinates.
First of all, choose $X \in C_1^{(0,1)}(\Omega)$ a smooth vector field, and fix $p \in \Omega$. We admit that there exist coordinates $(x^i)$ around $p$, called normal coordinates, such that $g_{ij}(p) = g^{ij}(p) = \delta_{ij}$. Of course, this only works at the point $p$: even in these normal coordinates, we won't have $g_{ij}(q) = g^{ij}(q) = \delta_{ij}$ for $q \ne p$ in general.
In these coordinates, at the point $p$, we have:
\[
|(\mathrm{div}\, X)(p)| = \Big| \sum_i (\nabla_i X^i)(p) \Big| = \Big| \sum_i (\nabla X)_p \Big( \frac{\partial}{\partial x^i}\Big|_p, (dx^i)_p \Big) \Big| \le \sum_i |(\nabla X)_p| \cdot \Big| \frac{\partial}{\partial x^i}\Big|_p \Big| \cdot |(dx^i)_p| = n\, |(\nabla X)_p|.
\]
And now, we see that the resulting inequality is coordinate-free, and hence is valid for all $p \in \Omega$. Integrating over $\Omega$ gives:
\[
\int_\Omega |\mathrm{div}\, X|^2 \le n^2 \int_\Omega |\nabla X|^2 \le n^2\, \|X\|^2_{H^1(\Omega)}.
\]
Hence, $\mathrm{div} : C_1^{(0,1)}(\Omega) \to L^2(\Omega)$ defines a continuous operator.
Thus, we can extend it continuously to a map $\mathrm{div} : H^1(\Omega) \to L^2(\Omega)$.
Similarly, $\mathrm{grad}\, f$ is just the vector field obtained by raising the index of the tensor $\nabla f$. Hence, $|\mathrm{grad}\, f| = |\nabla f|$, and thus $\mathrm{grad}$ defines a continuous map $H^{k+1}(\Omega) \to H^k(\Omega)$.
Now, let $f$ be a smooth map $\Omega \to \mathbb{R}$. Remember that we defined the connection laplacian of $f$ by:
\[
\Delta f := \sum_i \nabla^i \nabla_i f.
\]
One can check that:
\[
\Delta f = \mathrm{div}(\mathrm{grad}\, f).
\]
Hence, we see that the connection laplacian can be naturally extended to a continuous map $\Delta : H^{k+2}(\Omega) \to H^k(\Omega)$.
Example 46. Let's try to compute the formal adjoint of $\mathrm{div} : H^1(\Omega) \to L^2(\Omega)$. Let $\Phi$ be a smooth vector field with compact support in $\Omega$, and let $\psi$ be a smooth, real-valued map with compact support in $\Omega$. It is a classic corollary of the Stokes formula that, for any smooth vector field $X$ with compact support:
\[
\int_\Omega \mathrm{div}\, X = 0.
\]
(This is the divergence theorem.) Replacing $X$ by $\psi\Phi$ in the divergence gives, in some local coordinates:
\[
\mathrm{div}(\psi\Phi) = \sum_i \nabla_i (\psi \Phi^i) = \sum_i \psi\, \nabla_i \Phi^i + \sum_i (\nabla_i \psi)\, \Phi^i = \psi\, \mathrm{div}(\Phi) + \langle \Phi, \mathrm{grad}\, \psi \rangle.
\]
Integrating, we get $\int_\Omega \psi\, \mathrm{div}\, \Phi = -\int_\Omega \langle \Phi, \mathrm{grad}\, \psi \rangle$. Thus, we see that the formal adjoint of $\mathrm{div} : H^1(\Omega) \to L^2(\Omega)$ is $-\mathrm{grad} : H^1(\Omega) \to L^2(\Omega)$.
Example 47. From the previous example, we are able to easily compute the formal adjoint of the laplacian. We have, for any smooth $\varphi, \psi$ with compact support:
\[
\int_\Omega (\Delta\varphi)\psi = -\int_\Omega \langle \nabla\varphi, \nabla\psi \rangle = \int_\Omega \varphi (\Delta\psi).
\]
Hence, $\Delta$ is formally self-adjoint. Moreover, these computations allow us to check that it is also a negative operator, in the following sense:
\[
\int_\Omega (\Delta\varphi)\varphi = -\int_\Omega |\nabla\varphi|^2 \le 0.
\]
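These integration-by-parts identities are easy to test in the simplest setting $\Omega = \mathbb{R}$ with the Euclidean metric, using rapidly decaying functions in place of compactly supported ones (a harmless substitution here, since the boundary terms still vanish). A sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
phi = sp.exp(-x**2)          # smooth, rapidly decaying test functions
psi = sp.exp(-x**2 / 2)

lhs = sp.integrate(sp.diff(phi, x, 2) * psi, (x, -sp.oo, sp.oo))
mid = -sp.integrate(sp.diff(phi, x) * sp.diff(psi, x), (x, -sp.oo, sp.oo))
rhs = sp.integrate(phi * sp.diff(psi, x, 2), (x, -sp.oo, sp.oo))
print(sp.simplify(lhs - mid), sp.simplify(lhs - rhs))  # 0 0

# Negativity: the integral of (phi'')*phi equals minus the integral of |phi'|^2
neg = sp.integrate(sp.diff(phi, x, 2) * phi, (x, -sp.oo, sp.oo))
print(neg)  # -sqrt(pi/2), a negative number
```

The same computation on a general Riemannian manifold only differs by the volume factor $\sqrt{\det g}$ in the integrals.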
5.3 Deformation of the scalar curvature
Now, we have all the tools to talk quickly about the paper of Justin Corvino: "Scalar Curvature Deformation and a Gluing Construction for the Einstein Constraint Equations".
The weighted spaces used there are designed so that all the maps in such a space are forced to decay quickly when approaching the boundary.
We also define the Banach space $C^{k,\alpha}(\Omega)$, normed by the following:
\[
\|f\|_{k,\alpha} = \sum_{l \le k}\, \sup_{x \in \Omega} |\nabla^{(l)} f(x)| + \sup_{x \ne y \in \Omega} \frac{|\nabla^{(k)} f(x) - \nabla^{(k)} f(y)|}{d(x, y)^\alpha}.
\]
• $S - R(g_0) \in C^{k,\alpha}_{\rho^{-1}}(\Omega)$
One classical approach to solving this kind of equation is to use the Newton algorithm. It goes like this:
• First of all, linearize the problem. Instead of studying R(g) = S, we linearize at g0 and study
the following problem :
Lg0 (h) = S − R(g0 )
• Then, we solve the linearized equation, with care. To ensure that our solution to this linearized problem has nice properties (like h being small and smooth), we will have to solve it via a variational approach, which is where the adjoint appears.
• Once having solved Lg0 (h0 ) = S − R(g0 ), we define g1 = g0 + h0 . And then we iterate the
process : we linearize at g1 , solve the linearized problem, define g2 = g1 + h1 , etc.
• To finish, we have to prove that the created sequence of metrics gn converges to a metric g
that solves the problem.
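The loop above can be sketched on a toy scalar model, where R(g) = g² stands in for the (much more complicated) scalar curvature map, and dividing by the derivative dR(g) stands in for solving the linearized problem. All names here are illustrative, not from the paper.

```python
def newton(R, dR, S, g0, steps=8):
    """Newton scheme: linearize at the current iterate, solve the
    linearized problem L_g(h) = S - R(g), update g <- g + h, repeat."""
    g = g0
    for _ in range(steps):
        h = (S - R(g)) / dR(g)   # solve the linearized problem at g
        g = g + h                # re-linearize at the new point next time
    return g

# Toy model: R(g) = g^2, so solving R(g) = 2 computes sqrt(2).
R = lambda g: g * g
dR = lambda g: 2 * g
g = newton(R, dR, S=2.0, g0=1.0)
print(abs(R(g) - 2.0) < 1e-12)  # True
```
On this model the convergence is quadratic: a handful of iterations reaches machine precision.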
We will try to see, roughly, what happens when we try to do this.
The linearization of the scalar curvature at g0 is given by:

L_{g0}(h) = −∆(⟨g, h⟩) + div div h − ⟨Ric, h⟩.

That is, in coordinate notation:

L_{g0}(h) = −Σ_{ij} ∆(g^{ij} h_{ij}) + Σ_{ij} ∇^i ∇^j h_{ij} − Σ_{ij} Ric^{ij} h_{ij}.
While one can easily see that the scalar curvature is indeed differentiable, the computations providing a formula for L_{g0} are quite long and technical, though doable with all the tools that we have developed here. The computations are clearly presented on the blog of Terence Tao, in a post called "285G, Lecture 1: Flows on Riemannian manifolds".
Notice that the linearized operator L_{g0} takes a smooth symmetric 2-tensor h and maps it to a real-valued map f. Hence, its formal adjoint L*_{g0} will take a real-valued map f and map it to a symmetric 2-tensor h.
In the case where this formal adjoint is injective, one can prove the following estimate:
Theorem 20. There exists C > 0 such that, for all f ∈ H²_ρ(Ω):

||f||_{H²_ρ(Ω)} ≤ C ||L*_{g0} f||_{L²_ρ(Ω)},

where:

||f||²_{H²_ρ(Ω)} := ∫_Ω |f|² ρ + ∫_Ω |∇f|² ρ + ∫_Ω |∇∇f|² ρ.

H²_ρ(Ω) is a weighted Sobolev space: it is defined analogously to H²(Ω), but with respect to the above norm.
5.5 Solving the linearized problem
Now, we will explain briefly how to conveniently solve our linearized problem L_{g0}(h) = S − R(g0).
Since S − R(g0) ∈ L²_{ρ^{-1}}(Ω) by hypothesis, we can write it in the form S − R(g0) = fρ, with f ∈ L²_ρ(Ω). Then, define the following functional:

J : H²_ρ(Ω) −→ R
u ↦ (1/2) ∫_Ω |L*_{g0} u|² ρ − ∫_Ω f u ρ.
The estimates quoted previously allow us to show that this map admits a minimum in H²_ρ(Ω). Indeed, we have:

J(u) ≥ (1/2) ||L*_{g0} u||²_{L²_ρ(Ω)} − |⟨f, u⟩_{L²_ρ(Ω)}| ≥ (1/(2C²)) ||u||²_{H²_ρ(Ω)} − ||f||_{L²_ρ(Ω)} ||u||_{H²_ρ(Ω)}.

Hence, J(u) → ∞ when ||u||_{H²_ρ(Ω)} → ∞. This proves that there exists a bounded subset A ⊂ H²_ρ(Ω) such that inf_{H²_ρ(Ω)} J = inf_A J.
Then, by taking a minimizing sequence (u_i), which will be bounded, one can extract a subsequence that converges weakly to a map u ∈ H²_ρ(Ω), and then prove that this u is a minimum of J.
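A finite-dimensional analogue makes this variational step concrete: replacing L*_{g0} by an injective matrix A and the weighted pairings by Euclidean ones, minimizing J(u) = ½|Au|² − ⟨f, u⟩ over Rⁿ amounts to solving the normal equations AᵀA u = f, the analogue of L_{g0}(ρ L*_{g0} u) = fρ. A sketch with illustrative data:

```python
import numpy as np

# Finite-dimensional model of the functional J(u) = 1/2 |A u|^2 - <f, u>.
# A plays the role of L*_{g0}; the +3I shift makes A injective, so A^T A
# is positive definite and the minimizer is unique.  Data is illustrative.
rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n)) + 3 * np.eye(n)
f = rng.normal(size=n)

def J(v):
    return 0.5 * np.dot(A @ v, A @ v) - np.dot(f, v)

# Euler-Lagrange equation of J: A^T A u = f (the "normal equations")
u = np.linalg.solve(A.T @ A, f)

# u beats nearby perturbations, as a minimum should:
print(all(J(u) <= J(u + 1e-3 * rng.normal(size=n)) for _ in range(100)))  # True
```
The coercivity step in the text is visible here too: since A is injective, ½|Au|² grows quadratically in |u|, so J(u) → ∞ as |u| → ∞ and a minimizer exists.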
Once we know that there exists u0 ∈ H²_ρ(Ω) such that J(u0) = inf J, we can compute the Euler-Lagrange equation associated to this functional. We have, for all η ∈ C_c^∞(Ω):

(d/dt)|_{t=0} J(u0 + tη) = 0,

i.e.:

∫_Ω ⟨L*_{g0} u0, L*_{g0} η⟩ ρ − ∫_Ω f η ρ = 0,

which is exactly the weak formulation of L_{g0}(ρ L*_{g0} u0) = fρ.
Hence, we have found u0 ∈ H²_ρ(Ω) such that h0 := ρ L*_{g0} u0 ∈ L²_{ρ^{-1}}(Ω) solves our linearized equation weakly. But we want more.
It appears that, conveniently, the operator P := L_{g0}(ρ L*_{g0} ·) is an elliptic operator of order 4. Moreover, its coefficients are smooth enough (C^{k+4,α}), and S − R(g0) is also smooth enough (C^{k,α}). Hence, our weak solution u0 ∈ H²_ρ(Ω) is upgraded, via elliptic regularity, to a classical solution u0 ∈ C^{k+4,α}.
It is in those inequalities that our weight ρ is really mandatory: if we forget the decay information, inequalities of the form ||f||_{H²_ρ(Ω)} ≤ C ||L*_{g0} f||_{L²_ρ(Ω)} become false.
So, our Newton algorithm seems to have a healthy start. But what happens next?
We want to linearize again, but at g1 this time. We then solve, weakly, the following equation:

L_{g1}(ρ L*_{g1} u1) = S − R(g1).
But here, an important problem appears. Our operator is still a fourth-order elliptic operator, but now its coefficients are only C^{k+2,α}! Hence, our weak solution is only upgraded to a classical solution u1 ∈ C^{k+2,α}_ρ, and so h1 is only C^{k,α}. We lost two degrees of differentiability!
It seems, then, that the Newton algorithm won't do well. To overcome this difficulty, we modify the Newton algorithm a bit, using the so-called Picard approach. The idea is, simply, to keep linearizing at g0 at each step, i.e., to define h1 through:

L_{g0}(h1) = S − R(g1).
Let’s see what happens for the second step of iteration. We suppose that ||S −R(g0 )||k,α,ρ−1 ≤ .
We can then write :
R(g2 ) = R(g1 ) + Lg1 (h1 ) + O(||h1 ||k+2,α ) = S + (Lg1 − Lg0 )(h1 ) + O(||h1 ||2k+2,α )
Since g 7→ Lg is continuous, we can write :
And hence, this time, things seem to be going well for our algorithm.
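The Picard variant can also be sketched on a toy scalar model, with R(g) = g² standing in for the scalar curvature map: the linearization is computed once, at g0, and reused at every step, and the iteration still converges, geometrically rather than quadratically. Names are illustrative.

```python
def picard(R, dR, S, g0, steps=40):
    """Picard scheme: freeze the linearization at g0 and iterate
    g <- g + h, where h solves L_{g0}(h) = S - R(g)."""
    g = g0
    L0 = dR(g0)                   # linearize once, at g0, and never again
    for _ in range(steps):
        g = g + (S - R(g)) / L0   # frozen linearized solve + update
    return g

# Toy model: R(g) = g^2, target S = 2, so the limit is sqrt(2).
R = lambda g: g * g
dR = lambda g: 2 * g
g = picard(R, dR, S=2.0, g0=1.0)
print(abs(R(g) - 2.0) < 1e-10)  # True
```
Near the solution the error contracts by a fixed factor per step (here |1 − √2| ≈ 0.41), the geometric convergence mentioned just below; the price of freezing the linearization is that many more iterations are needed than with full Newton.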
And then, one can prove that, indeed, if we follow this algorithm, the constructed sequence of metrics gn will actually converge geometrically to a C^{k+2,α}-metric g such that R(g) = S.
This finishes the sketch of the proof of the theorem of Justin Corvino.
6 Acknowledgements
I would like to warmly thank Nadine Große, who was really kind and supportive with me during this internship, and who gave me the opportunity to give a talk on the paper of Corvino. Many thanks to everyone: the place was great and I had a great time. I hope that I will have the opportunity to work with you again.
7 Bibliography
Leonor Godinho and Jose Natario, An Introduction to Riemannian Geometry with Applications to Mechanics and Relativity, 2014.
J. Corvino, Scalar Curvature Deformation and a Gluing Construction for the Einstein Constraint Equations, Commun. Math. Phys. 214 (2000), 137. https://doi.org/10.1007/PL00005533