Note 12. Alternating tensors
Differential Geometry, 2005
Let V be a real vector space. In Note 11 the tensor spaces T^k(V) were defined, together with the tensor product
(S, T) ↦ S ⊗ T,   T^k(V) × T^l(V) → T^{k+l}(V).
There is an important construction of vector spaces which resemble tensor powers
of V , but for which there is a more refined structure. These are the so-called
exterior powers of V , which play an important role in differential geometry because
the theory of differential forms is built on them. They are also of importance in
algebraic topology and many other fields.
A multilinear map
ϕ: V^k = V × · · · × V → U
is called alternating if for all v1 , . . . , vk ∈ V the value ϕ(v1 , . . . , vk ) changes sign
whenever two of the vectors v1 , . . . , vk are interchanged, that is
ϕ(v1 , . . . , vi , . . . , vj , . . . , vk ) = −ϕ(v1 , . . . , vj , . . . , vi , . . . , vk ). (1)
Since every permutation of the numbers 1, . . . , k can be decomposed into transpo-
sitions, it follows that
ϕ(vσ(1) , . . . , vσ(k) ) = sgn σ ϕ(v1 , . . . , vk ) (2)
for all permutations σ ∈ Sk of the numbers 1, . . . , k.
Examples. 1. Let V = R^3. The vector product (v1, v2) ↦ v1 × v2 ∈ V is alternating from V × V to V.
2. Let V = R^n. The n × n determinant is multilinear and alternating in its columns, hence it can be viewed as an alternating map (R^n)^n → R.
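The first example can be checked directly in code. The following minimal sketch (the function name cross and the test vectors are our own) verifies that interchanging the two arguments of the vector product flips the sign, and that the product vanishes on a repeated argument:

```python
# Numerical check that the vector product on R^3 is alternating.

def cross(v, w):
    """Vector product v x w in R^3, componentwise."""
    return (v[1]*w[2] - v[2]*w[1],
            v[2]*w[0] - v[0]*w[2],
            v[0]*w[1] - v[1]*w[0])

v, w = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
# Interchanging the arguments changes the sign of every component.
assert cross(v, w) == tuple(-c for c in cross(w, v))
# On a repeated argument the value is zero.
assert cross(v, v) == (0.0, 0.0, 0.0)
```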
Lemma 1. Let ϕ: V^k → U be multilinear. The following conditions are equivalent:
(a) ϕ is alternating,
(b) ϕ(v1 , . . . , vk ) = 0 whenever two of the vectors v1 , . . . , vk coincide,
(c) ϕ(v1 , . . . , vk ) = 0 whenever the vectors v1 , . . . , vk are linearly dependent.
Proof. (a)⇒(b) If vi = vj the interchange of vi and vj does not change the value
of ϕ(v1 , . . . , vk ), so (1) implies ϕ(v1 , . . . , vk ) = 0.
(b)⇒(a) Consider for example the interchange of v1 and v2 . By linearity
0 = ϕ(v1 + v2 , v1 + v2 , . . . )
= ϕ(v1 , v1 , . . . ) + ϕ(v1 , v2 , . . . ) + ϕ(v2 , v1 , . . . ) + ϕ(v2 , v2 , . . . )
= ϕ(v1 , v2 , . . . ) + ϕ(v2 , v1 , . . . ).
It follows that ϕ(v2 , v1 , . . . ) = −ϕ(v1 , v2 , . . . ).
(b)⇒(c) If the vectors v1 , . . . , vk are linearly dependent then one of them can
be written as a linear combination of the others. It follows that ϕ(v1 , . . . , vk ) is a
linear combination of terms in each of which some vi appears twice.
(c)⇒(b) Obvious.
In particular, if k > dim V then every set of k vectors is linearly dependent, and hence ϕ = 0 is the only alternating map V^k → U.
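The three equivalent conditions of Lemma 1 can be illustrated with the 2 × 2 determinant, viewed as an alternating bilinear map on pairs of vectors in R^2. A small sketch (the name det2 and the test vectors are ours):

```python
# The 2x2 determinant det(v, w) = v0*w1 - v1*w0 is bilinear and alternating.

def det2(v, w):
    return v[0]*w[1] - v[1]*w[0]

v = (2.0, 5.0)
assert det2(v, v) == 0.0                              # (b): repeated argument
assert det2(v, tuple(3*c for c in v)) == 0.0          # (c): w = 3v is dependent on v
assert det2(v, (1.0, 0.0)) == -det2((1.0, 0.0), v)    # (a): interchange flips the sign
```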
Definition 1. An alternating k-form is an alternating k-tensor V^k → R. The space of these is denoted A^k(V); it is a linear subspace of T^k(V).
We define A^1(V) = V^* and A^0(V) = R.
Example. Let η, ζ ∈ V ∗ . The 2-tensor η ⊗ ζ − ζ ⊗ η is alternating.
The following lemma exhibits a standard procedure to construct alternating
forms.
Lemma 2. For each T ∈ T^k(V) the element Alt(T) ∈ T^k(V) defined by
Alt(T)(v1, . . . , vk) = (1/k!) Σ_{σ∈Sk} sgn σ T(vσ(1), . . . , vσ(k))   (3)
is alternating. Moreover, if T is already alternating, then Alt(T ) = T .
Proof. Let τ ∈ Sk be the transposition corresponding to an interchange of two
vectors among v1 , . . . , vk . We have
Alt(T)(vτ(1), . . . , vτ(k)) = (1/k!) Σ_{σ∈Sk} sgn σ T(vτ◦σ(1), . . . , vτ◦σ(k)).
Since σ ↦ τ ◦ σ is a bijection of Sk we can substitute σ for τ ◦ σ. Using sgn(τ ◦ σ) = − sgn σ, we obtain the desired equality with − Alt(T)(v1, . . . , vk).
If T is already alternating, all the summands of (3) are equal to T (v1 , . . . , vk ).
Since |Sk | = k! we conclude that Alt(T ) = T .
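The operator of Lemma 2 is easy to implement directly from formula (3). The sketch below (the names sign and alt are ours) antisymmetrizes a tensor given as a Python function and checks both claims of the lemma on an example:

```python
from itertools import permutations
from math import factorial

def sign(p):
    """Sign of a permutation given as a tuple of 0-based indices, by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def alt(T, k):
    """Formula (3): Alt(T)(v1,...,vk) = (1/k!) * sum over sigma of sgn(sigma) T(v_sigma(1),...)."""
    def alt_T(*vs):
        return sum(sign(p) * T(*(vs[i] for i in p))
                   for p in permutations(range(k))) / factorial(k)
    return alt_T

# A non-alternating 2-tensor on R^2: T(v, w) = v0 * w1.
T = lambda v, w: v[0] * w[1]
A = alt(T, 2)
v, w = (1.0, 2.0), (3.0, 4.0)
assert A(v, w) == -A(w, v)          # Alt(T) is alternating
assert alt(A, 2)(v, w) == A(v, w)   # Alt fixes tensors that are already alternating
```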
Let e1 , . . . , en be a basis for V , and ξ1 , . . . , ξn the dual basis for V ∗ . We saw
in Note 11 that the elements ξi1 ⊗ · · · ⊗ ξik form a basis for T k (V ). We will now
exhibit a similar basis for Ak (V ). We have seen already that Ak (V ) = 0 if k > n.
Theorem 1. Assume k ≤ n. For each subset I ⊂ {1, . . . , n} with k elements, let
1 ≤ i1 < · · · < ik ≤ n be its elements, and let
ξI = Alt(ξi1 ⊗ · · · ⊗ ξik ) ∈ Ak (V ). (4)
These elements ξI form a basis for A^k(V). In particular, dim A^k(V) = n!/(k!(n − k)!).
Proof. It follows from the last statement in Lemma 2 that Alt: T k (V ) → Ak (V ) is
surjective. Applying Alt to the basis elements ξi1 ⊗ · · · ⊗ ξik for T k (V ), we therefore
obtain a spanning set for Ak (V ). Notice that
Alt(ξi1 ⊗ · · · ⊗ ξik)(v1, . . . , vk) = (1/k!) Σ_{σ∈Sk} sgn σ ξi1(vσ(1)) · · · ξik(vσ(k)),
which is 1/k! times the determinant of the k × k matrix (ξip (vq ))p,q . It follows that
Alt(ξi1 ⊗ · · · ⊗ ξik ) = 0 if there are repetitions among the i1 , . . . , ik . Moreover,
we can rearrange these numbers in increasing order at the cost only of a possible
change of the sign. Therefore A^k(V) is spanned by the elements ξI in (4).
Consider a linear combination T = Σ_I aI ξI with some coefficients aI. Applying
the k-tensor ξI to an element (ej1, . . . , ejk) ∈ V^k, where 1 ≤ j1 < · · · < jk ≤ n, we obtain 0 except when (j1, . . . , jk) = I, in which case we obtain 1/k! (the determinant above is then that of the identity matrix). It follows that T(ej1, . . . , ejk) = aJ/k! for J = (j1, . . . , jk). Therefore, if T = 0 we conclude aJ = 0 for all the coefficients. Thus the elements ξI are independent.
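Both conclusions of Theorem 1 can be spot-checked numerically for small n and k. In the sketch below (all names are ours; V = R^4 with the standard basis playing the role of e1, . . . , en, and ξi(v) = v[i] with 0-based indices), the number of increasing index sets equals n!/(k!(n − k)!), and ξI evaluated on a basis tuple (ej1, . . . , ejk) gives 1/k! when I = J and 0 otherwise:

```python
from itertools import combinations, permutations
from math import comb, factorial

n, k = 4, 2
e = [tuple(1.0 if i == j else 0.0 for j in range(n)) for i in range(n)]  # standard basis of R^n

def sign(p):
    """Sign of a permutation of 0-based indices, by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def xi(I):
    """xi_I = Alt(xi_{i1} tensor ... tensor xi_{ik}) for a 0-based increasing index tuple I."""
    def form(*vs):
        total = 0.0
        for p in permutations(range(k)):
            term = sign(p)
            for a in range(k):
                term *= vs[p[a]][I[a]]
            total += term
        return total / factorial(k)
    return form

subsets = list(combinations(range(n), k))
# dim A^k(V) = n! / (k!(n-k)!)
assert len(subsets) == comb(n, k) == factorial(n) // (factorial(k) * factorial(n - k))
for I in subsets:
    for J in subsets:
        # xi_I on the basis tuple (e_{j1}, ..., e_{jk}): 1/k! when I = J, else 0.
        assert xi(I)(*(e[j] for j in J)) == (1.0 / factorial(k) if I == J else 0.0)
```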
In analogy with the tensor product (S, T) ↦ S ⊗ T, from T^k(V) × T^l(V) to T^{k+l}(V), there is a construction of a product A^k(V) × A^l(V) → A^{k+l}(V). Since tensor products of alternating tensors are in general not alternating, the construction is more delicate.
Definition 2. Let S ∈ Ak (V ) and T ∈ Al (V ). The wedge product S∧T ∈ Ak+l (V )
is defined by
S ∧ T = Alt(S ⊗ T ).
Example. Let η1, η2 ∈ A^1(V) = V^*. Then by definition η1 ∧ η2 = (1/2)(η1 ⊗ η2 − η2 ⊗ η1).
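Definition 2 translates directly into code on top of the antisymmetrization (3). A sketch (all names ours) that also verifies the formula for the wedge of two 1-forms in the example above:

```python
from itertools import permutations
from math import factorial

def sign(p):
    """Sign of a permutation of 0-based indices, by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def alt(T, k):
    """The antisymmetrization operator (3)."""
    def alt_T(*vs):
        return sum(sign(p) * T(*(vs[i] for i in p))
                   for p in permutations(range(k))) / factorial(k)
    return alt_T

def wedge(S, k, T, l):
    """S wedge T = Alt(S tensor T) for a k-tensor S and an l-tensor T."""
    def tensor(*vs):
        return S(*vs[:k]) * T(*vs[k:])
    return alt(tensor, k + l)

# Two 1-forms on R^2: eta1(v) = v[0], eta2(v) = v[1].
eta1 = lambda v: v[0]
eta2 = lambda v: v[1]
w = wedge(eta1, 1, eta2, 1)
v1, v2 = (1.0, 2.0), (3.0, 4.0)
# eta1 ^ eta2 = (1/2)(eta1 tensor eta2 - eta2 tensor eta1)
assert w(v1, v2) == 0.5 * (eta1(v1)*eta2(v2) - eta2(v1)*eta1(v2))
```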
Since the operator Alt is linear, the wedge product depends linearly on the factors
S and T . It is more cumbersome to verify the associative rule for ∧. In order to do
this we need the following lemma.
Lemma 3. Let S ∈ T k (V ) and T ∈ T l (V ). Then
Alt(Alt(S) ⊗ T ) = Alt(S ⊗ Alt(T )) = Alt(S ⊗ T ).
Proof. We will only verify
Alt(Alt(S) ⊗ T ) = Alt(S ⊗ T ).
The proof for the other expression is similar.
Let G = Sk+l and let H ⊂ G denote the subgroup of permutations leaving each of the last l elements k + 1, . . . , k + l fixed. Then H is naturally isomorphic to Sk.
Now
Alt(Alt(S) ⊗ T)(v1, . . . , vk+l)
= (1/(k+l)!) Σ_{σ∈Sk+l} sgn σ Alt(S)(vσ(1), . . . , vσ(k)) T(vσ(k+1), . . . , vσ(k+l))
= (1/((k+l)! k!)) Σ_{σ∈Sk+l} Σ_{τ∈Sk} sgn σ sgn τ S(vσ(τ(1)), . . . , vσ(τ(k))) T(vσ(k+1), . . . , vσ(k+l))
= (1/((k+l)! k!)) Σ_{τ∈H} Σ_{σ∈G} sgn(σ ◦ τ) S(vσ(τ(1)), . . . , vσ(τ(k))) T(vσ(τ(k+1)), . . . , vσ(τ(k+l))),
where in the last step we used that each τ ∈ H fixes k + 1, . . . , k + l, so that vσ(k+j) = vσ(τ(k+j)).
Since σ ↦ σ ◦ τ is a bijection of G we can substitute σ for σ ◦ τ, and we obtain the desired expression, since there are k! elements in H.
Lemma 4. Let R ∈ A^k(V), S ∈ A^l(V) and T ∈ A^m(V). Then
(R ∧ S) ∧ T = R ∧ (S ∧ T) = Alt(R ⊗ S ⊗ T).
Proof. It follows from the preceding lemma that
(R ∧ S) ∧ T = Alt(Alt(R ⊗ S) ⊗ T ) = Alt(R ⊗ S ⊗ T )
and
R ∧ (S ∧ T ) = Alt(R ⊗ Alt(S ⊗ T )) = Alt(R ⊗ S ⊗ T ).
Since the wedge product is associative, we can write any product T1 ∧ · · · ∧ Tr
of tensors Ti ∈ Aki (V ) without specifying brackets. In fact, it follows by induction
from Lemma 4 that
T1 ∧ · · · ∧ Tr = Alt(T1 ⊗ · · · ⊗ Tr )
regardless of how brackets are inserted in the wedge product.
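The associativity statement and the identity T1 ∧ · · · ∧ Tr = Alt(T1 ⊗ · · · ⊗ Tr) can be spot-checked with exact rational arithmetic, so that the 1/k! factors introduce no rounding error. A sketch with three arbitrarily chosen 1-forms on R^3 (all names and test data are ours):

```python
from fractions import Fraction as F
from itertools import permutations
from math import factorial

def sign(p):
    """Sign of a permutation of 0-based indices, by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def alt(T, k):
    """The antisymmetrization operator (3); exact when T returns Fractions."""
    def alt_T(*vs):
        return sum(sign(p) * T(*(vs[i] for i in p))
                   for p in permutations(range(k))) / factorial(k)
    return alt_T

def wedge(S, k, T, l):
    """S wedge T = Alt(S tensor T)."""
    def st(*vs):
        return S(*vs[:k]) * T(*vs[k:])
    return alt(st, k + l)

# Three 1-forms on R^3, chosen arbitrarily for the check.
r = lambda v: v[0]
s = lambda v: v[1] + v[2]
t = lambda v: v[0] - v[2]

vs = [(F(1), F(2), F(3)), (F(0), F(1), F(1)), (F(2), F(0), F(1))]

left   = wedge(wedge(r, 1, s, 1), 2, t, 1)                  # (r ^ s) ^ t
right  = wedge(r, 1, wedge(s, 1, t, 1), 2)                  # r ^ (s ^ t)
triple = alt(lambda v1, v2, v3: r(v1) * s(v2) * t(v3), 3)   # Alt(r tensor s tensor t)
assert left(*vs) == right(*vs) == triple(*vs)
```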
In particular we see that the basis elements ξI in Theorem 1 are given by
ξI = ξi1 ∧ · · · ∧ ξik
where I = (i1 , . . . , ik ) is an increasing sequence from 1, . . . , n, and the basis elements
ξi from V ∗ are viewed as 1-forms. This will be our notation for ξI from now on.
Lemma 5. Let η, ζ ∈ V^*. Then
ζ ∧ η = −η ∧ ζ. (5)
More generally, if S ∈ A^k(V) and T ∈ A^l(V) then
T ∧ S = (−1)^{kl} S ∧ T. (6)
Proof. The identity (5) follows immediately from the fact that η ∧ ζ = (1/2)(η ⊗ ζ − ζ ⊗ η).
Since Ak (V ) is spanned by elements of the type S = η1 ∧ · · · ∧ ηk , and Al (V ) by
elements of the type T = ζ1 ∧ · · · ∧ ζl , where ηi , ζj ∈ V ∗ , it suffices to prove (6) for
these forms. In order to rewrite T ∧ S as S ∧ T we must move each of the k factors ηi past each of the l factors ζj, and by (5) each such interchange changes the sign. The total number of sign changes is therefore kl.
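The sign rule (6) can likewise be spot-checked with exact arithmetic. The sketch below (all names ours) verifies (5) for two 1-forms, and the case k = 2, l = 1 of (6), where (−1)^{kl} = +1, so a 1-form and a 2-form commute:

```python
from fractions import Fraction as F
from itertools import permutations
from math import factorial

def sign(p):
    """Sign of a permutation of 0-based indices, by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def alt(T, k):
    """The antisymmetrization operator (3); exact when T returns Fractions."""
    def alt_T(*vs):
        return sum(sign(p) * T(*(vs[i] for i in p))
                   for p in permutations(range(k))) / factorial(k)
    return alt_T

def wedge(S, k, T, l):
    """S wedge T = Alt(S tensor T)."""
    def st(*vs):
        return S(*vs[:k]) * T(*vs[k:])
    return alt(st, k + l)

eta  = lambda v: v[0]            # a 1-form on R^3
zeta = lambda v: v[1]            # another 1-form
omega = wedge(eta, 1, zeta, 1)   # a 2-form

vs2 = [(F(1), F(2), F(0)), (F(3), F(5), F(1))]
vs3 = vs2 + [(F(2), F(1), F(4))]

# (5): 1-forms anticommute.
assert wedge(zeta, 1, eta, 1)(*vs2) == -omega(*vs2)
# (6) with k = 2, l = 1: the sign is (-1)^2 = +1.
xi3 = lambda v: v[2]
assert wedge(omega, 2, xi3, 1)(*vs3) == wedge(xi3, 1, omega, 2)(*vs3)
```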