
1 Vectors and tensors: Algebra

The theories of Solid Mechanics are mathematical in nature. Thus in this chapter and the next,
we begin with a discussion of the essential mathematical definitions and operations regarding
vectors and tensors needed to discuss the major topics covered in this book.¹
The approach taken is a pragmatic one, without an overly formal emphasis upon mathemati-
cal proofs. The end-goal is to develop, in a reasonably systematic way, the essential mathematics
of vectors and tensors needed to discuss the major aspects of mechanics of solids. The physical
quantities that we will encounter in this text will be of three basic types: scalars, vectors, and
tensors. Scalars are quantities that can be described simply by numbers, for example temperature and density. Vectors are quantities like velocity and displacement—quantities that
have both magnitude and direction. Tensors will be used to describe more complex quantities
like stress and strain, as well as material properties. In this chapter we deal with the algebra of
vectors and tensors. In the next chapter we deal with their calculus.

1.1 Cartesian coordinate frames. Kronecker delta. Alternating symbol
Throughout this text lower case Latin subscripts range over the integers
{1, 2, 3}.
A Cartesian coordinate frame for the Euclidean point space E consists of a reference point
o called the origin together with a positively oriented orthonormal basis {e1 , e2 , e3 } for the
associated vector space V . Being positively oriented and orthonormal, the basis vectors obey
ei · ej = δij , ei · (ej × ek ) = eijk . (1.1.1)
Here δij, the Kronecker delta, is defined by

\[
\delta_{ij} = \begin{cases} 1, & \text{if } i = j,\\ 0, & \text{if } i \neq j, \end{cases}
\tag{1.1.2}
\]

¹ The mathematical preliminaries in this chapter and the next are largely based on the clear expositions by Gurtin in Gurtin (1981) and Gurtin et al. (2010).


while eijk, the alternating symbol,² is defined by

\[
e_{ijk} = \begin{cases} \;\;\,1, & \text{if } \{i,j,k\} = \{1,2,3\},\ \{2,3,1\},\ \text{or } \{3,1,2\},\\ -1, & \text{if } \{i,j,k\} = \{2,1,3\},\ \{1,3,2\},\ \text{or } \{3,2,1\},\\ \;\;\,0, & \text{if an index is repeated,} \end{cases}
\tag{1.1.3}
\]

and hence has the value +1, −1, or 0 when {i, j, k} is an even permutation, an odd permuta-
tion, or not a permutation of {1, 2, 3}, respectively. Where convenient, we will use the compact
notation
{ei} ≝ {e1, e2, e3},
to denote our positively oriented orthonormal basis.

1.1.1 Summation convention


For convenience, we employ the Einstein summation convention according to which summa-
tion over the range 1, 2, 3 is implied for any index that is repeated twice in any term, so that,
for instance,
ui vi = u1 v1 + u2 v2 + u3 v3,

Sij uj = Si1 u1 + Si2 u2 + Si3 u3,

Sik Tkj = Si1 T1j + Si2 T2j + Si3 T3j.


In the expression Sij uj the subscript i is free, because it is not summed over, while j is a dummy
subscript, since
Sij uj = Sik uk = Sim um .
As a rule, when using the Einstein summation convention one cannot repeat an index more
than twice; expressions with such indices indicate that a computational error has been made.
In the rare case where three (or more) repeated indices must appear in a single term, an explicit
summation symbol, for example \( \sum_{i=1}^{3} \), must be used.
Note that the Kronecker delta δij modifies (or contracts) the subscripts in the coefficients of
an expression in which it appears:
ai δij = aj , ai bj δij = ai bi = aj bj , δij δik = δjk , δik Skj = Sij .
This property of the Kronecker delta is sometimes termed the substitution property.
Further, when an index is repeated twice in an expression but summation is not to be
performed, we state so explicitly. For example,
ui v i (no sum)
signifies that the subscript i is not to be summed over.
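As a quick illustration (a minimal sketch, assuming Python with NumPy; the arrays u, v, S, and T are arbitrary sample data, not taken from the text), the summation convention corresponds directly to numpy.einsum:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
S = np.arange(9.0).reshape(3, 3)
T = np.eye(3) + 1.0

ui_vi = np.einsum('i,i->', u, v)        # u_i v_i   (repeated i summed, scalar result)
Sij_uj = np.einsum('ij,j->i', S, u)     # S_ij u_j  (free index i)
Sik_Tkj = np.einsum('ik,kj->ij', S, T)  # S_ik T_kj (free indices i and j)

# Substitution property of the Kronecker delta: delta_ik S_kj = S_ij
delta = np.eye(3)
assert np.allclose(np.einsum('ik,kj->ij', delta, S), S)
```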

² The alternating symbol is sometimes called the permutation symbol.

Because {ei } is a basis, every vector u admits the unique expansion


u = uj ej;     (1.1.4)
the scalars ui are called the (Cartesian) components of u (relative to this basis). If we take the
inner product of (1.1.4) with ei we find that, since ei · ej = δij ,
ui = u · ei.     (1.1.5)

Guided by this relation, we define the coordinates of a point x with respect to the origin o by
xi = (x − o) · ei . (1.1.6)
In view of (1.1.4), the inner product of vectors u and v may be expressed as
u · v = (ui ei) · (vj ej)
      = ui vj δij
      = ui vi = u1 v1 + u2 v2 + u3 v3.     (1.1.7)
It should also be recalled that this relation is equivalent to u · v = |u| |v| cos θuv , where | · |
denotes the length of a vector (its magnitude or norm) and θuv is the angle between vectors
u and v. Equation (1.1.4) further implies that the cross product of two vectors may be expressed as
u × v = (uj ej) × (vk ek)
      = uj vk (ej × ek)
      = eijk uj vk ei.     (1.1.8)
In particular, (1.1.8) implies that the vector u × v has the component form
(u × v)i = eijk uj vk . (1.1.9)
This definition of the cross product is fully compatible with the relation |u × v| = |u| |v| sin θuv ,
which indicates that the magnitude of the cross product of two vectors is equal to the area of
the parallelogram whose edges are defined by u and v.
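The component formula (1.1.9) is easy to check numerically. Below is a sketch (assuming NumPy; eps is an assumed helper array holding eijk, and u, v are arbitrary data) comparing it against the built-in cross product:

```python
import numpy as np

# Levi-Civita symbol e_ijk stored as a 3x3x3 array (assumed helper)
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

u = np.array([1.0, 0.5, -2.0])
v = np.array([0.0, 3.0, 1.0])

uxv = np.einsum('ijk,j,k->i', eps, u, v)   # (u x v)_i = e_ijk u_j v_k, eq. (1.1.9)
assert np.allclose(uxv, np.cross(u, v))

# |u x v| equals the area of the parallelogram spanned by u and v
area = np.linalg.norm(uxv)
```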
It can be shown that the permutation symbol is related to the Kronecker delta by
\[
e_{ijk} = \det\!\begin{bmatrix} \delta_{i1} & \delta_{i2} & \delta_{i3}\\ \delta_{j1} & \delta_{j2} & \delta_{j3}\\ \delta_{k1} & \delta_{k2} & \delta_{k3} \end{bmatrix} = \det\!\begin{bmatrix} \delta_{i1} & \delta_{j1} & \delta_{k1}\\ \delta_{i2} & \delta_{j2} & \delta_{k2}\\ \delta_{i3} & \delta_{j3} & \delta_{k3} \end{bmatrix},
\tag{1.1.10}
\]
where det[·] represents the determinant of a 3 × 3 matrix. A consequence of (1.1.10) is that
\[
e_{ijk}\,e_{pqr} = \det\!\begin{bmatrix} \delta_{ip} & \delta_{iq} & \delta_{ir}\\ \delta_{jp} & \delta_{jq} & \delta_{jr}\\ \delta_{kp} & \delta_{kq} & \delta_{kr} \end{bmatrix} = \delta_{ip}(\delta_{jq}\delta_{kr} - \delta_{jr}\delta_{kq}) - \delta_{iq}(\delta_{jp}\delta_{kr} - \delta_{jr}\delta_{kp}) + \delta_{ir}(\delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp}).
\tag{1.1.11}
\]

Some useful identities which follow from successive contractions of (1.1.11), the substitution
property of the Kronecker delta and the identity δii = 3 are

eijk eipq = δjp δkq − δjq δkp , (1.1.12)

and hence also that


\[
e_{ijk}e_{ijl} = 2\delta_{kl}, \qquad e_{ijk}e_{ijk} = 6.
\tag{1.1.13}
\]
Also useful is the identity
ei = ½ eijk ej × ek.     (1.1.14)
Let u, v, and w be vectors. Useful relations involving the inner and cross products then
include,
u · (v × w) = v · (w × u) = w · (u × v),
u · (u × v) = 0,
u × v = −v × u,     (1.1.15)
u × (v × w) = (u · w)v − (u · v)w.
The operation
[u, v, w] ≝ u · (v × w),     (1.1.16)
is known as the scalar triple product and is physically equivalent to the (signed) volume of the
parallelepiped whose edges are defined by any three linearly independent vectors u, v, and w.
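Both the epsilon-delta identity (1.1.12) and the triple-product interpretation can be verified numerically. A sketch, again using the assumed eps helper and arbitrary vectors:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
delta = np.eye(3)

# e_ijk e_ipq = delta_jp delta_kq - delta_jq delta_kp, eq. (1.1.12)
lhs = np.einsum('ijk,ipq->jkpq', eps, eps)
rhs = (np.einsum('jp,kq->jkpq', delta, delta)
       - np.einsum('jq,kp->jkpq', delta, delta))
assert np.allclose(lhs, rhs)

# [u, v, w] = u . (v x w) is the signed parallelepiped volume (here 2)
u, v, w = np.array([1.0, 0, 0]), np.array([1.0, 1, 0]), np.array([0, 0, 2.0])
volume = u @ np.cross(v, w)
assert np.isclose(volume, 2.0)
```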

1.2 Tensors
1.2.1 What is a tensor?
In mathematics the term tensor is a synonym for the phrase “a linear transformation which
maps a vector into a vector.” A tensor S is therefore a linear mapping that assigns to each vector
u a vector
v = Su. (1.2.1)
One might think of a tensor S as a machine with an input and an output: if a vector u is the
input, then the vector v = Su is the output. Linearity of a tensor S is the requirement that
S(u + v) = Su + Sv for all vectors u and v,
(1.2.2)
S(αu) = αSu for all vectors u and scalars α.

1.2.2 Zero and identity tensors


Two basic tensors are the zero tensor 0 and the identity tensor 1:
0v = 0, 1v = v
for all vectors v.

1.2.3 Tensor product
Another example of a tensor is the tensor product³ u ⊗ v of two vectors u and v, defined by
(u ⊗ v)w = (v · w)u (1.2.3)
for all w; the tensor u ⊗ v maps any vector w onto a scalar multiple of u.

EXAMPLE 1.1 Projection tensor

As a concrete example of the tensor product consider the tensor


P = 1 − n ⊗ n,
where n is a vector of unit length. The action of P on an arbitrary vector u is then given by
Pu = (1 − n ⊗ n)u,
   = 1u − (n ⊗ n)u,
   = u − (u · n)n.     (1.2.4)
Equation (1.2.4) can be interpreted to be the vector u minus the projection of u in the
direction of n. Hence Pu is equal to the projection of the vector u into the plane orthogonal
to n. Note that in this example we have exploited the fact that the dot product of a vector
with a unit vector results in its projection onto the direction defined by the unit vector.
Further, we have exploited the additive algebraic structure of tensors, whereby for tensors
S and T, (S + T)u = Su + Tu for all vectors u.
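A short numerical sketch of Example 1.1 (assuming NumPy; the unit vector n and the vector u are arbitrary illustrative choices):

```python
import numpy as np

n = np.array([0.0, 0.0, 1.0])            # a unit vector
P = np.eye(3) - np.outer(n, n)           # P = 1 - n (x) n

u = np.array([3.0, -1.0, 5.0])
Pu = P @ u                               # action of P on u
assert np.allclose(Pu, u - (u @ n) * n)  # matches (1.2.4)
assert np.isclose(Pu @ n, 0.0)           # Pu lies in the plane orthogonal to n
```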

1.2.4 Components of a tensor


Given a tensor S, the quantity Sej is a vector. The components of a tensor S with respect to the
basis {e1 , e2 , e3 } are defined by
Sij ≝ ei · Sej.     (1.2.5)

³ Other common terminologies for a tensor product are tensor outer product or simply outer product, as well as dyadic product.

With this definition of the components of a tensor, the Cartesian component representation of
the relation v = Su is
vi = ei · v = ei · Su
   = ei · S(uj ej)
   = (ei · Sej)uj
   = Sij uj.     (1.2.6)
It should be observed that (1.2.6) indicates that each component of v is a linear combination
of the components of u, where the weights are given by the components of S.
Further, a tensor S has the representation
S = Sij ei ⊗ ej , (1.2.7)
in terms of its components Sij and the basis tensors ei ⊗ ej .
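Definition (1.2.5) and representation (1.2.7) can be checked with a few lines of NumPy; in this sketch the matrix of S is arbitrary sample data and e[i] are the standard basis vectors:

```python
import numpy as np

e = np.eye(3)                 # rows e[0], e[1], e[2] play the role of e_1, e_2, e_3
S = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 4.0],
              [5.0, 0.0, 6.0]])

# Components S_ij = e_i . S e_j, eq. (1.2.5)
Sij = np.array([[e[i] @ S @ e[j] for j in range(3)] for i in range(3)])
assert np.allclose(Sij, S)

# Representation S = S_ij e_i (x) e_j, eq. (1.2.7)
S_rebuilt = sum(Sij[i, j] * np.outer(e[i], e[j])
                for i in range(3) for j in range(3))
assert np.allclose(S_rebuilt, S)
```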

EXAMPLE 1.2 Identity tensor components

Using (1.2.5), the components of the identity tensor are given by


(1)ij = ei · 1ej = ei · ej = δij .
Thus the identity tensor can be expressed as 1 = δij ei ⊗ ej , equivalently as 1 = ei ⊗ ei .

EXAMPLE 1.3 Zero tensor components

Using (1.2.5), the components of the zero tensor are given by


(0)ij = ei · 0ej = ei · 0 = 0.
Thus the zero tensor can be expressed as \( \mathbf{0} = \sum_{i=1}^{3}\sum_{j=1}^{3} 0\, e_i \otimes e_j \).

1.2.5 Transpose of a tensor


The transpose Sᵀ of a tensor S is the unique tensor with the property that
u · Sv = v · Sᵀu     (1.2.8)
for all vectors u and v. The components of the transpose are determined via (1.2.5) as
ei · Sᵀej = ej · Sei = Sji;
thus the components of Sᵀ are
(Sᵀ)ij = Sji.     (1.2.9)

1.2.6 Symmetric and skew tensors


A tensor S is symmetric if
S = Sᵀ,     Sij = Sji,     (1.2.10)
and skew if
S = −Sᵀ,     Sij = −Sji.     (1.2.11)

Clearly,
Sij = ½(Sij + Sji) + ½(Sij − Sji).
Thus, every tensor S admits the decomposition
S = sym S + skw S (1.2.12)
into a symmetric part and a skew part, where
sym S = ½(S + Sᵀ),
skw S = ½(S − Sᵀ),     (1.2.13)
with components
(sym S)ij = ½(Sij + Sji),
(skw S)ij = ½(Sij − Sji).     (1.2.14)
Note that a symmetric tensor has at most six independent components, and a skew tensor has
at most three independent components; the latter point follows since (skw S)ii = 0 (no sum).

1.2.7 Axial vector of a skew tensor


Since a skew tensor has only three independent components, it is possible to define its action
on vectors by another vector. Given any skew tensor Ω, there is a unique vector ω —called the
axial vector of Ω—such that
Ωu = ω × u (1.2.15)
for all vectors u. Since
(Ωu)i = Ωij uj and (ω × u)i = eijk ωj uk = eikj ωk uj ,
we have that
Ωij = eikj ωk ≡ −eijl ωl . (1.2.16)
Further, operating on both sides of (1.2.16) with eijk and using the epsilon-delta identity
(1.1.13)1 we obtain
eijk Ωij = −eijk eijl ωl = −2δkl ωl = −2ωk ,
and hence in terms of the three independent components of the skew tensor Ω, the three
components of its axial vector ω are given by
ωi = −½ eijk Ωjk,     (1.2.17)

which in expanded form is


ω1 = Ω32 = −Ω23 , ω2 = Ω13 = −Ω31 , ω3 = Ω21 = −Ω12 . (1.2.18)
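A numerical sketch of (1.2.15) through (1.2.18), with eps the assumed Levi-Civita helper and W an arbitrary skew tensor:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

W = np.array([[ 0.0, -3.0,  2.0],
              [ 3.0,  0.0, -1.0],
              [-2.0,  1.0,  0.0]])          # a skew tensor, W = -W^T

w = -0.5 * np.einsum('ijk,jk->i', eps, W)   # (1.2.17): w_i = -1/2 e_ijk W_jk
assert np.allclose(w, [1.0, 2.0, 3.0])      # (1.2.18): w1 = W32, w2 = W13, w3 = W21

u = np.array([0.5, -1.0, 2.0])
assert np.allclose(W @ u, np.cross(w, u))   # (1.2.15): W u = w x u
```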

1.2.8 Product of tensors


Given tensors S and T, the product ST is defined by composition; that is, ST is defined by

(ST)v = S(Tv) (1.2.19)

for all vectors v. By (1.2.5), Tej = Tlj el and, by (1.2.9), Sᵀei = Sik ek; thus,
ei · STej = Sᵀei · Tej
          = (Sik ek) · (Tlj el)
          = Sik Tlj (ek · el)
          = Sik Tlj δkl
          = Sik Tkj,
and, hence,
(ST)ij = Sik Tkj . (1.2.20)
Generally, ST ≠ TS. When ST = TS, the tensors S and T are said to commute.

EXAMPLE 1.4 Product with the identity tensor

The product of the identity tensor 1 with any arbitrary tensor T is T itself:
(1T)v = 1(Tv) = Tv,
and since this holds for all vectors v we have
1T = T.
Alternatively, in components
(1T)ij = ei · 1Tej
       = 1ᵀei · Tej
       = ei · Tej
       = Tij.
The components of 1T are equal to the components of T; hence the two tensors are equal.

1.2.9 Trace of a tensor. Deviatoric tensors


The trace is the linear operation that assigns to each tensor S a scalar tr S and satisfies
tr(u ⊗ v) = u · v (1.2.21)
for any vectors u and v. Linearity is the requirement that (cf. (1.2.2))

tr(αS + β T) = α tr(S) + β tr(T)
for all tensors S and T and all scalars α and β . Thus, by (1.2.7),
tr S = tr(Sij ei ⊗ ej )

= Sij tr(ei ⊗ ej )

= Sij (ei · ej )

= Sii , (1.2.22)
and the trace is well-defined for all tensors. Some useful properties of the trace are
tr(Su ⊗ v) = v · Su,
tr(Sᵀ) = tr S,
tr(ST) = tr(TS),     (1.2.23)
tr 1 = 3.
As a consequence of (1.2.23)2 ,
tr S = 0 whenever S is skew. (1.2.24)
A tensor S is deviatoric (or traceless) if
tr S = 0, (1.2.25)
and we refer to
S′ ≡ dev S ≝ S − ⅓(tr S)1     (1.2.26)
as the deviatoric part of S,⁴ and to
⅓(tr S)1     (1.2.27)
as the spherical part of S. Trivially,

\[
S = \underbrace{S - \tfrac{1}{3}(\operatorname{tr} S)\mathbf{1}}_{S'} + \underbrace{\tfrac{1}{3}(\operatorname{tr} S)\mathbf{1}}_{s\mathbf{1}},
\]

⁴ The notation “dev” is useful for denoting the deviatoric part of the product of many tensors; e.g., dev(ST · · · M).

where s = ⅓ tr S, and it is noted that tr dev S = 0. Thus every tensor S admits the decomposition
S = S′ + s1     (1.2.28)
into a deviatoric tensor and a spherical tensor.

EXAMPLE 1.5 Deviatoric and spherical parts of a tensor

The deviatoric part of a tensor
T = 3e1 ⊗ e1 + 2e1 ⊗ e3
is determined as
T′ = T − ⅓(tr T)1
   = (3e1 ⊗ e1 + 2e1 ⊗ e3) − ⅓ · 3 · 1
   = 2e1 ⊗ e1 + 2e1 ⊗ e3 − e2 ⊗ e2 − e3 ⊗ e3.
The spherical part of T is given by ⅓(tr T)1 = 1.
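Example 1.5 can be reproduced numerically; a minimal sketch assuming NumPy:

```python
import numpy as np

# T = 3 e1(x)e1 + 2 e1(x)e3, written out as a matrix
T = np.array([[3.0, 0.0, 2.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

sph = (np.trace(T) / 3.0) * np.eye(3)   # spherical part, (1/3)(tr T) 1
dev = T - sph                           # deviatoric part, eq. (1.2.26)

assert np.isclose(np.trace(dev), 0.0)   # tr dev T = 0
assert np.allclose(dev + sph, T)        # decomposition (1.2.28)
assert np.allclose(dev, [[2.0, 0.0, 2.0],
                         [0.0, -1.0, 0.0],
                         [0.0, 0.0, -1.0]])   # matches the worked result
```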

1.2.10 Positive definite tensors


A tensor C is positive definite if and only if
u · Cu > 0, ui Cij uj > 0 (1.2.29)
for all vectors u ≠ 0.

1.2.11 Inner product of tensors. Magnitude of a tensor


The inner product of two vectors u and v is defined by
u · v = v · u = ui vi.
Analogously, the inner product of two tensors S and T is defined by
S : T = T : S = Sij Tij .
The symbol : is known as the double contraction symbol.
By analogy to the notion of the magnitude (or norm) of a vector u,

\[
|u| = \sqrt{u \cdot u} = \sqrt{u_i u_i},
\]

the magnitude (or norm) |S| of a tensor S is defined by

\[
|S| = \sqrt{S : S} = \sqrt{S_{ij} S_{ij}}.
\]

EXAMPLE 1.6 Contraction of a skew tensor with a symmetric tensor

Consider a symmetric tensor S (Sij = Sji ) and a skew tensor Ω (Ωij = −Ωji ). Their
double contraction S : Ω = Sij Ωij will always be zero:
Sij Ωij = −Sij Ωji = −Sji Ωji = −Sij Ωij .

Since the only scalar with the property that it is equal to its negative is zero, we conclude
that Sij Ωij = 0 whenever S is symmetric and Ω is skew.
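A sketch of this observation in NumPy (the seed matrix A is arbitrary; its symmetric and skew parts are formed per (1.2.13)):

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(3, 3))
S = 0.5 * (A + A.T)                  # symmetric part
W = 0.5 * (A - A.T)                  # skew part

contraction = np.einsum('ij,ij->', S, W)   # S_ij W_ij
assert np.isclose(contraction, 0.0)        # sym : skew = 0

# The magnitude |A| = sqrt(A : A) coincides with the Frobenius norm
assert np.isclose(np.sqrt(np.einsum('ij,ij->', A, A)), np.linalg.norm(A))
```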

1.2.12 Matrix representation of tensors and vectors


Tensors and vectors of the type presented have related matrix representations. We write [u] and
[S] for the matrix representations of a vector u and a tensor S with respect to the basis {ei }:
\[
[u] = \begin{bmatrix} u_1\\ u_2\\ u_3 \end{bmatrix}, \qquad [S] = \begin{bmatrix} S_{11} & S_{12} & S_{13}\\ S_{21} & S_{22} & S_{23}\\ S_{31} & S_{32} & S_{33} \end{bmatrix}.
\]

• All the operations and operators we have introduced for vectors and tensors are in one-
to-one correspondence to the same operations and operators for matrices.

For example,
\[
[S][u] = \begin{bmatrix} S_{11} & S_{12} & S_{13}\\ S_{21} & S_{22} & S_{23}\\ S_{31} & S_{32} & S_{33} \end{bmatrix}\begin{bmatrix} u_1\\ u_2\\ u_3 \end{bmatrix} = \begin{bmatrix} S_{11}u_1 + S_{12}u_2 + S_{13}u_3\\ S_{21}u_1 + S_{22}u_2 + S_{23}u_3\\ S_{31}u_1 + S_{32}u_2 + S_{33}u_3 \end{bmatrix} = \begin{bmatrix} \sum_i S_{1i}u_i\\ \sum_i S_{2i}u_i\\ \sum_i S_{3i}u_i \end{bmatrix} = [Su],
\]
so that the action of a tensor on a vector is consistent with that of a 3 × 3 matrix on a 3 × 1
matrix. Further, the matrix [Sᵀ] of the transpose Sᵀ of S is identical to the transpose of the
matrix [S]:

\[
[S^{\top}] \equiv [S]^{\top} = \begin{bmatrix} S_{11} & S_{21} & S_{31}\\ S_{12} & S_{22} & S_{32}\\ S_{13} & S_{23} & S_{33} \end{bmatrix}.
\]

Similarly, the trace of a tensor S is equivalent to the conventional definition of this quantity
from matrix algebra
tr S ≡ tr [S] = S11 + S22 + S33 = Skk .

The inner product of two vectors is

\[
u \cdot v \equiv [u]^{\top}[v] = \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix}\begin{bmatrix} v_1\\ v_2\\ v_3 \end{bmatrix} = u_1 v_1 + u_2 v_2 + u_3 v_3 \equiv u_i v_i,
\]

and the tensor product is

\[
[u \otimes v] \equiv [u][v]^{\top} = \begin{bmatrix} u_1\\ u_2\\ u_3 \end{bmatrix}\begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} = \begin{bmatrix} u_1v_1 & u_1v_2 & u_1v_3\\ u_2v_1 & u_2v_2 & u_2v_3\\ u_3v_1 & u_3v_2 & u_3v_3 \end{bmatrix}.
\]
If one chooses to perform computations using matrix representations of vectors and tensors,
care must be taken to use a single basis in all calculations, as the matrix representations are
basis dependent.

1.2.13 Determinant of a tensor


Tensors, like square matrices, have determinants. The general definition of the determinant is
an operation that assigns to each tensor S a scalar det S defined by
\[
\det S = \frac{Su \cdot (Sv \times Sw)}{u \cdot (v \times w)}
\tag{1.2.30}
\]
for any three non-coplanar vectors {u, v, w}. Thus, det S is the ratio of the volume of the
parallelepiped defined by the vectors Su, Sv, and Sw to the volume of the parallelepiped defined
by the vectors u, v, and w.
Definition (1.2.30) is fully equivalent to the conventional one given for square matrices; in
particular
 
\[
\det S \equiv \det[S] = \begin{vmatrix} S_{11} & S_{12} & S_{13}\\ S_{21} & S_{22} & S_{23}\\ S_{31} & S_{32} & S_{33} \end{vmatrix} = S_{11}(S_{22}S_{33} - S_{23}S_{32}) - S_{12}(S_{21}S_{33} - S_{23}S_{31}) + S_{13}(S_{21}S_{32} - S_{22}S_{31}) = e_{ijk} S_{i1} S_{j2} S_{k3} = \tfrac{1}{6} e_{ijk} e_{pqr} S_{ip} S_{jq} S_{kr}.
\tag{1.2.31}
\]
It is useful also to observe that
det(ST) = det S det T, (1.2.32)
and that
det Sᵀ = det S.     (1.2.33)
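A sketch checking the component formula in (1.2.31) and the product rule (1.2.32), with eps the assumed Levi-Civita helper and S, T arbitrary sample tensors:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

S = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
T = np.diag([1.0, 2.0, 3.0])

# det S = e_ijk S_i1 S_j2 S_k3, eq. (1.2.31)
detS = np.einsum('ijk,i,j,k->', eps, S[:, 0], S[:, 1], S[:, 2])
assert np.isclose(detS, np.linalg.det(S))

# det(ST) = det S det T, eq. (1.2.32)
assert np.isclose(np.linalg.det(S @ T), np.linalg.det(S) * np.linalg.det(T))
```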

1.2.14 Invertible tensors


A tensor S is invertible if there is a tensor S⁻¹, called the inverse of S, such that
SS⁻¹ = S⁻¹S = 1.     (1.2.34)
Further,
S is invertible if and only if det S ≠ 0.     (1.2.35)
Note also that det(S⁻¹) = 1/det S.

1.2.15 Cofactor of a tensor


Let S be an invertible tensor; then the tensor
cof S ≝ (det S)S⁻ᵀ     (1.2.36)
is called the cofactor of S. A straightforward but slightly involved calculation shows that for
all linearly independent vectors u and v,
cof S(u × v) = Su × Sv. (1.2.37)
That is, cof S transforms the area vector u × v of the parallelogram defined by u and v into the
area vector Su × Sv of the parallelogram defined by Su and Sv.
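Property (1.2.37) is easy to verify numerically; in the sketch below S, u, and v are arbitrary illustrative data:

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
cofS = np.linalg.det(S) * np.linalg.inv(S).T   # cof S = (det S) S^{-T}, eq. (1.2.36)

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 1.0])
# cof S maps the area vector u x v to Su x Sv, eq. (1.2.37)
assert np.allclose(cofS @ np.cross(u, v), np.cross(S @ u, S @ v))
```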

1.2.16 Orthogonal tensors


A tensor Q is orthogonal if and only if
QᵀQ = QQᵀ = 1.     (1.2.38)
If Q is orthogonal, then
det Q = ±1.
An orthogonal tensor is a rotation (or a proper rotation) if det Q = 1, and a reflection (or an
improper rotation) if det Q = −1.

1.2.17 Transformation relations for components of a vector and a tensor under a change in basis
Given a Cartesian coordinate system with a right-handed orthonormal basis {ei }, a vector v
and a tensor S may be represented by their components
vi = ei · v and Sij = ei · Sej ,
respectively. In another coordinate system, cf. Fig. 1.1, with a right-handed orthonormal basis
{e∗i}, v and S have the component representations
vi∗ = e∗i · v     and     Sij∗ = e∗i · Se∗j.

Fig. 1.1 Change in basis: two right-handed orthonormal bases {ei} and {e∗i} with common origin o.

We now consider how the two representations (vi, Sij) and (vi∗, Sij∗) in the two coordinate
systems are related to each other.
Let Q be the rotation tensor defined by
Q ≝ e1 ⊗ e∗1 + e2 ⊗ e∗2 + e3 ⊗ e∗3 ≡ ek ⊗ e∗k,     (1.2.39)
so that
e∗i = Qᵀei,     i = 1, 2, 3.     (1.2.40)
That Q is a rotation follows from QQᵀ = (ei ⊗ e∗i)(e∗j ⊗ ej) = δij ei ⊗ ej = 1 and similarly for
QᵀQ, along with the right-handedness of the bases to ensure det Q = +1. The components of
Q with respect to {ei } are given by
Qij = ei · (ek ⊗ e∗k )ej = δik e∗k · ej = e∗i · ej .
Thus, a base vector e∗i may be expressed in terms of the basis {ej }, with Qij serving as
components of the vector e∗i :
e∗i = Qij ej , (1.2.41)
and the matrix representation of the components of Q with respect to the basis {ei} is

\[
[Q] = \begin{bmatrix} Q_{11} & Q_{12} & Q_{13}\\ Q_{21} & Q_{22} & Q_{23}\\ Q_{31} & Q_{32} & Q_{33} \end{bmatrix}.
\]

Transformation relation for vectors


For a vector v the components with respect to the basis {e∗i } are
vi∗ = e∗i · v = (Qij ej ) · v = Qij (ej · v) = Qij vj .
Thus, the transformation relation for the Cartesian components of a vector v under a change
of basis is:
vi∗ = Qij vj . (1.2.42)

These relations may be expressed in matrix form as follows:

\[
\begin{bmatrix} v_1^*\\ v_2^*\\ v_3^* \end{bmatrix} = \begin{bmatrix} Q_{11} & Q_{12} & Q_{13}\\ Q_{21} & Q_{22} & Q_{23}\\ Q_{31} & Q_{32} & Q_{33} \end{bmatrix}\begin{bmatrix} v_1\\ v_2\\ v_3 \end{bmatrix},
\]

where the components of Q are given in the {ei } basis.

Transformation relation for tensors
For a tensor S the components with respect to the basis {e∗i} are

Sij∗ = e∗i · Se∗j = (Qik ek) · S(Qjl el) = Qik Qjl (ek · Sel) = Qik Qjl Skl.

Thus, the transformation relation for the Cartesian components of a tensor S under a change
of basis is:

Sij∗ = Qik Qjl Skl.     (1.2.43)

These relations may be expressed in matrix form as follows:

\[
\begin{bmatrix} S_{11}^* & S_{12}^* & S_{13}^*\\ S_{21}^* & S_{22}^* & S_{23}^*\\ S_{31}^* & S_{32}^* & S_{33}^* \end{bmatrix} = \begin{bmatrix} Q_{11} & Q_{12} & Q_{13}\\ Q_{21} & Q_{22} & Q_{23}\\ Q_{31} & Q_{32} & Q_{33} \end{bmatrix}\begin{bmatrix} S_{11} & S_{12} & S_{13}\\ S_{21} & S_{22} & S_{23}\\ S_{31} & S_{32} & S_{33} \end{bmatrix}\begin{bmatrix} Q_{11} & Q_{12} & Q_{13}\\ Q_{21} & Q_{22} & Q_{23}\\ Q_{31} & Q_{32} & Q_{33} \end{bmatrix}^{\!\top},
\]
where the components of Q are given in the {ei } basis.
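A numerical sketch of (1.2.42) and (1.2.43); for illustration the starred basis is taken to be the {ei} basis rotated by 30° about e3 (an arbitrary choice), so the rows of [Q] hold the components Qij = e∗i · ej:

```python
import numpy as np

c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
Q = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])     # Q_ij = e*_i . e_j, so e*_i = Q_ij e_j
assert np.allclose(Q @ Q.T, np.eye(3)) and np.isclose(np.linalg.det(Q), 1.0)

v = np.array([1.0, 2.0, 3.0])
S = np.diag([1.0, 2.0, 3.0])
v_star = Q @ v                      # v*_i = Q_ij v_j,        eq. (1.2.42)
S_star = Q @ S @ Q.T                # S*_ij = Q_ik Q_jl S_kl, eq. (1.2.43)

# Invariant quantities are unchanged by the change of basis
assert np.isclose(np.trace(S_star), np.trace(S))
```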

1.2.18 Eigenvalues and eigenvectors of a tensor. Spectral theorem


A scalar ω is an eigenvalue of a tensor S if there is a unit vector μ such that
Sμ = ωμ, (1.2.44)
in which case μ is an eigenvector of S corresponding to the eigenvalue ω . Physically, eigenvec-
tors of S are those vectors which remain parallel to themselves when mapped by the tensor S,
Fig. 1.2.
If ω and μ are an eigenvalue and corresponding eigenvector for a tensor S, then (1.2.44)
implies that
(S − ω1)μ = 0. (1.2.45)

Fig. 1.2 Eigenvector μ of a tensor S: Sμ remains parallel to μ.



For (1.2.45) to possess a non-trivial (μ ≠ 0) solution, the tensor (S − ω1) cannot be invertible,
so that, by (1.2.35),
det(S − ω1) = 0.
Thus, the determinant of the 3 × 3 matrix [S] − ω[1] must vanish,
det([S] − ω[1]) = 0,

which is simply the classic requirement for a system of homogeneous equations to have a non-
trivial solution. Each eigenvalue ω of a tensor S is, therefore, a solution of a polynomial equation
of the form ω³ − a1ω² + a2ω − a3 = 0, where the coefficients are functions of S. A tedious
computation shows that this characteristic equation⁵ has the explicit form
ω³ − I1(S)ω² + I2(S)ω − I3(S) = 0,     (1.2.47)
where I1(S), I2(S), and I3(S), called the principal invariants⁶ of S, are given by

\[
I_1(S) = \operatorname{tr} S, \qquad I_2(S) = \tfrac{1}{2}\left[(\operatorname{tr} S)^2 - \operatorname{tr}(S^2)\right], \qquad I_3(S) = \det S.
\tag{1.2.48}
\]

Ik (S) are called invariants because of the way they transform under the group of orthogonal
tensors:
Ik(QSQᵀ) = Ik(S)     for any orthogonal tensor Q.     (1.2.49)

The solutions of the characteristic equation (1.2.47), cubic in ω , are the eigenvalues ωi ,
i = 1, 2, 3. Since the principal invariants Ik (S) are always real, the theory of polynomials tells
us that

• The characteristic equation has (i) either three real roots (not necessarily distinct), or (ii)
one real and two complex conjugate roots.

Once the eigenvalues have been determined, the eigenvectors corresponding to each eigenvalue
can be found by substituting back into (1.2.45).

⁵ The Cayley–Hamilton theorem also tells us that a tensor satisfies its own characteristic equation; that is,
S³ − I1(S)S² + I2(S)S − I3(S)1 = 0.     (1.2.46)
⁶ The principal invariants can also be defined in terms of any collection of three vectors a, b, c, with [a, b, c] ≠ 0, via the relations

I1(S) = ([Sa, b, c] + [a, Sb, c] + [a, b, Sc]) / [a, b, c],
I2(S) = ([Sa, Sb, c] + [a, Sb, Sc] + [Sa, b, Sc]) / [a, b, c],
I3(S) = [Sa, Sb, Sc] / [a, b, c].

Eigenvalues of symmetric tensors


It turns out that for a symmetric tensor the eigenvalues are all real, and the corresponding
eigenvectors {μi} are mutually orthogonal. Suppose that S is symmetric and that ω1 ≠ ω2 are
real eigenvalues of S with corresponding eigenvectors μ1 and μ2. Then, since S is symmetric,

0 = μ2 · (Sμ1 − ω1 μ1)
  = Sμ2 · μ1 − ω1 μ2 · μ1
  = ω2 μ2 · μ1 − ω1 μ2 · μ1
  = (ω2 − ω1) μ2 · μ1.

Since ω1 ≠ ω2, we must have
μ2 · μ1 = 0.
Thus, for a symmetric tensor, eigenvectors corresponding to distinct eigenvalues are orthogo-
nal. When the eigenvalues are repeated, the situation is a bit more complex and is treated below.
Notwithstanding, if S is symmetric, with distinct eigenvalues {ω1 > ω2 > ω3 }, then they are
real, and there exists an orthonormal set of corresponding eigenvectors {μ1 , μ2 , μ3 } such
that Sμi = ωi μi (no sum).

Spectral Theorem Let S be symmetric with distinct eigenvalues ωi . Then there is a corre-
sponding orthonormal set {μi } of eigenvectors of S and, what is most important, one uniquely
has that

\[
S = \sum_{i=1}^{3} \omega_i\, \mu_i \otimes \mu_i.
\tag{1.2.50}
\]
The relation (1.2.50), which is called a spectral decomposition of S, gives S as a linear combi-
nation of projections, with each μi ⊗ μi (no sum) a projection tensor onto the eigenvector μi .
Since the eigenvectors {μi } are orthonormal, they can also function as a basis, and in this
basis the matrix representation of S is diagonal:

\[
[S] = \begin{bmatrix} \omega_1 & 0 & 0\\ 0 & \omega_2 & 0\\ 0 & 0 & \omega_3 \end{bmatrix}.
\]
When the eigenvalues of S are repeated, the spectral theorem needs to be modified. The
remaining relevant cases are when two of the eigenvalues are the same and when all three
eigenvalues are the same. Respectively, in these cases one has
(a) If ω1 ≠ ω2 = ω3, then S admits the representation
S = ω1 μ1 ⊗ μ1 + ω2 (1 − μ1 ⊗ μ1 ), (1.2.51)
which indicates that μ1 , as well as any vector orthogonal to μ1 , is an eigenvector of S.
(b) If ω ≡ ω1 = ω2 = ω3 , then S admits the representation
S = ω1, (1.2.52)
which indicates that any vector will qualify as an eigenvector of S and that S is spherical.
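The spectral decomposition can be computed with a standard symmetric eigensolver; a sketch (S is an arbitrary symmetric tensor with distinct eigenvalues):

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])
w, mu = np.linalg.eigh(S)           # eigenvalues w[i], eigenvectors mu[:, i]

# Spectral decomposition (1.2.50): S = sum_i w_i mu_i (x) mu_i
S_rebuilt = sum(w[i] * np.outer(mu[:, i], mu[:, i]) for i in range(3))
assert np.allclose(S_rebuilt, S)

# In the orthonormal eigenvector basis, the matrix of S is diagonal
assert np.allclose(mu.T @ S @ mu, np.diag(w))
```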

Other invariants and eigenvalues


The principal invariants (1.2.48) of a tensor are completely characterized by the eigenvalues {ω1, ω2, ω3}:

\[
I_1(S) = \omega_1 + \omega_2 + \omega_3, \qquad I_2(S) = \omega_1\omega_2 + \omega_2\omega_3 + \omega_3\omega_1, \qquad I_3(S) = \omega_1\omega_2\omega_3.
\tag{1.2.53}
\]
Invariants play an important role in continuum mechanics. As is clear from (1.2.53) for a tensor
S the eigenvalues {ω1 , ω2 , ω3 } are the basic invariants in the sense that any invariant of S can
be expressed in terms of them. In many applications, as an alternate to (1.2.53) it is often more
convenient to choose as invariants the following three symmetric functions of {ω1 , ω2 , ω3 }:

\[
\operatorname{tr} S = \omega_1 + \omega_2 + \omega_3, \qquad \operatorname{tr} S^2 = \omega_1^2 + \omega_2^2 + \omega_3^2, \qquad \operatorname{tr} S^3 = \omega_1^3 + \omega_2^3 + \omega_3^3.
\tag{1.2.54}
\]
These three quantities are clearly invariant and they are independent in the sense that no one
of them can be expressed in terms of the other two.
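A sketch checking (1.2.48), (1.2.53), and (1.2.54) on arbitrary sample data:

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
w = np.linalg.eigvalsh(S)                          # eigenvalues of S

I1 = np.trace(S)                                   # (1.2.48)_1
I2 = 0.5 * (np.trace(S)**2 - np.trace(S @ S))      # (1.2.48)_2
I3 = np.linalg.det(S)                              # (1.2.48)_3

assert np.isclose(I1, w.sum())                     # (1.2.53)_1
assert np.isclose(I2, w[0]*w[1] + w[1]*w[2] + w[2]*w[0])
assert np.isclose(I3, w.prod())
assert np.isclose(np.trace(S @ S @ S), (w**3).sum())   # (1.2.54)_3
```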
As a further alternative set of invariants recall that a tensor S may always be decomposed
into a deviatoric and spherical part as
\[
S = \underbrace{S'}_{\text{deviatoric part}} + \underbrace{\tfrac{1}{3}(\operatorname{tr} S)\mathbf{1}}_{\text{spherical part}}, \qquad \text{with } \operatorname{tr} S' = 0.
\tag{1.2.55}
\]

The decomposition (1.2.55) leads to the possibility of using the invariant tr S and the
invariants of S′ as an alternative set of invariants. Since S′ has only two independent non-zero
invariants,
{tr S′², tr S′³},     (1.2.56)
it is sometimes convenient to adopt the list
{tr S, tr S′², tr S′³}     (1.2.57)
as a set of alternative invariants of a tensor S.

1.2.19 Fourth-order tensors


In our discussions we will also need the concept of a fourth-order tensor. A fourth-
order tensor is defined as a linear transformation that maps a second-order tensor to a
second-order tensor. A fourth-order tensor C is therefore a linear mapping that assigns to
each second-order tensor A a second-order tensor

B = CA. (1.2.58)

Linearity of C is the requirement that


C(A + B) = CA + CB for all tensors A and B,
C(αA) = αCA for all tensors A and scalars α.
It should also be noted that common alternate notations for CS are
C[S] and C : S.

The latter form, C : S, emphasizes the action of C via its component form, which is discussed
next.
Recall that the components Tij of a second-order tensor are defined as
Tij = ei · Tej .
For discussing the component form of fourth-order tensors it is convenient to introduce the
basis tensors
Eij ≝ ei ⊗ ej,     (1.2.59)
with the orthonormality property
Eij : Ekl = δik δjl .
Using this notation, the components Cijkl of C are defined as
Cijkl = Eij : CEkl . (1.2.60)
This allows one to also express a fourth-order tensor as
C = Cijkl Eij ⊗ Ekl ≡ Cijkl ei ⊗ ej ⊗ ek ⊗ el . (1.2.61)
From these expressions, one can also infer the component-wise action of a fourth-order tensor
on a second-order tensor as
S = CT, Sij = Cijkl Tkl . (1.2.62)

EXAMPLE 1.7 Fourth-order identity tensor

The fourth-order tensor


I = δik δjl Eij ⊗ Ekl
is the identity tensor. Verify that I as defined possesses the property of an identity operator,
viz. IT = T for all tensors T.
IT ≡ I : T = (δik δjl Eij ⊗ Ekl ) : (Tpq Epq )
= δik δjl Tpq (Ekl : Epq )Eij
= δik δjl Tpq δkp δlq Eij
= Tij Eij = T.
Since this expression was derived for an arbitrary tensor T, I must be the fourth-order
identity tensor.

EXAMPLE 1.8 Symmetric fourth-order identity tensor

The fourth-order tensor
Isym = ½(δik δjl + δil δjk) Eij ⊗ Ekl
is the symmetric identity tensor. It maps any second-order tensor to its symmetric part.
Check:

Isym T ≡ Isym : T = ½(δik δjl + δil δjk) Eij ⊗ Ekl : (Tpq Epq)
                  = ½(δik δjl + δil δjk) Tpq (Ekl : Epq) Eij
                  = ½(δik δjl + δil δjk) Tpq δkp δlq Eij
                  = ½(δip δjq + δiq δjp) Tpq Eij
                  = ½(Tij + Tji) Eij
                  = sym T.
If T is already symmetric, then Isym functions as an alternate identity tensor.

Transformation rules for fourth-order tensors


The transformation rule for the components of a fourth-order tensor under a change in basis
from {ei } to {e∗i } with e∗i = Qij ej can be determined in the same way as for second-order
tensors by first noting that
E∗ij = e∗i ⊗ e∗j = Qip Qjq ep ⊗ eq = Qip Qjq Epq .
Then we obtain

C∗ijkl = E∗ij : CE∗kl = Qip Qjq Qkr Qls (Epq : CErs),

or

C∗ijkl = Qip Qjq Qkr Qls Cpqrs.     (1.2.63)
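Both the component action (1.2.62) and the transformation rule (1.2.63) reduce to einsum calls. The sketch below uses the Isym of Example 1.8 and an arbitrary 30° rotation, and confirms along the way that Isym is isotropic (its components are the same in every orthonormal basis):

```python
import numpy as np

delta = np.eye(3)
# Symmetric fourth-order identity of Example 1.8:
# Isym_ijkl = 1/2 (delta_ik delta_jl + delta_il delta_jk)
Isym = 0.5 * (np.einsum('ik,jl->ijkl', delta, delta)
              + np.einsum('il,jk->ijkl', delta, delta))

T = np.arange(9.0).reshape(3, 3)
symT = np.einsum('ijkl,kl->ij', Isym, T)       # S_ij = C_ijkl T_kl, eq. (1.2.62)
assert np.allclose(symT, 0.5 * (T + T.T))      # Isym T = sym T

c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
Q = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
# C*_ijkl = Q_ip Q_jq Q_kr Q_ls C_pqrs, eq. (1.2.63)
Isym_star = np.einsum('ip,jq,kr,ls,pqrs->ijkl', Q, Q, Q, Q, Isym)
assert np.allclose(Isym_star, Isym)
```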
