Notes Pages 4
The term $\epsilon_{ijk}$ is anti-symmetric for any fixed $i$; for example, for $i = 1$, we have
$$
\epsilon_{1jk} =
\begin{pmatrix}
\epsilon_{111} & \epsilon_{112} & \epsilon_{113} \\
\epsilon_{121} & \epsilon_{122} & \epsilon_{123} \\
\epsilon_{131} & \epsilon_{132} & \epsilon_{133}
\end{pmatrix}
=
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & -1 & 0
\end{pmatrix}. \tag{2.100}
$$
Thus, when its tensor inner product is taken with the symmetric $T_{(jk)}$, the result must be the scalar zero. Hence, we also have
$$
d_i = \frac{1}{2} \epsilon_{ijk} T_{[jk]}. \tag{2.101}
$$
Let us find the inverse relation for $d_i$. Starting with Eq. (2.99), we take the inner product of $d_i$ with $\epsilon_{ilm}$ to get
$$
\epsilon_{ilm} d_i = \frac{1}{2} \epsilon_{ilm} \epsilon_{ijk} T_{jk}. \tag{2.102}
$$
Employing Eq. (2.74) to eliminate the $\epsilon$'s in favor of $\delta$'s, we get
$$
\begin{aligned}
\epsilon_{ilm} d_i &= \frac{1}{2} \left( \delta_{lj} \delta_{mk} - \delta_{lk} \delta_{mj} \right) T_{jk}, &(2.103)\\
&= \frac{1}{2} \left( T_{lm} - T_{ml} \right), &(2.104)\\
&= T_{[lm]}. &(2.105)
\end{aligned}
$$
Hence,
$$
T_{[lm]} = \epsilon_{ilm} d_i. \tag{2.106}
$$
Note that
$$
T_{[lm]} = \epsilon_{1lm} d_1 + \epsilon_{2lm} d_2 + \epsilon_{3lm} d_3 =
\begin{pmatrix}
0 & d_3 & -d_2 \\
-d_3 & 0 & d_1 \\
d_2 & -d_1 & 0
\end{pmatrix}. \tag{2.107}
$$
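As a numerical check, the relation between a tensor's anti-symmetric part and its dual vector can be verified directly. The following sketch (using NumPy, with an arbitrarily chosen tensor, not one from the notes) builds the Levi-Civita symbol, computes $d_i = \frac{1}{2}\epsilon_{ijk} T_{jk}$, and recovers $T_{[lm]}$ via Eq. (2.106):

```python
import numpy as np

# Levi-Civita symbol eps[i,j,k]: +1 for even permutations, -1 for odd
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# An arbitrary tensor and its anti-symmetric part T_[jk]
T = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
T_anti = 0.5 * (T - T.T)

# Dual vector d_i = (1/2) eps_ijk T_jk, as in Eq. (2.101)
d = 0.5 * np.einsum('ijk,jk->i', eps, T)

# Inverse relation, Eq. (2.106): T_[lm] = eps_ilm d_i
T_anti_recovered = np.einsum('ilm,i->lm', eps, d)
print(np.allclose(T_anti, T_anti_recovered))  # True
```

The recovered matrix also exhibits the structure of Eq. (2.107): its $(1,2)$ entry is $d_3$ and its $(3,1)$ entry is $d_2$.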
And we can write the decomposition of an arbitrary tensor as the sum of its symmetric part and a factor related to the dual vector associated with its anti-symmetric part:
$$
T_{ij} = T_{(ij)} + \epsilon_{kij} d_k. \tag{2.108}
$$
In Cartesian index notation, the tensor product of two second-order tensors is written as
$$
S_{ij} T_{jk} = P_{ik}. \tag{2.109}
$$
Note that $j$ is a dummy index, $i$ and $k$ are free indices, and that the free indices in each additive term are the same. In that sense they behave somewhat as dimensional units, which must be the same for each term. In Gibbs notation, the equivalent tensor product is written as
$$
\mathbf{S} \cdot \mathbf{T} = \mathbf{P}. \tag{2.110}
$$
In contrast to the tensor inner product, which has two pairs of dummy indices and two dots, the tensor product has one pair of dummy indices and one dot. The tensor product is equivalent to matrix multiplication in matrix algebra.
An important property of tensors is that, in general, the tensor product does not commute, $\mathbf{S} \cdot \mathbf{T} \neq \mathbf{T} \cdot \mathbf{S}$. In the most formal manifestation of Cartesian index notation, one should also not commute the elements, and the dummy indices should appear next to one another in adjacent terms as shown. However, it is of no great consequence to change the order of terms, so that we can write $S_{ij} T_{jk} = T_{jk} S_{ij}$. That is, in Cartesian index notation, elements do commute. But, in Cartesian index notation, the order of the indices is extremely important, and it is this order that does not commute: $S_{ij} T_{jk} \neq S_{ji} T_{jk}$ in general. The version presented for $S_{ij} T_{jk}$ in Eq. (2.109), in which the dummy index $j$ is juxtaposed between the two terms, is slightly preferable as it maintains the order found in the Gibbs notation.
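The equivalence of the tensor product with matrix multiplication, and its failure to commute, can be illustrated numerically. Here is a minimal sketch with arbitrarily chosen $2 \times 2$ tensors (the numeric values are illustrative only):

```python
import numpy as np

# Hypothetical numeric tensors, chosen to illustrate S_ij T_jk = P_ik
S = np.array([[1.0, 2.0], [3.0, 4.0]])
T = np.array([[0.0, 1.0], [5.0, 2.0]])

# Tensor product in index notation; identical to matrix multiplication
P = np.einsum('ij,jk->ik', S, T)
print(np.allclose(P, S @ T))  # True

# Elements commute (S_ij T_jk = T_jk S_ij), but the product does not:
print(np.allclose(S @ T, T @ S))  # False
```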
Example 2.7
For two general 2 × 2 tensors, S and T, find the tensor inner product.
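The worked solution is not reproduced in this excerpt, but the computation can be sketched numerically, assuming the double-contraction convention $\mathbf{S} : \mathbf{T} = S_{ij} T_{ji}$ (some texts instead contract as $S_{ij} T_{ij}$; the two agree when either tensor is symmetric). The specific numeric entries below are illustrative only:

```python
import numpy as np

S = np.array([[1.0, 2.0], [3.0, 4.0]])
T = np.array([[5.0, 6.0], [7.0, 8.0]])

# Tensor inner product S : T = S_ij T_ji, a scalar
inner = np.einsum('ij,ji->', S, T)

# Equivalent matrix-algebra statement: S : T = tr(S.T)
print(inner, np.trace(S @ T))  # 69.0 69.0
```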
2.1.4.7.1 Pre-multiplication
$$
u_j = v_i T_{ij} = T_{ij} v_i, \tag{2.113}
$$
$$
\mathbf{u}^T = \mathbf{v}^T \cdot \mathbf{T} \neq \mathbf{T} \cdot \mathbf{v}. \tag{2.114}
$$
In the Cartesian index notation here, the first form is preferred as it has a correspondence with the Gibbs notation, but both are correct representations given our summation convention.
2.1.4.7.2 Post-multiplication
Post-multiplication of a tensor by a vector proceeds similarly. In Cartesian index and Gibbs notation,
$$
u_i = T_{ij} v_j = v_j T_{ij}, \tag{2.115}
$$
$$
\mathbf{u} = \mathbf{T} \cdot \mathbf{v} \neq \mathbf{v}^T \cdot \mathbf{T}. \tag{2.116}
$$
2.1.4.8 Dyadic product
As opposed to the inner product between two vectors, which yields a scalar, we also have the dyadic product, which yields a tensor. In Cartesian index and Gibbs notation, we have
$$
T_{ij} = u_i v_j = v_j u_i, \tag{2.117}
$$
$$
\mathbf{T} = \mathbf{u} \mathbf{v}^T \neq \mathbf{v} \mathbf{u}^T. \tag{2.118}
$$
Notice there is no dot in the dyadic product; the dot is reserved for the inner product.
Example 2.8
Find the dyadic product between two general two-dimensional vectors. Show the dyadic product
does not commute in general, and find the condition under which it does commute.
Take
$$
\mathbf{u} = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}, \qquad
\mathbf{v} = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}. \tag{2.119}
$$
Then
$$
\mathbf{u}\mathbf{v}^T = u_i v_j =
\begin{pmatrix} u_1 \\ u_2 \end{pmatrix}
\begin{pmatrix} v_1 & v_2 \end{pmatrix} =
\begin{pmatrix} u_1 v_1 & u_1 v_2 \\ u_2 v_1 & u_2 v_2 \end{pmatrix}. \tag{2.120}
$$
Similarly,
$$
\mathbf{v}\mathbf{u}^T = v_i u_j =
\begin{pmatrix} v_1 u_1 & v_1 u_2 \\ v_2 u_1 & v_2 u_2 \end{pmatrix}. \tag{2.121}
$$
By inspection, we see the operations in general do not commute. They do commute if $v_2/v_1 = u_2/u_1$. So in order for the dyadic product to commute, $\mathbf{u}$ and $\mathbf{v}$ must be parallel.
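The commutation condition can be checked numerically. In this sketch (illustrative values, not from the notes), one pair of vectors is deliberately chosen parallel and another is not:

```python
import numpy as np

u = np.array([1.0, 2.0])
v = np.array([3.0, 6.0])   # parallel to u: v = 3u

# Dyadic products T_ij = u_i v_j and v_i u_j
print(np.allclose(np.outer(u, v), np.outer(v, u)))  # True: u, v parallel

w = np.array([3.0, 5.0])   # not parallel to u
print(np.allclose(np.outer(u, w), np.outer(w, u)))  # False
```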
It is easily seen that the dyadic product $\mathbf{v}\mathbf{v}^T$ is a symmetric tensor. For the two-dimensional system, we would have
$$
\mathbf{v}\mathbf{v}^T = v_i v_j =
\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
\begin{pmatrix} v_1 & v_2 \end{pmatrix} =
\begin{pmatrix} v_1 v_1 & v_1 v_2 \\ v_2 v_1 & v_2 v_2 \end{pmatrix}. \tag{2.122}
$$
2.1.4.9 Contraction
We contract a general tensor, whose subscripts are all different, by setting one subscript equal to another. A single contraction reduces the order of a tensor by two. For example, the contraction of the second-order tensor $T_{ij}$ is $T_{ii}$, which indicates a sum is to be performed:
Tii = T11 + T22 + T33 , (2.123)
tr T = T11 + T22 + T33 . (2.124)
So, in this case the contraction yields a scalar. In matrix algebra, this particular contraction
is the trace of the matrix.
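The contraction-as-trace identity is easy to confirm numerically; the tensor below is an arbitrary illustrative choice:

```python
import numpy as np

T = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Contraction T_ii = T_11 + T_22 + T_33, Eq. (2.123); equals the trace
contraction = np.einsum('ii->', T)
print(contraction, np.trace(T))  # 15.0 15.0
```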
Example 2.9
Find the vector associated with the 1 face, $\mathbf{t}^{(1)}$, as shown in Fig. 2.2.
Figure 2.2: Sample Cartesian element that is aligned with coordinate axes, along with
tensor components and vectors associated with each face.
We first choose the unit normal associated with the $x_1$ face, that is, the vector $n_i = (1, 0, 0)^T$. The associated vector is found by doing the actual summation $t_j^{(1)} = n_i T_{ij} = T_{1j}$, so that $\mathbf{t}^{(1)} = (T_{11}, T_{12}, T_{13})^T$.
Consider now the general linear system
$$
A_{ij} x_j = b_i, \tag{2.135}
$$
$$
\mathbf{A} \cdot \mathbf{x} = \mathbf{b}. \tag{2.136}
$$
Full details can be found in any text addressing linear algebra, e.g. Powers and Sen (2015).
Let us presume that A is a known square matrix of dimension N × N, x is an unknown
column vector of length N, and b is a known column vector of length N. The following can
be proved:
• If $\det \mathbf{A} \neq 0$, a unique solution for $\mathbf{x}$ exists.
• If $\det \mathbf{A} = 0$, solutions for $\mathbf{x}$ may or may not exist; if they exist, they are not unique.
• Cramer's^8 rule, a method involving the ratio of determinants discussed in linear algebra texts, can be used to find $\mathbf{x}$; other methods exist, such as Gaussian elimination.
Let us consider a few examples for N = 2.
Example 2.10
Use Cramer’s rule to solve a general linear algebra problem with N = 2.
Consider then
$$
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} =
\begin{pmatrix} b_1 \\ b_2 \end{pmatrix}. \tag{2.137}
$$
The solution from Cramer's rule involves the ratio of determinants. We get
$$
x_1 = \frac{\begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix}}
           {\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}}
    = \frac{b_1 a_{22} - b_2 a_{12}}{a_{11} a_{22} - a_{12} a_{21}}, \qquad
x_2 = \frac{\begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix}}
           {\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}}
    = \frac{b_2 a_{11} - b_1 a_{21}}{a_{11} a_{22} - a_{12} a_{21}}. \tag{2.138}
$$
If $b_1, b_2 \neq 0$ and $\det \mathbf{A} = a_{11} a_{22} - a_{12} a_{21} \neq 0$, there is a unique nontrivial solution for $\mathbf{x}$. If $b_1 = b_2 = 0$ and $\det \mathbf{A} \neq 0$, we must have $x_1 = x_2 = 0$. Obviously, if $\det \mathbf{A} = a_{11} a_{22} - a_{12} a_{21} = 0$, we cannot use Cramer's rule to compute the solution, as it involves division by zero. But we can salvage a non-unique solution if we also have $b_1 = b_2 = 0$, as we shall see.
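Equation (2.138) translates directly into code. The following sketch (function name and test values are illustrative) implements Cramer's rule for $N = 2$ and verifies the result against the original system:

```python
import numpy as np

def cramer_2x2(A, b):
    """Solve A x = b for N = 2 via Cramer's rule, Eq. (2.138)."""
    det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
    if det == 0.0:
        raise ValueError("det A = 0: Cramer's rule does not apply")
    x1 = (b[0] * A[1, 1] - b[1] * A[0, 1]) / det
    x2 = (b[1] * A[0, 0] - b[0] * A[1, 0]) / det
    return np.array([x1, x2])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([5.0, 6.0])
x = cramer_2x2(A, b)
print(np.allclose(A @ x, b))  # True
```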
Example 2.11
Find any and all solutions for
$$
\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} =
\begin{pmatrix} 0 \\ 0 \end{pmatrix}. \tag{2.139}
$$
Certainly $(x_1, x_2)^T = (0, 0)^T$ is a solution. But maybe there are more. Cramer's rule gives
$$
x_1 = \frac{\begin{vmatrix} 0 & 2 \\ 0 & 4 \end{vmatrix}}
           {\begin{vmatrix} 1 & 2 \\ 2 & 4 \end{vmatrix}} = \frac{0}{0}, \qquad
x_2 = \frac{\begin{vmatrix} 1 & 0 \\ 2 & 0 \end{vmatrix}}
           {\begin{vmatrix} 1 & 2 \\ 2 & 4 \end{vmatrix}} = \frac{0}{0}. \tag{2.140}
$$
This is indeterminate! But the more robust Gaussian elimination process allows us to use row operations
(multiply the top row by −2 and add to the bottom row) to rewrite the original equation as
$$
\begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} =
\begin{pmatrix} 0 \\ 0 \end{pmatrix}. \tag{2.141}
$$
^8 Gabriel Cramer, 1704-1752, Swiss mathematician at University of Geneva.
By inspection, we get an infinite number of solutions, given by the one-parameter family
$$
x_1 = -2s, \qquad x_2 = s, \qquad s \in \mathbb{R}^1. \tag{2.142}
$$
We could also eliminate s and say that x1 = −2x2 . The solutions are linearly dependent. In terms of
the language of vectors, we find the solution to be a vector of fixed direction, with arbitrary magnitude.
In terms of a unit normal vector, we could write the solution as
$$
\mathbf{x} = s \begin{pmatrix} -\frac{2}{\sqrt{5}} \\ \frac{1}{\sqrt{5}} \end{pmatrix}, \qquad s \in \mathbb{R}^1. \tag{2.143}
$$
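The null-space direction found by hand can be recovered numerically. This sketch extracts it from the singular value decomposition, where the right singular vector associated with the zero singular value spans the null space:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0]])  # det A = 0

# Right singular vectors are rows of Vt; the last row corresponds to the
# smallest (here zero) singular value and spans the null space of A.
U, s, Vt = np.linalg.svd(A)
null_dir = Vt[-1]
print(np.allclose(A @ null_dir, 0.0))  # True

# Compare with the hand solution x = s (-2/sqrt(5), 1/sqrt(5))^T, Eq. (2.143);
# the overall sign of a singular vector is arbitrary, so compare magnitudes.
expected = np.array([-2.0, 1.0]) / np.sqrt(5.0)
print(np.allclose(np.abs(null_dir), np.abs(expected)))  # True
```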
Example 2.12
Find any and all solutions for
$$
\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} =
\begin{pmatrix} 1 \\ 0 \end{pmatrix}. \tag{2.144}
$$
Cramer's rule gives
$$
x_1 = \frac{\begin{vmatrix} 1 & 2 \\ 0 & 4 \end{vmatrix}}
           {\begin{vmatrix} 1 & 2 \\ 2 & 4 \end{vmatrix}} = \frac{4}{0}, \qquad
x_2 = \frac{\begin{vmatrix} 1 & 1 \\ 2 & 0 \end{vmatrix}}
           {\begin{vmatrix} 1 & 2 \\ 2 & 4 \end{vmatrix}} = \frac{-2}{0}. \tag{2.145}
$$
Here the division by zero is not balanced by zero numerators, so there is no solution.
Figure 2.3: Sample Cartesian element that is rotated so that its faces have vectors that are
aligned with the unit normals associated with the faces of the element.
Here $\lambda$ is an as-yet-unknown scalar. The vector $n_i$ could be a unit vector, but does not have to be. We can rewrite this as
$$
n_i \left( T_{ij} - \lambda \delta_{ij} \right) = 0.
$$
A trivial solution to this equation is $(n_1, n_2, n_3) = (0, 0, 0)$. But this is not interesting. As suggested by our understanding of Cramer's rule, we can get a non-unique, non-trivial solution if we enforce the condition that the determinant of the coefficient matrix be zero.
Example 2.13
Find the principal axes and principal values of stress if the stress tensor is
$$
T_{ij} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & 2 & 1 \end{pmatrix}. \tag{2.159}
$$
^10 We employ a slightly more common form here than the similar Eq. (3.10.4) of Panton (2013).
^11 Christian Otto Mohr, 1835-1918, Holstein-born German civil engineer, railroad and bridge designer.
Figure 2.4: Sketch of stresses being applied to a cubical fluid element. The thinner lines
with arrows are the components of the stress tensor; the thicker lines on each face represent
the vector associated with the particular face.
A sketch of these stresses is shown on the fluid element in Fig. 2.4. We take the eigenvalue problem
$$
\begin{vmatrix}
1 - \lambda & 0 & 0 \\
0 & 1 - \lambda & 2 \\
0 & 2 & 1 - \lambda
\end{vmatrix} = 0. \tag{2.164}
$$
The tensor $T_{ij}$ is symmetric. Although this is not a crucial step, we will use that fact to write the eigenvalue problem in a slightly more familiar notation; we have taken the transpose of $\mathbf{T}$. Substituting for $T_{ji}$ and considering the eigenvalue $\lambda = 1$, we get
$$
\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 2 \\ 0 & 2 & 0 \end{pmatrix}
\begin{pmatrix} n_1 \\ n_2 \\ n_3 \end{pmatrix} =
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. \tag{2.170}
$$
We get two equations 2n2 = 0, and 2n3 = 0; thus, n2 = n3 = 0. We can satisfy all equations with an
arbitrary value of n1 . It is always the case that an eigenvector will have an arbitrary magnitude and a
well-defined direction. Here we will choose to normalize our eigenvector and take n1 = 1, so that the
eigenvector is
$$
n_j = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \quad \text{for } \lambda = 1. \tag{2.171}
$$
Geometrically, this means that the original 1 face already has an associated vector that is aligned with
its normal vector.
Now consider the eigenvector associated with the eigenvalue λ = −1. Again substituting into the
original equation, we get
$$
\begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 2 \\ 0 & 2 & 2 \end{pmatrix}
\begin{pmatrix} n_1 \\ n_2 \\ n_3 \end{pmatrix} =
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. \tag{2.172}
$$
This is simply the system of equations
2n1 = 0, (2.173)
2n2 + 2n3 = 0, (2.174)
2n2 + 2n3 = 0. (2.175)
Figure 2.5: Sketch of fluid element rotated to be aligned with the axes of principal stress, with principal values $\lambda^{(1)} = 1$, $\lambda^{(2)} = -1$, $\lambda^{(3)} = 3$. The 1 face projects out of the page.
The eigenvectors are mutually orthogonal and of unit magnitude; we say they form an orthonormal set of vectors. Their orthogonality, as well as the fact that all the eigenvalues are real, can be shown to be a direct consequence of the symmetry of the original tensor. A sketch of the principal stresses on the element rotated so that it is aligned with the principal axes of stress is shown on the fluid element in Fig. 2.5. The three orthonormal eigenvectors, when cast into a matrix, form an orthogonal matrix $\mathbf{Q}$, and calculation reveals that $\det \mathbf{Q} = 1$, so that it is a rotation matrix.
$$
\mathbf{Q} =
\begin{pmatrix}
\vdots & \vdots & \vdots \\
\mathbf{n}^{(1)} & \mathbf{n}^{(2)} & \mathbf{n}^{(3)} \\
\vdots & \vdots & \vdots
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 \\
0 & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\
0 & -\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2}
\end{pmatrix}. \tag{2.185}
$$
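The eigenanalysis of Example 2.13 can be checked numerically. This sketch uses NumPy's symmetric eigensolver, which returns real eigenvalues in ascending order and orthonormal eigenvectors as columns (NumPy's sign conventions for the eigenvectors may differ from those chosen by hand, so only $|\det \mathbf{Q}| = 1$ is asserted here):

```python
import numpy as np

# Stress tensor from Eq. (2.159)
T = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 2.0],
              [0.0, 2.0, 1.0]])

# eigh exploits symmetry: real eigenvalues, orthonormal eigenvectors
lam, Q = np.linalg.eigh(T)
print(np.allclose(lam, [-1.0, 1.0, 3.0]))   # True: principal values of stress

# Columns of Q are the principal directions; Q is orthogonal
print(np.allclose(Q.T @ Q, np.eye(3)))      # True

# |det Q| = 1; flipping the sign of one column, if needed, gives det Q = +1
print(np.isclose(abs(np.linalg.det(Q)), 1.0))  # True
```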