
2.1. VECTORS AND CARTESIAN TENSORS

The term ε_ijk is anti-symmetric for any fixed i; for example, for i = 1, we have

\epsilon_{1jk} = \begin{pmatrix} \epsilon_{111} & \epsilon_{112} & \epsilon_{113} \\ \epsilon_{121} & \epsilon_{122} & \epsilon_{123} \\ \epsilon_{131} & \epsilon_{132} & \epsilon_{133} \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}.    (2.100)

Thus, when its tensor inner product is taken with the symmetric T_(jk), the result must be
the scalar zero. Hence, we also have

d_i = \frac{1}{2} \epsilon_{ijk} T_{[jk]}.    (2.101)
Let us find the inverse relation for d_i. Starting with Eq. (2.99), we take the inner product of
d_i with ε_ilm to get

\epsilon_{ilm} d_i = \frac{1}{2} \epsilon_{ilm} \epsilon_{ijk} T_{jk}.    (2.102)
Employing Eq. (2.74) to eliminate the ε's in favor of δ's, we get

\epsilon_{ilm} d_i = \frac{1}{2} \left( \delta_{lj} \delta_{mk} - \delta_{lk} \delta_{mj} \right) T_{jk},    (2.103)
                  = \frac{1}{2} \left( T_{lm} - T_{ml} \right),    (2.104)
                  = T_{[lm]}.    (2.105)

Hence,

T_{[lm]} = \epsilon_{ilm} d_i.    (2.106)

Note that

T_{[lm]} = \epsilon_{1lm} d_1 + \epsilon_{2lm} d_2 + \epsilon_{3lm} d_3 = \begin{pmatrix} 0 & d_3 & -d_2 \\ -d_3 & 0 & d_1 \\ d_2 & -d_1 & 0 \end{pmatrix}.    (2.107)
And we can write the decomposition of an arbitrary tensor as the sum of its symmetric
part and a factor related to the dual vector associated with its anti-symmetric part:

\underbrace{T_{ij}}_{\text{arbitrary tensor}} = \underbrace{T_{(ij)}}_{\text{symmetric part}} + \underbrace{\epsilon_{kij} d_k}_{\text{anti-symmetric part}}.    (2.108)
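The decomposition of Eq. (2.108) lends itself to a numerical check. The following sketch (pure Python; the sample tensor T is hypothetical) computes the symmetric part and the dual vector of a 3 × 3 tensor, then reassembles the original:

```python
# Sketch: verify T_ij = T_(ij) + eps_kij d_k for a hypothetical 3x3 tensor.

def eps(i, j, k):
    """Permutation symbol eps_ijk (0-based indices): +1, -1, or 0."""
    return (i - j) * (j - k) * (k - i) // 2

T = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.5]]

# Symmetric part T_(ij) = (T_ij + T_ji)/2
Tsym = [[0.5 * (T[i][j] + T[j][i]) for j in range(3)] for i in range(3)]

# Dual vector d_i = (1/2) eps_ijk T_jk, as in Eq. (2.101)
d = [0.5 * sum(eps(i, j, k) * T[j][k] for j in range(3) for k in range(3))
     for i in range(3)]

# Reassemble via Eq. (2.108): T_ij = T_(ij) + eps_kij d_k
Trebuilt = [[Tsym[i][j] + sum(eps(k, i, j) * d[k] for k in range(3))
             for j in range(3)] for i in range(3)]

assert all(abs(Trebuilt[i][j] - T[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```

The closed-form permutation symbol (i − j)(j − k)(k − i)/2 is a standard identity that works for 0-based as well as 1-based indices.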

2.1.4.6 Tensor product: two tensors


The tensor product between two arbitrary tensors yields a third tensor. For second order
tensors, we have the tensor product in Cartesian index notation as

Sij Tjk = Pik . (2.109)

Note that j is a dummy index, i and k are free indices, and that the free indices in each
additive term are the same. In that sense they behave somewhat as dimensional units that
must be the same for each term. In Gibbs notation, the equivalent tensor product is written
as
S · T = P. (2.110)
In contrast to the tensor inner product, which has two pairs of dummy indices and two dots,
the tensor product has one pair of dummy indices and one dot. The tensor product is
equivalent to matrix multiplication in matrix algebra.
An important property of tensors is that, in general, the tensor product does not commute,
S · T ≠ T · S. In the most formal manifestation of Cartesian index notation, one should also
not commute the elements, and the dummy indices should appear next to one another in adjacent
terms as shown. However, it is of no great consequence to change the order of terms, so
we can write Sij Tjk = Tjk Sij. That is, in Cartesian index notation, elements do commute.
But, in Cartesian index notation, the order of the indices is extremely important, and it
is this order that does not commute: Sij Tjk ≠ Sji Tjk in general. The version presented
for Sij Tjk in Eq. (2.109), in which the dummy index j is juxtaposed between each term, is
slightly preferable as it maintains the order we find in the Gibbs notation.

Example 2.7
For two general 2 × 2 tensors, S and T, find the tensor product.

The tensor product is

\mathbf{S} \cdot \mathbf{T} = \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} \begin{pmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{pmatrix} = \begin{pmatrix} S_{11} T_{11} + S_{12} T_{21} & S_{11} T_{12} + S_{12} T_{22} \\ S_{21} T_{11} + S_{22} T_{21} & S_{21} T_{12} + S_{22} T_{22} \end{pmatrix}.    (2.111)

Compare with the commutation:

\mathbf{T} \cdot \mathbf{S} = \begin{pmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{pmatrix} \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix} = \begin{pmatrix} S_{11} T_{11} + S_{21} T_{12} & S_{12} T_{11} + S_{22} T_{12} \\ S_{11} T_{21} + S_{21} T_{22} & S_{12} T_{21} + S_{22} T_{22} \end{pmatrix}.    (2.112)

Clearly S · T ≠ T · S. It can be shown that if both S and T are symmetric, then S · T = (T · S)^T.
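A quick numerical check of this example (the 2 × 2 values below are hypothetical) confirms both the non-commutation and the transpose identity for symmetric factors:

```python
# Sketch: S.T != T.S in general, but S.T == (T.S)^T when both are symmetric.

def matmul(A, B):
    """Tensor (matrix) product of two square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

S = [[1.0, 2.0], [3.0, 4.0]]
T = [[5.0, 6.0], [7.0, 8.0]]
assert matmul(S, T) != matmul(T, S)   # the tensor product does not commute

# Symmetric factors: S.T = (T.S)^T
Ssym = [[1.0, 2.0], [2.0, 3.0]]
Tsym = [[4.0, 5.0], [5.0, 6.0]]
assert matmul(Ssym, Tsym) == transpose(matmul(Tsym, Ssym))
```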

2.1.4.7 Vector product: vector and tensor


The product of a vector and a tensor, which again does not in general commute, comes in two
flavors, pre-multiplication and post-multiplication, both important, and given in Cartesian
index and Gibbs notation next:

2.1.4.7.1 Pre-multiplication

u_j = v_i T_{ij} = T_{ij} v_i,    (2.113)
\mathbf{u}^T = \mathbf{v}^T \cdot \mathbf{T} \neq \mathbf{T} \cdot \mathbf{v}.    (2.114)

In the Cartesian index notation here, the first form is preferred as it has a correspondence
with the Gibbs notation, but both are correct representations given our summation convention.


2.1.4.7.2 Post-multiplication

w_i = T_{ij} v_j = v_j T_{ij},    (2.115)
\mathbf{w} = \mathbf{T} \cdot \mathbf{v} \neq \mathbf{v}^T \cdot \mathbf{T}.    (2.116)
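The distinction can be sketched numerically: pre-multiplication sums over the first index of T, post-multiplication over the second (the sample T and v below are hypothetical):

```python
# Sketch: pre-multiplication u_j = v_i T_ij vs post-multiplication w_i = T_ij v_j.

T = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]
v = [1.0, 0.0, 2.0]

u = [sum(v[i] * T[i][j] for i in range(3)) for j in range(3)]  # v^T . T
w = [sum(T[i][j] * v[j] for j in range(3)) for i in range(3)]  # T . v

assert u != w  # the two products differ unless T is symmetric
```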

2.1.4.8 Dyadic product: two vectors

As opposed to the inner product between two vectors, which yields a scalar, we also have the
dyadic product, which yields a tensor. In Cartesian index and Gibbs notation, we have

T_{ij} = u_i v_j = v_j u_i,    (2.117)
\mathbf{T} = \mathbf{u} \mathbf{v}^T \neq \mathbf{v} \mathbf{u}^T.    (2.118)

Notice there is no dot in the dyadic product; the dot is reserved for the inner product.

Example 2.8
Find the dyadic product between two general two-dimensional vectors. Show the dyadic product
does not commute in general, and find the condition under which it does commute.

Take

\mathbf{u} = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}, \qquad \mathbf{v} = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}.    (2.119)

Then

\mathbf{u} \mathbf{v}^T = u_i v_j = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} \begin{pmatrix} v_1 & v_2 \end{pmatrix} = \begin{pmatrix} u_1 v_1 & u_1 v_2 \\ u_2 v_1 & u_2 v_2 \end{pmatrix}.    (2.120)

Compare this to the commuted operation, \mathbf{v} \mathbf{u}^T:

\mathbf{v} \mathbf{u}^T = v_i u_j = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} \begin{pmatrix} u_1 & u_2 \end{pmatrix} = \begin{pmatrix} v_1 u_1 & v_1 u_2 \\ v_2 u_1 & v_2 u_2 \end{pmatrix}.    (2.121)

By inspection, we see the operations in general do not commute. They do commute if v2 /v1 = u2 /u1 .
So in order for the dyadic product to commute, u and v must be parallel.
It is easily seen that the dyadic product vv^T is a symmetric tensor. For the two-dimensional
system, we would have

\mathbf{v} \mathbf{v}^T = v_i v_j = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} \begin{pmatrix} v_1 & v_2 \end{pmatrix} = \begin{pmatrix} v_1 v_1 & v_1 v_2 \\ v_2 v_1 & v_2 v_2 \end{pmatrix}.    (2.122)
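A numerical sketch of the dyadic product (with hypothetical vectors) illustrates both the failure to commute and the parallel-vector exception:

```python
# Sketch: u v^T vs v u^T; they agree only when u and v are parallel.

def dyad(a, b):
    """Dyadic (outer) product a b^T."""
    return [[ai * bj for bj in b] for ai in a]

u, v = [1.0, 2.0], [3.0, 5.0]
assert dyad(u, v) != dyad(v, u)      # not parallel: no commutation

vp = [2.0, 4.0]                       # vp = 2u, parallel to u
assert dyad(u, vp) == dyad(vp, u)     # parallel vectors: commutes

# vv^T is symmetric
M = dyad(v, v)
assert M[0][1] == M[1][0]
```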


2.1.4.9 Contraction
We contract a general tensor, which has all of its subscripts different, by setting one subscript
equal to another. A single contraction will reduce the order of a tensor by two.
For example, the contraction of the second-order tensor Tij is Tii, which indicates a sum is to
be performed:

T_{ii} = T_{11} + T_{22} + T_{33},    (2.123)
\operatorname{tr} \mathbf{T} = T_{11} + T_{22} + T_{33}.    (2.124)

So, in this case the contraction yields a scalar. In matrix algebra, this particular contraction
is the trace of the matrix.

2.1.4.10 Vector cross product


The vector cross product is defined in Cartesian index and Gibbs notation as

w_i = \epsilon_{ijk} u_j v_k,    (2.125)
\mathbf{w} = \mathbf{u} \times \mathbf{v}.    (2.126)

Expanding for i = 1, 2, 3 gives

w_1 = \epsilon_{123} u_2 v_3 + \epsilon_{132} u_3 v_2 = u_2 v_3 - u_3 v_2,    (2.127)
w_2 = \epsilon_{231} u_3 v_1 + \epsilon_{213} u_1 v_3 = u_3 v_1 - u_1 v_3,    (2.128)
w_3 = \epsilon_{312} u_1 v_2 + \epsilon_{321} u_2 v_1 = u_1 v_2 - u_2 v_1.    (2.129)
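The expansion above can be checked by computing w_i = ε_ijk u_j v_k directly from the permutation symbol (sample vectors hypothetical):

```python
# Sketch: cross product from the permutation symbol, checked against the
# expanded component formulas of Eqs. (2.127)-(2.129).

def eps(i, j, k):
    """Permutation symbol eps_ijk (0-based indices): +1, -1, or 0."""
    return (i - j) * (j - k) * (k - i) // 2

def cross(u, v):
    return [sum(eps(i, j, k) * u[j] * v[k]
                for j in range(3) for k in range(3)) for i in range(3)]

u, v = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
w = cross(u, v)
assert w == [u[1]*v[2] - u[2]*v[1],   # w1 = u2 v3 - u3 v2
             u[2]*v[0] - u[0]*v[2],   # w2 = u3 v1 - u1 v3
             u[0]*v[1] - u[1]*v[0]]   # w3 = u1 v2 - u2 v1
```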

2.1.4.11 Vector associated with a plane


We often have to select a vector that is associated with a particular direction. Now for any
direction we choose, there exists an associated unit vector and normal plane. Recall that
our notation has been defined so that the first index is associated with a face or direction,
and the second index corresponds to the components of the vector associated with that face.
If we take ni to be a unit normal vector associated with a given direction and normal plane,
and we have been given a tensor Tij , the vector tj associated with that plane is given in
Cartesian index and Gibbs notation by
tj = ni Tij , (2.130)
tT = nT · T, (2.131)
t = TT · n. (2.132)
A sketch of a Cartesian element with the tensor components sketched on the proper face is
shown in Fig. 2.2.

Example 2.9
Find the vector associated with the 1 face, t^(1), as shown in Fig. 2.2.


Figure 2.2: Sample Cartesian element that is aligned with coordinate axes, along with
tensor components and vectors associated with each face.

We first choose the unit normal associated with the x1 face, that is, the vector ni = (1, 0, 0)^T. The
associated vector is found by doing the actual summation

t_j = n_i T_{ij} = n_1 T_{1j} + n_2 T_{2j} + n_3 T_{3j}.    (2.133)

Now n1 = 1, n2 = 0, and n3 = 0, so for this problem, we have

t_j^{(1)} = T_{1j}.    (2.134)
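The summation of this example can be sketched in code; the tensor T below is hypothetical, while n = (1, 0, 0)^T is the unit normal of the example:

```python
# Sketch: vector t_j = n_i T_ij associated with the x1 face.

T = [[10.0,  1.0,  2.0],
     [ 3.0, 20.0,  4.0],
     [ 5.0,  6.0, 30.0]]
n = [1.0, 0.0, 0.0]

t = [sum(n[i] * T[i][j] for i in range(3)) for j in range(3)]
assert t == T[0]  # picks out the first row, t_j = T_1j, as in Eq. (2.134)
```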

2.2 Solution of linear algebra equations


We briefly discuss the solution of linear algebra equations of the form

Aij xj = bi , (2.135)
A · x = b. (2.136)

Full details can be found in any text addressing linear algebra, e.g. Powers and Sen (2015).
Let us presume that A is a known square matrix of dimension N × N, x is an unknown
column vector of length N, and b is a known column vector of length N. The following can
be proved:


• A unique solution for x exists iff det A ≠ 0.

• If det A = 0, solutions for x may or may not exist; if they exist, they are not unique.

• Cramer's⁸ rule, a method involving the ratio of determinants discussed in linear algebra
texts, can be used to find x; other methods exist, such as Gaussian elimination.
Let us consider a few examples for N = 2.

Example 2.10
Use Cramer’s rule to solve a general linear algebra problem with N = 2.

Consider then

\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}.    (2.137)

The solution from Cramer's rule involves the ratio of determinants. We get

x_1 = \frac{\begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix}}{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}} = \frac{b_1 a_{22} - b_2 a_{12}}{a_{11} a_{22} - a_{12} a_{21}}, \qquad x_2 = \frac{\begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix}}{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}} = \frac{b_2 a_{11} - b_1 a_{21}}{a_{11} a_{22} - a_{12} a_{21}}.    (2.138)

If b1, b2 ≠ 0 and det A = a11 a22 − a12 a21 ≠ 0, there is a unique nontrivial solution for x. If b1 = b2 = 0
and det A = a11 a22 − a12 a21 ≠ 0, we must have x1 = x2 = 0. Obviously, if det A = a11 a22 − a12 a21 = 0,
we cannot use Cramer's rule to compute x, as it involves division by zero. But we can salvage a non-unique
solution if we also have b1 = b2 = 0, as we shall see.
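Cramer's rule for N = 2 is easily coded; the sketch below (with a hypothetical well-conditioned system) mirrors Eq. (2.138):

```python
# Sketch: Cramer's rule for a 2x2 system A.x = b; sample values hypothetical.

def det2(a, b, c, d):
    """Determinant of [[a, b], [c, d]]."""
    return a * d - b * c

def cramer2(A, b):
    D = det2(A[0][0], A[0][1], A[1][0], A[1][1])
    if D == 0:
        raise ValueError("det A = 0: no unique solution")
    x1 = det2(b[0], A[0][1], b[1], A[1][1]) / D
    x2 = det2(A[0][0], b[0], A[1][0], b[1]) / D
    return [x1, x2]

A = [[3.0, 1.0], [1.0, 2.0]]
b = [5.0, 5.0]
x = cramer2(A, b)   # solves 3 x1 + x2 = 5, x1 + 2 x2 = 5
assert x == [1.0, 2.0]
```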

Example 2.11
Find any and all solutions for

\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.    (2.139)

Certainly (x1, x2)^T = (0, 0)^T is a solution. But maybe there are more. Cramer's rule gives

x_1 = \frac{\begin{vmatrix} 0 & 2 \\ 0 & 4 \end{vmatrix}}{\begin{vmatrix} 1 & 2 \\ 2 & 4 \end{vmatrix}} = \frac{0}{0}, \qquad x_2 = \frac{\begin{vmatrix} 1 & 0 \\ 2 & 0 \end{vmatrix}}{\begin{vmatrix} 1 & 2 \\ 2 & 4 \end{vmatrix}} = \frac{0}{0}.    (2.140)

This is indeterminate! But the more robust Gaussian elimination process allows us to use row operations
(multiply the top row by −2 and add to the bottom row) to rewrite the original equation as

\begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.    (2.141)

⁸Gabriel Cramer, 1704-1752, Swiss mathematician at University of Geneva.


By inspection, we get an infinite number of solutions, given by the one-parameter family of equations

x_1 = -2s, \qquad x_2 = s, \qquad s \in \mathbb{R}^1.    (2.142)

We could also eliminate s and say that x1 = −2x2. The solutions are linearly dependent. In terms of
the language of vectors, we find the solution to be a vector of fixed direction, with arbitrary magnitude.
In terms of a unit normal vector, we could write the solution as

\mathbf{x} = s \begin{pmatrix} -\frac{2}{\sqrt{5}} \\ \frac{1}{\sqrt{5}} \end{pmatrix}, \qquad s \in \mathbb{R}^1.    (2.143)
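A short check (no new assumptions beyond the family itself) confirms that every member of the one-parameter family of Eq. (2.142) solves the homogeneous system:

```python
# Sketch: every x = s(-2, 1)^T solves the singular homogeneous system
# of Eq. (2.139).

A = [[1.0, 2.0], [2.0, 4.0]]

for s in [-1.0, 0.0, 0.5, 3.0]:
    x = [-2.0 * s, s]
    r = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    assert r == [0.0, 0.0]
```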

Example 2.12
Find any and all solutions for

\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.    (2.144)

Cramer's rule gives

x_1 = \frac{\begin{vmatrix} 1 & 2 \\ 0 & 4 \end{vmatrix}}{\begin{vmatrix} 1 & 2 \\ 2 & 4 \end{vmatrix}} = \frac{4}{0}, \qquad x_2 = \frac{\begin{vmatrix} 1 & 1 \\ 2 & 0 \end{vmatrix}}{\begin{vmatrix} 1 & 2 \\ 2 & 4 \end{vmatrix}} = \frac{-2}{0}.    (2.145)

There is no value of x that satisfies A · x = b!⁹

2.3 Eigenvalues, eigenvectors, and tensor invariants


For a given tensor Tij , it is possible to select a plane for which the vector from Tij associated
with that plane points in the same direction as the normal associated with the chosen plane.
In fact for a three-dimensional element, it is possible to choose three planes for which the
vector associated with the given planes is aligned with the unit normal associated with those
planes. We can think of this as finding a rotation as sketched in Fig. 2.3.
⁹There is, however, in some sense a best solution, that is, an x of minimum length that also minimizes
||A · x − b||. Using the pseudoinverse procedure described in Powers and Sen (2015), Ch. 7, we find there
exists a non-unique set of x = (1/25 − 2s, 2/25 + s)^T, s ∈ ℝ¹, for which, for all values of s, the so-called
error norm takes on the same minimum value, e = ||A · x − b|| = 2/√5. For s = 0, we then get the "best"
x = (1/25, 2/25)^T, in that this x minimizes the error and is itself of minimum length.


Figure 2.3: Sample Cartesian element that is rotated so that its faces have vectors that are
aligned with the unit normals associated with the faces of the element.

Mathematically, we can enforce this condition by requiring that

\underbrace{n_i T_{ij}}_{\text{vector associated with chosen direction}} = \underbrace{\lambda n_j}_{\text{scalar multiple of chosen direction}}.    (2.146)

Here λ is an as-yet unknown scalar. The vector ni could be a unit vector, but does not
have to be. We can rewrite this as

ni Tij = λni δij . (2.147)

In Gibbs notation, this becomes nT · T = λnT · I. In mathematics, this is known as a left


eigenvalue problem. Solutions ni that are non-trivial are known as left eigenvectors. We
can also formulate this as a right eigenvalue problem by taking the transpose of both sides
to obtain TT · n = λI · n. Here we have used the fact that IT = I. We note that the left
eigenvectors of T are the right eigenvectors of T^T. Eigenvalue problems are quite general
and arise whenever an operator acting on a vector generates a vector that differs from the
original only in magnitude.
We can rearrange to form
ni (Tij − λδij ) = 0. (2.148)
In matrix notation, this can be written as
 
\begin{pmatrix} n_1 & n_2 & n_3 \end{pmatrix} \begin{pmatrix} T_{11} - \lambda & T_{12} & T_{13} \\ T_{21} & T_{22} - \lambda & T_{23} \\ T_{31} & T_{32} & T_{33} - \lambda \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \end{pmatrix}.    (2.149)

A trivial solution to this equation is (n1 , n2 , n3 ) = (0, 0, 0). But this is not interesting.
As suggested by our understanding of Cramer’s rule, we can get a non-unique, non-trivial
solution if we enforce the condition that the determinant of the coefficient matrix be zero.


As we have an unknown parameter λ, we have sufficient degrees of freedom to accomplish
this. So, we require

\begin{vmatrix} T_{11} - \lambda & T_{12} & T_{13} \\ T_{21} & T_{22} - \lambda & T_{23} \\ T_{31} & T_{32} & T_{33} - \lambda \end{vmatrix} = 0.    (2.150)

We know from linear algebra that such an equation for a third-order matrix gives rise to a
characteristic polynomial for λ of the form¹⁰

\lambda^3 - I_T^{(1)} \lambda^2 + I_T^{(2)} \lambda - I_T^{(3)} = 0,    (2.151)

where I_T^{(1)}, I_T^{(2)}, I_T^{(3)} are scalars that are functions of all the scalars Tij. The I_T's are known
as the invariants of the tensor Tij. They can be shown, following a detailed analysis, to be
given by

I_T^{(1)} = T_{ii} = \operatorname{tr} \mathbf{T},    (2.152)
I_T^{(2)} = \frac{1}{2} \left( T_{ii} T_{jj} - T_{ij} T_{ji} \right) = \frac{1}{2} \left( (\operatorname{tr} \mathbf{T})^2 - \operatorname{tr} (\mathbf{T} \cdot \mathbf{T}) \right) = (\det \mathbf{T}) \operatorname{tr} \mathbf{T}^{-1},    (2.153)
         = \frac{1}{2} \left( T_{(ii)} T_{(jj)} + T_{[ij]} T_{[ij]} - T_{(ij)} T_{(ij)} \right),    (2.154)
I_T^{(3)} = \epsilon_{ijk} T_{1i} T_{2j} T_{3k} = \det \mathbf{T}.    (2.155)

Here "det" denotes the determinant. It can also be shown that if λ^(1), λ^(2), λ^(3) are the three
eigenvalues, then the invariants can also be expressed as

I_T^{(1)} = \lambda^{(1)} + \lambda^{(2)} + \lambda^{(3)},    (2.156)
I_T^{(2)} = \lambda^{(1)} \lambda^{(2)} + \lambda^{(2)} \lambda^{(3)} + \lambda^{(3)} \lambda^{(1)},    (2.157)
I_T^{(3)} = \lambda^{(1)} \lambda^{(2)} \lambda^{(3)}.    (2.158)
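These formulas can be verified numerically. The sketch below (with a hypothetical symmetric tensor) computes the invariants from Eqs. (2.152), (2.153), and (2.155), and confirms that det(T − λI) = −(λ³ − I_T^(1) λ² + I_T^(2) λ − I_T^(3)) at several sample values of λ:

```python
# Sketch: tensor invariants and the characteristic polynomial.

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

T = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 4.0]]

I1 = sum(T[i][i] for i in range(3))                         # tr T
trT2 = sum(T[i][j] * T[j][i] for i in range(3) for j in range(3))
I2 = 0.5 * (I1**2 - trT2)                                   # ((tr T)^2 - tr(T.T))/2
I3 = det3(T)                                                # det T

# det(T - lam I) should equal -(lam^3 - I1 lam^2 + I2 lam - I3)
for lam in [0.0, 1.0, -1.0, 2.5]:
    Tl = [[T[i][j] - (lam if i == j else 0.0) for j in range(3)]
          for i in range(3)]
    assert abs(det3(Tl) + (lam**3 - I1*lam**2 + I2*lam - I3)) < 1e-9
```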
In general these eigenvalues, and consequently the eigenvectors, are complex. Additionally,
in general the eigenvectors are non-orthogonal. If, however, the matrix we are considering
is symmetric, as is often the case in fluid mechanics, it can be formally proven that
all the eigenvalues are real and all the eigenvectors are real and orthogonal. If, for instance,
our tensor is the stress tensor, we will show that it is symmetric in the absence of external
couples. The eigenvectors of the stress tensor can form the basis for an intrinsic coordinate
system that has its axes aligned with the principal stress on a fluid element. The eigenvalues
themselves give the value of the principal stress. This is actually a generalization of the
familiar Mohr's¹¹ circle from solid mechanics.

Example 2.13
Find the principal axes and principal values of stress if the stress tensor is

T_{ij} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & 2 & 1 \end{pmatrix}.    (2.159)

¹⁰We employ a slightly more common form here than the similar Eq. (3.10.4) of Panton (2013).
¹¹Christian Otto Mohr, 1835-1918, Holstein-born German civil engineer, railroad and bridge designer.


Figure 2.4: Sketch of stresses being applied to a cubical fluid element. The thinner lines
with arrows are the components of the stress tensor; the thicker lines on each face represent
the vector associated with the particular face.

A sketch of these stresses is shown on the fluid element in Fig. 2.4. We take the eigenvalue problem

n_i T_{ij} = \lambda n_j,    (2.160)
           = \lambda n_i \delta_{ij},    (2.161)
n_i (T_{ij} - \lambda \delta_{ij}) = 0.    (2.162)

This becomes for our problem

\begin{pmatrix} n_1 & n_2 & n_3 \end{pmatrix} \begin{pmatrix} 1-\lambda & 0 & 0 \\ 0 & 1-\lambda & 2 \\ 0 & 2 & 1-\lambda \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \end{pmatrix}.    (2.163)

For a non-trivial solution for ni, we must have

\begin{vmatrix} 1-\lambda & 0 & 0 \\ 0 & 1-\lambda & 2 \\ 0 & 2 & 1-\lambda \end{vmatrix} = 0.    (2.164)

This gives rise to the polynomial equation

(1 - \lambda) \left( (1 - \lambda)(1 - \lambda) - 4 \right) = 0.    (2.165)

This has three solutions:

\lambda = 1, \qquad \lambda = -1, \qquad \lambda = 3.    (2.166)
Notice all eigenvalues are real, which we expect because the tensor is symmetric.
Now let us find the eigenvectors (aligned with the principal axes of stress) for this problem. First, it
can easily be shown that the product of a vector with a tensor commutes when the tensor


is symmetric. Although this is not a crucial step, we will use it to write the eigenvalue problem in a
slightly more familiar notation:

ni (Tij − λδij ) = 0 =⇒ (Tij − λδij ) ni = 0, because scalar components commute. (2.167)

Because of symmetry, we can now commute the indices to get

(Tji − λδji ) ni = 0, because indices commute if symmetric. (2.168)

Expanding into matrix notation, we get

\begin{pmatrix} T_{11}-\lambda & T_{21} & T_{31} \\ T_{12} & T_{22}-\lambda & T_{32} \\ T_{13} & T_{23} & T_{33}-\lambda \end{pmatrix} \begin{pmatrix} n_1 \\ n_2 \\ n_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.    (2.169)

We have taken the transpose of T. Substituting for Tji and considering the eigenvalue λ = 1, we get

\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 2 \\ 0 & 2 & 0 \end{pmatrix} \begin{pmatrix} n_1 \\ n_2 \\ n_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.    (2.170)

We get two equations 2n2 = 0, and 2n3 = 0; thus, n2 = n3 = 0. We can satisfy all equations with an
arbitrary value of n1 . It is always the case that an eigenvector will have an arbitrary magnitude and a
well-defined direction. Here we will choose to normalize our eigenvector and take n1 = 1, so that the
eigenvector is

n_j = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \quad \text{for } \lambda = 1.    (2.171)

Geometrically, this means that the original 1 face already has an associated vector that is aligned with
its normal vector.
Now consider the eigenvector associated with the eigenvalue λ = −1. Again substituting into the
original equation, we get

\begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 2 \\ 0 & 2 & 2 \end{pmatrix} \begin{pmatrix} n_1 \\ n_2 \\ n_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.    (2.172)
This is simply the system of equations

2n1 = 0, (2.173)
2n2 + 2n3 = 0, (2.174)
2n2 + 2n3 = 0. (2.175)

Clearly n1 = 0. We could take n2 = 1 and n3 = −1 for a non-trivial solution. Alternatively, let's
normalize and take

n_j = \begin{pmatrix} 0 \\ \frac{\sqrt{2}}{2} \\ -\frac{\sqrt{2}}{2} \end{pmatrix}.    (2.176)
Finally consider the eigenvector associated with the eigenvalue λ = 3. Again substituting into the
original equation, we get

\begin{pmatrix} -2 & 0 & 0 \\ 0 & -2 & 2 \\ 0 & 2 & -2 \end{pmatrix} \begin{pmatrix} n_1 \\ n_2 \\ n_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.    (2.177)


Figure 2.5: Sketch of fluid element rotated to be aligned with axes of principal stress, along
with magnitude of principal stress. The 1 face projects out of the page.

This is the system of equations


−2n1 = 0, (2.178)
−2n2 + 2n3 = 0, (2.179)
2n2 − 2n3 = 0. (2.180)
Clearly again n1 = 0. We could take n2 = 1 and n3 = 1 for a non-trivial solution. Once again, let us
normalize and take

n_j = \begin{pmatrix} 0 \\ \frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \end{pmatrix}.    (2.181)
In summary, the three eigenvectors and associated eigenvalues are

n_j^{(1)} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \quad \text{for } \lambda^{(1)} = 1,    (2.182)

n_j^{(2)} = \begin{pmatrix} 0 \\ \frac{\sqrt{2}}{2} \\ -\frac{\sqrt{2}}{2} \end{pmatrix} \quad \text{for } \lambda^{(2)} = -1,    (2.183)

n_j^{(3)} = \begin{pmatrix} 0 \\ \frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \end{pmatrix} \quad \text{for } \lambda^{(3)} = 3.    (2.184)

The eigenvectors are mutually orthogonal, as well as normal. We say they form an orthonormal set of
vectors. Their orthogonality, as well as the fact that all the eigenvalues are real, can be shown to be
a direct consequence of the symmetry of the original tensor. A sketch of the principal stresses on the
element rotated so that it is aligned with the principal axes of stress is shown on the fluid element in
Fig. 2.5. The three orthonormal eigenvectors, when cast into a matrix, form an orthogonal matrix Q,
and calculation reveals that det Q = 1, so that it is a rotation matrix:

\mathbf{Q} = \begin{pmatrix} \vdots & \vdots & \vdots \\ \mathbf{n}^{(1)} & \mathbf{n}^{(2)} & \mathbf{n}^{(3)} \\ \vdots & \vdots & \vdots \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\ 0 & -\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{pmatrix}.    (2.185)
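The eigenpairs and rotation matrix of this example can be verified directly: the sketch below checks that T · n = λn for each pair (valid here because T is symmetric), confirms orthonormality of the eigenvectors, and evaluates det Q:

```python
import math

# Sketch: verify the eigenpairs of Example 2.13 and that Q is a rotation
# matrix (orthonormal columns, det Q = 1).

T = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 2.0],
     [0.0, 2.0, 1.0]]

r = math.sqrt(2.0) / 2.0
pairs = [(1.0, [1.0, 0.0, 0.0]),
         (-1.0, [0.0, r, -r]),
         (3.0, [0.0, r, r])]

# T.n = lambda n for each eigenpair (T symmetric: left and right agree)
for lam, n in pairs:
    Tn = [sum(T[i][j] * n[j] for j in range(3)) for i in range(3)]
    assert all(abs(Tn[i] - lam * n[i]) < 1e-12 for i in range(3))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Orthonormality of the eigenvectors
vecs = [n for _, n in pairs]
for a in range(3):
    for b in range(3):
        assert abs(dot(vecs[a], vecs[b]) - (1.0 if a == b else 0.0)) < 1e-12

# Q has the eigenvectors as columns; det Q = 1 means it is a rotation
Q = [[vecs[j][i] for j in range(3)] for i in range(3)]
detQ = (Q[0][0] * (Q[1][1]*Q[2][2] - Q[1][2]*Q[2][1])
      - Q[0][1] * (Q[1][0]*Q[2][2] - Q[1][2]*Q[2][0])
      + Q[0][2] * (Q[1][0]*Q[2][1] - Q[1][1]*Q[2][0]))
assert abs(detQ - 1.0) < 1e-12
```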

© 01 July 2025, Joseph M. Powers. All rights reserved.
