2.1. VECTORS AND CARTESIAN TENSORS

a representation of a as a 3 × 1 column vector. We adopt the standard that all vectors can
be thought of as column vectors. Often in matrix operations, we will need row vectors.
They will be formed by taking the transpose, indicated by a superscript T , of a column
vector. In the interest of clarity, full consistency with notions from matrix algebra, as well
as transparent translation to the conventions of necessarily meticulous (as well as popular)
software tools such as MATLAB, we will scrupulously use the transpose notation. This comes
at the expense of a more cluttered set of equations at times. We also note that most authors
do not explicitly use the transpose notation, but its use is implicit.

2.1.2 Rotation of axes


The Cartesian index notation is developed to be valid under transformations from one Carte-
sian coordinate system to another Cartesian coordinate system. It is not applicable to either
general orthogonal systems (such as cylindrical or spherical) or non-orthogonal systems. It
is straightforward, but tedious, to develop a more general system to handle generalized co-
ordinate transformations, and Einstein did just that as well. For our purposes however, the
simpler Cartesian index notation will suffice.
We will consider a coordinate transformation that is a simple rotation of axes. This trans-
formation preserves all angles; hence, right angles in the original Cartesian system will be
right angles in the rotated, but still Cartesian system. It also preserves lengths of geometric
features, with no stretching. We will require, ultimately, that whatever theory we develop
must generate results in which physically relevant quantities such as temperature, pressure,
density, and velocity magnitude, are independent of the particular set of coordinates with
which we choose to describe the system. To motivate this, let us consider a two-dimensional
rotation from an unprimed system to a primed system. So, we seek a transformation that
maps (x1 , x2 )T → (x′1 , x′2 )T . We will rotate the unprimed system counterclockwise through
an angle α to achieve the primed system.² The rotation is sketched in Fig. 2.1. It is easy
to show that the angle β = π/2 − α. Here a point P is identified by a particular set of co-
ordinates (x∗1 , x∗2 ). One of the keys to all of continuum mechanics is realizing that while the
location (or velocity, or stress, ...) of P may be represented differently in various coordinate
systems, ultimately it must represent the same physical reality. Straightforward geometry
shows the following relation between the primed and unprimed coordinate systems for x′1

x′∗1 = x∗1 cos α + x∗2 cos β. (2.3)

More generally, we can say for an arbitrary point that

x′1 = x1 cos α + x2 cos β. (2.4)


² This is an example of a so-called alias transformation. In such a transformation, the coordinate axes
transform, but the underlying object remains unchanged. So a vector may be considered to be invariant, but
its representation in different coordinate systems may be different. Alias transformations are most common
in continuum mechanics. In contrast, an alibi transformation is one in which the coordinate axes remain
fixed, but the object transforms. This mode of thought is most common in fields such as robotics. In short,
alias rotates the axes, but not the body; alibi rotates the body, but not the axes.

© 01 July 2025, Joseph M. Powers. All rights reserved.


26 CHAPTER 2. SOME NECESSARY MATHEMATICS

[Figure 2.1 appears here: the unprimed axes (x1, x2), the primed axes (x′1, x′2) rotated counterclockwise through the angle α, the point P with coordinates (x∗1, x∗2), and the projection x′∗1 = x∗1 cos α + x∗2 cos β.]

Figure 2.1: Sketch of coordinate transformation that is a rotation of axes.

We adopt the following notation:

• (x1 , x′1 ) denotes the angle between the x1 and x′1 axes,
• (x2 , x′2 ) denotes the angle between the x2 and x′2 axes,
• (x3 , x′3 ) denotes the angle between the x3 and x′3 axes,
• (x1 , x′2 ) denotes the angle between the x1 and x′2 axes,
• …
Thus, in two dimensions, we have
x′1 = x1 cos(x1 , x′1 ) + x2 cos(x2 , x′1 ). (2.5)
In three dimensions, this extends to
x′1 = x1 cos(x1 , x′1 ) + x2 cos(x2 , x′1 ) + x3 cos(x3 , x′1 ). (2.6)
Extending this analysis to calculate x′2 and x′3 gives
x′2 = x1 cos(x1 , x′2 ) + x2 cos(x2 , x′2 ) + x3 cos(x3 , x′2 ), (2.7)
x′3 = x1 cos(x1 , x′3 ) + x2 cos(x2 , x′3 ) + x3 cos(x3 , x′3 ). (2.8)


These can be written in matrix form as

                                      ⎛ cos(x1 , x′1 )  cos(x1 , x′2 )  cos(x1 , x′3 ) ⎞
  ( x′1  x′2  x′3 ) = ( x1  x2  x3 )  ⎜ cos(x2 , x′1 )  cos(x2 , x′2 )  cos(x2 , x′3 ) ⎟ .   (2.9)
                                      ⎝ cos(x3 , x′1 )  cos(x3 , x′2 )  cos(x3 , x′3 ) ⎠

If we use the shorthand notation, for example, that ℓ11 = cos(x1 , x′1 ), ℓ12 = cos(x1 , x′2 ), etc.,
we have

                                      ⎛ ℓ11  ℓ12  ℓ13 ⎞
  ( x′1  x′2  x′3 ) = ( x1  x2  x3 )  ⎜ ℓ21  ℓ22  ℓ23 ⎟ .   (2.10)
        x′T                xT         ⎝ ℓ31  ℓ32  ℓ33 ⎠
                                              Q

In Gibbs notation, defining the matrix of ℓ’s to be Q³, and recalling that all vectors are taken
to be column vectors, we can alternatively say⁴

x′T = xT · Q. (2.11)

Taking the transpose of both sides and recalling the useful identities that (A · b)T = bT · AT
and (AT )T = A, we can also say
x′ = QT · x. (2.12)
We call Q = ℓij the matrix of direction cosines and QT = ℓji the rotation matrix. It can be
shown that coordinate systems that satisfy the right hand rule require further that

det Q = 1. (2.13)

Matrices Q that have |det Q| = 1 are associated with volume-preserving transformations.
Matrices Q that have det Q > 0 are orientation-preserving transformations. Matrices Q that
have det Q = 1 are thus both volume- and orientation-preserving, and can be thought of as
rotations. A matrix with determinant −1 would be volume-preserving but not orientation-
preserving; it could be considered a reflection. A matrix Q composed of orthonormal
column vectors, with |det Q| = 1 (thus either a rotation or a reflection matrix), is commonly
known as orthogonal, though perhaps “orthonormal” would have been a more descriptive
³ Panton (2013) has a different notation for the direction cosines ℓij and employs Q for a different purpose;
our usage is probably more common in the broader literature.
⁴ The more commonly used alternate convention of not explicitly using the transpose notation for vectors
would instead have our x′T = xT · Q written as x′ = x · Q. In fact, our use of the transpose notation
is strictly viable only for Cartesian coordinate systems, while many will allow Gibbs notation to represent
vectors in non-Cartesian coordinates, for which the transpose operation is ill-suited. However, realizing that
these notes will primarily focus on Cartesian systems, and that such operations relying on the transpose
are useful notions from linear algebra, it will be employed in an overly liberal fashion in these notes. The
alternate convention still typically applies, where necessary, the transpose notation for tensors, so it would
also hold that x′ = QT · x.

nomenclature. Another way to think of the matrix of direction cosines ℓij = Q is as a matrix
of orthonormal basis vectors in its columns:

            ⎛  ⋮     ⋮     ⋮  ⎞
  ℓij = Q = ⎜ n(1)  n(2)  n(3) ⎟ .   (2.14)
            ⎝  ⋮     ⋮     ⋮  ⎠
In a result that is both remarkable and important, it can be shown that the transpose of
an orthogonal matrix is its inverse:
QT = Q−1 . (2.15)
Thus, we have
Q · QT = QT · Q = I. (2.16)
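These orthogonality properties are easy to confirm numerically. The following is a minimal NumPy sketch (the angle α = 0.3 is an arbitrary choice) that builds the two-dimensional direction-cosine matrix of Fig. 2.1 and checks Eqs. (2.15) and (2.16):

```python
import numpy as np

alpha = 0.3  # arbitrary rotation angle, radians

# Direction-cosine matrix: entry (i, j) is ell_ij = cos(x_i, x'_j).
# For a 2D counterclockwise rotation through alpha (Fig. 2.1):
Q = np.array([[np.cos(alpha),           np.cos(np.pi/2 + alpha)],
              [np.cos(np.pi/2 - alpha), np.cos(alpha)          ]])

# Q^T = Q^{-1}, Eq. (2.15)
assert np.allclose(Q.T, np.linalg.inv(Q))
# Q . Q^T = Q^T . Q = I, Eq. (2.16)
assert np.allclose(Q @ Q.T, np.eye(2))
assert np.allclose(Q.T @ Q, np.eye(2))
# det Q = 1: a volume- and orientation-preserving rotation, Eq. (2.13)
assert np.isclose(np.linalg.det(Q), 1.0)
```

Any other angle gives the same result; the assertions depend only on the orthonormality of the columns.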
The equation x′T = xT · Q is really a set of three linear equations. For instance, the first
is
x′1 = x1 ℓ11 + x2 ℓ21 + x3 ℓ31 . (2.17)
More generally, we could say that

x′j = x1 ℓ1j + x2 ℓ2j + x3 ℓ3j . (2.18)

Here j is a so-called “free index,” that for three-dimensional space takes on values j = 1, 2, 3.
Some rules of thumb for free indices are

• A free index can appear only once in each additive term.

• One free index (e.g. k) may replace another (e.g. j) as long as it is replaced in each
additive term.

We can simplify Eq. (2.18) further by writing

          3
  x′j =   Σ   xi ℓij .   (2.19)
         i=1

This is commonly written in the following form:

x′j = xi ℓij . (2.20)

We again note that it is to be understood that whenever an index is repeated, as has the
index i here, that a summation from i = 1 to i = 3 is to be performed and that i is the
“dummy index.” Some rules of thumb for dummy indices are

• dummy indices can appear only twice in a given additive term,

• a pair of dummy indices, say i, i, can be exchanged for another, say j, j, in a given
additive term with no need to change dummy indices in other additive terms.
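The summation convention maps directly onto NumPy's einsum, which sums over any repeated index label. A short sketch (angle and coordinates are arbitrary choices) of x′j = xi ℓij:

```python
import numpy as np

alpha = 0.4
# 2D direction cosines ell[i, j] = cos(x_i, x'_j), as in Fig. 2.1
ell = np.array([[np.cos(alpha), -np.sin(alpha)],
                [np.sin(alpha),  np.cos(alpha)]])
x = np.array([1.0, 2.0])

# x'_j = x_i ell_ij : the repeated (dummy) index i is summed over
xp = np.einsum('i,ij->j', x, ell)

# Identical to the Gibbs form x' = Q^T . x, Eq. (2.12)
assert np.allclose(xp, ell.T @ x)
```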


We define the Kronecker⁵ delta, δij , as

          ⎧ 0,  i ≠ j,
  δij =   ⎨                  (2.21)
          ⎩ 1,  i = j.
This is effectively the identity matrix I:

            ⎛ 1 0 0 ⎞
  δij = I = ⎜ 0 1 0 ⎟ .   (2.22)
            ⎝ 0 0 1 ⎠
Direct substitution proves that what is effectively the law of cosines can be written as
ℓij ℓkj = δik . (2.23)
This is also equivalent to Eq. (2.16), Q · QT = I.

Example 2.1
Show for the two-dimensional system described in Fig. 2.1 that ℓij ℓkj = δik holds.

Expanding for the two-dimensional system, we get


ℓi1 ℓk1 + ℓi2 ℓk2 = δik . (2.24)
First, take i = 1, k = 1. We get then
ℓ11 ℓ11 + ℓ12 ℓ12 = δ11 = 1, (2.25)
cos α cos α + cos(α + π/2) cos(α + π/2) = 1, (2.26)
cos α cos α + (− sin(α))(− sin(α)) = 1, (2.27)
cos2 α + sin2 α = 1. (2.28)
This is obviously true. Next, take i = 1, k = 2. We get then
ℓ11 ℓ21 + ℓ12 ℓ22 = δ12 = 0, (2.29)
cos α cos(π/2 − α) + cos(α + π/2) cos(α) = 0, (2.30)
cos α sin α − sin α cos α = 0. (2.31)
This is obviously true. Next, take i = 2, k = 1. We get then
ℓ21 ℓ11 + ℓ22 ℓ12 = δ21 = 0, (2.32)
cos(π/2 − α) cos α + cos α cos(π/2 + α) = 0, (2.33)
sin α cos α + cos α(− sin α) = 0. (2.34)
This is obviously true. Next, take i = 2, k = 2. We get then
ℓ21 ℓ21 + ℓ22 ℓ22 = δ22 = 1, (2.35)
cos(π/2 − α) cos(π/2 − α) + cos α cos α = 1, (2.36)
sin α sin α + cos α cos α = 1. (2.37)
Again, this is obviously true.
⁵ Leopold Kronecker, 1823-1891, German mathematician, critic of set theory, who stated “God made the
integers; all else is the work of man.”


Using this, we can easily find the inverse transformation back to the unprimed coordinates
via the following operations:

  ℓkj x′j = ℓkj xi ℓij ,   (2.38)
          = ℓij ℓkj xi ,   (2.39)
          = δik xi ,       (2.40)
  ℓkj x′j = xk ,           (2.41)
  ℓij x′j = xi ,           (2.42)
  xi = ℓij x′j .           (2.43)

The Kronecker delta is also known as the substitution tensor as it has the property that
application of it to a vector simply substitutes one index for another:

xk = δki xi . (2.44)
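A quick numerical round trip (with an arbitrary angle and point) confirms that xi = ℓij x′j inverts x′j = xi ℓij:

```python
import numpy as np

alpha = 0.7
ell = np.array([[np.cos(alpha), -np.sin(alpha)],   # direction cosines
                [np.sin(alpha),  np.cos(alpha)]])
x = np.array([3.0, -1.0])

xp = np.einsum('i,ij->j', x, ell)       # forward:  x'_j = x_i ell_ij
x_back = np.einsum('ij,j->i', ell, xp)  # inverse:  x_i  = ell_ij x'_j

assert np.allclose(x_back, x)  # the unprimed coordinates are recovered
```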

For students familiar with linear algebra, it is easy to show that the matrix of direction
cosines, ℓij , is a rotation matrix. Each of its columns is a vector that is orthogonal to the
other column vectors. Additionally, each column vector has unit magnitude. Such a matrix has a
Euclidean norm of unity, and three eigenvalues that have magnitude of unity. Its determinant
is +1, which renders it a rotation; in contrast, a reflection matrix would have a determinant of
−1. Operation of a rotation matrix on a vector rotates it, but does not stretch it.
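These properties, unit-magnitude eigenvalues, determinant +1, and no stretching, can likewise be checked with a short numerical sketch:

```python
import numpy as np

alpha = 1.1
Q = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])

# Every eigenvalue of a rotation matrix has magnitude unity
assert np.allclose(np.abs(np.linalg.eigvals(Q)), 1.0)
# det Q = +1 for a rotation; a reflection would give -1
assert np.isclose(np.linalg.det(Q), 1.0)
# Rotation preserves length: the vector is rotated, not stretched
v = np.array([2.0, -5.0])
assert np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v))
```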

2.1.3 Vectors
Three scalar quantities vi where i = 1, 2, 3 are scalar components of a vector if they transform
according to the following rule
vj′ = vi ℓij , (2.45)
under a rotation of axes characterized by direction cosines ℓij . In Gibbs notation, we would
say
v′T = vT · Q, (2.46)
or alternatively
v′ = QT · v. (2.47)
We can also say that a vector associates a scalar with a chosen direction in space by an
expression that is linear in the direction cosines of the chosen direction.

Example 2.2
Consider the set of scalars that describe the velocity in a two-dimensional Cartesian system:

         ⎛ vx ⎞
  vi =   ⎝ vy ⎠ ,   (2.48)


where we return to the typical x, y coordinate system. Determine if vi is a vector.

In a rotated coordinate system, using the same notation of Fig. 2.1, we find that

vx′ = vx cos α + vy cos(π/2 − α) = vx cos α + vy sin α, (2.49)


vy′ = vx cos(π/2 + α) + vy cos α = −vx sin α + vy cos α. (2.50)

This is linear in the direction cosines, and satisfies the definition for a vector.

Example 2.3
Do two arbitrary scalars, say the quotient of pressure and density and the product of specific heat
and temperature, (p/ρ, cv T )T , form a vector?

If this quantity is a vector, then we can say

         ⎛ p/ρ  ⎞
  vi =   ⎝ cv T ⎠ .   (2.51)

This pair of numbers has an obvious physical meaning in our unrotated coordinate system. If the
system were a calorically perfect ideal gas (CPIG), the first component would represent the difference
between the enthalpy and the internal energy, and the second component would represent the internal
energy. And if we rotate through an angle α, we arrive at a transformed quantity of
  v1′ = (p/ρ) cos α + cv T cos(π/2 − α),   (2.52)
  v2′ = (p/ρ) cos(π/2 + α) + cv T cos α.   (2.53)
This quantity does not have any known physical significance, and so it seems that these quantities do
not form a vector.

We have the following vector algebra

• Addition

– wi = ui + vi (Cartesian index notation)


– w = u + v (Gibbs notation)

• Dot product (inner product)

– ui vi = b (Cartesian index notation)


– uT · v = b (Gibbs notation)
– both notations require u1 v1 + u2 v2 + u3 v3 = b.


While ui and vi have scalar components that change under a rotation of axes, their in-
ner product (or dot product) is a true scalar and is invariant under a rotation of axes.

Example 2.4
Demonstrate invariance of the dot product uT · v = b by subjecting vectors u and v to a rotation.

Under rotation, our vectors transform as u′ = QT · u, v′ = QT · v. Thus Q · u′ = Q · QT · u = u.


and Q · v′ = Q · QT · v = v. Then consider the dot product

  uT · v = b,                  (2.54)
  (Q · u′ )T · (Q · v′ ) = b,  (2.55)
  u′T · QT · Q · v′ = b,       (2.56)
  u′T · I · v′ = b,            (2.57)
  u′T · v′ = b.                (2.58)

The inner product is invariant under rotation.

Here we have in the Gibbs notation explicitly noted that the transpose is part of the
inner product. Most authors in fact assume the inner product of two vectors implies the
transpose and do not write it explicitly, writing the inner product simply as u · v ≡ uT · v.
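The invariance shown in Example 2.4 can be checked for a random three-dimensional rotation; here a random orthogonal Q is generated via QR decomposition (an implementation convenience, not a construction from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(3)
v = rng.standard_normal(3)

# Random orthogonal matrix via QR; flip one column if needed so det Q = +1
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1.0

up = Q.T @ u   # u' = Q^T . u
vp = Q.T @ v   # v' = Q^T . v

# u'^T . v' = u^T . v : the inner product is a true scalar
assert np.isclose(up @ vp, u @ v)
```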

2.1.4 Tensors
2.1.4.1 Definition
A second order tensor, or a rank two tensor, is a set of nine scalar components that transform
under a rotation of axes according to the following rule:

Tij′ = ℓki ℓlj Tkl . (2.59)

We could also write this in an expanded form as

          3   3                 3   3
  Tij′ =  Σ   Σ  ℓki ℓlj Tkl =  Σ   Σ  ℓTik Tkl ℓlj .   (2.60)
         k=1 l=1               k=1 l=1

In these expressions, i and j are both free indices, while k and l are dummy indices. The
notation ℓTik is unusual and rarely used; it does, however, allow us to see the correspondence to Gibbs
notation. The Gibbs notation for this transformation is easily shown to be

T′ = QT · T · Q. (2.61)

Analogously to our conclusion for a vector, we say that a tensor associates a vector with
each direction in space by an expression that is linear in the direction cosines of the chosen


direction. For a given tensor Tij , the first subscript is associated with the face of a unit cube
(hence the mnemonic device, first-face); the second subscript is associated with the vector
components for the vector on that face.
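The index form Tij′ = ℓki ℓlj Tkl and the Gibbs form T′ = QT · T · Q can be compared directly; a sketch with a random tensor and a random rotation (again built via QR as a convenience):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))   # an arbitrary rank two tensor

Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random rotation
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1.0

# Index form: T'_ij = ell_ki ell_lj T_kl  (k and l are dummy indices)
Tp_index = np.einsum('ki,lj,kl->ij', Q, Q, T)
# Gibbs form: T' = Q^T . T . Q, Eq. (2.61)
Tp_gibbs = Q.T @ T @ Q

assert np.allclose(Tp_index, Tp_gibbs)
```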
Tensors can also be expressed as matrices. All rank two tensors are two-dimensional
matrices, but not all matrices are rank two tensors, as they do not necessarily satisfy the
transformation rules. We can say

        ⎛ T11 T12 T13 ⎞
  Tij = ⎜ T21 T22 T23 ⎟ .   (2.62)
        ⎝ T31 T32 T33 ⎠

The first row vector, ( T11 T12 T13 ), is the vector associated with the 1 face. The second
row vector, ( T21 T22 T23 ), is the vector associated with the 2 face. The third row vector,
( T31 T32 T33 ), is the vector associated with the 3 face.

Example 2.5
Consider how the equation A · x = b transforms under rotation.

Using

A′ = QT · A · Q, (2.63)
x′ = QT · x, (2.64)
b′ = QT · b, (2.65)

we see that by pre-multiplying all equations by Q, and post-multiplying the tensor equation by QT that

A = Q · A′ · QT , (2.66)
x = Q · x′ , (2.67)
b = Q · b′ , (2.68)

giving us

  Q · A′ · QT · Q · x′ = Q · b′ ,     (2.69)
  Q · A′ · x′ = Q · b′ ,              (2.70)
  QT · Q · A′ · x′ = QT · Q · b′ ,    (2.71)
  A′ · x′ = b′ .                      (2.72)

Obviously, the form is invariant under rotation.

We also have the following items associated with tensors.


2.1.4.2 Alternating symbol


The alternating symbol, εijk , will soon be seen to be useful, especially when we introduce
the vector cross product. It is defined as follows:

          ⎧  1  if ijk = 123, 231, or 312,
  εijk =  ⎨  0  if any two indices are identical,   (2.73)
          ⎩ −1  if ijk = 321, 213, or 132.

Another way to remember this is to start with the sequence 123, which is positive. A cyclic
permutation, say from 123 to 231, retains the positive value. A trade of two indices, say from
123 to 213, gives a negative value.
An identity that will be used extensively,

  εijk εilm = δjl δkm − δjm δkl ,   (2.74)

can be proved a number of ways, including tedious direct substitution for all values of
i, j, k, l, m.
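Rather than hand substitution, the identity can be verified by brute force over all index values; a sketch that builds εijk from permutation signs:

```python
import numpy as np
from itertools import permutations

# Alternating symbol eps[i, j, k], built from the sign of the permutation
eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    eps[i, j, k] = np.sign((j - i) * (k - i) * (k - j))

delta = np.eye(3)  # Kronecker delta

# eps_ijk eps_ilm = delta_jl delta_km - delta_jm delta_kl, Eq. (2.74)
lhs = np.einsum('ijk,ilm->jklm', eps, eps)
rhs = (np.einsum('jl,km->jklm', delta, delta)
       - np.einsum('jm,kl->jklm', delta, delta))

assert np.allclose(lhs, rhs)  # holds for all 81 (j, k, l, m) combinations
```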

2.1.4.3 Some secondary definitions


2.1.4.3.1 Transpose The transpose of a second rank tensor, denoted by a superscript
T , is found by exchanging elements about the diagonal. In shorthand index notation, this is
simply
(Tij )T = Tji . (2.75)
Written out in full, if

        ⎛ T11 T12 T13 ⎞
  Tij = ⎜ T21 T22 T23 ⎟ ,   (2.76)
        ⎝ T31 T32 T33 ⎠

then

                 ⎛ T11 T21 T31 ⎞
  (Tij )T = Tji = ⎜ T12 T22 T32 ⎟ .   (2.77)
                 ⎝ T13 T23 T33 ⎠

2.1.4.3.2 Symmetric A tensor Dij is symmetric iff

Dij = Dji , (2.78)


D = DT . (2.79)

A symmetric tensor has only six independent scalars. We will reserve D for tensors that are
symmetric. We will see that D is associated with the deformation of a fluid element.


2.1.4.3.3 Anti-symmetric A tensor Rij is anti-symmetric iff

Rij = −Rji , (2.80)


R = −RT . (2.81)

An anti-symmetric tensor must have zeroes on its diagonal and only three independent
scalars among its off-diagonal elements. We will reserve R for tensors that are anti-symmetric. We
will see that R is associated with the rotation of a fluid element. But R is not a rotation
matrix.

2.1.4.3.4 Decomposition An arbitrary tensor Tij can be separated into a symmetric
and an anti-symmetric pair of tensors:

  Tij = (1/2) Tij + (1/2) Tij + (1/2) Tji − (1/2) Tji .   (2.82)

Rearranging, we get

  Tij = (1/2) (Tij + Tji ) + (1/2) (Tij − Tji ) .   (2.83)
              symmetric            anti-symmetric

The first term must be symmetric, and the second term must be anti-symmetric. This is
easily seen by considering applying this to any matrix of actual numbers. If we define the
symmetric part of the matrix Tij by the following notation

1
T(ij) = (Tij + Tji ) , (2.84)
2
and the anti-symmetric part of the same matrix by the following notation

1
T[ij] = (Tij − Tji ) , (2.85)
2
we then have
Tij = T(ij) + T[ij] . (2.86)
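A sketch of the decomposition for an arbitrary matrix of numbers (a random 3 × 3 here) shows that the parts are indeed symmetric and anti-symmetric and that they recover Tij:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 3))   # an arbitrary tensor T_ij

D = 0.5 * (T + T.T)   # symmetric part      T_(ij), Eq. (2.84)
R = 0.5 * (T - T.T)   # anti-symmetric part T_[ij], Eq. (2.85)

assert np.allclose(D, D.T)     # D_ij =  D_ji
assert np.allclose(R, -R.T)    # R_ij = -R_ji
assert np.allclose(D + R, T)   # T_ij = T_(ij) + T_[ij], Eq. (2.86)
```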

2.1.4.4 Tensor inner product


The tensor inner product of two tensors Tij and Sji is defined as follows

Tij Sji = a, (2.87)

where a is a scalar. In Gibbs notation, we would say

T : S = a. (2.88)


It is easily shown, and will be important in upcoming derivations, that the tensor inner
product of any symmetric tensor D with any anti-symmetric tensor R is the scalar zero:

Dij Rji = 0, (2.89)


D : R = 0. (2.90)

Example 2.6
For all 2 × 2 matrices, prove the tensor inner product of general symmetric and anti-symmetric
tensors is zero.

Take

      ⎛ a  b ⎞        ⎛  0  d ⎞
  D = ⎝ b  c ⎠ ,  R = ⎝ −d  0 ⎠ .   (2.91)

By definition then

  D : R = Dij Rji = D11 R11 + D12 R21 + D21 R12 + D22 R22 ,   (2.92)
        = a(0) + b(−d) + bd + c(0),                           (2.93)
        = 0. QED.                                             (2.94)

The theorem is proved.⁶ The proof can be extended to arbitrary square matrices.
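The extension to arbitrary square matrices is easy to check numerically, say for a random 4 × 4 pair:

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((4, 4))   # any square size works
D = 0.5 * (T + T.T)               # symmetric part
R = 0.5 * (T - T.T)               # anti-symmetric part

# Tensor inner product D : R = D_ij R_ji
inner = np.einsum('ij,ji->', D, R)
assert np.isclose(inner, 0.0)
```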

Further, if we decompose a tensor into its symmetric and anti-symmetric parts, Tij =
T(ij) + T[ij] and take T(ij) = Dij = D and T[ij] = Rij = R, so that T = D + R, we note the
following common term can be expressed as a tensor inner product with a dyadic product:

xi Tij xj = xT · T · x, (2.95)
xi (T(ij) + T[ij] )xj = xT · (D + R) · x, (2.96)
xi T(ij) xj = xT · D · x, (2.97)
T(ij) xi xj = D : xxT . (2.98)

2.1.4.5 Dual vector of a tensor


We define the dual vector, di , of a tensor Tjk as follows⁷

  di = (1/2) εijk Tjk = (1/2) εijk T(jk) + (1/2) εijk T[jk] = (1/2) εijk T[jk] .   (2.99)

The symmetric part contributes nothing, εijk T(jk) = 0, because εijk is anti-symmetric in j and k while T(jk) is symmetric.

⁶ The common abbreviation QED at the end of the proof stands for the Latin quod erat demonstrandum,
“that which was to be demonstrated.”
⁷ There is a lack of uniformity in the literature in this area. First, note this definition differs from that
given by Panton (2013) by a factor of 1/2. It is closer, but not identical, to the approach found in Aris
(1962), p. 25.
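Consistent with Eq. (2.99), only the anti-symmetric part of Tjk contributes to the dual vector; a sketch with a random tensor:

```python
import numpy as np
from itertools import permutations

# Alternating symbol eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    eps[i, j, k] = np.sign((j - i) * (k - i) * (k - j))

rng = np.random.default_rng(4)
T = rng.standard_normal((3, 3))
R = 0.5 * (T - T.T)   # anti-symmetric part T_[jk]

# d_i = (1/2) eps_ijk T_jk ; the symmetric part T_(jk) drops out
d_full = 0.5 * np.einsum('ijk,jk->i', eps, T)
d_anti = 0.5 * np.einsum('ijk,jk->i', eps, R)

assert np.allclose(d_full, d_anti)
```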
