a representation of a as a 3 × 1 column vector. We adopt the standard that all vectors can
be thought of as column vectors. Often in matrix operations, we will need row vectors.
They will be formed by taking the transpose, indicated by a superscript T , of a column
vector. In the interest of clarity, full consistency with notions from matrix algebra, as well
as transparent translation to the conventions of necessarily meticulous (as well as popular)
software tools such as MATLAB, we will scrupulously use the transpose notation. This comes
at the expense of a more cluttered set of equations at times. We also note that most authors
do not explicitly use the transpose notation, but its use is implicit.
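This convention carries over directly to array-based software. Below is a minimal sketch in Python/NumPy (standing in for the MATLAB conventions mentioned above; the values are illustrative only):

```python
import numpy as np

# Under our standard, a vector such as a is a 3 x 1 column;
# a row vector arises only by taking the transpose.
a = np.array([[1.0], [2.0], [3.0]])  # column vector, shape (3, 1)
aT = a.T                             # row vector a^T, shape (1, 3)
print(a.shape, aT.shape)             # (3, 1) (1, 3)
```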
[Figure 2.1: Rotation of axes in two dimensions. A point P has coordinates (x∗1, x∗2) relative to the unrotated axes (x1, x2); the rotated axes (x′1, x′2) are inclined at angle α, with β the complementary angle, so that x′1 = x∗1 cos α + x∗2 cos β.]
If we use the shorthand notation, for example, that ℓ11 = cos(x1 , x′1 ), ℓ12 = cos(x1 , x′2 ), etc.,
we have
\underbrace{\begin{pmatrix} x_1' & x_2' & x_3' \end{pmatrix}}_{\mathbf{x}'^T}
=
\underbrace{\begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}}_{\mathbf{x}^T}
\underbrace{\begin{pmatrix}
\ell_{11} & \ell_{12} & \ell_{13} \\
\ell_{21} & \ell_{22} & \ell_{23} \\
\ell_{31} & \ell_{32} & \ell_{33}
\end{pmatrix}}_{\mathbf{Q}}. \qquad (2.10)
In Gibbs notation, defining the matrix of ℓ’s to be Q, and recalling that all vectors are taken
to be column vectors, we can alternatively say
x′T = xT · Q. (2.11)
Taking the transpose of both sides and recalling the useful identities that (A · b)T = bT · AT
and (AT )T = A, we can also say
x′ = QT · x. (2.12)
We call Q = ℓij the matrix of direction cosines and QT = ℓji the rotation matrix. It can be
shown that coordinate systems that satisfy the right hand rule require further that
det Q = 1. (2.13)
Another way to think of the matrix of direction cosines ℓij = Q is as a matrix
of orthonormal basis vectors in its columns:
\ell_{ij} = \mathbf{Q} =
\begin{pmatrix}
\vdots & \vdots & \vdots \\
\mathbf{n}^{(1)} & \mathbf{n}^{(2)} & \mathbf{n}^{(3)} \\
\vdots & \vdots & \vdots
\end{pmatrix}. \qquad (2.14)
In a result that is both remarkable and important, it can be shown that the transpose of
an orthogonal matrix is its inverse:
QT = Q−1 . (2.15)
Thus, we have
Q · QT = QT · Q = I. (2.16)
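These properties are readily confirmed numerically. The following is a minimal Python/NumPy sketch, assuming the two-dimensional rotation of Fig. 2.1 with an arbitrarily chosen angle α:

```python
import numpy as np

alpha = 0.3  # arbitrary rotation angle for this sketch
# Matrix of direction cosines l_ij = cos(x_i, x'_j) for the 2D rotation of Fig. 2.1
Q = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])

print(np.allclose(Q.T @ Q, np.eye(2)))     # Q^T . Q = I, Eq. (2.16)
print(np.allclose(Q @ Q.T, np.eye(2)))     # Q . Q^T = I, Eq. (2.16)
print(np.allclose(Q.T, np.linalg.inv(Q)))  # Q^T = Q^{-1}, Eq. (2.15)
print(np.isclose(np.linalg.det(Q), 1.0))   # det Q = 1, Eq. (2.13)
```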
The equation x′T = xT · Q is really a set of three linear equations. For instance, the first
is
x′1 = x1 ℓ11 + x2 ℓ21 + x3 ℓ31 . (2.17)
More generally, we could say that
x_j' = x_i \ell_{ij}.
Here j is a so-called “free index,” which for three-dimensional space takes on the values j = 1, 2, 3.
Some rules of thumb for free indices are
• One free index (e.g. k) may replace another (e.g. j) as long as it is replaced in each
additive term.
We again note that it is to be understood that whenever an index is repeated, as the index i
is here, a summation from i = 1 to i = 3 is to be performed, and that i is the
“dummy index.” Some rules of thumb for dummy indices, illustrated in the sketch that follows this list, are
• a pair of dummy indices, say i, i, can be exchanged for another, say j, j, in a given
additive term with no need to change dummy indices in other additive terms.
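These index rules map directly onto NumPy's einsum, in which repeated subscript letters are summed (dummy) and surviving letters are free. A minimal sketch, rebuilding the two-dimensional Q so the snippet is self-contained:

```python
import numpy as np

alpha = 0.3
Q = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])
x = np.array([1.0, 2.0])

# x'_j = x_i l_ij : i is the dummy (summed) index, j the free index
xp1 = np.einsum('i,ij->j', x, Q)
# Exchanging the dummy index pair (i -> k) cannot change the result
xp2 = np.einsum('k,kj->j', x, Q)
print(np.allclose(xp1, xp2), np.allclose(xp1, Q.T @ x))  # True True
```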
Example 2.1
Show for the two-dimensional system described in Fig. 2.1 that ℓij ℓkj = δik holds.
Using this, we can easily find the inverse transformation back to the unprimed coordinates
via the following operations: multiplying both sides of x_j' = x_i \ell_{ij} by \ell_{kj} and applying the identity gives
\ell_{kj} x_j' = \ell_{kj} \ell_{ij} x_i = \delta_{ki} x_i = x_k,
so the inverse transformation is x_k = \ell_{kj} x_j'.
The Kronecker delta is also known as the substitution tensor as it has the property that
application of it to a vector simply substitutes one index for another:
xk = δki xi . (2.44)
For students familiar with linear algebra, it is easy to show that the matrix of direction
cosines, ℓij , is a rotation matrix. Each of its columns is a vector that is orthogonal to the
other column vectors. Additionally, each column vector is itself normalized to unit length. Such a matrix has a
Euclidean norm of unity and three eigenvalues whose magnitudes are unity. Its determinant
is +1, which renders it a rotation; in contrast, a reflection matrix would have a determinant of
−1. Operation of a rotation matrix on a vector rotates it, but does not stretch it.
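A short NumPy sketch confirming these linear algebraic properties for the two-dimensional Q of Fig. 2.1 (the angle and vector are arbitrary illustrative choices):

```python
import numpy as np

alpha = 0.3
Q = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])

print(np.allclose(np.abs(np.linalg.eigvals(Q)), 1.0))  # eigenvalue magnitudes are unity
print(np.isclose(np.linalg.det(Q), 1.0))               # det = +1: a rotation, not a reflection

x = np.array([3.0, 4.0])
# Rotation changes the direction of x but does not stretch it
print(np.isclose(np.linalg.norm(Q.T @ x), np.linalg.norm(x)))  # True
```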
2.1.3 Vectors
Three scalar quantities vi where i = 1, 2, 3 are scalar components of a vector if they transform
according to the following rule
vj′ = vi ℓij , (2.45)
under a rotation of axes characterized by direction cosines ℓij . In Gibbs notation, we would
say
v′T = vT · Q, (2.46)
or alternatively
v′ = QT · v. (2.47)
We can also say that a vector associates a scalar with a chosen direction in space by an
expression that is linear in the direction cosines of the chosen direction.
Example 2.2
Consider the set of scalars that describe the velocity in a two-dimensional Cartesian system:
v_i = \begin{pmatrix} v_x \\ v_y \end{pmatrix}. \qquad (2.48)
In a rotated coordinate system, using the same notation of Fig. 2.1, we find that
v_1' = v_x \cos\alpha + v_y \cos(\pi/2 - \alpha),
v_2' = v_x \cos(\pi/2 + \alpha) + v_y \cos\alpha.
This is linear in the direction cosines, and satisfies the definition for a vector.
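One can check numerically that this transformation is exactly v′ = QT · v. A minimal sketch with illustrative values:

```python
import numpy as np

alpha = 0.3
vx, vy = 1.0, 2.0
# Transformed components written directly in terms of direction cosines
v1p = vx * np.cos(alpha) + vy * np.cos(np.pi / 2 - alpha)
v2p = vx * np.cos(np.pi / 2 + alpha) + vy * np.cos(alpha)

Q = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])
print(np.allclose([v1p, v2p], Q.T @ np.array([vx, vy])))  # True: v' = Q^T . v
```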
Example 2.3
Do two arbitrary scalars, say the quotient of pressure and density and the product of specific heat
and temperature, (p/ρ, cv T )T , form a vector?
This pair of numbers has an obvious physical meaning in our unrotated coordinate system. If the
system were a calorically perfect ideal gas (CPIG), the first component would represent the difference
between the enthalpy and the internal energy, and the second component would represent the internal
energy. And if we rotate through an angle α, we arrive at a transformed quantity of
v_1' = \frac{p}{\rho} \cos\alpha + c_v T \cos(\pi/2 - \alpha), \qquad (2.52)
v_2' = \frac{p}{\rho} \cos(\pi/2 + \alpha) + c_v T \cos\alpha. \qquad (2.53)
This quantity does not have any known physical significance, and so it seems that these quantities do
not form a vector.
• Addition
While ui and vi have scalar components that change under a rotation of axes, their in-
ner product (or dot product) is a true scalar and is invariant under a rotation of axes.
Example 2.4
Demonstrate invariance of the dot product uT · v = b by subjecting vectors u and v to a rotation.
\mathbf{u}^T \cdot \mathbf{v} = b, \qquad (2.54)
(\mathbf{Q} \cdot \mathbf{u}')^T \cdot (\mathbf{Q} \cdot \mathbf{v}') = b, \qquad (2.55)
\mathbf{u}'^T \cdot \underbrace{\mathbf{Q}^T \cdot \mathbf{Q}}_{=\,\mathbf{I}} \cdot \mathbf{v}' = b, \qquad (2.56)
\mathbf{u}'^T \cdot \mathbf{I} \cdot \mathbf{v}' = b, \qquad (2.57)
\mathbf{u}'^T \cdot \mathbf{v}' = b. \qquad (2.58)
Here we have in the Gibbs notation explicitly noted that the transpose is part of the
inner product. Most authors in fact assume the inner product of two vectors implies the
transpose and do not write it explicitly, writing the inner product simply as u · v ≡ uT · v.
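A quick numerical check of this invariance, using a rotation matrix generated at random via QR factorization (an assumed construction, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)

# Orthogonal Q from QR; flip one column if needed so det Q = +1 (a rotation)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]

up, vp = Q.T @ u, Q.T @ v          # rotated components, Eq. (2.47)
print(np.isclose(u @ v, up @ vp))  # True: the dot product is invariant
```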
2.1.4 Tensors
2.1.4.1 Definition
A second order tensor, or a rank two tensor, is nine scalar components that under a rotation
of axes transform according to the following rule:
T_{ij}' = \ell_{ki} \ell_{lj} T_{kl} = \ell_{ik}^T T_{kl} \ell_{lj}.
In these expressions, i and j are both free indices, while k and l are dummy indices. The
notation \ell_{ik}^T is unusual and rarely used. It does allow us to see the correspondence to Gibbs
notation. The Gibbs notation for this transformation is easily shown to be
T′ = QT · T · Q. (2.61)
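The equivalence of the index form and the Gibbs form can be verified numerically. A minimal sketch with a randomly generated tensor and rotation (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))  # components of an arbitrary second order tensor
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]           # enforce det Q = +1

Tp_gibbs = Q.T @ T @ Q                         # Eq. (2.61)
Tp_index = np.einsum('ki,lj,kl->ij', Q, Q, T)  # T'_ij = l_ki l_lj T_kl
print(np.allclose(Tp_gibbs, Tp_index))         # True
```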
Analogously to our conclusion for a vector, we say that a tensor associates a vector with
each direction in space by an expression that is linear in the direction cosines of the chosen
direction. For a given tensor Tij , the first subscript is associated with the face of a unit cube
(hence the mnemonic device, first-face); the second subscript is associated with the vector
components for the vector on that face.
Tensors can also be expressed as matrices. All rank two tensors are two-dimensional
matrices, but not all matrices are rank two tensors, as they do not necessarily satisfy the
transformation rules. We can say
T_{ij} = \begin{pmatrix}
T_{11} & T_{12} & T_{13} \\
T_{21} & T_{22} & T_{23} \\
T_{31} & T_{32} & T_{33}
\end{pmatrix}. \qquad (2.62)
The first row vector, ( T11 T12 T13 ), is the vector associated with the 1 face. The second
row vector, ( T21 T22 T23 ), is the vector associated with the 2 face. The third row vector,
( T31 T32 T33 ), is the vector associated with the 3 face.
Example 2.5
Consider how the equation A · x = b transforms under rotation.
Using
A′ = QT · A · Q, (2.63)
x′ = QT · x, (2.64)
b′ = QT · b, (2.65)
we see, by pre-multiplying each equation by Q and post-multiplying the tensor equation by QT, that
A = Q · A′ · QT , (2.66)
x = Q · x′ , (2.67)
b = Q · b′ , (2.68)
giving us
\underbrace{\mathbf{Q} \cdot \mathbf{A}' \cdot \mathbf{Q}^T}_{\mathbf{A}} \cdot \underbrace{\mathbf{Q} \cdot \mathbf{x}'}_{\mathbf{x}} = \underbrace{\mathbf{Q} \cdot \mathbf{b}'}_{\mathbf{b}}, \qquad (2.69)
\mathbf{Q} \cdot \mathbf{A}' \cdot \mathbf{x}' = \mathbf{Q} \cdot \mathbf{b}', \qquad (2.70)
\mathbf{Q}^T \cdot \mathbf{Q} \cdot \mathbf{A}' \cdot \mathbf{x}' = \mathbf{Q}^T \cdot \mathbf{Q} \cdot \mathbf{b}', \qquad (2.71)
\mathbf{A}' \cdot \mathbf{x}' = \mathbf{b}'. \qquad (2.72)
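The form invariance demonstrated above is easily confirmed numerically. A minimal sketch with randomly generated A, x, and Q (values illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
b = A @ x                        # so that A . x = b holds by construction

Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]           # enforce det Q = +1

Ap, xp, bp = Q.T @ A @ Q, Q.T @ x, Q.T @ b  # Eqs. (2.63)-(2.65)
print(np.allclose(Ap @ xp, bp))             # True: A' . x' = b', Eq. (2.72)
```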
Another way to remember this is to start with the sequence 123, which is positive. A sequential
(cyclic) permutation, say from 123 to 231, retains the positive value. A trade of two indices, say from 123 to 213,
gives a negative value.
An identity that will be used extensively,
\epsilon_{ijk} \epsilon_{ilm} = \delta_{jl} \delta_{km} - \delta_{jm} \delta_{kl},
can be proved a number of ways, including tedious direct substitution for all values of
i, j, k, l, m.
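That tedious substitution is well suited to a computer. A minimal NumPy sketch that checks the identity over all 81 combinations of the free indices j, k, l, m:

```python
import numpy as np

# Levi-Civita symbol: +1 for cyclic permutations of 123, -1 after a single trade
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

d = np.eye(3)  # Kronecker delta
# eps_ijk eps_ilm, summed over the dummy index i
lhs = np.einsum('ijk,ilm->jklm', eps, eps)
rhs = np.einsum('jl,km->jklm', d, d) - np.einsum('jm,kl->jklm', d, d)
print(np.allclose(lhs, rhs))  # True for all 81 (j, k, l, m) combinations
```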
A symmetric tensor has only six independent scalars. We will reserve D for tensors that are
symmetric. We will see that D is associated with the deformation of a fluid element.
An anti-symmetric tensor must have zeroes on its diagonal and only three independent
scalars among its off-diagonal elements. We will reserve R for tensors that are anti-symmetric. We
will see that R is associated with the rotation of a fluid element. But R is not a rotation
matrix.
T_{ij} = \frac{1}{2} T_{ij} + \frac{1}{2} T_{ij} + \frac{1}{2} T_{ji} - \frac{1}{2} T_{ji}. \qquad (2.82)
Rearranging, we get
T_{ij} = \underbrace{\frac{1}{2} \left( T_{ij} + T_{ji} \right)}_{\text{symmetric}} + \underbrace{\frac{1}{2} \left( T_{ij} - T_{ji} \right)}_{\text{anti-symmetric}}. \qquad (2.83)
The first term must be symmetric, and the second term must be anti-symmetric. This is
easily seen by applying these operations to any matrix of actual numbers. If we define the
symmetric part of the matrix Tij by the following notation
T_{(ij)} = \frac{1}{2} \left( T_{ij} + T_{ji} \right), \qquad (2.84)
and the anti-symmetric part of the same matrix by the following notation
T_{[ij]} = \frac{1}{2} \left( T_{ij} - T_{ji} \right), \qquad (2.85)
we then have
Tij = T(ij) + T[ij] . (2.86)
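A brief NumPy sketch of the decomposition, with a random matrix standing in for Tij:

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((3, 3))

Tsym = 0.5 * (T + T.T)    # T_(ij), Eq. (2.84)
Tanti = 0.5 * (T - T.T)   # T_[ij], Eq. (2.85)

print(np.allclose(T, Tsym + Tanti))   # Eq. (2.86) is recovered
print(np.allclose(Tsym, Tsym.T))      # the first part is symmetric
print(np.allclose(Tanti, -Tanti.T))   # the second is anti-symmetric (zero diagonal)
```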
The tensor inner product of two tensors T_{ij} and S_{ij} is defined as T_{ij} S_{ji}; when it yields a scalar a, we write in Gibbs notation⁷
\mathbf{T} : \mathbf{S} = a. \qquad (2.88)
It is easily shown, and will be important in upcoming derivations, that the tensor inner
product of any symmetric tensor D with any anti-symmetric tensor R is the scalar zero:
\mathbf{D} : \mathbf{R} = 0.
Example 2.6
For all 2 × 2 matrices, prove the tensor inner product of general symmetric and anti-symmetric
tensors is zero.
Take
\mathbf{D} = \begin{pmatrix} a & b \\ b & c \end{pmatrix}, \qquad \mathbf{R} = \begin{pmatrix} 0 & d \\ -d & 0 \end{pmatrix}. \qquad (2.91)
By definition then
D : R = Dij Rji = D11 R11 + D12 R21 + D21 R12 + D22 R22 , (2.92)
= a(0) + b(−d) + bd + c(0), (2.93)
= 0. QED. (2.94)
The theorem is proved.⁶ The proof can be extended to arbitrary square matrices.
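A numerical spot check of that extension, for arbitrary square matrices; the size n = 5 is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
D = 0.5 * (A + A.T)   # a general symmetric matrix
R = 0.5 * (B - B.T)   # a general anti-symmetric matrix

# Tensor inner product D : R = D_ij R_ji
print(np.isclose(np.einsum('ij,ji->', D, R), 0.0))  # True
```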
Further, if we decompose a tensor into its symmetric and anti-symmetric parts, Tij =
T(ij) + T[ij] and take T(ij) = Dij = D and T[ij] = Rij = R, so that T = D + R, we note the
following common term can be expressed as a tensor inner product with a dyadic product:
x_i T_{ij} x_j = \mathbf{x}^T \cdot \mathbf{T} \cdot \mathbf{x}, \qquad (2.95)
x_i (T_{(ij)} + T_{[ij]}) x_j = \mathbf{x}^T \cdot (\mathbf{D} + \mathbf{R}) \cdot \mathbf{x}, \qquad (2.96)
x_i T_{(ij)} x_j = \mathbf{x}^T \cdot \mathbf{D} \cdot \mathbf{x}, \qquad (2.97)
T_{(ij)} x_i x_j = \mathbf{D} : \mathbf{x}\mathbf{x}^T. \qquad (2.98)
The anti-symmetric part drops out in passing from (2.96) to (2.97) because x_i T_{[ij]} x_j pairs the anti-symmetric T_{[ij]} with the symmetric dyad x_i x_j, and such a product is zero by the result just proved.
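A short numerical confirmation of (2.95)-(2.98), with random illustrative values:

```python
import numpy as np

rng = np.random.default_rng(5)
T = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
D = 0.5 * (T + T.T)   # symmetric part of T

# x_i T_ij x_j equals D : x x^T; the anti-symmetric part contributes nothing
lhs = x @ T @ x
rhs = np.einsum('ij,ji->', D, np.outer(x, x))
print(np.isclose(lhs, rhs))  # True
```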
⁶ The common abbreviation QED at the end of the proof stands for the Latin quod erat demonstrandum, “that which was to be demonstrated.”
⁷ There is a lack of uniformity in the literature in this area. First, note this definition differs from that given by Panton (2013) by a factor of 1/2. It is closer, but not identical, to the approach found in Aris (1962), p. 25.