
8

Matrices

8.1 Eigenvalues, Eigenvectors: Characteristic Equation of a Matrix
Let A = (aij) be a square matrix of order n. Suppose there is an n-dimensional nonzero column vector X such that the action of A on X (i.e. the matrix product AX) gives a vector which is just a multiple of X, that is,
    AX = λX,
where λ is a scalar. In other words, the transformation represented by the matrix A just multiplies the vector X by a scalar λ. The vector X is then called an eigenvector of the matrix A, and λ is called an eigenvalue of A corresponding to the eigenvector X. The problem of finding the eigenvectors and the eigenvalues of a matrix is called the eigenvalue problem.
Definition of Eigenvector. A nonzero vector X is called an eigenvector of a matrix A if there is a number λ such that AX = λX. Here λ is called an eigenvalue of A corresponding to the eigenvector X, and vice versa.
We have AX = λX = λIX, I being a unit matrix,
or
    (A - λI)X = 0.
This is the matrix form of an eigenvalue problem. Since X ≠ 0, the matrix (A - λI) is singular, so that
    |A - λI| = 0    ...(1)
Equation (1) is called the characteristic equation of A. The eigenvalues are just the roots of the equation obtained by expanding the determinant in Eq. (1). The n roots λ1, λ2, ..., λn of the characteristic equation are not necessarily all different.

Example 1. Determine the eigenvalues and eigenvectors of the matrix

    A = [ 3  1  4 ]
        [ 0  2  6 ]
        [ 0  0  5 ]

Sol. The characteristic equation of A is |A - λI| = 0, i.e.,

    | 3-λ   1    4  |
    |  0   2-λ   6  | = 0,   i.e.,  (λ - 2)(λ - 3)(λ - 5) = 0
    |  0    0   5-λ |

∴ λ1 = 2, λ2 = 3, λ3 = 5.
These are the eigenvalues of A.
To determine the eigenvectors let us consider the eigenvalues one by one.
(i) When λ1 = 2 the eigenvector X1 is given by (A - 2I)X1 = 0, i.e.,

    [ 1  1  4 ] [x1]   [0]
    [ 0  0  6 ] [x2] = [0]
    [ 0  0  3 ] [x3]   [0]

The rank of the coefficient matrix being 2, there is only one linearly independent solution.
These are equivalent to
    x1 + x2 + 4x3 = 0
    6x3 = 0
    3x3 = 0
The last two give x3 = 0. The first one then gives x1 + x2 = 0.
Take x1 = 1, then x2 = -1 and x3 = 0.
Hence
    X1 = c1 (1, -1, 0)ᵀ,  c1 being a scalar.
(ii) When λ2 = 3, the eigenvector X2 is given by (A - 3I)X2 = 0, i.e.,

    [ 0   1  4 ] [x1]   [0]
    [ 0  -1  6 ] [x2] = [0]
    [ 0   0  2 ] [x3]   [0]

These are equivalent to
    x2 + 4x3 = 0
    -x2 + 6x3 = 0
    2x3 = 0
giving x3 = 0, x2 = 0 and x1 arbitrary, say x1 = 1.
Then
    X2 = c2 (1, 0, 0)ᵀ,  c2 being a scalar.

(iii) When λ3 = 5, the eigenvector X3 is given by (A - 5I)X3 = 0, i.e.,

    [ -2   1  4 ] [x1]   [0]
    [  0  -3  6 ] [x2] = [0]
    [  0   0  0 ] [x3]   [0]

These are equivalent to
    -2x1 + x2 + 4x3 = 0
    -3x2 + 6x3 = 0
giving x2 = 2x3 and x1 = 3x3, i.e., 2x1 = 3x2 = 6x3.
Take x3 = 1, so that x2 = 2 and x1 = 3.
Hence
    X3 = c3 (3, 2, 1)ᵀ,  c3 being a scalar.
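The result can be spot-checked numerically. The following is a minimal sketch using NumPy (an added illustration, not part of the original text); numpy.linalg.eig returns unit-norm eigenvectors, so they appear as scalar multiples of the X1, X2, X3 found above.

    import numpy as np

    # Matrix of Example 1
    A = np.array([[3.0, 1.0, 4.0],
                  [0.0, 2.0, 6.0],
                  [0.0, 0.0, 5.0]])

    # Eigenvalues and (column) eigenvectors
    vals, vecs = np.linalg.eig(A)
    print(vals)                       # 2, 3 and 5, in some order

    # Each column of vecs satisfies A x = lambda x (up to rounding)
    for lam, x in zip(vals, vecs.T):
        assert np.allclose(A @ x, lam * x)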

Example 2. Find the eigenvalues and normalised eigenvectors of the matrix

    [ 1  0  0 ]
    [ 0  1  1 ]
    [ 0  1  1 ]

Sol. Let
    A = [ 1  0  0 ]
        [ 0  1  1 ]
        [ 0  1  1 ]

    A - λI = [ 1-λ   0    0  ]
             [  0   1-λ   1  ]
             [  0    1   1-λ ]

The characteristic equation of A is

    |A - λI| = | 1-λ   0    0  |
               |  0   1-λ   1  | = 0
               |  0    1   1-λ |

i.e., (1-λ)[(1-λ)² - 1] = 0
i.e., (1-λ)(λ² - 2λ) = 0
i.e., λ(1-λ)(λ - 2) = 0
i.e., λ = 0, 1, 2.
Thus the eigenvalues of the matrix A are 0, 1, 2.
The eigenvalue equation is
    (A - λI)X = 0    ...(1)
For λ = 0, Eq. (1) reduces to

    [ 1  0  0 ] [x1]   [0]
    [ 0  1  1 ] [x2] = [0]
    [ 0  1  1 ] [x3]   [0]

This is equivalent to the following equations
    x1 = 0
    x2 + x3 = 0
    x2 + x3 = 0
Solving these equations, we get
    x1 = 0,  x2 = -x3 = k (say)
∴   X1 = {x1, x2, x3} = {0, k, -k}
If the eigenvector is normalised to unity, then |X1| = 1, i.e.,
    √(0² + k² + (-k)²) = 1   or   k = 1/√2
∴ Normalised eigenvector
    X1 = { 0, 1/√2, -1/√2 }
For λ = 1, Eq. (1) reduces to

    [ 0  0  0 ] [x1]   [0]
    [ 0  0  1 ] [x2] = [0]
    [ 0  1  0 ] [x3]   [0]

This is equivalent to the following equations
    x3 = 0
    x2 = 0
with x1 arbitrary.
∴   X2 = {1, 0, 0} in normalised form.
For λ = 2, Eq. (1) reduces to

    [ -1   0   0 ] [x1]   [0]
    [  0  -1   1 ] [x2] = [0]
    [  0   1  -1 ] [x3]   [0]

which is equivalent to the following equations
    -x1 = 0
    -x2 + x3 = 0
    x2 - x3 = 0
Solving these equations, we get
    x1 = 0,  x2 = x3
Within an arbitrary scale factor, the eigenvector corresponding to λ = 2 is given by
    X3 = {x1, x2, x3} = {0, k, k}
For X3 normalised to unity,
    √(0² + k² + k²) = 1,   i.e.,  k = 1/√2
∴   X3 = { 0, 1/√2, 1/√2 } in normalised form.
Thus the normalised eigenvectors of the given matrix A corresponding to the eigenvalues 0, 1, 2 are
    { 0, 1/√2, -1/√2 },   { 1, 0, 0 },   { 0, 1/√2, 1/√2 },   respectively.
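As a quick numerical cross-check (an added sketch, not part of the original text), NumPy returns eigenvectors already normalised to unit length, so they should agree with the vectors above up to sign:

    import numpy as np

    A = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 1.0, 1.0]])

    vals, vecs = np.linalg.eig(A)
    order = np.argsort(vals)               # sort so columns correspond to lambda = 0, 1, 2
    for lam, x in zip(vals[order], vecs[:, order].T):
        print(round(lam.real, 6), np.round(x, 6))   # each x has unit norm; sign may differ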
8.2 Cayley-Hamilton Theorem
Statement. Every square matrix satisfies its own characteristic equation.
OR
If
    |A - λI| = a0 + a1λ + a2λ² + ... + anλⁿ = 0
be the characteristic equation of a square matrix A, then
    a0I + a1A + a2A² + ... + anAⁿ = 0  (the null matrix).
Proof. The characteristic polynomial is
    |A - λI| = a0 + a1λ + a2λ² + ... + anλⁿ    ...(1)
Each element of the characteristic matrix (A - λI) is an ordinary polynomial in λ of degree 1 (at most). Therefore the cofactor of every element of |A - λI|, being a determinant of order n - 1, is an ordinary polynomial of degree n - 1 (at most). Consequently each element of
    B = adj (A - λI)    ...(2)
is an ordinary polynomial of degree n - 1 (at most), and we may write
    B = adj (A - λI) = B0 + B1λ + B2λ² + ... + Bn-1 λⁿ⁻¹    ...(3)
Here B0, B1, B2, ..., Bn-1 are all square matrices of the same order n whose elements are polynomials in the elements of A.
Now, (A - λI) adj (A - λI) = |A - λI| I.
Using Eqs. (3) and (1), we get
    (A - λI)[B0 + B1λ + B2λ² + ... + Bn-1 λⁿ⁻¹] = (a0 + a1λ + a2λ² + ... + anλⁿ) I    ...(4)
Comparing the coefficients of like powers of λ on both the sides, we get
    AB0 = a0I
    AB1 - B0 = a1I
    AB2 - B1 = a2I
    ................................
    ABn-1 - Bn-2 = an-1 I
    -Bn-1 = anI
Premultiplying these by I, A, A², A³, ..., Aⁿ in order and adding,
    a0I + a1A + a2A² + ... + anAⁿ = 0
This is the Cayley-Hamilton theorem.
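The theorem is easy to verify numerically. The sketch below (an added illustration, not part of the original text) uses numpy.poly, which returns the coefficients of det(λI - A) with the leading coefficient first, and evaluates that polynomial with matrix powers by Horner's rule for the 3 × 3 matrix of Example 1 below:

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, -1.0, 4.0],
                  [3.0, 1.0, 1.0]])

    c = np.poly(A)                 # coefficients of det(lambda*I - A), highest power first
    P = np.zeros_like(A)           # evaluate the characteristic polynomial at A (Horner's rule)
    for coeff in c:
        P = P @ A + coeff * np.eye(3)

    print(np.allclose(P, 0))       # True: A satisfies its own characteristic equation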
Corollary. To determine A⁻¹ by using the Cayley-Hamilton theorem.
Let A be a non-singular matrix of order n so that |A| ≠ 0.
According to the Cayley-Hamilton theorem,
    a0I + a1A + a2A² + ... + anAⁿ = 0    ...(1)
The characteristic polynomial is
    |A - λI| = a0 + a1λ + a2λ² + ... + anλⁿ    ...(2)
For λ = 0, Eq. (2) gives |A| = a0.  ∴ a0 ≠ 0.
Now dividing Eq. (1) by a0, we get
    I = -( (a1/a0) A + (a2/a0) A² + ... + (an/a0) Aⁿ )    ...(3)
Premultiplying Eq. (3) by A⁻¹, we get
    A⁻¹ = -( (a1/a0) I + (a2/a0) A + ... + (an/a0) Aⁿ⁻¹ )    ...(4)

Example 1. Find the characteristic equation of the matrix

    A = [ 1   2   3 ]
        [ 2  -1   4 ]
        [ 3   1   1 ]

and verify the Cayley-Hamilton theorem for it. Hence or otherwise find A⁻¹.

Sol.
    A - λI = [ 1   2   3 ]     [ 1  0  0 ]   [ 1-λ    2     3  ]
             [ 2  -1   4 ] - λ [ 0  1  0 ] = [  2   -1-λ    4  ]
             [ 3   1   1 ]     [ 0  0  1 ]   [  3     1    1-λ ]

∴   |A - λI| = | 1-λ    2     3  |
               |  2   -1-λ    4  | = -λ³ + λ² + 18λ + 30
               |  3     1    1-λ |

Hence the characteristic equation is
    -λ³ + λ² + 18λ + 30 = 0
Now, in order to verify the Cayley-Hamilton theorem, we have to show that
    -A³ + A² + 18A + 30I = 0
Here
    I = [ 1  0  0 ]        A = [ 1   2   3 ]
        [ 0  1  0 ] ,          [ 2  -1   4 ]
        [ 0  0  1 ]            [ 3   1   1 ]

    A² = A·A = [ 14   3  14 ]        A³ = A²·A = [ 62  39  68 ]
               [ 12   9   6 ] ,                  [ 48  21  78 ]
               [  8   6  14 ]                    [ 62  24  62 ]

Substituting these, together with 18A and 30I, and adding corresponding elements,

    -A³ + A² + 18A + 30I = [ 0  0  0 ]
                           [ 0  0  0 ] = 0
                           [ 0  0  0 ]

Thus the Cayley-Hamilton theorem is verified.
To find A⁻¹: from the Cayley-Hamilton theorem,
    -A³ + A² + 18A + 30I = 0,   i.e.,   30I = A³ - A² - 18A
Premultiplying by A⁻¹,
    30A⁻¹ = A² - A - 18I
∴   A⁻¹ = (1/30)(A² - A - 18I) = (1/30) [ -5   1  11 ]
                                        [ 10  -8   2 ]
                                        [  5   5  -5 ]
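A short numerical check of this example (added here as an illustration; it is not part of the original text):

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, -1.0, 4.0],
                  [3.0, 1.0, 1.0]])
    I = np.eye(3)

    A2 = A @ A
    A3 = A2 @ A
    print(np.allclose(-A3 + A2 + 18 * A + 30 * I, 0))    # True: characteristic equation satisfied
    A_inv = (A2 - A - 18 * I) / 30                        # inverse from the Cayley-Hamilton relation
    print(np.allclose(A_inv, np.linalg.inv(A)))           # True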
Example 2. Illustrate the Cayley-Hamilton theorem for the matrix

    A = [ 1   2  0 ]
        [ 2  -1  0 ]
        [ 0   0  1 ]

    |A - λI| = | 1-λ    2    0  |
               |  2   -1-λ   0  | = -5 + 5λ + λ² - λ³
               |  0     0   1-λ |

Hence the characteristic equation is
    -5 + 5λ + λ² - λ³ = 0
We have to show that -5I + 5A + A² - A³ = 0.
Now,
    -5I + 5A + A² - A³
      = [ -5   0   0 ]   [  5  10  0 ]   [ 5  0  0 ]   [  5  10  0 ]
        [  0  -5   0 ] + [ 10  -5  0 ] + [ 0  5  0 ] - [ 10  -5  0 ]
        [  0   0  -5 ]   [  0   0  5 ]   [ 0  0  1 ]   [  0   0  1 ]
      = [ 0  0  0 ]
        [ 0  0  0 ] = 0
        [ 0  0  0 ]
Thus A satisfies its own characteristic equation.
Theorem. The eigenvalues of a diagonal matrix are just the diagonal elements of the matrix.
Proof. Let A = diag [a11, a22, ..., ann].
Then
    (A - λI) = diag [a11 - λ, a22 - λ, ..., ann - λ]
The characteristic equation |A - λI| = 0 gives
    (a11 - λ)(a22 - λ) ··· (ann - λ) = 0
∴   λ = a11, a22, ..., ann.
a11, a22, ..., ann are the diagonal elements of A. Therefore the eigenvalues of a diagonal matrix are the elements in the diagonal.

8.4 Diagonalization of Matrices
Let λ1, λ2, ..., λn be n distinct eigenvalues of a matrix A and X1, X2, ..., Xn be the n corresponding eigenvectors.
Let Xj be the column vector given by
    Xj = [ x1j ]
         [ x2j ]
         [ ... ]
         [ xnj ]    ...(1)
Consider a matrix P whose column vectors are the n eigenvectors, such that
    P = [ x11  x12 ... x1n ]
        [ x21  x22 ... x2n ]
        [ ...              ]  = [xij]    ...(2)
        [ xn1  xn2 ... xnn ]
Suppose that D is a diagonal matrix such that
    D = [ λ1  0   0  ...  0  ]
        [ 0   λ2  0  ...  0  ]
        [ 0   0   λ3 ...  0  ]  = diag [λ1, λ2, ..., λn]    ...(3)
        [ ...                ]
        [ 0   0   0  ...  λn ]
Then
    PD = [ λ1x11  λ2x12 ... λnx1n ]
         [ λ1x21  λ2x22 ... λnx2n ]  = [λj xij]   (no summation over j)    ...(4)
         [ ...                    ]
         [ λ1xn1  λ2xn2 ... λnxnn ]
       = (λ1X1, λ2X2, ..., λnXn)   (expressing the matrix in terms of its column vectors)
       = (AX1, AX2, ..., AXn)      (since AXj = λjXj)
       = A (X1, X2, ..., Xn)
       = AP    ...(5)
If P is a non-singular matrix, then premultiplying (5) by P⁻¹ we get
    D = P⁻¹AP    ...(6)
Thus, premultiplying A by P⁻¹ and postmultiplying by P, we get the diagonal matrix whose diagonal elements are the eigenvalues. This process is called the diagonalization of the matrix A.
Example. Let
    A = [ 3   4 ]
        [ 4  -3 ]
The characteristic equation is
    |A - λI| = | 3-λ    4   | = 0
               |  4   -3-λ  |
i.e.,  -9 + λ² - 16 = 0   or   λ = ±5,
i.e.,  λ1 = -5 and λ2 = 5.
The corresponding eigenvectors are found to be
    X1 = [ -1 ]   and   X2 = [ 2 ]
         [  2 ]              [ 1 ]
Let
    P = [ -1  2 ]
        [  2  1 ]
Then
    P⁻¹ = -(1/5) [  1  -2 ]
                 [ -2  -1 ]
    AP = [ 3   4 ] [ -1  2 ]   [   5  10 ]
         [ 4  -3 ] [  2  1 ] = [ -10   5 ]
Hence,
    P⁻¹AP = -(1/5) [  1  -2 ] [   5  10 ]
                   [ -2  -1 ] [ -10   5 ]
          = -(1/5) [ 25    0 ] = [ -5  0 ]
                   [  0  -25 ]   [  0  5 ]
          = [ λ1  0  ] = D
            [ 0   λ2 ]
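The same diagonalization can be reproduced numerically. The sketch below is an added illustration (not part of the original text); it builds P from the eigenvectors returned by NumPy, which are scaled differently from the hand-picked ones above but give the same D:

    import numpy as np

    A = np.array([[3.0, 4.0],
                  [4.0, -3.0]])

    vals, vecs = np.linalg.eig(A)     # columns of vecs are eigenvectors
    P = vecs
    D = np.linalg.inv(P) @ A @ P      # P^{-1} A P
    print(np.round(D, 10))            # diagonal matrix of the eigenvalues (order as returned by eig)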
Practical Method of Diagonalization. To reduce a given square matrix A to diagonal form, we first write the characteristic equation for the matrix and evaluate the characteristic roots λ1, λ2, ..., λn. Then the required diagonal form D of A will be
    D = [ λ1  0   0  ...  0  ]
        [ 0   λ2  0  ...  0  ]
        [ 0   0   λ3 ...  0  ]
        [ ...                ]
        [ 0   0   0  ...  λn ]
Example 1. Diagonalise the following matrix
    A = [ cos θ  -sin θ  0 ]
        [ sin θ   cos θ  0 ]
        [  0       0     1 ]
12

Matrices II

12.1 Special Types of Matrices
Here we define certain important matrices.
1. Square Matrix. A matrix having the same number of rows as the number of columns is called a square matrix. For example, the n × n matrix
    [ a11  a12 ... a1n ]
    [ a21  a22 ... a2n ]
    [ ...              ]
    [ an1  an2 ... ann ]
is a square matrix of order n. The elements a11, a22, ..., ann are called its diagonal elements. The sum of the diagonal elements of a square matrix is called the trace of that matrix:
    Tr A = Σi aii
2. Diagonal Matrix. If all the elements of a square matrix are zero except those in the leading diagonal, then the matrix is said to be a diagonal matrix. For example
    [ a11  0    0   ...  0  ]
    [ 0    a22  0   ...  0  ]
    [ 0    0    a33 ...  0  ]
    [ ...                   ]
    [ 0    0    0   ... ann ]
is a diagonal matrix of order n.
Thus an n-square matrix A = [aij]n×n is a diagonal matrix if and only if
    aij = 0    for i ≠ j
    aij = aii  for i = j
i.e.,
    aij = aii δij
It can also be written as
    A = diag [a11, a22, ..., ann]
3. Scalar Matrix. A diagonal matrix in which all the diagonal elements are equal is called a scalar matrix. Thus
    [ λ  0  0 ...  0 ]
    [ 0  λ  0 ...  0 ]
    [ 0  0  λ ...  0 ]
    [ ...            ]
    [ 0  0  0 ...  λ ]
is an n × n scalar matrix.
Thus an n-square matrix A = [aij]n×n is a scalar matrix if and only if
    aij = λ   for i = j
    aij = 0   for i ≠ j
4. Identity or Unit Matrix. A scalar matrix in which each diagonal element is unity is called an identity or unit matrix. Thus an n-square matrix [aij] is a unit matrix if and only if
    aij = 1   for i = j
    aij = 0   for i ≠ j

5. Row Matrix and Column Matrix. A matrix having one row only, i.e., a matrix of order 1 × n, is called a row matrix or a row vector and is expressed as
    [X]1×n = [ a11  a12  a13 ... a1n ]
A matrix having one column only, i.e., a matrix of order m × 1, is called a column matrix or a column vector. It is expressed as
    {X}m×1 = [ a11 ]
             [ a21 ]
             [ a31 ]
             [ ... ]
             [ am1 ]

6. Null Matrix. The m × n matrix whose elements are all 0 is called the null matrix of type m × n.
7. Upper-Triangular Matrix. A square matrix, whose every element aij for i > j is zero, is called an upper triangular matrix. Thus
    [ a11  a12  a13 ... a1n ]
    [ 0    a22  a23 ... a2n ]
    [ 0    0    a33 ... a3n ]
    [ ...                   ]
    [ 0    0    0   ... ann ]
is an upper triangular matrix.

8. Lower-Triangular Matrix. A square matrix whose every element aij for i < j is zero, is called a lower triangular matrix. Thus
    [ a11  0    0   ... 0   ]
    [ a21  a22  0   ... 0   ]
    [ a31  a32  a33 ... 0   ]
    [ ...                   ]
    [ an1  an2  an3 ... ann ]
is a lower triangular matrix.

9. Transpose of a Matrix. The transpose of an m × n matrix A is the n × m matrix formed by interchanging the rows and columns of A. It is denoted by the symbol A' or Aᵀ.
If
    A = [ a11  a12 ... a1n ]                  [ a11  a21 ... am1 ]
        [ a21  a22 ... a2n ] ,   then   Aᵀ =  [ a12  a22 ... am2 ]
        [ ...              ]                  [ ...              ]
        [ am1  am2 ... amn ]                  [ a1n  a2n ... amn ]
Properties
(i) (Aᵀ)ᵀ = A.
(ii) (λA)ᵀ = λAᵀ, λ being any scalar (real or complex).
(iii) (A + B)ᵀ = Aᵀ + Bᵀ, A and B being conformable for addition.
(iv) (AB)ᵀ = BᵀAᵀ, A and B being conformable for multiplication.

10. Complex-Conjugate Matrix. The complex conjugate of an (m × n) matrix A is formed by taking the complex conjugate of each element. Thus we have
    (A*)ij = a*ij   (for all i and j)
If A* = A, then A is a real matrix.
Properties
(i) (A*)* = A.
(ii) (A + B)* = A* + B*, A and B being conformable for addition.
(iii) (λA)* = λ*A*, λ being any complex number.
(iv) (AB)* = A*B*, A and B being conformable for multiplication.

11. Hermitian Conjugate (the conjugate transpose of a matrix). The Hermitian conjugate of an arbitrary matrix A is obtained by taking the complex conjugate of the matrix and then the transpose of the complex conjugate matrix:
    A† = (A*)ᵀ = (Aᵀ)*
Properties
(i) (A†)† = A.
(ii) (A + B)† = A† + B†, A and B being conformable for addition.
(iii) (λA)† = λ*A†, λ being any complex number.
(iv) (AB)† = B†A†, A and B being conformable for multiplication.
12. Symmetric and Antisymmetric Matrices
If Aᵀ = A, A is said to be a symmetric matrix.
    Example:  [ a  b  c ]
              [ b  d  e ]
              [ c  e  f ]
If Aᵀ = -A, A is said to be an antisymmetric (skew-symmetric) matrix.
    Example:  [  0   a  b ]
              [ -a   0  c ]
              [ -b  -c  0 ]
13. Hermitian Matrix
If A† = A, A is said to be a Hermitian matrix.
    Example:  [  0  i ]
              [ -i  1 ]
In quantum mechanics, all physical observables are represented by Hermitian operators (matrices).
14. Skew-Hermitian Matrix
If A† = -A, A is said to be a skew-Hermitian matrix.
    Example:  [   0   -1+i ]
              [ 1+i     i  ]
15. Singular and Non-singular Matrices
A square matrix A is said to be singular if |A| = 0.
A square matrix A is said to be non-singular if |A| ≠ 0.
16. Unitary Matrix
A square finite matrix A is said to be unitary if
    A†A = I
This implies AA† = I,
where A† is the conjugate transpose of A and I is a unit matrix.
We know that |A†| = |A*| and |A†A| = |A†| |A|.
Hence if A†A = I, we have |A†A| = |A†| |A| = |I| = 1.
This shows that the determinant of a unitary matrix is always of unit modulus and hence a unitary matrix is non-singular.
    Example:  [  1/√2    i/√2 ]
              [ -i/√2   -1/√2 ]

17. Orthogonal Matrix
A square finite matrix A is said to be orthogonal if
    AᵀA = I
This implies AAᵀ = I,
where Aᵀ is the transpose of A and I is a unit matrix.
We know that |Aᵀ| = |A| and |AᵀA| = |Aᵀ| |A|.
Hence if AᵀA = I, we have |A|² = 1, i.e., |A| = ±1.
This shows that the determinant of an orthogonal matrix can only have the values +1 or -1.
    Example:  [ cos θ  -sin θ ]
              [ sin θ   cos θ ]
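These defining conditions are straightforward to test numerically. The following sketch (an added illustration, not from the original text) checks the example matrices of items 16 and 17:

    import numpy as np

    # Unitary example of item 16
    U = np.array([[1, 1j],
                  [-1j, -1]]) / np.sqrt(2)
    print(np.allclose(U.conj().T @ U, np.eye(2)))     # True: U is unitary
    print(abs(np.linalg.det(U)))                      # 1.0: determinant of unit modulus

    # Orthogonal example of item 17 (rotation through an angle theta)
    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    print(np.allclose(R.T @ R, np.eye(2)))            # True: R is orthogonal
    print(np.linalg.det(R))                           # +1.0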

18. Adjoint of a Matrix
The adjoint of a matrix A is defined as the transpose of the matrix formed by the cofactors of the elements of |A|.
Let
    A = [ a11  a12 ... a1n ]
        [ a21  a22 ... a2n ]
        [ ...              ]
        [ an1  an2 ... ann ]
Let us denote the cofactors of the elements by the corresponding capital letters (e.g., the cofactor of a11 is A11, etc.). Then, for a 3 × 3 matrix, for example,
    A11 = (-1)¹⁺¹ | a22  a23 |        A13 = (-1)¹⁺³ | a21  a22 |
                  | a32  a33 |                      | a31  a32 |

    A23 = (-1)²⁺³ | a11  a12 |        A32 = (-1)³⁺² | a11  a13 |
                  | a31  a32 |                      | a21  a23 |
The matrix of cofactors of det A is
    [Aij] = [ A11  A12 ... A1n ]
            [ A21  A22 ... A2n ]
            [ ...              ]
            [ An1  An2 ... Ann ]
Now taking the transpose of the matrix of cofactors of det A,
    adj A = [ A11  A21 ... An1 ]
            [ A12  A22 ... An2 ]
            [ ...              ]
            [ A1n  A2n ... Ann ]
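A small sketch of this construction in NumPy (an added illustration, not part of the original text); the helper name cofactor_adjugate is ours:

    import numpy as np

    def cofactor_adjugate(A):
        """Return adj A: the transpose of the matrix of cofactors of A."""
        n = A.shape[0]
        C = np.zeros_like(A, dtype=float)
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return C.T

    A = np.array([[1.0, -1.0, 3.0],
                  [-1.0, 1.0, 2.0],
                  [3.0, 2.0, -1.0]])
    print(np.round(cofactor_adjugate(A)))   # matches adj A computed by hand in Example 1 below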

12.2 Inverse of a Matrix
Let A be a square matrix of order n. If there exists an n-square matrix B such that
    AB = BA = In,   where In is the n-square unit matrix,
then B is called the inverse of A and A is called an invertible matrix.
The inverse of the matrix A is denoted by A⁻¹.
Theorem. The necessary and sufficient condition for a square matrix A to possess an inverse is that it is non-singular, i.e., |A| ≠ 0.
Proof. To prove that the condition is necessary, let us assume that A is a given square matrix and let B be its inverse. Then by definition,
    AB = BA = I    ...(1)
Taking the determinant of either side, we get
    |AB| = |I|,   i.e.,   |A| |B| = |I|,
i.e.,  |A| |B| = 1.
This implies |A| |B| ≠ 0.
Consequently |A| ≠ 0, i.e., the matrix A is non-singular.
Thus the necessary condition for a square matrix A to possess an inverse is that |A| ≠ 0.
To prove that the condition is sufficient, let us assume that A is any non-singular matrix, i.e., |A| ≠ 0. Then we have simply to prove that the inverse of A exists. Let us define a matrix B such that
    B = adj A / |A|
Then, since A · adj A = (adj A) · A = |A| I,
    AB = A (adj A) / |A| = |A| I / |A| = I
∴   AB = I
Also
    BA = (adj A / |A|) A = |A| I / |A| = I
∴   BA = I
Thus AB = BA = I, which shows that B is the inverse of A.
The inverse of A is therefore given by
    B = A⁻¹ = adj A / |A|
Example 1. Find the inverse of the matrix
    [  1  -1   3 ]
    [ -1   1   2 ]
    [  3   2  -1 ]
Solution.
    |A| = |  1  -1   3 |
          | -1   1   2 | = -25
          |  3   2  -1 |
The cofactors are
    A11 = -5,   A12 = 5,    A13 = -5
    A21 = 5,    A22 = -10,  A23 = -5
    A31 = -5,   A32 = -5,   A33 = 0
Therefore, the matrix of cofactors of |A| is
    [Aij] = [ -5    5  -5 ]
            [  5  -10  -5 ]
            [ -5   -5   0 ]
Taking the transpose of the above matrix, we get
    adj A = [ -5    5  -5 ]
            [  5  -10  -5 ]
            [ -5   -5   0 ]
∴   A⁻¹ = adj A / |A| = -(1/25) [ -5    5  -5 ]   [  1/5  -1/5  1/5 ]
                                [  5  -10  -5 ] = [ -1/5   2/5  1/5 ]
                                [ -5   -5   0 ]   [  1/5   1/5   0  ]
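As a cross-check (an added sketch, not part of the original text), the relation A⁻¹ = adj A / |A| can be read the other way as adj A = |A| · A⁻¹, so NumPy's inverse reproduces the adjugate found above:

    import numpy as np

    A = np.array([[1.0, -1.0, 3.0],
                  [-1.0, 1.0, 2.0],
                  [3.0, 2.0, -1.0]])

    print(np.linalg.det(A))                                 # -25 (up to rounding)
    print(np.round(np.linalg.det(A) * np.linalg.inv(A)))    # adj A = |A| * A^{-1}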
Theorem. If A and B are two n-square, non-singular matrices, then
    (AB)⁻¹ = B⁻¹A⁻¹
Proof. Since A and B are non-singular matrices, |A| ≠ 0 and |B| ≠ 0.
∴ |AB| = |A| |B| ≠ 0.
This implies that AB is a non-singular matrix and hence invertible.
Let A⁻¹ and B⁻¹ be the inverses of the matrices A and B respectively:
    AA⁻¹ = A⁻¹A = I,   BB⁻¹ = B⁻¹B = I
Now consider a matrix C given by
    C = B⁻¹A⁻¹
Then,
    C(AB) = (B⁻¹A⁻¹)(AB) = B⁻¹(A⁻¹A)B = B⁻¹IB = B⁻¹B = I
                           (since A⁻¹A = I and IB = B)
i.e.,  (B⁻¹A⁻¹)(AB) = I    ...(1a)
Similarly it can be shown that
    (AB)(B⁻¹A⁻¹) = I    ...(1b)
Equations (1) imply that B⁻¹A⁻¹ is the inverse of AB, i.e.,
    (AB)⁻¹ = B⁻¹A⁻¹    ...(2)
For a similarity transformation B = P⁻¹AP,
    Trace B = Σi (P⁻¹AP)ii
                        [since the trace of a matrix is the sum of its diagonal terms]
            = Σi Σj Σk (P⁻¹)ij Ajk Pki
            = Σj Σk ( Σi Pki (P⁻¹)ij ) Ajk
            = Σj Σk (P P⁻¹)kj Ajk
            = Σk (P P⁻¹ A)kk
            = Σk Akk = Trace A
i.e.,  Trace B = Trace A
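A one-line numerical check of this invariance (added as a sketch; not part of the original text):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4))
    P = rng.normal(size=(4, 4))                    # almost surely non-singular
    B = np.linalg.inv(P) @ A @ P
    print(np.isclose(np.trace(B), np.trace(A)))    # True: the trace is invariant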


12.13 Unitary Transformation
Consider a linear transformation
    Y = AX    ...(1)
such that X and Y are column vectors of order n × 1 and A is a square transformation matrix of order n × n. If the matrix A of the transformation is unitary, the linear transformation is said to be a unitary transformation.
From the definition of a unitary matrix,
    A†A = AA† = I    ...(2)
Then
    Y†Y = (AX)†(AX) = X†A†AX = X†X    ...(3a)
or
    Σi |yi|² = Σi |xi|²    ...(3b)
Thus the norm of a vector is invariant under a unitary transformation.
As an inverse case, consider a linear transformation matrix A which leaves the norm of every vector of a vector space unchanged, that is, |Y| = |X| for Y = AX. Then
    X†X = (AX)†(AX) = X†A†AX
Obviously A†A = I, showing that A is unitary.
Thus, a linear transformation acting on a vector space is unitary if and only if it leaves the norm of every vector in the vector space unchanged.
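A quick numerical illustration of Eq. (3b) (added here as a sketch; not part of the original text), using the unitary matrix of Section 12.1, item 16:

    import numpy as np

    U = np.array([[1, 1j],
                  [-1j, -1]]) / np.sqrt(2)        # unitary: U† U = I

    X = np.array([2.0 + 1.0j, -1.0 + 3.0j])       # an arbitrary complex vector
    Y = U @ X
    print(np.linalg.norm(X), np.linalg.norm(Y))   # equal norms: |X| is preserved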
12.14 Orthogonal Transformation
Consider the following transformation from the y-variables to the x-variables:
    x1 = a11 y1 + a12 y2 + ... + a1n yn
    x2 = a21 y1 + a22 y2 + ... + a2n yn
    ..............................................
    xn = an1 y1 + an2 y2 + ... + ann yn
i.e.,
    X = AY,
where X = (x1, x2, ..., xn)ᵀ, Y = (y1, y2, ..., yn)ᵀ and A = (aij) is the matrix of the transformation. In case the transformation leaves the sum of squares invariant, i.e.,
    XᵀX = YᵀY,
it is called an orthogonal transformation. The matrix A of such a transformation is non-singular.
Obviously the condition for the transformation X = AY to be orthogonal is that the transformation matrix A satisfies the condition
    AᵀA = I
Theorem. The product of two orthogonal transformations is an orthogonal transformation.
Proof. Let X = AY and Y = BZ be two orthogonal transformations, so that
    AᵀA = I,   BᵀB = I
The product transformation is X = (AB)Z, and
    (AB)ᵀ(AB) = BᵀAᵀAB = Bᵀ(AᵀA)B = BᵀIB = BᵀB = I
Hence AB is orthogonal and the product transformation is an orthogonal transformation.
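Both the invariance of the sum of squares and the closure under products can be checked numerically; the sketch below (an added illustration, not part of the original text) uses two plane rotations as the orthogonal transformations, with the helper name rotation being ours:

    import numpy as np

    def rotation(theta):
        """2-D rotation matrix: an orthogonal transformation."""
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    A, B = rotation(0.4), rotation(1.1)
    Y = np.array([3.0, -2.0])

    X = A @ Y
    print(np.isclose(X @ X, Y @ Y))             # True: the sum of squares is invariant

    AB = A @ B
    print(np.allclose(AB.T @ AB, np.eye(2)))    # True: the product is again orthogonal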

You might also like