Linear Algebra for Electrical Engineers
Midterm
Name :
Student No.:
Before you begin, please check that you received all of the problems. This exam
consists of 8 problems, and the scores add up to 100. This document has 9 pages
in total.
1. (4 points) Suppose A is a 5 by 4 matrix with rank 4.
(a) (2 points) Show that Ax = b has no solution when the 5 by 5 matrix [Ab] is
invertible.
(b) (2 points) Show that Ax = b is solvable when the 5 by 5 matrix [Ab] is singular.
Solution
(a) When the square matrix [A b] is invertible, its five columns are linearly independent,
so b cannot be represented as a linear combination of the columns of A. Thus, b is not
in the column space of A, and Ax = b has no solution.
(b) Since [A b] is singular, there exists some nonzero c = [c1, c2, c3, c4, c5] that satisfies
c1 a1 + c2 a2 + c3 a3 + c4 a4 + c5 b = 0, where A = [a1 a2 a3 a4].
Since A has rank 4, c5 ≠ 0: otherwise c1 = c2 = c3 = c4 = 0, as
a1, a2, a3, a4 are linearly independent, contradicting c ≠ 0.
Thus, b can be written as
b = −(c1/c5) a1 − · · · − (c4/c5) a4.
Since b is in the column space of A, Ax = b is solvable.
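As a sanity check, both cases can be illustrated numerically. The sketch below uses a hypothetical rank-4 matrix A built from standard basis vectors (not part of the problem) and computes det [A b] with the Leibniz permutation formula: a nonzero determinant pairs with an unsolvable system, a zero determinant with a solvable one.

```python
from itertools import permutations

def det(M):
    # Leibniz formula: sum over permutations of sign * product of entries.
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        # Count inversions to get the sign of the permutation.
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        prod = 1
        for row, col in enumerate(perm):
            prod *= M[row][col]
        total += (-1) ** inv * prod
    return total

# A concrete 5 by 4 matrix with rank 4 (first four standard basis vectors).
A = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]

def augment(A, b):
    return [row + [bi] for row, bi in zip(A, b)]

b_outside = [0, 0, 0, 0, 1]   # not in the column space: last entry nonzero
b_inside  = [1, 2, 3, 4, 0]   # in the column space: x = (1, 2, 3, 4)

print(det(augment(A, b_outside)))  # 1: [A b] invertible, Ax = b unsolvable
print(det(augment(A, b_inside)))   # 0: [A b] singular, Ax = b solvable
```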
2. (20 points) Determine whether the following statements are true or false. Give a short
explanation to justify your answer. 2 points each.
(a) For two matrices A, B, R(AB) ⊆ R(A).
(b) For two matrices A, B, N (AB) ⊆ N (A).
(c) For a skew-symmetric matrix A, I − A is nonsingular.
(d) If A = (I + S)(I − S)⁻¹ where S is a skew-symmetric matrix, then A⁻¹ = Aᵀ.
(e) Rotation of θ around the conventional 2D axis as in Fig. 1 can be written as
[ cos θ   sin θ ]
[ −sin θ  cos θ ].
(f) If two m by n matrices A and B have the same four fundamental subspaces, their
row echelon forms are identical.
(g) If two m by n matrices A and B have the same four fundamental subspaces, A
and B are identical.
(h) If W is a subspace of V and B is a basis for V , then some subset of B is a basis
for W.
(i) Matrix A is orthogonal if and only if AᵀA = I.
(j) If A has orthogonal columns, and A = QR is its QR-factorization, then R is
diagonal.
Figure 1: θ in conventional 2D coordinates.
Solution
(a) True. Shown in lecture 8.
(b) False. Counterexample in lecture 8; e.g., A invertible and B = 0 gives N(AB) = Rⁿ but N(A) = {0}.
(c) True. Shown in lecture 7-2.
(d) True. Shown in lecture 7-2.
(e) False. The signs of the sine terms are wrong; the counterclockwise rotation matrix has −sin θ in the first row and +sin θ in the second.
(f) False. Row echelon form places no constraints on the nonzero entries above the pivots, unlike reduced row echelon form.
(g) False. Multiplying A by a nonzero constant preserves all four fundamental subspaces but changes the matrix.
(h) False. The opposite holds: a basis for W can be extended to a basis for V.
(i) False. A may not be square.
(j) True. Think of Gram-Schmidt: when the columns of A are orthogonal, each qi is a scalar multiple of ai, so R is diagonal.
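For statement (e), the sign error is easy to check numerically. This short sketch (with a hypothetical test vector) rotates (1, 0) by 90 degrees: the standard counterclockwise rotation matrix sends it to (0, 1), while the matrix in the statement sends it to (0, −1), i.e., rotates clockwise.

```python
from math import cos, sin, pi

def apply(M, v):
    # Multiply a 2x2 matrix by a 2-vector.
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

t = pi / 2  # rotate by 90 degrees
exam_matrix    = [[cos(t),  sin(t)], [-sin(t), cos(t)]]   # matrix from statement (e)
correct_matrix = [[cos(t), -sin(t)], [sin(t),  cos(t)]]   # counterclockwise rotation

# Rotating (1, 0) counterclockwise by 90 degrees should give (0, 1).
print([round(c, 6) for c in apply(correct_matrix, (1, 0))])  # [0.0, 1.0]
print([round(c, 6) for c in apply(exam_matrix, (1, 0))])     # [0.0, -1.0]: clockwise
```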
3. (8 points) LU-decomposition expresses a matrix A as A = LU, where L is a lower
triangular matrix and U an upper triangular matrix. This form can be obtained
through Gaussian elimination, as each elementary row operation can be expressed as a
multiplication with an elementary matrix E. For example, for a 2 by 2 matrix
[ a b ]
[ c d ],
subtracting the first row from the second row can be written as
[ a b ]   [ 1 0 ] [ a    b   ]
[ c d ] = [ 1 1 ] [ c−a  d−b ].
Performing Gaussian elimination, the matrix A becomes E1 E2 · · · En U, where n is the
number of required elementary operations. By multiplying the elementary matrices
E1 · · · En you can get the lower triangular matrix L. Now let's say the matrix A is
given as below.
    [ 1  2  3  4 ]
A = [ 6 13 20 27 ]
    [ 9 26 44 62 ]
(a) (4 points) Perform LU-decomposition on A and fill in the blanks.
    [ 1 □ □ ] [ 1 □ □ □ ]
A = [ □ 1 □ ] [ 0 1 □ □ ]
    [ □ □ 1 ] [ 0 0 1 □ ]
(b) (3 points) Now notice that the upper triangular matrix is in row-echelon form.
What is the rank of matrix A? What are the basic columns of A?
(c) (1 point) Based on the rank-nullity theorem, what is the dimension of N (A)?
Solution
Perform elementary row operations to get the row echelon form.
(a) A becomes:
    [ 1 0 0 ] [ 1 2 3 4 ]
A = [ 6 1 0 ] [ 0 1 2 3 ]
    [ 9 8 1 ] [ 0 0 1 2 ]
(b) Since A has three pivots, its rank is 3. The first three columns of A are the basic
columns.
(c) Since the rank of A is 3, the null space has dimension 4 − 3 = 1.
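The elimination above can be reproduced with a short Doolittle-style factorization. This is a minimal sketch, assuming no row exchanges are needed (which holds for this A):

```python
def lu_decompose(A):
    # Doolittle-style elimination without row exchanges:
    # store the elimination multipliers in L, reduce a copy of A into U.
    n, m = len(A), len(A[0])
    L = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for col in range(min(n, m)):
        for row in range(col + 1, n):
            factor = U[row][col] / U[col][col]
            L[row][col] = factor
            U[row] = [u - factor * p for u, p in zip(U[row], U[col])]
    return L, U

A = [[1, 2, 3, 4], [6, 13, 20, 27], [9, 26, 44, 62]]
L, U = lu_decompose(A)
product = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(4)]
           for i in range(3)]

print(L)             # [[1, 0, 0], [6.0, 1, 0], [9.0, 8.0, 1]]
print(U)             # [[1, 2, 3, 4], [0.0, 1.0, 2.0, 3.0], [0.0, 0.0, 1.0, 2.0]]
print(product == A)  # True
```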
4. (8 points) For A ∈ R^{m×n} and a subspace S of R^{n×1}, the image
A(S) = {Ax | x ∈ S}
of S under A is a subspace of R^{m×1}. Prove that if S ∩ N(A) = {0}, then dim A(S) =
dim(S).
Solution.
Let B = {v1, v2, . . . , vk} be a basis for S. We need to show that
B_A = {Av1, Av2, . . . , Avk} is a basis for A(S).
• Show that B_A spans A(S).
For a vector y ∈ A(S), we can write y = Ax for some x ∈ S. Writing x as the linear
combination Σ_{i=1}^k αi vi of the basis of S,
y = A (Σ_{i=1}^k αi vi) = Σ_{i=1}^k αi (Avi).
Therefore every y ∈ A(S) lies in the span of B_A.
• Show that B_A is a linearly independent set.
Suppose Σ_{i=1}^k αi Avi = 0. Then A (Σ_{i=1}^k αi vi) = 0, so
Σ_{i=1}^k αi vi ∈ S ∩ N(A) = {0}. Hence Σ_{i=1}^k αi vi = 0, and since the vi are
linearly independent, αi = 0 for all i.
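The claim can be checked on a concrete example with exact rational row reduction. The `rank` helper below is hypothetical scaffolding (not part of the problem); A is chosen so that N(A) is spanned by (0, 0, 1), and S is a plane meeting N(A) only at 0.

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over exact rationals; returns the number of pivots.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# A has a one-dimensional null space spanned by (0, 0, 1).
A = [[1, 0, 0],
     [0, 1, 0]]

# S = span{(1, 0, 1), (0, 1, 0)} meets N(A) only at 0.
S_basis = [(1, 0, 1), (0, 1, 0)]
images = [tuple(sum(a * x for a, x in zip(row, v)) for row in A) for v in S_basis]

print(rank(S_basis), rank(images))  # 2 2 -> dim S = dim A(S)
```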
5. (12 points) A set of nodes {N1, N2, . . . , Nm}, together with a set of edges
{E1, E2, . . . , En} between the nodes, is called a graph. A connected graph is one in
which there is a sequence of edges linking any pair of nodes, and a directed graph is
one in which each edge has been assigned a direction. The connectivity of a directed
graph is independent of the directions assigned to the edges, i.e., changing the
direction of an edge does not change the connectivity. On the surface, the concepts of
graph connectivity and matrix rank seem to have little to do with each other, but, in
fact, there is a close relationship.
The incidence matrix associated with a directed graph containing m nodes and n edges
is defined to be the m × n matrix E whose (k, j)-entry is
e_kj =  1, if edge Ej is directed toward node Nk,
       −1, if edge Ej is directed away from node Nk,
        0, if edge Ej neither begins nor ends at node Nk.
Each edge in a directed graph is associated with two nodes, the head and the tail of
the edge, so each column of E must contain exactly two nonzero entries, a (+1) and
a (−1). Consequently, all column sums are zero. In other words, if eᵀ = (1, 1, · · · , 1),
then eᵀE = 0, so e ∈ N(Eᵀ), and
rank(E) = rank(Eᵀ) = m − dim N(Eᵀ) ≤ m − 1.
This inequality holds regardless of the connectivity of the associated graph. Prove that
the equality holds, i.e., rank(E) = m − 1 if and only if the graph is connected.
Solution. Suppose G is connected. Prove rank(E) = m − 1 by arguing that dim N(Eᵀ) =
1, and do so by showing that e = (1, 1, · · · , 1)ᵀ is a basis for N(Eᵀ). To see that e spans
N(Eᵀ), consider an arbitrary x ∈ N(Eᵀ), and focus on any two components xi and
xk of x along with the corresponding nodes Ni and Nk in G. Since G is connected,
there must exist a subset of r nodes,
{Nj1, Nj2, . . . , Njr}, where i = j1 and k = jr,
such that there is an edge between Njp and Njp+1 for each p = 1, 2, . . . , r − 1.
Therefore, corresponding to each of the r − 1 pairs (Njp, Njp+1), there must exist a
column cp in E (not necessarily the pth column) such that components jp and jp+1 of cp
are complementary in the sense that one is (+1) while the other is (−1) (all other
components are zero). Because xᵀE = 0, it follows that xᵀcp = 0, and hence xjp = xjp+1.
But this holds for every p = 1, 2, . . . , r − 1, so xi = xk for each i and k, and hence
x = αe for some scalar α. Thus {e} spans N(Eᵀ). Clearly, {e} is linearly independent,
so it is a basis for N(Eᵀ), and therefore dim N(Eᵀ) = 1 or, equivalently, rank(E) = m − 1.
Conversely, suppose rank(E) = m − 1, and prove G is connected with an indirect
argument. If G is not connected, then G is decomposable into two nonempty subgraphs
G1 and G2 in which there are no edges between nodes in G1 and nodes in G2. This means
that the nodes in G can be ordered so as to make E have the form
E = [ E1  0  ]
    [ 0   E2 ],
where E1 and E2 are the incidence matrices for G1 and G2, respectively. If G1 and G2
contain m1 and m2 nodes respectively, then
rank(E) = rank(E1) + rank(E2) ≤ (m1 − 1) + (m2 − 1) = m − 2.
But this contradicts the hypothesis that rank(E) = m − 1, so the supposition that G
is not connected must be false.
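Both directions can be illustrated numerically. The sketch below (hypothetical helper functions, with edges given as (tail, head) pairs) builds incidence matrices for a connected and a disconnected graph on m = 4 nodes and computes their ranks by exact row reduction:

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over exact rationals; returns the number of pivots.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def incidence(m, edges):
    # Column j has +1 at the head of edge j and -1 at its tail.
    E = [[0] * len(edges) for _ in range(m)]
    for j, (tail, head) in enumerate(edges):
        E[tail][j] = -1
        E[head][j] = 1
    return E

# Connected path on 4 nodes: rank = m - 1 = 3.
print(rank(incidence(4, [(0, 1), (1, 2), (2, 3)])))  # 3
# Two disconnected edges on 4 nodes: rank = 2 <= m - 2.
print(rank(incidence(4, [(0, 1), (2, 3)])))          # 2
```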
6. (20 points) Suppose the n × n matrix A is invertible, with inverse B = A⁻¹. We
let the n-vectors a1, . . . , an denote the columns of A, and b1ᵀ, . . . , bnᵀ the rows of B.
Determine whether each of the following statements is true or false, and justify your
answer. True means the statement always holds, with no further assumptions. False
means the statement does not always hold, without further assumptions.
(a) For any n-vector x, we have x = Σ_{i=1}^n (biᵀx) ai.
(b) For any n-vector x, we have x = Σ_{i=1}^n (aiᵀx) bi.
(c) For i ≠ j, ai ⊥ bj.
(d) For any i, ‖bi‖ ≥ 1/‖ai‖.
(e) For any i and j, bi + bj ≠ 0.
(f) For any i, ai + bi ≠ 0.
(g) (A⁻¹)ᵀ = (Aᵀ)⁻¹.
(h) If all the entries of A are integers, the same is true for A⁻¹.
(i) If A is symmetric, so is A⁻¹.
(j) If the dimension of the null space of a matrix C is 0, then C is invertible.
Solution.
(a) True. This is writing x = ABx.
(b) True. This is writing x = BAx.
(c) True. BA = I gives biᵀaj = 0 for i ≠ j.
(d) True. By the Cauchy-Schwarz inequality, |aiᵀbi| ≤ ‖ai‖‖bi‖. However, aiᵀbi = 1
as A and B are inverses. Therefore 1 ≤ ‖ai‖‖bi‖.
(e) True. The bi are linearly independent, so no combination bi + bj can vanish.
(f) True. We have aiᵀbi = 1, while aiᵀ(−ai) = −‖ai‖² ≤ 0, so bi ≠ −ai.
(g) True. Transposing AA⁻¹ = I gives (A⁻¹)ᵀAᵀ = I.
(h) False. For example, A = 2I has integer entries but A⁻¹ = I/2 does not.
(i) True. Transposing AB = I gives BᵀA = I, so Bᵀ = A⁻¹ = B.
(j) False. C may not be square.
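Statements (a)-(c) can be checked on a concrete 2 × 2 example (a hypothetical choice: A = [[2, 1], [1, 1]] with det A = 1, so B = A⁻¹ = [[1, −1], [−1, 2]]):

```python
# Columns of A and rows of B = A^{-1} for the example matrix.
A_cols = [(2, 1), (1, 1)]      # columns a1, a2 of A
B_rows = [(1, -1), (-1, 2)]    # rows b1^T, b2^T of B = A^{-1}

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

x = (3, 5)

# (a): x = sum_i (bi^T x) ai
recon_a = tuple(sum(dot(b, x) * a[k] for a, b in zip(A_cols, B_rows))
                for k in range(2))
# (b): x = sum_i (ai^T x) bi
recon_b = tuple(sum(dot(a, x) * b[k] for a, b in zip(A_cols, B_rows))
                for k in range(2))
# (c): ai is orthogonal to bj for i != j
cross = [dot(A_cols[i], B_rows[j]) for i in range(2) for j in range(2) if i != j]

print(recon_a, recon_b, cross)  # (3, 5) (3, 5) [0, 0]
```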
7. (8 points) Given a vector space V, an inner product on V is a function that associates
with each pair of vectors v, w ∈ V a real number, denoted ⟨v, w⟩. It satisfies the
following properties for all u, v, w ∈ V and for all scalars a ∈ R:
⟨u, v⟩ = ⟨v, u⟩
⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
⟨au, v⟩ = a⟨u, v⟩                                    (1)
⟨v, v⟩ ≥ 0 and ⟨v, v⟩ = 0 if and only if v = 0.
Let C[0, 1] be the space of all continuous functions f : [0, 1] → R. Define
⟨f, g⟩ = ∫₀¹ f(x)g(x) dx                             (2)
for all pairs of functions f, g ∈ C[0, 1]. Show that this is in fact an inner product by
showing it satisfies the four properties listed above (1 point each for the first three,
and 5 points for the last one).
Hint. When deriving the fourth property, you should show that ⟨f, f⟩ > 0 if f(x) is
not the constant zero function. Use the ε-δ argument with δ = f(x0)²/2, where f is not
the constant zero function and x0 ∈ [0, 1] satisfies f(x0)² > 0.
Solution. The first three properties can be proved with elementary facts about
integration.
(a) ⟨f, g⟩ = ∫₀¹ f(x)g(x) dx = ∫₀¹ g(x)f(x) dx = ⟨g, f⟩.
(b) ⟨f + g, h⟩ = ∫₀¹ (f(x) + g(x))h(x) dx = ∫₀¹ f(x)h(x) dx + ∫₀¹ g(x)h(x) dx
= ⟨f, h⟩ + ⟨g, h⟩.
(c) ⟨cf, g⟩ = ∫₀¹ cf(x)g(x) dx = c ∫₀¹ f(x)g(x) dx = c⟨f, g⟩.
(d) First, ⟨f, f⟩ = ∫₀¹ f(x)² dx ≥ 0 as f(x)² ≥ 0 for all x ∈ [0, 1]. We also need to show
that ⟨f, f⟩ > 0 if f is not the constant zero function. If f is not the constant zero
function, f(x0)² > 0 for some x0 ∈ [0, 1]. Since f² is continuous, the ε-δ argument with
δ = f(x0)²/2 gives an ε > 0 such that |f(x)² − f(x0)²| < δ whenever |x − x0| < ε
(shrinking ε if necessary so that the interval stays inside [0, 1]). It follows that
f(x)² > f(x0)²/2 whenever |x − x0| < ε. Thus,
⟨f, f⟩ = ∫₀¹ f(x)² dx ≥ ∫_{x0−ε}^{x0+ε} f(x)² dx ≥ ∫_{x0−ε}^{x0+ε} f(x0)²/2 dx
= (f(x0)²/2)(2ε) = ε f(x0)² > 0.                     (3)
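The algebraic properties and positivity can also be sanity-checked numerically with a midpoint Riemann sum (hypothetical test functions; this approximates, but of course does not prove, the properties):

```python
# Midpoint-rule approximation of <f, g> = integral of f(x) g(x) over [0, 1].
N = 100_000

def inner(f, g):
    return sum(f((k + 0.5) / N) * g((k + 0.5) / N) for k in range(N)) / N

f = lambda x: x - 0.5
g = lambda x: x * x
h = lambda x: 1.0

sym = abs(inner(f, g) - inner(g, f))                                      # property (a)
add = abs(inner(lambda x: f(x) + g(x), h) - (inner(f, h) + inner(g, h)))  # property (b)
pos = inner(f, f)   # approximates the exact value 1/12 > 0, property (d)

print(sym == 0.0, add < 1e-9, pos > 0)  # True True True
```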
8. (20 points) Matrix representation of a mesh. There are four subproblems.
Figure 2: A triangular mesh defined in 3D space. vi,j,k ∈ R3 indicate vertices of the mesh and hi
is a function defined on a triangular surface.
In this problem, we derive a discretized form of Poisson's equation, ∆f = g, which is
used to represent the property of the triangular mesh in Fig. 2. The first step is to get
a modified form of Poisson's equation
∫_M hi ∆f dA = ∫_M hi g dA = ∫_M ∇hi · ∇f dA   ∀ i ∈ {1, 2, · · · , |V|},   (4)
where |V| is the number of adjacent vertices for each vertex including itself, and M
denotes the set of adjacent triangles for the ith vertex.
We further represent the functions f and g at each vertex x as linear combinations
of the piece-wise linear functions hi(x). Let the coefficients of the linear combinations
be ai and bi, ∀ i = 1, · · · , |V|. Then the approximations become f(x) = Σ_i ai hi(x) and
g(x) = Σ_i bi hi(x). Thus, the left- and right-hand sides of Poisson's equation become
∫_M hi ∆f dA = Σ_{j}^{|V|} aj ∫_M ∇hi · ∇hj dA
∫_M hi g dA = Σ_{j}^{|V|} bj ∫_M hi hj dA.                      (5)
(a) (5 points) First, we can represent Eq. (5) as a linear system of the form
La = Ab, where L, A ∈ R^{|V|×|V|} and a = [a1 · · · a|V|]ᵀ, b = [b1 · · · b|V|]ᵀ ∈ R^{|V|}.   (6)
What is Lij (the ith-row, jth-column element of L) and Aij?
(b) (5 points) Let us define the piece-wise linear function hi as
hi(x) = hi(vi) + ∇hi(vi) · (x − vi) for each triangle in Fig. 2.
In addition, hi(vi) = 1 and hi(vj) = hi(vk) = 0. ∇hi(vi) (the gradient of hi at point
vi) is a constant vector for each triangle, and ∇hi(vi) · n = 0 for the normal vector
n of the triangular surface. Show that the following statements hold:
∇hi(vi) · (vj − vk) = 0,
∇hi(vi) is perpendicular to the edge vj vk, and                  (7)
∇hi(vi) is heading outside the triangle if it starts at vi.
(c) (5 points) Now show that ‖∇hi(vi)‖ = 1/l. Hint. Start by evaluating ∇hi(vi) ·
(vi − vk).
(d) (5 points) Let the vector vj − vk be ejk, and define e⊥jk as the vector obtained by
rotating vk − vj by 90 degrees counterclockwise. Show that ∇hi(vi) = e⊥jk / (2A), where
A is the area of the triangle.
Solution.
(a) For matrix L ∈ R^{|V|×|V|}, the element Lij is ∫_M ∇hi · ∇hj dA.
For matrix A ∈ R^{|V|×|V|}, the element Aij is ∫_M hi hj dA.
(b) By plugging each of vj and vk into hi(x) = hi(vi) + ∇hi(vi) · (x − vi), we get
hi(vj) = hi(vi) + ∇hi(vi) · (vj − vi) = 0
hi(vk) = hi(vi) + ∇hi(vi) · (vk − vi) = 0                        (8)
Subtracting one equation from the other yields ∇hi(vi) · (vj − vk) = 0.
Next, ∇hi(vi) · (vj − vk) = 0 shows that the angle between the two vectors is
90 degrees (perpendicular).
Finally, the gradient ∇hi(vi) can head either inside or outside the triangle. From
∇hi(vi) · (vi − vk) = 1 > 0 and ∇hi(vi) · (vi − vj) = 1 > 0, we observe that the angles
between ∇hi(vi) and the vectors vi − vj and vi − vk must be less than 90 degrees.
Also, the gradient lies in the same plane as the triangle since ∇hi · n = 0. Thus,
the gradient must head outside the triangle if it starts at vi.
(c) Starting from hi(vk) = hi(vi) + ∇hi(vi) · (vk − vi) = 0, we get 1 = ∇hi(vi) · (vi − vk).
By the definition of the dot product,
1 = ∇hi(vi) · (vi − vk) = ‖∇hi(vi)‖ ‖vi − vk‖ cos(π/2 − β) = ‖∇hi(vi)‖ ‖vi − vk‖ sin β
↔ ‖∇hi(vi)‖ = 1 / (‖vi − vk‖ sin β) = 1/l.                       (9)
(d) The area of the triangle in the figure is A = (1/2)‖vk − vj‖ l = (1/2)‖e⊥jk‖ l. The
unit vector along the gradient is e⊥jk / ‖e⊥jk‖ = ∇hi(vi) / ‖∇hi(vi)‖. Thus,
∇hi(vi) = e⊥jk / (‖e⊥jk‖ l) = e⊥jk / (2A).
This is the end of this problem.
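The identities in (b)-(d) can be verified on a concrete triangle. The sketch below uses a hypothetical planar triangle (the problem's vertices live in 3D, but a 2D triangle suffices to check the algebra):

```python
from math import sqrt

# Hypothetical planar triangle with vertices vi, vj, vk.
vi, vj, vk = (0.0, 0.0), (2.0, 0.0), (0.0, 1.0)

# Constant gradient g of the hat function h(x) = 1 + g . (x - vi),
# determined by h(vj) = h(vk) = 0, i.e. g . (vj - vi) = g . (vk - vi) = -1.
d1 = (vj[0] - vi[0], vj[1] - vi[1])
d2 = (vk[0] - vi[0], vk[1] - vi[1])
det = d1[0] * d2[1] - d1[1] * d2[0]
g = ((d1[1] - d2[1]) / det, (d2[0] - d1[0]) / det)

# (b): g is perpendicular to the opposite edge vj vk.
edge = (vj[0] - vk[0], vj[1] - vk[1])
perp_check = g[0] * edge[0] + g[1] * edge[1]

# (c): ||g|| = 1/l, where l is the distance from vi to the line through vj, vk.
area = abs(det) / 2
base = sqrt((vk[0] - vj[0]) ** 2 + (vk[1] - vj[1]) ** 2)
l = 2 * area / base
norm_check = sqrt(g[0] ** 2 + g[1] ** 2) - 1 / l

# (d): g = e_perp / (2A), with e_perp = (vk - vj) rotated 90 deg counterclockwise.
r = (vk[0] - vj[0], vk[1] - vj[1])
e_perp = (-r[1], r[0])
formula_check = (g[0] - e_perp[0] / (2 * area), g[1] - e_perp[1] / (2 * area))

print(perp_check, formula_check)   # 0.0 (0.0, 0.0)
print(abs(norm_check) < 1e-12)     # True
```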
Using the ∇hi we computed above, we can compute the following integrals:
∫_M ∇hi · ∇hj dA = (1/2) Σ_{i∼k} (cot αik + cot βik)   if i = j,
                   −(1/2) (cot αij + cot βij)          if i ∼ j,   (10)
where i ∼ j denotes that vertex j is adjacent to vertex i; these are the elements of L.
In a similar procedure to the one used for the elements of L, we can derive the
elements of A as
Aij = (1/6) · (area around vertex i)                  if i = j,
      (1/12) · (area adjacent to vertices i and j)    if i ∼ j.   (11)
Thus, we have the discretized linear system La = Ab that describes the Poisson
equation ∆f = g.