Signals and Vectors
• The success of the frequency-analysis approach depends on our ability to represent a given function of time as a sum of various exponential functions.
• Signal analysis can be better understood by studying the analogy between vectors and signals.
• Analogy between Signals and Vectors
--A vector can be represented as a sum of its components
--A signal can also be represented as a sum of its components
• Component of a vector:
A vector is represented by a bold-face letter and is specified by its magnitude and its direction.
• E.g., consider a vector x of magnitude |x| and a vector g of magnitude |g|.
Let the component of vector g along x be cx.
Geometrically, this component is the projection of g on x.
It is obtained by dropping a perpendicular from the tip of g onto x, so that
g = cx + e
Component of a Vector
• There are infinitely many ways to express g in terms of x.
• g is represented as a multiple of x plus another vector, called the error vector e.
• e can be made minimum by proper selection of c.
[Figure: g approximated along x in two ways, g = c1 x + e1 and g = c2 x + e2, with error vectors e1 and e2.]
Component of a Vector (cont..)
• If we approximate g by cx, the error in this approximation is the vector e:
e = g − cx
For the two cases in the last figure, the errors are e1 = g − c1 x and e2 = g − c2 x.
Component of a Vector (cont..)
• We can mathematically define the component of vector g along x.
• Take the dot (inner, or scalar) product of the two vectors g and x:
g · x = |g||x| cos θ
The square of the length of a vector is, by definition,
|x|² = x · x
The length of the component of g along x is
c|x| = |g| cos θ
Multiplying both sides by |x|:
c|x|² = |g||x| cos θ = g · x
so that
c = (g · x) / |x|² = (g · x) / (x · x)
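As a quick numerical check of this formula, here is a minimal Python/NumPy sketch; the vectors g and x are arbitrary illustrative values, not taken from the slides.

```python
import numpy as np

# A minimal sketch of c = (g . x) / (x . x); the vectors are illustrative.
g = np.array([3.0, 4.0])
x = np.array([1.0, 0.0])

c = np.dot(g, x) / np.dot(x, x)  # component of g along x
e = g - c * x                    # error vector e = g - cx

print(c)             # 3.0
print(np.dot(e, x))  # 0.0: the minimum-error e is perpendicular to x
```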
Component of a Vector (cont..)
• Consider the first figure again and the expression for c.
• Let g and x be perpendicular (orthogonal).
• Then g has a zero component along x, which gives c = 0.
• From the equation for c, g and x are orthogonal if the inner (scalar, or dot) product of the two vectors is zero, i.e.,
g · x = 0
Orthogonal vectors
• The larger the component of a vector along another vector, the more closely the two vectors resemble each other in direction, and the smaller the error vector will be.
• If the component of the vector g along x is cx, then the magnitude of c is an indication of the similarity of the two vectors.
• If c is zero, then the vector has no component along the other vector, and hence the two vectors are mutually perpendicular. Such vectors are known as orthogonal vectors.
• So, orthogonal vectors are independent vectors.
Best friends, worst enemies and complete strangers
• cn = 1: Best friends. This happens when g(t) = cx(t) and c is positive. The signals are aligned; maximum similarity.
• cn = −1: Worst enemies. This happens when g(t) = cx(t) and c is negative. The signals are again aligned, but in opposite directions. The signals understand each other, but they do not like each other.
• cn = 0: Complete strangers. The two signals are orthogonal. We may view orthogonal signals as unrelated signals.
Component of a Signal
• The notions of vector component and orthogonality can be extended to signals.
• Consider approximating a real signal g(t) in terms of another real signal x(t) over an interval t1 to t2:
g(t) ≈ c x(t)
The error e(t) in the approximation is given by
e(t) = g(t) − c x(t)
Component of a Signal contd..
• One possibility for minimizing the error e(t) over the interval t1 to t2 is to minimize the average value of e(t) over the interval, i.e., to minimize

(1/(t2 − t1)) ∫_{t1}^{t2} [g(t) − c x(t)] dt

• This criterion of minimizing the average value of e(t) is inadequate because large positive and negative errors may cancel one another in the averaging, giving the false indication that the error is zero.
• This situation can be corrected if we choose to minimize the average of the square of the error instead of the error itself.
Component of a Signal contd..
• Let the average of e²(t) be E:

E = (1/(t2 − t1)) ∫_{t1}^{t2} e²(t) dt = (1/(t2 − t1)) ∫_{t1}^{t2} [g(t) − c x(t)]² dt

To find the value of c which minimizes E we must have dE/dc = 0, i.e.,

(d/dc) { (1/(t2 − t1)) ∫_{t1}^{t2} [g(t) − c x(t)]² dt } = 0

Changing the order of integration and differentiation and solving, we get

c = ( ∫_{t1}^{t2} g(t) x(t) dt ) / ( ∫_{t1}^{t2} x²(t) dt )
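This formula lends itself to a direct numerical check. The sketch below (Python/NumPy; the choices g(t) = t and x(t) = sin t are illustrative, not from the slides) evaluates c with a simple Riemann sum.

```python
import numpy as np

# Numerically evaluate c = (∫ g(t)x(t) dt) / (∫ x²(t) dt) over [t1, t2].
t1, t2 = 0.0, 2 * np.pi
t = np.linspace(t1, t2, 100_001)
dt = t[1] - t[0]

g = t            # signal to be approximated (illustrative)
x = np.sin(t)    # signal whose component we extract

c = np.sum(g * x) * dt / (np.sum(x * x) * dt)
print(c)  # ≈ -2.0: ∫ t·sin t dt over [0, 2π] is -2π, and ∫ sin²t dt is π
```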
Component of a Signal
The energy of the signal x(t) is

Ex = ∫_{t1}^{t2} x²(t) dt

• Recall the equation c = (g · x)/(x · x) for two vectors. There is a remarkable similarity between the behavior of vectors and signals: the area under the product of two signals corresponds to the dot product of two vectors.
• The energy of a signal is the inner product of the signal with itself, and corresponds to the squared length of a vector (which is the inner product of the vector with itself).
Component of a Signal
Consider the signal equation again:

c = (1/Ex) ∫_{t1}^{t2} g(t) x(t) dt

• The signal g(t) contains a component c x(t).
• c x(t) is the projection of g(t) on x(t).
• If c = 0, the component c x(t) vanishes, and the signals g(t) and x(t) are orthogonal over the interval [t1, t2].
Orthogonal signals
• It can be shown that the functions sin(n ω0 t) and sin(m ω0 t) are orthogonal over any interval (t0, t0 + 2π/ω0) for integer values of n and m with n ≠ m.
• It can also be shown that sin(n ω0 t) and cos(m ω0 t) are orthogonal functions, and that cos(n ω0 t) and cos(m ω0 t) with n ≠ m are also mutually orthogonal, as the numerical check below illustrates.
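Here is a small Python/NumPy sketch of that check; the values of ω0, t0, n, and m are arbitrary illustrative choices.

```python
import numpy as np

# Check that sin(n·w0·t) and sin(m·w0·t) are orthogonal over one period
# (t0, t0 + 2π/w0) when n ≠ m.
w0, t0 = 2.0, 0.3
t = np.linspace(t0, t0 + 2 * np.pi / w0, 200_001)
dt = t[1] - t[0]

for n, m in [(1, 2), (3, 5), (2, 2)]:
    inner = np.sum(np.sin(n * w0 * t) * np.sin(m * w0 * t)) * dt
    print(n, m, round(inner, 6))  # ~0 for n != m; π/w0 for n == m
```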
Component of a Signal (cont..)
Example
• For the square signal g(t), find the component of g(t) of the form sin t; in other words, approximate g(t) in terms of sin t:

g(t) ≈ c sin t,  0 ≤ t ≤ 2π

where g(t) = 1 for 0 < t < π and g(t) = −1 for π < t < 2π.
Example (cont…)
• x(t) = sin t, and

Ex = ∫_0^{2π} sin²t dt = ∫_0^{2π} (1 − cos 2t)/2 dt = π

From the equation for signals,

c = (1/Ex) ∫_{t1}^{t2} g(t) x(t) dt

and knowing that ∫ sin t dt = −cos t and cos π = −1,

c = (1/π) ∫_0^{2π} g(t) sin t dt = (1/π) [ ∫_0^{π} sin t dt + ∫_{π}^{2π} (−sin t) dt ] = 4/π

Therefore

g(t) ≈ (4/π) sin t
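The worked example is easy to verify numerically; a minimal Python/NumPy sketch follows.

```python
import numpy as np

# Verify the example: for the square signal g(t) (+1 on (0, π), −1 on
# (π, 2π)) and x(t) = sin t, the component is c = 4/π.
t = np.linspace(0.0, 2 * np.pi, 400_001)
dt = t[1] - t[0]

g = np.where(t < np.pi, 1.0, -1.0)  # the square signal from the example
x = np.sin(t)

Ex = np.sum(x * x) * dt             # ≈ π
c = np.sum(g * x) * dt / Ex
print(c, 4 / np.pi)                 # both ≈ 1.2732
```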
Orthogonality in complex signals
• For complex functions of t over an interval [t1, t2], approximate

g(t) ≈ c x(t)

The energy of the function x(t) is

Ex = ∫_{t1}^{t2} |x(t)|² dt

The error in this case is

e(t) = g(t) − c x(t)

and the energy of the error function is

Ee = ∫_{t1}^{t2} |g(t) − c x(t)|² dt
Orthogonality in complex signals
Ee = ∫_{t1}^{t2} |e(t)|² dt = ∫_{t1}^{t2} |g(t) − c x(t)|² dt

We know that

|u + v|² = (u + v)(u* + v*) = |u|² + |v|² + u*v + u v*

so

Ee = ∫_{t1}^{t2} |g(t)|² dt + |c|² ∫_{t1}^{t2} |x(t)|² dt − c ∫_{t1}^{t2} g*(t) x(t) dt − c* ∫_{t1}^{t2} g(t) x*(t) dt
Orthogonality in complex signals
The MSE is E = Ee / (t2 − t1).

Setting dE/dc = 0, we obtain

c = ( ∫_{t1}^{t2} g(t) x*(t) dt ) / ( ∫_{t1}^{t2} x(t) x*(t) dt )

So two complex functions are orthogonal over an interval if

∫_{t1}^{t2} g(t) x*(t) dt = 0   or   ∫_{t1}^{t2} g*(t) x(t) dt = 0
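A minimal Python/NumPy sketch of the complex-signal coefficient; the complex exponentials chosen for g(t) and x(t) are illustrative only.

```python
import numpy as np

# c = (∫ g(t) x*(t) dt) / (∫ x(t) x*(t) dt) for complex signals.
t = np.linspace(0.0, 2 * np.pi, 200_001)
dt = t[1] - t[0]

g = np.exp(1j * 2 * t) + 0.5 * np.exp(1j * 3 * t)
x = np.exp(1j * 2 * t)

c = (np.sum(g * np.conj(x)) * dt) / (np.sum(x * np.conj(x)) * dt)
print(np.round(c, 4))  # ≈ 1+0j: g(t) contains one unit of exp(j2t)

# exp(j2t) and exp(j3t) are orthogonal over a 2π interval:
inner = np.sum(np.exp(1j * 2 * t) * np.conj(np.exp(1j * 3 * t))) * dt
print(round(abs(inner), 4))  # ≈ 0
```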
Orthogonal Vector Space
A complete set of orthogonal vectors is referred to as an orthogonal vector space.
Consider a three-dimensional vector space:
Consider a vector A at a point (X1, Y1, Z1), and three unit vectors (Vx, Vy, Vz) in the directions of the X, Y, and Z axes respectively.
Since these unit vectors are mutually orthogonal, they satisfy
Vx · Vx = Vy · Vy = Vz · Vz = 1
Vx · Vy = Vy · Vz = Vz · Vx = 0
We can write these conditions as
Va · Vb = 1 for a = b, and 0 for a ≠ b
The vector A can be represented in terms of its components and the unit vectors as
A = X1 Vx + Y1 Vy + Z1 Vz ……(1)
Any vector in this three-dimensional space can be represented in terms of these three unit vectors.
Orthogonal Vector Space cont.…
• If we consider an n-dimensional space, then any vector A in that space can be represented as

A = C1 X1 + C2 X2 + C3 X3 + …… + Cn Xn ……(2)

All the vectors X1, X2, X3, ……, Xn are mutually orthogonal, and the set must be complete in order for any general vector A to be represented by this equation.
Taking the dot product of both sides of equation (2) with the vector Xr gives

A · Xr = C1 X1 · Xr + C2 X2 · Xr + …… + Cr Xr · Xr + …… + Cn Xn · Xr

Since the set is orthonormal, all cross terms vanish and Xr · Xr = 1, so

A · Xr = Cr Xr · Xr = Cr ……(3)

So, Cr = A · Xr.
We call the set of vectors (X1, X2, X3, ……, Xn) an orthogonal vector space.
Orthogonal Vector Space cont.…
• In general, the product Xm · Xn can be some constant kn instead of unity. When kn is unity, the set is called a normalized orthogonal set, or an orthonormal vector set/space.
• In general, for an orthogonal vector space {Xr}, r = 1, 2, 3, …, n, we have

Xm · Xn = 0 for m ≠ n,  and  Xm · Xn = kn for m = n

• For an orthogonal vector space, from equation (3) we have

A · Xr = Cr Xr · Xr = Cr kr,  so  Cr = (A · Xr) / kr
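A small Python/NumPy sketch of this expansion; the orthogonal basis and the vector A below are illustrative values.

```python
import numpy as np

# Expand A over an orthogonal (not necessarily orthonormal) set {X_r},
# using C_r = (A . X_r) / k_r with k_r = X_r . X_r.
X = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, -1.0, 0.0]),
     np.array([0.0, 0.0, 2.0])]  # mutually orthogonal; k = (2, 2, 4)

A = np.array([3.0, 1.0, 5.0])

C = [np.dot(A, Xr) / np.dot(Xr, Xr) for Xr in X]
recon = sum(c * Xr for c, Xr in zip(C, X))
print(C)      # [2.0, 1.0, 2.5]
print(recon)  # [3. 1. 5.]: A is recovered because the set is complete
```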
Orthogonal Signal Space
• We have seen that any vector can be expressed as a sum of its components along n mutually orthogonal vectors, provided these vectors form a complete coordinate system.
• Similarly, it is possible to express any function f(t) as a sum of its components along a set of mutually orthogonal functions, if these functions form a complete set.
• Consider a set of n functions g1(t), g2(t), g3(t), ……, gn(t) that are orthogonal to one another over an interval t1 to t2:

∫_{t1}^{t2} gj(t) gk(t) dt = 0 for j ≠ k,  and  ∫_{t1}^{t2} gj²(t) dt = kj

• f(t) ≈ Σ_{r=1}^{n} cr gr(t)
For the best approximation, the constants c1, c2, c3, …, cn should be chosen such that Ee, the mean square of the error function e(t), is minimum.
Orthogonal Signal Space contd…
• By definition, e(t) = f(t) − Σ_{r=1}^{n} cr gr(t), and the mean square error is

E = (1/(t2 − t1)) ∫_{t1}^{t2} [f(t) − Σ_{r=1}^{n} cr gr(t)]² dt

To minimize E, we must have

∂E/∂c1 = ∂E/∂c2 = ∂E/∂c3 = ……… = ∂E/∂cn = 0

Setting ∂E/∂cj = 0, we obtain

cj = ( ∫_{t1}^{t2} f(t) gj(t) dt ) / ( ∫_{t1}^{t2} gj²(t) dt ) = (1/kj) ∫_{t1}^{t2} f(t) gj(t) dt
Mean square error (MSE)
• Evaluation of the MSE when the optimum values of the coefficients c1, c2, c3, …, cn are chosen.
• By definition,

Ee = (1/(t2 − t1)) ∫_{t1}^{t2} [f(t) − Σ_{r=1}^{n} cr gr(t)]² dt
   = (1/(t2 − t1)) [ ∫_{t1}^{t2} f²(t) dt + Σ_{r=1}^{n} cr² ∫_{t1}^{t2} gr²(t) dt − 2 Σ_{r=1}^{n} cr ∫_{t1}^{t2} f(t) gr(t) dt ]

• But from the equation for cj we have

∫_{t1}^{t2} f(t) gj(t) dt = cj ∫_{t1}^{t2} gj²(t) dt = cj kj

• Therefore,

Ee = (1/(t2 − t1)) [ ∫_{t1}^{t2} f²(t) dt + Σ_{r=1}^{n} cr² kr − 2 Σ_{r=1}^{n} cr² kr ]
Ee = (1/(t2 − t1)) [ ∫_{t1}^{t2} f²(t) dt − Σ_{r=1}^{n} cr² kr ]

From this expression for the MSE we observe that if we increase n, i.e., if we approximate f(t) by a larger number of orthogonal functions, the error becomes smaller, as the sketch below illustrates.
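The sketch (Python/NumPy) reuses the square wave from the earlier example and the orthogonal set gr(t) = sin(r t) over (0, 2π); these choices are illustrative.

```python
import numpy as np

# Approximate the square wave f(t) by n orthogonal functions
# g_r(t) = sin(r t) over (0, 2π), with c_r = (1/k_r) ∫ f g_r dt and
# k_r = π, and watch the mean square error fall as n grows.
t = np.linspace(0.0, 2 * np.pi, 400_001)
dt = t[1] - t[0]
f = np.where(t < np.pi, 1.0, -1.0)

for n in (1, 3, 5, 11):
    approx = np.zeros_like(t)
    for r in range(1, n + 1):
        gr = np.sin(r * t)
        kr = np.sum(gr * gr) * dt      # = π for every r
        cr = np.sum(f * gr) * dt / kr  # 4/(rπ) for odd r, 0 for even r
        approx += cr * gr
    Ee = np.sum((f - approx) ** 2) * dt / (2 * np.pi)
    print(n, round(Ee, 4))  # MSE shrinks monotonically as n increases
```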
Energy of the sum of orthogonal signals
• The squared length of the sum of two orthogonal vectors equals the sum of the squared lengths of the two vectors: if z = x + y, then

|z|² = |x|² + |y|²

• Similarly, the energy of the sum of two orthogonal signals equals the sum of the energies of the two signals. If x(t) and y(t) are orthogonal signals over the interval [t1, t2] and z(t) = x(t) + y(t), then

Ez = Ex + Ey
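A quick numerical check of this energy additivity; sin t and sin 2t are orthogonal over (0, 2π), as shown earlier, and are illustrative choices.

```python
import numpy as np

# Verify E_z = E_x + E_y for orthogonal signals over (0, 2π).
t = np.linspace(0.0, 2 * np.pi, 200_001)
dt = t[1] - t[0]

x, y = np.sin(t), np.sin(2 * t)
z = x + y

Ex = np.sum(x * x) * dt
Ey = np.sum(y * y) * dt
Ez = np.sum(z * z) * dt
print(round(Ez, 4), round(Ex + Ey, 4))  # both ≈ 2π ≈ 6.2832
```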
Correlation
Consider vectors again:
• Two vectors g and x are similar if g has a large component along x.
OR
• If c has a large value, then the two vectors are similar.
c could be considered a quantitative measure of the similarity between g and x.
But such a measure would be defective: the amount of similarity should be independent of the lengths of g and x.
Correlation
Doubling g should not change the similarity between g and x. However:
• doubling g doubles the value of c, and
• doubling x halves the value of c,
so c is a faulty measure of similarity.
• Similarity between two vectors is instead indicated by the angle θ between them.
• The smaller the angle θ, the greater the similarity, and vice versa.
• Thus, a suitable measure is cn = cos θ, given by

cn = cos θ = (g · x) / (|g||x|)

which is independent of the lengths of g and x.
Correlation
cn = cos θ = (g · x) / (|g||x|)

This similarity measure cn is known as the correlation coefficient. The magnitude of cn is never greater than unity:

−1 ≤ cn ≤ 1

• The same arguments apply in defining a similarity index (correlation coefficient) for signals:
--consider the signals over the entire time interval, and
--normalize c by normalizing the two signals to have unit energies, giving

cn = (1/√(Eg Ex)) ∫_{−∞}^{∞} g(t) x(t) dt
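As a closing check, here is a small Python/NumPy sketch of this correlation coefficient; the test signals are illustrative, and they reproduce the "best friends / worst enemies / complete strangers" cases from earlier.

```python
import numpy as np

# c_n = (1/√(Eg·Ex)) ∫ g(t) x(t) dt: scale-invariant, confined to [-1, 1].
t = np.linspace(0.0, 2 * np.pi, 200_001)
dt = t[1] - t[0]

def corr(g, x):
    Eg = np.sum(g * g) * dt
    Ex = np.sum(x * x) * dt
    return np.sum(g * x) * dt / np.sqrt(Eg * Ex)

x = np.sin(t)
print(round(corr(2.0 * x, x), 4))        #  1.0: best friends; scaling changes nothing
print(round(corr(-x, x), 4))             # -1.0: worst enemies
print(round(corr(np.sin(2 * t), x), 4))  #  0.0: complete strangers (orthogonal)
```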