MATLAB Report Example

The document provides an overview of three mathematical concepts: the Gram-Schmidt method for orthogonalization of vectors, the Fourier transform for analyzing frequency components of signals, and digital image processing techniques. It includes detailed explanations of the methods, their applications, and relevant MATLAB code for implementation. Each section highlights the significance of the respective method in various fields such as linear algebra, signal processing, and image enhancement.

Exercise of the second series of MATLAB

Amir Mohammad Ahmadizad * 401411057 * TA: Amir Hossein Gholami

Question1)
Explanation of the Gram-Schmidt method: In mathematics, particularly linear algebra and numerical
analysis, the Gram–Schmidt process or Gram-Schmidt algorithm is a way of finding a set of two or more vectors
that are perpendicular to each other. By technical definition, it is a method of constructing an orthonormal basis
from a set of vectors in an inner product space, most commonly the Euclidean space R^n equipped with the standard
inner product. The Gram–Schmidt process takes a finite, linearly independent set of vectors S = {v 1, …, v k} for k
≤ n and generates an orthogonal set S ′ = {u 1, …, u k} that spans the same k -dimensional subspace of R^n as S.
The method is named after Jørgen Pedersen Gram and Erhard Schmidt, but Pierre-Simon Laplace had been familiar
with it before Gram and Schmidt. In the theory of Lie group decompositions, it is generalized by the Iwasawa
decomposition. The application of the Gram–Schmidt process to the column vectors of a full column rank matrix
yields the QR decomposition (it is decomposed into an orthogonal and a triangular matrix).
The vector projection of a vector v onto a nonzero vector u is defined as

    proj_u(v) = (⟨v, u⟩ / ⟨u, u⟩) u

Given k vectors v1, …, vk, the Gram–Schmidt process defines the vectors u1, …, uk as follows:

    u1 = v1,                             e1 = u1 / ‖u1‖
    u2 = v2 − proj_{u1}(v2),             e2 = u2 / ‖u2‖
    ⋮
    uk = vk − Σ_{j=1}^{k−1} proj_{uj}(vk),   ek = uk / ‖uk‖

The sequence u1, …, uk is the required system of orthogonal vectors, and the normalized vectors e1, …, ek
form an orthonormal set. The calculation of the sequence u1, …, uk is known as Gram–Schmidt
orthogonalization, and the calculation of the sequence e1, …, ek is known as Gram–Schmidt orthonormalization.
To check that these formulas yield an orthogonal sequence, first compute ⟨u1, u2⟩ by substituting the above
formula for u2: we get zero. Then use this to compute ⟨u1, u3⟩, again by substituting the formula for u3: we get
zero. For arbitrary k the proof is completed by mathematical induction. Geometrically, the method proceeds as
follows: to compute ui, it projects vi orthogonally onto the subspace U generated by u1, …, u(i−1), which is the
same as the subspace generated by v1, …, v(i−1). The vector ui is then defined as the difference between vi and
this projection, which is guaranteed to be orthogonal to all of the vectors in the subspace U. The Gram–Schmidt
process also applies to a linearly independent countably infinite sequence {vi}. The result is an orthogonal (or
orthonormal) sequence {ui} such that, for every natural number n, the algebraic span of v1, …, vn is the same as
that of u1, …, un. If the Gram–Schmidt process is applied to a linearly dependent sequence, it outputs the zero
vector in the ith step, whenever vi is a linear combination of v1, …, v(i−1). If an orthonormal basis is to be
produced, the algorithm should test for zero vectors in the output and discard them, because no multiple of a zero
vector can have a length of 1. The number of vectors output by the algorithm is then the dimension of the space
spanned by the original inputs. A variant of the Gram–Schmidt process using transfinite recursion applied to a
(possibly uncountably) infinite sequence of vectors (vα), α < λ, yields a set of orthonormal vectors (uα), α < κ,
with κ ≤ λ such that, for any α ≤ λ, the completion of the span of {uβ : β < min(α, κ)} is the same as that of
{vβ : β < α}. In particular, when applied to an (algebraic) basis of a Hilbert space (or, more generally, a basis of
any dense subspace), it yields a (functional-analytic) orthonormal basis. Note that in the general case the strict
inequality κ < λ often holds, even if the starting set is linearly independent, and the span of (uα), α < κ, need not
be a subspace of the span of (vα), α < λ (rather, it is a subspace of its completion).
Relevant MATLAB code:
% first problem
% Amir Mohammad Ahmadizad-401411057
% Gram-Schmidt
%% Implementation
function [u1,u2,u3,u4] = GramSchmidt(v1,v2,v3,v4)
u1 = v1;
u2 = v2 - dot(v2,u1)/dot(u1,u1)*u1;
u3 = v3 - dot(v3,u1)/dot(u1,u1)*u1 - dot(v3,u2)/dot(u2,u2)*u2;
u4 = v4 - dot(v4,u1)/dot(u1,u1)*u1 - dot(v4,u2)/dot(u2,u2)*u2 ...
       - dot(v4,u3)/dot(u3,u3)*u3; % note: project v4 (not v3) onto u3
% if the function is simply used for demonstration:
% basis = {u1,u2,u3,u4};
% for i = 1:4
%     fprintf("the basis vector u%d is: \n\n",i)
%     disp(basis{i})
% end
end
This short code performs exactly the mathematical operation described above. First, we define a function with
four outputs that takes four 10-dimensional vectors as input. Then, following the formulas of the algorithm on the
previous page, we compute their orthogonal basis vectors. The commented-out lines only print the results when the
function is called without output arguments for demonstration; they are not required for the function itself.
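The same computation can be checked independently. The following is a minimal sketch in Python/NumPy (not part of the original MATLAB report) of the classical Gram–Schmidt loop, generalized to any number of input vectors; the example vectors are arbitrary choices for illustration.

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthogonalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        u = v.astype(float)
        for b in basis:
            # subtract the projection of v onto each earlier orthogonal vector
            u = u - np.dot(v, b) / np.dot(b, b) * b
        basis.append(u)
    return basis

# Example: orthogonalize three vectors in R^3
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, 0.0, 1.0])
v3 = np.array([0.0, 1.0, 1.0])
u1, u2, u3 = gram_schmidt([v1, v2, v3])
print(np.dot(u1, u2), np.dot(u1, u3), np.dot(u2, u3))  # all ~0: pairwise orthogonal
```

For larger or ill-conditioned inputs, the modified Gram–Schmidt variant (projecting the running remainder u instead of the original v) is the numerically more stable choice.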

Question2)
Explanation of the Fourier transform: In physics, engineering and mathematics, the Fourier transform
(FT) is an integral transform that takes a function as input and outputs another function that describes the extent to
which various frequencies are present in the original function. The output of the transform is a complex-valued
function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical
operation. When a distinction needs to be made, the Fourier transform is sometimes called the frequency domain
representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical
chord into the intensities of its constituent pitches.
Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency
domain and vice versa, a phenomenon known as the uncertainty principle. The critical case for this principle is the
Gaussian function, of substantial importance in probability theory and statistics as well as in the study of physical
phenomena exhibiting normal distribution (e.g., diffusion). The Fourier transform of a Gaussian function is another
Gaussian function. Joseph Fourier introduced the transform in his study of heat transfer, where Gaussian functions
appear as solutions of the heat equation.
The Fourier transform can be formally defined as an improper Riemann integral, making it an integral transform,
although this definition is not suitable for many applications requiring a more sophisticated integration theory. For
example, many relatively simple applications use the Dirac delta function, which can be treated formally as if it
were a function, but the justification requires a mathematically more sophisticated viewpoint.
The Fourier transform can also be generalized to functions of several variables on Euclidean space, sending a
function of 3-dimensional 'position space' to a function of 3-dimensional momentum (or a function of space and
time to a function of 4-momentum). This idea makes the spatial Fourier transform very natural in the study of
waves, as well as in quantum mechanics, where it is important to be able to represent wave solutions as functions of
either position or momentum and sometimes both. In general, functions to which Fourier methods are applicable are
complex-valued, and possibly vector-valued. Still further generalization is possible to functions on groups,
which, besides the original Fourier transform on R or Rn, notably includes the discrete-time Fourier transform
(DTFT, group = Z), the discrete Fourier transform (DFT, group = Z mod N) and the Fourier series or circular
Fourier transform (group = S1, the unit circle ≈ closed finite interval with endpoints identified). The latter is
routinely employed to handle periodic functions. The fast Fourier transform (FFT) is an algorithm for computing the
DFT.
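To make the last point concrete, here is a small sketch (in Python/NumPy rather than this report's MATLAB; the input is an arbitrary random vector) comparing a direct O(N²) evaluation of the DFT definition with the FFT:

```python
import numpy as np

def dft(x):
    """Direct DFT from the definition: X_k = sum_n x_n * exp(-2j*pi*k*n/N)."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
# The FFT computes exactly the same transform, only in O(N log N) time:
print(np.max(np.abs(dft(x) - np.fft.fft(x))))  # tiny (floating-point round-off)
```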
The Fourier transform is an analysis process, decomposing a complex-valued function f (x) into its constituent
frequencies and their amplitudes. The inverse process is synthesis, which recreates f (x) from its transform.
We can start with an analogy, the Fourier series, which analyzes f(x) on a bounded interval x ∈ [−P/2, P/2]
for some positive real number P. The constituent frequencies are a discrete set of harmonics at frequencies n/P,
n ∈ Z, whose amplitude and phase are given by the analysis formula:

    c_n = (1/P) ∫_{−P/2}^{P/2} f(x) e^{−i2π(n/P)x} dx

The actual Fourier series is the synthesis formula:

    f(x) = Σ_{n∈Z} c_n e^{i2π(n/P)x}

The analogy for a function f(x) on the whole real line can be obtained formally from the analysis formula by
taking the limit as P → ∞ while at the same time taking n so that n/P → ξ ∈ R. Formally carrying this out, we
obtain, for rapidly decreasing f:

    f̂(ξ) = ∫_{−∞}^{∞} f(x) e^{−i2πξx} dx

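The analysis formula can be checked numerically. The sketch below (Python/NumPy, not from the original report; the test function cos(2π·3x/P) is a hypothetical choice) approximates the integral by a Riemann sum and recovers the expected coefficient:

```python
import numpy as np

# Analysis formula c_n = (1/P) * integral over [-P/2, P/2] of f(x)*exp(-2j*pi*n*x/P) dx,
# approximated by a Riemann sum on a uniform grid covering one full period.
P = 2.0
x = np.linspace(-P/2, P/2, 4000, endpoint=False)
dx = x[1] - x[0]
f = np.cos(2 * np.pi * 3 * x / P)  # a single harmonic at n = +/-3

def c(n):
    return np.sum(f * np.exp(-2j * np.pi * n * x / P)) * dx / P

# cos = (e^{+i..} + e^{-i..})/2, so only n = +/-3 contribute, each with weight 1/2:
print(abs(c(3)), abs(c(2)))  # ~0.5 and ~0
```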
Relevant MATLAB code:


In this problem, we are given a signal and a time vector, based on which we must fulfill the requirements of the
problem. The first requirement is the removal of additive Gaussian noise that has entered the signal. For this, we
used a simple filter that works by averaging: the movmean function, which slides a window along the signal and
averages the data inside it, and we replaced the original signal with the less noisy result.
We are then asked to obtain the Fourier transform of the filtered signal, which can easily be done using the fft
command. In order to obtain the even and odd frequencies of the transformed signal X(j2πf), we first determine a
base frequency f0, so that even and odd multiples of this base frequency can be separated using masks.

filt_signal = movmean(signal,3); % decreasing the effect of AWGN


figure(1) % demonstrating the effects of filtering and comparing the result.
subplot(3,1,1)
plot(t,filt_signal,"r");
subplot(3,1,2)
plot(t,signal,"b")
subplot(3,1,3)
plot(t,signal,"b")
hold on
plot(t,filt_signal,"r")
hold off
In the first part of the code, we want to remove the additive Gaussian noise from the given signal. For this, we use
the movmean function, which works as follows: movmean takes two inputs, A and B. A is the signal on which the
function is applied. B is a scalar that determines the length of the averaging window. Successive windows of
length B slide along the signal, and the average of each window is entered into our new vector named filt_signal.
Then, using a subplot of three plots, we show the filtered signal, the primary signal, and the comparison
between them.

• The first plot is the filtered signal produced by the movmean function.

• The second plot is the original imported signal.

• The third plot overlays both signals for comparison.
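For readers without MATLAB, the smoothing step can be sketched in Python/NumPy as follows (the signal here is a hypothetical noisy sine, not the signal supplied with the exercise; np.convolve with mode="same" handles the endpoints slightly differently from movmean, which averages fewer samples there):

```python
import numpy as np

def movmean3(signal):
    """Sketch of MATLAB movmean(signal, 3): replace each sample by the
    average of itself and its two neighbours (zero-padded at the ends)."""
    kernel = np.ones(3) / 3
    return np.convolve(signal, kernel, mode="same")

t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.2 * np.random.default_rng(1).standard_normal(t.size)
filtered = movmean3(noisy)

# Averaging reduces the error relative to the clean signal:
print(np.mean((noisy - clean)**2) > np.mean((filtered - clean)**2))  # True
```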

X = fft(filt_signal); % first we take the Fourier transform of the filtered signal


Fs = 1/mean(diff(t)); % then we calculate the Sampling frequency by 1/dt
L = length(signal); % number of data on time vector
f = (0:L-1)*Fs/L; % frequency domain, 0 to Fs-Fs/L
f0 = Fs/L; % base frequency.
figure(2)
subplot(2,1,1)
plot(f,abs(X)) % plotting the Amplitude of the signal
ylabel("Amplitude of X")
xlabel("Frequency")
subplot(2,1,2)
plot(f,angle(X)) % plotting the phase of the signal
ylabel("Angle of the phase")
xlabel("Frequency")
In this section, we deal with the Fourier transform of the filtered signal. For this, we must first perform
calculations to obtain the frequency axis. The sampling frequency is the inverse of the sampling period, so we
must first obtain the time step; with such a large amount of data, we inspect the time vector to see how it is
distributed. Accordingly, we take dt = mean(diff(t)) as the time step, so that Fs = 1/dt is the sampling frequency.
This signal has no obvious periodicity from which a period could be read off intuitively; in transforms we usually
deal with non-periodic signals whose spectra are not limited to integer frequency multiples.
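The frequency-axis calculation described above can be sketched in Python/NumPy as follows (the time vector here is a hypothetical 1 kHz example, not the one supplied with the exercise):

```python
import numpy as np

# Recover the sampling frequency and DFT frequency axis from a time vector,
# mirroring Fs = 1/mean(diff(t)) and f = (0:L-1)*Fs/L in the MATLAB code.
t = np.arange(0, 1, 0.001)        # hypothetical 1 s record sampled every 1 ms
Fs = 1.0 / np.mean(np.diff(t))    # sampling frequency from the average time step
L = t.size                        # number of samples
f = np.arange(L) * Fs / L         # frequency bins from 0 to Fs - Fs/L
f0 = Fs / L                       # bin spacing = base frequency
print(Fs, L, f0)                  # Fs ~ 1000 Hz, L = 1000, f0 ~ 1 Hz
```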

Now we want to plot the transform in the frequency domain; but since the transform values are complex, with
both real and imaginary parts, they cannot be plotted against frequency directly. So, using a subplot, we show
two plots, one for the amplitude and one for the phase.

• The first plot is related to the amplitude of the converted signal in the frequency domain.
• The second plot shows the phase angle of the data.
In this section, we want to separate the even and odd harmonics of the signal using two frequency-domain masks.
For this, we need to distinguish even and odd multiples of the base frequency, for which we use simple logical
vectors named evenMask and oddMask.
% Create masks for even and odd harmonics; since f = (0:L-1)*f0, every bin is a
% multiple of f0, so we test the bin index rather than a floating-point mod of f
k = 0:length(f)-1;
evenMask = mod(k, 2) == 0; % even multiples of f0 (including DC)
oddMask = mod(k, 2) == 1; % odd multiples of f0
% Apply masks to isolate even and odd harmonics
X_even = X .* transpose(evenMask);
X_odd = X .* transpose(oddMask);
% Inverse Fourier Transform to get back to the time domain
evenHarmonics = ifft(X_even, 'symmetric');
oddHarmonics = ifft(X_odd, 'symmetric');
figure(3)
subplot(4,1,1)
plot(t,evenHarmonics,"r")
ylabel("IFT(X(even freq))")
xlabel("time (s)")
title("IFT of even harmonics of the X")
subplot(4,1,2)
plot(t,oddHarmonics,"b")
ylabel("IFT(X(odd freq))")
xlabel("time (s)")
title("IFT of odd harmonics of the X")
subplot(4,1,3)
plot(t,filt_signal)
ylabel("filtered signal")
xlabel("time (s)")
title("original filtered signal")
subplot(4,1,4)
plot(t,evenHarmonics,"r",t,oddHarmonics,"b")
title("IFT of even and odd frequency overlap")

• The first plot is the signal obtained from the inverse Fourier transform of the even frequencies of the X signal in the time domain.
• The second plot is the signal obtained from the inverse Fourier transform of the odd frequencies of the X signal in the time domain.
• The third plot is the filtered primary signal.
• The fourth plot overlays the even- and odd-harmonic reconstructions.
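The mask-and-invert procedure can be verified on a toy signal. The sketch below (Python/NumPy, not from the original report) builds one even and one odd harmonic, then recovers each from the parity of the frequency-bin index, since every bin frequency is an integer multiple of f0:

```python
import numpy as np

Fs, L = 100.0, 100
t = np.arange(L) / Fs
f0 = Fs / L                                              # 1 Hz base frequency
sig = np.sin(2*np.pi*2*f0*t) + np.sin(2*np.pi*3*f0*t)    # one even + one odd harmonic

X = np.fft.fft(sig)
k = np.arange(L)
# Bin frequencies are k*f0, so even harmonics sit at even k (and their mirror L-k,
# which has the same parity here because L is even):
even = np.fft.ifft(np.where(k % 2 == 0, X, 0)).real
odd = np.fft.ifft(np.where(k % 2 == 1, X, 0)).real

print(np.max(np.abs(even - np.sin(2*np.pi*2*f0*t))))  # ~0: even part recovered
print(np.max(np.abs(odd - np.sin(2*np.pi*3*f0*t))))   # ~0: odd part recovered
```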
Question3)
Explanation of image processing: Digital image processing is the use of a digital computer to process
digital images through an algorithm. As a subcategory or field of digital signal processing, digital image processing
has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the
input data and can avoid problems such as the build-up of noise and distortion during processing. Since images are
defined over two dimensions (perhaps more) digital image processing may be modeled in the form of
multidimensional systems. The generation and development of digital image processing are mainly affected by three
factors: first, the development of computers; second, the development of mathematics (especially the creation and
improvement of discrete mathematics theory); third, the demand for a wide range of applications in environment,
agriculture, military, industry and medical science has increased.

Many of the techniques of digital image processing, or digital picture processing as it often was called,
were developed in the 1960s, at Bell Laboratories, the Jet Propulsion Laboratory, Massachusetts Institute
of Technology, University of Maryland, and a few other research facilities, with application to satellite
imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and
photograph enhancement. The purpose of early image processing was to improve the quality of the image, that
is, to improve its visual effect for human viewers. In image processing, the input is a low-quality image, and the
output is an image with improved quality. Common image processing includes
image enhancement, restoration, encoding, and compression. The first successful application was at the
American Jet Propulsion Laboratory (JPL). They used image processing techniques such as geometric
correction, gradation transformation, noise removal, etc. on the thousands of lunar photos sent back by the
Space Detector Ranger 7 in 1964, taking into account the position of the Sun and the environment of the
Moon. The successful mapping of the Moon's surface by computer was a landmark result. Later, more
complex image processing was performed on the nearly 100,000 photos sent back by
the spacecraft, so that the topographic map, color map and panoramic mosaic of the Moon were obtained,
which achieved extraordinary results and laid a solid foundation for human landing on the Moon.

The cost of processing was fairly high, however, with the computing equipment of that era. That
changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated
hardware became available. This led to images being processed in real-time, for some dedicated problems
such as television standards conversion. As general-purpose computers became faster, they started to take
over the role of dedicated hardware for all but the most specialized and computer-intensive operations.
With the fast computers and signal processors available in the 2000s, digital image processing has
become the most common form of image processing, and is generally used because it is not only the most
versatile method, but also the cheapest.

The basis for modern image sensors is metal–oxide–semiconductor (MOS) technology, which
originates from the invention of the MOSFET (MOS field-effect transistor) by Mohamed M. Atalla and
Dawon Kahng at Bell Labs in 1959. This led to the development of digital semiconductor image sensors,
including the charge-coupled device (CCD) and later the CMOS sensor.

The charge-coupled device was invented by Willard S. Boyle and George E. Smith at Bell Labs in
1969. While researching MOS technology, they realized that an electric charge was the analogy of the
magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to
fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge
could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in
the first digital video cameras for television broadcasting.
The NMOS active-pixel sensor (APS) was invented by Olympus in Japan during the mid-1980s. This
was enabled by advances in MOS semiconductor device fabrication, with MOSFET scaling reaching
smaller micron and then sub-micron levels. The NMOS APS was fabricated by Tsutomu Nakamura's
team at Olympus in 1985.[14] The CMOS active-pixel sensor (CMOS sensor) was later developed by Eric
Fossum's team at the NASA Jet Propulsion Laboratory in 1993. By 2007, sales of CMOS sensors had
surpassed CCD sensors.

MOS image sensors are widely used in optical mouse technology. The first optical mouse, invented by
Richard F. Lyon at Xerox in 1980, used a 5 μm NMOS integrated circuit sensor chip. Since the first
commercial optical mouse, the IntelliMouse introduced in 1999, most optical mouse devices use CMOS
sensors.

An important development in digital image compression technology was the discrete cosine transform
(DCT), a lossy compression technique first proposed by Nasir Ahmed in 1972. DCT compression became
the basis for JPEG, which was introduced by the Joint Photographic Experts Group in 1992. JPEG
compresses images down to much smaller file sizes, and has become the most widely used image file
format on the Internet. Its highly efficient DCT compression algorithm was largely responsible for the
wide proliferation of digital images and digital photos, with several billion JPEG images produced every
day as of 2015.

Medical imaging techniques produce very large amounts of data, especially from CT, MRI and PET
modalities. As a result, storage and communications of electronic image data are prohibitive without the
use of compression. JPEG 2000 image compression is used by the DICOM standard for storage and
transmission of medical images. The cost and feasibility of accessing large image data sets over low or
various bandwidths are further addressed by use of another DICOM standard, called JPIP, to enable
efficient streaming of the JPEG 2000 compressed image data.

Electronic signal processing was revolutionized by the wide adoption of MOS technology in the 1970s.
MOS integrated circuit technology was the basis for the first single-chip microprocessors and
microcontrollers in the early 1970s, and then the first single-chip digital signal processor (DSP) chips in
the late 1970s. DSP chips have since been widely used in digital image processing.

The discrete cosine transform (DCT) image compression algorithm has been widely implemented in
DSP chips, with many companies developing DSP chips based on DCT technology. DCTs are widely
used for encoding, decoding, video coding, audio coding, multiplexing, control signals, signaling, analog-
to-digital conversion, formatting luminance and color differences, and color formats such as YUV444 and
YUV411. DCTs are also used for encoding operations such as motion estimation, motion compensation,
inter-frame prediction, quantization, perceptual weighting, entropy encoding, variable encoding, and
motion vectors, and decoding operations such as the inverse operation between different color formats
(YIQ, YUV and RGB) for display purposes. DCTs are also commonly used for high-definition television
(HDTV) encoder/decoder chips.

Digital filters are used to blur and sharpen digital images. Filtering can be performed by:

• convolution with specifically designed kernels (filter array) in the spatial domain
• masking specific frequency regions in the frequency (Fourier) domain
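As a minimal illustration of the first approach, the following sketch (Python/NumPy, not part of the original report; the kernels and the toy step-edge image are standard illustrative choices) convolves an image with a 3×3 blur kernel and a 3×3 sharpening kernel:

```python
import numpy as np

def conv2d_same(img, kernel):
    """Spatial-domain 2D convolution with edge-replicated padding (same size output)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # flip the kernel for true convolution (irrelevant for symmetric kernels)
            out[i, j] = np.sum(padded[i:i+kh, j:j+kw] * kernel[::-1, ::-1])
    return out

blur = np.ones((3, 3)) / 9.0                                # averaging kernel: low-pass
sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])   # accentuates edges

img = np.zeros((8, 8))
img[:, 4:] = 1.0                        # toy image: a vertical step edge

print(conv2d_same(img, blur)[4, 4])     # ~0.667: the edge is smeared by the blur
print(conv2d_same(img, sharpen)[4, 6])  # 1.0: constant regions unchanged (kernel sums to 1)
```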

Relevant MATLAB code:


The description of the code is expressed as a comment inside the code itself.
