Parseval's Energy Theorem states that the total energy of a signal in the time domain is equal to the total energy of its Fourier transform in the frequency domain. More precisely, for a signal x(t) with Fourier transform X(ω), the theorem is expressed mathematically as:
∫ |x(t)|² dt = (1/2π) ∫ |X(ω)|² dω, with both integrals taken over (−∞, ∞).
This means the energy computed by integrating the squared magnitude of the signal over time equals the energy computed by integrating the squared magnitude of its frequency spectrum, scaled by 1/2π.
In other words, Parseval's theorem is a statement of conservation of energy between the time and
frequency domains, ensuring no energy is lost in the Fourier transform process. If the signal is
periodic, the theorem relates the average power of the signal to the sum of the squares of the
magnitudes of its harmonic components (Fourier series coefficients).
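The discrete-time counterpart, Σ|x[n]|² = (1/N) Σ|X[k]|², is easy to verify numerically. Below is a minimal sketch assuming NumPy; the test signal and its length are arbitrary choices.

```python
import numpy as np

# Discrete Parseval check: sum |x[n]|^2 equals (1/N) * sum |X[k]|^2
rng = np.random.default_rng(0)
x = rng.standard_normal(64)                    # arbitrary test signal
X = np.fft.fft(x)                              # its DFT

time_energy = np.sum(np.abs(x) ** 2)
freq_energy = np.sum(np.abs(X) ** 2) / len(x)  # 1/N scaling for the DFT pair

print(time_energy, freq_energy)                # agree to numerical precision
```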
Show that an LTI system is BIBO (bounded-input, bounded-output) stable if its impulse response is absolutely summable.
If the input is bounded, |x(n)| ≤ M < ∞ for all n, then the convolution sum gives |y(n)| = |Σk h(k) x(n−k)| ≤ Σk |h(k)| |x(n−k)| ≤ M Σk |h(k)|. Hence, if Σk |h(k)| < ∞ (absolute summability), every bounded input produces a bounded output, and the system is BIBO stable.
Parameter        | Energy Signal      | Power Signal
Total Energy E   | Finite             | Infinite
Average Power P  | Zero               | Finite
Signal Type      | Non-periodic       | Periodic
Example          | Exponential signal | Sinusoidal, square wave
Parameter              | Causal System                | Non-Causal System
Output depends on      | Present and past inputs only | Past, present, and future inputs
Physical realizability | Yes                          | No
Impulse response       | Zero for t < 0               | May be non-zero for t < 0
Example                | Resistor, real-time systems  | Theoretical filters using future inputs
Parameter              | Even Signal                     | Odd Signal
Symmetry               | Symmetrical about vertical axis | Anti-symmetrical about vertical axis
Mathematical condition | x(t) = x(−t)                    | x(−t) = −x(t)
Value at t = 0         | Can be non-zero                 | Must be zero
Example                | Cosine wave                     | Sine wave
Area under signal      | Twice the area on one side      | Zero
Sampling is the process of converting a continuous-time signal x(t) into a discrete-time signal
x[n] = x(nT) by taking values at uniform intervals T (sampling period). The sampling frequency is
fs = 1/T
According to the Nyquist Sampling Theorem, if a continuous-time signal is bandlimited with maximum frequency fm (i.e., no frequency components above fm), then the sampling frequency must satisfy fs > 2fm.
This minimum sampling rate 2fm is called the Nyquist rate. Sampling above this rate ensures the original signal can be perfectly reconstructed from its samples.
Reconstruction is the process of recovering the continuous-time signal x(t) from its discrete samples.
It is typically done by passing the sampled signal through an ideal lowpass filter with cutoff frequency
fm. The ideal reconstruction uses the Whittaker-Shannon interpolation formula, which involves sinc
functions centered at sample points and scaled by sample values.
This ideal lowpass filter removes spectral replicas introduced by sampling, recovering the original
signal perfectly if the Nyquist criterion is met.
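A minimal sketch of this interpolation, assuming NumPy; the sampling rate, tone frequency, and finite sample window are illustrative assumptions (the ideal formula sums over all n, so truncating the window leaves a small residual error).

```python
import numpy as np

# Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc((t - nT)/T)
fs = 8.0                                  # assumed sampling rate, Hz
T = 1.0 / fs
n = np.arange(-32, 33)                    # finite window of sample indices
x_n = np.cos(2 * np.pi * 1.5 * n * T)     # samples of a 1.5 Hz cosine (< fs/2)

t = np.linspace(-1, 1, 1000)              # dense grid for the reconstruction
# np.sinc(u) is the normalized sinc, sin(pi*u)/(pi*u)
x_rec = np.sum(x_n[:, None] * np.sinc((t[None, :] - n[:, None] * T) / T), axis=0)

err = np.max(np.abs(x_rec - np.cos(2 * np.pi * 1.5 * t)))
print(err)                                # small; shrinks as the window grows
```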
Aliasing occurs when the sampling frequency fs is less than twice the maximum frequency fm of the signal: fs ≤ 2fm.
In this case, spectral replicas of the signal's spectrum overlap in the frequency domain, causing
different frequency components to become indistinguishable after sampling. This overlap results in
distortion because the reconstructed signal contains frequency components that were not present in
the original signal, making perfect reconstruction impossible in general.
To avoid aliasing, signals are often passed through an anti-aliasing lowpass filter before sampling to
remove frequency components above fs/2.
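A small numerical illustration of this folding, assuming NumPy; the frequencies are arbitrary choices. Sampled at 10 Hz, a 7 Hz tone produces exactly the same samples as its 3 Hz alias (10 - 7 = 3).

```python
import numpy as np

# Aliasing: fs = 10 Hz cannot represent 7 Hz (> fs/2); it folds to 10 - 7 = 3 Hz
fs = 10.0
t = np.arange(20) / fs                  # 20 sample instants

x_high = np.cos(2 * np.pi * 7.0 * t)    # 7 Hz tone, above fs/2 = 5 Hz
x_alias = np.cos(2 * np.pi * 3.0 * t)   # its 3 Hz alias

print(np.allclose(x_high, x_alias))     # True: the samples are identical
```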
The Region of Convergence (ROC) for the Z-transform of a discrete-time signal x[n] is defined as the
set of points in the complex z-plane for which the Z-transform sum converges to a finite value.
Since the Z-transform is a power series in 1/z, absolute convergence depends only on the magnitude |z|, not on its phase.
The ROC typically forms a ring or disc centered at the origin in the z-plane, and it cannot include any
poles of X(z) (points where X(z) becomes infinite).
The relation z = e^(sT) maps points from the s-plane to the z-plane:
The imaginary axis in the s-plane (σ=0) maps to the unit circle in the z-plane (r=1).
The left half of the s-plane (σ<0) maps inside the unit circle (r<1) in the z-plane, corresponding to
stable discrete-time systems.
The right half of the s-plane (σ>0) maps outside the unit circle (r>1) in the z-plane, corresponding
to unstable discrete-time systems.
The s-plane and z-plane are connected through the exponential mapping z = e^(sT), which translates
continuous-time system characteristics into discrete-time equivalents. This mapping preserves
stability and frequency relations but introduces periodicity and aliasing in the discrete domain.
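The mapping is easy to check numerically. A short sketch assuming NumPy; the sampling period and the three sample points (one in each region of the s-plane) are arbitrary choices.

```python
import numpy as np

# z = e^{sT} for points in the left half-plane, on the jw-axis, and in the
# right half-plane of the s-plane (T is an assumed sampling period)
T = 0.1

for s in [complex(-2, 5), complex(0, 5), complex(2, 5)]:
    z = np.exp(s * T)
    if np.isclose(abs(z), 1.0):
        region = "on"
    elif abs(z) < 1.0:
        region = "inside"
    else:
        region = "outside"
    print(f"s = {s}: |z| = {abs(z):.3f} -> {region} the unit circle")
```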
Properties of ROC:
The ROC is a ring or disc in the z-plane centered at the origin, and it contains no poles of X(z). For a finite-duration signal the ROC is the entire z-plane (except possibly z = 0 and/or z = ∞); for a right-sided signal it is the exterior of a circle through the outermost pole; for a left-sided signal it is the interior of a circle through the innermost pole; for a two-sided signal it is a ring between two poles. An LTI system is stable if and only if the ROC of its system function includes the unit circle.
Relation between Z transform and DTFT:
The DTFT is the Z-transform evaluated on the unit circle: X(e^(jω)) = X(z) with z = e^(jω). The DTFT therefore exists (converges) only when the ROC of X(z) includes the unit circle |z| = 1.
Answer:
Zero padding is a signal processing technique where zeros are added to the end of a time-domain
signal or sequence to increase its length, often to the next power of two. For example, if a signal has
10 samples, zero padding might add 6 zeros to make the total length 16, which is a power of two.
Uses of Zero Padding
Efficient FFT Computation: Many FFT algorithms are optimized for input lengths that are powers
of two. Zero padding helps achieve this, speeding up the computation.
Spectral Interpolation: By zero padding, the discrete Fourier transform (DFT) produces a more
densely sampled frequency spectrum. This interpolation helps in visualizing the spectrum with
finer detail, though it does not increase the fundamental frequency resolution.
Amplitude Estimation: Zero padding can improve the accuracy of amplitude estimates of
sinusoidal components in the frequency domain, especially when the signal frequencies do not
align exactly with DFT bins. This helps in better identifying signal amplitudes.
Better Frequency Discrimination: Adding zeros allows for better discrimination between closely
spaced frequency components by increasing the number of FFT points, which refines the
frequency axis sampling.
Handling Non-Periodic Signals: When analyzing time-limited (non-periodic) signals in blocks or
frames, zero padding helps in treating each block as finite-duration, enabling spectral
interpolation and analysis without assuming periodicity.
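A short sketch of the 10-to-16 example above, assuming NumPy; np.fft.fft pads with zeros when asked for more points than the signal has.

```python
import numpy as np

# Zero padding a 10-sample tone to 16 and 256 FFT points
x = np.cos(2 * np.pi * 0.22 * np.arange(10))   # arbitrary 10-sample signal

X10 = np.fft.fft(x)               # 10-point DFT: coarse frequency grid
X16 = np.fft.fft(x, n=16)         # padded to the next power of two
X256 = np.fft.fft(x, n=256)       # heavy padding: densely interpolated spectrum

# Bin spacing shrinks from fs/10 to fs/256, but the true resolution
# (mainlobe width) is still set by the original 10-sample duration.
print(len(X10), len(X16), len(X256))
```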
Answer:
https://youtu.be/5aPIak0q9yA?si=FpQKBslssq3_YQZe
A twiddle factor is a complex exponential coefficient used in Fast Fourier Transform (FFT) algorithms, particularly in the Cooley–Tukey FFT, to combine smaller discrete Fourier transforms into a larger one efficiently. It is mathematically defined as:
W_N = e^(−j2π/N), with the general factor W_N^k = e^(−j2πk/N), k = 0, 1, …, N−1.
Role in FFT
In each stage of the FFT, twiddle factors multiply the "odd" indexed inputs before combining
them with the "even" indexed inputs.
This multiplication adjusts the phase and amplitude, effectively rotating the input vectors to
their correct positions in the frequency domain.
They are central to the butterfly operations that reduce the computational complexity of the
DFT from O(N²) to O(N log N).
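A quick sketch of the twiddle factors for N = 8, assuming NumPy; it also checks the symmetry W_N^(k+N/2) = −W_N^k that butterflies exploit.

```python
import numpy as np

# Twiddle factors W_N^k = exp(-j*2*pi*k/N) for N = 8
N = 8
k = np.arange(N)
W = np.exp(-2j * np.pi * k / N)

print(np.round(W, 3))                         # unit-magnitude phase rotations
print(np.allclose(W[N // 2:], -W[:N // 2]))   # symmetry used by butterflies
```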
The correlation function measures the similarity between a signal and a time-shifted version of itself
(autocorrelation) or between two different signals (cross-correlation) as a function of the time lag. It
reveals how the signal values at different times are related statistically.
For discrete-time signals, the autocorrelation function rxx(m) is defined as the expected value of the product of the signal x(n) and its shifted version x(n−m): rxx(m) = E[x(n) x(n−m)]. For finite signals, this is often computed as a sum over the signal duration.
The correlation function can be defined for different types of signals:
Aperiodic signals: The autocorrelation is an aperiodic function.
Periodic signals: The autocorrelation is periodic with the same period as the signal.
Random signals: The correlation function captures statistical dependencies and can be used to
identify structure or predictability in the signal.
The power spectrum (or power spectral density, PSD) describes how the power of a signal is
distributed over frequency. It is obtained by taking the Fourier transform of the autocorrelation
function. This Fourier relationship means the power spectrum and correlation function form a Fourier
transform pair, providing two equivalent descriptions of the signal’s statistical structure in time and
frequency domains.
For a periodic signal, the power spectrum consists of discrete spectral lines at harmonics of the
fundamental frequency. For a random or noise-like signal, the power spectrum is typically continuous
and spread over frequencies, indicating lack of periodic structure.
The power spectrum reveals repetitive or correlated patterns in the signal, with more predictable
signals having power concentrated in narrow frequency bands, while random signals show a more
uniform spread.
A stationary process is a fundamental concept in DSP and stochastic signal analysis, describing a
random process whose statistical properties do not change over time. This property makes stationary
processes critical for modeling, analyzing, and processing signals that exhibit consistent behavior over
time.
A stochastic process X(t) is said to be strictly stationary if the joint probability distribution of any
collection of samples is invariant under time shifts. In other words, for any time shift h, the statistical
behavior remains the same as the non-shifted original. This implies that all moments (mean, variance,
higher order moments) and the entire distribution remain constant with respect to time.
Aspect                 | Strict-Sense Stationary (SSS)                 | Wide-Sense Stationary (WSS)
Statistical invariance | All moments and joint distributions invariant | Mean and autocovariance invariant
Mean                   | Constant                                      | Constant
Autocovariance         | Depends only on lag τ                         | Depends only on lag τ
Practical usage        | Theoretical ideal, often difficult to verify  | Widely used in DSP due to simpler conditions
Examples               | Random cosine with uniform phase              | White noise, many noise processes
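The random cosine from the table can be checked by simulation. A sketch assuming NumPy: for X(t) = cos(ωt + φ) with φ uniform on [0, 2π), the ensemble mean is constant (zero) and the correlation between two instants depends only on their lag.

```python
import numpy as np

# WSS check for a random-phase cosine, via Monte Carlo over the phase
rng = np.random.default_rng(2)
w = 2 * np.pi                                # arbitrary angular frequency
phi = rng.uniform(0, 2 * np.pi, 100_000)     # realizations of the random phase

t1, t2 = 0.3, 0.8                            # two arbitrary time instants
x1 = np.cos(w * t1 + phi)
x2 = np.cos(w * t2 + phi)

print(x1.mean(), x2.mean())                  # both ~ 0: constant mean
# theory: E[X(t1) X(t2)] = 0.5 * cos(w * (t2 - t1)), a function of lag only
print(np.mean(x1 * x2), 0.5 * np.cos(w * (t2 - t1)))
```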
Optimal filtering with ARMA (Autoregressive Moving Average) models is a powerful approach in
digital signal processing (DSP) and time series analysis for estimating or predicting signals corrupted
by noise. The ARMA model captures both the autoregressive (AR) behavior, where current values
depend on past values, and the moving average (MA) behavior, where current values depend on past
noise terms. This dual structure makes ARMA models highly effective for modeling a wide range of
stationary stochastic processes.
ARMA models are widely used to represent stationary time series with finite parameters, making
them suitable for optimal filtering tasks. The Kalman filter is commonly employed alongside ARMA
models to perform optimal state estimation and prediction. It recursively computes the conditional
mean and covariance of the signal’s state, providing minimum MSE estimates for linear Gaussian
systems modeled by ARMA processes.
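A minimal scalar sketch of this idea, assuming NumPy: an AR(1) process (the simplest ARMA model) observed in white noise, tracked by a Kalman filter. All model parameters are illustrative assumptions.

```python
import numpy as np

# Kalman filtering of an AR(1) state observed in additive white noise:
#   s[n] = a*s[n-1] + w[n],   x[n] = s[n] + v[n]
rng = np.random.default_rng(3)
a, q, r = 0.95, 0.1, 1.0        # AR coefficient, process var., measurement var.
N = 500

s = np.zeros(N)
for n in range(1, N):           # simulate the AR(1) signal
    s[n] = a * s[n - 1] + rng.normal(scale=np.sqrt(q))
x = s + rng.normal(scale=np.sqrt(r), size=N)    # noisy observations

s_hat, P = 0.0, 1.0             # initial estimate and its error variance
est = np.zeros(N)
for n in range(N):
    s_pred = a * s_hat          # predict from the AR(1) model
    P_pred = a * a * P + q
    K = P_pred / (P_pred + r)   # Kalman gain
    s_hat = s_pred + K * (x[n] - s_pred)        # update with the measurement
    P = (1 - K) * P_pred
    est[n] = s_hat

print(np.mean((x - s) ** 2), np.mean((est - s) ** 2))   # filter MSE is lower
```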
Advantages of ARMA-Based Optimal Filtering
Compact Parametric Representation: ARMA models efficiently capture the dynamics of many
natural and engineered signals with a finite number of parameters.
Statistical Optimality: When the underlying process follows an ARMA model and noise is
Gaussian, the Kalman filter combined with ARMA modeling yields minimum MSE estimates,
making it optimal in the least-squares sense.
Flexibility: ARMA models can approximate a wide variety of spectral shapes, including those with
peaks and dips, enabling effective filtering in diverse applications such as speech processing,
biomedical signal analysis, and communications.
The Wiener filter is an optimal linear filter used in digital signal processing to estimate a desired signal from a noisy observation by minimizing the mean square error (MSE) between the estimated and the true signals. Its goal is to produce the best linear estimate of a desired signal corrupted by additive noise, assuming known statistical properties of both the signal and the noise.
Assumptions:
Both the signal and noise are stationary stochastic processes with known autocorrelation or
spectral characteristics.
Noise is additive and uncorrelated with the signal.
The filter is linear and time-invariant (LTI).
The filter can be causal or non-causal depending on the application.
Key Characteristics:
Optimality Criterion: Minimizes the mean square error (MSE) between the estimated signal and the
true signal, making it a minimum mean square error (MMSE) estimator.
Statistical Approach: Unlike deterministic filters designed for a fixed frequency response, the Wiener
filter uses knowledge of the signal and noise spectra to design the filter.
Implementation: Often implemented in the frequency domain using the Discrete Fourier Transform
(DFT), where the filter is expressed as a function of signal and noise power spectra.
Advantages:
Provides the best linear estimate in the MSE sense under the assumptions of stationarity and
known spectra.
Adaptable to different noise and signal conditions if spectral information is available.
Can be implemented efficiently in the frequency domain.
Disadvantages:
Requires accurate knowledge or estimation of signal and noise statistics, which may not always
be available.
Performance degrades if the assumptions (stationarity, linearity, known spectra) are violated.
Computationally more intensive than simple deterministic filters, especially for large data sets.
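As noted above, the filter is often realized in the frequency domain as H(ω) = Sxx(ω) / (Sxx(ω) + Snn(ω)) for uncorrelated additive noise. A non-causal sketch assuming NumPy; here the signal spectrum is computed from the clean signal itself, standing in for the "known spectra" assumption.

```python
import numpy as np

# Non-causal frequency-domain Wiener filter: H = Sxx / (Sxx + Snn)
rng = np.random.default_rng(4)
N = 1024
n = np.arange(N)

s = np.sin(2 * np.pi * 0.02 * n)            # desired narrowband signal
x = s + rng.normal(size=N)                  # observation with white noise

Sxx = np.abs(np.fft.fft(s)) ** 2 / N        # signal PSD (assumed known)
Snn = np.ones(N)                            # white noise PSD, variance 1
H = Sxx / (Sxx + Snn)                       # Wiener frequency response

s_hat = np.fft.ifft(H * np.fft.fft(x)).real # apply the filter
print(np.mean((x - s) ** 2), np.mean((s_hat - s) ** 2))  # MSE reduced
```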
A butterfly in the context of the Fast Fourier Transform (FFT) is a fundamental computational building
block that combines results of smaller discrete Fourier transforms (DFTs) into larger DFTs, or breaks a
larger DFT into smaller subtransforms. It is named for the distinctive shape of its data-flow diagram,
which resembles a butterfly when drawn for the common radix-2 FFT case.
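A compact recursive sketch of the radix-2 case, assuming NumPy for the comparison; the two assignments inside the loop are one butterfly, combining the half-size DFTs with a twiddle factor.

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 Cooley-Tukey FFT (len(x) must be a power of two)."""
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    even = fft_radix2(x[0::2])              # DFT of even-indexed inputs
    odd = fft_radix2(x[1::2])               # DFT of odd-indexed inputs
    X = np.zeros(N, dtype=complex)
    for k in range(N // 2):
        t = np.exp(-2j * np.pi * k / N) * odd[k]   # twiddle * odd part
        X[k] = even[k] + t                         # butterfly: upper output
        X[k + N // 2] = even[k] - t                # butterfly: lower output
    return X

x = np.random.default_rng(5).standard_normal(16)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))   # True
```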
Procedure for Designing FIR Filter Using Frequency Sampling Method
The frequency sampling method designs an FIR filter by specifying its desired frequency response at a
finite set of equally spaced frequency points and then computing the filter coefficients via inverse
discrete Fourier transform (IDFT). This method is flexible and can approximate arbitrary frequency
responses.
Specify Filter Length N: Choose the number of filter coefficients N (filter length). Typically, N is
odd for linear phase filters.
Define Desired Frequency Response Hd(ω): Specify the desired frequency response of the filter
over the normalized frequency range [0,2π). This can be an ideal lowpass, highpass, bandpass, or
arbitrary response.
Sample the Desired Frequency Response: Sample Hd(ω) at N equally spaced frequency points ωk = 2πk/N, giving H(k) = Hd(2πk/N) for k = 0, 1, …, N−1.
Apply Symmetry for Linear Phase (if required): For linear phase FIR filters with symmetric
impulse response, impose conjugate symmetry on H(k) to ensure real coefficients.
Compute Impulse Response via Inverse DFT: Calculate the filter coefficients h(n) by taking the inverse discrete Fourier transform of the frequency samples: h(n) = (1/N) Σ H(k) e^(j2πkn/N), summing k from 0 to N−1, for n = 0, 1, …, N−1.
This yields the FIR filter coefficients in the time domain.
Implement the Filter: Use the computed coefficients h(n) in convolution with the input signal to
perform filtering.
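A minimal sketch of the whole procedure, assuming NumPy; the length N = 15 and the lowpass band edge (passband bins 0-3) are illustrative assumptions. The linear-phase term makes h(n) real and symmetric.

```python
import numpy as np

# Frequency-sampling design of a linear-phase lowpass FIR filter
N = 15
k = np.arange(N)

# desired magnitude samples on the grid w_k = 2*pi*k/N, placed with
# conjugate symmetry (bins k and N-k mirror each other)
Hd = np.where((k <= 3) | (k >= N - 3), 1.0, 0.0)

# linear-phase factor e^{-j*w_k*(N-1)/2} so the IDFT yields real h(n)
H = Hd * np.exp(-1j * 2 * np.pi * k * (N - 1) / (2 * N))

h = np.fft.ifft(H).real            # filter coefficients (imag part ~ 0)
print(np.allclose(h, h[::-1]))     # True: symmetric impulse response
```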
Bilinear Transformation Advantages and Disadvantages:
Advantages                                                                    | Disadvantages
Preserves stability (maps stable analog filter to stable digital filter)     | Frequency warping causes nonlinear frequency mapping
One-to-one mapping of analog frequency axis to digital frequency axis        | Requires prewarping of frequencies to correct distortion
Avoids aliasing of frequency components                                      | Phase response distortion due to warping
Enables use of existing analog filter designs                                | Transition band shape can be altered
Applicable to all standard filter types (lowpass, highpass, bandpass, bandstop) | Not ideal for filters requiring exact frequency correspondence
Prewarping
Prewarping compensates for the bilinear transform's frequency warping: each critical digital frequency ωd is converted to a prewarped analog frequency ωa = (2/T) tan(ωd T / 2) before the analog prototype is designed, so that the resulting digital filter meets its specifications exactly at those frequencies.
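A sketch of prewarping plus the bilinear transform for a first-order RC lowpass, assuming NumPy and SciPy; the sampling rate and cutoff are arbitrary choices.

```python
import numpy as np
from scipy import signal

fs = 1000.0                        # assumed sampling rate, Hz
T = 1.0 / fs
f_c = 100.0                        # desired digital cutoff, Hz

w_d = 2 * np.pi * f_c                    # desired digital cutoff (rad/s)
w_a = (2 / T) * np.tan(w_d * T / 2)      # prewarped analog cutoff

b_a, a_a = [w_a], [1.0, w_a]             # analog prototype H(s) = w_a/(s + w_a)
b_d, a_d = signal.bilinear(b_a, a_a, fs=fs)   # bilinear transform to H(z)

# check: thanks to prewarping, the response is ~ -3 dB exactly at f_c
w, h = signal.freqz(b_d, a_d, worN=[2 * np.pi * f_c / fs])
print(20 * np.log10(abs(h[0])))          # ~ -3.01 dB
```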
The Impulse Invariant Method is a technique used to design Infinite Impulse Response (IIR) digital
filters from analog filters by sampling the analog filter’s impulse response. The key idea is to make the
digital filter's impulse response a sampled version of the analog filter’s impulse response, thereby
preserving the shape of the impulse response at the sampling instants.
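A first-order sketch, assuming NumPy: for H(s) = 1/(s + α), the impulse response is h(t) = e^(−αt), and sampling it gives the digital filter H(z) = T/(1 − e^(−αT) z^(−1)). The pole value and sampling period are arbitrary choices.

```python
import numpy as np

# Impulse-invariant design for H(s) = 1/(s + alpha)
alpha, T = 5.0, 0.01
n = np.arange(50)

h_analog = np.exp(-alpha * n * T)            # analog impulse response at t = nT

# digital filter: y[i] = T*x[i] + e^{-alpha*T} * y[i-1]
b0, a1 = T, np.exp(-alpha * T)
h_digital = np.zeros(len(n))
x = np.zeros(len(n)); x[0] = 1.0             # unit impulse input
for i in range(len(n)):
    h_digital[i] = b0 * x[i] + a1 * (h_digital[i - 1] if i > 0 else 0.0)

print(np.allclose(h_digital, T * h_analog))  # True: sampled shape preserved
```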
The Gibbs phenomenon refers to the oscillatory behavior—specifically overshoots and undershoots—
that occur near a jump discontinuity when approximating a discontinuous function using a finite
number of terms in its Fourier series or truncated frequency components. This effect is commonly
observed in signal processing, especially in digital filter design, where ideal filters with abrupt
transitions are approximated by finite impulse response (FIR) filters.
When a discontinuous signal (like a square wave) is approximated by a finite sum of sinusoids (Fourier
series), the partial sums exhibit ripples near the discontinuities that do not vanish even as the number
of terms increases.
The overshoot magnitude is approximately 9% of the jump size and remains constant regardless of
the number of terms, though the oscillations become narrower and more localized near the
discontinuity.
In signal processing, this manifests as "ringing" or oscillations near sharp transitions caused by
truncating infinite Fourier series or ideal filter impulse responses.
Effects of Gibbs Phenomenon on Digital Filters
Ripple in Passband and Stopband: The Gibbs phenomenon causes ripples or oscillations in both
the passband and stopband of FIR filters, leading to deviations from the ideal flat response.
Ringing Artifacts: In the time domain, it appears as ringing near sharp edges or transitions in
signals after filtering.
Fixed Overshoot Magnitude: The height of the largest ripples remains roughly constant
regardless of filter length, limiting how closely the filter can approximate an ideal sharp cutoff.
Increased Number of Ripples with Filter Length: Increasing the filter length increases the number
of ripples but decreases their width; the main lobe of the filter’s frequency response becomes
narrower, improving selectivity but not eliminating ripples.
Distortion of Signal: The oscillations can cause distortion, especially in applications requiring
sharp transitions or minimal artifacts (e.g., image processing, audio).
The primary method to reduce the effects of Gibbs phenomenon in digital filter design is windowing:
Instead of abruptly truncating the infinite ideal impulse response (which corresponds to a rectangular
window), multiply it by a smooth window function w(n) that tapers gradually to zero at the edges.
Common windows include Hamming, Hanning, Blackman, Kaiser, and Bartlett windows.
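A sketch comparing abrupt truncation with a Hamming window, assuming NumPy; the filter length, cutoff, and stopband measurement band are illustrative assumptions. The truncated design's stopband ripple stays tens of decibels higher than the Hamming design's.

```python
import numpy as np

# Ideal lowpass impulse response, truncated vs. Hamming-windowed
N = 61                                 # filter length (odd for linear phase)
n = np.arange(N) - (N - 1) / 2         # indices centered on zero
wc = 0.3 * np.pi                       # cutoff (rad/sample)

h_ideal = (wc / np.pi) * np.sinc(wc * n / np.pi)   # sin(wc*n)/(pi*n)
h_rect = h_ideal                                   # abrupt truncation
h_hamm = h_ideal * np.hamming(N)                   # smooth taper

for name, h in [("rectangular", h_rect), ("hamming", h_hamm)]:
    H = np.abs(np.fft.fft(h, 4096))
    stopband = H[800:2048]             # bins past both transition bands
    print(name, 20 * np.log10(stopband.max()))     # peak stopband ripple, dB
```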
Aspect             | FIR Filters                   | IIR Filters
Feedback           | No feedback (non-recursive)   | Uses feedback (recursive)
Stability          | Inherently stable             | Conditionally stable
Phase Response     | Linear phase achievable       | Non-linear phase
Coefficient Count  | High (e.g., 60 taps)          | Low (e.g., 4th order)
Design Flexibility | Arbitrary frequency responses | Limited to analog prototypes
Latency            | Higher (due to more taps)     | Lower
Use Cases          | Audio, ECG, communications    | Real-time systems, high-speed filtering
Quantization of filter coefficients refers to the process of representing the filter’s coefficients with a
finite number of bits (finite word length) when implementing digital filters in hardware or software.
Since digital systems have limited precision, the coefficients designed with infinite precision must be
approximated, leading to quantization errors.
What is Coefficient Quantization?
Filter coefficients (e.g., numerator and denominator coefficients in IIR filters or tap weights in
FIR filters) are typically real numbers with infinite precision in design.
In practical digital implementations, these coefficients must be represented using a finite
number of bits (fixed-point or floating-point format).
Quantization involves rounding or truncating the coefficients to fit into the available bit-width.
This introduces quantization errors, causing the implemented filter to differ from the ideal
designed filter.
Effects of Coefficient Quantization
Deviation in Frequency Response: Quantization causes shifts in the filter’s poles and zeros,
leading to changes in the frequency response such as altered passband ripple, stopband
attenuation, and cutoff frequencies.
Stability Issues (Especially in IIR Filters): Small changes in denominator coefficients can move
poles outside the unit circle, causing filter instability.
Increased Noise and Distortion: Quantization noise appears as additional noise in the output,
degrading signal quality.
Reduced Dynamic Range: Finite word length limits the dynamic range, potentially causing
overflow or underflow.
Sensitivity to Realization Structure: The effect of quantization depends on the filter structure
(direct form, cascade, lattice). Cascade and lattice structures are generally more robust to
coefficient quantization.
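A sketch of the pole-shift effect, assuming NumPy and SciPy; the Butterworth prototype and the 8-bit fixed-point grid are illustrative assumptions.

```python
import numpy as np
from scipy import signal

# Quantize IIR coefficients to an 8-bit fractional fixed-point grid
b, a = signal.butter(6, 0.2)            # 6th-order lowpass, cutoff 0.2*Nyquist

step = 2.0 ** -8                        # quantization step (8 fractional bits)
bq = np.round(b / step) * step
aq = np.round(a / step) * step

print("max |pole| designed: ", np.max(np.abs(np.roots(a))))
print("max |pole| quantized:", np.max(np.abs(np.roots(aq))))
# poles move off their designed positions; with coarser word lengths they
# can cross the unit circle and make the filter unstable
```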