Signal Recovery Algorithms in Compressive Sensing
September 7, 2025
1 Introduction to Compressive Sensing (CS)
Compressive Sensing is a modern signal processing paradigm that enables the reconstruction of
a signal from significantly fewer samples than dictated by the classic Nyquist-Shannon theorem.
The core principle of CS is that if a signal is sparse or compressible in a particular domain
(meaning it can be represented with few non-zero coefficients), it can be accurately recovered
from a small number of non-adaptive, linear measurements.
The foundational mathematical model for CS is expressed as:
y = As
Where:
• y ∈ R^M is the measurement vector (the compressed signal).
• s ∈ R^N is the sparse signal vector to be recovered. It is characterized by having only K non-zero entries, where K ≪ N.
• A ∈ R^{M×N} is the sensing matrix, with far fewer rows than columns (M ≪ N).
The central challenge in CS is to solve this highly underdetermined system of equations. Since there are infinitely many possible solutions for s, the goal is to develop algorithms that can efficiently find the unique solution that is also the sparsest. This report provides a detailed overview of the major classes of algorithms designed for this purpose.
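As a concrete illustration of the measurement model, the following sketch sets up a toy underdetermined system in NumPy; the dimensions (N = 256, M = 64, K = 8) and the choice of a random Gaussian sensing matrix are illustrative assumptions, not requirements of the theory.

import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                 # signal length, measurements, sparsity (illustrative)

# K-sparse signal: K non-zero entries at random positions
s_true = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
s_true[support] = rng.standard_normal(K)

# Random Gaussian sensing matrix (a common choice in the CS literature)
A = rng.standard_normal((M, N)) / np.sqrt(M)

# Compressed, non-adaptive linear measurements
y = A @ s_true

The recovery algorithms discussed below all take A and y as input and attempt to reconstruct s_true.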
2 Recovery Algorithms: A Categorization
Signal reconstruction algorithms are the computational heart of CS. They can be broadly
grouped into several families, each presenting a different balance of computational speed, re-
construction accuracy, and robustness to noise.
1. Convex Optimization Methods: These methods reformulate the intractable (NP-
hard) sparse recovery problem into a solvable convex optimization problem.
2. Greedy Iterative Algorithms: These algorithms construct the sparse solution sequen-
tially, making a locally optimal decision at each step.
3. Iterative Thresholding Algorithms: These algorithms iteratively refine a signal es-
timate using a gradient descent-like step, followed by a non-linear thresholding step to
enforce sparsity.
4. Probabilistic and Bayesian Methods: These approaches use statistical information
and priors about the signal and noise to infer the most probable solution.
We will now examine a key algorithm from each of these categories.
3 Convex Optimization: Basis Pursuit (BP)
Principle
The most direct approach to finding the sparsest solution is to minimize the ℓ0-norm (the count of non-zero entries), which is computationally intractable (NP-hard). Basis Pursuit (BP) sidesteps this by replacing the ℓ0-norm with the ℓ1-norm, its closest convex relaxation. It has been proven that under suitable conditions on the matrix A (such as the Restricted Isometry Property, RIP), the ℓ1-minimization solution coincides exactly with the true sparse solution.
Mathematical Formulation
The problem is structured as a convex optimization program:
min_s ∥s∥_1   subject to   y = As
Here, ∥s∥_1 = Σ_{i=1}^{N} |s_i|. In the presence of noise, a more resilient formulation, Basis Pursuit Denoising (BPDN), also known as the LASSO, is employed:
min_s (1/2) ∥y − As∥_2^2 + λ ∥s∥_1
The regularization parameter λ balances the trade-off between the sparsity of the solution and
its fidelity to the measurements.
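As a sketch of how BPDN might be solved in practice, the snippet below formulates the objective with the cvxpy modelling package (assumed to be installed); the problem sizes and the value λ = 0.05 are arbitrary illustrative choices.

import numpy as np
import cvxpy as cp                       # assumes the cvxpy package is available

# Toy instance of the model from Section 1: y = A s with a K-sparse s
rng = np.random.default_rng(0)
N, M, K = 128, 48, 5                     # illustrative dimensions
s_true = np.zeros(N)
s_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ s_true

# BPDN / LASSO: minimize (1/2)*||y - A s||_2^2 + lam * ||s||_1
lam = 0.05                               # regularization weight (problem-dependent)
s = cp.Variable(N)
objective = cp.Minimize(0.5 * cp.sum_squares(y - A @ s) + lam * cp.norm1(s))
cp.Problem(objective).solve()
s_hat = s.value                          # recovered (approximately sparse) estimate

The noiseless BP program can be obtained by instead minimizing cp.norm1(s) subject to the constraint A @ s == y.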
Pros & Cons
• Pros:
– High Accuracy: Often serves as the benchmark for recovery quality.
– Robustness: The BPDN variant is highly effective in noisy environments.
– Strong Theoretical Guarantees: Mathematical proofs ensure exact recovery un-
der favorable conditions.
• Cons:
– High Computational Cost: Solving the linear program is resource-intensive, mak-
ing it less suitable for large-scale or real-time problems.
4 Greedy Algorithms: Orthogonal Matching Pursuit (OMP)
Principle
OMP is a fast, iterative algorithm that "builds" the sparse solution one component at a time. In each iteration, it greedily selects the column of A (an "atom") that is most correlated with the current residual, that is, the part of the signal yet to be explained.
Algorithmic Steps
1. Initialization: Set the residual r_0 = y and the support set of indices Λ_0 = ∅.
2. Identification: Find the index j_k of the atom in A that best matches the residual: j_k = arg max_j |⟨r_{k−1}, a_j⟩|.
3. Augmentation: Add the new index to the support set: Λ_k = Λ_{k−1} ∪ {j_k}.
4. Orthogonal Projection: Solve a least-squares problem to find the optimal signal estimate using only the atoms identified so far: s_k = arg min_z ∥y − A_{Λ_k} z∥_2^2.
5. Update: Compute the new residual: r_k = y − A_{Λ_k} s_k.
6. Iteration: Repeat from Step 2 until a stopping criterion (e.g., reaching a known sparsity
K) is met.
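A minimal NumPy sketch of these steps is given below; the function name omp and the choice to stop after exactly K iterations are illustrative assumptions.

import numpy as np

def omp(A, y, K):
    """Orthogonal Matching Pursuit: recover a K-sparse s from y = A s."""
    M, N = A.shape
    residual = y.copy()
    support = []                         # indices selected so far (the set Lambda_k)
    s_hat = np.zeros(N)
    for _ in range(K):
        # Identification: atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal projection: least squares restricted to the selected atoms
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        # Update the residual
        residual = y - A[:, support] @ coeffs
    s_hat[support] = coeffs
    return s_hat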
Pros & Cons
• Pros:
– Fast: Significantly quicker than convex optimization methods.
– Simple to Implement: The algorithm’s logic is intuitive.
• Cons:
– Suboptimal: A wrong choice in an early iteration cannot be corrected later.
– Requires Sparsity Level: The basic version requires knowing the sparsity level K
in advance.
– Less Robust to Noise: Can be more sensitive to measurement noise than BP.
5 Iterative Thresholding Algorithms
Principle
This family of algorithms operates by alternating between a standard optimization step (like gradient descent) to minimize the error and a non-linear "thresholding" step to enforce sparsity. Instead of selecting atoms one-by-one, they update the entire signal vector and then project it onto the set of sparse signals.
Example: Iterative Hard Thresholding (IHT)
IHT is the canonical example. It aims to minimize the error ∥y − As∥_2^2 directly.
1. Initialization: Start with an initial signal estimate, e.g., s0 = 0.
2. Gradient Step: Update the signal estimate by moving in the direction of the negative
gradient, which reduces the measurement error.
b^t = s^{t−1} + µ A^T (y − A s^{t−1})
Here, µ is the step size. This is essentially a Landweber iteration.
3. Thresholding (Projection) Step: Create the new sparse estimate s^t by keeping the K largest-magnitude elements of b^t and setting all others to zero. This is a "hard" thresholding operation:
s^t = H_K(b^t)
4. Iteration: Repeat from Step 2 until the solution converges.
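A minimal NumPy sketch of IHT is shown below; the default step size (derived from the spectral norm of A), the iteration count, and the function names are illustrative assumptions rather than prescriptions.

import numpy as np

def iht(A, y, K, mu=None, n_iter=200):
    """Iterative Hard Thresholding: gradient step, then keep the K largest entries."""
    M, N = A.shape
    if mu is None:
        # Conservative default; step sizes at or below 1/||A||_2^2 keep the gradient step stable
        mu = 1.0 / np.linalg.norm(A, 2) ** 2
    s = np.zeros(N)
    for _ in range(n_iter):
        b = s + mu * A.T @ (y - A @ s)             # gradient (Landweber) step
        idx = np.argpartition(np.abs(b), -K)[-K:]  # indices of the K largest magnitudes
        s = np.zeros(N)
        s[idx] = b[idx]                            # hard thresholding H_K
    return s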
Other variants, such as the Iterative Soft-Thresholding Algorithm (ISTA), replace the hard thresholding step with a soft-thresholding (shrinkage) function; ISTA in fact minimizes the same ℓ1-regularized objective as Basis Pursuit Denoising.
Pros & Cons
• Pros:
– Fast and Scalable: Computationally simple (often just matrix-vector multiplies)
and scales well to large problems.
– Simple Implementation: The core loop is very straightforward to program.
• Cons:
– Convergence Issues: Convergence can be slow and is sensitive to the choice of step
size µ.
– Requires Sparsity: Like OMP, IHT requires knowing the sparsity level K.
– Suboptimal: May converge to a local minimum rather than the globally optimal sparse solution.
6 Probabilistic Methods: Approximate Message Passing (AMP)
Principle
AMP is a highly efficient iterative algorithm derived from concepts in statistical physics. It
functions as a sophisticated iterative thresholding algorithm but includes a crucial memory-like
correction term, known as the Onsager term. This term helps to decorrelate the error at
each iteration, approximating the behavior of a more complex belief propagation algorithm and
leading to very fast convergence.
Mathematical Formulation
The core AMP iterations are:
1. Signal Estimate Update: s^{t+1} = η_t(A^T z^t + s^t)
2. Residual Update: z^{t+1} = y − A s^{t+1} + (1/δ) z^t ⟨η'_t(A^T z^t + s^t)⟩
Here, η_t(·) is a non-linear shrinkage function (e.g., soft-thresholding), ⟨·⟩ denotes the average over the entries of a vector, δ = M/N is the undersampling ratio, and the last term in the residual update is the Onsager correction.
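The sketch below implements these two updates with a soft-thresholding denoiser; the threshold rule theta = alpha * ||z||_2 / sqrt(M), the value alpha = 1.5, and the iteration count are heuristic, illustrative choices rather than part of the algorithm's definition.

import numpy as np

def soft(x, t):
    """Soft-thresholding shrinkage eta(x; t)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(A, y, n_iter=30, alpha=1.5):
    """AMP with a soft-thresholding denoiser and the Onsager correction term."""
    M, N = A.shape
    delta = M / N                                        # undersampling ratio
    s = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        pseudo = A.T @ z + s                             # effective (pseudo-data) observation
        theta = alpha * np.linalg.norm(z) / np.sqrt(M)   # heuristic threshold level
        s_new = soft(pseudo, theta)                      # signal estimate update
        # Onsager correction: (1/delta) * z * average derivative of the shrinkage
        onsager = (1.0 / delta) * z * np.mean(np.abs(pseudo) > theta)
        z = y - A @ s_new + onsager                      # residual update
        s = s_new
    return s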
Pros & Cons
• Pros:
– Extremely Fast: Exhibits rapid convergence for suitable matrices.
– State-of-the-Art Accuracy: Performance is among the best for large systems with
random Gaussian sensing matrices and can be precisely characterized by a theory
called State Evolution.
• Cons:
– Matrix Sensitivity: The original AMP algorithm can be unstable if the sensing
matrix A is not composed of i.i.d. Gaussian entries. More robust versions (e.g.,
VAMP, OAMP) have been developed to address this limitation.
7 Summary and Comparison
The choice of a recovery algorithm involves a critical trade-off between computational complex-
ity, accuracy, and specific problem requirements.
Table 1: Comparison of Recovery Algorithms

Algorithm            Type             Cost       Accuracy    Noise Robustness   Key Requirement
Basis Pursuit (BP)   Convex Opt.      High       Very High   Excellent          Regularizer λ (for BPDN)
OMP                  Greedy           Low        Good        Moderate           Sparsity level K
IHT                  Iterative Thr.   Low        Good        Moderate           Sparsity level K, step size µ
AMP                  Probabilistic    Very Low   Excellent   High               i.i.d. Gaussian matrix
8 Conclusion
The selection of the recovery algorithm is a pivotal decision in system design. Basis Pursuit
offers unparalleled accuracy at a high computational price. In contrast, OMP and Iterative
Hard Thresholding provide fast and simple alternatives, making them suitable for real-time
and large-scale applications where absolute optimality is not a strict requirement. Finally,
advanced methods like Approximate Message Passing represent the cutting edge, delivering
an exceptional blend of speed and accuracy for problems with suitable random structures.