
Module IV: Image Compression Models, Elements of Information Theory, Error-Free Comparison, Lossy Compression, Image Compression Standards

Image compression

In the field of image processing, compressing images is an important step before processing or transmitting large images and videos.

Image compression is the process of reducing the size of an image file while maintaining an
acceptable level of quality. It is essential in digital image processing to optimize storage,
transmission, and processing efficiency.
Source Encoder
The source encoder is responsible for compressing the image by removing redundancy
(unwanted data) and representing the data efficiently.

• Reduces the number of bits required to represent an image.
• Uses lossless or lossy compression techniques.
• Converts image data into a compact form suitable for transmission or storage.
• Lossless: Huffman coding, Arithmetic coding, Run-Length Encoding (RLE).
• Lossy: Transform coding (DCT in JPEG, DWT in JPEG 2000), Quantization.
Example:
In JPEG compression, the source encoder applies Discrete Cosine Transform (DCT) to
convert spatial image data into frequency components, followed by quantization to remove less
significant details.
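The transform step can be illustrated with a minimal sketch, assuming NumPy and SciPy are available; the 8×8 block values are made up for illustration, and this shows only the DCT stage, not a full JPEG encoder.

```python
import numpy as np
from scipy.fftpack import dct

# Made-up 8x8 block of grayscale pixel values (0-255), for illustration only.
block = np.arange(64, dtype=np.float64).reshape(8, 8)

# JPEG-style level shift so values are centered around zero.
shifted = block - 128.0

# 2D DCT: apply a 1D type-II DCT along columns, then along rows.
coeffs = dct(dct(shifted, axis=0, norm='ortho'), axis=1, norm='ortho')

print(coeffs[0, 0])  # DC coefficient, proportional to the block's average intensity
print(coeffs[7, 7])  # highest-frequency coefficient, typically small for smooth blocks
```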
Channel Encoder
The channel encoder adds extra bits (redundancy) to protect the compressed image data from
transmission errors due to noise or interference. (Increases Noise immunity)
Functions
• Introduces error correction codes (ECC) to detect and correct errors.
• Ensures data integrity during transmission over a noisy channel.
• Error Detection: Cyclic Redundancy Check (CRC).
• Error Correction: Hamming code, Reed-Solomon coding, Convolutional coding.

Example:
In satellite imaging, error-correcting codes are applied to compressed images before transmission to ensure correct data recovery despite noise.
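As a simple illustration of the error-detection idea (CRC only, not the full error-correction schemes listed above), the sketch below appends a CRC-32 checksum to a compressed payload and verifies it at the receiver; the helper names are hypothetical.

```python
import zlib

def add_crc(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 checksum to the payload (hypothetical helper)."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_crc(frame: bytes) -> bool:
    """Return True if the received frame's checksum matches its payload."""
    payload, received = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == received

frame = add_crc(b"compressed image bitstream")
print(check_crc(frame))                        # True: no transmission errors
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
print(check_crc(corrupted))                    # False: the flipped bit is detected
```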
Channel
The channel is the medium through which the encoded image data is transmitted.
• Noise & Interference: Causes distortion in transmitted data.
• Bandwidth Constraints: Limits data transmission speed.
• Latency & Delay: Affects real-time applications.

A mobile phone transmitting a compressed image over a 4G/5G network experiences noise,
requiring robust error correction to recover lost bits.
Channel Decoder
• The channel decoder receives the transmitted data and corrects errors introduced by
the channel.
• Detects and corrects errors using error correction codes.
• Ensures reliable data delivery to the receiver.
• Reed-Solomon decoding, Turbo decoding, Viterbi decoding.
Example:
If a JPEG image is transmitted over a satellite link, the channel decoder corrects any bit
errors that occurred due to atmospheric interference.
Source Decoder
The source decoder reconstructs the original image from the compressed and error-corrected
data.
• Applies decompression algorithms to recover the image.
• In lossy compression, an approximate version of the original image is reconstructed.
• Inverse Transform (IDCT for JPEG, IDWT for JPEG 2000).
• Huffman decoding, Run-Length Decoding.
A web browser displaying a JPEG image decompresses the received file using IDCT and
Huffman decoding to reconstruct the image.
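The inverse-transform step can be sketched as below, reusing the forward-DCT convention from the earlier encoder sketch; without quantization the round trip is exact, which is why the loss in JPEG comes from the quantizer rather than the transform itself.

```python
import numpy as np
from scipy.fftpack import dct, idct

# Forward 2D DCT of a made-up, level-shifted 8x8 block (same convention as the encoder sketch).
block = np.arange(64, dtype=np.float64).reshape(8, 8) - 128.0
coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

# Inverse 2D DCT reconstructs the block; add the level shift back.
reconstructed = idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho') + 128.0

print(np.allclose(reconstructed, block + 128.0))  # True: the transform alone is lossless
```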
Key Components in the Encoder Stages of Image Compression
In the encoder stage of image compression, three essential components work together to
efficiently represent image data:

1. Mapper
2. Quantizer
3. Symbol Encoder
Each of these components plays a crucial role in transforming raw image data into a
compressed format.
1. Mapper- The mapper is responsible for transforming the input image into a more compact and
efficient representation.
• Converts spatial domain pixel values into another domain, typically the frequency
domain.
• Reduces correlation between pixel values.
• Prepares data for further compression by highlighting important information.
• Discrete Cosine Transform (DCT) – Used in JPEG to separate image frequencies.
• Discrete Wavelet Transform (DWT) – Used in JPEG 2000 for multi-resolution
analysis.
• Predictive Coding (DPCM) – Uses previous pixel values to predict the current pixel,
encoding only the difference.
Example:

• In JPEG compression, the mapper applies DCT, converting an 8×8 pixel block from
spatial domain to frequency domain.
• Most energy is concentrated in low-frequency components, making high-frequency
components easier to discard.

2. Quantizer- The quantizer reduces the number of bits required to represent the transformed
image data by approximating values.
• Reduces precision of frequency components.
• Controls compression ratio and image quality trade-off.
• In lossy compression, it discards less important high-frequency components.
➢ Uniform Quantization
o Uses a fixed step size to divide values into discrete levels.
o Simpler but may introduce artifacts (blockiness in JPEG).
➢ Non-Uniform Quantization
o Uses variable step sizes, allocating more levels to low frequencies (which hold
more visual information).
o More efficient for human perception.

• After DCT, the transformed values are divided by a quantization matrix and rounded (a minimal sketch follows below).
• Higher-frequency values are heavily quantized (more approximation), while low frequencies are preserved for better quality.
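A minimal quantization sketch, assuming NumPy; the step-size matrix below is a stand-in that merely grows with frequency and is not the actual JPEG luminance table.

```python
import numpy as np

# Illustrative quantization matrix: larger step sizes at higher frequencies.
# NOT the standard JPEG table, just the same qualitative shape.
i, j = np.indices((8, 8))
Q = 1.0 + 2.0 * (i + j)

# Stand-in DCT coefficients whose magnitude decays with frequency (as in natural images).
coeffs = np.random.randn(8, 8) * 200.0 / (1.0 + i + j)

quantized = np.round(coeffs / Q)         # divide by step size and round -> information loss
dequantized = quantized * Q              # decoder multiplies back; fine detail is gone

print(np.count_nonzero(quantized == 0))  # most high-frequency coefficients collapse to zero
```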

3. Symbol Encoder- The symbol encoder compresses the quantized data by replacing common
patterns with shorter binary representations.
• Reduces redundancy using entropy coding.
• Assigns short codes to frequently occurring values.
• Improves compression efficiency without data loss.
• Huffman Coding (used in JPEG) – Assigns variable-length codes based on frequency.
• Arithmetic Coding – More efficient than Huffman, assigns fractional-bit codes.
• Run-Length Encoding (RLE) – Encodes consecutive zeros efficiently.
• The quantized DCT coefficients are scanned in a zigzag pattern to group zero values
together.
• Then, Run-Length Encoding (RLE) compresses sequences of zeros.
• Finally, Huffman Coding assigns shorter codes to more frequent values (the zigzag and RLE steps are sketched after this list).
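The zigzag scan and run-length step can be sketched as follows, assuming a sparse 8×8 array of quantized coefficients; the (zero_run, value) output format is a simplification of the actual JPEG symbol stream, and the Huffman coding of those symbols is omitted.

```python
import numpy as np

def zigzag_indices(n=8):
    """(row, col) pairs in JPEG-style zigzag order: walk the anti-diagonals, alternating direction."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))

def run_length_encode(seq):
    """Simplified RLE: (zeros_before_value, value) pairs, ending with an end-of-block marker."""
    out, zero_run = [], 0
    for v in seq:
        if v == 0:
            zero_run += 1
        else:
            out.append((zero_run, int(v)))
            zero_run = 0
    out.append("EOB")  # trailing zeros are implied by the end-of-block marker
    return out

block = np.zeros((8, 8), dtype=int)
block[0, 0], block[0, 1], block[2, 3] = 25, -3, 1     # a sparse quantized block
scanned = [block[r, c] for r, c in zigzag_indices()]
print(run_length_encode(scanned))   # [(0, 25), (0, -3), (15, 1), 'EOB']
```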
Image Compression Models

Image compression models are frameworks that reduce the size of image files by eliminating
redundant or irrelevant information while attempting to preserve visual quality (in lossy
compression) or ensure perfect reconstruction (in lossless compression).

Classification of Image Compression Models


• Lossless Compression: No information is lost; the original image can be perfectly reconstructed. Examples: PNG, GIF
• Lossy Compression: Some information is discarded; only an approximation of the original is retained. Examples: JPEG, WebP, HEIC

General Components of a Compression Model


1. Source Encoder (Mapper)
o Converts input image data into a format suitable for compression.
o Often applies transformations (e.g., DCT, Wavelet, Predictive Coding).
2. Quantizer (only in lossy)
o Reduces precision of transformed values to eliminate psychovisual redundancy.
o Main source of data loss in lossy compression.
3. Entropy Encoder
o Encodes data into a compact bitstream using entropy coding.
o Techniques: Huffman Coding, Arithmetic Coding, LZW
Common Compression Models

1. Predictive Coding Model


• Predicts current pixel based on previous ones and encodes the error (difference).
• Used in lossless compression (e.g., PNG uses DEFLATE which combines LZ77 +
Huffman).
• Efficient for images with high interpixel correlation (a minimal DPCM sketch follows).
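A minimal DPCM-style sketch, assuming a single row of pixel values and a simple previous-pixel predictor; real codecs use richer predictors (PNG, for example, chooses among several per-row filters before DEFLATE).

```python
import numpy as np

def dpcm_encode(row):
    """Encode each pixel as its difference from the previous pixel (first pixel stored as-is)."""
    row = np.asarray(row, dtype=np.int32)
    residuals = np.empty_like(row)
    residuals[0] = row[0]
    residuals[1:] = row[1:] - row[:-1]
    return residuals

def dpcm_decode(residuals):
    """Recover the original row by cumulatively summing the residuals."""
    return np.cumsum(residuals)

row = np.array([100, 101, 103, 103, 104, 110, 111, 111])
res = dpcm_encode(row)
print(res)                                     # residuals cluster near zero -> low entropy
print(np.array_equal(dpcm_decode(res), row))   # True: the scheme is lossless
```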

2. Transform Coding Model


• Applies mathematical transform (e.g., Discrete Cosine Transform – DCT, or Wavelet
Transform) to decorrelate pixel data.
• High-energy coefficients (usually low frequencies) are preserved; low-energy ones can
be quantized or discarded.
• Used in lossy methods like JPEG, JPEG2000.

3. Subband Coding Model


• Image is divided into frequency bands using filter banks.
• Each band can be compressed independently.
• Used in JPEG2000 (wavelet-based) and other audio/image codecs (a Haar split is sketched below).
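A one-level Haar-style subband split, sketched with plain NumPy on an even-sized grayscale array; JPEG2000 uses longer wavelet filters, so this is only a conceptual illustration of splitting an image into frequency bands.

```python
import numpy as np

def haar_split(x, axis):
    """Split a signal into a low-pass (pairwise average) and high-pass (pairwise difference) half."""
    a = np.take(x, range(0, x.shape[axis], 2), axis=axis).astype(np.float64)
    b = np.take(x, range(1, x.shape[axis], 2), axis=axis).astype(np.float64)
    return (a + b) / 2.0, (a - b) / 2.0

img = np.random.randint(0, 256, size=(8, 8))

lo, hi = haar_split(img, axis=1)      # split columns into low/high bands
LL, LH = haar_split(lo, axis=0)       # split rows of the low band
HL, HH = haar_split(hi, axis=0)       # split rows of the high band

# LL carries most of the energy; LH, HL, HH are detail bands that quantize/compress well.
print(LL.shape, LH.shape, HL.shape, HH.shape)   # four 4x4 subbands
```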

4. Vector Quantization (VQ) Model


• Blocks of pixels (vectors) are matched to entries in a codebook.
• Each block is replaced by the index of the closest codeword.
• High compression possible but prone to blocking artifacts (see the sketch below).
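A minimal vector quantization sketch with a hand-made codebook of 2×2 blocks; in practice the codebook is learned from training images (e.g., with the LBG/k-means algorithm), so the patterns here are purely illustrative.

```python
import numpy as np

# Hypothetical codebook: four 2x2 patterns, flattened into 4-dimensional vectors.
codebook = np.array([
    [0, 0, 0, 0],          # flat dark block
    [255, 255, 255, 255],  # flat bright block
    [0, 255, 0, 255],      # dark left / bright right (vertical edge)
    [0, 0, 255, 255],      # dark top / bright bottom (horizontal edge)
], dtype=np.float64)

def quantize_block(block_2x2):
    """Return the index of the nearest codeword under Euclidean distance."""
    v = np.asarray(block_2x2, dtype=np.float64).reshape(-1)
    return int(np.argmin(np.linalg.norm(codebook - v, axis=1)))

idx = quantize_block([[10, 240], [5, 250]])
print(idx)                          # 2: the vertical-edge codeword is closest
print(codebook[idx].reshape(2, 2))  # the decoder looks up this block from the same codebook
```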

5. Fractal Compression Model


• Uses self-similarity in images to encode transformations instead of pixels.
• High compression, but computationally expensive.
Elements of Information Theory (in Image Compression Context)

Information Theory, introduced by Claude Shannon, provides a mathematical foundation for measuring information, reducing redundancy, and compressing data. It plays a critical role in designing efficient image compression systems.

Key Elements of Information Theory

1. Entropy (H)- Average amount of information produced by a source symbol. For a source with symbol probabilities p(x), H(X) = −Σ p(x) log₂ p(x) bits per symbol.

Interpretation
• Lower entropy → less uncertainty → more predictable data → higher compressibility
• Entropy is the minimum average number of bits needed per symbol for optimal encoding
Example: If an image has only black and white pixels (p = 0.5 each), the entropy is H = −(0.5 log₂ 0.5 + 0.5 log₂ 0.5) = 1 bit per pixel (see the sketch below).
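A minimal sketch for estimating the entropy of a grayscale image from its gray-level histogram, assuming NumPy; this first-order estimate treats pixels as independent symbols and therefore ignores interpixel redundancy.

```python
import numpy as np

def entropy_bits_per_pixel(image):
    """First-order entropy estimate (bits/pixel) from the normalized gray-level histogram."""
    _, counts = np.unique(np.asarray(image), return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Binary image, half black and half white pixels -> 1 bit per pixel.
img = np.array([[0, 255], [255, 0]])
print(entropy_bits_per_pixel(img))   # 1.0
```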

2. Redundancy- The difference between the actual average code length and the entropy.

Redundancy = L − H(X)
where L is the average length of the encoded symbols, in bits per symbol.
High redundancy implies room for compression.

3. Mutual Information (I)- Measures the amount of information shared between two
variables.

• In images, mutual information between pixels indicates interpixel redundancy.

4. Conditional Entropy (H(X|Y))- Amount of uncertainty in X given that Y is known.


• If neighboring pixels Y give good prediction of X, conditional entropy is low → good
for predictive coding.

5. Channel Capacity (C)- Maximum rate at which information can be reliably transmitted over
a channel.
6. Relative Entropy (Kullback–Leibler Divergence)- Measures the inefficiency of using an approximate distribution q(x) instead of the true distribution p(x).
• Formula: D(p‖q) = Σ p(x) log₂ (p(x) / q(x))
• Used in comparing models, coding strategies, or evaluating compressibility (a short sketch follows this list).
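A short sketch of the KL divergence between two discrete symbol distributions, assuming both assign nonzero probability to every symbol (zero probabilities require special handling that is omitted here).

```python
import numpy as np

def kl_divergence_bits(p, q):
    """D(p || q) = sum over x of p(x) * log2(p(x) / q(x)), in bits per symbol."""
    p = np.asarray(p, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    return float(np.sum(p * np.log2(p / q)))

true_dist = [0.5, 0.25, 0.25]    # actual symbol probabilities
model_dist = [1/3, 1/3, 1/3]     # distribution assumed by a mismatched code
print(kl_divergence_bits(true_dist, model_dist))  # > 0: average extra bits wasted per symbol
```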

Relevance to Image Compression

• Entropy: Determines the theoretical minimum number of bits needed
• Redundancy: Guides compression ratio improvements
• Mutual Information: Basis for removing interpixel redundancy
• Conditional Entropy: Used in predictive compression models
• Channel Capacity: Sets limits on storage/communication systems
• Relative Entropy: Evaluates how well a compression model approximates the true distribution
Error-Free Comparison, Lossy Compression, and Image Compression Standards.

1. Error-Free Comparison- Error-free comparison refers to a bit-level verification of whether the original and decompressed images are identical. It is essential in lossless image compression to ensure perfect reconstruction.
Techniques
• Bit-by-bit Comparison: Compare binary values of all pixels.
• Checksum/Hash Comparison: Use MD5, SHA-256 to compare hashes of original and
decompressed files.
• Pixel Difference Matrix: Subtract corresponding pixel values to detect any change (should be all zeros for lossless compression); a minimal sketch follows this list.
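A minimal sketch of the hash-based and pixel-difference checks above, assuming the images are available as files or NumPy arrays; the file paths and variable names are illustrative.

```python
import hashlib
import numpy as np

def files_identical(path_a, path_b):
    """Compare SHA-256 digests of two files (paths are supplied by the caller)."""
    def digest(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()
    return digest(path_a) == digest(path_b)

def pixels_identical(original, decompressed):
    """Pixel-difference check: the difference matrix must be all zeros for a lossless codec."""
    diff = np.asarray(original, dtype=np.int64) - np.asarray(decompressed, dtype=np.int64)
    return not np.any(diff)

a = np.array([[10, 20], [30, 40]])
print(pixels_identical(a, a.copy()))   # True: bit-exact reconstruction
```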
Use Cases
• Medical imaging, satellite imaging, legal documents: where no data loss is permissible.
• Validates integrity of a compression algorithm or codec.
2. Lossy Compression- Lossy compression reduces image file size by permanently discarding
some data that is less noticeable to human vision (exploiting psychovisual redundancy).
Key Steps
➢ Transform Coding (e.g., DCT, Wavelet)
➢ Quantization (major source of information loss)
➢ Entropy Coding (e.g., Huffman, Arithmetic)
Benefits
• Much higher compression ratios than lossless
• Greatly reduces storage and bandwidth needs
Trade-offs
• Irreversible data loss
• May produce artifacts like blurring, ringing, or blocking
Common Use Cases
• Web images, photography, streaming media, social media uploads
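The size/quality trade-off described above can be illustrated with Pillow's JPEG encoder, assuming Pillow is installed; the image here is synthetic noise, so absolute file sizes will differ from real photographs.

```python
import io
import numpy as np
from PIL import Image

# Synthetic grayscale image (random noise compresses far worse than real photos).
img = Image.fromarray(np.random.randint(0, 256, size=(256, 256), dtype=np.uint8))

for quality in (90, 50, 10):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)   # lower quality -> coarser quantization
    print(quality, len(buf.getvalue()), "bytes")    # file size drops as quality drops
```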

3. Image Compression Standards


Below is a list of major image compression standards, categorized by compression type:
Lossless Standards

• PNG: Uses DEFLATE (LZ77 + Huffman), supports transparency
• GIF: Limited to 256 colors, uses LZW compression
• WebP (lossless): Combines prediction + entropy coding

Lossy Standards

• JPEG: Most common, uses DCT + quantization + entropy coding
• WebP (lossy): Modern, smaller files than JPEG, based on VP8
• JPEG2000: Uses the Wavelet Transform; supports both lossy and lossless modes
• HEIC/HEIF: Based on HEVC, better compression than JPEG
• AVIF: Based on the AV1 codec, generally more efficient than WebP and JPEG
