Components of an Image Processing System
Fig. Components of a general-purpose Image Processing System
1. IMAGE SENSORS
With reference to sensing, two elements are required to acquire digital
images.
A) The first is a physical device that is sensitive to the energy radiated by the
object we wish to image.
B) The second, called a digitizer, is a device that converts the output of the
physical sensing device into digital form.
2. Specialized image processing hardware
It consists of the digitizer plus hardware that performs other primitive
operations, such as an arithmetic logic unit (ALU), which performs arithmetic and
logical operations in parallel on entire images.
This type of hardware is called a front-end subsystem.
It performs functions that require fast data throughput (e.g., digitizing and
averaging video images at 30 frames/s).
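To get a rough sense of why a front-end subsystem needs fast throughput, the sketch below estimates the data rate for digitizing video at 30 frames/s. The 1024 × 1024, 8-bit frame size is an illustrative assumption, not something fixed by the text:

```python
# Rough throughput estimate for a front-end subsystem digitizing video.
# Frame size is a hypothetical example: 1024 x 1024 pixels, 8 bits per pixel.
width, height = 1024, 1024
bytes_per_pixel = 1          # 8-bit intensity per pixel
frames_per_second = 30

bytes_per_frame = width * height * bytes_per_pixel
throughput = bytes_per_frame * frames_per_second   # bytes per second

print(f"Per frame: {bytes_per_frame / 2**20:.0f} MB")       # 1 MB
print(f"Throughput: {throughput / 2**20:.0f} MB/s")         # 30 MB/s
```

At roughly 30 MB/s of raw pixel data, sustained digitizing and averaging is beyond what a general-purpose host of the era could do in real time, which is the motivation for dedicated front-end hardware.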
3. Computer
The computer in an image processing system is a general-purpose computer
and can range from a PC to a supercomputer.
4. Image Processing Software
Software for image processing consists of specialized modules that perform specific
tasks.
A well-designed package includes the capability for the user to write code.
Examples: Java, MATLAB.
5. Mass Storage
Mass storage capability is a must in image processing applications.
Example:
An image of size 1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit
quantity, requires one megabyte of storage space if the image is not compressed.
Digital storage for image processing applications falls into three principal categories:
(1) short-term storage for use during processing,
(2) on-line storage for relatively fast recall, and
(3) archival storage, characterized by infrequent access.
Storage is measured in bytes, kilobytes (KB), megabytes (MB), gigabytes (GB), and
terabytes (TB).
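The one-megabyte figure quoted above can be checked with a short sketch (the helper name `storage_bytes` is ours, chosen for illustration):

```python
# Storage needed for an uncompressed grayscale image:
# size (bits) = width * height * bits_per_pixel.
def storage_bytes(width, height, bits_per_pixel=8):
    return width * height * bits_per_pixel // 8

mb = storage_bytes(1024, 1024) / 2**20
print(f"{mb:.0f} MB")   # reproduces the 1 MB figure quoted above
```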
6. Image displays
Image displays in use are mainly color (preferably flat screen) TV monitors.
Monitors are driven by the outputs of image and graphics display cards that are an
integral part of the computer system.
7. Hardcopy devices
Hardcopy devices for recording images include
1. laser printers,
2. film cameras,
3. heat-sensitive devices,
4. inkjet units, and
5. digital units, such as optical and CD-ROM disks.
8. Networking
Networking is a default function in any computer system in use today.
The key consideration in image transmission is bandwidth.
Elements of Visual Perception:
Structure of the Human Eye
Fig: Simplified diagram of a cross section of the human eye.
General Overview
The human eye is nearly spherical in shape.
Average diameter: approximately 20 mm.
Main function: converts incoming light into electrical signals that are sent to the brain, enabling vision.
Membranes of the Eye
1. Outer Covering: Cornea & Sclera
o Cornea: Transparent, tough tissue at the front.
o Sclera: Opaque, covers the rest of the eye (white part).
2. Middle Layer: Choroid
o Located below the sclera.
o Contains blood vessels — main source of nutrition.
o Injury can lead to inflammation and reduced blood flow.
o Heavily pigmented — reduces stray light and backscatter.
3. Inner Layer: Retina
o Lines the back of the eye (posterior portion).
o Main light-sensitive part where images are formed.
o Contains light receptors: Rods and Cones.
4. Ciliary Body and Iris
Ciliary body: controls lens shape.
Iris diaphragm: colored part, controls light entry by adjusting pupil size.
Pupil: central opening of the iris; diameter ranges from 2 to 8 mm.
5. Lens
o Made of fibrous layers, suspended by fibers from the ciliary body.
o Composition: 60-70% water, 6% fat, high protein content.
o Slight yellow pigmentation that increases with age.
o Clouding of the lens leads to cataracts (poor vision).
o Absorbs 8% of visible light, more at shorter wavelengths.
o UV and IR light can damage the lens proteins.
Retina and Vision Types
🔹 Light Receptors
6. Cones
o 6 to 7 million in number.
o Located mostly in fovea (central retina).
o Responsible for color vision and fine detail.
o Each cone has its own nerve ending.
o Active in bright light (Photopic Vision).
7. Rods
o 75 to 150 million in number.
o Spread throughout the retina.
o No color perception; sensitive to low light.
o Multiple rods connect to a single nerve ending.
o Give overall scene perception, less detail.
o Active in dim light (Scotopic Vision).
o In low light, colored objects appear grayscale — only rods are active.
Key Terms
Photopic Vision: Bright light vision (cones).
Scotopic Vision: Dim light vision (rods).
Fovea: Center of retina; sharpest vision.
Cataract: Lens clouding, affects vision clarity.
Pupil: Opening in the iris controlling light entry.
2) Image Formation in the Eye
🔹 1. Key Difference Between Eye Lens and Optical Lens
Eye lens is flexible, unlike a regular (glass) optical lens.
Flexibility allows automatic focusing on near and far objects.
The anterior (front) surface of the lens has a larger radius of curvature than the
posterior (back) surface.
Fig: Graphical representation of the eye looking at a palm tree. Point C is the optical
center of the lens.
🔹 2. Role of Ciliary Body Muscles
Ciliary muscles adjust the shape of the lens:
o To view distant objects: Muscles flatten the lens.
o To view nearby objects: Muscles allow the lens to become thicker.
This process helps adjust the focal length for proper focusing.
🔹 3. Focal Length of the Eye
Focal length = distance between the optical center of the lens and the retina.
It varies from about 17 mm (lens at minimum refractive power) down to about 14 mm (maximum refractive power).
When viewing objects farther than about 3 m, the lens is at its minimum refractive power.
For closer objects, the lens becomes thicker and more strongly refractive.
🔹 4. Image Size Calculation on Retina
Based on similar triangles in image geometry:
o Formula:
Object height / Distance to object = Image height on retina /
Focal length of eye
o Example:
A 15 m tall tree viewed from a distance of 100 m gives:
15 / 100 = h / 17  ⇒  h = 2.55 mm
So, the retinal image height ≈ 2.55 mm.
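The similar-triangles calculation above can be packaged as a small helper (a sketch; the function name is ours, and 17 mm is the relaxed-lens focal length from the text):

```python
def retinal_image_height(object_height_m, distance_m, focal_length_mm=17.0):
    """Similar-triangles model: H / D = h / f, so h = f * H / D (result in mm)."""
    return focal_length_mm * object_height_m / distance_m

h = retinal_image_height(15, 100)   # the palm-tree example from the notes
print(f"{h:.2f} mm")                # 2.55 mm
```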
🔹 5. Fovea and Image Perception
The image is focused mainly on the fovea (central retina).
Light receptors (cones & rods) get excited based on the image formed.
These receptors convert light energy into electrical signals.
Signals are transmitted to the brain for image interpretation and visual perception.
3) Brightness Adaptation and Discrimination:
The eye cannot see its entire range of brightness levels at once.
It adjusts its sensitivity to the current lighting; this is called brightness adaptation.
Adaptation level (Ba): the brightness level the eye is currently adapted to.
At any given Ba, the range of brightness levels the eye can distinguish is small.
Below a level Bb, all stimuli appear black at that adaptation level.
Increasing the light further raises Ba itself rather than revealing more detail at once.
Image Sensing and Acquisition – Simplified Notes
What is Image Acquisition?
• Process of capturing images using energy sources (like light, X-rays, etc.)
• Image = Illumination (light/energy) + Scene (object)
• Energy sources: Visible light, X-rays, Ultrasound, Radar, etc.
• Scenes: Real-world objects, human body, molecules, even the Sun!
How Imaging Works
• Energy is either reflected or transmitted through objects.
• Reflection: Light bouncing off surfaces (e.g., mirror, walls).
• Transmission: X-rays passing through the body.
• A sensor detects this energy and converts it to an electrical signal, then digitized into an image.
1. Single Sensor Imaging
FIG: (a) Single imaging Sensor (b) Line sensor (c) Array sensor
• Uses one sensor (e.g., photodiode).
• Requires mechanical motion in X and Y directions to scan the whole image.
• Example: Scanning film negatives or microdensitometers.
• Pros: Low cost, high precision
• Cons: Slow, needs mechanical movement
2. Linear Sensor Strip (1D)
• Uses a line of sensors arranged in a strip.
• Motion in the perpendicular direction forms a complete 2D image.
• Common in: Flatbed scanners, Aerial imaging, Medical imaging (CT/CAT scans using rotating
sensors)
• Pros: Faster than single sensor
• Cons: Requires motion or rotation, needs data reconstruction
3. 2D Sensor Arrays
• Sensors arranged in a grid format (like in digital cameras).
• Captures a full image at once without mechanical motion.
• Used in: Digital cameras (e.g., CCD sensors), Astronomy, Low-light imaging
• Pros: Fast, high quality, no movement needed
• Cons: Costly and complex systems
Steps in Digital Image Acquisition
• 1. Energy Source → Light, X-ray, Ultrasound
• 2. Scene/Object → Reflects or transmits energy
• 3. Imaging System → Uses a lens to focus energy
• 4. Sensor Array → Converts light into electrical signals
• 5. Electronic Circuitry → Converts signals into a digital image
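The five steps above can be caricatured in a few lines using the standard illumination-reflectance model (the image reaching the sensor is illumination × reflectance, then sampled and quantized). The specific values are illustrative, not measured:

```python
# Minimal sketch of the acquisition chain for a single pixel.
def acquire(illumination, reflectance, levels=256):
    # Steps 1-2: energy from the source, reflected by the scene
    analog = illumination * reflectance            # continuous-valued signal
    # Steps 4-5: sensor output quantized to a digital intensity level
    digital = min(levels - 1, int(analog * (levels - 1)))
    return digital

pixel = acquire(illumination=0.8, reflectance=0.5)
print(pixel)   # quantized 8-bit intensity
```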
Key Terms Recap
• Illumination: Light or energy that helps us see objects
• Sensor: Device that captures energy and gives electrical output
• Digitization: Converting signals into digital format
• CCD: Common type of camera sensor
• CAT/MRI/PET: Medical imaging methods using advanced sensors