

Image Processing Technologies v1.0


On completion of this module you will have developed an understanding of
generic image processing technologies and should be able to relate this
knowledge to the skills required of technicians.

The specific areas covered are:

Input and Output Processes

Image Data Processing Workflow

Image Conversion and Data Processing Technologies

Filter and Data Processing Technologies

Halftone and Data Processing Technologies


Module Training Overview


Target audience will be:

Any technician who has completed all Professional level modules, or technicians studying to become
Color/Production Printing specialists.

This module explores the technologies and techniques used in image processing, both in a general
sense and in relation to Konica Minolta’s own image processing technologies. It helps to develop
an understanding of the features and differences between each of the data handling methods and
processes.

Attainment targets:

• To understand why image processing is required between the image reading and output
processes.

• To understand image data processing workflow on digital MFPs.

• To understand how image conversion and data processing technologies work together.

• To provide an outline of filter data processing technologies.

• To provide an outline of halftone data processing technologies.

Knowledge check questions are provided at the end of each chapter. The knowledge check questions
require a written response and the suggested course of action is as follows:

• Read through the chapter thoroughly.

• Fill in the knowledge check questions.

• If you have missed a question or answered incorrectly, revise the topic and repeat the
question.

• On successful completion of all questions, proceed to the next chapter.

© 2006 KONICA MINOLTA BUSINESS TECHNOLOGIES, INC


Contents

1 What is Image Processing?
1.1 Why do we need image processing?
1.2 Issues with digitizing images
1.2.1 Aliased edges
1.2.2 Moiré patterns
1.2.3 Halftone process
1.3 Knowledge check

2 Image Processing Workflow
2.1 Black and white MFP flow chart
2.2 Color MFP flow chart
2.3 Device color-spaces
2.4 Knowledge check

3 A/D Conversion
3.1 Sampling
3.1.1 What is sampling?
3.1.2 What is sampling frequency?
3.2 Quantization
3.2.1 What is quantization?
3.2.2 What is quantization error?
3.3 Knowledge check

4 Image Processing Filters
4.1 What is filter processing?
4.2 Types of filters
4.3 Noise removal (smoothing)
4.4 Types of smoothing filters
4.4.1 Average filter
4.4.2 Weighted average filter
4.4.3 Moving average method
4.4.4 Median filter
4.4.5 Selective local average method
4.5 Contour definition
4.5.1 Outline of contour definition
4.5.2 1st Differential method
4.5.3 2nd differential method
4.6 Knowledge check

5 Processes of Image Reconstruction
5.1 Counter measures for image deterioration
5.2 Knowledge check

6 Color Conversion
6.1 3D-LUT method
6.2 Interpolation method

7 Halftone Rendering
7.1 Pseudo-halftone processing
7.1.1 Outline of pseudo-halftone rendering
7.1.2 Dither method
7.1.3 Error diffusion method
7.1.4 Density pattern method
7.1.5 Sub-matrix method
7.2 Screening
7.2.1 Screen angles
7.2.2 Screen Patterns
7.2.3 Outline of screening
7.2.4 What is an AM screen?
7.2.5 What is an FM screen?
7.2.6 Comparison between AM and FM screens
7.2.7 Other Screen Types
7.3 Konica Minolta technologies
7.3.1 Laser Dot Size
7.3.2 Descreening
7.4 Knowledge check


1 What is Image Processing?


This chapter discusses why image processing is required between the image input and output
processes.

Image processing is a technique in which data from a digitized image is manipulated by
various mathematical operations in order to create an enhanced image that is more useful or
pleasing to a human observer.

Its main components are:

• Importing, in which an image is captured through scanning or digital photography

• Analysis and manipulation of the image, accomplished using various specialized
software applications

• Output (e.g. to a printer or monitor)

1.1 Why do we need image processing?


There are various reasons for images to require processing (sometimes referred to as “post
processing”, as the processing occurs after the input device has released the image). Some
factors may arise simply from the inherent limitations of the input device, whilst others may
arise from over-processing of the image or unsuitable settings.

Image processing is best used as a tool to enhance or correct an image.

The more an image is processed, the greater the chance of introducing unwanted digital
artifacts. Therefore, image processing should be used in moderation to avoid causing other
complications.


The following chart outlines the common causes leading to the need for image processing,
the correction process and the end result.

Cause: Soft edges of images from a digital camera or scanner; unwanted out-of-focus effects in the image.
Correction process: Sharpening
Result:
• Enhances edges and highlights for better definition
• Correction of blurred images by increasing contrast of adjacent pixels

Cause: Electrical interference from the CCD (high ISO levels).
Correction process: Noise removal
Result:
• Smoothing of pixels with randomly distributed color levels by blending them into surrounding pixels

Cause: Incorrect settings in scanning software; limitations of input devices (scanner or camera).
Correction process: Gradation correction
Result:
• Correction of banding found in smooth gradations
• Gamma correction, contrast enhancement

Cause: Misalignment of the image on the scanner; digital camera imaging of material or fine mesh.
Correction process: Correction of digitized images
Result:
• Removal of moiré patterns
• Improvement of scanned type and lines


1.2 Issues with digitizing images


1.2.1 Aliased edges

When scanning images with hard edges, particularly those on an angle, the pixels can exhibit
a “staircase” effect that appears as a jagged line. To overcome this problem, it is best to scan at
a higher resolution than needed and resample the image down to the correct size.

The jagged lines in the left-hand image were scanned at 200%. When the image is resampled to
100% (shown on the right), the lines appear smoother.

Example of how low resolution images can produce imprecise results. Where there is limited data
the pixels cannot accurately match the intended path.


1.2.2 Moiré patterns

Scanners and digital cameras operate by recording pixel values based on a grid (horizontal
and vertical) which dictates its resolution. When the digitized image has a regular pattern
(such as fine fabric weave) the scanner’s grid can be misaligned and may cause interference,
resulting in a moiré pattern. This pattern can also be produced by scanning a printed image
with a halftone pattern. The best way to overcome this effect is to scan at a higher resolution
and resample down to the required size, or to slightly rotate the image on the scanner.

Example of a moiré pattern. Identical magenta and cyan screens have been overlaid on each other
and rotated slightly to show a pattern. Each degree of rotation will produce a slightly different
result.


1.2.3 Halftone process

When scanning solid and continuous tone images using 1-bit (black and white) settings, the
color or grayscale component of the image will convert to a simulated tone using a halftone
process. This can result in edges and detail being broken up into dots by the conversion process.

Original logo in color Scanned as 1-bit (error diffusion).

Scanned as 1-bit (halftone dither). Scanned as 1-bit (50% threshold).

Enlargement of halftone pattern. Scanned as 8-bit (grayscale).


1.3 Knowledge check


1. Image Processing is generally needed between what two processes?

2. Moiré patterns can occur when scanning objects with a halftone pattern. What is
one method of reducing this effect?


2 Image Processing Workflow


2.1 Black and white MFP flow chart
The following is an image processing flow chart typical in black and white MFPs.


2.2 Color MFP flow chart


The following is an image processing flow chart typical in color MFPs.


2.3 Device color-spaces


In the workflow we talk of device color-spaces and editing color-spaces.

Each piece of equipment that is involved with creating an image is referred to as a “device”, i.e.
cameras, scanners, computer monitors, proofers, platesetters, and printing presses are all
devices.

No device can reproduce the whole of the visible color spectrum. The range of colors that an
individual device can reproduce is called that device’s color-space, or its gamut.

The color-space for a particular device is defined digitally and is called a device profile.
Device profiles fall into three categories: input, output, and display profiles.

A CIE chromaticity diagram showing a generic interpretation of the colors visible to the human eye
(the human vision gamut). The computer monitor can display a wider and richer gamut than can
the printer. Inside the brown line is the gamut that can be displayed by a typical computer
monitor. Inside the yellow line is the gamut of a generic offset printing press.


2.4 Knowledge check


1. How many processes are involved between the input and output of
black and white MFPs?

2. List some ‘devices’ that are involved with creating images.

3. What are the categories of a color-space device profile?


3 A/D Conversion
3.1 Sampling
3.1.1 What is sampling?

Sampling is the conversion process of capturing analog (continuous tone) image information
and mapping it into digital values. The process of sampling requires enough samples to allow
accurate reconstruction of the original.

The diagram shows the sampling/reconstruction stages of the digitizing process. The number
of sample points (or sampling frequency) determines the resolution.

Image with minimal sample points. Image with correct sample points.


3.1.2 What is sampling frequency?

The sampling frequency of an image is defined by the number of horizontal and vertical
samples, usually expressed as Dots Per Inch (DPI). The correct DPI will be determined by the
accuracy required to faithfully reproduce the image for a given output device.

An image expressed as a histogram. More sample points result in a more accurate reproduction of
the original; fewer sample points result in a less accurate reconstruction. Values from 0 to 255
represent the full tonal range, where 0 is black and 255 is white.


3.2 Quantization
3.2.1 What is quantization?

Quantization is the procedure of compressing a range of values to a single numerical value. By
reducing the number of distinct values in a given stream, the stream becomes more compressible,
e.g. when seeking to reduce the number of colors required to represent an image.

Quantization methods compress high-frequency brightness variations, which the human eye has
trouble distinguishing, by dividing each component in the frequency domain by a constant for
that component and then rounding to the nearest integer. This is regarded as lossy compression
since some data is rounded to zero.

Through the quantization process, sequential analog data is converted to digital data with a
specific number of levels. The number of levels is expressed numerically as a number of bits.


3.2.2 What is quantization error?

Regardless of the number of bits used, there will always be a difference, or error, between the
original analog data and the output digital data. This error is called the quantization error. The
quantization error can be explained as follows.

The smooth blue line shows the analog data. When the analog data is quantized to 3-bit (8-
level) digital data, the digital data can be represented by the red step-like graph. The area
colored in gray between the red and blue lines represents the error.

When the number of bits is increased, the gray area, and hence the quantization error,
becomes smaller:

Quantization error is large = Image is rough

Quantization error is small = Image is smooth
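This relationship can be sketched in Python (an illustrative sketch, not part of the module): a smooth analog signal is quantized to n-bit levels, and the total error is compared at two bit depths.

```python
import math

def quantize(value, bits):
    """Map an analog value in [0, 1] to the nearest of 2**bits digital levels."""
    levels = 2 ** bits
    return round(value * (levels - 1)) / (levels - 1)

def total_error(bits, samples=1000):
    """Sum of |analog - digital| over one cycle of a smooth sine wave."""
    error = 0.0
    for i in range(samples):
        analog = 0.5 + 0.5 * math.sin(2 * math.pi * i / samples)
        error += abs(analog - quantize(analog, bits))
    return error

# Increasing the bit depth shrinks the gray area between the two curves:
print(total_error(3) > total_error(8))  # True
```

Running this shows the 3-bit error is far larger than the 8-bit error, matching the rough-versus-smooth distinction above.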


3.3 Knowledge check


1. Describe why scanning at a higher resolution can produce more accurate results
than at a low resolution. What advantages and disadvantages are there by scanning
at high resolutions?

2. The purpose of compressing an image is to create a smaller file size and quicker
processing. What possible negative impact could occur by using compression?


4 Image Processing Filters


4.1 What is filter processing?
To process an image using filters, it is first necessary to reduce the image to a series of
numbers that can be manipulated by a computer. Each number represents the brightness of
the image at a particular location, called a pixel. Once the image has been digitized, there are
three basic operations that can be performed.

1. A point operation is where a pixel value in the output image depends on a single pixel
value in the input image. Examples of point operations include contrast adjustment,
colorizing a black and white image, and adjusting the hue settings of a particular color (as
in the example below).


2. A local operation involves using several neighboring pixels in the input image to
determine the value of an output image pixel.

3. In a global operation, all of the input image pixels contribute to an output image pixel
value. These operations, taken singly or in combination, are the means by which the image
is enhanced, restored, or compressed.


4.2 Types of filters


Filters can be useful for repairing an image or to apply artistic effects. All filters use
mathematical algorithms to achieve their result. The main types of filters commonly used in
image processing are as follows:

High Pass - uses edge detection to create the appearance of sharpening. Detects areas of
contrast to determine edges with minimal effect on areas with low contrast (i.e. flat color).

Original High Pass (sharpen)

Low Pass – uses a smoothing process to create a blur effect. It averages neighboring
pixels to create a smooth transition between areas of contrast.

Original Low Pass (blur)


Distortion/Displacement – geometrically distorts an image, creating 3D or other reshaping
effects.

Original Pixel displacement Image distortion

Textures - applies an effect which creates a relief pattern to give a 3D appearance of texture.

Original Sandstone texture


Colorizing – applies color effects which can affect saturation, hue and brightness.

Original Colorizing effect

Artistic/Stylizing – used to emulate painted or hand drawn effects or to stylize an image
with unusual results.

Original Artistic effect


Noise/Pixelation – applies effects which add random pixel variations (noise) or enlarge
pixels into visible blocks (pixelation).

Original Add noise Pixelation

4.3 Noise removal (smoothing)


To perform high-accuracy image processing, unnecessary components that are not related to
the image (called noise) must be removed. Noise may arise in a number of ways, including high
ISO settings on the image sensors of digital cameras, the CCD sensors of scanners, film grain
visible in scanned images, or JPEG artifacts caused by heavily compressing image files.
Noise can typically be divided into impulse noise and Gaussian noise.

Impulse noise occurs at random positions and appears at irregular sizes.

Gaussian noise is based on a Gaussian distribution and can be caused by electrical
interference from the input device.

In either case, the noise can be dealt with so as to make it less noticeable by acquiring
information from the nearby pixels and applying a low pass averaging algorithm.

The advantage of using this type of filter is that it reduces the visual pattern caused by the
noise by applying a blur (smoothing) to the image. The disadvantage is that it could lead to a
defocusing (or softening) of the image. Judicious use of settings can offer a good compromise
and visually improve the results.

Images are composed of independent square pixels; therefore, noise that occurs irregularly
causes a sudden density difference (in brightness or color) relative to the neighboring pixels.
These density differences need to be minimized without losing the original data values.


Original With noise removal

Enlargements – neighboring pixels have been used to determine the focusing pixel’s value.


4.4 Types of smoothing filters


Smoothing filters are also called low-pass filters because they let the low frequency
components pass and reduce the high frequency components. Low-pass filtering effectively
blurs the image and removes speckles of high frequency noise.

The following types of smoothing filters are used in image processing:

• Average filter

• Weighted average filter

• Moving average method

• Median filter

• Selective local average method.

4.4.1 Average filter

The average filter is a method of determining the focusing pixel value from the average of its
neighboring pixels.

Advantages: The main purpose of this filter is to remove noise while controlling defocusing.
The filter can effectively remove noise that has a short-cycle luminance pattern, while a
pattern with gradual luminance does not change after processing.

Disadvantages: Images tend to appear out-of-focus (soft).

Smoothing in this example has averaged out the high frequency components to reduce unwanted
noise.


Data processing with the average filter

In the case of a 3 x 3 matrix, the value of the center pixel is calculated from the neighboring 8
pixels. The coefficient of the average filter is simply one ninth, based on the total number of
pixels in the matrix (3 x 3), and so the value of the center pixel is simply 1/9 of the total of the
center value and each of its neighboring 8 pixel values.

1/9 1/9 1/9

1/9 1/9 1/9

1/9 1/9 1/9

Specifically, if the following is the original data, the center pixel value is calculated from the
above filtering coefficients like this:

12x1/9 + 10x1/9 + 6x1/9 + 2x1/9 + 3x1/9 + 8x1/9 + 4x1/9 + 5x1/9 + 4x1/9 = 6

12 10 6

2 3 8

4 5 4
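The calculation above can be checked with a short Python sketch (illustrative only; the module itself provides no code):

```python
def average_filter_3x3(matrix):
    """Average filter: the new center value is 1/9 of the sum of all nine pixels."""
    return sum(value for row in matrix for value in row) / 9

# The original data from the example above:
data = [[12, 10, 6],
        [2,  3,  8],
        [4,  5,  4]]
print(average_filter_3x3(data))  # 6.0, as in the worked example
```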


4.4.2 Weighted average filter

The weighted average filter operates in a similar manner to the average filter with the
exception that it puts a larger weight on the center pixel (also known as the focusing pixel).
The coefficients are different from the average filter but the formula is the same.

Advantages and disadvantages: Compared with the average filter, out-of-focus problems are
improved and images have a much smoother and more natural look. However, images can
appear soft if the filter is overused.

Original image

Average filter Weighted average filter


Data processing with the weighted average filter

Data processing with the weighted average filter method is basically the same as for the
average filter. However, different weighting coefficients are used:

1/16 2/16 1/16

2/16 4/16 2/16

1/16 2/16 1/16
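The same formula with the weighted coefficients can be sketched in Python (illustrative only):

```python
WEIGHTS = [[1/16, 2/16, 1/16],
           [2/16, 4/16, 2/16],
           [1/16, 2/16, 1/16]]

def weighted_average_3x3(matrix):
    """Weighted average filter: multiply each pixel by its coefficient and sum."""
    return sum(WEIGHTS[r][c] * matrix[r][c]
               for r in range(3) for c in range(3))

# Reusing the data from the average-filter example: the result (5.5) sits
# slightly closer to the original center value (3) than the plain average (6.0).
data = [[12, 10, 6], [2, 3, 8], [4, 5, 4]]
print(weighted_average_3x3(data))  # 5.5
```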

4.4.3 Moving average method

Noise processing using the moving average method is achieved by setting the value of the
focusing pixel to the average value of its neighboring pixels.

Advantages and disadvantages: This method changes the values of all the pixels. It blurs not
only noise but also image edges.

Data processing with the moving average method

The calculation of the central pixel value in the red frame proceeds as follows:

A 3 x 3 matrix is extracted from the data (Table 1) before processing. The total of all the
neighboring pixel values (i.e. excluding the central pixel) is calculated:

5 + 4 + 6 + 4 + 5 + 5 + 5 + 6 = 40

This total is divided by the number of neighboring pixels (8) to calculate the average value:

40 ÷ 8 = 5

This average value (5) is assigned as the new value of the central pixel.


The frame is then moved along by one pixel; its color is changed to blue to distinguish it
from the first frame. The calculation of the central pixel value in the blue frame then
proceeds in a similar way:

A 3 x 3 matrix is extracted from the data (Table 1) before processing. The total of all the
neighboring pixel values (i.e. excluding the central pixel) is calculated:

4 + 6 + 8 + 5 + 6 + 5 + 6 + 8 = 48

This total is divided by the number of neighboring pixels (8) to calculate the average value:

48 ÷ 8 = 6

This average value (6) is assigned as the new value of the central pixel.
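The red-frame step can be sketched in Python. Note that Table 1 is not reproduced in this copy, so the center value (9) is an assumed noise pixel; only the eight neighbor values match the worked example (total 40).

```python
def moving_average(window):
    """Moving average method: average of the 8 neighbors, excluding the center."""
    total = sum(v for row in window for v in row) - window[1][1]
    return total / 8

# Neighbors total 40 as in the red-frame example; the center 9 is assumed.
red_frame = [[5, 4, 6],
             [4, 9, 5],
             [5, 5, 6]]
print(moving_average(red_frame))  # 5.0
```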

4.4.4 Median filter

Noise processing by a median filter blurs only noise, while the moving average method blurs
noise and edges at the same time. A median value is the value at the middle position when a
set of data values is arranged according to size (or the average of the two middle values when
there is an even number of values).

The median filter assigns to the target pixel the median of the density values in its
surrounding window, so contrast at edges is preserved rather than averaged away.

Advantages: Compared with the moving average method, the median filter has the following
advantages:

• More effective noise reduction

• Small density fluctuations can be smoothed

• Fewer blurs at contoured parts of images.


Data processing with the median filter

The calculation of the central pixel value in the red frame proceeds as follows:

A 3 x 3 matrix is extracted from the data (Table 1) before processing. The pixel values in the
matrix are arranged according to size:

4, 4, 4, 5, 5, 5, 5, 6, 6

The middle (5th) value is assigned as the value of the central pixel.
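The median step can be sketched in Python (illustrative; the window below is an assumed layout whose values sort to the sequence in the example):

```python
def median_filter_3x3(window):
    """Median filter: sort the nine values and take the middle (5th) one."""
    values = sorted(v for row in window for v in row)
    return values[4]

# A window whose values sort to 4, 4, 4, 5, 5, 5, 5, 6, 6 as in the example:
window = [[5, 4, 6],
          [4, 5, 5],
          [4, 5, 6]]
print(median_filter_3x3(window))  # 5
```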

4.4.5 Selective local average method

The variance is calculated for each one of 9 sub-matrices in a larger 5 x 5 matrix. Then the
pattern with the smallest variance is selected and its average is calculated. This value is
assigned to the central pixel in the large matrix.

Advantages: A stable density value is achieved, since the local area with the minimum
variance within the 5 x 5 neighborhood is used. Around each pixel, a local area that
contains no edge is found, and the average density of that area is set as the density of
the target pixel. This makes it possible to remove noise without blurring edges.

Data processing with the selective local average method

An area composed of 5 x 5 pixels around the target pixel is examined. Within that area, 9
local areas are examined. A large variance in a local area indicates that an image edge is
included in the area; therefore, the local area with the smallest variance should not
contain an image edge.


Noise can be removed without blurring the original image by finding the local area with the
smallest variance and using its average density as the new central pixel value.

Original Image after processing
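A Python sketch of the idea follows. The exact sub-matrix shapes used in the module's figure are not reproduced here, so this simplified version assumes the nine local areas are the nine shifted 3 x 3 windows inside the 5 x 5 area.

```python
def selective_local_average(area):
    """Selective local average (simplified): area is a 5x5 matrix; return the
    average of the 3x3 sub-area with the smallest variance, i.e. the sub-area
    least likely to contain an edge."""
    best_variance, best_mean = None, None
    for top in range(3):              # nine shifted 3x3 sub-areas
        for left in range(3):
            values = [area[top + r][left + c]
                      for r in range(3) for c in range(3)]
            mean = sum(values) / 9
            variance = sum((v - mean) ** 2 for v in values) / 9
            if best_variance is None or variance < best_variance:
                best_variance, best_mean = variance, mean
    return best_mean

# A flat region with a bright edge on the right: the chosen sub-area avoids
# the edge, so the center keeps the flat density instead of being pulled up.
area = [[5, 5, 5, 5, 20]] * 5
print(selective_local_average(area))  # 5.0
```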


4.5 Contour definition


4.5.1 Outline of contour definition

What is contour definition?

A contour is a line that represents the outer edge of an object. In image processing, extracting
a contour is called “contour definition” or “edge extraction”.

In images, “the part where the density or color suddenly changes”, such as a border between
objects or between an object and the background, is visible as a contour. Since a contour is a
sudden density change, a differential operation that identifies sudden changes can be used
for contour definition.

There are two types: the 1st differential method and the 2nd differential method. Using 3 x 3
matrices, the difference values between the brightness of the target pixel and the brightness
of its four neighboring pixels (top/bottom/left/right) are obtained; portions with large
difference values are identified as contours and extracted.

For an image from which noise has been reduced by noise processing, edge enhancement is
performed to correct image deterioration. To catch sudden changes of brightness, a first-
difference filter within a 3 x 3 matrix detects the difference values of the target pixel
compared to its four neighboring pixels, as shown in the image below. Points with large
values are set as the contour and the edge enhancement process is performed.

In image processing, the contour is a line element used to characterize an image. Based on
contour information, a specified object can be extracted and its area and girth measured, or
corresponding points of two images can be detected and used as a key to recognize complex
images.

The following are contour patterns:


Why is contour definition necessary?

Images are composed of pixels of different densities. The contour of an object is considered to
be the part where the pixel density changes drastically. Therefore, by focusing on density in
an image and detecting the change in pixel density, the target contour can be extracted.

Types of contour definition

There are two methods used to extract contours: 1st differential processing and 2nd
differential processing. Examples of the 1st differential processing method are Prewitt, Sobel
and Roberts.

An example of the 2nd differential processing method is the Laplacian method.

Images after each processing method are shown below.


4.5.2 1st Differential method

1st differential processing is a filter for enhancing lines and edges in images. The output
becomes larger at pixels where the density changes, so the borders between background and
objects are extracted.

How data is processed

With the Sobel and Prewitt 1st differential filters, data is processed as follows:

As the image below shows, the coefficients of a horizontal filter and a vertical filter are
each multiplied by nine pixels (the target pixel and its neighboring eight) and the values
are summed. If the total value in the horizontal direction is A and the total value in the
vertical direction is B, the new value of the target pixel is calculated by this expression:

Target pixel value = √(A² + B²)

-1  0  1             -1 -2 -1

-2  0  2              0  0  0

-1  0  1              1  2  1

Horizontal direction  Vertical direction


Example: Horizontal direction

Example: Vertical direction

The new target pixel value =

The Prewitt filter data processing method is similar to that for the Sobel filter, except that the
coefficients by which the neighboring pixels are multiplied are different.
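The Sobel calculation can be sketched in Python. Here the two kernel totals A and B are combined as the gradient magnitude √(A² + B²), the standard Sobel combination (assumed to match the module's expression, which is not reproduced in this copy).

```python
import math

SOBEL_H = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
SOBEL_V = [[-1, -2, -1],
           [ 0,  0,  0],
           [ 1,  2,  1]]

def sobel_magnitude(window):
    """Apply both Sobel kernels to a 3x3 window and combine totals A and B."""
    a = sum(SOBEL_H[r][c] * window[r][c] for r in range(3) for c in range(3))
    b = sum(SOBEL_V[r][c] * window[r][c] for r in range(3) for c in range(3))
    return math.sqrt(a * a + b * b)

# A vertical edge (dark left, bright right) gives a strong horizontal
# response and no vertical response:
edge = [[0, 0, 255],
        [0, 0, 255],
        [0, 0, 255]]
print(sobel_magnitude(edge))  # 1020.0
```

The Prewitt version differs only in the kernel coefficients, as noted above.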


4.5.3 2nd differential method

2nd differential processing is another filter used to enhance lines and edges in images.

How data is processed

This filter has two processing methods: 4-directional processing based on the
top/bottom/left/right pixels and 8-directional processing based on the top/bottom/left/right
and oblique pixels.

To use a Laplacian filter with 8-directional processing, a matrix of 9 pixels from the original
image (with the target pixel at the center) is multiplied by the coefficients shown in the
following. The total of the values is the new value of the target pixel.

1 1 1

1 -8 1

1 1 1

After processing by a Laplacian filter, brightness in the image becomes uneven.
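The 8-directional Laplacian step can be sketched in Python (illustrative only): flat areas give zero response, while a sudden density change gives a large value.

```python
LAPLACIAN = [[1,  1, 1],
             [1, -8, 1],
             [1,  1, 1]]

def laplacian_8(window):
    """8-directional Laplacian: multiply the 3x3 window by the coefficients and sum."""
    return sum(LAPLACIAN[r][c] * window[r][c]
               for r in range(3) for c in range(3))

flat = [[7, 7, 7]] * 3                       # uniform area -> no response
spot = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]     # isolated bright pixel
print(laplacian_8(flat))  # 0
print(laplacian_8(spot))  # -72
```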


4.6 Knowledge check


1. Noise is usually introduced by some type of electrical interference by the input
device. What methods can be used to reduce this effect?

2. Noise reduction usually has an unwanted side effect. Describe this effect and any
methods used to counter it.


3. Describe 4 methods of edge detection within an image.

4. Explain how a low-pass filter can effectively reduce noise in an image.


5. Describe the 3 basic operations used in filter processing.

6. What part of an image does contour definition target?


5 Processes of Image Reconstruction


5.1 Counter measures for image deterioration
When an image has not been acquired correctly or exhibits deterioration due to over-processing,
there are processes available to assist in restoring its visual integrity.

These processes may include reconstruction by:

• digitally enhancing an element to recover definition that has been lost or obscured

• reconstructing sections of the image which are irreparably damaged by grafting other
elements in their place, either from the original or from a reference image

• cropping of the image to delete unfavorable edges

• straightening or rotating parts of the image

• contrast enhancement

• gamma (midtone) adjustment for shadow definition

• edge sharpening and contour definition

• smoothing processes to reduce noise and compression artifacts

• colorizing or color correction to obtain true white balance.

Gamma Correction

Gamma correction controls the overall brightness of an image. It targets the mid tones and
leaves the black and white values as they are. If the tonal range is from 0 to 255, then the
midpoint level is 128. By moving the midpoint value closer to 0, the dark values become
compressed and the lighter values are expanded. This gives the appearance of lightening the
image. To darken the image, increase the value of the midpoint closer to 255.

Original image Example of Gamma correction
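The midpoint behavior described above can be sketched in Python (illustrative only):

```python
def gamma_correct(value, gamma):
    """Apply a gamma curve to one tone value in the 0..255 range.
    Endpoints 0 and 255 stay fixed; only the midtones move."""
    return round(255 * (value / 255) ** gamma)

print(gamma_correct(128, 0.5))  # midtone raised -> image appears lighter
print(gamma_correct(128, 2.0))  # midtone lowered -> image appears darker
print(gamma_correct(255, 2.0))  # 255: white is unchanged
```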


Contrast Enhancement

A technique used to improve an image which does not use the full tonal range, i.e. black values
are not at 0 and white values are not at 255. By remapping the darkest values toward 0 and the
lightest values toward 255, the image is stretched to cover the full tonal range.
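
A minimal sketch of this remapping (the function name is illustrative):

```python
def stretch_contrast(pixels):
    # Linearly remap the darkest value to 0 and the lightest to 255.
    lo, hi = min(pixels), max(pixels)
    if lo == hi:
        return pixels[:]  # flat image: there is no range to stretch
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]
```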

Original image Example of contrast enhancement

Noise Reduction

This is the process of smoothing parts of an image by using an averaging filter to minimize
density differences between neighboring pixels. Noise can be made less noticeable by acquiring
information from the nearby pixels and applying a low-pass averaging algorithm. Low-pass
filters let the low-frequency components pass and attenuate the high-frequency components.
Low-pass filtering effectively blurs the image and removes speckles of high-frequency noise.
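
A simple 3 x 3 averaging (low-pass) sketch, with edge pixels left untouched for brevity; this is a generic illustration, not the module's exact filter:

```python
def mean_filter(img):
    # 3x3 averaging filter: each interior pixel becomes the mean of
    # itself and its 8 neighbors, smoothing out isolated speckles.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = sum(img[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = round(total / 9)
    return out
```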

Original image Example with noise reduction


White Balance Correction

This color-corrects images which do not exhibit the correct color temperature of the
environment they were captured in. Artificial lighting and incorrect settings on digital cameras
can cause a shift of an image's color range, making it either warmer or cooler than intended.
The device may have trouble detecting what should be a neutral white because the light may be
warmer or cooler than neutral (5500K). If that occurs, the whole RGB spectrum will be shifted
away from its correct values and the image will appear warmer or cooler than intended.
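
One common approach to this correction is the gray-world assumption (an illustrative technique, not necessarily the one any particular camera uses): the average of the whole image is assumed to be neutral, and each channel is scaled accordingly:

```python
def gray_world_balance(pixels):
    # pixels: list of (r, g, b) tuples in 0..255.
    # Scale each channel so that all three channel means converge on
    # the same neutral gray, removing a uniform warm or cool cast.
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3
    gain = [gray / a if a else 1.0 for a in avg]
    return [tuple(min(255, round(p[c] * gain[c])) for c in range(3))
            for p in pixels]
```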

Original image Example of white balance correction

Edge Sharpening

Images are composed of pixels of different densities. The edge of an object is considered to be
the part where the pixel density changes drastically. By extracting the contour of an edge and
increasing its contrast, the image appears to sharpen.
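
A minimal 1-D unsharp-masking sketch of this idea (illustrative only): each pixel is pushed away from its local average, which raises contrast exactly where the density changes:

```python
def sharpen_row(row, amount=1.0):
    # Boost each pixel by its difference from the 3-pixel local average;
    # flat areas are unchanged, while edges gain contrast.
    out = row[:]
    for i in range(1, len(row) - 1):
        blur = (row[i - 1] + row[i] + row[i + 1]) / 3
        v = round(row[i] + amount * (row[i] - blur))
        out[i] = max(0, min(255, v))  # clamp to the 8-bit range
    return out
```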

Original image Example of edge sharpening


Cropping

Cropping an image has the effect of focusing onto the main subject matter. This is a
compositional adjustment based on personal judgment. The image size will decrease when
there is cropping.

Original image Example using cropping

5.2 Knowledge check


1. How does Gamma correction affect the midtones of an image?


2. How does Noise reduction achieve its smoothing qualities?

3. What may cause the white balance to shift in a digital image?


6 Color Conversion
6.1 3D-LUT method
Three-dimensional table interpolation techniques are now widely used in color management
systems. These techniques are practical because complicated color conversions such as
gamma conversion, matrix masking, under color removal, or gamut mapping can be executed
at once by use of a three-dimensional lookup table (3D LUT).

3D LUTs are used to warp color relationships to make them look correct. For each input value,
three output values are assigned. Between the RGB input and the CMYK output, there is a 3D
LUT that defines the warping from one device to another or from one color space to another.

Input          Output
               C      M      Y      K

R  1           .9     .01    .01    .02
G  .5          .01    .6     .02    .2
B  .3          .01    .03    .5     .02

Example of a 3D-LUT

LUTs are used to define these relationships and to speed up the conversion process, avoiding
the need to perform complex calculations for every conversion.
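
A sketch of how a 3D LUT with interpolation can be applied (the lattice layout and function names are assumptions for illustration; real color engines use proprietary formats): the input color is located inside a cell of a coarse lattice, and the eight corner entries are blended with trilinear weights:

```python
def trilinear_lookup(lut, size, r, g, b):
    # lut: dict mapping lattice node (i, j, k) -> output tuple;
    # 'size' is the number of nodes per axis. Inputs r, g, b are 0..255.
    step = 255 / (size - 1)

    def split(v):
        # Locate the lattice cell and the fractional position inside it.
        t = v / step
        i = min(int(t), size - 2)
        return i, t - i

    (i, fr), (j, fg), (k, fb) = split(r), split(g), split(b)
    out = [0.0, 0.0, 0.0]
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                # Trilinear weight of this corner of the cell.
                w = ((fr if di else 1 - fr) *
                     (fg if dj else 1 - fg) *
                     (fb if dk else 1 - fb))
                node = lut[(i + di, j + dj, k + dk)]
                for c in range(3):
                    out[c] += w * node[c]
    return tuple(round(v) for v in out)
```

An identity lattice (each node outputs its own coordinates) should return the input unchanged, which makes a convenient sanity check for the interpolation.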


6.2 Interpolation method


Conversions between color spaces are frequently used in image processing. Each type of
device, such as a color MFP, has a unique color space called a “color gamut”. Each device will
reproduce an identical image in a different manner, based upon the capacity of the device’s
gamut. This may be obvious when comparing output from different devices.

The problem with matching colors can be observed if the gamut of the output device is
narrower than that of the input device, because the output device cannot reproduce the
colors outside of its own gamut.

As seen in the diagram, the CMYK gamut is smaller (i.e. contains fewer colors) than the RGB
gamut. This is apparent when comparing the printed output to the monitor display of an image.

The technology which handles the conversion of RGB to CMYK (MFP) is proprietary to each
manufacturer’s device.

However, for some conversions, purely mathematically-based methods have been found
wanting, and the method of measurement and interpolation has been used to produce more
accurate results. Nowhere is this method more prevalent than in dealing with colors resulting
from combinations of inks or toners applied to paper.

Although ink or toner colors of Cyan, Magenta, Yellow and Black are most commonly used, the
use of a larger number of inks or toners that expand the color gamut is becoming more
important for high fidelity color printing.


7 Halftone Rendering
7.1 Pseudo-halftone processing
7.1.1 Outline of pseudo-halftone rendering

There are two methods for expressing the gradation of an original image produced by silver
halide photography on an offset printer or MFP: density gradation and area gradation.

Pseudo-halftone processing is an area gradation technique that is used for offset printers and
MFPs.

What is pseudo-halftone rendering?

It is a rendering technology that reproduces the dark and light areas of an input image with
limited color and gradation by changing the ink (toner) attachment area ratio.

For representing an analog photograph on a dye sublimation printer, the gradation can be
changed for each pixel and the linear relation between the input and the output resolutions
can be maintained. This is called the density gradation method.

The difference between area gradation and density gradation is explained in the following
diagram.


Types of pseudo-halftone processing

The following are the major classifications of pseudo-halftone processing:

• Dither method

• Error diffusion method

• Density pattern method

• Sub matrix method.

7.1.2 Dither method

What is the dither method?

The dither method expresses the halftone with a fixed gradation. For example, it expresses the
various gray color levels with only black and white. Using a fixed rule, black and white pixels
corresponding to the original gradation are generated. The color of gray is then determined
by the frequency of the appearance of the black and white pixels.

The dither method can be classified into two types: ordered dither and random dither.

An ordered dither reproduces the image using a dither matrix (2-dimensional block) with a
fixed rule.


Random dither is a method of reproducing the image by referring to a 2-dimensional block
which is expanded to infinity.

Ordered dither method

When converting to bi-level data, each pixel is compared with the threshold value at the
corresponding position in the dither matrix. Single-pixel gradation cannot be reproduced with
the ordered dither method because the gradation is reproduced over the area of the matrix as a
whole.


The ordered dither method includes the following types:

Dot distribution type: Bayer pattern

Advantage: the dot configuration on the output is unnoticeable and high resolution can be
achieved.

Dot concentration type: Halftone dot pattern, Spiral pattern

Advantage: image compression efficiency and gradation quality are superior to that of the dot
distribution type.

Dither matrix - a mask used to generate pseudo-random threshold values. It is considered that
a good result is achieved when the random numbers for the dither matrix are given as a
two-dimensional block. Popular matrix sizes range from 4 x 4 to 8 x 8.
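
As an illustration of an ordered dither using a dot-distribution (Bayer) matrix, the 4 x 4 matrix below is the standard Bayer pattern, scaled by 16 here so the thresholds span the 0-255 range:

```python
BAYER_4x4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def ordered_dither(img):
    # Compare each pixel with the threshold at the matching matrix
    # position; the 4x4 matrix tiles across the whole image.
    return [
        [255 if img[y][x] > BAYER_4x4[y % 4][x % 4] * 16 else 0
         for x in range(len(img[0]))]
        for y in range(len(img))
    ]
```

For a uniform mid-gray input (128), exactly half of the positions in each 4 x 4 tile turn on, which is how the matrix reproduces gray with only black and white pixels.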


Data rendering

An original image is divided into blocks and the image in each block is further divided into
4 x 4 pixels. The gradation level of each pixel is quantized to 16 levels, from 0 to 15.

The original image gradation levels from 0 to 255 are quantized to 16 levels using this table:

Input range    Level        Input range    Level
0 to 15        0            128 to 143     8
16 to 31       1            144 to 159     9
32 to 47       2            160 to 175     10
48 to 63       3            176 to 191     11
64 to 79       4            192 to 207     12
80 to 95       5            208 to 223     13
96 to 111      6            224 to 239     14
112 to 127     7            240 to 255     15

Example of a quantization table.
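
Because each of the 16 levels spans exactly 16 consecutive input values, the whole table reduces to an integer division (a one-line sketch):

```python
def quantize_16(value):
    # Map an 8-bit gradation level (0..255) onto 16 levels (0..15);
    # each level covers a block of 16 consecutive input values.
    return value // 16
```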


Random dither method

This is a rendering method that treats pixel density as a probability. Noise is added to the
original before it is converted to bi-level data.

Original Random dither

Advantage: The process is easier than that of ordered dither.

Disadvantages: The output image contains noise because of the random number.

The rendering result is not constant and the quality is poor.

Outlines are not properly reproduced.

Not for practical use.

Data processing

Random numbers from 0 to 255 are given as the threshold values for each pixel. The output
pixel is determined by the relationship between the random number and the brightness
(input pixel value). Specifically, when the input pixel value is equal to or higher than the
threshold, 1 (255) is output. When the input pixel value is lower than the threshold, 0 is
output.

Input pixel < random number (0 to 255) → output pixel 0

Input pixel ≥ random number (0 to 255) → output pixel 1 (255)
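
The rule above can be sketched as follows; a seeded generator is used only to make the illustration repeatable:

```python
import random

def random_dither(row, seed=0):
    # Each pixel is compared with its own random threshold in 0..255:
    # at or above the threshold -> 255 (1), below it -> 0.
    rng = random.Random(seed)
    return [255 if p >= rng.randint(0, 255) else 0 for p in row]
```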


7.1.3 Error diffusion method

What is the error diffusion method?

Halftone rendering steps the gradation level down from the original to reduce the data
volume. Consequently, it is impossible to reproduce the gradation of the original exactly. The
error diffusion method, however, reproduces the halftone well over the image as a whole, even
though the errors at individual pixels may be large. This is because the error diffusion method
distributes the errors from the threshold processing to the neighboring pixels.

There are several algorithms with different error distribution methods: Floyd and Steinberg,
Jarvis, Judice and Ninke, Shiau–Fan, etc.

Original data After error diffusion

Advantages: The dither method expresses 1 pixel data with 1 dot. On the other hand, the error
diffusion method expresses 1 pixel data with several dots after diffusing the error to the
neighboring pixels. This method achieves an optically better result with gradations and tonal
areas.

Less moiré occurs with the error diffusion method than with the dithering method.

Disadvantages: Noise is more likely to appear at highlighted parts of the image.

The data volume to be processed becomes large and a high-speed processing circuit is
necessary.


Data processing method

1. Simple error diffusion

The following examples show simplified processes to help illustrate error diffusion concepts.
Error diffusion to neighboring pixels is excluded from the examples.

Example: 1-bit (bi-level) error diffusion with a threshold value of 120

Pixel A is 210. This value is larger than the threshold of 120 so it is replaced by 255. The error
relative to the original value of pixel A (210 – 255 = -45) is added to pixel B.

Pixel B is 75 after adding the previous error value (-45) to the original value (120). This value is
smaller than the threshold of 120 so it is replaced by 0. The error (75 – 0 = 75) is added to pixel
C.

Pixel C is 165 after adding the previous error value (75) to the original value (90). This value is
greater than the threshold of 120 so it is replaced by 255. The error (165 – 255 = -90) is added
to pixel D.

Pixel D is 20 after adding the previous error value (-90) to the original value (110). This value is
less than the threshold of 120 so it is replaced by 0.

Note that the error between the adjusted pixel value and the output value is always carried
over and added to the next pixel's value.
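
The worked example above can be sketched in a few lines (the function name is illustrative):

```python
def simple_error_diffusion(row, threshold=120):
    # 1-D error diffusion: the full error (adjusted value minus output)
    # is carried to the next pixel, as in the worked example above.
    out = []
    error = 0
    for p in row:
        adjusted = p + error
        q = 255 if adjusted >= threshold else 0
        error = adjusted - q
        out.append(q)
    return out
```

Running it on the pixel values from the example (210, 120, 90, 110) reproduces the sequence 255, 0, 255, 0 derived step by step above.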


2. Floyd-Steinberg method

The error is distributed to the neighboring 4 pixels with weighting.

Example: 1-bit (bi-level) error diffusion with a threshold value of 120
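
A compact sketch of 2-D Floyd-Steinberg diffusion with the classic 7/16, 3/16, 5/16, 1/16 weights (errors falling outside the image are simply dropped):

```python
def floyd_steinberg(img, threshold=120):
    # 2-D error diffusion with the classic Floyd-Steinberg weights:
    # 7/16 to the right, 3/16 below-left, 5/16 below, 1/16 below-right.
    h, w = len(img), len(img[0])
    buf = [[float(v) for v in row] for row in img]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 255 if old >= threshold else 0
            out[y][x] = new
            err = old - new
            for dx, dy, wgt in ((1, 0, 7), (-1, 1, 3), (0, 1, 5), (1, 1, 1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and ny < h:
                    buf[ny][nx] += err * wgt / 16
    return out
```

On a uniform mid-gray patch this produces an even checkerboard-like mix of black and white pixels, which is how the average density of the original is preserved.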


3. Jarvis, Judice and Ninke method

The error is distributed to the neighboring 12 pixels with weighting. Compared to the Floyd-
Steinberg method, the error is distributed to more pixels and better image quality is achieved.
However, data processing time is longer and high performance processing circuitry is
necessary.

Example: 1-bit (bi-level) error diffusion with a threshold value of 120


4. Shiau–Fan method

The error is distributed to the neighboring 5 pixels with weighting. Compared to the
Floyd-Steinberg method, the error is diffused to more pixels in the main-scan direction and
better image quality can be achieved.

Example: 1-bit (bi-level) error diffusion with a threshold value of 120


7.1.4 Density pattern method

What is the density pattern method?

The density pattern method expresses the density of one pixel with an n x n pixel sub-matrix,
with the result that 1 pixel is replaced by n x n dots. The number of output dots is therefore
n x n times the number of input pixels.

Original data After density pattern processing

Advantage: The larger the size of the sub-matrix, the higher the degree of gradation becomes.

Disadvantage: The larger the size of the sub-matrix, the larger the size of the final image.


Data processing method

Example: When using a 2 x 2 sub-matrix, the following 5 levels are possible:

Pixel A: 255 is converted to density pattern 4.

Pixel B: 122 is converted to density pattern 2.
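
The conversion can be sketched as follows; the fill order of the four dot positions is an assumption for illustration (the module's diagram may switch dots on in a different order):

```python
# Assumed fill order for the 2x2 sub-matrix: dot positions are
# switched on one by one as the level rises.
FILL_ORDER = [(0, 0), (1, 1), (0, 1), (1, 0)]

def density_pattern_2x2(value):
    # Quantize an 8-bit value to one of 5 levels (0..4 dots on), then
    # build the 2x2 dot pattern; each input pixel becomes 4 output dots.
    level = round(value * 4 / 255)
    cell = [[0, 0], [0, 0]]
    for r, c in FILL_ORDER[:level]:
        cell[r][c] = 1
    return cell
```

As in the example above, 255 maps to level 4 (all dots on) and 122 maps to level 2 (two dots on).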


7.1.5 Sub-matrix method

What is the sub-matrix method?

The sub-matrix method is a mix of the dither method and the density pattern method. The
original image is processed with a dither matrix and then each pixel is replaced with multiple
dots using the density pattern method.

Advantage

Both high resolution (which is inherent in the dither method) and high gradation (which is
inherent in the density pattern method) can be achieved.

7.2 Screening
7.2.1 Screen angles

Example of screen angles.

When overprinting colors, a different screen angle is used for each color in order to minimize
the moiré effect. The most popular method involves having the black (K) screen placed at 45°,
the cyan (C) and magenta (M) screens placed 30° to either side of the black screen (at 15° and
75° respectively) and the yellow (Y) screen, which is not distinctive, placed between two of the
other colors, or beyond one of the other colors, as shown above.

A correct choice of angles will result in a dot pattern in the overlapping section known as a
rosette, whereas an incorrect choice will produce an undesirable moiré pattern. Each vendor
(printing company or device manufacturer) uses its own screen angles.

Generally, moiré is highly visible at a screen angle of 0° and it becomes less visible when the
screen angle is set to 30° or 45°. This is why the most easily distinguished color, black, is set to
45°, whilst the less distinct colors, cyan and magenta, are offset by 30° to either side of black.
Moiré for yellow is very hard to distinguish, which is why its screen angle may be safely set
to 0°.

For each pixel in the writing area, the screen angle is determined for the four colors (CMYK) by
assigning specific multi-level dither patterns to each color.

7.2.2 Screen Patterns

When a continuous tone (contone) image is intended to be printed, a halftone image of each
of the separated CMYK channels needs to be created. During the halftoning process, the color
separated channels are rotated in order to avoid full overprinting of the inevitable clusters of
dots from each of the CMYK channels.

This is demonstrated below in the enlargement detail of a printed image, which shows the
cluster of patterns referred to as a rosette. The rosette pattern will evenly distribute the tonal
characteristics of the image without producing moiré artifacts.

Example of CMYK channels correctly overlaid to produce a rosette pattern.


7.2.3 Outline of screening

Screening methods obtain a dithered image by creating regular clusters of points, with the
black and white dot patterns inside the clusters having a variable size, according to the
image’s tonal values. For this reason, these techniques are known by the name of amplitude
modulated dithering technique, or simply AM dithering.

Dispersed dithering algorithms use a fixed point size and modulate the spatial distribution of
black and white points to render the tonal values of the image. In contrast to AM dithering
techniques, these algorithms are called frequency modulated dithering, or simply FM
dithering. FM dithering techniques have been introduced into raster image processors (RIP) of
imagesetters and printers. Here are some typical types of AM and FM screens:

AM screens

Round dot (AGFA)

Square dot

• Tone jump becomes noticeable at around 50% image density.

Chain dot

• This, along with the square dot screen, is one of the most popular dot shapes for
expressing continuous tones.

• The dot shape is rhombic.

• Around an image density of 50%, the longer ends of the dots are chained.

• Chain dots are effective in resolving the issue of tone jump.

FM screen

Staccato (Creo)

Fairdot (Dainippon Screen)

Diamond dot (Heidelberg)

Number of screen lines

The optimal number of screen lines (“lines per inch”, or LPI) for the desired purpose must be
selected at the time of printing.


The following table shows the most popular number of screen lines for several types of paper.

Number of screen lines (LPI) Paper type

65 to 85 Newspaper

85 to 133 Medium/high quality paper

133 to 200 Coated paper

300 to 400 Art paper

7.2.4 What is an AM screen?

An AM (amplitude modulation) screen produces dots with the following properties:

• Color density is expressed by the size of the dot.

• Image density is expressed by positioning the dots on ordered grids.

7.2.5 What is an FM screen?

An FM (frequency modulation) screen produces dots with the following properties:

• The dots are uniform in size and very small

• The dots are positioned at random, with gradation being expressed by the density of
the dots.


7.2.6 Comparison between AM and FM screens

When printing continuous tone originals, such as photographs, AM screening expresses the
density gradation by the size of a halftone dot. Photographs in newspapers and magazines are
usually printed with this screening method.

FM screening expresses the density of the photograph by the density of the very small dots in
the area - fewer dots for the light and more dots for the dark parts of the image. The dot
pattern is random, the moiré effect does not occur and the gradation expression is smooth.
This method is generally used for calendars, posters, art books, etc.

AM screen:

• Detail: the interval between halftone dots is fixed and details are lost.

• Moiré: because of the halftone dot shape, moiré sometimes occurs.

• Rosette: the screen angle for each process color sometimes causes a rosette pattern.

• Tone jump: though smooth gradation is expected, the image suddenly turns darker from the
middle.

FM screen:

• Detail: since the dot size is very small and the distance between dots is narrow, details are
not lost and can be expressed.

• Moiré: since the direction and interval between dots are not constant, moiré does not occur.

• Rosette: since a fixed pattern does not exist, a rosette pattern does not appear.

• Tone jump: since the gradation is output smoothly, tone jump is unlikely to occur.

7.2.7 Other Screen Types

Hybrid screen

Hybrid screening combines the advantages of both AM screening, which is easy to handle, and
FM screening, which produces a high-quality result. Different halftone dots are used
according to the density of the original image to enable an optimal representation of all parts
of the image.

In the highlight and shadow areas, a single dot size is used and the density of the dots
changes as necessary to help reproduce the image, as with FM screening. In the middle areas,
between highlights and shadows, gradations are expressed by changing the dot size, as is
done with AM screening. Several companies have developed hybrid screens and each
company employs a unique variant of the technology.


7.3 Konica Minolta technologies


7.3.1 Laser Dot Size

In the following two examples, the dither patterns shown below are used to generate halftone
copy for various levels of cyan input. The input vs. output graph is used to determine the
output and, hence, the laser diameter, for each of the individual dither patterns (1 to 4).

Example: In case of cyan.

Example: Halftone copy (cyan) with a pixel density of 160.

For a given input of a particular color, the graph above may be used to determine the
corresponding output value for each dither pattern and, hence, the required laser diameter.


For each output threshold value generated by the above graph, the laser emission time and,
hence, the laser diameter is shown in the following table.

Threshold value level Laser PWM Laser diameter

255 Laser emission time: long

192 Laser emission time: medium

128 Laser emission time: short

0 None

In the case of an input level of 160 (see the diagram above), the laser diameter for dither
patterns 1 and 2 is large, since the output for these patterns is 255.

For dither pattern 3, the laser diameter is medium because the output is 192, whilst for dither
pattern 4, the laser diameter is small because the output is 128.

This is summarized in the following table.

Multi-level dither pattern Threshold value level Laser diameter

1, 2 255

3 192

4 128


Executing laser writing at the calculated diameters for each of the multi-level dither patterns
produces the following output for cyan at an input level of 160.

Example: Halftone copy (cyan) with an image density of 80.

In the case where the input level is 80 (see the diagram above), the laser diameter is large only
for dither pattern 1, since the output for that pattern for the given input is 255.

For dither patterns 2 and 3, however, the output is 128 and the laser diameter is therefore
small, whilst for dither pattern 4, there is no laser emission at all because the output is 0.


This is summarized in the following table.

Multi-level dither pattern Threshold value level Laser diameter

1 255

2, 3 128

4 0

Once again, executing laser writing at the calculated diameters for each of the multi-level
dither patterns produces the following output for cyan, this time at an input level of 80.

For magenta, yellow and black, a similar process is performed. Each color has its own specific
arrangement of dither patterns by which the screen angles specific to those colors can be
produced. See diagram on the next page.


Color Dither pattern

Magenta

Yellow

Black


7.3.2 Descreening

Screen processing is not suitable for text because of poor area discrimination (see below). For
this reason, error diffusion processing is used instead.

Image with Screen Processing Image with Error Diffusion

Even with a judicious choice of screen angle, a moiré pattern may still be generated. To
minimize this, descreen processing is used, whereby the entire image is gradated by reducing
the density gap between adjacent pixels (making the moiré less visible).

Example: the average value of the 3-bit data is calculated and converted to 8 bits; this
averaged value is then used.


For laser writing, the laser emission time is changed and a pattern is created, according to the
processing data. However, a gap will be generated between pixels. Therefore, the dot-shifting
process is executed to create a smooth line screen.


7.4 Knowledge check


1. What advantages does the Jarvis, Judice and Ninke error diffusion method have
over the Floyd and Steinberg method?

2. Describe how the dither method functions and the 2 types used in its operation.


3. Process the following data with the 1-bit error diffusion method with a threshold
of 159.

4. Why does the file size increase when applying the density pattern method?


5. How does the number of screen lines (lpi) affect the tonal gradient within an
image?

6. Describe the differences of the dot pattern with AM screening and FM screening.


7. Explain why process screens (CMYK) are overlaid on different angles when
printing in color.

8. How does the laser diameter affect gradients when printing?


9. What is a recognized method of reducing a moiré effect in a scanned image?
