Physical Principles of Medical Imaging 2nd Ed
Sprawls, Perry.
Physical principles of medical imaging / Perry Sprawls, Jr. - 2nd ed.
p. cm.
Includes index.
Originally published: Gaithersburg, Md.: Aspen Publishers, 1993.
ISBN 0-944838-54-5 (hardcover)
1. Diagnostic imaging. 2. Medical physics. I. Title.
[DNLM: 1. Diagnostic Imaging. 2. Health Physics. WN 110 S767p 1993a]
RC78.7.D53S63 1995
616.07'54--dc20
DNLM/DLC
for Library of Congress 95-14209
CIP
Perry Sprawls grants permission for photocopying for limited personal or internal
use. This consent does not extend to other kinds of copying, such as copying for
general distribution, for advertising or promotional purposes, for creating new
collective works, or for resale.
For information, contact Perry Sprawls at [email protected].
Every reasonable effort has been made to give factual and up-to-date information
to the reader of this book. However, because of the possibility of human error and
the potential for change in the medical sciences, the editor, publisher, and any
other persons involved in the publication of this book cannot assume responsibility
for the validity of all materials or for the consequences of their use.
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Exposure 41
Energy 44
Absorbed Dose 46
Biological Impact 48
Light 50
Radio Frequency Radiation 52
Chapter 6—Radioactivity 83
Introduction and Overview 83
Radioactive Lifetime 83
Index
Preface
The effective use of any medical imaging modality and the interpretation of images require some understanding of the physical principles of the image formation process. This is because the ability to visualize specific anatomical structures or pathologic conditions depends on the inherent characteristics of a particular modality and the set of imaging factors selected by the user. The relationship between visibility and imaging factors is rather complex and often involves com-
gram and is also a useful reference for the practicing radiologist who is often faced
with day-to-day decisions concerning imaging equipment, procedures, and patient
safety.
This text contains much of the material from the author's previous books: The
•
communicate effectively with members of the technical staff
•
make intelligent decisions when selecting equipment and imaging supplies
Acknowledgments
The preparation of this book has been aided by the significant contributions of
many individuals, which are gratefully acknowledged.
Margaret Nix typed and edited the manuscript and coordinated its production.
Dr. Jack E. Peterson has provided much valuable advice and editorial support.
Department of Radiology, for his many years of encouragement and support in the
development of diagnostic radiological physics at Emory University. It was his
interest and high standards in training future radiologists that has made this book
possible.
The support and guidance of Dr. William J. Casarella, Chairman, Department
of Radiology, has been a significant factor in the development of the educational
programs and materials for the new imaging modalities which are contained in
this edition.
Chapter 1
Image Characteristics and Quality
To the human observer, the internal structures and functions of the human body are not generally visible. However, by various technologies, images can be created through which the medical professional can look into the body to diagnose abnormal conditions and guide therapeutic procedures. The medical image is a window to the body. No image window reveals everything. Different medical imaging methods reveal different characteristics of the human body. With each method, the range of image quality and structure visibility can be considerable, depending on characteristics of the imaging equipment, skill of the operator, and compromises with factors such as patient radiation exposure and imaging time.
Figure 1-1 is an overview of the medical imaging process. The five major components are the patient, the imaging system, the system operator, the image itself, and the observer. The objective is to make an object or condition within the patient's body visible to the observer. The visibility of specific anatomical features depends on the characteristics of the imaging system and the manner in
The values selected will determine the quality of the image and the visibility of specific body features.
The ability of an observer to detect signs of a pathologic process depends on a
combination of three major factors: (1) image quality, (2) viewing conditions, and
(3) observer performance characteristics.
Figure 1-1 The Medical Imaging Process: Patient, Imaging System (with Transfer Parameters Selected by the Operator), Image (Contrast, Blur, Noise, Artifacts, and Distortion), and Observer
IMAGE QUALITY
The quality of a medical image is determined by the imaging method, the characteristics of the equipment, and the imaging variables selected by the operator. Image quality is not a single factor but is a composite of at least five factors: contrast, blur, noise, artifacts, and distortion, as shown in Figure 1-1. The relationships between image quality factors and imaging system variables are discussed in detail in later chapters.
The human body contains many structures and objects that are simultaneously
Image Contrast
Figure 1-2 Medical Imaging Is the Process of Converting Tissue Characteristics into a
Visual Image
specific points or areas in an image. In most cases we are interested in the contrast
between a specific structure or object in the image and the area around it or its
background.
Contrast Sensitivity
The degree of physical object contrast required for an object to be visible in an image depends on the imaging method and the characteristics of the imaging system. The primary characteristic of an imaging system that establishes the relationship between image contrast and object contrast is its contrast sensitivity. Consider the situation shown in Figure 1-3. The circular objects are the same size but are filled with different concentrations of iodine contrast medium. That is, they have different levels of object contrast. When the imaging system has a relatively low contrast sensitivity, only objects with a high concentration of iodine (ie, high object contrast) will be visible in the image. If the imaging system has a high contrast sensitivity, the lower-contrast objects will also be visible.
Figure 1-3 Increasing Contrast Sensitivity Increases Image Contrast and the Visibility of
Objects in the Body
are no longer visible. Contrast sensitivity is the characteristic of the imaging system that raises and lowers the curtain. Increasing sensitivity raises the curtain and allows us to see more objects in the body. A system with low contrast sensitivity allows us to visualize only objects with relatively high inherent physical contrast.
that add detail to a medical image. Each imaging method has a limit as to the smallest object that can be imaged and thus on visibility of detail. Visibility of detail is limited because all imaging methods introduce blurring into the process. The primary effect of image blur is to reduce the contrast and visibility of small objects or detail.
Consider Figure 1-5, which represents the various objects in the body in terms of both physical contrast and size. As we said, the boundary between visible and invisible objects is determined by the contrast sensitivity of the imaging system. We now extend the idea of our curtain to include the effect of blur. It has little effect on the visibility of large objects but it reduces the contrast and visibility of small objects. When blur is present, and it always is, our curtain of invisibility covers small objects and image detail. Blur and visibility of detail are discussed in more depth in Chapter 18.
Figure 1-6 Range of Blur Values (approximately 10 mm down to 0.1 mm) and Visibility of Detail Obtained with Various Imaging Methods (Gamma Camera, Ultrasound, MRI, CT, Fluoroscopy, and Radiography)
The amount of blur in an image can be quantified in units of length. This value represents the width of the blurred image of a small object. Figure 1-6 compares the approximate blur values for medical imaging methods. As a general rule, the smallest object or detail that can be imaged has approximately the same dimensions as those of the image blur.
Noise
Another characteristic of all medical images is image noise. Image noise, sometimes referred to as image mottle, gives an image a textured or grainy appearance. The source and amount of image noise depend on the imaging method and are discussed in more detail in Chapter 21. We now briefly consider the effect of image noise on visibility.
In Figure 1-7 we find our familiar array of body objects arranged according to physical contrast and size. We now add a third factor, noise, which will affect the boundary between visible and invisible objects. The general effect of increasing image noise is to lower the curtain and reduce object visibility. In most medical imaging situations the effect of noise is most significant on the low-contrast objects that are already close to the visibility threshold.
Artifacts
body structure or object. These are image artifacts. In many situations an artifact does not significantly affect object visibility and diagnostic accuracy. But artifacts can obscure a part of an image or may be interpreted as an anatomical feature. A variety of factors associated with each imaging method can cause image artifacts.
Distortion
A medical image should not only make internal body objects visible, but should give an accurate impression of their size, shape, and relative positions. An imaging procedure can, however, introduce distortion of these three factors.
Compromises
It would be logical to raise the question as to why we do not adjust each imaging procedure to yield maximum visibility. The reason is that in many cases the variables that affect image quality also affect factors such as radiation exposure to the patient and imaging time. In general, an imaging procedure should be set up to produce adequate image quality and visibility without excessive patient exposure or imaging time.
as blur and visibility of detail. Therefore an imaging procedure must be selected according to the specific requirements of the clinical examination.
A combination of two factors makes each imaging method unique. These are the tissue characteristics that are visible in the image and the viewing perspective. The specific tissue characteristics that produce the various shades of gray and
injury in the body. Signs can be observed only if the condition produces a physical change in the associated tissue. Many pathologic conditions produce a change in a physical characteristic that can be imaged by one method but not another.
Imaging methods create images that show the body from one of two perspectives, through either projection or tomographic imaging. There are advantages and disadvantages to each.
In projection imaging (radiography and fluoroscopy), images are formed by projecting an x-ray beam through the patient's body and casting shadows onto an appropriate receptor that converts the invisible x-ray image into a visible light image. The gamma camera records a projection image that represents the distribution of radioactive material in the body. The primary advantage of this type of image is that a large volume of the patient's body can be viewed with one image. A disadvantage is that structures and objects are often superimposed so that the image of one might interfere with the visibility of another. Projection imaging produces spatial distortion that is generally not a major problem in most clinical applications.
Tomographic imaging, ie, conventional tomography, computed tomography (CT), sonography, single photon emission tomography (SPECT), positron emission tomography (PET), and MRI, produces images of selected planes or slices of tissue in the patient's body. The general advantage of a tomographic image is the increased visibility of objects within the imaged plane. One factor that contributes to this is the absence of overlying objects. The major disadvantage is that only a small slice of a patient's body can be visualized with one image. Therefore, most tomographic procedures require many images to survey an entire organ system or body cavity.
Our ability to see a specific object or feature in an image depends on the conditions under which we view the image. We must deal with the effects of viewing
newspapers, etc. A small object dropped onto the smooth surface of the dining table is easier to see than an object dropped onto a textured carpet or sandy beach. With these experiences in mind, let us consider the factors associated with image viewing conditions and how they affect our ability to visualize body structures.
Figure 1-8 shows the primary factors that affect our ability to see or detect an object in an image. We will assume a circular object located within a larger background area. The ability of an observer to detect the object depends on a combination of factors including object contrast and size, background brightness (luminance) and structure (texture), glare produced by other light sources, distance between the image and the observer, and the time available to search for the object.
Figure 1-9 is an image of the array of objects we used to demonstrate the effects of image quality factors. We now use it to demonstrate how the factors associated with the viewing process affect our ability to see the objects. You can use this actual image to test the factors discussed below.
Figure 1-8 Object Characteristics That Affect Visibility: Size, Contrast, and Edge Sharpness
Object Contrast
The ability to see or detect an object is heavily influenced by the contrast between the object and its background. For most viewing tasks there is not a specific threshold contrast at which the object suddenly becomes visible. Instead, the accu-
alter the contrast sensitivity of the observer: background brightness, object size, viewing distance, glare, and background structure.
Background Brightness
The human eye can function over a large range of light levels or brightness, but vision is not equally sensitive at all brightness levels. The ability to detect objects generally increases with increasing background brightness or image illumination. To be detected in areas of low brightness, an object must be large and have a relatively high level of contrast with respect to its background. This can be demonstrated with the image in Figure 1-9. View this image with different levels of illumination. You will notice that under low illumination you cannot see all of the small and low-contrast objects. A higher level of object contrast is required for visibility.
Object Size
The relationship between the degree of contrast required for detectability and background brightness is influenced by the size of the object. Small objects require either a higher level of contrast or increased background brightness to be detected.
The detectability of an object is more closely related to the angle it forms in the visual field. The angle is the ratio of object diameter to the distance between image and observer. In principle, a small object will have the same detectability at close range as a larger object viewed at a greater distance.
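The diameter-to-distance ratio described above can be sketched as a short calculation. This is an illustrative sketch, not from the text; the function name and example dimensions are assumptions:

```python
import math

def visual_angle_deg(object_diameter_mm: float, viewing_distance_mm: float) -> float:
    """Visual angle (in degrees) subtended by an object of a given
    diameter viewed from a given distance."""
    return math.degrees(2 * math.atan((object_diameter_mm / 2) / viewing_distance_mm))

# A 1 mm object at 500 mm subtends the same angle as a 2 mm object
# at 1000 mm, so in principle they are equally detectable.
near = visual_angle_deg(1.0, 500.0)
far = visual_angle_deg(2.0, 1000.0)
print(round(near, 4), round(far, 4))
```

Because the half-diameter to distance ratio is identical in the two cases (0.001), the two angles are exactly equal, which is the equivalence the paragraph describes.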
Viewing Distance
The relationship between visibility and viewing distance is affected by several factors. When the viewing distance is reduced, an object creates a larger angle and is generally easier to see. However, the eye cannot focus and achieve maximum contrast sensitivity at very close range. Therefore, the relationship between detectability and viewing distance generally peaks at a distance of approximately 2 ft.
Glare
Background Structure
The structure or texture of an object's background has a significant effect on its
OBSERVER PERFORMANCE
In many situations, the presence of a specific object or sign is not obvious but
requires establishment by a trained observer. The criteria used to establish the
presence of a specific sign often vary among observers. Individual observers also
use different criteria, often influenced by the clinical significance of a specific
observation.
Let us assume we have a relatively large number of cases to be examined by means of a medical imaging procedure, and that a specific pathologic condition is
present in some and absent in others. The ideal situation would be if the condition were diagnosed as positive when present and negative when absent. In actual practice, this is usually not achieved. A more realistic situation is represented in Figure 1-10. Here we see that a fraction of the pathological conditions were diagnosed as positive. This fraction (or percentage) represents the sensitivity of the specific diagnostic procedure. We also see that the condition was not always diagnosed as negative when absent. The percentage of these cases diagnosed as negative represents the specificity of the procedure.
The diagnoses derived from the imaging procedure divide the cases into four categories, as shown in Figure 1-11: true positives, true negatives, false positives,
Figure 1-10 Fractions of Cases with Pathology Present Diagnosed as Positive (Sensitivity) and Cases with Pathology Absent Diagnosed as Negative (Specificity)
Figure 1-11 Relationship of True and False Diagnostic Decisions to Sensitivity and
Specificity
and false negatives. In the ideal situation, there are only true positives and true negatives. This would be a diagnostic process with 100% accuracy.
False negatives and false positives occur for a number of reasons, including inherent limitations of a specific imaging method, selection of inappropriate imaging factors, poor viewing conditions, and the performance of the observer (radiologist).
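The sensitivity and specificity fractions defined above can be expressed as a short calculation over the four diagnostic categories. This is a minimal sketch; the case counts are hypothetical, not from the text:

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    # Fraction of cases with the condition present that were diagnosed positive.
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    # Fraction of cases with the condition absent that were diagnosed negative.
    return true_negatives / (true_negatives + false_positives)

# Hypothetical study: 100 cases with the condition, 100 without.
print(sensitivity(80, 20))  # 0.8
print(specificity(90, 10))  # 0.9
```

Note how the two quantities are computed from disjoint groups of cases: sensitivity uses only the cases where pathology is present, specificity only those where it is absent.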
In general, if an observer is aggressive in trying to increase the number of true positives (sensitivity), the number of false positives (decreased specificity) also increases. The relationship between sensitivity and specificity for a specific diagnostic test (including observer performance) can be described by a graph (shown in Figure 1-12) known as a receiver operating characteristic (ROC) curve.
The ideal diagnostic test produces 100% sensitivity and 100% specificity as shown. If a diagnostic procedure has no predictive value, and the diagnosis is obtained by a random selection process, the relationship between sensitivity and specificity is linear as shown. The observer determines the actual operating point along this line. Since this particular diagnostic procedure is providing no useful information, an attempt to increase the sensitivity by calling a greater number of positives will produce a proportionate decrease in the specificity.
The relationship between sensitivity and specificity for most medical imaging procedures is between the ideal and no predictive value. The ROC curve shown in Figure 1-13 is typical. The characteristics of the imaging method and the quality of the resulting image determine the shape of the curve and the relationship between
sensitivity and specificity for a specific pathological condition. The criteria used by the observer to make the diagnosis determine the point on the curve that produces the actual sensitivity and specificity values.
Figure 1-12 Comparison of ROC Curves for an Ideal Diagnostic Procedure with One That
Produces No Useful Information
Figure 1-13 An ROC Curve for a Specific Imaging Procedure. The Actual Operating Point
Is Determined by Characteristics of the Observer.
Chapter 2
Energy and Radiation
There are two components of the physical universe: energy and matter. In most physical processes there is a constant interaction and exchange between the two; medical imaging is no exception. In all imaging methods, images are formed by the interaction of energy and human tissue (matter). A variety of energy types are used in medical imaging. This is, in part, what accounts for the difference in imaging methods. In this chapter we review some basic energy concepts and then look in detail at radiation, which is energy on the move, and the role of electrons in energy transfer.
Images of internal body structures require a transfer of energy from an energy source to the human body and then from the body to an appropriate receptor, as shown in Figure 2-1. Although the types might be different, certain characteristics
body. Visible light is the primary type of energy used to transfer image information in everyday life. However, because it usually cannot penetrate the human body, we must use other energy types for internal body imaging.
Another characteristic of any energy used for imaging is that it must interact
with internal body structures in a manner that will create image information.
A common element of all imaging methods is that a large portion of the energy used is deposited in the human tissue. It does not reside in the body as the same type of energy but is converted into other energy forms such as heat and chemical change. The possibility that the deposited energy will produce an undesirable biological effect must always be considered.
As we approach the process of medical imaging, it is helpful to recognize two
broad categories of energy. One category is the group of energy forms that require
a material in which to exist. The other category is energy that requires no material
object for its existence. Although the latter category does not require matter for its existence, it is always created within a material substance and is constantly moving and transferring energy from one location to another. This form of energy is radiation; all energy forms used for medical imaging, with the exception of ultrasound, are forms of radiation.
RADIATION
Radiation is energy that moves through space from one object, the source, to another object, where it is absorbed. Radiation sources are generally collections of matter or devices that convert other forms of energy into radiation. In some cases
the energy to be converted is stored within the object. Examples are the sun and radioactive materials. In other cases the radiation source is only an energy converter, and other forms of energy must be applied in order to produce radiation; light bulbs and x-ray tubes are examples.
Most forms of radiation can penetrate through a certain amount of matter. But in
most situations, radiation energy is eventually absorbed by the material and con¬
Electromagnetic Radiation
There are two general types of radiation, as shown in Figure 2-3. In one type, the energy is "packaged" in small units known as photons or quanta. A photon or quantum of energy contains no matter, only energy. Since it contains no matter, it has no mass or weight. This type of radiation is designated electromagnetic radiation. Within the electromagnetic radiation family are a number of specific radiation types that are used for different purposes. These include such familiar radiations as radio signals, light, x-radiation, and gamma radiation. The designations are determined by the amount of energy packaged in each photon.
Figure 2-3 The Two General Types of Radiation Carrying Energy from a Source to an Absorber: Photons (X-ray, Gamma, Light, Radio) and Particles (Beta, Internal Conversion and Auger Electrons, Positrons, Alpha)
Particle Radiation
The other general type of radiation consists of small particles of matter moving through space at a very high velocity. They carry energy because of their motion. Particle radiation comes primarily from radioactive materials, outer space, or machines that accelerate particles to very high velocities, such as linear accelerators, betatrons, and cyclotrons. Particle radiation differs from electromagnetic radiation in that the particles consist of matter and have mass. The type of particle radiation encountered most frequently in clinical medicine is high-velocity electron radiation. Particle radiation is generally not used as an imaging radiation because of its low tissue penetration. Also, when x-radiation interacts with matter, such as human tissue, it transfers energy to electrons, thus creating a form of electron radiation within the material. Several types of particle radiation are produced as byproducts of photon production by a number of radioactive materials used in medical imaging.
There are occasions on which we must consider the quantity of energy involved in a process. Many units are used to quantify energy because of the different unit systems (metric, British, etc.) and the considerable range of unit sizes. At this time, we consider only those energy units encountered in radiological and medical imaging procedures. The primary difference among the energy units to be considered is their size, which in turn determines their specific usage. We use the basic
Joule
The joule (J) is the fundamental unit of energy in the metric International System of Units (SI). It is the largest unit encountered in radiology. One joule is equivalent to 1 watt second. A 100-watt light bulb dissipates 100 J of energy per second. In the next chapter we consider the full range of quantities and units used specifically for radiation; several are energy-related and are defined in terms of the joule or other energy units.
In general the joule is used when relatively large quantities of energy are involved.
Heat Unit
The heat unit was developed within radiology as a convenient unit for expressing the amount of heat energy produced by an x-ray tube. One heat unit is 71% of a joule. The use of the heat unit is discussed in Chapter 9; it is gradually being
Gram-Rad
The gram-rad is another unit developed in radiology to express the total radiation energy absorbed by the body. Its usage is discussed in the following chapter. A general trend is to use the joule for this application rather than the gram-rad.
Erg
The erg is a metric energy unit but is not an SI unit. It is much smaller than the
joule. Its primary use in radiology is to express the amount of radiation energy
absorbed in tissue.
Electron Volt
The electron volt (eV) is the smallest energy unit. It and its multiples, the kiloelectron volt (keV) and megaelectron volt (MeV), are used to express the energy of individual electrons and photons. The energy of individual light photons is in the range of a few electron volts. X-ray and gamma photons used in imaging procedures have energies ranging from approximately fifteen to several hundred kiloelectron volts.
The relationships of the three basic energy units are:
1 joule = 10^7 ergs
1 joule = 6.24 × 10^18 electron volts
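These relationships, together with the heat unit defined earlier as 71% of a joule, can be captured in a few conversion helpers. This is a minimal sketch; the constant and function names are illustrative:

```python
ERGS_PER_JOULE = 1e7
EV_PER_JOULE = 6.24e18        # value quoted in the text
JOULES_PER_HEAT_UNIT = 0.71   # one heat unit is 71% of a joule

def joules_to_ergs(joules: float) -> float:
    return joules * ERGS_PER_JOULE

def joules_to_ev(joules: float) -> float:
    return joules * EV_PER_JOULE

def heat_units_to_joules(heat_units: float) -> float:
    return heat_units * JOULES_PER_HEAT_UNIT

print(joules_to_ergs(1.0))        # 10000000.0 (10^7 ergs)
print(joules_to_ev(1.0))          # 6.24e+18 eV
print(heat_units_to_joules(100))  # about 71 J
```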
Power
Power is the term that expresses the rate at which energy is transferred in a particular process. The watt is the unit for expressing power. One watt is equivalent to an energy transfer or conversion at the rate of 1 J/sec. As mentioned above, a 100-watt light bulb converts energy at the rate of 100 J/sec. In medical imaging,
Intensity
Intensity is the spatial concentration of power and expresses the rate at which energy passes through a unit area. It is typically expressed in watts per square meter or watts per square centimeter. Intensity is also used to express relative values of x-ray exposure rate, light brightness, radio frequency (RF) signal strength in MRI, etc.
Figure 2-5 illustrates the basic quantum characteristics of both radiation and matter. When we consider the structure of matter in Chapter 4 we will find that electrons within atoms generally reside at specific energy levels rather than at arbitrary energy levels. Electrons can move from one energy level to another, but they must go all the way or not at all. These discrete electron energy levels give matter certain quantum characteristics. In simple terms, matter prefers to ex-
Figure 2-5 Discrete Electron Energy Levels (eV): An Electron Moves between Levels by Absorbing or Emitting a Photon
created and the time it is absorbed, the average lifetime of a photon would be 3.3 × 10^-9 seconds. Photons cannot be stored or suspended in space. Once a photon is created and emitted by a source, it travels at this very high velocity until it interacts with and is absorbed by some material. In its very short lifetime, the photon moves a small amount of energy from the source to the absorbing material.
In Figure 2-7 the scales for the three quantities are shown in relationship to the
various types of radiation. While it is possible to characterize any radiation by its
Photon Energy
Since a photon is simply a unit of energy, its most important characteristic is the quantity of energy it contains. Photon energies are usually specified in units of electron volts or appropriate multiples.
If the various types of electromagnetic radiation were ordered with respect to photon energies, as shown in Figure 2-7, the scale would show the electromagnetic spectrum. It is the energy of the individual photons that determines the type of electromagnetic radiation: light, x-ray, radio signals, etc.
An important aspect of photon energy is that it generally determines the penetrating ability of the radiation. The lower energy x-ray photons are often referred to as soft radiation, whereas those at the higher-energy end of the spectrum would be so-called hard radiation. In most situations, high-energy (hard) x-radiation is more penetrating than the softer portion of the spectrum.
If the individual units of energy, photons or particles, have energies that exceed the binding energy of electrons in the matter through which the radiation is passing, the radiation can interact, dislodge the electrons, and ionize the matter.
Figure 2-7 The Electromagnetic Spectrum, Ordered by Wavelength (approximately 10^10 nm down to 10^-2 nm) and Frequency (approximately 10^6 Hz up to 10^18 Hz), from Radio Frequency (MRI, on the order of 1 MHz) through Visible Light (Red, Green, Blue; approximately 700 nm at the red end)
The minimum radiation energy that can produce ionization varies from one material to another, depending on the specific electron binding energies. Electron binding energy is discussed in more detail in Chapter 4. The ionization energies for many of the elements found in tissue range between 5 eV and 20 eV. Therefore, all radiations with energies exceeding these values are ionizing radiations. Photon energy quantities are generally used to describe radiation with relatively
Frequency
E = hf.
In this relationship, h is Planck's constant, which has a value of 6.625 × 10^-27 erg-seconds, and f is frequency in hertz (Hz, cycles per second).
Frequency is the most common quantity used to characterize radiations in the lower end, or the RF portion, of the electromagnetic spectrum and includes radiation used for radio and television broadcasts, microwave communications and cooking, and MRI. For example, in MRI, protons emit signals with a frequency of 42.58 MHz when placed in a 1-tesla magnetic field. Although, theoretically, x-radiation has an associated frequency, the concept is never used.
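The E = hf relationship can be checked numerically. In this sketch Planck's constant is expressed in eV·s (equivalent to the erg-second value quoted above); the example frequencies are illustrative assumptions, not from the text:

```python
PLANCK_EV_S = 4.1357e-15  # Planck's constant in eV*s (equivalent to 6.625e-27 erg*s)

def photon_energy_ev(frequency_hz: float) -> float:
    # E = h * f
    return PLANCK_EV_S * frequency_hz

# Visible light, roughly 5.5e14 Hz: a few eV, consistent with the text.
print(photon_energy_ev(5.5e14))

# MRI signal at 42.58 MHz (1-tesla field): roughly 10^-7 eV,
# far below the 5-20 eV ionization energies of tissue elements.
print(photon_energy_ev(42.58e6))
```

The comparison makes the ionization argument concrete: visible-light photons approach tissue ionization energies, while RF photons fall short by many orders of magnitude.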
Wavelength
Wavelength is the distance between corresponding points on two successive cycles of a wave. For the relatively short wavelengths associated with high-energy photons, such as light and x-ray, two smaller length units are used: the nanometer (nm) and the angstrom (Å). Because wavelength and photon energy are inversely related, the highest energy on the spectrum corresponds to the shortest wavelength.
Wavelength is most frequently used to describe light. At one time it was used to describe x-radiation, but that practice is now uncommon. Wavelength is often used to describe radio-type radiations; general terms like "shortwave" and "microwave" refer to the wavelength of the radiation.
An electron has a mass of only 9.1 x 10^-28 g, which means it would take 10.9 x 10^26 electrons to equal the weight of 1 cm^3 of water. The question might be raised as to why such a small particle can be the foundation of our modern technology. The answer is simple: numbers. Tremendous numbers of electrons are involved in most applications. For example, a current of only 1 mA corresponds to a flow of more than 10^15 electrons per second.
Because an electron has both mass and electrical charge, it can possess energy
of several types, as shown in Figure 2-8. It is the ability of an electron to take up,
transport, and give up energy that makes it useful in the x-ray system.
Rest-Mass Energy
The relationship
E = mc^2
predicts the amount of energy that could be obtained if an object with a mass, m, were completely converted. In this relationship, c is the speed of light. Although it
is not possible with our present technology to convert most objects into energy,
certain radioactive materials emit particles, called positrons, that can annihilate
electrons. When this happens, the electron's entire mass is converted into energy.
According to Einstein's relationship, each electron will yield 510 keV. This en¬
ergy appears as a photon. The annihilation of positrons and electrons is the basis
for positron emission tomography (PET).
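Einstein's relationship can be verified with a quick calculation. This Python sketch (using standard rounded constants) shows that the electron rest-mass energy comes out close to the 510 keV quoted above (more precisely, about 511 keV):

```python
# Rest-mass energy E = m * c^2 for a single electron.
M_ELECTRON_KG = 9.109e-31   # electron rest mass (rounded)
C_M_PER_S = 2.998e8         # speed of light
JOULES_PER_KEV = 1.602e-16  # joules per kiloelectron volt

energy_joules = M_ELECTRON_KG * C_M_PER_S ** 2
energy_kev = energy_joules / JOULES_PER_KEV
print(f"Electron rest-mass energy: {energy_kev:.0f} keV")
```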
Kinetic Energy
Kinetic energy is associated with motion. It is the type of energy that a moving
automobile or baseball has. When electrons are moving, they also have kinetic
energy.
Generally, the quantity of kinetic energy an object has is related to its mass and velocity. For large objects, like baseballs and cars, the energy is proportional to the mass of the object and the square of the velocity. Doubling the velocity of such an object increases its kinetic energy by a factor of 4. In many situations, electrons
travel with extremely high velocities that approach the velocity of light. At these
high velocities, the simple relationship between energy and velocity given above
does not hold. One of the theories of relativity states that the mass of an object,
such as an electron, changes at high velocities. Therefore, the relationship be¬
tween energy and velocity becomes complex. Electrons within the typical x-ray
tube can have energies in excess of 100 keV and can travel with velocities of more
than one-half the speed of light.
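The breakdown of the simple energy-velocity relationship can be demonstrated numerically. In the sketch below (illustrative Python; the 511-keV rest energy is a standard value), the relativistic speed of an electron is computed from gamma = 1 + KE/(mc^2) and compared with the classical estimate:

```python
import math

REST_ENERGY_KEV = 511.0  # electron rest-mass energy, m * c^2

def relativistic_speed_fraction(kinetic_kev):
    """v/c from the relativistic relation gamma = 1 + KE / (m c^2)."""
    gamma = 1.0 + kinetic_kev / REST_ENERGY_KEV
    return math.sqrt(1.0 - 1.0 / gamma ** 2)

def classical_speed_fraction(kinetic_kev):
    """v/c from KE = (1/2) m v^2; valid only at low speeds."""
    return math.sqrt(2.0 * kinetic_kev / REST_ENERGY_KEV)

for ke in (10.0, 100.0):
    print(f"{ke:5.0f} keV electron: relativistic v/c = "
          f"{relativistic_speed_fraction(ke):.3f}, "
          f"classical v/c = {classical_speed_fraction(ke):.3f}")
```

At 100 keV the relativistic result is about 0.55c, consistent with the statement above that such electrons travel at more than one-half the speed of light, while the classical formula already overestimates the speed.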
Potential Energy
Potential energy is associated with position or configuration: an object will have more or less energy in one location or configuration than in another. Although there is generally not a position of absolute zero potential energy, certain locations are often designated as the zero-energy level for reference.
Electrons can have two forms of potential energy. One form is related to loca¬
tion within an electrical circuit, and the other is related to location within an atom.
One important aspect of electron potential energy is that energy from some source
is required to raise an electron to a higher energy level, and that an electron gives
up energy when it moves to a lower potential energy position.
Energy Exchange
Because electrons are too small to see, it is sometimes difficult to visualize what
is meant by the various types of electron energy. Consider the stone shown in
Figure 2-9; we will use it to demonstrate the various types of energy that also
apply to electrons.
Potential energy is generally a relative quantity. In this picture, the ground level
is arbitrarily designated as the zero potential energy position. When the stone is
raised above the ground, it is at a higher energy level. If the stone is placed in a
hole below the surface, its potential energy is negative with respect to the ground
level. However, its energy is still positive with respect to a position in the bottom
of a deeper hole. The stone at position A has zero potential energy (relatively speaking), zero kinetic energy because it is not moving, and a rest-mass energy proportional to its mass. (The rest-mass energy of a stone is of no practical use and is not
discussed further.) When the man picks up the stone and raises it to position B, he
increases its potential energy with respect to position A. The energy gained by the
stone comes from the man. (We show later that electrons can be raised to higher potential energy levels by devices called power supplies.) The additional potential
energy possessed by the stone at B can be used for work or can be converted into
other forms of energy. If the stone were connected to a simple pulley arrangement
and allowed to fall back to the ground, it could perform work by raising an object
fastened to the other end of the rope.
If the man releases the stone at B and allows it to fall back to the ground, its
energy is converted into kinetic energy. As the stone moves downward, decreas¬
ing its potential energy, which is proportional to its distance above the ground, it
constantly increases its speed and kinetic energy. Just before it hits the ground, its
newly gained kinetic energy will be just equal to the potential energy supplied by
the man. (Electrons undergo a similar process within x-ray tubes where they swap
potential for kinetic energy.) Just as the stone reaches the surface of the ground, it
will have more energy than when it was resting at position A. However, when it
comes to rest on the ground at D, its energy level is the same as at A. The extra
energy originally supplied by the man must be accounted for. In this situation, this
energy is converted into other forms, such as sound, a small amount of heat, and
mechanical energy used to alter the shape of the ground. When high-speed electrons collide with certain materials, they also lose their kinetic energy; their energy is converted into other forms, such as heat and x-radiation.
Energy Transfer
One of the major functions of electrons is to transport energy from one location to another. We have just seen that individual electrons can possess several forms of energy.
The pathway electrons travel as they transfer energy from one point to another
is a circuit. A basic electrical circuit is shown in Figure 2-10. All circuits must
contain at least two components (or devices) as shown. One component, desig¬
nated here as the source, can convert energy from some other form and transfer it
to the electrons. Batteries are good examples of electron energy sources. The other component, the load, is the device in which the electrons give up their energy. [Figure 2-10: A basic circuit in which electrons carry energy from a source to a load through two conductors.] One of the conductors has higher potential energy than the other conductor. In principle, the energy source elevates the electrons to the higher potential energy level, which they maintain until they give up the energy in passing through the load device. The electrons at the lower potential level return to the energy source to repeat the process.
The connection points (terminals) between the source and load devices and the
conductors are designated as either positive or negative. The electrons exit the
source at the negative terminal and enter the
negative terminal of the load. They
then exit the positive terminal of the load device and enter the source at the posi¬
tive terminal. In principle, the negative conductor contains the electrons at the
high potential energy level. The positive conductor contains the electrons that
have lost their energy and are returning to the source. In direct current (DC) cir¬
cuits the polarities do not change. However, in alternating current (AC) circuits
the polarity of the conductors is constantly alternating between negative and posi¬
tive.
ELECTRICAL QUANTITIES
Each electron passing through the circuit carries a very small amount of energy.
However, by collective effort, electrons can transport a tremendous amount of
energy. The amount of energy transferred by an electrical circuit depends on the
quantity of electrons and the energy carried by each. We now consider these spe¬
cific electrical quantities and their associated units.
Current
Electrical current is the rate at which electrons flow past a point in a circuit; the basic unit is the ampere (A), and the milliampere (mA) is the unit commonly used in x-ray work. As indicated in Figure 2-11, a current of 1 mA is equal to the flow of 6.25 x 10^15 electrons per second past a given point. The current that flows through an x-ray tube is generally referred to as the "MA." When used to mean the quantity, it is written as MA. When used as the unit, milliampere, it is written as mA.
In addition to the rate at which electrons are flowing through a circuit, ie, the
current, it is often necessary to know the total quantity in a given period of time. In
x-ray work the most appropriate unit for specifying electron quantity is the milliampere-second (mAs). The total quantity of electrons passing a point (MAS) is the product of the current (MA) and the time in seconds (S). Since a current of 1 mA is a flow of 6.25 x 10^15 electrons per second, it follows that 1 mAs is a cluster of 6.25 x 10^15 electrons, as shown in Figure 2-11.
It should be recalled that all electrons carry a negative electrical charge of the
same size. In some situations the quantity of electrons might be specified in terms
of the total electrical charge. If extra electrons are added to an object, it is said to
have acquired a negative charge. However, if some of the free electrons are re¬
moved from an object, a positive charge is created. In either case, the total charge
on the object is directly proportional to the number of electrons moved.
Generally speaking, charge is a way of describing a quantity of electrons. The basic unit of charge is the coulomb (C), which is equivalent to the total charge of 6.25 x 10^18 electrons; 1 C is equivalent to 1,000 mAs.
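These relationships among current, time, charge, and electron count can be collected in a short sketch (Python; the 200-mA, 0.1-second technique factors are hypothetical values chosen for illustration):

```python
ELECTRONS_PER_MAS = 6.25e15   # 1 mAs is this many electrons
MAS_PER_COULOMB = 1000.0      # 1 C = 1,000 mAs

def tube_electrons(ma, seconds):
    """Return (total electrons, mAs) for a given tube current and time."""
    mas = ma * seconds
    return mas * ELECTRONS_PER_MAS, mas

# Hypothetical radiographic technique: 200 mA for 0.1 s
electrons, mas = tube_electrons(200.0, 0.1)
print(f"{mas:.0f} mAs -> {electrons:.2e} electrons "
      f"= {mas / MAS_PER_COULOMB:.3f} C of charge")
```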
Voltage
We pointed out earlier that electrons could exist at different potential energy
levels, because of either their different positions within the atom or their different
locations within an electrical circuit. Consider the two wires or conductors shown
in Figure 2-12. The electrons contained in one of the conductors are at a higher
potential energy level than the electrons in the other. Generally, the electrons in
the negative conductor are considered to be at the higher energy level. An electri¬
cal quantity that indicates the difference in electron potential energy between two
points within a circuit is the voltage, or potential difference, suggesting a difference in potential energy. [Figure 2-12: Two conductors; the electrons in one are at a higher potential energy level than the low-energy electrons in the other.] The unit used for voltage, or potential difference, is the volt.
volt. The difference in electron potential energy between two conductors is di¬
rectly proportional to the voltage. Each electron will have an energy difference of
1 eV for each volt. It is the quantity of energy that an electron gains or loses,
depending on direction, when it moves between two points in a circuit that are 1 V
apart. In the basic x-ray machine circuit the voltage is on the order of thousands of volts (kilovolts) and is often referred to as the KV. When used to mean the quantity, it is written as KV; when used as the unit, kilovolt, it is written as kV.
Power
Power is the quantity that describes the rate at which energy is transferred. The watt is the unit of power and is equivalent to an energy transfer rate of 1 J/second.
The power in an electrical circuit is proportional to the energy carried by each
electron (voltage) and the rate of electron flow (current). The specific relation¬
ship is
Power (watts) = Voltage (volts) x Current (amperes).
Total Energy
The amount of energy that an electrical circuit transfers depends on the voltage, current, and the duration (time) of the energy transfer. The fundamental unit of energy is the joule. The relationship of total transferred energy to the other electrical quantities is
Energy (joules) = Voltage (volts) x Current (amperes) x Time (seconds).
The basic circuit shown in Figure 2-13 is found in all x-ray machines. The
power supply that gives energy to the electrons and pumps them through the cir¬
cuit is discussed in detail in Chapter 8. The voltage between the two conductors in
the x-ray circuit is typically in the range of 30,000 V to 120,000 V (30 kV to 120
kV), and in radiology this kilovoltage is generally adjustable and an appropriate
value can be selected by the operator of the x-ray equipment.
In this circuit, the x-ray tube is the load. It is the place where the electrons lose
their energy. The energy lost by electrons in passing through an x-ray tube is con¬
verted into heat and x-ray energy.
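The power and total-energy relationships can be combined in a small sketch. A convenient feature is that kilovolts multiplied by milliamperes gives watts directly, since the factors of 1,000 cancel. (The 100-kV, 500-mA, 0.1-second factors below are hypothetical illustrative values, not figures from the text.)

```python
def tube_power_watts(kv, ma):
    """Power (W) = voltage (V) x current (A); kV x mA = W because
    the factor of 1,000 in kV cancels the factor of 1/1,000 in mA."""
    return kv * ma

def tube_energy_joules(kv, ma, seconds):
    """Total energy (J) = voltage x current x time."""
    return tube_power_watts(kv, ma) * seconds

# Hypothetical exposure: 100 kV, 500 mA, 0.1 s
print(f"Power:  {tube_power_watts(100.0, 500.0):,.0f} W")
print(f"Energy: {tube_energy_joules(100.0, 500.0, 0.1):,.0f} J")
```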
ALTERNATING CURRENT
In some electrical circuits, the voltage and current remain constant with respect to time, and the current always flows in the same direction. These are generally
designated as direct current (DC) circuits. A battery is an example of a power
supply that produces a direct current.
Some power supplies, however, produce voltages that constantly change with
time. Since, in most circuits, the current is more or less proportional to the voltage, it also changes value. In most circuits of this type, the voltage periodically changes
polarity and the current changes or alternates direction of flow. This is an alternat¬
ing current (AC) circuit. The electricity distributed by power companies is AC.
There are certain advantages to AC in that transformers can be used for stepping voltages up and down. If an alternating voltage is plotted with respect to time, it will generally be similar to the one shown in Figure
2-14. This representation of the voltage with respect to time is known as the wave¬
form. Most AC power sources produce voltages with the sine-wave waveform,
shown in Figure 2-14. This name is derived from the mathematical description of
its shape.
One characteristic of an alternating voltage is its frequency. The frequency is
the rate at which the voltage changes through one complete cycle. The time of one
complete cycle is the period; the frequency is the reciprocal of the period. For
example, the electricity distributed in the United States goes through one complete
cycle in 0.0166 seconds and has a frequency of 60 cycles per second. The unit for
frequency is the hertz, which is 1 cycle per second.
During one voltage cycle, the voltage changes continuously. At two times dur¬
ing the period it reaches a peak, but remains there for a very short time. This means
that for most of the period the circuit voltage is less than the peak value. For the
purpose of energy and power calculations, an effective voltage value, rather than
the peak value, should be used. For the sine-wave voltage, the effective value is
70.7% (0.707) of the peak voltage. This is the waveform factor, and its value depends on the shape of the voltage waveform.
[Figure 2-14: A sine-wave voltage waveform; one complete cycle lasts 1/60 sec, corresponding to a frequency of 60 Hz (cycles/sec).]
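The 0.707 waveform factor can be recovered numerically by averaging the square of a sine wave over one full cycle and taking the square root (the root-mean-square, or effective, value). A minimal Python sketch:

```python
import math

# Sample one complete cycle of a unit-peak sine wave.
N = 100_000
samples = [math.sin(2.0 * math.pi * i / N) for i in range(N)]

# Effective (RMS) value: square, average, square-root.
rms = math.sqrt(sum(v * v for v in samples) / N)
print(f"Effective value of a sine wave: {rms:.4f} of the peak value")
```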
Chapter 3

Radiation Quantities and Units

There are many different quantities and units used to quantify radiation, because there are a number of different aspects of an x-ray beam or gamma radiation that can be used to express the amount of radiation. The selection of the most appropriate quantity depends on the specific application.
UNIT SYSTEMS
Radiation quantities and units are undergoing a change not only to the general metric units but also to the proposed adoption
of a set of fundamental metric units known as the International System of Units (SI
units). The adoption of SI radiation units is progressing rather slowly because
there is nothing wrong with our conventional units, and SI units are somewhat
awkward for a number of common applications. Throughout this text we use the
units believed the most useful to the reader. In this chapter both unit systems are
discussed and compared.
Table 3-1 is a listing of most of the physical quantities and units encountered in diagnostic imaging, together with the conversions between the two unit systems. Two of the entries are:

Quantity    Conventional Unit   SI Unit                    Conversions
Exposure    roentgen (R)        coulomb/kg of air (C/kg)   1 C/kg = 3,876 R; 1 R = 258 µC/kg
Dose        rad                 gray (Gy)                  1 Gy = 100 rad
QUANTITIES
As a beam of radiation moves away from its source, it diverges so that the width of the area covered is proportional to the distance from the source. [Figure: At distances of 1, 2, and 3 units from the source, the beam covers areas of 1, 4, and 9 units, with relative exposures of 1, 1/4, and 1/9.] Therefore, the area covered by our x-ray beam is increasing in proportion to the square of the distance from the source.
First, let us consider the amount of radiation passing through the three areas.
We are assuming that none of the radiation is absorbed or removed from the beam before it reaches the third area. All radiation that passes through the first area will
also pass through the second and third areas. In other words, the total amount of
radiation is the same through all areas and does not change with distance from the
source.
Now let us consider the concentration of radiation through the three areas. In the
first area, all radiation is concentrated in a one-unit area. At a distance of 2 m from
the source the radiation is spread over a four-unit square area, and continues to
spread to cover a nine-unit square area at a distance of 3 m. If the same total amount of radiation passes through each area, the concentration must be inversely proportional to the area covered by the beam, and the area covered by the beam is proportional to the
square of the distance from the source. We can conclude that the concentration of
radiation is inversely related to the square of the distance from the source. This is
commonly known as the inverse-square law.
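The inverse-square law is easy to express directly. This short sketch reproduces the 1, 1/4, 1/9 exposure values illustrated above:

```python
def relative_exposure(distance, exposure_at_unit_distance=1.0):
    """Inverse-square law: concentration falls with the square of distance."""
    return exposure_at_unit_distance / distance ** 2

for d in (1.0, 2.0, 3.0):
    print(f"Distance {d:.0f}: relative exposure = {relative_exposure(d):.3f}")
```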
We now introduce some quantities and the associated units that can be used to express the amount of radiation.
PHOTONS
Since an x-ray beam and gamma radiation are showers of individual photons, the number of photons could, in principle, be used to express the amount of radiation. In practice, the number of photons is not commonly used, but it is a useful concept. Let us examine the different ways the concentration of radiation delivered to a small area on a patient's body could be expressed. If we counted the photons passing through one square centimeter during a single abdominal radiographic exposure, we would find that close to 10^10 photons would have passed through. The more formal term for photon concentration is photon fluence.
Total Photons
The total number of photons delivered to a surface is the product of the photon fluence (concentration) and the exposed area.
EXPOSURE
Concept
Exposure is the quantity most commonly used to express the amount of radia¬
tion delivered to a point. The conventional unit for exposure is the roentgen (R), and the SI unit is the coulomb per kilogram of air (C/kg).
Exposure is determined by measuring the amount of ionization produced in a specific volume of air. The enclosure for the air volume is known as an ionization chamber. When air is exposed to photon radiation (x-ray, gamma, etc.), some of the photons will interact with the atomic shell electrons.
The interaction separates the electrons from the atom, producing an ion pair.
When the negatively charged electron is removed, the atom becomes a positive
ion. Within a specific mass of air the quantity of ionizations produced is determined by two factors: the concentration of radiation photons and the energy of the individual photons.
[Figure: An exposure of 1 roentgen, shown ionizing atoms within 1 cm^3 of air (0.001293 g at STP).]
An exposure of 1 roentgen produces 2.08 x 10^9 ion pairs per cm^3 of air at standard temperature and pressure (STP); 1 cm^3 of air at STP has a mass of 0.001293
g. The official definition of the roentgen is the amount of exposure that will produce 2.58 x 10^-4 C (of ionization) per kg of air. A coulomb is a unit of electrical
charge. Since ionization produces charged particles (ions), the amount of ioniza¬
tion produced can be expressed in coulombs. The number of photons required to produce a given exposure changes with photon energy because both the number of photons that will interact and the number of ionizations produced by each interacting photon depend on photon energy.
Surface Integral Exposure
Surface integral exposure (SIE) takes into account both the exposure and the dimensions of the exposed area. It is also referred to as the exposure-area product.
Figure 3-5 compares exposure (concentration) and SIE (total radiation) for two cases. In both instances the beam area was 10 cm x 10 cm (100 cm^2); the total exposure time was 5 minutes at an exposure rate of 3 R/min. In both instances the SIE is 1,500 R-cm^2. However, the exposure depends on how the x-ray beam was moved during the examination. In the first example the beam was not moved, and the resulting exposure was 15 R. In the second example, the beam was moved to different locations so that the exposure was distributed over more surface area.
[Figure 3-6: The same exposure (100 mR) delivered to two different areas produces surface integral exposures of 10 R-cm^2 and 100 R-cm^2.]
Another important example is illustrated in Figure 3-6. Here the same exposure
(100 mR) is delivered to both patients. However, there is a difference in the ex¬
posed area: the patient on the right received 10 times as much radiation as the
patient on the left.
The important point to remember is that exposure (roentgens) alone does not
express the total radiation delivered to a body. The total exposed area must also be
considered.
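The surface integral exposure calculation is a simple product, as the following sketch shows; the values reproduce the illustrations above (1,500 R-cm^2 for the 15-R fluoroscopy case, and 10 versus 100 R-cm^2 for the same 100-mR exposure over different areas):

```python
def surface_integral_exposure(exposure_r, area_cm2):
    """SIE (R-cm^2) = exposure (R) x exposed area (cm^2)."""
    return exposure_r * area_cm2

# Stationary fluoroscopy beam: 3 R/min x 5 min = 15 R over 100 cm^2
print(surface_integral_exposure(15.0, 100.0), "R-cm^2")

# Same 100-mR exposure; ten times the area means ten times the total radiation
print(surface_integral_exposure(0.1, 100.0), "vs",
      surface_integral_exposure(0.1, 1000.0), "R-cm^2")
```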
ENERGY
An x-ray beam and other forms of radiation deliver energy to the body. In principle, the amount of radiation delivered could be expressed in units of energy (joules, ergs, kiloelectron volts, etc.). The energy content of an x-ray beam is rather difficult to measure and for that reason is not widely used in the clinical setting.
Energy Fluence
If we assume a photon energy of 60 keV, the energy fluence for a 1-R exposure is approximately 0.3 mJ/cm^2.
The energy delivered by an x-ray beam can be put into perspective by comparing it to the energy delivered by sunlight (see Figure 3-7). For the x-ray exposure we will use the fluoroscopic factors of 5 minutes at the rate of 3 R/min. This 15-R exposure delivers x-ray energy to the patient with a concentration (fluence) of 4.5 mJ/cm^2 if we assume an effective photon energy of 60 keV.
The energy delivered by the sun depends on many factors including geographic location, season, time of day, and atmospheric conditions; a typical midday summer exposure on a clear day in Atlanta produces approximately 100 mJ/sec/cm^2. In 5 minutes a person would be exposed to an energy fluence of 30,000 mJ/cm^2.
We see from this example that the energy content of an x-ray beam is relatively small in comparison to sunlight. However, x-ray and gamma radiation will generally produce a greater biological effect per unit of energy than sunlight because of two significant differences: x- and gamma radiation penetrate and deposit energy within the internal tissue, and the high energy content of the individual photons produces a greater concentration of energy at the points where they are absorbed within individual atoms.
[Figure 3-7: Total energy delivered during a 5-minute exposure: approximately 4.5 mJ/cm^2 from the x-ray tube versus approximately 30,000 mJ/cm^2 from the sun.]
ABSORBED DOSE
Concept
A human body absorbs most of the radiation energy delivered to it. The portion of an x-ray beam that is absorbed depends on the penetrating ability of the radiation and the size and density of the body section exposed. In most clinical situations more than 90% is absorbed. In nuclear imaging procedures, a large percentage of the energy emitted by radionuclides is absorbed in the body. Two aspects of
the absorbed radiation energy must be considered: the amount (concentration) ab¬
sorbed at various locations throughout the body and the total amount absorbed.
Absorbed dose is the quantity that expresses the concentration of radiation energy absorbed at a specific point within the body tissue. Since an x-ray beam is
attenuated by absorption as it passes through the body, all tissues within the beam
will not absorb the same dose. The absorbed dose will be much greater for the
tissues near the entrance surface than for those deeper within the body. Absorbed
dose is defined as the quantity of radiation energy absorbed per unit mass of tissue.
Units
The conventional unit for absorbed dose is the rad, which is equivalent to 100
ergs of absorbed energy per g of tissue. The SI unit is the gray (Gy), which is
equivalent to the absorption of 1 J of radiation energy per kg of tissue. The relationship between the two units is
1 Gy = 100 rad.
For a specific type of tissue and photon energy spectrum, the absorbed dose is proportional to the exposure delivered to the tissue. The ratio, f, between dose
(rads) and exposure (roentgens) is shown in Figure 3-8 for soft tissue and bone
over the photon energy range normally encountered in diagnostic procedures. The
absorbed dose in soft tissue is slightly less than 1 rad/R of exposure throughout the
photon energy range. The relationship for bone undergoes a considerable variation
with photon energy. For a typical diagnostic x-ray spectrum, a bone exposure of
1 R will produce an absorbed dose of approximately 3 rad.
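The exposure-to-dose relationship can be written as dose = f x exposure. The sketch below uses illustrative f values consistent with the discussion above (roughly 0.95 rad/R for soft tissue and about 3 rad/R for bone with a typical diagnostic spectrum); exact values depend on the photon energy spectrum:

```python
def absorbed_dose_rad(exposure_r, f_factor):
    """Absorbed dose (rad) = f x exposure (R).
    The f factor depends on the tissue type and photon energy spectrum."""
    return f_factor * exposure_r

F_SOFT_TISSUE = 0.95   # illustrative: slightly less than 1 rad/R
F_BONE = 3.0           # illustrative: typical diagnostic spectrum

for name, f in (("soft tissue", F_SOFT_TISSUE), ("bone", F_BONE)):
    print(f"1 R exposure to {name}: {absorbed_dose_rad(1.0, f):.2f} rad")
```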
Integral Dose
Integral dose is the total amount of energy absorbed in the body. It is deter¬
mined not only by the absorbed dose values but also by the total mass of tissue
exposed.
Radiation Quantities and Units 47
The conventional unit for integral dose is the gram-rad, which is equivalent to 100 ergs of absorbed energy. The concept behind the use of this unit is that if we add the absorbed doses (rads) for each gram of tissue in the body, we will have an
indication of total absorbed energy. Since integral dose is a quantity of energy, the
SI unit used is the joule. The relationship between the two units is
1 J = 100,000 gram-rad.
Integral dose (total absorbed radiation energy) is probably the radiation quan¬
tity that most closely correlates with potential radiation damage during a diagnos¬
tic procedure. This is because it reflects not only the concentration of the radiation
absorbed in the tissue but also the amount of tissue affected by the radiation.
There is no practical method for measuring integral dose in the human body.
However, since most of the radiation energy delivered to a body is absorbed, the
integral dose can be estimated to within a few percent from the total energy deliv¬
ered to the body.
Computed tomography can be used to demonstrate integral dose, as illustrated
in Figure 3-9. We begin with a one-slice examination and assume that the average
dose to the tissue in the slice is 5 rad. If there are 400 g of tissue in the slice, the
integral dose will be 2,000 gram-rad. If we now perform an examination of 10
slices, but all other factors remain the same, the dose (energy concentration) in
each slice will remain the same. However, the integral dose (total energy) in¬
creases in proportion to the number of slices and is now 20,000 gram-rad. In this example, the dose (concentration) is unchanged, but the total energy absorbed by the patient is ten times greater.
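The CT example can be restated as a two-line calculation (a sketch using the figures given above):

```python
def integral_dose_gram_rad(dose_rad, grams_per_slice, n_slices):
    """Integral dose = absorbed dose (rad) x mass of exposed tissue (g)."""
    return dose_rad * grams_per_slice * n_slices

print(f"1 slice:   {integral_dose_gram_rad(5.0, 400.0, 1):,.0f} gram-rad")
print(f"10 slices: {integral_dose_gram_rad(5.0, 400.0, 10):,.0f} gram-rad")
```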
BIOLOGICAL IMPACT
The biological impact of radiation depends on both the quantity of radiation and its ability to produce biological effects. Two radiation quantities are associated with biological impact.
Dose Equivalent
Dose equivalent (H) is the quantity commonly used to express the biological impact of radiation on persons receiving occupational or environmental exposures. Personnel exposure in a clinical facility is often determined and recorded as a dose equivalent.
Dose equivalent is proportional to the absorbed dose (D), the quality factor (Q), and other modifying factors (N) of the specific type of radiation. Most radiations encountered in diagnostic procedures (x-ray, gamma, and beta) have quality and modifying factor values of 1. Therefore, the dose equivalent is numerically equal to the absorbed dose. Some radiation types consisting of large (relative to electrons) particles have quality factor values greater than 1. For example, alpha particles have a quality factor value of approximately 20.
The conventional unit for dose equivalent is the rem, and the SI unit is the
sievert (Sv). When the quality factor is 1, the relationships between dose equivalent (H) and absorbed dose (D) are
H(rem) = D(rad)
H(Sv) = D(Gy).
Dose equivalent values can be converted from one system of units to the other by:
1 Sv = 100 rem.
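The relationships above reduce to H = D x Q x N. A brief sketch (the 2-rad dose is a hypothetical value chosen for illustration):

```python
def dose_equivalent_rem(dose_rad, quality_factor=1.0, modifying_factor=1.0):
    """H = D x Q x N. Q and N are 1 for x-ray, gamma, and beta radiation."""
    return dose_rad * quality_factor * modifying_factor

def rem_to_sievert(rem):
    """1 Sv = 100 rem."""
    return rem / 100.0

print(dose_equivalent_rem(2.0))                       # x-ray: Q = 1
print(dose_equivalent_rem(2.0, quality_factor=20.0))  # alpha particles
print(rem_to_sievert(dose_equivalent_rem(2.0)))       # in sieverts
```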
Figure 3-10 is a summary of the general relationship among the three quantities:
exposure, absorbed dose, and dose equivalent. Although each expresses a differ¬
ent aspect of radiation, they all express radiation concentration. For the types of radiation used in diagnostic procedures, the factors that relate the three quantities
have values of approximately 1 in soft tissue. Therefore, an exposure of 1 R pro¬
duces an absorbed dose of approximately 1 rad, which, in turn, produces a dose
equivalent of 1 rem.
When specific radiation effects rather than general risk are being considered,
the relative biological effectiveness (RBE) of the radiation must be taken into ac¬
count. The value of the RBE depends on characteristics of the radiation and the specific biological effect being considered.
[Figure 3-10: Relationships among exposure, absorbed dose (energy concentration), and dose equivalent (biological impact). In soft tissue, an exposure of 1 R produces an absorbed dose of approximately 1 rad (10 mGy) (the factor is 0.93-0.96), which corresponds to a dose equivalent of 1 rem (10 mSv). Unit relationships: 1 rad = 100 ergs/gram; 1 gray = 1 joule/kilogram = 100 rad; 1 rem = 10 mSv.]
LIGHT
The basic light quantities and units encountered in radiology can be conven¬
iently divided into two categories: those that express the amount of light emitted
by a source and those that describe the amount of light falling on a surface, such as
a piece of film. The relationships of several light quantities and units are shown in
Figure 3-11.
Luminance

[Figure 3-11: Luminance describes the brightness of a light source; illuminance describes the light falling on a surface.]

Illuminance
RADIO FREQUENCY RADIATION
During the acquisition phase of MRI, the system transfers energy to the patient's body at some specific power level. The actual power (watts) used depends on many factors. The RF energy absorbed by the tissue is converted into heat. Therefore, the power concentration is an indication of the rate at which heat is produced within specific tissue.
Chapter 4
Characteristics and Structure
of Matter
Radiation is created and then later absorbed within some material substance or matter. Certain materials are more suitable than others as both radiation sources and absorbers. An atom consists of a nucleus and electrons located in the shells surrounding the nucleus. Transitions in the shell electrons also produce one form of x-radiation.
NUCLEAR STRUCTURE
Neutrons and protons make up the nucleus, and in reality the electrons are located at a much greater distance from the nucleus than shown in typical diagrams.
[Figure: An atom, with a central nucleus surrounded by electron shells.]
Composition
All nuclei are composed of two basic particles, neutrons and protons. Neutrons
and protons are almost the same size but differ in their electrical charge. Neutrons
have no electrical charge and contribute only mass to the nucleus. Each proton has
a positive charge equal in strength to the negative charge carried by an electron.
Most physical and chemical characteristics of a substance relate to the composition of its nuclei.
Because of their very small size it is not convenient to express the mass of
nuclei and atomic particles in the conventional unit of kilograms. A more appro¬
priate unit is the atomic mass unit (amu), the reference for which is a carbon atom
Characteristics and Structure of Matter 55
with a mass number of 12, which is assigned a mass of 12.000 amu. The relation¬
ship between the atomic mass unit and kilogram is
1 amu = 1.66 x 10^-27 kg.
The difference in mass between a neutron and proton is quite small: approxi¬
mately 0.1%. The larger difference is between the mass of these two particles and
the mass of an electron. More than 1,800 electrons are required to equal the mass
of a proton or neutron.
The total number of particles (neutrons and protons) in a nucleus is the mass
number (A). Since neutrons and protons have approximately the same mass, the
total mass or weight of a nucleus is, within certain limits, proportional to the mass
number. However, the nuclear mass is not precisely proportional to the mass num¬
ber because neutrons and protons do not have the same mass, and some of the
mass is converted into energy when the nucleus is formed.
A specific nuclide is identified by its chemical symbol with the mass number added, such as 14C or 131I, or by a number following the symbol, such as C-14, I-131, etc. The atomic number is added as a subscript preceding the chemical symbol. Adding the atomic number to the symbol is somewhat redundant since only one atomic
number is associated with each chemical symbol or element.
With the exception of ordinary hydrogen, all nuclei contain neutrons and pro¬
tons. The lighter elements (with low atomic and mass numbers) contain almost
equal numbers of neutrons and protons. As the size of the nucleus is increased, the
ratio of neutrons to protons increases to a maximum of about 1.3 neutrons per
proton for materials with very high atomic numbers. The number of neutrons in a
specific nucleus can be obtained by subtracting the atomic number from the mass
number. One chemical element may have nuclei containing different numbers of
neutrons. This variation in neutron composition usually determines if a nucleus is
radioactive.
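The rule just stated, that the number of neutrons is the mass number minus the atomic number, can be sketched as a one-line function; the nuclides below are examples used elsewhere in this chapter:

```python
# Number of neutrons in a nucleus: N = A - Z (mass number minus atomic number).
def neutron_count(A, Z):
    return A - Z

print(neutron_count(131, 53))   # I-131 has 78 neutrons
print(neutron_count(12, 6))     # C-12 has 6 neutrons
```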
Nuclides

About 1,300 different neutron-proton combinations are now known. The term element refers to the classification of a substance according to its atomic number, and the term nuclide refers to its classification by both atomic number and number of neutrons. In other words, whereas there are at least 106 different elements, there are about 1,300 different nuclides known.
Figure 4-3 Nuclide Chart: Neutron Number Plotted against Atomic (Proton) Number
Isotopes
The nuclides of an element may contain different numbers of neutrons. Nuclides that belong to the same chemical element (and have the same atomic number) but have different numbers of neutrons are known as isotopes. It should be emphasized that the term isotope describes a relationship between or among nuclides rather than a specific characteristic of a given nuclide. An analogy is that persons who have the same grandparents but not the same parents are known as cousins. The isotopes of each element are located in the same vertical column of the nuclide chart, as shown in Figure 4-3.
There seems to be a general misconception that the term isotope means radioactive. This is obviously incorrect, since every nuclide is an isotope of some other nuclide. Most elements have several isotopes. In most cases some of the isotopes of a given element are stable (not radioactive), and some are radioactive. For example, iodine has 23 known isotopes, with mass numbers ranging from 117 to 139. Two of these, I-127 and I-131, are shown in Figure 4-4. The relationship between the two nuclides is that they are isotopes. I-131 is an isotope of I-127, and I-127 is also an isotope of I-131. For most elements the most common or most abundant form is the stable isotope. The radioactive forms are therefore isotopes of the more common forms, explaining the strong association isotopes have developed with radioactivity.
Isobars
Nuclides having the same mass number (total number of neutrons and protons) but different atomic numbers are known as isobars, as shown in Figure 4-5. I-131 and Xe-131 are isobars of each other. A pair of isobars cannot belong to the same chemical element. The relationship among isobars in the nuclide chart is illustrated in Figure 4-3, showing aluminum-29, silicon-29, phosphorus-29, and sulfur-29.

Our major interest in isobars is that in most radioactive transformations one nuclide will be transformed into an isobar of itself. For example, the I-131 shown
in Figure 4-5 is radioactive and is converted into Xe-131 when it undergoes its
normal radioactive transformation.
Isomers
Nuclei can have the same neutron-proton composition but not be identical; one nucleus can contain more energy than the other. Two nuclei that have the same composition but different energy contents are known as isomers. The nucleus with the excess energy will eventually give off its excess energy and change to the other isomer. Such isomeric transitions have an important role in nuclear medicine and are discussed in detail later.

Figure 4-4 Isotopes of Iodine: I-127 and I-131
Figure 4-5 Isobars: I-131 (53 Protons, 78 Neutrons) and Xe-131 (54 Protons, 77 Neutrons)
[Figure: Isomers Tc-99m and Tc-99, each with 43 protons and 56 neutrons]
Isotones
Nuclides that have the same number of neutrons are known as isotones. This relationship, mentioned here for the sake of completeness, is not normally encountered in nuclear medicine.
NUCLEAR STABILITY
Within a nucleus, the particles are simultaneously attracted to and repelled from each other. Since each proton carries a positive electrical charge, protons repel each other. A short-range attraction force between all particles is also present within nuclei.
The most significant factor that determines the balance between the internal
forces and therefore the nuclear stability is the ratio of the number of neutrons to
the number of protons. For the smaller nuclei, a neutron-proton ratio of 1 to 1
produces maximum stability. The ratio for stability gradually increases with increasing atomic number, up to a value of approximately 1.3 to 1.0 for the highest atomic numbers.
Figure 4-8 Nuclide Chart Showing the Relationship of Unstable Radioactive and Stable Nuclear Structures
If the neutron-proton ratio is slightly above or below the ratio for stability, the nucleus will generally be radioactive. Ratios considerably different from those required for stability are not found in nuclei because they represent completely unstable compositions. In an unstable composition, the repelling forces override the forces of attraction between the nuclear particles.

The relationship between nuclear stability and neutron-proton ratio is illustrated in Figure 4-8. The stable nuclides, those with a neutron-proton ratio of approximately 1 to 1, are located in a narrow band running diagonally through the nuclide chart. The radioactive nuclides are located on either side of the stable band. All other areas on the nuclide chart represent neutron-proton mixtures that cannot exist as nuclei.
NUCLEAR ENERGY
Whenever a nucleus changes to a more stable form, it must emit energy. Several
types of nuclear changes can result in the release of energy. Under certain condi¬
tions a nucleus can,
by fission, break apart into more stable components. This
process takes place in nuclear reactors where the energy released is often used to
generate electrical energy. The fusion of two small nuclei to form a larger nucleus
is the process that creates energy within the sun and the hydrogen bomb. In
nuclear medicine, radiation energy is created when nuclei undergo spontaneous
radioactive transitions to create more stable nuclear structures.
The energy emitted during nuclear transitions is created by converting a small fraction of the nuclear mass into energy. When such a conversion takes place, the relationship between the amount of energy (E) and the amount of mass (m) involved is given by Einstein's equation:

E = mc²,

where c is the velocity of light. A significant aspect of this relationship is that a tremendous amount of energy can be created from a relatively small mass. A mass of 1 g, completely converted, would produce 25,000,000 kilowatt-hours.
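The 25,000,000 kilowatt-hour figure can be checked with a short calculation; the rounded value of c used here is an assumption for illustration:

```python
# Energy released by complete conversion of 1 g of mass, via E = mc^2.
m = 1e-3          # mass in kilograms (1 g)
c = 3.0e8         # velocity of light in m/s (rounded)

E_joules = m * c**2              # 9.0e13 J
E_kwh = E_joules / 3.6e6         # 1 kWh = 3.6e6 J

print(f"{E_kwh:,.0f} kilowatt-hours")  # 25,000,000 kilowatt-hours
```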
In clinical applications we are interested in the amount of energy released by an individual atom. This is expressed in the unit of kiloelectron volts (keV), a relatively small unit of energy. One keV is 1,000 electron volts, or 1.6 × 10⁻¹⁶ joule, and 1 amu of mass is equivalent to approximately 931,000 keV of energy. The photon energies encountered in diagnostic procedures therefore correspond to a mass change of 0.0001 amu to 0.0005 amu. The amount of nuclear mass used to produce the radiation energy is relatively small.
The energy equivalent of one electron mass is 511 keV. This value is often
referred to as the rest-mass energy of an electron. In some situations in nuclear
medicine procedures, the masses of individual electrons are completely converted
into energy. The result is a photon with the characteristic energy of 511 keV.
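The 511-keV rest-mass energy can be verified directly from E = mc²; the physical constants below are standard reference values, not taken from the text:

```python
# Rest-mass energy of an electron, E = m*c^2, expressed in keV.
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # velocity of light, m/s
J_per_keV = 1.602e-16

E_keV = m_e * c**2 / J_per_keV
print(round(E_keV))   # 511
```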
ELECTRONS
Number

Each electron carries one unit of negative electrical charge, equal in strength to the positive charge of a proton. Under normal conditions, when the number of electrons and protons in an atom is the same, the positive and negative charges balance so that the atom has no net charge. However, if an electron is removed from an atom, the atom is said to be ionized and will have a positive charge.
Energy Levels

The electrons are arranged in shells, which are designated by the letters of the alphabet, beginning with K for the shell closest to the nucleus, as shown in Figure 4-1. Each shell has a limited electron capacity. The maximum capacity of the K shell in any atom is 2 electrons; the L shell, 8 electrons; the M shell, 18 electrons; etc. The electron shells are generally filled beginning with the K shell and extending outward until the total number of electrons has been placed.
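The shell capacities quoted above follow the 2n² rule of atomic physics (the rule itself is standard background, not stated explicitly in the text); a quick sketch:

```python
# Maximum electron capacity of each shell is 2*n^2,
# where n = 1 for the K shell, 2 for L, 3 for M, and so on.
shells = "KLMNO"
for n, shell in enumerate(shells, start=1):
    print(shell, 2 * n**2)
# K 2, L 8, M 18, N 32, O 50
```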
Electrons are bound to the positive nucleus of an atom by their negative electrical charge. The strength of this binding can be expressed in terms of energy. This binding energy of an electron is equal to the amount of energy that would be required to remove the electron from the atom. Binding energy is a form of electron potential energy. As with any form of potential energy, some point must be designated the zero energy level. In the case of electrons, a location outside the atom is designated the zero level, and the electron energies within an atom can be represented by a diagram of the type shown in Figure 4-9. It should be noticed that this diagram represents the electrons as being down in a hole. The electrons near the bottom are at the lowest energy level and have the greatest binding energy.
As discussed previously, the electrons are arranged within the atom in definite layers, or shells. Each shell is a different energy level. The K shell, which is closest to the nucleus, is at the lowest energy level. The diagram in Figure 4-9 is for tungsten, which has an atomic number of 74. Only the K, L, and M electron levels are shown. Additional electrons are located in the N and O shells. These shells would be located above the M shell and slightly below the zero level. It should be noticed that there is a significant energy difference between the various shells. All of the shells, except K, are subdivided into additional energy levels. For example, the L shell is divided into three levels designated LI, LII, and LIII.
The roles of electrons in radiation events usually involve one of two basic principles: (1) energy from some source is required to move an electron to a higher shell (such as K to L) or out of the atom; (2) if an electron moves to a lower shell (ie, L to K), energy must be given up by the electron and usually appears as some type of radiation. The amount of energy involved depends on the difference in the energy levels between which the electrons move.
The binding energy for electrons in a specific shell, such as K, is related to atomic number, as indicated in Figure 4-10. It should be noticed that only the K-shell electrons of the higher atomic number elements have binding energies in the same range as the energies of diagnostic gamma and x-ray photons. This is significant in several types of interactions discussed later. The binding energy of the L-shell electrons is always much less than for the K shell, but it also increases with atomic number. For most substances, the binding energy of the outermost electrons is in the range of 5 eV to 20 eV. Obviously, these are the electrons that are most easily removed from the atom.
Figure 4-9 Energy Level Diagram of Electrons within the Tungsten Atom
Figure 4-10 Relationship between K-Shell Binding Energy and Atomic Number
Concentration

The concentration of electrons in a material (electrons per cubic centimeter) is given by the product of Avogadro's number, N, the density of the material, and the ratio of atomic number to atomic weight, Z/A. Several observations concerning this relationship are in order. Avogadro's number, N, always has the same value and obviously does not change from element to element. Z and A have unique values for each chemical element. It should be noticed, however, that the number of electrons per cubic centimeter depends only on the ratio of Z to A. The
elements with lower atomic numbers have approximately one neutron for each
proton in the nucleus. The value of Z/A is approximately 0.5. As the atomic num¬
ber and atomic weight increase, the ratio of neutrons within the nucleus also in¬
creases. This produces a decrease in the Z/A ratio, but this
change is relatively
small. Lead, which has an atomic number of 82 and an atomic weight of 207, has
a Z/A ratio of 0.4. For most material encountered in x-ray
applications, the Z/A
ratio varies by less than 20%. The single exception to this is hydrogen. Normal
hydrogen contains no neutrons and has a nucleus that consists of a single proton.
The Z/A ratio, therefore, has a value of 1.
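The Z/A values quoted in this paragraph are easy to verify (Z and A here are standard values, with A for lead rounded to 207 as in the text):

```python
# Z/A ratio (electrons per nucleon) for a few materials.
materials = {
    "hydrogen": (1, 1.0),
    "carbon":   (6, 12.0),
    "oxygen":   (8, 16.0),
    "lead":     (82, 207.0),
}
for name, (Z, A) in materials.items():
    print(f"{name}: Z/A = {Z / A:.2f}")
# hydrogen 1.00; carbon 0.50; oxygen 0.50; lead 0.40
```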
Since Avogadro's number is constant, and the Z/A ratio is essentially constant,
the only factor that can significantly alter the electron concentration is the density
of the material. Most materials, especially pure elements, have more or less unique
density values. In compounds and mixtures, the density depends on the relative
concentration of the various elements.
The fact that electron concentration does not significantly change with atomic number might suggest that atomic number has little to do with electron-x-ray interactions. This is, however, not the case. As x-ray photons pass through matter, the chance of interaction depends not only on electron concentration, but also on how firmly the electrons are bound within the atomic structure. Certain types of interactions occur only with firmly bound electrons. Since the binding energy of electrons increases with atomic number, the concentration of firmly bound electrons increases significantly with increased atomic number.
Atomic number is essentially a characteristic of the atom and has a value that is unique to each chemical element. Many materials, such as human tissue, are not a single chemical element, but a conglomerate of compounds and mixtures. With respect to x-ray interactions, it is possible to define an effective atomic number, Zeff, for compounds and mixtures, computed from the atomic numbers of the constituent elements weighted by the fraction of the electrons each element contributes.
[Table: Material, Atomic Number (Z), K-Electron Binding Energy (keV), Density (g/cc), and Application; entries lost in reproduction]

*Effective Z of tissues from Spiers (1946).
Source: Spiers FW: Effective atomic number and energy absorption in tissues. Br J Radiol 1946;19:218.
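As a sketch of how an effective atomic number is computed, the power-law form from the Spiers formulation can be applied to water; the exponent 2.94, the electron-fraction weighting, and the composition of water are assumptions drawn from that formulation rather than from this chapter:

```python
# Effective atomic number of water via the Spiers power-law form:
# Zeff = (sum_i f_i * Z_i**2.94) ** (1/2.94),
# where f_i is the fraction of the electrons contributed by element i.

# Water: mass fractions H = 2/18, O = 16/18; electrons per gram scale as Z/A.
elements = [  # (Z, A, mass fraction)
    (1, 1.0, 2.0 / 18.0),    # hydrogen
    (8, 16.0, 16.0 / 18.0),  # oxygen
]
contrib = [(Z, w * Z / A) for Z, A, w in elements]
total = sum(c for _, c in contrib)
fractions = [(Z, c / total) for Z, c in contrib]

p = 2.94
z_eff = sum(f * Z**p for Z, f in fractions) ** (1 / p)
print(round(z_eff, 1))   # ~7.4 for water
```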
Chapter 5
Radioactive Transitions
In the previous chapter we showed that certain nuclei are not completely stable and eventually undergo an internal change that will produce a more stable nuclear structure. This spontaneous change is a radioactive transition. In some older literature it is referred to as a disintegration or decay, but transition is the more accurate term because the nucleus does not disintegrate; it simply undergoes a slight change. This event is illustrated in Figure 5-1. The original nucleus is designated the parent, and the nucleus after the transition is designated the daughter. In radioactive transitions, energy is emitted as radiation. The types of radiation encountered in nuclear medicine are shown in Figure 5-1. The radiation is in the form of either photons or particles. The specific radiations emitted depend on the physical characteristics of the nucleus and are considered in more detail later. The daughter nucleus can be either stable or a radioactive or metastable nucleus that will undergo another transition in the future.
In most in-vivo nuclear medicine procedures it is desirable to use a radionuclide that emits photons in the range of 100 keV to 500 keV. The penetrating ability of photons is related to their energy. Photons in this energy range can emanate from the body but are not so energetic that they penetrate through the detector and are lost. Particle radiation is not useful in most diagnostic procedures. In fact, it is usually undesirable because it deposits its energy in the body close to the site of origin and can contribute to patient radiation dose without contributing to image formation.
Figure 5-1 Various Radiations Produced by Radioactive Transitions
[Figure: energy-level diagram of a radioactive transition — the parent undergoes an isobaric transition to an intermediate state, followed by an isomeric transition to the daughter at atomic number Z + 1]
ISOBARIC TRANSITIONS
Most radioactive transitions have several steps. For most radionuclides, the first step is an isobaric transition, usually followed by an isomeric transition and interactions with orbiting electrons. The three types of isobaric transitions of interest to us are (1) beta emission, (2) positron emission, and (3) electron capture.
In nuclear stability, the neutron-proton ratio (N/P) is crucial. If it is too low or too high, the nucleus will eventually rearrange itself into a more stable configuration. Beta radiation, which is the emission of energetic electrons, results when an N/P ratio is too high for stability; positron emission or electron capture occurs when it is too low for stability. These two conditions are represented by specific areas of the nuclide chart shown in Figure 5-3. Beta emitters are above the stable nuclides, and positron emitters and electron capture nuclides are below.
Beta Emission

In beta emission, a neutron within the nucleus is converted into a proton, and an electron (the beta particle) is created and ejected from the nucleus. The ejected electron carries away one unit of negative charge. The second function of the electron is to carry off a portion of the energy given up by the nucleus. The energy is carried as kinetic energy by the electron. But the
energy carried by a beta particle is usually less than the total transition energy given up by the nucleus. The remaining portion is removed from the nucleus by the emission of a very small particle known as a neutrino. In each transition, the sum of the beta and neutrino energy is equal to the transition energy for the nuclide. Unlike the beta particle, the neutrino is very penetrating and carries its energy out of the body. Because the beta particles are emitted with a spectrum of energies, it is the average beta energy value that indicates the radiation dose, or energy deposited in the body, by the beta radiation. The shape of the beta energy spectrum varies from nuclide to nuclide. The relationship between average energy and transition energy depends on the value of the transition energy and the atomic number of the nuclide. For most radionuclides encountered in nuclear medicine, the average beta energy is usually between 25% and 30% of the maximum energy.
Positron Emission

Two types of transitions can occur when the nuclear N/P is too low for stability. One is positron emission. A positron is a small particle that has essentially the same mass as an electron but has a positive rather than negative electrical charge. In positron emission, a proton is converted into a neutron, and some of the transition energy must supply the mass created by this conversion. The energy equivalent to the mass difference between a neutron and a proton plus the energy equivalent of the positron mass is approximately 1.8 MeV. This means that the total transition energy must be at least 1.8 MeV for positron emission to occur.
A positron quickly loses its kinetic energy and then combines with an electron. The masses of the two particles are completely converted into energy according to the relationship

E = mc².

The energy produced is 1.022 MeV, emitted as a pair of photons, each with an energy of 511 keV. Therefore, the radiation from a positron-emitting material is photons with a characteristic energy of 511 keV. The pair of photons leave the site traveling in opposite directions. This is useful in imaging because it allows the annihilation site to be precisely determined.
Electron Capture

The other transition that can occur when the N/P ratio is too low is electron capture, in which the nucleus captures an electron from one of the atom's shells, usually the K shell. When the negative electron enters the nucleus, the positive charge of one proton is canceled and the proton is converted into a neutron. This results in the reduction of the atomic number by one unit. Since the mass number does not change, electron capture is an isobaric transition. The relative frequency of electron capture and positron emission varies with each nuclide.
In an electron capture transition, radiation is not emitted directly from the nucleus but results from changes within the electron shells. Electron capture creates a vacancy in one shell, which is quickly filled by an electron from a higher energy location. As the electron moves down to the K shell, it gives off an amount of energy equivalent to the difference in the binding energy of the two levels. This energy is emitted from the atom as either characteristic x-ray photons or Auger electrons. Auger electrons are produced when the energy given up by the electron
filling the K-shell vacancy is transferred to another electron, knocking it out of its
shell. Most Auger electrons have relatively low energies.
Many radionuclides that undergo electron capture are used in nuclear medicine
because the energy of characteristic x-ray photons is ideal for in-vivo studies.
ISOMERIC TRANSITIONS
Gamma Emission
In most isomeric transitions, a nucleus will emit its excess energy in the form of
a gamma photon. A gamma photon is a small unit of energy that travels with the
speed of light and has no mass; its most significant characteristic is its energy. The
photon energies useful for diagnostic procedures are generally in the range of 100
keV to 500 keV.
The energy of a gamma photon is determined by the difference in energy between the intermediate and final states of the nucleus undergoing isomeric transition. This difference is the same for all nuclei of a specific nuclide. However, many nuclides have more than one intermediate state or energy level. When this is the case, a radionuclide might emit gamma photons with several different energies. Consider, as an example, a nuclide with two intermediate energy levels, one higher than the other. When there are several different intermediate energy levels, it is common for some nuclei to go to one level and other nuclei to go to another level during the isobaric transition. This is usually indicated on the transition diagram
by showing the percentage of nuclei that go to each energy level. In the illustration
considered, 80% of the nuclei go directly to intermediate energy level number 1,
and 20% go directly to level number 2. The gamma photons are emitted when the
nuclei move from these intermediate energy levels down to the daughter nuclide
level.
Nuclei that have gone to a specific intermediate energy level might then go
directly to the daughter level or to a lower intermediate level. With this in mind we
can predict the gamma photon energy spectrum produced by our example nuclide.
The spectrum will consist of three discrete energies as shown in Figure 5-8. Sixty
78 Physical Principles of Medical Imaging
percent of the parent nuclei will go to intermediate energy level number 1 and then
directly to the daughter level 800 keV below. Therefore, 60% of the transitions
will give rise to an 800-keV photon. Twenty percent of the nuclei that go to energy
level 1 will then go to intermediate level number 2 by emitting a 300-keV photon.
Forty percent of the nuclei will go through intermediate energy level number 2,
either directly from the parent or from intermediate level number 1. When these
nuclei drop to the daughter energy level, a 500-keV gamma photon will be emitted. It is the combination of different energy levels and different transition routes that gives rise to the different energies in the typical gamma spectrum. For most radionuclides, one or two gamma energies will account for the vast majority of transitions.
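The branching arithmetic of this example can be tabulated in a short script; the percentages and energy levels are those of the text's hypothetical nuclide:

```python
# Photon yields for the example nuclide: 80% of transitions go to
# intermediate level 1 (800 keV above the daughter), 20% directly to
# level 2 (500 keV). Of all parent nuclei, 60% drop from level 1
# straight to the daughter and 20% drop from level 1 to level 2 first.
yields = {
    800: 0.60,          # level 1 -> daughter
    300: 0.20,          # level 1 -> level 2 (800 keV - 500 keV)
    500: 0.20 + 0.20,   # level 2 -> daughter (direct + via level 1)
}
for energy, y in sorted(yields.items()):
    print(f"{energy} keV photon in {y:.0%} of transitions")
```

Note that the yields sum to more than 100%, since a single transition routed through level 2 via level 1 emits two photons.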
For most nuclides, the time spent by the nucleus in the intermediate state is extremely short, and the isomeric transition appears to coincide with the isobaric transition. In some nuclides, however, the nuclei remain in the intermediate state for a longer time. In this case, the intermediate state is referred to as a metastable state. Metastable states are of particular interest in nuclear medicine because they make possible the separation of electron and photon radiation. In a diagnostic procedure it is undesirable to have electron radiation in the body because it contributes to radiation dosage but not to image formation. By using a nuclide that has already undergone an isobaric (electron-emitting) transition and is in a metastable state, it is possible to have a radioactive material that emits only gamma radiation.
Figure 5-8 Relationship of Nuclear Energy Levels to the Energy Spectrum of Gamma Photons
Internal Conversion
Under some conditions, the energy from an isomeric transition can be transferred to an electron within the atom. This energy supplies the binding energy and expels the electron from the atom. This process is known as internal conversion (IC) and is an alternative to gamma emission. In many nuclides, isomeric transitions produce both gamma photons and IC electrons. When an electron is removed from the atom by internal conversion, a vacancy is created. When the vacancy is filled by an electron from a higher energy level, energy must be emitted from the atom as a characteristic x-ray photon or an Auger electron.
The various isobaric and isomeric transitions give rise to a combination of both photon and particulate radiations. The radiations encountered in clinical procedures are summarized in Figure 5-9. Nuclei with a high N/P generally produce beta radiation; those with a low N/P undergo either positron emission or electron capture. All transitions are usually followed by either gamma or internal conversion electron emission. Internal conversion and electron capture lead to x-ray or Auger electron emission.

Figure 5-9 Composite Diagram Showing the Various Nuclear Transitions That Produce Radiation
ALPHA EMISSION
[Table: Selected radionuclides — Element, A, Z, T½, Transition, Radiation, Yield, Energy (keV). Surviving entries: X-Ray, yield 0.71, 27 keV; X-Ray, yield 0.78, 51 keV; X-Ray, yield 0.36, 69 keV]
PRODUCTION OF RADIONUCLIDES
Some radionuclides occur in nature but are generally not suitable for clinical use. The radionuclides used in clinical procedures are produced by bombarding stable nuclei with either neutrons or positive particles such as protons. Neutrons can be obtained from nuclear reactors or accelerators. Positive particles are obtained from accelerators, usually cyclotrons.
Chapter 6
Radioactivity
1 Ci = 3.7 × 10¹⁰ Bq
1 mCi = 37 MBq
1 MBq = 27 µCi.
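These unit relationships follow directly from the definition of the curie; a quick check:

```python
# Conversions between the curie (Ci) and the becquerel (Bq).
BQ_PER_CI = 3.7e10

mci_in_mbq = 1e-3 * BQ_PER_CI / 1e6      # 1 mCi expressed in MBq
mbq_in_uci = (1e6 / BQ_PER_CI) * 1e6     # 1 MBq expressed in microcuries

print(round(mci_in_mbq))   # 37
print(round(mbq_in_uci))   # 27
```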
The activity of a sample is related to two quantities used in clinical nuclear medicine. These are illustrated in Figure 6-1 and are (1) the number of radioactive (untransformed) nuclei in the sample and (2) the elapsed time. Both relationships involve the lifetime of the radioactive material. The lifetime is the time between the formation of a radioactive nucleus and its radioactive transition.
RADIOACTIVE LIFETIME
[Figure 6-1: activity of a sample versus elapsed time after formation]

Half-Life

The half-life, T½, is the time required for one-half of the nuclei in a sample to undergo transition. It is related to the average life, Ta, by

T½ = 0.693 · Ta.
The number 0.693 is the natural logarithm of the number two and frequently appears in relationships involving half-life. The transformation constant λ is the reciprocal of the average life, Ta:

λ = 1/Ta.
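The relationships among half-life, average life, and the transformation constant can be sketched numerically; the 6-hour half-life used here is an illustrative round value in the range of clinical nuclides:

```python
import math

# Relationships among half-life (T_half), average life (Ta), and
# the transformation constant (lam) for an assumed 6-hour half-life.
T_half = 6.0                 # hours
Ta = T_half / math.log(2)    # average life; equivalently T_half = 0.693 * Ta
lam = 1 / Ta                 # transformation constant, per hour

print(round(Ta, 2))          # ~8.66 hours
print(round(lam, 3))         # ~0.116 per hour
```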
Although activity does not express the amount of radioactive material present, it is proportional to the amount present at a specific time. The amount can be expressed by quantities such as mass, volume, or number of nuclei. We now consider the relationship of activity and the number of nuclei, N, in a specific sample.
We have just seen that transitions are spread over a longer time for some nuclides than for others. In other words, for a given number of radioactive nuclei, the rate of transition (activity) is inversely related to the lifetime of the nuclide. The number, N, is the quantity of radioactive nuclei or atoms in a sample, whereas the activity is the rate at which they are undergoing transitions and emitting radiation. Although activity is proportional to the number of nuclei, the proportion varies from one nuclide to another depending on its lifetime. From the relationships above, it can be seen that for a given quantity of radioactive material, activity is inversely related to half-life:

A = λ · N = 0.693 · N/T½.
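The proportionality between activity and the number of nuclei can be sketched numerically using the standard relation A = λN = 0.693 N/T½; the sample numbers are illustrative, not taken from the text:

```python
import math

# Activity from the number of radioactive nuclei: A = lam * N = 0.693 * N / T_half.
N = 1.0e12                       # radioactive nuclei in the sample (illustrative)
T_half = 6.0 * 3600              # half-life in seconds (6 hours)

A = math.log(2) * N / T_half     # activity in transitions per second (Bq)
print(f"{A:.2e} Bq")             # ~3.21e+07 Bq
```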
Cumulated Activity

The quantity of radioactive nuclei that undergo transitions in a period of time is usually designated the cumulated activity, Ã, and is expressed in the units of microcurie-hours. One µCi-hr is equivalent to 133 million (13.3 × 10⁷) transitions.
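The 133-million figure follows from the definition of the microcurie:

```python
# Transitions represented by 1 microcurie-hour of cumulated activity.
dps_per_uci = 3.7e4                 # transitions per second per microcurie
transitions = dps_per_uci * 3600    # seconds per hour
print(f"{transitions:.3e}")         # 1.332e+08, i.e., about 133 million
```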
The relationship between cumulated activity, Ã, and the initial activity of a collection of radioactive material, A, is different for each nuclide and depends on the nuclide lifetime (transformation constant or half-life). This characteristic of radioactive decay is illustrated in Figure 6-3.
Let us assume that we have a radioactive material with a transformation constant of 0.1 per hour. This means that approximately one-tenth of the nuclei will undergo transitions during a 1-hour time interval. If we begin with 100 units of radioactive material, 10 units will undergo transition during the first hour. At the beginning of the second hour there will be 90 units of material. During the second hour, one-tenth of 90 units will undergo transition, which leaves 81 units remaining at the end of the second hour. The relationship between amount of radioactivity and time is not linear, but exponential. This relationship is encountered whenever the fraction of material undergoing change remains constant but the amount decreases with time.
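The stepwise arithmetic of this example can be written out directly:

```python
# Stepwise illustration of exponential decay with a transformation
# constant of 0.1 per hour: each hour, one-tenth of what remains decays.
amount = 100.0
for hour in (1, 2):
    amount -= 0.1 * amount    # lose one-tenth during the hour
    print(f"after hour {hour}: {amount:.0f} units")
# after hour 1: 90 units; after hour 2: 81 units
```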
Figure 6-3 Relationship between Amount of Radioactive Material and Elapsed Time
When the elapsed time is expressed in half-lives, the remaining fraction is the fraction 0.5 multiplied by itself the number of times corresponding to the number of half-lives. For example,

1 half-life, f = 0.5
2 half-lives, f = (0.5) × (0.5) = 0.25
3 half-lives, f = (0.5) × (0.5) × (0.5) = 0.125
4 half-lives, f = (0.5) × (0.5) × (0.5) × (0.5) = 0.0625.

In general,

f = (0.5)^(t/T) = (0.5)ⁿ,

where t is the elapsed time, T is the half-life, and n is the number of half-lives. When the elapsed time is an integral number of half-lives, the remaining fraction can be easily calculated. If the elapsed time is not an integral number of half-lives, the remaining fraction can be obtained from a special mathematical table. Table 6-1 gives the remaining fractions for several elapsed time intervals expressed in terms of the number of half-lives.
Let us see how this table can be used to find the activity remaining after some elapsed time. Assume you have 100 µCi of a radioactive nuclide with a half-life of 6 hours. How much activity will remain after 33 hours? The first step is to express the elapsed time in terms of half-lives:

n = 33 hr/6 hr = 5.5.

The remaining fraction for 5.5 half-lives is 0.022. Therefore, if we started with 100 µCi, 2.2 µCi would remain after an elapsed time of 33 hours.
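The same answer can be computed directly from f = (0.5)ⁿ rather than looked up in the table:

```python
# Activity remaining after 33 hours for a nuclide with a 6-hour half-life.
A0 = 100.0                # initial activity, microcuries
T_half = 6.0              # half-life, hours
t = 33.0                  # elapsed time, hours

n = t / T_half            # number of half-lives (5.5)
f = 0.5 ** n              # remaining fraction (~0.022)
print(round(A0 * f, 1))   # 2.2 microcuries
```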
Table 6-1 Tabulation of Remaining Fraction (f) After an Elapsed Time of n Half-Lives [table entries lost in reproduction]
RADIOACTIVE EQUILIBRIUM
So far we have considered radioactive material whose activity simply decreases with elapsed time. If the radioactive material is being formed or replenished during the decay process, however, the relationship between activity and elapsed time is quite different from a simple exponential decay. The form of this relationship depends on the relationship of the rate of formation to the rate of decay. If we began by forming radioactive material, we would expect the activity to increase with elapsed time, as illustrated in Figure 6-4. As the amount of radioactive material (and activity) increases, however, the rate of loss of material by radioactive transitions also increases.
Consider filling a bucket that has a hole in it. As the water level rises in the bucket, the rate at which water flows out of the hole also increases. The water will eventually reach a level at which the rate of outflow equals the rate of inflow, and the level will then remain constant.

[Figure 6-4: activity builds toward an equilibrium level as formation competes with loss by radioactive transitions]
Secular Equilibrium
Assume that the radioactive material is forming at an almost constant rate. If, at
the beginning, no radioactive daughter material is present, no nuclei will be under¬
going transition. As soon as the radioactive material begins to accumulate, transi¬
tions will begin and some radioactive nuclei will be lost. As the number of radio¬
active nuclei increases, the activity and rate of loss increase. Initially, the rate of
loss is much less than the rate of formation. As the quantity of radioactive material
builds, the activity or transition rate increases until it is equal to the rate of formation, as shown in Figure 6-6. In other words, radioactive nuclei undergo transitions at exactly the same rate they are being formed, and a condition of equilibrium is established. The amount of radioactive material will then remain constant regardless of elapsed time. Under this condition, the activity is equal to the rate of formation and is referred to as the saturation activity. The important point is that the maximum activity of a radioactive material is determined by the rate (nuclei per second) at which the material is being formed. Although it is true that the activity gradually builds with time, a point is reached at which build-up stops and the activity remains at the saturation level.
The time required to reach a specific activity depends on the half-life of the material being formed (daughter). After n half-lives the activity will be some fraction, f, of the rate of formation or saturation activity. The relationship is

f = 1 − (0.5)ⁿ.

Activity values after a specific elapsed time can be obtained from this relationship by using Table 6-1 to find the value of (0.5)ⁿ. It should be observed that the build-up of radioactivity is a mirror image of radioactive decay. Just as a radioactive material never decays to zero activity (at least theoretically), radioactive build-up never quite reaches saturation. However, for practical purposes, saturation is reached in approximately 5 half-lives, when the activity is more than 96% of the saturation value.
After the activity reaches the saturation value it remains constant and is in a
state of secular equilibrium. This occurs when the rate of formation does not
change during the time period of interest, either because the parent material has a
very long half-life or because the radioactive material is being formed at a constant
rate by another means, such as a cyclotron or nuclear reactor.
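The build-up relationship can be sketched numerically; a minimal Python example (the function name is chosen here for illustration):

```python
# Build-up of activity toward saturation (secular equilibrium).
# After n half-lives the activity is a fraction f = 1 - (1/2)**n
# of the saturation activity (the rate of formation).

def buildup_fraction(n_half_lives: float) -> float:
    """Fraction of the saturation activity reached after n half-lives."""
    return 1.0 - 0.5 ** n_half_lives

for n in range(1, 6):
    print(f"{n} half-lives: {buildup_fraction(n):.1%} of saturation")
```

After 5 half-lives the fraction is 1 - 1/32 = 96.9%, consistent with the "more than 96%" figure quoted above.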
Transient Equilibrium
When the half-life of the parent is only a few times greater than the half-life of
the daughter, the condition of transient equilibrium will occur. During the period
of interest the parent will undergo radioactive decay. Daughter activity will build
and establish a state of equilibrium with the parent activity. Transient equilibrium
differs from secular equilibrium in two respects.
First, the equilibrium or saturation activity of the daughter, Ad, is not equal to
the activity of the parent, Ap. The relationship is

Ad = Ap x Tp/(Tp - Td)
When the half-life of the parent, Tp, is much greater than that of the daughter, Td,
the term Tp/(Tp - Td) approaches a value of one, and daughter activity approaches
parent activity, as in secular equilibrium. However, as daughter and parent half-
lives become closer, this term becomes greater than one, which means that daugh¬
ter activity is actually greater than parent activity under equilibrium conditions.
The ratio of daughter to parent activity becomes greater as the half-lives become
closer.
Second, with transient equilibrium the equilibrium activity of the daughter
changes with time because the parent activity is changing. The relationship of
parent and daughter activity for a transient condition is shown in Figure 6-7. After
the condition of equilibrium is reached, the daughter appears to decay with the
half-life of the parent.
Technetium-99m and molybdenum-99 are good examples of transient equilib¬
rium. Technetium-99m is obtained from a generator that contains molybdenum-
99. The molybdenum-99 undergoes an isobaric transition into technetium-99m
(86%) and technetium-99 (14%). The technetium-99m is radioactive with a half-
life of 6 hours. Molybdenum-99 has a half-life of approximately 67 hours. The
relationship between technetium and molybdenum activity in a typical generator
is shown in Figure 6-8. In this example it is assumed that all technetium is re¬
moved from the generator every 24 hours.
If all molybdenum nuclei changed into technetium-99m nuclei, the saturation
activity would be

ATc = AMo x TMo/(TMo - TTc) = 1.1 AMo
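The transient-equilibrium relationship can be sketched in Python; the 1.1 factor for the molybdenum-technetium pair follows from the half-lives given in the text (67 and 6 hours). The function name is illustrative:

```python
# Equilibrium daughter activity in transient equilibrium:
# Ad = Ap * Tp / (Tp - Td), with half-lives in the same units.

def daughter_activity(parent_activity: float,
                      t_parent: float,
                      t_daughter: float) -> float:
    """Equilibrium activity of the daughter for the given half-lives."""
    return parent_activity * t_parent / (t_parent - t_daughter)

# Mo-99 (Tp = 67 hr) decaying to Tc-99m (Td = 6 hr)
factor = daughter_activity(1.0, 67.0, 6.0)
print(f"A_Tc / A_Mo = {factor:.2f}")  # 1.10
```

Note that this idealized ratio assumes every parent transition produces the daughter; in an actual generator only 86% of the molybdenum transitions yield technetium-99m.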
[Figure 6-9: Removal of activity by radioactive decay and biological elimination;
with Tp = 5 hr and Tb = 3 hr, the effective half-life Te = TpTb/(Tp + Tb) = 1.9 hr]
EFFECTIVE LIFETIME
Radioactive material can be removed from a site within the body by the two
mechanisms illustrated in Figure 6-9. One is the normal radioactive decay, and the
other is biological transport or elimination from the specific site.
Half-life values can be used to express the rate of removal by both mechanisms.
The half-life associated with normal radioactive decay is generally designated the
physical half-life and has a characteristic value for each radionuclide. The rate of
biological removal can generally be expressed in terms of the biological half-life.
The value of the biological half-life is determined by such things as the chemical
form of the radionuclide and the physiological function of the organ or organism
considered.
When biological transport or elimination occurs, the lifetime of the radioactive
material in the organ is reduced. This is generally expressed in terms of an effec¬
tive half-life. The relationship between effective half-life (Te), physical half-life
(TP), and biological half-life (Tb) is given by
Te = (Tp x Tb)/(Tp + Tb)
When both radioactive decay and biological elimination are present, the effective
half-life will always be less than either the physical or biological half-life. If the
difference in the two half-life values is rather large, the effective half-life will be
slightly less than the shorter half-life of the two; if the two are equal, the effective
half-life will be one-half of the physical or biological half-life value.
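The effective half-life relationship can be sketched as follows (the example values are the ones used in Figure 6-9):

```python
# Effective half-life when both radioactive decay and biological
# elimination remove activity: Te = (Tp * Tb) / (Tp + Tb).

def effective_half_life(t_physical: float, t_biological: float) -> float:
    return (t_physical * t_biological) / (t_physical + t_biological)

# Example from Figure 6-9: Tp = 5 hr, Tb = 3 hr
print(effective_half_life(5.0, 3.0))   # 1.875 hr (approximately 1.9 hr)

# Equal half-lives: Te is one-half of either value
print(effective_half_life(8.0, 8.0))   # 4.0 hr
```

As the text states, the result is always less than the shorter of the two half-lives.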
Chapter 7
X-Ray Production
Function
X-ray tubes are designed and constructed to maximize x-ray production and to
dissipate heat as rapidly as possible.
The x-ray tube is a relatively simple electrical device typically containing only
two elements: a cathode and an anode. As the electrical current flows through the
tube from cathode to anode, the electrons undergo an energy loss, which results in
the generation of x-radiation. A cross-sectional view of a typical x-ray tube is
shown in Figure 7-1.
[Figure 7-1: Cross-sectional view of a typical x-ray tube, showing the anode, the
cathode, and the glass envelope]
The anode has two primary functions: (1) to convert electronic energy into
x-radiation, and (2) to dissipate the heat created in the process. The material for the
anode is selected to enhance these functions.
The ideal situation would be if most of the electrons created x-ray photons
rather than heat. The fraction of the total electronic energy that is converted into
x-radiation (the efficiency) depends on two factors: the atomic number (Z) of the
anode material and the energy of the electrons. Most x-ray tubes use tungsten, which
has an atomic number of 74, as the anode material. In addition to a high atomic
number, tungsten has several other characteristics that make it well suited for this
purpose. Tungsten is almost unique in its ability to maintain its strength at high
temperatures, and it has a high melting point and a relatively low rate of evaporation.
For many years, pure tungsten was used as the anode material. In recent years an
alloy of tungsten and rhenium has been used as the target material but only for the
surface of some anodes. The anode body under the tungsten-rhenium surface on
many tubes is manufactured from a material that is relatively light and has good
heat storage capability. Two such materials are molybdenum and graphite. The
use of molybdenum as an anode base material should not be confused with its use
as an anode surface material. Most x-ray tubes used for mammography have
molybdenum-surface anodes. This material has an intermediate atomic number
(Z = 42), which produces characteristic x-ray photons with energies well suited to
this application.
Design
Most anodes are shaped as beveled disks and attached to the shaft of an electric
motor; this rotating-anode design is discussed in Chapter 9.
Focal Spot
Not all of the anode is involved in x-ray production. The radiation is produced
in a very small area on the surface of the anode known as the focal spot. The
dimensions of the focal spot are determined by the dimensions of the electron
beam arriving from the cathode. In most x-ray tubes, the focal spot is rectangular.
The dimensions of focal spots usually range from 0.1 mm to 2 mm. X-ray tubes
are designed to have specific focal spot sizes; small focal spots produce sharper
images, and large focal spots have a greater heat-dissipating capacity.
Focal spot size is one factor that must be considered when selecting an x-ray
tube for a specific application. Tubes with small focal spots are used when high
image quality is essential and the amount of radiation needed is relatively low.
Most x-ray tubes have two focal spot sizes (small and large), which can be se¬
lected by the operator according to the imaging procedure.
Cathode
The basic function of the cathode is to expel the electrons from the electrical
circuit and focus them into a well-defined beam aimed at the anode. The typical
cathode consists of a small coil of wire (a filament) recessed within a cup-shaped
region, as shown in Figure 7-2.
Electrons that flow through electrical circuits cannot generally escape from the
conductor material and move into free space. They can, however, if they are given
sufficient energy. In a process known as thermionic emission, thermal energy (or
heat) is used to expel the electrons from the cathode. The filament of the cathode is
heated in the same way as a light bulb filament by passing a current through it.
This heating current is not the same as the current flowing through the x-ray tube
[Figure 7-2: High-energy (100 keV) electrons between the cathode and the anode;
potential energy is progressively converted into kinetic energy as the electrons
cross the tube]
Envelope
The anode and cathode are contained in an airtight enclosure, or envelope. The
envelope and its contents are often referred to as the tube insert, which is the part
of the tube that has a limited lifetime and can be replaced within the housing. The
majority of x-ray tubes have glass envelopes, although tubes for some applications
have metal and ceramic envelopes.
The primary functions of the
envelope are to provide support and electrical in¬
sulation for the anode and cathode assemblies and to maintain a vacuum in the
tube. The presence of gases in the
x-ray tube would allow electricity to flow
through the tube freely, rather than only in the electron beam. This would interfere
with x-ray production and possibly
damage the circuit.
Housing
The x-ray tube housing provides several functions in addition to
enclosing and
supporting the other components. It absorbs radiation, except for the radiation that
passes through the window as the useful x-ray beam. Its relatively large exterior
surface dissipates most of the heat created within the tube. The space between the
housing and insert is filled with oil, which provides electrical insulation and trans¬
fers heat from the insert to the housing surface.
ELECTRON ENERGY
The energy that will be converted into x-radiation (and heat) is carried to the
x-ray tube by a current of flowing electrons. As the electrons pass through the x-
ray tube, they undergo two energy conversions, as illustrated in Figure 7-2: The
electrical potential energy is converted into kinetic energy that is, in turn, con¬
verted into x-radiation and heat.
Potential
Each electron that moves through a potential difference of 1 kilovolt gains
1 keV of energy. By adjusting the KV, the x-ray machine operator actually assigns
a specific amount of energy to each electron.
Kinetic
As the electrons strike the anode, they are slowed very quickly and lose their
kinetic energy, which is converted into x-radiation and heat as illustrated in
Figure 7-3. Two types of interactions produce radiation. An interaction with elec¬
tron shells produces characteristic x-ray photons; interactions with the atomic
nucleus produce Bremsstrahlung radiation.

[Figure 7-3: High-speed electrons interacting with a tungsten atom; a K-shell
electron (binding energy 69.5 keV) is dislodged, and the vacancy is filled by an
L-shell electron (binding energy 10.2 keV), producing a 59.3-keV characteristic
photon]
BREMSSTRAHLUNG
Production Process
The interaction that produces the most photons is the Bremsstrahlung process.
Bremsstrahlung is a German word for "braking radiation" and is a good descrip¬
tion of the process. Electrons that penetrate the anode material and
pass close to a
nucleus are deflected and slowed down by the attractive force from the nucleus.
The energy lost by the electron during this encounter appears in the form of an
x-ray photon. Not all electrons produce photons of the same energy.
Spectrum
Only a few photons that have energies close to that of the electrons are pro¬
duced; most have lower energies. Although the reason for this is complex, a sim¬
plified model of the Bremsstrahlung interaction is shown in Figure 7-4. First,
assume that there is
a space, or field, surrounding the nucleus in which electrons
experience the "braking" force. This field can be divided into zones, as illustrated.
This gives the nuclear field the appearance of a target with the actual nucleus
located in the center. An electron striking anywhere within the target experiences
Figure 7-4 A Model for Bremsstrahlung Production and the Associated Photon Energy
Spectrum
some braking action and produces an x-ray photon. Those electrons striking near¬
est the center are subjected to the greatest force and, therefore, lose the most en¬
ergy and produce the highest energy photons. The electrons hitting in the outer
zones experience weaker interactions and produce lower energy photons. Al¬
though the zones have essentially the same width, they have different areas. The
area of a given zone depends on its distance from the nucleus. Since the number of
electrons hitting a given zone depends on the total area within the zone, it is obvi¬
ous that the outer zones capture more electrons and create more photons. From this
model, an x-ray energy spectrum, such as the one shown in Figure 7-4, could be
predicted.
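A rough numerical sketch of this target-zone model follows. It assumes, as the model does, that photon energy falls off linearly with the electron's distance from the nucleus and that electrons arrive uniformly over the target area; the zone count and electron number are arbitrary choices for illustration:

```python
# Sketch of the Bremsstrahlung "target" model: outer zones have more
# area, so they intercept more electrons and yield more (lower-energy)
# photons, producing the characteristic triangular spectrum.
import random

E_MAX = 70.0      # keV, energy of the incident electrons
N_ZONES = 7
counts = [0] * N_ZONES

random.seed(1)
for _ in range(100_000):
    # Uniform hit over a disk: the radius fraction r has density
    # proportional to r, so sample r as sqrt of a uniform variate.
    r = random.random() ** 0.5            # 0 = nucleus, 1 = target edge
    zone = min(int(r * N_ZONES), N_ZONES - 1)
    counts[zone] += 1

for zone, n in enumerate(counts):
    # Inner zones (small index): strongest braking, highest photon energy
    e_hi = E_MAX * (1 - zone / N_ZONES)
    print(f"photons near {e_hi:5.1f} keV: {n}")
```

The photon counts rise steadily toward the low-energy end, mirroring the unfiltered spectrum sketched in Figure 7-4.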
The basic Bremsstrahlung spectrum has a maximum photon energy that corre¬
sponds to the energy of the incident electrons. This is 70 keV for the example
shown. Below this point, the number of photons produced increases as photon
energy decreases. The spectrum of x-rays emerging from the tube generally looks
quite different from the one shown here because of selective absorption within the
filter.
A significant number of the lower-energy photons are absorbed or filtered out
as they attempt to pass through the anode surface, x-ray tube window, or added
filter material. X-ray beam filtration is discussed more extensively in Chapter 11.
The amount of filtration is generally dependent on the composition and thickness
of material through which the x-ray beam passes and is generally what determines
the shape of the low-energy end of the spectrum curve.
KVP
Changing the KVP will generally alter the Bremsstrahlung spectrum, as shown
in Figure 7-5. The total area under the spectrum curve
represents the number of
photons produced. If no filtration is present, so that the spectrum is essentially a
triangle, the amount of radiation produced is approximately proportional to the
KV squared. With the presence of filtration, however,
increasing the KV also in¬
creases the relative
penetration of the photons, and a smaller percentage is filtered
out. This results in an even greater increase in radiation
output with KVP.
CHARACTERISTIC RADIATION
Production Process
The production of characteristic radiation, illustrated in Figure 7-3, involves a
collision between the high-speed electrons and the orbital
electrons in the atom. The interaction can occur only if the
incoming electron has
a kinetic
energy greater than the binding energy of the electron within the atom.
When this condition exists, and the collision occurs, the electron is
dislodged from
the atom. When the orbital electron is removed, it leaves a vacancy that is filled
by
an electron from a
higher energy level. As the filling electron moves down to fill
the vacancy, it gives up energy emitted in the form of an x-ray photon. This is
known as characteristic radiation because the energy of the photon is characteris-
Figure 7-5 Comparison of Photon Energy Spectra Produced at Different KVP Values
tic of the chemical element that serves as the anode material. In the example
shown, the electron dislodges a tungsten K-shell electron, which has a binding
energy of 69.5 keV. The vacancy is filled by an electron from the L shell, which
has a binding energy of 10.2 keV. The characteristic x-ray photon, therefore, has
an energy equal to the energy difference between these two levels, or 59.3 keV.
Actually, a given anode material gives rise to several characteristic x-ray ener¬
gies. This is because electrons at different energy levels (K, L, etc.) can be dis¬
lodged by the bombarding electrons, and the vacancies can be filled from different
energy levels. The electronic energy levels in tungsten are shown in Figure 7-6,
along with some of the energy changes that give rise to characteristic photons.
Although filling L-shell vacancies generates photons, their energies are too low
for use in diagnostic imaging. Each characteristic energy is given a designation,
which indicates the shell in which the vacancy occurred, with a subscript, which
shows the origin of the filling electron. A subscript alpha (α) denotes filling with
an L-shell electron, and a subscript beta (β) indicates filling from either the M or N shell.
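The energy bookkeeping can be sketched as a small calculation. The K- and L-shell binding energies are the tungsten values given in the text; the M-shell value is an approximate figure added here only for illustration:

```python
# Characteristic photon energy = binding energy of the shell with the
# vacancy minus the binding energy of the shell supplying the filling
# electron. K and L values are from the text; M (~2.5 keV) is an
# approximate, illustrative value.
BINDING_KEV = {"K": 69.5, "L": 10.2, "M": 2.5}   # tungsten, keV

def characteristic_energy(vacancy: str, filler: str) -> float:
    return BINDING_KEV[vacancy] - BINDING_KEV[filler]

print(f"K-alpha: {characteristic_energy('K', 'L'):.1f} keV")  # 59.3 keV
print(f"K-beta (approx): {characteristic_energy('K', 'M'):.1f} keV")
```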
Tungsten Spectrum
The spectrum of the significant characteristic radiation from tungsten is shown
in Figure 7-6. Characteristic radiation produces a line spectrum with several dis¬
crete energies, whereas Bremsstrahlung produces a continuous spectrum of pho¬
ton energies over a specific range. The number of photons created at each charac¬
teristic energy is different because the probability for filling a K-shell vacancy is
different from shell to shell.
106 Physical Principles of Medical Imaging
Liu
Jiil.
Kau
Ka.
^r
—i—
p- •'
± ±± V 20 40 60 80
Photon Energy (keV)
Figure 7-6 Electron Energy Levels in Tungsten and the Associated Characteristic X-Ray
Spectrum
Molybdenum Spectrum
Molybdenum anode tubes produce two rather intense characteristic x-ray ener¬
gies: K-alpha radiation, at 17.9 keV, and K-beta, at 19.5 keV.
KVP
The KVP value also strongly influences the production of characteristic radia¬
tion. No characteristic radiation will be produced if the KVP is less (numerically)
than the binding energy of the K-shell electrons. When the kilovoltage is increased
above this threshold level, the quantity of characteristic radiation is generally pro¬
portional to the difference between the operating kilovoltage and the threshold
kilovoltage.
The x-ray beam that emerges from a tube has a spectrum of photon energies
determined by several factors. A typical spectrum is shown in Figure 7-7 and is
made up of photons from both Bremsstrahlung and characteristic interactions.
The relative composition of an x-ray spectrum with respect to Bremsstrahlung
and characteristic radiation depends on the anode material, kilovoltage, and filtra¬
tion. In a tungsten anode tube, no characteristic radiation is produced when the
KVP is less than 69.5. At some higher kilovoltage values generally used in diag¬
nostic examinations, the characteristic radiation might contribute as much as 25%
Figure 7-7 Typical Photon Energy Spectrum from a Machine Operating at KVP = 80
of the total radiation. In molybdenum target tubes operated under certain condi¬
tions of KVP and filtration, the characteristic radiation can be a major part of the
total output.
EFFICIENCY
Concept
Only a small fraction of the energy delivered to the anode by the electrons is
converted into x-radiation; most is absorbed and converted into heat. The effi¬
ciency of x-ray production is defined as the total x-ray energy expressed as a frac¬
tion of the total electrical energy imparted to the anode. The two factors that deter¬
mine production efficiency are the voltage applied to the tube, KV, and the atomic
number of the anode, Z. An approximate relationship is

efficiency = KV x Z x 10^-6
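The commonly quoted approximation, efficiency = KV x Z x 10^-6, can be checked numerically; a sketch (the function name is illustrative):

```python
# Approximate x-ray production efficiency: KV * Z * 1e-6,
# with KV in kilovolts and Z the atomic number of the anode.

def xray_efficiency(kv: float, z: int) -> float:
    return kv * z * 1e-6

# Tungsten anode (Z = 74) at 100 kV: under 1% of the electron energy
# becomes x-radiation; the rest becomes heat.
print(f"{xray_efficiency(100, 74):.2%}")  # 0.74%
```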
KVP
The relationship between x-ray production efficiency and KVP has a specific
effect on the practical use of x-ray equipment. As we will see in Chapter 9, x-ray
tubes have a definite limit on the amount of electrical energy they can dissipate
because of the heat produced. This, in principle, places a limit on the amount of
x-radiation that can be produced by an x-ray tube. By increasing KVP, however,
the quantity of radiation produced per unit of heat is significantly increased.
Anode Material
Figure 7-8 Typical X-Ray Tube Efficacy (Exposure Output) for Different KVP Values
EFFICACY (OUTPUT)
Concept
The x-ray efficacy of the x-ray tube is defined as the amount of exposure, in
milliroentgens, delivered to a point in the center of the useful x-ray beam at a
distance of 1 m from the focal spot for 1 mAs of electrons passing through the tube.
The efficacy value expresses the ability of a tube to convert electronic energy
into x-ray exposure. Knowledge of the efficacy value for a given tube permits the
determination of both patient and film exposures by methods discussed in later
chapters. Like x-ray energy output, the efficacy of a tube depends on a number of
factors including KVP, voltage waveform, anode material, filtration, tube age, and
anode surface damage. Figure 7-8 gives typical efficacy values for tungsten anode
tubes with normal filtration.
KVP
KVP is very useful in controlling the radiation output of an x-ray tube. Figure
7-8 shows a nonlinear relationship. It is normally assumed that the radiation out¬
put is proportional to the square of the KVP. Doubling KVP quadruples the expo¬
sure from the tube.
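Treating the output as proportional to the square of the KVP, the scaling can be sketched as follows (the function name is illustrative):

```python
# If radiation output is proportional to KVP squared, the exposure for
# a fixed mAs scales as (KVP_new / KVP_old) ** 2.

def exposure_scale(kvp_old: float, kvp_new: float) -> float:
    return (kvp_new / kvp_old) ** 2

print(exposure_scale(60, 120))  # doubling KVP: 4.0x the exposure
print(exposure_scale(70, 80))   # a modest KVP step: about 1.31x
```

This square-law rule is the normal working assumption; as noted above, the actual relationship in Figure 7-8 is not perfectly quadratic.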
Waveform
Waveform describes the manner in which the KV changes with time during the
exposure.
Chapter 8
Energizing and Controlling the X-Ray Tube
KV PRODUCTION
One requirement for x-ray production is that the electrons delivering energy to
the x-ray tube must have individual energies at least equal to the energy of the
[Figure: The generator receives AC power from the power company, rectifies it,
and delivers it to the x-ray tube; the operator controls KV, MA, and time]
x-ray photons; the x-ray photon energy (kiloelectron volts) is always limited by
the electron energy, or voltage (kilovolts).
The electrical energy from a power company is generally delivered at
120, 240,
or 440 V. This voltage must be increased to the range of 25,000 V to 120,000 V to
produce diagnostic-quality x-rays.
Transformer Principles
The device that can increase voltage is the transformer, which is one of the
major components of the generator. It is a relatively large device connected by
cables to the x-ray tube. The basic function of a transformer is illustrated in
Figure
8-3.
A transformer has two separate circuits. The input circuit, which receives the
electrical energy, is designated the primary, and the output circuit is designated
the secondary. Electrons do not flow between the two circuits; rather, energy is
passed from the primary circuit to the secondary circuit by a magnetic field.
As electrons flow into the transformer and through the primary circuit, they
transfer energy to the electrons in the secondary circuit. The voltage (individual
electron energy) increases because the transformer collects the energy from a
large number of primary-circuit electrons and concentrates it into a few secondary
circuit electrons. In principle, the transformer repackages the electron energy; the
total energy entering and leaving the transformer is essentially the same. It enters
in the form of high current, low voltage and leaves in the form of high voltage, low
current.
[Figure 8-3: Transformer concept; energy enters the primary circuit as high
current at low voltage, passes through the magnetic field, and leaves the secondary
circuit as low current at high voltage]

There are more electrons flowing in the primary than in the secondary. The
ratio of the currents is the same as the voltage
ratio, except it is reversed. The larger current is in the primary, and the smaller
current is in the secondary. For a transformer with a 1,000:1 ratio, the current
flowing through the primary must be 1 A (1,000 mA) per 1 mA of current flowing
through the secondary.
The high voltage transformer in an x-ray machine can be described in quantita¬
tive terms as a device that converts volts into kilovolts and converts amperes into
milliamperes.
A transformer physically consists of two coils of wire, as shown in Figure 8-2.
One coil forms the primary and the other the secondary circuit of the transformer.
Each coil contains a specific number of loops or turns. The characteristic of the
transformer that determines the voltage step-up ratio is the ratio of the number of
turns (loops) in the secondary coil to the number in the primary. The voltage step-
up ratio is determined by, and is the same as, the secondary-to-primary-turns ratio.
There is no direct flow of electrons between the primary and secondary coils;
they are coupled by the magnetic field produced by current passing through the
primary coil. The transformer is based on two physical principles involving the
interaction between electrons and magnetic fields: (1) when electrons flow
through a coil of wire, a magnetic field is created around the coil; (2) electrons
within a coil of wire will receive energy if the coil is placed in a changing mag¬
netic field.
The key to transformer operation is that the primary coil must produce a con¬
stantly changing, or pulsing, magnetic field to boost the energy of the electrons in
the secondary coil. This occurs when the primary of the transformer is connected
to an AC source. When AC is applied to the input of a transformer, the primary
coil produces a pulsing magnetic field. It is this pulsing magnetic field that pumps
the electrons through the secondary coil. An electron in the secondary coil gains a
specific amount of energy each time it goes around one loop, or turn, on the coil.
Therefore, the total energy gained by an electron as it passes through the second¬
ary coil is proportional to the number of turns on the coil. Since the energy of an
electron is directly related to voltage, it follows that the output voltage from a
transformer is proportional to the number of turns on the secondary coil.
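The turns-ratio behavior can be sketched for an ideal transformer (a simplification that ignores losses; the function name is illustrative):

```python
# Ideal transformer: voltage scales with the secondary-to-primary turns
# ratio, current scales inversely, and power in equals power out.

def transform(v_primary: float, i_primary: float, turns_ratio: float):
    """turns_ratio = secondary turns / primary turns."""
    v_secondary = v_primary * turns_ratio
    i_secondary = i_primary / turns_ratio
    return v_secondary, i_secondary

# 1,000:1 step-up: 120 V at 1 A in gives 120,000 V at 1 mA out
v, i = transform(120.0, 1.0, 1000.0)
print(f"{v/1000:.0f} kV at {i*1000:.0f} mA")  # 120 kV at 1 mA
assert abs(120.0 * 1.0 - v * i) < 1e-9        # energy is conserved
```

This matches the 1,000:1 example above: 1 A in the primary per 1 mA in the secondary.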
The Autotransformer
In most x-ray apparatus, it is desirable to change the voltage (KV) applied to the
tube to accommodate clinical needs. This is generally done by using the type of
transformer known as an autotransformer.
RECTIFICATION
The output voltage from the high voltage transformer is AC and changes polar¬
ity 60 times
per second (60 Hz). If this voltage were applied to an x-ray tube, the
anode would be positive with respect to the cathode only one half of the time.
During the other half of the voltage cycle, the cathode would be positive and
would tend to attract electrons from the anode. Although the anode does not emit
electrons unless it is very hot, this reversed voltage is undesirable. A circuit is
needed that will take the voltage during one half of the cycle and reverse its polar¬
ity, as illustrated in Figure 8-4. This procedure is called rectification.
Rectifiers
Rectifiers are electronic devices that function much like the valves in veins,
which permit blood to flow in one direction but not the other. In fact, in some
countries, rectifiers are referred to as valves. Earlier x-ray equipment used vacuum
tube rectifiers, but most rectifiers are now solid state.
Rectifier Circuits
Notice that the circuit in Figure 8-4 has two input points, to which the incoming
voltage from the transformer is applied, and two output points, across which the
rectified output voltage will appear and be applied to the tube.

[Figure 8-4: Bridge rectifier circuit between the transformer and the x-ray tube,
showing the direction of electron flow]

The circuit contains four rectifiers, labeled a, b, c, and d. Electrons (current) can
flow through a rectifier only in the direction indicated by the arrow. The waveform
shown indicates
the polarity of the lower terminals with respect to the upper. The operation of this
circuit can be easily understood by considering the following sequence of events.
During the first half of the voltage cycle, the upper transformer terminal is nega¬
tive, and the electrons flow into the rectifier circuit at that point. From there, they
flow only through rectifier a and on to the x-ray tube. They enter the tube at the
cathode terminal, leave by means of the anode, and return by the lower conductor
to the rectifier circuit. At that point, it would appear they have two possible path¬
ways to follow. They flow, however, only through rectifier d because the lower
transformer terminal is positive and is more attractive than the upper negative
terminal. During this part of the voltage cycle, rectifiers b and c do not conduct.
During the second half of the cycle, the polarity of the voltage from the trans¬
former is reversed, and the lower terminal is negative. The electrons leave the
transformer at this point and pass through rectifier c and on to the cathode.
Electrons leaving the x-ray tube by means of the lower conductor pass through
rectifier b because of the attraction of the upper transformer terminal, which is
then positive.
Full-Wave
In effect, the rectifier circuit takes an alternating polarity voltage and reverses
one half of it so that the outcoming voltage always has the same polarity. In this
Energizing and Controlling the X-Ray Tube 117
particular circuit, the cathode of the x-ray tube always receives a negative voltage
with respect to the anode. This circuit,
consisting of four rectifier elements con¬
nected as shown, is known as a
bridge rectifier. Since it makes use of all of the
voltage waveform, it is classified as a full-wave rectifier.
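The effect of full-wave rectification can be sketched by taking the absolute value of a 60-Hz sine wave; this is an idealized model that ignores rectifier voltage drops, and the function name is illustrative:

```python
# Full-wave (bridge) rectification flips the negative half of the AC
# cycle so the tube always sees the same polarity: v_out = |v_in|.
import math

def rectified_kv(kvp: float, t: float, freq: float = 60.0) -> float:
    """Instantaneous output of an ideal full-wave rectifier fed by a sine."""
    return abs(kvp * math.sin(2 * math.pi * freq * t))

# Two pulses per cycle: for 60 Hz, peaks occur at 1/240 s and 3/240 s
print(round(rectified_kv(100, 1 / 240), 1))   # 100.0 (first half-cycle)
print(round(rectified_kv(100, 3 / 240), 1))   # 100.0 (flipped second half)
```

A half-wave rectifier, by contrast, would simply output zero during the second half of each cycle.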
Half-Wave
A rectifier circuit can have
only one rectifier element. The disadvantage is that
it conducts during only one half of the cycle. This type is classified as a half-wave
rectifier. This type of rectification is found in some smaller
x-ray machines, such
as those used in
dentistry. In such apparatus, the x-ray tube itself often serves as
the rectifier.
With single-phase equipment, the KV and the rate of x-ray production
change constantly with time throughout the cycle. The output from the tube is a
spectrum of photon energies that is an average of all instantaneous spectra.
Three principal KV values are associated with the typical single-phase wave¬
form. Each is related to an aspect of x-ray production. At any instant in time, the
pulsing KV has an instantaneous value (KVi), which determines the rate of x-ray
production at each specific instant. During each cycle, the KV reaches a maximum
or peak value (KVP). It is the KVP that is set by the operator as a control on x-ray
production.
Figure 8-5 Relationship of KV Peak, Effective, and Instantaneous Values for a Single-
Phase Generator
The significance of KVe is that it determines the rate at which heat is produced in
the x-ray tube.
Some x-ray generators produce a constant KV; in these cases the KVP, KVe, and
KVi have the same value. These generators are called constant potential apparatus.
The constant potential x-ray machine produces more photons with higher average
or effective energy than are produced by the single-phase machine, as shown in
Figure 8-6.
The rate at which exposure is delivered to the receptor varies significantly with
time for single-phase equipment, as shown in Figure 8-6. Most of the exposure is
produced during a small portion of the voltage cycle, when the voltage is near the
KVP value. Several factors contribute to this effect. One is that the efficiency of
x-ray production increases with voltage and gives more exposure per milliampere-
second at the higher voltage levels. Second, the photons produced at the higher
tube voltages have higher average energies and are more penetrating. Third, the
MA also changes with time during the voltage cycle.
When an x-ray machine is set at a certain MA value, the stated value is usually
the average throughout the exposure time. In single-phase equipment, the MA
value changes significantly during the voltage cycle. The effect is that the x-ray
exposure is delivered to the receptor in a series of pulses. Between the pulses is a
period of time during which no significant exposure is delivered. This means, gen¬
erally, that the total exposure time must be longer for single-phase than for con¬
stant potential x-ray equipment, which can deliver a given film exposure in a much
shorter total exposure time.
Three-Phase
One of the most practical means of obtaining essentially constant voltage and high average current is to use three-phase electrical power. The concept of three-phase electricity can best be understood by considering it as three separate incoming power circuits, as shown in Figure 8-7. Although this illustration shows six conductors coming in, this is not necessary in reality because the power lines can be shared by the circuits. Each circuit, or phase, delivers a voltage that can be transformed and rectified in the conventional manner. The important characteristic of a three-phase power system is that the waveforms, or cycles, in one circuit are out of step, or phase, with those in the other two. This means that the voltages in the three circuits peak at different times. In an actual circuit, the three voltage waveforms are combined, as shown in Figure 8-7. They are not added, but are combined so that the output voltage at any instant is equal to that of the highest phase at that time. Since the voltage drops only a few percent before it is picked up by another phase, the KV at all times is quite close to the KVP.
The voltage variation over the period of a cycle is designated the ripple and is
expressed as a percentage. The typical ripple levels for several power supply types
are shown in Figure 8-8. One way to classify power supply circuits is according to the number of pulses they produce in the period of one cycle, ie, 1/60th of a second. By using a complex circuit of transformers and rectifiers, it is possible to produce a 12-pulse machine that has a ripple level of less than 4%.
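To illustrate the "highest phase at each instant" combination and the resulting ripple, the sketch below numerically estimates the ripple of an idealized six-pulse (full-wave-rectified three-phase) supply. It is an illustration only, not a description of any particular generator, and it ignores transformer and rectifier losses.

```python
import math

def six_pulse_ripple(samples=100000):
    """Ripple (%) of an ideal full-wave-rectified three-phase supply:
    the output at each instant is the highest of the three rectified
    phase voltages, which are 60 degrees apart after rectification."""
    vmax, vmin = 0.0, 1.0
    for i in range(samples):
        t = math.pi * i / samples  # one half-cycle covers the pattern
        v = max(abs(math.sin(t + k * math.pi / 3)) for k in range(3))
        vmax, vmin = max(vmax, v), min(vmin, v)
    return 100.0 * (vmax - vmin) / vmax

print(round(six_pulse_ripple(), 1))  # about 13.4 for an ideal 6-pulse supply
```

Applying the same approach with six phase-shifted waveforms gives the much smaller ripple of a 12-pulse circuit, consistent with the less-than-4% figure quoted above.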
CAPACITORS
In some x-ray machines, capacitors are used to accumulate and store electrical energy; in other types of equipment, they are used as a filtering device to produce constant potential KV.
A capacitor consists of two electrical conductors, such as metal foil, separated
by a layer of insulation.
Energizing and Controlling the X-Ray Tube 121
Capacitor Principles
The basic function of a capacitor is illustrated in Figure 8-9. A capacitor can be
described as a storage tank for electrons. When it is connected to a voltage source,
electrons flow into the capacitor, and it becomes "charged." As the electrons flow
in, the voltage of the capacitor increases until it reaches the voltage of the supply.
Energy is actually stored in the capacitor when it is charged; the amount stored is
proportional to the voltage and the quantity of stored electrons. If a charged capacitor is connected to another circuit, the capacitor becomes the source, and the electrons flow out and into the circuit.
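The stored energy can be quantified with the standard capacitor-energy relation E = CV²/2 (a general physics sketch, not a formula given in the text):

```python
def capacitor_energy_j(capacitance_f, voltage_v):
    """Energy stored in a charged capacitor: E = 0.5 * C * V^2,
    with capacitance in farads and voltage in volts."""
    return 0.5 * capacitance_f * voltage_v ** 2

# A 1 microfarad capacitor charged to 70 kV stores roughly:
print(capacitor_energy_j(1e-6, 70e3))  # about 2450 joules
```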
Energy Storage
In the discussion of the high voltage transformer, it was pointed out that the
current flowing into the power supply circuit must be greater than the tube current
by a factor equal to the voltage step-up ratio. This is typically as high as 1,000:1, which would require 1 A of power line current for every 1 mA of tube current. Capacitor storage avoids this heavy demand on the power line: the capacitor is charged with a small current over a relatively long period, and the charging times can be as long as 10 seconds to 20 seconds. The current flow into the capacitor is typically only a few milliamperes; when it is discharged to the tube over a short period of time, ie, the exposure time, the current can be several hundred milliamperes.
The voltage across a capacitor is proportional to the quantity of electrons stored
(MAS); the actual relationship depends on the size, or capacity, of the capacitor.
Many machines use 1-microfarad (µF) capacitors, which produce a voltage of 1 kV
for each milliampere-second stored. As the electrons flow from the capacitor to
the tube, the voltage drops at the rate of 1 kV/mAs. For example, if a machine is
charged to 70 kV, and an exposure of 18 mAs is made, the voltage will have
dropped to 52 kV at the end of the exposure.
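The 1 kV-per-mAs discharge rule in the example can be written as a small helper (a sketch assuming the 1 µF capacitor described above; the function name is ours, not the book's):

```python
def kv_after_exposure(initial_kv, mas, capacitance_uf=1.0):
    """Tube kilovoltage remaining on a storage capacitor after an
    exposure: V = Q/C, so a 1 uF capacitor drops 1 kV per mAs."""
    return initial_kv - mas / capacitance_uf

print(kv_after_exposure(70, 18))  # 52.0, matching the example above
```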
Capacitor-storage x-ray equipment therefore has a high-voltage waveform unlike that of other power supplies: the kilovoltage falls steadily during the exposure. An attempt to obtain large milliampere-second exposures drops the kilovoltage to very low values by the time the exposure terminates. Since low tube voltages produce very little film exposure, but increase patient exposure, this type of operation should be avoided. The total MAS should generally be limited to approximately one third of the initial KV value.
A means for turning the tube current on and off is included in the x-ray tube
circuit. Most machines use a grid-control x-ray tube for this purpose.
Filtration
In this application, the capacitor is permanently connected between the rectifier circuit and the x-ray tube. As the voltage rises toward its peak, electrons from the rectifier circuit flow both to the x-ray tube and into the capacitor. When the voltage from the rectifier circuit begins to fall, electrons flow out of the capacitor and into the x-ray tube. Within certain operating limits, this action smooths the voltage variations and produces an essentially constant potential across the tube.
MA CONTROL
The x-ray tube current is determined by the cathode temperature, and the cathode is kept at a reduced temperature except for the duration of the x-ray exposure. Most x-ray equipment operates with two levels of cathode heating. When the equipment is turned on, the cathode is heated to a standby level that should not produce significant evaporation. Just before the actual exposure is initiated, the cathode temperature is raised to a value that will give the appropriate tube current. In most radiographic equipment, this function is controlled by the same switch that activates the anode rotor. Unnecessarily maintaining the cathode at full operating temperature can significantly shorten the x-ray tube lifetime.
Although it is true that the x-ray tube current is primarily controlled by cathode
temperature, there are conditions under which it is influenced by the applied KV.
At low KV values, some of the electrons emitted from the cathode are not attracted
to the anode and form a space charge. In effect, this build-up of electrons in the vicinity of the cathode repels electrons at the cathode surface and reduces emission. Under this condition, the x-ray tube current is said to be space-charge limited. This can be especially significant at low KV values, such as those used in mammography. The effect can be reduced by locating the cathode and anode closer together.
As the KV is increased, the space charge decreases, and the x-ray tube current
rises to a value limited by the cathode emission. At that point the tube is said to be
saturated. Many x-ray machines contain a compensation circuit to minimize the
effect. The compensation is activated by the KV selector. As the KV is adjusted to
higher values, the compensation circuit causes the cathode temperature to be decreased. The lower emission compensates for the decreased space charge.
The x-ray tube current can be read or monitored by a meter located in the high voltage circuit; it must be placed in the part of the circuit that is near ground voltage, or potential. This permits the meter to be located on the control console without extensive high-voltage insulation.
EXPOSURE TIMING
Another function of the generator is to control the duration of the x-ray exposure. In radiography, the exposure is initiated by the equipment operator and then terminated either after a preset time has elapsed or when the receptor has received a prescribed amount of radiation. Operator-controlled switches and timers turn the radiation on and off by activating switching devices in the primary circuit of the x-ray generator.
Manual Timers
X-ray equipment with manual timers requires the operator to set the exposure time before initiating the exposure. The time is determined by personal knowledge, or from a technique chart, after the size of the patient and the KV and MA values being used are considered.
With all x-ray equipment, the operator can control the quantity and quality (penetrating ability) of the radiation with the KV, MA, and exposure-time controls. If the equipment is not properly calibrated, or is subject to periodic malfunction, it will not be possible to control the radiation output. This can result in reduced image quality and unnecessary patient exposure, especially when repeat exposures are required.

Chapter 9

X-Ray Tube Heating and Cooling

The operator must be aware of the quantity of heat being produced and its relationship to the heat capacity of the x-ray tube. Figure 9-1 identifies the factors that affect both heat production and heat capacity.
HEAT PRODUCTION
Heat is produced in the focal spot area by the bombarding electrons from the cathode. Since only a small fraction of the electron energy is converted into x-radiation, it can be ignored in heat calculations. We will assume all of the electron energy is converted into heat. In a single exposure, the quantity of heat produced is proportional to the KVP, the MA, the exposure time, and a waveform factor, w.

Figure 9-1 Factors That Determine the Amount of Heat Produced and the Three Areas of an X-Ray Tube That Have Specific Heat Capacities

The value of the waveform factor depends on the shape of the kilovoltage waveform; among the waveforms encountered in diagnostic x-ray machines, it is highest for constant potential, 1.0, slightly lower for three-phase, 12-pulse, and lower still for single-phase.
The rate at which heat is produced in a tube is equivalent to the electrical power
and is given by
Power (watts) = w × KVP × MA.
The total heat delivered during an exposure, in joules or watt-seconds, is the product of the power and the exposure time.
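As a sketch of this calculation (w is taken as 1.0 for constant potential; other waveform factors are machine-dependent):

```python
def tube_heat_joules(kvp, ma, time_s, w=1.0):
    """Total heat in joules (watt-seconds) for one exposure:
    power (watts) = w * KVP * MA, heat = power * exposure time."""
    return w * kvp * ma * time_s

# 80 kVp, 500 mA, 0.1 s on a constant potential generator:
print(round(tube_heat_joules(80, 500, 0.1)))  # 4000 joules
```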
HEAT CAPACITY
Temperature is the physical quantity associated with an object that indicates its
relative heat content. Temperature is specified in units of degrees. Physical
changes, such as melting, boiling, and evaporation, are directly related to an
object's temperature rather than its heat content.
For a given object, the relationship between temperature and heat content involves a third quantity, heat capacity, which is a characteristic of the object. The temperature rise produced by a given quantity of heat is inversely related to the object's heat capacity. In an object with a large heat capacity, the temperature rise is smaller than in one with a small heat capacity. In other words, the same quantity of heat produces different temperatures in objects with different heat capacities.

Heat flows out of the focal spot area into the anode body and then to the tube housing; heat is also transferred, by radiation, from the anode body to the tube housing. Heat is removed from the tube housing by transfer to the surrounding
atmosphere. When the tube is in operation, heat generally flows into and out of the three areas identified in Figure 9-1. Damage can occur if the heat content of any area exceeds its maximum heat capacity.
The maximum heat capacity of the focal spot area, or track, is the major limiting
factor with single exposures. If the quantity of heat delivered during an individual
exposure exceeds the track capacity, the anode surface can melt, as shown in Figure 9-4. The capacity of a given focal spot track is generally specified by the
manufacturer in the form of a curve, as shown in Figure 9-5. This type of curve
shows the maximum power (KV and MA) that can be delivered to the tube for a
given exposure time without producing overload. Graphs of this type are generally
designated tube rating charts. From this graph, it is seen that the safe power limit
of a tube is inversely related to the exposure time. This is not surprising, since the
total heat developed during an exposure is the product of power and exposure
time. It is not only the total amount of heat delivered to the tube that is crucial, but
also the time in which it is delivered.
X-ray tubes are often given single power ratings. By general agreement, an exposure time of 0.1 second is used for specifying a tube's power rating. Although this does not describe a tube's limitations at other exposure times, it does provide a means of comparing tubes and operating conditions.
A number of different factors determine the heat capacity of the focal spot track.
The focal spot track is the surface area of the anode that is bombarded by the
electron beam. In stationary anode tubes, it is a small area with dimensions of a
few millimeters. In the rotating anode tube, the focal spot track is much larger because of the movement of the anode with respect to the electron beam. Figure 9-6 shows a small section of a rotating anode.
The small focal spot is generally used at relatively low power (KV and MA) settings. The large focal spot is used when the machine must be operated at power levels that exceed the rated capacity of the small focal spot. The specified size of an x-ray tube focal spot is the dimensions of the effective, or projected, focal spot shown in Figure 9-6. Notice that the actual focal spot, the area bombarded by the electron beam, is always larger than the projected, or effective, focal spot. For a given anode angle, the width of the focal spot track is directly proportional to the size of the projected spot. The relationship between heat capacity and specified focal spot size is somewhat different. In many tubes, doubling the focal spot size increases the power rating by a factor of about 3.
Figure 9-4 A Rotating Anode Damaged by Overheating
X-Ray Tube Heating and Cooling
Figure 9-5 Rating Curves for an X-Ray Tube Operated under Different Conditions
(The rating chart plots KVP, from 50 to 130, on the vertical axis against maximum exposure time in seconds, from 0.001 to 20, on the horizontal axis, with a separate curve for each MA value.)
Anode Angle
The actual relationship between focal spot width (and heat capacity) and the size of the projected focal spot is determined by the anode angle. Anode angles generally range from about 7° to 20°. For a given effective focal spot size, the
Figure 9-6 Section of a Rotating Anode Showing Relationship of Focal Spot Track to Electron Beam and Anode Angle
track width and heat capacity are inversely related to anode angle. Although anodes with small angles give maximum heat capacity, they have specific limitations with respect to the area that can be covered by the x-ray beam. X-ray intensity usually drops off significantly toward the anode end because of the heel effect. In tubes with small angles, this is more pronounced and limits the size of the useful beam. Figure 9-7 shows the nominal field coverage for different anode angles. The x-ray tube anode angle should be selected by a compromise between heat capacity and field of coverage.
Coils located around the neck of the x-ray tube form the stator of the motor. When the coils are energized from the power line, the rotor spins. The speed of rotation is determined by the frequency of the applied current. When the stator coils are operated from the 60-Hz power line, the speed of rotation is approximately 3,000 rpm. By using a power supply that produces 180-Hz current, rotation speeds of approximately 10,000 rpm can be obtained. This is commonly referred to as high-speed rotation.
Figure 9-7 Variation of X-Ray Intensity Because of the Anode Heel Effect
The effective length of the focal spot track is proportional to the speed of rotation for a given exposure time. High-speed rotation simply spreads the heat over a longer track.
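The proportionality between rotation speed and effective track length can be illustrated numerically (the 90-mm track diameter and the exposure values below are hypothetical, not from the text):

```python
import math

def swept_track_length_mm(track_diameter_mm, rpm, exposure_s):
    """Track length swept under the electron beam in one exposure:
    rotations during the exposure times the track circumference."""
    rotations = (rpm / 60.0) * exposure_s
    return rotations * math.pi * track_diameter_mm

print(round(swept_track_length_mm(90, 3000, 0.1)))   # 1414 mm
print(round(swept_track_length_mm(90, 10000, 0.1)))  # 4712 mm
```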
Kilovoltage Waveform
Another factor that affects the heat capacity of the focal spot track is the waveform of the kilovoltage. Single-phase power delivers energy to the anode in pulses, as shown in Figure 9-8. Three-phase and constant potential generators deliver the heat at an essentially constant rate, as indicated. Figure 9-8 compares the approximate temperature distribution along the focal spot track for the two modes of operation. The pulses of single-phase operation produce momentary hot spots along the track. These hot spots exceed the temperature produced by an equal amount of three-phase energy. When an x-ray tube is operated from a single-phase power supply, the maximum power must be less than for constant potential operation to keep the hot spots from exceeding the critical temperature. In other words, constant potential operation increases the effective focal spot track heat capacity and rating of an x-ray tube.
Figure 9-8 Approximate Distribution of Temperature along the Focal Spot Track for Single-Phase and Three-Phase Operation

The effect of kilovoltage waveform on tube rating should not be confused with the effect of waveform on heat production, which was discussed earlier. Three separate factors are involved:

1. Constant potential operation produces x-rays more efficiently.
2. The radiation produced has greater penetrating ability.
3. Constant potential operation produces more heat for a given KVP and MAS setting.
The real advantage of constant potential operation is related to the first two factors. Because of the increased efficiency of x-ray production, and the increased penetrating ability of the radiation, a lower KVP or MAS value is required to produce a given film exposure. This more than compensates for the increased heat production: an x-ray tube can be operated at a higher power level when the power is supplied from a three-phase or constant potential power supply, and it will also produce radiation more efficiently.
A rating chart for an x-ray tube operated at different waveforms and rotation
speeds is shown in Figure 9-5. The highest power capacity is obtained by using
three-phase power and high-speed rotation; notice that the real advantage occurs
at relatively short exposure times. As exposure time is increased, overlapping of
the focal spot track and the diffusion of heat make the difference in power capacity
much less significant.
The actual rating charts supplied by an x-ray tube manufacturer are shown in Figure 9-5. It is common practice for each of the four operating conditions (waveform and speed) to be on a separate chart. Each chart contains a number of different curves, each representing a different MA value. The vertical scale on such a
rating chart is KVP. A chart of this type is still a power rating chart. Each combination of KVP and MA represents a constant power value. Such a chart is easier to
use, since it is not necessary to calculate the power. The rating chart is used by the
operator to determine if the technical factors, KVP, MA, and exposure time, for a
given exposure will exceed the tube's rated capacity.
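In code form, the check an operator performs against a rating chart reduces to a power comparison; the rated limit below is a hypothetical number for illustration, not a value read from Figure 9-5:

```python
def exposure_power_w(kvp, ma):
    """Electrical power delivered to the tube for a KVP/MA combination."""
    return kvp * ma

def within_rating(kvp, ma, rated_power_w):
    """True if the technique stays at or below the rated power limit
    for the chosen exposure time (as read from a rating chart)."""
    return exposure_power_w(kvp, ma) <= rated_power_w

print(within_rating(100, 400, 50000))  # True: 40 kW against a 50 kW limit
```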
Most rotating anode tubes contain two focal spots. As mentioned previously,
the size of the focal spot significantly affects the heat capacity. Remember that a
given x-ray tube has a number of different rating values, depending on focal spot
size, rotation speed, and waveforms. Some typical values are shown in Table 9-1.
ANODE BODY
The heat capacity of the focal spot track is generally the limiting factor for
single exposures. In a series of radiographic exposures, CT scanning, or fluoroscopy, the build-up of heat in the anode can become significant. Excessive anode temperature can crack or warp the anode disc. The heat capacity of an anode is generally described graphically, as shown in Figure 9-9. This set of curves, describing the thermal characteristics of an anode, conveys several important pieces
of information. The maximum heat capacity is indicated on the heat scale. The
heating curves indicate the build-up of heat within the anode for various energy
input rates. These curves apply primarily to the continuous operation of a tube,
such as in CT or fluoroscopy. For a given x-ray tube, there is a critical input rate
that can cause the rated heat capacity to be exceeded after a period of time. This is
generally indicated on the graph. If the heat input rate is less than this critical
value, normal cooling prevents the total heat content from reaching the rated
capacity.
The cooling curve can be used to estimate the cooling time necessary between
sets of exposures. Suppose a rapid sequence of exposures has produced a heat
Table 9-1 Heat Rating (in Joules) for a Typical X-Ray Tube, for Single-Phase and Three-Phase Operation, for an Exposure Time of 0.1 sec and Focal Spot Sizes of 0.7 mm and 1.5 mm
Figure 9-9 Heating and Cooling Curves for an X-Ray Tube Anode (heat content in heat units versus time in minutes, with the rated anode capacity of 150,000 HU indicated)
input of 90,000 HU. This is well over 50% of the anode storage capacity. Before a
similar sequence of exposures can be made, the anode must cool to a level at
which the added heat will not exceed the maximum capacity. For example, after
an initial heat input of 90,000 HU, a cooling time of approximately 3.5 minutes
will decrease the heat content to 30,000 HU. At this point, another set of exposures can be made without exceeding the maximum capacity. When anode heating is the limiting factor, a higher scan rate can be obtained by operating the anode with the highest safe heat content, since the cooling rate is higher for a hot anode and more scans can be obtained in a specific time than with a cool anode. Most CT systems have a
display that shows the anode heat content as a percentage of the rated capacity.
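The cooling behavior described above can be approximated with a simple exponential model, fitted so that 90,000 HU decays to 30,000 HU in 3.5 minutes. This is an illustrative assumption (cooling rate proportional to heat content, which is consistent with hot anodes cooling faster), not a manufacturer's curve:

```python
import math

# Time constant fitted to the example: 90,000 HU -> 30,000 HU in 3.5 min.
TAU_MIN = 3.5 / math.log(90000 / 30000)

def cooling_time_min(start_hu, target_hu):
    """Minutes for the anode to cool from start_hu to target_hu,
    assuming exponential cooling with time constant TAU_MIN."""
    return TAU_MIN * math.log(start_hu / target_hu)

print(round(cooling_time_min(90000, 30000), 1))  # 3.5
```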
The anodes in most radiographic equipment are cooled by the natural radiation
of heat to the surrounding tube enclosures. However, anodes in some high-
powered equipment, such as that used in CT, are cooled by the circulation of oil
through the anode to a heat exchanger (radiator).
Anode damage can occur if a high-powered exposure is produced on a cold anode. It is generally recommended that tubes be warmed up by a series of low-power exposures before operation at full power.
TUBE HOUSING
The third heat capacity that must be considered is that of the tube housing. Excessive heat in the housing can rupture the oil seals, or plugs. Like the anode, the housing capacity places a limitation on the extended use of the x-ray tube, rather than on individual exposures. Since the housing is generally cooled by the movement of air, or convection, its effective capacity can be increased by using forced-air circulation.
The housing heat capacity is much larger than that of the anode and is typically
over 1 million HU. The time required for a housing to dissipate a given quantity of
heat can be determined with cooling charts supplied by the manufacturer.
SUMMARY
The heat characteristics of x-ray tubes should be considered when tubes are
selected for specific applications and should be used as a guide to proper tube
operation.
Chapter 10

Interaction of Radiation with Matter

X-ray photons are created by the interaction of energetic electrons with matter at the atomic level. Photons (x-ray and gamma) end their lives by transferring their energy to electrons within matter. The interaction of x-ray photons with the structure of the human body produces the image; the interaction of photons with the receptor converts an x-ray or gamma image into one that can be viewed or recorded. This chapter considers the basic interactions between x-ray and gamma photons and matter.
INTERACTION TYPES
Photon Interactions
Recall that photons are individual units of energy. As an x-ray beam or gamma
radiation passes through an object, three possible fates await each photon, as
shown in Figure 10-1:
1. It can penetrate the section of matter without interacting.
2. It can interact with the matter and be completely absorbed by depositing its
energy.
3. It can interact and be scattered or deflected from its original direction and
deposit part of its energy.
There are two kinds of interactions through which photons deposit their energy; both are with electrons. In one type of interaction the photon loses all of its energy; in the other, it loses a portion of its energy, and the remaining energy is scattered. These two interactions are shown in Figure 10-2.
Figure 10-1 Photons Entering the Human Body Will Either Penetrate, Be Absorbed, or
Produce Scattered Radiation
Photoelectric

In a photoelectric interaction, the photon transfers all of its energy to an electron, which is ejected from the atom and moves through the surrounding matter. The electron rapidly loses its energy and moves only a relatively short distance from its original location. The photon's energy is, therefore, deposited in the matter close to the site of the photoelectric interaction. The energy transfer is a two-step process. The photoelectric interaction in which the photon transfers its energy to the electron is the first step. The depositing of the energy in the surrounding matter by the electron is the second step.
Photoelectric interactions usually occur with electrons that are firmly bound to the atom, that is, those with a relatively high binding energy. Photoelectric interactions are most probable when the electron binding energy is only slightly less than the energy of the photon. If the binding energy is more than the energy of the photon, a photoelectric interaction cannot occur. This interaction is possible only when the photon has sufficient energy to overcome the binding energy and remove the electron from the atom.
The photon's energy is divided into two parts by the interaction. A portion of
the energy is used to overcome the electron's binding energy and to remove it
Figure 10-2 The Two Basic Interactions between Photons and Electrons
from the atom. The remaining energy is transferred to the electron as kinetic energy and is deposited near the interaction site. Since the interaction creates a vacancy in one of the electron shells, typically the K or L, an electron moves down to
fill in. The drop in energy of the filling electron often produces a characteristic
x-ray photon. The energy of the characteristic radiation depends on the binding
energy of the electrons involved. Characteristic radiation initiated by an incoming
photon is referred to as fluorescent radiation. Fluorescence, in general, is a process
in which some of the energy of a photon is used to create a second photon of less
energy. This process sometimes converts x-rays into light photons. Whether the
fluorescent radiation is in the form of light or x-rays depends on the binding energy levels in the absorbing material.
Compton
A Compton interaction is one in which only a portion of the photon's energy is absorbed and a photon is produced with reduced energy. This scattered photon leaves the site of the interaction in a direction different from that of the incident photon, as shown in Figure 10-2. Because of the change in photon direction, this type of interaction is classified as a scattering process. In effect, a portion of the incident radiation is deflected from the primary beam, and the scattering object becomes a secondary radiation source. The most significant object producing scattered radiation in an x-ray
procedure is the patient's body. The portion of the patient's body that is within the
primary x-ray beam becomes the actual source of scattered radiation. This has two
undesirable consequences. The scattered radiation that continues in the forward
direction and reaches the image receptor decreases the quality (contrast) of the
image; the radiation that is scattered from the patient is the predominant source of
radiation exposure to the personnel conducting the examination.
Coherent Scatter

In coherent scatter, the photon is deflected from its original direction with essentially no loss of energy; it deposits no significant energy in the material.

Pair Production

Pair production is an interaction between a photon and a nucleus in which the photon energy is converted into an electron-positron pair; it is possible only for photon energies above 1.02 MeV and therefore does not occur at diagnostic photon energies.
Electron Interactions
The interaction and transfer of energy from photons to tissue has two phases. The first is the "one-shot" interaction between the photon and an electron, in which all or a significant part of the photon energy is transferred; the second is the transfer of energy from the energized electron as it moves through the tissue. This occurs as a series of interactions, each of which transfers a relatively small amount of energy.
Several types of radioactive transitions produce electron radiation, including beta radiation, internal conversion (IC) electrons, and Auger electrons. These radiation electrons interact with matter (tissue) in a manner similar to that of electrons produced by photon interactions.
In photoelectric interactions, the energy of the electron is equal to the energy of
the incident photon less the binding energy of the electron within the atom. In
Compton interactions, the relationship of the electron energy to that of the photon
depends on the angle of scatter and the original photon energy. The electrons set free by these interactions have kinetic energies ranging from relatively low values up to values approaching the energy of the incident photons.
Electron Range
The total distance an electron travels in a material before losing all its energy is generally referred to as its range. The two factors that determine the range are (1) the initial energy of the electrons and (2) the density of the material. One important characteristic of electron interactions is that all electrons of the same energy have the same range in a specific material, as illustrated in Figure 10-4. The general relationship between electron range and energy is shown in Figure 10-5. The
curve shown is the range for a material with a density of 1 g/cm3, which is approximately the density of soft tissue.

Figure 10-4 The Range of Electrons with the Same Initial Energies

The range in another material can be estimated by dividing the range given in Figure 10-5 by the density of the material. Let us now apply this procedure to determine the range of 300-keV beta particles in air. (Air has a density of 0.00129 g/cm3.) From Figure 10-5 we see that a 300-keV electron has a range of 0.76 mm in a material with a density of 1 g/cm3. When this value is divided by the density of air, the result is a range of approximately 590 mm, or 59 cm.
Figure 10-5 Relationship of Electron Range to Initial Energy in a Material with a Density of 1 g/cm3 (Soft Tissue)
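The density-scaling procedure can be expressed as a one-line helper (a sketch; the function name is ours, not the book's):

```python
def electron_range_mm(range_at_unit_density_mm, density_g_cm3):
    """Electron range in a material, estimated by dividing the range
    at density 1 g/cm3 (from Figure 10-5) by the material's density."""
    return range_at_unit_density_mm / density_g_cm3

# 300-keV electrons: 0.76 mm at 1 g/cm3, scaled to air (0.00129 g/cm3):
print(round(electron_range_mm(0.76, 0.00129)))  # 589 mm, about 59 cm
```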
The rate at which an electron deposits energy along its path is described by the linear energy transfer (LET), expressed in keV per micrometer; values in soft tissue for several electron energies are given below.
Electron Energy (keV)    LET (keV/µm)
1000                     0.2
100                      0.3
10                       2.2
1                        12.0
The effectiveness of radiation in producing biological damage is often related to the LET of the radiation. The actual relationship of the efficiency in producing damage to LET values depends on the biological effect considered. For some effects, the efficiency increases with an increase in LET, for some it decreases, and for others it increases up to a point and then decreases with additional increases in LET. For a given biological effect, there is an LET value that
produces an optimum energy concentration within the tissue. Radiation with lower LET values does not produce an adequate concentration of energy. Radiations with higher LET values tend to deposit more energy than is needed to produce the effect; this tends to waste energy and decrease efficiency.
Positron Interactions
Recall that a positron is the same size as an electron, but has a positive charge.
It is also different from the electron in that it is composed of what is referred to as antimatter. This leads to a type of interaction that is quite different from the interactions of electrons.

The interaction between a positron and matter occurs in two phases, as illustrated in Figure 10-6: ionization and annihilation. As the energetic positron passes through matter, it interacts with the atomic electrons by electrical attraction. As the positron moves along, it pulls electrons out of the atoms and produces ionization. A small amount of energy is lost by the positron in each interaction. In general, this phase is not unlike the interaction of an energetic electron, except that the positron pulls electrons toward its path as it races by, whereas an electron pushes electrons away from its path. When the positron has lost most of its kinetic energy and is coming to a stop, it comes into close contact with an electron and enters into an annihilation interaction.
The annihilation process occurs when the antimatter positron combines with the conventional-matter electron. In this interaction, the masses of both particles are completely converted into energy according to the relationship E = mc²; the result is two 511-keV photons that leave the annihilation site in opposite directions. Since the range of the positron in matter is relatively short, the site of interaction is always very close to the location of the radioactive nuclei.
Attenuation
As a photon makes its way through matter, there is no way to predict precisely either how far it will travel before engaging in an interaction or the type of interaction it will engage in. In clinical applications we are generally not concerned with the fate of an individual photon but rather with the collective interaction of the large number of photons. In most instances we are interested in the overall rate at which photons interact as they make their way through a specific material.
Let us observe what happens when a group of photons encounters a slice of
material that is 1 unit thick, as illustrated in Figure 10-7. Some of the photons
interact with the material, and some pass on through. The interactions, either photoelectric or Compton, remove some of the photons from the beam in a process known as attenuation. Under specific conditions, a certain percentage of the photons will interact, or be attenuated, in a 1-unit thickness of material.
The linear attenuation coefficient is the fraction of photons interacting per 1-unit thickness of material. In our example the fraction that interacts in the 1-cm thickness is 0.1, or 10%, and the value of the linear attenuation coefficient is 0.1 per cm.
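The 10%-per-centimeter example is consistent with the standard exponential attenuation law, sketched below; the exponential form is standard physics, though it is not written out explicitly in the passage above:

```python
import math

def surviving_fraction(mu_per_cm, thickness_cm):
    """Fraction of photons penetrating without interacting:
    exponential attenuation, exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# mu = 0.1 per cm: close to 10% of photons interact in the first cm.
print(round(1 - surviving_fraction(0.1, 1), 3))  # 0.095
```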
Linear attenuation coefficient values indicate the rate at which
photons interact
as they move through material and are inversely related to the average distance
photons travel before interacting. The rate at which photons interact (attenuation
coefficient value) is determined by the energy of the individual photons and the
atomic number and density of the material.
The amount of material in the photon path can also be expressed as the mass per unit of surface area, as shown in Figure 10-8. The area mass is the product of material thickness and density:

Area Mass (g/cm²) = Thickness (cm) × Density (g/cm³).
The mass attenuation coefficient is the rate of photon interactions per 1-unit (g/
cm2) area mass.
Figure 10-8 compares two pieces of material with different thicknesses and
densities but the same area mass. Since both attenuate the same fraction of pho¬
tons, the mass attenuation coefficient is the same for the two materials. They do
not have the same linear attenuation coefficient values.
The relationship between the mass and linear attenuation coefficients is
Mass Attenuation Coefficient (μ/ρ) =
Linear Attenuation Coefficient (μ)/Density (ρ).
Notice that the symbol for the mass attenuation coefficient (μ/ρ) is derived from the symbol for the linear attenuation coefficient (μ) and the symbol for density (ρ).
We must be careful not to be misled by the relationship stated in this manner.
Confusion often arises as to the effect of material density on attenuation coeffi¬
cient values. Mass attenuation coefficient values are actually normalized with re¬
spect to material density, and therefore do not change with changes in density.
Material density does have a direct effect on linear attenuation coefficient values.
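A short sketch can make the distinction concrete. Assuming simple exponential attenuation and a hypothetical mass attenuation coefficient of 0.2 cm²/g, two slabs with different densities and thicknesses but the same area mass attenuate identically, while their linear coefficients differ:

```python
import math

MU_OVER_RHO = 0.2  # hypothetical mass attenuation coefficient, cm^2/g

def linear_coefficient(density_g_cm3):
    """Linear attenuation coefficient (per cm) = (mu/rho) x density."""
    return MU_OVER_RHO * density_g_cm3

def penetration(density_g_cm3, thickness_cm):
    """Fraction of photons penetrating a slab (exponential attenuation)."""
    return math.exp(-linear_coefficient(density_g_cm3) * thickness_cm)

# Same area mass (2 g/cm^2) from different density/thickness combinations,
# as in Figure 10-8:
p_thick_light = penetration(1.0, 2.0)   # 1 g/cm^3, 2 cm thick
p_thin_dense = penetration(2.0, 1.0)    # 2 g/cm^3, 1 cm thick
print(p_thick_light, p_thin_dense)      # identical penetration fractions
print(linear_coefficient(1.0), linear_coefficient(2.0))  # linear mu differs
```

Doubling the density doubles the linear coefficient but leaves the mass coefficient unchanged, which is the normalization described above.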
Interaction of Radiation with Matter 151
The total attenuation rate depends on the individual rates associated with photoelectric and Compton interactions. The respective attenuation coefficients are related as follows:
μ (total) = μ (photoelectric) + μ (Compton).
In Chapter 4 we observed that the electrons with binding energies within the en¬
ergy range of diagnostic x-ray photons were the K-shell electrons of the intermedi¬
ate- and high-atomic-number materials. Since an atom can have, at the most, two
152 Physical Principles of Medical Imaging
electrons in the K shell, the majority of the electrons are located in the other shells
and have relatively low binding energies.
Photoelectric Rates
The probability, and thus attenuation coefficient value, for photoelectric inter¬
actions depends on how well the photon energies and electron binding energies
match, as shown in Figure 10-9. This can be considered from two perspectives.
In a specific material with a fixed binding energy, a change in photon energy
alters the match and the chance for photoelectric interactions. On the other hand,
with photons of a specific energy, the probability of photoelectric interactions is
affected by the atomic number of the material, which changes the binding energy.
The rate of photoelectric interactions is strongly dependent on the energy of the photon and its relationship to the binding
energy of the electrons. Figure 10-10 shows the relationship between the attenua¬
tion coefficient for iodine (Z = 53) and photon energy. This graph shows two
significant features of the relationship. One is that the coefficient value, or the
probability of photoelectric interactions, decreases rapidly with increased photon
energy. It is generally said that the probability of photoelectric interactions is in¬
versely proportional to the cube of the photon energy (1/E³). This general relationship can be used to compare the photoelectric attenuation coefficients at two different photon energies. The significant point is that the probability of photoelectric interactions occurring in a given material drops drastically as the photon energy is increased.
Figure 10-9 The Relationship between Material Atomic Number and Photon Energy That Enhances the Probability of Photoelectric Interactions
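Under the approximate 1/E³ rule just quoted (which holds only away from absorption edges), the relative photoelectric attenuation at two energies reduces to a one-line function; the energy values below are illustrative:

```python
def photoelectric_ratio(e1_kev, e2_kev):
    """Approximate ratio of the photoelectric attenuation coefficient at
    energy e2 relative to its value at e1, using the 1/E^3 rule.
    Valid only away from the K and L absorption edges."""
    return (e1_kev / e2_kev) ** 3

# Doubling the photon energy cuts the photoelectric coefficient to 1/8:
print(photoelectric_ratio(30.0, 60.0))   # 0.125
```

The cube makes the falloff dramatic: a modest increase in kilovoltage can remove most of the photoelectric contribution to attenuation.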
The other important feature of the attenuation coefficient-photon energy rela¬
tionship shown in Figure 10-10 is that it changes abruptly at one particular energy:
the binding energy of the shell electrons. The K-electron binding energy is 33 keV
for iodine. This feature of the attenuation coefficient curve is generally designated
as the K, L, or M edge. The reason for the sudden change is apparent if it is re¬
called that photons must have energies equal to or slightly greater than the binding
energy of the electrons with which they interact. When photons with energies less
than 33 keV pass through iodine, they interact primarily with the L-shell electrons.
They do not have sufficient energy to eject electrons from the K shell, and the
probability of interacting with the M and N shells is quite low because of the
relatively large difference between the electron-binding and photon energies.
However, photons with energies slightly greater than 33 keV can also interact with
the K-shell electrons. This means that there are now more electrons in the material that are available for interactions. This produces a sudden increase in the attenuation coefficient at the K-shell energy. In the case of iodine, the attenuation coefficient abruptly jumps from a value of 5.6 below the K edge to a value of 36, or
increases by a factor of more than 6.
A similar change in the attenuation coefficient occurs at the L-shell electron
binding energy. For most elements, however, this is below 10 keV and not within
the useful portion of the x-ray spectrum.
Photoelectric interactions occur at the highest rate when the energy of the x-ray photons is slightly above the binding energy of the electrons; the probability increases as the binding energies move closer to the photon energy. The general relationship is that the
probability of photoelectric interactions (attenuation coefficient value) is proportional to Z³. In general, the conditions that increase the probability of photoelectric interactions are low photon energies and high-atomic-number materials.
Compton Rates
Compton interactions can occur with the very loosely bound electrons. All elec¬
trons in low-atomic-number materials and the majority of electrons in high-
atomic-number materials are in this category. The characteristic of the material
that affects the probability of Compton interactions is the number of available
electrons. It was shown earlier that all materials, with the exception of hydrogen,
have approximately the same number of electrons per gram of material. Since the
concentration of electrons in a given volume is proportional to the density of the
materials, the probability of Compton interactions is proportional only to the
physical density and not to the atomic number, as in the case of photoelectric
interactions. The major exception is in materials with a significant proportion of
hydrogen. In these materials with more electrons per gram, the probability of
Compton interactions is enhanced.
Although the chances of Compton interactions decrease slightly with photon
energy, the change is not so rapid as for photoelectric interactions, which are in¬
versely related to the cube of the photon energy.
Direction of Scatter
There is no way in which the angle of scatter for a specific photon can be predicted. However, there
are certain directions that are more
probable and that will occur with a greater
frequency than others. The factor that can alter the overall scatter direction pattern
is the energy of the original
photon. In diagnostic examinations, the most signifi¬
cant scatter will be in the forward direction. This would be an
angle of scatter of
only a few degrees. However, especially at the lower end of the energy spectrum,
there is a significant amount of scatter in the reverse direction, ie, backscatter. For
the diagnostic photon energy range, the number of
photons that scatter at right
angles to the primary beam is in the range of one-third to one-half of the number
that scatter in the forward direction.
Increasing primary photon energy causes a
general shift of scatter to the forward direction. However, in diagnostic proce¬
dures, there is always a significant amount of back- and sidescatter radiation.
In a Compton interaction, the energy of the original photon is divided between the scattered secondary photon and the electron with which it interacts. The
electron's kinetic energy is quickly absorbed by the material along its path. In
other words, in a Compton interaction, part of the original photon's energy is ab¬
sorbed and part is converted into scattered radiation.
The manner in which the energy is divided between scattered and absorbed
radiation depends on two factors—the angle of scatter and the energy of the origi¬
nal photon. The relationship between the energy of the scattered radiation and the
angle of scatter is a little complex and should be considered in two steps. The
photon characteristic that is specifically related to a given scatter angle is its
change in wavelength. It should be recalled that a photon's wavelength (λ) and energy (E) are inversely related as given by:
E (keV) = 12.4/λ (Å).
Since photons lose energy in a Compton interaction, the wavelength always in¬
creases. The relationship between the change in a photon's wavelength, Δλ, and the angle of scatter is given by:
Δλ (Å) = 0.0243 (1 - cos θ)
where θ is the angle of scatter.
For a relatively high energy photon, a wavelength change of, say, 0.0243 Å represents a larger energy change than it would for a lower energy photon. All photons scattered at an angle of 90 degrees will undergo a wavelength change of 0.0243 Å. The change in energy associated with 90-degree scatter is not the same for all photons and depends on their original energy. The
change in energy can be found as follows. For a 110-keV photon, the wavelength is 0.1127 Å. A scatter angle of 90 degrees will always increase the wavelength by 0.0243 Å. Therefore, the wavelength of the scattered photon will be 0.1127 plus 0.0243, or 0.1370 Å. The energy of a photon with this wavelength is 91 keV. The 110-keV photons will lose 19 keV, or 17%, of their energy in the scattering process.
Lower energy photons lose a smaller percentage of their energy.
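The worked example can be reproduced from the two relationships above. The 12.4 keV·Å conversion constant matches the text's 110-keV ↔ 0.1127-Å pairing; note that the text rounds the 90-degree result to 91 keV:

```python
import math

def scattered_energy_kev(e0_kev, angle_deg):
    """Energy of a Compton-scattered photon via the wavelength-shift rule:
    E(keV) = 12.4 / wavelength(angstroms),
    delta-lambda = 0.0243 * (1 - cos(scatter angle))."""
    wavelength = 12.4 / e0_kev                                    # angstroms
    wavelength += 0.0243 * (1 - math.cos(math.radians(angle_deg)))
    return 12.4 / wavelength

e_scattered = scattered_energy_kev(110.0, 90.0)
print(round(e_scattered, 1))   # about 90.5 keV (the text rounds this to 91)

loss_percent = 100 * (110.0 - e_scattered) / 110.0
print(round(loss_percent, 1))  # roughly 17-18% of the original energy
```

Evaluating the same function at smaller angles or lower energies confirms the text's point: forward scatter and low-energy photons lose a smaller fraction of their energy.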
COMPETITIVE INTERACTIONS
The combination of photoelectric and Compton interactions produces the overall attenuation of the x-ray beam. We now consider the factors
that determine which of the two interactions is most likely to occur in a given
situation.
The energy at which interactions change from predominantly photoelectric to Compton is a function of the atomic number of the material. Figure 10-11 shows
this crossover energy for several different materials. At the lower photon energies,
photoelectric interactions are much more predominant than Compton. Over most
of the energy range, the probability of both decreases with increased energy. However, the decrease in photoelectric interactions is much greater. This is because the photoelectric rate is inversely proportional to the cube of the photon energy. In higher atomic number materials, photoelectric interactions are generally more probable, and they predominate up to higher photon energy levels. The conditions that cause photoelectric interactions to predominate over Compton are the same conditions that enhance photoelectric interactions, that is, low photon energies and high-atomic-number materials.
Figure 10-11 Comparison of Photoelectric and Compton Interaction Rates for Different Materials and Photon Energies
The total attenuation coefficient value for materials involved in x-ray and
gamma interactions can vary tremendously if photoelectric interactions are in¬
volved. A minimum value of approximately 0.15 cm²/g is established by Compton interactions. Photoelectric interactions can cause the total attenuation to increase to very high values. For example, at 30 keV, lead (Z = 82) has a mass attenuation coefficient of 30 cm²/g.
Chapter 11
Radiation Penetration
One of the characteristics of x- and gamma radiation that makes them useful for
medical imaging is their penetrating ability. When they are directed into an object,
some of the photons are absorbed or scattered, whereas others completely penetrate the object. The penetration can be expressed as the fraction of radiation
passing through the object. Penetration is the inverse of attenuation. The amount
of penetration depends on the energy of the individual photons and the atomic
number, density, and thickness of the object, as illustrated in Figure 11-1.
The probability of photons interacting, especially with the photoelectric effect,
is related to their energy. Increasing photon energy generally decreases the probability of interaction and increases penetration.
PHOTON RANGE
Figure 11-1 Factors That Affect the Penetration of Radiation through a Specific Object
As photons travel through matter, some interact in the first layer of material encountered, whereas others penetrate it and interact in succeeding layers.
In a given situation a group of photons have different individual
ranges which,
when considered together, produce an average range for the group. The average
range is the average distance traveled by the photons before they interact. Very
few photons travel a distance exactly equal to the
average range. The average
range of a group of photons is inversely related to the attenuation rate. Increasing
the rate of attenuation by changing photon energy or the type of material decreases
the average range of photons. Actually, the average photon range is equal to the reciprocal of the attenuation coefficient (μ):
Average Range (cm) = 1/Attenuation Coefficient (cm⁻¹).
Therefore, the average distance (range) that photons penetrate a material is deter¬
mined by the same factors that affect the rate of attenuation: photon energy, type
of material (atomic number), and material density.
Average photon range is a useful concept for visualizing the penetrating charac¬
teristics of radiation photons. It is, however, not the most useful parameter for
measuring and calculating the penetrating ability of radiation.
Radiation Penetration 161
Half value layer (HVL) is the most frequently used factor for describing both
the penetrating ability of specific radiations and the penetration through specific
objects. HVL is the thickness of material penetrated by one half of the radiation
and is expressed in units of distance (mm or cm).
Increasing the penetrating ability of a radiation increases its HVL. HVL is re¬
lated to, but not the same as, average photon range. There is a difference between
the two because of the exponential characteristic of x-ray attenuation and penetration. The specific relationship is
HVL = 0.693/μ.
(Illustration: with μ = 0.1 per cm, of 1,000 photons entering a material, the numbers interacting in each succeeding 1-cm layer over the first 12 cm are 100, 90, 81, 73, 65, 59, 53, 47, 43, 38, 34, and 31.)
This shows that the HVL is inversely proportional to the attenuation coefficient.
The number, 0.693, is the exponent value that gives a penetration of 0.5 (e^-0.693 = 0.5).
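Both the average range and the HVL follow directly from the attenuation coefficient. A short sketch, reusing the 0.1-per-cm example value from earlier in the chapter (ln 2 ≈ 0.693 supplies the constant):

```python
import math

def average_range_cm(mu_per_cm):
    """Average photon range: the reciprocal of the linear attenuation coefficient."""
    return 1.0 / mu_per_cm

def hvl_cm(mu_per_cm):
    """Half value layer: HVL = 0.693 / mu, since e**-0.693 = 0.5."""
    return math.log(2) / mu_per_cm

mu = 0.1  # per cm, the example value used earlier in the chapter
print(average_range_cm(mu))    # 10.0 cm
print(round(hvl_cm(mu), 2))    # 6.93 cm
```

The comparison shows the exponential correction at work: the HVL is 0.693 times the average range, not equal to it.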
Any factor that changes the value of the attenuation coefficient also changes the
HVL. These two quantities are compared for aluminum in Figure 11-3. Aluminum
has two significant applications in an x-ray system. It is used as a material to filter
x-ray beams and as a reference material for measuring the penetrating ability
(HVL) of x-rays. The value of the attenuation coefficient decreases rather rapidly
with increased photon energy and causes the penetrating ability to increase.
Figure 11-4 illustrates an important aspect of the HVL concept. If the penetra¬
tion through a thickness of 1 HVL is 0.5 (50%), the penetration through a thick¬
ness of 2 HVLs will be 0.5 x 0.5 or 25%. Each succeeding layer of material with a
thickness of 1 HVL reduces the number of photons by a factor of 0.5. The relation¬
ship between penetration (P) and thickness of material that is n half value layers
thick is
P = (0.5)ⁿ.
Figure 11-3 Relationship between Attenuation Coefficient and HVL for Aluminum
Figure 11-4 Relationship between Penetration and Object Thickness Expressed in HVLs
An example using this relationship is determining the penetration through lead
shielding. Photons of 60 keV have an HVL in lead of 0.125 mm. The problem is to
determine the penetration through a lead shield that is 0.5 mm thick. At this par¬
ticular photon energy, 0.5 mm is 4 HVLs, and the penetration is
n = Thickness/HVL = 0.5/0.125 = 4
P = (0.5)⁴ = 0.0625.
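The lead-shield calculation generalizes to a two-line function; the values below are the text's example (60-keV photons, for which the HVL in lead is 0.125 mm):

```python
def penetration_fraction(thickness, hvl):
    """P = 0.5**n, where n = thickness / HVL (both in the same length unit)."""
    n = thickness / hvl
    return 0.5 ** n

# Text's example: a 0.5-mm lead shield is 4 HVLs thick for 60-keV photons.
print(penetration_fraction(0.5, 0.125))   # 0.0625
```

Because n need not be a whole number, the same function handles fractional HVL thicknesses, which is how penetration curves like Figure 11-4 are computed.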
Penetration values are often plotted on a logarithmic scale (by using semilogarithmic graph paper) so that the resulting graph is essentially a straight line.
The general term "quality" refers to an x-ray beam's penetrating ability. It has
been shown that, for a given material, the penetrating ability of an x-ray beam
depends on the energy of the photons. Up to this point, the discussion has related
penetration to specific photon energies.
Figure 11-5 Factors That Affect the Thickness of 1 HVL
Figure 11-6 Procedures for Determining the HVL of an X-Ray Beam
For x-ray beams that contain a spectrum
of photon energies, the penetration is different for each
energy. The overall pen¬
etration generally corresponds to the penetration of a
photon energy between the
minimum and maximum energies of the
spectrum. This energy is designated the
effective energy of the x-ray spectrum as shown in Figure 11-7. The effective
energy of an x-ray spectrum is the energy of a monoenergetic beam of photons that
has the same penetrating ability (HVL) as the spectrum of
photons.
The effective energy is generally close to 30% or 40% of
peak energy, but its
exact value depends on the shape of the spectrum. For a
given KVP, two factors
that can alter the spectrum are the amount of filtration in the beam and the
high
voltage waveform used to produce the x-rays.
FILTRATION
Figure 11-7 X-Ray Spectra (50 kVp and 100 kVp) with Effective Energies Indicated
Figure 11-8 shows how penetration through soft tissue varies with photon energy. In the range of 10 keV to 25 keV, penetration rapidly increases with energy. As photon energy increases to about 40 keV, penetration increases, but much
more gradually. Of special interest is the very low penetrating ability of x-ray
photons with energies below approximately 20 keV. At this energy, the penetra¬
tion through 1 cm of tissue is 0.45, and the penetration through 15 cm of tissue is
P = (0.45)15 = 0.0000063.
Figure 11-8 Penetration of Soft Tissue and Aluminum for Various Photon Energies
The filtration is not always in the form of aluminum because several objects contribute to x-ray beam
filtration: the x-ray tube window, the x-ray beam collimator mirror, and the table
top in fluoroscopic equipment. The total amount of filtration in a given x-ray ma¬
chine is generally specified in terms of an equivalent aluminum thickness.
The addition of filtration significantly alters the shape of the x-ray spectrum, as
shown in Figure 11-9. Since filtration selectively absorbs the lower energy pho¬
tons, it produces a shift in the effective energy of an x-ray beam. Figure 11-9
compares an unfiltered spectrum to spectra that passed through 1-mm and 3-mm
filters. It is apparent that increasing the filtration from 1 mm to 3 mm of aluminum shifts the spectrum further toward the higher energies. It is generally assumed that if an x-ray beam has the minimum specified HVL value at a stated KVP, the filtration is adequate.
Up to this point, the x-ray photons that penetrate an object were assumed to be
those that had escaped both photoelectric and Compton interactions. In situations
in which Compton interactions are significant, it is necessary to modify this con¬
cept because some of the radiation removed from the primary beam by Compton
interactions is scattered in the forward direction and adds to the radiation penetrating the object. The effective penetration, Pe, is
Pe = P x S
where S is the scatter factor. Its value ranges from 1 (no scatter) to approximately
6 for conditions encountered in some diagnostic examinations.
Several factors contribute to the amount of radiation scattered in the forward
direction and hence to the value of S. One of the most significant factors is the
x-ray beam area, or field size. Since the source of the scattered radiation is the
volume of the patient within the primary x-ray beam, the source size is propor¬
tional to the beam area. Within limits, the value of S increases from a value of 1,
Table 11-1 Recommended Minimum Penetration (HVL) for Various KVP Values
KVP    Minimum HVL (mm of aluminum)
30     0.3
50     1.2
70     1.5
90     2.5
110    3.0
Figure 11-10 Scattered Radiation Adds to the Primary Radiation That Penetrates an
Object
more or less, in proportion to field size. Another important factor is body section
thickness, which affects the size of the scattered radiation source. A third signifi¬
cant factor is KVP. As the KVP is increased over the diagnostic range, several
changes occur. A greater proportion of the photons that interact with the body are
involved in Compton interactions, and a greater proportion of the photons created
in Compton interactions scatters in the forward direction. Perhaps the most sig¬
nificant factor is that the scattered radiation produced at the higher KVP values is
more penetrating. A larger proportion of it leaves the body before being absorbed.
When the scattered radiation is more penetrating, there is a larger effective source
within the patient. At low KVP values, most of the scattered radiation created near
the entrance surface of the x-ray beam does not penetrate the body; at higher KVP
values, this scattered radiation contributes more to the radiation passing through
the body.
PENETRATION VALUES
diagnostic imaging.
Chapter 12
X-Ray Image Formation and Contrast
There are two basic ways to create images with x-radiation. One method is to
pass an x-ray beam through the body section and project a shadow image onto the
receptor. The second method, used in CT, employs a digital computer to calculate
(reconstruct) an image from x-ray penetration data. CT image formation is dis¬
cussed in Chapter 23. At this time, we consider only projection imaging, which is
the basic process employed in conventional radiography and fluoroscopy.
The contrast that ultimately appears in the image is determined by many factors,
as indicated in Figure 12-1. In addition to the penetration characteristics to be discussed in this chapter, these include the contrast characteristics of the receptor and the effect of scattered radiation.
CONTRAST TYPES
Several types of contrast are encountered during x-ray image formation. The
formation of a visible image involves the transformation of one type of contrast to
another at two stages in the image-forming process, as shown in Figure 12-2.
Radiographic Contrast
Object Contrast
For an object to be visible in an x-ray image, it must have physical contrast in
relationship to the tissue or other material in which it is embedded. This contrast
can be a difference in physical density or chemical composition (atomic number).
radiograph. The third factor that affects object contrast is its thickness in the direc¬
tion of the x-ray beam. Object contrast is proportional to the product of object
density and thickness. This quantity represents the mass of object material per unit
area (cm2) of the image. For example, a thick (large diameter) vessel filled with
diluted iodine contrast medium and a thin (small diameter) vessel filled with undi¬
luted medium will produce the same amount of contrast if the products of the
diameters and iodine concentrations (densities) are the same.
The chemical composition of an object contributes to its contrast only if its
effective atomic number (Z) is different from that of the surrounding tissue. Rela¬
tively little contrast is produced by the different chemical compositions found in
soft tissues and body fluids because the effective atomic number values are close
together. The contrast produced by a difference in chemical composition (atomic
number) is quite sensitive to photon energy (KVP).
Most materials that produce high contrast with respect to soft tissue differ from
the soft tissue in both physical density and atomic number. The physical character¬
istics of most materials encountered in x-ray imaging are compared in Table 12-1.
Figure 12-2 The Two Stages of Contrast Transformation: object contrast is converted to subject contrast by x-ray penetration (affected by filtration and KV), and subject contrast is converted to radiographic image contrast by the receptor (affected by film contrast and exposure)
Subject Contrast
The contrast in the invisible image emerging from the patient's body is tradi¬
tionally referred to as subject contrast. Subject contrast is the difference in expo¬
sure between various points within the image area.
For an individual object, the significant contrast value is the difference in exposure between the object area and its surrounding background. This exposure difference is greatest when no radiation penetrates the object. Metal objects (lead bullets, rods, etc.) are good examples.
Contrast is reduced as x-ray penetration through the object increases. When object
penetration approaches the penetration through an equal thickness of surrounding
tissue, contrast disappears.
The amount of subject contrast produced is determined by the physical contrast
characteristics (atomic number, density, and thickness) of the object and the penetrating characteristics of the x-ray beam.
Image Contrast
The contrast in a visible radiographic image depends on the subject contrast delivered to the receptor and the contrast transfer characteristics of the film, which are discussed in Chapter 16.
The contrast in a visible
fluoroscopic image is in the form of brightness ratios
between various points within the image area. The amount of contrast in a fluoro¬
scopic image depends on the amount of subject contrast entering the receptor sys¬
tem and the characteristics and adjustments of the
components (image intensifier
tube, video, etc.) of the imaging system. The contrast transfer characteristics of a
fluoroscopic system are discussed in Chapter 20.
Object penetration and the resulting contrast often depend on the photon energy
spectrum. This, in turn, is determined by three factors: (1) x-ray tube anode mate¬
rial, (2) x-ray beam filtration, and (3) KV. Since most x-ray examinations are
performed with tungsten anode tubes, the first factor cannot be used to adjust con¬
trast. The exception is the use of molybdenum anode tubes in mammography.
Most x-ray machines have essentially the same amount of filtration, which is a few
millimeters of aluminum. Two exceptions are molybdenum filters used with mo¬
lybdenum anode tubes in mammography and copper or brass filters, sometimes
used in chest radiography.
In most procedures, KVP is the only photon-energy controlling factor that can
be changed by the operator to alter contrast. Radiographic examinations are per¬
formed with KVP values ranging from a low of approximately 25 kVP, in
mammography, to a high of approximately 140 kVP, in chest imaging. The selec¬
tion of a KV for a specific imaging procedure is generally governed by the contrast
requirement, but other factors, such as patient exposure (Chapters 17 and 33) and
x-ray tube heating (Chapter 9), must be considered.
Both photoelectric and Compton interactions contribute to the formation of im¬
age contrast. It was shown in Chapter 10 that the rate of Compton interactions is
primarily determined by tissue density and depends very little on either tissue
atomic number or photon energy. On the other hand, the rate of photoelectric in¬
teractions is very dependent on the atomic number of the material and the energy
of the x-ray photons. This means that when contrast is produced by a difference in
the atomic numbers of an object and the surrounding tissue, the amount of contrast
is very dependent on photon energy (KVP). If the contrast is produced by a differ¬
ence in density (Compton interactions), it will be relatively independent of photon
energy. Changing KVP produces a significant change in contrast when the condi¬
tions are favorable for photoelectric interactions. In materials with relatively low
atomic numbers (ie, soft tissue and body fluids), this change is limited to relatively
low KVP values. However, the contrast produced by higher atomic number mate¬
rials such as calcium, iodine, and barium, has a KVP dependence over a much
wider range of KVP values.
Two basic factors tend to limit the amount of contrast that can be produced
between types of soft tissue and between soft tissue and fluid. One factor is the
small difference in the physical characteristics (density and atomic number)
among these materials, as shown in Table 12-1, and the second factor is the rela¬
tively low number of photoelectric interactions because of the low atomic num¬
bers.
Calcium
Figure 12-4 shows the relationship between calcium penetration (contrast) and
photon energy. In principle, the optimum photon energy range (KVP) for imaging
calcium depends, to some extent, on the thickness of the object. When imaging
very small (thin) calcifications, as in mammography, a low photon energy must be
used or the contrast will be too low for visibility. When the objective is to see
through a large calcified structure (bone), relatively high photon energies (KVP)
must be used to achieve adequate object penetration.
The two chemical elements iodine and barium produce high contrast with re¬
spect to soft tissue because of their densities and atomic numbers. The signifi¬
cance of their atomic numbers (Z = 53 for iodine, Z = 56 for barium) is that they
cause the K-absorption edge to be located at a very favorable energy relative to the
typical x-ray energy spectrum. The K edge for iodine is at 33 keV and is at 37 keV
for barium. Maximum contrast is produced when the x-ray photon energy is
slightly above the K-edge energy of the material. This is illustrated for iodine in
X-Ray Image Formation and Contrast 177
Figure 12-5. A similar relationship exists for barium but is shifted up to slightly
higher photon energies.
Since the typical x-ray beam contains a rather broad spectrum of photon ener¬
gies, all of the energies do not produce the same level of contrast. In practice,
maximum contrast is achieved by adjusting the KVP so that a major part of the
spectrum falls just above the K-edge energy. For iodine, this generally occurs when the KVP is set in the range of 60-70.
AREA CONTRAST
An x-ray image often contains large areas that differ greatly in penetration, such as the mediastinum and the lungs in a chest image. If there is a relatively high level of contrast between areas within an image, then the contrast of objects within these areas can be reduced because of film
limitations. Three actions can be taken to minimize the problem. One is to use a
wide latitude film that reduces area contrast and improves visibility within the
individual areas in many situations; this is described in Chapter 16. A second ap¬
proach is to place compensating filters between the x-ray tube and the patient's
body. The filter has areas with different thicknesses and is positioned so that its
thickest part is over the thinnest, or least dense, part of the body. The overall effect
is a reduction in area contrast within the image. The third action is to use a very
penetrating x-ray beam produced by high KV.
Figure 12-7 compares chest radiographs made at two KVP values. The image on
the left was made at 60 KVP. Although it has high contrast between the mediastinum and lung areas, visibility of structures within these areas is diminished. The
high-KV radiograph, which has less area contrast, has increased object contrast,
especially within the lung areas.
Chapter 13
Scattered Radiation and Contrast
When an x-ray beam enters a patient's body, a large portion of the photons
engage in Compton interactions and produce scattered radiation. Some of this
scattered radiation leaves the body in the same general direction as the primary
beam and exposes the image receptor. The scattered radiation reduces image con¬
trast. The degree of loss depends on the scatter content of the radiation emerging
from the patient's body. In most radiographic and fluoroscopic procedures, the
major portion of the x-ray beam leaving the patient's body is scattered radiation.
This, in turn, significantly reduces contrast.
Subject contrast was previously defined as the difference in exposure to the
object area on a film expressed as a percentage of the exposure to the surrounding
background. Maximum contrast, ie, 100%, is obtained when the object area re¬
ceives no exposure with respect to the background. A previous chapter discussed
the reduction of subject contrast because of x-ray penetration through the object
being imaged. This chapter describes the further reduction of contrast by scattered
radiation.
CONTRAST REDUCTION
receptor, or film, is produced by radiation that penetrates the body adjacent to the
object plus the scattered radiation. For a given x-ray machine setting, the back¬
ground area exposure is proportional to PS, the product of the penetration through
Figure 13-1 Contrast Reduction by Scattered Radiation: without scatter (scatter factor = 1) the contrast is 100%/1 = 100%; with a scatter factor of 4 it is reduced to 100%/4 = 25%
Scattered Radiation and Contrast 183
the patient and the scatter factor. For the same exposure conditions, the exposure
to the
object area is proportional to P (S - 1 ). By combining these expressions for
relative background and object area exposure, it can be shown that the contrast is
inversely related to the value of the scatter factors, as follows:
Cs (%) = 100/S.
This relationship shows that as the proportion of scattered radiation in the x-ray beam increases, contrast proportionally decreases. For example, if the scatter factor has a value of 4, the contrast between the object and background areas will be reduced to 25%. In other words, the object area exposure is 75% of the exposure reaching the surrounding background. The contrast can also be determined as follows. The ratio of scattered to primary radiation is always S - 1. For a scatter factor value of 4, the scatter-primary ratio is 3. The background area exposure is, therefore, composed of one unit of primary and three units of scattered radiation. The object area receives only the three units of scattered radiation. This yields an exposure difference of one unit in four, or a contrast of 25%.
Figure 13-2 shows the general relationship between contrast and scatter factor. The value of the scatter factor is primarily a function of patient thickness, field size, and KVP. In examinations of relatively thick body sections, contrast reduction factors of 5 or 6 are common.
Figure 13-2 Relationship between Contrast Reduction and the Amount of Scatter
The final image contrast is the combined result of object penetration and scattered radiation. For example, if an object is 60% penetrated (40% contrast), and the scatter factor, S, has a value of 4, the final contrast will be 10%.
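These relationships can be checked with a small numeric sketch (plain Python; the function names are ours, not the book's):

```python
def contrast_with_scatter(scatter_factor):
    # Contrast remaining (percent) when the scatter factor is S: Cs = 100 / S
    return 100.0 / scatter_factor

def final_contrast(object_contrast_percent, scatter_factor):
    # Combined effect: the contrast from object penetration is further
    # divided by the scatter factor
    return object_contrast_percent / scatter_factor

print(contrast_with_scatter(4))   # 25.0 -> contrast reduced to 25%
print(final_contrast(40.0, 4))    # 10.0 -> 40% object contrast, S = 4
```

The second call reproduces the worked example above: a 60% penetrated object (40% contrast) imaged with a scatter factor of 4 yields 10% final contrast.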
Since scattered radiation robs an x-ray image of most of its contrast, specific actions must be taken to regain some of the lost contrast. Several methods can be used to reduce the effect of scattered radiation, but none is capable of restoring the full image contrast. The use of each scatter reduction method usually involves a compromise with other imaging factors.
COLLIMATION
AIR GAP
GRIDS
GRID PENETRATION
Figure 13-7 General Relationship between Radiation Penetration and Grid Ratio
The exposure (MAS) must be increased to compensate for the radiation absorbed by the grid. Patient exposure is directly proportional to the Bucky factor. For example, if a grid with a Bucky factor of 3 is replaced by one with a Bucky factor of 6, the exposure to the patient must be doubled to compensate for the additional grid absorption.
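The doubling in this example follows directly from the proportionality; a minimal sketch (function name is ours):

```python
def exposure_change(bucky_old, bucky_new):
    # Patient exposure is directly proportional to the Bucky factor,
    # so swapping grids scales the required exposure by the ratio.
    return bucky_new / bucky_old

print(exposure_change(3, 6))  # 2.0 -> exposure must be doubled
```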
Scatter Penetration
The relationship between the quantity of scattered radiation that passes through
the grid and the grid ratio can be visualized by referring to Figure 13-9. Consider
the exposure that reaches a point on the receptor located at the bottom of an
interspace. Since no radiation penetrates the lead strips, radiation can reach the
point on the receptor only from the directions indicated. The amount of radiation
reaching this point is generally proportional to the volume of the patient's body in
direct "view" from this point. As grid ratio is increased, this volume becomes
smaller, and the amount of radiation reaching this point is reduced. In effect, with
a high-ratio grid, each point on the receptor surface is exposed to a smaller portion of the patient's body, which is the source of scattered radiation. Using basic geometrical relationships, the theoretical penetration of scattered radiation through a grid can be estimated as a function of grid ratio.
Primary Penetration
Because of the presence of the lead strips, grids attenuate part of the primary
radiation. The penetration of primary radiation through the grid is generally in the
range of 0.6 to 0.7. This value depends on grid design and is generally inversely
related to grid ratio.
Contrast Improvement
As the grid ratio is increased, contrast improves. For relatively small amounts of scattered radiation, that is, S = 2, a grid ratio of 8:1 restores the contrast to 90%. The additional improvement in contrast with higher grid ratios is relatively small. It should be noticed, however, that even with high grid ratios, all contrast is not restored. When the proportion of scattered radiation in the beam is higher, for example, when S has a value of 6, the situation is significantly different. At each grid ratio value, the contrast is much less than for lower scatter factor values. Even with a high-ratio grid such as 16:1, the contrast is restored to only about 76%. This graph illustrates that contrast is not only a function of grid ratio, but is also determined by the quantity of scattered radiation in the beam, the value of S.
It might appear that the data in Figure 13-10 indicate that grids do not remove as much scattered radiation when the amount of scattered radiation in the beam is high.

Figure 13-11 Relationship of Contrast Improvement Factor to Scatter Factor and Grid Ratio

Figure 13-11 shows that the contrast improvement factor increases with both grid ratio and the quantity of scattered radiation in the beam, S. Although it is true that grids improve contrast by larger factors under conditions of high levels of scattered radiation, one significant fact should not be overlooked: the total restoration of contrast for a given grid is always less for the higher values of scattered radiation. This becomes apparent by comparing the value of the contrast improvement factor to the value of the contrast reduction factor, which is equal to the value of S. This expresses the ability of a grid under various scatter conditions to recover lost contrast. For example, in Figure 13-11 it is shown that when S is equal to 5 (contrast reduced to one fifth) a 16:1-ratio grid produces a contrast improvement factor of 4. The contrast recovery, K/S, is four fifths, or 80%. However, at a lower level of scattered radiation, such as S = 3, the same grid produces a contrast improvement factor of 2.7, which represents a contrast recovery of 2.7/3, or 90%.
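The contrast recovery calculation can be sketched numerically (the function name is ours; the K values are read from Figure 13-11):

```python
def contrast_recovery(k_improvement, s_scatter):
    # Fraction of the scatter-reduced contrast that the grid restores: K / S
    return k_improvement / s_scatter

# Values for a 16:1 grid, as in the text:
print(contrast_recovery(4.0, 5.0))            # 0.8 -> 80% recovery at S = 5
print(round(contrast_recovery(2.7, 3.0), 2))  # 0.9 -> 90% recovery at S = 3
```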
The relationship between the improvement in contrast and grid ratio strongly depends on the proportion of scattered radiation in the beam emerging from the patient's body. This, in turn, is a function of patient thickness, field size, and KVP. Under conditions that produce high scatter radiation values, a given grid improves contrast by a greater factor, but cannot recover as much contrast as is possible at lower scattered radiation levels.
Artifacts
Since the grid is physically located between the patient and the receptor, there is always a possibility that it will interfere with the formation of the image. This interference can be in the form of an image of the grid strips (lines) on the film, or the abnormal attenuation of radiation in certain portions of the field.
Grid Lines

Grid line artifacts are produced by radiation that is not aligned with the grid interspaces. It is desirable that the primary radiation from the x-ray tube focal spot pass through the grid with a minimum of absorption. Maximum grid penetration by primary radiation can occur only if the x-ray tube focal spot is located at the grid focal point. If these two points are not properly aligned, as shown in Figure 13-12, the direction of the primary radiation might be such that the radiation does not adequately penetrate certain sections of the grid.
Misalignment of the x-ray tube focal spot with respect to the focal point of the grid can be either lateral or vertical, or a combination of both. Lateral misalignment causes the x-ray beam to be misaligned with all interspaces, and grid penetration is decreased over the entire beam area. The amount of penetration reduction is related to the amount of misalignment and the grid ratio. Alignment becomes more critical for higher ratio grids. That is, the loss of grid penetration because of a specific misalignment is much greater for a high-ratio grid.
Vertical misalignment does not alter penetration in the center of the grid, but decreases penetration near the edges. The loss of penetration is related to the degree of misalignment and the grid ratio. The reduction in penetration for a given degree of misalignment increases with grid ratio. Focused grids are labeled with either a focal distance or a focal range, which should be carefully observed to prevent this type of grid cutoff. Cutoff toward the edges of the image area will also occur if a focused grid is turned upside down because the primary radiation will be
unable to penetrate except near the center. This produces an artifact similar to vertical misalignment but usually much more pronounced.

Figure 13-12 Two Forms of Grid Misalignment That Can Produce Artifacts
GRID SELECTION

A relatively low-ratio grid is appropriate for examinations in which maximum image contrast is not necessary. On the other hand, a 16:1-ratio grid provides the greatest contrast improvement at the cost of increased patient exposure and more critical alignment. Many applications are best served by grid ratio values between these two extremes. Such grids generally represent compromises between image quality and the other factors discussed.
Some grids have strips running at right angles to each other, generally designated crossed grids. This design generally increases contrast improvement but cannot be used in examinations in which the x-ray tube is tilted.

In stationary grid applications in which lines in the image are undesirable, grids with a high spacing density (lines per centimeter) can be used. An increase in the spacing density generally requires a higher ratio grid to produce the same contrast improvement.
Chapter 14
Radiographic Receptors
There are three basic types of radiographic receptors. In addition to the conventional type described in this chapter, there are the digital radiographic receptors described in Chapter 22 and the fluoroscopic systems that can also produce radiographic images.

The conventional receptor uses intensifying screens that absorb the x-ray energy and convert it into light. The light, in turn, exposes the film. Intensifying screens are used because film is much more sensitive to light than to x-radiation; approximately 100 times as much x-radiation would be required to expose a film without using intensifying screens, so screens greatly reduce the exposure required in the radiographic process.
A variety of intensifying screens is available for clinical use. The selection of a screen for a specific procedure is usually based on a compromise between the sensitivity (speed) and the image blur the screen produces. For procedures that require high image detail, such as mammography, one intensifying screen is used in conjunction with a single-emulsion film.
SCREEN FUNCTIONS
X-Ray Absorption
The first function performed by the intensifying screen is to absorb the x-ray beam (energy) emerging from the patient's body. The ideal intensifying screen would absorb all x-ray energy that enters it; real intensifying screens are generally not thick enough to absorb all of the photons. As we discuss later, increasing the thickness of an intensifying screen to increase its absorption capabilities degrades image quality.
In most cases, a significant portion of x-ray energy is not absorbed by the screen
material and penetrates the receptor. This is wasted radiation since it does not
contribute to image formation and film exposure. The absorption efficiency is the
percentage of incident radiation absorbed by the screen material. An ideal screen would have a 100% absorption efficiency; actual screens generally have absorption efficiencies in the range of 20 to 70%. Absorption efficiency is primarily
determined by three factors: (1) screen material, (2) screen thickness, and (3) the
photon energy spectrum.
Light Production
Although the total energy of the light emitted by a screen is much less than the total x-ray energy the screen receives, the light energy is much more efficient in exposing film because it is "repackaged" into a much larger number of photons. If we assume a 5% energy conversion efficiency, then one 50-keV x-ray photon can be converted into on the order of 1,000 light photons.
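That repackaging arithmetic can be sketched as follows (assumed values: 5% conversion efficiency and roughly 2.5 eV per emitted light photon; both are illustrative, not exact screen specifications):

```python
def light_photons_per_xray(xray_kev, efficiency=0.05, light_photon_ev=2.5):
    # Energy carried by the emitted light, in eV, divided by the
    # energy per light photon gives the number of light photons.
    light_energy_ev = xray_kev * 1000.0 * efficiency
    return light_energy_ev / light_photon_ev

print(light_photons_per_xray(50))  # 1000.0 light photons per 50-keV x-ray photon
```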
Exposure Reduction
RECEPTOR SENSITIVITY
Receptor sensitivity is generally expressed as the exposure required to produce a density of 1 unit above the base plus fog level. Some manufacturers do not provide sensitivity values for their receptor systems, but most provide speed values such as 100, 200, 400, etc. The speed scale compares the relative exposure requirements of different receptor systems. Most speed numbers are referenced to a so-called par speed system that is assigned a speed value of 100. Whereas sensitivity is a precise receptor characteristic that expresses the amount of exposure the receptor requires, speed is a less precise value used to compare film-screen combinations. There is, however, a general relationship between exposure requirements (sensitivity) and receptor speed values:
Speed    Sensitivity (mR)
 800          0.16
 400          0.32
 200          0.64
 100          1.28
  50          2.56
  25          5.0
  12         10.0
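The table follows an approximately reciprocal rule anchored to the par speed (100) system at 1.28 mR; a sketch (the constant is taken from the table above, and the table's last two entries are rounded values of this rule):

```python
def sensitivity_mr(speed):
    # Par speed (100) corresponds to about 1.28 mR; sensitivity
    # scales inversely with the speed value.
    return 1.28 * 100.0 / speed

for speed in (800, 400, 200, 100, 50):
    print(speed, round(sensitivity_mr(speed), 2))
```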
Most receptors are given a nominal speed rating by the manufacturer. The actual
speed varies, especially with KVP and film processing conditions.
The sensitivity (speed) of an intensifying screen-film receptor depends on the
type of screen and film used in addition to the conditions under which they are
used and the film is processed.
We now consider characteristics of the screen that contribute to its sensitivity.
Materials
Several compounds are used to make intensifying screens. The two major characteristics the material must have are (1) high x-ray absorption and (2) fluorescence. Because of their fluorescence, intensifying screen materials are often referred to as phosphors.
Soon after the discovery of x-rays, calcium tungstate became the principal material in intensifying screens and continued to be until the 1970s. At that time, a variety of new phosphor materials were developed; many contain one of the rare earth chemical elements. Phosphor compounds now used as intensifying screen materials include:

• barium fluorochloride
• yttrium oxysulfide
• lanthanum oxybromide
• lanthanum oxysulfide
• gadolinium oxysulfide.
The x-ray absorption of a screen material depends largely on the K-edge energy of its absorbing element relative to the x-ray beam spectrum. K-edge energy is, in turn, determined by the atomic number of the material. Calcium tungstate, the most common screen material for many years, uses tungsten as the absorbing element. The K edge of tungsten is at 69.4 keV. For most x-ray examinations, a major portion of the x-ray beam spectrum falls below this energy. For this reason, screens containing tungsten are limited with respect to x-ray absorption. Today, most intensifying screens contain either barium, lanthanum, gadolinium, or yttrium as the absorbing element. The K edge of these elements is below a major portion of the typical x-ray beam spectrum. This increases their absorption efficiency.
Spectral Characteristics

Intensifying screen materials differ in the spectrum (color) of the light they emit and are generally classified as either blue or green emitters. The significance of this is that a screen must be used with a film that has adequate sensitivity to the color of light the screen emits. Some radiographic films are sensitive only to blue light; others (orthochromatic) are also sensitive to green light. If screen and film spectral characteristics are not properly matched, receptor sensitivity is severely reduced.
Photon Energy

The sensitivity of intensifying screens varies with x-ray photon energy because sensitivity is directly related to absorption efficiency. Absorption efficiency and screen sensitivity are maximum when the x-ray photon energy is just above the K edge of the absorbing material. Each intensifying screen material generally has a different sensitivity-photon energy relationship because the K edge is at different energies.

The spectrum of photon energies within an x-ray beam is most directly affected and controlled by the KVP; the sensitivity and speed of a specific intensifying screen is not constant but changes with KVP. Significant exposure errors can occur if technical factors (KVP and MAS) are not adjusted to compensate for the variation in screen sensitivity. This often occurs when the same technique charts are used with screens composed of different materials.
Figure 14-3 Image Blur Produced by Fast, Medium, and Detail Intensifying Screens
IMAGE BLUR
The most significant effect of intensifying screens on image quality is that they produce blur. The reason for this is illustrated in Figure 14-3. Let us consider the imaging of a very small object, such as a calcification. The x-ray photons passing through the object are absorbed and produce light along the vertical path extending through the intensifying screen. Before exiting the screen, the light spreads out of the absorption path, as illustrated. The light image of the object that appears on the surface of the intensifying screen is therefore blurred; the degree of blurring by this process is related to the thickness and transparency of the intensifying screen.
The major issue in selecting intensifying screens for a particular clinical application is arriving at an appropriate compromise between patient exposure and image quality or, more specifically, between receptor sensitivity (speed) and image blurring (visibility of detail). Screens that produce maximum visibility of detail generally have a low absorption efficiency (sensitivity) and require a relatively high exposure. On the other hand, screens with a high sensitivity (speed) cannot produce images with high visibility of detail because of the increased blurring.
Intensifying screens are usually identified by brand names, which do not always
indicate specific characteristics. Most screens, however, are of five generic types:
1. mammographic
2. detail
3. par speed
4. medium speed
5. high speed.
Figure 14-4 shows how these general screen types fit into the relationship between image blur and required exposure.
Screen-Film Contact
If the film and intensifying screen surfaces do not make good contact, the light will spread, as shown in Figure 14-5, and will produce image blurring. This is an abnormal condition that occurs when a cassette or film changer is defective and does not apply sufficient pressure over the entire film area. Inadequate film-screen contact usually produces blurring in only a portion of the image area.
The conventional test for film-screen contact is to radiograph a wire mesh. Areas within the image where contact is inadequate will appear to have a different density than the other areas. This variation in image density is most readily seen when the film is viewed from a distance of approximately 10 ft and at an angle.
Figure 14-4 General Relationship between Image Blur and Sensitivity (Speed)
Crossover
If the film emulsion does not completely absorb the light from the intensifying screen, the unabsorbed light can pass through the film base and expose the emulsion on the other side. This is commonly referred to as crossover. As the light passes through the film base, it can spread and introduce image blur, as illustrated in Figure 14-5. Many modern film-screen receptor systems are designed to minimize crossover blurring. Crossover can be decreased by placing a light-absorbing layer between the film emulsion and film base, using a base material that selectively absorbs the light wavelengths emitted by the intensifying screens, and designing the film emulsion to increase light absorption.

Figure 14-5 Image Blur in a Double-Emulsion Film Produced by the Screen, Poor Contact, and Crossover
Halation
IMAGE NOISE
ally the most significant type of noise in radiographs. Intensifying screens with
high conversion efficiencies generally produce more quantum noise than other
screens for reasons discussed in Chapter 21. Also, the visibility of noise is de¬
ARTIFACTS

Chapter 15

The Photographic Process and Film Sensitivity

Most medical images are recorded on photographic film. The active component of film is an emulsion of radiation-sensitive crystals coated onto a transparent base.
FILM FUNCTIONS
Image Recording
Figure 15-2 The General Relationship between Film Density (Shades of Gray) and Exposure
The Photographic Process and Film Sensitivity 209
Image Display
Image Storage
Film has been the traditional medium for medical image storage. If a film is properly processed it will have a lifetime of many years and will, in most cases, outlast its clinical usefulness. The major disadvantages of storing images on film are bulk and inaccessibility. Most clinical facilities must devote considerable space to film storage. Retrieving films from storage generally requires manual search and transportation of the films to a viewing area.
Because film performs so many of the functions that make up the radiographic examination, it will continue to be an important element in the medical imaging process. Because of its limitations, however, it will gradually be replaced by digital imaging media in many clinical applications.
OPTICAL DENSITY
Light Penetration
The optical density of film is assigned numerical values related to the amount of
light that penetrates the film. Increasing film density decreases light penetration.
The relationship between density values and light penetration is exponential, as
illustrated in Figure 15-3.
A clear piece of film that allows 100% of the light to penetrate has a density value of 0. Radiographic film is never completely clear. The minimum film density is usually in the range of 0.1 to 0.2 density units. This is designated the base plus fog density and is the density of the film base and any inherent fog not associated with exposure.
Figure 15-3 Relationship between Film Density and Light Penetration
Each unit of density decreases light penetration by a factor of 10. A film area with a density value of 1 allows 10% of the light to penetrate and generally appears as a medium gray when placed on a conventional viewbox. A film area with a density value of 2 allows 10% of 10% (1.0%) light penetration and appears as a relatively dark area when viewed in the usual manner. With normal viewbox illumination, it is possible to see through areas of film with density values of up to approximately 2 units.
A density value of 3 corresponds to a light penetration of 0.1% (10% of 10% of 10%). A film with a density value of 3 appears essentially opaque when transilluminated with a conventional viewbox. It is possible, however, to see through such a film using a bright "hot" light. Radiographic film generally has a maximum density value of approximately 3 density units. This is designated the Dmax of the film. The maximum density that can be produced within a specific film depends on the characteristics of the film and processing conditions.
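The exponential density-penetration relationship can be sketched numerically (function names are ours):

```python
import math

def penetration_percent(density):
    # Each density unit cuts light penetration by a factor of 10:
    # P = 100 * 10**(-D)
    return 100.0 * 10.0 ** (-density)

def density_from_penetration(percent):
    # Inverse relationship: D = -log10(P / 100)
    return -math.log10(percent / 100.0)

print(penetration_percent(1))   # 10.0 -> medium gray on a viewbox
print(penetration_percent(3))   # 0.1 -> essentially opaque
print(round(density_from_penetration(1.0), 3))  # 2.0
```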
Measurement
FILM STRUCTURE
Most radiographic films have two emulsion layers, approximately 10 µm thick, coated on each side of a base about 150 µm thick. Films used for some radiographic procedures, such as mammography, have one emulsion layer and are called single-emulsion films.
Base

The base is a sheet of transparent plastic about 150 µm thick. It provides the physical support for the other film components and does not participate in the image-forming process. In some films, the base contains a light blue dye to give the image a more pleasing appearance when illuminated on a viewbox.
Emulsion

The emulsion is the active component in which the image is formed and consists of many small silver halide crystals suspended in gelatin. The gelatin supports, separates, and protects the crystals. The typical emulsion is approximately 10 µm thick.
Several different silver halides have photographic properties, but the one typically used in medical imaging films is silver bromide. The silver bromide is in the form of crystals, or grains, each containing on the order of 10⁹ atoms.
Silver halide grains are irregularly shaped, like pebbles or grains of sand. Two grain shapes are used in film emulsions. The conventional form approximates a cubic configuration with its three dimensions being approximately equal. More recently, tabular-shaped grains were developed. The tabular grain is relatively thin in one direction, and its length and width are much larger than its thickness, giving it a relatively large surface area. The primary advantage of tabular grain film in comparison to cubic grain film is that sensitizing dyes can be used more effectively to increase sensitivity and reduce crossover exposure.
Development is the process that converts the latent image into a visible image with a range of densities, or shades of gray. Film density is produced by converting silver ions into metallic silver, which causes each processed grain to become black. The process is rather complicated and is illustrated by the sequence of events shown in Figure 15-5.
Each film grain contains a large number of both silver and bromide ions. The silver ions have a one-electron deficit, which gives them a positive charge. On the other hand, the bromide ions have a negative charge because they contain an extra electron.

Figure 15-5 Sequence of Events That Convert a Transparent Film Grain into Black Metallic Silver

Each grain has a structural "defect" known as a sensitive speck. A film
grain in this condition is relatively transparent.
The first step in the formation of the latent image is the absorption of light
photons by the bromide ions, which frees the extra electron. The electron moves to
the sensitivity speck, causing it to become negatively charged. The speck, in turn,
attracts one of the positively charged silver ions. When the silver ion reaches the
speck, its positive charge is neutralized by the electron. This action converts the
silver ion into an atom of black metallic silver. If this process is repeated several
times within an individual grain, the cluster of metallic silver at the sensitive speck
will become a permanent arrangement. The number of grains in the emulsion that reach this status depends on the overall exposure to the film. The grains that received sufficient exposure to form a permanent change are not visually distinguishable from the unexposed grains, but are more sensitive to the action of the developer chemistry. The distribution of these activated, but "invisible," grains throughout the emulsion creates the latent image.
Development
The invisible latent image is converted into a visible image by the chemical process of development. The developer solution supplies electrons that migrate into the sensitized grains and convert the other silver ions into black metallic silver. This causes the grains to become visible black specks in the emulsion.
The four sections of an automatic film processor correspond to the four steps in film processing. In a conventional processor, the film is in the developer for 20 to 25 seconds. All four steps require a total of 90 seconds.
When a film is inserted into a processor, it is transported by means of a roller
system through the chemical developer. Although there are some differences in
the chemistry of developer solutions supplied by various manufacturers, most
contain the same basic chemicals. Each chemical has a specific function in the
development process.
Reducer

Chemical reduction of the exposed silver bromide grains is the process that converts them into visible metallic silver. This action is typically provided by two chemicals in the solution: phenidone and hydroquinone. Phenidone is the more active and primarily produces the mid to lower portion of the gray scale. Hydroquinone produces the very dense, or dark, areas in an image.
Activator
Preservative
Sodium sulfite, a typical preservative, helps protect the reducing agents from
oxidation because of their contact with air. It also reacts with oxidation products to
reduce their activity.
Hardener
Fixing
After leaving the developer the film is transported into a second tank, which
contains the fixer solution. The fixer is a mixture of several chemicals that perform
the following functions.
Neutralizer
When a film is removed from the developer solution, the development continues because of the solution soaked up by the emulsion. It is necessary to stop this action to prevent overdevelopment and fogging of the film. Acetic acid is in the fixer solution for this purpose.
Clearing
The fixer solution also clears the undeveloped silver halide grains from the film. Ammonium or sodium thiosulfate is used for this purpose. The unexposed grains leave the film and dissolve in the fixer solution. The silver that accumulates in the fixer during the clearing activity can be recovered; the usual method is to electroplate it onto a metallic surface within the silver recovery unit.
Preservative
Hardener
Wash
Film is next passed through a waterbath to wash the fixer solution out of the emulsion. It is especially important to remove the thiosulfate. If thiosulfate (hypo) is retained in the emulsion, it will eventually react with the silver and air to form silver sulfide, a yellowish brown stain. The amount of thiosulfate retained in the emulsion determines the useful lifetime of a processed film. The American National Standards Institute recommends a maximum retention of 30 µg/in².
Dry
The final step in processing is to dry the film by passing it through a chamber in
which hot air is circulating.
SENSITIVITY
One of the most important characteristics of film is its sensitivity, often referred to as film speed. The sensitivity of a particular film determines the amount of exposure required to produce an image. A film with a high sensitivity (speed) requires less exposure than a film with a lower sensitivity (speed). The sensitivities of films are generally compared by the amount of exposure required to produce an optical density of 1 unit above the base plus fog density.
The sensitivity of radiographic film is generally not described with numerical values but rather with a variety of generic terms such as "half speed," "medium speed," and "high speed." Radiographic films are usually considered in terms of their relative sensitivities rather than their absolute sensitivity values. Although it is possible to choose films with different sensitivities, the choice is limited to relatively few values. For a film with a higher sensitivity, the production of a specific density value (ie, 1 density unit) requires less exposure. High sensitivity (speed) films are chosen when the reduction of patient exposure and heat loading of the x-ray equipment are important considerations.
Figure 15-7 Comparison of Two Films with Different Sensitivity (Speed)
Low sensitivity (speed) films are used to reduce image noise. The relationship of film sensitivity to image noise is considered in Chapter 21. The sensitivity of film is determined by a number of factors, as shown in Figure 15-8, which include its design, the exposure conditions, and how it is processed.
Composition
The basic sensitivity characteristic of a film is determined by the composition of the emulsion. The size and shape of the silver halide grains have some effect on film sensitivity. Increasing grain size generally increases sensitivity. Tabular-shaped grains generally produce a higher sensitivity than conventional grains. Although grain size may vary among the various types of radiographic film, most of the difference in sensitivity is produced by adding chemical sensitizers to the emulsion.
Processing
The effective sensitivity of film depends on several factors associated with the development:

• the type of developer
• developer concentration
• developer replenishment rates
• developer contamination
• development time
• development temperature.
In most medical imaging applications, the objective is not to use these factors to vary film sensitivity, but rather to control them to maintain a constant and predictable film sensitivity.
Developer Composition
The processing chemistry supplied by different manufacturers is not the same.
It is usually possible to process a film in a variety of developer solutions, but they
will not all produce the same film sensitivity. The variation in sensitivity is usually
relatively small, but must be considered when changing from one brand of devel¬
oper to another.
Developer Concentration
Developer chemistry is usually supplied to a clinical facility in the form of a
concentrate that must be diluted with water before it is pumped into the processor.
Mixing errors that result in an incorrect concentration can produce undesirable
changes in film sensitivity.
Developer Replenishment
The film development process consumes some of the developer solution and
causes the solution to become less active. Unless the solution is replaced, film
sensitivity will gradually decrease.
In radiographic film processors, the replenishment of the developer solution is
automatic. When a sheet of film enters the processor, it activates a switch that
causes fresh solution to be pumped into the development tank. The replenishment
rate can be monitored by means of flow meters mounted in the processor. The
appropriate replenishment rate depends on the size of the films being processed. A
processor used only for chest films generally requires a higher replenishment rate
than one used for smaller films.
Developer Contamination

If the developer solution becomes contaminated with another chemical, such as the fixer solution, abrupt changes in film sensitivity can occur in the form of either an increase or decrease in sensitivity, depending on the type and amount of contamination. Developer contamination is most likely to occur when the film transport rollers are removed or replaced.
Development Time
When an exposed film enters the developer solution, development is not instantaneous. It is a gradual process during which more and more film grains are developed as the development time increases. Some films produce a higher contrast when developed for a longer time in an extended cycle processor.
Development Temperature
The activity of the developer changes with temperature. An increase in tem¬
perature speeds up the development process and increases film sensitivity because
less exposure is required to produce a specific film density.
The temperature of the developer is thermostatically controlled in an automatic
processor. It is usually set within the range of 90-95°F. Specific processing tem¬
peratures are usually specified by the film manufacturers.
Blue Sensitivity
A basic silver bromide emulsion has its maximum
sensitivity in the ultraviolet
and blue regions of the light spectrum. For many years most intensifying screens
contained calcium tungstate, which emits a blue
light and is a good match for blue
sensitive film. Although calcium tungstate is no
longer widely used as a screen
material, several contemporary screen materials emit blue light.
Green Sensitivity
Several image light sources, including
image intensifier tubes, CRTs, and some
intensifying screens, emit most of their light in the green portion of the spectrum.
Film used with these devices must, therefore, be sensitive to
green light.
Silver bromide can be made sensitive to
green light by adding sensitizing dyes
to the emulsion. Users must be careful not to use the
wrong type of film with
intensifying screens. If a blue-sensitive film is used with a green-emitting intensi¬
fying screen, the combination will have a drastically reduced sensitivity.
The Photographic Process and Film Sensitivity 221
Red Sensitivity
Many lasers produce red light. Devices that transfer images to film by means of
a laser beam must,therefore, be supplied with a film that is sensitive to red light.
Safelighting
Darkrooms in which film is loaded into cassettes and transferred to processors
are usually illuminated with a safelight. A safelight emits a color of light the eye
can see but that will not
expose film. Although film has a relatively low sensitivity
to the light emitted by
safelights, film fog can be produced with safelight illumina¬
tion under certain conditions. The safelight should
provide sufficient illumination
for darkroom operations but not produce significant
exposure to the film being
handled. This can usually be accomplished if certain factors are controlled. These
include safelight color, brightness, location, and duration of film
exposure.
The color of the safelight is controlled by the filter. The filter must be selected in relationship to the spectral sensitivity of the film being used. An amber-brown filter is generally suitable for blue-sensitive film, whereas green-sensitive film requires a red filter.

Selecting the appropriate safelight filter does not absolutely protect film because film has some sensitivity to the light emitted by most safelights. Therefore,
the brightness of the safelight (bulb size) and the distance between the light and
film work surfaces must be selected so as to minimize film exposure.
Since exposure is an accumulative effect, handling the film as short a time as
possible minimizes exposure. The potential for safelight exposure can be evalu¬
ated in a darkroom by placing a piece of film on the work surface, covering most
of its area with an opaque object, and then moving the object in successive steps to
expose more of the film surface. The time intervals should be selected to produce
exposures ranging from a few seconds to several minutes. After the film is pro¬
cessed, the effect of the safelight exposure can be observed. Film is most sensitive
to safelight fogging after the latent image is produced but before it is processed.
Exposure Time
In radiography it is usually possible to deliver a given exposure to film by using
many combinations of radiation intensity (exposure rate) and exposure time. Since
radiation intensity is proportional to x-ray tube MA, this is equivalent to saying
that a given exposure (in milliampere-seconds) can be produced with many combinations of MA and exposure time. The reciprocity law means that it is possible to swap radiation intensity (in milliamperes) for exposure
time and produce the same film exposure. When a film is directly exposed to
x-radiation, the reciprocity law holds true. That is, 100 mAs will produce the same
film density whether it is exposed at 1,000 mA and 0.1 seconds or 10 mA and 10
seconds. However, when a film is exposed by light, such as from intensifying
screens or image intensifiers, the reciprocity law does not hold. With
light expo¬
sure, as opposed to direct x-ray interactions, a single silver halide grain must ab¬
sorb more than one photon before it can be developed and can contribute to image
density. This causes the sensitivity of the film to be somewhat dependent on the
intensity of the exposing light. This loss of sensitivity varies to some extent from
one type of x-ray film to another. The clinical
significance is that MAS values that
give the correct density with short exposure times might not do so with long expo¬
sure times.
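The milliampere-seconds arithmetic described above can be sketched in a few lines of Python. This is an illustrative helper (the function name and values are ours, not from the text):

```python
def mas(ma, seconds):
    """Tube current (mA) multiplied by exposure time (s) gives mAs."""
    return ma * seconds

# The same 100 mAs delivered with very different intensities:
combos = [(1000, 0.1), (100, 1.0), (10, 10.0)]
for ma, t in combos:
    print(f"{ma} mA x {t} s = {mas(ma, t):.0f} mAs")

# For direct x-ray exposure the reciprocity law holds and all three give
# the same film density; with light from intensifying screens, the long
# low-intensity exposure may need extra mAs to reach the same density.
assert all(abs(mas(ma, t) - 100) < 1e-9 for ma, t in combos)
```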
There are many variables, such as temperature and chemical activity, that can
affect the level of processing that a film receives. Each type of film is
designed and
manufactured to have specified sensitivity (speed) and contrast characteristics.
Underprocessing
If a film is underprocessed, its sensitivity and contrast will be reduced below the
specified values. The loss of sensitivity can usually be compensated for by in¬
creasing exposure but the loss of contrast cannot be recovered.
Overprocessing
Overprocessing can increase sensitivity. The contrast of some films might in¬
crease with overprocessing, up to a point, and then decrease. A
major problem
with overprocessing is that it increases
fog (base plus fog density) which contrib¬
utes to a decrease in contrast.
Processing Accuracy
The first step in processing quality control is to set up the correct processing
conditions and then verify that the film is being correctly processed.
Processing Conditions
A specification of recommended processing conditions (temperature, time, type
of chemistry, replenishment rates, etc.) should be obtained from the manufacturers
of the film and chemistry.
Processing Verification
After the recommended processing conditions are established for each type of film, a test should be performed to verify that the film is producing the design sensitivity and contrast characteristics as specified by the manufacturer. These specifications are usually provided in the form of a film characteristic curve that can be compared to one produced by the processor being evaluated.
Processing Consistency
The second step in processing quality control is to reduce the variability over
time in the level of processing.
Variations in processing conditions can produce significant differences in film
sensitivity. One objective of a quality control program is to reduce exposure errors
that cause either underexposed or overexposed film. Processors should be checked
several times each week to detect changes in processing. This is done
by exposing
a test film to a fixed amount of
light exposure in a sensitometer, running the film
through the processor, and then measuring its density with a densitometer. It is not
necessary to measure the density of all exposure steps. Only a few exposure steps
are selected, as shown in Figure 15-9, to
give the information required for proces¬
sor
quality control. The density values are recorded on a chart (Figure 15-10) so
that fluctuations can be easily detected.
Speed
A single exposure step that produces a film density of about 1 density unit
(above the base plus fog value) is selected and designated the "speed step." The
density of this same step is measured each day and recorded on the chart. The
density of this step is a general indication of film sensitivity or speed. Abnormal
variations can be caused by any of the factors affecting the amount of develop¬
ment.
[Figure 15-9 Density Values from a Sensitometer-Exposed Film Strip Used for Processor Quality Control: the base plus fog density, a speed index step, and two steps whose difference (eg, 2.20 - 1.20 = 1.00) is the contrast index]
Contrast
Two other steps are selected, and the difference between them is used as a measure of film contrast. This is the contrast index. If the two sensitometer steps that are selected represent a two-to-one exposure ratio (50% exposure contrast), the contrast index is the same as the contrast factor discussed earlier. This value is sensitive to changes in processing conditions.
If abnormal variations in film density are observed, all
possible causes, such as
developer temperature, solution replenishment rates, and contamination, should
be evaluated.
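The speed and contrast indices described above are simple differences of measured densities. A minimal sketch with hypothetical step densities; the ±0.15 action limit is a commonly used QC convention, not a value stated in the text:

```python
# Hypothetical step densities read from a processed sensitometer strip.
base_plus_fog = 0.18
speed_step_density = 1.20          # step chosen near 1.0 above base+fog
contrast_steps = (2.20, 1.20)      # two steps used for the contrast index

speed_index = speed_step_density - base_plus_fog
contrast_index = contrast_steps[0] - contrast_steps[1]

print(f"speed index:    {speed_index:.2f}")
print(f"contrast index: {contrast_index:.2f}")

# Daily values are plotted on a control chart; a common action level is
# a change of more than +/-0.15 density units from the established baseline.
baseline_speed = 1.00
if abs(speed_index - baseline_speed) > 0.15:
    print("speed out of control - check temperature and replenishment")
```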
If more than one processor is used for films from the same imaging device, the
level of development by the different processors should be matched.
[Figure 15-10 A Processor Quality Control Chart of Daily Density Values]
Artifacts
A variety of artifacts can be produced during the storage, handling, and process¬
ing of film.
Bending unprocessed film can produce artifacts or "kink marks," which can
appear as either dark or light areas in the processed image. Handling film, espe¬
cially in a dry environment, can produce a build-up of static electricity; the dis¬
charge produces dark spots and streaks.
Artifacts can be produced during processing by factors such as uneven roller pressure.

CONTRAST TRANSFER

Film converts an exposure difference into an image difference, as illustrated in Figure 16-1. The film contrast between two areas is expressed as the difference between the density values. The ability of the film to convert exposure contrast into film contrast can be expressed in terms of the contrast factor. The value of the contrast factor is the amount of film contrast resulting from a 2:1 exposure ratio (50% exposure contrast).

[Figure 16-1 The General Relationship between Exposure Contrast (50%) and Film Contrast (0.6 density units)]
One method of doing this is illustrated in Figure 16-2; this type of exposure
pattern is usually produced by a device known as a sensitometer. In this method, a
strip of film is divided into a number of individual areas, and each area is
exposed
to a different level of radiation. In this
particular illustration, the exposure is
changed by a factor of 2 (50% contrast) between adjacent areas. When considering
contrastcharacteristics, we are usually not interested in the actual exposure to a
film but rather the relative
exposure among different areas of film. In Figure 16-2
the exposures to the different areas are
given relative to the center area, which has
been assigned a relative
exposure value of 1. We will use this relative exposure
scale throughout our discussion of film contrast
characteristics. Note that each
interval on the scale represents a 2:1 ratio. This is a characteristic
of a logarithmic
scale. When the film is
processed, each area will have density values, as shown
directly below the area. The amount of contrast between any two adjacent areas is
the difference in
density, as shown. In this illustration we can observe one of the
very important characteristics of film contrast.

[Figure 16-2 The Variation of Film Contrast with Exposure: a film strip exposed in 2:1 steps (relative exposures 1/64 to 64) with the resulting density values and the contrast between adjacent steps]

Notice how the contrast is not the same between each pair of adjacent areas throughout the exposure range: there is
no contrast between the first two areas, but the contrast gradually increases with
exposure, reaches a maximum, and then decreases for the higher exposure levels.
In other words, a specific type of film does not produce the same amount of con¬
trast at all levels of exposure. This important characteristic must be considered
when using film to record medical images.
All films have a limited exposure range in which they can produce contrast: if areas of a film receive exposures either below or above the useful exposure range, little or no contrast is produced in those areas.
The relationship between film density and exposure is often presented in the
form of a graph, as shown in Figure 16-3. This graph shows the relationship be¬
tween the density and relative exposure for the values shown in Figure 16-2. This type of graph is known as the film characteristic curve; it shows how density changes over the useful exposure range. At any exposure value, the contrast characteristic of the film is represented by the slope of the curve. At any particular point, the slope represents the
density difference (contrast) produced by a specific exposure difference. The
same interval anywhere on the relative exposure scale represents the same expo¬
sure ratio and amount of contrast delivered to the film during the exposure pro¬
cess. An interval along the density scale represents the amount of contrast that
actually appears in the film. The slope of the characteristic curve at any point can
be expressed in terms of the contrast factor because the contrast factor is the den¬
sity difference (contrast) produced by a 2:1 exposure ratio (50% exposure con¬
trast).
A film characteristic curve has three distinct regions with different contrast
transfer characteristics. The part of the curve associated with relatively low expo¬
sures is designated the toe, and also corresponds to the light or low density por¬
tions of an image. When an image is exposed so that areas fall within the toe
region, little or no contrast is transferred to the image. In the film shown in Figure
16-2, the areas on the left correspond to the toe of the characteristic curve.
A film also has a reduced ability to transfer contrast in areas that receive rela¬
tively high exposures. This condition corresponds to the upper portion of the char¬
acteristic curve in which the slope decreases with increasing exposure. This por¬
tion of the curve is traditionally referred to as the shoulder. In Figure 16-2 the dark
areas on the right correspond to the shoulder of the characteristic curve. The two
significant characteristics of image areas receiving exposure within this range are
Film Contrast Characteristics 231
[Figure 16-3 A Film Characteristic Curve Showing the Relationship between Density and Relative Exposure (relative exposure scale 1/64 to 64)]
that the film isquite dark (dense) and contrast is reduced. In many instances, im¬
age contrast is present that cannot be observed on the conventional viewbox be¬
cause of the high film density. This contrast can be made visible by viewing the film with a brighter light source, such as a hot lamp.
Contrast Curve
It is easier to see the relationship between film contrast and exposure by using a
contrast curve, as shown in Figure 16-4. The contrast curve corresponds to the
slope of the characteristic curve. It clearly shows that the ability of a film to trans¬
fer exposure contrast into film contrast changes with exposure level, and that
maximum contrast is produced only within a limited exposure range.
The exposure range over which a film produces useful contrast is designated the
latitude. An underexposed film area contains little or no image contrast. Exposure
values above the latitude range also produce areas with very little contrast and
have the added disadvantage of being very dark or dense.
Since the contrast transfer characteristics of film change with exposure, a spe¬
cific film characteristic can be described only by using either a characteristic curve
or contrast curve, as illustrated in Figure 16-4. There are occasions, however, when it is convenient to describe the contrast characteristic of a film with a single numerical value.
Gamma
The gamma value of a film is the maximum slope of the characteristic curve, as
shown in Figure 16-5. By tradition, the gamma value is the slope expressed in
terms of the density difference associated with an exposure ratio of 10:1. The
relationship between the film gamma value and the maximum contrast factor is
given by
Gamma = 3.32 × Maximum contrast factor.
[Figure 16-4 The Relationship of Film Contrast (Solid Line) to Relative Exposure and the Characteristic Curve (Dotted Line)]
The factor 3.32 converts a slope based on an exposure ratio of 2:1 to a slope ex¬
pressed with respect to a 10:1 exposure ratio.
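The 3.32 factor is log2(10): a 10:1 exposure ratio spans about 3.32 doublings of exposure, so a slope per doubling is multiplied by 3.32 to express it per decade. A quick check of this conversion (the helper name is ours):

```python
import math

# The contrast factor is the density difference produced by a 2:1
# exposure ratio; gamma is the slope per 10:1 exposure ratio.
# A 10:1 ratio spans log2(10) doublings of exposure, hence
# gamma = 3.32 x maximum contrast factor.
print(math.log2(10))   # ~3.3219

def gamma_from_contrast_factor(cf):
    """Convert a per-doubling slope (contrast factor) to gamma."""
    return math.log2(10) * cf

# Example: a maximum contrast factor of 0.9 density units per doubling
print(gamma_from_contrast_factor(0.9))   # ~2.99
```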
Average Gradient
The average gradient is the average slope between two designated density val¬
ues, as illustrated in Figure 16-5. For medical imaging film the density values of
0.25 and 2.0 above the base plus fog density are used to determine average gradi¬
ent. Average gradient values, like gamma values, are based on an exposure ratio of 10:1. The relationship between the average gradient and the average contrast factor is therefore:

Average gradient = 3.32 × Average contrast factor.

[Figure 16-5 The Relationship of Average Gradient and Gamma to the Characteristic Curve]
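The average gradient can be computed directly from the two reference densities and the log-exposure interval between them. A sketch for a hypothetical film (the base plus fog value and exposures are illustrative, not from the text):

```python
import math

def average_gradient(log_e1, log_e2, base_plus_fog=0.18):
    """Average slope of the characteristic curve between the densities
    0.25 and 2.0 above base plus fog; log_e values are log10 of the
    relative exposures that produce those two densities."""
    d1 = base_plus_fog + 0.25
    d2 = base_plus_fog + 2.0
    return (d2 - d1) / (log_e2 - log_e1)

# Hypothetical film: the two reference densities are produced by
# relative exposures of 1 and 4 (a 0.602 change in log exposure).
grad = average_gradient(math.log10(1), math.log10(4))
print(round(grad, 2))   # 1.75 / 0.602 ~= 2.91
```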
FILM LATITUDE
In Figure 16-4 we saw that film contrast is limited to a specific range of exposure values. The exposure range in which a film can produce useful contrast is known as its latitude. The latitude of a specific film is determined primarily by its contrast characteristics: high contrast films generally have less latitude.
Exposure Error
In every
imaging procedure it is necessary to set the exposure to match the
sensitivity (speed) of the film being used. This is not always an easy task.
Exposure error is generally a much more significant problem in radiography
than in other imaging procedures. It is not always possible to predict the amount of radiation that will penetrate a specific patient and expose the film.
When an x-ray beam passes through certain body areas, the penetration of the
areas varies considerably because of differences in tissue thickness and composi¬
tion. Under these conditions it is possible for the range of exposures from the
patient's body (subject contrast range) to exceed the latitude of the film. This typi¬
cally produces a high level of area contrast, as discussed in a previous chapter.
When the exposure to some image areas falls outside the film latitude, details within the areas are recorded with reduced contrast, as illustrated in Figure 16-6.
Notice that the objects located within the very thick and thin body sections are not recorded because they are located in areas outside the film latitude. Radiography of the chest illustrates this problem: the area of the mediastinum receives a relatively low exposure whereas the lung areas receive a much higher level.

[Figure 16-6 Loss of Contrast in Both Thick and Thin Body Sections when Using High Contrast Film]
One possible solution to the problem is to decrease the subject contrast range by using increased KVP, spatial filtration, bolus, or compression. Another possible solution is to use a film with a wider latitude.
FILM TYPES
[Figure 16-8 Characteristic Curves for the High Contrast and Latitude Films Illustrated in Figure 16-7]
[Figure 16-9 Contrast Curves for the High Contrast and Latitude Films Illustrated in Figure 16-7 (Compare with Characteristic Curves in Figure 16-8)]
Figure 16-10 illustrates how using a medium contrast, or latitude, film actually
increases object contrast within certain areas because of the overall reduction in
area contrast.
EFFECTS OF PROCESSING
Both the sensitivity and the contrast characteristics of a given film type are
affectedby processing. The degree of processing received by film generally de¬
pends on three factors: (1) the chemical activity of the developer solution, (2) the
temperature of the developer, and (3) the period of immersion in the developer. In
most applications it is usually desirable to maintain a constant developer activity
by replenishment and to control the degree of development by varying the temperature or, in some cases, the amount of development time. Varying the amount
of development by changing either the chemical activity, the time period, or the
temperature produces a shift in the characteristic curve.
The optimum performance of most film types is obtained by using the recommended degree of development. Deviation in either direction
generally results in a
loss of contrast. Although the
sensitivity of film can usually be increased by over¬
developing, this is usually accompanied by an increase in undesirable fog.
Overprocessing
Increasing development will cause the curve to shift to the left with a rise in the toe. The movement of the curve to the left indicates an increase in sensitivity because a given density value is produced with a lower exposure. As the toe of the curve rises, the general slope of the curve decreases, which results in less contrast. The increased density value of the toe also indicates an increased fog level. This
fog density occurs because more of the unexposed silver grains are developed by
the excess processing.
Underprocessing
FILM FOG
Any density in a film that is not produced as part of the image-forming exposure
is generally referred to as fog. There are several potential sources of film fog.
Inherent
All film, even under the best conditions, shows some density even if it has received no radiation exposure. This density comes from the film base and from the unexposed emulsion, and is the density observed if a piece of unexposed film is processed. This is typically referred to as the base plus fog density and is generally in the range of 0.15 to 0.2 density units for radiographic film.
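Density units are logarithmic: each density unit reduces the transmitted light by a factor of 10. The relation D = log10(1/T) is the standard sensitometric definition, though it is not stated in this excerpt. A quick illustration:

```python
def transmission(density):
    """Fraction of viewbox light transmitted by film of a given optical
    density, using the standard definition D = log10(1/T)."""
    return 10 ** (-density)

# A base-plus-fog density of about 0.2 still transmits ~63% of the light,
# while a mid-gray diagnostic density of 1.0 transmits only 10%.
print(round(transmission(0.2), 2))   # 0.63
print(round(transmission(1.0), 2))   # 0.1
```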
Chemical
If a film is
overprocessed, abnormally high densities will be developed by
chemical action in image areas that received little or no exposure. This results
from chemicals in the developer solution interacting with some of the film grains
that were not sensitized by exposure.
Age

Fog will gradually develop in unprocessed film with age; therefore, film should not be stored for long periods of time. Each box of film is labeled with an expiration date by the manufacturer. When stored under proper conditions, film should not develop appreciable fog before the expiration date. When film is stored in a warm environment, fog develops more rapidly; storage in a cool area extends the usable life of the film.
Radiation Exposure
It is not uncommon for film to be fogged by accidental exposure to either x-radiation or light. Light-exposure fogging can result from light leaks in a darkroom, the use of incorrect safelights, and cassettes with defective light seals around the edges.
Film darkrooms and storage areas should be properly shielded from
nearby
x-ray sources.
Chapter 17

Radiographic Density Control

The density of a radiograph is determined by many factors, as shown in Figure 17-1. Film density is optimal when all of
these factors are properly balanced.
After a radiographic system is installed, the films, screens, and grids are se¬
lected, and the processor is adjusted, the major task is selecting KVP and MAS
values to compensate for variations in patient thickness and composition. If the
KVP and MAS are to be selected manually, technique charts should be used for
reference. The most common chart form gives KVP and MAS values in relation to
the thickness of different parts of the body. It should be emphasized that a given
technique chart should be used only if it has been calibrated for the specific ma¬
chine and film-screen-grid combination being used.
Exposure errors are produced by variations in any of the factors listed in Figure
17-1 that are not properly compensated for. When it is necessary to change certain
factors, such as focal-film distance (FFD) or KVP, to meet a specific examination
objective, the change can usually be compensated for by changing another factor
according to established relationships, such as the inverse-square law and the 15%
rule.
243
[Figure 17-1 Factors That Affect Radiographic Density, including KV, MA, exposure time, waveform, patient thickness and material, field size, grid type, FFD, and receptor spectral sensitivity, density, and processing]
In this chapter we consider the specific factors that relate to exposure (density)
control, exposure error, and technique compensation.
MA
• Select low MA values to permit use of the small focal spot when image detail
is important.
Radiographic Density Control 245
Exposure errors can result if the MA stations are not correctly calibrated on the x-ray generator. Calibration consists of measuring the x-ray exposure rate at each
MA value that can be selected.
Exposure Time
In radiography, exposure times are selected either by the operator, who sets the timer
before initiating the exposure, or by the AEC circuit, which terminates the expo¬
sureafter the selected exposure has reached the receptor.
As with MA, no one exposure time value is correct for a specific procedure.
Remember that it is the combination of exposure time, MA, and KVP that deter¬
mines exposure. Some general rules for selecting an appropriate exposure time
are:
• Select short exposure times to minimize motion blurring and improve image
detail.
• Select longer exposure time when motion is not a problem and it is necessary
to reduce either MA or KVP.
Exposure errors can result from the selection of an inappropriate exposure time
by the operator or from the failure of the generator to produce the exposure time
indicated on the time selector.
X-ray machine timers should be calibrated periodically to determine if the ma¬
chineproduces an exposure that is proportional to the indicated exposure time.
Timers can be calibrated by several methods including the use of an electronic
KVP
Although it is true that KVP-MAS values represented by any point along the
curve produce the same film exposure, they will not produce the same image qual¬
ity, patient exposure, and demands on the x-ray equipment. Moving down the
curve (higher KVP values)
generally decreases patient exposure, x-ray tube heat¬
ing, and motion blurring when the MAS is decreased by a shorter exposure time.
The major reason for moving up the curve
(higher MAS values) is to increase
image contrast with the lower KVP values, as discussed in Chapter 12.
The range of KVP values for a
specific procedure is selected on the basis of
contrast requirements, patient
exposure, and the limitations of the x-ray generator.
However, small changes in KVP within each range can be used to adjust film expo¬
sure.
Exposure errors can occur if the actual KVP produced by an x-ray generator is
different from the value indicated by the KVP selector. Periodic calibration of an
x-ray generator helps reduce this potential source of error.
[Figure 17-2 The Combinations of MAS and KVP (60 to 100 kV) That Produce the Same Film Exposure, illustrating the 15% rule]
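The 15% rule can be expressed as a simple calculation. The helper below is ours and generalizes the rule to fractional steps; it is a rule of thumb, not an exact physical law:

```python
import math

def compensate_mas(kvp_old, mas_old, kvp_new):
    """Approximate MAS needed at kvp_new for the same film exposure,
    using the 15% rule: each 15% increase in KVP halves the required
    MAS (a radiographic rule of thumb, not an exact relationship)."""
    steps = math.log(kvp_new / kvp_old) / math.log(1.15)
    return mas_old / (2 ** steps)

# Raising 70 KVP by 15% (to 80.5) lets the MAS drop from 40 to about 20:
print(round(compensate_mas(70, 40, 80.5), 1))   # 20.0
```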
Waveform

A constant potential or three-phase generator requires less KVP and/or MAS than a single-phase generator to produce the same
film exposure. The constant potential, or three-phase, generator produces more
radiation exposure per unit of MAS, as discussed in Chapter 7. For a specific KVP
value, the radiation is more penetrating because of the higher average KV during
the exposure.
Technique charts designed for use with a single-phase generator would lead to
considerable overexposure if used with a constant potential, or three-phase, gen¬
erator.
X-Ray Tubes
All x-ray tubes do not produce the same exposure (for a specific KVP-MAS
value), and the exposure output sometimes decreases with aging. A difference in
tube output among tubes is often caused by variations in the amount of filtration.
The significance of the tube-to-tube variation is that technique factors for one
x-ray machine might not produce proper film exposure if used with another
machine.
RECEPTOR SENSITIVITY
Grids
When a grid is changed, the exposure factors must be changed. The approximate relationship between the old and new MAS values depends on the Bucky factors, B, or penetration values, P, of the grids and is

MAS(new) = MAS(old) × B(new)/B(old) = MAS(old) × P(old)/P(new)
The grid penetration is the reciprocal of the Bucky factor. If the condition being
considered does not involve a grid, then the value of the Bucky factor or grid
penetration will be 1. The value of these factors generally depends on grid ratio
and the quantity of scattered radiation in the beam, as discussed in Chapter 13.
Values of grid penetration range from 1 (no grid) to approximately 0.2 for a high-
ratio grid. Changing from an examination condition without a grid to one with a
grid penetration of 0.2 (Bucky factor of 5) requires the MAS to be increased by a
factor of 5. Approximate grid penetration and
Bucky factor values for different
grids are given in Chapter 13.
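The grid MAS adjustment described above is a simple ratio of Bucky factors; a minimal sketch (the helper name is ours):

```python
def new_mas(old_mas, bucky_old=1.0, bucky_new=1.0):
    """MAS adjustment when changing grids:
    MAS_new = MAS_old x (B_new / B_old),
    where B is the Bucky factor, the reciprocal of grid penetration,
    and B = 1 means no grid."""
    return old_mas * (bucky_new / bucky_old)

# Text example: no grid -> grid with penetration 0.2 (Bucky factor 5),
# so the MAS must be increased by a factor of 5.
print(new_mas(10, bucky_old=1.0, bucky_new=5.0))   # 50.0
```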
PATIENT
As the area covered by the x-ray beam, or field size, is increased, more scattered
radiation is produced and contributes to film exposure. Although much of the scat¬
tered radiation is removed by the grid, it is often necessary to change exposure
factors to get the same density with different field sizes.
Because of the inverse-square effect, the exposure that reaches the receptor is related to the focal spot-receptor distance, d. If this distance is changed, the new MAS value required to obtain the same film exposure will be given by

MAS(new) = MAS(old) × (d(new)/d(old))²
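The inverse-square MAS compensation can be sketched directly (the helper name and distances are illustrative, not from the text):

```python
def mas_for_distance(old_mas, d_old, d_new):
    """Inverse-square compensation: MAS_new = MAS_old x (d_new / d_old)^2."""
    return old_mas * (d_new / d_old) ** 2

# Hypothetical example: moving the receptor from 100 cm to 180 cm FFD
# spreads the beam over more area, so more MAS is needed.
print(round(mas_for_distance(10, 100, 180), 1))   # 32.4
```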
Many radiographic systems are equipped with an AEC circuit. The AEC is of¬
ten referred to as the phototimer. The basic function of an AEC is illustrated in
Figure 17-3.
The principal component of the AEC is a radiation-measuring device, or sensor,
located near the receptor. A common type of sensor contains an intensifying
screen that converts the x-ray
exposure into light. The sensor also contains a com¬
ponent, typically a photomultiplier tube, that converts the light into an electrical
signal. Another type of sensor is an ionization chamber (described in Chapter 34)
that also converts x-ray exposure into an electrical signal.
Within the AEC circuit, the exposure signal is compared to a reference value.
When the accumulated exposure to the sensor (receptor) reaches the predetermined reference value, the exposure is terminated by automatically turning off the x-ray tube.
The reference exposure level is determined by two variables: one is the calibration of the AEC, and the other is the density control setting selected by the operator. A service engineer must adjust the basic reference level to match the sensitivity (speed) of the receptor. If either the intensifying screen or film is changed so that the overall receptor sensitivity changes, it will usually be necessary to recalibrate the AEC.
[Figure 17-3 The basic function of an automatic exposure control (AEC): a radiation sensor at the receptor produces an exposure signal that is compared to a reference level set by the AEC calibration and the density control setting (-1, -1/2, +1/2, +1); a backup timer limits the maximum exposure time]
The use of AEC does not eliminate exposure error. Some possible sources of error that must be considered are the following:

• AEC not calibrated for a specific receptor
• Density control not set to proper value
• Backup timer not set to proper value
• Sensor field incorrectly positioned with respect to anatomy.
Chapter 18

Blur, Resolution, and Visibility of Detail
The visibility of detail in an image is determined, to a large extent, by the amount of blur produced by the imaging procedure. There is some blur in all medical images. Some methods, however, produce images with significantly less blur than others, and the result is images that show much greater detail. Each imaging method also has certain associated factors that control the amount of blurring and the ultimate visibility of detail.

In this chapter we consider the general characteristics of image blur and its effect on the visibility of image detail.
BLUR
An imaging system forms an image of a physical object, such as a patient's body. In an ideal situation, each small point within the object would be represented by a small, well-defined point within the image. In reality, the "image" of each object point is spread, or blurred, within the image. The amount of blurring can be expressed as the dimension of the blurred image of a very small object point. Blur values range from approximately 0.15 mm, for the sharpest imaging methods, to several millimeters for others.

Blur Shapes

Receptor components, such as intensifying screens and image intensifier tubes, generally produce round blur patterns.
254 Physical Principles of Medical Imaging
[Figure: a small object point within the patient is imaged as a blurred area with dimension B (mm)]
Motion during the imaging process typically produces an elongated blur pattern. X-ray tube focal spots produce a variety of blur shapes.
Blur Distribution

In addition to a size and shape, the blur produced by a specific factor has a specific intensity distribution. This characteristic is related to the manner in which the point image is spread, or distributed, within the blur area. Two blur distribution patterns are illustrated in Figure 18-3. The actual distribution of the image intensity within the blur area is often illustrated by means of an intensity profile. Some sources of blur uniformly distribute the object-point image intensity within the blur area. This gives the blur pattern a precise dimension, as illustrated in Figure 18-3. Many sources of blur, however, do not distribute the intensity uniformly. Common
examples are intensifying screens and defocused optical systems. A common distribution pattern is one with a relatively high intensity near the center with a
gradual reduction of intensity toward the periphery. The profile of this type of
distribution is often a Gaussian curve, as illustrated. The full dimension (diameter)
of a Gaussian blur pattern is not used to express the amount of blur because it would tend to overstate the blur in relation to blur that is uniformly distributed. A more appropriate value is the dimension of a uniform distribution that would produce the same general image quality as the Gaussian distribution. This is designated the equivalent blur value. For example, the two blur patterns shown in Figure 18-3 have the same general effect on image quality even though their total dimensions differ.
VISIBILITY OF DETAIL
The most significant effect of blur in an imaging process is that it reduces the
visibility of details such as small objects and structures. In every imaging process,
blur places a definite limit on the amount of detail (object smallness) that can be
visualized.
The direct effect of blur is to reduce the contrast of small objects and features, as illustrated in Figure 18-4. In effect, blur spreads the image of small objects into the surrounding background area. As the image spreads, the contrast and visibility are reduced.
[Figure 18-4 The General Blur Effect: as the image of an object point is blurred, its relative contrast falls, for example from 100% to 50% to 20%]
Blur, Resolution, and Visibility of Detail 257
In this illustration it is assumed that all objects, both small and large, have the same amount of inherent contrast. In projection x-ray imaging, small objects tend to produce much less contrast than large objects because of their increased penetration. The contrast and visibility of small objects are actually reduced by two factors: the increased x-ray penetration and the effects of blur, which we are considering here.
No medical imaging method produces images that are free of blur; the no-blur
line is included in the illustration as a point of reference. In this particular ex¬
ample, the blur values are described only in the relative terms of low, medium, and
high.
Let us now consider the effect of a small amount of blur on the imaging process. The contrast of the larger objects is not affected. The loss of contrast because of blur increases with decreasing object size (detail). The contrast is eventually reduced to zero at some point along the detail scale, and no smaller objects will be visible. For objects that produce relatively low contrast, even without blur, the threshold of visibility might occur at object sizes larger than the point at which blur produces zero contrast. The visibility threshold is related to, but not necessarily equal to, the object size at which blur produces zero relative contrast.
As blur is increased in the imaging process, the loss of contrast increases for all object sizes, and the threshold of visibility moves to larger objects.
UNSHARPNESS
An image that shows much detail and distinct boundaries is often described as being sharp. The presence of blur produces unsharpness. Image unsharpness, as the term is commonly used, and blur refer to the same general image characteristic.
[Figure 18-5: relative contrast vs. object detail for the no-blurring (ideal) case and for increasing levels of blur]
RESOLUTION
Figure 18-6 A Test Pattern Used To Measure the Resolution of an X-Ray Imaging System
A resolution test pattern expresses line width and separation distance in terms of line pairs (lp) per unit distance (millimeters or centimeters). One line pair consists of one lead strip and one adjacent separation space. The number of line pairs per millimeter is actually an expression of spatial frequency. As the lines get smaller and closer together, the spatial frequency (line pairs per millimeter) increases. A typical test pattern contains areas with different spatial frequencies. An imaging system is evaluated by imaging the test object and observing the highest spatial frequency (or minimum separation) at which the lines can still be distinguished, or resolved.
Figure 18-7 illustrates the effect of blur on resolution. When no blur is present, all of the line-pair groups can be resolved. As blur is increased, however, resolution is decreased, and only the lines with larger separation distances are visible.
[Figure 18-7: relative contrast of the line-pair groups for the no-blurring (ideal) case and for increasing amounts of blur]
Figure 18-8 compares how the contrast within the various line-pair groups is reduced by various levels of blur. Note the similarity between this illustration and Figure 18-5. Increasing spatial frequency corresponds to increasing image detail and reducing object size.

[Figure 18-8 The General Relationship between Blur and Resolution: three panels plot relative contrast against spatial frequency (lp/mm) for no blur, medium blur, and high blur]

Curves like those shown in Figure 18-8 are generally designated contrast transfer functions (CTF). They show the ability of an imaging system to transfer the contrast of objects of different sizes in the presence of blur.
The shape of the curve depends, to some extent, on the major source and distribution of blur within the system, as shown in Figure 18-9. For example, the typical CTF curve associated with focal spot blur usually has a specific point at which the contrast becomes zero. This is commonly referred to as the disappearance frequency and represents the resolution limit, or resolving ability, of the system. When the major source of blur is the receptor, ie, the intensifying screen or image intensifier tube, the curve might not show a distinct zero-contrast point. Criteria must be established for defining the maximum resolution point. It is common practice to specify the resolution of such a system in terms of the spatial frequency, in line pairs per millimeter, at which the contrast falls to a relatively low value, typically 3%. When comparing the resolution values of different systems, this practice should be taken into consideration. For example, if a manufacturer arbitrarily uses a 3% contrast threshold, the resolution values for its equipment will be higher than for equipment from a company that uses 10%, even if the systems are identical.
Figure 18-9 The Contrast Transfer Function Associated with Two Types of Blur. The Solid Line Is Characteristic of Motion and Focal Spot Blur. The Broken Line Is Generally Characteristic of Receptor Blur.
MODULATION TRANSFER FUNCTION

The modulation transfer function (MTF) describes the ability of a system to transfer contrast in much the same way as the CTF, but it is defined for sinusoidal test objects: an MTF test object has a certain number of peaks and valleys (cycles) per millimeter, whereas a CTF test object has a certain number of lines and spaces (line pairs) per millimeter. The ability of a system to image the various spatial frequencies is related to the amount of blur present.

A large flat object of relatively uniform thickness contains low frequency components. At the edge of such an object, however, high frequency components are created by the sudden change in thickness. Generally speaking, small objects are associated with the higher frequency components.
Figure 18-10 Comparison of Blur and Resolution Values for the Different Imaging Methods. [The chart aligns the gamma camera, ultrasound, MRI, CT, fluoroscopy, and radiography on paired scales of maximum resolution (approximately 0.3 to 10 lp/mm) and blur (approximately 10 mm down to 0.1 mm).]
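The paired scales in Figure 18-10 suggest a reciprocal relationship between blur and limiting resolution: one line pair spans roughly twice the blur dimension. A sketch of this rule of thumb (an approximation, not an equation stated in the text):

```python
def max_resolution_lp_per_mm(blur_mm):
    """Approximate limiting resolution for a given blur value: one line
    pair (one line plus one space) spans about twice the blur dimension."""
    return 1.0 / (2.0 * blur_mm)

def blur_for_resolution(lp_per_mm):
    """Inverse: approximate blur value that limits resolution to lp_per_mm."""
    return 1.0 / (2.0 * lp_per_mm)

# A 0.5-mm blur limits resolution to about 1 lp/mm, consistent with the
# alignment of the two scales in Figure 18-10.
```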
The spatial frequency spectrum of an image is determined by the frequency spectrum of the object and by the MTF of the imaging system, as illustrated in Figure 18-11.

Figure 18-11 The Relationship of an Image Spatial Frequency Spectrum to the Object Spectrum and the MTF of the Imaging System

Any frequency components of the object that are above the resolution limit of the system are completely lost. In effect, the MTF of the imaging
system can cut out the higher frequency components associated with an object,
and the image will be made up only of lower frequency components. Since low
frequency components are associated with large objects with gradual changes in
thickness, as opposed to sharp edges, the image will be blurred.
A property of MTFs that is not possessed by CTFs is their ability to be cascaded. Consider an imaging system in which the sources of blur are the focal spot and receptor. The blur characteristics of each of these can be described by an MTF curve, as shown in Figure 18-12. The MTF of the total system is found by multiplying the two MTF values at each frequency. For example, if at 2 cycles per mm the MTF for the focal spot is 62%, and for the receptor is 72%, the total system will have an MTF value of 44%. It should be observed that a system cannot have frequency components that are higher than the resolution limit of either the receptor or the focal spot. This is equivalent to saying that the total system blur cannot be less than the blur from either of the two sources. If the MTFs of the different blur sources (focal spot, motion, or receptor) are significantly different, the one with the lowest limiting frequency (the largest blur) will generally determine the MTF of the total system. In other words, if one source is producing significantly larger blur than the other sources, the total system blur will be essentially equal to the largest blur value.
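The cascading rule can be sketched as a small calculation (the function name is illustrative):

```python
def cascade_mtf(*component_mtfs):
    """Total system MTF at one spatial frequency: the product of the
    component MTF values (as fractions) at that same frequency."""
    total = 1.0
    for mtf in component_mtfs:
        total *= mtf
    return total

# The example from the text: focal spot 62% and receptor 72% at
# 2 cycles/mm give a system MTF of 0.62 * 0.72 = 0.4464, about 44%.
```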
Chapter 19
Radiographic Detail
Of all medical imaging systems, radiography has the ability to produce images with the greatest detail. All radiographs, however, contain some blur. The three basic sources of radiographic blurring and loss of detail are indicated in Figure 19-1 and are (1) the focal spot, (2) motion during film exposure, and (3) the receptor. Most receptor blur is produced by the spreading of light within the intensifying screen, as described in Chapter 14. Light crossover between the two film emulsions can add to receptor blur. If the intensifying screen and film surfaces do not make good contact, additional blurring is produced.
Overall radiographic blur values typically range from a few tenths of a millimeter to approximately 1 mm. The blur value for a specific radiographic procedure depends on a combination of factors including focal spot size, type of intensifying screen, location of the object being imaged, and exposure time (if motion is present). The general objective is not to produce a radiograph with the greatest possible detail but to produce one with adequate detail within the confines of x-ray tube heating and patient exposure.
We begin by considering the characteristics of the three basic blur sources, and
then show how they combine to affect image quality.
[Figure 19-1 The three basic sources of radiographic detail loss (blurring): the focal spot, motion, and the receptor]
FOCAL SPOT
[Figure 19-2: the geometry used to evaluate blur, showing the focal spot, object plane, and receptor plane, with the focal spot-object distance (FOD), object-receptor distance (ORD), and focal spot-receptor distance (FRD); blur at the receptor plane is magnified relative to the object plane]
The most appropriate location for evaluating blur is at the location of the object, where the blur values have meaning with respect to image detail. Another major reason for evaluating blur at the location of the object, rather than at the receptor, is that the relationship of blur values to all contributing factors is a simple linear relationship, as described below.
MOTION BLUR
Blurring will occur if the object being imaged moves during the exposure. The amount of blur in the object plane is equal to the distance moved during the exposure. As shown in Figure 19-3, the blur value at the receptor is larger, in proportion to the magnification factor, m. The effect of motion on each point within the object is to reduce contrast and spread the image over a larger area, as indicated. If motion of body parts cannot be temporarily halted, motion blur can be minimized by reducing exposure time.

A misconception regarding motion blur is that it is increased by magnification. Although it is true that the blur at the receptor surface is increased, image quality generally depends on the amount of blur with respect to the object size. This value is not affected by magnification.
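The relationship between motion, exposure time, and blur described above can be sketched as follows (a simple model assuming uniform motion; names and values are illustrative):

```python
def motion_blur(object_speed_mm_s, exposure_time_s, magnification=1.0):
    """Motion blur in the object plane equals the distance moved during
    the exposure; at the receptor it is larger by the magnification
    factor m. Returns (object-plane blur, receptor-plane blur) in mm."""
    object_plane = object_speed_mm_s * exposure_time_s
    receptor_plane = object_plane * magnification
    return object_plane, receptor_plane
```

Note that changing the magnification changes only the receptor-plane value; the blur relative to the object, which governs image quality, is unchanged.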
All x-ray tube focal spots have a finite size, and this contributes to image blur. Consider the example shown in Figure 19-4. X-ray photons passing through each point of the object from the focal spot diverge and form a blurred image of the object point. The blur value with respect to the object size (in the object plane) is given by

Bf = F x s

where F is the focal spot size and s is the position of the object expressed as the fraction of the distance from the receptor to the focal spot.
Focal spot blur increases with the distance between the object and the receptor. The maximum blur occurs at the focal spot, where the blur value becomes equal to the actual size of the spot.
The effect of focal spot size on the visibility of a specific object depends on
three factors: (1) the size of the object, (2) the size of the focal spot, and (3) the
location of the object.
Figure 19-5 Relationship of Focal Spot Blur to Focal Spot Size and Object Location
Manufacturer's Tolerance
Blooming
A common characteristic of many focal spots is that they undergo a change in size with changes in MA and KVP. This effect is known as blooming. If the size of a focal spot is measured at a relatively low tube current, the size during operation at higher tube current values can be significantly larger. The amount of blooming with an increase in tube current varies from tube to tube. In some tubes, the blooming of the focal spot in one direction is more than in the other. Blooming is generally greater for small focal spots.

KVP generally has less effect on focal spot size than tube current. Some focal spots undergo a slight reduction in size with increased KVP.
Intensity Distribution
Most focal spots do not have a uniform distribution of radiation over their entire area. A non-uniform distribution of x-ray intensity causes a focal spot to have an effective blur size different from its actual physical size. The variation in x-ray emission can be represented by an intensity profile, as shown in Figure 19-7. The focal spot with a rectangular intensity distribution (center) has an effective blur size identical with the dimensions of the spot. A focal spot with a double-peak distribution (top) has an effective blur size significantly larger than the actual dimension of the spot. This double-peak distribution is characteristic of many focal spots. The third distribution (bottom) has an approximately Gaussian shape. A focal spot with this type of intensity distribution has an effective blur size less than its actual physical size. Gaussian-shaped focal spots are highly desirable because they have a relatively low effective blur size in comparison to their heat capacity. A focal spot with an approximate Gaussian distribution can be produced in line focus tubes by applying a bias voltage between the cathode elements.
Anode Angle
The size of the focal spot is usually specified with reference to the center of the x-ray beam area, or field. Because of the angle of the anode surface, the effective focal spot size changes with position in the field. It becomes smaller toward the anode end and larger toward the cathode end of the field. The variation in effective focal spot size with position within the field is more significant in tubes with small anode angles.
Pin-Hole Camera
The principle of the pin-hole camera is illustrated in Figure 19-8. The pin-hole camera consists of a very small hole in a sheet of metal, such as gold or lead. The pin-hole is positioned between the focal spot and a film receptor, as shown. When the x-ray tube is energized, an image of the focal spot is projected onto the film. The size of the focal spot can be determined by measuring the size of the image and applying a correction factor if there is any geometric magnification present. If the pin-hole is located at the midpoint between the focal spot and film, no correction factor will be required.
The effective blur size of a focal spot can be measured by using a test object that measures blur or resolution. The most common object used for this purpose has alternating lines and spaces arranged in a star pattern, as shown in Figure 19-9. The first step in determining focal spot size is to make a radiograph with the test object located at approximately the midpoint between the focal spot and receptor. An image is obtained in which there is a zone of blurring at some distance from the center of the object. The distance between the blur zones is measured and used to calculate the size of the focal spot by using the following formula:

F = (π x θ x D)/(180 x (M - 1))

Figure 19-9 The Image of a Star Test Pattern Used To Determine Focal Spot Size

where F is the effective focal spot size, D is the diameter of the blur circle illustrated in Figure 19-9, M is the magnification factor, and θ is the angle of one test-pattern segment, in degrees.
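The star-pattern formula can be applied as a small calculation (a sketch; the numeric values in the comment are illustrative, not from the text):

```python
import math

def focal_spot_from_star(theta_deg, blur_diameter_mm, magnification):
    """Effective focal spot size from a star test pattern image:
    F = pi * theta * D / (180 * (M - 1)), with theta in degrees,
    D the blur-circle diameter in mm, and M the magnification."""
    return (math.pi * theta_deg * blur_diameter_mm
            / (180.0 * (magnification - 1.0)))

# Example: a 1.5-degree pattern imaged at magnification 2 with a
# measured blur-circle diameter of 38.2 mm gives F of about 1 mm.
```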
RECEPTOR BLUR
If the receptor input surface that absorbs the x-ray beam has significant thickness, blur will be introduced at this point. Blurring of this type generally occurs in intensifying screens and in the input phosphor layers of image intensifiers. Blur production was illustrated in Figure 14-5. X-ray photons that pass through a point within the object are absorbed by the phosphor layer and converted into light. The light created along the x-ray "path" spreads into the surrounding portion of the phosphor layer. When the light emerges from the screen it covers an area that is larger than the object point. In other words, the x-rays that pass through each point within an object form an image that is blurred into the surrounding area.
Because this type of blur is caused by the spreading, or "diffusing," of light, the blur profile generally has a shape different from that of motion blur. In most cases, the blur profile is "bell-shaped" or Gaussian in nature. Because it is somewhat difficult to specify an exact blur dimension, receptor blur is more appropriately described in terms of an effective blur value. The amount of blur is primarily dependent on the thickness of the phosphor layer.

Intensifying screens generally have effective blur values in the range of 0.15 mm to 0.6 mm. The approximate breakdown for the basic screen types is as follows:

• mammographic, 0.15-0.2 mm
• detail, 0.2-0.35 mm
• high speed, 0.35-0.6 mm

The thicker, high-speed screens produce more blur but require less exposure.
In a given imaging system, the receptor blur value with respect to the size of the object can be decreased by introducing magnification. This is illustrated in Figure 19-10, in which a small object is being imaged. The presence of receptor blur produces a zone of unsharpness around the object. The actual blur dimension at the receptor surface is fixed by the receptor characteristics and is unaffected by object position. Magnification changes the relationship between the amount of blur and the size of the object image because the blur value at the receptor remains fixed. The amount of receptor blur projected back to the location of the object is therefore reduced by magnification. Receptor blur in the object plane is given by

Br = Bro/m

where Bro is the equivalent blur value of the receptor measured at the receptor surface and m is the magnification factor. Although magnification can be used to reduce receptor blur with respect
Figure 19-10 Relationship between Receptor Blur and Object Location for Three Types of Intensifying Screens
to object size, it must be approached with caution. Since focal spot blur increases
with magnification, the two blur sources must be considered together.
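The back-projection of receptor blur to the object plane can be sketched as follows (assuming the usual geometric relationships; names are illustrative):

```python
def magnification(frd, fod):
    """Geometric magnification from focal spot-receptor distance (FRD)
    and focal spot-object distance (FOD)."""
    return frd / fod

def receptor_blur_in_object_plane(bro_mm, m):
    """Receptor blur referred back to the object: Br = Bro / m, where
    Bro is the equivalent blur at the receptor surface and m >= 1."""
    return bro_mm / m

# Moving the object away from the receptor (m = 1.5 instead of 1.0)
# reduces a 0.3-mm receptor blur to 0.2 mm in the object plane,
# but recall that focal spot blur grows at the same time.
```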
COMPOSITE BLUR
[Figure 19-11 A blur nomograph with scales for blur (mm), focal spot size, and object position (the s scale, from 0 at the receptor to 1 at the focal spot)]
For example, if receptor blur, Br, is 0.3 mm; focal spot blur, Bf, is 0.2 mm; and motion blur, Bm, is 0.2 mm; the total blur, Bt, is found by combining the three individual values. The combined value is larger than any one of the sources but less than their simple sum.
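As a worked sketch of the numerical example above, assuming the three blur sources combine in quadrature (root-sum-square addition, one common convention for combining independent blur contributions; other combining rules exist):

```python
import math

def total_blur(bf_mm, bm_mm, br_mm):
    """Combine focal spot, motion, and receptor blur into a single
    value, assuming quadrature (root-sum-square) addition."""
    return math.sqrt(bf_mm**2 + bm_mm**2 + br_mm**2)

# With Bf = 0.2 mm, Bm = 0.2 mm, and Br = 0.3 mm, the combined
# value is about 0.41 mm: larger than any single source, smaller
# than the 0.7-mm arithmetic sum.
```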
It was shown earlier that blur from two sources, the receptor and the focal spot, depends on the position of the object with respect to the s scale. As an object is moved away from the receptor, focal spot blur increases and receptor blur decreases. This means that the position of the object must be taken into account when considering the blur characteristics of a specific x-ray system. For most systems, there is an object position on the s scale that produces the minimum blur. Moving the object in either direction, toward or away from the receptor, increases the blur.
The relationship between blur and object position is easy to visualize by using a
blur nomograph, as shown in Figure 19-11. The nomograph has three scales: (1)
blur, (2) focal spot size, and (3) the position of the object, or s scale. The lines
representing the blur from the three sources are drawn on the diagram according to
the following simple rules.
The line representing receptor blur is drawn between a point on the blur scale
that represents the equivalent blur of the particular receptor being used and a point
located at a value of 1 on the s scale.
Figure 19-12 Blur Produced by High Speed Screens and a 0.5-mm Focal Spot
Figure 19-13 Blur Produced by Detail Screens and a 1-mm Focal Spot
The line representing focal spot blur is drawn between the zero (0) point on the s scale and a point on the focal spot scale that corresponds to the size of the focal spot.

The line for motion blur is drawn horizontally and intersects the blur scale at a value equal to the distance the object moved during the exposure interval. The significance of the horizontal line for motion blur is that its value relative to the size of an object does not change with magnification. In most applications, the actual value for motion blur is difficult to estimate; it is included in this illustration for completeness. Increasing receptor blur or decreasing focal spot size shifts the minimum blur point to higher values on the s scale. Inspection of the diagram leads to two significant observations:
1. When an object is located near the receptor surface (low s scale value), the
total blur is essentially a function of the receptor.
2. As an object is moved away from the receptor surface (high s scale value),
the focal spot becomes the major determining factor in overall system blur.
Figure 19-12 shows the composite blur for a radiographic system using high-speed screens and a 0.5-mm focal spot. With this combination, the intensifying screen is the most significant blur source. Notice that the blur decreases with magnification.

Figure 19-13 illustrates the blur produced when using detail screens and a 1-mm focal spot. With this combination, the focal spot is the predominant blur source for most object locations.
These two examples show how either the receptor or the focal spot can be the
predominant source of blur in a specific radiographic system. The dominating blur
source is determined by the relationship of receptor blur to focal spot size and the
location of the object; these are the factors that must be considered when setting up
a radiographic system.
Chapter 20

Fluoroscopic Imaging Systems

Fluoroscopy produces real-time x-ray images and is especially useful for guiding a procedure, searching through a body section, or observing a dynamic function. Fluoroscopic examinations began soon after the discovery of x-radiation. Since that time, however, the fluoroscopic imaging system has undergone several major changes that have improved image quality, reduced patient exposure, and provided much more flexibility and ease of use.
The receptor for the first-generation fluoroscope was a flat fluorescent screen,
which intercepted the x-ray beam as it emerged from the patient's body. The x-ray
beam carrying the invisible image was absorbed by the fluorescent material and
converted into a light image. In fact, it is the fluorescent screen receptor that gives
the name "fluoroscopy" to the procedure.
Under normal operating conditions, the image had a relatively low brightness level. Because of the low light intensity, it was usually necessary to conduct examinations in a darkened room with the eyes dark-adapted by wearing red goggles or remaining in the dark for approximately 20 minutes. The contrast sensitivity and visibility of detail were significantly less than what can be achieved with more modern equipment.

The next generation of fluoroscopes used an image intensifier tube as the receptor. The image produced by the intensifier tube was generally an improvement over the fluoroscopic screen image. An examination was performed by viewing the image from the intensifier tube through a system of mirrors. The viewing was generally limited to one person unless a special attachment was used.
The next step in the evolution of the fluoroscope was the introduction of a video
(TV) system to transfer the image from the output of the image intensifier tube to
a large screen.
INTENSIFIER TUBES
In any x-ray imaging process, it is necessary to convert the invisible x-ray image into a visible image. There are two major reasons for this conversion: a light image can be visualized by the human eye, and film is generally more sensitive to light than it is to x-radiation. We have already seen that certain fluorescent materials are used in intensifying screens to convert the x-rays into light images. Although a fluorescent screen performs this conversion, its light output is generally too low for direct visualization, photographing with a camera (cine or spot film), or viewing with a television camera. In many applications, a device is needed that will convert the x-ray image into light and intensify, or increase the brightness of, the light in the process. The image intensifier tube is such a device.
Fluoroscopic Imaging Systems 285
[Figure 20-2 Comparison of an intensifier tube and a fluorescent screen exposed to the same x-ray input (1 mR/sec): the screen emits 0.014 nit and the tube 75 nits, giving a gain of 75/0.014 = 5360 and a conversion factor of 75 nits per mR/sec]
Before considering the details of intensifier tube function, let us compare its overall function to that of a fluorescent screen, as shown in Figure 20-2. The tube is exposed to the x-ray beam, and light is emitted from the other end. One of the important characteristics of a specific intensifier tube is its ability to produce a bright light image.
Gain
Gain is one factor used to describe the ability of a tube to produce a bright image. As illustrated in Figure 20-2, the gain value of a specific tube is the ratio of its light brightness to that of a reference fluorescent screen receiving the same exposure.

Conversion Factor

The conversion factor is the ratio of a tube's light brightness to the input x-ray exposure rate. Gain and conversion factor are merely two terms used to describe the same general characteristic of an intensifier tube: its ability to convert an x-ray exposure into a bright image. The approximate relationship between the gain value and the conversion factor value is

Gain = 70 x Conversion factor.
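These definitions can be sketched numerically, using the values shown in Figure 20-2 (the function names are illustrative):

```python
def conversion_factor(brightness_nits, exposure_rate_mr_per_s):
    """Conversion factor: output brightness per unit input exposure rate
    (nits per mR/sec)."""
    return brightness_nits / exposure_rate_mr_per_s

def gain_from_conversion_factor(cf):
    """Approximate relationship given in the text: Gain = 70 x CF."""
    return 70.0 * cf

# For the tube of Figure 20-2: 75 nits at 1 mR/sec gives CF = 75,
# and 70 x 75 = 5250, close to the measured gain of 75/0.014 = 5360.
```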
The light from the intensifier tube is several thousand times brighter than the light from a fluorescent screen. This is achieved in two ways.
Intensifier Tube
Figure 20-3 The Events That Produce Electronic Gain in an Image Intensifier Tube
These additional steps are necessary to add energy to, or intensify, the image. It is not possible to increase the energy of light photons. It is possible, however, to increase the energy of electrons. The result of this process is that an x-ray photon can produce a much brighter light at the output of an intensifier tube than in a fluorescent screen.
Along the length of the tube is a series of electrodes that focus the electron
image onto the output screen. The voltage applied to the focusing electrodes can
be switched to change the size of the input image, or field of view, as shown in
Figure 20-4. The maximum field of view is determined by the diameter of the
tube.
In most tubes, the input image area can be electrically reduced. When the tube is
switched from one mode to another, the factors that change include field of view,
image quality, and receptor sensitivity (exposure). The tube should be operated in
the large field mode when maximum field of view is the primary consideration. In
this mode, the tube has the highest gain and requires the lowest exposure because
the minification gain is proportional to the area of the input image. The small field
mode is used primarily to enhance image quality. As an image passes through an
intensifier tube, its quality is usually reduced.
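The statement that minification gain is proportional to the area of the input image can be sketched as follows (the numeric field sizes in the comment are illustrative, not from the text):

```python
def minification_gain(input_field_mm, output_screen_mm):
    """Minification gain: the ratio of input to output image areas,
    which for circular fields is the squared ratio of the diameters."""
    return (input_field_mm / output_screen_mm) ** 2

# Switching a dual-mode tube from a larger to a smaller input field
# reduces the input area, and therefore the minification gain, so a
# higher input exposure is required in the small-field mode.
```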
Contrast
We know that the contrast in an image delivered to the film is reduced by both object penetration and scattered radiation. When image intensifier tubes are used, there is an additional loss of contrast because of events taking place within the tube. Some of the radiation that penetrates the input screen can be absorbed by the output screen and can produce an effect similar to scattered radiation. Also, some of the light produced in the output screen travels back to the photocathode and causes electrons to be emitted. These electrons are accelerated to the output screen, where they contribute additional exposure to the image area and further reduce contrast. The contrast reduction in modern intensifier tubes is generally in the range of 5% to 15%.
Blur
There are potential sources of blur within the system. The spreading of
several
light in the input and output screens of the image intensifier tube produces blur as
it does in intensifying screens. In many image intensifier tubes, both screens are
significant sources of blur. Actually, the blur in the output screen is quite small
and becomes significant only when it is referred back to the input screen. The
minification of the image within the tube has the effect of magnifying the blur of
most image intensifier tubes so that blur increases with the size of the input image.
With dual-mode tubes, the larger field generally produces more blur than the
smaller field.
Fluoroscopic Imaging Systems 289
Noise
VIDEO SYSTEMS
The primary function of a video system is to transfer an image from one location
to another. During the transfer process, certain image characteristics, such as
size, brightness, and contrast, can be changed. However, as an image passes
through a video system, there can be a loss of quality, especially in the form of blur
and loss of detail visibility.
Video Principles
The two major components of a video system are the camera and the monitor, or
receiver. Conventional broadcast television systems transmit the image from the
camera to the receiver by means of radio frequency (RF) radiation. In a closed
circuit system, the image is transmitted between the two devices by means of
electrical conductors or cables. However, other than for the means of image
transmission, the basic principles of the two systems are essentially the same.
A basic video system is illustrated in Figure 20-5, which shows the major
functional components contained in the camera and the monitor. The "heart" of each is
an electronic tube that converts the image into an electrical signal or vice versa.
The function of the camera tube is to convert the light image into an electronic
signal. Broadcast television systems use camera tubes known as image orthicons.
Fluoroscopic systems generally use either vidicon or plumbicon tubes; they are
smaller and less complex than image orthicons. They also require brighter input
images, but this is not a problem in fluoroscopy. The significant difference
between the vidicon and plumbicon is one of image persistence, or lag.
One end of the tube contains a heated cathode and other electrodes that form an electron
gun. The electron gun shoots a small beam of electrons down the length of the
290 Physical Principles of Medical Imaging
evacuated tube. The electron beam strikes the rear of the screen surface on which
the input image is projected.
Electrical signals are applied to the camera tube, which causes the electron
beam to be swept over the surface of the input screen. One signal moves the beam
in a horizontal direction, and the other signal moves it vertically. The two signals
are synchronized so that they work together to move the beam in a specific pattern.
Although the input screen is round, the area covered by the scanning electron
beam is generally rectangular. The beam begins in the upper left-hand corner and
moves across the screen horizontally, as shown in Figure 20-6. In the conventional
American 525-line video system, lines are scanned at the rate of 15,750 per second.
When the beam reaches the right-hand side, it is quickly "snapped back" to the left
and deflected downward by approximately one beam width. It then sweeps across
to form a second scan line. This process is repeated until the beam reaches the
bottom of the screen, which usually requires 1/60th of a second. After reaching the
bottom, the beam returns to the top and resumes the scanning process. This time,
however, the scan lines are slightly displaced with respect to the first set so that
they fall between the lines created in the first scan field. This is known as
interlacing.
Interlacing is used to prevent flicker in the picture. If all lines were scanned
consecutively, it would take twice as long (1/30th of a second) for the beam to
reach the bottom of the screen. This delay would be detectable by the human eye
and would appear as flicker. With interlacing, the face of the screen is scanned in
two sets of lines, or fields. The pattern of scan lines produced in a video system is
known as raster. The conventional video system is generally set to contain 525
lines. The raster, however, contains approximately 485 lines. The remaining lines
are lost during the return of the electron beam to the left side of the screen. In a
525-line system, 30 complete raster frames (60 fields) are formed per second.
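These timing figures are mutually consistent, as a quick arithmetic check shows (a sketch using only the values quoted above):

```python
# Timing of the conventional American 525-line interlaced video system.
LINES_PER_FRAME = 525
FRAMES_PER_SECOND = 30      # each frame = two interlaced fields
FIELDS_PER_SECOND = 60

lines_per_second = LINES_PER_FRAME * FRAMES_PER_SECOND
lines_per_field = lines_per_second / FIELDS_PER_SECOND
seconds_per_field = 1 / FIELDS_PER_SECOND

print(lines_per_second)              # 15750, the quoted scan rate
print(lines_per_field)               # 262.5 lines traced per field
print(round(seconds_per_field, 4))   # 0.0167, i.e., 1/60th of a second
```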
The screen of the camera tube is made of a material with light-sensitive
electrical properties. Several types of tubes are used, but the general concept of tube
function can be described as follows.
The electrical conductivity of the screen surface depends on its illumination.
When an image is projected onto the screen, the conductivity varies from point to
point. A dark area is essentially nonconductive, and a brightly illuminated area is
the most conductive. As the electron beam sweeps over the surface, it encounters
areas with various levels of conductivity, which depend on the brightness at each
point. When the beam strikes a bright spot, it is "conducted through" and creates a
relatively high signal voltage at the output terminal of the tube. As the spot moves
across the screen, it creates a signal that represents the brightness of the image at
each point.
The picture tube, located in the monitor, is the device on which the video image
is displayed. Like the camera tube, the picture tube has an electron
gun located in the end opposite to the screen. The electron gun produces a beam of
electrons that strike the rear of the screen in the picture tube. The electron beam
scans the surface of the picture tube screen just as it scans the camera tube screen.
In fact, the scanning in the two tubes is synchronized by a signal transmitted from
the camera to the monitor. If the scanning becomes unsynchronized, the image
will roll in the vertical direction or become distorted horizontally. The horizontal
and vertical controls on a video monitor are used to adjust the scan rates so that
they are identical with those of the camera and can maintain synchronization.
When the electron beam in the picture tube strikes the screen, it produces a
bright spot. The brightness of the spot is determined by the number of electrons in
the beam, which is controlled by the signal from the camera tube. In other words,
the brightness of a spot on the picture tube screen is determined by the brightness
of the corresponding point on the camera tube screen. As the two electron beams
scan the two screens, the image is transferred from the camera tube to the picture
tube. In the 525-line system, complete images are transferred at the rate of 30 per
second.
Contrast
In the typical video system, image contrast can be changed by adjusting a
control located in the monitor. The contrast control changes the amplification of the
video signal. The brightness of each point within the image on the picture tube
screen is determined by the voltage of the video signal associated with the point.
The contrast of an image on the screen is the brightness difference between two
points, such as background and object area, and is determined by the voltage
difference in the video signal for the two areas. When a video signal is amplified, the
voltage difference between two points is increased. This, in turn, produces a larger
difference in brightness, or more contrast. At first, it might appear that
amplification of the video signals not only increases contrast, but also produces a brighter
image. Most circuits are designed, however, so that changing the contrast control
does not appreciably change the average signal level. The average video signal
level is changed by adjusting the brightness control.
Although the contrast and brightness controls are essentially separate and
independent, they must generally be adjusted together for optimum image quality. The
typical picture tube has a maximum brightness level that cannot be exceeded, re¬
gardless of the value of the incoming video signal. If the average signal level is
pushed toward this upper limit by turning up the brightness control, contrast will
generally decrease. This is because the bright (white) areas have reached a
limiting value, and the darker areas are increasing in brightness. Since the bright areas
cannot increase above the maximum limit, the contrast between the two areas is
reduced.
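The interplay of the two controls can be sketched with a simple signal model. The gain, offset, and 100-unit brightness ceiling below are hypothetical values chosen for illustration, not parameters of any particular monitor:

```python
# Why raising the brightness control near the tube's limit reduces contrast.
# Gain models the contrast control, offset the brightness control, and
# MAX_BRIGHTNESS the picture tube's limiting value (all hypothetical units).

MAX_BRIGHTNESS = 100.0

def displayed_brightness(signal, gain, offset):
    """Clip the amplified video signal at the tube's maximum brightness."""
    return min(MAX_BRIGHTNESS, gain * signal + offset)

background, object_area = 2.0, 4.0   # arbitrary video signal voltages

# Moderate brightness: both areas stay below the limit, full contrast.
contrast_low = (displayed_brightness(object_area, 20, 10)
                - displayed_brightness(background, 20, 10))

# Brightness turned up: the white area clips at 100 and contrast drops.
contrast_high = (displayed_brightness(object_area, 20, 40)
                 - displayed_brightness(background, 20, 40))

print(contrast_low, contrast_high)   # 40.0 20.0
```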
Blur
In video systems, the vertical and horizontal components of blur are approximately
equal.
Vertical Blur
Vertical blur is caused by the finite size of the electron beam and the scan lines.
The effect of vertical blur is illustrated in Figure 20-7. If a small-line-type object is
oriented at a slight angle to the scan lines, the images of the object will appear to
be wider because of blur. At any instant, the width of a line in the image cannot be
less than the width of one scan line. An object is normally not perfectly aligned
with a single scan line. Therefore, the width of the image of the line is slightly
larger than the width of a single scan line. The approximate relationship between
blur (image line width), Bv, and scan line width, w, is given by
Bv = 1.4 w.
The width of a scan line, w, is, in turn, related to the vertical field of view (FOV)
and the number of actual scan lines. These factors can be incorporated into the
relationship above to give the following:
Bv = 1.4 FOV/n
where FOV is the vertical dimension of the image, and n is the number of useful
scan lines within that dimension. For an image containing a given number of scan
lines, vertical blur is directly proportional to the dimension of the image, or FOV.
Where within the system is the image dimension determined? Should it be on the
monitor screen, at the camera tube, or at some other point? The appropriate
location for determining image size is in the plane through the object being imaged, as
illustrated in Figure 20-8.
The blur value determined by using this image dimension is properly scaled to
the size of the object and can be easily compared to blur values for the focal spot
and receptor. In the plane of the object, the image diameter, FOV, is equal to the
FOV of the image at the input to the image intensifier divided by the geometric
magnification factor, m. Substitution of this expression for image size gives an
expression for image blur that is related to three specific factors:
Bv = 1.4 FOV (image tube)/(n × m).
Special attention is called to the fact that video blur is directly proportional to
the FOV at the input to the image intensifier tube, as illustrated in Figure 20-9. A
small FOV produces better detail. For example, assume that a system has 485
useful lines and a magnification factor, m, of 1.2. For a 15-cm (6-in.) input image
size, the blur is 0.36 mm. If the image size is switched to 23 cm (9 in.), the blur will
increase to 0.55 mm. When the size of the image at the input of the image
intensifier is increased, the video lines are spread over a larger object area. Since the
number of lines is not changed, the width of the lines must increase. Since blur is
directly related to line width, it is obvious that it must increase with an increase in
image size.
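The example above can be reproduced directly from the blur relationship (a sketch; FOV in millimeters at the intensifier input):

```python
# Vertical video blur: Bv = 1.4 * FOV / (n * m), with FOV the input
# field of view, n the useful scan lines, and m the magnification.

def vertical_blur_mm(fov_mm, useful_lines=485, magnification=1.2):
    return 1.4 * fov_mm / (useful_lines * magnification)

print(round(vertical_blur_mm(150), 2))   # 15-cm input field: 0.36 mm
print(round(vertical_blur_mm(230), 2))   # 23-cm input field: 0.55 mm
```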
Blur can be decreased by increasing the number of lines used to form the video
image. Figure 20-10 illustrates two of the most common video formats used in
Figure 20-8 The Factors That Affect Line Width and Vertical Detail in a Video Image
Horizontal Blur
Figure 20-10 Improvement in Image Detail by Increasing the Number of Scan Lines
designer and is usually the factor used to adjust the horizontal blur value. In most
systems, the horizontal blur is adjusted to be approximately equal to the vertical
blur. There is probably no advantage in having significantly more or less blur in
one direction than the other.
Noise
The two types of noise in a fluoroscopic system are electronic and quantum.
Electronic noise produces "snow," which is familiar to most television viewers. It
is usually significant only when the video signals are extremely weak. Since signal
strength is not a problem in the typical closed circuit video system, the presence of
significant "snow" or electronic noise usually indicates problems within the video
system.
Quantum noise depends on the number of photons used to form the image. The
number of photons involved in image formation is directly related to the receptor
input exposure. The input exposure, for a specific image brightness, can be
adjusted by changing the automatic brightness control circuit reference level, as
discussed in Chapter 21. An input exposure rate of approximately 0.025 mR/sec (1.5
mR/min) is usually required to reduce quantum noise to an acceptable level in
fluoroscopy.
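The quoted rate converts as follows; combining it with the eye's approximately 0.2-second integration time discussed next gives the exposure behind each perceived image (a sketch):

```python
# Fluoroscopic receptor input exposure from the quoted rate of 0.025 mR/sec.
RATE_MR_PER_SEC = 0.025
EYE_COLLECTION_TIME_SEC = 0.2   # approximate integration time of the eye

per_minute = RATE_MR_PER_SEC * 60
per_perceived_image = RATE_MR_PER_SEC * EYE_COLLECTION_TIME_SEC

print(per_minute)                     # 1.5 mR/min, as quoted
print(round(per_perceived_image, 3))  # 0.005 mR per perceived image
```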
The noise level is related to the total number of photons used to form an image,
not the rate. The human eye has an effective "collection" time of approximately
0.2 seconds. In some video systems, the time during which photons are collected
to form an image is longer because of the persistence, or lag, inherent in the camera
tube.
An optical system is used to transfer the image from the output screen of the
intensifier tube to the input screen of the video camera tube or to the film in the
spot or cine camera. The components of the total optical system are contained in
the image distributor and the individual cameras, as shown in Figure 20-11, and
are lenses, apertures, and mirrors. Before we consider the operation of the optical
system, we will review the basic characteristics of two of these components,
lenses and apertures.
Lens
A lens is the basic element that can transfer an image from one location to
another. The curvature of the lens focuses the light that passes through the lens.
Figure 20-11 (intensifier tube, collimator lens, apertures, video camera, and spot-film camera)
Aperture
Another important characteristic of a lens is its size (diameter, or aperture). This
determines the amount of light that is captured by the lens. This, in turn, affects the
efficiency of light transfer through the optical system and the exposure to film in
the camera. Actually, the factor that determines the efficiency of a lens is the ratio
of its size to its focal length, which is generally expressed in terms of the f number,
which is
f number = focal length/lens diameter.
It should be noted that the f number is inversely related to the diameter of the lens.
In other words, as the size of the lens is increased, the value of the f number is
decreased. The efficiency of a lens is given by
Efficiency = 1/(4f²)
Certain f number values are commonly used and correspond to lens areas that
differ by factors of two. This relationship between f number and relative
lens area is shown in Table 20-1.
The difference between any two adjacent standard f numbers is referred to as
one stop. A change of one stop corresponds to changing the relative area, or
efficiency, of the lens (and film exposure) by a factor of two.
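The stop sequence and the efficiency relationship can be checked numerically (a sketch; the nominal f numbers in Table 20-1 are rounded powers of the square root of two):

```python
import math

# Lens efficiency and the one-stop sequence of f numbers.
def efficiency(f_number):
    return 1 / (4 * f_number ** 2)

# Each stop divides the f number by sqrt(2) and doubles the relative
# lens area, reproducing Table 20-1 from f/16 downward (the printed
# f values are unrounded, e.g. 11.3 rather than the nominal 11).
for stop in range(9):
    f = 16 / math.sqrt(2) ** stop
    print(round(f, 1), 2 ** stop)

# One stop corresponds to a factor of two in efficiency (film exposure):
print(round(efficiency(2.0) / efficiency(2.8), 1))   # 2.0
```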
In many applications, it is desirable to adjust the aperture size, or light-gathering
capacity, of the system. In some systems, the size of the aperture is changed by
replacing one mask (aperture) with another.
Image Distributor
When intensifying screens are used in radiography, the light is transferred di¬
rectly to the film because the film is in direct contact with the screen. This results
in a film image that is the same size as the image from the intensifying screen. The
output image from the typical intensifying tube, however, is only about 20 mm in
diameter and must be enlarged before it is applied to the film. In order to be
enlarged, the film must be separated from the output screen. The transfer of the
image from the image intensifier output to the film surface is the main function of the
optical system. If only one camera is to be used, it will be a relatively simple
process to mount the camera so that it views the image from the intensifier tube.
Table 20-1 Relationship between f Number and Relative Lens Area

f Number     Relative Lens Area
16             1
11             2
8              4
5.6            8
4             16
2.8           32
2.0           64
1.4          128
1.0          256
Collimator Lens
The first component of the optical system encountered by the light from the
intensifier output screen is the collimator lens. Its function is to collect the light
from the output screen and focus it into a beam of parallel light rays, as shown in
Figure 20-11. The parallel rays are produced by positioning the lens so that the
image intensifier screen is located near the focal point of the lens. One of the
fundamental characteristics of a converging lens is that light originating at its
focal point forms a beam of parallel rays after passing through the lens. Light
originating from points at distances other than the focal length from the lens forms
into either a diverging or converging beam after passing through the lens. The
formation of the image into a parallel beam makes it possible to distribute the
image to two or more devices, such as a spot film camera and a video camera, for
fluoroscopy.
Mirrors
The next element in the path of the light is a beam splitter. A splitter is usually
a mirror that is only partially reflective. A portion of the light is reflected by the
mirror to one camera. The remaining light passes through the mirror to a second
camera. In some systems, the mirror is attached to a rotating mechanism so that it
can be shifted from one camera to another. The mirrors are generally designed to
divide the light unevenly between two devices. For example, a 70-30 mirror sends
70% of the light to a film camera and 30% to a video camera to form the
fluoroscopic image.
Vignetting
A potential problem with a two-lens optical system is vignetting. Vignetting is
the loss of film exposure around the periphery of the image. Light that leaves a
point near the center of the intensifier output screen passes through the collimator
lens and forms a beam that is parallel to the axis running through the two lenses.
Light that originates from points near the periphery of the image also passes
through the collimator lens but is projected in a beam that is not parallel to the
axis. In order for a camera lens to capture light from the periphery of the image, it
must be located in an area where the beams overlap. Vignetting usually occurs
when the camera lens is mounted too far from the collimator, or the camera lens is
too large to be contained in the overlap area. Since the effective diameter of the
camera lens is often determined by the size of the aperture, changing the aperture
size can change the degree of vignetting.
Camera
After passing through the aperture, a second lens focuses the light onto the
surface of the film to form the final image. This lens is part of the camera and serves
the same function as a conventional camera lens. The most significant
characteristic of this lens is its focal length.
The image at the intensifier output screen is circular, but all films have either square or
rectangular image areas. Because of this, it is impossible to get exact coverage on
the film.
For a given film size, it is possible to obtain different degrees of coverage, or
framing. With underframing, all of the image appears on the film but is circular;
the corners of the film area are not exposed. On the other hand, with overframing,
all of the film is used, but some portions of the image are lost.
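The geometry behind these framing options can be quantified for a square film format (an illustrative sketch; real formats and framing conventions vary between systems):

```python
import math

# Framing a circular image of diameter d on a square film of side s.
# Underframing: the whole circle fits (d = s); film corners go unexposed.
# Overframing: the circle circumscribes the film (d = s * sqrt(2));
# the film is filled, but the edge of the circular image is lost.

def film_fraction_exposed_underframing():
    return math.pi / 4          # circle area / square area, with d = s

def image_fraction_kept_overframing():
    return 2 / math.pi          # square area / circle area, d = s * sqrt(2)

print(round(film_fraction_exposed_underframing(), 2))   # 0.79
print(round(image_fraction_kept_overframing(), 2))      # 0.64
```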
Figure 20-12 Relationship of Image and Film Size for Different Degrees of Framing
Unless the optical system is properly focused, it will also be a source of image
blur. Both the collimator and camera lens can be out of focus. Proper
focusing of
the collimator lens requires special equipment and should be attempted only by
qualified personnel.
Film Exposure
Only a fraction of the light emitted by the image intensifier reaches the film.
The reasons for this are that the optical system does not capture all of the light, and
a significant portion of the light is absorbed by the lenses and mirrors within the
system or is stopped by the aperture. Only a fraction of light captured by the
collimator lens and passed through the aperture reaches the film because of absorption
in the lens. This fraction is typically about 0.8. If the light that reaches the film is
spread over an area that is larger than the output screen, the intensity, but not the
total number of light photons, will be reduced.
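The losses multiply along the optical chain. In the sketch below, the 0.8 lens transmission is the fraction quoted above and the 70-30 mirror split comes from the earlier example; the collimator capture fraction and aperture setting are assumed values for illustration:

```python
# Fraction of the intensifier output light that reaches the film,
# as a product of losses along the optical chain.
collimator_capture = 0.5    # assumed fraction captured by the collimator
mirror_to_film = 0.7        # 70-30 beam splitter, film-camera side
aperture_pass = 0.25        # assumed aperture setting (two stops down)
lens_transmission = 0.8     # fraction surviving absorption in the lens

fraction_to_film = (collimator_capture * mirror_to_film
                    * aperture_pass * lens_transmission)
print(round(fraction_to_film, 3))   # 0.07: only ~7% reaches the film
```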
RECEPTOR SENSITIVITY
Fluoroscopy
through an adjustment of the video camera sensitivity or the video camera aper¬
ture. The sensitivity and exposure also change with the field of view (mode), as
described above. The fluoroscope is most sensitive when operated with the
maximum field of view. Increasing field of view increases sensitivity and decreases
required exposure. Because the x-ray beam then covers more of the patient's body,
however, the total radiation energy to the patient is not significantly reduced.
Some fluoroscopic systems have a control that allows the operator to change the
sensitivity. This is used to control the level of quantum noise in the fluoroscopic
image. The low sensitivity (low noise) settings are used to improve visibility in
certain demanding procedures, such as angioplasty.
Radiography
large field mode is more sensitive and requires less radiation exposure than when
it is operated in the small field mode. The sensitivity is usually set in relationship
to the size of the film being used. In general, maximum sensitivity (minimum
Image Noise
Noise degrades image quality and is especially significant when the objects being imaged are
small and have relatively low contrast. This random variation in film density, or
brightness, arises from several sources, as we will soon discover. No imaging method is free of noise, but noise is
much more prevalent in certain types of imaging procedures than in others.
Nuclear images are generally the most noisy. Noise is also significant in MRI,
CT, and ultrasound imaging. In comparison to these, radiography produces
images with the least noise. Fluoroscopic images are slightly more noisy than
radiographic images, for reasons explained later. Conventional photography produces
relatively noise-free images except where the grain of the film becomes visible.
In this chapter we consider some of the general characteristics of image noise
along with the specific factors in radiography and fluoroscopy that affect noise.
EFFECT ON VISIBILITY
Noise reduces the visibility of objects within an image, especially low-contrast
objects. The general effect of noise on object visibility was described in
Chapter 1 and illustrated in Figure 1-7. The visibility threshold, especially for
low-contrast objects, is very noise dependent. In principle, when we reduce image
noise, the "curtain" is raised somewhat, and more of the low-contrast objects
within the body become visible.
If the noise level can be adjusted for a specific imaging procedure, then why not
reduce it to its lowest possible level for maximum visibility? Although it is true
that we can usually change imaging factors to reduce noise, we must always
compromise. In x-ray imaging, the primary compromise is with patient exposure; in
MRI and nuclear imaging, the primary compromise is with imaging time. There
are also compromises between noise and other image characteristics, such as
contrast and blur. In principle, the user of each imaging method must determine the
acceptable level of noise for a specific procedure and then select imaging factors
that will achieve it with minimum exposure, imaging time, or effect on other
image characteristics.
QUANTUM NOISE
In all imaging procedures using x-ray or gamma photons, most of the image
noise is produced by the random manner in which the photons are distributed
within the image. This is generally designated quantum noise. Recall that each
individual photon is a quantum (specific quantity) of energy. It is the quantum
structure of an x-ray beam that creates quantum noise.
Let us use Figure 21-2 to refresh our concept of the quantum nature of radiation
to see how it produces image noise. Here we see the part of an x-ray beam that
forms the exposure to one small area within an image. Remember that an x-ray
beam is a shower of individual photons. Because the photons are independent,
they are randomly distributed within an image area somewhat like the first few
drops of rain falling on the ground. At some points there might be clusters of
several photons (drops) and, also, areas where only a few photons are collected.
This uneven distribution of photons shows up in the image as noise. The amount
of noise is determined by the variation in photon concentration from point to point
within a small image area.
Fortunately we can control, to some extent, the photon fluctuation and the re¬
sulting image noise. Figure 21-2 shows two 1-mm square image areas that are
subdivided into nine smaller square areas. The difference between the two areas is
the concentration of photons (radiation exposure) falling within the area. The first
has an average of 100 photons per small square, and the second a concentration of
1,000 photons per small square. For a typical diagnostic x-ray beam, this is
equivalent to receptor exposures of approximately 3.6 µR and 36 µR, respectively.
Notice that in the first large area none of the smaller areas has exactly 100
photons. In this situation, the number of photons per area ranges from a low of 89
photons to a high of 114 photons. We will not, however, use these two extreme
values as a measure of photon fluctuation. Because most of the small areas have
photon concentrations much closer to the average value, it is more appropriate to
express the photon variation in terms of the standard deviation. The standard
deviation is a quantity often used in statistical analysis (see Chapter 31) to express
the amount of spread, or variation, among quantities. The value of the standard
deviation is somewhat like the "average" amount of deviation, or variation, among
the small areas. One of the characteristics of photon distribution is that the amount
of fluctuation (standard deviation value) is related to the average photon
concentration, or exposure level. The square root of the average number of photons per
area provides a close estimate for the value of the standard deviation. In this
example the standard deviation has a value of ten photons per area. Since this is 10%
of the average value, the quantum noise (photon fluctuation) at this exposure has a
value of 10%.
Let us now consider the image area on the right, which received an average of
1,000 photons per area. In this example, we also find that none of the small areas
received exactly 1,000 photons. In this case, the photon concentrations range from
964 photons to 1,046 photons per area. Taking the square root of the average
photon concentration (1,000) gives a standard deviation value of 31.6 photons. It
appears we have an even higher photon fluctuation, or noise, than in the other area.
However, when we express the standard deviation as a percentage of the average
photon concentration, we find that the noise level has actually dropped to 3.2%.
We have just observed what is perhaps the most important characteristic of
quantum noise; it can be reduced by increasing the concentration of photons (ie,
exposure) used to form an image. More specifically, quantum noise is inversely
proportional to the square root of the exposure to the receptor.
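Both worked examples reduce to a one-line relationship (a sketch):

```python
import math

# Quantum noise as the relative standard deviation of the photon count:
# noise (%) = 100 * sqrt(N) / N = 100 / sqrt(N).

def quantum_noise_percent(photons_per_area):
    return 100 / math.sqrt(photons_per_area)

print(round(quantum_noise_percent(100), 1))     # 10.0 (the first example)
print(round(quantum_noise_percent(1000), 1))    # 3.2 (the second example)
print(round(quantum_noise_percent(10000), 1))   # 1.0: 100x exposure, 1/10 the noise
```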
The relationship between image noise and required exposure is one of the issues
that must be considered by persons setting up specific x-ray procedures. In most
situations, patient exposure can be reduced, but at the expense of increased
quantum noise and, possibly, reduced visibility. It is also possible, in most situations,
to decrease image noise, but a higher exposure would be required. Most x-ray
procedures are conducted at a point of reasonable compromise between these two very
important factors.
RECEPTOR SENSITIVITY
Screen-Film Radiography
The sensitivity of a radiographic receptor (cassette) is determined by
characteristics of the screen and the film and the way they are matched. The factors that
affect receptor sensitivity do not necessarily alter the quantum noise characteris¬
tics of the receptor. The major factors that affect radiographic receptor sensitivity
are film sensitivity, screen conversion efficiency, and screen absorption efficiency.
Figure: Receptor Sensitivity (Speed) Ranges, from Approximately 50 to 1,200, for Radiography, Mammography, Spot Film, and Fluoroscopy
ered to it. Increasing receptor sensitivity by changing any factor that decreases the
number of photons actually absorbed will increase the quantum noise.
The receptor exposure required to form an image (receptor sensitivity) can be
reduced by decreasing the number of photons that must be absorbed in the screen.
The result would be an image with
increased quantum noise. Recall that the effective sensitivity of a particular film
and screen combination depends on the matching of the spectral sensitivity
characteristics of the film to the spectral characteristics of the light produced by the
screen. When the two characteristics are closely matched, maximum sensitivity
and maximum quantum noise are produced. In radiography, changing the film
changing the film
sensitivity (ie, changing type of film) is the most direct way to adjust the quantum
noise level in images. Quantum noise is usually the factor that limits the use of
Figure: Factors That Change Receptor Sensitivity and Quantum Noise (increasing absorption efficiency decreases noise; increasing conversion efficiency or film sensitivity increases noise)
Changing film sensitivity, spectral matching, or the conversion efficiency of the
intensifying screen generally changes quantum noise and receptor sensitivity.
Two screen-film combinations with the same sensitivity are shown in Figure
21-5. One system uses a relatively thick high-speed screen and a film with
conventional sensitivity. The other system uses a thinner detail-speed screen and a more
Figure 21-5 Two Screen-Film Combinations with the Same Sensitivity (same film density; one produces more noise, the other more blur)
sensitive film. The images produced by these two systems differ in two respects.
The system using the thicker screen has more blur but less quantum noise than the
system using the more sensitive film. The reduction in noise comes from the
increases in absorption efficiency and blur.
Intensified Radiography
Quantum noise is sometimes more significant in intensified radiography (cine
and spot films) than in screen-film radiography because of generally higher
receptor sensitivity values (ie, lower receptor exposures). With such systems, the
quantum noise level can be adjusted.
Absorption efficiency of image intensifier tubes has gradually improved over
the years. Like intensifying screens, absorption efficiency depends on the
composition of the input screen, its thickness, and the photon energy spectrum. However,
a manufacturer generally does not offer choices. Most modern intensifier tubes
have input screens designed to provide a reasonable compromise between
absorption efficiency (sensitivity) and image blur.
The variations in intensifier tube-receptor system sensitivity are related to
characteristics of the intensifier tube, the optical system, and the film. The gain
(conversion factor) of an image intensifier tube cannot be adjusted by the user except
by changing the input field of view (mode).
Changing the size of the optical aperture is the most common method of adjust¬
ing the receptor sensitivity and quantum noise level. This adjustment can usually
be made in the clinical facility by a service engineer. The other factors associated
with the optical system that affect overall receptor sensitivity, such as
magnification (film size) and light transmission through the lens and distributor
components, are not generally adjustable by the user and cannot be used to modify the
quantum noise level.
The sensitivity of the film has a direct effect on the overall receptor system
sensitivity. If the film is changed to one with a greater sensitivity, images can be
produced with less x-radiation, but the amount of quantum noise will be increased.
However, the aperture size can be decreased to compensate for the increase in film
sensitivity and prevent an increase in noise.
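This compensation follows the one-stop relationship from Chapter 20: each doubling of film sensitivity calls for halving the aperture area, that is, multiplying the f number by the square root of two (a sketch; the f/4 starting point is an assumed value):

```python
import math

# Offset a film sensitivity change by resizing the optical aperture.
# Aperture area (light throughput) scales as 1/f**2, so keeping the
# product (sensitivity * area) constant requires f ~ sqrt(sensitivity).

def compensated_f_number(current_f, sensitivity_ratio):
    """New f number that keeps film exposure and quantum noise unchanged."""
    return current_f * math.sqrt(sensitivity_ratio)

# Example with assumed values: film speed doubled, aperture was f/4.
print(round(compensated_f_number(4.0, 2.0), 1))   # 5.7, about one stop (f/5.6)
```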
Whenever film sensitivity or the aperture is changed, the AEC system must be readjusted. The input exposure to the image intensifier can be adjusted to obtain a specified quantum noise level. In most systems, the input to the AEC circuit is a light sensor that monitors the output of the image intensifier. The intensifier output luminance, or exposure (luminance multiplied by time), is electronically compared with a pre-established reference level. The output signal from the control circuit adjusts the x-ray machine exposure factors until the desired luminance, or exposure, is obtained. In most spot film and some cine systems, the kVp and mA are preset, and the AEC circuit adjusts the exposure time to achieve proper film exposure. In some cine systems, the AEC circuit adjusts the kVp, mA, or both, and the exposure time is preset. In some systems, the control circuit can adjust all three exposure factors. Generally, the time is changed within certain limits. If the desired exposure is not obtained when time reaches a limit, either kVp or mA will be changed.

Figure 21-6 Factors That Determine the Input Exposure (and Quantum Noise) of an Intensified Radiography System
The reference level for the luminance output of the intensifier is generally adjusted by two controls: the operator's control, which is used to adjust film density, and the engineer's control, which is used to set the approximate intensifier light output. This second control is usually located within the equipment and is not accessible to the equipment operator. The operator's control is essentially a fine adjustment of the engineer's control. For a given setting of the reference level, the AEC circuit produces a fixed value of intensifier input exposure. The readjustment of this reference level by means of the engineer's control is used to alter the aperture within the optical system. The aperture and reference level are usually adjusted together to obtain the desired film exposure and image intensifier input exposure.
Assume that a cine system is producing properly exposed films, but the quantum noise is considered to be too high. A measurement of the input exposure shows that it is 10 μR per frame. It might be desirable to increase this to at least 25 μR to reduce quantum noise. This can be achieved by changing the output light reference level to a higher value by means of the engineer's control. It is then necessary to reduce the size of the aperture to prevent the film from being overexposed.
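Because quantum noise varies with the square root of the number of absorbed photons, the benefit of the exposure increase in this example can be sketched with a few lines of Python (an illustration added here, not part of the text):

```python
import math

def relative_noise(exposure_uR, reference_uR):
    """Quantum noise relative to a reference exposure.

    Noise (as a coefficient of variation) scales as the inverse square
    root of the photon count, which is proportional to input exposure.
    """
    return math.sqrt(reference_uR / exposure_uR)

# Raising the cine input exposure from 10 uR to 25 uR per frame
# lowers quantum noise to about 63% of its original level.
print(relative_noise(25, 10))  # ~0.632
```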
In considering the image intensifier input exposure, the relationship between the size of the image intensifier input and film must be considered. In most systems, the size of the image on the film is less than the size of the image at the image intensifier input. When minification is present, an image can be formed with a given noise level by using less radiation. The significant factor is the concentration of x-ray photons with respect to a given film area.
Many imaging systems use intensifier tubes with selectable input image sizes, or fields of view, as illustrated in Figure 21-7, in which three image intensifier inputs, or modes, are compared with a 100-mm film. Intensifier input exposure values that give approximately the same quantum noise on the film are also indicated. The relationship among the exposure values is determined by the ratios of the respective areas. In all three cases, the number of photons forming the image is the same. This is because the total photon number is related to the product of the exposure and the image area.
When an image intensifier can be operated with different input image sizes, there is a different sensitivity value for each size. Sensitivity changes because the gain of the intensifier tube is proportional to the area of the input.
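The area relationship can be sketched in Python: to keep the total photon number (and noise) constant, the required input exposure scales inversely with the input area, that is, with the square of the input diameter. This is an illustration added here, and the diameters used are hypothetical, not values from the text:

```python
def input_exposure_for_equal_noise(diameter_cm, ref_diameter_cm, ref_exposure_mR):
    """Exposure needed at a given field-of-view diameter to collect the
    same total number of photons as a reference mode.

    Exposure scales inversely with input area (diameter squared).
    """
    return ref_exposure_mR * (ref_diameter_cm / diameter_cm) ** 2

# Hypothetical example: if the largest (25-cm) mode needs 0.1 mR,
# smaller modes need proportionally more exposure per unit area.
for d in (25, 17, 12):
    print(d, input_exposure_for_equal_noise(d, 25, 0.1))
```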
Image Noise 313
[Figure content: three intensifier input sizes compared with a 100-mm film; the indicated input exposures are 0.1 mR, 0.23 mR, and 0.5 mR.]
Figure 21-7 Relationship of Intensifier Tube Input Diameters to Film Size and Exposure
Values Required To Produce the Same Image Quality
Fluoroscopy
The same basic principles apply to a fluoroscopic imaging system except that the sensitivity of the video camera, rather than the film, is a determining factor. The sensitivity of a video camera is generally not fixed but can be varied through adjustments in the internal amplification, or gain. The quantum noise level for a fluoroscope is generally set to an acceptable level by adjusting either the video camera or the aperture, or both.
improve the visibility of low-contrast detail. In the low-noise mode, the receptor
sensitivity is reduced, and more exposure is required to form the image.
It is possible to develop receptor systems that would have greater sensitivity
and would require less exposure than those currently used in x-ray imaging. But,
there is no known way to overcome the fundamental limitation of quantum noise.
The receptor must absorb an adequate concentration of x-ray photons to reduce
noise to an acceptable level.
Although the quantum structure of the x-ray beam is the most significant noise
source in most x-ray imaging applications, the structure of the film, intensifying
screens, or intensifier tube screens can introduce noise into images.
An image recorded on film is composed of many opaque silver halide crystals, or grains. The grains in radiographic film are quite small and are not generally visible when the film is viewed in the conventional manner. The grainy structure sometimes becomes visible when an image recorded on film is optically enlarged, as when projected onto a screen. Whenever it is visible, film grain is a form of image noise.
Film-grain noise is generally a more significant problem in photography than in radiography, especially in enlargements from images recorded on film with a relatively high sensitivity (speed).
Image-intensifying screens and the screens of intensifier tubes are actually layers of small crystals. An image is formed by the production of light (fluorescence) within each crystal. The crystal structure of screens introduces a slight variation in light production from point to point within an image. This structure noise is relatively insignificant in most radiographic applications.
ELECTRONIC NOISE
Video images often contain noise that comes from various electronic sources. Video image noise is often referred to as snow. Some of the electronic components that make up a video system can be sources of electronic noise. The noise is in the form of random electrical currents often produced by thermal activity within the device. Other electrical devices, such as motors and fluorescent lights, and even natural phenomena within the atmosphere, generate electrical noise that can be picked up by a video imaging system.

The noise in an image becomes more visible if the overall contrast transfer of the imaging system is increased. This must be considered when using image displays with adjustable contrast, such as some video monitors used in fluoroscopy, and the viewing window in CT, MRI, and other forms of digital imaging. High-contrast film increases the visibility of noise.
The visibility of image noise can often be reduced by blurring because noise has a rather finely detailed structure. The blurring of an image tends to blend each image point with its surrounding area; the effect is to smooth out the random structure of the noise and make it less visible.
The use of image blurring to reduce the visibility of noise often involves a compromise because the blurring can reduce the visibility of useful image detail.
High-sensitivity (speed) intensifying screens generally produce images showing less quantum noise than detail screens because they produce more image blur. The problem is that no screen gives both maximum noise suppression and visibility of detail.
A blurring process is sometimes used in digital image processing to reduce image noise, as described in Chapter 22.
IMAGE INTEGRATION
Integrating, or averaging, an image over a period of time reduces its noise content. Integration is, in principle, blurring an image with respect to time, rather than with respect to space or area. The basic limitation of using this process is the effect of patient motion during the time interval.
Integration requires the ability to store or remember images, at least for a short
period of time. Several devices are used for image integration in medical imaging.
Human Vision
The human eye (retina) responds to the average light intensity over a period of approximately 0.2 seconds. Because a relatively low receptor exposure (less than 5 μR) is used to form each individual image, the images are relatively noisy. However, since the eye does not "see" each individual image, but an average of several images, the visibility of the noise is reduced. In effect, the eye is integrating, or averaging, approximately six video images at any particular time. The noise actually visible to the human eye is not determined by the receptor exposure for individual fluoroscopic images but by the total exposure for the series of integrated images.
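The effect of integration on noise can be sketched in Python (an illustration added here, not from the text): averaging N statistically independent frames is equivalent to forming one image with N times the exposure, so the noise falls by the square root of N.

```python
import math

def integrated_noise(single_frame_noise, frames_averaged):
    """Noise remaining after averaging several independent frames."""
    return single_frame_noise / math.sqrt(frames_averaged)

# The eye averages roughly six successive fluoroscopic frames, so a
# 10% single-frame noise level appears as about 4% to the viewer.
print(integrated_noise(0.10, 6))
```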
Certain types of video camera tubes have an inherent lag, or slow response, to
changes in an image. This lag is especially significant in vidicon tubes. The effect
of the lag is to average, or integrate, the noise fluctuations and produce a smoother
image. The major disadvantage in using this type of tube for fluoroscopy is that
moving objects tend to leave a temporary trail in the image.
Digital Processing
IMAGE SUBTRACTION
Relatively high exposures are used to create the original images in DSA. This
partially compensates for the increase in noise produced by the subtraction
process.
Chapter 22
Digital Imaging Systems and Image Processing
Digital computers are now an integral part of the medical imaging process. In some applications, such as CT, PET, and MRI, general-purpose digital computers are part of the system. In other applications, such as digital radiography, digital fluoroscopy, and ultrasound imaging, special-purpose digital processors (computers) are built into the equipment. Nuclear imaging systems use both general-purpose and specialized computer systems.
In these applications, the computer, or digital processor, performs a variety of
functions including:
• image acquisition control
• image reconstruction
• image storage and retrieval
• image processing
• image analysis
In this chapter we will consider some of the common characteristics of digital
images that apply to all modalities and then give special emphasis to radionuclide
imaging and several digital x-ray imaging methods that are not considered in other
chapters.
DIGITAL IMAGES
6 6 4 6 6
6 6 2 6 6
4 2 0 2 4
6 6 2 6 6
6 6 4 6 6
Figure 22-1 A 25-Pixel Digital Image
Matrix Size
Image Detail
Matrix size is the principal factor that determines the size of the individual pixels. Pixel size affects image detail. Since each pixel has only one numerical value, or shade of gray (brightness), it is not possible to see any anatomical detail within a pixel. All structures within the area covered by a pixel are blurred together and represented by one value. Digitizing an image adds this additional blurring. If it is large relative to the level of blur from other sources (focal spot, receptors, etc.), it becomes the limiting factor with respect to visibility of detail. Good visibility of detail requires small pixels, which are produced by selecting a large matrix size.
Digital Imaging Systems and Image Processing 319
Matrix
For a large FOV, a large matrix is required to produce the same detail as a small
matrix in a smaller FOV.
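The relationship between matrix, field of view, and pixel size is simple arithmetic, and can be sketched in Python (an illustration added here; the FOV values are examples, not from the text):

```python
def pixel_size_mm(fov_mm, matrix):
    """Pixel dimension for a square field of view divided into a
    square matrix: pixel size = FOV / matrix."""
    return fov_mm / matrix

# The same 512 x 512 matrix gives larger (more blurred) pixels when
# it must cover a larger field of view.
print(pixel_size_mm(250, 512))  # ~0.49 mm
print(pixel_size_mm(400, 512))  # ~0.78 mm
```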
Storage Capacity
The storage capacity required for a digital image is related to the number of pixels. Therefore, a large matrix that contains many pixels requires more storage capacity and processing time than an image in a small matrix format. Figure 22-2 shows the effect of matrix and pixel size on image detail. In the bottom row we see that as the matrix size is decreased, the image becomes more blurred; this is because the pixel size is increased. Another important factor is the number of bits (binary digits) used to represent each pixel. This affects the number of shades of gray, or brightness levels, that can be displayed. In the top row we see that as the number of bits per pixel is decreased, there are fewer shades of gray in the displayed image; this is because the number of bits per pixel determines the range of pixel values and possible shades of gray.

Figure 22-2 The Effect of Matrix and Pixel Size on Image Detail
Pixel Values
Devices that process and store digital images operate with binary numbers rather than decimal numbers. The difference is that digits in a binary number always express multiples of the base number 2, whereas digits in a decimal number express multiples of the base number 10. A basic knowledge of the binary number format is especially helpful in understanding the storage requirements for digital images.
When a computer, or digital processor, writes a number in memory, it does so
by filling in, or marking, specific spaces. This is somewhat analogous to what
humans do when they fill out forms where a space is designated for each digit (or
letter of the alphabet). Let us use Figure 22-3 to develop this analogy. Consider the
decimal number first. Each digit in a decimal number represents a multiple of 10.
[Figure 22-3 compares the two formats: four decimal digits (each 0-9, with position values 1,000, 100, 10, and 1) can form 10,000 combinations, illustrated with the number 1,956; four binary digits (each 0 or 1, with position values 8, 4, 2, and 1) can form 16 combinations, illustrated with 1010 = 10.]
The specific multiple value is determined by the position, or order, of the digit within the total number. When humans fill in a decimal number, they do so by writing 1 of 10 different numbers (0-9). The total value of the number is determined by the digit selected and the position in which it is entered. For example, the digit 3 entered into the space to the extreme right has the value of 3, whereas if it is entered into the third space from the right, it has a value of 300. The value of a number is simply the sum of its individual digit values.
In the binary number format shown here, each digit position is indicated by a circle. Notice that the value of each digit position is a multiple of the number 2. When a computer fills in a binary number, it has a choice of only two values, 0 and 1. A 0 is indicated by leaving the position blank, and a 1 is indicated by placing a mark in the position. Each space represents one binary digit, or bit. A bit can have only two different values, whereas a decimal digit can have 10 different values.
Most digital devices work with groups of bits. A group of 8 bits is often used,
and it is known as a byte. Within a byte, each blank, or unmarked bit, has a value
of 0. Each marked bit has a value determined by its position within the byte. When
the bit on the extreme right is marked, it has a value of 1; the bit on the extreme left
would have a value of 128, etc. The total value of a byte is the sum of the values for
the marked bits.
One byte can represent 256 different values. A byte with all blank bits (all bits
equal to 0) has a value of 0. The other extreme is when all bits are marked, which
gives a total value of 255. There are 254 other combinations that can be formed
with marked and unmarked bits.
The number of different values a binary number can represent is much less than with a decimal number of the same number of digits. This is because a binary digit can have only one of two possible values, whereas a decimal digit can have one of ten different values. We have just seen that an 8-bit binary number, or 1 byte, can have only 256 different values. By comparison, a three-digit decimal number can have 1,000 different values, 0 to 999.
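The positional values described above can be demonstrated in Python (an illustration added here, not from the text): each bit position doubles in weight, so a marked bit on the extreme right is worth 1 and one on the extreme left of a byte is worth 128.

```python
def byte_value(bits):
    """Value of a byte given as eight 0/1 digits, most significant first.

    Each step to the left doubles the positional weight, exactly as
    each step doubles from 1 to 2 to 4 ... to 128 within a byte.
    """
    assert len(bits) == 8
    value = 0
    for bit in bits:
        value = value * 2 + bit
    return value

print(byte_value([0, 0, 0, 0, 0, 0, 0, 1]))  # rightmost bit marked -> 1
print(byte_value([1, 0, 0, 0, 0, 0, 0, 0]))  # leftmost bit marked -> 128
print(byte_value([1, 1, 1, 1, 1, 1, 1, 1]))  # all bits marked -> 255
```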
Our primary interest in binary numbers, bits, and bytes is that they are used to represent the pixels in digital images. Different byte configurations represent different shades of gray in the pixels. Figure 22-4 shows the general relationship of byte configurations, pixel values, and shades of gray. Here we see how four bits per pixel can have 16 different combinations and represent 16 shades of gray. The relationship between the number of bits per pixel and the number of shades of gray is shown at the top.
The required number of bits per pixel for medical images is generally in the range of 8-16 bits (1-2 bytes). Digital systems typically process and store information in byte increments. Whereas an 8-bit pixel can be stored in one byte, a 12-bit pixel would require 2 bytes, the same as a 16-bit pixel.
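These two relationships — shades of gray from bits, and storage rounded up to whole bytes — can be sketched in Python (an illustration added here, not from the text):

```python
def gray_levels(bits_per_pixel):
    """Number of distinct shades of gray a pixel can represent."""
    return 2 ** bits_per_pixel

def bytes_per_pixel(bits_per_pixel):
    """Storage actually used, since systems store whole bytes."""
    return (bits_per_pixel + 7) // 8  # round up to a full byte

print(gray_levels(4))       # 16
print(gray_levels(8))       # 256
print(bytes_per_pixel(12))  # 2 (the same as a 16-bit pixel)
```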
Figure 22-4 Relationship between Pixel Values and Image Gray Scale
The acquisition of a digital image depends on the source and form of the acquired image or data, which varies from one imaging modality to another.
Image Reconstruction
Most of the imaging modalities that produce tomographic images (CT, MRI, SPECT, and PET) do so by an image reconstruction process. This is a mathematical process that converts acquired data into a digital image. The details of the image reconstruction process for each modality are described in the respective chapters. Although the methods are somewhat different, the results are essentially the same: a digital image.
In nuclear imaging, the computer participates in the acquisition of the image. The computer is inserted between the gamma camera and the display device and, in most applications, controls the acquisition, or flow, of data from the gamma camera. The acquired data are stored in the computer memory for later processing and display. The processing is usually of two types: the processing of an image to improve its quality, and the processing, or abstraction, of quantitative information from the stored data. The computer also controls the manner in which images and data are displayed.
The two major functions of the computer during the acquisition phase are to
collect data only during specific time intervals and to arrange the data into specific
formats for storage.
The formatting of the data during the acquisition phase is often referred to as the
mode of acquisition. The two most common formats (modes) are the frame (or
matrix) and the list. The selection between these two modes depends on the type of
processing to be performed.
Table 22-1 Typical Matrix Size Used with Specific Imaging Modalities
Frame Mode
In the frame, or matrix, mode the image area is divided into an array of pixels, as
shown in Figure 22-5. Typically the computer can be instructed to divide an image
into pixels of different sizes. The selection of a specific matrix format depends, to
some extent, on the type of study being conducted.
The formatting of an image into discrete pixels is, in effect, a blurring process.
The image of a small object point can be no smaller than one pixel. Therefore,
when it is necessary to visualize small objects, or to determine the size and shape
of structures with a reasonable degree of precision, small pixel sizes must be used.
If, on the other hand, the study is concerned with the build-up and elimination of activity in relatively large areas, large pixels can be used to advantage.
In the matrix mode, each pixel is represented by a specific location (address) in
the computer memory. Recall that when each photon is detected by the gamma
camera, an electrical signal (set of pulses) is created that represents its location
within the image area. These signals are processed by the computer to determine
the pixel in which the photon is located. It then goes to the corresponding memory
location for that pixel and adds one count to the number stored there. In effect,
each memory location is like a scoreboard that is continuously updated during the
acquisition of data. The final number in the location represents the number of
photons that originated within the corresponding pixel.
Let us consider the specific example illustrated in Figure 22-5. If a photon originates from a specific point in an organ, its vertical and horizontal coordinates will be sent to the computer. The computer then uses this information to identify the specific pixel that corresponds to the photon location. The computer goes to the memory location (address) that corresponds to the specific pixel and increases the stored count value by 1 unit. The image stored in the computer memory is in a numerical form. This is desirable because it can then be readily processed and analyzed.
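The frame-mode "scoreboard" described above can be sketched in Python (an illustration added here; the function name, matrix size, and event coordinates are invented for the example, not from the text):

```python
def acquire_frame(events, matrix=8, fov=100.0):
    """Accumulate detected photons into a count matrix (frame mode).

    Each event is an (x, y) position within the image area (0..fov).
    The pixel covering that position has its count increased by one,
    like a scoreboard updated during acquisition.
    """
    frame = [[0] * matrix for _ in range(matrix)]
    for x, y in events:
        col = min(int(x / fov * matrix), matrix - 1)
        row = min(int(y / fov * matrix), matrix - 1)
        frame[row][col] += 1
    return frame

# Two photons land in the first pixel, one in the last.
frame = acquire_frame([(10.0, 10.0), (12.0, 11.0), (95.0, 95.0)])
print(frame[0][0])  # 2
print(frame[7][7])  # 1
```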
Many studies require a series of images, as illustrated in Figure 22-6. To
achieve this, the computer collects and stores counts for a series of specified time
intervals. These data are stored as separate images, or frames.
In some studies, it is desirable to have data collected only during specific phases of a physiological function, such as the cardiac cycle. This is achieved by obtaining a signal from the patient's body, in this case an electrocardiogram (EKG), and using it to gate the acquisition process. For example, frames can be created to correspond to the different segments of the cardiac cycle, as illustrated in Figure 22-7. In this type of acquisition, the count data can be collected over several cardiac cycles. The counts in each interval are added together to form a series of frames.
Figure 22-5 The Creation of a Numerical Image. A Photon Is Recorded by Increasing the Count Value in the Corresponding Pixel
Figure 22-7 Relationship between a Series of Images and the EKG Signal As Created in a Gated Study

List Mode
The third step is to convert the brightness or film density of each pixel into a
digitized numerical value. This general process is illustrated in Figure 22-9.
Scanning
The first step is to scan the image line by line, as shown in Figure 22-9. A laser beam is typically used to scan images that have been recorded on film or are on stimulable-phosphor screens, as described later. Fluoroscopic images are scanned continuously as part of the video process by an electron beam within the video camera and monitor display tubes. For digitizing purposes the scan typically starts at one corner of the image and then proceeds line by line. This scanning process divides the image into discrete lines. The number of scan lines will generally determine the number of pixels across one dimension of the image.
Sampling
As each line is scanned, a continuous analog signal is created, which shows the variation in image brightness or film density along the line. This signal is then sampled and measured at discrete intervals along the line. Each sample interval corresponds to one pixel. The number of sample intervals along the line determines the number of pixels across the image.
Conversion
The value of the analog signal during a specific sample interval is measured and converted into a binary number by an analog-to-digital converter. The resulting binary digital number becomes the value for the corresponding pixel.

Figure 22-9 The Process of Converting a Continuous Analog Image into Digital Values
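The scan-sample-convert sequence can be sketched in Python (an illustration added here; the function and the brightness ramp are invented for the example, not from the text):

```python
def digitize_line(analog_signal, samples, bits=8):
    """Sample a continuous brightness signal along one scan line and
    convert each sample to a binary pixel value (0 .. 2**bits - 1).

    analog_signal: function of position in [0, 1) returning a
    brightness in [0, 1].
    """
    levels = 2 ** bits
    pixels = []
    for i in range(samples):
        position = (i + 0.5) / samples  # center of the sample interval
        value = analog_signal(position)
        pixels.append(min(int(value * levels), levels - 1))
    return pixels

# A simple brightness ramp digitized into 4 pixels at 8 bits.
print(digitize_line(lambda x: x, 4))  # [32, 96, 160, 224]
```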
IMAGE PROCESSING
When image data are stored in computer memory, they are available for various
kinds of processing. The processing is usually for the purpose of either altering an image or extracting information from it.

Contrast Modification
Lookup Tables
Many image processors use lookup tables to change the contrast. A lookup table is programmed to provide a new pixel value for each pixel in the original image. The values in the table can be selected to give a choice of image contrast characteristics, as shown in Figure 22-11.
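The lookup-table operation can be sketched in Python (an illustration added here; the table values are a hypothetical high-contrast example, not taken from the figure):

```python
def apply_lookup(image, table):
    """Replace every pixel value with its entry in a lookup table."""
    return [[table[value] for value in row] for row in image]

# A hypothetical high-contrast table for 4-bit pixels: low values are
# pushed to 0, high values to the maximum, and mid values spread out.
table = [0, 0, 0, 0, 0, 2, 4, 6, 8, 10, 12, 15, 15, 15, 15, 15]
image = [[3, 7, 11],
         [5, 9, 14]]
print(apply_lookup(image, table))  # [[0, 6, 15], [2, 10, 15]]
```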
Windowing
Most systems for viewing digitized images allow the observer to select the range of pixel values that will be converted into the full gray scale, or brightness range. This function is known as windowing. The two selectable variables associated with the window are its width, or range of pixel values, and its position along the pixel value scale. The windowing concept is illustrated in Figure 22-12.

When the window is set at a specific location on the pixel value scale, all pixels with values greater than the upper window limit are displayed as white, all pixels below the lower window limit are displayed as black, and pixel values between the two limits are spread over the full scale of gray. In principle, the window setting functions as a contrast control. Decreasing window width increases image contrast.
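The windowing rule just described can be sketched in Python (an illustration added here; the function name and the level/width parameterization are one common convention, not taken from the text):

```python
def window(pixel, level, width, gray_max=255):
    """Map a pixel value to display brightness through a window.

    Values above the upper window limit display as white (gray_max),
    values below the lower limit as black (0), and values inside the
    window are spread linearly over the full gray scale.
    """
    lower = level - width / 2
    upper = level + width / 2
    if pixel >= upper:
        return gray_max
    if pixel <= lower:
        return 0
    return round((pixel - lower) / width * gray_max)

# A narrow window (width 100) centered on pixel value 1000:
print(window(2000, level=1000, width=100))  # 255 (white)
print(window(0, level=1000, width=100))     # 0 (black)
print(window(990, level=1000, width=100))   # 102 (mid gray)
```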
Detail Enhancement
An image can be processed so that the contrast of small objects becomes more predominant. One of these methods is the blurred-mask subtraction technique, illustrated in Figure 22-13. The original image is first blurred to create a so-called mask image. Digital image blurring is generally done by replacing each pixel value with an average of the pixel values in its immediate vicinity. This is, in effect, a smearing or blurring process. The blurring of the image reduces the contrast and visibility of small objects and detail, leaving an image in which only large areas of contrast are visible. The next step is to subtract the blurred image from the original image. This creates an image in which the large-area contrast is reduced by the subtraction process. However, the visibility of small objects and details can be enhanced because they are now displayed on a relatively uniform background, which permits the total image contrast to be increased as described above.

Figure 22-12 The Window Establishes the Relationship between Pixel Values and Brightness in the Display Image
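The blurred-mask subtraction steps can be sketched in Python (an illustration added here; the 3 x 3 averaging mask and the test image are assumptions for the example, not values from the text):

```python
def blur(image):
    """Mask image: each pixel becomes the average of its 3x3
    neighborhood (edges handled by clamping to the image border)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            total = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr = min(max(r + dr, 0), h - 1)
                    cc = min(max(c + dc, 0), w - 1)
                    total += image[rr][cc]
            out[r][c] = total / 9.0
    return out

def blurred_mask_subtraction(image):
    """Subtract the blurred mask from the original: large-area
    contrast is suppressed while small details are preserved."""
    mask = blur(image)
    return [[image[r][c] - mask[r][c] for c in range(len(image[0]))]
            for r in range(len(image))]

flat = [[100] * 5 for _ in range(5)]
flat[2][2] = 110  # one small high-contrast detail
detail = blurred_mask_subtraction(flat)
print(round(detail[2][2], 1))  # the detail stands out (~8.9) on a ~0 background
```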
Noise Reduction

Digitized images can be processed to reduce the noise produced by the statistical fluctuation in photon concentration (quantum noise), as described in Chapter 21, and by low signal-to-noise conditions in MRI. The apparent noise can be reduced by
blending or blurring the value for an individual pixel with the values for adjacent
pixels. Several mathematical approaches can be used for this, but a specific one,
the nine-point smoothing process, is illustrated in Figure 22-14. In this procedure,
the computer calculates a new image from the old. The value for each new pixel is
a weighted average value of the old pixel and the eight pixels surrounding it.
We can calculate a new value for each pixel shown in Figure 22-14. First, the original value is multiplied by 4. The values for the four pixels located on the four sides of the pixel being processed are multiplied by 2, and the values in the corner pixels are multiplied by 1. The results of these multiplications are then added, and the total is divided by 16 to give a weighted average. This process is repeated for each pixel within the image area. When this process is applied to the image section shown in Figure 22-14, the noise (1 standard deviation) is decreased from 10% to 1.8%. Image smoothing to reduce noise is generally a blurring process that reduces the sharpness and visibility of small structures and detail.
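The nine-point smoothing weights (4 for the center, 2 for the side neighbors, 1 for the corners, divided by 16) can be sketched in Python (an illustration added here; the small test image is an assumption, and edge pixels are simply left unchanged for brevity):

```python
def smooth9(image):
    """Nine-point weighted smoothing: center weight 4, side neighbors
    weight 2, corner neighbors weight 1, total divided by 16.
    Only interior pixels are smoothed in this sketch."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            total = (4 * image[r][c]
                     + 2 * (image[r - 1][c] + image[r + 1][c]
                            + image[r][c - 1] + image[r][c + 1])
                     + (image[r - 1][c - 1] + image[r - 1][c + 1]
                        + image[r + 1][c - 1] + image[r + 1][c + 1]))
            out[r][c] = total / 16.0
    return out

noisy = [[100, 120, 100],
         [ 80, 100, 120],
         [100,  80, 100]]
print(smooth9(noisy)[1][1])  # 100.0
```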
Figure 22-14 The Reduction of Image Noise by the Use of the Nine-Point Smoothing Process

Provisions must be made for storing and then retrieving digital images for later use. There are several different technologies and storage media that can be used for this purpose. This storage capability is also referred to as memory. Figure 22-15 gives an overview of the storage and retrieval process. At this point we will consider the basic characteristics that should be considered when selecting a storage medium for a specific application. As we will see later, it is the differences in these specific characteristics that distinguish one storage medium or method from another.
To store an image, the data must be transferred from the computer memory to the storage medium. This transfer process is known as writing to the medium. The writing process records the image by assigning binary values (0 or 1) to a long series of digital bits. How this is done depends on the specific medium. With a magnetic medium (disk or tape) the individual bits are magnetized to represent a value of one and demagnetized to represent a value of zero. With optical disk technology a small laser beam is used to "punch" or mark the individual bit locations.
The storage or memory area is organized so that each byte has a specific address. When an image is stored, a directory is also created in which the addresses for each image are stored. The directory associates the specific image storage location, or address, with the patient name or ID number.
At a later time the image can be retrieved, or read, from the storage medium. The first step is to consult the appropriate directory and find the patient's name or number. The computer then looks at the assigned address and reads the digital data representing the image. The data are then transferred to the appropriate processor or display device.
Storage or memory devices are characterized by their ability to write and read
data.
Figure 22-15 The General Process for Storing and Retrieving Digital Data
RAM

Random access memory (RAM) allows data to be written to or read from any address. This is achieved by electronic switching from one address to another, as opposed to mechanically moving from one area to another on disk or tape. All computers or digital processors contain RAM. This is the active memory in which images and data are stored while they are actually being processed or displayed. The size of a computer's RAM determines the size and complexity of programs that it can run efficiently.
Capacity
The capacity of a specific memory device determines the number of images that can be stored. As we recall, there is a considerable variation in the number of bytes required to store an individual image. It depends on the matrix size and the number of bytes per pixel. Memory size or storage capacity is generally expressed in megabytes or gigabytes.
Images per megabyte of storage:

Matrix Size      1 Byte/Pixel     2 Bytes/Pixel
256 x 256        16               8
512 x 512        4                2
1024 x 1024      1                0.5
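These capacity figures follow directly from the matrix size and bytes per pixel, and can be checked with a short Python sketch (an illustration added here, using 1 megabyte = 2^20 bytes):

```python
def image_bytes(matrix, bytes_per_pixel):
    """Storage required for one square digital image."""
    return matrix * matrix * bytes_per_pixel

def images_per_megabyte(matrix, bytes_per_pixel):
    """How many such images fit in one megabyte (2**20 bytes)."""
    return (2 ** 20) / image_bytes(matrix, bytes_per_pixel)

print(images_per_megabyte(256, 1))   # 16.0
print(images_per_megabyte(512, 2))   # 2.0
print(images_per_megabyte(1024, 2))  # 0.5
```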
Speed
An important characteristic of digital storage media is the speed with which data can be written and read. This is especially important because it determines the time required to retrieve and display stored images. There is generally an inverse relationship between speed and capacity. This is illustrated by the following three general types of storage media.
Disk
Retrieval of images from disk is somewhat slower than from RAM. One reason is that the read head (sensor) must move to the location on the disk where the image data are stored. It then reads the individual bits from the spinning disk.
Some disk systems for long-term storage (archiving) use a so-called "juke-box" design. It contains many disks located in a storage rack. When a specific image is requested, the computer locates the disk on which it is stored, mounts it on the spinning disk drive, and then reads the image data. This design increases total storage capacity, but at the cost of reduced retrieval speed.
Tape
The retrieval of images from magnetic tape is relatively slow because the tape must be run to the location containing the desired image. Tape can be used for image archiving because tapes can be manually stored and retrieved when necessary. In principle, the total storage capacity is limited only by a facility's space for storing tapes.
Digital images are not suitable for direct viewing. In most applications, the digital image is converted into a video image, which can then be observed or recorded on film. This conversion is performed by an electronic device: a digital-to-analog (video) converter.
338 Physical Principles of Medical Imaging
The transfer of a digital image to film is usually made by a laser camera. In this device a laser beam scans over the film. The brightness of the beam is controlled by the pixel value, which determines the exposure of the film at each pixel location.
Color
Some systems use a color display. In such a system, the computer translates
pixel values into specific colors. Color spectra are generally used to represent
characteristics such as levels of radioactivity in SPECT and PET and flow velocity
in Doppler ultrasound imaging.
Analysis
Profiles
The ability to compare a characteristic, such as density, activity, or signal intensity, at points along a line through the image is provided by the profile function.
Regions of Interest
Computers can be instructed to outline a region of interest (ROI) in an image.
Then data such as the average and standard deviation of values within the region
can be obtained. A useful function for many dynamic studies is to produce a
graphic display of the change within the ROI as a function of elapsed time. Time-
activity curves are useful for observing the build-up and elimination of contrast
media or radioactive materials within a specific body region or organ.
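The ROI statistics and time-activity curve described above can be sketched as follows; all pixel values and frame data are hypothetical.

```python
# Hypothetical 4 x 4 pixel values inside a region of interest (ROI).
roi = [
    [10, 12, 11, 13],
    [11, 14, 12, 10],
    [13, 11, 12, 12],
    [12, 10, 13, 11],
]

values = [v for row in roi for v in row]
mean = sum(values) / len(values)
variance = sum((v - mean) ** 2 for v in values) / len(values)
std_dev = variance ** 0.5

# A time-activity curve is just the ROI mean computed for each image frame.
frames = [roi, [[v + 2 for v in row] for row in roi]]  # activity build-up between frames
curve = [sum(v for row in f for v in row) / 16 for f in frames]
```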
Computed Radiography
Although there are several different ways to produce computed radiographs, the
most prevalent method is illustrated in Figure 22-16. Here the general concept of
computed radiography is compared to conventional radiography that uses a film in
direct contact with intensifying screens. In principle, computed radiography inserts an image processing computer between the receptor screen and the film.
Figure 22-16 The Concept of Computed Radiography Compared to a Conventional Cassette
After the x-ray exposure, the stimulable-phosphor screen is taken into the processing equipment. In the processor the exposed screen is scanned with
a laser beam. The laser beam stimulates the phosphor material
causing each point
on the surface to emit light with a brightness proportional to the x-ray
exposure.
This general process is illustrated in Figure 22-16. The light is measured, con¬
verted into a digital value, and stored in the computer's memory.
After reading an image from the screen as described above, the processor erases
and restores the screen and then reloads it into a cassette ready for the next patient.
IMAGE PROCESSING
The computer creates the effective characteristic curve for the radiograph. This is illustrated in Figure 22-17. The stimulable-phosphor screen has a linear response over
a wide range of exposures, represented by the straight line in Figure 22-17. When the receptor is exposed, a latent image is formed over a relatively small range of exposures somewhere within the overall range. Images created with three different levels of x-ray exposure are indicated. During the processing the equipment determines where the exposure is located within the range. It then adjusts the effective
receptor sensitivity (speed) to match the actual exposure condition. This proce¬
dure is a compensation for errors in the initial x-ray exposure. It will
produce a
radiograph with appropriate density values regardless of the receptor exposure.
However, low receptor exposures will increase the quantum noise in the image
and high receptor exposure will produce unnecessary
exposure to the patient.
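This compensation can be sketched as a simple rescaling of the linear receptor response; the latent-image values are hypothetical and the normalization is a simplified stand-in for the actual processing.

```python
# Hypothetical latent-image values read from the stimulable-phosphor screen.
# The receptor responds linearly, so under- or over-exposure simply scales values.
latent = [40, 55, 70, 90]                 # an under-exposed acquisition
properly_exposed = [400, 550, 700, 900]   # the same anatomy, correctly exposed

def normalize(values, out_min=0.0, out_max=1.0):
    """Rescale the measured exposure range to a standard display range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (out_max - out_min) + out_min for v in values]

# Both acquisitions yield the same normalized image, which is why the
# processor can compensate for errors in the initial x-ray exposure.
assert normalize(latent) == normalize(properly_exposed)
```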
The computer can be programmed to produce contrast characteristics appropriate to the specific clinical examination. This is represented by characteristic curves with different shapes.
[Figure 22-17 plots film density against relative receptor exposure for several characteristic curves.]
Figure 22-17 Radiographic Exposure and Contrast Characteristics That Can Be Changed by the Computer
Image Display
Digital Fluoroscopy
[Figure labels: Bones and Vessels (mask), Bones, Vessels.]
The first (mask) image is acquired before the contrast medium is injected and shows all anatomical structures normally revealed in an x-ray image. The second image is acquired after the contrast medium is injected into the vessel being imaged. If a
dilute concentration of contrast medium is used, the vessels will have
very little
contrast in comparison to many other structures,
especially bone. If the second
image is subtracted from the first under ideal conditions, an image showing only
the vessel containing the contrast medium will be obtained.
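The subtraction described above can be illustrated with a toy one-dimensional example; all pixel values are hypothetical.

```python
# Toy 1-D "images": pixel brightness values along one row.
# Anatomy (bone etc.) is identical in both; only the vessel changes.
mask_image     = [100, 180, 100, 100, 100]   # before contrast injection
contrast_image = [100, 180, 100, 130, 100]   # after injection: vessel at index 3

# Subtracting the mask removes everything that did not change,
# leaving only the opacified vessel.
subtracted = [c - m for c, m in zip(contrast_image, mask_image)]
```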
Image production begins with the scanning phase, as shown in Figure 23-1.
During this phase, a thin fan-shaped x-ray beam is projected through the edges of
the body section (slice) being imaged. The radiation that penetrates the section is
measured by an array of detectors. The detectors do not "see" a complete image of
the body section, only a profile from one direction. The profile data are measure¬
ments of the x-ray penetration along each ray extending from the x-ray tube to the individual detectors. In general, the quality of the image can be improved by using longer scanning times.
The second phase of image production is known as image reconstruction, as
illustrated in Figure 23-2. This is performed by the digital computer, which is part
of the CT system. Image reconstruction is a mathematical procedure that converts
the scan data for the individual views into a numerical, or digital, image. The
[Figure 23-2: penetration measurements (rays) are reconstructed by the computer into CT numbers stored in computer memory.]
The time required for reconstruction depends on the size of the image and the capabilities of the computer. The digital image is then stored in the computer memory.
The final phase is the conversion of the digital image into a video display so that it can be viewed directly or recorded on film. This phase is performed by electronic components that function as a digital-to-analog (video) converter. The relationship between the pixel CT number values and the shades of gray, or brightness, is illustrated in Figure 23-3.
[Figure 23-3: the CT number scale (+1000 to −1000) is mapped onto a gray scale.]
Figure 23-3 The Conversion of a Digital Image into a Gray Scale Image
ing is awkward. Most scanners use cables that wrap around the gantry while it is
rotating. This design allows only a few rotations; the gantry must then be stopped
and rotated in the other direction to uncoil the cables. Another design uses sliding
electrical contacts, or slip rings, that permit continuous high-speed rotation.
Collimation
The x-ray tube assembly contains collimating devices that determine the physi¬
cal size and shape of the x-ray beam. One set of collimators determines the angular
span of the beam, and another set determines its thickness. This latter set can usu¬
ally be adjusted to vary slice thickness.
With current technology it is not possible to create an x-ray beam with
sharply
defined edges. This is because of the finite size of the x-ray tube focal spot, which
results in a penumbra, or "partial shadow," along the beam
edges, as shown in
Figure 23-4. The radiation has the greatest intensity at the center of the slice and
reduced intensity near the edges. Some radiation exposes the tissue
adjacent to the
slice being imaged.
Filtration
The x-ray tube assembly also contains metal filters through which the x-ray
beam passes. CT x-ray beams are filtered for two purposes.
Beam Hardening
Beam hardening refers to the process of
increasing the average photon energy,
ie, hardening, that occurs when the lower-energy photons are absorbed as the
beam passes through any material. This will normally occur when an x-ray beam
passes through the human body if the beam contains a wide range of photon ener¬
gies. In CT imaging, this hardening of the beam creates an image artifact because
the peripheral tissue is exposed to a lower
average photon energy than the inner
portion of the slice. This can be minimized by hardening the beam with the filter
material before it enters the body. This filtration reduces
patient exposure by se¬
lectively removing the low-energy, low-penetrating part of the x-ray beam spec¬
trum.
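The hardening effect can be illustrated with a toy two-bin spectrum; the photon energies and attenuation coefficients below are illustrative assumptions, not measured values.

```python
import math

# Two-bin toy spectrum: (energy in keV, relative photon count, mu in 1/cm).
# Illustrative numbers: the low-energy bin attenuates more strongly.
spectrum = [(40.0, 1000.0, 0.35), (80.0, 1000.0, 0.18)]

def mean_energy(thickness_cm: float) -> float:
    """Fluence-weighted mean photon energy after passing through the material."""
    weights = [(e, n * math.exp(-mu * thickness_cm)) for e, n, mu in spectrum]
    total = sum(w for _, w in weights)
    return sum(e * w for e, w in weights) / total

before = mean_energy(0.0)    # 60 keV for this equal-weight spectrum
after = mean_energy(10.0)    # hardened: the mean shifts toward the 80-keV bin
```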
Compensation
A filter with a non-uniform thickness is often placed in the x-ray beam to com¬
pensate for the non-uniform thickness of the human body. This type of filter is
thicker near the edges and is sometimes referred to as a bow-tie filter. When it is
used, the thick center section of the body is exposed to a higher radiation
intensity
than the thinner sections near the edges. The use of this type of compensation filter
Computed Tomography Image Formation 347
[Figure 23-4: the focal spot and collimator define the slice thickness; the dose falls off toward the slice edges.]
Figure 23-4 A Profile Showing the Distribution of Radiation Dose through a Slice
Power Supply
The generator, or power supply, for a CT system is typically a constant potential
type that can produce relatively high KV and MA values for a sustained period of
time.
DETECTORS
Function
In principle, each detector measures the radiation that penetrates the body section in the
direction of the detector.
Construction
Several materials are used for CT detectors. Solid-state detectors are made of
solid scintillation crystals that convert the x-ray energy into light. The light is then
converted into an electrical signal by either a photodiode or photomultiplier (PM)
tube. In another design, each detector is a small chamber filled with a high pres¬
sure gas, typically xenon. The radiation absorbed within the chamber ionizes the gas and produces an electrical signal. A detector's efficiency is the fraction of the incident radiation that it actually absorbs. Two factors affect detector efficiency, as shown in Figure 23-5.
The geometric efficiency is determined by the ratio of the individual detector aper¬
ture to the total space associated with each detector. This space includes the detec¬
tor itself and the inactive collimator or the interspace between it and the next de¬
tector. Radiation that enters the interspace is not absorbed by the detector and does
not contribute to image formation. The ideal situation would be a large detector aperture and a small interspace.
Sensitivity Profile
In the ideal situation, each detector would be uniformly sensitive to all radiation
passing through the body section being imaged and would be insensitive to radia¬
tion coming from outside the slice. This would permit the imaging of well-defined
slices with good detail. The slice thickness that is within the view of each detector
is determined by the position of the collimating elements. The typical collimator
[Figure 23-5: incident radiation is absorbed either in the detector or in the interspace between detectors.]
Figure 23-5 Factors That Determine Detector Efficiency
Detector Configurations
The way in which the detectors are arranged and moved during the scanning
process has changed during the evolution of the CT scanner and is different among
scanners used today. It is common practice to designate various detector
configu¬
rations as either first, second, third, or fourth generation. The generation designa¬
tions correspond to the order in which the various configurations were developed.
Performance was improved in going from the first to the second and then on to the
third and fourth generations. The concept of generation with respect to the third
and fourth types, however, must be used with caution. They represent two differ¬
ent approaches to detector design. Each has its own operating characteristics, but
Typical scanning time was approximately 4 minutes. The scanning time was re¬
duced with the development of the second type of detector configuration, which
Figure 23-6 A Profile Showing the Variation in Detector Sensitivity within a Slice
used multiple detectors and reduced the number of rotations required to achieve a full scan. The second type also used a combined translate-rotate motion.
Third Type
The third type of detector configuration is shown in Figure 23-7. An array of
individual detector elements that is just large enough to form one view is mounted
on the gantry so that it rotates along with the x-ray tube. This is often referred to as
Fourth Type
The fourth type of detector configuration is a ring of detector elements that
completely encircles the patient's body, as shown in Figure 23-8. The detectors
are stationary and do not rotate. This arrangement has many more detector ele¬
ments than the third type, but they are not all in use at the same time. Different
segments of the detector array are exposed as the x-ray tube rotates. The functional
difference between the third and fourth types is the way in which the individual
views are created.
COMPUTER
Control
After the operator selects the appropriate scanning factors and initiates the scan,
the procedure progresses under the control of the computer. The computer coordi¬
nates and times the sequence of events that occur during the scan, which includes
turning the x-ray beam and detectors on and off at the appropriate time, transfer¬
ring data, and monitoring the system operation.
Processing
quickly. The disk also has a limited capacity, although it is much larger than the
electronic memory. The long-term, or archival, storage of images requires the
transfer to a storage medium that can be removed from the computer and stored
independently. Magnetic tape and floppy disks are used for this purpose.
The two other units that make up a CT system are the display unit (viewing
console) and the camera, which records the images onto film. Most CT systems
use a multi-format camera.
The viewing unit is the interface between the CT system and the physician or
operator. The image display is a CRT or video monitor. Before the digital images
are transferred to the
viewing unit by the computer, they are converted from a
digital to a video form.
The viewer can communicate with the computer through a keyboard, joystick,
or tracker ball. This allows the viewer to select
specific images for display, control
brightness and contrast, implement display functions such as zoom and rotation,
and analyze regions of interest (ROIs).
SCANNING
Rays
A ray is the portion of an x-ray beam that is projected onto an individual detec¬
tor, as shown in Figure 23-9. Typically, a ray passes through the body slice as
shown. The radiation within the ray is absorbed by the tissue in the pathway. The
rate of absorption at each point along the way is determined by the value of the
linear attenuation coefficient. For the purpose of this discussion, let us divide the
tissue into a line of individual blocks. Each block of tissue has an attenuation
coefficient value that depends on the type of tissue and the energy of the photons
within the x-ray beam. In principle, each block of tissue attenuates the x-ray beam
by an amount equal to the value of the attenuation coefficient. The total attenua¬
tion (or penetration) along a ray is related to the sum of the individual attenuation
coefficients of points along the ray.
The projection of one ray through a body section produces a measurement of
the total attenuation, or penetration, along its path; the measurement represents the
sum of the individual attenuation coefficient values for each voxel of tissue within
the ray. With a single measurement, there is no way to determine the individual
voxel attenuation coefficient values. However, by projecting many rays through a
body section, making measurements for each, and then reconstructing the image,
the attenuation coefficient value for each voxel within the slice can be calculated.
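The summation along one ray can be sketched as follows, using hypothetical attenuation coefficients for a line of 1-cm tissue blocks.

```python
import math

# Attenuation coefficients (1/cm) for a line of 1-cm tissue blocks along one ray.
# Hypothetical values for illustration only.
mu_values = [0.20, 0.19, 0.25, 0.19, 0.21]

# The measurement for one ray reflects the SUM of the coefficients along it:
total_attenuation = sum(mu * 1.0 for mu in mu_values)   # block length = 1 cm

# Equivalently, the fraction of the beam that penetrates falls off
# exponentially with that sum:
penetration = math.exp(-total_attenuation)
```

A single ray gives only this one sum; recovering the individual `mu_values` requires many rays from many directions, which is what reconstruction does.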
Views
A view consists of a collection of rays that share a common point. The common
point can be either a focal spot location or an individual detector, depending on the
specific detector configuration.
Third Type
In systems with the third type of detector configuration (Figure 23-7), a view is
created by exposing all of the detectors from one focal spot location. All of the
rays within the view are projected simultaneously. The time required to create one
view is relatively short and is controlled by turning either the x-ray tube or the
detectors on and off. Additional views are created as the x-ray tube (focal spot)
and detector array rotate around the body.
Fourth Type
In views created by CT scanners with the fourth type of detector configuration
(Figure 23-8), one detector is common to all rays within one view. Individual rays
are created as the focal spot moves along its circular path. The rays are not pro¬
jected simultaneously, as in the third type, but are produced sequentially as the
x-ray tube moves along.
During each scan, many views are being developed at the same time. This is
possible because the x-ray beam exposes many detectors simultaneously. For example, when the focal spot is in a specific position, it can project the first ray of one
view, the second ray of another view, and the third ray of yet another view, etc.
The total number of measurements per scan is typically within the range of 500,000 to 1.5 million.
A typical scan is created by rotating the x-ray beam through an angle of 360°.
Some systems can be set to scan through a smaller angle, to reduce time, or to scan
through angles larger than 360°, to increase the number of measurements and im¬
age quality. The number of measurements per scan is generally not set directly by
the operator but is affected by the examination type (or mode) and the scanning
time.
IMAGE RECONSTRUCTION
Image Format
The image is reconstructed in the form of an array of individual picture ele¬
ments, or pixels. The number of pixels making up the image is typically in the
range of 64 x 64 pixels to 512 x 512 pixels. The matrix size (number of pixels per
image) is selected by the operator before the scan procedure. Pixel size has a sig¬
nificant effect on image quality and must be selected to fulfill the needs of the clinical procedure.
The pixel dimension (d) is determined by the field of view (FOV) and the matrix size:
d = FOV / Matrix Size
Figure 23-10 The Relationship of Voxel Size to FOV, Matrix Size, and Slice Thickness
For example, a 25.6-cm FOV with a 256 x 256 matrix gives pixels with dimensions (length and width) of 1 mm. Changing either the field of view or the matrix size alters the dimensions of the individual voxels.
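A small helper expressing this relationship; the function name and units are our own choices.

```python
def voxel_dimensions(fov_mm: float, matrix: int, slice_thickness_mm: float):
    """Voxel length and width come from FOV / matrix; depth is the slice thickness."""
    d = fov_mm / matrix
    return d, d, slice_thickness_mm

# A 256-mm (25.6-cm) FOV with a 256 x 256 matrix gives 1-mm pixels:
length, width, depth = voxel_dimensions(256.0, 256, 10.0)
```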
CT Numbers
[Figure 23-11: the CT number is derived from the tissue and water attenuation coefficients, which depend on tissue density and photon energy.]
Figure 23-11 The Relationship of Pixel (CT) Number and Tissue Voxel Attenuation Coefficient Value (μ)
The reconstruction process calculates the attenuation coefficient value for each voxel and then transforms it into an appropriate image pixel value. The pixel values are generally designated CT numbers.
Most systems express CT numbers in Hounsfield units. The relationship between a CT number and the corresponding attenuation coefficient value is given by

CT number = ((μtissue − μH2O) / μH2O) × 1,000.
Water is used as a reference material for determining CT numbers. By defini¬
tion, water has a CT number value of 0. Materials that have attenuation coefficient
values greater than that of water have positive CT number values, and materials
with coefficient values less than that of water have negative CT numbers. CT scanners generally operate at relatively high KVs, at which Compton interactions predominate in the soft tissue. The linear attenuation coefficient values for Compton
interactions are primarily determined by material density. Therefore, at least in the
soft tissues, the CT numbers are closely related to tissue density. Tissue with a
density less than that of water (specific gravity less than 1) generally has negative
CT number values. Positive CT number values indicate a tissue density greater
than that of water.
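The formula above can be sketched directly; the attenuation coefficient values below are illustrative assumptions, not reference data.

```python
MU_WATER = 0.195   # linear attenuation coefficient of water (1/cm); illustrative value

def ct_number(mu_tissue: float, mu_water: float = MU_WATER) -> float:
    """CT number (Hounsfield units) from the formula in the text."""
    return ((mu_tissue - mu_water) / mu_water) * 1000.0

water = ct_number(MU_WATER)   # 0 by definition
fat = ct_number(0.185)        # less attenuating than water -> negative CT number
bone = ct_number(0.380)       # more attenuating than water -> positive CT number
```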
The same tissue will not produce the same CT numbers if scanned with differ¬
ent machines because of differences in x-ray beam energy (KV and filtration) and
system calibration procedures. CT numbers obtained with the same scanner can
[Figure 23-12: attenuation profiles (View A and View B) measured through a body section.]
Figure 23-12 Two Views of a Body Section Used To Illustrate Back Projection
vary from one time to another and with the location of the specific tissue within the imaged area. If CT numbers are to be used for analytical purposes, such
as the determination of bone density, it is usually necessary to scan a set of refer¬
ence materials along with the patient.
Back Projection
As illustrated in Figure 23-12, when projected through a body section, a view "sees" only a composite attenuation profile rather than the
individual anatomical structures within the slice. This illustration shows only two
of the several hundred views usually made in an actual scan.
Consider the back projection of four views onto an image surface. Each view profile contains only
enough information to project lines, or bands, through the image. However, if the
individual views are superimposed, as shown from left to right in the top row, the result begins to resemble the actual image. Several hundred views are normally required to reconstruct the more complex and detailed image of a body section.
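The principle can be sketched with a minimal 2 x 2 example using two orthogonal views; this toy setup is ours, not from the text.

```python
# A hidden 2 x 2 "body section" of attenuation values.
body = [[1.0, 0.0],
        [0.0, 2.0]]

# Two views: sums along rows (View A) and along columns (View B).
view_a = [sum(row) for row in body]         # profile seen from the side
view_b = [sum(col) for col in zip(*body)]   # profile seen from above

# Back projection: spread each view's profile back across the image as
# bands, and add the bands from all views.
image = [[view_a[r] + view_b[c] for c in range(2)] for r in range(2)]
```

With only two views the result is blurred by the bands, but the brightest pixel already lands where the body's strongest attenuator is; many more views sharpen this toward the true image.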
Chapter 24
Computed Tomography Image Quality
The CT image is distinctly different from the conventional radiograph. The best
use of CT imaging and accurate interpretation will be easier to achieve if one has
a good understanding of CT image quality characteristics and how they can be adjusted. In many instances, changing a factor to improve one image characteristic will adversely affect some other characteristic. Therefore, the issue is not which factors give the "best" image, but which values produce maximum visibility of specific features, such as small differences in soft tissue. Image quality must also be balanced against patient exposure, x-ray tube heating, and imaging time.
In this chapter, we consider the characteristics of CT image quality and show
how they are related to the various imaging factors.
In comparison with radiography, CT imaging generally has a higher contrast
sensitivity and produces less visibility of detail, more noise, and more artifacts.
CONTRAST SENSITIVITY
Tomography
In tomographic imaging, each anatomical feature is displayed directly and is
not superimposed on other objects. This makes it possible to enhance the contrast
in the areas of interest without interference from high-contrast bony structures.
Windowing
[Figure: a display window positioned on a CT number scale running from +250 to −250.]
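Windowing can be sketched as a mapping from a selected CT number sub-range onto display gray levels; the 0 to 255 gray scale and the rounding choice are our assumptions.

```python
def window(ct: float, center: float, width: float) -> int:
    """Map a CT number into a 0-255 gray value for the selected window.

    CT numbers below the window display as black, above it as white;
    only values inside the window are spread over the gray scale.
    """
    lower = center - width / 2.0
    if ct <= lower:
        return 0
    if ct >= lower + width:
        return 255
    return round((ct - lower) / width * 255)

# A 500-unit window centered on 0, like the scale sketched above:
grays = [window(ct, center=0.0, width=500.0) for ct in (-400, -250, 0, 250, 400)]
```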
The relatively narrow x-ray beam used in CT produces much less scattered ra¬
diation than the much larger beams used in conventional radiography.
[Figure 24-3 labels: focal spot (size, motion), sampling, detector aperture, filter (smoothing), and voxel size (matrix, FOV), all contributing to image blurring.]
Figure 24-3 Factors That Produce Blurring and Loss of Detail in CT Imaging
Detector Aperture
The detector aperture is the effective size of each detector in the image plane and is one of the two major factors that determine ray width. A small detector aperture produces a narrow ray, less blur, and better image detail. Many scanners
have adjustable collimating devices, which can be used to change the detector
aperture. A small aperture setting produces maximum image detail. When a por¬
tion of the detector is covered, however, the geometric efficiency is reduced. An
increase in radiation exposure to the patient is then required to produce the same image noise level.
Focal Spot
Each ray is created by the x-ray tube focal spot. Two factors associated with the focal spot affect ray width: (1) the size of the focal spot and (2) its movement during the interval of each measurement. Small focal spots create rays with narrow
widths, which produce better image detail. However, the heat capacity of the focal
spot area is often a limiting factor. Many scanners use x-ray tubes with dual focal
spots; the small spot is used for maximum image detail and the large spot for
maximum heat capacity.
The optimum imaging situation is generally one in which the focal spot size and
detector aperture are approximately equal. If the objects being imaged are ap¬
proximately the same distance from the focal spot and detector, no advantage is
Computed Tomography Image Quality 365
gained if the size of one greatly exceeds the size of the other. If the objects are
closer to either the focal spot or detector, the closer device has more of an influ¬
ence on the ray dimension.
If the ray width significantly exceeds the dimensions of small objects, or anatomical detail, the detail
will not appear in the image. The rays must be sufficiently close during the scan¬
ning procedure to measure any anatomical detail that is to appear in the image. A
relatively large interval between rays not only reduces image detail but causes
aliasing artifacts.
The formation of an image into an array of pixels is, in itself, a blurring process.
Since a specific pixel can have only one CT number value, there can be no detail within a pixel. In other words, all detail within the tissue voxel represented by a
specific pixel is blurred together and assigned a single value. With respect to im¬
age quality, the significant dimension is not that of the pixel in the image but rather
that of the corresponding voxel in the patient's body. Anatomical detail within a
voxel cannot be imaged. Therefore, small voxels are needed when image detail is
required.
Three factors determine voxel size: (1) field of view, (2) matrix size, and (3)
slice thickness. In principle, voxel size can be changed by changing any one of
these factors. Reducing voxel size generally, but not always, improves image de¬
tail. It will not significantly improve image detail if the voxel size is not the limit¬
ing factor, nor if the focal spot, detector, or other factors produce significantly
more blur than the voxel.
The selection of the appropriate voxel size for a specific clinical procedure generally depends on the requirement for image detail. Noise increases as voxel size is decreased.
Reconstruction Filters
Mathematical filters are applied to the data during the image reconstruction process. Most systems have several filters that can be selected by the operator. Functions performed by the filters include reducing artifacts, smoothing to reduce image noise, and enhancing edges. Since image
smoothing is a blurring process, the use of this type of filter can limit visibility of
detail.
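Smoothing as a blurring process can be illustrated with a 3 x 3 mean kernel on hypothetical pixel values; real reconstruction filters are applied differently, so this is only a sketch of the noise-versus-detail compromise.

```python
# A noisy image patch; the 30 is a noise spike in otherwise uniform tissue.
image = [
    [10, 12, 10, 11],
    [11, 30, 10, 12],
    [10, 11, 12, 10],
    [12, 10, 11, 11],
]

def smooth_at(img, r, c):
    """Mean of the 3 x 3 neighborhood centered on (r, c)."""
    neighborhood = [img[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    return sum(neighborhood) / 9.0

spike_before = image[1][1]            # 30
spike_after = smooth_at(image, 1, 1)  # pulled back toward its neighbors
```

The spike is suppressed, but a genuine small bright detail at that pixel would be suppressed just as much, which is why smoothing filters can limit visibility of detail.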
Composite Blur
We have seen that several factors contribute to image blurring in CT. Many of them can be adjusted by the user. However, compromises must often be made between image detail and other factors. The following principles must be considered:
• Decreasing detector aperture decreases efficiency, leading to an increase in patient exposure or image noise.
• Decreasing focal spot size decreases x-ray tube heat capacity.
• Increasing matrix size increases image noise.
• Decreasing field of view increases image noise and can limit specific clinical applications.
Reducing blur by changing any one factor will not significantly improve image
quality unless the factor is a significant source of blur with respect to the other
factors. For example, using a small detector aperture will not improve image detail
if the detail is limited by a large focal spot or large voxel. Figure 24-4 can be used
to compare the image blur produced by individual factors; a scale of blur values is
shown for the four most significant. The compromises associated with each factor
are also indicated. Maximum image detail is obtained by reducing the blur values
as much as possible.
[Figure 24-4: blur-value scales (roughly 0.4 to 2.2) for detector aperture, focal spot, filter algorithm, and voxel size; the voxel-size scale depends on matrix size (256 or 512) and FOV (24, 36, or 48 cm).]
The best image procedure is generally one in which blur from all sources, ex¬
cept voxel size, is approximately the same. Voxel size can usually be adjusted to a
smaller value than the other factors.
NOISE
Effect on Visibility
[Figure 24-5: (A) an array of pixel CT numbers scattered around 0 for a water phantom; (B) a distribution of attenuation coefficient values (0.191 to 0.199) with SD = 0.5%.]
Figure 24-5 (A) Typical CT Numbers in an Image of a Volume of Water. (B) The Spread of Values (Standard Deviation) Is an Indication of the Amount of Image Noise.
In principle, every pixel should have a CT number value of 0, but because of the presence of noise, individual pixels have a range of
values as indicated. The variation in CT numbers (noise) can be expressed in terms
of the standard deviation of the values. The graph in Figure 24-5 shows a typical
distribution of CT numbers for water. The range of values represented by 1 stan¬
dard deviation below and above the average value (0) is indicated. The value of
the standard deviation be
expressed in CT numbers or as a percentage.
can
The noise in a system can be measured by scanning a uniform phantom of water and then using the viewing functions to display the standard deviation value for a specific region of interest (ROI).
Sources
Noise can be decreased by increasing the dimensions of the pixel (voxel), but,
as we have seen, this increases image blurring and reduces visibility of detail. This
is one of the important compromises that must be made in selecting imaging fac¬
tors.
Slice Thickness
Radiation Exposure
The amount of radiation used to create a CT image can usually be varied by changing either the MA or the scanning time. Changing either produces a proportional change in patient dose and the radiation absorbed in individual voxels. Image noise can be decreased by increasing the quantity of radiation used (MAS), but the radiation dose absorbed by the tissue will also increase.
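The trade-off can be sketched with the common rule of thumb that quantum noise varies inversely with the square root of the photon count; this scaling is an assumption we add, not a figure from the text.

```python
import math

def relative_noise(mas: float, reference_mas: float = 100.0) -> float:
    """Quantum noise relative to a reference technique.

    Assumes noise scales as 1/sqrt(number of photons) and that photon count
    is proportional to MAS -- a common rule of thumb, not a statement
    from the text.
    """
    return math.sqrt(reference_mas / mas)

noise_100 = relative_noise(100.0)   # reference technique
noise_400 = relative_noise(400.0)   # four times the MAS (and dose), half the noise
```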
Window Setting
The visibility of noise in a CT image depends on the setting of the window used
to view the image. Small windows, which enhance contrast, also increase the con¬
trast and visibility of noise.
Filtration
ARTIFACTS
• patient motion (streaks)
• high-attenuation objects (streaks)
• aliasing (streaks)
• beam hardening (cupping)
• detector imbalance (rings)
• centering
• partial volume effect.
Chapter 25
Ultrasound Production and
Interactions
Sound is a physical phenomenon that transfers energy from one point to an¬
other. In this respect, it is similar to radiation. It differs from radiation, however, in
that sound can pass only through matter and not through a vacuum as radiation
can. This is because sound waves are actually vibrations passing through a mate¬
rial. If there is no material, nothing can vibrate and sound cannot exist.
One of the most significant characteristics of sound is its frequency, which is the rate at which the sound source and the material vibrate. The basic unit for specifying frequency is the hertz (Hz), which is one vibration, or cycle, per second. Pitch is a term commonly used as a synonym for the frequency of sound.
The human ear cannot hear or respond to all sound frequencies. The range of
the body until they are reflected by some structure. Actually, it is the boundary or
interface between different types of tissue that produces the reflection. This is the
source of echo pulses, which provide the information for creating the image. The
ultrasound beam is the pathway followed by the pulses. The ultrasound image is a
display showing the location of reflecting structures or echo sites within the body.
The location of a reflecting structure (interface) in the horizontal direction is deter¬
mined by the position of the beam. In the depth direction, it is determined by the
[Figure: an ultrasound imaging system: a pulse generator (intensity control) drives the transducer; returning echo pulses pass through the amplifier (gain, TGC) to the processor and display; the scan generator steers the beam, which travels to a reflecting interface and returns echo pulses.]
time required for the pulse to travel to the reflecting site and for the echo pulse to
return.
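The depth calculation implied here can be sketched as follows, assuming the typical soft-tissue sound speed of about 1540 m/s (a standard figure, not one stated in this passage).

```python
SPEED_OF_SOUND = 1540.0   # m/s, typical soft-tissue value (assumed)

def echo_depth_cm(round_trip_time_s: float) -> float:
    """Depth of a reflecting interface from the pulse's round-trip time.

    The pulse travels to the interface and back, so the one-way distance
    is half of speed x time.
    """
    return SPEED_OF_SOUND * round_trip_time_s / 2.0 * 100.0   # meters -> cm

# An echo returning after 130 microseconds comes from about 10 cm deep:
depth = echo_depth_cm(130e-6)
```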
Transducer
The transducer is the component of the ultrasound system that is placed in direct
contact with the patient's body. It alternates between two functions: (1) producing
ultrasound pulses and (2) receiving or detecting the returning echoes. Within the
transducer there are one or more piezoelectric elements. When an electrical pulse
is applied to the piezoelectric element it vibrates and produces the ultrasound.
Also, when the piezoelectric element is vibrated by the returning echo pulse it
produces a pulse of electricity.
Pulse Generator
The pulse generator produces the electrical pulses that are applied to the trans¬
ducer. For conventional ultrasound imaging the pulses are produced at a rate of
approximately 1,000 pulses per second. The principal control associated with the pulse generator adjusts the size of the electrical pulses, which can be used to change the intensity of the ultrasound beam.
Amplifier
The amplifier is used to increase the size of the electrical pulses coming from
the transducer. The amount of amplification is determined by the gain setting. The
principal control associated with the amplifier is the time gain compensation
(TGC), which allows the user to adjust the gain in relationship to the depth of echo
sites within the body. This function will be considered in much more detail in
Chapter 26.
Scan Generator
The scan generator controls the scanning of the ultrasound beam over the body section being imaged. This is usually done by controlling the sequence in which
the electrical pulses are applied to the piezoelectric elements within the trans¬
ducer. This is also considered in more detail in Chapter 26.
Scan Converter
Image Processor
The digital image from the scan converter is processed to produce the desired
contrast characteristics.
Display
The processed images are converted to video images and displayed on the screen or recorded on film.
One additional component of the ultrasound imaging system that is not shown
is the digital disk or tape that is used to store images for later viewing.
ULTRASOUND CHARACTERISTICS
Frequency
The frequency of ultrasound pulses must be carefully selected to provide a
proper balance between image detail and depth of penetration. In general, high
frequency pulses produce higher quality images but cannot penetrate very far into
the body. These issues will be discussed in greater detail later.
The basic principles of ultrasound pulse production and transmission are illus¬
trated in Figure 25-2. The source of sound is a vibrating object, the
piezoelectric
transducer element. Since the vibrating source is in contact with the tissue, it is
caused to vibrate. The vibrations in the region of tissue next to the transducer are
passed on to the adjacent tissue. This process continues, and the vibrations, or sound, are passed along from one region of tissue to another. The rate at which the regions of tissue vibrate is the frequency of the sound.
[Figure 25-2: an ultrasound pulse moving away from the transducer — alternating regions of compression and rarefaction, pressure amplitude, beam direction, velocity, and longitudinal vibration]
In soft tissue and fluid materials the direction of vibration is the same as the
direction of pulse movement away from the transducer. This is characterized as
longitudinal vibration as opposed to the transverse vibrations that occur in solid
materials. As the longitudinal vibrations pass through a region of tissue, alternat¬
ing changes in pressure are produced. During one half of the vibration cycle the
tissue will be compressed with an increased pressure. During the other half of the cycle the pressure is reduced (rarefaction). When an electrical pulse is applied, the transducer element vibrates, or "rings," for a short period of time. This creates an ultrasound pulse as opposed to a
continuous ultrasound wave. The ultrasound pulse travels into the tissue in contact
with the transducer and moves away from the transducer surface, as shown in
Figure 25-2. A given transducer is often designed to vibrate with only one fre¬
quency, called its resonant frequency. Therefore, the only way to change ultra¬
sound frequency is to change transducers. This is a factor that must be considered
when selecting a transducer for a specific clinical procedure. Certain frequencies
are more appropriate for certain types of examinations than others. Some transducers are capable of producing different frequencies. For these, the operator can select the frequency to be used.

Velocity

The velocity of sound is determined by characteristics of the material through which it is passing:

Velocity = √(E/ρ)

where ρ is the density of the material, and E is a factor related to the elastic properties of the material. The velocities of sound through several materials of interest are given in Table 25-1.
Most ultrasound systems are set up to determine distances using an assumed
velocity of 1540 m/sec. This means that displayed depths will not be completely
accurate in materials that produce other ultrasound velocities such as fat and fluid.
Wavelength
The distance sound travels during the period of one vibration is known as the wavelength.
Figure 25-3 shows both temporal and spatial (length) characteristics related to
the wavelength. A typical ultrasound pulse consists of several wavelengths or vi¬
bration cycles. The number of cycles within a pulse is determined by the damping
characteristics of the transducer. Damping is what keeps the transducer element
Table 25-1 Velocity of Ultrasound in Selected Materials

Material                 Velocity (m/sec)
Fat                      1450
Water                    1480
Soft tissue (average)    1540
Bone                     4100
[Figure 25-3: temporal characteristics (period, pulse duration) and spatial characteristics (wavelength, pulse length) of an ultrasound pulse, with pressure amplitude ranging from compression to rarefaction]
from continuing to vibrate and produce a long pulse. The wavelength is deter¬
mined by the velocity, v, and frequency, f, in this relationship:
Wavelength = v/f.
The period is the time required for one vibration cycle. It is the reciprocal of the
frequency. Increasing the frequency decreases the period. In other words, wave¬
length is simply the ratio of velocity to frequency or the product of velocity and
the period. This means that the wavelength of ultrasound is determined by the
characteristics of both the transducer (frequency) and the material through which
the sound is passing (velocity).
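The relationships just stated (wavelength = v/f, period = 1/f) can be sketched numerically; function names and the example frequency are illustrative.

```python
# Wavelength and period of ultrasound, from the relationships in the text:
# wavelength = v / f and period = 1 / f. Velocity 1540 m/s for soft tissue.

def wavelength_mm(velocity_m_s: float, frequency_hz: float) -> float:
    """Distance traveled during one vibration cycle, in millimeters."""
    return velocity_m_s / frequency_hz * 1000.0

def period_us(frequency_hz: float) -> float:
    """Time for one vibration cycle, in microseconds."""
    return 1.0 / frequency_hz * 1e6

# 5 MHz ultrasound in soft tissue: wavelength ~0.31 mm, period 0.2 microseconds.
print(round(wavelength_mm(1540, 5e6), 3), round(period_us(5e6), 2))
```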
Amplitude
The amplitude of an ultrasound pulse is the range of pressure excursions as
shown in Figure 25-3. The pressure is related to the degree of tissue displacement
caused by the vibration. The amplitude is related to the energy content, or "loud¬
ness," of the ultrasound pulse. The amplitude of the pulse as it leaves the trans¬
ducer is determined by how hard the crystal is "struck" by the electrical pulse.
Most systems have a control on the pulse generator that changes the size of the electrical pulse and the ultrasound pulse amplitude. We designate this as the intensity control, although different names are used by various equipment manufacturers.
In diagnostic applications, it is usually necessary to know only the relative am¬
plitude of ultrasound pulses. For example, it is necessary to know how much the
amplitude, A, of a pulse decreases as it passes through a given thickness of tissue.
The relative amplitude of two ultrasound pulses, or of one pulse after it has undergone a change such as attenuation, is usually expressed in decibels (dB). The decibel value is 20 × log(A₂/A₁), where A₂/A₁ is the amplitude ratio.
When the amplitude ratio is greater than 1 (comparing a large pulse to a smaller
one), the relative pulse amplitude has a positive decibel value; when the ratio is
less than 1, the decibel value is negative. In other words, if the amplitude of a pulse
is increased by some means, it will gain decibels, and if it is reduced, it will lose
decibels.
Figure 25-4 compares decibel values to pulse amplitude ratios and percent val¬
ues. The first two pulses differ in amplitude by 1 dB. In comparing the second pulse to the first, this corresponds to an amplitude ratio of 0.89, or a reduction of approximately 11%. If the pulse is reduced in amplitude by another 11%, it will be 2 dB smaller than the original pulse. If the pulse is once again reduced in amplitude by 11% (of 79%), it will have an amplitude ratio (with respect to the first pulse) of 0.71 and will be 3 dB smaller than the original.
[Figure 25-4: decibel values compared with amplitude ratios — 1 dB = 89%, 2 dB = 79%, 3 dB = 71%, 6 dB = 50%, 12 dB = 25%, 18 dB = 12.5%, 24 dB = 6%]
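The decibel arithmetic can be checked numerically. This minimal sketch covers both the amplitude form (factor of 20) and the intensity form (factor of 10); function names are illustrative.

```python
import math

def amplitude_db(ratio: float) -> float:
    """Relative amplitude in decibels: a factor of 20 for amplitude ratios."""
    return 20.0 * math.log10(ratio)

def db_to_amplitude_ratio(db: float) -> float:
    """Invert the relationship: the amplitude ratio for a given dB value."""
    return 10.0 ** (db / 20.0)

def intensity_db(ratio: float) -> float:
    """A factor of 10 for intensities (intensity ~ amplitude squared)."""
    return 10.0 * math.log10(ratio)

# Reproduce the pairs of Figure 25-4: -1 dB -> 89%, -6 dB -> 50%, -24 dB -> 6%.
for db in (1, 2, 3, 6, 12, 24):
    print(db, round(db_to_amplitude_ratio(-db) * 100))
```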
Note that when intensities are being considered, a factor of 10 appears in the
equation rather than a factor of 20, which is used for relative amplitudes. This is
because intensity is proportional to the square of the pressure amplitude, which
introduces a factor of 2 in the logarithmic relationship. The intensity of an ultra¬
sound beam is not constant with respect to time nor uniform with respect to spatial
area, as shown in Figure 25-5. This must be taken into consideration when expressing or comparing intensity values.
Temporal Characteristics
Figure 25-5 shows two sequential pulses. Two important time intervals are the
pulse duration and the pulse repetition period. The ratio of the pulse duration to the
pulse repetition period is the duty factor. The duty factor is the fraction of time that
an ultrasound pulse is actually being produced. If the ultrasound is produced as a
continuous wave (CW), the duty factor will have a value of 1. Intensity and power
are proportional to the duty factor. Duty factors are relatively small, less than 0.01, for pulsed imaging. Several values can be used to express the power of pulsed ultrasound. One is the peak power, which is associated with the time of maximum pressure. Another is
the average power within a pulse. The lowest value is the average power over the
pulse repetition period for an extended time. This is related to the duty factor.
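The duty-factor relationship can be sketched as follows. The 1 µs pulse duration and the intensity value are illustrative assumptions; the 1,000 pulses per second rate is the one given earlier in the chapter.

```python
# Duty factor: the fraction of time that ultrasound is actually being produced.
# Temporal-average intensity is the pulse-average intensity scaled by this factor.

def duty_factor(pulse_duration_s: float, pulse_rep_period_s: float) -> float:
    return pulse_duration_s / pulse_rep_period_s

pd = 1e-6           # pulse duration, 1 microsecond (assumed)
prp = 1.0 / 1000    # repetition period at 1,000 pulses per second

df = duty_factor(pd, prp)
print(df)  # well below the 0.01 the text cites for pulsed imaging

pulse_average_intensity = 50.0               # mW/cm^2, assumed value
print(pulse_average_intensity * df)          # temporal-average intensity, mW/cm^2
```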
Figure 25-5 The Temporal and Spatial Characteristics of Ultrasound Pulses That Affect
Intensity Values
Spatial Characteristics
The energy or intensity is
generally not distributed uniformly over the area of an
ultrasound pulse. It can be expressed either as the peak
intensity, which is often in
the center of the pulse, for the average intensity
as over a designated area.
Temporal/Spatial Combinations

There is some significance associated with each of the intensity expressions.
However, they are not all used to express the intensity with respect to potential
biological effects. Thermal effects are most closely related to the spatial-peak and
temporal-average intensity (Ispta). This expresses the maximum intensity deliv¬
ered to any tissue averaged over the duration of the exposure. Thermal effects
(increase in temperature) also depend on the duration of the exposure to the ultra¬
sound. Mechanical effects such as cavitation are more closely related to the spatial-peak intensity values.
Absorption
As the ultrasound pulse moves through matter, it continuously loses energy.
This is generally referred to as attenuation. Several factors contribute to this
reduction in energy. One of the most significant is the absorption of the ultrasound
energy by the material and its conversion into heat. Ultrasound pulses lose energy
continuously as they move through matter. This is unlike x-ray photons, which
lose energy in "one-shot" photoelectric or Compton interactions. Scattering and
refraction interactions also remove some of the energy from the pulse and contrib¬
ute to its overall attenuation, but absorption is the most significant.
"windows" through which underlying structures can be easily imaged. Most of the
soft tissues of the body have attenuation coefficient values of approximately 1 dB
per cm per MHz, with the exception of fat and muscle. Muscle has a range of
values that depends on the direction of the ultrasound with respect to the muscle
fibers. Lung has a much higher attenuation rate than either air or soft tissue. This is
because the small pockets of air in the alveoli are very effective in scattering ultra¬
sound energy. Because of this, the normal lung structure is extremely difficult to
penetrate with ultrasound. Compared to the soft tissues of the body, bone has a
relatively high attenuation rate. Bone, in effect, shields some parts of the body
against easy access by ultrasound.
Figure 25-6 shows the decrease in pulse amplitude as ultrasound passes through
various materials found in the human body.
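The attenuation arithmetic can be sketched as follows. The coefficients are those tabulated in this chapter; the round-trip doubling reflects the pulse traveling to the echo site and back, and the function names are illustrative.

```python
# Total attenuation along a path: coefficient (dB/cm per MHz) x frequency (MHz)
# x distance (cm). For an echo, the pulse travels the path twice.

COEFF_DB_PER_CM_MHZ = {
    "water": 0.002, "fat": 0.66, "soft tissue": 0.9,
    "muscle": 2.0, "air": 12.0, "bone": 20.0, "lung": 40.0,
}

def one_way_loss_db(material: str, freq_mhz: float, depth_cm: float) -> float:
    return COEFF_DB_PER_CM_MHZ[material] * freq_mhz * depth_cm

def round_trip_loss_db(material: str, freq_mhz: float, depth_cm: float) -> float:
    return 2.0 * one_way_loss_db(material, freq_mhz, depth_cm)

# 3.5 MHz through 10 cm of average soft tissue: 31.5 dB one way, 63 dB round trip.
print(round(one_way_loss_db("soft tissue", 3.5, 10), 1),
      round(round_trip_loss_db("soft tissue", 3.5, 10), 1))
```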
Attenuation Coefficients

Material                 Coefficient (dB/cm per MHz)
Water                    0.002
Fat                      0.66
Soft tissue (average)    0.9
Muscle (average)         2.0
Air                      12.0
Bone                     20.0
Lung                     40.0

Reflection
the two quantities are in no way related. Acoustic impedance is a characteristic of a material related to its density and elastic properties. Since the velocity is related to the same material characteristics, a relationship exists between tissue impedance and ultrasound velocity. The relationship is such that the impedance, Z, is the product of the velocity, v, and the material density, ρ, which can be written as

Z = ρ × v.
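The impedance product can be sketched numerically. The density values below are illustrative assumptions, and the intensity-reflection formula ((Z₂ − Z₁)/(Z₂ + Z₁))² is the standard result for a flat interface, not derived in this passage.

```python
# Acoustic impedance Z = density x velocity, as given in the text, plus the
# standard intensity-reflection fraction at an interface (shown for illustration).

def impedance(density_kg_m3: float, velocity_m_s: float) -> float:
    return density_kg_m3 * velocity_m_s  # rayls (kg / m^2 / s)

def intensity_reflection_fraction(z1: float, z2: float) -> float:
    return ((z2 - z1) / (z2 + z1)) ** 2

fat = impedance(925.0, 1450.0)        # assumed density for fat
muscle = impedance(1070.0, 1580.0)    # assumed density for muscle

# Only a small fraction of the intensity reflects at a fat/muscle interface;
# most of the pulse penetrates to deeper structures.
print(round(intensity_reflection_fraction(fat, muscle), 4))
```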
Figure 25-6 The Effect of Absorption on Ultrasound Pulse Amplitude in Relation to Distance or Depth into the Body
Figure 25-7 The Production of an Echo and Penetrating Pulse at a Tissue Interface
Refraction
When an ultrasound pulse passes through an interface at a
relatively small angle
(between the beam direction and interface surface), the penetrating pulse direction
will be shifted by the refraction process. This can produce certain artifacts, as we will see in Chapter 26.
The dimensions of the ultrasound pulse generally change as it moves along the beam path. The effect of pulse size on image detail will be considered in
Chapter 26. At this point we will observe the change in pulse diameter as it moves
along the beam and show how it can be controlled.
The diameter of the pulse is determined by the characteristics of the transducer.
At the transducer surface, the diameter of the pulse is the same as the diameter of
the vibrating crystal. As the pulse moves through the body, the diameter generally
changes. This is determined by the focusing characteristics of the transducer.
Transducer Focusing
Transducers can be designed to produce either a focused or an unfocused beam, as shown in Figure 25-8. A focused beam is desirable for most imaging applications because it produces pulses with a small diameter, which in turn gives better
visibility of detail in the image. The best detail will be obtained for structures
within the focal zone. The distance between the transducer and the focal zone is
the focal depth.
Unfocused Transducers. An unfocused transducer produces a beam with two
distinct regions, as shown in Figure 25-8. One is the so-called near field or Fresnel
zone and the other is the far field or Fraunhofer zone.
In the near field, the ultrasound pulse maintains a relatively constant diameter that is determined by the diameter of the transducer. The length of the near field is related to the diameter, D, of the transducer and the wavelength, λ, of the ultrasound by

Near field length = D²/4λ.
Figure 25-8 Beam Width and Pulse Diameter Characteristics of Both Unfocused and
Focused Transducers
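The near-field length relationship, N = D²/4λ (the standard expression for a circular element), can be sketched as follows; the example transducer size and frequency are illustrative.

```python
# Near field (Fresnel zone) length of an unfocused transducer: D^2 / (4 * wavelength).
# Wavelength from v / f, with the 1540 m/s soft tissue velocity.

def near_field_length_cm(diameter_cm: float, freq_mhz: float,
                         velocity_m_s: float = 1540.0) -> float:
    wavelength_cm = velocity_m_s * 100.0 / (freq_mhz * 1e6)
    return diameter_cm ** 2 / (4.0 * wavelength_cm)

# A 1-cm, 5-MHz transducer: wavelength ~0.0308 cm, near field ~8.1 cm.
# Note the proportionality to frequency described in the text.
print(round(near_field_length_cm(1.0, 5.0), 1))
```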
Recall that the wavelength is inversely related to frequency. Therefore, for a given
transducer size, the length of the near field is proportional to frequency. Another
characteristic of the near field is that the intensity along the beam axis is not con¬
stant; it oscillates between maximum and zero several times between the trans¬
ducer surface and the boundary between the near and far field. This is because of the interference patterns created by the sound waves from the transducer surface.
An intensity of zero at a point along the axis simply means that the sound vibra¬
tions are concentrated around the periphery of the beam. A picture of the ultra¬
sound pulse in that region would look more like concentric
rings or "donuts" than
the disk that has been shown in various illustrations.
The major characteristic of the far field is that the beam diverges. This causes
the ultrasound pulses to be larger in diameter but to have less intensity along the
central axis. The approximate angle of divergence is related to the diameter of the transducer and the wavelength of the ultrasound. An advantage of the higher ultrasound frequencies (shorter wavelengths) is that the beams are less divergent and generally produce less blur and better detail.
Figure 25-8 is a representation of the ideal ultrasound beam. However, some
transducers produce beams with side lobes. These secondary beams fan out
around the primary beam. The principal concern is that under some conditions
echoes will be produced by the side lobes and produce artifacts in the image.
Fixed Focus. A transducer can be designed to produce a focused ultrasound beam by using a concave piezoelectric element or an acoustic lens in front of the
element. Transducers are designed with different degrees of focusing. Relatively
weak focusing produces a longer focal zone and greater focal depth. A strongly
focused transducer will have a shorter focal zone and a shorter focal depth.
Fixed focus transducers have the obvious disadvantage of not being able to
produce the same image detail at all depths within the body.
Adjustable Transmit Focus. The focusing of some transducers can be adjusted
to a specific depth for each transmitted pulse. This concept is illustrated in Figure 25-9. There are two basic array configurations: linear and annular. In the linear array the elements are arranged in either a straight or curved line. The annular array transducer consists of
concentric transducer elements as shown. Although these two designs have differ¬
ent clinical applications, the focusing principles are similar.
Focusing is achieved by not applying the electrical pulses to all of the trans¬
ducer elements simultaneously. The pulse to each element is passed through an
electronic delay. Now let's observe the sequence in which the transducer elements
are pulsed in Figure 25-9. The outermost element (annular) or elements (linear)
will be pulsed first. This produces ultrasound that begins to move away from the
transducer. The other elements are then pulsed in sequence, working toward the
center of the array. The centermost element will receive the last pulse. The pulses from the individual elements combine into a composite pulse that is focused at a depth determined by the delay times. The pulses from a linear array only focus in the one dimension that is in the plane of the transducer.
Dynamic Receive Focus. The focusing of an array transducer can also be
changed electronically when it is in the echo receiving mode. This is achieved by
[Figure 25-9: electrical pulses applied through time delays to linear (cross-section) and annular transducer arrays, producing a focal zone at a selected depth]
processing the electrical pulses from the individual transducer elements through
different time delays before they are combined to form a composite electrical
pulse. The effect of this is to give the transducer a high sensitivity for echoes
coming from a specific depth along the central axis of the beam. This produces a
focusing effect for the returning echoes.
An important factor is that the
receiving focal depth can be changed rapidly.
Since echoes at different depths do not arrive at the transducer at the same time,
the focusing can be swept down through the
depth range to pick up the echoes as
they occur. This is the major distinction between dynamic or sweep focusing dur¬
ing the receive mode and adjustable transmit focus. Any one transmitted pulse can
only be focused to one specific depth. However, during the receive mode, the
focus can be swept through a range of depths to pick up the
multiple echoes pro¬
duced by one transmit pulse.
Chapter 26
Ultrasound Imaging
An ultrasound image is essentially a map showing the echo producing sites. Two factors are used to determine
the location of the echo producing structures. The depth or distance from the trans¬
ducer to a site is measured by the time interval between the transmission of the
pulse and the reception of the returning echo. The horizontal or lateral location of
an echo site is determined by knowing the location of the beam that produced a
specific echo.
The amplitude of the returning echo is an indication of the strength of the reflec¬
tion. This is displayed either as a graph showing amplitude versus depth (A mode)
or in an image where the echo amplitude controls the brightness of the structure.
In creating the display, it is necessary to remove the absorption effects as much as possible so that the displays show only the reflection characteristics of the tissue.
ECHO AMPLITUDE
Let us now consider the situation shown in Figure 26-2. Here we see four echo
producing structures that are at different depths within the body section. It is as¬
sumed that all four sites are producing identical reflections and echo pulses of the
same amplitude. Both the A-mode display and the B-mode image show how these
four structures would appear if there is no compensation for the tissue absorption.
A-Mode Display
The A-mode display is useful for comparing the amplitudes of returning echo
pulses in relationship to the depth of the echo producing site within the body. The
position of an echo signal on the depth scale is determined by the transmit-receive
time interval as previously described.
B-Mode Image
In the B-mode image the brightness of each structure is determined by the am¬
plitude of the received echo. In this illustration we see that the brightness is de-
creasing with depth. This is caused by tissue absorption of both the pulse traveling
to the echo site and the returning echo. Notice the relationship between the bright¬
ness of the structures in the B-mode image and the echo amplitudes in the A-mode
display.
The decrease in echo amplitude and brightness with increase in depth is unde¬
sirable because the displays do not give a true indication of echo strengths.
Time-Gain Compensation
Time-gain compensation (TGC) is the technique that is used to remove the effects of tissue absorption as much as possible. It can also be used to emphasize or de-emphasize echoes coming from specific depths.
TGC is performed by the electronic pulse amplifier. When echoes are received
by the transducer they are converted into electrical pulses that go to the amplifier.
There is an overall gain that amplifies all pulses uniformly, and then there is the TGC, which will automatically adjust the gain in relationship to depth. For the TGC the operator must set the controls to specific gain values at different depths, as shown in Figure 26-3.
In the A-mode display the arrows show the increase in amplitude produced by
the TGC. We also observe that all four of the echo sites are now displayed with the
same brightness in the B-mode image.
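The principle of TGC can be sketched as depth-dependent gain that offsets round-trip attenuation. The 1 dB/cm per MHz slope is the soft-tissue value cited in Chapter 25; the echo amplitudes in the example are assumed values for identical reflectors at increasing depth, and the functions are illustrative, not any system's actual TGC circuit.

```python
# A minimal sketch of time-gain compensation: boost receiver gain with depth
# so that echoes from identical reflectors display equally at every depth.

def tgc_gain_db(depth_cm: float, freq_mhz: float,
                slope_db_per_cm_mhz: float = 1.0) -> float:
    """Gain needed to offset round-trip attenuation to a given depth."""
    return 2.0 * slope_db_per_cm_mhz * freq_mhz * depth_cm

def compensate(echo_amplitude: float, depth_cm: float, freq_mhz: float) -> float:
    """Apply the depth-dependent gain to a received echo amplitude."""
    gain_db = tgc_gain_db(depth_cm, freq_mhz)
    return echo_amplitude * 10.0 ** (gain_db / 20.0)

# Four identical reflectors at 3.5 MHz; the attenuated amplitudes are assumed.
# Each compensated amplitude comes back to roughly the same value (~1.0).
for depth, amp in [(2, 0.1995), (4, 0.0398), (6, 0.00794), (8, 0.001585)]:
    print(depth, round(compensate(amp, depth, 3.5), 2))
```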
TISSUE CHARACTERISTICS
Tissue Boundaries
We have already seen that echoes are produced when the ultrasound pulse encounters an interface or boundary between tissues that have different acoustic impedances. This is the basic process that creates the ultrasound image. The boundaries are generally larger than the ultrasound pulse and are displayed as distinctive bright areas. Examples are the boundaries of organs, vessel walls, and other soft tissue interfaces.
Specular Reflections
If a tissue boundary is relatively smooth, the ultrasound pulse can experience a
specular or "mirror-like" reflection in which all of the echo pulse is reflected in the
same direction. If this is back toward the transducer, the echo will contribute to the
display of the boundary. For a specular reflection this requires the ultrasound
beam to be normal to the boundary. If the incident pulse strikes the boundary at other angles, the echo will not return to the transducer and be displayed in the image.

Figure 26-4 Three Specific Tissue Characteristics That Appear in Ultrasound Images
Scatter Reflections
Parenchyma Patterns
The parenchymal structure of tissue provides many small reflection sites within
the tissue. These produce scattered reflections in virtually all directions. Some of
these reflections will be in the direction of the transducer and will be
displayed in
the image. However, there is an important characteristic that distinguishes the
ap¬
pearance of parenchyma from the larger tissue boundaries. In the tissue paren¬
chyma there are many closely spaced echo sites so that the pulses do not interact
with them on a one-to-one basis. The weak echoes from the individual sites are
combined by the process of constructive interference to produce the tissue texture patterns seen in the image. This is the speckle, which is a predominant characteristic in most ultrasound images. Although the speckle is not a direct display of the minute tissue structures, it does relate to overall parenchymal characteristics. Different types of tissue and organs tend to produce different speckle displays. However, the appearance of a specific type of tissue will not be the same under all
imaging conditions. It depends on factors such as the frequency and size of the
ultrasound pulses.
Fluid
SCAN PATTERNS
Several scanning patterns are used to move the ultrasound beam over the body
section being imaged. Four of the basic patterns are compared in Figure 26-5.
A linear scan pattern is produced by moving the beam so that all beams remain
parallel. The image area will have the same width at all depths. General advan¬
tages of a linear scan are a wide field of view for anatomical regions close to the
transducer and a uniform distribution of scan lines throughout the region. General
disadvantages include a limited field of view for deeper regions and difficulty in
intercostal imaging.
Sector scanning is performed by angulating the beam path from one transducer
location. Imaging can be performed through an intercostal space. A general disad¬
vantage is the small field of view for regions close to the transducer surface.
A trapezoid pattern is, in principle, a combination of linear and sector scanning.
The beam is angulated and moved as in the linear scan.
There are several different methods used to scan the ultrasound beam over a
body section and to achieve the different scan patterns described above. These
methods are related to specific transducer design characteristics. For many years
the scanning was done manually by the operator who used a single beam transducer.
Mechanical Scanners
A mechanical scanner sweeps the beam over the body section like a searchlight, producing a sector scan pattern.
Some scanners use transducer elements mounted on a rotating wheel, which
sweep the beam over the sector. Another design rocks or oscillates the transducer
element to produce the scan. A third approach is to use a stationary transducer
element, which projects the ultrasound beam onto an acoustic mirror. The mirror
is driven with an oscillating motion so that the sector is scanned by the beam
reflected from the mirror's surface.
A common characteristic of the mechanical scanners is that the ultrasound
beam is produced by either a single element transducer or an annular array. This is
quite different from the electronic scanners that will now be considered.
Electronic Scanners
Electronic scanners use transducers that contain many small piezoelectric elements arranged in an array. However, all of the elements are not used simultaneously. Different scan patterns and focusing are determined by the sequence in which the elements are pulsed along the array. The group of elements that are pulsed together form the transducer
aperture. The size of the aperture (number of elements) determines the lateral di¬
mension (width) of the beam as it leaves the transducer surface. The dimension of
the beam in the slice-thickness direction is determined by the dimension of the
transducer elements in that direction.
A scan would begin by electrical pulses being applied simultaneously to the first group of elements at the end of the array. All of the small ultrasound pulses
produced by the individual elements will be in phase and will form a combined
pulse that will move away in a direction normal to the surface of the transducer.
When echoes return along the same beam path
they will be received by the same
transducer elements and converted into electrical pulses. This creates the first
beam position or scan line for the image.
The second scan line is created by moving the aperture; for example, the second
beam would be created by adding one element to the right side of the aperture and
dropping one from the left. This process is repeated many times to scan the beams
over the body section.
Another form of linear array transducer is one in which the element array has a
convex curved shape. A curvilinear
array produces a trapezoid scan pattern.
In this transducer the elements are arranged in a relatively short array compared
to the linear array transducer. The different scan lines are created by changing the
angle between the ultrasound beam and the transducer. The beam angle is deter¬
mined by the sequence in which the electrical pulses are applied to the transducer
elements.
The pulses are applied in a sequence with a short time delay between each. This causes the transducer elements to produce ultrasound pulses at different times. As
the individual pulses are produced they move away from the face of the trans¬
ducer. The vibrations within the individual pulses combine in a constructive man¬
ner to create a composite pulse or "wavefront" moving away at a specific angle.
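The delay sequence that steers the wavefront can be sketched geometrically: each element fires later than its neighbor by (element spacing × sin θ)/v. The element count, pitch, and angle below are illustrative assumptions, not values from the text.

```python
import math

# Sketch of beam steering with a short element array: firing each element
# slightly later than its neighbor tilts the combined wavefront by an angle.
# delay_i = i * pitch * sin(angle) / velocity.

def steering_delays_us(n_elements: int, pitch_mm: float, angle_deg: float,
                       velocity_m_s: float = 1540.0) -> list:
    """Per-element firing delays (microseconds) for a given steering angle."""
    step_s = (pitch_mm / 1000.0) * math.sin(math.radians(angle_deg)) / velocity_m_s
    return [round(i * step_s * 1e6, 4) for i in range(n_elements)]

# 8 elements, 0.3 mm pitch, steered 20 degrees: delays step ~0.067 us apart.
print(steering_delays_us(8, 0.3, 20.0))
```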
Intercavity Probes
Image Detail
The major factor that limits visibility of detail in ultrasound imaging is the blur¬
ring associated with the size of the ultrasound pulse. A second factor is the spacing
of the scan lines within the imaged area. Therefore, image detail is determined by
characteristics of the specific transducer and the adjustment of certain scanning
parameters.
In Chapter 25 we discovered that an ultrasound pulse has two dimensions, di¬
ameter and length, which are generally independent of each other and determined
by different factors. We will now observe how the pulse dimensions affect image
detail.
Lateral Blur
The diameter of the ultrasound pulse causes the image of the object to be blurred in the lateral or scan direction. The
dimension of the blur is determined by the width of the beam or pulse diameter. In
Figure 26-8 we see that the least blurring and best visibility of detail is obtained
where the transducer is focused. This is an important factor to consider when se¬
lecting transducers for specific clinical applications.
Axial Blur
Axial, or depth, blur is determined by the length of the ultrasound pulse. This, in
turn, is determined by characteristics of the transducer, primarily the frequency
and damping. We recall from Chapter 25 that pulse length is related to wave¬
length, which is inversely related to frequency. High frequency, short wavelength ultrasound produces shorter pulses and less blurring along the axis of the ultra¬
sound beam. We also recall that the absorption of ultrasound in tissue increases
with frequency and limits imaging depth.
The selection of frequency for a specific clinical application is a compromise
between image detail and depth. This is illustrated in Figure 26-9.
As the ultrasound pulse passes over an object, an echo is produced when the
object is within the pulse. This results in the image of the object being blurred in
the axial direction in relationship to the length of the pulse. The concept of resolu¬
tion is also illustrated in Figure 26-9. When the distance between two objects is
short compared to the pulse length the objects will be blurred together and not
resolved or imaged as separate objects.
High frequency pulses produce better visibility of detail and resolution but are
limited with respect to depth. Lower frequencies can image deeper structures but
produce more image blurring and less visibility of detail and resolution.
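The frequency-versus-detail trade-off can be sketched with the rule of thumb that two reflectors closer than about half the pulse length blur together. The three-cycle pulse is an assumed damping characteristic, not a value from the text.

```python
# Axial detail vs. frequency: pulse length = cycles x wavelength, and the
# commonly used rule of thumb puts axial resolution at half the pulse length.

def pulse_length_mm(freq_mhz: float, cycles: int = 3,
                    velocity_m_s: float = 1540.0) -> float:
    wavelength_mm = velocity_m_s / (freq_mhz * 1e6) * 1000.0
    return cycles * wavelength_mm

def axial_resolution_mm(freq_mhz: float, cycles: int = 3) -> float:
    return pulse_length_mm(freq_mhz, cycles) / 2.0

# Higher frequency -> shorter pulse -> finer axial detail (but less depth).
for f in (2.5, 5.0, 10.0):
    print(f, round(axial_resolution_mm(f), 2))
```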
Figure 26-9 Axial Blurring Produced by the Length or Thickness of an Ultrasound Pulse
Figure 26-10 Factors that Determine the Density of Scan Lines in an Ultrasound Image
A more practical consideration is the rate at which images can be created. This
is especially significant when observing moving structures. This is given by:
Rate (images/s) = 77,000 (cm/s) / [D (cm) × N (lines/image)]

where 77,000 cm/s is one-half the velocity of sound in tissue, since each pulse must make a round trip to depth D.
For example, if we need to image to a depth of 10 cm with 240 scan lines in the
image, the maximum frame rate will be 32 image frames per second. This compro¬
mise between frame rate and image detail as affected by the number of scan lines
must be considered when setting up scanning protocols.
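The frame-rate limit can be sketched directly from the relationship above; the 77,000 cm/s constant is half the speed of sound in tissue, reflecting the round trip each pulse must complete before the next can be sent.

```python
# Maximum frame rate: a new scan line cannot start until the previous echoes
# return, so Rate (images/s) = 77,000 cm/s / (depth_cm x lines_per_image).

def max_frame_rate(depth_cm: float, lines_per_image: int) -> float:
    return 77000.0 / (depth_cm * lines_per_image)

# The text's example: 10 cm depth with 240 scan lines -> about 32 frames/s.
print(int(max_frame_rate(10, 240)))
```

Halving the depth or the number of scan lines doubles the available frame rate, which is the compromise between frame rate and image detail the text describes.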
Contrast Sensitivity
Contrast sensitivity depends on several factors, some of which can be adjusted by the operator. The TGC will have an effect on contrast sensitivity at
specific depths within the body. Ultrasound systems apply electronic and digital
processing to the image data before it is displayed. It is possible to change the
processing parameters to produce a variety of image contrast characteristics as
shown in Figure 26-11. Contrast is determined by the relationship of brightness in the image to the digital values in the image. Different relationships are shown here.

Figure 26-11 Curves That Show the Relationship between Brightness and Digital Values Produced by Changing the Processing Parameters

Artifacts

Shadowing

Some objects produce shadows in ultrasound images. If the object has either high reflection or attenuation characteristics, very little pulse energy will pass through it. This artifact appears in the image as a streak of reduced intensity (shadow) behind the object. We see a shadowing artifact by the stone in Figure 26-12.
Enhancement
Refraction
The refraction of the ultrasound near the edges of a fluid area can produce shadow-like artifacts, as shown in Figure 26-12. When the ultrasound beam scans across the edges of the fluid area, the refraction process spreads the beam, which
Reverberation
Reverberation can occur if two or more reflecting structures are located at dif¬
ferent points along the ultrasound beam, as shown in Figure 26-13. This artifact is
produced because some of the pulses are reflected back and forth between the two objects. This causes a time delay before the echo pulse returns to the transducer. Because of this delay, the echo appears to have originated from a structure that is deeper within the body than it actually is. Therefore, the image shows the structure at several points along the beam path.

Figure 26-12 Enhancement by a Fluid Region, Shadowing by a Stone, and Refraction Artifacts
Figure 26-13 Reverberation Artifacts Produced by a Reflecting Structure
Figure 26-14 Ring-Down Artifacts Produced by Resonating Objects (Metal, Stone, Gas Bubble)
Ring Down
Ring-down artifacts appear as bright streaks radiating down from certain objects within the body, as shown in Figure 26-14. These streaks are actually images of many closely spaced echo sites. These artifacts are caused by objects or structures that tend to resonate, or ring, when struck by the ultrasound pulse. This can happen with small metal objects and with layers of gas bubbles that produce the so-called bugle effect.
Chapter 27
Ultrasound Imaging
of Cardiac Motion and
Flowing Blood
Ultrasound imaging is especially useful in the cardiovascular system because of its ability to image both the motion of cardiac anatomy and the flow of blood through many parts of the vascular system. In this chapter we will consider the various methods and modes used for this purpose and the factors that must be considered by the user in order to obtain optimum results.
MOTION MODE
In the motion (M) mode, the ultrasound system records the motion of internal
DOPPLER IMAGING
The ability to image the flow of blood comes primarily from the Doppler effect.
The significance of Doppler imaging is that it allows us to directly visualize blood
flow and to assess flow velocity.
Doppler Effect
The Doppler effect, which is one method used for flow imaging, is the physical
interaction between the ultrasound and the flowing blood. The Doppler effect
occurs when ultrasound is reflected from a moving object. In vascular imaging the change in frequency shown in Figure 27-2 is the so-called Doppler shift. The Doppler effect was described in 1842 by Christian Johann Doppler, an Austrian mathematician who used it to explain the apparent shift in the color of light from moving stars. The Doppler effect generally applies when any form of sound or radiation is either emitted by or reflected from moving objects. Doppler radar is widely used to detect speeding automobiles and produce color weather maps of falling precipitation. There are three major factors—velocity, direction, and angle—that determine the amount of frequency shift that will occur. These will now be considered.
Direction

The effect of flow direction on the Doppler shift is shown in Figure 27-3. When the flow is toward the transducer, the Doppler effect increases the frequency of the reflected ultrasound. When the flow is away from the transducer, there will be a decrease in frequency. The significance is that Doppler imaging can distinguish the direction of flow.
Angle
An important consideration in Doppler imaging is the angle between the ultrasound beam and the direction of flow, as shown in Figure 27-4. The amount of frequency shift depends on this angle. Maximum frequency shift occurs when the
ultrasound beam is aligned with the direction of flow. This is generally not pos¬
sible to achieve in most situations. The frequency shift decreases as the angle is
increased up to 90 degrees. When the ultrasound beam is perpendicular to the
direction of flow (90-degree angle), no Doppler shift will occur. The specific rela¬
tionship is that the Doppler frequency shift is proportional to the cosine of the
angle, which ranges from a value of 1 at 0 degrees to a value of 0 at 90 degrees.
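The angle dependence can be worked through with the standard Doppler-shift equation for reflected ultrasound, Δf = 2·f0·v·cos(θ)/c; the full equation is not written out in this excerpt, so treat this as a sketch using that conventional form (names are mine):

```python
import math

SPEED_OF_SOUND_CM_S = 154_000  # average velocity of ultrasound in soft tissue

def doppler_shift_hz(f0_mhz: float, velocity_cm_s: float, angle_deg: float) -> float:
    """Frequency shift for ultrasound reflected from blood moving at the
    given velocity; positive when flow is toward the transducer."""
    f0_hz = f0_mhz * 1e6
    return (2 * f0_hz * velocity_cm_s
            * math.cos(math.radians(angle_deg)) / SPEED_OF_SOUND_CM_S)

if __name__ == "__main__":
    # The shift decreases as the angle increases and vanishes at 90 degrees
    for angle in (0, 45, 60, 90):
        print(angle, "deg:", round(doppler_shift_hz(5.0, 50, angle), 1), "Hz")
```

The 5 MHz frequency and 50 cm/s velocity are illustrative values, not figures from the text; the cosine factor reproduces the stated behavior of maximum shift at 0 degrees and no shift at 90 degrees.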
When using Doppler imaging to evaluate flow, there are two important factors
to consider:
Doppler Modes
The two different modes of Doppler applications are related to how the ultra¬
sound is produced. The two possibilities are continuous wave (CW) or pulsed
ultrasound.
other words, echoes arriving from different depths within the body cannot be sepa¬
rated as in conventional imaging. However, some degree of depth selectivity is
obtained by the geometric relationship between the transmitting and receiving
transducer elements. Echoes will be produced only in the region where the two
Pulsed Doppler
The pulsed Doppler mode uses a series of discrete ultrasound pulses like the
other imaging modes. The primary advantage of a pulsed mode is that echoes
returning to the transducer can be sorted according to distance traveled (depth) and
used to form an image.
There are several different ways of displaying the information obtained from
Doppler interactions. The selection of a specific display depends on the Doppler
mode (CW or pulsed) and the type of flow information required for a specific clinical application.
The information developed with the Doppler effect can be conveyed either as an
audible sound or a graphical display.
Audible Sound
components that change in frequency and loudness with the pulsing flow.
instant in time. The length of the vertical line represents the range of velocities
present. The top of the line indicates the maximum velocity. The brightness of the display at each point along the line indicates the quantity of blood moving at each velocity.
Chapter 28
The Magnetic Resonance Image

The magnetic resonance (MR) process is capable of producing images that are
distinctly different from the images produced by other imaging modalities. A pri¬
mary difference is that MR can selectively image several different tissue charac¬
teristics. A potential advantage of this is that if a pathologic process does not alter
one tissue characteristic, it might be visible in an image because of its effect on
THE MR IMAGE
Figure 28-1 Physical Characteristics of Tissue That Are Displayed in the Magnetic
Resonance Image
Tissue Magnetization
The condition within the tissue that produces the RF signal is magnetization. At this point we will use an analogy with radionuclide imaging. In nuclear medicine procedures it is the presence of radioactivity in the tissues that produces the radiation. In magnetic resonance imaging (MRI) it is the magnetization within the tissues that produces the RF signal radiation that is displayed in the image.

We will soon discover that tissue becomes magnetized when the patient is placed in a strong magnetic field. However, all tissues are not magnetized to the same level. It is the level of magnetization at specific "picture snapping" times during the imaging procedure that determines the intensity of the resulting RF signal and image brightness.
The Magnetic Resonance Image 417
Not all chemical substances have magnetic nuclei. The only substance found in
tissue that has an adequate concentration of magnetic nuclei to produce good images is hydrogen, whose nucleus is a single proton. When tissue is placed in a strong magnetic field, some of the protons line up together to produce the magnetization in the tissue which then produces the RF signal.
an adequate concentration of molecules containing hydrogen, it will not be visible
in an MR image.
Image Types
Magnetic resonance images are generally identified with specific tissue charac¬
teristics which are the predominant source of contrast. The equipment operator
determines the type of image that is to be produced by adjusting various imaging
factors.
Proton Density
The most direct tissue characteristic that can be imaged is the concentration or
density of protons. In a proton density image, tissue magnetization, RF signal in¬
tensity, and image brightness are determined by the proton (hydrogen) content of
the tissue. Tissues that are rich in protons will produce strong signals and have a
bright appearance.
The relaxation time varies from one type of tissue to another. The relaxation times can be used to distinguish (ie, produce contrast) among both normal and pathologic tissues.
Each tissue is characterized by two relaxation times: T1 and T2. Images can be
created in which either one of these two characteristics is the predominant source
of contrast. It is not usually possible to create images in which one of the tissue
characteristics (eg, proton density, T1, or T2) is the only source of contrast. Typically there is a mixing or blending of the characteristics. When an image is described as a T1-weighted image this means that T1 is the predominant source of contrast but there is also some contamination from the proton density and T2 characteristics.
Flow
The heart of the MR system is a large magnet that produces a very strong magnetic field. The patient's body is placed in the magnetic field during the imaging
procedure. The magnetic field produces two distinct effects that work together to
create the image.
Tissue Magnetization
When the patient is placed in the magnetic field the tissue becomes temporarily
magnetized because of the alignment of the protons, as described previously. This
is a very low-level effect that disappears when the patient is removed from the
magnet. The ability of MRI to distinguish among different types of tissue is based
on the fact that different tissues, both normal and pathologic, will become magne¬
tized to different levels or change their levels of magnetization (ie, relax) at differ¬
ent rates.
Tissue Resonance
The magnetic field also causes the tissue to "tune in" or resonate at a very spe¬
cific frequency. That is why the procedure is known as magnetic resonance
imaging. It is actually certain nuclei, typically protons, within the tissue that reso¬
nate. Therefore, the more comprehensive name for the phenomenon that is the
basis of both imaging and spectroscopy is nuclear magnetic resonance.
In the presence of the strong magnetic field the tissue resonates in the radio
frequency (RF) range. This causes the tissue to function as a tuned radio receiver
and transmitter during the imaging process. The production of an MR image in¬
volves two-way radio communications between the tissue in the patient's body
and the equipment.
Magnet Types
There are several different types of magnets that can be used to produce the
magnetic field. Each has its advantages and disadvantages.
Superconducting
Most MR systems use superconducting magnets. The primary advantage is that
a superconducting magnet is capable of producing a much stronger magnetic field
than the other two types considered below. It is an electromagnet that operates in a
superconducting state. A superconductor is an electrical conductor (wire) that has
no resistance to the flow of an electrical current. This means that very small superconducting wires can carry very large currents without overheating, which is typical of more conventional conductors like copper. It is the combined ability to construct a magnet with many loops or turns of small wire and then use large currents
Resistive
Permanent
It is possible to do MRI with a non-electrical permanent magnet. An obvious
advantage is that a permanent magnet does not require either electrical power or
coolants for operation. However, this type of magnet is also limited to relatively
low field strengths.
Gradients
When the MR system is in a resting state and not actually producing an image,
the magnetic field is quite uniform or homogeneous over the region of the
patient's body. However, during the imaging process the field must be distorted
with gradients. A gradient is just a change in field strength from one point to an¬
other in the patient's body. The gradients are produced by a set of gradient coils,
which are contained within the magnet assembly. During an imaging procedure
the gradients are turned on and off many times. This action produces the sound or
noise that comes from the magnet.
Coils
The RF coils are located within the magnet assembly and relatively close to the
patient's body. These coils function as the antennae for both transmitting and receiving to and from the tissue. There are different coil designs for different anatomical regions. The three basic types are body, head, and surface coils. In some
applications the same coil is used for both transmitting and receiving. At other
times separate transmitting and receiving coils are used.
Transmitter
The RF transmitter generates the RF energy, which is applied to the coils and
then transmitted to the patient's body. The energy is generated as a series of discrete RF pulses. As we will see later, the characteristics of an image are determined
Receiver
Following each RF pulse, the resonating tissue will respond by returning an RF signal. These signals are
picked up by the coils and processed by the receiver. The signals are converted
into a digital form and transferred to the computer where they are temporarily
stored.
Computer
Acquisition Control
The first step is the acquisition of the RF signals from the patient's body. The operator can select from many preset protocols for specific clinical procedures or
change protocol factors for special applications.
Image Reconstruction
The RF signal data collected during the acquisition phase is not in the form of
an image. However, the computer can use the collected data to create or "reconstruct" the image.

All medical imaging modalities use some form of radiation (eg, x-ray, gamma, etc.) or energy (eg, ultrasound) to transfer the image from the patient's body.
The MR imaging process uses radio frequency (RF) signals to transmit the im¬
age from the patient's body. The radio frequency energy used in MRI is a form of
non-ionizing radiation. The RF pulses that are applied to the patient's body are
absorbed by the tissue and converted to heat. A small amount of the energy is
emitted by the body as the signals. Actually the image itself is not transmitted
from the body. The RF signals provide information (data) from which the image is
reconstructed by the computer. However, the resulting image is a display of RF signal intensities. Each voxel of tissue is an independent RF signal source. The intensity of the RF signal from a voxel determines
the brightness of the corresponding image pixel. Bright areas (pixels) within an
image are properly described as areas of high RF signal intensity. Contrast be¬
tween two tissues will be visible only if they emit different signal intensities. In a
later chapter we will probe deeper into the process and discover the source of the
RF signals and the conditions that produce image contrast.
We are now at the point of recognizing that the MR image is a display of mag¬
netized tissue. If a specific tissue is not magnetized it cannot produce a signal and
be visible in the image. It will appear only as a dark void. Contrast in an image is
the result of different levels of tissue magnetization at specific times during the
imaging process.
SPATIAL CHARACTERISTICS
Figure 28-3 illustrates the basic spatial characteristics of the magnetic reso¬
nance image. MRI is basically a tomographic imaging process although there are
some procedures, such as angiography, in which a complete anatomical volume
will be displayed in a single image. The protocol for the acquisition process must
One major restriction is that images in the different planes cannot be acquired simultaneously. For example, if both axial and sagittal images are required, the acquisition process must be repeated. However, there is the possibility of acquiring data
from a large volume of tissue and then reconstructing slices in the different planes.
Voxels
optimum size for each type of clinical examination. Each voxel is an independent
source of RF signals.
Image Pixels
The image is also divided into rows and columns of picture elements, or pixels.
In general, an image pixel represents a corresponding voxel of tissue within the
slice. The brightness of an image pixel is determined by the intensity of the RF
The operator of an MRI system has tremendous control over the quality of the images that are produced.

Figure 28-4 Image Quality Characteristics That Can Be Controlled by the Selection of Protocol Factors

The five basic image quality characteristics are identified in Figure 28-4. These are:
1. contrast sensitivity
2. detail
3. noise
4. artifacts
5. distortion.
Consideration must also be given to the time required for the acquisition process. In general, several aspects of
image quality can be improved by using longer acquisition times.
An optimum imaging protocol is one in which there is a proper balance among
the image quality characteristics and also a balance between overall image quality
and acquisition time.
Chapter 29
Nuclear Magnetic Resonance
When certain materials are placed in a magnetic field, they take on a resonant characteristic. This means the materials can absorb and then re-radiate electromagnetic energy.
MAGNETIC FIELDS
Field Direction
The magnets used for imaging produce a magnetic field that runs through the bore parallel to the
major patient axis. As the magnetic field leaves the bore, it spreads out and en¬
circles the magnet, creating an external field. The external field can be a source of
interference with other devices and is usually contained by some form of shield¬
ing. This part of the field is not shown in the illustration.
Field Strength
Each point within a magnetic field has a particular intensity, or strength. Field strength is expressed either in the units of tesla (T) or gauss (G). The relationship between the two units is that 1.0 T is equal to 10,000 G (or 10 kG). At the earth's surface, the magnetic field is relatively weak and has a strength of less than 1 G.

Magnetic Coil

Figure 29-2 The General Direction of the Magnetic Field Used for Imaging
Magnetic field strengths in the range of 0.15 T to 1.5 T are used for imaging. The
significance of field strength is considered as we explore the characteristics of MR
images and image quality in later chapters.
Magnetic resonance imaging requires a magnetic field that is very uniform, or
homogeneous. Field homogeneity is affected by magnet design, adjustments, and
environmental conditions. Imaging generally requires a homogeneity (field uni¬
formity) on the order of a few parts per million (ppm) within the imaging area.
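The unit conversion and the homogeneity specification can be made concrete; a small sketch (function names and the example values are mine):

```python
GAUSS_PER_TESLA = 10_000  # 1.0 T = 10,000 G = 10 kG

def tesla_to_gauss(tesla: float) -> float:
    """Convert a field strength from tesla to gauss."""
    return tesla * GAUSS_PER_TESLA

def homogeneity_band_gauss(field_t: float, ppm: float) -> float:
    """Absolute field variation (in gauss) corresponding to a homogeneity
    specification given in parts per million."""
    return tesla_to_gauss(field_t) * ppm * 1e-6

if __name__ == "__main__":
    print(tesla_to_gauss(1.5))             # a 1.5 T magnet is 15,000 G
    # A few-ppm spec is a tiny absolute variation: 4 ppm at 1 T is about 0.04 G
    print(homogeneity_band_gauss(1.0, 4))
```

Note that a few-ppm requirement over the imaging volume is far tighter than the less-than-1 G variation of the earth's field mentioned above.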
Gradients
During the imaging process the field strength is intentionally varied across the imaging space; this variation is known as a gradient. The use of magnetic field gradients is considered later.
MAGNETIC NUCLEI
Materials that participate in the MR process must contain nuclei with specific
magnetic properties. In order to interact with a magnetic field, the nuclei them¬
selves must be small magnets and have a magnetic moment. The magnetic charac¬
teristic of an individual nucleus is determined by its neutron-proton composition.
Only certain nuclides, those with an odd number of protons or neutrons, are magnetic and have magnetic moments. Even though most chemical elements have one or more
more isotopes with magnetic nuclei, the number of magnetic isotopes that might
Among the nuclides that are magnetic and can participate in an NMR process, the
amount of signal produced by each varies considerably.
The magnetic property of a nucleus has a specific direction known as the magnetic moment. In Figure 29-3, the direction of the magnetic moment is indicated by an arrow passing through each nucleus. The nuclides shown include hydrogen-1, carbon-12, carbon-13, nitrogen-14, nitrogen-15, oxygen-16, oxygen-17, fluorine-19, sodium-23, phosphorus-31, and potassium-39, characterized by their isotopic abundance and tissue concentration.
In the body, each chemical element is typically in the form of one isotope, with very low concentrations of the other isotopic forms. Unfortunately, some of the magnetic isotopes are the ones with a
low abundance in the natural state. These include carbon-13, nitrogen-15, and
oxygen-17.
Relative Sensitivity
The signal strength produced by an equal quantity of the various nuclei also varies over a considerable range. This inherent NMR sensitivity is typically expressed relative to hydrogen-1, which produces the strongest signal of all of the nuclides. The relative sensitivities of some magnetic nuclides are:

hydrogen-1: 1.0
fluorine-19: 0.83
sodium-23: 0.093
phosphorus-31: 0.066
The relative signal strength from the various chemical elements in tissue is de¬
termined by three factors: (1) tissue concentration of the element, (2) isotopic
abundance, and (3) sensitivity of the specific nuclide.
In comparison to all other nuclides, hydrogen produces an extremely strong
signal. This results from its high values for each of the three contributing factors.
Of the three factors, only the concentration, or density, of the nuclei varies from
point to point within an imaged section. The quantity is often referred to as proton
density and is the most fundamental tissue characteristic that determines the inten¬
sity of the RF signal from an individual voxel, and the resulting pixel brightness.
In most imaging situations, pixel brightness is proportional to the density (concen¬
tration) of nuclei (protons) in the corresponding voxel, although additional fac¬
tors, such as relaxation times, modify this relationship.
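The three factors combine multiplicatively; a minimal sketch of that product (the concentration and abundance numbers below are illustrative placeholders, not values from the book; only the relative-sensitivity figures appear in the text):

```python
def relative_signal(tissue_concentration: float,
                    isotopic_abundance: float,
                    relative_sensitivity: float) -> float:
    """Relative signal strength of an element in tissue as the product of
    the three factors named in the text."""
    return tissue_concentration * isotopic_abundance * relative_sensitivity

if __name__ == "__main__":
    # Hydrogen-1: high tissue concentration, ~100% abundance, sensitivity 1.0
    h1 = relative_signal(1.0, 1.0, 1.0)
    # Phosphorus-31: assumed much lower concentration, 100% abundance, 0.066
    p31 = relative_signal(0.01, 1.0, 0.066)
    print(h1, p31, h1 / p31)
```

Because hydrogen scores high on all three factors, its signal dominates any other nuclide by orders of magnitude, which is why conventional MRI is proton imaging.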
RF Coil
Figure 29-4 The RF Coils and Energy Exchange with the Human Body
Pulses
RF energy is applied to the body in several short pulses during each imaging
cycle. The strength of the pulses is described in terms of the angle through which
they rotate tissue magnetization, as described below. Many imaging methods use
both 90-degree and 180-degree pulses in each cycle.
Signals
At a specific time in each imaging cycle, the tissue is stimulated to emit an RF signal, which is picked up by the coil, analyzed, and used to form the image. The
spin-echo or gradient-echo methods are generally used to stimulate signal emis¬
sion. Therefore, the signals from the patient's body are commonly referred to as
echoes.
Nuclear Alignment
Recall that a magnetic nucleus is characterized by a magnetic moment. The direction of the magnetic moment is represented by a small arrow passing through the nucleus. If we think of the nucleus as a small conventional magnet, then the magnetic moment arrow corresponds to the south pole-north pole direction of the magnet.
In the absence of a strong magnetic field, the magnetic moments of nuclei are randomly oriented in space. Many nuclei in tissue are not in a rigid structure and are
free to change direction. In fact, nuclei are constantly tumbling, or changing direc¬
tion, because of thermal activity within the material.
When a material containing magnetic nuclei is placed in a magnetic field, the
nuclei experience a torque that encourages them to align with the direction of the
field. In the human body, however, thermal energy agitates the nuclei and keeps
most of them from aligning parallel to the magnetic field. The number of nuclei
that do align with the magnetic field is proportional to field strength. The magnetic
fields used for imaging can align only a few of every million magnetic nuclei
present.
Resonance
When a magnetic nucleus aligns with a magnetic field, it is not fixed; the
nuclear magnetic moment precesses, or oscillates, about the axis of the magnetic
field, as shown in Figure 29-5. The precessing motion is a physical phenomenon
that results from an interaction between the magnetic field and the spinning mo¬
mentum of the nucleus. Precession is often observed with a child's spinning top. A
spinning top does not stand vertical for long, but begins to wobble, or precess. In
this case, the precession is caused by an interaction between the earth's gravita¬
tional field and the spinning momentum of the top.
The significance of the nuclear precession is that it causes the nucleus to be
extremely sensitive, or tuned, to RF energy that has a frequency identical with the
precession frequency (rate). This condition is known as resonance and is the basis
for all MR procedures: NMR is the process in which a nucleus resonates, or "tunes
in," when it is in a magnetic field.
Resonance is fundamental to the absorption and emission of energy by many
objects and devices. Objects are most effective in exchanging energy at their reso¬
nant frequency. The resonance of an object or device is determined by certain
physical characteristics. Let us consider two common examples. Radio receivers
operate on the principle of resonant frequency. A receiver can select a specific
broadcast station because each station transmits a different frequency. Tuning a
434 Physical Principles of Medical Imaging
Figure 29-5 The Interactions between RF Energy and Nuclei in a Magnetic Field
radio is actually adjusting its resonant frequency. Its receiver is very sensitive to
radio signals at its resonant frequency and insensitive to all other frequencies.
The strings of a musical instrument also have specific resonant frequencies.
This is the frequency at which the string vibrates to produce a specific audio fre¬
magnetic field. In a very general sense, increasing the magnetic field strength increases the tension on the nuclei (as with the strings of a musical instrument) and increases the resonant frequency. The approximate Larmor frequencies for selected nuclides in a magnetic field of 1 T are: hydrogen-1, 42.6 MHz; fluorine-19, 40.1 MHz; phosphorus-31, 17.2 MHz; sodium-23, 11.3 MHz.
The fact that different nuclides have different resonant frequencies means that
most MR procedures can "look at" only one chemical element (nuclide) at a time.
The fact that a specific nuclide can be tuned to different radio frequencies by vary¬
ing the field strength (ie, applying gradients) is used in the imaging process, as
discussed later.
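The linear scaling of resonant frequency with field strength can be sketched directly; the gyromagnetic ratios below (in MHz per tesla) are standard physical constants rather than values quoted in this excerpt:

```python
# Larmor frequency scales linearly with field strength: f = (gamma / 2*pi) * B.

GAMMA_MHZ_PER_T = {
    "hydrogen-1": 42.58,
    "fluorine-19": 40.05,
    "phosphorus-31": 17.24,
    "sodium-23": 11.26,
}

def larmor_mhz(nuclide: str, field_t: float) -> float:
    """Resonant (Larmor) frequency in MHz for a nuclide at a given field."""
    return GAMMA_MHZ_PER_T[nuclide] * field_t

if __name__ == "__main__":
    # Hydrogen at common imaging field strengths
    for b in (0.5, 1.0, 1.5):
        print(b, "T:", round(larmor_mhz("hydrogen-1", b), 2), "MHz")
```

At 1.5 T this gives roughly 64 MHz for protons, the value used in the chemical-shift example later in the chapter.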
The resonant frequency of magnetic nuclei, such as protons, is also affected by
the structure of the molecule in which they are located. This is the chemical-shift
effect and is the basis of magnetic resonance spectroscopy.
Excitation
Relaxation
Figure 29-5 compares excitation and relaxation, which are the two fundamental
interactions between RF energy and a resonant nucleus in a magnetic field.
TISSUE MAGNETIZATION
Figure 29-6 The Magnetization of a Tissue Voxel Resulting from the Alignment of Indi¬
vidual Nuclei
field. Since an imaging magnetic field aligns only a very small fraction of the magnetic nuclei, the resulting tissue magnetization is relatively weak. The magnetization can be divided into longitudinal and transverse components. Since the two components have distinctly different characteristics, we consider them independently.
period of time, return to its original longitudinal position. The regrowth of longitudinal magnetization is characterized by the time required for the magnetization to reach 63% of its maximum. This time, the longitudinal relaxation time, is designated T1. The 63% value is used because of mathematical, rather than clinical, considerations. Longitudinal magnetization continues to grow with time, and reaches 87% of its maximum after two T1 intervals, and 95% after three T1 intervals. For practical purposes, the magnetization can be considered fully recovered after approximately three times the T1 value of the specific tissue. We will see that this must be taken into consideration when
Figure 29-8 The Growth of Longitudinal Magnetization during the Relaxation Process
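The regrowth curve can be computed directly; a minimal sketch assuming the standard exponential-recovery model, 1 − e^(−t/T1), which is consistent with the 63%, 87%, and 95% figures quoted above (1 − e^−2 is 86.5%, which the text rounds to 87%):

```python
import math

def longitudinal_recovery(t_msec: float, t1_msec: float) -> float:
    """Fraction of maximum longitudinal magnetization recovered at time t,
    assuming regrowth from zero: 1 - exp(-t/T1)."""
    return 1.0 - math.exp(-t_msec / t1_msec)

if __name__ == "__main__":
    t1 = 500.0  # an illustrative T1 value in milliseconds, not from the text
    for n in (1, 2, 3):
        print(f"{n} x T1: {longitudinal_recovery(n * t1, t1):.1%}")
```

The 63% point after one T1 interval is simply 1 − 1/e, which is why the text describes the threshold as a mathematical rather than clinical choice.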
Saturation
Different types of tissue have different T2 values that can be used to discriminate among tissues and contribute to image contrast.
Transverse magnetization is used during the image formation process for two
reasons: (1) to develop image contrast based on differences in T2 values, and (2)
Transverse magnetization decays with time. It falls to 37% of its initial value after one T2 interval and to approximately 5% after three T2 intervals.
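The transverse decay mirrors the longitudinal regrowth; a short sketch assuming the standard exponential model e^(−t/T2), which reproduces the 37% and 5% figures:

```python
import math

def transverse_decay(t_msec: float, t2_msec: float) -> float:
    """Fraction of transverse magnetization remaining at time t: exp(-t/T2)."""
    return math.exp(-t_msec / t2_msec)

if __name__ == "__main__":
    t2 = 50.0  # an illustrative T2 value in milliseconds, not from the text
    print(f"at T2:   {transverse_decay(t2, t2):.0%}")      # ~37%
    print(f"at 3xT2: {transverse_decay(3 * t2, t2):.0%}")  # ~5%
```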
The variation in resonant frequency from one compound to another is known as chemical shift. It can be used to perform chemical analysis in the technique of NMR spectroscopy and to produce images based on chemical composition. However, in conventional
Spectroscopy
Let us consider a simple but very common example: fat and water. The resonant frequencies of water and fat protons are separated by approximately 3.3 ppm. At a field strength of 1.5 T the protons have a basic resonant frequency of approximately 64 MHz. Multiplying this frequency by 3.3 × 10^-6 gives a water-fat chemical shift of approximately 210 Hz. At a field strength of 0.5 T the chemical shift would be only about 70 Hz.
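The arithmetic above generalizes to any field strength; a small sketch (the hydrogen gyromagnetic ratio is a standard constant, not a value from this excerpt):

```python
H1_GAMMA_MHZ_PER_T = 42.58   # hydrogen gyromagnetic ratio, standard constant
WATER_FAT_SHIFT_PPM = 3.3    # water-fat separation quoted in the text

def water_fat_shift_hz(field_t: float) -> float:
    """Water-fat chemical shift in hertz at a given field strength."""
    resonant_hz = H1_GAMMA_MHZ_PER_T * field_t * 1e6
    return resonant_hz * WATER_FAT_SHIFT_PPM * 1e-6

if __name__ == "__main__":
    print(round(water_fat_shift_hz(1.5)))  # about 210 Hz, as in the text
    print(round(water_fat_shift_hz(0.5)))  # about 70 Hz
```

Because the shift in hertz grows with field strength while the ppm separation is fixed, spectroscopy and fat-water separation techniques are easier at higher fields.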
Magnetic resonance spectroscopic analysis can be performed with MRI systems equipped with the necessary components and software or with specially designed spectroscopy systems.
Chemical-Shift Imaging
There are several techniques that can be used to selectively image either the water or fat tissue components. One approach is to suppress either the fat or water
signal with specially designed RF pulses. This technique is known as spectral
presaturation. Another technique makes use of the fact that the signals from water
and fat are not always in step, or in phase, with each other and can be separated to
create either water or fat images.
Chapter 30
Magnetic Characteristics
of Tissue
In this chapter we will consider the various tissue characteristics that can be
visualized in an MR image.
The brightness of a particular tissue in an image (RF signal intensity) is gener¬
ally determined by the level of tissue magnetization at specific "picture-snapping"
times during the MR imaging cycle. Contrast is produced when tissues do not have
the same level of magnetization. We will now see how certain characteristics of
tissue and other materials affect levels of magnetization and tissue visibility.
There are three primary magnetic characteristics of tissue that are the source of
image contrast. Two of these are associated with the longitudinal magnetization.
They are proton density and T1, the longitudinal relaxation time. The third characteristic is associated with the transverse magnetization. It is T2, the transverse
relaxation time.
CONTRAST SENSITIVITY
spin-echo method that uses only two factors, TR and TE, to control contrast
sensitivity.
TR
TR is the time interval between the beginning of the longitudinal relaxation and
when the magnetization is measured to produce image contrast. This is when the
picture is snapped relative to the longitudinal magnetization.
TR is also the duration of the image acquisition cycle, or the cycle repetition time (Time of Repetition).
[Figure: the operator controls contrast sensitivity through the selected imaging factors, TR and TE.]
PROTON DENSITY
[Figure: growth of longitudinal magnetization over time (0 to 2000 msec) for tissues with different relative proton densities; the tissue with relative proton density 100 reaches a higher maximum.]
Figure 30-3 The Development of Proton Density Contrast
The basic difference between T1 contrast and proton density contrast is that T1 contrast is produced by the rate of growth, and proton density contrast is produced by the maximum level to which the magnetization grows. In general, T1 contrast predominates in the early part of the relaxation phase, and proton density contrast predominates in the later portion. In most situations, T1 contrast gradually gives way to proton density contrast as magnetization approaches the maximum value.
A proton density-weighted image is produced by selecting a relatively long TR value so that the image is created, or "the picture is snapped," in the later portion of the relaxation phase, where tissue magnetizations approach their maximum values. The TR values at which this occurs depend on the T1 values of the tissues being imaged.
It was shown earlier that tissue reaches 95% of its magnetization in three T1s. Therefore, a TR value that is at least three times the T1 values for the tissues being imaged produces almost pure proton density contrast.
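The interplay between T1 contrast and proton density contrast described above can be sketched with the exponential recovery relation Mz(t) = PD x (1 - exp(-t/T1)). The tissue values below are illustrative assumptions, not measurements; they are chosen only to show the crossover from T1 contrast at short TR to proton density contrast at long TR.

```python
import math

# Sketch: how TR selects between T1 contrast and proton density contrast.
# Mz(t) = PD * (1 - exp(-t/T1)); tissue values are hypothetical.

def mz(pd, t1_ms, t_ms):
    """Longitudinal magnetization at time t (ms) after saturation."""
    return pd * (1.0 - math.exp(-t_ms / t1_ms))

tissue_a = {"pd": 100, "t1": 500}   # higher proton density, longer T1
tissue_b = {"pd": 80,  "t1": 250}   # lower proton density, shorter T1

for tr in (200, 2000):
    a = mz(tissue_a["pd"], tissue_a["t1"], tr)
    b = mz(tissue_b["pd"], tissue_b["t1"], tr)
    print(f"TR = {tr} ms: A = {a:.1f}, B = {b:.1f}")
```

At the short TR the short-T1 tissue is brighter despite its lower proton density (T1 contrast); at the long TR, both tissues have nearly fully recovered and the higher proton density dominates.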
The tissue with the shorter T1 value experiences a faster regrowth of longitudinal
magnetization. Therefore, during this period of time it will have a higher level of
magnetization, produce a more intense signal, and appear brighter in the image. In T1-weighted images, brightness, or high signal intensity, is associated with short T1 values.
At the beginning of each imaging cycle, the longitudinal magnetization is reduced to zero (or a negative value) by an RF pulse and then allowed to regrow, or relax, during the cycle. When the cycle is terminated during the regrowth phase and the magnetization value is measured and displayed as a pixel intensity, or brightness, the tissue with the shortest T1 has the most magnetization at any particular time. The clinical significance of this is that tissues with short T1 values will be bright in T1-weighted images.
Typical T1 values for various tissues are listed in Table 30-1. Two materials
establish the lower and upper values for the T1 range: Fat has a short T1, and fluid
falls at the other extreme. Therefore, in T1-weighted images, fat is generally
bright, and fluid (cerebrospinal fluid, cyst, etc.) is dark. Most other body tissues
are within the range between fat and fluid.
The limited molecular motion within the relatively rigid structures of solids does not provide an environment for rapid relaxation, which results in long T1 values. Molecular motion in fluids and fluid-like substances is more conducive to the relaxation process. In this environment molecular size becomes an important characteristic.
Relaxation is enhanced by a general matching of the proton resonant frequency
and the frequency associated with the molecular motions. Therefore, factors that
change either of these two frequencies will generally have an effect on T1 values.
Molecular Size
Small molecules, such as water, have faster molecular motions than large molecules such as lipids. The frequencies associated with the molecular motion of water molecules are both higher and dispersed over a larger range than those of the larger molecules. This reduces the match between the frequencies of the protons and the molecular environment. This is why water and similar fluids have relatively long T1 values. Larger molecules, which have slower and less dispersed molecular movement, have a better frequency match with the proton resonant frequencies. This enhances the relaxation process and produces short T1 values. Fat is an excellent example of a large molecular structure that exhibits this characteristic.
Tissues generally contain a combination of water and a variety of larger molecules. Some of the water can be in a relatively free state while other water is bound to some of the larger molecules. In general, the T1 value of the tissue is probably affected by the exchange of water between the free and bound states. When the water is bound to larger molecular structures, it takes on the motion characteristics of the larger molecule. Factors such as a pathologic process, which alters the water composition of tissue, will generally alter the T1 values.
At some point during the regrowth process the cycle is terminated and "the picture is snapped." When a short TR is used, the regrowth of the longitudinal magnetization is interrupted before it reaches its maximum. This reduces signal intensity and tissue brightness within the image but produces T1 contrast. Increasing TR increases signal intensity but reduces T1 contrast.
Table 30-1 Typical T2 and T1 Values for Various Tissues

Tissue          T2 (msec)   T1 at 0.5 T (msec)   T1 at 1.5 T (msec)
Adipose             80             210                  260
Liver               42             350                  500
Muscle              45             550                  870
White Matter        90             500                  780
Gray Matter        100             650                  920
CSF                160            1800                 2400
T1 Contrast Sensitivity
Longitudinal relaxation time, T1, is one of the three basic tissue characteristics that can be translated into image contrast. The extent to which the T1 values of two tissues differ determines the contrast that can develop between them. At the beginning of the cycle (time = 0), there is no contrast. As the two tissues regain magnetization, they do so at different rates. Therefore, a difference in magnetization (ie, T1 contrast) develops between the two tissues. As the tissues approach maximum magnetization, the difference between the two magnetization levels diminishes.

In order to produce a T1-weighted image, a value for TR must be selected to correspond with the time at which T1 contrast is significant between the two tissues. Several factors must be considered in selecting TR. T1 contrast is represented by the ratio of the tissue magnetization levels, and is at its maximum very early in the relaxation process. However, the low magnetization levels present at that time do not generally produce adequate RF signal levels for many clinical applications. The selection of a longer TR produces greater signal strength but less T1 contrast.
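The TR trade-off described above can be explored numerically. The sketch below measures T1 contrast as the *difference* in fractional recovered magnetization (rather than the ratio) and searches for the TR that maximizes it, using the white matter and gray matter T1 values from Table 30-1 at 1.5 T; the search grid is an arbitrary choice.

```python
import math

# Sketch: T1 contrast (difference in recovered longitudinal
# magnetization) as a function of TR, for white vs gray matter at 1.5 T.

def mz(t1_ms, tr_ms):
    """Fractional longitudinal magnetization recovered after TR."""
    return 1.0 - math.exp(-tr_ms / t1_ms)

T1_WHITE, T1_GRAY = 780.0, 920.0   # Table 30-1 values at 1.5 T

best_tr = max(range(50, 4000, 10),
              key=lambda tr: mz(T1_WHITE, tr) - mz(T1_GRAY, tr))
print(best_tr)   # TR (ms) giving the largest magnetization difference
```

The optimum falls in the several-hundred-millisecond range for these tissues, consistent with the short TR values used for T1-weighted imaging.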
The selection of TR must be appropriate for the T1 values of the tissues being imaged.

At the beginning of the transverse relaxation process, two tissues would have the same level of magnetization only when both tissues have the same proton (nuclei) density, which is usually not the case with tissues within the body.
During the decay of transverse magnetization, different tissues will have differ¬
ent levels of magnetization because of different decay rates, or T2 values. As
shown in Figure 30-7, tissue with a relatively long T2 value will have a higher
level of magnetization, produce a more intense signal, and appear brighter in the
[Figure: decay curves of transverse magnetization, labeled Long (Slow) and Short (Fast).]
Figure 30-8 A Comparison of Transverse Magnetization Decay for Tissues with Different T2 Values
T2 Contrast Sensitivity
The tissue with the longer T2 value (100 msec) maintains a higher level of magnetization than the other tissue. The ratio of the tissue magnetizations at any point in time represents T2 contrast. At the beginning of the cycle, there is no T2 contrast, but it develops and increases throughout the relaxation process. At the echo event the magnetization levels are converted into RF signals and image pixel brightness; the time to the echo event (TE) is selected by the operator. Maximum T2 contrast is generally obtained by using a relatively long TE. However, when a very long TE value is used, the magnetization and RF signals are too low to form a useful image. In selecting TE values, a compromise must be made between T2 contrast and signal intensity.
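The TE compromise just described can be sketched with the exponential decay Mxy(TE) = exp(-TE/T2). The T2 values below are illustrative assumptions; the point is that the magnetization *difference* between the tissues peaks at an intermediate TE while the absolute signals keep falling.

```python
import math

# Sketch: the TE trade-off between T2 contrast and signal intensity.
# Hypothetical tissues with T2 = 60 ms and 100 ms.

def mxy(t2_ms, te_ms):
    """Fractional transverse magnetization remaining at TE."""
    return math.exp(-te_ms / t2_ms)

for te in (20, 80, 200):
    short_t2, long_t2 = mxy(60, te), mxy(100, te)
    print(f"TE = {te} ms: signals {short_t2:.2f} / {long_t2:.2f}, "
          f"difference {long_t2 - short_t2:.2f}")
```

The difference is largest near the middle TE, while at the longest TE both signals are weak, which is the compromise the text describes.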
In many clinical procedures, an imaging technique that creates a series of images with different TE values (multi-echo imaging) is used.

Transverse Magnetization
A majority of the magnetic moments must be in the same direction, or in phase, within the transverse plane. When a nucleus has a transverse orientation, it is actually spinning around an axis that is parallel to the magnetic field. The precession rate, or resonant frequency, depends on the strength of the magnetic field where the nucleus is located. Nuclei located in fields with different strengths precess at different rates. Even within a very small volume of tissue, nuclei are in slightly different magnetic fields. As a result, some nuclei precess faster than others. After a short period of time, the nuclei are not precessing in phase. As the directions of the nuclei begin to spread, the magnetization of the tissue decreases. A short time later, the nuclei are randomly oriented in the transverse plane, and there is no transverse magnetization.

Two factors contribute to this dephasing. One is an exchange of energy among the spinning nuclei, which results in relatively slow dephasing and loss of magnetization. The rate at which this occurs is determined by characteristics of the tissue. It is this dephasing activity that is characterized by the T2 values. A second factor, which produces relatively rapid dephasing of the nuclei and loss of transverse magnetization, is the inherent inhomogeneity of the magnetic field within each individual voxel. The field inhomogeneities are sufficient to produce rapid dephasing. This effect, which is different from the basic T2 characteristics of the tissue, tends to mask the true relaxation characteristics of the tissue. In other words, the actual transverse magnetization relaxes much faster than the tissue characteristics would indicate. This actual relaxation time is designated as T2*. The value of T2* is always much less than the tissue T2 value.
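The relation between T2 and T2* described above is commonly modeled (this decomposition is a standard convention, not stated explicitly in the text) by adding the relaxation *rates*: 1/T2* = 1/T2 + 1/T2', where T2' represents the extra dephasing caused by field inhomogeneity. The T2' value below is hypothetical.

```python
# Sketch: effective transverse relaxation combining tissue T2 with
# field-inhomogeneity dephasing T2', via 1/T2* = 1/T2 + 1/T2'.
# The T2' value is a hypothetical illustration.

def t2_star(t2_ms, t2_prime_ms):
    """Effective T2* (ms) from tissue T2 and inhomogeneity term T2'."""
    return 1.0 / (1.0 / t2_ms + 1.0 / t2_prime_ms)

print(t2_star(100.0, 25.0))   # -> 20.0, always less than the tissue T2
```

Because the rates add, T2* is always shorter than either contributing time, matching the statement that T2* is always much less than the tissue T2 value.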
Susceptibility Effects
Certain materials are susceptible to magnetic fields and become magnetized
when located in fields. This magnetization can produce a local distortion in the
magnetic field. Magnetic field distortions, or gradients, are especially significant in the region of boundaries between materials with different susceptibilities. This effect can produce rapid dephasing and signal loss in those regions.
CONTRAST AGENTS
The inherent tissue characteristics (proton density, T1, and T2) do not always provide adequate contrast for visualizing specific tissues or anatomical regions. There are several different types of contrast agents, which will now be considered.
Paramagnetic Materials
Paramagnetic materials shorten the T1 values of the tissues in which they concentrate, which generally increases signal intensity in T1-weighted images. In high concentrations, however, they can produce a reduction of signal intensity. This occurs because the transverse relaxation rate is also increased, which results in a shortening of the T2 value.
Superparamagnetic Materials
When materials with unpaired electrons are contained in a crystalline structure
they produce a stronger magnetic effect (susceptibility) in comparison with the
independent molecules of a paramagnetic substance. The susceptibility of
superparamagnetic materials is several orders of magnitude greater than that of
paramagnetic materials. These materials are in the form of small particles. Iron
oxide particles are an example.
The particles produce inhomogeneities in the magnetic field, which results in
rapid dephasing of the protons in the transverse plane and a shortening of T2.
Superparamagnetic materials generally reduce signal intensity and are classified as negative contrast agents.
Chapter 31
Imaging Methods
There are several different imaging methods that can be used to create the MR
image. The principal difference among these methods is the sequence in which the
RF pulses and gradients are applied during the acquisition process. Therefore, the
different methods are often referred to as the different pulse sequences. An overview of the most common methods is shown in Figure 31-1. As we see, the different methods are organized in a hierarchical structure. For each imaging method there
is a set of factors that must be adjusted by the user to produce specific image
characteristics.
The selection of a specific imaging method and factor values is generally based on requirements for contrast sensitivity and acquisition speed. However, other characteristics such as signal-to-noise and the sensitivity to specific artifacts might vary from method to method.
In this chapter we will develop the basic concept of each method and then show
how the specific imaging factors can be adjusted to produce the desired image
characteristics.
A characteristic common to all methods is that there are two distinct phases of the image acquisition process, as shown in Figure 31-2. One phase is associated with the longitudinal magnetization and the other with the transverse magnetization. In general, T1 and proton density contrast are developed during the longitudinal phase, and T2 contrast is developed during the transverse phase. The contrast that appears in the image is determined by the duration of the two phases and the transfer of contrast from the longitudinal phase to the transverse phase.
[Figure 31-1: the hierarchy of imaging methods and their adjustable factors (TR, TE, TI, TS, flip angle, and spoiling).]
RF Excitation Pulses

A 90-degree excitation pulse converts all of the existing longitudinal magnetization into transverse magnetization. This type of pulse is used in several imaging methods. However, there are methods that use excitation pulses with a flip angle of less than 90 degrees. Small flip angles (less than 90 degrees) convert only a fraction of the existing longitudinal magnetization.

The echo event produces the signal that is used to form the image. The echo event is produced by applying either an RF pulse or a gradient pulse to the tissue.
Spin Echo
Spin echo is the name of the process that uses an RF pulse rather than a gradient
pulse to produce the echo event. It is also the name for one of the imaging methods
that uses the spin-echo process. We will now see how an RF pulse can produce an
echo event and signal.
The decay of transverse magnetization (ie, relaxation) occurs because of
dephasing among individual nuclei. Figure 31-3 is a much simplified model used
to develop this concept.
Two basic conditions are required for transverse magnetization: (1) the magnetic moments of the nuclei must be oriented in the transverse direction, or plane,
[Figure: transverse relaxation at the tissue rate (T2) and the faster field rate (T2*), with rephasing producing the echo at TE.]
Figure 31-3 The Events Contributing to Transverse Relaxation and the Formation of the Spin-Echo Signal
and (2) a majority of the moments must be in the same direction within the transverse plane. When a nucleus has a transverse orientation, it is actually spinning around an axis that is parallel to the magnetic field. In our example, we use four nuclei to represent the many nuclei involved in the process.

After the application of a 90-degree pulse, the nuclei have a transverse orientation and are rotating together, or in phase, around the magnetic field axis. This rotation is the normal precession discussed earlier. The precession rate, or resonant frequency, depends on the strength of the magnetic field where the nuclei are located. Nuclei located in fields with different strengths precess at different rates. Even within a very small volume of tissue, there are small variations in field strength. As a result, some nuclei precess faster than others. After a short period of time, the nuclei are not precessing in phase. As the directions of the nuclei begin to spread, the magnetization of the tissue decreases. A short time later, the nuclei are randomly oriented in the transverse plane, and there is no transverse magnetization.
Two factors contribute to the dephasing of the nuclei and the resulting transverse relaxation. One is the exchange of energy among the spinning nuclei, which results in relatively slow dephasing and loss of magnetization. The rate at which this occurs is determined by characteristics of the tissue. It is this dephasing activity that is characterized by the T2 values. A second factor, which produces relatively rapid dephasing of the nuclei and loss of transverse magnetization, is the
inherent inhomogeneity of the magnetic field. Even within a small volume of tissue, the field inhomogeneities are sufficient to produce rapid dephasing. This effect, which is generally unrelated to the characteristics of the tissue, tends to mask the true relaxation characteristics of the tissue. In other words, the actual transverse magnetization relaxes much faster than the tissue characteristics would indicate. This actual relaxation time is designated as T2*. The value of T2* is always much less than the tissue T2 value.
An RF signal is produced whenever there is transverse magnetization. Immediately after an excitation pulse, a so-called free induction decay (FID) signal is produced. The intensity of this signal is proportional to the level of transverse magnetization. Both decay rather rapidly because of the magnetic field inhomogeneities just described. The FID signal is not used in the spin-echo methods. It is used in the gradient-echo methods to be described later.
In many imaging procedures a special technique, called spin echo, is used to
compensate for the dephasing and rapid relaxation caused by the field
inhomogeneities. The sequence of events in the spin-echo process is illustrated in Figure 31-3.
Transverse magnetization is produced with a 90-degree RF pulse that flips the protons into the transverse plane, where they begin to dephase as just described. If a 180-degree pulse is then applied to the tissue, it flips the spinning protons over by
180 degrees in the transverse plane and reverses their direction of rotation. This
causes the fast protons to be located behind the slower ones. As the faster protons
begin to catch up with the slower ones, they regain a common alignment, or come
back into phase. This, in turn, causes the transverse magnetization to reappear and
form the echo event. However, the magnetization does not grow to the initial value
because the relaxation (dephasing) produced by the tissue is not reversible. The
rephasing of the protons causes the magnetization to build up to a level determined
by the T2 characteristics of the tissue. As soon as the magnetization reaches this
maximum, the protons begin to move out of phase again, and the transverse magnetization dissipates. Another 180-degree pulse can be used to produce another rephasing. In fact, this is what is done in multi-echo imaging.
The intensity of the echo signal is proportional to the level of transverse magnetization as determined by the tissue relaxation rate, T2. In most imaging procedures the intensity of the echo signal determines the brightness of the corresponding image pixel. The time between the initial excitation and the echo signal is TE.
This is controlled by adjusting the time interval between the 90-degree and the
180-degree pulses.
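The rephasing described above can be demonstrated with a toy phase simulation. Each "spin" below has a random, hypothetical frequency offset standing in for local field variation; a 180-degree pulse at TE/2 negates the accumulated phase, so all spins realign at TE regardless of their individual offsets.

```python
import cmath
import random

# Sketch of spin-echo rephasing: spins in slightly different fields
# dephase, a 180-degree pulse at TE/2 conjugates their phases, and
# they realign at TE. Offsets are random and purely illustrative.

random.seed(1)
offsets_hz = [random.uniform(-40, 40) for _ in range(500)]

def net_magnetization(t_ms, flip_ms=None):
    """Magnitude of the summed unit spins at time t (ms)."""
    total = 0j
    for f in offsets_hz:
        if flip_ms is not None and t_ms >= flip_ms:
            # phase accumulated before the 180-degree pulse is negated,
            # then accumulation resumes at the same rate
            phase = 2 * cmath.pi * f * ((t_ms - flip_ms) - flip_ms) / 1000.0
        else:
            phase = 2 * cmath.pi * f * t_ms / 1000.0
        total += cmath.exp(1j * phase)
    return abs(total) / len(offsets_hz)

print(net_magnetization(20.0))                  # dephased FID (small)
print(net_magnetization(40.0, flip_ms=20.0))    # echo at TE = 40 ms (~1.0)
```

Note that the echo recovers only the field-inhomogeneity dephasing; as the text explains, the tissue's own T2 decay (not modeled here) is not reversible.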
Gradient Echo
In the spin-echo process the protons are rephased after they have been dephased by inherent magnetic field inhomogeneities within the tissue voxel. The gradient-echo technique is different in that the protons are first dephased and then rephased by an applied gradient. A gradient echo can be created during the free induction decay (FID) period or during a spin-echo event. In Figure 31-4 the gradient echo is being created during the FID. Let us now consider the process in more detail.
The transverse magnetization is produced by the excitation pulse. It immediately begins to decay (the FID process) because of the magnetic field inhomogeneities within each individual voxel. The rate of decay is related to the value of T2*. A short time after the excitation pulse a gradient is applied, which produces a very rapid dephasing of the protons and reduction in the transverse magnetization. This occurs because a gradient is a forced inhomogeneity in the magnetic field. The next step is to reverse the direction of the applied gradient. Even though this is still an inhomogeneity in the magnetic field, it is in the opposite direction. This then causes the protons to rephase and produce an echo event.
As the protons rephase the transverse magnetization will reappear and rise to a
value determined by the FID process. The gradient-echo event produces a rather
well-defined peak in the transverse magnetization and this, in turn, produces a
discrete RF signal.
The time to the echo event (TE) is determined by adjusting the time interval
between the excitation pulse and the gradients that produce the echo event. TE
values for gradient echo are typically much shorter than for spin echo, especially
when the gradient echo is produced during the FID.
[Figure 31-4: transverse magnetization decaying at the T2* rate; a dephasing gradient followed by a reversed, rephasing gradient produces the echo.]
SPIN-ECHO METHODS
It becomes a little confusing because there are two different imaging methods
that use the spin-echo process. One is named spin echo and the other inversion
recovery. We will now see how they are related.
Spin Echo
The basic spin-echo imaging method is characterized by a 90-degree excitation
pulse followed by a 180-degree pulse to produce the echo event and signal. This
method can be used to produce images of the three basic tissue characteristics:
proton density, T1, and T2. The sensitivity to a specific characteristic is determined by the values selected for the two time intervals, or imaging factors, TR and
TE.
The brightness of the individual tissues and the contrast between different tissues are determined by the relationship between TR and TE and the basic tissue characteristics (proton density, T1, and T2). In most MR images, the contrast is not produced by a single tissue characteristic but by a combination of the three tissue
factors. The weighting of image contrast with respect to a particular tissue characteristic is achieved by adjusting the TR and TE values. We now consider the sequence of events during an imaging cycle and how the various factors determine the final image contrast.
Figure 31-5 illustrates the development of contrast between two tissues, A and B. The process actually extends over a portion of two cycles. Although the same events occur in each cycle, the process is easier to visualize when it is separated as shown. Here we are observing the longitudinal magnetization in the first cycle followed by the transverse magnetization that it produces in the next cycle.

The first cycle begins with a 90-degree pulse that converts longitudinal magnetization into transverse magnetization. Therefore, the cycle begins with complete saturation, or no longitudinal magnetization. The magnetization begins to grow (relax) at a rate determined by the T1 value for the specific tissue. If two tissues have different T1 values, a difference in magnetization, or contrast, will develop between the tissues. This is T1 contrast. As the tissues begin to approach their maximum magnetization, proton density becomes the major factor affecting magnetization level and contrast between the tissues. This cycle is terminated, and the
next cycle begins by the application of another 90-degree pulse. The pulse interval, TR, determines the level of longitudinal magnetization present at that time.

[Figure: pixel brightness developing across the first cycle and the next cycle.]
Figure 31-5 Sequence of Events and Factors That Determine Image Contrast
At the beginning of the cycle, the two tissues have different levels of transverse magnetization. The magnetization of each tissue produces an RF signal, and the signal intensity, in turn, determines the brightness of the tissue as it appears in the image. The two tissues will produce image contrast if their signal intensities are different.
To produce image contrast based on T1 differences between tissues, two factors must be considered. Since T1 contrast develops during the early growth phase of the longitudinal magnetization, a relatively short TR value must be used. Also, T2 contrast generally counteracts T1 contrast. This is because tissues with short T1 values also have short T2 values. The problem arises because tissues with short T1s are generally bright, whereas tissues with short T2s have reduced brightness when T2 contrast is present. T2 contrast develops during the TE time interval. Therefore, a short TE must be used to minimize T2 contrast in a T1-weighted image.

A T2-weighted image is produced by using a relatively long TR value. This minimizes T1 contrast contamination, and the transverse relaxation process begins at a relatively high level of magnetization. Long TE values are then used to allow T2 contrast time to develop. Table 31-1 summarizes the combination of TR and TE values used to produce the three basic image types.
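The TR and TE combinations discussed above can be tied together with the standard spin-echo signal model, S = PD x (1 - exp(-TR/T1)) x exp(-TE/T2). The tissue values below are illustrative (fat from Table 30-1; the fluid values approximate CSF), and the TR/TE pairs are typical, assumed settings rather than prescriptions.

```python
import math

# Sketch of spin-echo contrast weighting with the standard signal model
# S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2). Values are illustrative.

def signal(pd, t1_ms, t2_ms, tr_ms, te_ms):
    """Relative spin-echo signal for one tissue."""
    return pd * (1.0 - math.exp(-tr_ms / t1_ms)) * math.exp(-te_ms / t2_ms)

tissues = {"fat": (100, 260, 80), "fluid": (110, 2400, 160)}  # (PD, T1, T2)

weightings = {"T1-weighted":     (500, 15),    # short TR, short TE
              "proton density":  (2500, 15),   # long TR, short TE
              "T2-weighted":     (2500, 120)}  # long TR, long TE

for name, (tr, te) in weightings.items():
    sig = {t: signal(pd, t1, t2, tr, te) for t, (pd, t1, t2) in tissues.items()}
    print(name, {t: round(s, 1) for t, s in sig.items()})
```

With these assumed values, fat is brighter than fluid in the T1-weighted setting and fluid is brighter in the T2-weighted setting, matching the behavior described in the text.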
Inversion Recovery
Inversion recovery is a spin-echo imaging method used for several specific purposes. One application is to produce a high level of T1 contrast, and a second
application is to suppress the signals, and resulting brightness, of fat. The inversion-recovery cycle begins with a 180-degree pulse that inverts the direction of the longitudinal magnetization. The regrowth (recovery) of the magnetization starts from a negative (inverted) value, rather than from zero as in the spin-echo method.
The inversion recovery method, like the spin-echo method, uses a 90-degree
excitation pulse to produce transverse magnetization, and a final 180-degree pulse
to produce a spin-echo signal. An additional time interval is associated with the
inversion recovery pulse sequence. The time between the initial 180-degree pulse
and the 90-degree pulse is designated the inversion time, TI. It can be varied by the operator.
T1 Contrast

The inversion-recovery method can produce a high level of T1 contrast because the relaxation of the longitudinal magnetization covers a greater range when it starts from the inverted state. There is more time for the T1 contrast to develop.
[Figure: the inversion-recovery pulse sequence, beginning with a 180-degree inversion pulse.]
Fat Suppression
We recall that fat has a relatively short T1 value. Therefore, it recovers its longitudinal magnetization faster than the other tissues after the inversion pulse. The important point here is that the magnetization of fat passes through the zero level before the other tissues. If the TI interval is selected so that the excitation pulse is applied at that time, the fat will not produce a signal. This is achieved with relatively short values for TI. Therefore, this method of fat suppression is often referred to as short time inversion recovery (STIR).
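The TI that nulls fat can be computed from the inversion-recovery curve. With full recovery normalized to 1, Mz(t) = 1 - 2 exp(-t/T1) after the inversion pulse, which crosses zero at TI = T1 x ln 2. The fat T1 of 260 ms is taken from Table 30-1 (1.5 T); this sketch ignores TR effects, which shorten the null time somewhat in practice.

```python
import math

# Sketch: after an inversion pulse, Mz(t) = 1 - 2*exp(-t/T1)
# (normalized), which crosses zero at TI = T1 * ln 2.

def mz_after_inversion(t1_ms, t_ms):
    """Normalized longitudinal magnetization t ms after inversion."""
    return 1.0 - 2.0 * math.exp(-t_ms / t1_ms)

fat_t1 = 260.0                       # Table 30-1, 1.5 T
ti_null = fat_t1 * math.log(2.0)
print(round(ti_null))                # -> 180 (ms), a typical STIR TI
print(mz_after_inversion(fat_t1, ti_null))   # effectively zero
```

This is why STIR uses relatively short TI values: for short-T1 fat, the zero crossing arrives early.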
The gradient-echo technique is generally used in combination with an RF excitation pulse that has a small flip angle of less than 90 degrees. We will discover the
advantages of this particular combination later. One source of confusion is that
each manufacturer of MRI equipment has given this imaging method a different
trade name. In this text we will use the generic name of small angle gradient echo
(SAGE) method.
The SAGE method generally requires a shorter acquisition time than the spin-
echo methods. It is also a more complex method with respect to adjusting contrast sensitivity because the flip angle of the excitation pulse becomes one of the adjustable imaging factors.
We recall that the purpose of the excitation pulse is to convert, or flip, longitudinal magnetization into transverse magnetization. When a 90-degree excitation pulse is used, all of the existing longitudinal magnetization is converted into transverse magnetization, as we have seen with the spin-echo methods. The 90-degree excitation pulse reduces the longitudinal magnetization to zero (ie, complete saturation) at the beginning of each imaging cycle. This then means that a relatively long TR interval must be used to allow the longitudinal magnetization to recover. The time required for the longitudinal magnetization to relax, or recover, is one of the major factors in determining acquisition time. The effect of reducing TR when 90-degree excitation pulses are used is shown in Figure 31-7. As the TR value is decreased, the amount of transverse magnetization and RF signal intensity produced by each pulse is decreased. This would result in an increase in image noise, as described in Chapter 33. Also, the use of short TR intervals with a 90-degree excitation pulse cannot produce either proton density or T2 images.
One approach to reducing TR and increasing acquisition speed without incurring the disadvantages that have just been described is to use an excitation pulse
Figure 31-7 The Effect of Reducing TR on Longitudinal Magnetization and the Resulting
Signal Intensity
that has a flip angle of less than 90 degrees. A small flip-angle pulse converts only a fraction of the longitudinal magnetization into transverse magnetization, so the longitudinal magnetization remains at a relatively high level even when short TR intervals are used.

[Figure: longitudinal magnetization remaining near its maximum when small flip-angle pulses are used.]
Figure 31-8 The Effect of Using Small Flip-Angle Pulses on Longitudinal Magnetization
Reducing the flip angle has two effects that must be considered together. The effect that we have just observed is that the longitudinal magnetization is not completely destroyed and remains at a relatively high level even for short TR intervals. This will increase RF signal intensity compared to the use of 90-degree pulses.
However, as the flip angle is reduced, a smaller fraction of the longitudinal magnetization is converted into transverse magnetization. This has the effect of reducing signal intensity. The end result is a combination of these two effects. This is illustrated in Figure 31-9. Here we see that as the flip angle is increased over the range from 0 degrees to 90 degrees, the level of longitudinal magnetization at the beginning of a cycle decreases. On the other hand, as the angle is increased, the fraction of this longitudinal magnetization that is converted into transverse magnetization and RF signal increases. The combination of these two effects is shown in Figure 31-10. Here we see how changing flip angle affects signal intensity. The exact
shape of this curve depends on the specific T1 value of the tissue and the TR interval. For each T1/TR combination there is a different curve and a specific flip angle that produces maximum signal intensity.

Figure 31-9 The Effects of Excitation Pulse Flip Angle on the Amount of Longitudinal and Transverse Magnetization
Figure 31-10 The Relationship of Signal Intensity to Flip Angle
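The flip angle that maximizes signal for a given T1/TR combination can be computed in closed form for an idealized spoiled sequence, where the steady-state signal is proportional to sin(a)(1 - E)/(1 - E cos(a)) with E = exp(-TR/T1); the optimum, cos(a) = E, is the Ernst angle. This model is a common textbook idealization, not the text's own formula, and the TR and T1 values below are illustrative.

```python
import math

# Sketch: steady-state signal versus flip angle for an idealized
# spoiled gradient-echo sequence. S is proportional to
# sin(a) * (1 - E) / (1 - E * cos(a)), with E = exp(-TR/T1).
# The maximum is at the Ernst angle, cos(a) = E. Values illustrative.

def steady_signal(flip_deg, tr_ms, t1_ms):
    a = math.radians(flip_deg)
    e = math.exp(-tr_ms / t1_ms)
    return math.sin(a) * (1.0 - e) / (1.0 - e * math.cos(a))

tr, t1 = 50.0, 500.0
ernst_deg = math.degrees(math.acos(math.exp(-tr / t1)))
print(ernst_deg)    # flip angle (degrees) giving maximum signal

best = max(range(1, 91), key=lambda d: steady_signal(d, tr, t1))
print(best)         # a grid search over whole degrees agrees
```

This reproduces the behavior in Figure 31-10: for short TR the optimum flip angle is well below 90 degrees, and it shifts with each T1/TR combination.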
The following discussion assumes a short TE and considers only the contrast associated with the longitudinal magnetization. We will add the effects of transverse magnetization later. The flip-angle range is divided into several specific segments as shown.
T1 Contrast
Relatively large flip angles (45 degrees-90 degrees) produce T1 contrast. This
is what we would expect because a 90-degree flip angle and short TR and TE
values are identical to the factors used to produce T1 contrast with the spin-echo
method. Here we observe a loss of T1 contrast as the flip angle is decreased from
90 degrees.
Low Contrast
[Figure: signal intensity versus flip angle.]
The decay of transverse magnetization is produced by inhomogeneities within the magnetic field, often related to variations in tissue susceptibility. This decay rate is determined by the T2* of the tissue. When a spin-
echo technique is used, the spinning protons are rephased, as shown in Figure 31-3, and the T2* effect is essentially eliminated. However, when a spin-echo technique is not used, the transverse magnetization depends only on the T2* characteristics of the tissue. The gradient-echo technique does not compensate for the susceptibility-effect dephasing as the spin-echo technique does. Therefore, a basic gradient-echo imaging method is not capable of producing true T2 contrast. The contrast will be determined primarily by the T2* characteristics.
Because SAGE methods use relatively short TR values there is the possibility
that some of the transverse magnetization created in one imaging cycle will carry
over into the next cycle. This happens when the TR values are in the same general
range as the T2 values of the tissue. SAGE methods differ in how they use the
transverse magnetization.
Spin Echoes
A typical SAGE sequence is limited to one RF pulse per cycle. If additional pulses were used, as in the spin-echo techniques, they would affect the longitudinal magnetization and upset its equilibrium condition. However, because of the short TR values, transverse magnetization can persist from one cycle to the next, and the excitation pulses themselves can produce spin echoes. Associated with each excitation pulse, there are actually two components of the transverse magnetization: the FID produced by the immediate pulse and a spin-echo component produced by the preceding pulses. The FID decays according to the T2* characteristics. The contrast characteristics of the imaging method are described below.
Mixed Contrast
When both the FID and spin-echo components are used, an image with mixed contrast characteristics will be obtained. This method produces a relatively high signal intensity compared to the methods described below.
T1 Contrast Enhancement
T2 Contrast Enhancement
Another approach combines the advantage of developing contrast from the longitudinal magnetization, as in the spin-echo methods, with a fast acquisition using the gradient-echo method. The principle is illustrated in Figure 31-12. Two options are shown.
The longitudinal magnetization is "prepared" by applying either a saturation pulse, as in the spin-echo method, or an inversion pulse, as in the inversion-recovery method. As the longitudinal magnetization relaxes, contrast is formed between tissues with different T1 and proton density values. After a time interval (TI or TS) selected by the operator, a rapid gradient-echo acquisition begins.
The total acquisition time for this method is the product of TR and the number
of acquisition cycles plus the TI or TS time interval.
Chapter 32
Spatial Characteristics of the Magnetic Resonance Image
During the MR image formation process a section of the patient's body is sub¬
divided into a set of slices and then each slice is cut into rows and columns to form
a matrix of individual tissue voxels. The RF signal from each individual voxel
must be separated from all of the others and its intensity displayed in the corre¬
sponding image pixel, as shown in Figure 32-1. This process occurs in two distinct
phases, as illustrated in Figure 32-2. The first phase is signal acquisition and is
followed by image reconstruction.
SIGNAL ACQUISITION
During the acquisition phase the RF signals are emitted by the tissue and re¬
ceived by the RF coils of the equipment. During this process the signals from the
different slices and voxels are given distinctive frequency and phase characteris¬
tics so that they can be separated from the other signals during image reconstruc¬
tion. The acquisition phase consists of an imaging cycle that is repeated many
times. The time required for image acquisition is determined by TR, which is the
duration of one cycle, its repetition time, and the number of cycle repetitions. The
number of cycles is determined by the image quality requirements. The concepts of frequency and phase will be developed later. The collected data is not yet in the form of an image. At this point in the process the data is said to be located in "k space," which will later be transformed into image space.
476 Physical Principles of Medical Imaging
[Figure 32-1: Tissue Voxel → RF Signal → Image Pixel]
Figure 32-2 The Sequence of Events That Produces an MR Image
Spatial Characteristics of the Magnetic Resonance Image
IMAGE CHARACTERISTICS
The most significant spatial characteristic of an image is the size of the indi¬
vidual tissue voxels. Voxel size has a major effect on both the detail and noise
characteristics of the image. The user can select the desired voxel size by adjusting
a combination of imaging factors, as described in Chapter 33.
GRADIENTS
Magnetic field gradients are used to give the RF signals their frequency and
phase characteristics.
A gradient is a gradual change in field strength across a magnetic field, as illustrated in Figure 32-4. Magnets are equipped with electrical coils, which are used to produce the gradients.
[Figure 32-4: Gradient coils off — uniform 1.5-T field strength; gradient coils on — field strength graded across the magnet]
Gradient Orientation
The typical imaging magnet contains three separate sets of gradient coils. These are oriented so that gradients can be produced in the three orthogonal directions (x, y, and z), as shown in Figure 32-5. Also, two or more of the gradient coils can be
used together to produce a gradient in any desired direction.
As we will see later, a gradient in one direction is used to create the slices and
then gradients in the other directions are used to cut the slices into rows and col¬
umns to create the individual voxels. However, these functions can be inter¬
changed among the x, y, and z gradients to permit imaging in any plane through
the patient's body.
Gradient Function
In addition to the spatial encoding to be described here, gradients are also used
for other functions such as the production of gradient-echo signals and to compen¬
sate for adverse effects produced by the flow of blood and cerebrospinal fluid.
Gradients produce a rather loud sound when they are turned on and off. The
intensity of the sound is related to the strength of the gradient and can vary with
specific imaging methods.
[Figure 32-5: Orthogonal Gradients — x, y, and z]
Gradient Cycle
The functions performed by the various gradients usually occur in a specific
sequence. During each individual image acquisition cycle the various gradients
will be turned on and off at specific times. As we will see later, the gradients are
synchronized with other events such as the application of the RF pulses and the
acquisition of the RF signals.
SLICE SELECTION
Selective Excitation
The first gradient action in a cycle defines the location and thickness of the
tissue slice to be imaged. We will illustrate the procedure for a conventional
transaxial slice orientation. Other orientations, such as sagittal and coronal, are
created by interchanging the gradient directions.
The slice-selection gradient is turned on whenever RF pulses are applied to the body. Since RF pulses contain frequencies within
a limited range (or bandwidth), they can excite tissue only in a specific slice. The
location of the slice can be changed or moved along the gradient by using a
slightly different RF pulse frequency. The thickness of the slice is determined by a
combination of two factors: (1) the strength, or steepness, of the gradient, and (2)
the range of frequencies, or bandwidth, in the RF pulse.
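This relationship can be sketched numerically. The short Python snippet below is an illustration, not a calculation from the book; it uses the standard proton gyromagnetic ratio of about 42.58 MHz/T, and the bandwidth and gradient values are assumed examples.

```python
GAMMA_MHZ_PER_T = 42.58  # gyromagnetic ratio of the proton, MHz/T

def larmor_mhz(field_tesla):
    """Resonant (Larmor) frequency of protons in a given field strength."""
    return GAMMA_MHZ_PER_T * field_tesla

def slice_thickness_mm(rf_bandwidth_hz, gradient_mt_per_m):
    """Slice thickness = RF pulse bandwidth / (gamma x gradient steepness)."""
    hz_per_mm = GAMMA_MHZ_PER_T * 1e6 * gradient_mt_per_m * 1e-3 * 1e-3
    return rf_bandwidth_hz / hz_per_mm

print(round(larmor_mhz(1.5), 2))              # 63.87 (MHz at 1.5 T)
print(round(slice_thickness_mm(1000, 5), 2))  # 4.7 (mm slice)
```

A steeper gradient or a narrower RF bandwidth gives a thinner slice, consistent with the two factors named above.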
Multi-Slice Imaging
Most clinical procedures require a set of slices covering a specific anatomical region. By using the multi-slice mode, an entire set of images can be acquired simultaneously. The basic principle is illustrated in Figure 32-7. The slices are separated by exciting and detecting the signals from the different slices in sequence during each imaging cycle.
When the slice-selection gradient is turned on, each slice is tuned to a different
resonant frequency. A specific slice can be selected for excitation by adjusting the
RF pulse frequency to correspond to the resonant frequency of the slice. The pro¬
cess begins by applying an excitation pulse to one slice. Then, while that slice
undergoes relaxation, the excitation pulse frequency is shifted to excite the next
slice. This process is repeated to excite the entire set of slices at slightly different
times within one TR interval.
The advantage of multi-slice imaging is that a set of slices can be acquired in the
same time as a single slice. The principal factor that limits the number of slices is
the value of TR. It takes a certain amount of time to excite and then collect the
signals from each slice. The maximum number of slices is the TR value divided by
the time required for each slice. This limitation is especially significant for T1-weighted imaging, which requires relatively short TR values.
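The slice-count limit can be expressed as a simple calculation. This is an illustrative sketch; the 25 ms per-slice time below is an assumed example value, not one given in the text.

```python
def max_slices(tr_ms, time_per_slice_ms):
    """Maximum slices per acquisition = TR / time to excite and read one slice."""
    return int(tr_ms // time_per_slice_ms)

# Assuming roughly 25 ms is needed to excite and collect each slice:
print(max_slices(500, 25))   # 20 slices with a short (T1-weighted) TR
print(max_slices(2000, 25))  # 80 slices with a long (T2-weighted) TR
```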
Volume Acquisition
Volume (3D) image acquisition has some advantages and disadvantages with
respect to slice (2D) imaging. With this method, no gradient is present when the
RF pulse is applied to the tissue. Since all tissue within an anatomical region, such as the head, is tuned to the same resonant frequency, all tissues are excited simultaneously. The excited volume is then cut into slices by phase encoding along the slice-selection direction, as illustrated in Figure 32-8.
Figure 32-8 The Volume Acquisition Process (A), Compared to Selective Excitation (B)
For each phase-encoding gradient setting, a complete set of imaging cycles must be executed. Therefore, the
total number of cycles required in one acquisition is multiplied by the number of
slices to be produced. This has the disadvantage of increasing total acquisition
time.
The primary advantage of volume imaging is that the phase-encoding process
can generally produce thinner and more contiguous slices than the selective exci¬
tation process used in slice imaging.
FREQUENCY ENCODING
A fundamental characteristic of an RF
signal is its frequency. Frequency is the
number of cycles per second. The frequency unit of Hertz corresponds to one
cycle per second. Radio broadcast stations transmit signals on their assigned fre¬
quency. By tuning our radio receiver to a specific frequency we can select and
separate from all of the other signals the specific broadcast we want to receive. In
other words, the radio broadcasts from all of the stations in a city are frequency
encoded. The same process (frequency encoding) is used to cause voxels to pro¬
duce signals that are different and can be used to create one dimension of the
image.
Let us review the concept of RF signal production by a voxel of tissue, as shown
in Figure 32-9. Radio frequency signals are produced only when transverse mag¬
netization is present. The unique characteristic of transverse magnetization that
produces the signal is a spinning magnetic effect, as shown. The transverse mag¬
netization spins around the axis of the magnetic field. A spinning magnet or mag¬
netization in the vicinity of a coil forms a very simple electric generator. It gener¬
ates one cycle for each revolution of the magnetization. When the magnetization is
spinning at the rate of millions of revolutions per second the result is a radio fre¬
quency signal.
Resonant Frequency
The frequency of the RF signal is determined by the spinning rate of the trans¬
verse magnetization. This, in turn, is determined by two factors, as was described
in Chapter 29. One determining factor is the specific magnetic nuclei (usually
protons) and the other is the strength of the magnetic field in which the voxel is
located. When imaging protons, the strength of the magnetic field is the factor that
can be used to vary the resonant frequency and the corresponding frequency of the
RF signals.
Figure 32-10 shows the process of frequency encoding the signals for a column
of voxels. In this example, a gradient is applied along the column. The magnetic
field strength is increased from bottom to top. This means that each voxel is lo¬
cated in a different field strength and is resonating at a frequency different from all
of the others. The resonant and RF signal frequencies increase from the bottom to
the top as shown.
The frequency encoding gradient is turned on at the time of the echo event when
the signals are actually being produced. The signals from all of the voxels are
produced simultaneously and are emitted from the body mixed together to form a
composite signal. The individual signals will be separated later by the reconstruc¬
tion process.
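The frequency-encoding idea can be sketched with the Larmor relationship. This is a minimal illustration, not a calculation from the book; the 1.5-T field, 10 mT/m gradient, and voxel spacing are assumed values, and 42.58 MHz/T is the standard proton constant.

```python
GAMMA_MHZ_PER_T = 42.58  # proton gyromagnetic ratio, MHz/T

def voxel_frequencies_mhz(b0_tesla, gradient_mt_per_m, positions_mm):
    """Resonant frequency of each voxel: f = gamma * (B0 + G * position)."""
    freqs = []
    for pos_mm in positions_mm:
        local_b = b0_tesla + gradient_mt_per_m * 1e-3 * pos_mm * 1e-3  # tesla
        freqs.append(GAMMA_MHZ_PER_T * local_b)
    return freqs

# A column of voxels 10 mm apart along a 10 mT/m gradient at 1.5 T:
# each voxel resonates at a slightly different frequency, increasing
# from the bottom of the column to the top (about 4.3 kHz per cm here).
freqs = voxel_frequencies_mhz(1.5, 10, [0, 10, 20, 30])
```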
PHASE ENCODING
Figure 32-11 illustrates the concept. In part A the transverse magnetization in two voxels is spinning at the same rate and producing signals that have the same frequency. However, we notice that one signal is more advanced in time, or out of step with the other. In other words, the two signals are out of phase. The significance of voxel-to-voxel phase in MRI is that it can be used to separate signals and form the second dimension of the image. A phase difference is produced by temporarily changing the spinning rate of the magnetization of one voxel with respect to another. This happens when the two voxels are located in magnetic fields of different strengths. This can be achieved by turning on a gradient, as shown in Figure 32-12.
Let us begin the process of phase encoding by considering the row of voxels shown at the bottom of the illustration. We are assuming that all voxels have the same phase.
Figure 32-11 The Concept of Phase between Two Signals (A), Compared to a Frequency
Difference (B)
[Figure 32-12: With the gradient on, the spinning rate of the voxel magnetization increases from slow to fast across the row]
When the gradient is turned on, the field strength increases from left to right. Therefore, the magnetization in each voxel is spinning at a different rate, with the speed increasing from left to right. This causes the magnetization from voxel to voxel to get out of step, or produce a phase difference. The
phase-encoding gradient remains on for a short period of time and then is turned
off. This leaves the condition represented by the top row of voxels. This is the
condition that exists at the time of the echo event when the signals are actually
produced. As we see, the signals from the individual voxels are different in terms
of their phase relationship. In other words, the signals are phase encoded. All of
the signals are emitted at the same time and mixed together as a composite echo
signal. Later the reconstruction process will sort the individual signal components.
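The phase picked up while the gradient is on can be sketched as a small calculation. This is an illustration, not from the text; the gradient strength, voxel positions, and on-time are assumed values.

```python
GAMMA_HZ_PER_T = 42.58e6  # proton gyromagnetic ratio, Hz/T

def accumulated_phase_deg(gradient_mt_per_m, position_mm, on_time_ms):
    """Extra phase a voxel gains while the phase-encoding gradient is on:
    phase = 360 degrees x (gamma x extra field) x gradient-on time."""
    extra_field_t = gradient_mt_per_m * 1e-3 * position_mm * 1e-3
    extra_cycles = GAMMA_HZ_PER_T * extra_field_t * on_time_ms * 1e-3
    return (360.0 * extra_cycles) % 360.0

# Voxels farther along the gradient gain proportionally more phase,
# which is what tags their position in the phase-encoded direction.
print(round(accumulated_phase_deg(1.0, 1.0, 1.0), 1))  # 15.3 (degrees)
print(round(accumulated_phase_deg(1.0, 2.0, 1.0), 1))  # 30.7 (degrees)
```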
Phase encoding is the second function performed by a field gradient during each
cycle, as shown in Figure 32-13. During each pass through an imaging cycle, the
phase-encoding gradient is stepped to a slightly different value. The different set¬
tings create the different "views" required to reconstruct the final image. The con¬
cept of a view in MRI is quite different from a view in CT.
One MRI phase-encoding step produces a composite signal from all voxels within a slice. The difference from one step to another is that the individual voxel signals have different phase relationships.
[Figure 32-13: Timing of the slice-selection, phase-encoding, and frequency-encoding gradients relative to the RF pulses and the RF signal]
We have seen that various gradients are turned on and off at specific times
within each imaging cycle. The relationship of each gradient to the other events
during an imaging cycle is shown in Figure 32-13. The three gradient activities
are:
1. The slice selection gradient is on when RF pulses are applied to the tissue.
This limits magnetic excitation and echo formation to the tissue located
within the specific slice.
2. The phase-encoding gradient is turned on for a short period in each cycle to
produce a phase difference in one dimension of the image. The strength of
this gradient is changed slightly from one step to another to create the dif¬
ferent "views" needed to form the image.
3. The frequency-encoding gradient is turned on during the echo event when the signals are actually emitted by the tissue. This causes the different
voxels to emit signals with different frequencies.
Because of the combined action of the three gradients, the individual voxels
within each slice emit signals that are different in two respects. They have a phase
difference in one direction and a frequency difference in the other. Although these
signals are emitted at the same time, and picked up by the imaging system as one
composite signal, the reconstruction process can sort the signals into the respec¬
tive components.
IMAGE RECONSTRUCTION
The next major step in the creation of an MR image is the reconstruction process. Reconstruction is the mathematical process performed by the computer that
converts the collected signals into an actual image. There are several reconstruc¬
tion methods, but the one used for most clinical applications is the 2D Fourier
transformation.
The basic concept of the Fourier transformation is illustrated in Figure 32-14. It
is a mathematical procedure that can sort a composite signal into individual frequency and phase components. Since each voxel in a column emits a different
signal frequency, the Fourier transformation can determine the location of each
signal component and direct it to the corresponding pixel.
The sorting of the signals in the phase-encoded direction is also done by a Fou¬
rier transformation in a rather complex process.
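A minimal numerical sketch of this idea, using NumPy rather than anything from the book: build the k-space data of a simple object with a forward 2D Fourier transform, then recover the image with the inverse transform.

```python
import numpy as np

# A simple "tissue" image: a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# Acquisition fills k space; each phase-encoding step supplies one line
# of the object's 2D Fourier transform.
k_space = np.fft.fft2(image)

# Reconstruction is the inverse 2D Fourier transform of the k-space data.
reconstructed = np.abs(np.fft.ifft2(k_space))

print(np.allclose(reconstructed, image))  # True
```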
Let us now use the concept illustrated in Figure 32-15 to summarize the spatial
characteristics of the MR image. We will use a postal analogy for this purpose.
Figure 32-15 The Concept of Signal Encoding (Addressing) and Image Reconstruction
(Sorting and Delivery)
In Chapter 35 we will see that if a voxel of tissue moves during the acquisition
process it might not receive the correct phase address and the signal will be deliv¬
ered to the wrong pixel. This creates ghost images and streak artifacts in the phase-
encoded direction.
The chemical-shift artifact is caused by the difference in signal frequency be¬
tween tissues containing water and fat.
Chapter 33
Image Detail, Noise, and Acquisition Speed
The visibility of objects in an image depends on both their size and their contrast. This is shown in Figure 33-1, where we see objects arranged
according to two characteristics. In the horizontal direction, the objects are ar¬
ranged according to size. Decreasing object size corresponds to increasing detail.
In the vertical direction, the objects are arranged according to their inherent con¬
trast. The object in the lower left is both large and has a high level of contrast. This
is the object that would be most visible under a variety of imaging conditions. The
object that is always the most difficult to see is the small, low-contrast object located in the upper right corner.
In every imaging procedure we can assume that some potential objects within
the body will not be visible because of the blurring and noise in the image. This is
represented by the area of invisibility indicated in Figure 33-1. The boundary be¬
tween the visible and invisible objects, often referred to as a contrast-detail curve,
is determined by the amount of blurring and noise associated with a specific imag¬
ing procedure.
The equipment operator can change the boundary of visibility by altering the
amount of blurring and noise. These two characteristics are determined by the
combination of many adjustable imaging factors, as shown in Figure 33-2. It is a complex process because the factors that affect visibility of detail (blurring) also affect noise. Another point to consider is that several of the factors that have an effect on both image detail and noise also affect image acquisition time. Therefore, when formulating an imaging protocol one must consider the multiple effects
of the imaging factors and then select factor values that provide an appropriate
compromise and an optimized acquisition for a specific clinical study.
The three competing goals associated with some of the same imaging factors are shown in Figure 33-3. These are:
1. high detail (low blurring)
2. low noise (high signal-to-noise)
3. acquisition speed.
Each set of imaging parameters in an imaging protocol is represented by an operating point located somewhere within the triangular area. It can be moved by changing the protocol factor values. However, as the operating point is moved closer to one of the desirable goals, it generally moves away from the other two. This is the compromise that must be considered when selecting protocol factors.
We will now consider the many factors that have an effect on the characteristics of image detail, noise, and acquisition speed.
IMAGE DETAIL
Figure 33-2 The Imaging Protocol Factors That Have an Effect on Image Detail, Noise,
and Acquisition Speed
Each tissue voxel is represented in the image by a single signal intensity. It is not possible to see details within a voxel, just the voxel
itself. The amount of image blurring is determined by the dimensions of the indi¬
vidual voxel.
Three basic imaging factors determine the dimensions of a tissue voxel, as illus¬
trated in Figure 33-4. The dimension of a voxel in the plane of the image is deter¬
mined by the ratio of the field of view (FOV) and the dimensions of the matrix.
Both of these factors can be used, to some extent, to adjust image detail.
The selection of the FOV is determined primarily by the size of the body part
being imaged. One problem that often occurs is the appearance of foldover arti¬
facts when the FOV is smaller than the body section. The maximum FOV is usu¬
ally limited by the dimensions and characteristics of the RF coil.
Matrix dimension refers to the number of voxels in the rows or columns of the
matrix. The matrix dimension is selected by the operator before the imaging pro¬
cedure. Typical dimensions are in the range of 128 to 256.
Figure 33-3 Three Imaging Goals That Must Be Considered When Selecting Protocol
Factors
There is a considerable range of voxel sizes (image detail) because of the pos¬
sible choices of FOV and matrix dimension. The third dimension of a voxel is the
thickness of the imaged slice of tissue. In most imaging procedures, this is the
largest dimension of a voxel. The amount of blurring can be reduced and the vis¬
ibility of detail improved by reducing voxel size. Unfortunately, there is a com¬
promise. Signal strength is directly proportional to the volume of a voxel. There¬
fore, reducing voxel size to improve image detail reduces signal intensity. This
becomes especially significant with respect to image noise.
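These relationships can be sketched with a small calculation. The 250 mm FOV, 256 matrix, and 5 mm slice thickness below are assumed example values, not figures taken from the book.

```python
def voxel_volume_mm3(fov_mm, matrix, slice_thickness_mm):
    """In-plane voxel dimension = FOV / matrix; volume adds the slice thickness."""
    in_plane_mm = fov_mm / matrix
    return in_plane_mm * in_plane_mm * slice_thickness_mm

# 250 mm FOV, 256 matrix, 5 mm slices -> roughly 1 mm x 1 mm x 5 mm voxels.
print(round(voxel_volume_mm3(250, 256, 5), 2))  # 4.77 (mm^3)
# Halving the FOV (or doubling the matrix) cuts the voxel volume,
# and therefore the signal, by a factor of 4.
print(round(voxel_volume_mm3(125, 256, 5), 2))  # 1.19 (mm^3)
```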
IMAGE NOISE
Random RF noise is an inherent limitation in MRI, and procedures often require longer acquisition times to partially compensate for its presence. If it were not for this
form of noise, it would be possible to acquire images with greater contrast sensi¬
tivity and more detail, and to acquire them in less time than is required for current
image acquisition.
Image Detail, Noise, and Acquisition Speed 495
[Figure 33-4: Image detail (voxel size) — determined by FOV and matrix]
NOISE SOURCES
The environment contains many sources of RF signals that can interfere with MRI. These include radio and TV transmitters, electrosurgery units, fluorescent lights, and computing equipment. All MR units are installed with an RF shield to reduce the interference from these external sources. Such interference rarely reaches the receiver; when it does occur, it generally appears as an image artifact rather than the conventional random noise pattern.
SIGNAL-TO-NOISE CONSIDERATIONS
Image quality is not dependent on the absolute intensity of the noise but rather
the amount of noise energy in relationship to the image signal intensity. Image
quality increases in proportion to the signal-to-noise ratio. When the intensity of
the RF noise is low in proportion to the intensity of the image signal, the noise has
a low visibility. In situations where the signal is relatively weak, the noise be¬
comes much more visible. The principle is essentially the same as with conven-
tional TV reception. When a strong signal is received, image noise (snow) is gen¬
erally not visible; when one attempts to tune in to a weak signal from a distant
station, the noise becomes significant.
In MRI the interference from noise is reduced by either reducing the noise in¬
tensity or increasing the intensity of the signals that create the image, as illustrated
in Figure 33-5. Let us now see how this can be achieved.
Voxel Size
One of the major factors that affects signal strength is the volume of the indi¬
vidual voxels. The signal intensity is proportional to the total number of protons
contained within a voxel. Large voxels emit stronger signals and produce less
image noise. Unfortunately, large voxels reduce image detail. Therefore, when the
factors for an imaging procedure are being selected, this compromise between
signal-to-noise and image detail must be considered. The major reason for imag¬
ing with relatively thick slices is to increase the voxel signal intensity.
Field Strength
Signal intensity increases in proportion to the square of the magnetic field strength. However, the amount of
noise picked up from the patient's body often increases with field strength because
of adjustments made to reduce artifacts at the higher fields. Because of differences
in system design, no one precise relationship between signal-to-noise ratio and
magnetic field strength applies to all systems. In general, MRI systems operating
at relatively high field strengths produce images with higher signal-to-noise ratios
than images produced at lower field strengths.
Tissue Characteristics
Signal intensity, and thus the signal-to-noise ratio, depends to some extent on
the magnetic characteristics of the tissue being imaged. For a specific set of imag¬
ing factors, the tissue characteristics that enhance the signal-to-noise relationship
are high magnetic nuclei (proton) concentration, short Tl, and long T2. The pri¬
mary limitation in imaging nuclei other than hydrogen (protons) is the low tissue
concentration and the resulting low signal intensity.
TR and TE
Repetition time (TR) and echo time (TE) are the factors used to control contrast
in conventional spin-echo imaging. We have observed that these two factors also
control signal intensity. This must be taken into consideration when selecting the
factors for a specific imaging procedure.
When a short TR is used to obtain a Tl-weighted image, the longitudinal mag¬
netization does not have the opportunity to approach its maximum and produce
high signal intensity. In this case, some signal strength must be sacrificed to gain a
specific type of image contrast. Also, when TR is reduced to decrease image ac¬
quisition time, image noise often becomes a limiting factor.
When relatively long TE values are used to produce T2 contrast, noise often
becomes noticeable. The long TE values allow the transverse magnetization and
the signal it produces to decay to very low values.
RF Coils
The most direct control over the amount of noise energy picked up from the
patient's body is by selecting appropriate characteristics of the RF receiver coil. In
principle, noise is reduced by decreasing the amount of tissue within the sensitive
region of the coil. Most imaging systems are equipped with interchangeable coils.
These include a body coil, a head coil, and a set of surface coils. The body coil is
the largest and usually contains a major part of the patient's tissue within its sensi¬
tive region. Therefore, body coils pick up the greatest amount of noise. Also, the
distance between the coil and the tissue voxels is greater than in other types of
coils. This reduces the intensity of the signals actually received by the coil. Be¬
cause of this combination of low signal intensity and higher noise pickup, body
coils generally produce a poorer signal-to-noise ratio than the other coil types. In comparison to body coils, head coils are both closer to the imaged tissue and generally contain a smaller total volume of tissue within their sensitive region.
The surface coil is the smallest of the coil types. Because of its small size, it has a limited sensitive region and picks up less noise from the tissue. When it is placed on or near the surface of the patient, it is usually quite close to the voxels and picks up a stronger signal than the other coil types. The compromise with surface coils is that their limited sensitive region restricts the useful field of view, and the sensitivity of the coil is not uniform within the imaged area. This non-uniformity results in very intense signals from tissue near the surface and a significant decrease in signal intensity with increasing depth. The relatively high signal-to-noise ratio obtained with surface coils can be traded for increased image detail by using smaller voxels.
Averaging
One of the most direct methods used to control the signal-to-noise characteristics of MR images is the process of averaging. In principle, each basic imaging cycle (phase-encoding step) is repeated several times and the resulting signals are averaged to form the final image. The averaging process tends to reduce the noise level because the noise fluctuates statistically from one cycle to another.
The disadvantage of averaging is that it increases the total image acquisition time in proportion to the number of cycle repetitions or number of signals averaged (NSA). The NSA is one of the protocol factors set by the operator. Typical values are 1 (no averaging), 2, or 4, depending on the amount of noise reduction required.
The general relationship is that the NSA must be increased by a factor of 4 in order to improve the signal-to-noise by a factor of 2. The signal-to-noise is proportional to the square root of the NSA.
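The square-root relationship can be checked with a small simulation. This is an illustrative sketch, not from the text; the signal and noise values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0   # arbitrary voxel signal intensity
noise_sd = 10.0  # standard deviation of the random RF noise

def measured_sd(nsa, trials=20000):
    """Standard deviation of the averaged signal over many simulated acquisitions."""
    samples = signal + noise_sd * rng.standard_normal((trials, nsa))
    return samples.mean(axis=1).std()

sd1 = measured_sd(1)
sd4 = measured_sd(4)
# Increasing the NSA by a factor of 4 cuts the noise roughly in half,
# i.e., signal-to-noise improves by the square root of the NSA.
assert abs(sd4 - sd1 / 2.0) < 0.5
```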
Matrix Size
The number of acquisition cycles required is set by the matrix size in the phase-encoded direction. For example, if there are to be 256 voxels in the phase-encoded direction, a minimum of 256 steps is required. However, there are some modifying factors that will be described later. It should be noted that the number of voxels or matrix size in the frequency-encoding direction does not have an effect on acquisition time. It is a common procedure to use an asymmetrical matrix with more voxels in the frequency-encoding direction than in the phase-encoding direction. This helps to maintain image detail and reduce acquisition time to some extent.
Averaging
When averaging is used to reduce image noise the acquisition cycles are re¬
peated. This increases acquisition time by the number of repetitions or NSA.
Half Acquisition
A basic acquisition requires one phase-encoded step for each voxel in the phase-encoded direction. It is possible, however, to reconstruct an image with only one half of the normally required phase-encoded steps, as shown in Figure 33-7. The process makes use of the symmetry that exists between the first half and the second half of the phase-encoded steps. In principle, the data collected during the first half of the steps can be used to fill in the symmetrical second half.
Figure 33-7 The Half-Scan Technique That Can Be Used To Reduce Acquisition Time
Turbo Factor
With some imaging methods it is possible to perform more than one phase-
encoded step during one acquisition cycle. This has the potential of decreasing
total acquisition time. The number of phase-encoded steps within one cycle is
often designated as the turbo factor. Increasing the turbo factor decreases acquisi¬
tion time proportionally.
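The combined effect of these factors on acquisition time can be sketched as a simple calculator. This is an illustration with assumed parameter values; for simplicity the half-scan option is treated as exactly half the steps.

```python
def acquisition_time_s(tr_ms, phase_steps, nsa, half_scan=False, turbo_factor=1):
    """Time = TR x phase-encoding steps x NSA, halved by half-scan and
    divided by the number of steps performed per cycle (turbo factor)."""
    steps = phase_steps / 2 if half_scan else phase_steps
    cycles = steps * nsa / turbo_factor
    return cycles * tr_ms / 1000.0

# TR = 500 ms, 256 phase-encoding steps, NSA = 2:
print(acquisition_time_s(500, 256, 2))                  # 256.0 (s, ~4.3 min)
print(acquisition_time_s(500, 256, 2, half_scan=True))  # 128.0 (s)
print(acquisition_time_s(500, 256, 2, turbo_factor=4))  # 64.0 (s)
```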
PROCEDURE OPTIMIZATION
Figure 33-8 The Relationship of Protocol Factors to the Three Imaging Goals of Detail,
Signal-to-Noise, and Acquisition Speed
In Figure 33-8 we see how the various protocol factors can be used to set the operating point
relative to the specific image characteristics. These factors include:
• Voxel size is selected to produce the desired balance between image detail
and signal-to-noise.
• Matrix size, in the phase-encoding direction, affects the balance between
image detail and acquisition speed.
• The number of signals averaged (NSA) provides a balance between signal-
to-noise and acquisition speed.
Chapter 34
Magnetic Resonance Imaging of Flowing Blood
Under some conditions flowing blood will produce an increased signal intensity while under other conditions very little or no signal will be produced by the flowing blood. We will desig¬
nate these two possibilities as "bright blood" imaging and "black blood" imaging,
as indicated in Figure 34-1. There are several different physical effects that can
produce both bright blood and black blood as indicated. These effects can be di¬
vided into three categories:
1. time effects
2. phase effects
3. selective demagnetization (saturation).
TIME EFFECTS
The time effects are related to the movement of blood during certain time inter¬
vals within the acquisition cycle. The production of bright blood is related to the
TR time interval whereas the production of black blood is related to the TE time
interval.
[Figure 34-1: Time effects — in-flow enhancement, flow void; phase effects — intravoxel dephasing, flow compensation; saturation]
Figure 34-1 The Different Effects That Can Produce Contrast of Flowing Blood
Flow-Related Enhancement
The process that causes flowing blood to show an increased intensity, or brightness, is illustrated in Figure 34-2. This occurs when the direction of flow is through the slice, as illustrated, and is also known as the in-flow effect. The degree of enhancement is determined by the relationship of flow velocity to TR. Three conditions are illustrated. The arrow indicates the amount of longitudinal magnetization at the end of each imaging cycle. Because of the slice-selection gradient, the RF pulses affect only the blood within the slice.
When a long TR is used in the absence of flow, the longitudinal magnetization regrows to a relatively high value during each cycle, as indicated at the top. This condition produces a relatively bright image of both the blood and the stationary tissue. If a short TR value is used, each cycle will begin before the longitudinal magnetization has approached its maximum. This results in a reduced signal intensity and a relatively dark image because both the blood and tissue remain partially saturated.
Magnetic Resonance Imaging of Flowing Blood 505
[Figure 34-2: Longitudinal magnetization at the end of each cycle for three conditions (long TR with no flow, short TR with no flow, short TR with flow); the 90-degree RF pulses leave stationary blood and tissue partially saturated (low magnetization) while inflowing blood arrives unsaturated (high magnetization)]
The effect of flow is to replace some of the blood in the slice with fully magnetized blood from outside the slice. The increased magnetization at the end of each cycle increases image brightness of the flowing blood. The enhancement increases
with flow until the flow velocity becomes equal to the slice thickness divided by
TR. This represents full replacement and maximum enhancement.
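The full-replacement condition can be put into numbers. A minimal sketch of the arithmetic; the slice-thickness and TR values are illustrative assumptions, not from the text:

```python
def full_replacement_velocity(slice_thickness_mm, tr_ms):
    """Flow velocity (cm/s) at which all blood in the slice is replaced
    by unsaturated blood between excitation pulses: thickness / TR."""
    return (slice_thickness_mm / 10.0) / (tr_ms / 1000.0)

# Example: a 5-mm slice imaged with TR = 20 ms (gradient-echo range)
print(full_replacement_velocity(5.0, 20.0))   # 25.0 cm/s
```

Flow faster than this produces no additional enhancement, because the blood within the slice is already completely replaced during each cycle.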
There are several factors that can have an effect on flow enhancement. In multi-slice imaging, including volume acquisition, the degree of enhancement can vary with slice position. Only the first slice in the direction of flow receives fully magnetized blood. As the blood reaches the deeper slices, its magnetization and resulting signal intensity will be reduced by the RF pulses applied to the outer slices. Slowly flowing blood will be affected the most by this. Faster flowing blood can penetrate more slices before losing its magnetization. Related to this is a change in the cross-sectional area of enhancement from slice to slice. A first slice might show enhancement for the entire cross section of a vessel. However, when laminar flow is present the deeper slices will show enhancement only for the smaller area of fast flow along the central axis of the vessel.
Any other effects that produce black blood can counteract flow-related enhancement. One of the most significant is the flow-void effect, which takes over at higher flow velocities.
Bright blood from flow-related enhancement is especially prevalent with gradient-echo imaging. There are two major reasons for this. The short TR values typically used in SAGE imaging increase the flow-related enhancement effect. Also, when a gradient rather than an RF pulse is used to produce the echo there is no flow-void effect to cancel the enhancement.
FLOW-VOID EFFECT
Relatively high flow velocities through a slice reduce signal intensity and image brightness. In Figure 34-3 the arrow in the slice indicates the level of residual transverse magnetization when the 180-degree pulse is applied to form the spin-echo signal. This is the transverse magnetization produced by the preceding 90-degree pulse. The time interval between the 90-degree and 180-degree pulses is one-half TE. If the blood is not moving, the blood that was magnetized transversely by the 90-degree pulse will be within the slice when the 180-degree pulse is applied. This results in maximum rephasing of the transverse protons and a relatively bright image.
If blood moves out of the slice between the 90-degree and 180-degree pulses, complete rephasing will not occur. This is because the 180-degree pulse can affect only the blood within the slice. The spin-echo signal is reduced, and the flowing blood appears darker than blood moving with a lower velocity. The intensity continues to drop as flow is increased until the flow velocity removes all magnetized blood from the slice during the interval between the two pulses (one-half TE).
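The complete-void condition can be computed the same way; the slice-thickness and TE values here are illustrative assumptions:

```python
def flow_void_velocity(slice_thickness_mm, te_ms):
    """Velocity (cm/s) at which all transversely magnetized blood leaves
    the slice during the 90-to-180-degree interval (one-half TE)."""
    return (slice_thickness_mm / 10.0) / ((te_ms / 2.0) / 1000.0)

# Example: a 5-mm slice with TE = 100 ms (T2-weighted range)
print(flow_void_velocity(5.0, 100.0))   # 10.0 cm/s
```

Flow at or above this velocity leaves no excited blood in the slice at the 180-degree pulse, producing a complete signal void.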
[Figure 34-3: With slow flow the blood excited by the 90-degree pulse is still in the slice at the 180-degree pulse, giving a bright image; with fast flow no excited blood remains in the slice at the 180-degree pulse, giving a signal void]
PRESATURATION
[Figure 34-4 panels: flow enhancement without presaturation; a presaturation RF pulse applied to a region upstream, followed by the excitation pulse for the imaged slice]
Figure 34-4 The Use of RF Pulses To Demagnetize (Saturate) Blood Before It Flows into a Slice
The presaturated blood produces no signal when it flows into the slice, so the image displays a void or black blood in the vessels. The region of presaturation can be placed on either side of the imaged slice. This makes it possible to selectively turn off the signals from blood flowing in opposite directions.
The presaturation technique can be used to: (1) produce black-blood images, (2) selectively image either arterial or venous flow, and (3) reduce flow-related artifacts, as described in Chapter 35.
PHASE EFFECTS
Two phase relationships are affected by the movement of blood during the imaging process. One is the phase relationship among the spinning protons within each individual voxel (intravoxel) and the other is the voxel-to-voxel (intervoxel) relationship of the transverse magnetizations, as shown in Figure 34-5. Both of these concepts have been described before.

[Figure 34-5: Intervoxel phase (voxel-to-voxel phase shift, reduced by flow compensation) and intravoxel phase (dephasing of the protons within a voxel)]

We will now see how they are affected by flowing blood and how they can be corrected by
the technique of flow compensation.
Intravoxel Phase
The signal from a voxel depends on its protons being in phase at the time of the echo event. In general, dephasing and the loss of transverse magnetization occurs when the magnetic field is not perfectly homogeneous or uniform throughout a voxel. A gradient in the magnetic field is one form of inhomogeneity that produces proton dephasing. Since gradients are used for various purposes during an image acquisition cycle, this dephasing effect must be
taken into account. For stationary (non-flowing) tissue or fluid the protons can be
rephased by applying a gradient in the opposite direction, as shown in Figure 34-6.
Let us now consider this process in more detail.
[Figure 34-6: Gradient dephasing followed by gradient rephasing between excitation and the echo event; non-moving tissue is back in phase at the echo, flowing blood without compensation remains dephased (dark blood), and flowing blood with compensation is in phase (bright blood)]

In general, we are considering events that happen within the TE interval, that is, between the excitation pulse and the echo event. We recall that the phase-encoding gradient is applied during this time. We will use it as the example for this
discussion. When the gradient is applied as shown, the right side of the voxel is in
a stronger magnetic field than the left. This means that the
protons on the right are
spinning faster than those on the left and quickly get out of phase. However, they
can be rephased by applying a gradient in the reversed direction, as shown. Now
the protons on the left are in the stronger field and will be spinning faster. They
will catch up with and come into phase with the slower spinning protons on the right. At the time of the echo event the protons are in phase and a signal is produced. The process that has just been described is used in virtually all imaging methods to compensate for gradient dephasing.
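The rephasing argument can be checked numerically. The sketch below integrates gradient-weighted position over a bipolar (dephase then rephase) gradient pair; the units and gradient values are arbitrary assumptions chosen only to show that a stationary spin refocuses while a moving one does not:

```python
def net_phase(x0, v, g=1.0, lobe=1.0, steps=1000):
    """Accumulated phase after a +g lobe followed by a -g lobe of equal
    duration, for a spin starting at x0 and moving with velocity v.
    The phase increment per step is proportional to G(t) * x(t) * dt."""
    dt = 2.0 * lobe / steps
    phase = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        grad = g if t < lobe else -g      # reversed gradient in second lobe
        phase += grad * (x0 + v * t) * dt
    return phase

print(abs(net_phase(3.0, 0.0)) < 1e-9)   # True: stationary spin is rephased
print(abs(net_phase(3.0, 2.0)) > 0.1)    # True: moving spin stays dephased
```

For the stationary spin the two lobes cancel exactly; for the moving spin a residual phase proportional to velocity remains, which is the flow-dephasing effect described next.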
Flow Dephasing
Our next step is to consider what happens to the protons in a voxel of flowing blood or other fluid. This is also illustrated in Figure 34-6. If the voxel is moving the protons will not be completely rephased by the second gradient. This is because the protons will not be in the same position relative to the gradient and will not receive complete compensation. The result of this is that flow in the direction
of a gradient will generally produce proton dephasing and little or no signal at the
time of the echo event. This is one possible source of black blood.
Flow Compensation
Even-Echo Rephasing
In multi-echo acquisitions, each 180-degree pulse reverses the phase relationships of the spinning protons. Before the pulse, the protons in the faster layers had moved ahead and gained
phase on the slower moving protons. However, immediately after the pulse they
are flipped so that they are behind the slower spinning protons. This sets the stage
for them to catch up and come back into phase. This occurs at the time of the
second echo event and results in an increase in transverse magnetization, higher
signal intensity, and brighter blood than was observed in the first echo image.
The alternating of blood brightness between the odd- and even-echo images
will continue for subsequent echoes but there will be an overall decrease in intensity because of the T2 decay of the transverse magnetization. Both turbulent and
pulsatile flow tend to increase proton dephasing within a voxel, which results in a
loss of signal intensity. Under some conditions this can counteract the effect of
even-echo rephasing.
Intervoxel Phase
We will assume laminar flow with the highest velocity along the central axis of the vessel. As this blood flows through a magnetic field gradient the phase of the individual voxels will shift in proportion to the flow velocity. Therefore, if we create an image at a specific time we can observe the phase relationships. If the phase of a voxel's transverse magnetization is then translated into image brightness (or perhaps color) we will have an image that displays flow velocity and direction. This type of image is somewhat analogous to a Doppler ultrasound image in that it is the velocity that is being measured and displayed in the image.
Analytical software can be used in conjunction with phase images to calculate
selected flow parameters.
Phase effects can also be used to produce angiographic images as described later.
Artifacts
Although the techniques that can be used to suppress artifacts will be described in Chapter 35, it is appropriate to consider the source of one such artifact here. We recall that the process of phase encoding is used to produce one dimension in the MR image. A gradient is used to give each voxel in the phase-encoded direction a different phase value. This phase value of each voxel is measured by the reconstruction process (Fourier transform) and used to direct the signals to the appropriate image pixels, as described in Chapter 32. This process works quite well if the
tissue voxels are not moving during the acquisition process. However, if a voxel moves during the acquisition, phase errors are produced and its signal can be displayed in the wrong pixel location.
Figure 34-7 Intervoxel Phase Effects That Can Be Used To Produce Images of Flowing Blood
ANGIOGRAPHY
Several techniques can be used to produce MR angiographic images. Some rely on phase effects while others rely on the magnetic characteristics of the blood
flowing into the imaging area. We will now consider these two major techniques
and variations within each one.
Phase Contrast
We recall that when blood flows through a magnetic-field gradient the phase
relationship is affected. This applies both to the phase relationship of protons
within a voxel and the voxel-to-voxel relationship of transverse magnetization.
Both of these—intravoxel and intervoxel—phase effects can be used to produce
contrast of flowing blood. However, it is the intervoxel phase shift of the transverse magnetization that is translated into brightness in the image. Therefore, image brightness is directly related to flow velocity in the direction of the flow-encoding gradient.
Figure 34-8 illustrates the basic process of creating a phase contrast angiogram. At least two image acquisitions are required. In one image the phase of the magnetization is shifted in proportion to the flow velocity. Flow compensation is used during the acquisition of the second image to reset the phase of the flowing blood. The phase in the stationary tissue is not affected and is the same in both images.
The mathematical process of phase or vector subtraction is then used to produce the phase contrast image. The stationary tissue, which has the same phase in the two images, completely cancels and produces a black background in the image. The flowing blood, which has a different phase in the two images, produces a bright image.
The contrast in this type of image is related to both the velocity and direction of
the flowing blood.
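The subtraction step can be sketched with complex numbers standing in for the transverse magnetization of single pixels; the magnitudes and phase angles below are illustrative assumptions:

```python
# Sketch of the phase (vector) subtraction behind phase-contrast imaging.
import cmath

def phase_contrast(pixel_img1, pixel_img2):
    """Magnitude of the complex difference between the two acquisitions."""
    return abs(pixel_img1 - pixel_img2)

stationary = cmath.exp(1j * 0.4)           # same phase in both acquisitions
flowing_1  = cmath.exp(1j * (0.4 + 1.2))   # flow-induced phase shift
flowing_2  = cmath.exp(1j * 0.4)           # flow-compensated acquisition

print(phase_contrast(stationary, stationary))          # 0.0: black background
print(round(phase_contrast(flowing_1, flowing_2), 3))  # 1.129: bright blood
```

Stationary tissue cancels exactly because its phase is identical in both images; the residual magnitude for the flowing pixel grows with the flow-induced phase shift.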
Velocity
The degree of phase shift and the resulting contrast is related to flow velocity.
When using this imaging method the user must set a velocity value as one of the
protocol factors. Flow at this rate will produce maximum contrast. An advantage
of phase contrast angiography is the ability to image relatively slow flow rates if
the proper factors are used.
Flow Direction
Phase contrast is produced only when the flow is in the direction of a gradient.
[Figure 34-8: Two acquisitions for phase contrast angiography; the first image contains the flow-induced phase shift and the second is flow compensated, and their subtraction cancels the stationary tissue]

To image blood that is flowing in different directions several images must be acquired with gradients in the appropriate directions. Flow in all possible directions
can be imaged by acquiring three images with three orthogonal gradient directions.
The images for the different flow directions are combined along with the subtraction process to produce one composite angiographic image.
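Assuming the three orthogonal acquisitions yield the three velocity components of a voxel, the composite value is simply their vector magnitude; a minimal sketch of that combination step:

```python
import math

def composite_flow(vx, vy, vz):
    """Flow speed from three orthogonal flow-encoded components."""
    return math.sqrt(vx**2 + vy**2 + vz**2)

# Oblique flow with no component along z is still fully represented.
print(composite_flow(3.0, 4.0, 0.0))   # 5.0
```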
Inflow Effects
applications.
Spatial Characteristics
images. This is usually by means of the maximum intensity projection (MIP) technique, which will be described below, or by surface rendering methods.
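The MIP idea can be sketched in a few lines: for each pixel position the projection keeps the brightest voxel along the projection axis, so high-signal blood survives while dimmer stationary tissue is suppressed. A plain-Python sketch with a toy two-slice volume:

```python
def mip(volume):
    """Maximum intensity projection of a volume (a list of 2D slices)
    along the slice axis: keep the brightest voxel on each ray."""
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[max(slc[r][c] for slc in volume) for c in range(cols)]
            for r in range(rows)]

# Two tiny 2x2 slices; the bright-blood voxels (9) survive the projection.
vol = [[[1, 9], [2, 3]],
       [[4, 2], [9, 1]]]
print(mip(vol))  # [[4, 9], [9, 3]]
```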
Fast flowing blood penetrates deeper into the acquisition volume than slow flowing blood. This is illustrated in Figure 34-10.

[Figures 34-9 and 34-10: Angiographic images produced by the RF pulses applied to the acquisition volume]
Chapter 35
MRI Artifacts
There are a variety of artifacts that can appear in MR images. There are many techniques that can be used in the acquisition process to suppress artifacts. In this chapter we will consider the most significant artifacts that degrade MR images and how the various artifact suppression techniques can be employed.
An artifact is something that appears in an image and is not a true representation of an object or structure within the body. Most MRI artifacts are caused by errors in the spatial encoding of RF signals from the tissue voxels. This causes the signal from a specific voxel to be displayed in the wrong pixel location. This can occur in both the phase-encoding and frequency-encoding directions, as shown in Figure 35-1. Errors in the phase-encoding direction are more common and larger, resulting in bright streaks or ghost images of some anatomical structures. Motion is the most common cause but the aliasing effect can produce ghost images that fold over or wrap around into the image.
MOTION-INDUCED ARTIFACTS
Movement of body tissues and the flow of fluid during the image acquisition
process is the most significant source of artifacts. The selection of a technique that
[Figure: Artifact classification; phase-encoding errors produce streaks, ghosts, and wrap around, while frequency-encoding errors produce pixel displacement]
Figure 35-1 Classification of the Most Common MRI Artifacts
over the region of movement. Artifacts, or ghost images, occur when the signals
[Figure labels: Ordered Phase Encoding, Serial Averaging, Regional Saturation]
Figure 35-2 Various Types of Motion That Produce Artifacts in MR Images and Correction Techniques
Phase-Encoded Direction
Motion artifact streaks and ghosting always occur in the phase-encoded direction. Prior to an acquisition the operator can select which direction in the image, vertical or horizontal, is to be phase encoded as opposed to frequency encoded. This makes it possible to place the artifact streaks in either the horizontal or vertical direction. This is a very helpful technique for protecting one anatomical area from motion and flow artifacts produced in another area. It does not eliminate the artifacts but orients them in a specific direction.
Cardiac Motion
Triggering
Synchronizing the image acquisition cycle with the cardiac cycle is an effective
technique for reducing cardiac motion artifacts. An EKG monitor attached to the
patient provides a signal to trigger the acquisition cycle. The R wave is generally
used as the reference point. The initiation of each acquisition cycle is triggered by the R wave. Therefore, an entire image is created at one specific point in the cardiac cycle. This has two advantages. The motion artifacts are reduced and an unblurred image of the heart can be obtained. The delay time between the R wave and the acquisition cycle can be adjusted to produce images throughout the cardiac cycle. This is typically done in cardiac imaging procedures.
Maximum artifact suppression by this technique requires a constant heart rate.
Arrhythmias and normal heart-rate variations reduce the effectiveness of this
technique.
Cardiac triggering is also useful for reducing artifacts from CSF pulsation. This can be especially helpful in thoracic and cervical spine imaging.
Flow Compensation
The technique of flow compensation or gradient moment nulling was described in Chapter 34. In addition to compensating for blood flow effects, it can be used to reduce problems arising from CSF pulsation, especially in the cervical spine. It actually provides two desirable effects. The rephasing of the protons within each voxel increases signal intensity from the CSF, especially in T2 images. It also reduces the motion artifacts.
Respiratory Motion
Respiratory motion can produce artifacts and blurring in both the thoracic and abdominal regions. Several techniques can be used to suppress these motion
effects.
Averaging
The technique of signal averaging is used primarily to reduce signal noise, as
described in Chapter 33. However, averaging has the additional benefit of reducing streak artifacts arising from motion. If a tissue voxel is moving at different
velocities and in different locations during each acquisition cycle, the phase errors
will be different and somewhat randomly distributed. Averaging the signals over
several acquisition cycles produces some degree of cancellation of the phase errors and the artifacts. There are several different ways that signals can be averaged. Serial rather than parallel averaging gives the best artifact suppression.
Serial averaging is performed by repeating two or more complete acquisitions and
averaging. Parallel averaging is performed by repeating an imaging cycle two or
more times for each phase-encoded gradient step. With serial averaging there is a
much longer time between the measurements made at each phase-encoded step.
This gives a more random distribution of phase errors and better cancellation. As
with noise, increasing the number of signals averaged (NSA) reduces the intensity
of artifacts but at the cost of extending the acquisition time. The averaging process
reduces artifacts but not motion blurring.
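The cancellation argument can be illustrated with a toy model in which the ghost component of each acquisition carries a random phase. This is an illustration of the averaging principle only, not the actual acquisition mathematics:

```python
import cmath
import math
import random

def ghost_after_averaging(nsa, seed=0):
    """Residual ghost magnitude after averaging nsa acquisitions whose
    ghost components are unit phasors with random (motion-induced) phases."""
    rng = random.Random(seed)
    total = sum(cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi))
                for _ in range(nsa))
    return abs(total) / nsa

print(round(ghost_after_averaging(1), 6))                    # 1.0: no cancellation
print(ghost_after_averaging(16) < ghost_after_averaging(1))  # True: partial cancellation
```

With randomly distributed phase errors the residual falls roughly as the square root of the NSA, which is why serial averaging (more randomization between measurements) suppresses artifacts better than parallel averaging.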
Ordered Phase Encoding
In a conventional acquisition the phase-encoding gradient is turned on with maximum strength during the first step
and is gradually decreased to a value of zero at the midpoint of the acquisition
process. During the second half the gradient strength is increased, step by step, but
in the opposite direction. The basic problem is that two adjacent acquisition cycles
might catch a voxel of moving tissue in two widely separated locations. The location is also somewhat random from cycle to cycle. This contributes to the severity
of the artifacts.
Ordered phase encoding is a technique in which the strength of the gradient for
each phase-encoded step is related to the amount of tissue displacement at that
particular instant. This requires a transducer on the patient's body to monitor respiration. The signals from the transducer are processed by the computer and used
to select a specific level for the phase-encoded gradient.
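A minimal sketch of the reordering idea, using hypothetical transducer readings (one per acquisition cycle): gradient steps are assigned so that gradient strength varies smoothly with respiratory displacement rather than with acquisition order:

```python
def order_steps(gradient_steps, respiration_samples):
    """Assign each acquisition cycle a phase-encode gradient step so that
    gradient strength tracks the monitored respiratory displacement."""
    # Cycles sorted by displacement get the gradient steps in sorted order.
    order = sorted(range(len(respiration_samples)),
                   key=lambda i: respiration_samples[i])
    schedule = [None] * len(gradient_steps)
    for step, cycle in zip(sorted(gradient_steps), order):
        schedule[cycle] = step
    return schedule

steps = [-2, -1, 0, 1, 2]          # phase-encode gradient levels
resp  = [0.9, 0.1, 0.5, 0.3, 0.7]  # hypothetical displacement per cycle
print(order_steps(steps, resp))    # [2, -2, 0, -1, 1]
```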
Regional Presaturation
Regional presaturation is a technique that has several different applications. In Chapter 34 we saw how it could be used with flowing blood to eliminate the signal and produce a black-blood image. An application of this technique to reduce respiratory and cardiac motion artifacts in spine imaging is shown in Figure 35-3. With this technique a 90-degree RF saturation pulse is selectively applied to the region of moving tissue. This saturates or reduces any existing longitudinal magnetization to zero. This is then followed by the normal excitation pulse. However, the region that had just experienced the saturation pulse is still demagnetized (or saturated) and cannot produce a signal. This region will appear as a black void in the
image. It is also incapable of sending artifact streaks into adjacent areas.
Flow
Flow is different from the types of motion described above because a specific
structure does not appear to move from cycle to cycle. This reduces the blurring
effect but artifacts remain a problem.
The flow of blood or CSF in any part of the body can produce artifacts because
of the phase-encoding errors. Several of the techniques that have already been
described can be used to reduce flow-related artifacts.
Regional Presaturation
Regional presaturation as described in Chapter 34 is especially effective because it turns the blood black. Black blood, which produces no signal, cannot produce artifacts.
[Figure 35-3: RF pulse sequence with a regional saturation pulse preceding the excitation pulse and echo]
Figure 35-4 illustrates the use of presaturation to reduce flow artifacts. The area of saturation is located so that blood flows from it into the image slice.
Flow Compensation
Flow compensation is useful when it is desirable to produce a bright-blood image. It both reduces intervoxel phase errors, the source of streaking, and restores some of the intravoxel magnetization and signal intensity.
ALIASING ARTIFACTS
[Figure 35-4: A saturation region placed so that saturated blood flows into the imaged slice; saturation pulse, excitation pulse, and echo]
Wrap-around artifacts occur when tissue outside the field of view (FOV) produces signals that fold over into the image. The effect is known as aliasing because structures outside of the FOV take on an alias in the form of the wrong spatial-encoding characteristics. Wrap around can occur in both the frequency- and phase-encoded directions but is more of a problem in the phase-encoded direction.
Two techniques that can be used to eliminate wrap-around artifacts are illustrated in Figure 35-5. One procedure is to increase the size of the acquisition FOV and then display only the specific area of interest. The FOV is extended by increasing the number of voxels in that direction. This is described as oversampling. Under some conditions these additional samples or measurements will permit a reduction in the NSA so that acquisition time and signal-to-noise are not adversely affected by this technique.
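The fold-over mechanism can be sketched as modular arithmetic: spatial encoding determines position only modulo the matrix size, so a voxel beyond the FOV edge reappears on the opposite side. The FOV and matrix values below are illustrative:

```python
def displayed_pixel(position_cm, fov_cm, n_pixels):
    """Pixel where a voxel at position_cm (measured from the FOV edge)
    is displayed; positions beyond the FOV wrap around."""
    return round(position_cm / fov_cm * n_pixels) % n_pixels

print(displayed_pixel(10.0, 24.0, 256))  # 107: inside the FOV, correct pixel
print(displayed_pixel(26.0, 24.0, 256))  # 21: 2 cm beyond the FOV wraps around
```

Oversampling works because enlarging the encoded FOV (more pixels over a wider region) moves outside tissue back into the correctly encoded range before the display is cropped.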
An alternative method of eliminating wrap-around or fold-over artifacts is to apply presaturation pulses to the areas adjacent to the FOV. This eliminates signals and the resulting artifacts.
[Figure panels: Oversampling; Presaturation applied to the areas adjacent to the FOV]
Figure 35-5 The Wrap-around or Fold-over Artifact and Methods for Suppressing It
CHEMICAL-SHIFT ARTIFACTS
Field Strength
[Figure 35-6: The water-fat chemical shift of 3.3 ppm in the frequency-encoded direction; with a 256-pixel image width and a 16-kHz bandwidth (16 pixels/kHz) the shift is 210 Hz at 1.5 T and 70 Hz at 0.5 T]

We recall from Chapter 29 that the chemical shift or difference in resonant frequency between water and fat is approximately 3.3 ppm. This is the amount of chemical shift expressed as a fraction of the basic resonant frequency. The product of this and the proton resonant frequency of 64 MHz (at a field strength of 1.5 T) gives a chemical shift of 210 Hz. At a field strength of 0.5 T the chemical shift will
be only 70 Hz. The practical point is that chemical shift increases with field
strength and is generally more of a problem at the higher field strengths.
Bandwidth
In the frequency-encoded direction the tissue voxels emit different frequencies so that they can be separated in the reconstruction process. The RF receiver is
tuned to receive this range of frequencies. This is the bandwidth of the receiver.
The bandwidth is often one of the adjustable protocol factors. It can be used to
control the amount of chemical shift (number of pixels) but it also has an effect on
other characteristics such as signal-to-noise.
In Figure 35-6 we assume a bandwidth of 16 kHz. If the image matrix is 256 pixels in the frequency-encoded direction, this gives 16 pixels per kHz of frequency (256 pixels/16 kHz). If we now multiply this by the chemical shift of 0.210 kHz (210 Hz) we see that the chemical shift will be 3.4 pixels.
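The same arithmetic as a small function. The 3.3-ppm shift and the 64-MHz, 16-kHz, 256-pixel example values are from the text; the function name and its defaults are our own:

```python
def chemical_shift_pixels(resonant_freq_mhz, bandwidth_khz, matrix=256):
    """Water-fat chemical shift in pixels for a given bandwidth."""
    shift_khz = 3.3e-6 * resonant_freq_mhz * 1000.0  # 3.3 ppm of f0, in kHz
    pixels_per_khz = matrix / bandwidth_khz
    return shift_khz * pixels_per_khz

print(round(chemical_shift_pixels(64.0, 16.0), 1))  # 3.4 pixels at 1.5 T
print(round(chemical_shift_pixels(64.0, 32.0), 1))  # 1.7: doubling bandwidth halves it
```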
The amount of chemical shift in terms of pixels can be reduced by increasing the bandwidth. This works because the chemical shift, 210 Hz, is now a smaller fraction of the image width and number of pixels.
On most MRI systems the water-fat chemical shift (number of pixels) is one of the protocol factors that can be adjusted by the operator. When a different value is selected the bandwidth is automatically changed to produce the desired shift. Even though the chemical-shift artifact can be reduced by using a large bandwidth, this is not always desirable. When the bandwidth is increased more RF noise energy will be picked up from the patient's body and the signal-to-noise relationship will be decreased. Therefore, a bandwidth or chemical-shift value should be selected that provides a proper balance between the amount of artifact and adequate signal-to-noise.
Chapter 36
The Gamma Camera
Many nuclear medicine procedures require an image that shows the distribution of a radioactive substance within the patient's body. For many years nuclear images were obtained by using a rectilinear scanner. Today most imaging is done with the gamma camera. The gamma camera takes a picture of a gamma-emitting radioactive source much like a conventional camera takes a picture of an illuminated object. Not all gamma cameras in use today are identical in design, but most have a number of common features. This chapter considers the general construction, function, and characteristics of a typical gamma camera.
The gamma camera consists of a number of components, as shown in Figure 36-1. Each component performs a specific function in converting the gamma image into a light image and transferring it to an appropriate viewing device or film. The first component, the collimator, projects the gamma image onto the surface of the crystal. The scintillation crystal absorbs the gamma image and converts it into a light image. The light image that appears on the rear surface of the scintillation crystal has a very low intensity (brightness) and cannot be viewed or photographed directly at this stage. The photomultiplier (PM) tube array, which is behind the crystal, performs two specific functions. It converts the light image into an image of electrical pulses, and it amplifies, or increases, the intensity of the image. The electrical pulses from the tube array go to an electronic circuit that creates three specific signals for each gamma photon detected by the camera. One signal is an electrical pulse whose size represents the energy of the gamma photon. The other two signals describe the location of the photon within the image area. Typically, the size of one pulse represents the horizontal position, and the size of the other pulse the vertical position.
The pulse representing the energy of the photon goes to the input of a pulse height analyzer (PHA). (The PHA function is discussed in detail later.) If the pulse
is within the selected window (energy) range, it will pass through the PHA and be
recorded in the computer memory along with the location information. The data
pulses are available to a computer for future processing, viewing, and analysis.
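The event handling described above can be sketched as follows. The energy window values are illustrative assumptions (roughly a 20% window around the 140-keV Tc-99m photopeak); only the positions of accepted events are recorded:

```python
def record_events(events, window=(126.0, 154.0)):
    """Keep the (x, y) positions of (energy_keV, x, y) events whose
    energy pulse falls inside the PHA window."""
    lo, hi = window
    return [(x, y) for energy, x, y in events if lo <= energy <= hi]

events = [(140.0, 12, 30),   # photopeak event: accepted
          (98.0, 40, 41),    # scattered photon, lower energy: rejected
          (150.0, 5, 9)]     # within the window: accepted
print(record_events(events))  # [(12, 30), (5, 9)]
```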
CAMERA CHARACTERISTICS
To use a gamma camera properly for various types of examinations, one must
be familiar with its imaging characteristics. In some instances, it is desirable to alter the characteristics of the camera to fit the examination being conducted. After considering the significance of basic camera characteristics, we will see how they depend on the various components that make up the camera system.
Sensitivity
In a typical imaging situation, only a small fraction of the gamma photons emitted by the radioactive material contribute to the formation of the image. Consider
the situation illustrated in Figure 36-2. The photons leave the small radioactive
source equally distributed in all directions. The only photons that contribute to the
image are the ones passing through the appropriate collimator hole and absorbed
in the crystal. Photons from the source that are not absorbed in the crystal are, in
effect, wasted and do not contribute to image formation. This characteristic of a
gamma camera is generally referred to as the sensitivity. The sensitivity of a camera can be described in terms of the number of photons detected and used in the
[Figure 36-2: Photons leave the source in all directions; only those passing through a collimator hole and absorbed in the crystal contribute to the camera image]
sensitivity. The problem is that a collimator that yields maximum sensitivity usually produces maximum image blur. The compromise between these two factors is
discussed in detail in a later chapter.
The thickness of a scintillation crystal has an effect on detector efficiency. Detector efficiency and camera sensitivity are reduced when photons pass through the crystal. Therefore a thick crystal tends to yield higher sensitivity, especially for
positioned with respect to the photon energy spectrum can significantly reduce
camera sensitivity.
Many cameras have a short dead time after each photon is detected during
which an arriving photon is not counted. Dead time reduces sensitivity when the
Field of View
systems, the distance between the object being imaged and the camera crystal.
COLLIMATORS
surface of the body. In effect, each point of the crystal is able to see only the
radiation originating from a corresponding point on the patient's body. Although
the illustration shows only a few holes in the collimator, actual collimators contain
hundreds of holes located very close together in order to see all points within the
FOV. The one exception is the pin-hole collimator, which is discussed later.
The differences among the collimators are the thickness, number, and size of
the holes and the way they are arranged or oriented. This, in turn, has an effect on
the camera sensitivity, FOV image magnification, and image blur. The user must
be aware of these differences in order to select the best collimator for a given
examination.
When selecting a collimator, it is necessary to consider the energy of the gamma
relatively thin septa are adequate. The advantage of thin septa is that more holes
can be located in a given area, and this results in a higher sensitivity. However,
thicker septa must be used with high-energy photons in order to prevent photons
from crossing over from one hole to another.
[Figure: A source of high-energy photons imaged with a low-energy collimator (septal penetration) and with a high-energy collimator (sharp image)]
Parallel-Hole Collimators
The parallel-hole collimator passes only those photons traveling in directions parallel to the holes. Assuming there is no photon absorption between the source
and collimator, the number of these parallel photons does not change significantly
with the source-to-camera distance. Therefore, camera sensitivity with a parallel-
hole collimator is generally not affected by changing the distance between the
source and camera. Note that the inverse-square effect does not occur with a collimator of this type.
Diverging Collimators
In the diverging collimator, the holes fan out from the surface of the crystal, as shown in Figure 36-6. With this arrangement of holes, the camera can image a
source larger than the crystal. The FOV increases with distance from the face of
the collimator. The major advantage of the diverging collimator is the increased
FOV. The rate at which the FOV increases with distance depends on the
angulation of the holes. For a typical diverging collimator, the FOV at a distance
of 15 cm is approximately 1.6 times the FOV at the collimator surface.
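Taking the text's figure of 1.6 times the surface FOV at 15 cm and assuming, for illustration only, that the FOV grows linearly with distance:

```python
def diverging_fov(fov_at_surface_cm, distance_cm):
    """FOV of a diverging collimator at a given distance, interpolated
    linearly from the 1.6x-at-15-cm figure (illustrative assumption)."""
    return fov_at_surface_cm * (1.0 + 0.6 * distance_cm / 15.0)

print(round(diverging_fov(25.0, 15.0), 1))  # 40.0: 1.6 x a 25-cm surface FOV
```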
With the diverging collimator, the image at the crystal surface is smaller than
the actual size of the radioactive source. For a given collimator, the degree of
minification increases with the distance between the source and collimator face.
The change in magnification with distance can produce distortion in the image,
because objects close to the camera are minified less than objects located at a
greater distance from the camera surface. For example, two identical lesions will
appear to have different sizes if they are not located at the same distance from the
camera.
[Figure 36-6: The diverging collimator produces an image smaller than the object]
Converging Collimators
The holes in the converging collimator are arranged so that they point to, or converge on, a point located in front of the collimator, as shown in Figure 36-7.
This is, in effect, the reverse arrangement of the diverging collimator. In fact,
some collimators are reversible so that they can be used as either a diverging or a converging collimator. The FOV of the converging collimator decreases with increased distance from the collimator face. The converging
collimator produces image magnification. The degree of magnification depends
on the design of the collimator and the distance from the collimator surface. As a
radioactive source is moved away from the collimator, it comes into the view of
more collimator holes, and this produces an increase in sensitivity. The sensitivity
increases approximately as the square of the distance from the collimator. Because
of its magnification and sensitivity properties, the converging collimator is useful
for imaging small organs, such as the thyroid gland, kidneys, and heart. However,
Pin-Hole Collimators
In the pin-hole collimator, radiation from the source passes through a single small hole. This creates an image of the source on the crystal surface.
With this type of collimator, the orientation of the image at the crystal is inverted
with respect to the source. The FOV of a pin-hole collimator is very dependent on
the distance between the source and the collimator. When the source is located as
far in front of the collimator as the crystal is behind the collimator, the FOV is
equal to the size of the crystal. If the source is located closer, the image will be
magnified. The degree of magnification increases as the source approaches the
collimator. Because it has only one hole, the sensitivity of the pin-hole collimator
The Gamma Camera 537
is obviously less than for typical multihole collimators. It also decreases as the
distance between the source and pin-hole is increased. In many cameras, the pin-hole
can be changed. A large hole gives more sensitivity, but also more blur.
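The magnification and sensitivity behavior described above can be sketched with simple pinhole-camera geometry. The function names and the distances used here are illustrative assumptions, not values from the text.

```python
def pinhole_magnification(source_dist_cm, pinhole_to_crystal_cm):
    """Image magnification of a pin-hole collimator: the ratio of the
    pinhole-to-crystal distance to the source-to-pinhole distance
    (basic pinhole-camera geometry)."""
    return pinhole_to_crystal_cm / source_dist_cm

def relative_pinhole_sensitivity(source_dist_cm, hole_diameter_mm):
    """Relative sensitivity, proportional to the hole area and falling
    off with the square of the source-to-pinhole distance (approximate)."""
    return (hole_diameter_mm ** 2) / (source_dist_cm ** 2)

# Source as far in front of the pinhole as the crystal is behind it:
# magnification is 1, so the FOV equals the crystal size.
print(pinhole_magnification(20.0, 20.0))   # 1.0
# Moving the source closer magnifies the image.
print(pinhole_magnification(10.0, 20.0))   # 2.0
```

Doubling the source distance quarters the relative sensitivity in this model, consistent with the single-hole falloff described above.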
CRYSTALS
As in any scintillation detector system, the crystal in the gamma camera has two
basic functions: (1) to absorb the gamma photons and (2) to convert the gamma
image into a light image. Crystals used for this purpose are typically in the form of
disks. Both dimensions, diameter and thickness, have an effect on the characteristics
of the camera. The diameter of the crystal establishes the basic FOV, which is
then modified by the type of collimator used and the distance between the camera
and the source being imaged. The thickness of the crystal affects sensitivity and image blur.
The photomultiplier (PM) tubes are generally arranged in a hexagonal array; the
number that will completely fill a circular area depends on the relative diameters
of the area and the PM tubes. Specific numbers of PM tubes uniformly fill a given
538 Physical Principles of Medical Imaging
circular area: 7, 19, 37, 61, 91, etc. The first gamma camera used seven PM tubes.
Throughout the evolution of the camera, both the size of the array and the number
of PM tubes have gradually increased.
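The sequence 7, 19, 37, 61, 91 is the series of centered hexagonal numbers. A short sketch of how it arises from adding successive rings of tubes around a central tube:

```python
def pm_tubes_in_hex_array(rings):
    """Number of equal-diameter PM tubes that uniformly fill a circular
    area when packed hexagonally: one central tube plus 6*r tubes in
    each surrounding ring r (the centered hexagonal numbers)."""
    return 1 + sum(6 * r for r in range(1, rings + 1))

# 1 ring -> 7 tubes, 2 -> 19, 3 -> 37, 4 -> 61, 5 -> 91
print([pm_tubes_in_hex_array(r) for r in range(1, 6)])  # [7, 19, 37, 61, 91]
```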
In addition to converting the light from the crystal into electrical pulses and
amplifying the pulses, the PM tube array also detects where each gamma photon is
absorbed in the crystal. This information is necessary to transfer the image from
the crystal to the viewing unit. The manner in which the location of a photon
interaction is measured is illustrated in Figure 36-9. Assume that a gamma photon
is absorbed in the crystal at the location shown. The light spreads throughout the
crystal and light pipe and is viewed by a number of PM tubes. The brightness of
the scintillation, as seen by a specific PM tube, depends on the distance between
the PM tube and the scintillation. In the illustration, PM tube B is closest and sees
the brightest scintillation and receives the most light from it. It responds by producing
a relatively large electrical pulse. PM tube C receives less light and produces
a correspondingly smaller electrical pulse. Because it is even farther away
from the scintillation, PM tube A produces an even smaller electrical pulse. In
other words, when one photon is absorbed by the crystal, a number of PM tubes
around the specific point see the light and produce electrical pulses. The relative
Figure 36-9 Three of the PM Tube Pulses Generated by the Capture of a Single Photon
size of the pulses from the various PM tubes represents the location of the scintillation,
or gamma photon, within the image area.
IMAGE FORMATION
The gamma camera must take the pulses from the PM tube array and use them
to form an image. This function is performed by an electronic circuit. The first
function of this circuit is to take all of the electrical
pulses created by a single
photon interaction and use them to calculate, or determine, the location of the
interaction within the image area. The circuit then produces two new pulses that
describe the location of the photon. The amplitude of one pulse represents the
location of the photon in the horizontal (H) direction, and the amplitude of the
other pulse specifies the vertical (V) location.
A second function of the circuitry is to combine all of the PM tube pulses into
one electrical pulse whose amplitude
represents the energy of the photon. This
pulse is passed on to a pulse height analyzer (PHA) in the viewing unit.
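A minimal sketch of this position-and-energy arithmetic: each PM tube signal is weighted by the tube's position to give the H and V values, and the signals are summed to give the energy pulse. This is a simplified stand-in for the actual camera circuitry, and the tube coordinates and amplitudes are hypothetical.

```python
def anger_position(tube_positions, pulse_amplitudes):
    """Estimate the (H, V) location of a scintillation as the
    amplitude-weighted centroid of the PM tube signals, and the energy
    signal as their sum (simplified Anger-camera arithmetic)."""
    energy = sum(pulse_amplitudes)
    h = sum(x * a for (x, _), a in zip(tube_positions, pulse_amplitudes)) / energy
    v = sum(y * a for (_, y), a in zip(tube_positions, pulse_amplitudes)) / energy
    return h, v, energy

# Three tubes (A, B, C) along a line; tube B, closest to the
# scintillation, produces the largest pulse.
tubes = [(-5.0, 0.0), (0.0, 0.0), (5.0, 0.0)]
h, v, e = anger_position(tubes, [10.0, 60.0, 30.0])
print(h, v, e)  # the centroid lies between tubes B and C
```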
The function of the viewing unit is the formation of a visible image from electrical
pulses. Most conventional viewing units create the image on the screen of a
cathode ray tube (CRT), more commonly known as a picture tube, especially
when found in television equipment. In a CRT the image is formed on the screen
by a small electron beam striking the screen from the rear. The image is actually
created by controlling the position of the electron beam. When the electron beam
strikes the CRT screen, which is made of a fluorescent material, it creates a small
spot of light. When the two position pulses arrive at the CRT, they are used to
position the electron beam to the appropriate location on the CRT screen. If the
energy pulse is within the appropriate range to pass through the PHA, it is also
directed to the CRT. When this pulse arrives at the CRT, it turns on the electron
beam, momentarily causing a small spot of light to be formed on the CRT screen.
This spot of light is, in effect, the image of a single gamma photon coming from
the patient's body. This process is repeated for each photon accepted by the
gamma camera. Many CRTs can store or maintain each light spot while the total
image is being formed.
A viewing unit generally has an intensity control that can be used to adjust the
brightness of each spot. This control also adjusts the film exposure when the image
is transferred from the CRT screen to film.
SPECTROMETRY
present and enter the detector, as shown in Figure 36-10. Also, some of the radiation
from the primary source undergoes Compton interactions with materials outside
the source volume. This produces scattered radiation, which can enter the
detector. If an imaging system responds to this scattered radiation, the resulting
image will include areas around the primary source. This distorts the image and
makes it impossible to determine the actual size, shape, and activity of the primary
source organ or lesion. Radiation from other sources might also be present. Cosmic
radiation and naturally occurring radioactive nuclides produce background radiation that can reach the detector
and introduce errors into the counting of radioactive samples. Occasionally, two
radioactive materials are administered to a patient, and the system must selectively
respond to each source at the appropriate time.
An imaging or counting system can be made selective by adding an energy
spectrometer after the detector and amplifier, as shown in Figure 36-10. The spectrometer
is actually a PHA that works with the electrical pulses produced by the
detector. The purpose of the PHA is to allow pulses created by the desired (primary)
source of radiation to pass on to the counting and imaging devices and to
reject the pulses associated with other sources of radiation. The user must always
adjust the controls of the PHA to ensure the proper selection of pulses.
Figure 36-10 Radiation from the Primary Source, Scatter, Characteristic X-Rays, Other Sources, and Background Entering the Detector and Counter
Figure 36-11. Remember that pulse size now represents photon energy. The
spectrum shown here is the spectrum of a monoenergetic gamma emitter as seen
"through the eyes of' an ideal detector system. Unfortunately, real detector sys¬
tems do not produce a pulse size spectrum that precisely represents the
photon
energy spectrum. The various factors that affect the pulse size spectrum will now
be considered.
Statistical Fluctuations
The various events taking place between the absorption of the gamma photon
and the formation of the electrical pulse are illustrated in Figure 36-12. A gamma
Figure 36-11 The Pulse Spectrum That Would Be Produced by a Monoenergetic Radiation Source and an Ideal Detector
Figure 36-12 Events between the Absorption of a Gamma Photon and the Formation of the Electrical Pulse
photon is absorbed in the crystal and creates a cluster of light photons. There is
always variation in the number of light photons created by a specific gamma energy.
Also, all of the light photons associated with one scintillation are not necessarily
absorbed by the photocathode of the PM tube; some are absorbed within the
crystal itself. The number absorbed within the crystal is influenced, to some extent,
by the location of the scintillation within the crystal. The number of electrons
emitted from the photocathode by a cluster of light photons is also subject to statistical
fluctuation. The number of electrons also fluctuates at each of the dynodes.
The cluster of electrons (electrical pulse) varies in size because of the combined
effect of the various fluctuations. A series of typical pulses and the resulting spectrum
are shown in Figure 36-13.
The variation in pulse size causes the pulse spectrum to assume the form of a
broadened peak rather than a narrow line. An important characteristic of a detector system is its energy resolution.
Figure 36-13 Pulse Spectrum Produced by a Monoenergetic Radiation Source and a Typical Detector
pulse amplifier. Severe loss of energy resolution capability can result from conditions
such as a fractured crystal or inadequate light transmission between the crystal
and PM tube.
Poor energy resolution, or high FWHM values, means that the pulse size associated
with the monoenergetic photons from the primary source varies considerably.
This makes it difficult for the PHA to separate these pulses from the pulses arising
from other radiation sources.
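Energy resolution is conventionally quoted as the photopeak FWHM expressed as a percentage of the photopeak energy; for a roughly Gaussian peak, FWHM = 2.355 times the standard deviation. A sketch with an assumed standard deviation of 6 keV at 140 keV (an illustrative value, not a detector specification):

```python
import math

def fwhm_from_sigma(sigma):
    """For a Gaussian-shaped photopeak, FWHM = 2*sqrt(2*ln 2)*sigma
    (about 2.355 * sigma)."""
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

def energy_resolution_percent(fwhm_kev, photopeak_kev):
    """Energy resolution: photopeak FWHM as a percentage of the
    photopeak energy."""
    return 100.0 * fwhm_kev / photopeak_kev

fwhm = fwhm_from_sigma(6.0)                              # about 14.1 keV
print(round(energy_resolution_percent(fwhm, 140.0), 1))  # about 10.1 %
```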
The peaked spectrum shown in Figure 36-13 results from the complete absorption
of the gamma photons in the crystal by the photoelectric process. It is therefore
often referred to as the photopeak portion of the spectrum. The following
sections discuss other portions of the pulse spectrum created by other interactions.
Compton Scatter
When a gamma photon is engaged in a Compton interaction with a material, it
both loses energy at the site of interaction and changes direction, as discussed in
Chapter 10. Compton interactions can take place between the photon and the material
containing the radioactive source, the detector crystal, or material located
between the source and crystal.
In radionuclide imaging procedures, a significant number of Compton interactions
usually occur in the tissue surrounding the radioactive material. If these scattered
photons are included in the image, the image will not be a true representation
of the distribution of radioactive material. It is therefore desirable to exclude the
scattered photons from the imaging process. This can be achieved, to some extent,
because their energy is different from the energy of the primary photons.
For a given primary energy, such as 140 keV, the energy of the scattered photon
depends on the angle of scatter. Scatter that takes place within the body adds a
component to the spectrum, as shown in Figure 36-14. Photons that scatter in the
forward direction (directly toward the detector) lose very little energy in the scattering
interaction and have energies very close to 140 keV. The statistical
fluctuation within the detector causes some of these to appear to have energies
greater than 140 keV. The fluctuations within the detector system cause the overlap
between the scatter component and the photopeak of the spectrum. Photons
that scatter in the backward direction (180°) have the lowest energy. For 140-keV
primary photons, complete backscatter produces 90-keV photons. This means that
the scattered radiation produced by a 140-keV primary source has photon energies
ranging from 90 keV to 140 keV. However, some photons may undergo two or
more Compton interactions before leaving the body, and this creates some photons
with energies well below 90 keV. The exact shape of the scatter portion of the
spectrum and its amplitude relative to the photopeak depend on a number of factors,
especially the thickness of tissue covering the radioactive material.
Photopeak
Figure 36-14 Spectrum Component Produced by Compton Interactions within the Body
If Compton interactions take place within the detector crystal, a different spectrum
component is created. The spectrum as seen "through the eyes of" the detector
represents energy deposited within the detector. If a 140-keV photon undergoes
a single Compton interaction in the crystal, the maximum energy it can
deposit is 50 keV. This occurs when the photon is scattered back out of the crystal
(180°) and carries an energy of 90 keV. The energy deposited in the crystal (50
keV) is the difference between the primary photon energy (140 keV) and the scattered
photon energy (90 keV). Photons that scatter in a more forward direction
have higher energies and therefore deposit less energy in the crystal. The high-energy
side of this spectrum component is known as the Compton edge.
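The 90-keV backscatter energy and the 50-keV Compton edge quoted above follow from the Compton scattering formula and can be checked numerically:

```python
import math

def compton_scattered_energy_kev(e_kev, angle_deg):
    """Energy of a photon after Compton scattering through the given
    angle, using an electron rest energy of 511 keV."""
    return e_kev / (1.0 + (e_kev / 511.0) * (1.0 - math.cos(math.radians(angle_deg))))

e0 = 140.0
backscatter = compton_scattered_energy_kev(e0, 180.0)  # about 90 keV
compton_edge = e0 - backscatter                        # about 50 keV deposited
print(round(backscatter), round(compton_edge))  # 90 50
```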
Characteristic X-Rays
When gamma photons interact with materials with relatively high atomic numbers,
photoelectric interactions can occur. A photoelectric interaction removes an
electron from an atom and creates a vacancy in one of the shell locations. When
the vacancy is refilled, a characteristic x-ray photon is often produced. The energy
of the characteristic photon is essentially the same as the binding energy of the K-
shell electrons. This type of interaction and the resulting characteristic x-ray photons
can produce distinct components in the spectrum.
If the interaction occurs in a material other than the crystal, the spectrum component
corresponds to the energy of the characteristic x-ray. In many procedures,
a lead collimator is located between the radioactive source and detector crystal.
If a photoelectric interaction of a 140-keV photon in the iodine of the crystal produces
a 28-keV x-ray photon that escapes from the crystal, the energy deposited is
the difference between these two, or 112 keV. In other words, the detector sees
only 112 keV rather than 140 keV. This gives rise to a spectrum component generally
referred to as an iodine escape peak. The energy of an iodine escape peak is
always 28 keV below the photopeak energy.
Background
No facility is completely free of background radiation. Sources of background
radiation include cosmic radiation, naturally occurring radioactive nuclides in
Figure 36-15 Spectrum Component (Escape Peak) Produced by X-Ray Photons Leaving the Crystal
A spectrometer is, in general, a device that allows the operator to select and use
a specific portion of a spectrum. The type of spectrometer used in most nuclear
medicine systems is a PHA. The PHA is located between the detector and the
counting or imaging components of the system. The pulses from the detector must
pass through the PHA in order to contribute to the image or counting data. The
basic characteristic of a PHA is that it can be set to permit only pulses of a specific size range to pass through.
The baseline control sets the minimum pulse amplitude that will pass through
the PHA. The window control sets the range of pulse amplitudes that will pass
through. Window controls are usually calibrated in either pulse height units, like
the baseline control, or as a percentage of the pulse height scale. In the example
shown in Figure 36-17, the baseline is set at 60 pulse height units and the window
has a width of 20 units. The only pulses that can pass through the analyzer with
this setting are those within the range of 60 units (120 keV) and 80 units (160
keV).
Let us now consider the action of the PHA with regard to the three pulses shown
in Figure 36-17. The 50-unit pulse (100 keV) is below the baseline setting and is
blocked by the analyzer. The 70-unit pulse (140 keV) is well within the range of
acceptable pulse sizes established by the baseline and window and therefore
passes through the analyzer. Since the 90-unit pulse (180 keV) is above the top of
the window, it is also blocked by the analyzer.
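The baseline-and-window logic for the three example pulses can be sketched as follows; the 2-keV-per-unit calibration is taken from the example above.

```python
def pha_passes(pulse_height, baseline, window):
    """True if a pulse passes a pulse height analyzer set with the
    given baseline and window width (all in pulse height units)."""
    return baseline <= pulse_height <= baseline + window

# Baseline 60 units, window 20 units: the acceptance range is 60-80
# units, i.e. 120-160 keV at 2 keV per unit.
for pulse in (50, 70, 90):
    print(pulse, pha_passes(pulse, baseline=60, window=20))
# 50 blocked, 70 passes, 90 blocked
```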
PHA settings should be considered in relation to the photon energy spectrum, as
illustrated in Figure 36-18. In effect, the baseline and window settings determine
the portion of the spectrum that will be used for imaging or data collection. The
window is generally positioned over the desired portion of the spectrum, such as
the photopeak. The area under the spectrum curve that falls within the window
(shaded area) represents the relative number of photons that are being collected
and used. A wide window setting, which encompasses more of the spectrum, pro¬
duces an increase in the rate at which photons are counted. With a wide window,
an image can be formed faster, or a certain number of counts can be collected in a
shorter period. The problem with increasing window width is that it decreases the
Window
Figure 36-18 PHA Window Positioned To Select the Photopeak from the Other
Components of the Spectrum
ability to discriminate between the desirable and undesirable portions of the spectrum.
In many situations, most of the undesirable portions of the spectrum are at energies
below the desirable portion, or photopeak. Good data collection can be
achieved by carefully positioning the baseline and opening the window to include
all energies above the baseline setting. This is referred to as integral counting.
Chapter 37
Radionuclide Image Quality
CONTRAST
Contrast can exist when the radiation from an object is either more or less than
the radiation from the background area. If the radiation from the object is greater
than from the background, the object is commonly referred to as being "hot," and
if it is less, it is referred to as being "cold" (eg, a nodule).
Figure 37-1 illustrates the three stages in the imaging process. The object within
the patient has a certain amount of inherent contrast with respect to its surrounding
being imaged.
Since the energy of scattered photons is less than the energy of the photons
coming directly from the radioactive source, it is possible to produce some separation
with a PHA. The difference between the energy of a scattered photon and the
primary radiation depends on two factors: the angle of scatter and the initial energy
of the primary radiation. For example, a 140-keV photon scattered at an angle
of 90° has an energy of approximately 110 keV. This is only about a 20% difference between the
scattered and primary radiation. If we assume that one half of the scatter occurs at
angles less than 90°, much of the scattered radiation is very close in energy to the
primary radiation.
As discussed in Chapter 36, a scintillation detector system produces a certain
amount of energy spreading, or loss of energy resolution. This spreading and the
relatively small energy difference between scatter and primary radiations make it
impossible in many cases to remove all of the scattered radiation from an image.
Radionuclide Image Quality 553
Figure 37-2 shows a typical photon energy spectrum for both the direct and the
scattered radiation. By carefully positioning the PHA window, it is usually possible
to exclude a significant amount of the scattered radiation from the image.
Figure 37-3 compares two images of a radioactive object. In the one on the left, the
amount of scattered radiation was reduced by using a PHA.
Figure 37-2 The Use of the PHA Window To Exclude Scattered Radiation from the Image
Figure 37-3 Radioactive Object Imaged without and with Scattered Radiation Present
depends very heavily on their relative energies. The problem can be appreciated
by considering the photon energy spectra of the two nuclides shown in Figure
37-4. If there is a sufficient energy difference between the two photon peaks, it
will be possible to set the PHA window to image nuclide B. The problem arises
when we attempt to image nuclide A. Because the photopeak energy of nuclide A
is less than that of nuclide B, it might coincide with a significant amount of scattered
radiation from nuclide B.
Blur
Figure 37-5 Profile of the Blurred Image of a Small Object (Radioactive Source)
A question arises as to which dimension should be used to express the size of the blur pattern. The
common practice is to use the diameter of a circle located at one half of the maximum
intensity. With respect to the profile, this corresponds to the full width of the
profile at one half of its maximum height. This blur value is generally expressed in
millimeters and is the FWHM. This is also the name of the parameter used to
express radiation detector energy resolution, but the two entirely different applications
of the term FWHM should not be confused.
The image profile of a point object, such as in Figure 37-5, is generally designated
as a point spread function (PSF). It is usually easier to measure the image
spread, or blur, of a source that is in the form of a thin line, ie, a small tube filled
with radioactive material. The profile obtained in this manner is known as a line
spread function (LSF). In either case, the blur width is expressed in terms of the
FWHM.
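A numerical FWHM can be estimated from a sampled LSF by interpolating the positions where the profile crosses half of its maximum. The triangular profile below is a hypothetical example, not measured data.

```python
def fwhm_mm(positions_mm, profile):
    """Estimate the FWHM of a sampled line spread function by linear
    interpolation of the half-maximum crossings."""
    half = max(profile) / 2.0
    crossings = []
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if (a - half) * (b - half) < 0:  # profile crosses half-maximum here
            frac = (half - a) / (b - a)
            crossings.append(positions_mm[i] + frac * (positions_mm[i + 1] - positions_mm[i]))
    return crossings[-1] - crossings[0]

# A hypothetical triangular LSF peaking at 0 mm:
xs = [-6, -4, -2, 0, 2, 4, 6]
lsf = [0, 10, 30, 40, 30, 10, 0]
print(fwhm_mm(xs, lsf))  # 6.0 mm
```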
Because blur tends to spread image points, it can make it difficult to resolve, or
separate, small objects or features that are located close together. Because of this,
the term resolution is often used to describe the blur characteristics of an imaging
system. This results from the common practice of using resolution test objects to
measure blur. An image of a typical test object used for this purpose is illustrated
in Figure 37-6. The object consists of a series of lead strips that are placed over a
large uniform source of radiation. In the four sections of the test object, the width
and separation distance between the lead strips vary from section to section.
Each section is characterized by the number of line pairs (one lead strip and one
space) per centimeter.

Figure 37-6 Image of a Test Object Used To Measure the Resolving Ability (Amount of Blur) of a Gamma Camera

The blur is measured by imaging the test object and determining the closest strips that can be resolved. The approximate relationship between resolution and blur is
The equipment user should be familiar with the sources of blur described below so that blur can be minimized as
much as possible.
Motion
Motion of the patient during the imaging procedure is an obvious source of blur.
The amount of blur is equal to the distance that each point within the object moves
during the time the image is actually being formed.
Two kinds of blur are introduced by the gamma camera itself: intrinsic blur and
collimator blur. The distinction between them can be made by referring to Figure
37-7.
Intrinsic Blur
Consider an image point that has been formed within the camera crystal. (For
the moment we are assuming that the gamma photons from a point source have all
been absorbed in the crystal at this point.) The light spreads as it moves from the
image point to the surface of the crystal. This spreading, or diffusion, of light
within the crystal causes the image at the crystal surface to be blurred. The amount
of blurring introduced by the crystal is more or less proportional to crystal thickness.
This is similar to what happens in intensifying screens. The selection of a
crystal thickness involves a compromise between blur and camera sensitivity (detector
efficiency). A thick crystal captures more photons, and therefore increases
camera sensitivity, but it also increases the amount of intrinsic blur. Thin crystals,
which reduce image blur, can be effectively used with radionuclides that emit
relatively low energy photons.
The light image from the crystal is transferred electronically to the viewing
device. The inability of the electronic circuitry to position precisely each image
Figure 37-7 The Two Basic Components of Gamma Camera Blur: Intrinsic Blur and Collimator Blur
At the present time, gamma cameras have intrinsic blur values in the approximate
range of 3 mm to 6 mm when measured at the crystal surface. The amount of
intrinsic blur in gamma cameras has been reduced over the years, with improvements
in both the crystals and electronic circuits.
When considering the blur value for a specific camera application, one must
consider the location of the object being imaged. The degree of image quality
usually depends on the relationship of the amount of blur to the size of the object.
Therefore, the amount of blur must be considered not only at the crystal surface
but at the location of the object. If a parallel-hole collimator is used, the value of
the intrinsic blur will be the same for an object located at any distance from the
camera. If either a diverging, converging, or pin-hole collimator is used, the
amount of intrinsic blur projected to the object location will depend on the distance
between the object and the camera surface. This is illustrated in Figure 37-8.
With a diverging collimator, the intrinsic blur, projected to the object location, increases with an increase in the distance between the object and
the camera. This occurs because a diverging collimator minifies the image and
minification increases with distance between the object and camera. Therefore, as
the object is moved away from the camera surface, the image becomes smaller,
and the ratio of intrinsic blur to apparent object size increases. For reasons discussed
later, this effect is represented as an increase in blur with respect to object
size.

Figure 37-8 Projection of Intrinsic Blur to the Object Location for Diverging and Converging Collimators
If a converging collimator, which produces image magnification, is used, the
intrinsic blur with respect to object size decreases as the object is moved away
from the camera surface.
In summary, the amount of intrinsic blur with respect to object size depends on
whether the image is magnified or minified by the collimator. Minification increases
the effective blur, whereas magnification reduces it.
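This summary can be expressed as a simple ratio, assuming the intrinsic blur at the crystal is projected back to the object by dividing by the image magnification. This is a simplified model, and the blur and magnification values are illustrative.

```python
def effective_intrinsic_blur_mm(crystal_blur_mm, magnification):
    """Intrinsic blur projected back to the object, relative to object
    size: minification (magnification < 1) increases the effective
    blur, magnification (> 1) reduces it (simplified model)."""
    return crystal_blur_mm / magnification

print(effective_intrinsic_blur_mm(4.0, 1.0))   # parallel holes: 4.0 mm
print(effective_intrinsic_blur_mm(4.0, 0.5))   # minified image:  8.0 mm
print(effective_intrinsic_blur_mm(4.0, 2.0))   # magnified image: 2.0 mm
```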
Collimator Blur
The purpose of the collimator is to "focus" the radiation from each point of the
object to a corresponding point on the crystal. Because of the finite size of the
collimator holes, no image point in the crystal can correspond to a single object
point, as shown in Figure 37-7. This causes the gamma photon image that is focused
onto the crystal by the collimator to be blurred. This effect is generally
designated collimator blur.
It is best to analyze collimator blur by starting from an image point within the
crystal, as shown in Figure 37-7. In the ideal, no-blur, imaging system, the field of
view (FOV) from the image point would be limited to a corresponding object
point of equal size. However, with an actual collimator, the FOV from a single
image point is much larger than the corresponding object point. From the standpoint
of an image point within the crystal, this causes each object point to look
larger than it really is, or, in other words, to be blurred. If an object point (small
source) is located at the face of the collimator, it appears to be the same size as the
collimator hole; if the object point is moved away from the face of the collimator,
as shown in Figure 37-7, it appears to become even larger, or more blurred. This
effect should not be confused with the minification and magnification produced by
diverging and converging collimators.
Three factors, the hole diameter, the hole length (collimator thickness), and the distance of the object point from the collimator face, determine the FOV from a point on the crystal through a single
collimator hole. To the camera, a small point source appears to be the size of the
FOV through a single hole. Figure 37-9 compares the blur produced by collimators
with different hole sizes. As the collimator thickness is increased, the single-hole
FOV, or blur, is decreased. Figure 37-9 also shows that a reduction in collimator
hole size (diameter) reduces blur at a specific object location.
When selecting a collimator for a specific application, consideration must be
given to the compromise between blur and sensitivity. Design factors that decrease
blur, such as decreased hole diameter and increased hole length (collimator
thickness), also reduce detector efficiency and camera sensitivity. This relationship
is shown in Figure 37-10 for some typical collimators. Collimators are often
placed into general categories by virtue of their blur/sensitivity characteristics.
Names for the various categories are usually chosen that emphasize the positive
IMAGE NOISE
In nuclear imaging, the major source of noise is the random distribution of photons
over the surface of the image. A gamma camera image typically contains
much more noise than a conventional x-ray image. This is because nuclear images
Figure 37-10 General Relationship between Gamma Camera Sensitivity and Collimator Blur
Figure 37-11 Relationship of Gamma Camera Blur to the Distance between the Radioactive Object and the Collimator
are generally formed with fewer photons than x-ray images. The amount of noise,
or variation in photon concentration, is inversely related to the number of photons
used to form the image. The noise in an image can be reduced by increasing the
number of photons used in the imaging procedure.
The relationship between image noise and photon concentration, or count density,
is illustrated in Figure 37-12. A series of circular areas are drawn across the
surface of an image. The size of the area that should be used to analyze image
noise is approximately equal to the camera blur size (FWHM). For the purpose of
this discussion, an area size of 1 cm2 is used. A camera with a large blur value
tends to produce an image with less noise because the blur, in effect, averages or
blends the photons together over a larger area. The image section shown in Figure
Figure 37-12 Area-to-Area Variation in Photon Concentration for Two Average Count Densities
The upper count-density profile in Figure 37-12 shows the variation in count
density when the average is 100 counts per cm2. The standard deviation of the
count densities can be used to assign a numerical value to the noise. In our earlier
discussion of image noise (Chapter 21), we showed that an average of 100 counts
(photons) had a standard deviation of 10 counts, or 10%. Therefore, when the
count density is 100 counts per area, the standard deviation noise level is 10%.
The lower profile shows the area-to-area variation when the average count density
is 1,000 counts per cm2. In this case, the standard deviation (the square root of
1,000) is 32 counts, or 3.2%. This demonstrates that as the number of photons used
to form an image (count density) is increased, the relative area-to-area variation in photon
concentration, or noise, decreases.
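The 10% and 3.2% noise levels quoted above follow directly from Poisson counting statistics:

```python
import math

def noise_percent(counts_per_area):
    """Relative noise for Poisson-distributed counts: the standard
    deviation sqrt(N) expressed as a percentage of the mean N."""
    return 100.0 * math.sqrt(counts_per_area) / counts_per_area

print(round(noise_percent(100), 1))    # 10.0 % at 100 counts per area
print(round(noise_percent(1000), 1))   # 3.2 % at 1,000 counts per area
```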
Lesion Visibility
Image noise degrades the overall quality of an image and makes it difficult to
detect certain lesions. Let us consider the situation shown in Figure 37-13. Assume
we have a lesion that has sufficient radioactive uptake to produce 20% contrast.
If an imaging system that is completely noise-free is used, the lesion will be
readily visible against a smooth background, illustrated by the upper graph. If, on
the other hand, the image is produced with a gamma camera that collects, on the
average, 100 counts per area, the situation changes rather dramatically. The ratio
of the lesion contrast to the noise level will be only 2:1. It is generally considered
that a lesion contrast-to-noise ratio of at least 4:1 is necessary in nuclear medicine
to ensure reliable lesion detection. In this case, detectability can be improved sim¬
ply by collecting more counts to form the image, so that it has a lower noise level.
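The contrast-to-noise comparison can be expressed as a quick check. The 400-counts-per-area figure below is my extrapolation from the 4:1 rule, not a value from the text:

```python
import math

def contrast_to_noise(contrast_percent: float, counts_per_area: float) -> float:
    """Ratio of lesion contrast to the statistical noise level.
    Noise (percent) is 100/sqrt(N) for an average of N counts per area."""
    noise_percent = 100.0 / math.sqrt(counts_per_area)
    return contrast_percent / noise_percent

# 20% lesion contrast imaged at 100 counts per area -> ratio 2:1
print(contrast_to_noise(20, 100))   # 2.0

# At 400 counts per area the noise drops to 5%, giving the 4:1 threshold
print(contrast_to_noise(20, 400))   # 4.0
```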
UNIFORMITY
A gamma camera should have a uniform sensitivity over the entire image area. A non-uniform sensitivity generally occurs when the various PM tube outputs are not properly balanced. Most modern gamma cameras use special circuits, usually containing microprocessors, to correct for the inherent non-uniformity in the detector array.
A gamma camera should be checked periodically to determine if it has a uniform sensitivity response. This procedure requires a source of radiation that will cover the entire crystal area with a uniform intensity. This can be achieved in two ways.
Figure 37-13 Relationship of Lesion Contrast to Background, Both with and without
Noise
One method is to use a sheet, or flood, source of radioactivity that is mounted on the face of the camera, usually with a collimator in place. Another method is to use a small radioactive source located at least 1.5 m from the face of the camera. With this method, the collimator must be removed in order to produce a uniform exposure to the crystal surface.
Figure 37-14 Radioactive Sources Used To Test a Gamma Camera for Uniformity over
the Image Area
SPATIAL DISTORTION
Because of the manner in which the image is transferred from the crystal to the viewing screen, a gamma camera may introduce spatial distortion. Distortion occurs when various points within an image are moved with respect to each other in the transfer process. This distorts the size and shape of objects within the image.
A gamma camera can be checked for spatial distortion by using one of several types of test objects, or phantoms. For this test, the test object must have a series of lines or holes that are spaced uniformly within the image area. The test is performed by imaging the test object and then determining if the uniform spacing is maintained in the image.
Chapter 38
Radionuclide Tomographic Imaging
In this chapter we will consider the concept and general characteristics of both methods of radionuclide tomographic imaging: positron emission tomography (PET) and single photon emission computed tomography (SPECT).
Positrons are particles of anti-matter that have the same mass as electrons. Certain radioactive nuclides decay by emitting positrons. When a positron encounters an electron, the two annihilate each other, and their mass is converted into a pair of 511-keV photons. The most important characteristic is that the two photons leave the site of the annihilation traveling in opposite directions. Because positron-emitting nuclides can be incorporated into biological molecules, it is possible to tag and image a large number of compounds that play a role in biological functions.
Each possible pair of detectors is sampled for a very short period of time, which is called a coincidence time window. If both detectors in a pair receive a photon during the window, the event is recorded. The straight-line path between the two detectors that passes through the annihilation point is known as the line of response (LOR). The geometric coordinates of the line of response for each annihilation event are stored in the computer memory. There can be many LORs passing through each point within the imaged body section.
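The coincidence logic described above can be sketched in a few lines. The detection data (arrival times and detector IDs) and the 12-ns window width are illustrative assumptions, not values from the text:

```python
# Hypothetical photon detections: (arrival time in nanoseconds, detector id)
detections = [(100.0, 3), (104.0, 17), (250.0, 8), (400.0, 5), (401.0, 21)]

COINCIDENCE_WINDOW_NS = 12.0  # assumed window width

def find_coincidences(events, window):
    """Record a line of response (LOR) for every pair of detections
    whose arrival times differ by less than the coincidence window."""
    lors = []
    events = sorted(events)  # sort by arrival time
    for i, (t1, d1) in enumerate(events):
        for t2, d2 in events[i + 1:]:
            if t2 - t1 >= window:
                break  # later events are even farther apart in time
            if d1 != d2:
                lors.append((d1, d2))  # the LOR joins the two detectors
    return lors

print(find_coincidences(detections, COINCIDENCE_WINDOW_NS))
# [(3, 17), (5, 21)] -- two recorded annihilation events
```

The lone detection at 250 ns has no partner within the window, so no event is recorded for it.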
Image Reconstruction
Attenuation Correction
The photons from annihilation events can interact with and be attenuated by the tissue of the patient's body. Photons from near the center of the body will be attenuated more than those from near the surface. This produces a distortion in the apparent distribution of radioactivity within the slice. Steps must be taken to correct for the attenuation effects.
The basic process is to place a large ring-shaped source within the detector array. The ring source is typically filled with Ge-68, whose daughter product, Ga-68, is a positron emitter. Data are then collected both with and without the patient's body in place. These data are then used to determine the amount of attenuation along each LOR and are used by the computer to correct the patient emission data for attenuation effects.
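The correction step amounts to scaling each LOR by the ratio of the two ring-source measurements. A minimal sketch, with illustrative count values of my own choosing:

```python
# Counts along three hypothetical LORs (illustrative numbers):
blank_scan        = [10000.0, 10000.0, 10000.0]  # ring source, no patient
transmission_scan = [5000.0, 2500.0, 8000.0]     # ring source, patient in place
emission_scan     = [120.0, 90.0, 200.0]         # patient only

def attenuation_corrected(emission, blank, transmission):
    """Scale each LOR's emission counts by the attenuation correction
    factor ACF = blank / transmission measured along the same LOR."""
    return [e * (b / t) for e, b, t in zip(emission, blank, transmission)]

print(attenuation_corrected(emission_scan, blank_scan, transmission_scan))
# [240.0, 360.0, 250.0]
```

LORs passing through more tissue (lower transmission counts) receive a larger correction.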
Positron-Emitting Nuclides
Carbon, nitrogen, and oxygen are naturally occurring elements in most organic
compounds. The positron-emitting isotopes of these elements can be used to tag a
variety of compounds. Fluorine can be substituted for hydrogen in many com¬
pounds. Rubidium is a potassium analog that is taken up by the myocardium.
Most of these positron emitters are characterized by short lifetimes. This means
that they must be produced on-site just prior to their use.
The Cyclotron
A cyclotron accelerates charged particles that bombard a target, and nuclear reactions are used to produce the positron-emitting nuclides.
Design
The basic components of a typical cyclotron are shown in Figure 38-2. First, there is a large magnet that produces a magnetic field between the two poles. Within the magnetic field there are two electrodes that are known as dees because of their "D" shape.
Operation
Let us use Figure 38-2 to follow the sequence of events leading to the production of a positron-emitting nuclide. Negative hydrogen ions are produced by the ion source near the center. Because they are charged particles, they follow a curved path in the magnetic field. An electrical voltage is applied between the two dees (electrodes). The negative ions are attracted to the positive electrode. They will be accelerated and gain velocity and energy as they move between the electrodes. This is similar, in principle, to the acceleration of electrons in an x-ray tube. After the ions reach the dee, the polarity of the applied voltage is reversed so that the other electrode is now positive and attractive to the ions. The ions continue their
circular path and move back and forth between the dees, gaining additional energy with each passage. This causes the circular path to enlarge, so that the ions follow a spiral pathway. When they have gained the desired energy and have reached the periphery, they strike a stripping foil. The stripping foil removes the electrons from the negative ion, leaving a positively charged proton. The difference in electrical charge changes the direction of curvature in the magnetic field and directs the protons into the target, where the nuclear reaction takes place.
SINGLE PHOTON EMISSION COMPUTED TOMOGRAPHY (SPECT)
SPECT uses radionuclides that emit one photon per nuclear transition. The process is similar in principle to x-ray computed tomography (CT) except that a gamma camera is used as the imaging device, as shown in Figure 38-3.
Data Acquisition
SPECT requires a gamma camera that can rotate or scan the detector assembly
around the patient's body. In each position it collects data that represent projection
profiles of the radiation emitted from the patient's body.
Image Reconstruction
Chapter 39
Statistics
Photons are emitted from a radioactive source in a random manner with respect to both time and location. The random emission of photons with respect to time makes it somewhat difficult to get a precise indication of source activity. This is because the random emission produces a fluctuation in the number of photons emitted from one time interval to another. If the photons from the sample are counted for several consecutive time intervals, the number of counts recorded will be different for each, as illustrated in Figure 39-1. This natural variation introduces an error in the measurement of activity that is generally referred to as the statistical counting error.
The random emission of photons with respect to location, or area, produces image noise. The presence of this spatial noise within an image decreases the ability to see and detect small objects, especially when their contrast is low.
In this chapter we first consider the nature of the random variation, or fluctuation, in photons from a radioactive source, and then show how this knowledge can be used to increase the precision of activity measurements (counting) and the quality of images.
Let us begin by conducting an experiment for the purpose of studying the random nature of photon emissions. Assume we have a source of radiation in a scintillation well counter that is being counted again and again for 1-minute intervals. We quickly
notice that the number of counts, or photons, varies from one interval to another.
Some of the values we observe might be 87, 102, 118, 96, 124, 92, 108, 73, 115,
97, 105, and 82. Although these data show that there is a fluctuation, they do not
readily show the range of fluctuations. The amount of fluctuation will become
more apparent if we arrange our data in the form of a graph, as shown in Figure 39-2.
Figure 39-1 Statistical Photon Fluctuation As a Source of Counting Error and Image Noise
In the graph of Figure 39-2, we plotted the number of times we measured a specific number of counts versus the actual number of counts observed. Obviously, we would have to count the number of photons from our source many, many times to obtain the data to plot this type of graph.
When the data are presented in this manner, it is apparent that some count values occurred more frequently than others. In our experiment we observed 100 counts more frequently than any other value. Also, most of the count values fell within the range of 70 to 130 counts. Within this range, the number of times we observed specific count values is distributed in the Gaussian, or normal, distribution pattern. (This is actually a special type of Gaussian distribution known as the Poisson distribution and is discussed later.)
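The counting experiment above can be simulated. A sketch using Knuth's Poisson sampler; the sample size and seed are arbitrary choices of mine:

```python
import math
import random

random.seed(7)

def poisson_sample(mean: float) -> int:
    """Draw one Poisson-distributed count (Knuth's method)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

# Repeat the 1-minute counting experiment many times with a true mean of 100
counts = [poisson_sample(100) for _ in range(5000)]

mean = sum(counts) / len(counts)
sd = math.sqrt(sum((c - mean) ** 2 for c in counts) / len(counts))
within_1sd = sum(abs(c - 100) <= 10 for c in counts) / len(counts)

print(round(mean, 1), round(sd, 1), round(within_1sd, 2))
# mean close to 100, standard deviation close to sqrt(100) = 10,
# and roughly 68% of the counts within +/-10 of the true value
```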
At this point we need to raise a very significant question: Of all of these values,
which one is the "true" count rate that best represents the activity of our sample?
In our example, the average, or mean, of all of the values is 100 counts. This is considered to be the value that best represents the true activity of the sample.
Figure 39-2 Graph Showing the Relative Number of Times Different Count Values (Num¬
ber of Counts) Are Obtained When a Specific Radioactive Sample Is Counted Again and
Again
COUNTING ERROR
Error Ranges
From our earlier experiment, we know that the value of any individual count falls within a certain range around the true count value. In our experiment, we observed that all counts fell within 30 counts (plus or minus) of the true value (100 counts). Based on this observation, we could predict the maximum error that
Figure 39-3 The Amount of Error Is the Difference between the Measured and True Count
Values
could occur when we make a single count. In our case, the maximum error would be ±30 counts (±30%). We also observed that very few count values approached the maximum error. In fact, a large proportion of the count values are clustered relatively close to the true value. In other words, the error associated with many individual counts is obviously much less than the maximum error. To assume that the error in a single count is always the maximum possible error is overstating the error in most cases.
Figure 39-4 Number of Measurements That Fall within Specific Error Ranges
A more realistic approach is to consider how many count values fall within various error ranges. Upon careful analysis of our data, we find that 68% of the time the count values are within the first error range (±10%), 95% of all count values are within the next error range (±20%), and essentially all values (theoretically 99.7%) are within the third error range (±30%). For a single measurement, there is no way to determine the actual error because the true value is unknown. Therefore, we must think in terms of the probability of being within certain error ranges. With this in mind, we can now make several statements concerning the error of an individual measurement in our earlier experiment. While we are still not able to predict what the actual error is, we can make a statement as to the probability that the error is within certain stated limits.
It might appear that the error ranges used above were chosen because they were in simple increments of ten counts. Actually, they were chosen because they represent "standard" error ranges used for values distributed in a Gaussian manner. Figure 39-5 shows the relationship between error limits and the probability of a value falling within the specific limits.
It might be helpful to draw an analogy between the error limits and a bull's-eye target, as shown in Figure 39-6. The small bull's-eye in the center represents the true count value for a specific sample. If we make one measurement, we can expect the count value to "hit" somewhere within the overall target area. Although there is no way to predict where the value of a single measurement will fall, we do
Figure 39-5 Relationship between the Number of Measurements within Error Limits
When the Limits Are Expressed in Terms of Standard Deviations
know something about the probability, or chance, of it falling within certain areas.
For example, there is a 68% chance that our count value will fall within the small¬
est circle, which represents an error range of one standard deviation. There is a
95% probability that the value will fall within the next largest circle, which repre¬
sents an error range of two standard deviations. Essentially all of the values
(99.7%) will fall within the largest circle, which represents an error range of three
standard deviations.
Measuring the relative activity of a radioactive sample is like shooting at a
bull's-eye. We do not expect to get the true count value (hit the bull's-eye) each
time. The problem is that after making a measurement (taking a shot) we do not
know what our actual error is (by how far we missed the bull's-eye). This is be¬
cause we do not know what the true value is, only the value of our single measure¬
ment. We must, therefore, describe our performance in terms of error ranges and
the confidence we have of falling within the various ranges. We can express a
level of confidence of falling within a certain error range if we know the probability of a single value falling within that range. For Gaussian distributed count values, 68% will fall within one standard deviation of the true, or mean, value. Based on this we could make the statement that we are 68% confident that the value of a single measurement will fall within the one standard deviation error range. A more complete description of our performance could be summarized as follows:
Error Range Confidence Level
±1σ 68.0%
±2σ 95.0%
±3σ 99.7%
For count values that follow the Poisson distribution, the value of one standard deviation is equal to the square root of the number of counts recorded. This will be recognized as the count value in our earlier example, where it was stated that the value of one standard deviation was 10 counts, or 10%. Now let us examine the values of one standard deviation for other recorded count values shown in Table 39-1. Examination of this table shows that as the number of counts recorded during a single measurement increases, the value of the standard deviation, in number of counts, also increases; but it decreases when expressed as a percentage of the total number of counts. We can use this last fact to improve the precision of radiation measurements.
The error range, expressed as a percentage of the measured value, decreases as the number of counts in an individual measurement is increased. The real significance of this is that the precision of a radiation measurement is determined by the actual number of counts recorded during the measurement. The error limits for different count values and levels of confidence are shown in Table 39-2.
We can use the information in Table 39-2 to plan a radiation measurement that has a specific precision. For example, if we want our measurement to be within a 2% error range at the 95% confidence level, it will be necessary to record at least 10,000 counts. Most radiation counters can be set to record counts either for a specific time interval or until a specific number of counts is accumulated. In either case, the count rate of the sample (relative activity) is determined by dividing the number of counts recorded by the amount of time. Presetting the number of
Table 39-1 Standard Deviation Expressed in Number of Counts and Percentage of Count Value

Counts    Standard Deviation (Counts)    Standard Deviation (Percent)
100       10                             10
1,000     32                             3.2
10,000    100                            1
Table 39-2 Error Limits (Percent) for Different Count Values and Levels of Confidence

Counts     68% (±1σ)    95% (±2σ)    99.7% (±3σ)
100        10           20           30
1,000      3.2          6.3          9.5
10,000     1            2            3
100,000    0.32         0.63         0.95
counts and then measuring the time required for that number of counts to accumulate allows the user to obtain a specific precision in the measurement.
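The entries in Tables 39-1 and 39-2, and the planning rule just described, follow directly from the square-root relationship. A small sketch (the function names are mine):

```python
import math

def error_limit_percent(counts: float, k: int) -> float:
    """Error limit at k standard deviations, as a percent of the count."""
    return 100.0 * k * math.sqrt(counts) / counts

def counts_needed(error_percent: float, k: int) -> float:
    """Counts required so that k standard deviations equal the stated
    percentage error (inverting the expression above)."""
    return (100.0 * k / error_percent) ** 2

# Reproduce a row of Table 39-2: 10,000 counts -> 1%, 2%, 3%
print([error_limit_percent(10000, k) for k in (1, 2, 3)])   # [1.0, 2.0, 3.0]

# Plan a measurement: 2% error at the 95% confidence level (k = 2)
print(counts_needed(2.0, 2))   # 10000.0
```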
Combined Errors
Figure 39-7 Errors Associated with the Difference between Two Count Values
Figure 39-7 shows two count values, 3,600 and 6,400 counts. Taking the square root of the first number, we find that the standard deviation is 60 counts, or 1.67%. The second sample has a total of 6,400 counts with a standard deviation of 80 counts, or 1.25%. If we now determine the standard deviation for the difference by using the relationship given above, we see that it is 100 counts (the square root of 3,600 + 6,400). When two count values are added, the standard deviation of the sum (σs) is related to the standard deviations of the individual measurements (σ1 and σ2) by

σs = √(σ1² + σ2²).

Notice that this is the same relationship as that for the difference between two count values. A common mistake is to assume that the sign between the two standard deviation values is different for addition and subtraction. It does not change; it is positive in both cases.
Let us now determine the error range of the sum of the two count values in
Figure 39-7. As we have just seen, the standard deviation for the sum is the same
as the standard deviation for the difference. That is, in this case, 100 counts, but
since the sum of the two count values is 10,000 counts this now represents an error
range of only 1%. This is the same error range as we would find on a single measurement of 10,000 counts.
When expressed as a percentage, the error range increases when we take the
difference between two measurements, but it decreases when we add the results of
the two measurements.
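The combined-error arithmetic for the two count values in Figure 39-7 can be verified directly:

```python
import math

def combined_sigma(n1: float, n2: float) -> float:
    """Standard deviation of the sum OR difference of two counts:
    sigma = sqrt(sigma1^2 + sigma2^2) = sqrt(n1 + n2)."""
    return math.sqrt(n1 + n2)

n1, n2 = 3600, 6400
sigma = combined_sigma(n1, n2)
print(sigma)                        # 100.0 counts in both cases
print(100 * sigma / (n2 - n1))      # ~3.6% error for the difference (2,800 counts)
print(100 * sigma / (n1 + n2))      # 1.0% error for the sum (10,000 counts)
```

The same 100-count uncertainty is a much larger fraction of the small difference than of the large sum.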
Chapter 40
Patient Exposure and Protection
All medical imaging methods deposit some form of energy in the patient's body. Although the quantity of energy is relatively low, it is a factor that should be given attention when conducting diagnostic examinations. In general, there is more concern for the energy deposited by the ionizing radiations, x-ray and gamma, than for ultrasound energy or radio frequency (RF) energy deposited in magnetic resonance imaging (MRI) examinations. Therefore, this chapter gives major emphasis to the issues relating to the exposure of patients to ionizing radiation.
Patients undergoing either x-ray or radionuclide examinations are subject to a wide range of exposure levels. One of our objectives is to explore the factors that affect patient exposure. This is followed by an explanation of methods that can be used to determine patient exposure values in the clinical setting.
Figure 40-1 identifies the major factors that affect patient exposure during a radiographic procedure. Some factors, such as thickness and density, are determined by the patient. Most of the others are determined by the medical staff. Many of the factors that affect patient exposure also affect image quality. In most instances when exposure can be decreased by changing a specific factor, image quality is also decreased. Therefore, the objective in setting up most x-ray procedures is to select factors that provide an appropriate compromise between patient exposure and image quality.
The exposure is not the same at all points within the patient's body, so the anatomical location of a quoted value should also be stated. Some exposure patterns are characteristic of the different x-ray imaging methods. A review of these patterns will give us some background for considering factors that affect exposure and applying methods to determine actual exposure values.
Radiography
The exposure is not distributed uniformly throughout the patient's body, as shown in Figure 40-1. The point that receives maximum exposure is the entrance surface near the center of the beam. There are two reasons for this. The primary x-ray beam has not been attenuated by the tissue at this point, and the area is exposed by some of the scattered radiation from the body. The amount of scattered radiation reaching the surface depends on the penetration of the primary beam and the size of the exposed area. For typical radiographic situations, scattered radiation can add at least 20% to the surface exposure produced by the primary beam.
As the x-ray beam progresses through the body, it undergoes attenuation. The rate of attenuation (or penetration) is determined by the photon-energy spectrum (KV and filtration) and the type of tissue (fat, muscle, bone) through which the beam passes. For the purpose of this discussion, we assume a body consisting of homogeneous muscle tissue. In Figure 40-2, lines are drawn to divide the body into HVLs. The exposure is reduced by a factor of one half each time the beam passes through one HVL; here the exposure decreases by one half as it passes through each 4 cm of tissue. At the exit surface, the exposure is only a small fraction of the entrance surface exposure.
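The halving-per-HVL pattern can be sketched as a short calculation, using the 4-cm HVL assumed in the text:

```python
def exposure_at_depth(surface_percent: float, depth_cm: float, hvl_cm: float = 4.0) -> float:
    """Exposure remaining after the beam passes through depth_cm of tissue,
    assuming one HVL per 4 cm of muscle as in the example above."""
    return surface_percent * 0.5 ** (depth_cm / hvl_cm)

for depth in (0, 4, 8, 12, 16, 20):
    print(depth, exposure_at_depth(100.0, depth))
# 100% at the surface, halving every 4 cm: 50%, 25%, 12.5%, 6.25%, 3.125%
```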
Fluoroscopy
The fluoroscopic beam projected through the body will produce a pattern similar to a radiographic beam if the beam remains fixed in one position. If the beam is
Figure 40-2 Typical Exposure Pattern (Depth Dose Curves) for an X-Ray Beam Passing
through a Patient's Body
moved during the procedure, the radiation will be distributed over a larger volume of tissue rather than being concentrated in one area. For a specific exposure time, tissue exposure values (roentgens) are reduced by moving the beam, but the total radiation (R·cm2) delivered to the body is not changed. This was illustrated in Figure 3-5 (Chapter 3).
Computed Tomography
In computed tomography (CT) two factors are associated with exposure distribution and must be considered: (1) the distribution within an individual slice and (2) the effect of imaging multiple slices.
The rotation of the x-ray beam around the body produces a much more uniform
distribution of radiation exposure than a stationary radiographic beam. A typical
CT exposure pattern is shown in Figure 40-3. A relatively uniform distribution
throughout the slice is obtained if a 360° scan is performed. However, if other scan angles that are not multiples of 360° are used, the exposure distribution will become less uniform.
When multiple slices are imaged, the exposure (roentgens) does not increase in proportion to the number of slices because the radiation is distributed over a larger volume of tissue. This point was also illustrated in Chapter 3. However, when slices are located close together, radiation from one slice can produce additional exposure in adjacent slices because slice edges are not sharply defined (as described in Chapter 23) and because of scattered radiation.
Figure 40-3 Typical Exposure (Dose) Pattern Produced with Computed Tomography
Image quality and patient exposure are generally linked, and this holds true for both x-ray and nuclear radiation imaging procedures. The variables of an imaging procedure should be selected to produce adequate image quality with the lowest practical patient exposure.
Figure 40-4 traces typical exposure values through a radiographic system: a receptor sensitivity of 0.5 mR, a grid with a Bucky factor of 4 (2.0 mR above the grid), a tabletop of 1 HVL (4.0 mR below the patient), and a patient surface exposure of 200 mR. The receptor sensitivity is associated with image contrast, film noise, and screen blur.
Receptor Sensitivity
One of the most significant factors is the amount of radiation that must be delivered to the receptor to form a useful image. This is determined by the sensitivity of the receptor. It was shown in Chapter 14 that there is a rather wide range of sensitivity values among the receptors used for radiographic procedures.
Intensifying Screens
The selection of intensifying screens for a specific procedure involves a compromise between exposure and image blur or detail. The screens that require the least exposure generally produce more image blur, as discussed in Chapter 14.
Films
Films with different sensitivity (speed) values are available for radiographic procedures. The primary disadvantage in using high sensitivity film is that quantum noise is increased, as described in Chapter 21. In fact, it is possible to manufacture film that would require much less exposure than the film generally used. However, the image noise level would be unacceptable.
Grid
It was shown in Chapter 13 that the penetration of grids is generally in the range of 0.17 to 0.4. This corresponds to a Bucky factor ranging from 6.0 to 2.5. The exposure to the exit surface of the patient is the product of the receptor exposure and the grid Bucky factor, assuming that the receptor surface is not separated from the surface of the patient by a significant distance. The use of a high-ratio grid, which generally has a relatively low penetration, or high Bucky factor, tends to increase the ratio of patient-to-receptor exposure. Low-ratio grids reduce patient exposure but are less effective in removing scattered radiation.
Tabletop
In many x-ray examinations, the receptor is located below the table surface that
supports the patient's body. The attenuation of radiation by the tabletop increases
the ratio of patient-to-receptor exposure. It is generally recommended that the
tabletop have a penetration of at least 0.5 (not more than 1 HVL). The patient
exposure with a tabletop that has a penetration of 0.5 will be double the exposure
if no tabletop is located between the patient and receptor.
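The grid and tabletop factors just described multiply together. A sketch using the values shown in Figure 40-4 (the function name is mine):

```python
def surface_exposure(receptor_mr: float, bucky_factor: float, tabletop_hvls: float) -> float:
    """Exposure at the patient's exit surface: the receptor exposure is
    multiplied by the grid Bucky factor and doubled for each HVL of tabletop."""
    return receptor_mr * bucky_factor * 2.0 ** tabletop_hvls

# Receptor needs 0.5 mR; grid Bucky factor of 4; tabletop of 1 HVL
print(surface_exposure(0.5, 4.0, 1.0))   # 4.0 mR at the patient's exit surface
```

Tissue attenuation between the exit and entrance surfaces then raises the value further, to the 200 mR surface exposure of the figure.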
Distance
Patient exposure is reduced by using the greatest distance possible between the
focal spot and body. The effect of decreasing this distance on patient exposure is
illustrated in Figure 40-5; two body sections are shown with x-ray beams that
cover the same receptor area. The x-ray beam with the shorter focal-to-patient
distance covers a smaller area at the entrance surface. Because the same radiation is concentrated into the smaller area, the exposure to the entrance surface and points within the patient is higher than for the x-ray beam with the greater focal-to-patient distance.
It is generally recommended that the distance between focal spot and patient surface should be at least 38 cm (15 in) in radiographic examinations. Fluoroscopic tables should be designed so that the focal spot is at least 38 cm below the tabletop.
The inverse-square effect increases the concentration of radiation (exposure and dose) in the patient's body. However, the total amount of radiation (surface integral exposure) is essentially unchanged.
In procedures in which the body section is separated from the receptor surface to achieve magnification, exposure can significantly increase because of the inverse-square effect. An air gap is also introduced, which reduces the amount of scattered radiation reaching the receptor. To compensate for this and to achieve the same film exposure, it is generally necessary to increase the x-ray machine output, which also increases exposure to the patient.
Tissue Penetration
If the point of interest, or organ, is not located at the exit surface of the body, the attenuation in the tissue layer between the organ and exit surface will further increase the exposure. The ratio of the organ-to-exit surface exposure is determined by the thickness and penetration (HVL) of the intervening tissue.
Exposure Values
Beam Limiting
Changing the x-ray beam area (or field of view, FOV) has relatively little effect on the entrance surface exposure but has a significant effect on the total amount of radiation delivered to the patient. The surface integral exposure is directly proportional to the beam area. A large beam will deliver more radiation to the body than a small one. Limiting the FOV to the smallest area that fulfills the clinical requirements is an effective method for reducing unnecessary patient exposure. Under no circumstances should an x-ray beam cover an area that is larger than the receptor.
EXPOSURE DETERMINATION
The previous section identified the significant factors that affect the exposure, or dose, to a patient undergoing an x-ray examination. It is often desirable to determine the dose received by a patient in a specific examination. The relationships discussed above are generally not useful for this purpose because many of the factors, such as receptor sensitivity, scatter factor, etc., are not precisely known. It is usually easier to determine patient exposure and dose by starting with the technical factors, KVP and MAS.
Table 40-1 Typical Patient Surface Exposure Values for Various X-Ray Procedures
Procedure Exposure
Skull (L) 40 - 60 mR
Chest (L) 50 - 100 mR
Chest (PA) 10 - 30 mR
The exposure (X) delivered to a point located 1 m from the focal spot is given by

X(mR) = Ex × MAS
where Ex is the efficacy of the x-ray tube. In most facilities, x-ray machines are
calibrated periodically, and the efficacy value can be obtained from the calibration
reports. In the absence of a measured efficacy value for a specific machine, it
might be necessary to use typical values such as are found in Figure 7-8. The
efficacy values depend on KVP, waveform, filtration, and the general condition of
the x-ray tube anode. The exposure to points at other distances from the focal spot can be determined by adding an inverse-square correction to the above relationship:

X(mR) = (Ex × MAS) / d²

where d is the distance (in meters) between the focal spot and the point of interest. This relationship will apply if there is no attenuation of the x-ray beam by materials such as tissue.
When the point of interest is within the body, two additional factors must be
considered: (1) the attenuation of the radiation as it passes through the overlying
tissue and (2) the contribution of scattered radiation to the exposure. This can be
done by multiplying the exposure value in air by the appropriate tissue-air ratio
(TAR), as illustrated in Figure 40-6. Some typical TAR values for diagnostic
x-ray examinations are given in Table 40-2, which tabulates TAR by tissue depth (cm) and kVp (70 to 120). TAR values depend on the depth of the point of interest within the body, the penetrating ability of the x-ray beam (KV, filtration, waveform), and the size of the x-ray beam field, which affects the amount of scattered radiation produced.
The relationships discussed above can be combined to give

Dose (mrad) = X × TAR = (Ex × MAS × TAR) / d²
The fact that both x-ray tube efficacy, Ex, and TAR increase with KV does not mean that patients receive more radiation when the KV is increased in an examination. An increase in KV must be compensated for by decreasing MAS to obtain the same film exposure. This results in less radiation to the patient because of better penetration.
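The combined dose relationship can be written out directly. All of the numeric inputs below are illustrative values of my own choosing, not values from the text:

```python
def dose_mrad(efficacy_mr_per_mas: float, mas: float, tar: float, distance_m: float) -> float:
    """Dose = (Ex x MAS x TAR) / d^2, with Ex the tube efficacy
    (mR per mAs at 1 m) and d the focal-spot distance in meters."""
    return efficacy_mr_per_mas * mas * tar / distance_m ** 2

# Illustrative case: efficacy 5 mR/mAs at 1 m, 50 mAs,
# TAR of 0.4 at the depth of interest, 0.75 m to the point of interest
print(round(dose_mrad(5.0, 50.0, 0.4, 0.75), 1))   # 177.8 mrad
```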
RADIONUCLIDE DOSIMETRY
In this section we consider the characteristics of the radioactive material and the human body that determine the amount of energy that will be deposited.
The determination of integral dose is relatively straightforward. As illustrated in Figure 40-7, the two factors that determine integral dose are (1) the total number of radioactive transitions that occur within the body and (2) the average energy emitted by each transition. The product of these two quantities is the total energy emitted by the radionuclide, excluding the energy carried away by the neutrino. If the radionuclide is located within the body, it can generally be assumed that most of the emitted energy will be absorbed by the body. This, however, depends on the penetrating characteristic of the radiation. If the emitted radiation is in the form of high-energy photons, some of the energy will escape from the body, but this will usually be a relatively small fraction of the total amount.
The relationship among total energy (integral dose), the average energy per transition, and the number of transitions, expressed in terms of the cumulated activity, is shown in Figure 40-7.
Cumulated Activity
Cumulated activity, A, is a convenient way of expressing the number of transitions that occur. The units used for this quantity are microcurie-hours (µCi-hr). Recall that 1 µCi-hr is equivalent to 1.33 × 10^8 radioactive transitions.
The first step in determining the amount of radiation energy deposited in a body
is to determine the cumulated activity. The cumulated activity depends on two
Integral Dose (g-rad) = 2.13 × 10⁻³ × A (µCi-hr) × E (keV/transition)
Figure 40-7 Factors That Determine the Total Amount of Radiation Energy (Integral
Dose) Imparted to the Patient's Body
Patient Exposure and Protection 599
factors: (1) the amount of activity administered to the patient (Ao) and (2) the
lifetime of the radioactive material within the body or organ of interest. The rela¬
tionship between cumulated activity and these two quantities is
A = 1.44 × Ao × Te.
The half-life that determines cumulative activity is always the effective half-
life, Te. The relationship shown above applies only if the radioactive material is
administered to or taken up by the body or organ of interest very quickly. This is
usually the situation after administering a radiopharmaceutical in a single dose.
It is important to recognize the dependence of the number of transitions (cumu¬
lated activity) on the lifetime of the radionuclide. This is illustrated in Figure 40-8
for two nuclides with different half-lives. In both cases the administered activity is
the same, ie, 10 µCi. The illustrations show the relationship between activity remaining in the body and elapsed time. The cumulated activity, or number of transitions, is represented by the shaded area under the curve. The point to be made is
simply this: For a given amount of administered activity, the number of transitions
that occur within the body (cumulated activity) is directly proportional to the half-
life of the radionuclide.
In many cases when a radionuclide is administered to the patient, there is some
A = 1.44 × Ao × (Te − Tu).
Cumulated activity is related to the characteristics of both the radionuclide and
the patient. In other words, both physical and biological factors affect cumulated
activity. The physical factors, ie, administered activity and physical half-life of the
nuclide, are always known. The problem in determining cumulated activity is in
Transition Energy
Most radionuclides emit a mixture of radiations, as discussed in Chapter 5. The radiation can consist of both electrons and photons. Although the total transition energy is the same for all nuclei of a specific nuclide, the radiation energy might vary from nucleus to nucleus because of energy carried away by neutrinos and the fact that all nuclei do not go through exactly the same transition steps. The transition diagram and radiation spectrum for a hypothetical nuclide are shown in Figure 40-10. The total transition energy of 290 keV is shared by the beta electrons, neutrinos, and gamma photons.
This particular nuclide has two possible transition routes. Twenty percent of the
nuclei emit a beta and neutrino followed by a 190-keV gamma photon (gamma 1).
Eighty percent of the nuclei emit more energy in the form of beta and neutrino
radiation and a 160-keV gamma photon (gamma 2). It is assumed in this example
that the average energy of all beta electrons is 50 keV. The average beta and
gamma energy emitted per transition is
Gamma 1: 0.2 × 190 = 38 keV
Gamma 2: 0.8 × 160 = 128 keV
Beta: 50 keV
Total: 216 keV.
Notice that the average radiation energy per transition (216 keV) is less than the
total transition energy (290 keV) because we exclude the energy carried away
from the body by the neutrinos.
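The averaging in the example above can be written out directly; this sketch simply reproduces the arithmetic of the hypothetical nuclide:

```python
# Average radiated energy per transition: weight each emission by the
# fraction of transitions in which it occurs. Neutrino energy is excluded.

def avg_energy_per_transition_kev(emissions) -> float:
    """emissions: iterable of (fraction_of_transitions, energy_keV) pairs."""
    return sum(f * e for f, e in emissions)

avg = avg_energy_per_transition_kev([
    (0.2, 190.0),  # gamma 1, emitted in 20% of transitions
    (0.8, 160.0),  # gamma 2, emitted in 80% of transitions
    (1.0, 50.0),   # average beta energy, present in every transition
])  # 216 keV, compared with the 290-keV total transition energy
```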
The average transition energy is usually expressed in the units of gram-rad per microcurie-hour, which is designated the equilibrium dose constant, Δ. The equilibrium dose constant is the amount of radiation energy emitted by 1.33 × 10⁸ transitions (1 µCi-hr). This is a useful quantity because the integral dose can be
found by multiplying two quantities, the equilibrium dose constant and the cumu¬
lated activity, as shown in Figure 40-7. Since
1 µCi-hr = 1.33 × 10⁸ transitions
and
ABSORBED DOSE
Electron Radiation
If the radioactive material emits electron or particle radiation such as beta, inter¬
nal conversion, Auger, or positron, the energy will be absorbed in the close vicin¬
ity of the radioactive material. Recall that a 300-keV electron can penetrate less
than 1 mm of soft tissue. Most of the electron radiations encountered in nuclear
medicine have energies much less than this and shorter ranges.
From the standpoint of dose estimation, the simplest situation is an organ that contains an electron emitter that is uniformly distributed throughout the organ, as illustrated in Figure 40-11. In this case, the absorbed dose is essentially the same throughout the organ and is simply the total emitted energy divided by the mass of the organ. The factors that determine the total emitted energy (integral dose) were discussed above. The absorbed dose is inversely related to organ mass. If the same amount of radiation energy is deposited in two organs that differ in size, the absorbed dose will be greater in the smaller organ.
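The division described above is simple enough to state in code. This is an illustrative sketch with assumed example numbers, not values from the text:

```python
# Uniformly distributed electron emitter: absorbed dose is the total
# emitted energy divided by organ mass (g-rad / g = rad). The same
# integral dose in a smaller organ gives a larger absorbed dose.

def absorbed_dose_rad(integral_dose_g_rad: float, organ_mass_g: float) -> float:
    return integral_dose_g_rad / organ_mass_g

# Assumed example: 300 g-rad deposited in a 150-g versus a 300-g organ
small_organ = absorbed_dose_rad(300.0, 150.0)  # 2.0 rad
large_organ = absorbed_dose_rad(300.0, 300.0)  # 1.0 rad
```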
Photon Radiation
For photon radiation, it is necessary to identify two organs, as shown in Figure 40-12. The organ that contains the radio¬
active material is designated the source organ. The organ in which the dosage is
being considered is designated the target organ. With photon radiation, several
target organs must usually be considered. Obviously, the source organ is also a
target organ and is generally the organ that receives the greater dose.
In most cases, only a fraction of the emitted radiation energy is absorbed in a
regardless of the size of the target organ. The size of the target organ affects the
total amount of energy absorbed by the organ, but has relatively little effect on the
concentration or absorbed dose. The reason that changing target organ size might
Figure 40-12 Factors Used To Determine the Dose to a Target Organ
not significantly affect absorbed dose is that dose is the amount of absorbed en¬
ergy per unit mass of tissue. A larger organ might absorb more energy, but because
of its greater mass the energy per unit mass is essentially the same.
Table 40-3 Absorbed Dose per Unit Cumulated Activity (rad/µCi-hr) for Tc-99m, tabulated by source organ (bladder contents, stomach contents, SI contents, ULI contents, LLI contents, kidneys, liver, lungs, other tissue (muscle), and others) and target organ (adrenals, bladder wall, bone, GI tract walls, kidneys, liver, lungs, red marrow, muscle, ovaries, pancreas, skin, spleen, testes, thyroid, uterus, and total body). Source: Adapted from MIRD Pamphlet No. 11 (Snyder et al.).
EXPOSURE LIMITS
lished exposure limits do not represent levels that ensure absolute safety but rather
exposure levels that carry acceptable risk to the persons involved. The recommen¬
dations of the NCRP are in the form of effective dose equivalent limits. The limits
are used in designing radiation facilities and in monitoring the effectiveness of
safety practices.
The recommended limits vary with the occupational status of the individual and
the parts of the body, as shown in Figure 41-1 as reported in NCRP Report No. 91.
Occupational Exposure

Under NCRP Report No. 91, the effective dose equivalent limit for occupationally exposed persons is 5 rem per year.

Public

Persons who enter the facility as patients or visitors, or persons who do not routinely work in the facility, might be exposed to radiation. The effective dose equivalent limit for the nonoccupationally exposed person is 0.1 rem per year.
Fetus
The limit for a fetus is 0.5 rem for the total gestational period.
The maximum permissible exposure for a given area depends on two factors: (1) the limit for the personnel occupying the area and (2) the occupancy rate by specific individuals.
For purposes of evaluating shielding requirements, the occupancy rate of an
area is assigned an occupancy factor (T). The occupancy factor represents the
fraction of time the area is occupied by any one individual. Most work areas (of¬
fices, laboratories, etc.) have occupancy factor values of 1 because they are typi¬
cally occupied by the same persons on a full-time basis. Areas such as stairways,
toilets, etc., have relatively low occupancy factor values because the same persons
are not present in these areas for any extended period of time. These areas are
typically assigned minimum occupancy factor values of 1/16. Other areas, such as
hallways, have occupancy factor values between these two extremes.
The maximum permissible exposure into any area is equal to the limit for the
personnel occupying the area divided by the appropriate occupancy factor, T. Ar¬
eas with relatively low occupancy factors have greater area exposure limits than areas with high occupancy factors.
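The division described above can be sketched as follows. The 100 mR/wk personnel limit used in the example is an assumed value, not one taken from the text:

```python
# Maximum permissible exposure into an area = personnel limit / T,
# where T is the occupancy factor for the area.

def area_exposure_limit(personnel_limit_mr_wk: float, occupancy_factor: float) -> float:
    return personnel_limit_mr_wk / occupancy_factor

office = area_exposure_limit(100.0, 1.0)         # 100 mR/wk (T = 1)
stairway = area_exposure_limit(100.0, 1.0 / 16)  # 1600 mR/wk (T = 1/16)
```

A stairway with T = 1/16 can therefore receive 16 times the exposure of a full-time office for the same personnel limit.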
EXPOSURE SOURCES
The amount of radiation directed toward the walls of an x-ray room and produc¬
ing exposure in the adjacent areas depends on several factors that must be evalu¬
ated to determine the amount of wall shielding required.
Workload
The workload, W, represents how much a specific x-ray machine is used during a typical week. It is expressed in the units of milliampere-minutes and is the sum of the MAS values for all exposures during 1 week divided by 60. For example, if
we have a room that produces 250 exposures per week with an average MAS of 20
the workload is approximately 83 mA-min per week. The workload and a K value appropriate for the x-ray machine KVP are used to calculate exposure. A K value of 20 mR/mAs is a typical value for a three-phase x-ray machine operating at 120 kVP, taken from Figure 7-10 (Chapter 7). The weekly exposure produced at this K value is 100,000 mR at a distance of 1 m.
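The workload arithmetic can be checked directly. This sketch uses the example values from the text (250 exposures per week, 20 mAs average, K = 20 mR/mAs); the function names are illustrative:

```python
# Weekly workload W (mA-min) and the weekly exposure at 1 m from the
# focal spot for a given K value (mR/mAs).

def workload_ma_min(exposures_per_wk: int, avg_mas: float) -> float:
    """Sum of all mAs for the week divided by 60."""
    return exposures_per_wk * avg_mas / 60.0

def weekly_exposure_mr(exposures_per_wk: int, avg_mas: float,
                       k_mr_per_mas: float) -> float:
    """Exposure at 1 m produced by one week of use."""
    return exposures_per_wk * avg_mas * k_mr_per_mas

w = workload_ma_min(250, 20.0)           # ~83.3 mA-min per week
x = weekly_exposure_mr(250, 20.0, 20.0)  # 100,000 mR per week at 1 m
```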
Utilization Factor
The direction of the x-ray beam must also be considered. For the purposes of exposure analysis, x-ray beam direction is expressed in terms of a utilization factor, U, for each of the walls, ceiling, and floor of the room containing the x-ray
equipment. Each surface (wall) has a utilization factor value that represents the
fraction of time the x-ray beam is directed toward it. If an x-ray beam is fixed in
one direction, that particular wall will have a utilization factor value of 1.
The exposure to each wall also depends on the distance, d, between the x-ray
source and the wall. The exposure decreases inversely with the square of the dis¬
tance. These two factors are used in conjunction with the machine exposure to
Scattered Radiation
In most facilities, the most significant source of area exposure is the scattered
radiation produced by the patient's body. The relationship used to calculate pri¬
mary beam exposure is modified in the following manner to produce an estimate
of exposure produced by scattered radiation.
It is generally assumed for this type of analysis that the scatter exposure at a
distance of 1 m from a patient's body is 0.001 of the primary beam exposure enter¬
ing the body; or, in other words, each roentgen of primary beam exposure pro¬
duces 1 mR of scattered radiation. Since scattered radiation goes in virtually all
directions, utilization factor values of 1 are assigned to all walls, ceilings, and
floors. The distance between the irradiated portion of the patient's body and the
wall is also used in the calculation.
If we assume that the wall of interest in our example is also 3 m from the
patient's body, the scattered radiation exposure to the wall is
Scatter Exposure = 100,000 (mR/wk) × 0.001 (Scatter Factor) / 3² = 11 mR/wk.
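The calculation just shown can be expressed as a small helper; the function name is illustrative, but the numbers are those of the example (100,000 mR/wk primary, 0.001 scatter factor, 3-m distance):

```python
# Scatter exposure to a wall: 0.001 of the primary exposure entering the
# patient appears as scatter at 1 m, reduced by the inverse square of the
# distance from the patient to the wall.

def scatter_exposure_mr_wk(primary_mr_wk: float, distance_m: float,
                           scatter_factor: float = 0.001) -> float:
    return primary_mr_wk * scatter_factor / distance_m ** 2

wall = scatter_exposure_mr_wk(100_000.0, 3.0)  # ~11 mR/wk
```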
AREA SHIELDING
If the exposure to a wall exceeds the exposure limit for the adjacent area, a
barrier or shield is required. A specific barrier or shield is characterized by its
penetration value. The maximum permissible wall penetration is the ratio of the
area exposure limit to the calculated wall exposure. As an example, if the wall of
an area that has an exposure limit of 100 mR per week receives an exposure of
PERSONNEL SHIELDING
Shielding is usually required for personnel located in the same area as a patient
undergoing an x-ray examination. Although the scattered exposure produced by a
single procedure is relatively small, significant exposures can result from working
with patients on a regular basis. The greatest potential for personnel exposure usu¬
ally comes from fluoroscopic procedures. If we assume a side scatter value of
0.001 (1 mR/R), a 10-minute fluoroscopic procedure with a patient exposure rate
of 3 R/min will produce a scatter exposure of 30 mR at a distance of 1 m from the
patient. This is approximately one-third the limit for 1 week. At a distance of 0.5 m
from the patient, the exposure from this one procedure could easily exceed the
limit for 1 week.
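The fluoroscopy figures above follow from the same inverse-square relation; this sketch reproduces that arithmetic (function name illustrative):

```python
# Scatter exposure to personnel from one fluoroscopic procedure:
# entrance rate (R/min) x duration x 1 mR of scatter per R at 1 m,
# scaled by the inverse square of the distance from the patient.

def fluoro_scatter_mr(rate_r_per_min: float, minutes: float,
                      distance_m: float, mr_per_r: float = 1.0) -> float:
    return rate_r_per_min * minutes * mr_per_r / distance_m ** 2

at_1_m = fluoro_scatter_mr(3.0, 10.0, 1.0)    # 30 mR at 1 m
at_0_5_m = fluoro_scatter_mr(3.0, 10.0, 0.5)  # 120 mR at 0.5 m
```

Halving the distance quadruples the exposure, which is why the single procedure at 0.5 m can exceed a weekly limit that the same procedure at 1 m does not approach.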
Figure 41-2 Factors That Determine Human Exposure from a Gamma-Emitting Radioactive Source
Values for other nuclides are tabulated in various nuclear medicine reference
books. The precise calculation of the gamma constant value for a specific nuclide
requires a knowledge of the attenuation coefficient of air at each photon energy.
However, over the range of photon energies normally encountered in nuclear
medicine, the approximate gamma constant can be calculated from
ing the exposure at 1 m by the square of the distance (d2) because of the inverse-
square effect. If the radiation passes through a material that has significant absorp¬
tion characteristics, such as a shield, the exposure value must be multiplied by the
(A) will be the product of the activity (A) and the exposure time (t). In this case,
the expression for exposure is
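The expression this paragraph assembles is presumably of the form X = Γ × A × t / d², multiplied by any shield transmission factor. The following sketch is written under that assumption; the function and the example numbers are illustrative, not the book's equation:

```python
# Exposure from a gamma-emitting source: gamma constant (exposure rate at
# 1 m per unit activity) x activity x time, divided by distance squared,
# times any shield transmission factor. The caller must keep the units of
# the gamma constant and activity consistent.

def gamma_exposure(gamma_const: float, activity: float, hours: float,
                   distance_m: float, transmission: float = 1.0) -> float:
    return gamma_const * activity * hours * transmission / distance_m ** 2

# Assumed example: gamma constant 2.0, 10 units of activity, 1 hr, 2 m
x = gamma_exposure(2.0, 10.0, 1.0, 2.0)  # 5.0, in the gamma constant's units
```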
IONIZATION CHAMBERS
Electrometer
The basic system for measuring exposure rate with an ionization chamber is
shown in Figure 42-1. In addition to the ionization chamber, a power supply and a
device that will measure small electrical currents are required. The device normally used is an electrometer. The power supply applies a voltage to the chamber, which causes one electrode to be negative and
the other positive. If the air between the electrodes is not ionized, it is an insulator,
and no electrical current can flow between the electrodes and through the circuit.
However, when the air is exposed to x-radiation, it becomes ionized and electri¬
cally conductive. Ionization is a process that either adds or removes electrons from
an atom. This causes the atom to take on an electrical charge.
Figure 42-1 shows how conduction takes place when an electron is removed
from an atom by the radiation. The electron is attracted to the positive electrode,
where it is collected and pumped through the circuit by the power supply. The
arrows indicate the direction of electron flow. The positive ion is attracted to the
negative electrode. When it reaches the electrode, it picks up an electron and be¬
comes neutral. The electron that was absorbed from the negative electrode is re¬
placed by an electron from the circuit. Because electrons enter the ionization
chamber at the negative terminal and leave from the positive terminal, the ioniza¬
tion chamber becomes, in effect, a conductor. For a given ionization chamber, the
amount of conduction, or current, is proportional to the exposure rate. When the
system is calibrated, the electrometer indicates the exposure rate in units, such as
roentgens per minute or milliroentgens per hour.
Exposure Measurements
depends on the size of the chamber and the voltage applied by the power supply.
The movement of electrons is what is generally referred to as charging the cham¬
ber. If the ionization chamber is disconnected from the power supply, the charge
will remain until a conductive path is formed between the two electrodes. Voltage
is present between the two electrodes even after the power supply is removed. For
a given ionization chamber, the voltage is proportional to the charge, or number,
of displaced electrons.
In the second step, the charged chamber is exposed to ionizing radiation, and
the air becomes conductive and discharges the chamber. This happens in the fol¬
lowing way. The positively charged ions are attracted to the negative electrode,
where they pick up some of the excess electrons. The electrons formed in the air
by the ionization are attracted to and collected by the positive electrode. The num¬
ber of electrons that are returned to their normal locations is proportional to the
made. Free air ionization chambers are not used for routine exposure measure¬
ments but as a standard for calibrating other types of ionization chambers.
The response of an ionization chamber in terms of the number of ions produced
and collected per unit of exposure can change with exposure conditions. Because
of this, it is often necessary to use various correction factors to get precise expo¬
sure measurements.
Saturation
essary to separate the ion pairs before recombination can occur. This is achieved
by applying a sufficient voltage between the electrodes. As the voltage across the
chamber is increased, the force on the ions is also increased, and they are separated
more quickly. When the voltage applied to the chamber is enough to prevent any
SURVEY METERS
Laboratories that use radioactive materials are usually required to have an instrument that can be used to measure exposure to personnel and to locate and measure the relative activity of sources, such as a spilled radionuclide. From time to time it is also necessary to measure the environmental exposure produced by
scattered radiation during an x-ray procedure. An instrument that can be used for
these purposes is generally referred to as a survey meter. Survey meters can be
constructed using several radiation detectors. The most common detectors used
for this purpose are Geiger-Mueller (GM) tubes and ionization chambers. Scintil¬
lation detectors are not widely used for this purpose because they are generally
larger, more complex, and more expensive than the other types of detectors.
Geiger-Mueller Detector
The major components of a GM survey meter are shown in Figure 42-3. The
detector (GM or Geiger tube) is a cylindrical tube that contains a specially formu¬
lated gas mixture and two electrodes. One electrode is a small wire running along
the axis of the cylinder. The other electrode is the wall of the cylinder, which is
multiplying process is repeated until the ionization spreads throughout the tube.
Because of this avalanche effect, the number of electrons that eventually reach the
central electrode is many times larger than the number of electrons
produced by
the radiation photon or particle. When the electrons reach the electrode, they are
collected and conducted out of the tube in the form of an electrical pulse. An
important characteristic of a GM tube is that the pulse is relatively large and re¬
quires very little, if any, additional amplification. The avalanche effect is a form of
amplification that occurs within the tube.
The number of pulses produced by a GM tube is proportional to the number of photons or particles absorbed in the tube. The efficiency of GM tubes for detecting photon radiation is relatively low. This is because many photons can penetrate the tube without interact¬
ing. Recall that radiation must interact with and be absorbed in a detector before a
signal can be created. Even so, a GM survey meter can detect low levels of radia¬
tion because it can respond to individual photons.
The basic problem in measuring beta radiation is getting the electrons into the
tube. Most beta electrons cannot penetrate the glass walls of the tube. This prob¬
lem is overcome by constructing tubes with a thin window in the end that can be
"click." This type of output is useful when searching for a source, such as spilled
radioactive material. When listening to the pulses, it is relatively easy to detect
small changes in the radiation level.
A more precise indication of radiation level can be obtained by electronically
counting the pulses and displaying the count rate on a meter. On most survey
meters, the count-rate meter has two scales. One scale indicates the count rate in
counts per minute (cpm), and the other scale indicates exposure rate, typically in
the units of milliroentgens per hour (mR/hr). GM survey meters usually do not
sure of 300-keV photons. Survey meters are usually calibrated at one specific photon energy. If the instrument is to be used to measure exposure rates, it is necessary to have some knowledge of its energy dependence.
The major advantage of a GM survey meter is that it is relatively simple and can
detect low levels of radiation.
Ionization Chambers
Another type of survey meter uses an ionization chamber detector. The cham¬
bers designed for survey meters generally use the cylinder wall as one electrode
and a wire along the cylinder axis for the other. The ionization chamber differs
from the GM tube in several respects. It is generally much larger. Rather than a
special gas mixture, the ionization chamber contains air at atmospheric pressure.
The survey meter contains a power supply that applies a voltage between the two
chamber electrodes. However, the voltage used to operate an ionization chamber
is much less than that required to operate a GM tube.
When radiation enters the ionization chamber, it interacts with the air and pro¬
duces ionization. The ions and electrons are attracted to the electrodes. Because a
lower voltage is used, the electrons in the ionization chamber are not accelerated
enough to produce additional ionization, as in the GM tube. The only electrons
and ions collected by the electrodes are the ones produced directly by the radia¬
tion. Because no electron multiplication (amplification) occurs in the ionization
chamber, the output signal is relatively small. The signal is in the form of a very
Film Badges
Film can be used to measure exposure. The most common application of film is to measure personnel exposure within the clinical facility. This is normally done
by placing a small piece of film in a badge that is then worn on the body. Film
badges can be used to monitor personnel exposure over extended periods of time,
such as 1 month or longer. Film badges generally cannot measure exposure with
the same accuracy as most other devices.
Thermoluminescence Dosimetry
Thermoluminescence is a process in which materials emit light when they are
heated. Certain thermoluminescent materials can be used as dosimeters because
the amount of light emitted is proportional to the amount of radiation absorbed by
the material before heating. Two materials used in thermoluminescence dosimetry
(TLD) are lithium fluoride and calcium fluoride. These materials consist of small
crystals that can be used in a powdered form or molded into various shapes.
TLD is a two-step procedure, as illustrated in Figure 42-5. The first step is to
expose the TLD material to the radiation. A portion of the absorbed radiation en¬
ergy is used to raise electrons to higher energy levels. A characteristic of TLD
material is that some of the electrons are trapped in the higher energy locations.
The number of electrons that remain in the elevated energy positions is propor¬
tional to the amount of radiation energy absorbed, or the absorbed dose. The sec¬
ond step is to place the irradiated TLD material in a special reader unit. This unit
heats the TLD material and measures the amount of light emitted during the heat¬
ing process. Heating frees the trapped electrons and allows them to drop to their
normal low energy positions. The energy difference between the two electron lo¬
cations is given off in the form of light. By calibrating the system, the light output
is converted into absorbed dose values.
TLD dosimeters have several advantages over ionization chambers. A TLD can
measure a much greater range of dose (or exposure) values than a single ionization
chamber. They are also dose-rate independent and do not have the saturation problems common to ionization chambers. Another useful property is the ability of a TLD to collect radiation over a much longer period of time than is possible with
ionization chambers. This makes them very useful for monitoring personnel and
area exposures.
A TLD actually measures absorbed dose in the thermoluminescent material.
Since most materials used as thermoluminescent dosimeters have approximately
the same effective atomic number as soft tissue and air, the TLD reading will be
proportional to both tissue absorbed dose and exposure in air. Some TLD materi¬
als have a response that changes more with photon energy than others. Therefore,
it is desirable to calibrate a TLD system for the type of radiation (photon energy)
to be measured.
ACTIVITY MEASUREMENT
The components of such a system are shown in Figure 42-6. Systems used to measure
Radiation Measurement 625
crystal enters a PM tube in which it is converted into an electrical pulse. The rela¬
tionship of the crystal to the PM tube is shown in Figure 42-7.
Crystals
The crystals in scintillation detectors perform two major functions: (1) to ab¬
sorb the gamma photons and (2) to convert the energy into light. A number of
materials will do this, but the most commonly used is sodium iodide (Nal). The
Electron
Photomultiplier Tube Pulse
presence of the iodine in the crystal enhances photon absorption. When the crystal
is impregnated with an appropriate activator, such as thallium, Nal(Tl), it becomes
an efficient scintillator.
An important characteristic of a detector crystal is its ability to capture the photons emitted by the radioactive material and produce a pulse. This is generally
referred to as the efficiency of the detector. To a large extent, the efficiency is
related to the size of the crystal. Photons from a radioactive source are emitted
All photons intercepted by the crystal are not necessarily absorbed. A photon
can penetrate the crystal without interacting and producing a scintillation. For a
given crystal material, the chance of a photon penetrating the crystal is determined
by photon energy and crystal thickness. The penetration through a specific crystal
increases with photon energy, except when a K edge is encountered. The point to
be made is simply this: The crystal thickness determines the absorption efficiency
for the various photon energies. In general, thick crystals must be used with high
Photomultiplier Tubes
The scintillations produced in the crystal or liquid scintillator by the absorbed radiation are detected and converted into an electrical pulse by the PM tube. The basic construction of a PM tube is illustrated in Figure 42-7. The electrical components are contained in a glass cylinder approximately 15 cm long with a diameter between 2.5 cm and 5 cm. One end of the tube is a flat transparent window through which the light from the scintillator enters. Inside the window is a thin layer of material that forms a photocathode. When light photons strike the front surface of the photocathode, they undergo a photoelectric interaction, and electrons are emitted from the rear surface. The number of electrons emitted is proportional to the number of light photons, which, in turn, is proportional to the brightness of the scintillation. The number of electrons produced by a single scintillation is relatively small and would be a very weak electrical pulse. In order to have a pulse that can be counted or used to form an image, it is necessary to increase the size, or strength, of the pulse. This is the second function performed by the PM tube.
The PM tube contains a series of cup-shaped metal electrodes,
commonly re¬
ferred to as dynodes, positioned as shown in Figure 42-7. An electrical voltage
from an external power supply is applied to the dynodes through wires that enter
the rear end of the tube. The voltage is divided among the dynodes and applied so
that each succeeding dynode is more positive than the one before it. The first dyn-
ode is positive with respect to the photocathode. The electrons emitted from the
photocathode are therefore attracted to the first dynode. As they travel from the
photocathode to the dynode, they are accelerated and gain kinetic energy. When
one of the energetic electrons strikes the dynode, it has sufficient energy to knock
out several electrons. This process increases the number of electrons in the group
associated with a single scintillation. The group of electrons from the first dynode
is accelerated toward the second dynode where they strike the surface and emit
additional electrons. This process is repeated throughout the series of dynodes.
The electron group reaching the last electrode is collected and conducted out of
the tube in the form of an electrical pulse. The size of the pulse is determined by
the number of electrons. The number of electrons reaching the last electrode is
approximately 1 million times the number of electrons emitted from the cathode.
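The roughly million-fold multiplication quoted above can be modeled as repeated secondary emission. The specific figures here (four electrons per impact, ten dynodes) are an assumed illustration, not values from the text:

```python
# Overall PM tube gain modeled as secondary emission at a chain of dynodes:
# gain = (electrons emitted per electron impact) ** (number of dynodes).

def pm_gain(electrons_per_impact: int, n_dynodes: int) -> int:
    return electrons_per_impact ** n_dynodes

gain = pm_gain(4, 10)  # 1,048,576 -- about one million
```

Because the gain is exponential in the number of stages, a small change in the per-dynode voltage (and hence in electrons per impact) changes the overall gain substantially, which is why the applied high voltage works as a gain control.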
The output from the PM tube is an electrical pulse whose size is proportional to
the brightness of the scintillation, which should be proportional to the energy of
the radiation photon or particle. For a given scintillation brightness (photon en¬
ergy), the size of the pulse depends on the electron multiplication, or gain, that
occurs within the PM tube. This is influenced, to some extent, by the amount of
voltage applied between the tube dynodes. As this voltage is increased, the elec¬
trons gain more energy in moving from one dynode to another. This causes each
electron to produce a greater number of electrons when it strikes the next dynode.
In some detector systems, the voltage applied to the PM tube is adjustable and can
be used as a gain control to change the pulse size (and to calibrate a scintillation
detector with respect to radiation energy).
Amplifier
Even though the pulse from the PM tube is increased in size approximately 1
million times, it is still too small to be processed by the counting or imaging com¬
ponents. The size of the pulse can be increased by passing it through an electronic amplifier, as shown in Figure 42-8. It is important that all pulses be amplified by the same factor so that a proportionality between pulse size and photon energy is maintained. An amplifier with this characteristic is generally known as a linear amplifier.
Scintillation Probes
It is often necessary to measure the radioactivity in specific organs or areas of the body. While this can be done with a gamma camera,
it may be more practical in some situations to perform the measurements with a
collimated scintillation detector or probe. This type of detector consists of a single
scintillation crystal mounted in a shield or collimator. The shield is usually con¬
structed of lead and performs two basic functions: It shields the detector crystal
from environmental radiation and produces a known field of view (FOV) for the
Figure 42-8 Two Factors, Amplifier Gain and PM Tube High Voltage, Used To Adjust the
Size of the Pulse from a Scintillation Detector
Radiation Measurement 629
crystal. The efficiency of the detector is determined by the size of the crystal and
its distance from the radiation source.
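The geometry described above can be sketched quantitatively. For a simple open cylindrical collimator, the field of view widens linearly with distance, and the count rate from a point source in air falls off roughly with the inverse square of distance. The bore dimensions below are assumed values for illustration, not specifications from the text:

```python
# Sketch of two geometric properties of a collimated scintillation probe.
# Collimator bore dimensions are assumed values for illustration.

def fov_diameter(bore_diameter_cm, bore_length_cm, distance_cm):
    """Diameter of the field of view of an open cylindrical collimator
    at a given distance from its face (similar-triangles geometry)."""
    return bore_diameter_cm * (bore_length_cm + distance_cm) / bore_length_cm

def relative_efficiency(distance_cm, reference_cm=10.0):
    """Inverse-square falloff of count rate for a point source in air,
    relative to the rate at a reference distance."""
    return (reference_cm / distance_cm) ** 2

# A 2-cm bore, 10 cm long: the FOV doubles 10 cm from the collimator face.
print(fov_diameter(2.0, 10.0, 10.0))   # 4.0
# Moving the source from 10 cm to 20 cm cuts the count rate to one quarter.
print(relative_efficiency(20.0))       # 0.25
```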
Well Counters

The activity of small radioactive samples can be measured with a well counter. A high counting efficiency is obtained by inserting the source into a hole, or well, in the crystal, as shown in Figure 42-9. This configuration virtually surrounds the source with scintillation material. If the source material fills the well, more of the radiation will escape out of the top, and the counting efficiency will be reduced.

Figure 42-9 Scintillation Well Counter Used To Measure the Relative Activity of Radioactive Samples
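The geometric advantage of the well, and the loss of efficiency as the source sits nearer the opening, can be sketched with a solid-angle calculation. The well radius and source depths below are assumed values, and attenuation within the crystal is ignored; this is an illustration of the geometry only:

```python
import math

# Sketch: geometric efficiency of a well counter for a point source on the
# well axis. Radiation escaping through the circular opening at the top is
# lost; everything else is intercepted by the surrounding crystal.
# Well dimensions are assumed values for illustration.

def escape_fraction(depth_cm: float, well_radius_cm: float) -> float:
    """Fraction of emissions escaping through the top opening: the solid
    angle of a disk of radius r seen from an on-axis point at depth d,
    divided by the full sphere (4*pi)."""
    d, r = depth_cm, well_radius_cm
    solid_angle = 2.0 * math.pi * (1.0 - d / math.hypot(d, r))
    return solid_angle / (4.0 * math.pi)

def geometric_efficiency(depth_cm: float, well_radius_cm: float) -> float:
    """Fraction of emissions headed into the crystal."""
    return 1.0 - escape_fraction(depth_cm, well_radius_cm)

# Deep in a narrow well almost everything enters the crystal; near the
# opening, more of the radiation escapes out of the top.
for depth in (4.0, 1.0, 0.25):
    eff = geometric_efficiency(depth, 0.8)
    print(f"depth {depth:4.2f} cm: geometric efficiency {eff:.3f}")
```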
Liquid Scintillation Counters

For low-energy beta emitters, a high counting efficiency is obtained when the radioactive sample is mixed into the scintillator itself, as shown in Figure 42-10. The scintillation liquid
usually consists of three components: (1) a solvent, (2) a primary scintillator, and
(3) a secondary scintillator. The scintillators are chemical compounds with
fluorescent properties. The function of the primary scintillator is to convert some
of the beta particle energy into light. A portion of the light from many primary
scintillators is in the ultraviolet region of the spectrum and is not readily detectable
by the PM tubes. The purpose of the secondary scintillator is to absorb the ultraviolet light and emit light of a color (wavelength) more readily detectable by the
PM tubes.
The radioactive material must be properly prepared so that it does not significantly decrease the transparency of the liquid. Also, the presence of certain chemi-
Figure 42-10 Liquid Scintillation Counter Used To Measure the Activity of Beta-Emitting
Radionuclides
Radiation Measurement 631
Activity Calibrators

It is usually desirable to measure the activity of a radionuclide before it is administered to a patient. In most facilities, this is done with an instrument known as an activity calibrator. The instrument consists of a gas-filled ionization chamber connected to a high-voltage power supply; the sample is inserted into the chamber, and the activity for the selected nuclide is displayed. When radiation from the sample ionizes the gas within the chamber, the ions
are attracted to the electrode with a polarity opposite to the charge on the ion.
Positive ions are attracted to the negative electrode, and negative ions, or freed electrons, are attracted to the positive electrode. The ionization, in effect, makes the
chamber electrically conductive. While ionization is taking place, electrons are
collected by the positive electrode, and a current flows through the circuit as indi¬
cated. The current is proportional to the rate at which ions are being produced,
which, in turn, is related to the activity. The current is measured by the electrometer. The system is calibrated so that the current value is displayed in units of activity. The amount of ionization produced by a given activity depends on the number of photons emitted per transition. (This was discussed in Chapter 5.) Some nuclei go through a transition process that creates only one photon, whereas others undergo transitions that
create two or more photons. The various nuclides also produce photons with different energies. Both the percentage of photons that will interact with a gas and the
number of ions produced by each photon depend on the photon energy. Because of
this, it is necessary to use a different calibration factor to relate activity to ionization for each radionuclide. Most systems have a switch that can be used to set the
correct calibration factor for a variety of radionuclides.
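The per-nuclide calibration described above can be sketched as a simple lookup. The nuclide list and numerical factors below are invented placeholders for illustration; real instruments use factory-determined constants for each radionuclide:

```python
# Sketch of per-nuclide calibration in an activity calibrator: the measured
# ionization current is converted to activity using a factor selected for
# the nuclide. The factors below are invented placeholders, not the
# constants of any actual instrument.

CALIBRATION_FACTORS_PA_PER_MBQ = {   # ionization current per unit activity
    "Tc-99m": 0.042,
    "I-131": 0.115,
    "F-18": 0.310,
}

def activity_mbq(current_pa: float, nuclide: str) -> float:
    """Convert electrometer current (pA) to activity (MBq) using the
    calibration factor for the selected nuclide."""
    try:
        factor = CALIBRATION_FACTORS_PA_PER_MBQ[nuclide]
    except KeyError:
        raise ValueError(f"no calibration factor stored for {nuclide}")
    return current_pa / factor

# The same current implies different activities for different nuclides,
# because photons per transition and photon energies differ.
for nuclide in ("Tc-99m", "I-131"):
    print(f"{nuclide}: {activity_mbq(21.0, nuclide):.1f} MBq")
```

The nuclide-selector switch on the instrument plays the role of the dictionary key here: it chooses which factor relates current to displayed activity.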
Index

A
ultrasound, 373
time gain compensation, 373
misalignment, 193-194
patient exposure, 592
receptor sensitivity, 248-249
Illuminance, 51-52
scattered radiation, 186
artifact, 192-194
Image acquisition time, magnetic resonance imaging, 498-500
contrast, 174, 176-178
radionuclide, 81
Lateral blur, ultrasound, 399, 400
Ionization, 145
Latitude film, 240
Magnetic field
Kinetic energy, electron, 27-28
magnetic resonance imaging, 418-419
K edge, 153
nuclear magnetic resonance, 427-429
total, 40
defined, 22
Photon concentration, 40
ultrasound, 379-381
unit, 52
exposure, 42
Photon energy, 24-25
Power supply. See also Generator
atomic number, 152
high-frequency, 123
Presaturation
contrast, 175-178
flow-void effect, 507-508
intensifying screen, 202
regional, 523-525
photoelectric interaction, 152-153
Preservative, photographic process, 215-216
wavelength, relationship, 26
Photon fluence, 40
Proton, 54-55
Photon interaction, 141-144
attenuation, 149-152
magnetic resonance imaging, 417
Proton density, magnetic resonance imaging, 417, 445-446
rates, 149-156
Photon radiation, 612-614
Pulse diameter, ultrasound, 385-388
absorbed dose, 603-604
coils, 420-421
Rad, defined, 46
receiver, 421
Radiation, 18-20
transmitter, 421
area exposure limits, 609
Radioactive equilibrium, 90-95
biological impact, 48-50
secular equilibrium, 91-92
conversion factors, 38
transient equilibrium, 92-95
energy, 44-45
Radioactive lifetime, 83-87
exposure, 41-44
variation, 83, 84
concept, 41-42
Radioactive material fraction, half-life, 88-90
exposure limits, 607-609
exposure sources, 610-611
tabulation, 89
utilization factor, 610
Radioactive transition, 69-82
workload, 610
radiation produced, 69, 70
external source, 612-614
Radioactivity, 83-96
matter, interactions with
cumulated activity, 86-87
coherent scatter, 144
effective lifetime, 96
competitive interaction, 156-157
quantity of radioactive material, 86
Compton interaction, 143-144
time, 87-90
electron interactions, 144-148
Radiographic blur
electron range, 145-146
object location, nomogram, 279
interaction, 141-158
sources, 268
linear energy transfer, 146-148
Radiographic density
pair production, 144
automatic exposure control, 250-251
photoelectric interaction, 142-143
radiation-measuring device, 250
photon interactions, 141-144
exposure time, 245-246
positron interactions, 148-149
factors, 244
measurement, 615-632
KVp, 246-247
occupational exposure, 609
MA, 244-245
gamma camera, 530-531
Star test pattern, focal spot size, 275-276
collimator, 559-561, 562
Statistical counting error, 575-585
image blur, 203, 204
Statistical fluctuation, photon
counting error, 575-576
imaging method, 13-15
counting noise, 575-576
receptor exposure, 243
Shadowing, ultrasound, 403-404
Statistics, 575-585
Sievert (Sv), defined, 49
Step-down transformer, 113
Signal acquisition, magnetic resonance imaging, 475
Step-up transformer, 113
k space, 475
Stimulable phosphor receptor, digital image processing, 339
Signal strength, nuclear magnetic resonance
Storage capacity, digital image, 319
relative sensitivity, 431
Strontium, radionuclide, 81
relative signal strength, 431
Structure noise, grain, 313-314
Subject contrast, 174
Single photon emission computed tomography, 572-573
Superconducting magnet, magnetic resonance imaging, 419-420
data acquisition, 573
Superparamagnetic material, magnetic resonance imaging, 455
image reconstruction, 573
imaging system, 572
Surface integral exposure, radiation, 42-44
Slice selection, magnetic resonance