NDT Level II
Level I & II
2nd Floor Safa Apartments, Red Hills, Near Niloufer Hospital, Lakdikapul, Hyderabad T.S-
500004
email: [email protected] website: www.imechinstitute.in | Contact : 9700008685,
9700008797
TABLE OF CONTENTS
1. Introduction
1. Penetrant Testing
2. Magnetic Particle Testing
3. Ultrasonic Testing
4. Radiography Testing
There is also a range of other, newer techniques with specialized applications in
limited fields.
In a book of this size, it is not possible to describe all the methods in detail.
Many NDT methods have reached a stage of development where a semi-skilled
operator, following detailed procedural instructions and with safeguards built into the
equipment, can use them. The advent of microcomputers allows procedures to be
preprogrammed and cross-checked, so that a competent operator does not necessarily
need to understand the physics of the technique being used. However, it is desirable
that the supervisors of inspection, the designers who specify the techniques to be used
in terms of their performance and attainable sensitivity, and the development engineers
working on new methods have a thorough scientific understanding of the
fundamental physics involved.
One of the problems in NDT is that there is often too large a choice of methods and
techniques, with too little information on the performance of each in terms of overall
defect sensitivity, speed of operation, running costs, or overall reliability. Some NDT
methods have been much oversold in recent years. However, rapid developments have
been made in the computer modeling of electrical, magnetic, and radiation fields, which
appear to have considerable potential for realistically representing the conditions met
in practical specimens.
Terms such as ‘defect detectability’ are so widespread in NDT, and have been used for
so many years, that it seems unnecessary to propose alternatives. The term “defect”
signifies that the material or fabrication is defective, i.e., unserviceable. Thus, on a strict
interpretation of the words, there is no such thing as an “acceptable defect”.
Although a great deal of non-destructive testing is carried out for flaw detection in
materials (e.g., the detection of weld defects, lack of bonding in adhesive joints, and
fatigue cracks developing during service), it should not be forgotten that NDT has
important applications in the examination of assemblies for missing or displaced
components, to measure spacings, etc.
It has become quite fashionable in recent years to divide defect-evaluation methods
into quality-control criteria and fitness-for-purpose criteria. For the latter, acceptance
standards must be defined on a case-by-case basis, usually based on fracture
mechanics. For quality-control criteria, the requirements are based on more general
engineering experience, and the inspection is directed towards detecting the most
common defects, with an implication of less severe NDT requirements.
Regarding the comparison and evaluation of different NDT methods, sets of
specimens are collected and inspected non-destructively. In a trial involving fatigue
cracks at section changes on a light-alloy panel, it is not surprising that eddy
current and penetrant testing techniques proved superior to ultrasonic or radiographic
testing. On corroded surfaces, penetrant testing was found to be inferior to eddy
current testing in reliability, if not in sensitivity.
Finally, most NDT techniques have a wide range of applications, and comparison data
are valid only for a particular application, a specific type of defect, and a particular
material. Probabilities and confidence levels are also needed, and laboratory trials
cannot be extrapolated to field results.
Radiographic inspection can be applied to almost any material. It uses radiation from
isotopic sources, or X-radiation, that penetrates through the job and produces an
image on film. The amount of radiation transmitted through the material depends on
the density and thickness of the material. As material thickness increases, radiography
becomes less sensitive as an inspection method.
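The dependence of transmitted intensity on thickness can be sketched with the standard exponential attenuation law, I = I0·exp(−μx); the attenuation coefficient used below is an illustrative assumption, not a tabulated value:

```python
import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Fraction of incident radiation transmitted through a material,
    from the exponential attenuation law: I / I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# Illustrative linear attenuation coefficient (assumed order of magnitude
# for steel at a typical X-ray energy; not a reference value)
mu_steel = 0.5  # per cm

for t in (1.0, 2.0, 5.0):
    print(f"{t} cm of steel: {transmitted_fraction(mu_steel, t):.3f} transmitted")
```

The rapid fall-off of the transmitted fraction with thickness is why radiography loses sensitivity on thick sections.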
Surface discontinuities that can be detected by this method include undercuts,
longitudinal grooves, incomplete filling of grooves, excessive reinforcement, overlap,
concavity at the root etc. Subsurface discontinuities include gas porosity, slag
inclusions, cracks, inadequate penetration, incomplete fusion etc.
Leak Testing:
Leak testing is used to check fabricated components and systems such as nuclear
reactors, pressure vessels, electronic valves, vacuum equipment, and gas containers. A
leak is a passage of gas from one side of a wall or container to the other, under a
pressure or concentration difference. It is measured in cc/sec.
Depending on the range of leak-detection capability required, a number of test methods
are available. Some examples are pressure drop/rise, ultrasonic leak detectors, bubble
tests, and ammonia-sensitized paper, with detection capabilities up to 10⁻⁴ cc/sec.
The halogen diode sniffer, helium mass spectrometer, and argon mass spectrometer
cover the range of 10⁻⁷ to 10⁻¹¹ cc/sec.
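As a rough illustration, the detection thresholds quoted above can be used to shortlist candidate methods for a given leak rate. The grouping and the exact threshold assigned to each method below are assumptions drawn loosely from the figures in the text:

```python
# Approximate detection thresholds in cc/sec, taken from the text;
# the assignment of one threshold per group is an illustrative assumption.
METHODS = {
    "pressure drop/rise, ultrasonic, bubble test, ammonia paper": 1e-4,
    "halogen diode sniffer": 1e-7,
    "helium / argon mass spectrometer": 1e-11,
}

def methods_for_leak(rate_cc_per_sec):
    """Return the test methods sensitive enough to detect a leak of the given rate."""
    return [name for name, threshold in METHODS.items()
            if rate_cc_per_sec >= threshold]

# A 1e-9 cc/sec leak is below the reach of all but the mass spectrometers
print(methods_for_leak(1e-9))
```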
Ultrasonic Testing:
Ultrasonic testing is a method especially suited to the detection of internal
flaws. Beams of high-frequency sound waves are introduced into a specimen by a unit
referred to as a probe or search unit. These sound waves travel through the
specimen with attendant loss of energy, and are reflected at interfaces.
The degree of reflection depends largely on the physical state of the matter on
the opposite side of the interface, and to a lesser extent on its specific physical
properties. This reflected energy is received by the probe, processed, and
displayed on the cathode ray tube (CRT) of the equipment in the form of pips,
or echoes. The position and nature of an echo are analyzed to determine the
exact location and nature of the defect.
Cracks, laminations, shrinkage cavities, bursts, pores, bonding faults, and other
discontinuities that act as a metal-gas interface can be easily detected.
Inclusions and other inhomogeneities can also be detected by partial reflection of
the sound waves. Superior penetrating power and high sensitivity to fine flaws are
among the advantages of this method of inspection.
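Why a metal-gas interface reflects so strongly can be illustrated with the standard normal-incidence energy reflection coefficient, R = ((Z2 − Z1)/(Z2 + Z1))², where Z is the acoustic impedance of each medium. The impedance values below are rounded, assumed figures:

```python
def reflection_coefficient(z1, z2):
    """Fraction of sound energy reflected at a normal-incidence interface
    between media of acoustic impedance z1 and z2 (in kg/(m^2*s))."""
    return ((z2 - z1) / (z2 + z1)) ** 2

# Approximate acoustic impedances (assumed round values, for illustration)
steel = 45e6   # kg/(m^2*s)
air = 410.0    # kg/(m^2*s)

# A steel/air boundary (e.g. a crack or pore) reflects nearly all the energy
print(f"steel/air interface: R = {reflection_coefficient(steel, air):.6f}")
```

The near-total reflection at any metal-gas boundary is what makes cracks and pores stand out so clearly on the display.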
The specific advantage of this testing is that it is non-localized: it is not
necessary to examine specific regions of a structure, as a large volume can be
inspected at one time. It can also serve as a continuous monitoring system.
Thermographic Methods:
The first industrial applications of thermographic (infrared) techniques were the
measurement of stationary temperature fields, such as the measurement of
temperature across hot-rolled steel strip, or of variations in insulation on the wall of a
building. There have been a few studies with contact sensors, but most applications
use a thermographic camera, in which a suitable lens images the specimen onto an
infrared-sensitive detector. A modern camera might typically have a
cadmium-mercury-telluride detector with a spectral bandwidth of 8-14 micrometres, a
germanium lens system, a temperature resolution of about 0.2 K, and an effective
temporal resolution of about 20 ms.
MAGNETIC PARTICLE INSPECTION
The sensitivity is greatest for surface discontinuities and diminishes rapidly with
increasing depth of subsurface discontinuities below the surface. Typical
discontinuities that can be detected by this method are cracks, laps, cold shuts,
seams, and laminations.
Whenever this technique is used to produce a magnetic flux in the part, maximum
sensitivity will be to linear discontinuities oriented perpendicular to the
lines of flux.
Basic Principles
In theory, magnetic particle inspection (MPI) is a relatively simple concept. It can be
considered a combination of two nondestructive testing methods: magnetic flux
leakage testing and visual testing. Consider a bar magnet. It has a magnetic field in and
around the magnet. Any place where a magnetic line of force exits or enters the magnet
is called a pole. A pole where a magnetic line of force exits the magnet is called a north
pole, and a pole where a line of force enters the magnet is called a south pole. When a
bar magnet is broken at the center of its length, two complete bar magnets with
magnetic poles on each end of each piece will result. If the magnet is just cracked but
not broken completely in two, a north and a south pole will form at each edge of the
crack. The magnetic field exits the north pole and re-enters at the south pole. The
magnetic field spreads out when it encounters the small air gap created by the crack,
because the air cannot support as much magnetic field per unit volume as the magnet
can. When the field spreads out, it appears to leak out of the material and is thus
called a flux leakage field.
If iron particles are sprinkled on a cracked magnet, the particles will be attracted to and
cluster not only at the poles at the ends of the magnet but also at the poles at the edges
of the crack. This cluster of particles is much easier to see than the actual crack, and
this is the basis for magnetic particle inspection.
The first step in a magnetic particle inspection is to magnetize the component that is to
be inspected. If any defects on or near the surface are present, the defects will create a
leakage field. After the component has been magnetized, iron particles, either in a dry
or a wet suspended form, are applied to the surface of the magnetized part. The
particles will be attracted to and cluster at the flux leakage fields, thus forming a visible
indication that the inspector can detect.
In the early 1920’s, William Hoke realized that magnetic particles (colored metal
shavings) could be used with magnetism as a means of locating defects. Hoke
discovered that a surface or subsurface flaw in a magnetized material caused the
magnetic field to distort and extend beyond the part. This discovery was brought to his
attention in the machine shop. He noticed that the metallic grindings from hard steel
parts, which were being held by a magnetic chuck while being ground, formed patterns
on the face of the parts which corresponded to the cracks in the surface. Applying a fine
ferromagnetic powder to the parts caused a build up of powder over flaws and formed a
visible indication.
In the early 1930’s, magnetic particle inspection (MPI) was quickly replacing the
oil-and-whiting method (an early form of liquid penetrant inspection) as the method of
choice of the railroads for inspecting steam engine boilers, wheels, axles, and track.
Today, the MPI inspection method is used extensively to check for flaws in a large
variety of manufactured materials and components. MPI is used to check materials
such as steel bar stock for seams and other flaws prior to investing machining time
during the manufacturing of a component. Critical automotive components are
inspected for flaws after fabrication to ensure that defective parts are not placed into
service. MPI is used to inspect some highly loaded components that have been
in service for a period of time. For example, many components of high-performance
race cars are inspected whenever the engine, drive train and other systems are
overhauled. MPI is also used to evaluate the integrity of structural welds on bridges,
storage tanks, and other safety critical structures.
Magnetism
Magnets are very common items in the workplace and household. Uses of magnets
range from holding pictures on the refrigerator to causing torque in electric motors. Most
people are familiar with the general properties of magnets but are less familiar with the
source of magnetism. The traditional concept of magnetism centers around the
magnetic field and what is known as a dipole. The term “magnetic field” simply describes
a volume of space where there is a change in energy within that volume. This change in
energy can be detected and measured. The location where a magnetic field can be
detected exiting or entering a material is called a magnetic pole. Magnetic poles have
never been detected in isolation but always occur in pairs and, thus, the name dipole.
Therefore, a dipole is an object that has a magnetic pole on one end and a second
equal but opposite magnetic pole on the other.
A bar magnet can be considered a dipole with a north pole at one end and south pole at
the other. A magnetic field can be measured leaving the dipole at the north pole and
returning to the magnet at the south pole. If a magnet is cut in two, two magnets or dipoles
are created out of one. This sectioning and creation of dipoles can continue to the atomic
level. Therefore, the source of magnetism lies in the basic building block of all matter...the
atom.
All matter is composed of atoms, and atoms are composed of protons, neutrons and
electrons. The protons and neutrons are located in the atom’s nucleus and the
electrons are in constant motion around the nucleus. Electrons carry a negative
electrical charge and produce a magnetic field as they move through space. A magnetic
field is produced whenever an electrical charge is in motion. The strength of this field is
called the magnetic moment.
This may be hard to visualize on a subatomic scale, but consider electric current flowing
through a conductor. When the electrons (electric current) are flowing through the
conductor, a magnetic field forms around the conductor. The magnetic field can be
detected using a compass. The magnetic field will place a force on the compass
needle, which is another example of a dipole. Since all matter is comprised of atoms, all
materials are affected in some way by a magnetic field. However, not all materials react
the same way. This will be explored more in the next section.
In most atoms, electrons occur in pairs. Each electron in a pair spins in the
opposite direction, so when electrons are paired together, their opposite spins
cause their magnetic fields to cancel each other. Therefore, no net magnetic
field exists. Alternately, materials with some unpaired electrons will have a net
magnetic field and will react more to an external field. Most materials can be
classified as ferromagnetic, diamagnetic or paramagnetic.
Diamagnetic metals have a very weak and negative susceptibility to magnetic fields.
Diamagnetic materials are slightly repelled by a magnetic field and the material does
not retain the magnetic properties when the external field is removed. Diamagnetic
materials are solids with all paired electrons and, therefore, no permanent net magnetic
moment per atom. Diamagnetic properties arise from the realignment of the electron
orbits under the influence of an external magnetic field. Most elements in the periodic
table, including copper, silver, and gold, are diamagnetic.
Paramagnetic metals have a small and positive susceptibility to magnetic fields. These
materials are slightly attracted by a magnetic field and the material does not retain the
magnetic properties when the external field is removed. Paramagnetic properties are
due to the presence of some unpaired electrons and from the realignment of the
electron orbits caused by the external magnetic field. Paramagnetic materials include
magnesium, molybdenum, lithium, and tantalum.
Magnetic Domains
Ferromagnetic materials get their magnetic properties not only because their atoms
carry a magnetic moment but also because the material is made up of small regions
known as magnetic domains. In each domain, all of the atomic dipoles are coupled
together in a preferential direction. This alignment develops as the material develops its
crystalline structure during solidification from the molten state. Magnetic domains can
be detected using Magnetic Force Microscopy (MFM) and images of the domains like
the one shown below can be constructed. During solidification a trillion or more atom
moments are aligned parallel so that the magnetic force within the domain is strong in
one direction. Ferromagnetic materials are said to be characterized by “spontaneous
magnetization” since they obtain saturation magnetization in each of the domains
without an external magnetic field being applied. Even though the domains are
magnetically saturated, the bulk material may not show any signs of magnetism,
because the domains themselves are randomly oriented relative to each other.
Magnetic Force Microscopy (MFM) image showing the magnetic domains in a piece of
heat-treated carbon steel.
Ferromagnetic materials become magnetized when the magnetic domains within the
material are aligned. This can be done by placing the material in a strong external
magnetic field or by passing electric current through the material. Some or all of the
domains can become aligned. The more domains that are aligned, the stronger the
magnetic field in the material. When all of the domains are aligned, the material is said
to be magnetically saturated. When a material is magnetically saturated, no additional
amount of external magnetization force will cause an increase in its internal level of
magnetization.
Magnets come in a variety of shapes, and one of the more common is the horseshoe (U)
magnet. The horseshoe magnet has north and south poles just like a bar magnet, but the
magnet is curved so the poles lie in the same plane. The magnetic lines of force flow from
pole to pole just as in the bar magnet. However, since the poles are located closer together
and a more direct path exists for the lines of flux to travel between the poles, the magnetic
field is concentrated between the poles. If a bar magnet were placed across the ends of a
horseshoe magnet, or if a magnet were formed in the shape of a ring, the lines of magnetic
force would not even need to enter the air. A magnet in which the magnetic field is
completely contained within the material probably has limited use in itself. However, it is
important to understand that a magnetic field can flow in a loop within a material, a point
that matters when the concept of circular magnetization is covered later.
Electromagnetic Fields
Magnets are not the only source of magnetic fields. In 1820, Hans Christian Oersted
discovered that an electric current flowing through a wire caused a nearby compass to
deflect, indicating that the current in the wire was generating a magnetic field. Oersted
studied the nature of the magnetic field around a long straight wire. He found that the
magnetic field existed in circular form around the wire and that the intensity of the field
was directly proportional to the amount of current carried by the wire. He also found
that the field was strongest close to the wire and diminished with distance from the
conductor until it could no longer be detected. In most conductors, the magnetic field
exists only as long as the current is flowing (i.e., an electrical charge is in motion).
However, in ferromagnetic materials the electric current will cause some or all of the
magnetic domains to align, and a residual magnetic field will remain.
Oersted also noticed that the direction of the magnetic field was dependent on the
direction of the electrical current in the wire. A three-dimensional representation of the
magnetic field is shown below. There is a simple rule for remembering the direction of
the magnetic field around a conductor, called the right-hand rule: if a person grasps a
conductor in the right hand with the thumb pointing in the direction of the current, the
fingers will circle the conductor in the direction of the magnetic field.
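Oersted's two observations, that the field is proportional to the current and diminishes with distance, are captured by the standard textbook relation for a long straight conductor, B = μ0·I/(2πr). A minimal sketch (the formula itself is not stated in the text above):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def field_around_wire(current_a, distance_m):
    """Flux density (tesla) at a given radial distance from a long
    straight conductor: B = mu0 * I / (2 * pi * r)."""
    return MU_0 * current_a / (2 * math.pi * distance_m)

# The field is proportional to current and falls off with distance,
# as Oersted observed with his compass
for r in (0.01, 0.05, 0.10):
    print(f"r = {r} m: B = {field_around_wire(100, r):.2e} T")
```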
For the right-hand rule to work, one important thing must be remembered about the
direction of current flow. Standard convention has current flowing from the positive
terminal to the negative terminal. This convention is credited to the French physicist
Ampere, who theorized that electric current was due to a positive charge moving from
the positive terminal to the negative terminal. However, it was later discovered that it is
the movement of the negatively charged electron that is responsible for electrical
current. Rather than changing centuries of theory and equations, Ampere’s convention
is still used today.
Magnetic Field Produced by a Coil
When a current-carrying conductor is formed into a loop, or several loops to form a coil,
a magnetic field develops that flows through the center of the loop or coil along its
longitudinal axis and circles back around the outside of the loop or coil. The magnetic
field circling each loop of wire combines with the fields from the other loops to produce
a concentrated field down the center of the coil. A loosely wound coil is illustrated
below to show the interaction of the magnetic field. When the coil is wound tighter, the
magnetic field becomes essentially uniform down the length of the coil.
The strength of a coil’s magnetic field increases not only with increasing current but
also with each loop that is added to the coil. A long, straight coil of wire is called a
solenoid and can be used to generate a nearly uniform magnetic field, similar to that of
a bar magnet. The concentrated magnetic field inside a coil is very useful in
magnetizing ferromagnetic materials for inspection using the magnetic particle testing
method. Be aware that the field outside the coil is weak and is not suitable for
magnetizing ferromagnetic materials.
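The near-uniform field inside a tightly wound coil can be estimated with the standard long-solenoid relation, B = μ0·(N/L)·I. The coil dimensions and current below are assumed purely for illustration:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def solenoid_flux_density(turns, length_m, current_a):
    """Flux density inside a long air-cored solenoid: B = mu0 * (N/L) * I."""
    return MU_0 * (turns / length_m) * current_a

# Illustrative coil: 500 turns over 0.25 m carrying 10 A (assumed values)
b = solenoid_flux_density(500, 0.25, 10)
print(f"B inside coil = {b * 1000:.1f} mT")
```

Doubling either the turns-per-length or the current doubles the field, matching the statement that field strength grows with both current and number of loops.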
Until now, only the qualitative features of the magnetic field have been discussed.
However, it is necessary to be able to measure and express quantitatively the various
characteristics of magnetism. Unfortunately, a number of unit conventions are in use,
as shown below. SI units will be used in this material. The advantage of using SI units
is that they are traceable back to an agreed set of four base units: the meter, kilogram,
second, and ampere.
The units for magnetic field strength H are ampere/meter. A magnetic field strength of 1
ampere/meter is produced at the center of a single circular conductor of diameter 1
meter carrying a steady current of 1 ampere. The number of magnetic lines of force
cutting through a plane of a given area at a right angle is known as the magnetic flux
density B. The flux density, or magnetic induction, has the tesla as its unit. One tesla is
equal to 1 newton per ampere-meter (N/(A·m)). From these units it can be seen that the
flux density is a measure of the force applied by the magnetic field. The gauss is the
CGS unit for flux density and is commonly used by US industry. One gauss represents
one line of flux passing through one square centimeter of air oriented 90 degrees to the
flux flow.
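The definition above (1 A/m at the center of a 1 m diameter loop carrying 1 A) and the standard tesla-gauss relationship (1 T = 10⁴ G) can be checked with a short sketch:

```python
def h_at_loop_center(current_a, diameter_m):
    """Field strength H (A/m) at the center of a single circular loop:
    H = I / d  (equivalently I / (2r))."""
    return current_a / diameter_m

def tesla_to_gauss(b_tesla):
    """Convert flux density from tesla (SI) to gauss (CGS): 1 T = 10**4 G."""
    return b_tesla * 1e4

# The defining case from the text: 1 A through a 1 m diameter loop gives 1 A/m
print(h_at_loop_center(1.0, 1.0))   # 1.0 A/m
print(tesla_to_gauss(0.5))          # 5000.0 gauss
```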
Quantity             SI units (Sommerfeld)   SI units (Kennelly)   CGS units (Gaussian)
Flux density (B)     tesla                   tesla                 gauss
Magnetization (M)    A/m                     ----                  erg·Oe⁻¹·cm⁻³
The total number of lines of magnetic force in a material is called the magnetic flux Φ.
The strength of the flux is determined by the number of magnetic domains that are
aligned within a material. The total flux is simply the flux density applied over an area.
Flux carries the unit of the weber, which is simply a tesla·square meter. The
magnetization is a measure of the extent to which an object is magnetized. It is a
measure of the magnetic dipole moment per unit volume of the object. Magnetization
carries the same units as magnetic field strength: amperes/meter.
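The relation "total flux = flux density × area" can be illustrated with assumed values:

```python
def magnetic_flux_weber(flux_density_tesla, area_m2):
    """Total magnetic flux (weber) = flux density (tesla) * area (m^2);
    1 Wb = 1 T * m^2."""
    return flux_density_tesla * area_m2

# Illustrative: 1.2 T over a 2 cm x 2 cm cross-section (assumed values)
phi = magnetic_flux_weber(1.2, 0.02 * 0.02)
print(f"flux = {phi:.2e} Wb")
```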
The Hysteresis Loop and Magnetic Properties
A great deal of information can be learned about the magnetic properties of a material
by studying its hysteresis loop. A hysteresis loop shows the relationship between the
induced magnetic flux density B and the magnetizing force H. It is often referred to as
the B-H loop. An example hysteresis loop is
shown in the figure.
As the magnetizing force is increased in the negative direction, the material will again
become magnetically saturated, but in the opposite direction (point “d”). Reducing H to
zero brings the curve to point “e”; the material will have a level of residual magnetism
equal to that achieved in the other direction. Increasing H back in the positive direction
will return B to zero. Notice that the curve does not return to the origin of the graph,
because some force is required to remove the residual magnetism. The curve will take
a different path from point “f” back to the saturation point, where it will complete the loop.
From the hysteresis loop, a number of primary magnetic properties of a material can be
determined.
Residual Magnetism or Residual Flux - the magnetic flux density that remains
in a material when the magnetizing force is zero. Note that residual magnetism
and retentivity are the same when the material has been magnetized to the
saturation point. However, the level of residual magnetism may be lower than
the retentivity value when the magnetizing force did not reach the saturation
level.
Coercive Force - The amount of reverse magnetic field which must be applied to
a magnetic material to make the magnetic flux return to zero. (The value of H at
point C on the hysteresis curve.)
The shape of the hysteresis loop tells a great deal about the material being magnetized.
The hysteresis curves of two different materials are shown in the graph.
Relative to the other material, the material with the wide hysteresis loop has:
Lower Permeability
Higher Retentivity
Higher Coercivity
Higher Reluctance
Higher Residual Magnetism
The material with the narrower hysteresis loop has:
Higher Permeability
Lower Retentivity
Lower Coercivity
Lower Reluctance
Lower Residual Magnetism.
field within the part and the greatest flux leakage at the surface of the part. As can be
seen in the image below, if the magnetic field is parallel to the defect, the field will see
little disruption and no flux leakage field will be produced.
With direct magnetization, current is passed directly through the component. Recall that
whenever current flows, a magnetic field is produced. Using the right-hand rule, which
was introduced earlier, it is known that the magnetic lines of flux form normal to the
direction of the current and form a circular field in and around the conductor. When
using the direct magnetization method, care must be taken to ensure that good
electrical contact is established and maintained between the test equipment and the
test component. Improper contact can result in arcing that may damage the
component. It is also possible to overheat components in areas of high resistance,
such as the contact points, and in areas of small cross-sectional area.
There are several ways that direct magnetization is commonly accomplished. One way
involves clamping the component between two electrical contacts in a special piece of
equipment. Current is passed through the component and a circular magnetic field is
established in and around the component. When the magnetizing current is stopped, a
residual magnetic field will remain within the component. The strength of the induced
magnetic field is proportional to the amount of current passed through the component.
The use of permanent magnets is a low cost method of establishing a magnetic field.
However, their use is limited due to lack of control of the field strength and the difficulty
of placing and removing strong permanent magnets from the component.
This technique is often referred to as a “coil shot.”
Magnetizing Current
Electric current is often used to establish the magnetic field in components during
magnetic particle inspection. Alternating current and direct current are the two basic
types of current commonly used. Currents from single-phase 110 volts to three-phase
440 volts are used when generating a magnetic field in a component. Current flow is
often modified to provide the appropriate field within the part. The type of current used
can have an effect on the inspection results so the types of currents commonly used will
be briefly reviewed.
Direct Current
Direct current (DC) flows continuously in one direction at a constant voltage. A battery is
the most common source of direct current. As previously mentioned, current is said to
flow from the positive to the negative terminal when in actuality the electrons flow in the
opposite direction. DC is very desirable when performing magnetic particle inspection in
search of subsurface defects because DC generates a magnetic field that penetrates
deeper into the material. In ferromagnetic materials, the magnetic field produced by DC
generally penetrates the entire cross-section of the component; whereas, the field
produced using alternating current is concentrated in a thin layer at the surface of the
component.
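The shallow AC penetration described here is the familiar skin effect. Its depth can be estimated with the standard skin-depth formula δ = √(ρ/(π·f·μ0·μr)); the resistivity and relative permeability below are assumed, order-of-magnitude values for a steel, not measured data:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def skin_depth_m(resistivity_ohm_m, rel_permeability, frequency_hz):
    """Standard skin-depth estimate for a conductor:
    delta = sqrt(rho / (pi * f * mu0 * mur))."""
    return math.sqrt(resistivity_ohm_m /
                     (math.pi * frequency_hz * MU_0 * rel_permeability))

# Illustrative values (assumed: rho = 1e-7 ohm*m, mur = 100) at mains frequency
d = skin_depth_m(1e-7, 100, 50)
print(f"50 Hz skin depth = {d * 1000:.2f} mm")
```

The millimeter-scale result is consistent with the statement that AC magnetization is concentrated in a thin surface layer, while DC penetrates the full cross-section.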
Alternating Current
Clearly, the skin effect limits the use of AC, since many inspection applications call for
the detection of subsurface defects. However, convenient access to AC drives its use
beyond surface-flaw inspections. Fortunately, AC can be converted to a current very
much like DC through the process of rectification. With the use of rectifiers, the
reversing AC can be converted to a one-directional current. The three commonly used
types of rectified current are described below.
Half Wave Rectified Alternating Current (HWAC)
When single-phase alternating current is passed through a rectifier, current is allowed
to flow in only one direction. The reverse half of each cycle is blocked out, so that a
one-directional, pulsating current is produced. The current rises from zero to a
maximum and then returns to zero. No current flows during the time when the reverse
cycle is blocked out. The HWAC repeats at the same rate as the unrectified current
(typically 50 or 60 hertz). Since half of the current is blocked out, the average
amperage is half that of the unaltered AC.
This type of current is often referred to as half wave DC or pulsating DC. The pulsation
of the HWAC helps magnetic particle indications form by vibrating the particles and
giving them added mobility. This added mobility is especially important when using dry
particles. The pulsation is reported to significantly improve inspection sensitivity. HWAC
is most often used to power electromagnetic yokes.
Full Wave Rectified Alternating Current (FWAC)
Full-wave rectification inverts the negative current to positive current rather than
blocking it out. This produces a pulsating DC with no interval between the pulses.
Filtering is usually performed to soften the sharp polarity switching in the rectified
current. While particle mobility is not as good as with half-wave AC, due to the
reduction in pulsation, the depth of the subsurface magnetic field is improved.
Three-phase current is often used to power industrial equipment because it has more
favorable power-transmission and line-loading characteristics. It is also highly desirable
for magnetic particle testing because, when it is rectified and filtered, the resulting
current very closely resembles direct current. Stationary magnetic particle equipment
wired with three-phase AC will usually have the ability to magnetize with AC or DC
(three-phase full-wave rectified), providing the inspector with the advantages of each
current form.
The figure below shows waveforms of the different current types: input AC, rectified AC,
and rectified + filtered AC, for half-wave, full-wave (single-phase), and full-wave
(three-phase) currents.
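The half-wave and full-wave rectification described above can be sketched numerically. Blocking the negative half-cycles (HWAC) yields roughly half the average current of inverting them (full-wave):

```python
import math

def half_wave(samples):
    """Half-wave rectification: negative half-cycles are blocked (set to zero)."""
    return [max(s, 0.0) for s in samples]

def full_wave(samples):
    """Full-wave rectification: negative half-cycles are inverted to positive."""
    return [abs(s) for s in samples]

# One cycle of single-phase AC, sampled at 100 points
cycle = [math.sin(2 * math.pi * n / 100) for n in range(100)]

def average(xs):
    return sum(xs) / len(xs)

# Full-wave delivers about twice the average current of half-wave,
# consistent with half the waveform being blocked in HWAC
print(f"half-wave average: {average(half_wave(cycle)):.3f}")
print(f"full-wave average: {average(full_wave(cycle)):.3f}")
```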
Longitudinal Magnetic Fields Distribution and Intensity
When the length of a component is several times larger than its diameter, a longitudinal
magnetic field can be established in the component. The component is often placed
longitudinally in the concentrated magnetic field that fills the center of a coil or solenoid.
This magnetization technique is often referred to as a “coil shot.” The magnetic field
travels through the component from end to end, with some flux loss along its length,
as shown in the image to the right. Keep in mind that the magnetic lines of flux occur
in three dimensions and are only shown in 2D in the image. The magnetic lines of flux
are much denser inside the ferromagnetic material than in air, because ferromagnetic
materials have much higher permeability than air. When the concentrated flux within
the material comes to the air at the end of the component, it must spread out, since
the air cannot support as many lines of flux per unit volume. To keep from crossing as
they spread out, some of the magnetic lines of flux are forced out the side of the
component. When a component is magnetized along its complete length, the flux loss
along its length is small. Therefore, when a component is uniform in cross-section and
magnetic permeability, the flux density will be relatively uniform throughout the
component. Flaws that run normal to the magnetic lines of flux will disturb the flux
lines and often cause a leakage field at the surface of the component.
Solenoid - An electrically energized coil of insulated wire, which produces a magnetic field
within the coil.
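For a rough sense of scale, the flux density inside a long solenoid follows the textbook relation B = μNI/L. The Python sketch below (function name and example values are illustrative) also shows how a ferromagnetic part inside the coil multiplies the flux through its relative permeability:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def solenoid_flux_density(turns: int, length_m: float, current_a: float,
                          relative_permeability: float = 1.0) -> float:
    """Approximate flux density (tesla) inside a long solenoid: B = mu*N*I/L.

    With a ferromagnetic core, B is multiplied by the (much larger) relative
    permeability, which is why the flux concentrates inside the part rather
    than in the surrounding air.
    """
    return MU_0 * relative_permeability * turns * current_a / length_m

# Example: a 500-turn, 0.25 m coil carrying 2 A, in air.
b_air = solenoid_flux_density(500, 0.25, 2.0)
```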
TECHNIQUES FOR LONGITUDINAL MAGNETIZATION
Techniques for creating longitudinal magnetization are as follows:
YOKE
There are two basic types of yokes that are commonly used for magnetizing purposes: permanent-magnet and electromagnetic yokes. Both are handheld and therefore quite mobile.
An electromagnetic yoke consists of a coil wound around a soft iron core, usually in the form of a horseshoe. The legs of a yoke can be either fixed or adjustable. Adjustable legs permit changing the contact spacing to accommodate irregular objects. Unlike a permanent-magnet yoke, the electromagnetic yoke can be readily switched on or off; this allows the yoke to be applied to, or removed from, the test piece whenever required. The design of the electromagnetic yoke can be based on the use of either direct current or alternating current. Varying the amount of current through the coil varies the flux density of the magnetic field. A direct current yoke has better penetration, while an AC yoke concentrates its magnetic field on the surface, providing good sensitivity to surface discontinuities over a broad area. In general, the discontinuities to be disclosed should be centrally located between the pole pieces and should lie essentially perpendicular to an imaginary line connecting them.
Extraneous leakage fields in the immediate vicinity of the poles cause an excessive particle buildup. Where this is encountered, the pole spacing is increased to limit the effect. As with prods, the maximum pole spacing is 8" and the minimum pole spacing is 3". Spacing of less than 3" will cause banding of particles around the poles of the yoke, obscuring any indication.
In operation, the part completes the magnetic circuit for the flow of magnetic flux. Yokes
that use AC for magnetization have numerous applications and can be used for
demagnetization also.
(a) Check that the AC yoke will lift a 10-pound (4.5 kg) steel bar with the legs at the inspection spacing.
(b) Check that the DC yoke will lift a 40-pound (18 kg) steel bar with the legs at the inspection spacing.
(c) Note that a 2-leg or 1-leg yoke should produce clearly defined indications on the magnetic field indicator. The indication must remain even after the excess particles have been removed.
The 2-leg yoke should produce a minimum of 30 oersteds (24 A/cm) in air, in the area of inspection.
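The oersted figure and its ampere-per-centimetre equivalent can be cross-checked, since 1 Oe corresponds to 1000/(4π) A/m. A small Python sketch (illustrative only, not part of any procedure):

```python
import math

def oersted_to_a_per_cm(h_oe: float) -> float:
    """Convert field strength from oersted to ampere per centimetre.

    1 Oe = 1000/(4*pi) A/m, i.e. roughly 0.796 A/cm per oersted, so the
    30 Oe requirement works out to about 24 A/cm, as quoted above.
    """
    return h_oe * 1000.0 / (4.0 * math.pi) / 100.0
```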
The yoke shall be calibrated once a year, or whenever it is damaged. If the yoke has not been in use for a year or more, a calibration check shall be done before its first use.
Inspection of Large Surface Areas for Surface Discontinuities
Advantages
No electrical contact
Highly portable
Can locate discontinuities in any direction, with proper yoke orientation
Disadvantages
Time consuming
Yoke must be systematically repositioned to locate discontinuities with random orientation.
Yoke must be properly positioned relative to the orientation of the discontinuity.
Relatively good contact must be established between part and poles of the magnet.
Complex part shapes may cause difficulty.
Poor sensitivity to subsurface discontinuities except in isolated areas.
COIL MAGNETIZATION
Single and multiple loop coils are used for longitudinal magnetization of components.
The field within the coil has definite direction, corresponding to the direction of lines of
force running through it. The flux density passing through the interior of the coil is
proportional to the product of current and the number of turns of the coil. Therefore
changing either the current or the number of turns of the coil can vary the magnetizing
force. For large parts, winding several turns of cable around the part can produce a coil.
Care must be taken to ensure that no indications are concealed beneath the cable.
The relationship between the length of the part being inspected to the width of the coil
must be considered. For a simple part, the effective overall distance that can be
inspected using a coil is approximately 6 – 9 inches on either side of the coil. Thus, a
part of 12 – 18 inches long can be inspected using a normal coil approximately 1" thick.
In testing large parts, either the part or the coil is moved at regular intervals for
complete coverage.
The ease with which the part can be magnetized in a coil is significantly related to the
length – diameter ratio (L/D) of the part. This is due to the demagnetizing effect of the
magnetic poles setup at the ends of the part. This effect is considerable for L/D ratios
less than 10 to 1 and is very significant for ratios less than 3 to 1. When using a coil for magnetizing a long bar, strong polarity at the ends of the part could mask transverse defects. An advantageous field in this area is assured on full-wave, three-phase DC units by special circuitry known as "quick" or "fast" break.
The various advantages and disadvantages for different material forms for Coil method are
given below:
Advantages
All generally longitudinal surfaces are longitudinally magnetized for the detection of transverse discontinuities.
Disadvantages
Parts should be centered in the coil to make effective use of the coil's length during a given shot. Part length may dictate additional shots as the coil is repositioned.
Advantages
Longitudinal field easily attained by wrapping the part with a flexible cable.
Disadvantages
Multiple processing may be required because of part shape.
Advantages
Easy and fast, especially where residual method is applicable.
Non-contact with part
Relatively complex parts can usually be processed with the same ease as a simple part.
Disadvantages
L/D ratio is important in determining the current adequacy.
Sensitivity diminishes at the ends of the part because of the general leakage field pattern.
A quick break of current is desirable to minimize end effects on short parts with low L/D ratios.
Traditional vs. Multidirectional Coil
Most traditional units will magnetize a part with a direct contact shot (for a circular field) and a longitudinal coil shot in order to inspect all planes of a part. The direct contact shot will magnetize the part in two planes while the coil will cover the third plane. The multidirectional coil induces the field in the part in the same way as the coil shot on a traditional unit. Because a coil can only induce a field in one direction, the multidirectional coil is made up of a three-coil system, each part of which induces a field in a different plane.
(a) Since the fields are all induced into the part, the chance of contact damage from arcing or clamping is eliminated.
(b) The non-contact method also allows all surfaces to be inspected in one step, since the contact surfaces are now free from interference.
The coil system can be set up to handle multiple parts at once, increasing productivity, or set up to allow any part orientation, decreasing handling, loading and unloading time.
MULTIDIRECTIONAL MAGNETIZATION
With all magnetizing methods, discontinuities that are perpendicular to the magnetic field are optimally detected. However, discontinuity detection depends heavily on material permeability and the properties of the testing medium. Magnetic particle inspection also detects defects that are not exactly perpendicular to the magnetic field. In this case, the lines of flux can be decomposed into two components, one parallel and the other perpendicular to the direction of the crack; the perpendicular component accounts for the crack's detectability. In some cases, even cracks appearing to be parallel to the field are weakly detected. The reason is that most cracks are ragged in outline, so some sections may be favorably oriented for detection. At best, however, cracks can be reliably detected only when the angle between them and the direction of magnetization is more than 30°.
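The decomposition above can be made concrete: only the component of the field perpendicular to the crack, H·sin θ, contributes to the leakage field. A short Python sketch (illustrative names) shows that at the oft-quoted 30° limit the useful component is already down to half the applied field strength:

```python
import math

def perpendicular_component(field_strength: float, crack_angle_deg: float) -> float:
    """Component of the magnetizing field perpendicular to a crack.

    Only this component produces the leakage field that attracts particles.
    At 90 degrees (crack normal to the field) detection is optimal; at the
    30-degree limit the perpendicular component is half the applied field.
    """
    return field_strength * math.sin(math.radians(crack_angle_deg))
```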
VECTOR FIELD
When two magnetizing forces are imposed simultaneously on the same part, the part is not magnetized in two directions simultaneously. Instead, a vector field is formed in the resultant direction, and the strength of the new field depends on the strengths of the two imposed fields. This is illustrated above, where Fa is the first magnetizing force in one direction and Fb is the second magnetizing force in another direction. As a result, a vector field, indicated by Fa+b, is produced. One of the major advantages of this type of magnetization is that defects occurring in almost all directions can be detected.
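The resultant field is simply the vector sum of the two imposed fields. The sketch below (plain Python, illustrative only) computes the magnitude and direction of Fa+b for an arbitrary angle between the two forces:

```python
import math

def resultant_field(fa: float, fb: float, angle_deg: float = 90.0) -> tuple:
    """Magnitude and direction (degrees measured from Fa) of the vector sum
    of two magnetizing forces separated by the given angle.

    For two equal perpendicular fields the resultant lies at 45 degrees
    with magnitude sqrt(2) times either field.
    """
    ang = math.radians(angle_deg)
    fx = fa + fb * math.cos(ang)   # component along Fa
    fy = fb * math.sin(ang)        # component perpendicular to Fa
    return math.hypot(fx, fy), math.degrees(math.atan2(fy, fx))
```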
Combined Direct Fields
When a DC magnetic field of certain direction and strength is superimposed on one of a different direction and strength, the two fields combine to form another field, as shown in the figure above. The resulting field has a direction and strength different from either of the primary fields, and is therefore very difficult to predict, especially when induced in complex shapes.
Two perpendicular magnetic fields are created, and the resultant field changes its direction as shown in the figure below. The first two lines indicate the circular flux, the next two lines indicate the longitudinal flux, and the last two lines indicate the resultant flux direction created by the combined field. The direction change occurs in such a way that, for at least a short time, some field component is perpendicular to any existing crack direction. This in turn causes particle accumulation and hence the formation of indications.
Direction of Magnetization
At least two separate examinations shall be performed on each area. During the second examination, the lines of magnetic flux shall be perpendicular to those of the first examination. All examinations shall be conducted with sufficient overlap to assure 100% coverage at the required sensitivity.
White contrast paints can be applied over the surface to enhance indications. However, they should be applied only in small amounts, and it must be demonstrated that indications can be detected through these enhancement coatings.
TOROIDAL MAGNETIZATION
When magnetizing a part with a toroidal shape, such as a solid wheel or a disk with a center opening, an induced field that is radial to the disk is most useful for the detection of discontinuities oriented in the circumferential direction. In such applications, this field may be more effective than multiple shots across the periphery.
TECHNIQUES OF CIRCULAR MAGNETIZATION
The techniques used to create a circular magnetic field in the material are as follows:
HEADSHOT TECHNIQUE
This is otherwise called the direct contact method. For small parts having no openings through the interior, circular magnetic fields are produced by direct contact with the part. This is done by clamping the part between the contact heads of a headshot machine, generally a bench unit as shown in the figure. A similar unit can also be used to supply current to a central conductor. The contact heads are so constructed that the surfaces of the part are not damaged, either physically by pressure or structurally by heat from arcing or from high resistance at the points of contact.
The contact heads must be kept clean in order to carry out useful inspection. For the complete inspection of a complex part, it may be necessary to attach clamps at several points or to wrap cables around the part to orient the fields in a particular direction. Copper braided pads are often used on headstocks to provide the contact area and reduce the possibility of burning the part during inspection, since high currents pass through it.
The various advantages and disadvantages for different material forms for headstock
contact are given below:
Solid, relatively small parts (castings, forgings and machined pieces)
Advantages
Fast and easy technique
Circular magnetic field surrounds current path
Good sensitivity to surface and near surface discontinuities
Simple as well as relatively complex parts can be easily processed with one or more shots.
Complete magnetic path is conducive to maximizing residual characteristics of material.
Disadvantages
Possibility of arc burns if poor contact conditions exist.
Long parts should be magnetized in sections to facilitate bath application without resorting to an overly long current shot.
High amperage requirements (16,000 – 20,000 A) dictate a special power supply.
Advantages
Entire length can be circularly magnetized by contacting, end to end.
Disadvantages
Field is effectively limited to the outside surface and cannot be used for inner diameter examination.
Ends must be conducive to electrical contact and capable of carrying current without excessive heating.
Cannot be used on oil country tubular goods because of arc burns.
Advantages
Contacting end to end can circularly magnetize the entire length.
Current requirements are independent of length. No end loss.
Disadvantages
Voltage requirements increase as length increases, due to the greater impedance of the cable and part.
Ends must be conducive to electrical contact and capable of carrying current without excessive heating.
PROD CONTACTS
Large and massive parts too bulky to be placed in any sort of unit are inspected using prod contacts to pass the current directly through the part or through a local portion of it. Such local contacts do not always produce true circular fields, but they are very convenient and practical for many purposes. Prod contacts are often used in the magnetic particle inspection of large castings and weldments.
The prod tips that contact the piece should be aluminum, copper braid, or copper pads rather than solid copper. With solid copper tips, accidental arcing during prod placement or removal could cause copper penetration into the surface, which may result in metallurgical damage (softening, hardening, cracking, etc.).
The prod electrodes (legs) are first pressed firmly against the test part. The magnetizing current is then passed through the prods and into the area of the part in contact with them. This establishes a circular field in the part around and between the prod electrodes, sufficient to carry out local magnetic particle testing. Extreme care should be taken to maintain clean prod tips, to minimize heating at the points of contact, and to prevent arc burns and local overheating of the surface, which could adversely affect the surface being examined and might affect the material properties as well. Prods should not be used on machined surfaces or on aerospace component parts.
Un-rectified AC limits the prod technique to the detection of surface discontinuities. HWAC is the most desirable form of current, since it will detect both surface and near-surface discontinuities. The prod technique generally uses dry magnetic particles because of their better particle mobility. Wet magnetic particles are not generally used with prods because of potential electrical and flammability hazards. Proper prod placement requires a second placement with the prods rotated approximately 90° from the first placement to assure that all existing discontinuities are revealed. Sufficient overlap should be allowed between successive prod placements. On large surfaces, it is good practice to lay out a grid for the prod or yoke technique.
Prod spacing is measured as the distance between prod centers. It should not exceed 8" (200 mm). Shorter spacing may be used to accommodate geometric limitations of the area being examined, or to increase the sensitivity. Prod spacing of less than 3" is usually not practical, due to the banding of particles around the legs of the prods. When the area of examination exceeds a width of one-quarter of the prod spacing, the magnetic field intensity should be verified at the edges of the area being examined. The various advantages and disadvantages for different material forms for prod contact are given below:
Welds
Advantages
Circular field can be selectively directed to weld area by prod placement.
In conjunction with HWAC and dry powder, provides excellent sensitivity to surface and
subsurface discontinuities.
Portability – can be brought to examination site easily.
Disadvantages
Only small area can be examined at one time.
Arc burns due to poor contact
Surface must be dry when dry powder is being used
Prod spacing must be in accordance with the magnetizing current level
Advantages
Entire surface area can be examined in small increments using nominal current values.
Circular fields can be concentrated in specific areas that are historically prone to
discontinuities.
Equipment can be brought to the location of parts that are difficult to move.
In conjunction with HWAC and dry powder, provides excellent sensitivity to subsurface
discontinuities.
Disadvantages
Coverage of a large area requires multiple shots and can be very time consuming.
Possibility of arc burns due to poor contact.
Surface should be dry before application of dry powder.
CENTRAL CONDUCTOR
The central conductor technique is a form of indirect magnetization. For many tubular and ring-shaped objects, it is advantageous to use a separate conductor to carry the current rather than the part itself. Such a conductor, commonly referred to as a central conductor, is threaded through the inside of the part and is a convenient means of circularly magnetizing a part without the need for making direct contact with the material itself. Central conductors may be solid or tubular, magnetic or nonmagnetic, but must be good conductors of electricity.
The basic rules regarding the magnetic field around a circular conductor carrying direct current are as follows:
(a) The magnetic field around a conductor of uniform cross-section is uniform along the length of the conductor.
Circular Magnetic Fields Distribution and Intensity
As discussed previously, when current is passed through a solid conductor, a magnetic
field forms in and around the conductor. The following statements can be made about
the distribution and intensity of the magnetic field.
• The field strength varies from zero at the center of the component to a maximum
at the surface.
• The field strength at the surface of the conductor decreases as the radius of the
conductor increases when the current strength is held constant. (However, a
larger conductor is capable of carrying more current.)
• The field strength outside the conductor is directly proportional to the current
strength. Inside the conductor the field strength is dependent on the current
strength, magnetic permeability of the material, and if magnetic, the location on
the B-H curve.
• The field strength outside the conductor decreases with distance from the
conductor.
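These statements correspond to the textbook field equations for a solid conductor: the flux density rises linearly from zero at the center to a maximum at the surface, then falls off as 1/r outside. A Python sketch (illustrative only, assuming a uniform current distribution, i.e. DC with no skin effect):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def flux_density(current_a: float, conductor_radius_m: float,
                 r_m: float, relative_permeability: float = 1.0) -> float:
    """Flux density (tesla) at radius r for a solid conductor carrying DC.

    Inside the conductor B rises linearly from zero at the center (scaled by
    the material's permeability if it is magnetic); outside the conductor B
    falls off as 1/r regardless of the conductor material.
    """
    if r_m <= conductor_radius_m:
        return (MU_0 * relative_permeability * current_a * r_m
                / (2 * math.pi * conductor_radius_m ** 2))
    return MU_0 * current_a / (2 * math.pi * r_m)
```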
In a conductor of magnetic material, the field strength is likewise zero at the center, but at the surface the flux density is μH, where μ is the permeability of the magnetic material. The actual flux density may therefore be much greater than in a nonmagnetic material.
Central Conductor Enclosed Within Hollow Ferromagnetic Cylinder
When a central conductor is used to magnetize a hollow cylindrical part made of a ferromagnetic material, the flux density is maximum at the inside surface of the part. The flux density produced by the current in the central conductor is maximum at the surface of the conductor and decreases through the space between the conductor and the inside surface of the part. At this surface, the flux density is immediately increased by the permeability factor, μ, of the material of the part, and then decreases toward the outer surface. Outside the part, the flux density drops back to the same decreasing curve it was following before entering the part. In small hollow cylindrical parts, it is desirable that the conductor be placed centrally so that a uniform magnetic field will exist for the detection of discontinuities at all portions of the part. When AC is passed through a hollow circular conductor, the skin effect concentrates the magnetic field at the OD of the component.
As can be seen in the field distribution images, the field strength at the inside surface of a hollow conductor carrying current (a circular magnetic field produced by direct magnetization) is very low. Therefore, the direct method of magnetization is not recommended when inspecting the inside diameter wall of a hollow component for shallow defects. The field strength increases rather rapidly as one moves in from the ID, so if the defect has significant depth, it may be detectable. However, a much better method of magnetizing hollow components for inspection of the ID and OD surfaces is the use of a central conductor. As can be seen in the field distribution image to the right, when current is passed through a nonmagnetic central conductor (copper bar), the magnetic field produced on the inside diameter surface of a magnetic tube is much greater, and the field is still strong enough for defect detection on the OD surface.
When an offset central conductor is used, the circumference of the part (interior or exterior) that is effectively magnetized is taken as four times the diameter of the central conductor. The entire circumference is inspected by rotating the part on the conductor, allowing approximately 10% overlap of the magnetic field. The diameter of a central conductor is not related to the inside diameter or the wall thickness of the cylindrical part; conductor size is usually based on its current-carrying capacity and ease of handling.
The various advantages and disadvantages for different material forms for Central
conductor are given below:
Advantages
No electrical contact to part and no possibility of arcing.
Circumferentially directed magnetic field is generated in all surfaces surrounding the
conductor.
Ideal for those cases where the residual method is applicable. The central conductor can support lightweight parts.
Multiple turns may be used to reduce the required current.
Disadvantages
Size of the conductor must be ample to carry required current.
Ideally, conductor should be centrally located within hole.
Larger diameters require repeated magnetization and rotation of parts for complete coverage.
Where continuous magnetization technique is employed, inspection is required after each
magnetization.
Advantages
No electrical contact to part is required.
Both inside and outside diameter examination is possible. The entire length of the part is circularly magnetized.
Disadvantages
Outside diameter sensitivity may be somewhat diminished for parts with large diameter and
heavy wall.
CURRENT SELECTION
COIL TECHNIQUE
(i) Parts with L/D Ratio Equal to or Greater than 4
The required field strength shall be calculated based on the length (L) and the diameter (D) of the part. Long parts shall be examined in sections not to exceed 18", and 18" shall be used for L when calculating the required field strength. For non-cylindrical parts, D shall be the maximum cross-sectional diagonal.
The magnetizing current shall be within +/- 10% of the ampere-turns value determined as below:
Ampere-turns = 35000/((L/D)+2)
For example, a part 10" long and 2" in diameter has an L/D ratio of 5, so the required ampere-turns shall be 35000/(5+2) = 5000.
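The calculation can be packaged as a small helper. The Python sketch below (function names are illustrative) applies the quoted formula and then derives the coil current for an assumed 5-turn coil:

```python
def coil_ampere_turns(length_in: float, diameter_in: float) -> float:
    """Required ampere-turns for a coil shot on a part with L/D >= 4,
    using the formula quoted above: AT = 35000 / ((L/D) + 2)."""
    ld = length_in / diameter_in
    if ld < 4:
        raise ValueError("this formula applies only to L/D ratios of 4 or more")
    return 35000.0 / (ld + 2.0)

# Worked example from the text: a part 10 in. long and 2 in. in diameter
# has L/D = 5, so the requirement is 35000 / 7 = 5000 ampere-turns.
required_at = coil_ampere_turns(10, 2)

# For an assumed 5-turn coil, that corresponds to 1000 A (within +/-10%).
current_for_five_turns = required_at / 5
```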
(ii) Parts with L/D Ratio Less than 4, but Not Less Than 2
The magnetizing current shall be within +/- 10% of the ampere-turns value determined as below:
Ampere-turns = 45000/(L/D)
(iii) If the area to be magnetized extends beyond 6" on either side of the coil, field adequacy shall be demonstrated using a magnetic field indicator.
(iv) For large parts, due to size and shape, the magnetizing current shall be 1200 AT to 4500 AT. A magnetic field indicator shall demonstrate field adequacy.
For this technique, passing current through the part magnetizes it. Direct or rectified current shall be used.
CENTRAL CONDUCTOR TECHNIQUE

Part Outer Diameter (in.)   Wall Thickness (in.)   Current (A)
1/2                         0.125                  500
1/2                         0.250                  750
1/2                         0.375                  1000
1/2                         0.500                  1250
1                           0.125                  750
1                           0.250                  1000
1                           0.375                  1250
1                           0.500                  1500
1-1/2                       0.250                  1250
1-1/2                       0.375                  1500
1-1/2                       0.500                  1750
2                           0.125                  1250
2                           0.250                  1500
2                           0.375                  1750
2                           0.500                  2000

Note: For cylinder wall thicknesses greater than 0.500", add 250 A (±10%) for each additional 0.125".
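The tabulated amperages follow a simple linear pattern: 500 A at 1/2 in. diameter and 0.125 in. wall, plus 250 A for each additional 0.125 in. of wall (as the note states) and 250 A for each additional 1/2 in. of diameter. The Python sketch below reproduces the values on that inferred pattern; the pattern is a reading of this table only, and the governing specification should be consulted for actual requirements:

```python
def central_conductor_current(diameter_in: float, wall_in: float) -> float:
    """Reproduce the tabulated amperages, assuming the linear pattern they
    appear to follow (inferred from the table above, not from any standard):
    500 A baseline at 1/2 in. diameter and 0.125 in. wall, plus 250 A per
    additional 0.125 in. of wall and per additional 1/2 in. of diameter.
    """
    return 500.0 + 2000.0 * (wall_in - 0.125) + 500.0 * (diameter_in - 0.5)
```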
When using an offset central conductor, the conductor passing through the inside of the part is placed against the inside wall of the part. The current shall be from 12 A per mm of part diameter to 32 A per mm of part diameter (300 – 800 A/in.). The diameter of the part shall be taken as the largest distance between any two points on the outside circumference of the part. Generally, currents will be 500 A/in. (20 A/mm) or lower, with the higher currents (up to 800 A/in.) being used to examine for inclusions or to examine low-permeability alloys, such as precipitation-hardening steels. The entire circumference of the part shall be examined by rotating the part on the conductor, allowing a 10% overlap.
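The 12–32 A/mm rule translates directly into an amperage window for a given part. A Python sketch (illustrative function name):

```python
def offset_conductor_current_range(part_diameter_mm: float) -> tuple:
    """Amperage window for an offset central conductor: 12 to 32 A per mm
    of part diameter (roughly 300 to 800 A per inch), where the diameter is
    the largest distance between any two points on the outside circumference.
    Typical working currents sit around 20 A/mm (500 A/in.) or lower.
    """
    return 12.0 * part_diameter_mm, 32.0 * part_diameter_mm
```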
Characteristics of Mediums
(a) Magnetic Properties:
The particles used for magnetic particle inspection should have high permeability so that they can be readily magnetized by the low-level leakage fields that occur around discontinuities and drawn to these leakage sites to form a visible indication. The leakage field produced by some tight cracks is extremely weak. Low coercive force and low retentivity are other desirable magnetic properties for mediums. If the coercive force is high, wet particles become strongly magnetized and form an objectionable background; in addition, the particles will adhere to any steel in the tanks or piping of the unit, causing heavy settling-out losses. Highly retentive wet particles tend to cling together quickly in large aggregates on the test surface, leading to a lack of mobility and to coarse indications and backgrounds.
(b) Effect of Particle Size: Large and heavy particles are not likely to be arrested by weak fields, but very weak fields will hold fine particles. However, extremely fine particles will adhere to fingerprints, rough surfaces and soiled or damp areas, thus obscuring indications. For a wet medium, if the particle size is finer, the liquid will readily drain away, leaving a thin film of particles on the surface, while coarser particles will become stranded and immobilized.
(c) Effect of Particle Shape: Long, slender particles develop a stronger polarity than globular particles. Because of the attraction exhibited by the opposite poles, these tiny, slender particles, which have pronounced north and south poles, arrange themselves into strings more readily than globular particles. The advantage of globular dry particles is that they flow freely and smoothly under conditions where elongated particles tend to mat. The greatest sensitivity is provided by a blend of elongated and globular particles.
(d) Visibility and Contrast: These are promoted by choosing particles that are easy to see against the color of the surface of the part being inspected. The natural color of the metallic powders used in the dry method is silver-gray, but pigments are added to impart color to the particles. The colors of wet-method particles are limited to the black and red of the iron oxides that are used as the base for wet particles. For increased visibility, manufacturers coat particles with a fluorescent pigment. The search for indications is then conducted in total or partial darkness, using ultraviolet radiation to activate the fluorescent dyes. Fluorescent particles are available in both wet and dry mediums, but the fluorescent method is most commonly used with the wet method.
The two primary types of particles used in magnetic particle inspection are dry and wet
particles. The particle selection is influenced by:
DRY PARTICLES
Dry particles, when used with direct current for magnetization, are superior for detecting discontinuities wholly below the surface. The use of AC with dry particles is excellent for the detection of surface cracks that are not exceedingly fine, but is of little value for subsurface discontinuities. Dry powder is not recommended for the detection of fine discontinuities such as fatigue cracks and grinding cracks.
Dry particles are particularly used for the magnetic particle inspection of large castings. Cast objects are usually tested with yokes or prods, with tests covering small overlapping areas. Dry particles are most sensitive for use on very rough surfaces and for detecting flaws beneath the surface. The reclamation and re-use of the particles is not recommended.
"Magnetic particle examination using dry mediums shall not be performed if the surface temperature of the part exceeds 600°F."
Application methods:
Air is used as a medium to carry the particles to the part's surface, and care must be taken to apply them correctly. Dry particles must be applied in a gentle stream or cloud. If applied too rapidly, the stream may dislodge indications that have already formed and will present an objectionable background masking real indications. Manual and mechanized applicators can provide the proper density and speed of particle application. Because the particles come under the influence of the leakage fields while they are airborne, they have a three-dimensional mobility. The preferred approach is to float a cloud of particles onto the test object's surface and then use a gentle stream of air to remove lightly held background particles.
For best results, the magnetizing current should be present throughout the application of the particles and the removal of the background. It is equally important to monitor the particles while they are being applied. This is especially true for subsurface discontinuity indications, which are weakly held and not well delineated and are therefore susceptible to damage from particles applied later.
Particle Reuse
It is recommended that dry magnetic particles be used only once. Ferrous magnetic powders are dense, and when agitated in bulk, as in a powder blower or a bulb, considerable shearing and abrasion occurs, which wears off the pigment. As a result, after each reuse the color and contrast will continually diminish, to the point that discontinuity indications are not visible.
Particle Storage
The storage method for dry powders is critical to their subsequent use. The primary environmental concern is moisture: if the powders are exposed to high levels of moisture, they immediately begin to form oxides. Rusting alters their color, but the major problem is that the particles adhere to each other, forming lumps or large masses that are useless for magnetic particle inspection.
There are limitations, though not severe, on the temperatures at which dry powders can be stored and used. Visible powders work on surfaces as hot as 370°C, at which point some particles become sticky and others lose some of their color. Beyond 370°C, magnetic powders can ignite and burn. Fluorescent and daylight-fluorescent powders lose their visible contrast at 150°C, and sometimes at lower temperatures, because their pigments are organic compounds that decompose or lose their ability to fluoresce at particular temperatures.
(i) Dry magnetic particles are superior to wet particles in the detection of near-surface discontinuities.
(ii) Suited for large objects, when using portable equipment for local magnetization.
(iii) Superior particle mobility is obtained for relatively deep-seated flaws, with half-wave rectified current as the source.
(iv) Can be readily removed from the surface.
(i) Cannot be used in confined areas without proper safety breathing apparatus.
(ii) Probability of Detection (POD) is appreciably less than with the wet medium for fine surface discontinuities.
(iii) Difficult to use in overhead magnetizing positions.
(iv) No evidence of complete coverage of the part exists, as there is with the wet method.
(v) Lower production rates.
(vi) Difficult to adapt to any sort of automated system.
WET MEDIUMS
Wet mediums are suited for the detection of fine surface discontinuities such as fatigue
cracks. Wet particles are commonly used with stationary equipment where the bath
can remain in use until contaminated. They are also used in field operations, but care
should be taken to maintain bath concentration by constant agitation.
They are available in red and black colors or as fluorescent particles that impart bluish
green or greenish yellow color. The particles are supplied in the form of a paste, or as a
concentrate that is suspended in a liquid to produce a bath. The liquid may be either water
or petroleum distillate having specific properties.
In the wet method, particle size may range from 1/8 micron up to 40 or 60 microns
(0.0015 – 0.0025 inches). Because wet particles are much smaller than dry particles,
the two are not interchangeable when one type is unavailable.
The small size and generally compact shape of wet particles have a dominating effect
on their behavior. Their size renders permeability measurements highly inexact and of
limited utility. In addition, the size influences the brightness of fluorescent indications
when detected by this medium. The temperature of the wet suspension and the surface
of the part shall not exceed 135°F.
Oil is used as a suspending liquid for magnetic particle testing and should be odorless,
well-refined light petroleum distillate of low viscosity having low sulfur content and high
flash point. Low viscosity is desired because a more viscous bath retards the movement
of the particles enough to reduce the buildup, and therefore the visibility, of an
indication from a small discontinuity. Parts should be pre-cleaned to remove
oil and grease from the surface of the part, because oil from the surface accumulates in
the bath and increases its viscosity.
Oil vehicles are preferred in certain applications:
(a) Where freedom from corrosion of ferrous alloys is vital, such as finished bearings
and bearing races.
(b) Where water would pose an electrical hazard.
(c) On some high-strength alloys, where hydrogen atoms from water can diffuse into the
crystal structure and cause hydrogen embrittlement.
The use of water as a suspendible liquid eliminates additional cost and flammability
hazards. Water cannot by itself be used as a medium. It rusts ferrous alloys, wets the
surface poorly, and does not disperse the particles efficiently. Hence water suspendible
particle concentrates should contain the necessary wetting agents, dispersing agents,
rust inhibitors, and anti-foaming agents. It should not be used in freezing conditions
where formation of ice would seriously retard the mobility of the particles. However,
ethylene glycol can be added to protect against moderately low temperatures. Water
vehicles are preferred in many applications for these reasons.
Bath Preparation
Wet magnetic particle baths may be mixed by the supplier or may be sold dry for mixing
by the user. If the user is preparing the bath, the concentration of the bath should be
given primary importance. The effectiveness and reliability of inspection is primarily
dependent on concentration of bath. If the concentration of the bath is too high, the
background will be intense enough to camouflage the indications. On the other hand, if
the concentration is too low, the indications will be weak and difficult to locate. Keeping
the concentration at a constant level maintains a consistent indication-to-background contrast.
Settling Test
The effective concentration is reliably determined through a settling test, which has
been used to measure particle bath concentrations since the 1940s. It is a
convenient method that requires little equipment, a simple procedure and only 30 – 60
minutes to perform. Its accuracy is sometimes less than 80%, but its levels of precision
are appropriate for most applications.
Settling Parameters
It is essential that the settling test take place in a location free from vibration. The
settling tube must be positioned in an area that is proven to be free of strong magnetic
fields. Freshly prepared bath settles very rapidly, often in 15 minutes or less.
Magnetization causes the particles to cling together and form large and fast sinking
clusters. Speed and settling volume depend on the particles magnetization level and
this is the basis for demagnetizing the settling tube sample.
Apparatus:
Settling test equipment is simple in construction and applicability. It consists of: (a) a
100mL pear-shaped graduated centrifuge tube (b) a stand for supporting the tube
vertically, and (c) a timer to signal the end of settling process.
A centrifuge tube with a 1 mL stem (0.05 mL subdivisions) is used for determining the
concentration of fluorescent particle suspensions; a tube with a 1.5 mL stem (0.1 mL
subdivisions) is used for non-fluorescent particle suspensions. The settling test is
performed in accordance with ASTM D 96. Before sampling, the suspension should run
through a recirculating system for about 30 minutes to ensure proper mixing of
particles. A sample of 100mL is taken and demagnetized to avoid particle clinging to
each other and to the tube body. The settling conditions mentioned above are
maintained, and a settling time of 30 minutes is given for water-based suspension, and
60 minutes for petroleum distillate suspensions.
After the recommended settling time, the sample is interpreted visually without
disturbing the setup. If the settling volume is lower than the prescribed limits, add a
sufficient amount of particles. If the concentration is too high, add sufficient
vehicle. The particles or vehicles should not be added directly to the centrifuge tube.
They are added to the container, in which the medium was previously prepared.
If the particles that are settled appear as loose agglomerates, then the existing sample
in the tube is discarded, and a second sample is taken and the procedures are
repeated. If they continue to appear agglomerated, then the particles may already be in
a magnetized state. Hence the whole suspension is discarded and replaced. The
procedures are again carried out to determine the concentration. Sometimes bands or
striations are noticed in the suspension. These are due to the presence of
contaminants in the suspension. If the total volume of contaminants in the suspension
exceeds 30% of the volume of magnetic particles, or if the liquid is noticeably
fluorescent, the bath should be replaced.
Settling Volumes
For fluorescent bath, the concentration shall be 0.1mL to 0.4mL in a 100mL settling
volume.
For a Non-fluorescent bath, the concentration shall be 1.2mL to 2.4mL in a 100mL of
settling volume.
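The settling-volume limits above lend themselves to a simple acceptance check. The sketch below is illustrative only (the function name and return strings are invented for illustration, not taken from any standard):

```python
# Illustrative sketch: checking a settling test reading against the
# concentration ranges quoted above. Limits are in mL of settled
# particles per 100 mL settling volume.
SETTLING_LIMITS = {
    "fluorescent": (0.1, 0.4),      # mL per 100 mL settling volume
    "non-fluorescent": (1.2, 2.4),  # mL per 100 mL settling volume
}

def check_bath_concentration(bath_type: str, settled_ml: float) -> str:
    """Return the corrective action suggested by a settling test reading."""
    low, high = SETTLING_LIMITS[bath_type]
    if settled_ml < low:
        return "add particles"   # indications would be weak and hard to locate
    if settled_ml > high:
        return "add vehicle"     # heavy background would camouflage indications
    return "in range"

print(check_bath_concentration("fluorescent", 0.25))      # in range
print(check_bath_concentration("non-fluorescent", 0.8))   # add particles
```

Note that, as the text says, particles or vehicle are added to the original bath container, never to the centrifuge tube itself.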
CONTINUOUS MAGNETIZATION
Continuous magnetization, in which the examination medium is applied while the
magnetizing force is present, is employed for most operations utilizing either dry or
wet particles. The sequence of operations for dry and wet applications differs.
RESIDUAL MAGNETIZATION
In this technique, the examination medium is applied after the magnetizing force has
been discontinued. It can be used only on materials that possess high retentivity,
such that the particles can be held on the surface and produce indications.
This technique may be advantageous for integration with production or handling
requirements or for intentionally limiting the sensitivity of the examination. It has found
wide use in the inspection of pipe and tubular goods.
The key to ideal magnetic particle inspection is to provide the highest sensitivity to
smallest possible discontinuity. This can be achieved through careful combination of:
In a magnetic particle test, it is important to raise the field strength and the flux density
in the object to a level that produces a flux leakage sufficient for holding the particles in
place over discontinuities. On the other hand, excessive magnetization causes the
particles to be attracted to minor leakage fields not caused by discontinuities. If
such leakage occurs and attracts a large number of particles, the result is a false
indication, and the test object is said to be over-magnetized for this inspection. Such
false indications may result from local permeability changes caused by local
stresses in the test object. In some cases the flux leakage may be caused by a
subsurface discontinuity, and it may not be possible to distinguish the cause of the
leakage field without the use of additional NDT methods.
Discontinuity Parameters
The discontinuity parameters are critical and they include depth, width, and angle to the
object surface. In cases where the discontinuity is narrow surface breaking (seams,
laps, quench cracks, and grind tears), the magnetic flux leakage near the mouth of
the discontinuity is highly curved. In the case of subsurface discontinuities (inclusions
and laminations), the leakage field is much less curved, and relatively high values of
field strength and flux density within the object are required for testing. This lack of
leakage curvature greatly reduces the particles' ability to be held at such discontinuities.
The magnetic field parameters that most affect flux leakage are the field strength, local
B – H properties, and the angle to the discontinuity opening.
The leakage field’s ability to attract the magnetic particles is determined by several
additional factors. These include:
The magnetic forces between the magnetic flux leakage field and the particle;
Image forces between a magnetized particle and its magnetic image in the surface
plane of the test object;
Gravitational forces that may act to pull the particle into or out of the leakage site;
and
Surface tension forces between the particle vehicle and the object surface in wet
method tests.
Some of these forces may in turn vary with discontinuity orientation, earth’s gravitational
field, particle size and shape, and type of medium.
Surface Discontinuities
The largest and most important category of discontinuity consists of those that are
exposed to the surface. Surface cracks or discontinuities are effectively located with
magnetic particle testing. Surface discontinuities are also more detrimental to the
service life of a component than subsurface discontinuities, and as a result they are
more frequently the subject of inspection. Magnetic particle inspection is capable of detecting seams,
laps, quenching cracks and surface ruptures in castings, forgings, and weldments. For
maximum detectability, the discontinuity should essentially lie perpendicular to the
magnetic field. This is especially true for a discontinuity that is small and fine. The
characteristics of a discontinuity that enhance its detectability are:
Many incipient fatigue cracks and fine grinding cracks are less than 0.025mm deep and
have surface openings of perhaps 1/10th of thickness or less. Such cracks are readily
detected by wet method. The depth of the crack has a pronounced effect on its
detectability; the deeper the crack, the stronger the indication for a given level of
magnetization. This is because the stronger flux causes greater distortion of the field in
the part. This effect is not noticeable beyond about 6 mm in depth. If the crack is
not tight-lipped but wide open at the surface, the reluctance of the resulting air gap
reduces the strength of the leakage field. This, combined with the inability of the
particles to bridge the gap, results in a weaker indication. Surface opening also plays a
part in detectability. A surface scratch that is relatively wide at the surface usually
does not produce an indication, although it may at high levels of magnetization. Thus
many variables influence the formation of an indication.
There are also limitations: a crack that is tight-lipped, virtually eliminating any air
gap, may produce no indication. Sometimes, with careful interpretation and optimized
techniques, faint indications of such cracks can be produced. One other type of
discontinuity that sometimes poses a detection problem is a forging or rolling lap.
In this case, the leakage field produced is weak due to small angle of emergence and
the resultant high-reluctance gap. Hence, when conditions demand detection of such
laps, DC magnetization with the wet fluorescent method is desirable.
In general, a surface discontinuity whose depth is at least five times its opening at the
surface will be detected.
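The depth-to-opening rule of thumb just stated can be sketched as a one-line check (an illustrative sketch, not a specification; the function name is invented):

```python
# Rough sketch of the rule of thumb above: a surface discontinuity is
# generally detectable when its depth is at least 5 times its surface opening.
def likely_detectable(depth_mm: float, opening_mm: float) -> bool:
    """True if the depth-to-opening ratio meets the 5:1 rule of thumb."""
    return depth_mm >= 5.0 * opening_mm

print(likely_detectable(0.5, 0.05))  # tight crack, deep relative to opening
print(likely_detectable(0.1, 0.1))   # wide shallow scratch
```

This mirrors why tight cracks indicate readily while wide, shallow scratches usually do not.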
Internal Discontinuities
Magnetic particle inspection is also capable of detecting subsurface discontinuities.
Although radiography and ultrasonic methods are extensively used in the detection of
subsurface discontinuities, the shape of the discontinuities, sometimes, initiates the
requirement for magnetic particle examination. The internal discontinuities that can be
detected by magnetic particle inspection can be divided into two groups:
Subsurface discontinuities comprise voids or inclusions that lie just beneath the
surface. Nonmetallic inclusions, either scattered or individual, occur in almost all steel
products to some degree. These discontinuities are usually very small and cannot be
detected unless they lie close to the surface.
When multiple variables can affect the outcome of a test, a means should be used to
normalize or standardize the test. This ensures that consistent, repeatable results are
achieved, independent of the machine, operator, or time of inspection.
More often, a form of artificial discontinuity indicator is used. This so-called
reference standard is designed to help evaluate several aspects of a magnetic particle
test system’s performance, including:
(a) Tests were carried out according to the specified procedure on a test material
without discontinuities;
(b) The test system was not working properly and was unable to detect even the largest
of the leakage fields from the discontinuities.
As a result, some form of reference standards are required to determine the sensitivity
of the test system. Reference standards may be used to evaluate the functionality or
performance of a magnetic particle test system.
On a periodic basis, reference standards are used as test objects in order to monitor the
test system changes in:
The tool steel ring is a commonly used reference standard for magnetic particle test
systems, but it essentially indicates only particle sensitivity. It is used with both
dry and wet mediums. A picture of the ring is shown below.
The ring standard is used by passing a specified DC through a conductor, which in turn
passes through the ring’s center. The test system is evaluated on the basis of the
number of holes detected at various current levels. The number of holes that should be
detected is given in the table below:
Hole No.   Hole diameter, mm (inches)   Distance from edge to hole center, mm (inches)
1          1.78 (0.07)                  1.8 (0.07)
2          1.78 (0.07)                  3.6 (0.14)
3          1.78 (0.07)                  5.3 (0.21)
4          1.78 (0.07)                  7.1 (0.28)
5          1.78 (0.07)                  9.0 (0.35)
6          1.78 (0.07)                  10.7 (0.42)
7          1.78 (0.07)                  12.5 (0.49)
8          1.78 (0.07)                  14.2 (0.56)
9          1.78 (0.07)                  16.0 (0.63)
10         1.78 (0.07)                  17.8 (0.70)
11         1.78 (0.07)                  19.6 (0.77)
12         1.78 (0.07)                  21.4 (0.84)
Table: Test indications required when using the tool steel ring standard

Medium               Amperage    Minimum holes indicated
Wet suspension**     1400        6
Wet suspension**     2500        7
Wet suspension**     3400        7
Dry powder*          1400        7
Dry powder*          2500        9
Dry powder*          3400        9

Note: * Full-wave DC at central conductor; ** Visible or fluorescent
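The pass/fail logic implied by the table above can be sketched as a lookup (an illustrative sketch only; the function name and dictionary layout are invented, the numbers are the table's):

```python
# Sketch of the ring-standard acceptance check: minimum number of holes that
# must be indicated on the tool steel ring at each full-wave DC current level.
REQUIRED_HOLES = {
    ("wet suspension", 1400): 6,
    ("wet suspension", 2500): 7,
    ("wet suspension", 3400): 7,
    ("dry powder", 1400): 7,
    ("dry powder", 2500): 9,
    ("dry powder", 3400): 9,
}

def system_passes(medium: str, amperage: int, holes_indicated: int) -> bool:
    """True if the indicated hole count meets the ring-standard minimum."""
    return holes_indicated >= REQUIRED_HOLES[(medium, amperage)]

print(system_passes("dry powder", 2500, 9))      # meets the minimum of 9
print(system_passes("wet suspension", 1400, 5))  # below the minimum of 6
```

A system detecting fewer holes than required indicates degraded particle sensitivity or a fault elsewhere in the test system.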
Measuring Magnetic Fields
When performing a magnetic particle inspection, it is very important to be able to
determine the direction and intensity of the magnetic field. As discussed previously, the
direction of the magnetic field should be between 45 and 90 degrees to the longest
dimension of the flaw for best detectability. The field intensity must be high enough to
cause an indication to form, but not too high or nonrelevant indications may form that
could mask relevant indications. To cause an indication to form, the field strength in the
object must produce a flux leakage field that is strong enough to hold the magnetic
particles in place over a discontinuity. Flux measurement devices can provide important
information about the field strength.
Since it is impractical to measure the actual field strength within the material, all the
devices measure the magnetic field that is outside of the material. There are a number
of different devices that can be used to detect and measure an external magnetic field.
The two devices commonly used in magnetic particle inspection are the field indicator
and the Hall effect meter, which is also often called a Gauss meter. Pie gages and
shims are devices that are often used to provide an indication of the field direction
and strength, but they do not provide a quantitative measure.
Pie Gages and Raised Cross Indicators
Pie gages are disks of high permeability material divided into triangular segments
separated by known gaps. The gaps are typically filled with a nonmagnetic material.
The pie gage contains eight segments, separated by gaps up to 0.75 mm wide, which run
the full depth of the material. Raised cross indicators contain four gaps (in the shape
of a cross) approximately 0.13 mm (0.005 in.) in width. The segments are cut away so
that the known gap is raised a fixed distance off the test object’s surface. Both of
these devices are used to determine the approximate field orientation and, to a limited
extent, to indicate the adequacy of the field strength. However, they do not measure
the internal field strength of the object. The presence of multiple gaps at different
orientations helps reveal the approximate orientation of the magnetic field. Slots
perpendicular to the flux lines produce distinct indications, while those lying parallel
to the magnetic flux give little or no indication.
Shim discontinuity indicators are thin foils of highly permeable material containing
well-controlled notch discontinuities. Frequently, multiple shims are used at different
locations and different orientations on the test object to examine the field distribution.
One popular version of the shim indicator is a strip containing 3 slots of different widths.
The strip is placed in contact with the test object surface and shares the flux with the
test object. The principal limitation of this standard is that it requires a 50 mm gage
length. Shims are most often used while preparing test procedures, where they help in
proving a particular test configuration. Once the field distribution is found adequate, the
testing procedure is recorded and the components are tested with the parameters
established by the shims.
Field Indicators
Hall-Effect (Gauss/Tesla) Meter
The Hall effect meter uses a thin conductor or semiconductor element at the tip of the
probe. Electric current is passed through the element. In a magnetic field, the field
exerts a force on the moving electrons, which tends to push them to one side of the
conductor. A buildup of charge at the sides of the conductor balances this magnetic
influence, producing a measurable voltage between the two sides of the conductor. The
presence of this measurable transverse voltage is called the Hall effect, after
Edwin H. Hall, who discovered it in 1879.
The Hall voltage is given by V = (I × B) / (n × q × t), where: V is the Hall voltage,
I is the current through the element, B is the flux density normal to the element, n is
the charge carrier density, q is the charge of the electron, and t is the thickness of
the element.
Probes are available with either tangential (transverse) or axial sensing elements.
Probes can be purchased in a wide variety of sizes and configurations and with different
measurement ranges. The probe is placed in the magnetic field such that the magnetic
lines of force intersect the major dimensions of the sensing element at a right angle.
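The Hall relationship can be worked through numerically. The sketch below uses the standard physics formula V = I·B/(n·q·t); the element parameters are assumed example values, not those of any particular probe:

```python
# Hedged illustration of the Hall effect the meter exploits: the transverse
# voltage across a current-carrying element in a magnetic field. Standard
# physics formula, not a specific meter's calibration.
ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def hall_voltage(current_a: float, field_t: float,
                 carrier_density: float, thickness_m: float) -> float:
    """V = I * B / (n * q * t), for flux density normal to the element face."""
    return (current_a * field_t) / (carrier_density * ELEMENTARY_CHARGE * thickness_m)

# Assumed thin semiconductor element: low carrier density gives a usable
# voltage, which is why probes use semiconductors rather than metals.
v = hall_voltage(current_a=1e-3, field_t=0.1,
                 carrier_density=1e22, thickness_m=1e-4)
print(f"{v:.3e} V")
```

The low carrier density of a semiconductor (orders of magnitude below a metal's) is what makes the transverse voltage large enough to measure conveniently.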
Discontinuity Standards
Production Test Parts with Discontinuities: A practical way to evaluate the performance
and sensitivity of dry or wet magnetic particles, or overall system performance, is to
use representative test parts with known discontinuities of the type and severity
normally encountered during actual production inspection. However, the usefulness of such
standards is limited because the orientation and magnitude of the discontinuities cannot
be controlled. If such parts are being used as reference, they should be thoroughly
cleaned and demagnetized after each usage. As an alternative, fabricated test parts
with discontinuities of varying degree and severity can be used to provide an indication
of the effectiveness of the particles used in inspection.
INTERPRETATION OF INDICATIONS
CLASSIFICATION OF INDICATIONS
Magnetic particle testing indications are classified as follows:
NONRELEVANT INDICATIONS
Nonrelevant indications are true patterns caused by leakage fields that do not result from
the presence of flaws. Nonrelevant indications have several causes, and their indications
can be as fuzzy as those of subsurface discontinuities. Careful evaluation is required
so that they are not interpreted as flaws.
Configurations
Configurations that result in a restriction of the magnetic field are one cause of this
type of nonrelevant indication. Typical restrictive configurations are internal notches
such as splines, threads, grooves for indexing, or keyways.
Magnetized Writing
This is another form of nonrelevant indication. Magnetic writing is usually associated
with parts displaying good residual characteristics in the magnetized state. If such a
part contacts the sharp corner or edge of another part, the residual field is locally
reoriented, giving rise to a leakage field and consequently an indication. The point of
a common nail can be used to write on a part susceptible to magnetic writing. Magnetic
writing is not always easy to interpret, because the particles are loosely held and the
indications are usually fuzzy or intermittent in appearance. If magnetic writing is
suspected, the remedy is to demagnetize the part and retest; an indication due to
magnetic writing will not reappear.
Techniques For Identifying Nonrelevant Indications
There are several techniques for distinguishing relevant from nonrelevant indications.
They are:
• Carrying out a visual inspection before the commencement of magnetic particle
testing, as this would eliminate indications due to the presence of mill scale or
surface roughness.
• A careful study of the part’s design or drawing, to readily locate the section
changes or shape constrictions.
RELEVANT INDICATIONS
Relevant indications are caused by leakage flux emanating from actual discontinuities.
They are the result of errors made during or after metal processing. They may or may
not be considered defects.
Terminology
Discontinuity: any interruption in the normal physical structure or composition of a
part; an intentional or unintentional lack of continuity. If the lack of continuity is
intentional, as in the case of a design requirement, the indications arising from such
discontinuities are termed nonrelevant indications. If the lack of continuity is
unintentional, the resulting indications are termed relevant indications. Examples of
such indications are cracks, porosity, lack of fusion, lack of penetration, etc.
Defect: any discontinuity that interferes with the service life or application of the
component. It can also be defined as an imperfection of sufficient magnitude to warrant
rejection of a part with respect to standards.
Classification of Indications
A linear indication is one having a length greater than three times its width. A
rounded indication is one having a length equal to or less than three times its
width. A rounded indication need not be perfectly round; it may be circular or
elliptical in shape.
An indication is the evidence of a mechanical imperfection. Only indications having
any dimension greater than 1/16 in. (1.6 mm) shall be considered relevant. Any
questionable or doubtful indication shall be reexamined to determine whether or not
it is relevant.
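The classification and relevance rules above reduce to two numeric tests, sketched here for illustration (the function names are invented; the 3:1 ratio and 1.6 mm threshold are the text's):

```python
# Minimal sketch of the rules above: linear vs. rounded by the 3:1
# length-to-width ratio, and relevance by the 1/16 in. (1.6 mm) limit.
def classify_indication(length_mm: float, width_mm: float) -> str:
    """Linear if length exceeds three times the width, otherwise rounded."""
    return "linear" if length_mm > 3.0 * width_mm else "rounded"

def is_relevant(length_mm: float, width_mm: float) -> bool:
    """Relevant only if some dimension exceeds 1/16 in. (1.6 mm)."""
    return max(length_mm, width_mm) > 1.6

print(classify_indication(6.0, 1.0))  # linear: 6 > 3 x 1
print(classify_indication(2.0, 1.0))  # rounded: 2 <= 3 x 1
print(is_relevant(2.0, 0.5))          # True: 2.0 mm exceeds 1.6 mm
```

Questionable indications near these limits are reexamined rather than judged from a single measurement.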
ACCEPTANCE STANDARDS
Everything said in this discussion thus far has emphasized that general rules for
evaluation cannot be wholly laid down, nor are they necessary for an evaluator with
sufficient knowledge and experience. Nevertheless, inspectors are sometimes called upon
to make decisions regarding the seriousness of a defect, and they should be aware of
the general considerations that apply in such demanding situations. As a guide for
inspectors, a few basic considerations are set forth below:
(a) A discontinuity of any kind lying at the surface is more likely to be harmful
than one of the same size and shape lying wholly below the surface. The deeper it
lies, the less harmful it is considered to be.
(b) Any discontinuity having a principal dimension or plane lying at right angles
or at a considerable angle to the principal tensile stress, whether surface or
subsurface, is more likely to be harmful than a defect of the same size, location,
and shape lying parallel to the tensile stress.
(c) Any discontinuity lying in an area of high tensile stress should be considered
more serious than a defect of the same size located in an area of low tensile stress.
(d) Discontinuities that are sharp at the bottom, such as grinding cracks, are severe
stress-raisers and are therefore more harmful in any location. Such defects are
likely to propagate under severe load conditions.
(e) Any discontinuity occurring close to a keyway or other design stress-raiser
intensifies the latter and must be considered more harmful than one of the same
size and shape occurring away from such a location.
INTERPRETATION OF PATTERNS
The shape, sharpness of outline, width, and height to which the particles build up are
the principal features by which discontinuities can be identified and distinguished
from each other.
Surface Cracks
Powder patterns for surface cracks are sharply defined, tightly held and usually built up
heavily. The deeper the crack, the heavier the buildup of the indication. Crater cracks
are recognized by a small indication at the terminal point of the weld. The indication
may be single line or multiple or star-shaped.
Incomplete Fusion
Accumulation of powder will generally be pronounced and the edge of the weld
indicated. The closer the incomplete fusion is to the surface, the sharper the pattern.
Undercut
A pattern is produced at the weld edge that adheres less strongly than the indications
obtained from an incomplete fusion. Undercut can also be detected by visual
examination.
Subsurface Discontinuities
The powder patterns have a fuzzy appearance and are not clearly defined. They are
neither strong nor pronounced; yet they are readily distinguished from the indications of
surface conditions.
Slag Inclusions
When slag inclusions are present and a high magnetizing field is used, a fuzzy pattern
similar to that of a subsurface discontinuity or porosity appears.
Seams
The indications are straight, sharp, and often intermittent. Buildup is small. A
magnetizing current greater than required for the detection of the cracks is necessary.
When magnetic particle inspection is used, cracks on the surface of the part appear as
sharp lines that follow the path of the crack. Flaws that exist below the surface of the
part are less defined and more difficult to detect. Given below are some examples of
magnetic particle indications.
Magnetic particle wet fluorescent indication of
a crack in the crane hook
The figure above shows three pictures of the same crack. The first shows the original
size of the crack; in the second, the crack has been closed by a metal smearing operation.
CODES
As for NDT, codes should indicate the application of NDT methods in inspection, the
need for NDT, and the acceptance limits.
STANDARDS
The codes will often refer to the standards which are more specific documents giving
the details of how a particular operation is to be carried out. These documents take into
account the technological levels and the operational skills of the operators, in laying
down the requirements of the standards. For example, in radiographic testing or any
other NDT method, test results are greatly dependent on the skill of the personnel.
Hence, the procedures and specifications for testing and evaluation must be
standardized so that the results are least affected by differences in the skill of
the personnel.
SPECIFICATIONS
The document that prescribes in detail, the requirements with which the product or
service has to comply is the specification. The specification is of paramount importance
in the achievement of quality. In many cases, poor products or services are the result
of inadequate, ambiguous, or improper specifications. For a product to be manufactured
and operated properly there will be different specifications like material specification,
process specification, inspection specification, acceptance specification, installation
specification, maintenance specification, disposal specification etc. The specification
may be evolved by national bodies or by the manufacturer through his own experience.
PROCEDURES
These are the last-level documents for any process, service, or method to be adopted
on the shop floor. The procedures are written giving all the specific details
pertaining to the activity so that shop-floor personnel can follow them with ease. No changes are
allowed to be made without the approval from the authorized person in the
organization.
The ASME Boiler and Pressure Vessel Code was initially enacted in 1914. It was
established by a committee set up in 1911 with members from utilities, state insurance
companies, and manufacturers. Whether or not it is adopted in the USA is left to the
discretion of each state and municipality. In any event, its effectiveness in reducing
human casualties from boiler accidents since its adoption is widely recognized.
The ASME Boiler and Pressure Vessel Code has 11 sections, of which Section V deals with
nondestructive testing. It is divided into subsections: Subsection A contains the
code articles for various NDT methods whereas Subsection B deals with various
standards of testing. These standards become mandatory when they are specifically
referenced in whole or in part in subsection A.
After the initial issue, ASME issues revisions once every three years. One feature of
the code is that partial revisions are issued twice a year: the summer addenda (July
1st) and the winter addenda (January 1st). These addenda are effective upon issuance.
Any question about the interpretation of rules may be submitted to the committee in
the form of a letter of enquiry, and the answers are published as code cases from
time to time.
Rules for nondestructive testing are collectively prescribed in Section V. The sections
for each component (Sections I, II, III, and VIII) refer to Section V or other
applicable rules for examination methods, and to SNT-TC-1A (the ASNT Recommended
Practice) for qualification of nondestructive examination personnel. Acceptance
criteria specified in each section are sometimes quoted from ASTM.
Linear indications are those in which the length is more than three times the width;
rounded indications are those which are circular or elliptical, with the length less
than three times the width.
6.5 Classification of Indications
6.5.1 Indications produced by liquid penetrant inspection are not necessarily defects.
Machining marks, scratches, and surface conditions may produce indications that
are similar to those produced by discontinuities but that are not relevant to
acceptability. The criteria given below apply when indications are evaluated.
6.5.1.2 Any indication with a maximum dimension of 1/16 in. (1.59 mm) or less shall
be classified as nonrelevant. Any larger indication believed to be nonrelevant
shall be regarded as relevant until reexamined by penetrant inspection or any
other nondestructive examination method, to determine whether or not an actual
discontinuity exists. The surface may be ground or otherwise conditioned before
re-examination.
6.5.1.3 Relevant indications are those caused by actual discontinuities. Linear
indications are those in which the length is more than three times its width.
Rounded indications are those in which the length is three times its width or less.
(a) Linear indications evaluated as crater cracks or star cracks that exceed 5/32 in.
(3.96 mm) in length.
(b) Linear indications evaluated as cracks other than crater cracks or star
cracks.
(c) Linear indications evaluated as incomplete fusion that exceed 1 in.
(25.4 mm) in total length in a continuous 12 in. (304.8 mm) length of the weld,
or 8% of the weld length.
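The size rules above lend themselves to a simple check. The sketch below, in Python, encodes the linear/rounded classification and two of the thresholds quoted in this section; the function names are illustrative only and are not taken from any code or standard.

```python
def classify_indication(length_mm: float, width_mm: float) -> str:
    """Linear if the length is more than three times the width; otherwise rounded."""
    return "linear" if length_mm > 3 * width_mm else "rounded"

def is_relevant(max_dimension_mm: float) -> bool:
    """Indications of 1/16 in. (1.59 mm) or less are classed as non-relevant."""
    return max_dimension_mm > 1.59

def crater_or_star_crack_rejectable(length_mm: float) -> bool:
    """Crater or star cracks are objectionable above 5/32 in. (3.96 mm) in length."""
    return length_mm > 3.96
```

For example, a 6 mm x 1 mm indication classifies as linear, while a 3 mm x 1 mm indication (length exactly three times the width) classifies as rounded.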
DEMAGNETIZATION
(a) The residual field is in the same direction as the original magnetic field.
(b) The residual field is weaker than the original field.
(c) The original magnetizing force causes the residual field.
(d) When an article has been magnetized in more than one direction, the second field
applied will completely overcome the first field. However, this is only true if the
second field applied is stronger than the first in magnitude.
This field may be negligible in soft materials, but in harder materials it may be
comparable to the intense fields associated with the special alloys used for permanent
magnets. Although it is time consuming and represents additional expense, the
demagnetization of parts is necessary in many cases. Demagnetization may
be easy or difficult depending on the type of material. Metals having a high coercive
force are difficult to magnetize and, once they are magnetized, it is equally difficult to
remove the residual field from them.
(d) Abrasive particles may be attracted to parts such as bearing races,
gear teeth, bearing surfaces, etc., and may lead to abrasion or may obstruct oil holes
or grooves.
(a) During some electric arc-welding operations, strong residual fields may
deflect the arc away from the point where it should be applied.
(b) Finally, the residual field will interfere with the re-magnetization of the part at
field intensities too low to overcome it.
The residual fields may sometimes be allowed to remain in the part, without
demagnetizing it. The reasons for not demagnetizing include:
(a) Parts made of magnetically soft materials do not retain residual magnetism,
as they have low retentivity.
(b) If the subsequent manufacturing process calls for the object to be heated
above the Curie point, the material will readily be demagnetized, as it loses all its
magnetic properties.
(c) If the part does not require additional machining and its intended function is
not compromised by the presence of a residual field, then demagnetization
becomes unnecessary.
(d) The part is to be re-magnetized for further magnetic particle inspection, or
for some secondary operation in which a magnetic plate or chuck may be used to
hold the part.
(e) Finally, demagnetization is only required if specified in the drawings,
specifications, or procedures.
Types of Residual Fields
Longitudinal magnetic fields
Unlike longitudinal fields, circular residual fields offer little or no external evidence of
their presence. The flux may be entirely confined within the part, depending to some
extent on part geometry and magnetizing procedures. Without special equipment,
demagnetization of a circular residual field is very difficult. Confirmation of an adequate
demagnetization level is another additional problem. Leakage field measuring devices
are ineffective since there is absence of external field. In such cases, reorientation of a
circular field into a longitudinal field prior to demagnetization may be advantageous in
some instances.
The fastest and simplest method of demagnetization is to pass current through a
high-intensity AC coil and slowly withdraw the part from the coil. A coil of 5000 –
10000 ampere-turns at line frequency (50 – 60 Hz) is recommended. The part to be
demagnetized should enter the coil from a distance of 12" and move through it steadily
and slowly until the piece is 36" beyond the coil. The operation is repeated until all of
the residual magnetism is removed. The strength of the field is gradually reduced
to zero as the object exits the coil and reaches a point beyond the influence of the
coil's field. Demagnetization of smaller parts can be achieved by rotating and tumbling
them while passing them through the field of the coil.
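The reason the 36" withdrawal works can be seen from the on-axis field of a circular coil, which falls off steeply with axial distance. The sketch below uses the standard on-axis coil-field formula with an assumed coil radius and turn count (5000 ampere-turns, as recommended above); the specific numbers are illustrative only.

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space (T*m/A)

def on_axis_coil_field(n_turns: int, current_a: float,
                       radius_m: float, x_m: float) -> float:
    """Peak on-axis flux density of a circular coil at axial distance x (tesla)."""
    return (MU0 * n_turns * current_a * radius_m ** 2
            / (2.0 * (radius_m ** 2 + x_m ** 2) ** 1.5))

# Assumed: 500 turns x 10 A = 5000 ampere-turns, 0.15 m coil radius
b_center = on_axis_coil_field(500, 10.0, 0.15, 0.0)      # field at the coil plane
b_at_36in = on_axis_coil_field(500, 10.0, 0.15, 0.9144)  # 36 in beyond the coil
# The part at 36 in sees only a small fraction of the central field, so each
# AC half-cycle it experiences on the way out is weaker than the last.
```

With these assumed values the field at 36" is more than two orders of magnitude below the field at the coil, which is why the part effectively sees a decaying alternating field as it is withdrawn.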
Direct Current Demagnetization
Reversing Direct Current Contact Coil Method
This type is usually associated with large test objects that have been magnetized using
a DC source. It is also applicable where AC demagnetization procedures prove
ineffective. The method requires high values of direct current, or full-wave rectified AC,
that can be directed to a coil or plate. There must also be provisions for reversing the
polarity and reducing the amplitude to zero. Although fewer steps may yield
satisfactory results, greater reliability is achieved by using about 30 reversals, with
current reductions approaching zero asymptotically.
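The reverse-and-reduce schedule can be sketched as a simple sequence of current steps: alternating polarity with a geometrically decaying magnitude approaches zero asymptotically, as described above. The step count and decay factor below are illustrative assumptions, not values from any procedure.

```python
def reversing_dc_sequence(initial_current_a: float,
                          steps: int = 30,
                          decay: float = 0.8) -> list:
    """Alternating-polarity current steps whose magnitude decays toward zero."""
    sequence = []
    current = initial_current_a
    sign = 1
    for _ in range(steps):
        sequence.append(sign * current)
        sign = -sign          # reverse the polarity each step
        current *= decay      # reduce the amplitude each step
    return sequence

ramp = reversing_dc_sequence(1000.0)  # e.g. 1000 A, -800 A, 640 A, ...
```

After 30 such reversals the remaining amplitude is a small fraction of a percent of the starting current, leaving only a negligible residual field.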
Reversing Cable Wrap Method
A high amperage DC coil demagnetizer has been designed to produce alternate pulses
of positive and negative current. The pulses are generated at fixed amplitude and a
repetition rate of 5 – 10 cycles per second. This permits relatively small objects to be
demagnetized by the through-coil method. The object is subjected to a constantly
reversing magnetic field as it passes through the coil, and the effective field is reduced
to zero as the test object is gradually withdrawn from the coil. The low repetition rate
substantially reduces the skin effect, with a corresponding increase in magnetic field
penetration.
Demagnetization With Yokes
AC yokes may be used for local demagnetization by placing the poles on the surface
and moving them around the area and slowly withdrawing the yoke, while it is still
energized.
residual field is necessary. This is achieved with a Residual Field Meter, commonly
known as a Gauss Meter. The main purpose of this device is to measure the relative
strength of magnetic leakage fields. Leakage field measurements are undertaken to
ascertain the level of residual magnetic fields emanating from the test object. It consists
of an elliptical vane, which is attached to a pointer that is free to move. A rectangular
permanent magnet is attached in a fixed position directly above the soft iron vane.
Because the vane is under the influence of the magnet, it tends to align its long axis in
the direction of the leakage field emanating from the magnet. In doing so, the vane
becomes magnetized in a fixed position. In the absence of external magnetic fields, the
pointer reads zero on the graduated scale. When the north pole of the residually
magnetized object is moved closer to the pivot end of the pointer, the south pole of the
vane is attracted towards the object and the north pole of the magnet is repelled. The
resulting torque causes the pointer to move in the positive (+) direction.
The relative strength of the residual field is measured by bringing the indicator near the
object and noting the deflection of the pointer. The edge of the pivot end of the pointer
should be closest to the object under investigation. To increase the accuracy and
repeatability of such measurements, it is good practice to isolate the device from
extraneous magnetic fields. If such fields magnetize the vane and magnet, the
sensitivity of the device becomes substantially reduced.
POST EXAMINATION CLEANING
Magnetic particles, if allowed to remain on the test surface, can cause difficulty in
subsequent processes such as painting or coating, or even a shot-blasting operation
(when a wet medium was used for testing). Hence it is recommended to remove the
magnetic particles after the inspection, which is referred to as post-cleaning.
Means of Particle Removal
CHAPTER-2 VARIOUS NDT METHODS
The basic principle of radiography involves the use of penetrating
radiation, which is made to pass through the material under inspection,
with the transmitted radiation recorded on film.
Radiographic inspection can be applied to any material. It uses
radiation from isotopic sources or X-radiation, which penetrates through
the job and produces an image on the film. The amount of radiation
transmitted through the material depends on the density and thickness
of the material. As material thickness increases, radiography becomes
less sensitive as an inspection method.
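The thickness dependence follows the usual exponential attenuation law, I = I0·e^(−μt), where the linear attenuation coefficient μ grows with material density. The sketch below uses an assumed, illustrative μ; real coefficients depend on the material and the radiation energy.

```python
import math

def transmitted_intensity(i0: float, mu_per_mm: float, thickness_mm: float) -> float:
    """Exponential attenuation: I = I0 * exp(-mu * t)."""
    return i0 * math.exp(-mu_per_mm * thickness_mm)

# Assumed coefficient of 0.05 per mm, purely for illustration
thin_section = transmitted_intensity(100.0, 0.05, 10.0)   # most radiation transmitted
thick_section = transmitted_intensity(100.0, 0.05, 50.0)  # little radiation transmitted
# The thicker the section, the less radiation reaches the film, and the smaller
# the relative change a given flaw produces - hence the lower sensitivity.
```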
Surface discontinuities that can be detected by this method include
undercut, longitudinal grooves, incomplete filling of grooves, excessive
reinforcement, overlap, concavity at the root, etc. Subsurface
discontinuities include gas porosity, slag inclusions, cracks, inadequate
penetration, incomplete fusion, etc.
metals and other materials. Typical discontinuities detected by this
method are cracks, seams, cold shuts, laminations, and porosity.
In principle, a liquid penetrant is applied to the surface of the
specimen to be examined and some time is allowed for the penetrant to
enter into the discontinuities.
All the excess penetrant is then removed and a developer is applied
to the surface. The developer functions both as a blotter, absorbing the
penetrant from the discontinuities, and as a means of providing a
contrasting background for meaningful interpretation of indications.
recommended by the penetrant producers or required by the
specification being followed. The times vary depending on the
application, the penetrant materials used, the material and form of the
material being inspected, and the type of defect being inspected for.
Minimum dwell times typically range from 5 to 60 minutes.
Generally, there is no harm in using a longer penetrant dwell time as
long as the penetrant is not allowed to dry. The ideal dwell time is
often determined by experimentation and is often very specific to a
particular application.
2.1.4 Excess Penetrant Removal: This is the most delicate part of the
inspection procedure, because the excess penetrant must be removed
from the surface of the sample while removing as little penetrant as
possible from defects. Depending on the penetrant system used, this
step may involve cleaning with a solvent, direct rinsing with water,
or treating first with an emulsifier and then rinsing with water.
2.1.5 Developer Application: A thin layer of developer is then applied to
the sample to draw penetrant trapped in flaws back to the surface
where it will be visible. Developers come in a variety of forms that
may be applied by dusting (dry powdered), dipping, or spraying (wet
developers).
2.1.6 Indication Development: The developer is allowed to stand on the
part surface for a period of time sufficient to permit the extraction
of the trapped penetrant out of any surface flaws. This development
time is usually a minimum of 10 minutes and significantly longer
times may be necessary for tight cracks.
2.1.7 Inspection: Inspection is then performed under appropriate lighting
to detect indications from any flaws which may be present.
2.1.8 Clean Surface: The final step in the process is to thoroughly clean
the part surface to remove the developer from the parts that were
found to be acceptable.
Common Uses of Liquid Penetrant Inspection
Liquid penetrant inspection (LPI) is one of the most
widely used nondestructive evaluation (NDE) methods. Its
popularity can be attributed to two main factors: its
relative ease of use and its flexibility. LPI can be used to inspect
almost any material provided that its surface is not extremely rough
or porous. Materials that are commonly inspected using
specific area, or it can be applied by dipping or spraying to quickly inspect
large areas. (Figure: visible dye penetrant being applied locally to a highly
loaded connecting point to check for fatigue cracking.) Liquid penetrant
inspection is used to inspect for flaws that break the surface of the sample.
• Large areas and large volumes of parts/materials can be inspected
rapidly and at low cost.
• Parts with complex geometric shapes are routinely inspected.
• Indications are produced directly on the surface of the part and
constitute a visual representation of the flaw.
• Aerosol spray cans make penetrant materials very portable.
• Penetrant materials and associated equipment are relatively
inexpensive.
Primary Disadvantages
Penetrant Properties:
responsible for the performance or sensitivity of the penetrant. The
properties of penetrant materials that are controlled by AMS 2644 and
MIL-I-25135E include flash point, surface wetting capability, viscosity,
color, brightness, ultraviolet stability, thermal stability, water tolerance,
and removability.
Surface Energy (Surface Wetting Capability): As previously
mentioned, one of the important characteristics of a liquid penetrant
material is its ability to freely wet the surface of the object being
inspected. At the liquid-solid surface interface, if the molecules of the
liquid have a stronger attraction to the molecules of the solid surface than
to each other (the adhesive forces are stronger than the cohesive forces),
then wetting of the surface occurs. Alternately, if the liquid molecules are
more strongly attracted to each other and not the molecules of the solid
surface (the cohesive forces are stronger than the adhesive forces), then
the liquid beads-up and does not wet the surface of the part.
Density or Specific Gravity: The density or the specific gravity of a
penetrant material probably has a slight to negligible effect on the
performance of a penetrant. The gravitational force acting on the penetrant
liquid can be working in cooperation with or against the capillary force
depending on the orientation of the flaw during the dwell cycle. When the
gravitational pull is working against the capillary rise the strength of the
force is given by the following equation:
Force = πr²hρg
where r = radius of the crack opening; h = height of the penetrant above its
free surface; ρ = density of the penetrant; g = acceleration due to gravity.
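Plugging representative numbers into this expression shows how small the gravitational term is for a tight crack. The crack radius, column height, and penetrant density below are assumed purely for illustration.

```python
import math

def gravity_force_on_column(radius_m: float, height_m: float,
                            density_kg_m3: float, g: float = 9.81) -> float:
    """Weight of the penetrant column: Force = pi * r^2 * h * rho * g."""
    return math.pi * radius_m ** 2 * height_m * density_kg_m3 * g

# Assumed: 5 micrometre crack radius, 2 mm penetrant column, density 870 kg/m^3
force_n = gravity_force_on_column(5e-6, 2e-3, 870.0)
# The result is on the order of nanonewtons, so for fine cracks gravity
# barely helps or hinders the much stronger capillary force.
```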
Viscosity: Viscosity has little effect on the ability of a penetrant material to
enter a defect, but it does affect the speed at which the penetrant fills a
defect.
Capillarity: The ability of a liquid to rise or fall in narrow openings is
due to capillary action. It can be demonstrated by a simple experiment
using two tubes of differing cross-section, immersed in a container filled
with water. The level of water in the thinner tube rises higher and faster.
This shows that a liquid with low viscosity penetrates quickly into narrow
openings, so cavities that present narrow openings, such as tight cracks or
hairline fatigue cracks, are best detected with a penetrant test system, as
the penetrant enters them readily. When the experiment is repeated with a
liquid such as mercury, the level of mercury falls in the thinner tube.
The height to which the liquid is raised is determined largely by the surface
tension and the wetting ability of the liquid. The lifting ability due to
capillary action also increases as the diameter of the bore decreases. Capillary
forces are lower in a closed tube than in an open tube, because of the air
that is trapped in the former. This can be compared with a discontinuity
that is closed at one end: the air trapped inside is dissolved by the
penetrant and diffuses out at the surface.
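The tube experiment can be put in numbers with Jurin's law, h = 2γ·cosθ/(ρ·g·r). This formula is not quoted in the text but is the standard expression for capillary rise; the fluid properties below are textbook values used for illustration.

```python
import math

def capillary_rise(surface_tension_n_m: float, contact_angle_deg: float,
                   density_kg_m3: float, radius_m: float,
                   g: float = 9.81) -> float:
    """Jurin's law: h = 2 * gamma * cos(theta) / (rho * g * r)."""
    return (2.0 * surface_tension_n_m
            * math.cos(math.radians(contact_angle_deg))
            / (density_kg_m3 * g * radius_m))

# Water in glass (gamma ~ 0.072 N/m, contact angle ~ 0 deg):
h_wide = capillary_rise(0.072, 0.0, 1000.0, 1e-3)    # 1 mm bore
h_narrow = capillary_rise(0.072, 0.0, 1000.0, 1e-4)  # 0.1 mm bore rises 10x higher
# Mercury (contact angle ~ 140 deg): cos(theta) < 0, so the level falls
h_mercury = capillary_rise(0.485, 140.0, 13534.0, 1e-3)
```

The tenfold rise in the narrower bore mirrors the observation above that lifting ability increases as the bore diameter decreases, and the negative result for mercury reproduces the falling level in the thinner tube.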
Fluidity: The ability of a liquid to flow is termed fluidity. The
penetrant should drain away well from the component, but without being
dragged out of the defects.
Surface tension: The force acting per unit length of an imaginary line
drawn on the surface of the liquid, normal to it, is called surface tension.
It is the property of a liquid to act like a stretched membrane. Surface
tension plays an important role in the effectiveness of a penetrant. High
surface tension liquids are usually excellent solvents, and will easily
dissolve the dyes. However, low surface tension liquids provide the
penetrating power and spreading properties necessary for a good
penetrant.
Volatility: Penetrants should be essentially non-volatile liquids. A small
amount of evaporation at the discontinuity can help to intensify dye
brilliance and also prevent excessive spreading of indications. However,
low volatility is desirable to minimize the losses due to evaporation of
penetrant stored in open tanks.
Flammability: A penetrant should have a high flash point as a matter of
safety in use. The flash point is defined as the lowest temperature at which
the liquid gives a momentary flash when a small flame is passed across its
surface; at higher temperatures the heated liquid will subsequently burn.
The flash point of a penetrant should not be less than 135 °F (52 °C).
Chemical activity: By chemical activity we mean the ability of the
penetrant to cause corrosion of the metals being tested. This is due to the
presence of halogens, elements which belong to a highly reactive group.
Examples of such elements are chlorine, fluorine, bromine, and
iodine. No penetrant is entirely free of halogens, and it should be
noted that these can cause corrosion of the metal surface if not removed.
This is the primary reason why the post-cleaning operation is essential in
penetrant testing. Hence, penetrants containing halogens are usually
restricted on austenitic steels, titanium, and other high-nickel alloys.
Drying characteristics: The penetrant must resist drying out and
complete bleed-out during hot-air drying of the component after the wash
operation has been completed. Ideally, heat should aid the penetrant by
promoting its return to the component surface, in order to
produce a sharply defined indication.
Techniques for Standard Temperatures
As a standard technique, the temperature of the penetrant and the
surface of the part to be processed shall be neither below 50 °F (10 °C) nor
above 125 °F (52 °C) throughout the examination period. Local heating or
cooling is permitted provided the part temperature remains within the
above-mentioned range during the examination. Where it is not practical
to comply with these temperature limitations, the examination procedure
mentioned below is proposed for higher or lower temperature ranges and
requires qualification. This requires the use of a quench-cracked
aluminum block, which is designated a Liquid Penetrant Comparator
Block.
The block shall be made of aluminum, ASTM B 209, Type 2024, 3/8"
thick, and should have approximate face dimensions of 2" x 3" (52 mm x
76 mm). At the center of each face, an area approximately 1" in diameter
shall be marked with a 950 °F temperature-indicating crayon or
pencil. The marked area shall be heated with a blowtorch, a Bunsen burner,
or a similar device to a temperature between 950 °F and 975 °F (510 – 524
°C). The specimen shall then be immediately quenched in cold water,
which will produce a network of fine cracks on each face.
The block shall then be dried by heating to approximately 300 °F. After
cooling, the block shall be cut in half, and the halves designated “A”
and “B”. If it is desired to qualify a penetrant examination procedure at a
temperature of less than 60 °F (16 °C), the proposed procedure shall be
applied to block “B”. A standard test procedure, which has previously
been demonstrated as suitable for use, shall be applied to block “A”. If the
indications obtained under the proposed conditions on block “B” are
essentially the same as those obtained on block “A” during
examination in the 50 °F to 125 °F range, the proposed procedure shall be
considered qualified for use.
These requirements are as per T-652, Article 6, Section V of the ASME Boiler
and Pressure Vessel Code.
PRECLEANING
preparation, therefore, is one which leaves the surface and the flaw in a
clean and dry condition.
CLEANING METHODS
They are generally classified as chemical, mechanical, or solvent methods,
or combinations thereof. The cleaning methods and the types of contaminant
they remove are discussed below:
MECHANICAL METHODS
Abrasive tumbling:
Removes light scale, burrs, welding flux, braze stop-off, rust, and
casting mold and core material. Should not be used on soft metals such as
aluminum, magnesium, or titanium.
Mechanical methods should be used with care, because they often
mask flaws by smearing adjacent metal over them or by filling them with
abrasive material. This is more likely to happen with soft metals than with
hard metals. Hence, before a decision is made to use a specific method, it
is good practice to test the method on known flaws to ensure that it will
not mask the true flaws.
CHEMICAL METHODS
Alkaline cleaning:
Removes typical machine-shop soils such as cutting oils, polishing
compounds, grease, chips, and carbon deposits. Ordinarily used on large
articles where hand methods are too laborious.
Acid cleaning:
Typically known as ‘etching’, this uses acid to remove scale and metal
smeared by machining operations. A concentrated solution is recommended
for removing heavy scale, mild solutions for light scale, and weak solutions
for removing lightly smeared metal. Besides the above-mentioned methods,
molten salt bath cleaning is also used for conditioning and removing heavy
scale. A chemical cleaning method should be carefully chosen to ensure
that neither the braze nor the components of the assembly are attacked.
Vapor Degreasing:
Removes typical shop soil, oil, and grease. Usually employs
chlorinated solvents; not suitable for titanium.
Solvent Wiping:
Same as vapor degreasing, except that it is a hand operation and may
employ non-chlorinated solvents. Used for localized, low-volume cleaning.
Surface finish of the work piece must always be considered. When
further processing is scheduled, such as machining or polishing, an
abrasive cleaning method is frequently a good choice. Generally chemical
cleaning methods have less degrading effects on the surface finish than
mechanical methods. The choice of a cleaning method is based on such
factors as:
(1) Type of contaminant to be removed, since one method does not
remove all contaminants equally well.
(2) Effect of the cleaning method on the part.
(3) Practicality of the cleaning method for the part; for example, a
large part cannot be put into a small degreaser or ultrasonic cleaner.
(4) Specific cleaning requirements of the purchaser.
PENETRANT SYSTEMS
A number of penetrant types or classes have been developed over
the years, to cater for the wide variety of inspection conditions that occur
in practice. The main types of penetrant are classified based on
(1) Interpretation of indications
(2) Type of removal process
The dye in the penetrant is the prime constituent that aids in the
visible interpretation of indications. The dye may be of the color contrast
type, the fluorescent type, or a combination of both. The color
contrast penetrant consists of a brightly colored dye, usually red, that is
highly visible under normal lighting conditions.
The fluorescent dye, on the other hand, is an almost colorless dye
that emits visible light when viewed under a source of
ultraviolet light. The equipment that aids in the interpretation of
fluorescent indications is termed a black light.
The dual sensitivity penetrant contains both a visible dye for
examination under normal light and a fluorescent dye for a more sensitive
evaluation of small discontinuities. The processes used to remove the
excess penetrant from the surface of the specimen can further categorize
penetrant. The excess penetrant can be removed in four different ways:
(a) By water only.
(b) By a liquid solvent.
(c) By water, followed by a penetrant removal solution which is
water-soluble (hydrophilic), followed by water.
(d) By an emulsifier which is oil-soluble (lipophilic), followed by water.
1. Water-washable penetrants, which can be self-emulsifying or removable
with plain water.
2. Post-emulsifiable penetrants, which require a separate emulsifier to
make the penetrant water-washable.
3. Solvent-removable penetrants, which must be removed with a
solvent, typically when using a color contrast dye in pressurized spray cans.
Water-Washable Penetrant
This system (using a fluorescent or a visible dye penetrant) is
designed so that the penetrant can be directly removed from the
component surface by washing with water. The process is, thus, rapid and
efficient. It is extremely important, however, to maintain a controlled
washing operation, especially where the removal of excess penetrant is by
means of water sprays. A good system will be an optimization of the
processing conditions, such as, water pressure, temperature, duration of
rinse cycle, surface condition of the work piece, and the inherent removal
characteristics of the penetrant. Even so, it is possible that penetrant may
be washed away from small defects.
1. Not reliable for detecting scratches and similar shallow surface
discontinuities.
2. Repeatability on specimens is not reliable.
3. Not reliable on anodized surfaces.
4. The presence of acids and chromates affects the sensitivity of the system.
5. Indications can easily be over-washed.
6. There is a high chance of contamination of the penetrant by water.
required by them being 2 to 4 minutes, whereas the emulsifiers with a
viscosity of 30 to 50 mm²/s react at a relatively faster rate, the time
required by them being up to 2 minutes.
Emulsification dwell time begins as soon as the emulsifier is
applied. The length of time that the emulsifier is allowed to remain on the
work piece and in contact with the penetrant depends mainly on the type
of emulsifier, i.e. fast-acting or slow-acting, water-based or oil-based, and
on the surface roughness of the component under inspection.
Recommendations from the manufacturers can serve only as guidelines;
the optimum time for a specific work piece is to be determined
experimentally. The period ranges from a few seconds to several minutes,
typically 15 s to 4 min, although a maximum time of 5 min is established
by some specifications.
Disadvantages of Post-emulsifiable penetrant systems
1. It is a two-step process and requires additional time to make the
penetrant water-washable.
2. Separate emulsifiers are required, which raises the cost of
inspection.
3. It is a cumbersome operation to remove penetrants from
keyways, threads, blind holes, and rough surfaces.
Solvent Removers (Cleaners)
Solvent removers, sometimes referred to as cleaners, differ from
emulsifiers in that they remove excess surface penetrant through direct
solvent action: the solvent remover dissolves the penetrant.
There are two types of solvent removers: flammable and
non-flammable. Flammable cleaners are free from halogens but
are potential fire hazards. Non-flammable cleaners are usually
halogenated solvents, which renders them unsuitable for certain
applications, usually because of their high toxicity. The most important
precaution while using solvent-removable penetrants is never to apply
the solvent directly to the test piece.
Flushing the surface with the solvent, following the application of the
penetrant and prior to development, is prohibited, as this will dilute the
penetrant inside discontinuities.
Selection of Penetrant Test Systems
Fluorescent Penetrants:
Water-washable Penetrant:
(2) Inspection of large volumes of parts, when time is not a constraint.
1. Portable kits for carrying out inspection of small areas, for use on
site; these are often contained in pressurized aerosol cans.
2. Fixed installations, used for testing components on a continuous
basis, with a series of processing stations in sequential order to
form a flow line. Increasingly, these have automated component handling
and timing.
3. Self-contained processing booths are used for testing components
which cannot be moved for testing.
Many of the materials used in penetrant inspection are potential fire
hazards and may be toxic. Hence, necessary safety precautions should be
taken on any installation.
Method B – Solvent Removable Penetrants
Technique Restrictions
Fluorescent penetrant examination should not follow a color contrast
penetrant examination. Intermixing of penetrants from different families or
different manufacturers is not permitted. A retest with a water-washable
penetrant may cause a marginal loss of indications due to reduced sensitivity.
Selection of a Penetrant Technique
The selection of a liquid penetrant system is not a straightforward
task. There are a variety of penetrant systems and developer types that are
available for use, and one set of penetrant materials will not work for all
applications. Many factors must be considered when selecting the
penetrant materials for a particular application. These factors include the
sensitivity required, materials cost, number of parts and size of area
requiring inspection, and portability. When sensitivity is the primary
consideration for choosing a penetrant system, the first decision that must
be made is whether to use fluorescent dye penetrant, or visible dye
penetrant. Fluorescent penetrants are generally more capable of producing
a detectable indication from a small defect because the human eye is more
sensitive to a light indication on a dark background and the eye is naturally
drawn to a fluorescent indication. The graph below presents a series of
curves that show the contrast ratio required for a spot of a certain diameter
to be seen. The curves show that for indication spots larger than 0.076
mm (0.003 inch) in diameter, it does not really matter whether it is a light
spot on a dark background or a dark spot on a light background. However,
when a dark indication on a light background is further reduced in size, it
is no longer detectable even though contrast is increased. Furthermore,
with a light indication on a dark background, indications down to 0.003
mm (0.0001 inch) were detectable when the contrast between the flaw and
the background was high enough.
From this data, it can be seen why a fluorescent penetrant offers an
advantage over visible penetrant for finding very small defects. Data
presented by De Graaf and De Rijk supports this statement. They
inspected “Identical” fatigue cracked specimens using a red dye penetrant
and a fluorescent dye penetrant. The fluorescent penetrant found 60
defects while the visible dye was only able to find 39 of the defects. Under
certain conditions, the visible penetrant may be a better choice. When
fairly large defects are the subject of the inspection, a high sensitivity
system may not be warranted and may result in a large number of
irrelevant indications. Visible dye penetrants have also been found to give
better results when surface roughness is high or when flaws are located in
areas such as weldments. Since visible dye penetrants do not require a
darkened area for the use of an ultraviolet light, visible systems are more
easy to use in the field. Solvent removable penetrants, when properly
applied can have the highest sensitivity and are very convenient to use but
are usually not practical for large area inspection or in high-volume
production settings. Another consideration in the selection of a penetrant
system is whether water washable, post-emulsifiable or solvent removable
penetrants will be used. Post-emulsifiable systems are designed to reduce
the possibility of overwashing, which is one of the factors known to
reduce sensitivity. However, these systems add another step, and thus
cost, to the inspection process.
Penetrants are evaluated by the US Air Force according to the
requirements in MIL-I- 25135 and each penetrant system is classified into
one of five sensitivity levels. This procedure uses titanium and Inconel
specimens with small surface cracks produced in low cycle fatigue
bending to classify penetrant systems. The brightness of the indications
produced after processing a set of specimens with a particular
penetrant system is measured using a photometer. Most commercially
available penetrant materials are listed in the Qualified
Products List of MIL-I-25135 according to their type, method and
sensitivity level. Visible dye and dual-purpose penetrants are not
classified into sensitivity levels as fluorescent penetrants are. The
sensitivity of a visible dye penetrant is regarded as level 1 and largely
dependent on obtaining good contrast between the indication and the
background.
(3) Blue: Concentrate with water, as a penetrant. Provides a cheaper
penetrant when large volumes of parts are to be inspected together with
marked differentiation.
Yellowish-Green Oil: Penetrant incorporated in refrigerator oil for leak
detection in refrigerators.
Penetrants are then classified based on the strength or detectability of
the indication that is produced for a number of very small and tight
fatigue cracks.
The five sensitivity levels are shown below:
_ Level ½ - Ultra Low Sensitivity
_ Level 1 - Low Sensitivity
_ Level 2 - Medium Sensitivity
_ Level 3 - High Sensitivity
_ Level 4 - Ultra-High Sensitivity
Modes of Application
There are various methods of application of penetrant such as
dipping, brushing, spraying, or flooding. Small parts are quite often placed
in small baskets and dipped into a tank of penetrant. On larger parts, and
those with complex geometries, penetrant can be applied effectively by
brushing or spraying. Both conventional and electrostatic spray guns are
effective means of application of penetrants to the part surfaces.
Electrostatic spray application can eliminate excess liquid buildup of
penetrant on the part, minimize overspray, and minimize the amount of
penetrant entering hollow-cored passages, which might otherwise serve as
penetrant reservoirs and cause severe bleed-out problems during
examination. Aerosol cans are conveniently portable and ideal for spot
checking and local applications. A word of caution when using spray
application: it is important that there be proper ventilation, which is
generally accomplished through properly designed spray booths and
exhaust systems.
Penetrant dwell time is the total time that the penetrant is in contact with
the part surface. The dwell time is important because it allows the
penetrant time to seep into any surface-breaking discontinuities.
Water-washable penetrants:
General considerations:
(a) The temperature of the water should be relatively constant and should
be maintained within the range of 50 to 100 degree F (10 – 38 C).
(b) Spray-rinse water pressure should not be greater than 40 psi (280 kPa).
(c) Rinse time should not exceed 120 s unless otherwise specified by
governing procedures.
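The rinse limits above lend themselves to a simple automated check. The sketch below is illustrative only; the function name and interface are assumptions, not taken from any standard:

```python
# Sanity-check water-washable rinse parameters against the limits quoted
# above: 50-100 F (10-38 C) water, spray pressure <= 40 psi (280 kPa),
# and rinse time <= 120 s unless a governing procedure says otherwise.

def rinse_parameters_ok(temp_c: float, pressure_kpa: float, time_s: float) -> bool:
    """Return True when all three rinse parameters fall within the quoted limits."""
    return 10 <= temp_c <= 38 and pressure_kpa <= 280 and time_s <= 120
```

For example, a rinse at 25 C, 200 kPa, for 90 s passes the check; pushing any one parameter past its limit fails it.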
Rinse Effectiveness:
General considerations:
(a) The temperature of the water should be relatively constant and
should be maintained within the range of 50 to 100 degree F (10 – 38 C).
(b) Spray-rinse pressure should be in accordance with the
manufacturer's recommendations.
(c) Rinse time should not exceed 120 s unless otherwise specified
by governing procedures.
Rinse Effectiveness:
If the emulsification and final rinse steps are not effective, as
evidenced by excessive residual penetrant on the surface after rinsing,
the part should be dried and recleaned, followed by re-application of
penetrant.
Post-emulsifiable Penetrant (Hydrophilic):
DRYING OF JOBS
Drying Time Limits: Parts should not remain in the drying oven any
longer than is necessary to dry the surface. Normally, times over
30 min in the drier may impair the sensitivity of the examination.
DEVELOPERS
Ideally, dry developers should be light and fluffy and should cling
to the dry metallic surfaces in a fine film. However, adherence of powder
should not be excessive, because the amount of penetrant at fine flaws is
so small that it cannot work through a thick coating of powder. Also, the
powder should not float and fill the air with dust. Developer bins with dust
control systems should be used to minimize inhalation of dust. The color of
most dry developers is white, although an identifying tint is sometimes
added, since the whiteness is only of real importance when used with
color-contrast visible dye penetrants. For fluorescent penetrants, any
tinting color added to the developer should be in small amounts, as many
tinting additives quench the luminescence of fluorescent dyes.
Even a slight quenching of fluorescence at a marginal fine indication can
cause serious consequences.
Wet Developers
Water-Soluble Developers
By using a material soluble in water, many of the problems inherent
in suspension-type wet developers can be avoided. Unfortunately, most
water-soluble developers are considered to be inferior to other types of
developers, as they produce dimmer indications, when used with
fluorescent penetrants. Proper wetting and resistance to corrosion are
primary concerns with water-soluble developers. The problem of
maintaining a suspension is, however, eliminated, but changes in
concentration due to evaporation must be controlled to ensure consistent
sensitivity. The soluble types are somewhat easier to remove than the
suspendible types.
Development Time
The length of time the developer is allowed to remain on the part
prior to examination should not be less than 10 minutes. Development
time begins immediately after the application of dry developer, and as
soon as the wet developer coating has dried.
Patternmaking
The pattern is a physical model of the casting used to make the mold.
The mold is made by packing some readily formed aggregate material,
such as molding sand, around the pattern. When the pattern is withdrawn,
its imprint provides the mold cavity, which is ultimately filled with metal
to become the casting. If the casting is to be hollow, as in the case of pipe
fittings, additional patterns, referred to as cores, are used to form these
cavities.
Core making
Cores are forms, usually made of sand, which are placed into a mold
cavity to form the interior surfaces of castings. Thus, the void space
between the core and the mold cavity surface is what eventually becomes
the casting.
Molding
Molding consists of all operations necessary to prepare a mold for
receiving molten metal. Molding usually involves placing a molding
aggregate around a pattern held with a supporting frame, withdrawing the
pattern to leave the mold cavity, setting the cores in the mold cavity and
finishing and closing the mold.
Melting and Pouring
The preparation of molten metal for casting is referred to simply as
melting. Melting is usually done in a specifically designated area of the
foundry, and the molten metal is transferred to the pouring area where the
molds are filled.
Cleaning
Cleaning refers to all operations necessary to the removal of sand,
scale, and excess metal from the casting. The casting is separated from the
mold and transported to the cleaning department. Burned-on sand and
scale are removed to improve the surface appearance of the casting.
Excess metal, in the form of fins, wires, parting line fins, and gates, is
removed. Castings may be upgraded by welding or other procedures.
Inspection of the casting for defects and general quality is performed.
Other processes: before shipment, further processing such as heat
treatment, surface treatment, additional inspection, or machining
may be performed as required by the customer's specifications.
If the lack of root fusion is accessible from the root side, a dye
penetrant system can be used to detect this defect. It is considered a
detrimental defect by almost all codes and standards. If the defective
area is accessible from the root side, the root defect should be cut out
or the defect line widened and re-welded. If the root defect is not
accessible from the root side, the complete weld must be cut out and
re-welded.
10. Lack of Sidewall Fusion: Lack of fusion between the weld and
the parent metal at a side of the weld. The common causes of this
defect are incorrect welding conditions, such as arc energy too low,
travel speed too fast, incorrect electrode angle, or molten metal
flooding ahead of the arc because of the work position. If the lack of
sidewall fusion reaches the surface, penetrant testing can be used.
11. Lack of Inter-Run Fusion: This is otherwise termed as inter
pass lack of fusion. This is caused due to lack of union between
adjacent runs of weld metal in a multi-pass weld. The common causes of
this defect are incorrect welding conditions, such as arc energy too
low, travel speed too fast, incorrect electrode angle, or molten metal
flooding ahead of the arc because of the work position.
15. Excess Weld Metal: The extra metal which produces convexity in
fillet welds and weld thicknesses greater than the parent metal plate in
butt welds. The term “reinforcement” is misleading, since the excess does
not normally produce a stronger weld in a butt joint. In certain situations,
however, excess metal may be required for metallurgical reasons. This
feature of weld is regarded as a defect only when the height of the excess
metal is greater than the specified limits.
16. Undercut: During the final pass or cover pass, the exposed upper
edges of the beveled weld preparation tend to melt and run down into the
deposited metal in the groove. Undercutting often occurs when
insufficient filler metal is deposited to fill the resultant grooves at the edge
of the weld bead. Excessive welding current, incorrect arc length, or
incorrect manipulation may cause undercutting.
17. Burn Through: A burn through is that portion of the weld bead
where excessive penetration has caused the weld pool to be blown into
the pipe or vessel. It is caused by the factors that produce excessive heat
in one area, such as high current, slow rod speed, incorrect rod
manipulation etc.
Nature of the Defect
Penetrant testing detects small round defects more easily than small
linear defects: Small round defects are generally easier to detect for
several reasons. First, they are typically volumetric defects that can
trap significant amounts of penetrant. Second, round defects fill with
penetrant faster than linear defects. One research effort found that an
elliptical flaw with a length-to-width ratio of 100 will take nearly 10
times longer to fill with penetrant than a cylindrical flaw of the same
volume.
Deeper flaws are detected more easily than shallow flaws: Deeper flaws
will trap more penetrant than shallow flaws, and they are less prone to
overwashing.
Flaws with a narrow surface opening are detected more easily than
wide-open flaws: Flaws with narrow surface openings are less prone to
overwashing.
Chemical Segregation
The elements in an alloy are seldom uniformly distributed, and even
unalloyed metals contain randomly distributed impurities in the form of
tramp elements. Therefore, the composition of a metal or alloy will vary.
Laps:
Laps are surface irregularities that appear as linear defects and are
caused by the folding over of hot metal at the surface. These folds are
forged into the surface, but are not metallurgically bonded (welded),
because of the oxide present between the surfaces. Therefore, a
discontinuity with a sharp notch is created.
Seam:
Seam is a surface defect that also appears as a longitudinal indication
and is a result of a crack, a heavy cluster of nonmetallic inclusions, or a
deep lap (a lap that intersects the surface at a large angle). A seam can also
result from a defect in the ingot surface, such as a hole,
that becomes oxidized and is prevented from healing
during working. In this case, the hole simply stretches out during forging
or rolling, producing a linear, crack-like seam in the workpiece surface.
Slivers are loose torn pieces of steel rolled into the surface. Rolled-in scale
is a scale formed during rolling. Ferrite fingers are surface cracks that
have been welded shut but still contain the oxides and decarburization.
Internal flaws in forgings often appear as cracks or tears, and may result
either from forging with too light a hammer or from continuing to forge
after the metal has cooled below a safe forging temperature. A
number of surface flaws can be produced by the forging operation. The
movement of metal over or upon another surface often causes these flaws
without actual welding or fusing of the surfaces; such flaws may be laps
or folds.
Cold shuts often occur in closed-die forgings. They are junctures of two
adjoining surfaces caused by incomplete metal fill and incomplete fusion
of surfaces. Shear cracks often occur in forgings. They are diagonal cracks
occurring on the trimmed edges and are caused by shear stresses.
INTERPRETATION OF INDICATIONS
Inspection
CLASSIFICATION OF INDICATIONS
Relevant Indications:
Evaluation
The main purpose of evaluation is to classify the indications as acceptable
or rejectable. An experienced inspector readily determines which
indications are within acceptable limits and which ones are not. The
inspector then measures all other indications. If the length or diameter of
an indication exceeds allowable limits, it must be evaluated. One of the
most common and accurate ways of measuring indications is to lay a flat
gauge of the maximum acceptable dimension of discontinuity over the
indication. If the indication is not completely covered by the gauge, it is
not acceptable. Each indication that is not acceptable is evaluated. It may
actually be unacceptable, it may be worse than it appears, it may be false,
it may be real, or to may be acceptable upon closer examination. The
common method of evaluation includes the following steps:
a) Wipe the area of the indication with a small brush or clean cloth that
is dampened with solvent.
b) Dust the area with a dry developer or spray it with a light coat of
non-aqueous developer.
c) Re-examine under lighting appropriate for the type of penetrant
used.
If the discontinuity originally appeared to be of excessive length because
of bleeding of penetrant along a scratch, crevice, or machining mark, that
will be evident to a trained eye. Finally, to gain maximum assurance that
the indication is properly interpreted, it is good practice to wipe the
surface again with solvent-dampened cotton and examine the indication
again as it redevelops.
The ASME Boiler and Pressure Vessel Code is a widely used acceptance
standard. Of its several sections, Sec V and Sec VIII deal chiefly with
Nondestructive Testing.
The following are the acceptance standards as per various codes, which
are being referred most commonly.
Acceptance Standards
Acceptance as per ASME Boiler and Pressure Vessel Code, Sec VIII,
Article
Following types of relevant indications are unacceptable:
(1) Crack or any other linear indications
(2) Rounded indications whose dimension exceeds 4.8 mm.
(3) Four or more rounded indications of 1.6 mm diameter or greater in a
line, separated by a distance of 1.6 mm or less, edge to edge.
(4) Ten or more rounded indications of diameter 1.6 mm or greater in any
6 square inches of surface, the major dimension of this area not to
exceed 153 mm, with the dimensions taken in the least favorable
location relative to the indications being evaluated.
Acceptance as per API 1104 – Welding of Pipelines and Related
Facilities
Classification of Indications
Linear indications are those in which the length is more than three times
the width. Rounded indications are those in which the length is three times
the width or less.
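The API 1104 length-to-width rule above can be expressed directly in code. The following sketch (function name assumed for illustration) classifies an indication from its measured dimensions:

```python
def classify_indication(length_mm: float, width_mm: float) -> str:
    """Linear if the length exceeds three times the width; otherwise
    rounded (the API 1104 rule quoted above)."""
    return "linear" if length_mm > 3 * width_mm else "rounded"

# A 6 mm x 1 mm indication is linear; a 3 mm x 1 mm indication is
# rounded, since its length is exactly three times its width.
```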
ACCEPTANCE STANDARDS
NONRELEVANT INDICATIONS
FALSE INDICATION
Recording
Some residue will remain on the work pieces after the penetrant inspection
is completed. These residues can result in the formation of voids during
subsequent welding or stop-off brazing, in the contamination of surfaces
or in unfavorable reactions in chemical processing operations. Post
cleaning is particularly important where residual penetrant materials
might combine with other factors in service to produce corrosion. A
suitable technique, such as a simple water spray, water rinse, machine
wash, vapor degreasing, solvent soaking, or ultrasonic cleaning may be
employed. If removal of the developer is recommended, it should be
carried out as promptly as possible to prevent the developer from
becoming fixed on the surface.
Hydrophilic Emulsifiers
The primary in-service checks used to verify the quality of penetrant
test systems are as follows:
Measure: The blocks have dimensions of 2 in. x 3 in. (50 mm x 75 mm) and
are cut from 5/16 in. (8 mm) thick 2024-T3 aluminum alloy plate. The
blocks are heated non-uniformly and water quenched to produce thermal
cracks. This is accomplished by supporting a block in a frame and heating
it with the flame of a gas burner or torch at the center of the underside
of the block. A temperature of about 510 to 527 degrees C is maintained
for approximately 4 minutes, monitored by a temperature-indicating crayon
or lacquer (Tempilstik, Tempilaq, or equivalent) applied over an area the
size of a penny on the top side of the block.
He concluded that a new type of ray was being emitted from the tube. This ray was capable of
passing through the heavy paper covering and exciting the phosphorescent materials in the
room. He found the new ray could pass through most substances casting shadows of solid
objects. Roentgen also discovered that the ray could pass through the tissue of humans, but not
bones and metal objects. One of Roentgen’s first experiments late in 1895 was a film of the
hand of his wife, Bertha. It is interesting that the first use of X-rays was for an industrial (not
medical) application, as Roentgen produced a radiograph of a set of weights in a box to show his
colleagues.
Roentgen’s discovery was a scientific bombshell, and was received with extraordinary interest
by both scientists and laymen. Scientists everywhere could duplicate his experiment because the
cathode tube was very well known during this period. Many scientists dropped other lines of
research to pursue the mysterious rays. Newspapers and magazines of the day provided the
public with numerous stories, some true, others fanciful, about the properties of the newly
discovered rays.
Public fancy was caught by this invisible ray with the ability to pass through solid matter, and, in
conjunction with a photographic plate, provide a picture of bones and interior body parts.
Scientific fancy was captured by demonstration of a wavelength shorter than light. This
generated new possibilities in physics, and for investigating the structure of matter. Much
enthusiasm was generated about potential applications of rays as an aid in medicine and
surgery. Within a month after the announcement of the discovery, several medical radiographs
had been made in Europe and the United States which were used by surgeons to guide them in
their work. In June 1896, only 6 months after Roentgen announced his discovery, X-rays were
being used by battlefield physicians to locate bullets in wounded soldiers.
Prior to 1912, X-rays were used little outside the realms of medicine and dentistry, though
some X-ray pictures of metals were produced. The reason that X-rays were not used in
industrial application before this date was because the X-ray tubes (the source of the X-rays)
broke down under the voltages required to produce rays of satisfactory penetrating power for
industrial purpose. However, that changed in 1913 when the high vacuum X-ray tubes designed
by Coolidge became available. The high vacuum tubes were intense and reliable X-ray
sources, operating at energies up to 100,000 volts.
In 1922, industrial radiography took another step forward with the advent of the 200,000-volt X-
ray tube that allowed radiographs of thick steel parts to be produced in a reasonable amount of
time. In 1931, General Electric Company developed 1,000,000 volt X-ray generators, providing
an effective tool for industrial radiography. That same year, the American Society of Mechanical
Engineers (ASME) permitted X-ray approval of fusion welded pressure vessels that further
opened the door to industrial acceptance and use.
In 1896, French scientist Henri Becquerel discovered natural radioactivity. Many scientists of the period
were working with cathode rays, and other scientists were gathering evidence on the theory that the
atom could be subdivided. Some of the new research showed that certain types of atoms disintegrate
by themselves. It was Henri Becquerel who discovered this phenomenon while investigating the
properties of fluorescent minerals. Becquerel was researching the principles of fluorescence, wherein certain
minerals glow (fluoresce) when exposed to sunlight. He utilized photographic plates to record this
fluorescence.
One of the minerals Becquerel worked with was a uranium compound. On a day when it was
too cloudy to expose his samples to direct sunlight, Becquerel stored some of the compound in
a drawer with his photographic plates. Later when he developed these plates, he discovered
that they were fogged (exhibited exposure to light.) Becquerel questioned what would have
caused this fogging? He knew he had wrapped the plates tightly before using them, so the
fogging was not due to stray light. In addition, he noticed that only the plates that were in the
drawer with the uranium compound were fogged. Becquerel concluded that the uranium
compound gave off a type of radiation that could penetrate heavy paper and expose
photographic film. Becquerel continued to test samples of uranium compounds and determined
that the source of radiation was the element uranium. Becquerel's discovery was, unlike that of
the X-rays, virtually unnoticed by laymen and scientists alike. Only a relatively few scientists
were interested in Becquerel's findings. It was not until the discovery of radium by the Curies
two years later that interest in radioactivity became widespread.
While working in France at the time of Becquerel’s discovery, Polish scientist Marie Curie
became very interested in his work. She suspected that a uranium ore known as pitchblende
contained other radioactive elements. Marie and her husband, a French scientist, Pierre Curie
started looking for these other elements. In 1898, the Curies discovered another radioactive
element in pitchblende; they named it 'polonium' in honor of Marie Curie's native homeland.
Later that year, the Curies discovered another radioactive element, which they named 'radium',
or shining element. Both polonium and radium were more radioactive than uranium.
Since these discoveries, many other radioactive elements have been discovered or produced.
Radium became the initial industrial gamma ray source. The material allowed radiographing
castings up to 10 to 12 inches thick. During World War II, industrial radiography grew
tremendously as part of the Navy’s shipbuilding program. In 1946, manmade gamma ray
sources such as cobalt and iridium became available. These new sources were far stronger
than radium and were much less expensive. The manmade sources rapidly replaced radium,
and use of gamma rays grew quickly in industrial radiography.
Health Concerns
The science of radiation protection, or “health physics” as it is more properly called, grew out of
the parallel discoveries of X-rays and radioactivity in the closing years of the 19th century.
Experimenters, physicians, laymen, and physicists alike set up X-ray generating apparatus and
proceeded about their labors with a lack of concern regarding potential dangers. Such a lack of
concern is quite understandable, for there was nothing in previous experience to suggest that
X-rays would in any way be hazardous. Indeed, the opposite was the case, for who would
suspect that a ray similar to light but unseen, unfelt, or otherwise undetectable by the senses
would be damaging to a person? More likely, or so it seemed to some, X-rays could be
beneficial for the body.
Inevitably, the widespread and unrestrained use of X-rays led to serious injuries. Often injuries
were not attributed to X-ray exposure, in part because of the slow onset of symptoms, and
because there was simply no reason to suspect X-rays as the cause. Some early experimenters
did tie X-ray exposure and skin burns together. The first warning of possible adverse effects of
X-rays came from Thomas Edison, William J. Morton, and Nikola Tesla, who each reported eye
irritations from experimentation with X-rays and fluorescent substances.
Today, it can be said that radiation ranks among the most thoroughly investigated causes of
disease. Although much still remains to be learned, more is known about the mechanisms of
radiation damage at the molecular, cellular, and organ-system levels than is known for most other
health stressing agents. Indeed, it is precisely this vast accumulation of quantitative
dose-response data that enables health physicists to specify radiation levels so that medical,
scientific, and industrial uses of radiation may continue at levels of risk no greater than, and
frequently less than, the levels of risk associated with any other technology.
X-rays and Gamma rays are electromagnetic radiation of exactly the same nature as light, but of
much shorter wavelength. Wavelength of visible light is of the order of 6000 angstroms while the
wavelength of x-rays is in the range of one angstrom and that of gamma rays is 0.0001
angstrom. This very short wavelength is what gives x-rays and gamma rays their power to
penetrate materials that light cannot. These electromagnetic waves are of a high energy level
and can break chemical bonds in materials they penetrate. If the irradiated matter is living tissue,
the breaking of chemical bonds may result in altered structure or a change in the function of cells.
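The wavelength figures above translate into photon energies through the standard relation E = hc/λ, which for wavelengths in angstroms gives E(eV) ≈ 12398/λ. The constant is standard physics, not taken from the text:

```python
HC_EV_ANGSTROM = 12398.4  # h*c in eV-angstrom (standard value, not from the text)

def photon_energy_ev(wavelength_angstrom: float) -> float:
    """Photon energy in eV for a wavelength in angstroms (E = hc / lambda)."""
    return HC_EV_ANGSTROM / wavelength_angstrom

# A 6000-angstrom visible-light photon carries only ~2 eV, a 1-angstrom
# x-ray photon ~12.4 keV, and a 0.0001-angstrom gamma far more still --
# which is why the shorter wavelengths can break chemical bonds.
```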
Early exposures to radiation resulted in the loss of limbs and even lives. Men and women
researchers collected and documented information on the interaction of radiation and the
human body. This early information helped science understand how electromagnetic radiation
interacts with living tissue. Unfortunately, much of this information was collected at great
personal expense.
In many ways radiography has changed little from the early days of its use. We still capture a
shadow image on film using similar procedures and processes technicians were using in the
late 1800’s. Today, however, we are able to generate images of higher quality, and greater
sensitivity through the use of higher quality films with a larger variety of film grain sizes.
Film processing has evolved to an automated state producing more consistent film quality by
removing manual processing variables. Electronics and computers allow technicians to now
capture images digitally. The use of “filmless radiography” provides a means of capturing an
image, digitally enhancing, sending the image anywhere in the world, and archiving an image
that will not deteriorate with time.
Technological advances have provided industry with smaller, lighter, and very portable
equipment that produces high quality X-rays. The use of linear accelerators provides a means of
generating extremely short-wavelength, highly penetrating radiation, a concept dreamed of only
a few short years ago. While the process has changed little, technology has evolved allowing
radiography to be widely used in numerous areas of inspection.
Radiography has seen expanded usage in industry to inspect not only welds and castings, but
also items such as airbags and canned food products. Radiography has found use in
metallurgical material identification and security systems at airports and other
facilities.
Gamma ray inspection has also changed considerably since the Curies’ discovery of radium.
Man-made isotopes of today are far stronger and offer the technician a wide range of energy
levels and half-lives. The technician can select Co-60 which will effectively penetrate very thick
materials, or select a lower energy isotope, such as Tm-170, which can be used to inspect
plastics and very thin or low density materials. Today gamma rays find wide application in
industries such as petrochemical, casting, welding, and aerospace.
RADIATION FUNDAMENTALS
For the purposes of this manual, we can use a simplistic model of an atom. The atom
can be thought of as a system containing a positively charged nucleus and negatively
charged electrons that are in orbit around the nucleus. The nucleus is the central core
of the atom and is composed of two types of particles, protons, which are positively
charged, and neutrons, which have a neutral charge. Each of these particles has a
mass of approximately one atomic mass unit (amu); 1 amu = 1.66E-24 g. Electrons
surround the nucleus in orbitals of various energies. (In simple terms, the farther an
electron is from the nucleus, the less energy is required to free it from the atom.)
Electrons are very light compared to protons and neutrons. Each electron has a mass of
approximately 5.5E-4 amu. A nuclide is an atom described by its atomic number (Z) and
its mass number (A). The Z number is equal to the charge (number of protons) in the
nucleus, which is a characteristic of the element. The A number is equal to the total
number of protons and neutrons in the nucleus. Nuclides with the same number of
protons but different number of neutrons are called isotopes.
For example, deuterium (²₁H) and tritium (³₁H) are isotopes of hydrogen with mass
numbers two and three, respectively. There are on the order of 200 stable nuclides and
over 1100 unstable (radioactive) nuclides. Radioactive nuclides can generally be
described as those which have an excess or deficiency of neutrons in the nucleus.
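The Z/A bookkeeping above is easy to encode; a minimal sketch (names are illustrative):

```python
def neutron_count(mass_number_a: int, atomic_number_z: int) -> int:
    """Neutrons in a nuclide: mass number A minus atomic number Z."""
    return mass_number_a - atomic_number_z

# Deuterium (A=2, Z=1) and tritium (A=3, Z=1) are isotopes of hydrogen:
# the same number of protons, but one and two neutrons respectively.
```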
RADIOACTIVE DECAY
Radioactive nuclides (also called radio nuclides) can regain stability by nuclear
transformation (radioactive decay) emitting radiation in the process. The radiation
emitted can be particulate or electromagnetic or both. The various types of radiation
and examples of decay are shown below.
ALPHA (α)
Alpha particles have a mass and charge equal to those of helium nuclei (2 protons + 2
neutrons). Alpha particles are emitted during the decay of some very heavy nuclides (Z
> 83).
²²⁶₈₈Ra → ²²²₈₆Rn + ⁴₂α
BETA (β)
Beta particles are electrons emitted from the nucleus during the decay of
neutron-rich nuclides. Positively charged betas (positrons) are emitted during the
decay of proton-rich nuclides.
GAMMA (γ)
Gammas (also called gamma rays) are electromagnetic radiation (photons). Gammas
are emitted during energy level transitions in the nucleus. They may also be emitted
during other modes of decay.
⁹⁹ᵐ₄₃Tc → ⁹⁹₄₃Tc + γ
ELECTRON CAPTURE
In certain neutron deficient nuclides, the nucleus will capture an orbital electron resulting
in conversion of a proton into a neutron. This type of decay also involves gamma
emission as well as x-ray emission as other electrons fall into the orbital vacated by the
captured electron.
NEUTRONS (n)
For a few radionuclides, a neutron can be emitted during the decay process.
HALF-LIFE
The half-life of a radionuclide is the time required for one-half of a collection of atoms of
that nuclide to decay. Decay is a random process which follows an exponential curve.
The number of radioactive nuclei remaining after time t is given by:
N = N₀ × (1/2)^(t/T)
where
N₀ = initial number of nuclei
t = decay time
T = half-life
ENERGY
The basic unit used to describe the energy of a radiation particle or photon is the
electron volt (eV). An electron volt is equal to the amount of energy gained by an
electron passing through a potential difference of one volt. The energy of the radiation
emitted is a characteristic of the radionuclide. For example, the energy of the alpha
emitted by Cm-238 will always be 6.52 MeV, and the gamma emitted by Ba-135m will
always be 268 keV. Many radionuclides have more than one decay route. That is, there
may be different possible energies that the radiation may have, but they are discrete
possibilities. However, when a beta particle is emitted, the energy is divided between the
beta and a neutrino. (A neutrino is a particle with no charge and infinitesimally small
mass.) Consequently, a beta particle may be emitted with an energy varying in a
continuous spectrum from zero to a maximum energy (Emax), which is characteristic of
the radionuclide. The average energy is generally around forty percent of the maximum.
RADIATION UNITS
ACTIVITY
1 Curie (Ci) = 3.7E10 disintegrations per second (dps). The becquerel (Bq) is also coming
into use as the International System of Units (SI) measure of disintegration rate. 1 Bq = 1
dps, 3.7E10 Bq = 1 Ci, and 1 mCi = 37 MBq.
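These unit relationships can be sketched in a few lines (a minimal illustration; the constants are exactly those quoted above):

```python
# Activity unit conversions: 1 Ci = 3.7e10 dps (Bq), so 1 mCi = 37 MBq.
CI_TO_BQ = 3.7e10  # disintegrations per second per curie

def ci_to_bq(ci):
    """Convert an activity in curies to becquerels."""
    return ci * CI_TO_BQ

def mci_to_mbq(mci):
    """Convert millicuries to megabecquerels (1 mCi = 37 MBq)."""
    return mci * 37.0

print(ci_to_bq(1.0))     # 37000000000.0
print(mci_to_mbq(10.0))  # 370.0
```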
CALCULATION OF ACTIVITIES
The half-life of a radionuclide is the time required for one-half of a collection of atoms of
that nuclide to decay. This is the same as saying it is the time required for the activity of
the sample to be reduced to one-half the original activity. This can be written as:

A = A₀ (1/2)^(t/T)

where
A₀ = original activity
t = decay time
T = half-life
EXAMPLE
P-32 has a half-life of 14.3 days. On January 10, the activity of a P-32 sample was 10
µCi. What will the activity be on February 6? February 6 is 27 days after January 10, so

A = 10 µCi × (1/2)^(27/14.3) ≈ 2.7 µCi

A quick estimate could also have been made by noting that 27 days is about two
half-lives, so the new activity would be about one-half of one-half (i.e., one-fourth) of
the original activity.
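The decay arithmetic above can be checked with a short sketch (t and the half-life must share the same units):

```python
# Radioactive decay: A = A0 * (1/2)**(t / T) for half-life T.
def activity_after(a0, t, half_life):
    """Activity remaining after decay time t (same units as half_life)."""
    return a0 * 0.5 ** (t / half_life)

# The P-32 example: 10 uCi initially, half-life 14.3 days, 27 days elapsed.
print(round(activity_after(10.0, 27.0, 14.3), 1))  # 2.7 (uCi)
```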
EXPOSURE
The unit of radiation exposure in air is the roentgen (R). It is defined as that quantity of
gamma or x-radiation causing ionization in air equal to 2.58E-4 coulombs per kilogram.
Exposure applies only to absorption of gammas and x-rays in air.
CALCULATION OF EXPOSURE RATES
Gamma exposure constants (G) for some radionuclides are shown below. G is the
exposure rate in R/hr at 1 cm from a 1 mCi point source.

Nuclide          G
Chromium-51      0.16
Cobalt-57        0.9
Cobalt-60        13.2
Gold-198         2.3
Iodine-125       1.5
Nickel-63        3.1
Radium-226       8.25
Tantalum-182     6.8
Zinc-65          2.7
An empirical rule which may also be used is

R/hr at 1 foot ≈ 6CEn

where C is the activity in curies, E is the gamma energy in MeV, and n is the number
of gammas emitted per disintegration.
It should be noted that this formula and the gamma constants are for exposure rates
from gammas and x-rays only. Any dose calculations would also have to include the
contribution from any particulate radiation that may be emitted.
X-RAY SOURCES
PRODUCTION OF X-RAYS
X-rays are produced when electrons, traveling at high speed, collide with matter or
change direction. In the usual type of x-ray tube, an incandescent filament supplies the
electrons and thus forms the cathode, or negative electrode, of the tube. A high voltage
applied to the tube drives the electrons to the anode, or target. The sudden stopping of
these rapidly moving electrons in the surface of the target results in the generation of
x-radiation.
The design and spacing of the electrodes and the degree of vacuum are such that no
flow of electrical charge between cathode and anode is possible until the filament is
heated.
The higher the temperature of the filament, the greater is its emission of electrons and
the larger the resulting tube current. The tube current is controlled, therefore, by some
device that regulates the heating current supplied to the filament. This is usually
accomplished by a variable-voltage transformer, which energizes the primary of the
filament transformer. Other conditions remaining the same, the x-ray output is
proportional to the tube current.
Most of the energy applied to the tube is transformed into heat at the focal spot, only a
small portion being transformed into x-rays. The high concentration of heat in a small
area imposes a severe burden on the materials and design of the anode. The high
melting point of tungsten makes it a very suitable material for the target of an x-ray
tube. In addition, the efficiency of the target material in the production of x-rays is
proportional to its atomic number.¹ Since tungsten has a high atomic number, it has a
double advantage. The targets of practically all industrial x-ray machines are made of
tungsten.
COOLING
Circulation of oil in the interior of the anode is an effective method of carrying away the
heat. Where this method is not employed, the use of copper for the main body of the
anode provides high heat conductivity, and radiating fins on the end of the anode
outside the tube transfer the heat to the surrounding medium. The focal spot should be
as small as conditions permit, in order to secure the sharpest possible definition in the
radiographic image. However, the smaller the focal spot, the less energy it will
withstand without damage. Manufacturers of x-ray tubes furnish data in the form of
charts indicating the kilovoltages and milliamperages that may be safely applied at
various exposure times. The life of any tube will be shortened considerably if it is not
always operated within the rated capacity.
FOCAL-SPOT SIZE
The principle of the line focus is used to provide a focal spot of small effective size,
though the actual focal area on the anode face may be fairly large, as illustrated in the
figure below. By making the angle between the anode face and the central ray small,
usually 20 degrees, the effective area of the spot is only a fraction of its actual area.
With the focal area in the form of a long rectangle, the projected area in the direction of
the central ray is square.
Diagram of a line-focus tube depicting the relation between actual focal-spot area (area of
bombardment) and effective focal spot, as projected from a 20° anode.
EFFECTS OF KILOVOLTAGE
As will be seen later, different voltages are applied to the x-ray tube to meet the
demands of various classes of radiographic work. The higher the voltage, the greater
the speed of the electrons striking the focal spot. The result is a decrease in the
wavelength of the x-rays emitted and an increase in their penetrating power and
intensity. It is to be noted that x-rays produced, for example, at 200 kilovolts contain all
the wavelengths that would be produced at 100 kilovolts, and with greater intensity.
In addition, the 200-kilovolt x-rays include some shorter wavelengths that do not exist in
the 100-kilovolt spectrum at all. The higher voltage x-rays are used for the penetration
of thicker and heavier materials.
If, for example, the primary has 100 turns, and the secondary has 100,000, the voltage
in the secondary is 1,000 times as high as that in the primary. At the same time, the
current in the coils is decreased in the same proportion as the voltage is increased. In
the example given, therefore, the current in the secondary is only 1/1,000 that in the
primary. A step-up transformer is used to supply the high voltage to the x-ray tube.
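The turns-ratio arithmetic above can be sketched in code (a minimal illustration of an ideal transformer; the 220 V primary and 5 A current are assumed values, not from the text):

```python
# Ideal transformer: voltage scales with the turns ratio, current inversely.
def secondary_voltage(v_primary, n_primary, n_secondary):
    """Secondary voltage for an ideal transformer."""
    return v_primary * n_secondary / n_primary

def secondary_current(i_primary, n_primary, n_secondary):
    """Secondary current for an ideal transformer."""
    return i_primary * n_primary / n_secondary

# The 100-turn primary / 100,000-turn secondary example from the text:
print(secondary_voltage(220.0, 100, 100_000))  # 220000.0 -- 1,000x step-up
print(secondary_current(5.0, 100, 100_000))    # 0.005 -- current down 1,000x
```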
The tube voltage is controlled by adjustment of the primary voltage applied to the
step-up transformer and, hence, of the high voltage applied to the x-ray tube.
Many different high-voltage waveforms are possible, depending on the design of the
x-ray machine and its installation. The figure below shows idealized waveforms that are
difficult to achieve in practical high-voltage equipment. Departures from these ideal
forms vary in different x-ray installations. Since x-ray output depends on the entire
waveform, this accounts for the variation in radiographic results obtainable from two
different x-ray machines operating at the same value of peak kilovoltage.
Tubes with the anodes at the end of a long extension cylinder are known as “rod-anode”
tubes. The anodes of these tubes can be thrust through small openings (see
the figure below, top) to facilitate certain types of inspection. If the target is
perpendicular to the electron stream in the tube, the x-radiation through 360 degrees
can be utilized (see the figure below, bottom), and an entire circumferential weld can be
radiographed in a single exposure.
With tubes of this type, one special precaution is necessary. The long path of the
electron stream down the anode cylinder makes the focusing of the electrons on the
target very susceptible to magnetic influences. If the object being inspected is
magnetized—for example, if it has undergone a magnetic inspection and has not been
properly demagnetized—a large part of the electron stream can be wasted on other
than the focal-spot area, and the resulting exposures will be erratic.
The foregoing describes the operation of the most commonly used types of x-ray
equipment. However, certain high-voltage generators operate on principles different
from those discussed.
Top: Rod-anode tube used in the examination of a plug weld. Bottom: Rod-anode tube with
a 360° beam used to examine a circumferential weld in a single exposure.
In the linear accelerator, the electrons are accelerated to high velocities by means of a
high-frequency electrical wave that travels along the tube through which the electrons
travel. Both the betatron and the linear accelerator are used for the generation of
x-radiation in the multimillion-volt range.
X-ray machines may be either fixed or mobile, depending on the specific uses for which
they are intended. When the material to be radiographed is portable, the x-ray machine
is usually permanently located in a room protected against the escape of x-radiation.
The x-ray tube itself is frequently mounted on a stand allowing considerable freedom of
movement. For the examination of objects that are fixed or that are movable only with
great difficulty, mobile x-ray machines may be used.
These may be truck-mounted for movement to various parts of a plant, or they may be
small and light enough to be carried onto scaffolding, through manholes, or even
self-propelled to pass through pipelines. Semiautomatic machines have been designed
for the radiography of large numbers of relatively small parts on a “production line”
basis. During the course of an exposure, the operator may arrange the parts to be
radiographed at the next exposure, and remove those just radiographed, with an
obvious saving in time.
GAMMA-RAY SOURCES
Radiography with gamma rays has the advantages of simplicity of the apparatus used,
compactness of the radiation source, and independence from outside power. This
facilitates the examination of pipe, pressure vessels, and other assemblies in which
access to the interior is difficult; field radiography of structures remote from power
supplies; and radiography in confined spaces, as on shipboard.
Note that gamma rays are most often specified in terms of the energy of the individual
photon, rather than the wavelength. The unit of energy used is the electron volt
(eV)—an amount of energy equal to the kinetic energy an electron attains in falling
through a potential difference of 1 volt. For gamma rays, multiples—kiloelectron volts
(keV; 1 keV = 1,000 eV) or million electron volts (MeV; 1 MeV = 1,000,000 eV)—are
commonly used.
A gamma ray with an energy of 0.5 MeV (500 keV) is equivalent in wavelength and in
penetrating power to the most penetrating radiation emitted by an x-ray tube operating
at 500 kV. The bulk of the radiation emitted by such an x-ray tube will be much less
penetrating (much softer) than this. Thus the radiations from cobalt 60, for example,
with energies of 1.17 and 1.33 MeV, will have a penetrating power (hardness) about
equal to that of the radiation from a 2-million-volt x-ray machine.
For comparison, a gamma ray having an energy of 1.2 MeV has a wavelength of about
0.01 angstrom (Å); a 120 keV gamma ray has a wavelength of about 0.1 angstrom.
The wavelengths (or energies of radiation) emitted by a gamma-ray source, and their
relative intensities, depend only on the nature of the emitter. Thus, the radiation quality
of a gamma ray source is not variable at the will of the operator.
The gamma rays from cobalt 60 have relatively great penetrating power and can be
used, under some conditions, to radiograph sections of steel 9 inches thick, or the
equivalent. Radiations from other radioactive materials have lower energies; for
example, iridium 192 emits radiations roughly equivalent to the x-rays emitted by a
conventional x-ray tube operating at about 600 kV.
The intensity of gamma radiation depends on the strength of the particular source used—
specifically, on the number of radioactive atoms in the source that disintegrate in one
second. This, in turn, is usually given in terms of curies (1 Ci = 3.7 × 10¹⁰ s⁻¹). For small or
moderate-sized sources emitting penetrating gamma rays, the intensity of radiation
emitted from the source is proportional to the source activity in curies.
The proportionality between the external gamma-ray intensity and the number of curies
fails, however, for large sources or for those emitting relatively low-energy gamma rays.
In these latter cases, gamma radiation given off by atoms in the middle of the source
will be appreciably absorbed (self-absorption) by the overlying radioactive material itself.
Thus, the intensity of the useful radiation will be reduced to some value below that
which would be calculated from the number of curies and the radiation output of a
physically small gamma-ray source.
A term often used in speaking of radioactive sources is specific activity, a measure of the
degree of concentration of a radioactive source. Specific activity is usually expressed in
terms of curies per gram. Of two gamma-ray sources of the same material and activity,
the one having the greater specific activity will be the smaller in actual physical size.
Thus, the source of higher specific activity will suffer less from self-absorption of its own
gamma radiation. In addition, it will give less geometrical unsharpness in the radiograph
or, alternatively, will allow shorter source-film distances and shorter exposures.
Gamma-ray sources gradually lose activity with time, the rate of decrease of activity
depending on the kind of radioactive material (see the table below). For instance, the
intensity of the radiation from a cobalt 60 source decreases to half its original value in
about 5 years; and that of an iridium 192 source, in about 70 days. Except in the case
of radium, now little used in industrial radiography, this decrease in emission
necessitates more or less frequent revision of exposures and replacement of sources.
¹ The roentgen (R) is a special unit for x- and gamma-ray exposure (ionization of air):
1 roentgen = 2.58 × 10⁻⁴ coulombs per kilogram (C kg⁻¹).
Radioactive Materials Used in Industrial Radiography
The exposure calculations necessitated by the gradual decrease in the radiation output
of a gamma-ray source can be facilitated by the use of decay curves similar to those for
iridium 192 shown in the figure below. The curves contain the same information, the
only difference being that the curve on the left shows activity on a linear scale, and the
curve on the right, on a logarithmic scale. The type shown on the right is easier to draw.
Locate point X, at the intersection of the half-life of the isotope (horizontal scale) and the
“50 percent remaining activity” line (vertical scale). Then draw a straight line from the
“zero time, 100 percent activity” point Y through point X.
Decay curves for iridium 192. Left: Linear plot. Right: Logarithmic plot.
These figures are intended only as a rough guide; any particular case will depend on the
source size used and the requirements of the operation.
¹ The atomic number of an element is the number of protons in the nucleus of the atom, and is
equal to the number of electrons outside the nucleus. In the periodic table the elements are
arranged in order of increasing atomic number. Hydrogen has an atomic number of 1; iron, of 26;
copper, of 29; tungsten, of 74; and lead of 82.
GEOMETRIC PRINCIPLES
A radiograph is a shadow picture of an object that has been placed in the path of an x-ray
or gamma-ray beam, between the tube anode and the film or between the source of gamma
radiation and the film. It naturally follows, therefore, that the appearance of an image thus
recorded is materially influenced by the relative positions of the object and the film and by
the direction of the beam. For these reasons, familiarity with the elementary principles of
shadow formation is important to those making and interpreting radiographs.
GENERAL PRINCIPLES
Since x-rays and gamma rays obey the common laws of light, their shadow formation may
be explained in a simple manner in terms of light. It should be borne in mind that the
analogy between light and these radiations is not perfect since all objects are, to a greater
or lesser degree, transparent to x-rays and gamma rays and since scattering presents
greater problems in radiography than in optics. However, the same geometric laws of
shadow formation hold for both light and penetrating radiation.
Suppose, as in Figure A below, that there is light from a point L falling on a white card C,
and that an opaque object O is interposed between the light source and the card. A shadow
of the object will be formed on the surface of the card. This shadow cast by the object will
naturally show some enlargement because the object is not in contact with the card; the
degree of enlargement will vary according to the relative distances of the object from the
card and from the light source. The law governing the size of the shadow may be stated:
The diameter of the object is to the diameter of the shadow as the distance of the light from
the object is to the distance of the light from the card.
This is expressed:

So / Si = Do / Di

where So is the size of the object; Si is the size of the shadow (or the radiographic image);
Do is the distance from the source of radiation to the object; and Di is the distance from the
source of radiation to the recording surface (or radiographic film).
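The enlargement rule can be sketched numerically (a minimal illustration; the object size and distances are assumed values):

```python
# Shadow size from similar triangles: So/Si = Do/Di, so Si = So * Di / Do.
def image_size(object_size, source_object_dist, source_film_dist):
    """Size of the projected shadow image for a point source."""
    return object_size * source_film_dist / source_object_dist

# A 2 in. object 30 in. from the source, with the film 36 in. from the source:
print(image_size(2.0, 30.0, 36.0))  # 2.4 -- the image is enlarged 20 percent
```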
The degree of sharpness of any shadow depends on the size of the source of light and on the
position of the object between the light and the card—whether nearer to or farther from
one or the other. When the source of light is not a point but a small area, the shadows cast
are not perfectly sharp (in Figures B to D) because each point in the source of light casts its
own shadow of the object, and each of these overlapping shadows is slightly displaced from
the others, producing an ill-defined image.
The form of the shadow may also differ according to the angle that the object makes with
the incident light rays. Deviations from the true shape of the object as exhibited in its
shadow image are referred to as distortion. Figures A to F show the effect of changing the
size of the source and of changing the relative positions of source, object, and card. From
an examination of these drawings, it will be seen that the following conditions must be
fulfilled to produce the sharpest, truest shadow of the object:
1. The source of light should be small, that is, as nearly a point as can be obtained.
Compare Figures A and C.
2. The source of light should be as far from the object as practical. Compare Figures
B and C.
3. The recording surface should be as close to the object as possible. Compare Figures
B and D.
4. The light rays should be directed perpendicularly to the recording surface. See Figures
A and E.
5. The plane of the object and the plane of the recording surface should be parallel.
Compare Figures A and F.
Illustrating the general geometric principles of shadow formation as explained in these
sections.
RADIOGRAPHIC SHADOWS
The basic principles of shadow formation must be given primary consideration in order
to assure satisfactory sharpness in the radiographic image and essential freedom from
distortion. A certain degree of distortion naturally will exist in every radiograph because
some parts will always be farther from the film than others, the greatest magnification
being evident in the images of those parts at the greatest distance from the recording
surface (see the figure above).
Note, also, that there is no distortion of shape in Figure E above—a circular object having
been rendered as a circular shadow. However, under circumstances similar to those
shown, it is possible that spatial relations can be distorted. In the figure below the two
circular objects can be rendered either as two circles (A) or as a figure-eight-shaped
shadow (B). It should be observed that both lobes of the figure eight have circular
outlines.
Two circular objects can be rendered as two separate circles (A) or as two overlapping
circles (B), depending on the direction of the radiation.
APPLICATION TO RADIOGRAPHY
The application of the geometric principles of shadow formation to radiography leads to
five general rules. Although these rules are stated in terms of radiography with x-rays, they
also apply to gamma-ray radiography.
1. The focal spot should be as small as other considerations will allow, for there is a
definite relation between the size of the focal spot of the x-ray tube and the
definition in the radiograph. A large-focus tube, although capable of withstanding
large loads, does not permit the delineation of as much detail as a small-focus
tube. Long source-film distances will aid in showing detail when a large-focus
tube is employed, but it is advantageous to use the smallest focal spot
permissible for the exposures required.

B and H in the figure below show the effect of focal-spot size on image quality.
As the focal-spot size is increased from 1.5 mm (B) to 4.0 mm (H), the definition
of the radiograph starts to degrade. This is especially evident at the edges of the
chambers, which are no longer sharp.
2. The distance between the anode and the material examined should always be
as great as is practical. Comparatively long source-film distances should be used
in the radiography of thick materials to minimize the fact that structures farthest
from the film are less sharply recorded than those nearer to it. At long distances,
radiographic definition is improved and the image is more nearly the actual size
of the object.

5. As far as the shape of the specimen will allow, the plane of maximum interest
should be parallel to the plane of the film.
These graphics illustrate the effects on image quality when the geometric exposure factors
are changed.
CALCULATION OF GEOMETRIC UNSHARPNESS
The width of the “fuzzy” boundary of the shadows in B, C, and D in the above figure is
known as the geometric unsharpness (Ug). Since the geometric unsharpness can
strongly affect the appearance of the radiographic image, it is frequently necessary to
determine its magnitude.
From the laws of similar triangles, it can be seen (in the figure below) that:

Ug = F t / Do

where Ug is the geometric unsharpness, F is the size of the radiation source, Do is the
source-object distance, and t is the object-film distance. Since the maximum
unsharpness involved in any radiographic procedure is usually the significant quantity,
the object-film distance (t) is usually taken as the distance from the source side of the
specimen to the film.
Geometric construction for determining geometric unsharpness (Ug).
Do and t must be measured in the same units; inches are customary, but any other unit
of length—say, centimeters—would also be satisfactory. So long as Do and t are in the
same units, the formula above will always give the geometric unsharpness Ug in
whatever units were used to measure the dimensions of the source. The projected size
of the focal spots of x-ray tubes is usually stated in millimeters, and Ug will also be in
millimeters. If the source size is stated in inches, Ug will be in inches.
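The formula can be applied directly, as in this sketch (F in millimeters, Do and t in inches, per the unit note above; the specific dimensions are assumed values):

```python
# Geometric unsharpness: Ug = F * t / Do.
# Ug comes out in the units of F; Do and t must share the same units.
def geometric_unsharpness(F, t, Do):
    """F: source size; t: object-film distance; Do: source-object distance."""
    return F * t / Do

# A 4.0 mm focal spot, 2 in. object-film distance, 40 in. source-object distance:
print(geometric_unsharpness(4.0, 2.0, 40.0))  # 0.2 (mm)
```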
For rapid reference, graphs of the type shown in the figure below can be prepared by
the use of the equation above. These graphs relate source-film distance, object-film
distance and geometric unsharpness. Note that the lines of the figure are all straight.
Therefore, for each source-object distance, it is only necessary to calculate the value of
Ug for a single specimen thickness, and then draw a straight line through the point so
determined and the origin. It should be emphasized, however, that a separate graph of
the type shown in the figure below must be prepared for each size of source.
PINHOLE PROJECTION OF FOCAL SPOT
Since the dimensions of the radiation source have considerable effect on the sharpness
of the shadows, it is frequently desirable to determine the shape and size of the x-ray
tube focal spot. This may be accomplished by the method of pinhole radiography, which
is identical in principle with that of the pinhole camera. A thin lead plate containing a
small hole is placed exactly midway between the focal spot and the film, and lead
shielding is so arranged that no x-rays except those passing through the pinhole reach
the film (See the figure below).
The developed film will show an image that, for most practical radiographic purposes,
may be taken as equal in size and shape to the focal spot (See the second figure
below). If precise measurements are required, the measured dimensions of the
focal-spot image should be decreased by twice the diameter of the pinhole.
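With the pinhole placed midway between focal spot and film, the projection is life-size, so the correction above is a simple subtraction (a sketch with assumed dimensions):

```python
# Pinhole midway between focal spot and film: the image is life-size.
# For precise work, subtract twice the pinhole diameter from each
# measured dimension of the focal-spot image.
def focal_spot_dimension(measured_mm, pinhole_diameter_mm):
    """True focal-spot dimension from a pinhole-radiograph measurement."""
    return measured_mm - 2.0 * pinhole_diameter_mm

print(focal_spot_dimension(4.5, 0.25))  # 4.0 (mm)
```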
The method is applicable to x-ray tubes operating up to about 250 kV. Above this
kilovoltage, however, the thickness of the lead needed makes the method impractical.
(The entire focal spot cannot be “seen” from the film side of a small hole in a thick
plate.) Thus the technique cannot be used for high-energy x-rays or the commonly used
gamma-ray sources, and much more complicated methods, suitable only for the
laboratory, must be employed.
A density in the image area of 1.0 to 2.0 is satisfactory. If the focal-spot area is
overexposed, the estimate of focal-spot size will be exaggerated, as can be seen by
comparing the two images in the figure below.
Pinhole pictures of the focal spot of an x-ray tube. A shorter exposure (left) shows only
the focal spot. A longer exposure (right) shows, as well as the focal spot, some details
of the tungsten button and copper anode stem. The x-ray images of these parts result
from their bombardment with stray electrons.
GEOMETRIC UNSHARPNESS LIMITATIONS
The limitations below are as per the ASME Boiler and Pressure Vessel Code, Section V
(T-285):

Material Thickness    Ug Maximum
RADIOGRAPHY TECHNIQUES
It is usually best to direct the radiation at right angles to the surface, along a path that
represents a minimum thickness to the radiation. This not only minimizes the exposure
time but also assures that the internal structure will present the greatest subject contrast
to the radiation. When the presence of a planar discontinuity is suspected, radiation
must be essentially parallel to the expected occurrence of the discontinuity, regardless
of the test piece thickness in that direction.
Flat plates are the simplest of the shapes that can be inspected by radiography. The
most favorable direction for viewing a flat plate is the one in which the radiation
impinges perpendicular to the plate surface and penetrates the shortest dimension.
When large areas are to be inspected, they should be serially radiographed, each
exposure overlapping another (usually 1 inch). Use of a relatively short source-film
distance (SFD) and multiple overlapping exposures is more satisfactory than taking a
single radiograph.
Curved plates are most satisfactorily inspected using views similar to those for flat
shapes. For best resolution, the recording plane should be shaped to conform to the
back surface of the curved plate. If the curved plate has its convex side toward the
radiation, it is usually advantageous to minimize distortion by making multiple
exposures with reduced individual areas of coverage. If the curved plate has its
concave side toward the radiation, a distortion-free image can be obtained by placing
the source at the center of curvature.
Sometimes section-equalizing techniques are helpful. In this technique the outer edges
of the cylinder are built up to present greater absorption to the x-rays. Close-fitting
solid cradles, liquid absorbers, or many layers of shim stock, all having radiographic
absorption characteristics equivalent to those of the cylinder, are alternative means of
equalizing radiographic density.
There are three major inspection techniques for tubular sections. One of these, the
superimposed (double-wall double-image) technique, is mainly applicable to sections of
no more than 3½ inches OD. It produces a radiograph in which the images of both walls
are superimposed on one another. The beam of radiation is directed toward one side of
the section and the recording medium is placed on the other side, usually tangent to the
section. Two exposures 90° apart are required to provide complete coverage when the
ratio of the outside diameter to the inside diameter is 1.4 or less. The area at the edges
of the pipe exhibits too much subject contrast for meaningful interpretation, and hence
more than one exposure is required.
When the ratio of the outside diameter to the inside diameter is greater than 1.4 (i.e.,
when radiographing a thick-walled specimen), the number of exposures required to
provide complete coverage can be determined by multiplying the ratio by 1.7 and
rounding off to the next highest integer. For instance, to examine a 2" OD specimen
with a 1" diameter axial hole, a total of 1.7 × (2"/1") = 3.4, or 4 shots must be taken.
The circumferential displacement between shots is found by dividing 180° by the
number of shots. When an odd number of exposures is required for complete coverage,
the angular spacing between shots can be determined by dividing 360° by the number
of shots.
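The counting rule above can be sketched as follows (a minimal sketch; the even/odd spacing rules follow the text, and the two-shot case for thin walls is taken from the preceding paragraph):

```python
import math

# Exposures for complete coverage of a tubular section:
# for OD/ID > 1.4, multiply the ratio by 1.7 and round up to the next integer.
def exposures_required(outer_d, inner_d):
    """Number of shots for complete circumferential coverage."""
    ratio = outer_d / inner_d
    if ratio <= 1.4:
        return 2                     # two shots 90 degrees apart suffice
    return math.ceil(1.7 * ratio)

def angular_spacing_deg(n_shots):
    """Displacement between shots: 180/n for an even count, 360/n for odd."""
    return 180.0 / n_shots if n_shots % 2 == 0 else 360.0 / n_shots

# The 2" OD, 1" axial-hole example: 1.7 * 2 = 3.4, rounded up to 4 shots.
n = exposures_required(2.0, 1.0)
print(n, angular_spacing_deg(n))  # 4 45.0
```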
The correct number of exposures and the circumferential location of the corresponding
views can be determined in the same manner as for the superimposed technique. In all
the variations of the DWDI, the image of the section of the cylinder that is closest to the
radiation source will exhibit the greatest amount of unsharpness. Hence the
penetrameters should be located where they will evaluate the image of the section that
is closest to the source.
The area of coverage is limited by the geometric unsharpness and distortion at the
edges of the resolved image for hollow cylinders that are less than 15" in outside
diameter. For large cylinders, the film size is usually a limiting factor.
TECHNIQUE DEVELOPMENT
Sensitivity
The choice of a radiographic technique is usually based on sensitivity. A weld zone will
always be different in structure and density from the parent material, and ordinarily
there is no need for these differences to be highlighted. Sensitivity is expressed in terms
of percentages, with 2% applying to fairly critical projects and 4% to less critical ones.
Exposure setup for Welds
The above-mentioned techniques will apply for a simple butt joint in a part that is flat or
tubular. The following chapters will discuss the techniques used for weld joints other
than butt welds.
Lap Joints
The butt joint with a groove is the simplest welding arrangement and the one yielding
the easiest interpretation because of the general uniformity of arrangement of the
assembly. A less uniform assembly is the lap joint secured by two fillet welds. It is
possible to make a fillet weld joint that appears full-sized from the outside, but is
actually hollow. A radiographic requirement is imposed to ensure that the bond has
been made over the complete edge of the applied pipe. The simplest technique is to
shoot the radiograph through the weld area. Because the weld is triangular, there is no
means of directing the beam so that a uniform thickness is examined. Also, the film
plane cannot be placed close to the weld zone.
The interpretation of the film from this seemingly simple joint is quite complex because
there is a large film-density gradient over a small dimension. In this case, one
could only expect to confirm the presence of relatively large discontinuities. This is
acceptable because of the fact that the use of a simple lap joint indicates that the joint is
not expected to meet high strength requirements.
T-joints
A further increase in complexity occurs with the T-joint, of which there are two types:
one with full penetration, making a completely welded assembly and the simpler case
with fillet welds at the corners. When there are only two fillet welds involved, the
radiographic assessment becomes very similar to the lap joint: a varying thickness of
material is presented to the radiation beam and the film plane is separated from the
weld metal by the lower plate.
A refined T-joint involves a groove weld, or welds rather than a simple fillet weld. The
vertical member is prepared from one or both sides. The complete weld is made
through the thickness of the web. When such a weld is radiographed, a distorted image
will appear and the effective sensitivity will be reduced because of the base material
thickness. The weld in a prepared T-joint can be considered fully load-bearing and may
therefore have performance requirements that will justify a sensitive radiographic
technique.
Corner Joints
As with the simple lap joint and the simple T-joint, the corner joint may be assembled
with a minimum of welding using a fillet weld in the corner. This weld is not used when
full loading is required and thus does not require refined radiography. If radiography is used,
the joint has special advantage: there could be a preferred direction for the beam that
would not involve the welded portion of the joint.
Fillet Welds
The fillet weld is commonly used with the lap joint, the corner joint, and the T-joint; the
fillet weld may also be used in conjunction with a groove weld in a corner or T-joint. For
a 90° T-joint with a symmetrical fillet weld, the shortest path through the weld is the one
bisecting the 90° angle, giving a radiation beam angle of 45°. If the film holder is
positioned in contact with the flange of the T-joint, then the radiation must pass through
a relatively large thickness of metal. Another approach to radiography of fillet welds is
to position the film on the weld side of the joint. In this case the thickness of the film
holder will have some considerable influence on the resolution, but this limitation only
applies when the film holder is flat. Flexible holders, if used, can be brought closer to
the weld bead.
Other arrangements may be used to compensate for the uneven geometry of the fillet
weld. One example is to introduce metal wedges prepared with shapes complementary
to the fillet shape and fitted between the radiation source and the film. The wedges may
be in contact with the weld surface or on the opposite side, where they should be in
contact with the film holder. This setup can be used with any of the three joint types
using fillet weld.
PENETRAMETERS
A standard test piece is usually included in every radiograph as a check on the
adequacy of the radiographic technique. The test piece is commonly referred to as a
penetrameter in North America and an Image Quality Indicator (IQI) in Europe. The
penetrameter (or IQI) is made of the same material, or a similar material, as the
specimen being radiographed, and is of a simple geometric form. It contains some small
structures (holes, wires, etc), the dimensions of which bear some numerical relation to
the thickness of the part being tested. The image of the penetrameter on the radiograph
is permanent evidence that the radiographic examination was conducted under proper
conditions.
Codes or agreements between customer and vendor may specify the type of
penetrameter, its dimensions, and how it is to be employed. Even if penetrameters are
not specified, their use is advisable, because they provide an effective check of the
overall quality of the radiographic inspection.
Hole Type Penetrameters
The common penetrameter consists of a small rectangular piece of metal, containing
several (usually three) holes, the diameters of which are related to the thickness of the
penetrameter (see the figure below).
The ASTM (American Society for Testing and Materials) penetrameter contains three
holes of diameters T, 2T, and 4T, where T is the thickness of the penetrameter.
Because of the practical difficulties in drilling minute holes in thin materials, the
minimum diameters of these three holes are 0.010, 0.020, and 0.040 inches,
respectively. These penetrameters may also have a slit similar to the ASME
penetrameter described below.
Thick penetrameters of the hole type would be very large, because of the diameter of the
4T hole. Therefore, penetrameters more than 0.180 inch thick are in the form of discs,
the diameters of which are 4 times the thickness (4T) and which contain two holes of
diameters T and 2T. A lead number showing the thickness in thousandths of an inch
identifies each penetrameter.
The quality level 2-2T is probably the one most commonly specified for routine
radiography. However, critical components may require more rigid standards, and a
level of 1-2T or 1-1T may be required. On the other hand, the radiography of less critical
specimens may be satisfactory if a quality level of 2-4T or 4-4T is achieved. The more
critical the radiographic examination—that is, the higher the level of radiographic
sensitivity required—the lower the numerical designation for the quality level.
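The arithmetic behind a quality level can be sketched in a few lines of code. In the common reading of an "x-yT" designation, the penetrameter thickness is x% of the specimen thickness and the essential hole is y times that thickness, subject to the minimum drill sizes quoted above. The function name and the exact minimum-size handling below are illustrative assumptions, not text from any standard:

```python
def penetrameter_sizes(part_thickness, level="2-2T"):
    """Approximate penetrameter thickness and essential hole diameter
    (inches) for an x-yT quality level: thickness = x% of the part,
    hole = y * T, never smaller than the minimum drillable size."""
    x, y = level.split("-")
    x = float(x)
    y = float(y.rstrip("T"))
    t = x / 100.0 * part_thickness   # penetrameter thickness
    hole = max(y * t, y * 0.010)     # T/2T/4T minimums: 0.010/0.020/0.040 in
    return round(t, 3), round(hole, 3)

print(penetrameter_sizes(1.0, "2-2T"))   # (0.02, 0.04)
print(penetrameter_sizes(0.25, "2-2T"))  # hole limited by the minimum drill size
```

For a 1-inch part at level 2-2T this gives a 0.020-inch penetrameter with a 0.040-inch essential hole, consistent with the minimum hole diameters given above for thin materials.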
American Society for Testing and Materials (ASTM) penetrameter (ASTM E 142).
Some sections of the ASME (American Society of Mechanical Engineers) Boiler and
Pressure Vessel Code require a penetrameter similar in general to the ASTM
penetrameter. It contains three holes, one of which is 2T in diameter, where T is the
penetrameter thickness. Customarily, the other two holes are 3T and 4T in diameter,
but other sizes may be used. The minimum hole size is 1/16 inch.
Penetrameters 0.010 inch thick, and less, also contain a slit 0.010 inch wide and 1/4
inch long. A lead number designating the thickness in thousandths of an inch identifies
each.
In some cases, the materials involved do not appear in published tabulations. Under
these circumstances the comparative radiographic absorption of two materials may be
determined experimentally. A block of the material under test and a block of the material
proposed for penetrameters, equal in thickness to the part being examined, can be
radiographed side by side on the same film with the technique to be used in practice. If
the density under the proposed penetrameter materials is equal to or greater than the
density under the specimen material, that proposed material is suitable for fabrication of
penetrameters.
In practically all cases, the penetrameter is placed on the source side of the specimen—
that is, in the least advantageous geometric position. In some instances, however, this
location for the penetrameter is not feasible. An example would be the radiography of a
circumferential weld in a long tubular structure, using a source positioned within the
tube and film on the outer surface. In such a case a “film-side” penetrameter must be
used. Some codes specify the film-side penetrameter that is equivalent to the source-
side penetrameter normally required. When such a specification is not made, the
required film-side penetrameter may be found experimentally.
In the example above, a short section of tube of the same dimensions and materials as
the item under test would be used to demonstrate the technique. The required
penetrameter would be used on the source side, and a range of penetrameters on the
film side. If the penetrameter on the source side indicated that the required radiographic
sensitivity was being achieved, the image of the smallest visible penetrameter hole in
the film-side penetrameters would be used to determine the penetrameter and the hole
size to be used on the production radiograph.
Sometimes the shape of the part being examined precludes placing the penetrameter
on the part. When this occurs, the penetrameter may be placed on a block of
radiographically similar material of the same thickness as the specimen. The block and
the penetrameter should be placed as close as possible to the specimen.
Wire Penetrameters
A number of other penetrameter designs are also in use. The German DIN (Deutsche
Industrie-Norm) penetrameter (See the figure below) is one that is widely used. It
consists of a number of wires, of various diameters, sealed in a plastic envelope that
carries the necessary identification symbols. The thinnest wire visible on the radiograph
indicates the image quality. The system is such that only three penetrameters, each
containing seven wires, can cover a very wide range of specimen thicknesses. Sets of DIN
penetrameters are available in aluminum, copper, and steel. Thus a total of nine
penetrameters is sufficient for the radiography of a wide range of materials and
thicknesses.
On the other hand, the hole penetrameter can be made of any desired material but the
wire penetrameter is made from only a few materials.
Therefore, using the hole penetrameter, a quality level of 2-2T may be specified for the
radiography of, for example, commercially pure aluminum and 2024 aluminum alloy,
even though these have appreciably different compositions and radiation absorptions.
The penetrameter would, in each case, be made of the appropriate material. The wire
penetrameters, however, are available in aluminum but not in 2024 alloy. To achieve
the same quality of radiographic inspection of equal thicknesses of these two materials,
it would be necessary to specify different wire diameters—that for 2024 alloy would
probably have to be determined by experiment.
Special Penetrameters
Special penetrameters have been designed for certain classes of radiographic
inspection. An example is the radiography of small electronic components wherein some
of the significant factors are the continuity of fine wires or the presence of tiny balls of
solder. Special image quality indicators have been designed consisting of fine wires and
small metallic spheres within a plastic block, the whole covered on top and bottom
with steel approximately as thick as the case of the electronic component.
Penetrameters and Visibility of Discontinuities
It should be remembered that even if a certain hole in a penetrameter is visible on the
radiograph, a cavity of the same diameter and thickness may not be visible. The
penetrameter holes, having sharp boundaries, result in an abrupt, though small, change
in metal thickness whereas a natural cavity having more or less rounded sides causes a
gradual change.
Therefore, the image of the penetrameter hole is sharper and more easily seen in the
radiograph than is the image of the cavity. Similarly, a fine crack may be of considerable
extent, but if the x-rays or gamma rays pass from source to film along the thickness of
the crack, its image on the film may not be visible because of the very gradual transition
in photographic density. Thus, a penetrameter is used to indicate the quality of the
radiographic technique and not to measure the size of cavity that can be shown.
In the case of a wire image quality indicator of the DIN type, the visibility of a wire of a
certain diameter does not assure that a discontinuity of the same cross section will be
visible. The human eye perceives much more readily a long boundary than it does a
short one, even if the density difference and the sharpness of the image are the same.
Placement of Penetrameter
When, due to part or weld configuration or size, it is not practical to place the
penetrameters on the part or weld, the penetrameters may be placed on a separate
block. Separate blocks shall be made of the same or radiographically similar materials
and may be used to facilitate penetrameter positioning. There is no restriction on the
separate block thickness, provided the following penetrameter requirements are met:
(1) The penetrameters on the source side of the separate block shall be placed
no closer to the film than the source side of the part being radiographed.
(2) The separate block shall be placed as close as possible to the part being
radiographed.
(3) The separate block shall exceed the penetrameter dimensions such that the
outline of at least three sides of the penetrameter image shall be visible on
the radiograph.
(b) Film-side penetrameters: Where inaccessibility prevents hand placing the
penetrameters on the source side, the penetrameters shall be placed on the film
side in contact with the part being examined. A lead letter “F” shall be placed
adjacent to or on the penetrameters, but shall not mask the essential hole where
hole penetrameters are used.
Number of Penetrameters:
When one or more film holders are used for an exposure, at least one penetrameter
image shall appear on each radiograph, except as outlined in (b) below.
(a) Multiple penetrameters. If the requirements are met by using more than
one penetrameter, one shall be representative of the lightest area of interest
and the other of the darkest area of interest; the intervening densities on the
radiograph shall be considered as having acceptable density.
(1) For cylindrical components where the source is placed on the axis of the
component for a single exposure, at least three penetrameters, spaced
approximately 120 deg apart, are required under the following conditions:
(a) When the complete circumference is radiographed using one or
more film holders, or
(b) When a section or sections of the circumference, where the length
between the ends of the outermost sections spans 240 deg or more, is
radiographed using one or more film holders. Additional film
locations may be required to obtain the necessary penetrameter spacing.
(2) For cylindrical components where the source is placed on the axis of the
component for a single exposure, at least three penetrameters, with one placed at each
end of the span of the circumference radiographed and one in the approximate center
of the span, are required under the following conditions:
(5) For spherical components where the source is placed at the center of the
component for a single exposure, at least three penetrameters, spaced
approximately 120 deg apart, are required under the following conditions:
(a) When a complete circumference is radiographed using one or more film
holders, or
(b) When a section or sections of the circumference, where the length
between the ends of the outermost sections spans 240 deg or more, is
radiographed using one or more film holders. Additional film locations
may be required to obtain the necessary penetrameter spacing.
(6) For spherical components where the source is placed at the center of the
component for a single exposure, at least three penetrameters, with one
placed at each end of the radiographed span of the circumference and one
in the approximate center of the span, are required under the following
conditions:
(7) In (4) and (5) above, where other welds are radiographed simultaneously with the
circumferential weld, one additional penetrameter shall be placed on each
other weld.
(8) When an array of components in a circle is radiographed, at least one
penetrameter shall show on each component image.
The shim dimensions shall exceed the penetrameter dimensions such that the
outline of at least three sides of the penetrameter image shall be visible in the
radiograph.
ARITHMETIC OF EXPOSURE
With a given kilovoltage of x-radiation, or with the gamma radiation from a
particular isotope, the three factors governing the exposure are the
milliamperage (for x-rays) or source strength (for gamma rays), time, and
source-film distance. The numerical relations among these three quantities are
demonstrated below, using x-rays as an example. The same relations apply for
gamma rays, provided the number of curies in the source is substituted wherever
milliamperage appears in an equation.
The necessary calculations for any changes in focus-film distance (D), milliamperage
(M), or time (T) are matters of simple arithmetic and are illustrated in the following
example. As noted earlier, kilovoltage changes cannot be calculated directly but must
be obtained from the exposure chart of the equipment or the operator’s logbook. All of
the equations shown on these pages can be solved easily for any of the variables (mA,
T, D), using one basic rule of mathematics: If one factor is moved across the equals
sign (=), it moves from the numerator to the denominator or vice versa.
1. Eliminating any factor that remains constant (has the same value and is in the
same location on both sides of the equation).
Milliamperage-Distance Relation
Rule: The milliamperage (M) required for a given exposure is directly proportional to the
square of the focus-film distance (D). The equation is expressed as follows:
M1/M2 = D1²/D2²
Example: Suppose that with a given exposure time and kilovoltage, a properly exposed
radiograph is obtained with 5 mA (M1) at a distance of 12 inches (D1), and that it is
desired to increase the sharpness of detail in the image by increasing the focus-film
distance to 24 inches (D2). The correct milliamperage (M2) to obtain the desired
radiographic density at the increased distance (D2) may be computed from the
proportion:
M1/M2 = D1²/D2², so M2 = 5 × (24/12)² = 20 mA
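This inverse-square scaling is easy to verify numerically; a minimal sketch (the function name is illustrative):

```python
def new_milliamperage(m1, d1, d2):
    """mA needed at a new focus-film distance, other factors constant:
    M1/M2 = D1^2/D2^2, so M2 = M1 * (D2/D1)^2."""
    return m1 * (d2 / d1) ** 2

# 5 mA at 12 in, moving to 24 in:
print(new_milliamperage(5, 12, 24))  # 20.0
```

Doubling the distance quarters the intensity at the film, so four times the milliamperage is needed for the same density.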
When very low kilovoltages, say 20 kV or less, are used, the x-ray intensity decreases
with distance more rapidly than calculations based on the inverse square law would
indicate because of absorption of the x-rays by the air. Most industrial radiography,
however, is done with radiation so penetrating that the air absorption need not be
considered. These comments also apply to the time-distance relations discussed
below.
Time-Distance Relation
Rule: The exposure time (T) required for a given exposure is directly proportional to the
square of the focus-film distance (D). Thus:
T1/T2 = D1²/D2²
To solve for either a new time (T2) or a new distance (D2), simply follow the steps
shown in the example above.
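The same inverse-square arithmetic applies to time; a small sketch with illustrative numbers:

```python
def new_time(t1, d1, d2):
    """Exposure time at a new focus-film distance: T1/T2 = D1^2/D2^2."""
    return t1 * (d2 / d1) ** 2

# e.g., 60 seconds at 12 inches becomes 240 seconds at 24 inches
print(new_time(60, 12, 24))  # 240.0
```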
Milliamperage-Time Relation
Rule: The milliamperage (M) required for a given exposure is inversely proportional to
the time (T):
M1/M2 = T2/T1
Another way of expressing this is to say that for a given set of conditions (voltage,
distance, etc), the product of milliamperage and time is constant for the same
photographic effect.
This is commonly referred to as the reciprocity law. (Important exceptions are discussed
below.) To solve for either a new time (T2) or a new milliamperage (M2), simply follow
the steps shown in the example in “Milliamperage-Distance Relation”.
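Since the milliamperage-time product is constant for a given photographic effect, a new time or current follows directly; a minimal sketch:

```python
def reciprocity_time(m1, t1, m2):
    """Reciprocity law: M1 * T1 = M2 * T2, so T2 = M1 * T1 / M2."""
    return m1 * t1 / m2

# 5 mA for 8 min = 40 mA-min; the same exposure at 10 mA:
print(reciprocity_time(5, 8, 10))  # 4.0
```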
of uniform plates, but they serve only as rough guides for objects, such as complicated
castings, having wide variations of thickness.
Typical exposure chart for steel. This chart may be taken to apply to Film X (for
example), with lead foil screens, at a film density of 1.5. Source-film distance,
40 inches.
Exposure charts are usually available from manufacturers of x-ray equipment. Because,
in general, such charts cannot be used for different x-ray machines unless suitable
correction factors are applied, individual laboratories sometimes prepare their own.
PREPARING AN EXPOSURE CHART
A simple method for preparing an exposure chart is to make a series of radiographs of a
pile of plates consisting of a number of steps. This “step tablet” or stepped wedge, is
radiographed at several different exposure times at each of a number of kilovoltages.
The exposed films are all processed under conditions
identical to those that will later be used for routine work. Each radiograph
Another method, requiring fewer stepped wedge exposures but more arithmetical
manipulation, is to make one step-tablet exposure at each kilovoltage and to measure
the densities in the processed stepped-wedge radiographs. The exposure that would
have given the chosen density (in this case 1.5) under any particular thickness of the
stepped wedge can then be determined from the characteristic curve of the film used.
The values for thickness, kilovoltage, and exposure are plotted as described in the
figure above.
Any given exposure chart applies to a set of specific conditions. These fixed conditions
are:
Only if the conditions used in making the radiograph agree in all particulars with those
used in preparation of the exposure chart can values of exposure be read directly from
the chart. Any change requires the application of a correction factor. The correction
factor applying to each of the conditions listed previously will be discussed separately.
For example, to obtain a density of 1.5 using Film Y, 0.6 more exposure is
required than for Film X.
You can use these procedures to change densities on a single film as well.
Simply find the log E difference needed to obtain the new density on the film
curve; read the corresponding exposure factor from the chart; then multiply to
increase density or divide to decrease density.
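Because film-curve exposure differences are expressed in log E (log10 of relative exposure), converting a log E difference to a multiplicative exposure factor is a one-line calculation. The sketch below assumes the 0.6 figure quoted above for Film Y versus Film X is a log E difference:

```python
def exposure_factor(delta_log_e):
    """Multiplicative exposure factor for a log10 exposure difference."""
    return 10.0 ** delta_log_e

print(round(exposure_factor(0.6), 2))   # ~3.98, i.e., about 4x the exposure
print(round(exposure_factor(1.22), 1))  # ~16.6, for a 1.22 log E spread
```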
6. If the type of screens is changed, for example from lead foil to fluorescent, it is
easier and more accurate to make a new exposure chart than to attempt to
determine correction factors.
Sliding scales can be applied to exposure charts to allow for changes in one or more of
the conditions discussed, with the exception of the first and the last. The methods of
preparing and using such scales are described in detail later on.
In some radiographic operations, the exposure time and the source-film distance are set
by economic considerations or on the basis of previous experience and test radiographs.
The tube current is, of course, limited by the design of the tube. This leaves as variables
only the thickness of the specimen and the kilovoltage. When these conditions exist, the
exposure chart may take a simplified form as shown in the figure below, which allows the
kilovoltage for any particular specimen thickness to be chosen readily. Such a chart will
probably be particularly useful when uniform sections must be radiographed in large
numbers by relatively untrained persons. This type of exposure chart may be derived
from a chart similar to the figure above by following the horizontal line corresponding to
the chosen milliampere-minute value and noting the thickness corresponding to this
exposure for each kilovoltage. These thicknesses are then plotted against kilovoltage.
GAMMA-RAY EXPOSURE CHARTS
The figure below shows a typical gamma-ray exposure chart. It is somewhat similar to
the next to the last figure above.
However, with gamma rays, there is no variable factor corresponding to the kilovoltage.
Therefore, a gamma-ray exposure chart contains one line, or several parallel lines,
each of which corresponds to a particular film type, film density, or source-film distance.
Gamma-ray exposure guides are also available in the form of linear or circular slide
rules. These contain scales on which can be set the various factors of specimen
thickness, source strength and source-film distance, and from which exposure time can
be read directly.
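The slide-rule arithmetic amounts to scaling a chart exposure (expressed here in curie-hours at the chart's source-film distance) by the inverse-square law and dividing by the source strength. This is a sketch under those assumptions; the function and parameter names are illustrative, and actual chart values depend on film, density, and material:

```python
def gamma_exposure_hours(chart_curie_hours, source_curies,
                         d_chart=40.0, d_actual=40.0):
    """Exposure time in hours: chart value (Ci-h at distance d_chart),
    corrected by the inverse-square law, divided by source activity."""
    return chart_curie_hours * (d_actual / d_chart) ** 2 / source_curies

# e.g., a chart reading of 20 Ci-h at 40 in, with a 25 Ci source at 30 in:
print(gamma_exposure_hours(20, 25, d_actual=30.0))  # 0.45
```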
Sliding scales can also be applied to gamma-ray exposure charts of the type in the
figure below to simplify some exposure determinations. For the preparation and use of
such scales,
Typical gamma-ray exposure chart for iridium 192, based on the use of Film X (for
example).
the slower film curve. An earlier figure shows that when Films X and Y are used, the
difference is 1.22, which is the difference between 1.57 and 2.79. It is necessary that
the films be close enough together in speed so that their curves will have some
“overlap” on the log E axis.
LIMITATIONS OF EXPOSURE CHARTS
Although exposure charts are useful industrial radiographic tools, they must be used with
some caution. They will, in most cases, be adequate for routine practice, but they will not
always show the precise exposure required to radiograph a given thickness to a particular
density.
Several factors have a direct influence on the accuracy with which exposures can be
predicted. Exposure charts are ordinarily prepared by radiographing a stepped wedge.
Since the proportion of scattered radiation depends on the thickness of material and,
therefore, on the distribution of the material in a given specimen, there is no assurance
that the scattered radiation under different parts will correspond to the amount under the
same thickness of the wedge. In fact, it is unreasonable to expect exact
correspondence between scattering conditions under two objects the thicknesses of
which are the same but in which the distribution of material is quite different. The more
closely the distribution of metal in the wedge resembles that in the specimen the more
accurately the exposure chart will serve its purpose. For example, a narrow wedge
would approximate the scattering conditions for specimens containing narrow bars.
Although the lines of an exposure chart are normally straight, they should in most cases
be curved—concave downward. The straight lines are convenient approximations,
suitable for most practical work, but it should be recognized that in most cases they are
only approximations. The degree to which the conventionally drawn straight line
approximates the true curve will vary, depending on the radiographic conditions, the
quality of the exposing radiation, the material radiographed, and the amount of
scattered radiation reaching the film.
In addition, time, temperature, degree of activity, and agitation of the developer are all
variables that affect the shape of the characteristic curve and should therefore be
standardized. When, in hand processing, the temperature or the activity of the
developer does not correspond to the original conditions, proper compensation can be
made by changing the time. Automated processors should be carefully maintained and
cleaned to achieve the most consistent results. In any event, the greatest of care should
always be taken to follow the recommended processing procedures.
SCATTERED RADIATION
When a beam of x-rays or gamma rays strikes any object, some of the radiation is
absorbed, some is scattered, and some passes straight through. The electrons of the
atoms constituting the object scatter radiation in all directions, much as light is
dispersed by a fog. The wavelengths of much of the radiation are increased by the
scattering process, and hence the scatter is always somewhat “softer,” or less
penetrating, than the unscattered primary radiation. Any material—whether specimen,
cassette, tabletop, walls, or floor—that receives the direct radiation is a source of
scattered radiation. Unless suitable measures are taken to reduce the effects of scatter,
it will reduce contrast over the whole image or parts of it.
Scattering of radiation occurs, and is a problem, in radiography with both x-rays and
gamma rays. In the material that follows, the discussion is in terms of x-rays, but the
same general principles apply to gamma radiography.
In the radiography of thick materials, scattered radiation forms the greater percentage of
the total radiation. For example, in the radiography of a 3/4-inch thickness of steel, the
scattered radiation from the specimen is almost twice as intense as the primary radiation;
in the radiography of a 2-inch thickness of aluminum, the scattered radiation is two and
a half times as great as the primary radiation. As may be expected, preventing scatter
from reaching the film markedly improves the quality of the radiographic image.
As a rule, the greater portion of the scattered radiation affecting the film is from the
specimen under examination (A in the figure above). However, any portion of the film
holder or cassette that extends beyond the boundaries of the specimen and thereby
receives direct radiation from the x-ray tube also becomes a source of scattered
radiation, which can affect the film. The influence of this scatter is most noticeable just
inside the borders of the image (B in the figure above). In a similar manner, primary
radiation striking the film holder or cassette through a thin portion of the specimen will
cause scattering into the shadows of the adjacent thicker portions. Such scatter is called
undercut. Another source of scatter that may undercut a specimen is shown as C in the
figure above. If a filter is used near the tube, this too will scatter x-rays.
However, because of the distance from the film, scattering from this source is of
negligible importance. Any other material, such as a wall or floor, on the film side of the
specimen may also scatter an appreciable quantity of x-rays back to the film, especially
if the material receives the direct radiation from the x-ray tube or gamma-ray source
(See the figure below). This is referred to as backscattered radiation.
Intense backscattered radiation may originate in the floor or wall. Coning, masking, or
diaphragming should be employed. Backing the cassette with lead may give adequate
protection.
REDUCTION OF SCATTER
Although scattered radiation can never be completely eliminated, a number of means
are available to reduce its effect. The various methods are discussed in terms of x-rays.
Although most of the same principles apply to gamma-ray radiography, differences in
application arise because of the highly penetrating radiation emitted by most common
industrial gamma-ray sources. For example, a mask (See the figure below) for use with
200 kV x-rays could easily be light enough for convenient handling. A mask for use with
cobalt 60 radiation, on the other hand, would be thick, heavy, and probably
cumbersome. In any event, with either x-rays or gamma rays, the means for reducing
the effects of scattered radiation must be chosen on the basis of cost, convenience, and
effectiveness.
The combined use of metallic shot and a lead mask for lessening scattered radiation is
conducive to good radiographic quality. If several round bars are to be radiographed,
they may be separated with lead strips held on edge on a wooden frame and the voids
filled with fine shot.
LEAD FOIL SCREENS
Lead screens, mounted in contact with the film, diminish the effect on the film of
scattered radiation from all sources. They are beyond doubt the least expensive, most
convenient, and most universally applicable means of combating the effects of
scattered radiation. Lead screens lessen the scatter reaching the films regardless of
whether the screens permit a decrease or necessitate an increase in the radiographic
exposure.
Many x-ray exposure holders incorporate a sheet of lead foil in the back for the specific
purpose of protecting the film from backscatter. This lead will not serve as an
intensifying screen, first, because it usually has a paper facing, and second because it
often is not lead of “radiographic quality”. If intensifying screens are used with such
holders, definite means must be provided to insure good contact.
X-ray film cassettes also are usually fitted with a sheet of lead foil in the back for protection
against backscatter.
Using such a cassette or film holder with gamma rays or with million-volt x-rays, the film
should always be enclosed between double lead screens; otherwise, the secondary
radiation from the lead backing is sufficient to penetrate the intervening felt or paper and
cast a shadow of the structure of this material on the film, giving a granular or mottled
appearance. This effect can also occur at voltages as low as 200 kV unless the film is
enclosed between lead foil or fluorescent intensifying screens.
MASKS AND DIAPHRAGMS
Scattered radiation originating in matter outside the specimen is most serious for
specimens that have high absorption for x-rays, because the scattering from external
sources may be large compared to the primary image-forming radiation that reaches
the film through the specimen. Often, the most satisfactory method of lessening this
scatter is to use cutout diaphragms or some other form of mask mounted over or
around the object radiographed. If many specimens of the same article are to be
radiographed, it may be worthwhile to cut an opening of the same shape, but slightly
smaller, in a sheet of lead and place this on the object.
The lead serves to reduce the exposure in surrounding areas to a negligible value and
therefore to eliminate scattered radiation from this source. Since scatter also arises
from the specimen itself, it is good practice, wherever possible, to limit the cross section
of the x-ray beam to cover only the area of the specimen that is of interest in the
examination.
For occasional pieces of work where a cutout diaphragm would not be economical,
barium clay packed around the specimen will serve the same purpose. The clay should
be thick enough so that the film density under the clay is somewhat less than that under
the specimen. Otherwise, the clay itself contributes appreciable scattered radiation.
It may be advantageous to place the object in aluminum or thin iron pans and to use a
liquid absorber, provided the liquid chosen will not damage the specimen. A combined
saturated solution of lead acetate and lead nitrate is satisfactory.
Warning
To prepare this solution, dissolve approximately 3 1/2 pounds of lead acetate in 1 gallon
of hot water. When the lead acetate is in solution, add approximately 3 pounds of lead
nitrate.
Because of its high lead content this solution is a strong absorber of x-rays. In masking
with liquids, be sure to eliminate bubbles that may be clinging to the surface of the
specimen.
In some cases, a lead diaphragm or lead cone on the tube head may be a convenient
way to limit the area covered by the x-ray beam. Such lead diaphragms are particularly
useful where the desired cross section of the beam is a simple geometric figure, such
as a circle, square, or rectangle.
FILTERS
In general, the use of filters is limited to radiography with x-rays. A simple metallic filter
mounted in the x-ray beam near the x-ray tube (See the figure below) may adequately
serve the purpose of eliminating overexposure in the thin regions of the specimen and
in the area surrounding the part. Such a filter is particularly useful to reduce scatter
undercut in cases where a mask around the specimen is impractical, or where the
specimen would be injured by chemicals or shot. Of course, an increase in exposure or
kilovoltage will be required to compensate for the additional absorption; but, in cases
where the filter method is applicable, this is not serious unless the limit of the x-ray
machine has been reached.
The underlying principle of the method is that the addition of the filter material causes a
much greater change in the amount of radiation passing through the thin parts than
through the thicker parts. Suppose the shape of a certain steel specimen is as shown in
the figure below and that the thicknesses are 1/4 inch, 1/2 inch, and 1 inch. This
specimen is radiographed first with no filter, and then with a filter near the tube.
A filter placed near the x-ray tube reduces subject contrast and eliminates much of the
secondary radiation, which tends to obscure detail in the periphery of the specimen.
Column 3 of the table below shows the percentage of the original x-ray intensity
remaining after the addition of the filter, assuming both exposures were made at
180 kV. (These values were derived from actual exposure chart data.)
Region             Specimen Thickness (inches)   Original X-ray Intensity Remaining After Addition of a Filter
Outside specimen   0                             less than 5%
Thin section       1/4                           about 30%
Medium section     1/2                           about 40%
Thick section      1                             about 50%
Note that the greatest percentage change in x-ray intensity is under the thinner parts of
the specimen and in the film area immediately surrounding it. The filter reduces by a
large ratio the x-ray intensity passing through the thin sections or striking the cassette
around the specimen, and hence reduces the undercut of scatter from these sources.
Thus, in regions of strong undercut, the contrast is increased by the use of a filter since
the only effect of the undercutting scattered radiation is to obscure the desired image. In
regions where the undercut is negligible, a filter has the effect of decreasing the
contrast in the finished radiograph.
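The trend in the table, where the filter removes a larger fraction of the intensity under thin sections than under thick ones, follows from beam hardening and can be illustrated with a toy two-component (soft plus hard) spectrum. Every coefficient below is an assumed, round-number value chosen only to reproduce the qualitative behavior; none of it is measured exposure-chart data.

```python
import math

def remaining_fraction(x, f_soft=0.95, f_hard=0.05,
                       mu_soft=12.0, mu_hard=2.0,
                       filt_soft=0.01, filt_hard=0.55):
    """Fraction of the unfiltered film-side intensity that remains after a
    filter is added, for specimen thickness x (inches of steel).
    The spectrum is modeled as two components: a soft one (large
    attenuation coefficient, strongly absorbed by the filter) and a hard
    one (small coefficient, mostly passed by the filter).
    All parameters are illustrative assumptions."""
    unfiltered = (f_soft * math.exp(-mu_soft * x)
                  + f_hard * math.exp(-mu_hard * x))
    filtered = (f_soft * filt_soft * math.exp(-mu_soft * x)
                + f_hard * filt_hard * math.exp(-mu_hard * x))
    return filtered / unfiltered

# Thicker sections are reached mainly by hard radiation, which the filter
# passes, so the fraction remaining rises with specimen thickness.
for x in (0.0, 0.25, 0.5, 1.0):
    print(f"{x:4.2f} in: {remaining_fraction(x):5.1%}")
```

With these assumed numbers the fraction remaining climbs from a few percent outside the specimen toward roughly half under the thickest section, matching the shape, though not the exact values, of the table.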
Although frequently the highest possible contrast is desired, there are certain instances
in which too much contrast is a definite disadvantage. For example, it may be desired to
render detail visible in all parts of a specimen having wide variations of thickness. If the
exposure is made to give a usable density under the thin part, the thick region may be
underexposed. If the exposure is adjusted to give a suitable density under the thick
parts, the image of the thin sections may be grossly overexposed.
Curves illustrating the effect of a filter on the composition and intensity of an x-ray
beam.
Although filtering reduces the total quantity of radiation, most of the wavelengths
removed are those that would not penetrate the thicker portions of the specimen in any
case. The radiation removed would only result in a high intensity in the regions around
the specimen and under its thinner sections, with the attendant scattering undercut and
overexposure. The harder radiation obtained by filtering the x-ray beam produces a
radiograph of lower contrast, thus permitting a wider range of specimen thicknesses to
be recorded on a single film than would otherwise be possible.
Thus, a filter can act either to increase or to decrease the net contrast. The contrast
and penetrameter visibility are increased by the removal of the scatter that undercuts
the specimen (see the figure below) and decreased by the hardening of the original
beam. The nature of the individual specimen will determine which of these effects will
predominate or whether both will occur in different parts of the same specimen.
The choice of a filter material should be made on the basis of availability and ease of
handling. For the same filtering effect, the thickness of filter required is less for those
materials having higher absorption. In many cases, copper or brass is the most useful,
since filters of these materials will be thin enough to handle easily, yet not so thin as to
be delicate. See the figure below.
Definite rules as to filter thicknesses are difficult to formulate exactly because the
amount of filtration required depends not only on the material and thickness range of
the specimen, but also on the distribution of material in the specimen and on the
amount of scatter undercut that it is desired to eliminate. In the radiography of
aluminum, a filter of copper about 4 percent of the greatest thickness of the specimen
should prove the thickest necessary. With steel, a copper filter should ordinarily be
about 20 percent, or a lead filter about 3 percent, of the greatest specimen thickness for
the greatest useful filtration. The foregoing values are maximum values, and, depending
on circumstances, useful radiographs can often be made with far less filtration.
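The rules of thumb in the paragraph above can be collected into a small helper. The function below does nothing more than encode the quoted maximum percentages; the material names and the interface are illustrative choices, not part of the source text.

```python
def max_useful_filter_in(specimen_material, specimen_thickness_in, filter_material):
    """Rule-of-thumb maximum useful filter thickness (inches), per the
    percentages quoted in the text: copper at ~4% of the greatest specimen
    thickness for aluminum; copper at ~20% or lead at ~3% for steel.
    Combinations not mentioned in the text return None."""
    rules = {
        ("aluminum", "copper"): 0.04,
        ("steel", "copper"): 0.20,
        ("steel", "lead"): 0.03,
    }
    factor = rules.get((specimen_material, filter_material))
    return None if factor is None else factor * specimen_thickness_in

# For a 1-inch steel specimen, a copper filter thicker than about 0.2 inch
# adds little; useful radiographs are often made with far less.
print(max_useful_filter_in("steel", 1.0, "copper"))
```

As the text notes, these are maximum values; in practice far less filtration is often sufficient.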
In radiography with x-rays up to at least 250 kV, the 0.005-inch front lead screen
customarily used is an effective filter for the scatter from the bulk of the specimen.
Additional filtration between specimen and film only tends to contribute additional
scatter from the filter itself. The scatter undercut can be decreased by adding an
appropriate filter at the tube as mentioned before (see also the figures above). Although
the filter near the tube gives rise to scattered radiation, the scatter is emitted in all
directions, and since the film is far from the filter, scatter reaching the film is of very low
intensity.
Further advantages of placing the filter near the x-ray tube are that specimen-film
distance is kept to a minimum and that scratches and dents in the filter are so blurred
that their images are not apparent on the radiograph.
GRID DIAPHRAGMS
One of the most effective ways to reduce scattered radiation from an object being
radiographed is through the use of a Potter-Bucky diaphragm. This apparatus (see the
figure below) consists of a moving grid, composed of a series of lead strips held in
position by intervening strips of a material transparent to x-rays. The lead strips are
tilted, so that the plane of each is in line with the focal spot of the tube. The slots
between the lead strips are several times as deep as they are wide. The parallel lead
strips absorb the very divergent scattered rays from the object being radiographed, so
that most of the exposure is made by the primary rays emanating from the focal spot of
the tube and passing between the lead strips. During the course of the exposure, the
grid is moved, or oscillated, in a plane parallel to the film as shown by the black arrows
in the figure below. Thus, the shadows of the lead strips are blurred out so that they do
not appear in the final radiograph.
Schematic diagram showing how the primary x-rays pass between the lead strips of the
Potter-Bucky diaphragm while most of the scattered x-rays are absorbed because they
strike the sides of the strips.
The Potter-Bucky diaphragm is seldom used in the industrial field, although
special forms have been designed for the radiography of steel with voltages as high as
200 to 400 kV. These diaphragms are not used at higher voltages or with gamma rays
because relatively thick lead strips would be needed to absorb the radiation scattered
at these energies. This in turn would require a Potter-Bucky diaphragm, and the
associated mechanism, of an uneconomical size and complexity.
MOTTLING CAUSED BY X-RAY DIFFRACTION
A special form of scattering caused by x-ray diffraction is encountered occasionally. It is
most often observed in the radiography of fairly thin metallic specimens whose grain
size is large enough to be an appreciable fraction of the part thickness. The
radiographic appearance of this type of scattering is mottled and may be confused with
the mottled appearance sometimes produced by porosity or segregation. It can be
distinguished from these conditions by making two successive radiographs, with the
specimen rotated slightly (1 to 5 degrees) between exposures, about an axis
perpendicular to the central beam. A pattern caused by porosity or segregation will
change only slightly; however, one caused by diffraction will show a marked change.
The radiographs of some specimens will show a mottling from both effects, and careful
observation is needed to differentiate between them.
Briefly, however, a relatively large crystal or grain in a relatively thin specimen may in
some cases “reflect” an appreciable portion of the x-ray energy falling on the specimen,
much as if it were a small mirror. This will result in a light spot on the developed
radiograph corresponding to the position of the particular crystal and may also produce
a dark spot in another location if the diffracted, or “reflected,” beam strikes the film.
Should this beam strike the film beneath a thick part of the specimen, the dark spot may
be mistaken for a void in the thick section.
This effect is not observed in most industrial radiography, because most specimens are
composed of a multitude of very minute crystals or grains, variously oriented; hence,
scatter by diffraction is essentially uniform over the film area. In addition, the directly
transmitted beam usually reduces the contrast in the diffraction pattern to a point where
it is no longer visible on the radiograph.
The mottling caused by diffraction can be reduced, and in some cases eliminated, by
raising the kilovoltage and by using lead foil screens.
The former is often of positive value even though the radiographic contrast is reduced.
Since definite rules are difficult to formulate, both approaches should be tried in a new
situation, or perhaps both used together.
It should be noted, however, that in some instances, the presence or absence of
mottling caused by diffraction has been used as a rough indication of grain size and
thus as a basis for the acceptance or the rejection of parts.
SCATTERING IN 1- AND 2-MILLION-VOLT RADIOGRAPHY
Lead screens should always be used in this voltage range. The common thicknesses,
0.005-inch front and 0.010-inch back, are both satisfactory and convenient. Some
users, however, find a 0.010-inch front screen of value because of its greater selective
absorption of the scattered radiation from the specimen.
Lead filters are most convenient for this voltage range. When thus used between
specimen and film, filters are subject to mechanical damage. Care should be taken to
reduce this to a minimum, lest filter defects be confused with structures in or on the
specimen. In radiography with million-volt x-rays, specimens of uniform sections may be
conveniently divided into three classes. Below about 11/2 inches of steel, filtration
affords little improvement in radiographic quality. Between 11/2 and 4 inches of steel, the
thickest filter, up to 1/8-inch lead, which at the same time allows a reasonable exposure
time, may be used. Above 4 inches of steel, filter thicknesses may be increased to1/ 4
inch of lead, economic considerations permitting.
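The three thickness classes can be restated as a simple lookup. This is purely a transcription of the paragraph above; the function name and the return convention (maximum lead thickness in inches) are illustrative.

```python
def max_lead_filter_in(steel_thickness_in):
    """Maximum lead filter thickness (inches) suggested by the three
    specimen-thickness classes in the text, for million-volt radiography
    of steel of uniform section."""
    if steel_thickness_in < 1.5:
        return 0.0          # filtration affords little improvement
    elif steel_thickness_in <= 4.0:
        return 1.0 / 8.0    # up to 1/8-inch lead, exposure time permitting
    else:
        return 1.0 / 4.0    # up to 1/4-inch lead, economics permitting
```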
It should be noted that in the radiography of extremely thick specimens with million-volt x-
rays, fluorescent screens may be used to increase the photographic speed to a point
where filters can be used without requiring excessive exposure time.
A very important point is to block off all radiation except the useful beam with heavy
(1/2-inch to 1-inch) lead at the anode. Unless this is done, radiation striking the walls of the
x-ray room will scatter back in such quantity as to seriously affect the quality of the
radiograph. This will be especially noticeable if the specimen is thick or has parts
projecting relatively far from the film.
MULTIMILLION-VOLT RADIOGRAPHY
Techniques of radiography in the 6- to 24-million-volt range are difficult to specify. This
is in part because of the wide range of subjects radiographed, from thick steel to
several feet of mixtures of solid organic compounds, and in part because the sheer size
of the specimens and the difficulty in handling them often impose limitations on the
radiographic techniques that can be used.
degradation of the image quality. Vacuum cassettes are especially useful in this
application and several devices have been constructed for the purpose, some of which
incorporate such refinements as automatic preprogrammed positioning of the film
behind the various areas of a large specimen.
The electrons liberated in lead by the absorption of multimegavolt x-radiation are very
energetic. This means that those arising from fairly deep within a lead screen can
penetrate the lead, being scattered as they go, and reach the film. Thus, when thick
screens are used, the electrons reaching the film are “diffused,” with a resultant
deleterious effect on image quality. Therefore, when the highest quality is required in
multimillion-volt radiography, a comparatively thin front screen (about 0.005 inch) is
used, and the back screen is eliminated. This necessitates a considerable increase in
exposure time. Naturally, the applicability of the technique depends also on the amount
of backscattered radiation involved and is probably not applicable where large amounts
occur.
THE PHOTOGRAPHIC LATENT IMAGE
The photographic latent image may be defined as that radiation-induced change in a grain
or crystal that renders the grain readily susceptible to the chemical action of a developer.
To discuss the latent image within the confines of this text requires that only the basic concept be
outlined. A discussion of the historical development of the subject and a consideration of
most of the experimental evidence supporting these theories must be omitted because of lack
of space.
It is interesting to note that throughout the greater part of the history of photography, the
nature of the latent image was unknown or in considerable doubt. The first public
announcement of Daguerre’s process was made in 1839, but it was not until 1938 that a
reasonably satisfactory and coherent theory of the formation of the photographic latent
image was proposed. That theory has been undergoing refinement and modification ever
since.
Some of the investigational difficulties arose because the formation of the latent image is a
very subtle change in the silver halide grain. It involves the absorption of only one or a few
photons of radiation and can therefore affect only a few atoms, out of some 10^9 or 10^10 atoms
in a typical photographic grain. The latent image cannot be detected by direct physical or
analytical chemical means.
However, even during the time that the mechanism of formation of the latent image was a
subject for speculation, a good deal was known about its physical nature. It was known, for
example, that the latent image was localized at certain discrete sites on the silver halide
grain.
If a photographic emulsion is exposed to light, developed briefly, fixed, and then examined
under a microscope (see the figure below), it can be seen that development (the reduction of
silver halide to metallic silver) has begun at only one or a few places on the crystal. Since
small amounts of silver sulfide on the surface of the grain were known to be necessary for a
photographic material to have a high sensitivity, it seemed likely that the spots at which the
latent image was localized were local concentrations of silver sulfide.
Electron micrograph of exposed, partially developed, and fixed grains, showing initiation of
development at localized sites on the grains (1µ = 1 micron = 0.001 mm).
It was further known that the material of the latent image was, in all probability, silver. For
one thing, chemical reactions that will oxidize silver will also destroy the latent image. For
another, it is a common observation that photographic materials given prolonged exposure
to light darken spontaneously, without the need for development. This darkening is known
as the print-out image. The print-out image contains enough material to be identified
chemically, and this material is metallic silver. By microscopic examination, the silver of the
print-out image is discovered to be localized at certain discrete areas of the grain (see the
figure below), just as is the latent image.
Thus, the change that makes an exposed photographic grain capable of being transformed
into metallic silver by the mild reducing action of a photographic developer is a
concentration of silver atoms—probably only a few—at one or more discrete sites on the
grain. Any theory of latent-image formation must account for the way that light photons
absorbed at random within the grain can produce these isolated aggregates of silver atoms.
Most current theories of latent-image formation are modifications of the mechanism
proposed by R. W. Gurney and N. F. Mott in 1938.
In order to understand the Gurney-Mott theory of the latent image, it is necessary to digress
and consider the structure of crystals—in particular, the structure of silver bromide
crystals.
When solid silver bromide is formed, as in the preparation of a photographic emulsion, the
silver atoms each give up one orbital electron to a bromine atom. The silver atoms, lacking
one negative charge, have an effective positive charge and are known as silver ions (Ag+).
The bromine atoms, on the other hand, have gained an electron—a negative charge—and
have become bromine ions (Br-). The “plus” and “minus” signs indicate, respectively, one
fewer or one more electron than the number required for electrical neutrality of the atom.
A crystal of silver bromide is a regular cubical array of silver and bromide ions, as shown
schematically in the figure below. It should be emphasized that the “magnification” of the
figure is very great. An average grain in an industrial x-ray film may be about 0.00004 inch
in diameter, yet will contain several billions of ions.
A silver bromide crystal is a cubical array of silver (Ag+) and bromide (Br-) ions.
These may be “foreign” molecules, within or on the crystal, produced by reactions with the
components of the gelatin, or distortions or dislocations of the regular array of ions shown
in the figure above. These may be classed together and called “latent-image sites.”
“Plan view” of a layer of ions of a crystal similar to that of the previous figure. A
latent-image site is shown schematically, and two interstitial silver ions are indicated.
The Gurney-Mott theory envisions latent-image formation as a two-stage process. It will be
discussed first in terms of the formation of the latent image by light, and then the special
considerations of direct x-ray or lead foil screen exposures will be covered.
THE GURNEY-MOTT THEORY
When a photon of light of energy greater than a certain minimum value (that is, of
wavelength less than a certain maximum) is absorbed in a silver bromide crystal, it releases
an electron from a bromide (Br-) ion. The ion, having lost its excess negative charge, is
changed to a bromine atom. The liberated electron is free to wander about the crystal (see
the figure below).
As it does, it may encounter a latent-image site and be “trapped” there, giving the
latent-image site a negative electrical charge. This first stage of latent-image formation—involving as it
does transfer of electrical charges by means of moving electrons—is the electronic
conduction stage.
Stages in the development of the latent image according to the Gurney-Mott theory.
The negatively charged trap can then attract an interstitial silver ion because the silver ion
is charged positively (C in the figure above). When such an interstitial ion reaches a
negatively charged trap, its charge is counteracted, an atom of silver is deposited at the
trap, and the trap is “reset” (D in the figure above). This second stage of the Gurney-Mott
mechanism is termed the ionic conduction stage, since electrical charge is transferred
through the crystal by the movement of ions—that is, charged atoms. The whole cycle can
recur several, or many, times at a single trap, each cycle involving absorption of one photon
and addition of one silver atom to the aggregate. (See E to H in the figure above.)
This aggregate of silver atoms is the latent image. The presence of these few
atoms at a single latent-image site makes the whole grain susceptible to the reducing action
of the developer. In the most sensitive emulsions, the number of silver atoms required may
be less than ten.
The mark of the success of a theory is its ability to provide an understanding of previously
inexplicable phenomena. The Gurney-Mott theory and those derived from it have been
notably successful in explaining a number of photographic effects. One of these effects—
reciprocity-law failure—will be considered here as an illustration.
Low-intensity reciprocity-law failure (left branch of the curve) results from the fact that
several atoms of silver are required to produce a stable latent image. A single atom of silver
at a latent-image site (D in the figure above) is relatively unstable, breaking down rather
easily into an electron and a positive silver ion. Thus, if there is a long interval between the
formation of the first silver atom and the arrival of the second conduction electron (E in the
figure above), the first silver atom may have broken down, with the net result that the
energy of the light photon that produced it has been wasted. Therefore, increasing the light
intensity from very low to higher values increases the efficiency, as shown by the downward
trend of the left-hand branch of the curve.
Electrons thus denied access to the same traps may be trapped at others, and the latent
image silver therefore tends to be inefficiently divided among several latent-image sites.
(This has been demonstrated by experiments that have shown that high-intensity exposure
produces more latent image within the volume of the crystal than do either low- or
optimum-intensity exposures.) Thus, the resulting inefficiency in the use of the conduction
electrons is responsible for the upward trend of the right-hand branch of the curve.
X-RAY LATENT IMAGE
In industrial radiography, the photographic effects of x-rays and gamma rays, rather than
those of light, are of the greater interest. At the outset it should be stated that the agent
that actually exposes a photographic grain, that is, a silver bromide crystal in the
emulsion, is not the x-ray photon itself, but rather the electrons—photoelectric and
Compton— resulting from the absorption event. It is for this reason that direct x-ray
exposures and lead foil screen exposures are similar and can be considered together.
The most striking differences between x-ray and visible-light exposures to grains arise
from the difference in the amounts of energy involved. The absorption of a single photon of
light transfers a very small amount of energy to the crystal. This is only enough energy to
free a single electron from a bromide (Br-) ion, and several successive light photons are
required to render a single grain developable.
The passage through a grain of an electron, arising from the absorption of an x-ray photon,
can transmit hundreds of times more energy to the grain than does the absorption of a light
photon. Even though this energy is used rather inefficiently, in general the amount is
sufficient to render the grain traversed developable—that is, to produce within it, or on it, a
stable latent image.
As a matter of fact, the photoelectric or Compton electron, resulting from absorption or
interaction of a photon, can have a fairly long path in the emulsion and can render several
or many grains developable. The number of grains exposed per photon interaction can
vary from 1 grain for x-radiation of about 10 keV to possibly 50 or more grains for a 1
MeV photon.
However, for 1 MeV and higher energy photons, there is a low probability of an interaction
that transfers the total energy to grains in an emulsion.
For comparatively low values of exposure, each increment of exposure renders on the
average the same number of grains developable, which, in turn, means that a curve of net
density versus exposure is a straight line passing through the origin (see the figure below).
This curve departs significantly from linearity only when the exposure becomes so great
that appreciable energy is wasted on grains that have already been exposed. For
commercially available fine-grain x-ray films, for example, the density versus exposure
curve may be essentially linear up to densities of 2.0 or even higher.
Typical net density versus exposure curves for direct x-ray exposures.
The fairly extensive straight-line relation between exposure and density is of considerable
use in photographic monitoring of radiation, permitting a saving of time in the
interpretation of densities observed on personnel monitoring films.
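The behavior described above, linear at low exposure and departing from linearity only when appreciable energy is wasted on grains that have already been exposed, is captured by a simple single-hit saturation model. The parameters below are arbitrary illustrative values, not data for any real film.

```python
import math

def net_density(exposure, d_saturation=6.0, e0=3.0):
    """Single-hit model: each exposure increment renders developable, on
    average, the same fraction of the grains not yet exposed, giving
    D = D_sat * (1 - exp(-E/E0)). d_saturation and e0 are assumed,
    illustrative parameters in arbitrary units."""
    return d_saturation * (1.0 - math.exp(-exposure / e0))

# At low exposure the curve hugs the straight line D ~ (D_sat/E0) * E,
# and it always falls slightly below that line as grains saturate.
slope = 6.0 / 3.0
for e in (0.1, 0.5, 1.0):
    print(e, net_density(e), slope * e)
```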
If the D versus E curves shown in the figure above are replotted as characteristic curves (D
versus log E), both characteristic curves are the same shape (see the figure below) and are
merely separated along the log exposure axis. This similarity in toe shape has been
experimentally observed for conventional processing of many commercial photographic
materials, both x-ray films and others.
Because a grain is completely exposed by the passage of an energetic electron, all x-ray
exposures are, as far as the individual grain is concerned, extremely short. The actual time
that an x-ray-induced electron is within a grain depends on the electron velocity, the grain
dimensions, and the “squareness” of the hit. However, a time of the order of 10^-13 second is
representative. (This is in distinction to the case of light where the “exposure time” for a
single grain is the interval between the arrival of the first photon and that of the last photon
required to produce a stable latent image.)
The complete exposure of a grain by a single event and in a very short time implies that
there should be no reciprocity-law failure for direct x-ray exposures or for exposures made
with lead foil screens. The validity of this has been established for commercially available
film and conventional processing over an extremely wide range of x-ray intensities. That
films can satisfactorily integrate x-, gamma-, and beta-ray exposures delivered at a wide
range of intensities is one of the advantages of film as a radiation dosimeter.
In the discussion on reciprocity-law failure it was pointed out that a very short, very high
intensity exposure to light tends to produce latent images in the interior of the grain.
Because x-ray exposures are also, in effect, very short, very high intensity exposures, they
too tend to produce internal, as well as surface, latent images.
DEVELOPMENT
Many materials discolor on exposure to light—a pine board or the human skin, for
example—and thus could conceivably be used to record images. However, most such
systems respond to exposure on a “1:1” basis, in that one photon of light results in the
production of one altered molecule or atom. The process of development constitutes one of
the major advantages of the silver halide system of photography.
In this system, a few atoms of photolytically deposited silver can, by development, be made
to trigger the subsequent chemical deposition of some 10^9 or 10^10 additional silver atoms,
resulting in an amplification factor of the order of 10^9 or greater. The amplification process
can be performed at a time, and to a degree, convenient to the user and, with sufficient
care, can be uniform and reproducible enough for the purposes of quantitative
measurements of radiation.
Many practical developing agents are relatively simple organic compounds (see the figure
below) and, as shown, their activity is strongly dependent on molecular structure as well as
on composition. There exist empirical rules by which the developing activity of a particular
compound may often be predicted from a knowledge of its structure.
The simplest concept of the role of the latent image in development is that it acts merely as
an electron-conducting bridge by which electrons from the developing agent can reach the
silver ions on the interior face of the latent image. Experiment has shown that this simple
concept is inadequate to explain the phenomena encountered in practical photographic
development.
Adsorption of the developing agent to the silver halide or at the silver-silver halide
interface has been shown to be very important in determining the rate of direct, or
chemical, development by most developing agents. The rate of development by
hydroquinone (see the figure above), for example, appears to be relatively independent of
the area of the silver surface and instead to be governed by the extent of the silver-silver
halide interface.
The exact mechanisms by which a developing agent acts are relatively complicated, and
research on the subject is very active.
The broad outlines, however, are relatively clear. A molecule of a developing agent can
easily give an electron to an exposed silver bromide grain (that is, to one that carries a
latent image), but not to an unexposed grain. This electron can combine with a silver (Ag+)
ion of the crystal, neutralizing the positive charge and producing an atom of silver. The
process can be repeated many times until all the billions of silver ions in a photographic
grain have been turned into metallic silver.
The development process has both similarities to, and differences from, the process of latent-
image formation. Both involve the union of a silver ion and an electron to produce an atom
of metallic silver. In latent image formation, the electron is freed by the action of radiation
and combines with an interstitial silver ion. In the development process, the electrons are
supplied by a chemical electron-donor and combine with the silver ions of the crystal lattice.
The physical shape of the developed silver need have little relation to the shape of the silver
halide grain from which it was derived. Very often the metallic silver has a tangled,
filamentary form, the outer boundaries of which can extend far beyond the limits of the
original silver halide grain (see the figure below). The mechanism by which these filaments
are formed is still in doubt although it is probably associated with that by which filamentary
silver can be produced by vacuum deposition of the silver atoms from the vapor phase onto
suitable nuclei.
The discussion of development has thus far been limited to the action of the developing
agent alone. However, a practical photographic developer solution consists of much more
than a mere water solution of a developing agent. The functions of the other common
components of a practical developer are the following:
An Alkali
The activity of developing agents depends on the alkalinity of the solution. The alkali
should also have a strong buffering action to counteract the liberation of hydrogen ions—
that is, a tendency toward acidity—that accompanies the development process. Common
alkalis are sodium hydroxide, sodium carbonate, and certain borates.
A Preservative
This is usually a sulfite. One of its chief functions is to protect the developing agent from
oxidation by air. It destroys certain reaction products of the oxidation of the developing
agent that tend to catalyze the oxidation reaction. Sulfite also reacts with the reaction
products of the development process itself, thus tending to maintain the development rate
and to prevent staining of the photographic layer.
A Restrainer
This is usually potassium bromide. Bromide ion restrains the development of the
unexposed grains more strongly than that of the exposed grains, and thus reduces fog.
Commercial developers often contain other materials in addition to those listed above. An
example would be the hardeners usually used in developers for automatic processors.
Transmittance   Percent transmittance   Opacity (1/T)   Density (log10 opacity)
1.0             100                     1               0
0.50            50                      2               0.3
0.25            25                      4               0.6
0.10            10                      10              1.0
0.01            1                       100             2.0
0.001           0.1                     1,000           3.0
0.0001          0.01                    10,000          4.0
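The relation underlying the table is simply D = log10(1/T), the logarithm of the opacity. A minimal sketch (the function name is my own, for illustration) reproduces the rows:

```python
import math

def density(transmittance):
    """Photographic density: D = log10(opacity), where opacity = 1 / transmittance."""
    return math.log10(1.0 / transmittance)

# Reproduce the rows of the table above.
for t in (1.0, 0.50, 0.25, 0.10, 0.01, 0.001, 0.0001):
    print(f"T = {t:<7} opacity = {1 / t:<8g} D = {density(t):.1f}")
```

Note that each added density unit corresponds to another factor of ten in opacity.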
DENSITOMETERS
A densitometer is an instrument for measuring photographic densities. A number of
different types, both visual and photoelectric, are available commercially. For purposes
of practical industrial radiography, there is no great premium on high accuracy of a
densitometer. A much more important property is reliability, that is, the densitometer
should reproduce readings from day to day.
RADIOGRAPHIC CONTRAST
The first subjective criterion for determining radiographic quality is radiographic
contrast. Essentially, radiographic contrast is the degree of density difference between
adjacent areas on a radiograph.
With an x-ray source of high kilovoltage, we see a sample of relatively low radiographic
contrast, that is, the density difference between the two adjacent areas (A and B) is low.
Definition
Besides radiographic contrast, there is one other subjective criterion for determining
radiographic quality: radiographic definition. Essentially, radiographic definition is the
abruptness of the change in going from one density to another. For example, it is possible
to radiograph a particular subject and, by varying certain factors, produce two
radiographs that possess different degrees of definition.
In the example to the left, the transition from step to step of a two-step step tablet,
represented by Line BC, is quite sharp or abrupt. Translated into a radiograph, we see
that the transition from the high density to the low density is abrupt. The Edge Line BC is
still a vertical line, quite similar to the step tablet itself. We can say that the detail portrayed
in the radiograph is equivalent to the physical change present in the step tablet. Hence, we
can say that the imaging system produced a faithful visual reproduction of the step tablet:
it recorded essentially all of the information present in the step tablet on the radiograph.
In the example on the right, the same two-step step tablet has been radiographed.
However, here we note that, for some reason, the imaging system did not produce a
faithful visual reproduction. The Edge Line BC on the radiograph is not vertical. This is
evidenced by the gradual transition between the high- and low-density areas on the
radiograph. The edge definition, or detail, is not present because of certain factors or
conditions that exist in the imaging system.
One must bear in mind that radiographic contrast and definition are not dependent upon
the same set of factors. If detail in a radiograph is originally lacking, then attempts to
manipulate radiographic contrast will have no effect on the amount of detail present in
that radiograph.
GRAININESS
Graininess is defined as the visual impression of nonuniformity of density in a
radiographic (or photographic) image. With fast films exposed to high-kilovoltage
radiation, the graininess is easily apparent to the unaided vision; with slow films
exposed to low-kilovoltage x-rays, moderate magnification may be needed to
make it visible. In general, graininess increases with increasing film speed and
with increasing energy of the radiation.
The “clumps” of developed silver that are responsible for the impression of graininess
do not each arise from a single developed photographic grain. That this cannot be so
can be seen from size considerations alone. The particle of black metallic silver arising
from the development of a single photographic grain in an industrial x-ray film is rarely
larger than 0.001 mm (0.00004 inch) and usually even less. This is far below the limits
of unaided human vision.
Rather, the visual impression of graininess is caused by the random, statistical grouping
of these individual silver particles. Each quantum (photon) of x-radiation or gamma
radiation absorbed in the film emulsion exposes one or more of the tiny crystals of silver
bromide of which the emulsion is composed. These “absorption events” occur at
random and even in a uniform x-ray beam, the number of absorption events will differ
from one tiny area of the film to the next for purely statistical reasons. Thus, the
exposed grains will be randomly distributed; that is, their numbers will have a statistical
variation from one area to the next.
As an analogy, consider a sidewalk made up of many equal squares, on which rain is
falling so that each square receives, on the average, 10,000 drops. Some squares would
receive more than 10,000 drops and others less. In other words, the actual
number of drops falling on any particular square will most likely differ from the average
number of drops per square along the whole length of the sidewalk. The laws of
statistics show that the differences between the actual numbers and the average
number of drops can be described in terms of probability. If a large number of blocks is
involved, the actual number of raindrops on 68 percent of the blocks will differ from the
average by no more than 100 drops, or ±1 percent of the average. The remaining 32
percent will differ by more than this number. Thus, the differences in “wetness” from
one block of sidewalk to another will be small and probably unnoticeable.
This value of 100 drops holds only for an average of 10,000 drops per square. Now
consider the same sidewalk in a light shower in which the average number of drops per
square is only 100. The same statistical laws show that the deviation from the average
number of drops will be 10 or ±10 percent of the average. Thus, differences in wetness
from one square to the next will be much more noticeable in a light shower (±10
percent) than they are in a heavy downpour (±1 percent).
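The ±1 percent and ±10 percent figures follow from counting statistics: for a count averaging N, the standard deviation is √N, so the relative fluctuation is 1/√N. A minimal sketch (the function name is my own) checks both cases:

```python
import math

def fluctuation(mean_count):
    """One standard deviation of a random count averaging mean_count,
    returned as (absolute deviation, percent of the mean). About 68
    percent of squares fall within this band of the average."""
    sigma = math.sqrt(mean_count)
    return sigma, 100.0 * sigma / mean_count

print(fluctuation(10_000))  # heavy downpour: (100.0, 1.0)
print(fluctuation(100))     # light shower:   (10.0, 10.0)
```

The same arithmetic is why fewer absorbed photons per unit area mean greater relative variation, and hence greater graininess.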
Now we will consider these drops as x-ray photons absorbed in the film. With a very
slow film, it might be necessary to have 10,000 photons absorbed in a small area to
produce a density of, for example, 1.0. With an extremely fast film it might require only
100 photons in the same area to produce the same density of 1.0. When only a few
photons are required to produce the density, the random positions of the absorption
events become visible in the processed film as film graininess. On the other hand, the
more x-ray photons required, the less noticeable the graininess in the radiographic
image, all else being equal.
It can now be seen how film speed governs film graininess. In general, the silver
bromide crystals in a slow film are smaller than those in a fast film, and thus will
produce less light-absorbing silver when they are exposed and developed. Yet, at low
kilovoltages, one absorbed photon will expose one grain, of whatever size. Thus, more
photons will have to be absorbed in the slower film than in the faster to result in a
particular density. For the slower film, the situation will be closer to the “downpour”
case in the analogy above and film graininess will be lower.
The increase in graininess of a particular film with increasing kilovoltage can also be
understood on this basis. At low kilovoltages each absorbed photon exposes one
photographic grain; at high kilovoltages one photon will expose several, or even many,
grains. At high kilovoltages, then, fewer absorption events will be required to expose
the number of grains required for a given density than at lower kilovoltages. The fewer
absorption events, in turn, mean a greater relative deviation from the average, and
hence greater graininess.
It should be pointed out that although this discussion is on the basis of direct x-ray
exposures, it also applies to exposures with lead screens. The agent that actually
exposes a grain is a high-speed electron arising from the absorption of an x- or gamma-
ray photon. The silver bromide grain in a film cannot distinguish between an electron
that arises from an absorption event within the film emulsion and one arising from the
absorption of a similar photon in a lead screen.
The quantum mottle observed in radiographs made with fluorescent intensifying screens
has a similar statistical origin. In this case, however, it is the numbers of photons
absorbed in the screens that are of significance.
THE CHARACTERISTIC CURVE
The characteristic curve, sometimes referred to as the sensitometric curve or the H and
D curve (after Hurter and Driffield who, in 1890, first used it), expresses the relation
between the exposure applied to a photographic material and the resulting photographic
density. The characteristic curves of three typical films, exposed between lead foil
screens to x-rays, are given in the figure below. Such curves are obtained by giving a
film a series of known exposures, determining the densities produced by these
exposures, and then plotting density against the logarithm of relative exposure.
Relative exposure is used because there are no convenient units, suitable to all
kilovoltages and scattering conditions, in which to express radiographic exposures.
Hence, the exposures given a film are expressed in terms of some particular exposure,
giving a relative scale. In practical radiography, this lack of units for x-ray intensity or
quantity is no hindrance. The use of the logarithm of the relative exposure, rather than
the relative exposure itself, has a number of advantages. It compresses an otherwise
long scale. Furthermore, in radiography, ratios of exposures or intensities are usually
more significant than the exposures or the intensities themselves. Pairs of exposures
having the same ratio will be separated by the same interval on the log relative
exposure scale, no matter what their absolute value may be. Consider the following
pairs of exposures.
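The original table of exposure pairs is not reproduced here, so the pairs below are my own illustrative values; each has the same 2:1 ratio, and the sketch confirms that every such pair is separated by the same log E interval:

```python
import math

# Hypothetical exposure pairs (any units); each pair has a 2:1 ratio.
pairs = [(1, 2), (5, 10), (30, 60), (400, 800)]

for low, high in pairs:
    interval = math.log10(high) - math.log10(low)
    print(f"{low:>4} : {high:<4} log E interval = {interval:.3f}")
# Every pair prints the same interval, log10(2), about 0.301.
```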
Characteristic curves of three typical x-ray films, exposed between lead foil screens.
As the figure above shows, the slope (or steepness) of the characteristic curves is
continuously changing throughout the length of the curves. It will suffice at this point to
give a qualitative outline of these effects. For example, two slightly different thicknesses
in the object radiographed transmit slightly different exposures to the film. These two
exposures have a certain small log E interval between them, that is, have a certain
ratio. The difference in the densities corresponding to the two exposures depends on
just where on the characteristic curve they fall, and the steeper the slope of the curve,
the greater is this density difference.
For example, the curve of Film Z (see the figure above), is steepest in its middle
portion. This means that a certain log E interval in the middle of the curve corresponds
to a greater density difference than the same log E interval at either end of the curve. In
other words, the film contrast is greatest where the slope of the characteristic curve is
greatest. For Film Z, as has been pointed out, the region of greatest slope is in the
central part of the curve. For Films X and Y, however, the slope—and hence the film
contrast—continuously increases throughout the useful density range. The curves of
most industrial x-ray films are similar to those of Films X and Y.
USE OF THE CHARACTERISTIC CURVE
The characteristic curve can be used to solve quantitative problems arising in
radiography, in the preparation of technique charts, and in radiographic research.
Ideally, characteristic curves made under the radiographic conditions actually
encountered should be used in solving practical problems. However, it is not always
possible to produce characteristic curves in a radiographic department, and curves
prepared elsewhere must be used. Such curves prove adequate for many purposes
although it must be remembered that the shape of the characteristic curve and the
speed of a film relative to that of another depend strongly on developing conditions.
Example 1: Suppose a radiograph made on Film Z (see the figure below) with an
exposure of 12 mA-min has a density of 0.8 in the region of maximum interest. It is
desired to increase the density to 2.0 for the sake of the increased contrast available
there.
Example 2: Film X has a higher contrast than Film Z at D = 2.0 (see the figure below)
and also a finer grain. Suppose that, for these reasons, it is desired to make the
radiograph on Film X with a density of 2.0 in the same region of maximum interest.
Form of transparent overlay of proper dimensions for use with the characteristic curves
in the solution of exposure problems. The use of the overlay and curves is
demonstrated below.
Nomogram Methods
In the figure above, the scales at the far left and far right are relative exposure values.
They do not represent milliampere-minutes, curie-hours, or any other exposure unit;
they are to be considered merely as multiplying (or dividing) factors, the use of which is
explained below. Note, also, that these scales are identical, so that a ruler placed
across them at the same value will intersect the vertical lines, in the center of the
diagram, at right angles.
On the central group of lines, each labeled with the designation of a film whose curve is
shown in the earlier figure, the numbers represent densities.
The use of the figure above will be demonstrated by a re-solution of the same problems
used as illustrations in both of the preceding sections. Note that in the use of the
nomogram, the straightedge must be placed so that it is at right angles to all the lines—
that is, so that it cuts the outermost scales on the left and the right
at the same value.
Example 1: Place the straightedge across the figure above so that it cuts the Film Z scale at 0.8.
The reading on the outside scales is then 9.8. Now move the straightedge upward so that
it cuts the Film Z scale at 2.0; the reading on the outside scales is now 41. The original
exposure (12 mA-min) must be multiplied by the ratio of these two numbers— that is, by
41/9.8 = 4.2. Therefore, the new exposure is 12 x 4.2 mA-min or 50 mA-min.
Example 2: Film X has a higher contrast than Film Z at D = 2.0 (see the earlier figure)
and also lower graininess. Suppose that, for these reasons, it is desired to make the
aforementioned radiograph on Film X with a density of 2.0 in the same region of
maximum interest.
Place the straightedge on the figure above so that it cuts the scale for Film Z at 2.0. The
reading on the outside scales is then 41, as in Example 1. When the straightedge is
placed across the Film X scale at 2.0, the reading on the outside scale is 81. In the
previous example, the exposure for a density of 2.0 on Film Z was found to be 50
mA-min. In order to give a density of 2.0 on Film X, this exposure must be multiplied by
the ratio of the two scale readings just found—81/41 = 1.97. The new exposure is
therefore 50 x 1.97 or 98 mA-min.
Example 3: The types of problems given in Examples 1 and 2 are often combined in
actual practice. Suppose, for example, that a radiograph was made on Film X (see the
earlier figure) with an exposure of 20 mA-min and that a density of 1.0 was obtained. A
radiograph at the same kilovoltage on Film Y at a density of 2.5 is desired for the sake
of the higher contrast and the lower graininess obtainable. The problem can be solved
graphically in a single step.
The reading on the outside scale for D = 1.0 on Film X is 38. The corresponding reading
for D = 2.5 on Film Y is 420. The ratio of these is 420/38 = 11, the factor by which the
original exposure must be multiplied. The new exposure to produce D = 2.5 on Film Y is
then 20 x 11 or 220 mA-min.
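All three examples apply the same rule: multiply the old exposure by the ratio of the two relative-exposure readings. A small sketch (the function name is my own) checks the arithmetic; the slight differences from the text come from its rounding of the intermediate ratios:

```python
def new_exposure(old_exposure, old_reading, new_reading):
    """Scale an exposure by the ratio of two relative-exposure readings
    taken from the characteristic curves or the nomogram."""
    return old_exposure * new_reading / old_reading

# Example 1: Film Z, density 0.8 -> 2.0 (readings 9.8 and 41)
print(round(new_exposure(12, 9.8, 41)))   # 50 mA-min
# Example 2: Film Z -> Film X, both at density 2.0 (readings 41 and 81)
print(round(new_exposure(50, 41, 81)))    # 99 mA-min (text rounds the ratio, giving 98)
# Example 3: Film X at 1.0 -> Film Y at 2.5 (readings 38 and 420)
print(round(new_exposure(20, 38, 420)))   # 221 mA-min (text rounds the ratio, giving 220)
```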
FUNDAMENTALS OF PROCESSING
In the processing procedure, the invisible image produced in the film by exposure to
x-rays, gamma rays, or light is made visible and permanent. Processing is carried out
under subdued light of a color to which the film is relatively insensitive. The film is first
immersed in a developer solution, which causes the areas exposed to radiation to
become dark, the amount of darkening for a given degree of development depending
on the degree of exposure. After development, and sometimes after a treatment
designed to halt the developer reaction abruptly, the film passes into a fixing bath. The
function of the fixer is to dissolve the undeveloped portions of the sensitive salt. The film is
then washed to remove the fixing chemicals and solubilized salts, and finally is dried.
If the volume of work is small, or if time is of relatively little importance, radiographs may
be processed by hand. The most common method of manual processing of industrial
radiographs is known as the tank method. In this system, the processing solutions and
wash water are contained in tanks deep enough for the film to be hung vertically. Thus,
the processing solutions have free access to both sides of the film, and both emulsion
surfaces are uniformly processed to the same degree. The all-important factor of
temperature can be controlled by regulating the temperature of the water in which the
processing tanks are immersed.
Where the volume of work is large or the holding time is important, automated processors
are used. These reduce the darkroom manpower required, drastically shorten the interval
between completion of the exposure and the availability of a dry radiograph ready for
interpretation, and release the material being inspected much faster. Automated
processors move films through the various solutions according to a predetermined
schedule. Manual work is limited to putting the unprocessed film into the processor or
into the film feeder, and removing the processed radiographs from the receiving bin.
GENERAL CONSIDERATIONS
Cleanliness
In handling x-ray films, cleanliness is a prime essential. The processing room, as well
as the accessories and equipment, must be kept scrupulously clean and used only for
the purposes for which they are intended. Any solutions that are spilled should be
wiped up at once; otherwise, on evaporation, the chemicals may get into the air and
later settle on film surfaces, causing spots. The thermometer and such accessories as
film hangers should be thoroughly washed in clean water immediately after being used,
so that processing solutions will not dry on them and possibly cause contamination of
solutions or streaked radiographs when used again.
All tanks should be cleaned thoroughly before putting fresh solutions into them.
The necessary vessels or pails should be made of AISI Type 316 stainless steel with 2
to 3 percent molybdenum, or of enamelware, glass, plastic, hard rubber, or glazed
earthenware. (Metals such as aluminum, galvanized iron, tin, copper, and zinc cause
contamination and result in fog in the radiograph.) Paddles or plunger-type agitators
are practical for stirring solutions. They should be made of hard rubber, stainless steel,
or some other material that does not absorb or react with processing solutions.
Separate paddles or agitators should be provided for the developer and fixer. If the
paddles are washed thoroughly and hung up to dry immediately after use, the danger of
contamination when they are employed again will be virtually nil. A motor-driven stirrer
with a stainless steel propeller is a convenient aid in mixing solutions. In any event, the
agitation used in mixing processing solutions should be vigorous and complete, but not
violent.
MANUAL PROCESSING
When tank processing is used, the routine is, first, to mount the exposed film on a
hanger immediately after it is taken from the cassette or film holder, or removed from
the factory-sealed envelope. (See the figure below.) Then the film can be conveniently
immersed in the developer solution, stop bath, fixer solution, and wash water for the
predetermined intervals, and it is held securely and kept taut throughout the course of
the procedure.
At frequent intervals during processing, radiographic films must be agitated. Otherwise,
the solution in contact with the emulsion becomes exhausted locally, affecting the rate
and evenness of development or fixation.
Another precaution must be observed: The level of the developer solution must be kept
constant by adding replenisher. This addition is necessary to replace the solution carried
out of the developer tank by the films and hangers, and to keep the activity of the
developer constant.
Special precautions are needed in the manual processing of industrial x-ray films in roll
form. These are usually processed on the commercially available spiral stainless-steel
reels. The space between the turns of film on such a reel is small, and loading must be
done carefully lest the turns of film touch one another. The loaded reel should be placed
in the developer so that the film is vertical—that is, the plane of the reel itself is
horizontal. Agitation in the developer should not be so vigorous as to pull the edges of
the film out of the spiral recesses in the reel. The reel must be carefully cleaned with a
brush to remove any emulsion or dried chemicals that may collect within the
film-retaining grooves.
Method of fastening film on a developing hanger. Bottom clips are fastened first,
followed by top clips.
Cleanliness
Processing tanks should be scrubbed thoroughly and then well rinsed with fresh water
before fresh solutions are put into them. In warm weather especially, it is advisable to
sterilize the developer tanks occasionally. The growth of fungi can be minimized by
filling the tank with an approximately 0.1 percent solution of sodium hypochlorite
(Clorox, “101,” Sunny Sol bleaches, etc., diluted 1:30), allowing it to stand several hours
or overnight, and then thoroughly rinsing the tank. During this procedure, rooms should
be well ventilated to avoid corrosion of metal equipment and instruments by the small
concentrations of chlorine in the air. Another method is to use a solution of sodium
pentachlorophenate, such as Dowicide G fungicide, at a strength of 1 part in 1,000 parts
of water.
This solution has the advantage that no volatile substance is present and it will not
corrode metals. In preparing the solution, avoid breathing the dust and getting it or
the solution on your skin or clothing or into your eyes.
DEVELOPMENT
Developer Solutions
Prepared developers, supplied either as powders to be dissolved in water or as liquid
concentrates to be diluted with water, provide a carefully compounded formula and
uniformity of results. The two forms are comparable in performance and effective life,
but the liquid form offers greater convenience in preparation, which may be extremely
important in a busy laboratory. Powder chemicals are, however, more economical to buy.
When the exposed film is placed in the developer, the solution penetrates the emulsion
and begins to transform the exposed silver halide crystals to metallic silver. The longer
the development is carried on, the more silver is formed and hence the denser the image
becomes.
In particular, “sight development” should not be used; that is, the development time for a
radiograph should not be decided by examining the film under safelight illumination at
intervals during the course of development. It is extremely difficult to judge from the
appearance of a developed but unfixed radiograph what its appearance will be in the
dried state.
Even though the final radiograph so processed is apparently satisfactory, there is no
assurance that development was carried far enough to give the desired degree of film
contrast. Further, “sight development” can easily lead to a high level of fog caused by
excessive exposure to safelights during development.
Ideally, the temperature of the developer solution should be 68°F (20°C). A temperature
below 60°F (16°C) retards the action of the chemical and is likely to result in
underdevelopment, whereas an excessively high temperature not only may destroy the
photographic quality by producing fog but also may soften the emulsion to the extent
that it separates from the base.
When, during extended periods, the tap water will not cool the solutions to
recommended temperatures, the most effective procedure is to use mechanical
refrigeration. Conversely, heating may be required in cold climates. Under no
circumstances should ice be placed directly in processing solutions to reduce their
temperature because, on melting, the water will dilute them and possibly cause
contamination.
Because of the direct relation between temperature and time, both are of equal
importance in a standardized processing procedure. So, after the temperature of the
developer solution has been determined, films should be left in the solution for the
exact time that is required. Guesswork should not be tolerated. Instead, when the films
are placed in the solution, a timer should be set so that an alarm will sound at the end
of the time.
Agitation
It is essential to secure uniformity of development over the whole area of the film. This
is achieved by agitating the film during the course of development.
An example of streaking that can result when a film has been allowed to remain in
the solution without agitation during the entire development period.
Agitation of the film during development brings fresh developer to the surface of the film
and prevents uneven development. In small installations, where few films are
processed, agitation is most easily done by hand. Immediately after the hangers are
lowered smoothly and carefully into the developer, the upper bars of the hangers should
be tapped sharply two or three times on the upper edge of the tank to dislodge any
bubbles clinging to the emulsion. Thereafter, films should be agitated periodically
throughout the development.
Acceptable agitation results if the films are shaken vertically and horizontally and moved
from side to side in the tank for a few seconds every minute during the course of the
development. More satisfactory renewal of developer at the surface of the film is
obtained by lifting the film clear of the developer, allowing it to drain from one corner for
2 or 3 seconds, reinserting it into the developer, and then repeating the procedure, with
drainage from the other lower corner. The whole cycle should be repeated once a
minute during the development time.
Another form of agitation suitable for manual processing of sheet films is known as
“gaseous burst agitation.” It is reasonably economical to install and operate and,
because it is automatic, does not require the full-time attention of the processing room
operator. Nitrogen, because of its inert chemical nature and low cost, is the best gas to
use.
When first released, the bursts impart a sharp displacement pulse, or piston action, to the
entire volume of the solution. As the bubbles make their way to the surface, they
provide localized agitation around each small bubble. The great number of bubbles, and
the random character of their paths to the surface, provide effective agitation at the
surfaces of films hanging in the solution. (See the figure below.)
If the gas were released continuously, rather than in bursts, constant flow patterns
would be set up from the bottom to the top of the tank and cause uneven development.
These flow patterns are not encountered, however, when the gas is introduced in short
bursts, with an interval between bursts to allow the solution to settle down.
Note that the standard sizes of x-ray developing tanks will probably not be suitable for
gaseous burst agitation. Not only does the distributor at the bottom of the tank occupy
some space, but also the tank must extend considerably above the surface of the still
developer to contain the froth that results when a burst of bubbles reaches the surface.
It is therefore probable that special tanks will have to be provided if the system is
adopted.
Some compensation must be made for the decrease in developing power if uniform
radiographic results are to be obtained over a period of time. The best way to do this is
to use the replenisher system, in which the activity of the solution is not allowed to
diminish but rather is maintained by suitable chemical replenishment.
Thus, the replenisher performs the double function of maintaining both the liquid level in
the developing tank and the activity of the solution. Merely adding original-strength
developer would not produce the desired regenerating effect; development time would
have to be progressively increased to achieve a constant degree of development.
The quantity of replenisher required to maintain the properties of the developer will
depend on the average density of the radiographs processed. It is obvious that if 90
percent of the silver in the emulsion is developed, giving a dense image over the
entire film, more developing agent will be consumed. Therefore, the developer will be
exhausted to a greater degree than if the film were developed to a low density.
If this step is omitted, development continues for the first minute or so of fixation and,
unless the film is agitated almost continuously during this period, uneven development
will occur, resulting in streakiness.
In addition, if films are transferred to the fixer solution without the use of an acid stop
bath or thorough rinsing, the alkali from the developer solution retained by the gelatin
neutralizes some of the acid in the fixer solution. After a certain quantity of acid has
been neutralized, the chemical balance of the fixer solution is upset and its usefulness
is greatly impaired—the hardening action is destroyed and stains are likely to be
produced in the radiographs. Removal of as much of the developer solution as possible
before fixation prolongs the life of the fixer solution and assures the routine production
of radiographs of better quality.
Stop Bath
A stop bath consisting of 16 fluidounces of 28 percent acetic acid per gallon of bath (125
mL per litre) may be used. If the stop bath is made from glacial acetic acid, the proportions
should be 41/2 fluidounces of glacial acetic acid per gallon of bath, or 35 mL per litre.
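The two recipes deliver essentially the same amount of acid per litre, as a quick check shows (treating glacial acetic acid as roughly 100 percent acid, an assumption made for the arithmetic):

```python
# mL of pure acetic acid per litre of stop bath from each recipe.
from_28_percent = 125 * 0.28  # 125 mL/L of 28% acetic acid
from_glacial = 35 * 1.00      # 35 mL/L of glacial (~100%) acetic acid

print(from_28_percent, from_glacial)  # both about 35 mL per litre
```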
Warning
Glacial acetic acid should be handled only under adequate ventilation, and great care
should be taken to avoid injury to the skin or damage to clothing. Always add the glacial
acetic acid to the water slowly, stirring constantly, and never water to acid; otherwise,
the solution may boil and spatter acid on hands and face, causing severe burns.
When development is complete, the films are removed from the developer, allowed to
drain 1 or 2 seconds (not back into the developer tank), and immersed in the stop bath.
The developer draining from the films should be kept out of the stop bath. Instead of
draining, a few seconds’ rinse in fresh running water may be used prior to inserting the
films in the stop bath. This will materially prolong the life of the bath.
Films should be immersed in the stop bath for 30 to 60 seconds (ideally, at 65 to 70°F
or 18 to 21°C) with moderate agitation and then transferred to the fixing bath. Five
gallons of stop bath will treat about 100 14 x 17-inch films, or equivalent. If a developer
containing sodium carbonate is used, the stop bath temperature must be maintained
between 65 and 70°F (18 and 21°C); otherwise, blisters containing carbon dioxide may
be formed in the emulsion by action of the stop bath.
Rinsing
If a stop bath cannot be used, a rinse in running water for at least 2 minutes should be
used. It is important that the water be running and that it be free of silver or fixer
chemicals. The tank that is used for the final washing after fixation should not be used
for this rinse.
If the flow of water in the rinse tanks is only moderate, it is desirable to agitate the films
carefully, especially when they are first immersed. Otherwise, development will be uneven,
and there will be streaks in areas that received a uniform exposure.
Fixing
The purpose of fixing is to remove all of the undeveloped silver salt of the emulsion,
leaving the developed silver as a permanent image. The fixer has another important
function—hardening the gelatin so that the film will withstand subsequent drying with warm
air. The interval between placing the film in the fixer solution and the disappearance of the
original diffuse yellow milkiness is known as the clearing time. It is during this time that the
fixer is dissolving the undeveloped silver halide.
However, additional time is required for the dissolved silver salt to diffuse out of the
emulsion and for the gelatin to be hardened adequately. Thus, the total fixing time
should be appreciably greater than the clearing time. The fixing time in a relatively fresh
fixing bath should, in general, not exceed 15 minutes; otherwise, some loss of low
densities may occur. The films should be agitated vigorously when first placed in the
fixer and at least every 2 minutes thereafter during the course of fixation to assure
uniform action of the chemicals.
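As a rough arithmetic sketch of the guidance above, a common darkroom rule of thumb (an assumption here, not stated in the text) is to fix for about twice the clearing time, while respecting the 15-minute ceiling:

```python
# Fixing-time sketch: the total fixing time should be appreciably greater than
# the clearing time but should not exceed 15 minutes. The "twice the clearing
# time" rule is a common darkroom convention, used here as an assumption.

def fixing_time_minutes(clearing_time_min):
    """Suggested total fixing time for a given observed clearing time."""
    return min(2.0 * clearing_time_min, 15.0)

print(fixing_time_minutes(4))   # 8.0 minutes
print(fixing_time_minutes(10))  # capped at 15.0 minutes
```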
During use, the fixer solution accumulates soluble silver salts which gradually inhibit its
ability to dissolve the unexposed silver halide from the emulsion. In addition, the fixer
solution becomes diluted by rinse water or stop bath carried over by the film. As a
result, the rate of fixing decreases, and the hardening action is impaired. The dilution
can be reduced by thorough draining of films before immersion in the fixer and, if
desired, the fixing ability can be restored by replenishment of the fixer solution.
The usefulness of a fixer solution is ended when it has lost its acidity or when clearing
requires an unusually long interval. The use of an exhausted solution should always be
avoided because abnormal swelling of the emulsion often results from deficient
hardening and drying is unduly prolonged; at high temperatures reticulation or sloughing
away of the emulsion may take place. In addition, neutralization of the acid in the fixer
solution frequently causes colored stains to appear on the processed radiographs.
Washing
X-ray films should be washed in running water so circulated that the entire emulsion area
receives frequent changes. For a proper washing, the bar of the hanger and the top clips
should always be covered completely by the running water, as illustrated in the figure
below.
Water should flow over the tops of the hangers in the washing compartment. This
avoids streaking due to contamination of the developer when hangers are used
over again.
Efficient washing of the film depends both on a sufficient flow of water to carry the fixer
away rapidly and on adequate time to allow the fixer to diffuse from the film. Washing
time at 60 to 80° F (15.5 to 26.5° C) with a rate of water flow of four renewals per hour
is 30 minutes.
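The renewal rate above translates to a flow rate once a tank size is chosen. The 45 L tank below is an assumed example, not a figure from the text:

```python
# Flow needed for "four renewals per hour" in a wash tank of a given size.
# A simple arithmetic sketch; the tank volume is an assumed example value.

def wash_flow_lpm(tank_litres, renewals_per_hour=4):
    """Litres per minute of fresh water required for the given renewal rate."""
    return tank_litres * renewals_per_hour / 60.0

print(wash_flow_lpm(45))  # a 45 L tank needs 3.0 L/min
```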
The films should be placed in the wash tank near the outlet end. Thus, the films most
heavily laden with fixer are first washed in water that is somewhat contaminated with
fixer from the films previously put in the wash tank. As more films are put in the wash
tank, those already partially washed are moved toward the inlet, so that the final part of
the washing of each film is done in fresh, uncontaminated water.
The tank should be large enough to wash films as rapidly as they can be passed
through the other solutions. Any excess capacity is wasteful of water or, with the same
flow as in a smaller tank, diminishes the effectiveness with which fixer is removed from
the film emulsion. Insufficient capacity, on the other hand, encourages insufficient
washing, leading to later discoloration or fading of the image.
The “cascade method” of washing is the most economical of water and results in better
washing in the same length of time. In this method, the washing compartment is divided
into two sections. The films are taken from the fixer solution and first placed in Section
A. (See the figure below.) After they have been partially washed, they are moved to
Section B, leaving Section A ready to receive more films from the fixer. Thus, films
heavily laden with fixer are washed in somewhat contaminated water, and washing of
the partially washed films is completed in fresh water.
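The advantage of the cascade arrangement can be illustrated with a toy dilution model. Everything numeric here (rate constant, contamination levels, stage times) is assumed for illustration and is not from the text:

```python
# Toy model of the two-section cascade wash: in each section, the film's
# residual fixer decays toward the fixer concentration of the water in that
# section. All constants below are illustrative assumptions.

import math

def wash_stage(film_residual, water_conc, k, minutes):
    """Residual fixer on the film after washing in water of given contamination."""
    return water_conc + (film_residual - water_conc) * math.exp(-k * minutes)

k = 0.15     # assumed removal-rate constant, per minute
start = 1.0  # relative fixer load on the film leaving the fixer

# Single tank, 30 min, water mildly contaminated by all incoming films:
single = wash_stage(start, 0.05, k, 30)

# Cascade: 15 min in Section A (more contaminated), 15 min in Section B (fresh):
after_a = wash_stage(start, 0.08, k, 15)
cascade = wash_stage(after_a, 0.005, k, 15)

print(round(single, 3), round(cascade, 3))  # cascade leaves less residual fixer
```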
Gelatin has a natural tendency to soften considerably with prolonged washing in water
above 68°F (20°C). Therefore, if possible, the temperature of the wash water should be
maintained between 65 and 70°F (18 and 21°C).
Formation of a cloud of minute bubbles on the surfaces of the film in the wash tank
sometimes occurs. These bubbles interfere with washing the areas of emulsion
beneath them, and can subsequently cause a discoloration or a mottled appearance of
the radiograph. When this trouble is encountered, the films should be removed from the
wash water and the emulsion surfaces wiped with a soft cellulose sponge at least twice
during the washing period to remove the bubbles. Vigorous tapping of the top bar of the
hanger against the top of the tank is rarely sufficient to remove the bubbles.
Prevention of Water Spots
When films are removed from the wash tanks, small
drops of water cling to the surfaces of the emulsions. If the films are dried
rapidly, the areas under the drops dry more slowly than the surrounding areas.
This uneven drying causes distortion of the gelatin, changing the density of the
silver image, and results in spots that are frequently visible and troublesome in
the finished radiograph. Such “water spots” can be largely prevented by
immersing the washed films for 1 or 2 minutes in a wetting agent, then allowing
the bulk of the water to drain off before the films are placed in the drying cabinet.
This solution causes the surplus water to drain off the film more evenly,
reducing the number of clinging drops. This reduces the drying time and lessens
the number of water spots occurring on the finished radiographs.
Drying
Convenient racks are available commercially for holding hangers during drying when
only a small number of films are processed daily. When the racks are placed high on
the wall, the films can be suspended by inserting the crossbars of the processing
hangers in the holes provided. This obviates the danger of striking the radiographs
while they are wet, or spattering water on the drying surfaces, which would cause spots
on them. Radiographs dry best in warm, dry air that is changing constantly. When a
considerable number of films are to be processed, suitable driers with built-in fans,
filters, and heaters or desiccants are commercially available.
Marks in Radiographs
Defects, spots, and marks of many kinds may occur if the preceding general rules for
manual processing are not carefully followed. Perhaps the most common processing
defect is streakiness and mottle in areas that receive a uniform exposure. This
unevenness may be a result of:
• Failure to agitate the films sufficiently during development or the presence of too
many hangers in the tank, resulting in inadequate space between neighboring
films.
• The use of an exhausted stop bath or failure to agitate the film properly in the
stop bath.
Other characteristic marks are dark spots caused by the spattering of developer
solution, static electric discharges, and finger marks; and dark streaks occurring when
the developer-saturated film is inspected for a prolonged time before a safelight lamp. If
possible, films should never be examined at length until they are dry.
A further trouble is fog - that is, development of silver halide grains other than those
affected by radiation during exposure. It is a great source of annoyance and may be
caused by accidental exposure to light, x-rays, or radioactive substances; contaminated
developer solution; development at too high a temperature; or storing films under
improper conditions or beyond the expiration dates stamped on the cartons.
Processing Control
The essence of automated processing is control, both chemical and mechanical. In
order to develop, fix, wash, and dry a radiograph in the short time available in an
automated processor, specifically formulated chemicals are used. The processor
maintains the chemical solutions at the proper temperatures, agitates and replenishes
the solutions automatically, and transports the films mechanically at a carefully
controlled speed throughout the processing cycle. Film characteristics must be
compatible with processing conditions, shortened processing times and the mechanical
transport system. From the time a film is fed into the processor until the dry radiograph
is delivered, chemicals, mechanics, and film must work together.
Automated Processor Systems
Automated processors incorporate a number of systems which transport, process, and
dry the film and replenish and recirculate the processing solutions. A knowledge of
these systems and how they work together will help in understanding and using
automated processing equipment.
Transport System
The function of the transport system (see the figure below) is to move film through the
developer and fixer solutions and through the washing and drying sections, holding the
film in each stage of the processing cycle for exactly the right length of time, and finally to
deliver the ready-to-read radiograph.
In most automated processors now in use, the film is transported by a system of rollers
driven by a constant speed motor. The rollers are arranged in a number of
assemblies—entrance roller assembly, racks, turnarounds (which reverse direction of
film travel within a tank), crossovers (which transfer films from one tank to another), and
a squeegee assembly (which removes surface water after the washing cycle). The
number and specific design of the assemblies may vary from one model of processor to
another, but the basic design is the same.
It is important to realize that the film travels at a constant speed in a processor, but that
the speed in one model may differ from that in another. Processing cycles—the time
interval from the insertion of an unprocessed film to the delivery of a dry radiograph—in
general range downward from 15 minutes. Because one stage of the cycle may have to
be longer than another, the racks may vary in size—the longer the assembly, the longer
the film takes to pass through a particular stage of processing.
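The relation between rack length and stage time described above can be sketched directly. The transport speed and path lengths below are assumed example values, not specifications of any particular processor:

```python
# Constant-speed transport: the time a film spends in each stage is set by the
# length of the film path through that stage's rack. All numbers here are
# illustrative assumptions.

def stage_time_seconds(path_length_cm, speed_cm_per_s):
    """Time a film spends in a stage, given its rack path length."""
    return path_length_cm / speed_cm_per_s

speed = 1.2  # assumed transport speed, cm/s
for stage, path in [("developer", 180), ("fixer", 150), ("wash", 120), ("dryer", 200)]:
    print(stage, round(stage_time_seconds(path, speed), 1), "s")
```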
Although the primary function of the transport system is to move the film through the
processor in a precisely controlled time, the system performs two other functions of
importance to the rapid production of high-quality radiographs. First, the rollers produce
vigorous uniform agitation of the solutions at the surfaces of the film, contributing
significantly to the uniformity of processing.
Second, the top wet rollers in the racks and the rollers in the crossover assemblies
effectively remove the solutions from the surfaces of the film, reducing the amount of
solution carried over from one tank to the next and thus prolonging the life of the fixer
and increasing the efficiency of washing. Most of the wash water clinging to the surface
of the film is removed by the squeegee rollers, making it possible to dry the processed
film uniformly and rapidly, without blemishes.
Water System
The water system of automated processors has two functions - to wash the films and to
help stabilize the temperature of the processing solutions. Hot and cold water are
blended to the proper temperature and the tempered water then passes through a flow
regulator which provides a constant rate of flow.
Depending upon the processor, part or all of the water is used to help control the
temperature of the developer. In some processors, the water also helps to regulate the
temperature of the fixer. The water then passes to the wash tank where it flows through
and over the wash rack. It then flows over a weir (dam) at the top of the tank and into the
drain.
Sometimes the temperature of the cold water supply may be higher than required by
the processor. In this situation, it is necessary to cool the water before piping it to the
processor. This is the basic pattern of the water system of automated processors; the
details of the system may vary slightly, however.
Recirculation Systems
Recirculation of the fixer and developer solutions performs the triple functions of uniformly
mixing the processing and replenisher solution, maintaining them at constant
temperatures, and keeping thoroughly mixed and agitated solutions in contact with the
film.
The solutions are pumped from the processor tanks, passed through devices to
regulate temperature, and returned to the tanks under pressure. This pressure forces
the solutions upward and downward, inside, and around the transport system
assemblies. As a result of the vigorous flow in the processing tanks, the solutions are
thoroughly mixed and agitated and the films moving through the tanks are constantly
bathed in fresh solutions.
Replenishment Systems
Accurate replenishment of the developer and fixer solutions is even more important in
automated processing than in manual processing. In both techniques, accurate
replenishment is essential to proper processing of the film and to long life of the
processing solutions; but, if the solutions are not properly replenished in an automated
processor, the film may swell too much and become slippery, with the result that it might
get stuck in the processor.
When a film is fed into the processor, pumps are activated, which pump replenisher
from storage tanks to the processing tanks. As soon as the film has passed the
entrance assembly, the pumps stop—replenisher is added only during the time required
for a sheet of film to pass through the entrance assembly. The amount of replenisher
added is thus related to the size of the sheet of film. The newly added replenisher is
blended with the processor solutions by the recirculation pumps. Excess processing
solutions flow over a weir at the top of the tanks into the drain.
Different types of x-ray films require different quantities of processing chemicals. It is,
therefore, important that the solutions be replenished at the rate proper for the type or
types of film being processed and the average density of the radiographs.
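Because the replenisher pumps run only while the film passes the entrance assembly, the volume added is proportional to film length. A sketch, with an assumed pump rate and transport speed:

```python
# Replenisher-volume sketch: the pump runs while the film passes the entrance
# assembly, so the volume added scales with film length. Pump rate and
# transport speed are assumed example values.

def replenisher_ml(film_length_cm, transport_speed_cm_per_s, pump_rate_ml_per_s):
    """Replenisher delivered for one sheet of film."""
    pump_seconds = film_length_cm / transport_speed_cm_per_s
    return pump_seconds * pump_rate_ml_per_s

# A 43 cm (17 in.) film receives twice the volume of a 21.5 cm film:
print(round(replenisher_ml(43.0, 1.2, 2.0), 1))
print(round(replenisher_ml(21.5, 1.2, 2.0), 1))
```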
Dryer System
Rapid drying of the processed radiograph depends on proper conditioning of the film in
the processing solutions, effective removal of surface moisture by the squeegee rollers,
and a good supply of warm air striking both surfaces of the radiograph.
Heated air is supplied to the dryer section by a blower. Part of the air is recirculated; the
rest is vented to prevent buildup of excessive humidity in the dryer. Fresh air is drawn
into the system to replace that which is vented.
Rapid Access to Processed Radiographs
Approximately twelve to fourteen minutes after exposed films are fed into the unit, they
emerge processed, washed, dried, and ready for interpretation. Conservatively, these
operations take approximately 1 hour in hand processing. Thus, with a saving of at least
45 minutes in processing time, the holding time for parts being radiographed is greatly
reduced. It follows that more work can be scheduled for a given period because of the
speed of processing and the consequent reduction in space required for holding
materials until the radiographs are ready for checking.
Uniformity of Radiographs
Automated processing is very closely controlled time-temperature processing. This,
combined with accurate automatic replenishment of solutions, produces day-after-day
uniformity of radiographs rarely achieved in hand processing. It permits the setting up
of exposure techniques that can be used with the knowledge that the films will receive
optimum processing and be free from processing artifacts. Processing variables are
virtually eliminated.
Small Space Requirements
Automated processors require only about 10 square feet of floor space. The size of the
processing room can be reduced because hand tanks and drying facilities are not
needed. A film loading and unloading bench, film storage facilities, plus a small open
area in front of the processor feed tray are all the space required. The processor, in
effect, releases valuable floor space for other plant activities. If the work load increases
to a point where more processors are needed, they can be added with minimal
additional space requirements. Many plants with widely separated exposure areas have
found that dispersed processing facilities using two or more processors greatly increase
the efficiency of operations.
Chemistry of Automated Processing
Automated processing is not just a mechanization of hand processing, but a system
depending on the interrelation of mechanics, chemicals, and film. A special chemical
system is therefore required to meet the particular need of automated processing.
In automated processors, if a film becomes slippery, it could slow down in the transport
system, so that films following it could catch up and overlap. Or it might become too
sticky to pass some point and get stuck or even wrap around a roller. If the emulsion
becomes too soft it could be damaged by the rollers. These occurrences, of course,
cannot be tolerated. Therefore, processing solutions used in automated processors
must be formulated to control, within narrow limits, the physical properties of the film.
Consequently, the mixing instructions with these chemicals must be followed exactly.
The rapid action of these processing solutions is achieved in part by using them at
temperatures higher than those suitable for manual processing of film.
The hardening developer develops the film very rapidly at its normal operating
temperature. Moreover, the formulation of the solution is carefully balanced so that
optimum development is achieved in exactly the time required for the hardener to
harden the emulsion. If too much hardener is in a solution, the emulsion hardens too
quickly for the developer to penetrate sufficiently, and underdevelopment results. If too
little hardener is in the solution, the hardening process is slowed, overdevelopment of
film occurs, and transport problems may be encountered. To maintain the proper
balance, it is essential that developer solution be replenished at the rate proper for the
type or types of film being processed and the average density of the radiographs.
Because washing, drying, and keeping properties of the radiograph are closely tied to
the effectiveness of the fixation process, special fixers are needed for automatic
processing. Not only must they act rapidly, but they must maintain the film at the proper
degree of hardness for reliable transport. Beyond this, the fixer must be readily removed
from the emulsion so that proper washing of the radiograph requires only a short time. A
hardening agent added to the fixer solution works with the fixing chemicals to condition
the film for washing and for rapid drying without physical damage to the emulsion.
Experience has shown that the solutions in this chemical system have a long life. In
general, it is recommended that the processor tanks be emptied and cleaned after
50,000 films of mixed sizes have been processed or at the end of 3 months, whichever
is sooner. This may vary somewhat depending on local use and conditions; but, in
general, this schedule will give very satisfactory results.
Film-Feeding Procedures
Sheet Film
The figure below shows the proper film-feeding procedures. The arrows indicate the
direction in which films are fed into the processor. Wherever possible, it is advisable to
feed all narrower films side by side so as to avoid overreplenishment of the solutions.
This will aid in balanced replenishment and will result in maximum economy of the
solutions used.
Care should be taken that films are fed into the processor square with the edge of a
side guide of the feed tray, and that multiple films are started at the same time. In no
event should films less than 7 inches long be fed into the processor.
Roll Film
Roll films in widths of 16 mm to 17 inches and long strips of film may be processed in a
KODAK PROFESSIONAL INDUSTREX Processor. This requires a somewhat different
procedure than is used when feeding sheet film. Roll film in narrow widths and many
strips have an inherent curl because they are wound on spools. Because of this curl, it
is undesirable to feed roll or strip film into the processor without attaching a sheet of
leader film to the leading edge of the roll or strip. Ideally, the leader should be
unprocessed radiographic film. Sheet film that has been spoiled in exposure or
accidentally light-fogged can be preserved and used for this purpose.
The leader film should be at least as wide as, and preferably wider than, the roll film and
be a minimum of 10 inches long. It is attached to the roll film with a butt joint using
pressure-sensitive polyester tape, such as SCOTCH Brand Electrical Tape No. 3, one
inch in width. (Other types of tape may not be suitable due to the solubility of their bases
in the processing solutions.) Care should be taken that none of the adhesive side of the
tape is exposed to the processing solutions. Otherwise, the tape may stick to the
processor rollers or bits of adhesive may be transferred to the rollers, resulting in
processing difficulties.
If narrow widths of roll or strip films are being fed, they should be kept as close as possible
to one side guide of the feed tray. This will permit the feeding of standard-size sheet films
at the same time. Where quantities of roll and strip films are fed, the replenisher pump
should be turned off for a portion of the time. This will prevent overreplenishment and
possible upset of the chemical balance in the processor tanks.
FILING RADIOGRAPHS
After the radiograph is dry, it must be prepared for filing. With a manually processed
radiograph, the first step is the elimination of the sharp projections that are caused by
the film-hanger clips. Use of film corner cutters will enhance the appearance of the
radiograph, preclude its scratching others with which it may come in contact, facilitate
its insertion into an envelope, and conserve filing space.
The radiograph should be placed in a heavy manila envelope of the proper size, and all
of the essential identification data should be written on the envelope so that it can be
easily handled and filed. Envelopes having an edge seam, rather than a center seam,
and joined with a non-hygroscopic adhesive are preferred, since occasional staining
and fading of the image is caused by certain adhesives used in the manufacture of
envelopes. Ideally, radiographs should be stored at a relative humidity of 30 to 50
percent.
RADIOGRAPHIC IMAGE QUALITY AND DETAIL VISIBILITY
Because the purpose of most radiographic inspections is to examine a specimen for
inhomogeneity, a knowledge of the factors affecting the visibility of detail in the finished
radiograph is essential. The summary chart below shows the relations of the various
factors influencing image quality and radiographic sensitivity. For convenience, a few important
definitions will be repeated.
Factors Affecting Image Quality
Radiographic Image Quality
• Radiographic Contrast
  - Subject Contrast. Affected by: A - Absorption differences in specimen (thickness,
    composition, density); B - Radiation wavelength; C - Scattered radiation. Reduced by:
    1 - Masks and diaphragms; 2 - Filters; 3 - Lead screens; 4 - Potter-Bucky diaphragm
  - Film Contrast. Affected by: A - Type of film; B - Degree of development (type of
    developer, time and temperature of development, activity of developer, degree of
    agitation); C - Density; D - Type of screens (fluorescent vs lead or none)
• Definition
  - Geometric Factors. Affected by: A - Focal-spot size; B - Source-film distance;
    C - Specimen-film distance; D - Abruptness of thickness changes in specimen;
    E - Screen-film contact; F - Motion of specimen
  - Film Graininess, Screen Mottle. Affected by: A - Type of film; B - Type of screen;
    C - Radiation wavelength; D - Development
Radiographic sensitivity is a general or qualitative term referring to the size of the
smallest detail that can be seen in a radiograph, or to the ease with which the images
of small details can be detected. Phrased differently, it is a reference to the amount of
information in the radiograph. Note that radiographic sensitivity depends on the
combined effects of two independent sets of factors. One is radiographic contrast (the
density difference between a small detail and its surroundings) and the other is
definition (the abruptness and the “smoothness” of the density transition). See the figure
below.
Advantage of higher radiographic contrast (left) is largely offset by poor definition. Despite
lower contrast (right), better rendition of detail is obtained by improved definition.
Radiographic contrast between two areas of a radiograph is the difference between the
densities of those areas. It depends on both subject contrast and film contrast. Subject
contrast is the ratio of x-ray or gamma-ray intensities transmitted by two selected
portions of a specimen. (See the figure below.) Subject contrast depends on the nature
of the specimen, the energy (spectral composition, hardness, or wavelengths) of the
radiation used, and the intensity and distribution of the scattered radiation, but is
independent of time, milliamperage or source strength, and distance, and of the
characteristics or treatment of the film.
With the same specimen, the lower-kilovoltage beam (left) produces higher subject contrast
than does the higher-kilovoltage beam (right).
Film contrast refers to the slope (steepness) of the characteristic curve of the film. It
depends on the type of film, the processing it receives, and the density. It also depends
on whether the film is exposed with lead screens (or direct) or with fluorescent screens.
Film contrast is independent, for most practical purposes, of the wavelengths and
distribution of the radiation reaching the film, and hence is independent of subject
contrast.
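The slope just described can be computed between any two points on the characteristic curve. The sample points below are assumed values on an illustrative H & D curve, not data from the text:

```python
# Film contrast as the slope (gradient) of the characteristic curve:
# G = (D2 - D1) / (log10 E2 - log10 E1), between two points on the curve.

def gradient(log_e1, d1, log_e2, d2):
    """Average gradient between two (relative log exposure, density) points."""
    return (d2 - d1) / (log_e2 - log_e1)

# Assumed example points on an H & D curve:
print(round(gradient(1.3, 1.0, 1.8, 2.5), 2))  # 3.0
```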
Definition refers to the sharpness of outline in the image. It depends on the types of
screens and film used, the radiation energy (wavelengths, etc), and the geometry of the
radiographic setup.
SUBJECT CONTRAST
Subject contrast decreases as kilovoltage is increased. The decreasing slope
(steepness) of the lines of the earlier exposure chart as kilovoltage increases illustrates
the reduction of subject contrast as the radiation becomes more penetrating. For
example, consider a steel part containing two thicknesses, 3/4 inch and 1 inch, which is
radiographed first at 160 kV and then at 200 kV.
In the table above, column 3 shows the exposure in milliampere-minutes required to
reach a density of 1.5 through each thickness at each kilovoltage. These data are from
the exposure chart mentioned above. It is apparent that the milliampere-minutes
required to produce a given density at any kilovoltage are inversely proportional to the
corresponding x-ray intensities passing through the different sections of the specimen.
Column 4 gives these relative intensities for each kilovoltage. Column 5 gives the ratio
of these intensities for each kilovoltage.
Column 5 shows that, at 160 kV, the intensity of the x-rays passing through the 3/4-inch
section is 3.8 times greater than that passing through the 1-inch section. At 200 kV, the
radiation through the thinner portion is only 2.5 times that through the thicker. Thus, as
the kilovoltage increases, the ratio of x-ray transmission of the two thicknesses
decreases, indicating a lower subject contrast.
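The kilovoltage effect can be reproduced numerically with a narrow-beam attenuation model. The effective attenuation coefficients below are assumptions, chosen so that the computed ratios match the 3.8 and 2.5 quoted above:

```python
# Subject-contrast sketch: intensity ratio through 3/4 in. and 1 in. sections
# of steel, using assumed effective linear attenuation coefficients.

import math

def intensity_ratio(mu_per_inch, thin_in, thick_in):
    """I_thin / I_thick for a narrow beam: exp(mu * (thick - thin))."""
    return math.exp(mu_per_inch * (thick_in - thin_in))

mu_160 = 5.34  # assumed effective value at 160 kV, per inch
mu_200 = 3.67  # assumed effective value at 200 kV, per inch

print(round(intensity_ratio(mu_160, 0.75, 1.0), 1))  # 3.8
print(round(intensity_ratio(mu_200, 0.75, 1.0), 1))  # 2.5
```

The harder 200 kV beam gives the smaller ratio, i.e. the lower subject contrast.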
FILM CONTRAST
The dependence of film contrast on density must be kept in mind when considering
problems of radiographic sensitivity. In general the contrast of radiographic films,
except those designed for use with fluorescent screens, increases continuously with
density in the usable density range. Therefore, for films that exhibit this continuous
increase in contrast, the best density, or upper limit of density range, to use is the
highest that can conveniently be viewed with the illuminators available. Adjustable high-
intensity illuminators that greatly increase the maximum density that can be viewed are
commercially available.
The use of high densities has the further advantage of increasing the range of radiation
intensities that can be usefully recorded on a single film. This in turn permits, in x-ray
radiography, the use of lower kilovoltage, with resulting increase in subject contrast and
radiographic sensitivity.
All films exhibit graininess to a greater or lesser degree. In general, the slower films
have lower graininess than the faster. Thus, Film Y would have a lower graininess than
Film X. The graininess of all films increases as the penetration of the radiation
increases, although the rate of increase may be different for different films. The
graininess of the images produced at high kilovoltages makes the slow, inherently
fine-grain films especially useful in the million- and multimillion-volt range. When
sufficient exposure can be given, they are also useful with gamma rays.
The use of lead screens has no significant effect on film graininess. However,
graininess is affected by processing conditions, being directly related to the degree of
development. For instance, if development time is increased for the purpose of
increasing film speed, the graininess of the resulting image is likewise increased.
Conversely, a developer or developing technique that results in an appreciable
decrease in graininess will also cause an appreciable loss in film speed. However,
adjustments made in development technique to compensate for changes in temperature
or activity of a developer will have little effect on graininess.
Such adjustments are made to achieve the same degree of development as would be
obtained in the fresh developer at a standard processing temperature, and therefore the
graininess of the film will be essentially unaffected. Another source of the irregular
density in uniformly exposed areas is the screen mottle encountered in radiography with
the fluorescent screens. The screen mottle increases markedly as hardness of the
radiation increases. This is one of the factors that limits the use of fluorescent screens
at high voltage and with gamma rays.
INTERPRETATION
Viewing of Radiographs
Illuminator Requirements
1. Spot Viewers
2. Area Viewers
3. Strip Viewers
4. Combination of Spot and Area Viewers.
Viewers must have power ventilators to cool the intense light source required for
viewing high-density radiographs. Most viewers use one or more photoflood
incandescent lamps as the light source. In addition, a light diffuser is required to
eliminate variation in light intensity. A good illuminator will employ a rheostat to vary
the light intensity, allowing lower-density areas to be viewed under optimum light
conditions. A film density of 2.0 allows only 1% of the incident light to be transmitted
through the film, whereas a density of 4.0 allows only 0.01% transmission. This fact
illustrates the necessity of high-intensity illuminators.
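The transmission figures quoted above follow directly from the definition of optical density, T = 10^-D:

```python
# Optical density and transmitted light: T = 10 ** (-D).
# This reproduces the figures above: D = 2.0 passes 1% of the incident light,
# D = 4.0 passes 0.01%.

def percent_transmitted(density):
    """Percentage of incident light transmitted through film of given density."""
    return 100.0 * 10.0 ** (-density)

print(percent_transmitted(2.0))  # 1.0 (%)
print(percent_transmitted(4.0))  # 0.01 (%)
```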
Background Lighting
The film illuminator should be located in an area that allows for background light
control. Although the viewing room need not be completely dark, no direct light should
impinge on the radiograph being viewed except that from the high-intensity light source.
All precautions should be taken to ensure that light is not reflecting off the surface of the
radiograph and, potentially, distracting the film interpreter. It is desirable for the
interpreter to adapt himself to the conditions of the viewing room for at least 10 minutes
before interpreting radiographs.
Viewing Aids
During the process of interpreting and evaluating radiographs, numerous aids may be
used to enhance the interpreter's ability to discover small indications. These include
masks to cover portions of the film and allow concentration on specific areas. Magnifiers
are also used to magnify small indications wherever necessary.
INTERPRETATION AIDS
Reference radiographs with known discontinuity images are very useful in evaluating
and interpreting radiographs. They are available for different product forms, such as
castings, welds, tubular components, pressure vessels etc. They are commercially
available from several sources including ASTM and ASNT.
JUDGING RADIOGRAPHIC QUALITY
Film Density
The optical density of the film is an accepted measure of the amount of information that
has been recorded. The density is in proportion to the number of silver halide crystals
that have been exposed and converted into metallic silver. The greater the density, the
more detail is available for interpretation. In accordance with the most commonly used
standards, the following film density guidelines are used to judge the quality of the
radiograph in the area of interest.
For viewing two superimposed radiographs, known as composite film viewing, each film
is usually required to have a minimum density of 1.3. The transmitted film density shall
be measured through the radiographic image of the body of the appropriate hole
penetrameter or adjacent to the designated wire of a wire penetrameter. A tolerance of
0.05 density units is allowed for variation between densitometer readings.
If the density of the radiograph anywhere in the area of interest varies by more than
minus 15% or plus 30% from the density through the body of the hole penetrameter, or
adjacent to the wire of a wire penetrameter, an additional penetrameter is used for each
exceptional area and the radiograph is retaken.
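The minus 15% / plus 30% density rule can be expressed as a simple acceptance check. The function below is an illustrative sketch, not taken from any code or standard:

```python
def density_within_tolerance(measured: float, reference: float) -> bool:
    """Check a film density reading against the density through the
    penetrameter body, allowing -15% to +30% variation."""
    return 0.85 * reference <= measured <= 1.30 * reference

reference = 2.0  # density through the hole penetrameter body (example value)
print(density_within_tolerance(2.5, reference))  # True  (+25%)
print(density_within_tolerance(1.6, reference))  # False (-20%)
```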
Artifacts
Artifacts on a radiograph can reduce its quality significantly and can
cause misinterpretation if not thoroughly understood. Most film artifacts are caused by
improper film processing and careless film handling. In addition, the film
can be partially fogged or mottled due to improper storage. Commonly occurring film
artifacts therefore include processing stains and streaks, scratches, pressure and
crimp marks, static marks, and fog.
Penetrameters
While evaluating the adequacy of the radiograph, carefully examine the penetrameter
image to ensure that the complete penetrameter outline, as well as the required hole, can be
seen. Make sure that the correct penetrameter has been selected for the required
thickness and type of material being radiographed.
To interpret and analyze the results of any radiographic examination, the interpreter
must first judge the quality of the radiograph with regard to the applied technique,
density, penetrameter selection and sensitivity, identification of film, coverage of parts
and artifacts. Secondly, the interpreter must identify the rejectable discontinuities and
judge whether they are true flaws. Any artifact that masks the area of interest must be verified
with another exposure of the same view or a different one. Knowledge of the part or
component, the manufacturing processes the component has undergone, and the
defects associated with each process helps the interpreter judge the nature of a
discontinuity.
Awareness of the following factors permits the interpreter to make sound judgments.
1. Thickness
2. Surface finish
3. Process
4. Design
5. Material form
6. Heat treatment methods
7. Physical accessibility in making the radiograph.
Welding Process
The fusion of two similar or dissimilar metals is often referred to as welding. Over forty
welding processes are available, including but not limited to arc welding, brazing, resistance
welding and solid state welding. Regardless of the process, there are three common
variables: the source of heat, the source of shielding, and the source of chemical elements.
Control of these variables is essential and when any of these variables is left uncontrolled,
the individual interpreting the radiograph would expect to find:
1. Porosity
2. Slag inclusions
3. Lack of fusion
4. Lack of Penetration
5. Cracks
6. Tungsten inclusions and/or other indications.
Casting
The ability of a molten metal to fill a mold depends on the fluidity of the molten metal,
which varies with the material type and temperature. The process of solidification occurs
when the molten metal cools below its freezing temperature.
Discontinuities typically formed during the casting process, which may or may not be
detrimental to the casting's integrity, include:
1. Porosity
2. Gas voids
3. Sand inclusions
4. Slag inclusions
5. Tears
6. Cracks
7. Unfused chaplets
8. Cold shuts
Welding-Related Discontinuities
Incomplete penetration, or lack of penetration, results when the weld metal does not
penetrate and fuse at the root. Incomplete penetration is normally a cause for rejection
because it is a potential stress raiser. In a radiograph, it appears as a sharp, dark,
continuous or intermittent line. Depending on the weld geometry, it may occur in the
center of the weld or along the edge of the weld bevel.
Slag inclusions are caused when nonmetallic materials become trapped in the weld metal
between passes or between the weld metal and the base metal. They are generally
permissible unless they are large, run along a sufficient length, or are great in number.
In a radiograph they usually appear as dark, irregular shapes of varying lengths
and widths. They appear dark when the oxide that makes up the inclusion is of a lower
atomic weight than the weld metal.
Cracks result from fractures or ruptures of the weld metal. They occur when stresses in
localized areas exceed the material's ultimate tensile strength. Cracks in all forms
are considered detrimental because their sharp extremities act as severe stress
concentrators. They normally appear as dark, irregular, wavy, or zigzag lines and may
have fine hairline indications branching off from the main crack indication.
Casting-Related Discontinuities
Porosity occurs when gas is trapped in the metal during solidification. The pores
vary in size and distribution, and the conditions are similar to those in the welding
process. In a radiograph, porosity appears as rounded dark spots of various sizes.
Gas voids (gas holes, worm holes, or blowholes) are also formed during
solidification and are considered more critical when they occur in a tail-like linear
pattern. In a radiograph, they appear as large, rounded, dark indications, normally with
smooth edges.
Slag inclusions result when impurities are introduced during the solidification process
and are of little concern unless the amount and concentration are excessive. In a
radiograph, they appear as light or dark indications depending on the relative densities of
the inclusions and the base metal.
Sand inclusions result from sand breaking free from the mold and subsequently
becoming trapped. Unless the amount of trapped sand is excessive, this is normally
an acceptable condition.
Shrinkage results from localized contraction of the cast metal as it solidifies and cools.
It may or may not be acceptable, depending on the population, design function and
several other factors. In a radiograph, shrinkage appears as irregularly shaped spots of
varying densities, which often appear to be interconnected.
Hot tears are cracks or ruptures occurring when the metal is very hot; usually not
acceptable. They appear as dark, ragged, irregular lines and may have branches of
varying densities; less clearly defined than cracks.
Cracks result from stresses in the cast material which occur at relatively low
temperatures; always unacceptable as they have a tendency to propagate under stress.
Appear as dark continuous or intermittent lines, usually quite well defined.
Unfused chaplets result from failure of the liquid metal to consume the metal device
used to support the core inside the mold. If the chaplet is not totally consumed, the part
is normally unacceptable because of obvious metallurgical problems. Unfused chaplets
are easily identified as circular dark lines approximately the same size as the core
support device.
Cold shuts are a result of splashing, surging, interrupted pouring, or the meeting of
two streams of molten metal. In radiograph, cold shuts appear as dark lines or linear
areas of varying length.
The average person in the United States receives a whole body equivalent dose of
about 360 mrem every year, mostly from natural sources of radiation such as radon. In
1992, the average dose received by nuclear power workers in the United States was 3
mSv whole body equivalent in addition to their background dose.
What is the effect of radiation?
Radiation causes ionizations in the molecules of living cells. These ionizations result in
the removal of electrons from the atoms, forming ions or charged atoms. The ions
formed then can go on to react with other atoms in the cell, causing damage. An
example of this would be if a gamma ray passes through a cell, the water molecules
near the DNA might be ionized and the ions might react with the DNA causing it to
break.
At low doses, such as what we receive every day from background radiation, the cells
repair the damage rapidly. At higher doses (up to 1 Sv), the cells might not be able to
repair the damage, and the cells may either be changed permanently or die. Most cells
that die are of little consequence, since the body can simply replace them. Cells changed
permanently may go on to produce abnormal cells when they divide. In the right
circumstance, these cells may become cancerous. This is the origin of our increased
risk of cancer as a result of radiation exposure.
At even higher doses, the cells cannot be replaced fast enough and tissues fail to
function. An example of this is “radiation sickness,” a condition that results after high
acute doses to the whole body (>2 Gy): the body’s immune system is damaged and
cannot fight off infection and disease. Several hours after exposure, nausea and
vomiting occur, followed by diarrhea and general weakness. With higher whole body
doses (>10 Gy), the intestinal lining is damaged to the point that it cannot perform its
functions of taking in water and nutrients and protecting the body against infection. At
whole body doses near 7 Gy, if no medical attention is given, about 50% of those
exposed are expected to die within 60 days of the exposure, due mostly to infections.
If someone receives a whole body dose of more than 20 Gy, they will suffer vascular
damage to the vital blood supply of nervous tissue such as the brain. At doses this high,
it is likely that 100% of those exposed will die, from a combination of the vascular
damage and all the causes associated with lower doses.
There is a large difference between a dose to the whole body and a dose to only part of
the body. Most cases we will consider are doses to the whole body.
What needs to be remembered is that very few people have ever received doses of more
than 2 Gy. With the current safety measures in place, it is not expected that anyone will
receive more than 0.05 Gy in one year, whereas these sicknesses result from sudden
doses delivered all at once. Radiation risk estimates, therefore, are based on the
increased rates of cancer, not on death directly from the radiation. Non-ionizing
radiation does not cause damage the same way that ionizing radiation does. It tends to
cause chemical changes (UV), heating (visible light, microwaves) or other molecular
changes (EMF).
Risk
How is risk determined?
Risk estimates for radiation were first evaluated by scientific committees starting in the
1950s. The most recent of these committees was the fifth Biological Effects of Ionizing
Radiation committee (BEIR V). Like previous committees, this one was charged with
estimating the risk associated with radiation exposure; it published its findings in 1990.
The BEIR IV committee established risks exclusively for radon and other internally
deposited alpha-emitting radionuclides, while BEIR V concentrated primarily on
external radiation exposure data.
It is difficult to estimate risks from radiation, because most of the radiation exposures
that humans receive are very close to background levels, and in most cases the effects
from radiation are not distinguishable from normal levels of those same effects.
However, with the beginning of radiation use in the early part of the century, the early
researchers and users of radiation were not as careful as we are today. The information
from medical uses and from the survivors of the atomic bombs (ABS) in Japan has
given us most of what we know about radiation and its effects on humans. Risk
estimates have their limitations:
1. The doses from which risk estimates are derived were much higher than the
regulated dose levels of today;
2. The actual doses received by the ABS group and some of the medical treatment
cases have had to be estimated and are not known precisely;
3. Many other factors, such as ethnic origin, natural levels of cancers, diet, smoking,
stress and bias, affect the estimates.
What is the risk estimate?
According to the fifth Biological Effects of Ionizing Radiation committee (BEIR V), the
risk of cancer death is 0.08% per rem for doses received rapidly (acute) and might be 2
to 4 times less than that (0.04% per rem or lower) for doses received over a long period
of time (chronic). These risk estimates are an average over all ages, males and females,
and all forms of cancer, and there is a great deal of uncertainty associated with them.
Risk from radiation exposure has also been estimated by other scientific groups. The
other estimates are not exactly the same as the BEIR V estimates, due to differing risk
models and assumptions used in the calculations, but all are close.
Risk comparison
The real question is: how much will radiation exposure increase my chances of cancer
death over my lifetime?
So, now to the risk estimates. If you were to take a large population, such as 10,000
people, and expose them to one rem (to their whole body), you would expect
approximately eight additional deaths (0.08% × 10,000 × 1 rem). So, instead of the 2,000
people expected to die from cancer naturally, you would now have 2,008. This small
increase in the expected number of deaths would not be seen in this group, due to
natural fluctuations in the rate of cancer.
What needs to be remembered is that it is not known that 8 people will die, but that
there is a risk of 8 additional deaths in a group of 10,000 people if they all receive one
rem instantaneously.
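The arithmetic above can be sketched as follows, using the BEIR V acute risk estimate of 0.08% per rem quoted earlier (the function itself is illustrative):

```python
def excess_cancer_deaths(population: int, dose_rem: float,
                         risk_per_rem: float = 0.0008) -> float:
    """Expected additional fatal cancers in a population receiving an
    acute whole-body dose, at the BEIR V estimate of 0.08% per rem."""
    return population * dose_rem * risk_per_rem

# 10,000 people receiving 1 rem acutely:
print(f"{excess_cancer_deaths(10_000, 1.0):.1f}")          # 8.0
# The same dose received chronically, at the lower 0.04% per rem estimate:
print(f"{excess_cancer_deaths(10_000, 1.0, 0.0004):.1f}")  # 4.0
```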
If they were to receive the 1 rem over a long period of time, such as a year, the risk would
be less than half of this (<4 expected fatal cancers). Risks can be looked at in many
ways; here are a few ways to help visualize risk.
One way often used is to look at the number of “days lost” in a population due to
early death from separate causes, then dividing those days lost among the population
to get an “average life expectancy lost” due to those causes. The following is a table of
life expectancy lost for several causes:
Health Risk                    Est. life expectancy lost
Smoking 20 cigarettes a day    6 years
You can also use the same approach to look at risks on the job:
Industry type    Est. life expectancy lost
All industries   60 days
Another way of looking at risk is to consider activities common to our society that carry
a relative risk of a one-in-a-million chance of dying:
3. Spending 2 days in New York City (air pollution)
The following is a comparison of the risks of some medical exams and is based on the
following information:
Cigarette Smoking - 50,000 lung cancer deaths each year per 50 million smokers
consuming 20 cigarettes a day, or one death per 7.3 million cigarettes smoked, or 1.37 x
10^-7 deaths per cigarette
Highway Driving - 56,000 deaths each year per 100 million drivers, each covering
10,000 miles, or one death per 18 million miles driven, or 5.6 x 10^-8 deaths per mile
driven
Radiation Induced Fatal Cancer - 4% per Sv (100 rem) for exposure at low doses and
dose rates
Procedure: Chest Radiograph
Effective Dose: 3.2 x 10^-5 Sv (3.2 mrem)
Risk of Fatal Cancer: 1.3 x 10^-6
Equivalent Number of Cigarettes Smoked: 9
Equivalent Number of Highway Miles Driven: 23
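The equivalences in this comparison follow from dividing the radiation risk by the per-cigarette and per-mile risks given above. A sketch of that arithmetic (the function is illustrative):

```python
RISK_PER_SV = 0.04            # radiation-induced fatal cancer risk, 4% per Sv
RISK_PER_CIGARETTE = 1.37e-7  # deaths per cigarette smoked
RISK_PER_MILE = 5.6e-8        # deaths per highway mile driven

def risk_equivalents(dose_sv: float):
    """Express the fatal-cancer risk of a dose as an equivalent number
    of cigarettes smoked and highway miles driven."""
    risk = dose_sv * RISK_PER_SV
    return risk, risk / RISK_PER_CIGARETTE, risk / RISK_PER_MILE

# Chest radiograph, effective dose 3.2 x 10^-5 Sv:
risk, cigs, miles = risk_equivalents(3.2e-5)
print(f"risk {risk:.1e}, {cigs:.0f} cigarettes, {miles:.0f} miles")
```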
Doses
The following is a comparison of limits, doses and dose rates from many different
sources. Most of this data came from Radiobiology for the Radiologist, by Eric Hall, or
BEIR V, National Academy of Sciences. Ranges have been given where known. All doses
are TEDE (whole body total) unless otherwise noted. The doses for x-rays are for the
years 1980-1985 and could be lower today. Any corrections or comments can be sent
to us at the University of Michigan.
Radiation Safety
Radionuclides in various chemical and physical forms have become extremely
important tools in modern research. The ionizing radiation emitted by these materials,
however, can pose a hazard to human health. For this reason, special precautions must
be observed when radionuclides are used.
The possession and use of radioactive materials in the United States is governed by
strict regulatory controls. The primary regulatory authority for most types and uses of
radioactive materials is the federal Nuclear Regulatory Commission (NRC). However,
more than half of the states in the US (including Iowa) have entered into “agreement”
with the NRC to assume regulatory control of radioactive material use within their
borders. As part of the agreement process, the states must adopt and enforce
regulations comparable to those found in Title 10 of the Code of Federal Regulations.
Regulations for control of radioactive material use in Iowa are found in Chapter 136C of
the Iowa Code.
For most situations, the types and maximum quantities of radioactive materials
possessed, the manner in which they may be used, and the individuals authorized to
use radioactive materials are stipulated in the form of a “specific” license from the
appropriate regulatory authority. In Iowa, this authority is the Iowa Department of Public
Health. However, for certain institutions which routinely use large quantities of
numerous types of radioactive materials, the exact quantities of materials and details of
use may not be specified in the license. Instead, the license grants the institution the
authority and responsibility for setting the specific requirements for radioactive material
use within its facilities. These licensees are termed “broadscope” and require a
Radiation Safety Committee and usually a full-time Radiation Safety Officer.
At Iowa State University, the Department of Environmental Health and Safety (EH&S)
has the responsibility for ensuring that all individuals who work with radionuclides are
aware of both the potential hazards associated with the use of these materials and the
proper precautions to employ in order to minimize these hazards. As a means to
accomplish this objective, EH&S has established a radiation safety training program.
This document, together with the in-class training, should give each radionuclide user
sufficient information to enable him or her to use radionuclides in a safe manner.
The quantity which expresses the degree of radioactivity or radiation producing potential
of a given amount of radioactive material is activity. The special unit for activity is the
curie (Ci) which was originally defined as that amount of any radioactive material which
disintegrates at the same rate as one gram of pure radium. The curie has since been
defined more precisely as a quantity of radioactive material in which 3.7 x 10^10 atoms
disintegrate per second. The International System (SI) unit for activity is the becquerel
(Bq), which is that quantity of radioactive material in which one atom is transformed per
second. The activity of a given amount of radioactive material does not depend upon
the mass of material present. For example, two one-curie sources of Cs-137 might have
very different masses depending upon the relative proportion of nonradioactive atoms
present in each source. The concentration of radioactivity, or the relationship between
the mass of radioactive material and the activity, is called the specific activity. Specific
activity is expressed as the number of curies or becquerels per unit mass or volume.
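As a sketch of these definitions, with hypothetical source masses (the numbers are illustrative, not tabulated values for any real source):

```python
CI_TO_BQ = 3.7e10  # disintegrations per second in one curie

def specific_activity_ci_per_g(activity_ci: float, mass_g: float) -> float:
    """Specific activity: activity divided by mass, here in Ci per gram."""
    return activity_ci / mass_g

# Two hypothetical 1-Ci sources of different total mass have equal
# activities but very different specific activities.
print(specific_activity_ci_per_g(1.0, 0.5))    # 2.0 Ci/g
print(specific_activity_ci_per_g(1.0, 50.0))   # 0.02 Ci/g
print(f"{1.0 * CI_TO_BQ:.1e}")                 # 3.7e+10 Bq in 1 Ci
```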
EXPOSURE
The roentgen is a unit used to measure a quantity called exposure. It can only be
used to describe an amount of gamma or x-rays, and only in air. Specifically,
it is the amount of photon energy required to produce 1.61 x 10^12 ion pairs in one
gram of dry air at 0°C. One roentgen is equal to depositing 2.58 x 10^-4
coulombs per kg of dry air. It is a measure of the ionization of the molecules in a mass
of air. The main advantage of this unit is that it is easy to measure directly, but it is
limited because it describes deposition only in air, and only for gamma and x-rays.
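The two figures above are consistent: dividing the liberated charge per kilogram by the charge of a single ion pair gives the stated ion-pair count. A quick check:

```python
ELEMENTARY_CHARGE = 1.602e-19  # coulombs carried by one ion of an ion pair
ROENTGEN_C_PER_KG = 2.58e-4    # coulombs liberated per kg of dry air per roentgen

# Ion pairs per kilogram, then per gram:
ion_pairs_per_gram = ROENTGEN_C_PER_KG / ELEMENTARY_CHARGE / 1000.0
print(f"{ion_pairs_per_gram:.2e}")  # 1.61e+12 ion pairs per gram
```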
The absorbed dose is the quantity that expresses the amount of energy which ionizing
radiation imparts to a given mass of matter. The special unit for absorbed dose is the
RAD (Radiation Absorbed Dose), which is defined as a dose of 100 ergs of energy per
gram of matter. The SI unit for absorbed dose is the gray (Gy), which is defined as a
dose of one joule per kilogram. Since one joule equals 10^7 ergs, and since one
kilogram equals 1000 grams, 1 Gray equals 100 rads. The size of the absorbed dose is
dependent upon the strength (or activity) of the radiation source, the distance from the
source to the irradiated material, and the time over which the material is irradiated. The
activity of the source will determine the dose rate which can be expressed in rad/hr,
mrad/hr, mGy/sec, etc.
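The unit conversion stated above (1 Gy = 100 rad) can be verified directly from the erg and gram definitions:

```python
ERG_PER_JOULE = 1.0e7
GRAM_PER_KG = 1000.0

# 1 Gy = 1 J/kg; convert to erg/g, then to rads (1 rad = 100 erg/g):
erg_per_gram_in_one_gray = ERG_PER_JOULE / GRAM_PER_KG  # 1e4 erg/g
rads_per_gray = erg_per_gram_in_one_gray / 100.0
print(rads_per_gray)  # 100.0, i.e. 1 Gy = 100 rad
```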
Although the biological effects of radiation are dependent upon the absorbed dose,
some types of particles produce greater effects than others for the same amount of
energy imparted. For example, for equal absorbed doses, alpha particles may be 20
times as damaging as beta particles. In order to account for these variations when
describing human health risk from radiation exposure, the quantity called dose
equivalent is used. This is the absorbed dose multiplied by certain “quality” and
“modifying” factors (QF) indicative of the relative biological damage potential of the
particular type of radiation. The special unit for dose equivalent is the rem (Roentgen
Equivalent Man). The SI unit for dose equivalent is the sievert (Sv).
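A sketch of the dose-equivalent calculation; the quality factors shown are the commonly quoted values consistent with the alpha-versus-beta comparison above, included here as illustrative assumptions:

```python
# Commonly quoted quality factors (illustrative values):
QF = {"gamma": 1, "beta": 1, "alpha": 20}

def dose_equivalent_rem(absorbed_dose_rad: float, radiation: str) -> float:
    """Dose equivalent (rem) = absorbed dose (rad) x quality factor."""
    return absorbed_dose_rad * QF[radiation]

print(dose_equivalent_rem(1.0, "beta"))   # 1.0 rem
print(dose_equivalent_rem(1.0, "alpha"))  # 20.0 rem for the same absorbed dose
```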
Radiation Measurement
When given a certain amount of radioactive material, it is customary to refer to the
quantity based on its activity rather than its mass. The activity is simply the number of
disintegrations or transformations the quantity of material undergoes in a given period
of time.
The two most common units of activity are the curie and the becquerel. The curie is
named after Pierre Curie for his and his wife Marie's discovery of radium. One curie is
equal to 3.7 x 10^10 disintegrations per second. A newer unit of activity is the becquerel,
named for Henri Becquerel, who is credited with the discovery of radioactivity. One
becquerel is equal to one disintegration per second.
It is obvious that the curie is a very large amount of activity and the becquerel a very
small amount. To make discussion of common amounts of radioactivity more
convenient, we often speak in terms of millicuries and microcuries, or kilobecquerels
and megabecquerels.
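A minimal sketch of converting between these activity units:

```python
BQ_PER_CI = 3.7e10  # disintegrations per second in one curie

def ci_to_bq(ci: float) -> float:
    """Convert activity in curies to becquerels."""
    return ci * BQ_PER_CI

def bq_to_ci(bq: float) -> float:
    """Convert activity in becquerels to curies."""
    return bq / BQ_PER_CI

print(f"{ci_to_bq(1e-6):.1e} Bq")  # 1 microcurie = 3.7e+04 Bq (37 kBq)
print(f"{bq_to_ci(1e6):.1e} Ci")   # 1 MBq = 2.7e-05 Ci (about 27 microcuries)
```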
Radiation Units
Roentgen: A unit for measuring the amount of gamma or x-rays in air.
Rad: A unit for measuring the radiation energy absorbed in a material (absorbed dose).
Rem: A unit for measuring the biological effect of the absorbed dose (dose equivalent).
Radiation is often measured in one of these three units, depending on what is being
measured and why. In international units, these are coulombs/kg for the roentgen,
grays for rads and sieverts for rem.
The most common type of instrument is the gas filled radiation detector. It works on
the principle that as radiation passes through air or a specific gas, ionization of the
molecules in the gas occurs. When a high voltage is placed between two areas of the
gas filled space, the positive ions will be attracted to the negative side of the detector
(the cathode) and the free electrons will travel to the positive side (the anode). These
charges are collected by the anode and cathode, which then form a very small current in
the wires going to the detector. By placing a very sensitive current measuring device
between the wires from the cathode and anode, the small current can be measured and
displayed as a signal. The more radiation that enters the chamber, the more current is
displayed by the instrument.
Many types of gas-filled detectors exist, but the two most common are the ion chamber
used for measuring large amounts of radiation and the Geiger-Muller or GM detector
used to measure very small amounts of radiation.
The second most common type of radiation detecting instrument is the scintillation
detector. The basic principle behind this instrument is the use of a special material which
glows or “scintillates” when radiation interacts with it. The most common such material
is a type of salt called sodium iodide. The light produced by the scintillation process
is reflected through a clear window where it interacts with a device called a
photomultiplier tube.
The first part of the photomultiplier tube is made of another special material called a
photocathode. The photocathode has the unique characteristic of producing electrons
when light strikes its surface. These electrons are then pulled towards a series of
plates called dynodes through the application of a positive high voltage. When
electrons from the photo cathode hit the first dynode, several electrons are produced
for each initial electron hitting its surface. This “bunch” of electrons is then pulled
towards the next dynode, where more electron “multiplication” occurs. The sequence
continues until the last dynode is reached, where the electron pulse is now millions of
times larger than it was at the beginning of the tube. At this point the electrons are
collected by an anode at the end of the tube, forming an electronic pulse. The pulse is
then detected and displayed by a special instrument.
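The electron multiplication described above compounds geometrically at each dynode. Assuming, purely for illustration, a tube with 10 dynodes and 5 secondary electrons per incident electron:

```python
def pmt_gain(electrons_per_dynode: int, n_dynodes: int) -> int:
    """Overall photomultiplier gain: the multiplication compounds at
    every dynode, so the gain is (electrons per dynode) ** (dynode count)."""
    return electrons_per_dynode ** n_dynodes

# Hypothetical tube: 10 dynodes, 5 electrons released per incident electron.
print(f"{pmt_gain(5, 10):.1e}")  # 9.8e+06, i.e. millions of times larger
```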
Scintillation detectors are very sensitive radiation instruments and are used for special
environmental surveys and as laboratory instruments.
ULTRASONIC TESTING
Prior to World War II, sonar, the technique of sending sound waves through water and
observing the returning echoes to characterize submerged objects, inspired early
ultrasound investigators to explore ways to apply the concept to medical diagnosis. In
1929 and 1935, Sokolov studied the use of ultrasonic waves in detecting metal objects.
Mulhauser, in 1931, obtained a patent for using ultrasonic waves, with two
transducers, to detect flaws in solids. Firestone (1940) and Simons (1945) developed
pulsed ultrasonic testing using a pulse-echo technique.
Shortly after the close of World War II, researchers in Japan began to explore medical
diagnostic capabilities of ultrasound. The first ultrasonic instruments used an A-mode
presentation with blips on an oscilloscope screen. That was followed by a B-mode
presentation with two-dimensional, gray scale imaging.
Japan’s work in ultrasound was relatively unknown in the United States and Europe until
the 1950s. Then researchers presented their findings on the use of ultrasound to detect
gallstones, breast masses, and tumors to the international medical community. Japan
was also the first country to apply Doppler ultrasound, an application of ultrasound that
detects internal moving objects such as blood coursing through the heart for
cardiovascular investigation.
Ultrasound pioneers working in the United States contributed many innovations and
important discoveries to the field during the following decades. Researchers learned
to use ultrasound to detect potential cancer and to visualize tumors in living subjects
and in excised tissue. Real-time imaging, another significant diagnostic tool for
physicians, presented ultrasound images directly on the system’s CRT screen at the
time of scanning. The introduction of spectral Doppler and later color Doppler depicted
blood flow in various colors to indicate speed of flow and direction.
The United States also produced the earliest hand held “contact” scanner for clinical
use, the second generation of B-mode equipment, and the prototype for the first
articulated-arm hand held scanner, with 2-D images.
NDT has been practiced for many decades, with initial rapid developments in
instrumentation spurred by the technological advances that occurred during World War
II and the subsequent defense effort. During the earlier days, the primary purpose was
the detection of defects. As a part of “safe life” design, it was intended that a structure
should not develop macroscopic defects during its life, with the detection of such
defects being a cause for removal of the component from service. In response to this
need, increasingly sophisticated techniques using ultrasonics, eddy currents, x-rays,
dye penetrants, magnetic particles, and other forms of interrogating energy emerged to
fuel this trend.
In the early 1970’s, two events occurred which caused a major change. The continued
improvement of the technology, in particular its ability to detect small flaws, led to the
unsatisfactory situation that more and more parts had to be rejected, even though the
probability of failure had not changed. However, the discipline of fracture mechanics
emerged, which enabled one to predict whether a crack of a given size would fail under
a particular load if a material property, fracture toughness, were known. Other laws
were developed to predict the rate of growth of cracks under cyclic loading (fatigue).
With the advent of these tools, it became possible to accept structures containing
defects if the sizes of those defects were known. This formed the basis for new
philosophy of “fail safe” or “damage tolerant” design. Components having known
defects could continue in service as long as it could be established that those defects
would not grow to a critical, failure producing size.
A new challenge was thus presented to the nondestructive testing community. Detection
was not enough. One also needed to obtain quantitative information about flaw size to
serve as an input to fracture-mechanics-based predictions of remaining life. These
concerns, which were felt particularly strongly in the defense and nuclear power
industries, led to the creation of a number of research programs around the world and
the emergence of quantitative nondestructive evaluation (QNDE) as a new discipline.
The Center for Nondestructive Evaluation at Iowa State University (growing out of a
major research effort at the Rockwell International Science Center); the Electric Power
Research Institute in Charlotte, North Carolina; the Fraunhofer Institute for
Nondestructive Testing in Saarbrucken, Germany; and the Nondestructive Testing
Centre in Harwell, England can all trace their roots to those changes.
In the ensuing years, many important advances have been made. Quantitative theories
have been developed to describe the interaction of the interrogating fields with flaws.
Models incorporating the results have been integrated with solid model descriptions of
real-part geometries to simulate practical inspections. Related tools allow NDE to be
considered during the design process on an equal footing with other failure-related
engineering disciplines. Quantitative descriptions of NDE performance, such as the
probability of detection (POD), have become an integral part of statistical risk
assessment. Measurement procedures initially developed for metals have been
extended to engineered materials, such as composites, where anisotropy and
inhomogeneity have become important issues.
The rapid advances in digitization and computing capabilities have totally changed the
face of many instruments and the types of algorithms that are used in processing
the resulting data. High-resolution imaging systems and multiple measurement
modalities for characterizing a flaw have emerged. Interest is increasing not only in
detecting, characterizing and sizing defects, but in characterizing the materials in which
they occur. Goals range from the determination of fundamental microstructural
characteristics such as grain size, porosity and texture (preferred grain orientation) to
material properties related to such failure mechanisms as fatigue, creep, and fracture
toughness—determinations that are sometimes quite challenging to make due to the
problem of competing effects.
Most ultrasonic instruments detect flaws by monitoring one or more of the following:
• Reflection of sound from interfaces consisting of material boundaries or discontinuities
within the material itself
• Time of transit of a sound wave through the test piece from the entry point at the
transducer to the exit point at the transducer
• Attenuation of sound waves by absorption within the material
• Features in the spectral response for either a transmitted or a reflected signal
Ultrasonic waves are mechanical vibrations; the amplitudes of vibration in metal parts
being ultrasonically inspected impose stresses well below the elastic limit, thus
preventing permanent effects on the parts.
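As a sketch of how the time-of-transit measurement above translates into a reflector depth (illustrative only; the velocity is a typical handbook value for longitudinal waves in steel, not taken from this text):

```python
# Illustrative sketch: reflector depth from round-trip transit time in a
# pulse-echo setup. The velocity below is a typical longitudinal-wave
# velocity for steel, assumed for this example.
V_STEEL = 5920.0  # m/s

def reflector_depth_m(transit_time_s, velocity_m_s=V_STEEL):
    # The pulse travels to the reflector and back, so the one-way
    # depth is velocity * time / 2.
    return velocity_m_s * transit_time_s / 2.0

# A 20 microsecond round trip in steel places the reflector about 59 mm deep.
print(round(reflector_depth_m(20e-6) * 1000, 1))  # depth in mm -> 59.2
```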
Basic Equipment
Most ultrasonic inspection systems include the following basic equipment:
• An electronic signal generator that produces bursts of alternating voltage (a negative
spike or a square wave) when electronically triggered
• A transducer (probe or search unit) that emits a beam of ultrasonic waves when bursts
of alternating voltage are applied to it
• A couplant to transfer energy in the beam of ultrasonic waves to the test piece
• A couplant to transfer the output of ultrasonic waves (acoustic energy) from the test
piece to the transducer
• A transducer (which can be the same as the transducer initiating the sound, or a
separate one) to accept and convert the output of ultrasonic waves from the test piece
into corresponding bursts of alternating voltage. In most systems, a single transducer
alternately acts as sender and receiver
• An electronic device to amplify and, if necessary, demodulate or otherwise modify the
signals from the transducer
• A display or indicating device to characterize or record the output from the test piece.
The display device may be a CRT, sometimes referred to as an oscilloscope; a chart or
strip recorder; a marker, indicator, or alarm device; or a computer printout
• An electronic clock, or timer, to control the operation of the various components of the
system, to serve as a primary reference point, and to provide coordination for the
entire system.
Applicability
The ultrasonic inspection of metal is principally conducted for the detection of
discontinuities. This method can be used to detect internal flaws in most engineering
metals and alloys. Bonds produced by welding, brazing, soldering, and adhesive
bonding can also be ultrasonically inspected. In-line techniques have been developed
for process control. Both line-powered and battery-operated commercial equipment is
available, permitting inspection in the shop, laboratory, warehouse, or field.
Ultrasonic inspection is used for quality control and materials inspection in all major
industries. These include electrical and electronic component manufacturing; production
of metallic and composite materials; and fabrication of structures such as airframes,
piping and pressure vessels, ships, bridges, motor vehicles, machinery, and jet engines.
In-service ultrasonic inspection for preventive maintenance is used for detecting the
impending failure of railroad rolling-stock axles, press columns, earthmoving
equipment, mill rolls, mining equipment, nuclear systems, and other systems and
components.
The flaws that can be detected include, but are not limited to, cracks, inclusions, voids,
laminations, debonding, pipes, and flakes. They may be inherent in the raw material,
may result from fabrication and heat treatment, or may occur in service as a result of
fatigue, impact, abrasion, or corrosion.
Physics of Ultrasound
Particle Displacement and Strain
Acoustics is the study of time-varying deformations or vibrations in materials. All
material substances are composed of atoms, which may be forced into vibrational
motion about their equilibrium positions. Many different patterns of vibrational motion
exist at the atomic level. Most of these patterns, however, are irrelevant to the study of
acoustics, which is concerned only with material particles that, although small, contain
many atoms; for within each particle, the atoms move in unison.
When the particles of a medium are displaced from their equilibrium positions, internal
(electro-static) forces arise. It is these elastic restoring forces between particles,
combined with inertia of the particles, which lead to oscillatory motions of the medium.
Most industrial ultrasonic inspections are carried out in the MHz range. Conventional
ultrasonic inspections are carried out in the 2 - 5 MHz frequency range; however,
ultrasonic testing can be carried out up to 25 MHz. The lower limit can be in the range
of 0.5 MHz (for the inspection of castings) or even in the range of a few kHz (for the
inspection of concrete structures).
Wave
Sound travels in the form of waves. If a graph of particle displacement versus time is
plotted, the resulting curve resembles a sine wave; hence sound is referred to as a
wave.
Properties of Waves:
Frequency: Frequency (f) is defined as the number of times a repetitive event (cycle)
occurs per unit of time. Normal ultrasonic testing is carried out in the range of 0.5
MHz to 25 MHz; however, specialized inspections, such as the testing of concrete
structures, employ frequencies in the order of several kHz. The frequency of ultrasonic
waves affects the inspection capabilities in several ways. Generally, a compromise has
to be made between favorable and adverse effects to achieve an optimum balance and
to overcome the limitations imposed by the ultrasonic equipment and material.
Wavelength
If the length of a particular sound wave is measured from trough to trough, or from
crest to crest, the distance is always the same. This distance is known as the
wavelength (λ) and is defined by the following equation. The time it takes for the wave
to travel a distance of one complete wavelength is the same amount of time it takes for
the source to execute one complete vibration.

λ = V/f

where V = velocity of the wave and f = frequency.
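The relationship λ = V/f can be applied directly; a minimal sketch (the steel velocity is an assumed typical value, not from this text):

```python
def wavelength_mm(velocity_m_s, frequency_hz):
    # wavelength = velocity / frequency, converted to millimetres
    return velocity_m_s / frequency_hz * 1000.0

# Longitudinal waves in steel (~5920 m/s, typical assumed value) at 2 MHz:
print(round(wavelength_mm(5920.0, 2e6), 2))  # ~2.96 mm
```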
Detection of a defect involves many factors other than the relationship of wavelength
and flaw size. Sound reflects from a defect if its acoustic impedance differs from the
surrounding material. Often, the surrounding material has competing reflections, for
example microstructure grains in metals and the aggregate of concrete. A good
measure of detectability of a flaw is its signal-to-noise ratio (S/N), that is the signal from
the defect against the background reflections (categorized as “noise”). The absolute
noise level and the absolute strength of an echo from a “small” defect depend on a
number of factors:
• inherent reflectivity of the flaw which is dependent on its acoustic impedance, size,
shape, and orientation. Cracks and volumetric defects can reflect ultrasonic waves
quite differently. Many cracks are “invisible” from one direction and strong reflectors
from another.
The signal-to-noise ratio (S/N), and therefore the detectability of a defect:
• increases with a more focused beam. In other words, flaw detectability is inversely
proportional to the transducer beam width.
• increases with decreasing pulse width (delta-t). In other words, flaw detectability is
inversely proportional to the duration of the pulse produced by an ultrasonic
transducer. The shorter the pulse (often higher frequency), the better the
detection of the defect. Shorter pulses correspond to broader bandwidth
frequency response. See the figure below showing the waveform of a transducer
and its corresponding frequency spectrum.
• decreases in materials with high density and/or a high ultrasonic velocity. The
signal-to-noise ratio (S/N) is inversely proportional to material density and
acoustic velocity.
• increases with frequency. However, in some materials, such as titanium alloys, the
flaw signal amplitude and the Figure of Merit (FOM) both change at about the same
rate with frequency, so in those cases the signal-to-noise ratio (S/N) is somewhat
independent of frequency.
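The signal-to-noise ratio itself is commonly quoted in decibels; a small sketch of that conversion (the amplitude values are made up for illustration):

```python
import math

def snr_db(signal_amplitude, noise_amplitude):
    # Amplitude ratio expressed in decibels: 20 * log10(A_signal / A_noise)
    return 20.0 * math.log10(signal_amplitude / noise_amplitude)

# A defect echo 10x the background grain noise gives a 20 dB S/N.
print(round(snr_db(10.0, 1.0)))  # 20
```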
Z = ρV

where Z is the acoustic impedance, ρ the density, and V the acoustic velocity. Air has a
very low acoustic impedance, while the impedance of water is relatively higher than
that of air. Aluminum and steel have still higher impedances.
Impedance Ratio
The impedance ratio between two materials is simply the acoustical impedance of one
material divided by the acoustical impedance of the other. When a sonic beam is
passing from the first material into the second, the impedance ratio is the impedance of
the second material divided by the impedance of the first. As the ratio increases, more
of the original energy is reflected.
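The relationship between impedance mismatch and reflected energy can be sketched with the usual normal-incidence reflection coefficient, R = ((Z2 - Z1)/(Z2 + Z1))²; the impedance values below are typical handbook figures, assumed for illustration:

```python
def reflected_energy_fraction(z1, z2):
    # Fraction of incident sound energy reflected at a boundary
    # between impedances z1 and z2, at normal incidence.
    return ((z2 - z1) / (z2 + z1)) ** 2

# Water (~1.48) to steel (~45.4), impedances in 10^6 kg/m^2*s (typical
# assumed values): the large mismatch reflects roughly 88% of the energy.
print(round(reflected_energy_fraction(1.48, 45.4), 2))  # 0.88
```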
• Possibility of chemical reactions between the test piece surface and the couplant
• Cleaning requirements
The couplant should be selected such that its viscosity is appropriate for the surface
finish of the material to be examined. Examination of rough surfaces generally
requires a high-viscosity couplant; for example, for scanning castings the couplant
generally used is grease. At elevated temperatures, as conditions warrant, heat-resistant
coupling materials such as silicone gels or greases should be used.
Water is a suitable couplant for use on a relatively smooth surface; however, a
wetting agent should be added. It is sometimes appropriate to add glycerin to increase
viscosity, but glycerin tends to induce corrosion in aluminum and is therefore not
recommended for aerospace applications. Heavy grease or oil can be used on
overhead surfaces or on rough surfaces when the primary purpose of the couplant is to
smooth out the irregularities.
Wallpaper paste is especially useful on rough surfaces when good coupling is needed to
minimize background noise and yield an adequate signal-to-noise ratio. Water is not
a good couplant for use on carbon steel pieces, as it promotes surface corrosion.
Wallpaper paste has a tendency to flake off when exposed to air; when dry and hard, it
can be removed by blasting or by wire brushing. Couplants used in contact
inspection should be applied as thinly as possible to obtain consistent test results. The
necessity for a couplant is one of the drawbacks of ultrasonic inspection and may be a
limitation.
The couplant used in calibration should also be used for the examination. During the
performance of an inspection, the couplant layer must be maintained throughout, such
that the contact area is held constant while an adequate couplant thickness is
maintained. Lack of couplant thickness may reduce the amount of energy transferred
between the test piece and the transducer. These couplant variations in turn result in
severe variations in examination sensitivity.
The velocity of sound in each material is determined by the material properties (in the
case of sound, elastic modulus and density). When sound waves pass between
materials having different acoustic velocities, refraction takes place at the interface.
Only when an ultrasound wave is incident at right angles on an interface between two
materials (i.e. at an angle of incidence of 0°) do transmission and reflection occur at the
interface without any change in the beam direction. At any other angle of incidence, the
phenomena of mode conversion (a change in the nature of the wave motion) and
refraction (a change in the direction of wave propagation) must be considered.
The angle of incidence θ of a ray or beam is the angle measured from the ray to the
surface normal.
The angle of reflection of a ray or beam is the angle measured from the reflected
ray to the surface normal. The law of reflection states that the angle of incidence of a
wave or stream of particles reflecting from a boundary, conventionally measured from
the normal to the interface (not the surface itself), is equal to the angle of reflection
measured from the same normal.
Angle of refraction
A familiar example of this phenomenon occurs when we view an object on the bottom of
a lake. The object is not exactly where it appears to be, owing to the bending of light.
This bending is referred to as refraction.
It has been observed that, unlike light, a sound wave of one type, such as longitudinal,
will not only be refracted in the second material but, according to the incident angle,
may be transformed into another wave mode such as shear or surface. The waves that
propagate in a given instance depend on the ability of the waveform to exist in the
given material, the angle of incidence, and the velocities of the waveforms in both
materials.
The general law that describes wave behavior at an interface is known as Snell’s law.
Although originally derived for light waves, Snell’s law applies to acoustic waves and
to many other types of waves. According to Snell’s law, the ratio of the sine of the angle
of incidence to the sine of the angle of refraction equals the ratio of the corresponding
wave velocities. Mathematically, Snell’s law can be expressed as:

sin θ1 / sin θ2 = V1 / V2
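A minimal sketch of Snell's law in code (the velocities are typical assumed values for a perspex wedge on steel, not from this text):

```python
import math

def refracted_angle_deg(incident_deg, v1, v2):
    # Snell's law: sin(theta1) / sin(theta2) = v1 / v2.
    s = math.sin(math.radians(incident_deg)) * v2 / v1
    if s > 1.0:
        return None  # beyond the critical angle: this mode is not refracted
    return math.degrees(math.asin(s))

# Longitudinal wave passing from perspex (~2730 m/s) into steel (~5920 m/s):
print(round(refracted_angle_deg(20.0, 2730.0, 5920.0), 1))  # ~47.9 degrees
```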
Critical Angles
If the angle of incidence is small, sound waves propagating in a given medium undergo
mode conversion at the boundary. At such an angle of incidence, two wave modes exist
in the second medium as a result of mode conversion: one a longitudinal wave and the
other a shear wave. Two wave modes of ultrasound, at different velocities and
refracted angles, present in the same medium at the same time would make it nearly
impossible to properly evaluate a discontinuity, as it would be unknown which wave
mode had detected it.
With further enlargement of the angle of incidence, the angle of refraction β also
increases until finally, at an angle of incidence of α = 27.5° (the 1st critical angle), the
longitudinal wave is refracted with an angle β of 90°. This means that it runs along the
interface, whilst the transverse wave is still transmitted into the test object, Fig. 32a.
The transverse wave continues to be refracted at larger angles, e.g. 45°, Fig. 32b.
Finally, at an angle of incidence of about 57° (the 2nd critical angle), the transverse
wave is refracted at an angle of 90° and propagates along the surface of the test
object; it has then become a surface wave, Fig. 32c. That is the limit beyond which no
more sound waves are transmitted into the test object; total reflection starts from here,
Fig. 32d. The region in which the angle of incidence lies between the 1st and 2nd
critical angles (27.5° - 57°) gives a clearly evaluable sound wave in the test object
(made of steel), namely the transverse wave between 33.3° and 90°, Fig. 33.
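The two critical angles quoted above follow from Snell's law with the refracted angle set to 90°; a sketch (the velocities are typical assumed values for perspex and steel):

```python
import math

def critical_angle_deg(v1, v2):
    # Incidence angle at which the refracted wave reaches 90 degrees:
    # sin(a_crit) = v1 / v2 (only exists when v2 > v1).
    return math.degrees(math.asin(v1 / v2))

V_PERSPEX = 2730.0      # longitudinal velocity in perspex, m/s (assumed)
V_STEEL_LONG = 5920.0   # longitudinal velocity in steel, m/s (assumed)
V_STEEL_SHEAR = 3250.0  # shear velocity in steel, m/s (assumed)

print(round(critical_angle_deg(V_PERSPEX, V_STEEL_LONG), 1))   # 1st critical, ~27.5
print(round(critical_angle_deg(V_PERSPEX, V_STEEL_SHEAR), 1))  # 2nd critical, ~57.1
```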
GENERATION OF ULTRASONIC WAVES
1. Piezoelectricity
Ultrasonic transmitters and receivers are mainly made from small plates cut from certain
crystals. If no external forces act upon such a small plate, the electric charges are
arranged in a certain crystal symmetry and thus compensate each other. Under external
pressure the thickness of the small plate changes, and thus so does the symmetry of the
charges. An electric field develops, and at the silver-coated faces of the crystal a voltage
can be tapped off. This effect is called the “direct piezoelectric effect”. Pressure
fluctuations, and thus also sound waves, are directly converted into electric voltage
variations by this effect; the small plate serves as a receiver. The direct piezoelectric
effect is reversible (the reciprocal piezoelectric effect). If a voltage is applied to the
contact faces of the crystal, the thickness of the small plate changes; according to the
polarity of the voltage, the plate becomes thicker or thinner. Under an applied
high-frequency a.c. voltage the crystal oscillates at the frequency of the a.c. voltage.
A short voltage pulse of less than 1/1,000,000 of a second, at a voltage of 300-1000 V,
excites the crystal into oscillation at its natural frequency (resonance), which depends
on the thickness and the material of the small plate. The thinner the crystal, the higher
its resonance frequency. It is therefore possible to generate an ultrasonic signal with a
defined primary frequency. The thickness t of the crystal is calculated from the required
resonance frequency f0 according to the following formula:

t = V / (2 f0)

where V is the sound velocity in the crystal material, i.e. the crystal thickness
corresponds to half a wavelength at the resonance frequency.
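The half-wavelength resonance condition can be sketched as follows (the quartz velocity is the value given in the table of crystal materials):

```python
def crystal_thickness_mm(velocity_m_s, resonance_hz):
    # Resonance occurs when the crystal thickness equals half a
    # wavelength: t = V / (2 * f0), converted to millimetres.
    return velocity_m_s / (2.0 * resonance_hz) * 1000.0

# A quartz plate (sound velocity 5740 m/s) cut for a 4 MHz probe:
print(round(crystal_thickness_mm(5740.0, 4e6), 2))  # ~0.72 mm
```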
The table below lists the properties of common crystal materials (PZT = lead
zirconate-titanate):

Property                                     PZT      Barium    Lead meta-  Lithium    Quartz   Lithium
                                                      titanate  niobate     sulphate            niobate
Sound velocity (m/s)                         4000     5100      3300        5460       5740     7320
Acoustic impedance Z (10^6 kg/m^2 s)         30       27        20.5        11.2       15.2     34
Electromechanical coupling factor k          0.6-0.7  0.45      0.4         0.38       0.1      0.2
Piezoelectric modulus d                      150-593  125-190   85          15         2.3      6
Piezoelectric deformation constant H         1.8-4.6  1.1-1.6   1.9         8.2        4.9      6.7
Coupling factor for radial oscillation kp    0.5-0.6  0.3       0.07        0          0.1      -
The efficiency of the conversion from electrical into mechanical energy, and vice versa,
differs according to the crystal material used. The corresponding features are
characterized by the piezoelectric constants and the coupling factor. The constant d
(piezoelectric modulus) is a measure of the quality of the crystal material as an
ultrasonic transmitter. The constant H (piezoelectric deformation constant) is a measure
of its quality as a receiver. The table shows that lead zirconate-titanate has the best
transmitter characteristics and lithium sulphate the best receiver characteristics. The
constant k (a theoretical value) shows the efficiency of the conversion of electric voltage
into mechanical displacement and vice versa. This value is important for pulse echo
operation, as the crystal acts as both transmitter and receiver; here the values for lead
zirconate-titanate, barium-titanate and lead meta-niobate lie in a comparable order. As
direct contact as well as immersion testing require a liquid couplant with low acoustic
impedance Z, the crystal material should have an acoustic impedance of the same order
in order to be able to transmit as much sound energy as possible. Thus the best solution
would be to use lead meta-niobate or lithium sulphate, as they have the lowest acoustic
impedances. A satisfactory resolution power requires that the constant kp (coupling
factor for radial oscillation) is as low as possible; kp is a measure of the appearance of
disturbing radial oscillations which widen the signals. From this point of view, lead
meta-niobate and lithium sulphate are again the best crystal materials. The
characteristics of the crystal materials described here show that no ideal crystal material
exists; according to the problem, compromises always have to be made. As lithium
sulphate presents additional difficulties due to its water solubility, the most common
materials are lead zirconate-titanate, barium-titanate and lead meta-niobate.
2. Set-up of the probe
For practical application in material testing, probes are used into which the piezoelectric
crystals are installed. In order to protect the crystals against damage they are bonded to
a plane-parallel or wedge-shaped plastic delay block; the shape of the delay block
depends on whether the sound wave is to be transmitted perpendicularly or at an angle
into the workpiece to be tested. The rear of the crystal is closely connected with a
damping element which damps the natural oscillations of the crystal as quickly as
possible.
In this way the short pulses required for the pulse echo method are generated. The unit
comprising crystal, delay block and damping element is installed in a robust plastic or
metal housing, and the crystal contacts are connected to the connector socket. Probes
transmitting and receiving the sound pulses perpendicularly to the surface of the
workpiece are called normal beam probes or straight beam probes. If the crystal is
equipped with a wedge-shaped delay block, an angle beam probe is concerned, which
transmits/receives the sound pulses at a fixed probe angle into/from the workpiece to be
tested. In both probe types the crystal serves for both the transmission and the
reception of sound pulses. A third probe type comprises two electrically and acoustically
separated crystal units, of which one only transmits and the other only receives sound
pulses. This probe is called a TR probe (transmitter-receiver probe) or twin crystal probe
and is used, due to its design and functioning, for the testing of thin materials or the
detection of material flaws located near the surface of the workpiece.
3. Probe characteristics
Not only the damping element but all components of the crystal unit determine the
oscillation process. If the crystal is excited into oscillation by an applied a.c. voltage, it
will oscillate at this frequency of excitation. The amplitude (maximum change of
thickness) strongly depends on the frequency of excitation. Near the resonance
frequency f0 the amplitude reaches a maximum (the case of resonance). The lower the
damping, the higher the amplitude in the case of resonance and the narrower the
resonance curve. Therefore a low damping factor can produce high sound pressure
and thus a great sensitivity of the test system. The disadvantages of a low damping
factor are long sound pulses and thus correspondingly long signals, which result in a
poor resolution power of the complete test system. Signals from two reflectors located
close to each other are not resolved, i.e. both reflectors produce only one wide signal.
Thus a compromise has to be made between great sensitivity and satisfactory
resolution power. For a given damping, the resolution power can be improved by an
increase in frequency; however, a transit-time-dependent decrease of the sensitivity has
to be accepted. According to the application, probes with different damping and thus
different resolution power are manufactured. In this connection, a distinction is mainly
made between 3 different types of damping:
In the case of medium damped probes, the crystal carries out a relatively large number
of oscillations after its excitation. This results in a pulse length corresponding to the
number of oscillations.
In the case of highly damped probes, the pulse consists of only a few (two to four)
oscillations.
Shock wave probes are so highly damped that the crystal carries out only half an
oscillation or one oscillation (aperiodic damping).
The number of probe types available shows that for each test problem encountered in
practice, a suitable probe has to be selected. When selecting the probe it is first of all
important to know the test object with its expected flaws and to compare the available
probes on the basis of the shape of the sound beam and the sensitivity characteristic.
The sensitivity characteristic depends on the shape of the sound beam, the resolution
power and the test object itself. The length of the initial pulse determines the dead
zone, that is, the area immediately below the surface of the workpiece where no
reflectors can be detected. The sensitivity increases with increasing distance and
reaches its peak value in the focus of the sound beam, i.e. at the distance of the near
field length, unless the dead zone exceeds this area.
Behind the focus the sound beam becomes divergent. The sound pressure decreases in
inverse proportion to the distance, and thus the sensitivity decreases too. Moreover,
considerable sound attenuation in the material may further decrease the sensitivity and
thus the flaw detectability.
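The inverse-distance decrease of the far-field sound pressure can be sketched as follows (material attenuation is neglected and the numbers are purely illustrative):

```python
def far_field_pressure(p0, near_field_length, distance):
    # Beyond the focus (distance > near field length N), the on-axis
    # sound pressure falls off roughly as 1/distance: p = p0 * N / distance.
    return p0 * near_field_length / distance

# Doubling the distance beyond the near field halves the sound pressure.
print(far_field_pressure(100.0, 50.0, 100.0))  # 50.0
```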
Common transducer crystal materials include quartz, lithium sulfate, and polarized
ceramics.
In direct contact ultrasonic testing, the probe is coupled to the surface of the workpiece
by a couplant, as the ultrasonic waves are not transmitted across an air gap between
probe and workpiece. The workpiece is tested by scanning its surface with the probe.
The coupling face of the probe is therefore subject to wear, and the testing operator has
to see to it that the crystal of the probe does not get damaged. The wear of probes can
be decreased by using exchangeable protective membranes which, however, reduce the
sensitivity and widen the pulses. Probes with a particularly hard ceramic guard plate are
also offered. If the delay block of a probe is worn down to the crystal, the probe can no
longer be used for testing purposes.
Heavy shocks, such as dropping of the probe, may lead to breakage of the crystal.
Temperatures exceeding the recommended maximum operating temperature may also
damage the probe, as in such a case the adhesive bonding between crystal and delay
block, or crystal and damping element, is dissolved. The crystals themselves undergo an
aging process as the polarization of the crystal gradually decreases.
In addition to careful handling of the probes, checking of the probes is therefore of
particular importance. Using an ultrasonic flaw detector and a suitable calibration block,
the characteristics of the probes can easily be checked, so that the ultrasonic operator
can rapidly make sure whether a certain probe produces the specified values or is
defective.
TRANSDUCER TYPES
Most standard straight-beam probes transmit and receive longitudinal waves (pressure
waves). The oscillations of such a wave can be described by compression and
decompression of the atoms propagating through the material (gas, liquid or solid),
Fig. 27. There is a large selection of straight-beam probes in various sizes and
frequencies ranging from approximately 0.5 MHz to 25 MHz. Ranges of over 10 m can
be obtained, thus enabling large test objects to be tested. This wide range enables
individual matching of probe characteristics to every test task, even under difficult
testing conditions. We have already mentioned a disadvantage of straight-beam probes
which, under certain conditions, can be decisive: the poor recognition of near-to-surface
discontinuities due to the width of the initial pulse.
ANGLE BEAM SEARCH UNITS
Probes whose beams enter at an angle are called angle-beam probes because they
transmit and receive the sound waves at an angle to the surface of the test object. Most
standard angle-beam probes transmit and receive, for technical reasons, transverse
(shear) waves. Angle beam transducers direct the sound beam into the test piece at an
angle other than 90 degrees. They are used to locate discontinuities that are
unfavorably oriented with respect to the entry surface. They are also used to propagate
shear, surface and plate waves into the test specimen by mode conversion.
DOUBLE TRANSDUCERS (T/R PROBES)
The technique with two search units, also referred to as dual element probes, is used
when the test piece is of irregular shape and reflecting interfaces or back surfaces are
not parallel with the entry surface. One search unit is the transmitter, and the second
search unit is the receiver. The transmitting search unit projects a beam of vibrations
into the material; the vibrations travel through the material and are reflected back to the
receiving search unit from flaws or from the opposite surface.
An additional increase in sensitivity within the near surface zone is attained by a slight
inclination of crystals towards each other. This angle of inclination is called the roof
angle, which varies from 0 degrees to approx. 12 degrees, depending on the purpose
of application and probe itself.
Due to the relatively long delay line, the distance between the electrical zero point
(initial pulse) and the mechanical zero point (surface of the workpiece) is rather great.
For this reason, the initial pulse shifts so far to the left, out of the CRT screen, that the
near surface area is no longer covered by it. However, at high instrument gain there is
an interference echo in the near surface zone. This interference is caused by small
sound portions which, travelling through the acoustic separation layer or along the
surface through the coupling medium, reach the receiving crystal. This interference
echo, occurring with T/R probes, is designated the cross-talk echo, because it is caused
by a direct cross-talk of sound pulses. In general, however, the cross-talk echo does not
affect the detectability of near surface reflectors.
Because of the V-shaped sound path of a T/R probe, the actual path length a to a back
wall at depth d is

a = √(d² + 0.25 c²)

where c is the distance between the two crystals. The detour “u” which the sound pulse
additionally travels on its V-shaped path consequently equals

u = a - d
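The V-path correction can be sketched numerically (the dimensions are made-up illustrative values):

```python
import math

def v_path_detour(thickness, crystal_separation):
    # Actual half-path a = sqrt(d^2 + 0.25 c^2); the detour is u = a - d.
    a = math.sqrt(thickness ** 2 + 0.25 * crystal_separation ** 2)
    return a - thickness

# For a 10 mm wall and 8 mm crystal separation the detour is ~0.77 mm,
# a noticeable error if left uncorrected on thin material.
print(round(v_path_detour(10.0, 8.0), 2))  # 0.77
```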
The operator chooses the two calibration lines which are required to calibrate the test
range. All measured values which now fall within the interval of the two calibration lines
are exact within the usual tolerances. The measuring error increases as the real
measuring distance decreases.
Applications
EMA Transducers
FOCUSED UNITS
Sound can be focused by acoustic lenses in a manner similar to that in which light is
focused by optic lenses. Most acoustic lenses are designed to concentrate sound
energy, which increases beam intensity in the zone between the lens and the focal
point. When an acoustic lens is placed in front of the search unit, the effect resembles
that of a magnifying glass; that is, a smaller area is viewed but details in that area
appear larger. The combination of a search unit and an acoustic lens is known as a
focused search unit or focused transducer; for optimum sound transmission, the lens of
a focused search unit is usually bonded to the transducer face. Focused search units
can be immersion or contact types.
Acoustic lenses are designed similarly to optic lenses. Acoustic lenses can be made of
various materials; several of the more common lens materials are methyl
methacrylate, polystyrene, epoxy resin, aluminum, and magnesium. The important
properties of materials for acoustic lenses are:
Acoustic lenses for contour correction are usually designed on the premise that the
entire sound beam must enter the testpiece normal to the surface of the testpiece. For
example, in the straight-beam inspection of tubing, a narrow diverging beam is
preferred for internal inspection and a narrow converging beam for external inspection.
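As a rough design sketch, the focal length in water of a plano-concave acoustic lens follows the thin-lens relation f = R / (1 − c_water/c_lens), where R is the radius of curvature. The velocities below are assumed round figures, and the function name is illustrative:

```python
def lens_focal_length(radius_mm, c_lens, c_water=1480.0):
    """Thin-lens focal length in water of a plano-concave acoustic lens.

    c_lens and c_water are longitudinal velocities in m/s; the lens
    material must be acoustically faster than water for focusing.
    """
    return radius_mm / (1.0 - c_water / c_lens)

# Polystyrene lens (c ~ 2350 m/s), 25 mm radius of curvature: f ~ 2.7 * R
```

The faster the lens material relative to water, the shorter the focal length for a given curvature, which is one reason velocity is a key lens-material property.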
In either case, with a flat-face search unit there is a wide front-surface echo caused by
the inherent change in the length of the water path across the width of the sound beam,
which results in a distorted pattern of multiple back reflections (Fig. 38a). A cylindrical
lens eliminates this effect.
The shapes of acoustic lenses vary over a broad range; two types are shown in Fig. 39: a
cylindrical (line-focus) search unit in Fig. 39(a) and a spherical (spot-focus) search unit
in Fig. 39(b). The sound beam from a cylindrical search unit illuminates a rectangular
area that can be described in terms of beam length and width. Cylindrically focused
search units are mainly used for the inspection of thin-wall tubing and round bars. Such
search units are especially sensitive to fine surface or subsurface cracks within the
walls of tubing. The sound beam from a spherical search unit illuminates a small
circular spot. Spherical transducers exhibit the greatest sensitivity and resolution of all
the transducer types, but the area covered is small, and the useful depth range is
correspondingly small. Focusing can be achieved by shaping the transducer element.
The front surface of a quartz crystal can be ground to a cylindrical or spherical radius.
Barium titanate can be formed into a curved shape before it is polarized. A small
piezoelectric element can be mounted on a curved backing member to achieve the same result.
Focal Length. Focused transducers are described by their focal length: short,
medium, long, or extralong. Short focal lengths are best for the inspection of regions of
the testpiece that are close to the front surface. The medium, long, and extralong focal
lengths are for increasingly deeper regions. Frequently, focused transducers are specially
designed for a specific application. The longer the focal length of the transducer, the
deeper into the testpiece the point of high sensitivity will be.
The focal length of a lens in water has little relation to its focal depth in metal, and
changing the length of the water path in immersion inspection produces little change in
focal depth in a testpiece. The large difference in sonic velocity between water and
metal causes sound to bend at a sharp angle when entering a metal surface at any
oblique angle. Therefore, the metal surface acts as a second lens that is much more
powerful than the acoustic lens at the transducer, as shown in Fig. 40. This effect moves
the focal spot very close to the front surface, as compared to the focal point of the same
sound beam in water. This effect also causes the transducer to act as a notably
directional and distance-sensitive receiver, sharpens the beam, and increases sensitivity
to small reflectors in the focal zone. Thus flaws that produce very low amplitude echoes
can be examined in greater detail than is possible with standard search units.
Useful Range. The most useful portion of a sound beam starts at the point of maximum
sound intensity and extends for a considerable distance beyond this point. Focusing the
sound beam moves the maximum-intensity point toward the transducer and shortens
the usable range beyond it. The useful range of focused transducers extends from about
0.25 to approximately 2.50 mm below the front surface. In materials 0.25 mm (0.010 in.) or
less in thickness, resonance or antiresonance techniques can be used. These
techniques are based on changes in the duration of ringing of the echo, or the number
of multiples of the back-surface echo. The advantages of focused search units are listed
below; these advantages apply mainly to the useful thickness range of 0.25 to 2.50
mm (0.010 to 0.10 in.) below the front surface:
• High resolving power
• Low effects of surface roughness
• Low effects of front-surface contour
• Low metal noise (background)
The echo-masking effects of surface roughness and metal noise can be reduced by
concentrating the energy into a smaller beam. The side lobe energy produced by a flat
transducer is reflected away by a smooth surface. When the surface is rough, some of
the side lobe energy returns to the transducer and widens the front reflection, causing
loss of resolving power and increasing the length of the dead zone. The limitation of
focused search units is that only the small region of the test part in the area of sound
focusing can be effectively interrogated.
Noise. Material noise consists of low-amplitude, random signals from numerous small
reflectors irregularly distributed throughout the testpiece. Some of the causes of metal
noise are grain boundaries, microporosity, and segregations. The coarser the structure,
the larger the small reflectors encountered by the beam. If echoes from several of these
small reflectors have the same transit time, they may produce an indication amplitude
that exceeds the acceptance level. Focused beams reduce
background by reducing the volume of metal inspected, which reduces the probability
that the sound beam will encounter several small reflectors at the same depth. Echoes
from discontinuities of unacceptable size are not affected and will rise well above the
remaining background.
Focused search units allow the highest possible resolving power to be achieved with
standard ultrasonic equipment. When a focused search unit is used, the front surface of
the testpiece is not in the focal zone, and the concentration of sound beam energy makes
the echo amplitude from any flaw very large. The resolving power of any system can be
greatly improved by using a focused transducer designed specifically for the application
and for a defined region within the testpiece. Special configurations consist of spherical,
conical, and toroidal apertures, with improvements in beam width and depth of
field. The figure below shows an immersion probe with a cylindrical lens.
Special Probes
to particular locations and angles. Some of the examples are
given below:
HF Wheel Probe
Small, low profile twin angle beam transducers with integral cables for the
inspection of welds in steam boilers in power stations.
Axially radiused to suit 50mm boiler tubes.
5 MHz, with an increased toe-in angle to produce a short focal length to suit
such welds. Available as 70° and 60°, with BNC or LEMO1 connectors to fit flaw
detectors.
Roller Probes
Dry contact roller probes are designed to be used where automated testing is
required. Roller probes have miniature BNC connectors. They can be used in
combination with dry contact flaw detectors, UFDS and the MS310D.
All TOFD transducers are highly
damped with short pulse length,
broad bandwidth and high
sensitivity, utilizing lead
metaniobate crystals. One major
application of TOFD is the
ultrasonic examination of welds
after final heat treatment and/or
hydraulic testing, to verify the
absence of cracks not detectable
by radiography etc. They are also
used to monitor welds during the
service life of components.
Sound radiation in the material
The previous discussion of wave propagation proceeded from the assumption that a
plane wave in an unlimited medium is concerned. In practice, however, the crystal
produces a system of waves in a limited area, which leads to a sound field shaped in a
very complicated manner. In order to explain these processes we look at two
point-shaped sources P1 and P2, which transmit spherical waves. We also assume that
both sources simultaneously produce maxima and minima of the same amplitude. In the
space surrounding points P1 and P2 there are certain points where the path difference
between the two waves is just ½ λ; i.e. at these points a minimum of the one wave
overlaps a maximum of the other wave. At these points the two waves cancel each
other. Another group of points is characterized by the fact that two maxima or two
minima of the two waves overlap. At these points a wave occurs whose amplitude is
twice as large as that of the wave coming from P1 and P2. The figure shows a
momentary representation of the system discussed. The thickly drawn circles represent
the maxima and the thinly drawn circles the minima of the outgoing waves. The points
where a thickly drawn ring and a thinly drawn ring intersect are the points of
cancellation (connected by dotted lines). The dash-and-dot lines connect the points
where two maxima or two minima overlap, i.e. where a gain takes place. A
superposition of two or more waves coming from different points is called interference.
Due to the interference, a complicated wave system occurs, which can be demonstrated
by water waves. In order to make the sound radiation from the crystal surface
understandable, the surface is subdivided into many small points. Each point of the
crystal is considered to be the starting point of a spherical wave (Huygens' principle).
Due to the interference of all these waves a complicated system of maxima and minima
occurs: the sound field.
Behind the crystal a number of interference maxima and minima can be seen. On the
central beam there is the last maximum, the main maximum, and from this point on no
further maxima and minima exist. The area of maxima and minima up to the main
maximum is called the near field. The distance between crystal and main maximum is the
near field length N. The near field length depends on the area of the crystal face, i.e. the
square of its diameter, the frequency of the sound waves, and the sound velocity in the
material in which the waves propagate.
Formula: N = D²·f / (4·c) = D² / (4·λ)
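The dependencies just described (square of the crystal diameter D, frequency f, sound velocity c) combine in the standard near-field relation N = D²·f / (4·c), which can be sketched as follows (the steel velocity below is an assumed round figure):

```python
def near_field_length(diameter_mm, freq_mhz, velocity_m_s):
    """Near field length N = D^2 * f / (4 * c), returned in mm."""
    d = diameter_mm / 1000.0    # crystal diameter in m
    f = freq_mhz * 1e6          # frequency in Hz
    n = d ** 2 * f / (4.0 * velocity_m_s)
    return n * 1000.0           # back to mm

# 10 mm crystal, 4 MHz, longitudinal waves in steel (~5920 m/s): N ~ 17 mm
```

Doubling either the diameter (squared) or the frequency lengthens the near field, consistent with the text.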
The representation of the sound field which shows the interference maxima and minima
is not very suitable for ultrasonic testing practice, although it shows the actual sound
pressure distribution of the sound source. In practice an approximate representation of the
sound field is used, which shows the area around the sound source where reflections from
flawed areas in the workpiece occur when the pulse echo method is applied. This
approximate representation is called the sound beam.
First, we look at distances which are larger than the near field length N. Proceeding
from a point on the central beam, we look for points on the vertical to the central beam
where the sound pressure has been reduced by 50 % as compared with the starting point
on the central beam.
For various distances these points can be connected by straight lines in good
approximation. The angle between the central beam and the marginal ray is called the
divergence angle γ50; the index 50 means that the marginal ray characterizes those
points which show a sound pressure reduced by 50 % as compared with the central
beam. In the same way, 10 % marginal rays can, of course, be defined. Here the sound
pressure amounts to only 10 % of the value on the central beam. This results in a new
divergence angle γ10. The divergence angles can be calculated as follows:
sin γ50 = 0.51 · λ / D        sin γ10 = 0.87 · λ / D
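A small sketch of the divergence-angle calculation, using the standard textbook edge constants for a circular crystal (0.51 for the 50 % / −6 dB edge and 0.87 for the 10 % / −20 dB edge; the steel velocity is an assumed round figure):

```python
import math

def divergence_angle(freq_mhz, diameter_mm, velocity_m_s, k):
    """Beam-edge half angle in degrees: sin(gamma) = k * lambda / D."""
    wavelength_mm = velocity_m_s / (freq_mhz * 1e6) * 1000.0
    return math.degrees(math.asin(k * wavelength_mm / diameter_mm))

# 4 MHz, 10 mm crystal in steel (~5920 m/s):
g50 = divergence_angle(4, 10, 5920, 0.51)   # ~4.3 degrees
g10 = divergence_angle(4, 10, 5920, 0.87)   # ~7.4 degrees
```

Raising the frequency or enlarging the crystal shrinks λ/D and therefore narrows the beam, as the following paragraph notes.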
It can be observed that the shape of the sound beam depends on the sound velocity, the
frequency, and the crystal diameter. The dependence on the sound velocity means that the
shape of the sound beam is influenced by the material of the test object. If a certain
material is concerned, and thus the sound velocity is known, a larger near field length and
a smaller divergence angle are obtained by increasing the testing frequency. The same
effect can be reached by increasing the crystal diameter.
The geometry of the crystal and the wave characteristics of the sound waves are the
reason for the characteristic form of the sound beam and the interference effects.
We assumed in this connection a trouble-free propagation of the sound waves. Trouble-
free means:
Both conditions can, of course, not be met in practice. That means that further
influences and effects exerted on the sound propagation occur in the natural workpiece.
Even a flawless workpiece of steel, for example, contains very small and finely
distributed inhomogeneities, e.g. grain boundaries. These are areas where the
space-lattice structure of the material is defective, or where foreign matter accumulates
in very small quantities at the grain boundaries. The propagation of the sound waves is
more or less strongly affected by these inhomogeneities.
Absorption
Scattering
To reduce this effect, it is desirable to decrease the test
frequency while scanning coarse grained material. Hence
frequencies in the range of 0.1MHz are even used to perform
inspection. However, reducing the frequency results in a reduction
in sensitivity. Hence it should always be borne in mind that the
test is being conducted at a lower sensitivity.
Inspection standards
The inspection or reference standards for pulse-echo testing include test blocks
containing natural flaws, test blocks containing artificial flaws, and the technique of
evaluating the percentage of back reflection. Inspection standards for thickness testing
can be plates of various known thicknesses or can be stepped or tapered wedges.
Test blocks containing natural flaws are metal sections similar to those parts being
inspected. Sections known to contain natural flaws can be selected for test blocks.
Test blocks containing natural flaws have only limited use as standards, for several
principal reasons:
Test blocks containing artificial flaws consist of metal sections containing notches, slots,
or drilled holes. These test blocks are more widely accepted as standards than are test
blocks that contain natural flaws.
Test blocks containing drilled holes are widely used for longitudinal wave, straight-beam
inspection. The hole in the block can be positioned so that ultrasonic energy from the
search unit is reflected either from the side of the hole or from the bottom of the hole.
The flat-bottom hole is used most because the flat bottom of the hole offers an optimum
reflection surface that is reproducible; with a conical-bottom hole, a large portion of the
reflected energy may never reach the search unit. Differences of 50% or more can easily
be encountered between the energy reflected back to the search unit from flat-bottom
holes and from conical-bottom holes of the same diameter. The difference is a function
of both transducer frequency and distance from search unit to hole bottom. Fig. 43(a)
shows a typical design for a test block that
contains a flat-bottom hole. In using such a block, high-frequency sound is directed from
the surface called the "entry surface" toward the bottom of the hole, and the reflection from
it is used either as a standard to compare with signal responses from flaws or as a
reference value for setting the controls of the ultrasonic instrument.
In the inspection of sheet, strip, welds, tubing, and pipe, angle-beam inspection can be
used. This type of inspection generally requires a reference standard in the form of a
block that has a notch machined into the block. The sides of the notch can be straight
and at right angles to the surface of the test block, or they can be at an angle. The
width, length, and depth of the notch are usually defined by the applicable specification.
The depth is usually expressed as a percentage of testpiece thickness.
In some cases, it may be necessary to make one of the parts under inspection into a
test block by adding artificial discontinuities, such as flat-bottom holes or notches.
These artificial discontinuities can sometimes be placed so that they will be removed
by subsequent machining.
Reference blocks
On the screen, the height of the echo indication from a hole varies with the distance of
the hole from the front surface in a predictable manner based on near-field and far-field
effects, depending on the test frequency and search-unit size, as long as the grain size
of the material is not large. Where grain size is large, this normal variation can be
altered. Differences in ultrasonic transmissibility can be encountered in reference
blocks of a material with two different grain sizes; this can be caused by rapid attenuation
of ultrasound in the large-grain material, such as large-grain stainless steel.
quite large, it may not even be possible to obtain a back reflection at normal test
frequencies.
In the inspection of aluminum, a single set of reference blocks can be used for most
parts regardless of alloy or wrought mill product. This is considered acceptable practice
because ultrasonic transmissibility is about the same for all aluminum alloy
compositions.
For ferrous alloys, however, ultrasonic transmissibility can vary considerably with
composition. Consequently, a single set of reference blocks cannot be used when
inspecting various products made of carbon steels, stainless steels, tool steels, low-alloy
steels, and high-temperature alloys. For example, if a reference block prepared from
fine-grain steel were used to set the level of test sensitivity and the material being inspected
were coarse grained, flaws could be quite large before they would yield an indication
equal to that obtained from the bottom of the hole in the reference block. Conversely, if
a reference block prepared from coarse-grain steel were used, the instrument could be so
sensitive that minor discontinuities would appear to be major flaws. Thermal treatment can also
have an appreciable effect on the ultrasonic transmissibility of steel. For this reason, the
stage in the fabrication process at which the ultrasonic inspection is performed may be
important. In some cases, it may determine whether or not a satisfactory ultrasonic
inspection can be performed.
Reference blocks are generally used to adjust the controls of the ultrasonic instrument
to a level that will detect flaws having amplitudes above a certain predetermined level. It
is usual practice to set instrument controls to display a certain amplitude of indication
from the bottom of the hole or from the notch in a reference block and to record and
evaluate for accept/reject indications exceeding that amplitude, depending on the
codes or applicable standards. The diameter of the reference hole and the number of
flaw indications permitted are generally related to performance requirements for the part
and are specified in the applicable codes or specifications.
• The larger the section or testpiece, the greater the likelihood of encountering flaws
of a particular size.
• Flaws of a damaging size may be permitted if found to be in an area that
will be subsequently removed by machining or that is not critical
• It is generally recognized that the size of the flaw whose echo exceeds the
rejection level usually is not the same as the diameter of the reference hole. In a
reference block, sound is reflected from a nearly perfect flat surface represented
by the bottom of the hole. In contrast, natural flaws are usually neither flat nor
perfectly reflecting
• The depth of a flaw from the entry surface will influence the height of its echo that
is displayed on the oscilloscope screen. Fig. 45 shows the manner in which echo
height normally varies with flaw depth. Test blocks of several lengths are used to
establish a reference curve for the distance-amplitude correction of inspection
data.
The percentage of back reflection technique is most useful when lot-to-lot variations in
ultrasonic transmissibility are large or unpredictable, a condition often encountered in
the inspection of steel.
The size of flaw that produces a rejectable indication will depend on grain size, depth of
flaw below the entry surface, and test frequency. When acceptance or rejection is
based on indications that equal or exceed a specified percentage of the back reflection,
rejectable indications may be caused by smaller flaws in coarse-grain steel than in
fine-grain steel. This effect becomes less pronounced, or is reversed, as the transducer
frequency and corresponding sensitivity necessary to obtain a predetermined height of
back reflection are lowered. Flaw evaluation may be difficult when the testpiece grain
size is large or mixed.
Other techniques are used besides the reference block technique and the percentage
of back reflection technique. For example, in the inspection of stainless steel plate, a
procedure can be used in which the search unit is moved over the rolled surface, and
the display on the screen is observed to determine whether an area of defined
size is encountered where complete loss of back reflection occurs without the presence
of a discrete flaw indication. If such an area is encountered, the plate is rejected unless
the loss of back reflection can be attributed to surface condition or large grain size.
Another technique that does not rely on test blocks is to thoroughly inspect one or more
randomly chosen sample parts for natural flaws by the method to be used on
production material. The size and location of any flaws that are detected by ultrasonic
testing are confirmed by sectioning the part. The combined results of ultrasonic and
destructive studies are used to develop standards for instrument calibration and to
define the acceptance level for production material.
Thickness Blocks. Stepped or tapered test blocks are used to calibrate ultrasonic
equipment for thickness measurement. These blocks are carefully ground from material
similar to that being inspected, and the exact thickness at various positions is marked
on the block. Either type of block can be used as a reference standard for resonance
inspection; the stepped block can also be used for transit-time inspection.
Many of the standards and specifications for ultrasonic inspection require the use of
standard reference blocks, which can be prepared from various alloys, may contain holes,
slots, or notches of several sizes, and may be of different sizes or shapes.
The IIW Type Calibration Block
IIW Type US-1 & IIW Type US-2 are shown in the figures above
IIW type blocks are used to calibrate instruments for both angle
beam and normal incident inspections. Some of their uses include
setting metal-distance and sensitivity settings, determining the
sound exit point and refracted angle of angle beam transducers,
and evaluating depth resolution of normal beam inspection setups.
Instructions on using the IIW type blocks can be found in the
annex of American Society for Testing and Materials Standard E164,
Standard Practice for Ultrasonic Contact Examination of Weldments.
The Miniature Angle-Beam or ROMPAS Calibration Block
The RC Block is used to determine the resolution of angle beam
transducers per the requirements of AWS and AASHTO. Engraved
index markers are provided for 45, 60, and 70 degree refracted
angle beams.
Step and tapered calibration wedges
come in a large variety of sizes and
configurations. Step wedges are
typically manufactured with four or
five steps, but custom wedges can be
obtained with any number of steps.
Tapered wedges have a constant taper
over the desired thickness range.
Distance/Area-Amplitude Blocks
• 3/64" at 3"
• 5/64" at 1/8", 1/4", 1/2", 3/4", 1-1/2", 3", and 6"
• 8/64" at 3" and 6"
Sets are commonly sold in 4340 vacuum-melt steel, 7075-T6 aluminum, and Type 304
corrosion-resistant steel. Aluminum blocks are fabricated per the requirements of
ASTM E127, Standard Practice for Fabricating and Checking Aluminum Alloy Ultrasonic
Standard Reference Blocks. Steel blocks are fabricated per the requirements of
ASTM E428, Standard Practice for Fabrication and Control of Steel Reference Blocks
Used in Ultrasonic Inspection.
Distance-Amplitude #3, #5, #8 FBH Blocks
Distance-amplitude blocks are also very similar to the distance/area-amplitude blocks
pictured above. Nineteen-block sets with flat-bottom holes of a single size and varying
metal path distances are also commercially available. Sets have either a #3 (3/64")
FBH, a #5 (5/64") FBH, or a #8 (8/64") FBH. The metal path distances are 1/16", 1/8",
1/4", 3/8", 1/2", 5/8", 3/4", 7/8", 1", 1-1/4", 1-3/4", 2-1/4", 2-3/4", 3-1/4", 3-3/4",
4-1/4", 4-3/4", 5-1/4", and 5-3/4". The relationship between the metal path distance and
the signal amplitude is determined by comparing signals from same-size flaws at
different depths. Sets are commonly sold in 4340 vacuum-melt steel, 7075-T6 aluminum,
and Type 304 corrosion-resistant steel. Aluminum blocks are fabricated per the
requirements of ASTM E127, Standard Practice for Fabricating and Checking Aluminum
Alloy Ultrasonic Standard Reference Blocks. Steel blocks are fabricated per the
requirements of ASTM E428, Standard Practice for Fabrication and Control of Steel
Reference Blocks Used in Ultrasonic Inspection.
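The FBH numbering used above is simply the hole diameter in 64ths of an inch, so the conversion is a one-liner (the function name and unit argument are mine):

```python
def fbh_diameter(number, unit="in"):
    """Flat-bottom hole diameter from its FBH number (n/64 inch)."""
    inches = number / 64.0
    return inches if unit == "in" else inches * 25.4  # millimetres

# A #5 FBH is 5/64 in. across, i.e. just under 2 mm in diameter
```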
Modern industrial asset maintenance and inspection concepts
require reliable and accurate inspection techniques. New
developments in modern NDT have resulted in a range of screening
tools and enhanced mapping techniques, which enable reliable
condition assessment during the operational phase of
installations. In-service inspections allow on-stream condition
assessment of equipment before planned shutdowns. NDT data is
applied to optimise off-stream inspection intervals and to direct
maintenance effort to where it is required most. The result is
reduced downtime and increased availability of installations. With
the advent of computer technology and miniaturisation of
equipment, NDT techniques have developed rapidly into modern
highly reliable inspection tools. One of the most important
factors is the increased accuracy and reproducibility of data.
Further mechanisation of inspection techniques improved
reliability greatly. One of the main reasons for this is that full
coverage is assured by performing the inspection in a mechanised
fashion. Secondly, the availability of a complete inspection
record greatly improves condition monitoring facilities.
Thickness Testing: Sound travels into the part and returns after
a measurable period of time.
Flaw Detection: Sound travels into the part, reflecting from the
defect.
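The two pulse-echo uses above rest on the same relation: thickness = velocity × round-trip time / 2. A minimal sketch (the steel velocity is an assumed round figure):

```python
def thickness_from_transit(round_trip_us, velocity_m_s):
    """Thickness in mm from a pulse-echo round-trip time in microseconds.

    The factor of 2 accounts for the sound travelling down and back.
    """
    return velocity_m_s * round_trip_us * 1e-6 / 2.0 * 1000.0

# A 10 us round trip in steel (~5920 m/s) corresponds to ~29.6 mm of material
```

In flaw detection the same arithmetic locates the reflector: the transit time of the flaw echo gives its depth.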
The height of the signal on the display represents the amount of reflected sound energy.
Test Part:
• The part has properties that allow for consistent, measurable, repeatable results.
The flaw detector and probe utilize the piezoelectric effect: when electrical energy is
applied, mechanical energy is produced, and when mechanical energy is applied,
electrical energy is produced.
The following shows the types of reflections that would be displayed on the Ultrasonic instrument
when testing a piece with internal flaws.
as contact testing when the sound beam is transmitted through a substance
other than water.
The display for contact testing shows an initial pulse and the
front wall reflection superimposed, or sometimes very close to each
other, followed by subsequent back surface reflections. The
presence of discontinuities is judged by the appearance of an echo,
or pip, between the front wall reflection (initial echo) and
the back wall reflection. On the basis of this echo, the nature
of the flaw, its apparent depth, and its distance from the scanning
point are calculated. Acceptance or rejection is as per the
amplitude and the relative size of the discontinuity indication.
Since the probe is in contact with the test specimen, the frequency
is induced by the exciting crystal and the vibrations are passed
on to the job. Hence, the crystal used in contact inspection
necessarily has to be thicker to induce a vibration in the test
specimen. This limits the frequency of inspection, as the frequency
depends on the thickness of the crystal: the thicker the crystal,
the lower the frequency at which it can operate, hence limiting
the frequency of contact inspection. However, other factors such
as portability and cost of inspection, when compared to other
forms of testing, have made contact testing a conventional method
in ultrasonic inspection.
Immersion Inspection
In this method, both the test piece and the probe are totally
immersed in liquid, usually water. There are three broadly classified
scanning methods that utilize immersion-type search units:
• Conventional immersion methods, in which both the search unit and the test piece
are immersed in liquid
• Squirter and bubbler methods, in which the sound is transmitted through a column of
flowing water
• Scanning with a wheel-type search unit, which is generally classified as an
immersion method because the transducer itself is immersed.
Basic Immersion Inspection. In conventional immersion inspection, both the search
unit and the test piece are immersed in water. The sound beam is directed into the test
piece using either a straight-beam (longitudinal wave) technique or one of the various
angle-beam techniques, such as shear, combined longitudinal and shear, or Lamb
wave. Immersion-type search units are basically straight-beam units and can be used
for either straight-beam or angle-beam inspection through control and direction of the
sound beam.
In straight-beam immersion inspection, the water path (distance from the face of the
search unit to the front surface of the test piece) is generally adjusted to require a
longer transit time than the depth of scan, so that the first multiple of the front reflection
will appear farther along the oscilloscope trace than the first back reflection. This is
done to clear the displayed trace of signals that may be misinterpreted. Water path
adjustment is particularly important when gates are used for automatic signaling and
recording. Longitudinal wave velocity in water is approximately one-fourth the velocity in
aluminum or steel; 25 mm (1 in.) of water path is approximately equal to 100 mm (4 in.) of
steel or aluminum. Therefore a rule of thumb is to make the water path equal to
one-fourth the test piece thickness plus 6 mm (1/4 in.).
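The rule of thumb in this paragraph can be written directly (metric form; the 6 mm margin is from the text, and the function name is mine):

```python
def water_path(test_piece_thickness_mm):
    """Rule of thumb: water path = test piece thickness / 4 + 6 mm.

    This keeps the first multiple of the front-surface echo beyond the
    first back reflection on the displayed trace.
    """
    return test_piece_thickness_mm / 4.0 + 6.0

# A 50 mm steel plate calls for roughly an 18.5 mm water path
```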
Since the transducer does not come into direct contact with the
test specimen, it is possible to use thinner crystals to produce
sound waves. Using a thinner crystal increases the frequency of
inspection. It is possible to use frequencies as high as 25 MHz,
and the range usually varies between 2.25 MHz and 25 MHz. Higher
frequencies give better resolution of smaller discontinuities.
Apart from the conventional probes, spherically ground and
cylindrically ground acoustic lenses are commonly added to
immersion-type transducers. They are used to:
4 x 0.25" = 1"
C. Subtracting the answer from the known focal length in water:
5" - 1" = 4"
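The worked figures above (0.25 in. of metal is equivalent to about 1 in. of water, and 5 − 1 = 4 in.) follow from the roughly 4:1 velocity ratio between steel or aluminum and water; a hedged sketch with an illustrative function name:

```python
def water_path_for_focal_depth(focal_length_water, metal_depth, velocity_ratio=4.0):
    """Water path that places the focus at `metal_depth` inside the part.

    Each unit of metal path consumes about `velocity_ratio` units of the
    water focal length (v_metal / v_water ~ 4 for steel or aluminum).
    """
    return focal_length_water - metal_depth * velocity_ratio

# 5 in. focal length in water, focus 0.25 in. deep in aluminum -> 4 in. water path
```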
Water-Column Designs. In many cases the shape or size of a test piece does not lend
itself to conventional immersion inspection in a tank. The Squirter scanning method,
which operates on the immersion principle, is routinely applied to the high-speed
scanning of plate, sheet, strip, cylindrical forms, and other regularly shaped test
pieces (Fig.). In the Squirter method, the sound beam is projected into the material
through a column of water that flows through a nozzle on the search unit. The sound
beam can be directed either perpendicular or at an angle to the surface of the test
piece. Squirter methods can also be adapted for through transmission inspection. For
this type of inspection, two Squirter-type search units are used. The bubble method is a
minor modification of the Squirter method that gives a less directional flow of couplant.
Wheel-type search units operate on the immersion principle in that a sound beam is
projected through a liquid path into the test piece. An immersion-type
search unit, mounted on a fixed axle inside a liquid-filled rubber tire, is held in one
position relative to the surface of the test piece while the tire rotates freely. The
wheel-type search unit can be mounted on a stationary fixture and the test piece moved
past it, or it can be mounted on a mobile fixture that moves over a stationary test piece.
The position and angle of the transducer element are determined by the inspection
method and technique to be used and are adjusted by varying the position of either the
transducer element inside the tire or the mounting yoke of the entire unit.
For straight-beam inspection, the ultrasonic beam is projected straight into the test
piece perpendicular to the surface of the test piece. Applications of the straight-beam
technique include the inspection of plate for lamination and of billet stock for primary
and secondary pipe.
Resonant Inspection
Resonant Inspection is a relatively new NDT technique that originated
with scientists at Los Alamos National Laboratory in the USA and
has been developed for industrial applications over the last four
years of commercialisation by the American company Quatrosonics Inc.
It is a whole-body resonance inspection that is particularly suited
to inspecting smaller, mass-produced hard components; a single test
inspects the complete component without radiation, scanning,
immersion in liquids, chemicals, abrasives, or other consumables.
Test Set-Up
For Resonant Inspection, the component to be tested is normally
located on three or four piezo transducers. It is not necessary to
scan the component with the transducers, nor to rotate the component
past them, as one test evaluates the whole body or complete
component. One of the transducers normally acts as a transmitter,
exciting the component, whilst one or two more act as receivers,
measuring the amplitude of vibration at the specific frequency of
the transmitter or at one of its harmonics. Further transducers can
be used to support the component during the test. These transducers
have ceramic tips (to prevent wear of the transducers and to provide
a good transfer of energy between the component and transducer),
which, whilst normally hemispherical, can also be ground to a
user-specific shape if required.
Vibration Modes and Spectra
Detection limits
Lack of bonding
Applications in mass production
Resonant Inspection is very well suited to inspecting mass-produced
parts, and is easily able to detect "outliers", i.e. components
that differ from the normal production.
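The outlier idea can be illustrated with a simple sketch. This is not Quatrosonics' actual algorithm; it only shows the principle of comparing a part's resonance peaks against a tolerance band learned from known-good parts:

```python
# Flag a part as an "outlier" when any of its resonant frequencies
# falls outside a band (mean +/- k*sigma) learned from good parts.
from statistics import mean, stdev

def learn_bands(good_parts, k=3.0):
    """good_parts: list of [f1, f2, ...] resonance peaks (Hz) per part."""
    bands = []
    for peaks in zip(*good_parts):  # iterate over each resonance mode
        m, s = mean(peaks), stdev(peaks)
        bands.append((m - k * s, m + k * s))
    return bands

def is_outlier(part_peaks, bands):
    """True if any measured peak falls outside its learned band."""
    return any(not (lo <= f <= hi) for f, (lo, hi) in zip(part_peaks, bands))

# Hypothetical two-mode spectra from four known-good components:
good = [[10000, 25000], [10010, 25020], [9990, 24980], [10005, 25010]]
bands = learn_bands(good)
print(is_outlier([10002, 25005], bands))  # in-family part
print(is_outlier([9500, 26000], bands))   # a crack or porosity shifts peaks
```

A cracked, porous, or mis-hardened part changes in stiffness or mass distribution, which shifts its resonant frequencies out of the learned bands.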
General corrosion
The most common manifestation of corrosion is a uniform attack,
caused by a chemical or electrochemical reaction uniformly
distributed over the exposed surface. A combination of a corrosive
product and an oxygen-containing environment may start corrosion.
Environmental factors such as temperature and electrochemical
potential determine the corrosion rate. Generally, this type of
corrosion is of no great concern, since a slow, gradual loss of
material is readily predictable and adequate measures can be taken.
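Because uniform wall loss is roughly linear in time, the "adequate measures" usually rest on a remaining-life estimate made from two thickness readings taken some years apart. A minimal sketch, with illustrative names and values that are not from this manual:

```python
# Remaining-life estimate for general (uniform) corrosion:
#   rate = wall loss / time between readings
#   remaining life = (current wall - minimum required wall) / rate

def corrosion_rate(t_prev_mm: float, t_now_mm: float, years: float) -> float:
    """Average wall-loss rate in mm/year between two thickness surveys."""
    return (t_prev_mm - t_now_mm) / years

def remaining_life(t_now_mm: float, t_min_mm: float,
                   rate_mm_per_year: float) -> float:
    """Years until the wall reaches its minimum required thickness."""
    return (t_now_mm - t_min_mm) / rate_mm_per_year

rate = corrosion_rate(12.0, 11.0, 4.0)  # 1 mm lost in 4 years = 0.25 mm/year
print(remaining_life(11.0, 8.0, rate))  # 12.0 years to the 8 mm minimum
```

This kind of linear extrapolation is exactly why general corrosion is considered less of a threat than the localised mechanisms described next.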
Pitting corrosion
Localised corrosion may be a greater threat to installations, because it may form small
pinholes that perforate the material rapidly. Pitting is the result
of an anodic reaction process. At an initiation location, the surface
is attacked by a corrosive product (e.g. chloride). Metallic atoms
are dissolved at the surface of the starting pit. The dissolution
causes an excess of positive charge in the surface area, which
attracts negative chloride ions to restore electrochemical balance.
The chloride ions, in turn, dissolve new metal atoms, and the
reaction becomes self-propagating. Within a short time the pit may
penetrate the complete wall thickness. The localised nature of
pitting makes it extremely difficult to detect pits at an early stage.
Weld root erosion/corrosion
Root erosion is a degradation phenomenon which is often encountered
in flow lines. It is difficult to grind the weld penetration on the
inside of pipelines, and the excessive penetration causes a
discontinuity on the surface which disturbs the flow pattern. On
some metals, the wall is passivated by an oxide film which protects
the steel from corrosion processes. Turbulence and cavitation affect
the region adjacent to the welds and hamper formation of this
passivation layer; local wash-out of wall material at the weld side
is the result. Another form of weld root corrosion is caused by
selective corrosion. In many corrosion-resistant alloys or special
welding materials, selective leaching may occur: removal of the
least noble metals results in deterioration of the lattice structure
of the alloy (e.g. dezincification in brass components). If the weld
material is more susceptible to corrosion than the base material,
wash-out of the weld causes root corrosion and degradation of
structural integrity.
Fatigue cracking
Besides expansion of the volume and crack initiation,
decarburisation of the steel structure results in hydrogen
embrittlement, which leads to rapid deterioration of structural
integrity.
Special UT probes
For near-surface cracks, special creeping-wave probes have been
developed. Creeping waves travel just below the surface rather than
in it; they are therefore influenced neither by coupling liquids
nor by surface irregularities. Moreover, since the creeping wave is
a compression-wave type, it suffers less from a coarse material
structure than shear waves do. Another outstanding example is the
development of probes focused for under-cladding crack (UCC)
detection. The focal range is calculated in the transition zone
between the cladding and the parent material. Cracks initiating
from the clad layer into the parent material may be readily
detected by UCC probes.
probe or special functions such as Long-Long-Trans (LLT) probes
have been developed. Multi-crystal transducers combine a number of
tasks in one housing to save space in the scanner set-up and to
reduce construction costs. In all cases it proved of utmost
importance that the transducer parameters be optimized for the
specific job. Once experience has been gained with a certain weld
type, however, it is possible to establish a "standard series" of
dedicated transducers with which the inspections can be performed
without excessive lead times. Mechanization of the inspection
improved inspection accuracy and reproducibility.
TOFD
Over the past years, the system has been used in a great variety
of applications, ranging from circumferential welds in pipelines
(including joints of different wall thickness and tapered pipes)
to weld inspection of heavy-wall pressure vessels (up to 300 mm
wall thickness). The TOFD technique has also been applied
successfully to the inspection of partially filled welds, which are
hardly inspectable by any other technique. Nozzle and flange welds
(complex geometry) can be inspected with prior computer simulation
modelling to aid inspection planning and result evaluation.
and service-induced defects are revealed and progressively
monitored. Critical reactor vessels with heavy-wall constructions
can only be adequately inspected by means of TOFD. Other
techniques, such as high-energy radiography with Cobalt-60 sources
or portable betatrons, face high safety requirements and extremely
long examination times. Ultrasonic meander scanning is often too
cumbersome and time-consuming. Spherical gas tanks and steam
generator headers may be surveyed for cracks.
Mapscan
P-scan/Bandscan
P-scan
Bandscan
concept of line scanning was implemented in the Bandscan. A
transducer frame on a guided vehicle is moved parallel to the weld
along a band. Any number of probes can be mounted with different
functions.
Pipe supports
Corrosion under pipe supports or saddles is a major problem area
for inspection. The region between the outer pipe surface and the
support is susceptible to corrosion due to water ingress.
Radiographic or ultrasonic wall-thickness techniques are not
applicable because of access limitations. The only alternative is
lifting the pipe from the support to gain access; however, the
condition of the pipe is the unknown factor, and lifting may be a
risky enterprise. With LORUS, the support region is inspected from
the free top surface of the pipe without the need for lifting. Fast
screening of large numbers of supports is achieved in a minimum of
time. Both pulse-echo and transmission techniques are applied in
circumferential directions to obtain maximum information. In a
single scan over the top surface of the pipe, two probes measure
reflection and transmission signals simultaneously. Reflection
signals are used to calculate projection images, while transmission
signals are used to estimate corrosion severity in several depth
classes.