
Non-Destructive Testing

Level I & II

As per the American Society for Nondestructive Testing (ASNT SNT-TC-1A)

- Penetrant Testing - Magnetic Particle Testing - Ultrasonic Testing
- Radiographic Testing - Visual Testing - RTFI

IMECH Institute Pvt. Ltd.


Training & Consulting

QA/QC Courses: NDT; CSWIP 3.1 & 3.2; API Exam Training; BGAS & SSPC; Mechanical QA/QC; Piping
MEP Courses: HVAC; Fire Fighting & Plumbing; Electrical Design; REVIT MEP
Safety Courses: NEBOSH IGC; NEBOSH HSW; IOSH; Diploma

2nd Floor Safa Apartments, Red Hills, Near Niloufer Hospital, Lakdikapul, Hyderabad T.S-
500004
email: [email protected] website: www.imechinstitute.in | Contact : 9700008685,
9700008797
TABLE OF CONTENTS

S.No  Method Name                   Page Start - Page End
1     Introduction                  2-5
2     Magnetic Particle Inspection  6-70
3     Liquid Penetrant Testing      72-149
4     Radiography Testing           150-264
5     Ultrasonic Testing            265-325

Introduction to Nondestructive Testing
Non-destructive testing (NDT) has no clearly defined boundaries; even a simple
technique such as visual inspection is a form of non-destructive testing. The major
methods include:

1. Penetrant Testing
2. Magnetic Particle Testing
3. Ultrasonic Testing
4. Radiography Testing

There is also a range of newer techniques with particular specialized
applications in limited fields. They include:

1. Eddy Current Testing
2. Acoustic Emission Methods
3. Thermography
4. Holography
5. Leak Testing

In a book of this size, it is not possible to describe all the methods in detail.
Many NDT methods have reached a stage of development where a semi-skilled
operator, following detailed procedural instructions and with safeguards built into the
equipment, can use them. The advent of microcomputers allows procedures to be
preprogrammed and cross-checked, so that a competent operator does not necessarily
need to understand the physics of the technique being used. However, it is desirable
that the supervisors of inspection, the designers who specify the techniques to be used
in terms of their performance and attainable sensitivity, and the development engineers
working on new methods do have a thorough scientific understanding of the
fundamental physics involved.

One of the problems in NDT is that there is often too large a choice of methods and
techniques, with too little information on the performance of each in terms of overall defect
sensitivity, speed of operation, running costs or overall reliability. Some NDT methods
have been much oversold in recent years. However, rapid developments are being
made in the computer modeling of electrical, magnetic and radiation fields, which appear
to have considerable potential in realistically representing the conditions met in practical
specimens.

The terms “non-destructive testing” and “non-destructive inspection” are taken to be
interchangeable, but a newer term, “non-destructive evaluation”, is coming into use. In
NDT or NDI flaw-detection applications, the end product is taken to be the
description of the flaws that have been detected: their nature, size and location. From
this, either in conjunction with a standard for acceptable/rejectable flaws, or with knowledge
of, for example, fracture mechanics, a decision is then made on the serviceability of the
tested item. The terms “flaw” and “defect” have been used interchangeably, and neither
has been taken to signify either an acceptable or unacceptable condition. More neutral
terms such as ‘discontinuity’, ‘imperfection’, or ‘inhomogeneity’ are too cumbersome for
general use, and the terms ‘flaw sensitivity’ and ‘defect detectability’ are so widespread
in NDT, and have been used for so many years, that it seems unnecessary to propose
anything new. The term “defect” signifies that the material or fabrication is defective,
unserviceable. Thus, in a strict interpretation of the words, there is no such thing as an
“acceptable defect”.

Although a great deal of non-destructive testing is carried out for flaw detection in
materials - e.g., the detection of weld defects, lack of bonding in adhesive joints and fatigue
cracks developing during service - it should not be forgotten that NDT has important
applications in the examination of assemblies for missing or displaced components, to
measure spacing, etc.

In many of these applications, it is possible to be quite sure of what is necessary to
detect (flaw sensitivity) or what accuracy of measurement is desired. Examples of this
type are ordnance inspection for correctness of assembly, jet engine inspection
during test running, ultrasonic thickness measurement, and metal alloy sorting. The use
of computer techniques is another modification to most of the NDT methods. Apart from
being used for calculations, it is now possible to collect, store and process vast
quantities of digital data at very high speeds.

It has become quite fashionable in recent years to divide defect evaluation methods
into quality-control criteria and fitness-for-purpose criteria. For the latter, acceptance
standards must be defined on a case-by-case basis, usually based on fracture mechanics.
For quality-control criteria, the requirements are based on more general engineering
experience, and the inspection is directed towards detecting the most common defects,
with an implication of less severe NDT requirements.

For the comparison and evaluation of different NDT methods, sets of specimens
are collected and inspected non-destructively. In a trial involving fatigue
cracks at section changes on a light-alloy panel, it is not surprising that eddy
current and penetrant testing techniques proved superior to ultrasonic or radiographic
testing. On corroded surfaces, penetrant testing was found to be inferior to eddy
current testing in reliability, if not in sensitivity.

Finally, most NDT techniques have a wide range of applications, and comparison data
are valid only for a particular application, a specific type of defect and a particular material.
Probabilities and confidence levels are also needed, and laboratory trials cannot be
extrapolated to field results.

Various NDT Methods

Radiographic Testing:

The basic principle of radiography involves the use of penetrating radiation, which is
made to pass through the material under inspection; the transmitted radiation is
recorded on film.

Radiographic inspection can be applied to almost any material. It uses radiation from
isotopic sources or X-radiation that penetrates through the job and produces an image
on the film. The amount of radiation transmitted through the material depends on the
density and thickness of the material. As material thickness increases, radiography
becomes less sensitive as an inspection method.
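The dependence of transmitted intensity on thickness described above follows, for a narrow beam, the standard exponential attenuation law I/I0 = exp(-mu*t). The sketch below is illustrative only; the attenuation coefficient value is a hypothetical round number, not a figure from this text.

```python
import math

def transmitted_fraction(mu_per_mm: float, thickness_mm: float) -> float:
    """Narrow-beam exponential attenuation: I/I0 = exp(-mu * t)."""
    return math.exp(-mu_per_mm * thickness_mm)

# Illustrative coefficient only (hypothetical material/energy combination).
mu = 0.05  # per mm
for t in (10, 20, 40):
    print(f"{t} mm: {transmitted_fraction(mu, t):.3f} of incident intensity")
```

Each doubling of thickness squares the transmitted fraction, which is why, as the text notes, sensitivity degrades on thick sections.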

Surface discontinuities that can be detected by this method include undercuts,
longitudinal grooves, incomplete filling of grooves, excessive reinforcement, overlap,
concavity at the root etc. Subsurface discontinuities include gas porosity, slag
inclusions, cracks, inadequate penetration, incomplete fusion etc.

Leak Testing:

Leak testing is used to check fabricated components and systems for nuclear reactors,
pressure vessels, electronic valves, vacuum equipment, gas containers, etc. A leak is
a passage of gas from one side of a wall or container to the other, under a pressure or
concentration difference. Leak rate is measured in cc/sec.

Depending on the range of leak detection capability required, a number of test methods
are available. Some examples are pressure drop/rise, ultrasonic leak detectors, bubble
tests, and ammonia-sensitized paper, with detection capabilities down to about
10⁻⁴ cc/sec. The halogen diode sniffer, helium mass spectrometer and argon mass
spectrometer cover the range of about 10⁻⁷ to 10⁻¹¹ cc/sec.
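Choosing a leak test method is essentially a sensitivity lookup. The sketch below encodes the approximate detection limits quoted above; the exact figures per method are rounded assumptions taken from those ranges, and the helper function is purely illustrative.

```python
# Approximate smallest detectable leak rate in cc/sec (rounded assumptions
# based on the ranges quoted in the text above).
LEAK_SENSITIVITY = {
    "pressure drop/rise": 1e-4,
    "bubble test": 1e-4,
    "ammonia-sensitized paper": 1e-4,
    "halogen diode sniffer": 1e-7,
    "helium mass spectrometer": 1e-11,
}

def methods_for(required_rate: float) -> list[str]:
    """Return methods whose detection limit is at least as fine as required_rate."""
    return [m for m, limit in LEAK_SENSITIVITY.items() if limit <= required_rate]

print(methods_for(1e-9))  # only the helium mass spectrometer qualifies
```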

Ultrasonic Testing:

Ultrasonic testing is a method especially suited to the detection of internal
flaws. Beams of high-frequency sound waves are introduced into a specimen by a unit
referred to as a probe or search unit. These sound waves travel through the
specimen with an attendant loss of energy, and are reflected at interfaces.

The degree of reflection largely depends on the physical state of the material on
the opposite side of the interface, and to a lesser extent on the specific physical
properties of that material. The reflected energy is received by the probe, processed
and presented on the cathode ray tube (CRT) of the equipment in the form of pips
or echoes. The position and nature of an echo are analyzed to determine the exact
location and nature of the defect.

Cracks, laminations, shrinkage cavities, bursts, pores, bonding faults and other
discontinuities that act as a metal-gas interface can be easily detected.
Inclusions and other inhomogeneities can also be detected through partial reflection
of the sound waves. Superior penetrating power and high sensitivity to fine flaws are
among the advantages of this method.
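The "degree of reflection" at an interface can be quantified by the acoustic impedance mismatch. The standard normal-incidence energy reflection coefficient (not stated in the text, but standard ultrasonics) is R = ((Z2 - Z1)/(Z2 + Z1))²; the impedance values below are typical handbook approximations, not figures from this document.

```python
def reflection_coefficient(z1: float, z2: float) -> float:
    """Energy reflection coefficient at normal incidence between two media
    with acoustic impedances z1 and z2 (in rayls, kg/(m^2*s))."""
    return ((z2 - z1) / (z2 + z1)) ** 2

Z_STEEL = 45e6    # approximate
Z_AIR = 415.0     # approximate
Z_WATER = 1.48e6  # approximate

# A steel/air interface (e.g. a crack) reflects nearly all of the sound energy,
# which is why metal-gas discontinuities are so easy to detect.
print(f"steel/air:   {reflection_coefficient(Z_STEEL, Z_AIR):.5f}")
print(f"steel/water: {reflection_coefficient(Z_STEEL, Z_WATER):.3f}")
```

This mirrors the statement above: a crack (metal-gas interface) is an almost perfect reflector, while an inclusion with impedance closer to the metal reflects only partially.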

Acoustic Emission Testing:

When a solid is subjected to stress at a sufficiently high level, sound is generated
in the material and is emitted in discrete pulses. This is called acoustic emission
(AE), or stress wave emission (SWE). The effect has been known for many years,
but only recently has it been used as a method of NDT, and a large literature
has developed. The claims for this technique are controversial; some
authorities report considerable success while others find it almost a complete
failure at predicting crack growth.

The specific advantage of this method is that it is non-localized; it is not
necessary to examine specific regions of a structure, since a large volume can be
inspected at one time. It can also serve as a continuous monitoring system.

Thermographic Methods:

The first industrial applications of thermographic (infrared) techniques were the
measurement of stationary temperature fields, such as the temperature across
hot-rolled steel strip or variations in the insulation of a building wall. There
have been a few studies with contact sensors, but most applications use a
thermographic camera, in which a suitable lens images the infrared emission of the
specimen onto an infrared-sensitive detector. A modern camera might typically have
a cadmium-mercury-telluride detector with a spectral bandwidth of 8-14 micrometres,
a germanium lens system, a temperature resolution of about 0.2 K and an effective
temporal resolution of about 20 ms.

The basic idea behind thermographic inspection is that if a pulse of heat is
applied to one side of a specimen, the spatial distribution of the heat flux on the
opposite side will depend on the homogeneity of the specimen (the presence of
internal defects), its diffusivity (related to thermal conductivity and volumetric
heat capacity), and time.
Liquid Penetrant Testing:

The liquid penetrant examination method is effective for detecting
discontinuities that are open to the surface of nonporous metals and other
materials. Typical discontinuities detected by this method are cracks, seams,
cold shuts, laminations, and porosity. In principle, a liquid penetrant is applied to
the surface of the specimen to be examined and some time is allowed for the
penetrant to enter any discontinuities. All excess penetrant is then removed and a
developer is applied to the surface. The developer functions both as a blotter,
absorbing the penetrant from the discontinuities, and as a means of providing a
contrasting background for meaningful interpretation of indications.

MAGNETIC PARTICLE INSPECTION

Magnetic particle inspection is used for locating surface and near-surface
discontinuities in ferromagnetic materials. The method involves establishing a
magnetic field within the material to be tested. Discontinuities at or near the
surface distort this field, so that a leakage field exists. A suitable ferromagnetic
medium is then applied over the surface of the specimen; the leakage field attracts
the particles and forms a pattern on the surface of the specimen. By carefully
observing the particle buildup, the location, size and nature of the discontinuity
can be determined.

The sensitivity is greater for surface discontinuities and diminishes rapidly with
increasing depth of subsurface discontinuities below the surface. Typical
discontinuities that can be detected by this method are cracks, laps, cold shuts,
seams and laminations.

Whichever technique is used to produce the magnetic flux in the part, maximum
sensitivity will be to linear discontinuities oriented perpendicular to the
lines of flux.

Basic Principles
In theory, magnetic particle inspection (MPI) is a relatively simple concept. It can be
considered as a combination of two nondestructive testing methods: magnetic flux
leakage testing and visual testing. Consider a bar magnet. It has a magnetic field in and
around the magnet. Any place where a magnetic line of force exits or enters the magnet is
called a pole. A pole where a magnetic line of force exits the magnet is called a north
pole, and a pole where a line of force enters the magnet is called a south pole. When a
bar magnet is broken at the center of its length, two complete bar magnets with
magnetic poles on each end of each piece result. If the magnet is just cracked but
not broken completely in two, a north and a south pole form at each edge of the
crack. The magnetic field exits the north pole and reenters at the south pole. The
magnetic field spreads out when it encounters the small air gap created by the crack,
because the air cannot support as much magnetic field per unit volume as the magnet
can. When the field spreads out, it appears to leak out of the material and, thus, is
called a flux leakage field.

If iron particles are sprinkled on a cracked magnet, the particles will be attracted to and
cluster not only at the poles at the ends of the magnet but also at the poles at the edges
of the crack. This cluster of particles is much easier to see than the actual crack, and
this is the basis for magnetic particle inspection. The first step in a magnetic particle
inspection is to magnetize the component that is to be inspected. If any defects on or
near the surface are present, the defects will create a leakage field. After the component
has been magnetized, iron particles, either in a dry or a wet suspended form, are applied
to the surface of the magnetized part. The particles will be attracted to and cluster at the
flux leakage fields, thus forming a visible indication that the inspector can detect.

History of Magnetic Particle Inspection


Magnetism is the ability of matter to attract other matter to itself. The ancient Greeks
were the first to discover this phenomenon, in a mineral they named magnetite. Later,
Bergmann, Becquerel, and Michael Faraday discovered that all matter, including liquids
and gases, is affected by magnetism, but only a few materials respond to a noticeable
extent. The earliest known use of magnetism to inspect an object took place as early as
1868. Cannon barrels were checked for defects by magnetizing the barrel and then
sliding a magnetic compass along its length. These early inspectors were able to locate
flaws in the barrels by monitoring the needle of the compass. This was a form of
nondestructive testing, but the term was not really used until sometime after World War I.

In the early 1920s, William Hoke realized that magnetic particles (colored metal
shavings) could be used with magnetism as a means of locating defects. Hoke
discovered that a surface or subsurface flaw in a magnetized material caused the
magnetic field to distort and extend beyond the part. This discovery was brought to his
attention in the machine shop: he noticed that the metallic grindings from hard steel
parts, which were being held by a magnetic chuck while being ground, formed patterns
on the face of the parts that corresponded to cracks in the surface. Applying a fine
ferromagnetic powder to the parts caused a buildup of powder over the flaws and
formed a visible indication.

In the early 1930s, magnetic particle inspection (MPI) was quickly replacing the
oil-and-whiting method (an early form of liquid penetrant inspection) as the method of
choice for the railroads to inspect steam engine boilers, wheels, axles, and tracks.
Today, the MPI method is used extensively to check for flaws in a large
variety of manufactured materials and components. MPI is used to check materials
such as steel bar stock for seams and other flaws prior to investing machining time
during the manufacture of a component. Critical automotive components are
inspected for flaws after fabrication to ensure that defective parts are not placed into
service. MPI is used to inspect some highly loaded components that have been
in service for a period of time. For example, many components of high-performance
race cars are inspected whenever the engine, drive train and other systems are
overhauled. MPI is also used to evaluate the integrity of structural welds on bridges,
storage tanks, and other safety-critical structures.

Magnetism
Magnets are very common items in the workplace and household. Uses of magnets
range from holding pictures on the refrigerator to producing torque in electric motors. Most
people are familiar with the general properties of magnets but are less familiar with the
source of magnetism. The traditional concept of magnetism centers on the
magnetic field and what is known as a dipole. The term “magnetic field” simply describes
a volume of space where there is a change in energy within that volume. This change in
energy can be detected and measured. The location where a magnetic field can be
detected exiting or entering a material is called a magnetic pole. Magnetic poles have
never been detected in isolation; they always occur in pairs, hence the name dipole.
A dipole is therefore an object that has a magnetic pole on one end and a second,
equal but opposite, magnetic pole on the other.

A bar magnet can be considered a dipole with a north pole at one end and a south pole
at the other. A magnetic field can be measured leaving the dipole at the north pole and
returning to the magnet at the south pole. If a magnet is cut in two, two magnets or
dipoles are created out of one. This sectioning and creation of dipoles can continue
down to the atomic level. Therefore, the source of magnetism lies in the basic building
block of all matter: the atom.

The Source of Magnetism

All matter is composed of atoms, and atoms are composed of protons, neutrons and
electrons. The protons and neutrons are located in the atom’s nucleus and the
electrons are in constant motion around the nucleus. Electrons carry a negative
electrical charge and produce a magnetic field as they move through space. A magnetic
field is produced whenever an electrical charge is in motion. The strength of this field is
called the magnetic moment.

This may be hard to visualize on a subatomic scale, but consider electric current flowing
through a conductor. When the electrons (the electric current) are flowing through the
conductor, a magnetic field forms around the conductor. The magnetic field can be
detected using a compass: the field will place a force on the compass needle, which is
another example of a dipole. Since all matter is comprised of atoms, all materials are
affected in some way by a magnetic field. However, not all materials react the same way.
This is explored further in the next section.

Diamagnetic, Paramagnetic, and Ferromagnetic Materials


When a material is placed within a magnetic field, the magnetic forces of the material’s
electrons will be affected. This effect is known as Faraday’s Law of Magnetic Induction.
However, materials can react quite differently to the presence of an external magnetic
field. This reaction is dependent on a number of factors such as the atomic and
molecular structure of the material, and the net magnetic field associated with the
atoms. The magnetic moments associated with atoms have three origins. These are the
electron orbital motion, the change in orbital motion caused by an external magnetic
field, and the spin of the electrons.

In most atoms, electrons occur in pairs. Each electron in a pair spins in the
opposite direction, so when electrons are paired together, their opposite spins
cause their magnetic fields to cancel each other. Therefore, no net magnetic
field exists. Alternately, materials with some unpaired electrons will have a net
magnetic field and will react more strongly to an external field. Most materials can be
classified as ferromagnetic, diamagnetic or paramagnetic.
Diamagnetic metals have a very weak and negative susceptibility to magnetic fields.
Diamagnetic materials are slightly repelled by a magnetic field and the material does
not retain the magnetic properties when the external field is removed. Diamagnetic
materials are solids with all paired electrons and, therefore, no permanent net magnetic
moment per atom. Diamagnetic properties arise from the realignment of the electron
orbits under the influence of an external magnetic field. Most elements in the periodic
table, including copper, silver, and gold, are diamagnetic.

Paramagnetic metals have a small and positive susceptibility to magnetic fields. These
materials are slightly attracted by a magnetic field and the material does not retain the
magnetic properties when the external field is removed. Paramagnetic properties are
due to the presence of some unpaired electrons and from the realignment of the
electron orbits caused by the external magnetic field. Paramagnetic materials include
magnesium, molybdenum, lithium, and tantalum.

Ferromagnetic materials have a large and positive susceptibility to an external
magnetic field. They exhibit a strong attraction to magnetic fields and are able to retain
their magnetic properties after the external field has been removed. Ferromagnetic
materials have some unpaired electrons, so their atoms have a net magnetic moment.
They get their strong magnetic properties from the presence of magnetic domains. In
these domains, large numbers of atomic moments (10^12 to 10^15) are aligned in
parallel so that the magnetic force within the domain is strong. When a ferromagnetic
material is in the unmagnetized state, the domains are nearly randomly organized and
the net magnetic field for the part as a whole is zero. When a magnetizing force is
applied, the domains become aligned to produce a strong magnetic field within the part.
Iron, nickel, and cobalt are examples of ferromagnetic materials. Components made of
these materials are commonly inspected using the magnetic particle method.

Magnetic Domains
Ferromagnetic materials get their magnetic properties not only because their atoms
carry a magnetic moment but also because the material is made up of small regions
known as magnetic domains. In each domain, all of the atomic dipoles are coupled
together in a preferential direction. This alignment develops as the material develops its
crystalline structure during solidification from the molten state. Magnetic domains can
be detected using Magnetic Force Microscopy (MFM), and images of the domains like
the one shown below can be constructed. During solidification, a trillion or more atomic
moments are aligned in parallel so that the magnetic force within the domain is strong in
one direction. Ferromagnetic materials are said to be characterized by “spontaneous
magnetization” since they attain saturation magnetization within each domain
without an external magnetic field being applied. Even though the domains are
magnetically saturated, the bulk material may not show any signs of magnetism,
because the domains themselves are randomly oriented relative to each other.

Magnetic Force Microscopy (MFM) image showing the magnetic domains in a piece of
heat-treated carbon steel.

Ferromagnetic materials become magnetized when the magnetic domains within the
material are aligned. This can be done by placing the material in a strong external
magnetic field or by passing electrical current through the material. Some or all of the
domains can become aligned. The more domains that are aligned, the stronger the
magnetic field in the material. When all of the domains are aligned, the material is said
to be magnetically saturated. When a material is magnetically saturated, no additional
amount of external magnetizing force will cause an increase in its internal level of
magnetization.

Unmagnetized Material | Magnetized Material

Magnetic Field Characteristics

Magnetic Field In and Around a Bar Magnet


As discussed previously, a magnetic field is a change in energy within a volume of
space. The magnetic field surrounding a bar magnet can be seen in a magnetograph.
A magnetograph can be created by placing a piece of paper over a magnet and
sprinkling the paper with iron filings. The particles align themselves with the lines of
magnetic force produced by the magnet. The magnetic lines of force show where the
magnetic field exits the material at one pole and reenters the material at another pole
along the length of the magnet. It should be noted that the magnetic lines of force exist
in three dimensions but are only seen in two dimensions in such an image.

It can be seen in the magnetograph that there are poles all along the length of the
magnet but that the poles are concentrated at the ends of the magnet. The area where
the exit poles are concentrated is called the magnet's north pole, and the area where
the entrance poles are concentrated is called the magnet's south pole.

Magnetic Fields in and around Horseshoe and Ring Magnets

Magnets come in a variety of shapes, and one of the more common is the horseshoe (U)
magnet. The horseshoe magnet has north and south poles just like a bar magnet, but the
magnet is curved so the poles lie in the same plane. The magnetic lines of force flow from
pole to pole just as in the bar magnet. However, since the poles are located closer together
and a more direct path exists for the lines of flux to travel between the poles, the magnetic
field is concentrated between the poles. If a bar magnet were placed across the ends of a
horseshoe magnet, or if a magnet were formed in the shape of a ring, the lines of magnetic
force would not even need to enter the air. A magnet in which the magnetic field is
completely contained within the material probably has limited use in itself. However, it is
important to understand that the magnetic field can flow in a loop within a material; this
becomes relevant when the concept of circular magnetization is covered later.

General Properties of Magnetic Lines of Force

Magnetic lines of force have a number of important properties, which include:

- They seek the path of least resistance between opposite magnetic poles. In a single
  bar magnet, they attempt to form closed loops from pole to pole.
- They never cross one another.
- They all have the same strength.
- Their density decreases (they spread out) when they move from an area of higher
  permeability to an area of lower permeability.
- Their density decreases with increasing distance from the poles.
- They are considered to have direction, as if flowing, though no actual movement
  occurs.
- They flow from the south pole to the north pole within the material, and from the
  north pole to the south pole in air.

Electromagnetic Fields
Magnets are not the only source of magnetic fields. In 1820, Hans Christian Oersted
discovered that an electric current flowing through a wire caused a nearby compass to
deflect. This indicated that the current in the wire was generating a magnetic field.
Oersted studied the nature of the magnetic field around a long straight wire. He found
that the magnetic field existed in circular form around the wire and that the intensity of
the field was directly proportional to the amount of current carried by the wire. He also
found that the field was strongest close to the wire and diminished with distance from
the conductor until it could no longer be detected. In most conductors, the magnetic
field exists only as long as the current is flowing (i.e. while an electrical charge is in
motion). However, in ferromagnetic materials the electric current will cause some or all
of the magnetic domains to align, and a residual magnetic field will remain.

Oersted also noticed that the direction of the magnetic field was dependent on the
direction of the electrical current in the wire. A three-dimensional representation of the
magnetic field is shown below. There is a simple rule for remembering the direction of
the magnetic field around a conductor, called the right-hand rule: if a person grasps a
conductor in the right hand with the thumb pointing in the direction of the current, the
fingers will circle the conductor in the direction of the magnetic field.
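Oersted's observations - a circular field proportional to the current and falling off with distance - are summarized by the standard formula H = I/(2*pi*r) for a long straight conductor. A minimal sketch (the formula is standard physics, not stated explicitly in the text above):

```python
import math

def field_strength(current_a: float, radius_m: float) -> float:
    """Magnetic field strength H (A/m) at distance radius_m from a long
    straight conductor carrying current_a amperes: H = I / (2 * pi * r)."""
    return current_a / (2 * math.pi * radius_m)

# Field is proportional to current and falls off with distance, as Oersted found.
for r in (0.01, 0.02, 0.04):
    print(f"r = {r} m: H = {field_strength(100.0, r):.1f} A/m")
```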

A word of caution about the right-hand rule

For the right-hand rule to work, one important thing must be remembered about the
direction of current flow. Standard convention has current flowing from the positive
terminal to the negative terminal. This convention is credited to the French physicist
Ampère, who theorized that electric current was due to a positive charge moving from
the positive terminal to the negative terminal. However, it was later discovered that it is
the movement of the negatively charged electron that is responsible for electrical
current. Rather than changing several centuries of theory and equations, Ampère's
convention is still used today.

Magnetic Field Produced by a Coil
When a current-carrying conductor is formed into a loop, or several loops to form a coil,
a magnetic field develops that flows through the center of the loop or coil along its
longitudinal axis and circles back around the outside of the loop or coil. The magnetic
field circling each loop of wire combines with the fields from the other loops to produce
a concentrated field down the center of the coil. A loosely wound coil is illustrated
below to show the interaction of the magnetic fields. When the coil is wound tighter, the
magnetic field is essentially uniform down the length of the coil.

The strength of a coil's magnetic field increases not only with increasing current but
also with each loop that is added to the coil. A long, straight coil of wire is called a
solenoid and can be used to generate a nearly uniform magnetic field similar to that of
a bar magnet. The concentrated magnetic field inside a coil is very useful for
magnetizing ferromagnetic materials for inspection using the magnetic particle testing
method. Be aware, however, that the field outside the coil is weak and is not suitable
for magnetizing ferromagnetic materials.
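The uniform field inside a long, tightly wound solenoid is given by the standard formula B = mu_0 * n * I, where n is the number of turns per metre. This formula is not stated in the text; the sketch below applies it with illustrative numbers.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def solenoid_flux_density(turns: int, length_m: float, current_a: float) -> float:
    """Approximate flux density B (tesla) inside a long air-cored solenoid:
    B = mu_0 * (N / L) * I. Valid well away from the coil ends."""
    return MU_0 * (turns / length_m) * current_a

# Illustrative example: 500 turns over 0.25 m carrying 10 A.
b = solenoid_flux_density(500, 0.25, 10.0)
print(f"B = {b * 1000:.1f} mT")
```

The formula shows both effects described above: B grows with the current and with the number of loops packed into a given length.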

Quantifying Magnetic Properties


(Magnetic Field Strength, Flux Density, Total Flux and Magnetization)

Until now, only the qualitative features of the magnetic field have been discussed.
However, it is necessary to be able to measure and express quantitatively the various
characteristics of magnetism. Unfortunately, a number of unit conventions are in use as
shown below. SI units will be used in this material. The advantage of using SI units is
that they are traceable back to an agreed set of four base units - meter, kilogram,
second, and Ampere.
The units for magnetic field strength H are amperes/meter. A magnetic field strength of 1
ampere/meter is produced at the center of a single circular conductor of diameter 1
meter carrying a steady current of 1 ampere. The number of magnetic lines of force
cutting through a plane of a given area at a right angle is known as the magnetic flux
density B. The flux density, or magnetic induction, has the tesla as its unit. One tesla is
equal to one newton per ampere-meter. From these units it can be seen that the flux
density is a measure of the force the magnetic field applies to a moving charged particle.
The gauss is the CGS unit for flux density and is commonly used by US industry. One
gauss represents one line of flux passing through one square centimeter of air oriented
90 degrees to the flux flow.
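The conversions between the SI and CGS unit systems are fixed: 1 tesla = 10,000 gauss, and 1 oersted = 1000/(4π) A/m, or roughly 79.6 A/m. A small sketch (an illustrative addition, not from the text):

```python
import math


def gauss_to_tesla(b_gauss):
    """Flux density conversion: 1 gauss = 1e-4 tesla."""
    return b_gauss * 1e-4


def oersted_to_ampere_per_meter(h_oe):
    """Field strength conversion: 1 oersted = 1000/(4*pi) A/m (about 79.58 A/m)."""
    return h_oe * 1000.0 / (4 * math.pi)


print(gauss_to_tesla(10000))  # -> 1.0
# 30 Oe is about 2387.3 A/m, i.e. roughly 24 A/cm
print(round(oersted_to_ampere_per_meter(30), 1))
```

The 30-oersted figure is the minimum field strength quoted later in this section for a 2-leg yoke, which is how the approximately 24 A/cm equivalent arises.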

Quantity             SI units (Sommerfeld)   SI units (Kennelly)   CGS units (Gaussian)

Field (H)            A/m                     A/m                   oersted

Flux density (B)     tesla                   tesla                 gauss

Flux (Φ)             weber                   weber                 maxwell

Magnetization (M)    A/m                     ----                  erg·Oe⁻¹·cm⁻³
The total number of lines of magnetic force in a material is called the magnetic flux Φ.
The strength of the flux is determined by the number of magnetic domains that are
aligned within a material. The total flux is simply the flux density applied over an area.
Flux carries the unit of the weber, which is simply a tesla-square meter. The
magnetization is a measure of the extent to which an object is magnetized. It is a
measure of the magnetic dipole moment per unit volume of the object. Magnetization
carries the same units as a magnetic field: amperes/meter.
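The relation between total flux and flux density, Φ = B · A, can be checked directly (the values below are illustrative, not from the text):

```python
def total_flux(b_tesla, area_m2):
    """Total magnetic flux in webers: flux density times the area it
    passes through at right angles (1 Wb = 1 T * m^2)."""
    return b_tesla * area_m2


# Illustrative: 1.2 T passing through a 5 cm x 5 cm cross-section
area = 0.05 * 0.05  # m^2
print(round(total_flux(1.2, area), 6))  # -> 0.003 (webers)
```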

The Hysteresis Loop and Magnetic Properties
A great deal of information can be learned about the magnetic properties of a material
by studying its hysteresis loop. A hysteresis loop shows the relationship between the
induced magnetic flux density B and the magnetizing force H. It is often referred to as
the B-H loop. An example hysteresis loop is shown in the figure.

The loop is generated by measuring the magnetic flux B of a ferromagnetic material
while the magnetizing force H is changed. A ferromagnetic material that has never been
previously magnetized, or has been thoroughly demagnetized, will follow the dashed
line as H is increased. As the line demonstrates, the greater the amount of current
applied (H+), the stronger the magnetic field in the component (B+). At point “a” almost
all of the magnetic domains are aligned, and an additional increase in the magnetizing
force will produce very little increase in magnetic flux. The material has reached the
point of magnetic saturation. When H is reduced back down to zero, the curve will move
from point “a” to point “b.” At this point, it can be seen that some magnetic flux remains
in the material even though the magnetizing force is zero. This is referred to as the
point of retentivity on the graph and indicates the remanence or level of residual
magnetism in the material. (Some of the magnetic domains remain aligned but some
have lost their alignment.) As the magnetizing force is reversed, the curve moves to
point “c”, where the flux has been reduced to zero. This is called the point of coercivity
on the curve. (The reversed magnetizing force has flipped enough of the domains so
that the net flux within the material is zero.) The force required to remove the residual
magnetism from the material is called the coercive force, or coercivity, of the material.

As the magnetizing force is increased in the negative direction, the material will again
become magnetically saturated but in the opposite direction (point “d”). Reducing H to
zero brings the curve to point “e.” It will have a level of residual magnetism equal to that
achieved in the other direction. Increasing H back in the positive direction will return B
to zero. Notice that the curve did not return to the origin of the graph because some
force is required to remove the residual magnetism. The curve will take a different path
from point “f” back to the saturation point, where it will complete the loop.

From the hysteresis loop, a number of primary magnetic properties of a material can be
determined.

Retentivity - A measure of the residual flux density corresponding to the saturation
induction of a magnetic material. In other words, it is a material’s ability to retain a
certain amount of residual magnetic field when the magnetizing force is removed
after achieving saturation. (The value of B at point “b” on the hysteresis curve.)

Residual Magnetism or Residual Flux - the magnetic flux density that remains
in a material when the magnetizing force is zero. Note that residual magnetism
and retentivity are the same when the material has been magnetized to the
saturation point. However, the level of residual magnetism may be lower than
the retentivity value when the magnetizing force did not reach the saturation
level.

Coercive Force - The amount of reverse magnetic field which must be applied to
a magnetic material to make the magnetic flux return to zero. (The value of H at
point “c” on the hysteresis curve.)

Permeability, µ - A property of a material that describes the ease with which a
magnetic flux is established in the component.

Reluctance - The opposition that a ferromagnetic material shows to the
establishment of a magnetic field. Reluctance is analogous to the resistance in
an electrical circuit.
Permeability
As previously mentioned, permeability is a material property that describes the ease
with which a magnetic flux is established in the component. It is the ratio of the flux
density to the magnetizing force and, therefore, represented by the following equation:
µ = B/H
It is clear that this equation describes the slope of the curve at any point on the
hysteresis loop. The permeability value given in papers and reference materials is
usually the maximum permeability or the maximum relative permeability. The maximum
permeability is the point where the slope of the B/H curve for unmagnetized material is
the greatest. This point is often taken as the point where a straight line from the origin is
tangent to the B/H curve.

The relative permeability is arrived at by taking the ratio of the material’s permeability
to the permeability in free space (air):

µ(relative) = µ(material) / µ(air)

where: µ(air) = 4π x 10^-7 H·m^-1
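Given a measured B and H at a point on the B-H curve, both µ and the relative permeability follow directly from the two formulas above. A minimal sketch, with illustrative values (the steel figures are typical orders of magnitude, not data from this text):

```python
import math

MU_AIR = 4 * math.pi * 1e-7  # permeability of free space, H/m


def permeability(b_tesla, h_a_per_m):
    """mu = B / H, the slope at a point on the B-H curve."""
    return b_tesla / h_a_per_m


def relative_permeability(b_tesla, h_a_per_m):
    """mu_r = mu(material) / mu(air); dimensionless."""
    return permeability(b_tesla, h_a_per_m) / MU_AIR


# Illustrative: a steel that reaches B = 1.0 T at H = 400 A/m
mu_r = relative_permeability(1.0, 400.0)
print(round(mu_r))  # on the order of 2000, typical of ferromagnetic steels
```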

The shape of the hysteresis loop tells a great deal about the material being magnetized.
The hysteresis curves of two different materials are shown in the graph.

Relative to the other material, the material with the wide hysteresis loop has:
Lower Permeability
Higher Retentivity
Higher Coercivity
Higher Reluctance
Higher Residual Magnetism

The material with the narrower loop has:

Higher Permeability
Lower Retentivity
Lower Coercivity
Lower Reluctance
Lower Residual Magnetism.

In magnetic particle testing, the level of residual magnetism is important. Residual
magnetic fields are affected by the permeability, which can be related to the carbon
content and alloying of the material. A component with high carbon content will have
low permeability and will retain more magnetic flux than a material with low carbon
content.

Magnetic Field Orientation and Flaw Detectability


To properly inspect a component for cracks or other defects, it is important to
understand that the orientation between the magnetic lines of force and the flaw is very
important. There are two general types of magnetic fields that can be established within
a component. A longitudinal magnetic field has magnetic lines of force that run parallel
to the long axis of the part. Longitudinal magnetization of a component can be
accomplished using the longitudinal field set up by a coil or solenoid. It can also be
accomplished using permanent magnets or electromagnets.

A circular magnetic field has magnetic lines of force that run circumferentially around
the perimeter of a part. A circular magnetic field is induced in an article by either
passing current through the component or by passing current through a conductor
surrounded by the component. The type of magnetic field established is determined by
the method used to magnetize the specimen. Being able to magnetize the part in two
directions is important because the best detection of defects occurs when the lines of
magnetic force are established at right angles to the longest dimension of the defect.
This orientation creates the largest disruption of the magnetic field within the part and
the greatest flux leakage at the surface of the part. As can be seen in the image below,
if the magnetic field is parallel to the defect, the field will see little disruption and no flux
leakage field will be produced.

An orientation of 45 to 90 degrees between the magnetic field and the defect is
necessary to form an indication. Since defects may occur in various and unknown
directions, each part is normally magnetized in two directions at right angles to each
other. If the component below is considered, it is known that passing current through
the part from end to end will establish a circular magnetic field that will be 90 degrees to
the direction of the current. Therefore, defects that have a significant dimension in the
direction of the current (longitudinal defects) should be detectable. Conversely,
transverse-type defects will not be detectable with circular magnetization.

Magnetization of Ferromagnetic Materials


There are a variety of methods that can be used to establish a magnetic field in a
component for evaluation using magnetic particle inspection. It is common to classify
the magnetizing methods as either direct or indirect.

Magnetization Using Direct Induction (Direct Magnetization)

With direct magnetization, current is passed directly through the component. Recall that
whenever current flows, a magnetic field is produced. Using the right-hand rule, which
was introduced earlier, it is known that the magnetic lines of flux form normal to the
direction of the current and form a circular field in and around the conductor. When
using the direct magnetization method, care must be taken to ensure that good
electrical contact is established and maintained between the test equipment and the
test component. Improper contact can result in arcing that may damage the component.
It is also possible to overheat components in areas of high resistance, such as the
contact points and areas of small cross-sectional area.
There are several ways that direct magnetization is commonly accomplished. One way
involves clamping the component between two electrical contacts in a special piece of
equipment. Current is passed through the component and a circular magnetic field is
established in and around the component. When the magnetizing current is stopped, a
residual magnetic field will remain within the component. The strength of the induced
magnetic field is proportional to the amount of current passed through the component.

A second technique involves using clamps or prods, which are attached or placed in
contact with the component. Current is injected into the component as it flows from the
contacts. The current sets up a circular magnetic field around the path of the current.
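The strength of the circular field around the current path can be estimated from Ampere's law: at the surface of a round bar carrying current I, H = I/(2πr). This formula and the example numbers are illustrative additions, not values from this text:

```python
import math


def surface_field_a_per_m(current_a, diameter_m):
    """Circular field strength H (A/m) at the surface of a round bar
    carrying current I: H = I / (2 * pi * r), from Ampere's law.
    Inside the bar the field falls linearly to zero at the center."""
    radius = diameter_m / 2
    return current_a / (2 * math.pi * radius)


# Illustrative: 1000 A through a 25 mm diameter bar
h = surface_field_a_per_m(1000, 0.025)
print(f"H at surface = {h:.0f} A/m")  # -> H at surface = 12732 A/m
```

Doubling the current doubles the surface field, which is why direct-contact amperage is typically specified in proportion to the part diameter.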

Magnetization Using Indirect Induction (Indirect Magnetization)

Indirect magnetization is accomplished by using a strong external magnetic field to
establish a magnetic field within the component. As with direct magnetization, there are
several ways that indirect magnetization can be accomplished.

The use of permanent magnets is a low cost method of establishing a magnetic field.
However, their use is limited due to lack of control of the field strength and the difficulty
of placing and removing strong permanent magnets from the component.

Another way of indirectly inducing a magnetic field in a material is by using the
magnetic field of a current-carrying conductor. A circular magnetic field can be
established in cylindrical components by using a central conductor. Typically, one or
more cylindrical components are hung from a solid copper bar running through the
inside diameter. Current is passed through the copper bar and the resulting circular
magnetic field establishes a magnetic field within the test components. The use of coils
and solenoids is a third method of indirect magnetization. When the length of a
component is several times larger than its diameter, a longitudinal magnetic field can be
established in the component. The component is placed longitudinally in the
concentrated magnetic field that fills the center of a coil or solenoid. This magnetization
technique is often referred to as a “coil shot.”

Magnetizing Current

Electric current is often used to establish the magnetic field in components during
magnetic particle inspection. Alternating current and direct current are the two basic
types of current commonly used. Currents from single-phase 110 V to three-phase
440 V sources are used when generating a magnetic field in a component. Current flow
is often modified to provide the appropriate field within the part. The type of current
used can have an effect on the inspection results, so the types of currents commonly
used will be briefly reviewed.

Direct Current
Direct current (DC) flows continuously in one direction at a constant voltage. A battery is
the most common source of direct current. As previously mentioned, current is said to
flow from the positive to the negative terminal when in actuality the electrons flow in the
opposite direction. DC is very desirable when performing magnetic particle inspection in
search of subsurface defects because DC generates a magnetic field that penetrates
deeper into the material. In ferromagnetic materials, the magnetic field produced by DC
generally penetrates the entire cross-section of the component; whereas, the field
produced using alternating current is concentrated in a thin layer at the surface of the
component.

Alternating Current

Alternating current (AC) reverses direction at a rate of 50 or 60 cycles per second. In
the United States, 60 cycle current is the commercial norm, but 50 cycle current is
common in many countries. Since AC is readily available in most facilities, it is
convenient to make use of it for magnetic particle inspection. However, when AC is
used to induce a magnetic field in ferromagnetic materials, the magnetic field will be
limited to a narrow region at the surface of the component. This phenomenon is known
as the “skin effect”; the rapidly reversing current does not allow the flux to penetrate, so
the domains deeper in the material never have time to align. Therefore, it is
recommended that AC be used only when the inspection is limited to surface defects.
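The thickness of this AC surface layer can be estimated with the standard skin-depth formula, δ = √(2/(ωµσ)). The formula and the material constants below are not given in this text; they are typical order-of-magnitude values for a carbon steel, used purely for illustration:

```python
import math


def skin_depth(freq_hz, mu_r, conductivity_s_per_m):
    """Standard skin depth delta = sqrt(2 / (omega * mu * sigma)), in meters,
    where omega = 2*pi*f and mu = mu_0 * mu_r."""
    mu = 4 * math.pi * 1e-7 * mu_r
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu * conductivity_s_per_m))


# Illustrative constants for carbon steel: mu_r ~ 100, sigma ~ 6e6 S/m
for f in (50, 60):
    d = skin_depth(f, 100, 6e6)
    print(f"{f} Hz: skin depth ~ {d * 1000:.2f} mm")
```

With these assumed constants the field at 50 to 60 Hz is confined to roughly the first 2 to 3 mm of the part, consistent with the recommendation to use AC only for surface defects.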
Rectified Alternating Current

Clearly, the skin effect limits the use of AC, since many inspection applications call for
the detection of subsurface defects. However, the convenient access to AC drives its
use beyond surface flaw inspections. Luckily, AC can be converted to current that is
very much like DC through the process of rectification. With the use of rectifiers, the
reversing AC can be converted to a one-directional current. The three commonly used
types of rectified current are described below.
Half Wave Rectified Alternating Current (HWAC)

When single-phase alternating current is passed through a rectifier, current is allowed
to flow in only one direction. The reverse half of each cycle is blocked out so that a
one-directional, pulsating current is produced. The current rises from zero to a
maximum and then returns to zero. No current flows during the time when the reverse
cycle is blocked out. The HWAC repeats at the same rate as the unrectified current
(typically 50 or 60 hertz). Since half of each cycle is blocked out, the average current is
half that of the unaltered AC.

This type of current is often referred to as half wave DC or pulsating DC. The pulsation
of the HWAC helps magnetic particle indications form by vibrating the particles and
giving them added mobility. This added mobility is especially important when using dry
particles. The pulsation is reported to significantly improve inspection sensitivity. HWAC
is most often used to power electromagnetic yokes.

Full Wave Rectified Alternating Current (FWAC) (Single Phase)

Full wave rectification inverts the negative current to positive current rather than blocking
it out. This produces a pulsating DC with no interval between the pulses. Filtering is
usually performed to soften the sharp polarity switching in the rectified current. While
particle mobility is not as good as half-wave AC due to the reduction in pulsation, the
depth of the subsurface magnetic field is improved.

Three Phase Full Wave Rectified Alternating Current

Three-phase current is often used to power industrial equipment because it has more
favorable power transmission and line loading characteristics. It is also highly desirable
for magnetic particle testing because, when it is rectified and filtered, the resulting
current very closely resembles direct current. Stationary magnetic particle equipment
wired with three-phase AC will usually have the ability to magnetize with AC or DC
(three-phase full wave rectified), providing the inspector with the advantages of each
current form.

Figure below shows waveforms of different current types: Input AC, Rectified AC, and
Rectified + Filtered AC at Half-wave, Full-wave (single phase), and Full-wave (three
phase) currents.

Longitudinal Magnetic Fields Distribution and Intensity
When the length of a component is several times larger than its diameter, a longitudinal
magnetic field can be established in the component. The component is often placed
longitudinally in the concentrated magnetic field that fills the center of a coil or solenoid.
This magnetization technique is often referred to as a “coil shot.” The magnetic field
travels through the component from end to end with some flux loss along its length, as
shown in the image to the right. Keep in mind that the magnetic lines of flux occur in
three dimensions and are only shown in 2D in the image. The magnetic lines of flux are
much denser inside the ferromagnetic material than in air because ferromagnetic
materials have much higher permeability than does air. When the concentrated flux
within the material comes to the air at the end of the component, it must spread out,
since the air cannot support as many lines of flux per unit volume. To keep from
crossing as they spread out, some of the magnetic lines of flux are forced out the side
of the component. When a component is magnetized along its complete length, the flux
loss is small along its length. Therefore, when a component is uniform in cross section
and magnetic permeability, the flux density will be relatively uniform throughout the
component. Flaws that run normal to the magnetic lines of flux will disturb the flux lines
and often cause a leakage field at the surface of the component.

When a component with considerable length is magnetized using a solenoid, it is
possible to magnetize only a portion of the component. Only the material within the
solenoid, and about the same width on each side of the solenoid, will be strongly
magnetized. At some distance from the solenoid, the magnetic lines of force will
abandon their longitudinal direction, leave the part at a pole on one side of the solenoid,
and return to the part at an opposite pole on the other side of the solenoid. This occurs
because the magnetizing force diminishes with increasing distance from the solenoid,
and, therefore, the magnetizing force may only be strong enough to align the magnetic
domains within and very near the solenoid. The unmagnetized portion of the
component will not support as much magnetic flux as the magnetized portion, and
some of the flux will be forced out of the part as illustrated in the image below.
Therefore, a long component must be magnetized and inspected at several locations
along its length for complete inspection coverage.

Solenoid - An electrically energized coil of insulated wire, which produces a magnetic field
within the coil.

TECHNIQUES FOR LONGITUDINAL MAGNETIZATION
Techniques for creating longitudinal magnetization are as follows:

YOKE
There are two basic types of yokes that are commonly used for magnetizing purposes.
They are permanent – magnet and electromagnetic yokes. Both are handheld and
therefore are quite mobile.

Permanent – Magnet Yokes


They are used for applications where a source of electric power is not available or
where arcing is not permissible (as in an explosive atmosphere). The permanent-
magnet yoke, when placed on the part, grips the surface well enough, particularly on
an overhead part which is to be magnetically examined. It can basically fit any contour
of the part with little or no difficulty. Permanent magnets can lose their field-generating
capacity by being partially demagnetized by a stronger flux field, or by being damaged
or dropped. In addition, the particle mobility created by AC and HWAC in
electromagnetic yokes is not present.

However, there are certain limitations of permanent-magnet yokes:

(a) Large areas or masses cannot be magnetized with enough strength to produce
satisfactory indications.
(b) Since it contains a predetermined magnetic field, the flux density cannot be varied
at will.
(c) If the magnet is powerful, it may be difficult to remove it from the part.
(d) Particles, when applied, may cling to the magnet, possibly obscuring indications.
Electromagnetic Yokes

They consist of a coil wound around a soft iron core, usually in the form of a horseshoe.
The legs of a yoke can be either fixed or adjustable. Adjustable legs permit changing
the contact spacing to accommodate irregular objects. Unlike a permanent yoke, the
electromagnetic yoke can be readily switched on or off. This feature allows the yoke to
be applied to or removed from the test piece whenever required. The design of the
electromagnetic yoke can be based on the use of either direct current or alternating
current. Varying the amount of current through the coil can vary the flux density of the
magnetic field. Direct current yokes have better penetration, while an AC yoke
concentrates its magnetic field on the surface, providing good sensitivity to surface
discontinuities over a broad area. In general, the discontinuities to be disclosed should
be centrally located between the pole pieces and should essentially lie perpendicular
to an imaginary line connecting them.

Extraneous leakage fields in the immediate vicinity of the poles cause an excessive
particle buildup. If such a case is encountered, the pole spacing is increased to limit this
effect. As for prods, the maximum pole spacing is 8 in. and the minimum pole spacing is
3 in. Spacing of less than 3 in. will cause banding of particles around the poles of the
yoke, obscuring any indications.

In operation, the part completes the magnetic circuit for the flow of magnetic flux. Yokes
that use AC for magnetization have numerous applications and can be used for
demagnetization also.

Discontinuities preferentially located transverse to the alignment of the pole pieces are
indicated. Most yokes are energized by AC, HWAC, or FWAC. This method shall only
be used to detect discontinuities that are open to the surface of the part. Except for
materials ¼ in. (6 mm) or less in thickness, AC yokes are superior to DC or permanent
yokes of equal lifting power for the detection of surface discontinuities.

Yoke Magnetizing Strength

The strength of an electromagnetic yoke is ascertained by checking the lifting power of
the yoke.

(a) Check that the AC yoke will lift a 10-pound (4.5 kg) steel bar with the legs at the
inspection spacing.

(b) Check that the DC yoke will lift a 40-pound (18 kg) steel bar with the legs at the
inspection spacing.

(c) Note that a 2-leg or 1-leg yoke should produce clearly defined indications on the
magnetic field indicator. The indication must remain even after the excess particles
have been removed.

The 2-leg yoke should produce a minimum of 30 oersteds (24 amperes/cm) in air, in the
area of inspection.

The yoke shall be calibrated once a year, or whenever the yoke is damaged. If the yoke
has not been in use for the past year, a calibration check shall be done before its first
use.

Inspection of Large Surface Areas for Surface Discontinuities

Advantages
No electrical contact
Highly portable
Can locate discontinuities in any direction, with proper yoke orientation

Disadvantages
Time consuming
Yoke must be systematically repositioned to locate discontinuities with random orientation.

Miscellaneous Parts Requiring Inspection of Localized Areas


Advantages
No electrical contact
Highly portable
Good sensitivity to surface discontinuities
Wet or dry method can be used
AC yoke can also serve as a demagnetizer in some cases

Disadvantages
Yoke must be properly positioned relative to the orientation of the discontinuity.
Relatively good contact must be established between part and poles of the magnet.
Complex part shape may cause a difficulty.
Poor sensitivity to subsurface discontinuities except in isolated areas.

COIL MAGNETIZATION

Single and multiple loop coils are used for longitudinal magnetization of components.
The field within the coil has definite direction, corresponding to the direction of lines of
force running through it. The flux density passing through the interior of the coil is
proportional to the product of current and the number of turns of the coil. Therefore
changing either the current or the number of turns of the coil can vary the magnetizing
force. For large parts, winding several turns of cable around the part can produce a coil.
Care must be taken to ensure that no indications are concealed beneath the cable.

The relationship between the length of the part being inspected and the width of the
coil must be considered. For a simple part, the effective overall distance that can be
inspected using a coil is approximately 6 to 9 inches on either side of the coil. Thus, a
part 12 to 18 inches long can be inspected using a normal coil approximately 1 in. thick.
In testing large parts, either the part or the coil is moved at regular intervals for
complete coverage.

The ease with which the part can be magnetized in a coil is significantly related to the
length-to-diameter ratio (L/D) of the part. This is due to the demagnetizing effect of the
magnetic poles set up at the ends of the part. This effect is considerable for L/D ratios
less than 10 to 1 and is very significant for ratios less than 3 to 1. When using a coil for
magnetizing a long bar, strong polarity at the ends of the part could mask transverse
defects. An advantageous field in this area is assured on full-wave, three-phase DC
units by special circuitry known as “quick” or “fast” break.
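The magnetizing force of a coil is usually specified in ampere-turns (NI = current x turns). A rule of thumb commonly quoted in magnetic particle references (for a low fill-factor coil with the part lying against the inside wall of the coil) is NI ≈ 45,000/(L/D). That constant is not stated in this text, so treat the sketch below as a starting estimate to be verified with a field indicator on the actual part:

```python
def coil_shot_ampere_turns(length, diameter, k=45000.0):
    """Rule-of-thumb ampere-turns for a low fill-factor coil shot with the
    part against the inside coil wall: NI = k / (L/D). The constant
    k = 45000 is the value commonly quoted in MT references (assumed here,
    not from this text); verify the resulting field on the part."""
    ld = length / diameter
    if ld < 3:
        raise ValueError("L/D below 3: formula unreliable; extend the "
                         "effective length (e.g., with end pieces)")
    return k / ld


def coil_current(length, diameter, turns):
    """Required current for a coil with the given number of turns."""
    return coil_shot_ampere_turns(length, diameter) / turns


# Illustrative: a 16 in. long, 2 in. diameter shaft (L/D = 8) in a 5-turn coil
print(round(coil_current(16.0, 2.0, 5)))  # NI = 45000/8 = 5625 -> 1125 A
```

The guard on L/D < 3 mirrors the text's warning that the demagnetizing effect of the end poles becomes very significant at low length-to-diameter ratios.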

The various advantages and disadvantages for different material forms for Coil method are
given below:

Medium-Sized Parts Whose Length Predominates, Such as a Crankshaft or Camshaft

Advantages
All generally longitudinal surfaces are longitudinally magnetized for the detection of
transverse discontinuities.

Disadvantages
Parts should be centered in the coil to use the coil field effectively during a given shot.
Length may dictate additional shots as the coil is repositioned.

Large Castings, Forgings, or Shafts

Advantages
Longitudinal field easily attained by wrapping the part with a flexible cable.

Disadvantages
Multiple processing may be required because of part shape.

Miscellaneous Small Parts

Advantages
Easy and fast, especially where residual method is applicable.
Non-contact with part
Relatively complex parts can be usually processed with same ease as for a simple part.

Disadvantages
L/D ratio is important in determining the current adequacy.
Sensitivity diminishes at the ends of the part because of the general leakage field pattern.
Quick break of current is desirable to minimize end effect on short parts with low L/D
ratios.
Traditional vs. Multidirectional Coil
Most traditional units will magnetize a part with a direct contact shot (for a circular field)
and a longitudinal coil shot in order to inspect all planes of a part. The direct contact
shot will magnetize the part in two of the planes while the coil will cover the third plane.
The multidirectional coil induces the field in the part in the same way as the coil shot on
a traditional unit. A coil can only induce a field in one direction, so the multidirectional
coil is made up of a three-coil system. Each part of the coil system induces a field in a
different plane.

Advantages of the Multi-directional Coils

(a) Since the fields are all induced into the part, chances of contact damage from
arcing or clamping are eliminated.
(b) The non-contact method also allows all surfaces to be inspected in one step, since
the contact surfaces are now free from interference.

The coil system can be set up to handle multiple parts at once, increasing productivity,
or set up to allow any part orientation to decrease handling time for loading and
unloading.

• Controlled break circuitry (“quick break”) ensures the magnetic field wraps
around the ends of the test piece (crucial to final inspection).
• A current assurance indicator lets the operator know if the current did not pass
through, or a magnetic field was not set up in, the test piece.

MULTIDIRECTIONAL MAGNETIZATION

With all magnetizing methods, discontinuities that are perpendicular to the magnetic
field are optimally detected. However, discontinuity detection also depends heavily on
the material permeability and the properties of the testing medium. Magnetic particle
inspection also detects defects that are not exactly perpendicular to the magnetic field.
In this case, the lines of flux can be decomposed into two components, one parallel and
the other perpendicular to the direction of the crack. The perpendicular component
accounts for their detectability. In some cases, even cracks appearing to be parallel to
the field are weakly detected. The reason is that most cracks are ragged in outline, so
that some sections may be favorably oriented for detection. In practice, however, cracks
can generally be detected only when the angle between them and the direction of
magnetization is more than 30°.

VECTOR FIELD
When two magnetizing forces are imposed simultaneously on the same part, the object
is not magnetized in two directions simultaneously. A vector field is formed in the
resultant direction, and the strength of the new field depends on the strengths of the
two imposed fields. This is illustrated in the figure, where Fa is the first magnetizing
force in one direction and Fb is the second magnetizing force in another direction. As a
result, a vector field, indicated by Fa+b, is produced. One of the major advantages of
this type of magnetization is that defects occurring in almost all directions can be
detected.

(Figure: two magnetizing forces Fa and Fb combining into the resultant vector field Fa+b)
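The resultant field follows ordinary vector addition. A minimal sketch (field magnitudes illustrative, not from the text):

```python
import math


def resultant_field(fa, fb, angle_deg=90.0):
    """Magnitude and direction (degrees, relative to Fa) of the vector
    sum of two magnetizing forces Fa and Fb separated by angle_deg."""
    angle = math.radians(angle_deg)
    fx = fa + fb * math.cos(angle)  # component along the Fa axis
    fy = fb * math.sin(angle)       # component perpendicular to Fa
    magnitude = math.hypot(fx, fy)
    direction = math.degrees(math.atan2(fy, fx))
    return magnitude, direction


# Equal perpendicular fields: resultant lies at 45 deg, sqrt(2) times stronger
mag, ang = resultant_field(100.0, 100.0)
print(f"{mag:.1f} at {ang:.1f} deg")  # -> 141.4 at 45.0 deg
```

Unequal field strengths rotate the resultant toward the stronger field, which is why both imposed fields must be strong enough for defects in all directions to remain detectable.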

Combined Direct Fields
When a DC magnetic field of a certain direction and strength is superimposed on one
of a different direction and strength, the two fields combine to form another field, as
shown in the figure above. The resulting field has a direction and strength different
from either of the primary fields and is, therefore, very difficult to predict, especially
when induced in complex shapes.

Combined DC and AC Fields

Two perpendicular magnetic fields are created and the resultant field changes its
direction as shown in the figure below. The first two lines indicate the circular flux, the
next two lines indicate the longitudinal flux, and the third two lines indicate the resultant
flux direction created by the combined field. The direction change occurs in such a way
that, for at least a short time, some field component is perpendicular to any existing
crack direction. This in turn causes particle accumulation and hence the formation of
indications.

Direction of Magnetization
At least two separate examinations shall be performed on each area. During the
second examination, the lines of magnetic flux shall be perpendicular to those during
the first examination. All examinations shall be conducted with sufficient overlap to
assure 100% coverage at the required sensitivity.

White contrast paints can be applied over the surface to enhance indications. However,
they should be applied only in small amounts, and it must be demonstrated that
indications can be detected through these enhancement coatings.

TOROIDAL MAGNETIZATION

When magnetizing a part with a toroidal shape, such as a solid wheel or a disk with a
center opening, an induced field that is radial to the disk is most useful for the detection
of discontinuities oriented in the circumferential direction. In such applications, this field
may be more effective than multiple shots across the periphery.

MANUAL CLAMP/MAGNETIC LEECH TECHNIQUE

Local areas of complex components may be magnetized by electrical contacts manually
clamped or attached with magnetic leeches to the part. Sufficient overlap may be
necessary if testing at the contact locations is required.

TECHNIQUES OF CIRCULAR MAGNETIZATION

The techniques used to create a circular magnetic field in the material are as follows;

HEADSHOT TECHNIQUE
This is also called the direct contact method. For small parts having no openings
through the interior, circular magnetic fields are produced by direct contact with the part.
This is done by clamping the part between the contact heads of a headshot machine,
generally a bench unit as shown in the figure. A similar unit can also be used to supply
current to a central conductor. The contact heads are constructed so that the surfaces
of the part are not damaged, either physically by pressure or structurally by heat from
arcing or from high resistance at the points of contact.
The contact heads must be kept clean in order to carry out useful inspection. For the
complete inspection of a complex part, it may be necessary to attach clamps at several
points or to wrap cables around the part to orient the fields in a particular direction.
Copper braided pads are often used on the headstocks to provide contact area and
reduce the possibility of burning the part during inspection, since high currents pass
through it.

The various advantages and disadvantages for different material forms for headstock
contact are given below:

Solid, relatively small parts (castings, forgings and machined pieces)

Advantages
Fast and easy technique
Circular magnetic field surrounds current path
Good sensitivity to surface and near surface discontinuities
Simple as well as relatively complex parts can be easily processed with one or more shots.
Complete magnetic path is conducive to maximizing residual characteristics of material.

Disadvantages
Possibility of arc burns, if poor contact conditions exist.
Long parts should be magnetized in sections to facilitate bath application without resorting
to an overly long current shot.

Large Castings and Forgings


Advantages
Large surface areas can be processed and examined in relatively short time.

Disadvantages
High amperage requirements (16,000 – 20,000 A) dictate a special power supply.

Cylindrical Parts Such as Tubing, Pipe, Hollow Shafts, etc.

Advantages
Entire length can be circularly magnetized by contacting, end to end.

Disadvantages
Field effectively limited to the outside surface; cannot be used for inner diameter
examination.
Ends must be conducive to electrical contact and capable of carrying the current without
excessive heat.
Cannot be used on oil country tubular goods because of arc burns.

Long Solid Parts Such as Billets, Rods, Shafts, etc.

Advantages
Contacting end to end can circularly magnetize the entire length.
Current requirements are independent of length.
No end loss.

Disadvantages
Voltage requirements increase as length increases, due to the greater impedance of the
cable and part.
Ends must be conducive to electrical contact and capable of carrying the current without
excessive heat.

PROD CONTACTS

Large and massive parts too bulky to be placed in any sort of unit are inspected using
prod contacts to pass the current directly through the part or through a local portion of
it. Such local contacts do not always produce true circular fields, but they are very
convenient and practical for many purposes. Prod contacts are often used in the
magnetic particle inspection of large castings and weldments.

The prod tips that contact the piece should be aluminum, copper braid, or copper pads
rather than solid copper. With solid copper tips, accidental arcing during prod placement
or removal can cause copper penetration into the surface, which may result in
metallurgical damage (softening, hardening, cracking, etc.).

The prod electrodes (legs) are first pressed firmly against the test part. The
magnetizing current is then passed through the prods and into the area of the part in
contact with them. This establishes a circular field in the part around and between the
prod electrodes, sufficient to carry out a local magnetic particle test. Extreme care
should be taken to maintain clean prod tips, to minimize heating at the points of
contact, and to prevent arc burns and local overheating of the surface. These could
adversely affect the surface being examined and might affect the material properties as
well. Prods should not be used on machined surfaces or on aerospace component
parts.
Unrectified AC limits the prod technique to the detection of surface discontinuities.
HWAC is the most desirable form of current, since it will detect both surface and near-
surface discontinuities. The prod technique generally uses dry magnetic particles
because of their better particle mobility. Wet magnetic particles are not generally used
with prods because of potential electrical and flammability hazards. Proper prod
placement requires a second placement with the prods rotated approximately 90° from
the first placement to assure that all existing discontinuities are revealed. Sufficient
overlap should be given between successive prod placements. On large surfaces, it is
good practice to lay out a grid for the prod or yoke technique.

Prod spacing is measured as the distance along a centerline connecting the prod
centers. It should not exceed 8" (200mm). Shorter spacing may be used to
accommodate geometric limitations of the area being examined, or to increase the
sensitivity. Prod spacings of less than 3" are usually not practical due to the banding of
particles around the legs of the prods. When the area of examination exceeds a width
of one-quarter of the prod spacing, the magnetic field intensity should be verified at the
edges of the area being examined. The various advantages and disadvantages of prod
contact for different material forms are given below:

Welds
Advantages
Circular field can be selectively directed to weld area by prod placement.
In conjunction with HWAC and dry powder, provides excellent sensitivity to surface and
subsurface discontinuities.
Portability – can be brought to examination site easily.

Disadvantages
Only small area can be examined at one time.
Arc burns due to poor contact
Surface must be dry when dry powder is being used
Prod spacing must be in accordance with the magnetizing current level

Large Castings and Forgings

Advantages
Entire surface area can be examined in small increments using nominal current values.
Circular fields can be concentrated in specific areas that are historically prone to
discontinuities.
Equipment can be brought to the location of parts that are difficult to move.
In conjunction with HWAC and dry powder, provides excellent sensitivity to subsurface
discontinuities.

Disadvantages
Coverage of large areas requires multiple shots and can be very time consuming.
Possibility of arc burns due to poor contact.
Surface should be dry before application of dry powder.

CENTRAL CONDUCTOR
A central conductor is used to induce indirect magnetization in a specimen. For
many tubular and ring-shaped objects, it is advantageous to use a separate conductor
to carry the current rather than the part itself. Such a conductor, commonly referred to
as a central conductor, is threaded through the inside of the part and is a convenient
means of circularly magnetizing a part without the need for making direct contact with
the material itself. Central conductors are made of solid or tubular, magnetic or
nonmagnetic materials that are good conductors of electricity.

The basic rules regarding the magnetic field around a circular conductor carrying direct
current are as follows:
(a) The magnetic field around a conductor of uniform cross-section is uniform
along the length of the conductor.
(b) The magnetic field is at 90° to the path of the current through the conductor.
(c) The flux density outside the conductor varies inversely with the radial
distance from the center of the conductor.

Circular Magnetic Fields Distribution and Intensity
As discussed previously, when current is passed through a solid conductor, a magnetic
field forms in and around the conductor. The following statements can be made about
the distribution and intensity of the magnetic field.

• The field strength varies from zero at the center of the component to a maximum
at the surface.

• The field strength at the surface of the conductor decreases as the radius of the
conductor increases when the current strength is held constant. (However, a
larger conductor is capable of carrying more current.)

• The field strength outside the conductor is directly proportional to the current
strength. Inside the conductor the field strength is dependent on the current
strength, magnetic permeability of the material, and if magnetic, the location on
the B-H curve.

• The field strength outside the conductor decreases with distance from the
conductor.

DISTRIBUTION OF MAGNETIC FIELDS

Solid Nonmagnetic Conductor Carrying Direct Current


The distribution of magnetic field inside a
nonmagnetic conductor, such as a copper
bar, when carrying a direct current is
different from the distribution external to
the bar. At any point outside the bar, the
flux density is the result of only that portion
of the current that is flowing in the metal
between the point and the center of the
bar. Therefore, the flux density increases
linearly, from zero at the center of the bar
to a maximum value at the surface.
Outside the bar, the magnetic field
decreases along the curve as shown in the
figure. In calculating the flux densities
outside the bar, the current can be
considered to be concentrated at the
center of the bar. If the radius of the bar is R and the flux density (B) at the surface
of the bar is equal to the magnetizing force (H), then the flux density at a distance of 2R
from the center of the bar will be H/2; at 3R, H/3; and so on.
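The linear rise inside the bar and the 1/r fall-off outside can be expressed in a few lines. This is a sketch only; the function name is mine, and H is taken, as in the text, to be the flux density at the bar surface.

```python
def flux_density(r, R, H):
    """Flux density around a solid nonmagnetic bar of radius R carrying DC,
    where H is the flux density at the bar surface. Inside the bar, B rises
    linearly from 0 at the center to H at the surface; outside, the current
    is treated as concentrated at the center, so B falls off as H*R/r."""
    if r <= R:
        return H * r / R   # inside: linear from 0 at the center to H
    return H * R / r       # outside: inverse with radial distance

# As stated above: at a distance of 2R the field is H/2, and at 3R it is H/3.
```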

Solid Ferromagnetic Conductor Carrying Direct Current


If the current-carrying conductor is a solid magnetic material, the same distribution exists
as in a nonmagnetic conductor, but the flux density is much greater. The flux density
is zero at the center, but at the surface it is μH, where μ is the permeability of the
magnetic material. The actual flux density may therefore be many times that in a
nonmagnetic material.

Solid Ferromagnetic Conductor Carrying Alternating Current

The distribution of the magnetic field in a solid ferromagnetic conductor carrying


alternating current is shown in the figure. Outside the conductor, the flux density
decreases along the same curve as if the magnetizing force were produced by direct
current; however, while AC is flowing, the field is constantly varying in strength and
direction. Inside the conductor, the flux density is zero at the center and increases
toward the outer surface, slowly at first, then accelerating to a maximum at the surface.
The flux density at the surface is proportional to the permeability of the conductor
material.

Central Conductor Enclosed Within Hollow Ferromagnetic Cylinder

When a central conductor is used to magnetize a hollow cylindrical part made of a
ferromagnetic material, the flux density is maximum at the inside surface of the part.
The flux density produced by the current in the central conductor is maximum at the
surface of the conductor and decreases through the space between the conductor and
the inside surface of the part. At that surface, the flux density is immediately increased
by the permeability factor, μ, of the material of the part, and then decreases toward the
outer surface. Beyond the outer surface, the flux density drops back to the same
decreasing curve it was following before entering the part.

This method therefore produces strong indications at the inside surface, where the flux
density is maximum. Sometimes these indications may also appear at the outside
surface of the part. The flux density at the inside surface of the part is the same for
both magnetic and nonmagnetic conductors, because it is the field external to the
conductor that magnetizes the part.

In small hollow cylindrical parts, it is desirable that the conductor be placed centrally so
that a uniform magnetic field will exist for the detection of discontinuities at all portions
of the part. When AC is passed through a hollow circular conductor the skin effect
concentrates the magnetic field at the OD of the component.

As can be seen in the field distribution images, the field strength at the inside surface of
a hollow conductor carrying its own magnetizing current (direct magnetization) is very
low. Therefore, the direct method of magnetization is not recommended when
inspecting the inside diameter wall of a hollow component for shallow defects. The field
strength increases rather rapidly moving outward from the ID, so if the defect has
significant depth, it may be detectable. However, a much better method of magnetizing
hollow components for inspection of the ID and OD surfaces is the use of a central
conductor. As can be seen in the field distribution image, when current is passed
through a nonmagnetic central conductor (copper bar), the magnetic field produced on
the inside diameter surface of a magnetic tube is much greater, and the field is still
strong enough for defect detection on the OD surface.

Offset Central Conductor


In larger diameter tubes, rings, pipes, etc., the current required to produce an adequate
magnetic field becomes excessively large. To overcome this, an offset central
conductor should be used. The conductor passing through the inside of the part is
placed against an inside wall of the part. The distance along the part circumference
(interior or exterior) that is effectively magnetized is taken as four times the diameter of
the central conductor. The entire circumference is inspected by rotating the part on the
conductor, allowing approximately 10% overlap of the magnetic field. The diameter of
the central conductor is not related to the inside diameter or the wall thickness of the
cylindrical part. Conductor size is usually based on its current-carrying capacity and
ease of handling.
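The number of conductor positions needed to cover a part follows from the 4-diameter coverage rule and the 10% overlap described above. A rough sketch only; the function name and the treatment of overlap as a simple reduction of the effective arc are my assumptions.

```python
import math

def offset_conductor_shots(part_od_in, conductor_dia_in, overlap=0.10):
    """Estimate how many offset-conductor positions are needed to cover
    the full circumference, given that each shot effectively magnetizes
    an arc of four conductor diameters, reduced by the required overlap."""
    effective_arc = 4.0 * conductor_dia_in * (1.0 - overlap)
    circumference = math.pi * part_od_in
    return math.ceil(circumference / effective_arc)

# Example: a 10 in. OD ring with a 1 in. conductor needs about 9 positions.
shots = offset_conductor_shots(10.0, 1.0)
```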

Central conductor inspection is sometimes required on components having parallel
multiple openings, such as engine blocks. For this purpose, a multiple central
conductor fixture can be designed that enables the operator to process two or more
cylinders at one time with the same degree of sensitivity as if they were processed
individually.

Field distributions: hollow magnetic conductor (DC) and hollow nonmagnetic conductor (DC)

The various advantages and disadvantages of the central conductor technique for
different material forms are given below:

Miscellaneous parts having holes through which a conductor can be threaded, such as
nuts, rings, washers, etc.

Advantages
No electrical contact to part and no possibility of arcing.

Circumferentially directed magnetic field is generated in all surfaces surrounding the
conductor.
Ideal for those cases where the residual method is applicable.
Central conductor can support lightweight parts.
Multiple turns may be used to reduce the required current.

Disadvantages
Size of the conductor must be ample to carry the required current.
Ideally, the conductor should be centrally located within the hole.
Larger diameters require repeated magnetization and rotation of the part for complete
coverage.
Where the continuous magnetization technique is employed, inspection is required after
each magnetization.

Tubular Type Parts Such as Pipe, Tube, Hollow Shafts

Advantages
No electrical contact to part is required.
Both inside and outside diameter examination is possible.
Entire length of the part is circularly magnetized.

Disadvantages
Outside diameter sensitivity may be somewhat diminished for parts with large diameter and
heavy wall.

Large Valve Body and Similar Parts


Advantages
Provides good sensitivity for the detection of discontinuities located at the internal wall of the
part.

Disadvantages
Outside diameter sensitivity may be somewhat diminished for parts with large diameter and
heavy wall.

CURRENT SELECTION

PROD TECHNIQUE

Direct or rectified current shall be used. The current shall be 100 amp/inch to
125 amp/inch of prod spacing for sections ¾ inch (19mm) thick or greater. For
sections less than ¾ inch thick, the current shall be 90 amp/inch to 110 amp/inch
of prod spacing.
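The prod amperage rule above reduces to a short calculation. This is a sketch (the function name is mine); it also flags spacings outside the practical 3–8 in. range noted earlier.

```python
def prod_current_range(spacing_in, thickness_in):
    """Amperage range for the prod technique: 100-125 A per inch of prod
    spacing for sections 3/4 in. thick or greater, 90-110 A per inch of
    spacing for thinner sections. Spacing should be 3 to 8 inches."""
    if not 3.0 <= spacing_in <= 8.0:
        raise ValueError("prod spacing should be between 3 and 8 inches")
    lo, hi = (100, 125) if thickness_in >= 0.75 else (90, 110)
    return lo * spacing_in, hi * spacing_in

# A 6 in. prod spacing on a 1 in. thick section calls for 600 to 750 A.
amps_min, amps_max = prod_current_range(6.0, 1.0)
```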

LONGITUDINAL MAGNETIZATION TECHNIQUE


For this technique, magnetization is accomplished by passing current through a
multi-turn fixed coil (or cables wrapped around the part).

The required field strength shall be calculated based on the length (L) and the
diameter (D) of the part. Long parts shall be examined in sections not to exceed 18",
and 18" shall be used for calculating the required field strength. For non-cylindrical
parts, D shall be the maximum cross-sectional diagonal.

(i) Parts with L/D Ratio’s Equal to or Greater than 4

The magnetizing current shall be within +/- 10% of the ampere-turns value
determined as below:

Ampere-turns = 35000/((L/D)+2)

For example, a part 10" long and 2" in diameter has an L/D ratio of 5, so the
required ampere-turns shall be

Ampere-turns = 35000/(5+2) = 5000 AT

(ii) Parts with L/D Ratio Less than 4, but Not Less Than 2

The magnetizing current shall be within +/- 10% of the ampere-turns value
determined as below:

Ampere-turns = 45000/ (L/D)

(iii) If the area to be magnetized extends beyond 6" on either side of coil, field
adequacy shall be demonstrated by field indicator.

(iv) For large parts, where the above cannot be applied due to size and shape, the
magnetizing current shall be 1200 AT to 4500 AT. A field indicator shall demonstrate
field adequacy.

The magnetizing current shall be calculated by dividing the ampere-turns obtained
above by the number of turns of the coil.
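The two ampere-turn formulas and the division by coil turns can be combined into one helper. A sketch (the function names are mine) that reproduces the worked 10 in. by 2 in. example from the text.

```python
def ampere_turns(length_in, diameter_in):
    """Required ampere-turns for longitudinal (coil) magnetization,
    based on the part's L/D ratio."""
    ld = length_in / diameter_in
    if ld >= 4:
        return 35000.0 / (ld + 2)   # L/D equal to or greater than 4
    if ld >= 2:
        return 45000.0 / ld         # L/D less than 4 but not less than 2
    raise ValueError("L/D below 2: the formulas above do not apply")

def coil_current(length_in, diameter_in, turns):
    """Magnetizing current = ampere-turns divided by number of coil turns."""
    return ampere_turns(length_in, diameter_in) / turns

# The text's example: L = 10 in., D = 2 in. gives L/D = 5, hence 5000
# ampere-turns; with a 5-turn coil this is a 1000 A magnetizing current.
current = coil_current(10.0, 2.0, 5)
```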

CIRCULAR MAGNETIZATION TECHNIQUE

DIRECT CONTACT TECHNIQUE:

For this technique, the part is magnetized by passing current directly through it.
Direct or rectified current shall be used.

(i) The current shall be 300 Amp/inch (12 A/mm) to 800 Amp/inch (31 A/mm) of
outer diameter.
(ii) For parts with geometric shapes other than round, the greatest cross-
sectional diagonal in a plane at right angles to the current flow shall
determine the inches to be used.
(iii) If the current levels required for the above cannot be obtained, the
maximum current shall be determined by checking the field adequacy, by
field indicator.
(iv) For non-cylindrical parts, and when examining large parts by clamping the
contacts to the wall thickness, field adequacy shall be demonstrated by
field indicator.
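The direct contact rule of 300–800 A per inch of outer diameter reduces to a one-line range calculation. A sketch only; the function name is mine.

```python
def head_shot_current_range(outer_diameter_in):
    """Current range for direct contact (head shot) circular magnetization:
    300 to 800 A per inch of outer diameter (12 to 31 A/mm). For non-round
    parts, use the greatest cross-sectional diagonal in a plane at right
    angles to the current flow instead of the diameter."""
    return 300.0 * outer_diameter_in, 800.0 * outer_diameter_in

# A 2 in. diameter bar calls for roughly 600 to 1600 A.
low, high = head_shot_current_range(2.0)
```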

CENTRAL CONDUCTOR TECHNIQUE

GUIDE FOR SELECTING CURRENT VALUES FOR THE USE OF A CENTRAL CONDUCTOR

Central Conductor     Section Wall          Amperage (A)
Diameter (inches)     Thickness (inches)

1/2                   0.125                 500
                      0.250                 750
                      0.375                 1000
                      0.500                 1250

1                     0.125                 750
                      0.250                 1000
                      0.375                 1250
                      0.500                 1500

1-1/2                 0.125                 1000
                      0.250                 1250
                      0.375                 1500
                      0.500                 1750

2                     0.125                 1250
                      0.250                 1500
                      0.375                 1750
                      0.500                 2000

Note: For cylinder wall thicknesses greater than 0.500", add 250 A (±10%) for each additional 0.125".
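The amperage guide above, including the note's 250 A increment for walls beyond 0.500 in., can be mirrored in a simple lookup. A sketch (names are mine); only the tabulated conductor diameters are handled, and intermediate wall thicknesses are rounded up to the next tabulated step.

```python
import math

# Amperage table: {conductor diameter (in): {section wall thickness (in): amps}}
CENTRAL_CONDUCTOR_AMPS = {
    0.5: {0.125: 500,  0.250: 750,  0.375: 1000, 0.500: 1250},
    1.0: {0.125: 750,  0.250: 1000, 0.375: 1250, 0.500: 1500},
    1.5: {0.125: 1000, 0.250: 1250, 0.375: 1500, 0.500: 1750},
    2.0: {0.125: 1250, 0.250: 1500, 0.375: 1750, 0.500: 2000},
}

def central_conductor_amps(conductor_dia_in, wall_in):
    """Amperage for the tabulated conductor diameters. Per the note, walls
    over 0.500 in. add 250 A for each additional 0.125 in. of thickness."""
    row = CENTRAL_CONDUCTOR_AMPS[conductor_dia_in]
    if wall_in <= 0.500:
        # round the wall up to the next tabulated 0.125 in. step
        step = math.ceil(wall_in / 0.125) * 0.125
        return row[round(step, 3)]
    extra_steps = math.ceil((wall_in - 0.500) / 0.125)
    return row[0.500] + 250 * extra_steps
```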
When using an offset central conductor, the conductor passing through the inside of the
part is placed against the inside wall of the part. The current shall be from 12 A per
mm of part diameter to 32 A per mm of part diameter (300 – 800 A/inch). The diameter of
the part shall be taken as the largest distance between any two points on the outside
circumference of the part. Generally, currents will be 500 A/inch (20 A per mm) or lower,
with the higher currents (up to 800 A/inch) being used to examine for inclusions or to
examine low-permeability alloys, such as precipitation-hardening steels. The entire
circumference of the part shall be examined by rotating the part on the conductor,
allowing a 10% overlap.

Photographs of Various Techniques are shown below


Coil Shot

Central Conductor

Head shot Technique

MEDIUMS: TYPES & APPLICATION METHODS


Magnetic particles are classified according to the medium used to carry them to the
part. The medium can be air (dry particle method) or a liquid (wet particle method).
Magnetic particles can be made of any finely divided, low-retentivity ferromagnetic
powder.

Characteristics of Mediums

(a) Magnetic Properties:
The particles used for magnetic particle inspection should have high permeability
so that they can be readily magnetized by the low-level leakage fields that occur
around discontinuities and drawn to these leakage sites to form a visible
indication. The leakage field produced by some tight cracks is extremely weak.
Low coercive force and low retentivity are other desirable magnetic properties.
With high coercive force, wet particles become strongly magnetized
and form an objectionable background. In addition, the particles adhere to
any steel in the tanks or piping of the unit, causing heavy settling-out losses. Highly
retentive wet particles tend to cling together quickly in large aggregates on the
test surface, leading to a lack of mobility and coarse indications and backgrounds.

(b) Effect of Particle Size: Large and heavy particles are not likely to be arrested by
weak fields, whereas very weak fields will hold fine particles. However, extremely fine
particles will also adhere to fingerprints, rough surfaces, and soiled or damp
areas, thus obscuring indications. For a wet medium, if the particle size is fine,
the liquid will readily drain away, leaving a thin film of particles on the surface;
coarser particles will become stranded and immobilized.

(c) Effect of Particle Shape: Long, slender particles develop stronger polarity than
globular particles. Because of the attraction between opposite poles, these tiny,
slender particles, which have pronounced north and south poles, arrange
themselves into strings more readily than globular particles. The advantage of
globular dry particles is that they flow freely and smoothly under similar
conditions, whereas elongated particles tend to mat. The greatest sensitivity is
provided by a blend of elongated and globular particles.

(d) Visibility and Contrast: These are promoted by choosing particles that are easy to
see against the color of the surface of the part being inspected. The natural color of
the metallic powders used in the dry method is silver-gray, but pigments are added to
impart color to the particles. The colors of wet-method particles are limited to the
black and red of the iron oxides that are used as the base for wet particles. For
increased visibility, manufacturers coat particles with a fluorescent pigment.
The search for indications is then conducted in total or partial darkness, using
ultraviolet radiation to activate the fluorescent dyes. Fluorescent particles are
available in both wet and dry mediums, but the fluorescent method is most commonly
used with the wet method.

TYPES OF MAGNETIC PARTICLES

The two primary types of particles used in magnetic particle inspection are dry and wet
particles. The particle selection is influenced by:

Location of the discontinuity, i.e., surface or subsurface.


Size of the discontinuity, if on the surface.
Which type, wet or dry particles, is easier to apply.

DRY PARTICLES
Dry particles, when used with direct current for magnetization, are superior for detecting
discontinuities wholly below the surface. The use of AC with dry particles is excellent
for the detection of surface cracks that are not exceedingly fine, but is of little value for
subsurface discontinuities. Dry powder is not recommended for the detection of fine
discontinuities such as fatigue cracks and grinding cracks.

Dry Particle Uses

In weld testing, the typical dry particle technique is used with prods and yokes, the
inspector magnetizing and testing short overlapping lengths of the weld. The
continuous magnetization method is used (the magnetic field is continuously activated
while the inspector applies the powder and removes the excess). Automated
processing has been used for testing linear welds on large-diameter pipes. However,
much testing is done on site, where only portable test systems can be used. Direct
current is usually preferred for the inspection of welds, as it penetrates deeply and
allows the indication of slightly subsurface linear discontinuities.

Dry particles are particularly useful for the magnetic particle inspection of large
castings. Cast objects are usually tested with yokes or prods, with tests covering small
overlapping areas. Dry particles are most sensitive for use on very rough surfaces
and for detecting flaws beneath the surface. The reclamation and re-use of the
particles is not recommended.

“Magnetic particle examination using dry mediums shall not be performed if the
surface temperature of the part exceeds 600°F.”

Application methods:
Air is used as the medium to carry the particles to the part’s surface, and care must be
taken to apply them correctly. Dry particles must be applied in a gentle stream or cloud.
If applied too rapidly, the stream may dislodge already-formed indications and will
produce an objectionable background masking real indications. Manual and
mechanized applicators can provide the proper density and speed of particle
application. When the particles are applied, they come under the influence of the
leakage fields while they are airborne and have three-dimensional mobility. The proper
technique is to float a cloud of particles onto the test object’s surface and then use a
gentle stream of air to remove lightly held background particles.

For best results, the magnetizing current should be present
throughout the application of the particles and the removal of
the background. It is equally important to monitor the particles while
they are being applied. This is especially true for subsurface
discontinuity indications, which are weakly held and not well
delineated, as they are susceptible to damage from particles
applied later.

Particle Reuse
It is recommended that dry magnetic particles be used only once. Ferrous magnetic
powders are dense. When agitated in bulk, as in a powder blower or a bulb, a lot of
shearing and abrasion occurs, which wears off the pigment. As a result, after each
reuse the color and contrast continually diminish, to the point that discontinuity
indications are not visible.

Particle Storage
The storage method for dry powders is critical to their subsequent use. The primary
environmental concern is moisture. If the particles are exposed to high levels of
moisture, they immediately begin to form oxides. Rusting alters their color, but the
major problem is that the particles adhere to each other, forming lumps or large masses
that are useless for magnetic particle inspection.

There are limitations, though not severe, on the temperatures at which dry powders can
be stored and used. Visible powders work on surfaces as hot as 370°C, at which point
some particles become sticky and others lose some of their color. Beyond 370°C,
magnetic powders can ignite and burn. Fluorescent and daylight-fluorescent powders
lose their visible contrast at 150°C, and sometimes at lower temperatures. This occurs
because the pigments are organic compounds that decompose or lose their ability to
fluoresce at these temperatures.

Advantages of Dry Mediums:

(i) Dry magnetic particles are superior to wet particles in the detection of near-
surface discontinuities.
(ii) Suitable for large objects, when using portable equipment for local magnetization.
(iii) Superior particle mobility is obtained for relatively deep-seated flaws, with
half-wave rectified current as the source.
(iv) Can be removed from the surface easily.

Disadvantages of Dry Mediums:

(i) Cannot be used in confined areas without proper safety breathing apparatus.
(ii) Probability of Detection (POD) is appreciably less than the wet medium for fine
surface discontinuities.
(iii) Difficult to use in overhead magnetizing positions

(iv) No evidence exists of complete coverage of the part, as there is with the wet method.
(v) Lower production rates.
(vi) It is difficult to adapt to any sort of automated system.

WET MEDIUMS

Wet mediums are suited for the detection of fine surface discontinuities such as fatigue
cracks. Wet particles are commonly used with stationary equipment, where the bath
can remain in use until contaminated. They are also used in field operations, but care
should be taken to maintain bath concentration by constant agitation.

They are available in red and black colors, or as fluorescent particles that impart a bluish-
green or greenish-yellow color. The particles are supplied in the form of a paste, or as a
concentrate that is suspended in a liquid to produce a bath. The liquid may be either water
or a petroleum distillate having specific properties.

In the wet method, particle size may range from 1/8 micron up to 40 or 60 microns
(0.0015 – 0.0025 in.). The particle size is smaller than in dry mediums, and hence wet
particles are not a substitute for dry particles when the latter are not available.
The small size and generally compact shape of wet particles have a dominating effect
on their behavior. Their size renders permeability measurements highly inexact and of
limited utility. In addition, the size influences the brightness of fluorescent indications
detected by this medium. The temperature of the wet suspension and the surface
of the part shall not exceed 135°F.

Vehicles for Wet Method Particles


There are two kinds of vehicles used to carry the particles onto the surface of the part:
water and oil suspending liquids.

Oil Suspending Liquids

Oil used as a suspending liquid for magnetic particle testing should be an odorless,
well-refined light petroleum distillate of low viscosity, having low sulfur content and a
high flash point. Low viscosity is desired because in a more viscous bath the movement
of the particles is retarded enough to reduce the buildup, and therefore the visibility, of
indications from small discontinuities. Parts should be precleaned to remove oil and
grease from the surface, because oil from the surface accumulates in the bath and
increases its viscosity.
Oil vehicles are preferred in certain applications:

(a) Where freedom from corrosion of ferrous alloys is vital, such as on finished bearings
and bearing races.
(b) Where water would pose an electrical hazard.
(c) On some high-strength alloys, where hydrogen atoms from water can diffuse into the
crystal structure of certain alloys and cause hydrogen embrittlement.

Water Suspending Liquids

The use of water as a suspending liquid eliminates additional cost and flammability
hazards. Water cannot by itself be used as a medium: it rusts ferrous alloys, wets the
surface poorly, and does not disperse the particles efficiently. Hence, water-suspendible
particle concentrates should contain the necessary wetting agents, dispersing agents,
rust inhibitors, and anti-foaming agents. Water should not be used in freezing conditions,
where the formation of ice would seriously retard the mobility of the particles; however,
ethylene glycol can be added to protect against reasonably low temperatures. Water
vehicles are preferred for the following reasons:

(a) Lower cost.


(b) Little fire hazard.
(c) No petrochemical fumes.
(d) Quicker indication formation.
(e) Minimum cleanup required on site.

Bath Preparation

Wet magnetic particle baths may be mixed by the supplier or may be sold dry for mixing
by the user. If the user prepares the bath, the concentration of the bath should be
given primary importance, since the effectiveness and reliability of the inspection
depend primarily on it. If the concentration of the bath is too high, the
background will be intense enough to camouflage the indications. On the other hand, if
the concentration is too low, the indications will be weak and difficult to locate. Keeping
the concentration at a constant level maintains a consistent indication-to-background contrast.

Settling Test

The effective concentration is best verified through a settling test. Since the 1940s, the
settling test has been used to measure particle bath concentrations. It is a
convenient method that requires little equipment and a simple procedure, and takes only
30 to 60 minutes to perform. Its accuracy is sometimes less than 80%, but its levels of
precision are appropriate for most applications.

Settling Parameters
It is essential that the settling test take place in a location free from vibration. The
settling tube must be positioned in an area that is proven to be free of strong magnetic
fields. A freshly prepared bath settles very rapidly, often in 15 minutes or less.
Magnetization causes the particles to cling together and form large, fast-sinking
clusters. Settling speed and volume depend on the particles' magnetization level, and
this is the basis for demagnetizing the settling tube sample.

Apparatus:

Settling test equipment is simple in construction and applicability. It consists of: (a) a
100mL pear-shaped graduated centrifuge tube (b) a stand for supporting the tube
vertically, and (c) a timer to signal the end of settling process.

A centrifuge tube with a 1 mL stem (0.05 mL subdivisions) is used for determining the
concentration of fluorescent particle suspensions, and one with a 1.5 mL stem (0.1 mL
subdivisions) is used for non-fluorescent particle suspensions. The settling test is
performed as per ASTM D 96. Before sampling, the suspension should run
through a recirculating system for about 30 minutes to ensure proper mixing of
particles. A 100 mL sample is taken and demagnetized to keep the particles from clinging
to each other and to the tube body. The settling conditions mentioned above are
maintained, and a settling time of 30 minutes is allowed for water-based suspensions and
60 minutes for petroleum distillate suspensions.

After the recommended settling time, the sample is interpreted visually without
disturbing the setup. If the settling volume is lower than the prescribed limits, add a
sufficient amount of particles; if the concentration is too high, add sufficient vehicle.
The particles or vehicle should not be added directly to the centrifuge tube.
They are added to the container in which the medium was previously prepared.

If the settled particles appear as loose agglomerates, the existing sample in the tube is
discarded, a second sample is taken, and the procedure is repeated. If the particles
still appear agglomerated, they may already be in a magnetized state; the whole
suspension is then discarded and replaced, and the procedure is carried out again to
determine the concentration. Sometimes bands or striations are noticed in the settled
material; these are due to the presence of contaminants in the suspension. If the total
volume of contaminants in the suspension exceeds 30% of the volume of magnetic
particles, or if the liquid is noticeably fluorescent, the bath should be replaced.

Settling Volumes

For fluorescent bath, the concentration shall be 0.1mL to 0.4mL in a 100mL settling
volume.
For a Non-fluorescent bath, the concentration shall be 1.2mL to 2.4mL in a 100mL of
settling volume.

The methodology described above will produce repeatable results provided:

Strength of the field is maintained.


Field should be in proper direction with respect to the discontinuity (essentially
90°).
Material factors such as surface condition, magnetic properties, and size and
shape are taken care of.
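The settling-volume limits above can be sketched as a small acceptance check. This is an illustrative helper only; the function name and return strings are our own, not from any standard.

```python
# Illustrative settling-test check using the limits quoted above:
# fluorescent baths 0.1-0.4 mL, non-fluorescent baths 1.2-2.4 mL,
# both measured in a 100 mL settling sample.

LIMITS_ML = {
    "fluorescent": (0.1, 0.4),
    "non-fluorescent": (1.2, 2.4),
}

def bath_adjustment(bath_type, settled_ml):
    """Suggest the corrective action implied by a settling test reading."""
    low, high = LIMITS_ML[bath_type]
    if settled_ml < low:
        return "add particles to the bath container"
    if settled_ml > high:
        return "add vehicle to the bath container"
    return "concentration acceptable"

print(bath_adjustment("fluorescent", 0.25))     # concentration acceptable
print(bath_adjustment("non-fluorescent", 0.9))  # add particles to the bath container
```

Note that, as the text states, the correction is always made to the bath container, never to the centrifuge tube itself.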

CONTINUOUS MAGNETIZATION

Continuous magnetization is employed for most of the operations utilizing either dry or
wet particles. The sequence of operations for both dry and wet applications is different.

Dry Continuous Magnetization Technique


Unlike wet particles, dry particles lose their mobility when they come into contact with
the part. Therefore, it is imperative that the part / area of interest be under the influence
of magnetic field while the particles are still airborne and free to be attracted towards
the leakage field. This dictates that the flow of current shall be initiated before the
application of dry medium and terminated after the excess powder is blown off.

Wet Continuous Magnetization Technique


The wet continuous magnetization technique generally applies to those parts processed
on a horizontal wet type unit. In practice, it involves bathing the part with the examination
medium to provide an abundant supply of particles on the surface of the part and
terminating the bath application immediately prior to cutting off the magnetizing current.

RESIDUAL MAGNETIZATION TECHNIQUE

In this technique, the examination medium is applied after the magnetizing force has
been discontinued. It can be used only on materials that possess high retentivity
property, such that the particles can be held on the surface and produce indications.
This technique may be advantageous for integration with production or handling
requirements or for intentionally limiting the sensitivity of the examination. It has found
wide use in the inspection of pipe and tubular goods.

VARIABLES IN MAGNETIC PARTICLE INSPECTION

Magnetic particle testing is not an isolated technical discipline. It is a combination of two
distinct nondestructive testing techniques: flux leakage testing and visual inspection.
The basic principle of magnetic particle testing is to magnetize a part to a flux density
that causes magnetic leakage from a discontinuity. Powdered ferromagnetic material is
then passed through the leakage field and the operator visually interprets the particles
held over the discontinuity.

The key to ideal magnetic particle inspection is to provide the highest sensitivity to the
smallest possible discontinuity. This can be achieved through a careful combination of:

(a) Applied magnetic field strength.


(b) Flux density (B) in the test object.
(c) Particle size and application methods.
(d) Optimal viewing conditions.

Effect of Flux Leakage on False Indications

In a magnetic particle test, it is important to raise the field strength and the flux density
in the object to a level that produces a flux leakage sufficient for holding the particles in
place over discontinuities. On the other hand, excessive magnetization causes the
particles to adhere to minor leakage fields not caused by discontinuities. If
such leakage occurs and attracts a large number of particles, the result is a false
indication and the test object is said to be over-magnetized for this inspection. Such
false indications may result from local permeability changes, which are caused by local
stresses in the test object. In some cases, the flux leakage may be caused by a
subsurface discontinuity and may not be possible to distinguish the cause for the
leakage field without the use of additional NDT methods.

REASONS FOR FORMATION OF INDICATIONS

Surface-breaking discontinuities best detected by magnetic particle testing are those
that expel optimal leakage fields. In order to gain a clearer insight into this, it is
necessary to understand three sets of variables:

How discontinuity parameters affect the external flux leakage;
How magnetic field parameters affect the external flux leakage;
How the sensor reacts to passing such fields.

Discontinuity Parameters

The discontinuity parameters are critical and they include depth, width, and angle to the
object surface. In cases where the discontinuity is narrow surface breaking (seams,
laps, quench cracks, and grind tears), the magnetic flux leakage near the mouth of
discontinuity is highly curved. In the case of subsurface discontinuities (inclusions and
laminations), the leakage field is much less curved, and relatively high values of field
strength and flux density within the object are required for testing. This lack of leakage
curvature greatly reduces the particles' ability to be held at such locations.

Magnetic Field Parameters

The magnetic field parameters that most affect flux leakage are the field strength, local
B – H properties, and the angle to the discontinuity opening.
The leakage field’s ability to attract the magnetic particles is determined by several
additional factors. These include:
The magnetic forces between the magnetic flux leakage field and the particle;
Image forces between a magnetized particle and its magnetic image in the surface
plane of the test object;

Gravitational forces that may act to pull the particle into or out of the leakage site;
and
Surface tension forces between particle vehicle and the object surface for wet
method tests.
Some of these forces may in turn vary with discontinuity orientation, earth’s gravitational
field, particle size and shape, and type of medium.

Surface Discontinuities
The largest and most important category of discontinuity consists of those that are
exposed to the surface. Surface cracks or discontinuities are effectively located with
magnetic particle testing. Surface discontinuities are also more detrimental to the
service life of a component than subsurface discontinuities and, as a result, are the more
frequent subject of inspection. Magnetic particle inspection is capable of detecting seams,
laps, quenching cracks and surface ruptures in castings, forgings, and weldments. For
maximum detectability, the discontinuity should essentially lie perpendicular to the
magnetic field. This is especially true for a discontinuity that is small and fine. The
characteristics of a discontinuity that enhance its detectability are:

• Its depth is at right angles to the surface


• Its width at the surface is small
• Its length at the surface is large with respect to its width
• It is comparatively deep in proportion to the width of its surface opening.

Many incipient fatigue cracks and fine grinding cracks are less than 0.025 mm deep and
have surface openings of perhaps a tenth of that or less. Such cracks are readily
detected by the wet method. The depth of the crack has a pronounced effect on its
detectability: the deeper the crack, the stronger the indication for a given level of
magnetization, because the deeper crack causes greater distortion of the field in the
part. This effect is no longer noticeable beyond about 6 mm in depth. If the crack is
not tight-lipped but wide open at the surface, the reluctance of the resulting air gap
reduces the strength of the leakage field. This, combined with the inability of the
particles to bridge the gap, results in a weaker indication. The surface opening also
plays a part in detectability: a surface scratch, though it may be wide at the surface,
usually does not produce an indication, although it may at high levels of magnetization.
Thus many variables influence the formation of an indication.
There are also limitations: a crack that is so tight-lipped as to virtually eliminate the
air gap may produce no indication. Sometimes, with careful interpretation and
field-maximizing techniques, faint indications of such cracks may be produced. One other
type of discontinuity that sometimes poses a detection problem is a forging or rolling lap.

In this case, the leakage field produced is weak because of the small angle of emergence
and the resulting high-reluctance gap. Hence, when conditions demand its detection, DC
magnetization with the wet fluorescent method is desirable.

In general, a surface discontinuity whose depth is at least five times its opening at the
surface will be detected.
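The rule of thumb above can be put in code form. This is a sketch only; the helper name is ours, and the rule is a general guideline, not a guarantee of detection.

```python
def likely_detectable(depth_mm, opening_mm):
    """Rule of thumb from the text: a surface discontinuity whose depth is
    at least five times its surface opening will generally be detected."""
    return depth_mm >= 5 * opening_mm

print(likely_detectable(0.5, 0.05))  # True  (depth is 10x the opening)
print(likely_detectable(0.1, 0.05))  # False (depth is only 2x the opening)
```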

Internal Discontinuities
Magnetic particle inspection is also capable of detecting subsurface discontinuities.
Although radiography and ultrasonic methods are extensively used in the detection of
subsurface discontinuities, the shape of the discontinuities, sometimes, initiates the
requirement for magnetic particle examination. The internal discontinuities that can be
detected by magnetic particle inspection can be divided into two groups:

• Those lying just beneath the surface (subsurface).


• Deep lying discontinuities.

Subsurface discontinuities comprise voids or inclusions that lie just beneath the
surface. Nonmetallic inclusions, as either scattered or individual, occur in almost all steel
products to some degree. These discontinuities are usually very small and cannot be
detected unless they lie close to the surface.

Deep-lying discontinuities in weldments may be caused by inadequate penetration,
incomplete fusion, or cracks in welds. In castings, they result from shrinkage, slag
inclusions, or gas pockets. The depth to which magnetic particle examination can
detect cannot be stated in millimeters, because the size and shape of the
discontinuity itself are controlling factors. Therefore, the deeper the discontinuity lies,
the larger it must be to be detected.

The orientation of the discontinuity is another factor in detection. If the discontinuity
lies at 90° to the field, it offers a sufficient leakage field, whereas for a discontinuity
lying at 60° to 70°, there is a noticeable reduction in the leakage field. The difference
results from the reduction in the projected area.
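The reduction described above is geometric: roughly, only the component of the discontinuity lying across the flux contributes, so the effective (projected) length scales with the sine of the angle between discontinuity and field. The sketch below is illustrative only.

```python
import math

def projected_fraction(angle_deg):
    """Approximate fraction of the discontinuity length presented
    across the magnetic field, for a field-to-discontinuity angle
    given in degrees."""
    return math.sin(math.radians(angle_deg))

print(round(projected_fraction(90), 2))  # 1.0  (full projected area)
print(round(projected_fraction(60), 2))  # 0.87 (modest reduction)
```

This is why a discontinuity at 60° to 70° still gives a usable, if reduced, leakage field, while one nearly parallel to the flux gives almost none.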

REFERENCE STANDARDS AND ARTIFICIAL DISCONTINUITY


INDICATIONS

When multiple variables can affect the outcome of a test, a means should be used to
normalize or standardize the test. This ensures that consistent, repeatable results are
achieved, independent of the machine, operator, or time of inspection.

More often, a form of artificial discontinuity indicator is used. This so-called reference
standard is designed to help evaluate several aspects of a magnetic particle
test system's performance, including:

(a) Testing the magnetic particle equipment;


(b) Checking the sensitivity of magnetic particle compound;
(c) Verifying the accuracy of a test procedure for detecting discontinuities of a
predetermined magnitude.

Reference Standards For System Evaluation


Unlike other systems, magnetic particle systems give little evidence of malfunction. The
absence of an indication could mean:

(a) Tests were carried out according to the specified procedure on a test material
without discontinuities;
(b) The test system was not working properly and was unable to detect even the largest
of the leakage fields from discontinuities.

As a result, some form of reference standards are required to determine the sensitivity
of the test system. Reference standards may be used to evaluate the functionality or
performance of a magnetic particle test system.

On a periodic basis, reference standards are used as test objects in order to monitor the
test system changes in:

(i) Magnetic field production;


(ii) Particle concentration;
(iii) Visibility;
(iv) Contamination

Tool Steel Ring Standard

The tool steel ring is a commonly used reference standard for magnetic
particle test systems, but it essentially indicates only particle sensitivity. It is used
with both dry and wet media. A sample picture of the ring is shown below.
The ring standard is used by passing a specified DC through a conductor, which in turn
passes through the ring's center. The test system is evaluated on the basis of the number
of holes detected at various current levels. The number of holes that should be
detected is given in the table below:
Hole No.   Hole diameter, mm (in)   Distance from edge to hole center, mm (in)
1          1.78 (0.07)              1.8  (0.07)
2          1.78 (0.07)              3.6  (0.14)
3          1.78 (0.07)              5.3  (0.21)
4          1.78 (0.07)              7.1  (0.28)
5          1.78 (0.07)              9.0  (0.35)
6          1.78 (0.07)              10.7 (0.42)
7          1.78 (0.07)              12.5 (0.49)
8          1.78 (0.07)              14.2 (0.56)
9          1.78 (0.07)              16.0 (0.63)
10         1.78 (0.07)              17.8 (0.70)
11         1.78 (0.07)              19.6 (0.77)
12         1.78 (0.07)              21.4 (0.84)
Table: Test indications required when using the tool steel ring
standard

Type of magnetic     Current in    Minimum number of
particle used        amperes       holes to be detected
**Wet suspension     1400          6
                     2500          7
                     3400          7
*Dry powder          1400          7
                     2500          9
                     3400          9
Note: * Full-wave DC at central conductor; ** Visible or fluorescent
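The acceptance table above can be encoded as a simple lookup. The structure below is illustrative, not part of any standard; it only answers whether a ring test met the minimum hole count for a given particle type and amperage.

```python
# Minimum ring-standard holes that must show indications, keyed by
# (particle type, full-wave DC amperes at the central conductor),
# per the acceptance table above.
MIN_HOLES = {
    ("wet", 1400): 6, ("wet", 2500): 7, ("wet", 3400): 7,
    ("dry", 1400): 7, ("dry", 2500): 9, ("dry", 3400): 9,
}

def ring_test_passes(particle_type, amperes, holes_detected):
    """True if the number of holes detected meets the tabulated minimum."""
    return holes_detected >= MIN_HOLES[(particle_type, amperes)]

print(ring_test_passes("wet", 2500, 7))  # True  (minimum is 7)
print(ring_test_passes("dry", 3400, 8))  # False (minimum is 9)
```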
Measuring Magnetic Fields
When performing a magnetic particle inspection, it is very important to be able to
determine the direction and intensity of the magnetic field. As discussed previously, the
direction of the magnetic field should be between 45 and 90 degrees to the longest
dimension of the flaw for best detectability. The field intensity must be high enough to
cause an indication to form, but not too high or nonrelevant indications may form that
could mask relevant indications. To cause an indication to form, the field strength in the
object must produce a flux leakage field that is strong enough to hold the magnetic
particles in place over a discontinuity. Flux measurement devices can provide important
information about the field strength.
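The 45 to 90 degree guideline above can be expressed as a trivial check. The function name is ours and the check is only a screening aid, not a substitute for verifying field strength.

```python
def orientation_favorable(field_to_flaw_deg):
    """True when the magnetic field lies 45 to 90 degrees to the flaw's
    longest dimension, the window quoted above for best detectability."""
    return 45.0 <= field_to_flaw_deg <= 90.0

print(orientation_favorable(90))  # True  (field perpendicular to flaw)
print(orientation_favorable(30))  # False (flaw too nearly parallel to field)
```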

Since it is impractical to measure the actual field strength within the material, all the
devices measure the magnetic field that is outside of the material. There are a number
of different devices that can be used to detect and measure an external magnetic field.
The two devices commonly used in magnetic particle inspection are the field indicator
and the Hall effect meter, which is also often called a Gauss meter. Pie gages and
shims are devices that are often used to provide an indication of the field direction and
strength but do not actually provide a quantitative measure.

Pie Gages and Raised Cross Indicators
Pie gages are disks of high-permeability material divided into triangular segments
separated by known gaps. The gaps are typically filled with a nonmagnetic material.
The pie gage contains 8 segments, separated by gaps up to 0.75 mm, which run to the
full depth of the material. Raised cross indicators contain 4 gaps (in the shape of a
cross) approximately 0.13 mm (0.005") in width. The segments are cut away so that the
known gap is raised a fixed distance off the test object's surface. Both of these devices
are used to determine the approximate orientation of the field and, to a limited extent,
to indicate the adequacy of the field strength. However, they do not measure the
internal field strength of the object. The presence of multiple gaps at different
orientations helps reveal the approximate orientation of the magnetic field. Slots
perpendicular to the flux lines produce distinct indications, while those lying parallel
to the magnetic flux give little or no indication.

Shim Discontinuity Standards

Shim discontinuity indicators are thin foils of highly permeable material containing
well-controlled notch discontinuities. Frequently, multiple shims are used at different
locations and different orientations on the test object to examine the field distribution.
One popular version of the shim indicator is a strip containing 3 slots of different widths.
The strip is placed in contact with the test object surface and shares the flux with the
test object. The principal limitation of this standard is that it requires a 50 mm gage
length. Shims are most often used while preparing test procedures, where they help in
validating a particular test configuration. Once the field distribution is found adequate,
the testing procedure is recorded and the components are tested with the parameters
established by the shims.

Field Indicators

Field indicators are small mechanical devices that utilize a soft
iron vane that is deflected by a magnetic field. The X-ray
image below shows the inside workings of a field meter viewed
from the side. The vane is attached to a needle that rotates and
moves the pointer along the scale. Field indicators can be adjusted
and calibrated so that quantitative information can be obtained.
However, the measurement range of field indicators is usually
small due to the mechanics of the device. The one shown to the
right has a range from plus twenty gauss to minus twenty gauss.
This limited range makes them best suited for measuring the
residual magnetic field after demagnetization.

Hall-Effect (Gauss/Tesla)Meter

A Hall-effect meter is an electronic device that provides a digital readout of the


magnetic field strength in Gauss or Tesla units. The meters use a very small conductive

or semiconductor element at the tip of the probe. Electric current is passed through
the conductor. In a magnetic field, the field exerts a force on the moving electrons
that tends to push them to one side of the conductor. A buildup of charge at the sides
of the conductor balances this magnetic influence, producing a measurable voltage
between the two sides of the conductor. The presence of this measurable transverse
voltage is called the Hall effect, after Edwin H. Hall who discovered it in 1879.

The voltage generated, Vh, can be related to the external magnetic field by the following
equation:

Vh = I B Rh / b

Where:

Vh is the voltage generated.
I is the applied direct current.
B is the component of the magnetic field that is at a right angle to the direct current
in the Hall element.
Rh is the Hall coefficient of the Hall element.
b is the thickness of the Hall element.
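The Hall relation can be illustrated numerically. The function below simply evaluates Vh = I B Rh / b; the example values (drive current, field, Hall coefficient, element thickness) are made up for illustration and are not taken from any particular probe.

```python
def hall_voltage(i_amps, b_tesla, r_hall, thickness_m):
    """Vh = I * B * Rh / b for a Hall element (SI units)."""
    return i_amps * b_tesla * r_hall / thickness_m

# Hypothetical example: 10 mA drive current, 0.1 T field,
# Rh = 3e-4 m^3/C, 0.5 mm thick element.
vh = hall_voltage(10e-3, 0.1, 3e-4, 0.5e-3)
print(vh)  # 0.0006 (volts)
```

Note that Vh grows as the element gets thinner, which is why Hall probes use very thin sensing elements.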

Probes are available with either tangential (transverse) or axial sensing elements.
Probes can be purchased in a wide variety of sizes and configurations and with different
measurement ranges. The probe is placed in the magnetic field such that the magnetic
lines of force intersect the major dimensions of the sensing element at a right angle.

Discontinuity Standards

Production Test Parts with Discontinuities: A practical way to evaluate the performance
and sensitivity of dry or wet magnetic particles, or overall system performance, is to
use representative test parts with known discontinuities of the type and severity normally
encountered during actual production inspection. However, the usefulness of such
standards is limited because the orientation and magnitude of the discontinuities cannot
be controlled. If such parts are being used as reference, they should be thoroughly
cleaned and demagnetized after each usage. As an alternative, fabricated test parts
with discontinuities of varying degree and severity can be used to provide an indication
of the effectiveness of the particles used in inspection.

INTERPRETATION OF INDICATIONS

CLASSIFICATION OF INDICATIONS
Magnetic particle testing indications are classified as follows:

(a) Relevant Indications


(b) Nonrelevant Indications
(c) False Indications

NONRELEVANT INDICATIONS
Nonrelevant indications are true patterns caused by leakage fields that do not result from
the presence of flaws. Nonrelevant indications have several causes, and their indications
are fuzzy, like those of subsurface discontinuities. They should not be interpreted as
flaws and therefore require careful evaluation.

SOURCES FOR NONRELEVANT INDICATIONS


Particle patterns that yield Nonrelevant indications can be the result of many factors.
They include the following:

Particle Adherence Due to Excessive Magnetization


When a part is magnetized too strongly, leakage fields around sharp corners, ridges, or
other surface irregularities cause powder to adhere in these areas under longitudinal
magnetization. The use of too strong a current with circular magnetization can produce
indications of the flux lines of the external field. Both of the above phenomena are
recognized by experienced operators and can be eliminated by reducing the current and
retesting.

Mill Scale
Tightly adhering mill scale will cause particle buildup, not only because of mechanical
adherence, but also due to the difference in permeability between scale and the test
object. In most cases, this can be detected by a visual inspection prior to carrying out
magnetic particle inspection. Additional cleaning followed by retesting will confirm the
absence of true discontinuity.

Configurations
Configurations that result in a restriction of the magnetic field are a cause for this
type of nonrelevant indication. Typical restrictive configurations are internal notches
such as splines, threads, grooves for indexing, or keyways.

Abrupt Changes in Magnetic Properties


A typical source of this sort of nonrelevant indication is observed in testing welds.
Permeability differences, such as those between weld metal and base metal or between
two dissimilar metals, result in nonrelevant indications. The particles may be held
loosely or tightly, depending on the degree of change in permeability. It is necessary
for the inspector to have prior knowledge of these conditions.

Magnetic Writing
This is another form of nonrelevant indication. Magnetic writing is usually associated
with parts displaying good residual characteristics in the magnetized state. If such a
part contacts the sharp corner or edge of another part, the residual field is locally
reoriented, giving rise to a leakage field and consequently an indication. The point of a
common nail can, for example, be used to write on a part susceptible to magnetic
writing. Magnetic writing is not always easy to interpret, because the particles are
loosely held and the indications are usually fuzzy or intermittent in appearance. If
magnetic writing is suspected, the only recourse is to demagnetize the part and retest.
If the indication was due to magnetic writing, it will not reappear.
Techniques For Identifying Nonrelevant Indications

There are several techniques for distinguishing relevant from nonrelevant indications.
They are:

• Carrying out a visual inspection before the commencement of magnetic particle
testing, as this would eliminate indications due to the presence of mill scale or
surface roughness.

• A careful study of the part’s design or drawing, to readily locate the section
changes or shape constrictions.

• A confusing indication can always be demagnetized and retested.

• Careful analysis of the particle pattern. The particle pattern typical of a nonrelevant
indication is usually wide, loose, and lightly held, and is easily removed even
during continuous magnetization.

• Inspection supplemented by another NDT method, such as radiography or


ultrasonic testing, to verify the presence of subsurface discontinuities.

Treatment of Indications Believed to be Nonrelevant

Any indication believed to be nonrelevant shall be regarded as relevant
unless it is shown, by re-examination with the same method, by the use of another NDT
method, or by surface conditioning, that no unacceptable imperfection is present.

RELEVANT INDICATIONS

Relevant indications are caused by leakage flux emanating from actual
discontinuities. They are the result of errors made during or after metal
processing. They may or may not be considered defects.

Terminology
Discontinuity: any interruption in the normal physical structure or composition of a
part. It can also be termed an intentional or unintentional lack of continuity. If the
lack of continuity is intentional, as in the case of a design requirement, the
indications arising from it are termed nonrelevant indications. If the lack of continuity
is unintentional, the indications arising from it are termed relevant indications.
Examples of this type of indication are cracks, porosity, lack of fusion, lack of
penetration, etc.

Defect: any discontinuity that interferes with the service life or application of the
component. It can also be defined as an imperfection of sufficient magnitude to warrant
rejection of a part with respect to standards.
Classification of Indications

Relevant indications are further classified as either linear or rounded.

A linear indication is one having a length greater than three times its width. A
rounded indication is one having a length equal to or less than three times its
width. A rounded indication need not be perfectly round; it may be circular or
elliptical in shape.

An indication is the evidence of a mechanical imperfection. Only indications that have
any dimension greater than 1/16" (1.6mm) shall be considered relevant. Any
questionable or doubtful indication shall be reexamined to determine whether or not
they are relevant.
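The classification rules above can be sketched as a short function: an indication is relevant only if some dimension exceeds 1/16" (1.6 mm); a relevant indication is linear when its length exceeds three times its width, otherwise rounded. The function name is ours and the sketch ignores the re-examination step for doubtful indications.

```python
def classify_indication(length_mm, width_mm):
    """Classify a magnetic particle indication per the rules above."""
    if max(length_mm, width_mm) <= 1.6:
        return "not relevant"      # no dimension exceeds 1/16 in (1.6 mm)
    if length_mm > 3 * width_mm:
        return "linear"            # length greater than 3x width
    return "rounded"               # length <= 3x width

print(classify_indication(6.0, 1.0))  # linear
print(classify_indication(3.0, 2.0))  # rounded
print(classify_indication(1.0, 0.5))  # not relevant
```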

ACCEPTANCE STANDARDS

Evaluation involves determining whether an indication will be detrimental to the service
of the part. It is a judgment based on well-defined accept/reject standards that may
be either written or verbal.
General Evaluation Rules

Everything that has been said in this discussion thus far has emphasized the fact that
general rules for evaluation cannot be wholly laid down. These are not necessary for an
evaluator with sufficient knowledge and experience. Sometimes, however, inspectors are
called upon to make decisions regarding the seriousness of a defect. Hence it is the
inspector's responsibility to be aware of the general considerations that will be of great
use in demanding conditions. As a guide for inspectors, a few basic considerations are
set forth below:

(a) A discontinuity of any kind lying at the surface is more likely to be harmful than
one of the same size and shape lying wholly below the surface. The deeper it
lies, the less harmful it is considered to be.
(b) Any discontinuity having a principal dimension or plane that lies at right angles
or at a considerable angle to the principal tensile stress, whether surface or
subsurface, is more likely to be harmful than a defect of the same size,
location, and shape lying parallel to the tensile stress.
(c) Any discontinuity lying in an area of high tensile stress should be considered
more serious than a defect of the same size located in an area of low
tensile stress.
(d) Discontinuities that are sharp at the bottom, such as grinding cracks, are
severe stress-raisers and are therefore more harmful in any location. Such
defects are highly likely to propagate under severe load conditions.
(e) Any discontinuity occurring close to a keyway or other design stress-raiser is
likely to compound the latter's effect and must be considered more harmful than
one of the same size and shape occurring away from such a location.
INTERPRETATION OF PATTERNS

The shape, sharpness of outline, width, and height of the particle buildup are the
principal features by which discontinuities can be identified and distinguished from
each other.
Surface Cracks

Powder patterns for surface cracks are sharply defined, tightly held and usually built up
heavily. The deeper the crack, the heavier the buildup of the indication. Crater cracks
are recognized by a small indication at the terminal point of the weld. The indication
may be single line or multiple or star-shaped.

Incomplete Fusion

The accumulation of powder will generally be pronounced and located at the edge of the
weld. The closer the incomplete fusion is to the surface, the sharper the pattern.

Undercut

A pattern is produced at the weld edge that adheres less strongly than the indications
obtained from an incomplete fusion. Undercut can also be detected by visual
examination.

Subsurface Discontinuities

The powder patterns have a fuzzy appearance and are not clearly defined. They are
neither strong nor pronounced, yet they are readily distinguished from the indications of
surface conditions.

Slag Inclusions

A fuzzy pattern, similar to that of a subsurface discontinuity or porosity, appears when
slag inclusions are present and a high magnetizing field is used.

Seams

The indications are straight, sharp, and often intermittent. Buildup is small. A
magnetizing current greater than required for the detection of the cracks is necessary.

Examples of Magnetic Particle Indications

When magnetic particle inspection is used, cracks on the surface of the part appear as
sharp lines that follow the path of the crack. Flaws that exist below the surface of the
part are less defined and more difficult to detect. Some examples of magnetic particle
indications are given below.

Magnetic particle wet fluorescent indication of a crack in a drive shaft
Magnetic particle wet fluorescent indication of
a crack in the crane hook

Magnetic particle dry powder indication of a crack in a saw blade
The figure above shows three pictures of the same crack. The first shows the original size of the
crack; in the second, the crack is closed by a metal-smearing operation.

CODES, STANDARDS, SPECIFICATIONS AND PROCEDURES


CODE

A code is a comprehensive document relating to all aspects such as design, material,
fabrication, construction, erection, maintenance, quality control, and
documentation for specific industrial sectors such as pressure vessels, aircraft, etc.
Codes are prepared by professional bodies or government agencies on a specific
subject. For some activities, such as design, calculation, material specification, and NDT,
codes may refer to standards, which are independent and parallel documents.

As for NDT, codes should indicate the application of NDT methods towards inspection,
the need for NDT and the acceptance limits.

STANDARDS

The codes will often refer to the standards which are more specific documents giving
the details of how a particular operation is to be carried out. These documents take into
account the technological levels and the operational skills of the operators, in laying
down the requirements of the standards. For example, in regard to Radiography testing
or any other NDT method, test results are greatly dependent on the skill of the
personnel. Hence, the procedures and specifications for testing and evaluation must be
standardized in accordance with the requirements such that the results will be least
affected by the difference in the skill of the personnel.

Standards are documents prepared by a body of professionals or government agencies


in a specific subject. As the name indicates, standards attempt to
standardize a material or activity. The standards-making body takes into account
industrial requirements and prepares standards in such a way that a few standards can
fit a variety of applications.

SPECIFICATIONS

The document that prescribes in detail, the requirements with which the product or
service has to comply is the specification. The specification is of paramount importance
in the achievement of quality. In many cases, poor products or services are a result of
inadequate or ambiguous or improper specification. For a product to be manufactured
and operated properly there will be different specifications like material specification,
process specification, inspection specification, acceptance specification, installation
specification, maintenance specification, disposal specification etc. The specification
may be evolved by national bodies or by the manufacturer through his own experience.

PROCEDURES

These are the last level documents for any process, service, method etc. to be adopted
in the shop floor. The procedures are written giving all the specific details pertaining to
the activity so that the personnel in the shop floor can follow with ease. No changes are
allowed to be made without the approval from the authorized person in the
organization.

ASME BOILER AND PRESSURE VESSEL CODE

The ASME Boiler and Pressure Vessel Code was initially enacted in 1914. It was
established by a committee set up in 1911 with members from utilities, state insurance
companies, and manufacturers. Whether or not it is adopted in the USA is left to the
discretion of each state and municipality. In any event, its effectiveness in reducing
human casualties due to boiler accidents since its adoption is widely
recognized.

The ASME Boiler and Pressure Vessel Code has 11 sections, of which Section V deals with
Nondestructive Testing. It is divided into subsections: Subsection A contains the
code articles for the various NDT methods, whereas Subsection B deals with various
standards of testing. These standards become mandatory when they are specifically
referenced, in whole or in part, in Subsection A.

After the initial revision, ASME issues revisions once every three years. One feature
of the code is that partial revisions are issued twice a year: the summer addenda (July
1st) and the winter addenda (January 1st). These addenda are effective upon
issuance. Any question about the interpretation of rules may be submitted to the
company in the form of letter of enquiry and the answers from the company will be
published as code cases from time to time.

CONSTITUTION OF ASME CODE

Rules for nondestructive testing are collectively prescribed in Section V. The other sections
for each component (Sections I, II, III, and VIII) refer to Section V or other applicable rules
for examination methods, and to SNT-TC-1A (ASNT Recommended Practice) for
qualification of nondestructive examination personnel. Acceptance criteria specified in each
section are sometimes quoted from ASTM.

Acceptance as per ASME-BOILER & PRESSURE VESSEL CODE

The acceptance criteria, as per Appendix 6, Methods for Magnetic Particle Examination,
Section VIII, are as follows:
All surfaces to be examined shall be free of:
(a) Relevant linear indications;
(b) Relevant rounded indications greater than 3/16" (4.8mm);
(c) Four or more relevant rounded indications in a line separated by 1/16" (1.6mm) or
less, edge to edge;
(d) An indication of an imperfection may be larger than the imperfection that
causes it; however, the size of the indication is the basis for acceptance.

Acceptance as per ASME B31.1 – ASME Code for Pressure Piping

The acceptance criteria are given in Chapter VI – Examination, Inspection, and Testing.

136.4.3 Magnetic Particle Examination shall be performed in accordance with the methods
of Article 7, Section V, of the ASME Code.

Linear indications are those in which the length is more than three times the width;
rounded indications are those which are circular or elliptical with the length less than three
times the width.

The following relevant indications are unacceptable:


(B.1) Any cracks or linear indications
(B.2) Rounded indications with dimensions greater than 3/16" (5.0mm)
(B.3) Four or more rounded indications in a line separated by a distance 1/ 16"(2.0mm)
or less, edge to edge.
(B.4) Ten or more rounded indications in any 6 square inches of surface, with the
major dimension of this area not to exceed 6", the area being taken in the most
unfavorable location relative to the indications being evaluated.

Acceptance as per API 1104 – Welding of Pipelines and


Related Facilities

Section 6 - Acceptance Standards for Nondestructive Testing
6.5 Classification of Indications

6.5.1 Indications produced by liquid penetrant inspection are not necessarily defects.
Machining marks, scratches, and surface conditions may produce indications
that are similar to those produced by discontinuities but that are not relevant to
acceptability. The following criteria apply when indications are evaluated.
6.5.1.2 Any indication with a maximum dimension of 1/16 inch (1.59mm) or less shall
be classified as nonrelevant. Any larger indication believed to be nonrelevant
shall be regarded as relevant until reexamined by penetrant inspection or by
any other nondestructive examination method to determine whether or not an
actual discontinuity exists. The surface may be ground or otherwise conditioned
before re-examination.
6.5.1.3 Relevant indications are those caused by actual discontinuities. Linear
indications are those in which the length is more than three times the width.
Rounded indications are those in which the length is three times the width or less.
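The classification rules in 6.5.1.2 and 6.5.1.3 reduce to a simple decision on two measurements. The sketch below is illustrative only (the function and variable names are hypothetical; dimensions are in inches):

```python
def classify_indication(length_in, width_in):
    """Classify an indication per the 6.5.1 rules described above."""
    major = max(length_in, width_in)
    # 6.5.1.2: a maximum dimension of 1/16 in. (1.59 mm) or less is nonrelevant
    if major <= 1 / 16:
        return "nonrelevant"
    # 6.5.1.3: linear if the length is more than three times the width
    if length_in > 3 * width_in:
        return "relevant-linear"
    return "relevant-rounded"

print(classify_indication(0.05, 0.01))  # nonrelevant
print(classify_indication(0.50, 0.10))  # relevant-linear (5:1 aspect ratio)
print(classify_indication(0.20, 0.15))  # relevant-rounded
```

The caveat in 6.5.1.2 still applies: any larger indication believed to be nonrelevant must be treated as relevant until re-examined.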

6.5.2 ACCEPTANCE STANDARDS

Relevant indications shall be unacceptable when any of the following conditions
exists:

(a) Linear indications evaluated as crater cracks or star cracks that exceed 5/32"
(3.96mm) in length.
(b) Linear indications evaluated as cracks other than crater cracks or star
cracks.
(c) Linear indications evaluated as incomplete fusion exceeding 1 inch
(25.4mm) in total length in a continuous 12 inch (304.8mm) length of weld,
or 8% of the weld length.

(d) Rounded indications shall be evaluated as follows:


Porosity- Individual or scattered porosity shall be unacceptable when any of
the following conditions exists:
a. The size of an individual pore exceeds 1/8 inch (3.17mm).
b. The size of an individual pore exceeds 25% of the nominal wall
thickness joined.
c. Cluster porosity (CP) that occurs in any pass except the finish pass shall
comply with the above-mentioned dimensions. CP that occurs in the
finish pass shall be unacceptable when any of the following conditions
exists:
i. The diameter of the cluster exceeds 1/2" (12.7mm).
ii. The aggregate length of CP in any 12" continuous weld
length exceeds 1/2".
iii. An individual pore within a cluster exceeds 1/16" in size.
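For individual or scattered porosity, the two numeric limits above can be checked together; a minimal sketch (hypothetical helper name, dimensions in inches):

```python
def pore_acceptable(pore_dia_in, wall_thickness_in):
    """True when an individual pore passes both limits above:
    not over 1/8 in. and not over 25% of the nominal wall thickness joined."""
    return pore_dia_in <= 1 / 8 and pore_dia_in <= 0.25 * wall_thickness_in

print(pore_acceptable(0.10, 0.500))  # True: within both limits
print(pore_acceptable(0.10, 0.250))  # False: exceeds 25% of wall thickness
```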
DEMAGNETIZATION

Demagnetization is a process of removing magnetism from a ferromagnetic material.


Ferromagnetic materials are characterized by relative ease of magnetization when
exposed to a magnetizing force. Once magnetized, the material retains some amount of
magnetism even after the magnetizing force is removed. This leftover field in the
material is referred to as the residual field or residual
magnetism. The magnitude of this residual field is a function of the following factors:

(a) The magnetic characteristics of the material.


(b) The immediate history of the materials magnetization.
(c) The strength of the applied magnetic field.
(d) The direction of magnetic field, whether circular or longitudinal.
(e) The test objects geometry.
Characteristics of Residual Magnetic Field

(a) The residual field is in the same direction as the original magnetic field.
(b) The residual field is weaker than the original field.
(c) The original magnetizing force causes the residual field.
(d) When an article has been magnetized in more than one direction, the second field
applied will completely overcome the first. However, this is only true if the
second field is stronger than the first in magnitude.
This field may be negligible in soft materials, but in harder materials it may be
comparable to the intense fields associated with the special alloys used for permanent
magnets. Although it is time consuming and represents additional expense, the
demagnetization of parts is sometimes necessary in many cases. Demagnetization may
be easy or difficult depending on the type of material. Metals having high coercive force
are difficult to magnetize and once they are magnetized, it is equally difficult to remove
the residual field from it.

Requirement for Demagnetization


Components that retain a relatively strong residual field can be a source to various
problems during subsequent processing of the material and its service life.
Demagnetization may be necessary for the following reasons:
(a) The part will be used in an area where a residual magnetic field will interfere
with the operation of instruments that are sensitive to magnetic fields. It may
also affect the accuracy of instrumentation incorporated in an assembly that
contains the magnetized part.
(b) Residual magnetism does not affect the mechanical properties of the part, but can
attract metal chips, filings, scale, or other loose magnetic particles to the
surface being machined. This will adversely affect the surface finish, dimensions,
and tool life.
(c) During cleaning operations, chips created by machining operations may
adhere to the surface of the part and seriously interfere with subsequent
painting or coating operations.
(d) Abrasive particles that may be attracted to parts such as bearing races, gear
teeth, and bearing surfaces may lead to abrasion or may obstruct oil holes
or grooves.
(e) During some electric arc-welding operations, strong residual fields may deflect
the arc away from the point where it should be applied.
(f) Finally, a residual field will interfere with re-magnetization of the part at a
field intensity too low to overcome it.

The residual fields may sometimes be allowed to remain in the part, without
demagnetizing it. The reasons for not demagnetizing being:

(a) Parts made of magnetically soft materials do not retain residual magnetism, as
they have low retentivity.
(b) If the subsequent manufacturing process calls for the object to be heated above
the Curie point, the material will readily be demagnetized as it loses all its
magnetic properties.
(c) If the part does not require additional machining and its intended function is not
compromised by the presence of a residual field, then demagnetization
becomes unnecessary.
(d) The part is to be re-magnetized for further magnetic particle inspection or for
some secondary operation in which a magnetic plate or chuck may be used to
hold the part.
(e) Finally, demagnetization is only required if specified in the drawings,
specifications, or procedures.
Types of Residual Fields
Longitudinal magnetic fields

Material magnetized by a coil or a solenoid can sometimes be left with a longitudinal
residual field. The field is oriented lengthwise in the object, with a high
concentration of emergent field at each end, which constitutes the poles. These fields may
be easily detected by measuring devices such as a gauss meter, or by their attraction of other
ferromagnetic materials. While this type of field adversely affects subsequent
machining processes, it usually responds strongly to demagnetization.

Circular Magnetic Fields

Unlike longitudinal fields, circular residual fields offer little or no external evidence of
their presence. The flux may be entirely confined within the part, depending to some
extent on part geometry and magnetizing procedures. Without special equipment,
demagnetization of a circular residual field is very difficult. Confirmation of an adequate
demagnetization level is another additional problem. Leakage field measuring devices
are ineffective since there is absence of external field. In such cases, reorientation of a
circular field into a longitudinal field prior to demagnetization may be advantageous in
some instances.

Summary of Demagnetization Procedures


Alternating Current Demagnetization
The fastest and simplest method of demagnetization is to pass current through a
high-intensity AC coil and slowly withdraw the part from the coil. A coil of 5000 –
10000 ampere-turns at line frequency (50 – 60 Hz) is recommended. The part to be
demagnetized should enter the coil from a distance of 12" and move through it steadily
and slowly until the piece is 36" beyond the coil. The operation is repeated until all of
the residual magnetism is removed. The strength of the field is gradually reduced
to zero as the object exits the coil and reaches a point beyond the influence of the
coil's field. Demagnetization of smaller parts can be achieved by rotating and tumbling
them while passing them through the field of the coil.
Direct Current Demagnetization
Reversing Direct Current Contact Coil Method

This method is usually applied to large test objects that have been magnetized using a
DC source. It is also applicable where AC demagnetization procedures prove
ineffective. The method requires high values of direct current or full-wave rectified AC
directed to a coil or plate. There must also be provision for reversing the
polarity and reducing the amplitude to zero. Although fewer steps may yield
satisfactory results, greater reliability is achieved by using about 30 reversals with
current reductions approaching zero asymptotically.
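The reversal-and-reduction sequence can be pictured numerically. The sketch below is illustrative only (the step count and decay factor are assumptions, not equipment settings); it alternates polarity each step while the amplitude decays toward zero:

```python
def demag_schedule(start_amps, steps=30, decay=0.8):
    """Generate a reversing, diminishing current schedule: polarity
    alternates each step while the amplitude decays toward zero."""
    current = start_amps
    schedule = []
    for step in range(steps):
        sign = 1 if step % 2 == 0 else -1
        schedule.append(sign * current)
        current *= decay
    return schedule

seq = demag_schedule(1000.0)
print(seq[:4])            # first steps: +1000, -800, +640, -512 A
print(abs(seq[-1]) < 2.0) # after 30 steps the amplitude is near zero
```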
Reversing Cable Wrap Method

This method is used to demagnetize objects too large or heavy to process on a


horizontal wet testing unit. The object to be demagnetized is wrapped with multiple
turns of high-amperage flexible cable connected to a stationary DC power pack. The
current is alternately reversed in direction and reduced in amplitude through multiple
steps until it reaches zero.

Pulsating Reversing Method

A high amperage DC coil demagnetizer has been designed to produce alternate pulses
of positive and negative current. The pulses are generated at fixed amplitude and a
repetition rate of 5 – 10 cycles per second. This permits relatively small objects to be
demagnetized by the through-coil method. The object is subjected to a constantly
reversing magnetic field as it passes through the coil and the effective field is reduced
to zero as the test object is gradually withdrawn from the coil. The low repetition rate
substantially reduces the skin effect with a corresponding increase in the magnetic field
penetration.
Demagnetization With Yokes

AC yokes may be used for local demagnetization by placing the poles on the surface
and moving them around the area and slowly withdrawing the yoke, while it is still
energized.

Residual Field Measurement

After complete demagnetization, the residual field should not exceed
3 gauss (240 A/m) anywhere in the piece. In order to maintain the
recommended limits of residual field in the material, measurement of the level of
residual field is necessary. This is achieved through a Residual Field Meter, commonly
known as Gauss Meter. The main purpose of this device is to measure the relative
strength of magnetic leakage fields. Leakage field measurements are undertaken to
ascertain the level of residual magnetic fields emanating from the test object. It consists
of an elliptical vane, which is attached to a pointer that is free to move. A rectangular
permanent magnet is attached in a fixed position directly above the soft iron vane.

Because the vane is under the influence of the magnet, it tends to align its long axis in
the direction of the leakage field emanating from the magnet. In doing so, the vane
becomes magnetized in a fixed position. In the absence of external magnetic fields, the
pointer reads zero on the graduated scale. When the north pole of the residually
magnetized object is moved closer to the pivot end of the pointer, the south pole of the
vane is attracted towards the object and the north pole of the magnet is repelled. The
resulting torque causes the pointer to move in the positive (+) direction.

The relative strength of the residual field is measured by bringing the indicator near the
object and noting the deflection of the pointer. The edge of the pivot end of the pointer
should be closest to the object under investigation. To increase the accuracy and
repeatability of such measurements, it is good practice to isolate the device from
extraneous magnetic fields; if such fields magnetize the vane, the sensitivity of the
device becomes substantially reduced.
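Checking a set of meter readings against the 3 gauss limit mentioned above is a one-line test; a minimal sketch with a hypothetical helper name:

```python
def demag_ok(readings_gauss, limit=3.0):
    """True when every residual-field reading is within the stated limit."""
    return all(abs(g) <= limit for g in readings_gauss)

print(demag_ok([0.5, -1.2, 2.8]))  # True: all readings within 3 gauss
print(demag_ok([0.5, 4.1]))        # False: 4.1 gauss exceeds the limit
```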
POST EXAMINATION CLEANING

Particles, if allowed to remain on the test surface, can cause difficulty in
subsequent processes such as painting, coating, or even shot-blasting (when a
wet medium is used). Hence it is recommended to remove the magnetic
particles after the inspection; this is referred to as post-cleaning.
Means of Particle Removal

Particles can be removed by the following methods:


(a) Use of compressed air to blow off the excess particles
(b) Drying of wet particles and subsequent brushing or compressed air blow off.
(c) Removal of wet medium by use of a solvent.
(d) Any other means of particle removal that does not interfere with subsequent
requirements can be used.
CHAPTER 2 - VARIOUS NDT METHODS

2.1 Magnetic Particle Testing:


Magnetic particle testing may be applied to detect cracks and other
discontinuities on or near the surface of ferromagnetic
materials. The sensitivity is greatest for surface discontinuities and
diminishes rapidly with increasing depth of subsurface
discontinuities. Typical discontinuities that can be detected by this
method are cracks, laps, cold shuts, seams, and laminations.
In principle, it involves magnetizing the area to be examined and
applying ferromagnetic particles to the surface. These particles
form patterns on the surface where cracks and other discontinuities cause
distortions in the normal magnetic field. Whichever technique is used
to produce magnetic flux in the part, maximum sensitivity will be to
linear discontinuities oriented perpendicular to the lines of flux.
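The orientation dependence can be pictured with a toy model: if the flux leakage driving an indication is assumed to scale with the sine of the angle between the discontinuity and the flux lines (an idealization for illustration, not a formula from this text), sensitivity peaks for flaws perpendicular to the flux:

```python
import math

def relative_sensitivity(angle_deg):
    """Toy model: leakage assumed proportional to |sin(angle to flux)|."""
    return abs(math.sin(math.radians(angle_deg)))

for angle in (0, 30, 60, 90):
    print(angle, round(relative_sensitivity(angle), 2))
# 0 deg (parallel to flux) gives 0.0; 90 deg (perpendicular) gives 1.0
```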

2.2 RADIOGRAPHY TESTING:

Basic principle of radiography involves the use of penetrating
radiations, which are made to pass through the material under inspection
and the transmitted radiation is recorded in film.
Radiographic inspection can be applied to almost any material. It uses
radiation from isotopic sources or X-radiation that penetrates through
the job and produces an image on the film. The amount of radiation
transmitted through the material depends on the density and
thickness of the material. As material thickness increases, radiography becomes
less sensitive as an inspection method.
Surface discontinuities that can be detected by this method include
undercut, longitudinal grooves, incomplete filling of grooves, excessive
reinforcement, overlap, concavity at the root, etc. Subsurface
discontinuities include gas porosity, slag inclusions, cracks, inadequate
penetration, incomplete fusion, etc.

LIQUID PENETRANT TESTING


The liquid penetrant examination method is effective for
detecting discontinuities that are open to the surface of nonporous

metals and other materials. Typical discontinuities detected by this
method are cracks, seams, cold shuts, laminations, and porosity.
In principle, a liquid penetrant is applied to the surface of the
specimen to be examined and some time is allowed for the penetrant to
enter into the discontinuities.
All excess penetrant is then removed and a developer is applied to
the surface. The developer functions both as a blotter, to absorb
the penetrant from the discontinuities, and as a means of providing a
contrasting background for meaningful interpretation of indications.

2 Basic Processing Steps of a Liquid Penetrant Inspection

2.1.1 Surface Preparation: One of the most critical steps of a liquid


penetrant inspection is the surface preparation. The surface must be
free of oil, grease, water, or other contaminants that may prevent
penetrant from entering flaws. The sample may also require etching
if mechanical operations such as machining, sanding, or grit blasting
have been performed. These and other mechanical operations can
smear the surface of the sample, thus closing the defects.
2.1.2 Penetrant Application: Once the surface has been thoroughly
cleaned and dried, the penetrant material is applied by spraying,
brushing, or immersing the parts in a penetrant bath
2.1.3 Penetrant Dwell: The penetrant is left on the surface for a sufficient
time to allow as much penetrant as possible to be drawn from or to
seep into a defect. Penetrant dwell time is the total time that the
penetrant is in contact with the part surface. Dwell times are usually

recommended by the penetrant producers or required by the
specification being followed. The times vary depending on the
application, penetrant materials used, the material, the form of the
material being inspected, and the type of defect being inspected.
Minimum dwell times typically range from 5 to 60 minutes.
Generally, there is no harm in using a longer penetrant dwell time as
long as the penetrant is not allowed to dry. The ideal dwell time is
often determined by experimentation and is often very specific to a
particular application.
2.1.4 Excess Penetrant Removal: This is the most delicate part of the
inspection procedure, because the excess penetrant must be removed
from the surface of the sample while removing as little penetrant as
possible from defects. Depending on the penetrant system used, this
step may involve cleaning with a solvent, direct rinsing with water,
or first treating with an emulsifier and then rinsing with water.
2.1.5 Developer Application: A thin layer of developer is then applied to
the sample to draw penetrant trapped in flaws back to the surface
where it will be visible. Developers come in a variety of forms that
may be applied by dusting (dry powdered), dipping, or spraying (wet
developers).
2.1.6 Indication Development: The developer is allowed to stand on the
part surface for a period of time sufficient to permit the extraction
of the trapped penetrant out of any surface flaws. This development
time is usually a minimum of 10 minutes and significantly longer
times may be necessary for tight cracks.
2.1.7 Inspection: Inspection is then performed under appropriate lighting
to detect indications from any flaws which may be present.
2.1.8 Clean Surface: The final step in the process is to thoroughly clean
the part surface to remove the developer from the parts that were

found to be acceptable.

Common Uses of Liquid Penetrant Inspection

Liquid penetrant inspection (LPI) is one of the most
popularity can be attributed to two main factors, which are its
relative ease of use and its flexibility. LPI can be used to inspect
almost any material provided that its surface is not extremely rough
or porous. Materials that are commonly inspected using

LPI include the following:


• Metals (aluminum, copper, steel, titanium, etc.)
• Glass
• Many ceramic materials
• Rubber
• Plastics

LPI offers flexibility in performing inspections because it can be


applied in a large variety of applications ranging from automotive spark
plugs to critical aircraft components. Penetrant material can be applied
with a spray can or a cotton swab to inspect for flaws known to occur in a

specific area, or it can be applied by dipping or spraying to quickly inspect
large areas; for example, visible dye penetrant may be locally applied to a highly
loaded connecting point to check for fatigue cracking. Liquid penetrant
inspection is used to inspect for flaws that break the surface of the sample.

Some of these flaws are listed below:


• Fatigue cracks
• Quench cracks
• Grinding cracks
• Overload and impact fractures
• Porosity
• Laps
• Seams
• Pin holes in welds
• Lack of fusion or brazing along the edge of the bond line

As mentioned above, one of the major limitations of a penetrant


inspection is that flaws must be open to the surface.

2.2 Advantages and Disadvantages of Penetrant Testing


Like all nondestructive inspection methods, liquid penetrant
inspection has both advantages and disadvantages. The primary
advantages and disadvantages when compared to other NDE methods
are summarized below.

Primary Advantages

• The method has high sensitivity to small surface discontinuities.
• The method has few material limitations; i.e., metallic and nonmetallic,
magnetic and nonmagnetic, and conductive and nonconductive materials
may be inspected.

• Large areas and large volumes of parts/materials can be inspected
rapidly and at low cost.
• Parts with complex geometric shapes are routinely inspected.
• Indications are produced directly on the surface of the part and
constitute a visual representation of the flaw.
• Aerosol spray cans make penetrant materials very portable.
• Penetrant materials and associated equipment are relatively
inexpensive.

Primary Disadvantages

• Only surface-breaking defects can be detected.
• Only materials with a relatively nonporous surface can be inspected.
• Pre-cleaning is critical, as contaminants can mask defects.
• Metal smearing from machining, grinding, and grit or vapor blasting
must be removed prior to LPI.
• The inspector must have direct access to the surface being inspected.
• Surface finish and roughness can affect inspection sensitivity.
• Multiple process operations must be performed and controlled.
• Post cleaning of acceptable parts or materials is required.
• Chemical handling and proper disposal is required.

Penetrant Properties:

The industry and military specifications that control the penetrant
materials and their use all stipulate certain physical properties of the
penetrant materials that must be met. Some of these requirements address
the safe use of the materials, such as toxicity, flash point, and
corrosiveness; other requirements address storage and contamination
issues. Still others delineate properties that are thought to be primarily

responsible for the performance or sensitivity of the penetrant. The
properties of penetrant materials that are controlled by AMS 2644 and
MIL-I-25135E include flash point, surface wetting capability, viscosity,
color, brightness, ultraviolet stability, thermal stability, water tolerance,
and removability.
Surface Energy (Surface Wetting Capability): As previously
mentioned, one of the important characteristics of a liquid penetrant
material is its ability to freely wet the surface of the object being
inspected. At the liquid-solid surface interface, if the molecules of the
liquid have a stronger attraction to the molecules of the solid surface than
to each other (the adhesive forces are stronger than the cohesive forces),
then wetting of the surface occurs. Alternately, if the liquid molecules are
more strongly attracted to each other and not the molecules of the solid
surface (the cohesive forces are stronger than the adhesive forces), then
the liquid beads-up and does not wet the surface of the part.
Density or Specific Gravity: The density or the specific gravity of a
penetrant material probably has a slight to negligible effect on the
performance of a penetrant. The gravitational force acting on the penetrant
liquid can be working in cooperation with or against the capillary force
depending on the orientation of the flaw during the dwell cycle. When the
gravitational pull is working against the capillary rise the strength of the
force is given by the following equation:
Force = πr²hρg

where r = radius of the crack opening, h = height of penetrant above its
free surface, ρ = density of the penetrant, and g = acceleration due to gravity.
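Plugging illustrative values into the expression above shows how small this gravitational term is. The numbers below are assumptions chosen for the example, not data from this text:

```python
import math

# Assumed illustrative values: a 0.05 mm crack opening, a 10 mm penetrant
# column, and an oil-like penetrant density.
r = 0.05e-3   # crack opening radius, m
h = 10e-3     # height of penetrant above its free surface, m
rho = 870.0   # penetrant density, kg/m^3
g = 9.81      # acceleration due to gravity, m/s^2

# Gravitational force term from the formula Force = pi * r^2 * h * rho * g
force = math.pi * r**2 * h * rho * g
print(f"{force:.3e} N")  # on the order of microNewtons or less
```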

Viscosity: Viscosity has little effect on the ability of a penetrant to enter
a defect, but it does affect the speed at which the penetrant fills a
defect.
Capillarity: The ability of a liquid to rise or fall in narrow openings is
due to capillary action. It can be demonstrated by a simple experiment using
two tubes of differing cross-section immersed in a container of water: the
level of water in the thinner tube rises higher. A liquid of low viscosity
also penetrates more quickly into narrow openings. Cavities that offer narrow
openings, such as tight cracks or hairline fatigue cracks, are therefore best
detected with a penetrant test system, since the penetrant is drawn readily
into narrow openings. When the experiment is repeated with a liquid such as
mercury, the level of mercury falls in the thinner tube. The height to which
the liquid rises is determined largely by the surface tension and the wetting
ability of the liquid, and the lifting ability due to capillary action
increases as the diameter of the bore decreases. Capillary forces are lower
in a closed tube than in an open tube because of the air trapped in the
former; this can be compared with a discontinuity that is closed at one end.
The trapped air is dissolved by the penetrant and diffuses out at the surface.
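The rise height itself follows Jurin's law, h = 2γ·cos θ / (ρgr), which is not stated in the text but is consistent with its observations: the rise increases as the bore narrows, and a non-wetting liquid such as mercury (contact angle greater than 90°) gives a negative rise, i.e. a fall. A minimal sketch with assumed property values:

```python
import math

def capillary_rise(surface_tension, contact_angle_deg, density, radius, g=9.81):
    """Jurin's law: h = 2*gamma*cos(theta) / (rho * g * r).
    A negative h (cos(theta) < 0, e.g. mercury in glass) means the
    liquid level falls in the tube, as described in the text."""
    theta = math.radians(contact_angle_deg)
    return 2.0 * surface_tension * math.cos(theta) / (density * g * radius)

# Assumed values for water in glass: gamma ~ 0.072 N/m, theta ~ 0 deg.
for r in (1e-3, 1e-4):  # 1 mm vs 0.1 mm bore radius
    h_mm = capillary_rise(0.072, 0.0, 1000.0, r) * 1000.0
    print(f"r = {r} m -> rise = {h_mm:.1f} mm")
```

Halving the bore radius doubles the rise, which mirrors the statement that lifting ability increases as the diameter decreases.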
Fluidity: The ability of a liquid to flow is termed fluidity. The penetrant
should drain away from the component surface well, but without being dragged
out of the defects.
Surface tension: The force acting per unit length of an imaginary line drawn
on the surface of the liquid, normal to it, is called surface tension; it is
the property that makes a liquid behave like a stretched membrane. Surface
tension plays an important role in the effectiveness of a penetrant.
High-surface-tension liquids are usually excellent solvents and will easily
dissolve the dyes. However, low-surface-tension liquids provide the
penetrating power and spreading properties necessary for a good
penetrant.
Volatility: Penetrants should be essentially non-volatile liquids. A small
amount of evaporation at the discontinuity can help intensify dye brilliance
and prevent excessive spreading of indications; however, low volatility is
desirable to minimize losses from evaporation of penetrant stored in open
tanks.
Flammability: Penetrants should have a high flash point as a matter of safety
in use. The flash point is defined as the lowest temperature at which the
liquid gives off sufficient vapor to flash when a small flame is passed
across its surface. The flash point of a penetrant should not be less than
135 °F (52 °C).
Chemical activity: By chemical activity we mean the ability of the penetrant
to cause corrosion on the metals being tested. This is due to the presence of
halogens, a highly reactive group of elements that includes chlorine,
fluorine, bromine, and iodine. Virtually no penetrant is entirely free of
halogens, and any residue can cause corrosion of the metal surface if not
removed; this is the primary reason why a post-cleaning operation is
essential in penetrant testing. Consequently, penetrants containing halogens
are usually restricted on austenitic steels, titanium, and other high-nickel
alloys.
Drying characteristics: The penetrant must resist drying out and complete
bleed-out during hot-air drying of the component after the wash operation has
been completed. Ideally, heat should aid the penetrant by promoting its
return to the component surface, producing a sharply defined indication.
Techniques for Standard Temperatures
As a standard technique, the temperature of the penetrant and the surface of
the part to be processed shall be neither below 50 °F (10 °C) nor above
125 °F (52 °C) throughout the examination period. Local heating or cooling is
permitted provided the part temperature remains within this range during the
examination. Where it is not practical to comply with these temperature
limitations, the examination procedure described below is proposed for higher
or lower temperature ranges and requires qualification. This requires the use
of a quench-cracked aluminum block, designated a Liquid Penetrant Comparator
Block.

The block shall be made of aluminum, ASTM B 209, Type 2024, 3/8 in. thick,
with approximate face dimensions of 2 in. x 3 in. (51 mm x 76 mm). At the
center of each face, an area approximately 1 in. in diameter shall be marked
with a 950 °F temperature-indicating crayon or pencil. The marked area shall
be heated with a blowtorch, a Bunsen burner, or a similar device to a
temperature between 950 °F and 975 °F (510 °C and 524 °C). The specimen shall
then be immediately quenched in cold water, which will produce a network of
fine cracks on each face.
The block shall then be dried by heating to approximately 300 °F. After
cooling, the block shall be cut in half, and the halves designated "A" and
"B". If it is desired to qualify a penetrant examination procedure at a
temperature of less than 60 °F (16 °C), the proposed procedure shall be
applied to block "B", and a standard procedure previously demonstrated as
suitable for use shall be applied to block "A". If the indications obtained
under the proposed conditions on block "B" are essentially the same as those
obtained on block "A" during
examination in the 50 °F to 125 °F range, the proposed procedure shall be
considered qualified for use.
These requirements are as per T-652, Article 6, Sec V of the ASME Boiler and
Pressure Vessel Code.
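A procedure writer can encode the standard temperature window as a simple check. The sketch below is illustrative only (the function name and structure are not from any code requirement); the 50 °F to 125 °F limits are those quoted above:

```python
def standard_temperature_ok(part_temp_f, penetrant_temp_f):
    """Check the standard-technique window of 50 F to 125 F for both
    the part surface and the penetrant (per the text above). Outside
    this window, the procedure must be qualified on a quench-cracked
    aluminum comparator block."""
    def in_range(temp_f):
        return 50.0 <= temp_f <= 125.0
    return in_range(part_temp_f) and in_range(penetrant_temp_f)

print(standard_temperature_ok(70, 72))   # within the standard window
print(standard_temperature_ok(140, 72))  # outside: requires qualification
```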

Characteristics of a good penetrant:


1. Readily penetrate into fine openings.
2. Ability to remain in relatively coarse openings.
3. Be applied and removed easily.
4. Bleed from discontinuities when developer is applied.
5. Be inert with respect to materials being tested and to the containers.
6. Odorless.
7. Exhibit stability under conditions of storage and use.
8. Non-flammable.
9. Non-toxic.
10. Low in cost.
11. Have permanence of color when exposed to heat and light.
A penetrant that essentially meets all of the above requirements must be
quite a sophisticated formulation.

PRECLEANING

Adequate pre-cleaning of work pieces prior to penetrant inspection is
absolutely necessary for accurate results. Without adequate removal of
surface contamination, indications from discontinuities may be missed. Hence
much of the success and reliability of inspection depends on the
thoroughness of the pre-cleaning process. If any contaminant, the liquid used
for cleaning, or any deposit produced by that liquid fills the cracks, the
discontinuity will not be detected. The ideal surface
preparation, therefore, is one which leaves the surface and the flaw in a
clean and dry condition.

Reasons for Pre-cleaning include:


1. Contaminants may stop the penetrant from entering the flaw.
2. The penetrant may lose its ability to identify the flaw because it has
already reacted with some other substance.
3. The surface immediately surrounding the flaw may retain enough penetrant
to mask the true appearance of the flaw.

Types of contaminant includes, but not limited to:


1. All purposely applied paints and coatings, such as plating or sprayed
coatings.
2. Scale from heat treatment or welding.
3. Welding flux and weld spatter.
4. Brazing flux or stop-off, and burrs.
5. Dirt or dust settled on the surface of the job.
6. Oils, greases, and other chemical impurities.
7. Polishing compounds and other metal chips, typically from the machine
shop.
8. Residues of metal-finishing processes such as phosphating, oxidizing,
anodizing, chromating, electrodeposition, and metallizing.
9. Any other substance which may prevent a penetrant from entering a flaw.
These contaminants are collectively referred to as "soils".

CLEANING METHODS
Cleaning methods are generally classified as chemical, mechanical, solvent,
or combinations thereof. The cleaning methods and the types of contaminant
they remove are discussed below:
MECHANICAL METHODS

Abrasive tumbling:
Removes light scale, burrs, welding flux, braze stop-off, rust, and casting
mold and core material. Should not be used on soft metals such as aluminum,
magnesium, or titanium.

Dry abrasive grit blasting:


Removes light or heavy scale, flux, stop-off, rust, casting mold or core
material, sprayed coatings, and carbon deposits; in general, any friable
deposit.

Wet abrasive grit blasting:


Same as dry, except used where deposits are light and a better surface
finish and closer control of dimensions are required.
Wire brushing:
Removes light deposits of scale, flux, and stop-off. This method is
generally avoided because smeared metal chips may close the discontinuity.

High-pressure water and steam:
Ordinarily used with an alkaline cleaner or detergent. Removes typical
machine shop soils such as cutting oils, polishing compounds, grease, chips,
and deposits from electrical discharge machining. Used when surface finish
must be maintained. Inexpensive.
Ultrasonic cleaning:
Ordinarily used with detergent and water or with a solvent. Removes adherent
shop soils from large quantities of small parts.
Mechanical methods should be used with care, because they often mask flaws
by smearing adjacent metal over them or by filling them with abrasive
material. This is more likely to happen with soft metals than with hard
metals. Before deciding on a specific method, it is therefore good practice
to try the method on known flaws to ensure that it will not mask true flaws.
CHEMICAL METHODS

Alkaline cleaning:
Removes typical machine shop soils such as cutting oils, polishing
compounds, grease, chips, and carbon deposits. Ordinarily used on large
articles where hand methods are too laborious.

Acid cleaning:
Typically known as "etching", this uses acid to remove scale and metal
smeared by machining operations. A concentrated solution is recommended for
removing heavy scale, a mild solution for light scale, and a weak solution
for removing lightly smeared metal. Besides the above methods, molten salt
bath cleaning is also used for conditioning and removing heavy scale. A
chemical cleaning method should be chosen carefully to ensure that neither
the braze nor the components of the assembly are attacked.

SOLVENT CLEANING METHODS

Vapor Degreasing:
Removes typical shop soil, oil, and grease. Usually employs chlorinated
solvents; not suitable for titanium.
Solvent Wiping:
Same as vapor degreasing, except that it is a hand operation and may employ
non-chlorinated solvents. Used for localized, low-volume cleaning.
The surface finish of the work piece must always be considered. When further
processing such as machining or polishing is scheduled, an abrasive cleaning
method is frequently a good choice. Generally, chemical cleaning methods
degrade the surface finish less than mechanical methods. The choice of a
cleaning method is based on such factors as:
(1) The type of contaminant to be removed, since no single method removes
all contaminants equally well.
(2) The effect of the cleaning method on the part.
(3) The practicality of the cleaning method for the part; for example, a
large part cannot be put into a small degreaser or ultrasonic cleaner.
(4) Specific cleaning requirements of the purchaser.

Prior to each liquid penetrant examination, the surface to be examined and
all adjacent areas within at least 1 in. (25 mm) shall be dry and free of
all dirt, grease, oil, and other impurities. Residues from cleaning
processes, such as strong alkalis, pickling solutions, and chromates in
particular, may react adversely with the penetrant and reduce its
sensitivity and performance.
Drying after Preparation
After cleaning, drying of the surfaces shall be accomplished by
normal evaporation or with forced hot or cold air. A minimum period of
time shall be established to ensure that the cleaning solution has
evaporated prior to the application of the penetrant.

PENETRANT SYSTEMS
A number of penetrant types or classes have been developed over
the years, to cater for the wide variety of inspection conditions that occur
in practice. The main types of penetrant are classified based on
(1) Interpretation of indications
(2) Type of removal process
The dye in the penetrant is the prime constituent that aids the visible
interpretation of indications. The dye may be of the color contrast type,
the fluorescent type, or a combination of both. A color contrast penetrant
contains a brightly colored dye, usually red, that is highly visible under
normal lighting conditions.
A fluorescent dye, on the other hand, is an almost colorless dye that emits
visible light when viewed under a source of ultraviolet light. The lamp that
aids the interpretation of fluorescent indications is termed a black light.
A dual-sensitivity penetrant contains both a visible dye for examination
under normal light and a fluorescent dye for a more sensitive evaluation of
small discontinuities. Penetrants can be further categorized by the process
used to remove the excess penetrant from the surface of the specimen. The
excess penetrant can be removed in four different ways:
(a) By water only.
(b) By a liquid solvent.
(c) By water, followed by a penetrant removal solution, which is
watersoluble (hydrophilic), followed by water.
(d) By an emulsifier which is oil-soluble (lipophilic), followed by water.

Thus, based on the above mentioned processes, the penetrant are


classified into the following types, namely:
1. Water-washable penetrants, which are self-emulsifying and removable with
plain water.
2. Post-emulsifiable penetrants, which require a separate emulsifier to
make the penetrant water-washable.
3. Solvent-removable penetrants, which must be removed with a solvent;
typical when using color contrast dyes in pressurized spray cans.
Water-Washable Penetrant
This system (using either a fluorescent or a visible dye penetrant) is
designed so that the penetrant can be removed directly from the component
surface by washing with water. The process is thus rapid and efficient. It
is extremely important, however, to maintain a controlled washing operation,
especially where removal of the excess penetrant is by water sprays. A good
system optimizes processing conditions such as water pressure, temperature,
duration of the rinse cycle, surface condition of the work piece, and the
inherent removal characteristics of the penetrant. Even so, it is possible
for penetrant to be washed out of small defects.

Advantages of Water-washable systems


1. Easily washed with water.
2. Good for large numbers of small specimens.
3. Good on rough surfaces.
4. Good on keyways and threads.
5. Good for a wide range of discontinuities.
6. Relatively fast, single-step process.
7. Relatively inexpensive.
8. Available in oxygen-compatible form.

Disadvantages of Water-washable systems
1. Not reliable for detecting scratches and similar shallow surface
discontinuities.
2. Repeatability on specimens is not reliable.
3. Not reliable on anodized surfaces.
4. The presence of acids and chromates affects the sensitivity of the
system.
5. Indications can easily be over-washed.
6. There is a high chance of contamination of the penetrant by water.

Post-Emulsification Penetrant System


When it is necessary to detect minute defects, high-sensitivity penetrants
that are not water-washable are usually employed. Such penetrants have an
oil base and require an additional processing step: an emulsifier is applied
after the penetrant has had sufficient time to be absorbed by the defects.
The emulsifier renders the excess penetrant soluble in water and hence
capable of being rinsed away. Penetrant within the flaws is not affected,
provided the process is carefully controlled. Emulsifiers usually contain
organic solvents, some of which may have a petroleum base. They are of two
types: hydrophilic and lipophilic emulsifiers.
Hydrophilic emulsifiers are made of a non-ionic surfactant concentrate,
which may be in the form of a dry powder or a concentrated liquid. They must
be dissolved or diluted in water carefully, as per the manufacturer's
recommendations; the concentration may vary from 5 to 50 percent emulsifier
in water. Hydrophilic (water-based) emulsifiers depend on detergent or
dissolving action, so they need a forceful water spray to strip away the
excess surface penetrant. Lipophilic emulsifiers are oil-based; their action
is chiefly by diffusion into the surface penetrant, at a rate that varies
with the viscosity of the emulsifier. Emulsifiers with a viscosity of about
100 mm²/s react slowly, the time
required being 2 to 4 minutes, whereas emulsifiers with a viscosity of 30 to
50 mm²/s react relatively faster, requiring up to 2 minutes.
Emulsification dwell time begins as soon as the emulsifier is applied. The
length of time that the emulsifier is allowed to remain on the work piece,
in contact with the penetrant, depends mainly on the type of emulsifier
(fast-acting or slow-acting, water-based or oil-based) and on the surface
roughness of the component under inspection. Recommendations from the
manufacturer serve only as guidelines; the optimum time for a specific work
piece must be determined experimentally. The period ranges from a few
seconds to several minutes, typically 15 s to 4 min, although a maximum time
of 5 min is established by some specifications.
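The viscosity rule of thumb for lipophilic emulsifiers quoted above can be captured in a small helper. This is only a sketch of the quoted guideline (the function name and the zero lower bound on the fast-acting band are illustrative choices); actual emulsification times must be established experimentally and per the manufacturer's data:

```python
def lipophilic_dwell_band(viscosity_mm2_s):
    """Rough dwell-time band in minutes for a lipophilic emulsifier,
    following the rule of thumb quoted in the text. Returns None when
    the viscosity falls outside the quoted data."""
    if viscosity_mm2_s >= 100:
        return (2.0, 4.0)   # slow-acting, high-viscosity emulsifier
    if 30 <= viscosity_mm2_s <= 50:
        return (0.0, 2.0)   # fast-acting; text gives only "up to 2 minutes"
    return None             # outside the quoted data: consult the maker

print(lipophilic_dwell_band(100))
print(lipophilic_dwell_band(40))
```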

Emulsification time is critical in determining the sensitivity of the
inspection. If the emulsification time is too short, not all of the excess
penetrant is removed, which may lead to misinterpretation of indications. If
it is too long, penetrant within the discontinuity also becomes
water-washable and is rinsed off the surface along with the excess
penetrant; this could cause even a relevant indication to be missed.

Advantages of Post-emulsifiable penetrant systems

1. High sensitivity to very fine discontinuities.


2. Highly preferable for wide and shallow discontinuities.
3. Penetrant easily washed with water after emulsification
4. Penetrant inside cavities is not over-washed.
Disadvantages of Post-emulsifiable penetrant systems
1. It is a two-step process and requires additional time to make the
penetrant water-washable.
2. Separate emulsifiers are required, which raises the cost of inspection.
3. Removing penetrant from keyways, threads, blind holes, and rough
surfaces becomes a cumbersome operation.

Solvent Removable Penetrant System

Occasionally, it is necessary to inspect only a small area of a work piece,
or to inspect a work piece on site rather than at a regular inspection
station. For such situations, solvent-removable penetrants are available.
Normally, the same type of solvent is used both for pre-cleaning and for the
removal of excess penetrant. The process is convenient and broadens the
range of applications of penetrant inspection. Solvent-removable penetrants
have an oil base. Optimum solvent removal is accomplished by wiping off as
much penetrant as possible with a paper towel or lint-free cloth, then
wiping off what remains with a clean cloth slightly dampened with solvent;
a final wipe with a dry paper towel or cloth is required. The penetrant may
also be removed by flooding the surface with solvent, in the same manner as
for water-washable penetrants. The flooding technique is particularly useful
for large work pieces, but it must be used carefully to prevent removal of
penetrant from the flaws. The solvent-removable system is used mainly for
special applications; because it involves too much labor, it is not
practical for production applications. Properly performed, the
solvent-removable system can be one of the most sensitive liquid penetrant
systems.
Solvent Removers (Cleaners)
Solvent removers, sometimes referred to as cleaners, differ from emulsifiers
in that they remove excess surface penetrant through direct solvent action:
the solvent remover dissolves the penetrant.
There are two types of solvent removers: flammable and non-flammable.
Flammable cleaners are free from halogens but are potential fire hazards.
Non-flammable cleaners are usually halogenated solvents, which renders them
unsuitable for certain applications, usually because of their high toxicity.
The most important precaution when using solvent-removable penetrants is
never to apply the solvent directly to the test piece: flushing the surface
with solvent after the application of the penetrant and prior to development
is prohibited, as it will dilute the penetrant inside discontinuities.

Advantages of solvent-removable systems


1. Portability.
2. Good for spot-checking.
3. No water required.
4. Good on anodized specimens.
5. Excellent repeatability.

Disadvantages of solvent-removable system

1. May contain flammable materials.


2. Removal of excess penetrant is time consuming.
3. Materials cannot be stored in open tanks as they are volatile.
4. Difficult to use on rough surfaces such as cast magnesium.
Selection of Penetrant Test Systems

Fluorescent Penetrants:

Water-washable Penetrant:

(1) Inspecting large volumes of parts.


(2) For detecting discontinuities that are not wider than their depth.
(3) For materials with rough surfaces.
(4) Excellent for inspection of threads and keyways.

Post-Emulsifiable Penetrants:


(1) Large volumes of parts.
(2) Requirement of high sensitivity.
(3) Parts contaminated with acid or harmful chemicals, which will harm the
materials.
(4) Parts which may have defects contaminated with in-service soils.
(5) For detection of stress corrosion, inter-granular, and grinding cracks.

Visible Dye Penetrant:

(a) Water-washable Penetrants:

(1) When lowest sensitivity is required.


(2) When large volumes of parts are to be inspected.

(b) Post-Emulsifiable Penetrants:

(1) More sensitivity is required.
(2) Inspection of large volumes of parts, when time is not a constraint.

(c) Solvent Removable penetrant:

(1) Spot inspection.


(2) Where water washing is not feasible because of part size, weight, or
surface condition.
(3) When inspecting small volumes of parts.

The appropriate process to be used for any specific application is based on:
1. Extent of flaw sensitivity required.
2. Surface finish of the component.
3. Compatibility of the materials with the component.
4. The size, shape and accessibility of the area to be inspected.
5. The ultimate use of the component.

The equipment available can be divided into three types as follows;

1. Portable kits for carrying out inspection of small areas, for use on
site; these are often contained in pressurized aerosol cans.
2. Fixed installations are used for testing components on a continuous
basis, with a series of processing stations in sequential order to form a
flow line. Increasingly, these have automated component handling and timing.
3. Self-contained processing booths are used for testing components which
cannot be moved for testing.
Many of the materials used in penetrant inspection are potential fire
hazards and may be toxic. Hence, necessary safety precautions should be
taken on any installation.

Classification of Penetrant Test Systems

Penetrants are classified as per SE-165 of Sec V of the ASME Boiler and
Pressure Vessel Code as follows:

Type I – Fluorescent Penetrant Examination

Method A – Water-washable Penetrants


Method B – Post-emulsifiable, lipophilic Penetrants
Method C – Solvent Removable Penetrants
Method D – Post-emulsifiable, hydrophilic Penetrants

Type II – Visible Penetrant Examination

Method A – Water-washable Penetrants
Method B – Solvent Removable Penetrants
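The classification above can be represented as a simple lookup table. The sketch below mirrors the SE-165 type/method pairings just listed; the dictionary layout, function name, and error handling are illustrative choices, not part of the code itself:

```python
# Penetrant classification per SE-165 (ASME Sec V), as tabulated above.
CLASSIFICATION = {
    ("I", "A"):  "Fluorescent, water-washable",
    ("I", "B"):  "Fluorescent, post-emulsifiable (lipophilic)",
    ("I", "C"):  "Fluorescent, solvent-removable",
    ("I", "D"):  "Fluorescent, post-emulsifiable (hydrophilic)",
    ("II", "A"): "Visible, water-washable",
    ("II", "B"): "Visible, solvent-removable",
}

def describe(ptype, method):
    """Return the system description, or raise for an invalid pairing
    (e.g. there is no Type II Method C or D in this classification)."""
    try:
        return CLASSIFICATION[(ptype, method)]
    except KeyError:
        raise ValueError(f"No Type {ptype} Method {method} in SE-165") from None

print(describe("I", "D"))
```

Note that the visible (Type II) family has only two methods, so a lookup like this also serves as a validity check on a procedure's type/method designation.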

Technique Restrictions
Fluorescent penetrant examination should not follow a color contrast
penetrant examination. Intermixing of penetrants from different families or
from different manufacturers is not permitted. A retest with a
water-washable penetrant may cause loss of marginal indications due to
reduced sensitivity.
Selection of a Penetrant Technique
The selection of a liquid penetrant system is not a straightforward
task. There are a variety of penetrant systems and developer types that are
available for use, and one set of penetrant materials will not work for all
applications. Many factors must be considered when selecting the
penetrant materials for a particular application. These factors include the
sensitivity required, materials cost, number of parts and size of area
requiring inspection, and portability. When sensitivity is the primary
consideration for choosing a penetrant system, the first decision that must
be made is whether to use fluorescent dye penetrant, or visible dye
penetrant. Fluorescent penetrants are generally more capable of producing
a detectable indication from a small defect because the human eye is more
sensitive to a light indication on a dark background and the eye is naturally
drawn to a fluorescent indication. The graph below presents a series of
curves that show the contrast ratio required for a spot of a certain diameter
to be seen. The curves show that for indication spots larger than 0.076
mm (0.003 inch) in diameter, it does not really matter whether it is a dark
spot on a light background or a light spot on a dark background. However,
when a dark indication on a light background is further reduced in size, it
is no longer detectable even though contrast is increased. Furthermore,
with a light indication on a dark background, indications down to 0.003
mm (0.0001 inch) were detectable when the contrast between the flaw and
the background was high enough.
From this data, it can be seen why a fluorescent penetrant offers an
advantage over visible penetrant for finding very small defects. Data
presented by De Graaf and De Rijk supports this statement. They
inspected “Identical” fatigue cracked specimens using a red dye penetrant
and a fluorescent dye penetrant. The fluorescent penetrant found 60
defects while the visible dye was only able to find 39 of the defects. Under
certain conditions, the visible penetrant may be a better choice. When
fairly large defects are the subject of the inspection, a high sensitivity
system may not be warranted and may result in a large number of
irrelevant indications. Visible dye penetrants have also been found to give
better results when surface roughness is high or when flaws are located in
areas such as weldments. Since visible dye penetrants do not require a
darkened area for the use of an ultraviolet light, visible systems are more
easy to use in the field. Solvent removable penetrants, when properly
applied can have the highest sensitivity and are very convenient to use but
are usually not practical for large area inspection or in high-volume
production settings. Another consideration in the selection of a penetrant
system is whether water washable, post-emulsifiable or solvent removable
penetrants will be used. Post-emulsifiable systems are designed to reduce
the possibility of overwashing, which is one of the factors known to
reduce sensitivity. However, these systems add another step, and thus
cost, to the inspection process.
Penetrants are evaluated by the US Air Force according to the requirements
in MIL-I-25135, and each penetrant system is classified into one of five
sensitivity levels. This procedure uses titanium and Inconel specimens with
small surface cracks produced in low cycle fatigue
bending to classify penetrant systems. The brightness of the indications
produced after processing a set of specimens with a particular penetrant
system is measured using a photometer. Most commercially available penetrant
materials are listed in the Qualified Products List of MIL-I-25135 according
to their type, method, and sensitivity level. Visible dye and dual-purpose
penetrants are not classified into sensitivity levels as fluorescent
penetrants are; the sensitivity of a visible dye penetrant is regarded as
level 1 and depends largely on obtaining good contrast between the
indication and the background.

Limited Purpose Fluorescent Penetrant for Specialized Applications


(a) Red fluorescent penetrants are used for leak detection on welded tanks
and systems, where the red color makes differentiation easy.
(b) Yellow fluorescent oil-free penetrants are used when oils or petroleum
distillates are incompatible with the material being tested, e.g. plastics,
rubber, etc.
(c) Dry concentrates:
(1) Yellow: for use with water as the penetrant liquid for leak detection
where large volumes of liquid are needed, as in testing large tanks and
utility condensers.
(2) Red: for use with water, alcohol, or a similar hydrophilic liquid
combined as the penetrant liquid.
(3) Blue: concentrate with water as a penetrant. Provides a cheaper
penetrant when large volumes of parts are to be inspected together, with
marked differentiation.
(d) Yellowish-green oil: penetrant incorporated in refrigerator oil for
leak detection in refrigerators.
Penetrants are then classified based on the strength or detectability of
the indication that is produced for a number of very small and tight
fatigue cracks. The five sensitivity levels are shown below:
_ Level ½ - Ultra Low Sensitivity
_ Level 1 - Low Sensitivity
_ Level 2 - Medium Sensitivity
_ Level 3 - High Sensitivity
_ Level 4 - Ultra-High Sensitivity
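As a quick reference, the level names can be tabulated in a small, purely illustrative mapping (the function name and output format are assumptions, not from any specification):

```python
# Fluorescent penetrant sensitivity levels per MIL-I-25135, from the
# list above (0.5 stands for the "1/2" level).
SENSITIVITY = {0.5: "Ultra Low", 1: "Low", 2: "Medium",
               3: "High", 4: "Ultra-High"}

def sensitivity_name(level):
    """Return the level's name, raising for an unknown level number."""
    if level not in SENSITIVITY:
        raise ValueError(f"Unknown sensitivity level: {level}")
    return f"Level {level} - {SENSITIVITY[level]} Sensitivity"

print(sensitivity_name(0.5))
print(sensitivity_name(4))
```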

The major US government and industry specifications currently rely


on the US Air Force Materials Laboratory at Wright-Patterson Air Force Base
to classify penetrants into one of the five sensitivity levels. This
procedure uses titanium and Inconel specimens with small surface cracks
produced in low cycle fatigue bending to classify penetrant systems. The
brightness of the indication produced is measured using a photometer. The
sensitivity levels and the test procedure used can be found in Military
Specification MIL-I-25135 and Aerospace Material Specification 2644,
Penetrant Inspection Materials. An interesting note about the sensitivity
levels: only four levels were originally planned, but when some penetrants
were judged to have sensitivities significantly lower than most others in
the level 1 category, the ½ level was created.
Modes of Application
There are various methods of applying penetrant, such as dipping, brushing,
spraying, or flooding. Small parts are quite often placed in small baskets
and dipped into a tank of penetrant. On larger parts, and those with complex
geometries, penetrant can be applied effectively by brushing or spraying.
Both conventional and electrostatic spray guns are effective means of
applying penetrant to part surfaces. Electrostatic spray application can
eliminate excess liquid buildup of penetrant on the part, minimize
overspray, and minimize the amount of penetrant entering hollow-cored
passages which might otherwise serve as penetrant reservoirs and cause
severe bleed-out problems during examination. Aerosol cans are conveniently
portable and ideal for spot checking and local application. A word of
caution when using spray application: proper ventilation is important, and
is generally provided through properly designed spray booths and exhaust
systems.

Penetrant Dwell Time

Penetrant dwell time is the total time that the penetrant is in contact with
the part surface. The dwell time is important because it allows the

penetrant the time necessary to be drawn or to seep into a defect. Dwell
times are usually recommended by the penetrant producers or required by
the specification being followed. The time required to fill a flaw depends
on a number of variables which include the following:
• The surface tension of the penetrant.
• The contact angle of the penetrant.
• The dynamic shear viscosity of the penetrant, which can vary with
the diameter of the capillary. The viscosity of a penetrant in micro
capillary flaws is higher than its viscosity in bulk, which slows the
infiltration of the tight flaws.
• The atmospheric pressure at the flaw opening.
• The capillary pressure at the flaw opening.
• The pressure of the gas trapped in the flaw by the penetrant.
• The radius of the flaw or the distance between the flaw walls.
• The density or specific gravity of the penetrant.
• Microstructural properties of the penetrant.
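Several of the variables listed above (surface tension, contact angle, viscosity, and flaw radius) combine in the classical Washburn capillary-fill relation, t = 2ηL²/(γ·r·cos θ), which the text does not give explicitly. The sketch below uses assumed, penetrant-like property values purely for illustration; it ignores trapped gas and gravity:

```python
import math

def washburn_fill_time(depth_m, radius_m, surface_tension,
                       contact_angle_deg, viscosity_pa_s):
    """Washburn estimate of the time for a liquid to fill a capillary
    of the given radius to the given depth:
        t = 2 * eta * L^2 / (gamma * r * cos(theta))
    Illustrative only: ignores trapped gas and gravity."""
    theta = math.radians(contact_angle_deg)
    return (2.0 * viscosity_pa_s * depth_m**2
            / (surface_tension * radius_m * math.cos(theta)))

# Assumed penetrant-like properties: gamma ~ 0.03 N/m, eta ~ 5 mPa.s,
# contact angle ~ 10 deg; a crack 1 mm deep with a 1-micrometre opening.
t = washburn_fill_time(1e-3, 1e-6, 0.03, 10.0, 5e-3)
print(f"estimated fill time ~ {t:.2f} s")
```

The model shows why tighter flaws and more viscous penetrants need longer dwell times: halving the flaw radius or doubling the viscosity doubles the estimated fill time.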

The ideal dwell time is often determined by experimentation and is often
very specific to a particular application. For example, AMS 2647A requires
that the dwell time for all aircraft and engine parts be at least 20
minutes, while ASTM E1209 requires only a 5-minute dwell time for parts made
of titanium and other heat-resistant alloys. Generally, there is no harm in
using a longer penetrant dwell time as long as the penetrant is not allowed
to dry.
The following tables summarize the dwell time requirements of several
commonly used specifications. The information provided below is
intended for general reference and no guarantee is made about its
correctness or currency. Please consult the specifications for the actual
dwell time requirements.

Application of penetrant by brushing is shown in the figure below


Tight crack-like discontinuities may require in excess of 30 minutes of
penetration to give an adequate indication, whereas gross discontinuities
may be detected with the dwell times specified previously.
The temperature of the specimen and the temperature of the penetrant can
also affect the dwell time. Warming the specimen to about 70 °F (21 °C)
or higher accelerates penetration and shortens the dwell time. However,
care should be exercised not to overheat the specimen, as too much heat
can cause the penetrant to evaporate from the discontinuity. Dwell times
are based on the assumption that the penetrant will wet the part surface.
Additional penetrant may be permitted to be applied during dwell time.
The penetrant manufacturer will provide suggested dwell times for
various penetrants that they produce.

PENETRANT REMOVAL PROCESSES

Penetrant removal is an important step in the processing of parts for


inspections. Tight control of the various parameters must be maintained
to assure a good, reproducible result. Overwashing will remove
penetrant entrapped inside the discontinuities, while underwashing
will leave excess penetrant on the surface, resulting in excessive
background capable of masking relevant indications. The adequacy of the

removal process is ascertained through visual inspection during the
removal operation. With fluorescent penetrants, a black light is
focused on the area of removal while the operation is being performed.
With visible penetrants, the disappearance of the red color is generally
considered to indicate completion of the removal process.
Abnormal rinse pressure and lengthy rinse time should be avoided in order
to minimize the possibility of removing the penetrant from flaws.

Water-washable penetrants:

After the required penetration time, the excess penetrant is usually
removed by water. It can be washed off manually, by the use of
automatic or semi-automatic water-spray equipment, or by immersion.
Care should be taken to avoid accumulation of water pockets on the
surface of the material.

General considerations:

(a) The temperature of the water should be relatively constant and
should be maintained within the range of 50 to 100 °F (10 to 38 °C).
(b) Spray-rinse water pressure should not be greater than 40 psi (280 kPa).
(c) Rinse time should not exceed 120 s unless otherwise specified by
governing procedures.
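The limits in (a) to (c) can be collected into a simple pre-inspection check. The sketch below is illustrative only; the function name and return style are our own, not taken from any specification, and limits must always be verified against the governing procedure.

```python
# A minimal sketch of a pre-inspection check for the water-wash limits
# quoted above: water at 50-100 F, spray pressure not over 40 psi,
# rinse time not over 120 s.

def check_wash_parameters(water_temp_f, spray_psi, rinse_s):
    """Return a list of violations against the quoted limits (empty = OK)."""
    problems = []
    if not 50 <= water_temp_f <= 100:
        problems.append(f"water temperature {water_temp_f} F outside 50-100 F")
    if spray_psi > 40:
        problems.append(f"spray pressure {spray_psi} psi exceeds 40 psi")
    if rinse_s > 120:
        problems.append(f"rinse time {rinse_s} s exceeds 120 s")
    return problems

print(check_wash_parameters(75, 35, 90))    # within limits -> []
print(check_wash_parameters(110, 55, 150))  # all three limits violated
```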
Rinse Effectiveness:

If the final rinse step is not effective, as evidenced by excessive
residual penetrant on the surface after rinsing, the part should be
dried and re-cleaned, followed by re-application of the penetrant.

Post-emulsifiable penetrants (Lipophilic):

The part must be emulsified by flooding or immersing it in
emulsifier. Effective post-rinsing is accomplished by manual,
automatic, or semi-automatic spray equipment.

General considerations:
(a) The temperature of the water should be relatively constant and
should be maintained within the range of 50 to 100 °F (10 to 38 °C).
(b) Spray-rinse pressure should be in accordance with the
manufacturer's recommendations.
(c) Rinse time should not exceed 120 s unless otherwise specified
by governing procedures.

Rinse Effectiveness:
If the emulsification and final rinse steps are not effective, as
evidenced by excessive residual penetrant on the surface after rinsing,
the part should be dried and re-cleaned, followed by re-application of
the penetrant.
Post-emulsifiable Penetrant (Hydrophilic):

Directly after the required penetration time, it is recommended
that the parts be pre-rinsed with water prior to emulsification. This step
removes excess surface penetrant from the parts prior to
emulsification, minimizing penetrant contamination of the
hydrophilic emulsifier bath and thereby extending its life. In addition,
pre-rinsing of the penetrated parts minimizes possible oily penetrant
pollution in the final rinse step of this process. Either manual or automated
water spray rinsing of the parts achieves effective pre-rinsing.

General considerations:
(a) Water should be free of contaminants that could clog or block
spray nozzles or leave a residue on the surface of the part.
(b) The water temperature should be controlled within the range
of 50 to 100 °F (10 to 38 °C).
(c) Spray-rinse at a water pressure of 25 to 40 psi (175 to 275 kPa).
(d) Pre-rinse time should be as short as possible (normally 60 s
maximum) to leave a consistent residue of penetrant on the parts.
(e) Remove water trapped inside cavities using filtered shop air at
a nominal pressure of 25 psi, or use a suction device to remove water
from pooled areas.
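As a quick arithmetic check on the dual units quoted for the spray-rinse pressure, 1 psi equals 6.894757 kPa:

```python
# Quick arithmetic check of the dual units used above: 1 psi = 6.894757 kPa.

PSI_TO_KPA = 6.894757

for psi in (25, 40):
    print(f"{psi} psi = {psi * PSI_TO_KPA:.0f} kPa")
# -> 25 psi = 172 kPa and 40 psi = 276 kPa, close to the rounded
#    175 to 275 kPa range quoted in the text.
```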

Solvent Removable Penetrants:


After the required penetration time, the excess penetrant is removed
by wiping with a dry, clean, lint-free cloth, repeating the operation until
most traces of penetrant have been removed. The remaining traces are
then removed by wiping with a lint-free cloth lightly moistened with
solvent remover. Avoid the use of excess solvent.
Flushing the surface with solvent following the application of penetrant
and before the application of developer is strictly prohibited.

DRYING OF JOBS

Drying is important because water traces on the surface can prevent
proper absorption of the penetrant, and it is necessary prior to applying
non-aqueous developer or following the application of aqueous
developer. Drying time will vary with the size, nature, and number of
parts under examination.
Drying Modes:
Parts can be dried using a hot-air recirculating oven, a hot or cold air blast,
or by exposure to ambient temperature, particularly when excess

penetrant is removed with a solvent. Drying is best done in a
thermostatically controlled recirculating hot-air drier. Local heating or
cooling of parts is permitted, provided the temperature of the part remains
in the range of 50 to 100 °F for fluorescent methods, and 50 to 125 °F for
visible methods. Drying oven temperature should be carefully controlled
and should not exceed 160 °F (71 °C).

Drying Time Limits: The parts are not allowed to remain in the drying
oven any longer than is necessary to dry the surface. Normally, times over
30 min in the drier may impair the sensitivity of the examination.
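The drying limits above can be sketched as a single check: part temperature 50 to 100 °F for fluorescent methods and 50 to 125 °F for visible methods, oven air no hotter than 160 °F, and no more than 30 minutes in the drier. The limits mirror the text; the function and names are our own.

```python
# Illustrative check of the drying limits quoted above. The numeric
# limits come from the text; the structure is an assumption of ours.

PART_TEMP_LIMITS_F = {"fluorescent": (50, 100), "visible": (50, 125)}
OVEN_MAX_F = 160
MAX_DRY_MIN = 30

def drying_ok(method, part_temp_f, oven_temp_f, minutes):
    lo, hi = PART_TEMP_LIMITS_F[method]
    return lo <= part_temp_f <= hi and oven_temp_f <= OVEN_MAX_F and minutes <= MAX_DRY_MIN

print(drying_ok("fluorescent", 95, 150, 20))   # True
print(drying_ok("visible", 120, 150, 20))      # True - visible allows up to 125 F
print(drying_ok("fluorescent", 120, 150, 20))  # False - over the 100 F part limit
```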

DEVELOPERS

The primary function of the developer is to absorb or draw the


penetrant trapped in the discontinuities to the surface, thereby
increasing the visibility of flaw indications. The developer's action is
often compared to that of blotting paper, which draws ink to its
surface. The developer's action appears to be a combination of solvency
effect, absorption, and adsorption, by which it draws the penetrant to the
surface.

Physical Properties of a Developer

(a) High capillary efficiency.


(b) High light scattering efficiency
(c) Uniform well-dispersed micro-particles.
(d) High solid-gas interface surface tension
(e) Low contact angle between penetrant and solid surface.

There are four developer types in common use, namely dry, wet,
non-aqueous wet, and film. All of these are discussed at length below.
Developer Forms
The AMS 2644 and Mil-I-25135 classify developers into six
standard forms. These forms are listed below:
Form a - Dry Powder
Form b - Water Soluble
Form c - Water Suspendible
Form d - Nonaqueous Type 1 Fluorescent (Solvent Based)
Form e - Nonaqueous Type 2 Visible Dye (Solvent Based)
Form f - Special Applications
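The form letters above can be captured in a small lookup table, for example to tag inspection records by developer form. This is an illustrative sketch of ours, not part of either specification.

```python
# The AMS 2644 / MIL-I-25135 developer forms listed above, as a lookup.

DEVELOPER_FORMS = {
    "a": "Dry Powder",
    "b": "Water Soluble",
    "c": "Water Suspendible",
    "d": "Nonaqueous Type 1 Fluorescent (Solvent Based)",
    "e": "Nonaqueous Type 2 Visible Dye (Solvent Based)",
    "f": "Special Applications",
}

def describe_form(letter):
    """Return the description for a form letter, case-insensitively."""
    return DEVELOPER_FORMS.get(letter.lower(), "unknown form")

print(describe_form("c"))  # Water Suspendible
```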

The developer classifications are based on the method by which the
developer is applied. The developer can be applied as a dry powder, or
dissolved or suspended in a liquid carrier. Each of the developer forms
has advantages and disadvantages.
Dry Developers
Dry powders were the first developers to be used with fluorescent
penetrants, although the alcohol-whiting suspension had been used for
many years with the old kerosene-and-whiting method. Dry powders are
still widely used with fluorescent penetrants, but rarely with color-contrast
penetrants. The first dry powders used were simple chalk powders and
talc, which gave reasonably good results under most circumstances.
However, as penetrant inspection became more widely used, the action of
developers was studied more carefully, and the shortcomings of these
powders became more apparent. Later, much lighter amorphous silica
powders were used and proved superior in several ways. The best dry-powder

developers are combinations of powders that have been carefully selected
to maximize desired properties.

Desired Properties of Dry developers:

(a) Transparent to ultra-violet radiation.


(b) Should be white or essentially colorless.
(c) Should have uniform particle size.
(d) Low in bulk density.
(e) High in refractive index.
(f) Chemically inert.
(g) Non-toxic; free from sulfur and halogens.
(h) In some cases, should be non-hygroscopic (moisture repellent).

Ideally, dry developers should be light and fluffy and should cling
to the dry metallic surfaces in a fine film. However, adherence of powder
should not be excessive, because the amount of penetrant at fine flaws is
so small that it cannot work through a thick coating of powder. Also, the
powder should not float and fill the air with dust. Developer bins with dust
control systems should be used to minimize inhalation of dust. The color of
most dry developers is white, although sometimes, an identifying tinting
color is added, because the whiteness is only of real importance if used
with color-contrast visible dye penetrant. For fluorescent penetrants, any
tinting color added to the developer should be used in small amounts,
as many tinting additives quench the luminescence of fluorescent dyes.
Even a slight quenching of fluorescence at a marginal fine indication can
cause serious consequences.

Application and Removal of Dry powders

Hand processing equipment usually includes a developer station,
which for use with dry developers is an open tank. Work pieces are dipped
into the powder; or the powder is picked up with a scoop or with the hands
and applied onto the work piece. Excess powder is tapped off and
removed from the work piece. Other effective methods of application
include rubber spray bulbs and air-operated spray guns. An electrostatically charged powder
gun that can apply an extremely even and adherent coating of dry powder on
metal parts is also used. Another successful method of application utilizes
a low-pressure air system. The powder is contained in an air-agitated
pressure tank and delivered through a rubber hose, which is manipulated
by the operator to apply the powder over baskets of small pieces or over large individual
work pieces. The system works very well and readily deposits a light layer
of powder where it is needed. Fully automatic applicators have proved
successful in several applications. One type consists of a dust cabinet
into which, by means of air nozzles, developer powder is blown to form a dust
cloud. Work pieces are passed through this cloud and become coated with
powder. In another type, the bin contains powder and the air nozzles blow
the powder from the bottom of the bin into a dust cloud that coats a single
work piece, or baskets of work pieces, that have been set on a grill over
the bin. With either type of applicator, to prevent escape of dust into the
room, the cabinet or bin should not be opened until the powder has settled.
In many instances, the amount of powder adhering to the surface is so
small that removal of powder after testing is not necessary. This is
especially true with castings and other unfinished work pieces. In other
instances, removal of the powder is essential. Cleaning can be done by air
blasting or by water or solvent spraying, but the most effective method is
the use of a mechanical washer.

Wet Developers

Wet developers are of three types: suspensions of developer powder
in water (the most widely used); aqueous solutions of soluble salts; and
suspensions of powder in volatile solvents.

Water-suspendible Developers (Aqueous Developers)


They permit high-speed application of developer in mass inspection
of small to medium-sized pieces by the fluorescent method. A basket of
small, irregularly shaped work pieces that has gone through the steps of
penetrant application, penetrant dwell, and washing can be coated in one
quick dip in the water-suspendible solution. This also provides thorough and
complete coverage of all surfaces of the pieces inspected. Wet developer
is applied just after excess penetrant is washed away and immediately
before drying. After drying, the surfaces are evenly coated with a uniform
layer of developer. Developing time is shortened because the heat from
the drier helps to bring the penetrant back out of the surface openings.
Also, since the developer film is already in place, the developing action
proceeds at once, and thus better definition of flaw indications is obtained.
The material for water-suspendible developers is furnished as a dry
powder, which is added to water in recommended proportions, usually
from to one pound per gallon. However, some formulation is required
to make a good working suspension. Dispersing agents, agents to help
retard settling out and caking as well as corrosion inhibitors are necessary.
Wetting agents are also added, to ensure even and complete coverage of
surfaces that may retain some traces of oil. When fluorescent
penetrants are used, the dried coating of developer must not fluoresce,
nor should it absorb or filter out the black light used for inspection. This
requirement does not apply to the same degree to color-contrast penetrants.
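The powder concentration above is quoted in pounds of powder per US gallon of water. As a unit-conversion aside (1 lb = 453.592 g, 1 US gal = 3.78541 L), one pound per gallon is roughly 120 g/L. Always follow the manufacturer's recommended proportions rather than these illustrative numbers.

```python
# Unit-conversion aside for the concentration quoted above.

LB_TO_G = 453.592    # grams per pound
GAL_TO_L = 3.78541   # litres per US gallon

def lb_per_gal_to_g_per_l(lb_per_gal):
    return lb_per_gal * LB_TO_G / GAL_TO_L

print(f"1 lb/gal = {lb_per_gal_to_g_per_l(1):.0f} g/L")  # about 120 g/L
```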

Application and Removal of Water-suspendible developers

The outstanding advantage of this type of developer is that it is
easy to apply. Hand application by dipping or flowing from a nozzle is
common; sometimes a spray gun is also used. Care must be taken
to agitate the developer well, so that the particles remain in suspension.
Otherwise, control of coating thickness cannot be achieved satisfactorily.
Coatings of water-suspendible developers can be best removed by
washing with water. This can be done manually by water spraying or can
be done using detergent and mechanical washer. If allowed to remain
indefinitely on the surface of lighter metals such as aluminum, this type
of developer can cause corrosion and even surface pitting. Care should be
taken to maintain the amount of powder in the suspension. Too little
developer on the surface will lead to a reduction in sensitivity. Use of a
suspension containing too much powder can lead to difficulties other than
loss in sensitivity. Such a suspension sticks excessively in fillet pockets
and accumulates along the lower edge of work pieces. These conditions
not only interfere with proper inspection, but also make cleaning after
inspection, difficult.

Water-Soluble Developers
By using a material soluble in water, many of the problems inherent
in suspension-type wet developers can be avoided. Unfortunately, most
water-soluble developers are considered to be inferior to other types of
developers, as they produce dimmer indications, when used with
fluorescent penetrants. Proper wetting and resistance to corrosion are
primary concerns with water-soluble developers. The problem of
maintaining a suspension is, however, eliminated, but changes in
concentration due to evaporation must be controlled to ensure consistent
sensitivity. The soluble types are somewhat easier to remove than the
suspendible types.

Solvent-Suspendible Developers (Non-Aqueous Wet Developers)
The forerunner of solvent-suspendible developers was the
whiting-alcohol mixture of the old kerosene-and-whiting method. The solvent
technique is a very effective means of applying a smooth coating of
developer over the surface. Since the solvents used are moderately quick
drying, there is little running of developer even on vertical surfaces,
where a uniform coating is difficult to achieve. With fluorescent penetrants,
this type of developer is used primarily in portable kits, with spray cans.
It is seldom used for large jobs, but is universally used for color-contrast
penetrants. In the original kerosene-whiting technique, alcohol was used
as the solvent. Alcohol dries quickly and effectively removes oil and
grease, but has only limited ability to dissolve the penetrant. Present-day
solvent developers serve the same purpose and, in addition, dissolve
the penetrant, thereby helping to bring it out of the flaw.
On rough surfaces, solvent developers sometimes react unfavorably with
very brilliant fluorescent penetrants. They draw out small traces of
fluorescent substance remaining in rough spots and cause undesirable
over-all glow. The effect of solvents on sensitivity is controversial. It has
been found that solvent developers employing volatile solvents can be
very effective in revealing very fine cracks, if sprayed lightly and rapidly
over the surface. In this way, the surface is wetted only for a short period
of time. Penetrant is drawn out from the discontinuities, but the spreading
is minimized due to quick evaporation of the solvent. Solvent developers
are almost always premixed by the manufacturers to the optimum
concentration. The exact ratio of powder to solvent is not extremely
critical, but cans of mixed developer must be kept tightly closed to prevent
evaporation of solvent.
Application of Solvent-suspendible Developers

Solvent developers are sometimes applied with a paintbrush, but
this is likely to produce smeared indications; application by a pressurized
spray can is the preferred method. In large installations, air-spray guns have
been used. Excessive deposits must be avoided, because excessive
thickness of powder reduces the sensitivity. However, the coating must be
thick enough to provide an even, fairly opaque covering surface. Without
the white background, the color-contrast process will not give satisfactory
results, because of poor visibility. Because of the dense white coating left
by the developer, removal is usually required. The powder washes off
easily with water or solvent. Vapor degreasing is also a method of
removing the developer, but the powder often gets entrapped in the
nozzles of the equipment, and therefore requires frequent cleaning of the
equipment.
Liquid Film Developers
The film-type developer, as the name implies, forms a plastic film
over the surface as it dries. It is normally applied by spraying, as with the
non-aqueous wet type, the solvent carrier acting to draw the
penetrant into the film. As the film dries, the exposed penetrant indications
set in a pattern indicative of the discontinuities on the surface being
inspected. The film provides a permanent record of the discontinuity
pattern and can be peeled off the surface and retained for reference.
Sensitivity of the film-type developer is of the highest order, but because
of the skill required in stripping the film, its use is somewhat restricted
to special applications.

Development Time
The length of time the developer is allowed to remain on the part
prior to examination should not be less than 10 minutes. Development
time begins immediately after the application of dry developer, and soon

after the wet developer has dried. However, the maximum permitted
developing times are 2 hours for aqueous developers and 1 hour for
nonaqueous developers.
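The development-time window described above can be sketched as a single check: at least 10 minutes for all forms, at most 2 hours for aqueous developers and 1 hour for non-aqueous developers. The limits come from the text; the function and names are illustrative.

```python
# Sketch of the development-time window quoted above, in minutes.

MIN_DEV_MIN = 10
MAX_DEV_MIN = {"aqueous": 120, "nonaqueous": 60}

def development_time_ok(developer_kind, minutes):
    return MIN_DEV_MIN <= minutes <= MAX_DEV_MIN[developer_kind]

print(development_time_ok("aqueous", 90))     # True
print(development_time_ok("nonaqueous", 90))  # False - over the 1 h limit
print(development_time_ok("aqueous", 5))      # False - under the 10 min minimum
```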

CASTING PROCESS AND DEFECTS ASSOCIATED WITH
CASTING PROCESS
A casting may be defined as a “metal object obtained by allowing
molten metal to solidify in a mold “, the shape of the object being
determined by the shape of the mold cavity. Certain advantages are
inherent in the metal casting process. These often form the basis for
choosing casting over other shaping processes such as machining, forging,
welding, stamping, rolling, extruding, etc. Some of the reasons for the
success of the casting process are:
• The most intricate of shapes, both external and internal, may be cast.
As a result, many other operations, such as machining, forging, and
welding, can be minimized or eliminated.
• Because of their physical properties, some metals can only be cast
to shape since they cannot be hot-worked into bars, rods, plates, or
other shapes from ingot form as a preliminary to other processing.

• Construction may be simplified. Objects may be cast in a single
piece which would otherwise require assembly of several pieces if
made by other methods.
• Metal casting is a process highly adaptable to the requirements of
mass production. Large numbers of a given casting may be produced
very rapidly. For example, in the automotive industry hundreds of
thousands of cast engine blocks and transmission cases are produced
each year.
• Extremely large, heavy metal objects may be cast when they would
be difficult or economically impossible to produce otherwise. Large
pump housing, valves, and hydroelectric plant parts weighing up to
200 tons illustrate this advantage of the casting process.
Some engineering properties are obtained more favorably in cast metals.
Examples are:
• More uniform properties from a directional standpoint; i.e., cast
metals exhibit the same properties regardless of which direction is
selected for the test piece relative to the original casting. This is not
generally true for wrought metals.
• Strength and lightness in certain light metal alloys, which can be
produced only as castings.
• Good bearing qualities are obtained in casting metals.
• A decided economic advantage may exist as a result of any one or a
combination of the points mentioned above. The price and sale factor is
a dominant one which continually weighs the advantages and
limitations of the processes used in a competitive enterprise.

There are many more advantages to the metal-casting process; of


course it is also true that conditions may exist where the casting process

must give way to other methods of manufacture, when other processes
may be more efficient. For example, machining produces smooth
surfaces and dimensional accuracy not obtainable in any other way;
forging aids in developing the ultimate fiber strength and toughness in
steel; welding provides a convenient method of joining or fabricating
wrought or cast products into more complex structures; and stamping
produces lightweight sheet-metal parts. Thus the engineer may select from
a number of metal-processing methods the one, or combination, that is
most suited to the needs of the work.
Basic Steps in Making Sand Castings
Obtaining the casting geometry
The traditional method of obtaining the casting geometry is by
sending blueprint drawings to the foundry. This is usually done during the
request for quotation process. However, more and more customers and
foundries are exchanging part geometry as computer-aided design
(CAD) files.

Patternmaking
The pattern is a physical model of the casting used to make the mold.
The mold is made by packing some readily formed aggregate material,
such as molding sand, around the pattern. When the pattern is withdrawn,
its imprint provides the mold cavity, which is ultimately filled with metal
to become the casting. If the casting is to be hollow, as in the case of pipe
fittings, additional patterns, referred to as cores, are used to form these
cavities.

Core making
Cores are forms, usually made of sand, which are placed into a mold
cavity to form the interior surfaces of castings. Thus the void space

between the core and the mold-cavity surface is what eventually
becomes the casting.

Molding
Molding consists of all operations necessary to prepare a mold for
receiving molten metal. Molding usually involves placing a molding
aggregate around a pattern held with a supporting frame, withdrawing the
pattern to leave the mold cavity, setting the cores in the mold cavity and
finishing and closing the mold.
Melting and Pouring
The preparation of molten metal for casting is referred to simply as
melting. Melting is usually done in a specifically designated area of the
foundry, and the molten metal is transferred to the pouring area where the
molds are filled.

Cleaning
Cleaning refers to all operations necessary to the removal of sand,
scale, and excess metal from the casting. The casting is separated from the
mold and transported to the cleaning department. Burned-on sand and
scale are removed to improve the surface appearance of the casting.
Excess metal, in the form of fins, wires, parting-line fins, and gates, is
removed. Castings may be upgraded by welding or other procedures.
Inspection of the casting for defects and general quality is performed.
Before shipment, further processing such as heat treatment, surface
treatment, additional inspection, or machining may be performed as
required by the customer's specifications.

WELDING PROCESSES AND DEFECTS ASSOCIATED WITH
WELDING

Process:
Welding is the material-joining process used in making welds. A weld is a
localized coalescence of metals or non-metals produced by heating
the materials to a suitable temperature, with or without the application of
pressure, and with or without the use of filler material.
Defects associated with the process

1. Porosity: Generally, porosity occurs due to bubbles of gas entrapped
in the molten metal during solidification. The various types of
porosity are discussed below.
2. Wormhole Porosity: These are elongated cavities formed by
entrapment of gases during the solidification of weld metal. They
can occur singly or as a group. Wormholes are caused by the
progressive entrapment of gas between the solidifying crystals,
producing the elongated pores of circular cross-section. These
elongated pores often appear as a “herring bone” array. The gas
may come from gross surface contamination or from crevices
formed by the joint geometry such as a gap beneath the vertical
member of a horizontal/ vertical T-joint, which has been fillet
welded on both sides. They can also originate from plate
laminations, if these terminate in the weld metal.
3. Uniform porosity: Porosity distributed in a substantially uniform
manner throughout the weld run. The gas pores are equi-axed. This
occurs due to the entrapment of small discrete volumes of gas in the
solidifying weld metal. The gas may originate from damp fluxes,
corroded electrode wire, air entrapment in the gas shield, grease or
other hydrocarbon contamination, loss of shielding gas, water leaks
in water-cooled apparatus, and incorrect or insufficient de-oxidant

addition in electrode, filler, filler wire, or parent metal. Some
priming paints and metal surface treatments can also cause porosity.
4. Restart porosity: Porosity generally confined to a small area of weld,
usually occurring in manual or automatic welding, at the start of a
weld run.
5. Surface porosity: These are gas pores which break the surface of
the weld. They result from the evolution of large quantities of gas
that reach the surface of the weld pool. The origins of surface
porosity are similar to those of uniform porosity, but the degree of
contamination required is much greater. In addition, excessive sulfur
in the parent material, e.g. free-cutting steels, or in the consumables,
can cause surface porosity.
6. Crater Pipes: The depression due to shrinkage at the end of a weld
run, where the source of heat is removed. The pipe is caused by a
combination of interrupted de-oxidation reactions and the liquid-to-solid
volume change.
7. Linear Inclusions (Slag Inclusions): Slag or other matter entrapped
during welding. The inclusions are of a linear form and are situated
parallel to the weld axis. Poor manipulative technique, incomplete
removal of solidified slag from the underlying runs of a multi-pass
weld, or a poor bead profile produced by some electrodes may give
rise to slag inclusions.
8. Isolated Slag Inclusions: Slag or any other matter entrapped during
welding. The defect is irregular in shape and thus differs from a
gas pore. The causes are the same as for linear slag inclusions,
except that isolated inclusions can be either linear or rounded.
9. Lack of root fusion: Lack of union at the root of the weld. This may
occur due to the following reasons:

1. Incorrect welding conditions.
2. Too low arc energy.
3. Too high travel speed.
4. Incorrect electrode angle.
5. Molten metal flooding ahead of the arc because of work
position.
6. Electrode diameter too large in manual metal arc welding.
7. Excessive root face and/or undersized root gap.

If the lack of root fusion is accessible from the root side, dye
penetrant testing can be used to detect this defect. It is considered a
detrimental defect by almost all codes and standards. If the defective
area is accessible from the root side, the root defect should be cut out
or the defect line widened and re-welded. If the root defect is not
accessible from the root side, the complete weld must be cut out and
re-welded.

10. Lack of Sidewall Fusion: Lack of fusion between the weld and
the parent metal at a side of the weld. The common causes are
incorrect welding conditions, such as arc energy too low, travel
speed too fast, incorrect electrode angle, or molten metal flooding
ahead of the arc because of work position. If lack of sidewall fusion
reaches the surface, penetrant testing can be used.
11. Lack of Inter-Run Fusion: This is otherwise termed inter-pass
lack of fusion, a lack of union between adjacent runs of weld
metal in a multi-pass weld. The common causes are incorrect
welding conditions, such as arc energy too low, travel speed too
fast, incorrect electrode angle, or molten metal flooding ahead of
the arc because of work position.
12. Incomplete root penetration: Failure of the weld metal to extend
into the root of a joint. This may occur due to the following reasons:
1. Excessively thick root face or insufficient root gap.
2. Use of vertical-down welding, when vertical-up has been
specified to achieve root penetration.
3. Incorrect welding conditions, e.g. arc power too low; travel
speed too high; incorrect diameter of electrode.
4. Slag flooding.
5. Misalignment of the second side of the weld.
6. Failure to cut back to sound metal in a back-gouging
operation.
If the lack of penetration extends to an accessible side, dye penetrant
testing can be used. The defect is generally removed by cutting out the
weld and re-welding, as the strength of the weld joint is concentrated
in the root.
Arc Strikes: Random areas of fused metal where the electrode, the
holder, or the current-return clamp have accidentally touched the work
and produced a short-duration arc. An arc strike can produce a hard
heat-affected zone (HAZ) and may contain cracks.

13. Cracks: The most commonly encountered cracking phenomena


in weldments can be classified as follows:
(a) Hot Cracking: Cracks initiate in a solidifying metal
under the influence of low melting constituents are termed as hot
cracks. The temperature range of solidification mainly governs
the tendency for hot cracking. Larger the range, greater the
tendency. Hot cracking in steel is most often caused due to the
presence of the impurity element sulfur. The sulfur combines
with iron to form iron sulfide. This liquid iron sulfide, in the
presence of sulfur, would act as a lubricant and the grains would

slide over one another to absorb the shrinkage strains and form a
void. These voids coagulate to form a cavity, which is the origin of a
hot crack. Hot cracking is also promoted by a high carbon content, which
extends the solidification temperature range; the effect of this on the
hot cracking tendency is obvious.
(b) Cold Cracking: Cracks that initiate in weldments
under the combined influence of residual stresses,
microstructure, and hydrogen content are termed cold cracks.
Since these cracks initiate only in the solid state, the name cold
cracks was derived. This type of cracking also has several other
names, such as delayed cracking, hydrogen-induced cracking, etc.
Cold cracks are directly influenced by the welding conditions and the
welding procedure used. They are transgranular in nature and are
observed in the heat-affected zone (HAZ) of the weldments, hence are
also referred to as HAZ cracks. Since these cracks can initiate several
hours to several weeks after the completion of welding, they are also
termed delayed cracking.
(c) Lamellar Tearing: Lamellar tearing is a form of crack which occurs in
the base metal of weldments, often outside the transformed HAZ, and is
generally parallel to the weld fusion boundary. Lamellar tearing is
generally associated with weld joints in base materials with insufficient
short-transverse (through-thickness) ductility. It is also associated with
elongated or aligned inclusions, which cause poor short-transverse
mechanical properties.
(d) Stress Relief Cracking: This cracking is also known as reheat
cracking and is observed in creep-resistant steels containing
molybdenum and vanadium. This crack appears during the stress-relieving
treatment given to the weldments and is found in the HAZ. The crack is
intergranular in nature and is often aggravated by the presence of high
residual stresses, high stress concentration due to notches introduced by
welding and high restraints in the weldments.
14. Misalignment: The non-alignment of two abutting edges in a butt
joint. The common causes are inaccuracies in assembly procedures,
distortion from other welds, and excessive out-of-flatness in hot-rolled
plates or sections.

15. Excess Weld Metal: The extra metal which produces convexity in
fillet welds and weld thicknesses greater than the parent metal plate in
butt welds. The term “reinforcement” is misleading, since the excess does
not normally produce a stronger weld in a butt joint. In certain situations,
however, excess metal may be required for metallurgical reasons. This
feature of weld is regarded as a defect only when the height of the excess
metal is greater than the specified limits.

16. Undercut: During the final pass or cover pass, the exposed upper
edges of the beveled weld preparation tend to melt and run down into the
deposited metal in the groove. Undercutting often occurs when
insufficient filler metal is deposited to fill the resultant grooves at the edge
of the weld bead. Excessive welding current, incorrect arc length, or
incorrect electrode manipulation may cause undercutting.

17. Burn Through: A burn through is that portion of the weld bead
where excessive penetration has caused the weld pool to be blown into
the pipe or vessel. It is caused by the factors that produce excessive heat
in one area, such as high current, slow rod speed, incorrect rod
manipulation etc.
Nature of the Defect

The nature of the defect can have a large effect on the sensitivity of a
liquid penetrant inspection. Sensitivity is defined as the smallest defect
that can be detected with a high degree of reliability. Typically, the
crack length at the sample surface is used to define the size of the
defect. A survey of any probability-of-detection curve for penetrant
inspection will quickly lead one to the conclusion that crack length has
a definite effect on sensitivity. However, crack length alone does not
determine whether a flaw will be seen or go undetected. The volume of the
defect is likely to be the more important feature. The flaw must be of
sufficient volume so that enough penetrant will bleed back out to form an
indication that is detectable by the eye or that will satisfy the
dimensional thresholds of fluorescence.

In general, penetrant inspections are more effective at finding:

Small round defects than small linear defects: Small round defects
are generally easier to detect for several reasons. First, they are
typically volumetric defects that can trap significant amounts of
penetrant. Second, round defects fill with penetrant faster than linear
defects. One research effort found that an elliptical flaw with a
length-to-width ratio of 100 will take nearly 10 times longer to fill
with penetrant than a cylindrical flaw of the same volume.

Deeper flaws than shallow flaws: Deeper flaws will trap more
penetrant than shallow flaws, and they are less prone to over washing.

Flaws with a narrow surface opening than wide-open flaws: Flaws
with narrow surface openings are less prone to over washing.

Flaws on smooth surfaces than on rough surfaces: The surface
roughness of the part primarily affects the removability of a penetrant.
Rough surfaces tend to trap more penetrant in the various tool marks,
scratches, and pits that make up the surface. Removing the penetrant from
the surface of the part is then more difficult, and a higher level of
background fluorescence or over washing may occur.

Flaws with rough fracture surfaces than smooth fracture surfaces:
The surface roughness of the fracture faces is a factor in the speed at
which a penetrant enters a defect. In general, the penetrant spreads
faster over a surface as the surface roughness increases. It should be
noted that a particular penetrant may spread slower than others on a
smooth surface but faster than the rest on a rougher surface.

Flaws under tensile or no loading than flaws under compression
loading: In a 1987 study at University College London, the effect of
crack closure on detectability was evaluated. Researchers used a
four-point bend fixture to place tension and compression loads on
specimens that were fabricated to contain fatigue cracks. All cracks were
detected with no load and with tensile loads placed on the parts.
However, as compressive loads were placed on the parts, the detected
crack length steadily decreased as the load increased, until a load was
reached at which the crack was no longer detectable.

FORGINGS

In forgings of both ferrous and non-ferrous metals, flaws arise
mostly from conditions that exist in the ingot, from subsequent hot
working of the ingot or billet, and from hot or cold working during
forging. Many open-die forgings are forged from ingots. Many closed-die
forgings are forged from rolled billets or bar stock. Most of the
discontinuities that arise in forgings are due to imperfections present
in the ingot.

Chemical Segregation
The elements in an alloy are seldom uniformly distributed, and even
unalloyed metals contain randomly distributed impurities in the form of
tramp elements. Therefore, the composition of the metal or alloy will
vary from one location to another.
Deviation from the mean composition at a particular location in a forging
is termed segregation. Segregation therefore produces a metal having a
range of compositions, which do not have identical properties. Forging
can correct the results of segregation by recrystallizing or breaking up
the grain structure to provide a more uniform, homogeneous substructure.
However, the effects of a badly segregated ingot cannot be totally
eliminated by forging. The presence of localized regions that deviate
from the normal composition can affect corrosion resistance, forging and
welding characteristics, mechanical properties, fracture toughness, and
fatigue resistance. In heat-treatable alloys, variations in composition
can produce unexpected responses to heat treatments. This may result in
hard or soft spots, quench cracks, or other flaws. The degree of
degradation depends on the alloy and the process variables.

Ingot Pipe and Center-line Shrinkage


A common imperfection in ingots is the shrinkage cavity, commonly
known as pipe. It is often found in the upper portion of the ingot and
occurs during freezing of the metal, when eventually there is
insufficient liquid metal near the top to feed the ingot. As a result a
cavity forms, usually approximating the shape of a cylinder or cone,
hence the term pipe. In addition to the primary pipe near the top of the
ingot, secondary regions of piping and centerline shrinkage may extend
deeper into the ingot. Primary piping is generally an economic concern,
but if it extends deeper into the ingot body it may go undetected;
detection of pipe can sometimes be obscured if bridging has occurred.
Piping can be minimized by pouring ingots with the big end up, by
providing risers in the ingot top, and by applying hot-top materials
immediately after pouring. Secondary piping can be detrimental, as it is
harder to detect in the mill and may produce centerline defects in bar
and wrought products.

Nonmetallic Inclusions
Nonmetallic inclusions originate in the ingot and are likely to be
carried over into the forging, even though the material may undergo
several intermediate hot-working operations. Most nonmetallic inclusions
originate during solidification from the initial melting operation. If no
further consumable-remelting cycles follow, the size, frequency, and
distribution of these inclusions will not be altered. However, if a
subsequent vacuum remelting operation is used, the inclusions will be
reduced in size and frequency and will become more randomly distributed.
Two kinds of nonmetallic inclusions are distinguished in metals: those
that are entrapped in the metal inadvertently and originate from
particles of matter occluded in the metal while it is molten or being
cast, and those that separate from the metal because of a change in
temperature or composition. Inclusions of the latter type are produced by
separation from the metal in the liquid or solid state: oxides, sulfides,
nitrides, and other nonmetallic compounds form in such amounts that their
solubility in the matrix is exceeded. Of the numerous types of flaws
present in forgings, nonmetallic inclusions contribute significantly to
service failures. In forgings used in high-integrity aerospace
applications, these inclusions tend to decrease the ability to withstand
static loads and fatigue loading, and sometimes reduce resistance to
corrosion and stress-corrosion.
Flaws caused by processing of Ingot or Billet

Flaws that occur during the preliminary reduction of ingots or billets
prior to final forging include internal bursts and various surface flaws,
such as laps, seams, slivers, rolled-in scale, ferrite fingers, fins,
overfills and underfills.

Bursts:
Where the metal is weak due to the presence of pipe, segregation,
or inclusions, the tensile stresses developed during working can be high
enough to tear the material apart internally, particularly if the
hot-working temperature is too high. Such internal tears are known as
forging bursts or ruptures. Similarly, if the metal contains low-melting
phases resulting from segregation, these phases may cause internal bursts
during hot working.

Laps:
Laps are surface irregularities that appear as linear defects and are
caused by the folding over of hot metal at the surface. These folds are
forged into the surface, but are not metallurgically bonded (welded),
because of the oxide present between the surfaces. Therefore, a
discontinuity with a sharp notch is created.

Seam:
A seam is a surface defect that also appears as a longitudinal indication
and is the result of a crack, a heavy cluster of nonmetallic inclusions,
or a deep lap (a lap that intersects the surface at a large angle). A
seam can also result from a defect in the ingot surface, such as a hole,
that becomes oxidized and is prevented from healing during working. In
this case, the hole simply stretches out during forging or rolling,
producing a linear crack-like seam in the workpiece surface.

Other Surface Defects

Slivers are loose, torn pieces of steel rolled into the surface. Rolled-in
scale is scale formed during rolling. Ferrite fingers are surface cracks
that have been welded shut but still contain oxides and decarburization.
Fins and overfills are protrusions formed by incorrect reduction during
hot working. Underfills are the result of incomplete working of the
section during reduction.

Flaws Caused by Forging Operation

Flaws produced by the forging operation are the result of improper setup
or control. Proper control of heating is necessary in forging to prevent
excessive scale, decarburization, overheating, or burning. Excessive
scale, in addition to causing excessive metal loss, can result in
forgings with pitted surfaces. The pitting is caused by scale being
hammered into the surface and may result in unacceptable forgings.

Internal flaws in forgings often appear as cracks or tears, and may
result either from forging with too light a hammer or from continuing to
forge after the metal has cooled below a safe forging temperature. A
number of surface flaws can be produced by the forging operation. The
movement of metal over or upon another surface often causes these flaws
without actual welding or fusing of the surfaces; such flaws may be laps
or folds.
Cold shuts often occur in closed-die forgings. They are junctures of two
adjoining surfaces caused by incomplete metal fill and incomplete fusion
of surfaces. Shear cracks often occur in forgings. They are diagonal cracks
occurring on the trimmed edges and are caused by shear stresses.

INTERPRETATION OF INDICATIONS

Final interpretation shall be made within 10 to 60 minutes after the
application of the developer. If bleed-out does not alter the examination
results, longer times may be permitted. If the surface to be examined is
large enough to preclude complete examination within the established
time, the examination shall be performed in increments. The exact nature
of an indication is difficult to evaluate if the penetrant diffuses
excessively into the developer. If this condition exists, close
observation of the formation of indications during application of the
developer will be helpful in characterizing them.

Color Contrast Penetrants

With a color contrast penetrant, the developer forms a reasonably
uniform white coating. Surface discontinuities are indicated by bleed-out
of the penetrant, which is normally a deep red color that stains the
developer. Indications with a light pink color may indicate excessive
washing. Inadequate cleaning may leave an excessive background, making
interpretation difficult.
Viewing conditions:
A minimum light intensity of 50 foot-candles (500 lux) is necessary to
ensure adequate sensitivity during the examination and evaluation of
indications.
Fluorescent Penetrants

With fluorescent penetrants, the process is essentially the same as
that for a color contrast penetrant, the exception being that the
examination of fluorescent penetrant indications is performed under an
ultraviolet light, called a black light.

The examination shall be performed as follows:

a) It shall be performed in a darkened area.


b) The examiner shall be in the darkened area for at least 1 min prior
to performing the examination to enable his eyes to adapt to the
darkness. When the inspector moves suddenly into darkness, the
pupils of the eyes need time to dilate and adjust. Until they do, the
examiner cannot see the smallest indications properly, which
reduces the sensitivity of the examination.
c) The examiner is also not allowed to wear photosensitive
(photochromic) glasses or lenses, as these darken under ultraviolet
light and may diminish the visibility of the smallest indications,
reducing the sensitivity.
d) The black light should be allowed to warm up for a minimum of 5
min prior to the measurement of the intensity of the ultraviolet light
emitted or its use for examination.
e) Contrast spectacles, made with lenses of sodium glass, give an
increased contrast with fluorescent penetrant and block the
objectionable UV or blue light.
f) The intensity of the black light is measured with a light meter such
as a digital radiometer. A minimum of 1000 microwatts per square
centimeter is required on the surface of the part being inspected.
g) The black light intensity shall be measured at least once every
8-hour shift, and whenever the workstation is changed.
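The two light-level minimums above (white light for visible penetrant, UV-A for fluorescent penetrant) can be captured in a small check. This is an illustrative sketch only: the function name and the method labels are hypothetical, and the thresholds are simply the minimums quoted in this section.

```python
# Minimum light levels quoted in this section (illustrative values only).
MIN_WHITE_LIGHT_LUX = 500        # visible (color contrast) penetrant
MIN_UV_INTENSITY_UW_CM2 = 1000   # fluorescent penetrant (black light)

def viewing_conditions_ok(method: str, measured: float) -> bool:
    """Return True if the measured light level meets the minimum for the
    given penetrant method ('visible' in lux, 'fluorescent' in uW/cm^2)."""
    if method == "visible":
        return measured >= MIN_WHITE_LIGHT_LUX
    if method == "fluorescent":
        return measured >= MIN_UV_INTENSITY_UW_CM2
    raise ValueError(f"unknown method: {method}")

print(viewing_conditions_ok("visible", 550))      # True: 550 lux >= 500 lux
print(viewing_conditions_ok("fluorescent", 800))  # False: below 1000 uW/cm^2
```

In practice the measured values would come from a calibrated light meter or radiometer, as described in item f) above.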
Inspection Tools
An inspector must have tools capable of providing the
required accuracy. These tools usually include suitable measuring
devices, a flashlight, small quantities of solvent, small quantities of
dry developer or aerosol cans of non-aqueous wet developer, pocket
magnifiers ranging from 3× to 10×, and a suitable black light for
fluorescent penetrant or sufficient white light for visible penetrant.
Reference photographs displaying various imperfections should also be
kept at hand for difficult evaluations.

Inspection

A typical inspection begins with an overall examination to
determine that the workpiece has been properly processed and is in a
satisfactory condition for inspection. Certain precautions have to be
taken before proper inspection. They are:
a) Inspection should not begin until the wet developer is
completely dry.
b) If the developer coating is too thick, if the penetrant bleed-out
is excessive, or if the penetrant background is excessive, the
workpiece should be cleaned and the inspection repeated.
c) The inspector has to ascertain the satisfactory condition of the
workpiece and be sure that no areas have been missed.
d) The inspector must be aware that only surface imperfections
can be found by penetrant inspection.
Interpretation
The appearance of an indication is a major factor that allows the
operator to interpret indications. However, a thorough knowledge of the
various manufacturing processes and the defects associated with each
process aids the interpreter in determining the exact nature of the
indications. Indications are characterized by the types of
discontinuities they reveal; the size and concentration (number of
indications in an area) determine the severity. A tight crack, such as a
fatigue or stress-corrosion crack, can show a bluish-white indication
with a fluorescent penetrant, whereas a wide crack inspected with the
same penetrant will show a greenish-yellow
indication. The reason is that a tight crack is too small for the
color-forming dye to penetrate; hence the reflected light has a
wavelength that falls in the bluish-white range of the light spectrum.
The type of developer can significantly influence the appearance of
indications, as follows:
a) A dry developer will provide a sharper, less spreading indication
with better resolution of individual indications in areas of high
concentration of indications.
b) Wet developer provides a broader indication, as the uniform coat of
powder provides for more lateral spreading of the penetrant into the
developer. A group of indications, such as cluster porosity, can
spread and merge into one single large indication, thereby reducing
the resolution.
c) Solvent-suspendible developers will provide sharp indications with
good resolution if applied uniformly. A heavier coating of
solvent-suspendible developer, when used with a color contrast
penetrant, will provide sharp indications that continue to bleed out
and fade in color with time.
Variables that influence the persistence of indications

1. Pre-cleaning methods: traces of acids or alkalis on the surface can
fade indications.
2. Type of penetrant and its dye system.
3. The processing procedure – over washing.
4. Temperature- high temperature; too long drying times.
5. Concentration of emulsifier and dwell time.
6. Type of developer.

CLASSIFICATION OF INDICATIONS

Penetrant Test Indications are classified as follows:
(a) Relevant Indications
(b) Non-relevant Indications
(c) False Indications

Relevant Indications:

Relevant indications are indications from true discontinuities. They may
be further classified into:
(a) Relevant Linear Indications
(b) Relevant Rounded Indications
If the length of an indication is three times its width or more, the
indication is termed a relevant linear indication.
If the length of an indication is less than three times its width, the
indication is termed a relevant rounded indication.
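The 3:1 length-to-width rule above is simple enough to express directly. The sketch below is illustrative only (the function name is hypothetical); it classifies a relevant indication from its measured length and width:

```python
def classify_indication(length: float, width: float) -> str:
    """Apply the 3:1 rule from the text: an indication whose length is
    three times its width or more is linear; otherwise it is rounded."""
    if width <= 0:
        raise ValueError("width must be positive")
    return "linear" if length >= 3 * width else "rounded"

print(classify_indication(6.0, 1.0))  # linear  (6 >= 3 x 1)
print(classify_indication(2.0, 1.0))  # rounded (2 < 3 x 1)
```

Note that the boundary case (length exactly three times the width) is classified as linear, following the wording "three times its width or more" above.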
Relevant indications are subjected to further evaluation with applicable
codes and standards as to the cause and the effect; they will have on the
service life of the article.

They could be divided into five distinct categories, namely:

(a) Continuous Line: This type of indication is often caused by cracks,
cold shuts, forging laps, scratches, or die marks.
(b) Intermittent Line: These indications could be any of the
above-mentioned discontinuities, provided they are very tight, or where
the part has been machined, peened, or ground.
(c) Round: Usually caused by porosity or blowholes.
(d) Small dots: Tiny round indications caused by the porous nature of
the material, coarse grain structure, or microshrinkage.

(e) Diffused or Weak: These indications are often difficult to interpret,
such as indications due to poor excess penetrant removal. Whenever
such indications are encountered, the best method is to clean the
specimen and re-inspect.

Evaluation
The main purpose of evaluation is to classify indications as acceptable
or rejectable. An experienced inspector readily determines which
indications are within acceptable limits and which are not. The
inspector then measures all other indications. If the length or diameter
of an indication exceeds the allowable limits, it must be evaluated. One
of the most common and accurate ways of measuring an indication is to lay
a flat gauge of the maximum acceptable discontinuity dimension over the
indication: if the indication is not completely covered by the gauge, it
is not acceptable. Each indication that is not acceptable is evaluated
further. It may actually be unacceptable, it may be worse than it
appears, it may be false, it may be real, or it may prove acceptable upon
closer examination. The common method of evaluation includes the
following steps:
a) Wipe the area of the indication with a small brush or clean cloth that
is dampened with solvent.
b) Dust the area with a dry developer or spray it with a light coat of
non-aqueous developer.
c) Re-examine under lighting appropriate for the type of penetrant
used.
If the discontinuity originally appeared to be of excessive length
because of bleeding of penetrant along a scratch, seam, or machining
mark, that will be evident to a trained eye. Finally, to gain maximum
assurance that the indication is properly interpreted, it is good
practice to wipe the surface again with solvent-dampened cotton and
examine the indication
area with a magnifying glass and ample white light. The final evaluation
may show that the indication is even larger than originally measured, but
was not shown in its entirety because the ends were too tight to hold
enough penetrant to reach the surface and become visible.

All indications are to be evaluated in terms of the acceptance standards
of the referencing code section.

The ASME Boiler and Pressure Vessel Code is a widely used acceptance
standard. Of its several sections, Sec V and Sec VIII deal chiefly with
nondestructive testing.

The following are the acceptance standards from the various codes that
are most commonly referenced.
Acceptance Standards

Acceptance as per ASME Boiler and Pressure Vessel Code, Sec VIII
The following types of relevant indications are unacceptable:
(1) Cracks or any other linear indications.
(2) Rounded indications whose dimension exceeds 3/16 in. (4.8 mm).
(3) Four or more rounded indications of 1/16 in. (1.6 mm) diameter or
greater in a line, separated by 1/16 in. (1.6 mm) or less, edge to edge.
(4) Ten or more rounded indications of 1/16 in. (1.6 mm) diameter or
greater in any 6 in.² of surface, with the major dimension of this area
not to exceed 6 in. (150 mm), the area being taken in the most
unfavorable location relative to the indications being evaluated.
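As a worked illustration of how the rules above might be applied, the sketch below encodes rules (1) and (2) for a single indication and rule (3) for a pre-identified line of rounded indications; rule (4) additionally needs positional data to select the 6 in.² area and is omitted. All function names are hypothetical, and dimensions are in mm as quoted above:

```python
ROUNDED_MAX_MM = 4.8   # rule (2): maximum rounded indication dimension
LINE_MIN_DIA_MM = 1.6  # rule (3): minimum diameter counted in a line
LINE_MAX_GAP_MM = 1.6  # rule (3): maximum edge-to-edge separation

def evaluate_single(kind: str, dim_mm: float) -> str:
    """Rules (1) and (2) for one indication: any crack or linear
    indication is rejected; a rounded indication is rejected if its
    dimension exceeds 4.8 mm."""
    if kind == "linear":
        return "reject: crack or linear indication"
    if dim_mm > ROUNDED_MAX_MM:
        return "reject: rounded indication exceeds 4.8 mm"
    return "accept (subject to line and area rules)"

def line_rule_violated(dims_mm, gaps_mm) -> bool:
    """Rule (3): four or more rounded indications of 1.6 mm or greater
    in a line, separated edge to edge by 1.6 mm or less."""
    return (len(dims_mm) >= 4
            and all(d >= LINE_MIN_DIA_MM for d in dims_mm)
            and all(g <= LINE_MAX_GAP_MM for g in gaps_mm))

print(evaluate_single("rounded", 5.0))  # prints a reject message (5.0 > 4.8 mm)
print(line_rule_violated([2.0, 1.6, 1.8, 2.2], [1.0, 1.5, 1.2]))  # True
```

A real evaluation would of course be made against the edition of the code invoked by the contract, not this sketch.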
Acceptance as per API 1104 – Welding of Pipelines and Related
Facilities

Section 6 - Acceptance Standards for Nondestructive Testing

Classification of Indications

1. Indications produced by liquid penetrant inspection are not
necessarily defects. Machining marks, scratches, and surface
conditions may produce indications that are similar to those
produced by discontinuities but that are not relevant to
acceptability. The criteria given below apply when indications
are evaluated.
2. Any indication with a maximum dimension of 1/16 in. (1.59 mm)
or less shall be classified as non-relevant. Any larger indication
believed to be non-relevant shall be regarded as relevant until
re-examined by penetrant inspection or by any other
nondestructive examination method to determine whether or not an
actual discontinuity exists. The surface may be ground or
otherwise conditioned before re-examination.
3. Relevant indications are those caused by actual discontinuities.

Linear indications are those in which the length is more than three times
its width. Rounded indications are those in which the length is three times
its width or less.

ACCEPTANCE STANDARDS

Relevant indications shall be unacceptable when any of the following
conditions exists:

a) Linear indications are evaluated as crater cracks or star cracks and
exceed 5/32 in. (3.96 mm) in length.

b) Linear indications are evaluated as cracks other than crater cracks
or star cracks.
c) Linear indications are evaluated as Incomplete Fusion and exceeds
1 inch (25.4mm) in total length in a continuous 12 inches
(304.8mm) length of the weld or 8% of the weld length.
d) Rounded indications shall be evaluated as follows:
Porosity: Individual or scattered porosity shall be unacceptable
when any of the following conditions exists:
i) The size of an individual pore exceeds 1/8 in. (3.17 mm).
ii) The size of an individual pore exceeds 25% of the nominal wall
thickness joined.
iii) Cluster porosity that occurs in any pass except the finish pass
shall comply with the above-mentioned dimensions. Cluster porosity
that occurs in the finish pass shall be unacceptable when any of the
following conditions exists:
a) The diameter of the cluster exceeds 1/2 in. (12.7 mm).
b) The aggregate length of cluster porosity in any continuous 12 in.
(304.8 mm) weld length exceeds 1/2 in. (12.7 mm).
c) An individual pore within a cluster exceeds 1/16 in. (1.59 mm) in size.
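The porosity limits above lend themselves to the same treatment. The sketch below is illustrative only (hypothetical function names, metric values as quoted in the text): one check for individual or scattered pores, and one for cluster porosity in the finish pass.

```python
def pore_unacceptable(pore_mm: float, wall_mm: float) -> bool:
    """Individual/scattered porosity: reject a pore larger than
    1/8 in. (3.17 mm) or larger than 25% of the nominal wall thickness."""
    return pore_mm > 3.17 or pore_mm > 0.25 * wall_mm

def finish_pass_cluster_unacceptable(cluster_dia_mm: float,
                                     aggregate_len_mm: float,
                                     max_pore_mm: float) -> bool:
    """Cluster porosity in the finish pass: reject if the cluster diameter
    exceeds 1/2 in. (12.7 mm), the aggregate length in any continuous
    12 in. of weld exceeds 1/2 in., or any single pore in the cluster
    exceeds 1/16 in. (1.59 mm)."""
    return (cluster_dia_mm > 12.7
            or aggregate_len_mm > 12.7
            or max_pore_mm > 1.59)

print(pore_unacceptable(2.0, 6.0))  # True: 2.0 mm exceeds 25% of a 6 mm wall
print(finish_pass_cluster_unacceptable(10.0, 8.0, 1.0))  # False: all within limits
```

Note how the wall-thickness clause makes the individual-pore limit tighter on thin-wall pipe: the 1/8 in. cap governs only when the wall is thicker than about 12.7 mm.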

ASME CODE FOR PRESSURE PIPING – B 31.1


CHAPTER VI – EXAMINATION, INSPECTION, AND TESTING
LIQUID PENETRANT EXAMINATION

Whenever required by this chapter, liquid penetrant examination
shall be performed in accordance with the methods of Article 6, Section
V, of the ASME Boiler and Pressure Vessel Code.

(B) Acceptance Standards: Indications whose major dimensions are
greater than 1/16 in. (1.6 mm) shall be considered relevant. The
following relevant indications are considered unacceptable:
(B.1) Any cracks or linear indications.
(B.2) Rounded indications with dimensions greater than 3/16 in.
(5.0 mm).
(B.3) Four or more rounded indications in a line separated by 1/16 in.
(1.6 mm) or less, edge to edge.
(B.4) Ten or more rounded indications in any 6 in.² (3870 mm²) of the
surface, with the major dimension of this area not to exceed 6 in.
(150 mm), the area being taken in the most unfavorable location relative
to the indications being evaluated.
TERMINOLOGY IN CODES AND STANDARDS

DEFECT: An imperfection of sufficient magnitude to warrant rejection of
a product based on the stipulations of codes and standards.

IMPERFECTION: A discontinuity or irregularity in the product detected
by methods outlined in codes and standards.

SHALL: Used to indicate that a provision is Mandatory

SHOULD: Used to indicate that a provision is not mandatory, but is
recommended as a good practice.

It has not been practical to establish any type of universal
standardization, because of the wide variety of components and
assemblies subjected to penetrant examination, the differences in the
types of discontinuities common to them, and the differences in the
degree of integrity required.

A specification is a document that delineates design or performance
requirements. It should include the methods of inspection and the
acceptance requirements based on the inspection or test procedure.

NONRELEVANT INDICATIONS

Non-relevant indications are actual surface discontinuities that in
most cases are present by design. Some features of assembly, such as
articles that are press-fitted, keyed, splined, or riveted, are causes
of these indications. Loose scale or a rough surface on a forging or
casting can also cause them. The operator normally identifies them by
looking at the design drawings or sketches. Loose scale is often
identified during the visual inspection the operator carries out before
the actual inspection of the workpiece. Any indication believed to be
non-relevant shall be treated as relevant until it is confirmed as
non-relevant by re-examination by the same or a different NDT method.

FALSE INDICATION

The most common source of false indications is poor washing. The
operator can easily verify a good rinse by using a black light during
and after the wash in fluorescent inspection.
Some of the common sources of false indications are given below:
(a) Penetrant on the operator's hands
(b) Penetrant on the test table
(c) Penetrant transferred to a clean specimen from other indications
(d) Contamination of the developer.

Recording

Penetrant indications can obviously be photographed, or
video-recorded with a CCTV camera. As with magnetic particle
indications, with specialized methods a fluorescent indication can be
photographed so as to retain some identifying background.
A dry indication can be lifted off the surface with transparent
adhesive tape, or a replica can be made of the surface with a process
called Replica-Transfer-Coating (RTC). This is a resin in a volatile
solvent, with a white pigment and a silicone de-bonding agent. RTC is
applied instead of developer, by aerosol spray, and allowed to dry.
After drying, the edges are trimmed with a knife and the replica is
peeled off.
Post Cleaning

Some residue will remain on workpieces after the penetrant inspection
is completed. These residues can result in the formation of voids during
subsequent welding or brazing, in the contamination of surfaces, or in
unfavorable reactions in chemical processing operations. Post cleaning
is particularly important where residual penetrant materials might
combine with other factors in service to produce corrosion. A suitable
technique, such as a simple water spray, water rinse, machine wash,
vapor degreasing, solvent soaking, or ultrasonic cleaning, may be
employed. If removal of the developer is required, it should be carried
out as promptly as possible to prevent the developer from becoming fixed
to the surface.

Quality Control of Emulsifier Bath

Quality control of the emulsifier bath should be performed per the
requirements of the applicable specification. The information provided
here may not reflect the requirements of current specifications and is
provided for general education purposes only.

Lipophilic Emulsifiers

Lipophilic emulsifiers are miscible with penetrants in all concentrations.


However, if the concentration of penetrant contamination in the
emulsifier becomes too great, the mixture will not function effectively
as a remover. AMS 2644 requires that lipophilic emulsifiers be capable
of tolerating 20% penetrant contamination without a reduction in
performance, and AMS 2647A requires the emulsifier to be replaced when
its cleaning action is less than that of new material. Since lipophilic
emulsifiers are oil-based, they have a limited tolerance for water. When
the tolerance level is reached, the emulsifier starts to thicken and
will eventually form a gel as more water is added. AMS 2644 requires
that lipophilic emulsifiers be formulated to function adequately with at
least 5% water contamination, and AMS 2647A requires that lipophilic
emulsifiers be replaced when the water concentration reaches 5%.

Hydrophilic Emulsifiers

Hydrophilic emulsifiers have less tolerance for penetrant
contamination. The penetrant tolerance varies with emulsifier
concentration and the type of contaminating penetrant. In some cases, as
little as 1% by volume of penetrant contamination can seriously affect the
performance of an emulsifier. One penetrant manufacturer reports that 1 to
1.5% penetrant contamination will affect a solution with a 10%
concentration of emulsifier. As the emulsifier concentration in the
solution increases, the penetrant contamination tolerance also increases; a
solution with a 30% emulsifier concentration can tolerate from 5 to 8.5%
penetrant contamination. The percentage of added penetrant required to
destroy the washability of the emulsifier can be measured, and an oil tolerance



index is commonly used to compare the tolerance of different emulsifiers
to contamination by penetrant. AMS 2647A requires that the
emulsification bath be discarded if penetrant is noted floating on the
surface or adhering to the sides of the tank.

Water contamination is not as much of a concern with hydrophilic
emulsifiers as they are miscible with water. However, it is very important
that the emulsifier solution be kept at the proper concentration.
It should also be noted that penetrant drag-out, and thus the level of
possible emulsifier contamination by the penetrant, is dependent on the
type of material being processed. Tests have shown that on both polished
and grit-blasted surfaces, aluminum and stainless steel parts had greater
drag-out than titanium parts.

Emulsifier Concentration and Contact Time

The optimal emulsifier contact time is dependent on a number of variables
that include the emulsifier used, the emulsifier concentration, the surface
roughness of the part being inspected, and other factors. Usually some
experimentation is required to select the proper emulsifier contact time.
The emulsifier used must be matched to the penetrant material. For
method D penetrant systems the concentration of the emulsifier should
not exceed the percentage specified by the supplier and if working to a
specification should not exceed the concentration specified. Since the
emulsifier is mixed with water, which is prone to evaporation, it is
recommended that the starting concentration be less than that
recommended by the supplier. One penetrant manufacturer recommends
the following starting concentrations:
• 20% if the maximum concentration is 30%



• 13% if the maximum is 20%
• 6.5% if the maximum is 10%.

Some Research on Emulsifier Concentration and Contact Time


Vaerman reported on the effect of emulsifier concentration on sensitivity.
He varied the contact time of a lipophilic emulsifier and compared the
results to those from a 5% concentration hydrophilic emulsifier with a
three-minute contact time. For a normal contact time of 45 seconds, the
lipophilic emulsifier was found to average nearly 18% less sensitive over
the range of crack depths (10 to 50 microns). The loss of sensitivity
increased rapidly as the lipophilic contact time was increased in steps to
5 minutes. Also as expected, the decrease in sensitivity increased with
increasing crack size.
Vaerman also looked at the effect of hydrophilic emulsifier
concentration. It was found that increasing the concentration from 5% by
volume to 33% decreased sensitivity by 15% when a three-minute
contact time was used. When a contact time of one minute was used, the
decrease in sensitivity was just over nine percent.
Hyam also reports on the effect of the emulsifier concentration and
contact time. Both hydrophilic and lipophilic removers were tested. The
results showed that as the concentration of the emulsifier was increased
from 2.5% to 20%, sensitivity decreased. The contact time was shown to
have little effect on the hydrophilic system tested (up to 20 minutes) but
to have a significant effect on the lipophilic system with sensitivity
decreasing as contact time was increased from 2 to 10 minutes.

The primary in-service checks used to verify the quality of the penetrant
test systems are as follows:



TEST MATERIAL CONTROL SAMPLES

The tests mentioned below are comparison tests in which “used”
materials are compared with “new” ones. Control samples are taken at the
time the materials are received from the supplier. These samples are kept
in sealed containers and stored in a safe place where they are not subjected
to any form of deterioration.
TEST BLOCKS

The blocks are made of aluminum, steel, nickel, glass, or ceramic. Some
of them are used for checking the sensitivity of the test system by
comparison, while others are designed specifically for testing penetrant
or emulsifier washability.

Aluminum Test Blocks

The blocks measure 2" x 3" (50mm x 75mm) and are cut from 5/16" (8mm)
thick 2024-T3 aluminum alloy plate. The blocks are heated non-uniformly
and water quenched to produce thermal cracks. This is accomplished by
supporting the block in a frame and heating it with the flame of a gas
burner or torch at the center of the underside of the block. A temperature
of about 510 to 527 degrees C is maintained for approximately 4 minutes;
Tempilstik, Tempilaq, or an equivalent temperature indicator is applied to
a penny-sized area on the top side of the block to monitor the temperature.



RADIOGRAPHY TESTING
X-rays were discovered in 1895 by Wilhelm Conrad Roentgen (1845-1923) who was a
Professor at Wuerzburg University in Germany. Working with a cathode-ray tube in his
laboratory, Roentgen observed a fluorescent glow of crystals on a table near his tube. The tube
that Roentgen was working with consisted of a glass envelope (bulb) with positive and negative
electrodes encapsulated in it. The air in the tube was evacuated, and when a high voltage was
applied, the tube produced a fluorescent glow. Roentgen shielded the tube with heavy black
paper, and discovered a green colored fluorescent light generated by a material located a few
feet away from the tube.

He concluded that a new type of ray was being emitted from the tube. This ray was capable of
passing through the heavy paper covering and exciting the phosphorescent materials in the
room. He found the new ray could pass through most substances casting shadows of solid
objects. Roentgen also discovered that the ray could pass through the tissue of humans, but not
bones and metal objects. One of Roentgen’s first experiments late in 1895 was a film of the
hand of his wife, Bertha. It is interesting that the first use of X-rays was for an industrial (not
medical) application, as Roentgen produced a radiograph of a set of weights in a box to show his
colleagues.

Roentgen’s discovery was a scientific bombshell, and was received with extraordinary interest
by both scientists and laymen. Scientists everywhere could duplicate his experiment because the
cathode-ray tube was very well known during this period. Many scientists dropped other lines of
research to pursue the mysterious rays. Newspapers and magazines of the day provided the
public with numerous stories, some true, others fanciful, about the properties of the newly
discovered rays.

Public fancy was caught by this invisible ray with the ability to pass through solid matter, and, in
conjunction with a photographic plate, provide a picture of bones and interior body parts.
Scientific fancy was captured by demonstration of a wavelength shorter than light. This
generated new possibilities in physics, and for investigating the structure of matter. Much
enthusiasm was generated about potential applications of rays as an aid in medicine and
surgery. Within a month after the announcement of the discovery, several medical radiographs
had been made in Europe and the United States which were used by surgeons to guide them in
their work. In June 1896, only 6 months after Roentgen announced his discovery, X-rays were
being used by battlefield physicians to locate bullets in wounded soldiers.

Prior to 1912, X-rays were used little outside the realms of medicine, and dentistry, though
some X-ray pictures of metals were produced. The reason that X-rays were not used in
industrial application before this date was because the X-ray tubes (the source of the X-rays)
broke down under the voltages required to produce rays of satisfactory penetrating power for
industrial purpose. However, that changed in 1913 when the high vacuum X-ray tubes designed
by Coolidge became available. The high vacuum tubes were intense and reliable X-ray
sources, operating at energies up to 100,000 volts.

In 1922, industrial radiography took another step forward with the advent of the 200,000-volt X-
ray tube that allowed radiographs of thick steel parts to be produced in a reasonable amount of
time. In 1931, General Electric Company developed 1,000,000 volt X-ray generators, providing
an effective tool for industrial radiography. That same year, the American Society of Mechanical
Engineers (ASME) permitted X-ray approval of fusion welded pressure vessels that further
opened the door to industrial acceptance and use.

A Second Source of Radiation


Shortly after the discovery of X-rays, another form of penetrating rays was discovered. In

1896, French scientist Henri Becquerel discovered natural radioactivity. Many scientists of the period
were working with cathode rays, and other scientists were gathering evidence on the theory that the
atom could be subdivided. Some of the new research showed that certain types of atoms disintegrate
by themselves. It was Henri Becquerel who discovered this phenomenon while investigating the
properties of fluorescent minerals. Becquerel was researching the principles of fluorescence,
whereby certain minerals glow (fluoresce) when exposed to sunlight. He utilized photographic
plates to record this
fluorescence.

One of the minerals Becquerel worked with was a uranium compound. On a day when it was
too cloudy to expose his samples to direct sunlight, Becquerel stored some of the compound in
a drawer with his photographic plates. Later when he developed these plates, he discovered
that they were fogged (exhibited exposure to light). Becquerel questioned what could have
caused this fogging. He knew he had wrapped the plates tightly before using them, so the
fogging was not due to stray light. In addition, he noticed that only the plates that were in the
drawer with the uranium compound were fogged. Becquerel concluded that the uranium
compound gave off a type of radiation that could penetrate heavy paper and expose
photographic film. Becquerel continued to test samples of uranium compounds and determined
that the source of radiation was the element uranium. Becquerel’s discovery was, unlike that of
the X-rays, virtually unnoticed by laymen and scientists alike. Only a relatively few scientists
were interested in Becquerel’s findings. It was not until the discovery of radium by the Curies
two years later that interest in radioactivity became widespread.

While working in France at the time of Becquerel’s discovery, Polish scientist Marie Curie
became very interested in his work. She suspected that a uranium ore known as pitchblende
contained other radioactive elements. Marie and her husband, a French scientist, Pierre Curie
started looking for these other elements. In 1898, the Curies discovered another radioactive
element in pitchblende; they named it ‘polonium’ in honor of Marie Curie’s native homeland.
Later that year, the Curies discovered another radioactive element, which they named ‘radium’,
or shining element. Both polonium and radium were more radioactive than uranium.
Since these discoveries, many other radioactive elements have been discovered or produced.

Radium became the initial industrial gamma ray source. The material allowed radiographing
castings up to 10 to 12 inches thick. During World War II, industrial radiography grew
tremendously as part of the Navy’s shipbuilding program. In 1946, manmade gamma ray
sources such as cobalt and iridium became available. These new sources were far stronger
than radium and were much less expensive. The manmade sources rapidly replaced radium,
and use of gamma rays grew quickly in industrial radiography.

Health Concerns
The science of radiation protection, or “health physics” as it is more properly called, grew out of
the parallel discoveries of X-rays and radioactivity in the closing years of the 19th century.
Experimenters, physicians, laymen, and physicists alike set up X-ray generating apparatus and
proceeded about their labors with a lack of concern regarding potential dangers. Such a lack of
concern is quite understandable, for there was nothing in previous experience to suggest that
X-rays would in any way be hazardous. Indeed, the opposite was the case, for who would
suspect that a ray similar to light but unseen, unfelt, or otherwise undetectable by the senses
would be damaging to a person? More likely, or so it seemed to some, X-rays could be
beneficial for the body.

Inevitably, the widespread and unrestrained use of X-rays led to serious injuries. Often injuries
were not attributed to X-ray exposure, in part because of the slow onset of symptoms, and
because there was simply no reason to suspect X-rays as the cause. Some early experimenters
did tie X-ray exposure and skin burns together. The first warning of possible adverse effects of
X-rays came from Thomas Edison, William J. Morton, and Nikola Tesla, who each reported eye
irritations from experimentation with X-rays and fluorescent substances.

Today, it can be said that radiation ranks among the most thoroughly investigated causes of
disease. Although much still remains to be learned, more is known about the mechanisms of
radiation damage on the molecular, cellular, and organ system than is known for most other
health-stressing agents. Indeed, it is precisely this vast accumulation of quantitative
dose-response data that enables health physicists to specify radiation levels so that medical,
scientific, and industrial uses of radiation may continue at levels of risk no greater than, and
frequently less than, the levels of risk associated with any other technology.

X-rays and Gamma rays are electromagnetic radiation of exactly the same nature as light, but of
much shorter wavelength. Wavelength of visible light is of the order of 6000 angstroms while the
wavelength of x-rays is in the range of one angstrom and that of gamma rays is 0.0001
angstrom. This very short wavelength is what gives x-rays and gamma rays their power to
penetrate materials that light cannot. These electromagnetic waves are of a high energy level
and can break chemical bonds in materials they penetrate. If the irradiated matter is living tissue
the breaking of chemical bond may result in altered structure or a change in the function of cells.
Early exposures to radiation resulted in the loss of limbs and even lives. Men and women
researchers collected and documented information on the interaction of radiation and the
human body. This early information helped science understand how electromagnetic radiation
interacts with living tissue. Unfortunately, much of this information was collected at great
personal expense.

In many ways radiography has changed little from the early days of its use. We still capture a
shadow image on film using similar procedures and processes technicians were using in the
late 1800’s. Today, however, we are able to generate images of higher quality, and greater
sensitivity through the use of higher quality films with a larger variety of film grain sizes.

Film processing has evolved to an automated state producing more consistent film quality by
removing manual processing variables. Electronics and computers allow technicians to now
capture images digitally. The use of “filmless radiography” provides a means of capturing an
image, digitally enhancing, sending the image anywhere in the world, and archiving an image
that will not deteriorate with time.

Technological advances have provided industry with smaller, lighter, and very portable
equipment that produces high quality X-rays. The use of linear accelerators provides a means of
generating extremely short wavelength, highly penetrating radiation, a concept dreamed of only
a few short years ago. While the process has changed little, technology has evolved allowing
radiography to be widely used in numerous areas of inspection.

Radiography has seen expanded usage in industry to inspect not only welds and castings, but
to radiographically inspect items such as airbags and canned food products. Radiography has
found use in metallurgical material identification and security systems at airports and other
facilities.

Gamma ray inspection has also changed considerably since the Curies’ discovery of radium.
Man-made isotopes of today are far stronger and offer the technician a wide range of energy
levels and half-lives. The technician can select Co-60 which will effectively penetrate very thick
materials, or select a lower energy isotope, such as Tm-170, which can be used to inspect
plastics and very thin or low density materials. Today gamma rays find wide application in
industries such as petrochemical, casting, welding, and aerospace.

RADIATION FUNDAMENTALS
For the purposes of this manual, we can use a simplistic model of an atom. The atom
can be thought of as a system containing a positively charged nucleus and negatively
charged electrons that are in orbit around the nucleus. The nucleus is the central core
of the atom and is composed of two types of particles, protons, which are positively
charged, and neutrons, which have a neutral charge. Each of these particles has a
mass of approximately one atomic mass unit (amu). (1 amu = 1.66E-24 g). Electrons
surround the nucleus in orbital of various energies. (In simple terms, the farther an
electron is from the nucleus, the less energy is required to free it from the atom.)
Electrons are very light compared to protons and neutrons. Each electron has a mass of
approximately 5.5E-4 amu. A nuclide is an atom described by its atomic number (Z) and
its mass number (A). The Z number is equal to the charge (number of protons) in the
nucleus, which is a characteristic of the element. The A number is equal to the total
number of protons and neutrons in the nucleus. Nuclides with the same number of
protons but different number of neutrons are called isotopes.
For example, deuterium (2,1H) and tritium (3,1H) are isotopes of hydrogen with mass
numbers two and three, respectively. There are on the order of 200 stable nuclides and
over 1100 unstable (radioactive) nuclides. Radioactive nuclides can generally be
described as those which have an excess or deficiency of neutrons in the nucleus.

RADIOACTIVE DECAY
Radioactive nuclides (also called radio nuclides) can regain stability by nuclear
transformation (radioactive decay) emitting radiation in the process. The radiation
emitted can be particulate or electromagnetic or both. The various types of radiation
and examples of decay are shown below.
ALPHA (a)
Alpha particles have a mass and charge equal to those of helium nuclei (2 protons + 2
neutrons). Alpha particles are emitted during the decay of some very heavy nuclides (Z
> 83).
226,88Ra —> 222,86Rn + 4,2a

BETA (B-, B+)


Beta particles are emitted from the nucleus and have a mass equal to that of electrons.
Betas can have either a negative charge or a positive charge. Negatively charged betas
are equivalent to electrons and are emitted during the decay of neutron rich nuclides.
14,6C —> 14,7N + 0,-1B + neutrino

Positively charged betas (positrons) are emitted during the decay of proton rich
nuclides.

22,11Na —> 22,10Ne + 0,1B + g

GAMMA (g)
Gammas (also called gamma rays) are electromagnetic radiation (photons). Gammas
are emitted during energy level transitions in the nucleus. They may also be emitted
during other modes of decay.
99m, 43Tc —> 99,43Tc + g

ELECTRON CAPTURE
In certain neutron deficient nuclides, the nucleus will capture an orbital electron resulting
in conversion of a proton into a neutron. This type of decay also involves gamma
emission as well as x-ray emission as other electrons fall into the orbital vacated by the
captured electrons.

125,53I + 0,-1e —> 125,52Te + g


FISSION
Fission is the splitting of an atomic nucleus into two smaller nuclei and usually two or three
neutrons. This process also releases a large amount of energy in the form of gammas and
kinetic energy of the fission fragments and neutrons.

235,92U + 1,0n —> 93,37Rb + 141,55Cs + 2(1,0n) + g

NEUTRONS (n)
For a few radionuclides, a neutron can be emitted during the decay process.

17,7N —> 17,8O* + 0,-1B (*excited state)

17,8O* —> 16,8O + 1,0n

CHARACTERISTICS OF RADIOACTIVE DECAY


In addition to the type of radiation emitted, the decay of a radionuclide can be
described by the following characteristics.

HALF-LIFE
The half-life of a radionuclide is the time required for one-half of a collection of atoms of
that nuclide to decay. Decay is a random process which follows an exponential curve.
The number of radioactive nuclei remaining after time (t) is given by:

N(t) = N(0) x exp(-0.693t/T)

where

N(0) = original number of atoms

N(t) = number remaining at time t

t = decay time

T = half-life

ENERGY
The basic unit used to describe the energy of a radiation particle or photon is the
electron volt (eV). An electron volt is equal to the amount of energy gained by an
electron passing through a potential difference of one volt. The energy of the radiation
emitted is a characteristic of the radionuclide. For example, the energy of the alpha
emitted by Cm-238 will always be 6.52 MeV, and the gamma emitted by Ba-135m will
always be 268 keV. Many radionuclides have more than one decay route. That is, there
may be different possible energies that the radiation may have, but they are discreet
possibilities. However, when a beta particle is emitted, the energy is divided between the
beta and a neutrino. (A neutrino is a particle with no charge and infinitesimally small
mass.) Consequently, a beta particle may be emitted with an energy varying in a
continuous spectrum from zero to a maximum energy (Emax), which is characteristic of
the radionuclide. The average energy is generally around forty percent of the maximum.

RADIATION UNITS
ACTIVITY
1 Curie (Ci) = 3.7E10 disintegrations per second (dps). The becquerel (Bq) is also coming
into use as the International System of Units (SI) measure of disintegration
rate. 1 Bq = 1 dps, 3.7E10 Bq = 1 Ci, and 1 mCi = 37 MBq.
CALCULATION OF ACTIVITIES
The half-life of a radionuclide is the time required for one-half of a collection of atoms of
that nuclide to decay. This is the same as saying it is the time required for the activity of
the sample to be reduced to one-half the original activity. This can be written as:

A(t) = A(0) x exp(-0.693t/T)

where

A(0) = original activity

A(t) = activity at time t

t = decay time

T = half-life

EXAMPLE
P-32 has a half-life of 14.3 days. On January 10, the activity of a P-32 sample was 10
uCi. What will the activity be on February 6? February 6 is 27 days after January 10, so

A(Feb 6) = A(Jan 10) x exp[-0.693(27/14.3)] = 2.7 uCi

A quick estimate could also have been made by noting that 27 days is about two
half-lives, so the new activity would be about one-half of one-half (i.e. one-fourth) of the
original activity.
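The worked example above can be checked with a short script. The function below is a minimal sketch of the decay law from the text (the 0.693 factor is ln 2); the function name is ours, not from any standard library:

```python
import math

def activity(a0, t, half_life):
    """Remaining activity after decay time t (t and half_life in the same units)."""
    return a0 * math.exp(-0.693 * t / half_life)

# P-32 example from the text: 10 uCi on January 10, half-life 14.3 days.
print(round(activity(10.0, 27, 14.3), 1))  # 2.7 uCi on February 6 (27 days later)

# Quick estimate: 27 days is about two half-lives, so roughly 1/4 remains.
print(10.0 / 4)  # 2.5 uCi -- close to the exact value
```

The same function applies to the number-of-atoms formula N(t) given earlier, since activity is proportional to the number of radioactive atoms present.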

EXPOSURE
The unit of radiation exposure in air is the roentgen (R). It is defined as that quantity of
gamma or x-radiation causing ionization in air equal to 2.58E-4 coulombs per kilogram.
Exposure applies only to absorption of gammas and x-rays in air.
CALCULATION OF EXPOSURE RATES
Gamma exposure constants (G) for some radionuclides are shown below. G is the
exposure rate in R/hr at 1 cm from a 1 mCi point source.
Nuclide G
Chromium-51 0.16
Cobalt-57 0.9
Cobalt-60 13.2
Gold-198 2.3
Iodine-125 1.5
Nickel-63 3.1
Radium-226 8.25
Tantalum-182 6.8
Zinc-65 2.7
An empirical rule which may also be used is

6 x Ci x n x E = R/hr @ 1 foot, where Ci = source strength in curies

E = energy of the emitted photons in MeV.

n = fraction of decays resulting in photons with an energy of E.

It should be noted that this formula and the gamma constants are for exposure rates
from gammas and x-rays only. Any dose calculations would also have to include the
contribution from any particulate radiation that may be emitted.
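As a sketch, the empirical rule above can be wrapped in a small function. The source values in the example are illustrative assumptions, not figures from the text; for a nuclide emitting photons at several energies, the contributions would be summed:

```python
def exposure_rate_r_per_hr(curies, n, energy_mev):
    """Empirical rule from the text: R/hr at 1 foot = 6 x Ci x n x E.

    curies     -- source strength in curies
    n          -- fraction of decays yielding a photon of energy E
    energy_mev -- photon energy in MeV
    """
    return 6 * curies * n * energy_mev

# Illustrative example: a 20 Ci source emitting one 0.5 MeV photon per decay.
print(exposure_rate_r_per_hr(20, 1.0, 0.5))  # 60.0 R/hr at 1 foot
```

Remember that this estimates exposure from gammas and x-rays only; particulate radiation must be accounted for separately in any dose calculation.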

X-RAY SOURCES
PRODUCTION OF X-RAYS
X-rays are produced when electrons, traveling at high speed, collide with matter or
change direction. In the usual type of x-ray tube, an incandescent filament supplies the
electrons and thus forms the cathode, or negative electrode, of the tube. A high voltage
applied to the tube drives the electrons to the anode, or target. The sudden stopping of
these rapidly moving electrons in the surface of the target results in the generation of
x-radiation.

The design and spacing of the electrodes and the degree of vacuum are such that no
flow of electrical charge between cathode and anode is possible until the filament is
heated.

THE X-RAY TUBE


The figure below is a schematic diagram of the essential parts of an x-ray tube. A
current of several amperes from a low-voltage source, generally a small transformer,
heats the filament. The focusing cup serves to concentrate the stream of electrons on a
small area of the target, called the focal spot. This stream of electrons constitutes the
tube current and is measured in milliamperes.
Schematic diagram of an x-ray tube.

The higher the temperature of the filament, the greater is its emission of electrons and
the larger the resulting tube current. The tube current is controlled, therefore, by some
device that regulates the heating current supplied to the filament. This is usually
accomplished by a variable-voltage transformer, which energizes the primary of the
filament transformer. Other conditions remaining the same, the x-ray output is
proportional to the tube current.

Most of the energy applied to the tube is transformed into heat at the focal spot, only a
small portion being transformed into x-rays. The high concentration of heat in a small
area imposes a severe burden on the materials and design of the anode. The high
melting point of tungsten makes it a very suitable material for the target of an x-ray
tube. In addition, the efficiency of the target material in the production of x-rays is
proportional to its atomic number. Since tungsten has a high atomic number, it has a
double advantage. The targets of practically all industrial x-ray machines are made of
tungsten.

COOLING
Circulation of oil in the interior of the anode is an effective method of carrying away the
heat. Where this method is not employed, the use of copper for the main body of the
anode provides high heat conductivity, and radiating fins on the end of the anode
outside the tube transfer the heat to the surrounding medium. The focal spot should be
as small as conditions permit, in order to secure the sharpest possible definition in the
radiographic image. However, the smaller the focal spot, the less energy it will
withstand without damage. Manufacturers of x-ray tubes furnish data in the form of
charts indicating the kilovoltages and milliamperages that may be safely applied at
various exposure times. The life of any tube will be shortened considerably if it is not
always operated within the rated capacity.

FOCAL-SPOT SIZE
The principle of the line focus is used to provide a focal spot of small effective size,
though the actual focal area on the anode face may be fairly large, as illustrated in the
figure below. By making the angle between the anode face and the central ray small,
usually 20 degrees, the effective area of the spot is only a fraction of its actual area.
With the focal area in the form of a long rectangle, the projected area in the direction of
the central ray is square.
Diagram of a line-focus tube depicting the relation between actual focal-spot area (area of
bombardment) and effective focal spot, as projected from a 20° anode.
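The foreshortening described above is simple trigonometry: the effective focal-spot length along the central ray is the actual length multiplied by the sine of the anode angle. The sketch below assumes a rectangular focal area and hypothetical dimensions:

```python
import math

def effective_focal_length(actual_length, anode_angle_deg=20.0):
    """Projected (effective) focal-spot length along the central ray.

    The actual focal area on the anode face is foreshortened by
    sin(anode angle), so a 20-degree anode reduces the effective
    length to roughly one-third of the actual length.
    """
    return actual_length * math.sin(math.radians(anode_angle_deg))

# A hypothetical 4 mm actual focal length at a 20-degree anode angle:
print(round(effective_focal_length(4.0), 2))  # 1.37 mm
```

This is why a long rectangular focal area can project as a small square spot: a large bombardment area (good for heat dissipation) still yields a small effective source (good for image definition).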

EFFECTS OF KILOVOLTAGE
As will be seen later, different voltages are applied to the x-ray tube to meet the
demands of various classes of radiographic work. The higher the voltage, the greater
the speed of the electrons striking the focal spot. The result is a decrease in the
wavelength of the x-rays emitted and an increase in their penetrating power and
intensity. It is to be noted that x-rays produced, for example, at 200 kilovolts contain all
the wavelengths that would be produced at 100 kilovolts, and with greater intensity.

In addition, the 200-kilovolt x-rays include some shorter wavelengths that do not exist in
the 100-kilovolt spectrum at all. The higher voltage x-rays are used for the penetration
of thicker and heavier materials.

Most x-ray generating apparatus consists of a filament supply and a high-voltage
supply. The power supply for the x-ray tube filament consists of an insulating step-down
transformer. A variable-voltage transformer or a choke coil may serve for adjustment of
the current supplied to the filament. The high-voltage supply consists of a transformer,
an autotransformer, and, quite frequently, a rectifier.

A transformer makes it possible to change the voltage of an alternating current. In the
simplest form, it consists of two coils of insulated wire wound on an iron core. The coil
connected to the source of alternating current is called the primary winding, the other
the secondary winding. The voltages in the two coils are directly proportional to the
number of turns, assuming 100 percent efficiency.

If, for example, the primary has 100 turns, and the secondary has 100,000, the voltage
in the secondary is 1,000 times as high as that in the primary. At the same time, the
current in the coils is decreased in the same proportion as the voltage is increased. In
the example given, therefore, the current in the secondary is only 1/1,000 that in the
primary. A step-up transformer is used to supply the high voltage to the x-ray tube.
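The ideal-transformer relations in the example above (voltage scales with the turns ratio, current scales inversely) can be sketched as follows; the 220 V, 10 A primary input is an assumed value for illustration, not a figure from the text:

```python
def ideal_transformer(v_primary, i_primary, n_primary, n_secondary):
    """Ideal transformer (100% efficiency, as assumed in the text):
    secondary voltage = primary voltage x turns ratio,
    secondary current = primary current / turns ratio."""
    ratio = n_secondary / n_primary
    return v_primary * ratio, i_primary / ratio

# Example from the text: 100 primary turns, 100,000 secondary turns.
v, i = ideal_transformer(v_primary=220.0, i_primary=10.0,
                         n_primary=100, n_secondary=100_000)
print(v)  # 220000.0 V -- 1,000 times the primary voltage
print(i)  # 0.01 A -- 1/1,000 of the primary current
```

The same relations explain why the tube current (milliamperes) on the secondary side is so much smaller than the current drawn on the primary side.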

An autotransformer is a special type of transformer in which the output voltage is easily
varied over a limited range. In an x-ray generator, the autotransformer permits

adjustment of the primary voltage applied to the step-up transformer and, hence, of the
high voltage applied to the x-ray tube.

The type of voltage waveform supplied by a high-voltage transformer is shown in part A
of the figure below and consists of alternating pulses, first in one direction and then in
the other. Some industrial x-ray tubes are designed for the direct application of the high-
voltage waveform of part A of the figure below, the x-ray tube then acting as its own
rectifier. Usually, however, the high voltage is supplied to a unit called a rectifier, which
converts the pulses into the unidirectional form illustrated in part B of the figure below.
Another type of rectifier may convert the waveform to that shown in part C of the figure
below, but the general idea is the same in both cases—that is, unidirectional voltage is
supplied to the x-ray tube. Sometimes a filter circuit is also provided that “smooths out”
the voltage waves shown in parts Band C of the figure below, so that essentially
constant potential is applied to the x-ray tube, part D of the figure below.

Many different high-voltage waveforms are possible, depending on the design of the
x-ray machine and its installation. The figure below shows idealized waveforms that are
difficult to achieve in practical high-voltage equipment; departures from these idealized
forms vary from one installation to another. Since x-ray output depends on the entire
waveform, this accounts for the variation in radiographic results obtainable from two
different x-ray machines operating at the same value of peak kilovoltage.

Typical voltage waveforms of x-ray machines.

Tubes with the anodes at the end of a long extension cylinder are known as "rod-anode"
tubes. The anodes of these tubes can be thrust through small openings (see
the figure below, top) to facilitate certain types of inspection. If the target is
perpendicular to the electron stream in the tube, the x-radiation through 360 degrees
can be utilized (see the figure below, bottom), and an entire circumferential weld can be
radiographed in a single exposure.

With tubes of this type, one special precaution is necessary. The long path of the
electron stream down the anode cylinder makes the focusing of the electrons on the
target very susceptible to magnetic influences. If the object being inspected is
magnetized—for example, if it has undergone a magnetic inspection and has not been
properly demagnetized—a large part of the electron stream can be wasted on other
than the focal-spot area, and the resulting exposures will be erratic.

The foregoing describes the operation of the most commonly used types of x-ray
equipment. However, certain high-voltage generators operate on principles different
from those discussed.

Top: Rod-anode tube used in the examination of a plug weld. Bottom: Rod-anode tube with
a 360° beam used to examine a circumferential weld in a single exposure.

FLASH X-RAY MACHINES


Flash x-ray machines are designed to give extremely short (microsecond), extremely
intense bursts of x-radiation. They are intended for the radiography of objects in rapid
motion or the study of transient events. The high-voltage generators of these units give
a very short pulse of high voltage, commonly obtained by discharging a condenser
across the primary of the high-voltage transformer. The x-ray tubes themselves usually
do not have a filament. Rather, the cathode is so designed that a high electrical field
“pulls” electrons from the metal of the cathode by a process known as field emission, or
cold emission. Momentary electron currents of hundreds or even thousands of
amperes—far beyond the capacity of a heated filament—can be obtained by this
process.
HIGH-VOLTAGE EQUIPMENT

The betatron may be considered as a high-voltage transformer in which the secondary
consists of electrons circulating in a doughnut-shaped vacuum tube placed between the
poles of an alternating-current electromagnet that forms the primary. The circulating
electrons, accelerated to high speed by the changing magnetic field of the primary, are
caused to impinge on a target within the accelerating tube.

In the linear accelerator, the electrons are accelerated to high velocities by means of a
high-frequency electrical wave that travels along the tube through which the electrons
travel. Both the betatron and the linear accelerator are used for the generation of
x-radiation in the multimillion-volt range.

Typical X-ray Machines and Their Applications

Maximum Voltage   Screens            Applications and Approximate Thickness Limits
50 kV             None               Thin sections of most metals; moderate thicknesses
                                     of graphite and beryllium; small electronic
                                     components; wood, plastics, etc.
150 kV            None or lead foil  5-inch aluminum or equivalent; 1-inch steel or
                                     equivalent
                  Fluorescent        1-1/2-inch steel or equivalent (see Equivalence
                                     Factors)
300 kV            Lead foil          3-inch steel or equivalent
                  Fluorescent        4-inch steel or equivalent
400 kV            Lead foil          3-1/2-inch steel or equivalent
                  Fluorescent        4-1/2-inch steel or equivalent
1000 kV           Lead foil          5-inch steel or equivalent
                  Fluorescent        8-inch steel or equivalent
2000 kV           Lead foil          8-inch steel or equivalent
8 to 25 MeV       Lead foil          16-inch steel or equivalent
                  Fluorescent        20-inch steel or equivalent
In the high-voltage electrostatic generator, the high voltage is supplied by static
negative charges mechanically conveyed to an insulating electrode by a moving belt.
Electrostatic generators are used for machines in the 1- and 2-million-volt range. No
attempt is made here to discuss in detail the various forms of electrical generating
equipment. The essential fact is that electrons must be accelerated to very great
velocities in order that their deceleration, when they strike the target, may produce
x-radiation.

In developing suitable exposure techniques, it is important to know the voltage applied
to the x-ray tube. It is common practice for manufacturers of x-ray equipment to calibrate
their machines at the factory. Thus, the operator may know the voltage across the x-ray
tube from the readings of the voltmeter connected to the primary winding of the high-
voltage transformer.

APPLICATION OF VARIOUS TYPES OF X-RAY APPARATUS


The various x-ray machines commercially available may be roughly classified according
to their maximum voltage. The choice among the various classes will depend on the
type of work to be done. The table above lists voltage ranges and applications of typical
x-ray machines. The voltage ranges are approximate since the exact voltage limits of
machines vary from one manufacturer to another. It should be emphasized that a table
like the one above can serve only as the roughest sort of guide, since x-ray machines
differ in their specifications, and radiographic tasks differ in their requirements.

X-ray machines may be either fixed or mobile, depending on the specific uses for which
they are intended. When the material to be radiographed is portable, the x-ray machine
is usually permanently located in a room protected against the escape of x-radiation.
The x-ray tube itself is frequently mounted on a stand allowing considerable freedom of
movement. For the examination of objects that are fixed or that are movable only with
great difficulty, mobile x-ray machines may be used.

These may be truck-mounted for movement to various parts of a plant, or they may be
small and light enough to be carried onto scaffolding, through manholes, or even
self-propelled to pass through pipelines. Semiautomatic machines have been designed
for the radiography of large numbers of relatively small parts on a “production line”
basis. During the course of an exposure, the operator may arrange the parts to be
radiographed at the next exposure, and remove those just radiographed, with an
obvious saving in time.

GAMMA-RAY SOURCES
Radiography with gamma rays has the advantages of simplicity of the apparatus used,
compactness of the radiation source, and independence from outside power. This
facilitates the examination of pipe, pressure vessels, and other assemblies in which
access to the interior is difficult; field radiography of structures remote from power
supplies; and radiography in confined spaces, as on shipboard.

In contradistinction to x-ray machines, which emit a broad band of wavelengths,
gamma-ray sources emit one or a few discrete wavelengths. The figure below shows
the gamma-ray spectrum of cobalt 60 and the principal gamma rays of iridium 192. (The
most intense line in each spectrum has been assigned an intensity of 1.0.)
Gamma-ray spectrum of cobalt 60 (solid lines) and principal gamma rays of iridium 192
(dashed lines).

Note that gamma rays are most often specified in terms of the energy of the individual
photon, rather than in the wavelength. The unit of energy used is the electron volt
(eV)—an amount of energy equal to the kinetic energy an electron attains in falling
through a potential difference of 1 volt. For gamma rays, multiples—kiloelectron volts
(keV; 1 keV = 1,000 eV) or million electron volts (MeV; 1 MeV = 1,000,000 eV)—are
commonly used.

A gamma ray with an energy of 0.5 MeV (500 keV) is equivalent in wavelength and in
penetrating power to the most penetrating radiation emitted by an x-ray tube operating
at 500 kV. The bulk of the radiation emitted by such an x-ray tube will be much less
penetrating (much softer) than this. Thus the radiations from cobalt 60, for example,
with energies of 1.17 and 1.33 MeV, will have a penetrating power (hardness) about
equal to that of the radiation from a 2-million-volt x-ray machine.

For comparison, a gamma ray having an energy of 1.2 MeV has a wavelength of about
0.01 angstrom (Å); a 120 keV gamma ray has a wavelength of about 0.1 angstrom.
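These energy-wavelength figures follow from the relation λ (in angstroms) ≈ 12.4 / E (in keV), since photon energy and wavelength are inversely related. A minimal sketch; the function name and the printed rounding are illustrative:

```python
def wavelength_angstrom(energy_kev):
    """Photon wavelength in angstroms from photon energy in keV
    (lambda = hc/E, with hc ~= 12.4 keV*angstrom)."""
    return 12.4 / energy_kev

print(round(wavelength_angstrom(1200), 3))  # 0.01 -> ~0.01 A for a 1.2 MeV photon
print(round(wavelength_angstrom(120), 2))   # 0.1  -> ~0.1 A for a 120 keV photon
```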

The wavelengths (or energies of radiation) emitted by a gamma-ray source, and their
relative intensities, depend only on the nature of the emitter. Thus, the radiation quality
of a gamma ray source is not variable at the will of the operator.
The gamma rays from cobalt 60 have relatively great penetrating power and can be
used, under some conditions, to radiograph sections of steel 9 inches thick, or the
equivalent. Radiations from other radioactive materials have lower energies; for
example, iridium 192 emits radiations roughly equivalent to the x-rays emitted by a
conventional x-ray tube operating at about 600 kV.

The intensity of gamma radiation depends on the strength of the particular source used,
specifically, on the number of radioactive atoms in the source that disintegrate in one
second. This, in turn, is usually given in terms of curies (1 Ci = 3.7 × 10^10 disintegrations
per second). For small or moderate-sized sources emitting penetrating gamma rays, the
intensity of radiation emitted from the source is proportional to the source activity in curies.

The proportionality between the external gamma-ray intensity and the number of curies
fails, however, for large sources or for those emitting relatively low-energy gamma rays.
In these latter cases, gamma radiation given off by atoms in the middle of the source
will be appreciably absorbed (self-absorption) by the overlying radioactive material itself.
Thus, the intensity of the useful radiation will be reduced to some value below that
which would be calculated from the number of curies and the radiation output of a
physically small gamma-ray source.

A term often used in speaking of radioactive sources is specific activity, a measure of the
degree of concentration of a radioactive source. Specific activity is usually expressed in
terms of curies per gram. Of two gamma-ray sources of the same material and activity,
the one having the greater specific activity will be the smaller in actual physical size.
Thus, the source of higher specific activity will suffer less from self-absorption of its own
gamma radiation. In addition, it will give less geometrical unsharpness in the radiograph
or, alternatively, will allow shorter source-film distances and shorter exposures.

Gamma-ray sources gradually lose activity with time, the rate of decrease of activity
depending on the kind of radioactive material (see the table below). For instance, the
intensity of the radiation from a cobalt 60 source decreases to half its original value in
about 5 years; and that of an iridium 192 source, in about 70 days. Except in the case
of radium, now little used in industrial radiography, this decrease in emission
necessitates more or less frequent revision of exposures and replacement of sources.

Radioactive Materials Used in Industrial Radiography

Radioactive Element   Half-Life   Energy of Gamma     Gamma-Ray Dosage Rate (roentgens[1]
                                  Rays (MeV)          per hour per curie at 1 metre)
Thulium 170           127 days    0.084 and 0.542[2]  —
Iridium 192           70 days     0.137 to 0.651[3]   0.55
Cesium 137            33 years    0.66                0.39
Cobalt 60             5.3 years   1.17 and 1.33       1.35

[1] The roentgen (R) is a special unit for x- and gamma-ray exposure (ionization of air):
1 roentgen = 2.58 × 10^-4 coulombs per kilogram (C·kg^-1). (The International Commission
on Radiation Units and Measurements [ICRU] recommends that the roentgen be replaced
gradually by the SI unit [C·kg^-1] by about 1985.)
[2] These gamma rays are accompanied by a more or less intense background of much
harder radiation. The proportion of hard radiation depends upon the chemical nature and
physical size of the source.
[3] Twelve gamma rays.

The exposure calculations necessitated by the gradual decrease in the radiation output
of a gamma-ray source can be facilitated by the use of decay curves similar to those for
iridium 192 shown in the figure below. The two curves contain the same information, the
only difference being that the curve on the left shows activity on a linear scale and the
curve on the right on a logarithmic scale. The type shown on the right is easier to draw:
locate point X, at the intersection of the half-life of the isotope (horizontal scale) and the
"50 percent remaining activity" line (vertical scale); then draw a straight line from the
"zero time, 100 percent activity" point Y through point X.

Decay curves for iridium 192. Left: Linear plot. Right: Logarithmic plot.
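The same result can be computed directly instead of read from a curve. A sketch assuming simple exponential decay and the 70-day iridium 192 half-life from the table above; the function name and the 100-curie starting activity are illustrative:

```python
def remaining_activity(initial_ci, half_life_days, elapsed_days):
    """Activity after exponential decay: A = A0 * (1/2)**(t / T_half)."""
    return initial_ci * 0.5 ** (elapsed_days / half_life_days)

# Iridium 192 (half-life ~70 days): after one half-life, half remains.
print(remaining_activity(100.0, 70.0, 70.0))   # 50.0 Ci
# After 140 days (two half-lives), one quarter remains.
print(remaining_activity(100.0, 70.0, 140.0))  # 25.0 Ci
```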

It is difficult to give specific recommendations on the choice of gamma-ray emitter and
source strength (see the figure below). These choices will depend on several factors,
among which are the type of specimen radiographed, allowable exposure time, storage
facilities available, protective measures required, and convenience of source
replacement. The values given in the table below for practical application are therefore
intended only as a rough guide; in any particular case they will depend on the source
size used and the requirements of the operation.

Typical industrial gamma-ray arrangement: gamma-ray source in a combination "camera"
and storage container.

Industrial Gamma-Ray Sources and Their Applications

Source        Applications and Approximate Practical Thickness Limits
Thulium 170   Plastics, wood, light alloys; 1/2-inch steel or equivalent
Iridium 192   1-1/2- to 2-1/2-inch steel or equivalent
Cesium 137    1- to 3-1/2-inch steel or equivalent
Cobalt 60     2-1/2- to 9-inch steel or equivalent

Note: The atomic number of an element is the number of protons in the nucleus of the
atom and is equal to the number of electrons outside the nucleus. In the periodic table
the elements are arranged in order of increasing atomic number. Hydrogen has an atomic
number of 1; iron, 26; copper, 29; tungsten, 74; and lead, 82.
GEOMETRIC PRINCIPLES

A radiograph is a shadow picture of an object that has been placed in the path of an x-ray
or gamma-ray beam, between the tube anode and the film or between the source of gamma
radiation and the film. It naturally follows, therefore, that the appearance of an image thus
recorded is materially influenced by the relative positions of the object and the film and by
the direction of the beam. For these reasons, familiarity with the elementary principles of
shadow formation is important to those making and interpreting radiographs.

GENERAL PRINCIPLES
Since x-rays and gamma rays obey the common laws of light, their shadow formation may
be explained in a simple manner in terms of light. It should be borne in mind that the
analogy between light and these radiations is not perfect since all objects are, to a greater
or lesser degree, transparent to x-rays and gamma rays and since scattering presents
greater problems in radiography than in optics. However, the same geometric laws of
shadow formation hold for both light and penetrating radiation.

Suppose, as in Figure A below, that there is light from a point L falling on a white card C,
and that an opaque object O is interposed between the light source and the card. A shadow
of the object will be formed on the surface of the card. This shadow cast by the object will
naturally show some enlargement because the object is not in contact with the card; the
degree of enlargement will vary according to the relative distances of the object from the
card and from the light source. The law governing the size of the shadow may be stated:

The diameter of the object is to the diameter of the shadow as the distance of the light from
the object is to the distance of the light from the card.

Mathematically, the degree of enlargement may be calculated by use of the following
equation:

    So / Si = Do / Di

where So is the size of the object; Si is the size of the shadow (or the radiographic
image); Do is the distance from the source of radiation to the object; and Di is the
distance from the source of radiation to the recording surface (or radiographic film).
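Put another way, with So the object size, Do the source-object distance, and Di the source-to-recording-surface distance, the shadow size is So × Di / Do. A numerical sketch; the function name and the sample dimensions are illustrative:

```python
def shadow_size(object_size, source_object_dist, source_film_dist):
    """Shadow (image) size from similar triangles: Si = So * Di / Do."""
    return object_size * source_film_dist / source_object_dist

# An object 2 in. across, 40 in. from the source, recording surface at 48 in.:
print(shadow_size(2.0, 40.0, 48.0))  # 2.4 -> an enlargement factor of 1.2
```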

The degree of sharpness of any shadow depends on the size of the source of light and on the
position of the object between the light and the card—whether nearer to or farther from
one or the other. When the source of light is not a point but a small area, the shadows cast
are not perfectly sharp (in Figures B to D) because each point in the source of light casts its
own shadow of the object, and each of these overlapping shadows is slightly displaced from
the others, producing an ill-defined image.

The form of the shadow may also differ according to the angle that the object makes with
the incident light rays. Deviations from the true shape of the object as exhibited in its
shadow image are referred to as distortion. Figures A to F show the effect of changing the
size of the source and of changing the relative positions of source, object, and card. From
an examination of these drawings, it will be seen that the following conditions must be
fulfilled to produce the sharpest, truest shadow of the object:

1. The source of light should be small, that is, as nearly a point as can be
obtained. Compare Figures A and C.
2. The source of light should be as far from the object as practical. Compare Figures
B and C.
3. The recording surface should be as close to the object as possible. Compare Figures
B and D.
4. The light rays should be directed perpendicularly to the recording surface. See
Figures A and E.
5. The plane of the object and the plane of the recording surface should be parallel.
Compare Figures A and F.

Illustrating the general geometric principles of shadow formation as explained in these
sections.

RADIOGRAPHIC SHADOWS
The basic principles of shadow formation must be given primary consideration in order
to assure satisfactory sharpness in the radiographic image and essential freedom from
distortion. A certain degree of distortion naturally will exist in every radiograph because
some parts will always be farther from the film than others, the greatest magnification
being evident in the images of those parts at the greatest distance from the recording
surface (see the figure above).

Note, also, that there is no distortion of shape in Figure E above—a circular object having
been rendered as a circular shadow. However, under circumstances similar to those
shown, it is possible that spatial relations can be distorted. In the figure below, the two
circular objects can be rendered either as two separate circles (A) or as a figure-eight-shaped
shadow (B). It should be observed that both lobes of the figure eight have circular
outlines.
Two circular objects can be rendered as two separate circles (A) or as two overlapping
circles (B), depending on the direction of the radiation.

Distortion cannot be eliminated entirely, but by the use of an appropriate source-film
distance it can be lessened to a point where it will not be objectionable in the
radiographic image.

APPLICATION TO RADIOGRAPHY
The application of the geometric principles of shadow formation to radiography leads to
five general rules. Although these rules are stated in terms of radiography with x-rays, they
also apply to gamma-ray radiography.

1. The focal spot should be as small as other considerations will allow, for there is a
definite relation between the size of the focal spot of the x-ray tube and the
definition in the radiograph. A large-focus tube, although capable of withstanding
large loads, does not permit the delineation of as much detail as a small-focus
tube. Long source-film distances will aid in showing detail when a large-focus
tube is employed, but it is advantageous to use the smallest focal spot
permissible for the exposures required.

B and H in the figure below show the effect of focal-spot size on image quality.
As the focal-spot size is increased from 1.5 mm (B) to 4.0 mm (H), the definition
of the radiograph starts to degrade. This is especially evident at the edges of the
chambers, which are no longer sharp.

2. The distance between the anode and the material examined should always be as
great as is practical. Comparatively long source distances should be used in
the radiography of thick materials to minimize the fact that structures farthest
from the film are less sharply recorded than those nearer to it. At long distances,
radiographic definition is improved and the image is more nearly the actual size
of the object.

A to D in the figure below show the effects of source-film distance on image
quality. As the source-film distance is decreased from 68 inches (A) to 12 inches
(D), the image becomes more distorted, until at 12 inches it is no longer a true
representation of the casting. This is particularly evident at the edges of the
casting, where the distortion is greatest.

3. The film should be as close as possible to the object being radiographed. In
practice, the film, in its cassette or exposure holder, is placed in contact
with the object.

In B and E of the figure below, the effects of object-film distance are evident. As
the object-film distance is increased from zero (B) to 4 inches (E), the image
becomes larger and the definition begins to degrade. Again, this is especially
evident at the edges of the chambers, which are no longer sharp.

4. The central ray should be as nearly perpendicular to the film as possible to
preserve spatial relations.

5. As far as the shape of the specimen will allow, the plane of maximum interest
should be parallel to the plane of the film.

Finally, in F and G of the figure below, the effects of object-film-source
orientation are shown. When compared to B, image F is extremely distorted
because, although the film is perpendicular to the central ray, the casting is at a
45° angle to the film and spatial relationships are lost. As the film is rotated to be
parallel with the casting (G), the spatial relationships are maintained and the
distortion is lessened.

These graphics illustrate the effects on image quality when the geometric exposure factors
are changed.

CALCULATION OF GEOMETRIC UNSHARPNESS
The width of the “fuzzy” boundary of the shadows in B, C, and D in the above figure is
known as the geometric unsharpness (Ug). Since the geometric unsharpness can
strongly affect the appearance of the radiographic image, it is frequently necessary to
determine its magnitude.

From the laws of similar triangles, it can be seen (in the figure below) that:

    Ug = F × t / Do

where Ug is the geometric unsharpness, F is the size of the radiation source, Do is the
source-object distance, and t is the object-film distance. Since the maximum
unsharpness involved in any radiographic procedure is usually the significant quantity,
the object-film distance (t) is usually taken as the distance from the source side of the
specimen to the film.
Geometric construction for determining geometric unsharpness (Ug).

Do and t must be measured in the same units; inches are customary, but any other unit
of length—say, centimeters—would also be satisfactory. So long as Do and t are in the
same units, the formula above will always give the geometric unsharpness Ug in
whatever units were used to measure the dimensions of the source. The projected size
of the focal spots of x-ray tubes is usually stated in millimeters, and Ug will also be in
millimeters. If the source size is stated in inches, Ug will be in inches.
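A quick sketch of this calculation, with the source size in millimetres and the distances in inches, as discussed above; the function name and the sample values are illustrative:

```python
def geometric_unsharpness(source_size, source_object_dist, object_film_dist):
    """Ug = F * t / Do. Ug comes out in the units of the source size;
    Do and t must share the same unit of length."""
    return source_size * object_film_dist / source_object_dist

# 5 mm source, 40 in. source-object distance, 2 in. thick specimen
# (object-film distance measured from the source side of the specimen):
print(geometric_unsharpness(5.0, 40.0, 2.0))  # 0.25 mm
```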

For rapid reference, graphs of the type shown in the figure below can be prepared by
the use of the equation above. These graphs relate source-film distance, object-film
distance and geometric unsharpness. Note that the lines of the figure are all straight.
Therefore, for each source-object distance, it is only necessary to calculate the value of
Ug for a single specimen thickness, and then draw a straight line through the point so
determined and the origin. It should be emphasized, however, that a separate graph of
the type shown in the figure below must be prepared for each size of source.

Graph relating geometric unsharpness (Ug) to specimen thickness and source-object
distance, for a 5-millimetre source size.

PINHOLE PROJECTION OF FOCAL SPOT
Since the dimensions of the radiation source have considerable effect on the sharpness
of the shadows, it is frequently desirable to determine the shape and size of the x-ray
tube focal spot. This may be accomplished by the method of pinhole radiography, which
is identical in principle with that of the pinhole camera. A thin lead plate containing a
small hole is placed exactly midway between the focal spot and the film, and lead
shielding is so arranged that no x-rays except those passing through the pinhole reach
the film (See the figure below).

The developed film will show an image that, for most practical radiographic purposes,
may be taken as equal in size and shape to the focal spot (see the second figure
below). If precise measurements are required, the measured dimensions of the
focal-spot image should be decreased by twice the diameter of the pinhole.
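With the pinhole plate exactly midway between focal spot and film, the correction described above amounts to subtracting twice the pinhole diameter from the measured image. A sketch, dimensions in millimetres; the function name and sample values are illustrative:

```python
def focal_spot_size(measured_image_mm, pinhole_diameter_mm):
    """True focal-spot size for a pinhole plate placed exactly midway
    between the focal spot and the film: measured image minus twice
    the pinhole diameter (the correction quoted in the text)."""
    return measured_image_mm - 2.0 * pinhole_diameter_mm

# A 4.5 mm measured image obtained through a 0.25 mm pinhole:
print(focal_spot_size(4.5, 0.25))  # 4.0 mm
```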

Schematic diagram showing production of a pinhole picture of an x-ray tube focal spot.

The method is applicable to x-ray tubes operating up to about 250 kV. Above this
kilovoltage, however, the thickness of the lead needed makes the method impractical.
(The entire focal spot cannot be “seen” from the film side of a small hole in a thick
plate.) Thus the technique cannot be used for high-energy x-rays or the commonly used
gamma-ray sources, and much more complicated methods, suitable only for the
laboratory, must be employed.

A focus-film distance of 24 inches is usually convenient. Of course, the time of exposure
will be much greater than that required to expose the film without the pinhole plate
because so little radiation can get through such a small aperture. In general, a needle or
a No. 60 drill will make a hole small enough for practical purposes.

A density in the image area of 1.0 to 2.0 is satisfactory. If the focal-spot area is
overexposed, the estimate of focal-spot size will be exaggerated, as can be seen by
comparing the two images in the figure below.

Pinhole pictures of the focal spot of an x-ray tube. A shorter exposure (left) shows only
the focal spot. A longer exposure (right) shows, as well as the focal spot, some details of
the tungsten button and copper anode stem. The x-ray images of these parts result
from their bombardment with stray electrons.

GEOMETRIC UNSHARPNESS LIMITATIONS
The under-mentioned limitations are as per the ASME Boiler and Pressure Vessel Code,
Section V (T-285):
Material Thickness Ug Maximum

Under 2 inches 0.020 inches

2 through 3 inches 0.030 inches

Over 3 through 4 inches 0.040 inches

Greater than 4 inches 0.070 inches


Note: Material thickness is the thickness on which the penetrameter is based.
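The limits above can be expressed as a simple lookup. A sketch with thicknesses and Ug limits in inches, following the ranges quoted in the table; the function name is illustrative:

```python
def ug_limit(material_thickness_in):
    """Maximum allowed geometric unsharpness (inches) for a given
    material thickness (inches), per the limits quoted above."""
    if material_thickness_in < 2.0:
        return 0.020
    elif material_thickness_in <= 3.0:   # 2 through 3 inches
        return 0.030
    elif material_thickness_in <= 4.0:   # over 3 through 4 inches
        return 0.040
    else:                                # greater than 4 inches
        return 0.070

print(ug_limit(1.5))  # 0.02
print(ug_limit(3.5))  # 0.04
print(ug_limit(5.0))  # 0.07
```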

RADIOGRAPHY TECHNIQUES

INSPECTION OF SIMPLE SHAPES

It is usually best to direct the radiation at right angles to the surface, along a path that
represents a minimum thickness to the radiation. This not only minimizes the exposure
time but also assures that the internal structure will present the greatest subject contrast
to the radiation. When the presence of a planar discontinuity is suspected, radiation
must be essentially parallel to the expected occurrence of the discontinuity, regardless
of the test piece thickness in that direction.

Flat plates are the simplest of the shapes that can be inspected by radiography. The
most favorable direction for the viewing of a flat plate is the one in which the radiation
impinges perpendicular to the plate surface and penetrates the shortest dimension.
When large areas are to be inspected, they should be serially radiographed, each
exposure overlapping the next (usually by 1 inch). Use of a relatively short SFD and
multiple overlapping exposures is more satisfactory than taking a single radiograph.

Curved plates are inspected using views similar to those for flat shapes. For best
resolution, the recording plane should be shaped to conform to the back surface of the
curved plate. If the curved plate has its convex side towards the radiation, it is usually
advantageous to minimize distortion by making multiple exposures with reduced
individual areas of coverage. If the curved plate has its concave side towards the
radiation, a distortion-free image can be obtained by placing the source at the center of
the radius of curvature.

It is equally advantageous to take multiple exposures with a number of films wrapped
around the specimen and the source placed centrally. This technique is referred to as a
"panoramic shot." It can also be applied to radiograph several small parts in a single
exposure: the test pieces are placed in a circle around the source, which emits equal
intensity in all directions.

Solid cylinders are inspected either by a longitudinal view, which is generally
satisfactory for short, large-diameter cylinders, or by a transverse view for relatively
large-diameter cylinders. The thickness of a cylindrical test piece varies across a
diametrical plane in a similar manner as for a sphere, being thickest at the center and
progressively thinning to almost nil at the outer edges. Edge definition is relatively good
for light-metal cylinders less than about 2 inches in diameter and for heavy-metal
cylinders less than about 1 inch in diameter.

Sometimes section-equalizing techniques are helpful. In this technique, the outer edges
of the cylinder are built up to present greater radiographic density to the x-rays.
Close-fitting solid cradles, liquid absorbers, or many layers of shim stock, all having
radiographic absorption characteristics equivalent to those of the cylinder, are alternative
means of equalizing radiographic density.

INSPECTION OF TUBULAR SECTIONS

There are three major inspection techniques for tubular sections, namely:

1. DOUBLE-WALL DOUBLE-IMAGE technique
2. DOUBLE-WALL SINGLE-IMAGE technique
3. SINGLE-WALL SINGLE-IMAGE technique

DOUBLE-WALL DOUBLE-IMAGE TECHNIQUE

This technique is mainly applicable to sections of no more than 3-1/2 inches OD. It
produces a radiograph in which the images of both walls are superimposed on one
another. The beam of radiation is directed towards one side of the section and the
recording medium is placed on the other side, usually tangent to the section. Two
exposures 90° apart are required to provide complete coverage when the ratio of the
outside diameter to the inside diameter is 1.4 or less. The area at the edges of the pipe
exhibits too much subject contrast for meaningful interpretation, and hence more than
one exposure is required.

Page 176
When the ratio of the outside diameter to the inside diameter is greater than 1.4 (i.e.,
when radiographing a thick-walled specimen), the number of exposures required to
provide complete coverage can be determined by multiplying the ratio by 1.7 and
rounding up to the next highest integer. For instance, to examine a 2-inch OD specimen
with a 1-inch diameter axial hole, a total of 1.7 × (2/1) = 3.4, or 4 shots, must be taken.
The circumferential displacement between shots is found by dividing 180° by the
number of shots. When an odd number of exposures is required for complete coverage,
the angular spacing between shots can be determined by dividing 360° by the number
of shots.
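These exposure-count rules can be sketched as follows. The function name is illustrative, and "rounding off to the next highest integer" is taken here as rounding up:

```python
import math

def dwdi_exposures(od, idia):
    """Number of double-wall double-image exposures and their angular
    spacing (degrees), per the rules quoted above: 2 shots 90 degrees
    apart for OD/ID <= 1.4, otherwise ceil(1.7 * OD/ID) shots spaced
    180/n degrees (360/n degrees when n is odd)."""
    ratio = od / idia
    if ratio <= 1.4:
        return 2, 90.0
    shots = math.ceil(1.7 * ratio)
    spacing = 360.0 / shots if shots % 2 else 180.0 / shots
    return shots, spacing

# The text's example: 2 in. OD with a 1 in. axial hole -> 1.7 x 2 = 3.4 -> 4 shots.
print(dwdi_exposures(2.0, 1.0))  # (4, 45.0)
```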

A variation of the DWDI technique is sometimes called the “Corona” or “Offset” technique.


It is often used for the inspection of small-diameter pipe and tubing. The central beam is
directed at an acute angle to the run of the tube so that the weld is projected on the film
as an ellipse rather than a straight band. The offset angle should be large enough
that the images of the top and bottom sections do not overlap, but not so large
as to cause excessive distortion. The larger the offset angle, the greater the
probability that the technique will fail to detect incomplete fusion at the root.

The correct number of exposures and the circumferential location of the corresponding
views can be determined in the same manner as that for superimposed technique. In all
the variations of the DWDI, the image of the section of the cylinder that is closest to the
radiation source will exhibit the greatest amount of unsharpness. Hence the
penetrameters should be located where they will evaluate the image of the section that
is closest to the source.

DOUBLE-WALL SINGLE-IMAGE TECHNIQUE

The DWSI technique is applicable to hollow cylinders and tubular sections exceeding 3 ½"
in outside diameter. The technique produces a radiographic image of the wall that is
closest to the recording plane, although the radiation penetrates both walls. The
source is positioned close to the section, so that blurring caused by geometric
unsharpness makes the image of the wall closest to the source completely
indistinguishable; only the image of the wall section closest to the film is sharply
defined. Exposures are calculated on the basis of the double-wall thickness of the hollow
section, as they are for the DWDI technique.

The area of coverage is limited by the geometric unsharpness and distortion at the
edges of the resolved image for hollow cylinders that are less than 15" in outside
diameter. For large cylinders, the film size is usually a limiting factor.

TECHNIQUE DEVELOPMENT

Sensitivity
The choice of a radiographic technique is usually based on sensitivity. A weld zone will
always differ in structure and density from the parent material, and ordinarily
there is no need for these differences to be highlighted. Sensitivity is expressed as a
percentage, with 2% applying to fairly critical projects and 4% to less critical ones.
Exposure setup for Welds
The techniques mentioned above apply to a simple butt joint in a part that is flat or
tubular. The following sections discuss the techniques used for weld joints other
than butt welds.

Lap Joints
The butt joint with a groove is the simplest welding arrangement and the one yielding
the easiest interpretation, because of the general uniformity of the assembly. A less
uniform assembly is the lap joint secured by two fillet welds. It is possible to make a
fillet-weld joint that appears full-sized from the outside but is actually hollow. A
radiographic requirement is imposed to ensure that the bond has been made over the
complete edge of the applied pipe. The simplest technique is to shoot the radiograph
through the weld area. Because the weld is triangular, there is no means of directing
the beam so that a uniform thickness is examined. Also, the film plane cannot be
placed close to the weld zone.

The interpretation of the film from this seemingly simple joint is quite complex,
because there is a large film-density gradient over a small dimension. In this case, one
could only expect to confirm the presence of relatively large discontinuities. This is
acceptable because of the fact that the use of a simple lap joint indicates that the joint is
not expected to meet high strength requirements.

T-joints

A further increase in complexity occurs with the T-joint, of which there are two types:
one with full penetration, making a completely welded assembly and the simpler case
with fillet welds at the corners. When there are only two fillet welds involved, the
radiographic assessment becomes very similar to the lap joint: a varying thickness of
material is presented to the radiation beam and the film plane is separated from the
weld metal by the lower plate.

A refined T-joint involves a groove weld, or welds rather than a simple fillet weld. The
vertical member is prepared from one or both sides. The complete weld is made
through the thickness of the web. When such a weld is radiographed, a distorted image
will appear and the effective sensitivity will be reduced because of the base material
thickness. The weld in a prepared T-joint can be considered fully load-bearing and may
therefore have performance requirements that will justify a sensitive radiographic
technique.

Corner Joints
As with the simple lap joint and the simple T-joint, the corner joint may be assembled
with a minimum of welding, using a fillet weld in the corner. This weld is not used when
full loading is required and thus does not require refined radiography. If radiography is
used, the joint has a special advantage: there could be a preferred direction for the
beam that would not involve the welded portion of the joint.

Fillet Welds
The fillet weld is commonly used with the lap joint, the corner joint, and the T-joint; the
fillet weld may also be used in conjunction with a groove weld in a corner or T-joint. For a
90° T-joint with a symmetrical fillet weld, the shortest path through the weld is the one
bisecting the 90° angle, giving a radiation beam angle of 45°. If the film holder is
positioned in contact with the flange of the T-joint, then the radiation must pass through
a relatively large thickness of metal. Another approach to radiography of fillet welds is to
position the film on the weld side of the joint. In this case the thickness of the film
holder will have some considerable influence on the resolution, but this limitation only
applies when the film holder is flat. Flexible holders, if used, can be brought closer to
the weld bead.

Other arrangements may be used to compensate for the uneven geometry of the fillet
weld. One example is to introduce metal wedges prepared with shapes complementary
to the fillet shape and fitted between the radiation source and the film. The wedges may
be in contact with the weld surface or on the opposite side, where they should be in
contact with the film holder. This setup can be used with any of the three joint types
using fillet weld.

PENETRAMETERS
A standard test piece is usually included in every radiograph as a check on the
adequacy of the radiographic technique. The test piece is commonly referred to as a
penetrameter in North America and an Image Quality Indicator (IQI) in Europe. The
penetrameter (or IQI) is made of the same material, or a similar material, as the
specimen being radiographed, and is of a simple geometric form. It contains some small
structures (holes, wires, etc), the dimensions of which bear some numerical relation to
the thickness of the part being tested. The image of the penetrameter on the radiograph
is permanent evidence that the radiographic examination was conducted under proper
conditions.

Codes or agreements between customer and vendor may specify the type of
penetrameter, its dimensions, and how it is to be employed. Even if penetrameters are
not specified, their use is advisable, because they provide an effective check of the
overall quality of the radiographic inspection.
Hole Type Penetrameters
The common penetrameter consists of a small rectangular piece of metal, containing
several (usually three) holes, the diameters of which are related to the thickness of the
penetrameter (see the figure below).

The ASTM (American Society for Testing and Materials) penetrameter contains three
holes of diameters T, 2T, and 4T, where T is the thickness of the penetrameter.
Because of the practical difficulties in drilling minute holes in thin materials, the
minimum diameters of these three holes are 0.010, 0.020, and 0.040 inches,
respectively. These penetrameters may also have a slit similar to the ASME
penetrameter described below.

Thick penetrameters of the hole type would be very large, because of the diameter of the
4T hole. Therefore, penetrameters more than 0.180 inch thick are made in the form of
discs, the diameters of which are 4 times the thickness (4T) and which contain two holes
of diameters T and 2T. A lead number showing the thickness in thousandths of an inch
identifies each penetrameter.

The ASTM penetrameter permits the specification of a number of levels of radiographic
sensitivity, depending on the requirements of the job. For example, the specifications
may call for a radiographic sensitivity level of 2-2T. The first symbol (2) indicates that
the penetrameter shall be 2 percent of the thickness of the specimen; the second (2T)
indicates that the hole having a diameter twice the penetrameter thickness shall be
visible on the finished radiograph.

The quality level 2-2T is probably the one most commonly specified for routine
radiography. However, critical components may require more rigid standards, and a
level of 1-2T or 1-1T may be required. On the other hand, the radiography of less critical
specimens may be satisfactory if a quality level of 2-4T or 4-4T is achieved. The more
critical the radiographic examination—that is, the higher the level of radiographic
sensitivity required—the lower the numerical designation for the quality level.
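The quality-level arithmetic can be illustrated with a short calculation. This is a hypothetical helper, assuming the ASTM minimum hole diameters quoted above; real work must use the sizes tabulated in the governing code:

```python
def penetrameter_for_level(specimen_thickness, level="2-2T"):
    """Return (penetrameter thickness, essential hole diameter) in inches.

    The first figure of the level is the penetrameter thickness as a percent
    of the specimen thickness; the second names the essential hole (1T, 2T,
    or 4T).  Minimum drillable hole diameters of 0.010/0.020/0.040 in. apply
    to the T, 2T, and 4T holes respectively.
    """
    percent, hole = level.split("-")
    t = specimen_thickness * int(percent) / 100.0
    multiple = int(hole.rstrip("T"))
    minimum = {1: 0.010, 2: 0.020, 4: 0.040}[multiple]
    return round(t, 4), round(max(multiple * t, minimum), 4)

# 1" of steel at quality level 2-2T: 0.020" penetrameter, 0.040" essential hole
print(penetrameter_for_level(1.0, "2-2T"))  # -> (0.02, 0.04)
```

Note how the drilling minimum takes over on thin sections: at 0.5" and level 1-1T the computed T hole of 0.005" is replaced by the 0.010" minimum.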

American Society for Testing and Materials (ASTM) penetrameter (ASTM E 142-68).

Some sections of the ASME (American Society of Mechanical Engineers) Boiler and
Pressure Vessel Code require a penetrameter similar in general to the ASTM
penetrameter. It contains three holes, one of which is 2T in diameter, where T is the
penetrameter thickness. Customarily, the other two holes are 3T and 4T in diameter,
but other sizes may be used. Minimum hole size is 1/16 inch.

Penetrameters 0.010 inch and less in thickness also contain a slit 0.010 inch wide and
1/4 inch long. A lead number designating the thickness in thousandths of an inch
identifies each.

Equivalent Penetrameter Sensitivity


Ideally, the penetrameter should be made of the same material as the specimen.
However, this is sometimes impossible because of practical or economic difficulties. In
such cases, the penetrameter may be made of a radiographically similar material—that is,
a material having the same radiographic absorption as the specimen, but from which it
is easier to make penetrameters. Tables of radiographically equivalent materials have
been published wherein materials having similar radiographic absorptions are arranged in
groups.

In addition, a penetrameter made of a particular material may be used in the
radiography of materials having greater radiographic absorption. In such a case, there
is a certain penalty on the radiographic testers, because they are setting for themselves
more rigid radiographic quality standards than are actually required. The penalty is often
outweighed, however, by avoidance of the problems of obtaining penetrameters of an
unusual material or one of which it is difficult to make penetrameters.

In some cases, the materials involved do not appear in published tabulations. Under
these circumstances the comparative radiographic absorption of two materials may be
determined experimentally. A block of the material under test and a block of the material
proposed for penetrameters, equal in thickness to the part being examined, can be
radiographed side by side on the same film with the technique to be used in practice. If
the density under the proposed penetrameter materials is equal to or greater than the
density under the specimen material, that proposed material is suitable for fabrication of
penetrameters.

In practically all cases, the penetrameter is placed on the source side of the specimen—
that is, in the least advantageous geometric position. In some instances, however, this
location for the penetrameter is not feasible. An example would be the radiography of a
circumferential weld in a long tubular structure, using a source positioned within the
tube and film on the outer surface. In such a case a “film-side” penetrameter must be
used. Some codes specify the film-side penetrameter that is equivalent to the source-
side penetrameter normally required. When such a specification is not made, the
required film-side penetrameter may be found experimentally.

In the example above, a short section of tube of the same dimensions and materials as
the item under test would be used to demonstrate the technique. The required
penetrameter would be used on the source side, and a range of penetrameters on the
film side. If the penetrameter on the source side indicated that the required radiographic
sensitivity was being achieved, the image of the smallest visible penetrameter hole in
the film-side penetrameters would be used to determine the penetrameter and the hole
size to be used on the production radiograph.

Sometimes the shape of the part being examined precludes placing the penetrameter
on the part. When this occurs, the penetrameter may be placed on a block of
radiographically similar material of the same thickness as the specimen. The block and
the penetrameter should be placed as close as possible to the specimen.

Wire Penetrameters
A number of other penetrameter designs are also in use. The German DIN (Deutsche
Industrie-Norm) penetrameter (See the figure below) is one that is widely used. It
consists of a number of wires, of various diameters, sealed in a plastic envelope that
carries the necessary identification symbols. The thinnest wire visible on the radiograph
indicates the image quality. The system is such that only three penetrameters, each
containing seven wires, can cover a very wide range of specimen thicknesses. Sets of DIN
penetrameters are available in aluminum, copper, and steel. Thus a total of nine
penetrameters is sufficient for the radiography of a wide range of materials and
thicknesses.

DIN (German) penetrameter (German Standard DIN 54109).

Comparison of Penetrameter Design


The hole type of penetrameter (ASTM, ASME) is, in a sense, a “go no-go” gauge; that
is, it indicates whether or not a specified quality level has been attained but, in most
cases, does not indicate whether the requirements have been exceeded, or by how
much. The DIN penetrameter on the other hand is a series of seven penetrameters in a
single unit. As such, it has the advantage that the radiographic quality level achieved
can often be read directly from the processed radiograph.

On the other hand, the hole penetrameter can be made of any desired material but the
wire penetrameter is made from only a few materials.

Therefore, using the hole penetrameter, a quality level of 2-2T may be specified for the
radiography of, for example, commercially pure aluminum and 2024 aluminum alloy,
even though these have appreciably different compositions and radiation absorptions.
The penetrameter would, in each case, be made of the appropriate material. The wire
penetrameters, however, are available in aluminum but not in 2024 alloy. To achieve
the same quality of radiographic inspection of equal thicknesses of these two materials,
it would be necessary to specify different wire diameters—that for 2024 alloy would
probably have to be determined by experiment.
Special Penetrameters
Special penetrameters have been designed for certain classes of radiographic
inspection. An example is the radiography of small electronic components wherein some
of the significant factors are the continuity of fine wires or the presence of tiny balls of
solder. Special image quality indicators have been designed consisting of fine wires and
small metallic spheres within a plastic block, the whole covered on top and the bottom
with steel approximately as thick as the case of the electronic component.
Penetrameters and Visibility of Discontinuities
It should be remembered that even if a certain hole in a penetrameter is visible on the
radiograph, a cavity of the same diameter and thickness may not be visible. The
penetrameter holes, having sharp boundaries, result in an abrupt, though small, change
in metal thickness whereas a natural cavity having more or less rounded sides causes a
gradual change.

Therefore, the image of the penetrameter hole is sharper and more easily seen in the
radiograph than is the image of the cavity. Similarly, a fine crack may be of considerable
extent, but if the x-rays or gamma rays pass from source to film along the thickness of
the crack, its image on the film may not be visible because of the very gradual transition
in photographic density. Thus, a penetrameter is used to indicate the quality of the
radiographic technique and not to measure the size of cavity that can be shown.

In the case of a wire image quality indicator of the DIN type, the visibility of a wire of a
certain diameter does not assure that a discontinuity of the same cross section will be
visible. The human eye perceives much more readily a long boundary than it does a
short one, even if the density difference and the sharpness of the image are the same.

USE OF IMAGE QUALITY INDICATORS IN MONITORING RADIOGRAPHIC QUALITY

Placement of Penetrameter

(a) Source-side penetrameters: The penetrameters shall be placed on the
source side of the part being examined, except for the condition described below:

When, due to part or weld configuration or size, it is not practical to place the
penetrameters on the part or weld, the penetrameters may be placed on a separate
block. Separate blocks shall be made of the same or radiographically similar material
and may be used to facilitate penetrameter positioning. There is no restriction on the
separate block thickness, provided the penetrameter requirements are met.

(1) The penetrameters on the source side of the separate block shall be placed
no closer to the film than the source side of the part being radiographed.

(2) The separate block shall be placed as close as possible to the part being
radiographed.

(3) The separate block shall exceed the penetrameters dimensions such that the
outline of at least three sides of the penetrameter image shall be visible on
the radiograph.

(b) Film-side penetrameters: Where inaccessibility prevents hand placing the
penetrameters on the source side, the penetrameters shall be placed on the film
side in contact with the part being examined. A lead letter “F” shall be placed
adjacent to or on the penetrameters, but shall not mask the essential hole where
hole penetrameters are used.

(c) Penetrameter Location for Welds – Hole penetrameters: The
penetrameters may be placed adjacent to or on the weld. The identification
numbers and, when used, the lead letter “F” shall not be in the area of interest,
except when geometric configuration makes it impractical.

(d) Penetrameter Location for Welds – Wire penetrameters: The
penetrameters shall be placed on the weld so that the length of the wires is
perpendicular to the length of the weld. The identification numbers and, when used,
the lead letter “F” shall not be in the area of interest, except when geometric
configuration makes it impractical.

(e) Penetrameter Location for Materials Other Than Welds: The
penetrameters, with the penetrameter identification numbers and, when used, the
lead letter “F”, may be placed in the area of interest.

Number of Penetrameters:
When one or more film holders are used for an exposure, at least one penetrameter
image shall appear on each radiograph, except as outlined in (b) below.

(a) Multiple penetrameters: If the requirements are met by using more than
one penetrameter, one shall be representative of the lightest area of interest
and the other the darkest area of interest; the intervening densities on the
radiograph shall be considered as having acceptable density.

(b) Special Cases

(1) For cylindrical components where the source is placed on the axis of the
component for a single exposure, at least three penetrameters, spaced
approximately 120 deg. apart, are required under the following conditions:

(a) When the complete circumference is radiographed using one or more
film holders, or
(b) When a section or sections of the circumference, where the length
between the ends of the outermost sections spans 240 deg. or more, is
radiographed using one or more film holders. Additional film
locations may be required to obtain necessary penetrameter spacing.

(2) For cylindrical components where the source is placed on the axis of the
component for a single exposure, at least three penetrameters, with one placed at each
end of the radiographed span of the circumference and one in the approximate center of
the span, are required under the following conditions:

(a) When a section of the circumference, the length of which is greater
than 120 deg. and less than 240 deg., is radiographed using just one
film holder, or
(b) When a section or sections of the circumference, where the length
between the ends of the outermost sections spans less than 240 deg., is
radiographed using more than one film holder.

(3) In (1) and (2) above, where sections of longitudinal welds adjoining the
circumferential weld are radiographed simultaneously with the circumferential
weld, an additional penetrameter shall be placed on each longitudinal weld at
the end of the section most remote from the junction with the circumferential
weld being radiographed.

(4) For spherical components where the source is placed at the center of the
component for a single exposure, at least three penetrameters, spaced
approximately 120 deg. apart, are required under the following conditions:

(a) When the complete circumference is radiographed using one or more
film holders, or
(b) When a section or sections of the circumference, where the length
between the ends of the outermost sections spans 240 deg. or more, is
radiographed using one or more film holders. Additional film locations
may be required to obtain necessary penetrameter spacing.

(5) For spherical components where the source is placed at the center of the
component for a single exposure, at least three penetrameters, with one
placed at each end of the radiographed span of the circumference and one in
the approximate center of the span, are required under the following
conditions:

(a) When a section of the circumference, the length of which is greater
than 120 deg. and less than 240 deg., is radiographed using just one
film holder, or
(b) When a section or sections of the circumference, where the length
between the ends of the outermost sections spans less than 240 deg., is
radiographed using more than one film holder.

(6) In (4) and (5) above, where other welds are radiographed simultaneously with
the circumferential weld, one additional penetrameter shall be placed on each
other weld.

(7) When an array of components in a circle is radiographed, at least one
penetrameter shall show on each component image.

(8) In order to maintain the continuity of records involving subsequent exposures,
all radiographs exhibited in accordance with (1) through (6) above shall be
retained.

Shims Under Hole Penetrameters: For welds, a shim of material
radiographically similar to the weld metal shall be placed between the part and the
penetrameter, if needed, so that the radiographic density through the area of
interest is no more than minus 15% from the radiographic density through the
penetrameter.

The shim dimensions shall exceed the penetrameter dimensions such that the
outline of at least three sides of the penetrameter image shall be visible in the
radiograph.
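The minus-15% shim rule above amounts to a simple density comparison, sketched here; the function name and the reading of “minus 15%” as a lower bound on the area-of-interest density are our own:

```python
def shim_density_ok(density_area_of_interest, density_through_penetrameter):
    """True if the density through the area of interest is no more than 15%
    below the density measured through the penetrameter (the shim rule)."""
    return density_area_of_interest >= 0.85 * density_through_penetrameter

# Film density 2.0 in the weld vs. 2.2 through the penetrameter: acceptable
print(shim_density_ok(2.0, 2.2))  # -> True
```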

ARITHMETIC OF EXPOSURE

RELATIONS OF MILLIAMPERAGE (SOURCE STRENGTH), DISTANCE, AND TIME

With a given kilovoltage of x-radiation, or with the gamma radiation from a
particular isotope, the three factors governing the exposure are the
milliamperage (for x-rays) or source strength (for gamma rays), time, and
source-film distance. The numerical relations among these three quantities are
demonstrated below, using x-rays as an example. The same relations apply for
gamma rays, provided the number of curies in the source is substituted wherever
milliamperage appears in an equation.
The necessary calculations for any changes in focus-film distance (D), milliamperage
(M), or time (T) are matters of simple arithmetic and are illustrated in the following
example. As noted earlier, kilovoltage changes cannot be calculated directly but must
be obtained from the exposure chart of the equipment or the operator’s logbook. All of
the equations shown on these pages can be solved easily for any of the variables (mA,
T, D), using one basic rule of mathematics: If one factor is moved across the equals
sign (=), it moves from the numerator to the denominator or vice versa.

We can now solve for any unknown by:

1. Eliminating any factor that remains constant (has the same value and is in the
same location on both sides of the equation).

2. Simplifying the equation by moving the unknown value so that it is alone on
one side of the equation in the numerator.

3. Substituting the known values and solving the equation.


Milliamperage-Distance Relation
The milliamperage employed in any exposure technique should be in conformity with
the manufacturer’s rating of the x-ray tube. In most laboratories, however, a constant
value of milliamperage is usually adopted for convenience.

Rule: The milliamperage (M) required for a given exposure is directly proportional to the
square of the focus-film distance (D). The equation is expressed as follows:

M1 / M2 = D1² / D2²

Example: Suppose that with a given exposure time and kilovoltage, a properly exposed
radiograph is obtained with 5 mA (M1) at a distance of 12 inches (D1), and that it is
desired to increase the sharpness of detail in the image by increasing the focus-film
distance to 24 inches (D2). The correct milliamperage (M2) to obtain the desired
radiographic density at the increased distance (D2) may be computed from the
proportion:

5 / M2 = 12² / 24², so M2 = 5 × (24/12)² = 20 mA
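The same proportion can be checked numerically. A minimal sketch; the function name is our own:

```python
def new_milliamperage(m1, d1, d2):
    """M2 = M1 * (D2 / D1)**2: milliamperage scales with the square of the
    focus-film distance when exposure time and kilovoltage are held constant."""
    return m1 * (d2 / d1) ** 2

# 5 mA at 12 in. becomes 20 mA at 24 in. for the same film density
print(new_milliamperage(5.0, 12.0, 24.0))  # -> 20.0
```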

When very low kilovoltages, say 20 kV or less, are used, the x-ray intensity decreases
with distance more rapidly than calculations based on the inverse square law would
indicate because of absorption of the x-rays by the air. Most industrial radiography,
however, is done with radiation so penetrating that the air absorption need not be
considered. These comments also apply to the time-distance relations discussed
below.
Time-Distance Relation
Rule: The exposure time (T) required for a given exposure is directly proportional to the
square of the focus-film distance (D). Thus:

T1 / T2 = D1² / D2²

To solve for either a new time (T2) or a new distance (D2), simply follow the steps
shown in the example above.
Milliamperage-Time Relation
Rule: The milliamperage (M) required for a given exposure is inversely proportional to
the time (T):

M1 / M2 = T2 / T1
Another way of expressing this is to say that for a given set of conditions (voltage,
distance, etc), the product of milliamperage and time is constant for the same
photographic effect.

Thus, M1T1 = M2T2 = M3T3 = C, a constant.

This is commonly referred to as the reciprocity law. (Important exceptions are discussed
below.) To solve for either a new time (T2) or a new milliamperage (M2), simply follow the
steps shown in the example in “Milliamperage-Distance Relation”.
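The reciprocity law lets either factor be recovered from the constant product. A sketch; the function and parameter names are our own:

```python
def reciprocity(m1, t1, m2=None, t2=None):
    """Solve M1*T1 = M2*T2 for the missing factor: pass the new
    milliamperage (m2) to get the new time, or the new time (t2) to get
    the new milliamperage (distance and kilovoltage unchanged)."""
    product = m1 * t1
    return product / m2 if m2 is not None else product / t2

# 10 mA for 60 s has the same photographic effect as 5 mA for 120 s
print(reciprocity(10.0, 60.0, m2=5.0))  # -> 120.0
```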

X-RAY EXPOSURE CHARTS


An exposure chart is a graph showing the relation between material thickness,
kilovoltage, and exposure. In its most common form, an exposure chart resembles the
figure below. These graphs are adequate for determining exposures in the radiography
of uniform plates, but they serve only as rough guides for objects, such as complicated
castings, having wide variations of thickness.

Typical exposure chart for steel. This chart may be taken to apply to Film X (for
example), with lead foil screens, at a film density of 1.5. Source-film distance,
40 inches.

Exposure charts are usually available from manufacturers of x-ray equipment. Because,
in general, such charts cannot be used for different x-ray machines unless suitable
correction factors are applied, individual laboratories sometimes prepare their own.
PREPARING AN EXPOSURE CHART
A simple method for preparing an exposure chart is to make a series of radiographs of a
pile of plates consisting of a number of steps. This “step tablet” or stepped wedge, is
radiographed at several different exposure times at each of a number of kilovoltages.
The exposed films are all processed under conditions identical to those that will later be
used for routine work. Each radiograph consists of a series of photographic densities
corresponding to the x-ray intensities transmitted by the different thicknesses of
metal. A certain density, for example, 1.5, is
selected as the basis for the preparation of the chart. Wherever this density occurs on
the stepped-wedge radiographs, there are corresponding values of thickness,
milliampere-minutes, and kilovoltage. It is unlikely that many of the radiographs will
contain a value of exactly 1.5 in density, but the correct thickness for this density can be
found by interpolation between steps. Thickness and milliampere-minute values are
plotted for the different kilovoltages in the manner shown in the figure above.

Another method, requiring fewer stepped-wedge exposures but more arithmetical
manipulation, is to make one step-tablet exposure at each kilovoltage and to measure
the densities in the processed stepped-wedge radiographs. The exposure that would
have given the chosen density (in this case 1.5) under any particular thickness of the
stepped wedge can then be determined from the characteristic curve of the film used.
The values for thickness, kilovoltage, and exposure are plotted as described in the
figure above.

Note that thickness is on a linear scale, and that milliampere-minutes are on a
logarithmic scale. The logarithmic scale is not necessary, but it is very convenient
because it compresses an otherwise long scale. A further advantage of the logarithmic
exposure scale is that it usually allows the location of the points for any one kilovoltage to
be well approximated by a straight line.

Any given exposure chart applies to a set of specific conditions. These fixed conditions
are:

1. The x-ray machine used

2. A certain source-film distance

3. A particular film type

4. Processing conditions used

5. The film density on which the chart is based

6. The type of screens (if any) that are used

Only if the conditions used in making the radiograph agree in all particulars with those
used in preparation of the exposure chart can values of exposure be read directly from
the chart. Any change requires the application of a correction factor. The correction
factor applying to each of the conditions listed previously will be discussed separately.

1. It is sometimes difficult to find a correction factor to make an exposure chart
prepared for one x-ray machine applicable to another. Different x-ray
machines operating at the same nominal kilovoltage and milliamperage settings
may give not only different intensities but also different qualities of radiation.

2. A change in source-film distance may be compensated for by the use of the
inverse square law or, if fluorescent screens are used, by referring to the
earlier table. Some exposure charts give exposures in terms of “exposure factor”
rather than in terms of milliampere-minutes or milliampere-seconds. Charts of
this type are readily applied to any value of source-film distance.

3. The use of a different type of film can be corrected for by comparing the
difference in the amount of exposure necessary to give the same density on
both films from relative exposure charts such as those shown here.

For example, to obtain a density of 1.5 using Film Y, 0.6 more log exposure is
required than for Film X.

This log exposure difference is found on the L scale and corresponds to an
exposure factor of 3.99 on the D scale. (Read directly below the log E
difference.) Therefore, in order to obtain the same density on Film Y as on
Film X, multiply the original exposure by 3.99 to get the new exposure.
Conversely, if going from Film Y to Film X, divide the original exposure by
3.99 to obtain the new exposure.

You can use these procedures to change densities on a single film as well.
Simply find the log E difference needed to obtain the new density on the film
curve; read the corresponding exposure factor from the chart; then multiply to
increase density or divide to decrease density.
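Because the L scale is a base-10 logarithmic exposure scale, the conversion above amounts to raising 10 to the log E difference (10^0.6 ≈ 3.98, which the chart's D scale reads as 3.99). A minimal sketch of that arithmetic, with an illustrative function name:

```python
def exposure_factor(log_e_difference):
    """Convert a difference on the log E scale to a multiplicative
    exposure factor. Log exposures are base 10, so a difference D
    corresponds to an exposure ratio of 10**D."""
    return 10 ** log_e_difference

factor = exposure_factor(0.6)           # ~3.98; the chart reads 3.99
new_exposure = 100 * factor             # multiply to go from Film X to Film Y
old_exposure = new_exposure / factor    # divide to go back again
```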

4. A change in processing conditions causes a change in effective film speed. If
the processing of the radiographs differs from that used for the exposures from
which the chart was made, the correction factor must be found by experiment.

5. The chart gives exposures to produce a certain density. If a different density
is required, the correction factor may be calculated from the film’s characteristic
curve.

6. If the type of screens is changed, for example from lead foil to fluorescent, it
is easier and more accurate to make a new exposure chart than to attempt to
determine correction factors.

Sliding scales can be applied to exposure charts to allow for changes in one or more of
the conditions discussed, with the exception of the first and the last. The methods of
preparing and using such scales are described in detail later on.

In some radiographic operations, the exposure time and the source-film distance are set
by economic considerations or on the basis of previous experience and test radiographs.
The tube current is, of course, limited by the design of the tube. This leaves as variables
only the thickness of the specimen and the kilovoltage. When these conditions exist, the
exposure chart may take a simplified form as shown in the figure below, which allows the
kilovoltage for any particular specimen thickness to be chosen readily. Such a chart will
probably be particularly useful when uniform sections must be radiographed in large
numbers by relatively untrained persons. This type of exposure chart may be derived
from a chart similar to the figure above by following the horizontal line corresponding to
the chosen milliampere-minute value and noting the thickness corresponding to this
exposure for each kilovoltage. These thicknesses are then plotted against kilovoltage.
GAMMA-RAY EXPOSURE CHARTS
The figure below shows a typical gamma-ray exposure chart. It is somewhat similar to
the next to the last figure above.

However, with gamma rays, there is no variable factor corresponding to the kilovoltage.
Therefore, a gamma-ray exposure chart contains one line, or several parallel lines,
each of which corresponds to a particular film type, film density, or source-film distance.
Gamma-ray exposure guides are also available in the form of linear or circular slide
rules. These contain scales on which can be set the various factors of specimen
thickness, source strength and source-film distance, and from which exposure time can
be read directly.

Sliding scales can also be applied to gamma-ray exposure charts of the type in the
figure below to simplify some exposure determinations. The preparation and use of
such scales are described in detail later on.

Typical gamma-ray exposure chart for iridium 192, based on the use of Film X (for
example).

USE OF MULTIPLE FILMS


If the chart shows that the thickness range is too great for a single exposure under any
condition, it may be used to select two different exposures to cover the range. Another
technique is to load the cassette with two films of different speed and expose them
simultaneously, in which case the chart may be used to select the exposure. The log
relative exposure range for two films of different speed, when used together in this
manner, is the difference in log exposure between the value at the low-density end of
the faster film curve and the high-density end of

the slower film curve. An earlier figure shows that when Films X and Y are used, the
difference is 1.22, which is the difference between 1.57 and 2.79. It is necessary that
the films be close enough together in speed so that their curves will have some
“overlap” on the log E axis.
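The two-film computation above is a subtraction on the log E axis; the figures for Films X and Y are those quoted in the text. A minimal sketch, with an illustrative function name:

```python
def combined_log_e_range(faster_film_low, slower_film_high):
    """Log relative exposure range covered by two films of different
    speed exposed together: from the log E value at the low-density
    end of the faster film's curve to the value at the high-density
    end of the slower film's curve."""
    return slower_film_high - faster_film_low

# Films X and Y from the text: 2.79 - 1.57 = 1.22
print(round(combined_log_e_range(1.57, 2.79), 2))  # 1.22
```

For the technique to work, the two curves must overlap on the log E axis, i.e. the slower film's high-density log E must exceed the faster film's low-density log E.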
LIMITATIONS OF EXPOSURE CHARTS
Although exposure charts are useful industrial radiographic tools, they must be used with
some caution. They will, in most cases, be adequate for routine practice, but they will not
always show the precise exposure required to radiograph a given thickness to a particular
density.

Several factors have a direct influence on the accuracy with which exposures can be
predicted. Exposure charts are ordinarily prepared by radiographing a stepped wedge.
Since the proportion of scattered radiation depends on the thickness of material and,
therefore, on the distribution of the material in a given specimen, there is no assurance
that the scattered radiation under different parts will correspond to the amount under the
same thickness of the wedge. In fact, it is unreasonable to expect exact
correspondence between scattering conditions under two objects the thicknesses of
which are the same but in which the distribution of material is quite different. The more
closely the distribution of metal in the wedge resembles that in the specimen the more
accurately the exposure chart will serve its purpose. For example, a narrow wedge
would approximate the scattering conditions for specimens containing narrow bars.

Although the lines of an exposure chart are conventionally drawn straight, in most
cases the true relations are curved, concave downward. The straight lines are convenient approximations,
suitable for most practical work, but it should be recognized that in most cases they are
only approximations. The degree to which the conventionally drawn straight line
approximates the true curve will vary, depending on the radiographic conditions, the
quality of the exposing radiation, the material radiographed, and the amount of
scattered radiation reaching the film.

In addition, time, temperature, degree of activity, and agitation of the developer are all
variables that affect the shape of the characteristic curve and should therefore be
standardized. When, in hand processing, the temperature or the activity of the
developer does not correspond to the original conditions, proper compensation can be
made by changing the time. Automated processors should be carefully maintained and
cleaned to achieve the most consistent results. In any event, the greatest of care should
always be taken to follow the recommended processing procedures.

SCATTERED RADIATION
When a beam of x-rays or gamma rays strikes any object, some of the radiation is
absorbed, some is scattered, and some passes straight through. The electrons of the
atoms constituting the object scatter radiation in all directions, much as light is
dispersed by a fog. The wavelengths of much of the radiation are increased by the
scattering process, and hence the scatter is always somewhat “softer,” or less
penetrating, than the unscattered primary radiation. Any material—whether specimen,
cassette, tabletop, walls, or floor—that receives the direct radiation is a source of
scattered radiation. Unless suitable measures are taken to reduce the effects of scatter,
it will reduce contrast over the whole image or parts of it.

Scattering of radiation occurs, and is a problem, in radiography with both x-rays and
gamma rays. In the material that follows, the discussion is in terms of x-rays, but the
same general principles apply to gamma radiography.

In the radiography of thick materials, scattered radiation forms the greater percentage of
the total radiation. For example, in the radiography of a 3/4-inch thickness of steel, the
scattered radiation from the specimen is almost twice as intense as the primary radiation;
in the radiography of a 2-inch thickness of aluminum, the scattered radiation is two and
a half times as great as the primary radiation. As may be expected, preventing scatter
from reaching the film markedly improves the quality of the radiographic image.

Sources of scattered radiation. A: Transmitted scatter. B: Scatter from cassette.
C: “Reflection” scatter.

As a rule, the greater portion of the scattered radiation affecting the film is from the
specimen under examination (A in the figure above). However, any portion of the film
holder or cassette that extends beyond the boundaries of the specimen and thereby
receives direct radiation from the x-ray tube also becomes a source of scattered
radiation, which can affect the film. The influence of this scatter is most noticeable just
inside the borders of the image (B in the figure above). In a similar manner, primary
radiation striking the film holder or cassette through a thin portion of the specimen will
cause scattering into the shadows of the adjacent thicker portions. Such scatter is called
undercut. Another source of scatter that may undercut a specimen is shown as C in the
figure above. If a filter is used near the tube, this too will scatter x-rays.

However, because of the distance from the film, scattering from this source is of
negligible importance. Any other material, such as a wall or floor, on the film side of the
specimen may also scatter an appreciable quantity of x-rays back to the film, especially
if the material receives the direct radiation from the x-ray tube or gamma-ray source
(See the figure below). This is referred to as backscattered radiation.

Intense backscattered radiation may originate in the floor or wall. Coning, masking, or
diaphragming should be employed. Backing the cassette with lead may give adequate
protection.

REDUCTION OF SCATTER
Although scattered radiation can never be completely eliminated, a number of means
are available to reduce its effect. The various methods are discussed in terms of x-rays.
Although most of the same principles apply to gamma-ray radiography, differences in
application arise because of the highly penetrating radiation emitted by most common
industrial gamma-ray sources. For example, a mask (See the figure below) for use with
200 kV x-rays could easily be light enough for convenient handling. A mask for use with
cobalt 60 radiation, on the other hand, would be thick, heavy, and probably
cumbersome. In any event, with either x-rays or gamma rays, the means for reducing
the effects of scattered radiation must be chosen on the basis of cost, convenience, and
effectiveness.

The combined use of metallic shot and a lead mask for lessening scattered radiation is
conducive to good radiographic quality. If several round bars are to be radiographed,
they may be separated with lead strips held on edge on a wooden frame and the voids
filled with fine shot.

LEAD FOIL SCREENS
Lead screens, mounted in contact with the film, diminish the effect on the film of
scattered radiation from all sources. They are beyond doubt the least expensive, most
convenient, and most universally applicable means of combating the effects of
scattered radiation. Lead screens lessen the scatter reaching the films regardless of
whether the screens permit a decrease or necessitate an increase in the radiographic
exposure.

Many x-ray exposure holders incorporate a sheet of lead foil in the back for the specific
purpose of protecting the film from backscatter. This lead will not serve as an
intensifying screen, first, because it usually has a paper facing, and second because it
often is not lead of “radiographic quality”. If intensifying screens are used with such
holders, definite means must be provided to insure good contact.

X-ray film cassettes also are usually fitted with a sheet of lead foil in the back for protection
against backscatter.

Using such a cassette or film holder with gamma rays or with million-volt x-rays, the film
should always be enclosed between double lead screens; otherwise, the secondary
radiation from the lead backing is sufficient to penetrate the intervening felt or paper and
cast a shadow of the structure of this material on the film, giving a granular or mottled
appearance. This effect can also occur at voltages as low as 200 kV unless the film is
enclosed between lead foil or fluorescent intensifying screens.
MASKS AND DIAPHRAGMS
Scattered radiation originating in matter outside the specimen is most serious for
specimens that have high absorption for x-rays, because the scattering from external
sources may be large compared to the primary image-forming radiation that reaches
the film through the specimen. Often, the most satisfactory method of lessening this
scatter is to use cutout diaphragms or some other form of mask mounted over or
around the object radiographed. If many specimens of the same article are to be
radiographed, it may be worthwhile to cut an opening of the same shape, but slightly
smaller, in a sheet of lead and place this on the object.

The lead serves to reduce the exposure in surrounding areas to a negligible value and
therefore to eliminate scattered radiation from this source. Since scatter also arises
from the specimen itself, it is good practice wherever possible, to limit the cross an xray
beam to cover only the area of the specimen that is of interest in the examination.

For occasional pieces of work where a cutout diaphragm would not be economical,
barium clay packed around the specimen will serve the same purpose. The clay should
be thick enough so that the film density under the clay is somewhat less than that under
the specimen. Otherwise, the clay itself contributes appreciable scattered radiation.

It may be advantageous to place the object in aluminum or thin iron pans and to use a
liquid absorber, provided the liquid chosen will not damage the specimen. A combined
saturated solution of lead acetate and lead nitrate is satisfactory.

Warning

WARNING! Harmful if swallowed. Harmful if inhaled. Wash thoroughly after handling.


Use only with adequate ventilation.

To prepare this solution, dissolve approximately 3 1/2 pounds of lead acetate in 1 gallon
of hot water. When the lead acetate is in solution, add approximately 3 pounds of lead
nitrate.

Because of its high lead content this solution is a strong absorber of x-rays. In masking
with liquids, be sure to eliminate bubbles that may be clinging to the surface of the
specimen.

One of the most satisfactory arrangements, combining effectiveness and convenience,
is to surround the object with copper or steel shot having a diameter of about 0.01 inch
or less (See the figure above). This material “flows” without running badly. It is also very
effective for filling cavities in irregular objects, such as castings, where a normal
exposure for thick parts would result in an overexposure for thinner parts. Of course, it
is preferable to make separate exposures for thick and thin parts, but this is not always
practical.

In some cases, a lead diaphragm or lead cone on the tube head may be a convenient
way to limit the area covered by the x-ray beam. Such lead diaphragms are particularly
useful where the desired cross section of the beam is a simple geometric figure, such
as a circle, square, or rectangle.
FILTERS
In general, the use of filters is limited to radiography with x-rays. A simple metallic filter
mounted in the x-ray beam near the x-ray tube (See the figure below) may adequately
serve the purpose of eliminating overexposure in the thin regions of the specimen and
in the area surrounding the part. Such a filter is particularly useful to reduce scatter
undercut in cases where a mask around the specimen is impractical, or where the
specimen would be injured by chemicals or shot. Of course, an increase in exposure or
kilovoltage will be required to compensate for the additional absorption; but, in cases
where the filter method is applicable, this is not serious unless the limit of the x-ray
machine has been reached.

The underlying principle of the method is that the addition of the filter material causes a
much greater change in the amount of radiation passing through the thin parts than
through the thicker parts. Suppose the shape of a certain steel specimen is as shown in
the figure below and that the thicknesses are 1/4 inch, 1/2 inch, and 1 inch. This
specimen is radiographed first with no filter, and then with a filter near the tube.

A filter placed near the x-ray tube reduces subject contrast and eliminates much of the
secondary radiation, which tends to obscure detail in the periphery of the specimen.

Column 3 of the table below shows the percentage of the original x-ray intensity
remaining after the addition of the filter, assuming both exposures were made at
180 kV. (These values were derived from actual exposure chart data.)
Region             Specimen Thickness (inches)   X-ray Intensity Remaining After Addition of Filter
Outside specimen   0                             less than 5%
Thin section       1/4                           about 30%
Medium section     1/2                           about 40%
Thick section      1                             about 50%
Note that the greatest percentage change in x-ray intensity is under the thinner parts of
the specimen and in the film area immediately surrounding it. The filter reduces by a
large ratio the x-ray intensity passing through the thin sections or striking the cassette
around the specimen, and hence reduces the undercut of scatter from these sources.
Thus, in regions of strong undercut, the contrast is increased by the use of a filter since
the only effect of the undercutting scattered radiation is to obscure the desired image. In
regions where the undercut is negligible, a filter has the effect of decreasing the
contrast in the finished radiograph.
Although frequently the highest possible contrast is desired, there are certain instances
in which too much contrast is a definite disadvantage. For example, it may be desired to
render detail visible in all parts of a specimen having wide variations of thickness. If the
exposure is made to give a usable density under the thin part, the thick region may be
underexposed. If the exposure is adjusted to give a suitable density under the thick
parts, the image of the thin sections may be grossly overexposed.

A filter reduces excessive subject contrast (and hence radiographic contrast) by
hardening the radiation. The longer wavelengths do not penetrate the filter to as great
an extent as do the shorter wavelengths. Therefore, the beam emerging from the filter
contains a higher proportion of the more penetrating wavelengths. The figure below
illustrates this graphically. In the sense that a more penetrating beam is produced,
filtering is analogous to increasing the kilovoltage. However, it requires a comparatively
large change in kilovoltage to change the hardness of an x-ray beam to the same extent
as will result from adding a small amount of filtration.

Curves illustrating the effect of a filter on the composition and intensity of an x-ray
beam.

Although filtering reduces the total quantity of radiation, most of the wavelengths
removed are those that would not penetrate the thicker portions of the specimen in any
case. The radiation removed would only result in a high intensity in the regions around
the specimen and under its thinner sections, with the attendant scattering undercut and
overexposure. The harder radiation obtained by filtering the x-ray beam produces a
radiograph of lower contrast, thus permitting a wider range of specimen thicknesses to
be recorded on a single film than would otherwise be possible.

Thus, a filter can act either to increase or to decrease the net contrast. The contrast
and penetrameter visibility are increased by the removal of the scatter that undercuts
the specimen (see the figure below) and decreased by the hardening of the original
beam. The nature of the individual specimen will determine which of these effects will
predominate or whether both will occur in different parts of the same specimen.

Sections of a radiograph of an unmasked 1 1/8-inch casting, made at 200 kV without
filtration (left), and as improved by filtration at the tube (right).

The choice of a filter material should be made on the basis of availability and ease of
handling. For the same filtering effect, the thickness of filter required is less for those
materials having higher absorption. In many cases, copper or brass is the most useful,
since filters of these materials will be thin enough to handle easily, yet not so thin as to
be delicate. See the figure below.

Maximum filter thickness for aluminum and steel.

Definite rules as to filter thicknesses are difficult to formulate exactly because the
amount of filtration required depends not only on the material and thickness range of
the specimen, but also on the distribution of material in the specimen and on the
amount of scatter undercut that it is desired to eliminate. In the radiography of
aluminum, a filter of copper about 4 percent of the greatest thickness of the specimen
should prove the thickest necessary. With steel, a copper filter should ordinarily be
about 20 percent, or a lead filter about 3 percent, of the greatest specimen thickness for
the greatest useful filtration. The foregoing values are maximum values, and, depending
on circumstances, useful radiographs can often be made with far less filtration.
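The rules of thumb above can be expressed as simple fractions of the greatest specimen thickness. This sketch encodes only the maximum values quoted in the text; as noted, useful radiographs can often be made with far less filtration:

```python
def max_filter_thickness(specimen_material, greatest_thickness, filter_material):
    """Rule-of-thumb maximum useful filter thickness, as a fraction of
    the greatest specimen thickness (fractions from the text)."""
    fractions = {
        ("aluminum", "copper"): 0.04,  # copper filter ~4% for aluminum
        ("steel", "copper"): 0.20,     # copper filter ~20% for steel
        ("steel", "lead"): 0.03,       # lead filter ~3% for steel
    }
    return fractions[(specimen_material, filter_material)] * greatest_thickness

# A 2-inch steel specimen: at most about 0.06 inch of lead filtration.
print(max_filter_thickness("steel", 2.0, "lead"))  # 0.06
```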

In radiography with x-rays up to at least 250 kV, the 0.005-inch front lead screen
customarily used is an effective filter for the scatter from the bulk of the specimen.
Additional filtration between specimen and film only tends to contribute additional
scatter from the filter itself. The scatter undercut can be decreased by adding an
appropriate filter at the tube as mentioned before (see also the figures above). Although
the filter near the tube gives rise to scattered radiation, the scatter is emitted in all
directions, and since the film is far from the filter, scatter reaching the film is of very low
intensity.

Further advantages of placing the filter near the x-ray tube are that specimen-film
distance is kept to a minimum and that scratches and dents in the filter are so blurred
that their images are not apparent on the radiograph.
GRID DIAPHRAGMS
One of the most effective ways to reduce scattered radiation from an object being
radiographed is through the use of a Potter-Bucky diaphragm. This apparatus (see the
figure below) consists of a moving grid, composed of a series of lead strips held in
position by intervening strips of a material transparent to x-rays. The lead strips are
tilted, so that the plane of each is in line with the focal spot of the tube. The slots
between the lead strips are several times as deep as they are wide. The parallel lead
strips absorb the very divergent scattered rays from the object being radiographed, so
that most of the exposure is made by the primary rays emanating from the focal spot of
the tube and passing between the lead strips. During the course of the exposure, the
grid is moved, or oscillated, in a plane parallel to the film as shown by the black arrows
in the figure below. Thus, the shadows of the lead strips are blurred out so that they do
not appear in the final radiograph.

Schematic diagram showing how the primary x-rays pass between the lead strips of the
Potter-Bucky diaphragm while most of the scattered x-rays are absorbed because they
strike the sides of the strips.

The use of the Potter-Bucky diaphragm in industrial radiography complicates the
technique to some extent and necessarily limits the flexibility of the arrangement of the
x-ray tube, the specimen, and the film. Grids can, however, be of great value in the
radiography of beryllium more than about 3 inches thick and in the examination of other
low-absorption materials of moderate and great thicknesses. For these materials,
kilovoltages in the medical radiographic range are used, and the medical forms of Potter-
Bucky diaphragms are appropriate. Grid ratios (the ratio of height to width of the openings
between the lead strips) of 12 or more are desirable.
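The grid ratio defined above is simply the depth of the slots between the lead strips divided by their width. A minimal check, with illustrative dimensions:

```python
def grid_ratio(slot_depth, slot_width):
    """Grid ratio of a Potter-Bucky diaphragm: height (depth) of the
    openings between the lead strips divided by their width."""
    return slot_depth / slot_width

# Illustrative slot dimensions giving the suggested minimum ratio of 12.
ratio = grid_ratio(3.0, 0.25)
print(ratio, ratio >= 12)  # 12.0 True
```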

The Potter-Bucky diaphragm is seldom used elsewhere in the industrial field, although
special forms have been designed for the radiography of steel with voltages as high as
200 to 400 kV. These diaphragms are not used at higher voltages or with gamma rays
because relatively thick lead strips would be needed to absorb the radiation scattered
at these energies. This in turn would require a Potter-Bucky diaphragm, and the
associated mechanism, of an uneconomical size and complexity.
MOTTLING CAUSED BY X-RAY DIFFRACTION
A special form of scattering caused by x-ray diffraction is encountered occasionally. It is
most often observed in the radiography of fairly thin metallic specimens whose grain
size is large enough to be an appreciable fraction of the part thickness. The
radiographic appearance of this type of scattering is mottled and may be confused with
the mottled appearance sometimes produced by porosity or segregation. It can be
distinguished from these conditions by making two successive radiographs, with the
specimen rotated slightly (1 to 5 degrees) between exposures, about an axis
perpendicular to the central beam. A pattern caused by porosity or segregation will
change only slightly; however, one caused by diffraction will show a marked change.
The radiographs of some specimens will show a mottling from both effects, and careful
observation is needed to differentiate between them.

Briefly, however, a relatively large crystal or grain in a relatively thin specimen may in
some cases “reflect” an appreciable portion of the x-ray energy falling on the specimen,
much as if it were a small mirror. This will result in a light spot on the developed
radiograph corresponding to the position of the particular crystal and may also produce
a dark spot in another location if the diffracted, or “reflected,” beam strikes the film.
Should this beam strike the film beneath a thick part of the specimen, the dark spot may
be mistaken for a void in the thick section.

This effect is not observed in most industrial radiography, because most specimens are
composed of a multitude of very minute crystals or grains, variously oriented; hence,
scatter by diffraction is essentially uniform over the film area. In addition, the directly
transmitted beam usually reduces the contrast in the diffraction pattern to a point where
it is no longer visible on the radiograph.

The mottling caused by diffraction can be reduced, and in some cases eliminated, by
raising the kilovoltage and by using lead foil screens.

The former is often of positive value even though the radiographic contrast is reduced.
Since definite rules are difficult to formulate, both approaches should be tried in a new
situation, or perhaps both used together.

It should be noted, however, that in some instances, the presence or absence of
mottling caused by diffraction has been used as a rough indication of grain size and
thus as a basis for the acceptance or the rejection of parts.
SCATTERING IN 1- AND 2-MILLION-VOLT RADIOGRAPHY
Lead screens should always be used in this voltage range. The common thicknesses,
0.005-inch front and 0.010-inch back, are both satisfactory and convenient. Some
users, however, find a 0.010-inch front screen of value because of its greater selective
absorption of the scattered radiation from the specimen.

Filtration at the tube offers no improvement in radiographic quality. However, filters at
the film improve the radiograph in the examination of uniform sections, but give poor
quality at the edges of the image of a specimen because of the undercut of scattered
radiation from the filter itself. Hence, filtration should not be used in the radiography of
specimens containing narrow bars, for example, no matter what the thickness of the
bars in the direction of the primary radiation. Further, filtration should be used only
where the film can be adequately protected against backscattered radiation.

Lead filters are most convenient for this voltage range. When thus used between
specimen and film, filters are subject to mechanical damage. Care should be taken to
reduce this to a minimum, lest filter defects be confused with structures in or on the
specimen. In radiography with million-volt x-rays, specimens of uniform sections may be
conveniently divided into three classes. Below about 1 1/2 inches of steel, filtration
affords little improvement in radiographic quality. Between 1 1/2 and 4 inches of steel, the
thickest filter, up to 1/8-inch lead, which at the same time allows a reasonable exposure
time, may be used. Above 4 inches of steel, filter thicknesses may be increased to 1/4
inch of lead, economic considerations permitting.
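The three thickness classes above can be sketched as a small selection rule for the maximum useful lead filter at the film. Thicknesses are in inches of steel; the function name is illustrative:

```python
def suggested_lead_filter(steel_thickness_in):
    """Maximum useful lead filter thickness (inches) at the film for
    million-volt x-rays on uniform steel sections, per the three
    classes in the text."""
    if steel_thickness_in < 1.5:
        return 0.0          # filtration affords little improvement
    elif steel_thickness_in <= 4.0:
        return 1.0 / 8.0    # up to 1/8-inch lead
    else:
        return 1.0 / 4.0    # up to 1/4-inch lead, economics permitting

print(suggested_lead_filter(2.5))  # 0.125
```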

It should be noted that in the radiography of extremely thick specimens with million-volt x-
rays, fluorescent screens may be used to increase the photographic speed to a point
where filters can be used without requiring excessive exposure time.

A very important point is to block off all radiation except the useful beam with heavy
(1/2-inch to 1-inch) lead at the anode. Unless this is done, radiation striking the walls of the
x-ray room will scatter back in such quantity as to seriously affect the quality of the
radiograph. This will be especially noticeable if the specimen is thick or has parts
projecting relatively far from the film.
MULTIMILLION-VOLT RADIOGRAPHY
Techniques of radiography in the 6- to 24-million-volt range are difficult to specify. This
is in part because of the wide range of subjects radiographed, from thick steel to
several feet of mixtures of solid organic compounds, and in part because the sheer size
of the specimens and the difficulty in handling them often impose limitations on the
radiographic techniques that can be used.

In general, the speed of the film-screen combination increases with increasing
thickness of front and back lead screens up to at least 0.030 inch. One problem
encountered with screens of such great thickness is that of screen contact. For
example, if a conventional cardboard exposure holder is supported vertically, one or
both of the heavy screens may tend to sag away from the film, with a resulting

degradation of the image quality. Vacuum cassettes are especially useful in this
application and several devices have been constructed for the purpose, some of which
incorporate such refinements as automatic preprogrammed positioning of the film
behind the various areas of a large specimen.

The electrons liberated in lead by the absorption of multimegavolt x-radiation are very
energetic. This means that those arising from fairly deep within a lead screen can
penetrate the lead, being scattered as they go, and reach the film. Thus, when thick
screens are used, the electrons reaching the film are “diffused,” with a resultant
deleterious effect on image quality. Therefore, when the highest quality is required in
multimillion-volt radiography, a comparatively thin front screen (about 0.005 inch) is
used, and the back screen is eliminated. This necessitates a considerable increase in
exposure time. Naturally, the applicability of the technique depends also on the amount
of backscattered radiation involved and is probably not applicable where large amounts
occur.

PHOTOGRAPHIC LATENT IMAGE


As shown in earlier figures, a photographic emulsion consists of a myriad of tiny crystals
of silver halide—usually the bromide with a small quantity of iodide—dispersed in gelatin
and coated on a support. The crystals—or photographic grains—respond as individual
units to the successive actions of radiation and the photographic developer.

The photographic latent image may be defined as that radiation-induced change in a grain
or crystal that renders the grain readily susceptible to the chemical action of a developer.

To discuss the latent image in the confines of this text requires that only the basic concept be
outlined. A discussion of the historical development of the subject and a consideration of
most of the experimental evidence supporting these theories must be omitted because of lack
of space.

It is interesting to note that throughout the greater part of the history of photography, the
nature of the latent image was unknown or in considerable doubt. The first public
announcement of Daguerre’s process was made in 1839, but it was not until 1938 that a
reasonably satisfactory and coherent theory of the formation of the photographic latent
image was proposed. That theory has been undergoing refinement and modification ever
since.

Some of the investigational difficulties arose because the formation of the latent image is a
very subtle change in the silver halide grain. It involves the absorption of only one or a few
photons of radiation and can therefore affect only a few atoms, out of some 10^9 or 10^10 atoms
in a typical photographic grain. The latent image cannot be detected by direct physical or
analytical chemical means.

However, even during the time that the mechanism of formation of the latent image was a
subject for speculation, a good deal was known about its physical nature. It was known, for
example, that the latent image was localized at certain discrete sites on the silver halide
grain.

If a photographic emulsion is exposed to light, developed briefly, fixed, and then examined
under a microscope (see the figure below), it can be seen that development (the reduction of
silver halide to metallic silver) has begun at only one or a few places on the crystal. Since
small amounts of silver sulfide on the surface of the grain were known to be necessary for a

photographic material to have a high sensitivity, it seemed likely that the spots at which the
latent image was localized were local concentrations of silver sulfide.

Electron micrograph of exposed, partially developed, and fixed grains, showing initiation of
development at localized sites on the grains (1µ = 1 micron = 0.001 mm).

It was further known that the material of the latent image was, in all probability, silver. For
one thing, chemical reactions that will oxidize silver will also destroy the latent image. For
another, it is a common observation that photographic materials given prolonged exposure
to light darken spontaneously, without the need for development. This darkening is known
as the print-out image. The printout image contains enough material to be identified
chemically, and this material is metallic silver. By microscopic examination, the silver of the
print-out image is discovered to be localized at certain discrete areas of the grain (see the
figure below), just as is the latent image.

Electron micrograph of photolytic silver produced in a grain by very intense exposure to
light.

Thus, the change that makes an exposed photographic grain capable of being transformed
into metallic silver by the mild reducing action of a photographic developer is a
concentration of silver atoms—probably only a few—at one or more discrete sites on the
grain. Any theory of latent-image formation must account for the way that light photons
absorbed at random within the grain can produce these isolated aggregates of silver atoms.
Most current theories of latent-image formation are modifications of the mechanism
proposed by R. W. Gurney and N. F. Mott in 1938.

In order to understand the Gurney-Mott theory of the latent image, it is necessary to digress
and consider the structure of crystals—in particular, the structure of silver bromide
crystals.

When solid silver bromide is formed, as in the preparation of a photographic emulsion, the
silver atoms each give up one orbital electron to a bromine atom. The silver atoms, lacking
one negative charge, have an effective positive charge and are known as silver ions (Ag+).
The bromine atoms, on the other hand, have gained an electron—a negative charge—and
have become bromine ions (Br-). The “plus” and “minus” signs indicate, respectively, one
fewer or one more electron than the number required for electrical neutrality of the atom.

A crystal of silver bromide is a regular cubical array of silver and bromide ions, as shown
schematically in the figure below. It should be emphasized that the “magnification” of the
figure is very great. An average grain in an industrial x-ray film may be about 0.00004 inch
in diameter, yet will contain several billions of ions.

A silver bromide crystal is a rectangular array of silver (Ag+) and bromide (Br-) ions.

A crystal of silver bromide in a photographic emulsion is—fortunately—not perfect; a
number of imperfections are always present. First, within the crystal, there are silver ions
that do not occupy the “lattice position” shown in the figure above, but rather are in the
spaces between. These are known as interstitial silver ions (see the figure below). The
number of the interstitial silver ions is, of course, small compared to the total number of
silver ions in the crystal. In addition, there are distortions of the uniform crystal structure.

These may be “foreign” molecules, within or on the crystal, produced by reactions with the
components of the gelatin, or distortions or dislocations of the regular array of ions shown
in the figure above. These may be classed together and called “latent-image sites.”

“Plan view” of a layer of ions of a crystal similar to that of the previous figure. A
latent-image site is shown schematically, and two interstitial silver ions are indicated.

The Gurney-Mott theory envisions latent-image formation as a two-stage process. It will be
discussed first in terms of the formation of the latent image by light, and then the special
considerations of direct x-ray or lead foil screen exposures will be covered.
THE GURNEY-MOTT THEORY
When a photon of light of energy greater than a certain minimum value (that is, of
wavelength less than a certain maximum) is absorbed in a silver bromide crystal, it releases
an electron from a bromide (Br-) ion. The ion, having lost its excess negative charge, is
changed to a bromine atom. The liberated electron is free to wander about the crystal (see
the figure below).

As it does, it may encounter a latent-image site and be “trapped” there, giving the
latent-image site a negative electrical charge. This first stage of latent-image
formation—involving as it does transfer of electrical charges by means of moving
electrons—is the electronic conduction stage.

Stages in the development of the latent image according to the Gurney-Mott theory.

The negatively charged trap can then attract an interstitial silver ion because the silver ion
is charged positively (C in the figure above). When such an interstitial ion reaches a
negatively charged trap, its charge is counteracted, an atom of silver is deposited at the
trap, and the trap is “reset” (D in the figure above). This second stage of the Gurney-Mott
mechanism is termed the ionic conduction stage, since electrical charge is transferred
through the crystal by the movement of ions—that is, charged atoms. The whole cycle can
recur several, or many, times at a single trap, each cycle involving absorption of one photon
and addition of one silver atom to the aggregate. (See E to H in the figure above.)

This aggregate of silver atoms is the latent image. The presence of these few
atoms at a single latent-image site makes the whole grain susceptible to the reducing action
of the developer. In the most sensitive emulsions, the number of silver atoms required may
be less than ten.

The mark of the success of a theory is its ability to provide an understanding of previously
inexplicable phenomena. The Gurney-Mott theory and those derived from it have been

notably successful in explaining a number of photographic effects. One of these effects—
reciprocity-law failure—will be considered here as an illustration.

Low-intensity reciprocity-law failure (left branch of the curve) results from the fact that
several atoms of silver are required to produce a stable latent image. A single atom of silver
at a latent-image site (D in the figure above) is relatively unstable, breaking down rather
easily into an electron and a positive silver ion. Thus, if there is a long interval between the
formation of the first silver atom and the arrival of the second conduction electron (E in the
figure above), the first silver atom may have broken down, with the net result that the
energy of the light photon that produced it has been wasted. Therefore, increasing the
light intensity from very low to higher values increases the efficiency, as shown by the
downward trend of the left-hand branch of the curve.

High-intensity reciprocity-law failure (right branch of the curve) is frequently a
consequence of the sluggishness of the ionic process in latent-image formation (see the
figure above). According to the Gurney-Mott mechanism, a trapped electron must be
neutralized by the movement of an interstitial silver ion to that spot (D in the figure above)
before a second electron can be trapped there (E in the figure above); otherwise, the second
electron is repelled and may be trapped elsewhere. Therefore, if electrons arrive at a
particular sensitivity center faster than the ions can migrate to the center, some electrons
are repelled, and the center does not build up with maximum efficiency.

Electrons thus denied access to the same traps may be trapped at others, and the latent
image silver therefore tends to be inefficiently divided among several latent-image sites.
(This has been demonstrated by experiments that have shown that high-intensity exposure
produces more latent image within the volume of the crystal than do either low- or
optimum-intensity exposures.) Thus, the resulting inefficiency in the use of the conduction
electrons is responsible for the upward trend of the right-hand branch of the curve.
X-RAY LATENT IMAGE
In industrial radiography, the photographic effects of x-rays and gamma rays, rather than
those of light, are of the greater interest. At the outset it should be stated that the agent
that actually exposes a photographic grain, that is, a silver bromide crystal in the
emulsion, is not the x-ray photon itself, but rather the electrons—photoelectric and
Compton—resulting from the absorption event. It is for this reason that direct x-ray
exposures and lead foil screen exposures are similar and can be considered together.

The most striking differences between x-ray and visible-light exposures to grains arise
from the difference in the amounts of energy involved. The absorption of a single photon of
light transfers a very small amount of energy to the crystal. This is only enough energy to
free a single electron from a bromide (Br-) ion, and several successive light photons are
required to render a single grain developable.

The passage through a grain of an electron, arising from the absorption of an x-ray photon,
can transmit hundreds of times more energy to the grain than does the absorption of a light
photon. Even though this energy is used rather inefficiently, in general the amount is
sufficient to render the grain traversed developable—that is, to produce within it, or on it, a
stable latent image.

As a matter of fact, the photoelectric or Compton electron, resulting from absorption or
interaction of a photon, can have a fairly long path in the emulsion and can render several
or many grains developable. The number of grains exposed per photon interaction can
vary from 1 grain for x-radiation of about 10 keV to possibly 50 or more grains for a
1 MeV photon.

However, for 1 MeV and higher-energy photons, there is a low probability of an interaction
that transfers the total energy to grains in an emulsion.

Most commonly, high photon energy is imparted to several electrons by successive
Compton interactions. Also, high-energy electrons pass out of an emulsion before all of
their energy is dissipated. For these reasons there are, on the average, 5 to 10 grains made
developable per photon interaction at high energy.

For comparatively low values of exposure, each increment of exposure renders on the
average the same number of grains developable, which, in turn, means that a curve of net
density versus exposure is a straight line passing through the origin (see the figure below).

This curve departs significantly from linearity only when the exposure becomes so great
that appreciable energy is wasted on grains that have already been exposed. For
commercially available fine-grain x-ray films, for example, the density versus exposure
curve may be essentially linear up to densities of 2.0 or even higher.

Typical net density versus exposure curves for direct x-ray exposures.

The fairly extensive straight-line relation between exposure and density is of considerable
use in photographic monitoring of radiation, permitting a saving of time in the
interpretation of densities observed on personnel monitoring films.

If the D versus E curves shown in the figure above are replotted as characteristic curves (D
versus log E), both characteristic curves are the same shape (see the figure below) and are
merely separated along the log exposure axis. This similarity in toe shape has been
experimentally observed for conventional processing of many commercial photographic
materials, both x-ray films and others.

Characteristic curves plotted from the data in the previous figure.

Because a grain is completely exposed by the passage of an energetic electron, all x-ray
exposures are, as far as the individual grain is concerned, extremely short. The actual time
that an x-ray-induced electron is within a grain depends on the electron velocity, the grain
dimensions, and the “squareness” of the hit. However, a time of the order of 10^-13 second is
representative. (This is in distinction to the case of light where the “exposure time” for a
single grain is the interval between the arrival of the first photon and that of the last photon
required to produce a stable latent image.)

The complete exposure of a grain by a single event and in a very short time implies that
there should be no reciprocity-law failure for direct x-ray exposures or for exposures made
with lead foil screens. The validity of this has been established for commercially available
film and conventional processing over an extremely wide range of x-ray intensities. That
films can satisfactorily integrate x-, gamma-, and beta-ray exposures delivered at a wide
range of intensities is one of the advantages of film as a radiation dosimeter.

In the discussion on reciprocity-law failure it was pointed out that a very short, very high
intensity exposure to light tends to produce latent images in the interior of the grain.
Because x-ray exposures are also, in effect, very short, very high intensity exposures, they
too tend to produce internal, as well as surface, latent images.
DEVELOPMENT
Many materials discolor on exposure to light—a pine board or the human skin, for
example—and thus could conceivably be used to record images. However, most such
systems respond to exposure on a “1:1” basis, in that one photon of light results in the

production of one altered molecule or atom. The process of development constitutes one of
the major advantages of the silver halide system of photography.

In this system, a few atoms of photolytically deposited silver can, by development, be made
to trigger the subsequent chemical deposition of some 10^9 or 10^10 additional silver atoms,
resulting in an amplification factor of the order of 10^9 or greater. The amplification process
can be performed at a time, and to a degree, convenient to the user and, with sufficient
care, can be uniform and reproducible enough for the purposes of quantitative
measurements of radiation.
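The amplification factor quoted above is simply the ratio of chemically deposited silver atoms to latent-image atoms. A back-of-envelope check in Python, using only the order-of-magnitude figures from the text:

```python
# Back-of-envelope check of the development amplification factor.
# Both figures are orders of magnitude taken from the text, not measurements.
latent_image_atoms = 10    # silver atoms in a just-developable latent image
grain_atoms = 1e10         # silver atoms deposited when the whole grain develops

amplification = grain_atoms / latent_image_atoms
print(f"amplification factor ~ {amplification:.0e}")  # prints: amplification factor ~ 1e+09
```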

Development is essentially a chemical reduction in which silver halide is converted to
metallic silver. In order to retain the photographic image, however, the reaction must be
limited largely to those grains that contain a latent image, that is, to those grains that
have received more than a certain minimum exposure to radiation. Compounds
that can be used as photographic developing agents, therefore, are limited to those in which
the reduction of silver halide to metallic silver is catalyzed (or speeded up) by the presence
of the metallic silver of the latent image. Those compounds that reduce silver halide in the
absence of a catalytic effect by the latent image are not suitable developing agents because
they produce a uniform overall density on the processed film.

Many practical developing agents are relatively simple organic compounds (see the figure
below) and, as shown, their activity is strongly dependent on molecular structure as well as
on composition. There exist empirical rules by which the developing activity of a particular
compound may often be predicted from a knowledge of its structure.

Configurations of dihydroxybenzene, showing how developer properties depend on
structure.

The simplest concept of the role of the latent image in development is that it acts merely as
an electron-conducting bridge by which electrons from the developing agent can reach the
silver ions on the interior face of the latent image. Experiment has shown that this simple
concept is inadequate to explain the phenomena encountered in practical photographic
development.

Adsorption of the developing agent to the silver halide or at the silver-silver halide
interface has been shown to be very important in determining the rate of direct, or
chemical, development by most developing agents. The rate of development by
hydroquinone (see the figure above), for example, appears to be relatively independent of
the area of the silver surface and instead to be governed by the extent of the silver-silver
halide interface.

The exact mechanisms by which a developing agent acts are relatively complicated, and
research on the subject is very active.

The broad outlines, however, are relatively clear. A molecule of a developing agent can
easily give an electron to an exposed silver bromide grain (that is, to one that carries a
latent image), but not to an unexposed grain. This electron can combine with a silver (Ag+)
ion of the crystal, neutralizing the positive charge and producing an atom of silver. The
process can be repeated many times until all the billions of silver ions in a photographic
grain have been turned into metallic silver.

The development process has both similarities to, and differences from, the process of latent-
image formation. Both involve the union of a silver ion and an electron to produce an atom
of metallic silver. In latent image formation, the electron is freed by the action of radiation
and combines with an interstitial silver ion. In the development process, the electrons are
supplied by a chemical electron-donor and combine with the silver ions of the crystal lattice.

The physical shape of the developed silver need have little relation to the shape of the silver
halide grain from which it was derived. Very often the metallic silver has a tangled,
filamentary form, the outer boundaries of which can extend far beyond the limits of the
original silver halide grain (see the figure below). The mechanism by which these filaments
are formed is still in doubt although it is probably associated with that by which filamentary
silver can be produced by vacuum deposition of the silver atoms from the vapor phase onto
suitable nuclei.

Electron micrograph of a developed silver bromide grain.

The discussion of development has thus far been limited to the action of the developing
agent alone. However, a practical photographic developer solution consists of much more
than a mere water solution of a developing agent. The function of the other common
components of a practical developer are the following:

An Alkali

The activity of developing agents depends on the alkalinity of the solution. The alkali
should also have a strong buffering action to counteract the liberation of hydrogen ions—
that is, a tendency toward acidity—that accompanies the development process. Common
alkalis are sodium hydroxide, sodium carbonate, and certain borates.

A Preservative

This is usually a sulfite. One of its chief functions is to protect the developing agent from
oxidation by air. It destroys certain reaction products of the oxidation of the developing
agent that tend to catalyze the oxidation reaction. Sulfite also reacts with the reaction
products of the development process itself, thus tending to maintain the development rate
and to prevent staining of the photographic layer.

A Restrainer

A bromide, usually potassium bromide, is a common restrainer or antifoggant. Bromide
ions decrease the possible concentration of silver ions in solution (by the common-ion
effect) and also, by being adsorbed to the surface of the silver bromide grain, protect
unexposed grains from the action of the developer. Both of these actions tend to reduce the
formation of fog.

Commercial developers often contain other materials in addition to those listed above. An
example would be the hardeners usually used in developers for automatic processors.

RADIOGRAPHY PROCESS CONTROL


PHOTOGRAPHIC DENSITY
Photographic density refers to the quantitative measure of film blackening. When no
danger of confusion exists, photographic density is usually spoken of merely as density.
Density is defined by the equation

D = log10(Io/It)

where D is density, Io is the light intensity incident on the film, and It is the light
intensity transmitted. The table below illustrates some relations between transmittance,
percent transmittance, opacity, and density. It shows that an increase in density of 0.3
reduces the light transmitted to one-half its former value. In general, since density is a
logarithm, a given increase in density always corresponds to the same percentage
decrease in transmittance.

Transmittance    % Transmittance    Opacity (Io/It)    Density
1.0              100                1                  0
0.50             50                 2                  0.3
0.25             25                 4                  0.6
0.10             10                 10                 1.0
0.01             1                  100                2.0
0.001            0.1               1,000               3.0
0.0001           0.01              10,000              4.0
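The tabulated values follow directly from the definition of density, D = log10(Io/It). A short Python check (the helper function is illustrative, not part of any standard):

```python
import math

def density(incident, transmitted):
    """Photographic density: D = log10(Io / It)."""
    return math.log10(incident / transmitted)

# Reproduce rows of the table: transmittance -> opacity and density
for transmittance in (1.0, 0.50, 0.25, 0.10, 0.01, 0.001, 0.0001):
    d = density(1.0, transmittance)
    print(f"T = {transmittance:<7} opacity = {1/transmittance:<8.0f} D = {d:.1f}")
```

Note that halving the transmitted light (0.50 to 0.25, say) always adds the same 0.3 to the density, which is the logarithmic property the text describes.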
DENSITOMETERS
A densitometer is an instrument for measuring photographic densities. A number of
different types, both visual and photoelectric, are available commercially. For purposes
of practical industrial radiography, there is no great premium on high accuracy of a
densitometer. A much more important property is reliability, that is, the densitometer
should reproduce readings from day to day.

RADIOGRAPHIC CONTRAST

The first subjective criterion for determining radiographic quality is radiographic
contrast. Essentially, radiographic contrast is the degree of density difference between
adjacent areas on a radiograph.

It is entirely possible to radiograph a particular subject and, by varying factors, produce
two radiographs possessing entirely different contrast levels. With an x-ray source of low
kilovoltage, we see an illustration of extremely high radiographic contrast; that is, the
density difference between the two adjacent areas (A and B) is high. It is essential that sufficient
contrast exist between the defect of interest and the surrounding area. There is no
viewing technique that can extract information that does not already exist in the original
radiograph.

With an x-ray source of high kilovoltage, we see a sample of relatively low radiographic
contrast, that is, the density difference between the two adjacent areas (A and B) is low.

Definition
Besides radiographic contrast, there is one other subjective criterion for determining
radiographic quality: radiographic definition. Essentially, radiographic definition is the
abruptness of change in going from one density to another. For example, it is possible to
radiograph a particular subject and, by varying certain factors, produce two radiographs
that possess different degrees of definition.

In the example to the left, a two-step step tablet with the transition from step to step
represented by Line BC is quite sharp or abrupt. Translated into a radiograph, we see
that the transition from the high density to the low density is abrupt. The Edge Line BC is
still a vertical line quite similar to the step tablet itself. We can say that the detail portrayed
in the radiograph is equivalent to physical change present in the step tablet. Hence, we
can say that the imaging system produced a faithful visual reproduction of the step table.
It produced essentially all of the information present in the step tablet on the radiograph.

In the example on the right, the same two-step step tablet has been radiographed.
However, here we note that, for some reason, the imaging system did not produce a
faithful visual reproduction. The Edge Line BC on the step tablet is not vertical. This is
evidenced by the gradual transition between the high and low-density areas on the
radiograph. The edge definition or detail is not present because of certain factors or
conditions that exist in the imaging system.

In review, it is entirely possible to have radiographs with the following qualities:

• Low contrast and poor definition

• High contrast and poor definition

• Low contrast and good definition

• High contrast and good definition

One must bear in mind that radiographic contrast and definition are not dependent upon
the same set of factors. If detail in a radiograph is originally lacking, then attempts to
manipulate radiographic contrast will have no effect on the amount of detail present in
that radiograph.
GRAININESS
Graininess is defined as the visual impression of nonuniformity of density in a
radiographic (or photographic) image. With fast films exposed to high-kilovoltage
radiation, the graininess is easily apparent to the unaided vision; with slow films
exposed to low-kilovoltage x-rays, moderate magnification may be needed to
make it visible. In general, graininess increases with increasing film speed and
with increasing energy of the radiation.
The “clumps” of developed silver, which are responsible for the impression of graininess,
do not each arise from a single developed photographic grain. That this cannot be so
can be seen from size considerations alone. The particle of black metallic silver arising
from the development of a single photographic grain in an industrial x-ray film is rarely
larger than 0.001 mm (0.00004 inch) and usually even less. This is far below the limits
of unaided human vision.

Rather, the visual impression of graininess is caused by the random, statistical grouping
of these individual silver particles. Each quantum (photon) of x-radiation or gamma
radiation absorbed in the film emulsion exposes one or more of the tiny crystals of silver
bromide of which the emulsion is composed. These “absorption events” occur at
random, and even in a uniform x-ray beam the number of absorption events will differ
from one tiny area of the film to the next for purely statistical reasons. Thus, the
exposed grains will be randomly distributed; that is, their numbers will have a statistical
variation from one area to the next.

In understanding this effect, a simple analogy—a long sidewalk on a rainy day—will be of
assistance. The sidewalk corresponds to the film and the raindrops to the x-ray photons
absorbed in it. (Only those absorbed by the film are considered because only those that
are absorbed have a photographic effect.) First, consider what happens in a downpour
so hard that there is an average of 10,000 drops per block or square of the sidewalk. It
would not be expected, however, that each square would receive precisely this average
number of 10,000 drops. Since the raindrops fall at random, only a few or perhaps none
of the squares would receive precisely 10,000 drops.

Some would receive more than 10,000 and others less. In other words, the actual
number of drops falling on any particular square will most likely differ from the average
number of drops per square along the whole length of the sidewalk. The laws of
statistics show that the differences between the actual numbers and the average
number of drops can be described in terms of probability. If a large number of blocks is
involved, the actual number of raindrops on 68 percent of the blocks will differ from the
average by no more than 100 drops, or ±1 percent of the average. The remaining 32
percent will differ by more than this number. Thus, the differences in “wetness” from
one block of sidewalk to another will be small and probably unnoticeable.

This value of 100 drops holds only for an average of 10,000 drops per square. Now
consider the same sidewalk in a light shower in which the average number of drops per
square is only 100. The same statistical laws show that the deviation from the average
number of drops will be 10 or ±10 percent of the average. Thus, differences in wetness
from one square to the next will be much more noticeable in a light shower (±10
percent) than they are in a heavy downpour (±1 percent).

Now we will consider these drops as x-ray photons absorbed in the film. With a very
slow film, it might be necessary to have 10,000 photons absorbed in a small area to
produce a density of, for example, 1.0. With an extremely fast film it might require only
100 photons in the same area to produce the same density of 1.0. When only a few
photons are required to produce the density, the random positions of the absorption
events become visible in the processed film as film graininess. On the other hand, the
more x-ray photons required, the less noticeable the graininess in the radiographic
image, all else being equal.

It can now be seen how film speed governs film graininess. In general, the silver
bromide crystals in a slow film are smaller than those in a fast film, and thus will
produce less light-absorbing silver when they are exposed and developed. Yet, at low
kilovoltages, one absorbed photon will expose one grain, of whatever size. Thus, more
photons will have to be absorbed in the slower film than in the faster to result in a
particular density. For the slower film, the situation will be closer to the “downpour”
case in the analogy above and film graininess will be lower.

The increase in graininess of a particular film with increasing kilovoltage can also be
understood on this basis. At low kilovoltages each absorbed photon exposes one
photographic grain; at high kilovoltages one photon will expose several, or even many,
grains. At high kilovoltages, then, fewer absorption events will be required to expose
the number of grains required for a given density than at lower kilovoltages. The fewer
absorption events, in turn, mean a greater relative deviation from the average, and
hence greater graininess.

It should be pointed out that although this discussion is on the basis of direct x-ray
exposures, it also applies to exposures with lead screens. The agent that actually
exposes a grain is a high-speed electron arising from the absorption of an x- or gamma-
ray photon. The silver bromide grain in a film cannot distinguish between an electron
that arises from an absorption event within the film emulsion and one arising from the
absorption of a similar photon in a lead screen.

The quantum mottle observed in radiographs made with fluorescent intensifying screens
has a similar statistical origin. In this case, however, it is the numbers of photons
absorbed in the screens that are of significance.
THE CHARACTERISTIC CURVE
The characteristic curve, sometimes referred to as the sensitometric curve or the H and
D curve (after Hurter and Driffield who, in 1890, first used it), expresses the relation
between the exposure applied to a photographic material and the resulting photographic
density. The characteristic curves of three typical films, exposed between lead foil
screens to x-rays, are given in the figure below. Such curves are obtained by giving a
film a series of known exposures, determining the densities produced by these
exposures, and then plotting density against the logarithm of relative exposure.

Relative exposure is used because there are no convenient units, suitable to all
kilovoltages and scattering conditions, in which to express radiographic exposures.
Hence, the exposures given a film are expressed in terms of some particular exposure,
giving a relative scale. In practical radiography, this lack of units for x-ray intensity or
quantity is no hindrance. The use of the logarithm of the relative exposure, rather than
the relative exposure itself, has a number of advantages. It compresses an otherwise
long scale. Furthermore, in radiography, ratios of exposures or intensities are usually
more significant than the exposures or the intensities themselves. Pairs of exposures
having the same ratio will be separated by the same interval on the log relative
exposure scale, no matter what their absolute value may be. Consider the following
pairs of exposures.

Relative Exposure    Log Relative Exposure    Interval in Log Relative Exposure
        1                    0.00
        5                    0.70                         0.70
        2                    0.30
       10                    1.00                         0.70
       30                    1.48
      150                    2.18                         0.70
This illustrates another useful property of the logarithmic scale. A previous figure shows
that the antilogarithm of 0.70 is 5, which is the ratio of each pair of exposures. Hence, to
find the ratio of any pair of exposures, it is necessary only to find the antilog of the log E
(logarithm of relative exposure) interval between them. Conversely, the log exposure
interval between any two exposures is determined by finding the logarithm of their ratio.
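The relationship just described can be checked in a few lines. This is a sketch, not part of the original text: the three exposure pairs above all share the ratio 5 and therefore the same log E interval of about 0.70.

```python
import math

def log_e_interval(e1, e2):
    """Interval on the log relative exposure scale between two exposures."""
    return math.log10(e2 / e1)

def exposure_ratio(interval):
    """Antilog of a log E interval gives back the exposure ratio."""
    return 10 ** interval

for e1, e2 in [(1, 5), (2, 10), (30, 150)]:
    print(round(log_e_interval(e1, e2), 2))   # 0.7 each time

print(round(exposure_ratio(0.70), 1))         # 5.0
```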

Characteristic curves of three typical x-ray films, exposed between lead foil screens.

As the figure above shows, the slope (or steepness) of the characteristic curves is
continuously changing throughout the length of the curves. It will suffice at this point to
give a qualitative outline of these effects. For example, two slightly different thicknesses
in the object radiographed transmit slightly different exposures to the film. These two
exposures have a certain small log E interval between them, that is, have a certain
ratio. The difference in the densities corresponding to the two exposures depends on
just where on the characteristic curve they fall, and the steeper the slope of the curve,
the greater is this density difference.

For example, the curve of Film Z (see the figure above), is steepest in its middle
portion. This means that a certain log E interval in the middle of the curve corresponds
to a greater density difference than the same log E interval at either end of the curve. In
other words, the film contrast is greatest where the slope of the characteristic curve is
greatest. For Film Z, as has been pointed out, the region of greatest slope is in the
central part of the curve. For Films X and Y, however, the slope—and hence the film
contrast—continuously increases throughout the useful density range. The curves of
most industrial x-ray films are similar to those of Films X and Y.
USE OF THE CHARACTERISTIC CURVE
The characteristic curve can be used to solve quantitative problems arising in
radiography, in the preparation of technique charts, and in radiographic research.
Ideally, characteristic curves made under the radiographic conditions actually
encountered should be used in solving practical problems. However, it is not always
possible to produce characteristic curves in a radiographic department, and curves
prepared elsewhere must be used. Such curves prove adequate for many purposes
although it must be remembered that the shape of the characteristic curve and the
speed of a film relative to that of another depend strongly on developing conditions.

The accuracy attained when using “ready-made” characteristic curves is governed
largely by the similarity between the developing conditions used in producing the
characteristic curves and those used for the film whose densities are to be evaluated. A few
examples of the quantitative use of characteristic curves are worked out below. In the
examples below, D is used for density and log E for the logarithm of the relative
exposure.

Example 1: Suppose a radiograph made on Film Z (see the figure below) with an
exposure of 12 mA-min has a density of 0.8 in the region of maximum interest. It is
desired to increase the density to 2.0 for the sake of the increased contrast there
available.

1. Log E at D = 2.0 is 1.62

2. Log E at D = 0.8 is 1.00

3. Difference in log E is 0.62

Antilogarithm of this difference is 4.2

Therefore, the original exposure is multiplied by 4.2 giving 50 mA-min to produce a
density of 2.0.
Circled numerals in the figure correspond to the items in Example 1.
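The arithmetic of Example 1 can be expressed compactly. This is an illustrative sketch; the log E values (1.00 at D = 0.8 and 1.62 at D = 2.0) are the ones read from Film Z's characteristic curve in the text.

```python
def adjusted_exposure(old_exposure, log_e_old, log_e_new):
    """Scale an exposure by the antilog of the log E interval between
    the current and desired densities on the characteristic curve."""
    return old_exposure * 10 ** (log_e_new - log_e_old)

# Film Z: D = 0.8 at log E = 1.00, D = 2.0 at log E = 1.62
print(round(adjusted_exposure(12, 1.00, 1.62)))  # 50 mA-min
```

The same function reproduces Example 2 below: scaling the 50 mA-min exposure by the antilog of 1.91 minus 1.62 gives about 97.5 mA-min for Film X.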

Example 2: Film X has a higher contrast than Film Z at D = 2.0 (see the figure below)
and also a finer grain. Suppose that, for these reasons, it is desired to make the
radiograph on Film X with a density of 2.0 in the same region of maximum interest.

4. Log E at D = 2.0 for Film X is 1.91

5. Log E at D = 2.0 for Film Z is 1.62

6. Difference in log E is 0.29. Antilogarithm of this difference is 1.95. Therefore,
the exposure for D = 2.0 on Film Z is multiplied by 1.95 giving 97.5 mA-min, for a
density of 2.0 on Film X.

Form of transparent overlay of proper dimensions for use with the characteristic curves
in the solution of exposure problems. The use of the overlay and curves is
demonstrated below.
Nomogram Methods
In the figure above, the scales at the far left and far right are relative exposure values.
They do not represent milliampere-minutes, curie-hours, or any other exposure unit;
they are to be considered merely as multiplying (or dividing) factors, the use of which is
explained below. Note, also, that these scales are identical, so that a ruler placed
across them at the same value will intersect the vertical lines, in the center of the
diagram, at right angles.

On the central group of lines, each labeled with the designation of a film whose curve is
shown in the earlier figure, the numbers represent densities.

The use of the figure above will be demonstrated by a re-solution of the same problems
used as illustrations in both of the preceding sections. Note that in the use of the
nomogram, the straightedge must be placed so that it is at right angles to all the lines—
that is, so that it cuts the outermost scales on the left and the right at the same value.

Example 1: Suppose a radiograph made on Film Z with an exposure of 12 mA-min has
a density of 0.8 in the region of maximum interest. It is desired to increase the density to
2.0 for the sake of the increased contrast there available.

Place the straightedge across the figure above so that it cuts the Film Z scale at 0.8.
The reading on the outside scales is then 9.8. Now move the straightedge upward so that
it cuts the Film Z scale at 2.0; the reading on the outside scales is now 41. The original
exposure (12 mA-min) must be multiplied by the ratio of these two numbers— that is, by
41/9.8 = 4.2. Therefore, the new exposure is 12 x 4.2 mA-min or 50 mA-min.

Example 2: Film X has a higher contrast than Film Z at D = 2.0 (see the earlier figure)
and also lower graininess. Suppose that, for these reasons, it is desired to make the
aforementioned radiograph on Film X with a density of 2.0 in the same region of
maximum interest.

Place the straightedge on the figure above so that it cuts the scale for Film Z at 2.0. The
reading on the outside scales is then 41, as in Example 1. When the straightedge is
placed across the Film X scale at 2.0, the reading on the outside scale is 81. In the
previous example, the exposure for a density of 2.0 on Film Z was found to be 50
mA-min. In order to give a density of 2.0 on Film X, this exposure must be multiplied by
the ratio of the two scale readings just found—81/41 = 1.97. The new exposure is
therefore 50 x 1.97 or 98 mA-min.

Example 3: The types of problems given in Examples 1 and 2 are often combined in
actual practice. Suppose, for example, that a radiograph was made on Film X (see the
earlier figure) with an exposure of 20 mA-min and that a density of 1.0 was obtained. A
radiograph at the same kilovoltage on Film Y at a density of 2.5 is desired for the sake
of the higher contrast and the lower graininess obtainable. The problem can be solved
graphically in a single step.

The reading on the outside scale for D = 1.0 on Film X is 38. The corresponding reading
for D = 2.5 on Film Y is 420. The ratio of these is 420/38 = 11, the factor by which the
original exposure must be multiplied. The new exposure to produce D = 2.5 on Film Y is
then 20 x 11 or 220 mA-min.
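The nomogram procedure reduces to a single ratio of the outside-scale readings. This sketch reuses the readings quoted in Example 3 (38 for D = 1.0 on Film X, 420 for D = 2.5 on Film Y); note the text rounds the factor to 11 before multiplying, giving 220 mA-min, while unrounded arithmetic gives 221.

```python
def nomogram_exposure(old_exposure, old_reading, new_reading):
    """Multiply the old exposure by the ratio of the two outside-scale
    readings taken from the nomogram."""
    return old_exposure * new_reading / old_reading

# Example 3: 20 mA-min at D = 1.0 on Film X -> D = 2.5 on Film Y
print(round(nomogram_exposure(20, 38, 420)))   # 221 (text rounds to 220)

# Example 1 again: readings 9.8 and 41 on the Film Z scale
print(round(nomogram_exposure(12, 9.8, 41)))   # 50 mA-min
```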

FUNDAMENTALS OF PROCESSING

In the processing procedure, the invisible image produced in the film by exposure to
x-rays, gamma rays, or light is made visible and permanent. Processing is carried out
under subdued light of a color to which the film is relatively insensitive. The film is first
immersed in a developer solution, which causes the areas exposed to radiation to
become dark, the amount of darkening for a given degree of development depending
on the degree of exposure. After development, and sometimes after a treatment
designed to halt the developer reaction abruptly, the film passes into a fixing bath. The
function of the fixer is to dissolve the undarkened (undeveloped) portions of the sensitive salt. The film is
then washed to remove the fixing chemicals and solubilized salts, and finally is dried.

Processing techniques can be divided into two general classes:

1. “MANUAL PROCESSING” and

2. “AUTOMATED FILM PROCESSING”.

If the volume of work is small, or if time is of relatively little importance, radiographs may
be processed by hand. The most common method of manual processing of industrial
radiographs is known as the tank method. In this system, the processing solutions and
wash water are contained in tanks deep enough for the film to be hung vertically. Thus,
the processing solutions have free access to both sides of the film, and both emulsion
surfaces are uniformly processed to the same degree. The all-important factor of
temperature can be controlled by regulating the temperature of the water in which the
processing tanks are immersed.

Where the volume of work is large or the holding time is important, automated processors
are used. These reduce the darkroom manpower required, drastically shorten the interval
between completion of the exposure and the availability of a dry radiograph ready for
interpretation, and release the material being inspected much faster. Automated
processors move films through the various solutions according to a predetermined
schedule. Manual work is limited to putting the unprocessed film into the processor or
into the film feeder, and removing the processed radiographs from the receiving bin.
GENERAL CONSIDERATIONS
Cleanliness
In handling x-ray films, cleanliness is a prime essential. The processing room, as well
as the accessories and equipment, must be kept scrupulously clean and used only for
the purposes for which they are intended. Any solutions that are spilled should be
wiped up at once; otherwise, on evaporation, the chemicals may get into the air and
later settle on film surfaces, causing spots. The thermometer and such accessories as
film hangers should be thoroughly washed in clean water immediately after being used,
so that processing solutions will not dry on them and possibly cause contamination of
solutions or streaked radiographs when used again.

All tanks should be cleaned thoroughly before putting fresh solutions into them.

Mixing Processing Solutions


Processing solutions should be mixed according to the directions on the labels; the
instructions as to water temperature and order of addition of chemicals should be
followed carefully, as should the safe-handling precautions for chemicals given on
labels or instruction sheets.

The necessary vessels or pails should be made of AISI Type 316 stainless steel with 2
to 3 percent molybdenum, or of enamelware, glass, plastic, hard rubber, or glazed
earthenware. (Metals such as aluminum, galvanized iron, tin, copper, and zinc cause
contamination and result in fog in the radiograph.) Paddles or plunger-type agitators
are practical for stirring solutions. They should be made of hard rubber, stainless steel,
or some other material that does not absorb or react with processing solutions.

Separate paddles or agitators should be provided for the developer and fixer. If the
paddles are washed thoroughly and hung up to dry immediately after use, the danger of
contamination when they are employed again will be virtually nil. A motor-driven stirrer
with a stainless steel propeller is a convenient aid in mixing solutions. In any event, the
agitation used in mixing processing solutions should be vigorous and complete, but not
violent.
MANUAL PROCESSING
When tank processing is used, the routine is, first, to mount the exposed film on a
hanger immediately after it is taken from the cassette or film holder, or removed from
the factory-sealed envelope. (See the figure below.) Then the film can be conveniently
immersed in the developer solution, stop bath, fixer solution, and wash water for the
predetermined intervals, and it is held securely and kept taut throughout the course of
the procedure.

At frequent intervals during processing, radiographic films must be agitated. Otherwise,
the solution in contact with the emulsion becomes exhausted locally, affecting the rate
and evenness of development or fixation.

Another precaution must be observed: The level of the developer solution must be kept
constant by adding replenisher. This addition is necessary to replace the solution carried
out of the developer tank by the films and hangers, and to keep the activity of the
developer constant.

Special precautions are needed in the manual processing of industrial x-ray films in roll
form. These are usually processed on the commercially available spiral stainless-steel
reels. The space between the turns of film on such a reel is small, and loading must be
done carefully lest the turns of film touch one another. The loaded reel should be placed
in the developer so that the film is vertical—that is, the plane of the reel itself is
horizontal. Agitation in the developer should not be so vigorous as to pull the edges of
the film out of the spiral recesses in the reel. The reel must be carefully cleaned with a
brush to remove any emulsion or dried chemicals that may collect within the
film-retaining grooves.

Method of fastening film on a developing hanger. Bottom clips are fastened first,
followed by top clips.

Cleanliness
Processing tanks should be scrubbed thoroughly and then well rinsed with fresh water
before fresh solutions are put into them. In warm weather especially, it is advisable to
sterilize the developer tanks occasionally. The growth of fungi can be minimized by
filling the tank with an approximately 0.1 percent solution of sodium hypochlorite
(Clorox, “101,” Sunny Sol bleaches, etc., diluted 1:30), allowing it to stand several hours
or overnight, and then thoroughly rinsing the tank. During this procedure, rooms should
be well ventilated to avoid corrosion of metal equipment and instruments by the small
concentrations of chlorine in the air. Another method is to use a solution of sodium
pentachlorphenate, such as Dowicide G fungicide, at a strength of 1 part in 1,000 parts
of water.

This solution has the advantage that no volatile substance is present and it will not
corrode metals. In preparing the solution, avoid breathing the dust and getting it or
the solution on your skin or clothing or into your eyes.
DEVELOPMENT
Developer Solutions

Prepared developers that are made ready for use by dissolving in water or by dilution
with water provide a carefully compounded formula and uniformity of results. They are
comparable in performance and effective life, but the liquid form offers greater
convenience in preparation, which may be extremely important in a busy laboratory.
Powder chemicals are, however, more economical to buy.

When the exposed film is placed in the developer, the solution penetrates the emulsion
and begins to transform the exposed silver halide crystals to metallic silver. The longer
the development is carried on, the more silver is formed and hence the denser the image
becomes.

The rate of development is affected by the temperature of the solution—as the
temperature rises, the rate of development increases. Thus, when the developer
temperature is low, the reaction is slow, and the development time recommended for
the normal temperature would result in underdevelopment. When the temperature is
high, the reaction is fast, and the same time would result in over development. Within
certain limits, these changes in the rate of development can be compensated for by
increasing or decreasing the time of development.

The time-temperature system of development should be used in all radiographic work.


In this system, the developer temperature is always kept within a small range and the
time of development is adjusted according to the temperature in such a way that the
degree of development remains the same. If this procedure is not carefully observed,
the effects of even the most accurate exposure technique will be nullified. Films cannot
withstand the effects of errors resulting from guesswork in processing.

In particular, “sight development” should not be used; that is, the development time for a
radiograph should not be decided by examining the film under safelight illumination at
intervals during the course of development. It is extremely difficult to judge from the
appearance of a developed but unfixed radiograph what its appearance will be in the
dried state.

Even though the final radiograph so processed is apparently satisfactory, there is no
assurance that development was carried far enough to give the desired degree of film
contrast. Further, “sight development” can easily lead to a high level of fog caused by
excessive exposure to safelights during development.

An advantage of standardized time-temperature processing is that by keeping the
degree of development constant, a definite check on exposure time can always be
made. This precludes many errors that might otherwise occur in the production of
radiographs. When the processing factors are known to be correct but the radiographs
lack density, underexposure can be assumed; when the radiographic image is too
dense, overexposure is indicated. The first condition can be corrected by increasing the
exposure time; and the second, by decreasing it.

Control of Temperature and Time


Because the temperature of the processing solutions has a decided influence on their
activity, careful control of this factor is very important. It should be a rule that the developer
be stirred and the temperature be checked immediately before films are immersed in it so
that they can be left in the solution for the proper length of time.

Ideally, the temperature of the developer solution should be 68°F (20°C). A temperature
below 60°F (15.5°C) retards the action of the chemical and is likely to result in
underdevelopment, whereas an excessively high temperature not only may destroy the
photographic quality by producing fog but also may soften the emulsion to the extent
that it separates from the base.

When, during extended periods, the tap water will not cool the solutions to
recommended temperatures, the most effective procedure is to use mechanical
refrigeration. Conversely, heating may be required in cold climates. Under no
circumstances should ice be placed directly in processing solutions to reduce their
temperature because, on melting, the water will dilute them and possibly cause
contamination.

Because of the direct relation between temperature and time, both are of equal
importance in a standardized processing procedure. So, after the temperature of the
developer solution has been determined, films should be left in the solution for the
exact time that is required. Guesswork should not be tolerated. Instead, when the films
are placed in the solution, a timer should be set so that an alarm will sound at the end
of the time.
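The discipline described above amounts to a chart lookup: measure the developer temperature first, then set the timer from the chart, never by guesswork. The time-temperature pairs below are hypothetical placeholders, not taken from any film manufacturer's chart; only the lookup pattern is the point.

```python
# Hypothetical chart: developer temperature (deg F) -> development time (min).
# Replace these values with the film manufacturer's published chart.
DEV_TIME_CHART = {60: 8.0, 65: 6.0, 68: 5.0, 70: 4.5, 75: 3.5}

def development_time(temp_f):
    """Return the charted development time for a measured temperature."""
    try:
        return DEV_TIME_CHART[temp_f]
    except KeyError:
        # Outside the charted range, adjust the solution temperature
        # (heating or refrigeration) rather than guessing a time.
        raise ValueError(f"{temp_f} deg F is off the chart") from None

print(development_time(68))  # 5.0 minutes at the ideal 68 deg F (20 deg C)
```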

Agitation

It is essential to secure uniformity of development over the whole area of the film. This
is achieved by agitating the film during the course of development.

If a radiographic film is placed in a developer solution and allowed to develop without
any movement, there is a tendency for each area of the film to affect the development
of the areas immediately below it. This is because the reaction products of
development have a higher specific gravity than the developer and, as these products
diffuse out of the emulsion layer, they flow downward over the film surface and retard
the development of the areas over which they pass. The greater the film density from
which the reaction products flow, the greater is the restraining action on the
development of the lower portions of the film. Thus, large lateral variations in film
density will cause uneven development in the areas below, and this may show up in the
form of streaks. The figure below illustrates the phenomena that occur when a film
having small areas whose densities are widely different from their surroundings is
developed without agitation of film or developer.

An example of streaking that can result when a film has been allowed to remain in
the solution without agitation during the entire development period.

Agitation of the film during development brings fresh developer to the surface of the film
and prevents uneven development. In small installations, where few films are
processed, agitation is most easily done by hand. Immediately after the hangers are
lowered smoothly and carefully into the developer, the upper bars of the hangers should
be tapped sharply two or three times on the upper edge of the tank to dislodge any
bubbles clinging to the emulsion. Thereafter, films should be agitated periodically
throughout the development.

Acceptable agitation results if the films are shaken vertically and horizontally and moved
from side to side in the tank for a few seconds every minute during the course of the
development. More satisfactory renewal of developer at the surface of the film is
obtained by lifting the film clear of the developer, allowing it to drain from one corner for
2 or 3 seconds, reinserting it into the developer, and then repeating the procedure, with
drainage from the other lower corner. The whole cycle should be repeated once a
minute during the development time.

Another form of agitation suitable for manual processing of sheet films is known as
“gaseous burst agitation.” It is reasonably economical to install and operate and,
because it is automatic, does not require the full-time attention of the processing room
operator. Nitrogen, because of its inert chemical nature and low cost, is the best gas to
use.

Gaseous burst agitation consists of releasing bursts of gas at controlled intervals
through many small holes in a distributor at the bottom of the processing tank. When
first released, the bursts impart a sharp displacement pulse, or piston action, to the
entire volume of the solution. As the bubbles make their way to the surface, they
provide localized agitation around each small bubble. The great number of bubbles, and
the random character of their paths to the surface, provide effective agitation at the
surfaces of films hanging in the solution (See the figure below.)

Distribution manifold for gaseous burst agitation.

If the gas were released continuously, rather than in bursts, constant flow patterns
would be set up from the bottom to the top of the tank and cause uneven development.
These flow patterns are not encountered, however, when the gas is introduced in short
bursts, with an interval between bursts to allow the solution to settle down.

Note that the standard sizes of x-ray developing tanks will probably not be suitable for
gaseous burst agitation. Not only does the distributor at the bottom of the tank occupy
some space, but also the tank must extend considerably above the surface of the still
developer to contain the froth that results when a burst of bubbles reaches the surface.
It is therefore probable that special tanks will have to be provided if the system is
adopted.

Agitation of the developer by means of stirrers or circulating pumps should be
discouraged. In any tank containing loaded film hangers, it is almost impossible to
prevent the uniform flow of developer along certain paths. Such steady flow conditions
may sometimes cause more uneven development than no agitation at all.

Activity of Developer Solutions

As a developer is used, its developing power decreases, partly because of the
consumption of the developing agent in changing the exposed silver bromide to metallic
silver, and also because of the restraining effect of the accumulated reaction products
of the development. The extent of this decrease in activity will depend on the number of
films processed and their average density. Even when the developer is not used, the
activity may decrease slowly because of aerial oxidation of the developing agent.

Some compensation must be made for the decrease in developing power if uniform
radiographic results are to be obtained over a period of time. The best way to do this is

to use the replenisher system, in which the activity of the solution is not allowed to
diminish but rather is maintained by suitable chemical replenishment.

In reference to the replenisher method or replenishment, the following should be
understood. As used here, replenishment means the addition of a stronger-than-original
solution, to revive or restore the developer to its approximate original strength.

Thus, the replenisher performs the double function of maintaining both the liquid level in
the developing tank and the activity of the solution. Merely adding original-strength
developer would not produce the desired regenerating effect; development time would
have to be progressively increased to achieve a constant degree of development.

The quantity of replenisher required to maintain the properties of the developer will
depend on the average density of the radiographs processed. It is obvious that if 90
percent of the silver in the emulsion is developed, giving a dense image over the
entire film, more developing agent will be consumed. Therefore, the developer will be
exhausted to a greater degree than if the film were developed to a low density.

The quantity of replenisher required, therefore, depends on the type of subject
radiographed. In the processing of industrial radiographs that have a relatively large
proportion of dense background, some of the original developer must be discarded
each time replenisher is added. The exact quantity of replenisher can be determined
only by trial and by frequent testing of the developer.

The replenisher should be added at frequent intervals and in sufficient quantity to
maintain the activity reasonably constant for the types of radiographs processed. It is
obvious that if replenisher is added only occasionally, there will be a large increase in
density of the film after replenishing. By replenishing frequently, these density increases
after replenishing are kept at a minimum. The quantity of the replenisher added each
time preferably should not exceed 2 or 3 percent of the total volume of the developer in
the tank.
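The replenishment rules in this section reduce to simple bookkeeping. This is an illustrative sketch with an assumed tank volume: each addition is capped at 2 to 3 percent of the developer volume, and, as the text notes just below, the solution is discarded once the cumulative replenisher reaches two to three times the original quantity.

```python
def max_replenisher_addition(tank_volume_l, fraction=0.03):
    """Largest single replenisher addition: 2-3% of the developer volume."""
    return tank_volume_l * fraction

def developer_exhausted(total_replenisher_l, original_volume_l, limit=3.0):
    """Discard once cumulative replenisher is 2-3x the original volume."""
    return total_replenisher_l >= limit * original_volume_l

print(round(max_replenisher_addition(100), 2))  # 3.0 L per addition, 100 L tank
print(developer_exhausted(300, 100))            # True: time to discard
```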

It is not practical to continue replenishment indefinitely, and the solution should be
discarded when the replenisher used equals two to three times the original quantity of
the developer. In any case, the solution should be discarded after three months
because of aerial oxidation and the buildup of gelatin, sludge, and solid impurities.
Arresting Development
After development is complete, developer remaining in the emulsion must be
deactivated by an acid stop bath or, if this is not feasible, by prolonged rinsing in clean
running water.

If this step is omitted, development continues for the first minute or so of fixation and,
unless the film is agitated almost continuously during this period, uneven development
will occur, resulting in streakiness.

In addition, if films are transferred to the fixer solution without the use of an acid stop
bath or thorough rinsing, the alkali from the developer solution retained by the gelatin
neutralizes some of the acid in the fixer solution. After a certain quantity of acid has
been neutralized, the chemical balance of the fixer solution is upset and its usefulness
is greatly impaired—the hardening action is destroyed and stains are likely to be
produced in the radiographs. Removal of as much of the developer solution as possible
before fixation prolongs the life of the fixer solution and assures the routine production
of radiographs of better quality.

Stop Bath

A stop bath consisting of 16 fluidounces of 28 percent acetic acid per gallon of bath (125
mL per litre) may be used. If the stop bath is made from glacial acetic acid, the proportions
should be 4-1/2 fluidounces of glacial acetic acid per gallon of bath, or 35 mL per litre.
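A quick arithmetic check (a sketch; glacial acetic acid is treated here as essentially 100 percent acid) shows that the two recipes deliver the same amount of acid per litre of bath:

```python
def acid_ml_per_litre(volume_ml_per_l, acid_fraction):
    """Acetic acid delivered per litre of stop bath."""
    return volume_ml_per_l * acid_fraction

from_dilute = acid_ml_per_litre(125, 0.28)   # 125 mL of 28% acid per litre
from_glacial = acid_ml_per_litre(35, 1.00)   # 35 mL of glacial acid per litre

print(round(from_dilute, 2), round(from_glacial, 2))  # 35.0 35.0 - they agree
```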

Warning

Glacial acetic acid should be handled only under adequate ventilation, and great care
should be taken to avoid injury to the skin or damage to clothing. Always add the glacial
acetic acid to the water slowly, stirring constantly, and never water to acid; otherwise,
the solution may boil and spatter acid on hands and face, causing severe burns.

When development is complete, the films are removed from the developer, allowed to
drain 1 or 2 seconds (not back into the developer tank), and immersed in the stop bath.
The developer draining from the films should be kept out of the stop bath. Instead of
draining, a few seconds’ rinse in fresh running water may be used prior to inserting the
films in the stop bath. This will materially prolong the life of the bath.

Films should be immersed in the stop bath for 30 to 60 seconds (ideally, at 65 to 70°F
or 18 to 21°C) with moderate agitation and then transferred to the fixing bath. Five
gallons of stop bath will treat about one hundred 14 x 17-inch films, or equivalent. If a developer
containing sodium carbonate is used, the stop bath temperature must be maintained
between 65 and 70°F (18 to 21°C); otherwise, blisters containing carbon dioxide may
be formed in the emulsion by action of the stop bath.

Rinsing

If a stop bath cannot be used, a rinse in running water for at least 2 minutes should be
used. It is important that the water be running and that it be free of silver or fixer
chemicals. The tank that is used for the final washing after fixation should not be used
for this rinse.

If the flow of water in the rinse tanks is only moderate, it is desirable to agitate the films
carefully, especially when they are first immersed. Otherwise, development will be uneven,
and there will be streaks in areas that received a uniform exposure.
Fixing
The purpose of fixing is to remove all of the undeveloped silver salt of the emulsion,
leaving the developed silver as a permanent image. The fixer has another important
function—hardening the gelatin so that the film will withstand subsequent drying with warm
air. The interval between placing the film in the fixer solution and the disappearance of the
original diffuse yellow milkiness is known as the clearing time. It is during this time that the
fixer is dissolving the undeveloped silver halide.

However, additional time is required for the dissolved silver salt to diffuse out of the
emulsion and for the gelatin to be hardened adequately. Thus, the total fixing time
should be appreciably greater than the clearing time. The fixing time in a relatively fresh
fixing bath should, in general, not exceed 15 minutes; otherwise, some loss of low
densities may occur. The films should be agitated vigorously when first placed in the
fixer and at least every 2 minutes thereafter during the course of fixation to assure
uniform action of the chemicals.

During use, the fixer solution accumulates soluble silver salts which gradually inhibit its
ability to dissolve the unexposed silver halide from the emulsion. In addition, the fixer
solution becomes diluted by rinse water or stop bath carried over by the film. As a
result, the rate of fixing decreases, and the hardening action is impaired. The dilution
can be reduced by thorough draining of films before immersion in the fixer and, if
desired, the fixing ability can be restored by replenishment of the fixer solution.

The usefulness of a fixer solution is ended when it has lost its acidity or when clearing
requires an unusually long interval. The use of an exhausted solution should always be
avoided because abnormal swelling of the emulsion often results from deficient
hardening and drying is unduly prolonged; at high temperatures reticulation or sloughing
away of the emulsion may take place. In addition, neutralization of the acid in the fixer
solution frequently causes colored stains to appear on the processed radiographs.
Washing
X-ray films should be washed in running water so circulated that the entire emulsion area
receives frequent changes. For a proper washing, the bar of the hanger and the top clips
should always be covered completely by the running water, as illustrated in the figure
below.

Water should flow over the tops of the hangers in the washing compartment. This
avoids streaking due to contamination of the developer when hangers are used
over again.

Efficient washing of the film depends both on a sufficient flow of water to carry the fixer
away rapidly and on adequate time to allow the fixer to diffuse from the film. With a
rate of water flow of four renewals per hour, the washing time at 60 to 80°F (15.5 to
26.5°C) is 30 minutes.
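"Four renewals per hour" simply means the hourly water flow is four times the tank volume. A one-line check of that relationship, with hypothetical tank and flow figures:

```python
# Renewal rate = hourly flow divided by tank volume.
# The tank size and flow rate below are hypothetical examples.
def renewals_per_hour(flow_gal_per_hour, tank_volume_gal):
    return flow_gal_per_hour / tank_volume_gal

print(renewals_per_hour(40.0, 10.0))  # 40 gal/h through a 10-gal tank -> 4.0
```
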

The films should be placed in the wash tank near the outlet end. Thus, the films most
heavily laden with fixer are first washed in water that is somewhat contaminated with
fixer from the films previously put in the wash tank. As more films are put in the wash
tank, those already partially washed are moved toward the inlet, so that the final part of
the washing of each film is done in fresh, uncontaminated water.

The tank should be large enough to wash films as rapidly as they can be passed
through the other solutions. Any excess capacity is wasteful of water or, with the same
flow as in a smaller tank, diminishes the effectiveness with which fixer is removed from
the film emulsion. Insufficient capacity, on the other hand, encourages insufficient
washing, leading to later discoloration or fading of the image.

The “cascade method” of washing is the most economical of water and results in better
washing in the same length of time. In this method, the washing compartment is divided
into two sections. The films are taken from the fixer solution and first placed in Section
A. (See the figure below.) After they have been partially washed, they are moved to
Section B, leaving Section A ready to receive more films from the fixer. Thus, films
heavily laden with fixer are washed in somewhat contaminated water, and washing of
the partially washed films is completed in fresh water.

Schematic diagram of a cascade washing unit.
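The advantage of two wash sections can be seen with a toy dilution model. This is an illustrative sketch only: the assumption that each well-agitated section removes a fixed fraction of the fixer remaining on a film, and the 90% per-stage figure, are not from the text.

```python
# Toy model: each wash section removes a fixed fraction of the fixer
# still carried by the film. The per-stage removal fraction is assumed.
def residual_fixer(stages, removal_per_stage, initial=1.0):
    """Fraction of the original fixer left after the given number of stages."""
    remaining = initial
    for _ in range(stages):
        remaining *= 1.0 - removal_per_stage
    return remaining

single_tank = residual_fixer(1, 0.90)  # one section      -> about 10% left
cascade = residual_fixer(2, 0.90)      # sections A then B -> about 1% left
print(single_tank, cascade)
```

With the same total water flow, moving a film through two sections in series under this model leaves roughly a tenth of the fixer that a single section would.
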

Washing efficiency decreases rapidly as temperature decreases and is very low at
temperatures below 60°F (15.5°C). On the other hand, in warm weather, it is especially
important to remove films from the tank as soon as washing is completed, because
gelatin has a natural tendency to soften considerably with prolonged washing in water
above 68°F (20°C). Therefore, if possible, the temperature of the wash water should be
maintained between 65 and 70°F (18 and 21°C).

Formation of a cloud of minute bubbles on the surfaces of the film in the wash tank
sometimes occurs. These bubbles interfere with washing the areas of emulsion
beneath them, and can subsequently cause a discoloration or a mottled appearance of
the radiograph. When this trouble is encountered, the films should be removed from the
wash water and the emulsion surfaces wiped with a soft cellulose sponge at least twice
during the washing period to remove the bubbles. Vigorous tapping of the top bar of the
hanger against the top of the tank rarely is sufficient to remove the bubbles.
Prevention of Water Spots

When films are removed from the wash tanks, small drops of water cling to the
surfaces of the emulsions. If the films are dried rapidly, the areas under the drops dry
more slowly than the surrounding areas. This uneven drying causes distortion of the
gelatin, changing the density of the silver image, and results in spots that are frequently
visible and troublesome in the finished radiograph. Such "water spots" can be largely
prevented by immersing the washed films for 1 or 2 minutes in a wetting agent, then
allowing the bulk of the water to drain off before the films are placed in the drying
cabinet. This solution causes the surplus water to drain off the film more evenly,
reducing the number of clinging drops. This reduces the drying time and lessens the
number of water spots occurring on the finished radiographs.

Drying

Convenient racks are available commercially for holding hangers during drying when
only a small number of films are processed daily. When the racks are placed high on
the wall, the films can be suspended by inserting the crossbars of the processing
hangers in the holes provided. This obviates the danger of striking the radiographs
while they are wet, or spattering water on the drying surfaces, which would cause spots
on them. Radiographs dry best in warm, dry air that is changing constantly. When a
considerable number of films are to be processed, suitable driers with built-in fans,
filters, and heaters or desiccants are commercially available.
Marks in Radiographs
Defects, spots, and marks of many kinds may occur if the preceding general rules for
manual processing are not carefully followed. Perhaps the most common processing
defect is streakiness and mottle in areas that receive a uniform exposure. This
unevenness may be a result of:

• Failure to agitate the films sufficiently during development or the presence of too
many hangers in the tank, resulting in inadequate space between neighboring
films.

• Insufficient rinsing in water or failure to agitate the films sufficiently before
fixation.

• The use of an exhausted stop bath or failure to agitate the film properly in the
stop bath.

• In the absence of satisfactory rinsing, insufficient agitation of the films on first
immersing them in the fixing bath.

Other characteristic marks are dark spots caused by the spattering of developer
solution, static electric discharges, and finger marks; and dark streaks occurring when
the developer-saturated film is inspected for a prolonged time before a safelight lamp. If
possible, films should never be examined at length until they are dry.

A further trouble is fog - that is, development of silver halide grains other than those
affected by radiation during exposure. It is a great source of annoyance and may be
caused by accidental exposure to light, x-rays, or radioactive substances; contaminated
developer solution; development at too high a temperature; or storing films under
improper storage conditions or beyond the expiration dates stamped on the cartons.

Accidental exposure of the film to x-radiation or gamma radiation is a common
occurrence because of insufficient protection from high-voltage tubes or stored
radioisotopes; films have been fogged through 1/8 inch of lead in rooms 50 feet or more
away from an x-ray machine.
AUTOMATED FILM PROCESSING
Automated processing requires a processor (see the figure below), specially formulated
chemicals and compatible film, all three of which must work together to produce
high-quality radiographs. This section describes how these three components work
together.

An automated processor has three main sections: a film-feeding section; a
film-processing section (developer, fixer, and wash); and a film-drying section.

Processing Control
The essence of automated processing is control, both chemical and mechanical. In
order to develop, fix, wash, and dry a radiograph in the short time available in an
automated processor, specifically formulated chemicals are used. The processor
maintains the chemical solutions at the proper temperatures, agitates and replenishes
the solutions automatically, and transports the films mechanically at a carefully
controlled speed throughout the processing cycle. Film characteristics must be
compatible with processing conditions, shortened processing times and the mechanical
transport system. From the time a film is fed into the processor until the dry radiograph
is delivered, chemicals, mechanics, and film must work together.
Automated Processor Systems

Automated processors incorporate a number of systems which transport, process, and
dry the film and replenish and recirculate the processing solutions. A knowledge of
these systems and how they work together will help in understanding and using
automated processing equipment.

Transport System

The function of the transport system (see the figure below) is to move film through the
developer and fixer solutions and through the washing and drying sections, holding the
film in each stage of the processing cycle for exactly the right length of time, and finally to
deliver the ready-to-read radiograph.

The roller transport system is the backbone of an automated processor. The
arrangement and number of its components vary, but the basic plan is virtually
the same.

In most automated processors now in use, the film is transported by a system of rollers
driven by a constant speed motor. The rollers are arranged in a number of
assemblies—entrance roller assembly, racks, turnarounds (which reverse direction of
film travel within a tank), crossovers (which transfer films from one tank to another), and
a squeegee assembly (which removes surface water after the washing cycle). The
number and specific design of the assemblies may vary from one model of processor to
another, but the basic design is the same.

It is important to realize that the film travels at a constant speed in a processor, but that
the speed in one model may differ from that in another. Processing cycles—the time
interval from the insertion of an unprocessed film to the delivery of a dry radiograph—in
general range downward from 15 minutes. Because one stage of the cycle may have to
be longer than another, the racks may vary in size—the longer the assembly, the longer
the film takes to pass through a particular stage of processing.
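Because the film moves at one constant speed, the time spent in each stage is fixed by the length of the film path through that stage's rack. The sketch below illustrates this relationship with wholly hypothetical numbers; no particular processor model is implied.

```python
# Dwell time per stage = film path length through that rack / transport speed.
FILM_SPEED_IN_PER_MIN = 25.0  # assumed constant transport speed

rack_path_in = {  # assumed film-path lengths through each section, in inches
    "developer": 75.0,
    "fixer": 60.0,
    "wash": 40.0,
    "dryer": 50.0,
}

dwell_min = {stage: length / FILM_SPEED_IN_PER_MIN
             for stage, length in rack_path_in.items()}
cycle_min = sum(dwell_min.values())  # total time from feed to dry radiograph
print(dwell_min["developer"], cycle_min)
```

A longer developer rack means a longer development time at the same transport speed, which is how a single constant-speed drive can hold each stage to its required duration.
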

Although the primary function of the transport system is to move the film through the
processor in a precisely controlled time, the system performs two other functions of
importance to the rapid production of high-quality radiographs. First, the rollers produce
vigorous uniform agitation of the solutions at the surfaces of the film, contributing
significantly to the uniformity of processing.
Second, the top wet rollers in the racks and the rollers in the crossover assemblies
effectively remove the solutions from the surfaces of the film, reducing the amount of
solution carried over from one tank to the next and thus prolonging the life of the fixer
and increasing the efficiency of washing. Most of the wash water clinging to the surface
of the film is removed by the squeegee rollers, making it possible to dry the processed
film uniformly and rapidly, without blemishes.

Water System

The water system of automated processors has two functions - to wash the films and to
help stabilize the temperature of the processing solutions. Hot and cold water are
blended to the proper temperature and the tempered water then passes through a flow
regulator which provides a constant rate of flow.

Depending upon the processor, part or all of the water is used to help control the
temperature of the developer. In some processors, the water also helps to regulate the
temperature of the fixer. The water then passes to the wash tank where it flows through
and over the wash rack. It then flows over a weir (dam) at the top of the tank and into the
drain.

Sometimes the temperature of the cold water supply may be higher than required by
the processor. In this situation, it is necessary to cool the water before piping it to the
processor. This is the basic pattern of the water system of automated processors; the
details of the system may vary slightly, however.

Recirculation Systems

Recirculation of the fixer and developer solutions performs the triple functions of uniformly
mixing the processing and replenisher solution, maintaining them at constant
temperatures, and keeping thoroughly mixed and agitated solutions in contact with the
film.

The solutions are pumped from the processor tanks, passed through devices to
regulate temperature, and returned to the tanks under pressure. This pressure forces
the solutions upward and downward, inside, and around the transport system
assemblies. As a result of the vigorous flow in the processing tanks, the solutions are
thoroughly mixed and agitated and the films moving through the tanks are constantly
bathed in fresh solutions.

Replenishment Systems

Accurate replenishment of the developer and fixer solutions is even more important in
automated processing than in manual processing. In both techniques, accurate
replenishment is essential to proper processing of the film and to long life of the
processing solutions; but, if the solutions are not properly replenished in an automated
processor, the film may swell too much and become slippery, with the result that it might
get stuck in the processor.

When a film is fed into the processor, pumps are activated, which pump replenisher
from storage tanks to the processing tanks. As soon as the film has passed the
entrance assembly, the pumps stop—replenisher is added only during the time required
for a sheet of film to pass through the entrance assembly. The amount of replenisher
added is thus related to the size of the sheet of film. The newly added replenisher is
blended with the processor solutions by the recirculation pumps. Excess processing
solutions flow over a weir at the top of the tanks into the drain.
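Since the pump runs only while the film passes the entrance assembly, the replenisher volume scales with the length of film fed. The figures below (entrance-roller speed and pump rate) are hypothetical, chosen only to illustrate the proportionality.

```python
# Replenisher volume = pump rate x (film length / entrance transport speed).
ENTRANCE_SPEED_IN_PER_S = 0.5  # assumed entrance-roller transport speed
PUMP_RATE_ML_PER_S = 4.0       # assumed replenisher pump delivery rate

def replenisher_ml(film_length_in):
    """Volume of replenisher pumped while one film passes the entrance."""
    pump_time_s = film_length_in / ENTRANCE_SPEED_IN_PER_S
    return PUMP_RATE_ML_PER_S * pump_time_s

print(replenisher_ml(17.0))  # a 17-inch film runs the pump for 34 s -> 136.0 mL
```

This is also why narrow films fed side by side matter: two films fed one after the other run the pump twice as long as the same films fed abreast.
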

Different types of x-ray films require different quantities of processing chemicals. It is,
therefore, important that the solutions be replenished at the rate proper for the type or
types of film being processed and the average density of the radiographs.

Replenishment rates must be measured accurately and checked periodically.
Overreplenishment of the developer is likely to result in lower contrast; slight
underreplenishment results in a gain of speed and contrast, but severe
underreplenishment results in a loss of both. Severe underreplenishment of developer
can cause not only loss of density and contrast but also failure of the film to transport at
any point in the transport system. Overreplenishment of the fixer does not affect good
operation, but is wasteful. However, underreplenishment results in poor fixation,
insufficient hardening, inadequate washing, and possible failure of the film to be
transported in the fixer rack or at any point beyond.

Dryer System

Rapid drying of the processed radiograph depends on proper conditioning of the film in
the processing solutions, effective removal of surface moisture by the squeegee rollers,
and a good supply of warm air striking both surfaces of the radiograph.

Heated air is supplied to the dryer section by a blower. Part of the air is recirculated; the
rest is vented to prevent buildup of excessive humidity in the dryer. Fresh air is drawn
into the system to replace that which is vented.
Rapid Access to Processed Radiographs
Approximately twelve or fourteen minutes after exposed films are fed into the unit, they
emerge processed, washed, dried, and ready for interpretation. Conservatively, these
operations take approximately 1 hour in hand processing. Thus, with a saving of at least
45 minutes in processing time, the holding time for parts being radiographed is greatly
reduced. It follows that more work can be scheduled for a given period because of the
speed of processing and the consequent reduction in space required for holding
materials until the radiographs are ready for checking.
Uniformity of Radiographs
Automated processing is very closely controlled time-temperature processing. This,
combined with accurate automatic replenishment of solutions, produces day-after-day
uniformity of radiographs rarely achieved in hand processing. It permits the setting up
of exposure techniques that can be used with the knowledge that the films will receive
optimum processing and be free from processing artifacts. Processing variables are
virtually eliminated.
Small Space Requirements
Automated processors require only about 10 square feet of floor space. The size of the
processing room can be reduced because hand tanks and drying facilities are not
needed. A film loading and unloading bench, film storage facilities, plus a small open
area in front of the processor feed tray are all the space required. The processor, in
effect, releases valuable floor space for other plant activities. If the work load increases
to a point where more processors are needed, they can be added with minimal
additional space requirements. Many plants with widely separated exposure areas have
found that dispersed processing facilities using two or more processors greatly increase
the efficiency of operations.
Chemistry of Automated Processing
Automated processing is not just a mechanization of hand processing, but a system
depending on the interrelation of mechanics, chemicals, and film. A special chemical
system is therefore required to meet the particular need of automated processing.

When, in manual processing, a sheet of x-ray film is immersed in developer solution,
the exposed silver halide grains are converted to metallic silver, but, at the same time,
the emulsion layer swells and softens. The fixer solution removes the undeveloped
silver halide grains and shrinks and hardens the emulsion layer. Washing removes the
last traces of processing chemicals and swells the film slightly. Drying further hardens
and shrinks the emulsion. Therefore, the emulsion changes in thickness and in
hardness as the film is moved from one step to the next in processing. In manual
processing, these variations are of no importance because the films are supported
independently and do not come in contact with other films or any other surfaces.

Automated processing, however, places an additional set of demands on the
processing chemicals. Besides developing and fixing the image very quickly, the
processing chemicals must prevent the emulsion from swelling or becoming either
slippery, soft, or sticky. Further, they must prepare the processed film to be washed
and dried rapidly.

In automated processors, if a film becomes slippery, it could slow down in the transport
system, so that films following it could catch up and overlap. Or it might become too
sticky to pass some point and get stuck or even wrap around a roller. If the emulsion
becomes too soft it could be damaged by the rollers. These occurrences, of course,
cannot be tolerated. Therefore, processing solutions used in automated processors
must be formulated to control, within narrow limits, the physical properties of the film.
Consequently, the mixing instructions with these chemicals must be followed exactly.

This control is accomplished by hardener in the developer and additional hardener in
the fixer to hold the thickness and tackiness of the emulsion within the limits required
for reliable transport, as well as for rapid washing and drying.

It is also desirable that automated processing provide rapid access to a finished
radiograph. This is achieved in part by the composition of the processing solutions and
in part by using them at temperatures higher than those suitable for manual processing
of film.

The hardening developer develops the film very rapidly at its normal operating
temperature. Moreover, the formulation of the solution is carefully balanced so that
optimum development is achieved in exactly the time required for the hardener to
harden the emulsion. If too much hardener is in a solution, the emulsion hardens too
quickly for the developer to penetrate sufficiently, and underdevelopment results. If too
little hardener is in the solution, the hardening process is slowed, overdevelopment of
film occurs, and transport problems may be encountered. To maintain the proper
balance, it is essential that developer solution be replenished at the rate proper for the
type or types of film being processed and the average density of the radiographs.

Because washing, drying, and keeping properties of the radiograph are closely tied to
the effectiveness of the fixation process, special fixers are needed for automatic
processing. Not only must they act rapidly, but they must maintain the film at the proper
degree of hardness for reliable transport. Beyond this, the fixer must be readily removed
from the emulsion so that proper washing of the radiograph requires only a short time. A
hardening agent added to the fixer solution works with the fixing chemicals to condition
the film for washing and for rapid drying without physical damage to the emulsion.

Experience has shown that the solutions in this chemical system have a long life. In
general, it is recommended that the processor tanks be emptied and cleaned after
50,000 films of mixed sizes have been processed or at the end of 3 months, whichever
is sooner. This may vary somewhat depending on local use and conditions; but, in
general, this schedule will give very satisfactory results.
Film-Feeding Procedures
Sheet Film

The figure below shows the proper film-feeding procedures. The arrows indicate the
direction in which films are fed into the processor. Wherever possible, it is advisable to
feed all narrower films side by side so as to avoid overreplenishment of the solutions.
This will aid in balanced replenishment and will result in maximum economy of the
solutions used.

Care should be taken that films are fed into the processor square with the edge of a
side guide of the feed tray, and that multiple films are started at the same time. In no
event should films less than 7 inches long be fed into the processor.

Film-feeding procedures for KODAK PROFESSIONAL INDUSTREX Processors.

Roll Film

Roll films in widths of 16 mm to 17 inches and long strips of film may be processed in a
KODAK PROFESSIONAL INDUSTREX Processor. This requires a somewhat different
procedure than is used when feeding sheet film. Roll film in narrow widths and many
strips have an inherent curl because they are wound on spools. Because of this curl, it
is undesirable to feed roll or strip film into the processor without attaching a sheet of
leader film to the leading edge of the roll or strip. Ideally, the leader should be
unprocessed radiographic film. Sheet film that has been spoiled in exposure or
accidentally light-fogged can be preserved and used for this purpose.

The leader film should be at least as wide as, and preferably wider than, the roll film and
be a minimum of 10 inches long. It is attached to the roll film with a butt joint using
pressure-sensitive polyester tape, such as SCOTCH Brand Electrical Tape No. 3, one
inch in width. (Other types of tape may not be suitable due to the solubility of their bases
in the processing solutions.) Care should be taken that none of the adhesive side of the
tape is exposed to the processing solutions. Otherwise, the tape may stick to the
processor rollers or bits of adhesive may be transferred to the rollers, resulting in
processing difficulties.

If narrow widths of roll or strip films are being fed, they should be kept as close as possible
to one side guide of the feed tray. This will permit the feeding of standard-size sheet films
at the same time. Where quantities of roll and strip films are fed, the replenisher pump
should be turned off for a portion of the time. This will prevent overreplenishment and
possible upset of the chemical balance in the processor tanks.

FILING RADIOGRAPHS
After the radiograph is dry, it must be prepared for filing. With a manually processed
radiograph, the first step is the elimination of the sharp projections that are caused by

the film-hanger clips. Use of film corner cutters will enhance the appearance of the
radiograph, preclude its scratching others with which it may come in contact, facilitate
its insertion into an envelope, and conserve filing space.

The radiograph should be placed in a heavy manila envelope of the proper size, and all
of the essential identification data should be written on the envelope so that it can be
easily handled and filed. Envelopes having an edge seam, rather than a center seam,
and joined with a non-hygroscopic adhesive are preferred, since occasional staining
and fading of the image is caused by certain adhesives used in the manufacture of
envelopes. Ideally, radiographs should be stored at a relative humidity of 30 to 50
percent.

RADIOGRAPHIC IMAGE QUALITY AND DETAIL VISIBILITY
Because the purpose of most radiographic inspections is to examine a specimen for
inhomogeneity, a knowledge of the factors affecting the visibility of detail in the finished
radiograph is essential. The summary chart below shows the relations of the various
factors influencing image quality and radiographic sensitivity, together with page
references to the discussion of individual topics. For convenience, a few important
definitions will be repeated.
Factors Affecting Image Quality
Radiographic image quality is determined by radiographic contrast (subject contrast
and film contrast) and by definition (geometric factors and film graininess/screen
mottle).

Subject contrast is affected by:
A - Absorption differences in the specimen (thickness, composition, density)
B - Radiation wavelength
C - Scattered radiation, reduced by: 1 - masks and diaphragms; 2 - filters; 3 - lead
screens; 4 - Potter-Bucky diaphragm

Film contrast is affected by:
A - Type of film
B - Degree of development (type of developer, time and temperature of development,
activity of developer, degree of agitation)
C - Density
D - Type of screens (fluorescent vs lead or none)

Geometric factors (definition) are affected by:
A - Focal-spot size
B - Source-film distance
C - Specimen-film distance
D - Abruptness of thickness changes in specimen
E - Screen-film contact
F - Motion of specimen

Film graininess and screen mottle are affected by:
A - Type of film
B - Type of screen
C - Radiation wavelength
D - Development
Radiographic sensitivity is a general or qualitative term referring to the size of the
smallest detail that can be seen in a radiograph, or to the ease with which the images
of small details can be detected. Phrased differently, it is a reference to the amount of
information in the radiograph. Note that radiographic sensitivity depends on the
combined effects of two independent sets of factors. One is radiographic contrast (the
density difference between a small detail and its surroundings) and the other is
definition (the abruptness and the “smoothness” of the density transition). See the figure
below.
Advantage of higher radiographic contrast (left) is largely offset by poor definition. Despite
lower contrast (right), better rendition of detail is obtained by improved definition.

Radiographic contrast between two areas of a radiograph is the difference between the
densities of those areas. It depends on both subject contrast and film contrast. Subject

Page 242
contrast is the ratio of x-ray or gamma-ray intensities transmitted by two selected
portions of a specimen. (See the figure below.) Subject contrast depends on the nature
of the specimen, the energy (spectral composition, hardness, or wavelengths) of the
radiation used, and the intensity and distribution of the scattered radiation, but is
independent of time, milliamperage or source strength, and distance, and of the
characteristics or treatment of the film.

With the same specimen, the lower-kilovoltage beam (left) produces higher subject contrast
than does the higher-kilovoltage beam (right).

Film contrast refers to the slope (steepness) of the characteristic curve of the film. It
depends on the type of film, the processing it receives, and the density. It also depends
on whether the film is exposed with lead screens (or direct) or with fluorescent screens.
Film contrast is independent, for most practical purposes, of the wavelengths and
distribution of the radiation reaching the film, and hence is independent of subject
contrast.

Definition refers to the sharpness of outline in the image. It depends on the types of
screens and film used, the radiation energy (wavelengths, etc), and the geometry of the
radiographic setup.
SUBJECT CONTRAST
Subject contrast decreases as kilovoltage is increased. The decreasing slope
(steepness) of the lines of the earlier exposure chart as kilovoltage increases illustrates
the reduction of subject contrast as the radiation becomes more penetrating. For
example, consider a steel part containing two thicknesses, 3/4 inch and 1 inch, which is
radiographed first at 160 kV and then at 200 kV.

In the table above, column 3 shows the exposure in milliampere-minutes required to
reach a density of 1.5 through each thickness at each kilovoltage. These data are from
the exposure chart mentioned above. It is apparent that the milliampere-minutes
required to produce a given density at any kilovoltage are inversely proportional to the
corresponding x-ray intensities passing through the different sections of the specimen.
Column 4 gives these relative intensities for each kilovoltage. Column 5 gives the ratio
of these intensities for each kilovoltage.

Column 5 shows that, at 160 kV, the intensity of the x-rays passing through the 3/4-inch
section is 3.8 times greater than that passing through the 1-inch section. At 200 kV, the
radiation through the thinner portion is only 2.5 times that through the thicker. Thus, as
the kilovoltage increases, the ratio of x-ray transmission of the two thicknesses
decreases, indicating a lower subject contrast.
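The inverse relationship described above can be checked numerically. The milliampere-minute values in the sketch below are hypothetical, chosen only to reproduce the 3.8 and 2.5 intensity ratios quoted in the text.

```python
# Transmitted intensity is taken as inversely proportional to the
# milliampere-minutes needed to reach the same film density.
def intensity_ratio(exposure_thin_mam, exposure_thick_mam):
    """Ratio of intensity through the thin section to the thick section."""
    return (1.0 / exposure_thin_mam) / (1.0 / exposure_thick_mam)

# Hypothetical mA-min exposures reproducing the ratios from the text:
print(intensity_ratio(5.0, 19.0))  # 160 kV example -> 3.8
print(intensity_ratio(2.0, 5.0))   # 200 kV example -> 2.5
```

Note that the ratio reduces to exposure_thick / exposure_thin, which is why the exposure-chart data can be read off directly as relative intensities.
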
FILM CONTRAST
The dependence of film contrast on density must be kept in mind when considering
problems of radiographic sensitivity. In general the contrast of radiographic films,
except those designed for use with fluorescent screens, increases continuously with
density in the usable density range. Therefore, for films that exhibit this continuous
increase in contrast, the best density, or upper limit of density range, to use is the
highest that can conveniently be viewed with the illuminators available. Adjustable high-
intensity illuminators that greatly increase the maximum density that can be viewed are
commercially available.

The use of high densities has the further advantage of increasing the range of radiation
intensities that can be usefully recorded on a single film. This in turn permits, in x-ray
radiography, the use of lower kilovoltage, with resulting increase in subject contrast and
radiographic sensitivity.

Maximum contrast of screen-type films is at a density of about 2.0. Therefore, other
things being equal, the greatest radiographic sensitivity will be obtained when the
exposure is adjusted to give this density.
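The "slope of the characteristic curve" mentioned earlier can be made concrete as an average gradient between two points on the curve (density versus log relative exposure). The two curve points below are hypothetical, used only to show the arithmetic.

```python
import math

# Average gradient between two points on a film characteristic curve:
# G = (D2 - D1) / (log10 E2 - log10 E1)
def average_gradient(d1, d2, e1, e2):
    """Slope of the density vs log-relative-exposure curve between two points."""
    return (d2 - d1) / (math.log10(e2) - math.log10(e1))

# Hypothetical points: density 1.5 at relative exposure 10, density 3.0 at 25.
g = average_gradient(1.5, 3.0, 10.0, 25.0)
print(round(g, 2))  # about 3.77 - a steep, high-contrast response
```

A larger gradient means a given intensity ratio through the specimen is rendered as a larger density difference on the film, which is why film contrast multiplies the effect of subject contrast.
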

FILM GRAININESS, SCREEN MOTTLE


The image on an x-ray film is formed by countless minute silver grains, the individual
particles being so small that they are visible only under a microscope. However, these
small particles are grouped together in relatively large masses, which are visible to the
naked eye or with a magnification of only a few diameters. These masses result in the
visual impression called graininess.

All films exhibit graininess to a greater or lesser degree. In general, the slower films have lower graininess than the faster. Thus, Film Y would have a lower graininess than Film X. The graininess of all films increases as the penetration of the radiation increases, although the rate of increase may be different for different films. The graininess of the images produced at high kilovoltages makes the slow, inherently fine-grain films especially useful in the million- and multimillion-volt range. When sufficient exposure can be given, they are also useful with gamma rays.

The use of lead screens has no significant effect on film graininess. However,
graininess is affected by processing conditions, being directly related to the degree of
development. For instance, if development time is increased for the purpose of
increasing film speed, the graininess of the resulting image is likewise increased.
Conversely, a developer or developing technique that results in an appreciable
decrease in graininess will also cause an appreciable loss in film speed. However,
adjustments made in development technique to compensate for changes in temperature
or activity of a developer will have little effect on graininess.

Such adjustments are made to achieve the same degree of development as would be
obtained in the fresh developer at a standard processing temperature, and therefore the
graininess of the film will be essentially unaffected. Another source of the irregular
density in uniformly exposed areas is the screen mottle encountered in radiography with
the fluorescent screens. The screen mottle increases markedly as hardness of the
radiation increases. This is one of the factors that limits the use of fluorescent screens
at high voltage and with gamma rays.

INTERPRETATION
Viewing of radiographs
Illuminator Requirements

To properly view a radiograph that meets the requirements of film density, a high-intensity film viewer is required. Many styles of viewers are available, but in general they fit into four groups:

1. Spot Viewers
2. Area Viewers
3. Strip Viewers
4. Combination of Spot and Area Viewers.

Viewers must have power ventilators to cool the intense light source required for viewing high-density radiographs. Most viewers use one or more photoflood incandescent lamps as the light source. In addition, a light diffuser is required to eliminate variations in light intensity. A good illuminator will employ a rheostat to vary the light intensity, allowing lower-density areas to be viewed under optimum light conditions. A film density of 2.0 allows only 1% of the incident light to be transmitted through the film, whereas a density of 4.0 allows only 0.01% transmission. This fact illustrates the necessity of high-intensity illuminators.
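The transmission figures quoted above follow directly from the definition of optical density, D = log10(incident light / transmitted light), so the transmitted fraction is 10^-D. A minimal Python sketch:

```python
def transmitted_fraction(density):
    """Fraction of incident light transmitted by film of a given optical
    density: D = log10(I0 / I), hence I / I0 = 10 ** -D."""
    return 10.0 ** -density

print(transmitted_fraction(2.0))  # ~0.01   -> 1% of the incident light
print(transmitted_fraction(4.0))  # ~0.0001 -> 0.01%
```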

Background Lighting

The film illuminator should be located in an area that allows for background light control. Although the viewing room need not be completely dark, no direct light other than that from the high-intensity light source should impinge on the radiograph being viewed. All precautions should be taken to ensure that light is not reflecting off the surface of the radiograph and, potentially, distracting the film interpreter. It is desirable for the interpreter to adapt to the conditions of the viewing room for at least 10 minutes before interpreting radiographs.

Viewing Aids

During the process of interpreting and evaluating radiographs, numerous aids may be used to enhance the interpreter's ability to discover small indications. These include masks to cover portions of the film and allow concentration on specific areas. Magnifiers are also used to enlarge small indications wherever necessary.

INTERPRETATION AIDS

Reference radiographs with known discontinuity images are very useful in evaluating
and interpreting radiographs. They are available for different product forms, such as
castings, welds, tubular components, pressure vessels etc. They are commercially
available from several sources including ASTM and ASNT.
JUDGING RADIOGRAPHIC QUALITY
Film Density

The optical density of the film is an accepted measure of the amount of information that has been recorded. The density is in proportion to the number of silver halide crystals that have been exposed and converted into metallic silver. The greater the density, the more detail is available for interpretation. In accordance with the most commonly used standards, the following film density guidelines are used to judge the quality of the radiograph in the area of interest.

Film density 1.8 minimum, 4.0 maximum for X-rays
Film density 2.0 minimum, 4.0 maximum for Gamma rays

For viewing two superimposed radiographs, known as composite film viewing, each film
is usually required to have a minimum density of 1.3. The transmitted film density shall
be measured through the radiographic image of the body of the appropriate hole
penetrameter or adjacent to the designated wire of a wire penetrameter. A tolerance of
0.05 density units is allowed for variation between densitometer readings.

If the density of the radiograph anywhere in the area of interest varies by more than minus 15% or plus 30% from the density through the body of the hole penetrameter or adjacent to the designated wire of a wire penetrameter, an additional penetrameter is used for each exceptional area and the radiograph is retaken.
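The minus 15% / plus 30% rule above can be expressed as a simple check. The function name and the sample densities below are illustrative, not taken from any code standard:

```python
def density_within_tolerance(area_density, penetrameter_density):
    """Return True if the area-of-interest density is within -15% / +30%
    of the density measured through the penetrameter body."""
    return (0.85 * penetrameter_density
            <= area_density
            <= 1.30 * penetrameter_density)

print(density_within_tolerance(2.2, 2.5))  # True  -- within the limits
print(density_within_tolerance(3.4, 2.5))  # False -- above +30%, retake
```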

Artifacts

Artifacts on a radiograph can reduce its quality significantly and can cause misinterpretation if not thoroughly understood. Most film artifacts are caused by improper film processing and careless film handling. In addition, the film can be partially fogged or mottled due to improper storage. Commonly occurring film artifacts include:

1. Pressure marks: caused by improper handling of film and cassettes.
2. Scratch marks: caused by fingernails or abrasives.
3. Static marks: caused by static electricity generated when the film is removed rapidly from the container.
4. Screen marks: caused by screen damage or contamination with chemicals.
5. Streaks: due to ineffective agitation of the solutions during development or rinsing.

Penetrameters
While evaluating the adequacy of the radiograph, carefully examine the penetrameter image to ensure that the complete penetrameter outline as well as the required hole can be seen. Make sure that the correct penetrameter has been selected for the required thickness and type of material being radiographed.

To interpret and analyze the results of any radiographic examination, the interpreter must first judge the quality of the radiograph with regard to the applied technique, density, penetrameter selection and sensitivity, identification of the film, coverage of parts, and artifacts. Secondly, the interpreter must identify the rejectable discontinuities and judge whether they are true flaws. Any artifact that masks the area of interest must be verified with another exposure of the same or another view. Knowledge of the part or component, the manufacturing processes it has undergone, and the defects associated with each process helps the interpreter judge the nature of a discontinuity.

Awareness of the following factors permits the interpreter to make sound judgments.

1. Thickness
2. Surface finish
3. Process
4. Design
5. Material form
6. Heat treatment methods
7. Physical accessibility in making the radiograph.

Welding Process
The fusion of two similar or dissimilar metals is often referred to as welding. Over forty welding processes are available, including but not limited to arc welding, brazing, resistance welding and solid-state welding. Regardless of the process, there are three common variables: the source of heat, the source of shielding, and the source of chemical elements.

Control of these variables is essential and when any of these variables is left uncontrolled,
the individual interpreting the radiograph would expect to find:

1. Porosity
2. Slag inclusions
3. Lack of fusion
4. Lack of Penetration
5. Cracks
6. Tungsten inclusions and/or other indications.

Casting
The ability of a molten metal to fill a mold is based on the fluidity of the molten metal, which varies with the material type and temperature. The solidification process occurs in three stages:

1. The liquid metal contracts
2. The liquid metal solidifies
3. The solid contracts to room temperature

Some of the factors affecting solidification are:

1. thermal properties of the mold
2. liquidus and solidus temperatures of the metal
3. thermal properties of the solidifying metal
4. pour temperature

Typical discontinuities formed during the casting process, which may or may not be detrimental to casting integrity, are:

1. Porosity
2. Gas voids
3. Sand inclusions
4. Slag inclusions
5. Tears
6. Cracks
7. Unfused chaplets
8. Cold shuts

DISCONTINUITIES, THEIR CAUSES AND EFFECTS, AND RADIOGRAPHIC APPEARANCES
Welding-Related Discontinuities
Porosity is a result of gas being trapped in the weld. In general, porosity is not considered critical unless it

a. is present in large quantities
b. contains sharp tails
c. is aligned over a short distance

Porosity appears as rounded, well-defined, high-density spots with sharp contours.

Tungsten inclusions result from pieces of the tungsten electrode being trapped in the weld. If the inclusions are isolated and no larger than the acceptable porosity limits, they are normally disregarded. In a radiograph they appear as very light indications because of tungsten's higher radiation absorption.

Incomplete penetration, or lack of penetration, results when the weld metal does not penetrate and fuse at the root. Incomplete penetration is normally a cause for rejection because it is a potential stress raiser. In a radiograph, it will appear as a sharp, dark, continuous or intermittent line. Depending on the weld geometry, it may occur in the center of the weld or along the edge of the weld bevel.

Lack of fusion, or incomplete fusion, results from non-adhesion between successive weld passes or between a weld pass and the weld edge preparation. Lack of fusion is considered detrimental to fatigue strength and hence is rejectable under most codes. In a radiograph it will appear as a thin, straight, dark line parallel to the weld. Lack of fusion between the sidewall and the weld usually appears straight on one side and irregular on the other, and typically appears at some distance from the weld centerline.

Slag inclusions are caused when nonmetallic materials become trapped in the weld metal between passes or between the weld metal and the base metal. They are generally permissible unless they are large, run along a sufficient length, or occur in large numbers. In a radiograph they usually appear as dark, irregular shapes of varying lengths and widths. They appear dark when the oxide that makes up the inclusion has a lower atomic weight than the weld metal.

Cracks result from fractures or ruptures of the weld metal. They occur when stresses in localized areas exceed the material's ultimate tensile strength. Cracks in all forms are considered detrimental because their sharp extremities act as severe stress concentrators. They normally appear as dark, irregular, wavy, or zigzag lines and may have fine hairline indications branching off from the main crack indication.

Casting-Related Discontinuities
Porosity occurs when gas is trapped in the metal during solidification. The pores vary in size and distribution. The conditions are similar to those in the welding process. In a radiograph, porosity appears as rounded dark spots of various sizes.

Gas voids, i.e., gas holes, worm holes, or blowholes, are also formed during solidification and are considered more critical when in a tail-like linear pattern. In a radiograph, they appear as large, rounded, dark indications, normally with smooth edges.

Slag inclusions result when impurities are introduced during the solidification process and are of little concern unless the amount and concentration are excessive. In a radiograph, they appear as light or dark indications depending on the relative densities of the inclusion and the base metal.

Sand inclusions result from sand breaking free from the mold and subsequently becoming trapped. Unless the amount of trapped sand is excessive, this is normally an acceptable condition.

Shrinkage results from localized contraction of the cast metal as it solidifies and cools. It may or may not be acceptable, depending on the population, design function and several other factors. It appears as irregularly shaped spots of varying densities, which often appear to be interconnected.

Hot tears are cracks or ruptures occurring when the metal is very hot; usually not
acceptable. They appear as dark, ragged, irregular lines and may have branches of
varying densities; less clearly defined than cracks.

Cracks result from stresses in the cast material which occur at relatively low temperatures; they are always unacceptable as they have a tendency to propagate under stress. They appear as dark continuous or intermittent lines, usually quite well defined.

Unfused chaplets result from failure of the liquid metal to consume the metal device used to support the core inside the mold. If the chaplet is not totally consumed, the part is normally unacceptable because of obvious metallurgical problems. They are easily identified as circular dark lines approximately the same size as the core material support device.

Cold shuts are a result of splashing, surging, interrupted pouring, or the meeting of
two streams of molten metal. In radiograph, cold shuts appear as dark lines or linear
areas of varying length.

Radiation and Risk


How much radiation do we get?

The average person in the United States receives about 360 mrem of whole-body equivalent dose every year. This is mostly from natural sources of radiation, such as radon. In 1992, the average dose received by nuclear power workers in the United States was 3 mSv whole-body equivalent, in addition to their background dose.
What is the effect of radiation?
Radiation causes ionizations in the molecules of living cells. These ionizations result in the removal of electrons from the atoms, forming ions or charged atoms. The ions formed can then go on to react with other atoms in the cell, causing damage. For example, if a gamma ray passes through a cell, the water molecules near the DNA might be ionized, and the resulting ions might react with the DNA, causing it to break.

At low doses, such as what we receive every day from background radiation, the cells repair the damage rapidly. At higher doses (up to 1 Sv), the cells might not be able to repair the damage, and the cells may either be changed permanently or die. Most cells that die are of little consequence; the body can simply replace them. Cells changed permanently may go on to produce abnormal cells when they divide, and in the right circumstances these cells may become cancerous. This is the origin of the increased risk of cancer as a result of radiation exposure.

At even higher doses, the cells cannot be replaced fast enough and tissues fail to function. An example of this is "radiation sickness," a condition that results after high acute doses to the whole body (>2 Gy): the body's immune system is damaged and cannot fight off infection and disease. Several hours after exposure, nausea and vomiting occur, followed by diarrhea and general weakness. With higher whole-body doses (>10 Gy), the intestinal lining is damaged to the point that it cannot perform its functions of taking in water and nutrients and protecting the body against infection. At whole-body doses near 7 Gy, if no medical attention is given, about 50% of those exposed are expected to die within 60 days, due mostly to infections.

If someone receives a whole-body dose of more than 20 Gy, they will suffer vascular damage to the vital blood supply of nervous tissue, such as the brain. At doses this high, it is likely that 100% of those exposed will die, from a combination of all the causes associated with lower doses plus the vascular damage.

There is a large difference between whole-body doses and doses to only part of the body. Most cases we will consider are doses to the whole body.

What needs to be remembered is that very few people have ever received doses of more than 2 Gy. With the current safety measures in place, no one is expected to receive more than 0.05 Gy in one year, whereas the sicknesses described above result from sudden doses delivered all at once. Radiation risk estimates, therefore, are based on the increased rates of cancer, not on death directly from the radiation. Non-ionizing radiation does not cause damage the same way that ionizing radiation does. It tends to cause chemical changes (UV), heating (visible light, microwaves), or other molecular changes (EMF).

Risk
How is risk determined?

Risk estimates for radiation were first evaluated by scientific committees starting in the 1950s. The most recent of these committees was the fifth committee on the Biological Effects of Ionizing Radiation (BEIR V). Like previous committees, this one was charged with estimating the risk associated with radiation exposure; it published its findings in 1990. The BEIR IV committee established risks exclusively for radon and other internal alpha-emitting radiation, while BEIR V concentrated primarily on external radiation exposure data.

It is difficult to estimate risks from radiation, because most of the radiation exposures that humans receive are very close to background levels. In most cases, the effects of radiation are not distinguishable from normal levels of those same effects. With the beginning of radiation use in the early part of the twentieth century, however, early researchers and users of radiation were not as careful as we are today. The information from medical uses and from the survivors of the atomic bombs (ABS) in Japan has given us most of what we know about radiation and its effects on humans. Risk estimates have their limitations:

1. The doses from which risk estimates are derived were much higher than the regulated dose levels of today;

2. The dose rates were much higher than normally received;

3. The actual doses received by the ABS group and some of the medical treatment cases have had to be estimated and are not known precisely;

4. Many other factors, such as ethnic origin, natural levels of cancers, diet, smoking, stress and bias, affect the estimates.
What is the risk estimate?
According to the fifth committee on the Biological Effects of Ionizing Radiation (BEIR V), the risk of cancer death is 0.08% per rem for doses received rapidly (acute) and might be two to four times lower (0.02-0.04% per rem) for doses received over a long period of time (chronic). These risk estimates are an average for all ages, males and females, and all forms of cancer. There is a great deal of uncertainty associated with the estimate.

Risk from radiation exposure has also been estimated by other scientific groups. The other estimates are not exactly the same as the BEIR V estimates, owing to differing risk models and assumptions used in the calculations, but all are close.

Risk comparison
The real question is: how much will radiation exposure increase my chances of cancer death over my lifetime?

To answer this, we need to make a few general statements of understanding. One is that in the US, the current death rate from cancer is approximately 20 percent, so out of any group of 10,000 United States citizens, about 2,000 of them will die of cancer. Second, contracting cancer is a random process: given a set population, we can estimate that about 20 percent will die from cancer, but we cannot say which individuals will die. Finally, a conservative estimate of risk from low doses of radiation is thought to be one in which the risk is linear with dose; that is, the risk increases proportionally with any increase in dose. Most scientists believe that this is a conservative model of the risk.

So, now the risk estimates. If you were to take a large population, such as 10,000 people, and expose them to one rem (to their whole bodies), you would expect approximately eight additional deaths (0.08% x 10,000 x 1 rem). So, instead of the 2,000 people expected to die from cancer naturally, you would now have 2,008. This small increase in the expected number of deaths would not be seen in this group, due to natural fluctuations in the rate of cancer.
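The arithmetic above can be sketched in a few lines of Python. The 0.08% per rem figure is the BEIR V acute-dose estimate quoted earlier, and the 20% baseline cancer death rate is the figure used in this discussion:

```python
def excess_cancer_deaths(population, dose_rem, risk_per_rem=0.0008):
    """Linear (no-threshold) estimate: 0.08% fatal cancers per person-rem."""
    return population * dose_rem * risk_per_rem

baseline = 10_000 * 0.20                    # ~2,000 natural cancer deaths
extra = excess_cancer_deaths(10_000, 1.0)   # ~8 additional deaths per rem
print(baseline, extra, baseline + extra)    # 2000.0 8.0 2008.0
```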

What needs to be remembered is that it is not known that 8 people will die; rather, there is a risk of 8 additional deaths in a group of 10,000 people if they all receive one rem instantaneously.

If they were to receive the 1 rem over a long period of time, such as a year, the risk would be less than half this (<4 expected fatal cancers). Risks can be looked at in many ways; here are a few ways to help visualize risk.

One way often used is to look at the number of “days lost” out of a population due to
early death from separate causes, then dividing those days lost between the population
to get an “Average Life expectancy lost” due to those causes. The following is a table of
life expectancy lost for several causes:
Health Risk                          Est. life expectancy lost
Smoking 20 cigarettes a day          6 years
Overweight (15%)                     2 years
Alcohol (US average)                 1 year
All accidents                        207 days
All natural hazards                  7 days
Occupational dose (300 mrem/yr)      15 days
Occupational dose (1 rem/yr)         51 days

You can also use the same approach to looking at risks on the job:
Industry type                        Est. life expectancy lost
All industries                       60 days
Agriculture                          320 days
Construction                         227 days
Mining and quarrying                 167 days
Manufacturing                        40 days
Occupational dose (300 mrem/yr)      15 days
Occupational dose (1 rem/yr)         51 days

Another way of looking at risk is to consider activities common to our society that each carry a one-in-a-million chance of death:

1. Smoking 1.4 cigarettes (lung cancer)

2. Eating 40 tablespoons of peanut butter

3. Spending 2 days in New York City (air pollution)

4. Driving 40 miles in a car (accident)

5. Flying 2500 miles in a jet (accident)

6. Canoeing for 6 minutes

7. Receiving 10 mrem of radiation (cancer)

The following is a comparison of the risks of some medical exams and is based on the
following information:

Cigarette Smoking - 50,000 lung cancer deaths each year per 50 million smokers consuming 20 cigarettes a day, or one death per 7.3 million cigarettes smoked, or 1.37 x 10^-7 deaths per cigarette

Highway Driving - 56,000 deaths each year per 100 million drivers, each covering 10,000 miles, or one death per 18 million miles driven, or 5.6 x 10^-8 deaths per mile driven

Radiation Induced Fatal Cancer - 4% per Sv (100 rem) for exposure to low doses and
dose rates
Procedure          Effective Dose (Sv)  Effective Dose (mrem)  Risk of Fatal Cancer  Equivalent Cigarettes Smoked  Equivalent Highway Miles Driven
Chest Radiograph   3.2 x 10^-5          3.2                    1.3 x 10^-6           9                             23
Skull Exam         1.5 x 10^-4          15                     6 x 10^-6             44                            104
Barium Enema       5.4 x 10^-4          54                     2 x 10^-5             148                           357
Bone Scan          4.4 x 10^-3          440                    1.8 x 10^-4           1300                          3200
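The table entries can be reproduced from the three risk factors stated above (4% per Sv, 1.37 x 10^-7 deaths per cigarette, 5.6 x 10^-8 deaths per mile). The Python sketch below uses illustrative function names:

```python
RISK_PER_SV = 0.04            # fatal cancer risk per sievert (low dose)
RISK_PER_CIGARETTE = 1.37e-7  # deaths per cigarette smoked
RISK_PER_MILE = 5.6e-8        # deaths per highway mile driven

def exam_equivalents(dose_sv):
    """Fatal-cancer risk of an exam, and its cigarette/driving equivalents."""
    risk = dose_sv * RISK_PER_SV
    return risk, risk / RISK_PER_CIGARETTE, risk / RISK_PER_MILE

# Chest radiograph, 3.2 x 10^-5 Sv:
risk, cigs, miles = exam_equivalents(3.2e-5)
print(f"{risk:.2g} risk, ~{cigs:.0f} cigarettes, ~{miles:.0f} miles")
```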
So, in summary, we must balance the risks with the benefits. It is something we do often: we want to go somewhere in a hurry, so we accept the risks of driving for that benefit; we want to eat fatty foods, so we accept the risk of heart disease. Radiation is another risk which we must balance against the benefit. The benefit is that we can have a source of power, do scientific research, or receive medical treatments. The risk is a small increase in cancer. Risk comparisons show that radiation is a small risk compared to risks we take every day. We have studied radiation for nearly 100 years now. It is not a mysterious source of disease, but a well-understood phenomenon, better understood than almost any other cancer-causing agent to which we are exposed.

Doses
The following is a comparison of limits, doses and dose rates from many different sources. Most of this data came from Radiobiology for the Radiologist by Eric Hall, or from BEIR V, National Academy of Sciences. Ranges have been given where known. All doses are TEDE (whole-body total) unless otherwise noted. The doses for x-rays are for the years 1980-1985 and could be lower today.

Radiation Safety
Radionuclides in various chemical and physical forms have become extremely
important tools in modern research. The ionizing radiation emitted by these materials,
however, can pose a hazard to human health. For this reason, special precautions must
be observed when radionuclides are used.

The possession and use of radioactive materials in the United States is governed by
strict regulatory controls. The primary regulatory authority for most types and uses of
radioactive materials is the federal Nuclear Regulatory Commission (NRC). However,
more than half of the states in the US (including Iowa) have entered into “agreement”
with the NRC to assume regulatory control of radioactive material use within their
borders. As part of the agreement process, the states must adopt and enforce
regulations comparable to those found in Title 10 of the Code of Federal Regulations.
Regulations for control of radioactive material use in Iowa are found in Chapter 136C of
the Iowa Code.

For most situations, the types and maximum quantities of radioactive materials
possessed, the manner in which they may be used, and the individuals authorized to
use radioactive materials are stipulated in the form of a “specific” license from the
appropriate regulatory authority. In Iowa, this authority is the Iowa Department of Public
Health. However, for certain institutions which routinely use large quantities of
numerous types of radioactive materials, the exact quantities of materials and details of
use may not be specified in the license. Instead, the license grants the institution the
authority and responsibility for setting the specific requirements for radioactive material
use within its facilities. These licensees are termed “broadscope” and require a
Radiation Safety Committee and usually a full-time Radiation Safety Officer.

At Iowa State University, the Department of Environmental Health and Safety (EH&S)
has the responsibility for ensuring that all individuals who work with radionuclides are
aware of both the potential hazards associated with the use of these materials and the
proper precautions to employ in order to minimize these hazards. As a means to
accomplish this objective, EH&S has established a radiation safety training program.
This document, together with the in-class training, should give each radionuclide user
sufficient information to enable him or her to use radionuclides in a safe manner.

ACTIVITY (of radionuclides)

The quantity which expresses the degree of radioactivity or radiation producing potential
of a given amount of radioactive material is activity. The special unit for activity is the
curie (Ci) which was originally defined as that amount of any radioactive material which
disintegrates at the same rate as one gram of pure radium. The curie has since been
defined more precisely as a quantity of radioactive material in which 3.7 x 10^10 atoms
disintegrate per second. The International System (SI) unit for activity is the becquerel
(Bq), which is that quantity of radioactive material in which one atom is transformed per
second. The activity of a given amount of radioactive material does not depend upon
the mass of material present. For example, two one-curie sources of Cs137 might have
very different masses depending upon the relative proportion of non radioactive atoms
present in each source. The concentration of radioactivity, or the relationship between
the mass of radioactive material and the activity, is called the specific activity. Specific
activity is expressed as the number of curies or becquerels per unit mass or volume.
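The curie-becquerel relationship and the idea of specific activity can be sketched as follows (function names are illustrative):

```python
CI_TO_BQ = 3.7e10   # 1 curie = 3.7 x 10^10 disintegrations per second

def ci_to_bq(curies):
    """Convert activity in curies to becquerels."""
    return curies * CI_TO_BQ

def specific_activity_ci_per_g(activity_ci, mass_g):
    """Specific activity: activity per unit mass (here, Ci per gram)."""
    return activity_ci / mass_g

print(ci_to_bq(1e-3))                        # 1 mCi = 3.7e7 Bq (37 MBq)
print(specific_activity_ci_per_g(2.0, 4.0))  # 0.5 Ci/g
```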

EXPOSURE

The Roentgen is a unit used to measure a quantity called exposure. It can only be used to describe an amount of gamma or x-rays, and only in air. Specifically, it is the amount of photon energy required to produce 1.61 x 10^12 ion pairs in one gram of dry air at 0°C. One Roentgen is equal to depositing 2.58 x 10^-4 coulombs per kg of dry air. It is a measure of the ionization of the molecules in a mass of air. The main advantage of this unit is that it is easy to measure directly, but it is limited because it applies only to deposition in air, and only to gamma and x-rays.

The roentgen is no longer used in radiation protection, as it applies only to photons (gamma and X-rays), is related only to their effect in air, and can only be measured for radiation energies less than 3.0 MeV. The roentgen was, however, preferable to the previous unit, the "dose," which was measured by the quantity of gamma or x-radiation required to produce visible reddening on the skin of the hand or arm.

ABSORBED DOSE - RAD (Radiation Absorbed Dose)

The absorbed dose is the quantity that expresses the amount of energy which ionizing
radiation imparts to a given mass of matter. The special unit for absorbed dose is the
RAD (Radiation Absorbed Dose), which is defined as a dose of 100 ergs of energy per
gram of matter. The SI unit for absorbed dose is the gray (Gy), which is defined as a
dose of one joule per kilogram. Since one joule equals 10^7 ergs, and since one
kilogram equals 1000 grams, 1 Gray equals 100 rads. The size of the absorbed dose is
dependent upon the strength (or activity) of the radiation source, the distance from the
source to the irradiated material, and the time over which the material is irradiated. The
activity of the source will determine the dose rate which can be expressed in rad/hr,
mrad/hr, mGy/sec, etc.

DOSE EQUIVALENT - REM (Roentgen Equivalent Man)

Although the biological effects of radiation are dependent upon the absorbed dose,
some types of particles produce greater effects than others for the same amount of
energy imparted. For example, for equal absorbed doses, alpha particles may be 20
times as damaging as beta particles. In order to account for these variations when
describing human health risk from radiation exposure, the quantity called dose
equivalent is used. This is the absorbed dose multiplied by certain “quality” and
“modifying” factors (QF) indicative of the relative biological damage potential of the
particular type of radiation. The special unit for dose equivalent is the rem (Roentgen
Equivalent Man). The SI unit for dose equivalent is the sievert (Sv).
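The relationships among these units can be summarized in a short sketch. The quality factors used here are the illustrative values from the text: 1 for beta/gamma and 20 for alpha particles.

```python
def gray_to_rad(gray):
    """1 gray = 100 rad (one joule per kilogram = 10,000 ergs per gram)."""
    return gray * 100.0

def dose_equivalent_rem(absorbed_dose_rad, quality_factor):
    """Dose equivalent (rem) = absorbed dose (rad) x quality factor."""
    return absorbed_dose_rad * quality_factor

# Equal absorbed doses, very different dose equivalents:
print(dose_equivalent_rem(1.0, 1))   # 1.0 rem for beta or gamma (QF = 1)
print(dose_equivalent_rem(1.0, 20))  # 20.0 rem for alpha (QF = 20)
print(gray_to_rad(1.0))              # 1 Gy = 100.0 rad
```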

Radiation Measurement
When given a certain amount of radioactive material, it is customary to refer to the
quantity based on its activity rather than its mass. The activity is simply the number of
disintegrations or transformations the quantity of material undergoes in a given period
of time.

The two most common units of activity are the Curie and the Becquerel. The Curie is named after Pierre Curie for his and his wife Marie's discovery of radium. One Curie is equal to 3.7 x 10^10 disintegrations per second. A newer unit of activity is the Becquerel, named for Henri Becquerel, who is credited with the discovery of radioactivity. One Becquerel is equal to one disintegration per second.

It is obvious that the Curie is a very large amount of activity and the Becquerel is a very small amount. To make discussion of common amounts of radioactivity more convenient, we often talk in terms of milliCuries and microCuries, or kiloBecquerels and MegaBecquerels.

Radiation Units
Roentgen: A unit for measuring the amount of Gamma or X-rays in air.

Rad: A unit for measuring absorbed energy from radiation.

Rem: A unit for measuring biological damage from radiation.

Radiation is often measured in one of these three units, depending on what is being
measured and why. In international units, these would be Coulombs/kg for the roentgen,
Grays for the rad and Sieverts for the rem.

Measurement of Radiation, Gas Filled Detector

Since we cannot see, smell or taste radiation, we are dependent on instruments to
indicate the presence of ionizing radiation.

The most common type of instrument is a gas filled radiation detector. This instrument
works on the principle that as radiation passes through air or a specific gas, ionization of
the molecules in the air occur. When a high voltage is placed between two areas of the
gas filled space, the positive ions will be attracted to the negative side of the detector
(the cathode) and the free electrons will travel to the positive side (the anode). These
charges are collected by the anode and cathode, which then form a very small current in
the wires going to the detector. By placing a very sensitive current-measuring device
between the wires from the cathode and anode, the small current can be measured and
displayed as a signal. The more radiation that enters the chamber, the more current is
displayed by the instrument.

Many types of gas-filled detectors exist, but the two most common are the ion chamber
used for measuring large amounts of radiation and the Geiger-Muller or GM detector
used to measure very small amounts of radiation.

Measurement of Radiation, Sodium Iodide Detector

The second most common type of radiation detecting instrument is the scintillation
detector. The basic principle behind this instrument is the use of a special material which
glows or “scintillates” when radiation interacts with it. The most common such material
is a salt called sodium iodide. The light produced by the scintillation process
is reflected through a clear window where it interacts with a device called a
photomultiplier tube.

The first part of the photomultiplier tube is made of another special material called a
photocathode. The photocathode has the unique characteristic of producing electrons
when light strikes its surface. These electrons are then pulled towards a series of
plates called dynodes through the application of a positive high voltage. When
electrons from the photocathode hit the first dynode, several electrons are produced
for each initial electron hitting its surface. This “bunch” of electrons is then pulled
towards the next dynode, where more electron “multiplication” occurs. The sequence
continues until the last dynode is reached, where the electron pulse is now millions of
times larger than it was at the beginning of the tube. At this point the electrons are
collected by an anode at the end of the tube forming an electronic pulse. The pulse is
then detected and displayed by a special instrument.
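
The dynode multiplication described above grows geometrically with the number of
stages. As a rough sketch, overall gain = (per-dynode gain) raised to the number of
dynodes; the per-dynode gain of 5 and the 10-stage count below are illustrative
assumptions, not figures from the text.

```python
# Photomultiplier gain sketch: each dynode multiplies the electron count,
# so overall gain = (per-dynode gain) ** (number of dynodes).
# A gain of 5 per dynode and 10 dynodes are assumed for illustration only.
def pmt_gain(per_dynode_gain, n_dynodes):
    return per_dynode_gain ** n_dynodes

# One photoelectron from the photocathode becomes a pulse of ~10 million electrons:
print(pmt_gain(5, 10))  # 9765625
```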

Scintillation detectors are very sensitive radiation instruments and are used for special
environmental surveys and as laboratory instruments.

ULTRASONIC TESTING

Prior to World War II, sonar, the technique of sending sound waves through water and
observing the returning echoes to characterize submerged objects, inspired early
ultrasound investigators to explore ways to apply the concept to medical diagnosis. In
1929 and 1935, Sokolov studied the use of ultrasonic waves in detecting metal objects.
Mulhauser, in 1931, obtained a patent for using ultrasonic waves, using two
transducers to detect flaws in solids. Firestone (1940) and Simons (1945) developed
pulsed ultrasonic testing using a pulse-echo technique.

Shortly after the close of World War II, researchers in Japan began to explore medical
diagnostic capabilities of ultrasound. The first ultrasonic instruments used an A-mode
presentation with blips on an oscilloscope screen. That was followed by a B-mode
presentation with a two dimensional, gray scale imaging.

Japan’s work in ultrasound was relatively unknown in the United States and Europe until
the 1950s. Then researchers presented their findings on the use of ultrasound to detect
gallstones, breast masses, and tumors to the international medical community. Japan
was also the first country to apply Doppler ultrasound, an application of ultrasound that
detects internal moving objects such as blood coursing through the heart for
cardiovascular investigation.

Ultrasound pioneers working in the United States contributed many innovations and
important discoveries to the field during the following decades. Researchers learned
to use ultrasound to detect potential cancer and to visualize tumors in living subjects
and in excised tissue. Real-time imaging, another significant diagnostic tool for
physicians, presented ultrasound images directly on the system’s CRT screen at the
time of scanning. The introduction of spectral Doppler and later color Doppler depicted
blood flow in various colors to indicate speed of flow and direction.

The United States also produced the earliest hand held “contact” scanner for clinical
use, the second generation of B-mode equipment, and the prototype for the first
articulated-arm hand held scanner, with 2-D images.

Beginnings of Nondestructive Evaluation (NDE)


Nondestructive testing has been practiced for many decades, with initial rapid
developments in instrumentation spurred by the technological advances that occurred
during World War II and the subsequent defense effort. During the earlier days, the
primary purpose was the detection of defects. As a part of “safe life” design, it was
intended that a structure should not develop macroscopic defects during its life, with
the detection of such defects being a cause for removal of the component from service.
In response to this need, increasingly sophisticated techniques using ultrasonics, eddy
currents, x-rays, dye penetrants, magnetic particles, and other forms of interrogating
energy emerged.

In the early 1970’s, two events occurred which caused a major change. The continued
improvement of the technology, in particular its ability to detect small flaws, led to the
unsatisfactory situation that more and more parts had to be rejected, even though the
probability of failure had not changed. However, the discipline of fracture mechanics
emerged, which enabled one to predict whether a crack of a given size would fail under
a particular load if a material property, fracture toughness, were known. Other laws were
developed to predict the rate of growth of cracks under cyclic loading (fatigue). With the
advent of these tools, it became possible to accept structures containing defects if the
sizes of those defects were known. This formed the basis for a new philosophy of “fail
safe” or “damage tolerant” design. Components having known defects could continue in
service as long as it could be established that those defects would not grow to a critical,
failure producing size.

A new challenge was thus presented to the nondestructive testing community. Detection
was not enough. One needed to also obtain quantitative information about flaw size to
serve as an input to fracture mechanics based predictions of remaining life. These
concerns, which were felt particularly strongly in the defense and nuclear power
industries, led to the creation of a number of research programs around the world and
the emergence of quantitative nondestructive evaluation (QNDE) as a new discipline.

The Center for Nondestructive Evaluation at Iowa State University (growing out of a
major research effort at the Rockwell International Science Center); the Electric Power
Research Institute in Charlotte, North Carolina; the Fraunhofer Institute for
Nondestructive Testing in Saarbrucken, Germany; and the Nondestructive Testing
Centre in Harwell, England can all trace their roots to those changes.
In the ensuing years, many important advances have been made. Quantitative theories
have been developed to describe the interaction of the interrogating fields with flaws.
Models incorporating the results have been integrated with solid model descriptions of
real-part geometries to simulate practical inspections. Related tools allow NDE to be
considered during the design process on an equal footing with other failure-related
engineering disciplines. Quantitative descriptions of NDE performance, such as the
probability of detection (POD), have become an integral part of statistical risk
assessment. Measurement procedures initially developed for metals have been
extended to engineered materials, such as composites, where anisotropy and
inhomogeneity have become important issues.

The rapid advances in digitization and computing capabilities have totally changed the
face of many instruments and the types of algorithms that are used in processing
the resulting data. High-resolution imaging systems and multiple measurement
modalities for characterizing a flaw have emerged. Interest is increasing not only in
detecting, characterizing and sizing defects, but in characterizing the materials in which
they occur. Goals range from the determination of fundamental microstructural
characteristics such as grain size, porosity and texture (preferred grain orientation) to
material properties related to such failure mechanisms as fatigue, creep, and fracture
toughness—determinations that are sometimes quite challenging to make due to the
problem of competing effects.

Most ultrasonic instruments detect flaws by monitoring one or more of the following:

• Reflection of sound from interfaces consisting of material boundaries or
discontinuities within the material itself
• Time of transit of a sound wave through the test piece from the entrance point at
the transducer to the exit point at the transducer
• Attenuation of sound waves by absorption within the material
• Features in the spectral response for either a transmitted signal or a reflected one

Ultrasonic waves are mechanical vibrations; the amplitudes of vibration in metal parts
being ultrasonically inspected impose stresses well below the elastic limit, thus
preventing permanent effects on the parts.

Basic Equipment
• An electronic signal generator that produces bursts of alternating voltage (a negative
spike or a square wave) when electronically triggered
• A transducer (probe or search unit) that emits a beam of ultrasonic waves when bursts
of alternating voltage are applied to it
• A couplant to transfer energy in the beam of ultrasonic waves to the test piece
• A couplant to transfer the output of ultrasonic waves (acoustic energy) from the test
piece to the transducer
• A transducer (it can be the same as the transducer initiating the sound or a separate
one) to accept and convert the output of ultrasonic waves from the test piece to
corresponding bursts of alternating voltage. In most systems, a single transducer
alternately acts as sender and receiver
• An electronic device to amplify and, if necessary, demodulate or otherwise modify the
signals from the transducer
• A display or indicating device to characterize or record the output from the test piece.
The display device may be a CRT, sometimes referred to as an oscilloscope; a chart or
strip recorder; a marker, indicator, or alarm device; or a computer printout
• An electronic clock, or timer, to control the operation of the various components of the
system, to serve as a primary reference point, and to provide coordination for the
entire system

Advantages and Disadvantages

The principal advantages of ultrasonic inspection as compared to other methods for the
inspection of metal parts are:

• Superior penetrating power, which allows the detection of flaws deep in the part.
Ultrasonic inspection is done routinely to thicknesses of a few meters on many
types of parts and to thicknesses of about 6 m (20 ft) in the axial inspection of
parts such as long steel shafts or rotor forgings
• High sensitivity, permitting the detection of extremely small flaws
• Greater accuracy than other nondestructive methods in determining the position of
internal flaws, estimating their size, and characterizing their orientation, shape
and nature
• Only one surface needs to be accessible
• Operation is electronic, which provides almost instantaneous indication of flaws.
This makes the method suitable for immediate interpretation, automation, rapid
scanning, in-line production monitoring and process control. With most systems, a
permanent record of inspection results can be made for future reference
• Volumetric scanning ability, enabling the inspection of a volume of metal extending
from the front surface to the back surface of a part
• Non-hazardous to operations or to nearby personnel, and has no effect on equipment
and materials in the vicinity
• Portability
• Provides an output that can be processed digitally by a computer to characterize
defects and to determine material properties

The disadvantages of ultrasonic inspection include the following:

• Manual operation requires careful attention by experienced technicians
• Extensive technical knowledge is required for the development of inspection
procedures
• Parts that are rough, irregular in shape, very small or thin, or not homogeneous
are difficult to inspect
• Discontinuities that are present in a shallow layer immediately beneath the surface
may not be detectable
• Couplants are needed to provide effective transfer of ultrasonic wave energy
between transducers and parts being inspected
• Reference standards are needed, both for calibrating the equipment and for
characterizing flaws

Applicability
The ultrasonic inspection of metal is principally conducted for the detection of
discontinuities. This method can be used to detect internal flaws in most engineering
metals and alloys. Bonds produced by welding, brazing, soldering, and adhesive
bonding can also be ultrasonically inspected. In-line techniques have been developed
for process control. Both line-powered and battery-operated commercial equipment is
available, permitting inspection in shop, laboratory, warehouse, or field.

Ultrasonic inspection is used for quality control and materials inspection in all major
industries. This includes electrical and electronic component manufacturing; production
of metallic and composite materials; and fabrication of structures such as airframes,
piping and pressure vessels, ships, bridges, motor vehicles, machinery, and jet engines.
In-service ultrasonic inspection for preventive maintenance is used for detecting the
impending failure of railroad rolling-stock axles, press columns, earth-moving
equipment, mill rolls, mining equipment, nuclear systems, and other systems and
components.

The flaws that can be detected include, but are not limited to, cracks, inclusions, voids,
laminations, debonding, pipes and flakes. They may be inherent in the raw material,
may result from fabrication and heat treatment, or may occur in service as a result of
fatigue, impact, abrasion, or corrosion.

Ultrasonic inspection can also be used to measure the thickness of metal sections.
Thickness measurements are often made in refinery or chemical plants, and the method
has a wide range of applications in thickness surveys on ships. Special ultrasonic
techniques have been applied to such diverse problems as the rate of growth of fatigue
cracks, detection of borehole eccentricity, measurement of elastic moduli, study of
press-fits, determination of nodularity in cast iron, and metallurgical research into
phenomena such as structure, hardening, and inclusion counts in various metals.
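
The thickness measurements mentioned above are normally made in pulse-echo mode:
the sound travels to the back wall and returns, so thickness = velocity x transit time / 2.
The sketch below assumes a commonly quoted textbook value for the longitudinal
velocity in steel.

```python
# Pulse-echo thickness sketch: sound traverses the wall twice, so
#   thickness = velocity * round_trip_time / 2.
# The longitudinal velocity in steel (~5,920 m/s) is a commonly quoted
# textbook value, assumed here for illustration.
def thickness_mm(velocity_m_per_s, round_trip_time_us):
    """Return wall thickness in mm from the round-trip echo time in microseconds."""
    return velocity_m_per_s * (round_trip_time_us * 1e-6) / 2.0 * 1000.0

# An echo returning 4.05 microseconds after the pulse in steel:
print(thickness_mm(5920.0, 4.05))  # ~12 mm of wall
```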

Physics of Ultrasound
Particle Displacement and Strain
Acoustics is the study of time-varying deformations or vibrations in materials. All
material substances are composed of atoms, which may be forced into vibrational
motion about their equilibrium positions. Many different patterns of vibrational motion
exist at the atomic level. Most of these patterns, however, are irrelevant to the study of
acoustics, which is concerned only with material particles that, although small, contain
many atoms; for within each particle, the atoms move in unison.

When the particles of a medium are displaced from their equilibrium positions, internal
(electrostatic) forces arise. It is these elastic restoring forces between particles,
combined with the inertia of the particles, which lead to oscillatory motions of the medium.

Sound is produced by a vibrating body and is itself a mechanical vibration of particles
about an equilibrium position. The actual particles do not travel through the material
away from the sound source. It is the energy produced, which causes the particles to
vibrate, that moves progressively through the medium. After a drum head has been
struck, vibrations are transferred or imparted to the air surrounding it. The air carries
energy that can be heard. This range of sound energy is called the “audible” range and
has a frequency of about 20 Hz to 20,000 Hz (20 kHz).

Hz = hertz
kHz = kilohertz = 1,000 Hz
MHz = megahertz = 1,000 kHz = 1,000,000 Hz

Most industrial ultrasonic inspections are carried out in the MHz range.
Conventional ultrasonic inspections are carried out in the 2 - 5 MHz
frequency range. However, ultrasonic testing can be carried out up to the 25 MHz range.
The lower limit can also be in the range of 0.5 MHz (for the inspection of castings) and
even in the range of some kHz (inspection of concrete structures).

Wave

The nature of sound travel is in the form of waves. If a particle displacement vs. time
graph is plotted, the resulting graph would resemble a sine wave, and hence we refer to
sound as a wave.

Properties of Waves:

A wave is a disturbance that transmits energy through a system. In order to discuss
ultrasonic waves more easily, it is necessary to define certain terms that characterize
them. They are:

Frequency: Frequency (f) is defined as the number of times a repetitive event (cycle)
occurs per given unit of time. Normal ultrasonic testing is carried out in the range of 0.5
MHz to 25 MHz. However, specialized inspections, such as those involving the testing of
concrete structures, employ frequencies in the order of several kHz. The
frequency of ultrasonic waves affects the inspection capabilities in several ways.
Generally, a compromise has to be made between favorable and adverse effects to
achieve an optimum balance and to overcome the limitations imposed by the ultrasonic
equipment and material.

Sensitivity, or the ability of an ultrasonic test system to detect the smallest possible
discontinuity, is generally increased by using relatively high frequencies. Resolution, or
the ability of an ultrasonic system to give simultaneous, separate indications from
discontinuities that are close to each other both in depth below the test surface and in
lateral position, is directly proportional to the frequency bandwidth and inversely related
to the pulse length. Resolution generally improves with an increase in frequency.

Wavelength
If the length of a particular sound wave is measured from trough to trough, or from
crest to crest, the distance is always the same. This distance is known as the
wavelength (λ) and is defined by the following equation. The time it takes for the wave
to travel a distance of one complete wavelength is the same amount of time it takes for
the source to execute one complete vibration.

λ = V/f

where: λ = wavelength of the wave
V = velocity of the wave
f = frequency of the wave
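
The equation λ = V/f can be sketched numerically. The longitudinal velocity for steel
below is a commonly quoted textbook value, assumed here for illustration.

```python
# Wavelength sketch: lambda = V / f.
# The velocity (~5,920 m/s, longitudinal waves in steel) is a commonly
# quoted textbook figure, assumed here for illustration.
def wavelength_mm(velocity_m_per_s, frequency_hz):
    """Return wavelength in millimeters."""
    return (velocity_m_per_s / frequency_hz) * 1000.0

steel_v = 5920.0  # m/s, longitudinal
for f_mhz in (1, 2, 5, 10):
    lam = wavelength_mm(steel_v, f_mhz * 1e6)
    print(f"{f_mhz} MHz -> {lam:.2f} mm")
# Higher frequency -> shorter wavelength, which is why sensitivity to small
# flaws generally improves at higher frequencies.
```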

Acoustic Wavelength and Defect Detection

The relationship between flaw (defect) detection and ultrasound wavelength is not as
straightforward as one might expect. Conventional wisdom says that if the wavelength
of the ultrasound is too large, defects with sizes much smaller than the ultrasonic
wavelength will not be detected.

Detection of a defect involves many factors other than the relationship of wavelength
and flaw size. Sound reflects from a defect if its acoustic impedance differs from the
surrounding material. Often, the surrounding material has competing reflections, for
example microstructure grains in metals and the aggregate of concrete. A good
measure of detectability of a flaw is its signal-to-noise ratio (S/N), that is the signal from
the defect against the background reflections (categorized as “noise”). The absolute
noise level and the absolute strength of an echo from a “small” defect depend on a
number of factors:

• probe size and focal properties

• probe frequency, bandwidth and efficiency

• inspection path and distance (water and/or solid)

• interface (surface curvature and roughness)

• flaw location with respect to the incident beam

• inherent noisiness of the metal microstructure or concrete aggregate

• inherent reflectivity of the flaw which is dependent on its acoustic impedance, size,
shape, and orientation. Cracks and volumetric defects can reflect ultrasonic waves
quite differently. Many cracks are “invisible” from one direction and strong reflectors
from another.
The signal-to-noise ratio (S/N), and therefore the detectability of a defect:

• increases with increasing flaw size (scattering amplitude). The detectability of a
defect is directly proportional to its size.
• increases with a more focused beam. In other words, flaw detectability is inversely
proportional to the transducer beam width.
• increases with decreasing pulse width (delta-t). In other words, flaw detectability is
inversely proportional to the duration of the pulse produced by an ultrasonic
transducer. The shorter the pulse (often higher frequency), the better the
detection of the defect. Shorter pulses correspond to broader bandwidth
frequency response. See the figure below showing the waveform of a transducer
and its corresponding frequency spectrum.
• decreases in materials with high density and/or a high ultrasonic velocity. The
signal-to-noise ratio (S/N) is inversely proportional to material density and
acoustic velocity.
• increases with frequency. However, in some materials, such as titanium alloys, the
flaw signal amplitude and the Figure of Merit (FOM) both change about the same
with frequency, so in those cases the signal-to-noise ratio (S/N) is somewhat
independent of frequency.

Sound Beam Velocities

Ultrasonic waves travel through solids and liquids at relatively high speeds, but are
more readily attenuated, or die out, in gases. The velocity of a specific wave mode of
ultrasound is constant through a given homogeneous material. The velocities of
vibrational waves through various materials related to ultrasonic testing are listed by
many authorities in centimeters per second or inches per second. The velocities differ
from material to material, and these differences are largely due to the differences in
density and elasticity of each material. Density alone cannot account for the extremely
high velocity in beryllium, which is less dense than aluminum. Another example is that
the acoustic velocities of water and mercury are almost identical, yet mercury is 13 times
as dense as water.

ACOUSTIC IMPEDANCE & COUPLANTS

When a transducer is used to transmit an ultrasonic wave into the material, only part of
the wave energy is transmitted; the rest is reflected at the interface between the two
different materials as ultrasound passes from one to the other. How much of the sound
beam is reflected at the interface depends on a factor called the acoustic impedance
ratio. Acoustic impedance is a material property and can generally be referred to as
the resistance of a material to the passage of sound waves. The specific acoustic
impedance (Z) of any material can be computed by multiplying the density of the
material by the velocity of sound in the material:

Z = ρV
Air has a very low impedance, while the impedance of water is relatively higher
than that of air. Aluminum and steel have still higher impedances.

Impedance Ratio
The impedance ratio between two materials is simply the acoustic impedance of one
material divided by the acoustic impedance of the other material. When a sonic beam
is passing from the first material to the second, the impedance ratio is the impedance of
the second material divided by the impedance of the first material. As the ratio
increases, more of the original energy will be reflected.

R = [(Z2 - Z1) / (Z2 + Z1)]^2    where R = reflected energy


Note that Transmitted Sound Energy + Reflected Sound Energy = 1. Since air has a very
small impedance, the ratios between air and solids are very high. Therefore, most, if not
all, of the ultrasound will be reflected at interfaces involving air and any other material.
An impedance ratio is often referred to as “an impedance mismatch”. If the impedance
ratio, for example, is 5/1, then the impedance mismatch is 5 to 1. The impedance ratio
for a liquid-metal interface is on the order of 20:1 (around 80% reflection), while the
ratio for air-metal is about 100,000 to 1 (virtually 100% reflection). This results in only
a small portion of sound energy being transmitted into a test specimen. Ideally, a 1 to 1
impedance ratio would be desirable for the optimum transmission of ultrasound.
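
The two formulas above (Z = density x velocity, and the reflected-energy fraction R)
can be sketched numerically for a water-steel interface. The density and velocity
figures below are commonly quoted textbook values, assumed here for illustration.

```python
# Acoustic impedance and reflection-coefficient sketch:
#   Z = rho * V
#   R = ((Z2 - Z1) / (Z2 + Z1)) ** 2
# Densities (kg/m^3) and longitudinal velocities (m/s) are commonly quoted
# textbook values, assumed here for illustration.
def impedance(density, velocity):
    return density * velocity

def reflected_fraction(z1, z2):
    return ((z2 - z1) / (z2 + z1)) ** 2

z_water = impedance(1000.0, 1480.0)   # ~1.48e6 rayl
z_steel = impedance(7850.0, 5920.0)   # ~4.65e7 rayl

r = reflected_fraction(z_water, z_steel)
print(f"reflected: {r:.0%}, transmitted: {1 - r:.0%}")
# A water-steel interface reflects most of the incident sound energy,
# which is why only a small portion enters the test piece.
```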
COUPLANTS
Air is a poor transmitter of ultrasound at megahertz frequencies, and the impedance
mismatch between air and most solids is great enough that even a thin layer of air will
severely retard the transmission of sound waves from the transducer to the test piece.
To perform satisfactory contact inspection, it becomes necessary to eliminate the air
between the transducer and the material. There are two options, one being to
completely evacuate the air, the other being to substitute some other medium for air.
Even if evacuation could be done to perfection, sound waves still could not be
transmitted, as sound cannot travel in a vacuum. Hence the only option is to substitute
some other medium for air. This medium is often referred to as the couplant. Thus, a
couplant, usually a liquid or a semi-liquid, is required between the face of the search unit
and the examination surface to permit or improve the transmittance of ultrasound.
Typical couplants include water, grease, cellulose paste, gel and oils. Certain soft
rubbers that transmit ultrasound may be used where adequate coupling can be
achieved through hand pressure application.

The following factors should be considered while selecting the couplant:

• Surface finish of the test piece

• Temperature of the test piece

• Possibility of chemical reactions between the test piece surface and the couplant
• Cleaning requirements
The couplant should be selected such that its viscosity is appropriate for the surface
finish of the test material to be examined. Examination on rough surfaces generally
requires a high-viscosity couplant; e.g., for scanning castings the couplant generally
used is grease. At elevated temperatures, as conditions warrant, heat-resistant coupling
materials such as silicone gels or greases should be used.

Water is a suitable couplant for use on a relatively smooth surface; however, a
wetting agent should be added. It is sometimes appropriate to add glycerin to increase
viscosity, but glycerin tends to induce corrosion in aluminum and therefore it is not
recommended for aerospace applications. Heavy grease or oil can be used on
overhead surfaces or on rough surfaces when the primary purpose of the couplant is to
smooth out the irregularities.

Wallpaper paste is especially used on rough surfaces when good coupling is needed to
minimize background noise and yield an adequate signal-to-noise ratio. Water is not a
good couplant to be used on carbon steel pieces as it promotes surface corrosion.
Wallpaper paste has a tendency to flake off when exposed to air. When dry and hard,
it can be removed by blasting or by wire brushing. Couplants used in contact
inspection should be applied as thinly as possible to obtain consistent test results. The
necessity for a couplant is one of the drawbacks of ultrasonic inspection and may be a
limitation.

The couplant used in calibration should also be used for the examination. During the
performance of an inspection, the couplant layer must be maintained throughout such
that the contact area is held constant while maintaining adequate couplant thickness.
Lack of couplant may reduce the amount of energy transferred between the test piece
and the transducer. These couplant variations in turn result in severe variations in
examination sensitivity.

Wave Behavior at an Interface

Snell’s Law and Critical Angles
Light and sound are both refracted when passing from one medium to another with
different indices of refraction. Because light is refracted at interfaces, objects you see
across an interface appear to be shifted relative to where they really are. If you look
straight down at an object at the bottom of a glass of water, for example, it looks closer
to you than it really is. A good way to visualize how sound refracts is to shine a
flashlight into a bowl of slightly cloudy water, noting the refraction angle with respect to
the incidence angle.

The velocity of sound in each material is determined by the material properties (in the
case of sound, elastic modulus and density) for that material. When sound waves
pass between materials having different acoustic velocities, refraction takes place at
the interface.
Only when an ultrasound wave is incident at right angles on an interface between two
materials (i.e. at an angle of incidence of 0) do transmission and reflection occur at the
interface without any change in the beam direction. At any other angle of incidence,
the phenomena of mode conversion (a change in the nature of the wave motion) and
refraction (a change in the direction of wave propagation) must be considered.
The angle of incidence θi of a ray or beam is the angle measured from the ray to the
surface normal.
The angle of reflection of a ray or beam is the angle measured from the reflected
ray to the surface normal. The law of reflection states that the angle of incidence of a
wave or stream of particles reflecting from a boundary, conventionally measured from
the normal to the interface (not the surface itself), is equal to the angle of reflection
measured from the same normal.

Fig indicating angle of reflection

Angle of refraction

A familiar example of this phenomenon occurs when we view an object on the bottom of a
lake. The object is not exactly where it appears to be, due to the bending of the light.
This bending is referred to as refraction and is determined by

(a) the sound velocities in both the materials


(b) the angle of incidence

It has been observed that, unlike light, a sound wave of one type, such as longitudinal,
will not only be refracted in the second material but, depending on the incident angle, it
may be transformed into another wave mode such as shear or surface. The waves that
propagate in a given instance depend on the ability of the waveform to exist in the
given material, the angle of incidence, and the velocities of the waveforms in both
materials.

The general law that describes wave behavior at an interface is known as Snell’s law.
Although originally derived for light waves, Snell’s law applies to acoustic waves and
to many other types of waves. According to Snell’s law, the ratio of the sine of the angle of
incidence to the sine of the angle of refraction equals the ratio of the corresponding wave
velocities. Mathematically, Snell’s law can be expressed as:

Sin i / Sin r = V1 / V2
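The relation above can be sketched as a short calculation. This is an illustrative example (function name and the material velocities are assumptions, not from the text): typical longitudinal velocities of about 2730 m/s for Perspex and 5920 m/s for steel are used.

```python
import math

def refracted_angle(incident_deg, v1, v2):
    """Snell's law: sin(i)/sin(r) = V1/V2, so sin(r) = sin(i) * V2/V1.
    Returns the refracted angle in degrees, or None beyond the critical
    angle, where no refracted wave of this mode exists."""
    s = math.sin(math.radians(incident_deg)) * v2 / v1
    if s > 1.0:
        return None  # total reflection for this wave mode
    return math.degrees(math.asin(s))

# Longitudinal wave from Perspex (~2730 m/s) into steel (~5920 m/s, longitudinal)
print(refracted_angle(15.0, 2730.0, 5920.0))   # ~34 degrees
print(refracted_angle(30.0, 2730.0, 5920.0))   # None: past the 1st critical angle
```

Note how a modest 15° incidence in the slower medium already gives roughly 34° of refraction in steel, because the velocity ratio is greater than 2.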

Critical Angles

If the angle of incidence is small, sound waves propagating in a given medium undergo
mode conversion at the boundary. At such an angle of incidence, two wave modes exist
in the second medium as a result of mode conversion: a longitudinal wave and a shear
wave. Two wave modes of ultrasound with different velocities and refracted angles,
present in the same medium at the same time, would make it nearly impossible to
evaluate a discontinuity properly, since it would be unknown which wave mode had
detected it.

With further enlargement of the angle of incidence the angle of refraction b also
increases until finally, at an angle of incidence of a = 27.5° (1st critical angle, for a
Perspex/steel interface), the longitudinal wave is refracted with an angle b of 90°. This
means that it runs along the interface whilst the transverse wave is still transmitted into
the test object, Fig. 32a. Our precondition for clear reflector evaluation is then fulfilled:
only one sound wave occurs in the test object, namely the transverse wave with a
refraction angle of 33.3° (for Perspex/steel). With further enlargement of the angle of
incidence, various refraction angles of the transverse wave (= beam angle) can be set,
e.g. exactly 45°, Fig. 32b. Finally, at an angle of incidence of about 57° (2nd critical
angle), the transverse wave is refracted with an angle of 90° and propagates along the
surface of the test object; it then becomes a surface wave, Fig. 32c. That is the limit
beyond which no more sound waves are transmitted into the test object; total reflection
starts from here, Fig. 32d. The range of incidence angles between the 1st and 2nd
critical angles (27.5° to 57°) gives a clearly evaluable sound wave in the test object
(made of steel), namely the transverse wave between 33.3° and 90°, Fig. 33.
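The two critical angles quoted above follow directly from Snell's law with the refracted angle set to 90°. A minimal sketch, assuming typical velocities (Perspex ~2730 m/s; steel ~5920 m/s longitudinal, ~3250 m/s shear — the function name and values are illustrative, not from the text):

```python
import math

def critical_angles(v_wedge, v_long, v_shear):
    """1st critical angle: refracted longitudinal wave reaches 90 deg.
    2nd critical angle: refracted shear (transverse) wave reaches 90 deg.
    Both follow from Snell's law with sin(90 deg) = 1."""
    first = math.degrees(math.asin(v_wedge / v_long))
    second = math.degrees(math.asin(v_wedge / v_shear))
    return first, second

# Perspex wedge on steel (assumed typical velocities)
a1, a2 = critical_angles(2730.0, 5920.0, 3250.0)
print(round(a1, 1), round(a2, 1))  # ~27.5 and ~57.1 degrees
```

The computed values of about 27.5° and 57° agree with the figures given in the text for a Perspex/steel interface.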

GENERATION OF ULTRASONIC WAVES

1. Piezoelectricity

Ultrasonic transmitters and receivers are mainly made from small plates cut from certain
crystals. If no external forces act upon such a plate, the electric charges are arranged
in a certain crystal symmetry and thus compensate each other. Under external pressure
the thickness of the plate changes and with it the symmetry of the charges. An
electric field develops, and a voltage can be tapped off at the silver-coated faces of the
crystal. This effect is called the “direct piezoelectric effect”. Pressure fluctuations, and
thus also sound waves, are directly converted into electric voltage variations by this
effect; the plate serves as a receiver. The direct piezoelectric effect is
reversible (reciprocal piezoelectric effect): if a voltage is applied to the contact faces of the
crystal, the thickness of the plate changes; according to the polarity of the voltage
the plate becomes thicker or thinner. Under an applied high-frequency a.c. voltage the
crystal oscillates at the frequency of the a.c. voltage. A short voltage pulse of less than
1/1,000,000 of a second at 300-1000 V excites the crystal into oscillation at
its natural frequency (resonance), which depends on the thickness and the material of
the plate. The thinner the crystal, the higher its resonance frequency. It is therefore
possible to generate an ultrasonic signal with a defined primary frequency. The
thickness of the crystal is calculated from the required resonance frequency f0 according
to the following formula:

d = c / (2 · f0) = λ / 2

where
c = sound velocity of the crystal material
f0 = resonance frequency of the crystal
d = thickness of the crystal
λ = wavelength
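As a worked example of this half-wavelength relation (function name and the 4 MHz example frequency are illustrative assumptions; the quartz velocity of 5740 m/s is taken from the table below):

```python
def crystal_thickness_mm(c_m_per_s, f0_hz):
    """Fundamental thickness-mode resonance: d = c / (2 * f0) = lambda / 2.
    Returns the required crystal thickness in millimetres."""
    return c_m_per_s / (2.0 * f0_hz) * 1000.0

# Quartz (c ~ 5740 m/s) cut for a 4 MHz probe
print(crystal_thickness_mm(5740.0, 4e6))  # ~0.72 mm
```

Doubling the frequency halves the required thickness, which is why very high frequency crystals are thin and fragile.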
A naturally occurring piezoelectric crystal is quartz (rock crystal), which was used as the
crystal material in the early days of ultrasonic testing. Depending on whether longitudinal
waves or transverse waves are to be generated, the quartz plates are sawed either
perpendicular to the x-axis of the crystal (x-cut) or perpendicular to the y-axis (y-cut) out of
the rock crystal.

In modern probes quartz is hardly used; instead, sintered ceramics or artificially
grown crystals are employed. The most important materials for ultrasonic crystals, as
well as their characteristics, are stated in the table below.
Property                                   PZT       Barium    Lead         Lithium   Quartz   Lithium
                                                     titanate  metaniobate  sulfate            niobate
Sound velocity c (m/s)                     4000      5100      3300         5460      5740     7320
Acoustic impedance Z (10^6 kg/m^2 s)       30        27        20.5         11.2      15.2     34
Electromechanical coupling factor k        0.6-0.7   0.45      0.4          0.38      0.1      0.2
Piezoelectric modulus d                    150-593   125-190   85           15        2.3      6
Piezoelectric deformation constant H       1.8-4.6   1.1-1.6   1.9          8.2       4.9      6.7
Coupling factor for radial oscillation kp  0.5-0.6   0.3       0.07         0         0.1      -

The efficiency of the conversion from electrical into mechanical energy, and vice versa,
differs according to the crystal material used.

The corresponding features are characterized by the piezoelectric constants and the
coupling factor. The constant d (piezoelectric modulus) is a measure of the quality of the
crystal material as an ultrasonic transmitter. The constant H (piezoelectric deformation
constant) is a measure of its quality as a receiver. The table shows that lead
zirconate-titanate has the best transmitter characteristics and lithium sulphate the best
receiver characteristics. The constant k (theoretical value) shows the efficiency of the
conversion of electric voltage into mechanical displacement and vice versa. This value is
important for pulse echo operation, as the crystal acts as both transmitter and receiver.
Here the values for lead zirconate-titanate, barium titanate and lead metaniobate lie in
a comparable order. As a liquid couplant with low acoustic impedance z is required for
direct contact as well as immersion testing, the crystal material should have an
acoustic impedance of the same order so as to be able to transmit as much sound
energy as possible. Thus the best solution would be to use lead metaniobate or
lithium sulphate, as they have the lowest acoustic impedance. A satisfactory
resolution power requires that the constant kp (coupling factor for radial oscillation) is
as low as possible. kp is a measure of the appearance of disturbing radial oscillations
which widen the signals. From this point of view, too, lead metaniobate and lithium
sulphate are the best crystal materials. The characteristics of the crystal materials
described here show that no ideal crystal material exists; depending on the problem,
compromises always have to be made. As lithium sulphate presents additional
difficulties due to its water solubility, the most common crystal materials are
lead zirconate-titanate, barium titanate and lead metaniobate.
2. Set-up of the probe

For practical application in materials testing, probes are used into which the
piezoelectric crystals are installed. In order to protect the crystals against damage they
are bonded to a plane-parallel or wedge-shaped plastic delay block; the shape of the
delay block depends on whether the sound wave is to be transmitted perpendicularly or
at an angle into the workpiece to be tested. The rear of the crystal is closely connected
with the damping element, which damps the natural oscillations of the crystal as
quickly as possible.

In this way the short pulses required for the pulse echo method are generated. The unit
comprising crystal, delay block and damping element is installed in a robust plastic
or metal housing and the crystal contacts are connected to the connector socket.
Probes transmitting and receiving the sound pulses perpendicularly to the surface of
the workpiece are called normal beam probes or straight beam probes. If the crystal is
equipped with a wedge-shaped delay block, an angle beam probe is concerned, which
transmits/receives the sound pulses at a fixed probe angle into/from the workpiece to
be tested. In both probe types the crystal serves for both the transmission and the
reception of sound pulses. A third probe type comprises two electrically and
acoustically separated crystal units, of which one only transmits and the other only
receives sound pulses. This probe is called a TR probe (transmitter-receiver probe) or
twin crystal probe and is used, due to its design and functioning, for the testing of thin
materials or the detection of material flaws located near the surface of the workpiece.

3. Probe characteristics

Not only the damping element but all components of the crystal unit determine the
oscillation process. If the crystal is excited into oscillation by an applied a.c. voltage it
will oscillate at this frequency of excitation. The amplitude (maximum change of
thickness) strongly depends on the frequency of excitation. Near the resonance
frequency f0 the amplitude reaches a maximum (case of resonance). The lower the
damping, the higher the amplitude in the case of resonance and the narrower the
resonance curve. A low damping factor can therefore produce high sound pressure
and thus a great sensitivity of the test system. The disadvantages of a low damping
factor are long sound pulses and correspondingly long signals, which result in a poor
resolution power of the complete test system. Signals from two reflectors located close
to each other are not resolved, i.e. both reflectors produce only one wide signal. Thus a
compromise has to be made between great sensitivity and satisfactory resolution
power. For a given damping, the resolution power can be improved by an increase in
frequency; however, a transit-time-dependent decrease of the sensitivity has to be
accepted. According to the application, probes with different damping and thus
different resolution power are manufactured. In this connection three different types of
damping are mainly distinguished:

In the case of medium damped probes the crystal carries out a relatively large number
of oscillations after its excitation. This results in a pulse length according to the number
of oscillations.

In the case of highly damped probes the pulse consists of only a few (two to four)
oscillations.

Shock wave probes are so highly damped that the crystal carries out only half an
oscillation or one oscillation (aperiodic damping).

The number of probe types available shows that for each test problem encountered in
practice the suitable probe has to be selected. When selecting the probe it is first of all
important to know the test object with the expected flaws and to compare the available
probes on the basis of the shape of the sound beam and the sensitivity characteristic.
The sensitivity characteristic depends on the shape of the sound beam, the resolution
power and the test object itself. The length of the initial pulse determines the dead
zone, i.e. the area immediately below the surface of the workpiece where no reflectors
can be detected. The sensitivity increases with increasing distance and reaches its
peak value in the focus of the sound beam, i.e. at the distance of the near field length,
unless the dead zone exceeds this area.

Beyond the focus the sound beam becomes divergent. The sound pressure decreases
in inverse ratio to the distance, and thus the sensitivity decreases too. Moreover,
considerable sound attenuation in the material may further decrease the sensitivity and
thus the flaw detectability.

An appropriate representation of the sensitivity characteristic, which is considered when
selecting the probe, is given in the sonogram. In this sonogram the detectability of
reference flaws of certain sizes is marked on the sound beam of the most common
probes. The user knows immediately whether the sensitivity is still sufficient at a certain
distance.

Bandwidth describes the frequency spectrum of pulses, and of receivers capable of
amplifying them. Particular descriptions are broad-banded, having a relatively wide
frequency bandwidth, as opposed to narrow-banded or tuned. With the Fourier
transformation method a pulse or amplifier can be described by its bandwidth. It is
usually expressed at a 6 dB drop from maximum amplitude, the bandwidth being
measured between the edges of the curve; a 3 dB or 20 dB drop is sometimes used
instead. The selection of bandwidth is essential for achieving certain test results:
narrow bandwidth for highly sensitive testing, or broad-banded for high resolution
testing. All elements of the test influence the total test bandwidth, and are described in
series: Transmitter pulse -- Cable -- Transducer -- Coupling -- Material -- Cable --
Amplifier -- (Digitalization). Besides the commonly used echo amplitude evaluation
method there are other methods that use the echo frequency/bandwidth information,
such as the Fast Fourier Transformation (FFT).

Properties of Piezoelectric Materials

Quartz

* Natural or artificially grown crystals
* High resistance to wear
* Insoluble in water
* Resists ageing
* Least efficient generator of acoustic energy
* Suffers from mode conversion interference
* Requires high voltage to drive at low temperatures
* Good receiver

Lithium Sulfate

* Most efficient sound receiver
* Low electrical impedance
* No ageing
* Less affected by mode conversion interference
* Fragile, soluble in water
* Operates well on low voltage
* Low mechanical strength
* Crystal decomposes at about 60°C (max. operating temperature approx. 75°C)

Polarized Ceramics

* Efficient generator of ultrasound
* Operates on low voltage
* Usable up to a temperature of 300°C
* Low mechanical strength
* Affected little by mode conversion interference
* Have a tendency to age

Service and checks of probes

In the case of direct contact ultrasonic testing the probe is coupled to the surface of
the workpiece by a couplant, as the ultrasonic waves are not transmitted across an air gap
between probe and workpiece. The workpiece is tested by scanning its surface with
the probe. The coupling face of the probe is therefore subject to wear, and the testing
operator has to make sure that the crystal of the probe does not get damaged. The wear
of the probes can be decreased by using exchangeable protective membranes which,
however, reduce the sensitivity and widen the pulses. Probes with a particularly hard
ceramic guard plate are also offered. If the delay block of a probe is worn down to the
crystal, the probe can no longer be used for testing purposes.

Heavy shocks, such as dropping the probe, may lead to breakage of the crystal.
Temperatures exceeding the recommended maximum operating temperature may also
damage the probe, as in such a case the adhesive bonds between crystal and delay
block or crystal and damping element are dissolved. The crystals themselves undergo
an ageing process as the polarization of the crystal gradually decreases.

In addition to the careful handling of the probes the check of the probes is therefore of
particular importance. Using an ultrasonic flaw detector and a suitable calibration block
the characteristics of the probes can easily be checked so that the ultrasonic operator
can rapidly make sure whether a certain probe produces the specified values or is
defective.

TRANSDUCER TYPES

Straight beam probe

Most standard straight-beam probes transmit and receive longitudinal waves (pressure
waves). The oscillations of such a wave can be described as compression and
decompression of the atoms propagating through the material (gas, liquid or solid),
Fig. 27. There is a large selection of straight-beam probes in various sizes, with
frequencies ranging from approximately 0.5 MHz to 25 MHz. Ranges of over 10 m can
be obtained, thus enabling large test objects to be tested. The wide range enables
individual matching of probe characteristics to every test task, even under difficult
testing conditions. We have already mentioned a disadvantage of straight-beam
probes which, under certain conditions, can be decisive: the poor recognition of
near-to-surface discontinuities due to the width of the initial pulse.

ANGLE BEAM SEARCH UNITS

Probes whose beams enter at an angle are called angle-beam probes because they
transmit and receive the sound waves at an angle to the surface of the test object.
Most standard angle-beam probes transmit and receive, for technical reasons,
transverse (shear) waves. Angle beam transducers direct the sound beam into the test
piece at an angle other than 90 degrees. They are used to locate discontinuities
oriented at angles between 90 and 180 degrees to the surface. They are also used to
propagate shear, surface and plate waves into the test specimen by mode conversion.

In contact testing, angle beam transducers use a wedge, usually made of plastic, to
direct the sound wave into the test specimen at the desired beam angle.
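Choosing the wedge angle for a desired beam angle is Snell's law run in reverse. A minimal sketch under assumed typical velocities (Perspex wedge ~2730 m/s, steel shear ~3250 m/s; the function name is illustrative):

```python
import math

def wedge_angle_deg(beta_deg, v_wedge, v_shear):
    """Inverse Snell's law: the incidence angle in the wedge that produces a
    desired refracted shear-wave angle beta in the test material."""
    s = math.sin(math.radians(beta_deg)) * v_wedge / v_shear
    return math.degrees(math.asin(s))

# Incidence angle in a Perspex wedge for a 45-degree shear beam in steel
print(wedge_angle_deg(45.0, 2730.0, 3250.0))  # ~36.4 degrees
```

With these assumed velocities, a roughly 36° Perspex wedge yields the common 45° shear-wave probe; this incidence angle lies between the two critical angles, so only the shear wave exists in the steel.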

DOUBLE TRANSDUCERS (T/R PROBES)

The technique with two search units, also referred to as Dual Element probes, are used
when the test piece is of irregular shape and reflecting interfaces or back surfaces are
not parallel with the entry surface. One search unit is the transmitter, and the second
search unit is the receiver. The transmitting search unit projects a beam of vibrations
into the material; the vibrations travel through the material and are reflected back to the
receiving search unit from flaws or from the opposite surface.

Dual-element units provide a method of increasing the directivity and resolution
capabilities in contact inspection. By splitting the transmitting and receiving functions,
two-transducer send-and-receive inspection can be done with a single search unit. The
dual-element design allows the receiving function to be electrically and acoustically
isolated from the effects of the excitation pulse by use of an acoustic barrier separating
the transmitter element from the receiver. The receiving transducer is always in a
quiescent state and can respond to the signal reflected from a flaw close to the test
piece surface.

An additional increase in sensitivity within the near surface zone is attained by a slight
inclination of crystals towards each other. This angle of inclination is called the roof
angle, which varies from 0 degrees to approx. 12 degrees, depending on the purpose
of application and probe itself.

Due to the relatively long delay line, the distance between the electrical zero point
(initial pulse) and the mechanical zero point (surface of the workpiece) is rather great.
For this reason the initial pulse shifts so far to the left, out of the CRT screen, that the
near-surface area is no longer covered by it. However, at high instrument gain there is
an interference echo in the near-surface region. This interference is caused by small
sound portions which, travelling through the acoustic separation layer or along the
surface through the coupling medium, reach the receiving crystal. This interference
echo, occurring with T/R probes, is designated the cross-talk echo because it concerns
a direct cross-talk of sound pulses. In general, however, the cross-talk echo does not
affect the detectability of near-surface reflectors.

The design of the probe results in a very good resolution power. On the other hand, it
also results in a further interference effect, which makes the calibration of the test
range more difficult. The fact that the two crystals of a T/R probe are slightly inclined
towards each other causes refraction and splitting of the sound waves at the transition
from plexiglass to steel and also at the back surface of the test specimen. This means
that in total 4 signals return from the backwall of the test specimen. Several
interference echoes appear just after the first backwall echo, such that the second
backwall echo is not clearly distinguishable. Hence we require two reference
thicknesses when calibrating with a T/R probe.

Measuring Accuracy with T/R Probe

The sound travel path in a T/R probe is V-shaped and results in a longer beam path
length. The additional distance which the sound pulse must travel is called the detour.
The actual beam path “a” depends on the thickness “T” of the workpiece as well as on
the distance “c” between the probe indices of sound exit and sound entrance,
according to:

a = sqrt( T^2 + 0.25 c^2 )

The detour “u” which the sound pulse additionally travels on its V-shaped path
consequently equals

u = a - T

The measuring error due to the detour of the sound beam becomes negligibly small if
the distance to be measured lies between the two wall thicknesses used for the
calibration and if the interval between the two calibration paths has not been chosen
too large. For practical applications this means that, before starting to test with a T/R
probe, the operator must know from which depth range he can expect measured
values. He then chooses the two calibration lines which are required to calibrate the
test range. All measured values which now fall within the interval of the two calibration
lines are exact within the usual tolerances. The measuring error increases the smaller
the real measuring distance is.
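The V-path geometry described above can be sketched as a short calculation. This follows the document's beam-path formula, with the detour taken relative to the wall thickness T; the function name and the 10 mm / 6 mm example values are illustrative assumptions:

```python
import math

def tr_probe_beampath(T_mm, c_mm):
    """Half V-path length a = sqrt(T^2 + 0.25*c^2) for a T/R probe, and the
    detour u = a - T by which it exceeds the true wall thickness."""
    a = math.sqrt(T_mm ** 2 + 0.25 * c_mm ** 2)
    return a, a - T_mm

# 10 mm wall, 6 mm between the sound-exit and sound-entrance indices
a, u = tr_probe_beampath(10.0, 6.0)
print(a, u)
```

For the same index separation c, the detour u grows relative to T as the wall gets thinner, which is why thin sections show the largest V-path error and why a two-thickness calibration is required.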

Applications

T/R probes are extensively used for the determination of remaining wall thickness on
tubes and containers that are exposed to corrosion and erosion. They are often used in
combination with digital thickness meters (D-meters), which after coupling directly
indicate the wall thickness in mm or inches. Due to the better near-surface resolution of
T/R probes, they are often used for materials that are essentially thin.

For convex surface curvature, e.g. tubes, the coupling of twin crystal probes plays an
important role for definite and reliable results. The acoustic separation layer must run
along the curvature (transverse to the tube axis) so as to provide excellent coupling. If
the probe is coupled so that the separation layer runs along the tube axis, then the
probe indices will form a wedge-shaped gap, which results in insufficient coupling and
measuring errors. In the case of very small curvature radii or concave surfaces, it is
possible to grind the delay line of the probe so that it matches the surface contour of
the test specimen, achieving optimum coupling conditions.

Paintbrush Transducers. The inspection of large areas with small single-element
transducers is a long and tedious process. To overcome this and to increase the rate of
inspection, paintbrush transducers were developed. The name derives from the wide
beam pattern that, when scanned, covers a relatively wide swath in the manner of a
paintbrush. Paintbrush transducers are usually constructed of a mosaic or series of
matched crystal elements. The primary requirement of a paintbrush transducer is that
the intensity of the beam pattern not vary greatly over the entire length of the
transducer. At best, paintbrush transducers are designed to be survey devices; their
primary function is to reduce inspection time while still giving full coverage. Standard
search units are usually employed to pinpoint the location and size of a flaw that has
been detected by a paintbrush transducer.

EMA Transducers

The electromagnetic-acoustic phenomenon can be used to generate ultrasonic waves
directly in the surface of an electrically conductive specimen without the need for an
external vibrating transducer and couplant. Similar probes can be used for detection,
so that a completely non-contact transducer can be constructed. The method is
therefore particularly suitable for use on high-temperature specimens, rough surfaces,
and moving specimens. The received ultrasonic strength in EMA systems is 40-50 dB
lower than with a conventional barium titanate probe, but input powers can be
increased.

The principle of EMA transducers is illustrated by the figure. A permanent magnet or an
electromagnet produces a steady magnetic field, while a coil of wire carries an RF
current. The radio frequency induces eddy currents in the surface of the specimen,
which interact with the magnetic field to produce Lorentz forces that cause the
specimen surface to vibrate in sympathy with the applied radio frequency. When
receiving the ultrasonic energy, the vibrating specimen can be regarded as a moving
conductor in a magnetic field, which generates currents in the coil. The clearance
between the transducer and the metal surface affects the magnetic field strength and
the strength of the eddy currents generated, and the ultrasonic intensity falls off rapidly
with increasing gap.

FOCUSED UNITS
Sound can be focused by acoustic lenses in a manner similar to that in which light is
focused by optic lenses. Most acoustic lenses are designed to concentrate sound
energy, which increases beam intensity in the zone between the lens and the focal
point. When an acoustic lens is placed in front of the search unit, the effect resembles
that of a magnifying glass: a smaller area is viewed, but details in that area appear
larger. The combination of a search unit and an acoustic lens is known as a focused
search unit or focused transducer; for optimum sound transmission, the lens of a
focused search unit is usually bonded to the transducer face. Focused search units can
be immersion or contact types.

Acoustic lenses are designed similarly to optic lenses. They can be made of various
materials; several of the more common lens materials are methyl methacrylate,
polystyrene, epoxy resin, aluminum, and magnesium. The important properties of
materials for acoustic lenses are:

• Large index of refraction in water
• Acoustic impedance close to that of water or the piezoelectric element
• Low internal sound attenuation
• Ease of fabrication

Acoustic lenses for contour correction are usually designed on the premise that the
entire sound beam must enter the testpiece normal to the surface of the testpiece. For
example, in the straight-beam inspection of tubing, a narrow diverging beam is
preferred for internal inspection and a narrow converging beam for external inspection.
In either case, with a flat-face search unit there is a wide front-surface echo, caused by
the inherent change in the length of the water path across the width of the sound
beam, which results in a distorted pattern of multiple back reflections (Fig. 38a). A
cylindrically focused search unit eliminates this effect.

The shapes of acoustic lenses vary over a broad range; two types are shown in Fig.
39: a cylindrical (line-focus) search unit and a spherical (spot-focus) search unit in Fig.
39(b). The sound beam from a cylindrical search unit illuminates a rectangular area
that can be described in terms of beam length and width. Cylindrically focused search
units are mainly used for the inspection of thin-wall tubing and round bars. Such search
units are especially sensitive to fine surface or subsurface cracks within the walls of
tubing. The sound beam from a spherical search unit illuminates a small circular spot.
Spherical transducers exhibit the greatest sensitivity and resolution of all the transducer
types, but the area covered is small and the useful depth range is correspondingly
small. Focusing can also be achieved by shaping the transducer element. The front
surface of a quartz crystal can be ground to a cylindrical or spherical radius. Barium
titanate can be formed into a curved shape before it is polarized. A small piezoelectric
element can also be mounted on a curved backing member to achieve the same result.
Focal Length. Focused transducers are described by their focal length: short, medium,
long, or extra-long. Short focal lengths are best for inspection of regions of the testpiece
that are close to the front surface. The medium, long, and extra-long focal lengths are
for increasingly deeper regions. Frequently, focused transducers are specially designed
for a specific application. The longer the focal length of the transducer, the deeper into
the testpiece the point of high sensitivity will be.

The focal length of a lens in water has little relation to its focal depth in metal, and
changing the length of the water path in immersion inspection produces little change in
focal depth in a testpiece. The large difference in sonic velocity between water and
metal causes sound to bend at a sharp angle when entering a metal surface at any
oblique angle. Therefore, the metal surface acts as a second lens that is much more
powerful than the acoustic lens at the transducer, as shown in Fig. 40. This effect moves
the focal spot very close to the front surface, as compared to the focal point of the same
sound beam in water. It also causes the transducer to act as a notably directional and
distance-sensitive receiver, sharpens the beam, and increases sensitivity to small
reflectors in the focal zone. Thus, flaws that produce very low amplitude echoes can be
examined in greater detail than is possible with standard search units.

Useful Range. The most useful portion of a sound beam starts at the point of maximum
sound intensity and extends for a considerable distance beyond this point. Focusing the
sound beam moves the maximum-intensity point toward the transducer and shortens
the usable range beyond it. The useful range of focused transducers extends from about
0.25 to approximately 2.50 mm (0.010 to 0.10 in.) below the front surface. In materials
0.25 mm (0.010 in.) or less in thickness, resonance or antiresonance techniques can be
used. These techniques are based on changes in the duration of ringing of the echo or in
the number of multiples of the back-surface echo. The advantages of focused search
units are listed below; these advantages apply mainly to the useful thickness range of
0.25 to 2.50 mm (0.010 to 0.10 in.) below the front surface:

• High sensitivity to small flaws

• High resolving power
• Low effects of surface roughness
• Low effects of front-surface contour
• Low metal noise (background)

The echo-masking effects of surface roughness and metal noise can be reduced by
concentrating the energy into a smaller beam. The side lobe energy produced by a flat
transducer is reflected away by a smooth surface. When the surface is rough, some of
the side lobe energy returns to the transducer and widens the front reflection, causing
loss of resolving power and increasing the length of the dead zone. The limitation of
focused search units is the small region of the test part, in the area of sound focusing,
that can be effectively interrogated.

Noise. Material noise consists of low-amplitude, random signals from numerous small
reflectors irregularly distributed throughout the testpiece. Some of the causes of metal
noise are grain boundaries, microporosity, and segregations. The larger the beam, the
more of these small reflectors it encounters. If echoes from several of these small
reflectors have the same transit time, they may produce an indication whose amplitude
exceeds the acceptance level. Focused beams reduce background by reducing the
volume of metal inspected, which reduces the probability that the sound beam will
encounter several small reflectors at the same depth. Echoes from discontinuities of
unacceptable size are not affected and will rise well above the remaining background.

Focused search units allow the highest possible resolving power to be achieved with
standard ultrasonic equipment. When a focused search unit is used, the front surface of
the testpiece is not in the focal zone, and the concentration of sound beam energy makes
the echo amplitude from any flaw very large. The resolving power of any system can be
greatly improved by using a focused transducer designed specifically for the application
and for a defined region within the testpiece. Special configurations consist of spherical,
conical, and toroidal apertures, with improvements in beam width and depth of field.
The figure below shows an immersion probe with a cylindrical lens.

Special Probes

Custom or special probes are often required for specialized applications. These often
encompass a number of elements relating to particular locations and angles. Some
examples are given below:

The Rail Probe

Specifically designed to contain two angle probes (2 MHz, 70°) at either end and a
compression probe in the middle (2 MHz). The Rail Probe was used in combination with
a flaw detector and rail cart to scan rails and has proved to be a very successful
transducer.

HF Wheel Probe

A new high frequency dry coupled wheel probe. The main driving advantage of this dry
coupled solid contact wheel probe is that it overcomes problems with couplant
contamination (application and removal) as well as eliminating the practicalities of
immersion systems. The "tyre" or delay material is constructed of hydrophilic polymers
which have acoustic properties that lend themselves ideally to the implementation of
ultrasonics. Applications include thickness measurement, composite inspection,
delamination detection and general flaw detection.

Boiler Tube Probes (BTPs)

Small, low profile twin angle beam transducers with integral cables for the inspection of
welds in steam boilers in power stations. They are axially radiused to suit 50 mm boiler
tubes, and are 5 MHz with an increased toe-in angle to produce a short focal length to
suit such welds; they are available as 70° and 60°, with BNC or LEMO1 connectors to fit
flaw detectors.

Roller Probes
Dry contact roller probes are designed to be used where automated testing is
required. Roller probes have miniature BNC connectors. They can be used in
combination with dry contact flaw detectors, UFDS and the MS310D.

Time of Flight Diffraction (TOFD) Probes

All TOFD transducers are highly damped with short pulse length, broad bandwidth and
high sensitivity, utilizing lead metaniobate crystals. One major application of TOFD is the
ultrasonic examination of welds after final heat treatment and/or hydraulic testing, to
verify the absence of cracks not detectable by radiography. They are also used to
monitor welds during the service life of components.
Sound radiation in matter

The previous discussion of wave propagation proceeded from the assumption that a
plane wave in an unlimited medium is concerned. In practice, however, the crystal
produces a system of waves in a limited area, which leads to a sound field shaped in a
very complicated manner. In order to explain these processes we look at two
point-shaped sources P1 and P2, which transmit spherical waves. We also assume that
both sources simultaneously produce maxima and minima of the same amplitude. In
the space surrounding points P1 and P2 there are certain points where the path
difference between the two waves is just ½ λ; i.e., at these points a minimum of the one
wave overlaps a maximum of the other wave. At these points the two waves cancel each
other. Another group of points is characterized by the fact that two maxima or two
minima of the two waves overlap. At these points a wave occurs whose amplitude is
twice as large as that of the wave coming from points P1 and P2. The figure shows a
momentary representation of the system discussed. The thickly drawn circles represent
the maxima and the thinly drawn circles the minima of the outgoing waves. The points
where a thickly drawn ring and a thinly drawn ring intersect are the points of
cancellation (connected by dotted lines). The dash-and-dot lines connect the points
where two maxima or two minima overlap, i.e., where a gain takes place. A
superposition of two or more waves coming from different points is called interference.
Due to the interference, a complicated wave system occurs, which can be demonstrated
by water waves. In order to make the sound radiation from the crystal surface
understandable, the surface is subdivided into many small points. Each point of the
crystal is considered to be the starting point of a spherical wave (Huygens' principle).
Due to the interference of all these waves, a complicated system of maxima and minima
occurs: the sound field.

Behind the crystal a number of interference maxima and minima can be seen. On the
central beam there is the last maximum, the main maximum, and from this point on no
further maxima and minima exist. The area of maxima and minima up to the main
maximum is called the near field. The distance between the crystal and the main
maximum is the near field length N. The near field length depends on the area of the
crystal (i.e., the square of its diameter), the frequency of the sound waves, and the
sound velocity in the material in which the waves propagate.

For a circular crystal the following formula applies:

N = D² · f / (4 · c) = D² / (4 · λ)

where D is the crystal diameter, f the frequency, c the sound velocity in the material,
and λ the wavelength.
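The near-field formula can be evaluated directly. The sketch below is illustrative only; the probe diameter, frequency, and steel velocity used in the example are assumed values, not taken from the text.

```python
def near_field_length(diameter_m: float, frequency_hz: float, velocity_m_s: float) -> float:
    """Near-field length N = D^2 * f / (4 * c) for a circular crystal, in metres.

    This is the usual approximation for D much larger than the wavelength.
    """
    return diameter_m ** 2 * frequency_hz / (4.0 * velocity_m_s)

# Assumed example: 10 mm crystal, 4 MHz, longitudinal waves in steel (~5920 m/s).
N = near_field_length(0.010, 4.0e6, 5920.0)
print(f"N = {N * 1000:.1f} mm")
```

Note how N grows with the square of the crystal diameter but only linearly with frequency, which matches the text's statement that the near field depends on the crystal area, the frequency, and the sound velocity.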

The representation of the sound field which shows the interference maxima and minima
is not very suitable for ultrasonic testing practice, although it shows the actual sound
pressure distribution of the sound source. In practice an approximate representation of
the sound field is used which shows the area in front of the sound source where
reflections from flawed areas in the workpiece occur when the pulse echo method is
applied. This approximate representation is called the sound beam.
First, we look at distances which are larger than the near field length N. Proceeding
from a point on the central beam, we look for points on the vertical to the central beam
where the sound pressure has been reduced by 50% as compared with the starting point
on the central beam.

For various distances these points can be connected by straight lines in good
approximation. The angle between the central beam and the marginal ray is called the
divergence angle γ50; the index 50 means that the marginal ray characterizes those
points which show a sound pressure reduced by 50% as compared with the central
beam. In the same way, 10% marginal rays can, of course, be defined; there the sound
pressure amounts to only 10% of the value on the central beam. This results in a new
divergence angle γ10. The divergence angles can be calculated as follows (D is the
crystal diameter, a the side length of a square crystal, λ the wavelength):

Circular crystal: sin γ50 = 0.51 · λ / D    sin γ10 = 0.87 · λ / D
Square crystal:   sin γ50 = 0.44 · λ / a    sin γ10 = 0.74 · λ / a

Using N and γ the sound beam can be easily constructed:


At the distance N from the crystal, an auxiliary line parallel to the crystal is drawn. From
the crystal centre the two marginal rays are drawn off from the central beam at the
angle γ50. The points of intersection of the marginal rays and the auxiliary parallel are
connected with the margin of the crystal. Thus a focusing exists up to the near field end.
At larger distances from the crystal (behind the near field), the sound beam diverges.
From a distance of approximately 3 near field lengths onwards, the sound pressure on
the central beam is reduced inversely proportionally to the distance from the crystal.
This area of the sound beam is called the far field. The area between near field and far
field is called the transition area. Let's have a closer look at the two values N and γ50
characterizing the sound beam:

It can be observed that the shape of the sound beam depends on the sound velocity, the
frequency, and the crystal diameter. The dependence on sound velocity means that the
shape of the sound beam is influenced by the material of the test object. If a certain
material is concerned, and thus the sound velocity known, a larger near field length and
a smaller divergence angle are obtained by increasing the testing frequency. The same
effect can be reached by increasing the crystal diameter. The geometry of the crystal
and the wave characteristics of the sound waves are the reason for the characteristic
form of the sound beam and the interference effects.
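The divergence-angle relations above can be checked numerically. This is a minimal sketch, assuming a circular crystal and the standard 0.51 (50% edge) and 0.87 (10% edge) coefficients; the probe values in the example are assumed for illustration.

```python
import math

def divergence_angle_deg(k: float, velocity_m_s: float,
                         frequency_hz: float, diameter_m: float) -> float:
    """Half-angle of beam divergence from sin(gamma) = k * lambda / D, in degrees."""
    wavelength = velocity_m_s / frequency_hz
    return math.degrees(math.asin(k * wavelength / diameter_m))

# Assumed example: 10 mm circular crystal, 4 MHz, steel (5920 m/s), wavelength 1.48 mm.
g50 = divergence_angle_deg(0.51, 5920.0, 4.0e6, 0.010)  # 50% pressure edge
g10 = divergence_angle_deg(0.87, 5920.0, 4.0e6, 0.010)  # 10% pressure edge
print(f"gamma_50 = {g50:.1f} deg, gamma_10 = {g10:.1f} deg")
```

Doubling either the frequency or the diameter halves λ/D and so roughly halves the divergence angle, which is the effect the text describes.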

We assumed in this connection a trouble-free propagation of the sound waves. Trouble-
free means:

1. The material is completely homogeneous, i.e., absolutely uniform.

2. The workpiece is infinitely large, so that the sound waves do not encounter
any limits.

Both conditions can, of course, not be met in practice. That means that further
influences and effects on the sound propagation occur in a real workpiece. Even a
flawless workpiece of steel, for example, contains very small and finely distributed
inhomogeneities, e.g., grain boundaries. These are areas where the space-lattice
structure of the material is defective, or where foreign matter accumulates in very small
quantities at the grain boundaries. The propagation of the sound waves is more or less
strongly affected by these inhomogeneities.

Absorption

The absorption of ultrasonic energy occurs mainly by the conversion of mechanical
energy into heat. Elastic motion within a substance as a sound wave propagates through
it alternately heats the substance during compression and cools it during rarefaction.
Because heat flows so much more slowly than an ultrasonic wave, thermal losses are
incurred, and this progressively reduces the energy of the propagating wave.

Absorption can be thought of as a braking action on the motion of oscillating particles.
This braking action is more pronounced when oscillations are more rapid, that is, at
higher frequencies. For most materials, absorption losses increase directly with
frequency.
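This progressive loss can be modelled with the usual exponential attenuation law for a plane wave, A(d) = A0 · exp(−α·d). The sketch below assumes an attenuation coefficient α of 5 Np/m purely for illustration; real values depend on material, frequency, and structure.

```python
import math

def attenuated_amplitude(a0: float, alpha_np_per_m: float, distance_m: float) -> float:
    """Amplitude after travelling distance_m with attenuation coefficient alpha (Np/m)."""
    return a0 * math.exp(-alpha_np_per_m * distance_m)

def loss_db(a0: float, a: float) -> float:
    """Amplitude loss expressed in decibels."""
    return 20.0 * math.log10(a0 / a)

# Assumed example: alpha = 5 Np/m over a 100 mm sound path.
a = attenuated_amplitude(1.0, 5.0, 0.100)
print(f"amplitude = {a:.3f}, loss = {loss_db(1.0, a):.1f} dB")
```

Because α itself rises with frequency, the same sound path costs more decibels at higher test frequencies, which is why lower frequencies penetrate further.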

Scattering

Scattering of an ultrasonic wave occurs because most materials are not truly
homogeneous. Crystal boundaries, twin boundaries, and minute non-metallic inclusions
tend to deflect small amounts of ultrasonic energy out of the main beam. In addition,
mode conversion at crystal boundaries tends to occur because of slight differences in
acoustic velocity and acoustic impedance across the boundaries.

Scattering is highly dependent on the relation of grain size to ultrasonic wavelength.
When grain size is less than 0.1 times the wavelength, scattering is negligible.
Scattering effects vary approximately with the third power of grain size, and when the
grain size is 0.1 times the wavelength or larger, excessive scattering may make it
impossible to conduct meaningful ultrasonic inspection. Castings, which are more likely
to have a rough surface and a coarse-grained structure, often suffer losses due to
scattering effects.

To reduce this effect, it is desirable to decrease the test frequency while scanning
coarse-grained material; frequencies as low as 0.1 MHz are even used to perform
inspection. However, reducing the frequency results in a reduction in sensitivity, so it
should always be borne in mind that the test is being conducted at a lower sensitivity.
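The grain-size rule of thumb above (scattering negligible when grain size is below about 0.1 wavelength) can be turned into a quick frequency check. This is only a sketch; the grain size and velocity used in the example are assumed values.

```python
def wavelength_mm(velocity_m_s: float, frequency_hz: float) -> float:
    """Wavelength in millimetres for a given velocity and frequency."""
    return velocity_m_s / frequency_hz * 1000.0

def scattering_negligible(grain_size_mm: float, velocity_m_s: float,
                          frequency_hz: float) -> bool:
    """Apply the ~0.1-wavelength grain-size criterion from the text."""
    return grain_size_mm < 0.1 * wavelength_mm(velocity_m_s, frequency_hz)

# Assumed example: coarse-grained steel casting with 0.3 mm grains, longitudinal waves.
for f in (4.0e6, 1.0e6, 0.5e6):
    ok = scattering_negligible(0.3, 5920.0, f)
    print(f"{f/1e6:.1f} MHz: wavelength {wavelength_mm(5920.0, f):.2f} mm ->",
          "negligible scatter" if ok else "excessive scatter")
```

In this assumed case 4 MHz fails the criterion while 1 MHz passes, illustrating why the frequency is dropped, at the cost of sensitivity, on coarse-grained material.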

In some cases, determination of the degree of scattering can also


be used as a means of acceptance or rejection of parts. Some cast
irons can be inspected for the size and distribution of graphite
flakes. Similarly, the size and distribution of microscopic voids
in some powder metallurgy parts can be evaluated by measuring the
attenuation of an ultrasonic beam.

Inspection standards

The standardization of ultrasonic inspection allows the same test procedure to be
conducted at various times and locations, and by both customer and supplier, with
reasonable assurance that consistent results will be obtained. Standardization also
provides a basis for estimating the sizes of any flaws that are found.

An ultrasonic inspection system includes several controls that can be adjusted to
display as much information as is needed on the oscilloscope screen or other display
device. If the pulse length and sensitivity controls are adjusted to a high setting,
numerous indications may appear on an A-scan display between the front-surface and
back-surface indications. On the other hand, at a low setting, the display may show no
indications even if flaws of prohibited size are present. Inspection or reference
standards are used as a guide for adjusting instrument controls to reveal the presence
of flaws that may be considered harmful to the end use of the product, and for
determining which indications come from flaws that are insignificant, so that needless
reworking or scrapping of satisfactory parts is avoided.

The inspection or reference standards for pulse-echo testing include test blocks
containing natural flaws, test blocks containing artificial flaws, and the technique of
evaluating the percentage of back reflection. Inspection standards for thickness testing
can be plates of various known thicknesses or can be stepped or tapered wedges.

Test blocks containing natural flaws are metal sections similar to the parts being
inspected. Sections known to contain natural flaws can be selected for test blocks.
However, test blocks containing natural flaws have only limited use as standards.

Test blocks containing artificial flaws consist of metal sections containing notches, slots,
or drilled holes. These test blocks are more widely accepted as standards than are test
blocks that contain natural flaws.

Test blocks containing drilled holes are widely used for longitudinal wave, straight-beam
inspection. The hole in a block can be positioned so that ultrasonic energy from the
search unit is reflected either from the side of the hole or from the bottom of the hole.
The flat-bottom hole is used most, because the flat bottom of the hole offers an
optimum reflection surface that is reproducible; when energy is reflected from the
curved side of a hole, a large portion of the reflected energy may never reach the search
unit.

Differences of 50% or more can easily be encountered between the energy reflected
back to the search unit from flat-bottom holes and from conical-bottom holes of the
same diameter. The difference is a function of both transducer frequency and distance
from search unit to hole bottom. Fig. 43(a) shows a typical design for a test block that
contains a flat-bottom hole. In using such a block, high-frequency sound is directed from
the surface called the "entry surface" toward the bottom of the hole, and the reflection
from it is used either as a standard to compare with signal responses from flaws or as a
reference value for setting the controls of the ultrasonic instrument.

In the inspection of sheet, strip, welds, tubing, and pipe, angle-beam inspection can be
used. This type of inspection generally requires a reference standard in the form of a
block that has a notch machined into it. The sides of the notch can be straight and at
right angles to the surface of the test block, or they can be at an angle. The width,
length, and depth of the notch are usually defined by the applicable specification. The
depth is usually expressed as a percentage of testpiece thickness.

In some cases, it may be necessary to make one of the parts under inspection into a
test block by adding artificial discontinuities, such as flat-bottom holes or notches.
These artificial discontinuities can sometimes be placed so that they will be removed
by subsequent machining.

When used to determine the operating characteristics of the instrument and
transducers, or to establish reproducible test conditions, a test block is called a
calibration block. When used to compare the height or location of the echo from a
discontinuity in a part to that from an artificial discontinuity in the test block, the block
is called a reference block. The same physical block, or blocks, can be used for both
purposes. Blocks whose dimensions have been established and sanctioned by any of the
various groups concerned with materials-testing standards are called standard
reference blocks.

Reference blocks

Reference blocks establish a standard of comparison so that echo intensities can be
evaluated in terms of flaw size. Numerous factors that affect the ultrasonic test can
make exact quantitative determination of flaw size extremely difficult, if not impossible.
One factor is the nature of the reflecting surface. Although a flat-bottom hole in a
reference block has been chosen because it offers an optimum reflecting surface and is
reproducible, natural flaws can be of diverse shape and offer nonuniform reflecting
surfaces. The origin of a flaw and the amount and type of working that the product has
received will influence the shape of the flaw. For example, a pore in an ingot might be
spherical and therefore scatter most of the sound away from the search unit, reflecting
back only a small amount to produce a flaw echo. However, when worked by forging or
rolling, a pore usually becomes elongated and flat and therefore reflects more sound
back to the search unit.

On the screen, the height of the echo indication from a hole varies with the distance of
the hole from the front surface in a predictable manner based on near-field and far-field
effects, depending on the test frequency and search-unit size, as long as the grain size
of the material is not large. Where grain size is large, this normal variation can be
altered. Differences in ultrasonic transmissibility can be encountered between reference
blocks of a material with two different grain sizes; in one such case this was caused by
rapid attenuation of ultrasound in the large-grain stainless steel. In some cases where
the grain size is quite large, it may not even be possible to obtain a back reflection at
normal test frequencies.

In the inspection of aluminum, a single set of reference blocks can be used for most
parts regardless of alloy or wrought mill product. This is considered acceptable practice
because ultrasonic transmissibility is about the same for all aluminum alloy
compositions.

For ferrous alloys, however, ultrasonic transmissibility can vary considerably with
composition. Consequently, a single set of reference blocks cannot be used when
inspecting various products made of carbon steels, stainless steels, tool steels, low-alloy
steels, and high-temperature alloys. For example, if a reference block prepared from
fine-grain steel were used to set the level of test sensitivity and the material being
inspected were coarse grained, flaws could be quite large before they would yield an
indication equal to that obtained from the bottom of the hole in the reference block.
Conversely, if a reference block were prepared from coarse-grain steel, the instrument
could be made so sensitive that minor discontinuities would appear to be major flaws.
Thermal treatment can also have an appreciable effect on the ultrasonic transmissibility
of steel. For this reason, the stage in the fabrication process at which the ultrasonic
inspection is performed may be important. In some cases, it may determine whether or
not a satisfactory ultrasonic inspection can be performed.

Reference blocks are generally used to adjust the controls of the ultrasonic instrument
to a level that will detect flaws having amplitudes above a certain predetermined level. It
is usual practice to set instrument controls to display a certain amplitude of indication
from the bottom of the hole or from the notch in a reference block, and to record and
evaluate for accept/reject indications exceeding that amplitude, depending on the
codes or applicable standards. The diameter of the reference hole and the number of
flaw indications permitted are generally related to performance requirements for the part
and are specified in the applicable codes or specifications.

The following should be considered when setting controls for inspection:

• The larger the section or testpiece, the greater the likelihood of encountering flaws
of a particular size.
• Flaws of a damaging size may be permitted if found to be in an area that
will be subsequently removed by machining or that is not critical.
• It is generally recognized that the size of the flaw whose echo exceeds the
rejection level usually is not the same as the diameter of the reference hole. In a
reference block, sound is reflected from a nearly perfect flat surface represented
by the bottom of the hole. In contrast, natural flaws are usually neither flat nor
perfectly reflecting.
• The depth of a flaw from the entry surface will influence the height of its echo as
displayed on the oscilloscope screen. Fig. 45 shows the manner in which echo
height normally varies with flaw depth. Test blocks of several lengths are used to
establish a reference curve for the distance-amplitude correction of inspection
data.

Percentage of back reflection

As an alternative to reference blocks, an internal standard can be used. In this
technique, the search unit is placed over an indication-free area of the part being
inspected, the instrument controls are adjusted to obtain a predetermined height of the
back reflection, and the part is evaluated on the basis of the presence or absence of
indications that equal or exceed a certain percentage of that back reflection. This
technique is most useful when lot-to-lot variations in ultrasonic transmissibility are
large or unpredictable, a condition often encountered in the inspection of steels.

The size of flaw that produces a rejectable indication will depend on grain size, depth of
the flaw below the entry surface, and test frequency. When acceptance or rejection is
based on indications that equal or exceed a specified percentage of the back reflection,
rejectable indications may be caused by smaller flaws in coarse-grain steel than in
fine-grain steel. This effect becomes less pronounced, or is reversed, as the transducer
frequency and corresponding sensitivity necessary to obtain a predetermined height of
back reflection are lowered. Flaw evaluation may be difficult when the testpiece grain
size is large or mixed.
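The evaluation logic of the percentage-of-back-reflection technique can be sketched in a few lines. The 50% rejection threshold and the screen-height values below are assumed for illustration; the actual threshold comes from the applicable code or specification.

```python
def percent_of_back_reflection(flaw_echo: float, back_reflection: float) -> float:
    """Flaw echo expressed as a percentage of the reference back reflection."""
    return 100.0 * flaw_echo / back_reflection

def rejectable(flaw_echo: float, back_reflection: float,
               threshold_pct: float = 50.0) -> bool:
    """True if the flaw echo meets or exceeds the (assumed) rejection threshold."""
    return percent_of_back_reflection(flaw_echo, back_reflection) >= threshold_pct

# Assumed example: back reflection set to 80% full screen height on an
# indication-free area; a flaw echo then appears at 44% full screen height.
print(percent_of_back_reflection(44.0, 80.0))
print(rejectable(44.0, 80.0))
```

Because the reference is taken from the part itself, lot-to-lot transmissibility differences are automatically compensated, which is the advantage the text describes for steels.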

Generally, metallurgical structure such as grain size has an effect on ultrasonic
transmissibility for all metals. The significantly large effect shown in Fig. 44 for type
304 stainless steel is also encountered in other materials. The magnitude of the effect
is frequency dependent; that is, the higher the test frequency, the greater the
attenuation of ultrasound. In any event, when the grain size approaches ASTM No. B,
the effect is significant regardless of alloy composition or test frequency.

Other techniques are used besides the reference block technique and the percentage
of back reflection technique. For example, in the inspection of stainless steel plate, a
procedure can be used in which the search unit is moved over the rolled surface, and
the display on the screen is observed to determine whether or not an area of defined
size is encountered where complete loss of back reflection occurs without the presence
of a discrete flaw indication. If such an area is encountered, the plate is rejected unless
the loss of back reflection can be attributed to surface condition or large grain size.

Another technique that does not rely on test blocks is to thoroughly inspect one or more
randomly chosen sample parts for natural flaws by the method to be used on
production material. The size and location of any flaws that are detected by ultrasonic
testing are confirmed by sectioning the part. The combined results of ultrasonic and
destructive studies are used to develop standards for instrument calibration and to
define the acceptance level for production material.

Thickness Blocks. Stepped or tapered test blocks are used to calibrate ultrasonic
equipment for thickness measurement. These blocks are carefully ground from material
similar to that being inspected, and the exact thickness at various positions is marked
on the block. Either type of block can be used as a reference standard for resonance
inspection; the stepped block can also be used for transit-time inspection.
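Transit-time thickness measurement, calibrated on such a stepped block, reduces to thickness = velocity × round-trip time / 2. The velocity and transit time in this sketch are assumed example values.

```python
def thickness_mm(velocity_m_s: float, round_trip_s: float) -> float:
    """Pulse-echo thickness: the sound travels down and back, so divide by 2."""
    return velocity_m_s * round_trip_s / 2.0 * 1000.0

# Assumed example: longitudinal velocity in steel ~5920 m/s, measured
# round-trip time 3.38 microseconds on a step of the calibration block.
t = thickness_mm(5920.0, 3.38e-6)
print(f"thickness = {t:.2f} mm")
```

In practice the velocity is not taken from tables but set so that the instrument reads the known step thicknesses correctly, which is exactly what the stepped block is for.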

Standard Reference Blocks

Many of the standards and specifications for ultrasonic inspection require the use of
standard reference blocks, which can be prepared from various alloys, may contain
holes, slots, or notches of several sizes, and may be of different sizes or shapes.

The characteristics of an ultrasonic beam in a testpiece are affected by the following
variables, which should be considered when selecting standard reference blocks:

• Nature of the testpiece
• Alloy type
• Grain size
• Effects of thermal or mechanical processing
• Distance-amplitude effects
• Flaw size
• Direction of the ultrasonic beam
Introduction to the Common Standards

Calibration and reference standards for ultrasonic testing come


in many shapes and sizes. The type of standard used is dependent
on the NDE application and the form and shape of the object being
evaluated. The material of the reference standard should be the
same as the material being inspected and the artificially induced
flaw should closely resemble that of the actual flaw. This second
requirement is a major limitation of most standard reference
samples. Most use drilled holes and notches that do not closely
represent real flaws. In most cases the artificially induced
defects in reference standards are better reflectors of sound
energy (due to their flatter and smoother surfaces) and produce
indications that are larger than those that a similar sized flaw
would produce. Producing more “realistic” defects is cost
prohibitive in most cases and, therefore, the inspector can only
make an estimate of the flaw size. Computer programs that allow
the inspector to create computer simulated models of the part and
flaw may one day lessen this limitation.

The IIW Type Calibration Block

The standard shown in the above figure is commonly known in the US as an IIW type
reference block. IIW is an acronym for the International Institute of Welding. It is
referred to as an IIW "type" reference block because it was patterned after the "true"
IIW block but does not conform to IIW requirements in IIS/IIW-23-59. "True" IIW blocks
are only made out of steel (to be precise, killed, open hearth or electric furnace,
low-carbon steel in the normalized condition with a grain size of McQuaid-Ehn #8),
whereas IIW "type" blocks can be commercially obtained in a selection of materials. The
dimensions of "true" IIW blocks are in metric units while IIW "type" blocks usually have
English units. IIW "type" blocks may also include additional calibration and reference
features such as notches, circular grooves, and scales that are not specified by IIW.
There are two full-sized versions and a mini version of the IIW type blocks. The mini
version is about one-half the size of the full-sized block and weighs only about
one-fourth as much. The IIW type US-1 block was derived from the basic "true" IIW
block and is shown below in the figure on the left. The IIW type US-2 block was
developed for US Air Force applications and is shown below in the center. The mini
version is shown on the right.

IIW Type US-1 & IIW Type US-2 are shown in the figures above

IIW type blocks are used to calibrate instruments for both angle
beam and normal incident inspections. Some of their uses include
setting metal-distance and sensitivity settings, determining the
sound exit point and refracted angle of angle beam transducers,
and evaluating depth resolution of normal beam inspection setups.
Instructions on using the IIW type blocks can be found in the
annex of American Society for Testing and Materials Standard E164,
Standard Practice for Ultrasonic Contact Examination of Weldments.
The Miniature Angle-Beam or ROMPAS Calibration Block

The miniature angle-beam block is a calibration block that was designed for the US Air
Force for use in the field for instrument calibration. The block is much smaller and
lighter than the IIW block but performs many of the same functions. The miniature
angle-beam block can be used to check the beam angle and exit point of the
transducer. The block can also be used to make metal-distance and sensitivity
calibrations for both angle and normal-beam inspection setups.
AWS Shearwave Distance/Sensitivity Calibration (DSC) Block

A block that closely resembles the miniature angle-beam block and is used in a similar
way is the DSC AWS block. This block is used to determine the beam exit point and
refracted angle of angle-beam transducers and to calibrate distance and set the
sensitivity for both normal and angle beam inspection setups. Instructions on using the
DSC block can be found in the annex of American Society for Testing and Materials
Standard E164, Standard Practice for Ultrasonic Contact Examination of Weldments.

AWS Shearwave Distance Calibration (DC) Block

The DC AWS Block is a metal path distance and beam exit
point calibration standard
that conforms to the
requirements of the American
Welding Society (AWS) and the
American Association of State
Highway and Transportation
Officials (AASHTO).
Instructions on using the DC block can be found in the annex of
American Society for Testing and Materials
Standard E164, Standard Practice for Ultrasonic Contact

Examination of Weldments.

AWS Resolution Calibration (RC) Block
The RC Block is used to determine the resolution of angle beam
transducers per the requirements of AWS and AASHTO. Engraved
Index markers are provided for 45, 60, and 70 degree refracted
angle beams.

30 FBH Resolution Reference Block

The 30 FBH resolution reference block is used to evaluate the
near-surface resolution and flaw size/depth sensitivity of a
normal-beam setup. The block contains number 3 (3/64"), 5 (5/64"),
and 8 (8/64") ASTM flat-bottom holes at ten metal-distances ranging
from 0.050 inch (1.27 mm) to 1.250 inch (31.75 mm).

Miniature Resolution Block

The miniature resolution block is used to evaluate the near-surface
resolution and sensitivity of a normal-beam setup. It can be used to
calibrate high-resolution thickness gages over the range of 0.015
inches (0.381 mm) to 0.125 inches (3.175 mm).

Step and Tapered Calibration Wedges

Step and tapered calibration wedges
come in a large variety of sizes and
configurations. Step wedges are
typically manufactured with four or
five steps, but custom wedges can be
obtained with any number of steps.
Tapered wedges have a constant taper
over the desired thickness range.

Distance/Sensitivity (DS) Block

The DS test block is a calibration standard used to check the horizontal
linearity and the dB accuracy per
requirements of AWS and AASHTO.

Distance/Area-Amplitude Blocks

Distance/area-amplitude correction blocks are typically purchased
as a ten-block set, as shown above. Aluminum sets are manufactured
per the requirements of ASTM E127 and steel sets per ASTM E428.
Sets can also be purchased in titanium. Each block contains a
single flat-bottomed, plugged hole. The hole sizes and metal path
distances are as follows:

· 3/64" at 3"
· 5/64" at 1/8", 1/4", 1/2", 3/4", 1-1/2", 3", and 6"
· 8/64" at 3" and 6"

Sets are commonly sold in 4340 Vacuum melt Steel, 7075-T6 Aluminum,
and Type 304
Corrosion Resistant Steel. Aluminum blocks are fabricated per the
requirements of
ASTM E127, Standard Practice for Fabricating and Checking Aluminum
Alloy Ultrasonic
Standard Reference Blocks. Steel blocks are fabricated per the
requirements of ASTM E428, Standard Practice for Fabrication and
Control of Steel Reference Blocks Used in Ultrasonic Inspection.

Distance-Amplitude #3, #5, #8 FBH Blocks

Distance-amplitude blocks are very similar to the
distance/area-amplitude blocks pictured above. Nineteen-block
sets with flat-bottom holes of a single size and varying metal
path distances are also commercially available. Sets have either
a #3 (3/64") FBH, a #5 (5/64") FBH, or a #8 (8/64") FBH. The
metal path distances are 1/16", 1/8",
1/4", 3/8", 1/2", 5/8", 3/4", 7/8", 1", 1-1/4", 1-3/4", 2-1/4",
2-3/4", 3-1/4", 3-3/4", 4-1/4", 4-3/4", 5-1/4", and 5-3/4". The
relationship between the metal path distance and the signal
amplitude is determined by comparing signals from same-size flaws
at different depths. Sets are commonly sold in 4340 Vacuum melt
Steel, 7075-T6 Aluminum, and Type 304
Corrosion Resistant Steel. Aluminum blocks are fabricated per the
requirements of
ASTM E127, Standard Practice for Fabricating and Checking Aluminum
Alloy Ultrasonic
Standard Reference Blocks. Steel blocks are fabricated per the
requirements of ASTM E428, Standard Practice for Fabrication and
Control of Steel Reference Blocks Used in Ultrasonic Inspection.
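As an illustration of how such a block set is used, reference echo amplitudes recorded at several metal path distances can be interpolated into a distance-amplitude correction (DAC) curve. The sketch below is generic, and the amplitude points are invented for illustration, not measured values:

```python
# Sketch: building a distance-amplitude correction (DAC) curve from
# reference echoes recorded on a distance-amplitude block set.
# The points below are illustrative values, not measurements.
from bisect import bisect_left

# (metal path in inches, echo amplitude in % full screen height)
dac_points = [(0.5, 80.0), (1.0, 64.0), (2.0, 41.0), (3.0, 26.0)]

def dac_amplitude(path_in: float) -> float:
    """Linearly interpolate the reference amplitude at a given metal path."""
    xs = [p for p, _ in dac_points]
    if path_in <= xs[0]:
        return dac_points[0][1]
    if path_in >= xs[-1]:
        return dac_points[-1][1]
    i = bisect_left(xs, path_in)
    (x0, y0), (x1, y1) = dac_points[i - 1], dac_points[i]
    return y0 + (y1 - y0) * (path_in - x0) / (x1 - x0)

print(dac_amplitude(1.5))  # halfway between 64% and 41% -> 52.5
```

In practice the points would be the peak amplitudes actually recorded from each block in the set, and a flaw echo is then judged against the interpolated curve at its own metal path distance.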


The types of testing that can be accomplished with Ultrasonics are:

Thickness Testing: Sound travels into the part and returns after
a measurable period of time.
Flaw Detection: Sound travels into the part, reflecting from the
defect.

Flaw Sizing: Reflector size is determined by signal amplitude.

Velocity Testing: Sound travels through the part at different speeds based on material properties.
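The thickness-testing principle above comes down to one formula: thickness = velocity × (round-trip time / 2), since the pulse travels to the back wall and returns. A minimal sketch in Python; the steel velocity used is a typical assumed value, not one given in this text:

```python
# Sketch: thickness from a pulse-echo time-of-flight measurement.
# thickness = velocity * round_trip_time / 2, because the pulse
# travels down to the back wall and back again.

def thickness_mm(velocity_m_per_s: float, round_trip_us: float) -> float:
    """Return material thickness in mm from a pulse-echo measurement."""
    one_way_s = (round_trip_us * 1e-6) / 2.0   # half the round trip
    return velocity_m_per_s * one_way_s * 1e3  # metres -> millimetres

# Longitudinal velocity in steel is roughly 5900 m/s (assumed value).
print(thickness_mm(5900.0, 8.475))  # ~25 mm plate
```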

The testing technique can be grouped under three main heads. They are:

• Pulse Echo Method
• Through Transmission Method
• Resonance Method

Pulse Echo Test Method

The parts of the system shown above include:

Flaw Detector:
Produces a short duration electrical pulse
Processes the returning signal for interpretation
The flaw detector display shows the “time of flight” of the
sound through the material. (Left to right = Time/Distance)

The height of the signal on the display represents the amount
of reflected sound

Transducer/Probe:

The piezoelectric element converts the electrical pulse (voltage)
into mechanical vibrations (sound)
The pulse of sound then travels through a material and bounces
off a flaw or other reflective surface
The returning echo is converted back into an electrical signal

Test Part:
· The part has properties that allow for consistent, measurable,
repeatable results.

The flaw detector and probe utilize the piezoelectric effect: when
electrical energy is applied, mechanical energy is produced, and when
mechanical energy is applied, electrical energy is produced.

The following shows the types of reflections that would be displayed
on the ultrasonic instrument when testing a piece with internal flaws.

Techniques of Pulse-Echo testing are accomplished with one of the
two basic methods: Contact or Immersion testing. In contact
testing, the transducer is used in direct contact with the test
specimen, with a thin layer of couplant separating them. On some
contact units, plastic wedges together with flexible membranes are
mounted over the face of the crystal. These units are considered
as contact units when the sound beam is transmitted through a
substance other than water.

The display form of contact testing shows an initial pulse and the
front wall reflection superimposed or sometimes very close to each
other, followed by subsequent back surface reflection. The
presence of discontinuities is judged by the appearance of an echo,
or pip, between the front wall reflection (initial echo) and
the back wall reflection. On the basis of this echo, the nature
of the flaw, its apparent depth and its distance from the scanning
point are calculated. Acceptance or rejection is as per the
amplitude and the relative size of the discontinuity indication.
Since the probe is in contact with the test specimen, the frequency
is induced by the exciting crystal and the vibrations are passed
on to the job. Hence, the crystal used in contact
inspection has to be relatively thick to induce a vibration
in the test specimen. This limits the frequency of inspection, as
the frequency depends on the thickness of the crystal: the thicker
the crystal, the lower the frequency at which it can operate.
However, other factors, such as portability and cost of inspection
when compared to other forms of testing, have made contact testing
a conventional method in ultrasonic inspection.
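The crystal thickness/frequency relationship described above is, for the fundamental thickness-mode resonance, f = v/(2t): the thicker the crystal, the lower its frequency. A small illustrative sketch (the quartz velocity used is an assumed round value, not a figure from this text):

```python
# Sketch: fundamental resonance of a piezoelectric crystal is
# f = v / (2 * t), where v is the sound velocity in the crystal
# and t its thickness -- thicker crystals ring at lower frequencies.

def crystal_frequency_mhz(velocity_m_per_s: float, thickness_mm: float) -> float:
    return velocity_m_per_s / (2.0 * thickness_mm * 1e-3) / 1e6

# Quartz, v ~ 5740 m/s (assumed): a 1 mm crystal gives ~2.9 MHz,
# a 0.25 mm crystal ~11.5 MHz.
print(round(crystal_frequency_mhz(5740.0, 1.0), 2))   # 2.87
print(round(crystal_frequency_mhz(5740.0, 0.25), 2))  # 11.48
```

This is why immersion probes, which need no protective thickness for contact wear, can run at much higher frequencies than contact probes.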

Immersion Inspection
In this method, both the test piece and the probe are totally
immersed in liquid, usually water. There are three broadly classified
scanning methods that utilize immersion-type search units:

• Conventional immersion methods in which both the search unit and the test piece
are immersed in liquid
• Squirter and bubbler methods in which the sound is transmitted through a column of
flowing water
• Scanning with a wheel-type search unit which is generally classified as an
immersion method because the transducer itself is immersed.

Advantages of Immersion inspection include:

1. Adequate couplant condition.
2. Speed of inspection.
3. Ability to control and direct sound beams.
4. Adaptability to automated scanning.

Disadvantages of Immersion inspection include:

1. Cost of test system.
2. Portability.
3. Size of the test specimen inspected is a limitation.

Basic Immersion Inspection. In conventional immersion inspection, both the search
unit and the test piece are immersed in water. The sound beam is directed into the test
piece using either a straight-beam (longitudinal wave) technique or one of the various
angle-beam techniques, such as shear, combined longitudinal and shear, or Lamb
wave. Immersion-type search units are basically straight-beam units and can be used
for either straight-beam or angle-beam inspection through control and direction of the
sound beam.

In straight-beam immersion inspection, the water path (distance from the face of the
search unit to the front surface of the test piece) is generally adjusted to require a
longer transit time than the depth of scan so that the first multiple of the front reflection
will appear farther along the oscilloscope trace than the first back reflection. This is
done to clear the displayed trace of signals that may be misinterpreted. Water path
adjustment is particularly important when gates are used for automatic signaling and
recording. Longitudinal wave velocity in water is approximately one-fourth the velocity in
aluminum or steel; 25mm (1in) of water path is approximately equal to 100mm (4in) of
steel or aluminum. Therefore a rule of thumb is to make the water path equal to
one-fourth the test piece thickness plus 6 mm (1/4 in).
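The rule of thumb above is easy to encode. A minimal sketch; the one-fourth factor and the 6 mm margin are exactly the text's rule:

```python
# Sketch of the rule of thumb above: make the water path equal to
# one-fourth of the test piece thickness plus 6 mm (1/4 in), so the
# second front-surface echo falls beyond the first back-wall echo.

def min_water_path_mm(thickness_mm: float) -> float:
    return thickness_mm / 4.0 + 6.0

print(min_water_path_mm(50.0))  # 18.5 mm for a 50 mm steel plate
```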

The general display of immersion testing on the CRT is an initial
pulse followed by the front wall reflection and the back wall
reflection. Since the probe is placed at a particular distance
from the test piece, the front wall reflection is visible on the
CRT. The presence of discontinuities is judged by the appearance of
an echo in between the front and the back surface reflections.

If the water path distance is not maintained properly, i.e. if the
transducer is too close to the front surface of the test piece,
the second front wall reflection will appear between the front
and the back surface reflections, due to the decrease in time delay.
This echo may appear to be a discontinuity echo, which leads to
misinterpretation.

The primary advantage of immersion inspection is the ability to
control and direct sound beams. Angulation is possible with the
same probe that is used for straight beam inspection, which is
not the case in contact testing, which uses different probes for
angulation. The angle at which the crystal is placed to produce
a particular refracted angle can be calculated using Snell's
law, and the crystal face is tilted to the determined angle to
produce the required refracted angle in the test specimen.
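The Snell's law calculation mentioned above can be sketched as follows; the water and steel shear-wave velocities are assumed typical values, not figures from this text:

```python
# Sketch: incident angle in water needed for a desired refracted angle
# in the part, from Snell's law: sin(i)/sin(r) = v_water / v_material.
import math

def incident_angle_deg(refracted_deg: float, v_water: float,
                       v_material: float) -> float:
    sin_i = math.sin(math.radians(refracted_deg)) * v_water / v_material
    return math.degrees(math.asin(sin_i))

# Assumed velocities: water 1480 m/s, steel shear wave 3230 m/s.
# A 45 deg refracted shear wave then needs roughly a 19 deg tilt.
print(round(incident_angle_deg(45.0, 1480.0, 3230.0), 1))
```

Because the water velocity is much lower than the velocity in the part, small changes in tilt angle produce large changes in refracted angle, which is what makes one immersion probe usable for many inspection angles.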

Since the transducer does not come into direct contact with the
test specimen, it is possible to use thinner crystals to produce
sound waves. Using a thinner crystal increases the frequency of
inspection. It is possible to use frequencies as high as 25MHz
and the range usually varies between 2.25MHz and 25MHz. Higher
frequencies give better resolution of smaller discontinuities.
Apart from the conventional probes, spherically ground and
cylindrically ground acoustic lenses are commonly added to
immersion-type transducers. They are used to:

1. improve sensitivity and resolution
2. compensate for test part contours
3. examine a given depth of the test part more carefully

Cylindrically ground lenses focus the sound energy to a line, while
spherically ground lenses focus the sound energy to a point.

Cylindrical lenses are used in two ways:

1. To increase the sensitivity and resolution of equipment
2. For contour correction and to direct the sound energy
perpendicularly at all points.

Spherical lenses concentrate the sound energy into a cone-shaped beam.

1. The focussing increases its sensitivity, but shortens its useful
range.
2. While the cylindrical lens has a greater width, the spherical lens
has the greatest sensitivity.
3. The cylindrical lens is often used when immersing parts having a
rougher surface.

The proper water-path can be determined as follows:

Using a transducer with a focal length of 5 inches in water to focus
the beam to a point 0.25 inches below the surface of a steel part,
one would determine the water-path by:

A. Dividing the velocity of steel by the velocity of water, e.g.:

6.0 x 10^5 cm/sec ÷ 1.5 x 10^5 cm/sec = 4


B. Multiplying the desired focal depth by the answer:

4 x 0.25” = 1”
C. Subtracting the answer from the known focal length in water:

5” - 1” = 4”

D. Thus the water path distance must be 4” to focus the beam at
0.25” below the surface.
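The four steps A-D above can be expressed directly as a small function, using the same example numbers as the text:

```python
# Sketch of steps A-D above: water path for a focused immersion probe.

def water_path_in(focal_length_water_in: float, focal_depth_in: float,
                  v_material: float, v_water: float) -> float:
    ratio = v_material / v_water                     # step A
    equivalent_water = ratio * focal_depth_in        # step B
    return focal_length_water_in - equivalent_water  # step C

# Steel example from the text: 5 in focal length in water, 0.25 in
# focal depth, velocity ratio 6.0e5 / 1.5e5 = 4 -> water path 4 in.
print(water_path_in(5.0, 0.25, 6.0e5, 1.5e5))  # 4.0
```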

Water-Column Designs. In many cases the shape or size of a test piece does not lend
itself to conventional immersion inspection in a tank. The Squirter scanning method
which operates on the immersion principle, is routinely applied to the high-speed
scanning of plate, sheet, strip, cylindrical form, and other regularly shaped test
pieces (fig.). In the Squirter method, the sound beam is projected into the material
through a column of water that flows through a nozzle on the search unit. The sound
beam can be directed either perpendicular or at an angle to the surface of the test
piece. Squirter methods can also be adapted for through transmission inspection. For
this type of inspection, two Squirter-type search units are used. The bubble method is a
minor modification of the Squirter method that gives a less directional flow of couplant.

Wheel-type search units operate on the immersion principle in that a sound beam is
projected through a liquid path into the test piece. An immersion-type
search unit, mounted on a fixed axle inside a liquid-filled rubber tire, is held in one
position relative to the surface of the test piece while the tire rotates freely. The
wheel-type search unit can be mounted on a stationary fixture and the test piece moved
past it, or it can be mounted on a mobile fixture that moves over a stationary test piece.
The position and angle of the transducer element are determined by the inspection
method and technique to be used and are adjusted by varying the position of either the
immersion unit inside the tire or the mounting yoke of the entire unit.

For straight-beam inspection, the ultrasonic beam is projected straight into the test
piece perpendicular to the surface of the test piece. Applications of the straight-beam
technique include the inspection of plate for lamination and of billet stock for primary
and secondary pipe.

In angle-beam inspection, ultrasound is projected into the material at an angle to the
surface. The most common angle of sound beam propagation is 45 deg.; however, other
angles can be used where required. The beam can be projected forward, in the direction
of wheel rotation, or can be projected to the side, 90 deg. to the direction of wheel
rotation. The flexibility of the tire permits angles other than those set by the
manufacturer to be obtained by varying the mounting yoke axis with respect to the
surface of the test piece.
Special wheel-type search units can be made with beam direction or other features
tailored to the specific application. One such unit is the cross-eyed Lamb, which utilizes
two transducer elements that are mounted at angles to the axle so that the two sound
beams cross each other in a forward direction. This unit is used in the lamb wave
inspection of narrow, thin strip for edge nicks and laminations. By selecting incident
angles and beam directions that are appropriate for the thickness, width, and material of
the strip being inspected, Lamb waves are set up by the combined effect of the two
ultrasonic beams.

Resonant Inspection

This new Inspection Technology is not restricted to
conventional NDT applications, but is also able to perform many
inspection functions in one test. The technology can detect
cracks, voids, hardness variations, dimensional variations,
bonding problems, parts with missing manufacturing processes,
misshaped parts and changes in material properties. It is
primarily suitable for inspecting mass-produced components,
although some high value individual components can be condition
monitored to detect changes in their structural integrity. The
Author’s company offers a rapid response testing service using RI
to manufacturing industries.

Resonant Inspection is a “new” NDT technique that was originated
by scientists at Los Alamos National Laboratory in the USA, and
has been developed for industrial applications during the last
four years of commercialisation by an American company Quatrosonics
Inc. It is a whole-body resonance inspection that is particularly
suited to inspecting smaller mass-produced hard components, and
one test will inspect the complete component without radiation,
the need for scanning, immersion in liquids, chemicals, abrasives
or other consumables.

Hard components have their own Resonant Frequencies; for example,
a bell will ring with one specific note. This note is actually a
combination of several pure tones, each representing a different
resonance mode of the bell or harmonics of them. Wine glasses also
have resonant frequencies. The tone of the “ringing” depends upon
the size of the glass, a small glass ringing at a higher note than
a large glass. This tells us that Resonant Inspection can
differentiate between components of different sizes. A bell and a
glass of the same size will ring at different frequencies. This
tells us that the resonant frequency is dependent upon the material
of the tested component. (In practice, it depends upon the material
properties or “stiffness” of the object). In addition, a good bell
or wine glass will ring true, whilst a cracked bell or wine glass
will ring with a “cracked” note or will “clunk” instead of ringing.
This tells us that we can detect cracks with Resonant Inspection.
So what’s new? People have been “inspecting” things by hitting
them with a hammer and listening to them ringing for centuries.

Computers and modern electronics technology have enabled us to
take the human element out of the inspection process, thus
measuring more frequencies and recognising more subtle changes
than are detectable with the human ear. This also allows us to
automate the process (thereby eliminating “operator error”),
and also allows us to move into the ultrasound region to detect
smaller differences.

Resonant Inspection operates by exciting a component with a sine
wave excitation at one specific frequency (thereby putting all of
the energy into that one frequency) then quickly sweeping all of
the individual frequencies through the required test range. A
hammer striking the component will put all the energy into a broad
spectrum (from DC up to hundreds of kilohertz), with only a small
amount at the resonant frequencies. This swept sine-wave approach
allows a much improved signal to noise compared to the hammer blow
technique. A narrow band filtered receiver, typically only several
Hertz wide, will follow the swept sine-wave. This vastly improves
the signal to noise ratio and raises the detectability of the
inspection by orders of magnitude compared to the old hammer
method.

Test Set-Up
For Resonant Inspection, we normally locate the component to be
tested on three or four piezo transducers. It is not necessary to
scan the component with the transducers, nor to rotate a component
past the transducers, as one test will evaluate the whole body or
complete component. One of the transducers normally acts as a
transmitter, exciting the component, whilst one or two more of the
transducers act as receivers, measuring the amplitude of vibration
at the specific frequency of the transmitter or at one of its
harmonics. Further transducers can be used to support the component
in the test. These transducers have ceramic tips (to prevent wear
of the transducers and to provide a good transfer of energy between
the component and transducer), which whilst normally being
hemispherical, can also be ground to a user specific shape if
required.

Vibration Modes and Spectra

Detection Limits

Cracks on steel components (typically 3mm x 0.3mm on components
with dimensions 15-20mm).

Cracks on small steel components (typically 1mm x 0.1mm on
components with dimensions 10-15mm).

Cracks on ceramic components (typically 2mm x 0.05mm on components
with dimensions 15-25mm).

Dimensional variations of about 0.1%, or 0.025mm on components
with dimensions typically 25mm (provided the production variation
is smaller than this).

Hardness variations typically of 4 or 5 Rockwell C points, also
larger variations that do not affect 100% of the component.

Changes in material properties.

Parts with different shapes, such as radii etc.

Parts with a missing production process such as coating, thread
rolling, hole drilling, grooves etc.

Lack of bonding.

Almost any “hard” materials such as metals, powder metal
parts, ceramics and in certain circumstances also composites.

What can’t we inspect? “Soft” materials, assemblies and very
large components.
Testing Speed
Resonant Inspection is a fast inspection technology. Typical
testing times are between 1.5 and 4 seconds per piece, but in some
circumstances, significant deviations from this can occurr. For
some simple operations, testing times of less than 100 milliseconds
per component are possible, and some condition monitoring
applications may require testing times as high as 10 seconds per
piece.

Applications in mass-production
Resonant Inspection is very suited to inspecting mass produced
parts, and is easily able to detect “outliers” or components
that differ from the normal production.
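As a sketch of how such pass/fail sorting can work, the following compares each measured resonance against a frequency band learned from known-good parts. All numbers are illustrative, not taken from any real component:

```python
# Sketch (illustrative numbers): pass/fail sorting in resonant
# inspection by checking each measured resonance mode against a
# frequency band established from known-good parts.

# Acceptance bands in kHz, one per resonance mode (invented values).
good_bands_khz = [(41.8, 42.6), (67.1, 68.0), (103.2, 104.5)]

def is_outlier(measured_khz):
    """Reject a part if any resonance falls outside its learned band."""
    return any(not (lo <= f <= hi)
               for f, (lo, hi) in zip(measured_khz, good_bands_khz))

print(is_outlier([42.1, 67.5, 103.9]))  # False: within all bands
print(is_outlier([42.1, 66.4, 103.9]))  # True: a shifted mode fails
```

A crack, hardness change or dimensional error shifts one or more modes out of their bands, which is how "outliers" are caught at production speed.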

Advanced Ultrasonic Methods For In Service Condition Assessment

Modern industrial asset maintenance and inspection concepts
require reliable and accurate inspection techniques. New
developments in modern NDT have resulted in a range of screening
tools and enhanced mapping techniques, which enable reliable
condition assessment during the operational phase of
installations. In service inspections allow on stream condition
assessment of equipment before planned shutdowns. NDT data is
applied to optimise off stream inspection intervals and aim
maintenance effort to where it is required most. The result is
reduced downtime and increased availability of installations.

During the service time of installations, many known degradation
mechanisms threaten the condition of the equipment and eventually
cause failures. Several phenomena are known to cause problems in
the service phase of equipment. Depending on type of equipment,
materials, process parameters, service time and external factors,
certain types of degradation may occur. It is desirable to
understand these mechanisms and anticipate possible consequences
at an early stage.

Degradation phenomena may be classified in two groups: mechanical
wear and chemical corrosion. Mechanical wear occurs in equipment
under cyclic mechanical or thermal loads. Depending on its
manifestation and the equipment’s function, mechanical wear may be
more or less threatening to the operational reliability. Corrosion
manifests itself in several forms [2], which occur in a certain
environment in a single form or in a combination of corrosion
forms. Many forms may have a severe impact on the integrity of
(parts of) process installations.

General corrosion

The most common manifestation of corrosion is a uniform attack,
caused by a chemical or electro-chemical reaction uniformly
distributed over the exposed surface. A combination of a corrosive
product and an oxygen containing environment may start corrosion.
Environmental factors such as temperature, electrochemical
potential, etc. determine the corrosion rate. Generally, this type
of corrosion is of no great concern, since a slow, gradual loss
of material is well predictable and adequate measures may be taken.

Pitting corrosion

Localised corrosion may be a greater threat to installations, because it may form small
pinholes that perforate the material rapidly. Pitting is a result of an
anodic reaction process. At an initiation location, the surface is
attacked by a corrosive product (e.g. chloride). Metallic atoms are
dissolved at the surface of the starting pit. The dissolution causes
excessive positive charge in the surface area, which attracts negative
chloride ions to restore electrochemical balance. The chloride ions,
again, dissolve new metal atoms and the reaction becomes self
propagating. Within a short time the pit may penetrate the complete
wall thickness. The localised nature of pitting makes it extremely difficult to detect pits in an
early stage.
Weld root erosion/corrosion

Root erosion is a degradation phenomenon which is often
encountered in flow lines. It is difficult to grind the weld
penetration on the inside of pipelines. The excessive penetration
causes a discontinuity on the surface, which disturbs the flow
pattern. On some metals, the wall is passivated by an oxide film,
which protects the steel from corrosion processes. Turbulence and
cavitation affect the region adjacent to the welds and hamper
formation of this passivation layer. Local wash-out of wall
material at the weld side is the result. Another form of weld
root corrosion is caused by selective corrosion. In many
corrosion-resistant alloys or special welding materials, selective
leaching may occur. Removal of the least noble metals results in
deterioration of the lattice structure in alloys (e.g.
dezincification in brass components). If the weld material is more
susceptible to corrosion than the base material, wash-out of the
weld causes root corrosion and degradation of structural integrity.

Fatigue cracking

Under cyclic mechanical or thermal loads, a component may be
subject to fatigue. This process is caused by repeated stresses
just below the yield point. However, due to stress peaks,
microscopic plastic deformations of the material structure occur.
Under continuing stresses, these deformations result in crack
initiations. Mechanical fatigue cracking manifests itself as
cracks with preferential orientation perpendicular to the
predominant stress directions. Thermal fatigue cracking results
in a random web-like crack structure. In combination with a
corrosive medium, the fatigue resistance of materials is reduced.
Corroded spots act as initiator of fatigue cracks, which on their
turn corrode fastest at the crack tip. This combined mechanical
and chemical process is called corrosion fatigue.

Stress Corrosion Cracking

Crack formation occurs at locations where tensile stresses act on
the component in a specific corrosive environment. High pressure
equipment, mechanical stresses, thermal stresses, remaining
stresses from welding processes etc. may initiate stress cracking.
After a while, corrosion products in cracks act as wedging forces.
A hot, aqueous environment in the presence of oxidizers
is an ideal situation for SCC to flourish. Failing cathodic
protection may accelerate the process.
In heavy duty austenitic materials, stress corrosion cracking
occurs mainly along the grain boundaries. This phenomenon, known
as intergranular stress corrosion cracking (IGSCC), manifests
itself in the heat affected zone of nuclear reactor vessels and
piping. With increased use of high definition materials in
petrochemical plant work IGSCC may cause problems in process
plants as well.

Hydrogen Induced Cracking (HIC)

Chemical reactors containing hydrogen may suffer from hydrogen
induced cracking (HIC). In contrast with molecular H2 gas, hydrogen
atoms penetrate steel reactor walls. Recombination of atoms forms
H2 gas, which piles up in voids in the metal structure. Internal
pressure increases until blisters build up and cracking occurs.
Propagation of cracks often occurs by transitional cracks
between two parallel cracks. This process is known as stepwise
cracking.

Hot Hydrogen Attack

At high temperature, hydrogen atoms react with the metallic atoms
to form intermetallic hydrides. This phenomenon is referred to as Hot
Hydrogen Attack, to discriminate it from HIC, which mainly occurs at
lower temperatures. Formation of methane gas (CH4) is
caused by hydrogen reaction with carbides from the metal structure.

Besides expansion of the volume and crack initiation,
decarburisation of the steel structure results in hydrogen
embrittlement, which leads to fast deterioration of structural
integrity.

State of the art inspection Techniques

With the advent of computer technology and the miniaturisation of
equipment, NDT techniques have developed rapidly into modern
highly reliable inspection tools [3]. One of the most important
factors is the increased accuracy and reproducibility of data.
Further mechanisation of inspection techniques improved
reliability greatly. One of the main reasons for this is that
full coverage is assured by performing the inspection in a
mechanised fashion. Secondly, the availability of a complete
inspection record greatly improves condition monitoring
facilities.

Special UT probes

In the past, specific probes have been designed to solve known
inspection problems. Special focused angle beam compression wave
probes have been developed to inspect specific depth regions in a
weld. The compression wave concept was introduced by BAM in the
early eighties [4]. Dual crystal probe construction and beam
focussing techniques enabled high sensitivity ultrasonic
examination of complex materials, which could not be achieved with
standard shear wave techniques. In austenitic or duplex materials,
a well known phenomenon is intergranular stress corrosion cracking
(IGSCC). Especially in nuclear component inspection, many studies
have been undertaken to find appropriate methods for IGSCC
detection.

For near surface cracks, special creeping wave probes have been
developed. Creeping waves travel just below the surface rather
than in it; therefore they are not influenced by the presence of
coupling liquids or by surface irregularities.
Moreover, since the creeping wave is a compression wave type, they
suffer less from a coarse material structure than shear waves.
Another outstanding example is the development of probes focussed
under cladding crack detection (UCC). The focal range is calculated
in the parent material - cladding transition zone. Cracks
initiating from the clad layer into the parent material may be
readily detected by UCC probes.

Mode conversion techniques and the combination of functions minimise the number of probes required for complete inspection coverage. Tandem transducers with multiple crystals can be built in one housing, such as the Round Trip Tandem (RTT) probe, and probes with special functions such as Long-Long-Trans (LTT) probes have been developed. Multi-crystal transducers combine a number of tasks in one housing, saving space in the scanner setup and construction costs. In all cases it has proven to be of the utmost importance that the transducer parameters are optimised for the specific job. Once experience has been gained with a certain weld type, however, it is possible to establish a “standard series” of dedicated transducers with which the inspections can be performed without excessive lead times. Mechanisation of the inspection improved inspection accuracy and reproducibility.

TOFD

The Time-Of-Flight Diffraction (TOFD) technique is an advanced ultrasonic inspection technique that fulfils the need for reliable inspections. It is a powerful technique because it can simultaneously detect and size defects. TOFD provides highly reproducible fingerprints of an installation, which makes it extremely suitable for condition monitoring [5].
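The sizing capability of TOFD follows from simple geometry: a defect tip diffracts sound, and its depth can be derived from the arrival time of the diffracted signal and the probe separation. The sketch below illustrates this relation in Python, assuming a symmetric setup (tip midway between the probes); the probe separation, arrival time and wave velocity are illustrative values, not taken from any procedure.

```python
import math

def tofd_depth(t_us, pcs_mm, velocity_mm_per_us=5.9):
    """Estimate the depth of a diffracting defect tip from its TOFD
    arrival time, assuming the tip lies midway between the two probes.

    t_us               -- arrival time of the diffracted signal (microseconds)
    pcs_mm             -- probe centre separation (mm)
    velocity_mm_per_us -- longitudinal wave velocity (about 5.9 mm/us in steel)
    """
    s = pcs_mm / 2.0                              # half probe separation
    half_path = velocity_mm_per_us * t_us / 2.0   # one leg of the V-shaped path
    if half_path <= s:
        return 0.0   # arrival at or before the lateral wave: surface level
    return math.sqrt(half_path ** 2 - s ** 2)

# Illustrative numbers: 100 mm probe separation, 20 us arrival time
depth_mm = tofd_depth(20.0, 100.0)   # about 31.3 mm below the scan surface
```

In practice, the lateral wave and the back wall echo bracket the usable time window, and a calibration on known reflectors replaces the nominal velocity used here.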

TOFD Weld inspection

The preparation required for a TOFD scan is minimal, which makes the technique attractive even when only a small number of welds has to be inspected. TOFD may be applied during construction, where time constraints exist: it allows examination directly after welding (up to 200 °C) without holding up production, and the acceptance result is directly available. There is practically no safety-related interference with the continuation of nearby construction work (no radiation shielding etc.).

Over the past years, the system has been used in a great variety of applications, ranging from circumferential welds in pipelines (including joints of different wall thickness and tapered pipes) to weld inspection of heavy wall pressure vessels (up to 300 mm wall thickness). The TOFD technique has also been applied successfully to the inspection of partially filled welds, which are hardly inspectable by any other technique. Nozzle and flange welds (complex geometry) can be inspected with prior computer simulation modelling to aid inspection planning and result evaluation.

On stream inspection with TOFD

In contrast with radiography, TOFD examination requires only external access to the object. In the service phase of process installations and pipework, TOFD may be applied to detect and monitor service-induced defects (stress or fatigue cracks etc.). ‘Fingerprints’ of the object are recorded during acceptance inspection of the welds directly after construction, and again periodically every few years. Initially acceptable defects are monitored, and service-induced defects are revealed and progressively monitored. Critical reactor vessels with heavy wall constructions can only be adequately inspected by means of TOFD: other techniques, such as high energy radiography with Cobalt-60 sources or portable betatrons, face high safety requirements and extremely long examination times, while ultrasonic meander scanning is often too cumbersome and time consuming. Spherical gas tanks and steam generator headers may be surveyed for cracks.

Root erosion in flow lines

Selective erosion/corrosion in flow lines may be detected and sized by regular TOFD inspection. Discrimination between single-side and two-side wash-out is easily achieved from the TOFD images. Long stretches of pipeline may be inspected rapidly with minimum preparation.

Hot Hydrogen Damage detection

Hydrogen embrittlement starts from the internal surface and propagates slowly through the wall material. It is often difficult to detect because the changes in structure are minor. A TOFD image can display an increased noise level at the affected location, from which the degree of attack can be estimated.

Mapscan

The demand for mechanised ultrasonic wall thickness measurement is fulfilled with the introduction of the Mapscan. Mapscan applies a mechanical link between transducer and computer to record the thickness data at each predetermined measurement position. The transducer is scanned manually over the surface and the thickness readings are stored on disk.

After the scanning is finished, the data are plotted in a wall thickness map. Each thickness level is colour coded, so wall thinning by corrosion or erosion is readily recognised. High reproducibility (within 0.3 mm wall loss) enables accurate monitoring and calculation of corrosion rates; this input is very important for estimating time-to-failure. Mapscan is applied on vessels, pipework (bends), tank walls etc. for exact documentation of corroded regions, and it can be applied in service at temperatures up to 250 °C. Corrosion phenomena such as general wall thinning, pitting corrosion, flow accelerated corrosion (FAC), hydrogen induced corrosion (HIC) and hot hydrogen attack have been revealed successfully by Mapscan. Because of its high accuracy, Mapscan is accepted in lieu of internal visual inspection of vessels. Based on the on-stream Mapscan results, extension of the off-stream inspection interval is often accomplished.
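The corrosion-rate and remaining-life calculation mentioned above can be sketched in a few lines: two Mapscan surveys of the same position give a wall loss per year, from which the years remaining until the retirement thickness follow. The function name and the example values are illustrative only, not taken from any Mapscan procedure.

```python
def corrosion_rate_and_life(t_prev_mm, t_now_mm, years_between, t_min_mm):
    """Corrosion rate between two wall thickness surveys, and the remaining
    years until the minimum allowable wall thickness is reached.

    t_prev_mm / t_now_mm -- thickness at the same map position in two surveys
    years_between        -- time between the two surveys (years)
    t_min_mm             -- retirement (minimum allowable) thickness
    """
    rate = (t_prev_mm - t_now_mm) / years_between   # mm per year
    if rate <= 0:
        return rate, float("inf")                   # no measurable wall loss
    return rate, (t_now_mm - t_min_mm) / rate       # years to reach t_min

# Illustrative numbers: 12.0 mm -> 11.2 mm over 4 years, limit 8.0 mm
rate, life = corrosion_rate_and_life(12.0, 11.2, 4.0, 8.0)
```

This is the kind of input that feeds the time-to-failure estimates and inspection interval decisions described above.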

P-scan/Bandscan

P-scan

Ultrasonic mapping techniques prove to be very useful in wall thickness measurement, and the same advantages apply to weld inspection. However, ultrasonic weld inspection is executed with angle beam transducers, and the mapping algorithms become more complex. This problem was solved with the development of projection scanning (‘P-scan’ for short). Dedicated scanners enable 3-dimensional presentation of the inspection object. This technique enabled extremely reliable inspection for the monitoring of nuclear reactor vessels. Meander scanning was implemented to comply with existing ultrasonic inspection standards, e.g. ASME, ANSI, etc. The decline in construction of new nuclear power plants meant a reduction in inspection work, while increased inspection requirements in process plants accelerated a technology transfer to petrochemical equipment inspection; especially for high pressure vessels and stainless steel reactors, higher inspection demands arose. Enhanced scanning equipment has led to further automation of ultrasonic inspections. A spin-off of this development is fast mechanised wall thickness mapping, which in spite of the expensive equipment is economically attractive due to its high inspection speed.
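The core of projection scanning is the mapping of each angle-beam echo from (probe position, sound path) readings into object coordinates, so that indications can be plotted in top or side view. A minimal sketch of that geometry, assuming a simple V-path in a flat plate, could look like this (function and parameter names are illustrative, not P-scan terminology):

```python
import math

def project_indication(beam_path_mm, angle_deg, probe_x_mm, thickness_mm):
    """Convert an angle-beam echo (probe position + sound path) into
    side-view coordinates in a flat plate.

    Returns (position along the scan surface, depth below the surface).
    Beam paths longer than one skip are folded back at the back wall.
    """
    a = math.radians(angle_deg)
    surface_dist = beam_path_mm * math.sin(a)   # horizontal travel
    depth = beam_path_mm * math.cos(a)          # unfolded vertical travel
    skips, depth = divmod(depth, thickness_mm)  # fold at each wall bounce
    if int(skips) % 2 == 1:
        depth = thickness_mm - depth            # odd skip: measure from back wall
    return probe_x_mm + surface_dist, depth
```

Folding the depth back at each skip is what lets a single side-view map cover indications detected on both the first and the second leg of the beam.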

Bandscan

Mechanised weld inspection greatly improved accuracy and reproducibility, and the development of special purpose probes and focussing techniques opened new ways of inspection. Instead of meander scanning with standard shear wave angle beam probes, the concept of line scanning was implemented in the Bandscan: a transducer frame on a guided vehicle is moved parallel to the weld along a band, and any number of probes with different functions can be mounted.

For weld inspection, the cross section of the weld is subdivided into several depth zones, each of which is addressed with a combination of probes. Using focussing techniques, the ultrasonic beam is so narrow at the focal position that accurate defect sizing is possible, even though only a single line scan is performed. Improved inspection speed and direct sizing capability are great advantages of Bandscan over meander scanning (P-scan); especially for large structures, such as spherical gas tanks, Bandscan is the preferred method. Later on, this concept was motorised for high inspection speed on pipeline girth welds: inspection cycles of several minutes in (offshore) pipeline construction can now be achieved with the widely used “Rotoscan” systems.
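The zone subdivision described above can be illustrated with a small sketch: the wall thickness is divided into depth zones, and each indication is assigned to the zone containing its depth, identifying the probe combination responsible for it. Equal zone heights and the function names are assumptions for illustration only; in practice the zoning follows the weld bevel geometry.

```python
def make_depth_zones(thickness_mm, n_zones):
    """Subdivide the weld cross section into equal depth zones; each zone
    would be addressed by its own (combination of) focused probes."""
    h = thickness_mm / n_zones
    return [(i * h, (i + 1) * h) for i in range(n_zones)]

def zone_of(depth_mm, zones):
    """Index of the depth zone that contains a given indication depth."""
    for i, (top, bottom) in enumerate(zones):
        if top <= depth_mm < bottom:
            return i
    return len(zones) - 1   # depth equal to the full wall: last zone

# Illustrative: a 30 mm weld subdivided into five 6 mm zones
zones = make_depth_zones(30.0, 5)
```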

For non-routine weld inspection, such as dissimilar metal welds (DMWs), Bandscan may be equipped with special probes. Joints of carbon steel to austenitic, duplex or high nickel alloy steels can be examined using optimised probes. Due to the coarse structure of these materials, they can hardly be inspected using conventional shear wave methods, but compression wave angle beam probes are capable of penetrating these ultrasonically unfriendly materials. In applications where TOFD or shear wave ultrasonic techniques fail, Bandscan may do the job; in this field, once again, Bandscan has proven its merits.

LORUS (Long Range Ultrasonics)

Service-induced defects such as corrosion and cracking tend to favour locations where access for inspection is limited. Hidden corrosion is often found only with great difficulty, or at a late stage when the damage is already done. LORUS has been developed as a tool for fast screening of regions with limited access [6]. From a single access point, a range of up to one metre may be examined for corrosion or cracks. Inspection may be carried out on stream without dismantling, lifting or opening components, which means substantial savings in shutdown, process interruption and/or cleaning time and costs.

Specially designed, high sensitivity ultrasonic probes are applied to achieve a considerable inspection range. LORUS measures reflection signals and composes coherent projection images. Examination results are documented in easy-to-understand, colour coded 2D top-view corrosion maps. Corrosion extent is readily obtained, and corrosion growth may be monitored in recurrent inspections. Reflection amplitudes provide qualitative information on corrosion severity but cannot give the actual corrosion depth.
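Conceptually, such a projection image is built by collapsing the reflection signals of each scan position onto a line of range bins and keeping the strongest amplitude per cell; plotting the cells colour coded by amplitude yields the top-view corrosion map. The sketch below illustrates only this binning step; the data layout and names are assumed for illustration.

```python
def lorus_projection_map(readings, n_range_bins, max_range_mm):
    """Collapse reflection readings into a top-view projection grid.

    readings -- iterable of (scan_index, range_mm, amplitude) tuples
    Returns a dict {(scan_index, range_bin): amplitude}, keeping the
    strongest reflector projected into each map cell.
    """
    bin_size = max_range_mm / n_range_bins
    grid = {}
    for scan, rng, amp in readings:
        rbin = min(int(rng / bin_size), n_range_bins - 1)
        key = (scan, rbin)
        grid[key] = max(grid.get(key, 0.0), amp)
    return grid

# Illustrative readings: two reflectors at one scan line, one at the next
grid = lorus_projection_map(
    [(0, 120.0, 0.4), (0, 130.0, 0.9), (1, 980.0, 0.2)], 10, 1000.0)
```

Because only the maximum amplitude per cell is kept, the map shows corrosion extent and relative severity, which matches the qualitative nature of the amplitude information noted above.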
Storage Tank Inspection

An upcoming trend is the use of on-stream screening techniques to establish the general tank condition. LORUS may be part of a carefully composed on-stream tank inspection package, containing Acoustic Emission corrosion activity measurement and mechanised ultrasonic wall thickness mapping. These complementary techniques provide a firm basis for decision making in maintenance planning and service time extension for storage tank operation.

LORUS focuses specifically on the high risk zone of the annular plate, which supports the tank shell. This region is considered critical because of the high stresses, and failures may lead to large product spills or endanger personnel and the environment. The ultrasonic beam is emitted under the proper angle to propagate underneath the shell and cover a range of up to 1 metre. Projection images show the annular plate region in top view and form a permanent document for recurrent inspections. General corrosion as well as localised pitting corrosion is easily detected.
Large area screening

Mapping of large objects for corrosion or cracks may be performed rapidly with projection mapping techniques. Complete volumetric coverage is achieved by performing line scanning only, which gives substantial time savings compared to full area scanning methods (e.g. Mapscan). In many cases the defect orientation is such that angle beam inspection is required; detection of cracks in the through-thickness direction, in particular, demands angle beam probes. Projection images present colour coded defect maps with exact defect location and extent. Depth sizing may then be performed by more quantitative techniques, e.g. TOFD, at the locations indicated on the LORUS maps. Even locations that cannot be inspected directly because of access restrictions from pipework, repair scales, bandages etc. can be examined by means of LORUS.

Minimum surface preparation combined with high inspection speed makes this a rapid and cost effective mapping technique. The method is extremely well suited for mapping of cracking, such as fatigue cracking with a preferential orientation or randomly orientated SCC, but also for small corrosion pits.

Pipe supports

Corrosion under pipe supports or saddles is a major problem area for inspection. The region between the outer pipe surface and the support is susceptible to corrosion due to water ingress. Radiographic or ultrasonic wall thickness techniques are not applicable because of the access limitation. The only alternative is lifting the pipe from the support to gain access; however, the condition of the pipe is the unknown factor, and lifting may then be a risky enterprise. With LORUS, the support region is inspected from the free top surface of the pipe without the need for lifting, so fast screening of large numbers of supports is achieved in a minimum of time. Both pulse-echo and transmission techniques are applied in the circumferential direction to obtain maximum information: in a single scan over the top surface of the pipe, two probes measure reflection and transmission signals simultaneously. Reflection signals are used to calculate projection images, while transmission signals are used to estimate corrosion severity in several depth classes.
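The classification into depth classes can be pictured as a simple thresholding step: the more wall material is lost under the support, the more the through-transmitted amplitude drops relative to an unaffected reference section. The dB limits and class scheme below are purely illustrative assumptions, not values from any procedure or standard.

```python
def severity_class(transmission_drop_db, limits_db=(3.0, 6.0, 12.0)):
    """Assign a corrosion severity class from the drop of the transmitted
    signal (in dB, relative to an unaffected reference section).

    Class 0 means unaffected; higher classes mean more severe wall loss.
    The dB limits here are illustrative only.
    """
    for cls, limit in enumerate(limits_db):
        if transmission_drop_db < limit:
            return cls
    return len(limits_db)   # drop beyond the last limit: worst class
```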

Guided wave pipeline inspection

Screening tools for fast assessment of large parts of installations have a growing inspection potential: instead of spot checks, plant users demand complete 100% inspection coverage of their installations. Where conventional ultrasonic techniques, based on bulk wave propagation, have a limited range of up to one metre, Lamb waves have the potential of propagating over much longer distances. In a confined geometry such as a pipe, guided waves build up which can travel over tens of metres. As a screening tool, this technique provides on-line information on long lengths of pipework. Guided waves travel across straight stretches of pipe, bends, supports, T-joints, etc., but cannot pass flange joints, end pieces, etc. Ring transducers have been developed [7] which can generate waves in a specific mode, optimal in range and sensitivity. An extremely long inspection range is achieved for screening of on- and offshore pipework, detection of corrosion under insulation (without removing the lagging other than for application of the probes), road crossings and other hidden penetrations, lined pipework, etc. Since very low frequencies are applied, the defect sensitivity is limited to larger areas of (corrosion) wall loss. Welds cause reflection signals at regular distances, providing a reference for sensitivity settings. Internal features of the weld, such as weld root erosion, may be discriminated in the reflection signal by advanced signal processing techniques. In a similar way, guided wave inspection can discriminate between corroded and unaffected pipe at support locations. The full potential of the technique will become evident when it is applied more widely.
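Since the girth welds lie at roughly regular intervals, their expected echo arrival times can be predicted from the weld spacing and the group velocity of the chosen wave mode; echoes that do not fit this pattern are candidates for corrosion or other features. A minimal sketch follows; the 3200 m/s group velocity is only an illustrative, mode- and frequency-dependent value.

```python
def expected_weld_echoes(weld_spacing_m, n_welds, group_velocity_m_s=3200.0):
    """Predicted two-way arrival times (in milliseconds) of the reflection
    signals from girth welds at regular spacing along a pipe.

    The group velocity depends on wave mode and frequency; 3200 m/s is
    an illustrative value only.
    """
    return [2.0 * i * weld_spacing_m / group_velocity_m_s * 1e3
            for i in range(1, n_welds + 1)]

# Illustrative: welds every 12 m, first three expected echoes
echo_times_ms = expected_weld_echoes(12.0, 3)
```

Comparing the recorded signal against this expected pattern is one way the regular weld echoes serve as a built-in sensitivity and distance reference.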

Advanced ultrasonic techniques show very attractive merits for in-service inspection of industrial process plants. Most known degradation phenomena may be detected, localised and documented using on-stream techniques. Inspection programmes can be optimised to concentrate the effort on high risk (parts of) equipment. Critical components can be monitored, and defect growth can be followed in time. Reliable operational service of equipment can thus be extended until the safety limits are reached. The result of an optimised inspection approach is reduced downtime and maximum availability of installations. Substantial savings are obtained in inspection expenses and, secondarily, much larger savings may be obtained by cutting operational costs for unneeded maintenance shutdowns.
