NATIONAL DIPLOMA IN
MECHANICAL ENGINEERING TECHNOLOGY
ENGINEERING MEASUREMENTS
(THEORY)
Week 1
Week 2
1.2 List the sub-divisions of standards of length
Week 3
1.3 Discuss the sub-divisions listed in 1.2 above
Week 4
2.1 Describe the types of errors commonly found in engineering measurements
2.2 Explain sources of error in measurements such as equipment errors, operational interference,
and installation.
Week 5
2.3 Explain means of over-coming errors mentioned in 2.1 above.
2.4 Describe a drunken thread.
Week 6
3.1(a) Explain the principle of construction and operation of the following: (a) dynamometer, (b)
Bourdon tube manometers.
Week 7
3.1(b) Explain the principle of construction and operation of thermometers, pyrometers,
thermocouples, etc.
Week 8
3.2 State the precautions to be observed when using the measuring instruments in 3.1
3.3 Differentiate between direct measurement and measurement by comparison
Week 9
4.0 Calibration of measuring instruments
4.1 Concept of calibration
4.2 Static calibration.
Week 10
4.3 Precautions to be observed during calibration of measuring instruments
4.6.8 Recess gauges
4.6.9 Step gauges
Week 11
5.1 Strain gauges.
Week 12
Week 13
5.2.1 Cross sensitivity
5.2.2 Effects of temperature and humidity on strain gauges.
Week 14
5.4 Mounting of gauges
Week 15
5.5.1 Static calibration
5.5.2 Dynamic calibration
5.5.3 Measurement of strains
1.1.1 MASS
The SI unit of mass is the kilogram, unit symbol kg. The kilogram is one of the seven base units
in the SI and is defined:
The kilogram is the unit of mass; it is equal to the mass of the international prototype of the
kilogram, a platinum-iridium cylinder kept at Sèvres, in France. A frequently used sub-multiple
unit is the gram, symbol g:
1 g = 10⁻³ kg, 1 kg = 1000 g
The tonne (the metric ton) is equal to 1000 kg, i.e. 1 tonne = 1000 kg, and it is also worth
remembering that:
1 pound, 1 lb = 0.4536 kg or 453.6 g
So, for example,
2240 lb = 1 ton = 2240 × 0.4536 = 1016 kg
Hence,
1 ton = 1.016 tonne.
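As a quick check, the conversions above can be reproduced in a few lines of Python (an illustrative sketch only; the factor 0.4536 kg/lb is the rounded value quoted above):

LB_TO_KG = 0.4536            # rounded conversion factor quoted above

pounds = 2240                # one (long) ton expressed in pounds
kilograms = pounds * LB_TO_KG
tonnes = kilograms / 1000.0  # 1 tonne = 1000 kg

print(f"{pounds} lb = {kilograms:.0f} kg = {tonnes:.3f} tonne")
# prints: 2240 lb = 1016 kg = 1.016 tonne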
The rigid six-inch rule is a common measuring tool found in the machine shop. The rule is a strip
of metal graduated in inches and fractions of an inch to give actual measurements.
When tolerances of fractional dimensions are required, the steel rule is used. The most
commonly used steel rule is the 6" rule. Although rules come in 6-inch increments (Example: 6",
12", 18", 24", and 36" lengths), the 6-inch rule is the most popular because it fits into the apron
pocket easily. It also comes in various widths and thicknesses to meet varying requirements as
will be seen in the slide series.
The terms "rules" and "scales" are used in different ways by different workers, sometimes
causing confusion. While some manufacturing workers do not differentiate between these
terms, others use the term "rule" when referring to a measurement tool which provides actual
linear measurements. In other words one inch on the rule represents exactly one inch of linear
distance. On the other hand, to these workers, a scale is a measurement tool in which an actual
measured linear distance represents another imaginary (smaller or larger) linear distance. For
example, on an architect's scale (Figure 2), one inch of measured distance may represent one
foot of actual linear distance. When this principle is in operation, a "key" should indicate the
ratio of measured distance to actual distance or "scale" that is being used.
There are four basic divisions that are found on a fractional inch rule. These are 1/64, 1/32, 1/16
and 1/8 of an inch.
Let's look at each graduation a little closer. A 1/64" graduated scale means that in a 1-inch length,
there are 64 lines dividing the 1-inch length (Figure 1.2).
These are the smallest graduations on the rule, therefore making the accuracy of a steel rule
1/64". This is sometimes argued by some of the metal workers who say they can measure to
within ±0.003 in. with the rule. They are right, for they have worked with it a long time and have
become masters at reading the graduations. However, the rule is only intended to measure to
1/64" accuracy, and other instruments are used to measure to closer tolerances.
The 1/16" scale divides a 1-inch length into 16 divisions and is a total of 4/64 or 2/32
graduations (Figure 1.4).
The 1/8" scale divides a 1-inch length into 8 equal divisions and is a total of 8/64, 4/32, or
2/16 graduations (Figure1.5).
As you can see, a combination of two or more of the various graduations will make up a larger
graduation. The same holds true with graduations larger than 1/8". Any combination of the four
different sized graduations will give a reading up to the 1-inch length. Figure 7 is a decimal
equivalent chart that shows all the various fractions possible in one inch.
As can be seen on the chart, 3/8" is only given as 3/8". It is not 24/64", 6/16", or 12/32". Any of
these fractions does equal 3/8", but it is reduced to its lowest terms. Another example would be
7/16". This could be given as 14/32" or 28/64", but these are not in their lowest terms.
Also, the 1/2 division, the 3/4 division and 1-inch division are not read as 8ths, 64ths, 32nds, or
16ths even though they are made from these combinations. They are read as 1/2", 3/4", or 1 inch.
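If you want to check such reductions yourself, Python's standard fractions module reduces a fraction to its lowest terms automatically (a small illustrative sketch):

from fractions import Fraction

# Each graduation expressed as a fraction of an inch reduces to lowest terms.
print(Fraction(24, 64))   # 3/8
print(Fraction(14, 32))   # 7/16
print(Fraction(32, 64))   # 1/2 - read as one half, not as 32/64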
Now that you have this information under your belt, let's look at how a rule should be used.
Fig 1.1.4.7
When measuring a length, the rule must be kept in a straight line parallel to the centerline of the
work. If it is tilted, the measurement will be longer than the actual part. See Figure 1.9
Fig 1.1.4.9: Correct and incorrect alignment of the rule
One other important factor in using the rule is to be aware of parallax. This is an observation
error that arises from the position of the observer's eye relative to the rule and the part being measured.
Fig 1.1.4.10: Incorrect (rule lying flat) and correct (rule standing on edge)
The figure on the left is an incorrect way of measuring, and parallax is greatly increased because
of the thickness of the rule. The graduations do not come in direct contact with the work. The
arrows pointing to the right and left will cause parallax, and even though the arrow pointing
straight up is the correct way to view the rule, there is a chance for error in reading due to the
thickness of the rule.
The figure on the right is used with the rule on edge. As can be seen, the graduation comes in
contact with the work which is the correct way of measuring. Although the arrows pointing to
the right and left will cause an improper reading, it will not be as great an error as when used like
the figure on the left. The proper way is to view the graduation straight up as the center arrow.
2.1.1 Using the vernier calipers and micrometer screw gauge to measure length
The precision of length measurements may be increased by using a device that uses a sliding
vernier scale. Two such instruments based on a vernier scale, which you will use in the
workshop to measure the lengths of objects, are the vernier callipers and the micrometer screw
gauge. These instruments have a main scale (in millimetres) and a sliding or rotating vernier
scale. In the figure below, the vernier scale is divided into 10 equal divisions and thus the
least count of the instrument is 0.1 mm. Both the main scale and the vernier scale readings are
taken into account while making a measurement. The main scale reading is the first reading on
the main scale immediately to the left of the zero of the vernier scale (3 mm), while the vernier
scale reading is the mark on the vernier scale which exactly coincides with a mark on the main
scale (0.7 mm). The reading is therefore 3.7 mm.
Figure 2.1.1.1: The reading here is 3.7 mm.
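The reading rule just described (main scale reading plus the coinciding vernier division times the least count) can be written as a tiny helper function. This is only a sketch; the function name and arguments are illustrative, not part of any standard library:

def vernier_reading(main_scale_mm, coinciding_division, least_count_mm=0.1):
    """Main scale reading plus (vernier division that lines up) x least count."""
    return main_scale_mm + coinciding_division * least_count_mm

# Example from the figure: main scale 3 mm, 7th vernier line coincides, least count 0.1 mm
print(f"{vernier_reading(3, 7):.1f} mm")   # 3.7 mm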
The vernier calipers found in the workshop incorporates a main scale and a sliding vernier
scale which allows readings to the nearest 0.02 mm. This instrument may be used to measure
outer dimensions of objects (using the main jaws), inside dimensions (using the smaller jaws at
the top), and depths (using the stem).
Figure 2.1.1.3: The vernier calipers
To measure outer dimensions of an object, the object is placed between the jaws, which are then
moved together until they secure the object. The screw clamp may then be tightened to ensure
that the reading does not change while the scale is being read.
The first significant figures are read immediately to the left of the zero of the vernier scale and
the remaining digits are taken as the vernier scale division that lines up with any main scale
division.
Some examples:
Note that the important region of the vernier scale is enlarged in the upper right hand corner of
each figure.
Figure 2.1.1.4: The reading is 37.46 mm.
In figure 4 above, the first significant figures are taken as the main scale reading to the left of the
vernier zero, i.e. 37 mm. The remaining two digits are taken from the vernier scale reading that
lines up with any main scale reading, i.e. 46 on the vernier scale. Thus the reading is 37.46 mm.
In figure 5 above, the first significant figures are taken as the main scale reading to the left of the
vernier zero, i.e. 34 mm. The remaining two digits are taken from the vernier scale reading that
lines up with any main scale reading, i.e. 60 on the vernier scale. Note that the zero must be
included because the scale can differentiate between fiftieths of a millimetre. Therefore the
reading is 34.60 mm.
Figure 2.1.1.6: The reading is 40.00 mm.
In figure 6 the zero and the ten on the vernier scale both line up with main scale readings,
therefore the reading is 40.00 mm.
Answers:
Figure 2.1.1.7: 30.88 mm
Figure 2.1.1.8: 8.10 mm
Figure 2.1.1.9: 121.68 mm
WEEK 3 ENGINEERING MEASUREMENTS MEC 203
In order to measure an object, the object is placed between the jaws and the thimble is rotated
using the ratchet until the object is secured. Note that the ratchet knob must be used to secure the
object firmly between the jaws, otherwise the instrument could be damaged or give an
inconsistent reading. The manufacturer recommends 3 clicks of the ratchet before taking the
reading. The lock may be used to ensure that the thimble does not rotate while you take the
reading.
The first significant figure is taken from the last graduation showing on the sleeve directly to the
left of the revolving thimble. Note that an additional half scale division (0.5 mm) must be
included if the mark below the main scale is visible between the thimble and the main scale
division on the sleeve. The remaining two significant figures (hundredths of a millimetre) are
taken directly from the thimble opposite the main scale.
In figure 11 the last graduation visible to the left of the thimble is 7 mm and the thimble lines up
with the main scale at 38 hundredths of a millimetre (0.38 mm); therefore the reading is 7.38
mm.
Figure 1.3.3: The reading is 7.72 mm.
In figure 12 the last graduation visible to the left of the thimble is 7.5 mm; therefore the reading
is 7.5 mm plus the thimble reading of 0.22 mm, giving 7.72 mm.
In figure 13 the main scale reading is 3 mm while the reading on the drum is 0.46 mm; therefore,
the reading is 3.46 mm.
In figure 14 the 0.5 mm division is visible below the main scale; therefore the reading is 3.5 mm
+ 0.06 mm = 3.56 mm.
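The same procedure (sleeve reading, plus 0.5 mm if the half-millimetre mark is visible, plus the thimble reading in hundredths of a millimetre) can be expressed as a short sketch; the function is illustrative only:

def micrometer_reading(sleeve_mm, half_mm_visible, thimble_divisions):
    """Sleeve (mm) + optional 0.5 mm + thimble divisions (each worth 0.01 mm)."""
    reading = sleeve_mm + thimble_divisions * 0.01
    if half_mm_visible:
        reading += 0.5
    return reading

# Examples matching the figures above:
print(f"{micrometer_reading(7, False, 38):.2f} mm")   # 7.38 mm
print(f"{micrometer_reading(3, True, 6):.2f} mm")     # 3.56 mm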
Whenever you use a vernier calipers or a micrometer screw gauge you must always take a
zero reading, i.e. a reading with the instrument closed. This is because when you close your
calipers, you will see that very often (not always) it does not read zero. Only then open the jaws
and place the object to be measured firmly between the jaws and take the open reading.
Your actual measurement will then be the difference between your open reading and your
zero reading.
Let us say you take a reading with an object between the jaws of a vernier calipers and you see
the following:
Say that you decide that the best estimate of the reading l1 is 37.46 mm.
Using a triangular probability density function, you might decide that you are 100% sure that the
reading is not 37.42 mm and 100% sure that the reading is not 37.50 mm.
Then u(l1) = 0.04/√6 mm = 0.0163 mm.
When you remove the object and read the vernier calipers with the jaws closed, you might decide
that the best estimate of the "closed" reading is l0 = 0.04 mm with standard uncertainty u(l0) =
0.0204 mm.
What should you then record as the best estimate of the length of the object you are measuring?
Answers:
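One way to work this out (a sketch, assuming the open and zero readings are independent so their standard uncertainties add in quadrature):

import math

l1, u_l1 = 37.46, 0.0163   # open reading and its standard uncertainty (mm)
l0, u_l0 = 0.04, 0.0204    # zero (closed) reading and its standard uncertainty (mm)

length = l1 - l0                           # best estimate = open reading - zero reading
u_length = math.sqrt(u_l1**2 + u_l0**2)    # combined standard uncertainty

print(f"l = {length:.2f} mm with u(l) = {u_length:.3f} mm")
# l = 37.42 mm with u(l) = 0.026 mm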
The purpose of any measurement is to describe some physical property of an object or a system
quantitatively, viz., its length, temperature, pressure, etc. Every measurement of such a quantity
has a certain amount of uncertainty.
The term error is defined as the deviation of a measured variable from the actual value of the
quantity being measured.
Measurement errors generally fall into two categories: random or systematic errors. However
even if we know about the types of error we still need to know why those errors exist. We can
break these into two basic categories: Instrument errors and Operator errors.
Random errors are ones that are easier to deal with because they cause the measurements to
fluctuate around the true value. If we are trying to measure some parameter X, greater random
errors cause a greater dispersion of values, but the mean of X still represents the true value for
that instrument.
Fig 2.1.1.1: random error
A systematic error can be more tricky to track down and is often unknown. This error is often
called a bias in the measurement. In chemistry a teacher tells the student to read the volume of
liquid in a graduated cylinder by looking at the meniscus. A student may make an error by
reading the volume by looking at the liquid level near the edge of the glass. Thus this student
will always be off by a certain amount for every reading he makes. This is a systematic error.
Instruments often have both systematic and random errors.
Fig 2.1.1.2: systematic error
Now that we know the types of measurement errors that can occur, what factors lead to errors
when we take measurements? We can separate these into two basic categories: instrument errors
and operator errors. Human errors are not always blunders, however, since some mistakes are a
result of inexperience in trying to make a particular measurement or trying to investigate a
particular problem.
When you purchase an instrument (if it is of any real value) it comes with a long list of specs that
gives a user an idea of the possible errors associated with that instrument. In labs as a faculty you
may be using equipment that is not new, so you should help students be aware of the errors
associated with the instrument. If the company that made the instrument still exists you can
contact them to find out this information as well. Looking at these carefully can help avoid poor
measurements and poor usage of the instrument. When students hand in labs, they can calculate
and represent the errors associated with their data, which is important for every scientist or future
scientist. Some basic information that usually comes with an instrument is:
2.2.2 Calibration
Other instrument errors include calibration errors. All instruments need to be calibrated.
Instruments are calibrated according to theory, standards and other instruments that also have
errors. Calibration ideally should be performed against an instrument that is very accurate, but
this can be costly, so it does not always happen.
All instruments have a finite lifetime, even when calibrated frequently. In class you may have an
opportunity to show students the difference in measurements between an older and new
instrument. Electronic instruments drift over time and devices that depend on moving parts often
experience hysteresis. Hysteresis can be a complex concept for kids but it is easily demonstrated
by making an analogy to Slinkys or bed springs. You can also show the students a new deck of
cards vs. an older deck of cards. You can shuffle the new cards a couple of times and the cards
will quite obviously look new and flat. However, the old cards, which have been shuffled and
held in people's hands many times, develop a curve to them, indicating that the structural integrity of the
cardboard has changed from its original form.
2.2.4 Operator Errors
These errors generally lead to systematic errors, sometimes cannot be traced, and often can
create quite large errors. Through experimentation and observation, scientists learn more all the
time about how to minimize the human factors that cause error. Operator errors are not just
reading a dial or display wrong (although that happens) but can be much more complicated. As
faculty it is important to keep these in mind so that in a lab or field situation students can obtain
meaningful data. Making students aware of operator errors is definitely more of a preparatory
lesson. Let's explore some of these topics.
Data often has errors because the instrument making the measurements was not placed in an
optimal location for making this measurement. A good example of this is again associated with
measurements of temperature. Any temperature measurement will be inaccurate if the sensor is directly
exposed to the sun or is not properly ventilated. In addition, a temperature device placed too close
to a building will also be erroneous because it receives heat from the building through
conduction and radiation.
A scientist must always ask himself/herself questions like: What is being measured? How often
does it need to be measured? How accurate do I need to be? What conditions am I going to make
the measurements in? Knowing the answer to these questions can help the scientist pick the
appropriate instrument for the situation. An example of this is errors that used to be quite
common in trying to measure temperature from an aircraft. Thermometers that were unprotected
got wet when flying through clouds thus making the temperature data useless. The device that
was used was not appropriate for that experiment, whereas it might have been fine for many
other situations. Another example would be getting an electronic temperature device that can
report temperature measurements every 5 seconds when one really only is trying to record the
daily maximum and minimum temperature. This is a case where the instrument was superfluous
(and probably too expensive) for the type of measurement that needed to be made.
Appropriateness can also relate to the spatial and temporal frequency in which measurements are
made. Students may look at the global and average temperature and take it for truth, because we
have good temperature measurement devices. They may not be aware that the global average
may not be made with the same density of measurements in sparsely populated areas and poorer
nations. Sampling issues can be a big source of error and if you are teaching a statistics course
you may want to delve into this more deeply. Provided your instruments are good, the more data
the better. Studying events that happen infrequently or unpredictably can also affect the certainty
of your results. However, understanding what you are trying to measure can help you collect no
more data than is necessary. For example, sea surface temperatures in the middle of the ocean
change very slowly, on the order of two weeks. It is therefore unnecessary to record temperature
changes every half an hour or an hour.
Whether it's a screw thread or digital micrometer, the instrument's level of precision depends on
two factors: the inherent accuracy of the reference (the screw thread or the digital scale) and
process errors.
With a screw micrometer, accuracy relies on the lead of the screw built into the micrometer
barrel. Error in this type of micrometer tends to be cumulative and increases with the length of
the spindle travel. This is one reason micrometers come in 1-inch (25 mm) measuring ranges.
Apart from the difficulty of making long, fine threads, the error generated over the longer lengths
may not meet performance requirements.
One way to improve the performance of the measurement is to tune the micrometer to the range
where it is most likely to be used. For example, if a 0-to-1-inch (0 to 25 mm) micrometer is to be
used on parts toward the largest size, the micrometer could be calibrated and set up so that the
optimum accuracy is at some point in its travel other than at its starting point. You could choose
the middle to balance any errors at the end points or elsewhere to maximize performance at any
particular point of travel.
With electronic micrometers, the thread usually drives a sensing head over a scale or uses a
rotary encoder as the displacement indicator. Both can induce errors, but the thread of the barrel
remains the largest source of error. An electronic micrometer can remember and correct for such
errors, and, in the end, can provide better performance than the interpreted mechanical
micrometer.
The process for checking the performance of a micrometer is similar to the process for checking
other comparative or scale-based instruments. Gage blocks of known sizes are measured, and
deviations from expected values are plotted. Usually the gage blocks are chosen so that the
spindle travels for a full or half turn of the screw. A rotation of the screw can be analyzed by
taking small increments of measurements around the peaks discovered on the first pass. These
increments--maybe ten steps in one revolution--may reveal larger errors or show patterns that
were machined into the screw threads.
The other significant cause of errors can be found in the parallelism of the anvils. The precision
method for inspecting the condition of the anvils is with an optical flat. Using a monochromatic light
source, it is generally acceptable to allow two visible bands when assessing individual anvil
flatness. For inspecting parallelism, six bands may be observed, the combined total of both sides.
The applied measuring force of the sensing anvil on the part and the reference anvil is the other
source of process measuring error. The friction of ratchet drive thimbles reduces the deflection of
the micrometer frame, but the condition still exists as a source of error. With about 2 pounds of
measuring force, typical frame deflection is roughly 50 micro inches, although this is apt to
increase on larger micrometers where the rigidity of the frame decreases.
Other sources of error can also sneak in. Temperature, dirt and the means by which the operator
aligns the gage to the part affect any micrometer's overall performance.
3.1.1 Dynamometer
A dynamometer or "dyno" for short, is a machine used to measure torque and rotational speed
(rpm) from which power produced by an engine, motor or other rotating prime mover can be
calculated.
A dynamometer can also be used to determine the torque and power required to operate a driven
machine such as a pump. In that case, a motoring or driving dynamometer is used. A
dynamometer that is designed to be driven is called an absorption or passive dynamometer. A
dynamometer that can either drive or absorb is called a universal or active dynamometer.
The dynamometer must absorb the power developed by the prime mover. The power absorbed
by the dynamometer must generally be dissipated to the ambient air or transferred to cooling
water. Regenerative dynamometers transfer the power to electrical power lines.
Dynamometers can be equipped with a variety of control systems. If the dynamometer has a
torque regulator, it operates at a set torque while the prime mover operates at whatever speed it
can attain while developing the torque that has been set. If the dynamometer has a speed
regulator, it develops whatever torque is necessary to force the prime mover to operate at the set
speed.
A motoring dynamometer acts as a motor that drives the equipment under test. It must be able to
drive the equipment at any speed and develop any level of torque that the test requires.
Only torque and speed can be measured; power must be calculated from the torque and speed
figures according to the formula:
P = 2πNT / 60
where:
P is the power in watts (W),
T is the torque in newton-metres (N·m), and
N is the rotational speed in revolutions per minute (rpm).
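As a quick illustration, the formula can be evaluated directly (a minimal sketch; the torque and speed figures are made up for illustration):

import math

def shaft_power_watts(torque_nm, speed_rpm):
    """P = 2*pi*N*T/60, with T in newton-metres and N in rpm, giving power in watts."""
    return 2 * math.pi * speed_rpm * torque_nm / 60

# e.g. 400 N·m of torque measured at 3000 rpm
power_w = shaft_power_watts(400, 3000)
print(f"{power_w / 1000:.1f} kW")   # about 125.7 kW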
A dynamometer consists of an absorption (or absorber/driver) unit, and usually includes a means
for measuring torque and rotational speed. An absorption unit consists of some type of rotor in a
housing. The rotor is coupled to the engine or other equipment under test and is free to rotate at
whatever speed is required for the test. Some means is provided to develop a braking torque
between the dynamometer's rotor and housing. The means for developing torque can be frictional,
hydraulic, electromagnetic, etc., according to the type of absorption/driver unit.
One means for measuring torque is to mount the dynamometer housing so that it is free to turn
except that it is restrained by a torque arm. The housing can be made free to rotate by using
trunnions connected to each end of the housing to support the dyno in pedestal mounted trunnion
bearings. The torque arm is connected to the dyno housing and a weighing scales is positioned so
that it measures the force exerted by the dyno housing in attempting to rotate. The torque is the
force indicated by the scales multiplied by the length of the torque arm measured from the center
of the dynamometer. A load cell transducer can be substituted for the scales in order to provide
an electrical signal that is proportional to torque.
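A short sketch of the torque-arm calculation described above (the force and arm length are illustrative values only):

def torque_from_arm(scale_force_n, arm_length_m):
    """Torque = force read on the scales x torque-arm length from the dyno centre."""
    return scale_force_n * arm_length_m

# e.g. the scales read 500 N at the end of a 0.8 m torque arm
print(f"{torque_from_arm(500, 0.8):.0f} N·m")   # 400 N·m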
Another means for measuring torque is to connect the engine to the dynamometer through a
torque sensing coupling or torque transducer. A torque transducer provides an electrical signal
that is proportional to torque.
With electrical absorption units, it is possible to determine torque by measuring the current
drawn (or generated) by the absorber/driver. This is generally a less accurate method and not
much practiced in modern times, but it may be adequate for some purposes.
A wide variety of tachometers are available for measuring speed. Some types can provide an
electrical signal that is proportional to speed.
When torque and speed signals are available, test data can be transmitted to a data acquisition
system rather than being recorded manually. Speed and torque signals can also be recorded by a
chart recorder or plotter.
Types of dynamometers
In addition to classification as absorption, motoring or universal as described above,
dynamometers can be classified in other ways.
A dyno that can measure torque and power delivered by the power train of a vehicle directly
from the drive wheel or wheels (without removing the engine from the frame of the vehicle), is
known as a chassis dyno.
Dynamometers can also be classified by the type of absorption unit or absorber/driver that they
use. Some units that are capable of absorption only can be combined with a motor to construct an
absorber/driver or universal dynamometer. The following types of absorption/driver units have
been used:
Eddy current (EC) dynamometers are currently the most common absorbers used in modern chassis dynos. The
EC absorbers provide the quickest load change rate for rapid load settling. Some are air cooled,
but many require external water cooling systems. Eddy current dynamometers require the ferrous
core, or shaft, to rotate in the magnetic field to produce torque. Due to this, stalling a motor with
an eddy current dyno is usually not possible.
Powder Dynamometer
A powder dynamometer is similar to an eddy current dynamometer, but a fine magnetic powder
is placed in the air gap between the rotor and the coil. The resulting flux lines create "chains" of
metal particulate which are constantly built and broken apart during rotation creating great
torque. Powder dynamometers are typically limited to lower RPM due to heat dissipation issues.
Hysteresis Dynamometers
Hysteresis dynamometers, such as Magtrol Inc's HD series, use a proprietary steel rotor that is
moved through flux lines generated between magnetic pole pieces. This design allows for full
torque to be produced at zero speed, as well as at full speed. Heat dissipation is assisted by
forced air. Hysteresis dynamometers are one of the most efficient technologies in small (200 hp
(150 kW) and less) dynamometers.
In engine testing, universal dynamometers can not only absorb the power of the engine but also
drive the engine for measuring friction, pumping losses and other factors.
Electric motor/generator dynamometers are generally more costly and complex than other types
of dynamometers.
Fan Brake
A fan is used to blow air to provide the engine load. The load can be changed by changing the
gearing or the fan, or the test may simply measure the maximum rpm attained.
Hydraulic brake
The hydraulic brake system consists of a hydraulic pump (usually a gear type pump), a fluid
reservoir and piping between the two parts. Inserted in the piping is an adjustable valve and
between the pump and the valve is a gauge or other means of measuring hydraulic pressure.
Usually, the fluid used was hydraulic oil, but recent synthetic multi-grade oils may be a better
choice. In simplest terms, the engine is brought up to the desired rpm and the valve is
incrementally closed; as the pump's outlet is restricted, the load increases and the throttle is
simply opened until the desired throttle opening is reached. Unlike most other systems, power is
calculated by factoring flow volume (calculated from pump design specs), hydraulic pressure
and rpm. Brake HP, whether figured with pressure, volume and rpm or with a different load cell
type brake dyno, should produce essentially identical power figures. Hydraulic dynos are
renowned for having the absolutely quickest load change ability, just slightly surpassing the eddy
current absorbers. The downside is that they require large quantities of hot oil under high
pressure and an oil reservoir.
The water brake absorber is sometimes mistakenly called a "hydraulic dynamometer". Water
brake absorbers are relatively common, having been manufactured for many years and noted for
their high power capability, small package, light weight, and relatively low manufacturing cost
as compared to other, quicker reacting "power absorber" types. Their drawbacks are that they can
take a relatively long period of time to "stabilize" their load amount and the fact that they require
a constant supply of water to the "water brake housing" for cooling. In many parts of the country,
environmental regulations now prohibit "flow through" water and large water tanks must be
installed to prevent contaminated water from entering the environment.
The schematic shows the most common type of water brake, the variable level type. Water is
added until the engine is held at a steady rpm against the load. Water is then kept at that level
and replaced by constant draining and refilling, which is needed to carry away the heat created
by absorbing the horsepower. The housing attempts to rotate in response to the torque produced
but is restrained by the scale or torque metering cell which measures the torque.
This schematic shows a water brake which is actually a fluid coupling with the housing
restrained from rotating. It is very similar to a water pump with no outlet.
Dynamometers are useful in the development and refinement of modern day engine technology.
The concept is to use a dyno to measure and compare power transfer at different points on a
vehicle, thus allowing the engine or drivetrain to be modified to get more efficient power
transfer. For example, if an engine dyno shows that a particular engine achieves 400 N·m (300
lbf·ft) of torque, and a chassis dyno shows only 350 N·m (260 lbf·ft), one would know to look
to the drivetrain for the major improvements. Dynamometers are typically very expensive pieces
of equipment, reserved for certain fields that rely on them for a particular purpose.
A Brake dynamometer applies variable load on the engine and measures the engine's ability to
move or hold the rpm as related to the "braking force" applied. It is usually connected to a
computer which records the applied braking torque and calculates the power output of the engine
based on information from a "load cell" or "strain gauge" and rpm (speed sensor).
An Inertia dynamometer provides a fixed inertial mass load and calculates the power required to
accelerate that fixed, known mass and uses a computer to record rpm and acc. rate to calculate
torque.
The engine is generally tested from somewhat above idle to its maximum rpm and the output is
measured and plotted on a graph.
1. Steady State (only on brake dynamometers), where the engine is held at a specified rpm
(or series of usually sequential rpms) for 3-5 seconds by the variable brake loading as
provided by the PAU (power absorber unit).
2. Sweep Test (inertia or brake dynamometers), where the engine is tested under a load
(inertia or brake loading), but allowed to "sweep" up in rpm in a continuous fashion, from
a specified lower "starting" rpm to a specified "end" rpm.
1. Inertia Sweep: An inertia dyno system that provides a fixed inertial mass flywheel and
computes the power required to accelerate the flywheel (load) from the starting to the
ending rpm. The actual rotational mass of the engine or engine and vehicle in the case of
a chassis dyno is not known and the variability of even tire mass will skew power results.
The inertia value of the flywheel is "fixed", so low power engines are under load for a
much longer time and internal engine temperatures are usually too high by the end of the
test, skewing optimal "dyno" tuning settings away from the outside world's optimal
tuning settings. Conversely, high powered engines commonly complete a "4th
gear sweep" test in less than 10 seconds, which is not a reliable load condition as
compared to operation in the outside world. By not providing enough time under load,
internal combustion chamber temps are unrealistically low and power readings,
especially past the power peak, are skewed low.
2. Loaded Sweep Tests (brake dyno type)consist of 2 types:
1. Simple fixed Load Sweep Test: A fixed load, of somewhat less than the engine's
output, is applied during the test. The engine is allowed to accelerate from its
starting rpm to its ending rpm, varying in its acceleration rate, depending on
power output at any particular rpm point. Power is calculated as torque × rpm /
5252, plus the power required to accelerate the dyno and engine's/vehicle's rotating
mass.
2. Controlled Acceleration Sweep Test: Similar in basic usage as the above Simple
fixed Load Sweep Test, but with the addition of active load control that targets a
specific rate of acceleration. Commonly, 20fps/ps is used.
The advantage of controlled acc. rate is that the acc. rate used is relatively common from low
power to high power engines, and unnatural overextension and contraction of test duration is
avoided, providing more accurate and repeatable test and tuning results.
There is still the remaining issue of potential power reading error due to the variable engine /
dyno / vehicle's total rotating mass. Most modern computer controlled brake dyno systems are
capable of deriving that "inertial mass" value to eliminate the error.
Interestingly, a "sweep test" will always be suspect, as many "sweep" users ignore the inertial
mass factor and prefer to use a blanket "factor" on every test, on every engine or vehicle. Inertia
dyno systems aren't capable of deriving "inertial mass" and are forced to use the same inertial
mass.
Using Steady State testing eliminates the inertial mass error, as there is no acceleration during a
test.
3.1.2 Pressure measurement
The construction of a Bourdon tube gauge (construction elements are made of brass)
Many techniques have been developed for the measurement of pressure and vacuum. Instruments
used to measure pressure are called pressure gauges or vacuum gauges.
A vacuum gauge is used to measure the pressure in a vacuum --- which is further divided into
two subcategories: high and low vacuum (and sometimes ultra-high vacuum). The applicable
pressure ranges of many of the techniques used to measure vacuums overlap. Hence, by
combining several different types of gauge, it is possible to measure system pressure
continuously from 10 mbar down to 10⁻¹¹ mbar.
Although pressure is an absolute quantity, everyday pressure measurements, such as for tire
pressure, are usually made relative to ambient air pressure. In other cases measurements are
made relative to a vacuum or to some other ad hoc reference. When distinguishing between these
zero references, the following terms are used: absolute pressure, gauge pressure and differential pressure.
Atmospheric pressure is typically about 100 kPa at sea level, but is variable with altitude and
weather. If the absolute pressure of a fluid stays constant, the gauge pressure of the same fluid
will vary as atmospheric pressure changes. For example, when a car drives up a mountain, the
tire pressure goes up. Some standard values of atmospheric pressure such as 101.325 kPa or 100
kPa have been defined, and some instruments use one of these standard values as a constant zero
reference instead of the actual variable ambient air pressure. This impairs the accuracy of these
instruments, especially when used at high altitudes.
Units
Pressure units in common use are the pascal (Pa), bar (bar), technical atmosphere (at),
atmosphere (atm), torr (Torr) and pound-force per square inch (psi).
The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N·m⁻² or
kg·m⁻¹·s⁻²). This special name for the unit was added in 1971; before that, pressure in SI was expressed
in units such as N/m². When indicated, the zero reference is stated in parentheses following the
unit, for example 101 kPa (abs). The pound per square inch (psi) is still in widespread use in the
US and Canada, notably for cars. A letter is often appended to the psi unit to indicate the
measurement's zero reference: psia for absolute, psig for gauge, psid for differential, although
this practice is discouraged by NIST.
Because pressure was once commonly measured by its ability to displace a column of liquid in a
manometer, pressures are often expressed as a depth of a particular fluid (e.g. inches of water).
The most common choices are mercury (Hg) and water; water is nontoxic and readily available,
while mercury's density allows for a shorter column (and so a smaller manometer) to measure a
given pressure.
Fluid density and local gravity can vary from one reading to another depending on local factors,
so the height of a fluid column does not define pressure precisely. When 'millimetres of mercury'
or 'inches of mercury' are quoted today, these units are not based on a physical column of
mercury; rather, they have been given precise definitions that can be expressed in terms of SI
units. The water-based units usually assume one of the older definitions of the kilogram as the
weight of a litre of water.
Although no longer favoured by measurement experts, these manometric units are still
encountered in many fields. Blood pressure is measured in millimetres of mercury in most of the
world, and lung pressures in centimeters of water are still common. Natural gas pipeline
pressures are measured in inches of water, expressed as "WC" (water column). Scuba divers
often use a manometric rule of thumb: the pressure exerted by ten metres depth of water is
approximately equal to one atmosphere. In vacuum systems, the units torr, micrometre of
mercury (micron), and inch of mercury (inHg) are most commonly used. Torr and micron
usually indicate an absolute pressure, while inHg usually indicates a gauge pressure.
Atmospheric pressures are usually stated using kilopascal (kPa), or atmospheres (atm), except in
American meteorology where the hectopascal (hPa) and millibar (mbar) are preferred. In
American and Canadian engineering, stress is often measured in kip. Note that stress is not a true
pressure since it is not scalar. In the cgs system the unit of pressure was the barye (ba), equal to 1
dyn·cm⁻². In the mts system, the unit of pressure was the pieze, equal to 1 sthene per square
metre.
Many other hybrid units are used such as mmHg/cm² or grams-force/cm² (sometimes as kg/cm²
and g/mol2 without properly identifying the force units). Using the names kilogram, gram,
kilogram-force, or gram-force (or their symbols) as a unit of force is forbidden in SI; the unit of
force in SI is the newton (N).
While static gauge pressure is of primary importance to determining net loads on pipe walls,
dynamic pressure is used to measure flow rates and airspeed. Dynamic pressure can be measured
by taking the differential pressure between instruments parallel and perpendicular to the flow.
Pitot-static tubes, for example, perform this measurement on airplanes to determine airspeed. The
presence of the measuring instrument inevitably acts to divert flow and create turbulence, so its
shape is critical to accuracy and the calibration curves are often non-linear.
Applications
• Sphygmomanometer
• Barometer
• Altimeter
• Pitot tube
• MAP sensor
Instruments
Many instruments have been invented to measure pressure, with different advantages and
disadvantages. Pressure range, sensitivity, dynamic response and cost all vary by several orders
of magnitude from one instrument design to the next. The oldest type is the liquid column
manometer invented by Evangelista Torricelli.
3.1.2.3 Hydrostatic
Hydrostatic gauges (such as the mercury column manometer) compare pressure to the
hydrostatic force per unit area at the base of a column of fluid. Hydrostatic gauge measurements
are independent of the type of gas being measured, and can be designed to have a very linear
calibration. They have poor dynamic response.
Piston
Piston-type gauges counterbalance the pressure of a fluid with a solid weight or a spring.
Examples include dead-weight testers used for calibration and tire-pressure gauges.
Liquid column
The difference in fluid height in a liquid column manometer is proportional to the pressure
difference.
Liquid column gauges consist of a vertical column of liquid in a tube whose ends are exposed to
different pressures. The column will rise or fall until its weight is in equilibrium with the
pressure differential between the two ends of the tube. A very simple version is a U-shaped tube
half-full of liquid, one side of which is connected to the region of interest whilst the reference
pressure (which might be the atmospheric pressure or a vacuum) is applied to the other. The
difference in liquid level represents the applied pressure. The pressure exerted by a column of
fluid of height h and density ρ is given by the hydrostatic pressure equation, P = hgρ. Therefore
the pressure difference between the applied pressure Pa and the reference pressure Po in a U-tube
manometer can be found by solving Pa − Po = hgρ. If the fluid being measured is
significantly dense, hydrostatic corrections may have to be made for the height between the
moving surface of the manometer working fluid and the location where the pressure
measurement is desired.
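A sketch of the U-tube calculation using the relation Pa − Po = hgρ (the fluid densities and height used here are examples only):

def manometer_pressure_difference(height_m, density_kg_m3, g=9.81):
    """Pa - Po = h * g * rho, returned in pascals."""
    return height_m * g * density_kg_m3

# 120 mm of mercury (about 13534 kg/m^3) versus 120 mm of water (about 1000 kg/m^3)
print(f"{manometer_pressure_difference(0.120, 13534):.0f} Pa")   # about 15932 Pa
print(f"{manometer_pressure_difference(0.120, 1000):.0f} Pa")    # about 1177 Pa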
Any fluid can be used, but mercury is preferred for its high density (13.534 g/cm³) and low vapor
pressure. For low pressure differences well above the vapor pressure of water, water is a
commonly-used liquid (and "inches of water" is a commonly-used pressure unit). Liquid column
pressure gauges are independent of the type of gas being measured and have a highly linear
calibration. They have poor dynamic response. When measuring vacuum, the working liquid
may evaporate and contaminate the vacuum if its vapor pressure is too high. When measuring
liquid pressure, a loop filled with gas or a light fluid must isolate the liquids to prevent them
from mixing. Simple hydrostatic gauges can measure pressures ranging from a few torr (a few
hundred Pa) to a few atmospheres (approximately 1,000,000 Pa).
A single-limb liquid-column manometer has a larger reservoir instead of one side of the U-tube
and has a scale beside the narrower column. The column may be inclined to further amplify the
liquid movement. Based on use and structure, the following types of manometers are used:
1. Simple Manometer
2. Micro manometer
3. Differential manometer
4. Inverted differential manometer
Fig 3.1.2.2: A McLeod gauge, drained of mercury
A McLeod gauge isolates a sample of gas and compresses it in a modified mercury manometer
until the pressure is a few mmHg. The gas must be well-behaved during its compression (it must
not condense, for example). The technique is slow and unsuited to continual monitoring, but is
capable of good accuracy.
An important variation is the McLeod gauge which isolates a known volume of vacuum and
compresses it to multiply the height variation of the liquid column. The McLeod gauge can
measure vacuums as high as 10−6 Torr (0.1 mPa), which is the lowest direct measurement of
pressure that is possible with current technology. Other vacuum gauges can measure lower
pressures, but only indirectly by measurement of other pressure-controlled properties. These
indirect measurements must be calibrated to SI units via a direct measurement, most commonly a
McLeod gauge.
3.1.2.5 Aneroid
Aneroid gauges are based on a metallic pressure sensing element which flexes elastically under
the effect of a pressure difference across the element. "Aneroid" means "without fluid," and the
term originally distinguished these gauges from the hydrostatic gauges described above.
However, aneroid gauges can be used to measure the pressure of a liquid as well as a gas, and
they are not the only type of gauge that can operate without fluid. For this reason, they are often
called mechanical gauges in modern language. Aneroid gauges are not dependent on the type of
gas being measured, unlike thermal and ionization gauges, and are less likely to contaminate the
system than hydrostatic gauges. The pressure sensing element may be a Bourdon tube, a
diaphragm, a capsule, or a set of bellows, which will change shape in response to the pressure of
the region in question. The deflection of the pressure sensing element may be read by a linkage
connected to a needle, or it may be read by a secondary transducer. The most common secondary
transducers in modern vacuum gauges measure a change in capacitance due to the mechanical
deflection. Gauges that rely on a change in capacitance are often referred to as Baratron gauges.
Bourdon
A Bourdon gauge uses a coiled tube which, as it expands due to a pressure increase, causes a
rotation of an arm connected to the tube.
A combination pressure and vacuum gauge (case and viewing glass removed)
Indicator Side with card and dial Mechanical Side with Bourdon tube
In 1849 the Bourdon tube pressure gauge was patented in France by Eugene Bourdon.
The pressure sensing element is a closed coiled tube connected to the chamber or pipe in which
pressure is to be sensed. As the gauge pressure increases the tube will tend to uncoil, while a
reduced gauge pressure will cause the tube to coil more tightly. This motion is transferred
through a linkage to a gear train connected to an indicating needle. The needle is presented in
front of a card face inscribed with the pressure indications associated with particular needle
deflections. In a barometer, the Bourdon tube is sealed at both ends and the absolute pressure of
the ambient atmosphere is sensed. Differential Bourdon gauges use two Bourdon tubes and a
mechanical linkage that compares the readings.
In the following pictures the transparent cover face has been removed and the mechanism
removed from the case. This particular gauge is a combination vacuum and pressure gauge used
for automotive diagnosis:
• the left side of the face, used for measuring manifold vacuum, is calibrated in centimetres
of mercury on its inner scale and inches of mercury on its outer scale.
• the right portion of the face is used to measure fuel pump pressure and is calibrated in
fractions of 1 kgf/cm² on its inner scale and pounds per square inch on its outer scale.
Mechanical details
Stationary parts:
• A: Receiver block. This joins the inlet pipe to the fixed end of the Bourdon tube (1) and
secures the chassis plate (B). The two holes receive screws that secure the case.
• B: Chassis Plate. The face card is attached to this. It contains bearing holes for the axles.
• C: Secondary Chassis Plate. It supports the outer ends of the axles.
• D: Posts to join and space the two chassis plates.
Moving Parts:
1. Stationary end of Bourdon tube. This communicates with the inlet pipe through the
receiver block.
2. Moving end of Bourdon tube. This end is sealed.
3. Pivot and pivot pin.
4. Link joining pivot pin to lever with pins to allow joint rotation.
5. Lever. This is an extension of the sector gear.
6. Sector gear axle pin.
7. Sector gear.
8. Indicator needle axle. This has a spur gear that engages the sector gear and extends
through the face to drive the indicator needle. Due to the short distance between the lever
arm link boss and the pivot pin and the difference between the effective radius of the
sector gear and that of the spur gear, any motion of the Bourdon tube is greatly amplified.
A small motion of the tube results in a large motion of the indicator needle.
9. Hair spring to preload the gear train to eliminate gear lash and hysteresis.
Diaphragm
A second type of aneroid gauge uses the deflection of a flexible membrane that separates regions
of different pressure. The amount of deflection is repeatable for known pressures so the pressure
can be determined by calibration. The deformation of a thin diaphragm is dependent on the
difference in pressure between its two faces. The reference face can be open to atmosphere to
measure gauge pressure, open to a second port to measure differential pressure, or can be sealed
against a vacuum or other fixed reference pressure to measure absolute pressure. The
deformation can be measured using mechanical, optical or capacitive techniques. Ceramic and
metallic diaphragms are used.
For absolute measurements, welded pressure capsules with diaphragms on either side are often
used.
Shape:
• Flat
• Corrugated
• Flattened tube
• Capsule
Bellows
In gauges intended to sense small pressures or pressure differences, or where an absolute
pressure must be measured, the gear train and needle may be driven by an enclosed and sealed bellows
chamber, called an aneroid, which means "without liquid". (Early barometers used a column of
liquid such as water or the liquid metal mercury suspended by a vacuum.) This bellows
configuration is used in aneroid barometers (barometers with an indicating needle and dial card),
altimeters, altitude recording barographs, and the altitude telemetry instruments used in weather
balloon radiosondes. These devices use the sealed chamber as a reference pressure and are driven
by the external pressure. Other sensitive aircraft instruments such as air speed indicators and rate
of climb indicators (variometers) have connections both to the internal part of the aneroid
chamber and to an external enclosing chamber.
Secondary transducer
This is also called a capacitance manometer, in which the diaphragm makes up a part of a
capacitor. A change in pressure leads to the flexure of the diaphragm, which results in a change
in capacitance. These gauges are effective from 10⁻³ Torr to 10⁻⁴ Torr.
3.2.1 Thermistors
Thermistors are temperature sensitive resistors. All resistors vary with temperature, but
thermistors are constructed of semiconductor material with a resistivity that is especially
sensitive to temperature. However, unlike most other resistive devices, the resistance of a
thermistor decreases with increasing temperature. That's due to the properties of the
semiconductor material that the thermistor is made from. For some, that may be
counterintuitive, but it is correct. Here is a graph of resistance as a function of temperature for a
typical thermistor. Notice how the resistance drops from 100 kΩ to a very small value in a
range around room temperature. Not only is the resistance change in the opposite direction from
what you expect, but the magnitude of the percentage resistance change is substantial.
In this lesson you will examine some of the characteristics of thermistors and the circuits they are
used in.
Here are some data points for a typical thermistor from "The Temperature Handbook"
(Omega Engineering, Inc., 1989). (By the way, when you refer to this thermistor, you would say
it has 5kΩ at room temperature.)
T (°C) R (Ω)
0 16,330
25 5000
50 1801
Substituting each (T, R) pair into the relation 1/T = A + B·ln(R) + C·(ln R)³ (with T in kelvins
and R in ohms) gives a set of simultaneous linear equations that can be solved for A, B and C.
Here are the values computed for A, B and C.
A = 0.001284
B = 2.364x 10-4
C = 9.304x 10-8
Using these values you can compute the reciprocal, and therefore the temperature, from a
resistance measurement.
If you have a resistance value - and that is what you will measure electrically - you then
need to solve for the temperature. Use the reciprocal of the equation above, and you will get:
T = 1 / (A + B·ln(R) + C·(ln R)³)
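Using the coefficients above, the conversion from a measured resistance to a temperature can be sketched as follows (this assumes the natural-log form of the equation quoted above, with T in kelvins and R in ohms):

import math

A, B, C = 0.001284, 2.364e-4, 9.304e-8   # coefficients computed above

def thermistor_temperature_c(resistance_ohm):
    """Invert 1/T = A + B*ln(R) + C*(ln R)^3, then convert kelvins to degrees Celsius."""
    ln_r = math.log(resistance_ohm)
    inv_t = A + B * ln_r + C * ln_r**3
    return 1.0 / inv_t - 273.15

print(f"{thermistor_temperature_c(5000):.1f} C")    # close to 25 C (the table value)
print(f"{thermistor_temperature_c(16330):.1f} C")   # close to 0 C (the table value)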
However, if the thermistor is embedded in a circuit - like a voltage divider, for example - then
you will have to measure electrical quantities - usually voltage - and work back from that
electrical measurement.
There will be situations where you need to measure a higher temperature than a thermistor
can work in. Or you may need more precision than a thermistor can give. Consider a
thermocouple or an integrated circuit sensor like the LM35.
How Do You Use A Thermistor?
Thermistors are most commonly used in bridge circuits like the one below.
The thermistor can be placed anywhere in the bridge with three constant resistors, but
different placements can produce different behavior in the bridge. For example, different
placements might cause the output voltage to go in different directions as the temperature
changes.
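A sketch of one common arrangement (the thermistor in one arm of a Wheatstone bridge with three fixed resistors; the supply voltage and resistor values are made up for illustration):

def bridge_output(v_supply, r_thermistor, r1, r2, r3):
    """Output of a Wheatstone bridge with the thermistor in series with r1."""
    v_left = v_supply * r_thermistor / (r1 + r_thermistor)
    v_right = v_supply * r3 / (r2 + r3)
    return v_left - v_right

# 5 V supply and 5 kΩ fixed resistors; the thermistor is about 5 kΩ at 25 °C
print(f"{bridge_output(5.0, 5000, 5000, 5000, 5000):.3f} V")   # 0.000 V (bridge balanced)
print(f"{bridge_output(5.0, 1801, 5000, 5000, 5000):.3f} V")   # about -1.176 V at 50 °C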
3.2.2 Thermocouple
A thermocouple is a junction formed from two dissimilar metals: one junction is held at a reference
temperature (like 0 °C), and the other junction is at the temperature to be measured. A temperature difference
will cause a voltage to be developed that is temperature dependent. (That voltage is caused by a
phenomenon called the Seebeck effect.) Thermocouples are widely used for temperature
measurement because they are inexpensive, rugged and reliable, and they can be used over a
wide temperature range. In particular, other temperature sensors (like thermistors) are useful
around room temperature, but the thermocouple can be used over a much wider range of temperatures.
Fig 3.2.2.2: A thermocouple circuit
• When you use a thermocouple, you need to ensure that the connections are at some
standard temperature, or you need to use an electronically compensated system that takes
those voltages into account. If your thermocouple is connected to a data acquisition
system, then chances are good that you have an electronically compensated system.
• Once we obtain a reading from a voltmeter, the measured voltage has to be converted to
temperature. The temperature is usually expressed as a polynomial function of the
measured voltage. Sometimes it is possible to get a decent linear approximation over a
limited temperature range.
• There are two ways to convert the measured voltage to a temperature reading.
o Measure the voltage and let the operator do the calculations.
o Use the measured voltage as an input to a conversion circuit - either analog or
digital.
• Type K (Ni-Cr/Ni-Al) thermocouples are also widely used in industry. They have high
thermopower and good resistance to oxidation. The operating temperature range of a
Type K thermocouple is from -269 °C to +1260 °C. However, this thermocouple
performs rather poorly in reducing atmospheres.
• Type T (Cu/Cu-Ni) thermocouples can be used in oxidizing or inert atmospheres over the
temperature range of -250 °C to +850 °C. In reducing or mildly oxidizing environments,
it is possible to use the thermocouple up to nearly +1000 °C.
• Type N (Nicrosil/Nisil) thermocouples are designed to be used in industrial environments
at temperatures up to +1200 °C.
The coefficients, aₙ, are tabulated in many places. Here are the NBS polynomial coefficients
for a type K thermocouple. (Source: T. J. Quinn, Temperature, Academic Press Inc., 1990)
Table 3.2.2.1: Values of aₙ for a type K thermocouple.
Type K Polynomial Coefficients
n    aₙ
0    0.226584602
1    24152.10900
2    67233.4248
3    2210340.682
4    -860963914.9
5    4.83506×10¹⁰
6    -1.18452×10¹²
7    1.38690×10¹³
8    -6.33708×10¹³
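These coefficients are used in a polynomial of the form T = a0 + a1·V + a2·V² + ... + a8·V⁸. A sketch of the conversion, assuming (as is usual for this coefficient set) that V is the measured EMF in volts and T comes out in degrees Celsius:

# NBS type K inverse polynomial coefficients from the table above
A_N = [0.226584602, 24152.10900, 67233.4248, 2210340.682, -860963914.9,
       4.83506e10, -1.18452e12, 1.38690e13, -6.33708e13]

def type_k_temperature_c(emf_volts):
    """Evaluate T = sum(a_n * V^n) for n = 0..8 using Horner's method."""
    t = 0.0
    for a in reversed(A_N):
        t = t * emf_volts + a
    return t

# e.g. a measured EMF of 4.096 mV (0.004096 V)
print(f"{type_k_temperature_c(4.096e-3):.1f} C")   # about 100 C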
There are really no thermocouples that can withstand oxidizing atmospheres for
temperatures above the upper limit of the platinum-rhodium type thermocouples. We cannot,
therefore, use thermocouples to measure temperature in such high temperature conditions.
Other options for measuring extremely high temperatures are the radiation pyrometer or the noise
pyrometer. For non-oxidizing atmospheres, tungsten-rhenium based thermocouples show good
performance up to +2750 °C. They can be used, for a short period, at temperatures up to
+3000 °C.
The selection of the type of thermocouple used for low temperature sensing is primarily based on the materials of the thermocouple. In addition, the thermo power at low temperature is rather low, so the EMF to be measured will be proportionally small as well.
This comes into play with thermocouples because there is usually a need to
introduce extra metals into the circuit. This generally occurs when
instrumentation (lead wires) is connected to measure the EMF, or when the
junction is welded together on the hot end (weld rod). One would assume that the
introduction of these extra undesired "junctions" would destroy the calibration
and throw off the EMF measurement. However, this law states that the addition of
these extra metals will not have an effect on the total EMF as long as they are
kept at the same temperature as the point where they are connected.
Fig 3.2.2.4: Thermocouple law 2
This law is very important to thermocouples because of the fact that the cold junction of most thermocouples (in real applications) will not be at 32°F.
Unfortunately, the standardized EMF tables are usually only available with 32°F
as a reference temperature. This law gives us a means of relating the EMF of a
thermocouple used under ordinary conditions to that of one at a standardized
temperature. This topic is covered in more detail in the section "Using your
Thermocouple" under the heading "Cold Junction Compensation".
3.2.3 Pyrometers
Firing requires the user to know the actual temperature of the firing
chamber. To do so, a devise called a Pyrometer is used. A Pyrometer is a type of
thermometer used for high temperature measurements where physical contact may
damage the measuring instruments. All pyrometers come with a thermocouple
(temperature sensor) that senses the temperature of the measurand.
Pyrometers may be digital or analog. Digital Pyrometers are often offered in both a 9V
battery version and a standard receptacle Plug-in style version.
Connected to the pyrometer is a thermocouple. This thermocouple is the temperature sensor. It's
inserted into the firing chamber. The thermocouple produces a very low voltage which changes
depending upon the temperature seen at the tip of the thermocouple. This voltage is applied to the
pyrometer which reacts by moving or deflecting the needle or changing the digital display.
It should be noted that the Pyrometer and Thermocouple do not control the kiln in any way. They
don't turn it on and they don't turn it off. Their sole purpose is to display the temperature seen at the
tip of the thermocouple.
WEEK 8 ENGINEERING MEASUREMENTS MEC 203
Learning objectives: - understanding precautions for using measuring
instruments.
In addition to these guidelines, the user must also comply with general safe operating practice,
and when using the systems for weighing during lifting, the user must also comply with safe
operating practice during lifting
The load limit rating (or "capacity") indicates the maximum force or load a system can carry under normal working conditions. Overloading, or placing a load on the system above its rated capacity, is dangerous and is therefore STRICTLY PROHIBITED.
Avoid opening or attempting to open measuring systems and, needless to say, any attempt to
repair the systems by unauthorised personnel (without written authorisation) will nullify the
warranty as well as the manufacturer's liability, and could be dangerous.
Do not use a load measuring system with an unknown load if there is any doubt as to the reliability of the load indication. To check its reliability, use only a known load with a value of more than 50% and less than 100% of the system's rated capacity (load limit).
The permitted temperature range for a measuring system is normally given in the system's specification. Do not allow the system to overheat; this could be dangerous.
Take particular care not to expose the measuring system to nuclear radiation. Local environmental conditions such as extreme temperatures, radio transmissions or other electromagnetic radiation may interfere with reliable system operation and may cause a false (low) reading, which could prove dangerous. Avoid using the measuring system under such conditions.
Between calibrations, the user can verify whether the systems are still calibrated correctly by
using a known input. Calibration verification and adjustment must be performed with extreme
care since a wrong calibration adjustment will result in false readings, which could be dangerous.
At all times, it is the responsibility of every user of engineering equipment to ensure that normal safety precautions are observed. No amount of safety features and engineering can be a substitute for common sense and a desire to work safely.
All normal workshop safety precautions must be strictly observed during practicals.
In engineering, there are some things that are very easy to measure ’directly.’ These are things
like weight, distance, etc. So if I wanted to measure how long a piece of wood is, I would just
measure how long it is. But let’s say I wanted to find out about something that is a little harder to
measure... like how quickly the wind is blowing. I may not be able to measure the wind’s actual
speed, but if I had a windmill, I could measure how much power the windmill is making. Then,
using this information, I could work backwards to figure out how fast the wind must be. This
would be an example where I have to measure something ’indirectly.’
Direct and indirect measurements are also very important in sciences. Take, for example,
bacteria (little organisms that are so small that you can’t see them without a microscope). If we
wanted to figure out how many bacteria are in a tube, we could measure it directly or indirectly.
To do it directly, we would have to spread the bacteria out on a microscope slide and count them
one by one. This is very time-consuming. Instead, we could use something called a
spectrophotometer to measure it indirectly. A spectrophotometer works by shining light in one
side of a tube and measuring how much actually gets through to the other side. Since more light getting through means there are fewer bacteria in the tube, we can work backwards to figure out how
many bacteria are there. Another way of measuring it indirectly would be to measure how much
food the bacteria eat in a certain amount of time. The more they eat, the more bacteria there must
be.
WEEK 9
Learning objectives: - understanding the calibration of measuring instruments and static calibration.
4. 0 CALIBRATION OF MEASURING INSTRUMENTS
4. 1 CONCEPT OF CALIBRATION: - calibration refers to the process of setting or resetting a measurement instrument so that it gives accurate readings without discrepancies. A measuring instrument is to be checked for accuracy at frequent intervals against a known standard, and any discrepancy between the measured values and the known standard is to be set right through calibration. Calibration procedures thus involve a comparison of a particular instrument with either (i) a primary standard, or (ii) a secondary standard, or (iii) a known input source. Primary standards here refer to fundamental standards of measurement such as those of mass, length, time and temperature (the kelvin). Secondary standards refer to electrical units that are derived from the mechanical units of mass, length and time, e.g. the reading obtained from an electrical device. There are two types of calibration: (1) static calibration and (2) dynamic calibration.
4. 2 STATIC CALIBRATION
The calibration of measuring instruments like pressure gauges, thermometers and flow meters is based on the principle of static calibration.
Static calibration refers to a procedure in which an input, either constant or slowly time varying, is applied to an instrument and the corresponding output measured, while all other inputs (desired, interfering, modifying) are kept constant at some value. Instruments are so constructed that the signal conversions they perform have the property of irreversibility or directionality. This implies that a change in the input quantity qi will cause a corresponding change in the output quantity qo. The relationship so obtained between qo and qi is referred to as the static calibration, valid under the stated constant conditions of all other inputs. The static calibration may be expressed analytically (qo = f(qi)), graphically, or in tabular form. A graphical representation between qo and qi is the calibration curve applicable under the stated constant conditions for all other inputs. The static performance characteristics are obtained from the calibration curves. It may, however, be emphasized that the standard used must be more accurate than the instrument to be calibrated. Some of the static performance characteristics are
(i) Linearity
If the relationship between the output and input can be expressed by an equation of the form
qo = a + k qi
where a and k are constants, the instrument is said to possess linearity. Linearity, in practice, is never completely achieved, and the deviations from the ideal are termed linearity tolerances. In commercial instruments, the maximum departure from linearity is often specified. Independent linearity and proportional linearity are the two forms of specifying linearity. They are illustrated in Fig. 4.1 (a) and 4.1 (b).
These definitions are graphically illustrated in Fig. 4.2 (a) and 4.2 (b); the calibration curves are plotted using the least squares method from the experimental data points, as in the sketch below.
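A minimal sketch of the least-squares fitting just mentioned, assuming invented calibration data: the fitted slope k is the static sensitivity and the intercept is a in qo = a + k·qi.

```python
# Sketch: fitting the straight-line static calibration qo = a + k*qi to
# experimental points by least squares. The data below are invented purely
# for illustration.

import numpy as np

qi = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])          # applied (known) inputs
qo = np.array([0.05, 2.12, 3.98, 6.15, 7.94, 10.03])    # instrument outputs

k, a = np.polyfit(qi, qo, 1)        # slope k (static sensitivity) and intercept a
fit = a + k * qi
max_dev = np.max(np.abs(qo - fit))  # worst departure from the fitted line (linearity check)

print(f"qo = {a:.3f} + {k:.3f} qi, max deviation = {max_dev:.3f}")
```

The maximum deviation from the fitted line is one simple way of expressing the linearity tolerance mentioned above.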
(ii) Static sensitivity
Fig. 4. 2 Definition of static sensitivity
While the instrument's sensitivity to its desired input is of primary importance, its sensitivity to interfering and modifying inputs may also be of considerable interest. As an example, consider the case of the strain gauge discussed in week 14. Temperature is an interfering input: it causes the resistance of the gauge to vary and this drifts the output value even when the strain is zero. This is called zero drift. Further, temperature is also a modifying input which changes the sensitivity factor. This effect is called sensitivity drift or scale factor drift. Fig. 4.3 illustrates these definitions clearly; the total error due to temperature on the measurement has also been shown in Fig. 4.3.
(iii) Repeatability
If an instrument is used to measure the same or an identical input many times and at different time intervals, the output is not the same but shows a scatter. This scatter or deviation from the ideal static characteristic, expressed in absolute units or as a fraction of the full scale, is called the repeatability error and is illustrated in Fig. 4. 4.
WEEK 10
4. 3 PRECAUTIONS TO BE OBSERVED DURING CALIBRATION OF MEASURING INSTRUMENTS
i. the instrument must be checked for external or internal friction and the friction removed.
iii. loose or sliding parts must be tightened before calibration is carried out on an instrument.
iv. broken parts of the instrument must be changed before calibration is carried out.
v. the standard instrument to be used for calibrating an instrument must be at least 10 times more accurate than the instrument being calibrated.
MAXIMUM METAL LIMITS: - refer to the limit of size at which the maximum amount of metal is left on a particular dimension during manufacture. This signifies that if a shaft has a dimension of 20 ± 0.025mm then the maximum metal limit of the shaft is 20.025mm. This means that the maximum amount of metal that can be left on a nominal dimension of 20 is +0.025mm if the dimension is to exceed the nominal dimension. Any amount of metal above +0.025mm is considered above the limit of the dimension. Therefore the maximum dimension allowed for this shaft must not exceed 20.025mm. This is different from the maximum metal limit of a hole. If a hole is to have a diameter of 20 ± 0.025mm then the maximum metal limit on this hole's diameter is 19.975mm: the greatest amount of metal is left around the hole when its diameter is at the low limit, i.e. not more than 0.025mm below the nominal dimension of 20. Any diameter below 19.975mm is outside the limit of the hole's diameter.
MINIMUM METAL LIMITS: - refer to the limit of size at which the least amount of metal remains after material has been removed from the surface during manufacture. This signifies that if a metal shaft is to have a diameter of 20 ± 0.025mm then the maximum amount of metal that can be taken from the nominal dimension of 20 is 0.025mm; the removal must stop when the diameter of the shaft is 19.975mm if the shaft is to remain within the limit. This figure also varies in the case of the diameter of a hole. If a hole is to have a diameter of 20 ± 0.025mm, the removal of metal during manufacture must stop when the hole's diameter reaches 20.025mm if it is to remain within the limit. The limit that is associated with the greatest amount of metal is often called the maximum metal limit, and that associated with the least amount of metal is often called the minimum metal limit. The largest shaft size permitted is the maximum metal limit, and the smallest shaft size permitted is the minimum metal limit; similarly the largest hole size permitted is the minimum metal limit, and the smallest hole size permitted is the maximum metal limit. A short sketch applying these definitions to the example above follows.
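The arithmetic of the example above can be summarised in a short sketch; the 20 ± 0.025 mm figures are those used in the text.

```python
# Sketch: maximum and minimum metal limits for the 20 +/- 0.025 mm shaft and
# hole discussed above. For a shaft the maximum metal limit is the upper size
# limit; for a hole it is the lower size limit.

def limits(nominal, tol):
    return nominal - tol, nominal + tol   # (low limit, high limit)

nominal, tol = 20.0, 0.025                # mm, from the example in the text

shaft_low, shaft_high = limits(nominal, tol)
hole_low, hole_high = limits(nominal, tol)

print("shaft: max metal limit =", shaft_high, "mm, min metal limit =", shaft_low, "mm")
print("hole : max metal limit =", hole_low, "mm, min metal limit =", hole_high, "mm")
print("tolerance =", round(shaft_high - shaft_low, 3), "mm")
```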
4. 5 LIMITS OF GAUGES
When dimensioning a component it is necessary to stipulate the permitted variation in its size, because errors will occur owing to inaccuracies in the machine tools, jigs and fixtures used. The permitted maximum and minimum sizes are called the limits of size (or more simply the 'limits') and the difference between these limits is called the variation tolerated (or more simply the 'tolerance'). The tolerance should be as large as possible to minimize the cost of the component, but sufficiently small to ensure that the required fit, or the degree of interchangeability between parts, is ensured. One method of inspecting parts is to use limit gauges; these gauges are designed to accept the work piece if its size and shape lie within the specified limits. A limit gauge (or pair of limit gauges) consists of a 'GO' member that will pass over or through a correct feature, and a 'NOT GO' member that will not pass over or through a correct feature. The disadvantages associated with the use of limit gauges are that the extent of error is not indicated when a work piece is rejected, and that the system imposes smaller tolerances than stipulated on the drawing of the work piece, to allow for tolerances when the gauge is manufactured and also for gauge wear.
Fig. 4. 1 Go-gauges
As stated above, the 'GO' member is used to check the maximum metal limit, which in turn controls the shape of the work piece; the 'GO' member must be full form because, as shown in fig. 4. 2, a 'GO' member that is not full form will accept an incorrectly-shaped work piece.
Fig. 4. 2 Non-alignment of GO gauges
Taylor stated that the 'GO' gauge should incorporate the maximum metal limits of all the dimensions of the work piece. Taylor also stated that the 'NOT GO' gauge should be separate, and should check the minimum metal limit of only one dimension at a time. If more than one dimension were checked at a time by the 'NOT GO' member, it would not enter, or pass over, the feature as long as one dimension is within limits, and could therefore accept an incorrect work piece.
It is necessary to allow manufacturing tolerances when designing a gauge, but it is also necessary to decide how these tolerances are disposed relative to the work piece tolerances. It is recommended that the limits on the 'GO' gauge are such that its size is within the limits of size for the work piece, and that the limits on the 'NOT GO' gauge are such that its size is outside the limits of size for the work piece. When made to these limits the 'NOT GO' will not accept faulty parts, but the 'GO' will reject parts that are close to the maximum metal limit; the latter condition will rarely occur in practice. The gauge tolerance is usually one-tenth of the work piece tolerance.
B.S. 969 recommends that where the work piece tolerance exceeds about 0.1mm additional metal
be left on the ‘GO’ gauge surface to allow for wear (this has the effect of placing the
manufacturing limits for the ‘GO’ gauge still further within the work piece limits). The standard
also recommends that if the work piece tolerance is too small to permit this, the gauge should be made without a wear allowance. Gauges are usually made from gauge plate, or from cast steel (with between about 0.7 and 1.2% carbon) that is usually hardened, but may be hard enough 'as received'. Larger gauges are sometimes made from grey cast iron.
In order to completely satisfy Taylor's principle, the 'GO' end should be full form, and be the same length as the hole to be gauged; it is often inconvenient to make the 'GO' end the same length as the hole, but except when large holes are gauged, it is full form. When large holes are gauged the gauge members are of the 'bar' type to reduce the weight of the gauge; this type of gauge does not satisfy Taylor's principle because it only checks across the diameter. Sometimes the 'NOT GO' end is flatted, and it then partly satisfies Taylor's principle; large gauges of the 'bar' type also partly satisfy Taylor's principle.
The gauging members may be made separately and engaged together to form an assembly (a 'renewable end' gauge). B.S. 1044: 1964, gauge blanks, does not include "solid" gauges because it is now common practice to use renewable end gauges with a light alloy or plastics handle. Renewable end gauges can have separate handles, be at opposite ends of one handle, or be combined as one gauging member of the progressive gauge type. The ends of large plug gauges should be protected from becoming burred when placed on a machine table by providing a 'guard extension' (as shown in fig. 4. 4);
Fig. 4. 4
Of more than about 75mm diameter, excepting those for testing blind holes to their full depth. The centres should be of good quality, they should not be large, and the length of the cone should
Fig. 4. 5
Adequate air venting should be provided when small gauges are used for blind holes; when a gauge is more than 100mm dia., lightening holes are incorporated, which also give the required air venting. The marking of gauges should be kept to a minimum; the marking should include the limiting dimension controlled by the gauge, 'GO' or 'NOT GO' (alternatively 'H' or 'L' – high limit or low limit), 'General' or 'Reference', and the manufacturer's name or trade mark. A screw thread is checked with gauges for the major and minor diameters, and a thread gauge for the effective diameter (the effective diameter is the diameter of an imaginary cylinder whose generator cuts the thread such that the width of the thread and the width of the space between threads are equal).
The 'Matrix' thread gauge is illustrated in fig. 4. 7 (a). This gauge has two sets of adjustable anvils; the front anvils form a 'NOT GO' effective diameter gauge. The rear anvils gauge only two threads, and they are shaped so that errors of pitch of the screw thread being gauged will not cause interference. The 'GO' and 'NOT GO' anvils are
Fig. 4. 7 (a) and (b)
and a double-ended screw plug gauge for the effective diameter. The 'GO' member is full form, but the 'NOT GO' member has truncated threads so that only the effective diameter is gauged. It is common to have a dirt clearance groove cut axially along the thread to a depth slightly below the root of the thread. The general design notes already given regarding plain plug gauges also apply.
These gauges are made from gauge plate, and may be double-ended or single-ended.
4. 6. 8 RECESS GAUGES
Fig. 4. 10 shows a plate gauge for checking the recess depth; care must be taken, when using this type of gauge, that it is properly seated on the work piece face when in the extreme positions during gauging.
Fig. 4. 10 Recess gauge
The recess width can be gauged with a simple plate gauge as shown in fig. 10.18.
The recess diameter is more difficult to gauge because the gauge must enter the small diameter hole before gauging the recess diameter. Fig. 4. 10 shows a typical gauge that locates in the smaller diameter hole, and the recess diameter is gauged by rotating the lobed member; the position for 'GO' and 'NOT GO' must be indicated on the locating and gauging member.
4. 6. 9 STEP GAUGES
These gauges are designed so that the work piece is accepted if one step is below the datum face, and the other step is above the datum face; they are convenient to use if the step is at least 0.2mm. Fig. 4. 14 (a) shows a stepped-pin depth gauge and fig. 4. 14 (b) shows a taper ring gauge; in both these examples the datum face is the work piece face.
Fig. 4. 14 (a) Stepped-pin depth gauge; (b) Taper gauges
4. 6. 10 POSITION AND RECEIVER GAUGES
Position gauges are used to check the relative positions of several features (see fig. 4. 15), and
Fig. 4. 15 Receiver gauges
WEEK 11
Learning objectives: - knowing different type of strain gauges and different
ways of mounting strain gauges (axial, radial and biaxial).
5. 1 STRAIN GAUGES
Strain gauges are electrical devices that are used to measure the strains that occur in metal components in industry. They exist in many forms and use different principles; amongst them are (1) capacitance gauges, (2) inductance gauges, (3) piezo-electric gauges and (4) resistance gauges. The electrical methods of measuring strain possess the advantages of high sensitivity and the ability to respond to dynamic strains. Both the capacitive and inductive types are sometimes used as load indicators, mounted directly on the machine frame; in one application they are used for the measurement of rolling loads in a steel mill. These gauges are quite rugged and maintain their calibration over long periods of time.
FIG. 5. 3 (a) inductive gauge (b) capacitance gauge
Figure 5.3 (a) illustrates an inductive gauge for general purpose application. A deformation results in the variation of an air gap and hence the inductance. A linear differential transformer can also be used as an inductive gauge. Figure 5.3 (b) shows a capacitive type torque meter: torque carried by an elastic member causes a shift in the relative position of the teeth, thereby changing the effective area and hence the capacitance. The changes in inductance or capacitance due to strains caused by loading are measured with suitable circuitry.
Piezo-electric strain gauges are mainly used for studying dynamic inputs. These transducers have very high internal resistance, and proper circuitry may be used to measure slowly varying inputs, but rarely steady inputs. The gauge is cemented to the specimen, and the voltage output developed when the gauge is deformed along with the specimen is taken as a measure of strain. The piezo-electric gauges are equally sensitive to strains in lateral directions and have very high output sensitivity. One of the piezo-electric materials used for this purpose is barium titanate; elements of about 0.25mm thickness with suitable electrodes are bonded to the test member.
Due to their good dynamic response, stability, range of available sizes, ease of data presentation and processing etc., resistance strain gauges are widely used for stress analysis. Resistance strain gauges work on the principle of piezo-resistivity: the resistance of a wire conductor changes when it is strained, and the change in resistance bears a definite relation to the strain producing it.
The unbonded strain gauge is made of a high-tensile resistance wire (commercially referred to as alloy 479, containing 92% Pt and 8% W) of about 0.025mm diameter and about 25mm in length. Two to twelve loops of the wire are attached to both a stationary frame and a movable platform with the help of pins made of electrically insulating materials. Relative motion between the stationary frame and the platform is possible as guided by flexure plates. One such construction is shown in fig. 5.4; the resistance wire is preloaded, and the four resistance wires in this construction are so connected that the bridge acts as a full bridge, as shown in fig. 5.4. Unbonded strain gauges are mainly used in transducers rather than for strain measurement. On the other hand, bonded strain gauges are cemented to the specimen. When properly cemented they effectively form part of the surface and undergo the same strain. A bonded strain gauge consists of either a length of fine metal wire of approximately 0.025mm diameter, or metal foil; in order to shorten the length of the gauge while retaining its resistance R (and hence sensitivity), the wire, or the foil, is usually formed into a grid. These are shown in Fig. 5.5. The resistance element is formed on a suitable base, for example:
Fig. 5. 5 Resistance elements
(i) Paper base: the wire is wound around paper or sandwiched between layers of paper. Leakage
(ii) Bakelite base: the wire and foil gauges used for high temperature applications are formed on a bakelite base.
(iii) Plastic and rubberized strippable base: mostly foil gauges are formed on a plastic or rubberized base, from which they are stripped for cementing to the test member.
The paper base strain gauge may be cemented with Duco cement to the test member, unlike bakelite base strain gauges, which are held to the specimen with a pressure of 1.8 kg/cm2 during curing; curing lasts 24 hours. Epoxy resin may be used for plastic base strain gauges.
The most common metals for the manufacture of strain gauges are (i) an alloy of copper and nickel (55% Cu and 45% Ni), and (ii) an alloy of nickel, chromium and iron with other elements in very small percentages. Gauges with resistances varying from 60 to 5000 ohms are available commercially, but the strain gauge of 120 ohms is considered standard, and most commercially available strain gauge reading equipment has been designed for 120 ohm gauges. The safe current that can be carried by the gauge is limited, although higher currents may be accepted for short periods. The change in resistance brought about by the application of load is a measure of the average strain over the gauge length; this is adequate to measure strain on the test member if the stress distribution is fairly uniform. In regions of likely stress concentration and steep strain gradients, a gauge of much shorter length must be used.
A gauge will only react to strain parallel to the length of the wire. If the strain is not parallel to the gauge length, then only the component of strain that lies in the direction of the gauge length will be measured.
Therefore:
(i) Single element gauges are used in uniaxial stress fields or for making rosettes. If the gauge is used as prescribed by the calibration procedures, strain measurement will be accurate; otherwise a correction must be applied.
(ii) Two element gauges are used in a biaxial stress field in which the directions of the principal stresses are known and only their magnitudes are to be determined. In regions of high stress concentration, the single element gauges (grids) are stacked one on top of the other but insulated from each other, so that the gauge is effectively of a small size.
(iii) Three element rosette gauges are used for the study of a general biaxial stress field, in which neither the directions nor the magnitudes of the principal stresses are known. The choice of the stacked or single plane style is determined by the nature of the strain gradient at the point where the gauge is to be mounted; figure 5.6 illustrates a single element gauge, two elements stacked over each other, and a three element 60o rosette in planar style.
WEEK 12
Learning objectives: - understanding gauge factor of material and its relation
to sensitivity of strain gauges.
5. 2 Analytical Theory Of Strain Gauges: Piezo- Resistive Effect
A simplified approach to the theory of strain gauge is presented here.
The resistance R of a wire of length L and area of cross-section A is given by
R = ρL/A                                  (5. 1)
where ρ is the specific resistivity. On differentiation one obtains
dR/R = dρ/ρ + dL/L - 2 dD/D               (5. 2)
where A = CD², C being a constant whose value is one for a square cross-section of dimension D, and π/4 for a circular cross-section of diameter D.
Defining
dL/L = ea = axial strain,
dD/D = el = lateral strain, and
-(dD/D)/(dL/L) = µ = Poisson's ratio,
the above equation can be recast as
(dR/R)/ea = 1 + 2µ + (dρ/ρ)/ea = GF,      (5. 3)
or dR/R = GF·ea.
The term (dR/R)/ea (= GF) is called the axial sensitivity, or the gauge factor, of the gauge. If the specific resistivity does not depend on the strain then
GF = 1 + 2µ.                              (5. 4)
Usually the value of Poisson's ratio for most metals is 0.3; therefore
GF = 1.6.
In fact the measured values of the gauge factor for metals vary from -12 for Ni to 0.47 for manganin (an alloy of Ni, Cu and Mn). This is perhaps due to the fact that the behaviour of ρ with strain for the very thin wires used for strain gauges is not so well understood. Table 5.1 gives the gauge factor, temperature coefficient of resistance etc. for some materials used for strain gauge work.
Table 5.1 shows a very wide variation of the gauge factor. Therefore the gauge factor for each composition type is either to be measured before using the gauge or is supplied by the manufacturer, which usually is the case.
If the resistance gauge is strained to the extent that its element is operating in the plastic region, then µ = 0.5 and hence GF = 2.0. For most commercial strain gauges the gauge factor is the same for both compressive and tensile strains.
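A short numerical sketch of equation (5. 3) follows. R = 120 ohms is the standard gauge resistance mentioned above and GF = 2 is a typical commercial value; the 500 microstrain input is assumed purely for illustration.

```python
# Sketch: resistance change of a strain gauge from dR/R = GF * ea (eq. 5.3).
# R = 120 ohm and GF = 2 follow the text; the 500 microstrain input is an
# assumed example value.

R = 120.0        # gauge resistance (ohm)
GF = 2.0         # gauge factor
ea = 500e-6      # axial strain (500 microstrain, assumed)

dR = GF * ea * R
print(f"dR = {dR*1000:.1f} milliohm ({100*dR/R:.3f} % change)")
```

The change is only a fraction of an ohm, which is why sensitive bridge circuits are needed to read it.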
5. 2. 1 CROSS SENSITIVITY
As has been pointed out earlier, a strain gauge is formed in a grid pattern in order to shorten its length while retaining its sensitivity. Thus although most of the gauge element wire is parallel to the length (axis) of the gauge, a certain length of the wire is unavoidably placed transverse to the gauge axis. This length of wire will react to the strain perpendicular to the gauge axis and gives rise to a cross sensitivity GFC. The ratio GFC/GF is generally about 2%. In foil type gauges, the effect of this length is considerably minimized by decreasing the resistance of the cross length (the end loops are made wide).
The manufacturer's calibration is carried out in a uniaxial stress field, so that if the gauge is used in service in the same manner, cross sensitivity may be ignored. However, when the strain gauge is used in a biaxial stress field, the readings will be influenced by the transverse strain, and a correction should therefore be applied to obtain the true strain along the gauge axis. When a pair of strain gauges is used in a biaxial field with their axes along the principal stress directions, the true strains e1 and e2 along the two perpendicular directions are given by
e1 = [(1 - µŋ)/(1 - ŋ²)] (e1' - ŋe2'),
and similarly for e2 with e1' and e2' interchanged, where e1' and e2' are the measured strains along these directions and ŋ (= GFC/GF) is the cross sensitivity. A sketch of this correction is given below.
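A minimal sketch of the cross-sensitivity correction above. The measured strains, the cross sensitivity ŋ (eta) and the Poisson's ratio used in the manufacturer's calibration are assumed example values; the expression for e2 is the usual companion of the quoted expression for e1.

```python
# Sketch: correcting measured strains e1p, e2p (gauge axes along the principal
# directions) for cross sensitivity, using
#   e1 = (1 - mu*eta)/(1 - eta**2) * (e1p - eta*e2p)
# and the symmetric expression for e2. All numerical values are assumed for
# illustration; eta = GFC/GF is typically of the order of 2 %.

mu = 0.285        # Poisson's ratio of the calibration material (assumed)
eta = 0.02        # cross sensitivity GFC/GF (assumed)

def true_strains(e1p, e2p):
    factor = (1.0 - mu * eta) / (1.0 - eta**2)
    e1 = factor * (e1p - eta * e2p)
    e2 = factor * (e2p - eta * e1p)
    return e1, e2

e1, e2 = true_strains(800e-6, -300e-6)   # measured strains (assumed)
print(f"e1 = {e1*1e6:.1f} microstrain, e2 = {e2*1e6:.1f} microstrain")
```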
After a strain gauge has been mounted on a test member, variation of the room temperature influences the readings in the following ways:
(i) The gauge factor of the gauge is affected by temperature owing to creep, i.e. the gauge factor varies with time for a constant applied stress. Constantan wire gauges with a bakelite base show comparatively little creep.
(ii) The resistance of the strain gauge will vary with a change in the temperature, and
(iii) An apparent strain will be induced in the gauge due to the differential expansion between the test member and the gauge wire.
The influence of temperature on the gauge reading can be expressed mathematically in terms of β, the temperature coefficient of resistance of the gauge wire, a1 and a2, the coefficients of linear expansion of the test member and the gauge wire respectively, and ∆Ø = Ø - Ø0, where Ø is the temperature of the gauge. If the coefficient of linear expansion of the test member is more than that of the gauge wire (a1 > a2) at a given temperature, the strain gauge suffers a tensile strain; on the other hand, if a2 > a1, it suffers a compressive strain.
Partial or complete temperature compensation over a limited range of measured strain values is achieved by:
(i) Making
β + (a1 - a2) = 0.
For metals β is positive. Therefore this equation can be satisfied by suitably choosing the gauge material for a definite test member. It is, however, assumed that the gauge factor is independent of temperature.
(ii) Using a series combination of two 'opposing' wire materials such as constantan and nickel (dual gauges), the latter having a negative gauge factor. This method provides compensation over a limited temperature range only.
(iii) Locating a temperature sensitive resistor close to the gauge and connecting it electrically in series with the bridge supply line so that the sensitivity (output voltage/input strain) remains unaltered.
The most common and practical method of compensating temperature errors is either by the use of dummy gauges or by employing wire resistance gauges in push-pull pairs. Examples of the various arrangements for compensating temperature errors are given later.
Humidity is another factor which may seriously impair the performance of strain gauges. Humidity causes corrosion of the gauge wire, resulting in an increase in its resistance. The effect of humidity on the performance of the strain gauge can be minimized by giving a coat of wax to the gauge installation.
Strain induced resistance changes of the strain gauge are measured with some form of the Wheatstone bridge. Figure 5.7 shows a Wheatstone bridge circuit with four gauges of resistances R1, R2, R3 and R4. In practice it is very difficult to achieve null balance by proper selection of the resistances; usually a series or parallel balancing network is used, though this might modify the sensitivity of the bridge.
On application of stress, the active gauges will undergo a change of resistance dRi (i = 1, ..., 4) and hence a potential difference e will be developed across BD. The potential difference e is given by
e = V [(R2 dR1 - R1 dR2)/(R1 + R2)² + (R4 dR3 - R3 dR4)/(R3 + R4)²]
If it is assumed that all the four gauges have the same nominal resistance, i.e. R1 = R2 = R3 = R4 = R, the output voltage e can be expressed in terms of the gauge factor and the strain as
e = (V·GF/4)·enet,
where enet is the net strain (enet = e1 - e2 + e3 - e4). This is known as the bridge sensitivity equation. The sensitivity Sb of the bridge is
Sb = e/ε = V·GF·enet/(4ε),
where ε is the strain with one active gauge. Since the supply voltage V = 2·ig·R, ig being the current through a gauge of resistance R, this may also be written
Sb = ig·R·GF·enet/(2ε),
which would be 2·ig·R·GF for the full bridge (enet = 4ε).
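A short sketch evaluating the bridge output e = (V·GF/4)·enet for one, two and four active gauges, following the sensitivity equation above; the supply voltage, gauge factor and strain are assumed example values.

```python
# Sketch: bridge output e = (V * GF / 4) * enet for different numbers of active
# gauges, following the bridge sensitivity equation above. V, GF and the strain
# are assumed example values; enet = e1 - e2 + e3 - e4.

V = 5.0          # bridge supply voltage (V), assumed
GF = 2.0         # gauge factor
e = 500e-6       # strain seen by each active gauge (assumed)

for name, enet in (("quarter bridge", e), ("half bridge", 2 * e), ("full bridge", 4 * e)):
    output = V * GF / 4.0 * enet
    print(f"{name}: enet = {enet*1e6:.0f} microstrain, output = {output*1000:.2f} mV")
```

The printed values show the doubling and quadrupling of sensitivity referred to in the mounting arrangements that follow.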
Based on the equation for e and the requirements of temperature compensation, some arrangements for mounting strain gauges for specific purposes are now described.
5. 4 MOUNTING OF GAUGES
Figures 5.8 (a) and (b) show strain gauges mounted on a bar and the schematic of the bridge circuitry respectively. The gauges R1 and R3 are mounted axially on the bar, which is subjected to either tensile or compressive stress; gauges R2 and R4 are located where
e2 = e4 = 0, and e1 = e3 = e.
This arrangement gives double the sensitivity compared to that achievable with a single active gauge. Further, if the bar bends, one of the active gauges is compressed while the other is elongated, thus canceling the effect of bending stress provided they are mounted diametrically opposite each other.
If temperature compensation along with the removal of the influence of bending strain is required, the gauges are mounted as shown in figs. 5.8 (c) and (d). The resistances R2, R4 are identical to R1, R3 and are used as dummy gauges for temperature compensation. The resistances R1 and R3 are bonded diametrically opposite to each other for annulling the effect of bending.
Another very simple arrangement, mostly recommended for foil gauges (zero cross sensitivity), is shown in Figs. 5.8 (e) and (f). In this arrangement, the temperature compensating gauges are mounted perpendicular to the axis of the bar and hence are not subjected to the axial strains. Bending strains are avoided by mounting R1, R3 and R2, R4 diametrically opposite to each other. The sensitivity is doubled and the bridge is fully temperature compensated.
Figures 5.9 (a) and (b) show two arrangements to measure bending strains only. The gauges R1 and R2 are mounted diametrically opposite to each other; the arrangement is inherently insensitive to temperature effects. In the arrangement of Fig. 5.9 (a) the bridge works as a half bridge. The bridge can be made to work as a full bridge if the gauges R1 and R3 are mounted on one side and gauges R2, R4 on the diametrically opposite side of the test member, as shown in Fig. 5.9 (b). In the full bridge, the sensitivity is four times that of a single active gauge.
A cylindrical bar subjected to torsion has principal strain directions at 45o to the longitudinal axis of the bar. Both the bending and axial strains can be eliminated by mounting the strain gauges in the arrangement shown in Fig. 5.10. Since the principal axes of strain due to torsion are at 45o to the longitudinal axis of the cylinder and the strain gauges R1 and R2 are mounted as shown in Fig. 5.10, the resistance of R2 will increase (+∆R2) and that of R1 will decrease (-∆R1), the two acting in push-pull. Resistance changes due to axial strain and temperature will be the same for both the gauges R1 and R2 and thus compensate each other. Similar arguments hold good for gauges R3 and R4. Further, R1, R2 and R3, R4 are diametrically placed, so the effect of bending will also be compensated.
In a diaphragm type pressure pickup the diaphragm is rigidly clamped at the periphery and is uniformly loaded; both compressive and tensile strains coexist. The gauges are mounted such that the bridge acts as a full bridge, and temperature effects are compensated by the push-pull action of the gauges. Details of diaphragm type pressure pickups are given in the next chapter.
5. 5 STATIC AND DYNAMIC STRAIN MEASUREMENT
Strain gauges may be used under both static and dynamic conditions. Hence both static and dynamic calibration of the measuring arrangement are of interest.
5. 5. 1 STATIC CALIBRATION
Static calibration refers to a situation where an accurately known input is applied to the system and the corresponding output measured, while all other inputs are kept constant at some value. This procedure often cannot be applied in bonded strain gauge work because of the nature of the gauge (transducer). Normally, the gauge is bonded to the test member to measure the strains; once the gauge is bonded, it can hardly be transferred to a known strain situation for calibration. Therefore, when the gauge is used to experimentally measure strain, some other approach to the calibration problem is required. The method of calibration is based on the fact that the values of both the gauge factor and the gauge resistance are known accurately.
Resistance strain gauges are manufactured under carefully controlled conditions. The gauge factor for each lot of gauges is provided by the manufacturer with an indicated tolerance of about ±0.2%. Since both the gauge resistance and the gauge factor are known, a simple method of calibration is to determine the response of the system to the introduction of a known small resistance change at the gauge, and to calculate the equivalent strain therefrom. A number of precision high resistances are provided in parallel with the gauge, and a small change in the resistance of the gauge arm is obtained by shunting one of the precision resistors across it. Figure 5.11 shows an arrangement that may be used for calibration. When the switch S is closed, the resistance of the arm containing R1 is changed by a small amount ∆R; if Rc is the shunted calibration resistor, the magnitude of this change is
∆R = R1 - R1·Rc/(R1 + Rc) = R1²/(R1 + Rc).
A number of equivalent strain values can be obtained by shunting different calibration resistors across R1; a graph between the resistance change ∆R and the equivalent strain can then be plotted. The strain corresponding to the measured resistance changes when the gauge is bonded on the test member can then be read from this graph. A sketch of the equivalent strain calculation is given below.
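A minimal sketch of the shunt-calibration arithmetic described above, assuming a 120 ohm, GF = 2 gauge and example calibration resistors; the equivalent strain is ∆R/(R1·GF).

```python
# Sketch: equivalent strain produced by shunting a calibration resistor Rc
# across the gauge arm R1. The gauge values follow common practice
# (R1 = 120 ohm, GF = 2); the calibration resistors are assumed examples.

R1 = 120.0       # gauge (arm) resistance, ohm
GF = 2.0         # gauge factor

def equivalent_strain(Rc):
    """Magnitude of the strain simulated by shunting Rc across R1."""
    dR = R1 - (R1 * Rc) / (R1 + Rc)     # reduction of the arm resistance
    return dR / (R1 * GF)

for Rc in (120_000.0, 60_000.0, 30_000.0):
    print(f"Rc = {Rc/1000:.0f} kohm -> equivalent strain = {equivalent_strain(Rc)*1e6:.0f} microstrain")
```

For example, a 120 kohm shunt across a 120 ohm, GF = 2 gauge simulates roughly 500 microstrain.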
5. 5. 2 DYNAMIC CALIBRATION
Dynamic calibration of the strain gauge is carried out by rapidly shunting the gauge R1 repeatedly. This is achieved by replacing the manual calibration switch S with an electrically driven switch. This electrically driven switch is referred to as a 'chopper', which makes and breaks the contact 60 to 100 times per second. When the output of the bridge is displayed on a CRO screen or recorded, the trace obtained is found to be a square wave; the step in the trace represents the equivalent strain introduced by the shunt. The dynamic response of the bonded strain gauge, i.e. its ability to faithfully reproduce resistance changes corresponding to strain variations of the test member up to 50 kHz, is extremely good.
5. 5. 3 MEASUREMENT OF STRAINS
Bonded strain gauges are used to measure extremely small displacements (or strains). However, they can be used to measure large displacements as well, provided an intermediate elastic element is used; for example, gauges may be bonded to a cantilever and an unknown displacement applied at its end. The method does not require a separate gauge calibration: the system can be calibrated by giving known displacements to the end of the cantilever and studying the system's response.
Strain induced resistance changes are measured by a Wheatstone bridge circuit. The bridge can be excited either by a d.c. or an a.c. source. Further, the strain to be measured could be static, slowly varying or of a dynamic nature. For static strains, a galvanometer can be used for measurements. However, its use for dynamic strains is restricted due to the high inertia of its moving parts; therefore a pen recorder or a CRO which possesses the required dynamic range is used for dynamic strain measurements. Further, the output of the bridge is very small, and hence a high gain d.c. amplifier is required for static or slowly varying strains. A d.c. amplifier, however, has the following drawbacks for static strain measurement:
(i) Contact potentials and thermo emfs in the various parts of the circuit may be comparable to the unbalance voltage due to strain, and are amplified along with the signal.
(ii) Small changes in the terminal potentials at different stages due to d.c. drift will be of the same order of magnitude as the unbalance voltage and will be amplified along with the signal.
(iii) Stray potentials of 50 Hz from the vicinity of the long cables used for measurements may be picked up and amplified along with the signal.
Essentially there is no inbuilt discrimination in a d.c. amplifier between the desired signal and the undesired voltages that enter the circuit at its various stages. On the other hand, an a.c. amplifier, though suitable for dynamic measurements, cannot be used for static or slowly varying signals except when a chopper is used in the circuit. A method using a carrier frequency for both slowly varying and dynamic strains is described next.
Figure 5.13 shows an a.c. excited bridge. The a.c. source is connected across the terminals A and C of the bridge, and the output across the terminals B and D is fed to an a.c. amplifier. The frequency of the a.c. supply, often called the carrier, is usually between 50 Hz and 10 kHz, although higher frequencies may also be used. The working of an a.c. excited bridge is explained with reference to the d.c. excited bridge already discussed. When the bridge is balanced, the potential difference across BD is zero. If the resistance R1 is increased by ∆R1 due to the application of load, the potential of terminal B will be higher than that of D; conversely, if the resistance R1 is decreased, the reverse will hold good. Therefore tension and compression strains produce potentials of opposite polarity at the bridge output. For an a.c. excited bridge, however, the voltage at the terminals A and C is alternately positive and negative, and hence any unbalance of the bridge will result in an a.c. output. Indeed, both tensile and compressive strains will provide an a.c. output; it can, however, be shown that the two outputs differ in phase, the tensile strain producing an output in phase with the supply.
The effect of unbalancing the bridge by subjecting the gauges to strains is to amplitude modulate the a.c. signal of carrier frequency. Therefore, the output of the bridge can be amplified by an a.c. amplifier irrespective of the nature of the strain, i.e. static or dynamic. For the measurement of the strain itself, however, the amplified signal must subsequently be demodulated. It should also be noted that the raw bridge output voltages are unsuitable for recording, since, among other limitations, both tension and compression strains cause an increase in the amplitude of the carrier.
These limitations of an a.c. excited bridge for strain measurements can be eliminated in the following ways:
(i) The first method consists of rectifying and filtering the amplitude modulated carrier signal after amplification, thus eliminating the carrier frequency. To distinguish tension from compression, the bridge is initially unbalanced far enough that the applied strain will not bring it back to the balance point.
(ii) In the second method the signal is recovered after phase-sensitive demodulation, so that both tensile and compressive strain can be detected without initially unbalancing the bridge.
In practice a large number of gauges are mounted on a test member, and the strain values are desired at about the same time; therefore very fast switching between gauges is incorporated. This might lead to errors unless special care is exercised.