Instrument Calibration Procedures
Calibration refers to the adjustment of an instrument so its output accurately corresponds to its
input throughout a specified range.
The only way we can know that an instrument’s output accurately corresponds to its input over a
continuous range is to subject that instrument to known input values while measuring the
corresponding output signal values. This means we must use trusted standards to establish known
input conditions and to measure output signals.
Both input and output standards are used in the calibration of pressure and temperature
transmitters: an input standard establishes the known condition applied to the instrument, while
an output standard measures the resulting signal.
A noteworthy exception is the case of digital instruments, which output digital rather than analog
signals. In this case, there is no need to compare the digital output signal against a standard, as
digital numbers are not liable to calibration drift.
However, the calibration of a digital instrument still requires comparison against a trusted
standard in order to validate an analog quantity.
For example, a digital pressure transmitter must still have its input calibration values validated by a
pressure standard, even if the transmitter’s digital output signal cannot drift or be misinterpreted.
It is the purpose of this section to describe procedures for efficiently calibrating different types of
instruments.
Linear Instruments
The simplest calibration procedure for an analog, linear instrument is the so-called zero-and-span
method. The method is as follows:
1. Apply the lower-range value stimulus to the instrument, wait for it to stabilize
2. Move the “zero” adjustment until the instrument registers accurately at this point
3. Apply the upper-range value stimulus to the instrument, wait for it to stabilize
4. Move the “span” adjustment until the instrument registers accurately at this point
5. Repeat steps 1 through 4 as necessary to achieve good accuracy at both ends of the range (on
an analog instrument the zero and span adjustments typically interact, so several passes may be
needed)
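Why step 5 is necessary becomes clearer with a small numeric model. The following sketch (all
numbers hypothetical) assumes an analog 4-20 mA instrument whose span adjustment pivots about the
10% point of range rather than the 0% point, so every span change disturbs the zero; repeating
the four steps converges on a good calibration.

```python
# Hypothetical model of an analog transmitter with interacting zero/span
# adjustments: the span (gain) pivots about the 10% point, not the 0% point.
def output_mA(x_pct, zero, span):
    return 4.0 + 16.0 * ((x_pct / 100.0 - 0.1) * span + 0.1) + zero

zero, span = 0.3, 0.90  # an arbitrarily miscalibrated starting state

for cal_pass in range(1, 5):
    # Steps 1-2: apply the LRV (0%) and null the error with the zero adjustment.
    zero -= output_mA(0.0, zero, span) - 4.0
    # Steps 3-4: apply the URV (100%) and null the error with the span
    # adjustment (this model's sensitivity at 100% is 14.4 mA per unit of span).
    span += (20.0 - output_mA(100.0, zero, span)) / 14.4
    # Step 5: each span change disturbs the zero, so check both ends again.
    print(f"pass {cal_pass}: 0% reads {output_mA(0.0, zero, span):.4f} mA, "
          f"100% reads {output_mA(100.0, zero, span):.4f} mA")
```

Each printed pass shows the 100% point landing exactly on 20 mA while the 0% point is disturbed
by a progressively smaller amount, which is the convergence a technician sees at the bench.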
An improvement over this crude procedure is to check the instrument’s response at several points
between the lower- and upper-range values. A common example of this is the so-called five-point
calibration where the instrument is checked at 0% (LRV), 25%, 50%, 75%, and 100% (URV) of
range.
A variation on this theme is to check at the five points of 10%, 25%, 50%, 75%, and 90%, while still
making zero and span adjustments at 0% and 100%. Regardless of the specific percentage points
chosen for checking, the goal is to ensure that we achieve (at least) the minimum necessary
accuracy at all points along the scale, so the instrument’s response may be trusted when placed
into service.
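To make the arithmetic behind such a check concrete, here is a minimal sketch (readings
hypothetical) that compares the measured outputs of a linear 4-20 mA transmitter against the
ideal response at each of the five points, expressing every error as a percentage of span:

```python
# Five-point check of a linear 4-20 mA transmitter (hypothetical readings).
check_points = [0, 25, 50, 75, 100]               # percent of range
readings_mA  = [4.02, 8.05, 12.01, 15.95, 19.98]  # measured output at each point

for pct, measured in zip(check_points, readings_mA):
    expected = 4.0 + 16.0 * pct / 100.0           # ideal linear response
    error = (measured - expected) / 16.0 * 100.0  # error as % of span
    print(f"{pct:3d}%: expected {expected:5.2f} mA, "
          f"measured {measured:5.2f} mA, error {error:+.2f}% of span")
```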
Yet another improvement over the basic five-point test is to check the instrument’s response at
five calibration points decreasing as well as increasing. Such tests are often referred to as
up-down calibrations. The purpose of such a test is to determine if the instrument has any significant
hysteresis: a lack of responsiveness to a change in direction.
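The results of an up-down test can be reduced to a single worst-case hysteresis figure, as in
this sketch (readings hypothetical):

```python
# Up-down test data for a 4-20 mA instrument (hypothetical readings), with the
# "down" readings listed at the same percent-of-range points as the "up" ones.
points_pct = [0, 25, 50, 75, 100]
up_mA      = [4.00, 7.98, 11.97, 15.97, 19.99]   # stimulus increasing
down_mA    = [4.05, 8.06, 12.06, 16.03, 19.99]   # stimulus decreasing

worst = max(abs(u - d) for u, d in zip(up_mA, down_mA))
print(f"worst-case hysteresis: {worst / 16.0 * 100:.2f}% of span")
```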
Some analog instruments provide a means to adjust linearity. This adjustment should be moved
only if absolutely necessary! Quite often, these linearity adjustments are very sensitive, and prone
to over-adjustment by zealous fingers.
The linearity adjustment of an instrument should be changed only if the required accuracy cannot
be achieved across the full range of the instrument. Otherwise, it is advisable to adjust the zero
and span controls to “split” the error between the highest and lowest points on the scale, and
leave linearity alone.
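A worked sketch of what “splitting” the error means (errors hypothetical): a single zero shift
makes the worst positive and worst negative errors equal in magnitude, leaving the linearity
adjustment untouched.

```python
# Hypothetical five-point errors, in percent of span, before adjustment.
errors = [-0.30, -0.10, +0.05, +0.15, +0.20]

# Shift the whole response so the extreme errors straddle zero symmetrically.
shift = -(max(errors) + min(errors)) / 2.0
split = [round(e + shift, 3) for e in errors]

print(f"apply a zero shift of {shift:+.3f}% of span")
print("errors after splitting:", split)   # worst error is now +/-0.25%
```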
The procedure for calibrating a “smart” digital transmitter – also known as trimming – is a bit
different. Unlike the zero and span adjustments of an analog instrument, the “low” and “high”
trim functions of a digital instrument are typically non-interactive.
This means you should only have to apply the low- and high-level stimuli once during a calibration
procedure. Trimming the sensor of a “smart” instrument consists of these four general steps:
1. Apply the lower-range value stimulus to the instrument, wait for it to stabilize
2. Execute the “low” sensor trim function
3. Apply the upper-range value stimulus to the instrument, wait for it to stabilize
4. Execute the “high” sensor trim function
Likewise, trimming the output (Digital-to-Analog Converter, or DAC) of a “smart” instrument
consists of these six general steps:
1. Execute the “low” output trim test function
2. Measure the output signal with a precision milliammeter, noting the value after it stabilizes
3. Enter this measured current value when prompted by the instrument
4. Execute the “high” output trim test function
5. Measure the output signal with a precision milliammeter, noting the value after it stabilizes
6. Enter this measured current value when prompted by the instrument
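The arithmetic this output trim accomplishes can be sketched as follows (the readings, and the
simple linear correction model, are hypothetical; real transmitters perform the equivalent
correction internally):

```python
# Output (DAC) trim: the transmitter learns how its intended output relates to
# what the precision milliammeter actually measured, then corrects for it.
low_target,  low_measured  = 4.0, 3.987    # steps 1-3: "low" output trim
high_target, high_measured = 20.0, 19.945  # steps 4-6: "high" output trim

gain   = (high_target - low_target) / (high_measured - low_measured)
offset = low_target - gain * low_measured

def corrected_command(desired_mA):
    # Raw DAC command (in mA units) needed to actually produce desired_mA.
    return gain * desired_mA + offset

print(f"to output 12.000 mA, the DAC must be driven as if outputting "
      f"{corrected_command(12.0):.3f} mA")
```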
After both the input and output (ADC and DAC) of a smart transmitter have been trimmed (i.e.
calibrated against standard references known to be accurate), the lower- and upper-range values
may be set.
In fact, once the trim procedures are complete, the transmitter may be ranged and ranged again
as many times as desired. The only reason for re-trimming a smart transmitter is to ensure
accuracy over long periods of time where the sensor and/or the converter circuitry may have
drifted out of acceptable limits.
This stands in stark contrast to analog transmitter technology, where re-ranging necessitates re-
calibration every time.
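A brief sketch of why this is so: once the trims are accurate, ranging is nothing more than
arithmetic applied to the already-trusted digital measurement (values below hypothetical).

```python
# Ranging a trimmed smart transmitter is pure arithmetic on the trimmed reading.
def range_to_mA(measured, lrv, urv):
    pct = (measured - lrv) / (urv - lrv)   # percent of the configured range
    return 4.0 + 16.0 * pct

pressure = 150.0                            # trimmed sensor reading, e.g. PSI
print(range_to_mA(pressure, 0.0, 300.0))    # ranged 0-300 PSI      -> 12.0 mA
print(range_to_mA(pressure, 100.0, 300.0))  # re-ranged 100-300 PSI ->  8.0 mA
```

Re-ranging simply changes the LRV/URV constants in this mapping; the trimmed measurement itself
is untouched, so no re-trim is required.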
Non-linear Instruments
The calibration of inherently nonlinear instruments is much more challenging than for
linear instruments. No longer are two adjustments (zero and span) sufficient, because
more than two points are necessary to define a curve.
Examples of nonlinear instruments include expanded-scale electrical meters, square root
characterizers, and position-characterized control valves.
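As a concrete illustration of one item on that list, a square-root characterizer linearizes
orifice-plate flow measurements, where flow is proportional to the square root of the
differential pressure; a minimal sketch:

```python
import math

# Square-root characterizer: convert a 0-100% DP signal to a 0-100% flow signal.
def characterize(dp_pct):
    return 100.0 * math.sqrt(dp_pct / 100.0)

for dp in (0, 25, 50, 75, 100):
    print(f"{dp:3d}% DP -> {characterize(dp):6.2f}% flow")
```

Note how a mid-scale 50% DP signal corresponds to about 70.7% flow, which is why two adjustments
cannot define such an instrument’s response.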
Every nonlinear instrument will have its own recommended calibration procedure, so I will refer
you to the manufacturer’s literature for your specific instrument. I will, however, offer one piece
of advice: when calibrating a nonlinear instrument, document all the adjustments you make (e.g.
how many turns on each calibration screw) just in case you find the need to “re-set” the
instrument back to its original condition.
More than once I have struggled to calibrate a nonlinear instrument only to find myself further
away from good calibration than where I originally started. In times like these, it is good to know
you can always reverse your steps and start over!
Discrete Instruments
The word “discrete” means individual or distinct. In engineering, a “discrete” variable or
measurement refers to a true-or-false condition. Thus, a discrete sensor is one that is only able to
indicate whether the measured variable is above or below a specified setpoint.
Examples of discrete instruments are process switches designed to turn on and off at certain
values. A pressure switch used to turn an air compressor on whenever the air pressure falls below
85 PSI is one such instrument.
Discrete instruments require periodic calibration just like continuous instruments. Most discrete
instruments have just one calibration adjustment: the set-point or trip-point. Some process
switches have two adjustments: the set-point as well as a deadband adjustment.
The purpose of a deadband adjustment is to provide an adjustable buffer range that must be
traversed before the switch changes state. To use our 85 PSI low air pressure switch as an
example, the set-point would be 85 PSI; with a deadband of 5 PSI, the switch, once tripped at 85
PSI falling, would not change state again until the pressure rose above 90 PSI (85 PSI + 5 PSI).
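This behavior is captured in the short sketch below, which models the hypothetical 85 PSI
low-pressure switch with a 5 PSI deadband:

```python
# Low-pressure switch: trips at 85 PSI falling, re-sets above 90 PSI rising.
class LowPressureSwitch:
    def __init__(self, setpoint=85.0, deadband=5.0):
        self.setpoint, self.deadband = setpoint, deadband
        self.tripped = False

    def update(self, pressure):
        if not self.tripped and pressure < self.setpoint:
            self.tripped = True        # pressure fell below the set-point
        elif self.tripped and pressure > self.setpoint + self.deadband:
            self.tripped = False       # pressure rose past set-point + deadband
        return self.tripped

switch = LowPressureSwitch()
for p in (95, 90, 86, 84, 87, 89, 91):   # a falling-then-rising pressure sweep
    print(f"{p} PSI -> {'tripped' if switch.update(p) else 'normal'}")
```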
When calibrating a discrete instrument, you must be sure to check the accuracy of the set-point in
the proper direction of stimulus change. For our air pressure switch example, this would mean
checking to see that the switch changes state at 85 PSI falling, not 85 PSI rising.
If it were not for the existence of deadband, it would not matter which way the applied pressure
changed during the calibration test. However, deadband will always be present in a discrete
instrument, whether that deadband is adjustable or not.
For example, a pressure switch with a deadband of 5 PSI set to trip at 85 PSI falling would re-set at
90 PSI rising. Conversely, a pressure switch (with the same deadband of 5 PSI) set to trip at 85 PSI
rising would re-set at 80 PSI falling.
In both cases, the switch “trips” at 85 PSI, but the direction of pressure change specified
for that trip point defines which side of 85 PSI the re-set pressure will be found.
A procedure to efficiently calibrate a discrete instrument without too many trial-and-error
attempts is to set the stimulus at the desired value (e.g. 85 PSI for our hypothetical low-
pressure switch) and then move the set-point adjustment in the direction opposite to the intended
direction of stimulus change (in this case, increasing the set-point value until the switch
changes state).
The basis for this technique is the realization that most comparison mechanisms cannot tell the
difference between a rising process variable and a falling setpoint (or vice-versa).
Thus, a falling pressure may be simulated by a rising set-point adjustment. You should still perform
an actual changing-stimulus test to ensure the instrument responds properly under realistic
circumstances, but this “trick” will help you achieve good calibration in less time.
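A tiny sketch of the equivalence this trick relies on (deadband neglected for clarity): the
comparison element responds only to the difference between pressure and set-point, so raising the
set-point past a fixed 85 PSI stimulus trips the switch exactly as a falling pressure would.

```python
# The comparator only sees (pressure - setpoint): holding the stimulus at
# 85 PSI and raising the set-point trips the switch at the same crossing
# point as holding the set-point at 85 PSI and lowering the pressure.
def low_switch_tripped(pressure, setpoint):
    return pressure < setpoint

for sp in (84.6, 84.8, 85.0, 85.2, 85.4):   # rising set-point, fixed stimulus
    state = "tripped" if low_switch_tripped(85.0, sp) else "normal"
    print(f"set-point {sp} PSI -> {state}")
```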
Differential Pressure Sensor Calibration Procedure
Differential pressure sensors are common in the process industry and cover a variety of
applications. To understand what a differential pressure sensor is, it helps to contrast it with
the other pressure measurement types. The most common types of pressure measurement are absolute,
gauge, and differential.
Gauge Pressure: Gauge pressure is the pressure difference in reference to barometric (or
atmospheric) pressure, as shown in figure 1. This is the most common pressure measurement type in
industry today.
Absolute Pressure: Absolute pressure is referenced to an absolute vacuum, as shown in figure 1.
This is done by pulling a very hard vacuum, achieving as close to absolute zero pressure as
possible, and then referencing the zero of the sensor to that vacuum point. Often absolute
sensors utilize a gauge sensor and a barometric sensor and calculate the absolute pressure by
adding the barometric pressure to the gauge pressure.
Differential Pressure: Differential pressure (DP) can be independent of the atmospheric and
absolute pressures. It is the pressure difference between two applied pressures, as shown in
figure 1. These sensors are very useful for determining the pressure difference between two
places or systems, and are often used in flow calculation, filtration, fluid level, density, and
viscosity measurement.
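A short sketch of how the three measurement types relate numerically (all values hypothetical,
in PSI):

```python
# Gauge pressure is referenced to atmosphere; absolute adds the barometric
# reading; differential is simply the difference between two applied pressures.
barometric = 14.7                  # local atmospheric pressure (PSIA)
gauge      = 85.0                  # reading relative to atmosphere (PSIG)
absolute   = gauge + barometric    # reading relative to full vacuum (PSIA)
print(f"absolute pressure: {absolute:.1f} PSIA")

p_high, p_low = 112.3, 104.8       # two applied pressures
print(f"differential pressure: {p_high - p_low:.1f} PSID")
```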
Now that we’ve reviewed the different pressure types and how differential pressure compares to
the other measurement types, we can consider how to calibrate a DP sensor and some of the
challenges associated with doing so. First, let’s start with the challenges.
Common Challenges in Calibrating DP Sensors
Producing a stable, controlled pressure – To obtain a meaningful measurement for calibration, we
must be able to generate a stable pressure from a pressure source, such as a pump or a
controller. DP sensors can be very sensitive, so a solution that will produce and hold a stable
pressure is very important. The pump or controller also needs sufficient resolution to generate
exactly the desired pressure points. Producing a stable, controlled pressure with high resolution
is often a challenge because many pump solutions rely on check valves, or non-returning valves,
within the pump, as shown in figure 2. These check valves are prone to leaks over time and use,
and are often the source of frustration when trying to hold highly stable pressures for a DP
sensor calibration.
Temperature effects – Possibly the largest challenge in calibrating DP sensors is the impact of
the environmental temperature on the DP sensor and the calibration standards. Because many DP
sensors measure very low full-scale (FS) pressures, a small change in temperature can amount to a
very noticeable change in pressure. This change in temperature often equates to constant
instability in both the sensor being tested and the calibration standard (both reference gauge
and pump).
Changing atmospheric pressure – Several DP sensor manufacturers recommend that the calibration be
performed with the reference port (or low port) open to atmosphere. The challenge with this
requirement is that the atmospheric pressure changes constantly throughout a calibration, which
influences the stability and repeatability of the calibration results.
Methods of Calibration
Example 1 – Using a low pressure calibration pump and DP reference gauge with the DUT’s
reference port open to atmosphere
Required Equipment:
Low pressure calibration pump
Device under test
Reference DP Gauge
Lines and fittings to connect from the gauges to the pump
Connection (see figure 2)
The high ports of both gauges are connected to the calibration pump
The reference or low ports of each gauge are left open to atmosphere
Ensure the DUT is in the proper orientation (typically vertical or horizontal)
Procedure
Depending on the DUT, you may need to exercise the gauge multiple times to its full scale.
Ensure the vent valve is open and zero both the reference gauge and the DUT (assuming the DUT
is a digital gauge that requires regular zeroing).
Close the vent valve, then use the pump to increase the pressure to each calibration point,
recording the readings of both the DUT and the reference gauge once the measurement is stable.
Typically, 3-5 calibration points are taken upward and then downward in order to determine
hysteresis.
Pros: This method is inexpensive and the setup is easy.
Cons: You’ll need to account for barometric pressure and temperature changes throughout the
test. Depending on the environmental conditions this can produce very unstable measurements.
This is the least accurate method for calibration of DP sensors.
Example 2 – Using a low pressure calibration pump and DP reference gauge with the reference
(low) ports connected together
Required Equipment:
Low pressure calibration pump
Device under test
Reference DP Gauge
Lines and fittings to connect from the gauges to the pump and the gauges together
Connection (see figure 3)
The high ports of both gauges are connected to the calibration pump.
The reference or low ports of both gauges are connected together.
Ensure the DUT is in the proper orientation (typically vertical or horizontal).
Note: In this method pressure is generated on both the high and low pressure lines and the DP is
measured by the reference gauge. Depending on the DP range required, this arrangement may be the
best solution for reaching the full scale of the DUT.
Procedure
Depending on the DUT, you may need to exercise the gauge multiple times to its full scale.
Recording the zero point may vary depending on the type of DUT. If the DUT is a digital gauge,
then keep the reference gauge and the DUT reference ports connected together and zero both
gauges. If the DUT is an analog gauge that doesn’t require a regular zero, then disconnect both
reference ports and leave them open to atmosphere to zero the gauges. After recording the zero
point connect both the reference ports together and proceed through the calibration.
Close the vent valve, then use the pump to increase the pressure to each calibration point,
recording the readings of both the DUT and the reference gauge once the measurement is stable.
Typically, 3-5 calibration points are taken upward and then downward in order to determine
hysteresis.
Pros: This method is inexpensive and better accounts for atmospheric pressure changes
throughout the test. The stability at each point is improved from the first example.
Cons: The setup is more complicated than the first example, and temperature effects can
potentially have a larger impact than in the first example because the connected low (reference)
lines form a sealed system.
Example 3 – Using automated calibration equipment
Required Equipment:
Automatic Calibrator
Device under test
Lines and fittings to connect the DP gauge to the Calibrator
Connection (see figure 4)
Connect the high port of the DP gauge to the OUTLET port of the Automatic Calibrator.
Connect the low port of the DP gauge to the REF port of the Automatic Calibrator.
Ensure the DUT is in the proper orientation (typically vertical or horizontal).
Procedure
Depending on the DUT, you may need to exercise the gauge multiple times to its full scale.
Program in a task and run an automated test which will automatically generate the pressure,
stabilize the measurement, and allow for the DP gauge reading to be recorded.
Typically, 3-5 calibration points are taken upward and then downward, and the Calibrator will
automatically calculate the hysteresis and display the test results with pass/fail criteria.
Pros: This method is fully- or semi-automated depending on the DUT. Measurements are
controlled and stability is ensured by the Calibrator controller. The Calibrator is much less
influenced by changes in temperature and barometric pressure than the previous examples.
Results are automatically displayed and calculated. The Calibrator can calibrate pressure gauges
and transmitters.