
HAPI 134 FUNDAMENTALS OF INSTRUMENTATION AND METROLOGY

NOTES
Purpose and performance of measurement systems
A process is a system which generates information. Examples are a chemical reactor, a jet
fighter, a gas platform, a submarine, a car, a human heart, and a weather system.
Examples of information variables commonly generated by processes: a car generates
displacement, velocity and acceleration variables; a chemical reactor generates
temperature, pressure and composition variables.
We then define the observer as a person who needs this information from the process. This
could be the car driver, the plant operator or the nurse.
The purpose of the measurement system is to link the observer to the process, as shown in
Figure 1
Figure 1 Purpose of measurement system: the process supplies the true value of the
measured variable as input to the measurement system; the system output, the measured
value, is presented to the observer.
Here the observer is presented with a number which is the current value of the information
variable.
We can now refer to the information variable as a measured variable. The input to the
measurement system is the true value of the variable; the system output is the measured value
of the variable. In an ideal measurement system, the measured value would be equal to the
true value.
The accuracy of the system can be defined as the closeness of the measured value to the true
value.
A perfectly accurate system is a theoretical ideal and the accuracy of a real system is quantified
using measurement system error E, where
E = measured value − true value
E = system output − system input
Thus if the measured value of the flow rate of gas in a pipe is 11.0 m³/h and the true value is
11.2 m³/h, then the error E = −0.2 m³/h. If the measured value of the rotational speed of an
engine is 3140 rpm and the true value is 3133 rpm, then E = +7 rpm. Error is the main
performance indicator for a measurement system.
Structure of measurement systems
The measurement system consists of several elements or blocks. It is possible to identify four
types of element, although in a given system one type of element may be missing or may occur
more than once. The four types are shown in Figure 2 and can
be defined as follows:

Figure 2 General structure of measurement system: true value → sensing element →
signal conditioning element → signal processing element → data presentation element →
measured value.


Sensing element
This is in contact with the process and gives an output which depends in some way on the
variable to be measured. Examples are:
• Thermocouple where millivolt e.m.f. depends on temperature
• Strain gauge where resistance depends on mechanical strain
• Orifice plate where pressure drop depends on flow rate.
If there is more than one sensing element in a system, the element in contact with the process
is termed the primary sensing element, the others secondary sensing elements.
Signal conditioning element
This takes the output of the sensing element and converts it into a form more suitable for
further processing, usually a d.c. voltage, d.c. current or frequency signal.
Examples are:
• Deflection bridge which converts an impedance change into a voltage change
• Amplifier which amplifies millivolts to volts
• Oscillator which converts an impedance change into a variable frequency voltage.
Signal processing element
This takes the output of the conditioning element and converts it into a form more suitable for
presentation. Examples are:
• Analogue-to-digital converter (ADC) which converts a voltage into a digital form for input to a
computer
• Computer which calculates the measured value of the variable from the incoming digital data.
Typical calculations are:
• Computation of total mass of product gas from flow rate and density data
• Integration of chromatograph peaks to give the composition of a gas stream
• Correction for sensing element non-linearity.
Data presentation element
This presents the measured value in a form which can be easily recognised by the
observer. Examples are:
• Simple pointer–scale indicator
• Chart recorder
• Alphanumeric display
• Visual display unit (VDU).

Static characteristics of measurement system elements
Previously we saw that a measurement system consists of different types of element. We now
discuss the characteristics that typical elements may possess and their effect on the overall
performance of the system.
Static or steady-state characteristics are the relationships which may occur between the
output O and input I of an element when I is either at a constant value or changing slowly
(Figure 3).

Figure 3 Meaning of element characteristics: an element with input I and output O.


Systematic characteristics
Systematic characteristics are those that can be exactly quantified by mathematical or graphical
means. These are distinct from statistical characteristics which cannot be exactly quantified.
Range
The input range of an element is specified by the minimum and maximum values of I, i.e. IMIN to
IMAX. The output range is specified by the minimum and maximum values of O, i.e. OMIN to OMAX.
Thus a pressure transducer may have an input range of 0 to 10⁴ Pa and an output range of 4 to
20 mA; a thermocouple may have an input range of 100 to 250 °C and an output range of 4 to
10 mV.
Span
Span is the maximum variation in input or output, i.e. input span is IMAX – IMIN, and output span
is
OMAX − OMIN. Thus in the above examples the pressure transducer has an input span of 10⁴ Pa
and an output span of 16 mA; the thermocouple has an input span of 150 °C and an output
span of 6 mV.
Ideal straight line
An element is said to be linear if corresponding values of I and O lie on a straight
line. The ideal straight line connects the minimum point A(IMIN, OMIN) to the maximum
point B(IMAX, OMAX) (Figure 4) and therefore has the equation:

O − OMIN = [(OMAX − OMIN)/(IMAX − IMIN)](I − IMIN)   [2.1]

Ideal straight line equation:  OIDEAL = KI + a   [2.2]

where K = ideal straight-line slope = (OMAX − OMIN)/(IMAX − IMIN)

and a = ideal straight-line intercept = OMIN − KIMIN
Thus the ideal straight line for the above pressure transducer is: O = 1.6 × 10⁻³I + 4.0 (I in Pa, O in mA).
The ideal straight line defines the ideal characteristics of an element. Non-ideal characteristics
can then be quantified in terms of deviations from the ideal straight line.
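As a quick numerical check of eqns [2.1] and [2.2], the short Python sketch below computes K and a for the pressure transducer example above; the variable names are illustrative and not part of the original notes.

```python
# Ideal straight line O_IDEAL = K*I + a for the pressure transducer example:
# input range 0 to 1e4 Pa, output range 4 to 20 mA.
I_min, I_max = 0.0, 1.0e4      # input range, Pa
O_min, O_max = 4.0, 20.0       # output range, mA

K = (O_max - O_min) / (I_max - I_min)   # slope: 1.6e-3 mA/Pa
a = O_min - K * I_min                   # intercept: 4.0 mA

def O_ideal(I):
    """Ideal straight-line output for input I (eqn [2.2])."""
    return K * I + a

print(K)             # 0.0016
print(O_ideal(5e3))  # 12.0 mA at mid-range
```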

Non-linearity
In many cases the straight-line relationship defined by eqn [2.2] is not obeyed and the element
is said to be non-linear. Non-linearity can be defined (Figure 4) in terms of a function N(I)
which is the difference between actual and ideal straight-line behaviour, i.e.
N(I) = O(I) − (KI + a)   [2.3]
or
O(I) = KI + a + N(I)   [2.4]
Non-linearity is often quantified in terms of the maximum non-linearity N̂, expressed as a
percentage of full-scale deflection (f.s.d.), i.e. as a percentage of span. Thus:
Max. non-linearity as a percentage of f.s.d. = [N̂/(OMAX − OMIN)] × 100%   [2.5]
Figure 4 Definition of non-linearity: the actual characteristic O(I) deviates by N(I) from the
ideal straight line KI + a joining A(IMIN, OMIN) to B(IMAX, OMAX).

As an example, consider a pressure sensor where the maximum difference between
actual and ideal straight-line output values is 2 mV. If the output span is 100 mV, then the
maximum percentage non-linearity is 2% of f.s.d.
In many cases O(I ) and therefore N(I ) can be expressed as a polynomial in I:
O(I) = a0 + a1I + a2I² + . . . + aqI^q + . . . + amI^m = Σq aqI^q   [2.6]
An example is the temperature variation of the thermoelectric e.m.f. at the junction
of two dissimilar metals. For a copper–constantan (Type T) thermocouple junction,
the first four terms in the polynomial relating e.m.f. E(T), expressed in μV, and
junction temperature T °C are:

E(T) = 38.74T + 3.319 × 10⁻²T² + 2.071 × 10⁻⁴T³ + 2.195 × 10⁻⁶T⁴ + higher-order terms up to T⁸   [2.7a]
for the range 0 to 400 °C.[1] Since E = 0 μV at T = 0 °C and E = 20 869 μV at T = 400 °C,
the equation to the ideal straight line is:
EIDEAL = 52.17T   [2.7b]
and the non-linear correction function is:
N(T) = E(T) − EIDEAL = −13.43T + 3.319 × 10⁻²T² + 2.071 × 10⁻⁴T³ + 2.195 × 10⁻⁶T⁴ + higher-order terms   [2.7c]
In some cases expressions other than polynomials are more appropriate: for example, the
resistance R(T) ohms of a thermistor at T °C is given by:
R(T) = 0.04 exp[3300/(T + 273)]   [2.8]
Sensitivity
This is the change ΔO in output O for unit change ΔI in input I, i.e. it is the ratio
ΔO/ΔI. In the limit that ΔI tends to zero, the ratio ΔO/ΔI tends to the derivative dO/dI,
which is the rate of change of O with respect to I. For a linear element dO/dI is equal
to the slope or gradient K of the straight line; for the above pressure transducer the
sensitivity is 1.6 × 10⁻³ mA/Pa. For a non-linear element dO/dI = K + dN/dI, i.e.
sensitivity is the slope or gradient of the output versus input characteristic O(I).
Figure 5 shows the e.m.f. versus temperature characteristic E(T) for a Type T
thermocouple (eqn [2.7a]). We see that the gradient, and therefore the sensitivity, varies
with temperature: at 100 °C it is approximately 35 μV/°C and at 200 °C approximately
42 μV/°C.
Figure 5 Thermocouple sensitivity: the slope of the E(T) curve is approximately 35 μV/°C
at 100 °C and approximately 42 μV/°C at 200 °C.
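The sensitivity dO/dI of a non-linear element can be estimated numerically from its characteristic. The Python sketch below applies a central-difference derivative to the thermistor characteristic of eqn [2.8]; the step size h is an arbitrary illustrative choice.

```python
import math

def R(T):
    """Thermistor resistance in ohms at T deg C, eqn [2.8]."""
    return 0.04 * math.exp(3300.0 / (T + 273.0))

def sensitivity(f, x, h=1e-4):
    """Estimate the derivative df/dx by central differences."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

for T in (0.0, 25.0, 50.0):
    # dR/dT is negative and strongly temperature dependent:
    print(f"T = {T:5.1f} C  R = {R(T):9.1f} ohm  dR/dT = {sensitivity(R, T):8.1f} ohm/C")
```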

Environmental effects
In general, the output O depends not only on the signal input I but on environmental
inputs such as ambient temperature, atmospheric pressure, relative humidity, supply voltage,
etc. Thus if eqn [2.4] adequately represents the behaviour of the element under ‘standard’
environmental conditions, e.g. 20 °C ambient temperature, 1000 millibars atmospheric
pressure, 50% RH and 10 V supply voltage, then the equation must be modified to take account
of deviations in environmental conditions from ‘standard’. There are two main types of
environmental input.
A modifying input IM causes the linear sensitivity of an element to change. K is the sensitivity at
standard conditions when IM = 0. If the input is changed from the standard value, then IM is the
deviation from standard conditions, i.e. (new value – standard value). The sensitivity changes
from K to K + KMIM, where KM is the change
in sensitivity for unit change in IM. Figure 6(a) shows the modifying effect of ambient
temperature on a linear element.
An interfering input II causes the straight line intercept or zero bias to change. a
is the zero bias at standard conditions when II = 0. If the input is changed from the
standard value, then II is the deviation from standard conditions, i.e. (new value –
standard value). The zero bias changes from a to a + KI II, where KI is the change in
zero bias for unit change in II. Figure 6(b) shows the interfering effect of ambient
temperature on a linear element.
KM and KI are referred to as environmental coupling constants or sensitivities. Thus
we must now correct eqn [2.4], replacing the term KI with (K + KMIM)I and replacing a with
a + KI II to give:
O = KI + a + N(I) + KMIMI + KI II   [2.9]
Figure 6 Modifying and interfering inputs.

[Figure 6: (a) ambient temperature as a modifying input: at 20 °C (IM = 0) the sensitivity is K;
at 30 °C (IM = +10) it is K + 10KM; at 10 °C (IM = −10) it is K − 10KM. (b) ambient temperature
as an interfering input: the zero bias is a at 20 °C (II = 0), a + 10KI at 30 °C (II = +10) and
a − 10KI at 10 °C (II = −10).]

Hysteresis
For a given value of I, the output O may be different depending on whether I is
increasing or decreasing. Hysteresis is the difference between these two values of O
(Figure 7), i.e.
Hysteresis H(I) = O(I)I↓ − O(I)I↑   [2.10]
Hysteresis is usually quantified in terms of the maximum hysteresis Ĥ, expressed as a
percentage of f.s.d., i.e. of span. Thus:
Maximum hysteresis as a percentage of f.s.d. = [Ĥ/(OMAX − OMIN)] × 100%   [2.11]
A simple gear system (Figure 8) for converting linear movement into angular rotation provides
a good example of hysteresis. Due to the ‘backlash’ or ‘play’ in the gears the angular rotation θ,
for a given value of x, is different depending on the direction of the linear movement.
Resolution
Resolution is defined as the largest change in I that can occur without any corresponding
change in O.
Figure 7 Hysteresis.

Figure 8 Backlash in gears.

Wear and ageing
These effects can cause the characteristics of an element, e.g. K and a, to change slowly but
systematically throughout its life. One example is the stiffness of a spring k(t)
decreasing slowly with time due to wear, i.e.
k(t) = k0 − bt [2.12]
where k0 is the initial stiffness and b is a constant. Another example is the way the constants
a1, a2, etc. of a thermocouple measuring the temperature of gas leaving a cracking
furnace change systematically with time due to chemical changes in the thermocouple
metals.
Error bands
Non-linearity, hysteresis and resolution effects in many modern sensors and transducers are so
small that it is difficult and not worthwhile to exactly quantify each individual effect. In these
cases the manufacturer defines the performance of the element in terms of error bands
(Figure 9). Here the manufacturer states that for any value of I, the output O will be within
±h of the ideal straight-line value OIDEAL. An exact or systematic statement of performance
is thus replaced by a statistical statement in terms of a probability density function p(O).
In general a probability density function p(x) is defined so that the integral of p(x) dx
between x1 and x2 (equal to the area under the curve in Figure 10 between x1 and x2)
is the probability P(x1, x2) of x lying between x1 and x2 (Section 6.2).
In this case the probability density function is rectangular (Figure 9).
We note that the area of the rectangle is equal to unity: this is the probability of O
lying between OIDEAL − h and OIDEAL + h.
Figure 9 Error bands and rectangular probability density function.

Figure 10 Probability density function.

2.2 Generalised model of a system element
If hysteresis and resolution effects are not present in an element but environmental
and non-linear effects are, then the steady-state output O of the element is in general
given by eqn [2.9], i.e.:
O = KI + a + N(I ) + KMIMI + KI II [2.9]
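A minimal Python sketch of eqn [2.9], using the pressure transducer values from earlier as the linear part; the coupling constants KM and KI below are invented purely for illustration.

```python
def element_output(I, IM=0.0, II=0.0, K=1.6e-3, a=4.0,
                   N=lambda I: 0.0, KM=0.0, KI=0.0):
    """Steady-state output of an element, eqn [2.9]:
    O = K*I + a + N(I) + KM*IM*I + KI*II
    IM and II are the deviations of the modifying and interfering
    inputs from standard conditions."""
    return K * I + a + N(I) + KM * IM * I + KI * II

# Linear pressure transducer at standard conditions (IM = II = 0):
print(element_output(5e3))  # 12.0 mA
# With a +10 deg C ambient-temperature deviation acting as both a
# modifying and an interfering input (KM, KI values are illustrative):
print(element_output(5e3, IM=10.0, II=10.0, KM=1e-6, KI=0.02))  # 12.25 mA
```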

DYNAMIC CHARACTERISTICS OF MEASUREMENT SYSTEMS


If the input signal I to an element is changed suddenly, from one value to another,
then the output signal O will not instantaneously change to its new value. For
example, if the temperature input to a thermocouple is suddenly changed from 25 °C
to 100 °C, some time will elapse before the e.m.f. output completes the change from
1 mV to 4 mV. The ways in which an element responds to sudden input changes are
termed its dynamic characteristics, and these are most conveniently summarised using a
transfer function G(s).
If the input signal to a multi-element measurement system is changing rapidly, then the
waveform of the system output signal is in general different from that of the input
signal.
Transfer function G(s) for typical system elements
First-order elements
A good example of a first-order element is provided by a temperature sensor with an electrical
output signal, e.g. a thermocouple or thermistor. The bare element (not enclosed in a sheath) is
placed inside a fluid (Figure 11). Initially at time t = 0− (just before t = 0), the sensor
temperature is equal to the fluid temperature, i.e. T(0−) = TF(0−). If the fluid temperature
is suddenly raised at t = 0, the sensor is no longer in a steady state, and its dynamic
behaviour is described by the heat balance equation:
rate of heat inflow − rate of heat outflow = rate of change of sensor heat content
[4.1]
Figure 11 Temperature sensor in fluid: a sensor at T °C, with output O, immersed in a fluid
at TF °C and receiving heat at rate W.
Assuming that TF > T, the rate of heat outflow will be zero, and the rate of heat
inflow W will be proportional to the temperature difference (TF − T):
W = UA(TF − T) watts   [4.2]
where U W m⁻² °C⁻¹ is the overall heat transfer coefficient between fluid and sensor
and A m² is the effective heat transfer area.
The increase of heat content of the sensor is MC[T − T(0−)] joules, where M kg is the
sensor mass and C J kg⁻¹ °C⁻¹ is the specific heat of the sensor material. Thus,
assuming M and C are constants:
rate of increase of sensor heat content = MC d[T − T(0−)]/dt   [4.3]
Defining ΔT = T − T(0−) and ΔTF = TF − TF(0−) to be the deviations in temperatures
from initial steady-state conditions, the differential equation describing the sensor
temperature changes is:
UA(ΔTF − ΔT) = MC dΔT/dt
i.e. (MC/UA) dΔT/dt = ΔTF − ΔT   [4.4]
This is a linear differential equation in which dΔT/dt and ΔT are multiplied by
constant coefficients; the equation is first order because dΔT/dt is the highest
derivative present. The quantity MC/UA has the dimensions of time and is referred to as the
time constant τ for the system. The differential equation is now:
Linear first-order differential equation:  τ dΔT/dt = ΔTF − ΔT   [4.5]
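Eqn [4.5] is easy to integrate numerically. The sketch below uses a simple Euler scheme with illustrative values (τ = 10 s, a 75 °C step as in the 25 °C to 100 °C example) and compares the result with the analytic solution ΔT = ΔTF(1 − e^(−t/τ)).

```python
import math

tau = 10.0     # time constant MC/UA, s (illustrative value)
dTF = 75.0     # fluid temperature step, deg C (25 -> 100 deg C as in the text)
dt = 0.001     # Euler time step, s

dT = 0.0       # sensor temperature deviation at t = 0
for step in range(1, 50001):           # integrate from 0 to 50 s
    dT += dt * (dTF - dT) / tau        # eqn [4.5]: tau * d(dT)/dt = dTF - dT
    if step % 10000 == 0:              # report every 10 s (one time constant)
        t = step * dt
        exact = dTF * (1.0 - math.exp(-t / tau))
        print(f"t = {t:4.1f} s   Euler = {dT:6.2f}   exact = {exact:6.2f}")
```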

The transfer function based on the Laplace transform of the differential equation provides a
convenient framework for studying the dynamics of multi-element systems. The Laplace
transform f̄(s) of a time-varying function f(t) is defined by:
Definition of Laplace transform:  f̄(s) = ∫₀^∞ e^(−st) f(t) dt   [4.6]
where s is a complex variable of the form σ + jω, where j = √(−1).
Table 4.1 Laplace transforms of common time functions f(t): L[f(t)] = f̄(s) = ∫₀^∞ e^(−st) f(t) dt

Function             f(t)               f̄(s)
1st derivative       (d/dt)f(t)         s f̄(s) − f(0−)
2nd derivative       (d²/dt²)f(t)       s² f̄(s) − s f(0−) − ḟ(0−)
Unit impulse         δ(t)               1
Unit step            u(t)               1/s
Exponential decay    exp(−αt)           1/(s + α)
Exponential growth   1 − exp(−αt)       α/[s(s + α)]
Sine wave            sin ωt             ω/(s² + ω²)

In order to find the transfer function for the sensor we must find the Laplace transform
of eqn [4.5]. Using Table 4.1 we have:
τ[sΔT̄(s) − ΔT(0−)] + ΔT̄(s) = ΔT̄F(s)   [4.7]
where ΔT(0−) is the temperature deviation at initial conditions prior to t = 0. By
definition, ΔT(0−) = 0, giving τsΔT̄(s) + ΔT̄(s) = ΔT̄F(s), i.e.
(τs + 1)ΔT̄(s) = ΔT̄F(s)   [4.8]
The transfer function G(s) of an element is defined as the ratio of the Laplace transform
of the output to the Laplace transform of the input, provided the initial conditions are zero.
Thus:
Definition of element transfer function:  G(s) = f̄O(s)/f̄I(s)   [4.9]
and f̄O(s) = G(s)f̄I(s); this means the Laplace transform of the output signal is simply
the product of the element transfer function and the Laplace transform of the input signal.
Because of this simple relationship the transfer function technique lends itself to the study of
the dynamics of multi-element systems and block diagram representation
(Figure 4.2).
From eqns [4.8] and [4.9] the transfer function for a first-order element is:
G(s) = ΔT̄(s)/ΔT̄F(s) = 1/(τs + 1)   [4.10]
The above transfer function only relates changes in sensor temperature to changes in
fluid temperature. The overall relationship between changes in sensor output signal
O and fluid temperature is:
ΔŌ(s)/ΔT̄F(s) = (ΔO/ΔT) × ΔT̄(s)/ΔT̄F(s)   [4.11]
where ΔO/ΔT is the steady-state sensitivity of the temperature sensor. For an ideal
element ΔO/ΔT will be equal to the slope K of the ideal straight line. For non-linear
elements, subject to small temperature fluctuations, we can take ΔO/ΔT = dO/dT, the
derivative being evaluated at the steady-state temperature T(0−) around which the
fluctuations are taking place. Thus for a copper–constantan thermocouple measuring
small fluctuations in temperature around 100 °C, ΔE/ΔT is found by evaluating dE/dT
at 100 °C (see Section 2.1) to give ΔE/ΔT = 35 μV °C⁻¹. Thus if the time constant of the
thermocouple is 10 s, the overall dynamic relationship between changes in e.m.f. and
fluid temperature is:
ΔĒ(s)/ΔT̄F(s) = 35 × 1/(1 + 10s)   [4.12]
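Taking the inverse Laplace transform of eqn [4.12] for a step change ΔTF in fluid temperature gives ΔE(t) = 35 ΔTF (1 − e^(−t/10)) μV. The short sketch below evaluates this step response; the 2 °C step size is an assumed illustrative value.

```python
import math

K_E = 35.0    # steady-state sensitivity dE/dT at 100 deg C, uV per deg C
tau = 10.0    # thermocouple time constant, s
dTF = 2.0     # assumed small step in fluid temperature, deg C

def dE(t):
    """Step response implied by eqn [4.12]: 35/(1 + 10s) acting on a step."""
    return K_E * dTF * (1.0 - math.exp(-t / tau))

for t in (0.0, 10.0, 30.0, 50.0):
    print(f"t = {t:4.1f} s   dE = {dE(t):6.2f} uV")
# dE settles at K_E * dTF = 70 uV after several time constants
```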

Identification of static characteristics – calibration


Standards
The static characteristics of an element can be found experimentally by measuring
corresponding values of the input I, the output O and the environmental inputs IM and
II, when I is either at a constant value or changing slowly. This type of experiment is referred to
as calibration, and the measurement of the variables I, O, IM and II must
be accurate if meaningful results are to be obtained. The instruments and techniques
used to quantify these variables are referred to as standards (Figure 2.15).

Figure 2.15 Calibration of an element: the input I, output O and environmental inputs IM
and II of the element or system to be calibrated are each measured with a standard
instrument.

The accuracy of a measurement of a variable is the closeness of the measurement to the true
value of the variable. It is quantified in terms of measurement error, i.e. the difference between
the measured value and the true value. Thus the accuracy of a laboratory standard pressure
gauge is the closeness of the reading to the true value of pressure. The true value of a variable
is the measured value obtained with a standard of ultimate accuracy.
Thus the accuracy of the above pressure gauge is quantified by the difference between the
gauge reading, for a given pressure, and the reading given by the ultimate pressure standard.
However, the manufacturer of the pressure gauge may not have access to the ultimate
standard to measure the accuracy of his products.
In the United Kingdom the manufacturer is supported by the National Measurement
System. Ultimate or primary measurement standards for key physical variables such as time,
length, mass, current and temperature are maintained at the National Physical Laboratory
(NPL). Primary measurement standards for other important industrial variables such as the
density and flow rate of gases and liquids are maintained at the National Engineering
Laboratory (NEL). In addition there is a network of laboratories and centres throughout the
country which maintain transfer or intermediate standards. These centres are accredited by
UKAS (United Kingdom Accreditation Service). Transfer standards held at accredited centres are
calibrated against national primary and secondary standards, and a manufacturer can calibrate
his products against the transfer standard at a local centre. Thus the manufacturer of pressure
gauges can calibrate his products against a transfer standard, for example a deadweight tester.
The transfer standard is in turn calibrated against a primary or secondary standard, for example
a pressure balance at NPL. This introduces the concept of a traceability ladder, which is shown
in simplified form in Figure 2.16.
Traceability is the property of the result of a measurement or the value of a standard whereby
it can be related to stated references, usually national or international standards, through an
unbroken chain of comparisons all having stated uncertainties.
Calibration is a comparison of a measuring standard, measuring instrument or equipment with
a measuring standard of higher accuracy.
The element is calibrated using the laboratory standard, which should itself be
calibrated using the transfer standard, and this in turn should be calibrated using the
primary standard. Each element in the ladder should be significantly more accurate
than the one below it.

SI units
Having introduced the concepts of standards and traceability we can now discuss
different types of standards in more detail. The International System of Units (SI)
comprises seven base units, which are listed and defined in Table 2.3. The units of
all physical quantities can be derived from these base units.

Primary standard, e.g. NPL pressure balance
Transfer standard, e.g. deadweight tester
Laboratory standard, e.g. standard pressure gauge
Element to be calibrated, e.g. pressure transducer
(accuracy increases towards the top of the ladder)
Figure 2.16 Simplified traceability ladder.
Table 2.3 SI base units (after National Physical Laboratory 'Units of Measurement' poster, 1996[4]).
Time: second (s). The second is the duration of 9 192 631 770 periods of the radiation
corresponding to the transition between the two hyperfine levels of the ground state of the
caesium-133 atom.
Length: metre (m). The metre is the length of the path travelled by light in vacuum during a
time interval of 1/299 792 458 of a second.
Mass: kilogram (kg). The kilogram is the unit of mass; it is equal to the mass of the
international prototype of the kilogram.
Electric current: ampere (A). The ampere is that constant current which, if maintained in two
straight parallel conductors of infinite length, of negligible circular cross-section, and placed
1 metre apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷
newton per metre of length.
Thermodynamic temperature: kelvin (K). The kelvin, unit of thermodynamic temperature, is
the fraction 1/273.16 of the thermodynamic temperature of the triple point of water.
Amount of substance: mole (mol). The mole is the amount of substance of a system which
contains as many elementary entities as there are atoms in 0.012 kilogram of carbon-12.
Luminous intensity: candela (cd). The candela is the luminous intensity, in a given direction,
of a source that emits monochromatic radiation of frequency 540 × 10¹² hertz and that has a
radiant intensity in that direction of (1/683) watt per steradian.
The NPL is the custodian of ultimate or primary standards in the UK. For Zimbabwe it is SIRDC.
There are secondary standards held at other accreditation service centres. These have been
calibrated against SIRDC standards and are available to calibrate transfer standards.
At NPL, the metre is realised using the wavelength of the 633 nm radiation from an iodine-
stabilised helium–neon laser. The reproducibility of this primary standard is about 3 parts in
10¹¹, and the wavelength of the radiation has been accurately related to the definition of the
metre in terms of the velocity of light. The primary standard is used to calibrate secondary laser
interferometers which are in turn used to calibrate precision length bars, gauges and tapes.
The international prototype of the kilogram is made of platinum–iridium and
is kept at the International Bureau of Weights and Measures (BIPM) in Paris. The
British national copy is kept at NPL and is used in conjunction with a precision balance
to calibrate secondary and transfer kilogram standards.
At NPL deadweight machines covering a range of forces from 450 N to 30 MN are
used to calibrate strain-gauge load cells and other weight transducers.
The second is realised by caesium beam standards to about 1 part in 10¹³; this is
equivalent to one second in 300 000 years! A uniform timescale, synchronised to 0.1
microsecond, is available worldwide by radio transmissions; this includes satellite
broadcasts.
The ampere has traditionally been the electrical base unit and has been realised at NPL using
the Ayrton–Jones current balance; here the force between two current-carrying coils is
balanced by a known weight.

METROLOGY AND THE NEW STANDARDS FOR THE EVALUATION AND
EXPRESSION OF MEASUREMENT UNCERTAINTY

Metrology is the science and practice of measurement.

Measurements are done in an extremely diverse scale of activities for purposes of trade,
manufacturing and scientific disciplines. Measuring methods and instruments vary widely,
according to the various fields and disciplines. Yet there are some general characteristics to
measurements that are always valid, for any kind of measurement. We restrict ourselves
here to the measurement of physical quantities.

• A measurement is always meant to give us information about the value of a certain


property of an object or a certain quantity in a process.
• The result of a measurement is never exact. No matter how hard we try to measure
as accurately as possible, there is always a limit to the accuracy. The metrological
term to express this limited accuracy is uncertainty.

MEASUREMENT UNCERTAINTY

WHAT IS MEASUREMENT UNCERTAINTY AND WHY IS IT IMPORTANT?


The following is an article, copied from the internet site of UKAS, the United Kingdom
Accreditation Service, explaining the relevance of metrology and measurement uncertainty
for the quality and acceptance of products and services:

WHAT IS UNCERTAINTY?
It is a parameter, associated with the result of a measurement (eg a calibration or test)
that defines the range of the values that could reasonably be attributed to the measured
quantity. When uncertainty is evaluated and reported in a specified way it indicates the
level of confidence that the value actually lies within the range defined by the
uncertainty interval.

HOW DOES IT ARISE?


Any measurement is subject to imperfections; some of these are due to random effects,
such as short-term fluctuations in temperature, humidity and air-pressure or variability in
the performance of the measurer. Repeated measurements will show variation because of
these random effects. Other imperfections are due to the practical limits to which
correction can be made for systematic effects, such as offset of a measuring instrument,
drift in its characteristics between calibrations, personal bias in reading an analogue scale
or the uncertainty of the value of a reference standard.

WHY IS IT IMPORTANT?

The uncertainty is a quantitative indication of the quality of the result. It gives an answer to
the question, how well does the result represent the value of the quantity being measured?
It allows users of the result to assess its reliability, for example for the purposes of
comparison of results from different sources or with reference values. Confidence in the
comparability of results can help to reduce barriers to trade.

Often, a result is compared with a limiting value defined in a specification or regulation. In


this case, knowledge of the uncertainty shows whether the result is well within the
acceptable limits or only just makes it. Occasionally a result is so close to the limit that the
risk associated with the possibility that the property that was measured may not fall within
the limit, once the uncertainty has been allowed for, must be considered.

Suppose that a customer has the same test done in more than one laboratory, perhaps on
the same sample, more likely on what they may regard as an identical sample of the same
product. Would we expect the laboratories to get identical results? Only within limits, we
may answer, but when the results are close to the specification limit it may be that one
laboratory indicates failure whereas another indicates a pass. From time to time
accreditation bodies have to investigate complaints concerning such differences. This can
involve much time and effort for all parties, which in many cases could have been avoided if
the uncertainty of the result had been known by the customer.

WHAT IS DONE ABOUT IT?


Laboratories that operate in accordance with the requirements of EN 45001 (eg UKAS-
accredited) are required by the standard to report results with a statement of uncertainty,
where relevant. The standard also states that quantitative results shall be accompanied by
a statement of uncertainty. The problems presented by these requirements vary in nature
and severity depending on the technical field and whether the measurement is a calibration
or test.

Calibration is characterised by the facts that


(i) repeated measurements can be made
(ii) uncertainty of reference instruments is provided at each stage down the
calibration chain, starting with the national standard and
(iii) customers are aware of the need for a statement of uncertainty in order to
ensure that the instrument meets their requirements.

Consequently, calibration laboratories are used to evaluating and reporting uncertainty. In


accredited laboratories the uncertainty evaluation is subject to assessment by the
accreditation body and is quoted on calibration certificates issued by the laboratory.

HOW IS UNCERTAINTY EVALUATED?


Uncertainty is a consequence of the unknown sign of random effects and limits to
corrections for systematic effects and is therefore expressed as a quantity, ie an interval
about the result. It is evaluated by combining a number of uncertainty components. The
components are quantified either by evaluation of the results of several repeated
measurements or by estimation based on data from records, previous measurements,
knowledge of the equipment and experience of the measurement.

In most cases, repeated measurement results are distributed about the average in the
familiar bell-shaped curve or normal distribution, in which there is a greater probability
that the value lies closer to the mean than to the extremes. The evaluation from repeated
measurements is done by applying a relatively simple mathematical formula. This is
derived from statistical theory and the parameter that is determined is the standard
deviation.

Uncertainty components quantified by means other than repeated measurements are also
expressed as standard deviations, although they may not always be characterised by the
normal distribution. For example, it may be possible only to estimate that the value of a
quantity lies within bounds (upper and lower limits) such that there is an equal probability
of it lying anywhere within those bounds. This is known as a rectangular distribution.
There are simple mathematical expressions to evaluate the standard deviation for this and
a number of other distributions encountered in measurement. An interesting one that is
sometimes encountered, eg in EMC measurements, is the U-shaped distribution.

The method of combining the uncertainty components is aimed at producing a realistic


rather than pessimistic combined uncertainty. This usually means working out the square
root of the sum of the squares of the separate components (the root sum square method).
The combined standard uncertainty may be reported as it stands (the one standard
deviation level), or, usually, an expanded uncertainty is reported. This is the combined
standard uncertainty multiplied by what is known as a coverage factor. The greater this
factor the larger the uncertainty interval and, correspondingly, the higher the level of
confidence that the value lies within that interval. For a level of confidence of
approximately 95% a coverage factor of 2 is used. When reporting uncertainty it is
important to indicate the coverage factor or state the level of confidence, or both.
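A minimal sketch of the root sum square combination and a k = 2 expanded uncertainty; the component values are invented for illustration.

```python
import math

# Illustrative standard uncertainty components, one standard deviation
# each, all expressed in the same unit as the result:
components = [0.10, 0.25, 0.07, 0.15]

u_c = math.sqrt(sum(u**2 for u in components))   # root sum square
k = 2.0                                          # coverage factor, ~95% confidence
U = k * u_c                                      # expanded uncertainty

print(f"combined standard uncertainty u_c = {u_c:.3f}")
print(f"expanded uncertainty U = {U:.3f} (k = {k:.0f})")
```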

WHAT IS BEST PRACTICE?


Sector-specific guidance is still needed in several fields in order to enable laboratories to
evaluate uncertainty consistently. Laboratories are being encouraged to evaluate
uncertainty, even when reporting is not required; they will then be able to assess the
quality of their own results and will be aware whether the result is close to any specified
limit. The process of evaluation highlights those aspects of a test or calibration that
produce the greatest uncertainty components, thus indicating where improvements could
be beneficial. Conversely, it can be seen whether larger uncertainty contributions could be
accepted from some sources without significantly increasing the overall interval. This could
give the opportunity to use cheaper, less sensitive equipment or provide justification for
extending calibration intervals.

Uncertainty evaluation is best done by personnel who are thoroughly familiar with the test
or calibration and understand the limitations of the measuring equipment and the
influences of external factors, eg environment. Records should be kept showing the
assumptions that were made, eg concerning the distribution functions referred to above,
and the sources of information for the estimation of component uncertainty values, eg
calibration certificates, previous data, experience of the behaviour of relevant materials.

STATEMENTS OF COMPLIANCE - EFFECT OF UNCERTAINTY


This is a difficult area and what is to be reported must be considered in the context of the
client's needs. In particular, consideration must be given to the possible consequences and
risks associated with a result that is close to the specification limit. The uncertainty may be
such as to raise real doubt about the reliability of pass/fail statements. When uncertainty is
not taken into account, then the larger the uncertainty, the greater are the chances of
passing failures and failing passes. A lower uncertainty is usually attained by using better
equipment, better control of environment, and ensuring consistent performance of the test.

For some products it may be appropriate for the user to make a judgement of compliance,
based on whether the result is within the specified limits with no allowance made for
uncertainty. This is often referred to as shared risk, since the end user takes some of the
risk of the product not meeting specification. The implications of that risk may vary
considerably. Shared risk may be acceptable in non-safety critical performance, for
example the EMC characteristics of a domestic radio or TV. However, when testing a heart
pacemaker or components for aerospace purposes, the user may require that the risk of the
product not complying has to be negligible and would need uncertainty to be taken into
account. An important aspect of shared risk is that the parties concerned agree on the
uncertainty that is acceptable; otherwise disputes could arise later.

CONCLUSION
Uncertainty is an unavoidable part of any measurement and it starts to matter when
results are close to a specified limit. A proper evaluation of uncertainty is good
professional practice and can provide laboratories and customers with valuable
information about the quality and reliability of the result.

Uncertainty in practice

Print this page, and then use a ruler to measure the distance
between these lines, in centimetres to two decimal places (eg 4.28
cm). Make a note of the measurement, and of the ruler you use.

Now do it with another ruler; note the result, and also make a note
of which ruler you use. Repeat the job with as many different
rulers as you can find, noting the ruler used each time. Are the
measurements all the same?

Now ask colleagues to do the same, noting the measurements for


each ruler.
Do different people produce different results with the same ruler?
Do different rulers give consistent results?

Now give one of the rulers to someone else and get them to
measure this distance.

How confident in the result are you?

SOME DEFINITIONS
measurement

set of operations having the object of determining the value of a quantity

measurand

particular quantity subject to measurement


EXAMPLE: Vapour pressure of a given sample of water at 20 °C.

NOTE: The specification of a measurand may require statements about quantities such as
time, temperature and pressure.

uncertainty of measurement

parameter, associated with the result of a measurement, that characterises the dispersion
of the values that could reasonably be attributed to the measurand.

NOTES

1 The parameter may be, for example, a standard deviation (or a given multiple of it),
or the half-width of an interval having a stated level of confidence.
[Diagram: an uncertainty interval extending a below and b above the estimated value.]

2 Uncertainty of measurement comprises, in general, many components. Some of


these components may be evaluated from the statistical distribution of the results of
series of measurements and can be characterised by experimental standard
deviations. The other components, which can also be characterised by standard
deviations, are evaluated from assumed probability distributions based on
experience or other information.

3 It is understood that the result of the measurement is the best estimate of the value
of the measurand, and all components of uncertainty, including those arising from
systematic effects, such as components associated with corrections and reference
standards, contribute to the dispersion.

UNDERSTANDING THE DEFINITION OF UNCERTAINTY


What is important to realise is that the result of a measurement is an estimate, an
approximation of the true value (the true value cannot be known, so the best information
we have is the estimated measurement result). The uncertainty is a number that somehow
indicates how far our estimate might be away from that true value.

Let's illustrate the situation with the example of a shooting contest: the beginners are
expected to miss the target by far, while the expert shots will get much closer. But even
the most experienced shooters may miss the bullseye by some distance, and anyone may
be lucky and hit the bullseye by sheer coincidence.

The difference in the measurement situation is that nobody knows where the bulls-eye is
(the unknown true value). Yet we have to make a good guess of how far we may have
missed the target. This is the subject of uncertainty analysis and the result of that analysis
is another number, the uncertainty. Just like the measurement value, the uncertainty is also
an estimate.

The uncertainty defines an uncertainty interval around the estimated value, within which
we think the true value probably lies.

Because all these are only estimates, it is impossible to be sure that the true value lies
within the given interval. We therefore resort to the language of statistics and estimate the
probability that the true value lies within the given uncertainty interval. That probability
is called the confidence level.

CONFIDENCE LEVEL = ESTIMATED PROBABILITY THAT THE TRUE VALUE LIES


WITHIN THE GIVEN UNCERTAINTY INTERVAL.

In analogy with the shooting contest we assume that most of our estimated results will be
close to the true value, with a decreasing chance, the further we get away from the true
value.

Therefore we can make the confidence level very high by choosing a large uncertainty.
However, the usefulness and meaningfulness of result will be higher if we have a smaller
uncertainty, corresponding with a higher accuracy.

And so we have to find a reasonable compromise in the trade-off between accuracy and
confidence level.
DISPERSION, PROBABILITY DISTRIBUTIONS

If a measurement is repeated, even under the same conditions, the measured values will
not be exactly equal, leading to a limited repeatability. This is caused by fluctuations which
are intrinsic to the measurand or the measurement process, but also to all kinds of
changing environmental conditions which we cannot completely control or which we may
not even be aware of. The spread of values obtained from a series of independent
measurements is called a dispersion.

A probability distribution shows the way in which the results are spread over the total
range of values. The values may be evenly distributed over a certain range or may be
concentrated in the middle or even at the ends.
The three most important examples are shown in the figure below.

[Figure: some examples of distributions — normal, uniform and U-shaped — each with an
average value of 2.5 and a standard deviation of 1; probability density is plotted against
value, with the average and the standard deviation indicated.]

THE STANDARD DEVIATION


The uncertainty of a measurement result is usually a summation of several uncertainty
contributions from different origins and consequently with different probability
distributions. The values of those contributions must be expressed in the same way in
order to get a meaningful addition. The most suitable measure for the uncertainty is the
standard deviation.

This is a well-defined statistical parameter that can be calculated for each of the
distributions.

The actual calculations are described in chapter 4. First we will discuss the different
distributions and where we may expect to encounter them.

THE NORMAL DISTRIBUTION


The normal or Gaussian distribution is found when fluctuations of the values are due to a
combination of many individual fluctuations, such as thermodynamic energy fluctuations
(eg noise in electrical circuits, temperature measurements, flow measurements, etc.). It is
the most commonly found distribution.

The highest probability is in the middle (the average). The range of possible values
stretches to infinity, symmetrically on both sides, with the probability falling rapidly to
very low values, the further we get away from the average.

If we want to know the probability of finding a measurement value within a certain interval
around the average we can take the surface area under the graph over that interval. The
parameter that is used as a reference is the standard deviation, indicated by the symbol σ
(sigma).

[Figure: normal distribution with standard deviation 0.5; the intervals ±1σ, ±2σ and ±3σ
about the average are marked.]

An interval of two standard deviations (−σ to +σ) contains about 68% of all values,
four standard deviations (−2σ to +2σ) about 95% and six standard deviations (−3σ to +3σ)
contain 99.7%. In real measurements the number of values very far from the average
(beyond about 5σ) is lower than the number predicted by the theoretical curve, so we may
assume that in practice the normal distribution goes to zero rather than stretching to infinity.
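These coverage fractions can be reproduced directly: for a normal distribution, the probability of lying within ±kσ of the mean is erf(k/√2). A one-function Python check:

```python
import math

def coverage(k):
    """Probability that a normally distributed value lies within
    +/- k standard deviations of the mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2.0))

for k in (1, 2, 3):
    print(f"within +/-{k} sigma: {coverage(k) * 100:.2f} %")
# prints about 68.27 %, 95.45 % and 99.73 %
```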

THE UNIFORM DISTRIBUTION


The uniform or rectangular distribution occurs when there are clear boundaries beyond
which the value cannot exist. A good example is the uncertainty due to limited resolution.
Suppose for instance that a four-digit digital display reads 1527. This means that the actual
value could be anywhere between 1526.5 and 1527.5 (regardless of any possible additional
deviations). The chance of the actual value being anywhere within the given interval is the
same throughout the interval and zero outside the interval. Therefore this is a uniform
distribution.

The uniform distribution is often assumed if we do not know anything about the real
distribution apart from some given “worst case” limits. Sometimes the specifications of a
measuring instrument are given as maximum deviations by the manufacturer. Any
deviation within that tolerance has equal chance.

[Figure: rectangular distribution extending a half-width a on either side of the average value.]

The standard deviation of a uniform distribution with a half-width a is:
σ = a/√3
It can easily be seen that an interval of 4 standard deviations (2σ on either side of the
average) contains 100% of the possible values, unlike the normal distribution where it was
only about 95%.

THE TRIANGULAR DISTRIBUTION


The triangular distribution is used when it is known that most of the values are likely to be
near the centre of the distribution. The standard uncertainty (standard deviation) is found
by dividing the half-interval a by √6, i.e. a/√6. For example, suppose that the lab temperature
is controlled by a continuous cooling/variable re-heat system in such a way that the actual
temperature is always near the centre of the range 20 °C ± 2 °C. The half-interval of the
allowed temperature range is then 2 °C and the standard uncertainty is given by:
uTEMP = 2/√6 = 0.82 °C

U-SHAPED DISTRIBUTION

In a U-shaped distribution there are two positions where the measured value is most likely
to be. It is unknown in which of the two positions the value is. This distribution is rather
rare, but it does occur in some special cases. One example is the mismatch uncertainty in
high frequency electrical measurements.

[Figure: U-shaped distribution extending a half-width a on either side of the average value.]

For a U-shaped distribution with a half-width a, the standard deviation is given by:
σ = a/√2
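The three divisors (√3 rectangular, √6 triangular, √2 U-shaped) are easy to collect in one sketch. The rectangular and triangular values below come from the worked examples in the text; the U-shaped half-width of 0.1 is an invented illustration.

```python
import math

def u_uniform(a):    return a / math.sqrt(3.0)   # rectangular distribution
def u_triangular(a): return a / math.sqrt(6.0)   # triangular distribution
def u_ushaped(a):    return a / math.sqrt(2.0)   # U-shaped distribution

# Resolution of a digital display reading 1527: half-width a = 0.5
print(f"uniform:    {u_uniform(0.5):.3f}")       # ~0.289
# Lab temperature held near the centre of 20 +/- 2 deg C:
print(f"triangular: {u_triangular(2.0):.2f} C")  # ~0.82 deg C, as in the text
# Mismatch with assumed limits of +/- 0.1 in an HF measurement:
print(f"U-shaped:   {u_ushaped(0.1):.3f}")       # ~0.071
```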

COMBINING MANY DISTRIBUTIONS

When several parameters, each with their own dispersion, are combined, the resulting
distribution quickly approaches the normal distribution, no matter what the shape of the
original distributions is.

As an example, consider the probability distributions of the total score of one, two and
three dice. When one dice is thrown, all six possible values have equal chance and therefore
the distribution is uniform. If another dice is thrown, this also results in a uniform
distribution. However, if the scores of the two dice are added, the resulting distribution is
triangular. If the scores of three dice are added up, the total score has a distribution that
already looks very much like the normal distribution.
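The dice example is easy to reproduce by simulation. The sketch below (illustrative, using Python's random module) throws one, two and three dice many times and prints the observed distribution of the total score.

```python
import random
from collections import Counter

random.seed(1)
n_throws = 100_000

for n_dice in (1, 2, 3):
    totals = Counter(sum(random.randint(1, 6) for _ in range(n_dice))
                     for _ in range(n_throws))
    dist = {score: round(count / n_throws, 3)
            for score, count in sorted(totals.items())}
    print(n_dice, "dice:", dist)
# one die: flat (uniform); two dice: triangular;
# three dice: already close to bell-shaped (normal)
```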

[Figure: bar graphs of the probability of each score for one die (uniform), for the total score
of two dice (triangular) and for the total of three dice, the last compared with a normal
distribution which it already closely resembles.]

Note that the throwing of dice leads to discrete probabilities for each possible outcome.
This is due to the fact that there are only a limited number of possible values, represented
in the number of bars in the bar graphs. In measurements the result can take any value and
therefore has a continuous character, represented by a continuous curve in the graph. There
is, however, no fundamental difference between the discrete and continuous probability
distributions, and the result of this example also applies to the combination of various
components contributing to the total uncertainty of a measurement: many different
contributions lead to a normal distribution.

Thus we can conclude that in most cases the uncertainty of a measurement result may be
assumed to have a normal probability distribution, because it is a combination of a number
of contributions.

There is an exception: when there is a dominant contribution to the uncertainty that has a
distribution that is not normal. For example, if the mismatch uncertainty in a high-
frequency electrical calibration is much larger than any of the other uncertainty
components, the dispersion in the result will be more like a U-shaped distribution, because
the influence of the other components is very small.

THE UNCERTAINTY EVALUATION PROCESS

The statistical approach, explained above, is the starting point of the strategy for
calculating the uncertainty, described in the GUM. The idea is to express all contributions to
the total uncertainty in the form of standard deviations. The following steps will lead to the
final result:

1. Correct the result for all known influences


2. List all possible sources of uncertainty
3. Determine or estimate the magnitude of each uncertainty source and express that
magnitude as a standard deviation (this is called a standard uncertainty)
4. Determine the sensitivity coefficient for each uncertainty source
5. Calculate the combined standard uncertainty of the result
6. Multiply the combined standard uncertainty by an appropriate factor (coverage factor
k) to get the expanded uncertainty. The coverage factor is chosen to express the
uncertainty in terms of a suitable confidence level
7. Report the result with a complete uncertainty statement that contains all information
needed to understand and interpret the indicated uncertainty
These steps will be discussed in more detail in the following paragraphs.
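Steps 3 to 7 of this recipe condense to a few lines once each source has been quantified. In the sketch below the sources, their standard uncertainties and their sensitivity coefficients are entirely illustrative.

```python
import math

# (name, standard uncertainty u_i, sensitivity coefficient c_i) for each
# source, with c_i * u_i expressed in the unit of the result (steps 3-4):
sources = [
    ("repeatability (type A)",         0.12, 1.0),
    ("reference standard certificate", 0.05, 1.0),
    ("temperature influence",          0.30, 0.02),
]

u_c = math.sqrt(sum((c * u) ** 2 for _, u, c in sources))  # step 5
k = 2.0                                                    # ~95% confidence
U = k * u_c                                                # step 6
print(f"u_c = {u_c:.4f}, U = {U:.4f} (k = {k:.0f})")       # step 7: always report k
```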

LISTING OF UNCERTAINTY SOURCES


In this step we have to identify all possible sources of uncertainty. Start by listing as many
sources as you can think of. Now assess each of the listed components and select the ones
that are relevant for the measurement under consideration. Relevant means here that the
uncertainty contribution to the result is significant. This is the part that requires skill and
experience. It is better to include a component that later on turns out to be negligible, than
to leave it out at the risk of missing a significant contribution.

The uncertainty sources come in a great variety. Roughly they can be divided into several
classes. The list below is not an attempt to be complete, but is meant to give some guidance:

1. Intrinsic uncertainties due to imperfect definition of the measurand (e.g. the diameter
of a cylinder that is not perfectly round, the temperature of an object that has a
temperature gradient). Noise (thermal, electrical, acoustic, etc.) is also an intrinsic
source of uncertainty as far as it is part of the quantity being measured.
2. Disturbance of the measurand by the measuring process (e.g. the measurement
force of a micrometer deforming a plastic object that is being measured, the loading of a
resistive circuit by a voltmeter, the self-heating of a resistive temperature detector due
to the electrical power dissipated in the sensor, the disturbance of a flow pattern by the
flow meter and noise introduced by the measuring system).

3. Calibration uncertainties of measurement standards, measuring instruments,
reference materials and reference values, used in the measurement (e.g. the
calibration uncertainty of a standard thermometer, used to calibrate another
thermometer). This includes the extra uncertainties due to unknown or uncorrected
drift of the standards. Even if we correct for offset and drift, there is an uncertainty in
that correction, which contributes to the total uncertainty of the measurement.
4. Environmental influence quantities such as the temperature, atmospheric pressure,
humidity, air composition, dust, impurities, electromagnetic fields, vibrations, air drafts,
light, cosmic radiation, etc. etc. In each particular case we have to assess which of these
parameters may have any influence on the measurement result.
5. Influence from the person performing the measurement. This is only relevant for
non-automatic measurements and refers to personal habits and biases. These
differences appear for example in the estimation of the position of a pointer between
the smallest divisions of a scale, in the positioning of the weights on a balance or in the
handling of slip gauges (warming effects from the handling).
6. Rounding errors. These arise from limited resolution in the readings from scales and
digital displays and from the rounding in calculations, including software peculiarities.
Especially when two large numbers are subtracted to give a very small result, these
errors may become significant.
7. Gross errors and mistakes. This is not strictly an uncertainty source, but it does
influence the measurement result in a most unpredictable way. Swapping the positive
and negative readings on a null detector, interchanging digits: 1987 instead of 1897,
reading a 3 for an 8, a 7 for a 1, making calculation mistakes, copying numbers wrongly,
etc. etc. In many cases the error is so large that we can see immediately that there is
something wrong, but just as often the effect can be small and go unnoticed. Because the
effect of such mistakes cannot be evaluated, it is impossible to include them in the
measurement uncertainty. The only thing to do is to try and avoid them as much as
possible by working systematically, writing clearly and neatly. Automation of the
measurement is a possible way to avoid most of these mistakes, but other hidden
dangers appear in that approach, which we will not discuss here.

DETERMINING THE STANDARD UNCERTAINTIES

RANDOM AND SYSTEMATIC EFFECTS


For each of the uncertainty sources identified, we now have to determine their magnitude.
There are two ways in which an uncertainty source can influence the measurement result:

random and systematic.

A random effect means that if we repeat the measurement under the same conditions, the
resulting values will show a dispersion with usually a normal probability distribution.
These effects are caused by noise and fluctuations of environmental influence quantities
that cannot be completely controlled.

Systematic effects cause a deviation in the measurement result that follows a certain
pattern, for example a constant offset, a drift (value runs away over time), a regular
fluctuation or a non-linear behaviour. Some of the sources of these effects are the
calibration uncertainty of standards and measuring equipment used for the measurement,
uncertainties of the measurement of environmental influence quantities, bias and drift in
the standards and measuring system and disturbance of the measurand by the measuring
process. The most important difference with a random effect is that systematic effects
cannot be detected by repeating the measurement (except drift and regular fluctuations).

Both random and systematic effects lead to uncertainty contributions expressed as


standard deviations, associated with a probability distribution. For systematic effects this is
not immediately clear. For example: how can a null-offset in a voltmeter have a dispersion?
It is a single value, shifting the measurement result by a fixed amount!

We have to look at it in this way: we have already corrected for the offset and are left with
the uncertainty in that correction. Even though the error in our applied correction is a fixed
value, we know neither its value nor its direction. So our knowledge of the magnitude and
direction of the offset has an uncertainty, associated with a probability distribution.

Similarly, the value of a standard, taken from its calibration certificate, will
not be exactly the true value of the standard. The error in the value gives a systematic shift in the
result, but we do not know the magnitude and direction of that shift. The uncertainty
indicated on the calibration certificate of the standard is associated with a probability
distribution.

So, even uncertainty contributions arising from systematic effects can be characterised by
probability distributions with their associated standard deviations.

Rather than distinguishing between the random and systematic character of uncertainty
components, the GUM concentrates on the methods of evaluation. There are two different
ways of evaluating the magnitude of uncertainty components:

Type-A evaluation consists of statistical analysis of the values obtained
from repeated measurements.

Type-B evaluation is the assessment of uncertainty components in any
other way than type-A.

This distinction is made purely for practical convenience: the methods of evaluation
are entirely different, but there is no fundamental difference between the resulting
uncertainty components.

TYPE-A EVALUATION

To do a full statistical analysis on a series of readings is a lot of work. First of all, we need
quite a large number of readings (hundreds or even thousands) to be able to determine the
type of probability distribution. Instead, we take fewer readings and assume that the
distribution is normal, which is usually very close to the truth anyway.

From a series of about 10 or more readings (n is the number of readings, xi is the value of
reading nr. i, with i running from 1 to n) the two characteristic parameters can be
calculated: the average (mean),

$$\bar{x} = \frac{x_1 + x_2 + \dots + x_n}{n} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

which yields the uncorrected estimate of the value, and the standard deviation of the sample,

$$s = \sqrt{\frac{(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2 + \dots + (x_n - \bar{x})^2}{n - 1}} = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n - 1}}$$

which is a measure for the type-A uncertainty component.

The term (n − 1) in the denominator of the standard deviation is a statistical parameter
called the degrees of freedom; it will come back in the calculation of the expanded
uncertainty.
We can see immediately that the standard deviation cannot be calculated if we take only
one reading (division by zero). For a number of readings lower than 10 the actual type-A
contribution to the uncertainty becomes larger than the calculated standard deviation.
The question now arises: what if a measurement consists of only one or a few readings?
That would make it impossible to do a type-A evaluation. Yet in practice it is often too
expensive to repeat measurements several times. The GUM makes provision for this by
introducing the pooled standard deviation sp. That is a standard deviation, determined
from earlier measurements, performed under the same circumstances, with the same
equipment. So the recipe is to take a series of at least 10 readings, calculate the standard
deviation and use this as a pooled standard deviation in subsequent measurements. The
validity of this method should however be checked on a regular basis by repeating the
determination of the pooled standard deviation. There is an ongoing discussion between
laboratories and accreditation organisations on the conditions under which these shortcuts
are acceptable. Some of this will be discussed further on in this course.
The standard deviation calculated according to the formula given above is associated with
each individual reading of the sample. If we have done a whole series of n
readings to determine the standard deviation, we would like to use the average
(mean) of the readings as the estimate for the measured value. This has the advantage
that the standard deviation of the mean is 1/√n times the value for one individual
reading. Thus the standard deviation of the mean of a series of n values is expressed as:

$$s_m = \frac{s}{\sqrt{n}} = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n(n-1)}}$$

If we use a pooled standard deviation, the value is reduced in the same way when we
average over more than one reading: $s_m = s_p/\sqrt{n}$,
where we have to be careful to use for n the number of readings used in the present
measurement, not the number of readings used for determining the sp.
If the random fluctuations in the measurements lead to an uncertainty contribution that is
larger than desired, it can be reduced by averaging over more readings. This averaging can
be done numerically, but low-pass filtering (mechanical or electrical) has exactly the same
effect. In precision digital meters the user can often choose the integration time for the
readings. This is also the same as filtering and averaging: more precision requires more
averaging, a longer time constant or a longer integration time.
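
As a concrete illustration, the type-A quantities above are easy to automate. The short Python sketch below computes the mean, the sample standard deviation and the standard deviation of the mean; the reading values are invented purely for illustration.

```python
import math

# Type-A evaluation of a series of readings (values invented for illustration).
readings = [10.03, 10.05, 10.01, 10.04, 10.02, 10.06, 10.03, 10.04, 10.02, 10.05]
n = len(readings)

mean = sum(readings) / n                                          # best estimate of the value
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))   # sample standard deviation
s_mean = s / math.sqrt(n)                                         # standard deviation of the mean

print(f"mean = {mean:.4f}, s = {s:.4f}, s(mean) = {s_mean:.4f}, degrees of freedom = {n - 1}")
```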

TYPE-B EVALUATION
The uncertainty contributions that do not lead to a spread in values cannot be evaluated by
statistical analysis of repeated readings. We have to find other means of detecting and
estimating these uncertainty components. There are three levels of difficulty in this
analysis:

1. Components that are easy to identify and easy to evaluate (e.g. the uncertainty given in
the calibration certificate of a measurement standard used in the process or the limited
resolution of a measuring instrument).
2. Components that are easily identified, but are more difficult to estimate (e.g. the
influence of temperature, humidity and pressure on the measurement result). It may
take extensive extra tests and measurements to evaluate the influence of these
environmental conditions.
3. Components that are easily overlooked, because we do not realise that there is such an
influence. The measurement may be disturbed by vibrations in the building without
anyone being aware of these vibrations. It takes a lot of experience and even a healthy
dose of imagination to think of all possible sources of uncertainty.

As mentioned above, the estimates of all these contributions should be expressed as


standard deviations in order to be able to combine them with the results from the type-A
evaluation. For that purpose we have to assume a probability distribution for each of these
type-B components, for which we can then estimate a standard deviation. Choosing the
type of distribution is done on the basis of the knowledge we have (or the lack of
knowledge) about the effects causing the uncertainty contribution. A few examples may
clarify this:

A. The calibration report of a reference standard states an uncertainty at a confidence
level of 95% with a normal distribution. We recognise that this is a 2σ level, so the
standard uncertainty is simply the stated uncertainty divided by two.

B. The resolution of a digital display is 0,1. The maximum error due to this resolution
is one half of the least significant digit: 0,05. By its nature this is a uniform
distribution, so the standard deviation is 0,05/√3 = 0,029. This uncertainty
component is separate from the calibration uncertainty of the measuring
instrument to which the display belongs.

C. The value of a standard has been monitored over some years and has been found to
be fluctuating slowly, but without a regular pattern. It was found that the variations
always remained within a band of ±40 ppm (parts per million) from the average.
Because at the time the standard is used the value can be anywhere within the given
range and there is no information about the distribution, we assume a uniform
distribution with a half-width of 40 ppm.
The standard uncertainty is therefore 40/√3 = 23 ppm.

D. A calibration is performed in a laboratory room where the temperature is monitored. The
temperature is found to fluctuate between 24,6 and 25,2 °C. The temperature sensor used
for the monitoring has been calibrated with an uncertainty of 0,2 °C (2σ level). Two
uncertainty components arise from this: for the fluctuations we assume a uniform
distribution with half-width 0,3 °C (standard uncertainty 0,3/√3 = 0,17 °C), and the
calibration of the sensor contributes a standard uncertainty of 0,1 °C.

E. Specifications of multimeters and the like are usually complicated and difficult to
understand. An example: The table below shows part of the specifications of a 4½ digit
DMM, stating the worst case deviations for each function as a percentage of range plus a
percentage of reading plus a fixed number of digits:

Range             1-month                      3-months                     1-year
(at 23 ± 3 °C)    % range + % value + digits   % range + % value + digits   % range + % value + digits
200 mV DC         0 + 0,05 + 3                 0 + 0,08 + 4                 0 + 0,10 + 5
2 V to 200 V DC   0,01 + 0,03 + 1              0,02 + 0,05 + 2              0,03 + 0,08 + 3
2000 V DC         0 + 0,05 + 2                 0 + 0,06 + 3                 0 + 0,08 + 4

What is the standard uncertainty if the instrument reads 8,000 on the 20 V range?
First of all we need to know when the multimeter was last calibrated. Suppose it was
calibrated half a year ago and was adjusted to be well within the specifications. We
have to use the 1-year specs. Next we have to check that the measurement was done
at an ambient temperature within the indicated range of 20 to 26 °C. Let’s assume
that that is the case.
The worst case deviation of the value is now given by 0,03% × 20 V + 0,08% × 8 V + 3
digits. On the 20 V range the least significant digit is 0,001 V. The worst case
deviation is therefore 0,006 V + 0,0064 V + 0,003 V = 0,0154 V = 15,4 mV. We
assume a uniform distribution, so the standard uncertainty is 15,4/√3 = 8,89 mV,
which can be rounded off to 9 mV.
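
The same calculation in a short Python sketch, using the specification values from the table above (the variable names are ours, chosen for illustration):

```python
import math

# The DMM calculation from example E (values from the specification table above).
range_v   = 20.0       # measuring range, V
reading_v = 8.000      # instrument reading, V
digit_v   = 0.001      # one least significant digit on the 20 V range, V

# 1-year spec for the 2 V to 200 V DC ranges: 0,03 % of range + 0,08 % of value + 3 digits
worst_case = 0.0003 * range_v + 0.0008 * reading_v + 3 * digit_v   # 0.0154 V

u = worst_case / math.sqrt(3)      # assumed uniform distribution over +/- worst_case
print(f"worst case = {worst_case * 1000:.1f} mV, standard uncertainty = {u * 1000:.2f} mV")
```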

F. Values of constants or conversion factors, taken from handbooks or scientific papers also
have an uncertainty that should be indicated together with the value itself. If this is not the
case, we can assume that only the significant digits are given. The uncertainty may then be
estimated to be a few least significant digits. Mind that some conversion factors are exact
(no uncertainty), e.g. 1″ = 25,4 mm, but others do have uncertainty.

SENSITIVITY COEFFICIENTS
The next step is to determine how much influence each of the uncertainty components has
on the final result, in other words, how sensitive the result is to each of the influence
quantities. The uncertainty contribution to the result ui(y) from the uncertainty u(xi) of the
input quantity Xi is simply: ui(y) = Ci × u(xi), where Ci is the sensitivity coefficient.

Let’s start with an example: We want to know the resistance of a resistor at 23 °C, using a
multimeter. If the resistance range of the multimeter has a deviation, this will directly
affect the measurement result, so the sensitivity coefficient is 1. The temperature of the
resistor has to be measured and a correction made for the deviation of the actual
temperature from 23 °C. The uncertainty of the temperature measurement contributes to
the uncertainty in the result. However, a large uncertainty in the temperature
measurement can still have a very small effect on the resistance, namely if the temperature
coefficient of the resistor is very small.

EXPERIMENTAL DETERMINATION

One way of finding out the sensitivity coefficient is to vary the influence quantity by a
known amount and measure the change in the resulting value. In the above example this
would mean that we determine the temperature coefficient of the resistor experimentally.

If xi is a small variation of the influence quantity and y is the resulting change in the
output value, the sensitivity coefficient is given by:

y
ci = x i

For some of the influence quantities this method can be very useful, but it is usually quite a
lot of work to do the experiments. Some influence quantities cannot be controlled at all and
therefore this method cannot be used to determine the associated sensitivity coefficients.

MATHEMATICAL DERIVATION
We need an alternative way to calculate the coefficients. That alternative is provided by
mathematics on one condition: that we have a formula for the relationship between all
input quantities and the measurement result. In general, the result can be seen as an output
quantity y, related to the input quantities xi by a formula:

$$y = f(x_1, x_2, \dots, x_N)$$
This is called the model equation. Mind the difference between N and n, where N is the
number of different input quantities and n is the number of readings in a series of repeated
measurements for determining the standard deviation (see par. 4.3.2).

Setting up the model equation is sometimes a very complicated process, but in many cases
it can be fairly easy. The starting point is the calculations we do to get from the
measurement values to the final result. After that we include correction terms or factors for
any influence quantities that may have an impact on the uncertainty.

Example: The density of a rectangular block of steel is determined by measuring its length, width
and height with a micrometer and weighing its mass. The density is calculated as:

$$D = \frac{m}{V} = \frac{m}{l \cdot w \cdot h}$$

with density D, mass m, volume V, length l, width w and height h.

This is the formula we are looking for and from this the sensitivity coefficients for the mass and the
dimensions can be calculated. However, we now realise that the volume is dependent on the
temperature, so we restate the measurement result into: the density of the steel block at a certain
reference temperature. Now we have to measure the temperature and correct for the deviation of
the actual temperature from the reference value.

$$l_t = (1 + \alpha[t - t_0])\,l_0, \quad w_t = (1 + \alpha[t - t_0])\,w_0, \quad h_t = (1 + \alpha[t - t_0])\,h_0$$

$$V_t = l_t \cdot w_t \cdot h_t = l_0 \cdot w_0 \cdot h_0 \cdot (1 + \alpha[t - t_0])^3 = V_0 \cdot (1 + \alpha[t - t_0])^3$$

where t and t0 stand for the actual and reference temperatures and α is the linear expansion
coefficient of the steel block.

The formula for the result now becomes:

$$D_0 = \frac{m}{V_t}(1 + \alpha[t - t_0])^3 = \frac{m}{l_t \cdot w_t \cdot h_t}(1 + \alpha[t - t_0])^3$$

where D0 is the density at the reference temperature.

We see that two new parameters t and α have come into the formula. The sensitivity coefficients
for the uncertainties in these can be derived from the new formula (t0 is a constant without
uncertainty).

In this way we can cater for any influence quantity by including a correction term in the formula for
that parameter. If we identify an uncertainty contribution that is not associated with a correction,
we artificially assume a correction with an estimated value of zero, so that the uncertainty
contribution can be accounted for. For example, if the drift of a measuring instrument is known to
be smaller than a certain worst case value, but the direction of the drift is not known, we still
include a correction for the drift with value zero and a standard uncertainty calculated from the
worst case value.

CALCULATION OF THE SENSITIVITY COEFFICIENTS


Once we have the formula, the sensitivity coefficients can be calculated. There are two ways to do
this:

1. Mathematical derivation

The sensitivity coefficient Ci for the input parameter xi is the partial derivative of the
function f with respect to that input parameter:

$$C_i = \frac{\partial f}{\partial x_i}$$
This procedure is done for each input parameter (i = 1 to N).
Taking the derivative is a mathematical operation (A-level mathematics). Fortunately, for
those not familiar with this technique there are computer programs available that can work
out these partial derivatives. Examples of such programs will be shown during the training
course.
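
As one illustration (not necessarily the software shown in the course), the freely available sympy library for Python can work out such partial derivatives symbolically. The sketch below assumes the density model derived earlier:

```python
# Minimal sketch: symbolic sensitivity coefficients for the density model
# D0 = m/(l*w*h) * (1 + alpha*(t - t0))**3, using the open-source sympy library.
import sympy as sp

m, l, w, h, alpha, t, t0 = sp.symbols('m l w h alpha t t0')
D0 = m / (l * w * h) * (1 + alpha * (t - t0)) ** 3

for x in (m, l, w, h, t, alpha):
    print(x, '->', sp.diff(D0, x))    # sensitivity coefficient C_x = dD0/dx
```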

2. Variation of parameters
A simpler but very effective technique to calculate the sensitivity coefficients is the
following:

A. Calculate the value y0 = f(x1, x2,…, xi ,…, xN) of the result using the best
estimates of each input quantity.
B. Change the value of one input quantity xi into xi + Δxi (choose a value Δxi close to
the estimated standard uncertainty of xi) and calculate the result yi by filling in the
changed input quantity while using the original best estimates for all other input
values: yi = f(x1, x2, ..., xi + Δxi, ..., xN).
C. The sensitivity coefficient Ci for the input quantity Xi is now calculated from:

$$C_i = \frac{y_i - y_0}{\Delta x_i}$$
Repeat steps B and C for each of the input quantities.

Example:

The density of a steel block at a reference temperature of 23 °C is measured according to the


method described in the example above.

Procedure:

The steel block is first placed for one day in an air-conditioned laboratory room. The temperature of
the room is kept at 23,0 °C with a standard uncertainty of 1,0 °C. After a day the steel block is
assumed to have the same temperature as the environment. Then the dimensional measurements
are done in the same room. The temperature change of the block during the dimensional
measurements is expected to be negligible. Finally the mass of the block is determined using a
precision electronic balance.

The value of the thermal expansion coefficient for this type of steel and its uncertainty are taken
from literature.

The measurement results and other input variables are given in the table below:

Input quantity   Measured value   Standard uncertainty   Unit

m                4,6835           0,0005                 kg
l                15,005           0,003                  cm
w                8,055            0,002                  cm
h                5,002            0,002                  cm
t                23,0             1,0                    °C
α                0,0137           0,0015                 °C⁻¹

A: Calculate the best estimate for the result:

$$D_0 = \frac{m}{l \cdot w \cdot h}(1 + \alpha[t - t_0])^3 = \frac{4,6835}{15,005 \times 8,055 \times 5,002}(1 + 0,0137 \times [23,0 - 23,0])^3$$

Result: D0 = 7,74685260 × 10⁻³ kg/cm³

Because we will look at very small changes, the rounding errors should be
minimised by using a lot of digits.

Note that the temperature correction is zero, because the measurement


temperature is equal to the reference temperature. As we will see, the uncertainties
in the temperature measurement will still contribute to the total uncertainty.

B. Increase the mass by 0,0005 kg (m = 4,6840 kg) and do the same calculation.

Result: D0 = 7,74767964 × 10⁻³ kg/cm³

C. The sensitivity coefficient for the uncertainty in the mass is then:
$$C_m = \frac{(7,74767964 - 7,74685260) \times 10^{-3}}{0,0005} = \frac{8,27 \times 10^{-7}}{0,0005} = 1,654 \times 10^{-3} \ \text{kg·cm}^{-3} \text{ per kg}$$

We could have varied the mass by a different amount than the 0,0005 kg, as long as the
value is not too far from the standard uncertainty. Try this yourself by repeating the
calculation using an increase of 0,001 kg. You should find the same value for the sensitivity
coefficient.

In the same way the other sensitivity coefficients can be found (try to calculate this
yourself):

Input quantity   Sensitivity coefficient   Unit

m                1,654 × 10⁻³              kg·cm⁻³ per kg
l                −5,162 × 10⁻⁴             kg·cm⁻⁴
w                −9,615 × 10⁻⁴             kg·cm⁻⁴
h                −1,548 × 10⁻³             kg·cm⁻⁴
t                3,228 × 10⁻⁴              kg·cm⁻³·°C⁻¹
α                0                         kg·cm⁻³·°C
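
The whole recipe can be checked with a short Python sketch that reproduces the sensitivity coefficients in the table above; the function and variable names are our own, chosen for illustration:

```python
# Variation-of-parameters method for the density example.
# Estimates and standard uncertainties are taken from the tables above; t0 = 23 °C.
def density(m, l, w, h, t, alpha, t0=23.0):
    return m / (l * w * h) * (1 + alpha * (t - t0)) ** 3   # kg/cm^3

best = dict(m=4.6835, l=15.005, w=8.055, h=5.002, t=23.0, alpha=0.0137)
u    = dict(m=0.0005, l=0.003,  w=0.002, h=0.002, t=1.0,  alpha=0.0015)

y0 = density(**best)                                       # step A: best estimate
print(f"D0 = {y0:.8e} kg/cm^3")

for name in best:
    shifted = dict(best, **{name: best[name] + u[name]})   # step B: vary one input
    C = (density(**shifted) - y0) / u[name]                # step C: sensitivity coefficient
    print(f"C_{name} = {C:.4g}")
```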

COMBINED STANDARD UNCERTAINTY

At this point all contributing uncertainty sources have been identified and their values expressed as
standard uncertainties. The sensitivity coefficients for all contributions have been determined. All
there is to do is to add up all standard uncertainties in the right way to get the combined standard
uncertainty of the measurement, that is the standard uncertainty of the result. Because of the
statistical character of the uncertainties they have to be added up as the root of the sum of the
squares (RSS). The standard uncertainties behave exactly like standard deviations, where the RSS
adding was already visible in the formula for the standard deviation of the measurement values
(see par. 4.3.2).

The formula for the combined standard uncertainty is then:

$$u_c(y) = \sqrt{\sum_{i=1}^{N}(u_i(y))^2} = \sqrt{(u_1(y))^2 + (u_2(y))^2 + \dots + (u_N(y))^2}$$
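
A sketch of this RSS combination in Python, applied to the contributions of the density example (sensitivity coefficients and standard uncertainties taken from the two tables above; with these illustrative numbers the temperature term dominates):

```python
import math

# Uncertainty contributions u_i(y) = C_i * u(x_i) for the density example,
# combined by root-sum-of-squares (values from the two tables above).
C = dict(m=1.654e-3, l=-5.162e-4, w=-9.615e-4, h=-1.548e-3, t=3.228e-4, alpha=0.0)
u = dict(m=0.0005,   l=0.003,    w=0.002,     h=0.002,     t=1.0,      alpha=0.0015)

u_i = {k: C[k] * u[k] for k in C}                      # individual contributions
u_c = math.sqrt(sum(v ** 2 for v in u_i.values()))     # RSS combination
print(f"u_c(D0) = {u_c:.3g} kg/cm^3")
```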

CORRELATED INPUT QUANTITIES
This formula is true when all input variables are statistically independent, which means that the
sources of the uncertainty contributions do not influence each other. These influences can occur in
certain cases, for example when the same measuring instrument is used for measuring two or more input
quantities. If in the example of the density measurement of a steel block (see par. 4.4.2) all
dimensional measurements are taken with the same micrometer and this micrometer has a certain
deviation, that deviation will enter in the length, the width and the height in the same way, so the
uncertainties in the values for the dimensions of the block are correlated. Between totally
independent and fully correlated quantities all intermediate degrees exist, and even negative
correlation is possible. The degree to which two input quantities are correlated is expressed by the
correlation coefficient, which can have values from −1 (100% negative correlation) through zero
(independent) to +1 (full positive correlation).

For correlated input quantities there are extra terms to be included in the formula for the combined
uncertainty. Suppose the first two input quantities are statistically dependent with a correlation
coefficient of 0,4. The formula for the combined uncertainty then looks like this:

$$u_c(y) = \sqrt{\sum_{i=1}^{N}(u_i(y))^2 + 2 \times 0,4 \times u_1(y) \times u_2(y)}$$

The extra term is 2 times the correlation coefficient times the product of the two correlated
uncertainty contributions. Such a term is included for each pair of correlated input quantities.
Note that these terms are not squared.
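
A small Python sketch of this extended combination; the helper function and the example values are ours, purely for illustration:

```python
import math

# Combined uncertainty with an extra term for each correlated pair.
# contribs holds the u_i(y); pairs holds (i, j, r) for each correlated pair.
def combined(contribs, pairs=()):
    total = sum(u ** 2 for u in contribs)
    for i, j, r in pairs:
        total += 2 * r * contribs[i] * contribs[j]     # extra term, not squared
    return math.sqrt(total)

u = [2.5e-3, 5.8e-3, 1.6e-3]                           # illustrative values
print(combined(u))                                     # all inputs independent
print(combined(u, pairs=[(0, 1, 0.4)]))                # first two correlated, r = 0,4
```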

Determining whether two variables are correlated, and to what extent, can be done in two ways:
experimentally or by theoretical inspection.

The experimental way needs a lot of extra measurements, in which the input parameters involved
are varied. This is so much work that it is only a last resort, if all other methods fail.

Inspection means that we try to predict and estimate the correlations by scrutinising the
measurement procedure and influence quantities.

STRATEGY
The general strategy is to assume a possible correlation only when there is a very good reason
to do so. In the case of the micrometer being used to measure the three dimensions of the steel
block there is such a good reason. If we know that the uncertainty of the micrometer is mainly
caused by systematic effects, we may assume a correlation coefficient of +1 between each of the
pairs: l and w, w and h, l and h. If the correlation is 100% this means in practice that the
uncertainties involved are added up linearly rather than RSS.

Suffice it to say that correlated input quantities are about as welcome as a toothache! There are some ways of
eliminating these correlations, as the next example illustrates:

A resistor is calibrated by comparison with a standard resistor of the same nominal value. The
comparison is done by connecting the resistors each in turn to a precision digital multimeter. The
ratio of the readings from the DMM is equal to the ratio of resistance values, so the value of the unit
under test (UUT) is equal to the value of the standard times the measured ratio:

$$R_x = \frac{R_{m,x}}{R_{m,s}} \cdot R_s$$
Rx is the value of the UUT, Rs the known value of the standard, Rm,x and Rm,s are the readings from the
DMM for the UUT and the standard respectively.
The readings taken with the DMM are obviously correlated. If we considered them
independent, the measurement uncertainty of the DMM would enter twice into the combined
uncertainty. But constant deviations of the DMM are not important in this measurement, because
we only need the ratio of the values. Only the variability and resolution of the DMM (noise and
fluctuations) should therefore contribute to the uncertainty of the comparison.
If we worked out this situation in full, we would discover a strong negative correlation between the
measurement of the UUT and that of the standard.
We can avoid this analysis by considering the ratio of the two readings as one input parameter:

$$R_x = r \cdot R_s \quad \text{with} \quad r = \frac{R_{m,x}}{R_{m,s}}$$
The only contributions to the uncertainty are now those in r and Rs. (Temperature effects and
other influences have been left out of consideration here; a more complete analysis of this example
is given in the EAL document Gxx, which is a set of examples to illustrate the EAL-R2 document.)

EXPANDED UNCERTAINTY

The expanded uncertainty is simply the total uncertainty of the result, expressed as a value with a
given confidence level. To this end the combined standard uncertainty is multiplied by a coverage
factor k. The value of k is dependent on three parameters of the measurement:

A. THE REQUIRED CONFIDENCE LEVEL

The confidence level to be chosen depends on what the measurement result is to be used
for. If the measurement is used for checking a product against specification, the producer
may want to be very sure that the value is reliable and thus choose a confidence level of
99,9% or even more. For calibrations of standards a confidence level in the order of 95% is

usually suitable. In some scientific measurements the one-standard-deviation value is
sometimes used (k = 1).

B. THE PROBABILITY DISTRIBUTION OF THE COMBINED UNCERTAINTY

In almost all cases the probability distribution is assumed to be normal, but there can be
exceptions. If the distribution can be assumed to be almost uniform, an almost 100%
confidence level can be reached for k = √3. For a U-shaped distribution this becomes k = √2.
These situations are rare and usually do not need consideration.

C. THE EFFECTIVE DEGREES OF FREEDOM

If a type-A contribution, calculated from the standard deviation of a relatively small
number of readings (below 20), dominates the combined uncertainty,
the coverage factor becomes larger for a given confidence level. The degrees of freedom νi
of the standard deviation of input quantity xi is equal to the number of readings minus one:
νi = n − 1.

The degrees of freedom of a type-B contribution is always infinite.

The degrees of freedom of the combined uncertainty is somewhere between these two,
because it consists of contributions from both. The effective degrees of freedom νeff of the
combined uncertainty can be calculated as follows:

$$\nu_{\mathrm{eff}} = \frac{(u_c(y))^4}{\sum_{i=1}^{N} \dfrac{(u_i(y))^4}{\nu_i}} = \frac{(u_c(y))^4}{\dfrac{(u_1(y))^4}{\nu_1} + \dfrac{(u_2(y))^4}{\nu_2} + \dots + \dfrac{(u_N(y))^4}{\nu_N}}$$

All symbols can be recognised from formulas given earlier and have the same meaning.
Round the outcome of the formula down to the nearest integer. Note that
uncertainty contributions with infinite degrees of freedom drop out of the sum because these
terms become zero (division by infinity).
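
A Python sketch of this formula; the contribution values are invented, and math.inf represents the infinite degrees of freedom of type-B components:

```python
import math

# Effective degrees of freedom according to the formula above.
# Each contribution is a pair (u_i(y), nu_i); use math.inf for type-B components.
def nu_eff(contribs):
    u_c = math.sqrt(sum(u ** 2 for u, _ in contribs))
    denom = sum(u ** 4 / nu for u, nu in contribs)     # terms with nu = inf become zero
    return math.inf if denom == 0 else math.floor(u_c ** 4 / denom)

contribs = [(2.5e-3, math.inf), (5.8e-3, math.inf), (0.7e-3, 4)]   # invented values
print(nu_eff(contribs))
```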

There are two ways of avoiding the work of calculating this complicated formula:

1 Make sure that the type-A contributions to the combined uncertainty are much
smaller than the rest of the uncertainty components.
2 If this cannot be done, take 20 readings or more for each value (n ≥ 20), or base
your type-A contribution on a pooled standard deviation, taken from a sufficient
number of readings in a previous measurement.
In both cases the effective degrees of freedom becomes infinite.

Finally, to find the coverage factor, a table is used that lists the values of k against the effective
degrees of freedom and the confidence level. This factor is also referred to as the Student’s factor.
The table is given in the appendix.

The expanded uncertainty U(y) is now calculated according to:

$$U(y) = k \cdot u_c(y)$$

The above procedure seems very difficult, but fortunately in most cases we do not need to worry
about effective degrees of freedom and Student’s factors. If the effective degrees of freedom can be
considered infinite (see above) and the probability distribution is normal, we can use k = 1 for a
confidence level of about 68%, k = 2 for about 95% or k = 3 for about 99,7%. This is sufficient in the
vast majority of measurement situations you will encounter.
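
When the table in the appendix is not to hand, the coverage factor can also be computed from the Student's t distribution, for example with the scipy library (an assumed dependency, not part of the course material):

```python
# Coverage factor k from the confidence level and the effective degrees of freedom,
# using the Student's t distribution from scipy.
from scipy.stats import t

def coverage_factor(confidence, nu_eff):
    return t.ppf((1 + confidence) / 2, nu_eff)         # two-sided interval

print(coverage_factor(0.95, 10))      # small nu_eff: k noticeably larger than 2
print(coverage_factor(0.95, 1e6))     # effectively infinite: k approaches 1,96
```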

RECOMMENDED TABULAR FORM OF UNCERTAINTY BUDGET

To keep some overview of all calculations involved in finding the measurement result and its
uncertainty, the use of the following table is highly recommended:

Quantity   Estimate       Uncertainty   Shape of       Standard            Sensitivity      Uncertainty          Degrees of
Xi         xi             U(xi)         distribution   uncertainty u(xi)   coefficient Ci   contribution ui(y)   freedom νi

Rs         10 000,053 Ω   5 mΩ          Normal         2,5 mΩ              1,0              2,5 mΩ               ∞
RD         0,020 Ω        10 mΩ         Uniform        5,8 mΩ              1,0              5,8 mΩ               ∞
RTS        0,000 Ω        2,75 mΩ       Uniform        1,6 mΩ              1,0              1,6 mΩ               ∞
RTX        0,000 Ω        5,5 mΩ        Uniform        3,2 mΩ              1,0              3,2 mΩ               ∞
RC         1,000 000 0    0,5 × 10⁻⁶    Uniform        0,29 × 10⁻⁶         10 000 Ω         2,9 mΩ               ∞
r          1,000 010 5    0,07 × 10⁻⁶   Normal         0,07 × 10⁻⁶         10 000 Ω         0,7 mΩ               4

RESULT

Rx         10 000,178 Ω   Combined standard uncertainty uc(y) = 7,85 mΩ    νeff = 61982

The values filled in are taken from an example of the EAL-Gxx and are only for illustration.

The calculations for the result on the bottom line of the table are as follows:

Estimate of the result from the values above, according to the model formula.

Combined standard uncertainty from the uncertainty contributions by RSS combination (so it is not
simply the sum of the column above!).

For the effective degrees of freedom see the formula in par. 4.6. As can be seen this number is very
large and effectively equal to infinity (see the table of the Student’s factors in the appendix).

REPORTING THE RESULT


The reported measurement result should be clear and unambiguous. It goes without saying that
the measurement unit has to be indicated. When it comes to the uncertainty statement, it is of
paramount importance to make very clear how the reported value is to be understood. The
absolute minimum information required is the uncertainty and the associated confidence level.
Often the coverage factor is given as well and a statement about the assumed distribution. The NIST
guidelines advise the metrologist to go as far as to put the whole uncertainty budget, such as the
table above, into the report. This is not usual in most countries. Most calibration laboratories have a
standard phrase which is automatically included in each calibration or test certificate, stating the
confidence level, the coverage factor and the assumed distribution.

Do not forget to report the values of environmental conditions, such as the temperature, humidity
and/or other relevant influence parameters, including their uncertainties.

When a number of measurements is combined in one report, it is good practice to give the results in
a table with a separate column for the uncertainties.

It is beyond the scope of this course to go into all aspects of a complete report on a calibration or
test. The examples below should not be considered to be complete. They are only meant to
illustrate the wording that could be used in the reporting of measurement results and their
uncertainties.

1 Result of a calibration of a single value standard.

The slip gauge was compared to a reference standard of the same nominal length using
light interferometry at an environmental temperature of (20,0 ± 0,5) °C. The measured
length of the slip gauge was (10,0046 ± 0,0025) mm. The indicated uncertainties were
obtained by multiplying the combined standard uncertainty by a coverage factor of 2,
corresponding to a confidence level of approximately 95%, assuming a normal
probability distribution.

2 Calibration of a set of standards.

The table below gives the deviations of the slip gauges measured at an environmental
temperature of (20,0 ± 0,5) °C.

Gauge no.   Nominal value (mm)   Measured deviation (µm)

1           2                    +0,27
2           2                    −0,34
3           5                    −0,08
4           10                   +0,18
5           20                   +0,22
6           20                   −0,47
7           50                   −0,3
8           100                  +0,5

The expanded uncertainty in all measured deviations is 0,05 µm.

The stated expanded uncertainties have been determined for a coverage factor of 2,
equivalent to a confidence level of approximately 95%, assuming a normal probability
distribution.

3 Calibration of a DC voltage calibrator.

The emf at the output terminals of the calibrator was measured using a voltage
compensation bridge. The environmental conditions were as follows: temperature (23,3 ±
0,5) °C and relative humidity (50 ± 5) %.

Range (V)   Voltage set on     Measured actual   Deviation of      Uncertainty in
            calibrator (V)     voltage (V)       calibrator (V)    deviation (V)

2           0,000 0            0,000 10          0,000 10          0,000 05
2           1,900 0            1,899 63          −0,000 37         0,000 10
2           −1,900 0           −1,899 44         0,000 56          0,000 10
20          0,000              0,000 24          0,000 2           0,000 05
20          19,000             19,002 6          0,002 6           0,000 30
20          −19,000            −18,999 8         0,000 2           0,000 30
200         0,00               0,000 57          0,000 57          0,000 05
200         190,00             190,072           0,072             0,000 2
200         −190,00            −190,039          −0,039            0,000 2

The uncertainties are stated for a confidence level of approximately 99% (coverage factor =
3).

4 A test result on water samples.

The measured dioxine concentration in the samples is given in the table below.

Sample number   Dioxine concentration (µg/l)   Relative uncertainty (%)

A36             0,22                           4,5
G15             1,56                           1,5
K22             2,21                           1,0

The stated relative uncertainty is expressed as a percentage of the measured value and has
been evaluated for an assumed normal probability distribution with effective degrees of
freedom of 8, using a coverage factor of 1,86, resulting in a confidence level of approximately
90%.

ABSOLUTE AND RELATIVE UNCERTAINTY

In most cases an absolute uncertainty is to be preferred, that is an uncertainty in the same unit as
the measured result.

Relative uncertainties can be useful sometimes, but they can lead to confusion, so great care must
be taken to ensure that their meaning cannot be misunderstood (see example 4).

The relationship between the absolute and relative uncertainty is:

relative uncertainty = absolute uncertainty / value of the measurand.

As a consequence the relative uncertainty is dimensionless: it has no unit and can be expressed as a
pure fraction, a percentage, in parts per million (ppm) or parts per billion (ppb).

N.B: In the uncertainty analysis following the GUM procedure only absolute uncertainties
should be used for the uncertainty budget, so if any of the input quantities is given with a
relative uncertainty, this has to be converted into an absolute uncertainty first.

CONSEQUENCES OF THE GUM REQUIREMENTS FOR INDUSTRY

IS UNCERTAINTY EVALUATION AFFORDABLE?


At this point a very relevant question may be raised: can the cost of such an involved and time-
consuming exercise be justified?

The answer is very much dependent on the situation. For a specialised calibration or test laboratory
the answer is definitely yes: if such a laboratory wants to deliver the required quality of their
products, they will have to make sure that their uncertainty statements are in order. Their product
is a certificate with a measurement result, so measuring, calibrating and evaluating uncertainties is
their core activity. For them the method described in the GUM is the standard and they will need to
have the expertise for that.

In service and calibration departments of production units the resources are more restricted and
some compromises are inevitable.

As the tolerances for products and parts become ever more stringent and the demand for quality
assurance becomes ever louder – today ISO 9000 certification is a must for any company that
wishes to sell on the global market – the need for good quality control can only be satisfied through
accurate measurements, including a thorough assessment of the uncertainty. So:

Can we afford not to have knowledge on measurement and uncertainty evaluation?

The result of incompetence in metrology and uncertainty evaluation is non-conformance of


products. The consequences of delivering non-conforming products are serious: loss of contracts,
damage to one’s reputation, loss of existing and potential customers, withdrawal of accreditation or
certification. Draw your own conclusion.

SOFTWARE TOOLS FOR METROLOGY


There are several software products on the market that can help the weary engineer or technician
to get on top of the uncertainty evaluation process, or other metrology tasks. In the first place the
general spreadsheet programs contain functions and automatic routines to calculate averages and
standard deviations and can be used to calculate the total uncertainty budget. Some more
sophisticated measuring instruments have built-in computing power that provides the user with
the same kind of information.

There are also specialised programs that assist in various metrological calculations. Many
measuring systems come with computer programs that not only automate the measuring process,
but also include verification tests, automatic calibration routines and uncertainty evaluations.

Much of the metrology software is available on the internet:


Program name             Description                                          Freeware                                Available from
GUM Workbench            Complete uncertainty analysis program according      Demo version only: only the included    www.gum.dk or
                         to the GUM; includes examples                        examples can be printed; your own       www.metrodata.de
                                                                              work cannot be printed or saved
Uncertainty Calculator   Spreadsheet-like uncertainty budget program          Yes                                     http://metrologyforum.tm.agilent.com/download.html
Tolerance Calculator     Calculation of tolerances, test-uncertainty          Yes                                     idem
                         ratios, etc.
Mismatch Calculator      Tool for calculating accuracy of microwave power     Yes                                     idem
                         and impedance measurements
Conversion Buddy         Tool for converting from one unit into another,      Yes                                     idem
                         including associated uncertainty
Expression Buddy         Tool for determining sensitivity coefficients        Yes (included in Conversion Buddy)      idem
Variation                Demo program to illustrate SPC principles                                                    idem
VNA uncertainty          Spreadsheet for MS Excel for calculation of          Yes                                     idem
                         uncertainties in vector network analysers
NAPT Unit Converter      Unit conversion tool                                 Yes                                     www.callabmag.com
Dalby Calibration        Calibration management program                       Free 30-day trial; unit conversion      www.dalbydata.dk
                                                                              program free
Uncertainty Analyzer     Comprehensive program for uncertainty analysis       No                                      idem
                         according to the GUM
SPCView                  Plug-in for Uncertainty Analyzer for Statistical     No                                      idem
                         Process Control
Accuracy Ratio           Measurement decision risk analysis tool              No                                      idem
IntervalMax              Determination of recalibration intervals             Method A3 interval tester is a          idem
                                                                              freeware demo version
Cat B degrees of         Estimation of degrees of freedom for type-B          Yes                                     idem
freedom estimator        components in uncertainty budget

Last but not least, the national metrology institutes and the regional and international metrology
organisations can provide expert advice and calibration services. There is intensive cooperation
and harmonisation between the metrology institutes and accreditation organisations, to assist
industry in the difficult but necessary task of maintaining, improving and assuring the quality of
their products.

APPENDIX

TYPES OF ERRORS
Grave/gross errors
Grave errors are caused by the negligence of the operator or the choice of inadequate
equipment or measurement method, e.g. the wrong reading of results (reading
28.3 instead of 23.8 on a digital instrument). Grave errors also occur due to the
incompetence of the measurer, or due to incorrect calculations. Grave errors are
avoided with good knowledge and attention during the measurements, and a proper
selection of measurement equipment and the measurement procedure.
Systematic errors
Systematic errors generally occur because of imperfections of the scale, measure,
measuring object, and measurement methods. For example, systematic errors are caused by
connected instruments because they need a little power to function: voltmeter
internal impedance can cause errors in resistance measurements, and ammeters cause
errors in electrical current measurements. Systematic errors also arise due to
measurable environmental influences (humidity, pressure, temperature, magnetic field,
etc.). An important characteristic of systematic errors is that they have a constant value
and sign, and these can be taken into account when correcting the measurement result.
Systematic errors are avoided by calibrating the equipment and using proper
procedures.
Random errors
Random errors result from small, varying changes that occur in the standards,
measures, measurement laboratory, and equipment. These changes affect each
individual measurement with a different size and sign each time, causing
measurement results to scatter. Random errors are
minimized by taking repeated measurements and then calculating the mean.

TRANSDUCERS
A transducer is a device that converts energy from one form into another.
Transducer terminology
The following terms are used to specify the performance of a transducer:
Accuracy is the term used to relate the output of an instrument to the true value of its
input, with a specified standard deviation.
Repeatability is the closeness of agreement of a group of output values for a constant
input, under given environmental conditions.
Resolution is the smallest increment in the input that can be detected with certainty by
the transducer.
Sensitivity is the ratio of the change in the magnitude of the output to the
corresponding change in the magnitude of the input.
Linearity is a measure of the constancy of the sensitivity of the transducer over the
entire range of input values.
Dead zone is the largest input change to which the transducer fails to respond.
Hysteresis is the algebraic difference between the average output errors corresponding
to input values, when the latter are approached from the maximum and minimum
possible input settings.
Drift is the unidirectional variation in the transducer output which is not caused by any
changes in its input.
Reproducibility is the closeness of individual measurements of the same quantity that
are measured in changed circumstances (different measurers in different laboratories,
using other measurement methods, instruments, places, conditions, etc.).
Transducers may be divided into two main groups: active and passive transducers.
Passive transducers require an external power supply to effect the conversion of one
form of signal into another, e.g. resistive, inductive, capacitive, photoconductive, etc.
Active transducers do not require an external energising source to operate; they are
self-generating devices, e.g. photovoltaic, piezoelectric, thermoelectric, electromagnetic,
etc.
An electrical transducer is a sensing device by which a physical, mechanical or optical
quantity to be measured is transformed directly, with a suitable mechanism, into an
electrical voltage or current proportional to the input measurand.
Advantages of an electrical transducer:
a) The electrical output can be amplified to any desired level.
b) The output can be indicated and recorded remotely at a distance from the
sensing medium.
c) The output can be modified to meet the requirements of the indicating or
controlling equipment
d) The signals can be conditioned or mixed to obtain any combination with outputs
of similar transducers or control signals as in an air data computer or adaptive
control systems.
e) The size and shape of the transducer can be suitably designed to achieve the
optimum weight and volume.
Examples of mechanical transducers are the dial gauge or aneroid barometer.
Mechanical transducers possess high accuracy, ruggedness, relatively low cost, and
operate without any external power supplies.
Basic requirements of a transducer are:
a) Ruggedness: Ability to withstand overloads, with safety stops for overload
protection.
b) Linearity : ability to reproduce input-output characteristics symmetrically and
linearly.
c) Repeatability: ability to reproduce the output signal exactly when the same
measurand is applied repeatedly under same environmental conditions.
d) Convenient instrumentation: sufficiently high analogue output signal with high
signal to noise ratio; digital output preferred in most cases.
e) High stability and reliability: minimum error in measurement, unaffected by
temperature, vibration, and other environmental variations.
f) Good dynamic response: output is faithful to input when taken as a function of
time.
g) Excellent mechanical characteristics that can affect the performance in static,
quasi-static, and dynamic states.
