CONCEPT OF MEASUREMENT:
General concept – Generalised measurement system: Measurement is the comparison of a
given quantity with a predetermined standard value adopted as a unit.
There are two important requirements of the measurement:
• The standards used for comparison must be accurate and internationally accepted
• The apparatus or instrument and the method used for the comparison must be provable
Need for measurement:
• To ensure that the part to be measured conforms to the established standard.
• To meet the interchangeability of manufacture.
• To provide customer satisfaction by ensuring that no faulty product reaches the customers.
• To coordinate the functions of quality control, production, procurement & other
departments of the organization.
• To judge the possibility of making some of the defective parts acceptable after minor
repairs.
Methods of Measurement:
1) Method of direct measurement: The value of the quantity to be measured is obtained
directly without the necessity of carrying out supplementary calculations based on a functional
dependence of the quantity to be measured in relation to the quantities actually measured.
Example: Weight of a substance is measured directly using a physical balance.
2) Method of indirect measurement: The value of the quantity is obtained from
measurements carried out by direct method of measurement of other quantities, connected
with the quantity to be measured by a known relationship. Example: Weight of a substance is
measured by measuring the length, breadth & height of the substance directly and then by
using the relation: Weight = Length × Breadth × Height × Density.
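The indirect method above can be sketched numerically; the dimensions and the density below are illustrative values, not from the text:

```python
# Indirect measurement: weight (mass) is obtained from directly measured
# length, breadth and height together with a known density.
length_m = 0.10                # directly measured, metres (illustrative)
breadth_m = 0.05
height_m = 0.02
density_kg_per_m3 = 7850.0     # assumed density (e.g. steel)

# Known relationship: Weight = Length x Breadth x Height x Density
weight_kg = length_m * breadth_m * height_m * density_kg_per_m3
print(round(weight_kg, 3))  # 0.785
```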
3) Method of measurement without contact: The sensor is not placed in contact with the
object whose characteristics are being measured.
4) Method of combination measurement closed series: The results of direct or indirect
measurement or different combinations of those values are made use of & the corresponding
system of equations is solved.
5) Method of fundamental measurement: Based on the measurements of base quantities
entering into the definition of the quantity.
6) Method of measurement by comparison: Based on the comparison of the value of a
quantity to be measured with a known value of the same quantity (direct comparison), or a
known value of another quantity which is a function of the quantity to be measured (indirect
comparison).
7) Method of measurement by substitution: The value of a quantity to be measured is
replaced by a known value of the same quantity, so selected that the effects produced in the
indicating device by these two values are the same (a type of direct comparison).
8) Method of measurement by transposition : The value of the quantity to be measured is
in the beginning, balanced by a first known value A of the same quantity, then the value of the
quantity to be measured is put in place of this known value and is again balanced by another
known value B. If the position of the element indicating equilibrium is the same in both
cases, the value of the quantity measured is X = √(A × B), which reduces to A when A = B.
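As a sketch of the arithmetic (illustrative values; the square-root relation follows from the usual unequal-arm balance analysis):

```python
import math

# Transposition: the unknown X first balances known value A; after swapping
# positions, X is balanced by known value B.  Equal equilibrium positions in
# both cases imply X = sqrt(A * B); when A = B this reduces to X = A.
A = 9.8    # first known balancing value (illustrative)
B = 10.2   # second known balancing value (illustrative)
X = math.sqrt(A * B)
print(round(X, 3))  # 9.998
```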
9) Method of differential measurement: Based on the comparison of the quantity to be
measured with a quantity of the same kind, with a value known to be slightly different from
that of the quantity to be measured, and the measurement of the difference between the values
of these two quantities.
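A minimal numeric sketch of the differential method, using illustrative values (e.g. a comparator set with a slip-gauge reference):

```python
# Differential measurement: the comparator indicates only the small
# difference between the workpiece and a reference of nearly the same value.
reference_mm = 25.000    # known reference value (illustrative)
difference_mm = 0.012    # small difference indicated by the comparator
measured_mm = reference_mm + difference_mm
print(round(measured_mm, 3))  # 25.012
```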
10) Method of measurement by complement: The value of the quantity to be measured is
complemented by a known value of the same quantity, selected in such a way that the sum of
these two values is equal to a certain value of comparison fixed in advance.
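The complement method can be sketched as follows (illustrative values; the comparison value is fixed in advance):

```python
# Complement method: unknown + known complement = fixed comparison value,
# so the unknown is recovered by subtraction.
fixed_comparison = 100.0   # value of comparison fixed in advance (illustrative)
known_complement = 63.7    # known value complementing the unknown
unknown = fixed_comparison - known_complement
print(round(unknown, 1))  # 36.3
```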
11) Method of measurement by interpolation : It consists of determining value of the
quantity measured on the basis of the law of correspondence & known values of the same
quantity, the value to be determined lying between two known values.
12) Method of measurement by extrapolation : It consists of determining the value of the
quantity measured on the basis of the law of correspondence & known values of the same
quantity, the value to be determined lying outside the known values.
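Assuming the law of correspondence is linear, both interpolation and extrapolation reduce to the same two-point formula; the known values below are illustrative:

```python
# Linear law of correspondence through two known values (x1, y1), (x2, y2).
def linear_estimate(x, x1, y1, x2, y2):
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

# Interpolation: the value to be determined lies between the known values.
print(linear_estimate(15.0, 10.0, 100.0, 20.0, 200.0))  # 150.0
# Extrapolation: the value to be determined lies outside the known values.
print(linear_estimate(25.0, 10.0, 100.0, 20.0, 200.0))  # 250.0
```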
Generalized measuring system: A generalized measuring system consists of the following
common elements:
1. Primary sensing element
2. Variable conversion element
3. Variable manipulation element
4. Data transmission element
5. Data processing element
6. Data presentation element
Basic units in SI system:
1) For Length : Metre (m) which is equal to 1650763.73 wavelengths in vacuum of the red-
orange radiation corresponding to the transition between the levels 2p10 & 5d5 of the
krypton-86 atom. (Definition by wavelength standard) By the line standard, the metre is the
distance between the axes of two lines engraved on a polished surface of the platinum-iridium
bar 'M' (90% platinum & 10% iridium) kept at the Bureau of Weights & Measures (BIPM) at
Sevres near Paris at 0 °C, the bar kept under normal atmospheric pressure, supported by two
rollers of at least 1 cm diameter symmetrically situated in the same horizontal plane at a
distance of 588.9 mm (Airy points) so as to give minimum deflection.
2) For Mass: Kilogram (kg) which is equal to the mass of International prototype of the
kilogram.
3) For Time : Second (s) which is equal to the duration of 9192631770 periods of the radiation
corresponding to the transition between the two hyperfine levels of the ground state of the
Caesium 133 atom.
4) For Current : Ampere (A) is that constant current which, if maintained in two straight parallel
conductors of infinite length of negligible circular cross section & placed one metre apart in
vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newton per metre
of length.
5) For Temperature: Kelvin (K) is the fraction 1/273.16 of the thermodynamic temperature of
the triple point of water.
6) For Luminous intensity: Candela (cd) is the luminous intensity in the perpendicular direction
of a surface of 1/600,000 m² of a black body at the temperature of freezing platinum under
a pressure of 101325 N/m².
7) For amount of substance: Mole (mol) is the amount of substance of a system which
contains as many elementary entities as there are atoms in 0.012 kg of Carbon-12.
Supplementary SI units: 1) For Plane angle: Radian (rad) 2) For Solid angle: Steradian (sr)
Derived SI units:
1) For Frequency: Hertz (1 Hz = 1 cycle per second)
2) For Force: Newton (1 N = 1 kg·m/s²)
3) For Energy: Joule (1 J = 1 N-m)
4) For Power: Watt (1 W = 1 J/s)
Measuring instruments: A broad classification of instruments, based on the application, the
mode of operation, the manner of energy conversion and the nature of the output signal, is
given below:
1. Deflection and null type instruments
2. Analog and digital instruments
3. Active and passive instruments
4. Automatic and manually operated instruments
5. Contacting and non-contacting instruments
6. Absolute and secondary instruments
7. Intelligent instruments.
Performance of instruments:
All instrumentation systems are characterized by their system characteristics or system
response, which has two basic aspects: static and dynamic. If the instrument is required to
measure a condition that does not vary with time, the static characteristics apply, while for
measurement of a time-varying process variable the dynamic characteristics are more
important.
Static response: The static characteristics of an instrument are considered for instruments
used to measure unvarying process conditions.
Dynamic response: The behaviour of an instrument under such time-varying input-output
conditions is called the dynamic response of the instrument. The analysis of such
dynamic response is called dynamic analysis of the measurement system.
Dynamic quantities are of two types:
1. Steady-state periodic
2. Transient
Terms in Measurement:
Sensitivity: Sensitivity of the instrument is defined as the ratio of the magnitude of the output
signal to the magnitude of the input signal.
It denotes the smallest change in the measured variable to which the instrument
responds.
Sensitivity has no unique unit; its unit depends on the instrument or measuring
system.
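For example (illustrative numbers, in the spirit of a dial indicator whose pointer magnifies spindle movement):

```python
# Sensitivity = magnitude of output change / magnitude of input change.
input_change_mm = 0.01    # change in the measured variable (illustrative)
output_change_mm = 1.0    # resulting change in the indication
sensitivity = output_change_mm / input_change_mm
print(round(sensitivity))  # 100
```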
Readability: Readability is a word frequently used in analog measurement. Readability
depends on both the instrument and the observer.
Readability is defined as the closeness with which the scale of an analog instrument can
be read.
It is the susceptibility of a measuring instrument to having its indications converted to a
meaningful number, and implies the ease with which observations can be made accurately.
For better readability, the scale spacing should be as large as possible.
Accuracy: Accuracy may be defined as the ability of instruments to respond to a true value of a
measured variable under the reference conditions. It refers to how closely the measured value
agrees with the true value.
Precision: Precision is defined as the degree of exactness for which an instrument is
designed or intended to perform. It refers to the repeatability or consistency of measurements
carried out under identical conditions at short intervals of time. It can also be
defined as the ability of the instrument to reproduce a group of readings of the same
measured quantity under the same conditions.
Correction: Correction is defined as a value which is added algebraically to the uncorrected
result of the measurement to compensate for an assumed systematic error.
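Numerically (illustrative values), the correction is simply added with its sign:

```python
# Correction is added algebraically to the uncorrected result; it may be
# negative as well as positive.
uncorrected_mm = 24.985   # uncorrected result of measurement (illustrative)
correction_mm = 0.015     # correction for an assumed systematic error
corrected_mm = uncorrected_mm + correction_mm
print(round(corrected_mm, 3))  # 25.0
```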
Calibration: Calibration is the process of determining and adjusting an instrument's accuracy
to make sure it is within the manufacturer's specifications. It is the process of determining
the values of the quantity being measured corresponding to a pre-established arbitrary scale;
in effect, it is the measurement of the measuring instrument itself. The quantity to be
measured is the 'input' to the measuring instrument. The 'input' affects some 'parameter',
which is the 'output' and is read out. The amount of 'output' is governed by that of the 'input'.
Before we can read any instrument, a 'scale' must be framed for the 'output' by successive
application of already standardized 'input' signals. This process is known as calibration.
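A minimal sketch of framing a scale by successive application of standardized inputs; a linear instrument and two calibration points are assumed here (illustrative readings):

```python
# Two-point calibration: fit output = gain * raw + offset from two
# standardized input signals and the instrument's raw readings.
std_inputs = (0.0, 50.0)     # already standardized input signals (illustrative)
raw_readings = (2.0, 102.0)  # corresponding uncalibrated instrument readings

gain = (std_inputs[1] - std_inputs[0]) / (raw_readings[1] - raw_readings[0])
offset = std_inputs[0] - gain * raw_readings[0]

def measured_value(raw):
    """Convert a raw reading to the measured quantity via the framed scale."""
    return gain * raw + offset

print(measured_value(52.0))  # 25.0
```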
Interchangeability: A part which can be substituted for a component manufactured to the
same shape and dimensions is known as an interchangeable part. The operation of substituting
the part for similar manufactured components of the same shape and dimensions is known as
interchangeability.
Constant of a measuring instrument: The factor by which the indication of the instrument shall
be multiplied to obtain the result of measurement.
Nominal value of a physical measure: The value of the quantity reproduced by the physical
measure and is indicated on that measure.
Conventional true value of a physical measure: The value of the quantity reproduced by the
physical measure, determined by a measurement carried out with the help of measuring
instruments, which show a total error which is practically negligible.
Standard: It is the physical embodiment of a unit. For every kind of quantity to be measured,
there should be a unit to express the result of the measurement & a standard to enable the
measurement.
Types of Errors:
A) Error of Measurement:
1) Systematic error: It is the error which during several measurements, made under the
same conditions, of the same value of a certain quantity, remains constant in absolute value
and sign or varies in a predictable way in accordance with a specified law when the conditions
change. The causes of these errors may be known or unknown. The errors may be constant or
variable. Systematic errors are regularly repetitive in nature.
2) Random error: This error varies in an unpredictable manner in absolute value & in sign
when a large number of measurements of the same value of a quantity are made under
practically identical conditions. Random errors are non-consistent. Random errors are
normally of limited time duration.
3) Parasitic error: It is the error, often gross, which results from incorrect execution of the
measurement.
B) Instrumental error:
1) Error of a physical measure: It is the difference between the nominal value and the
conventional true value reproduced by the physical measure.
2) Error of a measuring mechanism: It is the difference between the value indicated by the
measuring mechanism and the conventional true value of the measured quantity.
3) Zero error: It is the indication of a measuring instrument for the zero value of the
quantity measured.
4) Calibration error of a physical measure: It is the difference between the
conventional true value reproduced by the physical measure and the nominal value of that
measure.
5) Complementary error of a measuring instrument: It is the error of a measuring
instrument arising from the fact that the values of the influence quantities are different from
those corresponding to the reference conditions.
6) Error of indication of a measuring instrument: It is the difference between the measured
values of a quantity, when an influence quantity takes successively two specified values,
without changing the quantity measured.
7) Error due to temperature: It is the error arising from the fact that the temperature of the
instrument does not maintain its reference value.
8) Error due to friction: It is the error due to the friction between the moving parts of the
measuring instruments.
9) Error due to inertia: It is the error due to the inertia (mechanical, thermal or otherwise)
of the parts of the measuring instrument.
C) Error of observation:
1) Reading error: It is the error of observation resulting from incorrect reading of the
indication of a measuring instrument by the observer.
2) Parallax error: It is the reading error which is produced, when, with the index at a certain
distance from the surface of scale, the reading is not made in the direction of observation
provided for the instrument used.
3) Interpolation error: It is the reading error resulting from the inexact evaluation of the
position of the index with regard to two adjacent graduation marks between which the index is
located.
D) Based on nature of errors:
1) Systematic error: (already discussed)
2) Random error: (already discussed)
3) Illegitimate error: As the name implies, it should not exist. These include mistakes and
blunders, computational errors and chaotic errors. Chaotic errors are random errors but,
unlike the latter, they create chaos in the final results.
E) Based on control:
1) Controllable errors: The sources of error are known and it is possible to have a control on
these sources. These can be calibration errors, environmental errors and errors due to non-
similarity of condition while calibrating and measuring.
Calibration errors: These are caused due to variation in the calibrated scale from its normal
value. The actual length of standards such as slip gauges will vary from the nominal value by a
small amount. This will cause an error of constant magnitude.
Environmental (Ambient/Atmospheric Condition) Errors: International agreement has been
reached on the ambient condition: a temperature of 20 °C, a barometric pressure of 760 mm of
Hg and a vapour pressure of 10 mm of Hg (humidity). Instruments are calibrated at these
conditions. If there is any variation in the ambient condition, errors may creep into the final
results. Of the three, the temperature effect is the most considerable.
Stylus pressure errors: Though the pressure involved during measurement is generally small,
this is sufficient enough to cause appreciable deformation of both the stylus and the work
piece. This will cause an error in the measurement.
Avoidable errors: These errors may occur due to parallax in the reading of measuring
instruments. This occurs when the scale and pointer are separated relative to one another. The
two common practices to minimise this error are: i) Reduce the separation between the scale
and pointer to minimum. ii) A mirror is placed behind the pointer to ensure normal reading of
the scale in all the cases. These avoidable errors occur also due to non-alignment of work
piece centers, improper location of measuring instruments, etc.
2) Non-controllable errors: These are random errors which are not controllable.