Introduction to Measurement

Measurement and Quality Control (ME 361)

3.00 Contact Hours, 3.00 Credit Hours


Measurement
Text and Reference Book: Engineering Metrology by R.K. Jain
What is Metrology?
Metrology is the science of measurement embracing both experimental and theoretical
determinations at any level of uncertainty in any field of science and technology
Engineering metrology is restricted to the measurement of length, angles and other quantities
which are expressed in linear or angular terms.
Ex: The slideway of a machine tool (e.g., a lathe) must have specific dimensions, angles and
flatness to perform its desired function.
Depending upon the field of application, metrology is divided into:
1. Scientific Metrology: Deals with the organization and development of measurement standards
and with their maintenance (highest level).
2. Industrial Metrology: Ensures the adequate functioning of measuring instruments used in
production and testing processes. Metrological activities, testing and measurement are
valuable inputs to quality work in industrial activities.
3. Legal Metrology: Concerned with the accuracy of measurements where these influence the
transparency of economic transactions, and health and safety, e.g., the volume of petrol
purchased at a pump or the weight of prepackaged flour. It seeks to protect the public against
inaccuracy in trade.
4. Fundamental Metrology: may be described as scientific metrology, supplemented by those
parts of legal and industrial metrology that require scientific competence. It signifies the highest
level of accuracy in the field of metrology.

Measurement
Procedure in which an unknown quantity is compared to a known standard, using an
accepted and consistent system of units.
 The measurement may involve a simple linear rule to scale the length of a part
 Or it may require a sophisticated measurement of force versus deflection during a tension
test
 Measurement provides a numerical value of the quantity of interest, within certain limits
of accuracy and precision
 During the process of production, measurements are essential because it is through them
that the parameter values of products can be known and their qualities specified.
Why Measurements?
Measurements are essential in order to:
(i) Ensure that the part to be measured conforms to the established standard.
(ii) Judge the possibility of making some of the defective parts acceptable after minor repairs.
(iii) Provide customer satisfaction by ensuring that no faulty product reaches the customers.
(iv) Coordinate the functions of quality control, production, procurement and other departments of
the organization, since measurements provide a common basis for comparison.
(v) Meet the principle of interchangeability of manufacture.

Objectives of Measurement and Metrology

Although the basic objective of a measurement is to provide the required accuracy at a minimum
cost, metrology has further objectives in a modern engineering plant, which are:
 To minimize the cost of inspection by efficient and effective use of available facilities,
 To minimize the cost of rejection and re-work through application of statistical quality control
techniques.
 To maintain the accuracies of measurement.
 To determine the process capabilities and ensure that these are better than relevant component
tolerances.
 To do complete evaluation of newly developed products.

Modes of Measurement
1. Primary measurement
—Direct observation and comparison
—No conversion involved
Ex: Measurement of length, height, depth, width, etc.
2. Secondary measurement
— Indirect method
— One conversion involved
Ex. Pressure or Temperature measurement

3. Tertiary measurement
— Indirect method
— Two conversions involved
Ex: Measurement of the speed of a rotating shaft
Fundamental elements of Measurement
There are three important fundamental elements of measurement
i) Measurand: The physical quantity or property like length, angle, diameter, thickness etc. to be
measured.
ii) Reference: The physical quantity or property with which the measurand is quantitatively compared.
iii) Comparator: The means of comparing measurand with some reference.
Suppose a fitter has to measure the length of an M.S. flat bar. He first lays his rule along the bar,
then carefully aligns the zero end of the rule with one end of the bar, and finally compares the
length of the bar with the graduations on the rule by eye.
In this example, the length of the M.S. flat bar is the measurand, the steel rule is the reference,
and the eye can be considered the comparator.
Methods of Measurements:
The various methods of measurement are:
(i) Direct measurement: The value of the quantity to be measured is obtained directly without
supplementary calculations based on a functional dependence of the quantity to be measured in
relation to the quantities actually measured.
E.g.: Measurements by scales, vernier calipers, micrometers for linear measurement, bevel
protractor for angular measurement, etc.
Example: Weight of a substance is measured directly using a physical balance.
 Human limitations can affect the accuracy of measurement.
 Results obtained are not always dependable or accurate.
(ii) Indirect measurement: The value of the quantity is obtained from measurements carried out
by direct method of measurement of other quantities, connected with the quantity to be measured
by a known relationship.
Example: The weight of a substance is obtained by measuring its length, breadth and height
directly and then using the relation Weight = Length × Breadth × Height × Density (a sketch of
this chain follows the bullets below).
 Consists of a chain of devices which form a measuring system
 Consists of a detector element to detect, a transducer, and a unit to indicate or record the
processed signal
 Fairly accurate
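A minimal sketch of this indirect chain, assuming SI units; the density value (approximate, for mild steel) and the factor g (to convert the computed mass to weight in newtons) are assumptions added for illustration:

```python
# Indirect measurement: weight inferred from directly measured dimensions.
G = 9.81          # acceleration due to gravity, m/s^2 (assumed constant)
DENSITY = 7850.0  # approximate density of mild steel, kg/m^3 (assumed)

def indirect_weight(length_m: float, breadth_m: float, height_m: float) -> float:
    """Return weight in newtons from directly measured dimensions."""
    volume = length_m * breadth_m * height_m  # m^3, from direct measurements
    mass = volume * DENSITY                   # kg, via the known relationship
    return mass * G                           # N

# Hypothetical 100 mm x 50 mm x 10 mm M.S. flat bar
print(indirect_weight(0.100, 0.050, 0.010))  # ~3.85 N
```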
(iii) Absolute / Fundamental Method: Based on the measurement of the base quantities used to
define a particular quantity, e.g., measuring a quantity (length) directly in accordance with the
definition of that quantity (the definition of length in units).
(iv) Comparative Method: Based on the comparison of the value of a quantity to be measured
with a known value of the same quantity (direct comparison), or a known value of another quantity
which is a function of the quantity to be measured (indirect comparison).
(v) Substitution Method: The quantity is measured by direct comparison on an indicating device
by replacing the measurable quantity with another which produces the same effect on the indicating
device, e.g., measuring a mass by means of the Borda method.
(vi) Complementary Method: The value of quantity to be measured is combined with known
value of the same quantity.
Ex: Volume determination of a solid by liquid displacement

Fig.: Complementary Method and Transposition Method

(vii) Transposition Method: The quantity to be measured is first balanced by an initial known value
P of the same quantity. Then the quantity being measured is put in place of that known value and
is balanced again by another known value Q. If the position of the element indicating equilibrium
is the same in both cases, the value of the quantity to be measured is √(PQ), e.g., determination
of a mass by means of a balance and known weights, using the Gauss double-weighing method
(see the sketch below).
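A minimal numerical sketch of the transposition idea, assuming P and Q are the two known balancing values obtained as described; the readings below are hypothetical:

```python
import math

def transposition_value(p: float, q: float) -> float:
    """The quantity balances against P, then (after transposition) against Q;
    its value is the geometric mean, which cancels a faulty balance-arm ratio."""
    return math.sqrt(p * q)

# Hypothetical balance readings in grams
print(transposition_value(100.4, 99.6))  # ~100.0 g
```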
(viii) Coincidence Method / Differential Method: There is a very small difference between the
value of the quantity to be measured and the reference; it is determined by observing the
coincidence of certain lines or signals, e.g., measurement by vernier calipers (LC × vernier scale
reading) and micrometer (LC × circular scale reading).
(ix) Deflection Method: The value of the quantity to be measured is directly indicated by the
deflection of a pointer on a calibrated scale, e.g., dial indicator.
(x) Method of Null Measurement: Here, difference between measurand value and known value
of same quantity with which it is compared is brought to zero. e.g., measurement by potentiometer.

Inspection
Procedure of checking all materials, products or component parts at various stages during
manufacture, in which a part or product feature, such as a dimension, is examined to determine
whether or not it conforms to the design specification.
Many inspections rely on measurement techniques, while others use gauging methods
 Gauging determines simply whether the part characteristic meets or does not meet the
design specification
 Gauging is usually faster than measuring, but it provides less information about the
feature of interest
Need of inspection
1. To ensure that the part material or a component conforms to the established standard.
2. To meet the interchangeability of manufacture.
3. To control the performance of man/machine/process.
4. To maintain good customer relationship by ensuring that no faulty product reaches the
customer.
5. The results of inspection are forwarded to the manufacturing department, thus helping to
improve quality.
6. It produces parts of acceptable quality levels with reduced scrap and wastage, and judges
the possibility of reworking defective parts.
7. It provides a means of finding the problem areas where established standards are not being met.
8. It helps in purchasing good-quality raw materials, tools and equipment that govern the quality
of the finished products.
9. It leads to the development of precision measuring instruments.

Types of Inspection
Inspection involves the use of measurement and gauging techniques to determine whether a
product, its components, subassemblies, or materials conform to design specifications
Inspection divides into two types:
1. Inspection by variables - product or part dimensions of interest are measured by the
appropriate measuring instruments
2. Inspection by attributes – product or part dimensions are gauged to determine whether
or not they are within tolerance limits
Manual Inspection:
Inspection procedures are often performed manually
 The work is boring and monotonous, yet the need for precision and accuracy is high
 Hours may be required to measure the important dimensions of only one part
 Because of the time and cost of manual inspection, statistical sampling procedures are
often used to reduce the need to inspect every part
Sampling Inspection:
 When sampling inspection is used, the number of parts in the sample is usually small
compared to the quantity of parts produced
 Sample size may be 1% of production run
 Because not all of the items in the population are measured, there is a risk in any
sampling procedure that defective parts will slip through
 The risk can be reduced by taking a larger sample size
 In effect, less than 100% good quality must be tolerated as the price of using sampling
(a risk quantified in the sketch below)
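A minimal sketch of this sampling risk, assuming a simple accept-on-zero-defects plan and independent (binomial) sampling; the defect rate and sample sizes are hypothetical:

```python
def acceptance_probability(defect_rate: float, sample_size: int) -> float:
    """Probability that a random sample contains no defectives, so a lot
    that actually contains defects is accepted anyway."""
    return (1.0 - defect_rate) ** sample_size

# Hypothetical: 2% defectives; sample of 20 parts (1% of a 2000-part run)
print(acceptance_probability(0.02, 20))   # ~0.67: defects often slip through
print(acceptance_probability(0.02, 100))  # ~0.13: a larger sample reduces risk
```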
100% Inspection:
 Theoretically, the only way to achieve 100% good quality is by 100% inspection
 All defects are screened and only good quality parts are passed.
Measurement Errors:
What is Error?
It is the difference between the indicated or measured value and the true value.
 It is impossible to make a measurement with perfect accuracy.
 An error in measurement is not a mistake; it indicates how accurate an instrument is.
Error in Measurement = Measured Value − True Value
 The actual or true value is the theoretical size of a dimension, free from any error of
measurement, which helps in examining the errors in a measurement system that lead to
uncertainties.
 Generally, errors in measurements are classified into two types: one which should not
occur and can be eliminated by careful work and attention, and the other which is
inherent in the measuring process/system.
Therefore, the errors are either controllable or random in occurrence.
1. Absolute Error:- It is the algebraic difference between the result of measurement and
the value of comparison.
(a) True absolute error: algebraic difference between result of measurement and
conventional true value of the quantity measured.
(b) Apparent absolute error: if a series of measurements is made, the algebraic
difference between one of the measurements and the arithmetic mean.
Absolute Error = │Actual Value − Approximate Value│
2. Relative Error: It is the ratio of absolute error and the value of comparison (may be true
value or the arithmetic mean of a series of measurements) used for measurement.
Relative error = (Actual Value − Approximate Value) / Actual Value
3. Percentile Error: The relative error expressed in percentage form:

Percentile error = ((Actual Value − Approximate Value) / Actual Value) × 100%
E.g., if a number has the actual value 0.8597 and the approximate value 0.85, calculate the
absolute, relative and percentile errors (worked below).
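A minimal worked sketch of this exercise using the definitions above:

```python
actual = 0.8597
approx = 0.85

absolute = abs(actual - approx)  # 0.0097
relative = absolute / actual     # ~0.011283
percentile = relative * 100      # ~1.1283 %

print(f"absolute={absolute:.4f}, relative={relative:.6f}, "
      f"percentile={percentile:.4f}%")
```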
Types of Error:
During measurement several types of error may arise, these are
1. Static errors which includes
- Reading errors
- Alignment Error
- Characteristic errors
- Environmental errors.
2. Instrument loading errors.
3. Dynamic errors
1. Static errors: These result from the physical nature of the various components of the
measuring system, i.e., the limitations of the apparatus/instruments. They are due to intrinsic
imperfections in the hardware and apparatus compared to an ideal instrument, and can often be
reduced or eliminated by relatively simple techniques.
1.a. Reading errors: Reading errors apply exclusively to the read-out device; they have no
direct relationship with other types of errors within the measuring system.
Reading errors include parallax error and interpolation error.
They can be reduced or eliminated by relatively simple techniques; for example, a mirror behind
the readout pointer or indicator virtually eliminates parallax error.
Parallax Error: This error occurs when the line of sight is not perpendicular to the measuring
scale, i.e., when the scale and the pointer of an instrument are not in the same plane or the line
of vision is not normal to the measuring scale. A mirror behind the readout or pointer ensures
normal reading of the scale and virtually eliminates this type of error.
Refer Fig.
Let d = separation of the scale and the pointer,
D = distance between the pointer and the eye of the observer,
θ = angle which the line of sight makes with the normal to the scale,
PA = parallax error.
Then PA = d tan θ. Since θ is generally small, tan θ ≈ θ, so PA ≈ dθ. For least error, d should be
as small as possible; θ can be reduced to zero by placing a mirror behind the pointer, which
ensures normal reading of the scale (see the sketch below).
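A minimal numerical sketch of the relation PA = d tan θ ≈ dθ, with hypothetical values for d and θ:

```python
import math

def parallax_error(d_mm: float, theta_deg: float) -> float:
    """Parallax error PA = d * tan(theta); for small theta, PA ~= d * theta."""
    return d_mm * math.tan(math.radians(theta_deg))

# Hypothetical: pointer 1.5 mm above the scale, line of sight 5 deg off normal
print(parallax_error(1.5, 5.0))  # ~0.131 mm reading error
print(parallax_error(1.5, 0.0))  # 0.0 mm: mirror-aided normal viewing
```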

Interpolation error: This is the reading error resulting from the inexact evaluation of the position of
the index with respect to the two adjacent graduation marks between which the index is located.
How accurately a scale can be read depends upon the thickness of the graduation marks, the
spacing of the scale divisions and the thickness of the pointer used to give the reading.
Interpolation error can be tackled by using a magnifier over the scale in the vicinity of the pointer
or by using a digital read-out system.
1.b. Alignment Error: This occurs if the measuring instrument is not correctly aligned with
the direction of the desired measurement. Abbe's alignment principle should be followed to avoid
error due to alignment.
According to this principle, "the axis or line of measurement should coincide with the axis of the
measuring instrument or the line of the measuring scale".
If, while measuring the length of a workpiece, the
measuring scale is inclined to the true line of the
dimension being measured, there will be an error in the
measurement.
The length recorded will be more than the true length.
This error is called "cosine error". In many cases the
angle θ is very small and the error is negligible.
A combined cosine and sine error will occur if the
micrometer axis is not truly perpendicular to the axis of
the workpiece.
Referring to the figure, if
D = true diameter,
L = apparent length,
d = micrometer anvil diameter,
then D = L cos θ − d sin θ,
and the error = L − D
= L − (L cos θ − d sin θ)
= L(1 − cos θ) + d sin θ.
Errors of this nature are avoided by using anvils with
spherical ends (see the sketch below).
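A minimal sketch evaluating the combined cosine and sine error formula above, with hypothetical values for L, d and θ:

```python
import math

def combined_error(apparent_length: float, anvil_diameter: float,
                   theta_deg: float) -> float:
    """Combined cosine and sine error: L - D = L(1 - cos t) + d sin t."""
    t = math.radians(theta_deg)
    return apparent_length * (1 - math.cos(t)) + anvil_diameter * math.sin(t)

# Hypothetical: L = 25 mm, anvil diameter d = 6 mm, misalignment 1 degree
print(combined_error(25.0, 6.0, 1.0))  # ~0.109 mm, dominated by the sine term
```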

1.c. Characteristic Errors: The deviation of the output of the measuring system, under constant
environmental conditions, from the theoretically predicted performance or from the nominal
performance specification. Linearity, repeatability, hysteresis and resolution errors are examples
of this type.
1.d. Environmental Errors: These errors result from the effect of the surroundings, such as
temperature, pressure, humidity, etc., on the measuring system. External influences like magnetic
or electric fields, nuclear radiation, vibrations or shocks, etc. also lead to environmental errors.
They can be reduced by controlling the atmosphere according to the specific requirements.
2. Loading Errors: Loading errors result from the change in the measurand itself when it is being
measured (i.e., after the measuring system or instrument is connected for measurement).
Instrument loading error is the difference between the value of the measurand before and after the
measuring system is connected/contacted for measurement, e.g., the deformation of a soft
component under the contact pressure of a measuring instrument. The effect of instrument loading
error is unavoidable; therefore, the measuring system or instrument should be selected such that
its sensing element minimizes the loading error in the particular measurement involved.
3. Dynamic Errors (related to time): These errors are caused by time variation in the measurand
and result from the inability of a measuring system to respond faithfully to a time-varying
measurand. They are caused by inertia, damping, friction or other physical constraints in the
sensing, readout or display system. For cyclic or periodic variations in the measurand input,
dynamic error is characterized by the frequency and phase response, whereas for random or
transient inputs it is characterized by the time constant or response time.
For statistical study and the study of accumulation of errors, these errors can be broadly
classified into two categories
Systematic or controllable errors, and Random errors.
Systematic Errors / Controllable Errors / Bias Errors: Systematic errors are regularly repetitive
in nature, consistent and controllable in both their magnitude and sense. They result from improper
conditions or procedures that are consistent in action, and they can be determined and reduced if
attempts are made to analyze them. For example, suppose the first two millimetres of a ruler are
worn off and the user is not aware of it: everything he or she measures will be too short by two
millimetres, a systematic error.
Systematic errors include:
1. Calibration Errors: These are caused by variation of the calibrated scale from its nominal
value. The actual length of standards such as slip gauges and engraved scales will vary from the
nominal value by a small amount, causing an error of constant magnitude in measurement.
Sometimes instrument inertia and hysteresis effects do not allow the instrument to transmit the
measurement accurately. A drop in voltage along the wires of an electric meter may introduce an
error (called single transmission error) in measurement.
2. Ambient or Atmospheric Conditions (Environmental Errors): Variation of the atmospheric
conditions (i.e., temperature, pressure and moisture content) at the place of measurement from the
internationally agreed standard values (20 °C temperature and 760 mm of Hg pressure) can give
rise to error in the measured size of the component.
3. Stylus Pressure: Another common source of error is the pressure with which the workpiece is
pressed while measuring. Though the pressure involved is generally small, it is sufficient to cause
appreciable deformation of both the stylus and the workpiece. Ideally, the stylus should just touch
the workpiece; the effect is more prominent for soft workpieces.
Besides the deformation effect, the stylus pressure can also deflect the workpiece. Variations in
the force applied by the anvils of a micrometer on the work being measured result in differences
in its readings; in this case the error is caused by distortion of both the micrometer frame and the
workpiece.
4. Spatial Errors: These arise when a quantity varies in space but a measurement is taken at only
one location (e.g., temperature in a room: the top of a room is usually warmer than the bottom),
or from incorrect location of measuring instruments (such as placing a thermometer in sunlight
when attempting to measure air temperature).
5. Human Errors: Arise if a person consistently reads a scale on the low side.
6. Defective Equipment Errors: Arise if the instrument consistently reads too high or too low
due to some internal problem or damage.
Random Errors: Random errors are non-consistent; they occur randomly and are accidental in
nature. Such errors are inherent in the measuring system and are difficult to eliminate. Their
specific causes, magnitudes and sources cannot be determined from knowledge of the measuring
system or the conditions of measurement.
The possible sources of such errors are:
1. Small variations in the position of setting standard and work piece.
2. Slight displacement of lever joints of measuring instruments.
3. Operator error in scale reading.
4. Fluctuations in the friction of measuring instrument etc.
These errors are a matter of chance: unrepeatable, inconsistent errors resulting in scatter in the
output data. The random error of one data point is defined as the reading minus the average of the
readings (see the sketch below).
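A minimal sketch, assuming repeated readings of a known reference value, that splits the total error into a systematic part (mean minus true value) and random parts (each reading minus the mean); the readings are hypothetical:

```python
readings = [16.98, 17.03, 17.01, 16.99, 17.04]  # hypothetical repeat readings, mm
true_value = 16.95                               # known reference value, mm

mean = sum(readings) / len(readings)             # 17.01
systematic_error = mean - true_value             # +0.06: consistent bias
random_errors = [r - mean for r in readings]     # scatter about the mean

print(f"systematic (bias) error: {systematic_error:+.3f} mm")
print("random errors:", [f"{e:+.3f}" for e in random_errors])
```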
Comparison between Systematic Errors and Random Errors

Instrument calibration
Calibration is a comparison of instrument performance to standards of known accuracy.
 Calibration consists of comparing the output of the instrument or sensor under test against
the output of an instrument of known accuracy when the same input (the measured
quantity) is applied to both instruments
 Instrument calibration is a pre-measurement process and a very important consideration
in measurement systems.
 Calibration directly links customers’ measurement equipment to national and international
standards.
If the calibration is to be meaningful, the known input must itself be derived from a defined
standard.
Necessity for calibration:
 An instrument only conforms to stated static and dynamic patterns of behaviour after it has
been calibrated.
 During continued use, however, its behaviour will gradually diverge from the stated
specification for a variety of reasons, including mechanical wear and the effects of dirt,
dust, fumes and chemicals in the operating environment.
 If accuracy is to be maintained, the instruments must be checked and re-calibrated.
There will come a time, determined by practical knowledge, when the characteristics of the
instrument will have drifted from the standard specification by an unacceptable amount. When
this situation is reached, it is necessary to recalibrate the instrument to the standard
specification (a simple comparison check is sketched below).
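A minimal sketch of a calibration check, assuming pairs of (reference standard value, instrument reading) taken with the same input applied to both instruments; all values are hypothetical:

```python
# Hypothetical calibration points: (reference standard value, reading under test)
calibration_points = [(10.000, 10.012), (20.000, 20.015), (30.000, 30.021)]

for reference, reading in calibration_points:
    error = reading - reference  # instrument error at this point
    correction = -error          # to be added to future readings near this point
    print(f"at {reference:.3f}: error {error:+.3f}, correction {correction:+.3f}")
```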
Accuracy and Precision
When we talk about traceability of measurement and comparison with other measurement
methods, one needs to know the difference between two basic aspects of measurement:
– ACCURACY
– PRECISION
Accuracy: It is the degree to which the measured value agrees with the true value, i.e., the
maximum amount by which the result differs from the true value.
The accuracy of a measurement is the difference between your measurement and the accepted
correct answer: the bigger the difference, the less accurate your measurement.
 Example: Who is more accurate when measuring a book that has a true length of 17.0 cm?
Susan:
17.0 cm, 16.0 cm, 18.0 cm, 15.0 cm
Amy:
15.5 cm, 15.0 cm, 15.2 cm, 15.3 cm
Precision describes how closely measurements agree with each other and how carefully they
were made.
 It is the repeatability or reproducibility of the measurement: it shows how close the
measured values are to each other. If an instrument is not precise, there will be great
differences in a dimension measured again and again.
 Which set is more precise? (evaluated in the sketch below)
18.2 , 18.4 , 18.35
17.9 , 18.3 , 18.85
16.8 , 17.2 , 19.44
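A minimal sketch quantifying both ideas for the examples above: accuracy as the deviation of the mean from the true value, precision as the sample standard deviation (smaller spread = more precise). The data are taken from the text.

```python
import statistics

true_length = 17.0  # cm, true length of the book
susan = [17.0, 16.0, 18.0, 15.0]
amy = [15.5, 15.0, 15.2, 15.3]

for name, data in [("Susan", susan), ("Amy", amy)]:
    mean = statistics.mean(data)
    print(f"{name}: mean={mean:.2f}, accuracy error={mean - true_length:+.2f}, "
          f"spread={statistics.stdev(data):.2f}")
# Susan's mean (16.50) is nearer 17.0 -> more accurate;
# Amy's spread (0.21 vs 1.29)         -> more precise.

for s in ([18.2, 18.4, 18.35], [17.9, 18.3, 18.85], [16.8, 17.2, 19.44]):
    print(f"set {s}: spread={statistics.stdev(s):.3f}")
# The first set has the smallest spread, so it is the most precise.
```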
Some important terminologies used in measurement:
1. Sensitivity: It should be noted that sensitivity is a term associated with the measuring
equipment, whereas accuracy and precision are associated with the measuring process. Sensitivity
means the ability of a measuring device to detect small differences in the quantity being
measured. The higher this ability of detection, the more sensitive the instrument; but if
instruments are made more sensitive, they are more easily affected by external effects such as
temperature variations, vibrations, etc. For instance, if a very small change in voltage applied to
two voltmeters results in a perceptible change in the indication of one instrument and not in the
other, then the former is said to be more sensitive. If an instrument is more sensitive than
required, it becomes difficult for the operator to obtain a reliable reading.
2. Readability: It refers to the ease with which the readings of a measuring instrument can be
read; it is the susceptibility of a measuring device to have its indications converted into a
meaningful number. Fine and widely spaced graduation lines ordinarily improve readability.
If the graduation lines are very finely spaced, the scale will be more readable using a microscope,
but readability to the naked eye will be poor. To make micrometers more readable, they are
provided with a vernier scale. Readability, and hence accuracy of measurement, can also be
improved by the use of proper magnifying devices.
3. Repeatability (Ability to do the same thing over & over)
It is the ability of the measuring instrument to repeat the same results when measurements are
carried out
 By same observer
 With the same instrument
 Under the same conditions
 And the measurements are carried out within a short interval of time.
 It may be expressed quantitatively in terms of dispersion of the results.
4. Reproducibility: Reproducibility is the consistency of the pattern of variation in measurement,
i.e., the closeness of agreement between the results of measurements of the same quantity when
individual measurements are carried out
 By different observer
 By different methods
 Using different instruments
 Under different condition, location and times.
 It may also be expressed quantitatively in terms of dispersion of the results.
5. Magnification: Magnification means increasing the magnitude of output signal of measuring
instrument many times to make it more readable. The degree of magnification should bear
some relation to the accuracy of measurement desired and should not be larger than necessary.
Generally, the greater the magnification, the smaller the range of measurement. In a measuring
instrument, magnification may be mechanical, electrical, optical, pneumatic or a combination of
these. Mechanical magnification is obtained by means of levers or gear trains. In electrical
magnification, the change in inductance or capacitance of an electrical circuit, produced by a
change in the quantity being measured, is used to amplify the output of the measuring instrument.
Pneumatic magnification makes use of compressed air for amplifying the output of measuring
instruments.
6. Uncertainty: It is a parameter, associated with the result of a measurement that characterizes
the dispersion of the values that could reasonably be attributed to the measurand. It can also be
expressed as an estimate characterizing the range of values within which the true value of a
measurand lies. In cases where there is adequate information based on statistical distribution,
the estimate may be associated with a specified probability. In other cases, an alternative form
of numerical expression of the degree of confidence to be attached to the estimate may be
given. Hence, when specifying the uncertainty of a measurement, it is necessary to indicate the
principle on which the calculation has been made (a dispersion-based sketch follows).
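A minimal sketch of a dispersion-based uncertainty estimate from repeated readings, assuming the standard uncertainty of the mean is taken as s/√n (the readings are hypothetical):

```python
import math
import statistics

readings = [25.013, 25.017, 25.011, 25.015, 25.014]  # hypothetical readings, mm

mean = statistics.mean(readings)
s = statistics.stdev(readings)    # sample standard deviation (dispersion)
u = s / math.sqrt(len(readings))  # standard uncertainty of the mean

print(f"result: {mean:.4f} mm, standard uncertainty u = {u:.4f} mm")
```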
7. Interchangeability: An interchangeable part is one which can be substituted for similar part
manufactured to the same drawing.
In the old days, production was confined to a small number of parts: the same worker would
produce the parts and assemble them to obtain the necessary fits.
The modern trend, however, is towards mass production, in which parts are made by different
workers in different plants and assembled in one shop. Under such conditions, the dimensions of
the various mating parts must lie strictly within certain variations, so that any one part selected at
random will assemble correctly with any other randomly chosen mating part. Such a system is
called an interchangeable system, or a system of limits and fits.
So, by interchangeability, we mean that identical components, manufactured by
different personnel under different environments, can be assembled and replaced
without any further rectification during the assembly stage and without affecting the
functioning of the component when assembled.
Examples: Keys, Couplings , Pin Joints, Screwed Fasteners, Gears, Clutches
Interchangeability is possible only when certain standards are followed. When all the
parts to be assembled are manufactured in a single unit, local standards may be followed.
The manufacture of machine tools, automobiles, IC engines, aircraft, etc. requires thousands of
identical components. In such large-scale (mass) production, each male component should fit its
corresponding female component without interchanging the parts present in a lot of identical
items (called random assembly); if this condition exists, it is called interchangeability in
manufacturing, or simply interchangeability.
The required fit assembly can be obtained in two ways.
a) Universal or full interchangeability (if international standards are followed)
b) Selective assembly
Full interchangeability means any component will mate with any other mating component without
classifying the manufactured components into sub-groups and without carrying out minor
alterations for mating purposes. It requires precise machines or processes whose process capability
is equal to or less than the manufacturing tolerance allowed for the part, so that every component
produced will be within the desired tolerance and capable of mating with any other mating
component to give the required fit. The process capability of a machine is defined as the ±3σ
spread of the dimensions of the components produced by it (see the sketch below).
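A minimal sketch of this condition, assuming the ±3σ (i.e., 6σ) process spread must fit within the part tolerance for full interchangeability; the σ and tolerance values are hypothetical:

```python
def allows_full_interchangeability(sigma: float, tolerance: float) -> bool:
    """Full interchangeability needs the process capability (6*sigma spread)
    to be no larger than the manufacturing tolerance for the part."""
    return 6.0 * sigma <= tolerance

# Hypothetical: process sigma 0.005 mm vs 0.010 mm, tolerance 0.04 mm
print(allows_full_interchangeability(0.005, 0.04))  # True:  0.030 <= 0.040
print(allows_full_interchangeability(0.010, 0.04))  # False: selective assembly
```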
Selective Assembly: In selective assembly, the components produced are classified into several
groups according to size. The mating parts are also classified into the same number of groups, so
that the corresponding groups, when assembled, will give the desired fit with little or no further
machining. In this method the parts are manufactured to rather wider tolerances and then separated
into a number of groups according to their actual sizes; assembly is then made from the matched
groups. Selective assembly results in reduced production cost without affecting the quality of
the product. It is often followed in the aircraft, automobile, and ball and roller bearing
industries.
Ex: Assembly of pistons with cylinder bores. Bore size = 50 mm; clearance required for
assembly = 0.12 mm; tolerance on both bore and piston = 0.04 mm. Dimension of bore diameter =
50 ±0.02 mm; dimension of piston = 49.88 ±0.02 mm. By grading and marking the bores and
pistons, they can be selectively assembled as follows (a grading sketch is given below):
Cylinder bore: 49.98 mm | 50.00 mm | 50.02 mm
Piston:        49.86 mm | 49.88 mm | 49.90 mm
Each graded pair gives the required 0.12 mm clearance.
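A minimal sketch of the grading step, assuming three equal-width size groups across each part's ±0.02 mm tolerance band; the group boundaries and function name are illustrative choices:

```python
def grade(measured: float, nominal: float, band: float = 0.02,
          groups: int = 3) -> int:
    """Assign a part to a size group (0..groups-1) within nominal +/- band."""
    low, width = nominal - band, (2 * band) / groups
    return min(groups - 1, max(0, int((measured - low) / width)))

# Bores graded about 50.00 mm, pistons about 49.88 mm;
# parts with the same group number assemble with the required 0.12 mm clearance.
print(grade(49.985, 50.00))  # 0 -> pair with group-0 pistons (~49.86 mm)
print(grade(50.012, 50.00))  # 2 -> pair with group-2 pistons (~49.90 mm)
print(grade(49.872, 49.88))  # 0
```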
Limitations:
 During use of the assembly, if one component fails, one first needs the assembly manual
to identify the group to which the failed component belongs, and must then search for a
component of that group among the spare parts.
 Because the focus is on the fit between mating parts rather than the absolute size of each
component, there will be small deviations in the size of components.
Advantages of Interchangeability
1. Production is increased considerably.
2. Results in reduced production cost.
3. Assembly time is reduced considerably, so production rate is increased.
4. It facilitates production of mating components at different parts of the country, depending upon
the availability of raw material, skilled labour and other facilities.
5. Simplifies replacement of worn out or defective parts and repair becomes very easy.
6. The cost of maintenance and shutdown period is also reduced to minimum.
7. A worker is concerned with limited work, so he can easily specialize himself in that work.
This results in superior quality of work.
Some further terminologies (continuing the earlier list):
8. Drift is a slow change of a metrological characteristic of a measuring instrument.
9. Stability refers to the ability of a measuring instrument to constantly maintain its metrological
characteristics with time.
10. Standardization is a process of formulating and applying rules for orderly approach to a
specific activity for the benefit and with the cooperation of all concerned in particular. This is
done for the promotion of overall economy, taking due account of functional conditions and
safety requirements.
11. Traceability means that a measured result can be related to stated references, usually national
or international standards, through an unbroken chain of comparisons, all having stated
uncertainties.
