The spindle of a micrometer graduated for the Imperial and US customary measurement systems has
40 threads per inch, so that one turn moves the spindle axially 0.025 inch (1/40 = 0.025), equal to
the distance between two graduations on the frame. The 25 graduations on the thimble allow the
0.025 inch to be further divided, so that turning the thimble through one division moves the spindle
axially 0.001 inch (0.025/25 = 0.001). Thus, the reading is given by the number of whole divisions
that are visible on the scale of the frame, multiplied by 25 (the number of thousandths of an inch that
each division represents), plus the number of that division on the thimble which coincides with the
axial zero line on the frame. The result will be the diameter expressed in thousandths of an inch. As
the numbers 1, 2, 3, etc., appear below every fourth sub-division on the frame, indicating hundreds of
thousandths, the reading can easily be taken.
Suppose the thimble were screwed out so that graduation 2, and three additional sub-divisions, were
visible (as shown in the image), and that graduation 1 on the thimble coincided with the axial line on
the frame. The reading would then be 0.200 + 0.075 + 0.001 = 0.276 inch.
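The arithmetic of this example can be expressed as a short sketch (Python is used here purely for illustration; the function and variable names are hypothetical):

```python
def imperial_reading(sleeve_divisions, thimble_graduation):
    """Reading of an inch micrometer, in inches.

    Each sleeve division is 0.025 in (the 1/40 in screw pitch), and each
    of the 25 thimble graduations is worth 0.025 / 25 = 0.001 in.
    """
    return sleeve_divisions * 0.025 + thimble_graduation * 0.001

# The worked example above: graduation "2" equals 8 sleeve divisions
# (0.200 in), plus three more sub-divisions, with the thimble on 1:
print(round(imperial_reading(8 + 3, 1), 3))  # 0.276
```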
The spindle of an ordinary metric micrometer has 2 threads per millimetre, and thus one complete
revolution moves the spindle through a distance of 0.5 millimetre. The longitudinal line on the frame is
graduated with 1 millimetre divisions and 0.5 millimetre subdivisions. The thimble has 50 graduations,
each being 0.01 millimetre (one-hundredth of a millimetre). Thus, the reading is given by the number
of millimetre divisions visible on the scale of the sleeve plus the particular division on the thimble
which coincides with the axial line on the sleeve.
Suppose that the thimble were screwed out so that graduation 5, and one additional 0.5 mm subdivision,
were visible (as shown in the image), and that graduation 28 on the thimble coincided with the axial
line on the sleeve. The reading then would be 5.00 + 0.5 + 0.28 = 5.78 mm.
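The metric case follows the same pattern; a minimal sketch (again in Python, with hypothetical names):

```python
def metric_reading(mm_divisions, half_mm_subdivisions, thimble_graduation):
    """Reading of a metric micrometer, in millimetres.

    Whole divisions are 1 mm, subdivisions 0.5 mm, and each of the 50
    thimble graduations is worth 0.5 / 50 = 0.01 mm.
    """
    return mm_divisions + half_mm_subdivisions * 0.5 + thimble_graduation * 0.01

# The worked example above: 5 mm plus one 0.5 mm subdivision visible,
# with the thimble on graduation 28:
print(metric_reading(5, 1, 28))  # 5.78
```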
A standard one-inch micrometer has readout divisions of 0.001 inch and a rated accuracy of ±0.0001
inch[10] ("one tenth", in machinist parlance). Both the measuring instrument and the object
being measured should be at room temperature for an accurate measurement; dirt, abuse, and low
operator skill are the main sources of error.[11]
The accuracy of micrometers is checked by using them to measure gauge blocks,[12] rods, or similar
standards whose lengths are precisely and accurately known. If the gauge block is known to be
0.7500" .00005" ("seven-fifty plus or minus fifty millionths", that is, "seven hundred fifty thou plus or
minus half a tenth"), then the micrometer should measure it as 0.7500". If the micrometer measures
0.7503", then it is out of calibration. Cleanliness and low (but consistent) torque are especially
important when calibrating: each tenth (that is, ten-thousandth of an inch), or hundredth of a
millimetre, "counts"; each is important. A mere speck of dirt, or a mere bit too much squeeze, obscures
the truth of whether the instrument is able to read correctly. The solution is conscientious
cleaning, patience, due care and attention, and repeated measurements (good repeatability assures
the calibrator that the technique is working correctly).
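As a rough illustration of the acceptance test described above, consider the following sketch (a simplification, not a substitute for a documented calibration procedure; the worst-case tolerance-stacking rule shown is an assumption):

```python
def within_calibration(measured, nominal, block_tolerance, rated_accuracy):
    # Accept if the deviation from the gauge block's nominal size is no
    # larger than the micrometer's rated accuracy plus the block's own
    # tolerance (a simple worst-case allowance).
    return abs(measured - nominal) <= rated_accuracy + block_tolerance

# The 0.7500" +/- 0.00005" block read by a +/- 0.0001" micrometer:
print(within_calibration(0.7500, 0.7500, 0.00005, 0.0001))  # True
print(within_calibration(0.7503, 0.7500, 0.00005, 0.0001))  # False: out of calibration
```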
Calibration typically checks the error at 3 to 5 points along the range. Only one point can be adjusted to
zero. If the micrometer is in good condition, then they are all so near to zero that the instrument seems
to read essentially "dead-on" all along its range; no noticeable error is seen at any locale. In contrast, on a
worn-out micrometer (or one that was poorly made to begin with), one can "chase the error up and
down the range", that is, move it up or down to any of various locales along the range, by adjusting
the barrel, but one cannot eliminate it from all locales at once.
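This "chasing the error" behaviour can be modelled numerically: adjusting the barrel adds one constant offset to every reading, which can zero the error at a single point but not at every point of a worn instrument at once. A sketch (the error values are invented for illustration):

```python
# Hypothetical errors (in inches) measured at three points on a worn micrometer:
errors = {0.250: +0.0002, 0.500: -0.0001, 0.750: +0.0003}

# Adjusting the barrel shifts every reading by the same constant offset.
offset = -errors[0.500]  # zero the error at the 0.500 in point...
adjusted = {size: err + offset for size, err in errors.items()}
print(adjusted)          # ...but the other points remain in error
```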
Calibration can also include the condition of the tips (flat and parallel), any ratchet, and linearity of the
scale.[13] Flatness and parallelism are typically measured with a gauge called an optical flat, a disc of
glass or plastic ground with extreme accuracy to have flat, parallel faces, which allows light bands to
be counted when the micrometer's anvil and spindle are against it, revealing their amount of
geometric inaccuracy.
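The band counting rests on the fact that each interference band corresponds to about half the wavelength of the light used. A minimal sketch of that conversion (the roughly 587 nm helium-light wavelength is an assumption; monochromatic light is needed for distinct bands):

```python
def flatness_error_mm(band_count, wavelength_nm=587.0):
    # Each interference band seen through the optical flat represents a
    # change of about half a wavelength in the air gap.
    return band_count * (wavelength_nm / 2) / 1e6  # nm -> mm

print(f"{flatness_error_mm(2):.5f} mm")  # two bands: about 0.00059 mm out of flat
```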
Commercial machine shops, especially those that do certain categories of work (military or commercial
aerospace, nuclear power industry, and others), are required by various standards organizations (such
as ISO, ANSI, ASME,[14] ASTM, SAE, AIA, the U.S. military, and others) to calibrate micrometers and
other gauges on a schedule (often annually), to affix a label to each gauge that gives it an ID number
and a calibration expiration date, to keep a record of all the gauges by ID number, and to specify in
inspection reports which gauge was used for a particular measurement.
Not all calibration is an affair for metrology labs. A micrometer can be calibrated on-site anytime, at
least in the most basic and important way (if not comprehensively), by measuring a high-grade gauge
block and adjusting to match. Even gauges that are calibrated annually and within their expiration
timeframe should be checked this way every month or two if they are used daily. They will usually
check out OK, needing no adjustment.
The accuracy of the gauge blocks themselves is traceable through a chain of comparisons back to a
master standard such as the international prototype meter. This bar of metal, like the international
prototype kilogram, is maintained under controlled conditions at the International Bureau of Weights
and Measures headquarters in France, which is one of the principal measurement standards
laboratories of the world. These master standards have extreme-accuracy regional copies (kept in the
national laboratories of various countries, such as NIST), and metrological equipment makes the chain
of comparisons. Because the definition of the meter is now based on a light wavelength, the
international prototype meter is not quite as indispensable as it once was. But such master gauges are
still important for calibrating and certifying metrological equipment. Equipment described as "NIST
traceable" means that its comparison against master gauges, and their comparison against others, can
be traced back through a chain of documentation to equipment in the NIST labs. Maintaining this
degree of traceability requires some expense, which is why NIST-traceable equipment is more
expensive than non-NIST-traceable. But applications needing the highest degree of quality control
mandate the cost.
A micrometer that has been tested and found to be off might be restored to accuracy by recalibration.
On most micrometers, a small pin spanner is used to turn the barrel relative to the frame, so that its
zero line is repositioned relative to the screw and thimble. (There is usually a small hole on the barrel
to accept the spanner's pin.)
This calibration procedure will cancel a zero error: the problem that the micrometer reads nonzero
when its jaws are closed.
However, if the error originates from the parts of the micrometer being worn out of shape and size,
then restoration of accuracy by this means is not possible; rather, repair (grinding, lapping, or
replacing of parts) is required. For standard kinds of instruments, in practice it is easier and faster, and
often no more expensive, to buy a new one rather than pursue refurbishment.
The vernier scale is constructed so that its marks are spaced at a constant fraction of the spacing of the
fixed main scale. So for a decimal measuring device, each mark on the vernier is spaced nine tenths of the
spacing of those on the main scale. If you put the two scales together with zero points aligned, the first mark on the vernier scale is
one tenth short of the first main scale mark, the second two tenths short, and so on up to the ninth
mark, which is misaligned by nine tenths. Only when a full ten marks are counted is there alignment,
because the tenth mark is ten tenths (a whole main scale unit) short, and therefore aligns with the
ninth mark on the main scale.
Now if you move the vernier by a small amount, say, one tenth of a division of its fixed main scale, the only pair of
marks that come into alignment are the first pair, since these were the only ones originally misaligned
by one tenth. If we move it two tenths, the second pair aligns, since these are the only ones originally
misaligned by that amount. If we move it five tenths, the fifth pair aligns, and so on. For any
movement, only one pair of marks aligns and that pair shows the value between the marks on the
fixed scale.
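This one-pair-at-a-time behaviour can be checked with a small sketch (integer arithmetic is used to avoid rounding issues; the names are illustrative):

```python
def aligned_pairs(displacement_tenths):
    """Which vernier marks align after shifting the vernier by the given
    number of tenths of a main-scale division.

    Vernier mark i sits at (displacement_tenths + 9*i) / 10 main-scale
    units; it aligns when that is a whole number of units.
    """
    return [i for i in range(10) if (displacement_tenths + 9 * i) % 10 == 0]

for k in range(10):
    assert aligned_pairs(k) == [k]  # exactly one pair aligns: the k-th
```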
Vernier scales work so well because most people are especially good at detecting which of the lines are
aligned and which are misaligned, and that ability improves with practice, in fact far exceeding the optical
capability of the eye. This ability to detect alignment is called 'Vernier acuity'.[9] Historically, none of
the alternative technologies exploited this or any other hyperacuity, giving the Vernier scale an
advantage over its competitors.
Zero error is the condition in which a measuring instrument registers a reading when there should be
none. In the case of vernier calipers, it occurs when the zero on the main scale does not coincide with
the zero on the vernier scale. Zero error may be of two types: when the scale reads towards numbers
greater than zero with the jaws closed, the error is positive; otherwise it is negative. To use a vernier
scale or caliper with zero error, apply the formula: actual reading = main scale + vernier scale − (zero
error). Zero error may arise from knocks that shift the calibration away from 0.00 mm when the jaws
are perfectly closed or just touching each other.
When the jaws are closed and the reading is 0.10 mm, the zero error is referred to as +0.10 mm.
Applying the formula 'actual reading = main scale + vernier scale − (zero error)', a main scale reading of
19.00 mm and a vernier reading of 0.54 mm give an actual reading of 19.00 + 0.54 − (0.10) = 19.44 mm.
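A minimal sketch of this correction (Python, with hypothetical names; the zero error is a signed quantity):

```python
def actual_reading(main_scale, vernier_scale, zero_error):
    # actual reading = main scale + vernier scale - (zero error)
    return main_scale + vernier_scale - zero_error

# The worked example above, with a +0.10 mm zero error:
print(actual_reading(19.00, 0.54, +0.10))  # 19.44
```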
Positive zero error refers to the case in which, when the jaws of the vernier caliper are just closed, the
reading is positive, away from the actual reading of 0.00 mm. If the reading is 0.10 mm, the zero error
is referred to as +0.10 mm.
Negative zero error refers to the case in which, when the jaws of the vernier caliper are just closed, the
reading is negative, away from the actual reading of 0.00 mm. If the reading is 0.08 mm below zero, the
zero error is referred to as −0.08 mm. If the zero error is positive, it is subtracted from the mean reading
the instrument gives. Thus, if the instrument reads 4.39 cm and the error is +0.05 cm, the actual length
is 4.39 − 0.05 = 4.34 cm. If the zero error is negative, it is added to the mean reading. Thus, if the
instrument reads 4.39 cm and the error is −0.05 cm, the actual length is 4.39 + 0.05 = 4.44 cm. (The
quantity −1 × zero error is called the zero correction; it should always be added algebraically to the
observed reading to obtain the correct value.)
Zero error (Z.E.) = ± n × least count (L.C.).
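The least-count relation and the sign convention can be combined in one short sketch (the names and the 0.01 mm least count are illustrative assumptions):

```python
LEAST_COUNT = 0.01  # mm, assumed for a typical metric vernier caliper

def zero_error(n_divisions, positive=True):
    # Z.E. = +/- n * L.C., where n is the vernier division coinciding
    # with a main-scale mark when the jaws are just closed.
    return (1 if positive else -1) * n_divisions * LEAST_COUNT

print(zero_error(10))                  # +0.10 mm, as in the example above
print(zero_error(8, positive=False))   # -0.08 mm
```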