Temperature Kelvin
Introduction
Temperature is one of the most commonly measured parameters, in key sectors of industry, medicine,
meteorology and technology, and in everyday life. Some important sectors where temperature is a
vital parameter to measure and control are:
Manufacturing industry
In materials processing, for example steel, petrochemicals, plastics, glass, ceramics, etc, and in foods
and pharmaceuticals, it is vital to measure and control temperature to maintain the quality of the
product and minimise scrap, and to ensure the energy efficiency of the process.
Refrigeration
Lowering the temperature reduces physical, chemical and biological activity. Chilling or freezing are
important processes in the preservation, distribution and storage of foodstuffs and biological samples.
Liquefaction of gases is important for the transportation of natural gas, in the use of oxygen in
hospitals and in steel-making, and helium for the operation of MRI scanners.
Defence
Systems for night vision, for tracking or avoiding missiles, and for mapping deep-ocean
temperatures are important temperature-related military applications.
Medical
Apart from the routine monitoring of patient temperature, including continuously-reading
thermometers in intensive care, temperature measurement is needed in many thermal and ultrasound
therapies for cancer and other conditions. The images below show the internal temperature of the
brain of a new-born baby suffering from lack of oxygen; the brain is cooled to reduce its demand
for oxygen, and the process must be carefully controlled.
Modelled brain temperature maps: Twater cap = 10 °C, Tcore = 37 °C; contours at 2 °C intervals, with labelled temperatures from 37 °C down to 27 °C.
Environment
Good data is essential if the climate is to be accurately monitored, for short-term weather
forecasting or for long-term predictions of climate change. Traceability to reliable standards is
needed, both in local ground stations and on board earth observation aircraft and satellites.
▪ The sensor must not be affected by environmental influences such as vibration, electromagnetic
interference, heat radiation, chemical attack, etc.
There are many other factors affecting the performance and suitability of a thermometer, such as size,
robustness, data communications, etc, and cost. A compromise is likely to be needed between the
measurement specifications and these other attributes to achieve an optimal solution. It is no good
using an accurate but fragile sensor in a hostile industrial environment. Equally, a heavy-duty
steel-sheathed probe will respond slowly and should not be used unless such robustness is essential.
What is temperature?
In simple terms, temperature is the ‘degree of hotness’ of an object or, more specifically in science
and technology, the ‘potential for heat transfer’.
A proper understanding of what it is, and how it relates to heat, was only developed in the mid-to-late
nineteenth century, when it was realised that it is a measure of the average energy of an ensemble of
particles at equilibrium. The particles may be the atoms or molecules of a gas, a liquid or a solid, but
they may also be the ‘photons’ of electromagnetic radiation inside a closed blackbody cavity.
If two objects are placed in contact, heat will flow from the hotter to the colder. Eventually, when no
more heat flows, we can say that they are at equilibrium with each other and that their temperatures
are the same. We use this property in measuring temperature when we place a thermometer in contact
with an object: the reading of the thermometer after they have reached equilibrium tells us what the
temperature of the object is.
Strictly this is the ideal case – to come to equilibrium they must be isolated from any other objects and
their surrounding environment. We would also like the thermometer to be small enough that it does
not upset the temperature of the object under measurement. Many of the difficulties of measuring
temperature come from achieving good thermal contact.
Further ideas about temperature and its significance in physics and engineering came through the
development of the second law of thermodynamics, which considers the fundamental limits to the
conversion of heat into work. They are discussed in textbooks of thermodynamics.
The important point to note is that the second law shows how a ‘thermodynamic’ (absolute)
temperature can be derived as a fundamental parameter of physics and chemistry, independent of any
particular material property (like the expansion of mercury in glass, or the resistance of a platinum
wire).
Thus experiments to measure thermodynamic temperature, for example using the fundamental laws
governing the properties of gases or thermal radiation, should all give the same results. Such
experiments are very difficult and time-consuming, but they nevertheless form the basis of the
temperature scale that is used in science, technology and everyday life.
To put the measurement of temperature on a quantitative and objective basis, with sufficient accuracy,
we need an agreed unit and temperature scale, and reliable thermometers to work with.
Temperature units
As with other physical quantities, temperature measurement begins with the definition of a unit.
Historically, in the Celsius (centigrade) system the unit was based on the so-called ‘fundamental
interval’ of 100 degrees between the melting point of ice and the boiling point of water, both at one
atmosphere pressure. Since 1954 the adopted unit has been the kelvin, originally defined by assigning
the value 273.16 K to the triple point of water, the unique temperature at which the liquid, solid and
vapour phases of water coexist in equilibrium.
The triple point is the melting temperature of ice under its own vapour pressure, with no air present.
The old definition of the kelvin in the International System of Units (SI) reads:
The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic
temperature of the triple point of water.
Since 2019 the kelvin has instead been defined by fixing the numerical value of the Boltzmann
constant, k = 1.380649 × 10⁻²³ J K⁻¹.
For everyday purposes, temperatures are still measured in degrees Celsius, using the definition: t / °C
= T / K – 273.15.
Thus the triple point of water is both 273.16 K and 0.01 °C exactly. The numbers in the definitions
were chosen such that there are (almost exactly) 100 units between the melting point of ice and
boiling point of water.
The Fahrenheit scale is formally obsolete, but is occasionally used. Relative to Celsius temperatures,
Fahrenheit temperatures are defined such that
t / °F = (9/5)(t / °C) + 32.
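These conversions are exact by definition. As a minimal sketch (the function names are simply illustrative), they can be written in Python as:

```python
def kelvin_to_celsius(T_K: float) -> float:
    """t / degC = T / K - 273.15 (exact by definition)."""
    return T_K - 273.15

def celsius_to_kelvin(t_C: float) -> float:
    return t_C + 273.15

def celsius_to_fahrenheit(t_C: float) -> float:
    """t / degF = (9/5)(t / degC) + 32."""
    return 9.0 / 5.0 * t_C + 32.0

if __name__ == "__main__":
    # The triple point of water, 273.16 K, is exactly 0.01 degC.
    print(kelvin_to_celsius(273.16))      # 0.01
    print(celsius_to_fahrenheit(100.0))   # 212.0, the boiling point of water
```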
Alternatively, if γ and M are both accurately known, the product RT can be deduced. In this way it is
possible to determine R, and hence the Boltzmann constant, k, from the measurement at the triple
point of water.
The conceptual simplicity of the method conceals many difficulties in achieving the desired
uncertainties of only a few parts in 10⁶. These include the design and manufacture of the cavity with
very tight tolerances, stabilising its temperature, determining its radius (to obtain the velocity from the
resonant frequency) and the thermal expansion, determining the purity and isotopic composition of
the gas and measuring its pressure, etc.
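The relation underlying such acoustic measurements is the standard ideal-gas (zero-pressure limit) result for the speed of sound, c² = γRT/M. The sketch below, with illustrative numbers for argon, shows only the principle – how T, or equivalently R and hence k, follows from a measured speed of sound – and not the painstaking cavity-resonance experiments described above.

```python
# Ideal-gas (zero-pressure limit) speed of sound: c**2 = gamma * R * T / M.
R = 8.314462618          # molar gas constant, J mol^-1 K^-1
N_A = 6.02214076e23      # Avogadro constant, mol^-1 (exact in the SI)

def temperature_from_sound_speed(c, gamma, M):
    """Deduce T from the limiting speed of sound c (m/s), the heat-capacity
    ratio gamma and the molar mass M (kg/mol), taking R as known."""
    return c * c * M / (gamma * R)

def gas_constant_from_sound_speed(c, gamma, M, T):
    """Equivalently, with T fixed at the triple point of water, the same
    relation yields R, and hence k = R / N_A."""
    return c * c * M / (gamma * T)

# Illustrative numbers for argon: gamma = 5/3, M about 39.948 g/mol, and a
# limiting speed of sound of roughly 307.8 m/s near 273.16 K.
gamma_Ar, M_Ar, c = 5.0 / 3.0, 39.948e-3, 307.8
print(temperature_from_sound_speed(c, gamma_Ar, M_Ar))                  # ~273 K
print(gas_constant_from_sound_speed(c, gamma_Ar, M_Ar, 273.16) / N_A)   # ~1.38e-23 J/K
```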
Thus the purpose of the International Temperature Scale is to define procedures by which certain
specified practical thermometers can be calibrated in such a way that the values of temperature
obtained from them are precise and reproducible, while at the same time approximating the
corresponding thermodynamic values as closely as possible.
The ITS-90 is defined using Standard Platinum Resistance Thermometers to interpolate between fixed
points in various parts of the range from the triple point of hydrogen, 13.8033 K, to the freezing point
of silver, 961.78 °C. At higher temperatures it specifies the use of a (spectral) radiation thermometer
calibrated at the freezing point of silver (or gold or copper). In this case the Planck law is used to
extrapolate the calibration to indefinitely high temperatures.
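Above the silver point the ITS-90 defines T90 through the ratio of Planck spectral radiances at a single wavelength, using c2 = 0.014388 m K. A minimal sketch of inverting that ratio for temperature (the wavelength and measured ratio below are merely illustrative) is:

```python
import math

C2 = 0.014388          # second radiation constant c2, in m K, as used in the ITS-90
T_AG = 1234.93         # freezing point of silver on the ITS-90, in K

def t90_from_radiance_ratio(ratio: float, wavelength: float, T_ref: float = T_AG) -> float:
    """Invert the ITS-90 Planck-ratio definition.

    ratio = L(T90) / L(T_ref) is the measured ratio of spectral radiances at the
    given wavelength (in metres); T_ref is the silver, gold or copper point."""
    x_ref = math.expm1(C2 / (wavelength * T_ref))   # exp(c2/(lambda*T_ref)) - 1
    return C2 / (wavelength * math.log1p(x_ref / ratio))

# Example: at 650 nm, a source with 10 times the radiance of the silver point
print(t90_from_radiance_ratio(10.0, 650e-9))        # roughly 1420 K
```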
The scale extends down to 0.65 K, using special methods, and the Provisional Low Temperature Scale
of 2000, PLTS-2000, provides an agreed scale from 0.9 mK to 1 K.
The text of the ITS-90, together with related documents and guidance on its realisation, is available
from www.bipm.org.
From time to time, as better knowledge is gained, the fixed point values and prescriptions of the
International Temperature Scale are revised. At present work is in progress to develop new fixed
points, melting points of metal-carbon alloys, which may in future be used to redefine the ITS-90 at
temperatures up to 3000 K and considerably reduce the uncertainties.
Fixed points
Most fixed points are temperatures at which a substance changes its phase, for example from solid to
liquid or vice versa. During the phase change a significant amount of heat is absorbed or liberated,
and while this is happening the temperature remains almost constant, i.e. it is ‘fixed’. If a thermometer
is introduced so as to sense this temperature, its calibration at this point can be determined. A full
calibration can be obtained by using a series of fixed points over the range of interest.
Fixed points are also fixed in the sense that they are the same from day to day and from one place to
another, provided that the experiments are carefully done and that the materials are not contaminated.
The phase diagram for a substance is a map which shows whether the substance is solid, liquid or
vapour at any given pressure and temperature. Fixed points lie on the boundaries between the phases,
and the triple point is where the three boundary lines meet.
Schematic phase diagram for water (pressure versus temperature), showing the solid, liquid and vapour regions and the triple point at 273.16 K.
This figure shows a triple point of water cell: pure distilled water sealed in a long glass cell with a central tube for inserting the thermometer. The space above the water is evacuated, so that the pressure is that of the water vapour alone.
The defining fixed points of the ITS-90 are the triple points of water, mercury, argon, oxygen, neon
and hydrogen, the melting point of gallium, and the freezing points of indium, tin, zinc, aluminium,
silver, gold and copper. Boiling points of hydrogen at two pressures are also specified. The figure
below shows a freezing point cell, in which an ingot of pure metal is contained in a pure graphite
crucible. It is placed in a furnace in which it can be melted and refrozen. Below this a trace of an
indium freeze is shown, which is ‘fixed’ within about 0.3 mK for 8 hours or more.
Freezing curve for indium cell 9/08, Freeze No 2 (4 November 2009): temperature plotted against time elapsed (0 to 12 hours), with a 1 mK scale bar.
Thermometers
A thermometer is a device in which a property that changes with temperature is measured and used to
indicate the temperature.
• For a liquid-in-glass thermometer, the property measured is the length of the liquid column inside
a glass tube.
• For a platinum resistance thermometer or for a thermistor, the property measured is the electrical
resistance of a piece of 'sensing' material.
• For a thermocouple, the property measured is the voltage generated along the wires making up the
thermocouple.
• For a radiation thermometer, the property measured is the current generated by a photodiode on
to which the thermal (heat) radiation is focused.
Of course, nearly everything changes in some way with temperature, but not everything is a
thermometer! To qualify as a useful thermometer, a device must have some other properties.
• It must be reproducible. This means that the measured property of the device should have the
same value (or very nearly so) whenever the temperature is the same. In particular, the device
should withstand excursions of temperature within its range of use.
• It must be insensitive to things other than temperature. This means that the measured property of
the device should not depend on factors such as the humidity or pressure.
• It must be calibrated. This means that we must know how to convert the measured property
(length, resistance, etc) to temperature. To do this, the device must be exposed to some
environments where the temperature is known, and the value of its measured property must be
recorded in those environments. In some cases the scale reads directly in temperature, and the
calibration then shows how accurate the thermometer scale is (a minimal sketch of this conversion step follows the list).
• It should be convenient to use. Factors such as size, cost, speed of response, ruggedness,
immunity to electrical interference, etc, will be important to varying degrees in different
applications.
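As a minimal illustration of the conversion step mentioned in the calibration bullet above, the sketch below linearly interpolates a measured property against a small calibration table; the table values are merely illustrative (close to those of a Pt100 resistance sensor).

```python
from bisect import bisect_left

# Illustrative calibration table: (measured property value, temperature / degC),
# e.g. resistances in ohms recorded at known temperatures during calibration.
CALIBRATION = [(80.31, -50.0), (100.00, 0.0), (119.40, 50.0), (138.51, 100.0)]

def property_to_temperature(value: float) -> float:
    """Linearly interpolate the calibration table (property assumed monotonic)."""
    props = [p for p, _ in CALIBRATION]
    i = bisect_left(props, value)
    if i == 0 or i == len(props):
        raise ValueError("measured value outside calibrated range")
    (p0, t0), (p1, t1) = CALIBRATION[i - 1], CALIBRATION[i]
    return t0 + (t1 - t0) * (value - p0) / (p1 - p0)

print(property_to_temperature(109.73))   # about 25 degC
```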
Almost all scientific and industrial temperature measurements now use resistance thermometers,
thermocouples or radiation thermometers, and the measurements are often automated. They can be
designed for use at very low or very high temperatures with good accuracy, and the instrumentation
can be very sophisticated, with multiple channels and feedback for process control, etc.
Resistance Thermometers
Resistance thermometer sensors use a wire, film, chip or bead whose electrical resistance changes
with temperature. The resistor is connected to a measuring bridge or voltmeter, whose output is
processed to obtain the temperature. The thermometer must first be calibrated, and this may be done
using fixed points, if appropriate, or by comparison with known standard thermometers.
Standard Platinum Resistance Thermometers (SPRTs) are the thermometers specified for calibration
at the fixed points of the ITS-90. Capsule-type SPRTs filled with helium are suitable for use at very
low temperatures. Long-stem SPRTs are designed for insertion into furnaces at temperatures up to the
freezing point of aluminium (660 °C), and also at temperatures down to about – 200 °C. In this case
the sensor is mounted on a support in dry air at the end of a long (>450 mm) silica tube, and the four
connecting wires are led out through a seal which is kept close to room temperature. Both these types
of SPRT are commonly made with sensing resistors of about 25.5 Ω at 0.01 °C, and the sensitivity is
then about 0.1 Ω/°C. Special high-temperature SPRTs are used for measurements up to the freezing
point of silver, 962 °C. They have lower resistances, 2.5 Ω or 0.25 Ω at 0.01 °C, to overcome
problems with the reduced insulation resistance of the silica supports at high temperatures.
Left: small capsule-type thermometers for low-temperature use. Right: the sensing element of a long-stem SPRT in its silica sheath.
Industrial platinum resistance thermometer sensors are usually made with wires or films of 100 Ω at
0 °C. They are commonly known as IPRTs, Pt100s, RTDs (resistance temperature detectors), etc.
They are designed to survive more robust treatment than the laboratory SPRTs. The sensors are
usually protected inside a steel tube, from which copper cable leads to the measuring instrument or
processor.
Graph of resistance R (in ohms, 0 to 400) against t90 (from -200 °C to 800 °C) for Pt100 sensors.
Using a typical 1 mA measuring current, a Pt100 sensor would have a sensitivity of approximately 0.4
mV/°C, and a resolution of 0.001 °C can be readily achieved using modern voltmeters or resistance
bridges. The accuracy of a measurement is limited by the calibration and stability of both the sensing
probe and the instrument, and by how they are used. Dedicated instruments can display the output
either as a resistance or directly in temperature, and the probe and instrument can be calibrated
together.
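For industrial platinum sensors the resistance-temperature relationship is standardised in IEC 60751 by the Callendar-Van Dusen equation. The sketch below evaluates it with the standard coefficients and reproduces, approximately, the curve plotted above.

```python
R0 = 100.0                                    # nominal resistance of a Pt100 at 0 degC
A, B, C = 3.9083e-3, -5.775e-7, -4.183e-12    # IEC 60751 coefficients

def pt100_resistance(t: float) -> float:
    """Callendar-Van Dusen equation for a standard Pt100 (t in degC, R in ohms)."""
    if t >= 0.0:
        return R0 * (1.0 + A * t + B * t * t)
    # Below 0 degC an extra cubic term is included.
    return R0 * (1.0 + A * t + B * t * t + C * (t - 100.0) * t ** 3)

for t in (-200.0, 0.0, 100.0, 400.0, 800.0):
    print(t, round(pt100_resistance(t), 2))
# approximately 18.52, 100.00, 138.51, 247.09 and 375.70 ohms respectively
```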
Thermistors are temperature-sensitive resistors made from small beads of various semiconducting
oxides. In the more common NTC (negative temperature coefficient) types, the resistance increases
very strongly as the temperature falls. They are well suited for use in small probes with fast response,
e.g. in medical thermometry, where good sensitivity is achieved over useful temperature ranges.
Thermistors are not standardised, and manufacturers’ specifications must be referred to.
Small bead thermistors for use in electronic circuitry. Right: curves of resistance R(t) (0 to 40 kΩ) against temperature t (0 to 60 °C), showing their very high sensitivity.
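Because thermistors are not standardised, each type needs its own characterisation; a commonly used two-parameter model is the ‘beta’ equation, R(T) = R25 exp[B(1/T - 1/T25)]. The sketch below uses illustrative values of R25 and B; real values must be taken from the manufacturer’s data sheet.

```python
import math

# Illustrative NTC parameters (take real values from the data sheet):
R25 = 10_000.0      # resistance at 25 degC, ohms
B = 3950.0          # beta constant, K
T25 = 298.15        # 25 degC expressed in kelvin

def ntc_resistance(t_C: float) -> float:
    """Beta-equation model: R(T) = R25 * exp(B * (1/T - 1/T25))."""
    T = t_C + 273.15
    return R25 * math.exp(B * (1.0 / T - 1.0 / T25))

def ntc_temperature(R: float) -> float:
    """Invert the beta equation: temperature in degC from a measured resistance."""
    T = 1.0 / (1.0 / T25 + math.log(R / R25) / B)
    return T - 273.15

print(round(ntc_resistance(0.0)))            # roughly 33.6 kOhm at 0 degC
print(round(ntc_temperature(10_000.0), 1))   # 25.0 degC
```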
Thermocouples
A thermocouple is a temperature sensor which relies on the Seebeck effect in thermo-electricity: the
production of an EMF due to the presence of a temperature gradient in a conductor. In its simplest
form, a thermocouple consists of two wires, of different conductors, which are joined at one end (the
measuring junction), the other ends being connected to a voltmeter for measuring the voltage (strictly,
the electromotive force, EMF) generated in the circuit.
As the EMF depends on the temperatures at both ends of the wires, a reference junction is needed. In
common practice the thermocouple is simply connected to the instrument, and compensation is
applied for the ‘cold-junction’ temperature. For more accurate use the reference junction is controlled
or fixed, typically using melting ice, and copper wires then connect to the measuring instrument.
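The arithmetic of cold-junction compensation is straightforward: the instrument adds to the measured EMF the EMF that the thermocouple would generate between 0 °C and the cold-junction temperature, and converts the sum back to a temperature. The sketch below shows the logic with a crude linear approximation of about 41 µV/°C, roughly appropriate for a Type K couple near ambient temperature; a real instrument uses the reference polynomials of IEC 60584-1.

```python
# Crude linear model of a Type K thermocouple near ambient temperatures:
# roughly 41 microvolts per degC (real instruments use the IEC 60584-1 polynomials).
SEEBECK_UV_PER_C = 41.0

def emf_uV(t_C: float) -> float:
    """Approximate EMF (microvolts) of the couple between 0 degC and t_C."""
    return SEEBECK_UV_PER_C * t_C

def hot_junction_temperature(measured_emf_uV: float, cold_junction_C: float) -> float:
    """Cold-junction compensation: add the EMF corresponding to the reference
    junction temperature, then convert the total back to a temperature."""
    total_emf = measured_emf_uV + emf_uV(cold_junction_C)
    return total_emf / SEEBECK_UV_PER_C

# A meter reads 3075 uV with its terminals (the 'cold junction') at 25 degC:
print(hot_junction_temperature(3075.0, 25.0))   # about 100 degC
```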
Diagram labels: furnace; temperature indicator (reading 732.2 °C); melting ice-water mixture.
This figure illustrates schematically a thermocouple inserted in a furnace, with the reference junctions
in melting ice, and connected to a temperature indicator. Most of the EMF signal is generated where
the thermocouple wires pass through the temperature gradient in the furnace wall.
The Seebeck effect arises because the hotter, more energetic, electrons tend to diffuse toward the
colder regions, with the result that in the steady state there is a charge gradient in the conductor, and
hence a voltage. To measure this, the circuit must be completed using another conductor, and this
must be of a material with a different thermoelectric coefficient; otherwise the EMFs in the two
conductors would be the same and there would be no net effect. Fortunately the thermoelectric
properties of metals and alloys vary widely, and the effects can even be of the opposite polarity, so it
is possible to choose pairs of conductors with significant net outputs. However, they are not large,
typically only about 40 µV/°C.
It is important to note that the charge gradients are built up along the lengths of the conductors, and
the EMF is therefore also generated bit by bit along the temperature gradient. It is not a property of
the junction, which is only needed to complete the circuit loop: the junction provides connection, not
detection.
Therefore it doesn’t matter how the junctions are made, provided that they make good electrical
connection and are mechanically robust. They can be made in any convenient and reliable manner,
e.g. by welding, soldering, crimping or twisting the wires together.
Eight thermocouple combinations have been standardised in IEC 60584-1 for industrial use. Five are
base-metal types, using combinations of copper, iron, nickel and copper-nickel alloys; these are
relatively inexpensive and can be used (variously) down to – 270 °C and up to about 1200 °C. The
other three, designated Types R, S and B, use wires of platinum and platinum-rhodium alloys; they are
more expensive but more stable, and can be used up to about 1600 °C.
Radiation thermometry
All objects emit radiation by virtue of their temperature. Most of the radiation is in the infrared, but as
the temperature increases beyond about 700 °C a dull ‘red heat’ can be seen, which gradually
brightens to orange, yellow and finally a brilliant white heat. The effect is very sensitive and radiation
thermometry (infrared thermometry, radiation pyrometry) is a powerful method of temperature
measurement, even at temperatures down to – 40 °C.
A radiation thermometer being sighted on a blackbody source at about 800 °C
Being a remote-sensing method, it has the advantage that no contact is made with the object being
measured. It can measure very hot objects, or moving objects on a production line. Modern detector
arrays allow thermal images (colour-coded temperature maps) of objects, structures or environments
to be produced.
These advantages are offset by some significant disadvantages. Firstly, the radiation emitted from an
object depends not only on its temperature but also on the surface emissivity. This is a property which
lies between zero (for a perfectly reflecting surface which emits no radiation), and 1, which is the
maximum possible and applies to ‘blackbodies’ (so called because they also absorb all radiation
incident on them, and hence appear black when cold). The emissivity depends on the material and its
surface condition (roughness, state of oxidation, etc). It also varies with the temperature, the
wavelength and the angle of view. When using a radiation thermometer, the emissivity must be
known if the signal measured is to be converted into an accurate temperature.
Secondly, radiation emitted by heaters or lighting and incident on the target will be partially reflected
and add to the radiation which is observed, potentially causing large errors. When measuring low
temperatures, ‘heaters’ may include human beings or even thermal radiation at ordinary ambient
temperatures.
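In a single narrow waveband these two effects can be combined as L_measured = ε L_bb(T_target) + (1 - ε) L_bb(T_surroundings), where L_bb is the Planck spectral radiance. The sketch below inverts this for the target temperature; it is a simplified single-wavelength treatment with illustrative numbers, not the calibration procedure of any particular instrument.

```python
import math

C1L = 1.191042972e-16   # first radiation constant for spectral radiance, W m^2 sr^-1
C2 = 1.438777e-2        # second radiation constant, m K

def planck_radiance(wavelength: float, T: float) -> float:
    """Blackbody spectral radiance at a single wavelength (W m^-3 sr^-1)."""
    return C1L / (wavelength ** 5 * math.expm1(C2 / (wavelength * T)))

def target_temperature(L_measured: float, wavelength: float,
                       emissivity: float, T_surroundings: float) -> float:
    """Correct for emissivity and reflected ambient radiation, then invert Planck."""
    L_reflected = (1.0 - emissivity) * planck_radiance(wavelength, T_surroundings)
    L_emitted = (L_measured - L_reflected) / emissivity
    return C2 / (wavelength * math.log1p(C1L / (wavelength ** 5 * L_emitted)))

# Illustrative example at 10 um: a surface of emissivity 0.8 at 350 K in 293 K surroundings.
wl = 10e-6
L = 0.8 * planck_radiance(wl, 350.0) + 0.2 * planck_radiance(wl, 293.0)
print(target_temperature(L, wl, 0.8, 293.0))   # recovers 350 K
```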
In practical instruments, optical components are needed to focus the radiation on the detector, usually
through a filter to select the wavelength or waveband. Imperfections in the optics may be partly taken
into account in the instrument calibration, but they will also lead to an imperfectly defined field of
view. Thus, the ‘target size’ of the instrument may be significantly larger than is intended. The
calibration of radiation thermometers is usually done using standard blackbody sources, though
tungsten ribbon lamps are sometimes suitable; see measurement services.
Blackbody cavities
No real surface has an emissivity high enough for direct use as a blackbody source, but fortunately we
can use the fact that the radiation inside a closed cavity at a uniform temperature is truly ‘black’.
This depends only on the wavelength and temperature, and is independent of the emissivity of the
materials of which the cavity is made. Otherwise heat could be transferred from one part of a cavity to
another, even in the absence of a temperature gradient, contrary to the second law of thermodynamics.
It was the long-standing problem of deriving the relationship between the intensity of the cavity
radiation, as a function of the wavelength and the temperature, which was famously solved by Planck
by introducing the phenomenon of quantisation of the radiation. The crucial point which makes
blackbody cavity radiation an ideal standard for the calibration of radiation thermometers is that it only
depends on the wavelength and temperature, and is independent of the material of which the cavity is
made.
The immediate practical difficulty is that if we make a hole in the cavity to observe the radiation, we
perturb the field and the radiation we see is no longer the ideal: the emissivity of the partially open
cavity is always less than 1. Nevertheless, if the cavity has good geometrical design and is large
compared with the aperture diameter, and if the materials used have high surface emissivities (i.e.
they are intrinsically good radiators), then very high cavity emissivities (>0.9999) can be achieved.
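A rough feel for these numbers comes from a Gouffé-type first approximation for an isothermal, diffusely reflecting cavity, ε_eff ≈ ε / [ε(1 - f) + f], where f is the ratio of aperture area to total internal area. This is only a first-order estimate under idealised assumptions; real cavity designs are evaluated numerically, for example by Monte Carlo ray tracing.

```python
def cavity_effective_emissivity(wall_emissivity: float, area_fraction: float) -> float:
    """First-order estimate for an isothermal, diffusely reflecting cavity:
    eps_eff = eps / (eps * (1 - f) + f), with f = aperture area / total internal area.
    A rough approximation only; real designs are modelled numerically."""
    e, f = wall_emissivity, area_fraction
    return e / (e * (1.0 - f) + f)

# A wall emissivity of 0.9 and an aperture occupying 0.1 % of the internal area:
print(cavity_effective_emissivity(0.9, 1e-3))   # about 0.9999
```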
Calibration by comparison
We referred earlier to the use of fixed points for the calibration of platinum resistance thermometers at
the highest level. They are also used for calibrating thermocouple standards at high temperatures, and
fixed-point blackbody sources are used in radiation thermometry. However, only a few thermometers
can be measured while the fixed-point transition lasts and several fixed points are needed to cover the
range. The method is time-consuming and therefore expensive.
Most calibrations are therefore done by comparison: the thermometers under test are placed alongside
calibrated standard thermometers in a uniform thermal environment such as a stirred liquid bath, a
furnace or a cryostat. The thermal environments must be uniform in temperature over the critical
volume, and stable (or
only slowly drifting) during the measurement period. They must be carefully proved in preliminary
investigations, but the conditions are also checked during the calibration by having more than one
standard thermometer present.
Low temperatures
1. Introduction
Physicists want to study and understand the behaviour and properties of matter, not just as
they are found under ambient conditions, but also at extreme conditions of high or low
pressure, temperature, energy, magnetic field and other parameters which change them. By
doing so, discoveries are made, new materials are produced, and new technical developments
become possible. Heating is clearly a process which changes material properties, stimulates
chemical reactions and enables new materials to be produced in industries ranging from
foods, pharmaceuticals, plastics and rubber, to petrochemicals, glass, steel and ceramics, but
what can be said about cooling to low temperatures? In fact there are many reasons why
cooling is useful, and many remarkable discoveries have been made at low temperatures.
Lowering the temperature reduces physical, chemical and biological activity. Chilling or
freezing are key to the preservation, distribution and storage of foodstuffs and biological
samples. Liquefaction of gases is important for the transportation of natural gas, in the use of
oxygen in hospitals and in steel-making, liquid hydrogen and oxygen are used for rocket
propulsion, and helium for cooling large superconductive magnets. Liquid nitrogen and
helium are used in many large and small activities as convenient portable sources of cooling.
Cryogenic (low temperature) engineering is therefore important and big business.
Cooling is sometimes needed just to reduce thermal noise as an unwanted background to a
detection system, but it is often fundamental to the study or application of the phenomenon of
interest: i.e. to observe or make use of low-energy physical effects which are otherwise
disrupted by thermal activity. Thus if the thermal energy of the particles of the system is
greater than the characteristic energy of a physical interaction, kT > Ei, where k is the
Boltzmann constant and T is the (thermodynamic) temperature, then the phenomenon will be
wiped out. Hence, all ordering phenomena have an associated critical temperature above
which they are not observed.
The temperature spectrum is therefore an energy spectrum, in principle extending indefinitely
upwards, and also indefinitely down towards, but never reaching, the absolute zero.
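The correspondence between energy and temperature, E ≈ kT, is easily made concrete; the sketch below converts a characteristic energy into the temperature above which thermal agitation overwhelms it (the example energies are merely illustrative).

```python
k = 1.380649e-23      # Boltzmann constant, J/K (exact in the SI)
eV = 1.602176634e-19  # joules per electronvolt (exact)

def energy_to_temperature(E_joules: float) -> float:
    """Temperature at which the thermal energy kT equals a characteristic energy E."""
    return E_joules / k

# A chemical bond of ~1 eV corresponds to roughly 10^4 K; an interaction of ~1 meV
# (for example a superconducting pairing energy) corresponds to roughly 10 K.
print(round(energy_to_temperature(1.0 * eV)))        # about 11600 K
print(round(energy_to_temperature(1.0e-3 * eV), 1))  # about 11.6 K
```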
2. Refrigeration
The early history of refrigeration is best characterised by the efforts to liquefy the
‘permanent’ gases - those that could not be liquefied by pressure alone. In 1883 both oxygen
and nitrogen were liquefied (at -183 °C and -196 °C, respectively) by a repeated process of
compression, when heat is given off, and expansion, which is accompanied by cooling. A few
years later hydrogen was liquefied at -253 °C, 20 K, by Dewar, who invented the double-
walled glass vacuum flask to contain it. Finally, in a multi-stage process culminating in an
expansion through an orifice, Kamerlingh Onnes collected a small sample of liquid helium, at
4.2 K in a vacuum flask, in Leiden, Holland, in 1908.
Soon afterwards, in 1911, Onnes had developed the technique (and had enough helium) to
produce sufficient quantities of liquid to conduct experiments in it. He began to investigate
the conductivity of metals, and he found that mercury, when cooled just below 4 K (by
evaporating some of the liquid, see Figure 1) lost all its resistance to current flow. This was
the first superconductor, and the first macroscopic quantum effect to be discovered, though it
was many years before the nature of superconductivity was properly understood.
Figure 1: Cooling by evaporation. Evaporation is the change of phase from liquid to vapour. In order for the molecules in the liquid to escape into the vapour state, they require energy to overcome the strong inter-molecular forces. This energy is the latent heat of vaporisation. The more energetic molecules escape, and the molecules left behind in the liquid are less energetic, so the liquid cools.
Onnes also noticed that when the vapour pressure of helium was reduced by pumping on the
liquid to cool it, at some point the cooling paused and the liquid stopped bubbling, and did
not start again until the temperature rose to the same point. Once more, he had stumbled on
an effect which was unexpected and which he could not explain. In this case the liquid helium had
undergone a transition to the macroscopic quantum state now known as superfluidity -
because superfluid helium can flow through the narrowest of channels without resistance or
viscosity. In this state liquid helium also has extremely high thermal conductivity: hence the
liquid in Onnes’ container evaporated only from the surface and no bubbling occurred in the
bulk liquid below the transition temperature, ~2.2 K.
One further surprise is that helium does not freeze on further cooling: there is no triple point,
and the superfluid represents the ordered ground state that persists to the absolute zero. The
simple explanation for this is that helium is a light atom and there are only weak forces of
attraction between the atoms, so the ‘zero-point motion’ which exists even at the absolute
zero (according to the Heisenberg uncertainty principle) is sufficient to prevent the atoms
being fixed in a solid lattice – unless a pressure of ~25 atmospheres is applied.
3. Sub-kelvin temperatures
Having liquefied the last of the available gases, is it possible to reach even lower
temperatures? By pumping on liquid helium and evaporating it one can reach ~1 K: is this the
limit? The answer is: very definitely not.
First of all, helium-4 (4He) has a light isotope, helium-3 (3He) which, being lighter, condenses
at a lower temperature. 3He occurs naturally only in trace quantities: the helium found in gas wells
is produced by the alpha decay of heavy elements, and so is almost entirely 4He, whose nucleus is
the highly stable alpha particle. The boiling point of 3He is 3.2 K, and it can be cooled by pumping
to ~0.3 K.
This may not seem much progress, but it is a factor of 3 colder. At low temperatures this is
what counts: simply speaking, each factor of ten reduction in the temperature is as hard to
achieve, or as significant, as the previous factor of ten, so cooling from 3 K to 0.3 K is as
hard as from 300 K to 30 K or 30 K to 3 K. In that sense the low-temperature scale is best
considered to be logarithmic, on which basis, reaching the absolute zero is equivalent to
reaching minus infinity! It is impossible, and indeed the absolute zero can only be
approached more and more closely, but never reached.
Figure 3: A low-temperature physicist’s temperature scale – nothing much of interest between liquid helium (and the Cosmic Microwave Background) at a few K and the surface of the sun! It might be added that the centre of the sun is at approximately 10⁷ K, and the plasma fusion temperatures at JET have reached 10⁸ K.
BECs refers to Bose-Einstein Condensates, which have been achieved by laser cooling, not using thermodynamic processes. ‘Coldest measured atoms’ below 1 nK seems optimistic. Only the nuclear spin sub-system can be cooled this far, by the process of nuclear cooling: the bulk material lattice and electrons are probably not cooled below 10⁻⁶ K.
This also ties in with the idea of temperature as an energy scale: each cooling by a factor of
ten will reveal physical effects that have characteristic energies a factor of ten weaker. This
process also continues indefinitely: one can never remove all the energy and reach the
absolute zero.
The interest in pushing the limit of cooling to ever lower temperatures is the prospect of
finding yet more unexpected and revealing properties of matter. Some years ago there was
intense interest and competition to find whether 3He would be superfluid, and at what
temperature. It turned out not to occur until a much lower temperature than might be
expected, not until 2.4 mK, almost 1000 times colder than in 4He, and the properties of
superfluid 3He were found to be very different from those of 4He.
Why should 3He behave so differently from 4He? After all, it differs only in missing one
neutron from the nucleus, and hence being 25% lighter. In fact the differences are profound
because the nuclear spin of 3He is half-integral whereas 4He has zero spin. Therefore 3He
obeys Fermi-Dirac statistics whereas 4He obeys Bose-Einstein statistics. Superfluidity in 4He
is a Bose-Einstein condensation, whereas in 3He it is rather more subtle and has a much lower
characteristic energy.
The differences between 3He and 4He are particularly strongly manifested in the workings of
the ‘dilution refrigerator’, which is used for cooling from about 1 K to about 0.01 K.
4. Ultra-low temperatures
The dilution refrigerator, the main vehicle for cooling to ultra-low temperatures, is a truly
remarkable machine. It uses the heat of dilution, or mixing, of 3He in 4He which, as we will
see, is analogous to evaporative cooling. The dilution takes place in the mixing chamber,
where liquid 3He is encouraged to diffuse into liquid 4He from which it can be led off,
distilled and recirculated, thereby achieving a continuous process. To see how it works, we
must look at the phase diagram for mixtures of 3He and 4He.
In Figure 4, left, we see first the ‘lambda-line’, which marks the boundary between superfluid
and normal mixtures. In pure 4He it is at 2.2 K, but as 3He is added the transition takes place
at lower and lower temperatures, until at about 0.85 K the lambda-line splits into two
branches. These mark the boundaries of the phase-separation region in which the mixture
spontaneously separates into the normal 3He-rich ‘concentrated’ phase, and the superfluid
4He-rich ‘dilute’ phase.
Thus if one cools a mixture of, say, 30% 3He in 4He, superfluidity is encountered at about
1.7 K, and phase separation at about 0.6 K. A concentrated phase appears at Point A’ and,
being lighter, this sits on top of the dilute phase (at Point A), the position of the interface
being such as to accommodate the total mixture. On further cooling, the concentrated phase
becomes more concentrated, and the dilute phase more dilute. Eventually the concentrated
phase becomes almost 100 % 3He, but the dilute phase can always hold at least 6 % 3He. This
is crucial, because 3He atoms can easily migrate through the superfluid dilute phase, which
behaves as a quasi-vacuum.
Figure 4, Left: phase diagram for 3He-4He mixtures, temperature versus 3He concentration. Right:
schematic diagram of a dilution unit. From McClintock, Meredith and Wigmore [30].
Referring to the Figure 4, right, in the dilution cycle 3He crosses the boundary from the
concentrated phase to the dilute phase in the lower (mixing) chamber. In doing so it absorbs
the heat of dilution in a kind of upside-down evaporation. It then diffuses through the dilute
phase to the still at about 0.7 K, where it is evaporated: it is preferentially vaporised because
its vapour pressure is much higher than that of 4He. It is then recondensed and fed back through a
flow-regulating impedance into the concentrated side, from which it can again
‘evaporate’ into the dilute phase. Thus there are several extraordinary features, all of which
are crucial to the process:
• the existence of the phase separation
• the existence of a substantial heat of dilution
• the superfluidity of the dilute phase
• the significant solubility of 3He in 4He, even at T = 0
• the vapour pressure of 3He being several hundred times that of 4He at ~0.7 K, so that the
distillate is almost all 3He.
From the point of view of building a dilution refrigerator, the mixing chamber and still
present few problems: they are just chambers with inlets and outlets, as necessary. The key to
a successful design is the part in between: the heat exchangers. The cooling power, Q̇, though
substantial, falls off with the square of the temperature:
Q̇ / W = 84 ṅ T²
where ṅ is the 3He flow rate in mol s⁻¹ and T is the mixing chamber temperature in kelvin. The base
temperature of the refrigerator is reached when the dilution process can absorb only the heat
load on the mixing chamber due to the incoming 3He plus any other heat leaks. The returning
3He must therefore be efficiently cooled by the flow of 3He in the dilute phase, and so it is the
design of the heat exchangers which is the limiting factor. Continuous counterflow heat
exchangers can initially be used but, to overcome the thermal boundary resistance below 0.1
K, which increases as T⁻³, discrete heat exchangers are needed with blocks of sintered
silver powder to provide large surface areas in both the dilute and concentrated phases. Base
temperatures are generally about 5 mK, though 2 mK has been achieved.
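A quick feel for these numbers can be had from the cooling-power formula above. The sketch below evaluates it and estimates the base temperature at which the cooling just balances a residual heat leak; the circulation rate and heat leak used are illustrative, and the load from the imperfectly precooled returning 3He is ignored.

```python
import math

def cooling_power_W(n_dot: float, T: float) -> float:
    """Q_dot / W = 84 * n_dot * T**2, with n_dot the 3He flow rate in mol/s."""
    return 84.0 * n_dot * T ** 2

def base_temperature_K(n_dot: float, heat_leak_W: float) -> float:
    """Temperature at which the available cooling just balances a residual heat leak
    (a crude estimate: the load from the returning 3He is not modelled here)."""
    return math.sqrt(heat_leak_W / (84.0 * n_dot))

n_dot = 100e-6   # illustrative circulation rate: 100 micromoles of 3He per second
print(cooling_power_W(n_dot, 0.1))        # ~84 microwatts at 100 mK
print(base_temperature_K(n_dot, 1e-6))    # ~11 mK against a 1 microwatt heat leak
```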
5. Nuclear cooling
Dilution refrigerators are available commercially and, while they are complex and require
skill to run, in modern versions they are at least semi-automated. However, they do not reach
1 mK, and for experiments below this they are just the platform for further cooling. This can
be achieved using nuclear magnetic cooling: demagnetisation of a nuclear paramagnet such
as copper, as illustrated in the entropy-temperature diagram in Figure 6.
The nuclear magnetic moments are aligned by a powerful (superconducting)
magnet, and the heat of magnetisation is extracted by the mixing chamber of the dilution
refrigerator at about 10 mK. As the entropy falls in this process the sample moves from Point
x to Point y on the diagram. When the material is magnetised at the maximum available field,
a heat switch is opened to isolate it. The field is then ramped slowly down, adiabatically (i.e.
at constant entropy), and the sample therefore cools from Point y to Point z. Very large
temperature changes can be achieved, from 10 mK to 1 μK or less for copper – at least for the
nuclear spins: the temperature of the lattice and electrons lags behind, being cooled only
through the weak spin-lattice coupling. Cooling an experiment, for example a cell of 3He, in
addition requires elaborate measures to overcome the high thermal boundary resistances.
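For an ideal (non-interacting) paramagnet the entropy depends only on B/T, so an adiabatic ramp keeps B/T constant and the spin temperature falls in proportion to the field; in practice a small residual internal field b limits the cooling. The sketch below evaluates this simple model with illustrative numbers; it describes the spin temperature only, not the lattice and electrons.

```python
import math

def final_spin_temperature(T_initial: float, B_initial: float,
                           B_final: float, b_internal: float = 0.0) -> float:
    """Ideal adiabatic demagnetisation of a paramagnet: B/T stays constant, limited
    by the residual internal field b_internal (spin temperature only)."""
    return T_initial * math.sqrt(B_final ** 2 + b_internal ** 2) / B_initial

# Illustrative numbers: spins precooled to 10 mK in 8 T, demagnetised to 10 mT,
# with an internal field of order 0.3 mT.
print(final_spin_temperature(10e-3, 8.0, 10e-3, 0.3e-3))   # ~12.5 microkelvin
```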
Figure 6: Entropy-temperature diagram for a system of magnetic dipoles at three fields,
illustrating the nuclear cooling process, from McClintock, Meredith and Wigmore [1].
9. ULT Thermometry
Having achieved the cooling, how can one measure the temperature reached? A wide variety
of techniques and devices have been used, and the more important thermometers are
summarised over five decades of T, down to 1 mK, in Figure 7, though we will not be able to
refer to them in detail.
At the top the figure indicates that various superconductors can be used as fixed point devices
which show when the temperature has reached their transition points. These points can then
be used to calibrate other thermometers or sensors.
Next we see that we can make use of the 3He and 4He refrigerants themselves. Their vapour
pressures depend on temperature, and hence can be used for temperature measurement, but
only down to about 0.65 K. After that, 3He melting pressures can be measured to provide a
temperature scale down to about 0.9 mK. Equations relating vapour and melting pressures to
temperature have been adopted as the basis for internationally agreed temperature scales.
The temperature dependence of certain magnetic properties (the susceptibility of electronic or
nuclear paramagnets) can also be used, and in this case there is often a good link to the
underlying physical law (the Curie law), so they need only limited additional calibration.
Electronic devices (SQUID noise thermometers or single-electron transport devices) have
been able to achieve accurate values of thermodynamic temperature, and underpin the 3He
melting-pressure scale mentioned earlier. Other electrical sensors shown are resistance
thermometers; low resistance metallic sensors (platinum and rhodium-iron), or high
resistance and more sensitive semiconductors. Since they all dissipate energy, they all
eventually reach a point where the heat cannot be effectively removed, and they become
useless.
Recent developments suggest that a SQUID which measures the thermal noise in a resistor
(which depends on temperature through the Nyquist law) may be useable as a truly practical
primary thermometer, providing thermodynamic values in a small device with good contact
and in a short time. If substantiated, this will at last solve the experimenter’s problem: having
produced the cooling, how can the temperature reached be simply and reliably measured?
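The Nyquist relation referred to here is <V²> = 4kTRΔf for the mean-square thermal noise voltage across a resistance R in a bandwidth Δf. The sketch below simply inverts it for T with illustrative numbers; a real SQUID noise thermometer measures the current noise in a very low resistance and must average for long enough to reduce the statistical uncertainty.

```python
k = 1.380649e-23   # Boltzmann constant, J/K

def noise_temperature(mean_square_voltage: float, resistance: float, bandwidth: float) -> float:
    """Invert the Nyquist formula <V^2> = 4 k T R delta_f for the temperature."""
    return mean_square_voltage / (4.0 * k * resistance * bandwidth)

# Illustrative numbers: a 100 ohm resistor in a 10 kHz bandwidth at 300 K gives
# <V^2> = 4kTR*df of about 1.66e-14 V^2, i.e. roughly 0.13 microvolts rms.
v_squared = 4.0 * k * 300.0 * 100.0 * 10e3
print(noise_temperature(v_squared, 100.0, 10e3))   # recovers 300 K
```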
Reference