
Cathode
Diagram of a copper cathode in a galvanic cell (e.g., a battery). Positively charged cations move
towards the cathode allowing a positive current i to flow out of the cathode.

A cathode is the electrode from which a conventional current leaves a polarized electrical device. This definition can be recalled by using the mnemonic CCD for Cathode Current Departs. A conventional current describes the direction in which positive charges move. Electrons have a negative electrical charge, so the movement of electrons is opposite to that of the conventional current flow. Consequently, the mnemonic cathode current departs also means that electrons flow into the device's cathode from the external circuit. For example, the end of a household battery marked with a + (plus) is the cathode.

The electrode through which conventional current flows the other way, into the device,
is termed an anode.

Charge flow

Conventional current flows from cathode to anode outside the cell or device (with
electrons moving in the opposite direction), regardless of the cell or device type and
operating mode.

Cathode polarity with respect to the anode can be positive or negative depending on
how the device is being operated. Inside a device or a cell, positively charged cations
always move towards the cathode and negatively charged anions move towards the
anode, although cathode polarity depends on the device type, and can even vary
according to the operating mode. Whether the cathode is negatively polarized (such as
recharging a battery) or positively polarized (such as a battery in use), the cathode will
draw electrons into it from outside, as well as attract positively charged cations from
inside.

A battery or galvanic cell in use has a cathode that is the positive terminal since that is
where conventional current flows out of the device. This outward current is carried
internally by positive ions moving from the electrolyte to the positive cathode (chemical
energy is responsible for this "uphill" motion). It is continued externally by electrons
moving into the battery which constitutes positive current flowing outwards. For
example, the Daniell galvanic cell's copper electrode is the positive terminal and the
cathode.

A battery that is recharging or an electrolytic cell performing electrolysis has its cathode as the negative terminal, from which current exits the device and returns to the external generator as charge enters the battery/cell.[1] For example, reversing the current direction in a Daniell galvanic cell converts it into an electrolytic cell where the copper electrode is the positive terminal and also the anode.

In a diode, the cathode is the negative terminal at the pointed end of the arrow symbol,
where current flows out of the device. Note: electrode naming for diodes is always
based on the direction of the forward current (that of the arrow, in which the current
flows "most easily"), even for types such as Zener diodes or solar cells where the
current of interest is the reverse current. In vacuum tubes (including cathode-ray tubes)
it is the negative terminal where electrons enter the device from the external circuit and
proceed into the tube's near-vacuum, constituting a positive current flowing out of the
device.

Etymology

The word was coined in 1834 from the Greek κάθοδος (kathodos), 'descent' or 'way down',[2] by William Whewell, who had been consulted by Michael Faraday over some new names needed to complete a paper on the recently discovered process of electrolysis. In that paper Faraday explained that when an electrolytic cell is oriented so that electric current traverses the "decomposing body" (electrolyte) in a direction "from East to West, or, which will strengthen this help to the memory, that in which the sun appears to move", the cathode is where the current leaves the electrolyte, on the West side: "kata downwards, `odos a way; the way which the sun sets".[3]

The use of 'West' to mean the 'out' direction (actually 'out' → 'West' → 'sunset' → 'down', i.e.
'out of view') may appear unnecessarily contrived. Previously, as related in the first reference
cited above, Faraday had used the more straightforward term "exode" (the doorway where the
current exits). His motivation for changing it to something meaning 'the West electrode' (other
candidates had been "westode", "occiode" and "dysiode") was to make it immune to a possible
later change in the direction convention for current, whose exact nature was not known at
the time. The reference he used to this effect was the Earth's magnetic field direction,
which at that time was believed to be invariant. He fundamentally defined his arbitrary
orientation for the cell as being that in which the internal current would run parallel to
and in the same direction as a hypothetical magnetizing current loop around the local
line of latitude which would induce a magnetic dipole field oriented like the Earth's. This
made the internal current East to West as previously mentioned, but in the event of a
later convention change it would have become West to East, so that the West electrode
would not have been the 'way out' any more. Therefore, "exode" would have become
inappropriate, whereas "cathode" meaning 'West electrode' would have remained
correct with respect to the unchanged direction of the actual phenomenon underlying
the current, then unknown but, he thought, unambiguously defined by the magnetic
reference. In retrospect the name change was unfortunate, not only because the Greek
roots alone do not reveal the cathode's function any more, but more importantly
because, as we now know, the Earth's magnetic field direction on which the "cathode"
term is based is subject to reversals whereas the current direction convention on which
the "exode" term was based has no reason to change in the future.

Since the later discovery of the electron, an easier to remember, and more durably
technically correct (although historically false), etymology has been suggested:
cathode, from the Greek kathodos, 'way down', 'the way (down) into the cell (or other
device) for electrons'.

In chemistry

In chemistry, a cathode is the electrode of an electrochemical cell at which reduction occurs. The cathode can be negative, as when the cell is electrolytic (where electrical energy provided to the cell is being used for decomposing chemical compounds); or positive, as when the cell is galvanic (where chemical reactions are used for generating electrical energy). The cathode supplies electrons to the positively charged cations which flow to it from the electrolyte (even if the cell is galvanic, i.e., when the cathode is positive and therefore would be expected to repel the positively charged cations; this is due to the electrode potential relative to the electrolyte solution being different for the anode and cathode metal/electrolyte systems in a galvanic cell).

The cathodic current, in electrochemistry, is the flow of electrons from the cathode
interface to a species in solution. The anodic current is the flow of electrons into the
anode from a species in solution.

Electrolytic cell
In an electrolytic cell, the cathode is where the negative polarity is applied to drive the
cell. Common results of reduction at the cathode are hydrogen gas or pure metal from
metal ions. When discussing the relative reducing power of two redox agents, the
couple for generating the more reducing species is said to be more "cathodic" with
respect to the more easily reduced reagent.

Galvanic cell

In a galvanic cell, the cathode is where the positive pole is connected to allow the circuit
to be completed: as the anode of the galvanic cell gives off electrons, they return from
the circuit into the cell through the cathode.

Electroplating metal cathode (electrolysis)

When metal ions are reduced from ionic solution, they form a pure metal surface on the
cathode. Items to be plated with pure metal are attached to and become part of the
cathode in the electrolytic solution.
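As a rough illustration of the quantities involved, the mass plated onto the cathode follows Faraday's law of electrolysis, m = (Q/F)·(M/z). The short Python sketch below works this out for copper plating; the current, time and copper values are assumptions chosen for illustration, not figures from the text.

FARADAY = 96485.0  # C/mol, Faraday constant

def plated_mass_grams(current_a, time_s, molar_mass_g, z):
    """Mass deposited = (charge / Faraday constant) * (molar mass / charge number of the ion)."""
    charge = current_a * time_s              # total charge in coulombs
    moles_of_metal = charge / (z * FARADAY)  # each metal ion needs z electrons
    return moles_of_metal * molar_mass_g

# Assumed example: copper plating (Cu2+ + 2e- -> Cu, M = 63.55 g/mol, z = 2) at 2 A for one hour.
print(plated_mass_grams(2.0, 3600.0, 63.55, 2))  # ~2.37 g of copper on the cathode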

In electronics

Vacuum tubes

Glow from the directly heated cathode of a 1 kW power tetrode tube in a radio transmitter. The
cathode filament is not directly visible

In a vacuum tube or electronic vacuum system, the cathode is a metal surface which emits free electrons into the evacuated space. Since the electrons are attracted to the positive nuclei of the metal atoms, they normally stay inside the metal and require energy to leave it; this is called the work function of the metal.[4] Cathodes are induced to emit electrons by several mechanisms:[4]

● Thermionic emission: The cathode can be heated. The increased thermal motion of the metal atoms "knocks" electrons out of the surface, an effect called thermionic emission; a rough numerical sketch of the resulting emission current follows this list. This technique is used in most vacuum tubes.
● Field electron emission: A strong electric field can be applied to the surface by placing an electrode with a high positive voltage near the cathode. The positively charged electrode attracts the electrons, causing some electrons to leave the cathode's surface.[4] This process is used in cold cathodes in some electron microscopes,[5][6][7] and in microelectronics fabrication.[6]
● Secondary emission: An electron, atom or molecule colliding with the surface
of the cathode with enough energy can knock electrons out of the surface.
These electrons are called secondary electrons. This mechanism is used in
gas-discharge lamps such as neon lamps.
● Photoelectric emission: Electrons can also be emitted from the electrodes of certain metals when light of frequency greater than the threshold frequency falls on it. This effect is called photoelectric emission, and the electrons produced are called photoelectrons.[4] This effect is used in phototubes and image intensifier tubes.
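The thermionic current referred to above is commonly estimated with the Richardson-Dushman law, J = A·T²·exp(-W/(kT)), where W is the work function. The Python sketch below evaluates it; the work function values and temperatures are illustrative assumptions, not figures from the text.

import math

A_RICHARDSON = 1.2e6       # A m^-2 K^-2, approximate universal Richardson constant
K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def emission_current_density(temp_k, work_function_ev):
    """Thermionic emission current density (A/m^2) from the Richardson-Dushman law."""
    return A_RICHARDSON * temp_k**2 * math.exp(-work_function_ev / (K_BOLTZMANN_EV * temp_k))

# A bare tungsten filament (work function ~4.5 eV, assumed) must run far hotter than a
# low-work-function coated cathode (~1.5 eV, assumed) to emit a usable current.
print(emission_current_density(2000.0, 4.5))  # bare tungsten, white-hot: ~20 A/m^2
print(emission_current_density(900.0, 1.5))   # coated cathode, dull red: ~4000 A/m^2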

Cathodes can be divided into two types:

Hot cathode

Main article: Hot cathode

Two indirectly-heated cathodes (orange heater strip) in ECC83 dual triode tube
Cutaway view of a triode vacuum tube with an indirectly-heated cathode (orange tube), showing
the heater element inside

Schematic symbol used in circuit diagrams for vacuum tube, showing cathode

A hot cathode is a cathode that is heated by a filament to produce electrons by thermionic emission.[4][8] The filament is a thin wire of a refractory metal like tungsten heated red-hot by an electric current passing through it. Before the advent of transistors in the 1960s, virtually all electronic equipment used hot-cathode vacuum tubes. Today hot cathodes are used in vacuum tubes in radio transmitters and microwave ovens, to produce the electron beams in older cathode-ray tube (CRT) type televisions and computer monitors, in x-ray generators, electron microscopes, and fluorescent tubes.

There are two types of hot cathodes:[4]

● Directly heated cathode: In this type, the filament itself is the cathode and
emits the electrons directly. Directly heated cathodes were used in the first
vacuum tubes, but today they are only used in fluorescent tubes, some large
transmitting vacuum tubes, and all X-ray tubes.
● Indirectly heated cathode: In this type, the filament is not the cathode but
rather heats the cathode which then emits electrons. Indirectly heated
cathodes are used in most devices today. For example, in most vacuum
tubes the cathode is a nickel tube with the filament inside it, and the heat from the filament causes the outside surface of the tube to emit electrons.[8]
The filament of an indirectly heated cathode is usually called the heater. The
main reason for using an indirectly heated cathode is to isolate the rest of the
vacuum tube from the electric potential across the filament. Many vacuum
tubes use alternating current to heat the filament. In a tube in which the
filament itself was the cathode, the alternating electric field from the filament
surface would affect the movement of the electrons and introduce hum into
the tube output. It also allows the filaments in all the tubes in an electronic
device to be tied together and supplied from the same current source, even
though the cathodes they heat may be at different potentials.

In order to improve electron emission, cathodes are treated with chemicals, usually
compounds of metals with a low work function. Treated cathodes require less surface
area, lower temperatures and less power to supply the same cathode current. The
untreated tungsten filaments used in early tubes (called "bright emitters") had to be
heated to 1,400 °C (2,550 °F), white-hot, to produce sufficient thermionic emission for
use, while modern coated cathodes produce far more electrons at a given temperature, so they only have to be heated to 425–600 °C (797–1,112 °F).[4][9][10] There are two main types of treated cathodes:[4][8]

Cold cathode (lefthand electrode) in neon lamp

● Coated cathode – In these the cathode is covered with a coating of alkali metal oxides, often barium and strontium oxide. These are used in low-power tubes.
● Thoriated tungsten – In high-power tubes, ion bombardment can destroy the
coating on a coated cathode. In these tubes a directly heated cathode
consisting of a filament made of tungsten incorporating a small amount of
thorium is used. The layer of thorium on the surface which reduces the work function of the cathode is continually replenished as it is lost by diffusion of thorium from the interior of the metal.[11]
Cold cathode

Main article: Cold cathode

This is a cathode that is not heated by a filament. They may emit electrons by field
electron emission, and in gas-filled tubes by secondary emission. Some examples are
electrodes in neon lights, cold-cathode fluorescent lamps (CCFLs) used as backlights in
laptops, thyratron tubes, and Crookes tubes. They do not necessarily operate at room
temperature; in some devices the cathode is heated by the electron current flowing
through it to a temperature at which thermionic emission occurs. For example, in some
fluorescent tubes a momentary high voltage is applied to the electrodes to start the
current through the tube; after starting, the electrodes are heated enough by the current to keep emitting electrons to sustain the discharge.

Cold cathodes may also emit electrons by photoelectric emission. These are often called photocathodes and are used in phototubes used in scientific instruments and image intensifier tubes used in night vision goggles.

Diodes

In a semiconductor diode, the cathode is the N-doped layer of the PN junction with a high density of free electrons due to doping, and an equal density of fixed positive charges, which are the dopants that have been thermally ionized. In the anode, the converse applies: it features a high density of free "holes" and consequently fixed negative dopants which have captured an electron (hence the origin of the holes).

When P and N-doped layers are created adjacent to each other, diffusion ensures that
electrons flow from high to low density areas: That is, from the N to the P side. They
leave behind the fixed positively charged dopants near the junction. Similarly, holes
diffuse from P to N leaving behind fixed negative ionised dopants near the junction.
These layers of fixed positive and negative charges are collectively known as the
depletion layer because they are depleted of free electrons and holes. The depletion
layer at the junction is at the origin of the diode's rectifying properties. This is due to the resulting internal field and corresponding potential barrier, which inhibit current flow under reverse applied bias (which increases the internal depletion layer field). Conversely, they allow it under forward applied bias, where the applied voltage reduces the potential barrier.
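The rectifying behaviour described above is commonly summarized by the Shockley diode equation, I = I_S(e^(V/(nV_T)) - 1). The Python sketch below evaluates it for a forward and a reverse voltage; the saturation current and ideality factor are illustrative assumptions, not values from the text.

import math

def diode_current(v_volts, i_sat=1e-12, n=1.0, v_t=0.02585):
    """Shockley diode equation for an ideal PN junction near room temperature (V_T ~ 25.85 mV)."""
    return i_sat * (math.exp(v_volts / (n * v_t)) - 1.0)

# Forward bias lets the current grow exponentially; reverse bias only leaks about -I_sat.
print(diode_current(0.6))   # ~0.01 A for the assumed parameters
print(diode_current(-5.0))  # ~ -1e-12 A, essentially blocked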

Instrumentation

Aircraft

Early aircraft had a few sensors.[7] "Steam gauges" converted air pressures into needle
deflections that could be interpreted as altitude and airspeed. A magnetic compass
provided a sense of direction. The displays to the pilot were as critical as the
measurements.

A modern aircraft has a far more sophisticated suite of sensors and displays, which are
embedded into avionics systems. The aircraft may contain inertial navigation systems,
global positioning systems, weather radar, autopilots, and aircraft stabilization systems.
Redundant sensors are used for reliability. A subset of the information may be
transferred to a crash recorder to aid mishap investigations. Modern pilot displays now
include computer displays including head-up displays.

Air traffic control radar is a distributed instrumentation system. The ground part sends
an electromagnetic pulse and receives an echo (at least). Aircraft carry transponders
that transmit codes on reception of the pulse. The system displays an aircraft map
location, an identifier and optionally altitude. The map location is based on sensed
antenna direction and sensed time delay. The other information is embedded in the
transponder transmission.
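As a concrete illustration of how the map location is derived, the range to the aircraft follows from the round-trip time delay of the pulse, R = c·t/2. The Python sketch below shows the arithmetic for an assumed example delay (an illustration, not a value from the text).

C_LIGHT = 299_792_458.0  # speed of light, m/s

def range_from_delay(round_trip_delay_s):
    """Target range in metres: the pulse covers the distance twice (out and back)."""
    return C_LIGHT * round_trip_delay_s / 2.0

# An echo arriving 500 microseconds after the pulse corresponds to roughly 75 km.
print(range_from_delay(500e-6) / 1000.0)  # ~74.9 km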

Laboratory instrumentation

Among the possible uses of the term is a collection of laboratory test equipment controlled by a computer through an IEEE-488 bus (also known as GPIB, for General Purpose Interface Bus, or HP-IB, for Hewlett-Packard Interface Bus). Laboratory equipment is available to measure many electrical and chemical quantities. Such a collection of equipment might be used to automate the testing of drinking water for pollutants.
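A minimal sketch of such computer control, using the third-party PyVISA library; the GPIB address and the SCPI-capable multimeter are hypothetical assumptions for illustration, not equipment named in the text.

# Minimal GPIB control sketch using PyVISA (pip install pyvisa plus a VISA backend).
import pyvisa

rm = pyvisa.ResourceManager()
dmm = rm.open_resource("GPIB0::22::INSTR")   # hypothetical address of a bench multimeter

print(dmm.query("*IDN?"))                    # standard IEEE-488.2 identification query
reading = float(dmm.query("MEAS:VOLT:DC?"))  # common SCPI measurement command
print(f"DC voltage: {reading} V")
dmm.close()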

Instrumentation engineering


The instrumentation part of a piping and instrumentation diagram will be developed by an
instrumentation engineer.

Instrumentation engineering is the engineering specialization focused on the principles and operation of measuring instruments that are used in the design and configuration of automated systems in areas such as the electrical and pneumatic domains, and on the control of the quantities being measured. Instrumentation engineers typically work for industries with automated processes.

[…] devices did not become standard in meteorology for two centuries.[3] The concept has remained virtually unchanged, as evidenced by pneumatic chart recorders, where a pressurized bellows displaces a pen. Integrating sensors, displays, recorders, and controls was uncommon until the industrial revolution, limited by both need and practicality.

Early industrial

The evolution of analogue control loop signalling from the pneumatic era to the electronic era

Early systems used direct process connections to local control panels for control and
indication, which from the early 1930s saw the introduction of pneumatic transmitters
and automatic 3-term (PID) controllers.

The ranges of pneumatic transmitters were defined by the need to control valves and actuators in the field. A signal of 3 to 15 psi (20 to 100 kPa or 0.2 to 1.0 kg/cm²) became the standard, with 6 to 30 psi occasionally being used for larger valves. Transistor electronics enabled wiring to replace pipes, initially with a range of 20 to 100 mA at up to 90 V for loop-powered devices, reduced to 4 to 20 mA at 12 to 24 V in more modern systems. A transmitter is a device that produces an output signal, often in the form of a 4–20 mA electrical current signal, although many other options using voltage, frequency, pressure, or Ethernet are possible. The transistor was commercialized by the mid-1950s.[4]
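A live-zero 4–20 mA loop maps the transmitter's measurement range linearly onto the 4–20 mA span, so 4 mA represents the bottom of range and 20 mA the top. The Python sketch below shows that scaling for a hypothetical 0–100 °C temperature transmitter (the range is an assumption for illustration).

def to_loop_current_ma(value, range_lo, range_hi):
    """Map a measured value onto the 4-20 mA live-zero current loop."""
    fraction = (value - range_lo) / (range_hi - range_lo)
    return 4.0 + 16.0 * fraction

def from_loop_current_ma(current_ma, range_lo, range_hi):
    """Recover the measured value from the loop current at the receiving end."""
    return range_lo + (current_ma - 4.0) / 16.0 * (range_hi - range_lo)

# Hypothetical temperature transmitter ranged 0-100 degC:
print(to_loop_current_ma(25.0, 0.0, 100.0))    # 8.0 mA
print(from_loop_current_ma(12.0, 0.0, 100.0))  # 50.0 degC
# A reading below 4 mA indicates a broken loop, which is why the zero is "live".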

Instruments attached to a control system provided signals used to operate solenoids, valves, regulators, circuit breakers, relays and other devices. Such devices could control a desired output variable, and provide either remote monitoring or automated control capabilities.

Each instrument company introduced its own standard instrumentation signal, causing confusion until the 4–20 mA range was adopted as the standard electronic instrument signal for transmitters and valves. This signal was eventually standardized as ANSI/ISA S50, "Compatibility of Analog Signals for Electronic Industrial Process Instruments", in the 1970s. The transformation of instrumentation from mechanical pneumatic transmitters, controllers, and valves to electronic instruments reduced maintenance costs, as electronic instruments were more dependable than mechanical instruments. This also increased efficiency and production due to their greater accuracy. Pneumatics enjoyed some advantages, being favored in corrosive and explosive atmospheres.[5]

Automatic process control

Example of a single industrial control loop, showing continuously modulated control of process
flow

In the early years of process control, process indicators and control elements such as valves were monitored by an operator who walked around the unit adjusting the valves to obtain the desired temperatures, pressures, and flows. As technology evolved, pneumatic controllers were invented and mounted in the field; these monitored the process and controlled the valves, reducing the amount of time process operators needed to monitor the process. In later years the controllers were moved to a central room: signals were sent into the control room to monitor the process, and output signals were sent to the final control element, such as a valve, to adjust the process as needed. These controllers and indicators were mounted on a wall called a control board. The operators stood in front of this board, walking back and forth to monitor the process indicators. This again reduced the number of process operators needed and the time they spent walking around the units. The most common pneumatic signal range used during these years was 3–15 psig.[6]
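The 3-term (PID) controllers mentioned above combine proportional, integral, and derivative action on the error between a setpoint and the measured process variable. The sketch below is a minimal discrete-time Python illustration; the gains and the flow loop it is wired into are assumptions, not values from the text.

class PIDController:
    """Minimal discrete-time 3-term (PID) controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # accumulated error (I term)
        derivative = (error - self.prev_error) / self.dt  # rate of change (D term)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical flow loop: drive a valve position from a flow measurement once per second.
pid = PIDController(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
valve_output = pid.update(setpoint=75.0, measurement=68.0)
print(valve_output)  # controller output, to be clamped to the valve's 0-100% range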

Large integrated computer-based systems

Pneumatic "three term" pneumatic PID controller, widely used before electronics became reliable
and cheaper and safe to use in hazardous areas (Siemens Telepneu Example)

A pre-DCS/SCADA era central control room. Whilst the controls are centralised in one place,
they are still discrete and not integrated into one system.

A DCS control room where plant information and controls are displayed on computer graphics
screens. The operators are seated and can view and control any part of the process from their
screens, whilst retaining a plant overview.
Process control of large industrial plants has evolved through many stages. Initially,
control would be from panels local to the process plant. However, this required a large
manpower resource to attend to these dispersed panels, and there was no overall view
of the process. The next logical development was the transmission of all plant
measurements to a permanently staffed central control room. Effectively this was the
centralization of all the localized panels, with the advantages of lower manning levels
and easy overview of the process. Often the controllers were behind the control room
panels, and all automatic and manual control outputs were transmitted back to plant.

However, whilst providing a central control focus, this arrangement was inflexible as
each control loop had its own controller hardware, and continual operator movement
within the control room was required to view different parts of the process. With the coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant and communicate with the graphic displays in the control room or rooms. The distributed control concept was born.

The introduction of DCSs and SCADA allowed easy interconnection and reconfiguration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located close to the plant to reduce cabling runs, and provided high-level overviews of plant status and production levels.

Application
In some cases, the sensor is a very minor element of the mechanism. Digital cameras
and wristwatches might technically meet the loose definition of instrumentation because
they record and/or display sensed information. Under most circumstances neither would
be called instrumentation, but when used to measure the elapsed time of a race and to
document the winner at the finish line, both would be called instrumentation.

Household

A very simple example of an instrumentation system is a mechanical thermostat, used to control a household furnace and thus to control room temperature. A typical unit senses temperature with a bi-metallic strip. It displays temperature by a needle on the free end of the strip. It activates the furnace by a mercury switch. As the switch is rotated by the strip, the mercury makes physical (and thus electrical) contact between electrodes.
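In control terms the thermostat is an on/off (bang-bang) controller; a small dead band keeps the furnace from cycling rapidly around the setpoint. A minimal Python sketch of that behaviour, with an assumed setpoint and dead band chosen for illustration:

def furnace_command(temp_c, furnace_on, setpoint_c=20.0, deadband_c=1.0):
    """On/off thermostat with hysteresis: switch on below the band, off above it."""
    if temp_c < setpoint_c - deadband_c:
        return True        # too cold: turn the furnace on
    if temp_c > setpoint_c + deadband_c:
        return False       # warm enough: turn it off
    return furnace_on      # inside the dead band: keep the previous state

state = False
for temperature in (18.0, 19.5, 20.5, 21.5, 20.5):
    state = furnace_command(temperature, state)
    print(temperature, state)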

Another example of an instrumentation system is a home security system. Such a system consists of sensors (motion detection, switches to detect door openings), simple algorithms to detect intrusion, local control (arm/disarm) and remote monitoring of the system so that the police can be summoned. Communication is an inherent part of the design.

Kitchen appliances use sensors for control.

● A refrigerator maintains a constant temperature by actuating the cooling system when the temperature becomes too high.
● An automatic ice machine makes ice until a limit switch is thrown.
● Pop-up bread toasters allow the time to be set.
● Non-electronic gas ovens will regulate the temperature with a thermostat
controlling the flow of gas to the gas burner. These may feature a sensor bulb
sited within the main chamber of the oven. In addition, there may be a safety
cut-off flame supervision device: after ignition, the burner's control knob must
be held for a short time in order for a sensor to become hot, and permit the
flow of gas to the burner. If the safety sensor becomes cold, this may indicate
the flame on the burner has become extinguished, and to prevent a
continuous leak of gas the flow is stopped.
● Electric ovens use a temperature sensor and will turn on heating elements
when the temperature is too low. More advanced ovens will actuate fans in
response to temperature sensors, to distribute heat or to cool.
● A common toilet refills the water tank until a float closes the valve. The float is
acting as a water level sensor.
Automotive

Modern automobiles have complex instrumentation. In addition to displays of engine rotational speed and vehicle linear speed, there are also displays of battery voltage and
current, fluid levels, fluid temperatures, distance traveled, and feedback of various
controls (turn signals, parking brake, headlights, transmission position). Cautions may
be displayed for special problems (fuel low, check engine, tire pressure low, door ajar,
seat belt unfastened). Problems are recorded so they can be reported to diagnostic
equipment. Navigation systems can provide voice commands to reach a destination.
Automotive instrumentation must be cheap and reliable over long periods in harsh
environments. There may be independent airbag systems that contain sensors, logic
and actuators. Anti-skid braking systems use sensors to control the brakes, while cruise
control affects throttle position. A wide variety of services can be provided via
communication links on the OnStar system. Autonomous cars (with exotic
instrumentation) have been shown.


Control engineering

To implement such controllers, electronics control engineers may use electronic circuits, digital signal processors, microcontrollers, and programmable logic controllers (PLCs). Control engineering has a wide range of applications, from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles.[66] It
also plays an important role in industrial automation.

Control engineers often use feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's power output accordingly.[67] Where there is regular feedback, control theory can be used to determine how the system responds to such feedback.

Control engineers also work in robotics to design autonomous systems using control algorithms which interpret sensory feedback to control actuators that move robots such as autonomous vehicles, autonomous drones and others used in a variety of industries.[68]

Electronics

Main article: Electronic engineering

[…] or transistors, although all main electronic components (resistors, capacitors etc.) can be created at a microscopic level. Nanoelectronics is the further scaling of devices down to nanometer levels. Modern devices are already in the nanometer regime, with below 100 nm processing having been standard since around 2002.[72]

Microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon (at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide) to obtain the desired transport of electronic charge and control of current. The field of microelectronics involves a significant amount of chemistry and material science and requires the electronic engineer working in the field to have a very good working knowledge of the effects of quantum mechanics.[73]

Signal processing

[…] broadcast engineering, power electronics, and biomedical engineering […]

[…] instruments measure variables such as wind speed and altitude to enable pilots to control aircraft analytically. Similarly, thermocouples use the Peltier-Seebeck effect to measure the temperature difference between two points.[79]

Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant.[80] For this reason, instrumentation engineering is often viewed as the counterpart of control.

[…] generation, transmission, amplification, modulation, detection, and analysis of electromagnetic radiation. The application of optics deals with the design of optical instruments such as lenses, microscopes, telescopes, and other equipment that uses the properties of electromagnetic radiation. Other prominent applications of optics include electro-optical sensors and measurement systems, lasers, fiber-optic communication systems, and optical disc systems (e.g. CD and DVD). Photonics builds heavily on optical technology, supplemented with modern developments such as optoelectronics […]

[…] equipment, devices, and systems which use electricity, electronics, and electromagnetism. It emerged as an identifiable occupation in the latter half of the 19th century after the commercialization of the electric telegraph, the telephone, and electrical power generation, distribution, and use.

Electrical engineering is divided into a wide range of different fields, including computer engineering, systems engineering, power engineering, telecommunications, radio-frequency engineering […]

Weather radar


Weather radar in Norman, Oklahoma with rainshaft

Weather (WF44) radar dish

University of Oklahoma OU-PRIME C-band, polarimetric, weather radar during construction

Weather radar, also called weather surveillance radar (WSR) and Doppler weather
radar, is a type of radar used to locate precipitation, calculate its motion, and estimate
its type (rain, snow, hail etc.). Modern weather radars are mostly pulse-Doppler radars,
capable of detecting the motion of rain droplets in addition to the intensity of the
precipitation. Both types of data can be analyzed to determine the structure of storms
and their potential to cause severe weather.

During World War II, radar operators discovered that weather was causing echoes on
their screens, masking potential enemy targets. Techniques were developed to filter
them, but scientists began to study the phenomenon. Soon after the war, surplus radars
were used to detect precipitation. Since then, weather radar has evolved and is used by
national weather services, research departments in universities, and in television
stations' weather departments. Raw images are routinely processed by specialized
software to make short term forecasts of future positions and intensities of rain, snow,
hail, and other weather phenomena. Radar output is even incorporated into numerical
weather prediction models to improve analyses and forecasts.

History

Typhoon Cobra as seen on a ship's radar screen in December 1944.

During World War II, military radar operators noticed noise in returned echoes due to rain, snow, and sleet. After the war, military scientists returned to civilian life or continued in the Armed Forces and pursued their work in developing a use for those echoes.[1] In the United States, David Atlas, at first working for the Air Force and later for MIT, developed the first operational weather radars. In Canada, J.S. Marshall and R.H. Douglas formed the "Stormy Weather Group" in Montreal.[2][3] Marshall and his doctoral student Walter Palmer are well known for their work on the drop size distribution in mid-latitude rain that led to understanding of the Z-R relation, which correlates a given radar reflectivity with the rate at which rainwater is falling. In the United Kingdom, research continued to study the radar echo patterns and weather elements such as stratiform rain and convective clouds, and experiments were done to evaluate the potential of different wavelengths from 1 to 10 centimeters. By 1950 the UK company EKCO was demonstrating its airborne 'cloud and collision warning search radar equipment'.[4]
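The Z-R relation mentioned above is commonly written in the Marshall-Palmer form Z = 200·R^1.6, with Z the reflectivity factor in mm⁶/m³ and R the rain rate in mm/h. The Python sketch below inverts it to turn a reflectivity in dBZ into an approximate rain rate; the coefficients are the widely used stratiform-rain defaults, and any given radar product may use different ones.

def rain_rate_mm_per_h(dbz, a=200.0, b=1.6):
    """Invert the Marshall-Palmer style Z-R relation Z = a * R**b.

    dbz is reflectivity in dBZ, i.e. 10*log10(Z) with Z in mm^6/m^3.
    """
    z = 10.0 ** (dbz / 10.0)     # convert dBZ back to linear reflectivity
    return (z / a) ** (1.0 / b)  # solve Z = a * R**b for R

# Roughly: 20 dBZ is light rain, 40 dBZ is heavy rain.
print(rain_rate_mm_per_h(20.0))  # ~0.65 mm/h
print(rain_rate_mm_per_h(40.0))  # ~11.5 mm/h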

1960s radar technology detected tornado producing supercells over the Minneapolis-Saint Paul
metropolitan area.

Between 1950 and 1980, reflectivity radars, which measure the position and intensity of
precipitation, were incorporated by weather services around the world. The early
meteorologists had to watch a cathode ray tube. In 1953 Donald Staggs, an electrical engineer working for the Illinois State Water Survey, made the first recorded radar observation of a "hook echo" associated with a tornadic thunderstorm.[5]

The first use of weather radar on television in the United States was in September 1961.
As Hurricane Carla was approaching the state of Texas, local reporter Dan Rather,
suspecting the hurricane was very large, took a trip to the U.S. Weather Bureau WSR-
57 radar site in Galveston in order to get an idea of the size of the storm. He convinced
the bureau staff to let him broadcast live from their office and asked a meteorologist to
draw him a rough outline of the Gulf of Mexico on a transparent sheet of plastic. During
the broadcast, he held that transparent overlay over the computer's black-and-white
radar display to give his audience a sense both of Carla's size and of the location of the
storm's eye. This made Rather a national name, and his report helped the alerted population accept the evacuation of an estimated 350,000 people ordered by the authorities, which was the largest evacuation in US history at that time. Just 46 people were killed, thanks to the warning, and it was estimated that the evacuation saved several thousand lives, as the smaller 1900 Galveston hurricane had killed an estimated 6,000-12,000 people.[6]

During the 1970s, radars began to be standardized and organized into networks. The
first devices to capture radar images were developed. The number of scanned angles
was increased to get a three-dimensional view of the precipitation, so that horizontal
cross-sections (CAPPI) and vertical cross-sections could be performed. Studies of the
organization of thunderstorms were then possible for the Alberta Hail Project in Canada
and National Severe Storms Laboratory (NSSL) in the US in particular.
The NSSL, created in 1964, began experimentation on dual polarization signals and on
Doppler effect uses. In May 1973, a tornado devastated Union City, Oklahoma, just
west of Oklahoma City. For the first time, a Dopplerized 10 cm wavelength radar from NSSL documented the entire life cycle of the tornado.[7] The researchers discovered a mesoscale rotation in the cloud aloft before the tornado touched the ground – the tornadic vortex signature. NSSL's research helped convince the National Weather Service that Doppler radar was a crucial forecasting tool.[7] The Super Outbreak of tornadoes on 3–4 April 1974 and their devastating destruction might have helped to get funding for further developments.

NEXRAD in South Dakota with a supercell in the background.

Between 1980 and 2000, weather radar networks became the norm in North America,
Europe, Japan and other developed countries. Conventional radars were replaced by
Doppler radars, which in addition to position and intensity could track the relative
velocity of the particles in the air. In the United States, the construction of a network consisting of 10 cm radars, called NEXRAD or WSR-88D (Weather Surveillance Radar 1988 Doppler), was started in 1988 following NSSL's research.[7][8] In Canada, Environment Canada constructed the King City station, with a 5 cm research Doppler radar, by 1985;[9] McGill University dopplerized its radar (J. S. Marshall Radar Observatory) in 1993.[10] This led to a complete Canadian Doppler network between 1998 and 2004. France and other European countries had switched to Doppler networks by the early 2000s. Meanwhile, rapid advances in computer technology led to algorithms to detect signs of severe weather, and many applications for media outlets and researchers.

After 2000, research on dual polarization technology moved into operational use,
increasing the amount of information available on precipitation type (e.g. rain vs. snow).
"Dual polarization" means that microwave radiation which is polarized both horizontally
and vertically (with respect to the ground) is emitted. Wide-scale deployment was done by the end of the decade or the beginning of the next in some countries such as the United States, France, and Canada.[11] In April 2013, all United States National Weather Service NEXRADs were completely dual-polarized.[12]

Since 2003, the U.S. National Oceanic and Atmospheric Administration has been
experimenting with phased-array radar as a replacement for conventional parabolic
antenna to provide more time resolution in atmospheric sounding. This could be
significant with severe thunderstorms, as their evolution can be better evaluated with
more timely data.

Also in 2003, the National Science Foundation established the Engineering Research
Center for Collaborative Adaptive Sensing of the Atmosphere (CASA), a
multidisciplinary, multi-university collaboration of engineers, computer scientists,
meteorologists, and sociologists to conduct fundamental research, develop enabling
technology, and deploy prototype engineering systems designed to augment existing
radar systems by sampling the generally undersampled lower troposphere with
inexpensive, fast scanning, dual polarization, mechanically scanned and phased array
radars.

In 2023, the private American company Tomorrow.io launched a Ka-band space-based radar for weather observation and forecasting.[13][14]

Principle
Sending radar pulses

A radar beam spreads out as it moves away from the radar station, covering an increasingly
large volume.
Weather radars send directional pulses of microwave radiation, on the order of one
microsecond long, using a cavity magnetron or klystron tube connected by a waveguide
to a parabolic antenna. The wavelengths of 1 – 10 cm are approximately ten times the
diameter of the droplets or ice particles of interest, because Rayleigh scattering occurs
at these frequencies. This means that part of the energy of each pulse will bounce off these small particles, back towards the radar station.[15]


Shorter wavelengths are useful for smaller particles, but the signal is more quickly attenuated. Thus 10 cm (S-band) radar is preferred but is more expensive than a 5 cm C-band system. 3 cm X-band radar is used only for short-range units, and 1 cm Ka-band weather radar is used only for research on small-particle phenomena such as drizzle and fog.[15] W band (3 mm) weather radar systems have seen limited university use, but due to quicker attenuation, most data are not operational.

Radar pulses diverge as they move away from the radar station. Thus the volume of air that a radar pulse is traversing is larger for areas farther away from the station, and smaller for nearby areas, decreasing resolution at farther distances. At the end of a 150 – 200 km sounding range, the volume of air scanned by a single pulse is on the order of a cubic kilometre.
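To make the divergence concrete, a pencil beam of angular width θ has a cross-section diameter of roughly R·θ at range R, so the sampled volume grows roughly with the square of range. The sketch below works this out for an assumed 1° beamwidth and 1 μs pulse (illustrative values, not specifications from the text).

import math

C_LIGHT = 3.0e8  # m/s

def pulse_volume_km3(range_km, beamwidth_deg=1.0, pulse_s=1e-6):
    """Approximate volume sampled by one pulse: a disc of diameter R*theta
    and thickness c*tau/2 (the range resolution)."""
    radius_m = (range_km * 1000.0) * math.radians(beamwidth_deg) / 2.0
    thickness_m = C_LIGHT * pulse_s / 2.0
    return math.pi * radius_m**2 * thickness_m / 1e9  # m^3 -> km^3

print(pulse_volume_km3(50.0))   # ~0.09 km^3 near the radar
print(pulse_volume_km3(200.0))  # ~1.4 km^3 at long range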

Cavity magnetron
Magnetron with section removed to exhibit the cavities. The cathode in the center is not visible.
The antenna emitting microwaves is at the left. The magnets producing a field parallel to the long
axis of the device are not shown.

A similar magnetron with a different section removed. Central cathode is visible; antenna
conducting microwaves at the top; magnets are not shown.

Obsolete 9 GHz magnetron tube and magnets from a Soviet aircraft radar. The tube is embraced
between the poles of two horseshoe-shaped alnico magnets (top, bottom), which create a
magnetic field along the axis of the tube. The microwaves are emitted from the waveguide
aperture (top) which in use is attached to a waveguide conducting the microwaves to the radar
antenna. Modern tubes use rare-earth magnets, electromagnets or ferrite magnets which are
much less bulky.

The cavity magnetron is a high-power vacuum tube used in early radar systems and
subsequently in microwave ovens and in linear particle accelerators. A cavity
magnetron generates microwaves using the interaction of a stream of electrons with a
magnetic field, while moving past a series of cavity resonators, which are small, open
cavities in a metal block. Electrons pass by the cavities and cause microwaves to
oscillate within, similar to the functioning of a whistle producing a tone when excited by
an air stream blown past its opening. The resonant frequency of the arrangement is
determined by the cavities' physical dimensions. Unlike other vacuum tubes, such as a
klystron or a traveling-wave tube (TWT), the magnetron cannot function as an amplifier
for increasing the intensity of an applied microwave signal; the magnetron serves solely
as an electronic oscillator generating a microwave signal from direct current electricity
supplied to the vacuum tube.
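Each hole-and-slot cavity behaves, to a first approximation, as a lumped LC resonator whose frequency is set by its physical dimensions, f = 1/(2π√(LC)). The sketch below evaluates that formula for assumed inductance and capacitance values chosen to land near the 2.45 GHz of a microwave-oven magnetron; both values are illustrative assumptions, not figures from the text.

import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency of an ideal LC resonator: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Assumed lumped values for a single hole-and-slot cavity (illustrative only):
# the cylindrical hole acts as a one-turn inductor, the slot as a small capacitor.
L_CAVITY = 1.7e-9   # henries
C_CAVITY = 2.5e-12  # farads

print(resonant_frequency_hz(L_CAVITY, C_CAVITY) / 1e9, "GHz")  # ~2.4 GHz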
The use of magnetic fields as a means to control the flow of an electric current was
spurred by the invention of the Audion by Lee de Forest in 1906. Albert Hull of General
Electric Research Laboratory, USA, began development of magnetrons to avoid de
[1]
Forest's patents, but these were never completely successful. Other experimenters
picked up on Hull's work and a key advance, the use of two cathodes, was introduced
by Habann in Germany in 1924. Further research was limited until Okabe's 1929
Japanese paper noting the production of centimeter-wavelength signals, which led to
worldwide interest. The development of magnetrons with multiple anodes was
proposed by A. L. Samuel of Bell Telephone Laboratories in 1934, leading to designs by
Postumus in 1934 and Hans Hollmann in 1935. Production was taken up by Philips,
General Electric Company (GEC), Telefunken and others, limited to perhaps 10 W
output. By this time the klystron was producing more power and the magnetron was not
widely used, although a 300 W device was built by Aleksereff and Malearoff in the USSR
in 1936 (published in 1940).[1]

The cavity magnetron was a radical improvement introduced by John Randall and Harry
Boot at the University of Birmingham, England in 1940.[2]: 24–26 [3] Their first working
example produced hundreds of watts at 10 cm wavelength, an unprecedented
achievement.[4][5] Within weeks, engineers at GEC had improved this to well over a
kilowatt, and within months 25 kilowatts, over 100 kW by 1941 and pushing towards a
megawatt by 1943. The high power pulses were generated from a device the size of a
small book and transmitted from an antenna only centimeters long, reducing the size of
practical radar systems by orders of magnitude.[6] New radars appeared for night-
fighters, anti-submarine aircraft and even the smallest escort ships,[6] and from that
point on the Allies of World War II held a lead in radar that their counterparts in
Germany and Japan were never able to close. By the end of the war, practically every
Allied radar was based on the magnetron.

The magnetron continued to be used in radar in the post-war period but fell from favour
in the 1960s as high-power klystrons and traveling-wave tubes emerged. A key
characteristic of the magnetron is that its output signal changes from pulse to pulse,
both in frequency and phase. This renders it less suitable for pulse-to-pulse
comparisons for performing moving target indication and removing "clutter" from the
radar display.[7] The magnetron remains in use in some radar systems, but has become
much more common as a low-cost source for microwave ovens. In this form, over one
billion magnetrons are in use today.[7][8]

Construction and operation


Conventional tube design
In a conventional electron tube (vacuum tube), electrons are emitted from a negatively
charged, heated component called the cathode and are attracted to a positively charged
component called the anode. The components are normally arranged concentrically,
placed within a tubular-shaped container from which all air has been evacuated, so that
the electrons can move freely (hence the name "vacuum" tubes, called "valves" in
British English).

If a third electrode (called a control grid) is inserted between the cathode and the anode,
the flow of electrons between the cathode and anode can be regulated by varying the
voltage on this third electrode. This allows the resulting electron tube (called a "triode"
because it now has three electrodes) to function as an amplifier because small
variations in the electric charge applied to the control grid will result in identical
variations in the much larger current of electrons flowing between the cathode and
anode.[9]
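
As a rough, hedged illustration of this amplifying action (not taken from the article),
an idealized triode's anode current is commonly modeled with the three-halves-power
law, where K (the perveance), the amplification factor \mu, the grid voltage V_g and
the anode voltage V_a are the usual textbook symbols:

\[
I_a \approx K \left( V_g + \frac{V_a}{\mu} \right)^{3/2},
\qquad
g_m = \frac{\partial I_a}{\partial V_g}
\]

The transconductance g_m sets how large a change in anode current a small grid-voltage
swing produces, which is why a weak signal on the grid appears amplified in the anode
circuit.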

Hull or single-anode magnetron

The idea of using a grid for control was invented by Philipp Lenard, who received the
Nobel Prize for Physics in 1905. In the USA it was later patented by Lee de Forest,
resulting in considerable research into alternate tube designs that would avoid his
patents. One concept used a magnetic field instead of an electrical charge to control
current flow, leading to the development of the magnetron tube. In this design, the tube
was made with two electrodes, typically with the cathode in the form of a metal rod in
the center, and the anode as a cylinder around it. The tube was placed between the
poles of a horseshoe magnet[10][better source needed] arranged such that the magnetic
field was aligned parallel to the axis of the electrodes.

With no magnetic field present, the tube operates as a diode, with electrons flowing
directly from the cathode to the anode. In the presence of the magnetic field, the
electrons will experience a force at right angles to their direction of motion (the Lorentz
force). In this case, the electrons follow a curved path between the cathode and anode.
The curvature of the path can be controlled by varying either the magnetic field using an
electromagnet, or by changing the electrical potential between the electrodes.

At very high magnetic field settings the electrons are forced back onto the cathode,
preventing current flow. At the opposite extreme, with no field, the electrons are free to
flow straight from the cathode to the anode. There is a point between the two extremes,
the critical value or Hull cut-off magnetic field (and cut-off voltage), where the electrons
just reach the anode. At fields around this point, the device operates similarly to a triode.
However, magnetic control, due to hysteresis and other effects, results in a slower and
less faithful response to control current than electrostatic control using a control grid in a
conventional triode (not to mention greater weight and complexity), so magnetrons saw
limited use in conventional electronic designs.
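
The cut-off condition itself follows from elementary mechanics; the expressions below
are a standard textbook result rather than something stated in the article. For a
cylindrical tube with cathode radius r_c, anode radius r_a, anode voltage V and axial
field B, conserving the energy and the canonical angular momentum of an electron that
leaves the cathode at rest gives the Hull cut-off values

\[
V_c = \frac{e B^2 r_a^2}{8 m_e}\left(1 - \frac{r_c^2}{r_a^2}\right)^{2},
\qquad
B_c = \frac{\sqrt{8\, m_e V / e}}{r_a \left(1 - r_c^2 / r_a^2\right)} .
\]

For B > B_c (equivalently V < V_c) the electrons curl back before reaching the anode;
for weaker fields they arrive at the anode and ordinary diode current flows.
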
It was noticed that when the magnetron was operating at the critical value, it would emit
energy in the radio frequency spectrum. This occurs because a few of the electrons,
instead of reaching the anode, continue to circle in the space between the cathode and
the anode. Due to an effect now known as cyclotron radiation, these electrons radiate
radio frequency energy. The effect is not very efficient. Eventually the electrons hit one
of the electrodes, so the number in the circulating state at any given time is a small
percentage of the overall current. It was also noticed that the frequency of the radiation
depends on the size of the tube, and even early examples were built that produced
signals in the microwave regime.
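
For orientation (a standard physics relation, not a figure from the article), the
radiation from an electron circulating in a magnetic field B is centred on the
cyclotron frequency

\[
f_c = \frac{eB}{2\pi m_e} \approx 28\ \mathrm{GHz\ per\ tesla},
\]

so fields of a few tenths of a tesla already put the emission at gigahertz frequencies,
i.e. centimetre wavelengths.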

Early conventional tube systems were limited to the high frequency bands, and although
very high frequency systems became widely available in the late 1930s, the ultra high
frequency and microwave bands were well beyond the ability of conventional circuits.
The magnetron was one of the few devices able to generate signals in the microwave
band and it was the only one that was able to produce high power at centimeter
wavelengths.

Split-anode magnetron

Split-anode magnetron (c. 1935). (left) The bare tube, about 11 cm high. (right) Installed for use
between the poles of a strong permanent magnet

The original magnetron was very difficult to keep operating at the critical value, and
even then the number of electrons in the circling state at any time was fairly low. This
meant that it produced very low-power signals. Nevertheless, as one of the few devices
known to create microwaves, interest in the device and potential improvements was
widespread.

The first major improvement was the split-anode magnetron, also known as a
negative-resistance magnetron. As the name implies, this design used an anode that
was split in two—one at each end of the tube—creating two half-cylinders. When both
were charged to the same voltage the system worked like the original model. But by
slightly altering the voltage of the two plates, the electrons' trajectory could be modified
so that they would naturally travel towards the lower voltage side. The plates were
connected to an oscillator that reversed the relative voltage of the two plates at a given
frequency.[10]

At any given instant, the electron will naturally be pushed towards the lower-voltage side
of the tube. The electron will then oscillate back and forth as the voltage changes. At the
same time, a strong magnetic field is applied, stronger than the critical value in the
original design. This would normally cause the electron to circle back to the cathode, but
due to the oscillating electrical field, the electron instead follows a looping path that
continues toward the anodes.[10]

Since all of the electrons in the flow experienced this looping motion, the amount of RF
energy being radiated was greatly improved. And as the motion occurred at any field
level beyond the critical value, it was no longer necessary to carefully tune the fields
and voltages, and the overall stability of the device was greatly improved. Unfortunately,
the higher field also meant that electrons often circled back to the cathode, depositing
their energy on it and causing it to heat up. As this normally causes more electrons to
be released, it could sometimes lead to a runaway effect, damaging the device.[10]

Cavity magnetron

The great advance in magnetron design was the resonant cavity magnetron or
electron-resonance magnetron, which works on entirely different principles. In this
design the oscillation is created by the physical shape of the anode, rather than external
circuits or fields.

A cross-sectional diagram of a resonant cavity magnetron. Magnetic lines of force are parallel to
the geometric axis of this structure.

Mechanically, the cavity magnetron consists of a large, solid cylinder of metal with a
hole drilled through the centre of the circular face. A wire acting as the cathode is run
down the center of this hole, and the metal block itself forms the anode. Around this
hole, known as the "interaction space", are a number of similar holes ("resonators")
drilled parallel to the interaction space, connected to the interaction space by a short
channel. The resulting block looks something like the cylinder on a revolver, with a
somewhat larger central hole.[11] Early models were cut using Colt pistol jigs.
Remembering that in an AC circuit the electrons travel along the surface, not the core,
of the conductor, the parallel sides of the slot act as a capacitor while the round holes
form an inductor: an LC circuit made of solid copper, with the resonant frequency
defined entirely by its dimensions.
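
A back-of-the-envelope estimate shows how the dimensions alone set the frequency. The
Python sketch below is not from the article: it treats the round hole as a one-turn
inductor and the slot as a parallel-plate capacitor, and every dimension in it is a
hypothetical, illustrative value, so the result is an order-of-magnitude figure only.

import math

MU0 = 4 * math.pi * 1e-7    # permeability of free space, H/m
EPS0 = 8.854e-12            # permittivity of free space, F/m

def hole_and_slot_frequency(hole_radius, slot_depth, slot_gap, anode_height):
    """Approximate resonant frequency (Hz) of one hole-and-slot cavity."""
    L = MU0 * math.pi * hole_radius**2 / anode_height   # one-turn loop inductance
    C = EPS0 * slot_depth * anode_height / slot_gap     # parallel-plate capacitance
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Hypothetical dimensions in metres: 4 mm hole radius, 3 mm slot depth,
# 1 mm slot gap, 12 mm anode height.
f = hole_and_slot_frequency(4e-3, 3e-3, 1e-3, 12e-3)
print(f"approximate cavity resonance: {f / 1e9:.1f} GHz")

With these made-up dimensions the estimate comes out at a few gigahertz, i.e.
centimetre wavelengths, which is the point: nothing but the machined geometry of the
block fixes the operating frequency.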

The magnetic field is set to a value well below the critical, so the electrons follow arcing
paths towards the anode. When they strike the anode, they cause it to become
negatively charged in that region. As this process is random, some areas will become
more or less charged than the areas around them. The anode is constructed of a highly
conductive material, almost always copper, so these differences in voltage cause
currents to appear to even them out. Since the current has to flow around the outside of
the cavity, this process takes time. During that time additional electrons will avoid the
hot spots and be deposited further along the anode, as the additional current flowing
around it arrives there as well. This causes an oscillating current to form as the current tries to
equalize one spot, then another.[12]

The oscillating currents flowing around the cavities, and their effect on the electron flow
within the tube, cause large amounts of microwave radiofrequency energy to be
generated in the cavities. The cavities are open on one end, so the entire mechanism
forms a single, larger, microwave oscillator. A "tap", normally a wire formed into a loop,
extracts microwave energy from one of the cavities. In some systems the tap wire is
replaced by an open hole, which allows the microwaves to flow into a waveguide.

As the oscillation takes some time to set up, and is inherently random at the start,
subsequent startups will have different output parameters. Phase is almost never
preserved, which makes the magnetron difficult to use in phased array systems.
Frequency also drifts from pulse to pulse, a more difficult problem for a wider array of
radar systems. Neither of these presents a problem for continuous-wave radars or for
microwave ovens.
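
The pulse-to-pulse randomness can be made concrete with a small numerical sketch (not
from the article): a two-pulse moving-target-indication canceller subtracts successive
echoes so that stationary clutter disappears, but that only works if the transmitted
phase repeats from pulse to pulse.

import cmath
import random

random.seed(0)

def echo(tx_phase):
    # Echo from stationary clutter: unit amplitude, phase equal to the
    # transmitted phase (range delay and noise omitted for simplicity).
    return cmath.exp(1j * tx_phase)

# Phase-coherent transmitter: successive echoes are identical, so they cancel.
coherent_residue = abs(echo(0.0) - echo(0.0))

# Free-running magnetron: each pulse starts oscillating with an unrelated phase,
# so the stationary clutter no longer cancels.
p1 = random.uniform(0.0, 2.0 * cmath.pi)
p2 = random.uniform(0.0, 2.0 * cmath.pi)
magnetron_residue = abs(echo(p2) - echo(p1))

print(f"coherent clutter residue:  {coherent_residue:.3f}")
print(f"magnetron clutter residue: {magnetron_residue:.3f}")

One common workaround, not covered in this excerpt, is to sample each transmitted pulse
and phase-lock the receiver's reference oscillator to it, restoring enough coherence
for clutter cancellation.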

Common features

Cutaway drawing of a cavity magnetron of 1984. Part of the right-hand magnet and copper anode
block is cut away to show the cathode and cavities. This older magnetron uses two
horseshoe-shaped alnico magnets; modern tubes use rare-earth magnets.

All cavity magnetrons consist of a heated cylindrical cathode at a high (continuous or
pulsed) negative potential created by a high-voltage, direct-current power supply. The
cathode is placed in the center of an evacuated, lobed, circular metal chamber. The
walls of the chamber are the anode of the tube. A magnetic field parallel to the axis of
the cavity is imposed by a permanent magnet. The electrons initially move radially
outward from the cathode attracted by the electric field of the anode walls. The
magnetic field causes the electrons to spiral outward in a circular path, a consequence
of the Lorentz force. Spaced around the rim of the chamber are cylindrical cavities.
Slots are cut along the length of the cavities that open into the central, common cavity
space. As electrons sweep past these slots, they induce a high-frequency radio field in
each resonant cavity, which in turn causes the electrons to bunch into groups. A portion
of the radio frequency energy is extracted by a short coupling loop that is connected to
a waveguide (a metal tube, usually of rectangular cross section). The waveguide directs
the extracted RF energy to the load, which may be a cooking chamber in a microwave
oven or a high-gain antenna in the case of radar.
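
The curved electron paths described above are easy to reproduce numerically. The
following Python sketch is not from the article: it integrates the Lorentz force for a
single electron in an assumed uniform radial electric field and axial magnetic field
(real tubes have a non-uniform field and, crucially, the RF field of the cavities, both
ignored here).

import math

Q = -1.602e-19   # electron charge, C
M = 9.109e-31    # electron mass, kg
B = 0.1          # axial magnetic field, T (assumed)
E0 = 1.0e5       # magnitude of the radial electric field, V/m (assumed uniform)
DT = 1.0e-13     # integration time step, s

def simulate(steps=20000):
    # Start just outside the cathode (at the origin) with zero velocity.
    x, y, vx, vy = 1.0e-4, 0.0, 0.0, 0.0
    for _ in range(steps):
        r = math.hypot(x, y)
        ex, ey = -E0 * x / r, -E0 * y / r       # E points inward, toward the cathode
        fx = Q * (ex + vy * B)                  # F = q (E + v x B), with B along +z
        fy = Q * (ey - vx * B)
        vx += fx / M * DT
        vy += fy / M * DT
        x += vx * DT
        y += vy * DT
    return x, y

x, y = simulate()
print(f"radius after {20000 * DT * 1e9:.1f} ns: {math.hypot(x, y) * 1e3:.2f} mm")

With these assumed values the electron never escapes the neighbourhood of the cathode;
it loops azimuthally around it, staying within a fraction of a millimetre. In an
operating magnetron it is the additional RF field of the cavities that lets favourably
phased electrons give up energy and migrate outward to the anode, which is the bunching
behaviour described in the paragraph above.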

The sizes of the cavities determine the resonant frequency, and thereby the frequency
of the emitted microwaves. However, the frequency is not precisely controllable. The
operating frequency varies with changes in load impedance, with changes in the supply
current, and with the temperature of the tube.
