Avionics Systems and Architecture

This document provides an overview of avionics systems used in aircraft. It discusses the need for avionics in both civil and military aircraft and spacecraft to improve flight control, navigation, safety and efficiency. The document outlines some key avionics subsystems and technologies including integrated digital avionics architecture, flight decks and cockpits, navigation systems, air data systems, and autopilots. It provides background on the history and modern development of avionics and describes their importance in next generation air transportation initiatives.


AE8751 AVIONICS    L T P C: 3 0 0 3
OBJECTIVES:
To introduce the basics of avionics and the need for avionics in civil and military aircraft
To impart knowledge of avionics architectures and the various avionics data buses
To gain deeper knowledge of the various avionics subsystems

UNIT I INTRODUCTION TO AVIONICS 9


Need for avionics in civil and military aircraft and space systems – integrated avionics and
weapon systems – typical avionics subsystems, design, technologies – Introduction to digital
computer and memories.

UNIT II DIGITAL AVIONICS ARCHITECTURE 9


Avionics system architecture – data buses – MIL-STD-1553B – ARINC 429 – ARINC 629.

UNIT III FLIGHT DECKS AND COCKPITS 9


Control and display technologies: CRT, LED, LCD, EL and plasma panel – Touch screen –
Direct voice input (DVI) – Civil and Military Cockpits: MFDS, HUD, MFK, HOTAS.

UNIT IV INTRODUCTION TO NAVIGATION SYSTEMS 9


Radio navigation – ADF, DME, VOR, LORAN, DECCA, OMEGA, ILS, MLS – Inertial
Navigation Systems (INS) – Inertial sensors, INS block diagram – Satellite navigation
systems – GPS.

UNIT V AIR DATA SYSTEMS AND AUTO PILOT 9


Air data quantities – Altitude, Air speed, Vertical speed, Mach Number, Total air
temperature, Mach warning, Altitude warning – Auto pilot – Basic principles, Longitudinal
and lateral auto pilot.

TOTAL: 45 PERIODS

OUTCOMES:
Ability to build digital avionics architectures
Ability to design navigation systems
Ability to design and perform analysis on air data systems

TEXT BOOKS:
1. Helfrick, A.D., "Principles of Avionics", Avionics Communications Inc., 2004.
2. Collinson, R.P.G., "Introduction to Avionics", Chapman and Hall, 1996.

REFERENCES:
1. Middleton, D.H. (Ed.), "Avionics Systems", Longman Scientific and Technical, Longman Group UK Ltd., England, 1989.
2. Spitzer, C.R., "Digital Avionics Systems", Prentice-Hall, Englewood Cliffs, N.J., U.S.A., 1993.
3. Spitzer, C.R., "The Avionics Handbook", CRC Press, 2000.
4. Pallett, E.H.J., "Aircraft Instruments and Integrated Systems", Longman Scientific and Technical.

UNIT I INTRODUCTION TO AVIONICS
Avionics are the electronic systems used on aircraft, artificial satellites,
and spacecraft. Avionic systems include communications, navigation, the display and
management of multiple systems, and the hundreds of systems that are fitted to aircraft to
perform individual functions. These can be as simple as a searchlight for a police
helicopter or as complicated as the tactical system for an airborne early warning platform.
The term avionics is a portmanteau of the words aviation and electronics.

(Figure: Radar and other avionics in a Cessna Citation)

(Figure: An F-105 Thunderchief with its avionics laid out)

History

(Figure: Roughly 20 percent of the cost of the F-15E is in its avionics)


The term avionics was coined by the journalist Philip J. Klass as a portmanteau of "aviation electronics". Many modern avionics have their origins in World War II developments. For example, the autopilot systems that are commonplace today were first developed to help bombers fly steadily enough to hit precision targets from high altitudes, and radar was famously developed in the UK, Germany, and the United States during the same period. Avionics now accounts for a substantial portion of military aircraft spending: aircraft such as the F-15E and the now-retired F-14 have roughly 20 percent of their budget spent on avionics, and most modern helicopters have budget splits of 60/40 in favour of avionics.
The civilian market has also seen growth in the cost of avionics. Flight control systems (fly-by-wire) and the new navigation needs brought on by tighter airspace have pushed up development costs. The major change has been the recent boom in consumer flying: as more people use aircraft as their primary method of transportation, more elaborate methods of controlling aircraft safely in these highly restricted airspaces have been developed.
Modern avionics
Avionics plays a heavy role in modernization initiatives like the Federal Aviation
Administration's (FAA) Next Generation Air Transportation System project in the United
States and the Single European Sky ATM Research (SESAR) initiative in Europe. The Joint
Planning and Development Office put forth a roadmap for avionics in six areas:

 Published Routes and Procedures – Improved navigation and routing
 Negotiated Trajectories – Adding data communications to create preferred routes dynamically
 Delegated Separation – Enhanced situational awareness in the air and on the ground
 Low Visibility/Ceiling Approach/Departure – Allowing operations under weather constraints with less ground infrastructure
 Surface Operations – Increased safety in approach and departure
 ATM Efficiencies – Improving the ATM process

Founded in 1957, the Aircraft Electronics Association (AEA) represents more than 1,300
member companies, including government-certified international repair stations specializing
in maintenance, repair and installation of avionics and electronic systems in general aviation
aircraft. The AEA membership also includes manufacturers of avionics equipment,
instrument repair facilities, instrument manufacturers, airframe manufacturers, test equipment
manufacturers, major distributors, engineers and educational institutions.

Aircraft avionics
The cockpit of an aircraft is a typical location for avionic equipment, including
control, monitoring, communication, navigation, weather, and anti-collision systems. The
majority of aircraft power their avionics using 14- or 28-volt DC electrical systems; however,
larger, more sophisticated aircraft (such as airliners or military combat aircraft)
have AC systems operating at 400 Hz, 115 volts AC. There are several major vendors of
flight avionics, including Panasonic Avionics Corporation, Honeywell (which now
owns Bendix/King), Rockwell Collins, Thales Group, GE Aviation Systems, Garmin, Parker
Hannifin, UTC Aerospace Systems and Avidyne Corporation.
Need for Avionics in civil and military aircraft and space systems:

Avionics (a combination of "aviation" and "electronics") are the advanced electronics used in aircraft, spacecraft and satellites. These systems perform various functions, including communication, navigation, flight control, display systems, and flight management. There is a great need for advanced avionics in civil, military and space systems.

Civil aircraft
 For better flight control, performing computations and increased control over flight
control surfaces.
 For navigation, provide information using sensors like the Attitude and Heading Reference System (AHRS).
 Provide air data like altitude, atmospheric pressure, temperature, etc.
 Reduce crew workload.
 Increased safety for crew and passengers.
 Reduction in aircraft weight, which can be translated into an increased number of passengers or longer range.
 All weather operation and reduction in aircraft maintenance cost.

Military aircraft
Avionics in fighter aircraft eliminates the need for a second crew member such as a navigator or observer, which helps reduce training costs.
 A single seat fighter is lighter and costs less than an equivalent two seat version.
 Improved aircraft performance, control and handling.
 Reduction in maintenance cost.
 Secure communication.

Space systems
 Fly-by-wire control systems used for the space vehicle's attitude and translation control.
 Sensors used in the spacecraft for obtaining data.
 Autopilot redundancy system.
 On-board computers used in satellites for processing the data.

Integrated Avionics system:

An integrated avionics system gives the pilot all coordinated information from a single source; it gives the software engineer full access to shared data about the situation, mission, and systems; and it allows the hardware designer to build the systems as a single unit with ample bandwidth to support processing.

The basic integrated avionics system is divided into the following subsystems:

 Navigation system
 Communication system
 Electronic Warfare
 Flight Control system
 Primary Flight Displays

Navigation System:
Navigation is the determination of position and direction on or above the surface of the Earth. Avionics can use satellite-based systems (such as GPS and WAAS), ground-based systems (such as VOR or LORAN), or any combination thereof. Older avionics required a pilot or navigator to plot the intersection of signals on a paper map to determine an aircraft's location; modern systems calculate the position automatically and display it to the flight crew on moving map displays. Navigation information such as aircraft position, ground speed and track angle (the direction of the aircraft relative to true north) is clearly essential for the aircraft mission, whether civil or military. Navigation systems are divided into two types:
1. Dead reckoning navigation systems (DR)
2. Radio Navigation systems

Dead reckoning systems:


A dead reckoning system derives the vehicle's present position by estimating the distance travelled from a known position, using knowledge of the speed and direction of the vehicle. Three forms are used in aircraft: inertial navigation systems (the most accurate and widely used), Doppler/heading reference systems (widely used in helicopters), and air-data-based dead reckoning (less accurate than the other two).
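The dead reckoning update described above can be sketched numerically. The following is a minimal sketch assuming a flat-earth, small-step approximation with constant speed and heading over each step; the function name and parameters are illustrative, not taken from any avionics standard:

```python
import math

def dr_update(lat_deg, lon_deg, speed_mps, heading_deg, dt_s,
              earth_radius_m=6371000.0):
    """Advance a position estimate by dead reckoning over one time step.

    Distance travelled is speed * dt, resolved into north and east
    components by the heading (measured clockwise from true north),
    then converted to latitude/longitude increments.
    """
    d = speed_mps * dt_s                       # distance covered this step
    hdg = math.radians(heading_deg)
    d_north = d * math.cos(hdg)                # northward component
    d_east = d * math.sin(hdg)                 # eastward component
    lat = lat_deg + math.degrees(d_north / earth_radius_m)
    lon = lon_deg + math.degrees(
        d_east / (earth_radius_m * math.cos(math.radians(lat_deg))))
    return lat, lon

# Fly due north at 100 m/s for 60 s starting from the equator:
lat, lon = dr_update(0.0, 0.0, 100.0, 0.0, 60.0)
```

An inertial navigation system performs essentially this integration continuously, using accelerometer- and gyro-derived velocity and heading instead of externally supplied values.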

Radio navigation systems:

Examples include ADF, VOR, DME, OMEGA, GPS, ILS, and TACAN.

Communication System:
Communications System connects the flight deck to the ground and the flight deck to
the passengers. On-board communications are provided by public-address systems and
aircraft intercoms. The VHF aviation communication system works on the airband of
118.000 MHz to 136.975 MHz. Each channel is spaced from the adjacent ones by 8.33 kHz
in Europe, 25 kHz elsewhere. VHF is also used for line of sight communication such as
aircraft-to-aircraft and aircraft-to-ATC. Amplitude modulation (AM) is used, and the
conversation is performed in simplex mode. Aircraft communication can also take place
using HF (especially for trans-oceanic flights) or satellite communication.
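The channel raster can be illustrated with a short computation. This is a sketch assuming channels sit on a regular grid starting at 118.000 MHz; the helper name is made up for illustration:

```python
def channel_frequencies_mhz(spacing_khz, start_mhz=118.0, end_mhz=136.975):
    """List VHF airband channel centre frequencies for a given spacing.

    The airband runs 118.000-136.975 MHz; 25 kHz spacing applies in most
    of the world, 8.33 kHz in much of European airspace.
    """
    n = int(round((end_mhz - start_mhz) * 1000.0 / spacing_khz)) + 1
    return [round(start_mhz + i * spacing_khz / 1000.0, 5) for i in range(n)]

channels = channel_frequencies_mhz(25)
# (136.975 - 118.000) / 0.025 + 1 = 760 channels at 25 kHz spacing
```

The tighter 8.33 kHz spacing was introduced to relieve channel congestion; it roughly triples the number of channels available in the same band.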

Electronic Warfare:
Electronic warfare (EW) is any action involving the use of the electromagnetic
spectrum or directed energy to control the spectrum, attack an enemy, or impede enemy
assaults via the spectrum. The purpose of electronic warfare is to deny the opponent the
advantage of, and ensure friendly unimpeded access to, the EM spectrum. EW can be applied
from air, sea, land, and space by manned and unmanned systems, and can target humans,
communications, radar, or other assets.

The electromagnetic environment


Military operations are executed in an information environment increasingly
complicated by the electromagnetic (EM) spectrum. The electromagnetic spectrum portion of
the information environment is referred to as the electromagnetic environment (EME). The
recognized need for military forces to have unimpeded access to and use of the
electromagnetic environment creates vulnerabilities and opportunities for electronic warfare
(EW) in support of military operations. Within the information operations construct, EW is
an element of information warfare; more specifically, it is an element of offensive and
defensive counter information.
Electronic warfare applications
Electronic warfare is any military action involving the use of the EM spectrum to include
directed energy (DE) to control the EM spectrum or to attack an enemy. This is not limited to
radio or radar frequencies but includes IR, visible, ultraviolet, and other less used portions of
the EM spectrum. This includes self-protection, standoff, escort jamming, and antiradiation
attacks. EW is a specialized tool that enhances many air and space functions at multiple
levels of conflict.
The purpose of EW is to deny the opponent an advantage in the EM spectrum and ensure
friendly unimpeded access to the EM spectrum portion of the information environment. EW
can be applied from air, sea, land, and space by manned and unmanned systems. EW is
employed to support military operations involving various levels of detection, denial,
deception, disruption, degradation, protection, and destruction.
EW contributes to the success of information operations (IO) by using offensive and
defensive tactics and techniques in a variety of combinations to shape, disrupt, and exploit
adversarial use of the EM spectrum while protecting friendly freedom of action in that
spectrum. Expanding reliance on the EM spectrum increases both the potential and the
challenges of EW in information operations. The entire core, supporting, and related
information operations capabilities either directly use EW or indirectly benefit from EW.
The principal EW activities have been developed over time to exploit the opportunities and
vulnerabilities that are inherent in the physics of EM energy. Activities used in EW include:
electro-optical, infrared and radio frequency countermeasures; EM compatibility and
deception; communications jamming, radar jamming and anti-jamming; electronic masking,
probing, reconnaissance, and intelligence; electronics security; EW reprogramming; emission
control; spectrum management; and wartime reserve modes.

Subdivisions
(Figure: RAF Menwith Hill, a large ECHELON site in the United Kingdom, part of the UK-USA Security Agreement)
Electronic warfare includes three major subdivisions: electronic attack (EA), electronic
protection (EP), and electronic warfare support (ES).
Electronic attack (EA)
Electronic attack (EA) involves the use of EM energy, directed energy, or anti-radiation
weapons to attack personnel, facilities, or equipment with the intent of degrading,
neutralizing, or destroying enemy combat capability. In the case of EM energy, this action is
referred to as jamming and can be performed on communications systems (see Radio
jamming) or radar systems (see Radar jamming and deception).
Electronic Protection (EP)

(Figure: A right front view of a USAF Boeing E-4 advanced airborne command post (AABNCP) on the electromagnetic pulse (EMP) simulator (HAGII-C) for testing)
Electronic Protection (EP) (previously known as electronic protective measures (EPM) or
electronic counter countermeasures (ECCM)) involves actions taken to protect personnel,
facilities, and equipment from any effects of friendly or enemy use of the electromagnetic
spectrum that degrade, neutralize, or destroy friendly combat capability. Jamming is not part of EP; it is an EA measure.
The use of flare rejection logic on an Infrared homing missile to counter an adversary’s use of
flares is EP. While defensive EA actions and EP both protect personnel, facilities,
capabilities, and equipment, EP protects from the effects of EA (friendly and/or adversary).
Other examples of EP include spread spectrum technologies, use of Joint Restricted
Frequency List (JRFL), emissions control (EMCON), and low observability or "stealth".
Electronic Warfare Self Protection (EWSP) is a suite of countermeasure systems fitted primarily to aircraft to protect them from weapons fire. It can include, among others: DIRCM and infrared countermeasures (protect against IR missiles), chaff and DRFM decoys (protect against radar-guided missiles), and flares (protect against IR missiles).
An Electronic Warfare Tactics Range (EWTR) is a practice range that provides training for aircrew in electronic warfare. There are two such ranges in Europe: one at RAF Spadeadam in the United Kingdom and the POLYGON range in Germany and France. EWTRs are equipped with ground-based equipment to simulate electronic warfare threats that aircrew might encounter on missions.
Antifragile EW is a step beyond standard EP, occurring when a communications link being jammed actually increases in capability as a result of the jamming attack, although this is only possible under certain circumstances, such as reactive forms of jamming.
Electronic warfare support (ES)
Electronic Warfare Support (ES), is the subdivision of EW involving actions tasked by, or
under direct control of, an operational commander to search for, intercept, identify, and locate
or localize sources of intentional and unintentional radiated electromagnetic (EM) energy for
the purpose of immediate threat recognition, targeting, planning, and conduct of future
operations. These measures begin with systems designed, and operators trained, to make electronic intercepts (ELINT); classification and analysis of such detections, broadly known as signals intelligence, then returns information and perhaps actionable intelligence (e.g. a ship's identification from the unique characteristics of a specific radar) to the commander. The overlapping discipline, signals intelligence (SIGINT), is the related process of analyzing and identifying intercepted frequencies (e.g. as a mobile phone or radar). SIGINT is broken into three categories: ELINT, COMINT, and FISINT. The parameters of an intercepted transmission from communication equipment include its frequency, bandwidth, modulation, polarisation, etc. The
distinction between intelligence and electronic warfare support (ES) is determined by who
tasks or controls the collection assets, what they are tasked to provide, and for what purpose
they are tasked. Electronic warfare support is achieved by assets tasked or controlled by an
operational commander. The purpose of ES tasking is immediate threat recognition, targeting,
planning and conduct of future operations, and other tactical actions such as threat avoidance
and homing. However, the same assets and resources that are tasked with ES can
simultaneously collect intelligence that meets other collection requirements.
Where these activities are under the control of an operational commander and being applied
for the purpose of situational awareness, threat recognition, or EM targeting, they also serve
the purpose of Electronic Warfare surveillance (ES).
Flight control system

(Figure: A typical aircraft's primary flight controls in motion)


A conventional fixed-wing aircraft flight control system consists of flight control surfaces,
the respective cockpit controls, connecting linkages and the necessary operating mechanisms
to control an aircraft's direction in flight. Aircraft engine controls are also considered flight controls, as they change speed.

Cockpit controls
Primary controls
Generally, the primary cockpit flight controls are arranged as follows:

 a control yoke (also known as a control column), centre stick or side-stick (the latter two also colloquially known as a control stick or joystick), governs the aircraft's roll and pitch by moving the ailerons (or activating wing warping on some very early aircraft designs) when turned or deflected left and right, and moves the elevators when moved backwards or forwards
 Rudder pedals, or the earlier, pre-1919 "rudder bar", to control yaw, which move
the rudder; left foot forward will move the rudder left for instance.
 Throttle controls to control engine speed or thrust for powered aircraft.

The control yokes also vary greatly amongst aircraft. There are yokes where roll is controlled
by rotating the yoke clockwise/counter clockwise (like steering a car) and pitch is controlled
by tilting the control column towards you or away from you, but in others the pitch is
controlled by sliding the yoke into and out of the instrument panel (like most Cessnas, such
as the 152 and 172), and in some the roll is controlled by sliding the whole yoke to the left
and right (like the Cessna 162). Centre sticks also vary between aircraft. Some are directly
connected to the control surfaces using cables, others (fly-by-wire airplanes) have a computer
in between which then controls the electrical actuators.
Even when an aircraft uses variant flight control surfaces such as a V-tail
ruddervator, flaperons, or elevons, to avoid pilot confusion the aircraft's flight control system
will still be designed so that the stick or yoke controls pitch and roll conventionally, as will
the rudder pedals for yaw. In some aircraft, the control surfaces are not manipulated with a
linkage. In ultralight aircraft and motorized hang gliders, for example, there is no mechanism
at all. Instead, the pilot just grabs the lifting surface by hand (using a rigid frame that hangs
from its underside) and moves it.
Secondary controls
In addition to the primary flight controls for roll, pitch, and yaw, there are often
secondary controls available to give the pilot finer control over flight or to ease the workload.
The most commonly available control is a wheel or other device to control elevator trim, so
that the pilot does not have to maintain constant backward or forward pressure to hold a
specific pitch attitude (other types of trim, for rudder and ailerons, are common on larger
aircraft but may also appear on smaller ones). Many aircraft have wing flaps, controlled by a
switch or a mechanical lever or in some cases are fully automatic by computer control, which
alter the shape of the wing for improved control at the slower speeds used for takeoff and
landing. Other secondary flight control systems may be available, including slats, spoilers, air
brakes and variable-sweep wings.
Mechanical FCS
The earliest ventures into manned flight were constrained to either tethered balloon rides or
short hops in a glider. The necessity of formal flight control systems was not realized until it
was demonstrated that extended flight times over substantial distances were feasible. The
experiments of Otto Lilienthal in the 1880s through 1890s showed that limited flight control
was possible through a process of weight shifting. Lilienthal discovered that by simply
changing the position of his body relative to the aircraft’s centre of gravity he could affect its
motion in any direction, much like hang gliders do today. Building on the discoveries of
Lilienthal, in the mid 1890s Octave Chanute began development of what we would now call a
mechanical flight control system for use on gliders of his own design. He worked closely
with the Wright brothers and was largely responsible for many of the control systems
innovations present on their 1903 Flyer. Mechanical systems are characterized by a physical
linkage between the pilot and control surfaces, as shown in Figure 1.
The pilot’s control inputs are transferred to the control surfaces via a series of cables
and/or pushrods. This type of FCS proved to be very effective in lightweight and relatively
slow moving aircraft because they were inexpensive to build, simple to maintain, and
provided the best control surface feedback of any FCS. However, mechanical systems tend to
be very sensitive to temperature and are prone to accelerated wear compared to the alternative
methods discussed below. Also, as designers began to build bigger and faster aircraft, they
discovered that the increased aerodynamic forces incident on the control surfaces were
simply too great for pilots to counter. Engineers had to develop a system to augment the
pilot’s commands.

Hydraulic FCS
The first such augmented control system, known as a boosted FCS, appeared in WWII era
aircraft. As Figure 2 depicts, the boosted system retained the physical coupling between the
cockpit and control surfaces with the addition of hydraulic spool valves and ram cylinders
tied in parallel into the input controls and actuators.
Hydraulic flight control systems such as the one shown in Figure were introduced to
cater to aircraft pushing the limits of control surface loading. As the name implies, the
hydraulic system relies primarily on a series of lines and hoses feeding hydraulic actuators
from a pump assembly and fluid reservoir.
In this form of FCS, the pilot is no longer physically connected to the control
surfaces. Rather, the pilot modulates the fluid pressure within the lines via a spool valve
connected to the control yoke or stick. This system has the advantage of being able to
generate massive forces to affect the attitude of any aircraft regardless of size or speed.
Hydraulic systems also afford designers greater flexibility in line routing and actuator
placement because there are no requirements for a “line of sight” coupling between the
cockpit and control surfaces.
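The "massive forces" mentioned above follow directly from pressure acting on piston area (F = P × A). A rough illustration with representative numbers; the 3000 psi system pressure and 50 cm² piston are assumptions for the example, not figures from any particular aircraft:

```python
def actuator_force_n(pressure_pa, piston_area_m2):
    """Force delivered by a hydraulic ram: F = P * A (newtons)."""
    return pressure_pa * piston_area_m2

# Transport-category hydraulic systems commonly run near 3000 psi,
# about 20.7 MPa. A 50 cm^2 (0.005 m^2) piston then delivers:
force = actuator_force_n(20.7e6, 50e-4)   # about 103.5 kN
```

A force of this magnitude, roughly ten tonnes-force from a modest piston, is what lets the system hold a control surface against aerodynamic loads no pilot could counter by muscle alone.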
(Figure: Hydraulic control system)

Digital Flight Control Systems


Digital flight control systems are made possible by replacing the traditional
mechanical or hydraulic linkages between the pilot’s controls and aircraft control surfaces
with electrical signal connections, a technology known as fly-by-wire flight control. In the
realm of commercial aircraft, digital fly-by-wire FCSs were pioneered by Airbus and debuted
in 1984 with the A320 passenger jet. Each subsequent Airbus model has featured fly-by-wire
control, and the current A380 jetliner is no exception. In 1990, Boeing responded with the
777 jetliner featuring its own digital FCS, and the 787, Boeing’s next commercial aircraft,
will use a similar system. With both of the major commercial aircraft manufacturers
committed to fly-by-wire flight control for the foreseeable future, a careful examination of
the merits and drawbacks of this technology is warranted.
In a fly-by-wire FCS, pilot commands are relayed to control surfaces by electrical
signals. As the pilot deflects the control column or moves the rudder pedals, his commands
are translated into digital signals that pass through wiring to actuators affixed to each control
surface. The actuators then deflect the appropriate control surfaces to execute the pilot’s
commands according to the signals they received. If tactile feedback is provided to the pilot,
sensors mounted on the exterior of the aircraft pass aerodynamic data such as airspeed back
to the flight deck, where dampers or servo motors attached to the control column then
simulate the aerodynamic forces the pilot would feel through a traditional FCS.
Fly-by-wire systems offer significant advantages over mechanical and hydraulic
FCSs. From the design perspective, the fact that fly-by-wire systems use copper wiring to
convey pilot commands to control surfaces means that engineers are free to route the wires
through the aircraft wherever they choose without increasing cost or degrading the
performance of the controls. For airlines, the reduced weight of a fly-by-wire FCS translates
into lower operational costs and higher profit margins. Digital flight control systems offer
additional design benefits. In a typical digital flight control system, control column
deflections are measured by analog sensors and then converted into digital signals by an
analog-to-digital converter. These digital signals represent the control objectives, which are
processed by a flight computer and converted into surface actuator signals. The actuator
signals travel through the wiring in the aircraft to the control surfaces, where they are
converted into analog signals that drive the surface actuators. Representing control signals
digitally allows designers to build simpler interfaces to the autopilot and the flight
management system. The autopilot and FMS are already digital, and digital representation of
control signals eliminates the need for separate control infrastructures for the pilot and
copilot. Digital flight control systems also facilitate the introduction of computer-based
technology to monitor pilot input to ensure that the aircraft does not stall or otherwise depart
from its flight envelope.
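The signal path described in this paragraph (analog stick sensor, analog-to-digital conversion, flight computer control law, actuator command) can be sketched in a few lines. This is a toy illustration: real control laws are far more elaborate, and the names, scaling and limits here are invented for the example:

```python
def adc(voltage, v_ref=5.0, bits=12):
    """Quantise a 0..v_ref analog sensor voltage to an n-bit code."""
    clamped = max(0.0, min(voltage, v_ref))
    return int(clamped / v_ref * (2**bits - 1))

def control_law(stick_code, bits=12, max_deflection_deg=25.0):
    """Map a digitised stick position to a commanded surface deflection.

    Mid-range code = neutral stick. The final clamp stands in for the
    envelope-protection monitoring described above.
    """
    half = (2**bits - 1) / 2
    demand = (stick_code - half) / half * max_deflection_deg
    return max(-max_deflection_deg, min(demand, max_deflection_deg))

# Full forward stick (5 V sensor output), digitised then converted
# into a surface command:
surface_cmd = control_law(adc(5.0))   # +25 degrees, the envelope limit
```

In a real fly-by-wire aircraft this loop runs many times per second on redundant flight computers, and the output drives the surface actuators described earlier.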
Fly-by-wire control systems
A fly-by-wire (FBW) system replaces manual flight control of an aircraft with an electronic
interface. The movements of flight controls are converted to electronic signals transmitted by
wires (hence the fly-by-wire term), and flight control computers determine how to move
the actuators at each control surface to provide the expected response. Commands from the
computers are also input without the pilot's knowledge to stabilize the aircraft and perform
other tasks. Electronics for aircraft flight control systems are part of the field known
as avionics. Fly-by-optics, also known as fly-by-light, is a further development using fiber
optic cables.
Primary flight display

(Figure: Example of a primary flight display)


A primary flight display or PFD is a modern aircraft instrument dedicated to flight information. Much like multi-function displays, primary flight displays are built around a liquid-crystal display or CRT. Representations of the older "six pack" or "steam gauge" instruments are combined on one compact display, simplifying pilot workflow and streamlining cockpit layouts.
Most airliners built since the 1980s, as well as many business jets and an increasing number of newer general aviation aircraft, have glass cockpits equipped with primary flight and multi-function displays.
Mechanical gauges have not been completely eliminated from the cockpit with the onset of
the PFD; they are retained for backup purposes in the event of total electrical failure.

Components
While the PFD does not directly use the pitot-static system to physically display flight data, it
still uses the system to make altitude, airspeed, vertical speed, and other measurements
precisely using air pressure and barometric readings. An air data computer analyzes the
information and displays it to the pilot in a readable format. A number of manufacturers
produce PFDs, varying slightly in appearance and functionality, but the information is
displayed to the pilot in a similar fashion.
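The air data computer's altitude computation is a concrete example of "measurements using air pressure and barometric readings". The sketch below uses the standard ISA troposphere model; the constants are the published ISA values, while the function name is illustrative:

```python
def pressure_altitude_m(static_pressure_pa,
                        p0=101325.0,    # ISA sea-level pressure, Pa
                        t0=288.15,      # ISA sea-level temperature, K
                        lapse=0.0065,   # temperature lapse rate, K/m
                        g=9.80665,      # gravitational acceleration, m/s^2
                        r=287.053):     # specific gas constant for air
    """Pressure altitude from static pressure via the ISA troposphere model.

    Inverts p = p0 * (1 - L*h/T0)^(g/(R*L)); valid up to about 11 km.
    """
    exponent = r * lapse / g
    return (t0 / lapse) * (1.0 - (static_pressure_pa / p0) ** exponent)

alt = pressure_altitude_m(101325.0)   # standard sea-level pressure -> 0 m
```

A real air data computer additionally applies the pilot-set barometric reference (QNH) rather than always assuming the ISA sea-level pressure, which is how the altimeter setting described later in this unit enters the computation.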
Layout
The details of the display layout on a primary flight display can vary enormously, depending
on the aircraft, the aircraft's manufacturer, the specific model of PFD, certain settings chosen
by the pilot, and various internal options that are selected by the aircraft's owner (i.e., an
airline, in the case of a large airliner). However, the great majority of PFDs follow a similar
layout convention.
The center of the PFD usually contains an attitude indicator (AI), which gives the pilot
information about the aircraft's pitch and roll characteristics, and the orientation of the
aircraft with respect to the horizon. Unlike a traditional attitude indicator, however, the
mechanical gyroscope is not contained within the panel itself, but is rather a separate device
whose information is simply displayed on the PFD. The attitude indicator is designed to look
very much like traditional mechanical AIs. Other information that may or may not appear on
or about the attitude indicator can include the stall angle, a runway diagram, ILS localizer and
glide-path “needles”, and so on. Unlike mechanical instruments, this information can be
dynamically updated as required; the stall angle, for example, can be adjusted in real time to
reflect the calculated critical angle of attack of the aircraft in its current configuration
(airspeed, etc.). The PFD may also show an indicator of the aircraft's future path (over the
next few seconds), as calculated by onboard computers, making it easier for pilots to
anticipate aircraft movements and reactions.
To the left and right of the attitude indicator are usually the airspeed and altitude indicators,
respectively. The airspeed indicator displays the speed of the aircraft in knots, while the
altitude indicator displays the aircraft's altitude above mean sea level (AMSL). These
measurements are conducted through the aircraft's pitot system, which tracks air pressure
measurements. As with the PFD's attitude indicator, these indicators merely display data
from the underlying sensor systems, and do not contain any mechanical parts (unlike a
traditional aircraft's airspeed indicator and altimeter). Both of these indicators are usually presented as
vertical “tapes”, which scroll up and down as altitude and airspeed change. Both indicators
may often have “bugs”, that is, indicators that show various important speeds and altitudes,
such as V speeds calculated by a flight management system, do-not-exceed speeds for the
current configuration, stall speeds, selected altitudes and airspeeds for the autopilot, and so
on.
The vertical speed indicator, usually next to the altitude indicator, indicates to the pilot how
fast the aircraft is ascending or descending, or the rate at which the altitude changes. This is
usually represented with numbers in "thousands of feet per minute." For example, a
measurement of "+2" indicates an ascent of 2000 feet per minute, while a measurement of "-
1.5" indicates a descent of 1500 feet per minute. There may also be a simulated needle
showing the general direction and magnitude of vertical movement.
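The "signed thousands of feet per minute" convention described above can be sketched in a few lines of Python; the function name `vsi_readout` is purely illustrative, not taken from any real avionics suite:

```python
def vsi_readout(feet_per_minute: float) -> str:
    """Format a vertical speed, in ft/min, the way a PFD vertical
    speed display abbreviates it: signed thousands of ft/min."""
    return f"{feet_per_minute / 1000:+g}"

# A 2000 ft/min climb reads "+2"; a 1500 ft/min descent reads "-1.5".
print(vsi_readout(2000))   # +2
print(vsi_readout(-1500))  # -1.5
```

The `+g` format keeps the explicit sign while dropping trailing zeros, matching the compact readout described in the text.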
At the bottom of the PFD is the heading display, which shows the pilot the magnetic heading
of the aircraft. This functions much like a standard magnetic heading indicator, turning as
required. Often this part of the display shows not only the current heading, but also the
current track (actual path over the ground), current heading setting on the autopilot, and other
indicators.
Other information displayed on the PFD includes navigational marker information, bugs (to
control the autopilot), ILS glideslope indicators, course deviation indicators, altitude
indicator QFE settings, and much more.
Although the layout of a PFD can be very complex, once a pilot is accustomed to it the PFD
can provide an enormous amount of information with a single glance.
Drawbacks
The great variability in the precise details of PFD layout makes it necessary for pilots to study
the specific PFD of the specific aircraft they will be flying in advance, so that they know
exactly how certain data is presented. While the basics of flight parameters tend to be much
the same in all PFDs (speed, attitude, altitude), much of the other useful information
presented on the display is shown in different formats on different PFDs. For example, one
PFD may show the current angle of attack as a tiny dial near the attitude indicator, while
another may actually superimpose this information on the attitude indicator itself. Since the
various graphic features of the PFD are not labelled, the pilot must learn what they all mean
in advance.
A failure of a PFD deprives the pilot of an extremely important source of information. While
backup instruments will still provide the most essential information, they may be spread over
several locations in the cockpit, which must be scanned by the pilot, whereas the PFD
presents all this information on one display. Additionally, some of the less important
information, such as speed and altitude bugs, stall angles, and the like, will simply disappear
if the PFD malfunctions; this may not endanger the flight, but it does increase pilot workload
and diminish situational awareness.
Typical avionics sub systems:
The main avionic sub systems have been grouped into four divisions:
1. Systems, which interface directly with the pilot
2. Aircraft state sensor systems
3. External world sensor systems
4. Task Automation systems

Systems, which interface directly with the pilot:

Displays: The display systems provide the visual interface between the pilot and the
aircraft systems and comprise head up display (HUD), Helmet mounted displays (HMD) and
head down display (HDD). Night viewing goggles can also be integrated into the HMD. This
provides the night vision capability enabling the aircraft to operate at night or in conditions of
poor visibility.

 Primary flight displays present information such as height, air speed, Mach number, vertical
speed, artificial horizon, pitch angle, bank angle, heading and velocity vector.
 Navigation displays show aircraft position and track relative to the destination or
waypoints, together with navigational information such as distance and time to go. They
also present weather radar display information.
 Engine data are presented so that the health of the engine can be monitored and any
deviations from the normal can be highlighted.
 The aircraft systems such as the electrical supply system, cabin pressurization and fuel
management can be shown easily in line diagram format on multifunction displays.

Communication: Communication radio systems provide reliable two-way communication
between ground bases and the aircraft.

Data entry and Control: Data entry and control systems are essential for the crew to
interact with the avionic systems. Such systems range from keyboards to touch panels.
Flight Control: Flight control systems use electronic system technology in two areas,
namely auto-stabilization systems and fly-by-wire (FBW) flight control systems. Most combat and
military aircraft require three-axis auto-stabilization (pitch, yaw and roll) systems to achieve
acceptable control and handling characteristics across the flight envelope. FBW flight control
enables a lighter, higher performance aircraft compared with an equivalent conventional
design, because the natural aerodynamic stability can be reduced or even made negative.

Aircraft state sensor systems:


These comprise the air data systems and the inertial sensor systems.

Air data systems: Information on air data quantities such as altitude, calibrated airspeed, vertical
speed, true airspeed, Mach number and airstream incidence angle is essential for the control
and navigation of the aircraft. The air data computing system calculates these quantities from
sensors measuring static pressure, total pressure, airstream incidence and outside air
temperature.

Inertial sensor systems: The attitude and heading information is provided by the inertial
sensor systems. These consist of a set of gyros and accelerometers, which measure the aircraft's
angular and linear motion about the aircraft axes, together with a computing system which
derives the aircraft's attitude and heading from the gyro and accelerometer outputs. These data are
utilized in the INS (Inertial Navigation System) to provide aircraft velocity vector information.
The system is essentially self-contained.
Weather Radar systems: Weather radar is used to detect water droplets and provide warning
of storms, cloud turbulence and severe precipitation so that an aircraft can alter and avoid
such-turbulence conditions. Otherwise, in severe turbulence, the violence of the vertical gusts
can subject the aircraft structure to very high loads and stresses. Modern fighter aircraft use
sophisticated multi-mode radars to fulfil a ground attack role as well as the prime interception
role. In the ground attack or mapping mode, the radar system is able to generate a map-type
display from the radar returns from the ground, enabling specific terrain features to be
identified for position fixing and target acquisition.

Task Automation Systems: The main purpose of these systems is to reduce the crew
workload by automating and managing as many tasks as appropriate, so that the crew role
becomes a supervisory management one. The types are summarized one by one:

1. Navigation Management Systems: These collect the data of all navigation systems, such as
GPS and INS, to provide the best possible estimate of position, ground speed and track.

2. Auto pilot and Flight Management Systems: Modern autopilot systems, in addition
to height hold and heading hold, can also provide very precise control of the aircraft flight
path, for example, automatic landing in poor or even zero visibility conditions. In military
applications, auto pilot system in conjunction with a suitable guidance system can provide
automatic terrain following or terrain avoidance. This enables the aircraft to fly at very low
altitudes (100 – 200 ft) so that the aircraft can take advantage of terrain screening and stay
below the radar horizon of enemy radars. The tasks of FMS are:

 Flight planning
 Navigation Management
 Engine control to maintain the planned speed
 Control of the aircraft path to follow the optimized planned route
 Control of the vertical flight profile
 Minimizing the fuel consumption

3. Engine Control Management: Modern jet engines have a Full Authority Digital Engine
Control System (FADEC). This automatically controls the flow of fuel to the engine
combustion chambers by the fuel control unit so as to provide a closed loop control of engine
thrust in response to the throttle command. This ensures that engine temperatures, speeds and
accelerations are kept within limits. It has a high-integrity, failure-survival control system so
that, in case of failure, damage to the engine is avoided.

4. House Keeping Management: The tasks are:

 Fuel Management
 Electrical power supply system management
 Hydraulic power supply system management
 Cabin/cockpit pressurization systems
 Warning systems
 Environmental control system
 Maintenance and monitoring systems

Design approaches:
Design objectives for Avionic Systems:

 Fulfilling the required performance


 Acceptable levels of availability & failure conditions
 Ease of use and maintenance
 Environmental requirements
 System safety

Systems must be designed with an inverse relationship between the probability of the
occurrence of a fault and the severity of its effect. The required availability of a
function can be achieved by the provision of multiple systems and standby services, together
with the capability of detecting failures. The architecture of a system must be designed to
ensure adequate segregation of vital components, so that a single external
failure source does not result in multiple system failures.
To guard against physical and environmental causes of failure, duplicated equipment should
be installed in separate locations. Similarly, electrical power supplies, increasingly utilizing digital data
buses, must be configured in such a way that an interrupted supply to one bus does not affect the
continued operation of systems connected to another bus.

Environmental requirements of Avionic equipments:


 Operating temperature is usually from −40°C to +70°C
 Full performance at 20,000 ft within two minutes of take-off
 Operate under maximum acceleration
 Electromagnetic Compatibility
 Withstand against lightning strikes. Very high magnetic pulses can be encountered
during such strikes.

Redundancy: For critical systems, a spare unit can be carried as either a hot or a cold spare;
the former is connected to the data bus, ready to become operational in the event of component failure.
Reliability: Two measures of equipment reliability are generally used; both are related, but
depend on different factors. The measures are
 Mean Time Between Failures (MTBF): the mean time between component failures, either of the
whole device as far as the airline is concerned, or at part level for the manufacturer or
maintenance organization.
 Mean Time Between Unscheduled Removals (MTBUR): the mean time between removals
of a component from the aircraft on the grounds of suspected
failure, irrespective of whether it is subsequently proved to have failed.
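The two measures differ only in what counts as an event: confirmed failures for MTBF, unscheduled removals for MTBUR. A minimal sketch, using invented fleet numbers purely for illustration:

```python
def mean_time_between(total_operating_hours: float, events: int) -> float:
    """Mean operating hours per event; the 'event' is a confirmed
    failure for MTBF, or an unscheduled removal for MTBUR."""
    return total_operating_hours / events

# Hypothetical data: 12 units, 400 flight hours each,
# 3 confirmed failures but 5 unscheduled removals.
hours = 12 * 400
print(mean_time_between(hours, 3))  # MTBF  = 1600.0 h
print(mean_time_between(hours, 5))  # MTBUR =  960.0 h
```

Because every failure normally triggers a removal but not every removal is a proven failure, MTBUR is never greater than MTBF for the same fleet data.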

Built-In Test Equipment (BITE): This is an integral part of modern avionic design. BITE is
designed to provide a continuous, integrated monitoring system, both in flight and on
the ground, whenever power is applied to the aircraft. The purposes are:

 To provide maintenance assistance to confirm pilot-generated fault reports.


 To improve the accuracy of identification of a failed component.
 To assess the serviceability after rectification or re-installation.

Most modern aircraft use cockpit displays and avionics bay read-outs to provide access
to the BITE-generated data, which is retained in the non-volatile memory of the aircraft
computer system. These facilities provide post-flight confirmation as well as storage of fault
data, which is useful for further analysis after returning to the main engineering
base.

Automatic Test Equipment (ATE): These test fixtures can be considered as "filtering
devices", often designed to prevent unwarranted flagging of a unit as faulty. ATE is designed to
perform a number of roles:

 Confirm a fault that is believed to exist.


 Diagnosing the fault and its location.
 Testing the equipment function before reinstallation.

Recent Advances:
Advanced avionics systems can automatically perform many tasks that pilots and
navigators previously did by hand. For example, an area navigation (RNAV) or flight
management system (FMS) unit accepts a list of points that define a flight route, and
automatically performs most of the course, distance, time, and fuel calculations. Once en
route, the FMS or RNAV unit can continually track the position of the aircraft with respect to
the flight route, and display the course, time, and distance remaining to each point along the
planned route. An autopilot is capable of automatically steering the aircraft along the route
that has been entered in the FMS or RNAV system.
Advanced avionics perform many functions and replace the navigator and pilot in
most procedures. However, with the possibility of failure in any given system, the pilot must
be able to perform the necessary functions in the event of an equipment failure. Pilot ability
to perform in the event of equipment failure(s) means remaining current and proficient in
accomplishing the manual tasks, maintaining control of the aircraft manually (referring only
to standby or backup instrumentation), and adhering to the air traffic control (ATC) clearance
received or requested. Pilots of modern advanced avionics aircraft must learn and practice
backup procedures to maintain their skills and knowledge.
Risk management principles require the flight crew to always have a backup or
alternative plan, and/or escape route. Advanced avionics aircraft relieve pilots of much of the
minute-to-minute tedium of everyday flights, but demand much more initial and recurrent
training to retain the skills and knowledge necessary to respond adequately to failures and
emergencies. The FMS or RNAV unit and autopilot offer the pilot a variety of methods of
aircraft operation. Pilots can perform the navigational tasks themselves and manually control
the aircraft, or choose to automate both of these tasks and assume a managerial role as the
systems perform their duties. Similarly, information systems now available in the cockpit
provide many options for obtaining data relevant to the flight.
Advanced avionics systems present three important learning challenges as you
develop proficiency:
1. How to operate advanced avionics systems
2. Which advanced avionics systems to use and when
3. How advanced avionics systems affect the pilot and the way the pilot flies

How To Operate Advanced Avionics Systems: The first challenge is to acquire the “how-
to” knowledge needed to operate advanced avionics. These principles and concepts are
illustrated with a range of equipment by different manufacturers. It is very important that the
pilot obtain the manufacturer’s guide for each system to be operated, as only those materials
contain the many details and nuances of those particular systems. Many systems allow
multiple methods of accomplishing a task, such as programming or route selection.
A proficient pilot tries all methods, and chooses the method that works best for that
pilot for the specific situation, environment, and equipment. Not all aircraft are equipped or
connected identically for the navigation system installed. In many instances, two aircraft with
identical navigation units are wired differently. Obvious differences include slaved versus
non-slaved electronic horizontal situation indicators (EHSIs) or primary flight display (PFD)
units. Optional equipment is not always purchased and installed. The pilot should always
check the equipment list to verify what is actually installed in that specific aircraft. It is also
essential for pilots using this handbook to be familiar with, and apply, the pertinent parts of
the regulations and the Aeronautical Information Manual (AIM). Advanced avionics
equipment, especially navigation equipment, is subject to internal and external failure. You
must always be ready to perform manually the equipment functions which are normally
accomplished automatically, and should always have a backup plan with the skills,
knowledge, and training to ensure the flight has a safe ending.

Which Advanced Avionics Systems to Use and When: The second challenge is learning to
manage the many information and automation resources now available to you in the cockpit.
Specifically, you must learn how to choose which advanced cockpit systems to use, and
when. There are no definitive rules. In fact, you will learn how different features of advanced
cockpit avionics systems fall in and out of usefulness depending on the situation. Becoming
proficient with advanced avionics is learning to use the right tool for the right job at the right
time. In many systems, there are multiple methods of accomplishing the same function. The
competent pilot learns all of these methods and chooses the method that works best for the
specific situation, environment, and equipment.

How Advanced Avionics Systems Affect the Pilot: The third challenge is learning how
advanced avionics systems affect the pilot. Because of the limits of human understanding,
together with the quirks present in computerized electronic systems of any kind, you will
learn to expect, and be prepared to cope with, surprises in advanced systems. Avionics
equipment frequently receives software and database updates, so you must continually learn
system functions, capabilities, and limitations.
The Awareness series presents examples of how advanced avionics systems can
enhance pilot awareness of the aircraft systems, position, and surroundings. You will also
learn how (and why) the same systems can sometimes decrease awareness. Many studies
have demonstrated a natural tendency for pilots to sometimes drift out of the loop when
placed in the passive role of supervising an FMS/RNAV and autopilot. Keeping track of
which modes are currently in use and predicting the future behaviour of the systems is
another awareness skill that you must develop to operate these aircraft safely.
The Risk series provides insights on how advanced avionics systems can help you
manage the risk faced in everyday flight situations. Information systems offer the immediate
advantage of providing a more complete picture of any situation, allowing you to make better
informed decisions about potential hazards, such as terrain and weather. Studies have shown
that these same systems can sometimes have a negative effect on pilot risk-taking behaviour.
You will learn about situations in which having more information can tempt you to take more
risk than you might be willing to accept without the information. This series will help you use
advanced information systems to increase safety, not risk. As much as advanced information
systems have improved the information stream to the cockpit, the inherent limitations of the
information sources and timeliness are still present; the systems are not infallible.
When advanced avionics systems were first introduced, it was hoped that those new
systems would eliminate pilot error. Experience has shown that while advanced avionics
systems do help reduce many types of errors, they have also created new kinds of errors. This
handbook takes a practical approach to pilot error by providing two kinds of assistance in the
form of two series: Common Errors and Catching Errors.
The Common Errors series describes errors commonly made by pilots using advanced
avionics systems. These errors have been identified in research studies in which pilots and
flight instructors participated. The Catching Errors series illustrates how you can use the
automation and information resources available in the advanced cockpit to catch and correct
errors when you make them. The Maintaining Proficiency series focuses on pilot skills that
are used less often in advanced avionics. It offers reminders for getting regular practice with
all of the skills you need to maintain in your piloting repertoire.
Ilities of Avionic Requirement:

 Capability: The ability of the system to perform its function within the constraints that are imposed.
 Reliability: The system must be as reliable as possible, since higher reliability generally
leads to lower maintenance cost.
 Maintainability: A good system needs little and easy maintenance, so it must
have built-in test, automated troubleshooting and easy equipment access.
 Availability: Systems that are reliable and maintainable will yield high availability of
aircraft. Aircraft that have to be repaired often or take too long to repair are not
contributing to the mission since they are not available to fly.
 Retrofitability: The capability of a new design of equipment to be successfully
installed and operated in place of older, less capable equipment.
 Supportability: A design should use parts and support equipment common to other
systems, so that support cost can be spread across more than one system.
 Survivability: The capability of the system to continue to function in the presence of a non-
nuclear threat. It is a function of susceptibility and vulnerability.
 Susceptibility: A measure of the probability that an object will be hit by a given
threat.
 Vulnerability: It is a measure of the probability that damage will occur due to the hit.
 Flexibility & Adaptability: These measures describe how easily the system can be
changed or expanded as improvements become available and additional functions are
added.
 Certifiability: Certification is conducted by the regulatory agencies and is based on
detailed, expert examination of all facets of the aircraft design and operation.

Digital Computers: Digital computer, any of a class of devices capable of solving


problems by processing information in discrete form. It operates on data, including
magnitudes, letters, and symbols, that are expressed in binary form—i.e., using only the two
digits 0 and 1. By counting, comparing, and manipulating these digits or their combinations
according to a set of instructions held in its memory, a digital computer can perform such
tasks as to control industrial processes and regulate the operations of machines; analyze and
organize vast amounts of business data; and simulate the behaviour of dynamic systems (e.g.,
global weather patterns and chemical reactions) in scientific research.
 Functional elements:
A typical digital computer system has four basic functional elements: (1) input-output
equipment, (2) main memory, (3) control unit, and (4) arithmetic-logic unit. Any of a
number of devices is used to enter data and program instructions into a computer and
to gain access to the results of the processing operation. Common input devices
include keyboards and optical scanners; output devices include printers and cathode-
ray tube and liquid-crystal display monitors. The information received by a computer
from its input unit is stored in the main memory or, if not for immediate use, in an
auxiliary storage device. The control unit selects and calls up instructions from the
memory in appropriate sequence and relays the proper commands to the appropriate
unit. It also synchronizes the varied operating speeds of the input and output devices
to that of the arithmetic-logic unit (ALU) so as to ensure the proper movement of data
through the entire computer system. The ALU performs the arithmetic and logic
algorithms selected to process the incoming data at extremely high speeds—in many
cases in nanoseconds (billionths of a second). The main memory, control unit, and
ALU together make up the central processing unit (CPU) of most digital computer
systems, while the input-output devices and auxiliary storage units constitute
peripheral equipment.

Digital number system:

Number systems and codes:


Arithmetic operations using decimal numbers are quite common. However, in logical
design it is necessary to perform manipulations in the so-called binary system of numbers
because of the on-off nature of the physical devices used. Numbers expressed in base 2 are
called binary numbers. They are often used in computers since they require only two
coefficient values. The integers from 0 to 15 are given in Table 1.1-1 for several bases. Since
there are no single-character coefficient symbols for the values 10 to b − 1 when b > 10, the letters A, B, C, . . .
are used. Base-8 numbers are called octal numbers, and base-16 numbers are called
hexadecimal numbers. Octal and hexadecimal numbers are often used as shorthand for
binary numbers. An octal number can be converted into a binary number by converting each
of the octal coefficients individually into its binary equivalent. The same is true for
hexadecimal numbers. This property is true because 8 and 16 are both powers of 2. For
numbers with bases that are not a power of 2, the conversion to binary is more complex.

Binary to Decimal:

Octal to Decimal:

Hexadecimal to Decimal:

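Each of the "to decimal" conversions headed above is the same positional evaluation, d(n)·bⁿ + … + d(1)·b¹ + d(0)·b⁰. A short sketch in Python (the helper name `to_decimal` is ours, not standard; Python's built-in `int(s, base)` does the same job):

```python
def to_decimal(digits: str, base: int) -> int:
    """Evaluate a positional numeral: each step multiplies the running
    value by the base and adds the next coefficient."""
    value = 0
    for d in digits:
        value = value * base + int(d, 16)  # int(d, 16) maps 'A'..'F' to 10..15
    return value

print(to_decimal("1000100", 2))  # 68
print(to_decimal("261", 8))      # 177
print(to_decimal("F4C", 16))     # 3916
```

The three printed values match the binary, octal and hexadecimal worked examples used later in this section.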
Conversions:
Conversions in fractions:

Conversion of decimal to binary (base 10 to base 2):

Example: convert (68)10 to binary


68/2 = 34 remainder is 0
34/ 2 = 17 remainder is 0
17 / 2 = 8 remainder is 1
8 / 2 = 4 remainder is 0
4 / 2 = 2 remainder is 0
2 / 2 = 1 remainder is 0
1 / 2 = 0 remainder is 1
Answer = (1 0 0 0 1 0 0)2
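The repeated-division procedure above works for any target base: divide by the base, record the remainder, and read the remainders from last to first. A minimal Python sketch (`decimal_to_base` is an illustrative name):

```python
def decimal_to_base(n: int, base: int) -> str:
    """Repeated division: the remainders, read from last to first,
    are the digits of the converted number."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, base)                  # quotient and remainder
        digits.append("0123456789ABCDEF"[r])    # remainder -> digit symbol
    return "".join(reversed(digits))

print(decimal_to_base(68, 2))     # 1000100
print(decimal_to_base(177, 8))    # 261
print(decimal_to_base(4768, 16))  # 12A0
```

The same function reproduces the octal and hexadecimal worked examples later in this section simply by changing the base argument.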

Conversion of decimal fraction to binary Fraction:

Example: Convert (0.68)10 to binary fraction.


0.68 * 2 = 1.36 integer part is 1
0.36 * 2 = 0.72 integer part is 0
0.72 * 2 = 1.44 integer part is 1
0.44 * 2 = 0.88 integer part is 0
Answer = 0. 1 0 1 0…..
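The fractional procedure is the mirror image of the integer one: multiply by 2, peel off the integer part as the next bit, and read the bits from first to last. A sketch, again with an illustrative function name:

```python
def fraction_to_binary(frac: float, places: int = 8) -> str:
    """Repeated multiplication by 2: the integer parts, read from
    first to last, are the binary fraction digits."""
    bits = []
    for _ in range(places):
        frac *= 2
        bit = int(frac)    # the integer part is the next bit
        bits.append(str(bit))
        frac -= bit        # keep only the fractional part
    return "0." + "".join(bits)

print(fraction_to_binary(0.68, 4))  # 0.1010
```

Note that most decimal fractions (0.68 included) have no exact finite binary form, so the result is truncated at the requested number of places.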

Example: convert (68.68)10 to binary equivalent.

Answer = 1 0 0 0 1 0 0. 1 0 1 0….

Conversion of decimal to octal (base 10 to base 8):

Example: convert (177)10 to octal


177 / 8 = 22 remainder is 1
22 / 8 = 2 remainder is 6
2 / 8 = 0 remainder is 2

Answer = (2 6 1)8

Conversion of hex to decimal (base 16 to base 10):

Example: convert (F4C)16 to decimal = (F × 16²) + (4 × 16¹) + (C × 16⁰) = (15 × 256) + (4 ×
16) + (12 × 1) = (3916)10

Conversion of decimal to hex (base 10 to base 16):

Example: convert (4768)10 to hex.


= 4768 / 16 = 298 remainder 0
= 298 / 16 = 18 remainder 10 (A)
= 18 / 16 = 1 remainder 2
= 1 / 16 = 0 remainder 1
Answer: 1 2 A 0

Conversion of binary to octal and hex:

•Conversion of binary numbers to octal and hex simply requires grouping bits in the binary
numbers into groups of three bits for conversion to octal and into groups of four bits for
conversion to hex.

Thus, (11 100 111)2 = (347)8

(11 100 010 101 010 010 001)2 = (3425221)8

(1110 0111)2 = (E7)16

(1 1000 1010 1000 0111)2 = (18A87)16
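These groupings can be checked with Python's built-in conversions, since `oct()` and `hex()` apply exactly the group-of-three and group-of-four rules; such a check is also a quick way to catch grouping slips (the 20-bit number above groups, three bits at a time, to 3 4 2 5 2 2 1):

```python
# Group of three bits -> one octal digit; group of four bits -> one hex digit.
n = int("11100111", 2)
print(oct(n))   # 0o347
print(hex(n))   # 0xe7

print(oct(int("11100010101010010001", 2)))  # 0o3425221
print(hex(int("11000101010000111", 2)))     # 0x18a87
```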


Digital arithmetic: Many modern digital computers employ the binary (base-2) number
system to represent numbers, and carry out the arithmetic operations using binary arithmetic.
In performing decimal arithmetic it is necessary to memorize the tables giving the results of
the elementary arithmetic operations for pairs of decimal digits. Similarly, for binary
arithmetic the tables for the elementary operations for the binary digits are necessary.

Binary Addition:

Model 1:

Binary Subtraction:

Examples (Model 1):

Model 2:
Model 3:

Binary Multiplication:

Division:
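Binary addition follows the elementary table 0+0=0, 0+1=1, 1+0=1, 1+1=0 carry 1. A minimal sketch of long addition with carry propagation (an illustrative helper, not production code):

```python
def add_binary(a: str, b: str) -> str:
    """Long addition of two binary strings, column by column,
    using the elementary table 0+0=0, 0+1=1, 1+0=1, 1+1=0 carry 1."""
    a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to equal length
    result, carry = [], 0
    for x, y in zip(reversed(a), reversed(b)):  # least significant bit first
        total = int(x) + int(y) + carry
        result.append(str(total % 2))  # sum bit for this column
        carry = total // 2             # carry into the next column
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("1011", "110"))  # 10001  (11 + 6 = 17)
```

Subtraction, multiplication and division build on the same column-wise tables, with borrows, shifted partial products, and repeated subtraction respectively.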
Fundamentals of logic circuits: They are divided into different types:

Combinational Logic Circuits:


As combinational logic circuits are made up from individual logic gates only, they can
also be considered as “decision making circuits” and combinational logic is about combining
logic gates together to process two or more signals in order to produce at least one output
signal according to the logical function of each logic gate. Common combinational circuits
made up from individual logic gates that carry out a desired application
include Multiplexers, De-multiplexers, Encoders, Decoders, Full and Half Adders etc.
Unlike Sequential Logic Circuits, whose outputs are dependent on both their present inputs
and their previous output state, giving them some form of memory, the outputs
of Combinational Logic Circuits are only determined by the logical function of their current
input state, logic "0" or logic "1", at any given instant in time.

Combinational Logic Circuits are made up from basic logic NAND, NOR or NOT gates


that are “combined” or connected together to produce more complicated switching circuits.
These logic gates are the building blocks of Combinational Logic Circuits. An example of a
combinational circuit is a decoder, which converts the binary code data present at its input
into a number of different output lines, one at a time producing an equivalent decimal code at
its output.
Combinational logic circuits can be very simple or very complicated, and any combinational
circuit can be implemented with NAND gates alone or with NOR gates alone, as these are classed as
"universal" gates.
The three main ways of specifying the function of a combinational logic circuit are:

 1. Boolean Algebra – This forms the algebraic expression showing the operation of


the logic circuit for each input variable either True or False that result in a logic “1”
output.
 2. Truth Table – A truth table defines the function of a logic gate by providing a
concise list that shows all the output states in tabular form for each possible
combination of input variable that the gate could encounter.
 3. Logic Diagram – This is a graphical representation of a logic circuit that shows
the wiring and connections of each individual logic gate, represented by a specific
graphical symbol that implements the logic circuit.
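A half adder is a convenient example that ties the three specifications together: its Boolean algebra is Sum = A XOR B, Carry = A AND B, and enumerating the inputs yields the truth table. A short sketch in Python:

```python
def half_adder(a, b):
    """Combinational logic: outputs depend only on the present inputs.
    Sum is the XOR of the inputs; Carry is the AND."""
    return a ^ b, a & b

# Enumerate every input combination to print the truth table,
# the second way of specifying a combinational circuit.
print("A B | Sum Carry")
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} {b} |  {s}    {c}")
```

The same enumeration idea scales to any combinational block: because there is no stored state, the truth table is a complete description of the circuit.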
COMPUTER MEMORY
Memories are an essential element of a computer. Without its memory, a computer is of
hardly any use. Memory plays an important role in saving and retrieving data. The
performance of the computer system depends upon the size of the memory. Memory is of the
following types:

1. Primary Memory / Volatile Memory.


2. Secondary Memory / Non Volatile Memory.

1. Primary Memory / Volatile Memory: Primary Memory is internal memory of the


computer. RAM and ROM both form part of primary memory. The primary memory
provides the main working space to the computer. The terms that come under the primary
memory of a computer are discussed below:

Random Access Memory (RAM): The primary storage is referred to as random access


memory (RAM) because it is possible to randomly select and use any location of the memory
to directly store and retrieve data. It takes the same time to access any address of the memory as the
first address. It is also called read/write memory. The storage of data and instructions inside the
primary storage is temporary. It disappears from RAM as soon as the power to the computer
is switched off. The memories, which lose their content on failure of power supply, are
known as volatile memories .So now we can say that RAM is volatile memory. RAM or
Random Access Memory is the central storage unit in a computer system. It is the place in a
computer where the operating system, application programs and the data in current use are
kept temporarily so that they can be accessed by the computer’s processor. The more RAM a
computer has, the more data a computer can manipulate. Random access memory, also called
the Read/Write memory, is the temporary memory of a computer. It is said to be ‘volatile’
since its contents are accessible only as long as the computer is on. The contents of RAM are
cleared once the computer is turned off.

 Read Only Memory (ROM): There is another memory in the computer called Read Only Memory (ROM). Again it is ICs inside the PC that form the ROM. The storage of program and data in the ROM is permanent. The ROM stores some standard processing programs supplied by the manufacturer to operate the personal computer. The ROM can only be read by the CPU; it cannot be changed. The basic input/output program stored in the ROM examines and initializes the various equipment attached to the PC when the power switch is on. Memories which do not lose their content on failure of the power supply are known as non-volatile memories; ROM is a non-volatile memory.
 PROM: There is another type of primary memory in the computer called Programmable Read Only Memory (PROM). It is not possible to modify or erase programs stored in ROM, but it is possible for you to store your own program in a PROM chip. Once the program is written it cannot be changed, and it remains intact even if power is switched off. Therefore programs or instructions written in PROM or ROM cannot be erased or changed.

 EPROM: This stands for Erasable Programmable Read Only Memory, which overcomes the limitation of PROM and ROM. An EPROM chip can be programmed time and again by erasing the information stored earlier in it. Information stored in an EPROM is erased by exposing the chip to ultraviolet light for some time; the chip is then reprogrammed using a special programming facility. When the EPROM is in use, information can only be read. ROM, or Read Only Memory, is a special type of memory which can only be read and whose contents are not lost even when the computer is switched off. It typically contains the manufacturer’s instructions. Among other things, ROM also stores an initial program called the ‘bootstrap loader’ whose function is to start the computer software operating once the power is turned on.
Read-only memories can be manufacturer-programmed or user-programmed. While manufacturer-programmed ROMs have data burnt into the circuitry, user-programmed ROMs let the user load and then store read-only programs; PROM or Programmable ROM is the name given to such ROMs. Information once stored on a ROM or PROM chip cannot be altered. However, another type of memory called EPROM (Erasable PROM) allows a user to erase the information stored on the chip and reprogram it with new information. EEPROM (Electrically EPROM) and UVEPROM (Ultra Violet EPROM) are two types of EPROMs.

 Cache Memory: The speed of the CPU is extremely high compared to the access time of main memory; therefore the performance of the CPU decreases due to the slow speed of main memory. To reduce this mismatch in operating speed, a small memory chip is attached between the CPU and main memory whose access time is very close to the processing speed of the CPU. It is called cache memory. Cache memories are accessed much faster than conventional RAM. Cache is used to store programs or data currently being executed, or temporary data frequently used by the CPU, so it makes main memory appear faster and larger than it really is. Because larger caches are very expensive, cache size is normally kept small.

 Registers: The CPU processes data and instructions at high speed, and there is also movement of data between the various units of the computer. It is necessary to transfer the processed data at high speed, so the computer uses a number of special memory units called registers. They are not part of the main memory, but they store data or information temporarily and pass it on as directed by the control unit.

2. Secondary Memory / Non-Volatile Memory: Secondary memory is external and permanent in nature. Secondary memory is chiefly magnetic memory; it can be stored on media such as floppy disks, magnetic disks and magnetic tapes. Data can also be stored optically, on optical disks such as CD-ROM. The terms that come under the secondary memory of a computer are discussed below:

 Magnetic Tape: Magnetic tapes are used with large computers, such as mainframes, where large volumes of data are stored for a long time. In a PC you can also use tapes in the form of cassettes. The cost of storing data on tape is low. Tapes consist of magnetic material that stores data permanently. The tape is a plastic film 12.5 mm to 25 mm wide and 500 m to 1200 m long, coated with magnetic material. The tape deck is connected to the central processor, and information is fed into or read from the tape through the processor, much like a cassette tape recorder.

 Magnetic Disk: You might have seen a gramophone record, which is circular like a disk and coated with magnetic material. Magnetic disks used in computers are made on the same principle. The disk rotates at very high speed inside the drive, and data is stored on both surfaces of the disk. Magnetic disks are the most popular direct access storage devices. Each disk consists of a number of invisible concentric circles called tracks. Information is recorded on the tracks of a disk surface in the form of tiny magnetic spots: the presence of a magnetic spot represents a one bit and its absence represents a zero bit. The information stored on a disk can be read many times without affecting the stored data, so the reading operation is non-destructive; to write new data, the existing data is erased from the disk and the new data is recorded. For example, floppy disks are small removable disks of plastic coated with magnetic recording material. Floppy disks are typically 3.5″ in diameter and can hold 1.44 MB of data. This portable storage device is a rewritable medium and can be reused a number of times; floppy disks are commonly used to move files between different computers. Their main disadvantage is that they can be damaged easily and are therefore not very reliable.

 Hard Disk: Another form of auxiliary storage is the hard disk. A hard disk consists of one or more rigid metal platters coated with a metal oxide material that allows data to be magnetically recorded on the surface of the platters. The platters spin at a high rate of speed, typically 5400 to 7200 revolutions per minute (RPM). Storage capacities of hard disks for personal computers range from 10 GB to 120 GB (one billion bytes is called a gigabyte).

 Optical Disk: With every new application and software there is a greater demand for memory capacity. It is the necessity to store large volumes of data that has led to the development of the optical disk storage medium. Optical disks can be divided into the following categories:
1. Compact Disk / Read Only Memory (CD-ROM)
2. Write Once, Read Many (WORM)
3. Erasable Optical Disk

 CD: A Compact Disk (CD) is a portable disk with a data storage capacity between 650 and 700 MB. It can hold large amounts of information such as music, full-motion video and text. It contains digital information that can be read but cannot be rewritten; separate drives exist for reading and writing CDs. Since it is a very reliable storage medium, it is often used to distribute large amounts of information to large numbers of users; in fact, today most software is distributed on CDs.

 DVD: The Digital Versatile Disk (DVD) is similar to a CD but has a larger storage capacity and greater clarity. Depending upon the disk type it can store several gigabytes of data (as opposed to around 650 MB on a CD). DVDs are primarily used to store music or movies and can be played back on your television or computer. Pressed DVDs are not rewritable media. The DVD is also termed the Digital Video Disk. A DVD-ROM offers over 4 GB of storage (varying with format) and is read-only, while many recordable formats exist (e.g., DVD-R, DVD-RW). DVDs store data more compactly than a CD, and a special laser is needed to read them.
 Blu-ray Technology: The name is derived from the blue-violet laser used to read and write data. The format was developed by the Blu-ray Disc Association, which has more than 180 members; some companies behind the technology are Dell, Sony and LG. The data capacity is very large because Blu-ray uses a blue laser (405 nanometres) instead of a red laser (650 nanometres), which allows the data tracks on the disc to be very compact and the pits to be less than half the size of those on a DVD. Because of the greatly compacted data, Blu-ray can hold almost 5 times more data than a single-layer DVD – close to 25 GB. Just like a DVD, Blu-ray can also be recorded in dual-layer format, which allows the disk to hold up to 50 GB.

The variations in the formats are as follows:
• BD-ROM (read-only) – for pre-recorded content
• BD-R (recordable) – for PC data storage
• BD-RW (rewritable) – for PC data storage
• BD-RE (rewritable) – for HDTV recording

MICROPROCESSOR: A microprocessor is a computer processor that incorporates the functions of a computer's central processing unit (CPU) on a single integrated circuit (IC), or at most a few integrated circuits. The microprocessor is a multipurpose, programmable device that accepts digital data as input, processes it according to instructions stored in its memory, and provides results as output. It is an example of sequential digital logic, as it has internal memory. Microprocessors operate on numbers and symbols represented in the binary numeral system. The microprocessor is a programmable device that takes in numbers, performs arithmetic or logical operations on them according to the program stored in memory, and then produces other numbers as a result.
The integration of a whole CPU onto a single chip or on a few chips greatly reduced
the cost of processing power. The integrated circuit processor was produced in large numbers
by highly automated processes, so unit cost was low. Single-chip processors increase
reliability as there are many fewer electrical connections to fail. As microprocessor designs
get faster, the cost of manufacturing a chip (with smaller components built on a
semiconductor chip the same size) generally stays the same. Before microprocessors, small
computers had been implemented using racks of circuit boards with
many medium- and small-scale integrated circuits. Microprocessors integrated this into one
or a few large-scale ICs. Continued increases in microprocessor capacity have since rendered
other forms of computers almost completely obsolete (see history of computing hardware),
with one or more microprocessors used in everything from the smallest embedded
systems and handheld devices to the largest mainframes and supercomputers.
Unit – 2

UNIT II DIGITAL AVIONICS ARCHITECTURE


Avionics system architecture – data buses – MIL-STD-1553B – ARINC 429 – ARINC 629
There are three fundamental types of architecture:

1. Centralized architecture
2. Federated architecture
3. Distributed architecture

A. Centralized Architecture:
This is the traditional, older architecture. Here, signal conditioning and computation take place in one or more computers (LRUs) located in the avionics bay, from which the signals are transmitted over one-way data buses. The advantages of this architecture are:
 All computers are located in a readily accessible avionics bay.
 The environment for the computers is relatively benign, which simplifies equipment qualification.
 The software is more easily written and validated since there are only a few processor types and a few large programs that can be physically integrated.
The disadvantages are:
 Many long buses are needed to collect and distribute data and commands.
 Increased vulnerability to damage from a single hazardous event if it were to occur in or near the avionics bay.
 Partitioning or brick-walling is difficult. This is a feature in an architecture that limits a failure to the subsystem in which it occurred; in addition, the physical or operational effects of the failure are not allowed to cascade to the rest of the system. Examples: a short in a power lead, or a bit hang-up.

B. Federated Architecture:
This architecture came into use in the early 1980s. In it, each major system, such as thrust management, operates largely independently, sharing input and sensor data from a common set of hardware and subsequently sharing its computed results over data buses. This architecture therefore permits the independent design, configuration and optimization of the major systems while ensuring that they use common data buses. Changes in hardware or software are also relatively easy to make.

C. Distributed Architecture:
This architecture has multiple processors throughout the aircraft that are assigned computing tasks on a real-time basis as a function of mission phase and/or system status. Processing is performed in the sensors or actuators. The advantages are:
 Fewer, shorter buses
 Faster program execution
 Intrinsic partitioning
 Reduced vulnerability (because a hazardous event will not destroy a substantial fraction of the total capability)
The disadvantages are:
 More processors are needed, along with more software generation and validation and spares stocking.
 Each processor contains a large set of software that performs a variety of functions.
 Some processors may be placed in more severe, less accessible environments such as wings and empennages.
MIL STD 1553B:

Purpose and application:


In recent years, the use of digital techniques in aircraft equipment has greatly increased, as
have the number of avionics subsystems and the volume of data processed by them. Because analog
point-to-point wire bundles are inefficient and cumbersome means of interconnecting the sensors,
computers, actuators, indicators, and other equipment onboard the modern military vehicle, a serial
digital multiplex data bus was developed. MIL-STD-1553 defines all aspects of the bus; therefore,
many groups working with the military tri-services have chosen to adopt it. The 1553 multiplex data
bus provides integrated, centralized system control and a standard interface for all equipment
connected to the bus. The bus concept provides a means by which all bus traffic is available to be
accessed with a single connection for testing and interfacing with the system. The standard defines
operation of a serial data bus that interconnects multiple devices via a twisted, shielded pair of wires.
The system implements a command-response format. MIL-STD-1553, "Aircraft Internal
Time-Division Command/Response Multiplex Data Bus," has been in use since 1973 and is widely
applied. MIL-STD-1553 is referred to as "1553" with the appropriate revision letter (A or B) as a
suffix. The basic difference between the 1553A and the 1553B is that in the 1553B, the options are
defined rather than being left for the user to define as required. It was found that when the standard
did not define an item, there was no coordination in its use. Hardware and software had to be
redesigned for each new application. The primary goal of the 1553B was to provide flexibility without
creating new designs for each new user. This was accomplished by specifying the electrical interfaces explicitly, so that designs by different manufacturers would be electrically interchangeable.

The Department of Defense chose multiplexing because of the following advantages:


 Weight reduction
 Simplicity
 Standardization
 Flexibility

Some 1553 applications utilize more than one data bus on a vehicle. This is often done, for
example, to isolate a Stores bus from a Communications bus or to construct a bus system capable of
interconnecting more terminals than a single bus could accommodate. When multiple buses are used,
some terminals may connect to both buses, allowing for communication between them.

Multiplexing:
Multiplexing facilitates the transmission of information along a single data path: it permits the transmission of several signal sources through one communications system.
BUS:
The bus is made up of twisted-shielded pairs of wires to maintain message integrity. MIL-
STD-1553 specifies that all devices in the system will connect to a redundant pair of buses. This
provides a second path for bus traffic should one of the buses be damaged. Signals are only allowed to
appear on one of the two buses at a time. If a message cannot be completed on one bus, the bus
controller may switch to the other bus. In some applications more than one 1553 bus may be
implemented on a given vehicle. Some terminals on the bus may actually connect to both buses.

BUS COMPONENTS:
There are only three functional modes of terminals allowed on the data bus: the bus
controller, the bus monitor, and the remote terminal. Devices may be capable of more than one
function. Figure 1 illustrates a typical bus configuration.

Bus Controller - The bus controller (BC) is the terminal that initiates information transfers on
the data bus. It sends commands to the remote terminals which reply with a response. The bus will
support multiple controllers, but only one may be active at a time. Other requirements, according to
1553, are: (1) it is "the key part of the data bus system," and (2) "the sole control of information
transmission on the bus shall reside with the bus controller."

Bus Monitor - 1553 defines the bus monitor as "the terminal assigned the task of receiving
bus traffic and extracting selected information to be used at a later time." Bus monitors are frequently
used for instrumentation.

Remote Terminal - Any terminal not operating in either the bus controller or bus monitor
mode is operating in the remote terminal (RT) mode. Remote terminals are the largest group of bus
components.

MODULATION:
The signal is transferred over the data bus using serial digital pulse code modulation.

DATA ENCODING:
The type of data encoding used by 1553 is Manchester II biphase. A logic one (1) is
transmitted as a bipolar coded signal 1/0 (in other words, a positive pulse followed by a negative
pulse). A logic zero (0) is a bipolar coded signal 0/1 (i.e., a negative pulse followed by a positive
pulse).
A transition through zero occurs at the midpoint of each bit, whether the rate is a logic one or a logic
zero. Figure 2 compares a commonly used Non Return to Zero (NRZ) code with the Manchester II
biphase level code, in conjunction with a 1 MHz clock.
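The encoding rule above can be sketched in a few lines. This is an illustration of my own (names and representation are assumptions, not from the standard): each bit is expanded into two half-bit levels, with logic 1 as a positive-then-negative pulse and logic 0 as negative-then-positive:

```python
# Minimal sketch of Manchester II biphase encoding as described above:
# each bit becomes two half-bit levels with a transition at the midpoint.
# 1 -> (+, -) and 0 -> (-, +); +1/-1 stand in for the two signal polarities.

def manchester_encode(bits):
    half_bits = []
    for b in bits:
        half_bits += [+1, -1] if b == 1 else [-1, +1]
    return half_bits

# Every bit has a level change at its midpoint, so the receiver can
# recover the clock from the waveform itself.
print(manchester_encode([1, 0, 1]))  # [1, -1, -1, 1, 1, -1]
```

Because every bit guarantees a mid-bit transition, the code is self-clocking, which is one reason 1553 adopted it.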

BIT TRANSMISSION RATE:


The bit transmission rate on the bus is 1.0 megabit per second with a combined accuracy and
long-term stability of +/- 0.1%. The short-term stability is less than 0.01%. There are 20 1.0-
microsecond bit times allocated for each word. All words include a 3 bit-time sync pattern, a 16-bit
data field that is specified differently for each word type, and 1 parity check bit.

WORD FORMATS:
Bus traffic or communications travels along the bus in words. A word in MIL-STD-1553 is a
sequence of 20 bit times consisting of a 3 bit-time sync wave form, 16 bits of data, and 1 parity check
bit. This is the word as it is transmitted on the bus; 1553 terminals add the sync and parity before
transmission and remove them during reception. Therefore, the nominal word size is 16 bits, with the
most significant bit (MSB) first. There are three types of words: command, status, and data. A packet
is defined to have no intermessage gaps. The time between the last word of a controller message and the return of the terminal status word is 4-12 microseconds. The time between the status word and the next controller message is undefined. Figure 3 illustrates these three formats.
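The 20-bit-time layout can be sketched numerically. The helper below is an illustration of my own (names are assumptions, not from the standard): it expands a 16-bit data field MSB-first and appends the parity bit, which 1553 computes as odd parity. The 3 bit-time sync is an invalid Manchester waveform and so cannot be represented as ordinary bits; it is omitted here.

```python
# Sketch of the 16-bit data field plus parity bit that follow a 1553
# word's sync pattern. 1553 uses odd parity: the parity bit is chosen
# so the total number of ones in the 17 bits is odd.

def with_odd_parity(data16):
    bits = [(data16 >> i) & 1 for i in range(15, -1, -1)]  # MSB first
    parity = 1 - (sum(bits) % 2)   # force an odd count of ones
    return bits + [parity]

word = with_odd_parity(0xA5A5)
print(len(word))          # 17 bits: 16 data + 1 parity
print(sum(word) % 2)      # 1 -> total count of ones is odd
```

With the 3 sync bit times added by the terminal hardware, this accounts for all 20 bit times of a transmitted word.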
COMMAND WORD:
Command words are transmitted only by the bus controller and always consist of:
 3 bit-time sync pattern
 5 bit RT address field
 1 bit Transmit/Receive (T/R) field
 5 bit subaddress/mode field
 5 bit word count/mode code field
 1 parity check bit
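The 16-bit field between sync and parity can be assembled by simple bit-shifting. The packer below is a hypothetical helper of my own (not from the standard text), following the field order listed above:

```python
# Sketch: pack the 16-bit body of a 1553 command word.
# Layout (MSB to LSB): 5-bit RT address, 1-bit T/R, 5-bit
# subaddress/mode, 5-bit word count/mode code. The sync pattern and
# parity bit are added by the terminal before transmission.

def pack_command(rt_addr, tr, subaddr, word_count):
    assert 0 <= rt_addr < 32 and tr in (0, 1)
    assert 0 <= subaddr < 32 and 0 <= word_count < 32
    return (rt_addr << 11) | (tr << 10) | (subaddr << 5) | word_count

# e.g. command RT 5 to transmit 4 words from subaddress 2
print(hex(pack_command(5, 1, 2, 4)))  # 0x2c44
```

Unpacking on the receiving side is the mirror image: mask and shift each field back out of the 16-bit word.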

DATA WORD:
Data words are transmitted either by the BC or by the RT in response to a BC request. The
standard allows a maximum of 32 data words to be sent in a packet with a command word before a
status response must be returned.
Data words always consist of:
 3 bit-time sync pattern (opposite in polarity from command and status words)
 16 bit data field
 1 parity check bit.

STATUS WORD:
Status words are transmitted by the RT in response to command messages from the BC and consist of:
 3 bit-time sync pattern (same as for a command word)
 5 bit address of the responding RT
 11 bit status field
 1 parity check bit.
The 11 bits in the status field are used to notify the BC of the operating condition of the RT and subsystem.

INFORMATION TRANSFERS:
Three basic types of information transfers are defined by 1553:
 Bus Controller to Remote Terminal transfers
 Remote Terminal to Bus Controller transfers
 Remote Terminal to Remote Terminal transfers
These transfers are related to the data flow and are referred to as messages. The basic formats of these
messages are shown in Figure 4.
The normal command/response operation involves the transmission of a command from the BC to a
selected RT address. The RT either accepts or transmits data depending on the type (receive/transmit)
of command issued by the BC. A status word is transmitted by the RT in response to the BC
command if the transmission is received without error and is not illegal.
ARINC 429:

The ARINC 429 Specification defines the standard requirements for the transfer of digital data between avionics systems on commercial aircraft. ARINC 429 is also known as the Mark 33 DITS Specification. Signal levels, timing and protocol characteristics are defined for ease of design implementation and data communication on the Mark 33 Digital Information Transfer System (DITS) bus. ARINC 429 is a privately copyrighted specification developed to provide interchangeability and interoperability of line replaceable units (LRUs) in commercial aircraft. Manufacturers of avionics equipment are under no requirement to comply with the ARINC 429 Specification, but designing avionics systems to meet its design guidelines provides cross-manufacturer interoperability between functional units.

The ARINC 429 Specification establishes how avionics equipment and systems
communicate on commercial aircraft. The specification defines electrical characteristics,
word structures and protocol necessary to establish bus communication. ARINC 429 utilizes
the simplex, twisted shielded pair data bus standard Mark 33 Digital Information Transfer
System bus. ARINC 429 defines both the hardware and data formats required for bus
transmission. Hardware consists of a single transmitter – or source – connected to from 1-20
receivers – or sinks – on one twisted wire pair. Data can be transmitted in one direction only
– simplex communication – with bi-directional transmission requiring two channels or buses.
The devices, line replaceable units or LRUs, are most commonly configured in a star or bus-
drop topology. Each LRU may contain multiple transmitters and receivers communicating on
different buses. This simple architecture, almost point-to-point wiring, provides a highly
reliable transfer of data.
A transmitter may ‘talk only’ to a number of receivers on the bus, up to 20 on one wire pair,
with each receiver continually monitoring for its applicable data, but does not acknowledge
receipt of the data. A transmitter may require acknowledgement from a receiver when large
amounts of data have been transferred. This handshaking is performed using a particular
word style, as opposed to a hard wired handshake. When this two way communication format
is required, two twisted pairs constituting two channels are necessary to carry information
back and forth, one for each direction.

Transmission from the source LRU comprises 32 bit words containing a 24 bit data portion, which holds the actual information, and an 8 bit label describing the data itself. LRUs have no address assigned through ARINC 429; rather, they have Equipment ID numbers which allow grouping equipment into systems, facilitating system management and file transfers. Sequential words are separated by at least 4 bit times of null or zero voltage. By utilizing this null gap between words, a separate clock signal is unnecessary. Transmission rates may be at either a low speed – 12.5 kHz – or a high speed – 100 kHz.

Cable Characteristics:
The transmission bus medium uses a 78 Ω shielded twisted pair cable. The shield must be grounded at each end and at all junctions along the bus. The transmitting source output impedance should be 75 Ω ± 5 Ω, divided equally between Line A and Line B. This balanced output should closely match the impedance of the cable. The receiving sink must have an effective input impedance of 8 kΩ minimum. Maximum length is not specified, as it is dependent on the number of sink receivers, sink drain and source power. Most systems are designed for under 150 feet but, conditions permitting, can extend to 300 feet and beyond.

Transmission Characteristics:
ARINC 429 specifies two speeds for data transmission. Low speed operation is stated at 12.5 kHz, with an actual allowable range of 12 to 14.5 kHz. High speed operation is 100 kHz ± 1%. These two data rates cannot be used on the same transmission bus. Data
is transmitted in a bipolar, Return-to-Zero format. This is a tri-state modulation consisting of
HIGH, NULL and LOW states. Transmission voltages are measured across the output
terminals of the source. Voltages presented across the receiver input will be dependent on
line length, stub configuration and the number of receivers connected. The following voltage
levels indicate the three allowable states:
TRANSMIT            STATE   RECEIVE
+10.0 V ± 1.0 V     HIGH    +6.5 V to +13 V
0 V ± 0.5 V         NULL    +2.5 V to -2.5 V
-10.0 V ± 1.0 V     LOW     -6.5 V to -13 V

In bipolar, Return-to-Zero – or RZ – format, a HIGH (or 1) is achieved with the transmission signal going from NULL to +10 V for the first half of the bit cycle, then returning to zero or NULL. A LOW (or 0) is produced by the signal dropping from NULL to –10 V for the first half bit cycle, then returning to zero. With a Return-to-Zero modulation format, each bit cycle ends with the signal level at 0 Volts, eliminating the need for an external clock and creating a self-clocking signal. An example of the bipolar, tri-state RZ signal is shown here:

Waveform Parameters:
Pulse rise and fall times are controlled by RC circuits built into ARINC 429 transmitters. This
circuitry minimizes overshoot ringing common with short rise times. Allowable rise and fall
times are shown below for both bit rates. Bit and ½ bit times are also defined.
Word Formats:
ARINC 429 protocol uses a point-to-point format, transmitting data from a single
source on the bus to up to 20 receivers. The transmitter is always transmitting, either data
words or the NULL state. Most ARINC messages contain only one data word consisting of
Binary (BNR), Binary Coded Decimal (BCD) or alphanumeric data encoded using ISO
Alphabet No. 5. File data transfers that send more than one word are also allowed.
ARINC 429 data words are 32 bit words made up of five primary fields:
 Parity – 1 bit
 Sign/Status Matrix (SSM) – 2 bits
 Data – 19 bits
 Source/Destination Identifier (SDI) – 2 bits
 Label – 8 bits

The only two fields definitively required are the Label and the Parity bit, leaving up to 23 bits
available for higher resolution data representation. Many non-standard word formats have
been adopted by various manufacturers of avionics equipment. Even with the variations
included, all ARINC data is transmitted in 32 bit words. Any unused bits are padded with
zeros.
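The five fields can be packed into a 32-bit word by bit-shifting. The helper below is hypothetical (my own names, not from the specification), using the layout listed above: bit 32 parity, bits 31-30 SSM, bits 29-11 data, bits 10-9 SDI, bits 8-1 label, with odd parity computed over the other 31 bits:

```python
# Sketch: pack an ARINC 429 word from its five fields and set the
# parity bit (bit 32, the MSB) so the total count of ones is odd.

def pack_arinc429(label, sdi, data, ssm):
    assert 0 <= label < 256 and 0 <= sdi < 4
    assert 0 <= data < 2**19 and 0 <= ssm < 4
    word = (ssm << 29) | (data << 10) | (sdi << 8) | label
    ones = bin(word).count("1")
    parity = 0 if ones % 2 == 1 else 1   # force an odd ones count
    return (parity << 31) | word

w = pack_arinc429(label=0o205, sdi=0, data=0x1234, ssm=3)
print(bin(w).count("1") % 2)  # 1 -> odd parity holds
```

Labels are conventionally written in octal (here `0o205`); a receiver identifies its applicable data by this label field, since LRUs have no bus addresses.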

Parity:
ARINC 429 defines the Most Significant Bit (MSB) of the data word as the Parity bit.
ARINC uses odd parity as an error check to ensure accurate data reception. The number of Logic 1s transmitted in each word is an odd number, with bit 32 being set or cleared to obtain the odd count. ARINC 429 specifies no means of error correction, only error detection.

Sign/Status Matrix:
Bits 31-30 are assigned as the Sign/Status Matrix field, or SSM. Depending on the word's Label, which indicates the type of data being transmitted, the SSM field can provide different information. This field can be used to indicate the sign or direction of the word's data.
Unit 3 – FLIGHT DECK AND COCKPITS

Control and display technologies: CRT, LED, LCD, EL and plasma panel – Touch screen –
Direct voice input (DVI) – Civil and Military Cockpits: MFDS, HUD, MFK, HOTAS.

CRT (Cathode Ray Tube):

Introduction:

Since the early 1900s, the Cathode-Ray Tube or CRT (sometimes called the Braun
Tube) has played an important part in displaying images, movies, and information. A patent
was filed in 1938 for the CRT; however, this was a very simple implementation. Over time,
CRTs have advanced employing many different techniques to increase image precision and
quality. While the technology has been around for several decades and is quite mature, it still
has much room for improvement. While today's CRT displays are much more advanced than
those of a decade ago, they are much simpler than other display technologies. This gives the
CRT several important advantages: cheap to manufacture and the ability to display high
quality images. Due to the low cost, CRTs have a very high resolution to price ratio
compared to other displays [Sherman, 2000]. Another important trait is the ability to display
colors with high fidelity. For example, on a CRT, to display black, no color is displayed at a
certain point giving the area a physically black color; in LCD displays, the closest is a
washed out dark gray color.

The purpose of this report is to explain the underlying mechanisms behind CRT
displays used in computer monitors and television displays. More specifically, how CRTs are
used and the technologies that work together to make the CRT function. Construction,
materials (for example phosphors used), and safe disposal methods will not be covered.
While there are many new technologies coming out and maturing, the CRT is forcing them to
compete on price and image quality. This results in a wide variety of high quality display
technologies for the consumer to choose from. The first section contains the main
components of the CRT: the electron gun, the electron beam deflector, and the screen. The
second section explains how the components work together to make the CRT work. The third
section contains a future view of the technology. In conclusion, the strengths and weaknesses
of the CRT will be discussed.

Main Components of the CRT:

The cathode ray tube (CRT) is a display device that uses electrons fired at phosphors to create images. The CRT takes input from an external source and displays it, making other devices, such as computers, useful. The CRT consists of three main components: the electron gun, the electron beam deflector, and the screen and phosphors (Figure 1).
The Electron Gun:

The electron gun fires electrons, which eventually strike the phosphors; this causes
them to display colors. The electron gun consists of two main components, the cathode and
the electron beam focuser. These two parts work together to fire an electron and help
determine where it will go (Figure 2).

The Cathode. The cathode is made of a metal conductor, usually nickel. Different
voltages are applied at G1 and G2 (Figure 2). The difference in voltage between G2 and the
cathode creates a voltage potential, usually between 100 and 1000 Volts, that is large enough
to pull electrons off of the cathode [Sherman, 2000]. As the electrons are pulled off the metal,
the voltage potential accelerates them until they reach a final velocity; they then travel at a
constant velocity until they reach the screen. The larger the velocity, the more often electrons
strike the screen, which is seen as an increase in brightness. To increase brightness, the
voltage potential is increased, which in turn increases the final velocity. As electrons are
pulled off of the cathode, the metal gives up electrons; without any additional mechanism, the
metal would eventually have no more electrons left to be pulled off. The electrons are
therefore constantly replenished so that the metal always has more electrons to supply.
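The relationship between the accelerating potential and the electrons' final velocity can be sketched with the standard non-relativistic energy balance eV = ½mv². This is an illustrative calculation, not something taken from the text's sources; the 100-1000 V range is the one quoted above.

```python
import math

ELECTRON_CHARGE = 1.602e-19   # coulombs
ELECTRON_MASS = 9.109e-31     # kilograms

def final_velocity(accel_voltage):
    """Non-relativistic final velocity of an electron accelerated
    through the given potential difference: eV = 1/2 m v^2."""
    return math.sqrt(2 * ELECTRON_CHARGE * accel_voltage / ELECTRON_MASS)

# Over the 100-1000 V range quoted above:
v_low = final_velocity(100)    # ~5.9e6 m/s
v_high = final_velocity(1000)  # ~1.9e7 m/s
```

Note how a tenfold increase in voltage gives only about a √10 increase in velocity, since the energy, not the velocity, scales linearly with the potential.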
The Electron Beam Deflector:

The electron deflector is positioned at the base of the vacuum tube and controls what
part of the screen the electron strikes (Figure 3).

The electron beam deflector, like the focuser, can rely on either an electromagnetic or
electrostatic mechanism. An electrostatic mechanism operates in the same way as described
earlier, except in this situation it would change the path of the electron rather than focus it.
Most television and computer displays use an electromagnetic coil to control the path of the
electron. There are actually two deflectors: one controls the position in the x, or horizontal,
direction and one in the y, or vertical, direction. A current is run through each coil to steer
the electron; in the case of the horizontal deflector, the electromagnetic force moves the
electron towards either the left or the right. The deflector uses this force to change the path
of the electron and so determine what part of the screen the electron hits, but not which
phosphor. While the electron travels through the deflector, it accelerates due to the force
applied to it. Acceleration means a change in velocity over time; in this case it is the
direction of the velocity that changes. This new direction determines the path of the electron
until it hits the screen (Figure 4).

The Screen:

The screen, which is at the front of the CRT, is what actually displays the images. In
color CRTs, the screen contains inorganic light-emitting phosphors of three different colors.
When the phosphors are struck, they give off energy in the form of photons. When an electron
fired from the gun strikes the screen and hits an atom in the phosphor, it transfers its energy
to an electron in the phosphor. The excited electron then rises to a higher energy level; it can
only rise to certain positions, called orbitals (Figure 5). As the electron falls back, it emits
energy as heat and visible light, and because the energy gap is fixed, it will always emit the
same frequency of light.
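The fixed-frequency emission described above follows from the photon relation λ = hc/E. A small illustrative sketch; the energy-gap values below are hypothetical, chosen only so the results land in the red and blue parts of the visible spectrum:

```python
# Planck constant times speed of light, expressed in eV*nm
HC_EV_NM = 1239.84

def emitted_wavelength_nm(energy_gap_ev):
    """Wavelength of the photon emitted when an excited electron falls
    back across a fixed energy gap; a fixed gap means a fixed
    frequency, and hence a fixed colour, for that phosphor."""
    return HC_EV_NM / energy_gap_ev

red = emitted_wavelength_nm(2.0)    # ~620 nm, red light
blue = emitted_wavelength_nm(2.75)  # ~451 nm, blue light
```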
Theory of Operation:

The CRT operates by firing an electron beam at phosphors, which give off light. The
electron beam is generated at the cathode in the electron gun. A potential (voltage) is applied,
which strips off and accelerates the electrons. The electrons then travel to the electron beam
focuser, where an electrostatic mechanism focuses the beam. After the beam exits the electron
gun, it travels to the electron beam deflector. The deflector has two mechanisms, one to
change the vertical direction and one to change the horizontal direction of the beam, which
allows the electron beam to sweep over the entire screen. When an electron in the beam
strikes a phosphor, it excites an electron in the phosphor, which then releases the energy it
gained as visible light, always at the same frequency for that phosphor. Phosphors emitting
red, blue, and green light form a color image. Figure 6 shows the overall path of the electron
beam, which is constantly being created and focused. The refresh rate describes how many
times the screen is redrawn per second and is usually 65 Hertz.
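As a rough sketch of what a 65 Hz refresh rate implies for the sweeping beam (the 1024×768 resolution is an assumed example, not a figure from the text):

```python
def frame_time_ms(refresh_hz):
    """Time available to redraw the whole screen once."""
    return 1000.0 / refresh_hz

def pixels_per_second(width, height, refresh_hz):
    """Every phosphor dot is revisited on every refresh pass."""
    return width * height * refresh_hz

t = frame_time_ms(65)                    # ~15.4 ms per redraw
rate = pixels_per_second(1024, 768, 65)  # ~51 million pixels per second
```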
Plasma Panel:

For the past 75 years, the vast majority of televisions have been built around the same
technology: the cathode ray tube (CRT). In a CRT television, a gun fires a beam of electrons
(negatively-charged particles) inside a large glass tube. The electrons excite phosphor atoms
along the wide end of the tube (the screen), which causes the phosphor atoms to light up. The
television image is produced by lighting up different areas of the phosphor coating with
different colors at different intensities. Cathode ray tubes produce crisp, vibrant images, but
they do have a serious drawback: They are bulky. In order to increase the screen width in a
CRT set, you also have to increase the length of the tube (to give the scanning electron gun
room to reach all parts of the screen). Consequently, any big-screen CRT television is going
to weigh a ton and take up a sizable chunk of a room. Recently, a new alternative has popped
up on store shelves: the plasma flat panel display. These televisions have wide screens,
comparable to the largest CRT sets, but they are only about 6 inches thick.

But the CRT has significant disadvantages. The screen is bulky, and the sets become
bulkier still as the screen width increases, because the length of the tube has to be increased
accordingly. This, in turn, increases the weight.

Plasma display is the best remedy for this. Its wide screen, small depth and
high-definition clarity are its greatest advantages.

What is Plasma?

Plasma is the main working element of a fluorescent light. It is a gas containing ions
and electrons. Under normal conditions, a gas has only uncharged particles: the number of
positively charged particles [protons] equals the number of negatively charged particles
[electrons], which leaves the gas electrically balanced.

If you apply a voltage to the gas, free electrons are introduced and the balance is
upset. These free electrons hit the atoms, knocking loose other electrons. With an electron
missing, an atom acquires a net positive charge and so becomes an ion.

When an electrical current is passed through plasma, photons of energy are released.
The electrons and ions are attracted towards the oppositely charged regions and collide with
one another, and these collisions release energy. Take a look at the figure illustrated below.

Plasma displays mostly make use of xenon and neon atoms. When energy is liberated
during collision, these atoms produce light photons, which are mostly ultraviolet in nature.
Though they are not visible to us, they play a very important role in exciting the phosphors
that produce the light we can see.

In an ordinary TV, beams of electrons are shot from an electron gun. These electrons
hit the screen and cause the pixels to light up. The screen has pixels of three component
colours, red, green and blue, distributed uniformly throughout the screen. When mixed in
different proportions, these can form any other colour, so the TV can produce all the colours
needed.

A plasma display consists of tiny fluorescent lights which together form the image on
screen. As in a CRT TV, each pixel has the three component fluorescent colour lights. These
fluorescent lights are illuminated, and the different colours are formed by combining the
component colours.

Working of Plasma Display

Two plates of glass are taken, between which millions of tiny cells filled with gases
such as xenon and neon are sandwiched. Electrodes are placed inside the glass plates so that
they sit in front of and behind each cell. The rear glass plate carries the address electrodes,
positioned behind the cells. The front glass plate carries the transparent display electrodes,
which are surrounded by a dielectric insulating material and covered by a magnesium oxide
layer; these sit in front of the cells.

As described earlier, when a voltage is applied the electrodes become charged and
ionise the gas, producing plasma. Collisions between the ions and electrons then result in the
emission of photons.

The drive differs between colour plasma and monochrome plasma. For monochrome,
a low voltage is applied between the electrodes. To obtain colour, the back of each cell has to
be coated with phosphor: the photons emitted by the gas are ultraviolet in nature, and these
UV rays react with the phosphor to give off coloured light. Take a look at the diagram given
below.

Working of Plasma display

The working of the pixels has been explained earlier. Each pixel has three component
coloured sub-pixels; when these are mixed proportionally, the desired colour is obtained.

Thousands of colours can be produced depending on the brightness and contrast of
each sub-pixel. The brightness is controlled with the pulse-width modulation technique,
which modulates the pulses of current flowing through the cells thousands of times per
second.
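The pulse-width modulation scheme can be sketched as a duty-cycle calculation. The figure of 1,000 pulse slots per frame is a hypothetical value for illustration, not a real panel specification:

```python
def subpixel_brightness(pulses_on, pulses_per_frame):
    """Fraction of full brightness for one sub-pixel cell when the
    current is gated on for `pulses_on` of the `pulses_per_frame`
    pulse slots delivered each frame (pulse-width modulation).
    The eye averages the rapid pulses into a steady intensity."""
    assert 0 <= pulses_on <= pulses_per_frame
    return pulses_on / pulses_per_frame

quarter = subpixel_brightness(250, 1000)  # 25% brightness
half = subpixel_brightness(500, 1000)     # 50% brightness
```

Varying the on-count independently for the red, green and blue sub-pixels of a pixel is what yields the hundreds of colour combinations mentioned above.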

Characteristics of Plasma Display

1. Plasma displays can be made in very large sizes, up to 150 inches diagonal.
2. Very low-luminance “dark-room” black level.
3. Very high contrast.
4. The plasma display panel itself is about 2.5 inches thick, which keeps the total
thickness of the set to no more than 4 inches.
5. For a 50-inch display, power consumption ranges from about 50 to 400 watts
depending on the image content; darker images consume less power.
6. Displays are shipped in a “shop” mode, which consumes more power than the figures
above; this can be changed to “home” mode.
7. Lifetime of almost 100,000 hours, after which the brightness of the panel falls to half.

Plasma TV Resolutions

The resolution of a plasma display has evolved from the early enhanced-definition [ED]
panels to the modern high-definition displays. The most common ED resolutions were
840×480 and 853×480.

With the emergence of HDTVs, resolutions also became higher. Modern plasma TVs
have resolutions of 1,024×1,024, 1,024×768, 1,280×768, 1,366×768, 1,280×1,080, and
1,920×1,080.

Advantages of Plasma Display

 The slimmest of all displays.
 Very high contrast ratios [up to 1:2,000,000].
 Weighs less and is less bulky than CRTs.
 Wider viewing angles than other displays [178 degrees].
 Can be mounted on walls.
 High clarity and hence better colour reproduction [68 billion (2^36) colours vs
16.7 million (2^24)].
 Very little motion blur, due to high refresh rates and fast response time.
 Has a life span of about 100,000 hours.

Disadvantages of Plasma Display

 Cost is much higher compared to other displays.
 Energy consumption is greater.
 Produces glare due to reflection from the glass screen.
 Not available in sizes smaller than 32 inches.
 Though the panel itself does not weigh much, the glass screen needed to protect the
display adds considerable weight.
 Cannot be used at high altitudes: the pressure difference between the gas and the
surrounding air may cause temporary damage or a buzzing noise.
 Area flickering is possible.

The plasma behind the plasma TV screen



Recently, a new alternative has popped up on store shelves: the plasma flat panel
display. These televisions have wide screens, comparable to the largest CRT sets, but they are
only about 6 inches (15 cm) thick. Based on the information in a video signal, the television
lights up thousands of tiny dots (called pixels) with a high-energy beam of electrons. In most
systems, there are three pixel colors -- red, green and blue -- which are evenly distributed on
the screen. By combining these colors in different proportions, the television can produce the
entire color spectrum.

The basic idea of a plasma display is to illuminate tiny colored fluorescent lights to
form an image. Each pixel is made up of three fluorescent lights -- a red light, a green light
and a blue light. Just like a CRT television, the plasma display varies the intensities of the
different lights to produce a full range of colors.

The central element in a fluorescent light is a plasma, a gas made up of free-flowing
ions (electrically charged atoms) and electrons (negatively charged particles). Under normal
conditions, a gas is mainly made up of uncharged particles. That is, the individual gas atoms
include equal numbers of protons (positively charged particles in the atom's nucleus) and
electrons. The negatively charged electrons perfectly balance the positively charged protons,
so the atom has a net charge of zero.

If you introduce many free electrons into the gas by establishing an electrical voltage
across it, the situation changes very quickly. The free electrons collide with the atoms,
knocking loose other electrons. With a missing electron, an atom loses its balance. It has a net
positive charge, making it an ion. In plasma with an electrical current running through it,
negatively charged particles are rushing toward the positively charged area of the plasma, and
positively charged particles are rushing toward the negatively charged area.

In this mad rush, particles are constantly bumping into each other. These collisions
excite the gas atoms in the plasma, causing them to release photons of energy. Xenon and
neon atoms, the atoms used in plasma screens, release light photons when they are excited.
Mostly, these atoms release ultraviolet light photons, which are invisible to the human eye.
But ultraviolet photons can be used to excite visible light photons, as we'll see in the next
section.

Inside the Display


The xenon and neon gas in a plasma television is contained in hundreds of thousands
of tiny cells positioned between two plates of glass. Long electrodes are also sandwiched
between the glass plates, on both sides of the cells. The address electrodes sit behind the
cells, along the rear glass plate. The transparent display electrodes, which are surrounded by
an insulating dielectric material and covered by a magnesium oxide protective layer, are
mounted above the cell, along the front glass plate.

Both sets of electrodes extend across the entire screen. The display electrodes are
arranged in horizontal rows along the screen and the address electrodes are arranged in
vertical columns. As you can see in the diagram below, the vertical and horizontal electrodes
form a basic grid.

To ionize the gas in a particular cell, the plasma display's computer charges the
electrodes that intersect at that cell. It does this thousands of times in a small fraction of a
second, charging each cell in turn.

When the intersecting electrodes are charged (with a voltage difference between
them), an electric current flows through the gas in the cell. As we saw in the last section, the
current creates a rapid flow of charged particles, which stimulates the gas atoms to release
ultraviolet photons.
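The row/column addressing described above can be sketched as follows. The electrode counts are toy values for illustration, not those of a real panel:

```python
def charge_cell(row, col, display_rows, address_cols):
    """Select one cell by driving the display (row) electrode and the
    address (column) electrode that intersect at it; only the cell at
    that intersection sees the full voltage difference and ionises."""
    assert 0 <= row < display_rows and 0 <= col < address_cols
    return (row, col)

def scan_order(display_rows, address_cols):
    """Refreshing the panel means charging every cell in turn,
    thousands of times in a small fraction of a second."""
    return [(r, c) for r in range(display_rows) for c in range(address_cols)]

selected = charge_cell(2, 1, display_rows=4, address_cols=3)
cells = scan_order(4, 3)   # 12 cells for a toy 4x3 panel
```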
The released ultraviolet photons interact with phosphor material coated on the inside
wall of the cell. Phosphors are substances that give off light when they are exposed to other
light. When an ultraviolet photon hits a phosphor atom in the cell, one of the phosphor's
electrons jumps to a higher energy level and the atom heats up. When the electron falls back
to its normal level, it releases energy in the form of a visible light photon.

The phosphors in a plasma display give off colored light when they are excited. Every
pixel is made up of three separate subpixel cells, each with different colored phosphors. One
subpixel has a red light phosphor, one subpixel has a green light phosphor and one subpixel
has a blue light phosphor. These colors blend together to create the overall color of the pixel.

By varying the pulses of current flowing through the different cells, the control
system can increase or decrease the intensity of each subpixel color to create hundreds of
different combinations of red, green and blue. In this way, the control system can produce
colors across the entire spectrum.

The main advantage of plasma display technology is that you can produce a very wide
screen using extremely thin materials. And because each pixel is lit individually, the image is
very bright and looks good from almost every angle. The image quality isn't quite up to the
standards of the best cathode ray tube sets, but it certainly meets most people's expectations.

The biggest drawback of this technology has to be the price. With prices starting at
$4,000 and going all the way up past $20,000, these sets aren't exactly flying off the shelves.
But as prices fall and technology advances, they may start to edge out the old CRT sets. In
the near future, setting up a new TV might be as easy as hanging a picture!

TOUCHSCREEN
A touchscreen is an electronic visual display that can detect the presence and location
of a touch within the display area. The term generally refers to touching the display of the
device with a finger or hand. Touchscreens can also sense other passive objects, such as a
stylus. Touchscreens are common in devices such as all-in-one computers, tablet computers,
and smartphones. The touchscreen has two main attributes. First, it enables one to interact
directly with what is displayed, rather than indirectly with a cursor controlled by a mouse or
touchpad. Secondly, it lets one do so without requiring any intermediate device that would
need to be held in the hand. Such displays can be attached to computers, or to networks as
terminals. They also play a prominent role in the design of digital appliances such as the
personal digital assistant (PDA), satellite navigation devices, mobile phones, and video
games.

Touchscreens are popular in hospitality, and in heavy industry, as well as kiosks such
as museum displays or room automation, where keyboard and mouse systems do not allow a
suitably intuitive, rapid, or accurate interaction by the user with the display's content.

Historically, the touchscreen sensor and its accompanying controller-based firmware
have been made available by a wide array of after-market system integrators, and not by
display, chip, or motherboard manufacturers. Display manufacturers and chip manufacturers
worldwide have acknowledged the trend toward acceptance of touchscreens as a highly
desirable user interface component and have begun to integrate touchscreen functionality into
the fundamental design of their products.

Technologies
There are a variety of touchscreen technologies.

Resistive

A resistive touchscreen panel is composed of several layers, the most important of
which are two thin, electrically conductive layers separated by a narrow gap. When an object,
such as a finger, presses down on a point on the panel's outer surface the two metallic layers
become connected at that point: the panel then behaves as a pair of voltage dividers with
connected outputs. This causes a change in the electrical current, which is registered as a
touch event and sent to the controller for processing.
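The voltage-divider behaviour can be sketched for a hypothetical 4-wire resistive panel read through a 10-bit ADC; both the wiring scheme and the ADC width are illustrative assumptions, not details given in the text:

```python
def touch_position(adc_x, adc_y, adc_max=1023):
    """Convert the two voltage-divider readings of a 4-wire resistive
    panel into normalised (x, y) in 0..1.  adc_x is sampled with the
    drive voltage applied across the horizontal layer (the other layer
    acting as the probe); adc_y is sampled with the roles swapped.
    The contact point divides the driven layer's resistance, so the
    probed voltage is proportional to position."""
    return adc_x / adc_max, adc_y / adc_max

x, y = touch_position(512, 256)   # roughly (0.50, 0.25) of the panel
corner = touch_position(1023, 0)  # the (1.0, 0.0) corner
```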

Surface acoustic wave

Surface acoustic wave (SAW) technology uses ultrasonic waves that pass over the
touchscreen panel. When the panel is touched, a portion of the wave is absorbed. This change
in the ultrasonic waves registers the position of the touch event and sends this information to
the controller for processing. Surface wave touchscreen panels can be damaged by outside
elements. Contaminants on the surface can also interfere with the functionality of the
touchscreen.

Capacitive

Capacitive touchscreen of a mobile phone

A capacitive touchscreen panel is one which consists of an insulator such as glass,
coated with a transparent conductor such as indium tin oxide (ITO). As the human body is
also an electrical conductor, touching the surface of the screen results in a distortion of the
screen's electrostatic field, measurable as a change in capacitance. Different technologies may
be used to determine the location of the touch. The location is then sent to the controller for
processing. Unlike a resistive touchscreen, one cannot use a capacitive touchscreen through
most types of electrically insulating material, such as gloves; one requires a special capacitive
stylus, or a special-application glove with finger tips that generate static electricity. This
disadvantage especially affects usability in consumer electronics, such as touch tablet PCs
and capacitive smartphones.

Surface capacitance

In this basic technology, only one side of the insulator is coated with a conductive
layer. A small voltage is applied to the layer, resulting in a uniform electrostatic field. When
a conductor, such as a human finger, touches the uncoated surface, a capacitor is dynamically
formed. The sensor's controller can determine the location of the touch indirectly from the
change in the capacitance as measured from the four corners of the panel. As it has no
moving parts, it is moderately durable but has limited resolution, is prone to false signals
from parasitic capacitive coupling, and needs calibration during manufacture. It is therefore
most often used in simple applications such as industrial controls and kiosks.
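One simple way the four-corner measurement can be turned into a position is linear interpolation of each corner's share of the total current. This is an illustrative sketch only; as noted above, real controllers also apply factory calibration:

```python
def surface_cap_position(i_tl, i_tr, i_bl, i_br):
    """Estimate a normalised (x, y) from the currents measured at the
    top-left, top-right, bottom-left and bottom-right corner
    electrodes: the nearer the finger is to a corner, the larger that
    corner's share of the total current."""
    total = i_tl + i_tr + i_bl + i_br
    x = (i_tr + i_br) / total   # right-hand corners' share
    y = (i_bl + i_br) / total   # bottom corners' share
    return x, y

centre = surface_cap_position(1.0, 1.0, 1.0, 1.0)  # equal shares -> (0.5, 0.5)
x, y = surface_cap_position(1.0, 2.0, 2.0, 5.0)    # finger nearer bottom-right
```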

Projected capacitance

Projected Capacitive Touch (PCT) technology is a capacitive technology which
permits more accurate and flexible operation, by etching the conductive layer. An X-Y grid is
formed either by etching a single layer to form a grid pattern of electrodes, or by etching two
separate, perpendicular layers of conductive material with parallel lines or tracks to form the
grid (comparable to the pixel grid found in many LCD displays).

The greater resolution of PCT allows operation without direct contact, such that the
conducting layers can be coated with further protective insulating layers, and operate even
under screen protectors, or behind weather and vandal-proof glass. Due to the top layer of a
PCT being glass, PCT is a more robust solution versus resistive touch technology. Depending
on the implementation, an active or passive stylus can be used instead of or in addition to a
finger. This is common with point of sale devices that require signature capture. Gloved
fingers may or may not be sensed, depending on the implementation and gain settings.
Conductive smudges and similar interference on the panel surface can interfere with the
performance. Such conductive smudges come mostly from sticky or sweaty finger tips,
especially in high humidity environments. Collected dust, which adheres to the screen due to
the moisture from fingertips can also be a problem. There are two types of PCT: Self
Capacitance and Mutual Capacitance.

Mutual capacitance

In mutual capacitive sensors, there is a capacitor at every intersection of each row and
each column. A 16-by-14 array, for example, would have 224 independent capacitors. A
voltage is applied to the rows or columns. Bringing a finger or conductive stylus close to the
surface of the sensor changes the local electrostatic field which reduces the mutual
capacitance. The capacitance change at every individual point on the grid can be measured to
accurately determine the touch location by measuring the voltage in the other axis. Mutual
capacitance allows multi-touch operation where multiple fingers, palms or styli can be
accurately tracked at the same time.
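The per-intersection measurement can be sketched as a full grid scan; the baseline values and touch deltas below are invented purely for illustration:

```python
def scan_mutual(baseline, measured):
    """Scan every row/column intersection of a mutual-capacitance grid
    and report the cells whose capacitance dropped below the baseline,
    i.e. the touch points.  Because every intersection is measured
    independently, multiple simultaneous touches resolve cleanly."""
    touches = []
    for r, row in enumerate(measured):
        for c, value in enumerate(row):
            if value < baseline[r][c]:
                touches.append((r, c))
    return touches

# A 16-by-14 array has 16 * 14 = 224 independent capacitors:
rows, cols = 16, 14
baseline = [[100] * cols for _ in range(rows)]
frame = [row[:] for row in baseline]
frame[3][5] = 80    # first finger lowers the local capacitance
frame[10][2] = 85   # second finger, tracked independently
touches = scan_mutual(baseline, frame)
```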

Self-capacitance

Self-capacitance sensors can have the same X-Y grid as mutual capacitance sensors,
but the columns and rows operate independently. With self-capacitance, the capacitive load
of a finger is measured on each column or row electrode by a current meter. This method
produces a stronger signal than mutual capacitance, but it is unable to resolve accurately
more than one finger, which results in "ghosting", or misplaced location sensing.
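The ghosting problem can be sketched directly: because the rows and columns are read independently, two touches light up two rows and two columns, and every intersection becomes a candidate (the coordinates here are arbitrary examples):

```python
def self_cap_candidates(active_rows, active_cols):
    """Self-capacitance reads each row and column electrode
    independently, so the controller only learns which rows and which
    columns carry a finger's capacitive load.  Every intersection of
    an active row with an active column is a candidate touch point;
    real touches and 'ghosts' are indistinguishable."""
    return [(r, c) for r in active_rows for c in active_cols]

# Real touches at (2, 7) and (9, 3) activate rows {2, 9} and
# columns {7, 3}, producing four candidates -- two real, two ghost:
candidates = self_cap_candidates([2, 9], [7, 3])
```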

Infrared

Infrared sensors mounted around the display watch for a user's touchscreen input on
this PLATO V terminal in 1981. The monochromatic plasma display's characteristic orange
glow is illustrated.

An infrared touchscreen uses an array of X-Y infrared LED and photodetector pairs
around the edges of the screen to detect a disruption in the pattern of LED beams. These LED
beams cross each other in vertical and horizontal patterns. This helps the sensors pick up the
exact location of the touch. A major benefit of such a system is that it can detect essentially
any input including a finger, gloved finger, stylus or pen. It is generally used in outdoor
applications and point of sale systems which can't rely on a conductor (such as a bare finger)
to activate the touchscreen. Unlike capacitive touchscreens, infrared touchscreens do not
require any patterning on the glass which increases durability and optical clarity of the overall
system.
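The beam-grid readout can be sketched as follows; indexing the LED beams by integer position is an illustrative simplification of a real sensor frame:

```python
def ir_touch(broken_x_beams, broken_y_beams):
    """Locate a touch from the interrupted horizontal-axis and
    vertical-axis LED beams.  Any opaque object (bare finger, gloved
    finger, stylus or pen) casts a shadow, since no electrical
    conductivity is needed.  Returns None if no beams are broken."""
    if not broken_x_beams or not broken_y_beams:
        return None
    # Take the centre of the interrupted beams on each axis:
    x = sum(broken_x_beams) / len(broken_x_beams)
    y = sum(broken_y_beams) / len(broken_y_beams)
    return x, y

pos = ir_touch([11, 12, 13], [4, 5])  # a fingertip blocking a few beams
miss = ir_touch([], [])               # nothing blocking the grid
```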

Optical imaging

This is a relatively modern development in touchscreen technology, in which two or
more image sensors are placed around the edges (mostly the corners) of the screen. Infrared
back lights are placed in the camera's field of view on the other side of the screen. A touch
shows up as a shadow and each pair of cameras can then be pinpointed to locate the touch or
even measure the size of the touching object (see visual hull). This technology is growing in
popularity, due to its scalability, versatility, and affordability, especially for larger units.

Dispersive signal technology

Introduced in 2002 by 3M, this system uses sensors to detect the mechanical energy in
the glass that occurs due to a touch. Complex algorithms then interpret this information and
provide the actual location of the touch.[14] The technology claims to be unaffected by dust
and other outside elements, including scratches. Since there is no need for additional
elements on screen, it also claims to provide excellent optical clarity. Also, since mechanical
vibrations are used to detect a touch event, any object can be used to generate these events,
including fingers and stylus. A downside is that after the initial touch the system cannot
detect a motionless finger.

Acoustic pulse recognition

This system, introduced by Tyco International's Elo division in 2006, uses
piezoelectric transducers located at various positions around the screen to turn the mechanical
energy of a touch (vibration) into an electronic signal. The screen hardware then uses an
algorithm to determine the location of the touch based on the transducer signals. The
touchscreen itself is made of ordinary glass, giving it good durability and optical clarity. It is
usually able to function with scratches and dust on the screen with good accuracy. The
technology is also well suited to displays that are physically larger. As with the Dispersive
Signal Technology system, after the initial touch, a motionless finger cannot be detected.
However, for the same reason, the touch recognition is not disrupted by any resting objects.

Construction
There are several principal ways to build a touchscreen. The key goals are to
recognize one or more fingers touching a display, to interpret the command that this
represents, and to communicate the command to the appropriate application.

In the most popular techniques, the capacitive or resistive approach, there are typically
four layers;

1. Top polyester coated with a transparent metallic conductive coating on the bottom
2. Adhesive spacer
3. Glass layer coated with a transparent metallic conductive coating on the top
4. Adhesive layer on the backside of the glass for mounting.

When a user touches the surface, the system records the change in the electrical current
that flows through the display.

Dispersive-signal technology, which 3M created in 2002, measures the piezoelectric effect
— the voltage generated when mechanical force is applied to a material — that occurs
when a chemically strengthened glass substrate is touched.
There are two infrared-based approaches. In one, an array of sensors detects a finger
touching or almost touching the display, thereby interrupting light beams projected over the
screen. In the other, bottom-mounted infrared cameras record screen touches.

In each case, the system determines the intended command based on the controls showing
on the screen at the time and the location of the touch.

Development
Most touchscreen technology patents were filed during the 1970s and 1980s and have
expired. Touchscreen component manufacturing and product design are no longer
encumbered by royalties or legalities with regard to patents and the use of touchscreen-
enabled displays is widespread.

The development of multipoint touchscreens facilitated the tracking of more than one
finger on the screen; thus, operations that require more than one finger are possible. These
devices also allow multiple users to interact with the touchscreen simultaneously.

With the growing use of touchscreens, the marginal cost of touchscreen technology is
routinely absorbed into the products that incorporate it and is nearly eliminated.
Touchscreens now have proven reliability. Thus, touchscreen displays are found today in
airplanes, automobiles, gaming consoles, machine control systems, appliances, and handheld
display devices including the Nintendo DS and the later multi-touch enabled iPhones; the
touchscreen market for mobile devices is projected to produce US$5 billion in 2009.

The ability to accurately point on the screen itself is also advancing with the emerging
graphics tablet/screen hybrids.

Screen protectors
Some touchscreens, primarily those employed in smartphones, use transparent plastic
protectors to prevent any scratches that might be caused by day-to-day use from becoming
permanent.

A touch screen is a computer display screen that is also an input device. The screens are
sensitive to pressure; a user interacts with the computer by touching pictures or words on the
screen.

There are three types of touch screen technology:

 Resistive: A resistive touch screen panel is coated with a thin metallic electrically
conductive and resistive layer that causes a change in the electrical current which is
registered as a touch event and sent to the controller for processing. Resistive touch
screen panels are generally more affordable but offer only 75% clarity, and the layer
can be damaged by sharp objects. Resistive touch screen panels are not affected by
outside elements such as dust or water.

 Surface wave: Surface wave technology uses ultrasonic waves that pass over the
touch screen panel. When the panel is touched, a portion of the wave is absorbed. This
change in the ultrasonic waves registers the position of the touch event and sends this
information to the controller for processing. Surface wave touch screen panels are the
most advanced of the three types, but they can be damaged by outside elements.
 Capacitive: A capacitive touch screen panel is coated with a material that stores
electrical charges. When the panel is touched, a small amount of charge is drawn to
the point of contact. Circuits located at each corner of the panel measure the charge
and send the information to the controller for processing. Capacitive touch screen
panels must be touched with a finger, unlike resistive and surface wave panels, which can
be used with a finger or a stylus. Capacitive touch screens are not affected by outside elements
and have high clarity.
THE MULTI FUNCTION KEYBOARD (MFK) is an avionics sub-system through which
the pilot configures mission-related parameters such as the flight plan, airfield database
and communication equipment during the initialization and operational flight phases of a mission.
The MFK consists of a MOTOROLA 68000 series processor with ROM, RAM and
EEPROM memory. It is connected to one of the 1553B buses used for data communication.
It is also connected to the Multi Function Rotary switch (MFR) through a RS422 interface.
The MFK has a built-in display unit and a keyboard. The display unit is a pair of LCD based
Colour Graphical Display, as well as a Monochrome Heads-Up Display. The Real-time
operating requirements are very stringent in such applications because the performance
and safety of the aircraft depend on them. Efficient design of the architecture and code is
required for successful operation.

Technology Highlights:
1. pSOS Real- Time OS,
2. 68000 Processor,
3. C and Assembly code,
4. 1553B Bus Protocol
A multifunction control is a panel made up of several multifunction switches; each
switch is capable of performing more than one function. If the switches are push buttons or
keys, the device is called a multifunction keyboard (MFK). Each switch is capable of
inputting different bits of information due to the implementation of a logic network. Thus, it is
essential that the pilot know the significance of each switch actuation. To accomplish this, the
legend for each switch must be appropriate to the function it is serving at the time. Projection
switch hardware changes a legend on the switch itself. Other mechanizations, e.g., plasma
panels, change a legend on a display surface adjacent to the switch. No matter what the type
of mechanization, the essential features of the MFK remain the same. Dedicated, single
purpose master switches enable the pilot to establish an initial set of capabilities for the
multifunction switches. Then, the multifunction switches allow the pilot to perform specific
operations. For example, a plasma panel version of an MFK is shown in Figure 1.

Across the top of the display surface are nine dedicated master switches. The
multifunction switches are mounted in columns on the left and right portions of the bezel and
have legends on them. Each legend appears on the plasma panel next to the switch. The
number of these legends for each switch is limited only by the memory in the digital
computer. In Figure 1, the master switch labelled COMM (for communications) has been
selected. Therefore, the legends appearing next to the multifunction switches indicate a
variety of communication radios which the pilot may wish to control. The next step would
be to select the specific radio to be operated. This selection would change the legends to
appropriate titles for the multifunction switches and would allow the pilot to turn on the
radio, change its frequency, and so on. Each change of switch function is called a logic
level. The MFK provides tremendous freedom for the cockpit designer in that he can
allocate a number of functions to a single control panel and, thus, reduce the number of
control heads and switches in the cockpit. This design helps the pilot by providing a
single, easily reachable control location.

2. PURPOSE:
The MFK has been designed to integrate the many dedicated control functions found
in present day cockpits into a more efficient arrangement. The purpose of this study was to
examine pilot performance changes while operating MFKs during simulated flight. The
following specific factors related to MFK operation were investigated.
a. MFK Hardware Type: Three types of keyboards were used. The main thrust of the
investigation was to compare two of these (projection switch MFK with a plasma panel
MFK). For each task, one of the MFKs was mounted on the front panel and the other MFK
was mounted on the right console. Both MFKs were evaluated in both locations (Figures 2).
Each of these two keyboards was used in conjunction with a dedicated third keyboard for
some tasks. This dedicated keyboard included switches for mode selection and for digit entry;
hence, it was referred to as a Digit/Mode Panel. It was always located on the left console (see
Figures 2).

b. Logic Level Arrangements: Each task, whether a communication change or a navigation
update, required a four step operating sequence. Each step in these operating sequences is
called a logic level. For example, in the present study, the pilot had to go through the
following four steps or logic levels to change a UHF radio frequency:
Step 1 - select the communication function from all other functions on the keyboard.
Step 2 - select the UHF radio from among the other radios on board the aircraft.
Step 3 - select the frequency change function from among the other functions of the UHF
radio.
Step 4 - enter the appropriate frequency.
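The four-level sequence above amounts to walking a small menu tree and then entering data. A minimal Python sketch of that idea follows; the menu names, switch keys and return value are illustrative only and are not taken from any real MFK design:

```python
# Hypothetical sketch of the four-logic-level sequence described above.
# Menu and key names are illustrative, not from any real MFK.

MENU = {
    "TOP":       {"COMM": "RADIOS"},       # level 1: select a function group
    "RADIOS":    {"UHF": "UHF_FUNCS"},     # level 2: select a specific radio
    "UHF_FUNCS": {"FREQ": "ENTRY"},        # level 3: select a radio function
}

def change_uhf_frequency(freq):
    """Walk the four logic levels to set a new UHF frequency."""
    level = "TOP"
    for key in ("COMM", "UHF", "FREQ"):    # steps 1-3: menu selections
        level = MENU[level][key]
    assert level == "ENTRY"
    return ("UHF", freq)                   # step 4: digit entry

print(change_uhf_frequency(243.0))         # -> ('UHF', 243.0)
```

Each dictionary lookup corresponds to one logic level; a keyboard failure study like the one described simply re-hosts the same tree on the backup keyboard.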

Each MFK had the capability for all four levels of systems control and information,
whereas the Digit/Mode Panel had only the capability to function for logic levels 1 and/or 4.
This study examined logic level arrangements in terms of whether operations of the four
logic levels should be performed on one keyboard only (one of the MFKs) or on two
keyboards (divided between one of the MFKs and the Digit/Mode Panel).

c. Control Stick Location: Another factor investigated in relation to MFK operation was the
effect of both a center and side control stick location on the operation of the MFKs. The
distinction should be made that it was not the intent of this study to evaluate differences in
stick location, but rather the effects of stick location on MFK operation. The MFK being
evaluated in the front location was designated as "primary" for a task. When failures of the
primary MFK were introduced, the other MFK, mounted on the right console, was used as a
"backup". It was expected that a center stick would tend to interfere with the primary MFK
and that a side stick would tend to interfere with the backup MFK.

d. Degraded Mode Performance between MFKs: One crucial drawback to the MFK is the
loss of capability with keyboard failure. This study dealt specifically with this problem in that
it studied the operation of backup MFKs to be used when the primary keyboard fails.
Operation during normal modes involved either the front instrument panel MFK or the front
instrument panel MFK and the Digit/Mode Panel. Failed modes involved operation of either
the right console MFK or the right console MFK and the Digit/Mode Panel. Failures were
initiated only between task events. During a failed mode, either the front panel MFK or the
Digit/Mode Panel, became inoperative. The logic levels that had been on that keyboard then
became operable on the right console backup MFK. (Both the projection switch and plasma
panel type MFK were examined in the right console location during failed conditions.)

Hands On Throttle-And-Stick (HOTAS)


HOTAS, an abbreviation for Hands On Throttle-And-Stick, is the name given to the
concept of placing buttons and switches on the throttle stick and flight control stick in an
aircraft's cockpit, allowing the pilot to access vital cockpit functions and fly the aircraft
without having to remove his hands from the throttle and flight controls. Application of the
concept was pioneered by the English Electric Lightning and is widely used on all modern
fighter aircraft such as the F-16 Fighting Falcon.

HOTAS is a shorthand term which refers to the pattern of controls in the modern
fighter aircraft cockpit. Having all switches on the stick and throttle allows the pilot to keep
his "hands on throttle-and-stick", thus allowing him to remain focused on more important
duties than looking for controls in the cockpit. The goal is to improve the pilot's situational
awareness, his ability to manipulate switch and button controls in turbulence, under stress, or
during high G-force maneuvers, to improve his reaction time, to minimize instances when he
must remove his hands from one or the other of the aircraft's controls to use another aircraft
system, and to minimize the total time spent doing so.

The concept has also been applied to the steering wheels of modern open-wheel
racecars, like those used in Formula One and the Indy Racing League. HOTAS has been
adapted for game controllers used for flight simulators (most such controllers are based on
the F-16 Fighting Falcon's) and in cars equipped with radio controls on the steering wheel. In
the modern military aircraft cockpit the HOTAS concept is sometimes enhanced by the use of
Direct Voice Input to produce the so-called "V-TAS" concept, and augmented with helmet
mounted display systems such as the "Schlem" used in the MiG-29 and Su-27, which allow
the pilot to control various systems using his line of sight, and to guide missiles by simply
looking at the target.

The A-10's HOTAS control stick and throttle together form the primary pilot controls of
the aircraft. The control stick's primary function is to provide pitch and roll
commands to the aircraft's flight control system. It also has several pushbuttons and hat
switches used to control various aircraft functions. Many of the control stick's controls have
different functions depending on the aircraft's current master mode and sensor of interest.
Some also have different functions depending on whether they are clicked once or held down.
Controls

The A-10 HOTAS reference PDF contains a summary of all of the A-10's HOTAS
switches and their functions.

Trigger

The trigger is located at the normal position in front of the control stick. It is a two-
stage, gun-style trigger which is used to control the aircraft's cannon and precision attitude
control (PAC) system. Depressing the trigger's first stage activates the PAC system, and
depressing the second stage fires the aircraft's GAU-8 Avenger cannon.

Master mode control button

The master mode control button (MMCB) is a small grey pushbutton located on the
right side of the control stick, which is used to set the master mode of the IFFCC and HUD.
Short button presses will cycle the HUD mode between NAV, GUNS, CCIP, and CCRP
modes. A long button press selects A-A mode, regardless of the currently selected mode.

Weapon release button

The weapon release button is a red pushbutton located on the back of the control stick,
to the left of the trim switch. It is used to fire the currently selected weapon, based on master
mode and DSMS settings. Some weapons require the button to be held for a second before
the weapon is released.

Trim hat switch

The trim hat switch is located immediately to the right of the weapon release button. It
is used to adjust the aircraft's pitch and roll trim, relieving control forces on the stick and
reducing the pilot's workload. Pressing the switch forward adjusts the pitch trim nose-down,
and pulling it back adjusts the pitch trim nose-up. Moving the switch to the left or right
adjusts the roll trim in the corresponding direction.

Yaw trim is not controlled by the trim hat switch - it is set using a knob on the SAS control
panel.

Data management switch

The data management switch (DMS) is used to control various functions of the
current sensor of interest. The functions activated by the DMS are dependent on the current
SOI, as well as the length of time that the DMS is held down. The table below summarizes
these functions.

Target management switch

Like the DMS, the target management switch (TMS) controls various functions of the
current SOI. The selected function is dependent on the current SOI, as well as the length of
time that the TMS is held down. The table below summarizes these functions.

Countermeasures switch

The countermeasures switch (CMS) is located on the left side of the control stick grip,
roughly where the pilot's thumb is located. It is used to control the operation of the aircraft's
countermeasure set, allowing the pilot to quickly choose, activate, and deactivate
countermeasure programs. It has the following functions:

 Forward press - activate countermeasure program
 Aft press - deactivate countermeasure program
 Left press - select previous countermeasure program
 Right press - select next countermeasure program

In addition to the four directions, the entire switch may be pressed in to toggle the jammer
function of the AN/ALQ-131 electronic countermeasures pod.

Nosewheel steering button

The nosewheel steering button is a small grey button located at the front bottom of the
control stick (intended to be pressed by the pilot's pinky). On the ground, the NWS button
toggles the nosewheel steering system, allowing the rudder pedals to steer the aircraft. In
the air, it is used to trigger the targeting pod's laser, and also to disconnect from
the tanker during in-air refueling.

DIRECT VOICE INPUT (DVI): (sometimes called voice input control (VIC)) is a style of
human–machine interaction "HMI" in which the user makes voice commands to issue
instructions to the machine. It has found some usage in the design of the cockpits of several
modern military aircraft, particularly the Eurofighter Typhoon, the F-35 Lightning II, the
Dassault Rafale and the JAS 39 Gripen, having been trialled on earlier fast jets such as the
Harrier AV-8B and F-16 VISTA. A study has also been undertaken by the Royal Netherlands
Air Force using voice control in a F-16 simulator. The USAF initially wanted DVI for the
Lockheed Martin F-22 Raptor, but it was finally judged too technically risky and was
abandoned. DVI systems may be "user-dependent" or "user-independent". User-dependent
systems require a personal voice template to be created by the pilot which must then be
loaded onto the aircraft before flight. User-independent systems do not require any personal
voice template and will work with the voice of any user.

In 2006 Zon and Roerdink, at the National Aerospace Laboratory in the Netherlands,
examined the use of Direct Voice Input in the "GRACE" simulator, in an experiment in
which twelve pilots participated. Although the hardware performed well, the researchers
discovered that, before installation in a real aircraft their DVI system would need some
improvement, since operation of the DVI took more time than the existing manual method.
They recommended that:

 The syntax must become simpler;
 The recognition rate of the system must improve;
 Response time of the system must decrease.

They suggested that all of these issues were of a technological nature and thus seemed feasible
to solve. They concluded that in cockpits, especially during emergencies where pilots have to
operate the entire aircraft on their own, a DVI system might be very relevant. During other
situations it seemed to be interesting but not of crucial importance.
UNIT IV INTRODUCTION TO NAVIGATION SYSTEMS

Radio navigation – ADF, DME, VOR, LORAN, DECCA, OMEGA, ILS, MLS – Inertial
Navigation Systems (INS) – Inertial sensors, INS block diagram – Satellite navigation
systems – GPS.

Navigation Systems:

A relatively modern Boeing 737 Flight Management System (FMS) flight deck unit, which automates many air navigation tasks

The basic principles of air navigation are identical to general navigation,
which includes the process of planning, recording, and controlling the movement of a craft
from one place to another. Successful air navigation involves piloting an aircraft from place
to place without getting lost, breaking the laws applying to aircraft, or endangering the safety
of those on board or on the ground. Air navigation differs from the navigation of surface craft
in several ways: Aircraft travel at relatively high speeds, leaving less time to calculate their
position en route. Aircraft normally cannot stop in mid-air to ascertain their position at
leisure. Aircraft are safety-limited by the amount of fuel they can carry; a surface vehicle can
usually get lost, run out of fuel, then simply await rescue. There is no in-flight rescue for
most aircraft. Additionally, collisions with obstructions are usually fatal. Therefore, constant
awareness of position is critical for aircraft pilots.
The techniques used for navigation in the air will depend on whether the aircraft is flying
under visual flight rules (VFR) or instrument flight rules (IFR). In the latter case,
the pilot will navigate exclusively using instruments and radio navigation aids such as
beacons, or as directed under radar control by air traffic control. In the VFR case, a pilot will
largely navigate using "dead reckoning" combined with visual observations (known
as pilotage), with reference to appropriate maps. This may be supplemented using radio
navigation aids.

Types of Navigation systems:


 GPS
 TACAN
 RADAR
 DNS
 ILS
 DME
 VOR
 RDF
 ADF
 ONS

Global Positioning System

The Global Positioning System (GPS) is a space-based satellite navigation system that
provides location and time information in all weather conditions, anywhere on
or near the earth where there is an unobstructed line of sight to four or more GPS satellites.[1] The
system provides critical capabilities to military, civil, and commercial users around the world. The
United States government created the system, maintains it, and makes it freely accessible to anyone
with a GPS receiver.

The US began the GPS project in 1973 to overcome the limitations of previous navigation
systems, integrating ideas from several predecessors, including a number of classified
engineering design studies from the 1960s.[2] The U.S. Department of Defense (DoD)
developed the system, which
originally used 24 satellites. It became fully operational in 1995. Bradford Parkinson, Roger L.
Easton, and Ivan A. Getting are credited with inventing it.

Advances in technology and new demands on the existing system have now led to efforts to
modernize the GPS system and implement the next generation of GPS Block IIIA satellites and Next
Generation Operational Control System (OCX). Announcements from Vice President Al Gore and
the White House in 1998 initiated these changes. In 2000, the U.S. Congress authorized the
modernization effort, GPS III.

In addition to GPS, other systems are in use or under development. The Russian Global Navigation
Satellite System (GLONASS) was developed contemporaneously with GPS, but suffered from
incomplete coverage of the globe until the mid-2000s. There are also the planned European
Union Galileo positioning system, India's Indian Regional Navigation Satellite System, and the
Chinese BeiDou Navigation Satellite System.
Fundamentals

The GPS concept is based on time. The satellites carry very stable atomic clocks that are synchronized
to each other and to ground clocks. Any drift from true time maintained on the ground is corrected
daily. Likewise, the satellite locations are monitored precisely. GPS receivers have clocks as well—
however, they are not synchronized with true time, and are less stable. GPS satellites continuously
transmit their current time and position. A GPS receiver monitors multiple satellites and solves
equations to determine the exact position of the receiver and its deviation from true time. At a
minimum, four satellites must be in view of the receiver for it to compute four unknown quantities
(three position coordinates and clock deviation from satellite time).
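The "four satellites, four unknowns" solution described above can be sketched as an iterative least-squares (Gauss-Newton) solve. This is an illustrative toy, not flight code: the satellite positions and pseudoranges below are synthetic, and real receivers apply corrections (ionosphere, troposphere, relativity) that are ignored here:

```python
import math

def solve4(A, v):
    """Solve a 4x4 linear system by Gaussian elimination with partial pivoting."""
    n = 4
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gps_fix(sats, pseudoranges, iters=12):
    """Gauss-Newton solution of the four unknowns: x, y, z and the clock
    term c*b (in metres).  pseudorange_i = |receiver - sat_i| + c*b."""
    X = [0.0, 0.0, 0.0, 0.0]
    for _ in range(iters):
        A, resid = [], []
        for (sx, sy, sz), rho in zip(sats, pseudoranges):
            d = math.dist(X[:3], (sx, sy, sz))
            # Jacobian row: unit vector from satellite toward receiver, plus 1
            # for the clock term; residual: measured minus predicted range.
            A.append([(X[0] - sx) / d, (X[1] - sy) / d, (X[2] - sz) / d, 1.0])
            resid.append(rho - (d + X[3]))
        X = [a + dx for a, dx in zip(X, solve4(A, resid))]
    return X

# Synthetic check: four satellites, a receiver with a 3 km clock-bias term.
sats = [(26600e3, 0, 0), (0, 26600e3, 0), (0, 0, 26600e3),
        (15000e3, 15000e3, 15000e3)]
true_pos, clock_m = (6371e3, 1000e3, 2000e3), 3000.0
pr = [math.dist(true_pos, s) + clock_m for s in sats]
x, y, z, cb = gps_fix(sats, pr)
print(round(x), round(y), round(z), round(cb))
```

With four pseudoranges the system is exactly determined; a fifth satellite would make it over-determined and improve the fix.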

Tactical air navigation system

A tactical air navigation system, commonly referred to by the acronym TACAN, is a
navigation system used by military aircraft. It provides the user with bearing and distance (slant-
range) to a ground or ship-borne station. It is a more accurate version of the VOR/DME system that
provides bearing and range information for civil aviation. The DME portion of the TACAN system is
available for civil use; at VORTAC facilities where a VOR is combined with a TACAN, civil aircraft
can receive VOR/DME readings. Aircraft equipped with TACAN avionics can use this system for en
route navigation as well as non-precision approaches to landing fields. The Space Shuttle was one
such vehicle: it was designed to use TACAN navigation but was later upgraded with GPS as a replacement.

The typical TACAN onboard user panel has control switches for setting the channel (corresponding to
the desired surface station's assigned frequency), the operation mode for either Transmit/Receive
(T/R, to get both bearing and range) or Receive Only (REC, to get bearing but not range). Capability
was later upgraded to include an Air-to-Air mode (A/A) where two airborne users can get relative
slant-range information. Depending on the installation, Air-to-Air mode may provide range, closure
(relative velocity of the other unit), and bearing, though an air-to-air bearing is noticeably less precise
than a ground-to-air bearing.
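Because TACAN/DME range is slant range, the direct line from aircraft to station, the distance over the ground is shorter whenever the aircraft is at altitude. A sketch of the conversion, assuming a simple flat-Earth right triangle (the function name is illustrative):

```python
import math

FT_PER_NM = 6076.12  # feet in one nautical mile

def ground_distance_nm(slant_range_nm, altitude_ft):
    """Convert an indicated DME/TACAN slant range to along-ground distance,
    assuming a flat-Earth right triangle: slant^2 = ground^2 + height^2."""
    height_nm = altitude_ft / FT_PER_NM
    if slant_range_nm <= height_nm:
        return 0.0  # aircraft essentially overhead the station
    return math.sqrt(slant_range_nm ** 2 - height_nm ** 2)

# At 36,000 ft, an indicated 10 NM slant range is only about 8 NM over the ground.
print(round(ground_distance_nm(10.0, 36000), 2))
```

The error is negligible at long range and low altitude, but significant close to the station, which is why the indicated range never reads zero when passing overhead.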

Operation
TACAN in general can be described as the military version of the VOR/DME system. It operates in
the frequency band 960-1215 MHz. The bearing unit of TACAN is more accurate than a standard
VOR since it makes use of a two-frequency principle, with 15 Hz and 135 Hz components, and
because UHF transmissions are less prone to signal bending than VHF.
The distance measurement component of TACAN operates with the same specifications as civil
DMEs. Therefore, to reduce the number of required stations, TACAN stations are frequently co-
located with VOR facilities. These co-located stations are known as VORTACs. This is a station
composed of a VOR for civil bearing information and a TACAN for military bearing information and
military/civil distance measuring information. The TACAN transponder performs the function of a
DME without the need for a separate, co-located DME. Because the rotation of the antenna creates a
large portion of the azimuth (bearing) signal, if the antenna fails, the azimuth component is no longer
available and the TACAN downgrades to a DME only mode.

Radar
Radar is an object-detection system that uses radio waves to determine the range, altitude, direction,
or speed of objects. It can be used to detect aircraft, ships, spacecraft, guided missiles, motor
vehicles, weather formations, and terrain. The radar dish (or antenna) transmits pulses of radio waves
or microwaves that bounce off any object in their path. The object returns a tiny part of the wave's
energy to a dish or antenna that is usually located at the same site as the transmitter.

Radar was secretly developed by several nations before and during World War II. The
term RADAR was coined in 1940 by the United States Navy as an acronym for Radio
Detection And Ranging. The term radar has since entered English and other languages as a common
noun, losing all capitalization.

The modern uses of radar are highly diverse, including air and terrestrial traffic control, radar
astronomy, air-defense systems, antimissile systems; marine radars to locate landmarks and other
ships; aircraft anticollision systems; ocean surveillance systems, outer space surveillance
and rendezvous systems; meteorological precipitation monitoring; altimetry and flight control
systems; guided missile target locating systems; and ground-penetrating radar for geological
observations. High tech radar systems are associated with digital signal processing and are capable of
extracting useful information from very high noise levels.

Other systems similar to radar make use of other parts of the electromagnetic spectrum. One example
is "lidar", which uses ultraviolet, visible, or near infrared light from lasers rather than radio waves.

Principles

A radar system has a transmitter that emits radio waves called radar signals in predetermined
directions. When these come into contact with an object they are usually reflected or scattered in
many directions. Radar signals are reflected especially well by materials of considerable electrical
conductivity—especially by most metals, by seawater and by wet ground. Some of these make the use
of radar altimeters possible. The radar signals that are reflected back towards the transmitter are the
desirable ones that make radar work. If the object is moving either toward or away from the
transmitter, there is a slight equivalent change in the frequency of the radio waves, caused by
the Doppler effect.
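The two measurements described above, range from the round-trip delay of a pulse and radial velocity from the Doppler shift, reduce to two one-line formulas. A sketch, assuming a monostatic radar (transmitter and receiver co-located) and an approximate speed of light:

```python
C = 3.0e8  # approximate speed of light, m/s

def echo_range(round_trip_s):
    """Range from pulse round-trip time: the pulse travels out and back,
    so the one-way distance is half of c * t."""
    return C * round_trip_s / 2.0

def doppler_velocity(f_tx_hz, doppler_shift_hz):
    """Radial (closing) velocity from the Doppler shift of the echo.
    For a monostatic radar the shift is about 2*v*f/c, so v = shift*c/(2*f)."""
    return doppler_shift_hz * C / (2.0 * f_tx_hz)

# A target whose echo returns 200 microseconds later is 30 km away:
print(echo_range(200e-6) / 1000)           # range in km
# A 10 GHz radar seeing a +2 kHz shift implies a 30 m/s closing speed:
print(doppler_velocity(10e9, 2000.0))      # m/s
```

The factor of two appears in both formulas for the same reason: the signal covers the transmitter-to-target path twice.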

Radar receivers are usually, but not always, in the same location as the transmitter. Although the
reflected radar signals captured by the receiving antenna are usually very weak, they can be
strengthened by electronic amplifiers. More sophisticated methods of signal processing are also used
in order to recover useful radar signals.

The weak absorption of radio waves by the medium through which they pass is what enables radar sets
to detect objects at relatively long ranges—ranges at which other electromagnetic wavelengths, such
as visible light, infrared light, and ultraviolet light, are too strongly attenuated. Such weather
phenomena as fog, clouds, rain, falling snow, and sleet that block visible light are usually transparent
to radio waves. Certain radio frequencies that are absorbed or scattered by water vapour, raindrops, or
atmospheric gases (especially oxygen) are avoided in designing radars, except when their detection is
intended.

Decca Navigator System


The Decca Navigator System was a hyperbolic radio navigation system which allowed ships and
aircraft to determine their position by receiving radio signals from fixed navigational beacons. The
system used low frequencies from 70 to 129 kHz. It was first deployed by the Royal
Navy during World War II when the Allied forces needed a system which could be used to achieve
accurate landings. After the war it was extensively developed around the UK and later used in many
areas around the world.
Decca's primary use was for ship navigation in coastal waters, offering much better accuracy than the
competing LORAN system. Fishing vessels were major post-war users, but it was also used on
aircraft, including a very early (1949) application of moving map displays. The system was deployed
extensively in the North Sea and was used by helicopters operating to oil platforms. The opening of
the more accurate Loran-C system to civilian use in 1974 offered stiff competition, but Decca was
well established by this time and continued operations into the 1990s. Decca was eventually replaced,
along with Loran and other similar systems, by the GPS during the 1990s. The Decca system in
Europe was shut down in the spring of 2000, and the last worldwide chain, in Japan, in 2001.

Principles of Operation
The Decca Navigator System consisted of a number of land-based radio beacons organised
into chains. Each chain consisted of a master station and three (occasionally two) slave stations,
termed Red, Green and Purple. Ideally, the slaves would be positioned at the vertices of an equilateral
triangle with the master at the centre. The baseline length, that is, the master-slave distance, was
typically 60–120 nautical miles (110–220 km).

Each station transmitted a continuous wave signal that, by comparing the phase difference of the
signals from the Master and one of the Slaves, resulted in a set of hyperbolic lines of position called
a pattern. As there were three Slaves there were three patterns, termed Red, Green and Purple. The
patterns were drawn on nautical charts as a set of hyperbolic lines in the appropriate colour. Receivers
identified which hyperbola they were on and a position could be plotted at the intersection of
the hyperbolae from different patterns, usually using the pair with the angle of cut as close to
orthogonal as possible.
Detailed Principles of Operation:

When two stations transmit at the same phase-locked frequency, the difference in phase between
the two signals is constant along a hyperbolic path. Of course, if two stations transmit on the
same frequency, it is practically impossible for the receiver to separate them; so instead of all
stations transmitting at the same frequency, each chain was allocated a nominal frequency, 1f,
and each station in the chain transmitted at a harmonic of this base frequency, as follows:
Station         Harmonic   Frequency (kHz)
Master          6f         85.000
Purple Slave    5f         70.833
Red Slave       8f         113.333
Green Slave     9f         127.500

The frequencies given are those for Chain 5B, known as the English Chain, but all chains used similar
frequencies between 70 kHz and 129 kHz.

Decca receivers multiplied the signals received from the Master and each Slave by different values to
arrive at a common frequency (least common multiple, LCM) for each Master/Slave pair, as
follows:
Pattern   Slave      Slave        Master     Master       Common
          Harmonic   Multiplier   Harmonic   Multiplier   Frequency
Purple    5f         ×6           6f         ×5           30f
Red       8f         ×3           6f         ×4           24f
Green     9f         ×2           6f         ×3           18f

It was phase comparison at this common frequency that resulted in the hyperbolic lines of position.
The interval between two adjacent hyperbolas on which the signals are in phase was called a lane.
Since the wavelength of the common frequency was small compared with the distance between the
Master and Slave stations there were many possible lines of position for a given phase difference, and
so a unique position could not be arrived at by this method.
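The harmonic relationships and lane widths described above can be checked numerically. The sketch below reproduces the Chain 5B figures, using an approximate propagation speed; lane width on the baseline is taken as half the wavelength of the comparison frequency:

```python
C = 3.0e8              # propagation speed, m/s (approximate)
F1 = 85.000e3 / 6      # base frequency 1f of Chain 5B, derived from the 6f Master

# (slave harmonic, receiver multiplier for slave, receiver multiplier for master)
patterns = {
    "Purple": (5, 6, 5),
    "Red":    (8, 3, 4),
    "Green":  (9, 2, 3),
}

for name, (slave_h, slave_mult, master_mult) in patterns.items():
    common = slave_h * slave_mult        # e.g. Red: 8f x 3 = 24f
    assert common == 6 * master_mult     # the Master path reaches the same harmonic
    f_common = common * F1               # comparison frequency, Hz
    lane_m = (C / f_common) / 2          # lane width on the baseline: half a wavelength
    print(f"{name}: {common}f = {f_common / 1e3:.1f} kHz, lane width ~ {lane_m:.0f} m")
```

This also makes the lane-ambiguity problem concrete: a Red lane is only about 440 m wide on the baseline, so many lanes lie between Master and Slave and extra "lane identification" signals were needed to resolve which one the receiver was in.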

Other receivers, typically for aeronautical applications, divided the transmitted frequencies down to
the basic frequency (1f) for phase comparison, rather than multiplying them up to the LCM frequency.

Instrument landing system

An instrument landing system (ILS) is a ground-based instrument approach system that provides
precision lateral and vertical guidance to an aircraft approaching and landing on a runway, using a
combination of radio signals and, in many cases, high-intensity lighting arrays to enable a safe landing
during instrument meteorological conditions (IMC), such as low ceilings or reduced visibility due to
fog, rain, or blowing snow.

An instrument approach procedure chart (or 'approach plate') is published for each ILS approach to
provide the information needed to fly an ILS approach during instrument flight rules (IFR) operations.
A chart includes the radio frequencies used by the ILS components or nav aids and the prescribed
minimum visibility requirements.

Radio-navigation aids must provide a certain accuracy (set by international standards of
CAST/ICAO); to ensure this is the case, flight inspection organizations periodically check critical
parameters with properly equipped aircraft to calibrate and certify ILS precision.

Principle of operation
An aircraft approaching a runway is guided by the ILS receivers in the aircraft by performing
modulation depth comparisons. Many aircraft can route signals into the autopilot to fly the approach
automatically. An ILS consists of two independent sub-systems. The localizer provides lateral
guidance; the glide slope provides vertical guidance.
Localizer:

A localizer is an antenna array normally located beyond the approach end of the runway and generally
consists of several pairs of directional antennas. Two signals are transmitted on one of 40 ILS
channels. One is modulated at 90 Hz, the other at 150 Hz. These are transmitted from co-located
antennas. Each antenna transmits a narrow beam, one slightly to the left of the runway centreline, the
other slightly to the right.

The localizer receiver on the aircraft measures the difference in the depth of modulation (DDM) of the
90 Hz and 150 Hz signals. The depth of modulation for each of the modulating frequencies is 20
percent when the receiver is on the centreline. The difference between the two signals varies
depending on the deviation of the approaching aircraft from the centreline.

If there is a predominance of either 90 Hz or 150 Hz modulation, the aircraft is off the centreline. In
the cockpit, the needle on the instrument part of the ILS (the omni-bearing indicator (nav
indicator), horizontal situation indicator (HSI), or course deviation indicator (CDI)) shows that the
aircraft needs to fly left or right to correct the error to fly toward the centre of the runway. If the DDM
is zero, the aircraft is on the LOC centreline coinciding with the physical runway centreline. The pilot
controls the aircraft so that the indicator remains centered on the display (i.e., it provides lateral
guidance). Full-scale deflection of the instrument corresponds to a DDM of 15.5%.
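The modulation-depth comparison above can be sketched in a few lines. The function name and the sign convention (which tone predominates on which side) are illustrative assumptions; the 0.20 centreline depth and 15.5% full-scale DDM come from the text.

```python
def localizer_deflection(depth_90: float, depth_150: float,
                         full_scale_ddm: float = 0.155) -> float:
    """CDI needle position as a fraction of full scale.

    On the centreline both modulation depths are 0.20, so DDM = 0 and
    the needle is centred; the sign only indicates which lobe
    predominates (convention assumed here).
    """
    ddm = depth_90 - depth_150          # difference in depth of modulation
    deflection = ddm / full_scale_ddm   # |DDM| = 15.5% pegs the needle
    return max(-1.0, min(1.0, deflection))

print(localizer_deflection(0.20, 0.20))  # on centreline -> 0.0
print(localizer_deflection(0.28, 0.12))  # well off course -> pegged at 1.0
```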
Glide slope, or path (GS, or GP):

A glide slope station uses an antenna array sited to one side of the runway touchdown zone. The GS
signal is transmitted on a carrier frequency using a technique similar to that for the localizer. The
centre of the glide slope signal is arranged to define a glide path of approximately 3° above horizontal
(ground level). The beam is 1.4° deep (0.7° below the glide-path centre and 0.7° above).

The pilot controls the aircraft so that the glide slope indicator remains centered on the display to
ensure the aircraft is following the glide path to remain above obstructions and reach the runway at
the proper touchdown point (i.e., it provides vertical guidance).
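The nominal glide-path geometry reduces to simple trigonometry. A minimal flat-earth sketch (function name illustrative; the 3° path and ±0.7° beam half-depth are from the text):

```python
import math

def glidepath_height_m(ground_distance_m: float, angle_deg: float = 3.0) -> float:
    """Height of the glide path above the antenna site at a given distance."""
    return ground_distance_m * math.tan(math.radians(angle_deg))

# Height of a 3 degree path 7,200 m out (roughly the outer-marker distance),
# and the half-depth of the 1.4 degree-deep beam at that point:
print(round(glidepath_height_m(7200.0)))       # ~377 m at beam centre
print(round(glidepath_height_m(7200.0, 0.7)))  # ~88 m above/below centre
```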
Outer marker

blue outer marker

The outer marker is normally located 7.2 kilometres (3.9 nmi; 4.5 mi) from the threshold, except that
where this distance is not practical, the outer marker may be located between 6.5 and 11.1 kilometres
(3.5 and 6.0 nmi; 4.0 and 6.9 mi) from the threshold. The modulation is repeated Morse-style dashes
of a 400 Hz tone (--) ("M"). The cockpit indicator is a blue lamp that flashes in unison with the
received audio code. The purpose of this beacon is to provide height, distance, and equipment
functioning checks to aircraft on intermediate and final approach. In the United States, an NDB is often
combined with the outer marker beacon in the ILS approach (called a Locator Outer Marker, or
LOM). In Canada, low-powered NDBs have replaced marker beacons entirely.
Middle marker
amber middle marker

The middle marker should be located so as to indicate, in low visibility conditions, the missed approach point and the point at which visual contact with the runway is imminent, ideally at a distance of approximately 3,500 ft (1,100 m) from the threshold. The modulation is repeated alternating Morse-style dots and dashes of a 1.3 kHz tone at the rate of two per second (·-·-) ("Ä" or "AA"). The cockpit indicator is an amber lamp that flashes in unison with the received audio code. In the United States, middle markers are not required, so many of them have been decommissioned.
Inner marker

white inner marker

The inner marker, when installed, shall be located so as to indicate in low visibility conditions the
imminence of arrival at the runway threshold. This is typically the position of an aircraft on the ILS as
it reaches Category II minima, ideally at a distance of approximately 1,000 ft (300 m) from the
threshold. The modulation is repeated Morse-style dots at 3 kHz (····) ("H"). The cockpit indicator is a
white lamp that flashes in unison with the received audio code.

Distance measuring equipment (DME) is a transponder-based radio navigation technology that measures slant-range distance by timing the propagation delay of VHF or UHF radio signals.

Developed in Australia, it was invented by James Gerry Gerrand under the supervision of Edward George "Taffy" Bowen while employed as Chief of the Division of Radiophysics of the Commonwealth Scientific and Industrial Research Organisation (CSIRO). Another engineered version of the system was deployed by Amalgamated Wireless Australasia Limited in the early 1950s, operating in the 200 MHz VHF band. This Australian domestic version was referred to by the Federal Department of Civil Aviation as DME(D) (or DME Domestic), and the later international version adopted by ICAO as DME(I).

DME is similar to secondary radar, except in reverse. The system was a post-war development of the
IFF (identification friend or foe) systems of World War II. To maintain compatibility, DME is
functionally identical to the distance measuring component of TACAN.
Knowledge of the aircraft’s position is a basic requirement for air navigation and one means
of satisfying this requirement is to present the pilot with bearing and distance information.
Bearing information may be derived in a variety of ways, some of which are via VOR or
ADF systems. Distance information may be derived from radar or by DME, which is a form
of radar.

In primary radar a short pulse is transmitted and the time interval from transmission to
reception of the reflected pulse is measured. As the speed of an electromagnetic pulse
through the atmosphere is 300 000 kilometres per second, or one nautical mile in 6.2 microseconds, the distance between the transmitter and the target can be calculated. In the case of
radar sited on the ground an aircraft target may be easily identified and the distance measured
readily due to its relative freedom from other reflecting objects. If radar is installed in an
aircraft, precise identification of specific ground targets, for all practical purposes, is very
difficult to effect due to mass reflection from surrounding objects. Hence primary radar is
supplemented by additional equipment at the target to enable distance to be reliably measured
to the necessary degree of accuracy. When primary radar is supplemented to accomplish this
task it then becomes a form of secondary radar.

In secondary radar, pulses known as interrogation pulses are transmitted and when received at
the target they are passed through a ‘gate’ and then trigger transmission of reply pulses back
to the initial source where the time interval may be measured and displayed as distance. The
‘gate’ in the target receiver is an electronic device which is preset to receive only matching
pulses.

In the DME system the interrogating equipment, known as the ‘Interrogator’, is installed in
the aircraft and the target, located on the ground, is referred to as the ‘Transponder’ or
‘Ground Beacon’.
DME complies with the standards prescribed by the International Civil Aviation Organisation
(ICAO) and is installed at all international airports, at all capital city airports and many
regional airports in Australia and along routes serving international traffic. It was developed
from a composite distance and bearing facility known as ‘Tactical Air Navigation’ (TACAN)
which was designed in the USA as an aid to military aircraft. The VOR fulfills the bearing
requirements for civil aviation navigation; hence this component of the TACAN system is not
used to assist civil air operations. A combined VOR/TACAN installation is commonly
referred to as ‘VORTAC’. Where TACAN is not installed for military purposes then a DME,
manufactured to the same specifications as the DME portion of TACAN, is installed. This is
referred to as VOR/DME.
The DME will measure the distance in a straight line to the ground beacon (the slant range), not the distance from a point on the ground vertically below the aircraft (the ground range). The difference is generally insignificant, except when directly over a beacon, where the distance shown will be the height above the beacon.
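The slant-range/ground-range relationship is just the Pythagorean theorem; a minimal sketch (names illustrative):

```python
import math

def ground_range_m(slant_range_m: float, height_above_beacon_m: float) -> float:
    """Horizontal distance to the beacon from DME slant range and height."""
    if height_above_beacon_m >= slant_range_m:
        return 0.0  # directly overhead: the DME reads height, not distance
    return math.sqrt(slant_range_m ** 2 - height_above_beacon_m ** 2)

NM, FT = 1852.0, 0.3048
# 10 nmi slant range while 6,000 ft above the beacon:
print(ground_range_m(10 * NM, 6000 * FT) / NM)  # ~9.95 nmi: nearly identical
```

As the example shows, at typical en-route geometries the slant-range error is only a few hundredths of a mile; it only dominates close to or over the station.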

Operation
Aircraft use DME to determine their distance from a land-based transponder by sending and receiving
pulse pairs – two pulses of fixed duration and separation. The ground stations are typically co-located
with VORs. A typical DME ground transponder system for en-route or terminal navigation will have
a 1 kW peak pulse output on the assigned UHF channel.
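The interrogator's timing step can be sketched as below. The fixed 50 µs reply delay inserted by a standard DME ground transponder is a detail not stated in the text above, and the function name and example numbers are illustrative.

```python
def dme_slant_range_m(round_trip_s: float,
                      transponder_delay_s: float = 50e-6) -> float:
    """Slant range from the interrogation-to-reply interval.

    The airborne set subtracts the transponder's fixed reply delay
    (assumed 50 microseconds here) and halves the remaining time to
    get the one-way propagation delay.
    """
    c = 299_792_458.0  # speed of light, m/s
    return c * (round_trip_s - transponder_delay_s) / 2.0

# A reply pair arriving 173.5 microseconds after the interrogation:
print(dme_slant_range_m(173.5e-6) / 1852.0)  # ~10 nmi slant range
```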

A low-power DME can be co-located with an ILS glide slope antenna installation where it provides
an accurate distance to touchdown function, similar to that otherwise provided by ILS marker
beacons.

VHF omnidirectional range


VHF Omni Directional Radio Range (VOR) is a type of short-range radio navigation system
for aircraft, enabling aircraft with a receiving unit to determine their position and stay on course by
receiving radio signals transmitted by a network of fixed ground radio beacons. It uses frequencies in
the very high frequency (VHF) band from 108 to 117.95 MHz. Developed in the United
States beginning in 1937 and deployed by 1946, VOR is the standard air navigational system in the
world, used by both commercial and general aviation. By 2000 there were about 3,000 VOR stations
around the world including 1,033 in the US, reduced to 967 by 2013 with more stations being
decommissioned with the widespread adoption of GPS.

A VOR ground station sends out an omnidirectional master signal, and a highly directional second
signal is propagated by a phased antenna array and rotates clockwise in space 30 times a second. This
signal is timed so that its phase (compared to the master) varies as the secondary signal rotates, and
this phase difference is the same as the angular direction of the 'spinning' signal, (so that when the
signal is being sent 90 degrees clockwise from north, the signal is 90 degrees out of phase with the
master). By comparing the phase of the secondary signal with the master, the angle (bearing) to the
aircraft from the station can be determined. This bearing is then displayed in the cockpit of
the aircraft, and can be used to take a fix as in earlier ground-based radio direction finding (RDF)
systems. This line of position is called the "radial" from the VOR. The intersection of two radials
from different VOR stations on a chart gives the position of the aircraft. VOR stations are fairly short
range: the signals are useful for up to 200 miles.

VOR stations broadcast a VHF radio composite signal including the navigation signal, station's
identifier and voice, if so equipped. The navigation signal allows the airborne receiving equipment to
determine a bearing from the station to the aircraft (direction from the VOR station in relation to
Magnetic North). The station's identifier is typically a three-letter string in Morse code. The voice
signal, if used, is usually the station name, in-flight recorded advisories, or live flight service
broadcasts. At some locations, this voice signal is a continuous recorded broadcast of Hazardous
Inflight Weather Advisory Service or HIWAS.
Operation

VORs are assigned radio channels between 108.0 MHz and 117.95 MHz (with 50 kHz spacing); this
is in the Very High Frequency (VHF) range. The first 4 MHz is shared with the Instrument landing
system (ILS) band. To leave channels for ILS, in the range 108.0 to 111.95 MHz, the 100 kHz digit is
always even, so 108.00, 108.05, 108.20, 108.25, and so on are VOR frequencies but 108.10, 108.15,
108.30, 108.35 and so on, are reserved for ILS in the US. The VOR encodes azimuth (direction from
the station) as the phase relationship between a reference signal and a variable signal. The omni-
directional signal contains a modulated continuous wave (MCW) 7 wpm Morse code station
identifier, and usually contains an amplitude modulated (AM) voice channel. The conventional 30 Hz reference signal is frequency modulated (FM) on a 9,960 Hz subcarrier. The variable amplitude
modulated (AM) signal is conventionally derived from the lighthouse-like rotation of a directional
antenna array 30 times per second. Although older antennas were mechanically rotated, current
installations scan electronically to achieve an equivalent result with no moving parts. When the signal
is received in the aircraft, the two 30 Hz signals are detected and then compared to determine the
phase angle between them. The phase angle by which the AM signal lags the FM subcarrier signal is
equal to the direction from the station to the aircraft, in degrees from local magnetic north at the time
of installation, and is called the radial. The Magnetic Variation changes over time so the radial may
be a few degrees off from the present magnetic variation. VOR stations have to be flight inspected
and the azimuth is adjusted to account for magnetic variation.
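The phase comparison can be sketched in a few lines (function name illustrative; the lag convention follows the text, where the radial equals the angle by which the AM signal lags the FM reference):

```python
def vor_radial_deg(reference_phase_deg: float, variable_phase_deg: float) -> float:
    """Radial from the station: the phase lag of the variable (AM)
    30 Hz signal behind the reference (FM-subcarrier) 30 Hz signal."""
    return (reference_phase_deg - variable_phase_deg) % 360.0

# Variable signal lagging the reference by 90 degrees:
print(vor_radial_deg(120.0, 30.0))  # aircraft on the 090 radial
```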
This information is then fed over an analog or digital interface to one of four common types of
indicators:

1. A typical light-airplane VOR indicator, sometimes called an "omni-bearing indicator" or OBI, consists of a knob to rotate an
"Omni Bearing Selector" (OBS), the OBS scale around the outside of the instrument, and a
vertical course deviation indicator or (CDI) pointer. The OBS is used to set the desired
course, and the CDI is centered when the aircraft is on the selected course, or gives left/right
steering commands to return to the course. An "ambiguity" (TO-FROM) indicator shows
whether following the selected course would take the aircraft to, or away from the station.
The indicator may also include a glide slope pointer for use when receiving full ILS signals.
2. A Horizontal Situation Indicator (HSI) is considerably more expensive and complex than a
standard VOR indicator, but combines heading information with the navigation display in a
much more user-friendly format, approximating a simplified moving map.
3. A Radio Magnetic Indicator (RMI), developed previous to the HSI, features a course arrow
superimposed on a rotating card which shows the aircraft's current heading at the top of the
dial. The "tail" of the course arrow points at the current radial from the station, and the
"head" of the arrow points at the reciprocal (180° different) course to the station. An RMI
may present information from more than one VOR or ADF receiver simultaneously.
4. An Area Navigation (RNAV) system is an onboard computer, with display, and may include
an up-to-date navigation database. At least one VOR/DME station is required, for the
computer to plot aircraft position on a moving map, or display course deviation and distance
relative to a waypoint (virtual VOR station). RNAV-type systems have also been made that use two VORs or two DMEs to define a waypoint; these are typically referred to by other names such as "distance computing equipment" for the dual-VOR type or "DME-DME" for the type using more than one DME signal.

5. In many cases, VOR stations have co-located Distance measuring equipment (DME) or military Tactical Air Navigation (TACAN) — the latter includes both the DME distance
feature and a separate TACAN azimuth feature that provides military pilots data similar to the
civilian VOR. A co-located VOR and TACAN beacon is called a VORTAC. A VOR co-
located only with DME is called a VOR-DME. A VOR radial with a DME distance allows a
one-station position fix. Both VOR-DMEs and TACANs share the same DME system.
6. VORTACs and VOR-DMEs use a standardized scheme of VOR frequency to TACAN/DME
channel pairing so that a specific VOR frequency is always paired with a specific co-located
TACAN or DME channel. On civilian equipment, the VHF frequency is tuned and the
appropriate TACAN/DME channel is automatically selected.
7. While the operating principles are different, VORs share some characteristics with
the localizer portion of ILS and the same antenna, receiving equipment and indicator is used
in the cockpit for both. When a VOR station is selected, the OBS is functional and allows the
pilot to select the desired radial to use for navigation. When a localizer frequency is selected,
the OBS is not functional and the indicator is driven by a localizer converter, typically built in
to the receiver or indicator.
Radio direction finder
A radio direction finder (RDF) is a device for finding the direction, or bearing, to a radio source. The
act of measuring the direction is known as radio direction finding or sometimes simply direction
finding (DF). Using two or more measurements from different locations, the location of an unknown
transmitter can be determined; alternately, using two or more measurements of known transmitters,
the location of a vehicle can be determined. RDF is widely used as a radio navigation system,
especially with boats and aircraft.

RDF systems can be used with any radio source, although the size of the receiver antennas is a function of the wavelength of the signal; very long wavelengths (low frequencies) require very large
antennas, and are generally used only on ground-based systems. These wavelengths are nevertheless
very useful for marine navigation as they can travel very long distances and "over the horizon", which
is valuable for ships when the line-of-sight may be only a few tens of kilometres. For aerial use,
where the horizon may extend to hundreds of kilometres, higher frequencies can be used, allowing the
use of much smaller antennas. An automatic direction finder, often tuned to commercial AM radio broadcasters, is a feature of almost all modern aircraft.
For the military, RDF systems are a key component of signals intelligence systems and
methodologies. The ability to locate the position of an enemy broadcaster has been invaluable since World War I, and played a key role in World War II's Battle of the Atlantic. It is estimated that the UK's advanced "huff-duff" systems were directly or indirectly responsible for 24% of all U-boats sunk during the war. Modern systems often use phased array antennas to allow rapid beam
forming for highly accurate results. These are generally integrated into a wider electronic
warfare suite.

Several distinct generations of RDF systems have been used over time, following the development of
new electronics. Early systems used mechanically rotated antennas that compared signal strengths in
different directions, and several electronic versions of the same concept followed. Modern systems
use the comparison of phase or doppler techniques which are generally simpler to automate.
Modern pseudo-Doppler direction finder systems consist of a number of small antennas fixed to a
circular card, with all of the processing occurring in software.

Early British radar sets were also referred to as RDF, which was a deception tactic. However, the
terminology was not inaccurate; the Chain Home systems used separate omni-directional broadcasters
and large RDF receivers to determine the location of the targets.

Operation
Radio Direction Finding works by comparing the signal strength of a directional antenna pointing in
different directions. At first, this system was used by land and marine-based radio operators, using a
simple rotatable loop antenna linked to a degree indicator. This system was later adopted for both
ships and aircraft, and was widely used in the 1930s and 1940s. On pre-World War II aircraft, RDF
antennas are easy to identify as the circular loops mounted above or below the fuselage. Later loop
antenna designs were enclosed in an aerodynamic, teardrop-shaped fairing. In ships and small boats,
RDF receivers first employed large metal loop antennas, similar to aircraft, but usually mounted atop
a portable battery-powered receiver.

In use, the RDF operator would first tune the receiver to the correct frequency, then manually turn the
loop, either listening or watching an S meter to determine the direction of the null (the direction at
which a given signal is weakest) of a long wave (LW) or medium wave (AM) broadcast beacon or
station (listening for the null is easier than listening for a peak signal, and normally produces a more
accurate result). This null was symmetrical, and thus identified both the correct degree heading
marked on the radio's compass rose as well as its 180-degree opposite. While this information
provided a baseline from the station to the ship or aircraft, the navigator still needed to know
beforehand if he was to the east or west of the station in order to avoid plotting a course 180-degrees
in the wrong direction. By taking bearings to two or more broadcast stations and plotting the
intersecting bearings, the navigator could locate the relative position of his ship or aircraft.
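The cross-bearing fix described above can be sketched as a flat-earth line intersection (coordinates in arbitrary units, x east / y north; the function name is illustrative and great-circle effects are ignored):

```python
import math

def fix_from_bearings(p1, brg1_deg, p2, brg2_deg):
    """Intersect two bearing lines from known stations.

    Bearings are FROM each station TO the vehicle, clockwise from north.
    """
    x1, y1 = p1
    x2, y2 = p2
    d1x, d1y = math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg))
    d2x, d2y = math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg))
    det = d2x * d1y - d1x * d2y
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel: no unique fix")
    t = (d2x * (y2 - y1) - d2y * (x2 - x1)) / det
    return x1 + t * d1x, y1 + t * d1y

# Station A at the origin sees the ship on bearing 045; station B, 10 km
# due east, sees it on bearing 315 -> the fix is 5 km east, 5 km north.
print(fix_from_bearings((0.0, 0.0), 45.0, (10.0, 0.0), 315.0))
```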

Later, RDF sets were equipped with rotatable ferrite loopstick antennas, which made the sets more
portable and less bulky. Some were later partially automated by means of a motorized antenna (ADF).
A key breakthrough was the introduction of a secondary vertical whip or 'sense' antenna that
substantiated the correct bearing and allowed the navigator to avoid plotting a bearing 180 degrees
opposite the actual heading. The U.S. Navy RDF model SE 995 which used a sense antenna was in
use during World War I.  After World War II, there were many small and large firms making
direction finding equipment for mariners, including Apelco, Aqua Guide, Bendix, Gladding (and its
marine division, Pearce-Simpson), Ray Jefferson, Raytheon, and Sperry. By the 1960s, many of these
radios were actually made by Japanese electronics manufacturers, such as Panasonic, Fuji Onkyo,
and Koden Electronics Co., Ltd. In aircraft equipment, Bendix and Sperry-Rand were two of the
larger manufacturers of RDF radios and navigation instruments.

AUTOMATIC DIRECTION FINDER


 
ADF (Automatic Direction Finder) equipment receives radio signals in the low-to-medium frequency band from 190 kHz to 1750 kHz and is still widely used today. Its major advantage over VOR navigation is that reception is not limited to line-of-sight distance: ADF signals follow the curvature of the earth, so the maximum range depends on the power of the beacon. An ADF can receive both commercial AM radio stations and NDBs (Non-Directional Beacons). Commercial AM radio stations broadcast on 540 to 1620 kHz; Non-Directional Beacons operate in the frequency band of 190 to 535 kHz.
 
ADF COMPONENTS

 ADF Receiver: allows the pilot to tune the desired station and select the mode of operation. The signal is received, amplified, and converted to an audible voice or Morse code transmission, and powers the bearing indicator.

 Control Box (Digital Readout Type): most modern aircraft have this type of control in the cockpit. In this equipment the tuned frequency is displayed as a digital readout. The ADF automatically determines the bearing to the selected station and displays it on the RMI.

 Antenna: The aircraft carries two antennas, called the LOOP antenna and the SENSE antenna. The ADF receives signals on both. The loop antenna in common use today is a small flat antenna without moving parts; within it are several coils spaced at various angles. The loop antenna senses the direction of the station from the strength of the signal on each coil, but cannot determine whether the bearing is TO or FROM the station. The sense antenna provides this latter information.

 Bearing Indicator: displays the bearing to the station relative to the nose of the aircraft.
Relative Bearing is the angle formed by a line drawn through the centre line of the aircraft and a line drawn from the aircraft to the radio station.
Magnetic Bearing is the angle formed by a line drawn from the aircraft to the radio station and a line drawn from the aircraft to magnetic north (bearing to station).

Magnetic Bearing = Magnetic Heading + Relative Bearing.
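With the degree wrap-around made explicit, the formula above can be sketched as (function name illustrative):

```python
def magnetic_bearing_deg(magnetic_heading_deg: float,
                         relative_bearing_deg: float) -> float:
    """Magnetic Bearing = Magnetic Heading + Relative Bearing, mod 360."""
    return (magnetic_heading_deg + relative_bearing_deg) % 360.0

# Heading 270 with the station 120 degrees to the right of the nose:
print(magnetic_bearing_deg(270.0, 120.0))  # -> 30.0
```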

Omega Navigation System:

Omega could determine position to a precision of ±2.2 km (1.4 mi). Later radio navigation
systems were more accurate.

OMEGA was the first truly global-range radio navigation system, operated by the United States in cooperation with six partner nations. It enabled ships and aircraft to determine their position by receiving very low frequency (VLF) radio signals in the range 10 to 14 kHz, transmitted by a network of fixed terrestrial radio beacons, using a receiver unit. It became operational around 1971 and was shut down in 1997 in favour of the Global Positioning System.

Previous systems

Taking a "fix" in any navigation system requires the determination of two measurements.
Typically these are taken in relation to fixed objects like prominent landmarks or the known
location of radio transmission towers. By measuring the angle to two such locations, the
position of the navigator can be determined. Alternately, one can measure the angle and
distance to a single object, or the distance to two objects.

The introduction of radio systems during the 20th century dramatically increased the
distances over which measurements could be taken. Such a system also demanded much
greater accuracies in the measurements – an error of one degree in angle might be acceptable
when taking a fix on a lighthouse a few miles away, but would be of limited use when used
on a radio station 300 miles (480 km) away. A variety of methods were developed to take
fixes with relatively small angle inaccuracies, but even these were generally useful only for
short-range systems.

The same electronics that made basic radio systems work introduced the possibility of
making very accurate time delay measurements. This enabled accurate measurement of the
delay between the transmission and reception of the signal. The delay measurement could be
used to determine the distance between the two stations. The problem was knowing when the transmission was initiated. With radar, this was simple, as the transmitter and receiver were
usually at the same location. Measuring the delay between sending the signal and receiving
the echo allowed accurate range measurement.

For other uses, air navigation for instance, the receiver would have to know the precise time
the signal was transmitted. This was not generally possible using electronics of the day.
Instead, two stations were synchronized by using one of the two transmitted signals as the
trigger for the second signal. By comparing the measured delay between the two signals, and
comparing that with the known delay, the aircraft's position was revealed to lie along a
curved line in space. By making two such measurements against widely separated stations,
the resulting lines would overlap in two locations. These locations were normally far enough
apart to allow conventional navigation systems, like dead reckoning, to eliminate the
incorrect position solution.
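The core of the hyperbolic measurement is that a time difference between two synchronized stations fixes only a range difference, hence a hyperbolic line of position (function name illustrative):

```python
def range_difference_km(time_difference_s: float) -> float:
    """Range difference corresponding to a measured arrival-time
    difference between a master/secondary station pair. All points
    with this range difference lie on one hyperbola; a second pair
    gives a second hyperbola, and the fix is at their intersection."""
    c_km_per_s = 299_792.458  # speed of light
    return c_km_per_s * time_difference_s

# A 100 microsecond time difference constrains the receiver to a
# hyperbola of roughly 30 km constant range difference:
print(round(range_difference_km(100e-6), 1))
```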

The first of these hyperbolic navigation systems were the UK's Gee and Decca, followed by the US LORAN and LORAN-C systems. LORAN-C offered accurate navigation at distances over 1,000 kilometres, and by locating "chains" of stations around the world, they offered moderately widespread coverage.

Atomic clocks

Key to the operation of the hyperbolic system was the use of one transmitter to broadcast the
"master" signal, which was used by the "secondaries" as their trigger. This limits the
maximum range over which the system could operate. For very short ranges, tens of
kilometres, the trigger signal could be carried by wires. Over long distances, over-the-air
signalling was more practical, but all such systems had range limits of one sort or another.

Very long distance radio signaling is possible, using longwave techniques (low frequencies),
which enables a planet-wide hyperbolic system. However, at those ranges, radio signals do
not travel in straight lines, but reflect off various regions above the Earth known collectively
as the ionosphere. At medium frequencies, this appears to "bend" or refract the signal beyond
the horizon. At lower frequencies, VLF and ELF, the signal will reflect off the ionosphere
and ground, allowing the signal to travel great distances in multiple "hops". However, it is
very difficult to synchronize multiple stations using these signals, as they might be received
multiple times from different directions at the end of different hops.

The problem of synchronizing very distant stations was solved with the introduction of
the atomic clock in the 1950s, which became commercially available in portable form by the
1960s. Depending upon type, e.g. rubidium, cesium, hydrogen, the clocks had an accuracy on
the order of 1 part in 10^10 to better than 1 part in 10^12, or a drift of about 1 second in 30 million
years. This is more accurate than the timing system used by the master/secondary stations.

By this time the Loran-C and Decca Navigator systems were dominant in the medium-range roles, and short-range was well served by VOR and DME. The expense of the clocks, lack of need, and the limited accuracy of a long wave system eliminated the need for such a system for many roles.

However, the United States Navy had a distinct need for just such a system, as they were in
the process of introducing the TRANSIT satellite navigation system. TRANSIT was designed
to allow measurements of location at any point on the planet, with enough accuracy to act as
a reference for an inertial navigation system (INS). Periodic fixes re-set the INS, which could
then be used for navigation over longer periods of time and distances.
TRANSIT had the distinct disadvantage that it generated two possible locations for any given
measurements. This is true for hyperbolic systems like Loran as well, but the distance
between the two locations is a function of the accuracy of the system, and in the case of
TRANSIT this was close enough together that other navigation systems would not provide
the accuracy needed to resolve which was correct. Loran offered enough accuracy to resolve
the fix, but did not have global scope of TRANSIT. This produced the need for a new system
with global coverage and accuracy on the order of a few kilometres. The combination of
TRANSIT and the new OMEGA produced a highly accurate global navigation system.

Omega was approved for development in 1968 with eight transmitters and the ability to
achieve a four mile (6 km) accuracy when fixing a position. Each Omega station transmitted
a sequence of three very low frequency (VLF) signals (10.2 kHz, 13.6 kHz, 11.333... kHz in
that order) plus a fourth frequency which was unique to each of the eight stations. The
duration of each pulse (ranging from 0.9 to 1.2 seconds, with 0.2 second blank intervals
between each pulse) differed in a fixed pattern, and repeated every ten seconds; the 10-
second pattern was common to all 8 stations and synchronized with the carrier phase angle,
which itself was synchronized with the local master atomic clock. The pulses within each 10-
second group were identified by the first 8 letters of the alphabet within Omega publications
of the time.

The envelope of the individual pulses could be used to establish a receiver's internal timing
within the 10-second pattern. However, it was the phase of the received signals within each
pulse that was used to determine the transit time from transmitter to receiver. Using
hyperbolic geometry and radionavigation principles, a position fix with an accuracy on the
order of 5–10 kilometres (3.1–6.2 mi) was realizable over the entire globe at any time of the
day. Omega employed hyperbolic radionavigation techniques and the chain operated in the
VLF portion of the spectrum between 10 to 14 kHz. Near the end of its service life of 26
years, Omega evolved into a system used primarily by the civil community. By receiving
signals from three stations, an Omega receiver could locate a position to within 4 nautical
miles (7.4 km) using the principle of phase comparison of signals. Omega stations used very
extensive antennas to transmit their extremely low frequencies. This is because wavelength is
inversely proportional to frequency (wavelength in meters = 299,792,458 / frequency in Hz),
and transmitter efficiency is severely degraded if the length of the antenna is shorter than 1/4
wavelength. They used grounded or insulated guyed masts with umbrella antennas, or wire-
spans across both valleys and fjords. Some Omega antennas were the tallest constructions on
the continent where they stood or still stand.
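Plugging Omega's lowest frequency into the wavelength formula quoted above shows why the antennas were so enormous (the quarter-wave figure follows the text's efficiency rule of thumb):

```python
def wavelength_m(frequency_hz: float) -> float:
    """Wavelength in metres = 299,792,458 / frequency in Hz."""
    return 299_792_458.0 / frequency_hz

lam = wavelength_m(10.2e3)         # Omega's lowest signal, 10.2 kHz
print(round(lam / 1000.0, 1))      # ~29.4 km wavelength
print(round(lam / 4 / 1000.0, 1))  # ~7.3 km quarter-wave: hence the huge masts
```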

When six of the eight stations in the chain became operational in 1971, day-to-day operations were managed by the United States Coast Guard in partnership with Argentina, Norway, Liberia,
and France. The Japanese and Australian stations became operational several years later.
Coast Guard personnel operated two US stations: one in LaMoure, North Dakota and the
other in Kaneohe, Hawaii on the island of Oahu.

Due to the success of the Global Positioning System, the use of Omega declined during the
1990s, to a point where the cost of operating Omega could no longer be justified. Omega was
shut down permanently on 30 September 1997. Several of the towers were then soon
demolished. Some of the stations, such as the LaMoure station, are now used for submarine
communications.
